author		2021-02-24 16:15:49 +0800
committer	2021-03-23 16:01:59 +0100
commit		acb4decc1e900468d51b33c5f1ee445278e716a7 (patch)
tree		de3b9f31b95dfacb570c7ff2f7e2f8a34b032ef0 /include/linux/string_helpers.h
parent		sched/fair: Optimize test_idle_cores() for !SMT (diff)
sched/fair: Reduce long-tail newly idle balance cost
A long-tail load balance cost is observed on the newly idle path. It
is caused by a race window between the first nr_running check of the
busiest runqueue and the nr_running recheck in detach_tasks().
Before the busiest runqueue is locked, its tasks can be pulled by
other CPUs, so its nr_running drops to 1, or even to 0 if the running
task goes idle. detach_tasks() then breaks out with the
LBF_ALL_PINNED flag still set, which triggers a load_balance() redo
at the same sched_domain level.
To find the new busiest sched_group and CPU, load_balance() recomputes
and updates the various load statistics, which eventually leads to the
long-tail load balance cost.
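For context, a simplified sketch of how a set LBF_ALL_PINNED flag forces
that redo, condensed from the load_balance() code in kernel/sched/fair.c
of that era (surrounding error handling and comments elided):

	/* All tasks on this runqueue were pinned by CPU affinity */
	if (unlikely(env.flags & LBF_ALL_PINNED)) {
		__cpumask_clear_cpu(cpu_of(busiest), cpus);
		if (!cpumask_subset(cpus, env.dst_grpmask)) {
			env.loop = 0;
			env.loop_break = sched_nr_migrate_break;
			goto redo;	/* re-pick busiest: stats recomputed */
		}
		goto out_all_pinned;
	}

In the race described above no task was actually pinned; the flag is
left set only because detach_tasks() returned before examining any task.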
This patch clears the LBF_ALL_PINNED flag in this race condition, and
hence reduces the long-tail cost of newly idle balance.
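A sketch of the shape of the change in detach_tasks() (kernel/sched/fair.c),
reconstructed from the description above rather than copied from the commit:
when the (not-fully-busy) idle path sees the busiest runqueue down to one or
zero tasks, clear LBF_ALL_PINNED before breaking out, so load_balance() does
not misread the early exit as "all tasks pinned" and redo the balance.

--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ static int detach_tasks(struct lb_env *env)
 		/*
 		 * We don't want to steal all, otherwise we may be treated
 		 * likewise, which could at worst lead to a livelock crash.
 		 */
-		if (env->idle != CPU_NOT_IDLE && env->src_rq->nr_running <= 1)
+		if (env->idle != CPU_NOT_IDLE && env->src_rq->nr_running <= 1) {
+			/* Clear the flag as we will not test any task */
+			env->flags &= ~LBF_ALL_PINNED;
 			break;
+		}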
Signed-off-by: Aubrey Li <aubrey.li@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Link: https://lkml.kernel.org/r/1614154549-116078-1-git-send-email-aubrey.li@intel.com
Diffstat (limited to 'include/linux/string_helpers.h')
0 files changed, 0 insertions, 0 deletions