path: root/kernel/sched/debug.c
author	Mel Gorman <mgorman@techsingularity.net>	2020-02-24 09:52:16 +0000
committer	Ingo Molnar <mingo@kernel.org>	2020-02-24 11:36:35 +0100
commit	fb86f5b2119245afd339280099b4e9417cc0b03a (patch)
tree	0413a3074b63cd3d45f5ee5de296f0ce48003ed2 /kernel/sched/debug.c
parent	sched/numa: Replace runnable_load_avg by load_avg (diff)
download	linux-dev-fb86f5b2119245afd339280099b4e9417cc0b03a.tar.xz
	linux-dev-fb86f5b2119245afd339280099b4e9417cc0b03a.zip
sched/numa: Use similar logic to the load balancer for moving between domains with spare capacity
The standard load balancer generally tries to keep the number of running tasks or idle CPUs balanced between NUMA domains. The NUMA balancer allows tasks to move if there is spare capacity, but this causes a conflict and utilisation between NUMA nodes gets badly skewed. This patch uses similar logic between the NUMA balancer and load balancer when deciding if a task migrating to its preferred node can use an idle CPU.

Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Valentin Schneider <valentin.schneider@arm.com>
Cc: Phil Auld <pauld@redhat.com>
Cc: Hillf Danton <hdanton@sina.com>
Link: https://lore.kernel.org/r/20200224095223.13361-7-mgorman@techsingularity.net
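The diff itself is not shown on this page, so the following is only a minimal C sketch of the kind of spare-capacity check the commit message describes: the NUMA balancer only placing a task on an idle CPU of its preferred node when that would not skew the node utilisation beyond what the load balancer would tolerate. All structure fields, helper names and the NUMA_IMBALANCE_MIN threshold below are hypothetical and are not the code from the actual commit.

	/*
	 * Minimal sketch (not the actual patch): allow a NUMA-balancing
	 * migration onto an idle CPU only if the resulting spread of
	 * running tasks between source and destination nodes stays
	 * within a small tolerance, similar to the load balancer's
	 * handling of groups with spare capacity.
	 */
	struct node_stats {
		unsigned int nr_running;	/* runnable tasks on the node */
		unsigned int idle_cpus;		/* currently idle CPUs on the node */
	};

	/* Hypothetical tolerance mirroring the load balancer's behaviour. */
	#define NUMA_IMBALANCE_MIN	2

	static bool numa_migration_has_capacity(const struct node_stats *dst,
						const struct node_stats *src)
	{
		/* No idle CPU on the destination: nothing to migrate onto. */
		if (!dst->idle_cpus)
			return false;

		/*
		 * After the move the destination gains a task and the source
		 * loses one; only allow it if the difference in running tasks
		 * stays within the tolerance, rather than moving purely
		 * because an idle CPU exists.
		 */
		int imbalance = (int)(dst->nr_running + 1) - (int)(src->nr_running - 1);

		return imbalance <= NUMA_IMBALANCE_MIN;
	}

The point of such a check is that "an idle CPU exists" is no longer a sufficient condition on its own; the NUMA balancer and the load balancer then agree on when a node has genuine spare capacity, so they stop undoing each other's placement decisions.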
Diffstat (limited to 'kernel/sched/debug.c')
0 files changed, 0 insertions, 0 deletions