author    Thomas Gleixner <tglx@linutronix.de>    2009-04-11 10:43:41 +0200
committer Thomas Gleixner <tglx@linutronix.de>    2009-05-15 15:32:45 +0200
commit    dce48a84adf1806676319f6f480e30a6daa012f9 (patch)
tree      79151f5d31d9c3dcdc723ab8877cb943b944890e /kernel/time
parent    sched: Don't export sched_mc_power_savings on multi-socket single core system (diff)
sched, timers: move calc_load() to scheduler
Dimitri Sivanich noticed that xtime_lock is held write locked across calc_load() which iterates over all online CPUs. That can cause long latencies for xtime_lock readers on large SMP systems.

The load average calculation is a rough estimate anyway, so there is no real need to protect the readers against the update. It's not a problem when the avenrun array is updated while a reader copies the values.

Instead of iterating over all online CPUs, let the scheduler_tick code update the number of active tasks shortly before the avenrun update happens. The avenrun update itself is handled by the CPU which calls do_timer().

[ Impact: reduce xtime_lock write locked section ]

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra <peterz@infradead.org>
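The following is a minimal standalone userspace sketch of the scheme the message describes, not the kernel patch itself. The per-runqueue bookkeeping is reduced to a plain pointer argument (standing in for the runqueue's cached active count), and the fixed-point constants mirror the kernel's classic load-average values. Each CPU folds only the delta of its active task count into one shared atomic at tick time; the CPU that calls do_timer() then folds that sum into the exponential moving averages without walking all online CPUs under xtime_lock.

#include <stdatomic.h>
#include <stdio.h>

#define FSHIFT   11                  /* bits of fractional precision */
#define FIXED_1  (1UL << FSHIFT)     /* 1.0 in fixed point */
#define EXP_1    1884                /* 1/exp(5s/1min) in fixed point */
#define EXP_5    2014                /* 1/exp(5s/5min) */
#define EXP_15   2037                /* 1/exp(5s/15min) */

static atomic_long calc_load_tasks;  /* sum of active tasks, all CPUs */
static unsigned long avenrun[3];     /* 1, 5 and 15 minute averages */

/* one step of the exponential moving average, in fixed point */
static unsigned long calc_load(unsigned long load, unsigned long exp,
                               unsigned long active)
{
        load *= exp;
        load += active * (FIXED_1 - exp);
        return load >> FSHIFT;
}

/*
 * Called from each CPU's tick path: publish only the change in this
 * CPU's active task count, so no cross-CPU iteration is needed later.
 */
static void calc_load_account_active(long nr_active, long *prev_active)
{
        long delta = nr_active - *prev_active;

        if (delta) {
                *prev_active = nr_active;
                atomic_fetch_add(&calc_load_tasks, delta);
        }
}

/* Called by the do_timer() CPU only: fold the sum into avenrun[]. */
static void calc_global_load(void)
{
        long tasks = atomic_load(&calc_load_tasks);
        unsigned long active = tasks > 0 ? tasks * FIXED_1 : 0;

        avenrun[0] = calc_load(avenrun[0], EXP_1,  active);
        avenrun[1] = calc_load(avenrun[1], EXP_5,  active);
        avenrun[2] = calc_load(avenrun[2], EXP_15, active);
}

int main(void)
{
        long cpu0_prev = 0, cpu1_prev = 0;

        /* two CPUs report their active task counts at tick time */
        calc_load_account_active(3, &cpu0_prev);
        calc_load_account_active(1, &cpu1_prev);

        /* the timekeeping CPU updates the averages */
        calc_global_load();

        printf("1 min load: %lu.%02lu\n",
               avenrun[0] >> FSHIFT,
               ((avenrun[0] & (FIXED_1 - 1)) * 100) >> FSHIFT);
        return 0;
}

Because readers of avenrun tolerate a torn copy (the load average is approximate by design), the updater needs no lock at all here, which is exactly why the xtime_lock write section can shrink to timekeeping work only.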
Diffstat (limited to 'kernel/time')
-rw-r--r--  kernel/time/timekeeping.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
index 687dff49f6e7..52a8bf8931f3 100644
--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -22,7 +22,7 @@
 
 /*
  * This read-write spinlock protects us from races in SMP while
- * playing with xtime and avenrun.
+ * playing with xtime.
  */
 __cacheline_aligned_in_smp DEFINE_SEQLOCK(xtime_lock);
 