author     Davidlohr Bueso <dave@stgolabs.net>  2018-12-02 21:31:30 -0800
committer  Ingo Molnar <mingo@kernel.org>       2019-01-21 11:18:50 +0100
commit     87ff19cb2f1aa55a5d8b691e6690cc059a59d2ec (patch)
tree       a68291414ab3e040c7701ed26c361fe7c9f2d13d /kernel/sched
parent     Merge branch 'locking/urgent' into locking/core, to pick up dependent fixes (diff)
sched/wake_q: Add branch prediction hint to wake_q_add() cmpxchg
The cmpxchg() will fail when the task is already in the process of
waking up, and as such is an extremely rare occurrence. Micro-optimize
the call and put an unlikely() around it.

To no surprise, when using CONFIG_PROFILE_ANNOTATED_BRANCHES under a
number of workloads the incorrect rate was a mere 1-2%.

Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Waiman Long <longman@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Yongji Xie <elohimes@gmail.com>
Cc: andrea.parri@amarulasolutions.com
Cc: lilin24@baidu.com
Cc: liuqi16@baidu.com
Cc: nixun@baidu.com
Cc: xieyongji@baidu.com
Cc: yuanlinsi01@baidu.com
Cc: zhangyu31@baidu.com
Link: https://lkml.kernel.org/r/20181203053130.gwkw6kg72azt2npb@linux-r8p5
Signed-off-by: Ingo Molnar <mingo@kernel.org>
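For background (not part of the commit): the kernel's unlikely() hint is built
on GCC's __builtin_expect(), telling the compiler which outcome of a branch is
the common case so it can lay out the common path as straight-line code. A
minimal user-space sketch of the same idea follows; the my_unlikely macro and
the already_claimed() helper are illustrative stand-ins, not kernel code.

#include <stdio.h>

/* Kernel-style hint: tell the compiler the expression is expected to be 0. */
#define my_unlikely(x) __builtin_expect(!!(x), 0)

/* Illustrative stand-in for the contended path: true only rarely. */
static int already_claimed(int i)
{
	return i % 1000 == 0;
}

int main(void)
{
	int skipped = 0;

	for (int i = 1; i <= 10000; i++) {
		/*
		 * With the hint, the compiler favours the not-taken branch
		 * (the common case) as the fall-through path.
		 */
		if (my_unlikely(already_claimed(i))) {
			skipped++;
			continue;
		}
		/* ... common-case work would go here ... */
	}

	printf("skipped %d of 10000\n", skipped);
	return 0;
}

Since the failed cmpxchg() in wake_q_add() is on the order of a percent or two
of calls under the profiled workloads, annotating it with unlikely() matches
the measured behaviour.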
Diffstat (limited to 'kernel/sched')
-rw-r--r--  kernel/sched/core.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index d8d76a65cfdd..b05eef7d7a1f 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -421,7 +421,7 @@ void wake_q_add(struct wake_q_head *head, struct task_struct *task)
* state, even in the failed case, an explicit smp_mb() must be used.
*/
smp_mb__before_atomic();
- if (cmpxchg_relaxed(&node->next, NULL, WAKE_Q_TAIL))
+ if (unlikely(cmpxchg_relaxed(&node->next, NULL, WAKE_Q_TAIL)))
return;
get_task_struct(task);
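As a rough user-space analogue of the claim-or-bail pattern above (an
assumption-laden sketch, not the kernel's wake_q implementation: claim_node(),
TAIL and the node layout are hypothetical and simplified), a relaxed
compare-and-swap is enough to decide ownership, with any required ordering
supplied separately by the caller, as wake_q_add() does with
smp_mb__before_atomic():

#include <stdatomic.h>
#include <stdio.h>

struct node {
	_Atomic(struct node *) next;
};

/* Sentinel marking a claimed node, analogous in spirit to WAKE_Q_TAIL. */
#define TAIL ((struct node *)0x1)

/* Try to claim the node for queueing; returns 0 if it was already claimed. */
static int claim_node(struct node *n)
{
	struct node *expected = NULL;

	/*
	 * Relaxed ordering is sufficient to decide who owns the node;
	 * callers needing stronger ordering add their own barriers.
	 */
	if (!atomic_compare_exchange_strong_explicit(&n->next, &expected, TAIL,
						     memory_order_relaxed,
						     memory_order_relaxed))
		return 0;	/* already claimed: the unlikely path */

	return 1;
}

int main(void)
{
	struct node n = { .next = NULL };

	printf("first claim:  %d\n", claim_node(&n));	/* prints 1 */
	printf("second claim: %d\n", claim_node(&n));	/* prints 0 */
	return 0;
}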