author		2024-12-31 13:50:20 +0800
committer	2025-01-13 14:10:22 +0100
commit		5d808c78d97251af1d3a3e4f253e7d6c39fd871e
tree		bb0ce985f97fb70f3bb3945d9d54bfe96a4a30bb
parent		docs: Update Schedstat version to 17
sched: Fix race between yield_to() and try_to_wake_up()
We hit a SCHED_WARN in set_next_buddy():
__warn_printk
set_next_buddy
yield_to_task_fair
yield_to
kvm_vcpu_yield_to [kvm]
...
After a short dig, we found that the rq lock held by yield_to() may not
be the lock of the rq that the target task actually belongs to; there is
a race window against try_to_wake_up().
CPU0                                    target_task
                                        blocking on CPU1
lock rq0 & rq1
double check task_rq == p_rq, ok
                                        woken to CPU2 (lock task_pi & rq2)
                                        task_rq = rq2
yield_to_task_fair (w/o lock rq2)
In this race window, yield_to() operates on the task without holding the
correct rq lock. Fix this by taking the task's pi_lock first, as sketched
below.
Fixes: d95f41220065 ("sched: Add yield_to(task, preempt) functionality")
Signed-off-by: Tianchen Ding <dtcccc@linux.alibaba.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20241231055020.6521-1-dtcccc@linux.alibaba.com
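
A simplified sketch of the locking in yield_to() after this fix follows. It is
paraphrased, not verbatim kernel/sched/syscalls.c: the feasibility and
sched_class checks are elided, and the rq pair is locked explicitly rather
than through a lock guard. The reason taking p->pi_lock works is that
try_to_wake_up() also takes p->pi_lock before it may pick a new CPU for a
blocked task, so task_rq(p) cannot change underneath the guarded block.

/*
 * Simplified sketch, not verbatim kernel source: class/feasibility
 * checks are elided and the rq pair is locked explicitly instead of
 * through a lock guard.
 */
int __sched yield_to(struct task_struct *p, bool preempt)
{
	struct task_struct *curr = current;
	struct rq *rq, *p_rq;
	int yielded = 0;

	/* NEW: excludes try_to_wake_up(), which takes p->pi_lock first */
	scoped_guard (raw_spinlock_irqsave, &p->pi_lock) {
		rq = this_rq();
again:
		p_rq = task_rq(p);	/* a blocked p can no longer move */

		double_rq_lock(rq, p_rq);
		if (task_rq(p) != p_rq) {
			/*
			 * Still needed: a queued task can be migrated
			 * (e.g. by load balancing) under the rq locks
			 * alone, without p->pi_lock.
			 */
			double_rq_unlock(rq, p_rq);
			goto again;
		}

		/*
		 * Before the fix, a wakeup could move p to another CPU
		 * at this point (set_task_cpu() on a blocked task needs
		 * only p->pi_lock), so the call below ran without the rq
		 * lock of p's new CPU and set_next_buddy() warned.
		 */
		yielded = curr->sched_class->yield_to_task(p_rq, p);
		double_rq_unlock(rq, p_rq);
	}

	if (yielded > 0)
		schedule();

	return yielded;
}
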
 kernel/sched/syscalls.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/syscalls.c b/kernel/sched/syscalls.c
index ff0e5ab4e37c..943406c4ee86 100644
--- a/kernel/sched/syscalls.c
+++ b/kernel/sched/syscalls.c
@@ -1433,7 +1433,7 @@ int __sched yield_to(struct task_struct *p, bool preempt)
 	struct rq *rq, *p_rq;
 	int yielded = 0;
 
-	scoped_guard (irqsave) {
+	scoped_guard (raw_spinlock_irqsave, &p->pi_lock) {
 		rq = this_rq();
 again:
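
For readers unfamiliar with the guard helpers from include/linux/cleanup.h,
the one-line change swaps a guard that merely disabled local interrupts for
one that also acquires p->pi_lock. Roughly (and only roughly: the real macros
build scope-based cleanup rather than open-coded calls), the two guarded
regions correspond to:

/* Old guard: only local interrupts were disabled around the block. */
local_irq_save(flags);
/* ... yield_to() body ... */
local_irq_restore(flags);

/*
 * New guard: p->pi_lock is held, with interrupts disabled, for the
 * whole block -- the same lock try_to_wake_up() takes before it can
 * migrate p, which is what closes the race described above.
 */
raw_spin_lock_irqsave(&p->pi_lock, flags);
/* ... yield_to() body ... */
raw_spin_unlock_irqrestore(&p->pi_lock, flags);
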