| author | 2024-08-07 12:13:38 -1000 |
|---|---|
| committer | 2024-08-08 13:38:19 -1000 |
| commit | 991ef53a4832941c8130008ef35c66ec88c7fa0f |
| tree | 41b595da506dce8877f3c5e4106f26575fc3c23d |
| parent | sched_ext: Fix unsafe list iteration in process_ddsp_deferred_locals() |
sched_ext: Make scx_rq_online() also test cpu_active() in addition to SCX_RQ_ONLINE
scx_rq_online() currently only tests SCX_RQ_ONLINE. This isn't fully correct
- e.g. consume_dispatch_q() uses task_run_on_remote_rq() which tests
scx_rq_online() to see whether @rq can run the task and, if so, calls
consume_remote_task() to migrate the task to @rq. While the test itself is
done with @rq locked, @rq can be temporarily unlocked by
consume_remote_task() and nothing prevents @rq from going offline (clearing
SCX_RQ_ONLINE) before the migration takes place.
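The race is a classic check-then-act pattern across a lock drop. The sketch below is a minimal userspace simulation of that pattern, not kernel code: `rq_online`, `fake_consume_remote_task()` and the two threads are simplified stand-ins for SCX_RQ_ONLINE, consume_remote_task() and the dispatch/hotplug paths (build with `gcc -pthread`).

```c
/* race_sketch.c - illustrative stand-ins only, not the kernel's code. */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t rq_lock = PTHREAD_MUTEX_INITIALIZER;
static bool rq_online = true;          /* stand-in for SCX_RQ_ONLINE */

/*
 * Stand-in for consume_remote_task(): the real function has to drop the
 * rq lock temporarily, which reopens the window after the online check.
 */
static void fake_consume_remote_task(void)
{
	pthread_mutex_unlock(&rq_lock);
	usleep(1000);                  /* widen the window for the demo */
	pthread_mutex_lock(&rq_lock);
}

/* Stand-in for the consume_dispatch_q() path: check, then act. */
static void *dispatch_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&rq_lock);
	if (rq_online) {               /* checked under the lock ... */
		fake_consume_remote_task();
		/* ... but by now the rq may already have gone offline */
		if (!rq_online)
			printf("migrated onto an offline rq!\n");
	}
	pthread_mutex_unlock(&rq_lock);
	return NULL;
}

/* Stand-in for the hotplug path taking the rq offline. */
static void *offline_path(void *arg)
{
	(void)arg;
	usleep(500);
	pthread_mutex_lock(&rq_lock);
	rq_online = false;
	pthread_mutex_unlock(&rq_lock);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, dispatch_path, NULL);
	pthread_create(&b, NULL, offline_path, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}
```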
To address the issue, add a cpu_active() test to scx_rq_online(). There is a
synchronize_rcu() between cpu_active() being cleared and the rq going
offline, so if an ongoing scheduling operation sees cpu_active(), the
associated rq is guaranteed not to go offline until the scheduling operation
is complete.
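The guarantee hinges on the ordering in the hotplug path: cpu_active() is cleared first, a grace period elapses, and only then is the rq taken offline. Below is a minimal userspace sketch of that ordering under simplifying assumptions: an atomic reader count stands in for RCU read-side critical sections and synchronize_rcu(), and the two flags stand in for cpu_active() and SCX_RQ_ONLINE; none of this is kernel code (build with `gcc -std=c11 -pthread`).

```c
/* grace_sketch.c - a reader counter stands in for RCU; illustrative only. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_int readers;                 /* stand-in for RCU readers   */
static atomic_bool cpu_is_active = true;   /* stand-in for cpu_active()  */
static atomic_bool rq_is_online = true;    /* stand-in for SCX_RQ_ONLINE */

/* A scheduling operation runs entirely inside a "read-side" section. */
static void *sched_op(void *arg)
{
	(void)arg;
	atomic_fetch_add(&readers, 1);         /* ~ rcu_read_lock()   */
	if (atomic_load(&cpu_is_active)) {
		/*
		 * Because the offliner waits for readers to drain between
		 * clearing "active" and clearing "online", observing
		 * active == true here means online stays true until this
		 * read-side section ends.
		 */
		if (!atomic_load(&rq_is_online))
			printf("bug: online cleared under a live reader\n");
	}
	atomic_fetch_sub(&readers, 1);         /* ~ rcu_read_unlock() */
	return NULL;
}

/* Hotplug path: clear active, wait out readers, then go offline. */
static void *offline_cpu(void *arg)
{
	(void)arg;
	atomic_store(&cpu_is_active, false);
	while (atomic_load(&readers))          /* ~ synchronize_rcu() */
		;
	atomic_store(&rq_is_online, false);
	return NULL;
}

int main(void)
{
	pthread_t r, w;

	pthread_create(&r, NULL, sched_op, NULL);
	pthread_create(&w, NULL, offline_cpu, NULL);
	pthread_join(r, NULL);
	pthread_join(w, NULL);
	return 0;
}
```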
Signed-off-by: Tejun Heo <tj@kernel.org>
Fixes: 60c27fb59f6c ("sched_ext: Implement sched_ext_ops.cpu_online/offline()")
Acked-by: David Vernet <void@manifault.com>
Diffstat

 kernel/sched/ext.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)
```diff
diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
index 9efb54172495..17af9c46d891 100644
--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -1818,7 +1818,14 @@ dispatch:
 
 static bool scx_rq_online(struct rq *rq)
 {
-	return likely(rq->scx.flags & SCX_RQ_ONLINE);
+	/*
+	 * Test both cpu_active() and %SCX_RQ_ONLINE. %SCX_RQ_ONLINE indicates
+	 * the online state as seen from the BPF scheduler. cpu_active() test
+	 * guarantees that, if this function returns %true, %SCX_RQ_ONLINE will
+	 * stay set until the current scheduling operation is complete even if
+	 * we aren't locking @rq.
+	 */
+	return likely((rq->scx.flags & SCX_RQ_ONLINE) && cpu_active(cpu_of(rq)));
 }
 
 static void do_enqueue_task(struct rq *rq, struct task_struct *p, u64 enq_flags,
```