author:    2025-01-08 16:53:02 -0800
committer: 2025-02-04 21:50:06 -0800
commit:    21ef2498622197429c3105254587034d93a745b4 (patch)
tree:      e1d2b9ec66bc6e6fce08f51649ab5bcad7fab43b
parent:    docs: Improve discussion of this_cpu_ptr(), add raw_cpu_ptr() (diff)
rcu: Document self-propagating callbacks
This commit documents the fact that a given RCU callback function can
repost itself.
Reported-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
 kernel/rcu/tree.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 475f31deed14..2cd193ed854c 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -3107,7 +3107,7 @@ module_param(enable_rcu_lazy, bool, 0444);
  * critical sections have completed.
  *
  * Use this API instead of call_rcu() if you don't want the callback to be
- * invoked after very long periods of time, which can happen on systems without
+ * delayed for very long periods of time, which can happen on systems without
  * memory pressure and on systems which are lightly loaded or mostly idle.
  * This function will cause callbacks to be invoked sooner than later at the
  * expense of extra power. Other than that, this function is identical to, and
@@ -3138,6 +3138,12 @@ EXPORT_SYMBOL_GPL(call_rcu_hurry);
  * might well execute concurrently with RCU read-side critical sections
  * that started after call_rcu() was invoked.
  *
+ * It is perfectly legal to repost an RCU callback, potentially with
+ * a different callback function, from within its callback function.
+ * The specified function will be invoked after another full grace period
+ * has elapsed. This use case is similar in form to the common practice
+ * of reposting a timer from within its own handler.
+ *
  * RCU read-side critical sections are delimited by rcu_read_lock()
  * and rcu_read_unlock(), and may be nested. In addition, but only in
  * v5.0 and later, regions of code across which interrupts, preemption,