commit 3cec27453db49a176e688b7721c3cd26be5ef835 (patch)
author    2025-01-28 18:32:49 -0800
committer 2025-02-05 07:12:06 -0800
tree      93c9f7472e8c5bf00677dd1bff96bab0e6e16321
parent    srcu: Add srcu_down_read_fast() and srcu_up_read_fast()
srcu: Make SRCU-fast also be NMI-safe
BPF uses rcu_read_lock_trace() in NMI context, so srcu_read_lock_fast()
must be NMI-safe if it is to have any chance of addressing RCU Tasks
Trace use cases. This commit therefore causes srcu_read_lock_fast()
and srcu_read_unlock_fast() to use atomic_long_inc() instead of
this_cpu_inc() on architectures that support NMIs but do not have
NMI-safe implementations of this_cpu_inc(). Note that both x86 and
arm64 have NMI-safe implementations of this_cpu_inc(), and thus do not
pay the performance penalty inherent in atomic_long_inc().
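As a concrete illustration of the paragraph above, here is a minimal C
sketch of the reader-entry pattern: the atomic increment is used only
when the architecture's this_cpu_inc() is not NMI-safe. The struct
layout and the exact Kconfig gating (CONFIG_NEED_SRCU_NMI_SAFE) are
assumptions for illustration, not a quote of the kernel source.

	/*
	 * Sketch only: assumes SRCU-tree context, a per-CPU struct srcu_ctr
	 * holding an srcu_locks counter, and a Kconfig symbol
	 * CONFIG_NEED_SRCU_NMI_SAFE that is set on architectures whose
	 * this_cpu_inc() is not NMI-safe.
	 */
	static inline struct srcu_ctr __percpu *__srcu_read_lock_fast(struct srcu_struct *ssp)
	{
		struct srcu_ctr __percpu *scp = READ_ONCE(ssp->srcu_ctrp);

		if (!IS_ENABLED(CONFIG_NEED_SRCU_NMI_SAFE))
			this_cpu_inc(scp->srcu_locks.counter);		/* fast, NMI-safe on x86/arm64 */
		else
			atomic_long_inc(raw_cpu_ptr(&scp->srcu_locks));	/* NMI-safe everywhere, costlier */
		barrier();	/* keep the critical section from leaking above the increment */
		return scp;
	}

Because IS_ENABLED() is a compile-time constant, architectures with
NMI-safe this_cpu_inc() compile the atomic path away entirely and keep
the single per-CPU increment on the fast path.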
It is tempting to use this trick to fold srcu_read_lock_nmisafe()
into srcu_read_lock(), but this would need careful thought, review,
and performance analysis, though the smp_mb() calls in
srcu_read_lock_nmisafe() might well make performance a non-issue.
Reported-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>