commit 0893d6007db5cf397f3fc92b2a6935c3ed0c6f00
author date:    2022-12-09 09:09:46 +0800
committer date: 2022-12-08 17:50:17 -0800
tree:           6d85a0755af1269b066db084620002ba8fb065b7
parent:         selftests/bpf: Bring test_offload.py back to life
bpf: Reuse freed element in free_by_rcu during allocation
When there are batched freeing operations on a specific CPU, part of
the freed elements ((high_watermark - lower_watermark) / 2 + 1) will be
indirectly moved into the waiting_for_gp list through the free_by_rcu
list. After call_rcu_in_progress becomes false again, the remaining
elements in the free_by_rcu list will be moved to the waiting_for_gp
list by the next invocation of free_bulk(). However, if the expiration
of the RCU tasks trace grace period is relatively slow, no element in
the free_by_rcu list will be moved.

So instead of invoking __alloc_percpu_gfp() or kmalloc_node() to
allocate a new object, just check in alloc_bulk() whether there is a
freed element in the free_by_rcu list and reuse it if available.
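The reuse-before-allocate pattern can be sketched as a small userspace
model. This is an illustrative simplification, not the kernel code: the
names cache_pop_free() and cache_alloc() are hypothetical, and a plain
singly linked list plus malloc() stand in for the kernel's llist with
__llist_del_first() and kmalloc_node() in kernel/bpf/memalloc.c.

```c
#include <stddef.h>
#include <stdlib.h>

struct node {
	struct node *next;
};

struct cache {
	struct node *free_by_rcu; /* freed elements not yet queued for an RCU GP */
	int fresh_allocs;         /* counts fallback heap allocations */
};

/* Pop the first reusable element, modeling __llist_del_first(). */
static struct node *cache_pop_free(struct cache *c)
{
	struct node *obj = c->free_by_rcu;

	if (obj)
		c->free_by_rcu = obj->next;
	return obj;
}

/* Allocate one object: prefer a freed element from free_by_rcu and only
 * fall back to a fresh allocation when the list is empty, which is the
 * change the patch makes in alloc_bulk().
 */
static struct node *cache_alloc(struct cache *c)
{
	struct node *obj = cache_pop_free(c);

	if (!obj) {
		obj = malloc(sizeof(*obj));
		c->fresh_allocs++;
	}
	return obj;
}
```

One design point the real patch relies on: free_by_rcu is only touched
by irq work on the same CPU, so the non-atomic list pop is safe there;
the model above likewise assumes a single caller.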
Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Hou Tao <houtao1@huawei.com>
Link: https://lore.kernel.org/r/20221209010947.3130477-2-houtao@huaweicloud.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>