Age  Commit message  Author  Files  Lines
2025-01-16  bpf: verifier: Refactor helper access type tracking  [Daniel Xu, 13 files, -54/+42]
Previously, the verifier was treating all PTR_TO_STACK registers passed to a helper call as potentially written to by the helper. However, all calls to check_stack_range_initialized() already have precise access type information available. Rather than treat ACCESS_HELPER as a proxy for BPF_WRITE, pass enum bpf_access_type to check_stack_range_initialized() to more precisely track helper arguments. One benefit of this precision is that registers tracked as valid spills and passed as a read-only helper argument remain tracked after the call rather than being marked STACK_MISC afterwards. An additional benefit is that the verifier logs are more precise. For this particular error, users will enjoy a slightly clearer message. See the included selftest updates for examples. Acked-by: Eduard Zingerman <eddyz87@gmail.com> Signed-off-by: Daniel Xu <dxu@dxuuu.xyz> Link: https://lore.kernel.org/r/ff885c0e5859e0cd12077c3148ff0754cad4f7ed.1736886479.git.dxu@dxuuu.xyz Signed-off-by: Alexei Starovoitov <ast@kernel.org>
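(Illustrative sketch of the idea above; the flag check and the exact check_stack_range_initialized() parameters are assumptions for illustration, not code from the patch.)

```c
/* Hypothetical call site: derive the precise access type for a helper's
 * stack argument instead of pessimistically assuming a write.
 * Names and parameter order are illustrative. */
enum bpf_access_type atype =
	(arg_type & MEM_WRITE) ? BPF_WRITE : BPF_READ;

err = check_stack_range_initialized(env, regno, off, access_size,
				    zero_size_allowed, atype, meta);
/* With atype == BPF_READ, a register tracked as a valid spill in this
 * range stays tracked after the call instead of becoming STACK_MISC. */
```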
2025-01-16  bpf: tcp: Mark bpf_load_hdr_opt() arg2 as read-write  [Daniel Xu, 1 file, -1/+1]
The MEM_WRITE attribute is defined as: "Non-presence of MEM_WRITE means that MEM is only being read". bpf_load_hdr_opt() both reads from and writes to its arg2 - void *search_res. This matters a lot for the next commit where we more precisely track stack accesses. Without this annotation, the verifier will make false assumptions about the contents of memory written to by helpers and possibly prune valid branches. Fixes: 6fad274f06f0 ("bpf: Add MEM_WRITE attribute") Acked-by: Martin KaFai Lau <martin.lau@kernel.org> Signed-off-by: Daniel Xu <dxu@dxuuu.xyz> Link: https://lore.kernel.org/r/730e45f8c39be2a5f3d8c4406cceca9d574cbf14.1736886479.git.dxu@dxuuu.xyz Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-01-16  bpf: verifier: Add missing newline on verbose() call  [Daniel Xu, 1 file, -1/+1]
The print was missing a newline. Signed-off-by: Daniel Xu <dxu@dxuuu.xyz> Link: https://lore.kernel.org/r/59cbe18367b159cd470dc6d5c652524c1dc2b984.1736886479.git.dxu@dxuuu.xyz Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-01-16  selftests/bpf: Add distilled BTF test about marking BTF_IS_EMBEDDED  [Pu Lehui, 1 file, -0/+72]
When redirecting the split BTF to the vmlinux base BTF, we need to mark the distilled base struct/union members of split BTF structs/unions in id_map with BTF_IS_EMBEDDED. This indicates that these types must match both name and size later. So if a needed composite type, which is a member of a composite type in the split BTF, has a different size in the base BTF we wish to relocate with, btf__relocate() should error out. Signed-off-by: Pu Lehui <pulehui@huawei.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20250115100241.4171581-4-pulehui@huaweicloud.com
2025-01-16  libbpf: Fix incorrect traversal end type ID when marking BTF_IS_EMBEDDED  [Pu Lehui, 1 file, -1/+1]
When redirecting the split BTF to the vmlinux base BTF, we need to mark the distilled base struct/union members of split BTF structs/unions in id_map with BTF_IS_EMBEDDED. This indicates that these types must match both name and size later. Therefore, we need to traverse the entire split BTF, which involves traversing type IDs from nr_dist_base_types to nr_types. However, the current implementation uses an incorrect traversal end type ID, so let's correct it. Fixes: 19e00c897d50 ("libbpf: Split BTF relocation") Signed-off-by: Pu Lehui <pulehui@huawei.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20250115100241.4171581-3-pulehui@huaweicloud.com
2025-01-16  libbpf: Fix return zero when elf_begin failed  [Pu Lehui, 1 file, -0/+1]
The error number from elf_begin() is dropped when it is wrapped inside the btf_find_elf_sections() function; propagate it instead of returning zero. Fixes: c86f180ffc99 ("libbpf: Make btf_parse_elf process .BTF.base transparently") Signed-off-by: Pu Lehui <pulehui@huawei.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20250115100241.4171581-2-pulehui@huaweicloud.com
2025-01-16  selftests/bpf: Fix btf leak on new btf alloc failure in btf_distill test  [Pu Lehui, 1 file, -2/+2]
Fix btf leak on new btf alloc failure in btf_distill test. Fixes: affdeb50616b ("selftests/bpf: Extend distilled BTF tests to cover BTF relocation") Signed-off-by: Pu Lehui <pulehui@huawei.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20250115100241.4171581-1-pulehui@huaweicloud.com
2025-01-16  veristat: Load struct_ops programs only once  [Eduard Zingerman, 1 file, -0/+38]
libbpf automatically adjusts autoload for struct_ops programs, see libbpf.c:bpf_object_adjust_struct_ops_autoload. For example, if there is a map:

    SEC(".struct_ops.link")
    struct sched_ext_ops ops = {
        .enqueue = foo,
        .tick = bar,
    };

both 'foo' and 'bar' would be loaded if 'ops' autocreate is true, and both 'foo' and 'bar' would be skipped if 'ops' autocreate is false. This means that when veristat processes an object file with 'ops', it would load 4 programs in total: two programs for each 'process_prog' call. The adjustment occurs at object load time, and libbpf remembers the association between 'ops' and 'foo'/'bar' at object open time. The only way to persuade libbpf to load only one of the two is to adjust the map's initial value, such that only one program is referenced. This patch does exactly that, significantly reducing the time to process object files with a big number of struct_ops programs. Signed-off-by: Eduard Zingerman <eddyz87@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20250115223835.919989-1-eddyz87@gmail.com
2025-01-16  selftests/bpf: Fix undefined UINT_MAX in veristat.c  [Tony Ambardar, 1 file, -0/+1]
Include <limits.h> in 'veristat.c' to provide a UINT_MAX definition and avoid multiple compile errors against mips64el/musl-libc:

    veristat.c: In function 'max_verifier_log_size':
    veristat.c:1135:36: error: 'UINT_MAX' undeclared (first use in this function)
     1135 |         const int SMALL_LOG_SIZE = UINT_MAX >> 8;
          |                                    ^~~~~~~~
    veristat.c:24:1: note: 'UINT_MAX' is defined in header '<limits.h>'; did you forget to '#include <limits.h>'?
       23 | #include <math.h>
      +++ |+#include <limits.h>
       24 |

Fixes: 1f7c33630724 ("selftests/bpf: Increase verifier log limit in veristat") Signed-off-by: Tony Ambardar <tony.ambardar@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20250116075036.3459898-1-tony.ambardar@gmail.com
2025-01-15  bpf: Send signals asynchronously if !preemptible  [Puranjay Mohan, 1 file, -1/+1]
BPF programs can execute in all kinds of contexts and when a program running in a non-preemptible context uses the bpf_send_signal() kfunc, it will cause issues because this kfunc can sleep. Change `irqs_disabled()` to `!preemptible()`. Reported-by: syzbot+97da3d7e0112d59971de@syzkaller.appspotmail.com Closes: https://lore.kernel.org/all/67486b09.050a0220.253251.0084.GAE@google.com/ Fixes: 1bc7896e9ef4 ("bpf: Fix deadlock with rq_lock in bpf_send_signal()") Signed-off-by: Puranjay Mohan <puranjay@kernel.org> Acked-by: Yonghong Song <yonghong.song@linux.dev> Link: https://lore.kernel.org/r/20250115103647.38487-1-puranjay@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
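(Rough sketch of the pattern described above; only the changed condition is taken from the patch, while 'work' and the surrounding code are illustrative.)

```c
if (!preemptible()) {	/* previously: irqs_disabled() */
	/* This context cannot sleep, so defer the potentially sleeping
	 * signal delivery to task context via irq_work. */
	irq_work_queue(&work->irq_work);	/* 'work' is illustrative */
	return 0;
}
/* Preemptible context: deliver the signal synchronously. */
```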
2025-01-15  selftests/bpf: Fix test_xdp_adjust_tail_grow2 selftest on powerpc  [Saket Kumar Bhaskar, 2 files, -0/+4]
On powerpc cache line size is 128 bytes, so skb_shared_info must be aligned accordingly. Signed-off-by: Saket Kumar Bhaskar <skb99@linux.ibm.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20250110103109.3670793-1-skb99@linux.ibm.com
2025-01-10  selftests/bpf: Migrate test_xdp_redirect.c to test_xdp_do_redirect.c  [Bastien Curutchet (eBPF Foundation), 3 files, -30/+15]
prog_tests/xdp_do_redirect.c is the only user of the BPF programs located in progs/test_xdp_do_redirect.c and progs/test_xdp_redirect.c. There is no need to keep both files with such close names. Move test_xdp_redirect.c contents to test_xdp_do_redirect.c and remove progs/test_xdp_redirect.c Signed-off-by: Bastien Curutchet (eBPF Foundation) <bastien.curutchet@bootlin.com> Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org> Link: https://patch.msgid.link/20250110-xdp_redirect-v2-3-b8f3ae53e894@bootlin.com
2025-01-10  selftests/bpf: Migrate test_xdp_redirect.sh to xdp_do_redirect.c  [Bastien Curutchet (eBPF Foundation), 3 files, -80/+165]
test_xdp_redirect.sh can't be used by the BPF CI. Migrate test_xdp_redirect.sh into a new test case in xdp_do_redirect.c. It uses the same network topology and the same BPF programs located in progs/test_xdp_redirect.c and progs/xdp_dummy.c. Remove test_xdp_redirect.sh and its Makefile entry. Signed-off-by: Bastien Curutchet (eBPF Foundation) <bastien.curutchet@bootlin.com> Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org> Link: https://patch.msgid.link/20250110-xdp_redirect-v2-2-b8f3ae53e894@bootlin.com
2025-01-10  selftests/bpf: test_xdp_redirect: Rename BPF sections  [Bastien Curutchet (eBPF Foundation), 2 files, -4/+4]
SEC("redirect_to_111") and SEC("redirect_to_222") can't be loaded by the __load() helper. Rename both sections to SEC("xdp") so they can be interpreted by the __load() helper in an upcoming patch. Update test_xdp_redirect.sh to use the program name instead of the section name to load the BPF program. Signed-off-by: Bastien Curutchet (eBPF Foundation) <bastien.curutchet@bootlin.com> Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org> Reviewed-by: Alexis Lothoré (eBPF Foundation) <alexis.lothore@bootlin.com> Link: https://patch.msgid.link/20250110-xdp_redirect-v2-1-b8f3ae53e894@bootlin.com
2025-01-10  veristat: Document verifier log dumping capability  [Daniel Xu, 1 file, -2/+3]
`-vl2` is a useful combination of flags to dump the entire verification log. This is helpful when making changes to the verifier, as you can see what it thinks about the program one instruction at a time. This was more or less a hidden feature before. Document it so others can discover it. Signed-off-by: Daniel Xu <dxu@dxuuu.xyz> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/d57bbcca81e06ae8dcdadaedb99a48dced67e422.1736466129.git.dxu@dxuuu.xyz
2025-01-10  bpftool: Fix control flow graph segfault during edge creation  [Christoph Werle, 1 file, -0/+1]
If the last instruction of a control flow graph building block is a BPF_CALL, an incorrect edge with e->dst set to NULL is created and results in a segfault during graph output. Ensure that BPF_CALL as last instruction of a building block is handled correctly and only generates a single edge unlike actual BPF_JUMP* instructions. Signed-off-by: Christoph Werle <christoph.werle@longjmp.de> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Tested-by: Quentin Monnet <qmo@kernel.org> Reviewed-by: Quentin Monnet <qmo@kernel.org> Link: https://lore.kernel.org/bpf/20250108220937.1470029-1-christoph.werle@longjmp.de
2025-01-10  selftests/bpf: Add a test for kprobe multi with unique_match  [Yonghong Song, 1 file, -0/+27]
Add a kprobe multi subtest to test kprobe multi unique_match option. Signed-off-by: Yonghong Song <yonghong.song@linux.dev> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20250109174028.3368967-1-yonghong.song@linux.dev
2025-01-10  libbpf: Add unique_match option for multi kprobe  [Yonghong Song, 2 files, -2/+15]
Jordan reported an issue in the Meta production environment where the func try_to_wake_up() is renamed to try_to_wake_up.llvm.<hash>() by the clang compiler in lto mode. The original 'kprobe/try_to_wake_up' does not work any more since try_to_wake_up() does not match the actual func name in /proc/kallsyms. There are a couple of ways to resolve this issue. For example, in attach_kprobe(), we could do a lookup in /proc/kallsyms so try_to_wake_up() can be replaced by try_to_wake_up.llvm.<hash>(). Or we can force users to use bpf_program__attach_kprobe() where they need to look up /proc/kallsyms to find try_to_wake_up.llvm.<hash>(). But these two approaches require extra work by either libbpf or the user. Luckily, as suggested by Andrii, multi kprobe already supports the wildcard ('*') for symbol matching. In the above example, 'try_to_wake_up*' can match try_to_wake_up() or try_to_wake_up.llvm.<hash>(), and this allows the bpf prog to work for different kernels, as some kernels may have try_to_wake_up() and others may have try_to_wake_up.llvm.<hash>(). The original intention is to kprobe try_to_wake_up() only, so an optional field unique_match is added to struct bpf_kprobe_multi_opts. If the field is set to true, the number of matched functions must be one. Otherwise, the attachment will fail. In the above case, multi kprobe with 'try_to_wake_up*' and unique_match preserves the user functionality. Reported-by: Jordan Rome <linux@jordanrome.com> Suggested-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Yonghong Song <yonghong.song@linux.dev> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20250109174023.3368432-1-yonghong.song@linux.dev
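(Minimal usage sketch: the unique_match field is from this patch, while the skeleton type and program name 'handler' are assumptions for illustration.)

```c
#include <errno.h>
#include <stdio.h>
#include <bpf/libbpf.h>

/* 'struct my_skel' and 'skel->progs.handler' are assumed for the sketch. */
static int attach_unique(struct my_skel *skel)
{
	/* Wildcard pattern, but require exactly one match in kallsyms;
	 * the attach fails if 'try_to_wake_up*' matches several symbols. */
	LIBBPF_OPTS(bpf_kprobe_multi_opts, opts, .unique_match = true);
	struct bpf_link *link;

	link = bpf_program__attach_kprobe_multi_opts(skel->progs.handler,
						     "try_to_wake_up*", &opts);
	if (!link) {
		fprintf(stderr, "attach failed: %d\n", -errno);
		return -errno;
	}
	return 0;
}
```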
2025-01-08  bpf: Remove migrate_{disable|enable} from bpf_selem_free()  [Hou Tao, 1 file, -2/+0]
bpf_selem_free() has the following three callers: (1) bpf_local_storage_update It will be invoked through the ->map_update_elem syscall or helpers for storage maps. Migration has already been disabled in these running contexts. (2) bpf_sk_storage_clone It has already disabled migration before invoking bpf_selem_free(). (3) bpf_selem_free_list bpf_selem_free_list() has three callers: bpf_selem_unlink_storage(), bpf_local_storage_update() and bpf_local_storage_destroy(). The callers of bpf_selem_unlink_storage() include: the storage map ->map_delete_elem syscall, storage map delete helpers and bpf_local_storage_map_free(). These contexts have already disabled migration when invoking bpf_selem_unlink(), which invokes bpf_selem_unlink_storage() and bpf_selem_free_list() correspondingly. bpf_local_storage_update() has been analyzed as the first caller above. bpf_local_storage_destroy() is invoked when freeing the local storage for the kernel object. Now cgroup, task, inode and sock storage have already disabled migration before invoking bpf_local_storage_destroy(). After the analyses above, it is safe to remove migrate_{disable|enable} from bpf_selem_free(). Signed-off-by: Hou Tao <houtao1@huawei.com> Link: https://lore.kernel.org/r/20250108010728.207536-17-houtao@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-01-08  bpf: Remove migrate_{disable|enable} from bpf_local_storage_free()  [Hou Tao, 1 file, -5/+2]
bpf_local_storage_free() has three callers: 1) bpf_local_storage_alloc() Its caller must have disabled migration. 2) bpf_local_storage_destroy() Its four callers (bpf_{cgrp|inode|task|sk}_storage_free()) have already invoked migrate_disable() before invoking bpf_local_storage_destroy(). 3) bpf_selem_unlink() Its callers include: cgrp/inode/task/sk storage ->map_delete_elem callbacks, bpf_{cgrp|inode|task|sk}_storage_delete() helpers and bpf_local_storage_map_free(). All of these callers have already disabled migration before invoking bpf_selem_unlink(). Therefore, it is OK to remove migrate_{disable|enable} pair from bpf_local_storage_free(). Signed-off-by: Hou Tao <houtao1@huawei.com> Link: https://lore.kernel.org/r/20250108010728.207536-16-houtao@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-01-08  bpf: Remove migrate_{disable|enable} from bpf_local_storage_alloc()  [Hou Tao, 1 file, -6/+2]
The two callers of bpf_local_storage_alloc() are the same as those of bpf_selem_alloc(): bpf_sk_storage_clone() and bpf_local_storage_update(). The running contexts of these two callers have already disabled migration, therefore, there is no need to add an extra migrate_{disable|enable} pair in bpf_local_storage_alloc(). Signed-off-by: Hou Tao <houtao1@huawei.com> Link: https://lore.kernel.org/r/20250108010728.207536-15-houtao@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-01-08  bpf: Remove migrate_{disable|enable} from bpf_selem_alloc()  [Hou Tao, 1 file, -2/+0]
bpf_selem_alloc() has two callers: (1) bpf_sk_storage_clone_elem() bpf_sk_storage_clone() has already disabled migration before invoking bpf_sk_storage_clone_elem(). (2) bpf_local_storage_update() Its callers include: cgrp/task/inode/sock storage ->map_update_elem() callbacks and the bpf_{cgrp|task|inode|sk}_storage_get() helpers. These running contexts have already disabled migration. Therefore, there is no need to add an extra migrate_{disable|enable} pair in bpf_selem_alloc(). Signed-off-by: Hou Tao <houtao1@huawei.com> Link: https://lore.kernel.org/r/20250108010728.207536-14-houtao@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-01-08  bpf: Remove migrate_{disable,enable} in bpf_cpumask_release()  [Hou Tao, 1 file, -2/+0]
When BPF program invokes bpf_cpumask_release(), the migration must have been disabled. When bpf_cpumask_release_dtor() invokes bpf_cpumask_release(), the caller bpf_obj_free_fields() also has disabled migration, therefore, it is OK to remove the unnecessary migrate_{disable|enable} pair in bpf_cpumask_release(). Signed-off-by: Hou Tao <houtao1@huawei.com> Link: https://lore.kernel.org/r/20250108010728.207536-13-houtao@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-01-08  bpf: Remove migrate_{disable|enable} in bpf_obj_free_fields()  [Hou Tao, 2 files, -6/+0]
The callers of bpf_obj_free_fields() have already guaranteed that migration is disabled, therefore, there is no need to invoke the migrate_{disable,enable} pair in bpf_obj_free_fields()'s underlying implementation. This patch removes the unnecessary migrate_{disable|enable} pairs from bpf_obj_free_fields() and its callees: bpf_list_head_free() and bpf_rb_root_free(). Signed-off-by: Hou Tao <houtao1@huawei.com> Link: https://lore.kernel.org/r/20250108010728.207536-12-houtao@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-01-08  bpf: Disable migration before calling ops->map_free()  [Hou Tao, 4 files, -13/+11]
The freeing of all map elements may invoke bpf_obj_free_fields() to free the special fields in the map value. Since these special fields may be allocated from the bpf memory allocator, migrate_{disable|enable} pairs are necessary for the freeing of these special fields. To simplify reasoning about when migrate_disable() is needed for the freeing of these special fields, let the caller guarantee that migration is disabled before invoking bpf_obj_free_fields(). Therefore, disable migration before calling ops->map_free() to simplify the freeing of map values or special fields allocated from the bpf memory allocator. After disabling migration in bpf_map_free(), there is no need for additional migrate_{disable|enable} pairs in these ->map_free() callbacks. Remove these redundant invocations. The migrate_{disable|enable} pairs in the underlying implementation of bpf_obj_free_fields() will be removed by the following patch. Signed-off-by: Hou Tao <houtao1@huawei.com> Link: https://lore.kernel.org/r/20250108010728.207536-11-houtao@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
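(Simplified sketch of the resulting structure; the real bpf_map_free() does more work around this.)

```c
/* Disable migration once in the generic free path so that every
 * ->map_free() implementation may call bpf_obj_free_fields() (and the
 * bpf memory allocator) without its own migrate_{disable|enable} pair. */
static void bpf_map_free(struct bpf_map *map)
{
	migrate_disable();
	map->ops->map_free(map);
	migrate_enable();
}
```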
2025-01-08  bpf: Disable migration in bpf_selem_free_rcu  [Hou Tao, 1 file, -0/+3]
bpf_selem_free_rcu() calls bpf_obj_free_fields() to free the special fields in the map value (e.g., kptr). Since kptrs may be allocated from the bpf memory allocator, migrate_{disable|enable} pairs are necessary for the freeing of these kptrs. To simplify reasoning about when migrate_disable() is needed for the freeing of these dynamically-allocated kptrs, let the caller guarantee that migration is disabled before invoking bpf_obj_free_fields(). Therefore, the patch adds a migrate_{disable|enable} pair in bpf_selem_free_rcu(). The migrate_{disable|enable} pairs in the underlying implementation of bpf_obj_free_fields() will be removed by the following patch. Signed-off-by: Hou Tao <houtao1@huawei.com> Link: https://lore.kernel.org/r/20250108010728.207536-10-houtao@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-01-08  bpf: Disable migration when cloning sock storage  [Hou Tao, 1 file, -0/+2]
bpf_sk_storage_clone() will call bpf_selem_free() to free the cloned element when the allocation of new sock storage fails. bpf_selem_free() will call check_and_free_fields() to free the special fields in the element. Since the allocated element is not visible to the bpf syscall or a bpf program when bpf_local_storage_alloc() fails, these special fields in the element must be all zero when invoking bpf_selem_free(). To be uniform with other callers of bpf_selem_free(), disable migration when cloning sock storage. Adding the migrate_{disable|enable} pair also benefits a potential switch from kzalloc to the bpf memory allocator for sock storage. Signed-off-by: Hou Tao <houtao1@huawei.com> Link: https://lore.kernel.org/r/20250108010728.207536-9-houtao@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-01-08  bpf: Disable migration when destroying sock storage  [Hou Tao, 1 file, -4/+5]
When destroying sock storage, it invokes bpf_local_storage_destroy() to remove all storage elements saved in the sock storage. The destroy procedure will call bpf_selem_free() to free the element, and bpf_selem_free() calls bpf_obj_free_fields() to free the special fields in the map value (e.g., kptr). Since kptrs may be allocated from the bpf memory allocator, migrate_{disable|enable} pairs are necessary for the freeing of these kptrs. To simplify reasoning about when migrate_disable() is needed for the freeing of these dynamically-allocated kptrs, let the caller guarantee that migration is disabled before invoking bpf_obj_free_fields(). Therefore, the patch adds a migrate_{disable|enable} pair in bpf_sock_storage_free(). The migrate_{disable|enable} pairs in the underlying implementation of bpf_obj_free_fields() will be removed by the following patch. Signed-off-by: Hou Tao <houtao1@huawei.com> Link: https://lore.kernel.org/r/20250108010728.207536-8-houtao@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-01-08  bpf: Disable migration when destroying inode storage  [Hou Tao, 1 file, -4/+5]
When destroying inode storage, it invokes bpf_local_storage_destroy() to remove all storage elements saved in the inode storage. The destroy procedure will call bpf_selem_free() to free the element, and bpf_selem_free() calls bpf_obj_free_fields() to free the special fields in the map value (e.g., kptr). Since kptrs may be allocated from the bpf memory allocator, migrate_{disable|enable} pairs are necessary for the freeing of these kptrs. To simplify reasoning about when migrate_disable() is needed for the freeing of these dynamically-allocated kptrs, let the caller guarantee that migration is disabled before invoking bpf_obj_free_fields(). Therefore, the patch adds a migrate_{disable|enable} pair in bpf_inode_storage_free(). The migrate_{disable|enable} pairs in the underlying implementation of bpf_obj_free_fields() will be removed by the following patch. Signed-off-by: Hou Tao <houtao1@huawei.com> Link: https://lore.kernel.org/r/20250108010728.207536-7-houtao@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-01-08  bpf: Remove migrate_{disable|enable} from bpf_task_storage_lock helpers  [Hou Tao, 1 file, -8/+7]
Three callers of bpf_task_storage_lock() are ->map_lookup_elem, ->map_update_elem and ->map_delete_elem from the bpf syscall. The BPF syscall has already disabled migration for these three task storage operations. Another two callers are the bpf_task_storage_get() and bpf_task_storage_delete() helpers which will be used by BPF programs. The two callers of bpf_task_storage_trylock() are the bpf_task_storage_get() and bpf_task_storage_delete() helpers. The running contexts of these helpers have already disabled migration. Therefore, it is safe to remove migrate_{disable|enable} from the task storage lock helpers for these call sites. However, bpf_task_storage_free() also invokes bpf_task_storage_lock() and its running context doesn't disable migration, therefore, add the missing migrate_{disable|enable} pair in bpf_task_storage_free(). Signed-off-by: Hou Tao <houtao1@huawei.com> Link: https://lore.kernel.org/r/20250108010728.207536-6-houtao@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-01-08  bpf: Remove migrate_{disable|enable} from bpf_cgrp_storage_lock helpers  [Hou Tao, 1 file, -8/+7]
Three callers of bpf_cgrp_storage_lock() are ->map_lookup_elem, ->map_update_elem and ->map_delete_elem from the bpf syscall. The BPF syscall has already disabled migration for these three cgrp storage operations. Two call sites of bpf_cgrp_storage_trylock() are the bpf_cgrp_storage_get() and bpf_cgrp_storage_delete() helpers. The running contexts of these helpers have already disabled migration. Therefore, it is safe to remove migrate_disable() for these callers. However, bpf_cgrp_storage_free() also invokes bpf_cgrp_storage_lock() and its running context doesn't disable migration. Therefore, also add the missing migrate_{disable|enable} pair in bpf_cgrp_storage_free(). Signed-off-by: Hou Tao <houtao1@huawei.com> Link: https://lore.kernel.org/r/20250108010728.207536-5-houtao@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-01-08  bpf: Remove migrate_{disable|enable} in htab_elem_free  [Hou Tao, 1 file, -2/+0]
htab_elem_free() has two call-sites: delete_all_elements() has already disabled migration, and free_htab_elem() is invoked by 4 other functions: __htab_map_lookup_and_delete_elem, __htab_map_lookup_and_delete_batch, htab_map_update_elem and htab_map_delete_elem. The BPF syscall has already disabled migration before invoking the ->map_update_elem, ->map_delete_elem, and ->map_lookup_and_delete_elem callbacks for the hash map. __htab_map_lookup_and_delete_batch() also disables migration before invoking free_htab_elem(). ->map_update_elem() and ->map_delete_elem() of the hash map may be invoked by a BPF program, and the running context of a BPF program has already disabled migration. Therefore, it is safe to remove the migrate_{disable|enable} pair in htab_elem_free(). Signed-off-by: Hou Tao <houtao1@huawei.com> Link: https://lore.kernel.org/r/20250108010728.207536-4-houtao@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-01-08  bpf: Remove migrate_{disable|enable} in ->map_for_each_callback  [Hou Tao, 2 files, -10/+7]
A BPF program may call bpf_for_each_map_elem(), and it will call the ->map_for_each_callback callback of the related bpf map. Considering that the running context of the bpf program has already disabled migration, remove the unnecessary migrate_{disable|enable} pair in the implementations of ->map_for_each_callback. To ensure the guarantee will not be violated later, also add a cant_migrate() check in the implementations. Signed-off-by: Hou Tao <houtao1@huawei.com> Link: https://lore.kernel.org/r/20250108010728.207536-3-houtao@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
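(Illustrative shape of a ->map_for_each_callback implementation after this change; the function name and parameter types are made up for the sketch.)

```c
static long example_map_for_each_callback(struct bpf_map *map, void *callback_fn,
					  void *callback_ctx, u64 flags)
{
	/* The BPF program calling bpf_for_each_map_elem() already runs with
	 * migration disabled; assert that instead of re-disabling it. */
	cant_migrate();

	/* ... iterate elements and invoke the callback program ... */
	return 0;
}
```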
2025-01-08  bpf: Remove migrate_{disable|enable} from LPM trie  [Hou Tao, 1 file, -16/+4]
Both a bpf program and the bpf syscall may invoke the ->update or ->delete operation of the LPM trie. For a bpf program, its running context has already disabled migration, either explicitly (through migrate_disable()) or implicitly (through preempt_disable() or irq disabling). For the bpf syscall, migration is disabled through the use of bpf_disable_instrumentation() before invoking the corresponding map operation callback. Therefore, it is safe to remove the migrate_{disable|enable} pair from the LPM trie. Signed-off-by: Hou Tao <houtao1@huawei.com> Link: https://lore.kernel.org/r/20250108010728.207536-2-houtao@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-01-08  selftests/bpf: Add kprobe session recursion check test  [Jiri Olsa, 2 files, -0/+7]
Add a kprobe.session probe to bpf_kfunc_common_test that misses bpf program execution due to the recursion check, and make sure it increases the program's missed count properly. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Link: https://lore.kernel.org/r/20250106175048.1443905-2-jolsa@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-01-08  bpf: Return error for missed kprobe multi bpf program execution  [Jiri Olsa, 1 file, -1/+1]
When a kprobe multi bpf program can't be executed due to the recursion check, we currently return 0 (success) to the fprobe layer, where it's ignored for standard kprobe multi probes. For a kprobe session, the success return value will make the fprobe layer install the return probe and try to execute it as well. But the return session probe should not get executed, because the entry part did not run. FWIW the return probe bpf program most likely won't get executed anyway, because its recursion check will likely fail as well, but we don't need to run it in the first place, and we can make this clear and obvious. It also affects missed counts for kprobe session program execution, which are now doubled (an extra count for the not-executed return probe). Signed-off-by: Jiri Olsa <jolsa@kernel.org> Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org> Link: https://lore.kernel.org/r/20250106175048.1443905-1-jolsa@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-01-08  bpf: Move out synchronize_rcu_tasks_trace from mutex CS  [Pu Lehui, 1 file, -8/+13]
Commit ef1b808e3b7c ("bpf: Fix UAF via mismatching bpf_prog/attachment RCU flavors") resolved a possible UAF issue in uprobes that attach a non-sleepable bpf prog by explicitly waiting for a tasks-trace-RCU grace period. But, in the current implementation, synchronize_rcu_tasks_trace is called within the mutex critical section, which increases the length of the critical section and may affect performance. So let's move synchronize_rcu_tasks_trace out of the mutex critical section. Signed-off-by: Pu Lehui <pulehui@huawei.com> Reviewed-by: Jiri Olsa <jolsa@kernel.org> Link: https://lore.kernel.org/r/20250104013946.1111785-1-pulehui@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
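(The change amounts to the pattern sketched below; the lock name and surrounding steps are illustrative.)

```c
/* Before: waiting for the tasks-trace-RCU grace period while holding
 * the mutex keeps the critical section long. */
mutex_lock(&attach_mutex);		/* illustrative lock name */
/* ... unlink the uprobe attachment ... */
synchronize_rcu_tasks_trace();
mutex_unlock(&attach_mutex);

/* After: wait for the grace period outside the critical section. */
mutex_lock(&attach_mutex);
/* ... unlink the uprobe attachment ... */
mutex_unlock(&attach_mutex);
synchronize_rcu_tasks_trace();
```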
2025-01-08  bpf: Fix range_tree_set() error handling  [Soma Nakata, 1 file, -1/+5]
range_tree_set() might fail and return -ENOMEM, causing subsequent `bpf_arena_alloc_pages` to fail. Add the error handling. Signed-off-by: Soma Nakata <soma.nakata@somane.sakura.ne.jp> Acked-by: Hou Tao <houtao1@huawei.com> Link: https://lore.kernel.org/r/20250106231536.52856-1-soma.nakata@somane.sakura.ne.jp Signed-off-by: Alexei Starovoitov <ast@kernel.org>
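(Sketch of the fix's shape; the call-site arguments and field names are illustrative.)

```c
int ret;

/* Propagate a range_tree_set() failure instead of ignoring it and
 * letting a later bpf_arena_alloc_pages() call misbehave. */
ret = range_tree_set(&arena->rt, pgoff, page_cnt);	/* args illustrative */
if (ret)
	return ret;	/* typically -ENOMEM */
```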
2025-01-08  selftests/bpf: add -std=gnu11 to BPF_CFLAGS and CFLAGS  [Ihor Solodrai, 1 file, -2/+6]
Latest versions of GCC BPF use C23 standard by default. This causes compilation errors in vmlinux.h due to bool types declarations. Add -std=gnu11 to BPF_CFLAGS and CFLAGS. This aligns with the version of the standard used when building the kernel currently [1]. For more details see the discussions at [2] and [3]. [1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Makefile#n465 [2] https://lore.kernel.org/bpf/EYcXjcKDCJY7Yb0GGtAAb7nLKPEvrgWdvWpuNzXm2qi6rYMZDixKv5KwfVVMBq17V55xyC-A1wIjrqG3aw-Imqudo9q9X7D7nLU2gWgbN0w=@pm.me/ [3] https://lore.kernel.org/bpf/20250106202715.1232864-1-ihor.solodrai@pm.me/ CC: Jose E. Marchesi <jose.marchesi@oracle.com> Signed-off-by: Ihor Solodrai <ihor.solodrai@pm.me> Link: https://lore.kernel.org/r/20250107235813.2964472-1-ihor.solodrai@pm.me Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-01-06  selftests/bpf: Handle prog/attach type comparison in veristat  [Mykyta Yatsenko, 1 file, -2/+35]
Implemented handling of prog type and attach type stats comparison in veristat. To test this change:

```
./veristat pyperf600.bpf.o -o csv > base1.csv
./veristat pyperf600.bpf.o -o csv > base2.csv
./veristat -C base2.csv base1.csv -o csv
...,raw_tracepoint,raw_tracepoint,MATCH,
...,cgroup_inet_ingress,cgroup_inet_ingress,MATCH
```

Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Tested-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/bpf/20250106144321.32337-1-mykyta.yatsenko5@gmail.com
2025-01-06  selftests/bpf: add -fno-strict-aliasing to BPF_CFLAGS  [Ihor Solodrai, 1 file, -27/+1]
Following the discussion at [1], set -fno-strict-aliasing flag for all BPF object build rules. Remove now unnecessary <test>-CFLAGS variables. [1] https://lore.kernel.org/bpf/20250106185447.951609-1-ihor.solodrai@pm.me/ CC: Jose E. Marchesi <jose.marchesi@oracle.com> Signed-off-by: Ihor Solodrai <ihor.solodrai@pm.me> Acked-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/r/20250106201728.1219791-1-ihor.solodrai@pm.me Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-01-06  selftests/bpf: test bpf_for within spin lock section  [Emil Tsalapatis, 1 file, -0/+26]
Add a selftest to ensure BPF for loops within critical sections are accepted by the verifier. Signed-off-by: Emil Tsalapatis (Meta) <emil@etsalapatis.com> Acked-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/r/20250104202528.882482-3-emil@etsalapatis.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
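(Roughly the accepted shape, as a BPF program fragment; the map value layout and loop bound are illustrative, and bpf_for comes from bpf_experimental.h.)

```c
struct elem {
	struct bpf_spin_lock lock;
	int cnt;
};

/* 'val' points to a map value of type struct elem (illustrative). */
int i;

bpf_spin_lock(&val->lock);
bpf_for(i, 0, 16)		/* bounded loop inside the lock section */
	val->cnt += i;
bpf_spin_unlock(&val->lock);
```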
2025-01-06  bpf: Allow bpf_for/bpf_repeat calls while holding a spinlock  [Emil Tsalapatis, 1 file, -1/+19]
Add the bpf_iter_num_* kfuncs called by bpf_for in special_kfunc_list, and allow the calls even while holding a spin lock. Signed-off-by: Emil Tsalapatis (Meta) <emil@etsalapatis.com> Reviewed-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/r/20250104202528.882482-2-emil@etsalapatis.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-01-06  bpf/tests: Add 32 bits only long conditional jump tests  [Christophe Leroy, 1 file, -6/+58]
Commit f1517eb790f9 ("bpf/tests: Expand branch conversion JIT test") introduced "Long conditional jump tests", but because those tests make use of 64-bit DIV and MOD, they don't get jited on powerpc/32, leading to the long conditional jump tests being skipped for an unrelated reason. Add 4 new tests that are restricted to 32-bit ALU so that the jump tests can also be performed on platforms that do not support 64-bit operations. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/609f87a2d84e032c8d9ccb9ba7aebef893698f1e.1736154762.git.christophe.leroy@csgroup.eu
2025-01-06  bpf, arm64: Emit A64_{ADD,SUB}_I when possible in emit_{lse,ll_sc}_atomic()  [Peilin Ye, 1 file, -8/+4]
Currently in emit_{lse,ll_sc}_atomic(), if there is an offset, we add it to the base address by doing e.g.:

    if (off) {
            emit_a64_mov_i(1, tmp, off, ctx);
            emit(A64_ADD(1, tmp, tmp, dst), ctx);
            [...]

As pointed out by Xu, we can use emit_a64_add_i() (added in the previous patch) instead, which tries to combine the above into a single A64_ADD_I or A64_SUB_I when possible. Suggested-by: Xu Kuohai <xukuohai@huaweicloud.com> Signed-off-by: Peilin Ye <yepeilin@google.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Xu Kuohai <xukuohai@huawei.com> Link: https://lore.kernel.org/bpf/9ad3034a62361d91a99af24efa03f48c4c9e13ea.1735868489.git.yepeilin@google.com
2025-01-06  bpf, arm64: Factor out emit_a64_add_i()  [Peilin Ye, 1 file, -8/+14]
As suggested by Xu, factor out emit_a64_add_i() for later use. No functional change. Suggested-by: Xu Kuohai <xukuohai@huaweicloud.com> Signed-off-by: Peilin Ye <yepeilin@google.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Xu Kuohai <xukuohai@huawei.com> Link: https://lore.kernel.org/bpf/fedbaca80e6d8bd5bcba1ac5320dfbbdab14472e.1735868489.git.yepeilin@google.com
2025-01-06  bpf, arm64: Simplify if logic in emit_lse_atomic()  [Peilin Ye, 1 file, -10/+8]
Delete that unnecessary outer if clause. No functional change. Signed-off-by: Peilin Ye <yepeilin@google.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Xu Kuohai <xukuohai@huawei.com> Link: https://lore.kernel.org/bpf/e8520e5503a489e2dea8526077976ae5a0ab1849.1735868489.git.yepeilin@google.com
2025-01-06  selftests/bpf: Avoid generating untracked files when running bpf selftests  [Jiayuan Chen, 1 file, -2/+2]
Currently, when we run the BPF selftests with the following command:

    make -C tools/testing/selftests TARGETS=bpf SKIP_TARGETS=""

the command generates untracked files and directories with make versions less than 4.4:

'''
Untracked files:
  (use "git add <file>..." to include in what will be committed)
        tools/testing/selftests/bpfFEATURE-DUMP.selftests
        tools/testing/selftests/bpffeature/
'''

We lose the slash after the word "bpf". The reason is that the slash-appending code is as follows:

'''
OUTPUT := $(OUTPUT)/
$(eval include ../../../build/Makefile.feature)
OUTPUT := $(patsubst %/,%,$(OUTPUT))
'''

This way of assigning values to OUTPUT will never be effective for the variable OUTPUT provided via the command argument [1], and the BPF Makefile is called from the parent Makefile (tools/testing/selftests/Makefile) like:

'''
all:
	...
	$(MAKE) OUTPUT=$$BUILD_TARGET -C $$TARGET
'''

According to GNU make, we can use the override directive to fix this issue [2]. [1] https://www.gnu.org/software/make/manual/make.html#Overriding [2] https://www.gnu.org/software/make/manual/make.html#Override-Directive Fixes: dc3a8804d790 ("selftests/bpf: Adapt OUTPUT appending logic to lower versions of Make") Signed-off-by: Jiayuan Chen <mrpre@163.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Jiri Olsa <jolsa@kernel.org> Link: https://lore.kernel.org/bpf/20241224075957.288018-1-mrpre@163.com
2025-01-03  bpf: Reject struct_ops registration that uses module ptr and the module btf_id is missing  [Martin KaFai Lau, 3 files, -5/+26]
There is a UAF report in bpf_struct_ops when CONFIG_MODULES=n. In particular, the report is on tcp_congestion_ops, which has a "struct module *owner" member. For a struct_ops that has a "struct module *owner" member, it can be extended either by a regular kernel module or by bpf_struct_ops. bpf_try_module_get() will be used to do the refcounting, and different refcounting is done based on the owner pointer. When CONFIG_MODULES=n, the btf_id of the "struct module" is missing: WARN: resolve_btfids: unresolved symbol module Thus, bpf_try_module_get() cannot do the correct refcounting. Not every subsystem's struct_ops requires the "struct module *owner" member, e.g. the recent sched_ext_ops. This patch disables bpf_struct_ops registration if the struct_ops has the "struct module *" member and the "struct module" btf_id is missing. The btf_type_is_fwd() helper is moved to the btf.h header file for this test. This has happened since the beginning of bpf_struct_ops, which has gone through many changes. The Fixes tag is set to a recent commit that this patch can apply cleanly to. Considering CONFIG_MODULES=n is not common and the age of the issue, this is targeted for bpf-next also. Fixes: 1611603537a4 ("bpf: Create argument information for nullable arguments.") Reported-by: Robert Morris <rtm@csail.mit.edu> Closes: https://lore.kernel.org/bpf/74665.1733669976@localhost/ Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org> Tested-by: Eduard Zingerman <eddyz87@gmail.com> Acked-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/r/20241220201818.127152-1-martin.lau@linux.dev Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-12-30  bpf: Use refcount_t instead of atomic_t for mmap_count  [Pei Xiao, 1 file, -4/+4]
Use an API that better matches the actual use of mmap_count. Found by cocci: kernel/bpf/arena.c:245:6-25: WARNING: atomic_dec_and_test variation before object free at line 249. Fixes: b90d77e5fd78 ("bpf: Fix remap of arena.") Reported-by: kernel test robot <lkp@intel.com> Closes: https://lore.kernel.org/oe-kbuild-all/202412292037.LXlYSHKl-lkp@intel.com/ Signed-off-by: Pei Xiao <xiaopei01@kylinos.cn> Link: https://lore.kernel.org/r/6ecce439a6bc81adb85d5080908ea8959b792a50.1735542814.git.xiaopei01@kylinos.cn Signed-off-by: Alexei Starovoitov <ast@kernel.org>
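(The conversion follows the usual atomic_t-to-refcount_t pattern, roughly as below; the field accesses are illustrative.)

```c
/* mmap_count is now a refcount_t (was atomic_t), which documents the
 * refcounting intent and traps misuse such as increment-from-zero. */
refcount_set(&arena->mmap_count, 1);			/* was atomic_set() */
refcount_inc(&arena->mmap_count);			/* was atomic_inc() */
if (refcount_dec_and_test(&arena->mmap_count))		/* was atomic_dec_and_test() */
	/* last reference dropped: safe to free */;
```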