path: root/tools/perf/scripts/python/export-to-postgresql.py
Age | Commit message | Author | Files | Lines
2025-01-08bpf: Disable migration when destroying sock storageHou Tao1-4/+5
When destroying sock storage, it invokes bpf_local_storage_destroy() to remove all storage elements saved in the sock storage. The destroy procedure will call bpf_selem_free() to free the element, and bpf_selem_free() calls bpf_obj_free_fields() to free the special fields in the map value (e.g., kptr). Since kptrs may be allocated from the bpf memory allocator, migrate_{disable|enable} pairs are necessary for the freeing of these kptrs. To simplify reasoning about when migrate_disable() is needed for the freeing of these dynamically-allocated kptrs, let the caller guarantee migration is disabled before invoking bpf_obj_free_fields(). Therefore, the patch adds a migrate_{disable|enable} pair in bpf_sock_storage_free(). The migrate_{disable|enable} pairs in the underlying implementation of bpf_obj_free_fields() will be removed by the following patch. Signed-off-by: Hou Tao <houtao1@huawei.com> Link: https://lore.kernel.org/r/20250108010728.207536-8-houtao@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
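As an illustration of the pattern this entry describes, here is a minimal sketch (not the actual diff; the function name is taken from the message and the surrounding RCU handling is assumed and simplified) of wrapping the destroy path with a migrate_{disable|enable} pair:
```
/*
 * Hedged sketch, not the exact kernel change: disable migration around
 * bpf_local_storage_destroy() so that bpf_obj_free_fields(), reached via
 * bpf_selem_free(), can rely on the caller having migration disabled when
 * freeing bpf_mem_alloc-backed kptrs.
 */
void bpf_sock_storage_free(struct sock *sk)
{
	struct bpf_local_storage *sk_storage;

	rcu_read_lock();
	sk_storage = rcu_dereference(sk->sk_bpf_storage);
	if (sk_storage) {
		migrate_disable();
		bpf_local_storage_destroy(sk_storage);
		migrate_enable();
	}
	rcu_read_unlock();
}
```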
2025-01-08bpf: Disable migration when destroying inode storageHou Tao1-4/+5
When destroying inode storage, it invokes bpf_local_storage_destroy() to remove all storage elements saved in the inode storage. The destroy procedure will call bpf_selem_free() to free the element, and bpf_selem_free() calls bpf_obj_free_fields() to free the special fields in the map value (e.g., kptr). Since kptrs may be allocated from the bpf memory allocator, migrate_{disable|enable} pairs are necessary for the freeing of these kptrs. To simplify reasoning about when migrate_disable() is needed for the freeing of these dynamically-allocated kptrs, let the caller guarantee migration is disabled before invoking bpf_obj_free_fields(). Therefore, the patch adds a migrate_{disable|enable} pair in bpf_inode_storage_free(). The migrate_{disable|enable} pairs in the underlying implementation of bpf_obj_free_fields() will be removed by the following patch. Signed-off-by: Hou Tao <houtao1@huawei.com> Link: https://lore.kernel.org/r/20250108010728.207536-7-houtao@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-01-08bpf: Remove migrate_{disable|enable} from bpf_task_storage_lock helpersHou Tao1-8/+7
Three callers of bpf_task_storage_lock() are ->map_lookup_elem, ->map_update_elem, and ->map_delete_elem from the bpf syscall. The BPF syscall has already disabled migration for these three task storage operations. Another two callers are the bpf_task_storage_get() and bpf_task_storage_delete() helpers, which will be used by BPF programs. The two callers of bpf_task_storage_trylock() are the bpf_task_storage_get() and bpf_task_storage_delete() helpers. The running contexts of these helpers have already disabled migration. Therefore, it is safe to remove migrate_{disable|enable} from the task storage lock helpers for these call sites. However, bpf_task_storage_free() also invokes bpf_task_storage_lock() and its running context doesn't disable migration; therefore, add the missing migrate_{disable|enable} pair in bpf_task_storage_free(). Signed-off-by: Hou Tao <houtao1@huawei.com> Link: https://lore.kernel.org/r/20250108010728.207536-6-houtao@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-01-08bpf: Remove migrate_{disable|enable} from bpf_cgrp_storage_lock helpersHou Tao1-8/+7
Three callers of bpf_cgrp_storage_lock() are ->map_lookup_elem, ->map_update_elem, and ->map_delete_elem from the bpf syscall. The BPF syscall has already disabled migration for these three cgrp storage operations. The two call sites of bpf_cgrp_storage_trylock() are the bpf_cgrp_storage_get() and bpf_cgrp_storage_delete() helpers. The running contexts of these helpers have already disabled migration. Therefore, it is safe to remove migrate_disable() for these callers. However, bpf_cgrp_storage_free() also invokes bpf_cgrp_storage_lock() and its running context doesn't disable migration. Therefore, also add the missing migrate_{disable|enable} pair in bpf_cgrp_storage_free(). Signed-off-by: Hou Tao <houtao1@huawei.com> Link: https://lore.kernel.org/r/20250108010728.207536-5-houtao@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-01-08bpf: Remove migrate_{disable|enable} in htab_elem_freeHou Tao1-2/+0
htab_elem_free() has two call sites: delete_all_elements(), which has already disabled migration, and free_htab_elem(), which is invoked by 4 other functions: __htab_map_lookup_and_delete_elem, __htab_map_lookup_and_delete_batch, htab_map_update_elem and htab_map_delete_elem. The BPF syscall has already disabled migration before invoking the ->map_update_elem, ->map_delete_elem, and ->map_lookup_and_delete_elem callbacks for the hash map. __htab_map_lookup_and_delete_batch() also disables migration before invoking free_htab_elem(). ->map_update_elem() and ->map_delete_elem() of the hash map may be invoked by a BPF program, and the running context of a BPF program has already disabled migration. Therefore, it is safe to remove the migrate_{disable|enable} pair in htab_elem_free(). Signed-off-by: Hou Tao <houtao1@huawei.com> Link: https://lore.kernel.org/r/20250108010728.207536-4-houtao@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-01-08bpf: Remove migrate_{disable|enable} in ->map_for_each_callbackHou Tao2-10/+7
BPF program may call bpf_for_each_map_elem(), and it will call the ->map_for_each_callback callback of related bpf map. Considering the running context of bpf program has already disabled migration, remove the unnecessary migrate_{disable|enable} pair in the implementations of ->map_for_each_callback. To ensure the guarantee will not be voilated later, also add cant_migrate() check in the implementations. Signed-off-by: Hou Tao <houtao1@huawei.com> Link: https://lore.kernel.org/r/20250108010728.207536-3-houtao@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
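A hedged sketch of the resulting shape of one ->map_for_each_callback implementation (structure assumed from the arraymap variant; not the exact diff): the migrate_{disable|enable} pair around the loop is gone and cant_migrate() asserts the new precondition.
```
static long bpf_for_each_array_elem(struct bpf_map *map, bpf_callback_t callback_fn,
				    void *callback_ctx, u64 flags)
{
	struct bpf_array *array = container_of(map, struct bpf_array, map);
	u64 ret = 0;
	u32 i, key;
	void *val;

	if (flags != 0)
		return -EINVAL;

	cant_migrate();			/* was: migrate_disable(); */
	for (i = 0; i < map->max_entries; i++) {
		val = array->value + array->elem_size * i;
		key = i;
		ret = callback_fn((u64)(long)map, (u64)(long)&key,
				  (u64)(long)val, (u64)(long)callback_ctx, 0);
		if (ret)
			break;
	}
	/* was: migrate_enable(); */
	return (long)ret;
}
```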
2025-01-08bpf: Remove migrate_{disable|enable} from LPM trieHou Tao1-16/+4
Both bpf programs and the bpf syscall may invoke the ->update or ->delete operation for the LPM trie. For a bpf program, its running context has already disabled migration, either explicitly through migrate_disable() or implicitly through preempt_disable() or disabling irqs. For the bpf syscall, migration is disabled through the use of bpf_disable_instrumentation() before invoking the corresponding map operation callback. Therefore, it is safe to remove the migrate_{disable|enable} pair from the LPM trie. Signed-off-by: Hou Tao <houtao1@huawei.com> Link: https://lore.kernel.org/r/20250108010728.207536-2-houtao@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-01-08selftests/bpf: Add kprobe session recursion check testJiri Olsa2-0/+7
Add a kprobe.session probe to bpf_kfunc_common_test that misses bpf program execution due to the recursion check, and make sure it increases the program's missed count properly. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Link: https://lore.kernel.org/r/20250106175048.1443905-2-jolsa@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-01-08bpf: Return error for missed kprobe multi bpf program executionJiri Olsa1-1/+1
When a kprobe multi bpf program can't be executed due to the recursion check, we currently return 0 (success) to the fprobe layer, where it's ignored for standard kprobe multi probes. For a kprobe session, the success return value will make the fprobe layer install the return probe and try to execute it as well. But the return session probe should not get executed, because the entry part did not run. FWIW, the return probe bpf program most likely won't get executed anyway, because its recursion check will likely fail as well, but we don't need to run it in the first place, and we can make this clear and obvious. It also affects missed counts for kprobe session program execution, which are currently doubled (an extra count for the return probe that was not executed). Signed-off-by: Jiri Olsa <jolsa@kernel.org> Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org> Link: https://lore.kernel.org/r/20250106175048.1443905-1-jolsa@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-01-08bpf: Move out synchronize_rcu_tasks_trace from mutex CSPu Lehui1-8/+13
Commit ef1b808e3b7c ("bpf: Fix UAF via mismatching bpf_prog/attachment RCU flavors") resolved a possible UAF issue in uprobes that attach non-sleepable bpf prog by explicitly waiting for a tasks-trace-RCU grace period. But, in the current implementation, synchronize_rcu_tasks_trace is included within the mutex critical section, which increases the length of the critical section and may affect performance. So let's move out synchronize_rcu_tasks_trace from mutex CS. Signed-off-by: Pu Lehui <pulehui@huawei.com> Reviewed-by: Jiri Olsa <jolsa@kernel.org> Link: https://lore.kernel.org/r/20250104013946.1111785-1-pulehui@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
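The general pattern, shown with hypothetical names (a sketch, not the actual uprobe code): move the blocking grace-period wait out of the mutex-protected region so the critical section stays short.
```
/* before: the tasks-trace-RCU wait inflates the critical section */
mutex_lock(&attach_mutex);
list_del(&link->node);
synchronize_rcu_tasks_trace();
mutex_unlock(&attach_mutex);
kfree(link);

/* after: unlink under the mutex, wait and free outside of it */
mutex_lock(&attach_mutex);
list_del(&link->node);
mutex_unlock(&attach_mutex);
synchronize_rcu_tasks_trace();
kfree(link);
```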
2025-01-08bpf: Fix range_tree_set() error handlingSoma Nakata1-1/+5
range_tree_set() might fail and return -ENOMEM, causing subsequent `bpf_arena_alloc_pages` to fail. Add the error handling. Signed-off-by: Soma Nakata <soma.nakata@somane.sakura.ne.jp> Acked-by: Hou Tao <houtao1@huawei.com> Link: https://lore.kernel.org/r/20250106231536.52856-1-soma.nakata@somane.sakura.ne.jp Signed-off-by: Alexei Starovoitov <ast@kernel.org>
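A hedged fragment of the error-handling pattern (variable names and calling context are assumptions, not the exact diff):
```
err = range_tree_set(&arena->rt, start_pgoff, page_cnt);
if (err)
	return err;	/* typically -ENOMEM; previously the failure was ignored,
			 * leaving a later bpf_arena_alloc_pages() to fail */
```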
2025-01-08selftests/bpf: add -std=gnu11 to BPF_CFLAGS and CFLAGSIhor Solodrai1-2/+6
The latest versions of GCC BPF use the C23 standard by default. This causes compilation errors in vmlinux.h due to bool type declarations. Add -std=gnu11 to BPF_CFLAGS and CFLAGS. This aligns with the version of the standard currently used when building the kernel [1]. For more details see the discussions at [2] and [3]. [1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Makefile#n465 [2] https://lore.kernel.org/bpf/EYcXjcKDCJY7Yb0GGtAAb7nLKPEvrgWdvWpuNzXm2qi6rYMZDixKv5KwfVVMBq17V55xyC-A1wIjrqG3aw-Imqudo9q9X7D7nLU2gWgbN0w=@pm.me/ [3] https://lore.kernel.org/bpf/20250106202715.1232864-1-ihor.solodrai@pm.me/ CC: Jose E. Marchesi <jose.marchesi@oracle.com> Signed-off-by: Ihor Solodrai <ihor.solodrai@pm.me> Link: https://lore.kernel.org/r/20250107235813.2964472-1-ihor.solodrai@pm.me Signed-off-by: Alexei Starovoitov <ast@kernel.org>
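To illustrate the kind of conflict involved (an assumption about what generated headers like vmlinux.h contain, shown for context only): in C23, bool, true and false are keywords, so declarations that were valid under gnu11 stop compiling.
```
/* Illustrative only: a bool-related declaration of the kind found in
 * generated headers such as vmlinux.h.  With -std=c23 (the new GCC BPF
 * default), `bool` is a keyword and this typedef is a hard error;
 * -std=gnu11 keeps it valid. */
typedef _Bool bool;
```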
2025-01-06selftests/bpf: Handle prog/attach type comparison in veristatMykyta Yatsenko1-2/+35
Implemented handling of prog type and attach type stats comparison in veristat. To test this change: ``` ./veristat pyperf600.bpf.o -o csv > base1.csv ./veristat pyperf600.bpf.o -o csv > base2.csv ./veristat -C base2.csv base1.csv -o csv ...,raw_tracepoint,raw_tracepoint,MATCH, ...,cgroup_inet_ingress,cgroup_inet_ingress,MATCH ``` Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Tested-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/bpf/20250106144321.32337-1-mykyta.yatsenko5@gmail.com
2025-01-06selftests/bpf: add -fno-strict-aliasing to BPF_CFLAGSIhor Solodrai1-27/+1
Following the discussion at [1], set -fno-strict-aliasing flag for all BPF object build rules. Remove now unnecessary <test>-CFLAGS variables. [1] https://lore.kernel.org/bpf/20250106185447.951609-1-ihor.solodrai@pm.me/ CC: Jose E. Marchesi <jose.marchesi@oracle.com> Signed-off-by: Ihor Solodrai <ihor.solodrai@pm.me> Acked-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/r/20250106201728.1219791-1-ihor.solodrai@pm.me Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-01-06selftests/bpf: test bpf_for within spin lock sectionEmil Tsalapatis1-0/+26
Add a selftest to ensure BPF for loops within critical sections are accepted by the verifier. Signed-off-by: Emil Tsalapatis (Meta) <emil@etsalapatis.com> Acked-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/r/20250104202528.882482-3-emil@etsalapatis.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
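A hedged sketch of what such a selftest program might look like (the map layout, section name, and headers are assumptions, not the actual test): a bpf_for loop runs between bpf_spin_lock() and bpf_spin_unlock().
```
// SPDX-License-Identifier: GPL-2.0
#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#include "bpf_experimental.h"	/* bpf_for() */

struct val {
	struct bpf_spin_lock lock;
	int counter;
};

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, 1);
	__type(key, int);
	__type(value, struct val);
} lock_map SEC(".maps");

SEC("tc")
int for_in_lock(struct __sk_buff *skb)
{
	int key = 0, i;
	struct val *v;

	v = bpf_map_lookup_elem(&lock_map, &key);
	if (!v)
		return 0;

	bpf_spin_lock(&v->lock);
	bpf_for(i, 0, 10)		/* loop inside the critical section */
		v->counter += i;
	bpf_spin_unlock(&v->lock);
	return 0;
}

char _license[] SEC("license") = "GPL";
```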
2025-01-06bpf: Allow bpf_for/bpf_repeat calls while holding a spinlockEmil Tsalapatis1-1/+19
Add the bpf_iter_num_* kfuncs called by bpf_for in special_kfunc_list, and allow the calls even while holding a spin lock. Signed-off-by: Emil Tsalapatis (Meta) <emil@etsalapatis.com> Reviewed-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/r/20250104202528.882482-2-emil@etsalapatis.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-01-06bpf/tests: Add 32 bits only long conditional jump testsChristophe Leroy1-6/+58
Commit f1517eb790f9 ("bpf/tests: Expand branch conversion JIT test") introduced "Long conditional jump tests", but because those tests make use of 64-bit DIV and MOD, they don't get JITed on powerpc/32, leading to the long conditional jump tests being skipped for an unrelated reason. Add 4 new tests that are restricted to 32-bit ALU so that the jump tests can also be performed on platforms that do not support 64-bit operations. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/609f87a2d84e032c8d9ccb9ba7aebef893698f1e.1736154762.git.christophe.leroy@csgroup.eu
2025-01-06bpf, arm64: Emit A64_{ADD,SUB}_I when possible in emit_{lse,ll_sc}_atomic()Peilin Ye1-8/+4
Currently in emit_{lse,ll_sc}_atomic(), if there is an offset, we add it to the base address by doing e.g.: if (off) { emit_a64_mov_i(1, tmp, off, ctx); emit(A64_ADD(1, tmp, tmp, dst), ctx); [...] As pointed out by Xu, we can use emit_a64_add_i() (added in the previous patch) instead, which tries to combine the above into a single A64_ADD_I or A64_SUB_I when possible. Suggested-by: Xu Kuohai <xukuohai@huaweicloud.com> Signed-off-by: Peilin Ye <yepeilin@google.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Xu Kuohai <xukuohai@huawei.com> Link: https://lore.kernel.org/bpf/9ad3034a62361d91a99af24efa03f48c4c9e13ea.1735868489.git.yepeilin@google.com
2025-01-06bpf, arm64: Factor out emit_a64_add_i()Peilin Ye1-8/+14
As suggested by Xu, factor out emit_a64_add_i() for later use. No functional change. Suggested-by: Xu Kuohai <xukuohai@huaweicloud.com> Signed-off-by: Peilin Ye <yepeilin@google.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Xu Kuohai <xukuohai@huawei.com> Link: https://lore.kernel.org/bpf/fedbaca80e6d8bd5bcba1ac5320dfbbdab14472e.1735868489.git.yepeilin@google.com
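A hedged sketch of what the factored-out helper might look like (signature and details assumed, not guaranteed to match the patch): emit a single A64_ADD_I or A64_SUB_I when the immediate fits the encoding, otherwise fall back to a mov into a temporary register plus a register-register add.
```
static void emit_a64_add_i(const bool is64, const int dst, const int src,
			   const int tmp, const s32 imm, struct jit_ctx *ctx)
{
	if (is_addsub_imm(imm)) {
		/* immediate fits the add/sub encoding directly */
		emit(A64_ADD_I(is64, dst, src, imm), ctx);
	} else if (is_addsub_imm(-imm)) {
		emit(A64_SUB_I(is64, dst, src, -imm), ctx);
	} else {
		/* fall back to mov + register-register add */
		emit_a64_mov_i(is64, tmp, imm, ctx);
		emit(A64_ADD(is64, dst, src, tmp), ctx);
	}
}
```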
2025-01-06bpf, arm64: Simplify if logic in emit_lse_atomic()Peilin Ye1-10/+8
Delete that unnecessary outer if clause. No functional change. Signed-off-by: Peilin Ye <yepeilin@google.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Xu Kuohai <xukuohai@huawei.com> Link: https://lore.kernel.org/bpf/e8520e5503a489e2dea8526077976ae5a0ab1849.1735868489.git.yepeilin@google.com
2025-01-06selftests/bpf: Avoid generating untracked files when running bpf selftestsJiayuan Chen1-2/+2
Currently, when we run the BPF selftests with the following command: make -C tools/testing/selftests TARGETS=bpf SKIP_TARGETS="" the command generates untracked files and directories with make versions less than 4.4: ''' Untracked files: (use "git add <file>..." to include in what will be committed) tools/testing/selftests/bpfFEATURE-DUMP.selftests tools/testing/selftests/bpffeature/ ''' The slash after the word "bpf" is lost. The reason is that the slash-appending code is as follows: ''' OUTPUT := $(OUTPUT)/ $(eval include ../../../build/Makefile.feature) OUTPUT := $(patsubst %/,%,$(OUTPUT)) ''' This way of assigning values to OUTPUT will never be effective for an OUTPUT variable provided via the command line [1], and the BPF makefile is called from the parent Makefile (tools/testing/selftests/Makefile) like: ''' all: ... $(MAKE) OUTPUT=$$BUILD_TARGET -C $$TARGET ''' According to GNU make, we can use the override directive to fix this issue [2]. [1] https://www.gnu.org/software/make/manual/make.html#Overriding [2] https://www.gnu.org/software/make/manual/make.html#Override-Directive Fixes: dc3a8804d790 ("selftests/bpf: Adapt OUTPUT appending logic to lower versions of Make") Signed-off-by: Jiayuan Chen <mrpre@163.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Jiri Olsa <jolsa@kernel.org> Link: https://lore.kernel.org/bpf/20241224075957.288018-1-mrpre@163.com
2025-01-03bpf: Reject struct_ops registration that uses module ptr and the module btf_id is missingMartin KaFai Lau3-5/+26
There is a UAF report in bpf_struct_ops when CONFIG_MODULES=n. In particular, the report is on tcp_congestion_ops, which has a "struct module *owner" member. A struct_ops that has a "struct module *owner" member can be extended either by a regular kernel module or by bpf_struct_ops. bpf_try_module_get() is used to do the refcounting, and different refcounting is done based on the owner pointer. When CONFIG_MODULES=n, the btf_id of the "struct module" is missing: WARN: resolve_btfids: unresolved symbol module Thus, bpf_try_module_get() cannot do the correct refcounting. Not all subsystems' struct_ops require the "struct module *owner" member, e.g. the recent sched_ext_ops. This patch disables bpf_struct_ops registration if the struct_ops has the "struct module *" member and the "struct module" btf_id is missing. The btf_type_is_fwd() helper is moved to the btf.h header file for this test. This has been the case since the beginning of bpf_struct_ops, which has gone through many changes. The Fixes tag is set to a recent commit that this patch can apply to cleanly. Considering that CONFIG_MODULES=n is not common and the age of the issue, this also targets bpf-next. Fixes: 1611603537a4 ("bpf: Create argument information for nullable arguments.") Reported-by: Robert Morris <rtm@csail.mit.edu> Closes: https://lore.kernel.org/bpf/74665.1733669976@localhost/ Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org> Tested-by: Eduard Zingerman <eddyz87@gmail.com> Acked-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/r/20241220201818.127152-1-martin.lau@linux.dev Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-12-30bpf: Use refcount_t instead of atomic_t for mmap_countPei Xiao1-4/+4
Use an API that resembles more the actual use of mmap_count. Found by cocci: kernel/bpf/arena.c:245:6-25: WARNING: atomic_dec_and_test variation before object free at line 249. Fixes: b90d77e5fd78 ("bpf: Fix remap of arena.") Reported-by: kernel test robot <lkp@intel.com> Closes: https://lore.kernel.org/oe-kbuild-all/202412292037.LXlYSHKl-lkp@intel.com/ Signed-off-by: Pei Xiao <xiaopei01@kylinos.cn> Link: https://lore.kernel.org/r/6ecce439a6bc81adb85d5080908ea8959b792a50.1735542814.git.xiaopei01@kylinos.cn Signed-off-by: Alexei Starovoitov <ast@kernel.org>
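The conversion follows the standard atomic_t to refcount_t pattern; a hedged before/after fragment (field usage assumed from the cocci report, free_arena() is a hypothetical stand-in for the actual cleanup):
```
/* before */
atomic_set(&arena->mmap_count, 1);
atomic_inc(&arena->mmap_count);
if (atomic_dec_and_test(&arena->mmap_count))
	free_arena(arena);		/* object free guarded by open-coded refcount */

/* after */
refcount_set(&arena->mmap_count, 1);
refcount_inc(&arena->mmap_count);
if (refcount_dec_and_test(&arena->mmap_count))
	free_arena(arena);		/* refcount_t adds saturation/underflow checks */
```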
2024-12-30bpf: Remove unused MT_ENTRY defineLorenzo Pieralisi1-2/+0
The range tree introduction removed the need for maple tree usage but missed removing the MT_ENTRY defined value that was used to mark maple tree allocated entries. Remove the MT_ENTRY define. Signed-off-by: Lorenzo Pieralisi <lpieralisi@kernel.org> Link: https://lore.kernel.org/r/20241223115901.14207-1-lpieralisi@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-12-30selftests/bpf: fix veristat comp mode with new statsMahe Tardy1-1/+7
Commit 82c1f13de315 ("selftests/bpf: Add more stats into veristat") introduced new stats, added by default in the CSV output, that were not added to parse_stat_value, used in parse_stats_csv which is used in comparison mode. Thus it broke comparison mode altogether making it fail with "Unrecognized stat #7" and EINVAL. One quirk is that PROG_TYPE and ATTACH_TYPE have been transformed to strings using libbpf_bpf_prog_type_str and libbpf_bpf_attach_type_str respectively. Since we might not want to compare those string values, we just skip the parsing in this patch. We might want to translate it back to the enum value or compare the string value directly. Fixes: 82c1f13de315 ("selftests/bpf: Add more stats into veristat") Signed-off-by: Mahe Tardy <mahe.tardy@gmail.com> Tested-by: Mykyta Yatsenko<yatsenko@meta.com> Link: https://lore.kernel.org/r/20241220152218.28405-1-mahe.tardy@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-12-30bpf: Fix holes in special_kfunc_list if !CONFIG_NETThomas Weißschuh1-0/+3
If the function is not available its entry has to be replaced with BTF_ID_UNUSED instead of skipped. Otherwise the list doesn't work correctly. Reported-by: Alexei Starovoitov <alexei.starovoitov@gmail.com> Closes: https://lore.kernel.org/lkml/CAADnVQJQpVziHzrPCCpGE5=8uzw2OkxP8gqe1FkJ6_XVVyVbNw@mail.gmail.com/ Fixes: 00a5acdbf398 ("bpf: Fix configuration-dependent BTF function references") Signed-off-by: Thomas Weißschuh <linux@weissschuh.net> Acked-by: Jiri Olsa <jolsa@kernel.org> Link: https://lore.kernel.org/r/20241219-bpf-fix-special_kfunc_list-v1-1-d9d50dd61505@weissschuh.net Signed-off-by: Alexei Starovoitov <ast@kernel.org>
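A hedged sketch of the rule being enforced (the entry names are illustrative, not the exact list): a BTF_ID_LIST is indexed by position, so a configuration-dependent entry must keep its slot with BTF_ID_UNUSED instead of being omitted.
```
BTF_ID_LIST(special_kfunc_list)
BTF_ID(func, bpf_obj_new_impl)
#ifdef CONFIG_NET
BTF_ID(func, bpf_dynptr_from_skb)
#else
BTF_ID_UNUSED	/* keep the slot so the indices of later entries still line up */
#endif
```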
2024-12-30selftests/bpf: Add testcases for BPF_MULMatan Shachnai1-0/+134
The previous commit improves precision of BPF_MUL. Add tests to exercise updated BPF_MUL. Signed-off-by: Matan Shachnai <m.shachnai@gmail.com> Link: https://lore.kernel.org/r/20241218032337.12214-3-m.shachnai@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-12-30bpf, verifier: Improve precision of BPF_MULMatan Shachnai1-44/+36
This patch improves (or maintains) the precision of register value tracking in BPF_MUL across all possible inputs. It also simplifies scalar32_min_max_mul() and scalar_min_max_mul(). As it stands, BPF_MUL is composed of three functions: case BPF_MUL: tnum_mul(); scalar32_min_max_mul(); scalar_min_max_mul(); The current implementation of scalar_min_max_mul() restricts the u64 input ranges of dst_reg and src_reg to be within [0, U32_MAX]: /* Both values are positive, so we can work with unsigned and * copy the result to signed (unless it exceeds S64_MAX). */ if (umax_val > U32_MAX || dst_reg->umax_value > U32_MAX) { /* Potential overflow, we know nothing */ __mark_reg64_unbounded(dst_reg); return; } This restriction is done to avoid unsigned overflow, which could otherwise wrap the result around 0, and leave an unsound output where umin > umax. We also observe that limiting these u64 input ranges to [0, U32_MAX] leads to a loss of precision. Consider the case where the u64 bounds of dst_reg are [0, 2^34] and the u64 bounds of src_reg are [0, 2^2]. While the multiplication of these two bounds doesn't overflow and is sound [0, 2^36], the current scalar_min_max_mul() would set the entire register state to unbounded. Importantly, we update BPF_MUL to allow signed bound multiplication (i.e. multiplying negative bounds) as well as allow u64 inputs to take on values from [0, U64_MAX]. We perform signed multiplication on two bounds [a,b] and [c,d] by multiplying every combination of the bounds (i.e. a*c, a*d, b*c, and b*d) and checking for overflow of each product. If there is an overflow, we mark the signed bounds unbounded [S64_MIN, S64_MAX]. In the case of no overflow, we take the minimum of these products to be the resulting smin, and the maximum to be the resulting smax. The key idea here is that if there’s no possibility of overflow, either when multiplying signed bounds or unsigned bounds, we can safely multiply the respective bounds; otherwise, we set the bounds that exhibit overflow (during multiplication) to unbounded. if (check_mul_overflow(*dst_umax, src_reg->umax_value, dst_umax) || (check_mul_overflow(*dst_umin, src_reg->umin_value, dst_umin))) { /* Overflow possible, we know nothing */ *dst_umin = 0; *dst_umax = U64_MAX; } ... Below, we provide an example BPF program (below) that exhibits the imprecision in the current BPF_MUL, where the outputs are all unbounded. In contrast, the updated BPF_MUL produces a bounded register state: BPF_LD_IMM64(BPF_REG_1, 11), BPF_LD_IMM64(BPF_REG_2, 4503599627370624), BPF_ALU64_IMM(BPF_NEG, BPF_REG_2, 0), BPF_ALU64_IMM(BPF_NEG, BPF_REG_2, 0), BPF_ALU64_REG(BPF_AND, BPF_REG_1, BPF_REG_2), BPF_LD_IMM64(BPF_REG_3, 809591906117232263), BPF_ALU64_REG(BPF_MUL, BPF_REG_3, BPF_REG_1), BPF_MOV64_IMM(BPF_REG_0, 1), BPF_EXIT_INSN(), Verifier log using the old BPF_MUL: func#0 @0 0: R1=ctx() R10=fp0 0: (18) r1 = 0xb ; R1_w=11 2: (18) r2 = 0x10000000000080 ; R2_w=0x10000000000080 4: (87) r2 = -r2 ; R2_w=scalar() 5: (87) r2 = -r2 ; R2_w=scalar() 6: (5f) r1 &= r2 ; R1_w=scalar(smin=smin32=0,smax=umax=smax32=umax32=11,var_off=(0x0; 0xb)) R2_w=scalar() 7: (18) r3 = 0xb3c3f8c99262687 ; R3_w=0xb3c3f8c99262687 9: (2f) r3 *= r1 ; R1_w=scalar(smin=smin32=0,smax=umax=smax32=umax32=11,var_off=(0x0; 0xb)) R3_w=scalar() ... 
Verifier using the new updated BPF_MUL (more precise bounds at label 9) func#0 @0 0: R1=ctx() R10=fp0 0: (18) r1 = 0xb ; R1_w=11 2: (18) r2 = 0x10000000000080 ; R2_w=0x10000000000080 4: (87) r2 = -r2 ; R2_w=scalar() 5: (87) r2 = -r2 ; R2_w=scalar() 6: (5f) r1 &= r2 ; R1_w=scalar(smin=smin32=0,smax=umax=smax32=umax32=11,var_off=(0x0; 0xb)) R2_w=scalar() 7: (18) r3 = 0xb3c3f8c99262687 ; R3_w=0xb3c3f8c99262687 9: (2f) r3 *= r1 ; R1_w=scalar(smin=smin32=0,smax=umax=smax32=umax32=11,var_off=(0x0; 0xb)) R3_w=scalar(smin=0,smax=umax=0x7b96bb0a94a3a7cd,var_off=(0x0; 0x7fffffffffffffff)) ... Finally, we proved the soundness of the new scalar_min_max_mul() and scalar32_min_max_mul() functions. Typically, multiplication operations are expensive to check with bitvector-based solvers. We were able to prove the soundness of these functions using Non-Linear Integer Arithmetic (NIA) theory. Additionally, using Agni [2,3], we obtained the encodings for scalar32_min_max_mul() and scalar_min_max_mul() in bitvector theory, and were able to prove their soundness using 8-bit bitvectors (instead of 64-bit bitvectors that the functions actually use). In conclusion, with this patch, 1. We were able to show that we can improve the overall precision of BPF_MUL. We proved (using an SMT solver) that this new version of BPF_MUL is at least as precise as the current version for all inputs and more precise for some inputs. 2. We are able to prove the soundness of the new scalar_min_max_mul() and scalar32_min_max_mul(). By leveraging the existing proof of tnum_mul [1], we can say that the composition of these three functions within BPF_MUL is sound. [1] https://ieeexplore.ieee.org/abstract/document/9741267 [2] https://link.springer.com/chapter/10.1007/978-3-031-37709-9_12 [3] https://people.cs.rutgers.edu/~sn349/papers/sas24-preprint.pdf Co-developed-by: Harishankar Vishwanathan <harishankar.vishwanathan@gmail.com> Signed-off-by: Harishankar Vishwanathan <harishankar.vishwanathan@gmail.com> Co-developed-by: Srinivas Narayana <srinivas.narayana@rutgers.edu> Signed-off-by: Srinivas Narayana <srinivas.narayana@rutgers.edu> Co-developed-by: Santosh Nagarakatte <santosh.nagarakatte@rutgers.edu> Signed-off-by: Santosh Nagarakatte <santosh.nagarakatte@rutgers.edu> Signed-off-by: Matan Shachnai <m.shachnai@gmail.com> Link: https://lore.kernel.org/r/20241218032337.12214-2-m.shachnai@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
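A hedged, user-space illustration of the bound-multiplication rule described above (not the verifier code itself): multiply the unsigned bounds only if neither product overflows, otherwise fall back to "know nothing".
```
#include <stdint.h>
#include <stdio.h>

static void mul_u64_bounds(uint64_t *umin, uint64_t *umax,
			   uint64_t src_umin, uint64_t src_umax)
{
	uint64_t new_min, new_max;

	if (__builtin_mul_overflow(*umin, src_umin, &new_min) ||
	    __builtin_mul_overflow(*umax, src_umax, &new_max)) {
		*umin = 0;		/* overflow possible: know nothing */
		*umax = UINT64_MAX;
		return;
	}
	*umin = new_min;
	*umax = new_max;
}

int main(void)
{
	/* The example from the message: [0, 2^34] * [0, 2^2] stays bounded. */
	uint64_t umin = 0, umax = 1ULL << 34;

	mul_u64_bounds(&umin, &umax, 0, 1ULL << 2);
	printf("[%llu, %llu]\n", (unsigned long long)umin,
	       (unsigned long long)umax);	/* prints [0, 2^36] */
	return 0;
}
```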
2024-12-30libbpf: Set MFD_NOEXEC_SEAL when creating memfdDaniel Xu1-1/+13
Starting from 105ff5339f49 ("mm/memfd: add MFD_NOEXEC_SEAL and MFD_EXEC") and until 1717449b4417 ("memfd: drop warning for missing exec-related flags"), the kernel would print a warning if neither MFD_NOEXEC_SEAL nor MFD_EXEC is set in memfd_create(). If libbpf runs on a kernel between these two commits (e.g. on an improperly backported system), it'll trigger this warning. To avoid this warning (and also be more secure), explicitly set MFD_NOEXEC_SEAL. But since libbpf can be run on potentially very old kernels, leave a fallback for kernels without MFD_NOEXEC_SEAL support. Signed-off-by: Daniel Xu <dxu@dxuuu.xyz> Link: https://lore.kernel.org/r/6e62c2421ad7eb1da49cbf16da95aaaa7f94d394.1735594195.git.dxu@dxuuu.xyz Signed-off-by: Alexei Starovoitov <ast@kernel.org>
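A hedged, user-space sketch of the described fallback (not libbpf's actual code path): try MFD_NOEXEC_SEAL first and retry without it on kernels that reject the flag.
```
#define _GNU_SOURCE
#include <sys/mman.h>
#include <errno.h>

#ifndef MFD_NOEXEC_SEAL
#define MFD_NOEXEC_SEAL 0x0008U		/* UAPI value, for old headers */
#endif

static int create_memfd(const char *name)
{
	int fd = memfd_create(name, MFD_CLOEXEC | MFD_NOEXEC_SEAL);

	if (fd < 0 && errno == EINVAL)	/* old kernel: flag not supported */
		fd = memfd_create(name, MFD_CLOEXEC);
	return fd;
}
```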
2024-12-20selftests/bpf: Clear out Python syntax warningsAriel Otilibili1-14/+14
Invalid escape sequences are used, and produced syntax warnings: $ test_bpftool_synctypes.py test_bpftool_synctypes.py:69: SyntaxWarning: invalid escape sequence '\[' self.start_marker = re.compile(f'(static )?const bool {self.array_name}\[.*\] = {{\n') test_bpftool_synctypes.py:83: SyntaxWarning: invalid escape sequence '\[' pattern = re.compile('\[(BPF_\w*)\]\s*= (true|false),?$') test_bpftool_synctypes.py:181: SyntaxWarning: invalid escape sequence '\s' pattern = re.compile('^\s*(BPF_\w+),?(\s+/\*.*\*/)?$') test_bpftool_synctypes.py:229: SyntaxWarning: invalid escape sequence '\*' start_marker = re.compile(f'\*{block_name}\* := {{') test_bpftool_synctypes.py:229: SyntaxWarning: invalid escape sequence '\*' start_marker = re.compile(f'\*{block_name}\* := {{') test_bpftool_synctypes.py:230: SyntaxWarning: invalid escape sequence '\*' pattern = re.compile('\*\*([\w/-]+)\*\*') test_bpftool_synctypes.py:248: SyntaxWarning: invalid escape sequence '\s' start_marker = re.compile(f'"\s*{block_name} := {{') test_bpftool_synctypes.py:249: SyntaxWarning: invalid escape sequence '\w' pattern = re.compile('([\w/]+) [|}]') test_bpftool_synctypes.py:267: SyntaxWarning: invalid escape sequence '\s' start_marker = re.compile(f'"\s*{macro}\s*" [|}}]') test_bpftool_synctypes.py:267: SyntaxWarning: invalid escape sequence '\s' start_marker = re.compile(f'"\s*{macro}\s*" [|}}]') test_bpftool_synctypes.py:268: SyntaxWarning: invalid escape sequence '\w' pattern = re.compile('([\w-]+) ?(?:\||}[ }\]])') test_bpftool_synctypes.py:287: SyntaxWarning: invalid escape sequence '\w' pattern = re.compile('(?:.*=\')?([\w/]+)') test_bpftool_synctypes.py:319: SyntaxWarning: invalid escape sequence '\w' pattern = re.compile('([\w-]+) ?(?:\||}[ }\]"])') test_bpftool_synctypes.py:341: SyntaxWarning: invalid escape sequence '\|' start_marker = re.compile('\|COMMON_OPTIONS\| replace:: {') test_bpftool_synctypes.py:342: SyntaxWarning: invalid escape sequence '\*' pattern = re.compile('\*\*([\w/-]+)\*\*') Escaping them clears out the warnings. $ tools/testing/selftests/bpf/test_bpftool_synctypes.py; echo $? 0 Signed-off-by: Ariel Otilibili <ariel.otilibili-anieli@eurecom.fr> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Tested-by: Quentin Monnet <qmo@kernel.org> Reviewed-by: Quentin Monnet <qmo@kernel.org> Link: https://docs.python.org/3/library/re.html Link: https://lore.kernel.org/bpf/20241211220012.714055-2-ariel.otilibili-anieli@eurecom.fr
2024-12-18bpf: bpf_local_storage: Always use bpf_mem_alloc in PREEMPT_RTMartin KaFai Lau1-2/+6
In PREEMPT_RT, kmalloc(GFP_ATOMIC) is still not safe in non preemptible context. bpf_mem_alloc must be used in PREEMPT_RT. This patch is to enforce bpf_mem_alloc in the bpf_local_storage when CONFIG_PREEMPT_RT is enabled. [ 35.118559] BUG: sleeping function called from invalid context at kernel/locking/spinlock_rt.c:48 [ 35.118566] in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 1832, name: test_progs [ 35.118569] preempt_count: 1, expected: 0 [ 35.118571] RCU nest depth: 1, expected: 1 [ 35.118577] INFO: lockdep is turned off. ... [ 35.118647] __might_resched+0x433/0x5b0 [ 35.118677] rt_spin_lock+0xc3/0x290 [ 35.118700] ___slab_alloc+0x72/0xc40 [ 35.118723] __kmalloc_noprof+0x13f/0x4e0 [ 35.118732] bpf_map_kzalloc+0xe5/0x220 [ 35.118740] bpf_selem_alloc+0x1d2/0x7b0 [ 35.118755] bpf_local_storage_update+0x2fa/0x8b0 [ 35.118784] bpf_sk_storage_get_tracing+0x15a/0x1d0 [ 35.118791] bpf_prog_9a118d86fca78ebb_trace_inet_sock_set_state+0x44/0x66 [ 35.118795] bpf_trace_run3+0x222/0x400 [ 35.118820] __bpf_trace_inet_sock_set_state+0x11/0x20 [ 35.118824] trace_inet_sock_set_state+0x112/0x130 [ 35.118830] inet_sk_state_store+0x41/0x90 [ 35.118836] tcp_set_state+0x3b3/0x640 There is no need to adjust the gfp_flags passing to the bpf_mem_cache_alloc_flags() which only honors the GFP_KERNEL. The verifier has ensured GFP_KERNEL is passed only in sleepable context. It has been an old issue since the first introduction of the bpf_local_storage ~5 years ago, so this patch targets the bpf-next. bpf_mem_alloc is needed to solve it, so the Fixes tag is set to the commit when bpf_mem_alloc was first used in the bpf_local_storage. Fixes: 08a7ce384e33 ("bpf: Use bpf_mem_cache_alloc/free in bpf_local_storage_elem") Reported-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org> Link: https://lore.kernel.org/r/20241218193000.2084281-1-martin.lau@linux.dev Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-12-17veristat: Fix top source line stat collectionMykyta Yatsenko1-1/+9
Fix the comparator implementation to return the most popular source code lines instead of the least popular. Introduce min/max macros for building veristat outside of the Linux repository. Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20241217181113.364651-1-mykyta.yatsenko5@gmail.com
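A generic illustration of the comparator bug class (not veristat's code): sorting by hit count descending returns the most popular lines first, whereas the usual ascending comparator returns the least popular first.
```
#include <stdlib.h>

struct line_stat { const char *src_line; long hits; };

static int cmp_hits_desc(const void *a, const void *b)
{
	const struct line_stat *la = a, *lb = b;

	/* descending: most frequently hit source lines sort first */
	return (lb->hits > la->hits) - (lb->hits < la->hits);
}

/* usage: qsort(stats, nr_stats, sizeof(stats[0]), cmp_hits_desc); */
```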
2024-12-16bpf: lsm: Remove hook to bpf_task_storage_freeSong Liu1-1/+0
free_task() already calls bpf_task_storage_free(). It is not necessary to call it again on security_task_free(). Remove the hook. Signed-off-by: Song Liu <song@kernel.org> Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org> Acked-by: Matt Bobrowski <mattbobrowski@google.com> Link: https://patch.msgid.link/20241212075956.2614894-1-song@kernel.org
2024-12-15Linux 6.13-rc3Linus Torvalds1-1/+1
2024-12-14bpf: Avoid deadlock caused by nested kprobe and fentry bpf programsPriya Bala Govindasamy1-0/+6
BPF program types like kprobe and fentry can cause deadlocks in certain situations. If a function takes a lock and one of these bpf programs is hooked to some point in the function's critical section, and if the bpf program tries to call the same function and take the same lock it will lead to deadlock. These situations have been reported in the following bug reports. In percpu_freelist - Link: https://lore.kernel.org/bpf/CAADnVQLAHwsa+2C6j9+UC6ScrDaN9Fjqv1WjB1pP9AzJLhKuLQ@mail.gmail.com/T/ Link: https://lore.kernel.org/bpf/CAPPBnEYm+9zduStsZaDnq93q1jPLqO-PiKX9jy0MuL8LCXmCrQ@mail.gmail.com/T/ In bpf_lru_list - Link: https://lore.kernel.org/bpf/CAPPBnEajj+DMfiR_WRWU5=6A7KKULdB5Rob_NJopFLWF+i9gCA@mail.gmail.com/T/ Link: https://lore.kernel.org/bpf/CAPPBnEZQDVN6VqnQXvVqGoB+ukOtHGZ9b9U0OLJJYvRoSsMY_g@mail.gmail.com/T/ Link: https://lore.kernel.org/bpf/CAPPBnEaCB1rFAYU7Wf8UxqcqOWKmRPU1Nuzk3_oLk6qXR7LBOA@mail.gmail.com/T/ Similar bugs have been reported by syzbot. In queue_stack_maps - Link: https://lore.kernel.org/lkml/0000000000004c3fc90615f37756@google.com/ Link: https://lore.kernel.org/all/20240418230932.2689-1-hdanton@sina.com/T/ In lpm_trie - Link: https://lore.kernel.org/linux-kernel/00000000000035168a061a47fa38@google.com/T/ In ringbuf - Link: https://lore.kernel.org/bpf/20240313121345.2292-1-hdanton@sina.com/T/ Prevent kprobe and fentry bpf programs from attaching to these critical sections by removing CC_FLAGS_FTRACE for percpu_freelist.o, bpf_lru_list.o, queue_stack_maps.o, lpm_trie.o, ringbuf.o files. The bugs reported by syzbot are due to tracepoint bpf programs being called in the critical sections. This patch does not aim to fix deadlocks caused by tracepoint programs. However, it does prevent deadlocks from occurring in similar situations due to kprobe and fentry programs. Signed-off-by: Priya Bala Govindasamy <pgovind2@uci.edu> Link: https://lore.kernel.org/r/CAPPBnEZpjGnsuA26Mf9kYibSaGLm=oF6=12L21X1GEQdqjLnzQ@mail.gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-12-13selftests/bpf: Add tests for raw_tp NULL argsKumar Kartikeya Dwivedi2-0/+27
Add tests to ensure that arguments are correctly marked based on their specified positions, and whether they get marked correctly as maybe null. For modules, all tracepoint parameters should be marked PTR_MAYBE_NULL by default. Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Link: https://lore.kernel.org/r/20241213221929.3495062-4-memxor@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-12-13bpf: Augment raw_tp arguments with PTR_MAYBE_NULLKumar Kartikeya Dwivedi2-10/+147
Arguments to a raw tracepoint are tagged as trusted, which carries the semantics that the pointer will be non-NULL. However, in certain cases, a raw tracepoint argument may end up being NULL. More context about this issue is available in [0]. Thus, there is a discrepancy between the reality, that raw_tp arguments can actually be NULL, and the verifier's knowledge, that they are never NULL, causing explicit NULL check branch to be dead code eliminated. A previous attempt [1], i.e. the second fixed commit, was made to simulate symbolic execution as if in most accesses, the argument is a non-NULL raw_tp, except for conditional jumps. This tried to suppress branch prediction while preserving compatibility, but surfaced issues with production programs that were difficult to solve without increasing verifier complexity. A more complete discussion of issues and fixes is available at [2]. Fix this by maintaining an explicit list of tracepoints where the arguments are known to be NULL, and mark the positional arguments as PTR_MAYBE_NULL. Additionally, capture the tracepoints where arguments are known to be ERR_PTR, and mark these arguments as scalar values to prevent potential dereference. Each hex digit is used to encode NULL-ness (0x1) or ERR_PTR-ness (0x2), shifted by the zero-indexed argument number x 4. This can be represented as follows: 1st arg: 0x1 2nd arg: 0x10 3rd arg: 0x100 ... and so on (likewise for ERR_PTR case). In the future, an automated pass will be used to produce such a list, or insert __nullable annotations automatically for tracepoints. Each compilation unit will be analyzed and results will be collated to find whether a tracepoint pointer is definitely not null, maybe null, or an unknown state where verifier conservatively marks it PTR_MAYBE_NULL. A proof of concept of this tool from Eduard is available at [3]. Note that in case we don't find a specification in the raw_tp_null_args array and the tracepoint belongs to a kernel module, we will conservatively mark the arguments as PTR_MAYBE_NULL. This is because unlike for in-tree modules, out-of-tree module tracepoints may pass NULL freely to the tracepoint. We don't protect against such tracepoints passing ERR_PTR (which is uncommon anyway), lest we mark all such arguments as SCALAR_VALUE. While we are it, let's adjust the test raw_tp_null to not perform dereference of the skb->mark, as that won't be allowed anymore, and make it more robust by using inline assembly to test the dead code elimination behavior, which should still stay the same. [0]: https://lore.kernel.org/bpf/ZrCZS6nisraEqehw@jlelli-thinkpadt14gen4.remote.csb [1]: https://lore.kernel.org/all/20241104171959.2938862-1-memxor@gmail.com [2]: https://lore.kernel.org/bpf/20241206161053.809580-1-memxor@gmail.com [3]: https://github.com/eddyz87/llvm-project/tree/nullness-for-tracepoint-params Reported-by: Juri Lelli <juri.lelli@redhat.com> # original bug Reported-by: Manu Bretelle <chantra@meta.com> # bugs in masking fix Fixes: 3f00c5239344 ("bpf: Allow trusted pointers to be passed to KF_TRUSTED_ARGS kfuncs") Fixes: cb4158ce8ec8 ("bpf: Mark raw_tp arguments with PTR_MAYBE_NULL") Reviewed-by: Eduard Zingerman <eddyz87@gmail.com> Co-developed-by: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Link: https://lore.kernel.org/r/20241213221929.3495062-3-memxor@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
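A hedged sketch of the per-argument encoding described above (the macro names and example entry are hypothetical; only the 0x1/0x2 digits and the 4-bit-per-argument shift come from the message):
```
#define TP_ARG_NULLABLE(pos)	(0x1ULL << ((pos) * 4))	/* arg may be NULL */
#define TP_ARG_ERR_PTR(pos)	(0x2ULL << ((pos) * 4))	/* arg may be ERR_PTR */

/* hypothetical entry: 1st arg may be NULL, 3rd arg may be an ERR_PTR */
static const struct { const char *tp; u64 mask; } raw_tp_null_args[] = {
	{ "some_tracepoint", TP_ARG_NULLABLE(0) | TP_ARG_ERR_PTR(2) },
};

/* decoding for the zero-indexed argument 'pos' */
static u32 tp_arg_flags(u64 mask, int pos)
{
	return (mask >> (pos * 4)) & 0xf;	/* 0x1 -> PTR_MAYBE_NULL, 0x2 -> scalar */
}
```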
2024-12-13bpf: Revert "bpf: Mark raw_tp arguments with PTR_MAYBE_NULL"Kumar Kartikeya Dwivedi4-87/+9
This patch reverts commit cb4158ce8ec8 ("bpf: Mark raw_tp arguments with PTR_MAYBE_NULL"). The patch was well-intended and meant as a stop-gap to fix branch prediction when the pointer may actually be NULL at runtime. Eventually, it was supposed to be replaced by an automated script or compiler pass detecting possibly NULL arguments and marking them accordingly. However, it caused two main issues observed for production programs and failed to preserve backwards compatibility. First, programs relied on the verifier not exploring the == NULL branch when the pointer is not NULL, thus they started failing with a 'dereference of scalar' error. Next, allowing raw_tp arguments to be modified surfaced the warning in the verifier that warns against reg->off when PTR_MAYBE_NULL is set. More information, context, and discussion on both problems is available in [0]. Overall, this approach had several shortcomings, and the fixes would further complicate the verifier's logic, and the entire masking scheme would have to be removed eventually anyway. Hence, revert the patch in preparation for a better fix that avoids these issues, which will replace this commit. [0]: https://lore.kernel.org/bpf/20241206161053.809580-1-memxor@gmail.com Reported-by: Manu Bretelle <chantra@meta.com> Fixes: cb4158ce8ec8 ("bpf: Mark raw_tp arguments with PTR_MAYBE_NULL") Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Link: https://lore.kernel.org/r/20241213221929.3495062-2-memxor@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-12-13bpf: Fix configuration-dependent BTF function referencesThomas Weißschuh2-0/+12
These BTF functions are not available unconditionally, only reference them when they are available. Avoid the following build warnings: BTF .tmp_vmlinux1.btf.o btf_encoder__tag_kfunc: failed to find kfunc 'bpf_send_signal_task' in BTF btf_encoder__tag_kfuncs: failed to tag kfunc 'bpf_send_signal_task' NM .tmp_vmlinux1.syms KSYMS .tmp_vmlinux1.kallsyms.S AS .tmp_vmlinux1.kallsyms.o LD .tmp_vmlinux2 NM .tmp_vmlinux2.syms KSYMS .tmp_vmlinux2.kallsyms.S AS .tmp_vmlinux2.kallsyms.o LD vmlinux BTFIDS vmlinux WARN: resolve_btfids: unresolved symbol prog_test_ref_kfunc WARN: resolve_btfids: unresolved symbol bpf_crypto_ctx WARN: resolve_btfids: unresolved symbol bpf_send_signal_task WARN: resolve_btfids: unresolved symbol bpf_modify_return_test_tp WARN: resolve_btfids: unresolved symbol bpf_dynptr_from_xdp WARN: resolve_btfids: unresolved symbol bpf_dynptr_from_skb Signed-off-by: Thomas Weißschuh <linux@weissschuh.net> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20241213-bpf-cond-ids-v1-1-881849997219@weissschuh.net
2024-12-13selftest/bpf: Replace magic constants by macrosAnton Protopopov1-3/+3
Replace magic constants in a BTF structure initialization code by proper macros, as is done in other similar selftests. Suggested-by: Eduard Zingerman <eddyz87@gmail.com> Signed-off-by: Anton Protopopov <aspsk@isovalent.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20241213130934.1087929-8-aspsk@isovalent.com