path: root/tools/testing/selftests/bpf/Makefile
2022-09-22  selftests/bpf: Add liburandom_read.so to TEST_GEN_FILES  (Yauheni Kaliuta; 1 file changed, -0/+1)

The recently added urandom_read shared library is missing from the list of installed files, which breaks the urandom_read test after `make install` or `make gen_tar`. Add the library to TEST_GEN_FILES. The names in the list do not contain $(OUTPUT), since that is added by the lib.mk code.

Fixes: 00a0fa2d7d49 ("selftests/bpf: Add urandom_read shared lib and USDTs")
Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220920161409.129953-1-ykaliuta@redhat.com
2022-09-21  selftests/bpf: Add test for bpf_verify_pkcs7_signature() kfunc  (Roberto Sassu; 1 file changed, -3/+11)

Perform several tests to ensure the correct implementation of the bpf_verify_pkcs7_signature() kfunc. Run the tests with data signed with a generated testing key (using sign-file from scripts/) and with the tcp_bic.ko kernel module if it is found on the system. The test does not fail if tcp_bic.ko is not found.

First, perform an unsuccessful signature verification without data. Second, perform a successful signature verification with the session keyring and a new one created for testing.

Then, ensure that permission and validation checks are done properly on the keyring provided to bpf_verify_pkcs7_signature(), even though those checks were deferred at the time the keyring was retrieved with bpf_lookup_user_key(). The tests expect to encounter an error if the Search permission is removed from the keyring, or if the keyring is expired.

Finally, perform a successful and an unsuccessful signature verification with the keyrings with pre-determined IDs (the last test fails because the key is not in the platform keyring).

The test is currently in the deny list for s390x (the JIT does not support calling kernel functions).

Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com>
Link: https://lore.kernel.org/r/20220920075951.929132-13-roberto.sassu@huaweicloud.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
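For illustration, the BPF side of such a test boils down to roughly the following sketch, modeled on the kfunc and dynptr signatures described above (the map layout, buffer sizes, and names such as data_input and user_keyring_serial are illustrative, not the exact selftest code):

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  #define MAX_DATA_SIZE 4096
  #define MAX_SIG_SIZE  1024

  /* filled in by the userspace part of the test */
  __u32 monitored_pid;
  __u32 user_keyring_serial;

  struct data {
          __u8 payload[MAX_DATA_SIZE];
          __u32 payload_len;
          __u8 sig[MAX_SIG_SIZE];
          __u32 sig_len;
  };

  struct {
          __uint(type, BPF_MAP_TYPE_ARRAY);
          __uint(max_entries, 1);
          __type(key, __u32);
          __type(value, struct data);
  } data_input SEC(".maps");

  extern struct bpf_key *bpf_lookup_user_key(__u32 serial, __u64 flags) __ksym;
  extern void bpf_key_put(struct bpf_key *key) __ksym;
  extern int bpf_verify_pkcs7_signature(struct bpf_dynptr *data_ptr,
                                        struct bpf_dynptr *sig_ptr,
                                        struct bpf_key *trusted_keyring) __ksym;

  char _license[] SEC("license") = "GPL";

  SEC("lsm.s/bpf") /* sleepable hook: the verification kfunc may sleep */
  int BPF_PROG(check_bpf_sig, int cmd, union bpf_attr *attr, unsigned int size)
  {
          struct bpf_dynptr data_ptr, sig_ptr;
          struct bpf_key *trusted_keyring;
          struct data *val;
          __u32 zero = 0;
          int ret;

          if (bpf_get_current_pid_tgid() >> 32 != monitored_pid)
                  return 0;

          val = bpf_map_lookup_elem(&data_input, &zero);
          if (!val || val->payload_len > MAX_DATA_SIZE ||
              val->sig_len > MAX_SIG_SIZE)
                  return 0;

          bpf_dynptr_from_mem(val->payload, val->payload_len, 0, &data_ptr);
          bpf_dynptr_from_mem(val->sig, val->sig_len, 0, &sig_ptr);

          trusted_keyring = bpf_lookup_user_key(user_keyring_serial, 0);
          if (!trusted_keyring)
                  return 0;

          ret = bpf_verify_pkcs7_signature(&data_ptr, &sig_ptr, trusted_keyring);
          bpf_key_put(trusted_keyring);
          return ret;
  }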
2022-09-16  selftests/bpf: Add veristat tool for mass-verifying BPF object files  (Andrii Nakryiko; 1 file changed, -1/+6)

Add a small tool, veristat, that allows mass-verification of a set of *libbpf-compatible* BPF ELF object files. For each such object file, veristat will attempt to verify each BPF program *individually*. Regardless of success or failure, it parses BPF verifier stats and outputs them in a human-readable table format. In the future we can also add CSV and JSON output for more scriptable post-processing, if necessary.

veristat allows specifying a set of stats that should be output, and the ordering across multiple programs and files (e.g., so that one can easily order by total instructions processed, instead of the default file name, prog name, verdict, total instructions order).

This tool should be useful for validating various BPF verifier changes, or even validating different kernel versions for regressions.

Here's an example for some of the heaviest selftests/bpf BPF object files:

  $ sudo ./veristat -s insns,file,prog {pyperf,loop,test_verif_scale,strobemeta,test_cls_redirect,profiler}*.linked3.o

  File  Program  Verdict  Duration, us  Total insns  Total states  Peak states
  loop3.linked3.o  while_true  failure  350990  1000001  9663  9663
  test_verif_scale3.linked3.o  balancer_ingress  success  115244  845499  8636  2141
  test_verif_scale2.linked3.o  balancer_ingress  success  77688  773445  3048  788
  pyperf600.linked3.o  on_event  success  2079872  624585  30335  30241
  pyperf600_nounroll.linked3.o  on_event  success  353972  568128  37101  2115
  strobemeta.linked3.o  on_event  success  455230  557149  15915  13537
  test_verif_scale1.linked3.o  balancer_ingress  success  89880  554754  8636  2141
  strobemeta_nounroll2.linked3.o  on_event  success  433906  501725  17087  1912
  loop6.linked3.o  trace_virtqueue_add_sgs  success  282205  398057  8717  919
  loop1.linked3.o  nested_loops  success  125630  361349  5504  5504
  pyperf180.linked3.o  on_event  success  2511740  160398  11470  11446
  pyperf100.linked3.o  on_event  success  744329  87681  6213  6191
  test_cls_redirect.linked3.o  cls_redirect  success  54087  78925  4782  903
  strobemeta_subprogs.linked3.o  on_event  success  57898  65420  1954  403
  test_cls_redirect_subprogs.linked3.o  cls_redirect  success  54522  64965  4619  958
  strobemeta_nounroll1.linked3.o  on_event  success  43313  57240  1757  382
  pyperf50.linked3.o  on_event  success  194355  46378  3263  3241
  profiler2.linked3.o  tracepoint__syscalls__sys_enter_kill  success  23869  43372  1423  542
  pyperf_subprogs.linked3.o  on_event  success  29179  36358  2499  2499
  profiler1.linked3.o  tracepoint__syscalls__sys_enter_kill  success  13052  27036  1946  936
  profiler3.linked3.o  tracepoint__syscalls__sys_enter_kill  success  21023  26016  2186  915
  profiler2.linked3.o  kprobe__vfs_link  success  5255  13896  303  271
  profiler1.linked3.o  kprobe__vfs_link  success  7792  12687  1042  1041
  profiler3.linked3.o  kprobe__vfs_link  success  7332  10601  865  865
  profiler2.linked3.o  kprobe_ret__do_filp_open  success  3417  8900  216  199
  profiler2.linked3.o  kprobe__vfs_symlink  success  3548  8775  203  186
  pyperf_global.linked3.o  on_event  success  10007  7563  520  520
  profiler3.linked3.o  kprobe_ret__do_filp_open  success  4708  6464  532  532
  profiler1.linked3.o  kprobe_ret__do_filp_open  success  3090  6445  508  508
  profiler3.linked3.o  kprobe__vfs_symlink  success  4477  6358  521  521
  profiler1.linked3.o  kprobe__vfs_symlink  success  3381  6347  507  507
  profiler2.linked3.o  raw_tracepoint__sched_process_exec  success  2464  5874  292  189
  profiler3.linked3.o  raw_tracepoint__sched_process_exec  success  2677  4363  397  283
  profiler2.linked3.o  kprobe__proc_sys_write  success  1800  4355  143  138
  profiler1.linked3.o  raw_tracepoint__sched_process_exec  success  1649  4019  333  240
  pyperf600_bpf_loop.linked3.o  on_event  success  2711  3966  306  306
  profiler2.linked3.o  raw_tracepoint__sched_process_exit  success  1234  3138  83  66
  profiler3.linked3.o  kprobe__proc_sys_write  success  1755  2623  223  223
  profiler1.linked3.o  kprobe__proc_sys_write  success  1222  2456  193  193
  loop2.linked3.o  while_true  success  608  1783  57  30
  profiler3.linked3.o  raw_tracepoint__sched_process_exit  success  789  1680  146  146
  profiler1.linked3.o  raw_tracepoint__sched_process_exit  success  592  1526  133  133
  strobemeta_bpf_loop.linked3.o  on_event  success  1015  1512  106  106
  loop4.linked3.o  combinations  success  165  524  18  17
  profiler3.linked3.o  raw_tracepoint__sched_process_fork  success  196  299  25  25
  profiler1.linked3.o  raw_tracepoint__sched_process_fork  success  109  265  19  19
  profiler2.linked3.o  raw_tracepoint__sched_process_fork  success  111  265  19  19
  loop5.linked3.o  while_true  success  47  84  9  9

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220909193053.577111-4-andrii@kernel.org
2022-09-07  selftests/bpf: regroup and declare similar kfuncs selftests in an array  (Benjamin Tissoires; 1 file changed, -2/+3)

Similar to tools/testing/selftests/bpf/prog_tests/dynptr.c: we declare an array of tests that we run one by one in a for loop. Follow-up patches will add more similar-ish tests, so avoid a lot of copy/paste by grouping the declarations in an array; see the sketch after this entry.

For light skeletons, we have to rely on the offsetof() macro so we can statically declare which program we are using. In the libbpf case, we can rely on bpf_object__find_program_by_name(). So also change the Makefile to generate both light skeletons and normal ones.

Signed-off-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/r/20220906151303.2780789-2-benjamin.tissoires@redhat.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
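The resulting pattern looks roughly like this (a sketch of the shape of the change; the skeleton header, test names, and expected return values are hypothetical):

  #include <stddef.h>
  #include <bpf/libbpf.h>
  #include "kfunc_call_test.lskel.h" /* hypothetical light skeleton header */

  #define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))

  struct kfunc_test_params {
          const char *prog_name;          /* libbpf: program looked up by name */
          size_t lskel_prog_desc_offset;  /* lskel: offset of the prog desc in the skeleton */
          int expected_retval;
  };

  static struct kfunc_test_params kfunc_tests[] = {
          { "kfunc_call_test1",
            offsetof(struct kfunc_call_test_lskel, progs.kfunc_call_test1), 12 },
          { "kfunc_call_test2",
            offsetof(struct kfunc_call_test_lskel, progs.kfunc_call_test2), 3 },
  };

  static void run_libbpf_tests(struct bpf_object *obj)
  {
          for (size_t i = 0; i < ARRAY_SIZE(kfunc_tests); i++) {
                  struct bpf_program *prog;

                  /* normal skeleton/libbpf case: dynamic lookup by name */
                  prog = bpf_object__find_program_by_name(obj, kfunc_tests[i].prog_name);
                  if (!prog)
                          continue;
                  /* ... run the program and compare against expected_retval ... */
          }
  }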
2022-09-02  selftests/bpf: Store BPF object files with .bpf.o extension  (Daniel Müller; 1 file changed, -18/+18)

BPF object files are, in a way, the final artifact produced as part of the ahead-of-time compilation process. That makes them somewhat special compared to "regular" object files, which are intermediate build artifacts that can typically be removed safely. As such, it can make sense to name them differently to make it easier to spot this difference at a glance. Among others, libbpf-bootstrap [0] has established the extension .bpf.o for BPF object files. It seems reasonable to follow this example and establish the same denomination for selftest build artifacts. To that end, this change adjusts the corresponding part of the build system and the test programs loading BPF object files to work with .bpf.o files.

[0] https://github.com/libbpf/libbpf-bootstrap

Suggested-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Müller <deso@posteo.net>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220901222253.1199242-1-deso@posteo.net
2022-08-23  selftests/bpf: Make sure bpf_{g,s}et_retval is exposed everywhere  (Stanislav Fomichev; 1 file changed, -0/+1)

For each hook, have a simple bpf_set_retval(bpf_get_retval) program and make sure it loads for the hooks we want. The exceptions are the hooks which don't propagate the error to the callers:

- sockops
- recvmsg
- getpeername
- getsockname
- cg_skb ingress and egress

Acked-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Link: https://lore.kernel.org/r/20220823222555.523590-6-sdf@google.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
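One such per-hook load-test program looks roughly like this (a sketch; the attach point and program name are just one instance of the pattern repeated for each hook):

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>

  char _license[] SEC("license") = "GPL";

  SEC("cgroup/bind4")
  int bind4_retval_prog(struct bpf_sock_addr *ctx)
  {
          /* no-op round-trip: merely proves both helpers load on this hook */
          bpf_set_retval(bpf_get_retval());
          return 1;
  }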
2022-07-08  selftests, xsk: Rename AF_XDP testing app  (Maciej Fijalkowski; 1 file changed, -2/+2)

Recently, the xsk part of libbpf was moved to the selftests/bpf directory and lives on its own because there is an AF_XDP testing application that needs it, called xdpxceiver. That name makes it a bit hard to indicate who maintains it, as there are other XDP samples in there, whereas this one is strictly about AF_XDP. Do s/xdpxceiver/xskxceiver so that it will be easier to figure out who maintains it. A follow-up patch will correct the MAINTAINERS file.

Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220707111613.49031-2-maciej.fijalkowski@intel.com
2022-07-07  selftests/bpf: Add benchmark for local_storage RCU Tasks Trace usage  (Dave Marchevsky; 1 file changed, -1/+3)

This benchmark measures grace period latency and kthread cpu usage of RCU Tasks Trace when many processes are creating/deleting BPF local_storage. The intent here is to quantify improvement on these metrics after Paul's recent RCU Tasks patches [0].

Specifically, fork 15k tasks which call a bpf prog that creates/destroys task local_storage and sleep in a loop, resulting in many call_rcu_tasks_trace calls.

To determine grace period latency, trace the time elapsed between rcu_tasks_trace_pregp_step and rcu_tasks_trace_postgp; for cpu usage, look at rcu_task_trace_kthread's stime in /proc/PID/stat.

On my virtualized test environment (Skylake, 8 cpus) the benchmark results demonstrate significant improvement:

BEFORE Paul's patches:

  SUMMARY tasks_trace grace period latency  avg 22298.551 us  stddev 1302.165 us
  SUMMARY ticks per tasks_trace grace period  avg 2.291  stddev 0.324

AFTER Paul's patches:

  SUMMARY tasks_trace grace period latency  avg 16969.197 us  stddev 2525.053 us
  SUMMARY ticks per tasks_trace grace period  avg 1.146  stddev 0.178

Note that since these patches are not in bpf-next, benchmarking was done by cherry-picking this patch onto the rcu tree.

[0] https://lore.kernel.org/rcu/20220620225402.GA3842369@paulmck-ThinkPad-P17-Gen-1/

Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Paul E. McKenney <paulmck@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20220705190018.3239050-1-davemarchevsky@fb.com
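The BPF program driving the churn is essentially of this shape (a sketch based on the commit's description; map and program names are illustrative):

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>

  struct {
          __uint(type, BPF_MAP_TYPE_TASK_STORAGE);
          __uint(map_flags, BPF_F_NO_PREALLOC);
          __type(key, int);
          __type(value, int);
  } task_map SEC(".maps");

  char _license[] SEC("license") = "GPL";

  SEC("tp/syscalls/sys_enter_getpgid")
  int create_and_delete(void *ctx)
  {
          struct task_struct *task = bpf_get_current_task_btf();
          int *val;

          /* creating... */
          val = bpf_task_storage_get(&task_map, task, NULL,
                                     BPF_LOCAL_STORAGE_GET_F_CREATE);
          if (!val)
                  return 0;
          /* ...and deleting local storage queues a call_rcu_tasks_trace()
           * callback, which is what the benchmark stresses */
          bpf_task_storage_delete(&task_map, task);
          return 0;
  }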
2022-06-28  libbpf: move xsk.{c,h} into selftests/bpf  (Andrii Nakryiko; 1 file changed, -0/+2)

Remove the deprecated xsk APIs from libbpf. But given we have selftests relying on this, move those files (with minimal adjustments to make them compilable) under selftests/bpf. We also remove all the removed APIs from libbpf.map, while keeping the overall version inheritance chain, as most APIs are backwards compatible so there is no need to reassign them as LIBBPF_1.0.0 versions.

Cc: Magnus Karlsson <magnus.karlsson@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20220627211527.2245459-2-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2022-06-22  selftests/bpf: Add benchmark for local_storage get  (Dave Marchevsky; 1 file changed, -1/+3)

Add benchmarks to demonstrate the performance cliff for local_storage get as the number of local_storage maps increases beyond the current local_storage implementation's cache size.

"sequential get" and "interleaved get" benchmarks are added, both of which do many bpf_task_storage_get calls on sets of task local_storage maps of various counts, while considering a single specific map to be 'important' and counting task_storage_gets to the important map separately in addition to the normal 'hits' count of all gets. The goal here is to mimic a scenario where a particular program using one map - the important one - is running on a system where many other local_storage maps exist and are accessed often.

While the "sequential get" benchmark does bpf_task_storage_get for map 0, 1, ..., {9, 99, 999} in order, the "interleaved" benchmark interleaves 4 bpf_task_storage_gets for the important map for every 10 map gets. This is meant to highlight performance differences when the important map is accessed far more frequently than the non-important maps.

A "hashmap control" benchmark is also included for easy comparison of standard bpf hashmap lookup vs local_storage get. The benchmark is similar to "sequential get", but creates and uses BPF_MAP_TYPE_HASH instead of local storage. Only one inner map is created - a hashmap meant to hold a tid -> data mapping for all tasks. The size of the hashmap is hardcoded to my system's PID_MAX_LIMIT (4,194,304). The number of these keys which are actually fetched as part of the benchmark is configurable.

The addition of this benchmark is inspired by a conversation with Alexei in a previous patchset's thread [0], which highlighted the need for such a benchmark to motivate and validate improvements to the local_storage implementation. My approach in that series focused on improving performance for explicitly-marked 'important' maps and was rejected with feedback to make more generally-applicable improvements while avoiding explicitly marking maps as important. Thus the benchmark reports both general and important-map-focused metrics, so the effect of future work on both is clear.

Regarding the benchmark results, on a powerful system (Skylake, 20 cores, 256gb ram):

Hashmap Control
===============
num keys: 10
hashmap (control) sequential get: hits throughput: 20.900 ± 0.334 M ops/s, hits latency: 47.847 ns/op, important_hits throughput: 20.900 ± 0.334 M ops/s
num keys: 1000
hashmap (control) sequential get: hits throughput: 13.758 ± 0.219 M ops/s, hits latency: 72.683 ns/op, important_hits throughput: 13.758 ± 0.219 M ops/s
num keys: 10000
hashmap (control) sequential get: hits throughput: 6.995 ± 0.034 M ops/s, hits latency: 142.959 ns/op, important_hits throughput: 6.995 ± 0.034 M ops/s
num keys: 100000
hashmap (control) sequential get: hits throughput: 4.452 ± 0.371 M ops/s, hits latency: 224.635 ns/op, important_hits throughput: 4.452 ± 0.371 M ops/s
num keys: 4194304
hashmap (control) sequential get: hits throughput: 3.043 ± 0.033 M ops/s, hits latency: 328.587 ns/op, important_hits throughput: 3.043 ± 0.033 M ops/s

Local Storage
=============
num_maps: 1
local_storage cache sequential get: hits throughput: 47.298 ± 0.180 M ops/s, hits latency: 21.142 ns/op, important_hits throughput: 47.298 ± 0.180 M ops/s
local_storage cache interleaved get: hits throughput: 55.277 ± 0.888 M ops/s, hits latency: 18.091 ns/op, important_hits throughput: 55.277 ± 0.888 M ops/s
num_maps: 10
local_storage cache sequential get: hits throughput: 40.240 ± 0.802 M ops/s, hits latency: 24.851 ns/op, important_hits throughput: 4.024 ± 0.080 M ops/s
local_storage cache interleaved get: hits throughput: 48.701 ± 0.722 M ops/s, hits latency: 20.533 ns/op, important_hits throughput: 17.393 ± 0.258 M ops/s
num_maps: 16
local_storage cache sequential get: hits throughput: 44.515 ± 0.708 M ops/s, hits latency: 22.464 ns/op, important_hits throughput: 2.782 ± 0.044 M ops/s
local_storage cache interleaved get: hits throughput: 49.553 ± 2.260 M ops/s, hits latency: 20.181 ns/op, important_hits throughput: 15.767 ± 0.719 M ops/s
num_maps: 17
local_storage cache sequential get: hits throughput: 38.778 ± 0.302 M ops/s, hits latency: 25.788 ns/op, important_hits throughput: 2.284 ± 0.018 M ops/s
local_storage cache interleaved get: hits throughput: 43.848 ± 1.023 M ops/s, hits latency: 22.806 ns/op, important_hits throughput: 13.349 ± 0.311 M ops/s
num_maps: 24
local_storage cache sequential get: hits throughput: 19.317 ± 0.568 M ops/s, hits latency: 51.769 ns/op, important_hits throughput: 0.806 ± 0.024 M ops/s
local_storage cache interleaved get: hits throughput: 24.397 ± 0.272 M ops/s, hits latency: 40.989 ns/op, important_hits throughput: 6.863 ± 0.077 M ops/s
num_maps: 32
local_storage cache sequential get: hits throughput: 13.333 ± 0.135 M ops/s, hits latency: 75.000 ns/op, important_hits throughput: 0.417 ± 0.004 M ops/s
local_storage cache interleaved get: hits throughput: 16.898 ± 0.383 M ops/s, hits latency: 59.178 ns/op, important_hits throughput: 4.717 ± 0.107 M ops/s
num_maps: 100
local_storage cache sequential get: hits throughput: 6.360 ± 0.107 M ops/s, hits latency: 157.233 ns/op, important_hits throughput: 0.064 ± 0.001 M ops/s
local_storage cache interleaved get: hits throughput: 7.303 ± 0.362 M ops/s, hits latency: 136.930 ns/op, important_hits throughput: 1.907 ± 0.094 M ops/s
num_maps: 1000
local_storage cache sequential get: hits throughput: 0.452 ± 0.010 M ops/s, hits latency: 2214.022 ns/op, important_hits throughput: 0.000 ± 0.000 M ops/s
local_storage cache interleaved get: hits throughput: 0.542 ± 0.007 M ops/s, hits latency: 1843.341 ns/op, important_hits throughput: 0.136 ± 0.002 M ops/s

Looking at the "sequential get" results, it's clear that as the number of task local_storage maps grows beyond the current cache size (16), there's a significant reduction in hits throughput. Note that the current local_storage implementation assigns a cache_idx to maps as they are created. Since "sequential get" is creating maps 0..n in order and then doing bpf_task_storage_get calls in the same order, the benchmark is effectively ensuring that a map will not be in cache when the program tries to access it.

For the "interleaved get" results, important-map hits throughput is greatly increased as the important map is more likely to be in cache by virtue of being accessed far more frequently. Throughput still reduces as # maps increases, though.

To get a sense of the overhead of the benchmark program, I commented out bpf_task_storage_get/bpf_map_lookup_elem in local_storage_bench.c and ran the benchmark on the same host as the 'real' run. Results:

Hashmap Control
===============
num keys: 10
hashmap (control) sequential get: hits throughput: 54.288 ± 0.655 M ops/s, hits latency: 18.420 ns/op, important_hits throughput: 54.288 ± 0.655 M ops/s
num keys: 1000
hashmap (control) sequential get: hits throughput: 52.913 ± 0.519 M ops/s, hits latency: 18.899 ns/op, important_hits throughput: 52.913 ± 0.519 M ops/s
num keys: 10000
hashmap (control) sequential get: hits throughput: 53.480 ± 1.235 M ops/s, hits latency: 18.699 ns/op, important_hits throughput: 53.480 ± 1.235 M ops/s
num keys: 100000
hashmap (control) sequential get: hits throughput: 54.982 ± 1.902 M ops/s, hits latency: 18.188 ns/op, important_hits throughput: 54.982 ± 1.902 M ops/s
num keys: 4194304
hashmap (control) sequential get: hits throughput: 50.858 ± 0.707 M ops/s, hits latency: 19.662 ns/op, important_hits throughput: 50.858 ± 0.707 M ops/s

Local Storage
=============
num_maps: 1
local_storage cache sequential get: hits throughput: 110.990 ± 4.828 M ops/s, hits latency: 9.010 ns/op, important_hits throughput: 110.990 ± 4.828 M ops/s
local_storage cache interleaved get: hits throughput: 161.057 ± 4.090 M ops/s, hits latency: 6.209 ns/op, important_hits throughput: 161.057 ± 4.090 M ops/s
num_maps: 10
local_storage cache sequential get: hits throughput: 112.930 ± 1.079 M ops/s, hits latency: 8.855 ns/op, important_hits throughput: 11.293 ± 0.108 M ops/s
local_storage cache interleaved get: hits throughput: 115.841 ± 2.088 M ops/s, hits latency: 8.633 ns/op, important_hits throughput: 41.372 ± 0.746 M ops/s
num_maps: 16
local_storage cache sequential get: hits throughput: 115.653 ± 0.416 M ops/s, hits latency: 8.647 ns/op, important_hits throughput: 7.228 ± 0.026 M ops/s
local_storage cache interleaved get: hits throughput: 138.717 ± 1.649 M ops/s, hits latency: 7.209 ns/op, important_hits throughput: 44.137 ± 0.525 M ops/s
num_maps: 17
local_storage cache sequential get: hits throughput: 112.020 ± 1.649 M ops/s, hits latency: 8.927 ns/op, important_hits throughput: 6.598 ± 0.097 M ops/s
local_storage cache interleaved get: hits throughput: 128.089 ± 1.960 M ops/s, hits latency: 7.807 ns/op, important_hits throughput: 38.995 ± 0.597 M ops/s
num_maps: 24
local_storage cache sequential get: hits throughput: 92.447 ± 5.170 M ops/s, hits latency: 10.817 ns/op, important_hits throughput: 3.855 ± 0.216 M ops/s
local_storage cache interleaved get: hits throughput: 128.844 ± 2.808 M ops/s, hits latency: 7.761 ns/op, important_hits throughput: 36.245 ± 0.790 M ops/s
num_maps: 32
local_storage cache sequential get: hits throughput: 102.042 ± 1.462 M ops/s, hits latency: 9.800 ns/op, important_hits throughput: 3.194 ± 0.046 M ops/s
local_storage cache interleaved get: hits throughput: 126.577 ± 1.818 M ops/s, hits latency: 7.900 ns/op, important_hits throughput: 35.332 ± 0.507 M ops/s
num_maps: 100
local_storage cache sequential get: hits throughput: 111.327 ± 1.401 M ops/s, hits latency: 8.983 ns/op, important_hits throughput: 1.113 ± 0.014 M ops/s
local_storage cache interleaved get: hits throughput: 131.327 ± 1.339 M ops/s, hits latency: 7.615 ns/op, important_hits throughput: 34.302 ± 0.350 M ops/s
num_maps: 1000
local_storage cache sequential get: hits throughput: 101.978 ± 0.563 M ops/s, hits latency: 9.806 ns/op, important_hits throughput: 0.102 ± 0.001 M ops/s
local_storage cache interleaved get: hits throughput: 141.084 ± 1.098 M ops/s, hits latency: 7.088 ns/op, important_hits throughput: 35.430 ± 0.276 M ops/s

Adjusting for overhead, latency numbers for "hashmap control" and "sequential get" are:

  hashmap_control_1k:    ~53.8ns
  hashmap_control_10k:   ~124.2ns
  hashmap_control_100k:  ~206.5ns
  sequential_get_1:      ~12.1ns
  sequential_get_10:     ~16.0ns
  sequential_get_16:     ~13.8ns
  sequential_get_17:     ~16.8ns
  sequential_get_24:     ~40.9ns
  sequential_get_32:     ~65.2ns
  sequential_get_100:    ~148.2ns
  sequential_get_1000:   ~2204ns

Clearly demonstrating a cliff.

In the discussion for v1 of this patch, Alexei noted that local_storage was 2.5x faster than a large hashmap when initially implemented [1]. The benchmark results show that local_storage is 5-10x faster: a long-running BPF application putting some pid-specific info into a hashmap for each pid it sees will probably see on the order of 10-100k pids. Bench numbers for hashmaps of this size are ~10x slower than sequential_get_16, but as the number of local_storage maps grows far past the local_storage cache size the performance advantage shrinks and eventually reverses.

When running the benchmarks it may be necessary to bump the 'open files' ulimit for a successful run.

[0]: https://lore.kernel.org/all/20220420002143.1096548-1-davemarchevsky@fb.com
[1]: https://lore.kernel.org/bpf/20220511173305.ftldpn23m4ski3d3@MBP-98dd607d3435.dhcp.thefacebook.com/

Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
Link: https://lore.kernel.org/r/20220620222554.270578-1-davemarchevsky@fb.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
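The BPF side of such a benchmark is shaped roughly like this (a condensed sketch: the real benchmark parameterizes map counts and interleaving, and all names below are illustrative):

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>

  #define MAX_NUM_MAPS 1000

  struct local_storage_map {
          __uint(type, BPF_MAP_TYPE_TASK_STORAGE);
          __uint(map_flags, BPF_F_NO_PREALLOC);
          __type(key, int);
          __type(value, long);
  };

  struct {
          __uint(type, BPF_MAP_TYPE_ARRAY_OF_MAPS);
          __uint(max_entries, MAX_NUM_MAPS);
          __type(key, int);
          __array(values, struct local_storage_map);
  } maps_array SEC(".maps");

  int num_maps;           /* set by userspace before attach */
  int important_map_idx;  /* the single "important" map */
  long hits, important_hits;

  char _license[] SEC("license") = "GPL";

  SEC("tp/syscalls/sys_enter_getpgid")
  int sequential_get(void *ctx)
  {
          struct task_struct *task = bpf_get_current_task_btf();

          for (int i = 0; i < MAX_NUM_MAPS; i++) {
                  void *inner;
                  long *data;

                  if (i >= num_maps)
                          break;
                  inner = bpf_map_lookup_elem(&maps_array, &i);
                  if (!inner)
                          break;
                  data = bpf_task_storage_get(inner, task, NULL,
                                              BPF_LOCAL_STORAGE_GET_F_CREATE);
                  if (!data)
                          continue;
                  __sync_fetch_and_add(&hits, 1);
                  if (i == important_map_idx)
                          __sync_fetch_and_add(&important_hits, 1);
          }
          return 0;
  }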
2022-06-17  selftests/bpf: Don't force lld on non-x86 architectures  (Andrii Nakryiko; 1 file changed, -2/+9)

LLVM's lld linker doesn't have universal architecture support (e.g., it definitely doesn't work on s390x), so be safe and force lld for urandom_read and liburandom_read.so only on x86 architectures. This should fix s390x CI runs.

Fixes: 3e6fe5ce4d48 ("libbpf: Fix internal USDT address translation logic for shared libraries")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220617045512.1339795-1-andrii@kernel.org
2022-06-16  selftests/bpf: Add selftests for raw syncookie helpers  (Maxim Mikityanskiy; 1 file changed, -1/+2)

This commit adds selftests for the new BPF helpers: bpf_tcp_raw_{gen,check}_syncookie_ipv{4,6}.

xdp_synproxy_kern.c is a BPF program that generates SYN cookies on allowed TCP ports and sends SYNACKs to clients, accelerating the synproxy iptables module. xdp_synproxy.c is a userspace control application that allows configuring the following options at runtime: the list of allowed ports, MSS, window scale, and TTL.

A selftest is added to prog_tests that leverages the above programs to test the functionality of the new helpers.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Link: https://lore.kernel.org/r/20220615134847.3753567-5-maximmi@nvidia.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
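The generation side of the new helpers is used roughly as follows (a condensed sketch assuming an option-less IPv4 header; the real xdp_synproxy program does full parsing, handles IPv6, and constructs the SYNACK reply):

  #include <linux/bpf.h>
  #include <linux/if_ether.h>
  #include <linux/in.h>
  #include <linux/ip.h>
  #include <linux/tcp.h>
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_endian.h>

  char _license[] SEC("license") = "GPL";

  SEC("xdp")
  int syncookie_gen(struct xdp_md *ctx)
  {
          void *data = (void *)(long)ctx->data;
          void *data_end = (void *)(long)ctx->data_end;
          struct ethhdr *eth = data;
          struct iphdr *iph = (void *)(eth + 1);
          struct tcphdr *th = (void *)(iph + 1);
          __u32 th_len;
          __s64 cookie;

          if ((void *)(th + 1) > data_end)
                  return XDP_PASS;
          if (eth->h_proto != bpf_htons(ETH_P_IP) ||
              iph->ihl != 5 || iph->protocol != IPPROTO_TCP)
                  return XDP_PASS;
          if (!th->syn || th->ack)
                  return XDP_PASS;

          th_len = th->doff * 4;
          if (th_len < sizeof(*th) || (void *)th + th_len > data_end)
                  return XDP_PASS;

          /* generate a SYN cookie for the not-yet-established connection */
          cookie = bpf_tcp_raw_gen_syncookie_ipv4(iph, th, th_len);
          if (cookie < 0)
                  return XDP_ABORTED;

          /* ...turn the packet into a SYNACK carrying `cookie` and XDP_TX it;
           * the client's ACK is later validated with
           * bpf_tcp_raw_check_syncookie_ipv4(iph, th)... */
          return XDP_TX;
  }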
2022-06-17  libbpf: Fix internal USDT address translation logic for shared libraries  (Andrii Nakryiko; 1 file changed, -5/+9)

Perform the same virtual address to file offset translation that libbpf is doing for executable ELF binaries also for shared libraries. Currently libbpf is making a simplifying and sometimes wrong assumption that for shared libraries relative virtual addresses inside the ELF are always equal to file offsets.

Unfortunately, this is not always the case with LLVM's lld linker, which now by default generates a noticeably more complicated ELF segment layout. E.g., for liburandom_read.so from selftests/bpf, here's an excerpt from readelf output listing ELF segments (a.k.a. program headers):

  Type  Offset    VirtAddr            PhysAddr            FileSiz   MemSiz    Flg  Align
  PHDR  0x000040  0x0000000000000040  0x0000000000000040  0x0001f8  0x0001f8  R    0x8
  LOAD  0x000000  0x0000000000000000  0x0000000000000000  0x0005e4  0x0005e4  R    0x1000
  LOAD  0x0005f0  0x00000000000015f0  0x00000000000015f0  0x000160  0x000160  R E  0x1000
  LOAD  0x000750  0x0000000000002750  0x0000000000002750  0x000210  0x000210  RW   0x1000
  LOAD  0x000960  0x0000000000003960  0x0000000000003960  0x000028  0x000029  RW   0x1000

Compare that to what is generated by GNU ld (or LLVM lld with the extra -znoseparate-code argument, which disables this cleverness in the name of file size reduction):

  Type  Offset    VirtAddr            PhysAddr            FileSiz   MemSiz    Flg  Align
  LOAD  0x000000  0x0000000000000000  0x0000000000000000  0x000550  0x000550  R    0x1000
  LOAD  0x001000  0x0000000000001000  0x0000000000001000  0x000131  0x000131  R E  0x1000
  LOAD  0x002000  0x0000000000002000  0x0000000000002000  0x0000ac  0x0000ac  R    0x1000
  LOAD  0x002dc0  0x0000000000003dc0  0x0000000000003dc0  0x000262  0x000268  RW   0x1000

You can see from the first example above that for the executable (Flg == "R E") PT_LOAD segment (LOAD #2), the Offset doesn't match the VirtAddr column. And it does in the second case (GNU ld output).

This is important because all the addresses, including USDT specs, operate in a virtual address space, while the kernel is expecting file offsets when performing uprobe attach. So such mismatches have to be properly taken care of and compensated by libbpf, which is what this patch is fixing.

The patch also clarifies a few function and variable names, as well as updates comments to reflect this important distinction (virtaddr vs file offset) and to emphasize that shared libraries are not all that different from executables in this regard.

This patch also changes the selftests/bpf Makefile to force urand_read and liburand_read.so to be built with Clang and LLVM's lld (and explicitly request this ELF file size optimization through the -znoseparate-code linker parameter) to validate libbpf's logic and ensure regressions don't happen in the future. I've bundled these selftests changes together with the libbpf changes to keep the above description tied to both the libbpf and selftests changes.

Fixes: 74cc6311cec9 ("libbpf: Add USDT notes parsing and resolution logic")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220616055543.3285835-1-andrii@kernel.org
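The compensation amounts to walking the PT_LOAD program headers and mapping the virtual address into its containing segment, along these lines (a sketch following the commit's description, not libbpf's actual code; names are illustrative):

  #include <errno.h>
  #include <gelf.h>

  /* translate a virtual address (e.g., from a USDT note) into the file
   * offset the kernel expects for uprobe attachment */
  static long elf_virtaddr_to_file_off(const GElf_Phdr *phdrs, size_t phnum,
                                       unsigned long virtaddr)
  {
          for (size_t i = 0; i < phnum; i++) {
                  const GElf_Phdr *p = &phdrs[i];

                  if (p->p_type != PT_LOAD)
                          continue;
                  if (virtaddr < p->p_vaddr || virtaddr >= p->p_vaddr + p->p_memsz)
                          continue;
                  /* p_offset and p_vaddr may differ (as with lld), so
                   * compensate instead of assuming offset == virtaddr */
                  return (long)(virtaddr - p->p_vaddr + p->p_offset);
          }
          return -ENOENT;
  }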
2022-06-11  selftest/bpf/benchs: Add bpf_map benchmark  (Feng Zhou; 1 file changed, -1/+3)

Add a benchmark for hash_map to reproduce the worst case of non-stop updates when the map has no free elements. Just like this:

  ./run_bench_bpf_hashmap_full_update.sh
  Setting up benchmark 'bpf-hashmap-ful-update'...
  Benchmark 'bpf-hashmap-ful-update' started.
  1:hash_map_full_perf 555830 events per sec
  ...

Signed-off-by: Feng Zhou <zhoufeng.zf@bytedance.com>
Link: https://lore.kernel.org/r/20220610023308.93798-3-zhoufeng.zf@bytedance.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2022-05-18  selftests/bpf: Add missed ima_setup.sh in Makefile  (Hangbin Liu; 1 file changed, -1/+1)

When building the bpf selftests and installing them to another folder, e.g.

  make -j10 install -C tools/testing/selftests/ TARGETS="bpf" \
       SKIP_TARGETS="" INSTALL_PATH=/tmp/kselftests

ima_setup.sh is missing from the target folder, which makes test_ima fail. Fix it by adding ima_setup.sh to TEST_PROGS_EXTENDED.

Fixes: 34b82d3ac105 ("bpf: Add a selftest for bpf_ima_inode_hash")
Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20220516040020.653291-1-liuhangbin@gmail.com
2022-05-16  selftests/bpf: Fix building bpf selftests statically  (Yosry Ahmed; 1 file changed, -2/+4)

The bpf selftests can no longer be built with CFLAGS=-static because of liburandom_read.so and its dependent target. Filter out -static for liburandom_read.so and its dependent target. When building statically, this leaves urandom_read relying on system-wide shared libraries.

Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220514002115.1376033-1-yosryahmed@google.com
2022-05-09  selftests/bpf: Prevent skeleton generation race  (Andrii Nakryiko; 1 file changed, -5/+5)

Prevent the "classic" and light skeleton generation rules from stomping on each other's toes due to the use of the same <obj>.linked{1,2,3}.o naming pattern. There is no coordination or synchronization between the .skel.h and .lskel.h rules, so they can easily overwrite each other's intermediate object files, leading to errors like:

  /bin/sh: line 1: 170928 Bus error (core dumped) /data/users/andriin/linux/tools/testing/selftests/bpf/tools/sbin/bpftool gen skeleton /data/users/andriin/linux/tools/testing/selftests/bpf/test_ksyms_weak.linked3.o name test_ksyms_weak > /data/users/andriin/linux/tools/testing/selftests/bpf/test_ksyms_weak.skel.h
  make: *** [Makefile:507: /data/users/andriin/linux/tools/testing/selftests/bpf/test_ksyms_weak.skel.h] Error 135
  make: *** Deleting file '/data/users/andriin/linux/tools/testing/selftests/bpf/test_ksyms_weak.skel.h'

Fix this by using a different suffix for the light skeleton rule.

Fixes: c48e51c8b07a ("bpf: selftests: Add selftests for module kfunc support")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220509004148.1801791-2-andrii@kernel.org
2022-04-05  selftests/bpf: Add urandom_read shared lib and USDTs  (Andrii Nakryiko; 1 file changed, -2/+9)

Extend the urandom_read helper binary to include USDTs of 4 combinations: semaphore/semaphoreless (refcounted and non-refcounted) and based in the executable or a shared library. We also extend urandom_read with the ability to report its own PID to the parent process and to wait for the parent process to ready itself up for tracing urandom_read. We utilize popen() and the underlying pipe properties for proper signaling.

Once urandom_read is ready, we add a few tests to validate that libbpf's USDT attachment handles all the above combinations of semaphore (or lack of it) and static or shared library USDTs. Also, we validate that libbpf handles shared libraries both with a PID filter and without one (i.e., -1 for the PID argument).

Having the shared library case tested with and without a PID is important because the internal logic differs on kernels that don't support BPF cookies. On such older kernels, attaching to USDTs in shared libraries without specifying a concrete PID doesn't work in principle, because it's impossible to determine the shared library's load address to derive absolute IPs for uprobe attachments. Without absolute IPs, it's impossible to perform a correct lookup of the USDT spec based on the uprobe's absolute IP (the only kind available from BPF at runtime). This is not a problem on newer kernels with BPF cookies, as we don't need the IP-to-ID lookup because the BPF cookie value *is* the spec ID.

So having those two situations as separate subtests is good, because libbpf CI is able to test the latest selftests against old kernels (e.g., 4.9 and 5.5); this way we'll be able to disable PID-less shared lib attachment for old kernels, but will still leave the PID-specific one enabled to validate that this legacy logic works correctly.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Dave Marchevsky <davemarchevsky@fb.com>
Link: https://lore.kernel.org/bpf/20220404234202.331384-8-andrii@kernel.org
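The USDT definitions themselves boil down to sys/sdt.h macros; the semaphore variant looks roughly like this (a sketch of the mechanism, with provider/probe names chosen for illustration):

  #define _SDT_HAS_SEMAPHORES 1
  #include <sys/sdt.h>

  /* the kernel increments this while at least one tracer is attached;
   * by convention it must be named <provider>_<probe>_semaphore */
  unsigned short urand_read_with_sema_semaphore
          __attribute__((section(".probes")));

  void report_read(int iter_num, int iter_cnt, int read_sz)
  {
          /* semaphoreless USDT: argument setup always happens */
          STAP_PROBE3(urand, read_without_sema, iter_num, iter_cnt, read_sz);

          /* semaphore-guarded USDT: skip argument setup when untraced */
          if (urand_read_with_sema_semaphore)
                  STAP_PROBE3(urand, read_with_sema, iter_num, iter_cnt, read_sz);
  }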
2022-04-05  selftests/bpf: Add basic USDT selftests  (Andrii Nakryiko; 1 file changed, -6/+8)

Add a semaphore-based USDT to test_progs itself and write basic tests to validate both the auto-attachment and manual attachment logic, as well as the BPF-side functionality. Also add subtests to validate that libbpf properly deduplicates USDT specs and handles spec overflow situations correctly, as well as the proper "rollback" of a partially-attached multi-spec USDT.

The BPF side of the selftest intentionally consists of two files, to validate that the usdt.bpf.h header can be included from multiple source code files that are subsequently linked into the final BPF object file without causing any symbol duplication or other issues. We are validating that the __weak maps and bpf_usdt_xxx() API functions defined in usdt.bpf.h do work as intended.

The USDT selftests utilize the sys/sdt.h header that on Ubuntu systems comes from the systemtap-sdt-devel package. But to simplify everyone's life, including CI but especially casual contributors to bpf/bpf-next that are trying to build selftests, I've checked in the sys/sdt.h header from [0] directly. This way it will work on all architectures and distros without having to figure it out for every relevant combination and without adding any extra implicit package dependencies.

[0] https://sourceware.org/git?p=systemtap.git;a=blob_plain;f=includes/sys/sdt.h;h=ca0162b4dc57520b96638c8ae79ad547eb1dd3a1;hb=HEAD

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Alan Maguire <alan.maguire@oracle.com>
Acked-by: Dave Marchevsky <davemarchevsky@fb.com>
Link: https://lore.kernel.org/bpf/20220404234202.331384-7-andrii@kernel.org
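On the BPF side, the usdt.bpf.h header makes handlers look roughly like this (a sketch; the binary path, provider, and probe names are illustrative):

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>
  #include "usdt.bpf.h"

  int usdt_hit_count;

  char _license[] SEC("license") = "GPL";

  /* auto-attach spec: usdt/<binary path>:<provider>:<probe> */
  SEC("usdt/./urandom_read:urand:read_with_sema")
  int BPF_USDT(handle_urand_read, int iter_num, int iter_cnt, int read_sz)
  {
          /* BPF_USDT unpacks typed USDT arguments via bpf_usdt_arg() */
          __sync_fetch_and_add(&usdt_hit_count, 1);
          return 0;
  }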
2022-03-17  selftests/bpf: Test subskeleton functionality  (Delyan Kratunov; 1 file changed, -2/+10)

This patch changes the selftests/bpf Makefile to also generate a subskel.h for every skel.h it would have normally generated. Separately, it also introduces a new subskeleton test which tests library objects, externs, weak symbols, kconfigs, and user maps.

Signed-off-by: Delyan Kratunov <delyank@fb.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/1bd24956940bbbfe169bb34f7f87b11df52ef011.1647473511.git.delyank@fb.com
2022-03-16  bpf: selftests: Remove libcap usage from test_progs  (Martin KaFai Lau; 1 file changed, -2/+3)

This patch removes the libcap usage from test_progs. bind_perm.c is the only user. The cap_*_effective() helpers added in the earlier patch are directly used instead. No other selftest binary is using libcap, so '-lcap' is also removed from the Makefile.

Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Stanislav Fomichev <sdf@google.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20220316173835.2039334-1-kafai@fb.com
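With the new helpers, the libcap dance in bind_perm reduces to something like this (a sketch assuming the cap_helpers.h API added earlier in the series; error handling trimmed):

  #include <linux/capability.h>
  #include "cap_helpers.h"

  void try_unprivileged_bind(void)
  {
          __u64 old_caps = 0;

          /* drop CAP_NET_BIND_SERVICE around the negative test... */
          if (cap_disable_effective(1ULL << CAP_NET_BIND_SERVICE, &old_caps))
                  return;

          /* ...attempt to bind to a privileged port, expecting failure... */

          /* ...then restore the previously dropped capabilities */
          cap_enable_effective(old_caps, NULL);
  }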
2022-03-16  bpf: selftests: Remove libcap usage from test_verifier  (Martin KaFai Lau; 1 file changed, -1/+2)

This patch removes the libcap usage from test_verifier. The cap_*_effective() helpers added in the earlier patch are used instead.

Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20220316173829.2038682-1-kafai@fb.com
2022-02-20  selftests/bpf: Fix btfgen tests  (Andrii Nakryiko; 1 file changed, -1/+2)

There turned out to be a few problems with the btfgen selftests.

First, the core_btfgen tests are failing in BPF CI due to the use of the full-featured bpftool, which has extra dependencies on libbfd, libcap, etc. Those are present in BPF CI's build environment, but the shared libraries are missing in the QEMU image in which test_progs is running. To fix this problem, use the minimal bootstrap version of bpftool instead. It only depends on libelf and libz, same as libbpf, so it doesn't add any new requirements (and bootstrap bpftool still implements the entire `bpftool gen` functionality, which is quite convenient).

The second problem is even more interesting. Both core_btfgen and core_reloc reuse the same struct core_reloc_test_case array of test case definitions. That in itself is not a problem, but the btfgen test replaces the test_case->btf_src_file property with the path to a temporary file into which the minimized BTF is output by bpftool. This interferes with the original core_reloc tests, depending on the order of test execution (core_btfgen is run first in sequential mode and screws up the subsequent core_reloc run by pointing to an already deleted temporary file instead of the original BTF files) and on whether those two runs share the same process (in parallel mode the chances are high for them to run in two separate processes and thus not interfere with each other).

To prevent this interference, create and use a local copy of the test definition, as sketched below. Mark the original array as constant to catch accidental modifications. Note that setup_type_id_case_success() and setup_type_id_case_failure() still modify the common test_case->output memory area, but that is ok as each setup function has to re-initialize it completely anyway. In sequential mode this leads to deterministic and correct initialization. In parallel mode they will either each have their own process, or, if core_reloc and core_btfgen happen to be run by the same worker process, they will still do that sequentially within the worker process. If they are sharded across multiple processes, they don't really share anything anyway.

Also, rename core_btfgen into core_reloc_btfgen, as it is indeed just a "flavor" of the core_reloc test, not an independent set of tests. So make that more obvious.

The last problem that needed solving was that the location of bpftool differs between test_progs and test_progs' flavors (e.g., test_progs-no_alu32). To keep it simple, create a symlink to bpftool both inside the selftests/bpf/ directory and the selftests/bpf/<flavor> subdirectory. That way, from inside the core_reloc test, the location of bpftool is just "./bpftool".

v2->v3:
- fix bpftool location relative to test_progs-no_alu32;

v1->v2:
- fix corruption of core_reloc_test_case.

Fixes: 704c91e59fe0 ("selftests/bpf: Test "bpftool gen min_core_btf"")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yucong Sun <sunyucong@gmail.com>
Link: https://lore.kernel.org/bpf/20220220042720.3336684-1-andrii@kernel.org
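The interference fix boils down to working on a private copy of the shared test definition (a sketch of the pattern; struct fields, paths, and names are illustrative):

  #include <stdbool.h>
  #include <string.h>

  /* now const: accidental modification is a compile error */
  static const struct core_reloc_test_case test_cases[] = {
          /* ... shared by core_reloc and core_reloc_btfgen ... */
  };

  static void run_one(const struct core_reloc_test_case *tc, bool use_btfgen)
  {
          struct core_reloc_test_case test_case;

          /* private copy, so overwriting btf_src_file with the path of the
           * bpftool-minimized BTF cannot corrupt later core_reloc runs */
          memcpy(&test_case, tc, sizeof(test_case));
          if (use_btfgen)
                  test_case.btf_src_file = "/tmp/minimized_btf"; /* illustrative */

          /* ... load BTF and run the test case ... */
  }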
2022-02-04  selftests/bpf: Test bpf_core_types_are_compat() functionality.  (Matteo Croce; 1 file changed, -1/+1)

Add several tests to check bpf_core_types_are_compat() functionality:

- candidate type name exists and types match
- candidate type name exists but types don't match
- nested func protos at kernel recursion limit
- nested func protos above kernel recursion limit; such a bpf prog is rejected during the load

Signed-off-by: Matteo Croce <mcroce@microsoft.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220204005519.60361-3-mcroce@linux.microsoft.com
2022-01-20  selftests/bpf: fail build on compilation warning  (Andrii Nakryiko; 1 file changed, -2/+2)

It's very easy to miss compilation warnings without -Werror, which is not set for selftests. libbpf and bpftool are already strict about this, so make selftests/bpf also treat compilation warnings as errors to catch such regressions early.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20220120060529.1890907-2-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2021-12-16  selftests/bpf: Enable cross-building with clang  (Jean-Philippe Brucker; 1 file changed, -4/+4)

Cross-building using clang requires passing the "-target" flag rather than using the CROSS_COMPILE prefix. Makefile.include transforms CROSS_COMPILE into CLANG_CROSS_FLAGS. Clear CROSS_COMPILE for bpftool and the host libbpf, and use the clang flags for urandom_read and bench.

Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Quentin Monnet <quentin@isovalent.com>
Link: https://lore.kernel.org/bpf/20211216163842.829836-7-jean-philippe@linaro.org
2021-12-11  selftests/bpf: Add benchmark for bpf_strncmp() helper  (Hou Tao; 1 file changed, -1/+3)

Add a benchmark to compare the performance of a home-made strncmp() in a bpf program with the bpf_strncmp() helper. In summary, the performance win of bpf_strncmp() under x86-64 is greater than 18% when the compared string length is greater than 64, and is 179% when the length is 4095. Under arm64 the performance win is even bigger: 33% when the length is greater than 64 and 600% when the length is 4095.

The following are the details (no-helper-X uses the home-made strncmp() to compare an X-sized string; helper-Y uses bpf_strncmp() to compare a Y-sized string):

Under x86-64:

  no-helper-1     3.504 ± 0.000M/s (drops 0.000 ± 0.000M/s)
  helper-1        3.347 ± 0.001M/s (drops 0.000 ± 0.000M/s)
  no-helper-8     3.357 ± 0.001M/s (drops 0.000 ± 0.000M/s)
  helper-8        3.307 ± 0.001M/s (drops 0.000 ± 0.000M/s)
  no-helper-32    3.064 ± 0.000M/s (drops 0.000 ± 0.000M/s)
  helper-32       3.253 ± 0.001M/s (drops 0.000 ± 0.000M/s)
  no-helper-64    2.563 ± 0.001M/s (drops 0.000 ± 0.000M/s)
  helper-64       3.040 ± 0.001M/s (drops 0.000 ± 0.000M/s)
  no-helper-128   1.975 ± 0.000M/s (drops 0.000 ± 0.000M/s)
  helper-128      2.641 ± 0.000M/s (drops 0.000 ± 0.000M/s)
  no-helper-512   0.759 ± 0.000M/s (drops 0.000 ± 0.000M/s)
  helper-512      1.574 ± 0.000M/s (drops 0.000 ± 0.000M/s)
  no-helper-2048  0.329 ± 0.000M/s (drops 0.000 ± 0.000M/s)
  helper-2048     0.602 ± 0.000M/s (drops 0.000 ± 0.000M/s)
  no-helper-4095  0.117 ± 0.000M/s (drops 0.000 ± 0.000M/s)
  helper-4095     0.327 ± 0.000M/s (drops 0.000 ± 0.000M/s)

Under arm64:

  no-helper-1     2.806 ± 0.004M/s (drops 0.000 ± 0.000M/s)
  helper-1        2.819 ± 0.002M/s (drops 0.000 ± 0.000M/s)
  no-helper-8     2.797 ± 0.109M/s (drops 0.000 ± 0.000M/s)
  helper-8        2.786 ± 0.025M/s (drops 0.000 ± 0.000M/s)
  no-helper-32    2.399 ± 0.011M/s (drops 0.000 ± 0.000M/s)
  helper-32       2.703 ± 0.002M/s (drops 0.000 ± 0.000M/s)
  no-helper-64    2.020 ± 0.015M/s (drops 0.000 ± 0.000M/s)
  helper-64       2.702 ± 0.073M/s (drops 0.000 ± 0.000M/s)
  no-helper-128   1.604 ± 0.001M/s (drops 0.000 ± 0.000M/s)
  helper-128      2.516 ± 0.002M/s (drops 0.000 ± 0.000M/s)
  no-helper-512   0.699 ± 0.000M/s (drops 0.000 ± 0.000M/s)
  helper-512      2.106 ± 0.003M/s (drops 0.000 ± 0.000M/s)
  no-helper-2048  0.215 ± 0.000M/s (drops 0.000 ± 0.000M/s)
  helper-2048     1.223 ± 0.003M/s (drops 0.000 ± 0.000M/s)
  no-helper-4095  0.112 ± 0.000M/s (drops 0.000 ± 0.000M/s)
  helper-4095     0.796 ± 0.000M/s (drops 0.000 ± 0.000M/s)

Signed-off-by: Hou Tao <houtao1@huawei.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211210141652.877186-4-houtao1@huawei.com
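The two benchmarked flavors look roughly like this (a sketch close to the shape described above; buffer size and section names are illustrative):

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>

  #define STR_SIZE 4095

  char str[STR_SIZE + 1], target[STR_SIZE + 1]; /* filled by userspace */
  long hits;

  char _license[] SEC("license") = "GPL";

  static __always_inline int local_strncmp(const char *s1, unsigned int sz,
                                           const char *s2)
  {
          int ret = 0;

          for (unsigned int i = 0; i < sz; i++) {
                  ret = (unsigned char)s1[i] - (unsigned char)s2[i];
                  if (ret || !s1[i])
                          break;
          }
          return ret;
  }

  SEC("tp/syscalls/sys_enter_getpgid")
  int strncmp_no_helper(void *ctx)
  {
          if (local_strncmp(str, STR_SIZE, target) < 0)
                  __sync_fetch_and_add(&hits, 1);
          return 0;
  }

  SEC("tp/syscalls/sys_enter_getpgid")
  int strncmp_helper(void *ctx)
  {
          if (bpf_strncmp(str, STR_SIZE, target) < 0)
                  __sync_fetch_and_add(&hits, 1);
          return 0;
  }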
2021-12-10  Merge https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next  (Jakub Kicinski; 1 file changed, -20/+29)

Andrii Nakryiko says:

====================
bpf-next 2021-12-10 v2

We've added 115 non-merge commits during the last 26 day(s) which contain a total of 182 files changed, 5747 insertions(+), 2564 deletions(-).

The main changes are:

1) Various samples fixes, from Alexander Lobakin.
2) BPF CO-RE support in kernel and light skeleton, from Alexei Starovoitov.
3) A batch of new unified APIs for libbpf, logging improvements, version querying, etc. Also a batch of old deprecations for old APIs and various bug fixes, in preparation for libbpf 1.0, from Andrii Nakryiko.
4) BPF documentation reorganization and improvements, from Christoph Hellwig and Dave Tucker.
5) Support for declarative initialization of BPF_MAP_TYPE_PROG_ARRAY in libbpf, from Hengqi Chen.
6) Verifier log fixes, from Hou Tao.
7) Runtime-bounded loops support with bpf_loop() helper, from Joanne Koong.
8) Extend branch record capturing to all platforms that support it, from Kajol Jain.
9) Light skeleton codegen improvements, from Kumar Kartikeya Dwivedi.
10) bpftool doc-generating script improvements, from Quentin Monnet.
11) Two libbpf v0.6 bug fixes, from Shuyi Cheng and Vincent Minet.
12) Deprecation warning fix for perf/bpf_counter, from Song Liu.
13) MAX_TAIL_CALL_CNT unification and MIPS build fix for libbpf, from Tiezhu Yang.
14) BTF_KIND_TYPE_TAG follow-up fixes, from Yonghong Song.
15) Selftests fixes and improvements, from Ilya Leoshkevich, Jean-Philippe Brucker, Jiri Olsa, Maxim Mikityanskiy, Tirthendu Sarkar, Yucong Sun, and others.

* https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (115 commits)
  libbpf: Add "bool skipped" to struct bpf_map
  libbpf: Fix typo in btf__dedup@LIBBPF_0.0.2 definition
  bpftool: Switch bpf_object__load_xattr() to bpf_object__load()
  selftests/bpf: Remove the only use of deprecated bpf_object__load_xattr()
  selftests/bpf: Add test for libbpf's custom log_buf behavior
  selftests/bpf: Replace all uses of bpf_load_btf() with bpf_btf_load()
  libbpf: Deprecate bpf_object__load_xattr()
  libbpf: Add per-program log buffer setter and getter
  libbpf: Preserve kernel error code and remove kprobe prog type guessing
  libbpf: Improve logging around BPF program loading
  libbpf: Allow passing user log setting through bpf_object_open_opts
  libbpf: Allow passing preallocated log_buf when loading BTF into kernel
  libbpf: Add OPTS-based bpf_btf_load() API
  libbpf: Fix bpf_prog_load() log_buf logic for log_level 0
  samples/bpf: Remove unneeded variable
  bpf: Remove redundant assignment to pointer t
  selftests/bpf: Fix a compilation warning
  perf/bpf_counter: Use bpf_map_create instead of bpf_create_map
  samples: bpf: Fix 'unknown warning group' build warning on Clang
  samples: bpf: Fix xdp_sample_user.o linking with Clang
  ...
====================

Link: https://lore.kernel.org/r/20211210234746.2100561-1-andrii@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-12-02  selftests/bpf: Build testing_helpers.o out of tree  (Jean-Philippe Brucker; 1 file changed, -18/+22)

Add the $(OUTPUT) prefix to testing_helpers.o, so it can be built out of tree when necessary. At the moment, in addition to being built in-tree even when an out-of-tree build is required, testing_helpers.o is not built with the right recipe when cross-building.

For consistency, the other helpers, cgroup_helpers and trace_helpers, can also be passed as objects instead of source. Use a *_HELPERS variable to keep the Makefile readable.

Fixes: f87c1930ac29 ("selftests/bpf: Merge test_stub.c into testing_helpers.c")
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211201145101.823159-1-jean-philippe@linaro.org
2021-12-02  selftests/bpf: Additional test for CO-RE in the kernel.  (Alexei Starovoitov; 1 file changed, -1/+1)

Add a test where the randmap() function is appended to three different bpf programs. That action checks the struct bpf_core_relo replication logic and offset adjustment in the gen loader part of libbpf. A fourth bpf program has 360 CO-RE relocations from vmlinux, bpf_testmod, and a non-existing type. It tests the candidate cache logic.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211201181040.23337-16-alexei.starovoitov@gmail.com
2021-12-02  selftests/bpf: Convert map_ptr_kern test to use light skeleton.  (Alexei Starovoitov; 1 file changed, -1/+2)

To exercise CO-RE in the kernel further, convert the map_ptr_kern test to a light skeleton.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211201181040.23337-15-alexei.starovoitov@gmail.com
2021-12-02  selftests/bpf: Add lskel version of kfunc test.  (Alexei Starovoitov; 1 file changed, -1/+1)

Add a light skeleton version of the kfunc_call_test_subprog test.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211201181040.23337-13-alexei.starovoitov@gmail.com
2021-11-30  selftest/bpf/benchs: Add bpf_loop benchmark  (Joanne Koong; 1 file changed, -1/+3)

Add a benchmark to measure the throughput and latency of the bpf_loop call. Testing this on my dev machine on 1 thread, the data is as follows:

  nr_loops: 10       bpf_loop - throughput: 198.519 ± 0.155 M ops/s, latency: 5.037 ns/op
  nr_loops: 100      bpf_loop - throughput: 247.448 ± 0.305 M ops/s, latency: 4.041 ns/op
  nr_loops: 500      bpf_loop - throughput: 260.839 ± 0.380 M ops/s, latency: 3.834 ns/op
  nr_loops: 1000     bpf_loop - throughput: 262.806 ± 0.629 M ops/s, latency: 3.805 ns/op
  nr_loops: 5000     bpf_loop - throughput: 264.211 ± 1.508 M ops/s, latency: 3.785 ns/op
  nr_loops: 10000    bpf_loop - throughput: 265.366 ± 3.054 M ops/s, latency: 3.768 ns/op
  nr_loops: 50000    bpf_loop - throughput: 235.986 ± 20.205 M ops/s, latency: 4.238 ns/op
  nr_loops: 100000   bpf_loop - throughput: 264.482 ± 0.279 M ops/s, latency: 3.781 ns/op
  nr_loops: 500000   bpf_loop - throughput: 309.773 ± 87.713 M ops/s, latency: 3.228 ns/op
  nr_loops: 1000000  bpf_loop - throughput: 262.818 ± 4.143 M ops/s, latency: 3.805 ns/op

From this data, we can see that the latency per loop decreases as the number of loops increases. On this particular machine, each loop had an overhead of about ~4 ns, and we were able to run ~250 million loops per second.

Signed-off-by: Joanne Koong <joannekoong@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211130030622.4131246-5-joannekoong@fb.com
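The measured call is essentially this (a sketch; nr_loops is set by the userspace bench driver and the names are illustrative):

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>

  __u32 nr_loops;
  long hits;

  char _license[] SEC("license") = "GPL";

  static int empty_callback(__u32 index, void *data)
  {
          return 0; /* 0 = continue looping */
  }

  SEC("tp/syscalls/sys_enter_getpgid")
  int benchmark_bpf_loop(void *ctx)
  {
          bpf_loop(nr_loops, empty_callback, NULL, 0);
          __sync_fetch_and_add(&hits, 1);
          return 0;
  }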
2021-11-18  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Jakub Kicinski; 1 file changed, -1/+1)

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-11-16  selftests/bpf: Add uprobe triggering overhead benchmarks  (Andrii Nakryiko; 1 file changed, -1/+3)

Add benchmarks to measure the overhead of uprobes and uretprobes. Also have a baseline (no uprobe attached) benchmark.

On my dev machine, the baseline benchmark can trigger 130M user_target() invocations. When a uprobe is attached, this falls to just 700K. With a uretprobe, we get down to 520K:

  $ sudo ./bench trig-uprobe-base -a
  Summary: hits  131.289 ± 2.872M/s

  # UPROBE
  $ sudo ./bench -a trig-uprobe-without-nop
  Summary: hits    0.729 ± 0.007M/s
  $ sudo ./bench -a trig-uprobe-with-nop
  Summary: hits    1.798 ± 0.017M/s

  # URETPROBE
  $ sudo ./bench -a trig-uretprobe-without-nop
  Summary: hits    0.508 ± 0.012M/s
  $ sudo ./bench -a trig-uretprobe-with-nop
  Summary: hits    0.883 ± 0.008M/s

So there is almost a 2.5x performance difference between probing a nop vs a non-nop instruction for an entry uprobe, and a 1.7x difference for a uretprobe. This means that non-nop overhead is around 1.4 microseconds for a uprobe and 2 microseconds for a non-nop uretprobe. For the nop variants, the uprobe and uretprobe overhead is down to 0.556 and 1.13 microseconds, respectively.

For comparison, just doing a very low-overhead syscall (with no BPF programs attached anywhere) gives:

  $ sudo ./bench trig-base -a
  Summary: hits 4.830 ± 0.036M/s

So uprobes are about 2.67x slower than a pure context switch.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20211116013041.4072571-1-andrii@kernel.org
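The attach targets differ only in their first instruction, roughly as follows (a userspace sketch of the idea; function names are illustrative):

  /* uprobe lands on a nop: the kernel can emulate it cheaply */
  __attribute__((noinline)) void uprobe_target_with_nop(void)
  {
          asm volatile ("nop");
  }

  /* uprobe displaces a "real" first instruction: slower path */
  __attribute__((noinline)) void uprobe_target_without_nop(void)
  {
          asm volatile ("");
  }

  /* trigger loop: attach a uprobe/uretprobe (or don't, for the
   * baseline) to one of the targets and count invocations */
  void *trigger_producer(void *input)
  {
          while (1) {
                  uprobe_target_with_nop();
                  /* ... atomically bump the hits counter ... */
          }
          return NULL;
  }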
2021-11-12  tools/runqslower: Fix cross-build  (Jean-Philippe Brucker; 1 file changed, -1/+1)

Commit be79505caf3f ("tools/runqslower: Install libbpf headers when building") uses the target libbpf to build the host bpftool, which doesn't work when cross-building:

  make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- -C tools/bpf/runqslower O=/tmp/runqslower
  ...
    LINK     /tmp/runqslower/bpftool/bpftool
  /usr/bin/ld: /tmp/runqslower/libbpf/libbpf.a(libbpf-in.o): Relocations in generic ELF (EM: 183)
  /usr/bin/ld: /tmp/runqslower/libbpf/libbpf.a: error adding symbols: file in wrong format
  collect2: error: ld returned 1 exit status

When cross-building, the target architecture differs from the host. The bpftool used for building runqslower is executed on the host, and thus must use a different libbpf than that used for runqslower itself. Remove the LIBBPF_OUTPUT and LIBBPF_DESTDIR parameters, so the bpftool build makes its own library if necessary.

In the selftests, pass the host bpftool, already a prerequisite for the runqslower recipe, as BPFTOOL_OUTPUT. The runqslower Makefile will use the bpftool that's already built for selftests instead of making a new one.

Fixes: be79505caf3f ("tools/runqslower: Install libbpf headers when building")
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Link: https://lore.kernel.org/bpf/20211112155128.565680-1-jean-philippe@linaro.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2021-11-11  selftests/bpf: Minor cleanups and normalization of Makefile  (Andrii Nakryiko; 1 file changed, -16/+16)

A few clean-ups and single-line simplifications. Also split the CLEAN command into multiple $(RM) invocations, as it gets dangerously close to the maximum argument-list length. Make sure that -o <output.o> is always used as the last argument, for saner verbose make output.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211111053624.190580-3-andrii@kernel.org
2021-11-07  selftests/bpf: Pass sanitizer flags to linker through LDFLAGS  (Andrii Nakryiko; 1 file changed, -0/+1)

When adding -fsanitize=address to SAN_CFLAGS, it has to be passed both to the compiler through CFLAGS and to the linker through LDFLAGS. Add SAN_CFLAGS into LDFLAGS to allow building selftests with ASAN.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Hengqi Chen <hengqi.chen@gmail.com>
Link: https://lore.kernel.org/bpf/20211107165521.9240-2-andrii@kernel.org
2021-11-07  selftests/bpf: Use explicit bpf_test_load_program() helper calls  (Andrii Nakryiko; 1 file changed, -2/+1)

Remove the second part of the prog loading testing helper re-definition:

  -Dbpf_load_program=bpf_test_load_program

This completes the clean up of deprecated libbpf program loading APIs.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Dave Marchevsky <davemarchevsky@fb.com>
Link: https://lore.kernel.org/bpf/20211103220845.2676888-13-andrii@kernel.org
2021-11-07  selftests/bpf: Use explicit bpf_prog_test_load() calls everywhere  (Andrii Nakryiko; 1 file changed, -1/+1)

The -Dbpf_prog_load_deprecated=bpf_prog_test_load trick is both ugly and breaks when the deprecation goes into effect, due to macro magic. Convert all the uses to explicit bpf_prog_test_load() calls, which avoids deprecation errors and makes everything less magical.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Dave Marchevsky <davemarchevsky@fb.com>
Link: https://lore.kernel.org/bpf/20211103220845.2676888-12-andrii@kernel.org
2021-11-07  selftests/bpf: Merge test_stub.c into testing_helpers.c  (Andrii Nakryiko; 1 file changed, -16/+17)

Move the testing prog and object load wrappers (bpf_prog_test_load and bpf_test_load_program) into testing_helpers.{c,h} and get rid of the otherwise useless test_stub.c. Make testing_helpers.c available to non-test_progs binaries as well.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Dave Marchevsky <davemarchevsky@fb.com>
Link: https://lore.kernel.org/bpf/20211103220845.2676888-11-andrii@kernel.org
2021-11-07libbpf: Unify low-level BPF_PROG_LOAD APIs into bpf_prog_load()Andrii Nakryiko1-1/+1
Add a new unified OPTS-based low-level API for program loading, bpf_prog_load() ([0]). bpf_prog_load() accepts a few "mandatory" parameters as input arguments (program type, name, license, instructions) and takes all the other optional fields (as in, not required for all types of BPF programs) through struct bpf_prog_load_opts. This makes all the other non-extensible API variants for BPF_PROG_LOAD obsolete, and they are slated for deprecation in libbpf v0.7:

  - bpf_load_program();
  - bpf_load_program_xattr();
  - bpf_verify_program().

Implementation-wise, the internal helper libbpf__bpf_prog_load is refactored to become the public bpf_prog_load() API, and the internally used struct bpf_prog_load_params is replaced by the public struct bpf_prog_load_opts. Unfortunately, while conceptually all this is pretty straightforward, the biggest complication comes from the already existing bpf_prog_load() *high-level* API, which has nothing to do with the BPF_PROG_LOAD command. We try really hard to have the new API named bpf_prog_load(), though, because it maps naturally to the BPF_PROG_LOAD command. For that, we rename the old bpf_prog_load() into bpf_prog_load_deprecated() and mark it as COMPAT_VERSION() for shared library users compiled against an old version of libbpf. Statically linked users and shared lib users compiled against a new version of the libbpf headers will get "rerouted" to bpf_prog_load_deprecated() through a macro helper that decides whether to use the new or the old bpf_prog_load() based on the number of input arguments (see ___libbpf_overload in libbpf_common.h). To test that existing bpf_prog_load()-using code compiles and works as expected, I compiled and ran the selftests as is. I had to locally remove the selftests/bpf Makefile's -Dbpf_prog_load=bpf_prog_test_load hack because it conflicted with the macro-based overload approach. I don't expect anyone else to do something like this in practice, though; it is a testing-specific way to replace bpf_prog_load() calls with a special testing variant that adds an extra prog_flags value. After testing, I kept this selftests hack but made sure it now uses the new bpf_prog_load_deprecated name. This patch also marks the high-level bpf_prog_load() and bpf_prog_load_xattr() as deprecated; the bpf_object interface has to be used for working with struct bpf_program, as libbpf doesn't support loading just a bpf_program. The silver lining is that when we get to libbpf 1.0, all these complications will be gone and we'll have one clean bpf_prog_load() low-level API with no backwards compatibility hackery surrounding it. [0] Closes: https://github.com/libbpf/libbpf/issues/284 Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211103220845.2676888-4-andrii@kernel.org
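After the rename, the selftests hack mentioned above boils down to a single compiler flag; a sketch, not the exact hunk:

  # Reroute the now-deprecated low-level loader to the testing variant,
  # which injects extra prog_flags for the selftests.
  CFLAGS += -Dbpf_prog_load_deprecated=bpf_prog_test_load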
2021-11-01selftests, bpf: Fix broken riscv buildBjörn Töpel1-1/+1
This patch is closely related to commit 6016df8fe874 ("selftests/bpf: Fix broken riscv build"). When clang includes the system include directories while targeting BPF programs, __BITS_PER_LONG defaults to 32 unless explicitly set. Work around this problem by explicitly setting __BITS_PER_LONG to __riscv_xlen. Signed-off-by: Björn Töpel <bjorn@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20211028161057.520552-5-bjorn@kernel.org
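The shape of the workaround, assuming the selftests feed BPF-targeted clang invocations through a BPF_CFLAGS-style variable; the arch guard and variable name are illustrative, not the exact hunk:

  # On riscv, pin __BITS_PER_LONG to the native register width instead
  # of letting it default to 32 when compiling BPF programs.
  ifeq ($(SRCARCH),riscv)
  BPF_CFLAGS += -D__BITS_PER_LONG=__riscv_xlen
  endif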
2021-10-28selftests/bpf: Add weak/typeless ksym test for light skeletonKumar Kartikeya Dwivedi1-2/+2
Also, avoid using CO-RE features, as lskel doesn't support CO-RE yet. Include both the light and the libbpf skeleton in the same file to test them together. In c48e51c8b07a ("bpf: selftests: Add selftests for module kfunc support"), I added support for generating both the lskel and the libbpf skeleton for a BPF object; however, the name parameter passed to bpftool caused collisions when both were included in the same file. This meant that every test needed a separate file for the libbpf/light skeleton split instead of using subtests. Change that by appending a "_lskel" suffix to the name for files using the light skeleton, and convert all existing users. Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211028063501.2239335-7-memxor@gmail.com
2021-10-28bpf/benchs: Add benchmark tests for bloom filter throughput + false positiveJoanne Koong1-2/+4
This patch adds benchmark tests for the throughput (for lookups + updates) and the false positive rate of bloom filter lookups, as well as some minor refactoring of the bash script for running the benchmarks. These benchmarks show that as the number of hash functions increases, both the throughput and the false positive rate of the bloom filter decrease. From the benchmark data, the approximate average false positive rates are roughly as follows:

  1 hash function  = ~30%      6 hash functions  = ~0.5%
  2 hash functions = ~15%      7 hash functions  = ~0.35%
  3 hash functions = ~5%       8 hash functions  = ~0.15%
  4 hash functions = ~2.5%     9 hash functions  = ~0.1%
  5 hash functions = ~1%       10 hash functions = ~0%

For reference, the benchmarks below were run on one thread on a machine with one NUMA node, for 1 to 5 hash functions, with 8-byte and 64-byte values. Lookup and update throughput is in M/s of operations; "FP rate" is the false positive rate.

1 hash function:
  Entries  Lookups(8B)  Updates(8B)  FP rate(8B)  Lookups(64B)  Updates(64B)  FP rate(64B)
  50k      51.1         33.6         24.15%       15.7          15.1          24.2%
  100k     51.0         33.4         24.04%       15.6          14.6          24.06%
  500k     50.5         33.1         27.45%       15.6          14.2          27.42%
  1 mil    49.7         32.9         27.45%       15.4          13.7          27.58%
  2.5 mil  47.2         31.8         30.94%       15.3          13.2          30.95%
  5 mil    41.1         28.1         31.01%       13.3          11.4          30.98%

2 hash functions:
  Entries  Lookups(8B)  Updates(8B)  FP rate(8B)  Lookups(64B)  Updates(64B)  FP rate(64B)
  50k      34.1         20.1         9.13%        8.4           7.9           9.21%
  100k     33.7         18.9         9.13%        8.4           7.7           9.19%
  500k     32.7         18.1         12.61%       8.4           7.5           12.61%
  1 mil    30.6         18.9         12.54%       8.0           7.0           12.52%
  2.5 mil  25.3         16.7         16.77%       7.9           6.5           16.88%
  5 mil    20.8         14.7         16.78%       7.0           6.0           16.78%

3 hash functions:
  Entries  Lookups(8B)  Updates(8B)  FP rate(8B)  Lookups(64B)  Updates(64B)  FP rate(64B)
  50k      25.1         14.6         7.65%        5.8           5.5           7.58%
  100k     24.7         14.1         7.71%        5.8           5.3           7.62%
  500k     22.9         13.9         2.62%        5.6           4.8           2.7%
  1 mil    19.8         12.6         2.60%        5.3           4.4           2.69%
  2.5 mil  16.2         10.7         4.49%        4.9           4.1           4.41%
  5 mil    18.8         9.2          4.45%        5.2           3.9           4.54%

4 hash functions:
  Entries  Lookups(8B)  Updates(8B)  FP rate(8B)  Lookups(64B)  Updates(64B)  FP rate(64B)
  50k      19.7         11.1         1.01%        4.4           4.0           1.00%
  100k     19.5         10.9         1.00%        4.3           3.9           0.97%
  500k     18.2         10.6         2.05%        4.3           3.7           2.05%
  1 mil    15.5         9.6          1.99%        4.0           3.4           1.99%
  2.5 mil  13.8         7.7          3.91%        3.7           3.6           3.78%
  5 mil    13.0         6.9          3.93%        3.5           3.7           3.39%

5 hash functions:
  Entries  Lookups(8B)  Updates(8B)  FP rate(8B)  Lookups(64B)  Updates(64B)  FP rate(64B)
  50k      16.4         9.1          0.78%        3.5           3.2           0.77%
  100k     16.3         9.0          0.79%        3.5           3.2           0.78%
  500k     15.1         8.8          1.82%        3.4           3.0           1.78%
  1 mil    13.2         7.8          1.81%        3.2           2.8           1.80%
  2.5 mil  10.5         5.9          0.29%        3.2           2.4           0.28%
  5 mil    9.6          5.7          0.30%        3.2           2.7           0.30%

Signed-off-by: Joanne Koong <joannekoong@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211027234504.30744-5-joannekoong@fb.com
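For background (a textbook note, not part of the commit): a Bloom filter with an m-bit bitmap, n entries, and k independent hash functions has an approximate false positive rate of

  p ≈ (1 - e^{-kn/m})^k

which is minimized at k = (m/n) ln 2. Past that optimum, additional hash functions increase the false positive rate again, while throughput always drops because every lookup and update computes k hashes.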
2021-10-25selftests/bpf: Support multiple tests per fileAndrii Nakryiko1-4/+3
Revamp how test discovery works for test_progs and allow multiple test entries per file. Any global void function with no arguments and a serial_test_ or test_ prefix is considered a test. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211022223228.99920-3-andrii@kernel.org
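Such discovery can be implemented with a sed scan over the test sources; a sketch under the assumption that a generated header of DEFINE_TEST() entries drives the runner (tests.h and DEFINE_TEST are illustrative names, not the exact hunk):

  # Collect every global test_*/serial_test_* definition into a
  # generated header consumed by test_progs.
  tests.h: prog_tests/*.c
  	( echo '/* Generated header, do not edit */'; \
  	  sed -n -E 's/^void (serial_)?test_([a-zA-Z0-9_]+)\((void)?\).*/DEFINE_TEST(\2)/p' \
  	      prog_tests/*.c | sort; \
  	) > $@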
2021-10-08bpftool: Add install-bin target to install binary onlyQuentin Monnet1-1/+1
With "make install", bpftool installs its binary and its bash completion file. Usually, this is what we want. But a few components in the kernel repository (namely, BPF iterators and selftests) also install bpftool locally before using it. In such a case, bash completion is not necessary and is just a useless build artifact. Let's add an "install-bin" target to bpftool, to offer a way to install the binary only. Signed-off-by: Quentin Monnet <quentin@isovalent.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211007194438.34443-13-quentin@isovalent.com
2021-10-08tools/runqslower: Install libbpf headers when buildingQuentin Monnet1-6/+9
API headers from libbpf should not be accessed directly from the library's source directory. Instead, they should be exported with "make install_headers". Let's make sure that runqslower installs the headers properly when building. We use a libbpf_hdrs target to mark the logical dependency on libbpf's headers export for a number of object files, even though the headers should already have been exported by that point (since bpftool needs them, and it is required to generate the skeleton and vmlinux.h). When descending from a parent Makefile, the specific output directories for building the library and exporting the headers are configurable with BPFOBJ_OUTPUT and BPF_DESTDIR, respectively. This is in addition to OUTPUT, on top of which those variables are constructed by default. Also adjust the Makefile for the BPF selftests. We pass a number of variables to the "make" invocation, because we want to point runqslower to the (target) libbpf shared with other tools instead of building its own version. In addition, runqslower relies on the (target) bpftool, and we also want to pass the proper variables to its Makefile so that bpftool itself reuses the same libbpf. Signed-off-by: Quentin Monnet <quentin@isovalent.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211007194438.34443-6-quentin@isovalent.com
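A hedged example of the override knobs from a parent Makefile; BPFOBJ_OUTPUT and BPF_DESTDIR come from the commit, while the paths are illustrative:

  # Build runqslower against a shared, out-of-tree libbpf build.
  make -C tools/bpf/runqslower OUTPUT=/tmp/runqslower/ \
       BPFOBJ_OUTPUT=/tmp/runqslower/libbpf/ \
       BPF_DESTDIR=/tmp/runqslower/libbpf/destdir/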
2021-10-08tools/resolve_btfids: Install libbpf headers when buildingQuentin Monnet1-2/+5
API headers from libbpf should not be accessed directly from the library's source directory. Instead, they should be exported with "make install_headers". Let's make sure that resolve_btfids installs the headers properly when building. When descending from a parent Makefile, the specific output directories for building the library and exporting the headers are configurable with LIBBPF_OUT and LIBBPF_DESTDIR, respectively. This is in addition to OUTPUT, on top of which those variables are constructed by default. Also adjust the Makefile for the BPF selftests in order to point to the (target) libbpf shared with other tools, instead of building a version specific to resolve_btfids. Remove libbpf's order-only dependencies on the include directories (they are created by libbpf and don't need to exist beforehand). Signed-off-by: Quentin Monnet <quentin@isovalent.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211007194438.34443-5-quentin@isovalent.com
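The same pattern applies here, with this tool's variable names; LIBBPF_OUT and LIBBPF_DESTDIR are from the commit, the paths are illustrative:

  # Reuse a libbpf build and header export shared with other tools.
  make -C tools/bpf/resolve_btfids LIBBPF_OUT=/tmp/libbpf/ \
       LIBBPF_DESTDIR=/tmp/libbpf/destdir/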
2021-10-08bpftool: Install libbpf headers instead of including the dirQuentin Monnet1-0/+2
Bpftool relies on libbpf; therefore it relies on a number of headers from the library and must be linked against it. The Makefile for bpftool exposes these objects by adding tools/lib as an include directory ("-I$(srctree)/tools/lib"). This is a working solution, but not the cleanest one: the risk is involuntarily including objects that libbpf does not intend to expose. The headers needed to compile bpftool should in fact be "installed" from libbpf, with its "install_headers" Makefile target. In addition, there is one header which is internal to the library and not supposed to be used by external applications, but which bpftool uses anyway. Adjust the Makefile in order to install the header files properly before compiling bpftool. Also copy the additional internal header file (nlattr.h), but call it out explicitly. Build (and install headers) in a subdirectory under bpftool/ instead of tools/lib/bpf/. When descending from a parent Makefile, this is configurable by setting the OUTPUT, LIBBPF_OUTPUT and LIBBPF_DESTDIR variables. Also adjust the Makefile for the BPF selftests, so as to reuse the (host) libbpf compiled earlier and to avoid compiling a separate version of the library just for bpftool. Signed-off-by: Quentin Monnet <quentin@isovalent.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211007194438.34443-4-quentin@isovalent.com
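The underlying export step is libbpf's own install_headers target; a hedged invocation, where the destination path is illustrative and an empty prefix is assumed so headers land directly under DESTDIR:

  # Export libbpf's public headers into a staging directory.
  make -C tools/lib/bpf prefix= DESTDIR=/tmp/bpftool/libbpf/destdir install_headers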