Commit log for tools/lib/bpf/libbpf_probes.c
2024-03-28  bpf: improve error message for unsupported helper  (Mykyta Yatsenko; 1 file, -2/+4)
The BPF verifier emits an "unknown func" message when a given BPF program type does not support a BPF helper. This message may confuse users, since it omits the important context that the helper is unknown only to the current program type. This patch changes the message to "program of this type cannot use helper <helper name>" and aligns the dependent code in libbpf and tests. Any suggestions on improving/changing this message are welcome. Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com> Acked-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Quentin Monnet <qmo@kernel.org> Link: https://lore.kernel.org/r/20240325152210.377548-1-yatsenko@meta.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-03-11  libbpf: Add support for bpf_arena.  (Alexei Starovoitov; 1 file, -0/+7)
mmap() the bpf_arena map right after creation, since the kernel needs to remember the address returned from mmap(); this is user_vm_start. LLVM will generate bpf_arena_cast_user() instructions where necessary, and the JIT will add the upper 32 bits of user_vm_start to such pointers. Fix up bpf_map_mmap_sz() to compute the mmap size as map->value_size * map->max_entries for arrays and PAGE_SIZE * map->max_entries for arena maps. Don't set BTF at arena creation time, since the kernel doesn't support BTF for arena maps yet. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20240308010812.89848-9-alexei.starovoitov@gmail.com
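As a rough illustration of the sizing rule above, a minimal sketch (the page-size round-up is an assumption; this is not the exact libbpf helper):

  #include <stdbool.h>
  #include <unistd.h>
  #include <linux/types.h>

  /* Sketch of the mmap sizing rule described above; illustrative only. */
  static size_t map_mmap_sz(__u32 value_size, __u32 max_entries, bool is_arena)
  {
      size_t page_sz = sysconf(_SC_PAGE_SIZE);
      size_t sz;

      if (is_arena)
          sz = page_sz * (size_t)max_entries;     /* arena: one page per entry */
      else
          sz = (size_t)value_size * max_entries;  /* mmapable array */

      return (sz + page_sz - 1) / page_sz * page_sz;  /* round up to whole pages (assumption) */
  }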
2024-01-24  libbpf: Wire up token_fd into feature probing logic  (Andrii Nakryiko; 1 file, -3/+8)
Adjust feature-probing callbacks to take into account an optional token_fd. In unprivileged contexts, some feature detectors would fail to detect kernel support just because a BPF program, BPF map, or BTF object can't be loaded due to the privileged nature of those operations. So when a BPF object is loaded with a BPF token, this token should be used for feature probing. This patch sets up support for this scenario, but we don't yet pass a non-zero token FD; that will be added in the next patch. We also switched the BPF cookie detector from using a kprobe program to a tracepoint one, as tracepoint is a somewhat less dangerous BPF program type and has a higher likelihood of being allowed through a BPF token in the future. This change has no effect on detection behavior. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: John Fastabend <john.fastabend@gmail.com> Link: https://lore.kernel.org/bpf/20240124022127.2379740-25-andrii@kernel.org
2024-01-23  libbpf: Find correct module BTFs for struct_ops maps and progs.  (Kui-Feng Lee; 1 file, -0/+1)
Locate the module BTFs for struct_ops maps and progs and pass them to the kernel. This ensures that the kernel correctly resolves type IDs from the appropriate module BTFs. For the map of a struct_ops object, the FD of the module BTF is set to bpf_map to keep a reference to the module BTF. The FD is passed to the kernel as value_type_btf_obj_fd when the struct_ops object is loaded. For a bpf_struct_ops prog, attach_btf_obj_fd of bpf_prog is the FD of a module BTF in the kernel. Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20240119225005.668602-13-thinker.li@gmail.com Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2023-06-08  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Jakub Kicinski; 1 file, -0/+2)
Cross-merge networking fixes after downstream PR. Conflicts:

  net/sched/sch_taprio.c
    d636fc5dd692 ("net: sched: add rcu annotations around qdisc->qdisc_sleeping")
    dced11ef84fb ("net/sched: taprio: don't overwrite "sch" variable in taprio_dump_class_stats()")
  net/ipv4/sysctl_net_ipv4.c
    e209fee4118f ("net/ipv4: ping_group_range: allow GID from 2147483648 to 4294967294")
    ccce324dabfe ("tcp: make the first N SYN RTO backoffs linear")
  https://lore.kernel.org/all/20230605100816.08d41a7b@canb.auug.org.au/

No adjacent changes. Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-06-05  bpf: netfilter: Add BPF_NETFILTER bpf_attach_type  (Florian Westphal; 1 file, -0/+2)
Andrii Nakryiko writes:

  And we currently don't have an attach type for NETLINK BPF link. Thankfully it's not too late to add it. I see that link_create() in kernel/bpf/syscall.c just bypasses attach_type check. We shouldn't have done that. Instead we need to add BPF_NETLINK attach type to enum bpf_attach_type. And wire all that properly throughout the kernel and libbpf itself.

This adds BPF_NETFILTER and uses it. This breaks uabi, but this wasn't in any non-rc release yet, so it should be fine.

v2: check link_attach prog type in link_create too

Fixes: 84601d6ee68a ("bpf: add bpf_link support for BPF_NETFILTER programs") Suggested-by: Andrii Nakryiko <andrii.nakryiko@gmail.com> Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/CAEf4BzZ69YgrQW7DHCJUT_X+GqMq_ZQQPBwopaJJVGFD5=d5Vg@mail.gmail.com/ Link: https://lore.kernel.org/bpf/20230605131445.32016-1-fw@strlen.de
2023-05-26  libbpf: Ensure libbpf always opens files with O_CLOEXEC  (Andrii Nakryiko; 1 file, -1/+1)
Make sure that libbpf code always gets an FD with the O_CLOEXEC flag set, regardless of whether the file is opened through open() or fopen(). For the latter this means adding "e" to the mode string, which has been supported since the fairly ancient glibc v2.7. Also drop the outdated TODO comment in usdt.c, which was already completed. Suggested-by: Lennart Poettering <lennart@poettering.net> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20230525221311.2136408-1-andrii@kernel.org
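For illustration, the two open paths look roughly like this (a sketch; the file paths are arbitrary examples, not libbpf's actual call sites):

  #include <fcntl.h>
  #include <stdio.h>

  void open_examples(void)
  {
      /* open(): pass O_CLOEXEC explicitly */
      int fd = open("/sys/kernel/btf/vmlinux", O_RDONLY | O_CLOEXEC);

      /* fopen(): the "e" mode flag sets O_CLOEXEC (glibc >= 2.7) */
      FILE *f = fopen("/proc/version", "re");

      /* ... use fd and f, then close()/fclose() them ... */
  }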
2023-04-21  tools: bpftool: print netfilter link info  (Florian Westphal; 1 file, -0/+1)
Dump protocol family, hook and priority value:

  $ bpftool link
  2: netfilter prog 14 ip input prio -128 pids install(3264)
  5: netfilter prog 14 ip6 forward prio 21 pids a.out(3387)
  9: netfilter prog 14 ip prerouting prio 123 pids a.out(5700)
  10: netfilter prog 14 ip input prio 21 pids test2(5701)

v2: Quentin Monnet suggested to also add 'bpftool net' support:

  $ bpftool net
  xdp:
  tc:
  flow_dissector:
  netfilter:
    ip prerouting prio 21 prog_id 14
    ip input prio -128 prog_id 14
    ip input prio 21 prog_id 14
    ip forward prio 21 prog_id 14
    ip output prio 21 prog_id 14
    ip postrouting prio 21 prog_id 14

'bpftool net' only dumps netfilter link type, links are sorted by protocol family, hook and priority.

v5: fix bpf ci failure: libbpf needs small update to prog_type_name[] and probe_prog_load helper.
v4: don't fail with -EOPNOTSUPP in libbpf probe_prog_load, update prog_type_name[] with "netfilter" entry (bpf ci)
v3: fix bpf.h copy, 'reserved' member was removed (Alexei); use p_err, not fprintf (Quentin)

Suggested-by: Quentin Monnet <quentin@isovalent.com> Link: https://lore.kernel.org/bpf/eeeaac99-9053-90c2-aa33-cc1ecb1ae9ca@isovalent.com/ Reviewed-by: Quentin Monnet <quentin@isovalent.com> Signed-off-by: Florian Westphal <fw@strlen.de> Link: https://lore.kernel.org/r/20230421170300.24115-6-fw@strlen.de Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2023-02-06  libbpf: Correctly set the kernel code version in Debian kernel.  (Hao Xiang; 1 file, -0/+83)
In a previous commit, the Ubuntu kernel code version is correctly set by retrieving the information from /proc/version_signature: commit 5b3d72987701d51bf31823b39db49d10970f5c2d ("libbpf: Improve LINUX_VERSION_CODE detection"). The /proc/version_signature file isn't present in at least the older versions of Debian distributions (e.g., Debian 9, 10). The Debian kernel has a similar issue where the release information from the uname() syscall doesn't give the kernel code version that the kernel actually expects. Below is example content from Debian 10:

  release: 4.19.0-23-amd64
  version: #1 SMP Debian 4.19.269-1 (2022-12-20) x86_64

Debian reports an incorrect kernel version in utsname::release returned by the uname() syscall, which on older kernels (Debian 9, 10) leads to kprobe BPF programs failing to load due to the version check mismatch. Fortunately, the correct kernel code version is present in utsname::version returned by uname() in Debian kernels. This change adds another kernel-version getter to handle Debian, in addition to the previously added one that handles Ubuntu. Some minor refactoring is also done to make the code more readable. Signed-off-by: Hao Xiang <hao.xiang@bytedance.com> Signed-off-by: Ho-Ren (Jack) Chuang <horenchuang@bytedance.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20230203234842.2933903-1-hao.xiang@bytedance.com
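A minimal sketch of the Debian case described above (illustrative, not the exact libbpf helper): pull "x.y.z" out of utsname::version, e.g. "#1 SMP Debian 4.19.269-1 (2022-12-20)".

  #include <stdio.h>
  #include <string.h>
  #include <sys/utsname.h>

  #define KERNEL_VERSION(a, b, c) (((a) << 16) + ((b) << 8) + ((c) > 255 ? 255 : (c)))

  static unsigned int debian_kernel_version(const struct utsname *info)
  {
      unsigned int major, minor, patch;
      const char *p = strstr(info->version, "Debian ");

      if (!p || sscanf(p, "Debian %u.%u.%u", &major, &minor, &patch) != 3)
          return 0;  /* not a Debian-style version string */
      return KERNEL_VERSION(major, minor, patch);
  }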
2022-11-29  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Jakub Kicinski; 1 file, -1/+1)
Conflicts:

  tools/lib/bpf/ringbuf.c
    927cbb478adf ("libbpf: Handle size overflow for ringbuf mmap")
    b486d19a0ab0 ("libbpf: checkpatch: Fixed code alignments in ringbuf.c")
  https://lore.kernel.org/all/20221121122707.44d1446a@canb.auug.org.au/

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-11-17  libbpf: Use page size as max_entries when probing ring buffer map  (Hou Tao; 1 file, -1/+1)
Use the page size as max_entries when probing the ring buffer map, otherwise the probe may fail on a host with a 64KB page size (e.g., an ARM64 host). After the fix, the output of "bpftool feature" on such a host will be correct.

Before:
  eBPF map_type ringbuf is NOT available
  eBPF map_type user_ringbuf is NOT available

After:
  eBPF map_type ringbuf is available
  eBPF map_type user_ringbuf is available

Signed-off-by: Hou Tao <houtao1@huawei.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20221116072351.1168938-2-houtao@huaweicloud.com
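A probe in the spirit of the fix above might look like this (a sketch; a ring buffer's max_entries must be a power-of-two multiple of the page size, so use the host's page size instead of a hard-coded 4096):

  #include <unistd.h>
  #include <bpf/bpf.h>

  int probe_ringbuf(void)
  {
      int fd = bpf_map_create(BPF_MAP_TYPE_RINGBUF, NULL, 0, 0,
                              sysconf(_SC_PAGE_SIZE), NULL);

      if (fd >= 0) {
          close(fd);
          return 1;  /* available */
      }
      return 0;      /* not available */
  }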
2022-10-25  libbpf: Support new cgroup local storage  (Yonghong Song; 1 file, -0/+1)
Add support for new cgroup local storage. Acked-by: David Vernet <void@manifault.com> Acked-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/r/20221026042856.673989-1-yhs@fb.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2022-09-21  bpf: Add libbpf logic for user-space ring buffer  (David Vernet; 1 file, -0/+1)
Now that all of the logic is in place in the kernel to support user-space produced ring buffers, we can add the user-space logic to libbpf. This patch therefore adds the following public symbols to libbpf:

  struct user_ring_buffer *user_ring_buffer__new(int map_fd, const struct user_ring_buffer_opts *opts);
  void *user_ring_buffer__reserve(struct user_ring_buffer *rb, __u32 size);
  void *user_ring_buffer__reserve_blocking(struct user_ring_buffer *rb, __u32 size, int timeout_ms);
  void user_ring_buffer__submit(struct user_ring_buffer *rb, void *sample);
  void user_ring_buffer__discard(struct user_ring_buffer *rb, void *sample);
  void user_ring_buffer__free(struct user_ring_buffer *rb);

A user-space producer must first create a struct user_ring_buffer * object with user_ring_buffer__new(), and can then reserve samples in the ring buffer using one of the following two symbols:

  void *user_ring_buffer__reserve(struct user_ring_buffer *rb, __u32 size);
  void *user_ring_buffer__reserve_blocking(struct user_ring_buffer *rb, __u32 size, int timeout_ms);

With user_ring_buffer__reserve(), a pointer to a 'size' region of the ring buffer will be returned if sufficient space is available in the buffer. user_ring_buffer__reserve_blocking() provides similar semantics, but will block for up to 'timeout_ms' in epoll_wait if there is insufficient space in the buffer. This function has the guarantee from the kernel that it will receive at least one event notification per invocation of bpf_ringbuf_drain(), provided that at least one sample is drained and the BPF program did not pass the BPF_RB_NO_WAKEUP flag to bpf_ringbuf_drain(). Once a sample is reserved, it must either be committed to the ring buffer with user_ring_buffer__submit(), or discarded with user_ring_buffer__discard(). Signed-off-by: David Vernet <void@manifault.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220920000100.477320-4-void@manifault.com
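A minimal producer flow using the API above (a sketch; struct my_msg is a hypothetical sample layout, and rb_map_fd is assumed to be the FD of a BPF_MAP_TYPE_USER_RINGBUF map obtained elsewhere):

  #include <string.h>
  #include <bpf/libbpf.h>

  struct my_msg { int pid; char comm[16]; };  /* hypothetical sample layout */

  static int produce_one(int rb_map_fd, const struct my_msg *msg)
  {
      struct user_ring_buffer *rb = user_ring_buffer__new(rb_map_fd, NULL);
      void *sample;

      if (!rb)
          return -1;

      /* block for up to 1s if the buffer is currently full */
      sample = user_ring_buffer__reserve_blocking(rb, sizeof(*msg), 1000);
      if (sample) {
          memcpy(sample, msg, sizeof(*msg));
          user_ring_buffer__submit(rb, sample);  /* or user_ring_buffer__discard() to drop it */
      }
      user_ring_buffer__free(rb);
      return sample ? 0 : -1;
  }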
2022-08-04  libbpf: Initialize err in probe_map_create  (Florian Fainelli; 1 file, -1/+1)
GCC 11 warns about the possibly uninitialized err variable in probe_map_create:

  libbpf_probes.c: In function 'probe_map_create':
  libbpf_probes.c:361:38: error: 'err' may be used uninitialized in this function [-Werror=maybe-uninitialized]
    361 |         return fd < 0 && err == exp_err ? 1 : 0;
        |                          ~~~~^~~~~~~~~~

Fixes: 878d8def0603 ("libbpf: Rework feature-probing APIs") Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Jiri Olsa <jolsa@kernel.org> Link: https://lore.kernel.org/bpf/20220801025109.1206633-1-f.fainelli@gmail.com
2022-06-28  libbpf: remove deprecated probing APIs  (Andrii Nakryiko; 1 file, -120/+5)
Get rid of deprecated feature-probing APIs. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20220627211527.2245459-5-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2021-12-28  libbpf: Improve LINUX_VERSION_CODE detection  (Andrii Nakryiko; 1 file, -16/+0)
Ubuntu reports an incorrect kernel version through uname(), which on older kernels leads to kprobe BPF programs failing to load due to the version check mismatch. Accommodate Ubuntu's quirks with LINUX_VERSION_CODE by using the Ubuntu-specific /proc/version_signature file to fetch major/minor/patch versions and form LINUX_VERSION_CODE. While at it, consolidate libbpf's kernel version detection code between libbpf.c and libbpf_probes.c. [0] Closes: https://github.com/libbpf/libbpf/issues/421 Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20211222231003.2334940-1-andrii@kernel.org
2021-12-17  libbpf: Rework feature-probing APIs  (Andrii Nakryiko; 1 file, -50/+185)
Create three extensible alternatives to inconsistently named feature-probing APIs:

  - libbpf_probe_bpf_prog_type() instead of bpf_probe_prog_type();
  - libbpf_probe_bpf_map_type() instead of bpf_probe_map_type();
  - libbpf_probe_bpf_helper() instead of bpf_probe_helper().

Set up return values such that libbpf can report errors (e.g., if some combination of input arguments isn't possible to validate, etc.), in addition to whether the feature is supported (return value 1) or not supported (return value 0). Also schedule deprecation of those three old APIs, as well as of bpf_probe_large_insn_limit(). Also fix all the existing detection logic for various program and map types that never worked:

  - BPF_PROG_TYPE_LIRC_MODE2;
  - BPF_PROG_TYPE_TRACING;
  - BPF_PROG_TYPE_LSM;
  - BPF_PROG_TYPE_EXT;
  - BPF_PROG_TYPE_SYSCALL;
  - BPF_PROG_TYPE_STRUCT_OPS;
  - BPF_MAP_TYPE_STRUCT_OPS;
  - BPF_MAP_TYPE_BLOOM_FILTER.

The above prog/map types needed special setups and detection logic to work. A subsequent patch adds selftests that will make sure that all the detection logic keeps working for all current and future program and map types, avoiding otherwise inevitable bit rot. [0] Closes: https://github.com/libbpf/libbpf/issues/312 Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Dave Marchevsky <davemarchevsky@fb.com> Cc: Julia Kartseva <hex@fb.com> Link: https://lore.kernel.org/bpf/20211217171202.3352835-2-andrii@kernel.org
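A short usage sketch of the three new probing APIs (each returns 1 if supported, 0 if not, or a negative error; BPF_FUNC_probe_read_kernel is just an example helper):

  #include <stdio.h>
  #include <bpf/libbpf.h>

  int main(void)
  {
      int ret;

      ret = libbpf_probe_bpf_prog_type(BPF_PROG_TYPE_KPROBE, NULL);
      printf("kprobe progs:             %s\n", ret == 1 ? "yes" : "no");

      ret = libbpf_probe_bpf_map_type(BPF_MAP_TYPE_RINGBUF, NULL);
      printf("ringbuf maps:             %s\n", ret == 1 ? "yes" : "no");

      ret = libbpf_probe_bpf_helper(BPF_PROG_TYPE_KPROBE,
                                    BPF_FUNC_probe_read_kernel, NULL);
      printf("probe_read_kernel helper: %s\n", ret == 1 ? "yes" : "no");
      return 0;
  }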
2021-12-10  libbpf: Add OPTS-based bpf_btf_load() API  (Andrii Nakryiko; 1 file, -1/+1)
Similar to the previous bpf_prog_load() and bpf_map_create() APIs, add a bpf_btf_load() API which takes an optional OPTS struct. Schedule bpf_load_btf() for deprecation in v0.8 ([0]). This makes naming consistent with the BPF_BTF_LOAD command, sets up an API for future extensibility, moves options parameters (log-related fields) into optional options, and also allows passing log_level directly. It also removes the log buffer auto-allocation logic from the low-level API (consistent with bpf_prog_load() behavior), but preserves the special treatment of log_level == 0 with a non-NULL log_buf, which matches the low-level bpf_prog_load() and high-level libbpf APIs for BTF and program loading behaviors. [0] Closes: https://github.com/libbpf/libbpf/issues/419 Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211209193840.1248570-3-andrii@kernel.org
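A sketch of the OPTS-based call with a log buffer (raw_btf and raw_btf_sz are assumed to hold valid raw BTF bytes produced elsewhere):

  #include <stdio.h>
  #include <string.h>
  #include <bpf/bpf.h>

  static int load_btf_with_log(const void *raw_btf, size_t raw_btf_sz)
  {
      static char log_buf[64 * 1024];
      LIBBPF_OPTS(bpf_btf_load_opts, opts,
          .log_buf = log_buf,
          .log_size = sizeof(log_buf),
          .log_level = 1,            /* ask the kernel for verification logs */
      );
      int btf_fd = bpf_btf_load(raw_btf, raw_btf_sz, &opts);

      if (btf_fd < 0)
          fprintf(stderr, "BTF load failed: %s\n%s\n", strerror(-btf_fd), log_buf);
      return btf_fd;
  }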
2021-11-25  libbpf: Use bpf_map_create() consistently internally  (Andrii Nakryiko; 1 file, -15/+15)
Remove all the remaining uses of to-be-deprecated bpf_create_map*() APIs. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20211124193233.3115996-3-andrii@kernel.org
2021-11-07  libbpf: Remove internal use of deprecated bpf_prog_load() variants  (Andrii Nakryiko; 1 file, -11/+9)
Remove all the internal uses of bpf_load_program_xattr(), which is slated for deprecation in v0.7. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211103220845.2676888-5-andrii@kernel.org
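For reference, loading a trivial "r0 = 0; exit" program with the non-deprecated bpf_prog_load() looks roughly like this (a sketch in the spirit of what a feature probe does, not libbpf's exact internal code):

  #include <linux/bpf.h>
  #include <bpf/bpf.h>

  static int load_trivial_prog(void)
  {
      const struct bpf_insn insns[] = {
          { .code = BPF_ALU64 | BPF_MOV | BPF_K, .dst_reg = BPF_REG_0, .imm = 0 }, /* r0 = 0 */
          { .code = BPF_JMP | BPF_EXIT },                                          /* exit   */
      };

      return bpf_prog_load(BPF_PROG_TYPE_SOCKET_FILTER, NULL, "GPL",
                           insns, sizeof(insns) / sizeof(insns[0]), NULL);
  }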
2021-10-28  libbpf: Use O_CLOEXEC uniformly when opening fds  (Kumar Kartikeya Dwivedi; 1 file, -1/+1)
There are some instances where we don't use O_CLOEXEC when opening an fd, fix these up. Otherwise, it is possible that a parallel fork causes these fds to leak into a child process on execve. Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211028063501.2239335-6-memxor@gmail.com
2021-08-07  libbpf: Fix probe for BPF_PROG_TYPE_CGROUP_SOCKOPT  (Robin Gögge; 1 file, -1/+3)
This patch fixes the probe for BPF_PROG_TYPE_CGROUP_SOCKOPT, so the probe reports accurate results when used by e.g. bpftool. Fixes: 4cdbfb59c44a ("libbpf: support sockopt hooks") Signed-off-by: Robin Gögge <r.goegge@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Quentin Monnet <quentin@isovalent.com> Link: https://lore.kernel.org/bpf/20210728225825.2357586-1-r.goegge@gmail.com
2020-11-06  libbpf: Add support for task local storage  (KP Singh; 1 file, -0/+1)
Updates the bpf_probe_map_type API to also support BPF_MAP_TYPE_TASK_STORAGE similar to other local storage maps. Signed-off-by: KP Singh <kpsingh@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/20201106103747.2780972-4-kpsingh@chromium.org
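On the BPF side, such a map is declared with the BTF-defined map syntax; a minimal sketch (struct task_val is a hypothetical value type, compiled as BPF C with clang -target bpf):

  #include <linux/bpf.h>
  #include <linux/types.h>
  #include <bpf/bpf_helpers.h>

  struct task_val {
      __u64 start_ns;                       /* hypothetical per-task value */
  };

  struct {
      __uint(type, BPF_MAP_TYPE_TASK_STORAGE);
      __uint(map_flags, BPF_F_NO_PREALLOC); /* required for local storage maps */
      __type(key, int);
      __type(value, struct task_val);
  } task_store SEC(".maps");

  char LICENSE[] SEC("license") = "GPL";

A program would then call bpf_task_storage_get(&task_store, task, NULL, BPF_LOCAL_STORAGE_GET_F_CREATE) to look up or create the per-task entry.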
2020-08-25  bpf: Implement bpf_local_storage for inodes  (KP Singh; 1 file, -2/+3)
Similar to bpf_local_storage for sockets, add local storage for inodes. The life-cycle of the storage is managed with the life-cycle of the inode, i.e. the storage is destroyed along with the owning inode. The BPF LSM allocates an __rcu pointer to the bpf_local_storage in the security blob; security blobs are now stackable and can co-exist with other LSMs. Signed-off-by: KP Singh <kpsingh@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200825182919.1118197-6-kpsingh@chromium.org
2020-08-18  libbpf: Centralize poisoning and poison reallocarray()  (Andrii Nakryiko; 1 file, -3/+0)
Most of libbpf source files already include libbpf_internal.h, so it's a good place to centralize identifier poisoning. So move kernel integer type poisoning there. And also add reallocarray to a poison list to prevent accidental use of it. libbpf_reallocarray() should be used universally instead. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200819013607.3607269-4-andriin@fb.com
2020-07-17  libbpf: Add support for SK_LOOKUP program type  (Jakub Sitnicki; 1 file, -0/+3)
Make libbpf aware of the newly added program type, and assign it a section name. Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20200717103536.397595-13-jakub@cloudflare.com
2020-06-01  libbpf: Add BPF ring buffer support  (Andrii Nakryiko; 1 file, -0/+5)
Declaring and instantiating a BPF ring buffer doesn't require any changes to libbpf, as it's just another type of map. So using the existing BTF-defined maps syntax with __uint(type, BPF_MAP_TYPE_RINGBUF) and __uint(max_entries, <size-of-ring-buf>) is all that's necessary to create and use a BPF ring buffer.

This patch adds a BPF ring buffer consumer to libbpf. It is very similar to the perf_buffer implementation in terms of API, but also attempts to fix some minor problems and inconveniences with the existing perf_buffer API. ring_buffer supports both the single ring buffer use case (with just ring_buffer__new()), as well as adding more ring buffers, each with its own callback and context. This allows efficiently polling and consuming multiple, potentially completely independent, ring buffers, using a single epoll instance. The latter is actually a problem in practice for applications that are using multiple sets of perf buffers: they have to create multiple instances of struct perf_buffer and poll them independently or in a loop, each approach having its own problems (e.g., inability to use a common poll timeout). struct ring_buffer eliminates this problem by aggregating many independent ring buffer instances under a single "ring buffer manager".

Second, perf_buffer's callback can't return an error, so applications that need to stop polling due to an error in the data, or data signalling the end, have to use extra mechanisms to signal that polling has to stop. ring_buffer's callback can return an error, which will be passed back to user code and can be acted upon appropriately.

Two APIs allow consuming ring buffer data:
  - ring_buffer__poll(), which will wait for a data availability notification and will consume data only from the reported ring buffer(s); this API allows efficient use of resources by reading data only when it becomes available;
  - ring_buffer__consume(), which will attempt to read new records regardless of the data availability notification sub-system. This API is useful when the lowest latency is required, at the expense of burning CPU resources.

Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20200529075424.3139988-3-andriin@fb.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
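A minimal consumer using the ring_buffer API described above (a sketch; map_fd is assumed to be the FD of a BPF_MAP_TYPE_RINGBUF map):

  #include <stdio.h>
  #include <bpf/libbpf.h>

  static int handle_event(void *ctx, void *data, size_t size)
  {
      printf("got %zu-byte sample\n", size);
      return 0;                        /* a non-zero return stops polling */
  }

  static void consume(int map_fd)
  {
      struct ring_buffer *rb = ring_buffer__new(map_fd, handle_event, NULL, NULL);

      if (!rb)
          return;
      while (ring_buffer__poll(rb, 100 /* timeout, ms */) >= 0)
          ;                            /* handle_event() is invoked per sample */
      ring_buffer__free(rb);
  }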
2020-03-30  bpf: Introduce BPF_PROG_TYPE_LSM  (KP Singh; 1 file, -0/+1)
Introduce types and configs for bpf programs that can be attached to LSM hooks. The programs can be enabled by the config option CONFIG_BPF_LSM. Signed-off-by: KP Singh <kpsingh@google.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Brendan Jackman <jackmanb@google.com> Reviewed-by: Florent Revest <revest@google.com> Reviewed-by: Thomas Garnier <thgarnie@google.com> Acked-by: Yonghong Song <yhs@fb.com> Acked-by: Andrii Nakryiko <andriin@fb.com> Acked-by: James Morris <jamorris@linux.microsoft.com> Link: https://lore.kernel.org/bpf/20200329004356.27286-2-kpsingh@chromium.org
2020-01-22  libbpf: Add support for program extensions  (Alexei Starovoitov; 1 file, -0/+1)
Add minimal support for program extensions. bpf_object_open_opts needs to be set up with attach_prog_fd = target_prog_fd, and the BPF program extension needs to have a section definition in the .c file like SEC("freplace/func_to_be_replaced"). libbpf will search for "func_to_be_replaced" in the target_prog_fd's BTF and will pass it as attach_btf_id to the kernel. This approach works for tests, but more complex use cases may need to request the function name (and the attach_btf_id that the kernel sees) more dynamically. Such an API will be added in future patches. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: John Fastabend <john.fastabend@gmail.com> Acked-by: Andrii Nakryiko <andriin@fb.com> Acked-by: Toke Høiland-Jørgensen <toke@redhat.com> Link: https://lore.kernel.org/bpf/20200121005348.2769920-3-ast@kernel.org
2020-01-10  libbpf: Poison kernel-only integer types  (Andrii Nakryiko; 1 file, -0/+3)
It's been a recurring issue with types like u32 slipping into libbpf source code accidentally. This is not detected during builds inside the kernel source tree, but becomes a compilation error in libbpf's GitHub repo. Libbpf is supposed to use only __{s,u}{8,16,32,64} typedefs, so poison {s,u}{8,16,32,64} explicitly in every .c file. Doing that in a bit more centralized way, e.g., inside libbpf_internal.h, breaks selftests, which are using both kernel u32 and libbpf_internal.h. This patch also fixes a new u32 occurrence in libbpf.c, added recently. Fixes: 590a00888250 ("bpf: libbpf: Add STRUCT_OPS support") Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/20200110181916.271446-1-andriin@fb.com
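The poisoning itself is a short pragma placed near the top of each .c file, roughly:

  /* make sure libbpf doesn't use kernel-only integer typedefs */
  #pragma GCC poison u8 u16 u32 u64 s8 s16 s32 s64

After this, any accidental use of u32 and friends fails the build with an "attempt to use poisoned" error instead of compiling silently in the kernel tree.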
2020-01-09  bpf: libbpf: Add STRUCT_OPS support  (Martin KaFai Lau; 1 file, -0/+2)
This patch adds BPF STRUCT_OPS support to libbpf. The only sec_name convention is SEC(".struct_ops") to identify the struct_ops implemented in BPF, e.g. to implement a tcp_congestion_ops:

  SEC(".struct_ops")
  struct tcp_congestion_ops dctcp = {
      .init = (void *)dctcp_init,  /* <-- a bpf_prog */
      /* ... some more func ptrs ... */
      .name = "bpf_dctcp",
  };

Each struct_ops is defined as a global variable under SEC(".struct_ops") as above. libbpf creates a map for each variable, and the variable name is the map's name. Multiple struct_ops are supported under SEC(".struct_ops").

In the bpf_object__open phase, libbpf will look for the SEC(".struct_ops") section and find out what btf-type the struct_ops is implementing. Note that the btf-type here refers to a type in the bpf_prog.o's btf. A "struct bpf_map" is added by bpf_object__add_map() as other maps do. It will then collect (through SHT_REL) where the bpf progs that the func ptrs refer to are. No btf_vmlinux is needed in the open phase.

In the bpf_object__load phase, the map fields, which depend on the btf_vmlinux, are initialized (in bpf_map__init_kern_struct_ops()). It will also set prog->type, prog->attach_btf_id, and prog->expected_attach_type. Thus, the prog's properties do not rely on its section name.

[ Currently, the bpf_prog's btf-type ==> btf_vmlinux's btf-type matching process is as simple as: member-name match + btf-kind match + size match. If these matching conditions fail, libbpf will reject. The current targeting support is "struct tcp_congestion_ops", most of whose members are function pointers. The member ordering of the bpf_prog's btf-type can be different from the btf_vmlinux's btf-type. ]

Then, all obj->maps are created as usual (in bpf_object__create_maps()). Once the maps are created and the prog's properties are all set, libbpf will proceed to load all the progs. bpf_map__attach_struct_ops() is added to register a struct_ops map with a kernel subsystem.

Signed-off-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200109003514.3856730-1-kafai@fb.com
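From user space, registering the dctcp example above then follows the usual open/load flow plus the new attach helper (a sketch; the object file name is hypothetical and error handling is trimmed):

  #include <bpf/libbpf.h>

  static struct bpf_link *register_dctcp(void)
  {
      struct bpf_object *obj = bpf_object__open_file("bpf_dctcp.bpf.o", NULL);
      struct bpf_map *map;

      if (!obj || bpf_object__load(obj))
          return NULL;

      /* the global variable name doubles as the map name */
      map = bpf_object__find_map_by_name(obj, "dctcp");
      if (!map)
          return NULL;

      return bpf_map__attach_struct_ops(map);  /* registers tcp CC "bpf_dctcp" */
  }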
2020-01-08  libbpf: Add probe for large INSN limit  (Michal Rostecki; 1 file, -0/+21)
Introduce a new probe which checks whether kernel has large maximum program size which was increased in the following commit: c04c0d2b968a ("bpf: increase complexity limit and maximum program size") Based on the similar check in Cilium[0], authored by Daniel Borkmann. [0] https://github.com/cilium/cilium/commit/657d0f585afd26232cfa5d4e70b6f64d2ea91596 Co-authored-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Michal Rostecki <mrostecki@opensuse.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Link: https://lore.kernel.org/bpf/20200108162428.25014-2-mrostecki@opensuse.org
2019-10-31  libbpf: Add support for prog_tracing  (Alexei Starovoitov; 1 file, -0/+1)
Cleanup libbpf from expected_attach_type == attach_btf_id hack and introduce BPF_PROG_TYPE_TRACING. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andriin@fb.com> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/20191030223212.953010-3-ast@kernel.org
2019-07-29  tools/libbpf_probes: Add new devmap_hash type  (Toke Høiland-Jørgensen; 1 file, -0/+1)
This adds the definition for BPF_MAP_TYPE_DEVMAP_HASH to libbpf_probes.c in tools/lib/bpf. Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com> Acked-by: Yonghong Song <yhs@fb.com> Acked-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-06-27  libbpf: support sockopt hooks  (Stanislav Fomichev; 1 file, -0/+1)
Make libbpf aware of new sockopt hooks so it can derive prog type and hook point from the section names. Cc: Andrii Nakryiko <andriin@fb.com> Cc: Martin Lau <kafai@fb.com> Signed-off-by: Stanislav Fomichev <sdf@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-05-31  libbpf: Return btf_fd for load_sk_storage_btf  (Michal Rostecki; 1 file, -9/+4)
Before this change, the function load_sk_storage_btf expected that libbpf__probe_raw_btf was returning a BTF descriptor, but in fact it was returning information about whether the probe was successful (0 or 1). load_sk_storage_btf was using that value as an argument to the close function, which resulted in closing stdout and thus terminating the process which called that function. That bug was visible in bpftool: the `bpftool feature` subcommand was always exiting too early (because of the closed stdout) and didn't display all requested probes; `bpftool -j feature` or `bpftool -p feature` were not returning a valid JSON object. This change renames the libbpf__probe_raw_btf function to libbpf__load_raw_btf, which now returns a BTF descriptor, as expected in load_sk_storage_btf.

v2: Fix typo in the commit message.
v3: Simplify BTF descriptor handling in the bpf_object__probe_btf_* functions; rename the libbpf__probe_raw_btf function to libbpf__load_raw_btf and return a BTF descriptor.
v4: Fix typo in the commit message.

Fixes: d7c4b3980c18 ("libbpf: detect supported kernel BTF features and sanitize BTF") Signed-off-by: Michal Rostecki <mrostecki@opensuse.org> Acked-by: Andrii Nakryiko <andriin@fb.com> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-05-13  libbpf: detect supported kernel BTF features and sanitize BTF  (Andrii Nakryiko; 1 file, -32/+41)
Depending on the used versions of libbpf, Clang, and kernel, it's possible to have valid BPF object files with valid BTF information that still won't load successfully, due to Clang emitting newer BTF features (e.g., BTF_KIND_FUNC, .BTF.ext's line_info/func_info, BTF_KIND_DATASEC, etc.) that are not yet supported by an older kernel. This patch adds detection of BTF features and sanitizes the BPF object's BTF by substituting various supported BTF kinds, which have compatible layouts:

  - BTF_KIND_FUNC -> BTF_KIND_TYPEDEF
  - BTF_KIND_FUNC_PROTO -> BTF_KIND_ENUM
  - BTF_KIND_VAR -> BTF_KIND_INT
  - BTF_KIND_DATASEC -> BTF_KIND_STRUCT

Replacement is done in such a way as to preserve as much information as possible (names, sizes, etc.) without violating the kernel's validation rules.

v2->v3: remove duplicate #defines from libbpf_util.h
v1->v2: add internal libbpf_internal.h with common stuff; switch SK storage BTF to use the new libbpf__probe_raw_btf()

Reported-by: Alexei Starovoitov <ast@fb.com> Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-04-27  bpf: Support BPF_MAP_TYPE_SK_STORAGE in bpf map probing  (Martin KaFai Lau; 1 file, -1/+73)
This patch supports probing for the new BPF_MAP_TYPE_SK_STORAGE. BPF_MAP_TYPE_SK_STORAGE enforces BTF usage, so the new probe requires creating and loading a BTF as well. Signed-off-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-04-26  tools: sync bpf.h  (Matt Mullins; 1 file, -0/+1)
This adds BPF_PROG_TYPE_RAW_TRACEPOINT_WRITABLE, and fixes up the

  error: enumeration value ‘BPF_PROG_TYPE_RAW_TRACEPOINT_WRITABLE’ not handled in switch [-Werror=switch-enum]

build errors it would otherwise cause in libbpf. Signed-off-by: Matt Mullins <mmullins@fb.com> Acked-by: Yonghong Song <yhs@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-04-12  libbpf: Support sysctl hook  (Andrey Ignatov; 1 file, -0/+1)
Support BPF_PROG_TYPE_CGROUP_SYSCTL program in libbpf: identifying program and attach types by section name, probe. Signed-off-by: Andrey Ignatov <rdna@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-01-22  tools: bpftool: add probes for eBPF helper functions  (Quentin Monnet; 1 file, -0/+63)
Similarly to what was done for program types and map types, add a set of probes to test the availability of the different eBPF helper functions on the current system. For each known program type, all known helpers are tested, in order to establish a compatibility matrix. Output is provided as a set of lists of available helpers, one per program type. Sample output:

  # bpftool feature probe kernel
  ...
  Scanning eBPF helper functions...
  eBPF helpers supported for program type socket_filter:
  - bpf_map_lookup_elem
  - bpf_map_update_elem
  - bpf_map_delete_elem
  ...
  eBPF helpers supported for program type kprobe:
  - bpf_map_lookup_elem
  - bpf_map_update_elem
  - bpf_map_delete_elem
  ...

  # bpftool --json --pretty feature probe kernel
  {
      ...
      "helpers": {
          "socket_filter_available_helpers": ["bpf_map_lookup_elem", "bpf_map_update_elem", "bpf_map_delete_elem", ... ],
          "kprobe_available_helpers": ["bpf_map_lookup_elem", "bpf_map_update_elem", "bpf_map_delete_elem", ... ],
          ...
      }
  }

v5: In libbpf.map, move global symbol to the new LIBBPF_0.0.2 section.
v4: Use "enum bpf_func_id" instead of "__u32" in the bpf_probe_helper() declaration for the type of the argument used to pass the id of the helper to probe; undef BPF_HELPER_MAKE_ENTRY after using it.
v3: Do not pass kernel version from bpftool to libbpf probes (kernel version for testing programs with kprobes is retrieved directly from libbpf); dump one list of available helpers per program type (instead of one list of compatible program types per helper).
v2: Move probes from bpftool to libbpf; test all program types for each helper, print a list of working prog types for each helper; fall back on include/uapi/linux/bpf.h for names and ids of helpers; remove C-style macros output from this patch.

Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Stanislav Fomichev <sdf@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-01-22  tools: bpftool: add probes for eBPF map types  (Quentin Monnet; 1 file, -0/+84)
Add new probes for eBPF map types, to detect which ones are available on the system. Try creating one map of each type, and see if the kernel complains. Sample output:

  # bpftool feature probe kernel
  ...
  Scanning eBPF map types...
  eBPF map_type hash is available
  eBPF map_type array is available
  eBPF map_type prog_array is available
  ...

  # bpftool --json --pretty feature probe kernel
  {
      ...
      "map_types": {
          "have_hash_map_type": true,
          "have_array_map_type": true,
          "have_prog_array_map_type": true,
          ...
      }
  }

v5: In libbpf.map, move global symbol to the new LIBBPF_0.0.2 section.
v3: Use a switch with all enum values for setting specific map parameters, so that gcc complains at compile time (-Wswitch-enum) if new map types were added to the kernel but libbpf was not updated.
v2: Move probes from bpftool to libbpf; remove C-style macros output from this patch.

Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Stanislav Fomichev <sdf@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-01-22  tools: bpftool: add probes for eBPF program types  (Quentin Monnet; 1 file, -0/+95)
Introduce probes for supported BPF program types in libbpf, and call them from bpftool to test which types are available on the system. The probe simply consists in loading a very basic program of that type and seeing if the verifier complains or not. Sample output:

  # bpftool feature probe kernel
  ...
  Scanning eBPF program types...
  eBPF program_type socket_filter is available
  eBPF program_type kprobe is available
  eBPF program_type sched_cls is available
  ...

  # bpftool --json --pretty feature probe kernel
  {
      ...
      "program_types": {
          "have_socket_filter_prog_type": true,
          "have_kprobe_prog_type": true,
          "have_sched_cls_prog_type": true,
          ...
      }
  }

v5: In libbpf.map, move global symbol to a new LIBBPF_0.0.2 section; rename (non-API function) prog_load() as probe_load().
v3: Get kernel version for checking kprobes availability from libbpf instead of from bpftool; do not pass kernel_version as an argument when calling libbpf probes; use a switch with all enum values for setting specific program parameters just before probing, so that gcc complains at compile time (-Wswitch-enum) if new prog types were added to the kernel but libbpf was not updated; add a comment in libbpf.h about setrlimit() usage to allow many consecutive probe attempts.
v2: Move probes from bpftool to libbpf; remove C-style macros output from this patch.

Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Stanislav Fomichev <sdf@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>