author    Linus Torvalds <torvalds@linux-foundation.org>  2022-03-24 13:13:26 -0700
committer Linus Torvalds <torvalds@linux-foundation.org>  2022-03-24 13:13:26 -0700
commit    169e77764adc041b1dacba84ea90516a895d43b2 (patch)
tree      af7124681fa65d40fccee902af5194ab9f9c95f4 /tools
parent    Merge tag 'vfio-v5.18-rc1' of https://github.com/awilliam/linux-vfio (diff)
parent    Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net (diff)
download  linux-dev-169e77764adc041b1dacba84ea90516a895d43b2.tar.xz
          linux-dev-169e77764adc041b1dacba84ea90516a895d43b2.zip
Merge tag 'net-next-5.18' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next
Pull networking updates from Jakub Kicinski:
 "The sprinkling of SPI drivers is because we added a new one and Mark
  sent us a SPI driver interface conversion pull request.

  Core
  ----

   - Introduce XDP multi-buffer support, allowing the use of XDP with
     jumbo frame MTUs and combination with Rx coalescing offloads (LRO).

   - Speed up netns dismantling (5x) and lower the memory cost a little.
     Remove unnecessary per-netns sockets. Scope some lists to a netns.
     Cut down RCU syncing. Use batch methods. Allow netdev registration
     to complete out of order.

   - Support distinguishing timestamp types (ingress vs egress) and
     maintaining them across packet scrubbing points (e.g. redirect).

   - Continue the work of annotating packet drop reasons throughout the
     stack.

   - Switch netdev error counters from an atomic to dynamically
     allocated per-CPU counters.

   - Rework a few preempt_disable(), local_irq_save() and busy waiting
     sections problematic on PREEMPT_RT.

   - Extend the ref_tracker to allow catching use-after-free bugs.

  BPF
  ---

   - Introduce a "packing allocator" for BPF JIT images. JITed code is
     marked read-only and used to be allocated at page granularity. The
     custom allocator allows for more efficient memory use, lower iTLB
     pressure, and prevents identity-mapping huge pages from getting
     split.

   - Make use of BTF type annotations (e.g. __user, __percpu) to enforce
     the correct probe read access method; add appropriate helpers.

   - Convert the BPF preload to use light skeleton and drop the
     user-mode-driver dependency.

   - Allow the XDP BPF_PROG_RUN test infra to send real packets,
     enabling its use as a packet generator.

   - Allow local storage memory to be allocated with GFP_KERNEL if
     called from a hook allowed to sleep.

   - Introduce fprobe (multi kprobe) to speed up mass attachment (arch
     bits to come later).

   - Add unstable conntrack lookup helpers for BPF by using the BPF
     kfunc infra.

   - Allow cgroup BPF progs to return custom errors to user space.

   - Add support for AF_UNIX iterator batching.

   - Allow iterator programs to use sleepable helpers.

   - Support JIT of add, and, or, xor and xchg atomic ops on arm64.

   - Add BTFGen support to bpftool, which allows using CO-RE in kernels
     without BTF info.

   - Large number of libbpf API improvements, cleanups and deprecations.

  Protocols
  ---------

   - Micro-optimize UDPv6 Tx, gaining up to 5% in tests on a dummy
     netdev.

   - Adjust TSO packet sizes based on min_rtt, allowing very low latency
     links (data centers) to always send full-sized TSO super-frames.

   - Make IPv6 flow label changes (AKA hash rethink) more configurable,
     via sysctl and setsockopt. Distinguish between server and client
     behavior.

   - VxLAN support for "collect metadata" devices to terminate only
     configured VNIs. This is similar to VLAN filtering in the bridge.

   - Support inserting IPv6 IOAM information into a fraction of frames.

   - Add a protocol attribute to IP addresses to allow identifying where
     a given address comes from (kernel-generated, DHCP, etc.)

   - Support setting socket and IPv6 options via cmsg on ping6 sockets
     (see the sketch after the commit log below).

   - Reject misuse of ECN bits in IP headers as part of DSCP/TOS. Define
     dscp_t and stop taking ECN bits into account in fib-rules.

   - Add support for locked bridge ports (for 802.1X).

   - tun: support NAPI for packets received from batched XDP buffs,
     doubling the performance in some scenarios.

   - IPv6 extension header handling in Open vSwitch.

   - Support IPv6 control message load balancing in bonding, and prevent
     neighbor solicitation and advertisement from using the wrong port.
     Support NS/NA monitor selection similar to the existing ARP
     monitor.
   - SMC:
      - improve performance with TCP_CORK and sendfile()
      - support auto-corking
      - support TCP_NODELAY

   - MCTP (Management Component Transport Protocol):
      - add user space tag control interface
      - I2C binding driver (as specified by DMTF DSP0237)

   - Multi-BSSID beacon handling in AP mode for WiFi.

   - Bluetooth:
      - handle MSFT Monitor Device Event
      - add MGMT Adv Monitor Device Found/Lost events

   - Multi-Path TCP:
      - add support for the SO_SNDTIMEO socket option
      - lots of selftest cleanups and improvements

   - Increase the max PDU size in CAN ISOTP to 64 kB.

  Driver API
  ----------

   - Add HW counters for SW netdevs, a mechanism for devices which
     offload packet forwarding to report packet statistics back to
     software interfaces such as tunnels.

   - Select the default NIC queue count as a fraction of the number of
     physical CPU cores, instead of hard-coding it to 8.

   - Expose devlink instance locks to drivers. Allow the device layer of
     drivers to use that lock directly instead of creating their own,
     which always runs into ordering issues in devlink callbacks.

   - Add header/data split indication to guide user space enabling of
     TCP zero-copy Rx.

   - Allow configuring completion queue event size.

   - Refactor page_pool to enable fragmenting after allocation.

   - Add allocation and page reuse statistics to page_pool.

   - Improve Multiple Spanning Trees support in the bridge to allow
     reuse of topologies across VLANs, saving HW resources in switches.

   - DSA (Distributed Switch Architecture):
      - replay and offload of host VLAN entries
      - offload of static and local FDB entries on LAG interfaces
      - FDB isolation and unicast filtering

  New hardware / drivers
  ----------------------

   - Ethernet:
      - LAN937x T1 PHYs
      - Davicom DM9051 SPI NIC driver
      - Realtek RTL8367S, RTL8367RB-VB switch and MDIO
      - Microchip ksz8563 switches
      - Netronome NFP3800 SmartNICs
      - Fungible SmartNICs
      - MediaTek MT8195 switches

   - WiFi:
      - mt76: MediaTek mt7916
      - mt76: MediaTek mt7921u USB adapters
      - brcmfmac: Broadcom BCM43454/6

   - Mobile:
      - iosm: Intel M.2 7360 WWAN card

  Drivers
  -------

   - Convert many drivers to the new phylink API, built for split PCS
     designs but also simplifying other cases.
   - Intel Ethernet NICs:
      - add TTY for GNSS module for E810T device
      - improve AF_XDP performance
      - GTP-C and GTP-U filter offload
      - QinQ VLAN support

   - Mellanox Ethernet NICs (mlx5):
      - support xdp->data_meta
      - multi-buffer XDP
      - offload tc push_eth and pop_eth actions

   - Netronome Ethernet NICs (nfp):
      - flow-independent tc action hardware offload (police / meter)
      - AF_XDP

   - Other Ethernet NICs:
      - at803x: fiber and SFP support
      - xgmac: mdio: preamble suppression and custom MDC frequencies
      - r8169: enable ASPM L1.2 if system vendor flags it as safe
      - macb/gem: ZynqMP SGMII
      - hns3: add TX push mode
      - dpaa2-eth: software TSO
      - lan743x: multi-queue, mdio, SGMII, PTP
      - axienet: NAPI and GRO support

   - Mellanox Ethernet switches (mlxsw):
      - source and dest IP address rewrites
      - RJ45 ports

   - Marvell Ethernet switches (prestera):
      - basic routing offload
      - multi-chain TC ACL offload

   - NXP embedded Ethernet switches (ocelot & felix):
      - PTP over UDP with the ocelot-8021q DSA tagging protocol
      - basic QoS classification on Felix DSA switch using dcbnl
      - port mirroring for ocelot switches

   - Microchip high-speed industrial Ethernet (sparx5):
      - offloading of bridge port flooding flags
      - PTP Hardware Clock

   - Other embedded switches:
      - lan966x: PTP Hardware Clock
      - qca8k: mdio read/write operations via crafted Ethernet packets

   - Qualcomm 802.11ax WiFi (ath11k):
      - add LDPC FEC type and 802.11ax High Efficiency data in radiotap
      - enable RX PPDU stats in monitor co-exist mode

   - Intel WiFi (iwlwifi):
      - UHB TAS enablement via BIOS
      - band disablement via BIOS
      - channel switch offload
      - 32 Rx AMPDU sessions in newer devices

   - MediaTek WiFi (mt76):
      - background radar detection
      - thermal management improvements on mt7915
      - SAR support for more mt76 platforms
      - MBSSID and 6 GHz band on mt7915

   - RealTek WiFi:
      - rtw89: AP mode
      - rtw89: 160 MHz channels and 6 GHz band
      - rtw89: hardware scan

   - Bluetooth:
      - mt7921s: wake on Bluetooth, SCO over I2S, wide-band-speed (WBS)

   - Microchip CAN (mcp251xfd):
      - multiple RX-FIFOs and runtime configurable RX/TX rings
      - internal PLL, runtime PM handling simplification
      - improve chip detection and error handling after wakeup"

* tag 'net-next-5.18' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next: (2521 commits)
  llc: fix netdevice reference leaks in llc_ui_bind()
  drivers: ethernet: cpsw: fix panic when interrupt coaleceing is set via ethtool
  ice: don't allow to run ice_send_event_to_aux() in atomic ctx
  ice: fix 'scheduling while atomic' on aux critical err interrupt
  net/sched: fix incorrect vlan_push_eth dest field
  net: bridge: mst: Restrict info size queries to bridge ports
  net: marvell: prestera: add missing destroy_workqueue() in prestera_module_init()
  drivers: net: xgene: Fix regression in CRC stripping
  net: geneve: add missing netlink policy and size for IFLA_GENEVE_INNER_PROTO_INHERIT
  net: dsa: fix missing host-filtered multicast addresses
  net/mlx5e: Fix build warning, detected write beyond size of field
  iwlwifi: mvm: Don't fail if PPAG isn't supported
  selftests/bpf: Fix kprobe_multi test.
  Revert "rethook: x86: Add rethook x86 implementation"
  Revert "arm64: rethook: Add arm64 rethook implementation"
  Revert "powerpc: Add rethook support"
  Revert "ARM: rethook: Add rethook arm implementation"
  netdevice: add missing dm_private kdoc
  net: bridge: mst: prevent NULL deref in br_mst_info_size()
  selftests: forwarding: Use same VRF for port and VLAN upper
  ...
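For the ping6 cmsg bullet above, here is a minimal user-space sketch of the
kind of per-packet option passing this enables. It is an illustration only,
not code from this merge: it assumes net.ipv4.ping_group_range allows
unprivileged ICMPv6 sockets and uses the long-standing IPV6_TCLASS ancillary
option as the example.

    #include <netinet/icmp6.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>
    #include <unistd.h>

    int main(void)
    {
            /* Unprivileged ICMPv6 "ping" socket; requires the caller's
             * group to be within net.ipv4.ping_group_range.
             */
            int fd = socket(AF_INET6, SOCK_DGRAM, IPPROTO_ICMPV6);
            if (fd < 0) {
                    perror("socket");
                    return 1;
            }

            struct icmp6_hdr echo = { .icmp6_type = ICMP6_ECHO_REQUEST };
            struct sockaddr_in6 dst = {
                    .sin6_family = AF_INET6,
                    .sin6_addr = IN6ADDR_LOOPBACK_INIT,
            };
            struct iovec iov = { .iov_base = &echo, .iov_len = sizeof(echo) };

            /* Ancillary data buffer, aligned for struct cmsghdr. */
            union {
                    char buf[CMSG_SPACE(sizeof(int))];
                    struct cmsghdr align;
            } u = { { 0 } };

            struct msghdr msg = {
                    .msg_name = &dst,
                    .msg_namelen = sizeof(dst),
                    .msg_iov = &iov,
                    .msg_iovlen = 1,
                    .msg_control = u.buf,
                    .msg_controllen = sizeof(u.buf),
            };

            /* Set the IPv6 traffic class for this packet only. */
            struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
            int tclass = 0xb8; /* DSCP EF */

            cmsg->cmsg_level = SOL_IPV6;
            cmsg->cmsg_type = IPV6_TCLASS;
            cmsg->cmsg_len = CMSG_LEN(sizeof(int));
            memcpy(CMSG_DATA(cmsg), &tclass, sizeof(tclass));

            if (sendmsg(fd, &msg, 0) < 0)
                    perror("sendmsg");
            close(fd);
            return 0;
    }

The option travels with the single sendmsg() call rather than being set on
the socket, which is the per-packet control the series extends to ping6
sockets.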
Diffstat (limited to 'tools')
-rw-r--r-- tools/bpf/bpftool/Documentation/bpftool-gen.rst | 115
-rw-r--r-- tools/bpf/bpftool/Documentation/bpftool.rst | 13
-rw-r--r-- tools/bpf/bpftool/Documentation/common_options.rst | 13
-rw-r--r-- tools/bpf/bpftool/Makefile | 34
-rw-r--r-- tools/bpf/bpftool/bash-completion/bpftool | 18
-rw-r--r-- tools/bpf/bpftool/btf.c | 2
-rw-r--r-- tools/bpf/bpftool/cgroup.c | 6
-rw-r--r-- tools/bpf/bpftool/common.c | 46
-rw-r--r-- tools/bpf/bpftool/feature.c | 141
-rw-r--r-- tools/bpf/bpftool/gen.c | 1367
-rw-r--r-- tools/bpf/bpftool/link.c | 3
-rw-r--r-- tools/bpf/bpftool/main.c | 31
-rw-r--r-- tools/bpf/bpftool/main.h | 8
-rw-r--r-- tools/bpf/bpftool/map.c | 44
-rw-r--r-- tools/bpf/bpftool/net.c | 2
-rw-r--r-- tools/bpf/bpftool/pids.c | 11
-rw-r--r-- tools/bpf/bpftool/prog.c | 52
-rw-r--r-- tools/bpf/bpftool/skeleton/pid_iter.bpf.c | 22
-rw-r--r-- tools/bpf/bpftool/skeleton/pid_iter.h | 2
-rw-r--r-- tools/bpf/bpftool/struct_ops.c | 4
-rw-r--r-- tools/bpf/bpftool/xlated_dumper.c | 5
-rw-r--r-- tools/bpf/resolve_btfids/Makefile | 6
-rw-r--r-- tools/include/uapi/linux/bpf.h | 155
-rw-r--r-- tools/include/uapi/linux/if_link.h | 1
-rw-r--r-- tools/lib/bpf/Makefile | 4
-rw-r--r-- tools/lib/bpf/bpf.c | 22
-rw-r--r-- tools/lib/bpf/bpf.h | 20
-rw-r--r-- tools/lib/bpf/bpf_helpers.h | 2
-rw-r--r-- tools/lib/bpf/bpf_tracing.h | 103
-rw-r--r-- tools/lib/bpf/btf.c | 31
-rw-r--r-- tools/lib/bpf/btf.h | 34
-rw-r--r-- tools/lib/bpf/btf_dump.c | 11
-rw-r--r-- tools/lib/bpf/gen_loader.c | 15
-rw-r--r-- tools/lib/bpf/hashmap.c | 3
-rw-r--r-- tools/lib/bpf/libbpf.c | 934
-rw-r--r-- tools/lib/bpf/libbpf.h | 234
-rw-r--r-- tools/lib/bpf/libbpf.map | 18
-rw-r--r-- tools/lib/bpf/libbpf_internal.h | 17
-rw-r--r-- tools/lib/bpf/libbpf_legacy.h | 26
-rw-r--r-- tools/lib/bpf/libbpf_version.h | 2
-rw-r--r-- tools/lib/bpf/netlink.c | 180
-rw-r--r-- tools/lib/bpf/relo_core.c | 79
-rw-r--r-- tools/lib/bpf/relo_core.h | 42
-rw-r--r-- tools/lib/bpf/skel_internal.h | 253
-rw-r--r-- tools/lib/bpf/xsk.c | 15
-rw-r--r-- tools/perf/tests/llvm.c | 2
-rw-r--r-- tools/perf/util/bpf-loader.c | 74
-rw-r--r-- tools/perf/util/bpf_map.c | 28
-rw-r--r-- tools/scripts/Makefile.include | 4
-rw-r--r-- tools/testing/selftests/bpf/.gitignore | 2
-rw-r--r-- tools/testing/selftests/bpf/Makefile | 29
-rw-r--r-- tools/testing/selftests/bpf/README.rst | 12
-rw-r--r-- tools/testing/selftests/bpf/benchs/bench_ringbufs.c | 2
-rw-r--r-- tools/testing/selftests/bpf/benchs/bench_trigger.c | 6
-rw-r--r-- tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c | 60
-rw-r--r-- tools/testing/selftests/bpf/cap_helpers.c | 67
-rw-r--r-- tools/testing/selftests/bpf/cap_helpers.h | 19
-rw-r--r-- tools/testing/selftests/bpf/config | 5
-rwxr-xr-x tools/testing/selftests/bpf/ima_setup.sh | 35
-rw-r--r-- tools/testing/selftests/bpf/network_helpers.c | 86
-rw-r--r-- tools/testing/selftests/bpf/network_helpers.h | 9
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/align.c | 218
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/atomics.c | 149
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/attach_probe.c | 18
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/bind_perm.c | 64
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/bpf_cookie.c | 195
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/bpf_iter.c | 20
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/bpf_iter_setsockopt_unix.c | 100
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/bpf_mod_race.c | 230
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/bpf_nf.c | 52
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/btf.c | 25
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/btf_dump.c | 54
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/btf_tag.c | 207
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/cgroup_attach_autodetach.c | 2
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/cgroup_attach_multi.c | 14
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/cgroup_attach_override.c | 2
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/cgroup_getset_retval.c | 481
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/check_mtu.c | 40
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/cls_redirect.c | 10
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/core_kern.c | 16
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/core_kern_overflow.c | 13
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/core_reloc.c | 63
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/custom_sec_handlers.c | 176
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/dummy_st_ops.c | 27
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/fentry_fexit.c | 24
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/fentry_test.c | 7
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/fexit_bpf2bpf.c | 34
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/fexit_stress.c | 22
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/fexit_test.c | 7
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/find_vma.c | 30
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/flow_dissector.c | 33
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/flow_dissector_load_bytes.c | 24
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/for_each.c | 32
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/get_func_args_test.c | 12
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c | 10
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/get_stackid_cannot_attach.c | 2
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/global_data.c | 32
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/global_data_init.c | 2
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/global_func_args.c | 14
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/kfree_skb.c | 16
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/kfunc_call.c | 46
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/kprobe_multi_test.c | 323
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/ksyms_module.c | 27
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/l4lb_all.c | 35
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/log_buf.c | 6
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/map_lock.c | 15
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/map_ptr.c | 16
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/modify_return.c | 33
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/obj_name.c | 2
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/perf_branches.c | 4
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/perf_link.c | 2
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/pkt_access.c | 26
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/pkt_md_access.c | 14
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/prog_run_opts.c | 77
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/prog_run_xattr.c | 83
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/queue_stack_map.c | 46
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/raw_tp_test_run.c | 64
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/raw_tp_writable_test_run.c | 16
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/send_signal.c | 17
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/signal_pending.c | 23
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/skb_ctx.c | 81
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/skb_helpers.c | 16
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/sock_fields.c | 58
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/sockmap_basic.c | 86
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/sockmap_listen.c | 12
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/sockopt_sk.c | 4
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/spinlock.c | 14
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/stacktrace_build_id_nmi.c | 2
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/stacktrace_map_skip.c | 63
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/subprogs.c | 77
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/subskeleton.c | 78
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/syscall.c | 10
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/tailcalls.c | 274
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/task_pt_regs.c | 16
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/tc_redirect.c | 523
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/test_bpf_syscall_macro.c | 73
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/test_ima.c | 149
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/test_profiler.c | 14
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/test_skb_pkt_end.c | 15
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/timer.c | 7
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/timer_mim.c | 7
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/trace_ext.c | 28
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/xdp.c | 34
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/xdp_adjust_frags.c | 146
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/xdp_adjust_tail.c | 251
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/xdp_attach.c | 29
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/xdp_bpf2bpf.c | 141
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/xdp_cpumap_attach.c | 72
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/xdp_devmap_attach.c | 63
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/xdp_do_redirect.c | 201
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/xdp_info.c | 14
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/xdp_link.c | 26
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/xdp_noinline.c | 44
-rw-r--r-- tools/testing/selftests/bpf/prog_tests/xdp_perf.c | 19
-rw-r--r-- tools/testing/selftests/bpf/progs/atomics.c | 28
-rw-r--r-- tools/testing/selftests/bpf/progs/bloom_filter_bench.c | 7
-rw-r--r-- tools/testing/selftests/bpf/progs/bloom_filter_map.c | 5
-rw-r--r-- tools/testing/selftests/bpf/progs/bpf_iter_setsockopt_unix.c | 60
-rw-r--r-- tools/testing/selftests/bpf/progs/bpf_iter_task.c | 54
-rw-r--r-- tools/testing/selftests/bpf/progs/bpf_iter_unix.c | 2
-rw-r--r-- tools/testing/selftests/bpf/progs/bpf_loop.c | 9
-rw-r--r-- tools/testing/selftests/bpf/progs/bpf_loop_bench.c | 3
-rw-r--r-- tools/testing/selftests/bpf/progs/bpf_misc.h | 19
-rw-r--r-- tools/testing/selftests/bpf/progs/bpf_mod_race.c | 100
-rw-r--r-- tools/testing/selftests/bpf/progs/bpf_syscall_macro.c | 84
-rw-r--r-- tools/testing/selftests/bpf/progs/bpf_tracing_net.h | 2
-rw-r--r-- tools/testing/selftests/bpf/progs/btf_type_tag_percpu.c | 66
-rw-r--r-- tools/testing/selftests/bpf/progs/btf_type_tag_user.c | 40
-rw-r--r-- tools/testing/selftests/bpf/progs/cgroup_getset_retval_getsockopt.c | 45
-rw-r--r-- tools/testing/selftests/bpf/progs/cgroup_getset_retval_setsockopt.c | 52
-rw-r--r-- tools/testing/selftests/bpf/progs/core_kern.c | 16
-rw-r--r-- tools/testing/selftests/bpf/progs/core_kern_overflow.c | 22
-rw-r--r-- tools/testing/selftests/bpf/progs/fexit_sleep.c | 9
-rw-r--r-- tools/testing/selftests/bpf/progs/freplace_cls_redirect.c | 12
-rw-r--r-- tools/testing/selftests/bpf/progs/ima.c | 66
-rw-r--r-- tools/testing/selftests/bpf/progs/kfunc_call_race.c | 14
-rw-r--r-- tools/testing/selftests/bpf/progs/kfunc_call_test.c | 52
-rw-r--r-- tools/testing/selftests/bpf/progs/kprobe_multi.c | 100
-rw-r--r-- tools/testing/selftests/bpf/progs/ksym_race.c | 13
-rw-r--r-- tools/testing/selftests/bpf/progs/local_storage.c | 19
-rw-r--r-- tools/testing/selftests/bpf/progs/perfbuf_bench.c | 3
-rw-r--r-- tools/testing/selftests/bpf/progs/ringbuf_bench.c | 3
-rw-r--r-- tools/testing/selftests/bpf/progs/sample_map_ret0.c | 24
-rw-r--r-- tools/testing/selftests/bpf/progs/sockmap_parse_prog.c | 2
-rw-r--r-- tools/testing/selftests/bpf/progs/sockopt_sk.c | 35
-rw-r--r-- tools/testing/selftests/bpf/progs/stacktrace_map_skip.c | 68
-rw-r--r-- tools/testing/selftests/bpf/progs/test_bpf_nf.c | 118
-rw-r--r-- tools/testing/selftests/bpf/progs/test_btf_decl_tag.c (renamed from tools/testing/selftests/bpf/progs/btf_decl_tag.c) | 0
-rw-r--r-- tools/testing/selftests/bpf/progs/test_btf_haskv.c | 3
-rw-r--r-- tools/testing/selftests/bpf/progs/test_btf_newkv.c | 3
-rw-r--r-- tools/testing/selftests/bpf/progs/test_btf_nokv.c | 12
-rw-r--r-- tools/testing/selftests/bpf/progs/test_custom_sec_handlers.c | 63
-rw-r--r-- tools/testing/selftests/bpf/progs/test_probe_user.c | 15
-rw-r--r-- tools/testing/selftests/bpf/progs/test_ringbuf.c | 3
-rw-r--r-- tools/testing/selftests/bpf/progs/test_send_signal_kern.c | 2
-rw-r--r-- tools/testing/selftests/bpf/progs/test_sk_lookup.c | 15
-rw-r--r-- tools/testing/selftests/bpf/progs/test_skb_cgroup_id_kern.c | 12
-rw-r--r-- tools/testing/selftests/bpf/progs/test_sock_fields.c | 63
-rw-r--r-- tools/testing/selftests/bpf/progs/test_sockmap_progs_query.c | 24
-rw-r--r-- tools/testing/selftests/bpf/progs/test_subskeleton.c | 28
-rw-r--r-- tools/testing/selftests/bpf/progs/test_subskeleton_lib.c | 61
-rw-r--r-- tools/testing/selftests/bpf/progs/test_subskeleton_lib2.c | 16
-rw-r--r-- tools/testing/selftests/bpf/progs/test_tc_dtime.c | 349
-rw-r--r-- tools/testing/selftests/bpf/progs/test_tc_edt.c | 12
-rw-r--r-- tools/testing/selftests/bpf/progs/test_tcp_check_syncookie_kern.c | 12
-rw-r--r-- tools/testing/selftests/bpf/progs/test_xdp_adjust_tail_grow.c | 10
-rw-r--r-- tools/testing/selftests/bpf/progs/test_xdp_adjust_tail_shrink.c | 32
-rw-r--r-- tools/testing/selftests/bpf/progs/test_xdp_bpf2bpf.c | 2
-rw-r--r-- tools/testing/selftests/bpf/progs/test_xdp_do_redirect.c | 100
-rw-r--r-- tools/testing/selftests/bpf/progs/test_xdp_update_frags.c | 42
-rw-r--r-- tools/testing/selftests/bpf/progs/test_xdp_with_cpumap_frags_helpers.c | 27
-rw-r--r-- tools/testing/selftests/bpf/progs/test_xdp_with_cpumap_helpers.c | 8
-rw-r--r-- tools/testing/selftests/bpf/progs/test_xdp_with_devmap_frags_helpers.c | 27
-rw-r--r-- tools/testing/selftests/bpf/progs/test_xdp_with_devmap_helpers.c | 9
-rw-r--r-- tools/testing/selftests/bpf/progs/trace_printk.c | 3
-rw-r--r-- tools/testing/selftests/bpf/progs/trace_vprintk.c | 3
-rw-r--r-- tools/testing/selftests/bpf/progs/trigger_bench.c | 9
-rw-r--r-- tools/testing/selftests/bpf/progs/xdp_redirect_multi_kern.c | 2
-rw-r--r-- tools/testing/selftests/bpf/test_cgroup_storage.c | 2
-rw-r--r-- tools/testing/selftests/bpf/test_cpp.cpp | 90
-rwxr-xr-x tools/testing/selftests/bpf/test_lirc_mode2.sh | 5
-rw-r--r-- tools/testing/selftests/bpf/test_lru_map.c | 15
-rwxr-xr-x tools/testing/selftests/bpf/test_lwt_ip_encap.sh | 10
-rwxr-xr-x tools/testing/selftests/bpf/test_lwt_seg6local.sh | 170
-rw-r--r-- tools/testing/selftests/bpf/test_maps.c | 2
-rw-r--r-- tools/testing/selftests/bpf/test_sock_addr.c | 6
-rw-r--r-- tools/testing/selftests/bpf/test_sockmap.c | 4
-rwxr-xr-x tools/testing/selftests/bpf/test_tcp_check_syncookie.sh | 5
-rwxr-xr-x tools/testing/selftests/bpf/test_tunnel.sh | 2
-rw-r--r-- tools/testing/selftests/bpf/test_verifier.c | 136
-rwxr-xr-x tools/testing/selftests/bpf/test_xdp_meta.sh | 38
-rwxr-xr-x tools/testing/selftests/bpf/test_xdp_redirect.sh | 30
-rwxr-xr-x tools/testing/selftests/bpf/test_xdp_redirect_multi.sh | 60
-rwxr-xr-x tools/testing/selftests/bpf/test_xdp_veth.sh | 39
-rwxr-xr-x tools/testing/selftests/bpf/test_xdp_vlan.sh | 66
-rw-r--r-- tools/testing/selftests/bpf/trace_helpers.c | 77
-rw-r--r-- tools/testing/selftests/bpf/trace_helpers.h | 3
-rw-r--r-- tools/testing/selftests/bpf/verifier/atomic_invalid.c | 6
-rw-r--r-- tools/testing/selftests/bpf/verifier/bounds.c | 4
-rw-r--r-- tools/testing/selftests/bpf/verifier/bounds_deduction.c | 2
-rw-r--r-- tools/testing/selftests/bpf/verifier/calls.c | 183
-rw-r--r-- tools/testing/selftests/bpf/verifier/ctx.c | 12
-rw-r--r-- tools/testing/selftests/bpf/verifier/direct_packet_access.c | 2
-rw-r--r-- tools/testing/selftests/bpf/verifier/helper_access_var_len.c | 6
-rw-r--r-- tools/testing/selftests/bpf/verifier/jmp32.c | 16
-rw-r--r-- tools/testing/selftests/bpf/verifier/precise.c | 4
-rw-r--r-- tools/testing/selftests/bpf/verifier/raw_stack.c | 4
-rw-r--r-- tools/testing/selftests/bpf/verifier/ref_tracking.c | 6
-rw-r--r-- tools/testing/selftests/bpf/verifier/search_pruning.c | 2
-rw-r--r-- tools/testing/selftests/bpf/verifier/sock.c | 83
-rw-r--r-- tools/testing/selftests/bpf/verifier/spill_fill.c | 38
-rw-r--r-- tools/testing/selftests/bpf/verifier/unpriv.c | 4
-rw-r--r-- tools/testing/selftests/bpf/verifier/value_illegal_alu.c | 4
-rw-r--r-- tools/testing/selftests/bpf/verifier/value_ptr_arith.c | 4
-rw-r--r-- tools/testing/selftests/bpf/verifier/var_off.c | 2
-rwxr-xr-x tools/testing/selftests/bpf/vmtest.sh | 2
-rw-r--r-- tools/testing/selftests/bpf/xdp_redirect_multi.c | 8
-rw-r--r-- tools/testing/selftests/bpf/xdping.c | 4
-rw-r--r-- tools/testing/selftests/bpf/xdpxceiver.c | 85
-rw-r--r-- tools/testing/selftests/bpf/xdpxceiver.h | 2
-rwxr-xr-x tools/testing/selftests/drivers/net/mlxsw/hw_stats_l3.sh | 31
-rwxr-xr-x tools/testing/selftests/drivers/net/netdevsim/hw_stats_l3.sh | 421
-rw-r--r-- tools/testing/selftests/net/.gitignore | 2
-rw-r--r-- tools/testing/selftests/net/Makefile | 3
-rw-r--r-- tools/testing/selftests/net/af_unix/test_unix_oob.c | 6
-rwxr-xr-x tools/testing/selftests/net/cmsg_ipv6.sh | 156
-rw-r--r-- tools/testing/selftests/net/cmsg_sender.c | 506
-rw-r--r-- tools/testing/selftests/net/cmsg_so_mark.c | 67
-rwxr-xr-x tools/testing/selftests/net/cmsg_so_mark.sh | 32
-rwxr-xr-x tools/testing/selftests/net/cmsg_time.sh | 83
-rwxr-xr-x tools/testing/selftests/net/fcnal-test.sh | 2
-rwxr-xr-x tools/testing/selftests/net/fib_rule_tests.sh | 86
-rwxr-xr-x tools/testing/selftests/net/fib_tests.sh | 147
-rw-r--r-- tools/testing/selftests/net/forwarding/Makefile | 1
-rwxr-xr-x tools/testing/selftests/net/forwarding/bridge_locked_port.sh | 176
-rwxr-xr-x tools/testing/selftests/net/forwarding/bridge_vlan_aware.sh | 5
-rwxr-xr-x tools/testing/selftests/net/forwarding/bridge_vlan_unaware.sh | 5
-rw-r--r-- tools/testing/selftests/net/forwarding/fib_offload_lib.sh | 12
-rw-r--r-- tools/testing/selftests/net/forwarding/forwarding.config.sample | 2
-rwxr-xr-x tools/testing/selftests/net/forwarding/hw_stats_l3.sh | 332
-rw-r--r-- tools/testing/selftests/net/forwarding/lib.sh | 69
-rwxr-xr-x tools/testing/selftests/net/forwarding/pedit_ip.sh | 201
-rwxr-xr-x tools/testing/selftests/net/forwarding/tc_police.sh | 52
-rwxr-xr-x tools/testing/selftests/net/mptcp/mptcp_connect.sh | 19
-rwxr-xr-x tools/testing/selftests/net/mptcp/mptcp_join.sh | 2751
-rwxr-xr-x tools/testing/selftests/net/mptcp/pm_netlink.sh | 18
-rw-r--r-- tools/testing/selftests/net/mptcp/pm_nl_ctl.c | 88
-rw-r--r-- tools/testing/selftests/net/mptcp/settings | 2
-rwxr-xr-x tools/testing/selftests/net/pmtu.sh | 141
-rw-r--r-- tools/testing/selftests/net/psock_fanout.c | 5
-rw-r--r-- tools/testing/selftests/net/reuseport_bpf_numa.c | 2
-rwxr-xr-x tools/testing/selftests/net/rtnetlink.sh | 4
-rwxr-xr-x tools/testing/selftests/net/test_vxlan_vnifiltering.sh | 579
-rw-r--r-- tools/testing/selftests/net/timestamping.c | 4
-rw-r--r-- tools/testing/selftests/net/toeplitz.c | 6
-rw-r--r-- tools/testing/selftests/net/txtimestamp.c | 6
-rw-r--r-- tools/testing/selftests/ptp/testptp.c | 18
-rw-r--r-- tools/testing/selftests/tc-testing/tdc_config.py | 2
-rw-r--r-- tools/testing/vsock/vsock_test.c | 215
299 files changed, 16411 insertions, 3793 deletions
diff --git a/tools/bpf/bpftool/Documentation/bpftool-gen.rst b/tools/bpf/bpftool/Documentation/bpftool-gen.rst
index bc276388f432..68454ef28f58 100644
--- a/tools/bpf/bpftool/Documentation/bpftool-gen.rst
+++ b/tools/bpf/bpftool/Documentation/bpftool-gen.rst
@@ -25,6 +25,8 @@ GEN COMMANDS
| **bpftool** **gen object** *OUTPUT_FILE* *INPUT_FILE* [*INPUT_FILE*...]
| **bpftool** **gen skeleton** *FILE* [**name** *OBJECT_NAME*]
+| **bpftool** **gen subskeleton** *FILE* [**name** *OBJECT_NAME*]
+| **bpftool** **gen min_core_btf** *INPUT* *OUTPUT* *OBJECT* [*OBJECT*...]
| **bpftool** **gen help**
DESCRIPTION
@@ -149,6 +151,50 @@ DESCRIPTION
(non-read-only) data from userspace, with same simplicity
as for BPF side.
+ **bpftool gen subskeleton** *FILE*
+ Generate BPF subskeleton C header file for a given *FILE*.
+
+ Subskeletons are similar to skeletons, except they do not own
+ the corresponding maps, programs, or global variables. They
+ require that the object file used to generate them is already
+ loaded into a *bpf_object* by some other means.
+
+ This functionality is useful when a library is included into a
+ larger BPF program. A subskeleton for the library would have
+ access to all objects and globals defined in it, without
+ having to know about the larger program.
+
+ Consequently, there are only two functions defined
+ for subskeletons:
+
+ - **example__open(bpf_object\*)**
+ Instantiates a subskeleton from an already opened (but not
+ necessarily loaded) **bpf_object**.
+
+ - **example__destroy()**
+ Frees the storage for the subskeleton but *does not* unload
+ any BPF programs or maps.
+
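+ A minimal usage sketch (illustrative only, assuming the generated
+ object name is *example* and that *obj* is the owning program's
+ already-opened **bpf_object**)::
+
+ struct example *sub;
+
+ sub = example__open(obj);
+ if (!sub)
+ /* handle error (errno is set) */;
+ /* ... access sub->maps, sub->progs, sub->bss ... */
+ example__destroy(sub);
+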
+ **bpftool** **gen min_core_btf** *INPUT* *OUTPUT* *OBJECT* [*OBJECT*...]
+ Generate a minimum BTF file as *OUTPUT*, derived from a given
+ *INPUT* BTF file and containing all the BTF types needed to
+ satisfy the CO-RE relocations of one or more given eBPF objects.
+
+ When a kernel isn't compiled with CONFIG_DEBUG_INFO_BTF, libbpf
+ has to rely on external BTF files to calculate the CO-RE
+ relocations when loading an eBPF object.
+
+ Usually, an external BTF file is built from existing kernel
+ DWARF data using pahole. It contains all the types used by its
+ respective kernel image and is therefore large.
+
+ The min_core_btf feature builds smaller BTF files, customized
+ for one or more eBPF objects, so they can be distributed
+ together with a CO-RE based eBPF application, making the
+ application portable across kernel versions.
+
+ Check the examples below for more information on how to use it.
+
**bpftool gen help**
Print short help message.
@@ -215,7 +261,9 @@ This is example BPF application with two BPF programs and a mix of BPF maps
and global variables. Source code is split across two source code files.
**$ clang -target bpf -g example1.bpf.c -o example1.bpf.o**
+
**$ clang -target bpf -g example2.bpf.c -o example2.bpf.o**
+
**$ bpftool gen object example.bpf.o example1.bpf.o example2.bpf.o**
This set of commands compiles *example1.bpf.c* and *example2.bpf.c*
@@ -329,3 +377,70 @@ BPF ELF object file *example.bpf.o*.
my_static_var: 7
This is a stripped-out version of skeleton generated for above example code.
+
+min_core_btf
+------------
+
+**$ bpftool btf dump file 5.4.0-example.btf format raw**
+
+::
+
+ [1] INT 'long unsigned int' size=8 bits_offset=0 nr_bits=64 encoding=(none)
+ [2] CONST '(anon)' type_id=1
+ [3] VOLATILE '(anon)' type_id=1
+ [4] ARRAY '(anon)' type_id=1 index_type_id=21 nr_elems=2
+ [5] PTR '(anon)' type_id=8
+ [6] CONST '(anon)' type_id=5
+ [7] INT 'char' size=1 bits_offset=0 nr_bits=8 encoding=(none)
+ [8] CONST '(anon)' type_id=7
+ [9] INT 'unsigned int' size=4 bits_offset=0 nr_bits=32 encoding=(none)
+ <long output>
+
+**$ bpftool btf dump file one.bpf.o format raw**
+
+::
+
+ [1] PTR '(anon)' type_id=2
+ [2] STRUCT 'trace_event_raw_sys_enter' size=64 vlen=4
+ 'ent' type_id=3 bits_offset=0
+ 'id' type_id=7 bits_offset=64
+ 'args' type_id=9 bits_offset=128
+ '__data' type_id=12 bits_offset=512
+ [3] STRUCT 'trace_entry' size=8 vlen=4
+ 'type' type_id=4 bits_offset=0
+ 'flags' type_id=5 bits_offset=16
+ 'preempt_count' type_id=5 bits_offset=24
+ <long output>
+
+**$ bpftool gen min_core_btf 5.4.0-example.btf 5.4.0-smaller.btf one.bpf.o**
+
+**$ bpftool btf dump file 5.4.0-smaller.btf format raw**
+
+::
+
+ [1] TYPEDEF 'pid_t' type_id=6
+ [2] STRUCT 'trace_event_raw_sys_enter' size=64 vlen=1
+ 'args' type_id=4 bits_offset=128
+ [3] STRUCT 'task_struct' size=9216 vlen=2
+ 'pid' type_id=1 bits_offset=17920
+ 'real_parent' type_id=7 bits_offset=18048
+ [4] ARRAY '(anon)' type_id=5 index_type_id=8 nr_elems=6
+ [5] INT 'long unsigned int' size=8 bits_offset=0 nr_bits=64 encoding=(none)
+ [6] TYPEDEF '__kernel_pid_t' type_id=8
+ [7] PTR '(anon)' type_id=3
+ [8] INT 'int' size=4 bits_offset=0 nr_bits=32 encoding=SIGNED
+ <end>
+
+Now, the "5.4.0-smaller.btf" file may be used by libbpf as an external BTF file
+when loading the "one.bpf.o" object into the "5.4.0-example" kernel. Note that
+the generated BTF file won't allow other eBPF objects to be loaded, just the
+ones given to min_core_btf.
+
+::
+
+ LIBBPF_OPTS(bpf_object_open_opts, opts, .btf_custom_path = "5.4.0-smaller.btf");
+ struct bpf_object *obj;
+
+ obj = bpf_object__open_file("one.bpf.o", &opts);
+
+ ...
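+
+Loading then proceeds as usual; a hedged continuation of the sketch above
+(the error handling is illustrative, not prescriptive)::
+
+ if (libbpf_get_error(obj))
+ return -1; /* open failed */
+
+ if (bpf_object__load(obj))
+ return -1; /* load / CO-RE relocation failed */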
diff --git a/tools/bpf/bpftool/Documentation/bpftool.rst b/tools/bpf/bpftool/Documentation/bpftool.rst
index 7084dd9fa2f8..6965c94dfdaf 100644
--- a/tools/bpf/bpftool/Documentation/bpftool.rst
+++ b/tools/bpf/bpftool/Documentation/bpftool.rst
@@ -20,7 +20,8 @@ SYNOPSIS
**bpftool** **version**
- *OBJECT* := { **map** | **program** | **cgroup** | **perf** | **net** | **feature** }
+ *OBJECT* := { **map** | **program** | **link** | **cgroup** | **perf** | **net** | **feature** |
+ **btf** | **gen** | **struct_ops** | **iter** }
*OPTIONS* := { { **-V** | **--version** } | |COMMON_OPTIONS| }
@@ -31,6 +32,8 @@ SYNOPSIS
*PROG-COMMANDS* := { **show** | **list** | **dump jited** | **dump xlated** | **pin** |
**load** | **attach** | **detach** | **help** }
+ *LINK-COMMANDS* := { **show** | **list** | **pin** | **detach** | **help** }
+
*CGROUP-COMMANDS* := { **show** | **list** | **attach** | **detach** | **help** }
*PERF-COMMANDS* := { **show** | **list** | **help** }
@@ -39,6 +42,14 @@ SYNOPSIS
*FEATURE-COMMANDS* := { **probe** | **help** }
+ *BTF-COMMANDS* := { **show** | **list** | **dump** | **help** }
+
+ *GEN-COMMANDS* := { **object** | **skeleton** | **min_core_btf** | **help** }
+
+ *STRUCT-OPS-COMMANDS* := { **show** | **list** | **dump** | **register** | **unregister** | **help** }
+
+ *ITER-COMMANDS* := { **pin** | **help** }
+
DESCRIPTION
===========
*bpftool* allows for inspection and simple modification of BPF objects
diff --git a/tools/bpf/bpftool/Documentation/common_options.rst b/tools/bpf/bpftool/Documentation/common_options.rst
index 908487b9c2ad..4107a586b68b 100644
--- a/tools/bpf/bpftool/Documentation/common_options.rst
+++ b/tools/bpf/bpftool/Documentation/common_options.rst
@@ -4,12 +4,13 @@
Print short help message (similar to **bpftool help**).
-V, --version
- Print version number (similar to **bpftool version**), and optional
- features that were included when bpftool was compiled. Optional
- features include linking against libbfd to provide the disassembler
- for JIT-ted programs (**bpftool prog dump jited**) and usage of BPF
- skeletons (some features like **bpftool prog profile** or showing
- pids associated to BPF objects may rely on it).
+ Print bpftool's version number (similar to **bpftool version**), the
+ number of the libbpf version in use, and optional features that were
+ included when bpftool was compiled. Optional features include linking
+ against libbfd to provide the disassembler for JIT-ted programs
+ (**bpftool prog dump jited**) and usage of BPF skeletons (some
+ features like **bpftool prog profile** or showing pids associated to
+ BPF objects may rely on it).
-j, --json
Generate JSON output. For commands that cannot produce JSON, this
diff --git a/tools/bpf/bpftool/Makefile b/tools/bpf/bpftool/Makefile
index 83369f55df61..9800f966fd51 100644
--- a/tools/bpf/bpftool/Makefile
+++ b/tools/bpf/bpftool/Makefile
@@ -18,37 +18,33 @@ BPF_DIR = $(srctree)/tools/lib/bpf
ifneq ($(OUTPUT),)
_OUTPUT := $(OUTPUT)
else
- _OUTPUT := $(CURDIR)
+ _OUTPUT := $(CURDIR)/
endif
-BOOTSTRAP_OUTPUT := $(_OUTPUT)/bootstrap/
+BOOTSTRAP_OUTPUT := $(_OUTPUT)bootstrap/
-LIBBPF_OUTPUT := $(_OUTPUT)/libbpf/
+LIBBPF_OUTPUT := $(_OUTPUT)libbpf/
LIBBPF_DESTDIR := $(LIBBPF_OUTPUT)
-LIBBPF_INCLUDE := $(LIBBPF_DESTDIR)/include
+LIBBPF_INCLUDE := $(LIBBPF_DESTDIR)include
LIBBPF_HDRS_DIR := $(LIBBPF_INCLUDE)/bpf
LIBBPF := $(LIBBPF_OUTPUT)libbpf.a
LIBBPF_BOOTSTRAP_OUTPUT := $(BOOTSTRAP_OUTPUT)libbpf/
LIBBPF_BOOTSTRAP_DESTDIR := $(LIBBPF_BOOTSTRAP_OUTPUT)
-LIBBPF_BOOTSTRAP_INCLUDE := $(LIBBPF_BOOTSTRAP_DESTDIR)/include
+LIBBPF_BOOTSTRAP_INCLUDE := $(LIBBPF_BOOTSTRAP_DESTDIR)include
LIBBPF_BOOTSTRAP_HDRS_DIR := $(LIBBPF_BOOTSTRAP_INCLUDE)/bpf
LIBBPF_BOOTSTRAP := $(LIBBPF_BOOTSTRAP_OUTPUT)libbpf.a
-# We need to copy hashmap.h and nlattr.h which is not otherwise exported by
-# libbpf, but still required by bpftool.
-LIBBPF_INTERNAL_HDRS := $(addprefix $(LIBBPF_HDRS_DIR)/,hashmap.h nlattr.h)
-LIBBPF_BOOTSTRAP_INTERNAL_HDRS := $(addprefix $(LIBBPF_BOOTSTRAP_HDRS_DIR)/,hashmap.h)
-
-ifeq ($(BPFTOOL_VERSION),)
-BPFTOOL_VERSION := $(shell make -rR --no-print-directory -sC ../../.. kernelversion)
-endif
+# We need to copy hashmap.h, nlattr.h, relo_core.h and libbpf_internal.h
+# which are not otherwise exported by libbpf, but still required by bpftool.
+LIBBPF_INTERNAL_HDRS := $(addprefix $(LIBBPF_HDRS_DIR)/,hashmap.h nlattr.h relo_core.h libbpf_internal.h)
+LIBBPF_BOOTSTRAP_INTERNAL_HDRS := $(addprefix $(LIBBPF_BOOTSTRAP_HDRS_DIR)/,hashmap.h relo_core.h libbpf_internal.h)
$(LIBBPF_OUTPUT) $(BOOTSTRAP_OUTPUT) $(LIBBPF_BOOTSTRAP_OUTPUT) $(LIBBPF_HDRS_DIR) $(LIBBPF_BOOTSTRAP_HDRS_DIR):
$(QUIET_MKDIR)mkdir -p $@
$(LIBBPF): $(wildcard $(BPF_DIR)/*.[ch] $(BPF_DIR)/Makefile) | $(LIBBPF_OUTPUT)
$(Q)$(MAKE) -C $(BPF_DIR) OUTPUT=$(LIBBPF_OUTPUT) \
- DESTDIR=$(LIBBPF_DESTDIR) prefix= $(LIBBPF) install_headers
+ DESTDIR=$(LIBBPF_DESTDIR:/=) prefix= $(LIBBPF) install_headers
$(LIBBPF_INTERNAL_HDRS): $(LIBBPF_HDRS_DIR)/%.h: $(BPF_DIR)/%.h | $(LIBBPF_HDRS_DIR)
$(call QUIET_INSTALL, $@)
@@ -56,7 +52,7 @@ $(LIBBPF_INTERNAL_HDRS): $(LIBBPF_HDRS_DIR)/%.h: $(BPF_DIR)/%.h | $(LIBBPF_HDRS_
$(LIBBPF_BOOTSTRAP): $(wildcard $(BPF_DIR)/*.[ch] $(BPF_DIR)/Makefile) | $(LIBBPF_BOOTSTRAP_OUTPUT)
$(Q)$(MAKE) -C $(BPF_DIR) OUTPUT=$(LIBBPF_BOOTSTRAP_OUTPUT) \
- DESTDIR=$(LIBBPF_BOOTSTRAP_DESTDIR) prefix= \
+ DESTDIR=$(LIBBPF_BOOTSTRAP_DESTDIR:/=) prefix= \
ARCH= CROSS_COMPILE= CC=$(HOSTCC) LD=$(HOSTLD) $@ install_headers
$(LIBBPF_BOOTSTRAP_INTERNAL_HDRS): $(LIBBPF_BOOTSTRAP_HDRS_DIR)/%.h: $(BPF_DIR)/%.h | $(LIBBPF_BOOTSTRAP_HDRS_DIR)
@@ -83,7 +79,9 @@ CFLAGS += -DPACKAGE='"bpftool"' -D__EXPORTED_HEADERS__ \
-I$(srctree)/kernel/bpf/ \
-I$(srctree)/tools/include \
-I$(srctree)/tools/include/uapi
+ifneq ($(BPFTOOL_VERSION),)
CFLAGS += -DBPFTOOL_VERSION='"$(BPFTOOL_VERSION)"'
+endif
ifneq ($(EXTRA_CFLAGS),)
CFLAGS += $(EXTRA_CFLAGS)
endif
@@ -95,7 +93,7 @@ INSTALL ?= install
RM ?= rm -f
FEATURE_USER = .bpftool
-FEATURE_TESTS = libbfd disassembler-four-args reallocarray zlib libcap \
+FEATURE_TESTS = libbfd disassembler-four-args zlib libcap \
clang-bpf-co-re
FEATURE_DISPLAY = libbfd disassembler-four-args zlib libcap \
clang-bpf-co-re
@@ -120,10 +118,6 @@ ifeq ($(feature-disassembler-four-args), 1)
CFLAGS += -DDISASM_FOUR_ARGS_SIGNATURE
endif
-ifeq ($(feature-reallocarray), 0)
-CFLAGS += -DCOMPAT_NEED_REALLOCARRAY
-endif
-
LIBS = $(LIBBPF) -lelf -lz
LIBS_BOOTSTRAP = $(LIBBPF_BOOTSTRAP) -lelf -lz
ifeq ($(feature-libcap), 1)
diff --git a/tools/bpf/bpftool/bash-completion/bpftool b/tools/bpf/bpftool/bash-completion/bpftool
index 493753a4962e..5df8d72c5179 100644
--- a/tools/bpf/bpftool/bash-completion/bpftool
+++ b/tools/bpf/bpftool/bash-completion/bpftool
@@ -1003,9 +1003,25 @@ _bpftool()
;;
esac
;;
+ subskeleton)
+ case $prev in
+ $command)
+ _filedir
+ return 0
+ ;;
+ *)
+ _bpftool_once_attr 'name'
+ return 0
+ ;;
+ esac
+ ;;
+ min_core_btf)
+ _filedir
+ return 0
+ ;;
*)
[[ $prev == $object ]] && \
- COMPREPLY=( $( compgen -W 'object skeleton help' -- "$cur" ) )
+ COMPREPLY=( $( compgen -W 'object skeleton subskeleton help min_core_btf' -- "$cur" ) )
;;
esac
;;
diff --git a/tools/bpf/bpftool/btf.c b/tools/bpf/bpftool/btf.c
index 59833125ac0a..a2c665beda87 100644
--- a/tools/bpf/bpftool/btf.c
+++ b/tools/bpf/bpftool/btf.c
@@ -902,7 +902,7 @@ static int do_show(int argc, char **argv)
equal_fn_for_key_as_id, NULL);
btf_map_table = hashmap__new(hash_fn_for_key_as_id,
equal_fn_for_key_as_id, NULL);
- if (!btf_prog_table || !btf_map_table) {
+ if (IS_ERR(btf_prog_table) || IS_ERR(btf_map_table)) {
hashmap__free(btf_prog_table);
hashmap__free(btf_map_table);
if (fd >= 0)
diff --git a/tools/bpf/bpftool/cgroup.c b/tools/bpf/bpftool/cgroup.c
index 3571a281c43f..effe136119d7 100644
--- a/tools/bpf/bpftool/cgroup.c
+++ b/tools/bpf/bpftool/cgroup.c
@@ -50,6 +50,7 @@ static int show_bpf_prog(int id, enum bpf_attach_type attach_type,
const char *attach_flags_str,
int level)
{
+ char prog_name[MAX_PROG_FULL_NAME];
struct bpf_prog_info info = {};
__u32 info_len = sizeof(info);
int prog_fd;
@@ -63,6 +64,7 @@ static int show_bpf_prog(int id, enum bpf_attach_type attach_type,
return -1;
}
+ get_prog_full_name(&info, prog_fd, prog_name, sizeof(prog_name));
if (json_output) {
jsonw_start_object(json_wtr);
jsonw_uint_field(json_wtr, "id", info.id);
@@ -73,7 +75,7 @@ static int show_bpf_prog(int id, enum bpf_attach_type attach_type,
jsonw_uint_field(json_wtr, "attach_type", attach_type);
jsonw_string_field(json_wtr, "attach_flags",
attach_flags_str);
- jsonw_string_field(json_wtr, "name", info.name);
+ jsonw_string_field(json_wtr, "name", prog_name);
jsonw_end_object(json_wtr);
} else {
printf("%s%-8u ", level ? " " : "", info.id);
@@ -81,7 +83,7 @@ static int show_bpf_prog(int id, enum bpf_attach_type attach_type,
printf("%-15s", attach_type_name[attach_type]);
else
printf("type %-10u", attach_type);
- printf(" %-15s %-15s\n", attach_flags_str, info.name);
+ printf(" %-15s %-15s\n", attach_flags_str, prog_name);
}
close(prog_fd);
diff --git a/tools/bpf/bpftool/common.c b/tools/bpf/bpftool/common.c
index fa8eb8134344..0c1e06cf50b9 100644
--- a/tools/bpf/bpftool/common.c
+++ b/tools/bpf/bpftool/common.c
@@ -24,6 +24,7 @@
#include <bpf/bpf.h>
#include <bpf/hashmap.h>
#include <bpf/libbpf.h> /* libbpf_num_possible_cpus */
+#include <bpf/btf.h>
#include "main.h"
@@ -55,7 +56,6 @@ const char * const attach_type_name[__MAX_BPF_ATTACH_TYPE] = {
[BPF_CGROUP_UDP6_RECVMSG] = "recvmsg6",
[BPF_CGROUP_GETSOCKOPT] = "getsockopt",
[BPF_CGROUP_SETSOCKOPT] = "setsockopt",
-
[BPF_SK_SKB_STREAM_PARSER] = "sk_skb_stream_parser",
[BPF_SK_SKB_STREAM_VERDICT] = "sk_skb_stream_verdict",
[BPF_SK_SKB_VERDICT] = "sk_skb_verdict",
@@ -75,6 +75,7 @@ const char * const attach_type_name[__MAX_BPF_ATTACH_TYPE] = {
[BPF_SK_REUSEPORT_SELECT] = "sk_skb_reuseport_select",
[BPF_SK_REUSEPORT_SELECT_OR_MIGRATE] = "sk_skb_reuseport_select_or_migrate",
[BPF_PERF_EVENT] = "perf_event",
+ [BPF_TRACE_KPROBE_MULTI] = "trace_kprobe_multi",
};
void p_err(const char *fmt, ...)
@@ -304,6 +305,49 @@ const char *get_fd_type_name(enum bpf_obj_type type)
return names[type];
}
+void get_prog_full_name(const struct bpf_prog_info *prog_info, int prog_fd,
+ char *name_buff, size_t buff_len)
+{
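+ /* The kernel stores at most BPF_OBJ_NAME_LEN - 1 bytes of a program's
+ * name, so long names come back truncated. If the caller's buffer can
+ * hold more and the name looks truncated, try to recover the full
+ * function name from the program's BTF func_info.
+ */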
+ const char *prog_name = prog_info->name;
+ const struct btf_type *func_type;
+ const struct bpf_func_info finfo = {};
+ struct bpf_prog_info info = {};
+ __u32 info_len = sizeof(info);
+ struct btf *prog_btf = NULL;
+
+ if (buff_len <= BPF_OBJ_NAME_LEN ||
+ strlen(prog_info->name) < BPF_OBJ_NAME_LEN - 1)
+ goto copy_name;
+
+ if (!prog_info->btf_id || prog_info->nr_func_info == 0)
+ goto copy_name;
+
+ info.nr_func_info = 1;
+ info.func_info_rec_size = prog_info->func_info_rec_size;
+ if (info.func_info_rec_size > sizeof(finfo))
+ info.func_info_rec_size = sizeof(finfo);
+ info.func_info = ptr_to_u64(&finfo);
+
+ if (bpf_obj_get_info_by_fd(prog_fd, &info, &info_len))
+ goto copy_name;
+
+ prog_btf = btf__load_from_kernel_by_id(info.btf_id);
+ if (!prog_btf)
+ goto copy_name;
+
+ func_type = btf__type_by_id(prog_btf, finfo.type_id);
+ if (!func_type || !btf_is_func(func_type))
+ goto copy_name;
+
+ prog_name = btf__name_by_offset(prog_btf, func_type->name_off);
+
+copy_name:
+ snprintf(name_buff, buff_len, "%s", prog_name);
+
+ if (prog_btf)
+ btf__free(prog_btf);
+}
+
int get_fd_type(int fd)
{
char path[PATH_MAX];
diff --git a/tools/bpf/bpftool/feature.c b/tools/bpf/bpftool/feature.c
index e999159fa28d..c2f43a5d38e0 100644
--- a/tools/bpf/bpftool/feature.c
+++ b/tools/bpf/bpftool/feature.c
@@ -3,6 +3,7 @@
#include <ctype.h>
#include <errno.h>
+#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <net/if.h>
@@ -45,6 +46,11 @@ static bool run_as_unprivileged;
/* Miscellaneous utility functions */
+static bool grep(const char *buffer, const char *pattern)
+{
+ return !!strstr(buffer, pattern);
+}
+
static bool check_procfs(void)
{
struct statfs st_fs;
@@ -135,6 +141,32 @@ static void print_end_section(void)
/* Probing functions */
+static int get_vendor_id(int ifindex)
+{
+ char ifname[IF_NAMESIZE], path[64], buf[8];
+ ssize_t len;
+ int fd;
+
+ if (!if_indextoname(ifindex, ifname))
+ return -1;
+
+ snprintf(path, sizeof(path), "/sys/class/net/%s/device/vendor", ifname);
+
+ fd = open(path, O_RDONLY | O_CLOEXEC);
+ if (fd < 0)
+ return -1;
+
+ len = read(fd, buf, sizeof(buf));
+ close(fd);
+ if (len < 0)
+ return -1;
+ if (len >= (ssize_t)sizeof(buf))
+ return -1;
+ buf[len] = '\0';
+
+ return strtol(buf, NULL, 0);
+}
+
static int read_procfs(const char *path)
{
char *endptr, *line = NULL;
@@ -478,6 +510,40 @@ static bool probe_bpf_syscall(const char *define_prefix)
return res;
}
+static bool
+probe_prog_load_ifindex(enum bpf_prog_type prog_type,
+ const struct bpf_insn *insns, size_t insns_cnt,
+ char *log_buf, size_t log_buf_sz,
+ __u32 ifindex)
+{
+ LIBBPF_OPTS(bpf_prog_load_opts, opts,
+ .log_buf = log_buf,
+ .log_size = log_buf_sz,
+ .log_level = log_buf ? 1 : 0,
+ .prog_ifindex = ifindex,
+ );
+ int fd;
+
+ errno = 0;
+ fd = bpf_prog_load(prog_type, NULL, "GPL", insns, insns_cnt, &opts);
+ if (fd >= 0)
+ close(fd);
+
+ return fd >= 0 && errno != EINVAL && errno != EOPNOTSUPP;
+}
+
+static bool probe_prog_type_ifindex(enum bpf_prog_type prog_type, __u32 ifindex)
+{
+ /* nfp returns -EINVAL on exit(0) with TC offload */
+ struct bpf_insn insns[2] = {
+ BPF_MOV64_IMM(BPF_REG_0, 2),
+ BPF_EXIT_INSN()
+ };
+
+ return probe_prog_load_ifindex(prog_type, insns, ARRAY_SIZE(insns),
+ NULL, 0, ifindex);
+}
+
static void
probe_prog_type(enum bpf_prog_type prog_type, bool *supported_types,
const char *define_prefix, __u32 ifindex)
@@ -487,8 +553,7 @@ probe_prog_type(enum bpf_prog_type prog_type, bool *supported_types,
size_t maxlen;
bool res;
- if (ifindex)
- /* Only test offload-able program types */
+ if (ifindex) {
switch (prog_type) {
case BPF_PROG_TYPE_SCHED_CLS:
case BPF_PROG_TYPE_XDP:
@@ -497,7 +562,11 @@ probe_prog_type(enum bpf_prog_type prog_type, bool *supported_types,
return;
}
- res = bpf_probe_prog_type(prog_type, ifindex);
+ res = probe_prog_type_ifindex(prog_type, ifindex);
+ } else {
+ res = libbpf_probe_bpf_prog_type(prog_type, NULL);
+ }
+
#ifdef USE_LIBCAP
/* Probe may succeed even if program load fails, for unprivileged users
* check that we did not fail because of insufficient permissions
@@ -526,6 +595,26 @@ probe_prog_type(enum bpf_prog_type prog_type, bool *supported_types,
define_prefix);
}
+static bool probe_map_type_ifindex(enum bpf_map_type map_type, __u32 ifindex)
+{
+ LIBBPF_OPTS(bpf_map_create_opts, opts);
+ int key_size, value_size, max_entries;
+ int fd;
+
+ opts.map_ifindex = ifindex;
+
+ key_size = sizeof(__u32);
+ value_size = sizeof(__u32);
+ max_entries = 1;
+
+ fd = bpf_map_create(map_type, NULL, key_size, value_size, max_entries,
+ &opts);
+ if (fd >= 0)
+ close(fd);
+
+ return fd >= 0;
+}
+
static void
probe_map_type(enum bpf_map_type map_type, const char *define_prefix,
__u32 ifindex)
@@ -535,7 +624,19 @@ probe_map_type(enum bpf_map_type map_type, const char *define_prefix,
size_t maxlen;
bool res;
- res = bpf_probe_map_type(map_type, ifindex);
+ if (ifindex) {
+ switch (map_type) {
+ case BPF_MAP_TYPE_HASH:
+ case BPF_MAP_TYPE_ARRAY:
+ break;
+ default:
+ return;
+ }
+
+ res = probe_map_type_ifindex(map_type, ifindex);
+ } else {
+ res = libbpf_probe_bpf_map_type(map_type, NULL);
+ }
/* Probe result depends on the success of map creation, no additional
* check required for unprivileged users
@@ -559,6 +660,33 @@ probe_map_type(enum bpf_map_type map_type, const char *define_prefix,
define_prefix);
}
+static bool
+probe_helper_ifindex(enum bpf_func_id id, enum bpf_prog_type prog_type,
+ __u32 ifindex)
+{
+ struct bpf_insn insns[2] = {
+ BPF_EMIT_CALL(id),
+ BPF_EXIT_INSN()
+ };
+ char buf[4096] = {};
+ bool res;
+
+ probe_prog_load_ifindex(prog_type, insns, ARRAY_SIZE(insns), buf,
+ sizeof(buf), ifindex);
+ res = !grep(buf, "invalid func ") && !grep(buf, "unknown func ");
+
+ switch (get_vendor_id(ifindex)) {
+ case 0x19ee: /* Netronome specific */
+ res = res && !grep(buf, "not supported by FW") &&
+ !grep(buf, "unsupported function id");
+ break;
+ default:
+ break;
+ }
+
+ return res;
+}
+
static void
probe_helper_for_progtype(enum bpf_prog_type prog_type, bool supported_type,
const char *define_prefix, unsigned int id,
@@ -567,7 +695,10 @@ probe_helper_for_progtype(enum bpf_prog_type prog_type, bool supported_type,
bool res = false;
if (supported_type) {
- res = bpf_probe_helper(id, prog_type, ifindex);
+ if (ifindex)
+ res = probe_helper_ifindex(id, prog_type, ifindex);
+ else
+ res = libbpf_probe_bpf_helper(prog_type, id, NULL);
#ifdef USE_LIBCAP
/* Probe may succeed even if program load fails, for
* unprivileged users check that we did not fail because of
diff --git a/tools/bpf/bpftool/gen.c b/tools/bpf/bpftool/gen.c
index b4695df2ea3d..7ba7ff55d2ea 100644
--- a/tools/bpf/bpftool/gen.c
+++ b/tools/bpf/bpftool/gen.c
@@ -14,6 +14,7 @@
#include <unistd.h>
#include <bpf/bpf.h>
#include <bpf/libbpf.h>
+#include <bpf/libbpf_internal.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/mman.h>
@@ -63,11 +64,11 @@ static void get_obj_name(char *name, const char *file)
sanitize_identifier(name);
}
-static void get_header_guard(char *guard, const char *obj_name)
+static void get_header_guard(char *guard, const char *obj_name, const char *suffix)
{
int i;
- sprintf(guard, "__%s_SKEL_H__", obj_name);
+ sprintf(guard, "__%s_%s__", obj_name, suffix);
for (i = 0; guard[i]; i++)
guard[i] = toupper(guard[i]);
}
@@ -208,15 +209,47 @@ static int codegen_datasec_def(struct bpf_object *obj,
return 0;
}
+static const struct btf_type *find_type_for_map(struct btf *btf, const char *map_ident)
+{
+ int n = btf__type_cnt(btf), i;
+ char sec_ident[256];
+
+ for (i = 1; i < n; i++) {
+ const struct btf_type *t = btf__type_by_id(btf, i);
+ const char *name;
+
+ if (!btf_is_datasec(t))
+ continue;
+
+ name = btf__str_by_offset(btf, t->name_off);
+ if (!get_datasec_ident(name, sec_ident, sizeof(sec_ident)))
+ continue;
+
+ if (strcmp(sec_ident, map_ident) == 0)
+ return t;
+ }
+ return NULL;
+}
+
+static bool is_internal_mmapable_map(const struct bpf_map *map, char *buf, size_t sz)
+{
+ if (!bpf_map__is_internal(map) || !(bpf_map__map_flags(map) & BPF_F_MMAPABLE))
+ return false;
+
+ if (!get_map_ident(map, buf, sz))
+ return false;
+
+ return true;
+}
+
static int codegen_datasecs(struct bpf_object *obj, const char *obj_name)
{
struct btf *btf = bpf_object__btf(obj);
- int n = btf__type_cnt(btf);
struct btf_dump *d;
struct bpf_map *map;
const struct btf_type *sec;
- char sec_ident[256], map_ident[256];
- int i, err = 0;
+ char map_ident[256];
+ int err = 0;
d = btf_dump__new(btf, codegen_btf_dump_printf, NULL, NULL);
err = libbpf_get_error(d);
@@ -225,31 +258,10 @@ static int codegen_datasecs(struct bpf_object *obj, const char *obj_name)
bpf_object__for_each_map(map, obj) {
/* only generate definitions for memory-mapped internal maps */
- if (!bpf_map__is_internal(map))
+ if (!is_internal_mmapable_map(map, map_ident, sizeof(map_ident)))
continue;
- if (!(bpf_map__def(map)->map_flags & BPF_F_MMAPABLE))
- continue;
-
- if (!get_map_ident(map, map_ident, sizeof(map_ident)))
- continue;
-
- sec = NULL;
- for (i = 1; i < n; i++) {
- const struct btf_type *t = btf__type_by_id(btf, i);
- const char *name;
-
- if (!btf_is_datasec(t))
- continue;
-
- name = btf__str_by_offset(btf, t->name_off);
- if (!get_datasec_ident(name, sec_ident, sizeof(sec_ident)))
- continue;
- if (strcmp(sec_ident, map_ident) == 0) {
- sec = t;
- break;
- }
- }
+ sec = find_type_for_map(btf, map_ident);
/* In some cases (e.g., sections like .rodata.cst16 containing
* compiler allocated string constants only) there will be
@@ -274,6 +286,96 @@ out:
return err;
}
+static bool btf_is_ptr_to_func_proto(const struct btf *btf,
+ const struct btf_type *v)
+{
+ return btf_is_ptr(v) && btf_is_func_proto(btf__type_by_id(btf, v->type));
+}
+
+static int codegen_subskel_datasecs(struct bpf_object *obj, const char *obj_name)
+{
+ struct btf *btf = bpf_object__btf(obj);
+ struct btf_dump *d;
+ struct bpf_map *map;
+ const struct btf_type *sec, *var;
+ const struct btf_var_secinfo *sec_var;
+ int i, err = 0, vlen;
+ char map_ident[256], sec_ident[256];
+ bool strip_mods = false, needs_typeof = false;
+ const char *sec_name, *var_name;
+ __u32 var_type_id;
+
+ d = btf_dump__new(btf, codegen_btf_dump_printf, NULL, NULL);
+ if (!d)
+ return -errno;
+
+ bpf_object__for_each_map(map, obj) {
+ /* only generate definitions for memory-mapped internal maps */
+ if (!is_internal_mmapable_map(map, map_ident, sizeof(map_ident)))
+ continue;
+
+ sec = find_type_for_map(btf, map_ident);
+ if (!sec)
+ continue;
+
+ sec_name = btf__name_by_offset(btf, sec->name_off);
+ if (!get_datasec_ident(sec_name, sec_ident, sizeof(sec_ident)))
+ continue;
+
+ strip_mods = strcmp(sec_name, ".kconfig") != 0;
+ printf(" struct %s__%s {\n", obj_name, sec_ident);
+
+ sec_var = btf_var_secinfos(sec);
+ vlen = btf_vlen(sec);
+ for (i = 0; i < vlen; i++, sec_var++) {
+ DECLARE_LIBBPF_OPTS(btf_dump_emit_type_decl_opts, opts,
+ .indent_level = 2,
+ .strip_mods = strip_mods,
+ /* we'll print the name separately */
+ .field_name = "",
+ );
+
+ var = btf__type_by_id(btf, sec_var->type);
+ var_name = btf__name_by_offset(btf, var->name_off);
+ var_type_id = var->type;
+
+ /* static variables are not exposed through BPF skeleton */
+ if (btf_var(var)->linkage == BTF_VAR_STATIC)
+ continue;
+
+ /* The datasec member has KIND_VAR but we want the
+ * underlying type of the variable (e.g. KIND_INT).
+ */
+ var = skip_mods_and_typedefs(btf, var->type, NULL);
+
+ printf("\t\t");
+ /* Func and array members require special handling.
+ * Instead of producing `typename *var`, they produce
+ * `typeof(typename) *var`. This allows us to keep a
+ * similar syntax where the identifier is just prefixed
+ * by *, allowing us to ignore C declaration minutiae.
+ */
+ needs_typeof = btf_is_array(var) || btf_is_ptr_to_func_proto(btf, var);
+ if (needs_typeof)
+ printf("typeof(");
+
+ err = btf_dump__emit_type_decl(d, var_type_id, &opts);
+ if (err)
+ goto out;
+
+ if (needs_typeof)
+ printf(")");
+
+ printf(" *%s;\n", var_name);
+ }
+ printf(" } %s;\n", sec_ident);
+ }
+
+out:
+ btf_dump__free(d);
+ return err;
+}
+
static void codegen(const char *template, ...)
{
const char *src, *end;
@@ -362,6 +464,69 @@ static size_t bpf_map_mmap_sz(const struct bpf_map *map)
return map_sz;
}
+/* Emit type size asserts for all top-level fields in memory-mapped internal maps. */
+static void codegen_asserts(struct bpf_object *obj, const char *obj_name)
+{
+ struct btf *btf = bpf_object__btf(obj);
+ struct bpf_map *map;
+ struct btf_var_secinfo *sec_var;
+ int i, vlen;
+ const struct btf_type *sec;
+ char map_ident[256], var_ident[256];
+
+ codegen("\
+ \n\
+ __attribute__((unused)) static void \n\
+ %1$s__assert(struct %1$s *s) \n\
+ { \n\
+ #ifdef __cplusplus \n\
+ #define _Static_assert static_assert \n\
+ #endif \n\
+ ", obj_name);
+
+ bpf_object__for_each_map(map, obj) {
+ if (!is_internal_mmapable_map(map, map_ident, sizeof(map_ident)))
+ continue;
+
+ sec = find_type_for_map(btf, map_ident);
+ if (!sec) {
+ /* best effort, couldn't find the type for this map */
+ continue;
+ }
+
+ sec_var = btf_var_secinfos(sec);
+ vlen = btf_vlen(sec);
+
+ for (i = 0; i < vlen; i++, sec_var++) {
+ const struct btf_type *var = btf__type_by_id(btf, sec_var->type);
+ const char *var_name = btf__name_by_offset(btf, var->name_off);
+ long var_size;
+
+ /* static variables are not exposed through BPF skeleton */
+ if (btf_var(var)->linkage == BTF_VAR_STATIC)
+ continue;
+
+ var_size = btf__resolve_size(btf, var->type);
+ if (var_size < 0)
+ continue;
+
+ var_ident[0] = '\0';
+ strncat(var_ident, var_name, sizeof(var_ident) - 1);
+ sanitize_identifier(var_ident);
+
+ printf("\t_Static_assert(sizeof(s->%s->%s) == %ld, \"unexpected size of '%s'\");\n",
+ map_ident, var_ident, var_size, var_ident);
+ }
+ }
+ codegen("\
+ \n\
+ #ifdef __cplusplus \n\
+ #undef _Static_assert \n\
+ #endif \n\
+ } \n\
+ ");
+}
+
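Sketch of the expected output for a hypothetical object "iface" with a 'long total' variable in .bss (sizes assume a 64-bit target):

	__attribute__((unused)) static void
	iface__assert(struct iface *s)
	{
	#ifdef __cplusplus
	#define _Static_assert static_assert
	#endif
		_Static_assert(sizeof(s->bss->total) == 8, "unexpected size of 'total'");
	#ifdef __cplusplus
	#undef _Static_assert
	#endif
	}

This turns a skeleton/object mismatch into a compile-time error instead of silent data corruption.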
static void codegen_attach_detach(struct bpf_object *obj, const char *obj_name)
{
struct bpf_program *prog;
@@ -378,13 +543,16 @@ static void codegen_attach_detach(struct bpf_object *obj, const char *obj_name)
int prog_fd = skel->progs.%2$s.prog_fd; \n\
", obj_name, bpf_program__name(prog));
- switch (bpf_program__get_type(prog)) {
+ switch (bpf_program__type(prog)) {
case BPF_PROG_TYPE_RAW_TRACEPOINT:
tp_name = strchr(bpf_program__section_name(prog), '/') + 1;
- printf("\tint fd = bpf_raw_tracepoint_open(\"%s\", prog_fd);\n", tp_name);
+ printf("\tint fd = skel_raw_tracepoint_open(\"%s\", prog_fd);\n", tp_name);
break;
case BPF_PROG_TYPE_TRACING:
- printf("\tint fd = bpf_raw_tracepoint_open(NULL, prog_fd);\n");
+ if (bpf_program__expected_attach_type(prog) == BPF_TRACE_ITER)
+ printf("\tint fd = skel_link_create(prog_fd, 0, BPF_TRACE_ITER);\n");
+ else
+ printf("\tint fd = skel_raw_tracepoint_open(NULL, prog_fd);\n");
break;
default:
printf("\tint fd = ((void)prog_fd, 0); /* auto-attach not supported */\n");
@@ -468,8 +636,8 @@ static void codegen_destroy(struct bpf_object *obj, const char *obj_name)
if (!get_map_ident(map, ident, sizeof(ident)))
continue;
if (bpf_map__is_internal(map) &&
- (bpf_map__def(map)->map_flags & BPF_F_MMAPABLE))
- printf("\tmunmap(skel->%1$s, %2$zd);\n",
+ (bpf_map__map_flags(map) & BPF_F_MMAPABLE))
+ printf("\tskel_free_map_data(skel->%1$s, skel->maps.%1$s.initial_value, %2$zd);\n",
ident, bpf_map_mmap_sz(map));
codegen("\
\n\
@@ -478,7 +646,7 @@ static void codegen_destroy(struct bpf_object *obj, const char *obj_name)
}
codegen("\
\n\
- free(skel); \n\
+ skel_free(skel); \n\
} \n\
",
obj_name);
@@ -522,7 +690,7 @@ static int gen_trace(struct bpf_object *obj, const char *obj_name, const char *h
{ \n\
struct %1$s *skel; \n\
\n\
- skel = calloc(sizeof(*skel), 1); \n\
+ skel = skel_alloc(sizeof(*skel)); \n\
if (!skel) \n\
goto cleanup; \n\
skel->ctx.sz = (void *)&skel->links - (void *)skel; \n\
@@ -532,27 +700,22 @@ static int gen_trace(struct bpf_object *obj, const char *obj_name, const char *h
const void *mmap_data = NULL;
size_t mmap_size = 0;
- if (!get_map_ident(map, ident, sizeof(ident)))
- continue;
-
- if (!bpf_map__is_internal(map) ||
- !(bpf_map__def(map)->map_flags & BPF_F_MMAPABLE))
+ if (!is_internal_mmapable_map(map, ident, sizeof(ident)))
continue;
codegen("\
- \n\
- skel->%1$s = \n\
- mmap(NULL, %2$zd, PROT_READ | PROT_WRITE,\n\
- MAP_SHARED | MAP_ANONYMOUS, -1, 0); \n\
- if (skel->%1$s == (void *) -1) \n\
- goto cleanup; \n\
- memcpy(skel->%1$s, (void *)\"\\ \n\
- ", ident, bpf_map_mmap_sz(map));
+ \n\
+ skel->%1$s = skel_prep_map_data((void *)\"\\ \n\
+ ", ident);
mmap_data = bpf_map__initial_value(map, &mmap_size);
print_hex(mmap_data, mmap_size);
- printf("\", %2$zd);\n"
- "\tskel->maps.%1$s.initial_value = (__u64)(long)skel->%1$s;\n",
- ident, mmap_size);
+ codegen("\
+ \n\
+ \", %1$zd, %2$zd); \n\
+ if (!skel->%3$s) \n\
+ goto cleanup; \n\
+ skel->maps.%3$s.initial_value = (__u64) (long) skel->%3$s;\n\
+ ", bpf_map_mmap_sz(map), mmap_size, ident);
}
codegen("\
\n\
@@ -596,21 +759,21 @@ static int gen_trace(struct bpf_object *obj, const char *obj_name, const char *h
bpf_object__for_each_map(map, obj) {
const char *mmap_flags;
- if (!get_map_ident(map, ident, sizeof(ident)))
- continue;
-
- if (!bpf_map__is_internal(map) ||
- !(bpf_map__def(map)->map_flags & BPF_F_MMAPABLE))
+ if (!is_internal_mmapable_map(map, ident, sizeof(ident)))
continue;
- if (bpf_map__def(map)->map_flags & BPF_F_RDONLY_PROG)
+ if (bpf_map__map_flags(map) & BPF_F_RDONLY_PROG)
mmap_flags = "PROT_READ";
else
mmap_flags = "PROT_READ | PROT_WRITE";
- printf("\tskel->%1$s =\n"
- "\t\tmmap(skel->%1$s, %2$zd, %3$s, MAP_SHARED | MAP_FIXED,\n"
- "\t\t\tskel->maps.%1$s.map_fd, 0);\n",
+ codegen("\
+ \n\
+ skel->%1$s = skel_finalize_map_data(&skel->maps.%1$s.initial_value, \n\
+ %2$zd, %3$s, skel->maps.%1$s.map_fd);\n\
+ if (!skel->%1$s) \n\
+ return -ENOMEM; \n\
+ ",
ident, bpf_map_mmap_sz(map), mmap_flags);
}
codegen("\
@@ -632,8 +795,11 @@ static int gen_trace(struct bpf_object *obj, const char *obj_name, const char *h
} \n\
return skel; \n\
} \n\
+ \n\
", obj_name);
+ codegen_asserts(obj, obj_name);
+
codegen("\
\n\
\n\
@@ -645,10 +811,95 @@ out:
return err;
}
+static void
+codegen_maps_skeleton(struct bpf_object *obj, size_t map_cnt, bool mmaped)
+{
+ struct bpf_map *map;
+ char ident[256];
+ size_t i;
+
+ if (!map_cnt)
+ return;
+
+ codegen("\
+ \n\
+ \n\
+ /* maps */ \n\
+ s->map_cnt = %zu; \n\
+ s->map_skel_sz = sizeof(*s->maps); \n\
+ s->maps = (struct bpf_map_skeleton *)calloc(s->map_cnt, s->map_skel_sz);\n\
+ if (!s->maps) \n\
+ goto err; \n\
+ ",
+ map_cnt
+ );
+ i = 0;
+ bpf_object__for_each_map(map, obj) {
+ if (!get_map_ident(map, ident, sizeof(ident)))
+ continue;
+
+ codegen("\
+ \n\
+ \n\
+ s->maps[%zu].name = \"%s\"; \n\
+ s->maps[%zu].map = &obj->maps.%s; \n\
+ ",
+ i, bpf_map__name(map), i, ident);
+ /* memory-mapped internal maps */
+ if (mmaped && is_internal_mmapable_map(map, ident, sizeof(ident))) {
+ printf("\ts->maps[%zu].mmaped = (void **)&obj->%s;\n",
+ i, ident);
+ }
+ i++;
+ }
+}
+
+static void
+codegen_progs_skeleton(struct bpf_object *obj, size_t prog_cnt, bool populate_links)
+{
+ struct bpf_program *prog;
+ int i;
+
+ if (!prog_cnt)
+ return;
+
+ codegen("\
+ \n\
+ \n\
+ /* programs */ \n\
+ s->prog_cnt = %zu; \n\
+ s->prog_skel_sz = sizeof(*s->progs); \n\
+ s->progs = (struct bpf_prog_skeleton *)calloc(s->prog_cnt, s->prog_skel_sz);\n\
+ if (!s->progs) \n\
+ goto err; \n\
+ ",
+ prog_cnt
+ );
+ i = 0;
+ bpf_object__for_each_program(prog, obj) {
+ codegen("\
+ \n\
+ \n\
+ s->progs[%1$zu].name = \"%2$s\"; \n\
+ s->progs[%1$zu].prog = &obj->progs.%2$s;\n\
+ ",
+ i, bpf_program__name(prog));
+
+ if (populate_links) {
+ codegen("\
+ \n\
+ s->progs[%1$zu].link = &obj->links.%2$s;\n\
+ ",
+ i, bpf_program__name(prog));
+ }
+ i++;
+ }
+}
+
static int do_skeleton(int argc, char **argv)
{
char header_guard[MAX_OBJ_NAME_LEN + sizeof("__SKEL_H__")];
- size_t i, map_cnt = 0, prog_cnt = 0, file_sz, mmap_sz;
+ size_t map_cnt = 0, prog_cnt = 0, file_sz, mmap_sz;
DECLARE_LIBBPF_OPTS(bpf_object_open_opts, opts);
char obj_name[MAX_OBJ_NAME_LEN] = "", *obj_data;
struct bpf_object *obj = NULL;
@@ -739,7 +990,7 @@ static int do_skeleton(int argc, char **argv)
prog_cnt++;
}
- get_header_guard(header_guard, obj_name);
+ get_header_guard(header_guard, obj_name, "SKEL_H");
if (use_loader) {
codegen("\
\n\
@@ -748,8 +999,6 @@ static int do_skeleton(int argc, char **argv)
#ifndef %2$s \n\
#define %2$s \n\
\n\
- #include <stdlib.h> \n\
- #include <bpf/bpf.h> \n\
#include <bpf/skel_internal.h> \n\
\n\
struct %1$s { \n\
@@ -827,6 +1076,16 @@ static int do_skeleton(int argc, char **argv)
codegen("\
\n\
+ \n\
+ #ifdef __cplusplus \n\
+ static inline struct %1$s *open(const struct bpf_object_open_opts *opts = nullptr);\n\
+ static inline struct %1$s *open_and_load(); \n\
+ static inline int load(struct %1$s *skel); \n\
+ static inline int attach(struct %1$s *skel); \n\
+ static inline void detach(struct %1$s *skel); \n\
+ static inline void destroy(struct %1$s *skel); \n\
+ static inline const void *elf_bytes(size_t *sz); \n\
+ #endif /* __cplusplus */ \n\
}; \n\
\n\
static void \n\
@@ -927,7 +1186,6 @@ static int do_skeleton(int argc, char **argv)
s = (struct bpf_object_skeleton *)calloc(1, sizeof(*s));\n\
if (!s) \n\
goto err; \n\
- obj->skeleton = s; \n\
\n\
s->sz = sizeof(*s); \n\
s->name = \"%1$s\"; \n\
@@ -935,71 +1193,16 @@ static int do_skeleton(int argc, char **argv)
",
obj_name
);
- if (map_cnt) {
- codegen("\
- \n\
- \n\
- /* maps */ \n\
- s->map_cnt = %zu; \n\
- s->map_skel_sz = sizeof(*s->maps); \n\
- s->maps = (struct bpf_map_skeleton *)calloc(s->map_cnt, s->map_skel_sz);\n\
- if (!s->maps) \n\
- goto err; \n\
- ",
- map_cnt
- );
- i = 0;
- bpf_object__for_each_map(map, obj) {
- if (!get_map_ident(map, ident, sizeof(ident)))
- continue;
- codegen("\
- \n\
- \n\
- s->maps[%zu].name = \"%s\"; \n\
- s->maps[%zu].map = &obj->maps.%s; \n\
- ",
- i, bpf_map__name(map), i, ident);
- /* memory-mapped internal maps */
- if (bpf_map__is_internal(map) &&
- (bpf_map__def(map)->map_flags & BPF_F_MMAPABLE)) {
- printf("\ts->maps[%zu].mmaped = (void **)&obj->%s;\n",
- i, ident);
- }
- i++;
- }
- }
- if (prog_cnt) {
- codegen("\
- \n\
- \n\
- /* programs */ \n\
- s->prog_cnt = %zu; \n\
- s->prog_skel_sz = sizeof(*s->progs); \n\
- s->progs = (struct bpf_prog_skeleton *)calloc(s->prog_cnt, s->prog_skel_sz);\n\
- if (!s->progs) \n\
- goto err; \n\
- ",
- prog_cnt
- );
- i = 0;
- bpf_object__for_each_program(prog, obj) {
- codegen("\
- \n\
- \n\
- s->progs[%1$zu].name = \"%2$s\"; \n\
- s->progs[%1$zu].prog = &obj->progs.%2$s;\n\
- s->progs[%1$zu].link = &obj->links.%2$s;\n\
- ",
- i, bpf_program__name(prog));
- i++;
- }
- }
+ codegen_maps_skeleton(obj, map_cnt, true /*mmaped*/);
+ codegen_progs_skeleton(obj, prog_cnt, true /*populate_links*/);
+
codegen("\
\n\
\n\
s->data = (void *)%2$s__elf_bytes(&s->data_sz); \n\
\n\
+ obj->skeleton = s; \n\
return 0; \n\
err: \n\
bpf_object__destroy_skeleton(s); \n\
@@ -1021,7 +1224,25 @@ static int do_skeleton(int argc, char **argv)
\"; \n\
} \n\
\n\
- #endif /* %s */ \n\
+ #ifdef __cplusplus \n\
+ struct %1$s *%1$s::open(const struct bpf_object_open_opts *opts) { return %1$s__open_opts(opts); }\n\
+ struct %1$s *%1$s::open_and_load() { return %1$s__open_and_load(); } \n\
+ int %1$s::load(struct %1$s *skel) { return %1$s__load(skel); } \n\
+ int %1$s::attach(struct %1$s *skel) { return %1$s__attach(skel); } \n\
+ void %1$s::detach(struct %1$s *skel) { %1$s__detach(skel); } \n\
+ void %1$s::destroy(struct %1$s *skel) { %1$s__destroy(skel); } \n\
+ const void *%1$s::elf_bytes(size_t *sz) { return %1$s__elf_bytes(sz); } \n\
+ #endif /* __cplusplus */ \n\
+ \n\
+ ",
+ obj_name);
+
+ codegen_asserts(obj, obj_name);
+
+ codegen("\
+ \n\
+ \n\
+ #endif /* %1$s */ \n\
",
header_guard);
err = 0;
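With the C++ wrappers above, a generated skeleton can be consumed from C++ roughly like this ("iface" is again a hypothetical object name):

	struct iface *skel = iface::open_and_load();
	if (!skel)
		return 1;
	if (iface::attach(skel)) {
		iface::destroy(skel);
		return 1;
	}
	/* ... */
	iface::destroy(skel);

The static methods simply forward to the existing C functions, so C consumers are unaffected.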
@@ -1033,6 +1254,310 @@ out:
return err;
}
+/* Subskeletons are like skeletons, except they don't own the bpf_object,
+ * associated maps, links, etc. Instead, they know about the existence of
+ * variables, maps, and programs, and are able to find their locations
+ * _at runtime_ from an already loaded bpf_object.
+ *
+ * This allows for library-like BPF objects to have userspace counterparts
+ * with access to their own items without having to know anything about the
+ * final BPF object that the library was linked into.
+ */
+static int do_subskeleton(int argc, char **argv)
+{
+ char header_guard[MAX_OBJ_NAME_LEN + sizeof("__SUBSKEL_H__")];
+ size_t i, len, file_sz, map_cnt = 0, prog_cnt = 0, mmap_sz, var_cnt = 0, var_idx = 0;
+ DECLARE_LIBBPF_OPTS(bpf_object_open_opts, opts);
+ char obj_name[MAX_OBJ_NAME_LEN] = "", *obj_data;
+ struct bpf_object *obj = NULL;
+ const char *file, *var_name;
+ char ident[256];
+ int fd, err = -1, map_type_id;
+ const struct bpf_map *map;
+ struct bpf_program *prog;
+ struct btf *btf;
+ const struct btf_type *map_type, *var_type;
+ const struct btf_var_secinfo *var;
+ struct stat st;
+
+ if (!REQ_ARGS(1)) {
+ usage();
+ return -1;
+ }
+ file = GET_ARG();
+
+ while (argc) {
+ if (!REQ_ARGS(2))
+ return -1;
+
+ if (is_prefix(*argv, "name")) {
+ NEXT_ARG();
+
+ if (obj_name[0] != '\0') {
+ p_err("object name already specified");
+ return -1;
+ }
+
+ strncpy(obj_name, *argv, MAX_OBJ_NAME_LEN - 1);
+ obj_name[MAX_OBJ_NAME_LEN - 1] = '\0';
+ } else {
+ p_err("unknown arg %s", *argv);
+ return -1;
+ }
+
+ NEXT_ARG();
+ }
+
+ if (argc) {
+ p_err("extra unknown arguments");
+ return -1;
+ }
+
+ if (use_loader) {
+ p_err("cannot use loader for subskeletons");
+ return -1;
+ }
+
+ if (stat(file, &st)) {
+ p_err("failed to stat() %s: %s", file, strerror(errno));
+ return -1;
+ }
+ file_sz = st.st_size;
+ mmap_sz = roundup(file_sz, sysconf(_SC_PAGE_SIZE));
+ fd = open(file, O_RDONLY);
+ if (fd < 0) {
+ p_err("failed to open() %s: %s", file, strerror(errno));
+ return -1;
+ }
+ obj_data = mmap(NULL, mmap_sz, PROT_READ, MAP_PRIVATE, fd, 0);
+ if (obj_data == MAP_FAILED) {
+ obj_data = NULL;
+ p_err("failed to mmap() %s: %s", file, strerror(errno));
+ goto out;
+ }
+ if (obj_name[0] == '\0')
+ get_obj_name(obj_name, file);
+
+ /* The empty object name allows us to use bpf_map__name and produce
+ * ELF section names out of it (".data" instead of "obj.data").
+ */
+ opts.object_name = "";
+ obj = bpf_object__open_mem(obj_data, file_sz, &opts);
+ if (!obj) {
+ char err_buf[256];
+
+ libbpf_strerror(errno, err_buf, sizeof(err_buf));
+ p_err("failed to open BPF object file: %s", err_buf);
+ obj = NULL;
+ goto out;
+ }
+
+ btf = bpf_object__btf(obj);
+ if (!btf) {
+ err = -1;
+ p_err("need btf type information for %s", obj_name);
+ goto out;
+ }
+
+ bpf_object__for_each_program(prog, obj) {
+ prog_cnt++;
+ }
+
+ /* First, count how many variables we have to find.
+ * We need this in advance so the subskel can allocate the right
+ * amount of storage.
+ */
+ bpf_object__for_each_map(map, obj) {
+ if (!get_map_ident(map, ident, sizeof(ident)))
+ continue;
+
+ /* Also count all maps that have a name */
+ map_cnt++;
+
+ if (!is_internal_mmapable_map(map, ident, sizeof(ident)))
+ continue;
+
+ map_type_id = bpf_map__btf_value_type_id(map);
+ if (map_type_id <= 0) {
+ err = map_type_id;
+ goto out;
+ }
+ map_type = btf__type_by_id(btf, map_type_id);
+
+ var = btf_var_secinfos(map_type);
+ len = btf_vlen(map_type);
+ for (i = 0; i < len; i++, var++) {
+ var_type = btf__type_by_id(btf, var->type);
+
+ if (btf_var(var_type)->linkage == BTF_VAR_STATIC)
+ continue;
+
+ var_cnt++;
+ }
+ }
+
+ get_header_guard(header_guard, obj_name, "SUBSKEL_H");
+ codegen("\
+ \n\
+ /* SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause) */ \n\
+ \n\
+ /* THIS FILE IS AUTOGENERATED! */ \n\
+ #ifndef %2$s \n\
+ #define %2$s \n\
+ \n\
+ #include <errno.h> \n\
+ #include <stdlib.h> \n\
+ #include <bpf/libbpf.h> \n\
+ \n\
+ struct %1$s { \n\
+ struct bpf_object *obj; \n\
+ struct bpf_object_subskeleton *subskel; \n\
+ ", obj_name, header_guard);
+
+ if (map_cnt) {
+ printf("\tstruct {\n");
+ bpf_object__for_each_map(map, obj) {
+ if (!get_map_ident(map, ident, sizeof(ident)))
+ continue;
+ printf("\t\tstruct bpf_map *%s;\n", ident);
+ }
+ printf("\t} maps;\n");
+ }
+
+ if (prog_cnt) {
+ printf("\tstruct {\n");
+ bpf_object__for_each_program(prog, obj) {
+ printf("\t\tstruct bpf_program *%s;\n",
+ bpf_program__name(prog));
+ }
+ printf("\t} progs;\n");
+ }
+
+ err = codegen_subskel_datasecs(obj, obj_name);
+ if (err)
+ goto out;
+
+ /* emit code that will allocate enough storage for all symbols */
+ codegen("\
+ \n\
+ \n\
+ #ifdef __cplusplus \n\
+ static inline struct %1$s *open(const struct bpf_object *src);\n\
+ static inline void destroy(struct %1$s *skel); \n\
+ #endif /* __cplusplus */ \n\
+ }; \n\
+ \n\
+ static inline void \n\
+ %1$s__destroy(struct %1$s *skel) \n\
+ { \n\
+ if (!skel) \n\
+ return; \n\
+ if (skel->subskel) \n\
+ bpf_object__destroy_subskeleton(skel->subskel);\n\
+ free(skel); \n\
+ } \n\
+ \n\
+ static inline struct %1$s * \n\
+ %1$s__open(const struct bpf_object *src) \n\
+ { \n\
+ struct %1$s *obj; \n\
+ struct bpf_object_subskeleton *s; \n\
+ int err; \n\
+ \n\
+ obj = (struct %1$s *)calloc(1, sizeof(*obj)); \n\
+ if (!obj) { \n\
+ errno = ENOMEM; \n\
+ goto err; \n\
+ } \n\
+ s = (struct bpf_object_subskeleton *)calloc(1, sizeof(*s));\n\
+ if (!s) { \n\
+ errno = ENOMEM; \n\
+ goto err; \n\
+ } \n\
+ s->sz = sizeof(*s); \n\
+ s->obj = src; \n\
+ s->var_skel_sz = sizeof(*s->vars); \n\
+ obj->subskel = s; \n\
+ \n\
+ /* vars */ \n\
+ s->var_cnt = %2$d; \n\
+ s->vars = (struct bpf_var_skeleton *)calloc(%2$d, sizeof(*s->vars));\n\
+ if (!s->vars) { \n\
+ errno = ENOMEM; \n\
+ goto err; \n\
+ } \n\
+ ",
+ obj_name, var_cnt
+ );
+
+ /* walk through each symbol and emit the runtime representation */
+ bpf_object__for_each_map(map, obj) {
+ if (!is_internal_mmapable_map(map, ident, sizeof(ident)))
+ continue;
+
+ map_type_id = bpf_map__btf_value_type_id(map);
+ if (map_type_id <= 0)
+ /* skip over internal maps with no type */
+ continue;
+
+ map_type = btf__type_by_id(btf, map_type_id);
+ var = btf_var_secinfos(map_type);
+ len = btf_vlen(map_type);
+ for (i = 0; i < len; i++, var++) {
+ var_type = btf__type_by_id(btf, var->type);
+ var_name = btf__name_by_offset(btf, var_type->name_off);
+
+ if (btf_var(var_type)->linkage == BTF_VAR_STATIC)
+ continue;
+
+ /* Note that we use the dot prefix in .data as the
+ * field access operator, i.e. maps%s becomes maps.data.
+ */
+ codegen("\
+ \n\
+ \n\
+ s->vars[%3$d].name = \"%1$s\"; \n\
+ s->vars[%3$d].map = &obj->maps.%2$s; \n\
+ s->vars[%3$d].addr = (void **) &obj->%2$s.%1$s;\n\
+ ", var_name, ident, var_idx);
+
+ var_idx++;
+ }
+ }
+
+ codegen_maps_skeleton(obj, map_cnt, false /*mmaped*/);
+ codegen_progs_skeleton(obj, prog_cnt, false /*links*/);
+
+ codegen("\
+ \n\
+ \n\
+ err = bpf_object__open_subskeleton(s); \n\
+ if (err) \n\
+ goto err; \n\
+ \n\
+ return obj; \n\
+ err: \n\
+ %1$s__destroy(obj); \n\
+ return NULL; \n\
+ } \n\
+ \n\
+ #ifdef __cplusplus \n\
+ struct %1$s *%1$s::open(const struct bpf_object *src) { return %1$s__open(src); }\n\
+ void %1$s::destroy(struct %1$s *skel) { %1$s__destroy(skel); }\n\
+ #endif /* __cplusplus */ \n\
+ \n\
+ #endif /* %2$s */ \n\
+ ",
+ obj_name, header_guard);
+ err = 0;
+out:
+ bpf_object__close(obj);
+ if (obj_data)
+ munmap(obj_data, mmap_sz);
+ close(fd);
+ return err;
+}
+
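End to end, the expected flow looks roughly like this (file and variable names are hypothetical, and the final skeleton's bpf_object is assumed to be at skel->obj):

	$ bpftool gen subskeleton iface_lib.bpf.o name iface_lib > iface_lib.subskel.h

and then, from userspace code that already holds the final linked object:

	struct iface_lib *sub = iface_lib__open(skel->obj);
	if (!sub)
		return -errno;
	*sub->data.packet_count = 0;
	/* ... */
	iface_lib__destroy(sub);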
static int do_object(int argc, char **argv)
{
struct bpf_linker *linker;
@@ -1084,6 +1609,8 @@ static int do_help(int argc, char **argv)
fprintf(stderr,
"Usage: %1$s %2$s object OUTPUT_FILE INPUT_FILE [INPUT_FILE...]\n"
" %1$s %2$s skeleton FILE [name OBJECT_NAME]\n"
+ " %1$s %2$s subskeleton FILE [name OBJECT_NAME]\n"
+ " %1$s %2$s min_core_btf INPUT OUTPUT OBJECT [OBJECT...]\n"
" %1$s %2$s help\n"
"\n"
" " HELP_SPEC_OPTIONS " |\n"
@@ -1094,10 +1621,594 @@ static int do_help(int argc, char **argv)
return 0;
}
+static int btf_save_raw(const struct btf *btf, const char *path)
+{
+ const void *data;
+ FILE *f = NULL;
+ __u32 data_sz;
+ int err = 0;
+
+ data = btf__raw_data(btf, &data_sz);
+ if (!data)
+ return -ENOMEM;
+
+ f = fopen(path, "wb");
+ if (!f)
+ return -errno;
+
+ if (fwrite(data, 1, data_sz, f) != data_sz)
+ err = -errno;
+
+ fclose(f);
+ return err;
+}
+
+struct btfgen_info {
+ struct btf *src_btf;
+ struct btf *marked_btf; /* btf structure used to mark used types */
+};
+
+static size_t btfgen_hash_fn(const void *key, void *ctx)
+{
+ return (size_t)key;
+}
+
+static bool btfgen_equal_fn(const void *k1, const void *k2, void *ctx)
+{
+ return k1 == k2;
+}
+
+static void *u32_as_hash_key(__u32 x)
+{
+ return (void *)(uintptr_t)x;
+}
+
+static void btfgen_free_info(struct btfgen_info *info)
+{
+ if (!info)
+ return;
+
+ btf__free(info->src_btf);
+ btf__free(info->marked_btf);
+
+ free(info);
+}
+
+static struct btfgen_info *
+btfgen_new_info(const char *targ_btf_path)
+{
+ struct btfgen_info *info;
+ int err;
+
+ info = calloc(1, sizeof(*info));
+ if (!info)
+ return NULL;
+
+ info->src_btf = btf__parse(targ_btf_path, NULL);
+ if (!info->src_btf) {
+ err = -errno;
+ p_err("failed parsing '%s' BTF file: %s", targ_btf_path, strerror(errno));
+ goto err_out;
+ }
+
+ info->marked_btf = btf__parse(targ_btf_path, NULL);
+ if (!info->marked_btf) {
+ err = -errno;
+ p_err("failed parsing '%s' BTF file: %s", targ_btf_path, strerror(errno));
+ goto err_out;
+ }
+
+ return info;
+
+err_out:
+ btfgen_free_info(info);
+ errno = -err;
+ return NULL;
+}
+
+#define MARKED UINT32_MAX
+
+static void btfgen_mark_member(struct btfgen_info *info, int type_id, int idx)
+{
+ const struct btf_type *t = btf__type_by_id(info->marked_btf, type_id);
+ struct btf_member *m = btf_members(t) + idx;
+
+ m->name_off = MARKED;
+}
+
+static int
+btfgen_mark_type(struct btfgen_info *info, unsigned int type_id, bool follow_pointers)
+{
+ const struct btf_type *btf_type = btf__type_by_id(info->src_btf, type_id);
+ struct btf_type *cloned_type;
+ struct btf_param *param;
+ struct btf_array *array;
+ int err, i;
+
+ if (type_id == 0)
+ return 0;
+
+ /* mark type on cloned BTF as used */
+ cloned_type = (struct btf_type *) btf__type_by_id(info->marked_btf, type_id);
+ cloned_type->name_off = MARKED;
+
+ /* recursively mark other types needed by it */
+ switch (btf_kind(btf_type)) {
+ case BTF_KIND_UNKN:
+ case BTF_KIND_INT:
+ case BTF_KIND_FLOAT:
+ case BTF_KIND_ENUM:
+ case BTF_KIND_STRUCT:
+ case BTF_KIND_UNION:
+ break;
+ case BTF_KIND_PTR:
+ if (follow_pointers) {
+ err = btfgen_mark_type(info, btf_type->type, follow_pointers);
+ if (err)
+ return err;
+ }
+ break;
+ case BTF_KIND_CONST:
+ case BTF_KIND_VOLATILE:
+ case BTF_KIND_TYPEDEF:
+ err = btfgen_mark_type(info, btf_type->type, follow_pointers);
+ if (err)
+ return err;
+ break;
+ case BTF_KIND_ARRAY:
+ array = btf_array(btf_type);
+
+ /* mark array type */
+ err = btfgen_mark_type(info, array->type, follow_pointers);
+ /* mark array's index type */
+ err = err ? : btfgen_mark_type(info, array->index_type, follow_pointers);
+ if (err)
+ return err;
+ break;
+ case BTF_KIND_FUNC_PROTO:
+ /* mark ret type */
+ err = btfgen_mark_type(info, btf_type->type, follow_pointers);
+ if (err)
+ return err;
+
+ /* mark parameter types */
+ param = btf_params(btf_type);
+ for (i = 0; i < btf_vlen(btf_type); i++) {
+ err = btfgen_mark_type(info, param->type, follow_pointers);
+ if (err)
+ return err;
+ param++;
+ }
+ break;
+ /* report that some other kind needs to be handled */
+ default:
+ p_err("unsupported kind: %s (%d)", btf_kind_str(btf_type), type_id);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
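For example, with follow_pointers == false (as used for field and enumval relocations below), marking 'const struct task_struct *' marks the CONST and PTR nodes but does not descend into struct task_struct itself; with follow_pointers == true (type relocations), task_struct is marked as well, though its members stay unmarked and it is later emitted as an empty struct.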
+static int btfgen_record_field_relo(struct btfgen_info *info, struct bpf_core_spec *targ_spec)
+{
+ struct btf *btf = info->src_btf;
+ const struct btf_type *btf_type;
+ struct btf_member *btf_member;
+ struct btf_array *array;
+ unsigned int type_id = targ_spec->root_type_id;
+ int idx, err;
+
+ /* mark root type */
+ btf_type = btf__type_by_id(btf, type_id);
+ err = btfgen_mark_type(info, type_id, false);
+ if (err)
+ return err;
+
+ /* mark types for complex types (arrays, unions, structures) */
+ for (int i = 1; i < targ_spec->raw_len; i++) {
+ /* skip typedefs and mods */
+ while (btf_is_mod(btf_type) || btf_is_typedef(btf_type)) {
+ type_id = btf_type->type;
+ btf_type = btf__type_by_id(btf, type_id);
+ }
+
+ switch (btf_kind(btf_type)) {
+ case BTF_KIND_STRUCT:
+ case BTF_KIND_UNION:
+ idx = targ_spec->raw_spec[i];
+ btf_member = btf_members(btf_type) + idx;
+
+ /* mark member */
+ btfgen_mark_member(info, type_id, idx);
+
+ /* mark member's type */
+ type_id = btf_member->type;
+ btf_type = btf__type_by_id(btf, type_id);
+ err = btfgen_mark_type(info, type_id, false);
+ if (err)
+ return err;
+ break;
+ case BTF_KIND_ARRAY:
+ array = btf_array(btf_type);
+ type_id = array->type;
+ btf_type = btf__type_by_id(btf, type_id);
+ break;
+ default:
+ p_err("unsupported kind: %s (%d)",
+ btf_kind_str(btf_type), btf_type->type);
+ return -EINVAL;
+ }
+ }
+
+ return 0;
+}
+
+static int btfgen_record_type_relo(struct btfgen_info *info, struct bpf_core_spec *targ_spec)
+{
+ return btfgen_mark_type(info, targ_spec->root_type_id, true);
+}
+
+static int btfgen_record_enumval_relo(struct btfgen_info *info, struct bpf_core_spec *targ_spec)
+{
+ return btfgen_mark_type(info, targ_spec->root_type_id, false);
+}
+
+static int btfgen_record_reloc(struct btfgen_info *info, struct bpf_core_spec *res)
+{
+ switch (res->relo_kind) {
+ case BPF_CORE_FIELD_BYTE_OFFSET:
+ case BPF_CORE_FIELD_BYTE_SIZE:
+ case BPF_CORE_FIELD_EXISTS:
+ case BPF_CORE_FIELD_SIGNED:
+ case BPF_CORE_FIELD_LSHIFT_U64:
+ case BPF_CORE_FIELD_RSHIFT_U64:
+ return btfgen_record_field_relo(info, res);
+ case BPF_CORE_TYPE_ID_LOCAL: /* BPF_CORE_TYPE_ID_LOCAL doesn't require kernel BTF */
+ return 0;
+ case BPF_CORE_TYPE_ID_TARGET:
+ case BPF_CORE_TYPE_EXISTS:
+ case BPF_CORE_TYPE_SIZE:
+ return btfgen_record_type_relo(info, res);
+ case BPF_CORE_ENUMVAL_EXISTS:
+ case BPF_CORE_ENUMVAL_VALUE:
+ return btfgen_record_enumval_relo(info, res);
+ default:
+ return -EINVAL;
+ }
+}
+
+static struct bpf_core_cand_list *
+btfgen_find_cands(const struct btf *local_btf, const struct btf *targ_btf, __u32 local_id)
+{
+ const struct btf_type *local_type;
+ struct bpf_core_cand_list *cands = NULL;
+ struct bpf_core_cand local_cand = {};
+ size_t local_essent_len;
+ const char *local_name;
+ int err;
+
+ local_cand.btf = local_btf;
+ local_cand.id = local_id;
+
+ local_type = btf__type_by_id(local_btf, local_id);
+ if (!local_type) {
+ err = -EINVAL;
+ goto err_out;
+ }
+
+ local_name = btf__name_by_offset(local_btf, local_type->name_off);
+ if (!local_name) {
+ err = -EINVAL;
+ goto err_out;
+ }
+ local_essent_len = bpf_core_essential_name_len(local_name);
+
+ cands = calloc(1, sizeof(*cands));
+ if (!cands)
+ return NULL;
+
+ err = bpf_core_add_cands(&local_cand, local_essent_len, targ_btf, "vmlinux", 1, cands);
+ if (err)
+ goto err_out;
+
+ return cands;
+
+err_out:
+ bpf_core_free_cands(cands);
+ errno = -err;
+ return NULL;
+}
+
+/* Record relocation information for a single BPF object */
+static int btfgen_record_obj(struct btfgen_info *info, const char *obj_path)
+{
+ const struct btf_ext_info_sec *sec;
+ const struct bpf_core_relo *relo;
+ const struct btf_ext_info *seg;
+ struct hashmap_entry *entry;
+ struct hashmap *cand_cache = NULL;
+ struct btf_ext *btf_ext = NULL;
+ unsigned int relo_idx;
+ struct btf *btf = NULL;
+ size_t i;
+ int err;
+
+ btf = btf__parse(obj_path, &btf_ext);
+ if (!btf) {
+ err = -errno;
+ p_err("failed to parse BPF object '%s': %s", obj_path, strerror(errno));
+ return err;
+ }
+
+ if (!btf_ext) {
+ p_err("failed to parse BPF object '%s': section %s not found",
+ obj_path, BTF_EXT_ELF_SEC);
+ err = -EINVAL;
+ goto out;
+ }
+
+ if (btf_ext->core_relo_info.len == 0) {
+ err = 0;
+ goto out;
+ }
+
+ cand_cache = hashmap__new(btfgen_hash_fn, btfgen_equal_fn, NULL);
+ if (IS_ERR(cand_cache)) {
+ err = PTR_ERR(cand_cache);
+ goto out;
+ }
+
+ seg = &btf_ext->core_relo_info;
+ for_each_btf_ext_sec(seg, sec) {
+ for_each_btf_ext_rec(seg, sec, relo_idx, relo) {
+ struct bpf_core_spec specs_scratch[3] = {};
+ struct bpf_core_relo_res targ_res = {};
+ struct bpf_core_cand_list *cands = NULL;
+ const void *type_key = u32_as_hash_key(relo->type_id);
+ const char *sec_name = btf__name_by_offset(btf, sec->sec_name_off);
+
+ if (relo->kind != BPF_CORE_TYPE_ID_LOCAL &&
+ !hashmap__find(cand_cache, type_key, (void **)&cands)) {
+ cands = btfgen_find_cands(btf, info->src_btf, relo->type_id);
+ if (!cands) {
+ err = -errno;
+ goto out;
+ }
+
+ err = hashmap__set(cand_cache, type_key, cands, NULL, NULL);
+ if (err)
+ goto out;
+ }
+
+ err = bpf_core_calc_relo_insn(sec_name, relo, relo_idx, btf, cands,
+ specs_scratch, &targ_res);
+ if (err)
+ goto out;
+
+ /* specs_scratch[2] is the target spec */
+ err = btfgen_record_reloc(info, &specs_scratch[2]);
+ if (err)
+ goto out;
+ }
+ }
+
+out:
+ btf__free(btf);
+ btf_ext__free(btf_ext);
+
+ if (!IS_ERR_OR_NULL(cand_cache)) {
+ hashmap__for_each_entry(cand_cache, entry, i) {
+ bpf_core_free_cands(entry->value);
+ }
+ hashmap__free(cand_cache);
+ }
+
+ return err;
+}
+
+static int btfgen_remap_id(__u32 *type_id, void *ctx)
+{
+ unsigned int *ids = ctx;
+
+ *type_id = ids[*type_id];
+
+ return 0;
+}
+
+/* Generate BTF from relocation information previously recorded */
+static struct btf *btfgen_get_btf(struct btfgen_info *info)
+{
+ struct btf *btf_new = NULL;
+ unsigned int *ids = NULL;
+ unsigned int i, n = btf__type_cnt(info->marked_btf);
+ int err = 0;
+
+ btf_new = btf__new_empty();
+ if (!btf_new) {
+ err = -errno;
+ goto err_out;
+ }
+
+ ids = calloc(n, sizeof(*ids));
+ if (!ids) {
+ err = -errno;
+ goto err_out;
+ }
+
+ /* first pass: add all marked types to btf_new and add their new ids to the ids map */
+ for (i = 1; i < n; i++) {
+ const struct btf_type *cloned_type, *type;
+ const char *name;
+ int new_id;
+
+ cloned_type = btf__type_by_id(info->marked_btf, i);
+
+ if (cloned_type->name_off != MARKED)
+ continue;
+
+ type = btf__type_by_id(info->src_btf, i);
+
+ /* add members for struct and union */
+ if (btf_is_composite(type)) {
+ struct btf_member *cloned_m, *m;
+ unsigned short vlen;
+ int idx_src;
+
+ name = btf__str_by_offset(info->src_btf, type->name_off);
+
+ if (btf_is_struct(type))
+ err = btf__add_struct(btf_new, name, type->size);
+ else
+ err = btf__add_union(btf_new, name, type->size);
+
+ if (err < 0)
+ goto err_out;
+ new_id = err;
+
+ cloned_m = btf_members(cloned_type);
+ m = btf_members(type);
+ vlen = btf_vlen(cloned_type);
+ for (idx_src = 0; idx_src < vlen; idx_src++, cloned_m++, m++) {
+ /* add only members that are marked as used */
+ if (cloned_m->name_off != MARKED)
+ continue;
+
+ name = btf__str_by_offset(info->src_btf, m->name_off);
+ err = btf__add_field(btf_new, name, m->type,
+ btf_member_bit_offset(cloned_type, idx_src),
+ btf_member_bitfield_size(cloned_type, idx_src));
+ if (err < 0)
+ goto err_out;
+ }
+ } else {
+ err = btf__add_type(btf_new, info->src_btf, type);
+ if (err < 0)
+ goto err_out;
+ new_id = err;
+ }
+
+ /* add ID mapping */
+ ids[i] = new_id;
+ }
+
+ /* second pass: fix up type ids */
+ for (i = 1; i < btf__type_cnt(btf_new); i++) {
+ struct btf_type *btf_type = (struct btf_type *) btf__type_by_id(btf_new, i);
+
+ err = btf_type_visit_type_ids(btf_type, btfgen_remap_id, ids);
+ if (err)
+ goto err_out;
+ }
+
+ free(ids);
+ return btf_new;
+
+err_out:
+ btf__free(btf_new);
+ free(ids);
+ errno = -err;
+ return NULL;
+}
+
+/* Create minimized BTF file for a set of BPF objects.
+ *
+ * The BTFGen algorithm is divided in two main parts: (1) collect the
+ * BTF types that are involved in relocations and (2) generate the BTF
+ * object using the collected types.
+ *
+ * In order to collect the types involved in the relocations, we parse
+ * the BTF and BTF.ext sections of the BPF objects and use
+ * bpf_core_calc_relo_insn() to get the target specification, which
+ * indicates how the types and fields are used in a relocation.
+ *
+ * Types are recorded in different ways according to the kind of the
+ * relocation. For field-based relocations, only the members that are
+ * actually used are saved in order to reduce the size of the generated
+ * BTF file. For type-based relocations, empty structs/unions are
+ * generated, and for enum-based relocations the whole type is saved.
+ *
+ * The second part of the algorithm generates the BTF object. It creates
+ * an empty BTF object and fills it with the types recorded in the
+ * previous step. This function takes care of only adding the structure
+ * and union members that were marked as used and it also fixes up the
+ * type IDs on the generated BTF object.
+ */
+static int minimize_btf(const char *src_btf, const char *dst_btf, const char *objspaths[])
+{
+ struct btfgen_info *info;
+ struct btf *btf_new = NULL;
+ int err, i;
+
+ info = btfgen_new_info(src_btf);
+ if (!info) {
+ err = -errno;
+ p_err("failed to allocate info structure: %s", strerror(errno));
+ goto out;
+ }
+
+ for (i = 0; objspaths[i] != NULL; i++) {
+ err = btfgen_record_obj(info, objspaths[i]);
+ if (err) {
+ p_err("error recording relocations for %s: %s", objspaths[i],
+ strerror(errno));
+ goto out;
+ }
+ }
+
+ btf_new = btfgen_get_btf(info);
+ if (!btf_new) {
+ err = -errno;
+ p_err("error generating BTF: %s", strerror(errno));
+ goto out;
+ }
+
+ err = btf_save_raw(btf_new, dst_btf);
+ if (err) {
+ p_err("error saving btf file: %s", strerror(errno));
+ goto out;
+ }
+
+out:
+ btf__free(btf_new);
+ btfgen_free_info(info);
+
+ return err;
+}
+
+static int do_min_core_btf(int argc, char **argv)
+{
+ const char *input, *output, **objs;
+ int i, err;
+
+ if (!REQ_ARGS(3)) {
+ usage();
+ return -1;
+ }
+
+ input = GET_ARG();
+ output = GET_ARG();
+
+ objs = (const char **) calloc(argc + 1, sizeof(*objs));
+ if (!objs) {
+ p_err("failed to allocate array for object names");
+ return -ENOMEM;
+ }
+
+ i = 0;
+ while (argc)
+ objs[i++] = GET_ARG();
+
+ err = minimize_btf(input, output, objs);
+ free(objs);
+ return err;
+}
+
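Typical invocation, matching the usage string added to do_help() above (paths are illustrative):

	$ bpftool gen min_core_btf /sys/kernel/btf/vmlinux vmlinux_min.btf \
		prog1.bpf.o prog2.bpf.o

The resulting vmlinux_min.btf contains only the types and members that those objects' CO-RE relocations actually reference, so it can be shipped alongside the programs for kernels without native BTF.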
static const struct cmd cmds[] = {
- { "object", do_object },
- { "skeleton", do_skeleton },
- { "help", do_help },
+ { "object", do_object },
+ { "skeleton", do_skeleton },
+ { "subskeleton", do_subskeleton },
+ { "min_core_btf", do_min_core_btf},
+ { "help", do_help },
{ 0 }
};
diff --git a/tools/bpf/bpftool/link.c b/tools/bpf/bpftool/link.c
index 2c258db0d352..97dec81950e5 100644
--- a/tools/bpf/bpftool/link.c
+++ b/tools/bpf/bpftool/link.c
@@ -2,6 +2,7 @@
/* Copyright (C) 2020 Facebook */
#include <errno.h>
+#include <linux/err.h>
#include <net/if.h>
#include <stdio.h>
#include <unistd.h>
@@ -306,7 +307,7 @@ static int do_show(int argc, char **argv)
if (show_pinned) {
link_table = hashmap__new(hash_fn_for_key_as_id,
equal_fn_for_key_as_id, NULL);
- if (!link_table) {
+ if (IS_ERR(link_table)) {
p_err("failed to create hashmap for pinned paths");
return -1;
}
diff --git a/tools/bpf/bpftool/main.c b/tools/bpf/bpftool/main.c
index 020e91a542d5..e81227761f5d 100644
--- a/tools/bpf/bpftool/main.c
+++ b/tools/bpf/bpftool/main.c
@@ -71,6 +71,17 @@ static int do_help(int argc, char **argv)
return 0;
}
+#ifndef BPFTOOL_VERSION
+/* bpftool's major and minor version numbers are aligned with libbpf's. There is
+ * an offset of 6 for the major version number, because bpftool's version
+ * was higher than libbpf's when we adopted this scheme. The patch number
+ * remains at 0 for now. Set BPFTOOL_VERSION to override.
+ */
+#define BPFTOOL_MAJOR_VERSION (LIBBPF_MAJOR_VERSION + 6)
+#define BPFTOOL_MINOR_VERSION LIBBPF_MINOR_VERSION
+#define BPFTOOL_PATCH_VERSION 0
+#endif
+
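As a worked example of this scheme: built against libbpf 0.8, bpftool would report v6.8.0 (major 0 + 6, minor 8, patch 0).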
static int do_version(int argc, char **argv)
{
#ifdef HAVE_LIBBFD_SUPPORT
@@ -88,7 +99,15 @@ static int do_version(int argc, char **argv)
jsonw_start_object(json_wtr); /* root object */
jsonw_name(json_wtr, "version");
+#ifdef BPFTOOL_VERSION
jsonw_printf(json_wtr, "\"%s\"", BPFTOOL_VERSION);
+#else
+ jsonw_printf(json_wtr, "\"%d.%d.%d\"", BPFTOOL_MAJOR_VERSION,
+ BPFTOOL_MINOR_VERSION, BPFTOOL_PATCH_VERSION);
+#endif
+ jsonw_name(json_wtr, "libbpf_version");
+ jsonw_printf(json_wtr, "\"%d.%d\"",
+ libbpf_major_version(), libbpf_minor_version());
jsonw_name(json_wtr, "features");
jsonw_start_object(json_wtr); /* features */
@@ -101,7 +120,13 @@ static int do_version(int argc, char **argv)
} else {
unsigned int nb_features = 0;
+#ifdef BPFTOOL_VERSION
printf("%s v%s\n", bin_name, BPFTOOL_VERSION);
+#else
+ printf("%s v%d.%d.%d\n", bin_name, BPFTOOL_MAJOR_VERSION,
+ BPFTOOL_MINOR_VERSION, BPFTOOL_PATCH_VERSION);
+#endif
+ printf("using libbpf %s\n", libbpf_version_string());
printf("features:");
if (has_libbfd) {
printf(" libbfd");
@@ -478,7 +503,11 @@ int main(int argc, char **argv)
}
if (!legacy_libbpf) {
- ret = libbpf_set_strict_mode(LIBBPF_STRICT_ALL);
+ /* Allow legacy map definitions for skeleton generation.
+ * It will still be rejected if users use LIBBPF_STRICT_ALL
+ * mode for loading generated skeleton.
+ */
+ ret = libbpf_set_strict_mode(LIBBPF_STRICT_ALL & ~LIBBPF_STRICT_MAP_DEFINITIONS);
if (ret)
p_err("failed to enable libbpf strict mode: %d", ret);
}
diff --git a/tools/bpf/bpftool/main.h b/tools/bpf/bpftool/main.h
index 8d76d937a62b..6e9277ffc68c 100644
--- a/tools/bpf/bpftool/main.h
+++ b/tools/bpf/bpftool/main.h
@@ -8,10 +8,10 @@
#undef GCC_VERSION
#include <stdbool.h>
#include <stdio.h>
+#include <stdlib.h>
#include <linux/bpf.h>
#include <linux/compiler.h>
#include <linux/kernel.h>
-#include <tools/libc_compat.h>
#include <bpf/hashmap.h>
#include <bpf/libbpf.h>
@@ -113,7 +113,9 @@ struct obj_ref {
struct obj_refs {
int ref_cnt;
+ bool has_bpf_cookie;
struct obj_ref *refs;
+ __u64 bpf_cookie;
};
struct btf;
@@ -140,6 +142,10 @@ struct cmd {
int cmd_select(const struct cmd *cmds, int argc, char **argv,
int (*help)(int argc, char **argv));
+#define MAX_PROG_FULL_NAME 128
+void get_prog_full_name(const struct bpf_prog_info *prog_info, int prog_fd,
+ char *name_buff, size_t buff_len);
+
int get_fd_type(int fd);
const char *get_fd_type_name(enum bpf_obj_type type);
char *get_fdinfo(int fd, const char *key);
diff --git a/tools/bpf/bpftool/map.c b/tools/bpf/bpftool/map.c
index cc530a229812..c26378f20831 100644
--- a/tools/bpf/bpftool/map.c
+++ b/tools/bpf/bpftool/map.c
@@ -504,7 +504,7 @@ static int show_map_close_json(int fd, struct bpf_map_info *info)
jsonw_uint_field(json_wtr, "max_entries", info->max_entries);
if (memlock)
- jsonw_int_field(json_wtr, "bytes_memlock", atoi(memlock));
+ jsonw_int_field(json_wtr, "bytes_memlock", atoll(memlock));
free(memlock);
if (info->type == BPF_MAP_TYPE_PROG_ARRAY) {
@@ -620,17 +620,14 @@ static int show_map_close_plain(int fd, struct bpf_map_info *info)
u32_as_hash_field(info->id))
printf("\n\tpinned %s", (char *)entry->value);
}
- printf("\n");
if (frozen_str) {
frozen = atoi(frozen_str);
free(frozen_str);
}
- if (!info->btf_id && !frozen)
- return 0;
-
- printf("\t");
+ if (info->btf_id || frozen)
+ printf("\n\t");
if (info->btf_id)
printf("btf_id %d", info->btf_id);
@@ -699,7 +696,7 @@ static int do_show(int argc, char **argv)
if (show_pinned) {
map_table = hashmap__new(hash_fn_for_key_as_id,
equal_fn_for_key_as_id, NULL);
- if (!map_table) {
+ if (IS_ERR(map_table)) {
p_err("failed to create hashmap for pinned paths");
return -1;
}
@@ -805,29 +802,30 @@ static int maps_have_btf(int *fds, int nb_fds)
static struct btf *btf_vmlinux;
-static struct btf *get_map_kv_btf(const struct bpf_map_info *info)
+static int get_map_kv_btf(const struct bpf_map_info *info, struct btf **btf)
{
- struct btf *btf = NULL;
+ int err = 0;
if (info->btf_vmlinux_value_type_id) {
if (!btf_vmlinux) {
btf_vmlinux = libbpf_find_kernel_btf();
- if (libbpf_get_error(btf_vmlinux))
+ err = libbpf_get_error(btf_vmlinux);
+ if (err) {
p_err("failed to get kernel btf");
+ return err;
+ }
}
- return btf_vmlinux;
+ *btf = btf_vmlinux;
} else if (info->btf_value_type_id) {
- int err;
-
- btf = btf__load_from_kernel_by_id(info->btf_id);
- err = libbpf_get_error(btf);
- if (err) {
+ *btf = btf__load_from_kernel_by_id(info->btf_id);
+ err = libbpf_get_error(*btf);
+ if (err)
p_err("failed to get btf");
- btf = ERR_PTR(err);
- }
+ } else {
+ *btf = NULL;
}
- return btf;
+ return err;
}
static void free_map_kv_btf(struct btf *btf)
@@ -862,8 +860,7 @@ map_dump(int fd, struct bpf_map_info *info, json_writer_t *wtr,
prev_key = NULL;
if (wtr) {
- btf = get_map_kv_btf(info);
- err = libbpf_get_error(btf);
+ err = get_map_kv_btf(info, &btf);
if (err) {
goto exit_free;
}
@@ -1054,11 +1051,8 @@ static void print_key_value(struct bpf_map_info *info, void *key,
json_writer_t *btf_wtr;
struct btf *btf;
- btf = btf__load_from_kernel_by_id(info->btf_id);
- if (libbpf_get_error(btf)) {
- p_err("failed to get btf");
+ if (get_map_kv_btf(info, &btf))
return;
- }
if (json_output) {
print_entry_json(info, key, value, btf);
diff --git a/tools/bpf/bpftool/net.c b/tools/bpf/bpftool/net.c
index 649053704bd7..526a332c48e6 100644
--- a/tools/bpf/bpftool/net.c
+++ b/tools/bpf/bpftool/net.c
@@ -551,7 +551,7 @@ static int do_attach_detach_xdp(int progfd, enum net_attach_type attach_type,
if (attach_type == NET_ATTACH_TYPE_XDP_OFFLOAD)
flags |= XDP_FLAGS_HW_MODE;
- return bpf_set_link_xdp_fd(ifindex, progfd, flags);
+ return bpf_xdp_attach(ifindex, progfd, flags, NULL);
}
static int do_attach(int argc, char **argv)
diff --git a/tools/bpf/bpftool/pids.c b/tools/bpf/bpftool/pids.c
index 56b598eee043..bb6c969a114a 100644
--- a/tools/bpf/bpftool/pids.c
+++ b/tools/bpf/bpftool/pids.c
@@ -1,6 +1,7 @@
// SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
/* Copyright (C) 2020 Facebook */
#include <errno.h>
+#include <linux/err.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
@@ -77,6 +78,8 @@ static void add_ref(struct hashmap *map, struct pid_iter_entry *e)
ref->pid = e->pid;
memcpy(ref->comm, e->comm, sizeof(ref->comm));
refs->ref_cnt = 1;
+ refs->has_bpf_cookie = e->has_bpf_cookie;
+ refs->bpf_cookie = e->bpf_cookie;
err = hashmap__append(map, u32_as_hash_field(e->id), refs);
if (err)
@@ -101,7 +104,7 @@ int build_obj_refs_table(struct hashmap **map, enum bpf_obj_type type)
libbpf_print_fn_t default_print;
*map = hashmap__new(hash_fn_for_key_as_id, equal_fn_for_key_as_id, NULL);
- if (!*map) {
+ if (IS_ERR(*map)) {
p_err("failed to create hashmap for PID references");
return -1;
}
@@ -204,6 +207,9 @@ void emit_obj_refs_json(struct hashmap *map, __u32 id,
if (refs->ref_cnt == 0)
break;
+ if (refs->has_bpf_cookie)
+ jsonw_lluint_field(json_writer, "bpf_cookie", refs->bpf_cookie);
+
jsonw_name(json_writer, "pids");
jsonw_start_array(json_writer);
for (i = 0; i < refs->ref_cnt; i++) {
@@ -233,6 +239,9 @@ void emit_obj_refs_plain(struct hashmap *map, __u32 id, const char *prefix)
if (refs->ref_cnt == 0)
break;
+ if (refs->has_bpf_cookie)
+ printf("\n\tbpf_cookie %llu", (unsigned long long) refs->bpf_cookie);
+
printf("%s", prefix);
for (i = 0; i < refs->ref_cnt; i++) {
struct obj_ref *ref = &refs->refs[i];
diff --git a/tools/bpf/bpftool/prog.c b/tools/bpf/bpftool/prog.c
index 2a21d50516bc..bc4e05542c2b 100644
--- a/tools/bpf/bpftool/prog.c
+++ b/tools/bpf/bpftool/prog.c
@@ -26,6 +26,7 @@
#include <bpf/btf.h>
#include <bpf/hashmap.h>
#include <bpf/libbpf.h>
+#include <bpf/libbpf_internal.h>
#include <bpf/skel_internal.h>
#include "cfg.h"
@@ -424,8 +425,10 @@ out_free:
free(value);
}
-static void print_prog_header_json(struct bpf_prog_info *info)
+static void print_prog_header_json(struct bpf_prog_info *info, int fd)
{
+ char prog_name[MAX_PROG_FULL_NAME];
+
jsonw_uint_field(json_wtr, "id", info->id);
if (info->type < ARRAY_SIZE(prog_type_name))
jsonw_string_field(json_wtr, "type",
@@ -433,8 +436,10 @@ static void print_prog_header_json(struct bpf_prog_info *info)
else
jsonw_uint_field(json_wtr, "type", info->type);
- if (*info->name)
- jsonw_string_field(json_wtr, "name", info->name);
+ if (*info->name) {
+ get_prog_full_name(info, fd, prog_name, sizeof(prog_name));
+ jsonw_string_field(json_wtr, "name", prog_name);
+ }
jsonw_name(json_wtr, "tag");
jsonw_printf(json_wtr, "\"" BPF_TAG_FMT "\"",
@@ -455,7 +460,7 @@ static void print_prog_json(struct bpf_prog_info *info, int fd)
char *memlock;
jsonw_start_object(json_wtr);
- print_prog_header_json(info);
+ print_prog_header_json(info, fd);
print_dev_json(info->ifindex, info->netns_dev, info->netns_ino);
if (info->load_time) {
@@ -480,7 +485,7 @@ static void print_prog_json(struct bpf_prog_info *info, int fd)
memlock = get_fdinfo(fd, "memlock");
if (memlock)
- jsonw_int_field(json_wtr, "bytes_memlock", atoi(memlock));
+ jsonw_int_field(json_wtr, "bytes_memlock", atoll(memlock));
free(memlock);
if (info->nr_map_ids)
@@ -507,16 +512,20 @@ static void print_prog_json(struct bpf_prog_info *info, int fd)
jsonw_end_object(json_wtr);
}
-static void print_prog_header_plain(struct bpf_prog_info *info)
+static void print_prog_header_plain(struct bpf_prog_info *info, int fd)
{
+ char prog_name[MAX_PROG_FULL_NAME];
+
printf("%u: ", info->id);
if (info->type < ARRAY_SIZE(prog_type_name))
printf("%s ", prog_type_name[info->type]);
else
printf("type %u ", info->type);
- if (*info->name)
- printf("name %s ", info->name);
+ if (*info->name) {
+ get_prog_full_name(info, fd, prog_name, sizeof(prog_name));
+ printf("name %s ", prog_name);
+ }
printf("tag ");
fprint_hex(stdout, info->tag, BPF_TAG_SIZE, "");
@@ -534,7 +543,7 @@ static void print_prog_plain(struct bpf_prog_info *info, int fd)
{
char *memlock;
- print_prog_header_plain(info);
+ print_prog_header_plain(info, fd);
if (info->load_time) {
char buf[32];
@@ -641,7 +650,7 @@ static int do_show(int argc, char **argv)
if (show_pinned) {
prog_table = hashmap__new(hash_fn_for_key_as_id,
equal_fn_for_key_as_id, NULL);
- if (!prog_table) {
+ if (IS_ERR(prog_table)) {
p_err("failed to create hashmap for pinned paths");
return -1;
}
@@ -972,10 +981,10 @@ static int do_dump(int argc, char **argv)
if (json_output && nb_fds > 1) {
jsonw_start_object(json_wtr); /* prog object */
- print_prog_header_json(&info);
+ print_prog_header_json(&info, fds[i]);
jsonw_name(json_wtr, "insns");
} else if (nb_fds > 1) {
- print_prog_header_plain(&info);
+ print_prog_header_plain(&info, fds[i]);
}
err = prog_dump(&info, mode, filepath, opcodes, visual, linum);
@@ -1264,12 +1273,12 @@ static int do_run(int argc, char **argv)
{
char *data_fname_in = NULL, *data_fname_out = NULL;
char *ctx_fname_in = NULL, *ctx_fname_out = NULL;
- struct bpf_prog_test_run_attr test_attr = {0};
const unsigned int default_size = SZ_32K;
void *data_in = NULL, *data_out = NULL;
void *ctx_in = NULL, *ctx_out = NULL;
unsigned int repeat = 1;
int fd, err;
+ LIBBPF_OPTS(bpf_test_run_opts, test_attr);
if (!REQ_ARGS(4))
return -1;
@@ -1387,14 +1396,13 @@ static int do_run(int argc, char **argv)
goto free_ctx_in;
}
- test_attr.prog_fd = fd;
test_attr.repeat = repeat;
test_attr.data_in = data_in;
test_attr.data_out = data_out;
test_attr.ctx_in = ctx_in;
test_attr.ctx_out = ctx_out;
- err = bpf_prog_test_run_xattr(&test_attr);
+ err = bpf_prog_test_run_opts(fd, &test_attr);
if (err) {
p_err("failed to run program: %s", strerror(errno));
goto free_ctx_out;
@@ -1551,9 +1559,9 @@ static int load_with_options(int argc, char **argv, bool first_prog_only)
if (fd < 0)
goto err_free_reuse_maps;
- new_map_replace = reallocarray(map_replace,
- old_map_fds + 1,
- sizeof(*map_replace));
+ new_map_replace = libbpf_reallocarray(map_replace,
+ old_map_fds + 1,
+ sizeof(*map_replace));
if (!new_map_replace) {
p_err("mem alloc failed");
goto err_free_reuse_maps;
@@ -2275,10 +2283,10 @@ static int do_profile(int argc, char **argv)
profile_obj->rodata->num_metric = num_metric;
/* adjust map sizes */
- bpf_map__resize(profile_obj->maps.events, num_metric * num_cpu);
- bpf_map__resize(profile_obj->maps.fentry_readings, num_metric);
- bpf_map__resize(profile_obj->maps.accum_readings, num_metric);
- bpf_map__resize(profile_obj->maps.counts, 1);
+ bpf_map__set_max_entries(profile_obj->maps.events, num_metric * num_cpu);
+ bpf_map__set_max_entries(profile_obj->maps.fentry_readings, num_metric);
+ bpf_map__set_max_entries(profile_obj->maps.accum_readings, num_metric);
+ bpf_map__set_max_entries(profile_obj->maps.counts, 1);
/* change target name */
profile_tgt_name = profile_target_name(profile_tgt_fd);
diff --git a/tools/bpf/bpftool/skeleton/pid_iter.bpf.c b/tools/bpf/bpftool/skeleton/pid_iter.bpf.c
index f70702fcb224..eb05ea53afb1 100644
--- a/tools/bpf/bpftool/skeleton/pid_iter.bpf.c
+++ b/tools/bpf/bpftool/skeleton/pid_iter.bpf.c
@@ -38,6 +38,17 @@ static __always_inline __u32 get_obj_id(void *ent, enum bpf_obj_type type)
}
}
+/* could be used only with BPF_LINK_TYPE_PERF_EVENT links */
+static __u64 get_bpf_cookie(struct bpf_link *link)
+{
+ struct bpf_perf_link *perf_link;
+ struct perf_event *event;
+
+ perf_link = container_of(link, struct bpf_perf_link, link);
+ event = BPF_CORE_READ(perf_link, perf_file, private_data);
+ return BPF_CORE_READ(event, bpf_cookie);
+}
+
SEC("iter/task_file")
int iter(struct bpf_iter__task_file *ctx)
{
@@ -69,8 +80,19 @@ int iter(struct bpf_iter__task_file *ctx)
if (file->f_op != fops)
return 0;
+ __builtin_memset(&e, 0, sizeof(e));
e.pid = task->tgid;
e.id = get_obj_id(file->private_data, obj_type);
+
+ if (obj_type == BPF_OBJ_LINK) {
+ struct bpf_link *link = (struct bpf_link *) file->private_data;
+
+ if (BPF_CORE_READ(link, type) == BPF_LINK_TYPE_PERF_EVENT) {
+ e.has_bpf_cookie = true;
+ e.bpf_cookie = get_bpf_cookie(link);
+ }
+ }
+
bpf_probe_read_kernel_str(&e.comm, sizeof(e.comm),
task->group_leader->comm);
bpf_seq_write(ctx->meta->seq, &e, sizeof(e));
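With this in place, "bpftool link show" can surface the cookie for perf-event-based links; illustrative plain output (values made up):

	bpf_cookie 42
	pids systemd(1)

The cookie line is only emitted when has_bpf_cookie is set, i.e. for BPF_LINK_TYPE_PERF_EVENT links.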
diff --git a/tools/bpf/bpftool/skeleton/pid_iter.h b/tools/bpf/bpftool/skeleton/pid_iter.h
index 5692cf257adb..bbb570d4cca6 100644
--- a/tools/bpf/bpftool/skeleton/pid_iter.h
+++ b/tools/bpf/bpftool/skeleton/pid_iter.h
@@ -6,6 +6,8 @@
struct pid_iter_entry {
__u32 id;
int pid;
+ __u64 bpf_cookie;
+ bool has_bpf_cookie;
char comm[16];
};
diff --git a/tools/bpf/bpftool/struct_ops.c b/tools/bpf/bpftool/struct_ops.c
index 2f693b082bdb..e08a6ff2866c 100644
--- a/tools/bpf/bpftool/struct_ops.c
+++ b/tools/bpf/bpftool/struct_ops.c
@@ -480,7 +480,6 @@ static int do_unregister(int argc, char **argv)
static int do_register(int argc, char **argv)
{
LIBBPF_OPTS(bpf_object_open_opts, open_opts);
- const struct bpf_map_def *def;
struct bpf_map_info info = {};
__u32 info_len = sizeof(info);
int nr_errs = 0, nr_maps = 0;
@@ -510,8 +509,7 @@ static int do_register(int argc, char **argv)
}
bpf_object__for_each_map(map, obj) {
- def = bpf_map__def(map);
- if (def->type != BPF_MAP_TYPE_STRUCT_OPS)
+ if (bpf_map__type(map) != BPF_MAP_TYPE_STRUCT_OPS)
continue;
link = bpf_map__attach_struct_ops(map);
diff --git a/tools/bpf/bpftool/xlated_dumper.c b/tools/bpf/bpftool/xlated_dumper.c
index f1f32e21d5cd..2d9cd6a7b3c8 100644
--- a/tools/bpf/bpftool/xlated_dumper.c
+++ b/tools/bpf/bpftool/xlated_dumper.c
@@ -8,6 +8,7 @@
#include <string.h>
#include <sys/types.h>
#include <bpf/libbpf.h>
+#include <bpf/libbpf_internal.h>
#include "disasm.h"
#include "json_writer.h"
@@ -32,8 +33,8 @@ void kernel_syms_load(struct dump_data *dd)
return;
while (fgets(buff, sizeof(buff), fp)) {
- tmp = reallocarray(dd->sym_mapping, dd->sym_count + 1,
- sizeof(*dd->sym_mapping));
+ tmp = libbpf_reallocarray(dd->sym_mapping, dd->sym_count + 1,
+ sizeof(*dd->sym_mapping));
if (!tmp) {
out:
free(dd->sym_mapping);
diff --git a/tools/bpf/resolve_btfids/Makefile b/tools/bpf/resolve_btfids/Makefile
index 320a88ac28c9..19a3112e271a 100644
--- a/tools/bpf/resolve_btfids/Makefile
+++ b/tools/bpf/resolve_btfids/Makefile
@@ -24,6 +24,8 @@ LD = $(HOSTLD)
ARCH = $(HOSTARCH)
RM ?= rm
CROSS_COMPILE =
+CFLAGS := $(KBUILD_HOSTCFLAGS)
+LDFLAGS := $(KBUILD_HOSTLDFLAGS)
OUTPUT ?= $(srctree)/tools/bpf/resolve_btfids/
@@ -51,10 +53,10 @@ $(SUBCMDOBJ): fixdep FORCE | $(OUTPUT)/libsubcmd
$(BPFOBJ): $(wildcard $(LIBBPF_SRC)/*.[ch] $(LIBBPF_SRC)/Makefile) | $(LIBBPF_OUT)
$(Q)$(MAKE) $(submake_extras) -C $(LIBBPF_SRC) OUTPUT=$(LIBBPF_OUT) \
- DESTDIR=$(LIBBPF_DESTDIR) prefix= \
+ DESTDIR=$(LIBBPF_DESTDIR) prefix= EXTRA_CFLAGS="$(CFLAGS)" \
$(abspath $@) install_headers
-CFLAGS := -g \
+CFLAGS += -g \
-I$(srctree)/tools/include \
-I$(srctree)/tools/include/uapi \
-I$(LIBBPF_INCLUDE) \
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index b0383d371b9a..7604e7d5438f 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -330,6 +330,8 @@ union bpf_iter_link_info {
* *ctx_out*, *data_in* and *data_out* must be NULL.
* *repeat* must be zero.
*
+ * BPF_PROG_RUN is an alias for BPF_PROG_TEST_RUN.
+ *
* Return
* Returns zero on success. On error, -1 is returned and *errno*
* is set appropriately.
@@ -995,6 +997,7 @@ enum bpf_attach_type {
BPF_SK_REUSEPORT_SELECT,
BPF_SK_REUSEPORT_SELECT_OR_MIGRATE,
BPF_PERF_EVENT,
+ BPF_TRACE_KPROBE_MULTI,
__MAX_BPF_ATTACH_TYPE
};
@@ -1009,6 +1012,7 @@ enum bpf_link_type {
BPF_LINK_TYPE_NETNS = 5,
BPF_LINK_TYPE_XDP = 6,
BPF_LINK_TYPE_PERF_EVENT = 7,
+ BPF_LINK_TYPE_KPROBE_MULTI = 8,
MAX_BPF_LINK_TYPE,
};
@@ -1111,6 +1115,16 @@ enum bpf_link_type {
*/
#define BPF_F_SLEEPABLE (1U << 4)
+/* If BPF_F_XDP_HAS_FRAGS is used in BPF_PROG_LOAD command, the loaded program
+ * fully supports xdp frags.
+ */
+#define BPF_F_XDP_HAS_FRAGS (1U << 5)
+
+/* link_create.kprobe_multi.flags used in LINK_CREATE command for
+ * BPF_TRACE_KPROBE_MULTI attach type to create a return probe.
+ */
+#define BPF_F_KPROBE_MULTI_RETURN (1U << 0)
+
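A rough sketch (fragment, not a full program) of driving the new attach type straight through the syscall, assuming prog_fd holds a loaded kprobe-multi program; libbpf gained a higher-level attach wrapper in the same cycle:

	const char *syms[] = { "vfs_read", "vfs_write" };
	union bpf_attr attr = {};
	int link_fd;

	attr.link_create.prog_fd = prog_fd;
	attr.link_create.attach_type = BPF_TRACE_KPROBE_MULTI;
	attr.link_create.kprobe_multi.flags = 0; /* or BPF_F_KPROBE_MULTI_RETURN */
	attr.link_create.kprobe_multi.cnt = 2;
	attr.link_create.kprobe_multi.syms = (__u64)(uintptr_t)syms;

	link_fd = syscall(__NR_bpf, BPF_LINK_CREATE, &attr, sizeof(attr));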
/* When BPF ldimm64's insn[0].src_reg != 0 then this can have
* the following extensions:
*
@@ -1225,6 +1239,8 @@ enum {
/* If set, run the test on the cpu specified by bpf_attr.test.cpu */
#define BPF_F_TEST_RUN_ON_CPU (1U << 0)
+/* If set, XDP frames will be transmitted after processing */
+#define BPF_F_TEST_XDP_LIVE_FRAMES (1U << 1)
/* type for BPF_ENABLE_STATS */
enum bpf_stats_type {
@@ -1386,6 +1402,7 @@ union bpf_attr {
__aligned_u64 ctx_out;
__u32 flags;
__u32 cpu;
+ __u32 batch_size;
} test;
struct { /* anonymous struct used by BPF_*_GET_*_ID */
@@ -1465,6 +1482,13 @@ union bpf_attr {
*/
__u64 bpf_cookie;
} perf_event;
+ struct {
+ __u32 flags;
+ __u32 cnt;
+ __aligned_u64 syms;
+ __aligned_u64 addrs;
+ __aligned_u64 cookies;
+ } kprobe_multi;
};
} link_create;
@@ -1775,6 +1799,8 @@ union bpf_attr {
* 0 on success, or a negative error in case of failure.
*
* u64 bpf_get_current_pid_tgid(void)
+ * Description
+ * Get the current pid and tgid.
* Return
* A 64-bit integer containing the current tgid and pid, and
* created as such:
@@ -1782,6 +1808,8 @@ union bpf_attr {
* *current_task*\ **->pid**.
*
* u64 bpf_get_current_uid_gid(void)
+ * Description
+ * Get the current uid and gid.
* Return
* A 64-bit integer containing the current GID and UID, and
* created as such: *current_gid* **<< 32 \|** *current_uid*.
@@ -2256,6 +2284,8 @@ union bpf_attr {
* The 32-bit hash.
*
* u64 bpf_get_current_task(void)
+ * Description
+ * Get the current task.
* Return
* A pointer to the current task struct.
*
@@ -2286,8 +2316,8 @@ union bpf_attr {
* Return
* The return value depends on the result of the test, and can be:
*
- * * 0, if current task belongs to the cgroup2.
- * * 1, if current task does not belong to the cgroup2.
+ * * 1, if current task belongs to the cgroup2.
+ * * 0, if current task does not belong to the cgroup2.
* * A negative error code, if an error occurred.
*
* long bpf_skb_change_tail(struct sk_buff *skb, u32 len, u64 flags)
@@ -2369,6 +2399,8 @@ union bpf_attr {
* indicate that the hash is outdated and to trigger a
* recalculation the next time the kernel tries to access this
* hash or when the **bpf_get_hash_recalc**\ () helper is called.
+ * Return
+ * void.
*
* long bpf_get_numa_node_id(void)
* Description
@@ -2466,6 +2498,8 @@ union bpf_attr {
* An 8-byte long unique number or 0 if *sk* is NULL.
*
* u32 bpf_get_socket_uid(struct sk_buff *skb)
+ * Description
+ * Get the owner UID of the socket associated to *skb*.
* Return
* The owner UID of the socket associated to *skb*. If the socket
* is **NULL**, or if it is not a full socket (i.e. if it is a
@@ -3240,6 +3274,9 @@ union bpf_attr {
* The id is returned or 0 in case the id could not be retrieved.
*
* u64 bpf_get_current_cgroup_id(void)
+ * Description
+ * Get the current cgroup id based on the cgroup within which
+ * the current task is running.
* Return
* A 64-bit integer containing the current cgroup id based
* on the cgroup within which the current task is running.
@@ -5018,6 +5055,94 @@ union bpf_attr {
*
* Return
* The number of arguments of the traced function.
+ *
+ * int bpf_get_retval(void)
+ * Description
+ * Get the syscall's return value that will be returned to userspace.
+ *
+ * This helper is currently supported by cgroup programs only.
+ * Return
+ * The syscall's return value.
+ *
+ * int bpf_set_retval(int retval)
+ * Description
+ * Set the syscall's return value that will be returned to userspace.
+ *
+ * This helper is currently supported by cgroup programs only.
+ * Return
+ * 0 on success, or a negative error in case of failure.
+ *
+ * u64 bpf_xdp_get_buff_len(struct xdp_buff *xdp_md)
+ * Description
+ * Get the total size of a given xdp buffer (linear and paged area).
+ * Return
+ * The total size of a given xdp buffer.
+ *
+ * long bpf_xdp_load_bytes(struct xdp_buff *xdp_md, u32 offset, void *buf, u32 len)
+ * Description
+ * This helper is provided as an easy way to load data from an
+ * xdp buffer. It can be used to load *len* bytes from *offset* of
+ * the frame associated to *xdp_md*, into the buffer pointed to by
+ * *buf*.
+ * Return
+ * 0 on success, or a negative error in case of failure.
+ *
+ * long bpf_xdp_store_bytes(struct xdp_buff *xdp_md, u32 offset, void *buf, u32 len)
+ * Description
+ * Store *len* bytes from buffer *buf* into the frame
+ * associated to *xdp_md*, at *offset*.
+ * Return
+ * 0 on success, or a negative error in case of failure.
+ *
+ * long bpf_copy_from_user_task(void *dst, u32 size, const void *user_ptr, struct task_struct *tsk, u64 flags)
+ * Description
+ * Read *size* bytes from user space address *user_ptr* in *tsk*'s
+ * address space, and store the data in *dst*. *flags* is not
+ * used yet and is provided for future extensibility. This helper
+ * can only be used by sleepable programs.
+ * Return
+ * 0 on success, or a negative error in case of failure. On error
+ * *dst* buffer is zeroed out.
+ *
+ * long bpf_skb_set_tstamp(struct sk_buff *skb, u64 tstamp, u32 tstamp_type)
+ * Description
+ * Change __sk_buff->tstamp_type to *tstamp_type*
+ * and set __sk_buff->tstamp to *tstamp* at the same time.
+ *
+ * If there is no need to change the __sk_buff->tstamp_type,
+ * the tstamp value can be directly written to __sk_buff->tstamp
+ * instead.
+ *
+ * BPF_SKB_TSTAMP_DELIVERY_MONO is the only *tstamp_type* that
+ * will be kept during bpf_redirect_*(). A non-zero
+ * *tstamp* must be used with the BPF_SKB_TSTAMP_DELIVERY_MONO
+ * *tstamp_type*.
+ *
+ * A BPF_SKB_TSTAMP_UNSPEC *tstamp_type* can only be used
+ * with a zero *tstamp*.
+ *
+ * Only IPv4 and IPv6 skb->protocol are supported.
+ *
+ * This helper is most useful when a program needs to set a
+ * mono delivery time in __sk_buff->tstamp and then
+ * bpf_redirect_*() to the egress of an iface. For example,
+ * changing the (rcv) timestamp in __sk_buff->tstamp at
+ * ingress to a mono delivery time and then bpf_redirect_*()
+ * to sch_fq@phy-dev.
+ * Return
+ * 0 on success.
+ * **-EINVAL** for invalid input.
+ * **-EOPNOTSUPP** for unsupported protocol.
+ *
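A sketch of the ingress-to-egress pattern described above; the egress ifindex is an assumed constant:

#include <linux/bpf.h>
#include <linux/pkt_cls.h>
#include <bpf/bpf_helpers.h>

#define EGRESS_IFINDEX 2	/* hypothetical phy device running sch_fq */

SEC("tc")
int fwd_with_mono_tstamp(struct __sk_buff *skb)
{
	/* replace the (rcv) timestamp with a mono delivery time ... */
	if (bpf_skb_set_tstamp(skb, bpf_ktime_get_ns(),
			       BPF_SKB_TSTAMP_DELIVERY_MONO))
		return TC_ACT_SHOT;
	/* ... which is the only tstamp_type kept across the redirect */
	return bpf_redirect(EGRESS_IFINDEX, 0);
}

char LICENSE[] SEC("license") = "GPL";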
+ * long bpf_ima_file_hash(struct file *file, void *dst, u32 size)
+ * Description
+ * Returns a calculated IMA hash of the *file*.
+ * If the hash is larger than *size*, then only *size*
+ * bytes will be copied to *dst*.
+ * Return
+ * The **hash_algo** is returned on success;
+ * **-EOPNOTSUPP** if the hash calculation failed or **-EINVAL** if
+ * invalid arguments are passed.
*/
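A sketch of the IMA helper from a sleepable LSM hook; the hook and buffer size are illustrative, and the helper needs a sleepable context:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

SEC("lsm.s/file_open")
int BPF_PROG(measure_open, struct file *file)
{
	u8 hash[64];	/* large enough for a sha512 digest */
	long algo;

	algo = bpf_ima_file_hash(file, hash, sizeof(hash));
	if (algo < 0)
		return 0;	/* measurement failed; don't block the open */
	return 0;
}

char LICENSE[] SEC("license") = "GPL";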
#define __BPF_FUNC_MAPPER(FN) \
FN(unspec), \
@@ -5206,6 +5331,14 @@ union bpf_attr {
FN(get_func_arg), \
FN(get_func_ret), \
FN(get_func_arg_cnt), \
+ FN(get_retval), \
+ FN(set_retval), \
+ FN(xdp_get_buff_len), \
+ FN(xdp_load_bytes), \
+ FN(xdp_store_bytes), \
+ FN(copy_from_user_task), \
+ FN(skb_set_tstamp), \
+ FN(ima_file_hash), \
/* */
/* integer value in 'imm' field of BPF_CALL instruction selects which helper
@@ -5395,6 +5528,15 @@ union { \
__u64 :64; \
} __attribute__((aligned(8)))
+enum {
+ BPF_SKB_TSTAMP_UNSPEC,
+ BPF_SKB_TSTAMP_DELIVERY_MONO, /* tstamp has mono delivery time */
+ /* For any BPF_SKB_TSTAMP_* value that the bpf prog cannot handle,
+ * the bpf prog should treat it like BPF_SKB_TSTAMP_UNSPEC
+ * and try to deduce the type from ingress, egress or skb->sk->sk_clockid.
+ */
+};
+
/* user accessible mirror of in-kernel sk_buff.
* new fields can only be added to the end of this structure
*/
@@ -5435,7 +5577,8 @@ struct __sk_buff {
__u32 gso_segs;
__bpf_md_ptr(struct bpf_sock *, sk);
__u32 gso_size;
- __u32 :32; /* Padding, future use. */
+ __u8 tstamp_type;
+ __u32 :24; /* Padding, future use. */
__u64 hwtstamp;
};
@@ -5500,7 +5643,8 @@ struct bpf_sock {
__u32 src_ip4;
__u32 src_ip6[4];
__u32 src_port; /* host byte order */
- __u32 dst_port; /* network byte order */
+ __be16 dst_port; /* network byte order */
+ __u16 :16; /* zero padding */
__u32 dst_ip4;
__u32 dst_ip6[4];
__u32 state;
@@ -6378,7 +6522,8 @@ struct bpf_sk_lookup {
__u32 protocol; /* IP protocol (IPPROTO_TCP, IPPROTO_UDP) */
__u32 remote_ip4; /* Network byte order */
__u32 remote_ip6[4]; /* Network byte order */
- __u32 remote_port; /* Network byte order */
+ __be16 remote_port; /* Network byte order */
+ __u16 :16; /* Zero padding */
__u32 local_ip4; /* Network byte order */
__u32 local_ip6[4]; /* Network byte order */
__u32 local_port; /* Host byte order */
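With remote_port narrowed to __be16, two-byte loads now match the declared width; a sketch (the port numbers are arbitrary):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

SEC("sk_lookup")
int inspect_lookup(struct bpf_sk_lookup *ctx)
{
	/* remote_port is network byte order and now 16 bits wide */
	__u16 rport = bpf_ntohs(ctx->remote_port);

	if (ctx->local_port == 7777 && rport < 1024)
		return SK_DROP;
	return SK_PASS;
}

char LICENSE[] SEC("license") = "GPL";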
diff --git a/tools/include/uapi/linux/if_link.h b/tools/include/uapi/linux/if_link.h
index 6218f93f5c1a..e1ba2d51b717 100644
--- a/tools/include/uapi/linux/if_link.h
+++ b/tools/include/uapi/linux/if_link.h
@@ -860,6 +860,7 @@ enum {
IFLA_BOND_PEER_NOTIF_DELAY,
IFLA_BOND_AD_LACP_ACTIVE,
IFLA_BOND_MISSED_MAX,
+ IFLA_BOND_NS_IP6_TARGET,
__IFLA_BOND_MAX,
};
diff --git a/tools/lib/bpf/Makefile b/tools/lib/bpf/Makefile
index f947b61b2107..b8b37fe76006 100644
--- a/tools/lib/bpf/Makefile
+++ b/tools/lib/bpf/Makefile
@@ -131,7 +131,7 @@ GLOBAL_SYM_COUNT = $(shell readelf -s --wide $(BPF_IN_SHARED) | \
sort -u | wc -l)
VERSIONED_SYM_COUNT = $(shell readelf --dyn-syms --wide $(OUTPUT)libbpf.so | \
sed 's/\[.*\]//' | \
- awk '/GLOBAL/ && /DEFAULT/ && !/UND/ {print $$NF}' | \
+ awk '/GLOBAL/ && /DEFAULT/ && !/UND|ABS/ {print $$NF}' | \
grep -Eo '[^ ]+@LIBBPF_' | cut -d@ -f1 | sort -u | wc -l)
CMD_TARGETS = $(LIB_TARGET) $(PC_FILE)
@@ -194,7 +194,7 @@ check_abi: $(OUTPUT)libbpf.so $(VERSION_SCRIPT)
sort -u > $(OUTPUT)libbpf_global_syms.tmp; \
readelf --dyn-syms --wide $(OUTPUT)libbpf.so | \
sed 's/\[.*\]//' | \
- awk '/GLOBAL/ && /DEFAULT/ && !/UND/ {print $$NF}'| \
+ awk '/GLOBAL/ && /DEFAULT/ && !/UND|ABS/ {print $$NF}'| \
grep -Eo '[^ ]+@LIBBPF_' | cut -d@ -f1 | \
sort -u > $(OUTPUT)libbpf_versioned_syms.tmp; \
diff -u $(OUTPUT)libbpf_global_syms.tmp \
diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
index 550b4cbb6c99..cf27251adb92 100644
--- a/tools/lib/bpf/bpf.c
+++ b/tools/lib/bpf/bpf.c
@@ -29,6 +29,7 @@
#include <errno.h>
#include <linux/bpf.h>
#include <linux/filter.h>
+#include <linux/kernel.h>
#include <limits.h>
#include <sys/resource.h>
#include "bpf.h"
@@ -111,7 +112,7 @@ int probe_memcg_account(void)
BPF_EMIT_CALL(BPF_FUNC_ktime_get_coarse_ns),
BPF_EXIT_INSN(),
};
- size_t insn_cnt = sizeof(insns) / sizeof(insns[0]);
+ size_t insn_cnt = ARRAY_SIZE(insns);
union bpf_attr attr;
int prog_fd;
@@ -754,10 +755,10 @@ int bpf_prog_attach(int prog_fd, int target_fd, enum bpf_attach_type type,
.flags = flags,
);
- return bpf_prog_attach_xattr(prog_fd, target_fd, type, &opts);
+ return bpf_prog_attach_opts(prog_fd, target_fd, type, &opts);
}
-int bpf_prog_attach_xattr(int prog_fd, int target_fd,
+int bpf_prog_attach_opts(int prog_fd, int target_fd,
enum bpf_attach_type type,
const struct bpf_prog_attach_opts *opts)
{
@@ -778,6 +779,11 @@ int bpf_prog_attach_xattr(int prog_fd, int target_fd,
return libbpf_err_errno(ret);
}
+__attribute__((alias("bpf_prog_attach_opts")))
+int bpf_prog_attach_xattr(int prog_fd, int target_fd,
+ enum bpf_attach_type type,
+ const struct bpf_prog_attach_opts *opts);
+
int bpf_prog_detach(int target_fd, enum bpf_attach_type type)
{
union bpf_attr attr;
@@ -848,6 +854,15 @@ int bpf_link_create(int prog_fd, int target_fd,
if (!OPTS_ZEROED(opts, perf_event))
return libbpf_err(-EINVAL);
break;
+ case BPF_TRACE_KPROBE_MULTI:
+ attr.link_create.kprobe_multi.flags = OPTS_GET(opts, kprobe_multi.flags, 0);
+ attr.link_create.kprobe_multi.cnt = OPTS_GET(opts, kprobe_multi.cnt, 0);
+ attr.link_create.kprobe_multi.syms = ptr_to_u64(OPTS_GET(opts, kprobe_multi.syms, 0));
+ attr.link_create.kprobe_multi.addrs = ptr_to_u64(OPTS_GET(opts, kprobe_multi.addrs, 0));
+ attr.link_create.kprobe_multi.cookies = ptr_to_u64(OPTS_GET(opts, kprobe_multi.cookies, 0));
+ if (!OPTS_ZEROED(opts, kprobe_multi))
+ return libbpf_err(-EINVAL);
+ break;
default:
if (!OPTS_ZEROED(opts, flags))
return libbpf_err(-EINVAL);
@@ -989,6 +1004,7 @@ int bpf_prog_test_run_opts(int prog_fd, struct bpf_test_run_opts *opts)
memset(&attr, 0, sizeof(attr));
attr.test.prog_fd = prog_fd;
+ attr.test.batch_size = OPTS_GET(opts, batch_size, 0);
attr.test.cpu = OPTS_GET(opts, cpu, 0);
attr.test.flags = OPTS_GET(opts, flags, 0);
attr.test.repeat = OPTS_GET(opts, repeat, 0);
diff --git a/tools/lib/bpf/bpf.h b/tools/lib/bpf/bpf.h
index 14e0d97ad2cf..f4b4afb6d4ba 100644
--- a/tools/lib/bpf/bpf.h
+++ b/tools/lib/bpf/bpf.h
@@ -391,6 +391,10 @@ struct bpf_prog_attach_opts {
LIBBPF_API int bpf_prog_attach(int prog_fd, int attachable_fd,
enum bpf_attach_type type, unsigned int flags);
+LIBBPF_API int bpf_prog_attach_opts(int prog_fd, int attachable_fd,
+ enum bpf_attach_type type,
+ const struct bpf_prog_attach_opts *opts);
+LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_prog_attach_opts() instead")
LIBBPF_API int bpf_prog_attach_xattr(int prog_fd, int attachable_fd,
enum bpf_attach_type type,
const struct bpf_prog_attach_opts *opts);
@@ -409,10 +413,17 @@ struct bpf_link_create_opts {
struct {
__u64 bpf_cookie;
} perf_event;
+ struct {
+ __u32 flags;
+ __u32 cnt;
+ const char **syms;
+ const unsigned long *addrs;
+ const __u64 *cookies;
+ } kprobe_multi;
};
size_t :0;
};
-#define bpf_link_create_opts__last_field perf_event
+#define bpf_link_create_opts__last_field kprobe_multi.cookies
LIBBPF_API int bpf_link_create(int prog_fd, int target_fd,
enum bpf_attach_type attach_type,
@@ -449,12 +460,14 @@ struct bpf_prog_test_run_attr {
* out: length of ctx_out */
};
+LIBBPF_DEPRECATED_SINCE(0, 7, "use bpf_prog_test_run_opts() instead")
LIBBPF_API int bpf_prog_test_run_xattr(struct bpf_prog_test_run_attr *test_attr);
/*
* bpf_prog_test_run does not check that data_out is large enough. Consider
- * using bpf_prog_test_run_xattr instead.
+ * using bpf_prog_test_run_opts instead.
*/
+LIBBPF_DEPRECATED_SINCE(0, 7, "use bpf_prog_test_run_opts() instead")
LIBBPF_API int bpf_prog_test_run(int prog_fd, int repeat, void *data,
__u32 size, void *data_out, __u32 *size_out,
__u32 *retval, __u32 *duration);
@@ -506,8 +519,9 @@ struct bpf_test_run_opts {
__u32 duration; /* out: average per repetition in ns */
__u32 flags;
__u32 cpu;
+ __u32 batch_size;
};
-#define bpf_test_run_opts__last_field cpu
+#define bpf_test_run_opts__last_field batch_size
LIBBPF_API int bpf_prog_test_run_opts(int prog_fd,
struct bpf_test_run_opts *opts);
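The new batch_size field pairs with the XDP live-frames test mode merged in this series; a userspace sketch, assuming prog_fd refers to a loaded XDP program and pkt points to a prebuilt frame:

#include <bpf/bpf.h>

int xdp_send_packets(int prog_fd, void *pkt, __u32 pkt_len)
{
	LIBBPF_OPTS(bpf_test_run_opts, opts,
		.data_in = pkt,
		.data_size_in = pkt_len,
		.repeat = 1 << 20,			/* transmit ~1M copies */
		.flags = BPF_F_TEST_XDP_LIVE_FRAMES,	/* send real packets */
		.batch_size = 64,			/* frames per kernel batch */
	);

	return bpf_prog_test_run_opts(prog_fd, &opts);
}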
diff --git a/tools/lib/bpf/bpf_helpers.h b/tools/lib/bpf/bpf_helpers.h
index 963b1060d944..44df982d2a5c 100644
--- a/tools/lib/bpf/bpf_helpers.h
+++ b/tools/lib/bpf/bpf_helpers.h
@@ -133,7 +133,7 @@ struct bpf_map_def {
unsigned int value_size;
unsigned int max_entries;
unsigned int map_flags;
-};
+} __attribute__((deprecated("use BTF-defined maps in .maps section")));
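For reference, the BTF-defined equivalent of a legacy struct bpf_map_def entry looks like this (a sketch; the key/value types and size are arbitrary):

struct {
	__uint(type, BPF_MAP_TYPE_HASH);
	__uint(max_entries, 1024);
	__type(key, __u32);
	__type(value, __u64);
} my_map SEC(".maps");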
enum libbpf_pin_type {
LIBBPF_PIN_NONE,
diff --git a/tools/lib/bpf/bpf_tracing.h b/tools/lib/bpf/bpf_tracing.h
index 90f56b0f585f..e3a8c947e89f 100644
--- a/tools/lib/bpf/bpf_tracing.h
+++ b/tools/lib/bpf/bpf_tracing.h
@@ -76,6 +76,9 @@
#define __PT_RC_REG ax
#define __PT_SP_REG sp
#define __PT_IP_REG ip
+/* syscall uses r10 for PARM4 */
+#define PT_REGS_PARM4_SYSCALL(x) ((x)->r10)
+#define PT_REGS_PARM4_CORE_SYSCALL(x) BPF_CORE_READ(x, r10)
#else
@@ -105,6 +108,9 @@
#define __PT_RC_REG rax
#define __PT_SP_REG rsp
#define __PT_IP_REG rip
+/* syscall uses r10 for PARM4 */
+#define PT_REGS_PARM4_SYSCALL(x) ((x)->r10)
+#define PT_REGS_PARM4_CORE_SYSCALL(x) BPF_CORE_READ(x, r10)
#endif /* __i386__ */
@@ -112,6 +118,10 @@
#elif defined(bpf_target_s390)
+struct pt_regs___s390 {
+ unsigned long orig_gpr2;
+};
+
/* s390 provides user_pt_regs instead of struct pt_regs to userspace */
#define __PT_REGS_CAST(x) ((const user_pt_regs *)(x))
#define __PT_PARM1_REG gprs[2]
@@ -124,6 +134,8 @@
#define __PT_RC_REG gprs[2]
#define __PT_SP_REG gprs[15]
#define __PT_IP_REG psw.addr
+#define PT_REGS_PARM1_SYSCALL(x) ({ _Pragma("GCC error \"use PT_REGS_PARM1_CORE_SYSCALL() instead\""); 0l; })
+#define PT_REGS_PARM1_CORE_SYSCALL(x) BPF_CORE_READ((const struct pt_regs___s390 *)(x), orig_gpr2)
#elif defined(bpf_target_arm)
@@ -140,6 +152,10 @@
#elif defined(bpf_target_arm64)
+struct pt_regs___arm64 {
+ unsigned long orig_x0;
+};
+
/* arm64 provides struct user_pt_regs instead of struct pt_regs to userspace */
#define __PT_REGS_CAST(x) ((const struct user_pt_regs *)(x))
#define __PT_PARM1_REG regs[0]
@@ -152,6 +168,8 @@
#define __PT_RC_REG regs[0]
#define __PT_SP_REG sp
#define __PT_IP_REG pc
+#define PT_REGS_PARM1_SYSCALL(x) ({ _Pragma("GCC error \"use PT_REGS_PARM1_CORE_SYSCALL() instead\""); 0l; })
+#define PT_REGS_PARM1_CORE_SYSCALL(x) BPF_CORE_READ((const struct pt_regs___arm64 *)(x), orig_x0)
#elif defined(bpf_target_mips)
@@ -178,6 +196,8 @@
#define __PT_RC_REG gpr[3]
#define __PT_SP_REG sp
#define __PT_IP_REG nip
+/* powerpc does not select ARCH_HAS_SYSCALL_WRAPPER. */
+#define PT_REGS_SYSCALL_REGS(ctx) ctx
#elif defined(bpf_target_sparc)
@@ -206,10 +226,12 @@
#define __PT_PARM4_REG a3
#define __PT_PARM5_REG a4
#define __PT_RET_REG ra
-#define __PT_FP_REG fp
+#define __PT_FP_REG s0
#define __PT_RC_REG a5
#define __PT_SP_REG sp
-#define __PT_IP_REG epc
+#define __PT_IP_REG pc
+/* riscv does not select ARCH_HAS_SYSCALL_WRAPPER. */
+#define PT_REGS_SYSCALL_REGS(ctx) ctx
#endif
@@ -263,6 +285,26 @@ struct pt_regs;
#endif
+#ifndef PT_REGS_PARM1_SYSCALL
+#define PT_REGS_PARM1_SYSCALL(x) PT_REGS_PARM1(x)
+#endif
+#define PT_REGS_PARM2_SYSCALL(x) PT_REGS_PARM2(x)
+#define PT_REGS_PARM3_SYSCALL(x) PT_REGS_PARM3(x)
+#ifndef PT_REGS_PARM4_SYSCALL
+#define PT_REGS_PARM4_SYSCALL(x) PT_REGS_PARM4(x)
+#endif
+#define PT_REGS_PARM5_SYSCALL(x) PT_REGS_PARM5(x)
+
+#ifndef PT_REGS_PARM1_CORE_SYSCALL
+#define PT_REGS_PARM1_CORE_SYSCALL(x) PT_REGS_PARM1_CORE(x)
+#endif
+#define PT_REGS_PARM2_CORE_SYSCALL(x) PT_REGS_PARM2_CORE(x)
+#define PT_REGS_PARM3_CORE_SYSCALL(x) PT_REGS_PARM3_CORE(x)
+#ifndef PT_REGS_PARM4_CORE_SYSCALL
+#define PT_REGS_PARM4_CORE_SYSCALL(x) PT_REGS_PARM4_CORE(x)
+#endif
+#define PT_REGS_PARM5_CORE_SYSCALL(x) PT_REGS_PARM5_CORE(x)
+
#else /* defined(bpf_target_defined) */
#define PT_REGS_PARM1(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; })
@@ -290,8 +332,30 @@ struct pt_regs;
#define BPF_KPROBE_READ_RET_IP(ip, ctx) ({ _Pragma(__BPF_TARGET_MISSING); 0l; })
#define BPF_KRETPROBE_READ_RET_IP(ip, ctx) ({ _Pragma(__BPF_TARGET_MISSING); 0l; })
+#define PT_REGS_PARM1_SYSCALL(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; })
+#define PT_REGS_PARM2_SYSCALL(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; })
+#define PT_REGS_PARM3_SYSCALL(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; })
+#define PT_REGS_PARM4_SYSCALL(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; })
+#define PT_REGS_PARM5_SYSCALL(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; })
+
+#define PT_REGS_PARM1_CORE_SYSCALL(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; })
+#define PT_REGS_PARM2_CORE_SYSCALL(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; })
+#define PT_REGS_PARM3_CORE_SYSCALL(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; })
+#define PT_REGS_PARM4_CORE_SYSCALL(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; })
+#define PT_REGS_PARM5_CORE_SYSCALL(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; })
+
#endif /* defined(bpf_target_defined) */
+/*
+ * When invoked from a kprobe on a syscall handler, PT_REGS_SYSCALL_REGS()
+ * returns a pointer to a struct pt_regs containing syscall arguments,
+ * suitable for passing to PT_REGS_PARMn_SYSCALL() and
+ * PT_REGS_PARMn_CORE_SYSCALL().
+ */
+#ifndef PT_REGS_SYSCALL_REGS
+/* By default, assume that the arch selects ARCH_HAS_SYSCALL_WRAPPER. */
+#define PT_REGS_SYSCALL_REGS(ctx) ((struct pt_regs *)PT_REGS_PARM1(ctx))
+#endif
+
#ifndef ___bpf_concat
#define ___bpf_concat(a, b) a ## b
#endif
@@ -406,4 +470,39 @@ typeof(name(0)) name(struct pt_regs *ctx) \
} \
static __always_inline typeof(name(0)) ____##name(struct pt_regs *ctx, ##args)
+#define ___bpf_syscall_args0() ctx
+#define ___bpf_syscall_args1(x) ___bpf_syscall_args0(), (void *)PT_REGS_PARM1_CORE_SYSCALL(regs)
+#define ___bpf_syscall_args2(x, args...) ___bpf_syscall_args1(args), (void *)PT_REGS_PARM2_CORE_SYSCALL(regs)
+#define ___bpf_syscall_args3(x, args...) ___bpf_syscall_args2(args), (void *)PT_REGS_PARM3_CORE_SYSCALL(regs)
+#define ___bpf_syscall_args4(x, args...) ___bpf_syscall_args3(args), (void *)PT_REGS_PARM4_CORE_SYSCALL(regs)
+#define ___bpf_syscall_args5(x, args...) ___bpf_syscall_args4(args), (void *)PT_REGS_PARM5_CORE_SYSCALL(regs)
+#define ___bpf_syscall_args(args...) ___bpf_apply(___bpf_syscall_args, ___bpf_narg(args))(args)
+
+/*
+ * BPF_KPROBE_SYSCALL is a variant of BPF_KPROBE, which is intended for
+ * tracing syscall functions, like __x64_sys_close. It hides the underlying
+ * platform-specific low-level way of getting syscall input arguments from
+ * struct pt_regs, and provides the familiar syntax and semantics of
+ * typed, named function arguments for accessing syscall input parameters.
+ *
+ * Original struct pt_regs* context is preserved as 'ctx' argument. This might
+ * be necessary when using BPF helpers like bpf_perf_event_output().
+ *
+ * This macro relies on BPF CO-RE support.
+ */
+#define BPF_KPROBE_SYSCALL(name, args...) \
+name(struct pt_regs *ctx); \
+static __attribute__((always_inline)) typeof(name(0)) \
+____##name(struct pt_regs *ctx, ##args); \
+typeof(name(0)) name(struct pt_regs *ctx) \
+{ \
+ struct pt_regs *regs = PT_REGS_SYSCALL_REGS(ctx); \
+ _Pragma("GCC diagnostic push") \
+ _Pragma("GCC diagnostic ignored \"-Wint-conversion\"") \
+ return ____##name(___bpf_syscall_args(args)); \
+ _Pragma("GCC diagnostic pop") \
+} \
+static __attribute__((always_inline)) typeof(name(0)) \
+____##name(struct pt_regs *ctx, ##args)
+
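A usage sketch of the macro above, tracing the x86-64 openat entry point; the symbol name is arch-specific and the program relies on CO-RE as noted:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_core_read.h>
#include <bpf/bpf_tracing.h>

SEC("kprobe/__x64_sys_openat")
int BPF_KPROBE_SYSCALL(trace_openat, int dfd, const char *filename, int flags)
{
	char name[64];

	/* 'filename' is already the syscall argument, not a pt_regs slot */
	bpf_probe_read_user_str(name, sizeof(name), filename);
	bpf_printk("openat: flags=%d file=%s", flags, name);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";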
#endif
diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c
index 9aa19c89f758..1383e26c5d1f 100644
--- a/tools/lib/bpf/btf.c
+++ b/tools/lib/bpf/btf.c
@@ -1620,20 +1620,37 @@ static int btf_commit_type(struct btf *btf, int data_sz)
struct btf_pipe {
const struct btf *src;
struct btf *dst;
+ struct hashmap *str_off_map; /* map string offsets from src to dst */
};
static int btf_rewrite_str(__u32 *str_off, void *ctx)
{
struct btf_pipe *p = ctx;
- int off;
+ void *mapped_off;
+ int off, err;
if (!*str_off) /* nothing to do for empty strings */
return 0;
+ if (p->str_off_map &&
+ hashmap__find(p->str_off_map, (void *)(long)*str_off, &mapped_off)) {
+ *str_off = (__u32)(long)mapped_off;
+ return 0;
+ }
+
off = btf__add_str(p->dst, btf__str_by_offset(p->src, *str_off));
if (off < 0)
return off;
+ /* Remember string mapping from src to dst. It avoids
+ * performing expensive string comparisons.
+ */
+ if (p->str_off_map) {
+ err = hashmap__append(p->str_off_map, (void *)(long)*str_off, (void *)(long)off);
+ if (err)
+ return err;
+ }
+
*str_off = off;
return 0;
}
@@ -1680,6 +1697,9 @@ static int btf_rewrite_type_ids(__u32 *type_id, void *ctx)
return 0;
}
+static size_t btf_dedup_identity_hash_fn(const void *key, void *ctx);
+static bool btf_dedup_equal_fn(const void *k1, const void *k2, void *ctx);
+
int btf__add_btf(struct btf *btf, const struct btf *src_btf)
{
struct btf_pipe p = { .src = src_btf, .dst = btf };
@@ -1713,6 +1733,11 @@ int btf__add_btf(struct btf *btf, const struct btf *src_btf)
if (!off)
return libbpf_err(-ENOMEM);
+ /* Map the string offsets from src_btf to the corresponding offsets in btf to improve performance */
+ p.str_off_map = hashmap__new(btf_dedup_identity_hash_fn, btf_dedup_equal_fn, NULL);
+ if (IS_ERR(p.str_off_map))
+ return libbpf_err(-ENOMEM);
+
/* bulk copy types data for all types from src_btf */
memcpy(t, src_btf->types_data, data_sz);
@@ -1754,6 +1779,8 @@ int btf__add_btf(struct btf *btf, const struct btf *src_btf)
btf->hdr->str_off += data_sz;
btf->nr_types += cnt;
+ hashmap__free(p.str_off_map);
+
/* return type ID of the first added BTF type */
return btf->start_id + btf->nr_types - cnt;
err_out:
@@ -1767,6 +1794,8 @@ err_out:
* wasn't modified, so doesn't need restoring, see big comment above */
btf->hdr->str_len = old_strs_len;
+ hashmap__free(p.str_off_map);
+
return libbpf_err(err);
}
diff --git a/tools/lib/bpf/btf.h b/tools/lib/bpf/btf.h
index 061839f04525..951ac7475794 100644
--- a/tools/lib/bpf/btf.h
+++ b/tools/lib/bpf/btf.h
@@ -147,11 +147,10 @@ LIBBPF_API int btf__resolve_type(const struct btf *btf, __u32 type_id);
LIBBPF_API int btf__align_of(const struct btf *btf, __u32 id);
LIBBPF_API int btf__fd(const struct btf *btf);
LIBBPF_API void btf__set_fd(struct btf *btf, int fd);
-LIBBPF_DEPRECATED_SINCE(0, 7, "use btf__raw_data() instead")
-LIBBPF_API const void *btf__get_raw_data(const struct btf *btf, __u32 *size);
LIBBPF_API const void *btf__raw_data(const struct btf *btf, __u32 *size);
LIBBPF_API const char *btf__name_by_offset(const struct btf *btf, __u32 offset);
LIBBPF_API const char *btf__str_by_offset(const struct btf *btf, __u32 offset);
+LIBBPF_DEPRECATED_SINCE(0, 7, "this API is not necessary when BTF-defined maps are used")
LIBBPF_API int btf__get_map_kv_tids(const struct btf *btf, const char *map_name,
__u32 expected_key_size,
__u32 expected_value_size,
@@ -159,8 +158,7 @@ LIBBPF_API int btf__get_map_kv_tids(const struct btf *btf, const char *map_name,
LIBBPF_API struct btf_ext *btf_ext__new(const __u8 *data, __u32 size);
LIBBPF_API void btf_ext__free(struct btf_ext *btf_ext);
-LIBBPF_API const void *btf_ext__get_raw_data(const struct btf_ext *btf_ext,
- __u32 *size);
+LIBBPF_API const void *btf_ext__raw_data(const struct btf_ext *btf_ext, __u32 *size);
LIBBPF_API LIBBPF_DEPRECATED("btf_ext__reloc_func_info was never meant as a public API and has wrong assumptions embedded in it; it will be removed in the future libbpf versions")
int btf_ext__reloc_func_info(const struct btf *btf,
const struct btf_ext *btf_ext,
@@ -171,8 +169,10 @@ int btf_ext__reloc_line_info(const struct btf *btf,
const struct btf_ext *btf_ext,
const char *sec_name, __u32 insns_cnt,
void **line_info, __u32 *cnt);
-LIBBPF_API __u32 btf_ext__func_info_rec_size(const struct btf_ext *btf_ext);
-LIBBPF_API __u32 btf_ext__line_info_rec_size(const struct btf_ext *btf_ext);
+LIBBPF_API LIBBPF_DEPRECATED("btf_ext__reloc_func_info is deprecated; write custom func_info parsing to fetch rec_size")
+__u32 btf_ext__func_info_rec_size(const struct btf_ext *btf_ext);
+LIBBPF_API LIBBPF_DEPRECATED("btf_ext__reloc_line_info is deprecated; write custom line_info parsing to fetch rec_size")
+__u32 btf_ext__line_info_rec_size(const struct btf_ext *btf_ext);
LIBBPF_API int btf__find_str(struct btf *btf, const char *s);
LIBBPF_API int btf__add_str(struct btf *btf, const char *s);
@@ -375,8 +375,28 @@ btf_dump__dump_type_data(struct btf_dump *d, __u32 id,
const struct btf_dump_type_data_opts *opts);
/*
- * A set of helpers for easier BTF types handling
+ * A set of helpers for easier BTF types handling.
+ *
+ * The inline functions below rely on constants from the kernel headers which
+ * may not be available for applications including this header file. To avoid
+ * compilation errors, we define all the constants here that were added after
+ * the initial introduction of the BTF_KIND* constants.
*/
+#ifndef BTF_KIND_FUNC
+#define BTF_KIND_FUNC 12 /* Function */
+#define BTF_KIND_FUNC_PROTO 13 /* Function Proto */
+#endif
+#ifndef BTF_KIND_VAR
+#define BTF_KIND_VAR 14 /* Variable */
+#define BTF_KIND_DATASEC 15 /* Section */
+#endif
+#ifndef BTF_KIND_FLOAT
+#define BTF_KIND_FLOAT 16 /* Floating point */
+#endif
+/* The kernel header switched to enums, so these two were never #defined */
+#define BTF_KIND_DECL_TAG 17 /* Decl Tag */
+#define BTF_KIND_TYPE_TAG 18 /* Type Tag */
+
static inline __u16 btf_kind(const struct btf_type *t)
{
return BTF_INFO_KIND(t->info);
diff --git a/tools/lib/bpf/btf_dump.c b/tools/lib/bpf/btf_dump.c
index b9a3260c83cb..6b1bc1f43728 100644
--- a/tools/lib/bpf/btf_dump.c
+++ b/tools/lib/bpf/btf_dump.c
@@ -1505,6 +1505,11 @@ static const char *btf_dump_resolve_name(struct btf_dump *d, __u32 id,
if (s->name_resolved)
return *cached_name ? *cached_name : orig_name;
+ if (btf_is_fwd(t) || (btf_is_enum(t) && btf_vlen(t) == 0)) {
+ s->name_resolved = 1;
+ return orig_name;
+ }
+
dup_cnt = btf_dump_name_dups(d, name_map, orig_name);
if (dup_cnt > 1) {
const size_t max_len = 256;
@@ -1861,14 +1866,16 @@ static int btf_dump_array_data(struct btf_dump *d,
{
const struct btf_array *array = btf_array(t);
const struct btf_type *elem_type;
- __u32 i, elem_size = 0, elem_type_id;
+ __u32 i, elem_type_id;
+ __s64 elem_size;
bool is_array_member;
elem_type_id = array->type;
elem_type = skip_mods_and_typedefs(d->btf, elem_type_id, NULL);
elem_size = btf__resolve_size(d->btf, elem_type_id);
if (elem_size <= 0) {
- pr_warn("unexpected elem size %d for array type [%u]\n", elem_size, id);
+ pr_warn("unexpected elem size %zd for array type [%u]\n",
+ (ssize_t)elem_size, id);
return -EINVAL;
}
diff --git a/tools/lib/bpf/gen_loader.c b/tools/lib/bpf/gen_loader.c
index 8ecef1088ba2..927745b08014 100644
--- a/tools/lib/bpf/gen_loader.c
+++ b/tools/lib/bpf/gen_loader.c
@@ -1043,18 +1043,27 @@ void bpf_gen__map_update_elem(struct bpf_gen *gen, int map_idx, void *pvalue,
value = add_data(gen, pvalue, value_size);
key = add_data(gen, &zero, sizeof(zero));
- /* if (map_desc[map_idx].initial_value)
- * copy_from_user(value, initial_value, value_size);
+ /* if (map_desc[map_idx].initial_value) {
+ * if (ctx->flags & BPF_SKEL_KERNEL)
+ * bpf_probe_read_kernel(value, value_size, initial_value);
+ * else
+ * bpf_copy_from_user(value, value_size, initial_value);
+ * }
*/
emit(gen, BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_6,
sizeof(struct bpf_loader_ctx) +
sizeof(struct bpf_map_desc) * map_idx +
offsetof(struct bpf_map_desc, initial_value)));
- emit(gen, BPF_JMP_IMM(BPF_JEQ, BPF_REG_3, 0, 4));
+ emit(gen, BPF_JMP_IMM(BPF_JEQ, BPF_REG_3, 0, 8));
emit2(gen, BPF_LD_IMM64_RAW_FULL(BPF_REG_1, BPF_PSEUDO_MAP_IDX_VALUE,
0, 0, 0, value));
emit(gen, BPF_MOV64_IMM(BPF_REG_2, value_size));
+ emit(gen, BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_6,
+ offsetof(struct bpf_loader_ctx, flags)));
+ emit(gen, BPF_JMP_IMM(BPF_JSET, BPF_REG_0, BPF_SKEL_KERNEL, 2));
emit(gen, BPF_EMIT_CALL(BPF_FUNC_copy_from_user));
+ emit(gen, BPF_JMP_IMM(BPF_JA, 0, 0, 1));
+ emit(gen, BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel));
map_update_attr = add_data(gen, &attr, attr_size);
move_blob2blob(gen, attr_field(map_update_attr, map_fd), 4,
diff --git a/tools/lib/bpf/hashmap.c b/tools/lib/bpf/hashmap.c
index 3c20b126d60d..aeb09c288716 100644
--- a/tools/lib/bpf/hashmap.c
+++ b/tools/lib/bpf/hashmap.c
@@ -75,7 +75,7 @@ void hashmap__clear(struct hashmap *map)
void hashmap__free(struct hashmap *map)
{
- if (!map)
+ if (IS_ERR_OR_NULL(map))
return;
hashmap__clear(map);
@@ -238,4 +238,3 @@ bool hashmap__delete(struct hashmap *map, const void *key,
return true;
}
-
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 7f10dd501a52..809fe209cdcc 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -156,14 +156,6 @@ enum libbpf_strict_mode libbpf_mode = LIBBPF_STRICT_NONE;
int libbpf_set_strict_mode(enum libbpf_strict_mode mode)
{
- /* __LIBBPF_STRICT_LAST is the last power-of-2 value used + 1, so to
- * get all possible values we compensate last +1, and then (2*x - 1)
- * to get the bit mask
- */
- if (mode != LIBBPF_STRICT_ALL
- && (mode & ~((__LIBBPF_STRICT_LAST - 1) * 2 - 1)))
- return errno = EINVAL, -EINVAL;
-
libbpf_mode = mode;
return 0;
}
@@ -209,12 +201,6 @@ struct reloc_desc {
};
};
-struct bpf_sec_def;
-
-typedef int (*init_fn_t)(struct bpf_program *prog, long cookie);
-typedef int (*preload_fn_t)(struct bpf_program *prog, struct bpf_prog_load_opts *opts, long cookie);
-typedef struct bpf_link *(*attach_fn_t)(const struct bpf_program *prog, long cookie);
-
/* stored as sec_def->cookie for all libbpf-supported SEC()s */
enum sec_def_flags {
SEC_NONE = 0,
@@ -235,17 +221,22 @@ enum sec_def_flags {
SEC_SLEEPABLE = 8,
/* allow non-strict prefix matching */
SEC_SLOPPY_PFX = 16,
+ /* BPF program supports non-linear XDP buffers */
+ SEC_XDP_FRAGS = 32,
+ /* deprecated sec definitions not supposed to be used */
+ SEC_DEPRECATED = 64,
};
struct bpf_sec_def {
- const char *sec;
+ char *sec;
enum bpf_prog_type prog_type;
enum bpf_attach_type expected_attach_type;
long cookie;
+ int handler_id;
- init_fn_t init_fn;
- preload_fn_t preload_fn;
- attach_fn_t attach_fn;
+ libbpf_prog_setup_fn_t prog_setup_fn;
+ libbpf_prog_prepare_load_fn_t prog_prepare_load_fn;
+ libbpf_prog_attach_fn_t prog_attach_fn;
};
/*
@@ -1378,22 +1369,20 @@ static bool bpf_map_type__is_map_in_map(enum bpf_map_type type)
static int find_elf_sec_sz(const struct bpf_object *obj, const char *name, __u32 *size)
{
- int ret = -ENOENT;
Elf_Data *data;
Elf_Scn *scn;
- *size = 0;
if (!name)
return -EINVAL;
scn = elf_sec_by_name(obj, name);
data = elf_sec_data(obj, scn);
if (data) {
- ret = 0; /* found it */
*size = data->d_size;
+ return 0; /* found it */
}
- return *size ? 0 : ret;
+ return -ENOENT;
}
static int find_elf_var_offset(const struct bpf_object *obj, const char *name, __u32 *off)
@@ -1529,6 +1518,9 @@ static char *internal_map_name(struct bpf_object *obj, const char *real_name)
}
static int
+bpf_map_find_btf_info(struct bpf_object *obj, struct bpf_map *map);
+
+static int
bpf_object__init_internal_map(struct bpf_object *obj, enum libbpf_map_type type,
const char *real_name, int sec_idx, void *data, size_t data_sz)
{
@@ -1575,6 +1567,9 @@ bpf_object__init_internal_map(struct bpf_object *obj, enum libbpf_map_type type,
return err;
}
+ /* failures are fine because of maps like .rodata.str1.1 */
+ (void) bpf_map_find_btf_info(obj, map);
+
if (data)
memcpy(map->mmaped, data, data_sz);
@@ -1937,6 +1932,11 @@ static int bpf_object__init_user_maps(struct bpf_object *obj, bool strict)
if (obj->efile.maps_shndx < 0)
return 0;
+ if (libbpf_mode & LIBBPF_STRICT_MAP_DEFINITIONS) {
+ pr_warn("legacy map definitions in SEC(\"maps\") are not supported\n");
+ return -EOPNOTSUPP;
+ }
+
if (!symbols)
return -EINVAL;
@@ -1999,6 +1999,8 @@ static int bpf_object__init_user_maps(struct bpf_object *obj, bool strict)
return -LIBBPF_ERRNO__FORMAT;
}
+ pr_warn("map '%s' (legacy): legacy map definitions are deprecated, use BTF-defined maps instead\n", map_name);
+
if (ELF64_ST_BIND(sym->st_info) == STB_LOCAL) {
pr_warn("map '%s' (legacy): static maps are not supported\n", map_name);
return -ENOTSUP;
@@ -2050,6 +2052,9 @@ static int bpf_object__init_user_maps(struct bpf_object *obj, bool strict)
}
memcpy(&map->def, def, sizeof(struct bpf_map_def));
}
+
+ /* BTF info may not exist; fill it in if it does */
+ (void) bpf_map_find_btf_info(obj, map);
}
return 0;
}
@@ -2538,6 +2543,10 @@ static int bpf_object__init_user_btf_map(struct bpf_object *obj,
fill_map_from_def(map->inner_map, &inner_def);
}
+ err = bpf_map_find_btf_info(obj, map);
+ if (err)
+ return err;
+
return 0;
}
@@ -2792,7 +2801,7 @@ static int btf_fixup_datasec(struct bpf_object *obj, struct btf *btf,
goto sort_vars;
ret = find_elf_sec_sz(obj, name, &size);
- if (ret || !size || (t->size && t->size != size)) {
+ if (ret || !size) {
pr_debug("Invalid size for section %s: %u bytes\n", name, size);
return -ENOENT;
}
@@ -3836,7 +3845,14 @@ static bool prog_is_subprog(const struct bpf_object *obj,
* .text programs are subprograms (even if they are not called from
* other programs), because libbpf never explicitly supported mixing
* SEC()-designated BPF programs and .text entry-point BPF programs.
+ *
+ * In libbpf 1.0 strict mode, we always consider .text
+ * programs to be subprograms.
*/
+
+ if (libbpf_mode & LIBBPF_STRICT_SEC_NAME)
+ return prog->sec_idx == obj->efile.text_shndx;
+
return prog->sec_idx == obj->efile.text_shndx && obj->nr_programs > 1;
}
@@ -4181,6 +4197,9 @@ static int bpf_map_find_btf_info(struct bpf_object *obj, struct bpf_map *map)
__u32 key_type_id = 0, value_type_id = 0;
int ret;
+ if (!obj->btf)
+ return -ENOENT;
+
/* if it's BTF-defined map, we don't need to search for type IDs.
* For struct_ops map, it does not need btf_key_type_id and
* btf_value_type_id.
@@ -4190,9 +4209,13 @@ static int bpf_map_find_btf_info(struct bpf_object *obj, struct bpf_map *map)
return 0;
if (!bpf_map__is_internal(map)) {
+ pr_warn("Use of BPF_ANNOTATE_KV_PAIR is deprecated, use BTF-defined maps in .maps section instead\n");
+#pragma GCC diagnostic push
+#pragma GCC diagnostic ignored "-Wdeprecated-declarations"
ret = btf__get_map_kv_tids(obj->btf, map->name, def->key_size,
def->value_size, &key_type_id,
&value_type_id);
+#pragma GCC diagnostic pop
} else {
/*
* LLVM annotates global data differently in BTF, that is,
@@ -4800,8 +4823,8 @@ bpf_object__reuse_map(struct bpf_map *map)
}
err = bpf_map__reuse_fd(map, pin_fd);
+ close(pin_fd);
if (err) {
- close(pin_fd);
return err;
}
map->pinned = true;
@@ -4854,7 +4877,6 @@ static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map, b
LIBBPF_OPTS(bpf_map_create_opts, create_attr);
struct bpf_map_def *def = &map->def;
const char *map_name = NULL;
- __u32 max_entries;
int err = 0;
if (kernel_supports(obj, FEAT_PROG_NAME))
@@ -4864,25 +4886,10 @@ static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map, b
create_attr.numa_node = map->numa_node;
create_attr.map_extra = map->map_extra;
- if (def->type == BPF_MAP_TYPE_PERF_EVENT_ARRAY && !def->max_entries) {
- int nr_cpus;
-
- nr_cpus = libbpf_num_possible_cpus();
- if (nr_cpus < 0) {
- pr_warn("map '%s': failed to determine number of system CPUs: %d\n",
- map->name, nr_cpus);
- return nr_cpus;
- }
- pr_debug("map '%s': setting size to %d\n", map->name, nr_cpus);
- max_entries = nr_cpus;
- } else {
- max_entries = def->max_entries;
- }
-
if (bpf_map__is_struct_ops(map))
create_attr.btf_vmlinux_value_type_id = map->btf_vmlinux_value_type_id;
- if (obj->btf && btf__fd(obj->btf) >= 0 && !bpf_map_find_btf_info(obj, map)) {
+ if (obj->btf && btf__fd(obj->btf) >= 0) {
create_attr.btf_fd = btf__fd(obj->btf);
create_attr.btf_key_type_id = map->btf_key_type_id;
create_attr.btf_value_type_id = map->btf_value_type_id;
@@ -4928,7 +4935,7 @@ static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map, b
if (obj->gen_loader) {
bpf_gen__map_create(obj->gen_loader, def->type, map_name,
- def->key_size, def->value_size, max_entries,
+ def->key_size, def->value_size, def->max_entries,
&create_attr, is_inner ? -1 : map - obj->maps);
/* Pretend to have valid FD to pass various fd >= 0 checks.
* This fd == 0 will not be used with any syscall and will be reset to -1 eventually.
@@ -4937,7 +4944,7 @@ static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map, b
} else {
map->fd = bpf_map_create(def->type, map_name,
def->key_size, def->value_size,
- max_entries, &create_attr);
+ def->max_entries, &create_attr);
}
if (map->fd < 0 && (create_attr.btf_key_type_id ||
create_attr.btf_value_type_id)) {
@@ -4954,7 +4961,7 @@ static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map, b
map->btf_value_type_id = 0;
map->fd = bpf_map_create(def->type, map_name,
def->key_size, def->value_size,
- max_entries, &create_attr);
+ def->max_entries, &create_attr);
}
err = map->fd < 0 ? -errno : 0;
@@ -5058,6 +5065,24 @@ static int bpf_object_init_prog_arrays(struct bpf_object *obj)
return 0;
}
+static int map_set_def_max_entries(struct bpf_map *map)
+{
+ if (map->def.type == BPF_MAP_TYPE_PERF_EVENT_ARRAY && !map->def.max_entries) {
+ int nr_cpus;
+
+ nr_cpus = libbpf_num_possible_cpus();
+ if (nr_cpus < 0) {
+ pr_warn("map '%s': failed to determine number of system CPUs: %d\n",
+ map->name, nr_cpus);
+ return nr_cpus;
+ }
+ pr_debug("map '%s': setting size to %d\n", map->name, nr_cpus);
+ map->def.max_entries = nr_cpus;
+ }
+
+ return 0;
+}
+
static int
bpf_object__create_maps(struct bpf_object *obj)
{
@@ -5090,6 +5115,10 @@ bpf_object__create_maps(struct bpf_object *obj)
continue;
}
+ err = map_set_def_max_entries(map);
+ if (err)
+ goto err_out;
+
retried = false;
retry:
if (map->pin_path) {
@@ -5185,18 +5214,21 @@ size_t bpf_core_essential_name_len(const char *name)
return n;
}
-static void bpf_core_free_cands(struct bpf_core_cand_list *cands)
+void bpf_core_free_cands(struct bpf_core_cand_list *cands)
{
+ if (!cands)
+ return;
+
free(cands->cands);
free(cands);
}
-static int bpf_core_add_cands(struct bpf_core_cand *local_cand,
- size_t local_essent_len,
- const struct btf *targ_btf,
- const char *targ_btf_name,
- int targ_start_id,
- struct bpf_core_cand_list *cands)
+int bpf_core_add_cands(struct bpf_core_cand *local_cand,
+ size_t local_essent_len,
+ const struct btf *targ_btf,
+ const char *targ_btf_name,
+ int targ_start_id,
+ struct bpf_core_cand_list *cands)
{
struct bpf_core_cand *new_cands, *cand;
const struct btf_type *t, *local_t;
@@ -5523,11 +5555,12 @@ static int record_relo_core(struct bpf_program *prog,
return 0;
}
-static int bpf_core_apply_relo(struct bpf_program *prog,
- const struct bpf_core_relo *relo,
- int relo_idx,
- const struct btf *local_btf,
- struct hashmap *cand_cache)
+static int bpf_core_resolve_relo(struct bpf_program *prog,
+ const struct bpf_core_relo *relo,
+ int relo_idx,
+ const struct btf *local_btf,
+ struct hashmap *cand_cache,
+ struct bpf_core_relo_res *targ_res)
{
struct bpf_core_spec specs_scratch[3] = {};
const void *type_key = u32_as_hash_key(relo->type_id);
@@ -5536,20 +5569,7 @@ static int bpf_core_apply_relo(struct bpf_program *prog,
const struct btf_type *local_type;
const char *local_name;
__u32 local_id = relo->type_id;
- struct bpf_insn *insn;
- int insn_idx, err;
-
- if (relo->insn_off % BPF_INSN_SZ)
- return -EINVAL;
- insn_idx = relo->insn_off / BPF_INSN_SZ;
- /* adjust insn_idx from section frame of reference to the local
- * program's frame of reference; (sub-)program code is not yet
- * relocated, so it's enough to just subtract in-section offset
- */
- insn_idx = insn_idx - prog->sec_insn_off;
- if (insn_idx >= prog->insns_cnt)
- return -EINVAL;
- insn = &prog->insns[insn_idx];
+ int err;
local_type = btf__type_by_id(local_btf, local_id);
if (!local_type)
@@ -5559,15 +5579,6 @@ static int bpf_core_apply_relo(struct bpf_program *prog,
if (!local_name)
return -EINVAL;
- if (prog->obj->gen_loader) {
- const char *spec_str = btf__name_by_offset(local_btf, relo->access_str_off);
-
- pr_debug("record_relo_core: prog %td insn[%d] %s %s %s final insn_idx %d\n",
- prog - prog->obj->programs, relo->insn_off / 8,
- btf_kind_str(local_type), local_name, spec_str, insn_idx);
- return record_relo_core(prog, relo, insn_idx);
- }
-
if (relo->kind != BPF_CORE_TYPE_ID_LOCAL &&
!hashmap__find(cand_cache, type_key, (void **)&cands)) {
cands = bpf_core_find_cands(prog->obj, local_btf, local_id);
@@ -5584,19 +5595,21 @@ static int bpf_core_apply_relo(struct bpf_program *prog,
}
}
- return bpf_core_apply_relo_insn(prog_name, insn, insn_idx, relo,
- relo_idx, local_btf, cands, specs_scratch);
+ return bpf_core_calc_relo_insn(prog_name, relo, relo_idx, local_btf, cands, specs_scratch,
+ targ_res);
}
static int
bpf_object__relocate_core(struct bpf_object *obj, const char *targ_btf_path)
{
const struct btf_ext_info_sec *sec;
+ struct bpf_core_relo_res targ_res;
const struct bpf_core_relo *rec;
const struct btf_ext_info *seg;
struct hashmap_entry *entry;
struct hashmap *cand_cache = NULL;
struct bpf_program *prog;
+ struct bpf_insn *insn;
const char *sec_name;
int i, err = 0, insn_idx, sec_idx;
@@ -5647,6 +5660,8 @@ bpf_object__relocate_core(struct bpf_object *obj, const char *targ_btf_path)
sec_name, sec->num_info);
for_each_btf_ext_rec(seg, sec, i, rec) {
+ if (rec->insn_off % BPF_INSN_SZ)
+ return -EINVAL;
insn_idx = rec->insn_off / BPF_INSN_SZ;
prog = find_prog_by_sec_insn(obj, sec_idx, insn_idx);
if (!prog) {
@@ -5661,12 +5676,38 @@ bpf_object__relocate_core(struct bpf_object *obj, const char *targ_btf_path)
if (!prog->load)
continue;
- err = bpf_core_apply_relo(prog, rec, i, obj->btf, cand_cache);
+ /* adjust insn_idx from section frame of reference to the local
+ * program's frame of reference; (sub-)program code is not yet
+ * relocated, so it's enough to just subtract in-section offset
+ */
+ insn_idx = insn_idx - prog->sec_insn_off;
+ if (insn_idx >= prog->insns_cnt)
+ return -EINVAL;
+ insn = &prog->insns[insn_idx];
+
+ if (prog->obj->gen_loader) {
+ err = record_relo_core(prog, rec, insn_idx);
+ if (err) {
+ pr_warn("prog '%s': relo #%d: failed to record relocation: %d\n",
+ prog->name, i, err);
+ goto out;
+ }
+ continue;
+ }
+
+ err = bpf_core_resolve_relo(prog, rec, i, obj->btf, cand_cache, &targ_res);
if (err) {
pr_warn("prog '%s': relo #%d: failed to relocate: %d\n",
prog->name, i, err);
goto out;
}
+
+ err = bpf_core_patch_insn(prog->name, insn, insn_idx, rec, i, &targ_res);
+ if (err) {
+ pr_warn("prog '%s': relo #%d: failed to patch insn #%u: %d\n",
+ prog->name, i, insn_idx, err);
+ goto out;
+ }
}
}
@@ -6549,9 +6590,9 @@ static int bpf_object__sanitize_prog(struct bpf_object *obj, struct bpf_program
static int libbpf_find_attach_btf_id(struct bpf_program *prog, const char *attach_name,
int *btf_obj_fd, int *btf_type_id);
-/* this is called as prog->sec_def->preload_fn for libbpf-supported sec_defs */
-static int libbpf_preload_prog(struct bpf_program *prog,
- struct bpf_prog_load_opts *opts, long cookie)
+/* this is called as prog->sec_def->prog_prepare_load_fn for libbpf-supported sec_defs */
+static int libbpf_prepare_prog_load(struct bpf_program *prog,
+ struct bpf_prog_load_opts *opts, long cookie)
{
enum sec_def_flags def = cookie;
@@ -6562,6 +6603,13 @@ static int libbpf_preload_prog(struct bpf_program *prog,
if (def & SEC_SLEEPABLE)
opts->prog_flags |= BPF_F_SLEEPABLE;
+ if (prog->type == BPF_PROG_TYPE_XDP && (def & SEC_XDP_FRAGS))
+ opts->prog_flags |= BPF_F_XDP_HAS_FRAGS;
+
+ if (def & SEC_DEPRECATED)
+ pr_warn("SEC(\"%s\") is deprecated, please see https://github.com/libbpf/libbpf/wiki/Libbpf-1.0-migration-guide#bpf-program-sec-annotation-deprecations for details\n",
+ prog->sec_name);
+
if ((prog->type == BPF_PROG_TYPE_TRACING ||
prog->type == BPF_PROG_TYPE_LSM ||
prog->type == BPF_PROG_TYPE_EXT) && !prog->attach_btf_id) {
@@ -6640,8 +6688,8 @@ static int bpf_object_load_prog_instance(struct bpf_object *obj, struct bpf_prog
load_attr.fd_array = obj->fd_array;
/* adjust load_attr if sec_def provides custom preload callback */
- if (prog->sec_def && prog->sec_def->preload_fn) {
- err = prog->sec_def->preload_fn(prog, &load_attr, prog->sec_def->cookie);
+ if (prog->sec_def && prog->sec_def->prog_prepare_load_fn) {
+ err = prog->sec_def->prog_prepare_load_fn(prog, &load_attr, prog->sec_def->cookie);
if (err < 0) {
pr_warn("prog '%s': failed to prepare load attributes: %d\n",
prog->name, err);
@@ -6941,8 +6989,8 @@ static int bpf_object_init_progs(struct bpf_object *obj, const struct bpf_object
/* sec_def can have custom callback which should be called
* after bpf_program is initialized to adjust its properties
*/
- if (prog->sec_def->init_fn) {
- err = prog->sec_def->init_fn(prog, prog->sec_def->cookie);
+ if (prog->sec_def->prog_setup_fn) {
+ err = prog->sec_def->prog_setup_fn(prog, prog->sec_def->cookie);
if (err < 0) {
pr_warn("prog '%s': failed to initialize: %d\n",
prog->name, err);
@@ -7146,12 +7194,10 @@ static int bpf_object__sanitize_maps(struct bpf_object *obj)
return 0;
}
-static int bpf_object__read_kallsyms_file(struct bpf_object *obj)
+int libbpf_kallsyms_parse(kallsyms_cb_t cb, void *ctx)
{
char sym_type, sym_name[500];
unsigned long long sym_addr;
- const struct btf_type *t;
- struct extern_desc *ext;
int ret, err = 0;
FILE *f;
@@ -7170,35 +7216,51 @@ static int bpf_object__read_kallsyms_file(struct bpf_object *obj)
if (ret != 3) {
pr_warn("failed to read kallsyms entry: %d\n", ret);
err = -EINVAL;
- goto out;
+ break;
}
- ext = find_extern_by_name(obj, sym_name);
- if (!ext || ext->type != EXT_KSYM)
- continue;
-
- t = btf__type_by_id(obj->btf, ext->btf_id);
- if (!btf_is_var(t))
- continue;
-
- if (ext->is_set && ext->ksym.addr != sym_addr) {
- pr_warn("extern (ksym) '%s' resolution is ambiguous: 0x%llx or 0x%llx\n",
- sym_name, ext->ksym.addr, sym_addr);
- err = -EINVAL;
- goto out;
- }
- if (!ext->is_set) {
- ext->is_set = true;
- ext->ksym.addr = sym_addr;
- pr_debug("extern (ksym) %s=0x%llx\n", sym_name, sym_addr);
- }
+ err = cb(sym_addr, sym_type, sym_name, ctx);
+ if (err)
+ break;
}
-out:
fclose(f);
return err;
}
+static int kallsyms_cb(unsigned long long sym_addr, char sym_type,
+ const char *sym_name, void *ctx)
+{
+ struct bpf_object *obj = ctx;
+ const struct btf_type *t;
+ struct extern_desc *ext;
+
+ ext = find_extern_by_name(obj, sym_name);
+ if (!ext || ext->type != EXT_KSYM)
+ return 0;
+
+ t = btf__type_by_id(obj->btf, ext->btf_id);
+ if (!btf_is_var(t))
+ return 0;
+
+ if (ext->is_set && ext->ksym.addr != sym_addr) {
+ pr_warn("extern (ksym) '%s' resolution is ambiguous: 0x%llx or 0x%llx\n",
+ sym_name, ext->ksym.addr, sym_addr);
+ return -EINVAL;
+ }
+ if (!ext->is_set) {
+ ext->is_set = true;
+ ext->ksym.addr = sym_addr;
+ pr_debug("extern (ksym) %s=0x%llx\n", sym_name, sym_addr);
+ }
+ return 0;
+}
+
+static int bpf_object__read_kallsyms_file(struct bpf_object *obj)
+{
+ return libbpf_kallsyms_parse(kallsyms_cb, obj);
+}
+
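A sketch of another consumer of the now-reusable parser, counting symbols with a given prefix (the prefix is arbitrary):

static int count_tcp_syms_cb(unsigned long long sym_addr, char sym_type,
			     const char *sym_name, void *ctx)
{
	size_t *cnt = ctx;

	if (strncmp(sym_name, "tcp_", 4) == 0)
		(*cnt)++;
	return 0;	/* a non-zero return would stop the walk */
}

/* usage: size_t cnt = 0; libbpf_kallsyms_parse(count_tcp_syms_cb, &cnt); */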
static int find_ksym_btf_id(struct bpf_object *obj, const char *ksym_name,
__u16 kind, struct btf **res_btf,
struct module_btf **res_mod_btf)
@@ -7883,10 +7945,8 @@ int bpf_map__set_pin_path(struct bpf_map *map, const char *path)
return 0;
}
-const char *bpf_map__get_pin_path(const struct bpf_map *map)
-{
- return map->pin_path;
-}
+__alias(bpf_map__pin_path)
+const char *bpf_map__get_pin_path(const struct bpf_map *map);
const char *bpf_map__pin_path(const struct bpf_map *map)
{
@@ -8451,7 +8511,10 @@ static int bpf_program_nth_fd(const struct bpf_program *prog, int n)
return fd;
}
-enum bpf_prog_type bpf_program__get_type(const struct bpf_program *prog)
+__alias(bpf_program__type)
+enum bpf_prog_type bpf_program__get_type(const struct bpf_program *prog);
+
+enum bpf_prog_type bpf_program__type(const struct bpf_program *prog)
{
return prog->type;
}
@@ -8495,8 +8558,10 @@ BPF_PROG_TYPE_FNS(struct_ops, BPF_PROG_TYPE_STRUCT_OPS);
BPF_PROG_TYPE_FNS(extension, BPF_PROG_TYPE_EXT);
BPF_PROG_TYPE_FNS(sk_lookup, BPF_PROG_TYPE_SK_LOOKUP);
-enum bpf_attach_type
-bpf_program__get_expected_attach_type(const struct bpf_program *prog)
+__alias(bpf_program__expected_attach_type)
+enum bpf_attach_type bpf_program__get_expected_attach_type(const struct bpf_program *prog);
+
+enum bpf_attach_type bpf_program__expected_attach_type(const struct bpf_program *prog)
{
return prog->expected_attach_type;
}
@@ -8556,20 +8621,21 @@ int bpf_program__set_log_buf(struct bpf_program *prog, char *log_buf, size_t log
}
#define SEC_DEF(sec_pfx, ptype, atype, flags, ...) { \
- .sec = sec_pfx, \
+ .sec = (char *)sec_pfx, \
.prog_type = BPF_PROG_TYPE_##ptype, \
.expected_attach_type = atype, \
.cookie = (long)(flags), \
- .preload_fn = libbpf_preload_prog, \
+ .prog_prepare_load_fn = libbpf_prepare_prog_load, \
__VA_ARGS__ \
}
-static struct bpf_link *attach_kprobe(const struct bpf_program *prog, long cookie);
-static struct bpf_link *attach_tp(const struct bpf_program *prog, long cookie);
-static struct bpf_link *attach_raw_tp(const struct bpf_program *prog, long cookie);
-static struct bpf_link *attach_trace(const struct bpf_program *prog, long cookie);
-static struct bpf_link *attach_lsm(const struct bpf_program *prog, long cookie);
-static struct bpf_link *attach_iter(const struct bpf_program *prog, long cookie);
+static int attach_kprobe(const struct bpf_program *prog, long cookie, struct bpf_link **link);
+static int attach_tp(const struct bpf_program *prog, long cookie, struct bpf_link **link);
+static int attach_raw_tp(const struct bpf_program *prog, long cookie, struct bpf_link **link);
+static int attach_trace(const struct bpf_program *prog, long cookie, struct bpf_link **link);
+static int attach_kprobe_multi(const struct bpf_program *prog, long cookie, struct bpf_link **link);
+static int attach_lsm(const struct bpf_program *prog, long cookie, struct bpf_link **link);
+static int attach_iter(const struct bpf_program *prog, long cookie, struct bpf_link **link);
static const struct bpf_sec_def section_defs[] = {
SEC_DEF("socket", SOCKET_FILTER, 0, SEC_NONE | SEC_SLOPPY_PFX),
@@ -8579,8 +8645,10 @@ static const struct bpf_sec_def section_defs[] = {
SEC_DEF("uprobe/", KPROBE, 0, SEC_NONE),
SEC_DEF("kretprobe/", KPROBE, 0, SEC_NONE, attach_kprobe),
SEC_DEF("uretprobe/", KPROBE, 0, SEC_NONE),
+ SEC_DEF("kprobe.multi/", KPROBE, BPF_TRACE_KPROBE_MULTI, SEC_NONE, attach_kprobe_multi),
+ SEC_DEF("kretprobe.multi/", KPROBE, BPF_TRACE_KPROBE_MULTI, SEC_NONE, attach_kprobe_multi),
SEC_DEF("tc", SCHED_CLS, 0, SEC_NONE),
- SEC_DEF("classifier", SCHED_CLS, 0, SEC_NONE | SEC_SLOPPY_PFX),
+ SEC_DEF("classifier", SCHED_CLS, 0, SEC_NONE | SEC_SLOPPY_PFX | SEC_DEPRECATED),
SEC_DEF("action", SCHED_ACT, 0, SEC_NONE | SEC_SLOPPY_PFX),
SEC_DEF("tracepoint/", TRACEPOINT, 0, SEC_NONE, attach_tp),
SEC_DEF("tp/", TRACEPOINT, 0, SEC_NONE, attach_tp),
@@ -8599,9 +8667,15 @@ static const struct bpf_sec_def section_defs[] = {
SEC_DEF("lsm/", LSM, BPF_LSM_MAC, SEC_ATTACH_BTF, attach_lsm),
SEC_DEF("lsm.s/", LSM, BPF_LSM_MAC, SEC_ATTACH_BTF | SEC_SLEEPABLE, attach_lsm),
SEC_DEF("iter/", TRACING, BPF_TRACE_ITER, SEC_ATTACH_BTF, attach_iter),
+ SEC_DEF("iter.s/", TRACING, BPF_TRACE_ITER, SEC_ATTACH_BTF | SEC_SLEEPABLE, attach_iter),
SEC_DEF("syscall", SYSCALL, 0, SEC_SLEEPABLE),
- SEC_DEF("xdp_devmap/", XDP, BPF_XDP_DEVMAP, SEC_ATTACHABLE),
- SEC_DEF("xdp_cpumap/", XDP, BPF_XDP_CPUMAP, SEC_ATTACHABLE),
+ SEC_DEF("xdp.frags/devmap", XDP, BPF_XDP_DEVMAP, SEC_XDP_FRAGS),
+ SEC_DEF("xdp/devmap", XDP, BPF_XDP_DEVMAP, SEC_ATTACHABLE),
+ SEC_DEF("xdp_devmap/", XDP, BPF_XDP_DEVMAP, SEC_ATTACHABLE | SEC_DEPRECATED),
+ SEC_DEF("xdp.frags/cpumap", XDP, BPF_XDP_CPUMAP, SEC_XDP_FRAGS),
+ SEC_DEF("xdp/cpumap", XDP, BPF_XDP_CPUMAP, SEC_ATTACHABLE),
+ SEC_DEF("xdp_cpumap/", XDP, BPF_XDP_CPUMAP, SEC_ATTACHABLE | SEC_DEPRECATED),
+ SEC_DEF("xdp.frags", XDP, BPF_XDP, SEC_XDP_FRAGS),
SEC_DEF("xdp", XDP, BPF_XDP, SEC_ATTACHABLE_OPT | SEC_SLOPPY_PFX),
SEC_DEF("perf_event", PERF_EVENT, 0, SEC_NONE | SEC_SLOPPY_PFX),
SEC_DEF("lwt_in", LWT_IN, 0, SEC_NONE | SEC_SLOPPY_PFX),
@@ -8643,61 +8717,167 @@ static const struct bpf_sec_def section_defs[] = {
SEC_DEF("sk_lookup", SK_LOOKUP, BPF_SK_LOOKUP, SEC_ATTACHABLE | SEC_SLOPPY_PFX),
};
-#define MAX_TYPE_NAME_SIZE 32
+static size_t custom_sec_def_cnt;
+static struct bpf_sec_def *custom_sec_defs;
+static struct bpf_sec_def custom_fallback_def;
+static bool has_custom_fallback_def;
-static const struct bpf_sec_def *find_sec_def(const char *sec_name)
+static int last_custom_sec_def_handler_id;
+
+int libbpf_register_prog_handler(const char *sec,
+ enum bpf_prog_type prog_type,
+ enum bpf_attach_type exp_attach_type,
+ const struct libbpf_prog_handler_opts *opts)
{
- const struct bpf_sec_def *sec_def;
- enum sec_def_flags sec_flags;
- int i, n = ARRAY_SIZE(section_defs), len;
- bool strict = libbpf_mode & LIBBPF_STRICT_SEC_NAME;
+ struct bpf_sec_def *sec_def;
- for (i = 0; i < n; i++) {
- sec_def = &section_defs[i];
- sec_flags = sec_def->cookie;
- len = strlen(sec_def->sec);
+ if (!OPTS_VALID(opts, libbpf_prog_handler_opts))
+ return libbpf_err(-EINVAL);
- /* "type/" always has to have proper SEC("type/extras") form */
- if (sec_def->sec[len - 1] == '/') {
- if (str_has_pfx(sec_name, sec_def->sec))
- return sec_def;
- continue;
- }
+ if (last_custom_sec_def_handler_id == INT_MAX) /* prevent overflow */
+ return libbpf_err(-E2BIG);
- /* "type+" means it can be either exact SEC("type") or
- * well-formed SEC("type/extras") with proper '/' separator
- */
- if (sec_def->sec[len - 1] == '+') {
- len--;
- /* not even a prefix */
- if (strncmp(sec_name, sec_def->sec, len) != 0)
- continue;
- /* exact match or has '/' separator */
- if (sec_name[len] == '\0' || sec_name[len] == '/')
- return sec_def;
- continue;
- }
+ if (sec) {
+ sec_def = libbpf_reallocarray(custom_sec_defs, custom_sec_def_cnt + 1,
+ sizeof(*sec_def));
+ if (!sec_def)
+ return libbpf_err(-ENOMEM);
- /* SEC_SLOPPY_PFX definitions are allowed to be just prefix
- * matches, unless strict section name mode
- * (LIBBPF_STRICT_SEC_NAME) is enabled, in which case the
- * match has to be exact.
- */
- if ((sec_flags & SEC_SLOPPY_PFX) && !strict) {
- if (str_has_pfx(sec_name, sec_def->sec))
- return sec_def;
- continue;
- }
+ custom_sec_defs = sec_def;
+ sec_def = &custom_sec_defs[custom_sec_def_cnt];
+ } else {
+ if (has_custom_fallback_def)
+ return libbpf_err(-EBUSY);
- /* Definitions not marked SEC_SLOPPY_PFX (e.g.,
- * SEC("syscall")) are exact matches in both modes.
- */
- if (strcmp(sec_name, sec_def->sec) == 0)
+ sec_def = &custom_fallback_def;
+ }
+
+ sec_def->sec = sec ? strdup(sec) : NULL;
+ if (sec && !sec_def->sec)
+ return libbpf_err(-ENOMEM);
+
+ sec_def->prog_type = prog_type;
+ sec_def->expected_attach_type = exp_attach_type;
+ sec_def->cookie = OPTS_GET(opts, cookie, 0);
+
+ sec_def->prog_setup_fn = OPTS_GET(opts, prog_setup_fn, NULL);
+ sec_def->prog_prepare_load_fn = OPTS_GET(opts, prog_prepare_load_fn, NULL);
+ sec_def->prog_attach_fn = OPTS_GET(opts, prog_attach_fn, NULL);
+
+ sec_def->handler_id = ++last_custom_sec_def_handler_id;
+
+ if (sec)
+ custom_sec_def_cnt++;
+ else
+ has_custom_fallback_def = true;
+
+ return sec_def->handler_id;
+}
+
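A sketch of registering a custom section handler through this API; the section prefix and callback are invented for illustration:

static int make_sleepable(struct bpf_program *prog,
			  struct bpf_prog_load_opts *opts, long cookie)
{
	opts->prog_flags |= BPF_F_SLEEPABLE;
	return 0;
}

int register_my_handler(void)
{
	LIBBPF_OPTS(libbpf_prog_handler_opts, opts,
		.prog_prepare_load_fn = make_sleepable,
	);

	/* claim a hypothetical SEC("fentry.sleep/...") prefix */
	return libbpf_register_prog_handler("fentry.sleep/",
					    BPF_PROG_TYPE_TRACING,
					    BPF_TRACE_FENTRY, &opts);
}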
+int libbpf_unregister_prog_handler(int handler_id)
+{
+ struct bpf_sec_def *sec_defs;
+ int i;
+
+ if (handler_id <= 0)
+ return libbpf_err(-EINVAL);
+
+ if (has_custom_fallback_def && custom_fallback_def.handler_id == handler_id) {
+ memset(&custom_fallback_def, 0, sizeof(custom_fallback_def));
+ has_custom_fallback_def = false;
+ return 0;
+ }
+
+ for (i = 0; i < custom_sec_def_cnt; i++) {
+ if (custom_sec_defs[i].handler_id == handler_id)
+ break;
+ }
+
+ if (i == custom_sec_def_cnt)
+ return libbpf_err(-ENOENT);
+
+ free(custom_sec_defs[i].sec);
+ for (i = i + 1; i < custom_sec_def_cnt; i++)
+ custom_sec_defs[i - 1] = custom_sec_defs[i];
+ custom_sec_def_cnt--;
+
+ /* try to shrink the array, but it's ok if we can't */
+ sec_defs = libbpf_reallocarray(custom_sec_defs, custom_sec_def_cnt, sizeof(*sec_defs));
+ if (sec_defs)
+ custom_sec_defs = sec_defs;
+
+ return 0;
+}
+
+static bool sec_def_matches(const struct bpf_sec_def *sec_def, const char *sec_name,
+ bool allow_sloppy)
+{
+ size_t len = strlen(sec_def->sec);
+
+ /* "type/" always has to have proper SEC("type/extras") form */
+ if (sec_def->sec[len - 1] == '/') {
+ if (str_has_pfx(sec_name, sec_def->sec))
+ return true;
+ return false;
+ }
+
+ /* "type+" means it can be either exact SEC("type") or
+ * well-formed SEC("type/extras") with proper '/' separator
+ */
+ if (sec_def->sec[len - 1] == '+') {
+ len--;
+ /* not even a prefix */
+ if (strncmp(sec_name, sec_def->sec, len) != 0)
+ return false;
+ /* exact match or has '/' separator */
+ if (sec_name[len] == '\0' || sec_name[len] == '/')
+ return true;
+ return false;
+ }
+
+ /* SEC_SLOPPY_PFX definitions are allowed to be just prefix
+ * matches, unless strict section name mode
+ * (LIBBPF_STRICT_SEC_NAME) is enabled, in which case the
+ * match has to be exact.
+ */
+ if (allow_sloppy && str_has_pfx(sec_name, sec_def->sec))
+ return true;
+
+ /* Definitions not marked SEC_SLOPPY_PFX (e.g.,
+ * SEC("syscall")) are exact matches in both modes.
+ */
+ return strcmp(sec_name, sec_def->sec) == 0;
+}
+
+static const struct bpf_sec_def *find_sec_def(const char *sec_name)
+{
+ const struct bpf_sec_def *sec_def;
+ int i, n;
+ bool strict = libbpf_mode & LIBBPF_STRICT_SEC_NAME, allow_sloppy;
+
+ n = custom_sec_def_cnt;
+ for (i = 0; i < n; i++) {
+ sec_def = &custom_sec_defs[i];
+ if (sec_def_matches(sec_def, sec_name, false))
return sec_def;
}
+
+ n = ARRAY_SIZE(section_defs);
+ for (i = 0; i < n; i++) {
+ sec_def = &section_defs[i];
+ allow_sloppy = (sec_def->cookie & SEC_SLOPPY_PFX) && !strict;
+ if (sec_def_matches(sec_def, sec_name, allow_sloppy))
+ return sec_def;
+ }
+
+ if (has_custom_fallback_def)
+ return &custom_fallback_def;
+
return NULL;
}
+#define MAX_TYPE_NAME_SIZE 32
+
static char *libbpf_get_type_names(bool attach_type)
{
int i, len = ARRAY_SIZE(section_defs) * MAX_TYPE_NAME_SIZE;
@@ -8713,7 +8893,7 @@ static char *libbpf_get_type_names(bool attach_type)
const struct bpf_sec_def *sec_def = &section_defs[i];
if (attach_type) {
- if (sec_def->preload_fn != libbpf_preload_prog)
+ if (sec_def->prog_prepare_load_fn != libbpf_prepare_prog_load)
continue;
if (!(sec_def->cookie & SEC_ATTACHABLE))
@@ -9096,7 +9276,7 @@ int libbpf_attach_type_by_name(const char *name,
return libbpf_err(-EINVAL);
}
- if (sec_def->preload_fn != libbpf_preload_prog)
+ if (sec_def->prog_prepare_load_fn != libbpf_prepare_prog_load)
return libbpf_err(-EINVAL);
if (!(sec_def->cookie & SEC_ATTACHABLE))
return libbpf_err(-EINVAL);
@@ -9443,7 +9623,7 @@ static int bpf_prog_load_xattr2(const struct bpf_prog_load_attr *attr,
open_attr.file = attr->file;
open_attr.prog_type = attr->prog_type;
- obj = bpf_object__open_xattr(&open_attr);
+ obj = __bpf_object__open_xattr(&open_attr, 0);
err = libbpf_get_error(obj);
if (err)
return libbpf_err(-ENOENT);
@@ -9460,7 +9640,7 @@ static int bpf_prog_load_xattr2(const struct bpf_prog_load_attr *attr,
bpf_program__set_expected_attach_type(prog,
attach_type);
}
- if (bpf_program__get_type(prog) == BPF_PROG_TYPE_UNSPEC) {
+ if (bpf_program__type(prog) == BPF_PROG_TYPE_UNSPEC) {
/*
* we haven't guessed from section name and user
* didn't provide a fallback type, too bad...
@@ -9477,7 +9657,7 @@ static int bpf_prog_load_xattr2(const struct bpf_prog_load_attr *attr,
}
bpf_object__for_each_map(map, obj) {
- if (!bpf_map__is_offload_neutral(map))
+ if (map->def.type != BPF_MAP_TYPE_PERF_EVENT_ARRAY)
map->map_ifindex = attr->ifindex;
}
@@ -10070,14 +10250,146 @@ struct bpf_link *bpf_program__attach_kprobe(const struct bpf_program *prog,
return bpf_program__attach_kprobe_opts(prog, func_name, &opts);
}
-static struct bpf_link *attach_kprobe(const struct bpf_program *prog, long cookie)
+/* Adapted from perf/util/string.c */
+static bool glob_match(const char *str, const char *pat)
+{
+ while (*str && *pat && *pat != '*') {
+ if (*pat == '?') { /* Matches any single character */
+ str++;
+ pat++;
+ continue;
+ }
+ if (*str != *pat)
+ return false;
+ str++;
+ pat++;
+ }
+ /* Check wild card */
+ if (*pat == '*') {
+ while (*pat == '*')
+ pat++;
+ if (!*pat) /* Tail wild card matches all */
+ return true;
+ while (*str)
+ if (glob_match(str++, pat))
+ return true;
+ }
+ return !*str && !*pat;
+}
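A quick standalone harness for the matcher's semantics ('?' consumes exactly one character, '*' any run, including an empty one); since glob_match() is static to libbpf.c, this sketch assumes its body is copied next to main():

	#include <stdbool.h>
	#include <stdio.h>

	/* paste glob_match() from the hunk above here */

	int main(void)
	{
		printf("%d\n", glob_match("tcp_v4_connect", "tcp_*"));    /* 1 */
		printf("%d\n", glob_match("tcp_v4_connect", "*connect")); /* 1 */
		printf("%d\n", glob_match("udp_sendmsg", "?dp_sendmsg")); /* 1 */
		printf("%d\n", glob_match("udp_sendmsg", "tcp_*"));       /* 0 */
		return 0;
	}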
+
+struct kprobe_multi_resolve {
+ const char *pattern;
+ unsigned long *addrs;
+ size_t cap;
+ size_t cnt;
+};
+
+static int
+resolve_kprobe_multi_cb(unsigned long long sym_addr, char sym_type,
+ const char *sym_name, void *ctx)
+{
+ struct kprobe_multi_resolve *res = ctx;
+ int err;
+
+ if (!glob_match(sym_name, res->pattern))
+ return 0;
+
+ err = libbpf_ensure_mem((void **) &res->addrs, &res->cap, sizeof(unsigned long),
+ res->cnt + 1);
+ if (err)
+ return err;
+
+ res->addrs[res->cnt++] = (unsigned long) sym_addr;
+ return 0;
+}
+
+struct bpf_link *
+bpf_program__attach_kprobe_multi_opts(const struct bpf_program *prog,
+ const char *pattern,
+ const struct bpf_kprobe_multi_opts *opts)
+{
+ LIBBPF_OPTS(bpf_link_create_opts, lopts);
+ struct kprobe_multi_resolve res = {
+ .pattern = pattern,
+ };
+ struct bpf_link *link = NULL;
+ char errmsg[STRERR_BUFSIZE];
+ const unsigned long *addrs;
+ int err, link_fd, prog_fd;
+ const __u64 *cookies;
+ const char **syms;
+ bool retprobe;
+ size_t cnt;
+
+ if (!OPTS_VALID(opts, bpf_kprobe_multi_opts))
+ return libbpf_err_ptr(-EINVAL);
+
+ syms = OPTS_GET(opts, syms, false);
+ addrs = OPTS_GET(opts, addrs, false);
+ cnt = OPTS_GET(opts, cnt, false);
+ cookies = OPTS_GET(opts, cookies, false);
+
+ if (!pattern && !addrs && !syms)
+ return libbpf_err_ptr(-EINVAL);
+ if (pattern && (addrs || syms || cookies || cnt))
+ return libbpf_err_ptr(-EINVAL);
+ if (!pattern && !cnt)
+ return libbpf_err_ptr(-EINVAL);
+ if (addrs && syms)
+ return libbpf_err_ptr(-EINVAL);
+
+ if (pattern) {
+ err = libbpf_kallsyms_parse(resolve_kprobe_multi_cb, &res);
+ if (err)
+ goto error;
+ if (!res.cnt) {
+ err = -ENOENT;
+ goto error;
+ }
+ addrs = res.addrs;
+ cnt = res.cnt;
+ }
+
+ retprobe = OPTS_GET(opts, retprobe, false);
+
+ lopts.kprobe_multi.syms = syms;
+ lopts.kprobe_multi.addrs = addrs;
+ lopts.kprobe_multi.cookies = cookies;
+ lopts.kprobe_multi.cnt = cnt;
+ lopts.kprobe_multi.flags = retprobe ? BPF_F_KPROBE_MULTI_RETURN : 0;
+
+ link = calloc(1, sizeof(*link));
+ if (!link) {
+ err = -ENOMEM;
+ goto error;
+ }
+ link->detach = &bpf_link__detach_fd;
+
+ prog_fd = bpf_program__fd(prog);
+ link_fd = bpf_link_create(prog_fd, 0, BPF_TRACE_KPROBE_MULTI, &lopts);
+ if (link_fd < 0) {
+ err = -errno;
+ pr_warn("prog '%s': failed to attach: %s\n",
+ prog->name, libbpf_strerror_r(err, errmsg, sizeof(errmsg)));
+ goto error;
+ }
+ link->fd = link_fd;
+ free(res.addrs);
+ return link;
+
+error:
+ free(link);
+ free(res.addrs);
+ return libbpf_err_ptr(err);
+}
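From user space, the new API can be driven either by a kallsyms glob pattern or by explicit syms/addrs arrays (mutually exclusive). A minimal sketch; the "tcp_*" pattern and the assumption that prog comes from an already-loaded object are illustrative:

	#include <bpf/libbpf.h>

	static struct bpf_link *attach_all_tcp_entries(struct bpf_program *prog)
	{
		LIBBPF_OPTS(bpf_kprobe_multi_opts, opts,
			.retprobe = false,	/* entry probes; true for return probes */
		);

		/* "tcp_*" is resolved against /proc/kallsyms via glob_match() */
		return bpf_program__attach_kprobe_multi_opts(prog, "tcp_*", &opts);
	}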
+
+static int attach_kprobe(const struct bpf_program *prog, long cookie, struct bpf_link **link)
{
DECLARE_LIBBPF_OPTS(bpf_kprobe_opts, opts);
unsigned long offset = 0;
- struct bpf_link *link;
const char *func_name;
char *func;
- int n, err;
+ int n;
opts.retprobe = str_has_pfx(prog->sec_name, "kretprobe/");
if (opts.retprobe)
@@ -10087,21 +10399,43 @@ static struct bpf_link *attach_kprobe(const struct bpf_program *prog, long cooki
n = sscanf(func_name, "%m[a-zA-Z0-9_.]+%li", &func, &offset);
if (n < 1) {
- err = -EINVAL;
pr_warn("kprobe name is invalid: %s\n", func_name);
- return libbpf_err_ptr(err);
+ return -EINVAL;
}
if (opts.retprobe && offset != 0) {
free(func);
- err = -EINVAL;
pr_warn("kretprobes do not support offset specification\n");
- return libbpf_err_ptr(err);
+ return -EINVAL;
}
opts.offset = offset;
- link = bpf_program__attach_kprobe_opts(prog, func, &opts);
+ *link = bpf_program__attach_kprobe_opts(prog, func, &opts);
free(func);
- return link;
+ return libbpf_get_error(*link);
+}
+
+static int attach_kprobe_multi(const struct bpf_program *prog, long cookie, struct bpf_link **link)
+{
+ LIBBPF_OPTS(bpf_kprobe_multi_opts, opts);
+ const char *spec;
+ char *pattern;
+ int n;
+
+ opts.retprobe = str_has_pfx(prog->sec_name, "kretprobe.multi/");
+ if (opts.retprobe)
+ spec = prog->sec_name + sizeof("kretprobe.multi/") - 1;
+ else
+ spec = prog->sec_name + sizeof("kprobe.multi/") - 1;
+
+ n = sscanf(spec, "%m[a-zA-Z0-9_.*?]", &pattern);
+ if (n < 1) {
+ pr_warn("kprobe multi pattern is invalid: %s\n", pattern);
+ return -EINVAL;
+ }
+
+ *link = bpf_program__attach_kprobe_multi_opts(prog, pattern, &opts);
+ free(pattern);
+ return libbpf_get_error(*link);
}
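On the BPF program side, the parser above accepts glob characters directly in the section name; a sketch (the function patterns are arbitrary examples):

	#include <linux/bpf.h>
	#include <linux/ptrace.h>
	#include <bpf/bpf_helpers.h>

	char LICENSE[] SEC("license") = "GPL";

	/* '*' and '?' globs are allowed in the pattern part */
	SEC("kprobe.multi/tcp_*")
	int handle_tcp_entry(struct pt_regs *ctx)
	{
		return 0;
	}

	SEC("kretprobe.multi/udp_send*")
	int handle_udp_send_exit(struct pt_regs *ctx)
	{
		return 0;
	}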
static void gen_uprobe_legacy_event_name(char *buf, size_t buf_sz,
@@ -10356,14 +10690,13 @@ struct bpf_link *bpf_program__attach_tracepoint(const struct bpf_program *prog,
return bpf_program__attach_tracepoint_opts(prog, tp_category, tp_name, NULL);
}
-static struct bpf_link *attach_tp(const struct bpf_program *prog, long cookie)
+static int attach_tp(const struct bpf_program *prog, long cookie, struct bpf_link **link)
{
char *sec_name, *tp_cat, *tp_name;
- struct bpf_link *link;
sec_name = strdup(prog->sec_name);
if (!sec_name)
- return libbpf_err_ptr(-ENOMEM);
+ return -ENOMEM;
/* extract "tp/<category>/<name>" or "tracepoint/<category>/<name>" */
if (str_has_pfx(prog->sec_name, "tp/"))
@@ -10373,14 +10706,14 @@ static struct bpf_link *attach_tp(const struct bpf_program *prog, long cookie)
tp_name = strchr(tp_cat, '/');
if (!tp_name) {
free(sec_name);
- return libbpf_err_ptr(-EINVAL);
+ return -EINVAL;
}
*tp_name = '\0';
tp_name++;
- link = bpf_program__attach_tracepoint(prog, tp_cat, tp_name);
+ *link = bpf_program__attach_tracepoint(prog, tp_cat, tp_name);
free(sec_name);
- return link;
+ return libbpf_get_error(*link);
}
struct bpf_link *bpf_program__attach_raw_tracepoint(const struct bpf_program *prog,
@@ -10413,7 +10746,7 @@ struct bpf_link *bpf_program__attach_raw_tracepoint(const struct bpf_program *pr
return link;
}
-static struct bpf_link *attach_raw_tp(const struct bpf_program *prog, long cookie)
+static int attach_raw_tp(const struct bpf_program *prog, long cookie, struct bpf_link **link)
{
static const char *const prefixes[] = {
"raw_tp/",
@@ -10433,10 +10766,11 @@ static struct bpf_link *attach_raw_tp(const struct bpf_program *prog, long cooki
if (!tp_name) {
pr_warn("prog '%s': invalid section name '%s'\n",
prog->name, prog->sec_name);
- return libbpf_err_ptr(-EINVAL);
+ return -EINVAL;
}
- return bpf_program__attach_raw_tracepoint(prog, tp_name);
+ *link = bpf_program__attach_raw_tracepoint(prog, tp_name);
+ return libbpf_get_error(*link);
}
/* Common logic for all BPF program types that attach to a btf_id */
@@ -10479,14 +10813,16 @@ struct bpf_link *bpf_program__attach_lsm(const struct bpf_program *prog)
return bpf_program__attach_btf_id(prog);
}
-static struct bpf_link *attach_trace(const struct bpf_program *prog, long cookie)
+static int attach_trace(const struct bpf_program *prog, long cookie, struct bpf_link **link)
{
- return bpf_program__attach_trace(prog);
+ *link = bpf_program__attach_trace(prog);
+ return libbpf_get_error(*link);
}
-static struct bpf_link *attach_lsm(const struct bpf_program *prog, long cookie)
+static int attach_lsm(const struct bpf_program *prog, long cookie, struct bpf_link **link)
{
- return bpf_program__attach_lsm(prog);
+ *link = bpf_program__attach_lsm(prog);
+ return libbpf_get_error(*link);
}
static struct bpf_link *
@@ -10511,7 +10847,7 @@ bpf_program__attach_fd(const struct bpf_program *prog, int target_fd, int btf_id
return libbpf_err_ptr(-ENOMEM);
link->detach = &bpf_link__detach_fd;
- attach_type = bpf_program__get_expected_attach_type(prog);
+ attach_type = bpf_program__expected_attach_type(prog);
link_fd = bpf_link_create(prog_fd, target_fd, attach_type, &opts);
if (link_fd < 0) {
link_fd = -errno;
@@ -10615,17 +10951,33 @@ bpf_program__attach_iter(const struct bpf_program *prog,
return link;
}
-static struct bpf_link *attach_iter(const struct bpf_program *prog, long cookie)
+static int attach_iter(const struct bpf_program *prog, long cookie, struct bpf_link **link)
{
- return bpf_program__attach_iter(prog, NULL);
+ *link = bpf_program__attach_iter(prog, NULL);
+ return libbpf_get_error(*link);
}
struct bpf_link *bpf_program__attach(const struct bpf_program *prog)
{
- if (!prog->sec_def || !prog->sec_def->attach_fn)
- return libbpf_err_ptr(-ESRCH);
+ struct bpf_link *link = NULL;
+ int err;
+
+ if (!prog->sec_def || !prog->sec_def->prog_attach_fn)
+ return libbpf_err_ptr(-EOPNOTSUPP);
+
+ err = prog->sec_def->prog_attach_fn(prog, prog->sec_def->cookie, &link);
+ if (err)
+ return libbpf_err_ptr(err);
- return prog->sec_def->attach_fn(prog, prog->sec_def->cookie);
+ /* When calling bpf_program__attach() explicitly, auto-attach support
+ * is expected to work, so a NULL link is considered an error.
+ * This is different for skeleton's attach, see comment in
+ * bpf_object__attach_skeleton().
+ */
+ if (!link)
+ return libbpf_err_ptr(-EOPNOTSUPP);
+
+ return link;
}
static int bpf_link__detach_struct_ops(struct bpf_link *link)
@@ -10912,7 +11264,7 @@ struct perf_buffer *perf_buffer__new_raw_v0_6_0(int map_fd, size_t page_cnt,
{
struct perf_buffer_params p = {};
- if (page_cnt == 0 || !attr)
+ if (!attr)
return libbpf_err_ptr(-EINVAL);
if (!OPTS_VALID(opts, perf_buffer_raw_opts))
@@ -10953,7 +11305,7 @@ static struct perf_buffer *__perf_buffer__new(int map_fd, size_t page_cnt,
__u32 map_info_len;
int err, i, j, n;
- if (page_cnt & (page_cnt - 1)) {
+ if (page_cnt == 0 || (page_cnt & (page_cnt - 1))) {
pr_warn("page count should be power of two, but is %zu\n",
page_cnt);
return ERR_PTR(-EINVAL);
@@ -11640,6 +11992,49 @@ int libbpf_num_possible_cpus(void)
return tmp_cpus;
}
+static int populate_skeleton_maps(const struct bpf_object *obj,
+ struct bpf_map_skeleton *maps,
+ size_t map_cnt)
+{
+ int i;
+
+ for (i = 0; i < map_cnt; i++) {
+ struct bpf_map **map = maps[i].map;
+ const char *name = maps[i].name;
+ void **mmaped = maps[i].mmaped;
+
+ *map = bpf_object__find_map_by_name(obj, name);
+ if (!*map) {
+ pr_warn("failed to find skeleton map '%s'\n", name);
+ return -ESRCH;
+ }
+
+ /* externs shouldn't be pre-setup from user code */
+ if (mmaped && (*map)->libbpf_type != LIBBPF_MAP_KCONFIG)
+ *mmaped = (*map)->mmaped;
+ }
+ return 0;
+}
+
+static int populate_skeleton_progs(const struct bpf_object *obj,
+ struct bpf_prog_skeleton *progs,
+ size_t prog_cnt)
+{
+ int i;
+
+ for (i = 0; i < prog_cnt; i++) {
+ struct bpf_program **prog = progs[i].prog;
+ const char *name = progs[i].name;
+
+ *prog = bpf_object__find_program_by_name(obj, name);
+ if (!*prog) {
+ pr_warn("failed to find skeleton program '%s'\n", name);
+ return -ESRCH;
+ }
+ }
+ return 0;
+}
+
int bpf_object__open_skeleton(struct bpf_object_skeleton *s,
const struct bpf_object_open_opts *opts)
{
@@ -11647,7 +12042,7 @@ int bpf_object__open_skeleton(struct bpf_object_skeleton *s,
.object_name = s->name,
);
struct bpf_object *obj;
- int i, err;
+ int err;
/* Attempt to preserve opts->object_name, unless overridden by user
* explicitly. Overwriting object name for skeletons is discouraged,
@@ -11670,37 +12065,91 @@ int bpf_object__open_skeleton(struct bpf_object_skeleton *s,
}
*s->obj = obj;
+ err = populate_skeleton_maps(obj, s->maps, s->map_cnt);
+ if (err) {
+ pr_warn("failed to populate skeleton maps for '%s': %d\n", s->name, err);
+ return libbpf_err(err);
+ }
- for (i = 0; i < s->map_cnt; i++) {
- struct bpf_map **map = s->maps[i].map;
- const char *name = s->maps[i].name;
- void **mmaped = s->maps[i].mmaped;
+ err = populate_skeleton_progs(obj, s->progs, s->prog_cnt);
+ if (err) {
+ pr_warn("failed to populate skeleton progs for '%s': %d\n", s->name, err);
+ return libbpf_err(err);
+ }
- *map = bpf_object__find_map_by_name(obj, name);
- if (!*map) {
- pr_warn("failed to find skeleton map '%s'\n", name);
- return libbpf_err(-ESRCH);
- }
+ return 0;
+}
- /* externs shouldn't be pre-setup from user code */
- if (mmaped && (*map)->libbpf_type != LIBBPF_MAP_KCONFIG)
- *mmaped = (*map)->mmaped;
+int bpf_object__open_subskeleton(struct bpf_object_subskeleton *s)
+{
+ int err, len, var_idx, i;
+ const char *var_name;
+ const struct bpf_map *map;
+ struct btf *btf;
+ __u32 map_type_id;
+ const struct btf_type *map_type, *var_type;
+ const struct bpf_var_skeleton *var_skel;
+ struct btf_var_secinfo *var;
+
+ if (!s->obj)
+ return libbpf_err(-EINVAL);
+
+ btf = bpf_object__btf(s->obj);
+ if (!btf) {
+ pr_warn("subskeletons require BTF at runtime (object %s)\n",
+ bpf_object__name(s->obj));
+ return libbpf_err(-errno);
}
- for (i = 0; i < s->prog_cnt; i++) {
- struct bpf_program **prog = s->progs[i].prog;
- const char *name = s->progs[i].name;
+ err = populate_skeleton_maps(s->obj, s->maps, s->map_cnt);
+ if (err) {
+ pr_warn("failed to populate subskeleton maps: %d\n", err);
+ return libbpf_err(err);
+ }
- *prog = bpf_object__find_program_by_name(obj, name);
- if (!*prog) {
- pr_warn("failed to find skeleton program '%s'\n", name);
- return libbpf_err(-ESRCH);
- }
+ err = populate_skeleton_progs(s->obj, s->progs, s->prog_cnt);
+ if (err) {
+ pr_warn("failed to populate subskeleton maps: %d\n", err);
+ return libbpf_err(err);
}
+ for (var_idx = 0; var_idx < s->var_cnt; var_idx++) {
+ var_skel = &s->vars[var_idx];
+ map = *var_skel->map;
+ map_type_id = bpf_map__btf_value_type_id(map);
+ map_type = btf__type_by_id(btf, map_type_id);
+
+ if (!btf_is_datasec(map_type)) {
+ pr_warn("type for map '%1$s' is not a datasec: %2$s",
+ bpf_map__name(map),
+ __btf_kind_str(btf_kind(map_type)));
+ return libbpf_err(-EINVAL);
+ }
+
+ len = btf_vlen(map_type);
+ var = btf_var_secinfos(map_type);
+ for (i = 0; i < len; i++, var++) {
+ var_type = btf__type_by_id(btf, var->type);
+ var_name = btf__name_by_offset(btf, var_type->name_off);
+ if (strcmp(var_name, var_skel->name) == 0) {
+ *var_skel->addr = map->mmaped + var->offset;
+ break;
+ }
+ }
+ }
return 0;
}
+void bpf_object__destroy_subskeleton(struct bpf_object_subskeleton *s)
+{
+ if (!s)
+ return;
+ free(s->maps);
+ free(s->progs);
+ free(s->vars);
+ free(s);
+}
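Subskeletons are consumed through code generated by `bpftool gen subskeleton`; a usage sketch in which the object name "counter", the datasec variable "pkt_count", and the generated counter__open()/counter__destroy() wrappers are all hypothetical:

	#include "counter.subskel.h"	/* hypothetical: bpftool gen subskeleton counter.bpf.o */

	int bump_shared_counter(struct bpf_object *obj)
	{
		struct counter *sub;

		sub = counter__open(obj);	/* wraps bpf_object__open_subskeleton() */
		if (!sub)
			return -1;

		/* variables resolve to pointers into the owning object's
		 * mmap'ed datasec maps, per the BTF walk above
		 */
		*sub->data->pkt_count += 1;

		counter__destroy(sub);	/* wraps bpf_object__destroy_subskeleton() */
		return 0;
	}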
+
int bpf_object__load_skeleton(struct bpf_object_skeleton *s)
{
int i, err;
@@ -11766,16 +12215,30 @@ int bpf_object__attach_skeleton(struct bpf_object_skeleton *s)
continue;
/* auto-attaching not supported for this program */
- if (!prog->sec_def || !prog->sec_def->attach_fn)
+ if (!prog->sec_def || !prog->sec_def->prog_attach_fn)
continue;
- *link = bpf_program__attach(prog);
- err = libbpf_get_error(*link);
+ /* if user already set the link manually, don't attempt auto-attach */
+ if (*link)
+ continue;
+
+ err = prog->sec_def->prog_attach_fn(prog, prog->sec_def->cookie, link);
if (err) {
- pr_warn("failed to auto-attach program '%s': %d\n",
+ pr_warn("prog '%s': failed to auto-attach: %d\n",
bpf_program__name(prog), err);
return libbpf_err(err);
}
+
+ /* It's possible that for some SEC() definitions auto-attach
+ * is supported in some cases (e.g., if definition completely
+ * specifies target information), but is not in other cases.
+ * SEC("uprobe") is one such case. If user specified target
+ * binary and function name, such BPF program can be
+ * auto-attached. But if not, it shouldn't trigger skeleton's
+ * attach to fail. It should just be skipped.
+ * The attach callback signals such a case by returning 0 (no error) and
+ * setting link to NULL.
+ */
}
return 0;
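Because auto-attach now skips any program whose link was already set, manual and automatic attachment can be mixed; a sketch with a hypothetical skeleton "myskel" and an example kernel function name:

	#include "myskel.skel.h"	/* hypothetical generated skeleton */

	int load_and_attach(void)
	{
		struct myskel *skel;
		int err;

		skel = myskel__open_and_load();
		if (!skel)
			return -1;

		/* attach one program manually first; auto-attach below skips
		 * it because its link is already set
		 */
		skel->links.handle_exec = bpf_program__attach_kprobe(
				skel->progs.handle_exec, false, "do_execveat_common");

		err = myskel__attach(skel);
		if (err)
			myskel__destroy(skel);
		return err;
	}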
@@ -11795,6 +12258,9 @@ void bpf_object__detach_skeleton(struct bpf_object_skeleton *s)
void bpf_object__destroy_skeleton(struct bpf_object_skeleton *s)
{
+ if (!s)
+ return;
+
if (s->progs)
bpf_object__detach_skeleton(s);
if (s->obj)
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
index 8b9bc5e90c2b..05dde85e19a6 100644
--- a/tools/lib/bpf/libbpf.h
+++ b/tools/lib/bpf/libbpf.h
@@ -180,9 +180,11 @@ bpf_object__open_mem(const void *obj_buf, size_t obj_buf_sz,
const struct bpf_object_open_opts *opts);
/* deprecated bpf_object__open variants */
+LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_object__open_mem() instead")
LIBBPF_API struct bpf_object *
bpf_object__open_buffer(const void *obj_buf, size_t obj_buf_sz,
const char *name);
+LIBBPF_DEPRECATED_SINCE(0, 7, "use bpf_object__open_file() instead")
LIBBPF_API struct bpf_object *
bpf_object__open_xattr(struct bpf_object_open_attr *attr);
@@ -244,8 +246,10 @@ struct bpf_object *bpf_object__next(struct bpf_object *prev);
(pos) = (tmp), (tmp) = bpf_object__next(tmp))
typedef void (*bpf_object_clear_priv_t)(struct bpf_object *, void *);
+LIBBPF_DEPRECATED_SINCE(0, 7, "storage via set_priv/priv is deprecated")
LIBBPF_API int bpf_object__set_priv(struct bpf_object *obj, void *priv,
bpf_object_clear_priv_t clear_priv);
+LIBBPF_DEPRECATED_SINCE(0, 7, "storage via set_priv/priv is deprecated")
LIBBPF_API void *bpf_object__priv(const struct bpf_object *prog);
LIBBPF_API int
@@ -277,9 +281,10 @@ bpf_object__prev_program(const struct bpf_object *obj, struct bpf_program *prog)
typedef void (*bpf_program_clear_priv_t)(struct bpf_program *, void *);
+LIBBPF_DEPRECATED_SINCE(0, 7, "storage via set_priv/priv is deprecated")
LIBBPF_API int bpf_program__set_priv(struct bpf_program *prog, void *priv,
bpf_program_clear_priv_t clear_priv);
-
+LIBBPF_DEPRECATED_SINCE(0, 7, "storage via set_priv/priv is deprecated")
LIBBPF_API void *bpf_program__priv(const struct bpf_program *prog);
LIBBPF_API void bpf_program__set_ifindex(struct bpf_program *prog,
__u32 ifindex);
@@ -420,6 +425,29 @@ bpf_program__attach_kprobe_opts(const struct bpf_program *prog,
const char *func_name,
const struct bpf_kprobe_opts *opts);
+struct bpf_kprobe_multi_opts {
+ /* size of this struct, for forward/backward compatibility */
+ size_t sz;
+ /* array of function symbols to attach */
+ const char **syms;
+ /* array of function addresses to attach */
+ const unsigned long *addrs;
+ /* array of user-provided values fetchable through bpf_get_attach_cookie */
+ const __u64 *cookies;
+ /* number of elements in syms/addrs/cookies arrays */
+ size_t cnt;
+ /* create return kprobes */
+ bool retprobe;
+ size_t :0;
+};
+
+#define bpf_kprobe_multi_opts__last_field retprobe
+
+LIBBPF_API struct bpf_link *
+bpf_program__attach_kprobe_multi_opts(const struct bpf_program *prog,
+ const char *pattern,
+ const struct bpf_kprobe_multi_opts *opts);
+
struct bpf_uprobe_opts {
/* size of this struct, for forward/backward compatibility */
size_t sz;
@@ -591,26 +619,39 @@ LIBBPF_API int bpf_program__nth_fd(const struct bpf_program *prog, int n);
/*
* Adjust type of BPF program. Default is kprobe.
*/
+LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_program__set_type() instead")
LIBBPF_API int bpf_program__set_socket_filter(struct bpf_program *prog);
+LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_program__set_type() instead")
LIBBPF_API int bpf_program__set_tracepoint(struct bpf_program *prog);
+LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_program__set_type() instead")
LIBBPF_API int bpf_program__set_raw_tracepoint(struct bpf_program *prog);
+LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_program__set_type() instead")
LIBBPF_API int bpf_program__set_kprobe(struct bpf_program *prog);
+LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_program__set_type() instead")
LIBBPF_API int bpf_program__set_lsm(struct bpf_program *prog);
+LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_program__set_type() instead")
LIBBPF_API int bpf_program__set_sched_cls(struct bpf_program *prog);
+LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_program__set_type() instead")
LIBBPF_API int bpf_program__set_sched_act(struct bpf_program *prog);
+LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_program__set_type() instead")
LIBBPF_API int bpf_program__set_xdp(struct bpf_program *prog);
+LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_program__set_type() instead")
LIBBPF_API int bpf_program__set_perf_event(struct bpf_program *prog);
+LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_program__set_type() instead")
LIBBPF_API int bpf_program__set_tracing(struct bpf_program *prog);
+LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_program__set_type() instead")
LIBBPF_API int bpf_program__set_struct_ops(struct bpf_program *prog);
+LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_program__set_type() instead")
LIBBPF_API int bpf_program__set_extension(struct bpf_program *prog);
+LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_program__set_type() instead")
LIBBPF_API int bpf_program__set_sk_lookup(struct bpf_program *prog);
-LIBBPF_API enum bpf_prog_type bpf_program__get_type(const struct bpf_program *prog);
+LIBBPF_API enum bpf_prog_type bpf_program__type(const struct bpf_program *prog);
LIBBPF_API void bpf_program__set_type(struct bpf_program *prog,
enum bpf_prog_type type);
LIBBPF_API enum bpf_attach_type
-bpf_program__get_expected_attach_type(const struct bpf_program *prog);
+bpf_program__expected_attach_type(const struct bpf_program *prog);
LIBBPF_API void
bpf_program__set_expected_attach_type(struct bpf_program *prog,
enum bpf_attach_type type);
@@ -631,18 +672,31 @@ LIBBPF_API int
bpf_program__set_attach_target(struct bpf_program *prog, int attach_prog_fd,
const char *attach_func_name);
+LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_program__type() instead")
LIBBPF_API bool bpf_program__is_socket_filter(const struct bpf_program *prog);
+LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_program__type() instead")
LIBBPF_API bool bpf_program__is_tracepoint(const struct bpf_program *prog);
+LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_program__type() instead")
LIBBPF_API bool bpf_program__is_raw_tracepoint(const struct bpf_program *prog);
+LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_program__type() instead")
LIBBPF_API bool bpf_program__is_kprobe(const struct bpf_program *prog);
+LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_program__type() instead")
LIBBPF_API bool bpf_program__is_lsm(const struct bpf_program *prog);
+LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_program__type() instead")
LIBBPF_API bool bpf_program__is_sched_cls(const struct bpf_program *prog);
+LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_program__type() instead")
LIBBPF_API bool bpf_program__is_sched_act(const struct bpf_program *prog);
+LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_program__type() instead")
LIBBPF_API bool bpf_program__is_xdp(const struct bpf_program *prog);
+LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_program__type() instead")
LIBBPF_API bool bpf_program__is_perf_event(const struct bpf_program *prog);
+LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_program__type() instead")
LIBBPF_API bool bpf_program__is_tracing(const struct bpf_program *prog);
+LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_program__type() instead")
LIBBPF_API bool bpf_program__is_struct_ops(const struct bpf_program *prog);
+LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_program__type() instead")
LIBBPF_API bool bpf_program__is_extension(const struct bpf_program *prog);
+LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_program__type() instead")
LIBBPF_API bool bpf_program__is_sk_lookup(const struct bpf_program *prog);
/*
@@ -706,7 +760,8 @@ bpf_object__prev_map(const struct bpf_object *obj, const struct bpf_map *map);
LIBBPF_API int bpf_map__fd(const struct bpf_map *map);
LIBBPF_API int bpf_map__reuse_fd(struct bpf_map *map, int fd);
/* get map definition */
-LIBBPF_API const struct bpf_map_def *bpf_map__def(const struct bpf_map *map);
+LIBBPF_API LIBBPF_DEPRECATED_SINCE(0, 8, "use appropriate getters or setters instead")
+const struct bpf_map_def *bpf_map__def(const struct bpf_map *map);
/* get map name */
LIBBPF_API const char *bpf_map__name(const struct bpf_map *map);
/* get/set map type */
@@ -715,6 +770,7 @@ LIBBPF_API int bpf_map__set_type(struct bpf_map *map, enum bpf_map_type type);
/* get/set map size (max_entries) */
LIBBPF_API __u32 bpf_map__max_entries(const struct bpf_map *map);
LIBBPF_API int bpf_map__set_max_entries(struct bpf_map *map, __u32 max_entries);
+LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_map__set_max_entries() instead")
LIBBPF_API int bpf_map__resize(struct bpf_map *map, __u32 max_entries);
/* get/set map flags */
LIBBPF_API __u32 bpf_map__map_flags(const struct bpf_map *map);
@@ -739,8 +795,10 @@ LIBBPF_API __u64 bpf_map__map_extra(const struct bpf_map *map);
LIBBPF_API int bpf_map__set_map_extra(struct bpf_map *map, __u64 map_extra);
typedef void (*bpf_map_clear_priv_t)(struct bpf_map *, void *);
+LIBBPF_DEPRECATED_SINCE(0, 7, "storage via set_priv/priv is deprecated")
LIBBPF_API int bpf_map__set_priv(struct bpf_map *map, void *priv,
bpf_map_clear_priv_t clear_priv);
+LIBBPF_DEPRECATED_SINCE(0, 7, "storage via set_priv/priv is deprecated")
LIBBPF_API void *bpf_map__priv(const struct bpf_map *map);
LIBBPF_API int bpf_map__set_initial_value(struct bpf_map *map,
const void *data, size_t size);
@@ -757,7 +815,6 @@ LIBBPF_API bool bpf_map__is_offload_neutral(const struct bpf_map *map);
*/
LIBBPF_API bool bpf_map__is_internal(const struct bpf_map *map);
LIBBPF_API int bpf_map__set_pin_path(struct bpf_map *map, const char *path);
-LIBBPF_API const char *bpf_map__get_pin_path(const struct bpf_map *map);
LIBBPF_API const char *bpf_map__pin_path(const struct bpf_map *map);
LIBBPF_API bool bpf_map__is_pinned(const struct bpf_map *map);
LIBBPF_API int bpf_map__pin(struct bpf_map *map, const char *path);
@@ -832,13 +889,42 @@ struct bpf_xdp_set_link_opts {
};
#define bpf_xdp_set_link_opts__last_field old_fd
+LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_xdp_attach() instead")
LIBBPF_API int bpf_set_link_xdp_fd(int ifindex, int fd, __u32 flags);
+LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_xdp_attach() instead")
LIBBPF_API int bpf_set_link_xdp_fd_opts(int ifindex, int fd, __u32 flags,
const struct bpf_xdp_set_link_opts *opts);
+LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_xdp_query_id() instead")
LIBBPF_API int bpf_get_link_xdp_id(int ifindex, __u32 *prog_id, __u32 flags);
+LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_xdp_query() instead")
LIBBPF_API int bpf_get_link_xdp_info(int ifindex, struct xdp_link_info *info,
size_t info_size, __u32 flags);
+struct bpf_xdp_attach_opts {
+ size_t sz;
+ int old_prog_fd;
+ size_t :0;
+};
+#define bpf_xdp_attach_opts__last_field old_prog_fd
+
+struct bpf_xdp_query_opts {
+ size_t sz;
+ __u32 prog_id; /* output */
+ __u32 drv_prog_id; /* output */
+ __u32 hw_prog_id; /* output */
+ __u32 skb_prog_id; /* output */
+ __u8 attach_mode; /* output */
+ size_t :0;
+};
+#define bpf_xdp_query_opts__last_field attach_mode
+
+LIBBPF_API int bpf_xdp_attach(int ifindex, int prog_fd, __u32 flags,
+ const struct bpf_xdp_attach_opts *opts);
+LIBBPF_API int bpf_xdp_detach(int ifindex, __u32 flags,
+ const struct bpf_xdp_attach_opts *opts);
+LIBBPF_API int bpf_xdp_query(int ifindex, int flags, struct bpf_xdp_query_opts *opts);
+LIBBPF_API int bpf_xdp_query_id(int ifindex, int flags, __u32 *prog_id);
+
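A minimal usage sketch of the new XDP APIs declared above (ifindex and prog_fd are assumed valid; SKB mode is chosen arbitrarily):

	#include <linux/if_link.h>	/* XDP_FLAGS_* */
	#include <bpf/libbpf.h>

	int xdp_attach_and_query(int ifindex, int prog_fd)
	{
		LIBBPF_OPTS(bpf_xdp_query_opts, qopts);
		__u32 prog_id = 0;
		int err;

		err = bpf_xdp_attach(ifindex, prog_fd, XDP_FLAGS_SKB_MODE, NULL);
		if (err)
			return err;

		/* full query fills all per-mode program IDs plus attach_mode ... */
		err = bpf_xdp_query(ifindex, XDP_FLAGS_SKB_MODE, &qopts);

		/* ... or grab a single ID directly */
		if (!err)
			err = bpf_xdp_query_id(ifindex, XDP_FLAGS_SKB_MODE, &prog_id);

		bpf_xdp_detach(ifindex, XDP_FLAGS_SKB_MODE, NULL);
		return err;
	}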
/* TC related API */
enum bpf_tc_attach_point {
BPF_TC_INGRESS = 1 << 0,
@@ -1226,6 +1312,35 @@ LIBBPF_API int bpf_object__attach_skeleton(struct bpf_object_skeleton *s);
LIBBPF_API void bpf_object__detach_skeleton(struct bpf_object_skeleton *s);
LIBBPF_API void bpf_object__destroy_skeleton(struct bpf_object_skeleton *s);
+struct bpf_var_skeleton {
+ const char *name;
+ struct bpf_map **map;
+ void **addr;
+};
+
+struct bpf_object_subskeleton {
+ size_t sz; /* size of this struct, for forward/backward compatibility */
+
+ const struct bpf_object *obj;
+
+ int map_cnt;
+ int map_skel_sz; /* sizeof(struct bpf_map_skeleton) */
+ struct bpf_map_skeleton *maps;
+
+ int prog_cnt;
+ int prog_skel_sz; /* sizeof(struct bpf_prog_skeleton) */
+ struct bpf_prog_skeleton *progs;
+
+ int var_cnt;
+ int var_skel_sz; /* sizeof(struct bpf_var_skeleton) */
+ struct bpf_var_skeleton *vars;
+};
+
+LIBBPF_API int
+bpf_object__open_subskeleton(struct bpf_object_subskeleton *s);
+LIBBPF_API void
+bpf_object__destroy_subskeleton(struct bpf_object_subskeleton *s);
+
struct gen_loader_opts {
size_t sz; /* size of this struct, for forward/backward compatibility */
const char *data;
@@ -1265,6 +1380,115 @@ LIBBPF_API int bpf_linker__add_file(struct bpf_linker *linker,
LIBBPF_API int bpf_linker__finalize(struct bpf_linker *linker);
LIBBPF_API void bpf_linker__free(struct bpf_linker *linker);
+/*
+ * Custom handling of BPF program's SEC() definitions
+ */
+
+struct bpf_prog_load_opts; /* defined in bpf.h */
+
+/* Called during bpf_object__open() for each recognized BPF program. Callback
+ * can use various bpf_program__set_*() setters to adjust whatever properties
+ * are necessary.
+ */
+typedef int (*libbpf_prog_setup_fn_t)(struct bpf_program *prog, long cookie);
+
+/* Called right before libbpf performs bpf_prog_load() to load BPF program
+ * into the kernel. Callback can adjust opts as necessary.
+ */
+typedef int (*libbpf_prog_prepare_load_fn_t)(struct bpf_program *prog,
+ struct bpf_prog_load_opts *opts, long cookie);
+
+/* Called during skeleton attach or through bpf_program__attach(). If
+ * auto-attach is not supported, callback should return 0 and set link to
+ * NULL (it's not considered an error during skeleton attach, but it will be
+ * an error for bpf_program__attach() calls). On error, error should be
+ * returned directly and link set to NULL. On success, return 0 and set link
+ * to a valid struct bpf_link.
+ */
+typedef int (*libbpf_prog_attach_fn_t)(const struct bpf_program *prog, long cookie,
+ struct bpf_link **link);
+
+struct libbpf_prog_handler_opts {
+ /* size of this struct, for forward/backward compatibility */
+ size_t sz;
+ /* User-provided value that is passed to prog_setup_fn,
+ * prog_prepare_load_fn, and prog_attach_fn callbacks. Allows user to
+ * register one set of callbacks for multiple SEC() definitions and
+ * still be able to distinguish them, if necessary. For example,
+ * libbpf itself is using this to pass necessary flags (e.g.,
+ * sleepable flag) to a common internal SEC() handler.
+ */
+ long cookie;
+ /* BPF program initialization callback (see libbpf_prog_setup_fn_t).
+ * Callback is optional, pass NULL if it's not necessary.
+ */
+ libbpf_prog_setup_fn_t prog_setup_fn;
+ /* BPF program loading callback (see libbpf_prog_prepare_load_fn_t).
+ * Callback is optional, pass NULL if it's not necessary.
+ */
+ libbpf_prog_prepare_load_fn_t prog_prepare_load_fn;
+ /* BPF program attach callback (see libbpf_prog_attach_fn_t).
+ * Callback is optional, pass NULL if it's not necessary.
+ */
+ libbpf_prog_attach_fn_t prog_attach_fn;
+};
+#define libbpf_prog_handler_opts__last_field prog_attach_fn
+
+/**
+ * @brief **libbpf_register_prog_handler()** registers a custom BPF program
+ * SEC() handler.
+ * @param sec section prefix for which custom handler is registered
+ * @param prog_type BPF program type associated with specified section
+ * @param exp_attach_type Expected BPF attach type associated with specified section
+ * @param opts optional cookie, callbacks, and other extra options
+ * @return Non-negative handler ID is returned on success. This handler ID has
+ * to be passed to *libbpf_unregister_prog_handler()* to unregister such
+ * custom handler. Negative error code is returned on error.
+ *
+ * *sec* defines which SEC() definitions are handled by this custom handler
+ * registration. *sec* can take a few different forms:
+ * - if *sec* is just a plain string (e.g., "abc"), it will match only
+ * SEC("abc"). If BPF program specifies SEC("abc/whatever") it will result
+ * in an error;
+ * - if *sec* is of the form "abc/", proper SEC() form is
+ * SEC("abc/something"), where acceptable "something" should be checked by
+ * the *prog_setup_fn* callback, if there are additional restrictions;
+ * - if *sec* is of the form "abc+", it will successfully match both
+ * SEC("abc") and SEC("abc/whatever") forms;
+ * - if *sec* is NULL, custom handler is registered for any BPF program that
+ * doesn't match any of the registered (custom or libbpf's own) SEC()
+ * handlers. There could be only one such generic custom handler registered
+ * at any given time.
+ *
+ * All custom handlers (except the one with *sec* == NULL) are processed
+ * before libbpf's own SEC() handlers. It is allowed to "override" libbpf's
+ * SEC() handlers by registering custom ones for the same section prefix
+ * (i.e., it's possible to have a custom SEC("perf_event/LLC-load-misses")
+ * handler).
+ *
+ * Note: like most global libbpf APIs (e.g., libbpf_set_print(),
+ * libbpf_set_strict_mode(), etc.), these APIs are not thread-safe. Users need
+ * to ensure synchronization if there is a risk of running this API from
+ * multiple threads simultaneously.
+ */
+LIBBPF_API int libbpf_register_prog_handler(const char *sec,
+ enum bpf_prog_type prog_type,
+ enum bpf_attach_type exp_attach_type,
+ const struct libbpf_prog_handler_opts *opts);
+/**
+ * @brief *libbpf_unregister_prog_handler()* unregisters previously registered
+ * custom BPF program SEC() handler.
+ * @param handler_id handler ID returned by *libbpf_register_prog_handler()*
+ * after successful registration
+ * @return 0 on success, negative error code if handler isn't found
+ *
+ * Note: like most global libbpf APIs (e.g., libbpf_set_print(),
+ * libbpf_set_strict_mode(), etc.), these APIs are not thread-safe. Users need
+ * to ensure synchronization if there is a risk of running this API from
+ * multiple threads simultaneously.
+ */
+LIBBPF_API int libbpf_unregister_prog_handler(int handler_id);
+
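Putting the two declarations together, a sketch of registering a custom handler for a hypothetical SEC("myprobe") prefix:

	#include <bpf/libbpf.h>

	static int my_setup(struct bpf_program *prog, long cookie)
	{
		/* adjust properties during bpf_object__open(), e.g. program type */
		bpf_program__set_type(prog, BPF_PROG_TYPE_KPROBE);
		return 0;
	}

	int register_myprobe_handler(void)
	{
		LIBBPF_OPTS(libbpf_prog_handler_opts, hopts,
			.prog_setup_fn = my_setup,
		);

		/* "myprobe+" matches both SEC("myprobe") and SEC("myprobe/func");
		 * the returned ID is later fed to libbpf_unregister_prog_handler()
		 */
		return libbpf_register_prog_handler("myprobe+", BPF_PROG_TYPE_KPROBE,
						    0, &hopts);
	}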
#ifdef __cplusplus
} /* extern "C" */
#endif
diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
index 529783967793..dd35ee58bfaa 100644
--- a/tools/lib/bpf/libbpf.map
+++ b/tools/lib/bpf/libbpf.map
@@ -247,6 +247,7 @@ LIBBPF_0.0.8 {
bpf_link_create;
bpf_link_update;
bpf_map__set_initial_value;
+ bpf_prog_attach_opts;
bpf_program__attach_cgroup;
bpf_program__attach_lsm;
bpf_program__is_lsm;
@@ -423,12 +424,27 @@ LIBBPF_0.6.0 {
LIBBPF_0.7.0 {
global:
bpf_btf_load;
+ bpf_program__expected_attach_type;
bpf_program__log_buf;
bpf_program__log_level;
bpf_program__set_log_buf;
bpf_program__set_log_level;
+ bpf_program__type;
+ bpf_xdp_attach;
+ bpf_xdp_detach;
+ bpf_xdp_query;
+ bpf_xdp_query_id;
libbpf_probe_bpf_helper;
libbpf_probe_bpf_map_type;
libbpf_probe_bpf_prog_type;
libbpf_set_memlock_rlim_max;
-};
+} LIBBPF_0.6.0;
+
+LIBBPF_0.8.0 {
+ global:
+ bpf_object__destroy_subskeleton;
+ bpf_object__open_subskeleton;
+ libbpf_register_prog_handler;
+ libbpf_unregister_prog_handler;
+ bpf_program__attach_kprobe_multi_opts;
+} LIBBPF_0.7.0;
diff --git a/tools/lib/bpf/libbpf_internal.h b/tools/lib/bpf/libbpf_internal.h
index 1565679eb432..b6247dc7f8eb 100644
--- a/tools/lib/bpf/libbpf_internal.h
+++ b/tools/lib/bpf/libbpf_internal.h
@@ -92,6 +92,9 @@
# define offsetofend(TYPE, FIELD) \
(offsetof(TYPE, FIELD) + sizeof(((TYPE *)0)->FIELD))
#endif
+#ifndef __alias
+#define __alias(symbol) __attribute__((alias(#symbol)))
+#endif
/* Check whether a string `str` has prefix `pfx`, regardless if `pfx` is
* a string literal known at compilation time or char * pointer known only at
@@ -446,6 +449,11 @@ __s32 btf__find_by_name_kind_own(const struct btf *btf, const char *type_name,
extern enum libbpf_strict_mode libbpf_mode;
+typedef int (*kallsyms_cb_t)(unsigned long long sym_addr, char sym_type,
+ const char *sym_name, void *ctx);
+
+int libbpf_kallsyms_parse(kallsyms_cb_t cb, void *arg);
+
/* handle direct returned errors */
static inline int libbpf_err(int ret)
{
@@ -526,4 +534,13 @@ static inline int ensure_good_fd(int fd)
return fd;
}
+/* The following two functions are exposed to bpftool */
+int bpf_core_add_cands(struct bpf_core_cand *local_cand,
+ size_t local_essent_len,
+ const struct btf *targ_btf,
+ const char *targ_btf_name,
+ int targ_start_id,
+ struct bpf_core_cand_list *cands);
+void bpf_core_free_cands(struct bpf_core_cand_list *cands);
+
#endif /* __LIBBPF_LIBBPF_INTERNAL_H */
diff --git a/tools/lib/bpf/libbpf_legacy.h b/tools/lib/bpf/libbpf_legacy.h
index 79131f761a27..d7bcbd01f66f 100644
--- a/tools/lib/bpf/libbpf_legacy.h
+++ b/tools/lib/bpf/libbpf_legacy.h
@@ -54,6 +54,10 @@ enum libbpf_strict_mode {
*
* Note, in this mode the program pin path will be based on the
* function name instead of section name.
+ *
+ * Additionally, routines in the .text section are always considered
+ * sub-programs. Legacy behavior allows for a single routine in .text
+ * to be a program.
*/
LIBBPF_STRICT_SEC_NAME = 0x04,
/*
@@ -73,6 +77,11 @@ enum libbpf_strict_mode {
* operation.
*/
LIBBPF_STRICT_AUTO_RLIMIT_MEMLOCK = 0x10,
+ /*
+ * Error out on any SEC("maps") map definition, which are deprecated
+ * in favor of BTF-defined map definitions in SEC(".maps").
+ */
+ LIBBPF_STRICT_MAP_DEFINITIONS = 0x20,
__LIBBPF_STRICT_LAST,
};
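The flags form a bitmask, so callers can opt into behaviors individually or wholesale; a brief sketch:

	#include <bpf/libbpf_legacy.h>

	void enable_strict_libbpf(void)
	{
		/* opt in selectively ... */
		libbpf_set_strict_mode(LIBBPF_STRICT_SEC_NAME |
				       LIBBPF_STRICT_MAP_DEFINITIONS);

		/* ... or take every libbpf-1.0 behavior at once */
		libbpf_set_strict_mode(LIBBPF_STRICT_ALL);
	}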
@@ -81,6 +90,23 @@ LIBBPF_API int libbpf_set_strict_mode(enum libbpf_strict_mode mode);
#define DECLARE_LIBBPF_OPTS LIBBPF_OPTS
+/* "Discouraged" APIs which don't follow consistent libbpf naming patterns.
+ * They are normally a trivial aliases or wrappers for proper APIs and are
+ * left to minimize unnecessary disruption for users of libbpf. But they
+ * shouldn't be used going forward.
+ */
+
+struct bpf_program;
+struct bpf_map;
+struct btf;
+struct btf_ext;
+
+LIBBPF_API enum bpf_prog_type bpf_program__get_type(const struct bpf_program *prog);
+LIBBPF_API enum bpf_attach_type bpf_program__get_expected_attach_type(const struct bpf_program *prog);
+LIBBPF_API const char *bpf_map__get_pin_path(const struct bpf_map *map);
+LIBBPF_API const void *btf__get_raw_data(const struct btf *btf, __u32 *size);
+LIBBPF_API const void *btf_ext__get_raw_data(const struct btf_ext *btf_ext, __u32 *size);
+
#ifdef __cplusplus
} /* extern "C" */
#endif
diff --git a/tools/lib/bpf/libbpf_version.h b/tools/lib/bpf/libbpf_version.h
index 0fefefc3500b..61f2039404b6 100644
--- a/tools/lib/bpf/libbpf_version.h
+++ b/tools/lib/bpf/libbpf_version.h
@@ -4,6 +4,6 @@
#define __LIBBPF_VERSION_H
#define LIBBPF_MAJOR_VERSION 0
-#define LIBBPF_MINOR_VERSION 7
+#define LIBBPF_MINOR_VERSION 8
#endif /* __LIBBPF_VERSION_H */
diff --git a/tools/lib/bpf/netlink.c b/tools/lib/bpf/netlink.c
index 39f25e09b51e..cbc8967d5402 100644
--- a/tools/lib/bpf/netlink.c
+++ b/tools/lib/bpf/netlink.c
@@ -87,29 +87,75 @@ enum {
NL_DONE,
};
+static int netlink_recvmsg(int sock, struct msghdr *mhdr, int flags)
+{
+ int len;
+
+ do {
+ len = recvmsg(sock, mhdr, flags);
+ } while (len < 0 && (errno == EINTR || errno == EAGAIN));
+
+ if (len < 0)
+ return -errno;
+ return len;
+}
+
+static int alloc_iov(struct iovec *iov, int len)
+{
+ void *nbuf;
+
+ nbuf = realloc(iov->iov_base, len);
+ if (!nbuf)
+ return -ENOMEM;
+
+ iov->iov_base = nbuf;
+ iov->iov_len = len;
+ return 0;
+}
+
static int libbpf_netlink_recv(int sock, __u32 nl_pid, int seq,
__dump_nlmsg_t _fn, libbpf_dump_nlmsg_t fn,
void *cookie)
{
+ struct iovec iov = {};
+ struct msghdr mhdr = {
+ .msg_iov = &iov,
+ .msg_iovlen = 1,
+ };
bool multipart = true;
struct nlmsgerr *err;
struct nlmsghdr *nh;
- char buf[4096];
int len, ret;
+ ret = alloc_iov(&iov, 4096);
+ if (ret)
+ goto done;
+
while (multipart) {
start:
multipart = false;
- len = recv(sock, buf, sizeof(buf), 0);
+ len = netlink_recvmsg(sock, &mhdr, MSG_PEEK | MSG_TRUNC);
if (len < 0) {
- ret = -errno;
+ ret = len;
+ goto done;
+ }
+
+ if (len > iov.iov_len) {
+ ret = alloc_iov(&iov, len);
+ if (ret)
+ goto done;
+ }
+
+ len = netlink_recvmsg(sock, &mhdr, 0);
+ if (len < 0) {
+ ret = len;
goto done;
}
if (len == 0)
break;
- for (nh = (struct nlmsghdr *)buf; NLMSG_OK(nh, len);
+ for (nh = (struct nlmsghdr *)iov.iov_base; NLMSG_OK(nh, len);
nh = NLMSG_NEXT(nh, len)) {
if (nh->nlmsg_pid != nl_pid) {
ret = -LIBBPF_ERRNO__WRNGPID;
@@ -130,7 +176,8 @@ start:
libbpf_nla_dump_errormsg(nh);
goto done;
case NLMSG_DONE:
- return 0;
+ ret = 0;
+ goto done;
default:
break;
}
@@ -142,15 +189,17 @@ start:
case NL_NEXT:
goto start;
case NL_DONE:
- return 0;
+ ret = 0;
+ goto done;
default:
- return ret;
+ goto done;
}
}
}
}
ret = 0;
done:
+ free(iov.iov_base);
return ret;
}
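The peek-then-allocate pattern introduced above works on any datagram-style socket; a self-contained sketch using an AF_UNIX datagram pair in place of the netlink socket:

	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>
	#include <sys/socket.h>
	#include <sys/uio.h>

	int main(void)
	{
		char small[8], msg[] = "a datagram longer than the initial buffer";
		struct iovec iov = { .iov_base = small, .iov_len = sizeof(small) };
		struct msghdr mhdr = { .msg_iov = &iov, .msg_iovlen = 1 };
		int sv[2], len;

		if (socketpair(AF_UNIX, SOCK_DGRAM, 0, sv))
			return 1;
		send(sv[0], msg, sizeof(msg), 0);

		/* MSG_PEEK | MSG_TRUNC reports the full datagram size without
		 * consuming it, so the buffer can be grown before the real read
		 */
		len = recvmsg(sv[1], &mhdr, MSG_PEEK | MSG_TRUNC);
		if (len > (int)iov.iov_len) {
			void *big = malloc(len);

			if (!big)
				return 1;
			iov.iov_base = big;
			iov.iov_len = len;
		}
		len = recvmsg(sv[1], &mhdr, 0);
		printf("read %d bytes: %s\n", len, (char *)iov.iov_base);
		return 0;
	}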
@@ -217,6 +266,28 @@ static int __bpf_set_link_xdp_fd_replace(int ifindex, int fd, int old_fd,
return libbpf_netlink_send_recv(&req, NULL, NULL, NULL);
}
+int bpf_xdp_attach(int ifindex, int prog_fd, __u32 flags, const struct bpf_xdp_attach_opts *opts)
+{
+ int old_prog_fd, err;
+
+ if (!OPTS_VALID(opts, bpf_xdp_attach_opts))
+ return libbpf_err(-EINVAL);
+
+ old_prog_fd = OPTS_GET(opts, old_prog_fd, 0);
+ if (old_prog_fd)
+ flags |= XDP_FLAGS_REPLACE;
+ else
+ old_prog_fd = -1;
+
+ err = __bpf_set_link_xdp_fd_replace(ifindex, prog_fd, old_prog_fd, flags);
+ return libbpf_err(err);
+}
+
+int bpf_xdp_detach(int ifindex, __u32 flags, const struct bpf_xdp_attach_opts *opts)
+{
+ return bpf_xdp_attach(ifindex, -1, flags, opts);
+}
+
int bpf_set_link_xdp_fd_opts(int ifindex, int fd, __u32 flags,
const struct bpf_xdp_set_link_opts *opts)
{
@@ -303,69 +374,98 @@ static int get_xdp_info(void *cookie, void *msg, struct nlattr **tb)
return 0;
}
-int bpf_get_link_xdp_info(int ifindex, struct xdp_link_info *info,
- size_t info_size, __u32 flags)
+int bpf_xdp_query(int ifindex, int xdp_flags, struct bpf_xdp_query_opts *opts)
{
- struct xdp_id_md xdp_id = {};
- __u32 mask;
- int ret;
struct libbpf_nla_req req = {
.nh.nlmsg_len = NLMSG_LENGTH(sizeof(struct ifinfomsg)),
.nh.nlmsg_type = RTM_GETLINK,
.nh.nlmsg_flags = NLM_F_DUMP | NLM_F_REQUEST,
.ifinfo.ifi_family = AF_PACKET,
};
+ struct xdp_id_md xdp_id = {};
+ int err;
- if (flags & ~XDP_FLAGS_MASK || !info_size)
+ if (!OPTS_VALID(opts, bpf_xdp_query_opts))
+ return libbpf_err(-EINVAL);
+
+ if (xdp_flags & ~XDP_FLAGS_MASK)
return libbpf_err(-EINVAL);
/* Check whether the single {HW,DRV,SKB} mode is set */
- flags &= (XDP_FLAGS_SKB_MODE | XDP_FLAGS_DRV_MODE | XDP_FLAGS_HW_MODE);
- mask = flags - 1;
- if (flags && flags & mask)
+ xdp_flags &= XDP_FLAGS_SKB_MODE | XDP_FLAGS_DRV_MODE | XDP_FLAGS_HW_MODE;
+ if (xdp_flags & (xdp_flags - 1))
return libbpf_err(-EINVAL);
xdp_id.ifindex = ifindex;
- xdp_id.flags = flags;
+ xdp_id.flags = xdp_flags;
- ret = libbpf_netlink_send_recv(&req, __dump_link_nlmsg,
+ err = libbpf_netlink_send_recv(&req, __dump_link_nlmsg,
get_xdp_info, &xdp_id);
- if (!ret) {
- size_t sz = min(info_size, sizeof(xdp_id.info));
+ if (err)
+ return libbpf_err(err);
- memcpy(info, &xdp_id.info, sz);
- memset((void *) info + sz, 0, info_size - sz);
- }
+ OPTS_SET(opts, prog_id, xdp_id.info.prog_id);
+ OPTS_SET(opts, drv_prog_id, xdp_id.info.drv_prog_id);
+ OPTS_SET(opts, hw_prog_id, xdp_id.info.hw_prog_id);
+ OPTS_SET(opts, skb_prog_id, xdp_id.info.skb_prog_id);
+ OPTS_SET(opts, attach_mode, xdp_id.info.attach_mode);
- return libbpf_err(ret);
+ return 0;
}
-static __u32 get_xdp_id(struct xdp_link_info *info, __u32 flags)
+int bpf_get_link_xdp_info(int ifindex, struct xdp_link_info *info,
+ size_t info_size, __u32 flags)
{
- flags &= XDP_FLAGS_MODES;
+ LIBBPF_OPTS(bpf_xdp_query_opts, opts);
+ size_t sz;
+ int err;
- if (info->attach_mode != XDP_ATTACHED_MULTI && !flags)
- return info->prog_id;
- if (flags & XDP_FLAGS_DRV_MODE)
- return info->drv_prog_id;
- if (flags & XDP_FLAGS_HW_MODE)
- return info->hw_prog_id;
- if (flags & XDP_FLAGS_SKB_MODE)
- return info->skb_prog_id;
+ if (!info_size)
+ return libbpf_err(-EINVAL);
+
+ err = bpf_xdp_query(ifindex, flags, &opts);
+ if (err)
+ return libbpf_err(err);
+
+ /* struct xdp_link_info field layout matches struct bpf_xdp_query_opts
+ * layout after sz field
+ */
+ sz = min(info_size, offsetofend(struct xdp_link_info, attach_mode));
+ memcpy(info, &opts.prog_id, sz);
+ memset((void *)info + sz, 0, info_size - sz);
return 0;
}
-int bpf_get_link_xdp_id(int ifindex, __u32 *prog_id, __u32 flags)
+int bpf_xdp_query_id(int ifindex, int flags, __u32 *prog_id)
{
- struct xdp_link_info info;
+ LIBBPF_OPTS(bpf_xdp_query_opts, opts);
int ret;
- ret = bpf_get_link_xdp_info(ifindex, &info, sizeof(info), flags);
- if (!ret)
- *prog_id = get_xdp_id(&info, flags);
+ ret = bpf_xdp_query(ifindex, flags, &opts);
+ if (ret)
+ return libbpf_err(ret);
- return libbpf_err(ret);
+ flags &= XDP_FLAGS_MODES;
+
+ if (opts.attach_mode != XDP_ATTACHED_MULTI && !flags)
+ *prog_id = opts.prog_id;
+ else if (flags & XDP_FLAGS_DRV_MODE)
+ *prog_id = opts.drv_prog_id;
+ else if (flags & XDP_FLAGS_HW_MODE)
+ *prog_id = opts.hw_prog_id;
+ else if (flags & XDP_FLAGS_SKB_MODE)
+ *prog_id = opts.skb_prog_id;
+ else
+ *prog_id = 0;
+
+ return 0;
+}
+
+int bpf_get_link_xdp_id(int ifindex, __u32 *prog_id, __u32 flags)
+{
+ return bpf_xdp_query_id(ifindex, flags, prog_id);
}
typedef int (*qdisc_config_t)(struct libbpf_nla_req *req);
diff --git a/tools/lib/bpf/relo_core.c b/tools/lib/bpf/relo_core.c
index 910865e29edc..f946f23eab20 100644
--- a/tools/lib/bpf/relo_core.c
+++ b/tools/lib/bpf/relo_core.c
@@ -775,31 +775,6 @@ static int bpf_core_calc_enumval_relo(const struct bpf_core_relo *relo,
return 0;
}
-struct bpf_core_relo_res
-{
- /* expected value in the instruction, unless validate == false */
- __u32 orig_val;
- /* new value that needs to be patched up to */
- __u32 new_val;
- /* relocation unsuccessful, poison instruction, but don't fail load */
- bool poison;
- /* some relocations can't be validated against orig_val */
- bool validate;
- /* for field byte offset relocations or the forms:
- * *(T *)(rX + <off>) = rY
- * rX = *(T *)(rY + <off>),
- * we remember original and resolved field size to adjust direct
- * memory loads of pointers and integers; this is necessary for 32-bit
- * host kernel architectures, but also allows to automatically
- * relocate fields that were resized from, e.g., u32 to u64, etc.
- */
- bool fail_memsz_adjust;
- __u32 orig_sz;
- __u32 orig_type_id;
- __u32 new_sz;
- __u32 new_type_id;
-};
-
/* Calculate original and target relocation values, given local and target
* specs and relocation kind. These values are calculated for each candidate.
* If there are multiple candidates, resulting values should all be consistent
@@ -951,9 +926,9 @@ static int insn_bytes_to_bpf_size(__u32 sz)
* 5. *(T *)(rX + <off>) = rY, where T is one of {u8, u16, u32, u64};
* 6. *(T *)(rX + <off>) = <imm>, where T is one of {u8, u16, u32, u64}.
*/
-static int bpf_core_patch_insn(const char *prog_name, struct bpf_insn *insn,
- int insn_idx, const struct bpf_core_relo *relo,
- int relo_idx, const struct bpf_core_relo_res *res)
+int bpf_core_patch_insn(const char *prog_name, struct bpf_insn *insn,
+ int insn_idx, const struct bpf_core_relo *relo,
+ int relo_idx, const struct bpf_core_relo_res *res)
{
__u32 orig_val, new_val;
__u8 class;
@@ -1128,7 +1103,7 @@ static void bpf_core_dump_spec(const char *prog_name, int level, const struct bp
}
/*
- * CO-RE relocate single instruction.
+ * Calculate CO-RE relocation target result.
*
* The outline and important points of the algorithm:
* 1. For given local type, find corresponding candidate target types.
@@ -1177,18 +1152,18 @@ static void bpf_core_dump_spec(const char *prog_name, int level, const struct bp
* between multiple relocations for the same type ID and is updated as some
* of the candidates are pruned due to structural incompatibility.
*/
-int bpf_core_apply_relo_insn(const char *prog_name, struct bpf_insn *insn,
- int insn_idx,
- const struct bpf_core_relo *relo,
- int relo_idx,
- const struct btf *local_btf,
- struct bpf_core_cand_list *cands,
- struct bpf_core_spec *specs_scratch)
+int bpf_core_calc_relo_insn(const char *prog_name,
+ const struct bpf_core_relo *relo,
+ int relo_idx,
+ const struct btf *local_btf,
+ struct bpf_core_cand_list *cands,
+ struct bpf_core_spec *specs_scratch,
+ struct bpf_core_relo_res *targ_res)
{
struct bpf_core_spec *local_spec = &specs_scratch[0];
struct bpf_core_spec *cand_spec = &specs_scratch[1];
struct bpf_core_spec *targ_spec = &specs_scratch[2];
- struct bpf_core_relo_res cand_res, targ_res;
+ struct bpf_core_relo_res cand_res;
const struct btf_type *local_type;
const char *local_name;
__u32 local_id;
@@ -1223,12 +1198,12 @@ int bpf_core_apply_relo_insn(const char *prog_name, struct bpf_insn *insn,
/* TYPE_ID_LOCAL relo is special and doesn't need candidate search */
if (relo->kind == BPF_CORE_TYPE_ID_LOCAL) {
/* bpf_insn's imm value could get out of sync during linking */
- memset(&targ_res, 0, sizeof(targ_res));
- targ_res.validate = false;
- targ_res.poison = false;
- targ_res.orig_val = local_spec->root_type_id;
- targ_res.new_val = local_spec->root_type_id;
- goto patch_insn;
+ memset(targ_res, 0, sizeof(*targ_res));
+ targ_res->validate = false;
+ targ_res->poison = false;
+ targ_res->orig_val = local_spec->root_type_id;
+ targ_res->new_val = local_spec->root_type_id;
+ return 0;
}
/* libbpf doesn't support candidate search for anonymous types */
@@ -1262,7 +1237,7 @@ int bpf_core_apply_relo_insn(const char *prog_name, struct bpf_insn *insn,
return err;
if (j == 0) {
- targ_res = cand_res;
+ *targ_res = cand_res;
*targ_spec = *cand_spec;
} else if (cand_spec->bit_offset != targ_spec->bit_offset) {
/* if there are many field relo candidates, they
@@ -1272,7 +1247,8 @@ int bpf_core_apply_relo_insn(const char *prog_name, struct bpf_insn *insn,
prog_name, relo_idx, cand_spec->bit_offset,
targ_spec->bit_offset);
return -EINVAL;
- } else if (cand_res.poison != targ_res.poison || cand_res.new_val != targ_res.new_val) {
+ } else if (cand_res.poison != targ_res->poison ||
+ cand_res.new_val != targ_res->new_val) {
/* all candidates should result in the same relocation
* decision and value, otherwise it's dangerous to
* proceed due to ambiguity
@@ -1280,7 +1256,7 @@ int bpf_core_apply_relo_insn(const char *prog_name, struct bpf_insn *insn,
pr_warn("prog '%s': relo #%d: relocation decision ambiguity: %s %u != %s %u\n",
prog_name, relo_idx,
cand_res.poison ? "failure" : "success", cand_res.new_val,
- targ_res.poison ? "failure" : "success", targ_res.new_val);
+ targ_res->poison ? "failure" : "success", targ_res->new_val);
return -EINVAL;
}
@@ -1314,19 +1290,10 @@ int bpf_core_apply_relo_insn(const char *prog_name, struct bpf_insn *insn,
prog_name, relo_idx);
/* calculate single target relo result explicitly */
- err = bpf_core_calc_relo(prog_name, relo, relo_idx, local_spec, NULL, &targ_res);
+ err = bpf_core_calc_relo(prog_name, relo, relo_idx, local_spec, NULL, targ_res);
if (err)
return err;
}
-patch_insn:
- /* bpf_core_patch_insn() should know how to handle missing targ_spec */
- err = bpf_core_patch_insn(prog_name, insn, insn_idx, relo, relo_idx, &targ_res);
- if (err) {
- pr_warn("prog '%s': relo #%d: failed to patch insn #%u: %d\n",
- prog_name, relo_idx, relo->insn_off / 8, err);
- return -EINVAL;
- }
-
return 0;
}
diff --git a/tools/lib/bpf/relo_core.h b/tools/lib/bpf/relo_core.h
index 17799819ad7c..a28bf3711ce2 100644
--- a/tools/lib/bpf/relo_core.h
+++ b/tools/lib/bpf/relo_core.h
@@ -44,14 +44,44 @@ struct bpf_core_spec {
__u32 bit_offset;
};
-int bpf_core_apply_relo_insn(const char *prog_name,
- struct bpf_insn *insn, int insn_idx,
- const struct bpf_core_relo *relo, int relo_idx,
- const struct btf *local_btf,
- struct bpf_core_cand_list *cands,
- struct bpf_core_spec *specs_scratch);
+struct bpf_core_relo_res {
+ /* expected value in the instruction, unless validate == false */
+ __u32 orig_val;
+ /* new value that needs to be patched up to */
+ __u32 new_val;
+ /* relocation unsuccessful, poison instruction, but don't fail load */
+ bool poison;
+ /* some relocations can't be validated against orig_val */
+ bool validate;
+ /* for field byte offset relocations or the forms:
+ * *(T *)(rX + <off>) = rY
+ * rX = *(T *)(rY + <off>),
+ * we remember original and resolved field size to adjust direct
+ * memory loads of pointers and integers; this is necessary for 32-bit
+ * host kernel architectures, but also allows to automatically
+ * relocate fields that were resized from, e.g., u32 to u64, etc.
+ */
+ bool fail_memsz_adjust;
+ __u32 orig_sz;
+ __u32 orig_type_id;
+ __u32 new_sz;
+ __u32 new_type_id;
+};
+
int bpf_core_types_are_compat(const struct btf *local_btf, __u32 local_id,
const struct btf *targ_btf, __u32 targ_id);
size_t bpf_core_essential_name_len(const char *name);
+
+int bpf_core_calc_relo_insn(const char *prog_name,
+ const struct bpf_core_relo *relo, int relo_idx,
+ const struct btf *local_btf,
+ struct bpf_core_cand_list *cands,
+ struct bpf_core_spec *specs_scratch,
+ struct bpf_core_relo_res *targ_res);
+
+int bpf_core_patch_insn(const char *prog_name, struct bpf_insn *insn,
+ int insn_idx, const struct bpf_core_relo *relo,
+ int relo_idx, const struct bpf_core_relo_res *res);
+
#endif
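With the old single-call API split in two, callers resolve a relocation first and patch the instruction separately; a sketch of the calling convention, assuming relo, cands, insn, and friends are in scope as in libbpf's relocator:

	struct bpf_core_spec specs_scratch[3];
	struct bpf_core_relo_res targ_res;
	int err;

	/* step 1: resolve the relocation against candidate types */
	err = bpf_core_calc_relo_insn(prog_name, relo, relo_idx, local_btf,
				      cands, specs_scratch, &targ_res);
	if (err)
		return err;

	/* step 2: patch the instruction with the computed result */
	err = bpf_core_patch_insn(prog_name, insn, insn_idx, relo, relo_idx,
				  &targ_res);
	if (err)
		return err;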
diff --git a/tools/lib/bpf/skel_internal.h b/tools/lib/bpf/skel_internal.h
index 0b84d8e6b72a..bd6f4505e7b1 100644
--- a/tools/lib/bpf/skel_internal.h
+++ b/tools/lib/bpf/skel_internal.h
@@ -3,9 +3,19 @@
#ifndef __SKEL_INTERNAL_H
#define __SKEL_INTERNAL_H
+#ifdef __KERNEL__
+#include <linux/fdtable.h>
+#include <linux/mm.h>
+#include <linux/mman.h>
+#include <linux/slab.h>
+#include <linux/bpf.h>
+#else
#include <unistd.h>
#include <sys/syscall.h>
#include <sys/mman.h>
+#include <stdlib.h>
+#include "bpf.h"
+#endif
#ifndef __NR_bpf
# if defined(__mips__) && defined(_ABIO32)
@@ -25,24 +35,23 @@
* requested during loader program generation.
*/
struct bpf_map_desc {
- union {
- /* input for the loader prog */
- struct {
- __aligned_u64 initial_value;
- __u32 max_entries;
- };
- /* output of the loader prog */
- struct {
- int map_fd;
- };
- };
+ /* output of the loader prog */
+ int map_fd;
+ /* input for the loader prog */
+ __u32 max_entries;
+ __aligned_u64 initial_value;
};
struct bpf_prog_desc {
int prog_fd;
};
+enum {
+ BPF_SKEL_KERNEL = (1ULL << 0),
+};
+
struct bpf_loader_ctx {
- size_t sz;
+ __u32 sz;
+ __u32 flags;
__u32 log_level;
__u32 log_size;
__u64 log_buf;
@@ -57,12 +66,144 @@ struct bpf_load_and_run_opts {
const char *errstr;
};
+long bpf_sys_bpf(__u32 cmd, void *attr, __u32 attr_size);
+
static inline int skel_sys_bpf(enum bpf_cmd cmd, union bpf_attr *attr,
unsigned int size)
{
+#ifdef __KERNEL__
+ return bpf_sys_bpf(cmd, attr, size);
+#else
return syscall(__NR_bpf, cmd, attr, size);
+#endif
+}
+
+#ifdef __KERNEL__
+static inline int close(int fd)
+{
+ return close_fd(fd);
+}
+
+static inline void *skel_alloc(size_t size)
+{
+ struct bpf_loader_ctx *ctx = kzalloc(size, GFP_KERNEL);
+
+ if (!ctx)
+ return NULL;
+ ctx->flags |= BPF_SKEL_KERNEL;
+ return ctx;
+}
+
+static inline void skel_free(const void *p)
+{
+ kfree(p);
+}
+
+/* skel->bss/rodata maps are populated the following way:
+ *
+ * For kernel use:
+ * skel_prep_map_data() allocates kernel memory that kernel module can directly access.
+ * Generated lskel stores the pointer in skel->rodata and in skel->maps.rodata.initial_value.
+ * The loader program will perform probe_read_kernel() from maps.rodata.initial_value.
+ * skel_finalize_map_data() sets skel->rodata to point to actual value in a bpf map and
+ * does maps.rodata.initial_value = ~0ULL to signal skel_free_map_data() that kvfree
+ * is not necessary.
+ *
+ * For user space:
+ * skel_prep_map_data() mmaps anon memory into skel->rodata that can be accessed directly.
+ * Generated lskel stores the pointer in skel->rodata and in skel->maps.rodata.initial_value.
+ * The loader program will perform copy_from_user() from maps.rodata.initial_value.
+ * skel_finalize_map_data() remaps bpf array map value from the kernel memory into
+ * skel->rodata address.
+ *
+ * The "bpftool gen skeleton -L" command generates lskel.h that is suitable for
+ * both kernel and user space. The generated loader program does
+ * either bpf_probe_read_kernel() or bpf_copy_from_user() from initial_value
+ * depending on bpf_loader_ctx->flags.
+ */
+static inline void skel_free_map_data(void *p, __u64 addr, size_t sz)
+{
+ if (addr != ~0ULL)
+ kvfree(p);
+ /* When addr == ~0ULL, 'p' points to
+ * ((struct bpf_array *)map)->value. See skel_finalize_map_data.
+ */
+}
+
+static inline void *skel_prep_map_data(const void *val, size_t mmap_sz, size_t val_sz)
+{
+ void *addr;
+
+ addr = kvmalloc(val_sz, GFP_KERNEL);
+ if (!addr)
+ return NULL;
+ memcpy(addr, val, val_sz);
+ return addr;
+}
+
+static inline void *skel_finalize_map_data(__u64 *init_val, size_t mmap_sz, int flags, int fd)
+{
+ struct bpf_map *map;
+ void *addr = NULL;
+
+ kvfree((void *) (long) *init_val);
+ *init_val = ~0ULL;
+
+ /* At this point bpf_load_and_run() finished without error and
+ * 'fd' is a valid bpf map FD. All sanity checks below should succeed.
+ */
+ map = bpf_map_get(fd);
+ if (IS_ERR(map))
+ return NULL;
+ if (map->map_type != BPF_MAP_TYPE_ARRAY)
+ goto out;
+ addr = ((struct bpf_array *)map)->value;
+ /* the addr stays valid, since FD is not closed */
+out:
+ bpf_map_put(map);
+ return addr;
+}
+
+#else
+
+static inline void *skel_alloc(size_t size)
+{
+ return calloc(1, size);
+}
+
+static inline void skel_free(void *p)
+{
+ free(p);
+}
+
+static inline void skel_free_map_data(void *p, __u64 addr, size_t sz)
+{
+ munmap(p, sz);
+}
+
+static inline void *skel_prep_map_data(const void *val, size_t mmap_sz, size_t val_sz)
+{
+ void *addr;
+
+ addr = mmap(NULL, mmap_sz, PROT_READ | PROT_WRITE,
+ MAP_SHARED | MAP_ANONYMOUS, -1, 0);
+ if (addr == (void *) -1)
+ return NULL;
+ memcpy(addr, val, val_sz);
+ return addr;
}
+static inline void *skel_finalize_map_data(__u64 *init_val, size_t mmap_sz, int flags, int fd)
+{
+ void *addr;
+
+ addr = mmap((void *) (long) *init_val, mmap_sz, flags, MAP_SHARED | MAP_FIXED, fd, 0);
+ if (addr == (void *) -1)
+ return NULL;
+ return addr;
+}
+#endif
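
A minimal user-space sketch (not part of the patch) of how the helpers above cooperate during lskel load: the initial rodata is staged in an anonymous mapping, the loader program copies it into the array map, and skel_finalize_map_data() then remaps the live map value over skel->rodata. The function name and the array_map_fd argument are hypothetical; the skel_* helpers are the ones defined above.

#include <sys/mman.h>

/* Hypothetical illustration; assumes the user-space skel_* helpers above. */
static int example_rodata_lifecycle(int array_map_fd, size_t mmap_sz,
				    const void *init, size_t init_sz)
{
	__u64 init_val;
	void *rodata;

	/* Stage the initial contents in an anonymous, writable mapping. */
	rodata = skel_prep_map_data(init, mmap_sz, init_sz);
	if (!rodata)
		return -1;
	init_val = (__u64)(long)rodata;

	/* ... bpf_load_and_run() executes the loader program here, which
	 * bpf_copy_from_user()s the staged bytes out of init_val ...
	 */

	/* Remap the live array map value over the same skel->rodata slot. */
	rodata = skel_finalize_map_data(&init_val, mmap_sz,
					PROT_READ | PROT_WRITE, array_map_fd);
	return rodata ? 0 : -1;
}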
+
static inline int skel_closenz(int fd)
{
if (fd > 0)
@@ -70,22 +211,94 @@ static inline int skel_closenz(int fd)
return -EINVAL;
}
+#ifndef offsetofend
+#define offsetofend(TYPE, MEMBER) \
+ (offsetof(TYPE, MEMBER) + sizeof((((TYPE *)0)->MEMBER)))
+#endif
+
+static inline int skel_map_create(enum bpf_map_type map_type,
+ const char *map_name,
+ __u32 key_size,
+ __u32 value_size,
+ __u32 max_entries)
+{
+ const size_t attr_sz = offsetofend(union bpf_attr, map_extra);
+ union bpf_attr attr;
+
+ memset(&attr, 0, attr_sz);
+
+ attr.map_type = map_type;
+ strncpy(attr.map_name, map_name, sizeof(attr.map_name));
+ attr.key_size = key_size;
+ attr.value_size = value_size;
+ attr.max_entries = max_entries;
+
+ return skel_sys_bpf(BPF_MAP_CREATE, &attr, attr_sz);
+}
+
+static inline int skel_map_update_elem(int fd, const void *key,
+ const void *value, __u64 flags)
+{
+ const size_t attr_sz = offsetofend(union bpf_attr, flags);
+ union bpf_attr attr;
+
+ memset(&attr, 0, attr_sz);
+ attr.map_fd = fd;
+ attr.key = (long) key;
+ attr.value = (long) value;
+ attr.flags = flags;
+
+ return skel_sys_bpf(BPF_MAP_UPDATE_ELEM, &attr, attr_sz);
+}
+
+static inline int skel_raw_tracepoint_open(const char *name, int prog_fd)
+{
+ const size_t attr_sz = offsetofend(union bpf_attr, raw_tracepoint.prog_fd);
+ union bpf_attr attr;
+
+ memset(&attr, 0, attr_sz);
+ attr.raw_tracepoint.name = (long) name;
+ attr.raw_tracepoint.prog_fd = prog_fd;
+
+ return skel_sys_bpf(BPF_RAW_TRACEPOINT_OPEN, &attr, attr_sz);
+}
+
+static inline int skel_link_create(int prog_fd, int target_fd,
+ enum bpf_attach_type attach_type)
+{
+ const size_t attr_sz = offsetofend(union bpf_attr, link_create.iter_info_len);
+ union bpf_attr attr;
+
+ memset(&attr, 0, attr_sz);
+ attr.link_create.prog_fd = prog_fd;
+ attr.link_create.target_fd = target_fd;
+ attr.link_create.attach_type = attach_type;
+
+ return skel_sys_bpf(BPF_LINK_CREATE, &attr, attr_sz);
+}
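
The four wrappers above all follow the same pattern: size the bpf_attr to offsetofend() of the last field the command uses, so the kernel's check for trailing non-zero bytes never trips over fields this header does not know about. As a hedged sketch, a lookup wrapper in the same style (not part of the patch) would look like:

static inline int skel_map_lookup_elem(int fd, const void *key, void *value)
{
	/* Hypothetical helper; sized to the last field the element
	 * commands use, exactly as skel_map_update_elem() above.
	 */
	const size_t attr_sz = offsetofend(union bpf_attr, flags);
	union bpf_attr attr;

	memset(&attr, 0, attr_sz);
	attr.map_fd = fd;
	attr.key = (long) key;
	attr.value = (long) value;

	return skel_sys_bpf(BPF_MAP_LOOKUP_ELEM, &attr, attr_sz);
}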
+
+#ifdef __KERNEL__
+#define set_err
+#else
+#define set_err err = -errno
+#endif
+
static inline int bpf_load_and_run(struct bpf_load_and_run_opts *opts)
{
int map_fd = -1, prog_fd = -1, key = 0, err;
union bpf_attr attr;
- map_fd = bpf_map_create(BPF_MAP_TYPE_ARRAY, "__loader.map", 4, opts->data_sz, 1, NULL);
+ err = map_fd = skel_map_create(BPF_MAP_TYPE_ARRAY, "__loader.map", 4, opts->data_sz, 1);
if (map_fd < 0) {
opts->errstr = "failed to create loader map";
- err = -errno;
+ set_err;
goto out;
}
- err = bpf_map_update_elem(map_fd, &key, opts->data, 0);
+ err = skel_map_update_elem(map_fd, &key, opts->data, 0);
if (err < 0) {
opts->errstr = "failed to update loader map";
- err = -errno;
+ set_err;
goto out;
}
@@ -100,10 +313,10 @@ static inline int bpf_load_and_run(struct bpf_load_and_run_opts *opts)
attr.log_size = opts->ctx->log_size;
attr.log_buf = opts->ctx->log_buf;
attr.prog_flags = BPF_F_SLEEPABLE;
- prog_fd = skel_sys_bpf(BPF_PROG_LOAD, &attr, sizeof(attr));
+ err = prog_fd = skel_sys_bpf(BPF_PROG_LOAD, &attr, sizeof(attr));
if (prog_fd < 0) {
opts->errstr = "failed to load loader prog";
- err = -errno;
+ set_err;
goto out;
}
@@ -115,10 +328,12 @@ static inline int bpf_load_and_run(struct bpf_load_and_run_opts *opts)
if (err < 0 || (int)attr.test.retval < 0) {
opts->errstr = "failed to execute loader prog";
if (err < 0) {
- err = -errno;
+ set_err;
} else {
err = (int)attr.test.retval;
+#ifndef __KERNEL__
errno = -err;
+#endif
}
goto out;
}
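
Putting the pieces together, a caller of bpf_load_and_run() (typically the generated lskel load function) fills the opts with the loader program's blobs and reports opts.errstr on failure. A hedged user-space sketch, with hypothetical function and argument names:

#include <stdio.h>

static int example_load(struct bpf_loader_ctx *ctx,
			void *data, __u32 data_sz,
			void *insns, __u32 insns_sz)
{
	struct bpf_load_and_run_opts opts = {
		.ctx = ctx,		/* log buffer and flags live here */
		.data = data,		/* map blob copied into __loader.map */
		.data_sz = data_sz,
		.insns = insns,		/* loader program instructions */
		.insns_sz = insns_sz,
	};
	int err;

	err = bpf_load_and_run(&opts);
	if (err < 0)
		fprintf(stderr, "loader failed: %s\n", opts.errstr);
	return err;
}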
diff --git a/tools/lib/bpf/xsk.c b/tools/lib/bpf/xsk.c
index edafe56664f3..af136f73b09d 100644
--- a/tools/lib/bpf/xsk.c
+++ b/tools/lib/bpf/xsk.c
@@ -481,8 +481,8 @@ static int xsk_load_xdp_prog(struct xsk_socket *xsk)
BPF_EMIT_CALL(BPF_FUNC_redirect_map),
BPF_EXIT_INSN(),
};
- size_t insns_cnt[] = {sizeof(prog) / sizeof(struct bpf_insn),
- sizeof(prog_redirect_flags) / sizeof(struct bpf_insn),
+ size_t insns_cnt[] = {ARRAY_SIZE(prog),
+ ARRAY_SIZE(prog_redirect_flags),
};
struct bpf_insn *progs[] = {prog, prog_redirect_flags};
enum xsk_prog option = get_xsk_prog();
@@ -1193,12 +1193,23 @@ int xsk_socket__create(struct xsk_socket **xsk_ptr, const char *ifname,
int xsk_umem__delete(struct xsk_umem *umem)
{
+ struct xdp_mmap_offsets off;
+ int err;
+
if (!umem)
return 0;
if (umem->refcount)
return -EBUSY;
+ err = xsk_get_mmap_offsets(umem->fd, &off);
+ if (!err && umem->fill_save && umem->comp_save) {
+ munmap(umem->fill_save->ring - off.fr.desc,
+ off.fr.desc + umem->config.fill_size * sizeof(__u64));
+ munmap(umem->comp_save->ring - off.cr.desc,
+ off.cr.desc + umem->config.comp_size * sizeof(__u64));
+ }
+
close(umem->fd);
free(umem);
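
The unmap arithmetic mirrors how the rings were mapped at creation time: mmap() covers the offsets header plus the descriptors, and the saved ring pointer is the mapping base advanced by off.fr.desc. A simplified sketch of the matching setup (based on the fill-ring mapping elsewhere in xsk.c, not part of this hunk):

#include <errno.h>
#include <sys/mman.h>
#include <linux/if_xdp.h>

static int map_fill_ring(void **ring, int umem_fd,
			 const struct xdp_mmap_offsets *off, __u32 fill_size)
{
	size_t len = off->fr.desc + fill_size * sizeof(__u64);
	void *map;

	map = mmap(NULL, len, PROT_READ | PROT_WRITE,
		   MAP_SHARED | MAP_POPULATE, umem_fd,
		   XDP_UMEM_PGOFF_FILL_RING);
	if (map == MAP_FAILED)
		return -errno;
	/* The ring pointer saved here is what xsk_umem__delete() walks
	 * back by off.fr.desc before calling munmap(map, len).
	 */
	*ring = (char *)map + off->fr.desc;
	return 0;
}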
diff --git a/tools/perf/tests/llvm.c b/tools/perf/tests/llvm.c
index 8ac0a3a457ef..0bc25a56cfef 100644
--- a/tools/perf/tests/llvm.c
+++ b/tools/perf/tests/llvm.c
@@ -13,7 +13,7 @@ static int test__bpf_parsing(void *obj_buf, size_t obj_buf_sz)
{
struct bpf_object *obj;
- obj = bpf_object__open_buffer(obj_buf, obj_buf_sz, NULL);
+ obj = bpf_object__open_mem(obj_buf, obj_buf_sz, NULL);
if (libbpf_get_error(obj))
return TEST_FAIL;
bpf_object__close(obj);
diff --git a/tools/perf/util/bpf-loader.c b/tools/perf/util/bpf-loader.c
index 16ec605a9fe4..ec6d9e7b446d 100644
--- a/tools/perf/util/bpf-loader.c
+++ b/tools/perf/util/bpf-loader.c
@@ -54,6 +54,7 @@ static bool libbpf_initialized;
struct bpf_object *
bpf__prepare_load_buffer(void *obj_buf, size_t obj_buf_sz, const char *name)
{
+ LIBBPF_OPTS(bpf_object_open_opts, opts, .object_name = name);
struct bpf_object *obj;
if (!libbpf_initialized) {
@@ -61,7 +62,7 @@ bpf__prepare_load_buffer(void *obj_buf, size_t obj_buf_sz, const char *name)
libbpf_initialized = true;
}
- obj = bpf_object__open_buffer(obj_buf, obj_buf_sz, name);
+ obj = bpf_object__open_mem(obj_buf, obj_buf_sz, &opts);
if (IS_ERR_OR_NULL(obj)) {
pr_debug("bpf: failed to load buffer\n");
return ERR_PTR(-EINVAL);
@@ -72,6 +73,7 @@ bpf__prepare_load_buffer(void *obj_buf, size_t obj_buf_sz, const char *name)
struct bpf_object *bpf__prepare_load(const char *filename, bool source)
{
+ LIBBPF_OPTS(bpf_object_open_opts, opts, .object_name = filename);
struct bpf_object *obj;
if (!libbpf_initialized) {
@@ -94,7 +96,7 @@ struct bpf_object *bpf__prepare_load(const char *filename, bool source)
return ERR_PTR(-BPF_LOADER_ERRNO__COMPILE);
} else
pr_debug("bpf: successful builtin compilation\n");
- obj = bpf_object__open_buffer(obj_buf, obj_buf_sz, filename);
+ obj = bpf_object__open_mem(obj_buf, obj_buf_sz, &opts);
if (!IS_ERR_OR_NULL(obj) && llvm_param.dump_obj)
llvm__dump_obj(filename, obj_buf, obj_buf_sz);
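
Both call sites follow the same migration: the deprecated bpf_object__open_buffer(buf, sz, name) becomes bpf_object__open_mem(buf, sz, &opts), with the name carried in bpf_object_open_opts instead. A condensed sketch of the pattern (illustrative, not a new helper in the patch):

#include <bpf/libbpf.h>

static struct bpf_object *open_named_buffer(void *buf, size_t sz,
					    const char *name)
{
	/* .object_name replaces the old third argument of
	 * bpf_object__open_buffer().
	 */
	LIBBPF_OPTS(bpf_object_open_opts, opts, .object_name = name);

	return bpf_object__open_mem(buf, sz, &opts);
}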
@@ -654,11 +656,11 @@ int bpf__probe(struct bpf_object *obj)
}
if (priv->is_tp) {
- bpf_program__set_tracepoint(prog);
+ bpf_program__set_type(prog, BPF_PROG_TYPE_TRACEPOINT);
continue;
}
- bpf_program__set_kprobe(prog);
+ bpf_program__set_type(prog, BPF_PROG_TYPE_KPROBE);
pev = &priv->pev;
err = convert_perf_probe_events(pev, 1);
@@ -1005,24 +1007,22 @@ __bpf_map__config_value(struct bpf_map *map,
{
struct bpf_map_op *op;
const char *map_name = bpf_map__name(map);
- const struct bpf_map_def *def = bpf_map__def(map);
- if (IS_ERR(def)) {
- pr_debug("Unable to get map definition from '%s'\n",
- map_name);
+ if (!map) {
+ pr_debug("Map '%s' is invalid\n", map_name);
return -BPF_LOADER_ERRNO__INTERNAL;
}
- if (def->type != BPF_MAP_TYPE_ARRAY) {
+ if (bpf_map__type(map) != BPF_MAP_TYPE_ARRAY) {
pr_debug("Map %s type is not BPF_MAP_TYPE_ARRAY\n",
map_name);
return -BPF_LOADER_ERRNO__OBJCONF_MAP_TYPE;
}
- if (def->key_size < sizeof(unsigned int)) {
+ if (bpf_map__key_size(map) < sizeof(unsigned int)) {
pr_debug("Map %s has incorrect key size\n", map_name);
return -BPF_LOADER_ERRNO__OBJCONF_MAP_KEYSIZE;
}
- switch (def->value_size) {
+ switch (bpf_map__value_size(map)) {
case 1:
case 2:
case 4:
@@ -1064,7 +1064,6 @@ __bpf_map__config_event(struct bpf_map *map,
struct parse_events_term *term,
struct evlist *evlist)
{
- const struct bpf_map_def *def;
struct bpf_map_op *op;
const char *map_name = bpf_map__name(map);
struct evsel *evsel = evlist__find_evsel_by_str(evlist, term->val.str);
@@ -1075,18 +1074,16 @@ __bpf_map__config_event(struct bpf_map *map,
return -BPF_LOADER_ERRNO__OBJCONF_MAP_NOEVT;
}
- def = bpf_map__def(map);
- if (IS_ERR(def)) {
- pr_debug("Unable to get map definition from '%s'\n",
- map_name);
- return PTR_ERR(def);
+ if (!map) {
+ pr_debug("Map '%s' is invalid\n", map_name);
+ return PTR_ERR(map);
}
/*
* No need to check key_size and value_size:
* kernel has already checked them.
*/
- if (def->type != BPF_MAP_TYPE_PERF_EVENT_ARRAY) {
+ if (bpf_map__type(map) != BPF_MAP_TYPE_PERF_EVENT_ARRAY) {
pr_debug("Map %s type is not BPF_MAP_TYPE_PERF_EVENT_ARRAY\n",
map_name);
return -BPF_LOADER_ERRNO__OBJCONF_MAP_TYPE;
@@ -1135,7 +1132,6 @@ config_map_indices_range_check(struct parse_events_term *term,
const char *map_name)
{
struct parse_events_array *array = &term->array;
- const struct bpf_map_def *def;
unsigned int i;
if (!array->nr_ranges)
@@ -1146,10 +1142,8 @@ config_map_indices_range_check(struct parse_events_term *term,
return -BPF_LOADER_ERRNO__INTERNAL;
}
- def = bpf_map__def(map);
- if (IS_ERR(def)) {
- pr_debug("ERROR: Unable to get map definition from '%s'\n",
- map_name);
+ if (!map) {
+ pr_debug("Map '%s' is invalid\n", map_name);
return -BPF_LOADER_ERRNO__INTERNAL;
}
@@ -1158,7 +1152,7 @@ config_map_indices_range_check(struct parse_events_term *term,
size_t length = array->ranges[i].length;
unsigned int idx = start + length - 1;
- if (idx >= def->max_entries) {
+ if (idx >= bpf_map__max_entries(map)) {
pr_debug("ERROR: index %d too large\n", idx);
return -BPF_LOADER_ERRNO__OBJCONF_MAP_IDX2BIG;
}
@@ -1252,21 +1246,21 @@ out:
}
typedef int (*map_config_func_t)(const char *name, int map_fd,
- const struct bpf_map_def *pdef,
+ const struct bpf_map *map,
struct bpf_map_op *op,
void *pkey, void *arg);
static int
foreach_key_array_all(map_config_func_t func,
void *arg, const char *name,
- int map_fd, const struct bpf_map_def *pdef,
+ int map_fd, const struct bpf_map *map,
struct bpf_map_op *op)
{
unsigned int i;
int err;
- for (i = 0; i < pdef->max_entries; i++) {
- err = func(name, map_fd, pdef, op, &i, arg);
+ for (i = 0; i < bpf_map__max_entries(map); i++) {
+ err = func(name, map_fd, map, op, &i, arg);
if (err) {
pr_debug("ERROR: failed to insert value to %s[%u]\n",
name, i);
@@ -1279,7 +1273,7 @@ foreach_key_array_all(map_config_func_t func,
static int
foreach_key_array_ranges(map_config_func_t func, void *arg,
const char *name, int map_fd,
- const struct bpf_map_def *pdef,
+ const struct bpf_map *map,
struct bpf_map_op *op)
{
unsigned int i, j;
@@ -1292,7 +1286,7 @@ foreach_key_array_ranges(map_config_func_t func, void *arg,
for (j = 0; j < length; j++) {
unsigned int idx = start + j;
- err = func(name, map_fd, pdef, op, &idx, arg);
+ err = func(name, map_fd, map, op, &idx, arg);
if (err) {
pr_debug("ERROR: failed to insert value to %s[%u]\n",
name, idx);
@@ -1308,9 +1302,8 @@ bpf_map_config_foreach_key(struct bpf_map *map,
map_config_func_t func,
void *arg)
{
- int err, map_fd;
+ int err, map_fd, type;
struct bpf_map_op *op;
- const struct bpf_map_def *def;
const char *name = bpf_map__name(map);
struct bpf_map_priv *priv = bpf_map__priv(map);
@@ -1323,9 +1316,8 @@ bpf_map_config_foreach_key(struct bpf_map *map,
return 0;
}
- def = bpf_map__def(map);
- if (IS_ERR(def)) {
- pr_debug("ERROR: failed to get definition from map %s\n", name);
+ if (!map) {
+ pr_debug("Map '%s' is invalid\n", name);
return -BPF_LOADER_ERRNO__INTERNAL;
}
map_fd = bpf_map__fd(map);
@@ -1334,19 +1326,19 @@ bpf_map_config_foreach_key(struct bpf_map *map,
return map_fd;
}
+ type = bpf_map__type(map);
list_for_each_entry(op, &priv->ops_list, list) {
- switch (def->type) {
+ switch (type) {
case BPF_MAP_TYPE_ARRAY:
case BPF_MAP_TYPE_PERF_EVENT_ARRAY:
switch (op->key_type) {
case BPF_MAP_KEY_ALL:
err = foreach_key_array_all(func, arg, name,
- map_fd, def, op);
+ map_fd, map, op);
break;
case BPF_MAP_KEY_RANGES:
err = foreach_key_array_ranges(func, arg, name,
- map_fd, def,
- op);
+ map_fd, map, op);
break;
default:
pr_debug("ERROR: keytype for map '%s' invalid\n",
@@ -1455,7 +1447,7 @@ apply_config_evsel_for_key(const char *name, int map_fd, void *pkey,
static int
apply_obj_config_map_for_key(const char *name, int map_fd,
- const struct bpf_map_def *pdef,
+ const struct bpf_map *map,
struct bpf_map_op *op,
void *pkey, void *arg __maybe_unused)
{
@@ -1464,7 +1456,7 @@ apply_obj_config_map_for_key(const char *name, int map_fd,
switch (op->op_type) {
case BPF_MAP_OP_SET_VALUE:
err = apply_config_value_for_key(map_fd, pkey,
- pdef->value_size,
+ bpf_map__value_size(map),
op->v.value);
break;
case BPF_MAP_OP_SET_EVSEL:
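
Every hunk in this file applies the same substitution: the deprecated struct bpf_map_def snapshot from bpf_map__def() is replaced with per-field getters, so there is no definition pointer left to IS_ERR()-check. Condensed into one hedged sketch (illustrative only, not code from the patch):

#include <errno.h>
#include <bpf/libbpf.h>

static int check_array_map(const struct bpf_map *map)
{
	/* Field-by-field getters replace the old bpf_map__def() snapshot. */
	if (bpf_map__type(map) != BPF_MAP_TYPE_ARRAY)
		return -EINVAL;
	if (bpf_map__key_size(map) < sizeof(unsigned int))
		return -EINVAL;
	if (bpf_map__value_size(map) > sizeof(__u64))
		return -E2BIG;
	return 0;
}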
diff --git a/tools/perf/util/bpf_map.c b/tools/perf/util/bpf_map.c
index eb853ca67cf4..c863ae0c5cb5 100644
--- a/tools/perf/util/bpf_map.c
+++ b/tools/perf/util/bpf_map.c
@@ -9,25 +9,25 @@
#include <stdlib.h>
#include <unistd.h>
-static bool bpf_map_def__is_per_cpu(const struct bpf_map_def *def)
+static bool bpf_map__is_per_cpu(enum bpf_map_type type)
{
- return def->type == BPF_MAP_TYPE_PERCPU_HASH ||
- def->type == BPF_MAP_TYPE_PERCPU_ARRAY ||
- def->type == BPF_MAP_TYPE_LRU_PERCPU_HASH ||
- def->type == BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE;
+ return type == BPF_MAP_TYPE_PERCPU_HASH ||
+ type == BPF_MAP_TYPE_PERCPU_ARRAY ||
+ type == BPF_MAP_TYPE_LRU_PERCPU_HASH ||
+ type == BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE;
}
-static void *bpf_map_def__alloc_value(const struct bpf_map_def *def)
+static void *bpf_map__alloc_value(const struct bpf_map *map)
{
- if (bpf_map_def__is_per_cpu(def))
- return malloc(round_up(def->value_size, 8) * sysconf(_SC_NPROCESSORS_CONF));
+ if (bpf_map__is_per_cpu(bpf_map__type(map)))
+ return malloc(round_up(bpf_map__value_size(map), 8) *
+ sysconf(_SC_NPROCESSORS_CONF));
- return malloc(def->value_size);
+ return malloc(bpf_map__value_size(map));
}
int bpf_map__fprintf(struct bpf_map *map, FILE *fp)
{
- const struct bpf_map_def *def = bpf_map__def(map);
void *prev_key = NULL, *key, *value;
int fd = bpf_map__fd(map), err;
int printed = 0;
@@ -35,15 +35,15 @@ int bpf_map__fprintf(struct bpf_map *map, FILE *fp)
if (fd < 0)
return fd;
- if (IS_ERR(def))
- return PTR_ERR(def);
+ if (!map)
+ return PTR_ERR(map);
err = -ENOMEM;
- key = malloc(def->key_size);
+ key = malloc(bpf_map__key_size(map));
if (key == NULL)
goto out;
- value = bpf_map_def__alloc_value(def);
+ value = bpf_map__alloc_value(map);
if (value == NULL)
goto out_free_key;
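
The per-CPU sizing above matters because lookups on per-CPU maps return one value slot per possible CPU, each padded to an 8-byte boundary. A hedged restatement of the buffer-size rule used by bpf_map__alloc_value():

#include <unistd.h>
#include <bpf/libbpf.h>

static size_t percpu_value_buf_size(const struct bpf_map *map)
{
	/* round value_size up to 8, then one slot per configured CPU */
	size_t per_cpu = (bpf_map__value_size(map) + 7) & ~(size_t)7;

	return per_cpu * sysconf(_SC_NPROCESSORS_CONF);
}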
diff --git a/tools/scripts/Makefile.include b/tools/scripts/Makefile.include
index 79d102304470..a2335e402145 100644
--- a/tools/scripts/Makefile.include
+++ b/tools/scripts/Makefile.include
@@ -89,6 +89,9 @@ ifeq ($(CC_NO_CLANG), 1)
EXTRA_WARNINGS += -Wstrict-aliasing=3
else ifneq ($(CROSS_COMPILE),)
+# Allow users to override CLANG_CROSS_FLAGS to specify their own
+# sysroots and flags or to avoid the GCC call in pure Clang builds.
+ifeq ($(CLANG_CROSS_FLAGS),)
CLANG_CROSS_FLAGS := --target=$(notdir $(CROSS_COMPILE:%-=%))
GCC_TOOLCHAIN_DIR := $(dir $(shell which $(CROSS_COMPILE)gcc 2>/dev/null))
ifneq ($(GCC_TOOLCHAIN_DIR),)
@@ -96,6 +99,7 @@ CLANG_CROSS_FLAGS += --prefix=$(GCC_TOOLCHAIN_DIR)$(notdir $(CROSS_COMPILE))
CLANG_CROSS_FLAGS += --sysroot=$(shell $(CROSS_COMPILE)gcc -print-sysroot)
CLANG_CROSS_FLAGS += --gcc-toolchain=$(realpath $(GCC_TOOLCHAIN_DIR)/..)
endif # GCC_TOOLCHAIN_DIR
+endif # CLANG_CROSS_FLAGS
CFLAGS += $(CLANG_CROSS_FLAGS)
AFLAGS += $(CLANG_CROSS_FLAGS)
endif # CROSS_COMPILE
diff --git a/tools/testing/selftests/bpf/.gitignore b/tools/testing/selftests/bpf/.gitignore
index 1dad8d617da8..595565eb68c0 100644
--- a/tools/testing/selftests/bpf/.gitignore
+++ b/tools/testing/selftests/bpf/.gitignore
@@ -1,4 +1,5 @@
# SPDX-License-Identifier: GPL-2.0-only
+bpftool
bpf-helpers*
bpf-syscall*
test_verifier
@@ -30,6 +31,7 @@ test_tcp_check_syncookie_user
test_sysctl
xdping
test_cpp
+*.subskel.h
*.skel.h
*.lskel.h
/no_alu32
diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
index 42ffc24e9e71..3820608faf57 100644
--- a/tools/testing/selftests/bpf/Makefile
+++ b/tools/testing/selftests/bpf/Makefile
@@ -21,11 +21,11 @@ endif
BPF_GCC ?= $(shell command -v bpf-gcc;)
SAN_CFLAGS ?=
-CFLAGS += -g -O0 -rdynamic -Wall $(GENFLAGS) $(SAN_CFLAGS) \
+CFLAGS += -g -O0 -rdynamic -Wall -Werror $(GENFLAGS) $(SAN_CFLAGS) \
-I$(CURDIR) -I$(INCLUDE_DIR) -I$(GENDIR) -I$(LIBDIR) \
-I$(TOOLSINCDIR) -I$(APIDIR) -I$(OUTPUT)
LDFLAGS += $(SAN_CFLAGS)
-LDLIBS += -lcap -lelf -lz -lrt -lpthread
+LDLIBS += -lelf -lz -lrt -lpthread
# Silence some warnings when compiled with clang
ifneq ($(LLVM),)
@@ -195,6 +195,7 @@ $(TEST_GEN_PROGS) $(TEST_GEN_PROGS_EXTENDED): $(BPFOBJ)
CGROUP_HELPERS := $(OUTPUT)/cgroup_helpers.o
TESTING_HELPERS := $(OUTPUT)/testing_helpers.o
TRACE_HELPERS := $(OUTPUT)/trace_helpers.o
+CAP_HELPERS := $(OUTPUT)/cap_helpers.o
$(OUTPUT)/test_dev_cgroup: $(CGROUP_HELPERS) $(TESTING_HELPERS)
$(OUTPUT)/test_skb_cgroup_id_user: $(CGROUP_HELPERS) $(TESTING_HELPERS)
@@ -211,7 +212,7 @@ $(OUTPUT)/test_lirc_mode2_user: $(TESTING_HELPERS)
$(OUTPUT)/xdping: $(TESTING_HELPERS)
$(OUTPUT)/flow_dissector_load: $(TESTING_HELPERS)
$(OUTPUT)/test_maps: $(TESTING_HELPERS)
-$(OUTPUT)/test_verifier: $(TESTING_HELPERS)
+$(OUTPUT)/test_verifier: $(TESTING_HELPERS) $(CAP_HELPERS)
BPFTOOL ?= $(DEFAULT_BPFTOOL)
$(DEFAULT_BPFTOOL): $(wildcard $(BPFTOOLDIR)/*.[ch] $(BPFTOOLDIR)/Makefile) \
@@ -292,7 +293,7 @@ IS_LITTLE_ENDIAN = $(shell $(CC) -dM -E - </dev/null | \
MENDIAN=$(if $(IS_LITTLE_ENDIAN),-mlittle-endian,-mbig-endian)
CLANG_SYS_INCLUDES = $(call get_sys_includes,$(CLANG))
-BPF_CFLAGS = -g -D__TARGET_ARCH_$(SRCARCH) $(MENDIAN) \
+BPF_CFLAGS = -g -Werror -D__TARGET_ARCH_$(SRCARCH) $(MENDIAN) \
-I$(INCLUDE_DIR) -I$(CURDIR) -I$(APIDIR) \
-I$(abspath $(OUTPUT)/../usr/include)
@@ -326,11 +327,17 @@ endef
SKEL_BLACKLIST := btf__% test_pinning_invalid.c test_sk_assign.c
LINKED_SKELS := test_static_linked.skel.h linked_funcs.skel.h \
- linked_vars.skel.h linked_maps.skel.h
+ linked_vars.skel.h linked_maps.skel.h \
+ test_subskeleton.skel.h test_subskeleton_lib.skel.h
+
+# In the subskeleton case, we want the test_subskeleton_lib.subskel.h file
+# but that's created as a side-effect of the skel.h generation.
+test_subskeleton.skel.h-deps := test_subskeleton_lib2.o test_subskeleton_lib.o test_subskeleton.o
+test_subskeleton_lib.skel.h-deps := test_subskeleton_lib2.o test_subskeleton_lib.o
LSKELS := kfunc_call_test.c fentry_test.c fexit_test.c fexit_sleep.c \
test_ringbuf.c atomics.c trace_printk.c trace_vprintk.c \
- map_ptr_kern.c core_kern.c
+ map_ptr_kern.c core_kern.c core_kern_overflow.c
# Generate both light skeleton and libbpf skeleton for these
LSKELS_EXTRA := test_ksyms_module.c test_ksyms_weak.c kfunc_call_test_subprog.c
SKEL_BLACKLIST += $$(LSKELS)
@@ -404,6 +411,7 @@ $(TRUNNER_BPF_SKELS): %.skel.h: %.o $(BPFTOOL) | $(TRUNNER_OUTPUT)
$(Q)$$(BPFTOOL) gen object $$(<:.o=.linked3.o) $$(<:.o=.linked2.o)
$(Q)diff $$(<:.o=.linked2.o) $$(<:.o=.linked3.o)
$(Q)$$(BPFTOOL) gen skeleton $$(<:.o=.linked3.o) name $$(notdir $$(<:.o=)) > $$@
+ $(Q)$$(BPFTOOL) gen subskeleton $$(<:.o=.linked3.o) name $$(notdir $$(<:.o=)) > $$(@:.skel.h=.subskel.h)
$(TRUNNER_BPF_LSKELS): %.lskel.h: %.o $(BPFTOOL) | $(TRUNNER_OUTPUT)
$$(call msg,GEN-SKEL,$(TRUNNER_BINARY),$$@)
@@ -421,6 +429,7 @@ $(TRUNNER_BPF_SKELS_LINKED): $(TRUNNER_BPF_OBJS) $(BPFTOOL) | $(TRUNNER_OUTPUT)
$(Q)diff $$(@:.skel.h=.linked2.o) $$(@:.skel.h=.linked3.o)
$$(call msg,GEN-SKEL,$(TRUNNER_BINARY),$$@)
$(Q)$$(BPFTOOL) gen skeleton $$(@:.skel.h=.linked3.o) name $$(notdir $$(@:.skel.h=)) > $$@
+ $(Q)$$(BPFTOOL) gen subskeleton $$(@:.skel.h=.linked3.o) name $$(notdir $$(@:.skel.h=)) > $$(@:.skel.h=.subskel.h)
endif
# ensure we set up tests.h header generation rule just once
@@ -470,6 +479,7 @@ $(OUTPUT)/$(TRUNNER_BINARY): $(TRUNNER_TEST_OBJS) \
$$(call msg,BINARY,,$$@)
$(Q)$$(CC) $$(CFLAGS) $$(filter %.a %.o,$$^) $$(LDLIBS) -o $$@
$(Q)$(RESOLVE_BTFIDS) --btf $(TRUNNER_OUTPUT)/btf_data.o $$@
+ $(Q)ln -sf $(if $2,..,.)/tools/build/bpftool/bootstrap/bpftool $(if $2,$2/)bpftool
endef
@@ -478,7 +488,8 @@ TRUNNER_TESTS_DIR := prog_tests
TRUNNER_BPF_PROGS_DIR := progs
TRUNNER_EXTRA_SOURCES := test_progs.c cgroup_helpers.c trace_helpers.c \
network_helpers.c testing_helpers.c \
- btf_helpers.c flow_dissector_load.h
+ btf_helpers.c flow_dissector_load.h \
+ cap_helpers.c
TRUNNER_EXTRA_FILES := $(OUTPUT)/urandom_read $(OUTPUT)/bpf_testmod.ko \
ima_setup.sh \
$(wildcard progs/btf_dump_test_case_*.c)
@@ -555,7 +566,7 @@ $(OUTPUT)/bench: $(OUTPUT)/bench.o \
EXTRA_CLEAN := $(TEST_CUSTOM_PROGS) $(SCRATCH_DIR) $(HOST_SCRATCH_DIR) \
prog_tests/tests.h map_tests/tests.h verifier/tests.h \
- feature \
- $(addprefix $(OUTPUT)/,*.o *.skel.h *.lskel.h no_alu32 bpf_gcc bpf_testmod.ko)
+ feature bpftool \
+ $(addprefix $(OUTPUT)/,*.o *.skel.h *.lskel.h *.subskel.h no_alu32 bpf_gcc bpf_testmod.ko)
.PHONY: docs docs-clean
diff --git a/tools/testing/selftests/bpf/README.rst b/tools/testing/selftests/bpf/README.rst
index 42ef250c7acc..eb1b7541f39d 100644
--- a/tools/testing/selftests/bpf/README.rst
+++ b/tools/testing/selftests/bpf/README.rst
@@ -32,11 +32,19 @@ For more information about using the script, run:
$ tools/testing/selftests/bpf/vmtest.sh -h
+In case of linker errors when running selftests, try using static linking:
+
+.. code-block:: console
+
+ $ LDLIBS=-static vmtest.sh
+
+.. note:: Some distros may not support static linking.
+
.. note:: The script uses pahole and clang based on the host environment settings.
If you want to use a different pahole or llvm, adjust the `PATH` environment
variable at the beginning of the script.
-.. note:: The script currently only supports x86_64.
+.. note:: The script currently only supports x86_64 and s390x architectures.
Additional information about selftest failures is
documented here.
@@ -206,6 +214,8 @@ btf_tag test and Clang version
The btf_tag selftest requires LLVM support to recognize the btf_decl_tag and
btf_type_tag attributes. They are introduced in `Clang 14` [0_, 1_].
+The subtests ``btf_type_tag_user_{mod1, mod2, vmlinux}`` also require
+pahole version ``1.23``.
Without them, the btf_tag selftest will be skipped and you will observe:
diff --git a/tools/testing/selftests/bpf/benchs/bench_ringbufs.c b/tools/testing/selftests/bpf/benchs/bench_ringbufs.c
index da8593b3494a..c2554f9695ff 100644
--- a/tools/testing/selftests/bpf/benchs/bench_ringbufs.c
+++ b/tools/testing/selftests/bpf/benchs/bench_ringbufs.c
@@ -151,7 +151,7 @@ static struct ringbuf_bench *ringbuf_setup_skeleton(void)
/* record data + header take 16 bytes */
skel->rodata->wakeup_data_size = args.sample_rate * 16;
- bpf_map__resize(skel->maps.ringbuf, args.ringbuf_sz);
+ bpf_map__set_max_entries(skel->maps.ringbuf, args.ringbuf_sz);
if (ringbuf_bench__load(skel)) {
fprintf(stderr, "failed to load skeleton\n");
diff --git a/tools/testing/selftests/bpf/benchs/bench_trigger.c b/tools/testing/selftests/bpf/benchs/bench_trigger.c
index 7f957c55a3ca..0c481de2833d 100644
--- a/tools/testing/selftests/bpf/benchs/bench_trigger.c
+++ b/tools/testing/selftests/bpf/benchs/bench_trigger.c
@@ -154,7 +154,6 @@ static void *uprobe_producer_without_nop(void *input)
static void usetup(bool use_retprobe, bool use_nop)
{
size_t uprobe_offset;
- ssize_t base_addr;
struct bpf_link *link;
setup_libbpf();
@@ -165,11 +164,10 @@ static void usetup(bool use_retprobe, bool use_nop)
exit(1);
}
- base_addr = get_base_addr();
if (use_nop)
- uprobe_offset = get_uprobe_offset(&uprobe_target_with_nop, base_addr);
+ uprobe_offset = get_uprobe_offset(&uprobe_target_with_nop);
else
- uprobe_offset = get_uprobe_offset(&uprobe_target_without_nop, base_addr);
+ uprobe_offset = get_uprobe_offset(&uprobe_target_without_nop);
link = bpf_program__attach_uprobe(ctx.skel->progs.bench_trigger_uprobe,
use_retprobe,
diff --git a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
index df3b292a8ffe..e585e1cefc77 100644
--- a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
+++ b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
@@ -13,6 +13,10 @@
#define CREATE_TRACE_POINTS
#include "bpf_testmod-events.h"
+typedef int (*func_proto_typedef)(long);
+typedef int (*func_proto_typedef_nested1)(func_proto_typedef);
+typedef int (*func_proto_typedef_nested2)(func_proto_typedef_nested1);
+
DEFINE_PER_CPU(int, bpf_testmod_ksym_percpu) = 123;
noinline void
@@ -21,6 +25,41 @@ bpf_testmod_test_mod_kfunc(int i)
*(int *)this_cpu_ptr(&bpf_testmod_ksym_percpu) = i;
}
+struct bpf_testmod_btf_type_tag_1 {
+ int a;
+};
+
+struct bpf_testmod_btf_type_tag_2 {
+ struct bpf_testmod_btf_type_tag_1 __user *p;
+};
+
+struct bpf_testmod_btf_type_tag_3 {
+ struct bpf_testmod_btf_type_tag_1 __percpu *p;
+};
+
+noinline int
+bpf_testmod_test_btf_type_tag_user_1(struct bpf_testmod_btf_type_tag_1 __user *arg) {
+ BTF_TYPE_EMIT(func_proto_typedef);
+ BTF_TYPE_EMIT(func_proto_typedef_nested1);
+ BTF_TYPE_EMIT(func_proto_typedef_nested2);
+ return arg->a;
+}
+
+noinline int
+bpf_testmod_test_btf_type_tag_user_2(struct bpf_testmod_btf_type_tag_2 *arg) {
+ return arg->p->a;
+}
+
+noinline int
+bpf_testmod_test_btf_type_tag_percpu_1(struct bpf_testmod_btf_type_tag_1 __percpu *arg) {
+ return arg->a;
+}
+
+noinline int
+bpf_testmod_test_btf_type_tag_percpu_2(struct bpf_testmod_btf_type_tag_3 *arg) {
+ return arg->p->a;
+}
+
noinline int bpf_testmod_loop_test(int n)
{
int i, sum = 0;
@@ -109,26 +148,31 @@ static struct bin_attribute bin_attr_bpf_testmod_file __ro_after_init = {
.write = bpf_testmod_test_write,
};
-BTF_SET_START(bpf_testmod_kfunc_ids)
+BTF_SET_START(bpf_testmod_check_kfunc_ids)
BTF_ID(func, bpf_testmod_test_mod_kfunc)
-BTF_SET_END(bpf_testmod_kfunc_ids)
+BTF_SET_END(bpf_testmod_check_kfunc_ids)
+
+static const struct btf_kfunc_id_set bpf_testmod_kfunc_set = {
+ .owner = THIS_MODULE,
+ .check_set = &bpf_testmod_check_kfunc_ids,
+};
-static DEFINE_KFUNC_BTF_ID_SET(&bpf_testmod_kfunc_ids, bpf_testmod_kfunc_btf_set);
+extern int bpf_fentry_test1(int a);
static int bpf_testmod_init(void)
{
int ret;
- ret = sysfs_create_bin_file(kernel_kobj, &bin_attr_bpf_testmod_file);
- if (ret)
+ ret = register_btf_kfunc_id_set(BPF_PROG_TYPE_SCHED_CLS, &bpf_testmod_kfunc_set);
+ if (ret < 0)
return ret;
- register_kfunc_btf_id_set(&prog_test_kfunc_list, &bpf_testmod_kfunc_btf_set);
- return 0;
+ if (bpf_fentry_test1(0) < 0)
+ return -EINVAL;
+ return sysfs_create_bin_file(kernel_kobj, &bin_attr_bpf_testmod_file);
}
static void bpf_testmod_exit(void)
{
- unregister_kfunc_btf_id_set(&prog_test_kfunc_list, &bpf_testmod_kfunc_btf_set);
return sysfs_remove_bin_file(kernel_kobj, &bin_attr_bpf_testmod_file);
}
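
The module now registers its kfuncs through the btf_kfunc_id_set API instead of the removed DEFINE_KFUNC_BTF_ID_SET()/register_kfunc_btf_id_set() pair. A minimal hedged sketch for a hypothetical module exporting my_kfunc to BPF_PROG_TYPE_SCHED_CLS programs:

#include <linux/bpf.h>
#include <linux/btf.h>
#include <linux/btf_ids.h>
#include <linux/module.h>

noinline int my_kfunc(int i)
{
	return i + 1;
}

BTF_SET_START(my_check_kfunc_ids)
BTF_ID(func, my_kfunc)
BTF_SET_END(my_check_kfunc_ids)

static const struct btf_kfunc_id_set my_kfunc_set = {
	.owner = THIS_MODULE,
	.check_set = &my_check_kfunc_ids,
};

static int __init my_mod_init(void)
{
	/* Registers the set for one program type; names are hypothetical. */
	return register_btf_kfunc_id_set(BPF_PROG_TYPE_SCHED_CLS,
					 &my_kfunc_set);
}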
diff --git a/tools/testing/selftests/bpf/cap_helpers.c b/tools/testing/selftests/bpf/cap_helpers.c
new file mode 100644
index 000000000000..d5ac507401d7
--- /dev/null
+++ b/tools/testing/selftests/bpf/cap_helpers.c
@@ -0,0 +1,67 @@
+// SPDX-License-Identifier: GPL-2.0
+#include "cap_helpers.h"
+
+/* Avoid including <sys/capability.h> from the libcap-devel package;
+ * instead, declare the capget()/capset() prototypes here and use the glibc syscall wrappers.
+ */
+int capget(cap_user_header_t header, cap_user_data_t data);
+int capset(cap_user_header_t header, const cap_user_data_t data);
+
+int cap_enable_effective(__u64 caps, __u64 *old_caps)
+{
+ struct __user_cap_data_struct data[_LINUX_CAPABILITY_U32S_3];
+ struct __user_cap_header_struct hdr = {
+ .version = _LINUX_CAPABILITY_VERSION_3,
+ };
+ __u32 cap0 = caps;
+ __u32 cap1 = caps >> 32;
+ int err;
+
+ err = capget(&hdr, data);
+ if (err)
+ return err;
+
+ if (old_caps)
+ *old_caps = (__u64)(data[1].effective) << 32 | data[0].effective;
+
+ if ((data[0].effective & cap0) == cap0 &&
+ (data[1].effective & cap1) == cap1)
+ return 0;
+
+ data[0].effective |= cap0;
+ data[1].effective |= cap1;
+ err = capset(&hdr, data);
+ if (err)
+ return err;
+
+ return 0;
+}
+
+int cap_disable_effective(__u64 caps, __u64 *old_caps)
+{
+ struct __user_cap_data_struct data[_LINUX_CAPABILITY_U32S_3];
+ struct __user_cap_header_struct hdr = {
+ .version = _LINUX_CAPABILITY_VERSION_3,
+ };
+ __u32 cap0 = caps;
+ __u32 cap1 = caps >> 32;
+ int err;
+
+ err = capget(&hdr, data);
+ if (err)
+ return err;
+
+ if (old_caps)
+ *old_caps = (__u64)(data[1].effective) << 32 | data[0].effective;
+
+ if (!(data[0].effective & cap0) && !(data[1].effective & cap1))
+ return 0;
+
+ data[0].effective &= ~cap0;
+ data[1].effective &= ~cap1;
+ err = capset(&hdr, data);
+ if (err)
+ return err;
+
+ return 0;
+}
diff --git a/tools/testing/selftests/bpf/cap_helpers.h b/tools/testing/selftests/bpf/cap_helpers.h
new file mode 100644
index 000000000000..6d163530cb0f
--- /dev/null
+++ b/tools/testing/selftests/bpf/cap_helpers.h
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __CAP_HELPERS_H
+#define __CAP_HELPERS_H
+
+#include <linux/types.h>
+#include <linux/capability.h>
+
+#ifndef CAP_PERFMON
+#define CAP_PERFMON 38
+#endif
+
+#ifndef CAP_BPF
+#define CAP_BPF 39
+#endif
+
+int cap_enable_effective(__u64 caps, __u64 *old_caps);
+int cap_disable_effective(__u64 caps, __u64 *old_caps);
+
+#endif
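
Typical use in a test is a bracket pattern: drop the capabilities under test, perform the operation that must fail unprivileged, then restore. A hedged usage sketch (the function name is hypothetical):

#include "cap_helpers.h"

static int with_caps_dropped(void)
{
	__u64 caps = (1ULL << CAP_BPF) | (1ULL << CAP_PERFMON);
	__u64 old_caps;
	int err;

	err = cap_disable_effective(caps, &old_caps);
	if (err)
		return err;

	/* ... attempt the operation that must fail without CAP_BPF ... */

	/* Re-enable only the bits that were effective before. */
	return cap_enable_effective(old_caps, NULL);
}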
diff --git a/tools/testing/selftests/bpf/config b/tools/testing/selftests/bpf/config
index f6287132fa89..763db63a3890 100644
--- a/tools/testing/selftests/bpf/config
+++ b/tools/testing/selftests/bpf/config
@@ -48,3 +48,8 @@ CONFIG_IMA_READ_POLICY=y
CONFIG_BLK_DEV_LOOP=y
CONFIG_FUNCTION_TRACER=y
CONFIG_DYNAMIC_FTRACE=y
+CONFIG_NETFILTER=y
+CONFIG_NF_DEFRAG_IPV4=y
+CONFIG_NF_DEFRAG_IPV6=y
+CONFIG_NF_CONNTRACK=y
+CONFIG_USERFAULTFD=y
diff --git a/tools/testing/selftests/bpf/ima_setup.sh b/tools/testing/selftests/bpf/ima_setup.sh
index 8e62581113a3..8ecead4ccad0 100755
--- a/tools/testing/selftests/bpf/ima_setup.sh
+++ b/tools/testing/selftests/bpf/ima_setup.sh
@@ -12,7 +12,7 @@ LOG_FILE="$(mktemp /tmp/ima_setup.XXXX.log)"
usage()
{
- echo "Usage: $0 <setup|cleanup|run> <existing_tmp_dir>"
+ echo "Usage: $0 <setup|cleanup|run|modify-bin|restore-bin|load-policy> <existing_tmp_dir>"
exit 1
}
@@ -51,6 +51,7 @@ setup()
ensure_mount_securityfs
echo "measure func=BPRM_CHECK fsuuid=${mount_uuid}" > ${IMA_POLICY_FILE}
+ echo "measure func=BPRM_CHECK fsuuid=${mount_uuid}" > ${mount_dir}/policy_test
}
cleanup() {
@@ -77,6 +78,32 @@ run()
exec "${copied_bin_path}"
}
+modify_bin()
+{
+ local tmp_dir="$1"
+ local mount_dir="${tmp_dir}/mnt"
+ local copied_bin_path="${mount_dir}/$(basename ${TEST_BINARY})"
+
+ echo "mod" >> "${copied_bin_path}"
+}
+
+restore_bin()
+{
+ local tmp_dir="$1"
+ local mount_dir="${tmp_dir}/mnt"
+ local copied_bin_path="${mount_dir}/$(basename ${TEST_BINARY})"
+
+ truncate -s -4 "${copied_bin_path}"
+}
+
+load_policy()
+{
+ local tmp_dir="$1"
+ local mount_dir="${tmp_dir}/mnt"
+
+ echo ${mount_dir}/policy_test > ${IMA_POLICY_FILE} 2> /dev/null
+}
+
catch()
{
local exit_code="$1"
@@ -105,6 +132,12 @@ main()
cleanup "${tmp_dir}"
elif [[ "${action}" == "run" ]]; then
run "${tmp_dir}"
+ elif [[ "${action}" == "modify-bin" ]]; then
+ modify_bin "${tmp_dir}"
+ elif [[ "${action}" == "restore-bin" ]]; then
+ restore_bin "${tmp_dir}"
+ elif [[ "${action}" == "load-policy" ]]; then
+ load_policy "${tmp_dir}"
else
echo "Unknown action: ${action}"
exit 1
diff --git a/tools/testing/selftests/bpf/network_helpers.c b/tools/testing/selftests/bpf/network_helpers.c
index 6db1af8fdee7..2bb1f9b3841d 100644
--- a/tools/testing/selftests/bpf/network_helpers.c
+++ b/tools/testing/selftests/bpf/network_helpers.c
@@ -1,18 +1,25 @@
// SPDX-License-Identifier: GPL-2.0-only
+#define _GNU_SOURCE
+
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
+#include <sched.h>
#include <arpa/inet.h>
+#include <sys/mount.h>
+#include <sys/stat.h>
#include <linux/err.h>
#include <linux/in.h>
#include <linux/in6.h>
+#include <linux/limits.h>
#include "bpf_util.h"
#include "network_helpers.h"
+#include "test_progs.h"
#define clean_errno() (errno == 0 ? "None" : strerror(errno))
#define log_err(MSG, ...) ({ \
@@ -356,3 +363,82 @@ char *ping_command(int family)
}
return "ping";
}
+
+struct nstoken {
+ int orig_netns_fd;
+};
+
+static int setns_by_fd(int nsfd)
+{
+ int err;
+
+ err = setns(nsfd, CLONE_NEWNET);
+ close(nsfd);
+
+ if (!ASSERT_OK(err, "setns"))
+ return err;
+
+ /* Switch /sys to the new namespace so that e.g. /sys/class/net
+ * reflects the devices in the new namespace.
+ */
+ err = unshare(CLONE_NEWNS);
+ if (!ASSERT_OK(err, "unshare"))
+ return err;
+
+ /* Make our /sys mount private, so the following umount won't
+ * trigger the global umount in case it's shared.
+ */
+ err = mount("none", "/sys", NULL, MS_PRIVATE, NULL);
+ if (!ASSERT_OK(err, "remount private /sys"))
+ return err;
+
+ err = umount2("/sys", MNT_DETACH);
+ if (!ASSERT_OK(err, "umount2 /sys"))
+ return err;
+
+ err = mount("sysfs", "/sys", "sysfs", 0, NULL);
+ if (!ASSERT_OK(err, "mount /sys"))
+ return err;
+
+ err = mount("bpffs", "/sys/fs/bpf", "bpf", 0, NULL);
+ if (!ASSERT_OK(err, "mount /sys/fs/bpf"))
+ return err;
+
+ return 0;
+}
+
+struct nstoken *open_netns(const char *name)
+{
+ int nsfd;
+ char nspath[PATH_MAX];
+ int err;
+ struct nstoken *token;
+
+ token = malloc(sizeof(struct nstoken));
+ if (!ASSERT_OK_PTR(token, "malloc token"))
+ return NULL;
+
+ token->orig_netns_fd = open("/proc/self/ns/net", O_RDONLY);
+ if (!ASSERT_GE(token->orig_netns_fd, 0, "open /proc/self/ns/net"))
+ goto fail;
+
+ snprintf(nspath, sizeof(nspath), "%s/%s", "/var/run/netns", name);
+ nsfd = open(nspath, O_RDONLY | O_CLOEXEC);
+ if (!ASSERT_GE(nsfd, 0, "open netns fd"))
+ goto fail;
+
+ err = setns_by_fd(nsfd);
+ if (!ASSERT_OK(err, "setns_by_fd"))
+ goto fail;
+
+ return token;
+fail:
+ free(token);
+ return NULL;
+}
+
+void close_netns(struct nstoken *token)
+{
+ ASSERT_OK(setns_by_fd(token->orig_netns_fd), "setns_by_fd");
+ free(token);
+}
diff --git a/tools/testing/selftests/bpf/network_helpers.h b/tools/testing/selftests/bpf/network_helpers.h
index d198181a5648..a4b3b2f9877b 100644
--- a/tools/testing/selftests/bpf/network_helpers.h
+++ b/tools/testing/selftests/bpf/network_helpers.h
@@ -55,4 +55,13 @@ int make_sockaddr(int family, const char *addr_str, __u16 port,
struct sockaddr_storage *addr, socklen_t *len);
char *ping_command(int family);
+struct nstoken;
+/**
+ * open_netns() - Switch to specified network namespace by name.
+ *
+ * Returns token with which to restore the original namespace
+ * using close_netns().
+ */
+struct nstoken *open_netns(const char *name);
+void close_netns(struct nstoken *token);
#endif
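
Usage is likewise a bracket pattern: open_netns() saves the caller's namespace and switches (including the /sys remount), and close_netns() switches back. A hedged sketch, with "ns1" a hypothetical namespace created beforehand (e.g. via ip netns add):

#include "network_helpers.h"

static void run_in_ns1(void)
{
	struct nstoken *tok;

	tok = open_netns("ns1");	/* expects /var/run/netns/ns1 */
	if (!tok)
		return;

	/* Sockets created here live in ns1; /sys/class/net shows ns1's
	 * devices thanks to the sysfs remount in setns_by_fd().
	 */

	close_netns(tok);		/* restore the original netns */
}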
diff --git a/tools/testing/selftests/bpf/prog_tests/align.c b/tools/testing/selftests/bpf/prog_tests/align.c
index 0ee29e11eaee..970f09156eb4 100644
--- a/tools/testing/selftests/bpf/prog_tests/align.c
+++ b/tools/testing/selftests/bpf/prog_tests/align.c
@@ -39,13 +39,13 @@ static struct bpf_align_test tests[] = {
},
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.matches = {
- {0, "R1=ctx(id=0,off=0,imm=0)"},
+ {0, "R1=ctx(off=0,imm=0)"},
{0, "R10=fp0"},
- {0, "R3_w=inv2"},
- {1, "R3_w=inv4"},
- {2, "R3_w=inv8"},
- {3, "R3_w=inv16"},
- {4, "R3_w=inv32"},
+ {0, "R3_w=2"},
+ {1, "R3_w=4"},
+ {2, "R3_w=8"},
+ {3, "R3_w=16"},
+ {4, "R3_w=32"},
},
},
{
@@ -67,19 +67,19 @@ static struct bpf_align_test tests[] = {
},
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.matches = {
- {0, "R1=ctx(id=0,off=0,imm=0)"},
+ {0, "R1=ctx(off=0,imm=0)"},
{0, "R10=fp0"},
- {0, "R3_w=inv1"},
- {1, "R3_w=inv2"},
- {2, "R3_w=inv4"},
- {3, "R3_w=inv8"},
- {4, "R3_w=inv16"},
- {5, "R3_w=inv1"},
- {6, "R4_w=inv32"},
- {7, "R4_w=inv16"},
- {8, "R4_w=inv8"},
- {9, "R4_w=inv4"},
- {10, "R4_w=inv2"},
+ {0, "R3_w=1"},
+ {1, "R3_w=2"},
+ {2, "R3_w=4"},
+ {3, "R3_w=8"},
+ {4, "R3_w=16"},
+ {5, "R3_w=1"},
+ {6, "R4_w=32"},
+ {7, "R4_w=16"},
+ {8, "R4_w=8"},
+ {9, "R4_w=4"},
+ {10, "R4_w=2"},
},
},
{
@@ -96,14 +96,14 @@ static struct bpf_align_test tests[] = {
},
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.matches = {
- {0, "R1=ctx(id=0,off=0,imm=0)"},
+ {0, "R1=ctx(off=0,imm=0)"},
{0, "R10=fp0"},
- {0, "R3_w=inv4"},
- {1, "R3_w=inv8"},
- {2, "R3_w=inv10"},
- {3, "R4_w=inv8"},
- {4, "R4_w=inv12"},
- {5, "R4_w=inv14"},
+ {0, "R3_w=4"},
+ {1, "R3_w=8"},
+ {2, "R3_w=10"},
+ {3, "R4_w=8"},
+ {4, "R4_w=12"},
+ {5, "R4_w=14"},
},
},
{
@@ -118,12 +118,12 @@ static struct bpf_align_test tests[] = {
},
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.matches = {
- {0, "R1=ctx(id=0,off=0,imm=0)"},
+ {0, "R1=ctx(off=0,imm=0)"},
{0, "R10=fp0"},
- {0, "R3_w=inv7"},
- {1, "R3_w=inv7"},
- {2, "R3_w=inv14"},
- {3, "R3_w=inv56"},
+ {0, "R3_w=7"},
+ {1, "R3_w=7"},
+ {2, "R3_w=14"},
+ {3, "R3_w=56"},
},
},
@@ -161,19 +161,19 @@ static struct bpf_align_test tests[] = {
},
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.matches = {
- {6, "R0_w=pkt(id=0,off=8,r=8,imm=0)"},
- {6, "R3_w=inv(id=0,umax_value=255,var_off=(0x0; 0xff))"},
- {7, "R3_w=inv(id=0,umax_value=510,var_off=(0x0; 0x1fe))"},
- {8, "R3_w=inv(id=0,umax_value=1020,var_off=(0x0; 0x3fc))"},
- {9, "R3_w=inv(id=0,umax_value=2040,var_off=(0x0; 0x7f8))"},
- {10, "R3_w=inv(id=0,umax_value=4080,var_off=(0x0; 0xff0))"},
- {12, "R3_w=pkt_end(id=0,off=0,imm=0)"},
- {17, "R4_w=inv(id=0,umax_value=255,var_off=(0x0; 0xff))"},
- {18, "R4_w=inv(id=0,umax_value=8160,var_off=(0x0; 0x1fe0))"},
- {19, "R4_w=inv(id=0,umax_value=4080,var_off=(0x0; 0xff0))"},
- {20, "R4_w=inv(id=0,umax_value=2040,var_off=(0x0; 0x7f8))"},
- {21, "R4_w=inv(id=0,umax_value=1020,var_off=(0x0; 0x3fc))"},
- {22, "R4_w=inv(id=0,umax_value=510,var_off=(0x0; 0x1fe))"},
+ {6, "R0_w=pkt(off=8,r=8,imm=0)"},
+ {6, "R3_w=scalar(umax=255,var_off=(0x0; 0xff))"},
+ {7, "R3_w=scalar(umax=510,var_off=(0x0; 0x1fe))"},
+ {8, "R3_w=scalar(umax=1020,var_off=(0x0; 0x3fc))"},
+ {9, "R3_w=scalar(umax=2040,var_off=(0x0; 0x7f8))"},
+ {10, "R3_w=scalar(umax=4080,var_off=(0x0; 0xff0))"},
+ {12, "R3_w=pkt_end(off=0,imm=0)"},
+ {17, "R4_w=scalar(umax=255,var_off=(0x0; 0xff))"},
+ {18, "R4_w=scalar(umax=8160,var_off=(0x0; 0x1fe0))"},
+ {19, "R4_w=scalar(umax=4080,var_off=(0x0; 0xff0))"},
+ {20, "R4_w=scalar(umax=2040,var_off=(0x0; 0x7f8))"},
+ {21, "R4_w=scalar(umax=1020,var_off=(0x0; 0x3fc))"},
+ {22, "R4_w=scalar(umax=510,var_off=(0x0; 0x1fe))"},
},
},
{
@@ -194,16 +194,16 @@ static struct bpf_align_test tests[] = {
},
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.matches = {
- {6, "R3_w=inv(id=0,umax_value=255,var_off=(0x0; 0xff))"},
- {7, "R4_w=inv(id=1,umax_value=255,var_off=(0x0; 0xff))"},
- {8, "R4_w=inv(id=0,umax_value=255,var_off=(0x0; 0xff))"},
- {9, "R4_w=inv(id=1,umax_value=255,var_off=(0x0; 0xff))"},
- {10, "R4_w=inv(id=0,umax_value=510,var_off=(0x0; 0x1fe))"},
- {11, "R4_w=inv(id=1,umax_value=255,var_off=(0x0; 0xff))"},
- {12, "R4_w=inv(id=0,umax_value=1020,var_off=(0x0; 0x3fc))"},
- {13, "R4_w=inv(id=1,umax_value=255,var_off=(0x0; 0xff))"},
- {14, "R4_w=inv(id=0,umax_value=2040,var_off=(0x0; 0x7f8))"},
- {15, "R4_w=inv(id=0,umax_value=4080,var_off=(0x0; 0xff0))"},
+ {6, "R3_w=scalar(umax=255,var_off=(0x0; 0xff))"},
+ {7, "R4_w=scalar(id=1,umax=255,var_off=(0x0; 0xff))"},
+ {8, "R4_w=scalar(umax=255,var_off=(0x0; 0xff))"},
+ {9, "R4_w=scalar(id=1,umax=255,var_off=(0x0; 0xff))"},
+ {10, "R4_w=scalar(umax=510,var_off=(0x0; 0x1fe))"},
+ {11, "R4_w=scalar(id=1,umax=255,var_off=(0x0; 0xff))"},
+ {12, "R4_w=scalar(umax=1020,var_off=(0x0; 0x3fc))"},
+ {13, "R4_w=scalar(id=1,umax=255,var_off=(0x0; 0xff))"},
+ {14, "R4_w=scalar(umax=2040,var_off=(0x0; 0x7f8))"},
+ {15, "R4_w=scalar(umax=4080,var_off=(0x0; 0xff0))"},
},
},
{
@@ -234,14 +234,14 @@ static struct bpf_align_test tests[] = {
},
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.matches = {
- {2, "R5_w=pkt(id=0,off=0,r=0,imm=0)"},
- {4, "R5_w=pkt(id=0,off=14,r=0,imm=0)"},
- {5, "R4_w=pkt(id=0,off=14,r=0,imm=0)"},
- {9, "R2=pkt(id=0,off=0,r=18,imm=0)"},
- {10, "R5=pkt(id=0,off=14,r=18,imm=0)"},
- {10, "R4_w=inv(id=0,umax_value=255,var_off=(0x0; 0xff))"},
- {13, "R4_w=inv(id=0,umax_value=65535,var_off=(0x0; 0xffff))"},
- {14, "R4_w=inv(id=0,umax_value=65535,var_off=(0x0; 0xffff))"},
+ {2, "R5_w=pkt(off=0,r=0,imm=0)"},
+ {4, "R5_w=pkt(off=14,r=0,imm=0)"},
+ {5, "R4_w=pkt(off=14,r=0,imm=0)"},
+ {9, "R2=pkt(off=0,r=18,imm=0)"},
+ {10, "R5=pkt(off=14,r=18,imm=0)"},
+ {10, "R4_w=scalar(umax=255,var_off=(0x0; 0xff))"},
+ {13, "R4_w=scalar(umax=65535,var_off=(0x0; 0xffff))"},
+ {14, "R4_w=scalar(umax=65535,var_off=(0x0; 0xffff))"},
},
},
{
@@ -296,59 +296,59 @@ static struct bpf_align_test tests[] = {
/* Calculated offset in R6 has unknown value, but known
* alignment of 4.
*/
- {6, "R2_w=pkt(id=0,off=0,r=8,imm=0)"},
- {7, "R6_w=inv(id=0,umax_value=1020,var_off=(0x0; 0x3fc))"},
+ {6, "R2_w=pkt(off=0,r=8,imm=0)"},
+ {7, "R6_w=scalar(umax=1020,var_off=(0x0; 0x3fc))"},
/* Offset is added to packet pointer R5, resulting in
* known fixed offset, and variable offset from R6.
*/
- {11, "R5_w=pkt(id=1,off=14,r=0,umax_value=1020,var_off=(0x0; 0x3fc))"},
+ {11, "R5_w=pkt(id=1,off=14,r=0,umax=1020,var_off=(0x0; 0x3fc))"},
/* At the time the word size load is performed from R5,
* its total offset is NET_IP_ALIGN + reg->off (0) +
* reg->aux_off (14) which is 16. Then the variable
* offset is considered using reg->aux_off_align which
* is 4 and meets the load's requirements.
*/
- {15, "R4=pkt(id=1,off=18,r=18,umax_value=1020,var_off=(0x0; 0x3fc))"},
- {15, "R5=pkt(id=1,off=14,r=18,umax_value=1020,var_off=(0x0; 0x3fc))"},
+ {15, "R4=pkt(id=1,off=18,r=18,umax=1020,var_off=(0x0; 0x3fc))"},
+ {15, "R5=pkt(id=1,off=14,r=18,umax=1020,var_off=(0x0; 0x3fc))"},
/* Variable offset is added to R5 packet pointer,
* resulting in auxiliary alignment of 4.
*/
- {17, "R5_w=pkt(id=2,off=0,r=0,umax_value=1020,var_off=(0x0; 0x3fc))"},
+ {17, "R5_w=pkt(id=2,off=0,r=0,umax=1020,var_off=(0x0; 0x3fc))"},
/* Constant offset is added to R5, resulting in
* reg->off of 14.
*/
- {18, "R5_w=pkt(id=2,off=14,r=0,umax_value=1020,var_off=(0x0; 0x3fc))"},
+ {18, "R5_w=pkt(id=2,off=14,r=0,umax=1020,var_off=(0x0; 0x3fc))"},
/* At the time the word size load is performed from R5,
* its total fixed offset is NET_IP_ALIGN + reg->off
* (14) which is 16. Then the variable offset is 4-byte
* aligned, so the total offset is 4-byte aligned and
* meets the load's requirements.
*/
- {23, "R4=pkt(id=2,off=18,r=18,umax_value=1020,var_off=(0x0; 0x3fc))"},
- {23, "R5=pkt(id=2,off=14,r=18,umax_value=1020,var_off=(0x0; 0x3fc))"},
+ {23, "R4=pkt(id=2,off=18,r=18,umax=1020,var_off=(0x0; 0x3fc))"},
+ {23, "R5=pkt(id=2,off=14,r=18,umax=1020,var_off=(0x0; 0x3fc))"},
/* Constant offset is added to R5 packet pointer,
* resulting in reg->off value of 14.
*/
- {25, "R5_w=pkt(id=0,off=14,r=8"},
+ {25, "R5_w=pkt(off=14,r=8"},
/* Variable offset is added to R5, resulting in a
* variable offset of (4n).
*/
- {26, "R5_w=pkt(id=3,off=14,r=0,umax_value=1020,var_off=(0x0; 0x3fc))"},
+ {26, "R5_w=pkt(id=3,off=14,r=0,umax=1020,var_off=(0x0; 0x3fc))"},
/* Constant is added to R5 again, setting reg->off to 18. */
- {27, "R5_w=pkt(id=3,off=18,r=0,umax_value=1020,var_off=(0x0; 0x3fc))"},
+ {27, "R5_w=pkt(id=3,off=18,r=0,umax=1020,var_off=(0x0; 0x3fc))"},
/* And once more we add a variable; resulting var_off
* is still (4n), fixed offset is not changed.
* Also, we create a new reg->id.
*/
- {28, "R5_w=pkt(id=4,off=18,r=0,umax_value=2040,var_off=(0x0; 0x7fc)"},
+ {28, "R5_w=pkt(id=4,off=18,r=0,umax=2040,var_off=(0x0; 0x7fc)"},
/* At the time the word size load is performed from R5,
* its total fixed offset is NET_IP_ALIGN + reg->off (18)
* which is 20. Then the variable offset is (4n), so
* the total offset is 4-byte aligned and meets the
* load's requirements.
*/
- {33, "R4=pkt(id=4,off=22,r=22,umax_value=2040,var_off=(0x0; 0x7fc)"},
- {33, "R5=pkt(id=4,off=18,r=22,umax_value=2040,var_off=(0x0; 0x7fc)"},
+ {33, "R4=pkt(id=4,off=22,r=22,umax=2040,var_off=(0x0; 0x7fc)"},
+ {33, "R5=pkt(id=4,off=18,r=22,umax=2040,var_off=(0x0; 0x7fc)"},
},
},
{
@@ -386,36 +386,36 @@ static struct bpf_align_test tests[] = {
/* Calculated offset in R6 has unknown value, but known
* alignment of 4.
*/
- {6, "R2_w=pkt(id=0,off=0,r=8,imm=0)"},
- {7, "R6_w=inv(id=0,umax_value=1020,var_off=(0x0; 0x3fc))"},
+ {6, "R2_w=pkt(off=0,r=8,imm=0)"},
+ {7, "R6_w=scalar(umax=1020,var_off=(0x0; 0x3fc))"},
/* Adding 14 makes R6 be (4n+2) */
- {8, "R6_w=inv(id=0,umin_value=14,umax_value=1034,var_off=(0x2; 0x7fc))"},
+ {8, "R6_w=scalar(umin=14,umax=1034,var_off=(0x2; 0x7fc))"},
/* Packet pointer has (4n+2) offset */
- {11, "R5_w=pkt(id=1,off=0,r=0,umin_value=14,umax_value=1034,var_off=(0x2; 0x7fc)"},
- {12, "R4=pkt(id=1,off=4,r=0,umin_value=14,umax_value=1034,var_off=(0x2; 0x7fc)"},
+ {11, "R5_w=pkt(id=1,off=0,r=0,umin=14,umax=1034,var_off=(0x2; 0x7fc)"},
+ {12, "R4=pkt(id=1,off=4,r=0,umin=14,umax=1034,var_off=(0x2; 0x7fc)"},
/* At the time the word size load is performed from R5,
* its total fixed offset is NET_IP_ALIGN + reg->off (0)
* which is 2. Then the variable offset is (4n+2), so
* the total offset is 4-byte aligned and meets the
* load's requirements.
*/
- {15, "R5=pkt(id=1,off=0,r=4,umin_value=14,umax_value=1034,var_off=(0x2; 0x7fc)"},
+ {15, "R5=pkt(id=1,off=0,r=4,umin=14,umax=1034,var_off=(0x2; 0x7fc)"},
/* Newly read value in R6 was shifted left by 2, so has
* known alignment of 4.
*/
- {17, "R6_w=inv(id=0,umax_value=1020,var_off=(0x0; 0x3fc))"},
+ {17, "R6_w=scalar(umax=1020,var_off=(0x0; 0x3fc))"},
/* Added (4n) to packet pointer's (4n+2) var_off, giving
* another (4n+2).
*/
- {19, "R5_w=pkt(id=2,off=0,r=0,umin_value=14,umax_value=2054,var_off=(0x2; 0xffc)"},
- {20, "R4=pkt(id=2,off=4,r=0,umin_value=14,umax_value=2054,var_off=(0x2; 0xffc)"},
+ {19, "R5_w=pkt(id=2,off=0,r=0,umin=14,umax=2054,var_off=(0x2; 0xffc)"},
+ {20, "R4=pkt(id=2,off=4,r=0,umin=14,umax=2054,var_off=(0x2; 0xffc)"},
/* At the time the word size load is performed from R5,
* its total fixed offset is NET_IP_ALIGN + reg->off (0)
* which is 2. Then the variable offset is (4n+2), so
* the total offset is 4-byte aligned and meets the
* load's requirements.
*/
- {23, "R5=pkt(id=2,off=0,r=4,umin_value=14,umax_value=2054,var_off=(0x2; 0xffc)"},
+ {23, "R5=pkt(id=2,off=0,r=4,umin=14,umax=2054,var_off=(0x2; 0xffc)"},
},
},
{
@@ -448,18 +448,18 @@ static struct bpf_align_test tests[] = {
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.result = REJECT,
.matches = {
- {3, "R5_w=pkt_end(id=0,off=0,imm=0)"},
+ {3, "R5_w=pkt_end(off=0,imm=0)"},
/* (ptr - ptr) << 2 == unknown, (4n) */
- {5, "R5_w=inv(id=0,smax_value=9223372036854775804,umax_value=18446744073709551612,var_off=(0x0; 0xfffffffffffffffc)"},
+ {5, "R5_w=scalar(smax=9223372036854775804,umax=18446744073709551612,var_off=(0x0; 0xfffffffffffffffc)"},
/* (4n) + 14 == (4n+2). We blow our bounds, because
* the add could overflow.
*/
- {6, "R5_w=inv(id=0,smin_value=-9223372036854775806,smax_value=9223372036854775806,umin_value=2,umax_value=18446744073709551614,var_off=(0x2; 0xfffffffffffffffc)"},
+ {6, "R5_w=scalar(smin=-9223372036854775806,smax=9223372036854775806,umin=2,umax=18446744073709551614,var_off=(0x2; 0xfffffffffffffffc)"},
/* Checked s>=0 */
- {9, "R5=inv(id=0,umin_value=2,umax_value=9223372036854775806,var_off=(0x2; 0x7ffffffffffffffc)"},
+ {9, "R5=scalar(umin=2,umax=9223372036854775806,var_off=(0x2; 0x7ffffffffffffffc)"},
/* packet pointer + nonnegative (4n+2) */
- {11, "R6_w=pkt(id=1,off=0,r=0,umin_value=2,umax_value=9223372036854775806,var_off=(0x2; 0x7ffffffffffffffc)"},
- {12, "R4_w=pkt(id=1,off=4,r=0,umin_value=2,umax_value=9223372036854775806,var_off=(0x2; 0x7ffffffffffffffc)"},
+ {11, "R6_w=pkt(id=1,off=0,r=0,umin=2,umax=9223372036854775806,var_off=(0x2; 0x7ffffffffffffffc)"},
+ {12, "R4_w=pkt(id=1,off=4,r=0,umin=2,umax=9223372036854775806,var_off=(0x2; 0x7ffffffffffffffc)"},
/* NET_IP_ALIGN + (4n+2) == (4n), alignment is fine.
* We checked the bounds, but it might have been able
* to overflow if the packet pointer started in the
@@ -467,7 +467,7 @@ static struct bpf_align_test tests[] = {
* So we did not get a 'range' on R6, and the access
* attempt will fail.
*/
- {15, "R6_w=pkt(id=1,off=0,r=0,umin_value=2,umax_value=9223372036854775806,var_off=(0x2; 0x7ffffffffffffffc)"},
+ {15, "R6_w=pkt(id=1,off=0,r=0,umin=2,umax=9223372036854775806,var_off=(0x2; 0x7ffffffffffffffc)"},
}
},
{
@@ -502,23 +502,23 @@ static struct bpf_align_test tests[] = {
/* Calculated offset in R6 has unknown value, but known
* alignment of 4.
*/
- {6, "R2_w=pkt(id=0,off=0,r=8,imm=0)"},
- {8, "R6_w=inv(id=0,umax_value=1020,var_off=(0x0; 0x3fc))"},
+ {6, "R2_w=pkt(off=0,r=8,imm=0)"},
+ {8, "R6_w=scalar(umax=1020,var_off=(0x0; 0x3fc))"},
/* Adding 14 makes R6 be (4n+2) */
- {9, "R6_w=inv(id=0,umin_value=14,umax_value=1034,var_off=(0x2; 0x7fc))"},
+ {9, "R6_w=scalar(umin=14,umax=1034,var_off=(0x2; 0x7fc))"},
/* New unknown value in R7 is (4n) */
- {10, "R7_w=inv(id=0,umax_value=1020,var_off=(0x0; 0x3fc))"},
+ {10, "R7_w=scalar(umax=1020,var_off=(0x0; 0x3fc))"},
/* Subtracting it from R6 blows our unsigned bounds */
- {11, "R6=inv(id=0,smin_value=-1006,smax_value=1034,umin_value=2,umax_value=18446744073709551614,var_off=(0x2; 0xfffffffffffffffc)"},
+ {11, "R6=scalar(smin=-1006,smax=1034,umin=2,umax=18446744073709551614,var_off=(0x2; 0xfffffffffffffffc)"},
/* Checked s>= 0 */
- {14, "R6=inv(id=0,umin_value=2,umax_value=1034,var_off=(0x2; 0x7fc))"},
+ {14, "R6=scalar(umin=2,umax=1034,var_off=(0x2; 0x7fc))"},
/* At the time the word size load is performed from R5,
* its total fixed offset is NET_IP_ALIGN + reg->off (0)
* which is 2. Then the variable offset is (4n+2), so
* the total offset is 4-byte aligned and meets the
* load's requirements.
*/
- {20, "R5=pkt(id=2,off=0,r=4,umin_value=2,umax_value=1034,var_off=(0x2; 0x7fc)"},
+ {20, "R5=pkt(id=2,off=0,r=4,umin=2,umax=1034,var_off=(0x2; 0x7fc)"},
},
},
@@ -556,23 +556,23 @@ static struct bpf_align_test tests[] = {
/* Calculated offset in R6 has unknown value, but known
* alignment of 4.
*/
- {6, "R2_w=pkt(id=0,off=0,r=8,imm=0)"},
- {9, "R6_w=inv(id=0,umax_value=60,var_off=(0x0; 0x3c))"},
+ {6, "R2_w=pkt(off=0,r=8,imm=0)"},
+ {9, "R6_w=scalar(umax=60,var_off=(0x0; 0x3c))"},
/* Adding 14 makes R6 be (4n+2) */
- {10, "R6_w=inv(id=0,umin_value=14,umax_value=74,var_off=(0x2; 0x7c))"},
+ {10, "R6_w=scalar(umin=14,umax=74,var_off=(0x2; 0x7c))"},
/* Subtracting from packet pointer overflows ubounds */
- {13, "R5_w=pkt(id=2,off=0,r=8,umin_value=18446744073709551542,umax_value=18446744073709551602,var_off=(0xffffffffffffff82; 0x7c)"},
+ {13, "R5_w=pkt(id=2,off=0,r=8,umin=18446744073709551542,umax=18446744073709551602,var_off=(0xffffffffffffff82; 0x7c)"},
/* New unknown value in R7 is (4n), >= 76 */
- {14, "R7_w=inv(id=0,umin_value=76,umax_value=1096,var_off=(0x0; 0x7fc))"},
+ {14, "R7_w=scalar(umin=76,umax=1096,var_off=(0x0; 0x7fc))"},
/* Adding it to packet pointer gives nice bounds again */
- {16, "R5_w=pkt(id=3,off=0,r=0,umin_value=2,umax_value=1082,var_off=(0x2; 0xfffffffc)"},
+ {16, "R5_w=pkt(id=3,off=0,r=0,umin=2,umax=1082,var_off=(0x2; 0xfffffffc)"},
/* At the time the word size load is performed from R5,
* its total fixed offset is NET_IP_ALIGN + reg->off (0)
* which is 2. Then the variable offset is (4n+2), so
* the total offset is 4-byte aligned and meets the
* load's requirements.
*/
- {20, "R5=pkt(id=3,off=0,r=4,umin_value=2,umax_value=1082,var_off=(0x2; 0xfffffffc)"},
+ {20, "R5=pkt(id=3,off=0,r=4,umin=2,umax=1082,var_off=(0x2; 0xfffffffc)"},
},
},
};
@@ -648,8 +648,8 @@ static int do_test_single(struct bpf_align_test *test)
/* Check the next line as well in case the previous line
* did not have a corresponding bpf insn. Example:
* func#0 @0
- * 0: R1=ctx(id=0,off=0,imm=0) R10=fp0
- * 0: (b7) r3 = 2 ; R3_w=inv2
+ * 0: R1=ctx(off=0,imm=0) R10=fp0
+ * 0: (b7) r3 = 2 ; R3_w=2
*/
if (!strstr(line_ptr, m.match)) {
cur_line = -1;
diff --git a/tools/testing/selftests/bpf/prog_tests/atomics.c b/tools/testing/selftests/bpf/prog_tests/atomics.c
index 86b7d5d84eec..13e101f370a1 100644
--- a/tools/testing/selftests/bpf/prog_tests/atomics.c
+++ b/tools/testing/selftests/bpf/prog_tests/atomics.c
@@ -7,19 +7,15 @@
static void test_add(struct atomics_lskel *skel)
{
int err, prog_fd;
- __u32 duration = 0, retval;
- int link_fd;
-
- link_fd = atomics_lskel__add__attach(skel);
- if (!ASSERT_GT(link_fd, 0, "attach(add)"))
- return;
+ LIBBPF_OPTS(bpf_test_run_opts, topts);
+ /* No need to attach it, just run it directly */
prog_fd = skel->progs.add.prog_fd;
- err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
- NULL, NULL, &retval, &duration);
- if (CHECK(err || retval, "test_run add",
- "err %d errno %d retval %d duration %d\n", err, errno, retval, duration))
- goto cleanup;
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ if (!ASSERT_OK(err, "test_run_opts err"))
+ return;
+ if (!ASSERT_OK(topts.retval, "test_run_opts retval"))
+ return;
ASSERT_EQ(skel->data->add64_value, 3, "add64_value");
ASSERT_EQ(skel->bss->add64_result, 1, "add64_result");
@@ -31,28 +27,20 @@ static void test_add(struct atomics_lskel *skel)
ASSERT_EQ(skel->bss->add_stack_result, 1, "add_stack_result");
ASSERT_EQ(skel->data->add_noreturn_value, 3, "add_noreturn_value");
-
-cleanup:
- close(link_fd);
}
static void test_sub(struct atomics_lskel *skel)
{
int err, prog_fd;
- __u32 duration = 0, retval;
- int link_fd;
-
- link_fd = atomics_lskel__sub__attach(skel);
- if (!ASSERT_GT(link_fd, 0, "attach(sub)"))
- return;
+ LIBBPF_OPTS(bpf_test_run_opts, topts);
+ /* No need to attach it, just run it directly */
prog_fd = skel->progs.sub.prog_fd;
- err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
- NULL, NULL, &retval, &duration);
- if (CHECK(err || retval, "test_run sub",
- "err %d errno %d retval %d duration %d\n",
- err, errno, retval, duration))
- goto cleanup;
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ if (!ASSERT_OK(err, "test_run_opts err"))
+ return;
+ if (!ASSERT_OK(topts.retval, "test_run_opts retval"))
+ return;
ASSERT_EQ(skel->data->sub64_value, -1, "sub64_value");
ASSERT_EQ(skel->bss->sub64_result, 1, "sub64_result");
@@ -64,27 +52,20 @@ static void test_sub(struct atomics_lskel *skel)
ASSERT_EQ(skel->bss->sub_stack_result, 1, "sub_stack_result");
ASSERT_EQ(skel->data->sub_noreturn_value, -1, "sub_noreturn_value");
-
-cleanup:
- close(link_fd);
}
static void test_and(struct atomics_lskel *skel)
{
int err, prog_fd;
- __u32 duration = 0, retval;
- int link_fd;
-
- link_fd = atomics_lskel__and__attach(skel);
- if (!ASSERT_GT(link_fd, 0, "attach(and)"))
- return;
+ LIBBPF_OPTS(bpf_test_run_opts, topts);
+ /* No need to attach it, just run it directly */
prog_fd = skel->progs.and.prog_fd;
- err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
- NULL, NULL, &retval, &duration);
- if (CHECK(err || retval, "test_run and",
- "err %d errno %d retval %d duration %d\n", err, errno, retval, duration))
- goto cleanup;
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ if (!ASSERT_OK(err, "test_run_opts err"))
+ return;
+ if (!ASSERT_OK(topts.retval, "test_run_opts retval"))
+ return;
ASSERT_EQ(skel->data->and64_value, 0x010ull << 32, "and64_value");
ASSERT_EQ(skel->bss->and64_result, 0x110ull << 32, "and64_result");
@@ -93,27 +74,20 @@ static void test_and(struct atomics_lskel *skel)
ASSERT_EQ(skel->bss->and32_result, 0x110, "and32_result");
ASSERT_EQ(skel->data->and_noreturn_value, 0x010ull << 32, "and_noreturn_value");
-cleanup:
- close(link_fd);
}
static void test_or(struct atomics_lskel *skel)
{
int err, prog_fd;
- __u32 duration = 0, retval;
- int link_fd;
-
- link_fd = atomics_lskel__or__attach(skel);
- if (!ASSERT_GT(link_fd, 0, "attach(or)"))
- return;
+ LIBBPF_OPTS(bpf_test_run_opts, topts);
+ /* No need to attach it, just run it directly */
prog_fd = skel->progs.or.prog_fd;
- err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
- NULL, NULL, &retval, &duration);
- if (CHECK(err || retval, "test_run or",
- "err %d errno %d retval %d duration %d\n",
- err, errno, retval, duration))
- goto cleanup;
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ if (!ASSERT_OK(err, "test_run_opts err"))
+ return;
+ if (!ASSERT_OK(topts.retval, "test_run_opts retval"))
+ return;
ASSERT_EQ(skel->data->or64_value, 0x111ull << 32, "or64_value");
ASSERT_EQ(skel->bss->or64_result, 0x110ull << 32, "or64_result");
@@ -122,26 +96,20 @@ static void test_or(struct atomics_lskel *skel)
ASSERT_EQ(skel->bss->or32_result, 0x110, "or32_result");
ASSERT_EQ(skel->data->or_noreturn_value, 0x111ull << 32, "or_noreturn_value");
-cleanup:
- close(link_fd);
}
static void test_xor(struct atomics_lskel *skel)
{
int err, prog_fd;
- __u32 duration = 0, retval;
- int link_fd;
-
- link_fd = atomics_lskel__xor__attach(skel);
- if (!ASSERT_GT(link_fd, 0, "attach(xor)"))
- return;
+ LIBBPF_OPTS(bpf_test_run_opts, topts);
+ /* No need to attach it, just run it directly */
prog_fd = skel->progs.xor.prog_fd;
- err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
- NULL, NULL, &retval, &duration);
- if (CHECK(err || retval, "test_run xor",
- "err %d errno %d retval %d duration %d\n", err, errno, retval, duration))
- goto cleanup;
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ if (!ASSERT_OK(err, "test_run_opts err"))
+ return;
+ if (!ASSERT_OK(topts.retval, "test_run_opts retval"))
+ return;
ASSERT_EQ(skel->data->xor64_value, 0x101ull << 32, "xor64_value");
ASSERT_EQ(skel->bss->xor64_result, 0x110ull << 32, "xor64_result");
@@ -150,26 +118,20 @@ static void test_xor(struct atomics_lskel *skel)
ASSERT_EQ(skel->bss->xor32_result, 0x110, "xor32_result");
ASSERT_EQ(skel->data->xor_noreturn_value, 0x101ull << 32, "xor_noreturn_value");
-cleanup:
- close(link_fd);
}
static void test_cmpxchg(struct atomics_lskel *skel)
{
int err, prog_fd;
- __u32 duration = 0, retval;
- int link_fd;
-
- link_fd = atomics_lskel__cmpxchg__attach(skel);
- if (!ASSERT_GT(link_fd, 0, "attach(cmpxchg)"))
- return;
+ LIBBPF_OPTS(bpf_test_run_opts, topts);
+ /* No need to attach it, just run it directly */
prog_fd = skel->progs.cmpxchg.prog_fd;
- err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
- NULL, NULL, &retval, &duration);
- if (CHECK(err || retval, "test_run cmpxchg",
- "err %d errno %d retval %d duration %d\n", err, errno, retval, duration))
- goto cleanup;
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ if (!ASSERT_OK(err, "test_run_opts err"))
+ return;
+ if (!ASSERT_OK(topts.retval, "test_run_opts retval"))
+ return;
ASSERT_EQ(skel->data->cmpxchg64_value, 2, "cmpxchg64_value");
ASSERT_EQ(skel->bss->cmpxchg64_result_fail, 1, "cmpxchg_result_fail");
@@ -178,45 +140,34 @@ static void test_cmpxchg(struct atomics_lskel *skel)
ASSERT_EQ(skel->data->cmpxchg32_value, 2, "cmpxchg32_value");
ASSERT_EQ(skel->bss->cmpxchg32_result_fail, 1, "cmpxchg_result_fail");
ASSERT_EQ(skel->bss->cmpxchg32_result_succeed, 1, "cmpxchg_result_succeed");
-
-cleanup:
- close(link_fd);
}
static void test_xchg(struct atomics_lskel *skel)
{
int err, prog_fd;
- __u32 duration = 0, retval;
- int link_fd;
-
- link_fd = atomics_lskel__xchg__attach(skel);
- if (!ASSERT_GT(link_fd, 0, "attach(xchg)"))
- return;
+ LIBBPF_OPTS(bpf_test_run_opts, topts);
+ /* No need to attach it, just run it directly */
prog_fd = skel->progs.xchg.prog_fd;
- err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
- NULL, NULL, &retval, &duration);
- if (CHECK(err || retval, "test_run xchg",
- "err %d errno %d retval %d duration %d\n", err, errno, retval, duration))
- goto cleanup;
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ if (!ASSERT_OK(err, "test_run_opts err"))
+ return;
+ if (!ASSERT_OK(topts.retval, "test_run_opts retval"))
+ return;
ASSERT_EQ(skel->data->xchg64_value, 2, "xchg64_value");
ASSERT_EQ(skel->bss->xchg64_result, 1, "xchg64_result");
ASSERT_EQ(skel->data->xchg32_value, 2, "xchg32_value");
ASSERT_EQ(skel->bss->xchg32_result, 1, "xchg32_result");
-
-cleanup:
- close(link_fd);
}
void test_atomics(void)
{
struct atomics_lskel *skel;
- __u32 duration = 0;
skel = atomics_lskel__open_and_load();
- if (CHECK(!skel, "skel_load", "atomics skeleton failed\n"))
+ if (!ASSERT_OK_PTR(skel, "atomics skeleton load"))
return;
if (skel->data->skip_tests) {
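
The atomics conversion above is representative of the tree-wide move in this series from the deprecated bpf_prog_test_run() to bpf_prog_test_run_opts(), which bundles the old positional in/out arguments into one extensible options struct. A minimal sketch of the new calling convention (run_once() is an illustrative name, not part of the selftests):

#include <bpf/bpf.h>
#include <bpf/libbpf.h>

/* Run a loaded BPF program once and report its return code. */
static int run_once(int prog_fd)
{
	LIBBPF_OPTS(bpf_test_run_opts, topts,
		.repeat = 1,		/* execute the program a single time */
	);
	int err;

	err = bpf_prog_test_run_opts(prog_fd, &topts);
	if (err)
		return err;		/* syscall-level failure */
	/* topts.retval and topts.duration replace the old out-parameters */
	return topts.retval;
}
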
diff --git a/tools/testing/selftests/bpf/prog_tests/attach_probe.c b/tools/testing/selftests/bpf/prog_tests/attach_probe.c
index d0bd51eb23c8..d48f6e533e1e 100644
--- a/tools/testing/selftests/bpf/prog_tests/attach_probe.c
+++ b/tools/testing/selftests/bpf/prog_tests/attach_probe.c
@@ -5,9 +5,10 @@
/* this is how a USDT semaphore is actually defined, except for the volatile modifier */
volatile unsigned short uprobe_ref_ctr __attribute__((unused)) __attribute((section(".probes")));
-/* attach point */
-static void method(void) {
- return ;
+/* uprobe attach point */
+static void trigger_func(void)
+{
+ asm volatile ("");
}
void test_attach_probe(void)
@@ -17,8 +18,7 @@ void test_attach_probe(void)
struct bpf_link *kprobe_link, *kretprobe_link;
struct bpf_link *uprobe_link, *uretprobe_link;
struct test_attach_probe* skel;
- size_t uprobe_offset;
- ssize_t base_addr, ref_ctr_offset;
+ ssize_t uprobe_offset, ref_ctr_offset;
bool legacy;
/* Check if new-style kprobe/uprobe API is supported.
@@ -34,11 +34,9 @@ void test_attach_probe(void)
*/
legacy = access("/sys/bus/event_source/devices/kprobe/type", F_OK) != 0;
- base_addr = get_base_addr();
- if (CHECK(base_addr < 0, "get_base_addr",
- "failed to find base addr: %zd", base_addr))
+ uprobe_offset = get_uprobe_offset(&trigger_func);
+ if (!ASSERT_GE(uprobe_offset, 0, "uprobe_offset"))
return;
- uprobe_offset = get_uprobe_offset(&method, base_addr);
ref_ctr_offset = get_rel_offset((uintptr_t)&uprobe_ref_ctr);
if (!ASSERT_GE(ref_ctr_offset, 0, "ref_ctr_offset"))
@@ -103,7 +101,7 @@ void test_attach_probe(void)
goto cleanup;
/* trigger & validate uprobe & uretprobe */
- method();
+ trigger_func();
if (CHECK(skel->bss->uprobe_res != 3, "check_uprobe_res",
"wrong uprobe res: %d\n", skel->bss->uprobe_res))
diff --git a/tools/testing/selftests/bpf/prog_tests/bind_perm.c b/tools/testing/selftests/bpf/prog_tests/bind_perm.c
index d0f06e40c16d..a1766a298bb7 100644
--- a/tools/testing/selftests/bpf/prog_tests/bind_perm.c
+++ b/tools/testing/selftests/bpf/prog_tests/bind_perm.c
@@ -1,13 +1,24 @@
// SPDX-License-Identifier: GPL-2.0
-#include <test_progs.h>
-#include "bind_perm.skel.h"
-
+#define _GNU_SOURCE
+#include <sched.h>
+#include <stdlib.h>
#include <sys/types.h>
#include <sys/socket.h>
-#include <sys/capability.h>
+
+#include "test_progs.h"
+#include "cap_helpers.h"
+#include "bind_perm.skel.h"
static int duration;
+static int create_netns(void)
+{
+ if (!ASSERT_OK(unshare(CLONE_NEWNET), "create netns"))
+ return -1;
+
+ return 0;
+}
+
void try_bind(int family, int port, int expected_errno)
{
struct sockaddr_storage addr = {};
@@ -38,43 +49,16 @@ close_socket:
close(fd);
}
-bool cap_net_bind_service(cap_flag_value_t flag)
-{
- const cap_value_t cap_net_bind_service = CAP_NET_BIND_SERVICE;
- cap_flag_value_t original_value;
- bool was_effective = false;
- cap_t caps;
-
- caps = cap_get_proc();
- if (CHECK(!caps, "cap_get_proc", "errno %d", errno))
- goto free_caps;
-
- if (CHECK(cap_get_flag(caps, CAP_NET_BIND_SERVICE, CAP_EFFECTIVE,
- &original_value),
- "cap_get_flag", "errno %d", errno))
- goto free_caps;
-
- was_effective = (original_value == CAP_SET);
-
- if (CHECK(cap_set_flag(caps, CAP_EFFECTIVE, 1, &cap_net_bind_service,
- flag),
- "cap_set_flag", "errno %d", errno))
- goto free_caps;
-
- if (CHECK(cap_set_proc(caps), "cap_set_proc", "errno %d", errno))
- goto free_caps;
-
-free_caps:
- CHECK(cap_free(caps), "cap_free", "errno %d", errno);
- return was_effective;
-}
-
void test_bind_perm(void)
{
- bool cap_was_effective;
+ const __u64 net_bind_svc_cap = 1ULL << CAP_NET_BIND_SERVICE;
struct bind_perm *skel;
+ __u64 old_caps = 0;
int cgroup_fd;
+ if (create_netns())
+ return;
+
cgroup_fd = test__join_cgroup("/bind_perm");
if (CHECK(cgroup_fd < 0, "cg-join", "errno %d", errno))
return;
@@ -91,7 +75,8 @@ void test_bind_perm(void)
if (!ASSERT_OK_PTR(skel, "bind_v6_prog"))
goto close_skeleton;
- cap_was_effective = cap_net_bind_service(CAP_CLEAR);
+ ASSERT_OK(cap_disable_effective(net_bind_svc_cap, &old_caps),
+ "cap_disable_effective");
try_bind(AF_INET, 110, EACCES);
try_bind(AF_INET6, 110, EACCES);
@@ -99,8 +84,9 @@ void test_bind_perm(void)
try_bind(AF_INET, 111, 0);
try_bind(AF_INET6, 111, 0);
- if (cap_was_effective)
- cap_net_bind_service(CAP_SET);
+ if (old_caps & net_bind_svc_cap)
+ ASSERT_OK(cap_enable_effective(net_bind_svc_cap, NULL),
+ "cap_enable_effective");
close_skeleton:
bind_perm__destroy(skel);
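
bind_perm now toggles CAP_NET_BIND_SERVICE through the shared cap_helpers instead of open-coding libcap. Assuming the helper signatures used above (a 64-bit capability mask plus an optional out-parameter for the previous effective set), the same pattern works for any capability; a sketch:

#include "cap_helpers.h"	/* selftests helper header */

static int run_without_cap(__u64 cap_mask, int (*fn)(void))
{
	__u64 old_caps = 0;
	int ret;

	/* drop the requested effective capabilities, remembering the old set */
	if (cap_disable_effective(cap_mask, &old_caps))
		return -1;
	ret = fn();
	/* re-enable only what was effective before */
	if (old_caps & cap_mask)
		cap_enable_effective(cap_mask, NULL);
	return ret;
}

For instance, run_without_cap(1ULL << CAP_NET_BIND_SERVICE, bind_low_port) mirrors what test_bind_perm() does inline.
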
diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_cookie.c b/tools/testing/selftests/bpf/prog_tests/bpf_cookie.c
index 5eea3c3a40fe..923a6139b2d8 100644
--- a/tools/testing/selftests/bpf/prog_tests/bpf_cookie.c
+++ b/tools/testing/selftests/bpf/prog_tests/bpf_cookie.c
@@ -7,6 +7,13 @@
#include <unistd.h>
#include <test_progs.h>
#include "test_bpf_cookie.skel.h"
+#include "kprobe_multi.skel.h"
+
+/* uprobe attach point */
+static void trigger_func(void)
+{
+ asm volatile ("");
+}
static void kprobe_subtest(struct test_bpf_cookie *skel)
{
@@ -57,16 +64,188 @@ cleanup:
bpf_link__destroy(retlink2);
}
+static void kprobe_multi_test_run(struct kprobe_multi *skel)
+{
+ LIBBPF_OPTS(bpf_test_run_opts, topts);
+ int err, prog_fd;
+
+ prog_fd = bpf_program__fd(skel->progs.trigger);
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ ASSERT_OK(err, "test_run");
+ ASSERT_EQ(topts.retval, 0, "test_run");
+
+ ASSERT_EQ(skel->bss->kprobe_test1_result, 1, "kprobe_test1_result");
+ ASSERT_EQ(skel->bss->kprobe_test2_result, 1, "kprobe_test2_result");
+ ASSERT_EQ(skel->bss->kprobe_test3_result, 1, "kprobe_test3_result");
+ ASSERT_EQ(skel->bss->kprobe_test4_result, 1, "kprobe_test4_result");
+ ASSERT_EQ(skel->bss->kprobe_test5_result, 1, "kprobe_test5_result");
+ ASSERT_EQ(skel->bss->kprobe_test6_result, 1, "kprobe_test6_result");
+ ASSERT_EQ(skel->bss->kprobe_test7_result, 1, "kprobe_test7_result");
+ ASSERT_EQ(skel->bss->kprobe_test8_result, 1, "kprobe_test8_result");
+
+ ASSERT_EQ(skel->bss->kretprobe_test1_result, 1, "kretprobe_test1_result");
+ ASSERT_EQ(skel->bss->kretprobe_test2_result, 1, "kretprobe_test2_result");
+ ASSERT_EQ(skel->bss->kretprobe_test3_result, 1, "kretprobe_test3_result");
+ ASSERT_EQ(skel->bss->kretprobe_test4_result, 1, "kretprobe_test4_result");
+ ASSERT_EQ(skel->bss->kretprobe_test5_result, 1, "kretprobe_test5_result");
+ ASSERT_EQ(skel->bss->kretprobe_test6_result, 1, "kretprobe_test6_result");
+ ASSERT_EQ(skel->bss->kretprobe_test7_result, 1, "kretprobe_test7_result");
+ ASSERT_EQ(skel->bss->kretprobe_test8_result, 1, "kretprobe_test8_result");
+}
+
+static void kprobe_multi_link_api_subtest(void)
+{
+ int prog_fd, link1_fd = -1, link2_fd = -1;
+ struct kprobe_multi *skel = NULL;
+ LIBBPF_OPTS(bpf_link_create_opts, opts);
+ unsigned long long addrs[8];
+ __u64 cookies[8];
+
+ if (!ASSERT_OK(load_kallsyms(), "load_kallsyms"))
+ goto cleanup;
+
+ skel = kprobe_multi__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "fentry_raw_skel_load"))
+ goto cleanup;
+
+ skel->bss->pid = getpid();
+ skel->bss->test_cookie = true;
+
+#define GET_ADDR(__sym, __addr) ({ \
+ __addr = ksym_get_addr(__sym); \
+ if (!ASSERT_NEQ(__addr, 0, "ksym_get_addr " #__sym)) \
+ goto cleanup; \
+})
+
+ GET_ADDR("bpf_fentry_test1", addrs[0]);
+ GET_ADDR("bpf_fentry_test2", addrs[1]);
+ GET_ADDR("bpf_fentry_test3", addrs[2]);
+ GET_ADDR("bpf_fentry_test4", addrs[3]);
+ GET_ADDR("bpf_fentry_test5", addrs[4]);
+ GET_ADDR("bpf_fentry_test6", addrs[5]);
+ GET_ADDR("bpf_fentry_test7", addrs[6]);
+ GET_ADDR("bpf_fentry_test8", addrs[7]);
+
+#undef GET_ADDR
+
+ cookies[0] = 1;
+ cookies[1] = 2;
+ cookies[2] = 3;
+ cookies[3] = 4;
+ cookies[4] = 5;
+ cookies[5] = 6;
+ cookies[6] = 7;
+ cookies[7] = 8;
+
+ opts.kprobe_multi.addrs = (const unsigned long *) &addrs;
+ opts.kprobe_multi.cnt = ARRAY_SIZE(addrs);
+ opts.kprobe_multi.cookies = (const __u64 *) &cookies;
+ prog_fd = bpf_program__fd(skel->progs.test_kprobe);
+
+ link1_fd = bpf_link_create(prog_fd, 0, BPF_TRACE_KPROBE_MULTI, &opts);
+ if (!ASSERT_GE(link1_fd, 0, "link1_fd"))
+ goto cleanup;
+
+ cookies[0] = 8;
+ cookies[1] = 7;
+ cookies[2] = 6;
+ cookies[3] = 5;
+ cookies[4] = 4;
+ cookies[5] = 3;
+ cookies[6] = 2;
+ cookies[7] = 1;
+
+ opts.kprobe_multi.flags = BPF_F_KPROBE_MULTI_RETURN;
+ prog_fd = bpf_program__fd(skel->progs.test_kretprobe);
+
+ link2_fd = bpf_link_create(prog_fd, 0, BPF_TRACE_KPROBE_MULTI, &opts);
+ if (!ASSERT_GE(link2_fd, 0, "link2_fd"))
+ goto cleanup;
+
+ kprobe_multi_test_run(skel);
+
+cleanup:
+ close(link1_fd);
+ close(link2_fd);
+ kprobe_multi__destroy(skel);
+}
+
+static void kprobe_multi_attach_api_subtest(void)
+{
+ struct bpf_link *link1 = NULL, *link2 = NULL;
+ LIBBPF_OPTS(bpf_kprobe_multi_opts, opts);
+ LIBBPF_OPTS(bpf_test_run_opts, topts);
+ struct kprobe_multi *skel = NULL;
+ const char *syms[8] = {
+ "bpf_fentry_test1",
+ "bpf_fentry_test2",
+ "bpf_fentry_test3",
+ "bpf_fentry_test4",
+ "bpf_fentry_test5",
+ "bpf_fentry_test6",
+ "bpf_fentry_test7",
+ "bpf_fentry_test8",
+ };
+ __u64 cookies[8];
+
+ skel = kprobe_multi__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "fentry_raw_skel_load"))
+ goto cleanup;
+
+ skel->bss->pid = getpid();
+ skel->bss->test_cookie = true;
+
+ cookies[0] = 1;
+ cookies[1] = 2;
+ cookies[2] = 3;
+ cookies[3] = 4;
+ cookies[4] = 5;
+ cookies[5] = 6;
+ cookies[6] = 7;
+ cookies[7] = 8;
+
+ opts.syms = syms;
+ opts.cnt = ARRAY_SIZE(syms);
+ opts.cookies = cookies;
+
+ link1 = bpf_program__attach_kprobe_multi_opts(skel->progs.test_kprobe,
+ NULL, &opts);
+ if (!ASSERT_OK_PTR(link1, "bpf_program__attach_kprobe_multi_opts"))
+ goto cleanup;
+
+ cookies[0] = 8;
+ cookies[1] = 7;
+ cookies[2] = 6;
+ cookies[3] = 5;
+ cookies[4] = 4;
+ cookies[5] = 3;
+ cookies[6] = 2;
+ cookies[7] = 1;
+
+ opts.retprobe = true;
+
+ link2 = bpf_program__attach_kprobe_multi_opts(skel->progs.test_kretprobe,
+ NULL, &opts);
+ if (!ASSERT_OK_PTR(link2, "bpf_program__attach_kprobe_multi_opts"))
+ goto cleanup;
+
+ kprobe_multi_test_run(skel);
+
+cleanup:
+ bpf_link__destroy(link2);
+ bpf_link__destroy(link1);
+ kprobe_multi__destroy(skel);
+}
static void uprobe_subtest(struct test_bpf_cookie *skel)
{
DECLARE_LIBBPF_OPTS(bpf_uprobe_opts, opts);
struct bpf_link *link1 = NULL, *link2 = NULL;
struct bpf_link *retlink1 = NULL, *retlink2 = NULL;
- size_t uprobe_offset;
- ssize_t base_addr;
+ ssize_t uprobe_offset;
- base_addr = get_base_addr();
- uprobe_offset = get_uprobe_offset(&get_base_addr, base_addr);
+ uprobe_offset = get_uprobe_offset(&trigger_func);
+ if (!ASSERT_GE(uprobe_offset, 0, "uprobe_offset"))
+ goto cleanup;
/* attach two uprobes */
opts.bpf_cookie = 0x100;
@@ -99,7 +278,7 @@ static void uprobe_subtest(struct test_bpf_cookie *skel)
goto cleanup;
/* trigger uprobe && uretprobe */
- get_base_addr();
+ trigger_func();
ASSERT_EQ(skel->bss->uprobe_res, 0x100 | 0x200, "uprobe_res");
ASSERT_EQ(skel->bss->uretprobe_res, 0x1000 | 0x2000, "uretprobe_res");
@@ -193,7 +372,7 @@ static void pe_subtest(struct test_bpf_cookie *skel)
attr.type = PERF_TYPE_SOFTWARE;
attr.config = PERF_COUNT_SW_CPU_CLOCK;
attr.freq = 1;
- attr.sample_freq = 4000;
+ attr.sample_freq = 1000;
pfd = syscall(__NR_perf_event_open, &attr, -1, 0, -1, PERF_FLAG_FD_CLOEXEC);
if (!ASSERT_GE(pfd, 0, "perf_fd"))
goto cleanup;
@@ -243,6 +422,10 @@ void test_bpf_cookie(void)
if (test__start_subtest("kprobe"))
kprobe_subtest(skel);
+ if (test__start_subtest("multi_kprobe_link_api"))
+ kprobe_multi_link_api_subtest();
+ if (test__start_subtest("multi_kprobe_attach_api"))
+ kprobe_multi_attach_api_subtest();
if (test__start_subtest("uprobe"))
uprobe_subtest(skel);
if (test__start_subtest("tracepoint"))
diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_iter.c b/tools/testing/selftests/bpf/prog_tests/bpf_iter.c
index b84f859b1267..5142a7d130b2 100644
--- a/tools/testing/selftests/bpf/prog_tests/bpf_iter.c
+++ b/tools/testing/selftests/bpf/prog_tests/bpf_iter.c
@@ -138,6 +138,24 @@ static void test_task(void)
bpf_iter_task__destroy(skel);
}
+static void test_task_sleepable(void)
+{
+ struct bpf_iter_task *skel;
+
+ skel = bpf_iter_task__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "bpf_iter_task__open_and_load"))
+ return;
+
+ do_dummy_read(skel->progs.dump_task_sleepable);
+
+ ASSERT_GT(skel->bss->num_expected_failure_copy_from_user_task, 0,
+ "num_expected_failure_copy_from_user_task");
+ ASSERT_GT(skel->bss->num_success_copy_from_user_task, 0,
+ "num_success_copy_from_user_task");
+
+ bpf_iter_task__destroy(skel);
+}
+
static void test_task_stack(void)
{
struct bpf_iter_task_stack *skel;
@@ -1252,6 +1270,8 @@ void test_bpf_iter(void)
test_bpf_map();
if (test__start_subtest("task"))
test_task();
+ if (test__start_subtest("task_sleepable"))
+ test_task_sleepable();
if (test__start_subtest("task_stack"))
test_task_stack();
if (test__start_subtest("task_file"))
diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_iter_setsockopt_unix.c b/tools/testing/selftests/bpf/prog_tests/bpf_iter_setsockopt_unix.c
new file mode 100644
index 000000000000..ee725d4d98a5
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/bpf_iter_setsockopt_unix.c
@@ -0,0 +1,100 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright Amazon.com Inc. or its affiliates. */
+#include <sys/socket.h>
+#include <sys/un.h>
+#include <test_progs.h>
+#include "bpf_iter_setsockopt_unix.skel.h"
+
+#define NR_CASES 5
+
+static int create_unix_socket(struct bpf_iter_setsockopt_unix *skel)
+{
+ struct sockaddr_un addr = {
+ .sun_family = AF_UNIX,
+ .sun_path = "",
+ };
+ socklen_t len;
+ int fd, err;
+
+ fd = socket(AF_UNIX, SOCK_STREAM, 0);
+ if (!ASSERT_NEQ(fd, -1, "socket"))
+ return -1;
+
+ len = offsetof(struct sockaddr_un, sun_path);
+ err = bind(fd, (struct sockaddr *)&addr, len);
+ if (!ASSERT_OK(err, "bind"))
+ return -1;
+
+ len = sizeof(addr);
+ err = getsockname(fd, (struct sockaddr *)&addr, &len);
+ if (!ASSERT_OK(err, "getsockname"))
+ return -1;
+
+ memcpy(&skel->bss->sun_path, &addr.sun_path,
+ len - offsetof(struct sockaddr_un, sun_path));
+
+ return fd;
+}
+
+static void test_sndbuf(struct bpf_iter_setsockopt_unix *skel, int fd)
+{
+ socklen_t optlen;
+ int i, err;
+
+ for (i = 0; i < NR_CASES; i++) {
+ if (!ASSERT_NEQ(skel->data->sndbuf_getsockopt[i], -1,
+ "bpf_(get|set)sockopt"))
+ return;
+
+ err = setsockopt(fd, SOL_SOCKET, SO_SNDBUF,
+ &(skel->data->sndbuf_setsockopt[i]),
+ sizeof(skel->data->sndbuf_setsockopt[i]));
+ if (!ASSERT_OK(err, "setsockopt"))
+ return;
+
+ optlen = sizeof(skel->bss->sndbuf_getsockopt_expected[i]);
+ err = getsockopt(fd, SOL_SOCKET, SO_SNDBUF,
+ &(skel->bss->sndbuf_getsockopt_expected[i]),
+ &optlen);
+ if (!ASSERT_OK(err, "getsockopt"))
+ return;
+
+ if (!ASSERT_EQ(skel->data->sndbuf_getsockopt[i],
+ skel->bss->sndbuf_getsockopt_expected[i],
+ "bpf_(get|set)sockopt"))
+ return;
+ }
+}
+
+void test_bpf_iter_setsockopt_unix(void)
+{
+ struct bpf_iter_setsockopt_unix *skel;
+ int err, unix_fd, iter_fd;
+ char buf;
+
+ skel = bpf_iter_setsockopt_unix__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "open_and_load"))
+ return;
+
+ unix_fd = create_unix_socket(skel);
+ if (!ASSERT_NEQ(unix_fd, -1, "create_unix_server"))
+ goto destroy;
+
+ skel->links.change_sndbuf = bpf_program__attach_iter(skel->progs.change_sndbuf, NULL);
+ if (!ASSERT_OK_PTR(skel->links.change_sndbuf, "bpf_program__attach_iter"))
+ goto destroy;
+
+ iter_fd = bpf_iter_create(bpf_link__fd(skel->links.change_sndbuf));
+ if (!ASSERT_GE(iter_fd, 0, "bpf_iter_create"))
+ goto destroy;
+
+ while ((err = read(iter_fd, &buf, sizeof(buf))) == -1 &&
+ errno == EAGAIN)
+ ;
+ if (!ASSERT_OK(err, "read iter error"))
+ goto destroy;
+
+ test_sndbuf(skel, unix_fd);
+destroy:
+ bpf_iter_setsockopt_unix__destroy(skel);
+}
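
create_unix_socket() above leans on AF_UNIX autobind: calling bind() with an address length that covers only sun_family makes the kernel assign a unique abstract name, which getsockname() then reports back. A standalone sketch of the trick:

#include <stddef.h>
#include <stdio.h>
#include <sys/socket.h>
#include <sys/un.h>

int main(void)
{
	struct sockaddr_un addr = { .sun_family = AF_UNIX };
	socklen_t len = offsetof(struct sockaddr_un, sun_path);
	int fd = socket(AF_UNIX, SOCK_STREAM, 0);

	if (fd < 0 || bind(fd, (struct sockaddr *)&addr, len))
		return 1;
	len = sizeof(addr);
	if (getsockname(fd, (struct sockaddr *)&addr, &len))
		return 1;
	/* abstract names start with a NUL byte; skip it when printing */
	printf("autobound to @%.*s\n",
	       (int)(len - offsetof(struct sockaddr_un, sun_path) - 1),
	       addr.sun_path + 1);
	return 0;
}
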
diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_mod_race.c b/tools/testing/selftests/bpf/prog_tests/bpf_mod_race.c
new file mode 100644
index 000000000000..d43f548c572c
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/bpf_mod_race.c
@@ -0,0 +1,230 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <unistd.h>
+#include <pthread.h>
+#include <sys/mman.h>
+#include <stdatomic.h>
+#include <test_progs.h>
+#include <sys/syscall.h>
+#include <linux/module.h>
+#include <linux/userfaultfd.h>
+
+#include "ksym_race.skel.h"
+#include "bpf_mod_race.skel.h"
+#include "kfunc_call_race.skel.h"
+
+/* This test crafts a race between btf_try_get_module and do_init_module, and
+ * checks whether btf_try_get_module handles the invocation for a well-formed
+ * but uninitialized module correctly. Unless the module has completed its
+ * initcalls, the verifier should fail the program load and return ENXIO.
+ *
+ * userfaultfd is used to trigger a fault in an fmod_ret program and make it
+ * sleep; then the BPF program is loaded and the return value from the
+ * verifier is inspected. After this, the userfaultfd is closed so that the
+ * module-loading
+ * thread makes forward progress, and fmod_ret injects an error so that the
+ * module load fails and it is freed.
+ *
+ * If the verifier succeeds in loading the supplied program, it ends up
+ * taking a reference to the freed module and triggers a crash when the
+ * program fd is closed later. This is true for both kfuncs and ksyms. In
+ * both cases, the crash is triggered inside bpf_prog_free_deferred, when
+ * the module reference is finally released.
+ */
+
+struct test_config {
+ const char *str_open;
+ void *(*bpf_open_and_load)();
+ void (*bpf_destroy)(void *);
+};
+
+enum test_state {
+ _TS_INVALID,
+ TS_MODULE_LOAD,
+ TS_MODULE_LOAD_FAIL,
+};
+
+static _Atomic enum test_state state = _TS_INVALID;
+
+static int sys_finit_module(int fd, const char *param_values, int flags)
+{
+ return syscall(__NR_finit_module, fd, param_values, flags);
+}
+
+static int sys_delete_module(const char *name, unsigned int flags)
+{
+ return syscall(__NR_delete_module, name, flags);
+}
+
+static int load_module(const char *mod)
+{
+ int ret, fd;
+
+ fd = open("bpf_testmod.ko", O_RDONLY);
+ if (fd < 0)
+ return fd;
+
+ ret = sys_finit_module(fd, "", 0);
+ close(fd);
+ if (ret < 0)
+ return ret;
+ return 0;
+}
+
+static void *load_module_thread(void *p)
+{
+ if (!ASSERT_NEQ(load_module("bpf_testmod.ko"), 0, "load_module_thread must fail"))
+ atomic_store(&state, TS_MODULE_LOAD);
+ else
+ atomic_store(&state, TS_MODULE_LOAD_FAIL);
+ return p;
+}
+
+static int sys_userfaultfd(int flags)
+{
+ return syscall(__NR_userfaultfd, flags);
+}
+
+static int test_setup_uffd(void *fault_addr)
+{
+ struct uffdio_register uffd_register = {};
+ struct uffdio_api uffd_api = {};
+ int uffd;
+
+ uffd = sys_userfaultfd(O_CLOEXEC);
+ if (uffd < 0)
+ return -errno;
+
+ uffd_api.api = UFFD_API;
+ uffd_api.features = 0;
+ if (ioctl(uffd, UFFDIO_API, &uffd_api)) {
+ close(uffd);
+ return -1;
+ }
+
+ uffd_register.range.start = (unsigned long)fault_addr;
+ uffd_register.range.len = 4096;
+ uffd_register.mode = UFFDIO_REGISTER_MODE_MISSING;
+ if (ioctl(uffd, UFFDIO_REGISTER, &uffd_register)) {
+ close(uffd);
+ return -1;
+ }
+ return uffd;
+}
+
+static void test_bpf_mod_race_config(const struct test_config *config)
+{
+ void *fault_addr, *skel_fail;
+ struct bpf_mod_race *skel;
+ struct uffd_msg uffd_msg;
+ pthread_t load_mod_thrd;
+ _Atomic int *blockingp;
+ int uffd, ret;
+
+ fault_addr = mmap(0, 4096, PROT_READ, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
+ if (!ASSERT_NEQ(fault_addr, MAP_FAILED, "mmap for uffd registration"))
+ return;
+
+ if (!ASSERT_OK(sys_delete_module("bpf_testmod", 0), "unload bpf_testmod"))
+ goto end_mmap;
+
+ skel = bpf_mod_race__open();
+ if (!ASSERT_OK_PTR(skel, "bpf_mod_kfunc_race__open"))
+ goto end_module;
+
+ skel->rodata->bpf_mod_race_config.tgid = getpid();
+ skel->rodata->bpf_mod_race_config.inject_error = -4242;
+ skel->rodata->bpf_mod_race_config.fault_addr = fault_addr;
+ if (!ASSERT_OK(bpf_mod_race__load(skel), "bpf_mod___load"))
+ goto end_destroy;
+ blockingp = (_Atomic int *)&skel->bss->bpf_blocking;
+
+ if (!ASSERT_OK(bpf_mod_race__attach(skel), "bpf_mod_kfunc_race__attach"))
+ goto end_destroy;
+
+ uffd = test_setup_uffd(fault_addr);
+ if (!ASSERT_GE(uffd, 0, "userfaultfd open + register address"))
+ goto end_destroy;
+
+ if (!ASSERT_OK(pthread_create(&load_mod_thrd, NULL, load_module_thread, NULL),
+ "load module thread"))
+ goto end_uffd;
+
+ /* Now we either fail to load the module or block in the bpf prog; spin to find out */
+ while (!atomic_load(&state) && !atomic_load(blockingp))
+ ;
+ if (!ASSERT_EQ(state, _TS_INVALID, "module load should block"))
+ goto end_join;
+ if (!ASSERT_EQ(*blockingp, 1, "module load blocked")) {
+ pthread_kill(load_mod_thrd, SIGKILL);
+ goto end_uffd;
+ }
+
+ /* We might have set bpf_blocking to 1 but may not have blocked in
+ * bpf_copy_from_user. Read the userfaultfd descriptor to verify that.
+ */
+ if (!ASSERT_EQ(read(uffd, &uffd_msg, sizeof(uffd_msg)), sizeof(uffd_msg),
+ "read uffd block event"))
+ goto end_join;
+ if (!ASSERT_EQ(uffd_msg.event, UFFD_EVENT_PAGEFAULT, "read uffd event is pagefault"))
+ goto end_join;
+
+ /* We know that load_mod_thrd is blocked in the fmod_ret program and that
+ * the module state is still MODULE_STATE_COMING because mod->init hasn't
+ * returned. This is when we try to load a program calling a kfunc and
+ * check whether we get ENXIO from the verifier.
+ */
+ skel_fail = config->bpf_open_and_load();
+ ret = errno;
+ if (!ASSERT_EQ(skel_fail, NULL, config->str_open)) {
+ /* Close uffd to unblock load_mod_thrd */
+ close(uffd);
+ uffd = -1;
+ while (atomic_load(blockingp) != 2)
+ ;
+ ASSERT_OK(kern_sync_rcu(), "kern_sync_rcu");
+ config->bpf_destroy(skel_fail);
+ goto end_join;
+
+ }
+ ASSERT_EQ(ret, ENXIO, "verifier returns ENXIO");
+ ASSERT_EQ(skel->data->res_try_get_module, false, "btf_try_get_module == false");
+
+ close(uffd);
+ uffd = -1;
+end_join:
+ pthread_join(load_mod_thrd, NULL);
+ if (uffd < 0)
+ ASSERT_EQ(atomic_load(&state), TS_MODULE_LOAD_FAIL, "load_mod_thrd success");
+end_uffd:
+ if (uffd >= 0)
+ close(uffd);
+end_destroy:
+ bpf_mod_race__destroy(skel);
+ ASSERT_OK(kern_sync_rcu(), "kern_sync_rcu");
+end_module:
+ sys_delete_module("bpf_testmod", 0);
+ ASSERT_OK(load_module("bpf_testmod.ko"), "restore bpf_testmod");
+end_mmap:
+ munmap(fault_addr, 4096);
+ atomic_store(&state, _TS_INVALID);
+}
+
+static const struct test_config ksym_config = {
+ .str_open = "ksym_race__open_and_load",
+ .bpf_open_and_load = (void *)ksym_race__open_and_load,
+ .bpf_destroy = (void *)ksym_race__destroy,
+};
+
+static const struct test_config kfunc_config = {
+ .str_open = "kfunc_call_race__open_and_load",
+ .bpf_open_and_load = (void *)kfunc_call_race__open_and_load,
+ .bpf_destroy = (void *)kfunc_call_race__destroy,
+};
+
+void serial_test_bpf_mod_race(void)
+{
+ if (test__start_subtest("ksym (used_btfs UAF)"))
+ test_bpf_mod_race_config(&ksym_config);
+ if (test__start_subtest("kfunc (kfunc_btf_tab UAF)"))
+ test_bpf_mod_race_config(&kfunc_config);
+}
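
The race above hinges on a userfaultfd property: once a range is registered with UFFDIO_REGISTER_MODE_MISSING, the first fault in it parks the faulting thread in the kernel until the fault is serviced or the uffd is closed, which is what freezes the module loader inside the fmod_ret program. A reduced sketch of just that blocking behavior (error handling trimmed; unprivileged use may require the vm.unprivileged_userfaultfd sysctl):

#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <linux/userfaultfd.h>

static void *faulter(void *addr)
{
	/* this read faults and blocks until the uffd is serviced or closed */
	return (void *)(long)*(volatile char *)addr;
}

int main(void)
{
	struct uffdio_api api = { .api = UFFD_API };
	struct uffdio_register reg = {};
	void *p = mmap(NULL, 4096, PROT_READ, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	int uffd = syscall(__NR_userfaultfd, O_CLOEXEC);
	pthread_t t;

	ioctl(uffd, UFFDIO_API, &api);
	reg.range.start = (unsigned long)p;
	reg.range.len = 4096;
	reg.mode = UFFDIO_REGISTER_MODE_MISSING;
	ioctl(uffd, UFFDIO_REGISTER, &reg);

	pthread_create(&t, NULL, faulter, p);
	sleep(1);		/* faulter is now parked in the kernel */
	close(uffd);		/* releasing the uffd resolves the fault */
	pthread_join(t, NULL);
	puts("faulter resumed");
	return 0;
}
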
diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_nf.c b/tools/testing/selftests/bpf/prog_tests/bpf_nf.c
new file mode 100644
index 000000000000..dd30b1e3a67c
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/bpf_nf.c
@@ -0,0 +1,52 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <test_progs.h>
+#include <network_helpers.h>
+#include "test_bpf_nf.skel.h"
+
+enum {
+ TEST_XDP,
+ TEST_TC_BPF,
+};
+
+void test_bpf_nf_ct(int mode)
+{
+ struct test_bpf_nf *skel;
+ int prog_fd, err;
+ LIBBPF_OPTS(bpf_test_run_opts, topts,
+ .data_in = &pkt_v4,
+ .data_size_in = sizeof(pkt_v4),
+ .repeat = 1,
+ );
+
+ skel = test_bpf_nf__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "test_bpf_nf__open_and_load"))
+ return;
+
+ if (mode == TEST_XDP)
+ prog_fd = bpf_program__fd(skel->progs.nf_xdp_ct_test);
+ else
+ prog_fd = bpf_program__fd(skel->progs.nf_skb_ct_test);
+
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ if (!ASSERT_OK(err, "bpf_prog_test_run"))
+ goto end;
+
+ ASSERT_EQ(skel->bss->test_einval_bpf_tuple, -EINVAL, "Test EINVAL for NULL bpf_tuple");
+ ASSERT_EQ(skel->bss->test_einval_reserved, -EINVAL, "Test EINVAL for reserved not set to 0");
+ ASSERT_EQ(skel->bss->test_einval_netns_id, -EINVAL, "Test EINVAL for netns_id < -1");
+ ASSERT_EQ(skel->bss->test_einval_len_opts, -EINVAL, "Test EINVAL for len__opts != NF_BPF_CT_OPTS_SZ");
+ ASSERT_EQ(skel->bss->test_eproto_l4proto, -EPROTO, "Test EPROTO for l4proto != TCP or UDP");
+ ASSERT_EQ(skel->bss->test_enonet_netns_id, -ENONET, "Test ENONET for bad but valid netns_id");
+ ASSERT_EQ(skel->bss->test_enoent_lookup, -ENOENT, "Test ENOENT for failed lookup");
+ ASSERT_EQ(skel->bss->test_eafnosupport, -EAFNOSUPPORT, "Test EAFNOSUPPORT for invalid len__tuple");
+end:
+ test_bpf_nf__destroy(skel);
+}
+
+void test_bpf_nf(void)
+{
+ if (test__start_subtest("xdp-ct"))
+ test_bpf_nf_ct(TEST_XDP);
+ if (test__start_subtest("tc-bpf-ct"))
+ test_bpf_nf_ct(TEST_TC_BPF);
+}
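
On the BPF side, test_bpf_nf exercises the new unstable conntrack kfuncs, which programs declare as __ksym externs that the verifier resolves against kernel BTF at load time. A hedged sketch of a lookup, assuming the kfunc signatures and struct bpf_ct_opts added in this series:

#include <vmlinux.h>
#include <bpf/bpf_helpers.h>

/* kfunc externs, resolved against kernel BTF when the prog is loaded */
struct nf_conn *bpf_xdp_ct_lookup(struct xdp_md *xdp_ctx,
				  struct bpf_sock_tuple *bpf_tuple,
				  u32 tuple__sz, struct bpf_ct_opts *opts,
				  u32 opts__sz) __ksym;
void bpf_ct_release(struct nf_conn *ct) __ksym;

SEC("xdp")
int ct_lookup_sketch(struct xdp_md *ctx)
{
	struct bpf_ct_opts opts = { .netns_id = -1, .l4proto = 6 /* TCP */ };
	struct bpf_sock_tuple tup = {};	/* fill from the packet in real code */
	struct nf_conn *ct;

	ct = bpf_xdp_ct_lookup(ctx, &tup, sizeof(tup.ipv4), &opts, sizeof(opts));
	if (ct)
		bpf_ct_release(ct);	/* looked-up entries must be released */
	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";
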
diff --git a/tools/testing/selftests/bpf/prog_tests/btf.c b/tools/testing/selftests/bpf/prog_tests/btf.c
index 8ba53acf9eb4..ec823561b912 100644
--- a/tools/testing/selftests/bpf/prog_tests/btf.c
+++ b/tools/testing/selftests/bpf/prog_tests/btf.c
@@ -3939,6 +3939,25 @@ static struct btf_raw_test raw_tests[] = {
.err_str = "Invalid component_idx",
},
{
+ .descr = "decl_tag test #15, func, invalid func proto",
+ .raw_types = {
+ BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */
+ BTF_DECL_TAG_ENC(NAME_TBD, 3, 0), /* [2] */
+ BTF_FUNC_ENC(NAME_TBD, 8), /* [3] */
+ BTF_END_RAW,
+ },
+ BTF_STR_SEC("\0tag\0func"),
+ .map_type = BPF_MAP_TYPE_ARRAY,
+ .map_name = "tag_type_check_btf",
+ .key_size = sizeof(int),
+ .value_size = 4,
+ .key_type_id = 1,
+ .value_type_id = 1,
+ .max_entries = 1,
+ .btf_load_err = true,
+ .err_str = "Invalid type_id",
+},
+{
.descr = "type_tag test #1",
.raw_types = {
BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */
@@ -4560,6 +4579,8 @@ static void do_test_file(unsigned int test_num)
has_btf_ext = btf_ext != NULL;
btf_ext__free(btf_ext);
+ /* temporary disable LIBBPF_STRICT_MAP_DEFINITIONS to test legacy maps */
+ libbpf_set_strict_mode(LIBBPF_STRICT_ALL & ~LIBBPF_STRICT_MAP_DEFINITIONS);
obj = bpf_object__open(test->file);
err = libbpf_get_error(obj);
if (CHECK(err, "obj: %d", err))
@@ -4684,6 +4705,8 @@ skip:
fprintf(stderr, "OK");
done:
+ libbpf_set_strict_mode(LIBBPF_STRICT_ALL);
+
btf__free(btf);
free(func_info);
bpf_object__close(obj);
@@ -6533,7 +6556,7 @@ done:
static void do_test_info_raw(unsigned int test_num)
{
const struct prog_info_raw_test *test = &info_raw_tests[test_num - 1];
- unsigned int raw_btf_size, linfo_str_off, linfo_size;
+ unsigned int raw_btf_size, linfo_str_off, linfo_size = 0;
int btf_fd = -1, prog_fd = -1, err = 0;
void *raw_btf, *patched_linfo = NULL;
const char *ret_next_str;
diff --git a/tools/testing/selftests/bpf/prog_tests/btf_dump.c b/tools/testing/selftests/bpf/prog_tests/btf_dump.c
index 9e26903f9170..5fce7008d1ff 100644
--- a/tools/testing/selftests/bpf/prog_tests/btf_dump.c
+++ b/tools/testing/selftests/bpf/prog_tests/btf_dump.c
@@ -148,22 +148,38 @@ static void test_btf_dump_incremental(void)
/* First, generate BTF corresponding to the following C code:
*
- * enum { VAL = 1 };
+ * enum x;
+ *
+ * enum x { X = 1 };
+ *
+ * enum { Y = 1 };
+ *
+ * struct s;
*
* struct s { int x; };
*
*/
+ id = btf__add_enum(btf, "x", 4);
+ ASSERT_EQ(id, 1, "enum_declaration_id");
+ id = btf__add_enum(btf, "x", 4);
+ ASSERT_EQ(id, 2, "named_enum_id");
+ err = btf__add_enum_value(btf, "X", 1);
+ ASSERT_OK(err, "named_enum_val_ok");
+
id = btf__add_enum(btf, NULL, 4);
- ASSERT_EQ(id, 1, "enum_id");
- err = btf__add_enum_value(btf, "VAL", 1);
- ASSERT_OK(err, "enum_val_ok");
+ ASSERT_EQ(id, 3, "anon_enum_id");
+ err = btf__add_enum_value(btf, "Y", 1);
+ ASSERT_OK(err, "anon_enum_val_ok");
id = btf__add_int(btf, "int", 4, BTF_INT_SIGNED);
- ASSERT_EQ(id, 2, "int_id");
+ ASSERT_EQ(id, 4, "int_id");
+
+ id = btf__add_fwd(btf, "s", BTF_FWD_STRUCT);
+ ASSERT_EQ(id, 5, "fwd_id");
id = btf__add_struct(btf, "s", 4);
- ASSERT_EQ(id, 3, "struct_id");
- err = btf__add_field(btf, "x", 2, 0, 0);
+ ASSERT_EQ(id, 6, "struct_id");
+ err = btf__add_field(btf, "x", 4, 0, 0);
ASSERT_OK(err, "field_ok");
for (i = 1; i < btf__type_cnt(btf); i++) {
@@ -173,11 +189,20 @@ static void test_btf_dump_incremental(void)
fflush(dump_buf_file);
dump_buf[dump_buf_sz] = 0; /* some libc implementations don't do this */
+
ASSERT_STREQ(dump_buf,
+"enum x;\n"
+"\n"
+"enum x {\n"
+" X = 1,\n"
+"};\n"
+"\n"
"enum {\n"
-" VAL = 1,\n"
+" Y = 1,\n"
"};\n"
"\n"
+"struct s;\n"
+"\n"
"struct s {\n"
" int x;\n"
"};\n\n", "c_dump1");
@@ -199,10 +224,12 @@ static void test_btf_dump_incremental(void)
fseek(dump_buf_file, 0, SEEK_SET);
id = btf__add_struct(btf, "s", 4);
- ASSERT_EQ(id, 4, "struct_id");
- err = btf__add_field(btf, "x", 1, 0, 0);
+ ASSERT_EQ(id, 7, "struct_id");
+ err = btf__add_field(btf, "x", 2, 0, 0);
+ ASSERT_OK(err, "field_ok");
+ err = btf__add_field(btf, "y", 3, 32, 0);
ASSERT_OK(err, "field_ok");
- err = btf__add_field(btf, "s", 3, 32, 0);
+ err = btf__add_field(btf, "s", 6, 64, 0);
ASSERT_OK(err, "field_ok");
for (i = 1; i < btf__type_cnt(btf); i++) {
@@ -214,9 +241,10 @@ static void test_btf_dump_incremental(void)
dump_buf[dump_buf_sz] = 0; /* some libc implementations don't do this */
ASSERT_STREQ(dump_buf,
"struct s___2 {\n"
+" enum x x;\n"
" enum {\n"
-" VAL___2 = 1,\n"
-" } x;\n"
+" Y___2 = 1,\n"
+" } y;\n"
" struct s s;\n"
"};\n\n" , "c_dump1");
diff --git a/tools/testing/selftests/bpf/prog_tests/btf_tag.c b/tools/testing/selftests/bpf/prog_tests/btf_tag.c
index 88d63e23e35f..071430cd54de 100644
--- a/tools/testing/selftests/bpf/prog_tests/btf_tag.c
+++ b/tools/testing/selftests/bpf/prog_tests/btf_tag.c
@@ -1,19 +1,22 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2021 Facebook */
#include <test_progs.h>
-#include "btf_decl_tag.skel.h"
+#include <bpf/btf.h>
+#include "test_btf_decl_tag.skel.h"
/* struct btf_type_tag_test is referenced in btf_type_tag.skel.h */
struct btf_type_tag_test {
int **p;
};
#include "btf_type_tag.skel.h"
+#include "btf_type_tag_user.skel.h"
+#include "btf_type_tag_percpu.skel.h"
static void test_btf_decl_tag(void)
{
- struct btf_decl_tag *skel;
+ struct test_btf_decl_tag *skel;
- skel = btf_decl_tag__open_and_load();
+ skel = test_btf_decl_tag__open_and_load();
if (!ASSERT_OK_PTR(skel, "btf_decl_tag"))
return;
@@ -22,7 +25,7 @@ static void test_btf_decl_tag(void)
test__skip();
}
- btf_decl_tag__destroy(skel);
+ test_btf_decl_tag__destroy(skel);
}
static void test_btf_type_tag(void)
@@ -41,10 +44,206 @@ static void test_btf_type_tag(void)
btf_type_tag__destroy(skel);
}
+/* Loads vmlinux_btf as well as module_btf. If the caller passes NULL as
+ * module_btf, it will not load the module BTF.
+ *
+ * Returns 0 on success.
+ * Returns -1 on error; in that case, the loaded BTF is freed and the
+ * input parameters are set to NULL.
+ */
+static int load_btfs(struct btf **vmlinux_btf, struct btf **module_btf,
+ bool needs_vmlinux_tag)
+{
+ const char *module_name = "bpf_testmod";
+ __s32 type_id;
+
+ if (!env.has_testmod) {
+ test__skip();
+ return -1;
+ }
+
+ *vmlinux_btf = btf__load_vmlinux_btf();
+ if (!ASSERT_OK_PTR(*vmlinux_btf, "could not load vmlinux BTF"))
+ return -1;
+
+ if (!needs_vmlinux_tag)
+ goto load_module_btf;
+
+ /* skip the test if the vmlinux does not have __user tags */
+ type_id = btf__find_by_name_kind(*vmlinux_btf, "user", BTF_KIND_TYPE_TAG);
+ if (type_id <= 0) {
+ printf("%s:SKIP: btf_type_tag attribute not in vmlinux btf", __func__);
+ test__skip();
+ goto free_vmlinux_btf;
+ }
+
+load_module_btf:
+ /* skip loading module_btf, if not requested by caller */
+ if (!module_btf)
+ return 0;
+
+ *module_btf = btf__load_module_btf(module_name, *vmlinux_btf);
+ if (!ASSERT_OK_PTR(*module_btf, "could not load module BTF"))
+ goto free_vmlinux_btf;
+
+ /* skip the test if the module does not have __user tags */
+ type_id = btf__find_by_name_kind(*module_btf, "user", BTF_KIND_TYPE_TAG);
+ if (type_id <= 0) {
+ printf("%s:SKIP: btf_type_tag attribute not in %s", __func__, module_name);
+ test__skip();
+ goto free_module_btf;
+ }
+
+ return 0;
+
+free_module_btf:
+ btf__free(*module_btf);
+free_vmlinux_btf:
+ btf__free(*vmlinux_btf);
+
+ *vmlinux_btf = NULL;
+ if (module_btf)
+ *module_btf = NULL;
+ return -1;
+}
+
+static void test_btf_type_tag_mod_user(bool load_test_user1)
+{
+ struct btf *vmlinux_btf = NULL, *module_btf = NULL;
+ struct btf_type_tag_user *skel;
+ int err;
+
+ if (load_btfs(&vmlinux_btf, &module_btf, /*needs_vmlinux_tag=*/false))
+ return;
+
+ skel = btf_type_tag_user__open();
+ if (!ASSERT_OK_PTR(skel, "btf_type_tag_user"))
+ goto cleanup;
+
+ bpf_program__set_autoload(skel->progs.test_sys_getsockname, false);
+ if (load_test_user1)
+ bpf_program__set_autoload(skel->progs.test_user2, false);
+ else
+ bpf_program__set_autoload(skel->progs.test_user1, false);
+
+ err = btf_type_tag_user__load(skel);
+ ASSERT_ERR(err, "btf_type_tag_user");
+
+ btf_type_tag_user__destroy(skel);
+
+cleanup:
+ btf__free(module_btf);
+ btf__free(vmlinux_btf);
+}
+
+static void test_btf_type_tag_vmlinux_user(void)
+{
+ struct btf_type_tag_user *skel;
+ struct btf *vmlinux_btf = NULL;
+ int err;
+
+ if (load_btfs(&vmlinux_btf, NULL, /*needs_vmlinux_tag=*/true))
+ return;
+
+ skel = btf_type_tag_user__open();
+ if (!ASSERT_OK_PTR(skel, "btf_type_tag_user"))
+ goto cleanup;
+
+ bpf_program__set_autoload(skel->progs.test_user2, false);
+ bpf_program__set_autoload(skel->progs.test_user1, false);
+
+ err = btf_type_tag_user__load(skel);
+ ASSERT_ERR(err, "btf_type_tag_user");
+
+ btf_type_tag_user__destroy(skel);
+
+cleanup:
+ btf__free(vmlinux_btf);
+}
+
+static void test_btf_type_tag_mod_percpu(bool load_test_percpu1)
+{
+ struct btf *vmlinux_btf, *module_btf;
+ struct btf_type_tag_percpu *skel;
+ int err;
+
+ if (load_btfs(&vmlinux_btf, &module_btf, /*needs_vmlinux_tag=*/false))
+ return;
+
+ skel = btf_type_tag_percpu__open();
+ if (!ASSERT_OK_PTR(skel, "btf_type_tag_percpu"))
+ goto cleanup;
+
+ bpf_program__set_autoload(skel->progs.test_percpu_load, false);
+ bpf_program__set_autoload(skel->progs.test_percpu_helper, false);
+ if (load_test_percpu1)
+ bpf_program__set_autoload(skel->progs.test_percpu2, false);
+ else
+ bpf_program__set_autoload(skel->progs.test_percpu1, false);
+
+ err = btf_type_tag_percpu__load(skel);
+ ASSERT_ERR(err, "btf_type_tag_percpu");
+
+ btf_type_tag_percpu__destroy(skel);
+
+cleanup:
+ btf__free(module_btf);
+ btf__free(vmlinux_btf);
+}
+
+static void test_btf_type_tag_vmlinux_percpu(bool load_test)
+{
+ struct btf_type_tag_percpu *skel;
+ struct btf *vmlinux_btf = NULL;
+ int err;
+
+ if (load_btfs(&vmlinux_btf, NULL, /*needs_vmlinux_tag=*/true))
+ return;
+
+ skel = btf_type_tag_percpu__open();
+ if (!ASSERT_OK_PTR(skel, "btf_type_tag_percpu"))
+ goto cleanup;
+
+ bpf_program__set_autoload(skel->progs.test_percpu2, false);
+ bpf_program__set_autoload(skel->progs.test_percpu1, false);
+ if (load_test) {
+ bpf_program__set_autoload(skel->progs.test_percpu_helper, false);
+
+ err = btf_type_tag_percpu__load(skel);
+ ASSERT_ERR(err, "btf_type_tag_percpu_load");
+ } else {
+ bpf_program__set_autoload(skel->progs.test_percpu_load, false);
+
+ err = btf_type_tag_percpu__load(skel);
+ ASSERT_OK(err, "btf_type_tag_percpu_helper");
+ }
+
+ btf_type_tag_percpu__destroy(skel);
+
+cleanup:
+ btf__free(vmlinux_btf);
+}
+
void test_btf_tag(void)
{
if (test__start_subtest("btf_decl_tag"))
test_btf_decl_tag();
if (test__start_subtest("btf_type_tag"))
test_btf_type_tag();
+
+ if (test__start_subtest("btf_type_tag_user_mod1"))
+ test_btf_type_tag_mod_user(true);
+ if (test__start_subtest("btf_type_tag_user_mod2"))
+ test_btf_type_tag_mod_user(false);
+ if (test__start_subtest("btf_type_tag_sys_user_vmlinux"))
+ test_btf_type_tag_vmlinux_user();
+
+ if (test__start_subtest("btf_type_tag_percpu_mod1"))
+ test_btf_type_tag_mod_percpu(true);
+ if (test__start_subtest("btf_type_tag_percpu_mod2"))
+ test_btf_type_tag_mod_percpu(false);
+ if (test__start_subtest("btf_type_tag_percpu_vmlinux_load"))
+ test_btf_type_tag_vmlinux_percpu(true);
+ if (test__start_subtest("btf_type_tag_percpu_vmlinux_helper"))
+ test_btf_type_tag_vmlinux_percpu(false);
}
diff --git a/tools/testing/selftests/bpf/prog_tests/cgroup_attach_autodetach.c b/tools/testing/selftests/bpf/prog_tests/cgroup_attach_autodetach.c
index 858916d11e2e..9367bd2f0ae1 100644
--- a/tools/testing/selftests/bpf/prog_tests/cgroup_attach_autodetach.c
+++ b/tools/testing/selftests/bpf/prog_tests/cgroup_attach_autodetach.c
@@ -14,7 +14,7 @@ static int prog_load(void)
BPF_MOV64_IMM(BPF_REG_0, 1), /* r0 = 1 */
BPF_EXIT_INSN(),
};
- size_t insns_cnt = sizeof(prog) / sizeof(struct bpf_insn);
+ size_t insns_cnt = ARRAY_SIZE(prog);
return bpf_test_load_program(BPF_PROG_TYPE_CGROUP_SKB,
prog, insns_cnt, "GPL", 0,
diff --git a/tools/testing/selftests/bpf/prog_tests/cgroup_attach_multi.c b/tools/testing/selftests/bpf/prog_tests/cgroup_attach_multi.c
index d3e8f729c623..db0b7bac78d1 100644
--- a/tools/testing/selftests/bpf/prog_tests/cgroup_attach_multi.c
+++ b/tools/testing/selftests/bpf/prog_tests/cgroup_attach_multi.c
@@ -63,7 +63,7 @@ static int prog_load_cnt(int verdict, int val)
BPF_MOV64_IMM(BPF_REG_0, verdict), /* r0 = verdict */
BPF_EXIT_INSN(),
};
- size_t insns_cnt = sizeof(prog) / sizeof(struct bpf_insn);
+ size_t insns_cnt = ARRAY_SIZE(prog);
int ret;
ret = bpf_test_load_program(BPF_PROG_TYPE_CGROUP_SKB,
@@ -194,14 +194,14 @@ void serial_test_cgroup_attach_multi(void)
attach_opts.flags = BPF_F_ALLOW_OVERRIDE | BPF_F_REPLACE;
attach_opts.replace_prog_fd = allow_prog[0];
- if (CHECK(!bpf_prog_attach_xattr(allow_prog[6], cg1,
+ if (CHECK(!bpf_prog_attach_opts(allow_prog[6], cg1,
BPF_CGROUP_INET_EGRESS, &attach_opts),
"fail_prog_replace_override", "unexpected success\n"))
goto err;
CHECK_FAIL(errno != EINVAL);
attach_opts.flags = BPF_F_REPLACE;
- if (CHECK(!bpf_prog_attach_xattr(allow_prog[6], cg1,
+ if (CHECK(!bpf_prog_attach_opts(allow_prog[6], cg1,
BPF_CGROUP_INET_EGRESS, &attach_opts),
"fail_prog_replace_no_multi", "unexpected success\n"))
goto err;
@@ -209,7 +209,7 @@ void serial_test_cgroup_attach_multi(void)
attach_opts.flags = BPF_F_ALLOW_MULTI | BPF_F_REPLACE;
attach_opts.replace_prog_fd = -1;
- if (CHECK(!bpf_prog_attach_xattr(allow_prog[6], cg1,
+ if (CHECK(!bpf_prog_attach_opts(allow_prog[6], cg1,
BPF_CGROUP_INET_EGRESS, &attach_opts),
"fail_prog_replace_bad_fd", "unexpected success\n"))
goto err;
@@ -217,7 +217,7 @@ void serial_test_cgroup_attach_multi(void)
/* replacing a program that is not attached to cgroup should fail */
attach_opts.replace_prog_fd = allow_prog[3];
- if (CHECK(!bpf_prog_attach_xattr(allow_prog[6], cg1,
+ if (CHECK(!bpf_prog_attach_opts(allow_prog[6], cg1,
BPF_CGROUP_INET_EGRESS, &attach_opts),
"fail_prog_replace_no_ent", "unexpected success\n"))
goto err;
@@ -225,14 +225,14 @@ void serial_test_cgroup_attach_multi(void)
/* replace 1st from the top program */
attach_opts.replace_prog_fd = allow_prog[0];
- if (CHECK(bpf_prog_attach_xattr(allow_prog[6], cg1,
+ if (CHECK(bpf_prog_attach_opts(allow_prog[6], cg1,
BPF_CGROUP_INET_EGRESS, &attach_opts),
"prog_replace", "errno=%d\n", errno))
goto err;
/* replace program with itself */
attach_opts.replace_prog_fd = allow_prog[6];
- if (CHECK(bpf_prog_attach_xattr(allow_prog[6], cg1,
+ if (CHECK(bpf_prog_attach_opts(allow_prog[6], cg1,
BPF_CGROUP_INET_EGRESS, &attach_opts),
"prog_replace", "errno=%d\n", errno))
goto err;
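
The mechanical s/bpf_prog_attach_xattr/bpf_prog_attach_opts/ above follows libbpf's rename of the xattr-style APIs; the options struct and semantics are unchanged. A sketch of the replace-by-fd flow the test exercises, assuming valid program and cgroup fds:

#include <bpf/bpf.h>

static int replace_egress_prog(int cgroup_fd, int old_fd, int new_fd)
{
	LIBBPF_OPTS(bpf_prog_attach_opts, attach_opts,
		.flags = BPF_F_ALLOW_MULTI | BPF_F_REPLACE,
		.replace_prog_fd = old_fd,	/* program being swapped out */
	);

	/* atomically replace old_fd with new_fd on the cgroup hook */
	return bpf_prog_attach_opts(new_fd, cgroup_fd,
				    BPF_CGROUP_INET_EGRESS, &attach_opts);
}
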
diff --git a/tools/testing/selftests/bpf/prog_tests/cgroup_attach_override.c b/tools/testing/selftests/bpf/prog_tests/cgroup_attach_override.c
index 356547e849e2..9421a5b7f4e1 100644
--- a/tools/testing/selftests/bpf/prog_tests/cgroup_attach_override.c
+++ b/tools/testing/selftests/bpf/prog_tests/cgroup_attach_override.c
@@ -16,7 +16,7 @@ static int prog_load(int verdict)
BPF_MOV64_IMM(BPF_REG_0, verdict), /* r0 = verdict */
BPF_EXIT_INSN(),
};
- size_t insns_cnt = sizeof(prog) / sizeof(struct bpf_insn);
+ size_t insns_cnt = ARRAY_SIZE(prog);
return bpf_test_load_program(BPF_PROG_TYPE_CGROUP_SKB,
prog, insns_cnt, "GPL", 0,
diff --git a/tools/testing/selftests/bpf/prog_tests/cgroup_getset_retval.c b/tools/testing/selftests/bpf/prog_tests/cgroup_getset_retval.c
new file mode 100644
index 000000000000..0b47c3c000c7
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/cgroup_getset_retval.c
@@ -0,0 +1,481 @@
+// SPDX-License-Identifier: GPL-2.0-only
+
+/*
+ * Copyright 2021 Google LLC.
+ */
+
+#include <test_progs.h>
+#include <cgroup_helpers.h>
+#include <network_helpers.h>
+
+#include "cgroup_getset_retval_setsockopt.skel.h"
+#include "cgroup_getset_retval_getsockopt.skel.h"
+
+#define SOL_CUSTOM 0xdeadbeef
+
+static int zero;
+
+static void test_setsockopt_set(int cgroup_fd, int sock_fd)
+{
+ struct cgroup_getset_retval_setsockopt *obj;
+ struct bpf_link *link_set_eunatch = NULL;
+
+ obj = cgroup_getset_retval_setsockopt__open_and_load();
+ if (!ASSERT_OK_PTR(obj, "skel-load"))
+ return;
+
+ /* Attach setsockopt that sets EUNATCH, assert that
+ * we actually get that error when we run setsockopt()
+ */
+ link_set_eunatch = bpf_program__attach_cgroup(obj->progs.set_eunatch,
+ cgroup_fd);
+ if (!ASSERT_OK_PTR(link_set_eunatch, "cg-attach-set_eunatch"))
+ goto close_bpf_object;
+
+ if (!ASSERT_ERR(setsockopt(sock_fd, SOL_SOCKET, SO_REUSEADDR,
+ &zero, sizeof(int)), "setsockopt"))
+ goto close_bpf_object;
+ if (!ASSERT_EQ(errno, EUNATCH, "setsockopt-errno"))
+ goto close_bpf_object;
+
+ if (!ASSERT_EQ(obj->bss->invocations, 1, "invocations"))
+ goto close_bpf_object;
+ if (!ASSERT_FALSE(obj->bss->assertion_error, "assertion_error"))
+ goto close_bpf_object;
+
+close_bpf_object:
+ bpf_link__destroy(link_set_eunatch);
+
+ cgroup_getset_retval_setsockopt__destroy(obj);
+}
+
+static void test_setsockopt_set_and_get(int cgroup_fd, int sock_fd)
+{
+ struct cgroup_getset_retval_setsockopt *obj;
+ struct bpf_link *link_set_eunatch = NULL, *link_get_retval = NULL;
+
+ obj = cgroup_getset_retval_setsockopt__open_and_load();
+ if (!ASSERT_OK_PTR(obj, "skel-load"))
+ return;
+
+ /* Attach setsockopt that sets EUNATCH, and one that gets the
+ * previously set errno. Assert that we get the same errno back.
+ */
+ link_set_eunatch = bpf_program__attach_cgroup(obj->progs.set_eunatch,
+ cgroup_fd);
+ if (!ASSERT_OK_PTR(link_set_eunatch, "cg-attach-set_eunatch"))
+ goto close_bpf_object;
+ link_get_retval = bpf_program__attach_cgroup(obj->progs.get_retval,
+ cgroup_fd);
+ if (!ASSERT_OK_PTR(link_get_retval, "cg-attach-get_retval"))
+ goto close_bpf_object;
+
+ if (!ASSERT_ERR(setsockopt(sock_fd, SOL_SOCKET, SO_REUSEADDR,
+ &zero, sizeof(int)), "setsockopt"))
+ goto close_bpf_object;
+ if (!ASSERT_EQ(errno, EUNATCH, "setsockopt-errno"))
+ goto close_bpf_object;
+
+ if (!ASSERT_EQ(obj->bss->invocations, 2, "invocations"))
+ goto close_bpf_object;
+ if (!ASSERT_FALSE(obj->bss->assertion_error, "assertion_error"))
+ goto close_bpf_object;
+ if (!ASSERT_EQ(obj->bss->retval_value, -EUNATCH, "retval_value"))
+ goto close_bpf_object;
+
+close_bpf_object:
+ bpf_link__destroy(link_set_eunatch);
+ bpf_link__destroy(link_get_retval);
+
+ cgroup_getset_retval_setsockopt__destroy(obj);
+}
+
+static void test_setsockopt_default_zero(int cgroup_fd, int sock_fd)
+{
+ struct cgroup_getset_retval_setsockopt *obj;
+ struct bpf_link *link_get_retval = NULL;
+
+ obj = cgroup_getset_retval_setsockopt__open_and_load();
+ if (!ASSERT_OK_PTR(obj, "skel-load"))
+ return;
+
+ /* Attach setsockopt that gets the previously set errno.
+ * Assert that, without anything setting one, we get 0.
+ */
+ link_get_retval = bpf_program__attach_cgroup(obj->progs.get_retval,
+ cgroup_fd);
+ if (!ASSERT_OK_PTR(link_get_retval, "cg-attach-get_retval"))
+ goto close_bpf_object;
+
+ if (!ASSERT_OK(setsockopt(sock_fd, SOL_SOCKET, SO_REUSEADDR,
+ &zero, sizeof(int)), "setsockopt"))
+ goto close_bpf_object;
+
+ if (!ASSERT_EQ(obj->bss->invocations, 1, "invocations"))
+ goto close_bpf_object;
+ if (!ASSERT_FALSE(obj->bss->assertion_error, "assertion_error"))
+ goto close_bpf_object;
+ if (!ASSERT_EQ(obj->bss->retval_value, 0, "retval_value"))
+ goto close_bpf_object;
+
+close_bpf_object:
+ bpf_link__destroy(link_get_retval);
+
+ cgroup_getset_retval_setsockopt__destroy(obj);
+}
+
+static void test_setsockopt_default_zero_and_set(int cgroup_fd, int sock_fd)
+{
+ struct cgroup_getset_retval_setsockopt *obj;
+ struct bpf_link *link_get_retval = NULL, *link_set_eunatch = NULL;
+
+ obj = cgroup_getset_retval_setsockopt__open_and_load();
+ if (!ASSERT_OK_PTR(obj, "skel-load"))
+ return;
+
+ /* Attach setsockopt that gets the previously set errno, and then
+ * one that sets the errno to EUNATCH. Assert that the get, which runs
+ * first, does not observe the EUNATCH set afterwards and does not
+ * prevent EUNATCH from being set.
+ */
+ link_get_retval = bpf_program__attach_cgroup(obj->progs.get_retval,
+ cgroup_fd);
+ if (!ASSERT_OK_PTR(link_get_retval, "cg-attach-get_retval"))
+ goto close_bpf_object;
+ link_set_eunatch = bpf_program__attach_cgroup(obj->progs.set_eunatch,
+ cgroup_fd);
+ if (!ASSERT_OK_PTR(link_set_eunatch, "cg-attach-set_eunatch"))
+ goto close_bpf_object;
+
+ if (!ASSERT_ERR(setsockopt(sock_fd, SOL_SOCKET, SO_REUSEADDR,
+ &zero, sizeof(int)), "setsockopt"))
+ goto close_bpf_object;
+ if (!ASSERT_EQ(errno, EUNATCH, "setsockopt-errno"))
+ goto close_bpf_object;
+
+ if (!ASSERT_EQ(obj->bss->invocations, 2, "invocations"))
+ goto close_bpf_object;
+ if (!ASSERT_FALSE(obj->bss->assertion_error, "assertion_error"))
+ goto close_bpf_object;
+ if (!ASSERT_EQ(obj->bss->retval_value, 0, "retval_value"))
+ goto close_bpf_object;
+
+close_bpf_object:
+ bpf_link__destroy(link_get_retval);
+ bpf_link__destroy(link_set_eunatch);
+
+ cgroup_getset_retval_setsockopt__destroy(obj);
+}
+
+static void test_setsockopt_override(int cgroup_fd, int sock_fd)
+{
+ struct cgroup_getset_retval_setsockopt *obj;
+ struct bpf_link *link_set_eunatch = NULL, *link_set_eisconn = NULL;
+ struct bpf_link *link_get_retval = NULL;
+
+ obj = cgroup_getset_retval_setsockopt__open_and_load();
+ if (!ASSERT_OK_PTR(obj, "skel-load"))
+ return;
+
+ /* Attach setsockopt that sets EUNATCH, then one that sets EISCONN,
+ * and then one that gets the exported errno. Assert that both the
+ * syscall and the helper see the last errno set.
+ */
+ link_set_eunatch = bpf_program__attach_cgroup(obj->progs.set_eunatch,
+ cgroup_fd);
+ if (!ASSERT_OK_PTR(link_set_eunatch, "cg-attach-set_eunatch"))
+ goto close_bpf_object;
+ link_set_eisconn = bpf_program__attach_cgroup(obj->progs.set_eisconn,
+ cgroup_fd);
+ if (!ASSERT_OK_PTR(link_set_eisconn, "cg-attach-set_eisconn"))
+ goto close_bpf_object;
+ link_get_retval = bpf_program__attach_cgroup(obj->progs.get_retval,
+ cgroup_fd);
+ if (!ASSERT_OK_PTR(link_get_retval, "cg-attach-get_retval"))
+ goto close_bpf_object;
+
+ if (!ASSERT_ERR(setsockopt(sock_fd, SOL_SOCKET, SO_REUSEADDR,
+ &zero, sizeof(int)), "setsockopt"))
+ goto close_bpf_object;
+ if (!ASSERT_EQ(errno, EISCONN, "setsockopt-errno"))
+ goto close_bpf_object;
+
+ if (!ASSERT_EQ(obj->bss->invocations, 3, "invocations"))
+ goto close_bpf_object;
+ if (!ASSERT_FALSE(obj->bss->assertion_error, "assertion_error"))
+ goto close_bpf_object;
+ if (!ASSERT_EQ(obj->bss->retval_value, -EISCONN, "retval_value"))
+ goto close_bpf_object;
+
+close_bpf_object:
+ bpf_link__destroy(link_set_eunatch);
+ bpf_link__destroy(link_set_eisconn);
+ bpf_link__destroy(link_get_retval);
+
+ cgroup_getset_retval_setsockopt__destroy(obj);
+}
+
+static void test_setsockopt_legacy_eperm(int cgroup_fd, int sock_fd)
+{
+ struct cgroup_getset_retval_setsockopt *obj;
+ struct bpf_link *link_legacy_eperm = NULL, *link_get_retval = NULL;
+
+ obj = cgroup_getset_retval_setsockopt__open_and_load();
+ if (!ASSERT_OK_PTR(obj, "skel-load"))
+ return;
+
+ /* Attach setsockopt that returns a reject without setting errno
+ * (legacy reject), and one that gets the errno. Assert that, for
+ * backward compatibility, the syscall results in EPERM and that this
+ * is also visible to the helper.
+ */
+ link_legacy_eperm = bpf_program__attach_cgroup(obj->progs.legacy_eperm,
+ cgroup_fd);
+ if (!ASSERT_OK_PTR(link_legacy_eperm, "cg-attach-legacy_eperm"))
+ goto close_bpf_object;
+ link_get_retval = bpf_program__attach_cgroup(obj->progs.get_retval,
+ cgroup_fd);
+ if (!ASSERT_OK_PTR(link_get_retval, "cg-attach-get_retval"))
+ goto close_bpf_object;
+
+ if (!ASSERT_ERR(setsockopt(sock_fd, SOL_SOCKET, SO_REUSEADDR,
+ &zero, sizeof(int)), "setsockopt"))
+ goto close_bpf_object;
+ if (!ASSERT_EQ(errno, EPERM, "setsockopt-errno"))
+ goto close_bpf_object;
+
+ if (!ASSERT_EQ(obj->bss->invocations, 2, "invocations"))
+ goto close_bpf_object;
+ if (!ASSERT_FALSE(obj->bss->assertion_error, "assertion_error"))
+ goto close_bpf_object;
+ if (!ASSERT_EQ(obj->bss->retval_value, -EPERM, "retval_value"))
+ goto close_bpf_object;
+
+close_bpf_object:
+ bpf_link__destroy(link_legacy_eperm);
+ bpf_link__destroy(link_get_retval);
+
+ cgroup_getset_retval_setsockopt__destroy(obj);
+}
+
+static void test_setsockopt_legacy_no_override(int cgroup_fd, int sock_fd)
+{
+ struct cgroup_getset_retval_setsockopt *obj;
+ struct bpf_link *link_set_eunatch = NULL, *link_legacy_eperm = NULL;
+ struct bpf_link *link_get_retval = NULL;
+
+ obj = cgroup_getset_retval_setsockopt__open_and_load();
+ if (!ASSERT_OK_PTR(obj, "skel-load"))
+ return;
+
+ /* Attach setsockopt that sets EUNATCH, then one that returns a reject
+ * without setting errno, and then one that gets the exported errno.
+ * Assert that both the syscall's and the helper's errno are unaffected
+ * by the second prog (i.e. a legacy reject does not override the errno
+ * to EPERM).
+ */
+ link_set_eunatch = bpf_program__attach_cgroup(obj->progs.set_eunatch,
+ cgroup_fd);
+ if (!ASSERT_OK_PTR(link_set_eunatch, "cg-attach-set_eunatch"))
+ goto close_bpf_object;
+ link_legacy_eperm = bpf_program__attach_cgroup(obj->progs.legacy_eperm,
+ cgroup_fd);
+ if (!ASSERT_OK_PTR(link_legacy_eperm, "cg-attach-legacy_eperm"))
+ goto close_bpf_object;
+ link_get_retval = bpf_program__attach_cgroup(obj->progs.get_retval,
+ cgroup_fd);
+ if (!ASSERT_OK_PTR(link_get_retval, "cg-attach-get_retval"))
+ goto close_bpf_object;
+
+ if (!ASSERT_ERR(setsockopt(sock_fd, SOL_SOCKET, SO_REUSEADDR,
+ &zero, sizeof(int)), "setsockopt"))
+ goto close_bpf_object;
+ if (!ASSERT_EQ(errno, EUNATCH, "setsockopt-errno"))
+ goto close_bpf_object;
+
+ if (!ASSERT_EQ(obj->bss->invocations, 3, "invocations"))
+ goto close_bpf_object;
+ if (!ASSERT_FALSE(obj->bss->assertion_error, "assertion_error"))
+ goto close_bpf_object;
+ if (!ASSERT_EQ(obj->bss->retval_value, -EUNATCH, "retval_value"))
+ goto close_bpf_object;
+
+close_bpf_object:
+ bpf_link__destroy(link_set_eunatch);
+ bpf_link__destroy(link_legacy_eperm);
+ bpf_link__destroy(link_get_retval);
+
+ cgroup_getset_retval_setsockopt__destroy(obj);
+}
+
+static void test_getsockopt_get(int cgroup_fd, int sock_fd)
+{
+ struct cgroup_getset_retval_getsockopt *obj;
+ struct bpf_link *link_get_retval = NULL;
+ int buf;
+ socklen_t optlen = sizeof(buf);
+
+ obj = cgroup_getset_retval_getsockopt__open_and_load();
+ if (!ASSERT_OK_PTR(obj, "skel-load"))
+ return;
+
+ /* Attach getsockopt that gets the previously set errno. Assert that
+ * the error from the kernel shows up in both ctx_retval_value and
+ * retval_value.
+ */
+ link_get_retval = bpf_program__attach_cgroup(obj->progs.get_retval,
+ cgroup_fd);
+ if (!ASSERT_OK_PTR(link_get_retval, "cg-attach-get_retval"))
+ goto close_bpf_object;
+
+ if (!ASSERT_ERR(getsockopt(sock_fd, SOL_CUSTOM, 0,
+ &buf, &optlen), "getsockopt"))
+ goto close_bpf_object;
+ if (!ASSERT_EQ(errno, EOPNOTSUPP, "getsockopt-errno"))
+ goto close_bpf_object;
+
+ if (!ASSERT_EQ(obj->bss->invocations, 1, "invocations"))
+ goto close_bpf_object;
+ if (!ASSERT_FALSE(obj->bss->assertion_error, "assertion_error"))
+ goto close_bpf_object;
+ if (!ASSERT_EQ(obj->bss->retval_value, -EOPNOTSUPP, "retval_value"))
+ goto close_bpf_object;
+ if (!ASSERT_EQ(obj->bss->ctx_retval_value, -EOPNOTSUPP, "ctx_retval_value"))
+ goto close_bpf_object;
+
+close_bpf_object:
+ bpf_link__destroy(link_get_retval);
+
+ cgroup_getset_retval_getsockopt__destroy(obj);
+}
+
+static void test_getsockopt_override(int cgroup_fd, int sock_fd)
+{
+ struct cgroup_getset_retval_getsockopt *obj;
+ struct bpf_link *link_set_eisconn = NULL;
+ int buf;
+ socklen_t optlen = sizeof(buf);
+
+ obj = cgroup_getset_retval_getsockopt__open_and_load();
+ if (!ASSERT_OK_PTR(obj, "skel-load"))
+ return;
+
+ /* Attach getsockopt that sets retval to -EISCONN. Assert that this
+ * overrides the value from the kernel.
+ */
+ link_set_eisconn = bpf_program__attach_cgroup(obj->progs.set_eisconn,
+ cgroup_fd);
+ if (!ASSERT_OK_PTR(link_set_eisconn, "cg-attach-set_eisconn"))
+ goto close_bpf_object;
+
+ if (!ASSERT_ERR(getsockopt(sock_fd, SOL_CUSTOM, 0,
+ &buf, &optlen), "getsockopt"))
+ goto close_bpf_object;
+ if (!ASSERT_EQ(errno, EISCONN, "getsockopt-errno"))
+ goto close_bpf_object;
+
+ if (!ASSERT_EQ(obj->bss->invocations, 1, "invocations"))
+ goto close_bpf_object;
+ if (!ASSERT_FALSE(obj->bss->assertion_error, "assertion_error"))
+ goto close_bpf_object;
+
+close_bpf_object:
+ bpf_link__destroy(link_set_eisconn);
+
+ cgroup_getset_retval_getsockopt__destroy(obj);
+}
+
+static void test_getsockopt_retval_sync(int cgroup_fd, int sock_fd)
+{
+ struct cgroup_getset_retval_getsockopt *obj;
+ struct bpf_link *link_set_eisconn = NULL, *link_clear_retval = NULL;
+ struct bpf_link *link_get_retval = NULL;
+ int buf;
+ socklen_t optlen = sizeof(buf);
+
+ obj = cgroup_getset_retval_getsockopt__open_and_load();
+ if (!ASSERT_OK_PTR(obj, "skel-load"))
+ return;
+
+ /* Attach getsockopt that sets retval to -EISCONN, and one that clears
+ * the ctx retval. Assert that clearing the ctx retval is synced to the
+ * helper and clears any error from both the kernel and BPF.
+ */
+ link_set_eisconn = bpf_program__attach_cgroup(obj->progs.set_eisconn,
+ cgroup_fd);
+ if (!ASSERT_OK_PTR(link_set_eisconn, "cg-attach-set_eisconn"))
+ goto close_bpf_object;
+ link_clear_retval = bpf_program__attach_cgroup(obj->progs.clear_retval,
+ cgroup_fd);
+ if (!ASSERT_OK_PTR(link_clear_retval, "cg-attach-clear_retval"))
+ goto close_bpf_object;
+ link_get_retval = bpf_program__attach_cgroup(obj->progs.get_retval,
+ cgroup_fd);
+ if (!ASSERT_OK_PTR(link_get_retval, "cg-attach-get_retval"))
+ goto close_bpf_object;
+
+ if (!ASSERT_OK(getsockopt(sock_fd, SOL_CUSTOM, 0,
+ &buf, &optlen), "getsockopt"))
+ goto close_bpf_object;
+
+ if (!ASSERT_EQ(obj->bss->invocations, 3, "invocations"))
+ goto close_bpf_object;
+ if (!ASSERT_FALSE(obj->bss->assertion_error, "assertion_error"))
+ goto close_bpf_object;
+ if (!ASSERT_EQ(obj->bss->retval_value, 0, "retval_value"))
+ goto close_bpf_object;
+ if (!ASSERT_EQ(obj->bss->ctx_retval_value, 0, "ctx_retval_value"))
+ goto close_bpf_object;
+
+close_bpf_object:
+ bpf_link__destroy(link_set_eisconn);
+ bpf_link__destroy(link_clear_retval);
+ bpf_link__destroy(link_get_retval);
+
+ cgroup_getset_retval_getsockopt__destroy(obj);
+}
+
+void test_cgroup_getset_retval(void)
+{
+ int cgroup_fd = -1;
+ int sock_fd = -1;
+
+ cgroup_fd = test__join_cgroup("/cgroup_getset_retval");
+ if (!ASSERT_GE(cgroup_fd, 0, "cg-create"))
+ goto close_fd;
+
+ sock_fd = start_server(AF_INET, SOCK_DGRAM, NULL, 0, 0);
+ if (!ASSERT_GE(sock_fd, 0, "start-server"))
+ goto close_fd;
+
+ if (test__start_subtest("setsockopt-set"))
+ test_setsockopt_set(cgroup_fd, sock_fd);
+
+ if (test__start_subtest("setsockopt-set_and_get"))
+ test_setsockopt_set_and_get(cgroup_fd, sock_fd);
+
+ if (test__start_subtest("setsockopt-default_zero"))
+ test_setsockopt_default_zero(cgroup_fd, sock_fd);
+
+ if (test__start_subtest("setsockopt-default_zero_and_set"))
+ test_setsockopt_default_zero_and_set(cgroup_fd, sock_fd);
+
+ if (test__start_subtest("setsockopt-override"))
+ test_setsockopt_override(cgroup_fd, sock_fd);
+
+ if (test__start_subtest("setsockopt-legacy_eperm"))
+ test_setsockopt_legacy_eperm(cgroup_fd, sock_fd);
+
+ if (test__start_subtest("setsockopt-legacy_no_override"))
+ test_setsockopt_legacy_no_override(cgroup_fd, sock_fd);
+
+ if (test__start_subtest("getsockopt-get"))
+ test_getsockopt_get(cgroup_fd, sock_fd);
+
+ if (test__start_subtest("getsockopt-override"))
+ test_getsockopt_override(cgroup_fd, sock_fd);
+
+ if (test__start_subtest("getsockopt-retval_sync"))
+ test_getsockopt_retval_sync(cgroup_fd, sock_fd);
+
+close_fd:
+ close(cgroup_fd);
+ close(sock_fd);
+}
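
For context, the assertions above exercise BPF programs built around the new
bpf_set_retval()/bpf_get_retval() helpers. A minimal sketch of the BPF side,
assuming those helpers; program and variable names here are illustrative, not
the actual progs/cgroup_getset_retval_*.c sources:

    // SPDX-License-Identifier: GPL-2.0
    #include <errno.h>
    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    __u32 invocations = 0;
    int retval_value = 0;

    /* Set the syscall's errno to EUNATCH; returning 0 rejects the
     * setsockopt, and the retval set here becomes the reported error. */
    SEC("cgroup/setsockopt")
    int set_eunatch(struct bpf_sockopt *ctx)
    {
        __sync_fetch_and_add(&invocations, 1);
        bpf_set_retval(-EUNATCH);
        return 0;
    }

    /* Record whatever retval a prior program (or the kernel) left behind. */
    SEC("cgroup/setsockopt")
    int get_retval(struct bpf_sockopt *ctx)
    {
        __sync_fetch_and_add(&invocations, 1);
        retval_value = bpf_get_retval();
        return 1;
    }

    char _license[] SEC("license") = "GPL";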
diff --git a/tools/testing/selftests/bpf/prog_tests/check_mtu.c b/tools/testing/selftests/bpf/prog_tests/check_mtu.c
index f73e6e36b74d..12f4395f18b3 100644
--- a/tools/testing/selftests/bpf/prog_tests/check_mtu.c
+++ b/tools/testing/selftests/bpf/prog_tests/check_mtu.c
@@ -79,28 +79,21 @@ static void test_check_mtu_run_xdp(struct test_check_mtu *skel,
struct bpf_program *prog,
__u32 mtu_expect)
{
- const char *prog_name = bpf_program__name(prog);
int retval_expect = XDP_PASS;
__u32 mtu_result = 0;
char buf[256] = {};
- int err;
- struct bpf_prog_test_run_attr tattr = {
+ int err, prog_fd = bpf_program__fd(prog);
+ LIBBPF_OPTS(bpf_test_run_opts, topts,
.repeat = 1,
.data_in = &pkt_v4,
.data_size_in = sizeof(pkt_v4),
.data_out = buf,
.data_size_out = sizeof(buf),
- .prog_fd = bpf_program__fd(prog),
- };
-
- err = bpf_prog_test_run_xattr(&tattr);
- CHECK_ATTR(err != 0, "bpf_prog_test_run",
- "prog_name:%s (err %d errno %d retval %d)\n",
- prog_name, err, errno, tattr.retval);
+ );
- CHECK(tattr.retval != retval_expect, "retval",
- "progname:%s unexpected retval=%d expected=%d\n",
- prog_name, tattr.retval, retval_expect);
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ ASSERT_OK(err, "test_run");
+ ASSERT_EQ(topts.retval, retval_expect, "retval");
/* Extract MTU that BPF-prog got */
mtu_result = skel->bss->global_bpf_mtu_xdp;
@@ -139,28 +132,21 @@ static void test_check_mtu_run_tc(struct test_check_mtu *skel,
struct bpf_program *prog,
__u32 mtu_expect)
{
- const char *prog_name = bpf_program__name(prog);
int retval_expect = BPF_OK;
__u32 mtu_result = 0;
char buf[256] = {};
- int err;
- struct bpf_prog_test_run_attr tattr = {
- .repeat = 1,
+ int err, prog_fd = bpf_program__fd(prog);
+ LIBBPF_OPTS(bpf_test_run_opts, topts,
.data_in = &pkt_v4,
.data_size_in = sizeof(pkt_v4),
.data_out = buf,
.data_size_out = sizeof(buf),
- .prog_fd = bpf_program__fd(prog),
- };
-
- err = bpf_prog_test_run_xattr(&tattr);
- CHECK_ATTR(err != 0, "bpf_prog_test_run",
- "prog_name:%s (err %d errno %d retval %d)\n",
- prog_name, err, errno, tattr.retval);
+ .repeat = 1,
+ );
- CHECK(tattr.retval != retval_expect, "retval",
- "progname:%s unexpected retval=%d expected=%d\n",
- prog_name, tattr.retval, retval_expect);
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ ASSERT_OK(err, "test_run");
+ ASSERT_EQ(topts.retval, retval_expect, "retval");
/* Extract MTU that BPF-prog got */
mtu_result = skel->bss->global_bpf_mtu_tc;
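
The check_mtu conversion above is the pattern repeated throughout this series:
the deprecated bpf_prog_test_run()/bpf_prog_test_run_xattr() calls become a
LIBBPF_OPTS-declared struct bpf_test_run_opts plus bpf_prog_test_run_opts(),
with prog_fd passed explicitly and retval/duration/data_size_out read back
from the opts struct. A condensed sketch of the new-style call (the payload
below is a stand-in for the selftests' pkt_v4):

    #include <bpf/bpf.h>
    #include <bpf/libbpf.h>

    /* Run an already-loaded program once over a dummy payload. */
    static int run_prog_once(int prog_fd)
    {
        char pkt[64] = {};      /* stand-in for pkt_v4 */
        char out[256] = {};
        LIBBPF_OPTS(bpf_test_run_opts, topts,
            .data_in       = pkt,
            .data_size_in  = sizeof(pkt),
            .data_out      = out,
            .data_size_out = sizeof(out),
            .repeat        = 1,
        );
        int err = bpf_prog_test_run_opts(prog_fd, &topts);

        /* Results land in the opts struct instead of out-parameters. */
        return err ? err : (int)topts.retval;
    }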
diff --git a/tools/testing/selftests/bpf/prog_tests/cls_redirect.c b/tools/testing/selftests/bpf/prog_tests/cls_redirect.c
index e075d03ab630..224f016b0a53 100644
--- a/tools/testing/selftests/bpf/prog_tests/cls_redirect.c
+++ b/tools/testing/selftests/bpf/prog_tests/cls_redirect.c
@@ -161,7 +161,7 @@ static socklen_t prepare_addr(struct sockaddr_storage *addr, int family)
}
}
-static bool was_decapsulated(struct bpf_prog_test_run_attr *tattr)
+static bool was_decapsulated(struct bpf_test_run_opts *tattr)
{
return tattr->data_size_out < tattr->data_size_in;
}
@@ -367,12 +367,12 @@ static void close_fds(int *fds, int n)
static void test_cls_redirect_common(struct bpf_program *prog)
{
- struct bpf_prog_test_run_attr tattr = {};
+ LIBBPF_OPTS(bpf_test_run_opts, tattr);
int families[] = { AF_INET, AF_INET6 };
struct sockaddr_storage ss;
struct sockaddr *addr;
socklen_t slen;
- int i, j, err;
+ int i, j, err, prog_fd;
int servers[__NR_KIND][ARRAY_SIZE(families)] = {};
int conns[__NR_KIND][ARRAY_SIZE(families)] = {};
struct tuple tuples[__NR_KIND][ARRAY_SIZE(families)];
@@ -394,7 +394,7 @@ static void test_cls_redirect_common(struct bpf_program *prog)
goto cleanup;
}
- tattr.prog_fd = bpf_program__fd(prog);
+ prog_fd = bpf_program__fd(prog);
for (i = 0; i < ARRAY_SIZE(tests); i++) {
struct test_cfg *test = &tests[i];
@@ -415,7 +415,7 @@ static void test_cls_redirect_common(struct bpf_program *prog)
if (CHECK_FAIL(!tattr.data_size_in))
continue;
- err = bpf_prog_test_run_xattr(&tattr);
+ err = bpf_prog_test_run_opts(prog_fd, &tattr);
if (CHECK_FAIL(err))
continue;
diff --git a/tools/testing/selftests/bpf/prog_tests/core_kern.c b/tools/testing/selftests/bpf/prog_tests/core_kern.c
index 561c5185d886..6a5a1c019a5d 100644
--- a/tools/testing/selftests/bpf/prog_tests/core_kern.c
+++ b/tools/testing/selftests/bpf/prog_tests/core_kern.c
@@ -7,8 +7,22 @@
void test_core_kern_lskel(void)
{
struct core_kern_lskel *skel;
+ int link_fd;
skel = core_kern_lskel__open_and_load();
- ASSERT_OK_PTR(skel, "open_and_load");
+ if (!ASSERT_OK_PTR(skel, "open_and_load"))
+ return;
+
+ link_fd = core_kern_lskel__core_relo_proto__attach(skel);
+ if (!ASSERT_GT(link_fd, 0, "attach(core_relo_proto)"))
+ goto cleanup;
+
+ /* trigger tracepoints */
+ usleep(1);
+ ASSERT_TRUE(skel->bss->proto_out[0], "bpf_core_type_exists");
+ ASSERT_FALSE(skel->bss->proto_out[1], "!bpf_core_type_exists");
+ ASSERT_TRUE(skel->bss->proto_out[2], "bpf_core_type_exists. nested");
+
+cleanup:
core_kern_lskel__destroy(skel);
}
diff --git a/tools/testing/selftests/bpf/prog_tests/core_kern_overflow.c b/tools/testing/selftests/bpf/prog_tests/core_kern_overflow.c
new file mode 100644
index 000000000000..04cc145bc26a
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/core_kern_overflow.c
@@ -0,0 +1,13 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include "test_progs.h"
+#include "core_kern_overflow.lskel.h"
+
+void test_core_kern_overflow_lskel(void)
+{
+ struct core_kern_overflow_lskel *skel;
+
+ skel = core_kern_overflow_lskel__open_and_load();
+ if (!ASSERT_NULL(skel, "open_and_load"))
+ core_kern_overflow_lskel__destroy(skel);
+}
diff --git a/tools/testing/selftests/bpf/prog_tests/core_reloc.c b/tools/testing/selftests/bpf/prog_tests/core_reloc.c
index b8bdd1c3efca..f28f75aa9154 100644
--- a/tools/testing/selftests/bpf/prog_tests/core_reloc.c
+++ b/tools/testing/selftests/bpf/prog_tests/core_reloc.c
@@ -2,6 +2,7 @@
#include <test_progs.h>
#include "progs/core_reloc_types.h"
#include "bpf_testmod/bpf_testmod.h"
+#include <linux/limits.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <bpf/btf.h>
@@ -511,7 +512,7 @@ static int __trigger_module_test_read(const struct core_reloc_test_case *test)
}
-static struct core_reloc_test_case test_cases[] = {
+static const struct core_reloc_test_case test_cases[] = {
/* validate we can find kernel image and use its BTF for relocs */
{
.case_name = "kernel",
@@ -836,13 +837,27 @@ static size_t roundup_page(size_t sz)
return (sz + page_size - 1) / page_size * page_size;
}
-void test_core_reloc(void)
+static int run_btfgen(const char *src_btf, const char *dst_btf, const char *objpath)
+{
+ char command[4096];
+ int n;
+
+ n = snprintf(command, sizeof(command),
+ "./bpftool gen min_core_btf %s %s %s",
+ src_btf, dst_btf, objpath);
+ if (n < 0 || n >= sizeof(command))
+ return -1;
+
+ return system(command);
+}
+
+static void run_core_reloc_tests(bool use_btfgen)
{
const size_t mmap_sz = roundup_page(sizeof(struct data));
DECLARE_LIBBPF_OPTS(bpf_object_open_opts, open_opts);
- struct core_reloc_test_case *test_case;
+ struct core_reloc_test_case *test_case, test_case_copy;
const char *tp_name, *probe_name;
- int err, i, equal;
+ int err, i, equal, fd;
struct bpf_link *link = NULL;
struct bpf_map *data_map;
struct bpf_program *prog;
@@ -854,7 +869,11 @@ void test_core_reloc(void)
my_pid_tgid = getpid() | ((uint64_t)syscall(SYS_gettid) << 32);
for (i = 0; i < ARRAY_SIZE(test_cases); i++) {
- test_case = &test_cases[i];
+ char btf_file[] = "/tmp/core_reloc.btf.XXXXXX";
+
+ test_case_copy = test_cases[i];
+ test_case = &test_case_copy;
+
if (!test__start_subtest(test_case->case_name))
continue;
@@ -863,6 +882,26 @@ void test_core_reloc(void)
continue;
}
+ /* generate a "minimal" BTF file and use it as source */
+ if (use_btfgen) {
+
+ if (!test_case->btf_src_file || test_case->fails) {
+ test__skip();
+ continue;
+ }
+
+ fd = mkstemp(btf_file);
+ if (!ASSERT_GE(fd, 0, "btf_tmp"))
+ continue;
+ close(fd); /* we only need the path */
+ err = run_btfgen(test_case->btf_src_file, btf_file,
+ test_case->bpf_obj_file);
+ if (!ASSERT_OK(err, "run_btfgen"))
+ continue;
+
+ test_case->btf_src_file = btf_file;
+ }
+
if (test_case->setup) {
err = test_case->setup(test_case);
if (CHECK(err, "test_setup", "test #%d setup failed: %d\n", i, err))
@@ -872,7 +911,7 @@ void test_core_reloc(void)
if (test_case->btf_src_file) {
err = access(test_case->btf_src_file, R_OK);
if (!ASSERT_OK(err, "btf_src_file"))
- goto cleanup;
+ continue;
}
open_opts.btf_custom_path = test_case->btf_src_file;
@@ -954,8 +993,20 @@ cleanup:
CHECK_FAIL(munmap(mmap_data, mmap_sz));
mmap_data = NULL;
}
+ if (use_btfgen)
+ remove(test_case->btf_src_file);
bpf_link__destroy(link);
link = NULL;
bpf_object__close(obj);
}
}
+
+void test_core_reloc(void)
+{
+ run_core_reloc_tests(false);
+}
+
+void test_core_reloc_btfgen(void)
+{
+ run_core_reloc_tests(true);
+}
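
The BTFGen flow added above reduces to three steps: reserve a temporary path,
shell out to "bpftool gen min_core_btf", and point open_opts.btf_custom_path
at the result. A condensed sketch combining run_btfgen() with the mkstemp()
handling (the /tmp template and ./bpftool location are carried over from the
test, not requirements of the tool):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    /* Produce minimized BTF for one BPF object; mirrors run_btfgen(). */
    static int gen_min_core_btf(const char *src_btf, const char *obj,
                                char *dst_btf, size_t dst_sz)
    {
        char cmd[4096];
        int fd, n;

        if (dst_sz < sizeof("/tmp/core_reloc.btf.XXXXXX"))
            return -1;
        strcpy(dst_btf, "/tmp/core_reloc.btf.XXXXXX");
        fd = mkstemp(dst_btf);
        if (fd < 0)
            return -1;
        close(fd);      /* only the path is needed; bpftool rewrites it */

        n = snprintf(cmd, sizeof(cmd), "./bpftool gen min_core_btf %s %s %s",
                     src_btf, dst_btf, obj);
        if (n < 0 || n >= (int)sizeof(cmd))
            return -1;
        return system(cmd);
    }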
diff --git a/tools/testing/selftests/bpf/prog_tests/custom_sec_handlers.c b/tools/testing/selftests/bpf/prog_tests/custom_sec_handlers.c
new file mode 100644
index 000000000000..b2dfc5954aea
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/custom_sec_handlers.c
@@ -0,0 +1,176 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2022 Facebook */
+
+#include <test_progs.h>
+#include "test_custom_sec_handlers.skel.h"
+
+#define COOKIE_ABC1 1
+#define COOKIE_ABC2 2
+#define COOKIE_CUSTOM 3
+#define COOKIE_FALLBACK 4
+#define COOKIE_KPROBE 5
+
+static int custom_setup_prog(struct bpf_program *prog, long cookie)
+{
+ if (cookie == COOKIE_ABC1)
+ bpf_program__set_autoload(prog, false);
+
+ return 0;
+}
+
+static int custom_prepare_load_prog(struct bpf_program *prog,
+ struct bpf_prog_load_opts *opts, long cookie)
+{
+ if (cookie == COOKIE_FALLBACK)
+ opts->prog_flags |= BPF_F_SLEEPABLE;
+ else if (cookie == COOKIE_ABC1)
+ ASSERT_FALSE(true, "unexpected preload for abc");
+
+ return 0;
+}
+
+static int custom_attach_prog(const struct bpf_program *prog, long cookie,
+ struct bpf_link **link)
+{
+ switch (cookie) {
+ case COOKIE_ABC2:
+ *link = bpf_program__attach_raw_tracepoint(prog, "sys_enter");
+ return libbpf_get_error(*link);
+ case COOKIE_CUSTOM:
+ *link = bpf_program__attach_tracepoint(prog, "syscalls", "sys_enter_nanosleep");
+ return libbpf_get_error(*link);
+ case COOKIE_KPROBE:
+ case COOKIE_FALLBACK:
+ /* no auto-attach for SEC("xyz") and SEC("kprobe") */
+ *link = NULL;
+ return 0;
+ default:
+ ASSERT_FALSE(true, "unexpected cookie");
+ return -EINVAL;
+ }
+}
+
+static int abc1_id;
+static int abc2_id;
+static int custom_id;
+static int fallback_id;
+static int kprobe_id;
+
+__attribute__((constructor))
+static void register_sec_handlers(void)
+{
+ LIBBPF_OPTS(libbpf_prog_handler_opts, abc1_opts,
+ .cookie = COOKIE_ABC1,
+ .prog_setup_fn = custom_setup_prog,
+ .prog_prepare_load_fn = custom_prepare_load_prog,
+ .prog_attach_fn = NULL,
+ );
+ LIBBPF_OPTS(libbpf_prog_handler_opts, abc2_opts,
+ .cookie = COOKIE_ABC2,
+ .prog_setup_fn = custom_setup_prog,
+ .prog_prepare_load_fn = custom_prepare_load_prog,
+ .prog_attach_fn = custom_attach_prog,
+ );
+ LIBBPF_OPTS(libbpf_prog_handler_opts, custom_opts,
+ .cookie = COOKIE_CUSTOM,
+ .prog_setup_fn = NULL,
+ .prog_prepare_load_fn = NULL,
+ .prog_attach_fn = custom_attach_prog,
+ );
+
+ abc1_id = libbpf_register_prog_handler("abc", BPF_PROG_TYPE_RAW_TRACEPOINT, 0, &abc1_opts);
+ abc2_id = libbpf_register_prog_handler("abc/", BPF_PROG_TYPE_RAW_TRACEPOINT, 0, &abc2_opts);
+ custom_id = libbpf_register_prog_handler("custom+", BPF_PROG_TYPE_TRACEPOINT, 0, &custom_opts);
+}
+
+__attribute__((destructor))
+static void unregister_sec_handlers(void)
+{
+ libbpf_unregister_prog_handler(abc1_id);
+ libbpf_unregister_prog_handler(abc2_id);
+ libbpf_unregister_prog_handler(custom_id);
+}
+
+void test_custom_sec_handlers(void)
+{
+ LIBBPF_OPTS(libbpf_prog_handler_opts, opts,
+ .prog_setup_fn = custom_setup_prog,
+ .prog_prepare_load_fn = custom_prepare_load_prog,
+ .prog_attach_fn = custom_attach_prog,
+ );
+ struct test_custom_sec_handlers* skel;
+ int err;
+
+ ASSERT_GT(abc1_id, 0, "abc1_id");
+ ASSERT_GT(abc2_id, 0, "abc2_id");
+ ASSERT_GT(custom_id, 0, "custom_id");
+
+ /* override libbpf's handling of SEC("kprobe/...") but also allow pure
+ * SEC("kprobe") due to the "kprobe+" specifier. Register it as
+ * TRACEPOINT, just for fun.
+ */
+ opts.cookie = COOKIE_KPROBE;
+ kprobe_id = libbpf_register_prog_handler("kprobe+", BPF_PROG_TYPE_TRACEPOINT, 0, &opts);
+ /* fallback treats everything as BPF_PROG_TYPE_SYSCALL program to test
+ * setting custom BPF_F_SLEEPABLE bit in preload handler
+ */
+ opts.cookie = COOKIE_FALLBACK;
+ fallback_id = libbpf_register_prog_handler(NULL, BPF_PROG_TYPE_SYSCALL, 0, &opts);
+
+ if (!ASSERT_GT(fallback_id, 0, "fallback_id") || !ASSERT_GT(kprobe_id, 0, "kprobe_id")) {
+ if (fallback_id > 0)
+ libbpf_unregister_prog_handler(fallback_id);
+ if (kprobe_id > 0)
+ libbpf_unregister_prog_handler(kprobe_id);
+ return;
+ }
+
+ /* open skeleton and validate assumptions */
+ skel = test_custom_sec_handlers__open();
+ if (!ASSERT_OK_PTR(skel, "skel_open"))
+ goto cleanup;
+
+ ASSERT_EQ(bpf_program__type(skel->progs.abc1), BPF_PROG_TYPE_RAW_TRACEPOINT, "abc1_type");
+ ASSERT_FALSE(bpf_program__autoload(skel->progs.abc1), "abc1_autoload");
+
+ ASSERT_EQ(bpf_program__type(skel->progs.abc2), BPF_PROG_TYPE_RAW_TRACEPOINT, "abc2_type");
+ ASSERT_EQ(bpf_program__type(skel->progs.custom1), BPF_PROG_TYPE_TRACEPOINT, "custom1_type");
+ ASSERT_EQ(bpf_program__type(skel->progs.custom2), BPF_PROG_TYPE_TRACEPOINT, "custom2_type");
+ ASSERT_EQ(bpf_program__type(skel->progs.kprobe1), BPF_PROG_TYPE_TRACEPOINT, "kprobe1_type");
+ ASSERT_EQ(bpf_program__type(skel->progs.xyz), BPF_PROG_TYPE_SYSCALL, "xyz_type");
+
+ skel->rodata->my_pid = getpid();
+
+ /* now attempt to load everything */
+ err = test_custom_sec_handlers__load(skel);
+ if (!ASSERT_OK(err, "skel_load"))
+ goto cleanup;
+
+ /* now try to auto-attach everything */
+ err = test_custom_sec_handlers__attach(skel);
+ if (!ASSERT_OK(err, "skel_attach"))
+ goto cleanup;
+
+ skel->links.xyz = bpf_program__attach(skel->progs.kprobe1);
+ ASSERT_EQ(errno, EOPNOTSUPP, "xyz_attach_err");
+ ASSERT_ERR_PTR(skel->links.xyz, "xyz_attach");
+
+ /* trigger programs */
+ usleep(1);
+
+ /* SEC("abc") is set to not auto-loaded */
+ ASSERT_FALSE(skel->bss->abc1_called, "abc1_called");
+ ASSERT_TRUE(skel->bss->abc2_called, "abc2_called");
+ ASSERT_TRUE(skel->bss->custom1_called, "custom1_called");
+ ASSERT_TRUE(skel->bss->custom2_called, "custom2_called");
+ /* SEC("kprobe") shouldn't be auto-attached */
+ ASSERT_FALSE(skel->bss->kprobe1_called, "kprobe1_called");
+ /* SEC("xyz") shouldn't be auto-attached */
+ ASSERT_FALSE(skel->bss->xyz_called, "xyz_called");
+
+cleanup:
+ test_custom_sec_handlers__destroy(skel);
+
+ ASSERT_OK(libbpf_unregister_prog_handler(fallback_id), "unregister_fallback");
+ ASSERT_OK(libbpf_unregister_prog_handler(kprobe_id), "unregister_kprobe");
+}
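
Stripped of the test scaffolding, the new API is a register/unregister pair
around a handler opts struct. A minimal sketch assuming only a custom attach
callback is needed; the "myprog+" section name is illustrative (the trailing
'+' matches both SEC("myprog") and SEC("myprog/...")):

    #include <bpf/libbpf.h>

    static int my_attach(const struct bpf_program *prog, long cookie,
                         struct bpf_link **link)
    {
        *link = bpf_program__attach_raw_tracepoint(prog, "sys_enter");
        return libbpf_get_error(*link);
    }

    static int handler_id;

    int register_myprog_handler(void)
    {
        LIBBPF_OPTS(libbpf_prog_handler_opts, opts,
            .prog_attach_fn = my_attach,
        );

        handler_id = libbpf_register_prog_handler("myprog+",
                                                  BPF_PROG_TYPE_RAW_TRACEPOINT,
                                                  0, &opts);
        return handler_id > 0 ? 0 : handler_id;
    }

    void unregister_myprog_handler(void)
    {
        libbpf_unregister_prog_handler(handler_id);
    }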
diff --git a/tools/testing/selftests/bpf/prog_tests/dummy_st_ops.c b/tools/testing/selftests/bpf/prog_tests/dummy_st_ops.c
index cbaa44ffb8c6..5aa52cc31dc2 100644
--- a/tools/testing/selftests/bpf/prog_tests/dummy_st_ops.c
+++ b/tools/testing/selftests/bpf/prog_tests/dummy_st_ops.c
@@ -26,10 +26,10 @@ static void test_dummy_st_ops_attach(void)
static void test_dummy_init_ret_value(void)
{
__u64 args[1] = {0};
- struct bpf_prog_test_run_attr attr = {
- .ctx_size_in = sizeof(args),
+ LIBBPF_OPTS(bpf_test_run_opts, attr,
.ctx_in = args,
- };
+ .ctx_size_in = sizeof(args),
+ );
struct dummy_st_ops *skel;
int fd, err;
@@ -38,8 +38,7 @@ static void test_dummy_init_ret_value(void)
return;
fd = bpf_program__fd(skel->progs.test_1);
- attr.prog_fd = fd;
- err = bpf_prog_test_run_xattr(&attr);
+ err = bpf_prog_test_run_opts(fd, &attr);
ASSERT_OK(err, "test_run");
ASSERT_EQ(attr.retval, 0xf2f3f4f5, "test_ret");
@@ -53,10 +52,10 @@ static void test_dummy_init_ptr_arg(void)
.val = exp_retval,
};
__u64 args[1] = {(unsigned long)&in_state};
- struct bpf_prog_test_run_attr attr = {
- .ctx_size_in = sizeof(args),
+ LIBBPF_OPTS(bpf_test_run_opts, attr,
.ctx_in = args,
- };
+ .ctx_size_in = sizeof(args),
+ );
struct dummy_st_ops *skel;
int fd, err;
@@ -65,8 +64,7 @@ static void test_dummy_init_ptr_arg(void)
return;
fd = bpf_program__fd(skel->progs.test_1);
- attr.prog_fd = fd;
- err = bpf_prog_test_run_xattr(&attr);
+ err = bpf_prog_test_run_opts(fd, &attr);
ASSERT_OK(err, "test_run");
ASSERT_EQ(in_state.val, 0x5a, "test_ptr_ret");
ASSERT_EQ(attr.retval, exp_retval, "test_ret");
@@ -77,10 +75,10 @@ static void test_dummy_init_ptr_arg(void)
static void test_dummy_multiple_args(void)
{
__u64 args[5] = {0, -100, 0x8a5f, 'c', 0x1234567887654321ULL};
- struct bpf_prog_test_run_attr attr = {
- .ctx_size_in = sizeof(args),
+ LIBBPF_OPTS(bpf_test_run_opts, attr,
.ctx_in = args,
- };
+ .ctx_size_in = sizeof(args),
+ );
struct dummy_st_ops *skel;
int fd, err;
size_t i;
@@ -91,8 +89,7 @@ static void test_dummy_multiple_args(void)
return;
fd = bpf_program__fd(skel->progs.test_2);
- attr.prog_fd = fd;
- err = bpf_prog_test_run_xattr(&attr);
+ err = bpf_prog_test_run_opts(fd, &attr);
ASSERT_OK(err, "test_run");
for (i = 0; i < ARRAY_SIZE(args); i++) {
snprintf(name, sizeof(name), "arg %zu", i);
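
Note that the dummy_st_ops conversion passes its arguments through
ctx_in/ctx_size_in rather than data_in: for struct_ops test runs the __u64
array is the argument context handed to the trampoline. A minimal wrapper
sketch, with prog_fd assumed to come from a loaded dummy_st_ops program:

    #include <bpf/bpf.h>
    #include <bpf/libbpf.h>

    /* Run a struct_ops test program once with a single u64 argument. */
    static int run_st_ops_once(int prog_fd, __u64 arg, __u32 *retval)
    {
        __u64 args[1] = { arg };
        LIBBPF_OPTS(bpf_test_run_opts, attr,
            .ctx_in      = args,
            .ctx_size_in = sizeof(args),
        );
        int err = bpf_prog_test_run_opts(prog_fd, &attr);

        if (!err)
            *retval = attr.retval;      /* the program's return value */
        return err;
    }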
diff --git a/tools/testing/selftests/bpf/prog_tests/fentry_fexit.c b/tools/testing/selftests/bpf/prog_tests/fentry_fexit.c
index 4374ac8a8a91..130f5b82d2e6 100644
--- a/tools/testing/selftests/bpf/prog_tests/fentry_fexit.c
+++ b/tools/testing/selftests/bpf/prog_tests/fentry_fexit.c
@@ -9,38 +9,34 @@ void test_fentry_fexit(void)
struct fentry_test_lskel *fentry_skel = NULL;
struct fexit_test_lskel *fexit_skel = NULL;
__u64 *fentry_res, *fexit_res;
- __u32 duration = 0, retval;
int err, prog_fd, i;
+ LIBBPF_OPTS(bpf_test_run_opts, topts);
fentry_skel = fentry_test_lskel__open_and_load();
- if (CHECK(!fentry_skel, "fentry_skel_load", "fentry skeleton failed\n"))
+ if (!ASSERT_OK_PTR(fentry_skel, "fentry_skel_load"))
goto close_prog;
fexit_skel = fexit_test_lskel__open_and_load();
- if (CHECK(!fexit_skel, "fexit_skel_load", "fexit skeleton failed\n"))
+ if (!ASSERT_OK_PTR(fexit_skel, "fexit_skel_load"))
goto close_prog;
err = fentry_test_lskel__attach(fentry_skel);
- if (CHECK(err, "fentry_attach", "fentry attach failed: %d\n", err))
+ if (!ASSERT_OK(err, "fentry_attach"))
goto close_prog;
err = fexit_test_lskel__attach(fexit_skel);
- if (CHECK(err, "fexit_attach", "fexit attach failed: %d\n", err))
+ if (!ASSERT_OK(err, "fexit_attach"))
goto close_prog;
prog_fd = fexit_skel->progs.test1.prog_fd;
- err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
- NULL, NULL, &retval, &duration);
- CHECK(err || retval, "ipv6",
- "err %d errno %d retval %d duration %d\n",
- err, errno, retval, duration);
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ ASSERT_OK(err, "ipv6 test_run");
+ ASSERT_OK(topts.retval, "ipv6 test retval");
fentry_res = (__u64 *)fentry_skel->bss;
fexit_res = (__u64 *)fexit_skel->bss;
printf("%lld\n", fentry_skel->bss->test1_result);
for (i = 0; i < 8; i++) {
- CHECK(fentry_res[i] != 1, "result",
- "fentry_test%d failed err %lld\n", i + 1, fentry_res[i]);
- CHECK(fexit_res[i] != 1, "result",
- "fexit_test%d failed err %lld\n", i + 1, fexit_res[i]);
+ ASSERT_EQ(fentry_res[i], 1, "fentry result");
+ ASSERT_EQ(fexit_res[i], 1, "fexit result");
}
close_prog:
diff --git a/tools/testing/selftests/bpf/prog_tests/fentry_test.c b/tools/testing/selftests/bpf/prog_tests/fentry_test.c
index 12921b3850d2..c0d1d61d5f66 100644
--- a/tools/testing/selftests/bpf/prog_tests/fentry_test.c
+++ b/tools/testing/selftests/bpf/prog_tests/fentry_test.c
@@ -6,9 +6,9 @@
static int fentry_test(struct fentry_test_lskel *fentry_skel)
{
int err, prog_fd, i;
- __u32 duration = 0, retval;
int link_fd;
__u64 *result;
+ LIBBPF_OPTS(bpf_test_run_opts, topts);
err = fentry_test_lskel__attach(fentry_skel);
if (!ASSERT_OK(err, "fentry_attach"))
@@ -20,10 +20,9 @@ static int fentry_test(struct fentry_test_lskel *fentry_skel)
return -1;
prog_fd = fentry_skel->progs.test1.prog_fd;
- err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
- NULL, NULL, &retval, &duration);
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
ASSERT_OK(err, "test_run");
- ASSERT_EQ(retval, 0, "test_run");
+ ASSERT_EQ(topts.retval, 0, "test_run");
result = (__u64 *)fentry_skel->bss;
for (i = 0; i < sizeof(*fentry_skel->bss) / sizeof(__u64); i++) {
diff --git a/tools/testing/selftests/bpf/prog_tests/fexit_bpf2bpf.c b/tools/testing/selftests/bpf/prog_tests/fexit_bpf2bpf.c
index c52f99f6a909..d9aad15e0d24 100644
--- a/tools/testing/selftests/bpf/prog_tests/fexit_bpf2bpf.c
+++ b/tools/testing/selftests/bpf/prog_tests/fexit_bpf2bpf.c
@@ -58,12 +58,17 @@ static void test_fexit_bpf2bpf_common(const char *obj_file,
test_cb cb)
{
struct bpf_object *obj = NULL, *tgt_obj;
- __u32 retval, tgt_prog_id, info_len;
+ __u32 tgt_prog_id, info_len;
struct bpf_prog_info prog_info = {};
struct bpf_program **prog = NULL, *p;
struct bpf_link **link = NULL;
int err, tgt_fd, i;
struct btf *btf;
+ LIBBPF_OPTS(bpf_test_run_opts, topts,
+ .data_in = &pkt_v6,
+ .data_size_in = sizeof(pkt_v6),
+ .repeat = 1,
+ );
err = bpf_prog_test_load(target_obj_file, BPF_PROG_TYPE_UNSPEC,
&tgt_obj, &tgt_fd);
@@ -132,7 +137,7 @@ static void test_fexit_bpf2bpf_common(const char *obj_file,
&link_info, &info_len);
ASSERT_OK(err, "link_fd_get_info");
ASSERT_EQ(link_info.tracing.attach_type,
- bpf_program__get_expected_attach_type(prog[i]),
+ bpf_program__expected_attach_type(prog[i]),
"link_attach_type");
ASSERT_EQ(link_info.tracing.target_obj_id, tgt_prog_id, "link_tgt_obj_id");
ASSERT_EQ(link_info.tracing.target_btf_id, btf_id, "link_tgt_btf_id");
@@ -147,10 +152,9 @@ static void test_fexit_bpf2bpf_common(const char *obj_file,
if (!run_prog)
goto close_prog;
- err = bpf_prog_test_run(tgt_fd, 1, &pkt_v6, sizeof(pkt_v6),
- NULL, NULL, &retval, NULL);
+ err = bpf_prog_test_run_opts(tgt_fd, &topts);
ASSERT_OK(err, "prog_run");
- ASSERT_EQ(retval, 0, "prog_run_ret");
+ ASSERT_EQ(topts.retval, 0, "prog_run_ret");
if (check_data_map(obj, prog_cnt, false))
goto close_prog;
@@ -225,29 +229,31 @@ static int test_second_attach(struct bpf_object *obj)
const char *tgt_obj_file = "./test_pkt_access.o";
struct bpf_program *prog = NULL;
struct bpf_object *tgt_obj;
- __u32 duration = 0, retval;
struct bpf_link *link;
int err = 0, tgt_fd;
+ LIBBPF_OPTS(bpf_test_run_opts, topts,
+ .data_in = &pkt_v6,
+ .data_size_in = sizeof(pkt_v6),
+ .repeat = 1,
+ );
prog = bpf_object__find_program_by_name(obj, prog_name);
- if (CHECK(!prog, "find_prog", "prog %s not found\n", prog_name))
+ if (!ASSERT_OK_PTR(prog, "find_prog"))
return -ENOENT;
err = bpf_prog_test_load(tgt_obj_file, BPF_PROG_TYPE_UNSPEC,
&tgt_obj, &tgt_fd);
- if (CHECK(err, "second_prog_load", "file %s err %d errno %d\n",
- tgt_obj_file, err, errno))
+ if (!ASSERT_OK(err, "second_prog_load"))
return err;
link = bpf_program__attach_freplace(prog, tgt_fd, tgt_name);
if (!ASSERT_OK_PTR(link, "second_link"))
goto out;
- err = bpf_prog_test_run(tgt_fd, 1, &pkt_v6, sizeof(pkt_v6),
- NULL, NULL, &retval, &duration);
- if (CHECK(err || retval, "ipv6",
- "err %d errno %d retval %d duration %d\n",
- err, errno, retval, duration))
+ err = bpf_prog_test_run_opts(tgt_fd, &topts);
+ if (!ASSERT_OK(err, "ipv6 test_run"))
+ goto out;
+ if (!ASSERT_OK(topts.retval, "ipv6 retval"))
goto out;
err = check_data_map(obj, 1, true);
diff --git a/tools/testing/selftests/bpf/prog_tests/fexit_stress.c b/tools/testing/selftests/bpf/prog_tests/fexit_stress.c
index e4cede6b4b2d..3ee2107bbf7a 100644
--- a/tools/testing/selftests/bpf/prog_tests/fexit_stress.c
+++ b/tools/testing/selftests/bpf/prog_tests/fexit_stress.c
@@ -10,9 +10,7 @@ void test_fexit_stress(void)
char test_skb[128] = {};
int fexit_fd[CNT] = {};
int link_fd[CNT] = {};
- __u32 duration = 0;
char error[4096];
- __u32 prog_ret;
int err, i, filter_fd;
const struct bpf_insn trace_program[] = {
@@ -36,9 +34,15 @@ void test_fexit_stress(void)
.log_size = sizeof(error),
);
+ LIBBPF_OPTS(bpf_test_run_opts, topts,
+ .data_in = test_skb,
+ .data_size_in = sizeof(test_skb),
+ .repeat = 1,
+ );
+
err = libbpf_find_vmlinux_btf_id("bpf_fentry_test1",
trace_opts.expected_attach_type);
- if (CHECK(err <= 0, "find_vmlinux_btf_id", "failed: %d\n", err))
+ if (!ASSERT_GT(err, 0, "find_vmlinux_btf_id"))
goto out;
trace_opts.attach_btf_id = err;
@@ -47,24 +51,20 @@ void test_fexit_stress(void)
trace_program,
sizeof(trace_program) / sizeof(struct bpf_insn),
&trace_opts);
- if (CHECK(fexit_fd[i] < 0, "fexit loaded",
- "failed: %d errno %d\n", fexit_fd[i], errno))
+ if (!ASSERT_GE(fexit_fd[i], 0, "fexit load"))
goto out;
link_fd[i] = bpf_raw_tracepoint_open(NULL, fexit_fd[i]);
- if (CHECK(link_fd[i] < 0, "fexit attach failed",
- "prog %d failed: %d err %d\n", i, link_fd[i], errno))
+ if (!ASSERT_GE(link_fd[i], 0, "fexit attach"))
goto out;
}
filter_fd = bpf_prog_load(BPF_PROG_TYPE_SOCKET_FILTER, NULL, "GPL",
skb_program, sizeof(skb_program) / sizeof(struct bpf_insn),
&skb_opts);
- if (CHECK(filter_fd < 0, "test_program_loaded", "failed: %d errno %d\n",
- filter_fd, errno))
+ if (!ASSERT_GE(filter_fd, 0, "test_program_loaded"))
goto out;
- err = bpf_prog_test_run(filter_fd, 1, test_skb, sizeof(test_skb), 0,
- 0, &prog_ret, 0);
+ err = bpf_prog_test_run_opts(filter_fd, &topts);
close(filter_fd);
CHECK_FAIL(err);
out:
diff --git a/tools/testing/selftests/bpf/prog_tests/fexit_test.c b/tools/testing/selftests/bpf/prog_tests/fexit_test.c
index d4887d8bb396..101b7343036b 100644
--- a/tools/testing/selftests/bpf/prog_tests/fexit_test.c
+++ b/tools/testing/selftests/bpf/prog_tests/fexit_test.c
@@ -6,9 +6,9 @@
static int fexit_test(struct fexit_test_lskel *fexit_skel)
{
int err, prog_fd, i;
- __u32 duration = 0, retval;
int link_fd;
__u64 *result;
+ LIBBPF_OPTS(bpf_test_run_opts, topts);
err = fexit_test_lskel__attach(fexit_skel);
if (!ASSERT_OK(err, "fexit_attach"))
@@ -20,10 +20,9 @@ static int fexit_test(struct fexit_test_lskel *fexit_skel)
return -1;
prog_fd = fexit_skel->progs.test1.prog_fd;
- err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
- NULL, NULL, &retval, &duration);
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
ASSERT_OK(err, "test_run");
- ASSERT_EQ(retval, 0, "test_run");
+ ASSERT_EQ(topts.retval, 0, "test_run");
result = (__u64 *)fexit_skel->bss;
for (i = 0; i < sizeof(*fexit_skel->bss) / sizeof(__u64); i++) {
diff --git a/tools/testing/selftests/bpf/prog_tests/find_vma.c b/tools/testing/selftests/bpf/prog_tests/find_vma.c
index b74b3c0c555a..5165b38f0e59 100644
--- a/tools/testing/selftests/bpf/prog_tests/find_vma.c
+++ b/tools/testing/selftests/bpf/prog_tests/find_vma.c
@@ -7,12 +7,14 @@
#include "find_vma_fail1.skel.h"
#include "find_vma_fail2.skel.h"
-static void test_and_reset_skel(struct find_vma *skel, int expected_find_zero_ret)
+static void test_and_reset_skel(struct find_vma *skel, int expected_find_zero_ret, bool need_test)
{
- ASSERT_EQ(skel->bss->found_vm_exec, 1, "found_vm_exec");
- ASSERT_EQ(skel->data->find_addr_ret, 0, "find_addr_ret");
- ASSERT_EQ(skel->data->find_zero_ret, expected_find_zero_ret, "find_zero_ret");
- ASSERT_OK_PTR(strstr(skel->bss->d_iname, "test_progs"), "find_test_progs");
+ if (need_test) {
+ ASSERT_EQ(skel->bss->found_vm_exec, 1, "found_vm_exec");
+ ASSERT_EQ(skel->data->find_addr_ret, 0, "find_addr_ret");
+ ASSERT_EQ(skel->data->find_zero_ret, expected_find_zero_ret, "find_zero_ret");
+ ASSERT_OK_PTR(strstr(skel->bss->d_iname, "test_progs"), "find_test_progs");
+ }
skel->bss->found_vm_exec = 0;
skel->data->find_addr_ret = -1;
@@ -30,17 +32,26 @@ static int open_pe(void)
attr.type = PERF_TYPE_HARDWARE;
attr.config = PERF_COUNT_HW_CPU_CYCLES;
attr.freq = 1;
- attr.sample_freq = 4000;
+ attr.sample_freq = 1000;
pfd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, PERF_FLAG_FD_CLOEXEC);
return pfd >= 0 ? pfd : -errno;
}
+static bool find_vma_pe_condition(struct find_vma *skel)
+{
+ return skel->bss->found_vm_exec == 0 ||
+ skel->data->find_addr_ret != 0 ||
+ skel->data->find_zero_ret == -1 ||
+ strcmp(skel->bss->d_iname, "test_progs") != 0;
+}
+
static void test_find_vma_pe(struct find_vma *skel)
{
struct bpf_link *link = NULL;
volatile int j = 0;
int pfd, i;
+ const int one_bn = 1000000000;
pfd = open_pe();
if (pfd < 0) {
@@ -57,10 +68,10 @@ static void test_find_vma_pe(struct find_vma *skel)
if (!ASSERT_OK_PTR(link, "attach_perf_event"))
goto cleanup;
- for (i = 0; i < 1000000; ++i)
+ for (i = 0; i < one_bn && find_vma_pe_condition(skel); ++i)
++j;
- test_and_reset_skel(skel, -EBUSY /* in nmi, irq_work is busy */);
+ test_and_reset_skel(skel, -EBUSY /* in nmi, irq_work is busy */, i == one_bn);
cleanup:
bpf_link__destroy(link);
close(pfd);
@@ -75,7 +86,7 @@ static void test_find_vma_kprobe(struct find_vma *skel)
return;
getpgid(skel->bss->target_pid);
- test_and_reset_skel(skel, -ENOENT /* could not find vma for ptr 0 */);
+ test_and_reset_skel(skel, -ENOENT /* could not find vma for ptr 0 */, true);
}
static void test_illegal_write_vma(void)
@@ -108,7 +119,6 @@ void serial_test_find_vma(void)
skel->bss->addr = (__u64)(uintptr_t)test_find_vma_pe;
test_find_vma_pe(skel);
- usleep(100000); /* allow the irq_work to finish */
test_find_vma_kprobe(skel);
find_vma__destroy(skel);
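
The find_vma change above replaces a fixed usleep() with bounded polling on
the skeleton's output, so the test no longer races the NMI-context irq_work.
The generic shape of that pattern, as a sketch (the not_ready callback stands
in for find_vma_pe_condition()):

    #include <stdbool.h>

    /* Busy-wait while not_ready() holds, up to a generous bound. Returns
     * true when the bound was exhausted, in which case the caller runs the
     * now-failing asserts to report details, mirroring the i == one_bn
     * argument passed to test_and_reset_skel() above. */
    static bool poll_timed_out(bool (*not_ready)(void))
    {
        const int bound = 1000000000;
        volatile int j = 0;
        int i;

        for (i = 0; i < bound && not_ready(); ++i)
            ++j;
        return i == bound;
    }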
diff --git a/tools/testing/selftests/bpf/prog_tests/flow_dissector.c b/tools/testing/selftests/bpf/prog_tests/flow_dissector.c
index ac54e3f91d42..0c1661ea996e 100644
--- a/tools/testing/selftests/bpf/prog_tests/flow_dissector.c
+++ b/tools/testing/selftests/bpf/prog_tests/flow_dissector.c
@@ -13,8 +13,9 @@
#endif
#define CHECK_FLOW_KEYS(desc, got, expected) \
- CHECK_ATTR(memcmp(&got, &expected, sizeof(got)) != 0, \
+ _CHECK(memcmp(&got, &expected, sizeof(got)) != 0, \
desc, \
+ topts.duration, \
"nhoff=%u/%u " \
"thoff=%u/%u " \
"addr_proto=0x%x/0x%x " \
@@ -457,7 +458,7 @@ static int init_prog_array(struct bpf_object *obj, struct bpf_map *prog_array)
if (map_fd < 0)
return -1;
- for (i = 0; i < bpf_map__def(prog_array)->max_entries; i++) {
+ for (i = 0; i < bpf_map__max_entries(prog_array); i++) {
snprintf(prog_name, sizeof(prog_name), "flow_dissector_%d", i);
prog = bpf_object__find_program_by_name(obj, prog_name);
@@ -487,7 +488,7 @@ static void run_tests_skb_less(int tap_fd, struct bpf_map *keys)
/* Keep in sync with 'flags' from eth_get_headlen. */
__u32 eth_get_headlen_flags =
BPF_FLOW_DISSECTOR_F_PARSE_1ST_FRAG;
- struct bpf_prog_test_run_attr tattr = {};
+ LIBBPF_OPTS(bpf_test_run_opts, topts);
struct bpf_flow_keys flow_keys = {};
__u32 key = (__u32)(tests[i].keys.sport) << 16 |
tests[i].keys.dport;
@@ -503,13 +504,12 @@ static void run_tests_skb_less(int tap_fd, struct bpf_map *keys)
CHECK(err < 0, "tx_tap", "err %d errno %d\n", err, errno);
err = bpf_map_lookup_elem(keys_fd, &key, &flow_keys);
- CHECK_ATTR(err, tests[i].name, "bpf_map_lookup_elem %d\n", err);
+ ASSERT_OK(err, "bpf_map_lookup_elem");
- CHECK_ATTR(err, tests[i].name, "skb-less err %d\n", err);
CHECK_FLOW_KEYS(tests[i].name, flow_keys, tests[i].keys);
err = bpf_map_delete_elem(keys_fd, &key);
- CHECK_ATTR(err, tests[i].name, "bpf_map_delete_elem %d\n", err);
+ ASSERT_OK(err, "bpf_map_delete_elem");
}
}
@@ -573,27 +573,24 @@ void test_flow_dissector(void)
for (i = 0; i < ARRAY_SIZE(tests); i++) {
struct bpf_flow_keys flow_keys;
- struct bpf_prog_test_run_attr tattr = {
- .prog_fd = prog_fd,
+ LIBBPF_OPTS(bpf_test_run_opts, topts,
.data_in = &tests[i].pkt,
.data_size_in = sizeof(tests[i].pkt),
.data_out = &flow_keys,
- };
+ );
static struct bpf_flow_keys ctx = {};
if (tests[i].flags) {
- tattr.ctx_in = &ctx;
- tattr.ctx_size_in = sizeof(ctx);
+ topts.ctx_in = &ctx;
+ topts.ctx_size_in = sizeof(ctx);
ctx.flags = tests[i].flags;
}
- err = bpf_prog_test_run_xattr(&tattr);
- CHECK_ATTR(tattr.data_size_out != sizeof(flow_keys) ||
- err || tattr.retval != 1,
- tests[i].name,
- "err %d errno %d retval %d duration %d size %u/%zu\n",
- err, errno, tattr.retval, tattr.duration,
- tattr.data_size_out, sizeof(flow_keys));
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ ASSERT_OK(err, "test_run");
+ ASSERT_EQ(topts.retval, 1, "test_run retval");
+ ASSERT_EQ(topts.data_size_out, sizeof(flow_keys),
+ "test_run data_size_out");
CHECK_FLOW_KEYS(tests[i].name, flow_keys, tests[i].keys);
}
diff --git a/tools/testing/selftests/bpf/prog_tests/flow_dissector_load_bytes.c b/tools/testing/selftests/bpf/prog_tests/flow_dissector_load_bytes.c
index 93ac3f28226c..36afb409c25f 100644
--- a/tools/testing/selftests/bpf/prog_tests/flow_dissector_load_bytes.c
+++ b/tools/testing/selftests/bpf/prog_tests/flow_dissector_load_bytes.c
@@ -5,7 +5,6 @@
void serial_test_flow_dissector_load_bytes(void)
{
struct bpf_flow_keys flow_keys;
- __u32 duration = 0, retval, size;
struct bpf_insn prog[] = {
// BPF_REG_1 - 1st argument: context
// BPF_REG_2 - 2nd argument: offset, start at first byte
@@ -27,22 +26,25 @@ void serial_test_flow_dissector_load_bytes(void)
BPF_EXIT_INSN(),
};
int fd, err;
+ LIBBPF_OPTS(bpf_test_run_opts, topts,
+ .data_in = &pkt_v4,
+ .data_size_in = sizeof(pkt_v4),
+ .data_out = &flow_keys,
+ .data_size_out = sizeof(flow_keys),
+ .repeat = 1,
+ );
/* make sure bpf_skb_load_bytes is not allowed from skb-less context
*/
fd = bpf_test_load_program(BPF_PROG_TYPE_FLOW_DISSECTOR, prog,
ARRAY_SIZE(prog), "GPL", 0, NULL, 0);
- CHECK(fd < 0,
- "flow_dissector-bpf_skb_load_bytes-load",
- "fd %d errno %d\n",
- fd, errno);
+ ASSERT_GE(fd, 0, "bpf_test_load_program good fd");
- err = bpf_prog_test_run(fd, 1, &pkt_v4, sizeof(pkt_v4),
- &flow_keys, &size, &retval, &duration);
- CHECK(size != sizeof(flow_keys) || err || retval != 1,
- "flow_dissector-bpf_skb_load_bytes",
- "err %d errno %d retval %d duration %d size %u/%zu\n",
- err, errno, retval, duration, size, sizeof(flow_keys));
+ err = bpf_prog_test_run_opts(fd, &topts);
+ ASSERT_OK(err, "test_run");
+ ASSERT_EQ(topts.data_size_out, sizeof(flow_keys),
+ "test_run data_size_out");
+ ASSERT_EQ(topts.retval, 1, "test_run retval");
if (fd >= -1)
close(fd);
diff --git a/tools/testing/selftests/bpf/prog_tests/for_each.c b/tools/testing/selftests/bpf/prog_tests/for_each.c
index 68eb12a287d4..044df13ee069 100644
--- a/tools/testing/selftests/bpf/prog_tests/for_each.c
+++ b/tools/testing/selftests/bpf/prog_tests/for_each.c
@@ -12,8 +12,13 @@ static void test_hash_map(void)
int i, err, hashmap_fd, max_entries, percpu_map_fd;
struct for_each_hash_map_elem *skel;
__u64 *percpu_valbuf = NULL;
- __u32 key, num_cpus, retval;
+ __u32 key, num_cpus;
__u64 val;
+ LIBBPF_OPTS(bpf_test_run_opts, topts,
+ .data_in = &pkt_v4,
+ .data_size_in = sizeof(pkt_v4),
+ .repeat = 1,
+ );
skel = for_each_hash_map_elem__open_and_load();
if (!ASSERT_OK_PTR(skel, "for_each_hash_map_elem__open_and_load"))
@@ -42,11 +47,10 @@ static void test_hash_map(void)
if (!ASSERT_OK(err, "percpu_map_update"))
goto out;
- err = bpf_prog_test_run(bpf_program__fd(skel->progs.test_pkt_access),
- 1, &pkt_v4, sizeof(pkt_v4), NULL, NULL,
- &retval, &duration);
- if (CHECK(err || retval, "ipv4", "err %d errno %d retval %d\n",
- err, errno, retval))
+ err = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.test_pkt_access), &topts);
+ duration = topts.duration;
+ if (CHECK(err || topts.retval, "ipv4", "err %d errno %d retval %d\n",
+ err, errno, topts.retval))
goto out;
ASSERT_EQ(skel->bss->hashmap_output, 4, "hashmap_output");
@@ -69,11 +73,16 @@ out:
static void test_array_map(void)
{
- __u32 key, num_cpus, max_entries, retval;
+ __u32 key, num_cpus, max_entries;
int i, arraymap_fd, percpu_map_fd, err;
struct for_each_array_map_elem *skel;
__u64 *percpu_valbuf = NULL;
__u64 val, expected_total;
+ LIBBPF_OPTS(bpf_test_run_opts, topts,
+ .data_in = &pkt_v4,
+ .data_size_in = sizeof(pkt_v4),
+ .repeat = 1,
+ );
skel = for_each_array_map_elem__open_and_load();
if (!ASSERT_OK_PTR(skel, "for_each_array_map_elem__open_and_load"))
@@ -106,11 +115,10 @@ static void test_array_map(void)
if (!ASSERT_OK(err, "percpu_map_update"))
goto out;
- err = bpf_prog_test_run(bpf_program__fd(skel->progs.test_pkt_access),
- 1, &pkt_v4, sizeof(pkt_v4), NULL, NULL,
- &retval, &duration);
- if (CHECK(err || retval, "ipv4", "err %d errno %d retval %d\n",
- err, errno, retval))
+ err = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.test_pkt_access), &topts);
+ duration = topts.duration;
+ if (CHECK(err || topts.retval, "ipv4", "err %d errno %d retval %d\n",
+ err, errno, topts.retval))
goto out;
ASSERT_EQ(skel->bss->arraymap_output, expected_total, "array_output");
diff --git a/tools/testing/selftests/bpf/prog_tests/get_func_args_test.c b/tools/testing/selftests/bpf/prog_tests/get_func_args_test.c
index 85c427119fe9..28cf63963cb7 100644
--- a/tools/testing/selftests/bpf/prog_tests/get_func_args_test.c
+++ b/tools/testing/selftests/bpf/prog_tests/get_func_args_test.c
@@ -5,8 +5,8 @@
void test_get_func_args_test(void)
{
struct get_func_args_test *skel = NULL;
- __u32 duration = 0, retval;
int err, prog_fd;
+ LIBBPF_OPTS(bpf_test_run_opts, topts);
skel = get_func_args_test__open_and_load();
if (!ASSERT_OK_PTR(skel, "get_func_args_test__open_and_load"))
@@ -20,19 +20,17 @@ void test_get_func_args_test(void)
* fentry/fexit programs.
*/
prog_fd = bpf_program__fd(skel->progs.test1);
- err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
- NULL, NULL, &retval, &duration);
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
ASSERT_OK(err, "test_run");
- ASSERT_EQ(retval, 0, "test_run");
+ ASSERT_EQ(topts.retval, 0, "test_run");
/* This runs bpf_modify_return_test function and triggers
* fmod_ret_test and fexit_test programs.
*/
prog_fd = bpf_program__fd(skel->progs.fmod_ret_test);
- err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
- NULL, NULL, &retval, &duration);
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
ASSERT_OK(err, "test_run");
- ASSERT_EQ(retval, 1234, "test_run");
+ ASSERT_EQ(topts.retval, 1234, "test_run");
ASSERT_EQ(skel->bss->test1_result, 1, "test1_result");
ASSERT_EQ(skel->bss->test2_result, 1, "test2_result");
diff --git a/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c b/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
index 02a465f36d59..938dbd4d7c2f 100644
--- a/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
+++ b/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
@@ -5,8 +5,8 @@
void test_get_func_ip_test(void)
{
struct get_func_ip_test *skel = NULL;
- __u32 duration = 0, retval;
int err, prog_fd;
+ LIBBPF_OPTS(bpf_test_run_opts, topts);
skel = get_func_ip_test__open();
if (!ASSERT_OK_PTR(skel, "get_func_ip_test__open"))
@@ -29,14 +29,12 @@ void test_get_func_ip_test(void)
goto cleanup;
prog_fd = bpf_program__fd(skel->progs.test1);
- err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
- NULL, NULL, &retval, &duration);
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
ASSERT_OK(err, "test_run");
- ASSERT_EQ(retval, 0, "test_run");
+ ASSERT_EQ(topts.retval, 0, "test_run");
prog_fd = bpf_program__fd(skel->progs.test5);
- err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
- NULL, NULL, &retval, &duration);
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
ASSERT_OK(err, "test_run");
diff --git a/tools/testing/selftests/bpf/prog_tests/get_stackid_cannot_attach.c b/tools/testing/selftests/bpf/prog_tests/get_stackid_cannot_attach.c
index 8d5a6023a1bb..5308de1ed478 100644
--- a/tools/testing/selftests/bpf/prog_tests/get_stackid_cannot_attach.c
+++ b/tools/testing/selftests/bpf/prog_tests/get_stackid_cannot_attach.c
@@ -27,7 +27,7 @@ void test_get_stackid_cannot_attach(void)
return;
/* override program type */
- bpf_program__set_perf_event(skel->progs.oncpu);
+ bpf_program__set_type(skel->progs.oncpu, BPF_PROG_TYPE_PERF_EVENT);
err = test_stacktrace_build_id__load(skel);
if (CHECK(err, "skel_load", "skeleton load failed: %d\n", err))
diff --git a/tools/testing/selftests/bpf/prog_tests/global_data.c b/tools/testing/selftests/bpf/prog_tests/global_data.c
index 9da131b32e13..027685858925 100644
--- a/tools/testing/selftests/bpf/prog_tests/global_data.c
+++ b/tools/testing/selftests/bpf/prog_tests/global_data.c
@@ -29,7 +29,7 @@ static void test_global_data_number(struct bpf_object *obj, __u32 duration)
{ "relocate .rodata reference", 10, ~0 },
};
- for (i = 0; i < sizeof(tests) / sizeof(tests[0]); i++) {
+ for (i = 0; i < ARRAY_SIZE(tests); i++) {
err = bpf_map_lookup_elem(map_fd, &tests[i].key, &num);
CHECK(err || num != tests[i].num, tests[i].name,
"err %d result %llx expected %llx\n",
@@ -58,7 +58,7 @@ static void test_global_data_string(struct bpf_object *obj, __u32 duration)
{ "relocate .bss reference", 4, "\0\0hello" },
};
- for (i = 0; i < sizeof(tests) / sizeof(tests[0]); i++) {
+ for (i = 0; i < ARRAY_SIZE(tests); i++) {
err = bpf_map_lookup_elem(map_fd, &tests[i].key, str);
CHECK(err || memcmp(str, tests[i].str, sizeof(str)),
tests[i].name, "err %d result \'%s\' expected \'%s\'\n",
@@ -92,7 +92,7 @@ static void test_global_data_struct(struct bpf_object *obj, __u32 duration)
{ "relocate .data reference", 3, { 41, 0xeeeeefef, 0x2111111111111111ULL, } },
};
- for (i = 0; i < sizeof(tests) / sizeof(tests[0]); i++) {
+ for (i = 0; i < ARRAY_SIZE(tests); i++) {
err = bpf_map_lookup_elem(map_fd, &tests[i].key, &val);
CHECK(err || memcmp(&val, &tests[i].val, sizeof(val)),
tests[i].name, "err %d result { %u, %u, %llu } expected { %u, %u, %llu }\n",
@@ -121,7 +121,7 @@ static void test_global_data_rdonly(struct bpf_object *obj, __u32 duration)
if (CHECK_FAIL(map_fd < 0))
return;
- buff = malloc(bpf_map__def(map)->value_size);
+ buff = malloc(bpf_map__value_size(map));
if (buff)
err = bpf_map_update_elem(map_fd, &zero, buff, 0);
free(buff);
@@ -132,24 +132,26 @@ static void test_global_data_rdonly(struct bpf_object *obj, __u32 duration)
void test_global_data(void)
{
const char *file = "./test_global_data.o";
- __u32 duration = 0, retval;
struct bpf_object *obj;
int err, prog_fd;
+ LIBBPF_OPTS(bpf_test_run_opts, topts,
+ .data_in = &pkt_v4,
+ .data_size_in = sizeof(pkt_v4),
+ .repeat = 1,
+ );
err = bpf_prog_test_load(file, BPF_PROG_TYPE_SCHED_CLS, &obj, &prog_fd);
- if (CHECK(err, "load program", "error %d loading %s\n", err, file))
+ if (!ASSERT_OK(err, "load program"))
return;
- err = bpf_prog_test_run(prog_fd, 1, &pkt_v4, sizeof(pkt_v4),
- NULL, NULL, &retval, &duration);
- CHECK(err || retval, "pass global data run",
- "err %d errno %d retval %d duration %d\n",
- err, errno, retval, duration);
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ ASSERT_OK(err, "pass global data run err");
+ ASSERT_OK(topts.retval, "pass global data run retval");
- test_global_data_number(obj, duration);
- test_global_data_string(obj, duration);
- test_global_data_struct(obj, duration);
- test_global_data_rdonly(obj, duration);
+ test_global_data_number(obj, topts.duration);
+ test_global_data_string(obj, topts.duration);
+ test_global_data_struct(obj, topts.duration);
+ test_global_data_rdonly(obj, topts.duration);
bpf_object__close(obj);
}
diff --git a/tools/testing/selftests/bpf/prog_tests/global_data_init.c b/tools/testing/selftests/bpf/prog_tests/global_data_init.c
index 1db86eab101b..57331c606964 100644
--- a/tools/testing/selftests/bpf/prog_tests/global_data_init.c
+++ b/tools/testing/selftests/bpf/prog_tests/global_data_init.c
@@ -20,7 +20,7 @@ void test_global_data_init(void)
if (CHECK_FAIL(!map || !bpf_map__is_internal(map)))
goto out;
- sz = bpf_map__def(map)->value_size;
+ sz = bpf_map__value_size(map);
newval = malloc(sz);
if (CHECK_FAIL(!newval))
goto out;
diff --git a/tools/testing/selftests/bpf/prog_tests/global_func_args.c b/tools/testing/selftests/bpf/prog_tests/global_func_args.c
index 93a2439237b0..29039a36cce5 100644
--- a/tools/testing/selftests/bpf/prog_tests/global_func_args.c
+++ b/tools/testing/selftests/bpf/prog_tests/global_func_args.c
@@ -40,19 +40,21 @@ static void test_global_func_args0(struct bpf_object *obj)
void test_global_func_args(void)
{
const char *file = "./test_global_func_args.o";
- __u32 retval;
struct bpf_object *obj;
int err, prog_fd;
+ LIBBPF_OPTS(bpf_test_run_opts, topts,
+ .data_in = &pkt_v4,
+ .data_size_in = sizeof(pkt_v4),
+ .repeat = 1,
+ );
err = bpf_prog_test_load(file, BPF_PROG_TYPE_CGROUP_SKB, &obj, &prog_fd);
if (CHECK(err, "load program", "error %d loading %s\n", err, file))
return;
- err = bpf_prog_test_run(prog_fd, 1, &pkt_v4, sizeof(pkt_v4),
- NULL, NULL, &retval, &duration);
- CHECK(err || retval, "pass global func args run",
- "err %d errno %d retval %d duration %d\n",
- err, errno, retval, duration);
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ ASSERT_OK(err, "test_run");
+ ASSERT_OK(topts.retval, "test_run retval");
test_global_func_args0(obj);
diff --git a/tools/testing/selftests/bpf/prog_tests/kfree_skb.c b/tools/testing/selftests/bpf/prog_tests/kfree_skb.c
index ce10d2fc3a6c..1cee6957285e 100644
--- a/tools/testing/selftests/bpf/prog_tests/kfree_skb.c
+++ b/tools/testing/selftests/bpf/prog_tests/kfree_skb.c
@@ -53,24 +53,24 @@ static void on_sample(void *ctx, int cpu, void *data, __u32 size)
void serial_test_kfree_skb(void)
{
struct __sk_buff skb = {};
- struct bpf_prog_test_run_attr tattr = {
+ LIBBPF_OPTS(bpf_test_run_opts, topts,
.data_in = &pkt_v6,
.data_size_in = sizeof(pkt_v6),
.ctx_in = &skb,
.ctx_size_in = sizeof(skb),
- };
+ );
struct kfree_skb *skel = NULL;
struct bpf_link *link;
struct bpf_object *obj;
struct perf_buffer *pb = NULL;
- int err;
+ int err, prog_fd;
bool passed = false;
__u32 duration = 0;
const int zero = 0;
bool test_ok[2];
err = bpf_prog_test_load("./test_pkt_access.o", BPF_PROG_TYPE_SCHED_CLS,
- &obj, &tattr.prog_fd);
+ &obj, &prog_fd);
if (CHECK(err, "prog_load sched cls", "err %d errno %d\n", err, errno))
return;
@@ -100,11 +100,9 @@ void serial_test_kfree_skb(void)
goto close_prog;
memcpy(skb.cb, &cb, sizeof(cb));
- err = bpf_prog_test_run_xattr(&tattr);
- duration = tattr.duration;
- CHECK(err || tattr.retval, "ipv6",
- "err %d errno %d retval %d duration %d\n",
- err, errno, tattr.retval, duration);
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ ASSERT_OK(err, "ipv6 test_run");
+ ASSERT_OK(topts.retval, "ipv6 test_run retval");
/* read perf buffer */
err = perf_buffer__poll(pb, 100);
diff --git a/tools/testing/selftests/bpf/prog_tests/kfunc_call.c b/tools/testing/selftests/bpf/prog_tests/kfunc_call.c
index 7d7445ccc141..c00eb974eb85 100644
--- a/tools/testing/selftests/bpf/prog_tests/kfunc_call.c
+++ b/tools/testing/selftests/bpf/prog_tests/kfunc_call.c
@@ -9,23 +9,31 @@
static void test_main(void)
{
struct kfunc_call_test_lskel *skel;
- int prog_fd, retval, err;
+ int prog_fd, err;
+ LIBBPF_OPTS(bpf_test_run_opts, topts,
+ .data_in = &pkt_v4,
+ .data_size_in = sizeof(pkt_v4),
+ .repeat = 1,
+ );
skel = kfunc_call_test_lskel__open_and_load();
if (!ASSERT_OK_PTR(skel, "skel"))
return;
prog_fd = skel->progs.kfunc_call_test1.prog_fd;
- err = bpf_prog_test_run(prog_fd, 1, &pkt_v4, sizeof(pkt_v4),
- NULL, NULL, (__u32 *)&retval, NULL);
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
ASSERT_OK(err, "bpf_prog_test_run(test1)");
- ASSERT_EQ(retval, 12, "test1-retval");
+ ASSERT_EQ(topts.retval, 12, "test1-retval");
prog_fd = skel->progs.kfunc_call_test2.prog_fd;
- err = bpf_prog_test_run(prog_fd, 1, &pkt_v4, sizeof(pkt_v4),
- NULL, NULL, (__u32 *)&retval, NULL);
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
ASSERT_OK(err, "bpf_prog_test_run(test2)");
- ASSERT_EQ(retval, 3, "test2-retval");
+ ASSERT_EQ(topts.retval, 3, "test2-retval");
+
+ prog_fd = skel->progs.kfunc_call_test_ref_btf_id.prog_fd;
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ ASSERT_OK(err, "bpf_prog_test_run(test_ref_btf_id)");
+ ASSERT_EQ(topts.retval, 0, "test_ref_btf_id-retval");
kfunc_call_test_lskel__destroy(skel);
}
@@ -33,17 +41,21 @@ static void test_main(void)
static void test_subprog(void)
{
struct kfunc_call_test_subprog *skel;
- int prog_fd, retval, err;
+ int prog_fd, err;
+ LIBBPF_OPTS(bpf_test_run_opts, topts,
+ .data_in = &pkt_v4,
+ .data_size_in = sizeof(pkt_v4),
+ .repeat = 1,
+ );
skel = kfunc_call_test_subprog__open_and_load();
if (!ASSERT_OK_PTR(skel, "skel"))
return;
prog_fd = bpf_program__fd(skel->progs.kfunc_call_test1);
- err = bpf_prog_test_run(prog_fd, 1, &pkt_v4, sizeof(pkt_v4),
- NULL, NULL, (__u32 *)&retval, NULL);
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
ASSERT_OK(err, "bpf_prog_test_run(test1)");
- ASSERT_EQ(retval, 10, "test1-retval");
+ ASSERT_EQ(topts.retval, 10, "test1-retval");
ASSERT_NEQ(skel->data->active_res, -1, "active_res");
ASSERT_EQ(skel->data->sk_state_res, BPF_TCP_CLOSE, "sk_state_res");
@@ -53,17 +65,21 @@ static void test_subprog(void)
static void test_subprog_lskel(void)
{
struct kfunc_call_test_subprog_lskel *skel;
- int prog_fd, retval, err;
+ int prog_fd, err;
+ LIBBPF_OPTS(bpf_test_run_opts, topts,
+ .data_in = &pkt_v4,
+ .data_size_in = sizeof(pkt_v4),
+ .repeat = 1,
+ );
skel = kfunc_call_test_subprog_lskel__open_and_load();
if (!ASSERT_OK_PTR(skel, "skel"))
return;
prog_fd = skel->progs.kfunc_call_test1.prog_fd;
- err = bpf_prog_test_run(prog_fd, 1, &pkt_v4, sizeof(pkt_v4),
- NULL, NULL, (__u32 *)&retval, NULL);
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
ASSERT_OK(err, "bpf_prog_test_run(test1)");
- ASSERT_EQ(retval, 10, "test1-retval");
+ ASSERT_EQ(topts.retval, 10, "test1-retval");
ASSERT_NEQ(skel->data->active_res, -1, "active_res");
ASSERT_EQ(skel->data->sk_state_res, BPF_TCP_CLOSE, "sk_state_res");
diff --git a/tools/testing/selftests/bpf/prog_tests/kprobe_multi_test.c b/tools/testing/selftests/bpf/prog_tests/kprobe_multi_test.c
new file mode 100644
index 000000000000..b9876b55fc0c
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/kprobe_multi_test.c
@@ -0,0 +1,323 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <test_progs.h>
+#include "kprobe_multi.skel.h"
+#include "trace_helpers.h"
+
+static void kprobe_multi_test_run(struct kprobe_multi *skel, bool test_return)
+{
+ LIBBPF_OPTS(bpf_test_run_opts, topts);
+ int err, prog_fd;
+
+ prog_fd = bpf_program__fd(skel->progs.trigger);
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ ASSERT_OK(err, "test_run");
+ ASSERT_EQ(topts.retval, 0, "test_run");
+
+ ASSERT_EQ(skel->bss->kprobe_test1_result, 1, "kprobe_test1_result");
+ ASSERT_EQ(skel->bss->kprobe_test2_result, 1, "kprobe_test2_result");
+ ASSERT_EQ(skel->bss->kprobe_test3_result, 1, "kprobe_test3_result");
+ ASSERT_EQ(skel->bss->kprobe_test4_result, 1, "kprobe_test4_result");
+ ASSERT_EQ(skel->bss->kprobe_test5_result, 1, "kprobe_test5_result");
+ ASSERT_EQ(skel->bss->kprobe_test6_result, 1, "kprobe_test6_result");
+ ASSERT_EQ(skel->bss->kprobe_test7_result, 1, "kprobe_test7_result");
+ ASSERT_EQ(skel->bss->kprobe_test8_result, 1, "kprobe_test8_result");
+
+ if (test_return) {
+ ASSERT_EQ(skel->bss->kretprobe_test1_result, 1, "kretprobe_test1_result");
+ ASSERT_EQ(skel->bss->kretprobe_test2_result, 1, "kretprobe_test2_result");
+ ASSERT_EQ(skel->bss->kretprobe_test3_result, 1, "kretprobe_test3_result");
+ ASSERT_EQ(skel->bss->kretprobe_test4_result, 1, "kretprobe_test4_result");
+ ASSERT_EQ(skel->bss->kretprobe_test5_result, 1, "kretprobe_test5_result");
+ ASSERT_EQ(skel->bss->kretprobe_test6_result, 1, "kretprobe_test6_result");
+ ASSERT_EQ(skel->bss->kretprobe_test7_result, 1, "kretprobe_test7_result");
+ ASSERT_EQ(skel->bss->kretprobe_test8_result, 1, "kretprobe_test8_result");
+ }
+}
+
+static void test_skel_api(void)
+{
+ struct kprobe_multi *skel = NULL;
+ int err;
+
+ skel = kprobe_multi__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "kprobe_multi__open_and_load"))
+ goto cleanup;
+
+ skel->bss->pid = getpid();
+ err = kprobe_multi__attach(skel);
+ if (!ASSERT_OK(err, "kprobe_multi__attach"))
+ goto cleanup;
+
+ kprobe_multi_test_run(skel, true);
+
+cleanup:
+ kprobe_multi__destroy(skel);
+}
+
+static void test_link_api(struct bpf_link_create_opts *opts)
+{
+ int prog_fd, link1_fd = -1, link2_fd = -1;
+ struct kprobe_multi *skel = NULL;
+
+ skel = kprobe_multi__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "fentry_raw_skel_load"))
+ goto cleanup;
+
+ skel->bss->pid = getpid();
+ prog_fd = bpf_program__fd(skel->progs.test_kprobe);
+ link1_fd = bpf_link_create(prog_fd, 0, BPF_TRACE_KPROBE_MULTI, opts);
+ if (!ASSERT_GE(link1_fd, 0, "link_fd"))
+ goto cleanup;
+
+ opts->kprobe_multi.flags = BPF_F_KPROBE_MULTI_RETURN;
+ prog_fd = bpf_program__fd(skel->progs.test_kretprobe);
+ link2_fd = bpf_link_create(prog_fd, 0, BPF_TRACE_KPROBE_MULTI, opts);
+ if (!ASSERT_GE(link2_fd, 0, "link_fd"))
+ goto cleanup;
+
+ kprobe_multi_test_run(skel, true);
+
+cleanup:
+ if (link1_fd != -1)
+ close(link1_fd);
+ if (link2_fd != -1)
+ close(link2_fd);
+ kprobe_multi__destroy(skel);
+}
+
+#define GET_ADDR(__sym, __addr) ({ \
+ __addr = ksym_get_addr(__sym); \
+ if (!ASSERT_NEQ(__addr, 0, "kallsyms load failed for " #__sym)) \
+ return; \
+})
+
+static void test_link_api_addrs(void)
+{
+ LIBBPF_OPTS(bpf_link_create_opts, opts);
+ unsigned long long addrs[8];
+
+ GET_ADDR("bpf_fentry_test1", addrs[0]);
+ GET_ADDR("bpf_fentry_test2", addrs[1]);
+ GET_ADDR("bpf_fentry_test3", addrs[2]);
+ GET_ADDR("bpf_fentry_test4", addrs[3]);
+ GET_ADDR("bpf_fentry_test5", addrs[4]);
+ GET_ADDR("bpf_fentry_test6", addrs[5]);
+ GET_ADDR("bpf_fentry_test7", addrs[6]);
+ GET_ADDR("bpf_fentry_test8", addrs[7]);
+
+ opts.kprobe_multi.addrs = (const unsigned long*) addrs;
+ opts.kprobe_multi.cnt = ARRAY_SIZE(addrs);
+ test_link_api(&opts);
+}
+
+static void test_link_api_syms(void)
+{
+ LIBBPF_OPTS(bpf_link_create_opts, opts);
+ const char *syms[8] = {
+ "bpf_fentry_test1",
+ "bpf_fentry_test2",
+ "bpf_fentry_test3",
+ "bpf_fentry_test4",
+ "bpf_fentry_test5",
+ "bpf_fentry_test6",
+ "bpf_fentry_test7",
+ "bpf_fentry_test8",
+ };
+
+ opts.kprobe_multi.syms = syms;
+ opts.kprobe_multi.cnt = ARRAY_SIZE(syms);
+ test_link_api(&opts);
+}
+
+static void
+test_attach_api(const char *pattern, struct bpf_kprobe_multi_opts *opts)
+{
+ struct bpf_link *link1 = NULL, *link2 = NULL;
+ struct kprobe_multi *skel = NULL;
+
+ skel = kprobe_multi__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "fentry_raw_skel_load"))
+ goto cleanup;
+
+ skel->bss->pid = getpid();
+ link1 = bpf_program__attach_kprobe_multi_opts(skel->progs.test_kprobe,
+ pattern, opts);
+ if (!ASSERT_OK_PTR(link1, "bpf_program__attach_kprobe_multi_opts"))
+ goto cleanup;
+
+ if (opts) {
+ opts->retprobe = true;
+ link2 = bpf_program__attach_kprobe_multi_opts(skel->progs.test_kretprobe,
+ pattern, opts);
+ if (!ASSERT_OK_PTR(link2, "bpf_program__attach_kprobe_multi_opts"))
+ goto cleanup;
+ }
+
+ kprobe_multi_test_run(skel, !!opts);
+
+cleanup:
+ bpf_link__destroy(link2);
+ bpf_link__destroy(link1);
+ kprobe_multi__destroy(skel);
+}
+
+static void test_attach_api_pattern(void)
+{
+ LIBBPF_OPTS(bpf_kprobe_multi_opts, opts);
+
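+ /* both globs match bpf_fentry_test[1-8]; the NULL-opts call
+ * exercises the plain (non-retprobe) attach path
+ */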
+ test_attach_api("bpf_fentry_test*", &opts);
+ test_attach_api("bpf_fentry_test?", NULL);
+}
+
+static void test_attach_api_addrs(void)
+{
+ LIBBPF_OPTS(bpf_kprobe_multi_opts, opts);
+ unsigned long long addrs[8];
+
+ GET_ADDR("bpf_fentry_test1", addrs[0]);
+ GET_ADDR("bpf_fentry_test2", addrs[1]);
+ GET_ADDR("bpf_fentry_test3", addrs[2]);
+ GET_ADDR("bpf_fentry_test4", addrs[3]);
+ GET_ADDR("bpf_fentry_test5", addrs[4]);
+ GET_ADDR("bpf_fentry_test6", addrs[5]);
+ GET_ADDR("bpf_fentry_test7", addrs[6]);
+ GET_ADDR("bpf_fentry_test8", addrs[7]);
+
+ opts.addrs = (const unsigned long *) addrs;
+ opts.cnt = ARRAY_SIZE(addrs);
+ test_attach_api(NULL, &opts);
+}
+
+static void test_attach_api_syms(void)
+{
+ LIBBPF_OPTS(bpf_kprobe_multi_opts, opts);
+ const char *syms[8] = {
+ "bpf_fentry_test1",
+ "bpf_fentry_test2",
+ "bpf_fentry_test3",
+ "bpf_fentry_test4",
+ "bpf_fentry_test5",
+ "bpf_fentry_test6",
+ "bpf_fentry_test7",
+ "bpf_fentry_test8",
+ };
+
+ opts.syms = syms;
+ opts.cnt = ARRAY_SIZE(syms);
+ test_attach_api(NULL, &opts);
+}
+
+static void test_attach_api_fails(void)
+{
+ LIBBPF_OPTS(bpf_kprobe_multi_opts, opts);
+ struct kprobe_multi *skel = NULL;
+ struct bpf_link *link = NULL;
+ unsigned long long addrs[2];
+ const char *syms[2] = {
+ "bpf_fentry_test1",
+ "bpf_fentry_test2",
+ };
+ __u64 cookies[2];
+
+ addrs[0] = ksym_get_addr("bpf_fentry_test1");
+ addrs[1] = ksym_get_addr("bpf_fentry_test2");
+
+ if (!ASSERT_FALSE(!addrs[0] || !addrs[1], "ksym_get_addr"))
+ goto cleanup;
+
+ skel = kprobe_multi__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "fentry_raw_skel_load"))
+ goto cleanup;
+
+ skel->bss->pid = getpid();
+
+ /* fail_1 - pattern and opts NULL */
+ link = bpf_program__attach_kprobe_multi_opts(skel->progs.test_kprobe,
+ NULL, NULL);
+ if (!ASSERT_ERR_PTR(link, "fail_1"))
+ goto cleanup;
+
+ if (!ASSERT_EQ(libbpf_get_error(link), -EINVAL, "fail_1_error"))
+ goto cleanup;
+
+ /* fail_2 - both addrs and syms set */
+ opts.addrs = (const unsigned long *) addrs;
+ opts.syms = syms;
+ opts.cnt = ARRAY_SIZE(syms);
+ opts.cookies = NULL;
+
+ link = bpf_program__attach_kprobe_multi_opts(skel->progs.test_kprobe,
+ NULL, &opts);
+ if (!ASSERT_ERR_PTR(link, "fail_2"))
+ goto cleanup;
+
+ if (!ASSERT_EQ(libbpf_get_error(link), -EINVAL, "fail_2_error"))
+ goto cleanup;
+
+ /* fail_3 - pattern and addrs set */
+ opts.addrs = (const unsigned long *) addrs;
+ opts.syms = NULL;
+ opts.cnt = ARRAY_SIZE(syms);
+ opts.cookies = NULL;
+
+ link = bpf_program__attach_kprobe_multi_opts(skel->progs.test_kprobe,
+ "ksys_*", &opts);
+ if (!ASSERT_ERR_PTR(link, "fail_3"))
+ goto cleanup;
+
+ if (!ASSERT_EQ(libbpf_get_error(link), -EINVAL, "fail_3_error"))
+ goto cleanup;
+
+ /* fail_4 - pattern and cnt set */
+ opts.addrs = NULL;
+ opts.syms = NULL;
+ opts.cnt = ARRAY_SIZE(syms);
+ opts.cookies = NULL;
+
+ link = bpf_program__attach_kprobe_multi_opts(skel->progs.test_kprobe,
+ "ksys_*", &opts);
+ if (!ASSERT_ERR_PTR(link, "fail_4"))
+ goto cleanup;
+
+ if (!ASSERT_EQ(libbpf_get_error(link), -EINVAL, "fail_4_error"))
+ goto cleanup;
+
+ /* fail_5 - pattern and cookies */
+ opts.addrs = NULL;
+ opts.syms = NULL;
+ opts.cnt = 0;
+ opts.cookies = cookies;
+
+ link = bpf_program__attach_kprobe_multi_opts(skel->progs.test_kprobe,
+ "ksys_*", &opts);
+ if (!ASSERT_ERR_PTR(link, "fail_5"))
+ goto cleanup;
+
+ if (!ASSERT_EQ(libbpf_get_error(link), -EINVAL, "fail_5_error"))
+ goto cleanup;
+
+cleanup:
+ bpf_link__destroy(link);
+ kprobe_multi__destroy(skel);
+}
+
+void test_kprobe_multi_test(void)
+{
+ if (!ASSERT_OK(load_kallsyms(), "load_kallsyms"))
+ return;
+
+ if (test__start_subtest("skel_api"))
+ test_skel_api();
+ if (test__start_subtest("link_api_addrs"))
+ test_link_api_syms();
+ if (test__start_subtest("link_api_syms"))
+ test_link_api_addrs();
+ if (test__start_subtest("attach_api_pattern"))
+ test_attach_api_pattern();
+ if (test__start_subtest("attach_api_addrs"))
+ test_attach_api_addrs();
+ if (test__start_subtest("attach_api_syms"))
+ test_attach_api_syms();
+ if (test__start_subtest("attach_api_fails"))
+ test_attach_api_fails();
+}
diff --git a/tools/testing/selftests/bpf/prog_tests/ksyms_module.c b/tools/testing/selftests/bpf/prog_tests/ksyms_module.c
index d490ad80eccb..a1ebac70ec29 100644
--- a/tools/testing/selftests/bpf/prog_tests/ksyms_module.c
+++ b/tools/testing/selftests/bpf/prog_tests/ksyms_module.c
@@ -6,11 +6,15 @@
#include "test_ksyms_module.lskel.h"
#include "test_ksyms_module.skel.h"
-void test_ksyms_module_lskel(void)
+static void test_ksyms_module_lskel(void)
{
struct test_ksyms_module_lskel *skel;
- int retval;
int err;
+ LIBBPF_OPTS(bpf_test_run_opts, topts,
+ .data_in = &pkt_v4,
+ .data_size_in = sizeof(pkt_v4),
+ .repeat = 1,
+ );
if (!env.has_testmod) {
test__skip();
@@ -20,20 +24,24 @@ void test_ksyms_module_lskel(void)
skel = test_ksyms_module_lskel__open_and_load();
if (!ASSERT_OK_PTR(skel, "test_ksyms_module_lskel__open_and_load"))
return;
- err = bpf_prog_test_run(skel->progs.load.prog_fd, 1, &pkt_v4, sizeof(pkt_v4),
- NULL, NULL, (__u32 *)&retval, NULL);
+ err = bpf_prog_test_run_opts(skel->progs.load.prog_fd, &topts);
if (!ASSERT_OK(err, "bpf_prog_test_run"))
goto cleanup;
- ASSERT_EQ(retval, 0, "retval");
+ ASSERT_EQ(topts.retval, 0, "retval");
ASSERT_EQ(skel->bss->out_bpf_testmod_ksym, 42, "bpf_testmod_ksym");
cleanup:
test_ksyms_module_lskel__destroy(skel);
}
-void test_ksyms_module_libbpf(void)
+static void test_ksyms_module_libbpf(void)
{
struct test_ksyms_module *skel;
- int retval, err;
+ int err;
+ LIBBPF_OPTS(bpf_test_run_opts, topts,
+ .data_in = &pkt_v4,
+ .data_size_in = sizeof(pkt_v4),
+ .repeat = 1,
+ );
if (!env.has_testmod) {
test__skip();
@@ -43,11 +51,10 @@ void test_ksyms_module_libbpf(void)
skel = test_ksyms_module__open_and_load();
if (!ASSERT_OK_PTR(skel, "test_ksyms_module__open"))
return;
- err = bpf_prog_test_run(bpf_program__fd(skel->progs.load), 1, &pkt_v4,
- sizeof(pkt_v4), NULL, NULL, (__u32 *)&retval, NULL);
+ err = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.load), &topts);
if (!ASSERT_OK(err, "bpf_prog_test_run"))
goto cleanup;
- ASSERT_EQ(retval, 0, "retval");
+ ASSERT_EQ(topts.retval, 0, "retval");
ASSERT_EQ(skel->bss->out_bpf_testmod_ksym, 42, "bpf_testmod_ksym");
cleanup:
test_ksyms_module__destroy(skel);
diff --git a/tools/testing/selftests/bpf/prog_tests/l4lb_all.c b/tools/testing/selftests/bpf/prog_tests/l4lb_all.c
index 540ef28fabff..55f733ff4109 100644
--- a/tools/testing/selftests/bpf/prog_tests/l4lb_all.c
+++ b/tools/testing/selftests/bpf/prog_tests/l4lb_all.c
@@ -23,12 +23,16 @@ static void test_l4lb(const char *file)
__u8 flags;
} real_def = {.dst = MAGIC_VAL};
__u32 ch_key = 11, real_num = 3;
- __u32 duration, retval, size;
int err, i, prog_fd, map_fd;
__u64 bytes = 0, pkts = 0;
struct bpf_object *obj;
char buf[128];
u32 *magic = (u32 *)buf;
+ LIBBPF_OPTS(bpf_test_run_opts, topts,
+ .data_out = buf,
+ .data_size_out = sizeof(buf),
+ .repeat = NUM_ITER,
+ );
err = bpf_prog_test_load(file, BPF_PROG_TYPE_SCHED_CLS, &obj, &prog_fd);
if (CHECK_FAIL(err))
@@ -49,19 +53,24 @@ static void test_l4lb(const char *file)
goto out;
bpf_map_update_elem(map_fd, &real_num, &real_def, 0);
- err = bpf_prog_test_run(prog_fd, NUM_ITER, &pkt_v4, sizeof(pkt_v4),
- buf, &size, &retval, &duration);
- CHECK(err || retval != 7/*TC_ACT_REDIRECT*/ || size != 54 ||
- *magic != MAGIC_VAL, "ipv4",
- "err %d errno %d retval %d size %d magic %x\n",
- err, errno, retval, size, *magic);
+ topts.data_in = &pkt_v4;
+ topts.data_size_in = sizeof(pkt_v4);
- err = bpf_prog_test_run(prog_fd, NUM_ITER, &pkt_v6, sizeof(pkt_v6),
- buf, &size, &retval, &duration);
- CHECK(err || retval != 7/*TC_ACT_REDIRECT*/ || size != 74 ||
- *magic != MAGIC_VAL, "ipv6",
- "err %d errno %d retval %d size %d magic %x\n",
- err, errno, retval, size, *magic);
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ ASSERT_OK(err, "test_run");
+ ASSERT_EQ(topts.retval, 7 /*TC_ACT_REDIRECT*/, "ipv4 test_run retval");
+ ASSERT_EQ(topts.data_size_out, 54, "ipv4 test_run data_size_out");
+ ASSERT_EQ(*magic, MAGIC_VAL, "ipv4 magic");
+
+ topts.data_in = &pkt_v6;
+ topts.data_size_in = sizeof(pkt_v6);
+ topts.data_size_out = sizeof(buf); /* reset out size */
+
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ ASSERT_OK(err, "test_run");
+ ASSERT_EQ(topts.retval, 7 /*TC_ACT_REDIRECT*/, "ipv6 test_run retval");
+ ASSERT_EQ(topts.data_size_out, 74, "ipv6 test_run data_size_out");
+ ASSERT_EQ(*magic, MAGIC_VAL, "ipv6 magic");
map_fd = bpf_find_map(__func__, obj, "stats");
if (map_fd < 0)
diff --git a/tools/testing/selftests/bpf/prog_tests/log_buf.c b/tools/testing/selftests/bpf/prog_tests/log_buf.c
index e469b023962b..fe9a23e65ef4 100644
--- a/tools/testing/selftests/bpf/prog_tests/log_buf.c
+++ b/tools/testing/selftests/bpf/prog_tests/log_buf.c
@@ -78,7 +78,7 @@ static void obj_load_log_buf(void)
ASSERT_OK_PTR(strstr(libbpf_log_buf, "prog 'bad_prog': BPF program load failed"),
"libbpf_log_not_empty");
ASSERT_OK_PTR(strstr(obj_log_buf, "DATASEC license"), "obj_log_not_empty");
- ASSERT_OK_PTR(strstr(good_log_buf, "0: R1=ctx(id=0,off=0,imm=0) R10=fp0"),
+ ASSERT_OK_PTR(strstr(good_log_buf, "0: R1=ctx(off=0,imm=0) R10=fp0"),
"good_log_verbose");
ASSERT_OK_PTR(strstr(bad_log_buf, "invalid access to map value, value_size=16 off=16000 size=4"),
"bad_log_not_empty");
@@ -175,7 +175,7 @@ static void bpf_prog_load_log_buf(void)
opts.log_level = 2;
fd = bpf_prog_load(BPF_PROG_TYPE_SOCKET_FILTER, "good_prog", "GPL",
good_prog_insns, good_prog_insn_cnt, &opts);
- ASSERT_OK_PTR(strstr(log_buf, "0: R1=ctx(id=0,off=0,imm=0) R10=fp0"), "good_log_2");
+ ASSERT_OK_PTR(strstr(log_buf, "0: R1=ctx(off=0,imm=0) R10=fp0"), "good_log_2");
ASSERT_GE(fd, 0, "good_fd2");
if (fd >= 0)
close(fd);
@@ -202,7 +202,7 @@ static void bpf_btf_load_log_buf(void)
const void *raw_btf_data;
__u32 raw_btf_size;
struct btf *btf;
- char *log_buf;
+ char *log_buf = NULL;
int fd = -1;
btf = btf__new_empty();
diff --git a/tools/testing/selftests/bpf/prog_tests/map_lock.c b/tools/testing/selftests/bpf/prog_tests/map_lock.c
index 23d19e9cf26a..e4e99b37df64 100644
--- a/tools/testing/selftests/bpf/prog_tests/map_lock.c
+++ b/tools/testing/selftests/bpf/prog_tests/map_lock.c
@@ -4,14 +4,17 @@
static void *spin_lock_thread(void *arg)
{
- __u32 duration, retval;
int err, prog_fd = *(u32 *) arg;
+ LIBBPF_OPTS(bpf_test_run_opts, topts,
+ .data_in = &pkt_v4,
+ .data_size_in = sizeof(pkt_v4),
+ .repeat = 10000,
+ );
+
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ ASSERT_OK(err, "test_run_opts err");
+ ASSERT_OK(topts.retval, "test_run_opts retval");
- err = bpf_prog_test_run(prog_fd, 10000, &pkt_v4, sizeof(pkt_v4),
- NULL, NULL, &retval, &duration);
- CHECK(err || retval, "",
- "err %d errno %d retval %d duration %d\n",
- err, errno, retval, duration);
pthread_exit(arg);
}
diff --git a/tools/testing/selftests/bpf/prog_tests/map_ptr.c b/tools/testing/selftests/bpf/prog_tests/map_ptr.c
index 273725504f11..43e502acf050 100644
--- a/tools/testing/selftests/bpf/prog_tests/map_ptr.c
+++ b/tools/testing/selftests/bpf/prog_tests/map_ptr.c
@@ -9,10 +9,16 @@
void test_map_ptr(void)
{
struct map_ptr_kern_lskel *skel;
- __u32 duration = 0, retval;
char buf[128];
int err;
int page_size = getpagesize();
+ LIBBPF_OPTS(bpf_test_run_opts, topts,
+ .data_in = &pkt_v4,
+ .data_size_in = sizeof(pkt_v4),
+ .data_out = buf,
+ .data_size_out = sizeof(buf),
+ .repeat = 1,
+ );
skel = map_ptr_kern_lskel__open();
if (!ASSERT_OK_PTR(skel, "skel_open"))
@@ -26,14 +32,12 @@ void test_map_ptr(void)
skel->bss->page_size = page_size;
- err = bpf_prog_test_run(skel->progs.cg_skb.prog_fd, 1, &pkt_v4,
- sizeof(pkt_v4), buf, NULL, &retval, NULL);
+ err = bpf_prog_test_run_opts(skel->progs.cg_skb.prog_fd, &topts);
- if (CHECK(err, "test_run", "err=%d errno=%d\n", err, errno))
+ if (!ASSERT_OK(err, "test_run"))
goto cleanup;
- if (CHECK(!retval, "retval", "retval=%d map_type=%u line=%u\n", retval,
- skel->bss->g_map_type, skel->bss->g_line))
+ if (!ASSERT_NEQ(topts.retval, 0, "test_run retval"))
goto cleanup;
cleanup:
diff --git a/tools/testing/selftests/bpf/prog_tests/modify_return.c b/tools/testing/selftests/bpf/prog_tests/modify_return.c
index b772fe30ce9b..5d9955af6247 100644
--- a/tools/testing/selftests/bpf/prog_tests/modify_return.c
+++ b/tools/testing/selftests/bpf/prog_tests/modify_return.c
@@ -15,39 +15,31 @@ static void run_test(__u32 input_retval, __u16 want_side_effect, __s16 want_ret)
{
struct modify_return *skel = NULL;
int err, prog_fd;
- __u32 duration = 0, retval;
__u16 side_effect;
__s16 ret;
+ LIBBPF_OPTS(bpf_test_run_opts, topts);
skel = modify_return__open_and_load();
- if (CHECK(!skel, "skel_load", "modify_return skeleton failed\n"))
+ if (!ASSERT_OK_PTR(skel, "skel_load"))
goto cleanup;
err = modify_return__attach(skel);
- if (CHECK(err, "modify_return", "attach failed: %d\n", err))
+ if (!ASSERT_OK(err, "modify_return__attach failed"))
goto cleanup;
skel->bss->input_retval = input_retval;
prog_fd = bpf_program__fd(skel->progs.fmod_ret_test);
- err = bpf_prog_test_run(prog_fd, 1, NULL, 0, NULL, 0,
- &retval, &duration);
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ ASSERT_OK(err, "test_run");
- CHECK(err, "test_run", "err %d errno %d\n", err, errno);
+ side_effect = UPPER(topts.retval);
+ ret = LOWER(topts.retval);
- side_effect = UPPER(retval);
- ret = LOWER(retval);
-
- CHECK(ret != want_ret, "test_run",
- "unexpected ret: %d, expected: %d\n", ret, want_ret);
- CHECK(side_effect != want_side_effect, "modify_return",
- "unexpected side_effect: %d\n", side_effect);
-
- CHECK(skel->bss->fentry_result != 1, "modify_return",
- "fentry failed\n");
- CHECK(skel->bss->fexit_result != 1, "modify_return",
- "fexit failed\n");
- CHECK(skel->bss->fmod_ret_result != 1, "modify_return",
- "fmod_ret failed\n");
+ ASSERT_EQ(ret, want_ret, "test_run ret");
+ ASSERT_EQ(side_effect, want_side_effect, "modify_return side_effect");
+ ASSERT_EQ(skel->bss->fentry_result, 1, "modify_return fentry_result");
+ ASSERT_EQ(skel->bss->fexit_result, 1, "modify_return fexit_result");
+ ASSERT_EQ(skel->bss->fmod_ret_result, 1, "modify_return fmod_ret_result");
cleanup:
modify_return__destroy(skel);
@@ -63,4 +55,3 @@ void serial_test_modify_return(void)
0 /* want_side_effect */,
-EINVAL /* want_ret */);
}
-
diff --git a/tools/testing/selftests/bpf/prog_tests/obj_name.c b/tools/testing/selftests/bpf/prog_tests/obj_name.c
index 6194b776a28b..7093edca6e08 100644
--- a/tools/testing/selftests/bpf/prog_tests/obj_name.c
+++ b/tools/testing/selftests/bpf/prog_tests/obj_name.c
@@ -20,7 +20,7 @@ void test_obj_name(void)
__u32 duration = 0;
int i;
- for (i = 0; i < sizeof(tests) / sizeof(tests[0]); i++) {
+ for (i = 0; i < ARRAY_SIZE(tests); i++) {
size_t name_len = strlen(tests[i].name) + 1;
union bpf_attr attr;
size_t ncopy;
diff --git a/tools/testing/selftests/bpf/prog_tests/perf_branches.c b/tools/testing/selftests/bpf/prog_tests/perf_branches.c
index 12c4f45cee1a..bc24f83339d6 100644
--- a/tools/testing/selftests/bpf/prog_tests/perf_branches.c
+++ b/tools/testing/selftests/bpf/prog_tests/perf_branches.c
@@ -110,7 +110,7 @@ static void test_perf_branches_hw(void)
attr.type = PERF_TYPE_HARDWARE;
attr.config = PERF_COUNT_HW_CPU_CYCLES;
attr.freq = 1;
- attr.sample_freq = 4000;
+ attr.sample_freq = 1000;
attr.sample_type = PERF_SAMPLE_BRANCH_STACK;
attr.branch_sample_type = PERF_SAMPLE_BRANCH_USER | PERF_SAMPLE_BRANCH_ANY;
pfd = syscall(__NR_perf_event_open, &attr, -1, 0, -1, PERF_FLAG_FD_CLOEXEC);
@@ -151,7 +151,7 @@ static void test_perf_branches_no_hw(void)
attr.type = PERF_TYPE_SOFTWARE;
attr.config = PERF_COUNT_SW_CPU_CLOCK;
attr.freq = 1;
- attr.sample_freq = 4000;
+ attr.sample_freq = 1000;
pfd = syscall(__NR_perf_event_open, &attr, -1, 0, -1, PERF_FLAG_FD_CLOEXEC);
if (CHECK(pfd < 0, "perf_event_open", "err %d\n", pfd))
return;
diff --git a/tools/testing/selftests/bpf/prog_tests/perf_link.c b/tools/testing/selftests/bpf/prog_tests/perf_link.c
index ede07344f264..224eba6fef2e 100644
--- a/tools/testing/selftests/bpf/prog_tests/perf_link.c
+++ b/tools/testing/selftests/bpf/prog_tests/perf_link.c
@@ -39,7 +39,7 @@ void serial_test_perf_link(void)
attr.type = PERF_TYPE_SOFTWARE;
attr.config = PERF_COUNT_SW_CPU_CLOCK;
attr.freq = 1;
- attr.sample_freq = 4000;
+ attr.sample_freq = 1000;
pfd = syscall(__NR_perf_event_open, &attr, -1, 0, -1, PERF_FLAG_FD_CLOEXEC);
if (!ASSERT_GE(pfd, 0, "perf_fd"))
goto cleanup;
diff --git a/tools/testing/selftests/bpf/prog_tests/pkt_access.c b/tools/testing/selftests/bpf/prog_tests/pkt_access.c
index 6628710ec3c6..0bcccdc34fbc 100644
--- a/tools/testing/selftests/bpf/prog_tests/pkt_access.c
+++ b/tools/testing/selftests/bpf/prog_tests/pkt_access.c
@@ -6,23 +6,27 @@ void test_pkt_access(void)
{
const char *file = "./test_pkt_access.o";
struct bpf_object *obj;
- __u32 duration, retval;
int err, prog_fd;
+ LIBBPF_OPTS(bpf_test_run_opts, topts,
+ .data_in = &pkt_v4,
+ .data_size_in = sizeof(pkt_v4),
+ .repeat = 100000,
+ );
err = bpf_prog_test_load(file, BPF_PROG_TYPE_SCHED_CLS, &obj, &prog_fd);
if (CHECK_FAIL(err))
return;
- err = bpf_prog_test_run(prog_fd, 100000, &pkt_v4, sizeof(pkt_v4),
- NULL, NULL, &retval, &duration);
- CHECK(err || retval, "ipv4",
- "err %d errno %d retval %d duration %d\n",
- err, errno, retval, duration);
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ ASSERT_OK(err, "ipv4 test_run_opts err");
+ ASSERT_OK(topts.retval, "ipv4 test_run_opts retval");
+
+ topts.data_in = &pkt_v6;
+ topts.data_size_in = sizeof(pkt_v6);
+ topts.data_size_out = 0; /* reset from last call */
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ ASSERT_OK(err, "ipv6 test_run_opts err");
+ ASSERT_OK(topts.retval, "ipv6 test_run_opts retval");
- err = bpf_prog_test_run(prog_fd, 100000, &pkt_v6, sizeof(pkt_v6),
- NULL, NULL, &retval, &duration);
- CHECK(err || retval, "ipv6",
- "err %d errno %d retval %d duration %d\n",
- err, errno, retval, duration);
bpf_object__close(obj);
}
diff --git a/tools/testing/selftests/bpf/prog_tests/pkt_md_access.c b/tools/testing/selftests/bpf/prog_tests/pkt_md_access.c
index c9d2d6a1bfcc..00ee1dd792aa 100644
--- a/tools/testing/selftests/bpf/prog_tests/pkt_md_access.c
+++ b/tools/testing/selftests/bpf/prog_tests/pkt_md_access.c
@@ -6,18 +6,20 @@ void test_pkt_md_access(void)
{
const char *file = "./test_pkt_md_access.o";
struct bpf_object *obj;
- __u32 duration, retval;
int err, prog_fd;
+ LIBBPF_OPTS(bpf_test_run_opts, topts,
+ .data_in = &pkt_v4,
+ .data_size_in = sizeof(pkt_v4),
+ .repeat = 10,
+ );
err = bpf_prog_test_load(file, BPF_PROG_TYPE_SCHED_CLS, &obj, &prog_fd);
if (CHECK_FAIL(err))
return;
- err = bpf_prog_test_run(prog_fd, 10, &pkt_v4, sizeof(pkt_v4),
- NULL, NULL, &retval, &duration);
- CHECK(err || retval, "",
- "err %d errno %d retval %d duration %d\n",
- err, errno, retval, duration);
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ ASSERT_OK(err, "test_run_opts err");
+ ASSERT_OK(topts.retval, "test_run_opts retval");
bpf_object__close(obj);
}
diff --git a/tools/testing/selftests/bpf/prog_tests/prog_run_opts.c b/tools/testing/selftests/bpf/prog_tests/prog_run_opts.c
new file mode 100644
index 000000000000..1ccd2bdf8fa8
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/prog_run_opts.c
@@ -0,0 +1,77 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <test_progs.h>
+#include <network_helpers.h>
+
+#include "test_pkt_access.skel.h"
+
+static const __u32 duration;
+
+static void check_run_cnt(int prog_fd, __u64 run_cnt)
+{
+ struct bpf_prog_info info = {};
+ __u32 info_len = sizeof(info);
+ int err;
+
+ err = bpf_obj_get_info_by_fd(prog_fd, &info, &info_len);
+ if (CHECK(err, "get_prog_info", "failed to get bpf_prog_info for fd %d\n", prog_fd))
+ return;
+
+ CHECK(run_cnt != info.run_cnt, "run_cnt",
+ "incorrect number of repetitions, want %llu have %llu\n", run_cnt, info.run_cnt);
+}
+
+void test_prog_run_opts(void)
+{
+ struct test_pkt_access *skel;
+ int err, stats_fd = -1, prog_fd;
+ char buf[10] = {};
+ __u64 run_cnt = 0;
+
+ LIBBPF_OPTS(bpf_test_run_opts, topts,
+ .repeat = 1,
+ .data_in = &pkt_v4,
+ .data_size_in = sizeof(pkt_v4),
+ .data_out = buf,
+ .data_size_out = 5,
+ );
+
+ stats_fd = bpf_enable_stats(BPF_STATS_RUN_TIME);
+ if (!ASSERT_GE(stats_fd, 0, "enable_stats good fd"))
+ return;
+
+ skel = test_pkt_access__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "open_and_load"))
+ goto cleanup;
+
+ prog_fd = bpf_program__fd(skel->progs.test_pkt_access);
+
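+ /* data_size_out (5) is deliberately smaller than the packet:
+ * expect -ENOSPC with the real output size reported back
+ */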
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ ASSERT_EQ(errno, ENOSPC, "test_run errno");
+ ASSERT_ERR(err, "test_run");
+ ASSERT_OK(topts.retval, "test_run retval");
+
+ ASSERT_EQ(topts.data_size_out, sizeof(pkt_v4), "test_run data_size_out");
+ ASSERT_EQ(buf[5], 0, "overflow, BPF_PROG_TEST_RUN ignored size hint");
+
+ run_cnt += topts.repeat;
+ check_run_cnt(prog_fd, run_cnt);
+
+ topts.data_out = NULL;
+ topts.data_size_out = 0;
+ topts.repeat = 2;
+ errno = 0;
+
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ ASSERT_OK(errno, "run_no_output errno");
+ ASSERT_OK(err, "run_no_output err");
+ ASSERT_OK(topts.retval, "run_no_output retval");
+
+ run_cnt += topts.repeat;
+ check_run_cnt(prog_fd, run_cnt);
+
+cleanup:
+ if (skel)
+ test_pkt_access__destroy(skel);
+ if (stats_fd >= 0)
+ close(stats_fd);
+}
diff --git a/tools/testing/selftests/bpf/prog_tests/prog_run_xattr.c b/tools/testing/selftests/bpf/prog_tests/prog_run_xattr.c
deleted file mode 100644
index 89fc98faf19e..000000000000
--- a/tools/testing/selftests/bpf/prog_tests/prog_run_xattr.c
+++ /dev/null
@@ -1,83 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-#include <test_progs.h>
-#include <network_helpers.h>
-
-#include "test_pkt_access.skel.h"
-
-static const __u32 duration;
-
-static void check_run_cnt(int prog_fd, __u64 run_cnt)
-{
- struct bpf_prog_info info = {};
- __u32 info_len = sizeof(info);
- int err;
-
- err = bpf_obj_get_info_by_fd(prog_fd, &info, &info_len);
- if (CHECK(err, "get_prog_info", "failed to get bpf_prog_info for fd %d\n", prog_fd))
- return;
-
- CHECK(run_cnt != info.run_cnt, "run_cnt",
- "incorrect number of repetitions, want %llu have %llu\n", run_cnt, info.run_cnt);
-}
-
-void test_prog_run_xattr(void)
-{
- struct test_pkt_access *skel;
- int err, stats_fd = -1;
- char buf[10] = {};
- __u64 run_cnt = 0;
-
- struct bpf_prog_test_run_attr tattr = {
- .repeat = 1,
- .data_in = &pkt_v4,
- .data_size_in = sizeof(pkt_v4),
- .data_out = buf,
- .data_size_out = 5,
- };
-
- stats_fd = bpf_enable_stats(BPF_STATS_RUN_TIME);
- if (CHECK_ATTR(stats_fd < 0, "enable_stats", "failed %d\n", errno))
- return;
-
- skel = test_pkt_access__open_and_load();
- if (CHECK_ATTR(!skel, "open_and_load", "failed\n"))
- goto cleanup;
-
- tattr.prog_fd = bpf_program__fd(skel->progs.test_pkt_access);
-
- err = bpf_prog_test_run_xattr(&tattr);
- CHECK_ATTR(err >= 0 || errno != ENOSPC || tattr.retval, "run",
- "err %d errno %d retval %d\n", err, errno, tattr.retval);
-
- CHECK_ATTR(tattr.data_size_out != sizeof(pkt_v4), "data_size_out",
- "incorrect output size, want %zu have %u\n",
- sizeof(pkt_v4), tattr.data_size_out);
-
- CHECK_ATTR(buf[5] != 0, "overflow",
- "BPF_PROG_TEST_RUN ignored size hint\n");
-
- run_cnt += tattr.repeat;
- check_run_cnt(tattr.prog_fd, run_cnt);
-
- tattr.data_out = NULL;
- tattr.data_size_out = 0;
- tattr.repeat = 2;
- errno = 0;
-
- err = bpf_prog_test_run_xattr(&tattr);
- CHECK_ATTR(err || errno || tattr.retval, "run_no_output",
- "err %d errno %d retval %d\n", err, errno, tattr.retval);
-
- tattr.data_size_out = 1;
- err = bpf_prog_test_run_xattr(&tattr);
- CHECK_ATTR(err != -EINVAL, "run_wrong_size_out", "err %d\n", err);
-
- run_cnt += tattr.repeat;
- check_run_cnt(tattr.prog_fd, run_cnt);
-
-cleanup:
- if (skel)
- test_pkt_access__destroy(skel);
- if (stats_fd >= 0)
- close(stats_fd);
-}
diff --git a/tools/testing/selftests/bpf/prog_tests/queue_stack_map.c b/tools/testing/selftests/bpf/prog_tests/queue_stack_map.c
index b9822f914eeb..d2743fc10032 100644
--- a/tools/testing/selftests/bpf/prog_tests/queue_stack_map.c
+++ b/tools/testing/selftests/bpf/prog_tests/queue_stack_map.c
@@ -10,11 +10,18 @@ enum {
static void test_queue_stack_map_by_type(int type)
{
const int MAP_SIZE = 32;
- __u32 vals[MAP_SIZE], duration, retval, size, val;
+ __u32 vals[MAP_SIZE], val;
int i, err, prog_fd, map_in_fd, map_out_fd;
char file[32], buf[128];
struct bpf_object *obj;
struct iphdr iph;
+ LIBBPF_OPTS(bpf_test_run_opts, topts,
+ .data_in = &pkt_v4,
+ .data_size_in = sizeof(pkt_v4),
+ .data_out = buf,
+ .data_size_out = sizeof(buf),
+ .repeat = 1,
+ );
/* Fill test values to be used */
for (i = 0; i < MAP_SIZE; i++)
@@ -58,38 +65,37 @@ static void test_queue_stack_map_by_type(int type)
pkt_v4.iph.saddr = vals[MAP_SIZE - 1 - i] * 5;
}
- err = bpf_prog_test_run(prog_fd, 1, &pkt_v4, sizeof(pkt_v4),
- buf, &size, &retval, &duration);
- if (err || retval || size != sizeof(pkt_v4))
+ topts.data_size_out = sizeof(buf);
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ if (err || topts.retval ||
+ topts.data_size_out != sizeof(pkt_v4))
break;
memcpy(&iph, buf + sizeof(struct ethhdr), sizeof(iph));
if (iph.daddr != val)
break;
}
- CHECK(err || retval || size != sizeof(pkt_v4) || iph.daddr != val,
- "bpf_map_pop_elem",
- "err %d errno %d retval %d size %d iph->daddr %u\n",
- err, errno, retval, size, iph.daddr);
+ ASSERT_OK(err, "bpf_map_pop_elem");
+ ASSERT_OK(topts.retval, "bpf_map_pop_elem test retval");
+ ASSERT_EQ(topts.data_size_out, sizeof(pkt_v4),
+ "bpf_map_pop_elem data_size_out");
+ ASSERT_EQ(iph.daddr, val, "bpf_map_pop_elem iph.daddr");
/* Queue is empty, program should return TC_ACT_SHOT */
- err = bpf_prog_test_run(prog_fd, 1, &pkt_v4, sizeof(pkt_v4),
- buf, &size, &retval, &duration);
- CHECK(err || retval != 2 /* TC_ACT_SHOT */|| size != sizeof(pkt_v4),
- "check-queue-stack-map-empty",
- "err %d errno %d retval %d size %d\n",
- err, errno, retval, size);
+ topts.data_size_out = sizeof(buf);
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ ASSERT_OK(err, "check-queue-stack-map-empty");
+ ASSERT_EQ(topts.retval, 2 /* TC_ACT_SHOT */,
+ "check-queue-stack-map-empty test retval");
+ ASSERT_EQ(topts.data_size_out, sizeof(pkt_v4),
+ "check-queue-stack-map-empty data_size_out");
/* Check that the program pushed elements correctly */
for (i = 0; i < MAP_SIZE; i++) {
err = bpf_map_lookup_and_delete_elem(map_out_fd, NULL, &val);
- if (err || val != vals[i] * 5)
- break;
+ ASSERT_OK(err, "bpf_map_lookup_and_delete_elem");
+ ASSERT_EQ(val, vals[i] * 5, "bpf_map_push_elem val");
}
-
- CHECK(i != MAP_SIZE && (err || val != vals[i] * 5),
- "bpf_map_push_elem", "err %d value %u\n", err, val);
-
out:
pkt_v4.iph.saddr = 0;
bpf_object__close(obj);
diff --git a/tools/testing/selftests/bpf/prog_tests/raw_tp_test_run.c b/tools/testing/selftests/bpf/prog_tests/raw_tp_test_run.c
index 41720a62c4fa..fe5b8fae2c36 100644
--- a/tools/testing/selftests/bpf/prog_tests/raw_tp_test_run.c
+++ b/tools/testing/selftests/bpf/prog_tests/raw_tp_test_run.c
@@ -5,59 +5,54 @@
#include "bpf/libbpf_internal.h"
#include "test_raw_tp_test_run.skel.h"
-static int duration;
-
void test_raw_tp_test_run(void)
{
- struct bpf_prog_test_run_attr test_attr = {};
int comm_fd = -1, err, nr_online, i, prog_fd;
__u64 args[2] = {0x1234ULL, 0x5678ULL};
int expected_retval = 0x1234 + 0x5678;
struct test_raw_tp_test_run *skel;
char buf[] = "new_name";
bool *online = NULL;
- DECLARE_LIBBPF_OPTS(bpf_test_run_opts, opts,
- .ctx_in = args,
- .ctx_size_in = sizeof(args),
- .flags = BPF_F_TEST_RUN_ON_CPU,
- );
+ LIBBPF_OPTS(bpf_test_run_opts, opts,
+ .ctx_in = args,
+ .ctx_size_in = sizeof(args),
+ .flags = BPF_F_TEST_RUN_ON_CPU,
+ );
err = parse_cpu_mask_file("/sys/devices/system/cpu/online", &online,
&nr_online);
- if (CHECK(err, "parse_cpu_mask_file", "err %d\n", err))
+ if (!ASSERT_OK(err, "parse_cpu_mask_file"))
return;
skel = test_raw_tp_test_run__open_and_load();
- if (CHECK(!skel, "skel_open", "failed to open skeleton\n"))
+ if (!ASSERT_OK_PTR(skel, "skel_open"))
goto cleanup;
err = test_raw_tp_test_run__attach(skel);
- if (CHECK(err, "skel_attach", "skeleton attach failed: %d\n", err))
+ if (!ASSERT_OK(err, "skel_attach"))
goto cleanup;
comm_fd = open("/proc/self/comm", O_WRONLY|O_TRUNC);
- if (CHECK(comm_fd < 0, "open /proc/self/comm", "err %d\n", errno))
+ if (!ASSERT_GE(comm_fd, 0, "open /proc/self/comm"))
goto cleanup;
err = write(comm_fd, buf, sizeof(buf));
- CHECK(err < 0, "task rename", "err %d", errno);
+ ASSERT_GE(err, 0, "task rename");
- CHECK(skel->bss->count == 0, "check_count", "didn't increase\n");
- CHECK(skel->data->on_cpu != 0xffffffff, "check_on_cpu", "got wrong value\n");
+ ASSERT_NEQ(skel->bss->count, 0, "check_count");
+ ASSERT_EQ(skel->data->on_cpu, 0xffffffff, "check_on_cpu");
prog_fd = bpf_program__fd(skel->progs.rename);
- test_attr.prog_fd = prog_fd;
- test_attr.ctx_in = args;
- test_attr.ctx_size_in = sizeof(__u64);
+ opts.ctx_in = args;
+ opts.ctx_size_in = sizeof(__u64);
- err = bpf_prog_test_run_xattr(&test_attr);
- CHECK(err == 0, "test_run", "should fail for too small ctx\n");
+ err = bpf_prog_test_run_opts(prog_fd, &opts);
+ ASSERT_NEQ(err, 0, "test_run should fail for too small ctx");
- test_attr.ctx_size_in = sizeof(args);
- err = bpf_prog_test_run_xattr(&test_attr);
- CHECK(err < 0, "test_run", "err %d\n", errno);
- CHECK(test_attr.retval != expected_retval, "check_retval",
- "expect 0x%x, got 0x%x\n", expected_retval, test_attr.retval);
+ opts.ctx_size_in = sizeof(args);
+ err = bpf_prog_test_run_opts(prog_fd, &opts);
+ ASSERT_OK(err, "test_run");
+ ASSERT_EQ(opts.retval, expected_retval, "check_retval");
for (i = 0; i < nr_online; i++) {
if (!online[i])
@@ -66,28 +61,23 @@ void test_raw_tp_test_run(void)
opts.cpu = i;
opts.retval = 0;
err = bpf_prog_test_run_opts(prog_fd, &opts);
- CHECK(err < 0, "test_run_opts", "err %d\n", errno);
- CHECK(skel->data->on_cpu != i, "check_on_cpu",
- "expect %d got %d\n", i, skel->data->on_cpu);
- CHECK(opts.retval != expected_retval,
- "check_retval", "expect 0x%x, got 0x%x\n",
- expected_retval, opts.retval);
+ ASSERT_OK(err, "test_run_opts");
+ ASSERT_EQ(skel->data->on_cpu, i, "check_on_cpu");
+ ASSERT_EQ(opts.retval, expected_retval, "check_retval");
}
/* invalid cpu ID should fail with ENXIO */
opts.cpu = 0xffffffff;
err = bpf_prog_test_run_opts(prog_fd, &opts);
- CHECK(err >= 0 || errno != ENXIO,
- "test_run_opts_fail",
- "should failed with ENXIO\n");
+ ASSERT_EQ(errno, ENXIO, "test_run_opts should fail with ENXIO");
+ ASSERT_ERR(err, "test_run_opts_fail");
/* non-zero cpu w/o BPF_F_TEST_RUN_ON_CPU should fail with EINVAL */
opts.cpu = 1;
opts.flags = 0;
err = bpf_prog_test_run_opts(prog_fd, &opts);
- CHECK(err >= 0 || errno != EINVAL,
- "test_run_opts_fail",
- "should failed with EINVAL\n");
+ ASSERT_EQ(errno, EINVAL, "test_run_opts should fail with EINVAL");
+ ASSERT_ERR(err, "test_run_opts_fail");
cleanup:
close(comm_fd);
diff --git a/tools/testing/selftests/bpf/prog_tests/raw_tp_writable_test_run.c b/tools/testing/selftests/bpf/prog_tests/raw_tp_writable_test_run.c
index 239baccabccb..f4aa7dab4766 100644
--- a/tools/testing/selftests/bpf/prog_tests/raw_tp_writable_test_run.c
+++ b/tools/testing/selftests/bpf/prog_tests/raw_tp_writable_test_run.c
@@ -56,21 +56,23 @@ void serial_test_raw_tp_writable_test_run(void)
0,
};
- __u32 prog_ret;
- int err = bpf_prog_test_run(filter_fd, 1, test_skb, sizeof(test_skb), 0,
- 0, &prog_ret, 0);
+ LIBBPF_OPTS(bpf_test_run_opts, topts,
+ .data_in = test_skb,
+ .data_size_in = sizeof(test_skb),
+ .repeat = 1,
+ );
+ int err = bpf_prog_test_run_opts(filter_fd, &topts);
CHECK(err != 42, "test_run",
"tracepoint did not modify return value\n");
- CHECK(prog_ret != 0, "test_run_ret",
+ CHECK(topts.retval != 0, "test_run_ret",
"socket_filter did not return 0\n");
close(tp_fd);
- err = bpf_prog_test_run(filter_fd, 1, test_skb, sizeof(test_skb), 0, 0,
- &prog_ret, 0);
+ err = bpf_prog_test_run_opts(filter_fd, &topts);
CHECK(err != 0, "test_run_notrace",
"test_run failed with %d errno %d\n", err, errno);
- CHECK(prog_ret != 0, "test_run_ret_notrace",
+ CHECK(topts.retval != 0, "test_run_ret_notrace",
"socket_filter did not return 0\n");
out_filterfd:
diff --git a/tools/testing/selftests/bpf/prog_tests/send_signal.c b/tools/testing/selftests/bpf/prog_tests/send_signal.c
index 776916b61c40..d71226e34c34 100644
--- a/tools/testing/selftests/bpf/prog_tests/send_signal.c
+++ b/tools/testing/selftests/bpf/prog_tests/send_signal.c
@@ -4,11 +4,11 @@
#include <sys/resource.h>
#include "test_send_signal_kern.skel.h"
-int sigusr1_received = 0;
+static int sigusr1_received;
static void sigusr1_handler(int signum)
{
- sigusr1_received++;
+ sigusr1_received = 1;
}
static void test_send_signal_common(struct perf_event_attr *attr,
@@ -40,9 +40,10 @@ static void test_send_signal_common(struct perf_event_attr *attr,
if (pid == 0) {
int old_prio;
+ volatile int j = 0;
/* install signal handler and notify parent */
- signal(SIGUSR1, sigusr1_handler);
+ ASSERT_NEQ(signal(SIGUSR1, sigusr1_handler), SIG_ERR, "signal");
close(pipe_c2p[0]); /* close read */
close(pipe_p2c[1]); /* close write */
@@ -63,9 +64,11 @@ static void test_send_signal_common(struct perf_event_attr *attr,
ASSERT_EQ(read(pipe_p2c[0], buf, 1), 1, "pipe_read");
/* wait a little for signal handler */
- sleep(1);
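+ /* spin instead of sleeping so the child stays on-CPU while
+ * waiting for the bpf-sent signal
+ */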
+ for (int i = 0; i < 100000000 && !sigusr1_received; i++)
+ j /= i + j + 1;
buf[0] = sigusr1_received ? '2' : '0';
+ ASSERT_EQ(sigusr1_received, 1, "sigusr1_received");
ASSERT_EQ(write(pipe_c2p[1], buf, 1), 1, "pipe_write");
/* wait for parent notification and exit */
@@ -93,7 +96,7 @@ static void test_send_signal_common(struct perf_event_attr *attr,
goto destroy_skel;
}
} else {
- pmu_fd = syscall(__NR_perf_event_open, attr, pid, -1,
+ pmu_fd = syscall(__NR_perf_event_open, attr, pid, -1 /* cpu */,
-1 /* group id */, 0 /* flags */);
if (!ASSERT_GE(pmu_fd, 0, "perf_event_open")) {
err = -1;
@@ -110,9 +113,9 @@ static void test_send_signal_common(struct perf_event_attr *attr,
ASSERT_EQ(read(pipe_c2p[0], buf, 1), 1, "pipe_read");
/* trigger the bpf send_signal */
- skel->bss->pid = pid;
- skel->bss->sig = SIGUSR1;
skel->bss->signal_thread = signal_thread;
+ skel->bss->sig = SIGUSR1;
+ skel->bss->pid = pid;
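+ /* pid is written last: the bpf prog keys on it, so signal_thread
+ * and sig are already in place once it can match
+ */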
/* notify child that bpf program can send_signal now */
ASSERT_EQ(write(pipe_p2c[1], buf, 1), 1, "pipe_write");
diff --git a/tools/testing/selftests/bpf/prog_tests/signal_pending.c b/tools/testing/selftests/bpf/prog_tests/signal_pending.c
index aecfe662c070..70b49da5ca0a 100644
--- a/tools/testing/selftests/bpf/prog_tests/signal_pending.c
+++ b/tools/testing/selftests/bpf/prog_tests/signal_pending.c
@@ -13,10 +13,14 @@ static void test_signal_pending_by_type(enum bpf_prog_type prog_type)
struct itimerval timeo = {
.it_value.tv_usec = 100000, /* 100ms */
};
- __u32 duration = 0, retval;
int prog_fd;
int err;
int i;
+ LIBBPF_OPTS(bpf_test_run_opts, topts,
+ .data_in = &pkt_v4,
+ .data_size_in = sizeof(pkt_v4),
+ .repeat = 0xffffffff,
+ );
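+ /* the repeat count is effectively unbounded; the pending SIGALRM
+ * set up below must abort the run, which the 500ms duration check
+ * verifies
+ */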
for (i = 0; i < ARRAY_SIZE(prog); i++)
prog[i] = BPF_ALU64_IMM(BPF_MOV, BPF_REG_0, 0);
@@ -24,20 +28,17 @@ static void test_signal_pending_by_type(enum bpf_prog_type prog_type)
prog_fd = bpf_test_load_program(prog_type, prog, ARRAY_SIZE(prog),
"GPL", 0, NULL, 0);
- CHECK(prog_fd < 0, "test-run", "errno %d\n", errno);
+ ASSERT_GE(prog_fd, 0, "test-run load");
err = sigaction(SIGALRM, &sigalrm_action, NULL);
- CHECK(err, "test-run-signal-sigaction", "errno %d\n", errno);
+ ASSERT_OK(err, "test-run-signal-sigaction");
err = setitimer(ITIMER_REAL, &timeo, NULL);
- CHECK(err, "test-run-signal-timer", "errno %d\n", errno);
-
- err = bpf_prog_test_run(prog_fd, 0xffffffff, &pkt_v4, sizeof(pkt_v4),
- NULL, NULL, &retval, &duration);
- CHECK(duration > 500000000, /* 500ms */
- "test-run-signal-duration",
- "duration %dns > 500ms\n",
- duration);
+ ASSERT_OK(err, "test-run-signal-timer");
+
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ ASSERT_LE(topts.duration, 500000000 /* 500ms */,
+ "test-run-signal-duration");
signal(SIGALRM, SIG_DFL);
}
diff --git a/tools/testing/selftests/bpf/prog_tests/skb_ctx.c b/tools/testing/selftests/bpf/prog_tests/skb_ctx.c
index b5319ba2ee27..ce0e555b5e38 100644
--- a/tools/testing/selftests/bpf/prog_tests/skb_ctx.c
+++ b/tools/testing/selftests/bpf/prog_tests/skb_ctx.c
@@ -20,97 +20,72 @@ void test_skb_ctx(void)
.gso_size = 10,
.hwtstamp = 11,
};
- struct bpf_prog_test_run_attr tattr = {
+ LIBBPF_OPTS(bpf_test_run_opts, tattr,
.data_in = &pkt_v4,
.data_size_in = sizeof(pkt_v4),
.ctx_in = &skb,
.ctx_size_in = sizeof(skb),
.ctx_out = &skb,
.ctx_size_out = sizeof(skb),
- };
+ );
struct bpf_object *obj;
- int err;
- int i;
+ int err, prog_fd, i;
- err = bpf_prog_test_load("./test_skb_ctx.o", BPF_PROG_TYPE_SCHED_CLS, &obj,
- &tattr.prog_fd);
- if (CHECK_ATTR(err, "load", "err %d errno %d\n", err, errno))
+ err = bpf_prog_test_load("./test_skb_ctx.o", BPF_PROG_TYPE_SCHED_CLS,
+ &obj, &prog_fd);
+ if (!ASSERT_OK(err, "load"))
return;
/* ctx_in != NULL, ctx_size_in == 0 */
tattr.ctx_size_in = 0;
- err = bpf_prog_test_run_xattr(&tattr);
- CHECK_ATTR(err == 0, "ctx_size_in", "err %d errno %d\n", err, errno);
+ err = bpf_prog_test_run_opts(prog_fd, &tattr);
+ ASSERT_NEQ(err, 0, "ctx_size_in");
tattr.ctx_size_in = sizeof(skb);
/* ctx_out != NULL, ctx_size_out == 0 */
tattr.ctx_size_out = 0;
- err = bpf_prog_test_run_xattr(&tattr);
- CHECK_ATTR(err == 0, "ctx_size_out", "err %d errno %d\n", err, errno);
+ err = bpf_prog_test_run_opts(prog_fd, &tattr);
+ ASSERT_NEQ(err, 0, "ctx_size_out");
tattr.ctx_size_out = sizeof(skb);
/* non-zero [len, tc_index] fields should be rejected*/
skb.len = 1;
- err = bpf_prog_test_run_xattr(&tattr);
- CHECK_ATTR(err == 0, "len", "err %d errno %d\n", err, errno);
+ err = bpf_prog_test_run_opts(prog_fd, &tattr);
+ ASSERT_NEQ(err, 0, "len");
skb.len = 0;
skb.tc_index = 1;
- err = bpf_prog_test_run_xattr(&tattr);
- CHECK_ATTR(err == 0, "tc_index", "err %d errno %d\n", err, errno);
+ err = bpf_prog_test_run_opts(prog_fd, &tattr);
+ ASSERT_NEQ(err, 0, "tc_index");
skb.tc_index = 0;
/* non-zero [hash, sk] fields should be rejected */
skb.hash = 1;
- err = bpf_prog_test_run_xattr(&tattr);
- CHECK_ATTR(err == 0, "hash", "err %d errno %d\n", err, errno);
+ err = bpf_prog_test_run_opts(prog_fd, &tattr);
+ ASSERT_NEQ(err, 0, "hash");
skb.hash = 0;
skb.sk = (struct bpf_sock *)1;
- err = bpf_prog_test_run_xattr(&tattr);
- CHECK_ATTR(err == 0, "sk", "err %d errno %d\n", err, errno);
+ err = bpf_prog_test_run_opts(prog_fd, &tattr);
+ ASSERT_NEQ(err, 0, "sk");
skb.sk = 0;
- err = bpf_prog_test_run_xattr(&tattr);
- CHECK_ATTR(err != 0 || tattr.retval,
- "run",
- "err %d errno %d retval %d\n",
- err, errno, tattr.retval);
-
- CHECK_ATTR(tattr.ctx_size_out != sizeof(skb),
- "ctx_size_out",
- "incorrect output size, want %zu have %u\n",
- sizeof(skb), tattr.ctx_size_out);
+ err = bpf_prog_test_run_opts(prog_fd, &tattr);
+ ASSERT_OK(err, "test_run");
+ ASSERT_OK(tattr.retval, "test_run retval");
+ ASSERT_EQ(tattr.ctx_size_out, sizeof(skb), "ctx_size_out");
for (i = 0; i < 5; i++)
- CHECK_ATTR(skb.cb[i] != i + 2,
- "ctx_out_cb",
- "skb->cb[i] == %d, expected %d\n",
- skb.cb[i], i + 2);
- CHECK_ATTR(skb.priority != 7,
- "ctx_out_priority",
- "skb->priority == %d, expected %d\n",
- skb.priority, 7);
- CHECK_ATTR(skb.ifindex != 1,
- "ctx_out_ifindex",
- "skb->ifindex == %d, expected %d\n",
- skb.ifindex, 1);
- CHECK_ATTR(skb.ingress_ifindex != 11,
- "ctx_out_ingress_ifindex",
- "skb->ingress_ifindex == %d, expected %d\n",
- skb.ingress_ifindex, 11);
- CHECK_ATTR(skb.tstamp != 8,
- "ctx_out_tstamp",
- "skb->tstamp == %lld, expected %d\n",
- skb.tstamp, 8);
- CHECK_ATTR(skb.mark != 10,
- "ctx_out_mark",
- "skb->mark == %u, expected %d\n",
- skb.mark, 10);
+ ASSERT_EQ(skb.cb[i], i + 2, "ctx_out_cb");
+ ASSERT_EQ(skb.priority, 7, "ctx_out_priority");
+ ASSERT_EQ(skb.ifindex, 1, "ctx_out_ifindex");
+ ASSERT_EQ(skb.ingress_ifindex, 11, "ctx_out_ingress_ifindex");
+ ASSERT_EQ(skb.tstamp, 8, "ctx_out_tstamp");
+ ASSERT_EQ(skb.mark, 10, "ctx_out_mark");
bpf_object__close(obj);
}
diff --git a/tools/testing/selftests/bpf/prog_tests/skb_helpers.c b/tools/testing/selftests/bpf/prog_tests/skb_helpers.c
index 6f802a1c0800..97dc8b14be48 100644
--- a/tools/testing/selftests/bpf/prog_tests/skb_helpers.c
+++ b/tools/testing/selftests/bpf/prog_tests/skb_helpers.c
@@ -9,22 +9,22 @@ void test_skb_helpers(void)
.gso_segs = 8,
.gso_size = 10,
};
- struct bpf_prog_test_run_attr tattr = {
+ LIBBPF_OPTS(bpf_test_run_opts, topts,
.data_in = &pkt_v4,
.data_size_in = sizeof(pkt_v4),
.ctx_in = &skb,
.ctx_size_in = sizeof(skb),
.ctx_out = &skb,
.ctx_size_out = sizeof(skb),
- };
+ );
struct bpf_object *obj;
- int err;
+ int err, prog_fd;
- err = bpf_prog_test_load("./test_skb_helpers.o", BPF_PROG_TYPE_SCHED_CLS, &obj,
- &tattr.prog_fd);
- if (CHECK_ATTR(err, "load", "err %d errno %d\n", err, errno))
+ err = bpf_prog_test_load("./test_skb_helpers.o",
+ BPF_PROG_TYPE_SCHED_CLS, &obj, &prog_fd);
+ if (!ASSERT_OK(err, "load"))
return;
- err = bpf_prog_test_run_xattr(&tattr);
- CHECK_ATTR(err, "len", "err %d errno %d\n", err, errno);
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ ASSERT_OK(err, "test_run");
bpf_object__close(obj);
}
diff --git a/tools/testing/selftests/bpf/prog_tests/sock_fields.c b/tools/testing/selftests/bpf/prog_tests/sock_fields.c
index 9fc040eaa482..9d211b5c22c4 100644
--- a/tools/testing/selftests/bpf/prog_tests/sock_fields.c
+++ b/tools/testing/selftests/bpf/prog_tests/sock_fields.c
@@ -1,9 +1,11 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2019 Facebook */
+#define _GNU_SOURCE
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>
+#include <sched.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>
@@ -20,6 +22,7 @@
enum bpf_linum_array_idx {
EGRESS_LINUM_IDX,
INGRESS_LINUM_IDX,
+ READ_SK_DST_PORT_LINUM_IDX,
__NR_BPF_LINUM_ARRAY_IDX,
};
@@ -42,8 +45,16 @@ static __u64 child_cg_id;
static int linum_map_fd;
static __u32 duration;
-static __u32 egress_linum_idx = EGRESS_LINUM_IDX;
-static __u32 ingress_linum_idx = INGRESS_LINUM_IDX;
+static bool create_netns(void)
+{
+ if (!ASSERT_OK(unshare(CLONE_NEWNET), "create netns"))
+ return false;
+
+ if (!ASSERT_OK(system("ip link set dev lo up"), "bring up lo"))
+ return false;
+
+ return true;
+}
static void print_sk(const struct bpf_sock *sk, const char *prefix)
{
@@ -91,19 +102,24 @@ static void check_result(void)
{
struct bpf_tcp_sock srv_tp, cli_tp, listen_tp;
struct bpf_sock srv_sk, cli_sk, listen_sk;
- __u32 ingress_linum, egress_linum;
+ __u32 idx, ingress_linum, egress_linum, linum;
int err;
- err = bpf_map_lookup_elem(linum_map_fd, &egress_linum_idx,
- &egress_linum);
+ idx = EGRESS_LINUM_IDX;
+ err = bpf_map_lookup_elem(linum_map_fd, &idx, &egress_linum);
CHECK(err < 0, "bpf_map_lookup_elem(linum_map_fd)",
"err:%d errno:%d\n", err, errno);
- err = bpf_map_lookup_elem(linum_map_fd, &ingress_linum_idx,
- &ingress_linum);
+ idx = INGRESS_LINUM_IDX;
+ err = bpf_map_lookup_elem(linum_map_fd, &idx, &ingress_linum);
CHECK(err < 0, "bpf_map_lookup_elem(linum_map_fd)",
"err:%d errno:%d\n", err, errno);
+ idx = READ_SK_DST_PORT_LINUM_IDX;
+ err = bpf_map_lookup_elem(linum_map_fd, &idx, &linum);
+ ASSERT_OK(err, "bpf_map_lookup_elem(linum_map_fd, READ_SK_DST_PORT_IDX)");
+ ASSERT_EQ(linum, 0, "failure in read_sk_dst_port on line");
+
memcpy(&srv_sk, &skel->bss->srv_sk, sizeof(srv_sk));
memcpy(&srv_tp, &skel->bss->srv_tp, sizeof(srv_tp));
memcpy(&cli_sk, &skel->bss->cli_sk, sizeof(cli_sk));
@@ -262,7 +278,7 @@ static void test(void)
char buf[DATA_LEN];
/* Prepare listen_fd */
- listen_fd = start_server(AF_INET6, SOCK_STREAM, "::1", 0, 0);
+ listen_fd = start_server(AF_INET6, SOCK_STREAM, "::1", 0xcafe, 0);
/* start_server() has logged the error details */
if (CHECK_FAIL(listen_fd == -1))
goto done;
@@ -330,8 +346,12 @@ done:
void serial_test_sock_fields(void)
{
- struct bpf_link *egress_link = NULL, *ingress_link = NULL;
int parent_cg_fd = -1, child_cg_fd = -1;
+ struct bpf_link *link;
+
+ /* Use a dedicated netns to have a fixed listen port */
+ if (!create_netns())
+ return;
/* Create a cgroup, get fd, and join it */
parent_cg_fd = test__join_cgroup(PARENT_CGROUP);
@@ -352,15 +372,20 @@ void serial_test_sock_fields(void)
if (CHECK(!skel, "test_sock_fields__open_and_load", "failed\n"))
goto done;
- egress_link = bpf_program__attach_cgroup(skel->progs.egress_read_sock_fields,
- child_cg_fd);
- if (!ASSERT_OK_PTR(egress_link, "attach_cgroup(egress)"))
+ link = bpf_program__attach_cgroup(skel->progs.egress_read_sock_fields, child_cg_fd);
+ if (!ASSERT_OK_PTR(link, "attach_cgroup(egress_read_sock_fields)"))
+ goto done;
+ skel->links.egress_read_sock_fields = link;
+
+ link = bpf_program__attach_cgroup(skel->progs.ingress_read_sock_fields, child_cg_fd);
+ if (!ASSERT_OK_PTR(link, "attach_cgroup(ingress_read_sock_fields)"))
goto done;
+ skel->links.ingress_read_sock_fields = link;
- ingress_link = bpf_program__attach_cgroup(skel->progs.ingress_read_sock_fields,
- child_cg_fd);
- if (!ASSERT_OK_PTR(ingress_link, "attach_cgroup(ingress)"))
+ link = bpf_program__attach_cgroup(skel->progs.read_sk_dst_port, child_cg_fd);
+ if (!ASSERT_OK_PTR(link, "attach_cgroup(read_sk_dst_port"))
goto done;
+ skel->links.read_sk_dst_port = link;
linum_map_fd = bpf_map__fd(skel->maps.linum_map);
sk_pkt_out_cnt_fd = bpf_map__fd(skel->maps.sk_pkt_out_cnt);
@@ -369,8 +394,7 @@ void serial_test_sock_fields(void)
test();
done:
- bpf_link__destroy(egress_link);
- bpf_link__destroy(ingress_link);
+ test_sock_fields__detach(skel);
test_sock_fields__destroy(skel);
if (child_cg_fd >= 0)
close(child_cg_fd);
diff --git a/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c b/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c
index 85db0f4cdd95..cec5c0882372 100644
--- a/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c
+++ b/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c
@@ -8,6 +8,7 @@
#include "test_sockmap_update.skel.h"
#include "test_sockmap_invalid_update.skel.h"
#include "test_sockmap_skb_verdict_attach.skel.h"
+#include "test_sockmap_progs_query.skel.h"
#include "bpf_iter_sockmap.skel.h"
#define TCP_REPAIR 19 /* TCP sock is under repair right now */
@@ -139,12 +140,16 @@ out:
static void test_sockmap_update(enum bpf_map_type map_type)
{
- struct bpf_prog_test_run_attr tattr;
int err, prog, src, duration = 0;
struct test_sockmap_update *skel;
struct bpf_map *dst_map;
const __u32 zero = 0;
char dummy[14] = {0};
+ LIBBPF_OPTS(bpf_test_run_opts, topts,
+ .data_in = dummy,
+ .data_size_in = sizeof(dummy),
+ .repeat = 1,
+ );
__s64 sk;
sk = connected_socket_v4();
@@ -166,16 +171,10 @@ static void test_sockmap_update(enum bpf_map_type map_type)
if (CHECK(err, "update_elem(src)", "errno=%u\n", errno))
goto out;
- tattr = (struct bpf_prog_test_run_attr){
- .prog_fd = prog,
- .repeat = 1,
- .data_in = dummy,
- .data_size_in = sizeof(dummy),
- };
-
- err = bpf_prog_test_run_xattr(&tattr);
- if (CHECK_ATTR(err || !tattr.retval, "bpf_prog_test_run",
- "errno=%u retval=%u\n", errno, tattr.retval))
+ err = bpf_prog_test_run_opts(prog, &topts);
+ if (!ASSERT_OK(err, "test_run"))
+ goto out;
+ if (!ASSERT_NEQ(topts.retval, 0, "test_run retval"))
goto out;
compare_cookies(skel->maps.src, dst_map);
@@ -315,6 +314,63 @@ out:
test_sockmap_skb_verdict_attach__destroy(skel);
}
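+/* Translate a program fd into its ID via bpf_obj_get_info_by_fd();
+ * returns 0 on failure.
+ */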
+static __u32 query_prog_id(int prog_fd)
+{
+ struct bpf_prog_info info = {};
+ __u32 info_len = sizeof(info);
+ int err;
+
+ err = bpf_obj_get_info_by_fd(prog_fd, &info, &info_len);
+ if (!ASSERT_OK(err, "bpf_obj_get_info_by_fd") ||
+ !ASSERT_EQ(info_len, sizeof(info), "bpf_obj_get_info_by_fd"))
+ return 0;
+
+ return info.id;
+}
+
+static void test_sockmap_progs_query(enum bpf_attach_type attach_type)
+{
+ struct test_sockmap_progs_query *skel;
+ int err, map_fd, verdict_fd;
+ __u32 attach_flags = 0;
+ __u32 prog_ids[3] = {};
+ __u32 prog_cnt = 3;
+
+ skel = test_sockmap_progs_query__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "test_sockmap_progs_query__open_and_load"))
+ return;
+
+ map_fd = bpf_map__fd(skel->maps.sock_map);
+
+ if (attach_type == BPF_SK_MSG_VERDICT)
+ verdict_fd = bpf_program__fd(skel->progs.prog_skmsg_verdict);
+ else
+ verdict_fd = bpf_program__fd(skel->progs.prog_skb_verdict);
+
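+ /* nothing attached yet: expect an empty query result */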
+ err = bpf_prog_query(map_fd, attach_type, 0 /* query flags */,
+ &attach_flags, prog_ids, &prog_cnt);
+ ASSERT_OK(err, "bpf_prog_query failed");
+ ASSERT_EQ(attach_flags, 0, "wrong attach_flags on query");
+ ASSERT_EQ(prog_cnt, 0, "wrong program count on query");
+
+ err = bpf_prog_attach(verdict_fd, map_fd, attach_type, 0);
+ if (!ASSERT_OK(err, "bpf_prog_attach failed"))
+ goto out;
+
+ prog_cnt = 1;
+ err = bpf_prog_query(map_fd, attach_type, 0 /* query flags */,
+ &attach_flags, prog_ids, &prog_cnt);
+ ASSERT_OK(err, "bpf_prog_query failed");
+ ASSERT_EQ(attach_flags, 0, "wrong attach_flags on query");
+ ASSERT_EQ(prog_cnt, 1, "wrong program count on query");
+ ASSERT_EQ(prog_ids[0], query_prog_id(verdict_fd),
+ "wrong prog_ids on query");
+
+ bpf_prog_detach2(verdict_fd, map_fd, attach_type);
+out:
+ test_sockmap_progs_query__destroy(skel);
+}
+
void test_sockmap_basic(void)
{
if (test__start_subtest("sockmap create_update_free"))
@@ -341,4 +397,12 @@ void test_sockmap_basic(void)
test_sockmap_skb_verdict_attach(BPF_SK_SKB_STREAM_VERDICT,
BPF_SK_SKB_VERDICT);
}
+ if (test__start_subtest("sockmap msg_verdict progs query"))
+ test_sockmap_progs_query(BPF_SK_MSG_VERDICT);
+ if (test__start_subtest("sockmap stream_parser progs query"))
+ test_sockmap_progs_query(BPF_SK_SKB_STREAM_PARSER);
+ if (test__start_subtest("sockmap stream_verdict progs query"))
+ test_sockmap_progs_query(BPF_SK_SKB_STREAM_VERDICT);
+ if (test__start_subtest("sockmap skb_verdict progs query"))
+ test_sockmap_progs_query(BPF_SK_SKB_VERDICT);
}
diff --git a/tools/testing/selftests/bpf/prog_tests/sockmap_listen.c b/tools/testing/selftests/bpf/prog_tests/sockmap_listen.c
index 7e21bfab6358..2cf0c7a3fe23 100644
--- a/tools/testing/selftests/bpf/prog_tests/sockmap_listen.c
+++ b/tools/testing/selftests/bpf/prog_tests/sockmap_listen.c
@@ -1413,14 +1413,12 @@ close_srv1:
static void test_ops_cleanup(const struct bpf_map *map)
{
- const struct bpf_map_def *def;
int err, mapfd;
u32 key;
- def = bpf_map__def(map);
mapfd = bpf_map__fd(map);
- for (key = 0; key < def->max_entries; key++) {
+ for (key = 0; key < bpf_map__max_entries(map); key++) {
err = bpf_map_delete_elem(mapfd, &key);
if (err && errno != EINVAL && errno != ENOENT)
FAIL_ERRNO("map_delete: expected EINVAL/ENOENT");
@@ -1443,13 +1441,13 @@ static const char *family_str(sa_family_t family)
static const char *map_type_str(const struct bpf_map *map)
{
- const struct bpf_map_def *def;
+ int type;
- def = bpf_map__def(map);
- if (IS_ERR(def))
+ if (!map)
return "invalid";
+ type = bpf_map__type(map);
- switch (def->type) {
+ switch (type) {
case BPF_MAP_TYPE_SOCKMAP:
return "sockmap";
case BPF_MAP_TYPE_SOCKHASH:
diff --git a/tools/testing/selftests/bpf/prog_tests/sockopt_sk.c b/tools/testing/selftests/bpf/prog_tests/sockopt_sk.c
index 4b937e5dbaca..30a99d2ed5c6 100644
--- a/tools/testing/selftests/bpf/prog_tests/sockopt_sk.c
+++ b/tools/testing/selftests/bpf/prog_tests/sockopt_sk.c
@@ -173,11 +173,11 @@ static int getsetsockopt(void)
}
memset(&buf, 0, sizeof(buf));
- buf.zc.address = 12345; /* rejected by BPF */
+ buf.zc.address = 12345; /* Not page aligned. Rejected by tcp_zerocopy_receive() */
optlen = sizeof(buf.zc);
errno = 0;
err = getsockopt(fd, SOL_TCP, TCP_ZEROCOPY_RECEIVE, &buf, &optlen);
- if (errno != EPERM) {
+ if (errno != EINVAL) {
log_err("Unexpected getsockopt(TCP_ZEROCOPY_RECEIVE) err=%d errno=%d",
err, errno);
goto err;
diff --git a/tools/testing/selftests/bpf/prog_tests/spinlock.c b/tools/testing/selftests/bpf/prog_tests/spinlock.c
index 6307f5d2b417..8e329eaee6d7 100644
--- a/tools/testing/selftests/bpf/prog_tests/spinlock.c
+++ b/tools/testing/selftests/bpf/prog_tests/spinlock.c
@@ -4,14 +4,16 @@
static void *spin_lock_thread(void *arg)
{
- __u32 duration, retval;
int err, prog_fd = *(u32 *) arg;
+ LIBBPF_OPTS(bpf_test_run_opts, topts,
+ .data_in = &pkt_v4,
+ .data_size_in = sizeof(pkt_v4),
+ .repeat = 10000,
+ );
- err = bpf_prog_test_run(prog_fd, 10000, &pkt_v4, sizeof(pkt_v4),
- NULL, NULL, &retval, &duration);
- CHECK(err || retval, "",
- "err %d errno %d retval %d duration %d\n",
- err, errno, retval, duration);
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ ASSERT_OK(err, "test_run");
+ ASSERT_OK(topts.retval, "test_run retval");
pthread_exit(arg);
}
diff --git a/tools/testing/selftests/bpf/prog_tests/stacktrace_build_id_nmi.c b/tools/testing/selftests/bpf/prog_tests/stacktrace_build_id_nmi.c
index 0a91d8d9954b..f45a1d7b0a28 100644
--- a/tools/testing/selftests/bpf/prog_tests/stacktrace_build_id_nmi.c
+++ b/tools/testing/selftests/bpf/prog_tests/stacktrace_build_id_nmi.c
@@ -42,7 +42,7 @@ retry:
return;
/* override program type */
- bpf_program__set_perf_event(skel->progs.oncpu);
+ bpf_program__set_type(skel->progs.oncpu, BPF_PROG_TYPE_PERF_EVENT);
err = test_stacktrace_build_id__load(skel);
if (CHECK(err, "skel_load", "skeleton load failed: %d\n", err))
diff --git a/tools/testing/selftests/bpf/prog_tests/stacktrace_map_skip.c b/tools/testing/selftests/bpf/prog_tests/stacktrace_map_skip.c
new file mode 100644
index 000000000000..1932b1e0685c
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/stacktrace_map_skip.c
@@ -0,0 +1,63 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <test_progs.h>
+#include "stacktrace_map_skip.skel.h"
+
+#define TEST_STACK_DEPTH 2
+
+void test_stacktrace_map_skip(void)
+{
+ struct stacktrace_map_skip *skel;
+ int stackid_hmap_fd, stackmap_fd, stack_amap_fd;
+ int err, stack_trace_len;
+
+ skel = stacktrace_map_skip__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "skel_open_and_load"))
+ return;
+
+ /* find map fds */
+ stackid_hmap_fd = bpf_map__fd(skel->maps.stackid_hmap);
+ if (!ASSERT_GE(stackid_hmap_fd, 0, "stackid_hmap fd"))
+ goto out;
+
+ stackmap_fd = bpf_map__fd(skel->maps.stackmap);
+ if (!ASSERT_GE(stackmap_fd, 0, "stackmap fd"))
+ goto out;
+
+ stack_amap_fd = bpf_map__fd(skel->maps.stack_amap);
+ if (!ASSERT_GE(stack_amap_fd, 0, "stack_amap fd"))
+ goto out;
+
+ skel->bss->pid = getpid();
+
+ err = stacktrace_map_skip__attach(skel);
+ if (!ASSERT_OK(err, "skel_attach"))
+ goto out;
+
+ /* give the bpf program some time to run */
+ sleep(1);
+
+ /* disable stack trace collection */
+ skel->bss->control = 1;
+
+ /* for every element in stackid_hmap, we can find a corresponding one
+ * in stackmap, and vice versa.
+ */
+ err = compare_map_keys(stackid_hmap_fd, stackmap_fd);
+ if (!ASSERT_OK(err, "compare_map_keys stackid_hmap vs. stackmap"))
+ goto out;
+
+ err = compare_map_keys(stackmap_fd, stackid_hmap_fd);
+ if (!ASSERT_OK(err, "compare_map_keys stackmap vs. stackid_hmap"))
+ goto out;
+
+ stack_trace_len = TEST_STACK_DEPTH * sizeof(__u64);
+ err = compare_stack_ips(stackmap_fd, stack_amap_fd, stack_trace_len);
+ if (!ASSERT_OK(err, "compare_stack_ips stackmap vs. stack_amap"))
+ goto out;
+
+ if (!ASSERT_EQ(skel->bss->failed, 0, "skip_failed"))
+ goto out;
+
+out:
+ stacktrace_map_skip__destroy(skel);
+}
diff --git a/tools/testing/selftests/bpf/prog_tests/subprogs.c b/tools/testing/selftests/bpf/prog_tests/subprogs.c
index 3f3d2ac4dd57..903f35a9e62e 100644
--- a/tools/testing/selftests/bpf/prog_tests/subprogs.c
+++ b/tools/testing/selftests/bpf/prog_tests/subprogs.c
@@ -1,32 +1,83 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2020 Facebook */
#include <test_progs.h>
-#include <time.h>
#include "test_subprogs.skel.h"
#include "test_subprogs_unused.skel.h"
-static int duration;
+struct toggler_ctx {
+ int fd;
+ bool stop;
+};
-void test_subprogs(void)
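+/* Flip bpf_jit_harden between 2 (always blind constants) and 0 so that
+ * concurrent program loads race against the setting changing.
+ */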
+static void *toggle_jit_harden(void *arg)
+{
+ struct toggler_ctx *ctx = arg;
+ char two = '2';
+ char zero = '0';
+
+ while (!ctx->stop) {
+ lseek(ctx->fd, 0, SEEK_SET);
+ write(ctx->fd, &two, sizeof(two));
+ lseek(ctx->fd, 0, SEEK_SET);
+ write(ctx->fd, &zero, sizeof(zero));
+ }
+
+ return NULL;
+}
+
+static void test_subprogs_with_jit_harden_toggling(void)
+{
+ struct toggler_ctx ctx;
+ pthread_t toggler;
+ int err;
+ unsigned int i, loop = 10;
+
+ ctx.fd = open("/proc/sys/net/core/bpf_jit_harden", O_RDWR);
+ if (!ASSERT_GE(ctx.fd, 0, "open bpf_jit_harden"))
+ return;
+
+ ctx.stop = false;
+ err = pthread_create(&toggler, NULL, toggle_jit_harden, &ctx);
+ if (!ASSERT_OK(err, "new toggler"))
+ goto out;
+
+ /* Give the toggler thread a chance to run */
+ usleep(1);
+
+ for (i = 0; i < loop; i++) {
+ struct test_subprogs *skel = test_subprogs__open_and_load();
+
+ if (!ASSERT_OK_PTR(skel, "skel open"))
+ break;
+ test_subprogs__destroy(skel);
+ }
+
+ ctx.stop = true;
+ pthread_join(toggler, NULL);
+out:
+ close(ctx.fd);
+}
+
+static void test_subprogs_alone(void)
{
struct test_subprogs *skel;
struct test_subprogs_unused *skel2;
int err;
skel = test_subprogs__open_and_load();
- if (CHECK(!skel, "skel_open", "failed to open skeleton\n"))
+ if (!ASSERT_OK_PTR(skel, "skel_open"))
return;
err = test_subprogs__attach(skel);
- if (CHECK(err, "skel_attach", "failed to attach skeleton: %d\n", err))
+ if (!ASSERT_OK(err, "skel attach"))
goto cleanup;
usleep(1);
- CHECK(skel->bss->res1 != 12, "res1", "got %d, exp %d\n", skel->bss->res1, 12);
- CHECK(skel->bss->res2 != 17, "res2", "got %d, exp %d\n", skel->bss->res2, 17);
- CHECK(skel->bss->res3 != 19, "res3", "got %d, exp %d\n", skel->bss->res3, 19);
- CHECK(skel->bss->res4 != 36, "res4", "got %d, exp %d\n", skel->bss->res4, 36);
+ ASSERT_EQ(skel->bss->res1, 12, "res1");
+ ASSERT_EQ(skel->bss->res2, 17, "res2");
+ ASSERT_EQ(skel->bss->res3, 19, "res3");
+ ASSERT_EQ(skel->bss->res4, 36, "res4");
skel2 = test_subprogs_unused__open_and_load();
ASSERT_OK_PTR(skel2, "unused_progs_skel");
@@ -35,3 +86,11 @@ void test_subprogs(void)
cleanup:
test_subprogs__destroy(skel);
}
+
+void test_subprogs(void)
+{
+ if (test__start_subtest("subprogs_alone"))
+ test_subprogs_alone();
+ if (test__start_subtest("subprogs_and_jit_harden"))
+ test_subprogs_with_jit_harden_toggling();
+}
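
For readers unfamiliar with the term: a BPF "subprog" is a non-inlined function called from a program, and the JIT-harden toggling above stresses the path where each subprog is JITed separately. A hedged, minimal sketch (not the actual test_subprogs.bpf.c):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

int res1 = 0;

static __noinline int sub1(int a)
{
	return a + 1;
}

static __noinline int sub2(int a)
{
	return sub1(a) + 2; /* subprog calling another subprog */
}

SEC("raw_tp/sys_enter")
int prog1(void *ctx)
{
	res1 = sub2(9); /* 9 + 1 + 2 = 12, matching the res1 assert */
	return 0;
}

char _license[] SEC("license") = "GPL";
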
diff --git a/tools/testing/selftests/bpf/prog_tests/subskeleton.c b/tools/testing/selftests/bpf/prog_tests/subskeleton.c
new file mode 100644
index 000000000000..9c31b7004f9c
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/subskeleton.c
@@ -0,0 +1,78 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) Meta Platforms, Inc. and affiliates. */
+
+#include <test_progs.h>
+#include "test_subskeleton.skel.h"
+#include "test_subskeleton_lib.subskel.h"
+
+static void subskeleton_lib_setup(struct bpf_object *obj)
+{
+ struct test_subskeleton_lib *lib = test_subskeleton_lib__open(obj);
+
+ if (!ASSERT_OK_PTR(lib, "open subskeleton"))
+ return;
+
+ *lib->rodata.var1 = 1;
+ *lib->data.var2 = 2;
+ lib->bss.var3->var3_1 = 3;
+ lib->bss.var3->var3_2 = 4;
+
+ test_subskeleton_lib__destroy(lib);
+}
+
+static int subskeleton_lib_subresult(struct bpf_object *obj)
+{
+ struct test_subskeleton_lib *lib = test_subskeleton_lib__open(obj);
+ int result;
+
+ if (!ASSERT_OK_PTR(lib, "open subskeleton"))
+ return -EINVAL;
+
+ result = *lib->bss.libout1;
+ ASSERT_EQ(result, 1 + 2 + 3 + 4 + 5 + 6, "lib subresult");
+
+ ASSERT_OK_PTR(lib->progs.lib_perf_handler, "lib_perf_handler");
+ ASSERT_STREQ(bpf_program__name(lib->progs.lib_perf_handler),
+ "lib_perf_handler", "program name");
+
+ ASSERT_OK_PTR(lib->maps.map1, "map1");
+ ASSERT_STREQ(bpf_map__name(lib->maps.map1), "map1", "map name");
+
+ ASSERT_EQ(*lib->data.var5, 5, "__weak var5");
+ ASSERT_EQ(*lib->data.var6, 6, "extern var6");
+ ASSERT_TRUE(*lib->kconfig.CONFIG_BPF_SYSCALL, "CONFIG_BPF_SYSCALL");
+
+ test_subskeleton_lib__destroy(lib);
+ return result;
+}
+
+void test_subskeleton(void)
+{
+ int err, result;
+ struct test_subskeleton *skel;
+
+ skel = test_subskeleton__open();
+ if (!ASSERT_OK_PTR(skel, "skel_open"))
+ return;
+
+ skel->rodata->rovar1 = 10;
+ skel->rodata->var1 = 1;
+ subskeleton_lib_setup(skel->obj);
+
+ err = test_subskeleton__load(skel);
+ if (!ASSERT_OK(err, "skel_load"))
+ goto cleanup;
+
+ err = test_subskeleton__attach(skel);
+ if (!ASSERT_OK(err, "skel_attach"))
+ goto cleanup;
+
+ /* trigger tracepoint */
+ usleep(1);
+
+ result = subskeleton_lib_subresult(skel->obj) * 10;
+ ASSERT_EQ(skel->bss->out1, result, "unexpected calculation");
+
+cleanup:
+ test_subskeleton__destroy(skel);
+}
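
Subskeletons give a test a typed view into a library object linked into another skeleton's bpf_object. Assuming the build mirrors the usual skeleton flow, the header used above would be generated with "bpftool gen subskeleton test_subskeleton_lib.bpf.o". A minimal access sketch:

#include "test_subskeleton_lib.subskel.h"

static int read_var2(struct bpf_object *obj)
{
	struct test_subskeleton_lib *lib = test_subskeleton_lib__open(obj);
	int v;

	if (!lib)
		return -1;
	v = *lib->data.var2; /* subskeleton vars are pointers into obj */
	test_subskeleton_lib__destroy(lib);
	return v;
}
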
diff --git a/tools/testing/selftests/bpf/prog_tests/syscall.c b/tools/testing/selftests/bpf/prog_tests/syscall.c
index 81e997a69f7a..f4d40001155a 100644
--- a/tools/testing/selftests/bpf/prog_tests/syscall.c
+++ b/tools/testing/selftests/bpf/prog_tests/syscall.c
@@ -20,20 +20,20 @@ void test_syscall(void)
.log_buf = (uintptr_t) verifier_log,
.log_size = sizeof(verifier_log),
};
- struct bpf_prog_test_run_attr tattr = {
+ LIBBPF_OPTS(bpf_test_run_opts, tattr,
.ctx_in = &ctx,
.ctx_size_in = sizeof(ctx),
- };
+ );
struct syscall *skel = NULL;
__u64 key = 12, value = 0;
- int err;
+ int err, prog_fd;
skel = syscall__open_and_load();
if (!ASSERT_OK_PTR(skel, "skel_load"))
goto cleanup;
- tattr.prog_fd = bpf_program__fd(skel->progs.bpf_prog);
- err = bpf_prog_test_run_xattr(&tattr);
+ prog_fd = bpf_program__fd(skel->progs.bpf_prog);
+ err = bpf_prog_test_run_opts(prog_fd, &tattr);
ASSERT_EQ(err, 0, "err");
ASSERT_EQ(tattr.retval, 1, "retval");
ASSERT_GT(ctx.map_fd, 0, "ctx.map_fd");
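
The bpf_prog_test_run_xattr() to bpf_prog_test_run_opts() conversion seen here repeats throughout this series. LIBBPF_OPTS() zero-initializes the opts struct and fills in the ->sz field libbpf uses for compatibility checks; a minimal sketch of the pattern:

#include <bpf/bpf.h>

static int run_with_ctx(int prog_fd, void *ctx, __u32 ctx_sz)
{
	LIBBPF_OPTS(bpf_test_run_opts, opts,
		.ctx_in = ctx,
		.ctx_size_in = ctx_sz,
	);
	int err = bpf_prog_test_run_opts(prog_fd, &opts);

	return err ?: opts.retval;
}
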
diff --git a/tools/testing/selftests/bpf/prog_tests/tailcalls.c b/tools/testing/selftests/bpf/prog_tests/tailcalls.c
index 5dc0f425bd11..c4da87ec3ba4 100644
--- a/tools/testing/selftests/bpf/prog_tests/tailcalls.c
+++ b/tools/testing/selftests/bpf/prog_tests/tailcalls.c
@@ -12,9 +12,13 @@ static void test_tailcall_1(void)
struct bpf_map *prog_array;
struct bpf_program *prog;
struct bpf_object *obj;
- __u32 retval, duration;
char prog_name[32];
char buff[128] = {};
+ LIBBPF_OPTS(bpf_test_run_opts, topts,
+ .data_in = buff,
+ .data_size_in = sizeof(buff),
+ .repeat = 1,
+ );
err = bpf_prog_test_load("tailcall1.o", BPF_PROG_TYPE_SCHED_CLS, &obj,
&prog_fd);
@@ -37,7 +41,7 @@ static void test_tailcall_1(void)
if (CHECK_FAIL(map_fd < 0))
goto out;
- for (i = 0; i < bpf_map__def(prog_array)->max_entries; i++) {
+ for (i = 0; i < bpf_map__max_entries(prog_array); i++) {
snprintf(prog_name, sizeof(prog_name), "classifier_%d", i);
prog = bpf_object__find_program_by_name(obj, prog_name);
@@ -53,23 +57,21 @@ static void test_tailcall_1(void)
goto out;
}
- for (i = 0; i < bpf_map__def(prog_array)->max_entries; i++) {
- err = bpf_prog_test_run(main_fd, 1, buff, sizeof(buff), 0,
- &duration, &retval, NULL);
- CHECK(err || retval != i, "tailcall",
- "err %d errno %d retval %d\n", err, errno, retval);
+ for (i = 0; i < bpf_map__max_entries(prog_array); i++) {
+ err = bpf_prog_test_run_opts(main_fd, &topts);
+ ASSERT_OK(err, "tailcall");
+ ASSERT_EQ(topts.retval, i, "tailcall retval");
err = bpf_map_delete_elem(map_fd, &i);
if (CHECK_FAIL(err))
goto out;
}
- err = bpf_prog_test_run(main_fd, 1, buff, sizeof(buff), 0,
- &duration, &retval, NULL);
- CHECK(err || retval != 3, "tailcall", "err %d errno %d retval %d\n",
- err, errno, retval);
+ err = bpf_prog_test_run_opts(main_fd, &topts);
+ ASSERT_OK(err, "tailcall");
+ ASSERT_EQ(topts.retval, 3, "tailcall retval");
- for (i = 0; i < bpf_map__def(prog_array)->max_entries; i++) {
+ for (i = 0; i < bpf_map__max_entries(prog_array); i++) {
snprintf(prog_name, sizeof(prog_name), "classifier_%d", i);
prog = bpf_object__find_program_by_name(obj, prog_name);
@@ -85,13 +87,12 @@ static void test_tailcall_1(void)
goto out;
}
- err = bpf_prog_test_run(main_fd, 1, buff, sizeof(buff), 0,
- &duration, &retval, NULL);
- CHECK(err || retval != 0, "tailcall", "err %d errno %d retval %d\n",
- err, errno, retval);
+ err = bpf_prog_test_run_opts(main_fd, &topts);
+ ASSERT_OK(err, "tailcall");
+ ASSERT_OK(topts.retval, "tailcall retval");
- for (i = 0; i < bpf_map__def(prog_array)->max_entries; i++) {
- j = bpf_map__def(prog_array)->max_entries - 1 - i;
+ for (i = 0; i < bpf_map__max_entries(prog_array); i++) {
+ j = bpf_map__max_entries(prog_array) - 1 - i;
snprintf(prog_name, sizeof(prog_name), "classifier_%d", j);
prog = bpf_object__find_program_by_name(obj, prog_name);
@@ -107,33 +108,30 @@ static void test_tailcall_1(void)
goto out;
}
- for (i = 0; i < bpf_map__def(prog_array)->max_entries; i++) {
- j = bpf_map__def(prog_array)->max_entries - 1 - i;
+ for (i = 0; i < bpf_map__max_entries(prog_array); i++) {
+ j = bpf_map__max_entries(prog_array) - 1 - i;
- err = bpf_prog_test_run(main_fd, 1, buff, sizeof(buff), 0,
- &duration, &retval, NULL);
- CHECK(err || retval != j, "tailcall",
- "err %d errno %d retval %d\n", err, errno, retval);
+ err = bpf_prog_test_run_opts(main_fd, &topts);
+ ASSERT_OK(err, "tailcall");
+ ASSERT_EQ(topts.retval, j, "tailcall retval");
err = bpf_map_delete_elem(map_fd, &i);
if (CHECK_FAIL(err))
goto out;
}
- err = bpf_prog_test_run(main_fd, 1, buff, sizeof(buff), 0,
- &duration, &retval, NULL);
- CHECK(err || retval != 3, "tailcall", "err %d errno %d retval %d\n",
- err, errno, retval);
+ err = bpf_prog_test_run_opts(main_fd, &topts);
+ ASSERT_OK(err, "tailcall");
+ ASSERT_EQ(topts.retval, 3, "tailcall retval");
- for (i = 0; i < bpf_map__def(prog_array)->max_entries; i++) {
+ for (i = 0; i < bpf_map__max_entries(prog_array); i++) {
err = bpf_map_delete_elem(map_fd, &i);
if (CHECK_FAIL(err >= 0 || errno != ENOENT))
goto out;
- err = bpf_prog_test_run(main_fd, 1, buff, sizeof(buff), 0,
- &duration, &retval, NULL);
- CHECK(err || retval != 3, "tailcall",
- "err %d errno %d retval %d\n", err, errno, retval);
+ err = bpf_prog_test_run_opts(main_fd, &topts);
+ ASSERT_OK(err, "tailcall");
+ ASSERT_EQ(topts.retval, 3, "tailcall retval");
}
out:
@@ -150,9 +148,13 @@ static void test_tailcall_2(void)
struct bpf_map *prog_array;
struct bpf_program *prog;
struct bpf_object *obj;
- __u32 retval, duration;
char prog_name[32];
char buff[128] = {};
+ LIBBPF_OPTS(bpf_test_run_opts, topts,
+ .data_in = buff,
+ .data_size_in = sizeof(buff),
+ .repeat = 1,
+ );
err = bpf_prog_test_load("tailcall2.o", BPF_PROG_TYPE_SCHED_CLS, &obj,
&prog_fd);
@@ -175,7 +177,7 @@ static void test_tailcall_2(void)
if (CHECK_FAIL(map_fd < 0))
goto out;
- for (i = 0; i < bpf_map__def(prog_array)->max_entries; i++) {
+ for (i = 0; i < bpf_map__max_entries(prog_array); i++) {
snprintf(prog_name, sizeof(prog_name), "classifier_%d", i);
prog = bpf_object__find_program_by_name(obj, prog_name);
@@ -191,30 +193,27 @@ static void test_tailcall_2(void)
goto out;
}
- err = bpf_prog_test_run(main_fd, 1, buff, sizeof(buff), 0,
- &duration, &retval, NULL);
- CHECK(err || retval != 2, "tailcall", "err %d errno %d retval %d\n",
- err, errno, retval);
+ err = bpf_prog_test_run_opts(main_fd, &topts);
+ ASSERT_OK(err, "tailcall");
+ ASSERT_EQ(topts.retval, 2, "tailcall retval");
i = 2;
err = bpf_map_delete_elem(map_fd, &i);
if (CHECK_FAIL(err))
goto out;
- err = bpf_prog_test_run(main_fd, 1, buff, sizeof(buff), 0,
- &duration, &retval, NULL);
- CHECK(err || retval != 1, "tailcall", "err %d errno %d retval %d\n",
- err, errno, retval);
+ err = bpf_prog_test_run_opts(main_fd, &topts);
+ ASSERT_OK(err, "tailcall");
+ ASSERT_EQ(topts.retval, 1, "tailcall retval");
i = 0;
err = bpf_map_delete_elem(map_fd, &i);
if (CHECK_FAIL(err))
goto out;
- err = bpf_prog_test_run(main_fd, 1, buff, sizeof(buff), 0,
- &duration, &retval, NULL);
- CHECK(err || retval != 3, "tailcall", "err %d errno %d retval %d\n",
- err, errno, retval);
+ err = bpf_prog_test_run_opts(main_fd, &topts);
+ ASSERT_OK(err, "tailcall");
+ ASSERT_EQ(topts.retval, 3, "tailcall retval");
out:
bpf_object__close(obj);
}
@@ -225,8 +224,12 @@ static void test_tailcall_count(const char *which)
struct bpf_map *prog_array, *data_map;
struct bpf_program *prog;
struct bpf_object *obj;
- __u32 retval, duration;
char buff[128] = {};
+ LIBBPF_OPTS(bpf_test_run_opts, topts,
+ .data_in = buff,
+ .data_size_in = sizeof(buff),
+ .repeat = 1,
+ );
err = bpf_prog_test_load(which, BPF_PROG_TYPE_SCHED_CLS, &obj,
&prog_fd);
@@ -262,10 +265,9 @@ static void test_tailcall_count(const char *which)
if (CHECK_FAIL(err))
goto out;
- err = bpf_prog_test_run(main_fd, 1, buff, sizeof(buff), 0,
- &duration, &retval, NULL);
- CHECK(err || retval != 1, "tailcall", "err %d errno %d retval %d\n",
- err, errno, retval);
+ err = bpf_prog_test_run_opts(main_fd, &topts);
+ ASSERT_OK(err, "tailcall");
+ ASSERT_EQ(topts.retval, 1, "tailcall retval");
data_map = bpf_object__find_map_by_name(obj, "tailcall.bss");
if (CHECK_FAIL(!data_map || !bpf_map__is_internal(data_map)))
@@ -277,18 +279,17 @@ static void test_tailcall_count(const char *which)
i = 0;
err = bpf_map_lookup_elem(data_fd, &i, &val);
- CHECK(err || val != 33, "tailcall count", "err %d errno %d count %d\n",
- err, errno, val);
+ ASSERT_OK(err, "tailcall count");
+ ASSERT_EQ(val, 33, "tailcall count");
i = 0;
err = bpf_map_delete_elem(map_fd, &i);
if (CHECK_FAIL(err))
goto out;
- err = bpf_prog_test_run(main_fd, 1, buff, sizeof(buff), 0,
- &duration, &retval, NULL);
- CHECK(err || retval != 0, "tailcall", "err %d errno %d retval %d\n",
- err, errno, retval);
+ err = bpf_prog_test_run_opts(main_fd, &topts);
+ ASSERT_OK(err, "tailcall");
+ ASSERT_OK(topts.retval, "tailcall retval");
out:
bpf_object__close(obj);
}
@@ -319,10 +320,14 @@ static void test_tailcall_4(void)
struct bpf_map *prog_array, *data_map;
struct bpf_program *prog;
struct bpf_object *obj;
- __u32 retval, duration;
static const int zero = 0;
char buff[128] = {};
char prog_name[32];
+ LIBBPF_OPTS(bpf_test_run_opts, topts,
+ .data_in = buff,
+ .data_size_in = sizeof(buff),
+ .repeat = 1,
+ );
err = bpf_prog_test_load("tailcall4.o", BPF_PROG_TYPE_SCHED_CLS, &obj,
&prog_fd);
@@ -353,7 +358,7 @@ static void test_tailcall_4(void)
if (CHECK_FAIL(map_fd < 0))
return;
- for (i = 0; i < bpf_map__def(prog_array)->max_entries; i++) {
+ for (i = 0; i < bpf_map__max_entries(prog_array); i++) {
snprintf(prog_name, sizeof(prog_name), "classifier_%d", i);
prog = bpf_object__find_program_by_name(obj, prog_name);
@@ -369,18 +374,17 @@ static void test_tailcall_4(void)
goto out;
}
- for (i = 0; i < bpf_map__def(prog_array)->max_entries; i++) {
+ for (i = 0; i < bpf_map__max_entries(prog_array); i++) {
err = bpf_map_update_elem(data_fd, &zero, &i, BPF_ANY);
if (CHECK_FAIL(err))
goto out;
- err = bpf_prog_test_run(main_fd, 1, buff, sizeof(buff), 0,
- &duration, &retval, NULL);
- CHECK(err || retval != i, "tailcall",
- "err %d errno %d retval %d\n", err, errno, retval);
+ err = bpf_prog_test_run_opts(main_fd, &topts);
+ ASSERT_OK(err, "tailcall");
+ ASSERT_EQ(topts.retval, i, "tailcall retval");
}
- for (i = 0; i < bpf_map__def(prog_array)->max_entries; i++) {
+ for (i = 0; i < bpf_map__max_entries(prog_array); i++) {
err = bpf_map_update_elem(data_fd, &zero, &i, BPF_ANY);
if (CHECK_FAIL(err))
goto out;
@@ -389,10 +393,9 @@ static void test_tailcall_4(void)
if (CHECK_FAIL(err))
goto out;
- err = bpf_prog_test_run(main_fd, 1, buff, sizeof(buff), 0,
- &duration, &retval, NULL);
- CHECK(err || retval != 3, "tailcall",
- "err %d errno %d retval %d\n", err, errno, retval);
+ err = bpf_prog_test_run_opts(main_fd, &topts);
+ ASSERT_OK(err, "tailcall");
+ ASSERT_EQ(topts.retval, 3, "tailcall retval");
}
out:
bpf_object__close(obj);
@@ -407,10 +410,14 @@ static void test_tailcall_5(void)
struct bpf_map *prog_array, *data_map;
struct bpf_program *prog;
struct bpf_object *obj;
- __u32 retval, duration;
static const int zero = 0;
char buff[128] = {};
char prog_name[32];
+ LIBBPF_OPTS(bpf_test_run_opts, topts,
+ .data_in = buff,
+ .data_size_in = sizeof(buff),
+ .repeat = 1,
+ );
err = bpf_prog_test_load("tailcall5.o", BPF_PROG_TYPE_SCHED_CLS, &obj,
&prog_fd);
@@ -441,7 +448,7 @@ static void test_tailcall_5(void)
if (CHECK_FAIL(map_fd < 0))
return;
- for (i = 0; i < bpf_map__def(prog_array)->max_entries; i++) {
+ for (i = 0; i < bpf_map__max_entries(prog_array); i++) {
snprintf(prog_name, sizeof(prog_name), "classifier_%d", i);
prog = bpf_object__find_program_by_name(obj, prog_name);
@@ -457,18 +464,17 @@ static void test_tailcall_5(void)
goto out;
}
- for (i = 0; i < bpf_map__def(prog_array)->max_entries; i++) {
+ for (i = 0; i < bpf_map__max_entries(prog_array); i++) {
err = bpf_map_update_elem(data_fd, &zero, &key[i], BPF_ANY);
if (CHECK_FAIL(err))
goto out;
- err = bpf_prog_test_run(main_fd, 1, buff, sizeof(buff), 0,
- &duration, &retval, NULL);
- CHECK(err || retval != i, "tailcall",
- "err %d errno %d retval %d\n", err, errno, retval);
+ err = bpf_prog_test_run_opts(main_fd, &topts);
+ ASSERT_OK(err, "tailcall");
+ ASSERT_EQ(topts.retval, i, "tailcall retval");
}
- for (i = 0; i < bpf_map__def(prog_array)->max_entries; i++) {
+ for (i = 0; i < bpf_map__max_entries(prog_array); i++) {
err = bpf_map_update_elem(data_fd, &zero, &key[i], BPF_ANY);
if (CHECK_FAIL(err))
goto out;
@@ -477,10 +483,9 @@ static void test_tailcall_5(void)
if (CHECK_FAIL(err))
goto out;
- err = bpf_prog_test_run(main_fd, 1, buff, sizeof(buff), 0,
- &duration, &retval, NULL);
- CHECK(err || retval != 3, "tailcall",
- "err %d errno %d retval %d\n", err, errno, retval);
+ err = bpf_prog_test_run_opts(main_fd, &topts);
+ ASSERT_OK(err, "tailcall");
+ ASSERT_EQ(topts.retval, 3, "tailcall retval");
}
out:
bpf_object__close(obj);
@@ -495,8 +500,12 @@ static void test_tailcall_bpf2bpf_1(void)
struct bpf_map *prog_array;
struct bpf_program *prog;
struct bpf_object *obj;
- __u32 retval, duration;
char prog_name[32];
+ LIBBPF_OPTS(bpf_test_run_opts, topts,
+ .data_in = &pkt_v4,
+ .data_size_in = sizeof(pkt_v4),
+ .repeat = 1,
+ );
err = bpf_prog_test_load("tailcall_bpf2bpf1.o", BPF_PROG_TYPE_SCHED_CLS,
&obj, &prog_fd);
@@ -520,7 +529,7 @@ static void test_tailcall_bpf2bpf_1(void)
goto out;
/* nop -> jmp */
- for (i = 0; i < bpf_map__def(prog_array)->max_entries; i++) {
+ for (i = 0; i < bpf_map__max_entries(prog_array); i++) {
snprintf(prog_name, sizeof(prog_name), "classifier_%d", i);
prog = bpf_object__find_program_by_name(obj, prog_name);
@@ -536,10 +545,9 @@ static void test_tailcall_bpf2bpf_1(void)
goto out;
}
- err = bpf_prog_test_run(main_fd, 1, &pkt_v4, sizeof(pkt_v4), 0,
- 0, &retval, &duration);
- CHECK(err || retval != 1, "tailcall",
- "err %d errno %d retval %d\n", err, errno, retval);
+ err = bpf_prog_test_run_opts(main_fd, &topts);
+ ASSERT_OK(err, "tailcall");
+ ASSERT_EQ(topts.retval, 1, "tailcall retval");
/* jmp -> nop, call subprog that will do tailcall */
i = 1;
@@ -547,10 +555,9 @@ static void test_tailcall_bpf2bpf_1(void)
if (CHECK_FAIL(err))
goto out;
- err = bpf_prog_test_run(main_fd, 1, &pkt_v4, sizeof(pkt_v4), 0,
- 0, &retval, &duration);
- CHECK(err || retval != 0, "tailcall", "err %d errno %d retval %d\n",
- err, errno, retval);
+ err = bpf_prog_test_run_opts(main_fd, &topts);
+ ASSERT_OK(err, "tailcall");
+ ASSERT_OK(topts.retval, "tailcall retval");
/* make sure that the subprog can access ctx and that the entry
* prog which called this subprog can properly return
@@ -560,11 +567,9 @@ static void test_tailcall_bpf2bpf_1(void)
if (CHECK_FAIL(err))
goto out;
- err = bpf_prog_test_run(main_fd, 1, &pkt_v4, sizeof(pkt_v4), 0,
- 0, &retval, &duration);
- CHECK(err || retval != sizeof(pkt_v4) * 2,
- "tailcall", "err %d errno %d retval %d\n",
- err, errno, retval);
+ err = bpf_prog_test_run_opts(main_fd, &topts);
+ ASSERT_OK(err, "tailcall");
+ ASSERT_EQ(topts.retval, sizeof(pkt_v4) * 2, "tailcall retval");
out:
bpf_object__close(obj);
}
@@ -579,8 +584,12 @@ static void test_tailcall_bpf2bpf_2(void)
struct bpf_map *prog_array, *data_map;
struct bpf_program *prog;
struct bpf_object *obj;
- __u32 retval, duration;
char buff[128] = {};
+ LIBBPF_OPTS(bpf_test_run_opts, topts,
+ .data_in = buff,
+ .data_size_in = sizeof(buff),
+ .repeat = 1,
+ );
err = bpf_prog_test_load("tailcall_bpf2bpf2.o", BPF_PROG_TYPE_SCHED_CLS,
&obj, &prog_fd);
@@ -616,10 +625,9 @@ static void test_tailcall_bpf2bpf_2(void)
if (CHECK_FAIL(err))
goto out;
- err = bpf_prog_test_run(main_fd, 1, buff, sizeof(buff), 0,
- &duration, &retval, NULL);
- CHECK(err || retval != 1, "tailcall", "err %d errno %d retval %d\n",
- err, errno, retval);
+ err = bpf_prog_test_run_opts(main_fd, &topts);
+ ASSERT_OK(err, "tailcall");
+ ASSERT_EQ(topts.retval, 1, "tailcall retval");
data_map = bpf_object__find_map_by_name(obj, "tailcall.bss");
if (CHECK_FAIL(!data_map || !bpf_map__is_internal(data_map)))
@@ -631,18 +639,17 @@ static void test_tailcall_bpf2bpf_2(void)
i = 0;
err = bpf_map_lookup_elem(data_fd, &i, &val);
- CHECK(err || val != 33, "tailcall count", "err %d errno %d count %d\n",
- err, errno, val);
+ ASSERT_OK(err, "tailcall count");
+ ASSERT_EQ(val, 33, "tailcall count");
i = 0;
err = bpf_map_delete_elem(map_fd, &i);
if (CHECK_FAIL(err))
goto out;
- err = bpf_prog_test_run(main_fd, 1, buff, sizeof(buff), 0,
- &duration, &retval, NULL);
- CHECK(err || retval != 0, "tailcall", "err %d errno %d retval %d\n",
- err, errno, retval);
+ err = bpf_prog_test_run_opts(main_fd, &topts);
+ ASSERT_OK(err, "tailcall");
+ ASSERT_OK(topts.retval, "tailcall retval");
out:
bpf_object__close(obj);
}
@@ -657,8 +664,12 @@ static void test_tailcall_bpf2bpf_3(void)
struct bpf_map *prog_array;
struct bpf_program *prog;
struct bpf_object *obj;
- __u32 retval, duration;
char prog_name[32];
+ LIBBPF_OPTS(bpf_test_run_opts, topts,
+ .data_in = &pkt_v4,
+ .data_size_in = sizeof(pkt_v4),
+ .repeat = 1,
+ );
err = bpf_prog_test_load("tailcall_bpf2bpf3.o", BPF_PROG_TYPE_SCHED_CLS,
&obj, &prog_fd);
@@ -681,7 +692,7 @@ static void test_tailcall_bpf2bpf_3(void)
if (CHECK_FAIL(map_fd < 0))
goto out;
- for (i = 0; i < bpf_map__def(prog_array)->max_entries; i++) {
+ for (i = 0; i < bpf_map__max_entries(prog_array); i++) {
snprintf(prog_name, sizeof(prog_name), "classifier_%d", i);
prog = bpf_object__find_program_by_name(obj, prog_name);
@@ -697,33 +708,27 @@ static void test_tailcall_bpf2bpf_3(void)
goto out;
}
- err = bpf_prog_test_run(main_fd, 1, &pkt_v4, sizeof(pkt_v4), 0,
- &duration, &retval, NULL);
- CHECK(err || retval != sizeof(pkt_v4) * 3,
- "tailcall", "err %d errno %d retval %d\n",
- err, errno, retval);
+ err = bpf_prog_test_run_opts(main_fd, &topts);
+ ASSERT_OK(err, "tailcall");
+ ASSERT_EQ(topts.retval, sizeof(pkt_v4) * 3, "tailcall retval");
i = 1;
err = bpf_map_delete_elem(map_fd, &i);
if (CHECK_FAIL(err))
goto out;
- err = bpf_prog_test_run(main_fd, 1, &pkt_v4, sizeof(pkt_v4), 0,
- &duration, &retval, NULL);
- CHECK(err || retval != sizeof(pkt_v4),
- "tailcall", "err %d errno %d retval %d\n",
- err, errno, retval);
+ err = bpf_prog_test_run_opts(main_fd, &topts);
+ ASSERT_OK(err, "tailcall");
+ ASSERT_EQ(topts.retval, sizeof(pkt_v4), "tailcall retval");
i = 0;
err = bpf_map_delete_elem(map_fd, &i);
if (CHECK_FAIL(err))
goto out;
- err = bpf_prog_test_run(main_fd, 1, &pkt_v4, sizeof(pkt_v4), 0,
- &duration, &retval, NULL);
- CHECK(err || retval != sizeof(pkt_v4) * 2,
- "tailcall", "err %d errno %d retval %d\n",
- err, errno, retval);
+ err = bpf_prog_test_run_opts(main_fd, &topts);
+ ASSERT_OK(err, "tailcall");
+ ASSERT_EQ(topts.retval, sizeof(pkt_v4) * 2, "tailcall retval");
out:
bpf_object__close(obj);
}
@@ -754,8 +759,12 @@ static void test_tailcall_bpf2bpf_4(bool noise)
struct bpf_map *prog_array, *data_map;
struct bpf_program *prog;
struct bpf_object *obj;
- __u32 retval, duration;
char prog_name[32];
+ LIBBPF_OPTS(bpf_test_run_opts, topts,
+ .data_in = &pkt_v4,
+ .data_size_in = sizeof(pkt_v4),
+ .repeat = 1,
+ );
err = bpf_prog_test_load("tailcall_bpf2bpf4.o", BPF_PROG_TYPE_SCHED_CLS,
&obj, &prog_fd);
@@ -778,7 +787,7 @@ static void test_tailcall_bpf2bpf_4(bool noise)
if (CHECK_FAIL(map_fd < 0))
goto out;
- for (i = 0; i < bpf_map__def(prog_array)->max_entries; i++) {
+ for (i = 0; i < bpf_map__max_entries(prog_array); i++) {
snprintf(prog_name, sizeof(prog_name), "classifier_%d", i);
prog = bpf_object__find_program_by_name(obj, prog_name);
@@ -809,15 +818,14 @@ static void test_tailcall_bpf2bpf_4(bool noise)
if (CHECK_FAIL(err))
goto out;
- err = bpf_prog_test_run(main_fd, 1, &pkt_v4, sizeof(pkt_v4), 0,
- &duration, &retval, NULL);
- CHECK(err || retval != sizeof(pkt_v4) * 3, "tailcall", "err %d errno %d retval %d\n",
- err, errno, retval);
+ err = bpf_prog_test_run_opts(main_fd, &topts);
+ ASSERT_OK(err, "tailcall");
+ ASSERT_EQ(topts.retval, sizeof(pkt_v4) * 3, "tailcall retval");
i = 0;
err = bpf_map_lookup_elem(data_fd, &i, &val);
- CHECK(err || val.count != 31, "tailcall count", "err %d errno %d count %d\n",
- err, errno, val.count);
+ ASSERT_OK(err, "tailcall count");
+ ASSERT_EQ(val.count, 31, "tailcall count");
out:
bpf_object__close(obj);
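
The other recurring conversion in this file replaces the deprecated bpf_map__def()->max_entries dereference with the bpf_map__max_entries() accessor. Sketch (the "jmp_table" map name is an assumption for illustration):

#include <bpf/libbpf.h>

static __u32 prog_array_size(struct bpf_object *obj)
{
	struct bpf_map *map = bpf_object__find_map_by_name(obj, "jmp_table");

	return map ? bpf_map__max_entries(map) : 0;
}
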
diff --git a/tools/testing/selftests/bpf/prog_tests/task_pt_regs.c b/tools/testing/selftests/bpf/prog_tests/task_pt_regs.c
index 37c20b5ffa70..61935e7e056a 100644
--- a/tools/testing/selftests/bpf/prog_tests/task_pt_regs.c
+++ b/tools/testing/selftests/bpf/prog_tests/task_pt_regs.c
@@ -3,18 +3,22 @@
#include <test_progs.h>
#include "test_task_pt_regs.skel.h"
+/* uprobe attach point */
+static void trigger_func(void)
+{
+ asm volatile ("");
+}
+
void test_task_pt_regs(void)
{
struct test_task_pt_regs *skel;
struct bpf_link *uprobe_link;
- size_t uprobe_offset;
- ssize_t base_addr;
+ ssize_t uprobe_offset;
bool match;
- base_addr = get_base_addr();
- if (!ASSERT_GT(base_addr, 0, "get_base_addr"))
+ uprobe_offset = get_uprobe_offset(&trigger_func);
+ if (!ASSERT_GE(uprobe_offset, 0, "uprobe_offset"))
return;
- uprobe_offset = get_uprobe_offset(&get_base_addr, base_addr);
skel = test_task_pt_regs__open_and_load();
if (!ASSERT_OK_PTR(skel, "skel_open"))
@@ -32,7 +36,7 @@ void test_task_pt_regs(void)
skel->links.handle_uprobe = uprobe_link;
/* trigger & validate uprobe */
- get_base_addr();
+ trigger_func();
if (!ASSERT_EQ(skel->bss->uprobe_res, 1, "check_uprobe_res"))
goto cleanup;
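
get_uprobe_offset() now takes only the function pointer and returns the file offset directly, instead of the old get_base_addr() two-step. Assuming the usual selftest helpers, the offset feeds a path+offset uprobe attach roughly like this:

#include <bpf/libbpf.h>

static struct bpf_link *attach_self_uprobe(struct bpf_program *prog,
					   size_t func_offset)
{
	/* retprobe = false, pid 0 = current process */
	return bpf_program__attach_uprobe(prog, false, 0,
					  "/proc/self/exe", func_offset);
}
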
diff --git a/tools/testing/selftests/bpf/prog_tests/tc_redirect.c b/tools/testing/selftests/bpf/prog_tests/tc_redirect.c
index c2426df58e17..7ad66a247c02 100644
--- a/tools/testing/selftests/bpf/prog_tests/tc_redirect.c
+++ b/tools/testing/selftests/bpf/prog_tests/tc_redirect.c
@@ -10,17 +10,15 @@
* to drop unexpected traffic.
*/
-#define _GNU_SOURCE
-
#include <arpa/inet.h>
#include <linux/if.h>
#include <linux/if_tun.h>
#include <linux/limits.h>
#include <linux/sysctl.h>
-#include <sched.h>
+#include <linux/time_types.h>
+#include <linux/net_tstamp.h>
#include <stdbool.h>
#include <stdio.h>
-#include <sys/mount.h>
#include <sys/stat.h>
#include <unistd.h>
@@ -29,6 +27,11 @@
#include "test_tc_neigh_fib.skel.h"
#include "test_tc_neigh.skel.h"
#include "test_tc_peer.skel.h"
+#include "test_tc_dtime.skel.h"
+
+#ifndef TCP_TX_DELAY
+#define TCP_TX_DELAY 37
+#endif
#define NS_SRC "ns_src"
#define NS_FWD "ns_fwd"
@@ -61,6 +64,7 @@
#define CHK_PROG_PIN_FILE "/sys/fs/bpf/test_tc_chk"
#define TIMEOUT_MILLIS 10000
+#define NSEC_PER_SEC 1000000000ULL
#define log_err(MSG, ...) \
fprintf(stderr, "(%s:%d: errno: %s) " MSG "\n", \
@@ -84,91 +88,6 @@ static int write_file(const char *path, const char *newval)
return 0;
}
-struct nstoken {
- int orig_netns_fd;
-};
-
-static int setns_by_fd(int nsfd)
-{
- int err;
-
- err = setns(nsfd, CLONE_NEWNET);
- close(nsfd);
-
- if (!ASSERT_OK(err, "setns"))
- return err;
-
- /* Switch /sys to the new namespace so that e.g. /sys/class/net
- * reflects the devices in the new namespace.
- */
- err = unshare(CLONE_NEWNS);
- if (!ASSERT_OK(err, "unshare"))
- return err;
-
- /* Make our /sys mount private, so the following umount won't
- * trigger the global umount in case it's shared.
- */
- err = mount("none", "/sys", NULL, MS_PRIVATE, NULL);
- if (!ASSERT_OK(err, "remount private /sys"))
- return err;
-
- err = umount2("/sys", MNT_DETACH);
- if (!ASSERT_OK(err, "umount2 /sys"))
- return err;
-
- err = mount("sysfs", "/sys", "sysfs", 0, NULL);
- if (!ASSERT_OK(err, "mount /sys"))
- return err;
-
- err = mount("bpffs", "/sys/fs/bpf", "bpf", 0, NULL);
- if (!ASSERT_OK(err, "mount /sys/fs/bpf"))
- return err;
-
- return 0;
-}
-
-/**
- * open_netns() - Switch to specified network namespace by name.
- *
- * Returns token with which to restore the original namespace
- * using close_netns().
- */
-static struct nstoken *open_netns(const char *name)
-{
- int nsfd;
- char nspath[PATH_MAX];
- int err;
- struct nstoken *token;
-
- token = malloc(sizeof(struct nstoken));
- if (!ASSERT_OK_PTR(token, "malloc token"))
- return NULL;
-
- token->orig_netns_fd = open("/proc/self/ns/net", O_RDONLY);
- if (!ASSERT_GE(token->orig_netns_fd, 0, "open /proc/self/ns/net"))
- goto fail;
-
- snprintf(nspath, sizeof(nspath), "%s/%s", "/var/run/netns", name);
- nsfd = open(nspath, O_RDONLY | O_CLOEXEC);
- if (!ASSERT_GE(nsfd, 0, "open netns fd"))
- goto fail;
-
- err = setns_by_fd(nsfd);
- if (!ASSERT_OK(err, "setns_by_fd"))
- goto fail;
-
- return token;
-fail:
- free(token);
- return NULL;
-}
-
-static void close_netns(struct nstoken *token)
-{
- ASSERT_OK(setns_by_fd(token->orig_netns_fd), "setns_by_fd");
- free(token);
-}
-
static int netns_setup_namespaces(const char *verb)
{
const char * const *ns = namespaces;
@@ -440,6 +359,431 @@ static int set_forwarding(bool enable)
return 0;
}
+static void rcv_tstamp(int fd, const char *expected, size_t s)
+{
+ struct __kernel_timespec pkt_ts = {};
+ char ctl[CMSG_SPACE(sizeof(pkt_ts))];
+ struct timespec now_ts;
+ struct msghdr msg = {};
+ __u64 now_ns, pkt_ns;
+ struct cmsghdr *cmsg;
+ struct iovec iov;
+ char data[32];
+ int ret;
+
+ iov.iov_base = data;
+ iov.iov_len = sizeof(data);
+ msg.msg_iov = &iov;
+ msg.msg_iovlen = 1;
+ msg.msg_control = &ctl;
+ msg.msg_controllen = sizeof(ctl);
+
+ ret = recvmsg(fd, &msg, 0);
+ if (!ASSERT_EQ(ret, s, "recvmsg"))
+ return;
+ ASSERT_STRNEQ(data, expected, s, "expected rcv data");
+
+ cmsg = CMSG_FIRSTHDR(&msg);
+ if (cmsg && cmsg->cmsg_level == SOL_SOCKET &&
+ cmsg->cmsg_type == SO_TIMESTAMPNS_NEW)
+ memcpy(&pkt_ts, CMSG_DATA(cmsg), sizeof(pkt_ts));
+
+ pkt_ns = pkt_ts.tv_sec * NSEC_PER_SEC + pkt_ts.tv_nsec;
+ ASSERT_NEQ(pkt_ns, 0, "pkt rcv tstamp");
+
+ ret = clock_gettime(CLOCK_REALTIME, &now_ts);
+ ASSERT_OK(ret, "clock_gettime");
+ now_ns = now_ts.tv_sec * NSEC_PER_SEC + now_ts.tv_nsec;
+
+ if (ASSERT_GE(now_ns, pkt_ns, "check rcv tstamp"))
+ ASSERT_LT(now_ns - pkt_ns, 5 * NSEC_PER_SEC,
+ "check rcv tstamp");
+}
+
+static void snd_tstamp(int fd, char *b, size_t s)
+{
+ struct sock_txtime opt = { .clockid = CLOCK_TAI };
+ char ctl[CMSG_SPACE(sizeof(__u64))];
+ struct timespec now_ts;
+ struct msghdr msg = {};
+ struct cmsghdr *cmsg;
+ struct iovec iov;
+ __u64 now_ns;
+ int ret;
+
+ ret = clock_gettime(CLOCK_TAI, &now_ts);
+ ASSERT_OK(ret, "clock_get_time(CLOCK_TAI)");
+ now_ns = now_ts.tv_sec * NSEC_PER_SEC + now_ts.tv_nsec;
+
+ iov.iov_base = b;
+ iov.iov_len = s;
+ msg.msg_iov = &iov;
+ msg.msg_iovlen = 1;
+ msg.msg_control = &ctl;
+ msg.msg_controllen = sizeof(ctl);
+
+ cmsg = CMSG_FIRSTHDR(&msg);
+ cmsg->cmsg_level = SOL_SOCKET;
+ cmsg->cmsg_type = SCM_TXTIME;
+ cmsg->cmsg_len = CMSG_LEN(sizeof(now_ns));
+ *(__u64 *)CMSG_DATA(cmsg) = now_ns;
+
+ ret = setsockopt(fd, SOL_SOCKET, SO_TXTIME, &opt, sizeof(opt));
+ ASSERT_OK(ret, "setsockopt(SO_TXTIME)");
+
+ ret = sendmsg(fd, &msg, 0);
+ ASSERT_EQ(ret, s, "sendmsg");
+}
+
+static void test_inet_dtime(int family, int type, const char *addr, __u16 port)
+{
+ int opt = 1, accept_fd = -1, client_fd = -1, listen_fd, err;
+ char buf[] = "testing testing";
+ struct nstoken *nstoken;
+
+ nstoken = open_netns(NS_DST);
+ if (!ASSERT_OK_PTR(nstoken, "setns dst"))
+ return;
+ listen_fd = start_server(family, type, addr, port, 0);
+ close_netns(nstoken);
+
+ if (!ASSERT_GE(listen_fd, 0, "listen"))
+ return;
+
+ /* Ensure the kernel puts the (rcv) timestamp on every skb */
+ err = setsockopt(listen_fd, SOL_SOCKET, SO_TIMESTAMPNS_NEW,
+ &opt, sizeof(opt));
+ if (!ASSERT_OK(err, "setsockopt(SO_TIMESTAMPNS_NEW)"))
+ goto done;
+
+ if (type == SOCK_STREAM) {
+ /* Ensure the kernel sets EDT when sending out rst/ack
+ * from the kernel's ctl_sk.
+ */
+ err = setsockopt(listen_fd, SOL_TCP, TCP_TX_DELAY, &opt,
+ sizeof(opt));
+ if (!ASSERT_OK(err, "setsockopt(TCP_TX_DELAY)"))
+ goto done;
+ }
+
+ nstoken = open_netns(NS_SRC);
+ if (!ASSERT_OK_PTR(nstoken, "setns src"))
+ goto done;
+ client_fd = connect_to_fd(listen_fd, TIMEOUT_MILLIS);
+ close_netns(nstoken);
+
+ if (!ASSERT_GE(client_fd, 0, "connect_to_fd"))
+ goto done;
+
+ if (type == SOCK_STREAM) {
+ int n;
+
+ accept_fd = accept(listen_fd, NULL, NULL);
+ if (!ASSERT_GE(accept_fd, 0, "accept"))
+ goto done;
+
+ n = write(client_fd, buf, sizeof(buf));
+ if (!ASSERT_EQ(n, sizeof(buf), "send to server"))
+ goto done;
+ rcv_tstamp(accept_fd, buf, sizeof(buf));
+ } else {
+ snd_tstamp(client_fd, buf, sizeof(buf));
+ rcv_tstamp(listen_fd, buf, sizeof(buf));
+ }
+
+done:
+ close(listen_fd);
+ if (accept_fd != -1)
+ close(accept_fd);
+ if (client_fd != -1)
+ close(client_fd);
+}
+
+static int netns_load_dtime_bpf(struct test_tc_dtime *skel)
+{
+ struct nstoken *nstoken;
+
+#define PIN_FNAME(__file) "/sys/fs/bpf/" #__file
+#define PIN(__prog) ({ \
+ int err = bpf_program__pin(skel->progs.__prog, PIN_FNAME(__prog)); \
+ if (!ASSERT_OK(err, "pin " #__prog)) \
+ goto fail; \
+ })
+
+ /* setup ns_src tc progs */
+ nstoken = open_netns(NS_SRC);
+ if (!ASSERT_OK_PTR(nstoken, "setns " NS_SRC))
+ return -1;
+ PIN(egress_host);
+ PIN(ingress_host);
+ SYS("tc qdisc add dev veth_src clsact");
+ SYS("tc filter add dev veth_src ingress bpf da object-pinned "
+ PIN_FNAME(ingress_host));
+ SYS("tc filter add dev veth_src egress bpf da object-pinned "
+ PIN_FNAME(egress_host));
+ close_netns(nstoken);
+
+ /* setup ns_dst tc progs */
+ nstoken = open_netns(NS_DST);
+ if (!ASSERT_OK_PTR(nstoken, "setns " NS_DST))
+ return -1;
+ PIN(egress_host);
+ PIN(ingress_host);
+ SYS("tc qdisc add dev veth_dst clsact");
+ SYS("tc filter add dev veth_dst ingress bpf da object-pinned "
+ PIN_FNAME(ingress_host));
+ SYS("tc filter add dev veth_dst egress bpf da object-pinned "
+ PIN_FNAME(egress_host));
+ close_netns(nstoken);
+
+ /* setup ns_fwd tc progs */
+ nstoken = open_netns(NS_FWD);
+ if (!ASSERT_OK_PTR(nstoken, "setns " NS_FWD))
+ return -1;
+ PIN(ingress_fwdns_prio100);
+ PIN(egress_fwdns_prio100);
+ PIN(ingress_fwdns_prio101);
+ PIN(egress_fwdns_prio101);
+ SYS("tc qdisc add dev veth_dst_fwd clsact");
+ SYS("tc filter add dev veth_dst_fwd ingress prio 100 bpf da object-pinned "
+ PIN_FNAME(ingress_fwdns_prio100));
+ SYS("tc filter add dev veth_dst_fwd ingress prio 101 bpf da object-pinned "
+ PIN_FNAME(ingress_fwdns_prio101));
+ SYS("tc filter add dev veth_dst_fwd egress prio 100 bpf da object-pinned "
+ PIN_FNAME(egress_fwdns_prio100));
+ SYS("tc filter add dev veth_dst_fwd egress prio 101 bpf da object-pinned "
+ PIN_FNAME(egress_fwdns_prio101));
+ SYS("tc qdisc add dev veth_src_fwd clsact");
+ SYS("tc filter add dev veth_src_fwd ingress prio 100 bpf da object-pinned "
+ PIN_FNAME(ingress_fwdns_prio100));
+ SYS("tc filter add dev veth_src_fwd ingress prio 101 bpf da object-pinned "
+ PIN_FNAME(ingress_fwdns_prio101));
+ SYS("tc filter add dev veth_src_fwd egress prio 100 bpf da object-pinned "
+ PIN_FNAME(egress_fwdns_prio100));
+ SYS("tc filter add dev veth_src_fwd egress prio 101 bpf da object-pinned "
+ PIN_FNAME(egress_fwdns_prio101));
+ close_netns(nstoken);
+
+#undef PIN
+
+ return 0;
+
+fail:
+ close_netns(nstoken);
+ return -1;
+}
+
+enum {
+ INGRESS_FWDNS_P100,
+ INGRESS_FWDNS_P101,
+ EGRESS_FWDNS_P100,
+ EGRESS_FWDNS_P101,
+ INGRESS_ENDHOST,
+ EGRESS_ENDHOST,
+ SET_DTIME,
+ __MAX_CNT,
+};
+
+const char *cnt_names[] = {
+ "ingress_fwdns_p100",
+ "ingress_fwdns_p101",
+ "egress_fwdns_p100",
+ "egress_fwdns_p101",
+ "ingress_endhost",
+ "egress_endhost",
+ "set_dtime",
+};
+
+enum {
+ TCP_IP6_CLEAR_DTIME,
+ TCP_IP4,
+ TCP_IP6,
+ UDP_IP4,
+ UDP_IP6,
+ TCP_IP4_RT_FWD,
+ TCP_IP6_RT_FWD,
+ UDP_IP4_RT_FWD,
+ UDP_IP6_RT_FWD,
+ UKN_TEST,
+ __NR_TESTS,
+};
+
+const char *test_names[] = {
+ "tcp ip6 clear dtime",
+ "tcp ip4",
+ "tcp ip6",
+ "udp ip4",
+ "udp ip6",
+ "tcp ip4 rt fwd",
+ "tcp ip6 rt fwd",
+ "udp ip4 rt fwd",
+ "udp ip6 rt fwd",
+};
+
+static const char *dtime_cnt_str(int test, int cnt)
+{
+ static char name[64];
+
+ snprintf(name, sizeof(name), "%s %s", test_names[test], cnt_names[cnt]);
+
+ return name;
+}
+
+static const char *dtime_err_str(int test, int cnt)
+{
+ static char name[64];
+
+ snprintf(name, sizeof(name), "%s %s errs", test_names[test],
+ cnt_names[cnt]);
+
+ return name;
+}
+
+static void test_tcp_clear_dtime(struct test_tc_dtime *skel)
+{
+ int i, t = TCP_IP6_CLEAR_DTIME;
+ __u32 *dtimes = skel->bss->dtimes[t];
+ __u32 *errs = skel->bss->errs[t];
+
+ skel->bss->test = t;
+ test_inet_dtime(AF_INET6, SOCK_STREAM, IP6_DST, 0);
+
+ ASSERT_EQ(dtimes[INGRESS_FWDNS_P100], 0,
+ dtime_cnt_str(t, INGRESS_FWDNS_P100));
+ ASSERT_EQ(dtimes[INGRESS_FWDNS_P101], 0,
+ dtime_cnt_str(t, INGRESS_FWDNS_P101));
+ ASSERT_GT(dtimes[EGRESS_FWDNS_P100], 0,
+ dtime_cnt_str(t, EGRESS_FWDNS_P100));
+ ASSERT_EQ(dtimes[EGRESS_FWDNS_P101], 0,
+ dtime_cnt_str(t, EGRESS_FWDNS_P101));
+ ASSERT_GT(dtimes[EGRESS_ENDHOST], 0,
+ dtime_cnt_str(t, EGRESS_ENDHOST));
+ ASSERT_GT(dtimes[INGRESS_ENDHOST], 0,
+ dtime_cnt_str(t, INGRESS_ENDHOST));
+
+ for (i = INGRESS_FWDNS_P100; i < __MAX_CNT; i++)
+ ASSERT_EQ(errs[i], 0, dtime_err_str(t, i));
+}
+
+static void test_tcp_dtime(struct test_tc_dtime *skel, int family, bool bpf_fwd)
+{
+ __u32 *dtimes, *errs;
+ const char *addr;
+ int i, t;
+
+ if (family == AF_INET) {
+ t = bpf_fwd ? TCP_IP4 : TCP_IP4_RT_FWD;
+ addr = IP4_DST;
+ } else {
+ t = bpf_fwd ? TCP_IP6 : TCP_IP6_RT_FWD;
+ addr = IP6_DST;
+ }
+
+ dtimes = skel->bss->dtimes[t];
+ errs = skel->bss->errs[t];
+
+ skel->bss->test = t;
+ test_inet_dtime(family, SOCK_STREAM, addr, 0);
+
+ /* fwdns_prio100 prog does not read delivery_time_type, so
+ * the kernel puts the (rcv) timestamp in __sk_buff->tstamp
+ */
+ ASSERT_EQ(dtimes[INGRESS_FWDNS_P100], 0,
+ dtime_cnt_str(t, INGRESS_FWDNS_P100));
+ for (i = INGRESS_FWDNS_P101; i < SET_DTIME; i++)
+ ASSERT_GT(dtimes[i], 0, dtime_cnt_str(t, i));
+
+ for (i = INGRESS_FWDNS_P100; i < __MAX_CNT; i++)
+ ASSERT_EQ(errs[i], 0, dtime_err_str(t, i));
+}
+
+static void test_udp_dtime(struct test_tc_dtime *skel, int family, bool bpf_fwd)
+{
+ __u32 *dtimes, *errs;
+ const char *addr;
+ int i, t;
+
+ if (family == AF_INET) {
+ t = bpf_fwd ? UDP_IP4 : UDP_IP4_RT_FWD;
+ addr = IP4_DST;
+ } else {
+ t = bpf_fwd ? UDP_IP6 : UDP_IP6_RT_FWD;
+ addr = IP6_DST;
+ }
+
+ dtimes = skel->bss->dtimes[t];
+ errs = skel->bss->errs[t];
+
+ skel->bss->test = t;
+ test_inet_dtime(family, SOCK_DGRAM, addr, 0);
+
+ ASSERT_EQ(dtimes[INGRESS_FWDNS_P100], 0,
+ dtime_cnt_str(t, INGRESS_FWDNS_P100));
+ /* non-mono delivery time is not forwarded */
+ ASSERT_EQ(dtimes[INGRESS_FWDNS_P101], 0,
+ dtime_cnt_str(t, INGRESS_FWDNS_P101));
+ for (i = EGRESS_FWDNS_P100; i < SET_DTIME; i++)
+ ASSERT_GT(dtimes[i], 0, dtime_cnt_str(t, i));
+
+ for (i = INGRESS_FWDNS_P100; i < __MAX_CNT; i++)
+ ASSERT_EQ(errs[i], 0, dtime_err_str(t, i));
+}
+
+static void test_tc_redirect_dtime(struct netns_setup_result *setup_result)
+{
+ struct test_tc_dtime *skel;
+ struct nstoken *nstoken;
+ int err;
+
+ skel = test_tc_dtime__open();
+ if (!ASSERT_OK_PTR(skel, "test_tc_dtime__open"))
+ return;
+
+ skel->rodata->IFINDEX_SRC = setup_result->ifindex_veth_src_fwd;
+ skel->rodata->IFINDEX_DST = setup_result->ifindex_veth_dst_fwd;
+
+ err = test_tc_dtime__load(skel);
+ if (!ASSERT_OK(err, "test_tc_dtime__load"))
+ goto done;
+
+ if (netns_load_dtime_bpf(skel))
+ goto done;
+
+ nstoken = open_netns(NS_FWD);
+ if (!ASSERT_OK_PTR(nstoken, "setns fwd"))
+ goto done;
+ err = set_forwarding(false);
+ close_netns(nstoken);
+ if (!ASSERT_OK(err, "disable forwarding"))
+ goto done;
+
+ test_tcp_clear_dtime(skel);
+
+ test_tcp_dtime(skel, AF_INET, true);
+ test_tcp_dtime(skel, AF_INET6, true);
+ test_udp_dtime(skel, AF_INET, true);
+ test_udp_dtime(skel, AF_INET6, true);
+
+ /* Test the kernel ip[6]_forward path instead
+ * of bpf_redirect_neigh().
+ */
+ nstoken = open_netns(NS_FWD);
+ if (!ASSERT_OK_PTR(nstoken, "setns fwd"))
+ goto done;
+ err = set_forwarding(true);
+ close_netns(nstoken);
+ if (!ASSERT_OK(err, "enable forwarding"))
+ goto done;
+
+ test_tcp_dtime(skel, AF_INET, false);
+ test_tcp_dtime(skel, AF_INET6, false);
+ test_udp_dtime(skel, AF_INET, false);
+ test_udp_dtime(skel, AF_INET6, false);
+
+done:
+ test_tc_dtime__destroy(skel);
+}
+
static void test_tc_redirect_neigh_fib(struct netns_setup_result *setup_result)
{
struct nstoken *nstoken = NULL;
@@ -787,6 +1131,7 @@ static void *test_tc_redirect_run_tests(void *arg)
RUN_TEST(tc_redirect_peer_l3);
RUN_TEST(tc_redirect_neigh);
RUN_TEST(tc_redirect_neigh_fib);
+ RUN_TEST(tc_redirect_dtime);
return NULL;
}
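
The UDP leg of the dtime tests stamps outgoing packets through SO_TXTIME plus an SCM_TXTIME cmsg, as snd_tstamp() shows above. A standalone sketch of just the socket option (CLOCK_TAI is what the EDT machinery expects):

#include <time.h>
#include <sys/socket.h>
#include <linux/net_tstamp.h>

static int enable_txtime(int fd)
{
	struct sock_txtime cfg = {
		.clockid = CLOCK_TAI,
		.flags = 0,
	};

	return setsockopt(fd, SOL_SOCKET, SO_TXTIME, &cfg, sizeof(cfg));
}
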
diff --git a/tools/testing/selftests/bpf/prog_tests/test_bpf_syscall_macro.c b/tools/testing/selftests/bpf/prog_tests/test_bpf_syscall_macro.c
new file mode 100644
index 000000000000..c381faaae741
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/test_bpf_syscall_macro.c
@@ -0,0 +1,73 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright 2022 Sony Group Corporation */
+#include <sys/prctl.h>
+#include <test_progs.h>
+#include "bpf_syscall_macro.skel.h"
+
+void test_bpf_syscall_macro(void)
+{
+ struct bpf_syscall_macro *skel = NULL;
+ int err;
+ int exp_arg1 = 1001;
+ unsigned long exp_arg2 = 12;
+ unsigned long exp_arg3 = 13;
+ unsigned long exp_arg4 = 14;
+ unsigned long exp_arg5 = 15;
+
+ /* check whether the program can be opened */
+ skel = bpf_syscall_macro__open();
+ if (!ASSERT_OK_PTR(skel, "bpf_syscall_macro__open"))
+ return;
+
+ skel->rodata->filter_pid = getpid();
+
+ /* check whether the program can be loaded */
+ err = bpf_syscall_macro__load(skel);
+ if (!ASSERT_OK(err, "bpf_syscall_macro__load"))
+ goto cleanup;
+
+ /* check whether the kprobe can be attached */
+ err = bpf_syscall_macro__attach(skel);
+ if (!ASSERT_OK(err, "bpf_syscall_macro__attach"))
+ goto cleanup;
+
+ /* check whether the syscall args are copied correctly */
+ prctl(exp_arg1, exp_arg2, exp_arg3, exp_arg4, exp_arg5);
+#if defined(__aarch64__) || defined(__s390__)
+ ASSERT_NEQ(skel->bss->arg1, exp_arg1, "syscall_arg1");
+#else
+ ASSERT_EQ(skel->bss->arg1, exp_arg1, "syscall_arg1");
+#endif
+ ASSERT_EQ(skel->bss->arg2, exp_arg2, "syscall_arg2");
+ ASSERT_EQ(skel->bss->arg3, exp_arg3, "syscall_arg3");
+ /* arg4 cannot be copied via PT_REGS_PARM4 on x86_64 */
+#ifdef __x86_64__
+ ASSERT_NEQ(skel->bss->arg4_cx, exp_arg4, "syscall_arg4_from_cx");
+#else
+ ASSERT_EQ(skel->bss->arg4_cx, exp_arg4, "syscall_arg4_from_cx");
+#endif
+ ASSERT_EQ(skel->bss->arg4, exp_arg4, "syscall_arg4");
+ ASSERT_EQ(skel->bss->arg5, exp_arg5, "syscall_arg5");
+
+ /* check whether the syscall args are copied correctly by the CORE variants */
+ ASSERT_EQ(skel->bss->arg1_core, exp_arg1, "syscall_arg1_core_variant");
+ ASSERT_EQ(skel->bss->arg2_core, exp_arg2, "syscall_arg2_core_variant");
+ ASSERT_EQ(skel->bss->arg3_core, exp_arg3, "syscall_arg3_core_variant");
+ /* arg4 cannot be copied via PT_REGS_PARM4_CORE on x86_64 */
+#ifdef __x86_64__
+ ASSERT_NEQ(skel->bss->arg4_core_cx, exp_arg4, "syscall_arg4_from_cx_core_variant");
+#else
+ ASSERT_EQ(skel->bss->arg4_core_cx, exp_arg4, "syscall_arg4_from_cx_core_variant");
+#endif
+ ASSERT_EQ(skel->bss->arg4_core, exp_arg4, "syscall_arg4_core_variant");
+ ASSERT_EQ(skel->bss->arg5_core, exp_arg5, "syscall_arg5_core_variant");
+
+ ASSERT_EQ(skel->bss->option_syscall, exp_arg1, "BPF_KPROBE_SYSCALL_option");
+ ASSERT_EQ(skel->bss->arg2_syscall, exp_arg2, "BPF_KPROBE_SYSCALL_arg2");
+ ASSERT_EQ(skel->bss->arg3_syscall, exp_arg3, "BPF_KPROBE_SYSCALL_arg3");
+ ASSERT_EQ(skel->bss->arg4_syscall, exp_arg4, "BPF_KPROBE_SYSCALL_arg4");
+ ASSERT_EQ(skel->bss->arg5_syscall, exp_arg5, "BPF_KPROBE_SYSCALL_arg5");
+
+cleanup:
+ bpf_syscall_macro__destroy(skel);
+}
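
The BPF counterpart (bpf_syscall_macro.bpf.c, not shown in this hunk) exercises the new BPF_KPROBE_SYSCALL macro from bpf_tracing.h, which hides the arch-specific syscall-wrapper pt_regs indirection. A hedged sketch with assumed names:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

int option_syscall = 0;

/* arch-specific wrapper prefix; __arm64_/__s390x_ on other arches */
SEC("kprobe/__x64_sys_prctl")
int BPF_KPROBE_SYSCALL(handle_prctl, int option)
{
	option_syscall = option;
	return 0;
}

char _license[] SEC("license") = "GPL";
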
diff --git a/tools/testing/selftests/bpf/prog_tests/test_ima.c b/tools/testing/selftests/bpf/prog_tests/test_ima.c
index 97d8a6f84f4a..b13feceb38f1 100644
--- a/tools/testing/selftests/bpf/prog_tests/test_ima.c
+++ b/tools/testing/selftests/bpf/prog_tests/test_ima.c
@@ -13,14 +13,17 @@
#include "ima.skel.h"
-static int run_measured_process(const char *measured_dir, u32 *monitored_pid)
+#define MAX_SAMPLES 4
+
+static int _run_measured_process(const char *measured_dir, u32 *monitored_pid,
+ const char *cmd)
{
int child_pid, child_status;
child_pid = fork();
if (child_pid == 0) {
*monitored_pid = getpid();
- execlp("./ima_setup.sh", "./ima_setup.sh", "run", measured_dir,
+ execlp("./ima_setup.sh", "./ima_setup.sh", cmd, measured_dir,
NULL);
exit(errno);
@@ -32,19 +35,39 @@ static int run_measured_process(const char *measured_dir, u32 *monitored_pid)
return -EINVAL;
}
-static u64 ima_hash_from_bpf;
+static int run_measured_process(const char *measured_dir, u32 *monitored_pid)
+{
+ return _run_measured_process(measured_dir, monitored_pid, "run");
+}
+
+static u64 ima_hash_from_bpf[MAX_SAMPLES];
+static int ima_hash_from_bpf_idx;
static int process_sample(void *ctx, void *data, size_t len)
{
- ima_hash_from_bpf = *((u64 *)data);
+ if (ima_hash_from_bpf_idx >= MAX_SAMPLES)
+ return -ENOSPC;
+
+ ima_hash_from_bpf[ima_hash_from_bpf_idx++] = *((u64 *)data);
return 0;
}
+static void test_init(struct ima__bss *bss)
+{
+ ima_hash_from_bpf_idx = 0;
+
+ bss->use_ima_file_hash = false;
+ bss->enable_bprm_creds_for_exec = false;
+ bss->enable_kernel_read_file = false;
+ bss->test_deny = false;
+}
+
void test_test_ima(void)
{
char measured_dir_template[] = "/tmp/ima_measuredXXXXXX";
struct ring_buffer *ringbuf = NULL;
const char *measured_dir;
+ u64 bin_true_sample;
char cmd[256];
int err, duration = 0;
@@ -72,13 +95,127 @@ void test_test_ima(void)
if (CHECK(err, "failed to run command", "%s, errno = %d\n", cmd, errno))
goto close_clean;
+ /*
+ * Test #1
+ * - Goal: obtain a sample with the bpf_ima_inode_hash() helper
+ * - Expected result: 1 sample (/bin/true)
+ */
+ test_init(skel->bss);
err = run_measured_process(measured_dir, &skel->bss->monitored_pid);
- if (CHECK(err, "run_measured_process", "err = %d\n", err))
+ if (CHECK(err, "run_measured_process #1", "err = %d\n", err))
goto close_clean;
err = ring_buffer__consume(ringbuf);
ASSERT_EQ(err, 1, "num_samples_or_err");
- ASSERT_NEQ(ima_hash_from_bpf, 0, "ima_hash");
+ ASSERT_NEQ(ima_hash_from_bpf[0], 0, "ima_hash");
+
+ /*
+ * Test #2
+ * - Goal: obtain samples with the bpf_ima_file_hash() helper
+ * - Expected result: 2 samples (./ima_setup.sh, /bin/true)
+ */
+ test_init(skel->bss);
+ skel->bss->use_ima_file_hash = true;
+ err = run_measured_process(measured_dir, &skel->bss->monitored_pid);
+ if (CHECK(err, "run_measured_process #2", "err = %d\n", err))
+ goto close_clean;
+
+ err = ring_buffer__consume(ringbuf);
+ ASSERT_EQ(err, 2, "num_samples_or_err");
+ ASSERT_NEQ(ima_hash_from_bpf[0], 0, "ima_hash");
+ ASSERT_NEQ(ima_hash_from_bpf[1], 0, "ima_hash");
+ bin_true_sample = ima_hash_from_bpf[1];
+
+ /*
+ * Test #3
+ * - Goal: confirm that bpf_ima_inode_hash() returns a non-fresh digest
+ * - Expected result: 2 samples (/bin/true: non-fresh, fresh)
+ */
+ test_init(skel->bss);
+
+ err = _run_measured_process(measured_dir, &skel->bss->monitored_pid,
+ "modify-bin");
+ if (CHECK(err, "modify-bin #3", "err = %d\n", err))
+ goto close_clean;
+
+ skel->bss->enable_bprm_creds_for_exec = true;
+ err = run_measured_process(measured_dir, &skel->bss->monitored_pid);
+ if (CHECK(err, "run_measured_process #3", "err = %d\n", err))
+ goto close_clean;
+
+ err = ring_buffer__consume(ringbuf);
+ ASSERT_EQ(err, 2, "num_samples_or_err");
+ ASSERT_NEQ(ima_hash_from_bpf[0], 0, "ima_hash");
+ ASSERT_NEQ(ima_hash_from_bpf[1], 0, "ima_hash");
+ ASSERT_EQ(ima_hash_from_bpf[0], bin_true_sample, "sample_equal_or_err");
+ /* IMA refreshed the digest. */
+ ASSERT_NEQ(ima_hash_from_bpf[1], bin_true_sample,
+ "sample_different_or_err");
+
+ /*
+ * Test #4
+ * - Goal: verify that bpf_ima_file_hash() returns a fresh digest
+ * - Expected result: 4 samples (./ima_setup.sh: fresh, fresh;
+ * /bin/true: fresh, fresh)
+ */
+ test_init(skel->bss);
+ skel->bss->use_ima_file_hash = true;
+ skel->bss->enable_bprm_creds_for_exec = true;
+ err = run_measured_process(measured_dir, &skel->bss->monitored_pid);
+ if (CHECK(err, "run_measured_process #4", "err = %d\n", err))
+ goto close_clean;
+
+ err = ring_buffer__consume(ringbuf);
+ ASSERT_EQ(err, 4, "num_samples_or_err");
+ ASSERT_NEQ(ima_hash_from_bpf[0], 0, "ima_hash");
+ ASSERT_NEQ(ima_hash_from_bpf[1], 0, "ima_hash");
+ ASSERT_NEQ(ima_hash_from_bpf[2], 0, "ima_hash");
+ ASSERT_NEQ(ima_hash_from_bpf[3], 0, "ima_hash");
+ ASSERT_NEQ(ima_hash_from_bpf[2], bin_true_sample,
+ "sample_different_or_err");
+ ASSERT_EQ(ima_hash_from_bpf[3], ima_hash_from_bpf[2],
+ "sample_equal_or_err");
+
+ skel->bss->use_ima_file_hash = false;
+ skel->bss->enable_bprm_creds_for_exec = false;
+ err = _run_measured_process(measured_dir, &skel->bss->monitored_pid,
+ "restore-bin");
+ if (CHECK(err, "restore-bin #3", "err = %d\n", err))
+ goto close_clean;
+
+ /*
+ * Test #5
+ * - Goal: obtain a sample from the kernel_read_file hook
+ * - Expected result: 2 samples (./ima_setup.sh, policy_test)
+ */
+ test_init(skel->bss);
+ skel->bss->use_ima_file_hash = true;
+ skel->bss->enable_kernel_read_file = true;
+ err = _run_measured_process(measured_dir, &skel->bss->monitored_pid,
+ "load-policy");
+ if (CHECK(err, "run_measured_process #5", "err = %d\n", err))
+ goto close_clean;
+
+ err = ring_buffer__consume(ringbuf);
+ ASSERT_EQ(err, 2, "num_samples_or_err");
+ ASSERT_NEQ(ima_hash_from_bpf[0], 0, "ima_hash");
+ ASSERT_NEQ(ima_hash_from_bpf[1], 0, "ima_hash");
+
+ /*
+ * Test #6
+ * - Goal: ensure that the kernel_read_file hook denies an operation
+ * - Expected result: 0 samples
+ */
+ test_init(skel->bss);
+ skel->bss->enable_kernel_read_file = true;
+ skel->bss->test_deny = true;
+ err = _run_measured_process(measured_dir, &skel->bss->monitored_pid,
+ "load-policy");
+ if (CHECK(!err, "run_measured_process #6", "err = %d\n", err))
+ goto close_clean;
+
+ err = ring_buffer__consume(ringbuf);
+ ASSERT_EQ(err, 0, "num_samples_or_err");
close_clean:
snprintf(cmd, sizeof(cmd), "./ima_setup.sh cleanup %s", measured_dir);
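
The multi-sample handling relies on the libbpf ring buffer consumer; ring_buffer__consume() returns the number of records seen, which is what the num_samples_or_err asserts compare against. A minimal sketch of that pattern:

#include <bpf/libbpf.h>

static int on_sample(void *ctx, void *data, size_t len)
{
	return 0; /* return <0 (e.g. -ENOSPC above) to stop early */
}

static int drain(int ringbuf_map_fd)
{
	struct ring_buffer *rb;
	int n;

	rb = ring_buffer__new(ringbuf_map_fd, on_sample, NULL, NULL);
	if (!rb)
		return -1;
	n = ring_buffer__consume(rb); /* number of records, or <0 */
	ring_buffer__free(rb);
	return n;
}
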
diff --git a/tools/testing/selftests/bpf/prog_tests/test_profiler.c b/tools/testing/selftests/bpf/prog_tests/test_profiler.c
index 4ca275101ee0..de24e8f0e738 100644
--- a/tools/testing/selftests/bpf/prog_tests/test_profiler.c
+++ b/tools/testing/selftests/bpf/prog_tests/test_profiler.c
@@ -8,20 +8,20 @@
static int sanity_run(struct bpf_program *prog)
{
- struct bpf_prog_test_run_attr test_attr = {};
+ LIBBPF_OPTS(bpf_test_run_opts, test_attr);
__u64 args[] = {1, 2, 3};
- __u32 duration = 0;
int err, prog_fd;
prog_fd = bpf_program__fd(prog);
- test_attr.prog_fd = prog_fd;
test_attr.ctx_in = args;
test_attr.ctx_size_in = sizeof(args);
- err = bpf_prog_test_run_xattr(&test_attr);
- if (CHECK(err || test_attr.retval, "test_run",
- "err %d errno %d retval %d duration %d\n",
- err, errno, test_attr.retval, duration))
+ err = bpf_prog_test_run_opts(prog_fd, &test_attr);
+ if (!ASSERT_OK(err, "test_run"))
+ return -1;
+
+ if (!ASSERT_OK(test_attr.retval, "test_run retval"))
return -1;
+
return 0;
}
diff --git a/tools/testing/selftests/bpf/prog_tests/test_skb_pkt_end.c b/tools/testing/selftests/bpf/prog_tests/test_skb_pkt_end.c
index cf1215531920..ae93411fd582 100644
--- a/tools/testing/selftests/bpf/prog_tests/test_skb_pkt_end.c
+++ b/tools/testing/selftests/bpf/prog_tests/test_skb_pkt_end.c
@@ -6,15 +6,18 @@
static int sanity_run(struct bpf_program *prog)
{
- __u32 duration, retval;
int err, prog_fd;
+ LIBBPF_OPTS(bpf_test_run_opts, topts,
+ .data_in = &pkt_v4,
+ .data_size_in = sizeof(pkt_v4),
+ .repeat = 1,
+ );
prog_fd = bpf_program__fd(prog);
- err = bpf_prog_test_run(prog_fd, 1, &pkt_v4, sizeof(pkt_v4),
- NULL, NULL, &retval, &duration);
- if (CHECK(err || retval != 123, "test_run",
- "err %d errno %d retval %d duration %d\n",
- err, errno, retval, duration))
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ if (!ASSERT_OK(err, "test_run"))
+ return -1;
+ if (!ASSERT_EQ(topts.retval, 123, "test_run retval"))
return -1;
return 0;
}
diff --git a/tools/testing/selftests/bpf/prog_tests/timer.c b/tools/testing/selftests/bpf/prog_tests/timer.c
index 0f4e49e622cd..7eb049214859 100644
--- a/tools/testing/selftests/bpf/prog_tests/timer.c
+++ b/tools/testing/selftests/bpf/prog_tests/timer.c
@@ -6,7 +6,7 @@
static int timer(struct timer *timer_skel)
{
int err, prog_fd;
- __u32 duration = 0, retval;
+ LIBBPF_OPTS(bpf_test_run_opts, topts);
err = timer__attach(timer_skel);
if (!ASSERT_OK(err, "timer_attach"))
@@ -16,10 +16,9 @@ static int timer(struct timer *timer_skel)
ASSERT_EQ(timer_skel->data->callback2_check, 52, "callback2_check1");
prog_fd = bpf_program__fd(timer_skel->progs.test1);
- err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
- NULL, NULL, &retval, &duration);
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
ASSERT_OK(err, "test_run");
- ASSERT_EQ(retval, 0, "test_run");
+ ASSERT_EQ(topts.retval, 0, "test_run");
timer__detach(timer_skel);
usleep(50); /* 10 usecs should be enough, but give it extra */
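
For context, the bpf_timer API that the skeleton's test1 program exercises follows this shape (a hedged sketch, not the actual timer.bpf.c):

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

#define CLOCK_MONOTONIC 1 /* vmlinux.h carries no UAPI macros */

struct elem {
	struct bpf_timer t;
};

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, 1);
	__type(key, int);
	__type(value, struct elem);
} tmap SEC(".maps");

static int cb(void *map, int *key, struct elem *val)
{
	return 0; /* 0 = do not re-arm the timer */
}

SEC("fentry/bpf_fentry_test1")
int BPF_PROG(test1)
{
	int key = 0;
	struct elem *val = bpf_map_lookup_elem(&tmap, &key);

	if (val) {
		bpf_timer_init(&val->t, &tmap, CLOCK_MONOTONIC);
		bpf_timer_set_callback(&val->t, cb);
		bpf_timer_start(&val->t, 0 /* nsecs */, 0);
	}
	return 0;
}

char _license[] SEC("license") = "GPL";
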
diff --git a/tools/testing/selftests/bpf/prog_tests/timer_mim.c b/tools/testing/selftests/bpf/prog_tests/timer_mim.c
index 949a0617869d..2ee5f5ae11d4 100644
--- a/tools/testing/selftests/bpf/prog_tests/timer_mim.c
+++ b/tools/testing/selftests/bpf/prog_tests/timer_mim.c
@@ -6,19 +6,18 @@
static int timer_mim(struct timer_mim *timer_skel)
{
- __u32 duration = 0, retval;
__u64 cnt1, cnt2;
int err, prog_fd, key1 = 1;
+ LIBBPF_OPTS(bpf_test_run_opts, topts);
err = timer_mim__attach(timer_skel);
if (!ASSERT_OK(err, "timer_attach"))
return err;
prog_fd = bpf_program__fd(timer_skel->progs.test1);
- err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
- NULL, NULL, &retval, &duration);
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
ASSERT_OK(err, "test_run");
- ASSERT_EQ(retval, 0, "test_run");
+ ASSERT_EQ(topts.retval, 0, "test_run");
timer_mim__detach(timer_skel);
/* check that timer_cb[12] are incrementing 'cnt' */
diff --git a/tools/testing/selftests/bpf/prog_tests/trace_ext.c b/tools/testing/selftests/bpf/prog_tests/trace_ext.c
index 924441d4362d..aabdff7bea3e 100644
--- a/tools/testing/selftests/bpf/prog_tests/trace_ext.c
+++ b/tools/testing/selftests/bpf/prog_tests/trace_ext.c
@@ -23,8 +23,12 @@ void test_trace_ext(void)
int err, pkt_fd, ext_fd;
struct bpf_program *prog;
char buf[100];
- __u32 retval;
__u64 len;
+ LIBBPF_OPTS(bpf_test_run_opts, topts,
+ .data_in = &pkt_v4,
+ .data_size_in = sizeof(pkt_v4),
+ .repeat = 1,
+ );
/* open/load/attach test_pkt_md_access */
skel_pkt = test_pkt_md_access__open_and_load();
@@ -77,32 +81,32 @@ void test_trace_ext(void)
/* load/attach tracing */
err = test_trace_ext_tracing__load(skel_trace);
- if (CHECK(err, "setup", "tracing/test_pkt_md_access_new load failed\n")) {
+ if (!ASSERT_OK(err, "tracing/test_pkt_md_access_new load")) {
libbpf_strerror(err, buf, sizeof(buf));
fprintf(stderr, "%s\n", buf);
goto cleanup;
}
err = test_trace_ext_tracing__attach(skel_trace);
- if (CHECK(err, "setup", "tracing/test_pkt_md_access_new attach failed: %d\n", err))
+ if (!ASSERT_OK(err, "tracing/test_pkt_md_access_new attach"))
goto cleanup;
/* trigger the test */
- err = bpf_prog_test_run(pkt_fd, 1, &pkt_v4, sizeof(pkt_v4),
- NULL, NULL, &retval, &duration);
- CHECK(err || retval, "run", "err %d errno %d retval %d\n", err, errno, retval);
+ err = bpf_prog_test_run_opts(pkt_fd, &topts);
+ ASSERT_OK(err, "test_run_opts err");
+ ASSERT_OK(topts.retval, "test_run_opts retval");
bss_ext = skel_ext->bss;
bss_trace = skel_trace->bss;
len = bss_ext->ext_called;
- CHECK(bss_ext->ext_called == 0,
- "check", "failed to trigger freplace/test_pkt_md_access\n");
- CHECK(bss_trace->fentry_called != len,
- "check", "failed to trigger fentry/test_pkt_md_access_new\n");
- CHECK(bss_trace->fexit_called != len,
- "check", "failed to trigger fexit/test_pkt_md_access_new\n");
+ ASSERT_NEQ(bss_ext->ext_called, 0,
+ "failed to trigger freplace/test_pkt_md_access");
+ ASSERT_EQ(bss_trace->fentry_called, len,
+ "failed to trigger fentry/test_pkt_md_access_new");
+ ASSERT_EQ(bss_trace->fexit_called, len,
+ "failed to trigger fexit/test_pkt_md_access_new");
cleanup:
test_trace_ext_tracing__destroy(skel_trace);
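
The "ext" side of this test is an freplace program: it substitutes for a function in the target object, with the attach target set before load. A hedged declaration sketch with assumed names:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

__u64 ext_called = 0;

SEC("freplace/test_pkt_md_access")
int test_pkt_md_access_new(struct __sk_buff *skb)
{
	ext_called = skb->len;
	return 0;
}

char _license[] SEC("license") = "GPL";
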
diff --git a/tools/testing/selftests/bpf/prog_tests/xdp.c b/tools/testing/selftests/bpf/prog_tests/xdp.c
index ac65456b7ab8..ec21c53cb1da 100644
--- a/tools/testing/selftests/bpf/prog_tests/xdp.c
+++ b/tools/testing/selftests/bpf/prog_tests/xdp.c
@@ -13,8 +13,14 @@ void test_xdp(void)
char buf[128];
struct ipv6hdr iph6;
struct iphdr iph;
- __u32 duration, retval, size;
int err, prog_fd, map_fd;
+ LIBBPF_OPTS(bpf_test_run_opts, topts,
+ .data_in = &pkt_v4,
+ .data_size_in = sizeof(pkt_v4),
+ .data_out = buf,
+ .data_size_out = sizeof(buf),
+ .repeat = 1,
+ );
err = bpf_prog_test_load(file, BPF_PROG_TYPE_XDP, &obj, &prog_fd);
if (CHECK_FAIL(err))
@@ -26,21 +32,23 @@ void test_xdp(void)
bpf_map_update_elem(map_fd, &key4, &value4, 0);
bpf_map_update_elem(map_fd, &key6, &value6, 0);
- err = bpf_prog_test_run(prog_fd, 1, &pkt_v4, sizeof(pkt_v4),
- buf, &size, &retval, &duration);
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
memcpy(&iph, buf + sizeof(struct ethhdr), sizeof(iph));
- CHECK(err || retval != XDP_TX || size != 74 ||
- iph.protocol != IPPROTO_IPIP, "ipv4",
- "err %d errno %d retval %d size %d\n",
- err, errno, retval, size);
+ ASSERT_OK(err, "test_run");
+ ASSERT_EQ(topts.retval, XDP_TX, "ipv4 test_run retval");
+ ASSERT_EQ(topts.data_size_out, 74, "ipv4 test_run data_size_out");
+ ASSERT_EQ(iph.protocol, IPPROTO_IPIP, "ipv4 test_run iph.protocol");
- err = bpf_prog_test_run(prog_fd, 1, &pkt_v6, sizeof(pkt_v6),
- buf, &size, &retval, &duration);
+ topts.data_in = &pkt_v6;
+ topts.data_size_in = sizeof(pkt_v6);
+ topts.data_size_out = sizeof(buf);
+
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
memcpy(&iph6, buf + sizeof(struct ethhdr), sizeof(iph6));
- CHECK(err || retval != XDP_TX || size != 114 ||
- iph6.nexthdr != IPPROTO_IPV6, "ipv6",
- "err %d errno %d retval %d size %d\n",
- err, errno, retval, size);
+ ASSERT_OK(err, "test_run");
+ ASSERT_EQ(topts.retval, XDP_TX, "ipv6 test_run retval");
+ ASSERT_EQ(topts.data_size_out, 114, "ipv6 test_run data_size_out");
+ ASSERT_EQ(iph6.nexthdr, IPPROTO_IPV6, "ipv6 test_run iph6.nexthdr");
out:
bpf_object__close(obj);
}
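
For reference, the magic sizes 74 and 114 follow from the test packets defined in network_helpers.h and the IP-in-IP encapsulation performed by the XDP program:

/* pkt_v4: 14 (eth) + 20 (ipv4) + 20 (tcp) = 54; +20 outer IPv4 (IPIP) ->  74 */
/* pkt_v6: 14 (eth) + 40 (ipv6) + 20 (tcp) = 74; +40 outer IPv6        -> 114 */
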
diff --git a/tools/testing/selftests/bpf/prog_tests/xdp_adjust_frags.c b/tools/testing/selftests/bpf/prog_tests/xdp_adjust_frags.c
new file mode 100644
index 000000000000..2f033da4cd45
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/xdp_adjust_frags.c
@@ -0,0 +1,146 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <test_progs.h>
+#include <network_helpers.h>
+
+static void test_xdp_update_frags(void)
+{
+ const char *file = "./test_xdp_update_frags.o";
+ int err, prog_fd, max_skb_frags, buf_size, num;
+ struct bpf_program *prog;
+ struct bpf_object *obj;
+ __u32 *offset;
+ __u8 *buf;
+ FILE *f;
+ LIBBPF_OPTS(bpf_test_run_opts, topts);
+
+ obj = bpf_object__open(file);
+ if (libbpf_get_error(obj))
+ return;
+
+ prog = bpf_object__next_program(obj, NULL);
+ if (bpf_object__load(obj))
+ goto out;
+
+ prog_fd = bpf_program__fd(prog);
+
+ buf = malloc(128);
+ if (!ASSERT_OK_PTR(buf, "alloc buf 128b"))
+ goto out;
+
+ memset(buf, 0, 128);
+ offset = (__u32 *)buf;
+ *offset = 16;
+ buf[*offset] = 0xaa; /* marker at offset 16 (head) */
+ buf[*offset + 15] = 0xaa; /* marker at offset 31 (head) */
+
+ topts.data_in = buf;
+ topts.data_out = buf;
+ topts.data_size_in = 128;
+ topts.data_size_out = 128;
+
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+
+ /* test_xdp_update_frags: buf[16,31]: 0xaa -> 0xbb */
+ ASSERT_OK(err, "xdp_update_frag");
+ ASSERT_EQ(topts.retval, XDP_PASS, "xdp_update_frag retval");
+ ASSERT_EQ(buf[16], 0xbb, "xdp_update_frag buf[16]");
+ ASSERT_EQ(buf[31], 0xbb, "xdp_update_frag buf[31]");
+
+ free(buf);
+
+ buf = malloc(9000);
+ if (!ASSERT_OK_PTR(buf, "alloc buf 9Kb"))
+ goto out;
+
+ memset(buf, 0, 9000);
+ offset = (__u32 *)buf;
+ *offset = 5000;
+ buf[*offset] = 0xaa; /* marker at offset 5000 (frag0) */
+ buf[*offset + 15] = 0xaa; /* marker at offset 5015 (frag0) */
+
+ topts.data_in = buf;
+ topts.data_out = buf;
+ topts.data_size_in = 9000;
+ topts.data_size_out = 9000;
+
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+
+ /* test_xdp_update_frags: buf[5000,5015]: 0xaa -> 0xbb */
+ ASSERT_OK(err, "xdp_update_frag");
+ ASSERT_EQ(topts.retval, XDP_PASS, "xdp_update_frag retval");
+ ASSERT_EQ(buf[5000], 0xbb, "xdp_update_frag buf[5000]");
+ ASSERT_EQ(buf[5015], 0xbb, "xdp_update_frag buf[5015]");
+
+ memset(buf, 0, 9000);
+ offset = (__u32 *)buf;
+ *offset = 3510;
+ buf[*offset] = 0xaa; /* marker at offset 3510 (head) */
+ buf[*offset + 15] = 0xaa; /* marker at offset 3525 (frag0) */
+
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+
+ /* test_xdp_update_frags: buf[3510,3525]: 0xaa -> 0xbb */
+ ASSERT_OK(err, "xdp_update_frag");
+ ASSERT_EQ(topts.retval, XDP_PASS, "xdp_update_frag retval");
+ ASSERT_EQ(buf[3510], 0xbb, "xdp_update_frag buf[3510]");
+ ASSERT_EQ(buf[3525], 0xbb, "xdp_update_frag buf[3525]");
+
+ memset(buf, 0, 9000);
+ offset = (__u32 *)buf;
+ *offset = 7606;
+ buf[*offset] = 0xaa; /* marker at offset 7606 (frag0) */
+ buf[*offset + 15] = 0xaa; /* marker at offset 7621 (frag1) */
+
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+
+ /* test_xdp_update_frags: buf[7606,7621]: 0xaa -> 0xbb */
+ ASSERT_OK(err, "xdp_update_frag");
+ ASSERT_EQ(topts.retval, XDP_PASS, "xdp_update_frag retval");
+ ASSERT_EQ(buf[7606], 0xbb, "xdp_update_frag buf[7606]");
+ ASSERT_EQ(buf[7621], 0xbb, "xdp_update_frag buf[7621]");
+
+ free(buf);
+
+ /* test_xdp_update_frags: unsupported buffer size */
+ f = fopen("/proc/sys/net/core/max_skb_frags", "r");
+ if (!ASSERT_OK_PTR(f, "max_skb_frag file pointer"))
+ goto out;
+
+ num = fscanf(f, "%d", &max_skb_frags);
+ fclose(f);
+
+ if (!ASSERT_EQ(num, 1, "max_skb_frags read failed"))
+ goto out;
+
+ /* xdp_buff linear area size is always set to 4096 in the
+ * bpf_prog_test_run_xdp routine.
+ */
+ buf_size = 4096 + (max_skb_frags + 1) * sysconf(_SC_PAGE_SIZE);
+ buf = malloc(buf_size);
+ if (!ASSERT_OK_PTR(buf, "alloc buf"))
+ goto out;
+
+ memset(buf, 0, buf_size);
+ offset = (__u32 *)buf;
+ *offset = 16;
+ buf[*offset] = 0xaa;
+ buf[*offset + 15] = 0xaa;
+
+ topts.data_in = buf;
+ topts.data_out = buf;
+ topts.data_size_in = buf_size;
+ topts.data_size_out = buf_size;
+
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ ASSERT_EQ(err, -ENOMEM,
+ "unsupported buf size, possible non-default /proc/sys/net/core/max_skb_flags?");
+ free(buf);
+out:
+ bpf_object__close(obj);
+}
+
+void test_xdp_adjust_frags(void)
+{
+ if (test__start_subtest("xdp_adjust_frags"))
+ test_xdp_update_frags();
+}
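
The -ENOMEM threshold follows from the comment in the test: the linear area is fixed at 4096 bytes and each fragment holds at most one page, so allocating one page more than max_skb_frags can hold guarantees failure. Worked numbers, assuming the default max_skb_frags of 17 and 4 KiB pages:

/* usable capacity: 4096 (linear) + 17 * 4096 (frags) = 73728 bytes           */
/* test buffer:     4096 + (17 + 1) * 4096            = 77824 bytes -> -ENOMEM */
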
diff --git a/tools/testing/selftests/bpf/prog_tests/xdp_adjust_tail.c b/tools/testing/selftests/bpf/prog_tests/xdp_adjust_tail.c
index 3f5a17c38be5..21ceac24e174 100644
--- a/tools/testing/selftests/bpf/prog_tests/xdp_adjust_tail.c
+++ b/tools/testing/selftests/bpf/prog_tests/xdp_adjust_tail.c
@@ -5,28 +5,35 @@
static void test_xdp_adjust_tail_shrink(void)
{
const char *file = "./test_xdp_adjust_tail_shrink.o";
- __u32 duration, retval, size, expect_sz;
+ __u32 expect_sz;
struct bpf_object *obj;
int err, prog_fd;
char buf[128];
+ LIBBPF_OPTS(bpf_test_run_opts, topts,
+ .data_in = &pkt_v4,
+ .data_size_in = sizeof(pkt_v4),
+ .data_out = buf,
+ .data_size_out = sizeof(buf),
+ .repeat = 1,
+ );
err = bpf_prog_test_load(file, BPF_PROG_TYPE_XDP, &obj, &prog_fd);
- if (CHECK_FAIL(err))
+ if (!ASSERT_OK(err, "test_xdp_adjust_tail_shrink"))
return;
- err = bpf_prog_test_run(prog_fd, 1, &pkt_v4, sizeof(pkt_v4),
- buf, &size, &retval, &duration);
-
- CHECK(err || retval != XDP_DROP,
- "ipv4", "err %d errno %d retval %d size %d\n",
- err, errno, retval, size);
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ ASSERT_OK(err, "ipv4");
+ ASSERT_EQ(topts.retval, XDP_DROP, "ipv4 retval");
expect_sz = sizeof(pkt_v6) - 20; /* Test shrink with 20 bytes */
- err = bpf_prog_test_run(prog_fd, 1, &pkt_v6, sizeof(pkt_v6),
- buf, &size, &retval, &duration);
- CHECK(err || retval != XDP_TX || size != expect_sz,
- "ipv6", "err %d errno %d retval %d size %d expect-size %d\n",
- err, errno, retval, size, expect_sz);
+ topts.data_in = &pkt_v6;
+ topts.data_size_in = sizeof(pkt_v6);
+ topts.data_size_out = sizeof(buf);
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ ASSERT_OK(err, "ipv6");
+ ASSERT_EQ(topts.retval, XDP_TX, "ipv6 retval");
+ ASSERT_EQ(topts.data_size_out, expect_sz, "ipv6 size");
+
bpf_object__close(obj);
}
@@ -35,25 +42,31 @@ static void test_xdp_adjust_tail_grow(void)
const char *file = "./test_xdp_adjust_tail_grow.o";
struct bpf_object *obj;
char buf[4096]; /* avoid segfault: large buf to hold grow results */
- __u32 duration, retval, size, expect_sz;
+ __u32 expect_sz;
int err, prog_fd;
+ LIBBPF_OPTS(bpf_test_run_opts, topts,
+ .data_in = &pkt_v4,
+ .data_size_in = sizeof(pkt_v4),
+ .data_out = buf,
+ .data_size_out = sizeof(buf),
+ .repeat = 1,
+ );
err = bpf_prog_test_load(file, BPF_PROG_TYPE_XDP, &obj, &prog_fd);
- if (CHECK_FAIL(err))
+ if (!ASSERT_OK(err, "test_xdp_adjust_tail_grow"))
return;
- err = bpf_prog_test_run(prog_fd, 1, &pkt_v4, sizeof(pkt_v4),
- buf, &size, &retval, &duration);
- CHECK(err || retval != XDP_DROP,
- "ipv4", "err %d errno %d retval %d size %d\n",
- err, errno, retval, size);
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ ASSERT_OK(err, "ipv4");
+ ASSERT_EQ(topts.retval, XDP_DROP, "ipv4 retval");
expect_sz = sizeof(pkt_v6) + 40; /* Test grow with 40 bytes */
- err = bpf_prog_test_run(prog_fd, 1, &pkt_v6, sizeof(pkt_v6) /* 74 */,
- buf, &size, &retval, &duration);
- CHECK(err || retval != XDP_TX || size != expect_sz,
- "ipv6", "err %d errno %d retval %d size %d expect-size %d\n",
- err, errno, retval, size, expect_sz);
+ topts.data_in = &pkt_v6;
+ topts.data_size_in = sizeof(pkt_v6);
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ ASSERT_OK(err, "ipv6");
+ ASSERT_EQ(topts.retval, XDP_TX, "ipv6 retval");
+ ASSERT_EQ(topts.data_size_out, expect_sz, "ipv6 size");
bpf_object__close(obj);
}
@@ -65,18 +78,18 @@ static void test_xdp_adjust_tail_grow2(void)
int tailroom = 320; /* SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) */
struct bpf_object *obj;
int err, cnt, i;
- int max_grow;
+ int max_grow, prog_fd;
- struct bpf_prog_test_run_attr tattr = {
+ LIBBPF_OPTS(bpf_test_run_opts, tattr,
.repeat = 1,
.data_in = &buf,
.data_out = &buf,
.data_size_in = 0, /* Per test */
.data_size_out = 0, /* Per test */
- };
+ );
- err = bpf_prog_test_load(file, BPF_PROG_TYPE_XDP, &obj, &tattr.prog_fd);
- if (CHECK_ATTR(err, "load", "err %d errno %d\n", err, errno))
+ err = bpf_prog_test_load(file, BPF_PROG_TYPE_XDP, &obj, &prog_fd);
+ if (!ASSERT_OK(err, "test_xdp_adjust_tail_grow2"))
return;
/* Test case-64 */
@@ -84,49 +97,171 @@ static void test_xdp_adjust_tail_grow2(void)
tattr.data_size_in = 64; /* Determine test case via pkt size */
tattr.data_size_out = 128; /* Limit copy_size */
/* Kernel side alloc packet memory area that is zero init */
- err = bpf_prog_test_run_xattr(&tattr);
+ err = bpf_prog_test_run_opts(prog_fd, &tattr);
- CHECK_ATTR(errno != ENOSPC /* Due limit copy_size in bpf_test_finish */
- || tattr.retval != XDP_TX
- || tattr.data_size_out != 192, /* Expected grow size */
- "case-64",
- "err %d errno %d retval %d size %d\n",
- err, errno, tattr.retval, tattr.data_size_out);
+ ASSERT_EQ(errno, ENOSPC, "case-64 errno"); /* due to the copy_size limit in bpf_test_finish */
+ ASSERT_EQ(tattr.retval, XDP_TX, "case-64 retval");
+ ASSERT_EQ(tattr.data_size_out, 192, "case-64 data_size_out"); /* Expected grow size */
/* Extra checks for data contents */
- CHECK_ATTR(tattr.data_size_out != 192
- || buf[0] != 1 || buf[63] != 1 /* 0-63 memset to 1 */
- || buf[64] != 0 || buf[127] != 0 /* 64-127 memset to 0 */
- || buf[128] != 1 || buf[191] != 1, /*128-191 memset to 1 */
- "case-64-data",
- "err %d errno %d retval %d size %d\n",
- err, errno, tattr.retval, tattr.data_size_out);
+ ASSERT_EQ(buf[0], 1, "case-64-data buf[0]"); /* 0-63 memset to 1 */
+ ASSERT_EQ(buf[63], 1, "case-64-data buf[63]");
+ ASSERT_EQ(buf[64], 0, "case-64-data buf[64]"); /* 64-127 memset to 0 */
+ ASSERT_EQ(buf[127], 0, "case-64-data buf[127]");
+ ASSERT_EQ(buf[128], 1, "case-64-data buf[128]"); /* 128-191 memset to 1 */
+ ASSERT_EQ(buf[191], 1, "case-64-data buf[191]");
/* Test case-128 */
memset(buf, 2, sizeof(buf));
tattr.data_size_in = 128; /* Determine test case via pkt size */
tattr.data_size_out = sizeof(buf); /* Copy everything */
- err = bpf_prog_test_run_xattr(&tattr);
+ err = bpf_prog_test_run_opts(prog_fd, &tattr);
max_grow = 4096 - XDP_PACKET_HEADROOM - tailroom; /* 3520 */
- CHECK_ATTR(err
- || tattr.retval != XDP_TX
- || tattr.data_size_out != max_grow,/* Expect max grow size */
- "case-128",
- "err %d errno %d retval %d size %d expect-size %d\n",
- err, errno, tattr.retval, tattr.data_size_out, max_grow);
+ ASSERT_OK(err, "case-128");
+ ASSERT_EQ(tattr.retval, XDP_TX, "case-128 retval");
+ ASSERT_EQ(tattr.data_size_out, max_grow, "case-128 data_size_out"); /* Expect max grow */
/* Extra checks for data content: Count grow size, will contain zeros */
for (i = 0, cnt = 0; i < sizeof(buf); i++) {
if (buf[i] == 0)
cnt++;
}
- CHECK_ATTR((cnt != (max_grow - tattr.data_size_in)) /* Grow increase */
- || tattr.data_size_out != max_grow, /* Total grow size */
- "case-128-data",
- "err %d errno %d retval %d size %d grow-size %d\n",
- err, errno, tattr.retval, tattr.data_size_out, cnt);
+ ASSERT_EQ(cnt, max_grow - tattr.data_size_in, "case-128-data cnt"); /* Grow increase */
+ ASSERT_EQ(tattr.data_size_out, max_grow, "case-128-data data_size_out"); /* Total grow */
+
+ bpf_object__close(obj);
+}
+
+static void test_xdp_adjust_frags_tail_shrink(void)
+{
+ const char *file = "./test_xdp_adjust_tail_shrink.o";
+ __u32 exp_size;
+ struct bpf_program *prog;
+ struct bpf_object *obj;
+ int err, prog_fd;
+ __u8 *buf;
+ LIBBPF_OPTS(bpf_test_run_opts, topts);
+
+ /* For the individual test cases, the first byte in the packet
+ * indicates which test will be run.
+ */
+ obj = bpf_object__open(file);
+ if (libbpf_get_error(obj))
+ return;
+
+ prog = bpf_object__next_program(obj, NULL);
+ if (bpf_object__load(obj))
+ goto out;
+
+ prog_fd = bpf_program__fd(prog);
+
+ buf = malloc(9000);
+ if (!ASSERT_OK_PTR(buf, "alloc buf 9Kb"))
+ goto out;
+
+ memset(buf, 0, 9000);
+
+ /* Test case removing 10 bytes from last frag, NOT freeing it */
+ exp_size = 8990; /* 9000 - 10 */
+ topts.data_in = buf;
+ topts.data_out = buf;
+ topts.data_size_in = 9000;
+ topts.data_size_out = 9000;
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+
+ ASSERT_OK(err, "9Kb-10b");
+ ASSERT_EQ(topts.retval, XDP_TX, "9Kb-10b retval");
+ ASSERT_EQ(topts.data_size_out, exp_size, "9Kb-10b size");
+
+ /* Test case removing one of two pages, assuming 4K pages */
+ buf[0] = 1;
+ exp_size = 4900; /* 9000 - 4100 */
+
+ topts.data_size_out = 9000; /* reset from previous invocation */
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+
+ ASSERT_OK(err, "9Kb-4Kb");
+ ASSERT_EQ(topts.retval, XDP_TX, "9Kb-4Kb retval");
+ ASSERT_EQ(topts.data_size_out, exp_size, "9Kb-4Kb size");
+
+ /* Test case removing two pages resulting in a linear xdp_buff */
+ buf[0] = 2;
+ exp_size = 800; /* 9000 - 8200 */
+ topts.data_size_out = 9000; /* reset from previous invocation */
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+
+ ASSERT_OK(err, "9Kb-9Kb");
+ ASSERT_EQ(topts.retval, XDP_TX, "9Kb-9Kb retval");
+ ASSERT_EQ(topts.data_size_out, exp_size, "9Kb-9Kb size");
+
+ free(buf);
+out:
+ bpf_object__close(obj);
+}
+
+static void test_xdp_adjust_frags_tail_grow(void)
+{
+ const char *file = "./test_xdp_adjust_tail_grow.o";
+ __u32 exp_size;
+ struct bpf_program *prog;
+ struct bpf_object *obj;
+ int err, i, prog_fd;
+ __u8 *buf;
+ LIBBPF_OPTS(bpf_test_run_opts, topts);
+
+ obj = bpf_object__open(file);
+ if (libbpf_get_error(obj))
+ return;
+
+ prog = bpf_object__next_program(obj, NULL);
+ if (bpf_object__load(obj))
+ goto out;
+
+ prog_fd = bpf_program__fd(prog);
+
+ buf = malloc(16384);
+ if (!ASSERT_OK_PTR(buf, "alloc buf 16Kb"))
+ goto out;
+
+ /* Test case add 10 bytes to last frag */
+ memset(buf, 1, 16384);
+ exp_size = 9000 + 10;
+
+ topts.data_in = buf;
+ topts.data_out = buf;
+ topts.data_size_in = 9000;
+ topts.data_size_out = 16384;
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+
+ ASSERT_OK(err, "9Kb+10b");
+ ASSERT_EQ(topts.retval, XDP_TX, "9Kb+10b retval");
+ ASSERT_EQ(topts.data_size_out, exp_size, "9Kb+10b size");
+
+ for (i = 0; i < 9000; i++)
+ ASSERT_EQ(buf[i], 1, "9Kb+10b-old");
+
+ for (i = 9000; i < 9010; i++)
+ ASSERT_EQ(buf[i], 0, "9Kb+10b-new");
+
+ for (i = 9010; i < 16384; i++)
+ ASSERT_EQ(buf[i], 1, "9Kb+10b-untouched");
+
+ /* Test a too large grow */
+ memset(buf, 1, 16384);
+ exp_size = 9001;
+
+ topts.data_in = topts.data_out = buf;
+ topts.data_size_in = 9001;
+ topts.data_size_out = 16384;
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+
+ ASSERT_OK(err, "9Kb+10b");
+ ASSERT_EQ(topts.retval, XDP_DROP, "9Kb+10b retval");
+ ASSERT_EQ(topts.data_size_out, exp_size, "9Kb+10b size");
+ free(buf);
+out:
bpf_object__close(obj);
}
@@ -138,4 +273,8 @@ void test_xdp_adjust_tail(void)
test_xdp_adjust_tail_grow();
if (test__start_subtest("xdp_adjust_tail_grow2"))
test_xdp_adjust_tail_grow2();
+ if (test__start_subtest("xdp_adjust_frags_tail_shrink"))
+ test_xdp_adjust_frags_tail_shrink();
+ if (test__start_subtest("xdp_adjust_frags_tail_grow"))
+ test_xdp_adjust_frags_tail_grow();
}
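
The expected sizes in the frags shrink test (8990, 4900, 800) imply tail shrinks of 10, 4100 and 8200 bytes keyed on the first payload byte. A plausible sketch of the fragment-aware BPF side under that assumption; the real program ships in progs/test_xdp_adjust_tail_shrink.c and may differ in detail:

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("xdp.frags")
int _xdp_adjust_frags_tail_shrink(struct xdp_md *xdp)
{
	__u8 *data = (void *)(long)xdp->data;
	__u8 *data_end = (void *)(long)xdp->data_end;
	int offset;

	if (data + 1 > data_end)
		return XDP_DROP;

	switch (data[0]) {		/* test case chosen by user space */
	case 0:
		offset = -10;		/* 9000 -> 8990, last frag kept */
		break;
	case 1:
		offset = -4100;		/* 9000 -> 4900, one frag freed */
		break;
	default:
		offset = -8200;		/* 9000 -> 800, back to linear */
		break;
	}

	if (bpf_xdp_adjust_tail(xdp, offset))
		return XDP_DROP;
	return XDP_TX;
}
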
diff --git a/tools/testing/selftests/bpf/prog_tests/xdp_attach.c b/tools/testing/selftests/bpf/prog_tests/xdp_attach.c
index c6fa390e3aa1..62aa3edda5e6 100644
--- a/tools/testing/selftests/bpf/prog_tests/xdp_attach.c
+++ b/tools/testing/selftests/bpf/prog_tests/xdp_attach.c
@@ -11,8 +11,7 @@ void serial_test_xdp_attach(void)
const char *file = "./test_xdp.o";
struct bpf_prog_info info = {};
int err, fd1, fd2, fd3;
- DECLARE_LIBBPF_OPTS(bpf_xdp_set_link_opts, opts,
- .old_fd = -1);
+ LIBBPF_OPTS(bpf_xdp_attach_opts, opts);
len = sizeof(info);
@@ -38,49 +37,47 @@ void serial_test_xdp_attach(void)
if (CHECK_FAIL(err))
goto out_2;
- err = bpf_set_link_xdp_fd_opts(IFINDEX_LO, fd1, XDP_FLAGS_REPLACE,
- &opts);
+ err = bpf_xdp_attach(IFINDEX_LO, fd1, XDP_FLAGS_REPLACE, &opts);
if (CHECK(err, "load_ok", "initial load failed"))
goto out_close;
- err = bpf_get_link_xdp_id(IFINDEX_LO, &id0, 0);
+ err = bpf_xdp_query_id(IFINDEX_LO, 0, &id0);
if (CHECK(err || id0 != id1, "id1_check",
"loaded prog id %u != id1 %u, err %d", id0, id1, err))
goto out_close;
- err = bpf_set_link_xdp_fd_opts(IFINDEX_LO, fd2, XDP_FLAGS_REPLACE,
- &opts);
+ err = bpf_xdp_attach(IFINDEX_LO, fd2, XDP_FLAGS_REPLACE, &opts);
if (CHECK(!err, "load_fail", "load with expected id didn't fail"))
goto out;
- opts.old_fd = fd1;
- err = bpf_set_link_xdp_fd_opts(IFINDEX_LO, fd2, 0, &opts);
+ opts.old_prog_fd = fd1;
+ err = bpf_xdp_attach(IFINDEX_LO, fd2, 0, &opts);
if (CHECK(err, "replace_ok", "replace valid old_fd failed"))
goto out;
- err = bpf_get_link_xdp_id(IFINDEX_LO, &id0, 0);
+ err = bpf_xdp_query_id(IFINDEX_LO, 0, &id0);
if (CHECK(err || id0 != id2, "id2_check",
"loaded prog id %u != id2 %u, err %d", id0, id2, err))
goto out_close;
- err = bpf_set_link_xdp_fd_opts(IFINDEX_LO, fd3, 0, &opts);
+ err = bpf_xdp_attach(IFINDEX_LO, fd3, 0, &opts);
if (CHECK(!err, "replace_fail", "replace invalid old_fd didn't fail"))
goto out;
- err = bpf_set_link_xdp_fd_opts(IFINDEX_LO, -1, 0, &opts);
+ err = bpf_xdp_detach(IFINDEX_LO, 0, &opts);
if (CHECK(!err, "remove_fail", "remove invalid old_fd didn't fail"))
goto out;
- opts.old_fd = fd2;
- err = bpf_set_link_xdp_fd_opts(IFINDEX_LO, -1, 0, &opts);
+ opts.old_prog_fd = fd2;
+ err = bpf_xdp_detach(IFINDEX_LO, 0, &opts);
if (CHECK(err, "remove_ok", "remove valid old_fd failed"))
goto out;
- err = bpf_get_link_xdp_id(IFINDEX_LO, &id0, 0);
+ err = bpf_xdp_query_id(IFINDEX_LO, 0, &id0);
if (CHECK(err || id0 != 0, "unload_check",
"loaded prog id %u != 0, err %d", id0, err))
goto out_close;
out:
- bpf_set_link_xdp_fd(IFINDEX_LO, -1, 0);
+ bpf_xdp_detach(IFINDEX_LO, 0, NULL);
out_close:
bpf_object__close(obj3);
out_2:
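
This hunk captures the libbpf XDP API rename applied across the suite: bpf_set_link_xdp_fd{,_opts}() becomes bpf_xdp_attach()/bpf_xdp_detach() (no more magic fd of -1), bpf_get_link_xdp_id() becomes bpf_xdp_query_id(), and the opts field old_fd becomes old_prog_fd. The new surface in short, with ifindex, prog_fd and new_fd assumed to exist:

LIBBPF_OPTS(bpf_xdp_attach_opts, opts);
__u32 id = 0;
int err;

err = bpf_xdp_attach(ifindex, prog_fd, 0, NULL);	/* plain attach */

opts.old_prog_fd = prog_fd;				/* replace only if prog_fd */
err = bpf_xdp_attach(ifindex, new_fd, 0, &opts);	/* is still attached */

err = bpf_xdp_query_id(ifindex, 0, &id);		/* query attached prog id */
err = bpf_xdp_detach(ifindex, 0, NULL);			/* detach */
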
diff --git a/tools/testing/selftests/bpf/prog_tests/xdp_bpf2bpf.c b/tools/testing/selftests/bpf/prog_tests/xdp_bpf2bpf.c
index c98a897ad692..76967d8ace9c 100644
--- a/tools/testing/selftests/bpf/prog_tests/xdp_bpf2bpf.c
+++ b/tools/testing/selftests/bpf/prog_tests/xdp_bpf2bpf.c
@@ -10,40 +10,101 @@ struct meta {
int pkt_len;
};
+struct test_ctx_s {
+ bool passed;
+ int pkt_size;
+};
+
+struct test_ctx_s test_ctx;
+
static void on_sample(void *ctx, int cpu, void *data, __u32 size)
{
- int duration = 0;
struct meta *meta = (struct meta *)data;
struct ipv4_packet *trace_pkt_v4 = data + sizeof(*meta);
+ unsigned char *raw_pkt = data + sizeof(*meta);
+ struct test_ctx_s *tst_ctx = ctx;
+
+ ASSERT_GE(size, sizeof(pkt_v4) + sizeof(*meta), "check_size");
+ ASSERT_EQ(meta->ifindex, if_nametoindex("lo"), "check_meta_ifindex");
+ ASSERT_EQ(meta->pkt_len, tst_ctx->pkt_size, "check_meta_pkt_len");
+ ASSERT_EQ(memcmp(trace_pkt_v4, &pkt_v4, sizeof(pkt_v4)), 0,
+ "check_packet_content");
+
+ if (meta->pkt_len > sizeof(pkt_v4)) {
+ for (int i = 0; i < meta->pkt_len - sizeof(pkt_v4); i++)
+ ASSERT_EQ(raw_pkt[i + sizeof(pkt_v4)], (unsigned char)i,
+ "check_packet_content");
+ }
+
+ tst_ctx->passed = true;
+}
- if (CHECK(size < sizeof(pkt_v4) + sizeof(*meta),
- "check_size", "size %u < %zu\n",
- size, sizeof(pkt_v4) + sizeof(*meta)))
- return;
+#define BUF_SZ 9000
- if (CHECK(meta->ifindex != if_nametoindex("lo"), "check_meta_ifindex",
- "meta->ifindex = %d\n", meta->ifindex))
+static void run_xdp_bpf2bpf_pkt_size(int pkt_fd, struct perf_buffer *pb,
+ struct test_xdp_bpf2bpf *ftrace_skel,
+ int pkt_size)
+{
+ __u8 *buf, *buf_in;
+ int err;
+ LIBBPF_OPTS(bpf_test_run_opts, topts);
+
+ if (!ASSERT_LE(pkt_size, BUF_SZ, "pkt_size") ||
+ !ASSERT_GE(pkt_size, sizeof(pkt_v4), "pkt_size"))
return;
- if (CHECK(meta->pkt_len != sizeof(pkt_v4), "check_meta_pkt_len",
- "meta->pkt_len = %zd\n", sizeof(pkt_v4)))
+ buf_in = malloc(BUF_SZ);
+ if (!ASSERT_OK_PTR(buf_in, "buf_in malloc()"))
return;
- if (CHECK(memcmp(trace_pkt_v4, &pkt_v4, sizeof(pkt_v4)),
- "check_packet_content", "content not the same\n"))
+ buf = malloc(BUF_SZ);
+ if (!ASSERT_OK_PTR(buf, "buf malloc()")) {
+ free(buf_in);
return;
+ }
+
+ test_ctx.passed = false;
+ test_ctx.pkt_size = pkt_size;
+
+ memcpy(buf_in, &pkt_v4, sizeof(pkt_v4));
+ if (pkt_size > sizeof(pkt_v4)) {
+ for (int i = 0; i < (pkt_size - sizeof(pkt_v4)); i++)
+ buf_in[i + sizeof(pkt_v4)] = i;
+ }
+
+ /* Run test program */
+ topts.data_in = buf_in;
+ topts.data_size_in = pkt_size;
+ topts.data_out = buf;
+ topts.data_size_out = BUF_SZ;
+
+ err = bpf_prog_test_run_opts(pkt_fd, &topts);
+
+ ASSERT_OK(err, "ipv4");
+ ASSERT_EQ(topts.retval, XDP_PASS, "ipv4 retval");
+ ASSERT_EQ(topts.data_size_out, pkt_size, "ipv4 size");
+
+ /* Make sure bpf_xdp_output() was triggered and it sent the expected
+ * data to the perf ring buffer.
+ */
+ err = perf_buffer__poll(pb, 100);
+
+ ASSERT_GE(err, 0, "perf_buffer__poll");
+ ASSERT_TRUE(test_ctx.passed, "test passed");
+ /* Verify test results */
+ ASSERT_EQ(ftrace_skel->bss->test_result_fentry, if_nametoindex("lo"),
+ "fentry result");
+ ASSERT_EQ(ftrace_skel->bss->test_result_fexit, XDP_PASS, "fexit result");
- *(bool *)ctx = true;
+ free(buf);
+ free(buf_in);
}
void test_xdp_bpf2bpf(void)
{
- __u32 duration = 0, retval, size;
- char buf[128];
int err, pkt_fd, map_fd;
- bool passed = false;
- struct iphdr iph;
- struct iptnl_info value4 = {.family = AF_INET};
+ int pkt_sizes[] = {sizeof(pkt_v4), 1024, 4100, 8200};
+ struct iptnl_info value4 = {.family = AF_INET6};
struct test_xdp *pkt_skel = NULL;
struct test_xdp_bpf2bpf *ftrace_skel = NULL;
struct vip key4 = {.protocol = 6, .family = AF_INET};
@@ -52,7 +113,7 @@ void test_xdp_bpf2bpf(void)
/* Load XDP program to introspect */
pkt_skel = test_xdp__open_and_load();
- if (CHECK(!pkt_skel, "pkt_skel_load", "test_xdp skeleton failed\n"))
+ if (!ASSERT_OK_PTR(pkt_skel, "test_xdp__open_and_load"))
return;
pkt_fd = bpf_program__fd(pkt_skel->progs._xdp_tx_iptunnel);
@@ -62,7 +123,7 @@ void test_xdp_bpf2bpf(void)
/* Load trace program */
ftrace_skel = test_xdp_bpf2bpf__open();
- if (CHECK(!ftrace_skel, "__open", "ftrace skeleton failed\n"))
+ if (!ASSERT_OK_PTR(ftrace_skel, "test_xdp_bpf2bpf__open"))
goto out;
/* Demonstrate the bpf_program__set_attach_target() API rather than
@@ -77,50 +138,24 @@ void test_xdp_bpf2bpf(void)
bpf_program__set_attach_target(prog, pkt_fd, "_xdp_tx_iptunnel");
err = test_xdp_bpf2bpf__load(ftrace_skel);
- if (CHECK(err, "__load", "ftrace skeleton failed\n"))
+ if (!ASSERT_OK(err, "test_xdp_bpf2bpf__load"))
goto out;
err = test_xdp_bpf2bpf__attach(ftrace_skel);
- if (CHECK(err, "ftrace_attach", "ftrace attach failed: %d\n", err))
+ if (!ASSERT_OK(err, "test_xdp_bpf2bpf__attach"))
goto out;
/* Set up perf buffer */
- pb = perf_buffer__new(bpf_map__fd(ftrace_skel->maps.perf_buf_map), 1,
- on_sample, NULL, &passed, NULL);
+ pb = perf_buffer__new(bpf_map__fd(ftrace_skel->maps.perf_buf_map), 8,
+ on_sample, NULL, &test_ctx, NULL);
if (!ASSERT_OK_PTR(pb, "perf_buf__new"))
goto out;
- /* Run test program */
- err = bpf_prog_test_run(pkt_fd, 1, &pkt_v4, sizeof(pkt_v4),
- buf, &size, &retval, &duration);
- memcpy(&iph, buf + sizeof(struct ethhdr), sizeof(iph));
- if (CHECK(err || retval != XDP_TX || size != 74 ||
- iph.protocol != IPPROTO_IPIP, "ipv4",
- "err %d errno %d retval %d size %d\n",
- err, errno, retval, size))
- goto out;
-
- /* Make sure bpf_xdp_output() was triggered and it sent the expected
- * data to the perf ring buffer.
- */
- err = perf_buffer__poll(pb, 100);
- if (CHECK(err < 0, "perf_buffer__poll", "err %d\n", err))
- goto out;
-
- CHECK_FAIL(!passed);
-
- /* Verify test results */
- if (CHECK(ftrace_skel->bss->test_result_fentry != if_nametoindex("lo"),
- "result", "fentry failed err %llu\n",
- ftrace_skel->bss->test_result_fentry))
- goto out;
-
- CHECK(ftrace_skel->bss->test_result_fexit != XDP_TX, "result",
- "fexit failed err %llu\n", ftrace_skel->bss->test_result_fexit);
-
+ for (int i = 0; i < ARRAY_SIZE(pkt_sizes); i++)
+ run_xdp_bpf2bpf_pkt_size(pkt_fd, pb, ftrace_skel,
+ pkt_sizes[i]);
out:
- if (pb)
- perf_buffer__free(pb);
+ perf_buffer__free(pb);
test_xdp__destroy(pkt_skel);
test_xdp_bpf2bpf__destroy(ftrace_skel);
}
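
Besides the test-run migration, the callback context changes from a bare bool to a struct passed through perf_buffer__new(), so on_sample() can validate the per-run packet size. A minimal sketch of that wiring, with map_fd and on_sample as in the test above:

struct test_ctx_s ctx = {};
struct perf_buffer *pb;

/* 8 ring pages per CPU; &ctx arrives as on_sample()'s first argument */
pb = perf_buffer__new(map_fd, 8, on_sample, NULL /* lost_cb */, &ctx, NULL);
if (!pb)
	return;
perf_buffer__poll(pb, 100 /* timeout, ms */);
perf_buffer__free(pb);
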
diff --git a/tools/testing/selftests/bpf/prog_tests/xdp_cpumap_attach.c b/tools/testing/selftests/bpf/prog_tests/xdp_cpumap_attach.c
index fd812bd43600..f775a1613833 100644
--- a/tools/testing/selftests/bpf/prog_tests/xdp_cpumap_attach.c
+++ b/tools/testing/selftests/bpf/prog_tests/xdp_cpumap_attach.c
@@ -3,11 +3,12 @@
#include <linux/if_link.h>
#include <test_progs.h>
+#include "test_xdp_with_cpumap_frags_helpers.skel.h"
#include "test_xdp_with_cpumap_helpers.skel.h"
#define IFINDEX_LO 1
-void serial_test_xdp_cpumap_attach(void)
+static void test_xdp_with_cpumap_helpers(void)
{
struct test_xdp_with_cpumap_helpers *skel;
struct bpf_prog_info info = {};
@@ -23,11 +24,11 @@ void serial_test_xdp_cpumap_attach(void)
return;
prog_fd = bpf_program__fd(skel->progs.xdp_redir_prog);
- err = bpf_set_link_xdp_fd(IFINDEX_LO, prog_fd, XDP_FLAGS_SKB_MODE);
+ err = bpf_xdp_attach(IFINDEX_LO, prog_fd, XDP_FLAGS_SKB_MODE, NULL);
if (!ASSERT_OK(err, "Generic attach of program with 8-byte CPUMAP"))
goto out_close;
- err = bpf_set_link_xdp_fd(IFINDEX_LO, -1, XDP_FLAGS_SKB_MODE);
+ err = bpf_xdp_detach(IFINDEX_LO, XDP_FLAGS_SKB_MODE, NULL);
ASSERT_OK(err, "XDP program detach");
prog_fd = bpf_program__fd(skel->progs.xdp_dummy_cm);
@@ -45,15 +46,76 @@ void serial_test_xdp_cpumap_attach(void)
ASSERT_EQ(info.id, val.bpf_prog.id, "Match program id to cpumap entry prog_id");
/* can not attach BPF_XDP_CPUMAP program to a device */
- err = bpf_set_link_xdp_fd(IFINDEX_LO, prog_fd, XDP_FLAGS_SKB_MODE);
+ err = bpf_xdp_attach(IFINDEX_LO, prog_fd, XDP_FLAGS_SKB_MODE, NULL);
if (!ASSERT_NEQ(err, 0, "Attach of BPF_XDP_CPUMAP program"))
- bpf_set_link_xdp_fd(IFINDEX_LO, -1, XDP_FLAGS_SKB_MODE);
+ bpf_xdp_detach(IFINDEX_LO, XDP_FLAGS_SKB_MODE, NULL);
val.qsize = 192;
val.bpf_prog.fd = bpf_program__fd(skel->progs.xdp_dummy_prog);
err = bpf_map_update_elem(map_fd, &idx, &val, 0);
ASSERT_NEQ(err, 0, "Add non-BPF_XDP_CPUMAP program to cpumap entry");
+ /* Try to attach BPF_XDP program with frags to cpumap when we have
+ * already loaded a BPF_XDP program on the map
+ */
+ idx = 1;
+ val.qsize = 192;
+ val.bpf_prog.fd = bpf_program__fd(skel->progs.xdp_dummy_cm_frags);
+ err = bpf_map_update_elem(map_fd, &idx, &val, 0);
+ ASSERT_NEQ(err, 0, "Add BPF_XDP program with frags to cpumap entry");
+
out_close:
test_xdp_with_cpumap_helpers__destroy(skel);
}
+
+static void test_xdp_with_cpumap_frags_helpers(void)
+{
+ struct test_xdp_with_cpumap_frags_helpers *skel;
+ struct bpf_prog_info info = {};
+ __u32 len = sizeof(info);
+ struct bpf_cpumap_val val = {
+ .qsize = 192,
+ };
+ int err, frags_prog_fd, map_fd;
+ __u32 idx = 0;
+
+ skel = test_xdp_with_cpumap_frags_helpers__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "test_xdp_with_cpumap_frags_helpers__open_and_load"))
+ return;
+
+ frags_prog_fd = bpf_program__fd(skel->progs.xdp_dummy_cm_frags);
+ map_fd = bpf_map__fd(skel->maps.cpu_map);
+ err = bpf_obj_get_info_by_fd(frags_prog_fd, &info, &len);
+ if (!ASSERT_OK(err, "bpf_obj_get_info_by_fd"))
+ goto out_close;
+
+ val.bpf_prog.fd = frags_prog_fd;
+ err = bpf_map_update_elem(map_fd, &idx, &val, 0);
+ ASSERT_OK(err, "Add program to cpumap entry");
+
+ err = bpf_map_lookup_elem(map_fd, &idx, &val);
+ ASSERT_OK(err, "Read cpumap entry");
+ ASSERT_EQ(info.id, val.bpf_prog.id,
+ "Match program id to cpumap entry prog_id");
+
+ /* Try to attach BPF_XDP program to cpumap when we have
+ * already loaded a BPF_XDP program with frags on the map
+ */
+ idx = 1;
+ val.qsize = 192;
+ val.bpf_prog.fd = bpf_program__fd(skel->progs.xdp_dummy_cm);
+ err = bpf_map_update_elem(map_fd, &idx, &val, 0);
+ ASSERT_NEQ(err, 0, "Add BPF_XDP program to cpumap entry");
+
+out_close:
+ test_xdp_with_cpumap_frags_helpers__destroy(skel);
+}
+
+void serial_test_xdp_cpumap_attach(void)
+{
+ if (test__start_subtest("CPUMAP with programs in entries"))
+ test_xdp_with_cpumap_helpers();
+
+ if (test__start_subtest("CPUMAP with frags programs in entries"))
+ test_xdp_with_cpumap_frags_helpers();
+}
diff --git a/tools/testing/selftests/bpf/prog_tests/xdp_devmap_attach.c b/tools/testing/selftests/bpf/prog_tests/xdp_devmap_attach.c
index 3079d5568f8f..ead40016c324 100644
--- a/tools/testing/selftests/bpf/prog_tests/xdp_devmap_attach.c
+++ b/tools/testing/selftests/bpf/prog_tests/xdp_devmap_attach.c
@@ -4,6 +4,7 @@
#include <test_progs.h>
#include "test_xdp_devmap_helpers.skel.h"
+#include "test_xdp_with_devmap_frags_helpers.skel.h"
#include "test_xdp_with_devmap_helpers.skel.h"
#define IFINDEX_LO 1
@@ -25,11 +26,11 @@ static void test_xdp_with_devmap_helpers(void)
return;
dm_fd = bpf_program__fd(skel->progs.xdp_redir_prog);
- err = bpf_set_link_xdp_fd(IFINDEX_LO, dm_fd, XDP_FLAGS_SKB_MODE);
+ err = bpf_xdp_attach(IFINDEX_LO, dm_fd, XDP_FLAGS_SKB_MODE, NULL);
if (!ASSERT_OK(err, "Generic attach of program with 8-byte devmap"))
goto out_close;
- err = bpf_set_link_xdp_fd(IFINDEX_LO, -1, XDP_FLAGS_SKB_MODE);
+ err = bpf_xdp_detach(IFINDEX_LO, XDP_FLAGS_SKB_MODE, NULL);
ASSERT_OK(err, "XDP program detach");
dm_fd = bpf_program__fd(skel->progs.xdp_dummy_dm);
@@ -47,15 +48,24 @@ static void test_xdp_with_devmap_helpers(void)
ASSERT_EQ(info.id, val.bpf_prog.id, "Match program id to devmap entry prog_id");
/* can not attach BPF_XDP_DEVMAP program to a device */
- err = bpf_set_link_xdp_fd(IFINDEX_LO, dm_fd, XDP_FLAGS_SKB_MODE);
+ err = bpf_xdp_attach(IFINDEX_LO, dm_fd, XDP_FLAGS_SKB_MODE, NULL);
if (!ASSERT_NEQ(err, 0, "Attach of BPF_XDP_DEVMAP program"))
- bpf_set_link_xdp_fd(IFINDEX_LO, -1, XDP_FLAGS_SKB_MODE);
+ bpf_xdp_detach(IFINDEX_LO, XDP_FLAGS_SKB_MODE, NULL);
val.ifindex = 1;
val.bpf_prog.fd = bpf_program__fd(skel->progs.xdp_dummy_prog);
err = bpf_map_update_elem(map_fd, &idx, &val, 0);
ASSERT_NEQ(err, 0, "Add non-BPF_XDP_DEVMAP program to devmap entry");
+ /* Try to attach BPF_XDP program with frags to devmap when we have
+ * already loaded a BPF_XDP program on the map
+ */
+ idx = 1;
+ val.ifindex = 1;
+ val.bpf_prog.fd = bpf_program__fd(skel->progs.xdp_dummy_dm_frags);
+ err = bpf_map_update_elem(map_fd, &idx, &val, 0);
+ ASSERT_NEQ(err, 0, "Add BPF_XDP program with frags to devmap entry");
+
out_close:
test_xdp_with_devmap_helpers__destroy(skel);
}
@@ -71,12 +81,57 @@ static void test_neg_xdp_devmap_helpers(void)
}
}
+static void test_xdp_with_devmap_frags_helpers(void)
+{
+ struct test_xdp_with_devmap_frags_helpers *skel;
+ struct bpf_prog_info info = {};
+ struct bpf_devmap_val val = {
+ .ifindex = IFINDEX_LO,
+ };
+ __u32 len = sizeof(info);
+ int err, dm_fd_frags, map_fd;
+ __u32 idx = 0;
+
+ skel = test_xdp_with_devmap_frags_helpers__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "test_xdp_with_devmap_frags_helpers__open_and_load"))
+ return;
+
+ dm_fd_frags = bpf_program__fd(skel->progs.xdp_dummy_dm_frags);
+ map_fd = bpf_map__fd(skel->maps.dm_ports);
+ err = bpf_obj_get_info_by_fd(dm_fd_frags, &info, &len);
+ if (!ASSERT_OK(err, "bpf_obj_get_info_by_fd"))
+ goto out_close;
+
+ val.bpf_prog.fd = dm_fd_frags;
+ err = bpf_map_update_elem(map_fd, &idx, &val, 0);
+ ASSERT_OK(err, "Add frags program to devmap entry");
+
+ err = bpf_map_lookup_elem(map_fd, &idx, &val);
+ ASSERT_OK(err, "Read devmap entry");
+ ASSERT_EQ(info.id, val.bpf_prog.id,
+ "Match program id to devmap entry prog_id");
+
+ /* Try to attach BPF_XDP program to devmap when we have
+ * already loaded a BPF_XDP program with frags on the map
+ */
+ idx = 1;
+ val.ifindex = 1;
+ val.bpf_prog.fd = bpf_program__fd(skel->progs.xdp_dummy_dm);
+ err = bpf_map_update_elem(map_fd, &idx, &val, 0);
+ ASSERT_NEQ(err, 0, "Add BPF_XDP program to devmap entry");
+
+out_close:
+ test_xdp_with_devmap_frags_helpers__destroy(skel);
+}
void serial_test_xdp_devmap_attach(void)
{
if (test__start_subtest("DEVMAP with programs in entries"))
test_xdp_with_devmap_helpers();
+ if (test__start_subtest("DEVMAP with frags programs in entries"))
+ test_xdp_with_devmap_frags_helpers();
+
if (test__start_subtest("Verifier check of DEVMAP programs"))
test_neg_xdp_devmap_helpers();
}
diff --git a/tools/testing/selftests/bpf/prog_tests/xdp_do_redirect.c b/tools/testing/selftests/bpf/prog_tests/xdp_do_redirect.c
new file mode 100644
index 000000000000..a50971c6cf4a
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/xdp_do_redirect.c
@@ -0,0 +1,201 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <test_progs.h>
+#include <network_helpers.h>
+#include <net/if.h>
+#include <linux/if_ether.h>
+#include <linux/if_packet.h>
+#include <linux/ipv6.h>
+#include <linux/in6.h>
+#include <linux/udp.h>
+#include <bpf/bpf_endian.h>
+#include "test_xdp_do_redirect.skel.h"
+
+#define SYS(fmt, ...) \
+ ({ \
+ char cmd[1024]; \
+ snprintf(cmd, sizeof(cmd), fmt, ##__VA_ARGS__); \
+ if (!ASSERT_OK(system(cmd), cmd)) \
+ goto out; \
+ })
+
+struct udp_packet {
+ struct ethhdr eth;
+ struct ipv6hdr iph;
+ struct udphdr udp;
+ __u8 payload[64 - sizeof(struct udphdr)
+ - sizeof(struct ethhdr) - sizeof(struct ipv6hdr)];
+} __packed;
+
+static struct udp_packet pkt_udp = {
+ .eth.h_proto = __bpf_constant_htons(ETH_P_IPV6),
+ .eth.h_dest = {0x00, 0x11, 0x22, 0x33, 0x44, 0x55},
+ .eth.h_source = {0x66, 0x77, 0x88, 0x99, 0xaa, 0xbb},
+ .iph.version = 6,
+ .iph.nexthdr = IPPROTO_UDP,
+ .iph.payload_len = bpf_htons(sizeof(struct udp_packet)
+ - offsetof(struct udp_packet, udp)),
+ .iph.hop_limit = 2,
+ .iph.saddr.s6_addr16 = {bpf_htons(0xfc00), 0, 0, 0, 0, 0, 0, bpf_htons(1)},
+ .iph.daddr.s6_addr16 = {bpf_htons(0xfc00), 0, 0, 0, 0, 0, 0, bpf_htons(2)},
+ .udp.source = bpf_htons(1),
+ .udp.dest = bpf_htons(1),
+ .udp.len = bpf_htons(sizeof(struct udp_packet)
+ - offsetof(struct udp_packet, udp)),
+ .payload = {0x42}, /* receiver XDP program matches on this */
+};
+
+static int attach_tc_prog(struct bpf_tc_hook *hook, int fd)
+{
+ DECLARE_LIBBPF_OPTS(bpf_tc_opts, opts, .handle = 1, .priority = 1, .prog_fd = fd);
+ int ret;
+
+ ret = bpf_tc_hook_create(hook);
+ if (!ASSERT_OK(ret, "create tc hook"))
+ return ret;
+
+ ret = bpf_tc_attach(hook, &opts);
+ if (!ASSERT_OK(ret, "bpf_tc_attach")) {
+ bpf_tc_hook_destroy(hook);
+ return ret;
+ }
+
+ return 0;
+}
+
+/* The maximum permissible size is: PAGE_SIZE - sizeof(struct xdp_page_head) -
+ * sizeof(struct skb_shared_info) - XDP_PACKET_HEADROOM = 3368 bytes
+ */
+#define MAX_PKT_SIZE 3368
+static void test_max_pkt_size(int fd)
+{
+ char data[MAX_PKT_SIZE + 1] = {};
+ int err;
+ DECLARE_LIBBPF_OPTS(bpf_test_run_opts, opts,
+ .data_in = &data,
+ .data_size_in = MAX_PKT_SIZE,
+ .flags = BPF_F_TEST_XDP_LIVE_FRAMES,
+ .repeat = 1,
+ );
+ err = bpf_prog_test_run_opts(fd, &opts);
+ ASSERT_OK(err, "prog_run_max_size");
+
+ opts.data_size_in += 1;
+ err = bpf_prog_test_run_opts(fd, &opts);
+ ASSERT_EQ(err, -EINVAL, "prog_run_too_big");
+}
+
+#define NUM_PKTS 10000
+void test_xdp_do_redirect(void)
+{
+ int err, xdp_prog_fd, tc_prog_fd, ifindex_src, ifindex_dst;
+ char data[sizeof(pkt_udp) + sizeof(__u32)];
+ struct test_xdp_do_redirect *skel = NULL;
+ struct nstoken *nstoken = NULL;
+ struct bpf_link *link;
+
+ struct xdp_md ctx_in = { .data = sizeof(__u32),
+ .data_end = sizeof(data) };
+ DECLARE_LIBBPF_OPTS(bpf_test_run_opts, opts,
+ .data_in = &data,
+ .data_size_in = sizeof(data),
+ .ctx_in = &ctx_in,
+ .ctx_size_in = sizeof(ctx_in),
+ .flags = BPF_F_TEST_XDP_LIVE_FRAMES,
+ .repeat = NUM_PKTS,
+ .batch_size = 64,
+ );
+ DECLARE_LIBBPF_OPTS(bpf_tc_hook, tc_hook,
+ .attach_point = BPF_TC_INGRESS);
+
+ memcpy(&data[sizeof(__u32)], &pkt_udp, sizeof(pkt_udp));
+ *((__u32 *)data) = 0x42; /* metadata test value */
+
+ skel = test_xdp_do_redirect__open();
+ if (!ASSERT_OK_PTR(skel, "skel"))
+ return;
+
+ /* The XDP program we run with bpf_prog_run() will cycle through all
+ * three xmit (PASS/TX/REDIRECT) return codes starting from above, and
+ * ending up with PASS, so we should end up with two packets on the dst
+ * iface and NUM_PKTS-2 in the TC hook. We match the packets on the UDP
+ * payload.
+ */
+ SYS("ip netns add testns");
+ nstoken = open_netns("testns");
+ if (!ASSERT_OK_PTR(nstoken, "setns"))
+ goto out;
+
+ SYS("ip link add veth_src type veth peer name veth_dst");
+ SYS("ip link set dev veth_src address 00:11:22:33:44:55");
+ SYS("ip link set dev veth_dst address 66:77:88:99:aa:bb");
+ SYS("ip link set dev veth_src up");
+ SYS("ip link set dev veth_dst up");
+ SYS("ip addr add dev veth_src fc00::1/64");
+ SYS("ip addr add dev veth_dst fc00::2/64");
+ SYS("ip neigh add fc00::2 dev veth_src lladdr 66:77:88:99:aa:bb");
+
+ /* We enable forwarding in the test namespace because that will cause
+ * the packets that go through the kernel stack (with XDP_PASS) to be
+ * forwarded back out the same interface (because of the packet dst
+ * combined with the interface addresses). When this happens, the
+ * regular forwarding path will end up going through the same
+ * veth_xdp_xmit() call as the XDP_REDIRECT code, which can cause a
+ * deadlock if it happens on the same CPU. There's a local_bh_disable()
+ * in the test_run code to prevent this, but an earlier version of the
+ * code didn't have this, so we keep the test behaviour to make sure the
+ * bug doesn't resurface.
+ */
+ SYS("sysctl -qw net.ipv6.conf.all.forwarding=1");
+
+ ifindex_src = if_nametoindex("veth_src");
+ ifindex_dst = if_nametoindex("veth_dst");
+ if (!ASSERT_NEQ(ifindex_src, 0, "ifindex_src") ||
+ !ASSERT_NEQ(ifindex_dst, 0, "ifindex_dst"))
+ goto out;
+
+ memcpy(skel->rodata->expect_dst, &pkt_udp.eth.h_dest, ETH_ALEN);
+ skel->rodata->ifindex_out = ifindex_src; /* redirect back to the same iface */
+ skel->rodata->ifindex_in = ifindex_src;
+ ctx_in.ingress_ifindex = ifindex_src;
+ tc_hook.ifindex = ifindex_src;
+
+ if (!ASSERT_OK(test_xdp_do_redirect__load(skel), "load"))
+ goto out;
+
+ link = bpf_program__attach_xdp(skel->progs.xdp_count_pkts, ifindex_dst);
+ if (!ASSERT_OK_PTR(link, "prog_attach"))
+ goto out;
+ skel->links.xdp_count_pkts = link;
+
+ tc_prog_fd = bpf_program__fd(skel->progs.tc_count_pkts);
+ if (attach_tc_prog(&tc_hook, tc_prog_fd))
+ goto out;
+
+ xdp_prog_fd = bpf_program__fd(skel->progs.xdp_redirect);
+ err = bpf_prog_test_run_opts(xdp_prog_fd, &opts);
+ if (!ASSERT_OK(err, "prog_run"))
+ goto out_tc;
+
+ /* wait for the packets to be flushed */
+ kern_sync_rcu();
+
+ /* There will be one packet sent through XDP_REDIRECT and one through
+ * XDP_TX; these will show up on the XDP counting program, while the
+ * rest will be counted at the TC ingress hook (the counting program
+ * resets the packet payload so they don't get counted twice even though
+ * they are re-xmitted out the veth device).
+ */
+ ASSERT_EQ(skel->bss->pkts_seen_xdp, 2, "pkt_count_xdp");
+ ASSERT_EQ(skel->bss->pkts_seen_zero, 2, "pkt_count_zero");
+ ASSERT_EQ(skel->bss->pkts_seen_tc, NUM_PKTS - 2, "pkt_count_tc");
+
+ test_max_pkt_size(bpf_program__fd(skel->progs.xdp_count_pkts));
+
+out_tc:
+ bpf_tc_hook_destroy(&tc_hook);
+out:
+ if (nstoken)
+ close_netns(nstoken);
+ system("ip netns del testns");
+ test_xdp_do_redirect__destroy(skel);
+}
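
The 3368-byte MAX_PKT_SIZE can be cross-checked against numbers appearing elsewhere in this patch; the xdp_page_head size below is inferred as the remainder rather than taken from the source:

/* PAGE_SIZE				4096
 * - XDP_PACKET_HEADROOM		- 256
 * - SKB_DATA_ALIGN(skb_shared_info)	- 320	(tailroom, see xdp_adjust_tail.c)
 * - sizeof(struct xdp_page_head)	- 152	(inferred remainder)
 *					------
 * MAX_PKT_SIZE				= 3368
 */
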
diff --git a/tools/testing/selftests/bpf/prog_tests/xdp_info.c b/tools/testing/selftests/bpf/prog_tests/xdp_info.c
index abe48e82e1dc..0d01ff6cb91a 100644
--- a/tools/testing/selftests/bpf/prog_tests/xdp_info.c
+++ b/tools/testing/selftests/bpf/prog_tests/xdp_info.c
@@ -14,13 +14,13 @@ void serial_test_xdp_info(void)
/* Get prog_id for XDP_ATTACHED_NONE mode */
- err = bpf_get_link_xdp_id(IFINDEX_LO, &prog_id, 0);
+ err = bpf_xdp_query_id(IFINDEX_LO, 0, &prog_id);
if (CHECK(err, "get_xdp_none", "errno=%d\n", errno))
return;
if (CHECK(prog_id, "prog_id_none", "unexpected prog_id=%u\n", prog_id))
return;
- err = bpf_get_link_xdp_id(IFINDEX_LO, &prog_id, XDP_FLAGS_SKB_MODE);
+ err = bpf_xdp_query_id(IFINDEX_LO, XDP_FLAGS_SKB_MODE, &prog_id);
if (CHECK(err, "get_xdp_none_skb", "errno=%d\n", errno))
return;
if (CHECK(prog_id, "prog_id_none_skb", "unexpected prog_id=%u\n",
@@ -37,32 +37,32 @@ void serial_test_xdp_info(void)
if (CHECK(err, "get_prog_info", "errno=%d\n", errno))
goto out_close;
- err = bpf_set_link_xdp_fd(IFINDEX_LO, prog_fd, XDP_FLAGS_SKB_MODE);
+ err = bpf_xdp_attach(IFINDEX_LO, prog_fd, XDP_FLAGS_SKB_MODE, NULL);
if (CHECK(err, "set_xdp_skb", "errno=%d\n", errno))
goto out_close;
/* Get prog_id for single prog mode */
- err = bpf_get_link_xdp_id(IFINDEX_LO, &prog_id, 0);
+ err = bpf_xdp_query_id(IFINDEX_LO, 0, &prog_id);
if (CHECK(err, "get_xdp", "errno=%d\n", errno))
goto out;
if (CHECK(prog_id != info.id, "prog_id", "prog_id not available\n"))
goto out;
- err = bpf_get_link_xdp_id(IFINDEX_LO, &prog_id, XDP_FLAGS_SKB_MODE);
+ err = bpf_xdp_query_id(IFINDEX_LO, XDP_FLAGS_SKB_MODE, &prog_id);
if (CHECK(err, "get_xdp_skb", "errno=%d\n", errno))
goto out;
if (CHECK(prog_id != info.id, "prog_id_skb", "prog_id not available\n"))
goto out;
- err = bpf_get_link_xdp_id(IFINDEX_LO, &prog_id, XDP_FLAGS_DRV_MODE);
+ err = bpf_xdp_query_id(IFINDEX_LO, XDP_FLAGS_DRV_MODE, &prog_id);
if (CHECK(err, "get_xdp_drv", "errno=%d\n", errno))
goto out;
if (CHECK(prog_id, "prog_id_drv", "unexpected prog_id=%u\n", prog_id))
goto out;
out:
- bpf_set_link_xdp_fd(IFINDEX_LO, -1, 0);
+ bpf_xdp_detach(IFINDEX_LO, 0, NULL);
out_close:
bpf_object__close(obj);
}
diff --git a/tools/testing/selftests/bpf/prog_tests/xdp_link.c b/tools/testing/selftests/bpf/prog_tests/xdp_link.c
index b2b357f8c74c..3e9d5c5521f0 100644
--- a/tools/testing/selftests/bpf/prog_tests/xdp_link.c
+++ b/tools/testing/selftests/bpf/prog_tests/xdp_link.c
@@ -8,9 +8,9 @@
void serial_test_xdp_link(void)
{
- DECLARE_LIBBPF_OPTS(bpf_xdp_set_link_opts, opts, .old_fd = -1);
struct test_xdp_link *skel1 = NULL, *skel2 = NULL;
__u32 id1, id2, id0 = 0, prog_fd1, prog_fd2;
+ LIBBPF_OPTS(bpf_xdp_attach_opts, opts);
struct bpf_link_info link_info;
struct bpf_prog_info prog_info;
struct bpf_link *link;
@@ -41,12 +41,12 @@ void serial_test_xdp_link(void)
id2 = prog_info.id;
/* set initial prog attachment */
- err = bpf_set_link_xdp_fd_opts(IFINDEX_LO, prog_fd1, XDP_FLAGS_REPLACE, &opts);
+ err = bpf_xdp_attach(IFINDEX_LO, prog_fd1, XDP_FLAGS_REPLACE, &opts);
if (!ASSERT_OK(err, "fd_attach"))
goto cleanup;
/* validate prog ID */
- err = bpf_get_link_xdp_id(IFINDEX_LO, &id0, 0);
+ err = bpf_xdp_query_id(IFINDEX_LO, 0, &id0);
if (!ASSERT_OK(err, "id1_check_err") || !ASSERT_EQ(id0, id1, "id1_check_val"))
goto cleanup;
@@ -55,14 +55,14 @@ void serial_test_xdp_link(void)
if (!ASSERT_ERR_PTR(link, "link_attach_should_fail")) {
bpf_link__destroy(link);
/* best-effort detach prog */
- opts.old_fd = prog_fd1;
- bpf_set_link_xdp_fd_opts(IFINDEX_LO, -1, XDP_FLAGS_REPLACE, &opts);
+ opts.old_prog_fd = prog_fd1;
+ bpf_xdp_detach(IFINDEX_LO, XDP_FLAGS_REPLACE, &opts);
goto cleanup;
}
/* detach BPF program */
- opts.old_fd = prog_fd1;
- err = bpf_set_link_xdp_fd_opts(IFINDEX_LO, -1, XDP_FLAGS_REPLACE, &opts);
+ opts.old_prog_fd = prog_fd1;
+ err = bpf_xdp_detach(IFINDEX_LO, XDP_FLAGS_REPLACE, &opts);
if (!ASSERT_OK(err, "prog_detach"))
goto cleanup;
@@ -73,23 +73,23 @@ void serial_test_xdp_link(void)
skel1->links.xdp_handler = link;
/* validate prog ID */
- err = bpf_get_link_xdp_id(IFINDEX_LO, &id0, 0);
+ err = bpf_xdp_query_id(IFINDEX_LO, 0, &id0);
if (!ASSERT_OK(err, "id1_check_err") || !ASSERT_EQ(id0, id1, "id1_check_val"))
goto cleanup;
/* BPF prog attach is not allowed to replace BPF link */
- opts.old_fd = prog_fd1;
- err = bpf_set_link_xdp_fd_opts(IFINDEX_LO, prog_fd2, XDP_FLAGS_REPLACE, &opts);
+ opts.old_prog_fd = prog_fd1;
+ err = bpf_xdp_attach(IFINDEX_LO, prog_fd2, XDP_FLAGS_REPLACE, &opts);
if (!ASSERT_ERR(err, "prog_attach_fail"))
goto cleanup;
/* Can't force-update when BPF link is active */
- err = bpf_set_link_xdp_fd(IFINDEX_LO, prog_fd2, 0);
+ err = bpf_xdp_attach(IFINDEX_LO, prog_fd2, 0, NULL);
if (!ASSERT_ERR(err, "prog_update_fail"))
goto cleanup;
/* Can't force-detach when BPF link is active */
- err = bpf_set_link_xdp_fd(IFINDEX_LO, -1, 0);
+ err = bpf_xdp_detach(IFINDEX_LO, 0, NULL);
if (!ASSERT_ERR(err, "prog_detach_fail"))
goto cleanup;
@@ -109,7 +109,7 @@ void serial_test_xdp_link(void)
goto cleanup;
skel2->links.xdp_handler = link;
- err = bpf_get_link_xdp_id(IFINDEX_LO, &id0, 0);
+ err = bpf_xdp_query_id(IFINDEX_LO, 0, &id0);
if (!ASSERT_OK(err, "id2_check_err") || !ASSERT_EQ(id0, id2, "id2_check_val"))
goto cleanup;
diff --git a/tools/testing/selftests/bpf/prog_tests/xdp_noinline.c b/tools/testing/selftests/bpf/prog_tests/xdp_noinline.c
index 0281095de266..92ef0aa50866 100644
--- a/tools/testing/selftests/bpf/prog_tests/xdp_noinline.c
+++ b/tools/testing/selftests/bpf/prog_tests/xdp_noinline.c
@@ -25,43 +25,49 @@ void test_xdp_noinline(void)
__u8 flags;
} real_def = {.dst = MAGIC_VAL};
__u32 ch_key = 11, real_num = 3;
- __u32 duration = 0, retval, size;
int err, i;
__u64 bytes = 0, pkts = 0;
char buf[128];
u32 *magic = (u32 *)buf;
+ LIBBPF_OPTS(bpf_test_run_opts, topts,
+ .data_in = &pkt_v4,
+ .data_size_in = sizeof(pkt_v4),
+ .data_out = buf,
+ .data_size_out = sizeof(buf),
+ .repeat = NUM_ITER,
+ );
skel = test_xdp_noinline__open_and_load();
- if (CHECK(!skel, "skel_open_and_load", "failed\n"))
+ if (!ASSERT_OK_PTR(skel, "skel_open_and_load"))
return;
bpf_map_update_elem(bpf_map__fd(skel->maps.vip_map), &key, &value, 0);
bpf_map_update_elem(bpf_map__fd(skel->maps.ch_rings), &ch_key, &real_num, 0);
bpf_map_update_elem(bpf_map__fd(skel->maps.reals), &real_num, &real_def, 0);
- err = bpf_prog_test_run(bpf_program__fd(skel->progs.balancer_ingress_v4),
- NUM_ITER, &pkt_v4, sizeof(pkt_v4),
- buf, &size, &retval, &duration);
- CHECK(err || retval != 1 || size != 54 ||
- *magic != MAGIC_VAL, "ipv4",
- "err %d errno %d retval %d size %d magic %x\n",
- err, errno, retval, size, *magic);
+ err = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.balancer_ingress_v4), &topts);
+ ASSERT_OK(err, "ipv4 test_run");
+ ASSERT_EQ(topts.retval, 1, "ipv4 test_run retval");
+ ASSERT_EQ(topts.data_size_out, 54, "ipv4 test_run data_size_out");
+ ASSERT_EQ(*magic, MAGIC_VAL, "ipv4 test_run magic");
- err = bpf_prog_test_run(bpf_program__fd(skel->progs.balancer_ingress_v6),
- NUM_ITER, &pkt_v6, sizeof(pkt_v6),
- buf, &size, &retval, &duration);
- CHECK(err || retval != 1 || size != 74 ||
- *magic != MAGIC_VAL, "ipv6",
- "err %d errno %d retval %d size %d magic %x\n",
- err, errno, retval, size, *magic);
+ topts.data_in = &pkt_v6;
+ topts.data_size_in = sizeof(pkt_v6);
+ topts.data_out = buf;
+ topts.data_size_out = sizeof(buf);
+
+ err = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.balancer_ingress_v6), &topts);
+ ASSERT_OK(err, "ipv6 test_run");
+ ASSERT_EQ(topts.retval, 1, "ipv6 test_run retval");
+ ASSERT_EQ(topts.data_size_out, 74, "ipv6 test_run data_size_out");
+ ASSERT_EQ(*magic, MAGIC_VAL, "ipv6 test_run magic");
bpf_map_lookup_elem(bpf_map__fd(skel->maps.stats), &stats_key, stats);
for (i = 0; i < nr_cpus; i++) {
bytes += stats[i].bytes;
pkts += stats[i].pkts;
}
- CHECK(bytes != MAGIC_BYTES * NUM_ITER * 2 || pkts != NUM_ITER * 2,
- "stats", "bytes %lld pkts %lld\n",
- (unsigned long long)bytes, (unsigned long long)pkts);
+ ASSERT_EQ(bytes, MAGIC_BYTES * NUM_ITER * 2, "stats bytes");
+ ASSERT_EQ(pkts, NUM_ITER * 2, "stats pkts");
test_xdp_noinline__destroy(skel);
}
diff --git a/tools/testing/selftests/bpf/prog_tests/xdp_perf.c b/tools/testing/selftests/bpf/prog_tests/xdp_perf.c
index 15a3900e4370..f543d1bd21b8 100644
--- a/tools/testing/selftests/bpf/prog_tests/xdp_perf.c
+++ b/tools/testing/selftests/bpf/prog_tests/xdp_perf.c
@@ -4,22 +4,25 @@
void test_xdp_perf(void)
{
const char *file = "./xdp_dummy.o";
- __u32 duration, retval, size;
struct bpf_object *obj;
char in[128], out[128];
int err, prog_fd;
+ LIBBPF_OPTS(bpf_test_run_opts, topts,
+ .data_in = in,
+ .data_size_in = sizeof(in),
+ .data_out = out,
+ .data_size_out = sizeof(out),
+ .repeat = 1000000,
+ );
err = bpf_prog_test_load(file, BPF_PROG_TYPE_XDP, &obj, &prog_fd);
if (CHECK_FAIL(err))
return;
- err = bpf_prog_test_run(prog_fd, 1000000, &in[0], 128,
- out, &size, &retval, &duration);
-
- CHECK(err || retval != XDP_PASS || size != 128,
- "xdp-perf",
- "err %d errno %d retval %d size %d\n",
- err, errno, retval, size);
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ ASSERT_OK(err, "test_run");
+ ASSERT_EQ(topts.retval, XDP_PASS, "test_run retval");
+ ASSERT_EQ(topts.data_size_out, 128, "test_run data_size_out");
bpf_object__close(obj);
}
diff --git a/tools/testing/selftests/bpf/progs/atomics.c b/tools/testing/selftests/bpf/progs/atomics.c
index 16e57313204a..f89c7f0cc53b 100644
--- a/tools/testing/selftests/bpf/progs/atomics.c
+++ b/tools/testing/selftests/bpf/progs/atomics.c
@@ -20,8 +20,8 @@ __u64 add_stack_value_copy = 0;
__u64 add_stack_result = 0;
__u64 add_noreturn_value = 1;
-SEC("fentry/bpf_fentry_test1")
-int BPF_PROG(add, int a)
+SEC("raw_tp/sys_enter")
+int add(const void *ctx)
{
if (pid != (bpf_get_current_pid_tgid() >> 32))
return 0;
@@ -46,8 +46,8 @@ __s64 sub_stack_value_copy = 0;
__s64 sub_stack_result = 0;
__s64 sub_noreturn_value = 1;
-SEC("fentry/bpf_fentry_test1")
-int BPF_PROG(sub, int a)
+SEC("raw_tp/sys_enter")
+int sub(const void *ctx)
{
if (pid != (bpf_get_current_pid_tgid() >> 32))
return 0;
@@ -70,8 +70,8 @@ __u32 and32_value = 0x110;
__u32 and32_result = 0;
__u64 and_noreturn_value = (0x110ull << 32);
-SEC("fentry/bpf_fentry_test1")
-int BPF_PROG(and, int a)
+SEC("raw_tp/sys_enter")
+int and(const void *ctx)
{
if (pid != (bpf_get_current_pid_tgid() >> 32))
return 0;
@@ -91,8 +91,8 @@ __u32 or32_value = 0x110;
__u32 or32_result = 0;
__u64 or_noreturn_value = (0x110ull << 32);
-SEC("fentry/bpf_fentry_test1")
-int BPF_PROG(or, int a)
+SEC("raw_tp/sys_enter")
+int or(const void *ctx)
{
if (pid != (bpf_get_current_pid_tgid() >> 32))
return 0;
@@ -111,8 +111,8 @@ __u32 xor32_value = 0x110;
__u32 xor32_result = 0;
__u64 xor_noreturn_value = (0x110ull << 32);
-SEC("fentry/bpf_fentry_test1")
-int BPF_PROG(xor, int a)
+SEC("raw_tp/sys_enter")
+int xor(const void *ctx)
{
if (pid != (bpf_get_current_pid_tgid() >> 32))
return 0;
@@ -132,8 +132,8 @@ __u32 cmpxchg32_value = 1;
__u32 cmpxchg32_result_fail = 0;
__u32 cmpxchg32_result_succeed = 0;
-SEC("fentry/bpf_fentry_test1")
-int BPF_PROG(cmpxchg, int a)
+SEC("raw_tp/sys_enter")
+int cmpxchg(const void *ctx)
{
if (pid != (bpf_get_current_pid_tgid() >> 32))
return 0;
@@ -153,8 +153,8 @@ __u64 xchg64_result = 0;
__u32 xchg32_value = 1;
__u32 xchg32_result = 0;
-SEC("fentry/bpf_fentry_test1")
-int BPF_PROG(xchg, int a)
+SEC("raw_tp/sys_enter")
+int xchg(const void *ctx)
{
if (pid != (bpf_get_current_pid_tgid() >> 32))
return 0;
diff --git a/tools/testing/selftests/bpf/progs/bloom_filter_bench.c b/tools/testing/selftests/bpf/progs/bloom_filter_bench.c
index d9a88dd1ea65..7efcbdbe772d 100644
--- a/tools/testing/selftests/bpf/progs/bloom_filter_bench.c
+++ b/tools/testing/selftests/bpf/progs/bloom_filter_bench.c
@@ -5,6 +5,7 @@
#include <linux/bpf.h>
#include <stdbool.h>
#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
char _license[] SEC("license") = "GPL";
@@ -87,7 +88,7 @@ bloom_callback(struct bpf_map *map, __u32 *key, void *val,
return 0;
}
-SEC("fentry/__x64_sys_getpgid")
+SEC("fentry/" SYS_PREFIX "sys_getpgid")
int bloom_lookup(void *ctx)
{
struct callback_ctx data;
@@ -100,7 +101,7 @@ int bloom_lookup(void *ctx)
return 0;
}
-SEC("fentry/__x64_sys_getpgid")
+SEC("fentry/" SYS_PREFIX "sys_getpgid")
int bloom_update(void *ctx)
{
struct callback_ctx data;
@@ -113,7 +114,7 @@ int bloom_update(void *ctx)
return 0;
}
-SEC("fentry/__x64_sys_getpgid")
+SEC("fentry/" SYS_PREFIX "sys_getpgid")
int bloom_hashmap_lookup(void *ctx)
{
__u64 *result;
diff --git a/tools/testing/selftests/bpf/progs/bloom_filter_map.c b/tools/testing/selftests/bpf/progs/bloom_filter_map.c
index 1316f3db79d9..f245fcfe0c61 100644
--- a/tools/testing/selftests/bpf/progs/bloom_filter_map.c
+++ b/tools/testing/selftests/bpf/progs/bloom_filter_map.c
@@ -3,6 +3,7 @@
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
char _license[] SEC("license") = "GPL";
@@ -51,7 +52,7 @@ check_elem(struct bpf_map *map, __u32 *key, __u32 *val,
return 0;
}
-SEC("fentry/__x64_sys_getpgid")
+SEC("fentry/" SYS_PREFIX "sys_getpgid")
int inner_map(void *ctx)
{
struct bpf_map *inner_map;
@@ -70,7 +71,7 @@ int inner_map(void *ctx)
return 0;
}
-SEC("fentry/__x64_sys_getpgid")
+SEC("fentry/" SYS_PREFIX "sys_getpgid")
int check_bloom(void *ctx)
{
struct callback_ctx data;
diff --git a/tools/testing/selftests/bpf/progs/bpf_iter_setsockopt_unix.c b/tools/testing/selftests/bpf/progs/bpf_iter_setsockopt_unix.c
new file mode 100644
index 000000000000..eafc877ea460
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/bpf_iter_setsockopt_unix.c
@@ -0,0 +1,60 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright Amazon.com Inc. or its affiliates. */
+#include "bpf_iter.h"
+#include "bpf_tracing_net.h"
+#include <bpf/bpf_helpers.h>
+#include <limits.h>
+
+#define AUTOBIND_LEN 6
+char sun_path[AUTOBIND_LEN];
+
+#define NR_CASES 5
+int sndbuf_setsockopt[NR_CASES] = {-1, 0, 8192, INT_MAX / 2, INT_MAX};
+int sndbuf_getsockopt[NR_CASES] = {-1, -1, -1, -1, -1};
+int sndbuf_getsockopt_expected[NR_CASES];
+
+static inline int cmpname(struct unix_sock *unix_sk)
+{
+ int i;
+
+ for (i = 0; i < AUTOBIND_LEN; i++) {
+ if (unix_sk->addr->name->sun_path[i] != sun_path[i])
+ return -1;
+ }
+
+ return 0;
+}
+
+SEC("iter/unix")
+int change_sndbuf(struct bpf_iter__unix *ctx)
+{
+ struct unix_sock *unix_sk = ctx->unix_sk;
+ int i, err;
+
+ if (!unix_sk || !unix_sk->addr)
+ return 0;
+
+ if (unix_sk->addr->name->sun_path[0])
+ return 0;
+
+ if (cmpname(unix_sk))
+ return 0;
+
+ for (i = 0; i < NR_CASES; i++) {
+ err = bpf_setsockopt(unix_sk, SOL_SOCKET, SO_SNDBUF,
+ &sndbuf_setsockopt[i],
+ sizeof(sndbuf_setsockopt[i]));
+ if (err)
+ break;
+
+ err = bpf_getsockopt(unix_sk, SOL_SOCKET, SO_SNDBUF,
+ &sndbuf_getsockopt[i],
+ sizeof(sndbuf_getsockopt[i]));
+ if (err)
+ break;
+ }
+
+ return 0;
+}
+
+char _license[] SEC("license") = "GPL";
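
An iterator like this is driven entirely from user space: attach the program, create an iterator fd from the resulting link, and read it to EOF so change_sndbuf() runs against every unix socket. A minimal sketch, assuming the generated skeleton handle skel:

struct bpf_link *link;
char buf[16];
int iter_fd, len;

link = bpf_program__attach_iter(skel->progs.change_sndbuf, NULL);
if (!link)
	return;

iter_fd = bpf_iter_create(bpf_link__fd(link));
if (iter_fd >= 0) {
	/* each read() walks a batch of sockets, invoking change_sndbuf() */
	while ((len = read(iter_fd, buf, sizeof(buf))) > 0)
		;
	close(iter_fd);
}
bpf_link__destroy(link);
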
diff --git a/tools/testing/selftests/bpf/progs/bpf_iter_task.c b/tools/testing/selftests/bpf/progs/bpf_iter_task.c
index c86b93f33b32..d22741272692 100644
--- a/tools/testing/selftests/bpf/progs/bpf_iter_task.c
+++ b/tools/testing/selftests/bpf/progs/bpf_iter_task.c
@@ -2,6 +2,7 @@
/* Copyright (c) 2020 Facebook */
#include "bpf_iter.h"
#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
char _license[] SEC("license") = "GPL";
@@ -23,3 +24,56 @@ int dump_task(struct bpf_iter__task *ctx)
BPF_SEQ_PRINTF(seq, "%8d %8d\n", task->tgid, task->pid);
return 0;
}
+
+int num_expected_failure_copy_from_user_task = 0;
+int num_success_copy_from_user_task = 0;
+
+SEC("iter.s/task")
+int dump_task_sleepable(struct bpf_iter__task *ctx)
+{
+ struct seq_file *seq = ctx->meta->seq;
+ struct task_struct *task = ctx->task;
+ static const char info[] = " === END ===";
+ struct pt_regs *regs;
+ void *ptr;
+ uint32_t user_data = 0;
+ int ret;
+
+ if (task == (void *)0) {
+ BPF_SEQ_PRINTF(seq, "%s\n", info);
+ return 0;
+ }
+
+ /* Read an invalid pointer and ensure we get an error */
+ ptr = NULL;
+ ret = bpf_copy_from_user_task(&user_data, sizeof(uint32_t), ptr, task, 0);
+ if (ret) {
+ ++num_expected_failure_copy_from_user_task;
+ } else {
+ BPF_SEQ_PRINTF(seq, "%s\n", info);
+ return 0;
+ }
+
+ /* Try to read the contents of the task's instruction pointer from the
+ * remote task's address space.
+ */
+ regs = (struct pt_regs *)bpf_task_pt_regs(task);
+ if (regs == (void *)0) {
+ BPF_SEQ_PRINTF(seq, "%s\n", info);
+ return 0;
+ }
+ ptr = (void *)PT_REGS_IP(regs);
+
+ ret = bpf_copy_from_user_task(&user_data, sizeof(uint32_t), ptr, task, 0);
+ if (ret) {
+ BPF_SEQ_PRINTF(seq, "%s\n", info);
+ return 0;
+ }
+ ++num_success_copy_from_user_task;
+
+ if (ctx->meta->seq_num == 0)
+ BPF_SEQ_PRINTF(seq, " tgid gid data\n");
+
+ BPF_SEQ_PRINTF(seq, "%8d %8d %8d\n", task->tgid, task->pid, user_data);
+ return 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/bpf_iter_unix.c b/tools/testing/selftests/bpf/progs/bpf_iter_unix.c
index c21e3f545371..e6aefae38894 100644
--- a/tools/testing/selftests/bpf/progs/bpf_iter_unix.c
+++ b/tools/testing/selftests/bpf/progs/bpf_iter_unix.c
@@ -63,7 +63,7 @@ int dump_unix(struct bpf_iter__unix *ctx)
BPF_SEQ_PRINTF(seq, " @");
for (i = 1; i < len; i++) {
- /* unix_mkname() tests this upper bound. */
+ /* unix_validate_addr() tests this upper bound. */
if (i >= sizeof(struct sockaddr_un))
break;
diff --git a/tools/testing/selftests/bpf/progs/bpf_loop.c b/tools/testing/selftests/bpf/progs/bpf_loop.c
index 12349e4601e8..e08565282759 100644
--- a/tools/testing/selftests/bpf/progs/bpf_loop.c
+++ b/tools/testing/selftests/bpf/progs/bpf_loop.c
@@ -3,6 +3,7 @@
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
char _license[] SEC("license") = "GPL";
@@ -53,7 +54,7 @@ static int nested_callback1(__u32 index, void *data)
return 0;
}
-SEC("fentry/__x64_sys_nanosleep")
+SEC("fentry/" SYS_PREFIX "sys_nanosleep")
int test_prog(void *ctx)
{
struct callback_ctx data = {};
@@ -71,7 +72,7 @@ int test_prog(void *ctx)
return 0;
}
-SEC("fentry/__x64_sys_nanosleep")
+SEC("fentry/" SYS_PREFIX "sys_nanosleep")
int prog_null_ctx(void *ctx)
{
if (bpf_get_current_pid_tgid() >> 32 != pid)
@@ -82,7 +83,7 @@ int prog_null_ctx(void *ctx)
return 0;
}
-SEC("fentry/__x64_sys_nanosleep")
+SEC("fentry/" SYS_PREFIX "sys_nanosleep")
int prog_invalid_flags(void *ctx)
{
struct callback_ctx data = {};
@@ -95,7 +96,7 @@ int prog_invalid_flags(void *ctx)
return 0;
}
-SEC("fentry/__x64_sys_nanosleep")
+SEC("fentry/" SYS_PREFIX "sys_nanosleep")
int prog_nested_calls(void *ctx)
{
struct callback_ctx data = {};
diff --git a/tools/testing/selftests/bpf/progs/bpf_loop_bench.c b/tools/testing/selftests/bpf/progs/bpf_loop_bench.c
index 9dafdc244462..4ce76eb064c4 100644
--- a/tools/testing/selftests/bpf/progs/bpf_loop_bench.c
+++ b/tools/testing/selftests/bpf/progs/bpf_loop_bench.c
@@ -3,6 +3,7 @@
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
char _license[] SEC("license") = "GPL";
@@ -14,7 +15,7 @@ static int empty_callback(__u32 index, void *data)
return 0;
}
-SEC("fentry/__x64_sys_getpgid")
+SEC("fentry/" SYS_PREFIX "sys_getpgid")
int benchmark(void *ctx)
{
for (int i = 0; i < 1000; i++) {
diff --git a/tools/testing/selftests/bpf/progs/bpf_misc.h b/tools/testing/selftests/bpf/progs/bpf_misc.h
new file mode 100644
index 000000000000..5bb11fe595a4
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/bpf_misc.h
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __BPF_MISC_H__
+#define __BPF_MISC_H__
+
+#if defined(__TARGET_ARCH_x86)
+#define SYSCALL_WRAPPER 1
+#define SYS_PREFIX "__x64_"
+#elif defined(__TARGET_ARCH_s390)
+#define SYSCALL_WRAPPER 1
+#define SYS_PREFIX "__s390x_"
+#elif defined(__TARGET_ARCH_arm64)
+#define SYSCALL_WRAPPER 1
+#define SYS_PREFIX "__arm64_"
+#else
+#define SYSCALL_WRAPPER 0
+#define SYS_PREFIX "__se_"
+#endif
+
+#endif
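Note that the fallback prefix here is "__se_", while the copy previously open-coded in test_probe_user.c (removed later in this patch) used an empty string. With the header in place, a program can attach to a syscall entry point portably; the string concatenation happens at compile time. A minimal sketch:

	/* expands to "fentry/__x64_sys_getpgid" on x86,
	 * "fentry/__arm64_sys_getpgid" on arm64, and so on
	 */
	SEC("fentry/" SYS_PREFIX "sys_getpgid")
	int on_getpgid(void *ctx)
	{
		return 0;
	}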
diff --git a/tools/testing/selftests/bpf/progs/bpf_mod_race.c b/tools/testing/selftests/bpf/progs/bpf_mod_race.c
new file mode 100644
index 000000000000..82a5c6c6ba83
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/bpf_mod_race.c
@@ -0,0 +1,100 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <vmlinux.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+const volatile struct {
+ /* thread to activate trace programs for */
+ pid_t tgid;
+ /* return error from __init function */
+ int inject_error;
+ /* uffd monitored range start address */
+ void *fault_addr;
+} bpf_mod_race_config = { -1 };
+
+int bpf_blocking = 0;
+int res_try_get_module = -1;
+
+static __always_inline bool check_thread_id(void)
+{
+ struct task_struct *task = bpf_get_current_task_btf();
+
+ return task->tgid == bpf_mod_race_config.tgid;
+}
+
+/* The trace of execution is something like this:
+ *
+ * finit_module()
+ *   load_module()
+ *     prepare_coming_module()
+ *       notifier_call(MODULE_STATE_COMING)
+ *         btf_parse_module()
+ *           btf_alloc_id() // Visible to userspace at this point
+ *           list_add(btf_mod->list, &btf_modules)
+ *     do_init_module()
+ *       freeinit = kmalloc()
+ *       ret = mod->init()
+ *         bpf_prog_widen_race()
+ *           bpf_copy_from_user()
+ *             ...<sleep>...
+ *       if (ret < 0)
+ *         ...
+ *         free_module()
+ * return ret
+ *
+ * At this point, the module loading thread is blocked, we now load the program:
+ *
+ * bpf_check
+ *   add_kfunc_call/check_pseudo_btf_id
+ *     btf_try_get_module
+ *       try_get_module_live == false
+ *   return -ENXIO
+ *
+ * Without the fix (try_get_module_live in btf_try_get_module):
+ *
+ * bpf_check
+ *   add_kfunc_call/check_pseudo_btf_id
+ *     btf_try_get_module
+ *       try_get_module == true
+ *     <store module reference in btf_kfunc_tab or used_btf array>
+ *   ...
+ *   return fd
+ *
+ * Now, if we inject an error in the blocked program, our module will be freed
+ * (going straight from MODULE_STATE_COMING to MODULE_STATE_GOING).
+ * Later, when the bpf program is freed, it will try to module_put the already
+ * freed module. This is why try_get_module_live returns false if mod->state is
+ * not MODULE_STATE_LIVE.
+ */
+
+SEC("fmod_ret.s/bpf_fentry_test1")
+int BPF_PROG(widen_race, int a, int ret)
+{
+ char dst;
+
+ if (!check_thread_id())
+ return 0;
+ /* Indicate that we will attempt to block */
+ bpf_blocking = 1;
+ bpf_copy_from_user(&dst, 1, bpf_mod_race_config.fault_addr);
+ return bpf_mod_race_config.inject_error;
+}
+
+SEC("fexit/do_init_module")
+int BPF_PROG(fexit_init_module, struct module *mod, int ret)
+{
+ if (!check_thread_id())
+ return 0;
+ /* Indicate that we finished blocking */
+ bpf_blocking = 2;
+ return 0;
+}
+
+SEC("fexit/btf_try_get_module")
+int BPF_PROG(fexit_module_get, const struct btf *btf, struct module *mod)
+{
+ res_try_get_module = !!mod;
+ return 0;
+}
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/progs/bpf_syscall_macro.c b/tools/testing/selftests/bpf/progs/bpf_syscall_macro.c
new file mode 100644
index 000000000000..05838ed9b89c
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/bpf_syscall_macro.c
@@ -0,0 +1,84 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright 2022 Sony Group Corporation */
+#include <vmlinux.h>
+
+#include <bpf/bpf_core_read.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+#include "bpf_misc.h"
+
+int arg1 = 0;
+unsigned long arg2 = 0;
+unsigned long arg3 = 0;
+unsigned long arg4_cx = 0;
+unsigned long arg4 = 0;
+unsigned long arg5 = 0;
+
+int arg1_core = 0;
+unsigned long arg2_core = 0;
+unsigned long arg3_core = 0;
+unsigned long arg4_core_cx = 0;
+unsigned long arg4_core = 0;
+unsigned long arg5_core = 0;
+
+int option_syscall = 0;
+unsigned long arg2_syscall = 0;
+unsigned long arg3_syscall = 0;
+unsigned long arg4_syscall = 0;
+unsigned long arg5_syscall = 0;
+
+const volatile pid_t filter_pid = 0;
+
+SEC("kprobe/" SYS_PREFIX "sys_prctl")
+int BPF_KPROBE(handle_sys_prctl)
+{
+ struct pt_regs *real_regs;
+ pid_t pid = bpf_get_current_pid_tgid() >> 32;
+ unsigned long tmp = 0;
+
+ if (pid != filter_pid)
+ return 0;
+
+ real_regs = PT_REGS_SYSCALL_REGS(ctx);
+
+ /* test for PT_REGS_PARM */
+
+#if !defined(bpf_target_arm64) && !defined(bpf_target_s390)
+ bpf_probe_read_kernel(&tmp, sizeof(tmp), &PT_REGS_PARM1_SYSCALL(real_regs));
+#endif
+ arg1 = tmp;
+ bpf_probe_read_kernel(&arg2, sizeof(arg2), &PT_REGS_PARM2_SYSCALL(real_regs));
+ bpf_probe_read_kernel(&arg3, sizeof(arg3), &PT_REGS_PARM3_SYSCALL(real_regs));
+ bpf_probe_read_kernel(&arg4_cx, sizeof(arg4_cx), &PT_REGS_PARM4(real_regs));
+ bpf_probe_read_kernel(&arg4, sizeof(arg4), &PT_REGS_PARM4_SYSCALL(real_regs));
+ bpf_probe_read_kernel(&arg5, sizeof(arg5), &PT_REGS_PARM5_SYSCALL(real_regs));
+
+ /* test for the CORE variant of PT_REGS_PARM */
+ arg1_core = PT_REGS_PARM1_CORE_SYSCALL(real_regs);
+ arg2_core = PT_REGS_PARM2_CORE_SYSCALL(real_regs);
+ arg3_core = PT_REGS_PARM3_CORE_SYSCALL(real_regs);
+ arg4_core_cx = PT_REGS_PARM4_CORE(real_regs);
+ arg4_core = PT_REGS_PARM4_CORE_SYSCALL(real_regs);
+ arg5_core = PT_REGS_PARM5_CORE_SYSCALL(real_regs);
+
+ return 0;
+}
+
+SEC("kprobe/" SYS_PREFIX "sys_prctl")
+int BPF_KPROBE_SYSCALL(prctl_enter, int option, unsigned long arg2,
+ unsigned long arg3, unsigned long arg4, unsigned long arg5)
+{
+ pid_t pid = bpf_get_current_pid_tgid() >> 32;
+
+ if (pid != filter_pid)
+ return 0;
+
+ option_syscall = option;
+ arg2_syscall = arg2;
+ arg3_syscall = arg3;
+ arg4_syscall = arg4;
+ arg5_syscall = arg5;
+ return 0;
+}
+
+char _license[] SEC("license") = "GPL";
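Since filter_pid is const volatile, it has to be set through the skeleton's rodata before the object is loaded. A hedged sketch of the user-space trigger (skeleton and argument values assumed):

	skel->rodata->filter_pid = getpid();	/* must happen before load */
	/* ... load and attach the skeleton ... */
	prctl(0xdead, 0xbeef, 1, 2, 3);	/* arbitrary, recognizable arguments */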
diff --git a/tools/testing/selftests/bpf/progs/bpf_tracing_net.h b/tools/testing/selftests/bpf/progs/bpf_tracing_net.h
index e0f42601be9b..1c1289ba5fc5 100644
--- a/tools/testing/selftests/bpf/progs/bpf_tracing_net.h
+++ b/tools/testing/selftests/bpf/progs/bpf_tracing_net.h
@@ -5,6 +5,8 @@
#define AF_INET 2
#define AF_INET6 10
+#define SOL_SOCKET 1
+#define SO_SNDBUF 7
#define __SO_ACCEPTCON (1 << 16)
#define SOL_TCP 6
diff --git a/tools/testing/selftests/bpf/progs/btf_type_tag_percpu.c b/tools/testing/selftests/bpf/progs/btf_type_tag_percpu.c
new file mode 100644
index 000000000000..8feddb8289cf
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/btf_type_tag_percpu.c
@@ -0,0 +1,66 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2022 Google */
+#include "vmlinux.h"
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+struct bpf_testmod_btf_type_tag_1 {
+ int a;
+};
+
+struct bpf_testmod_btf_type_tag_2 {
+ struct bpf_testmod_btf_type_tag_1 *p;
+};
+
+__u64 g;
+
+SEC("fentry/bpf_testmod_test_btf_type_tag_percpu_1")
+int BPF_PROG(test_percpu1, struct bpf_testmod_btf_type_tag_1 *arg)
+{
+ g = arg->a;
+ return 0;
+}
+
+SEC("fentry/bpf_testmod_test_btf_type_tag_percpu_2")
+int BPF_PROG(test_percpu2, struct bpf_testmod_btf_type_tag_2 *arg)
+{
+ g = arg->p->a;
+ return 0;
+}
+
+/* trace_cgroup_mkdir(struct cgroup *cgrp, const char *path)
+ *
+ * struct cgroup_rstat_cpu {
+ * ...
+ * struct cgroup *updated_children;
+ * ...
+ * };
+ *
+ * struct cgroup {
+ * ...
+ * struct cgroup_rstat_cpu __percpu *rstat_cpu;
+ * ...
+ * };
+ */
+SEC("tp_btf/cgroup_mkdir")
+int BPF_PROG(test_percpu_load, struct cgroup *cgrp, const char *path)
+{
+ g = (__u64)cgrp->rstat_cpu->updated_children;
+ return 0;
+}
+
+SEC("tp_btf/cgroup_mkdir")
+int BPF_PROG(test_percpu_helper, struct cgroup *cgrp, const char *path)
+{
+ struct cgroup_rstat_cpu *rstat;
+ __u32 cpu;
+
+ cpu = bpf_get_smp_processor_id();
+ rstat = (struct cgroup_rstat_cpu *)bpf_per_cpu_ptr(cgrp->rstat_cpu, cpu);
+ if (rstat) {
+ /* READ_ONCE */
+ *(volatile int *)rstat;
+ }
+
+ return 0;
+}
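The last program goes through bpf_per_cpu_ptr(), the helper that converts a __percpu-tagged pointer plus a CPU number into a plain pointer to that CPU's instance. For reference, its signature; it returns NULL for an invalid CPU, which is why the result must be checked:

	void *bpf_per_cpu_ptr(const void *percpu_ptr, __u32 cpu);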
diff --git a/tools/testing/selftests/bpf/progs/btf_type_tag_user.c b/tools/testing/selftests/bpf/progs/btf_type_tag_user.c
new file mode 100644
index 000000000000..5523f77c5a44
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/btf_type_tag_user.c
@@ -0,0 +1,40 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2022 Facebook */
+#include "vmlinux.h"
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+struct bpf_testmod_btf_type_tag_1 {
+ int a;
+};
+
+struct bpf_testmod_btf_type_tag_2 {
+ struct bpf_testmod_btf_type_tag_1 *p;
+};
+
+int g;
+
+SEC("fentry/bpf_testmod_test_btf_type_tag_user_1")
+int BPF_PROG(test_user1, struct bpf_testmod_btf_type_tag_1 *arg)
+{
+ g = arg->a;
+ return 0;
+}
+
+SEC("fentry/bpf_testmod_test_btf_type_tag_user_2")
+int BPF_PROG(test_user2, struct bpf_testmod_btf_type_tag_2 *arg)
+{
+ g = arg->p->a;
+ return 0;
+}
+
+/* int __sys_getsockname(int fd, struct sockaddr __user *usockaddr,
+ * int __user *usockaddr_len);
+ */
+SEC("fentry/__sys_getsockname")
+int BPF_PROG(test_sys_getsockname, int fd, struct sockaddr *usockaddr,
+ int *usockaddr_len)
+{
+ g = usockaddr->sa_family;
+ return 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/cgroup_getset_retval_getsockopt.c b/tools/testing/selftests/bpf/progs/cgroup_getset_retval_getsockopt.c
new file mode 100644
index 000000000000..b2a409e6382a
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/cgroup_getset_retval_getsockopt.c
@@ -0,0 +1,45 @@
+// SPDX-License-Identifier: GPL-2.0-only
+
+/*
+ * Copyright 2021 Google LLC.
+ */
+
+#include <errno.h>
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+
+__u32 invocations = 0;
+__u32 assertion_error = 0;
+__u32 retval_value = 0;
+__u32 ctx_retval_value = 0;
+
+SEC("cgroup/getsockopt")
+int get_retval(struct bpf_sockopt *ctx)
+{
+ retval_value = bpf_get_retval();
+ ctx_retval_value = ctx->retval;
+ __sync_fetch_and_add(&invocations, 1);
+
+ return 1;
+}
+
+SEC("cgroup/getsockopt")
+int set_eisconn(struct bpf_sockopt *ctx)
+{
+ __sync_fetch_and_add(&invocations, 1);
+
+ if (bpf_set_retval(-EISCONN))
+ assertion_error = 1;
+
+ return 1;
+}
+
+SEC("cgroup/getsockopt")
+int clear_retval(struct bpf_sockopt *ctx)
+{
+ __sync_fetch_and_add(&invocations, 1);
+
+ ctx->retval = 0;
+
+ return 1;
+}
diff --git a/tools/testing/selftests/bpf/progs/cgroup_getset_retval_setsockopt.c b/tools/testing/selftests/bpf/progs/cgroup_getset_retval_setsockopt.c
new file mode 100644
index 000000000000..d6e5903e06ba
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/cgroup_getset_retval_setsockopt.c
@@ -0,0 +1,52 @@
+// SPDX-License-Identifier: GPL-2.0-only
+
+/*
+ * Copyright 2021 Google LLC.
+ */
+
+#include <errno.h>
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+
+__u32 invocations = 0;
+__u32 assertion_error = 0;
+__u32 retval_value = 0;
+
+SEC("cgroup/setsockopt")
+int get_retval(struct bpf_sockopt *ctx)
+{
+ retval_value = bpf_get_retval();
+ __sync_fetch_and_add(&invocations, 1);
+
+ return 1;
+}
+
+SEC("cgroup/setsockopt")
+int set_eunatch(struct bpf_sockopt *ctx)
+{
+ __sync_fetch_and_add(&invocations, 1);
+
+ if (bpf_set_retval(-EUNATCH))
+ assertion_error = 1;
+
+ return 0;
+}
+
+SEC("cgroup/setsockopt")
+int set_eisconn(struct bpf_sockopt *ctx)
+{
+ __sync_fetch_and_add(&invocations, 1);
+
+ if (bpf_set_retval(-EISCONN))
+ assertion_error = 1;
+
+ return 0;
+}
+
+SEC("cgroup/setsockopt")
+int legacy_eperm(struct bpf_sockopt *ctx)
+{
+ __sync_fetch_and_add(&invocations, 1);
+
+ return 0;
+}
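These programs exercise the new bpf_set_retval()/bpf_get_retval() pair: for cgroup/setsockopt, returning 0 rejects the syscall, and bpf_set_retval() selects the errno user space sees instead of the legacy default of -EPERM (which legacy_eperm above still relies on). A minimal sketch combining the two:

	SEC("cgroup/setsockopt")
	int deny_with_custom_errno(struct bpf_sockopt *ctx)
	{
		/* reject the setsockopt() with -EISCONN rather than
		 * the legacy -EPERM
		 */
		bpf_set_retval(-EISCONN);
		return 0;
	}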
diff --git a/tools/testing/selftests/bpf/progs/core_kern.c b/tools/testing/selftests/bpf/progs/core_kern.c
index 13499cc15c7d..2715fe27d4cf 100644
--- a/tools/testing/selftests/bpf/progs/core_kern.c
+++ b/tools/testing/selftests/bpf/progs/core_kern.c
@@ -101,4 +101,20 @@ int balancer_ingress(struct __sk_buff *ctx)
return 0;
}
+typedef int (*func_proto_typedef___match)(long);
+typedef int (*func_proto_typedef___doesnt_match)(char *);
+typedef int (*func_proto_typedef_nested1)(func_proto_typedef___match);
+
+int proto_out[3];
+
+SEC("raw_tracepoint/sys_enter")
+int core_relo_proto(void *ctx)
+{
+ proto_out[0] = bpf_core_type_exists(func_proto_typedef___match);
+ proto_out[1] = bpf_core_type_exists(func_proto_typedef___doesnt_match);
+ proto_out[2] = bpf_core_type_exists(func_proto_typedef_nested1);
+
+ return 0;
+}
+
char LICENSE[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/progs/core_kern_overflow.c b/tools/testing/selftests/bpf/progs/core_kern_overflow.c
new file mode 100644
index 000000000000..f0d5652256ba
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/core_kern_overflow.c
@@ -0,0 +1,22 @@
+// SPDX-License-Identifier: GPL-2.0
+#include "vmlinux.h"
+
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+#include <bpf/bpf_core_read.h>
+
+typedef int (*func_proto_typedef)(long);
+typedef int (*func_proto_typedef_nested1)(func_proto_typedef);
+typedef int (*func_proto_typedef_nested2)(func_proto_typedef_nested1);
+
+int proto_out;
+
+SEC("raw_tracepoint/sys_enter")
+int core_relo_proto(void *ctx)
+{
+ proto_out = bpf_core_type_exists(func_proto_typedef_nested2);
+
+ return 0;
+}
+
+char LICENSE[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/progs/fexit_sleep.c b/tools/testing/selftests/bpf/progs/fexit_sleep.c
index bca92c9bd29a..106dc75efcc4 100644
--- a/tools/testing/selftests/bpf/progs/fexit_sleep.c
+++ b/tools/testing/selftests/bpf/progs/fexit_sleep.c
@@ -3,6 +3,7 @@
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>
+#include "bpf_misc.h"
char LICENSE[] SEC("license") = "GPL";
@@ -10,8 +11,8 @@ int pid = 0;
int fentry_cnt = 0;
int fexit_cnt = 0;
-SEC("fentry/__x64_sys_nanosleep")
-int BPF_PROG(nanosleep_fentry, const struct pt_regs *regs)
+SEC("fentry/" SYS_PREFIX "sys_nanosleep")
+int nanosleep_fentry(void *ctx)
{
if (bpf_get_current_pid_tgid() >> 32 != pid)
return 0;
@@ -20,8 +21,8 @@ int BPF_PROG(nanosleep_fentry, const struct pt_regs *regs)
return 0;
}
-SEC("fexit/__x64_sys_nanosleep")
-int BPF_PROG(nanosleep_fexit, const struct pt_regs *regs, int ret)
+SEC("fexit/" SYS_PREFIX "sys_nanosleep")
+int nanosleep_fexit(void *ctx)
{
if (bpf_get_current_pid_tgid() >> 32 != pid)
return 0;
diff --git a/tools/testing/selftests/bpf/progs/freplace_cls_redirect.c b/tools/testing/selftests/bpf/progs/freplace_cls_redirect.c
index 68a5a9db928a..7e94412d47a5 100644
--- a/tools/testing/selftests/bpf/progs/freplace_cls_redirect.c
+++ b/tools/testing/selftests/bpf/progs/freplace_cls_redirect.c
@@ -7,12 +7,12 @@
#include <bpf/bpf_endian.h>
#include <bpf/bpf_helpers.h>
-struct bpf_map_def SEC("maps") sock_map = {
- .type = BPF_MAP_TYPE_SOCKMAP,
- .key_size = sizeof(int),
- .value_size = sizeof(int),
- .max_entries = 2,
-};
+struct {
+ __uint(type, BPF_MAP_TYPE_SOCKMAP);
+ __type(key, int);
+ __type(value, int);
+ __uint(max_entries, 2);
+} sock_map SEC(".maps");
SEC("freplace/cls_redirect")
int freplace_cls_redirect_test(struct __sk_buff *skb)
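This and several conversions below (sample_map_ret0.c, test_btf_nokv.c, test_skb_cgroup_id_kern.c) move from the deprecated struct bpf_map_def to BTF-defined maps, where the map parameters are encoded in the field types themselves. The macros come from bpf_helpers.h:

	#define __uint(name, val)	int (*name)[val]
	#define __type(name, val)	typeof(val) *name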
diff --git a/tools/testing/selftests/bpf/progs/ima.c b/tools/testing/selftests/bpf/progs/ima.c
index 96060ff4ffc6..e16a2c208481 100644
--- a/tools/testing/selftests/bpf/progs/ima.c
+++ b/tools/testing/selftests/bpf/progs/ima.c
@@ -18,8 +18,12 @@ struct {
char _license[] SEC("license") = "GPL";
-SEC("lsm.s/bprm_committed_creds")
-void BPF_PROG(ima, struct linux_binprm *bprm)
+bool use_ima_file_hash;
+bool enable_bprm_creds_for_exec;
+bool enable_kernel_read_file;
+bool test_deny;
+
+static void ima_test_common(struct file *file)
{
u64 ima_hash = 0;
u64 *sample;
@@ -28,8 +32,12 @@ void BPF_PROG(ima, struct linux_binprm *bprm)
pid = bpf_get_current_pid_tgid() >> 32;
if (pid == monitored_pid) {
- ret = bpf_ima_inode_hash(bprm->file->f_inode, &ima_hash,
- sizeof(ima_hash));
+ if (!use_ima_file_hash)
+ ret = bpf_ima_inode_hash(file->f_inode, &ima_hash,
+ sizeof(ima_hash));
+ else
+ ret = bpf_ima_file_hash(file, &ima_hash,
+ sizeof(ima_hash));
if (ret < 0 || ima_hash == 0)
return;
@@ -43,3 +51,53 @@ void BPF_PROG(ima, struct linux_binprm *bprm)
return;
}
+
+static int ima_test_deny(void)
+{
+ u32 pid;
+
+ pid = bpf_get_current_pid_tgid() >> 32;
+ if (pid == monitored_pid && test_deny)
+ return -EPERM;
+
+ return 0;
+}
+
+SEC("lsm.s/bprm_committed_creds")
+void BPF_PROG(bprm_committed_creds, struct linux_binprm *bprm)
+{
+ ima_test_common(bprm->file);
+}
+
+SEC("lsm.s/bprm_creds_for_exec")
+int BPF_PROG(bprm_creds_for_exec, struct linux_binprm *bprm)
+{
+ if (!enable_bprm_creds_for_exec)
+ return 0;
+
+ ima_test_common(bprm->file);
+ return 0;
+}
+
+SEC("lsm.s/kernel_read_file")
+int BPF_PROG(kernel_read_file, struct file *file, enum kernel_read_file_id id,
+ bool contents)
+{
+ int ret;
+
+ if (!enable_kernel_read_file)
+ return 0;
+
+ if (!contents)
+ return 0;
+
+ if (id != READING_POLICY)
+ return 0;
+
+ ret = ima_test_deny();
+ if (ret < 0)
+ return ret;
+
+ ima_test_common(file);
+ return 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/kfunc_call_race.c b/tools/testing/selftests/bpf/progs/kfunc_call_race.c
new file mode 100644
index 000000000000..4e8fed75a4e0
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/kfunc_call_race.c
@@ -0,0 +1,14 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <vmlinux.h>
+#include <bpf/bpf_helpers.h>
+
+extern void bpf_testmod_test_mod_kfunc(int i) __ksym;
+
+SEC("tc")
+int kfunc_call_fail(struct __sk_buff *ctx)
+{
+ bpf_testmod_test_mod_kfunc(0);
+ return 0;
+}
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/progs/kfunc_call_test.c b/tools/testing/selftests/bpf/progs/kfunc_call_test.c
index 8a8cf59017aa..5aecbb9fdc68 100644
--- a/tools/testing/selftests/bpf/progs/kfunc_call_test.c
+++ b/tools/testing/selftests/bpf/progs/kfunc_call_test.c
@@ -1,13 +1,20 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2021 Facebook */
-#include <linux/bpf.h>
+#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
-#include "bpf_tcp_helpers.h"
extern int bpf_kfunc_call_test2(struct sock *sk, __u32 a, __u32 b) __ksym;
extern __u64 bpf_kfunc_call_test1(struct sock *sk, __u32 a, __u64 b,
__u32 c, __u64 d) __ksym;
+extern struct prog_test_ref_kfunc *bpf_kfunc_call_test_acquire(unsigned long *sp) __ksym;
+extern void bpf_kfunc_call_test_release(struct prog_test_ref_kfunc *p) __ksym;
+extern void bpf_kfunc_call_test_pass_ctx(struct __sk_buff *skb) __ksym;
+extern void bpf_kfunc_call_test_pass1(struct prog_test_pass1 *p) __ksym;
+extern void bpf_kfunc_call_test_pass2(struct prog_test_pass2 *p) __ksym;
+extern void bpf_kfunc_call_test_mem_len_pass1(void *mem, int len) __ksym;
+extern void bpf_kfunc_call_test_mem_len_fail2(__u64 *mem, int len) __ksym;
+
SEC("tc")
int kfunc_call_test2(struct __sk_buff *skb)
{
@@ -44,4 +51,45 @@ int kfunc_call_test1(struct __sk_buff *skb)
return ret;
}
+SEC("tc")
+int kfunc_call_test_ref_btf_id(struct __sk_buff *skb)
+{
+ struct prog_test_ref_kfunc *pt;
+ unsigned long s = 0;
+ int ret = 0;
+
+ pt = bpf_kfunc_call_test_acquire(&s);
+ if (pt) {
+ if (pt->a != 42 || pt->b != 108)
+ ret = -1;
+ bpf_kfunc_call_test_release(pt);
+ }
+ return ret;
+}
+
+SEC("tc")
+int kfunc_call_test_pass(struct __sk_buff *skb)
+{
+ struct prog_test_pass1 p1 = {};
+ struct prog_test_pass2 p2 = {};
+ short a = 0;
+ __u64 b = 0;
+ long c = 0;
+ char d = 0;
+ int e = 0;
+
+ bpf_kfunc_call_test_pass_ctx(skb);
+ bpf_kfunc_call_test_pass1(&p1);
+ bpf_kfunc_call_test_pass2(&p2);
+
+ bpf_kfunc_call_test_mem_len_pass1(&a, sizeof(a));
+ bpf_kfunc_call_test_mem_len_pass1(&b, sizeof(b));
+ bpf_kfunc_call_test_mem_len_pass1(&c, sizeof(c));
+ bpf_kfunc_call_test_mem_len_pass1(&d, sizeof(d));
+ bpf_kfunc_call_test_mem_len_pass1(&e, sizeof(e));
+ bpf_kfunc_call_test_mem_len_fail2(&b, -1);
+
+ return 0;
+}
+
char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/progs/kprobe_multi.c b/tools/testing/selftests/bpf/progs/kprobe_multi.c
new file mode 100644
index 000000000000..600be50800f8
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/kprobe_multi.c
@@ -0,0 +1,100 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+#include <stdbool.h>
+
+char _license[] SEC("license") = "GPL";
+
+extern const void bpf_fentry_test1 __ksym;
+extern const void bpf_fentry_test2 __ksym;
+extern const void bpf_fentry_test3 __ksym;
+extern const void bpf_fentry_test4 __ksym;
+extern const void bpf_fentry_test5 __ksym;
+extern const void bpf_fentry_test6 __ksym;
+extern const void bpf_fentry_test7 __ksym;
+extern const void bpf_fentry_test8 __ksym;
+
+int pid = 0;
+bool test_cookie = false;
+
+__u64 kprobe_test1_result = 0;
+__u64 kprobe_test2_result = 0;
+__u64 kprobe_test3_result = 0;
+__u64 kprobe_test4_result = 0;
+__u64 kprobe_test5_result = 0;
+__u64 kprobe_test6_result = 0;
+__u64 kprobe_test7_result = 0;
+__u64 kprobe_test8_result = 0;
+
+__u64 kretprobe_test1_result = 0;
+__u64 kretprobe_test2_result = 0;
+__u64 kretprobe_test3_result = 0;
+__u64 kretprobe_test4_result = 0;
+__u64 kretprobe_test5_result = 0;
+__u64 kretprobe_test6_result = 0;
+__u64 kretprobe_test7_result = 0;
+__u64 kretprobe_test8_result = 0;
+
+extern bool CONFIG_X86_KERNEL_IBT __kconfig __weak;
+
+static void kprobe_multi_check(void *ctx, bool is_return)
+{
+ if (bpf_get_current_pid_tgid() >> 32 != pid)
+ return;
+
+ __u64 cookie = test_cookie ? bpf_get_attach_cookie(ctx) : 0;
+ __u64 addr = bpf_get_func_ip(ctx) - (CONFIG_X86_KERNEL_IBT ? 4 : 0);
+
+#define SET(__var, __addr, __cookie) ({ \
+ if (((const void *) addr == __addr) && \
+ (!test_cookie || (cookie == __cookie))) \
+ __var = 1; \
+})
+
+ if (is_return) {
+ SET(kretprobe_test1_result, &bpf_fentry_test1, 8);
+ SET(kretprobe_test2_result, &bpf_fentry_test2, 7);
+ SET(kretprobe_test3_result, &bpf_fentry_test3, 6);
+ SET(kretprobe_test4_result, &bpf_fentry_test4, 5);
+ SET(kretprobe_test5_result, &bpf_fentry_test5, 4);
+ SET(kretprobe_test6_result, &bpf_fentry_test6, 3);
+ SET(kretprobe_test7_result, &bpf_fentry_test7, 2);
+ SET(kretprobe_test8_result, &bpf_fentry_test8, 1);
+ } else {
+ SET(kprobe_test1_result, &bpf_fentry_test1, 1);
+ SET(kprobe_test2_result, &bpf_fentry_test2, 2);
+ SET(kprobe_test3_result, &bpf_fentry_test3, 3);
+ SET(kprobe_test4_result, &bpf_fentry_test4, 4);
+ SET(kprobe_test5_result, &bpf_fentry_test5, 5);
+ SET(kprobe_test6_result, &bpf_fentry_test6, 6);
+ SET(kprobe_test7_result, &bpf_fentry_test7, 7);
+ SET(kprobe_test8_result, &bpf_fentry_test8, 8);
+ }
+
+#undef SET
+}
+
+/*
+ * No tests in here, just to trigger 'bpf_fentry_test*'
+ * through tracing test_run
+ */
+SEC("fentry/bpf_modify_return_test")
+int BPF_PROG(trigger)
+{
+ return 0;
+}
+
+SEC("kprobe.multi/bpf_fentry_tes??")
+int test_kprobe(struct pt_regs *ctx)
+{
+ kprobe_multi_check(ctx, false);
+ return 0;
+}
+
+SEC("kretprobe.multi/bpf_fentry_test*")
+int test_kretprobe(struct pt_regs *ctx)
+{
+ kprobe_multi_check(ctx, true);
+ return 0;
+}
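The SEC("kprobe.multi/<pattern>") form lets libbpf attach one program to every symbol matching a glob; the cookies checked in kprobe_multi_check() are supplied at attach time. A hedged user-space sketch (array contents assumed to mirror the checks above):

	const char *syms[8] = { "bpf_fentry_test1", /* ... */ "bpf_fentry_test8" };
	__u64 cookies[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };

	LIBBPF_OPTS(bpf_kprobe_multi_opts, opts,
		.syms = syms,
		.cookies = cookies,
		.cnt = 8,
	);
	link = bpf_program__attach_kprobe_multi_opts(skel->progs.test_kprobe,
						     NULL, &opts);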
diff --git a/tools/testing/selftests/bpf/progs/ksym_race.c b/tools/testing/selftests/bpf/progs/ksym_race.c
new file mode 100644
index 000000000000..def97f2fed90
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/ksym_race.c
@@ -0,0 +1,13 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <vmlinux.h>
+#include <bpf/bpf_helpers.h>
+
+extern int bpf_testmod_ksym_percpu __ksym;
+
+SEC("tc")
+int ksym_fail(struct __sk_buff *ctx)
+{
+ return *(int *)bpf_this_cpu_ptr(&bpf_testmod_ksym_percpu);
+}
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/progs/local_storage.c b/tools/testing/selftests/bpf/progs/local_storage.c
index 9b1f9b75d5c2..19423ed862e3 100644
--- a/tools/testing/selftests/bpf/progs/local_storage.c
+++ b/tools/testing/selftests/bpf/progs/local_storage.c
@@ -37,6 +37,13 @@ struct {
} sk_storage_map SEC(".maps");
struct {
+ __uint(type, BPF_MAP_TYPE_SK_STORAGE);
+ __uint(map_flags, BPF_F_NO_PREALLOC | BPF_F_CLONE);
+ __type(key, int);
+ __type(value, struct local_storage);
+} sk_storage_map2 SEC(".maps");
+
+struct {
__uint(type, BPF_MAP_TYPE_TASK_STORAGE);
__uint(map_flags, BPF_F_NO_PREALLOC);
__type(key, int);
@@ -115,7 +122,19 @@ int BPF_PROG(socket_bind, struct socket *sock, struct sockaddr *address,
if (storage->value != DUMMY_STORAGE_VALUE)
sk_storage_result = -1;
+ /* This tests that we can associate multiple elements
+ * with the local storage.
+ */
+ storage = bpf_sk_storage_get(&sk_storage_map2, sock->sk, 0,
+ BPF_LOCAL_STORAGE_GET_F_CREATE);
+ if (!storage)
+ return 0;
+
err = bpf_sk_storage_delete(&sk_storage_map, sock->sk);
+ if (err)
+ return 0;
+
+ err = bpf_sk_storage_delete(&sk_storage_map2, sock->sk);
if (!err)
sk_storage_result = err;
diff --git a/tools/testing/selftests/bpf/progs/perfbuf_bench.c b/tools/testing/selftests/bpf/progs/perfbuf_bench.c
index e5ab4836a641..45204fe0c570 100644
--- a/tools/testing/selftests/bpf/progs/perfbuf_bench.c
+++ b/tools/testing/selftests/bpf/progs/perfbuf_bench.c
@@ -4,6 +4,7 @@
#include <linux/bpf.h>
#include <stdint.h>
#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
char _license[] SEC("license") = "GPL";
@@ -18,7 +19,7 @@ const volatile int batch_cnt = 0;
long sample_val = 42;
long dropped __attribute__((aligned(128))) = 0;
-SEC("fentry/__x64_sys_getpgid")
+SEC("fentry/" SYS_PREFIX "sys_getpgid")
int bench_perfbuf(void *ctx)
{
__u64 *sample;
diff --git a/tools/testing/selftests/bpf/progs/ringbuf_bench.c b/tools/testing/selftests/bpf/progs/ringbuf_bench.c
index 123607d314d6..6a468496f539 100644
--- a/tools/testing/selftests/bpf/progs/ringbuf_bench.c
+++ b/tools/testing/selftests/bpf/progs/ringbuf_bench.c
@@ -4,6 +4,7 @@
#include <linux/bpf.h>
#include <stdint.h>
#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
char _license[] SEC("license") = "GPL";
@@ -30,7 +31,7 @@ static __always_inline long get_flags()
return sz >= wakeup_data_size ? BPF_RB_FORCE_WAKEUP : BPF_RB_NO_WAKEUP;
}
-SEC("fentry/__x64_sys_getpgid")
+SEC("fentry/" SYS_PREFIX "sys_getpgid")
int bench_ringbuf(void *ctx)
{
long *sample, flags;
diff --git a/tools/testing/selftests/bpf/progs/sample_map_ret0.c b/tools/testing/selftests/bpf/progs/sample_map_ret0.c
index 1612a32007b6..495990d355ef 100644
--- a/tools/testing/selftests/bpf/progs/sample_map_ret0.c
+++ b/tools/testing/selftests/bpf/progs/sample_map_ret0.c
@@ -2,19 +2,19 @@
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
-struct bpf_map_def SEC("maps") htab = {
- .type = BPF_MAP_TYPE_HASH,
- .key_size = sizeof(__u32),
- .value_size = sizeof(long),
- .max_entries = 2,
-};
+struct {
+ __uint(type, BPF_MAP_TYPE_HASH);
+ __type(key, __u32);
+ __type(value, long);
+ __uint(max_entries, 2);
+} htab SEC(".maps");
-struct bpf_map_def SEC("maps") array = {
- .type = BPF_MAP_TYPE_ARRAY,
- .key_size = sizeof(__u32),
- .value_size = sizeof(long),
- .max_entries = 2,
-};
+struct {
+ __uint(type, BPF_MAP_TYPE_ARRAY);
+ __type(key, __u32);
+ __type(value, long);
+ __uint(max_entries, 2);
+} array SEC(".maps");
/* Sample program which should always load for testing control paths. */
SEC(".text") int func()
diff --git a/tools/testing/selftests/bpf/progs/sockmap_parse_prog.c b/tools/testing/selftests/bpf/progs/sockmap_parse_prog.c
index 95d5b941bc1f..c9abfe3a11af 100644
--- a/tools/testing/selftests/bpf/progs/sockmap_parse_prog.c
+++ b/tools/testing/selftests/bpf/progs/sockmap_parse_prog.c
@@ -7,8 +7,6 @@ int bpf_prog1(struct __sk_buff *skb)
{
void *data_end = (void *)(long) skb->data_end;
void *data = (void *)(long) skb->data;
- __u32 lport = skb->local_port;
- __u32 rport = skb->remote_port;
__u8 *d = data;
int err;
diff --git a/tools/testing/selftests/bpf/progs/sockopt_sk.c b/tools/testing/selftests/bpf/progs/sockopt_sk.c
index 79c8139b63b8..c8d810010a94 100644
--- a/tools/testing/selftests/bpf/progs/sockopt_sk.c
+++ b/tools/testing/selftests/bpf/progs/sockopt_sk.c
@@ -72,18 +72,19 @@ int _getsockopt(struct bpf_sockopt *ctx)
* reasons.
*/
- if (optval + sizeof(struct tcp_zerocopy_receive) > optval_end)
- return 0; /* EPERM, bounds check */
+ /* Check that optval contains address (__u64) */
+ if (optval + sizeof(__u64) > optval_end)
+ return 0; /* bounds check */
if (((struct tcp_zerocopy_receive *)optval)->address != 0)
- return 0; /* EPERM, unexpected data */
+ return 0; /* unexpected data */
return 1;
}
if (ctx->level == SOL_IP && ctx->optname == IP_FREEBIND) {
if (optval + 1 > optval_end)
- return 0; /* EPERM, bounds check */
+ return 0; /* bounds check */
ctx->retval = 0; /* Reset system call return value to zero */
@@ -96,24 +97,24 @@ int _getsockopt(struct bpf_sockopt *ctx)
* bytes of data.
*/
if (optval_end - optval != page_size)
- return 0; /* EPERM, unexpected data size */
+ return 0; /* unexpected data size */
return 1;
}
if (ctx->level != SOL_CUSTOM)
- return 0; /* EPERM, deny everything except custom level */
+ return 0; /* deny everything except custom level */
if (optval + 1 > optval_end)
- return 0; /* EPERM, bounds check */
+ return 0; /* bounds check */
storage = bpf_sk_storage_get(&socket_storage_map, ctx->sk, 0,
BPF_SK_STORAGE_GET_F_CREATE);
if (!storage)
- return 0; /* EPERM, couldn't get sk storage */
+ return 0; /* couldn't get sk storage */
if (!ctx->retval)
- return 0; /* EPERM, kernel should not have handled
+ return 0; /* kernel should not have handled
* SOL_CUSTOM, something is wrong!
*/
ctx->retval = 0; /* Reset system call return value to zero */
@@ -152,7 +153,7 @@ int _setsockopt(struct bpf_sockopt *ctx)
/* Overwrite SO_SNDBUF value */
if (optval + sizeof(__u32) > optval_end)
- return 0; /* EPERM, bounds check */
+ return 0; /* bounds check */
*(__u32 *)optval = 0x55AA;
ctx->optlen = 4;
@@ -164,7 +165,7 @@ int _setsockopt(struct bpf_sockopt *ctx)
/* Always use cubic */
if (optval + 5 > optval_end)
- return 0; /* EPERM, bounds check */
+ return 0; /* bounds check */
memcpy(optval, "cubic", 5);
ctx->optlen = 5;
@@ -175,10 +176,10 @@ int _setsockopt(struct bpf_sockopt *ctx)
if (ctx->level == SOL_IP && ctx->optname == IP_FREEBIND) {
/* Original optlen is larger than PAGE_SIZE. */
if (ctx->optlen != page_size * 2)
- return 0; /* EPERM, unexpected data size */
+ return 0; /* unexpected data size */
if (optval + 1 > optval_end)
- return 0; /* EPERM, bounds check */
+ return 0; /* bounds check */
/* Make sure we can trim the buffer. */
optval[0] = 0;
@@ -189,21 +190,21 @@ int _setsockopt(struct bpf_sockopt *ctx)
* bytes of data.
*/
if (optval_end - optval != page_size)
- return 0; /* EPERM, unexpected data size */
+ return 0; /* unexpected data size */
return 1;
}
if (ctx->level != SOL_CUSTOM)
- return 0; /* EPERM, deny everything except custom level */
+ return 0; /* deny everything except custom level */
if (optval + 1 > optval_end)
- return 0; /* EPERM, bounds check */
+ return 0; /* bounds check */
storage = bpf_sk_storage_get(&socket_storage_map, ctx->sk, 0,
BPF_SK_STORAGE_GET_F_CREATE);
if (!storage)
- return 0; /* EPERM, couldn't get sk storage */
+ return 0; /* couldn't get sk storage */
storage->val = optval[0];
ctx->optlen = -1; /* BPF has consumed this option, don't call kernel
diff --git a/tools/testing/selftests/bpf/progs/stacktrace_map_skip.c b/tools/testing/selftests/bpf/progs/stacktrace_map_skip.c
new file mode 100644
index 000000000000..2eb297df3dd6
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/stacktrace_map_skip.c
@@ -0,0 +1,68 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <vmlinux.h>
+#include <bpf/bpf_helpers.h>
+
+#define TEST_STACK_DEPTH 2
+#define TEST_MAX_ENTRIES 16384
+
+typedef __u64 stack_trace_t[TEST_STACK_DEPTH];
+
+struct {
+ __uint(type, BPF_MAP_TYPE_STACK_TRACE);
+ __uint(max_entries, TEST_MAX_ENTRIES);
+ __type(key, __u32);
+ __type(value, stack_trace_t);
+} stackmap SEC(".maps");
+
+struct {
+ __uint(type, BPF_MAP_TYPE_HASH);
+ __uint(max_entries, TEST_MAX_ENTRIES);
+ __type(key, __u32);
+ __type(value, __u32);
+} stackid_hmap SEC(".maps");
+
+struct {
+ __uint(type, BPF_MAP_TYPE_ARRAY);
+ __uint(max_entries, TEST_MAX_ENTRIES);
+ __type(key, __u32);
+ __type(value, stack_trace_t);
+} stack_amap SEC(".maps");
+
+int pid = 0;
+int control = 0;
+int failed = 0;
+
+SEC("tracepoint/sched/sched_switch")
+int oncpu(struct trace_event_raw_sched_switch *ctx)
+{
+ __u32 max_len = TEST_STACK_DEPTH * sizeof(__u64);
+ __u32 key = 0, val = 0;
+ __u64 *stack_p;
+
+ if (pid != (bpf_get_current_pid_tgid() >> 32))
+ return 0;
+
+ if (control)
+ return 0;
+
+ /* it should allow skipping as many entries as the whole buffer holds */
+ key = bpf_get_stackid(ctx, &stackmap, TEST_STACK_DEPTH);
+ if ((int)key >= 0) {
+ /* The size of stackmap and stack_amap should be the same */
+ bpf_map_update_elem(&stackid_hmap, &key, &val, 0);
+ stack_p = bpf_map_lookup_elem(&stack_amap, &key);
+ if (stack_p) {
+ bpf_get_stack(ctx, stack_p, max_len, TEST_STACK_DEPTH);
+ /* it wrongly skipped all the entries and zero-filled the buffer */
+ if (stack_p[0] == 0)
+ failed = 1;
+ }
+ } else {
+ /* old kernel doesn't support skipping that many entries */
+ failed = 2;
+ }
+
+ return 0;
+}
+
+char _license[] SEC("license") = "GPL";
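bpf_get_stackid() takes the number of frames to skip in the low byte of its flags argument, so passing TEST_STACK_DEPTH asks to skip exactly as many entries as the value buffer holds; older kernels rejected that, which the failed = 2 branch detects. The relevant UAPI mask:

	/* include/uapi/linux/bpf.h */
	#define BPF_F_SKIP_FIELD_MASK	0xffULL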
diff --git a/tools/testing/selftests/bpf/progs/test_bpf_nf.c b/tools/testing/selftests/bpf/progs/test_bpf_nf.c
new file mode 100644
index 000000000000..f00a9731930e
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/test_bpf_nf.c
@@ -0,0 +1,118 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <vmlinux.h>
+#include <bpf/bpf_helpers.h>
+
+#define EAFNOSUPPORT 97
+#define EPROTO 71
+#define ENONET 64
+#define EINVAL 22
+#define ENOENT 2
+
+int test_einval_bpf_tuple = 0;
+int test_einval_reserved = 0;
+int test_einval_netns_id = 0;
+int test_einval_len_opts = 0;
+int test_eproto_l4proto = 0;
+int test_enonet_netns_id = 0;
+int test_enoent_lookup = 0;
+int test_eafnosupport = 0;
+
+struct nf_conn;
+
+struct bpf_ct_opts___local {
+ s32 netns_id;
+ s32 error;
+ u8 l4proto;
+ u8 reserved[3];
+} __attribute__((preserve_access_index));
+
+struct nf_conn *bpf_xdp_ct_lookup(struct xdp_md *, struct bpf_sock_tuple *, u32,
+ struct bpf_ct_opts___local *, u32) __ksym;
+struct nf_conn *bpf_skb_ct_lookup(struct __sk_buff *, struct bpf_sock_tuple *, u32,
+ struct bpf_ct_opts___local *, u32) __ksym;
+void bpf_ct_release(struct nf_conn *) __ksym;
+
+static __always_inline void
+nf_ct_test(struct nf_conn *(*func)(void *, struct bpf_sock_tuple *, u32,
+ struct bpf_ct_opts___local *, u32),
+ void *ctx)
+{
+ struct bpf_ct_opts___local opts_def = { .l4proto = IPPROTO_TCP, .netns_id = -1 };
+ struct bpf_sock_tuple bpf_tuple;
+ struct nf_conn *ct;
+
+ __builtin_memset(&bpf_tuple, 0, sizeof(bpf_tuple.ipv4));
+
+ ct = func(ctx, NULL, 0, &opts_def, sizeof(opts_def));
+ if (ct)
+ bpf_ct_release(ct);
+ else
+ test_einval_bpf_tuple = opts_def.error;
+
+ opts_def.reserved[0] = 1;
+ ct = func(ctx, &bpf_tuple, sizeof(bpf_tuple.ipv4), &opts_def, sizeof(opts_def));
+ opts_def.reserved[0] = 0;
+ opts_def.l4proto = IPPROTO_TCP;
+ if (ct)
+ bpf_ct_release(ct);
+ else
+ test_einval_reserved = opts_def.error;
+
+ opts_def.netns_id = -2;
+ ct = func(ctx, &bpf_tuple, sizeof(bpf_tuple.ipv4), &opts_def, sizeof(opts_def));
+ opts_def.netns_id = -1;
+ if (ct)
+ bpf_ct_release(ct);
+ else
+ test_einval_netns_id = opts_def.error;
+
+ ct = func(ctx, &bpf_tuple, sizeof(bpf_tuple.ipv4), &opts_def, sizeof(opts_def) - 1);
+ if (ct)
+ bpf_ct_release(ct);
+ else
+ test_einval_len_opts = opts_def.error;
+
+ opts_def.l4proto = IPPROTO_ICMP;
+ ct = func(ctx, &bpf_tuple, sizeof(bpf_tuple.ipv4), &opts_def, sizeof(opts_def));
+ opts_def.l4proto = IPPROTO_TCP;
+ if (ct)
+ bpf_ct_release(ct);
+ else
+ test_eproto_l4proto = opts_def.error;
+
+ opts_def.netns_id = 0xf00f;
+ ct = func(ctx, &bpf_tuple, sizeof(bpf_tuple.ipv4), &opts_def, sizeof(opts_def));
+ opts_def.netns_id = -1;
+ if (ct)
+ bpf_ct_release(ct);
+ else
+ test_enonet_netns_id = opts_def.error;
+
+ ct = func(ctx, &bpf_tuple, sizeof(bpf_tuple.ipv4), &opts_def, sizeof(opts_def));
+ if (ct)
+ bpf_ct_release(ct);
+ else
+ test_enoent_lookup = opts_def.error;
+
+ ct = func(ctx, &bpf_tuple, sizeof(bpf_tuple.ipv4) - 1, &opts_def, sizeof(opts_def));
+ if (ct)
+ bpf_ct_release(ct);
+ else
+ test_eafnosupport = opts_def.error;
+}
+
+SEC("xdp")
+int nf_xdp_ct_test(struct xdp_md *ctx)
+{
+ nf_ct_test((void *)bpf_xdp_ct_lookup, ctx);
+ return 0;
+}
+
+SEC("tc")
+int nf_skb_ct_test(struct __sk_buff *ctx)
+{
+ nf_ct_test((void *)bpf_skb_ct_lookup, ctx);
+ return 0;
+}
+
+char _license[] SEC("license") = "GPL";
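Two conventions here are worth spelling out: the kfuncs are declared locally with __ksym so the verifier resolves them against kernel BTF, and the ___local suffix on bpf_ct_opts___local is ignored by CO-RE matching, so the preserve_access_index copy relocates against the kernel's struct bpf_ct_opts. The calling pattern for a lookup, sketched:

	ct = bpf_xdp_ct_lookup(ctx, &bpf_tuple, sizeof(bpf_tuple.ipv4),
			       &opts_def, sizeof(opts_def));
	if (ct)
		bpf_ct_release(ct);	/* acquired references must be released */
	else
		err = opts_def.error;	/* failures report the errno in opts */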
diff --git a/tools/testing/selftests/bpf/progs/btf_decl_tag.c b/tools/testing/selftests/bpf/progs/test_btf_decl_tag.c
index c88ccc53529a..c88ccc53529a 100644
--- a/tools/testing/selftests/bpf/progs/btf_decl_tag.c
+++ b/tools/testing/selftests/bpf/progs/test_btf_decl_tag.c
diff --git a/tools/testing/selftests/bpf/progs/test_btf_haskv.c b/tools/testing/selftests/bpf/progs/test_btf_haskv.c
index 160ead6c67b2..07c94df13660 100644
--- a/tools/testing/selftests/bpf/progs/test_btf_haskv.c
+++ b/tools/testing/selftests/bpf/progs/test_btf_haskv.c
@@ -9,12 +9,15 @@ struct ipv_counts {
unsigned int v6;
};
+#pragma GCC diagnostic push
+#pragma GCC diagnostic ignored "-Wdeprecated-declarations"
struct bpf_map_def SEC("maps") btf_map = {
.type = BPF_MAP_TYPE_ARRAY,
.key_size = sizeof(int),
.value_size = sizeof(struct ipv_counts),
.max_entries = 4,
};
+#pragma GCC diagnostic pop
BPF_ANNOTATE_KV_PAIR(btf_map, int, struct ipv_counts);
diff --git a/tools/testing/selftests/bpf/progs/test_btf_newkv.c b/tools/testing/selftests/bpf/progs/test_btf_newkv.c
index 1884a5bd10f5..762671a2e90c 100644
--- a/tools/testing/selftests/bpf/progs/test_btf_newkv.c
+++ b/tools/testing/selftests/bpf/progs/test_btf_newkv.c
@@ -9,6 +9,8 @@ struct ipv_counts {
unsigned int v6;
};
+#pragma GCC diagnostic push
+#pragma GCC diagnostic ignored "-Wdeprecated-declarations"
/* just to validate we can handle maps in multiple sections */
struct bpf_map_def SEC("maps") btf_map_legacy = {
.type = BPF_MAP_TYPE_ARRAY,
@@ -16,6 +18,7 @@ struct bpf_map_def SEC("maps") btf_map_legacy = {
.value_size = sizeof(long long),
.max_entries = 4,
};
+#pragma GCC diagnostic pop
BPF_ANNOTATE_KV_PAIR(btf_map_legacy, int, struct ipv_counts);
diff --git a/tools/testing/selftests/bpf/progs/test_btf_nokv.c b/tools/testing/selftests/bpf/progs/test_btf_nokv.c
index 15e0f9945fe4..1dabb88f8cb4 100644
--- a/tools/testing/selftests/bpf/progs/test_btf_nokv.c
+++ b/tools/testing/selftests/bpf/progs/test_btf_nokv.c
@@ -8,12 +8,12 @@ struct ipv_counts {
unsigned int v6;
};
-struct bpf_map_def SEC("maps") btf_map = {
- .type = BPF_MAP_TYPE_ARRAY,
- .key_size = sizeof(int),
- .value_size = sizeof(struct ipv_counts),
- .max_entries = 4,
-};
+struct {
+ __uint(type, BPF_MAP_TYPE_ARRAY);
+ __uint(key_size, sizeof(int));
+ __uint(value_size, sizeof(struct ipv_counts));
+ __uint(max_entries, 4);
+} btf_map SEC(".maps");
__attribute__((noinline))
int test_long_fname_2(void)
diff --git a/tools/testing/selftests/bpf/progs/test_custom_sec_handlers.c b/tools/testing/selftests/bpf/progs/test_custom_sec_handlers.c
new file mode 100644
index 000000000000..4061f701ca50
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/test_custom_sec_handlers.c
@@ -0,0 +1,63 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2022 Facebook */
+
+#include "vmlinux.h"
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+const volatile int my_pid;
+
+bool abc1_called;
+bool abc2_called;
+bool custom1_called;
+bool custom2_called;
+bool kprobe1_called;
+bool xyz_called;
+
+SEC("abc")
+int abc1(void *ctx)
+{
+ abc1_called = true;
+ return 0;
+}
+
+SEC("abc/whatever")
+int abc2(void *ctx)
+{
+ abc2_called = true;
+ return 0;
+}
+
+SEC("custom")
+int custom1(void *ctx)
+{
+ custom1_called = true;
+ return 0;
+}
+
+SEC("custom/something")
+int custom2(void *ctx)
+{
+ custom2_called = true;
+ return 0;
+}
+
+SEC("kprobe")
+int kprobe1(void *ctx)
+{
+ kprobe1_called = true;
+ return 0;
+}
+
+SEC("xyz/blah")
+int xyz(void *ctx)
+{
+ int whatever;
+
+ /* use sleepable helper, custom handler should set sleepable flag */
+ bpf_copy_from_user(&whatever, sizeof(whatever), NULL);
+ xyz_called = true;
+ return 0;
+}
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/progs/test_probe_user.c b/tools/testing/selftests/bpf/progs/test_probe_user.c
index 8812a90da4eb..702578a5e496 100644
--- a/tools/testing/selftests/bpf/progs/test_probe_user.c
+++ b/tools/testing/selftests/bpf/progs/test_probe_user.c
@@ -7,20 +7,7 @@
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>
-
-#if defined(__TARGET_ARCH_x86)
-#define SYSCALL_WRAPPER 1
-#define SYS_PREFIX "__x64_"
-#elif defined(__TARGET_ARCH_s390)
-#define SYSCALL_WRAPPER 1
-#define SYS_PREFIX "__s390x_"
-#elif defined(__TARGET_ARCH_arm64)
-#define SYSCALL_WRAPPER 1
-#define SYS_PREFIX "__arm64_"
-#else
-#define SYSCALL_WRAPPER 0
-#define SYS_PREFIX ""
-#endif
+#include "bpf_misc.h"
static struct sockaddr_in old;
diff --git a/tools/testing/selftests/bpf/progs/test_ringbuf.c b/tools/testing/selftests/bpf/progs/test_ringbuf.c
index eaa7d9dba0be..5bdc0d38efc0 100644
--- a/tools/testing/selftests/bpf/progs/test_ringbuf.c
+++ b/tools/testing/selftests/bpf/progs/test_ringbuf.c
@@ -3,6 +3,7 @@
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
char _license[] SEC("license") = "GPL";
@@ -35,7 +36,7 @@ long prod_pos = 0;
/* inner state */
long seq = 0;
-SEC("fentry/__x64_sys_getpgid")
+SEC("fentry/" SYS_PREFIX "sys_getpgid")
int test_ringbuf(void *ctx)
{
int cur_pid = bpf_get_current_pid_tgid() >> 32;
diff --git a/tools/testing/selftests/bpf/progs/test_send_signal_kern.c b/tools/testing/selftests/bpf/progs/test_send_signal_kern.c
index b4233d3efac2..92354cd72044 100644
--- a/tools/testing/selftests/bpf/progs/test_send_signal_kern.c
+++ b/tools/testing/selftests/bpf/progs/test_send_signal_kern.c
@@ -10,7 +10,7 @@ static __always_inline int bpf_send_signal_test(void *ctx)
{
int ret;
- if (status != 0 || sig == 0 || pid == 0)
+ if (status != 0 || pid == 0)
return 0;
if ((bpf_get_current_pid_tgid() >> 32) == pid) {
diff --git a/tools/testing/selftests/bpf/progs/test_sk_lookup.c b/tools/testing/selftests/bpf/progs/test_sk_lookup.c
index 83b0aaa52ef7..6058dcb11b36 100644
--- a/tools/testing/selftests/bpf/progs/test_sk_lookup.c
+++ b/tools/testing/selftests/bpf/progs/test_sk_lookup.c
@@ -392,6 +392,7 @@ int ctx_narrow_access(struct bpf_sk_lookup *ctx)
{
struct bpf_sock *sk;
int err, family;
+ __u32 val_u32;
bool v4;
v4 = (ctx->family == AF_INET);
@@ -412,12 +413,22 @@ int ctx_narrow_access(struct bpf_sk_lookup *ctx)
/* Narrow loads from remote_port field. Expect SRC_PORT. */
if (LSB(ctx->remote_port, 0) != ((SRC_PORT >> 0) & 0xff) ||
- LSB(ctx->remote_port, 1) != ((SRC_PORT >> 8) & 0xff) ||
- LSB(ctx->remote_port, 2) != 0 || LSB(ctx->remote_port, 3) != 0)
+ LSB(ctx->remote_port, 1) != ((SRC_PORT >> 8) & 0xff))
return SK_DROP;
if (LSW(ctx->remote_port, 0) != SRC_PORT)
return SK_DROP;
+ /*
+ * NOTE: 4-byte load from bpf_sk_lookup at remote_port offset
+ * is quirky. It gets rewritten by the access converter to a
+ * 2-byte load for backward compatibility. Treating the load
+ * result as a be16 value makes the code portable across
+ * little- and big-endian platforms.
+ */
+ val_u32 = *(__u32 *)&ctx->remote_port;
+ if (val_u32 != SRC_PORT)
+ return SK_DROP;
+
/* Narrow loads from local_port field. Expect DST_PORT. */
if (LSB(ctx->local_port, 0) != ((DST_PORT >> 0) & 0xff) ||
LSB(ctx->local_port, 1) != ((DST_PORT >> 8) & 0xff) ||
diff --git a/tools/testing/selftests/bpf/progs/test_skb_cgroup_id_kern.c b/tools/testing/selftests/bpf/progs/test_skb_cgroup_id_kern.c
index c304cd5b8cad..37aacc66cd68 100644
--- a/tools/testing/selftests/bpf/progs/test_skb_cgroup_id_kern.c
+++ b/tools/testing/selftests/bpf/progs/test_skb_cgroup_id_kern.c
@@ -10,12 +10,12 @@
#define NUM_CGROUP_LEVELS 4
-struct bpf_map_def SEC("maps") cgroup_ids = {
- .type = BPF_MAP_TYPE_ARRAY,
- .key_size = sizeof(__u32),
- .value_size = sizeof(__u64),
- .max_entries = NUM_CGROUP_LEVELS,
-};
+struct {
+ __uint(type, BPF_MAP_TYPE_ARRAY);
+ __type(key, __u32);
+ __type(value, __u64);
+ __uint(max_entries, NUM_CGROUP_LEVELS);
+} cgroup_ids SEC(".maps");
static __always_inline void log_nth_level(struct __sk_buff *skb, __u32 level)
{
diff --git a/tools/testing/selftests/bpf/progs/test_sock_fields.c b/tools/testing/selftests/bpf/progs/test_sock_fields.c
index 81b57b9aaaea..9f4b8f9f1181 100644
--- a/tools/testing/selftests/bpf/progs/test_sock_fields.c
+++ b/tools/testing/selftests/bpf/progs/test_sock_fields.c
@@ -12,6 +12,7 @@
enum bpf_linum_array_idx {
EGRESS_LINUM_IDX,
INGRESS_LINUM_IDX,
+ READ_SK_DST_PORT_LINUM_IDX,
__NR_BPF_LINUM_ARRAY_IDX,
};
@@ -113,7 +114,7 @@ static void tpcpy(struct bpf_tcp_sock *dst,
#define RET_LOG() ({ \
linum = __LINE__; \
- bpf_map_update_elem(&linum_map, &linum_idx, &linum, BPF_NOEXIST); \
+ bpf_map_update_elem(&linum_map, &linum_idx, &linum, BPF_ANY); \
return CG_OK; \
})
@@ -133,11 +134,11 @@ int egress_read_sock_fields(struct __sk_buff *skb)
if (!sk)
RET_LOG();
- /* Not the testing egress traffic or
- * TCP_LISTEN (10) socket will be copied at the ingress side.
+ /* Not testing the egress traffic or the listening socket,
+ * which are covered by the cgroup_skb/ingress test program.
*/
if (sk->family != AF_INET6 || !is_loopback6(sk->src_ip6) ||
- sk->state == 10)
+ sk->state == BPF_TCP_LISTEN)
return CG_OK;
if (sk->src_port == bpf_ntohs(srv_sa6.sin6_port)) {
@@ -231,8 +232,8 @@ int ingress_read_sock_fields(struct __sk_buff *skb)
sk->src_port != bpf_ntohs(srv_sa6.sin6_port))
return CG_OK;
- /* Only interested in TCP_LISTEN */
- if (sk->state != 10)
+ /* Only interested in the listening socket */
+ if (sk->state != BPF_TCP_LISTEN)
return CG_OK;
/* It must be a fullsock for cgroup_skb/ingress prog */
@@ -250,4 +251,54 @@ int ingress_read_sock_fields(struct __sk_buff *skb)
return CG_OK;
}
+/*
+ * NOTE: 4-byte load from bpf_sock at dst_port offset is quirky. It
+ * gets rewritten by the access converter to a 2-byte load for
+ * backward compatibility. Treating the load result as a be16 value
+ * makes the code portable across little- and big-endian platforms.
+ */
+static __noinline bool sk_dst_port__load_word(struct bpf_sock *sk)
+{
+ __u32 *word = (__u32 *)&sk->dst_port;
+ return word[0] == bpf_htons(0xcafe);
+}
+
+static __noinline bool sk_dst_port__load_half(struct bpf_sock *sk)
+{
+ __u16 *half = (__u16 *)&sk->dst_port;
+ return half[0] == bpf_htons(0xcafe);
+}
+
+static __noinline bool sk_dst_port__load_byte(struct bpf_sock *sk)
+{
+ __u8 *byte = (__u8 *)&sk->dst_port;
+ return byte[0] == 0xca && byte[1] == 0xfe;
+}
+
+SEC("cgroup_skb/egress")
+int read_sk_dst_port(struct __sk_buff *skb)
+{
+ __u32 linum, linum_idx;
+ struct bpf_sock *sk;
+
+ linum_idx = READ_SK_DST_PORT_LINUM_IDX;
+
+ sk = skb->sk;
+ if (!sk)
+ RET_LOG();
+
+ /* Ignore everything but the SYN from the client socket */
+ if (sk->state != BPF_TCP_SYN_SENT)
+ return CG_OK;
+
+ if (!sk_dst_port__load_word(sk))
+ RET_LOG();
+ if (!sk_dst_port__load_half(sk))
+ RET_LOG();
+ if (!sk_dst_port__load_byte(sk))
+ RET_LOG();
+
+ return CG_OK;
+}
+
char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/progs/test_sockmap_progs_query.c b/tools/testing/selftests/bpf/progs/test_sockmap_progs_query.c
new file mode 100644
index 000000000000..9d58d61c0dee
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/test_sockmap_progs_query.c
@@ -0,0 +1,24 @@
+// SPDX-License-Identifier: GPL-2.0
+#include "vmlinux.h"
+#include <bpf/bpf_helpers.h>
+
+struct {
+ __uint(type, BPF_MAP_TYPE_SOCKMAP);
+ __uint(max_entries, 1);
+ __type(key, __u32);
+ __type(value, __u64);
+} sock_map SEC(".maps");
+
+SEC("sk_skb")
+int prog_skb_verdict(struct __sk_buff *skb)
+{
+ return SK_PASS;
+}
+
+SEC("sk_msg")
+int prog_skmsg_verdict(struct sk_msg_md *msg)
+{
+ return SK_PASS;
+}
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/progs/test_subskeleton.c b/tools/testing/selftests/bpf/progs/test_subskeleton.c
new file mode 100644
index 000000000000..006417974372
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/test_subskeleton.c
@@ -0,0 +1,28 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) Meta Platforms, Inc. and affiliates. */
+
+#include <stdbool.h>
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+
+/* volatile to force a read, compiler may assume 0 otherwise */
+const volatile int rovar1;
+int out1;
+
+/* Override weak symbol in test_subskeleton_lib */
+int var5 = 5;
+
+extern volatile bool CONFIG_BPF_SYSCALL __kconfig;
+
+extern int lib_routine(void);
+
+SEC("raw_tp/sys_enter")
+int handler1(const void *ctx)
+{
+ (void) CONFIG_BPF_SYSCALL;
+
+ out1 = lib_routine() * rovar1;
+ return 0;
+}
+
+char LICENSE[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/progs/test_subskeleton_lib.c b/tools/testing/selftests/bpf/progs/test_subskeleton_lib.c
new file mode 100644
index 000000000000..ecfafe812c36
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/test_subskeleton_lib.c
@@ -0,0 +1,61 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) Meta Platforms, Inc. and affiliates. */
+
+#include <stdbool.h>
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+
+/* volatile to force a read */
+const volatile int var1;
+volatile int var2 = 1;
+struct {
+ int var3_1;
+ __s64 var3_2;
+} var3;
+int libout1;
+
+extern volatile bool CONFIG_BPF_SYSCALL __kconfig;
+
+int var4[4];
+
+__weak int var5 SEC(".data");
+
+/* extern fully contained within the library: defined in test_subskeleton_lib2.c */
+extern int var6;
+
+int var7 SEC(".data.custom");
+
+int (*fn_ptr)(void);
+
+struct {
+ __uint(type, BPF_MAP_TYPE_HASH);
+ __type(key, __u32);
+ __type(value, __u32);
+ __uint(max_entries, 16);
+} map1 SEC(".maps");
+
+extern struct {
+ __uint(type, BPF_MAP_TYPE_HASH);
+ __type(key, __u32);
+ __type(value, __u32);
+ __uint(max_entries, 16);
+} map2 SEC(".maps");
+
+int lib_routine(void)
+{
+ __u32 key = 1, value = 2;
+
+ (void) CONFIG_BPF_SYSCALL;
+ bpf_map_update_elem(&map2, &key, &value, BPF_ANY);
+
+ libout1 = var1 + var2 + var3.var3_1 + var3.var3_2 + var5 + var6;
+ return libout1;
+}
+
+SEC("perf_event")
+int lib_perf_handler(struct pt_regs *ctx)
+{
+ return 0;
+}
+
+char LICENSE[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/progs/test_subskeleton_lib2.c b/tools/testing/selftests/bpf/progs/test_subskeleton_lib2.c
new file mode 100644
index 000000000000..80238486b7ce
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/test_subskeleton_lib2.c
@@ -0,0 +1,16 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) Meta Platforms, Inc. and affiliates. */
+
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+
+int var6 = 6;
+
+struct {
+ __uint(type, BPF_MAP_TYPE_HASH);
+ __type(key, __u32);
+ __type(value, __u32);
+ __uint(max_entries, 16);
+} map2 SEC(".maps");
+
+char LICENSE[] SEC("license") = "GPL";
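test_subskeleton.c and the two lib objects are meant to be statically linked into one object and then inspected through a subskeleton (bpftool gen subskeleton), which resolves the library's variables and maps inside the final linked object. A hedged sketch of the consumer side (generated names assumed):

	struct test_subskeleton_lib *lib;

	lib = test_subskeleton_lib__open(obj);	/* obj: the linked bpf_object */
	if (!lib)
		return -1;
	/* lib now exposes test_subskeleton_lib's globals and maps
	 * (var1..var7, map1/map2) as pointers into obj
	 */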
diff --git a/tools/testing/selftests/bpf/progs/test_tc_dtime.c b/tools/testing/selftests/bpf/progs/test_tc_dtime.c
new file mode 100644
index 000000000000..06f300d06dbd
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/test_tc_dtime.c
@@ -0,0 +1,349 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (c) 2022 Meta
+
+#include <stddef.h>
+#include <stdint.h>
+#include <stdbool.h>
+#include <linux/bpf.h>
+#include <linux/stddef.h>
+#include <linux/pkt_cls.h>
+#include <linux/if_ether.h>
+#include <linux/in.h>
+#include <linux/ip.h>
+#include <linux/ipv6.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_endian.h>
+#include <sys/socket.h>
+
+/* veth_src --- veth_src_fwd --- veth_dst_fwd --- veth_dst
+ *           |                                  |
+ *  ns_src   |              ns_fwd              |   ns_dst
+ *
+ * ns_src and ns_dst: ENDHOST namespace
+ * ns_fwd: Forwarding namespace
+ */
+
+#define ctx_ptr(field) (void *)(long)(field)
+
+#define ip4_src __bpf_htonl(0xac100164) /* 172.16.1.100 */
+#define ip4_dst __bpf_htonl(0xac100264) /* 172.16.2.100 */
+
+#define ip6_src { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, \
+ 0x00, 0x01, 0xde, 0xad, 0xbe, 0xef, 0xca, 0xfe }
+#define ip6_dst { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, \
+ 0x00, 0x02, 0xde, 0xad, 0xbe, 0xef, 0xca, 0xfe }
+
+#define v6_equal(a, b) (a.s6_addr32[0] == b.s6_addr32[0] && \
+ a.s6_addr32[1] == b.s6_addr32[1] && \
+ a.s6_addr32[2] == b.s6_addr32[2] && \
+ a.s6_addr32[3] == b.s6_addr32[3])
+
+volatile const __u32 IFINDEX_SRC;
+volatile const __u32 IFINDEX_DST;
+
+#define EGRESS_ENDHOST_MAGIC 0x0b9fbeef
+#define INGRESS_FWDNS_MAGIC 0x1b9fbeef
+#define EGRESS_FWDNS_MAGIC 0x2b9fbeef
+
+enum {
+ INGRESS_FWDNS_P100,
+ INGRESS_FWDNS_P101,
+ EGRESS_FWDNS_P100,
+ EGRESS_FWDNS_P101,
+ INGRESS_ENDHOST,
+ EGRESS_ENDHOST,
+ SET_DTIME,
+ __MAX_CNT,
+};
+
+enum {
+ TCP_IP6_CLEAR_DTIME,
+ TCP_IP4,
+ TCP_IP6,
+ UDP_IP4,
+ UDP_IP6,
+ TCP_IP4_RT_FWD,
+ TCP_IP6_RT_FWD,
+ UDP_IP4_RT_FWD,
+ UDP_IP6_RT_FWD,
+ UKN_TEST,
+ __NR_TESTS,
+};
+
+enum {
+ SRC_NS = 1,
+ DST_NS,
+};
+
+__u32 dtimes[__NR_TESTS][__MAX_CNT] = {};
+__u32 errs[__NR_TESTS][__MAX_CNT] = {};
+__u32 test = 0;
+
+static void inc_dtimes(__u32 idx)
+{
+ if (test < __NR_TESTS)
+ dtimes[test][idx]++;
+ else
+ dtimes[UKN_TEST][idx]++;
+}
+
+static void inc_errs(__u32 idx)
+{
+ if (test < __NR_TESTS)
+ errs[test][idx]++;
+ else
+ errs[UKN_TEST][idx]++;
+}
+
+static int skb_proto(int type)
+{
+ return type & 0xff;
+}
+
+static int skb_ns(int type)
+{
+ return (type >> 8) & 0xff;
+}
+
+static bool fwdns_clear_dtime(void)
+{
+ return test == TCP_IP6_CLEAR_DTIME;
+}
+
+static bool bpf_fwd(void)
+{
+ return test < TCP_IP4_RT_FWD;
+}
+
+/* -1: parse error: TC_ACT_SHOT
+ * 0: not testing traffic: TC_ACT_OK
+ * >0: first byte is the inet_proto, second byte has the netns
+ * of the sender
+ */
+static int skb_get_type(struct __sk_buff *skb)
+{
+ void *data_end = ctx_ptr(skb->data_end);
+ void *data = ctx_ptr(skb->data);
+ __u8 inet_proto = 0, ns = 0;
+ struct ipv6hdr *ip6h;
+ struct iphdr *iph;
+
+ switch (skb->protocol) {
+ case __bpf_htons(ETH_P_IP):
+ iph = data + sizeof(struct ethhdr);
+ if (iph + 1 > data_end)
+ return -1;
+ if (iph->saddr == ip4_src)
+ ns = SRC_NS;
+ else if (iph->saddr == ip4_dst)
+ ns = DST_NS;
+ inet_proto = iph->protocol;
+ break;
+ case __bpf_htons(ETH_P_IPV6):
+ ip6h = data + sizeof(struct ethhdr);
+ if (ip6h + 1 > data_end)
+ return -1;
+ if (v6_equal(ip6h->saddr, (struct in6_addr)ip6_src))
+ ns = SRC_NS;
+ else if (v6_equal(ip6h->saddr, (struct in6_addr)ip6_dst))
+ ns = DST_NS;
+ inet_proto = ip6h->nexthdr;
+ break;
+ default:
+ return 0;
+ }
+
+ if ((inet_proto != IPPROTO_TCP && inet_proto != IPPROTO_UDP) || !ns)
+ return 0;
+
+ return (ns << 8 | inet_proto);
+}
+
+/* format: direction@iface@netns
+ * egress@veth_(src|dst)@ns_(src|dst)
+ */
+SEC("tc")
+int egress_host(struct __sk_buff *skb)
+{
+ int skb_type;
+
+ skb_type = skb_get_type(skb);
+ if (skb_type == -1)
+ return TC_ACT_SHOT;
+ if (!skb_type)
+ return TC_ACT_OK;
+
+ if (skb_proto(skb_type) == IPPROTO_TCP) {
+ if (skb->tstamp_type == BPF_SKB_TSTAMP_DELIVERY_MONO &&
+ skb->tstamp)
+ inc_dtimes(EGRESS_ENDHOST);
+ else
+ inc_errs(EGRESS_ENDHOST);
+ } else {
+ if (skb->tstamp_type == BPF_SKB_TSTAMP_UNSPEC &&
+ skb->tstamp)
+ inc_dtimes(EGRESS_ENDHOST);
+ else
+ inc_errs(EGRESS_ENDHOST);
+ }
+
+ skb->tstamp = EGRESS_ENDHOST_MAGIC;
+
+ return TC_ACT_OK;
+}
+
+/* ingress@veth_(src|dst)@ns_(src|dst) */
+SEC("tc")
+int ingress_host(struct __sk_buff *skb)
+{
+ int skb_type;
+
+ skb_type = skb_get_type(skb);
+ if (skb_type == -1)
+ return TC_ACT_SHOT;
+ if (!skb_type)
+ return TC_ACT_OK;
+
+ if (skb->tstamp_type == BPF_SKB_TSTAMP_DELIVERY_MONO &&
+ skb->tstamp == EGRESS_FWDNS_MAGIC)
+ inc_dtimes(INGRESS_ENDHOST);
+ else
+ inc_errs(INGRESS_ENDHOST);
+
+ return TC_ACT_OK;
+}
+
+/* ingress@veth_(src|dst)_fwd@ns_fwd priority 100 */
+SEC("tc")
+int ingress_fwdns_prio100(struct __sk_buff *skb)
+{
+ int skb_type;
+
+ skb_type = skb_get_type(skb);
+ if (skb_type == -1)
+ return TC_ACT_SHOT;
+ if (!skb_type)
+ return TC_ACT_OK;
+
+ /* The delivery_time is only visible to the ingress if
+ * the tc-bpf program checks the skb->tstamp_type.
+ */
+ if (skb->tstamp == EGRESS_ENDHOST_MAGIC)
+ inc_errs(INGRESS_FWDNS_P100);
+
+ if (fwdns_clear_dtime())
+ skb->tstamp = 0;
+
+ return TC_ACT_UNSPEC;
+}
+
+/* egress@veth_(src|dst)_fwd@ns_fwd priority 100 */
+SEC("tc")
+int egress_fwdns_prio100(struct __sk_buff *skb)
+{
+ int skb_type;
+
+ skb_type = skb_get_type(skb);
+ if (skb_type == -1)
+ return TC_ACT_SHOT;
+ if (!skb_type)
+ return TC_ACT_OK;
+
+ /* The delivery_time is always available at egress, even
+ * if the tc-bpf program did not check the tstamp_type.
+ */
+ if (skb->tstamp == INGRESS_FWDNS_MAGIC)
+ inc_dtimes(EGRESS_FWDNS_P100);
+ else
+ inc_errs(EGRESS_FWDNS_P100);
+
+ if (fwdns_clear_dtime())
+ skb->tstamp = 0;
+
+ return TC_ACT_UNSPEC;
+}
+
+/* ingress@veth_(src|dst)_fwd@ns_fwd priority 101 */
+SEC("tc")
+int ingress_fwdns_prio101(struct __sk_buff *skb)
+{
+ __u64 expected_dtime = EGRESS_ENDHOST_MAGIC;
+ int skb_type;
+
+ skb_type = skb_get_type(skb);
+ if (skb_type == -1 || !skb_type)
+ /* Should have been handled in prio100 */
+ return TC_ACT_SHOT;
+
+ if (skb_proto(skb_type) == IPPROTO_UDP)
+ expected_dtime = 0;
+
+ if (skb->tstamp_type) {
+ if (fwdns_clear_dtime() ||
+ skb->tstamp_type != BPF_SKB_TSTAMP_DELIVERY_MONO ||
+ skb->tstamp != expected_dtime)
+ inc_errs(INGRESS_FWDNS_P101);
+ else
+ inc_dtimes(INGRESS_FWDNS_P101);
+ } else {
+ if (!fwdns_clear_dtime() && expected_dtime)
+ inc_errs(INGRESS_FWDNS_P101);
+ }
+
+ if (skb->tstamp_type == BPF_SKB_TSTAMP_DELIVERY_MONO) {
+ skb->tstamp = INGRESS_FWDNS_MAGIC;
+ } else {
+ if (bpf_skb_set_tstamp(skb, INGRESS_FWDNS_MAGIC,
+ BPF_SKB_TSTAMP_DELIVERY_MONO))
+ inc_errs(SET_DTIME);
+ if (!bpf_skb_set_tstamp(skb, INGRESS_FWDNS_MAGIC,
+ BPF_SKB_TSTAMP_UNSPEC))
+ inc_errs(SET_DTIME);
+ }
+
+ if (skb_ns(skb_type) == SRC_NS)
+ return bpf_fwd() ?
+ bpf_redirect_neigh(IFINDEX_DST, NULL, 0, 0) : TC_ACT_OK;
+ else
+ return bpf_fwd() ?
+ bpf_redirect_neigh(IFINDEX_SRC, NULL, 0, 0) : TC_ACT_OK;
+}
+
+/* egress@veth_(src|dst)_fwd@ns_fwd priority 101 */
+SEC("tc")
+int egress_fwdns_prio101(struct __sk_buff *skb)
+{
+ int skb_type;
+
+ skb_type = skb_get_type(skb);
+ if (skb_type == -1 || !skb_type)
+ /* Should have been handled in prio100 */
+ return TC_ACT_SHOT;
+
+ if (skb->tstamp_type) {
+ if (fwdns_clear_dtime() ||
+ skb->tstamp_type != BPF_SKB_TSTAMP_DELIVERY_MONO ||
+ skb->tstamp != INGRESS_FWDNS_MAGIC)
+ inc_errs(EGRESS_FWDNS_P101);
+ else
+ inc_dtimes(EGRESS_FWDNS_P101);
+ } else {
+ if (!fwdns_clear_dtime())
+ inc_errs(EGRESS_FWDNS_P101);
+ }
+
+ if (skb->tstamp_type == BPF_SKB_TSTAMP_DELIVERY_MONO) {
+ skb->tstamp = EGRESS_FWDNS_MAGIC;
+ } else {
+ if (bpf_skb_set_tstamp(skb, EGRESS_FWDNS_MAGIC,
+ BPF_SKB_TSTAMP_DELIVERY_MONO))
+ inc_errs(SET_DTIME);
+ if (!bpf_skb_set_tstamp(skb, INGRESS_FWDNS_MAGIC,
+ BPF_SKB_TSTAMP_UNSPEC))
+ inc_errs(SET_DTIME);
+ }
+
+ return TC_ACT_OK;
+}
+
+char __license[] SEC("license") = "GPL";
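For context, the counters above are meant to be read from user space through a libbpf-generated skeleton. A minimal sketch of that loader side follows, assuming the naming libbpf derives from the file name (test_tc_dtime.skel.h, test_tc_dtime__open(), and so on); the actual prog_tests harness for this selftest may be organized differently:

    #include <stdio.h>
    #include "test_tc_dtime.skel.h"  /* generated by bpftool gen skeleton */

    static int run_dtime_test(int ifindex_src, int ifindex_dst, int test_id)
    {
    	struct test_tc_dtime *skel;
    	int err;

    	skel = test_tc_dtime__open();
    	if (!skel)
    		return -1;

    	/* const volatile globals live in .rodata and must be
    	 * set before load
    	 */
    	skel->rodata->IFINDEX_SRC = ifindex_src;
    	skel->rodata->IFINDEX_DST = ifindex_dst;

    	err = test_tc_dtime__load(skel);
    	if (err)
    		goto out;

    	/* select the scenario, e.g. TCP_IP6 */
    	skel->bss->test = test_id;

    	/* ... attach the tc programs and generate traffic here ... */

    	/* EGRESS_ENDHOST == 5 in the enum above */
    	printf("egress endhost dtime hits: %u\n",
    	       skel->bss->dtimes[test_id][5]);
    out:
    	test_tc_dtime__destroy(skel);
    	return err;
    }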
diff --git a/tools/testing/selftests/bpf/progs/test_tc_edt.c b/tools/testing/selftests/bpf/progs/test_tc_edt.c
index bf28814bfde5..950a70b61e74 100644
--- a/tools/testing/selftests/bpf/progs/test_tc_edt.c
+++ b/tools/testing/selftests/bpf/progs/test_tc_edt.c
@@ -17,12 +17,12 @@
#define THROTTLE_RATE_BPS (5 * 1000 * 1000)
/* flow_key => last_tstamp timestamp used */
-struct bpf_map_def SEC("maps") flow_map = {
- .type = BPF_MAP_TYPE_HASH,
- .key_size = sizeof(uint32_t),
- .value_size = sizeof(uint64_t),
- .max_entries = 1,
-};
+struct {
+ __uint(type, BPF_MAP_TYPE_HASH);
+ __type(key, uint32_t);
+ __type(value, uint64_t);
+ __uint(max_entries, 1);
+} flow_map SEC(".maps");
static inline int throttle_flow(struct __sk_buff *skb)
{
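The hunk above is one instance of a conversion repeated throughout this series: legacy struct bpf_map_def SEC("maps") definitions become BTF-defined maps, whose __type() annotations record the actual key/value types in BTF rather than only their sizes. A minimal sketch of the resulting pattern (example_map and the helper are illustrative names, not part of the patch):

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    struct {
    	__uint(type, BPF_MAP_TYPE_HASH);
    	__type(key, __u32);	/* the type, not just sizeof, lands in BTF */
    	__type(value, __u64);
    	__uint(max_entries, 1);
    } example_map SEC(".maps");

    static __u64 *lookup_or_init(__u32 *key)
    {
    	__u64 zero = 0, *val;

    	val = bpf_map_lookup_elem(&example_map, key);
    	if (val)
    		return val;
    	/* first sight of this key: seed it, then re-lookup */
    	bpf_map_update_elem(&example_map, key, &zero, BPF_NOEXIST);
    	return bpf_map_lookup_elem(&example_map, key);
    }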
diff --git a/tools/testing/selftests/bpf/progs/test_tcp_check_syncookie_kern.c b/tools/testing/selftests/bpf/progs/test_tcp_check_syncookie_kern.c
index cd747cd93dbe..6edebce563b5 100644
--- a/tools/testing/selftests/bpf/progs/test_tcp_check_syncookie_kern.c
+++ b/tools/testing/selftests/bpf/progs/test_tcp_check_syncookie_kern.c
@@ -16,12 +16,12 @@
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>
-struct bpf_map_def SEC("maps") results = {
- .type = BPF_MAP_TYPE_ARRAY,
- .key_size = sizeof(__u32),
- .value_size = sizeof(__u32),
- .max_entries = 3,
-};
+struct {
+ __uint(type, BPF_MAP_TYPE_ARRAY);
+ __type(key, __u32);
+ __type(value, __u32);
+ __uint(max_entries, 3);
+} results SEC(".maps");
static __always_inline __s64 gen_syncookie(void *data_end, struct bpf_sock *sk,
void *iph, __u32 ip_size,
diff --git a/tools/testing/selftests/bpf/progs/test_xdp_adjust_tail_grow.c b/tools/testing/selftests/bpf/progs/test_xdp_adjust_tail_grow.c
index 199c61b7d062..53b64c999450 100644
--- a/tools/testing/selftests/bpf/progs/test_xdp_adjust_tail_grow.c
+++ b/tools/testing/selftests/bpf/progs/test_xdp_adjust_tail_grow.c
@@ -7,11 +7,10 @@ int _xdp_adjust_tail_grow(struct xdp_md *xdp)
{
void *data_end = (void *)(long)xdp->data_end;
void *data = (void *)(long)xdp->data;
- unsigned int data_len;
+ int data_len = bpf_xdp_get_buff_len(xdp);
int offset = 0;
/* Data length determines the test case */
- data_len = data_end - data;
if (data_len == 54) { /* sizeof(pkt_v4) */
offset = 4096; /* test too large offset */
@@ -20,7 +19,12 @@ int _xdp_adjust_tail_grow(struct xdp_md *xdp)
} else if (data_len == 64) {
offset = 128;
} else if (data_len == 128) {
- offset = 4096 - 256 - 320 - data_len; /* Max tail grow 3520 */
+ /* Max tail grow 3520 */
+ offset = 4096 - 256 - 320 - data_len;
+ } else if (data_len == 9000) {
+ offset = 10;
+ } else if (data_len == 9001) {
+ offset = 4096;
} else {
return XDP_ABORTED; /* No matching test */
}
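The switch from data_end - data to bpf_xdp_get_buff_len() matters because, on a multi-buffer xdp_buff, data_end - data only covers the linear head while the helper returns the total length including fragments — which is what makes the new 9000- and 9001-byte cases addressable at all. A small sketch of the distinction (same headers as the selftest progs above):

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    SEC("xdp.frags")
    int buff_len_demo(struct xdp_md *xdp)
    {
    	void *data_end = (void *)(long)xdp->data_end;
    	void *data = (void *)(long)xdp->data;
    	int linear_len = data_end - data;		/* head only */
    	int total_len = bpf_xdp_get_buff_len(xdp);	/* head + frags */

    	/* a jumbo frame shows up here with total_len > linear_len */
    	if (total_len > linear_len)
    		return XDP_PASS;	/* non-linear packet */

    	return XDP_PASS;
    }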
diff --git a/tools/testing/selftests/bpf/progs/test_xdp_adjust_tail_shrink.c b/tools/testing/selftests/bpf/progs/test_xdp_adjust_tail_shrink.c
index b7448253d135..ca68c038357c 100644
--- a/tools/testing/selftests/bpf/progs/test_xdp_adjust_tail_shrink.c
+++ b/tools/testing/selftests/bpf/progs/test_xdp_adjust_tail_shrink.c
@@ -12,14 +12,38 @@
SEC("xdp")
int _xdp_adjust_tail_shrink(struct xdp_md *xdp)
{
- void *data_end = (void *)(long)xdp->data_end;
- void *data = (void *)(long)xdp->data;
+ __u8 *data_end = (void *)(long)xdp->data_end;
+ __u8 *data = (void *)(long)xdp->data;
int offset = 0;
- if (data_end - data == 54) /* sizeof(pkt_v4) */
+ switch (bpf_xdp_get_buff_len(xdp)) {
+ case 54:
+ /* sizeof(pkt_v4) */
offset = 256; /* shrink too much */
- else
+ break;
+ case 9000:
+ /* non-linear buff test cases */
+ if (data + 1 > data_end)
+ return XDP_DROP;
+
+ switch (data[0]) {
+ case 0:
+ offset = 10;
+ break;
+ case 1:
+ offset = 4100;
+ break;
+ case 2:
+ offset = 8200;
+ break;
+ default:
+ return XDP_DROP;
+ }
+ break;
+ default:
offset = 20;
+ break;
+ }
if (bpf_xdp_adjust_tail(xdp, 0 - offset))
return XDP_DROP;
return XDP_TX;
diff --git a/tools/testing/selftests/bpf/progs/test_xdp_bpf2bpf.c b/tools/testing/selftests/bpf/progs/test_xdp_bpf2bpf.c
index 58cf4345f5cc..3379d303f41a 100644
--- a/tools/testing/selftests/bpf/progs/test_xdp_bpf2bpf.c
+++ b/tools/testing/selftests/bpf/progs/test_xdp_bpf2bpf.c
@@ -49,7 +49,7 @@ int BPF_PROG(trace_on_entry, struct xdp_buff *xdp)
void *data = (void *)(long)xdp->data;
meta.ifindex = xdp->rxq->dev->ifindex;
- meta.pkt_len = data_end - data;
+ meta.pkt_len = bpf_xdp_get_buff_len((struct xdp_md *)xdp);
bpf_xdp_output(xdp, &perf_buf_map,
((__u64) meta.pkt_len << 32) |
BPF_F_CURRENT_CPU,
diff --git a/tools/testing/selftests/bpf/progs/test_xdp_do_redirect.c b/tools/testing/selftests/bpf/progs/test_xdp_do_redirect.c
new file mode 100644
index 000000000000..77a123071940
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/test_xdp_do_redirect.c
@@ -0,0 +1,100 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <vmlinux.h>
+#include <bpf/bpf_helpers.h>
+
+#define ETH_ALEN 6
+#define HDR_SZ (sizeof(struct ethhdr) + sizeof(struct ipv6hdr) + sizeof(struct udphdr))
+const volatile int ifindex_out;
+const volatile int ifindex_in;
+const volatile __u8 expect_dst[ETH_ALEN];
+volatile int pkts_seen_xdp = 0;
+volatile int pkts_seen_zero = 0;
+volatile int pkts_seen_tc = 0;
+volatile int retcode = XDP_REDIRECT;
+
+SEC("xdp")
+int xdp_redirect(struct xdp_md *xdp)
+{
+ __u32 *metadata = (void *)(long)xdp->data_meta;
+ void *data_end = (void *)(long)xdp->data_end;
+ void *data = (void *)(long)xdp->data;
+
+ __u8 *payload = data + HDR_SZ;
+ int ret = retcode;
+
+ if (payload + 1 > data_end)
+ return XDP_ABORTED;
+
+ if (xdp->ingress_ifindex != ifindex_in)
+ return XDP_ABORTED;
+
+ if (metadata + 1 > data)
+ return XDP_ABORTED;
+
+ if (*metadata != 0x42)
+ return XDP_ABORTED;
+
+ if (*payload == 0) {
+ *payload = 0x42;
+ pkts_seen_zero++;
+ }
+
+ if (bpf_xdp_adjust_meta(xdp, 4))
+ return XDP_ABORTED;
+
+ if (retcode > XDP_PASS)
+ retcode--;
+
+ if (ret == XDP_REDIRECT)
+ return bpf_redirect(ifindex_out, 0);
+
+ return ret;
+}
+
+static bool check_pkt(void *data, void *data_end)
+{
+ struct ipv6hdr *iph = data + sizeof(struct ethhdr);
+ __u8 *payload = data + HDR_SZ;
+
+ if (payload + 1 > data_end)
+ return false;
+
+ if (iph->nexthdr != IPPROTO_UDP || *payload != 0x42)
+ return false;
+
+ /* reset the payload so the same packet doesn't get counted twice when
+ * it cycles back through the kernel path and out the dst veth
+ */
+ *payload = 0;
+ return true;
+}
+
+SEC("xdp")
+int xdp_count_pkts(struct xdp_md *xdp)
+{
+ void *data = (void *)(long)xdp->data;
+ void *data_end = (void *)(long)xdp->data_end;
+
+ if (check_pkt(data, data_end))
+ pkts_seen_xdp++;
+
+ /* Return XDP_DROP to make sure the data page is recycled, like when it
+ * exits a physical NIC. Recycled pages will be counted in the
+ * pkts_seen_zero counter above.
+ */
+ return XDP_DROP;
+}
+
+SEC("tc")
+int tc_count_pkts(struct __sk_buff *skb)
+{
+ void *data = (void *)(long)skb->data;
+ void *data_end = (void *)(long)skb->data_end;
+
+ if (check_pkt(data, data_end))
+ pkts_seen_tc++;
+
+ return 0;
+}
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/progs/test_xdp_update_frags.c b/tools/testing/selftests/bpf/progs/test_xdp_update_frags.c
new file mode 100644
index 000000000000..2a3496d8e327
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/test_xdp_update_frags.c
@@ -0,0 +1,42 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of version 2 of the GNU General Public
+ * License as published by the Free Software Foundation.
+ */
+#include <linux/bpf.h>
+#include <linux/if_ether.h>
+#include <bpf/bpf_helpers.h>
+
+int _version SEC("version") = 1;
+
+SEC("xdp.frags")
+int xdp_adjust_frags(struct xdp_md *xdp)
+{
+ __u8 *data_end = (void *)(long)xdp->data_end;
+ __u8 *data = (void *)(long)xdp->data;
+ __u8 val[16] = {};
+ __u32 offset;
+ int err;
+
+ if (data + sizeof(__u32) > data_end)
+ return XDP_DROP;
+
+ offset = *(__u32 *)data;
+ err = bpf_xdp_load_bytes(xdp, offset, val, sizeof(val));
+ if (err < 0)
+ return XDP_DROP;
+
+ if (val[0] != 0xaa || val[15] != 0xaa) /* marker */
+ return XDP_DROP;
+
+ val[0] = 0xbb; /* update the marker */
+ val[15] = 0xbb;
+ err = bpf_xdp_store_bytes(xdp, offset, val, sizeof(val));
+ if (err < 0)
+ return XDP_DROP;
+
+ return XDP_PASS;
+}
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/progs/test_xdp_with_cpumap_frags_helpers.c b/tools/testing/selftests/bpf/progs/test_xdp_with_cpumap_frags_helpers.c
new file mode 100644
index 000000000000..97ed625bb70a
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/test_xdp_with_cpumap_frags_helpers.c
@@ -0,0 +1,27 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+
+#define IFINDEX_LO 1
+
+struct {
+ __uint(type, BPF_MAP_TYPE_CPUMAP);
+ __uint(key_size, sizeof(__u32));
+ __uint(value_size, sizeof(struct bpf_cpumap_val));
+ __uint(max_entries, 4);
+} cpu_map SEC(".maps");
+
+SEC("xdp/cpumap")
+int xdp_dummy_cm(struct xdp_md *ctx)
+{
+ return XDP_PASS;
+}
+
+SEC("xdp.frags/cpumap")
+int xdp_dummy_cm_frags(struct xdp_md *ctx)
+{
+ return XDP_PASS;
+}
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/progs/test_xdp_with_cpumap_helpers.c b/tools/testing/selftests/bpf/progs/test_xdp_with_cpumap_helpers.c
index 532025057711..20ec6723df18 100644
--- a/tools/testing/selftests/bpf/progs/test_xdp_with_cpumap_helpers.c
+++ b/tools/testing/selftests/bpf/progs/test_xdp_with_cpumap_helpers.c
@@ -24,7 +24,7 @@ int xdp_dummy_prog(struct xdp_md *ctx)
return XDP_PASS;
}
-SEC("xdp_cpumap/dummy_cm")
+SEC("xdp/cpumap")
int xdp_dummy_cm(struct xdp_md *ctx)
{
if (ctx->ingress_ifindex == IFINDEX_LO)
@@ -33,4 +33,10 @@ int xdp_dummy_cm(struct xdp_md *ctx)
return XDP_PASS;
}
+SEC("xdp.frags/cpumap")
+int xdp_dummy_cm_frags(struct xdp_md *ctx)
+{
+ return XDP_PASS;
+}
+
char _license[] SEC("license") = "GPL";
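Beyond renaming the section to SEC("xdp/cpumap"), the series adds .frags variants so multi-buffer-aware programs can be attached to map entries. For reference, a cpumap entry carries its program via struct bpf_cpumap_val; a hedged user-space sketch follows (map_fd, prog_fd, and the qsize value are placeholders):

    #include <linux/bpf.h>
    #include <bpf/bpf.h>

    static int install_cpumap_prog(int map_fd, int prog_fd, __u32 cpu)
    {
    	struct bpf_cpumap_val val = {
    		.qsize = 192,		/* per-CPU queue size */
    		.bpf_prog.fd = prog_fd,	/* runs on the remote CPU */
    	};

    	return bpf_map_update_elem(map_fd, &cpu, &val, 0);
    }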
diff --git a/tools/testing/selftests/bpf/progs/test_xdp_with_devmap_frags_helpers.c b/tools/testing/selftests/bpf/progs/test_xdp_with_devmap_frags_helpers.c
new file mode 100644
index 000000000000..cdcf7de7ec8c
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/test_xdp_with_devmap_frags_helpers.c
@@ -0,0 +1,27 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+
+struct {
+ __uint(type, BPF_MAP_TYPE_DEVMAP);
+ __uint(key_size, sizeof(__u32));
+ __uint(value_size, sizeof(struct bpf_devmap_val));
+ __uint(max_entries, 4);
+} dm_ports SEC(".maps");
+
+/* valid program on DEVMAP entry via SEC name;
+ * has access to egress and ingress ifindex
+ */
+SEC("xdp/devmap")
+int xdp_dummy_dm(struct xdp_md *ctx)
+{
+ return XDP_PASS;
+}
+
+SEC("xdp.frags/devmap")
+int xdp_dummy_dm_frags(struct xdp_md *ctx)
+{
+ return XDP_PASS;
+}
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/progs/test_xdp_with_devmap_helpers.c b/tools/testing/selftests/bpf/progs/test_xdp_with_devmap_helpers.c
index 1e6b9c38ea6d..4139a14f9996 100644
--- a/tools/testing/selftests/bpf/progs/test_xdp_with_devmap_helpers.c
+++ b/tools/testing/selftests/bpf/progs/test_xdp_with_devmap_helpers.c
@@ -27,7 +27,7 @@ int xdp_dummy_prog(struct xdp_md *ctx)
/* valid program on DEVMAP entry via SEC name;
* has access to egress and ingress ifindex
*/
-SEC("xdp_devmap/map_prog")
+SEC("xdp/devmap")
int xdp_dummy_dm(struct xdp_md *ctx)
{
char fmt[] = "devmap redirect: dev %u -> dev %u len %u\n";
@@ -40,4 +40,11 @@ int xdp_dummy_dm(struct xdp_md *ctx)
return XDP_PASS;
}
+
+SEC("xdp.frags/devmap")
+int xdp_dummy_dm_frags(struct xdp_md *ctx)
+{
+ return XDP_PASS;
+}
+
char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/progs/trace_printk.c b/tools/testing/selftests/bpf/progs/trace_printk.c
index 119582aa105a..6695478c2b25 100644
--- a/tools/testing/selftests/bpf/progs/trace_printk.c
+++ b/tools/testing/selftests/bpf/progs/trace_printk.c
@@ -4,6 +4,7 @@
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>
+#include "bpf_misc.h"
char _license[] SEC("license") = "GPL";
@@ -12,7 +13,7 @@ int trace_printk_ran = 0;
const char fmt[] = "Testing,testing %d\n";
-SEC("fentry/__x64_sys_nanosleep")
+SEC("fentry/" SYS_PREFIX "sys_nanosleep")
int sys_enter(void *ctx)
{
trace_printk_ret = bpf_trace_printk(fmt, sizeof(fmt),
diff --git a/tools/testing/selftests/bpf/progs/trace_vprintk.c b/tools/testing/selftests/bpf/progs/trace_vprintk.c
index d327241ba047..969306cd4f33 100644
--- a/tools/testing/selftests/bpf/progs/trace_vprintk.c
+++ b/tools/testing/selftests/bpf/progs/trace_vprintk.c
@@ -4,6 +4,7 @@
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>
+#include "bpf_misc.h"
char _license[] SEC("license") = "GPL";
@@ -11,7 +12,7 @@ int null_data_vprintk_ret = 0;
int trace_vprintk_ret = 0;
int trace_vprintk_ran = 0;
-SEC("fentry/__x64_sys_nanosleep")
+SEC("fentry/" SYS_PREFIX "sys_nanosleep")
int sys_enter(void *ctx)
{
static const char one[] = "1";
diff --git a/tools/testing/selftests/bpf/progs/trigger_bench.c b/tools/testing/selftests/bpf/progs/trigger_bench.c
index 2098f3f27f18..2ab049b54d6c 100644
--- a/tools/testing/selftests/bpf/progs/trigger_bench.c
+++ b/tools/testing/selftests/bpf/progs/trigger_bench.c
@@ -5,6 +5,7 @@
#include <asm/unistd.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>
+#include "bpf_misc.h"
char _license[] SEC("license") = "GPL";
@@ -25,28 +26,28 @@ int BPF_PROG(bench_trigger_raw_tp, struct pt_regs *regs, long id)
return 0;
}
-SEC("kprobe/__x64_sys_getpgid")
+SEC("kprobe/" SYS_PREFIX "sys_getpgid")
int bench_trigger_kprobe(void *ctx)
{
__sync_add_and_fetch(&hits, 1);
return 0;
}
-SEC("fentry/__x64_sys_getpgid")
+SEC("fentry/" SYS_PREFIX "sys_getpgid")
int bench_trigger_fentry(void *ctx)
{
__sync_add_and_fetch(&hits, 1);
return 0;
}
-SEC("fentry.s/__x64_sys_getpgid")
+SEC("fentry.s/" SYS_PREFIX "sys_getpgid")
int bench_trigger_fentry_sleep(void *ctx)
{
__sync_add_and_fetch(&hits, 1);
return 0;
}
-SEC("fmod_ret/__x64_sys_getpgid")
+SEC("fmod_ret/" SYS_PREFIX "sys_getpgid")
int bench_trigger_fmodret(void *ctx)
{
__sync_add_and_fetch(&hits, 1);
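SYS_PREFIX comes from the newly included bpf_misc.h and expands to the architecture-specific syscall wrapper prefix, so these sections no longer hard-code the x86-64 __x64_ form. Roughly (paraphrased — see the header for the authoritative per-arch list):

    #if defined(__TARGET_ARCH_x86)
    #define SYS_PREFIX "__x64_"
    #elif defined(__TARGET_ARCH_s390)
    #define SYS_PREFIX "__s390x_"
    #else
    #define SYS_PREFIX ""
    #endif

    /* SEC("kprobe/" SYS_PREFIX "sys_getpgid") therefore becomes
     * "kprobe/__x64_sys_getpgid" on x86-64 and plain
     * "kprobe/sys_getpgid" on architectures without syscall wrappers.
     */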
diff --git a/tools/testing/selftests/bpf/progs/xdp_redirect_multi_kern.c b/tools/testing/selftests/bpf/progs/xdp_redirect_multi_kern.c
index 8395782b6e0a..97b26a30b59a 100644
--- a/tools/testing/selftests/bpf/progs/xdp_redirect_multi_kern.c
+++ b/tools/testing/selftests/bpf/progs/xdp_redirect_multi_kern.c
@@ -70,7 +70,7 @@ int xdp_redirect_map_all_prog(struct xdp_md *ctx)
BPF_F_BROADCAST | BPF_F_EXCLUDE_INGRESS);
}
-SEC("xdp_devmap/map_prog")
+SEC("xdp/devmap")
int xdp_devmap_prog(struct xdp_md *ctx)
{
void *data_end = (void *)(long)ctx->data_end;
diff --git a/tools/testing/selftests/bpf/test_cgroup_storage.c b/tools/testing/selftests/bpf/test_cgroup_storage.c
index 5b8314cd77fd..d6a1be4d8020 100644
--- a/tools/testing/selftests/bpf/test_cgroup_storage.c
+++ b/tools/testing/selftests/bpf/test_cgroup_storage.c
@@ -36,7 +36,7 @@ int main(int argc, char **argv)
BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
BPF_EXIT_INSN(),
};
- size_t insns_cnt = sizeof(prog) / sizeof(struct bpf_insn);
+ size_t insns_cnt = ARRAY_SIZE(prog);
int error = EXIT_FAILURE;
int map_fd, percpu_map_fd, prog_fd, cgroup_fd;
struct bpf_cgroup_storage_key key;
diff --git a/tools/testing/selftests/bpf/test_cpp.cpp b/tools/testing/selftests/bpf/test_cpp.cpp
index e00201de2890..19ad172036da 100644
--- a/tools/testing/selftests/bpf/test_cpp.cpp
+++ b/tools/testing/selftests/bpf/test_cpp.cpp
@@ -1,22 +1,107 @@
/* SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause) */
#include <iostream>
+#pragma GCC diagnostic push
+#pragma GCC diagnostic ignored "-Wdeprecated-declarations"
#include <bpf/libbpf.h>
+#pragma GCC diagnostic pop
#include <bpf/bpf.h>
#include <bpf/btf.h>
#include "test_core_extern.skel.h"
-/* do nothing, just make sure we can link successfully */
+template <typename T>
+class Skeleton {
+private:
+ T *skel;
+public:
+ Skeleton(): skel(nullptr) { }
+
+ ~Skeleton() { if (skel) T::destroy(skel); }
+
+ int open(const struct bpf_object_open_opts *opts = nullptr)
+ {
+ int err;
+
+ if (skel)
+ return -EBUSY;
+
+ skel = T::open(opts);
+ err = libbpf_get_error(skel);
+ if (err) {
+ skel = nullptr;
+ return err;
+ }
+
+ return 0;
+ }
+
+ int load() { return T::load(skel); }
+
+ int attach() { return T::attach(skel); }
+
+ void detach() { return T::detach(skel); }
+
+ const T* operator->() const { return skel; }
+
+ T* operator->() { return skel; }
+
+ const T *get() const { return skel; }
+};
static void dump_printf(void *ctx, const char *fmt, va_list args)
{
}
+static void try_skeleton_template()
+{
+ Skeleton<test_core_extern> skel;
+ std::string prog_name;
+ int err;
+ LIBBPF_OPTS(bpf_object_open_opts, opts);
+
+ err = skel.open(&opts);
+ if (err) {
+ fprintf(stderr, "Skeleton open failed: %d\n", err);
+ return;
+ }
+
+ skel->data->kern_ver = 123;
+ skel->data->int_val = skel->data->ushort_val;
+
+ err = skel.load();
+ if (err) {
+ fprintf(stderr, "Skeleton load failed: %d\n", err);
+ return;
+ }
+
+ if (!skel->kconfig->CONFIG_BPF_SYSCALL)
+ fprintf(stderr, "Seems like CONFIG_BPF_SYSCALL isn't set?!\n");
+
+ err = skel.attach();
+ if (err) {
+ fprintf(stderr, "Skeleton attach failed: %d\n", err);
+ return;
+ }
+
+ prog_name = bpf_program__name(skel->progs.handle_sys_enter);
+ if (prog_name != "handle_sys_enter")
+ fprintf(stderr, "Unexpected program name: %s\n", prog_name.c_str());
+
+ bpf_link__destroy(skel->links.handle_sys_enter);
+ skel->links.handle_sys_enter = bpf_program__attach(skel->progs.handle_sys_enter);
+
+ skel.detach();
+
+ /* destructor will destroy the underlying skeleton */
+}
+
int main(int argc, char *argv[])
{
struct btf_dump_opts opts = { };
struct test_core_extern *skel;
struct btf *btf;
+ try_skeleton_template();
+
/* libbpf.h */
libbpf_set_print(NULL);
@@ -25,7 +110,8 @@ int main(int argc, char *argv[])
/* btf.h */
btf = btf__new(NULL, 0);
- btf_dump__new(btf, dump_printf, nullptr, &opts);
+ if (!libbpf_get_error(btf))
+ btf_dump__new(btf, dump_printf, nullptr, &opts);
/* BPF skeleton */
skel = test_core_extern__open_and_load();
diff --git a/tools/testing/selftests/bpf/test_lirc_mode2.sh b/tools/testing/selftests/bpf/test_lirc_mode2.sh
index ec4e15948e40..5252b91f48a1 100755
--- a/tools/testing/selftests/bpf/test_lirc_mode2.sh
+++ b/tools/testing/selftests/bpf/test_lirc_mode2.sh
@@ -3,6 +3,7 @@
# Kselftest framework requirement - SKIP code is 4.
ksft_skip=4
+ret=$ksft_skip
msg="skip all tests:"
if [ $UID != 0 ]; then
@@ -25,7 +26,7 @@ do
fi
done
-if [ -n $LIRCDEV ];
+if [ -n "$LIRCDEV" ];
then
TYPE=lirc_mode2
./test_lirc_mode2_user $LIRCDEV $INPUTDEV
@@ -36,3 +37,5 @@ then
echo -e ${GREEN}"PASS: $TYPE"${NC}
fi
fi
+
+exit $ret
diff --git a/tools/testing/selftests/bpf/test_lru_map.c b/tools/testing/selftests/bpf/test_lru_map.c
index b9f1bbbc8aba..563bbe18c172 100644
--- a/tools/testing/selftests/bpf/test_lru_map.c
+++ b/tools/testing/selftests/bpf/test_lru_map.c
@@ -61,7 +61,11 @@ static int bpf_map_lookup_elem_with_ref_bit(int fd, unsigned long long key,
};
__u8 data[64] = {};
int mfd, pfd, ret, zero = 0;
- __u32 retval = 0;
+ LIBBPF_OPTS(bpf_test_run_opts, topts,
+ .data_in = data,
+ .data_size_in = sizeof(data),
+ .repeat = 1,
+ );
mfd = bpf_map_create(BPF_MAP_TYPE_ARRAY, NULL, sizeof(int), sizeof(__u64), 1, NULL);
if (mfd < 0)
@@ -75,9 +79,8 @@ static int bpf_map_lookup_elem_with_ref_bit(int fd, unsigned long long key,
return -1;
}
- ret = bpf_prog_test_run(pfd, 1, data, sizeof(data),
- NULL, NULL, &retval, NULL);
- if (ret < 0 || retval != 42) {
+ ret = bpf_prog_test_run_opts(pfd, &topts);
+ if (ret < 0 || topts.retval != 42) {
ret = -1;
} else {
assert(!bpf_map_lookup_elem(mfd, &zero, value));
@@ -875,11 +878,11 @@ int main(int argc, char **argv)
assert(nr_cpus != -1);
printf("nr_cpus:%d\n\n", nr_cpus);
- for (f = 0; f < sizeof(map_flags) / sizeof(*map_flags); f++) {
+ for (f = 0; f < ARRAY_SIZE(map_flags); f++) {
unsigned int tgt_free = (map_flags[f] & BPF_F_NO_COMMON_LRU) ?
PERCPU_FREE_TARGET : LOCAL_FREE_TARGET;
- for (t = 0; t < sizeof(map_types) / sizeof(*map_types); t++) {
+ for (t = 0; t < ARRAY_SIZE(map_types); t++) {
test_lru_sanity0(map_types[t], map_flags[f]);
test_lru_sanity1(map_types[t], map_flags[f], tgt_free);
test_lru_sanity2(map_types[t], map_flags[f], tgt_free);
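This is the same API migration applied across the tree in this series: the many-argument bpf_prog_test_run() is replaced by bpf_prog_test_run_opts(), which bundles inputs and outputs (including retval) in a single bpf_test_run_opts struct. The general shape, with placeholder names for data, prog_fd, and expected:

    LIBBPF_OPTS(bpf_test_run_opts, topts,
    	.data_in = data,
    	.data_size_in = sizeof(data),
    	.repeat = 1,
    );
    int err;

    err = bpf_prog_test_run_opts(prog_fd, &topts);
    /* the program's return code now comes back via the opts struct */
    if (err < 0 || topts.retval != expected)
    	return -1;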
diff --git a/tools/testing/selftests/bpf/test_lwt_ip_encap.sh b/tools/testing/selftests/bpf/test_lwt_ip_encap.sh
index b497bb85b667..6c69c42b1d60 100755
--- a/tools/testing/selftests/bpf/test_lwt_ip_encap.sh
+++ b/tools/testing/selftests/bpf/test_lwt_ip_encap.sh
@@ -120,6 +120,14 @@ setup()
ip netns exec ${NS2} sysctl -wq net.ipv4.conf.default.rp_filter=0
ip netns exec ${NS3} sysctl -wq net.ipv4.conf.default.rp_filter=0
+ # disable IPv6 DAD because it sometimes takes too long and fails tests
+ ip netns exec ${NS1} sysctl -wq net.ipv6.conf.all.accept_dad=0
+ ip netns exec ${NS2} sysctl -wq net.ipv6.conf.all.accept_dad=0
+ ip netns exec ${NS3} sysctl -wq net.ipv6.conf.all.accept_dad=0
+ ip netns exec ${NS1} sysctl -wq net.ipv6.conf.default.accept_dad=0
+ ip netns exec ${NS2} sysctl -wq net.ipv6.conf.default.accept_dad=0
+ ip netns exec ${NS3} sysctl -wq net.ipv6.conf.default.accept_dad=0
+
ip link add veth1 type veth peer name veth2
ip link add veth3 type veth peer name veth4
ip link add veth5 type veth peer name veth6
@@ -289,7 +297,7 @@ test_ping()
ip netns exec ${NS1} ping -c 1 -W 1 -I veth1 ${IPv4_DST} 2>&1 > /dev/null
RET=$?
elif [ "${PROTO}" == "IPv6" ] ; then
- ip netns exec ${NS1} ping6 -c 1 -W 6 -I veth1 ${IPv6_DST} 2>&1 > /dev/null
+ ip netns exec ${NS1} ping6 -c 1 -W 1 -I veth1 ${IPv6_DST} 2>&1 > /dev/null
RET=$?
else
echo " test_ping: unknown PROTO: ${PROTO}"
diff --git a/tools/testing/selftests/bpf/test_lwt_seg6local.sh b/tools/testing/selftests/bpf/test_lwt_seg6local.sh
index 5620919fde9e..826f4423ce02 100755
--- a/tools/testing/selftests/bpf/test_lwt_seg6local.sh
+++ b/tools/testing/selftests/bpf/test_lwt_seg6local.sh
@@ -23,6 +23,12 @@
# Kselftest framework requirement - SKIP code is 4.
ksft_skip=4
+readonly NS1="ns1-$(mktemp -u XXXXXX)"
+readonly NS2="ns2-$(mktemp -u XXXXXX)"
+readonly NS3="ns3-$(mktemp -u XXXXXX)"
+readonly NS4="ns4-$(mktemp -u XXXXXX)"
+readonly NS5="ns5-$(mktemp -u XXXXXX)"
+readonly NS6="ns6-$(mktemp -u XXXXXX)"
msg="skip all tests:"
if [ $UID != 0 ]; then
@@ -41,23 +47,23 @@ cleanup()
fi
set +e
- ip netns del ns1 2> /dev/null
- ip netns del ns2 2> /dev/null
- ip netns del ns3 2> /dev/null
- ip netns del ns4 2> /dev/null
- ip netns del ns5 2> /dev/null
- ip netns del ns6 2> /dev/null
+ ip netns del ${NS1} 2> /dev/null
+ ip netns del ${NS2} 2> /dev/null
+ ip netns del ${NS3} 2> /dev/null
+ ip netns del ${NS4} 2> /dev/null
+ ip netns del ${NS5} 2> /dev/null
+ ip netns del ${NS6} 2> /dev/null
rm -f $TMP_FILE
}
set -e
-ip netns add ns1
-ip netns add ns2
-ip netns add ns3
-ip netns add ns4
-ip netns add ns5
-ip netns add ns6
+ip netns add ${NS1}
+ip netns add ${NS2}
+ip netns add ${NS3}
+ip netns add ${NS4}
+ip netns add ${NS5}
+ip netns add ${NS6}
trap cleanup 0 2 3 6 9
@@ -67,78 +73,78 @@ ip link add veth5 type veth peer name veth6
ip link add veth7 type veth peer name veth8
ip link add veth9 type veth peer name veth10
-ip link set veth1 netns ns1
-ip link set veth2 netns ns2
-ip link set veth3 netns ns2
-ip link set veth4 netns ns3
-ip link set veth5 netns ns3
-ip link set veth6 netns ns4
-ip link set veth7 netns ns4
-ip link set veth8 netns ns5
-ip link set veth9 netns ns5
-ip link set veth10 netns ns6
-
-ip netns exec ns1 ip link set dev veth1 up
-ip netns exec ns2 ip link set dev veth2 up
-ip netns exec ns2 ip link set dev veth3 up
-ip netns exec ns3 ip link set dev veth4 up
-ip netns exec ns3 ip link set dev veth5 up
-ip netns exec ns4 ip link set dev veth6 up
-ip netns exec ns4 ip link set dev veth7 up
-ip netns exec ns5 ip link set dev veth8 up
-ip netns exec ns5 ip link set dev veth9 up
-ip netns exec ns6 ip link set dev veth10 up
-ip netns exec ns6 ip link set dev lo up
+ip link set veth1 netns ${NS1}
+ip link set veth2 netns ${NS2}
+ip link set veth3 netns ${NS2}
+ip link set veth4 netns ${NS3}
+ip link set veth5 netns ${NS3}
+ip link set veth6 netns ${NS4}
+ip link set veth7 netns ${NS4}
+ip link set veth8 netns ${NS5}
+ip link set veth9 netns ${NS5}
+ip link set veth10 netns ${NS6}
+
+ip netns exec ${NS1} ip link set dev veth1 up
+ip netns exec ${NS2} ip link set dev veth2 up
+ip netns exec ${NS2} ip link set dev veth3 up
+ip netns exec ${NS3} ip link set dev veth4 up
+ip netns exec ${NS3} ip link set dev veth5 up
+ip netns exec ${NS4} ip link set dev veth6 up
+ip netns exec ${NS4} ip link set dev veth7 up
+ip netns exec ${NS5} ip link set dev veth8 up
+ip netns exec ${NS5} ip link set dev veth9 up
+ip netns exec ${NS6} ip link set dev veth10 up
+ip netns exec ${NS6} ip link set dev lo up
# All link scope addresses and routes required between veths
-ip netns exec ns1 ip -6 addr add fb00::12/16 dev veth1 scope link
-ip netns exec ns1 ip -6 route add fb00::21 dev veth1 scope link
-ip netns exec ns2 ip -6 addr add fb00::21/16 dev veth2 scope link
-ip netns exec ns2 ip -6 addr add fb00::34/16 dev veth3 scope link
-ip netns exec ns2 ip -6 route add fb00::43 dev veth3 scope link
-ip netns exec ns3 ip -6 route add fb00::65 dev veth5 scope link
-ip netns exec ns3 ip -6 addr add fb00::43/16 dev veth4 scope link
-ip netns exec ns3 ip -6 addr add fb00::56/16 dev veth5 scope link
-ip netns exec ns4 ip -6 addr add fb00::65/16 dev veth6 scope link
-ip netns exec ns4 ip -6 addr add fb00::78/16 dev veth7 scope link
-ip netns exec ns4 ip -6 route add fb00::87 dev veth7 scope link
-ip netns exec ns5 ip -6 addr add fb00::87/16 dev veth8 scope link
-ip netns exec ns5 ip -6 addr add fb00::910/16 dev veth9 scope link
-ip netns exec ns5 ip -6 route add fb00::109 dev veth9 scope link
-ip netns exec ns5 ip -6 route add fb00::109 table 117 dev veth9 scope link
-ip netns exec ns6 ip -6 addr add fb00::109/16 dev veth10 scope link
-
-ip netns exec ns1 ip -6 addr add fb00::1/16 dev lo
-ip netns exec ns1 ip -6 route add fb00::6 dev veth1 via fb00::21
-
-ip netns exec ns2 ip -6 route add fb00::6 encap bpf in obj test_lwt_seg6local.o sec encap_srh dev veth2
-ip netns exec ns2 ip -6 route add fd00::1 dev veth3 via fb00::43 scope link
-
-ip netns exec ns3 ip -6 route add fc42::1 dev veth5 via fb00::65
-ip netns exec ns3 ip -6 route add fd00::1 encap seg6local action End.BPF endpoint obj test_lwt_seg6local.o sec add_egr_x dev veth4
-
-ip netns exec ns4 ip -6 route add fd00::2 encap seg6local action End.BPF endpoint obj test_lwt_seg6local.o sec pop_egr dev veth6
-ip netns exec ns4 ip -6 addr add fc42::1 dev lo
-ip netns exec ns4 ip -6 route add fd00::3 dev veth7 via fb00::87
-
-ip netns exec ns5 ip -6 route add fd00::4 table 117 dev veth9 via fb00::109
-ip netns exec ns5 ip -6 route add fd00::3 encap seg6local action End.BPF endpoint obj test_lwt_seg6local.o sec inspect_t dev veth8
-
-ip netns exec ns6 ip -6 addr add fb00::6/16 dev lo
-ip netns exec ns6 ip -6 addr add fd00::4/16 dev lo
-
-ip netns exec ns1 sysctl net.ipv6.conf.all.forwarding=1 > /dev/null
-ip netns exec ns2 sysctl net.ipv6.conf.all.forwarding=1 > /dev/null
-ip netns exec ns3 sysctl net.ipv6.conf.all.forwarding=1 > /dev/null
-ip netns exec ns4 sysctl net.ipv6.conf.all.forwarding=1 > /dev/null
-ip netns exec ns5 sysctl net.ipv6.conf.all.forwarding=1 > /dev/null
-
-ip netns exec ns6 sysctl net.ipv6.conf.all.seg6_enabled=1 > /dev/null
-ip netns exec ns6 sysctl net.ipv6.conf.lo.seg6_enabled=1 > /dev/null
-ip netns exec ns6 sysctl net.ipv6.conf.veth10.seg6_enabled=1 > /dev/null
-
-ip netns exec ns6 nc -l -6 -u -d 7330 > $TMP_FILE &
-ip netns exec ns1 bash -c "echo 'foobar' | nc -w0 -6 -u -p 2121 -s fb00::1 fb00::6 7330"
+ip netns exec ${NS1} ip -6 addr add fb00::12/16 dev veth1 scope link
+ip netns exec ${NS1} ip -6 route add fb00::21 dev veth1 scope link
+ip netns exec ${NS2} ip -6 addr add fb00::21/16 dev veth2 scope link
+ip netns exec ${NS2} ip -6 addr add fb00::34/16 dev veth3 scope link
+ip netns exec ${NS2} ip -6 route add fb00::43 dev veth3 scope link
+ip netns exec ${NS3} ip -6 route add fb00::65 dev veth5 scope link
+ip netns exec ${NS3} ip -6 addr add fb00::43/16 dev veth4 scope link
+ip netns exec ${NS3} ip -6 addr add fb00::56/16 dev veth5 scope link
+ip netns exec ${NS4} ip -6 addr add fb00::65/16 dev veth6 scope link
+ip netns exec ${NS4} ip -6 addr add fb00::78/16 dev veth7 scope link
+ip netns exec ${NS4} ip -6 route add fb00::87 dev veth7 scope link
+ip netns exec ${NS5} ip -6 addr add fb00::87/16 dev veth8 scope link
+ip netns exec ${NS5} ip -6 addr add fb00::910/16 dev veth9 scope link
+ip netns exec ${NS5} ip -6 route add fb00::109 dev veth9 scope link
+ip netns exec ${NS5} ip -6 route add fb00::109 table 117 dev veth9 scope link
+ip netns exec ${NS6} ip -6 addr add fb00::109/16 dev veth10 scope link
+
+ip netns exec ${NS1} ip -6 addr add fb00::1/16 dev lo
+ip netns exec ${NS1} ip -6 route add fb00::6 dev veth1 via fb00::21
+
+ip netns exec ${NS2} ip -6 route add fb00::6 encap bpf in obj test_lwt_seg6local.o sec encap_srh dev veth2
+ip netns exec ${NS2} ip -6 route add fd00::1 dev veth3 via fb00::43 scope link
+
+ip netns exec ${NS3} ip -6 route add fc42::1 dev veth5 via fb00::65
+ip netns exec ${NS3} ip -6 route add fd00::1 encap seg6local action End.BPF endpoint obj test_lwt_seg6local.o sec add_egr_x dev veth4
+
+ip netns exec ${NS4} ip -6 route add fd00::2 encap seg6local action End.BPF endpoint obj test_lwt_seg6local.o sec pop_egr dev veth6
+ip netns exec ${NS4} ip -6 addr add fc42::1 dev lo
+ip netns exec ${NS4} ip -6 route add fd00::3 dev veth7 via fb00::87
+
+ip netns exec ${NS5} ip -6 route add fd00::4 table 117 dev veth9 via fb00::109
+ip netns exec ${NS5} ip -6 route add fd00::3 encap seg6local action End.BPF endpoint obj test_lwt_seg6local.o sec inspect_t dev veth8
+
+ip netns exec ${NS6} ip -6 addr add fb00::6/16 dev lo
+ip netns exec ${NS6} ip -6 addr add fd00::4/16 dev lo
+
+ip netns exec ${NS1} sysctl net.ipv6.conf.all.forwarding=1 > /dev/null
+ip netns exec ${NS2} sysctl net.ipv6.conf.all.forwarding=1 > /dev/null
+ip netns exec ${NS3} sysctl net.ipv6.conf.all.forwarding=1 > /dev/null
+ip netns exec ${NS4} sysctl net.ipv6.conf.all.forwarding=1 > /dev/null
+ip netns exec ${NS5} sysctl net.ipv6.conf.all.forwarding=1 > /dev/null
+
+ip netns exec ${NS6} sysctl net.ipv6.conf.all.seg6_enabled=1 > /dev/null
+ip netns exec ${NS6} sysctl net.ipv6.conf.lo.seg6_enabled=1 > /dev/null
+ip netns exec ${NS6} sysctl net.ipv6.conf.veth10.seg6_enabled=1 > /dev/null
+
+ip netns exec ${NS6} nc -l -6 -u -d 7330 > $TMP_FILE &
+ip netns exec ${NS1} bash -c "echo 'foobar' | nc -w0 -6 -u -p 2121 -s fb00::1 fb00::6 7330"
sleep 5 # wait long enough to ensure the UDP datagram arrived at the last segment
kill -TERM $!
diff --git a/tools/testing/selftests/bpf/test_maps.c b/tools/testing/selftests/bpf/test_maps.c
index 50f7e74ca0b9..cbebfaa7c1e8 100644
--- a/tools/testing/selftests/bpf/test_maps.c
+++ b/tools/testing/selftests/bpf/test_maps.c
@@ -738,7 +738,7 @@ static void test_sockmap(unsigned int tasks, void *data)
sizeof(key), sizeof(value),
6, NULL);
if (fd < 0) {
- if (!bpf_probe_map_type(BPF_MAP_TYPE_SOCKMAP, 0)) {
+ if (!libbpf_probe_bpf_map_type(BPF_MAP_TYPE_SOCKMAP, NULL)) {
printf("%s SKIP (unsupported map type BPF_MAP_TYPE_SOCKMAP)\n",
__func__);
skips++;
diff --git a/tools/testing/selftests/bpf/test_sock_addr.c b/tools/testing/selftests/bpf/test_sock_addr.c
index f0c8d05ba6d1..f3d5d7ac6505 100644
--- a/tools/testing/selftests/bpf/test_sock_addr.c
+++ b/tools/testing/selftests/bpf/test_sock_addr.c
@@ -723,7 +723,7 @@ static int xmsg_ret_only_prog_load(const struct sock_addr_test *test,
BPF_MOV64_IMM(BPF_REG_0, rc),
BPF_EXIT_INSN(),
};
- return load_insns(test, insns, sizeof(insns) / sizeof(struct bpf_insn));
+ return load_insns(test, insns, ARRAY_SIZE(insns));
}
static int sendmsg_allow_prog_load(const struct sock_addr_test *test)
@@ -795,7 +795,7 @@ static int sendmsg4_rw_asm_prog_load(const struct sock_addr_test *test)
BPF_EXIT_INSN(),
};
- return load_insns(test, insns, sizeof(insns) / sizeof(struct bpf_insn));
+ return load_insns(test, insns, ARRAY_SIZE(insns));
}
static int recvmsg4_rw_c_prog_load(const struct sock_addr_test *test)
@@ -858,7 +858,7 @@ static int sendmsg6_rw_dst_asm_prog_load(const struct sock_addr_test *test,
BPF_EXIT_INSN(),
};
- return load_insns(test, insns, sizeof(insns) / sizeof(struct bpf_insn));
+ return load_insns(test, insns, ARRAY_SIZE(insns));
}
static int sendmsg6_rw_asm_prog_load(const struct sock_addr_test *test)
diff --git a/tools/testing/selftests/bpf/test_sockmap.c b/tools/testing/selftests/bpf/test_sockmap.c
index 1ba7e7346afb..dfb4f5c0fcb9 100644
--- a/tools/testing/selftests/bpf/test_sockmap.c
+++ b/tools/testing/selftests/bpf/test_sockmap.c
@@ -1786,7 +1786,7 @@ static int populate_progs(char *bpf_file)
i++;
}
- for (i = 0; i < sizeof(map_fd)/sizeof(int); i++) {
+ for (i = 0; i < ARRAY_SIZE(map_fd); i++) {
maps[i] = bpf_object__find_map_by_name(obj, map_names[i]);
map_fd[i] = bpf_map__fd(maps[i]);
if (map_fd[i] < 0) {
@@ -1867,7 +1867,7 @@ static int __test_selftests(int cg_fd, struct sockmap_options *opt)
}
/* Tests basic commands and APIs */
- for (i = 0; i < sizeof(test)/sizeof(struct _test); i++) {
+ for (i = 0; i < ARRAY_SIZE(test); i++) {
struct _test t = test[i];
if (check_whitelist(&t, opt) != 0)
diff --git a/tools/testing/selftests/bpf/test_tcp_check_syncookie.sh b/tools/testing/selftests/bpf/test_tcp_check_syncookie.sh
index 6413c1472554..102e6588e2fe 100755
--- a/tools/testing/selftests/bpf/test_tcp_check_syncookie.sh
+++ b/tools/testing/selftests/bpf/test_tcp_check_syncookie.sh
@@ -4,6 +4,7 @@
# Copyright (c) 2019 Cloudflare
set -eu
+readonly NS1="ns1-$(mktemp -u XXXXXX)"
wait_for_ip()
{
@@ -28,12 +29,12 @@ get_prog_id()
ns1_exec()
{
- ip netns exec ns1 "$@"
+ ip netns exec ${NS1} "$@"
}
setup()
{
- ip netns add ns1
+ ip netns add ${NS1}
ns1_exec ip link set lo up
ns1_exec sysctl -w net.ipv4.tcp_syncookies=2
diff --git a/tools/testing/selftests/bpf/test_tunnel.sh b/tools/testing/selftests/bpf/test_tunnel.sh
index ca1372924023..2817d9948d59 100755
--- a/tools/testing/selftests/bpf/test_tunnel.sh
+++ b/tools/testing/selftests/bpf/test_tunnel.sh
@@ -39,7 +39,7 @@
# from root namespace, the following operations happen:
# 1) Route lookup shows 10.1.1.100/24 belongs to tnl dev, fwd to tnl dev.
# 2) Tnl device's egress BPF program is triggered and set the tunnel metadata,
-# with remote_ip=172.16.1.200 and others.
+# with remote_ip=172.16.1.100 and others.
# 3) Outer tunnel header is prepended and route the packet to veth1's egress
# 4) veth0's ingress queue receive the tunneled packet at namespace at_ns0
# 5) Tunnel protocol handler, ex: vxlan_rcv, decap the packet
diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c
index 76cd903117af..a2cd236c32eb 100644
--- a/tools/testing/selftests/bpf/test_verifier.c
+++ b/tools/testing/selftests/bpf/test_verifier.c
@@ -22,8 +22,6 @@
#include <limits.h>
#include <assert.h>
-#include <sys/capability.h>
-
#include <linux/unistd.h>
#include <linux/filter.h>
#include <linux/bpf_perf_event.h>
@@ -31,6 +29,7 @@
#include <linux/if_ether.h>
#include <linux/btf.h>
+#include <bpf/btf.h>
#include <bpf/bpf.h>
#include <bpf/libbpf.h>
@@ -41,6 +40,7 @@
# define CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS 1
# endif
#endif
+#include "cap_helpers.h"
#include "bpf_rand.h"
#include "bpf_util.h"
#include "test_btf.h"
@@ -61,11 +61,20 @@
#define F_NEEDS_EFFICIENT_UNALIGNED_ACCESS (1 << 0)
#define F_LOAD_WITH_STRICT_ALIGNMENT (1 << 1)
+/* need CAP_BPF, CAP_NET_ADMIN, CAP_PERFMON to load progs */
+#define ADMIN_CAPS (1ULL << CAP_NET_ADMIN | \
+ 1ULL << CAP_PERFMON | \
+ 1ULL << CAP_BPF)
#define UNPRIV_SYSCTL "kernel/unprivileged_bpf_disabled"
static bool unpriv_disabled = false;
static int skips;
static bool verbose = false;
+struct kfunc_btf_id_pair {
+ const char *kfunc;
+ int insn_idx;
+};
+
struct bpf_test {
const char *descr;
struct bpf_insn insns[MAX_INSNS];
@@ -92,6 +101,7 @@ struct bpf_test {
int fixup_map_reuseport_array[MAX_FIXUPS];
int fixup_map_ringbuf[MAX_FIXUPS];
int fixup_map_timer[MAX_FIXUPS];
+ struct kfunc_btf_id_pair fixup_kfunc_btf_id[MAX_FIXUPS];
/* Expected verifier log output for result REJECT or VERBOSE_ACCEPT.
* Can be a tab-separated sequence of expected strings. An empty string
* means no log verification.
@@ -449,7 +459,7 @@ static int probe_filter_length(const struct bpf_insn *fp)
static bool skip_unsupported_map(enum bpf_map_type map_type)
{
- if (!bpf_probe_map_type(map_type, 0)) {
+ if (!libbpf_probe_bpf_map_type(map_type, NULL)) {
printf("SKIP (unsupported map type %d)\n", map_type);
skips++;
return true;
@@ -744,6 +754,7 @@ static void do_test_fixup(struct bpf_test *test, enum bpf_prog_type prog_type,
int *fixup_map_reuseport_array = test->fixup_map_reuseport_array;
int *fixup_map_ringbuf = test->fixup_map_ringbuf;
int *fixup_map_timer = test->fixup_map_timer;
+ struct kfunc_btf_id_pair *fixup_kfunc_btf_id = test->fixup_kfunc_btf_id;
if (test->fill_helper) {
test->fill_insns = calloc(MAX_TEST_INSNS, sizeof(struct bpf_insn));
@@ -936,6 +947,26 @@ static void do_test_fixup(struct bpf_test *test, enum bpf_prog_type prog_type,
fixup_map_timer++;
} while (*fixup_map_timer);
}
+
+ /* Patch in kfunc BTF IDs */
+ if (fixup_kfunc_btf_id->kfunc) {
+ struct btf *btf;
+ int btf_id;
+
+ do {
+ btf_id = 0;
+ btf = btf__load_vmlinux_btf();
+ if (btf) {
+ btf_id = btf__find_by_name_kind(btf,
+ fixup_kfunc_btf_id->kfunc,
+ BTF_KIND_FUNC);
+ btf_id = btf_id < 0 ? 0 : btf_id;
+ }
+ btf__free(btf);
+ prog[fixup_kfunc_btf_id->insn_idx].imm = btf_id;
+ fixup_kfunc_btf_id++;
+ } while (fixup_kfunc_btf_id->kfunc);
+ }
}
struct libcap {
@@ -945,47 +976,19 @@ struct libcap {
static int set_admin(bool admin)
{
- cap_t caps;
- /* need CAP_BPF, CAP_NET_ADMIN, CAP_PERFMON to load progs */
- const cap_value_t cap_net_admin = CAP_NET_ADMIN;
- const cap_value_t cap_sys_admin = CAP_SYS_ADMIN;
- struct libcap *cap;
- int ret = -1;
-
- caps = cap_get_proc();
- if (!caps) {
- perror("cap_get_proc");
- return -1;
- }
- cap = (struct libcap *)caps;
- if (cap_set_flag(caps, CAP_EFFECTIVE, 1, &cap_sys_admin, CAP_CLEAR)) {
- perror("cap_set_flag clear admin");
- goto out;
- }
- if (cap_set_flag(caps, CAP_EFFECTIVE, 1, &cap_net_admin,
- admin ? CAP_SET : CAP_CLEAR)) {
- perror("cap_set_flag set_or_clear net");
- goto out;
- }
- /* libcap is likely old and simply ignores CAP_BPF and CAP_PERFMON,
- * so update effective bits manually
- */
+ int err;
+
if (admin) {
- cap->data[1].effective |= 1 << (38 /* CAP_PERFMON */ - 32);
- cap->data[1].effective |= 1 << (39 /* CAP_BPF */ - 32);
+ err = cap_enable_effective(ADMIN_CAPS, NULL);
+ if (err)
+ perror("cap_enable_effective(ADMIN_CAPS)");
} else {
- cap->data[1].effective &= ~(1 << (38 - 32));
- cap->data[1].effective &= ~(1 << (39 - 32));
+ err = cap_disable_effective(ADMIN_CAPS, NULL);
+ if (err)
+ perror("cap_disable_effective(ADMIN_CAPS)");
}
- if (cap_set_proc(caps)) {
- perror("cap_set_proc");
- goto out;
- }
- ret = 0;
-out:
- if (cap_free(caps))
- perror("cap_free");
- return ret;
+
+ return err;
}
static int do_prog_test_run(int fd_prog, bool unpriv, uint32_t expected_val,
@@ -993,13 +996,18 @@ static int do_prog_test_run(int fd_prog, bool unpriv, uint32_t expected_val,
{
__u8 tmp[TEST_DATA_LEN << 2];
__u32 size_tmp = sizeof(tmp);
- uint32_t retval;
int err, saved_errno;
+ LIBBPF_OPTS(bpf_test_run_opts, topts,
+ .data_in = data,
+ .data_size_in = size_data,
+ .data_out = tmp,
+ .data_size_out = size_tmp,
+ .repeat = 1,
+ );
if (unpriv)
set_admin(true);
- err = bpf_prog_test_run(fd_prog, 1, data, size_data,
- tmp, &size_tmp, &retval, NULL);
+ err = bpf_prog_test_run_opts(fd_prog, &topts);
saved_errno = errno;
if (unpriv)
@@ -1023,9 +1031,8 @@ static int do_prog_test_run(int fd_prog, bool unpriv, uint32_t expected_val,
}
}
- if (retval != expected_val &&
- expected_val != POINTER_VALUE) {
- printf("FAIL retval %d != %d ", retval, expected_val);
+ if (topts.retval != expected_val && expected_val != POINTER_VALUE) {
+ printf("FAIL retval %d != %d ", topts.retval, expected_val);
return 1;
}
@@ -1148,7 +1155,7 @@ static void do_test_single(struct bpf_test *test, bool unpriv,
* bpf_probe_prog_type won't give correct answer
*/
if (fd_prog < 0 && prog_type != BPF_PROG_TYPE_TRACING &&
- !bpf_probe_prog_type(prog_type, 0)) {
+ !libbpf_probe_bpf_prog_type(prog_type, NULL)) {
printf("SKIP (unsupported program type %d)\n", prog_type);
skips++;
goto close_fds;
@@ -1259,31 +1266,18 @@ fail_log:
static bool is_admin(void)
{
- cap_flag_value_t net_priv = CAP_CLEAR;
- bool perfmon_priv = false;
- bool bpf_priv = false;
- struct libcap *cap;
- cap_t caps;
-
-#ifdef CAP_IS_SUPPORTED
- if (!CAP_IS_SUPPORTED(CAP_SETFCAP)) {
- perror("cap_get_flag");
- return false;
- }
-#endif
- caps = cap_get_proc();
- if (!caps) {
- perror("cap_get_proc");
+ __u64 caps;
+
+ /* The test checks for the finer-grained CAP_NET_ADMIN,
+ * CAP_PERFMON, and CAP_BPF capabilities instead of
+ * CAP_SYS_ADMIN. Thus, disable CAP_SYS_ADMIN at the
+ * beginning.
+ */
+ if (cap_disable_effective(1ULL << CAP_SYS_ADMIN, &caps)) {
+ perror("cap_disable_effective(CAP_SYS_ADMIN)");
return false;
}
- cap = (struct libcap *)caps;
- bpf_priv = cap->data[1].effective & (1 << (39/* CAP_BPF */ - 32));
- perfmon_priv = cap->data[1].effective & (1 << (38/* CAP_PERFMON */ - 32));
- if (cap_get_flag(caps, CAP_NET_ADMIN, CAP_EFFECTIVE, &net_priv))
- perror("cap_get_flag NET");
- if (cap_free(caps))
- perror("cap_free");
- return bpf_priv && perfmon_priv && net_priv == CAP_SET;
+
+ return (caps & ADMIN_CAPS) == ADMIN_CAPS;
}
static void get_unpriv_disabled()
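The raw libcap bit-twiddling above is replaced by the selftests' own cap_helpers, which operate on a plain 64-bit capability mask and therefore handle CAP_BPF and CAP_PERFMON regardless of the installed libcap version. A hedged sketch of the drop/restore pattern using the two helpers the patch calls (the wrapper function is illustrative):

    #include <linux/capability.h>
    #include "cap_helpers.h"

    static int with_caps_dropped(void)
    {
    	__u64 old_caps;
    	int err;

    	/* drop CAP_SYS_ADMIN, remembering the previous effective set */
    	err = cap_disable_effective(1ULL << CAP_SYS_ADMIN, &old_caps);
    	if (err)
    		return err;

    	/* ... run checks that must not rely on CAP_SYS_ADMIN ... */

    	/* restore exactly what was effective before */
    	return cap_enable_effective(old_caps, NULL);
    }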
diff --git a/tools/testing/selftests/bpf/test_xdp_meta.sh b/tools/testing/selftests/bpf/test_xdp_meta.sh
index d10cefd6eb09..ea69370caae3 100755
--- a/tools/testing/selftests/bpf/test_xdp_meta.sh
+++ b/tools/testing/selftests/bpf/test_xdp_meta.sh
@@ -2,6 +2,8 @@
# Kselftest framework requirement - SKIP code is 4.
readonly KSFT_SKIP=4
+readonly NS1="ns1-$(mktemp -u XXXXXX)"
+readonly NS2="ns2-$(mktemp -u XXXXXX)"
cleanup()
{
@@ -13,8 +15,8 @@ cleanup()
set +e
ip link del veth1 2> /dev/null
- ip netns del ns1 2> /dev/null
- ip netns del ns2 2> /dev/null
+ ip netns del ${NS1} 2> /dev/null
+ ip netns del ${NS2} 2> /dev/null
}
ip link set dev lo xdp off 2>/dev/null > /dev/null
@@ -24,32 +26,32 @@ if [ $? -ne 0 ];then
fi
set -e
-ip netns add ns1
-ip netns add ns2
+ip netns add ${NS1}
+ip netns add ${NS2}
trap cleanup 0 2 3 6 9
ip link add veth1 type veth peer name veth2
-ip link set veth1 netns ns1
-ip link set veth2 netns ns2
+ip link set veth1 netns ${NS1}
+ip link set veth2 netns ${NS2}
-ip netns exec ns1 ip addr add 10.1.1.11/24 dev veth1
-ip netns exec ns2 ip addr add 10.1.1.22/24 dev veth2
+ip netns exec ${NS1} ip addr add 10.1.1.11/24 dev veth1
+ip netns exec ${NS2} ip addr add 10.1.1.22/24 dev veth2
-ip netns exec ns1 tc qdisc add dev veth1 clsact
-ip netns exec ns2 tc qdisc add dev veth2 clsact
+ip netns exec ${NS1} tc qdisc add dev veth1 clsact
+ip netns exec ${NS2} tc qdisc add dev veth2 clsact
-ip netns exec ns1 tc filter add dev veth1 ingress bpf da obj test_xdp_meta.o sec t
-ip netns exec ns2 tc filter add dev veth2 ingress bpf da obj test_xdp_meta.o sec t
+ip netns exec ${NS1} tc filter add dev veth1 ingress bpf da obj test_xdp_meta.o sec t
+ip netns exec ${NS2} tc filter add dev veth2 ingress bpf da obj test_xdp_meta.o sec t
-ip netns exec ns1 ip link set dev veth1 xdp obj test_xdp_meta.o sec x
-ip netns exec ns2 ip link set dev veth2 xdp obj test_xdp_meta.o sec x
+ip netns exec ${NS1} ip link set dev veth1 xdp obj test_xdp_meta.o sec x
+ip netns exec ${NS2} ip link set dev veth2 xdp obj test_xdp_meta.o sec x
-ip netns exec ns1 ip link set dev veth1 up
-ip netns exec ns2 ip link set dev veth2 up
+ip netns exec ${NS1} ip link set dev veth1 up
+ip netns exec ${NS2} ip link set dev veth2 up
-ip netns exec ns1 ping -c 1 10.1.1.22
-ip netns exec ns2 ping -c 1 10.1.1.11
+ip netns exec ${NS1} ping -c 1 10.1.1.22
+ip netns exec ${NS2} ping -c 1 10.1.1.11
exit 0
diff --git a/tools/testing/selftests/bpf/test_xdp_redirect.sh b/tools/testing/selftests/bpf/test_xdp_redirect.sh
index 57c8db9972a6..1d79f31480ad 100755
--- a/tools/testing/selftests/bpf/test_xdp_redirect.sh
+++ b/tools/testing/selftests/bpf/test_xdp_redirect.sh
@@ -10,6 +10,8 @@
# | xdp forwarding |
# ------------------
+readonly NS1="ns1-$(mktemp -u XXXXXX)"
+readonly NS2="ns2-$(mktemp -u XXXXXX)"
ret=0
setup()
@@ -17,27 +19,27 @@ setup()
local xdpmode=$1
- ip netns add ns1
- ip netns add ns2
+ ip netns add ${NS1}
+ ip netns add ${NS2}
- ip link add veth1 index 111 type veth peer name veth11 netns ns1
- ip link add veth2 index 222 type veth peer name veth22 netns ns2
+ ip link add veth1 index 111 type veth peer name veth11 netns ${NS1}
+ ip link add veth2 index 222 type veth peer name veth22 netns ${NS2}
ip link set veth1 up
ip link set veth2 up
- ip -n ns1 link set dev veth11 up
- ip -n ns2 link set dev veth22 up
+ ip -n ${NS1} link set dev veth11 up
+ ip -n ${NS2} link set dev veth22 up
- ip -n ns1 addr add 10.1.1.11/24 dev veth11
- ip -n ns2 addr add 10.1.1.22/24 dev veth22
+ ip -n ${NS1} addr add 10.1.1.11/24 dev veth11
+ ip -n ${NS2} addr add 10.1.1.22/24 dev veth22
}
cleanup()
{
ip link del veth1 2> /dev/null
ip link del veth2 2> /dev/null
- ip netns del ns1 2> /dev/null
- ip netns del ns2 2> /dev/null
+ ip netns del ${NS1} 2> /dev/null
+ ip netns del ${NS2} 2> /dev/null
}
test_xdp_redirect()
@@ -52,13 +54,13 @@ test_xdp_redirect()
return 0
fi
- ip -n ns1 link set veth11 $xdpmode obj xdp_dummy.o sec xdp &> /dev/null
- ip -n ns2 link set veth22 $xdpmode obj xdp_dummy.o sec xdp &> /dev/null
+ ip -n ${NS1} link set veth11 $xdpmode obj xdp_dummy.o sec xdp &> /dev/null
+ ip -n ${NS2} link set veth22 $xdpmode obj xdp_dummy.o sec xdp &> /dev/null
ip link set dev veth1 $xdpmode obj test_xdp_redirect.o sec redirect_to_222 &> /dev/null
ip link set dev veth2 $xdpmode obj test_xdp_redirect.o sec redirect_to_111 &> /dev/null
- if ip netns exec ns1 ping -c 1 10.1.1.22 &> /dev/null &&
- ip netns exec ns2 ping -c 1 10.1.1.11 &> /dev/null; then
+ if ip netns exec ${NS1} ping -c 1 10.1.1.22 &> /dev/null &&
+ ip netns exec ${NS2} ping -c 1 10.1.1.11 &> /dev/null; then
echo "selftests: test_xdp_redirect $xdpmode [PASS]";
else
ret=1
diff --git a/tools/testing/selftests/bpf/test_xdp_redirect_multi.sh b/tools/testing/selftests/bpf/test_xdp_redirect_multi.sh
index 05f872740999..cc57cb87e65f 100755
--- a/tools/testing/selftests/bpf/test_xdp_redirect_multi.sh
+++ b/tools/testing/selftests/bpf/test_xdp_redirect_multi.sh
@@ -32,6 +32,11 @@ DRV_MODE="xdpgeneric xdpdrv xdpegress"
PASS=0
FAIL=0
LOG_DIR=$(mktemp -d)
+declare -a NS
+NS[0]="ns0-$(mktemp -u XXXXXX)"
+NS[1]="ns1-$(mktemp -u XXXXXX)"
+NS[2]="ns2-$(mktemp -u XXXXXX)"
+NS[3]="ns3-$(mktemp -u XXXXXX)"
test_pass()
{
@@ -47,11 +52,9 @@ test_fail()
clean_up()
{
- for i in $(seq $NUM); do
- ip link del veth$i 2> /dev/null
- ip netns del ns$i 2> /dev/null
+ for i in $(seq 0 $NUM); do
+ ip netns del ${NS[$i]} 2> /dev/null
done
- ip netns del ns0 2> /dev/null
}
# Kselftest framework requirement - SKIP code is 4.
@@ -79,23 +82,22 @@ setup_ns()
mode="xdpdrv"
fi
- ip netns add ns0
+ ip netns add ${NS[0]}
for i in $(seq $NUM); do
- ip netns add ns$i
- ip -n ns$i link add veth0 index 2 type veth \
- peer name veth$i netns ns0 index $((1 + $i))
- ip -n ns0 link set veth$i up
- ip -n ns$i link set veth0 up
-
- ip -n ns$i addr add 192.0.2.$i/24 dev veth0
- ip -n ns$i addr add 2001:db8::$i/64 dev veth0
+ ip netns add ${NS[$i]}
+ ip -n ${NS[$i]} link add veth0 type veth peer name veth$i netns ${NS[0]}
+ ip -n ${NS[$i]} link set veth0 up
+ ip -n ${NS[0]} link set veth$i up
+
+ ip -n ${NS[$i]} addr add 192.0.2.$i/24 dev veth0
+ ip -n ${NS[$i]} addr add 2001:db8::$i/64 dev veth0
# Add a neigh entry for IPv4 ping test
- ip -n ns$i neigh add 192.0.2.253 lladdr 00:00:00:00:00:01 dev veth0
- ip -n ns$i link set veth0 $mode obj \
+ ip -n ${NS[$i]} neigh add 192.0.2.253 lladdr 00:00:00:00:00:01 dev veth0
+ ip -n ${NS[$i]} link set veth0 $mode obj \
xdp_dummy.o sec xdp &> /dev/null || \
{ test_fail "Unable to load dummy xdp" && exit 1; }
IFACES="$IFACES veth$i"
- veth_mac[$i]=$(ip -n ns0 link show veth$i | awk '/link\/ether/ {print $2}')
+ veth_mac[$i]=$(ip -n ${NS[0]} link show veth$i | awk '/link\/ether/ {print $2}')
done
}
@@ -104,10 +106,10 @@ do_egress_tests()
local mode=$1
# mac test
- ip netns exec ns2 tcpdump -e -i veth0 -nn -l -e &> ${LOG_DIR}/mac_ns1-2_${mode}.log &
- ip netns exec ns3 tcpdump -e -i veth0 -nn -l -e &> ${LOG_DIR}/mac_ns1-3_${mode}.log &
+ ip netns exec ${NS[2]} tcpdump -e -i veth0 -nn -l -e &> ${LOG_DIR}/mac_ns1-2_${mode}.log &
+ ip netns exec ${NS[3]} tcpdump -e -i veth0 -nn -l -e &> ${LOG_DIR}/mac_ns1-3_${mode}.log &
sleep 0.5
- ip netns exec ns1 ping 192.0.2.254 -i 0.1 -c 4 &> /dev/null
+ ip netns exec ${NS[1]} ping 192.0.2.254 -i 0.1 -c 4 &> /dev/null
sleep 0.5
pkill tcpdump
@@ -123,18 +125,18 @@ do_ping_tests()
local mode=$1
# ping6 test: echo request should be redirect back to itself, not others
- ip netns exec ns1 ip neigh add 2001:db8::2 dev veth0 lladdr 00:00:00:00:00:02
+ ip netns exec ${NS[1]} ip neigh add 2001:db8::2 dev veth0 lladdr 00:00:00:00:00:02
- ip netns exec ns1 tcpdump -i veth0 -nn -l -e &> ${LOG_DIR}/ns1-1_${mode}.log &
- ip netns exec ns2 tcpdump -i veth0 -nn -l -e &> ${LOG_DIR}/ns1-2_${mode}.log &
- ip netns exec ns3 tcpdump -i veth0 -nn -l -e &> ${LOG_DIR}/ns1-3_${mode}.log &
+ ip netns exec ${NS[1]} tcpdump -i veth0 -nn -l -e &> ${LOG_DIR}/ns1-1_${mode}.log &
+ ip netns exec ${NS[2]} tcpdump -i veth0 -nn -l -e &> ${LOG_DIR}/ns1-2_${mode}.log &
+ ip netns exec ${NS[3]} tcpdump -i veth0 -nn -l -e &> ${LOG_DIR}/ns1-3_${mode}.log &
sleep 0.5
# ARP test
- ip netns exec ns1 arping -q -c 2 -I veth0 192.0.2.254
+ ip netns exec ${NS[1]} arping -q -c 2 -I veth0 192.0.2.254
# IPv4 test
- ip netns exec ns1 ping 192.0.2.253 -i 0.1 -c 4 &> /dev/null
+ ip netns exec ${NS[1]} ping 192.0.2.253 -i 0.1 -c 4 &> /dev/null
# IPv6 test
- ip netns exec ns1 ping6 2001:db8::2 -i 0.1 -c 2 &> /dev/null
+ ip netns exec ${NS[1]} ping6 2001:db8::2 -i 0.1 -c 2 &> /dev/null
sleep 0.5
pkill tcpdump
@@ -180,7 +182,7 @@ do_tests()
xdpgeneric) drv_p="-S";;
esac
- ip netns exec ns0 ./xdp_redirect_multi $drv_p $IFACES &> ${LOG_DIR}/xdp_redirect_${mode}.log &
+ ip netns exec ${NS[0]} ./xdp_redirect_multi $drv_p $IFACES &> ${LOG_DIR}/xdp_redirect_${mode}.log &
xdp_pid=$!
sleep 1
if ! ps -p $xdp_pid > /dev/null; then
@@ -197,10 +199,10 @@ do_tests()
kill $xdp_pid
}
-trap clean_up EXIT
-
check_env
+trap clean_up EXIT
+
for mode in ${DRV_MODE}; do
setup_ns $mode
do_tests $mode
diff --git a/tools/testing/selftests/bpf/test_xdp_veth.sh b/tools/testing/selftests/bpf/test_xdp_veth.sh
index a3a1eaee26ea..392d28cc4e58 100755
--- a/tools/testing/selftests/bpf/test_xdp_veth.sh
+++ b/tools/testing/selftests/bpf/test_xdp_veth.sh
@@ -22,6 +22,9 @@ ksft_skip=4
TESTNAME=xdp_veth
BPF_FS=$(awk '$3 == "bpf" {print $2; exit}' /proc/mounts)
BPF_DIR=$BPF_FS/test_$TESTNAME
+readonly NS1="ns1-$(mktemp -u XXXXXX)"
+readonly NS2="ns2-$(mktemp -u XXXXXX)"
+readonly NS3="ns3-$(mktemp -u XXXXXX)"
_cleanup()
{
@@ -29,9 +32,9 @@ _cleanup()
ip link del veth1 2> /dev/null
ip link del veth2 2> /dev/null
ip link del veth3 2> /dev/null
- ip netns del ns1 2> /dev/null
- ip netns del ns2 2> /dev/null
- ip netns del ns3 2> /dev/null
+ ip netns del ${NS1} 2> /dev/null
+ ip netns del ${NS2} 2> /dev/null
+ ip netns del ${NS3} 2> /dev/null
rm -rf $BPF_DIR 2> /dev/null
}
@@ -77,24 +80,24 @@ set -e
trap cleanup_skip EXIT
-ip netns add ns1
-ip netns add ns2
-ip netns add ns3
+ip netns add ${NS1}
+ip netns add ${NS2}
+ip netns add ${NS3}
-ip link add veth1 index 111 type veth peer name veth11 netns ns1
-ip link add veth2 index 122 type veth peer name veth22 netns ns2
-ip link add veth3 index 133 type veth peer name veth33 netns ns3
+ip link add veth1 index 111 type veth peer name veth11 netns ${NS1}
+ip link add veth2 index 122 type veth peer name veth22 netns ${NS2}
+ip link add veth3 index 133 type veth peer name veth33 netns ${NS3}
ip link set veth1 up
ip link set veth2 up
ip link set veth3 up
-ip -n ns1 addr add 10.1.1.11/24 dev veth11
-ip -n ns3 addr add 10.1.1.33/24 dev veth33
+ip -n ${NS1} addr add 10.1.1.11/24 dev veth11
+ip -n ${NS3} addr add 10.1.1.33/24 dev veth33
-ip -n ns1 link set dev veth11 up
-ip -n ns2 link set dev veth22 up
-ip -n ns3 link set dev veth33 up
+ip -n ${NS1} link set dev veth11 up
+ip -n ${NS2} link set dev veth22 up
+ip -n ${NS3} link set dev veth33 up
mkdir $BPF_DIR
bpftool prog loadall \
@@ -107,12 +110,12 @@ ip link set dev veth1 xdp pinned $BPF_DIR/progs/redirect_map_0
ip link set dev veth2 xdp pinned $BPF_DIR/progs/redirect_map_1
ip link set dev veth3 xdp pinned $BPF_DIR/progs/redirect_map_2
-ip -n ns1 link set dev veth11 xdp obj xdp_dummy.o sec xdp
-ip -n ns2 link set dev veth22 xdp obj xdp_tx.o sec xdp
-ip -n ns3 link set dev veth33 xdp obj xdp_dummy.o sec xdp
+ip -n ${NS1} link set dev veth11 xdp obj xdp_dummy.o sec xdp
+ip -n ${NS2} link set dev veth22 xdp obj xdp_tx.o sec xdp
+ip -n ${NS3} link set dev veth33 xdp obj xdp_dummy.o sec xdp
trap cleanup EXIT
-ip netns exec ns1 ping -c 1 -W 1 10.1.1.33
+ip netns exec ${NS1} ping -c 1 -W 1 10.1.1.33
exit 0
diff --git a/tools/testing/selftests/bpf/test_xdp_vlan.sh b/tools/testing/selftests/bpf/test_xdp_vlan.sh
index 0cbc7604a2f8..810c407e0286 100755
--- a/tools/testing/selftests/bpf/test_xdp_vlan.sh
+++ b/tools/testing/selftests/bpf/test_xdp_vlan.sh
@@ -4,6 +4,8 @@
# Kselftest framework requirement - SKIP code is 4.
readonly KSFT_SKIP=4
+readonly NS1="ns1-$(mktemp -u XXXXXX)"
+readonly NS2="ns2-$(mktemp -u XXXXXX)"
# Allow wrapper scripts to name test
if [ -z "$TESTNAME" ]; then
@@ -49,15 +51,15 @@ cleanup()
if [ -n "$INTERACTIVE" ]; then
echo "Namespace setup still active explore with:"
- echo " ip netns exec ns1 bash"
- echo " ip netns exec ns2 bash"
+ echo " ip netns exec ${NS1} bash"
+ echo " ip netns exec ${NS2} bash"
exit $status
fi
set +e
ip link del veth1 2> /dev/null
- ip netns del ns1 2> /dev/null
- ip netns del ns2 2> /dev/null
+ ip netns del ${NS1} 2> /dev/null
+ ip netns del ${NS2} 2> /dev/null
}
# Using external program "getopt" to get --long-options
@@ -126,8 +128,8 @@ fi
# Interactive mode likely requires us to clean up the netns
if [ -n "$INTERACTIVE" ]; then
ip link del veth1 2> /dev/null
- ip netns del ns1 2> /dev/null
- ip netns del ns2 2> /dev/null
+ ip netns del ${NS1} 2> /dev/null
+ ip netns del ${NS2} 2> /dev/null
fi
# Exit on failure
@@ -144,8 +146,8 @@ if [ -n "$VERBOSE" ]; then
fi
# Create two namespaces
-ip netns add ns1
-ip netns add ns2
+ip netns add ${NS1}
+ip netns add ${NS2}
# Run cleanup if failing or on kill
trap cleanup 0 2 3 6 9
@@ -154,44 +156,44 @@ trap cleanup 0 2 3 6 9
ip link add veth1 type veth peer name veth2
# Move veth1 and veth2 into the respective namespaces
-ip link set veth1 netns ns1
-ip link set veth2 netns ns2
+ip link set veth1 netns ${NS1}
+ip link set veth2 netns ${NS2}
# NOTICE: XDP requires the VLAN header inside the packet payload
# - Thus, disable the VLAN offloading driver features
# - For veth, REMEMBER the TX-side VLAN offload
#
# Disable rx-vlan-offload (mostly needed on ns1)
-ip netns exec ns1 ethtool -K veth1 rxvlan off
-ip netns exec ns2 ethtool -K veth2 rxvlan off
+ip netns exec ${NS1} ethtool -K veth1 rxvlan off
+ip netns exec ${NS2} ethtool -K veth2 rxvlan off
#
# Disable tx-vlan-offload (mostly needed on ns2)
-ip netns exec ns2 ethtool -K veth2 txvlan off
-ip netns exec ns1 ethtool -K veth1 txvlan off
+ip netns exec ${NS2} ethtool -K veth2 txvlan off
+ip netns exec ${NS1} ethtool -K veth1 txvlan off
export IPADDR1=100.64.41.1
export IPADDR2=100.64.41.2
# In ns1/veth1 add IP-addr on plain net_device
-ip netns exec ns1 ip addr add ${IPADDR1}/24 dev veth1
-ip netns exec ns1 ip link set veth1 up
+ip netns exec ${NS1} ip addr add ${IPADDR1}/24 dev veth1
+ip netns exec ${NS1} ip link set veth1 up
# In ns2/veth2 create VLAN device
export VLAN=4011
export DEVNS2=veth2
-ip netns exec ns2 ip link add link $DEVNS2 name $DEVNS2.$VLAN type vlan id $VLAN
-ip netns exec ns2 ip addr add ${IPADDR2}/24 dev $DEVNS2.$VLAN
-ip netns exec ns2 ip link set $DEVNS2 up
-ip netns exec ns2 ip link set $DEVNS2.$VLAN up
+ip netns exec ${NS2} ip link add link $DEVNS2 name $DEVNS2.$VLAN type vlan id $VLAN
+ip netns exec ${NS2} ip addr add ${IPADDR2}/24 dev $DEVNS2.$VLAN
+ip netns exec ${NS2} ip link set $DEVNS2 up
+ip netns exec ${NS2} ip link set $DEVNS2.$VLAN up
# Bring up lo in the netns (to avoid confusing people using --interactive)
-ip netns exec ns1 ip link set lo up
-ip netns exec ns2 ip link set lo up
+ip netns exec ${NS1} ip link set lo up
+ip netns exec ${NS2} ip link set lo up
# At this point, the hosts cannot reach each other,
# because ns2 is using VLAN tags on its packets.
-ip netns exec ns2 sh -c 'ping -W 1 -c 1 100.64.41.1 || echo "Success: First ping must fail"'
+ip netns exec ${NS2} sh -c 'ping -W 1 -c 1 100.64.41.1 || echo "Success: First ping must fail"'
# Now we can use the test_xdp_vlan.c program to pop/push these VLAN tags
@@ -202,19 +204,19 @@ export FILE=test_xdp_vlan.o
# First test: Remove VLAN by setting VLAN ID 0, using "xdp_vlan_change"
export XDP_PROG=xdp_vlan_change
-ip netns exec ns1 ip link set $DEVNS1 $XDP_MODE object $FILE section $XDP_PROG
+ip netns exec ${NS1} ip link set $DEVNS1 $XDP_MODE object $FILE section $XDP_PROG
# In ns1: egress use TC to add back VLAN tag 4011
# (del cmd)
# tc qdisc del dev $DEVNS1 clsact 2> /dev/null
#
-ip netns exec ns1 tc qdisc add dev $DEVNS1 clsact
-ip netns exec ns1 tc filter add dev $DEVNS1 egress \
+ip netns exec ${NS1} tc qdisc add dev $DEVNS1 clsact
+ip netns exec ${NS1} tc filter add dev $DEVNS1 egress \
prio 1 handle 1 bpf da obj $FILE sec tc_vlan_push
# Now the namespaces can reach each other; test with ping:
-ip netns exec ns2 ping -i 0.2 -W 2 -c 2 $IPADDR1
-ip netns exec ns1 ping -i 0.2 -W 2 -c 2 $IPADDR2
+ip netns exec ${NS2} ping -i 0.2 -W 2 -c 2 $IPADDR1
+ip netns exec ${NS1} ping -i 0.2 -W 2 -c 2 $IPADDR2
# Second test: Replace the XDP prog with one that fully removes the VLAN header
#
@@ -223,9 +225,9 @@ ip netns exec ns1 ping -i 0.2 -W 2 -c 2 $IPADDR2
# ETH_P_8021Q indication, and this causes our changes to be overwritten.
#
export XDP_PROG=xdp_vlan_remove_outer2
-ip netns exec ns1 ip link set $DEVNS1 $XDP_MODE off
-ip netns exec ns1 ip link set $DEVNS1 $XDP_MODE object $FILE section $XDP_PROG
+ip netns exec ${NS1} ip link set $DEVNS1 $XDP_MODE off
+ip netns exec ${NS1} ip link set $DEVNS1 $XDP_MODE object $FILE section $XDP_PROG
# Now the namespaces should still be able to reach each other; test with ping:
-ip netns exec ns2 ping -i 0.2 -W 2 -c 2 $IPADDR1
-ip netns exec ns1 ping -i 0.2 -W 2 -c 2 $IPADDR2
+ip netns exec ${NS2} ping -i 0.2 -W 2 -c 2 $IPADDR1
+ip netns exec ${NS1} ping -i 0.2 -W 2 -c 2 $IPADDR2
diff --git a/tools/testing/selftests/bpf/trace_helpers.c b/tools/testing/selftests/bpf/trace_helpers.c
index 7b7f918eda77..3d6217e3aff7 100644
--- a/tools/testing/selftests/bpf/trace_helpers.c
+++ b/tools/testing/selftests/bpf/trace_helpers.c
@@ -34,6 +34,13 @@ int load_kallsyms(void)
if (!f)
return -ENOENT;
+ /*
+ * This is called/used from multiple places;
+ * load the symbols just once.
+ */
+ if (sym_cnt)
+ return 0;
+
while (fgets(buf, sizeof(buf), f)) {
if (sscanf(buf, "%p %c %s", &addr, &symbol, func) != 3)
break;
@@ -138,6 +145,29 @@ void read_trace_pipe(void)
}
}
+ssize_t get_uprobe_offset(const void *addr)
+{
+ size_t start, end, base;
+ char buf[256];
+ bool found = false;
+ FILE *f;
+
+ f = fopen("/proc/self/maps", "r");
+ if (!f)
+ return -errno;
+
+ while (fscanf(f, "%zx-%zx %s %zx %*[^\n]\n", &start, &end, buf, &base) == 4) {
+ if (buf[2] == 'x' && (uintptr_t)addr >= start && (uintptr_t)addr < end) {
+ found = true;
+ break;
+ }
+ }
+
+ fclose(f);
+
+ if (!found)
+ return -ESRCH;
+
#if defined(__powerpc64__) && defined(_CALL_ELF) && _CALL_ELF == 2
#define OP_RT_RA_MASK 0xffff0000UL
@@ -145,10 +175,6 @@ void read_trace_pipe(void)
#define ADDIS_R2_R12 0x3c4c0000UL
#define ADDI_R2_R2 0x38420000UL
-ssize_t get_uprobe_offset(const void *addr, ssize_t base)
-{
- u32 *insn = (u32 *)(uintptr_t)addr;
-
/*
* A PPC64 ABIv2 function may have a local and a global entry
* point. We need to use the local entry point when patching
@@ -165,43 +191,16 @@ ssize_t get_uprobe_offset(const void *addr, ssize_t base)
* lis r2,XXXX
* addi r2,r2,XXXX
*/
- if ((((*insn & OP_RT_RA_MASK) == ADDIS_R2_R12) ||
- ((*insn & OP_RT_RA_MASK) == LIS_R2)) &&
- ((*(insn + 1) & OP_RT_RA_MASK) == ADDI_R2_R2))
- return (ssize_t)(insn + 2) - base;
- else
- return (uintptr_t)addr - base;
-}
+ {
+ const u32 *insn = (const u32 *)(uintptr_t)addr;
-#else
-
-ssize_t get_uprobe_offset(const void *addr, ssize_t base)
-{
- return (uintptr_t)addr - base;
-}
-
-#endif
-
-ssize_t get_base_addr(void)
-{
- size_t start, offset;
- char buf[256];
- FILE *f;
-
- f = fopen("/proc/self/maps", "r");
- if (!f)
- return -errno;
-
- while (fscanf(f, "%zx-%*x %s %zx %*[^\n]\n",
- &start, buf, &offset) == 3) {
- if (strcmp(buf, "r-xp") == 0) {
- fclose(f);
- return start - offset;
- }
+ if ((((*insn & OP_RT_RA_MASK) == ADDIS_R2_R12) ||
+ ((*insn & OP_RT_RA_MASK) == LIS_R2)) &&
+ ((*(insn + 1) & OP_RT_RA_MASK) == ADDI_R2_R2))
+ return (uintptr_t)(insn + 2) - start + base;
}
-
- fclose(f);
- return -EINVAL;
+#endif
+ return (uintptr_t)addr - start + base;
}
ssize_t get_rel_offset(uintptr_t addr)
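
The refactor above folds the old get_base_addr() into get_uprobe_offset(): rather than assuming the first r-xp mapping, it locates the executable mapping that actually contains the target address and translates the address into a file offset (addr - start + base). A minimal standalone sketch of that translation, mirroring the /proc/self/maps parsing above (the function name is illustrative):

#include <errno.h>
#include <stdio.h>
#include <stdint.h>
#include <sys/types.h>

/* Sketch: translate a virtual address in our own process into the
 * file offset a uprobe would attach at. [start, end) is the
 * executable mapping containing addr; base is its file offset. */
static ssize_t uprobe_offset_sketch(const void *addr)
{
	size_t start, end, base;
	char perms[16];
	FILE *f;

	f = fopen("/proc/self/maps", "r");
	if (!f)
		return -errno;

	while (fscanf(f, "%zx-%zx %15s %zx %*[^\n]\n",
		      &start, &end, perms, &base) == 4) {
		if (perms[2] == 'x' &&
		    (uintptr_t)addr >= start && (uintptr_t)addr < end) {
			fclose(f);
			return (uintptr_t)addr - start + base;
		}
	}
	fclose(f);
	return -ESRCH;	/* not in any executable mapping */
}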
diff --git a/tools/testing/selftests/bpf/trace_helpers.h b/tools/testing/selftests/bpf/trace_helpers.h
index d907b445524d..238a9c98cde2 100644
--- a/tools/testing/selftests/bpf/trace_helpers.h
+++ b/tools/testing/selftests/bpf/trace_helpers.h
@@ -18,8 +18,7 @@ int kallsyms_find(const char *sym, unsigned long long *addr);
void read_trace_pipe(void);
-ssize_t get_uprobe_offset(const void *addr, ssize_t base);
-ssize_t get_base_addr(void);
+ssize_t get_uprobe_offset(const void *addr);
ssize_t get_rel_offset(uintptr_t addr);
#endif
diff --git a/tools/testing/selftests/bpf/verifier/atomic_invalid.c b/tools/testing/selftests/bpf/verifier/atomic_invalid.c
index 39272720b2f6..25f4ac1c69ab 100644
--- a/tools/testing/selftests/bpf/verifier/atomic_invalid.c
+++ b/tools/testing/selftests/bpf/verifier/atomic_invalid.c
@@ -1,6 +1,6 @@
-#define __INVALID_ATOMIC_ACCESS_TEST(op) \
+#define __INVALID_ATOMIC_ACCESS_TEST(op) \
{ \
- "atomic " #op " access through non-pointer ", \
+ "atomic " #op " access through non-pointer ", \
.insns = { \
BPF_MOV64_IMM(BPF_REG_0, 1), \
BPF_MOV64_IMM(BPF_REG_1, 0), \
@@ -9,7 +9,7 @@
BPF_EXIT_INSN(), \
}, \
.result = REJECT, \
- .errstr = "R1 invalid mem access 'inv'" \
+ .errstr = "R1 invalid mem access 'scalar'" \
}
__INVALID_ATOMIC_ACCESS_TEST(BPF_ADD),
__INVALID_ATOMIC_ACCESS_TEST(BPF_ADD | BPF_FETCH),
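
Most of the remaining verifier-selftest churn in this series is mechanical: SCALAR_VALUE registers are now printed as 'scalar' where the verifier used to print 'inv', so every expected error string is updated to match. A minimal sketch of an instruction sequence that trips the new message, in the same test-macro style used above:

	/* R1 holds a scalar, not a pointer, so the load is rejected with
	 * "R1 invalid mem access 'scalar'" (previously "... 'inv'"). */
	BPF_MOV64_IMM(BPF_REG_1, 0),
	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, 0),
	BPF_EXIT_INSN(),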
diff --git a/tools/testing/selftests/bpf/verifier/bounds.c b/tools/testing/selftests/bpf/verifier/bounds.c
index e061e8799ce2..33125d5f6772 100644
--- a/tools/testing/selftests/bpf/verifier/bounds.c
+++ b/tools/testing/selftests/bpf/verifier/bounds.c
@@ -508,7 +508,7 @@
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, -1),
BPF_EXIT_INSN(),
},
- .errstr_unpriv = "R0 invalid mem access 'inv'",
+ .errstr_unpriv = "R0 invalid mem access 'scalar'",
.result_unpriv = REJECT,
.result = ACCEPT
},
@@ -530,7 +530,7 @@
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, -1),
BPF_EXIT_INSN(),
},
- .errstr_unpriv = "R0 invalid mem access 'inv'",
+ .errstr_unpriv = "R0 invalid mem access 'scalar'",
.result_unpriv = REJECT,
.result = ACCEPT
},
diff --git a/tools/testing/selftests/bpf/verifier/bounds_deduction.c b/tools/testing/selftests/bpf/verifier/bounds_deduction.c
index 91869aea6d64..3931c481e30c 100644
--- a/tools/testing/selftests/bpf/verifier/bounds_deduction.c
+++ b/tools/testing/selftests/bpf/verifier/bounds_deduction.c
@@ -105,7 +105,7 @@
BPF_EXIT_INSN(),
},
.errstr_unpriv = "R1 has pointer with unsupported alu operation",
- .errstr = "dereference of modified ctx ptr",
+ .errstr = "negative offset ctx ptr R1 off=-1 disallowed",
.result = REJECT,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
diff --git a/tools/testing/selftests/bpf/verifier/calls.c b/tools/testing/selftests/bpf/verifier/calls.c
index d7b74eb28333..2e03decb11b6 100644
--- a/tools/testing/selftests/bpf/verifier/calls.c
+++ b/tools/testing/selftests/bpf/verifier/calls.c
@@ -22,6 +22,183 @@
.result = ACCEPT,
},
{
+ "calls: invalid kfunc call: ptr_to_mem to struct with non-scalar",
+ .insns = {
+ BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .prog_type = BPF_PROG_TYPE_SCHED_CLS,
+ .result = REJECT,
+ .errstr = "arg#0 pointer type STRUCT prog_test_fail1 must point to scalar",
+ .fixup_kfunc_btf_id = {
+ { "bpf_kfunc_call_test_fail1", 2 },
+ },
+},
+{
+ "calls: invalid kfunc call: ptr_to_mem to struct with nesting depth > 4",
+ .insns = {
+ BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .prog_type = BPF_PROG_TYPE_SCHED_CLS,
+ .result = REJECT,
+ .errstr = "max struct nesting depth exceeded\narg#0 pointer type STRUCT prog_test_fail2",
+ .fixup_kfunc_btf_id = {
+ { "bpf_kfunc_call_test_fail2", 2 },
+ },
+},
+{
+ "calls: invalid kfunc call: ptr_to_mem to struct with FAM",
+ .insns = {
+ BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .prog_type = BPF_PROG_TYPE_SCHED_CLS,
+ .result = REJECT,
+ .errstr = "arg#0 pointer type STRUCT prog_test_fail3 must point to scalar",
+ .fixup_kfunc_btf_id = {
+ { "bpf_kfunc_call_test_fail3", 2 },
+ },
+},
+{
+ "calls: invalid kfunc call: reg->type != PTR_TO_CTX",
+ .insns = {
+ BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .prog_type = BPF_PROG_TYPE_SCHED_CLS,
+ .result = REJECT,
+ .errstr = "arg#0 expected pointer to ctx, but got PTR",
+ .fixup_kfunc_btf_id = {
+ { "bpf_kfunc_call_test_pass_ctx", 2 },
+ },
+},
+{
+ "calls: invalid kfunc call: void * not allowed in func proto without mem size arg",
+ .insns = {
+ BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .prog_type = BPF_PROG_TYPE_SCHED_CLS,
+ .result = REJECT,
+ .errstr = "arg#0 pointer type UNKNOWN must point to scalar",
+ .fixup_kfunc_btf_id = {
+ { "bpf_kfunc_call_test_mem_len_fail1", 2 },
+ },
+},
+{
+ "calls: trigger reg2btf_ids[reg->type] for reg->type > __BPF_REG_TYPE_MAX",
+ .insns = {
+ BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
+ BPF_ST_MEM(BPF_DW, BPF_REG_1, 0, 0),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
+ BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .prog_type = BPF_PROG_TYPE_SCHED_CLS,
+ .result = REJECT,
+ .errstr = "arg#0 pointer type STRUCT prog_test_ref_kfunc must point",
+ .fixup_kfunc_btf_id = {
+ { "bpf_kfunc_call_test_acquire", 3 },
+ { "bpf_kfunc_call_test_release", 5 },
+ },
+},
+{
+ "calls: invalid kfunc call: reg->off must be zero when passed to release kfunc",
+ .insns = {
+ BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
+ BPF_ST_MEM(BPF_DW, BPF_REG_1, 0, 0),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
+ BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+ BPF_EXIT_INSN(),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
+ BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .prog_type = BPF_PROG_TYPE_SCHED_CLS,
+ .result = REJECT,
+ .errstr = "R1 must have zero offset when passed to release func",
+ .fixup_kfunc_btf_id = {
+ { "bpf_kfunc_call_test_acquire", 3 },
+ { "bpf_kfunc_call_memb_release", 8 },
+ },
+},
+{
+ "calls: invalid kfunc call: PTR_TO_BTF_ID with negative offset",
+ .insns = {
+ BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
+ BPF_ST_MEM(BPF_DW, BPF_REG_1, 0, 0),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
+ BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+ BPF_EXIT_INSN(),
+ BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
+ BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, 16),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -4),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .prog_type = BPF_PROG_TYPE_SCHED_CLS,
+ .fixup_kfunc_btf_id = {
+ { "bpf_kfunc_call_test_acquire", 3 },
+ { "bpf_kfunc_call_test_release", 9 },
+ },
+ .result_unpriv = REJECT,
+ .result = REJECT,
+ .errstr = "negative offset ptr_ ptr R1 off=-4 disallowed",
+},
+{
+ "calls: invalid kfunc call: PTR_TO_BTF_ID with variable offset",
+ .insns = {
+ BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
+ BPF_ST_MEM(BPF_DW, BPF_REG_1, 0, 0),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
+ BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+ BPF_EXIT_INSN(),
+ BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
+ BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_0, 4),
+ BPF_JMP_IMM(BPF_JLE, BPF_REG_2, 4, 3),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ BPF_JMP_IMM(BPF_JGE, BPF_REG_2, 0, 3),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_2),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .prog_type = BPF_PROG_TYPE_SCHED_CLS,
+ .fixup_kfunc_btf_id = {
+ { "bpf_kfunc_call_test_acquire", 3 },
+ { "bpf_kfunc_call_test_release", 9 },
+ { "bpf_kfunc_call_test_release", 13 },
+ { "bpf_kfunc_call_test_release", 17 },
+ },
+ .result_unpriv = REJECT,
+ .result = REJECT,
+ .errstr = "variable ptr_ access var_off=(0x0; 0x7) disallowed",
+},
+{
"calls: basic sanity",
.insns = {
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 2),
@@ -94,7 +271,7 @@
},
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.result = REJECT,
- .errstr = "R0 invalid mem access 'inv'",
+ .errstr = "R0 invalid mem access 'scalar'",
},
{
"calls: multiple ret types in subprog 2",
@@ -397,7 +574,7 @@
BPF_EXIT_INSN(),
},
.result = REJECT,
- .errstr = "R6 invalid mem access 'inv'",
+ .errstr = "R6 invalid mem access 'scalar'",
.prog_type = BPF_PROG_TYPE_XDP,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
@@ -1603,7 +1780,7 @@
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.fixup_map_hash_8b = { 12, 22 },
.result = REJECT,
- .errstr = "R0 invalid mem access 'inv'",
+ .errstr = "R0 invalid mem access 'scalar'",
},
{
"calls: pkt_ptr spill into caller stack",
diff --git a/tools/testing/selftests/bpf/verifier/ctx.c b/tools/testing/selftests/bpf/verifier/ctx.c
index 23080862aafd..c8eaf0536c24 100644
--- a/tools/testing/selftests/bpf/verifier/ctx.c
+++ b/tools/testing/selftests/bpf/verifier/ctx.c
@@ -58,7 +58,7 @@
},
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.result = REJECT,
- .errstr = "dereference of modified ctx ptr",
+ .errstr = "negative offset ctx ptr R1 off=-612 disallowed",
},
{
"pass modified ctx pointer to helper, 2",
@@ -71,8 +71,8 @@
},
.result_unpriv = REJECT,
.result = REJECT,
- .errstr_unpriv = "dereference of modified ctx ptr",
- .errstr = "dereference of modified ctx ptr",
+ .errstr_unpriv = "negative offset ctx ptr R1 off=-612 disallowed",
+ .errstr = "negative offset ctx ptr R1 off=-612 disallowed",
},
{
"pass modified ctx pointer to helper, 3",
@@ -127,7 +127,7 @@
.prog_type = BPF_PROG_TYPE_CGROUP_SOCK_ADDR,
.expected_attach_type = BPF_CGROUP_UDP6_SENDMSG,
.result = REJECT,
- .errstr = "R1 type=inv expected=ctx",
+ .errstr = "R1 type=scalar expected=ctx",
},
{
"pass ctx or null check, 4: ctx - const",
@@ -141,7 +141,7 @@
.prog_type = BPF_PROG_TYPE_CGROUP_SOCK_ADDR,
.expected_attach_type = BPF_CGROUP_UDP6_SENDMSG,
.result = REJECT,
- .errstr = "dereference of modified ctx ptr",
+ .errstr = "negative offset ctx ptr R1 off=-612 disallowed",
},
{
"pass ctx or null check, 5: null (connect)",
@@ -193,5 +193,5 @@
.prog_type = BPF_PROG_TYPE_CGROUP_SOCK,
.expected_attach_type = BPF_CGROUP_INET4_POST_BIND,
.result = REJECT,
- .errstr = "R1 type=inv expected=ctx",
+ .errstr = "R1 type=scalar expected=ctx",
},
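
Rejections of modified context pointers are likewise more specific: the verifier now reports the offending offset ("negative offset ctx ptr R1 off=-612 disallowed") instead of the generic "dereference of modified ctx ptr". A hedged sketch of the pattern the updated tests exercise (the helper choice here is illustrative; any helper taking the context as its first argument would do):

	/* R1 enters the program as ctx; offsetting it below the start
	 * of the context and passing it to a helper is rejected with
	 * the new, offset-carrying message. */
	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -612),
	BPF_EMIT_CALL(BPF_FUNC_get_cgroup_classid),
	BPF_EXIT_INSN(),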
diff --git a/tools/testing/selftests/bpf/verifier/direct_packet_access.c b/tools/testing/selftests/bpf/verifier/direct_packet_access.c
index ac1e19d0f520..11acd1855acf 100644
--- a/tools/testing/selftests/bpf/verifier/direct_packet_access.c
+++ b/tools/testing/selftests/bpf/verifier/direct_packet_access.c
@@ -339,7 +339,7 @@
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
- .errstr = "R2 invalid mem access 'inv'",
+ .errstr = "R2 invalid mem access 'scalar'",
.result = REJECT,
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
diff --git a/tools/testing/selftests/bpf/verifier/helper_access_var_len.c b/tools/testing/selftests/bpf/verifier/helper_access_var_len.c
index 0ab7f1dfc97a..a6c869a7319c 100644
--- a/tools/testing/selftests/bpf/verifier/helper_access_var_len.c
+++ b/tools/testing/selftests/bpf/verifier/helper_access_var_len.c
@@ -350,7 +350,7 @@
BPF_EMIT_CALL(BPF_FUNC_csum_diff),
BPF_EXIT_INSN(),
},
- .errstr = "R1 type=inv expected=fp",
+ .errstr = "R1 type=scalar expected=fp",
.result = REJECT,
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
},
@@ -471,7 +471,7 @@
BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel),
BPF_EXIT_INSN(),
},
- .errstr = "R1 type=inv expected=fp",
+ .errstr = "R1 type=scalar expected=fp",
.result = REJECT,
.prog_type = BPF_PROG_TYPE_TRACEPOINT,
},
@@ -484,7 +484,7 @@
BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel),
BPF_EXIT_INSN(),
},
- .errstr = "R1 type=inv expected=fp",
+ .errstr = "R1 type=scalar expected=fp",
.result = REJECT,
.prog_type = BPF_PROG_TYPE_TRACEPOINT,
},
diff --git a/tools/testing/selftests/bpf/verifier/jmp32.c b/tools/testing/selftests/bpf/verifier/jmp32.c
index 1c857b2fbdf0..6ddc418fdfaf 100644
--- a/tools/testing/selftests/bpf/verifier/jmp32.c
+++ b/tools/testing/selftests/bpf/verifier/jmp32.c
@@ -286,7 +286,7 @@
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
- .errstr_unpriv = "R0 invalid mem access 'inv'",
+ .errstr_unpriv = "R0 invalid mem access 'scalar'",
.result_unpriv = REJECT,
.result = ACCEPT,
.retval = 2,
@@ -356,7 +356,7 @@
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
- .errstr_unpriv = "R0 invalid mem access 'inv'",
+ .errstr_unpriv = "R0 invalid mem access 'scalar'",
.result_unpriv = REJECT,
.result = ACCEPT,
.retval = 2,
@@ -426,7 +426,7 @@
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
- .errstr_unpriv = "R0 invalid mem access 'inv'",
+ .errstr_unpriv = "R0 invalid mem access 'scalar'",
.result_unpriv = REJECT,
.result = ACCEPT,
.retval = 2,
@@ -496,7 +496,7 @@
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
- .errstr_unpriv = "R0 invalid mem access 'inv'",
+ .errstr_unpriv = "R0 invalid mem access 'scalar'",
.result_unpriv = REJECT,
.result = ACCEPT,
.retval = 2,
@@ -566,7 +566,7 @@
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
- .errstr_unpriv = "R0 invalid mem access 'inv'",
+ .errstr_unpriv = "R0 invalid mem access 'scalar'",
.result_unpriv = REJECT,
.result = ACCEPT,
.retval = 2,
@@ -636,7 +636,7 @@
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
- .errstr_unpriv = "R0 invalid mem access 'inv'",
+ .errstr_unpriv = "R0 invalid mem access 'scalar'",
.result_unpriv = REJECT,
.result = ACCEPT,
.retval = 2,
@@ -706,7 +706,7 @@
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
- .errstr_unpriv = "R0 invalid mem access 'inv'",
+ .errstr_unpriv = "R0 invalid mem access 'scalar'",
.result_unpriv = REJECT,
.result = ACCEPT,
.retval = 2,
@@ -776,7 +776,7 @@
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
- .errstr_unpriv = "R0 invalid mem access 'inv'",
+ .errstr_unpriv = "R0 invalid mem access 'scalar'",
.result_unpriv = REJECT,
.result = ACCEPT,
.retval = 2,
diff --git a/tools/testing/selftests/bpf/verifier/precise.c b/tools/testing/selftests/bpf/verifier/precise.c
index 6dc8003ffc70..9e754423fa8b 100644
--- a/tools/testing/selftests/bpf/verifier/precise.c
+++ b/tools/testing/selftests/bpf/verifier/precise.c
@@ -27,7 +27,7 @@
BPF_JMP_IMM(BPF_JLT, BPF_REG_2, 8, 1),
BPF_EXIT_INSN(),
- BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, 1), /* R2=inv(umin=1, umax=8) */
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, 1), /* R2=scalar(umin=1, umax=8) */
BPF_MOV64_REG(BPF_REG_1, BPF_REG_FP),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
BPF_MOV64_IMM(BPF_REG_3, 0),
@@ -87,7 +87,7 @@
BPF_JMP_IMM(BPF_JLT, BPF_REG_2, 8, 1),
BPF_EXIT_INSN(),
- BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, 1), /* R2=inv(umin=1, umax=8) */
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, 1), /* R2=scalar(umin=1, umax=8) */
BPF_MOV64_REG(BPF_REG_1, BPF_REG_FP),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
BPF_MOV64_IMM(BPF_REG_3, 0),
diff --git a/tools/testing/selftests/bpf/verifier/raw_stack.c b/tools/testing/selftests/bpf/verifier/raw_stack.c
index cc8e8c3cdc03..eb5ed936580b 100644
--- a/tools/testing/selftests/bpf/verifier/raw_stack.c
+++ b/tools/testing/selftests/bpf/verifier/raw_stack.c
@@ -132,7 +132,7 @@
BPF_EXIT_INSN(),
},
.result = REJECT,
- .errstr = "R0 invalid mem access 'inv'",
+ .errstr = "R0 invalid mem access 'scalar'",
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
@@ -162,7 +162,7 @@
BPF_EXIT_INSN(),
},
.result = REJECT,
- .errstr = "R3 invalid mem access 'inv'",
+ .errstr = "R3 invalid mem access 'scalar'",
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
diff --git a/tools/testing/selftests/bpf/verifier/ref_tracking.c b/tools/testing/selftests/bpf/verifier/ref_tracking.c
index 3b6ee009c00b..fbd682520e47 100644
--- a/tools/testing/selftests/bpf/verifier/ref_tracking.c
+++ b/tools/testing/selftests/bpf/verifier/ref_tracking.c
@@ -162,7 +162,7 @@
BPF_EXIT_INSN(),
},
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
- .errstr = "type=inv expected=sock",
+ .errstr = "type=scalar expected=sock",
.result = REJECT,
},
{
@@ -178,7 +178,7 @@
BPF_EXIT_INSN(),
},
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
- .errstr = "type=inv expected=sock",
+ .errstr = "type=scalar expected=sock",
.result = REJECT,
},
{
@@ -274,7 +274,7 @@
BPF_EXIT_INSN(),
},
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
- .errstr = "type=inv expected=sock",
+ .errstr = "type=scalar expected=sock",
.result = REJECT,
},
{
diff --git a/tools/testing/selftests/bpf/verifier/search_pruning.c b/tools/testing/selftests/bpf/verifier/search_pruning.c
index 682519769fe3..68b14fdfebdb 100644
--- a/tools/testing/selftests/bpf/verifier/search_pruning.c
+++ b/tools/testing/selftests/bpf/verifier/search_pruning.c
@@ -104,7 +104,7 @@
BPF_EXIT_INSN(),
},
.fixup_map_hash_8b = { 3 },
- .errstr = "R6 invalid mem access 'inv'",
+ .errstr = "R6 invalid mem access 'scalar'",
.result = REJECT,
.prog_type = BPF_PROG_TYPE_TRACEPOINT,
},
diff --git a/tools/testing/selftests/bpf/verifier/sock.c b/tools/testing/selftests/bpf/verifier/sock.c
index ce13ece08d51..86b24cad27a7 100644
--- a/tools/testing/selftests/bpf/verifier/sock.c
+++ b/tools/testing/selftests/bpf/verifier/sock.c
@@ -121,7 +121,25 @@
.result = ACCEPT,
},
{
- "sk_fullsock(skb->sk): sk->dst_port [narrow load]",
+ "sk_fullsock(skb->sk): sk->dst_port [word load] (backward compatibility)",
+ .insns = {
+ BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
+ BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
+ BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_sock, dst_port)),
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .prog_type = BPF_PROG_TYPE_CGROUP_SKB,
+ .result = ACCEPT,
+},
+{
+ "sk_fullsock(skb->sk): sk->dst_port [half load]",
.insns = {
BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
@@ -139,7 +157,64 @@
.result = ACCEPT,
},
{
- "sk_fullsock(skb->sk): sk->dst_port [load 2nd byte]",
+ "sk_fullsock(skb->sk): sk->dst_port [half load] (invalid)",
+ .insns = {
+ BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
+ BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
+ BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ BPF_LDX_MEM(BPF_H, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_sock, dst_port) + 2),
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .prog_type = BPF_PROG_TYPE_CGROUP_SKB,
+ .result = REJECT,
+ .errstr = "invalid sock access",
+},
+{
+ "sk_fullsock(skb->sk): sk->dst_port [byte load]",
+ .insns = {
+ BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
+ BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
+ BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ BPF_LDX_MEM(BPF_B, BPF_REG_2, BPF_REG_0, offsetof(struct bpf_sock, dst_port)),
+ BPF_LDX_MEM(BPF_B, BPF_REG_2, BPF_REG_0, offsetof(struct bpf_sock, dst_port) + 1),
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .prog_type = BPF_PROG_TYPE_CGROUP_SKB,
+ .result = ACCEPT,
+},
+{
+ "sk_fullsock(skb->sk): sk->dst_port [byte load] (invalid)",
+ .insns = {
+ BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
+ BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
+ BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_sock, dst_port) + 2),
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .prog_type = BPF_PROG_TYPE_CGROUP_SKB,
+ .result = REJECT,
+ .errstr = "invalid sock access",
+},
+{
+ "sk_fullsock(skb->sk): past sk->dst_port [half load] (invalid)",
.insns = {
BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
@@ -149,7 +224,7 @@
BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
- BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_sock, dst_port) + 1),
+ BPF_LDX_MEM(BPF_H, BPF_REG_0, BPF_REG_0, offsetofend(struct bpf_sock, dst_port)),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
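
The sock.c additions pin down the load-width rules for bpf_sock's dst_port now that it is exposed as a 2-byte field followed by zero padding: byte and half loads within the field are accepted, the old 4-byte load at the field's offset is kept for backward compatibility, and any load beginning past the field is rejected as "invalid sock access". A condensed sketch of the matrix these tests cover (off stands for offsetof(struct bpf_sock, dst_port)):

	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_0, off),     /* OK: legacy word load  */
	BPF_LDX_MEM(BPF_H, BPF_REG_2, BPF_REG_0, off),     /* OK: exact half load   */
	BPF_LDX_MEM(BPF_B, BPF_REG_2, BPF_REG_0, off + 1), /* OK: byte in the field */
	BPF_LDX_MEM(BPF_H, BPF_REG_2, BPF_REG_0, off + 2), /* rejected: past field  */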
@@ -427,7 +502,7 @@
.fixup_sk_storage_map = { 11 },
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.result = REJECT,
- .errstr = "R3 type=inv expected=fp",
+ .errstr = "R3 type=scalar expected=fp",
},
{
"sk_storage_get(map, skb->sk, &stack_value, 1): stack_value",
diff --git a/tools/testing/selftests/bpf/verifier/spill_fill.c b/tools/testing/selftests/bpf/verifier/spill_fill.c
index 8cfc5349d2a8..e23f07175e1b 100644
--- a/tools/testing/selftests/bpf/verifier/spill_fill.c
+++ b/tools/testing/selftests/bpf/verifier/spill_fill.c
@@ -102,7 +102,7 @@
BPF_EXIT_INSN(),
},
.errstr_unpriv = "attempt to corrupt spilled",
- .errstr = "R0 invalid mem access 'inv",
+ .errstr = "R0 invalid mem access 'scalar'",
.result = REJECT,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
@@ -147,11 +147,11 @@
BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_10, -8),
/* r0 = r2 */
BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
- /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=inv20 */
+ /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=20 */
BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_4),
- /* if (r0 > r3) R0=pkt,off=20 R2=pkt R3=pkt_end R4=inv20 */
+ /* if (r0 > r3) R0=pkt,off=20 R2=pkt R3=pkt_end R4=20 */
BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
- /* r0 = *(u32 *)r2 R0=pkt,off=20,r=20 R2=pkt,r=20 R3=pkt_end R4=inv20 */
+ /* r0 = *(u32 *)r2 R0=pkt,off=20,r=20 R2=pkt,r=20 R3=pkt_end R4=20 */
BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_2, 0),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
@@ -190,11 +190,11 @@
BPF_LDX_MEM(BPF_H, BPF_REG_4, BPF_REG_10, -8),
/* r0 = r2 */
BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
- /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=inv,umax=65535 */
+ /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=umax=65535 */
BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_4),
- /* if (r0 > r3) R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=inv,umax=65535 */
+ /* if (r0 > r3) R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=umax=65535 */
BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
- /* r0 = *(u32 *)r2 R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=inv20 */
+ /* r0 = *(u32 *)r2 R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=20 */
BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_2, 0),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
@@ -222,11 +222,11 @@
BPF_LDX_MEM(BPF_H, BPF_REG_4, BPF_REG_10, -8),
/* r0 = r2 */
BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
- /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=inv,umax=65535 */
+ /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=umax=65535 */
BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_4),
- /* if (r0 > r3) R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=inv,umax=65535 */
+ /* if (r0 > r3) R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=umax=65535 */
BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
- /* r0 = *(u32 *)r2 R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=inv20 */
+ /* r0 = *(u32 *)r2 R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=20 */
BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_2, 0),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
@@ -250,11 +250,11 @@
BPF_LDX_MEM(BPF_H, BPF_REG_4, BPF_REG_10, -6),
/* r0 = r2 */
BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
- /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=inv,umax=65535 */
+ /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=umax=65535 */
BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_4),
- /* if (r0 > r3) R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=inv,umax=65535 */
+ /* if (r0 > r3) R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=umax=65535 */
BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
- /* r0 = *(u32 *)r2 R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=inv20 */
+ /* r0 = *(u32 *)r2 R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=20 */
BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_2, 0),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
@@ -280,11 +280,11 @@
BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_10, -4),
/* r0 = r2 */
BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
- /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=inv,umax=U32_MAX */
+ /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=umax=U32_MAX */
BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_4),
- /* if (r0 > r3) R0=pkt,umax=U32_MAX R2=pkt R3=pkt_end R4=inv */
+ /* if (r0 > r3) R0=pkt,umax=U32_MAX R2=pkt R3=pkt_end R4= */
BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
- /* r0 = *(u32 *)r2 R0=pkt,umax=U32_MAX R2=pkt R3=pkt_end R4=inv */
+ /* r0 = *(u32 *)r2 R0=pkt,umax=U32_MAX R2=pkt R3=pkt_end R4= */
BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_2, 0),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
@@ -305,13 +305,13 @@
BPF_JMP_IMM(BPF_JLE, BPF_REG_4, 40, 2),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
- /* *(u32 *)(r10 -8) = r4 R4=inv,umax=40 */
+ /* *(u32 *)(r10 -8) = r4 R4=umax=40 */
BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_4, -8),
/* r4 = (*u32 *)(r10 - 8) */
BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_10, -8),
- /* r2 += r4 R2=pkt R4=inv,umax=40 */
+ /* r2 += r4 R2=pkt R4=umax=40 */
BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_4),
- /* r0 = r2 R2=pkt,umax=40 R4=inv,umax=40 */
+ /* r0 = r2 R2=pkt,umax=40 R4=umax=40 */
BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
/* r2 += 20 R0=pkt,umax=40 R2=pkt,umax=40 */
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, 20),
diff --git a/tools/testing/selftests/bpf/verifier/unpriv.c b/tools/testing/selftests/bpf/verifier/unpriv.c
index 111801aea5e3..878ca26c3f0a 100644
--- a/tools/testing/selftests/bpf/verifier/unpriv.c
+++ b/tools/testing/selftests/bpf/verifier/unpriv.c
@@ -214,7 +214,7 @@
BPF_EXIT_INSN(),
},
.result = REJECT,
- .errstr = "R1 type=inv expected=ctx",
+ .errstr = "R1 type=scalar expected=ctx",
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
},
{
@@ -420,7 +420,7 @@
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_7, 0),
BPF_EXIT_INSN(),
},
- .errstr_unpriv = "R7 invalid mem access 'inv'",
+ .errstr_unpriv = "R7 invalid mem access 'scalar'",
.result_unpriv = REJECT,
.result = ACCEPT,
.retval = 0,
diff --git a/tools/testing/selftests/bpf/verifier/value_illegal_alu.c b/tools/testing/selftests/bpf/verifier/value_illegal_alu.c
index 489062867218..d6f29eb4bd57 100644
--- a/tools/testing/selftests/bpf/verifier/value_illegal_alu.c
+++ b/tools/testing/selftests/bpf/verifier/value_illegal_alu.c
@@ -64,7 +64,7 @@
},
.fixup_map_hash_48b = { 3 },
.errstr_unpriv = "R0 pointer arithmetic prohibited",
- .errstr = "invalid mem access 'inv'",
+ .errstr = "invalid mem access 'scalar'",
.result = REJECT,
.result_unpriv = REJECT,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
@@ -89,7 +89,7 @@
},
.fixup_map_hash_48b = { 3 },
.errstr_unpriv = "leaking pointer from stack off -8",
- .errstr = "R0 invalid mem access 'inv'",
+ .errstr = "R0 invalid mem access 'scalar'",
.result = REJECT,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
diff --git a/tools/testing/selftests/bpf/verifier/value_ptr_arith.c b/tools/testing/selftests/bpf/verifier/value_ptr_arith.c
index 359f3e8f8b60..249187d3c530 100644
--- a/tools/testing/selftests/bpf/verifier/value_ptr_arith.c
+++ b/tools/testing/selftests/bpf/verifier/value_ptr_arith.c
@@ -397,7 +397,7 @@
.fixup_map_array_48b = { 1 },
.result = ACCEPT,
.result_unpriv = REJECT,
- .errstr_unpriv = "R0 invalid mem access 'inv'",
+ .errstr_unpriv = "R0 invalid mem access 'scalar'",
.retval = 0,
},
{
@@ -1074,7 +1074,7 @@
},
.fixup_map_array_48b = { 3 },
.result = REJECT,
- .errstr = "R0 invalid mem access 'inv'",
+ .errstr = "R0 invalid mem access 'scalar'",
.errstr_unpriv = "R0 pointer -= pointer prohibited",
},
{
diff --git a/tools/testing/selftests/bpf/verifier/var_off.c b/tools/testing/selftests/bpf/verifier/var_off.c
index eab1f7f56e2f..187c6f6e32bc 100644
--- a/tools/testing/selftests/bpf/verifier/var_off.c
+++ b/tools/testing/selftests/bpf/verifier/var_off.c
@@ -131,7 +131,7 @@
* write might have overwritten the spilled pointer (i.e. we lose track
* of the spilled register when we analyze the write).
*/
- .errstr = "R2 invalid mem access 'inv'",
+ .errstr = "R2 invalid mem access 'scalar'",
.result = REJECT,
},
{
diff --git a/tools/testing/selftests/bpf/vmtest.sh b/tools/testing/selftests/bpf/vmtest.sh
index b3afd43549fa..e0bb04a97e10 100755
--- a/tools/testing/selftests/bpf/vmtest.sh
+++ b/tools/testing/selftests/bpf/vmtest.sh
@@ -241,7 +241,7 @@ EOF
-nodefaults \
-display none \
-serial mon:stdio \
- "${qemu_flags[@]}" \
+ "${QEMU_FLAGS[@]}" \
-enable-kvm \
-m 4G \
-drive file="${rootfs_img}",format=raw,index=1,media=disk,if=virtio,cache=none \
diff --git a/tools/testing/selftests/bpf/xdp_redirect_multi.c b/tools/testing/selftests/bpf/xdp_redirect_multi.c
index 51c8224b4ccc..aaedbf4955c3 100644
--- a/tools/testing/selftests/bpf/xdp_redirect_multi.c
+++ b/tools/testing/selftests/bpf/xdp_redirect_multi.c
@@ -32,12 +32,12 @@ static void int_exit(int sig)
int i;
for (i = 0; ifaces[i] > 0; i++) {
- if (bpf_get_link_xdp_id(ifaces[i], &prog_id, xdp_flags)) {
- printf("bpf_get_link_xdp_id failed\n");
+ if (bpf_xdp_query_id(ifaces[i], xdp_flags, &prog_id)) {
+ printf("bpf_xdp_query_id failed\n");
exit(1);
}
if (prog_id)
- bpf_set_link_xdp_fd(ifaces[i], -1, xdp_flags);
+ bpf_xdp_detach(ifaces[i], xdp_flags, NULL);
}
exit(0);
@@ -210,7 +210,7 @@ int main(int argc, char **argv)
}
/* bind prog_fd to each interface */
- ret = bpf_set_link_xdp_fd(ifindex, prog_fd, xdp_flags);
+ ret = bpf_xdp_attach(ifindex, prog_fd, xdp_flags, NULL);
if (ret) {
printf("Set xdp fd failed on %d\n", ifindex);
goto err_out;
diff --git a/tools/testing/selftests/bpf/xdping.c b/tools/testing/selftests/bpf/xdping.c
index baa870a759a2..c567856fd1bc 100644
--- a/tools/testing/selftests/bpf/xdping.c
+++ b/tools/testing/selftests/bpf/xdping.c
@@ -29,7 +29,7 @@ static __u32 xdp_flags = XDP_FLAGS_UPDATE_IF_NOEXIST;
static void cleanup(int sig)
{
- bpf_set_link_xdp_fd(ifindex, -1, xdp_flags);
+ bpf_xdp_detach(ifindex, xdp_flags, NULL);
if (sig)
exit(1);
}
@@ -203,7 +203,7 @@ int main(int argc, char **argv)
printf("XDP setup disrupts network connectivity, hit Ctrl+C to quit\n");
- if (bpf_set_link_xdp_fd(ifindex, prog_fd, xdp_flags) < 0) {
+ if (bpf_xdp_attach(ifindex, prog_fd, xdp_flags, NULL) < 0) {
fprintf(stderr, "Link set xdp fd failed for %s\n", ifname);
goto done;
}
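
Both xdp_redirect_multi.c and xdping.c migrate from the deprecated bpf_set_link_xdp_fd()/bpf_get_link_xdp_id() pair to the current libbpf XDP API, which splits attach, query, and detach into dedicated calls. A minimal sketch of the mapping, using only the calls that appear in the diff:

#include <linux/types.h>
#include <bpf/libbpf.h>

/* Sketch: old-to-new libbpf XDP API correspondence. */
static int xdp_api_sketch(int ifindex, int prog_fd, __u32 xdp_flags)
{
	__u32 prog_id = 0;
	int err;

	/* was: bpf_set_link_xdp_fd(ifindex, prog_fd, xdp_flags) */
	err = bpf_xdp_attach(ifindex, prog_fd, xdp_flags, NULL);
	if (err)
		return err;

	/* was: bpf_get_link_xdp_id(ifindex, &prog_id, xdp_flags) */
	err = bpf_xdp_query_id(ifindex, xdp_flags, &prog_id);
	if (err)
		return err;

	/* was: bpf_set_link_xdp_fd(ifindex, -1, xdp_flags) */
	return prog_id ? bpf_xdp_detach(ifindex, xdp_flags, NULL) : 0;
}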
diff --git a/tools/testing/selftests/bpf/xdpxceiver.c b/tools/testing/selftests/bpf/xdpxceiver.c
index 0a5d23da486d..5f8296d29e77 100644
--- a/tools/testing/selftests/bpf/xdpxceiver.c
+++ b/tools/testing/selftests/bpf/xdpxceiver.c
@@ -266,22 +266,24 @@ static int xsk_configure_umem(struct xsk_umem_info *umem, void *buffer, u64 size
}
static int xsk_configure_socket(struct xsk_socket_info *xsk, struct xsk_umem_info *umem,
- struct ifobject *ifobject, u32 qid)
+ struct ifobject *ifobject, bool shared)
{
- struct xsk_socket_config cfg;
+ struct xsk_socket_config cfg = {};
struct xsk_ring_cons *rxr;
struct xsk_ring_prod *txr;
xsk->umem = umem;
cfg.rx_size = xsk->rxqsize;
cfg.tx_size = XSK_RING_PROD__DEFAULT_NUM_DESCS;
- cfg.libbpf_flags = 0;
+ cfg.libbpf_flags = XSK_LIBBPF_FLAGS__INHIBIT_PROG_LOAD;
cfg.xdp_flags = ifobject->xdp_flags;
cfg.bind_flags = ifobject->bind_flags;
+ if (shared)
+ cfg.bind_flags |= XDP_SHARED_UMEM;
txr = ifobject->tx_on ? &xsk->tx : NULL;
rxr = ifobject->rx_on ? &xsk->rx : NULL;
- return xsk_socket__create(&xsk->xsk, ifobject->ifname, qid, umem->umem, rxr, txr, &cfg);
+ return xsk_socket__create(&xsk->xsk, ifobject->ifname, 0, umem->umem, rxr, txr, &cfg);
}
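
With xdpxceiver moving to a single UMEM shared by all sockets on an interface, every socket after the first must bind with the XDP_SHARED_UMEM flag, which is exactly what the new "shared" parameter toggles. A sketch of the configuration the function now builds (assumes *cfg starts zeroed, as the `= {}` initializer above guarantees; ring sizes shown with their defaults):

#include <stdbool.h>
#include <linux/if_xdp.h>	/* XDP_SHARED_UMEM */
#include <bpf/xsk.h>

static void xsk_cfg_sketch(struct xsk_socket_config *cfg, bool shared)
{
	cfg->rx_size = XSK_RING_CONS__DEFAULT_NUM_DESCS;
	cfg->tx_size = XSK_RING_PROD__DEFAULT_NUM_DESCS;
	/* the test now loads and manages its own XDP program, so keep
	 * libbpf from attaching the default one */
	cfg->libbpf_flags = XSK_LIBBPF_FLAGS__INHIBIT_PROG_LOAD;
	if (shared)
		cfg->bind_flags |= XDP_SHARED_UMEM;
}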
static struct option long_options[] = {
@@ -387,7 +389,6 @@ static void __test_spec_init(struct test_spec *test, struct ifobject *ifobj_tx,
for (i = 0; i < MAX_INTERFACES; i++) {
struct ifobject *ifobj = i ? ifobj_rx : ifobj_tx;
- ifobj->umem = &ifobj->umem_arr[0];
ifobj->xsk = &ifobj->xsk_arr[0];
ifobj->use_poll = false;
ifobj->pacing_on = true;
@@ -401,11 +402,12 @@ static void __test_spec_init(struct test_spec *test, struct ifobject *ifobj_tx,
ifobj->tx_on = false;
}
+ memset(ifobj->umem, 0, sizeof(*ifobj->umem));
+ ifobj->umem->num_frames = DEFAULT_UMEM_BUFFERS;
+ ifobj->umem->frame_size = XSK_UMEM__DEFAULT_FRAME_SIZE;
+
for (j = 0; j < MAX_SOCKETS; j++) {
- memset(&ifobj->umem_arr[j], 0, sizeof(ifobj->umem_arr[j]));
memset(&ifobj->xsk_arr[j], 0, sizeof(ifobj->xsk_arr[j]));
- ifobj->umem_arr[j].num_frames = DEFAULT_UMEM_BUFFERS;
- ifobj->umem_arr[j].frame_size = XSK_UMEM__DEFAULT_FRAME_SIZE;
ifobj->xsk_arr[j].rxqsize = XSK_RING_CONS__DEFAULT_NUM_DESCS;
}
}
@@ -906,7 +908,10 @@ static bool rx_stats_are_valid(struct ifobject *ifobject)
return true;
case STAT_TEST_RX_FULL:
xsk_stat = stats.rx_ring_full;
- expected_stat -= RX_FULL_RXQSIZE;
+ if (ifobject->umem->num_frames < XSK_RING_PROD__DEFAULT_NUM_DESCS)
+ expected_stat = ifobject->umem->num_frames - RX_FULL_RXQSIZE;
+ else
+ expected_stat = XSK_RING_PROD__DEFAULT_NUM_DESCS - RX_FULL_RXQSIZE;
break;
case STAT_TEST_RX_FILL_EMPTY:
xsk_stat = stats.rx_fill_ring_empty_descs;
@@ -947,7 +952,10 @@ static void tx_stats_validate(struct ifobject *ifobject)
static void thread_common_ops(struct test_spec *test, struct ifobject *ifobject)
{
+ u64 umem_sz = ifobject->umem->num_frames * ifobject->umem->frame_size;
int mmap_flags = MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE;
+ int ret, ifindex;
+ void *bufs;
u32 i;
ifobject->ns_fd = switch_namespace(ifobject->nsname);
@@ -955,23 +963,20 @@ static void thread_common_ops(struct test_spec *test, struct ifobject *ifobject)
if (ifobject->umem->unaligned_mode)
mmap_flags |= MAP_HUGETLB;
- for (i = 0; i < test->nb_sockets; i++) {
- u64 umem_sz = ifobject->umem->num_frames * ifobject->umem->frame_size;
- u32 ctr = 0;
- void *bufs;
- int ret;
+ bufs = mmap(NULL, umem_sz, PROT_READ | PROT_WRITE, mmap_flags, -1, 0);
+ if (bufs == MAP_FAILED)
+ exit_with_error(errno);
- bufs = mmap(NULL, umem_sz, PROT_READ | PROT_WRITE, mmap_flags, -1, 0);
- if (bufs == MAP_FAILED)
- exit_with_error(errno);
+ ret = xsk_configure_umem(ifobject->umem, bufs, umem_sz);
+ if (ret)
+ exit_with_error(-ret);
- ret = xsk_configure_umem(&ifobject->umem_arr[i], bufs, umem_sz);
- if (ret)
- exit_with_error(-ret);
+ for (i = 0; i < test->nb_sockets; i++) {
+ u32 ctr = 0;
while (ctr++ < SOCK_RECONF_CTR) {
- ret = xsk_configure_socket(&ifobject->xsk_arr[i], &ifobject->umem_arr[i],
- ifobject, i);
+ ret = xsk_configure_socket(&ifobject->xsk_arr[i], ifobject->umem,
+ ifobject, !!i);
if (!ret)
break;
@@ -982,8 +987,22 @@ static void thread_common_ops(struct test_spec *test, struct ifobject *ifobject)
}
}
- ifobject->umem = &ifobject->umem_arr[0];
ifobject->xsk = &ifobject->xsk_arr[0];
+
+ if (!ifobject->rx_on)
+ return;
+
+ ifindex = if_nametoindex(ifobject->ifname);
+ if (!ifindex)
+ exit_with_error(errno);
+
+ ret = xsk_setup_xdp_prog(ifindex, &ifobject->xsk_map_fd);
+ if (ret)
+ exit_with_error(-ret);
+
+ ret = xsk_socket__update_xskmap(ifobject->xsk->xsk, ifobject->xsk_map_fd);
+ if (ret)
+ exit_with_error(-ret);
}
static void testapp_cleanup_xsk_res(struct ifobject *ifobj)
@@ -1139,14 +1158,16 @@ static void testapp_bidi(struct test_spec *test)
static void swap_xsk_resources(struct ifobject *ifobj_tx, struct ifobject *ifobj_rx)
{
+ int ret;
+
xsk_socket__delete(ifobj_tx->xsk->xsk);
- xsk_umem__delete(ifobj_tx->umem->umem);
xsk_socket__delete(ifobj_rx->xsk->xsk);
- xsk_umem__delete(ifobj_rx->umem->umem);
- ifobj_tx->umem = &ifobj_tx->umem_arr[1];
ifobj_tx->xsk = &ifobj_tx->xsk_arr[1];
- ifobj_rx->umem = &ifobj_rx->umem_arr[1];
ifobj_rx->xsk = &ifobj_rx->xsk_arr[1];
+
+ ret = xsk_socket__update_xskmap(ifobj_rx->xsk->xsk, ifobj_rx->xsk_map_fd);
+ if (ret)
+ exit_with_error(-ret);
}
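
Because XSK_LIBBPF_FLAGS__INHIBIT_PROG_LOAD stops socket creation from installing the default program, the test now wires the XSKMAP up itself: once per RX interface at setup time, and again whenever the active socket changes. A sketch pairing the two halves added above:

	/* setup (thread_common_ops): attach the XDP prog once and keep
	 * the map fd around */
	ret = xsk_setup_xdp_prog(ifindex, &ifobject->xsk_map_fd);

	/* after switching to xsk_arr[1] (swap_xsk_resources): point the
	 * XSKMAP slot at the newly active socket */
	ret = xsk_socket__update_xskmap(ifobj_rx->xsk->xsk, ifobj_rx->xsk_map_fd);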
static void testapp_bpf_res(struct test_spec *test)
@@ -1405,13 +1426,13 @@ static struct ifobject *ifobject_create(void)
if (!ifobj->xsk_arr)
goto out_xsk_arr;
- ifobj->umem_arr = calloc(MAX_SOCKETS, sizeof(*ifobj->umem_arr));
- if (!ifobj->umem_arr)
- goto out_umem_arr;
+ ifobj->umem = calloc(1, sizeof(*ifobj->umem));
+ if (!ifobj->umem)
+ goto out_umem;
return ifobj;
-out_umem_arr:
+out_umem:
free(ifobj->xsk_arr);
out_xsk_arr:
free(ifobj);
@@ -1420,7 +1441,7 @@ out_xsk_arr:
static void ifobject_delete(struct ifobject *ifobj)
{
- free(ifobj->umem_arr);
+ free(ifobj->umem);
free(ifobj->xsk_arr);
free(ifobj);
}
diff --git a/tools/testing/selftests/bpf/xdpxceiver.h b/tools/testing/selftests/bpf/xdpxceiver.h
index 2f705f44b748..62a3e6388632 100644
--- a/tools/testing/selftests/bpf/xdpxceiver.h
+++ b/tools/testing/selftests/bpf/xdpxceiver.h
@@ -125,10 +125,10 @@ struct ifobject {
struct xsk_socket_info *xsk;
struct xsk_socket_info *xsk_arr;
struct xsk_umem_info *umem;
- struct xsk_umem_info *umem_arr;
thread_func_t func_ptr;
struct pkt_stream *pkt_stream;
int ns_fd;
+ int xsk_map_fd;
u32 dst_ip;
u32 src_ip;
u32 xdp_flags;
diff --git a/tools/testing/selftests/drivers/net/mlxsw/hw_stats_l3.sh b/tools/testing/selftests/drivers/net/mlxsw/hw_stats_l3.sh
new file mode 100755
index 000000000000..941ba4c485c9
--- /dev/null
+++ b/tools/testing/selftests/drivers/net/mlxsw/hw_stats_l3.sh
@@ -0,0 +1,31 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+
+lib_dir=$(dirname $0)/../../../net/forwarding
+
+ALL_TESTS="
+ l3_monitor_test
+"
+NUM_NETIFS=0
+source $lib_dir/lib.sh
+
+swp=$NETIF_NO_CABLE
+
+cleanup()
+{
+ pre_cleanup
+}
+
+l3_monitor_test()
+{
+ hw_stats_monitor_test $swp l3 \
+ "ip addr add dev $swp 192.0.2.1/28" \
+ "ip addr del dev $swp 192.0.2.1/28"
+}
+
+trap cleanup EXIT
+
+setup_wait
+tests_run
+
+exit $EXIT_STATUS
diff --git a/tools/testing/selftests/drivers/net/netdevsim/hw_stats_l3.sh b/tools/testing/selftests/drivers/net/netdevsim/hw_stats_l3.sh
new file mode 100755
index 000000000000..fe1898402987
--- /dev/null
+++ b/tools/testing/selftests/drivers/net/netdevsim/hw_stats_l3.sh
@@ -0,0 +1,421 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+
+lib_dir=$(dirname $0)/../../../net/forwarding
+
+ALL_TESTS="
+ l3_reporting_test
+ l3_fail_next_test
+ l3_counter_test
+ l3_rollback_test
+ l3_monitor_test
+"
+
+NETDEVSIM_PATH=/sys/bus/netdevsim/
+DEV_ADDR_1=1337
+DEV_ADDR_2=1057
+DEV_ADDR_3=5417
+NUM_NETIFS=0
+source $lib_dir/lib.sh
+
+DUMMY_IFINDEX=
+
+DEV_ADDR()
+{
+ local n=$1; shift
+ local var=DEV_ADDR_$n
+
+ echo ${!var}
+}
+
+DEV()
+{
+ echo netdevsim$(DEV_ADDR $1)
+}
+
+DEVLINK_DEV()
+{
+ echo netdevsim/$(DEV $1)
+}
+
+SYSFS_NET_DIR()
+{
+ echo /sys/bus/netdevsim/devices/$(DEV $1)/net/
+}
+
+DEBUGFS_DIR()
+{
+ echo /sys/kernel/debug/netdevsim/$(DEV $1)/
+}
+
+nsim_add()
+{
+ local n=$1; shift
+
+ echo "$(DEV_ADDR $n) 1" > ${NETDEVSIM_PATH}/new_device
+ while [ ! -d $(SYSFS_NET_DIR $n) ] ; do :; done
+}
+
+nsim_reload()
+{
+ local n=$1; shift
+ local ns=$1; shift
+
+ devlink dev reload $(DEVLINK_DEV $n) netns $ns
+
+ if [ $? -ne 0 ]; then
+ echo "Failed to reload $(DEV $n) into netns \"testns1\""
+ exit 1
+ fi
+
+}
+
+nsim_del()
+{
+ local n=$1; shift
+
+ echo "$(DEV_ADDR $n)" > ${NETDEVSIM_PATH}/del_device
+}
+
+nsim_hwstats_toggle()
+{
+ local action=$1; shift
+ local instance=$1; shift
+ local netdev=$1; shift
+ local type=$1; shift
+
+ local ifindex=$($IP -j link show dev $netdev | jq '.[].ifindex')
+
+ echo $ifindex > $(DEBUGFS_DIR $instance)/hwstats/$type/$action
+}
+
+nsim_hwstats_enable()
+{
+ nsim_hwstats_toggle enable_ifindex "$@"
+}
+
+nsim_hwstats_disable()
+{
+ nsim_hwstats_toggle disable_ifindex "$@"
+}
+
+nsim_hwstats_fail_next_enable()
+{
+ nsim_hwstats_toggle fail_next_enable "$@"
+}
+
+setup_prepare()
+{
+ modprobe netdevsim &> /dev/null
+ nsim_add 1
+ nsim_add 2
+ nsim_add 3
+
+ ip netns add testns1
+
+ if [ $? -ne 0 ]; then
+ echo "Failed to add netns \"testns1\""
+ exit 1
+ fi
+
+ nsim_reload 1 testns1
+ nsim_reload 2 testns1
+ nsim_reload 3 testns1
+
+ IP="ip -n testns1"
+
+ $IP link add name dummy1 type dummy
+ $IP link set dev dummy1 up
+ DUMMY_IFINDEX=$($IP -j link show dev dummy1 | jq '.[].ifindex')
+}
+
+cleanup()
+{
+ pre_cleanup
+
+ $IP link del name dummy1
+ ip netns del testns1
+ nsim_del 3
+ nsim_del 2
+ nsim_del 1
+ modprobe -r netdevsim &> /dev/null
+}
+
+netdev_hwstats_used()
+{
+ local netdev=$1; shift
+ local type=$1; shift
+
+ $IP -j stats show dev "$netdev" group offload subgroup hw_stats_info |
+ jq '.[].info.l3_stats.used'
+}
+
+netdev_check_used()
+{
+ local netdev=$1; shift
+ local type=$1; shift
+
+ [[ $(netdev_hwstats_used $netdev $type) == "true" ]]
+}
+
+netdev_check_unused()
+{
+ local netdev=$1; shift
+ local type=$1; shift
+
+ [[ $(netdev_hwstats_used $netdev $type) == "false" ]]
+}
+
+netdev_hwstats_request()
+{
+ local netdev=$1; shift
+ local type=$1; shift
+
+ $IP -j stats show dev "$netdev" group offload subgroup hw_stats_info |
+ jq ".[].info.${type}_stats.request"
+}
+
+netdev_check_requested()
+{
+ local netdev=$1; shift
+ local type=$1; shift
+
+ [[ $(netdev_hwstats_request $netdev $type) == "true" ]]
+}
+
+netdev_check_unrequested()
+{
+ local netdev=$1; shift
+ local type=$1; shift
+
+ [[ $(netdev_hwstats_request $netdev $type) == "false" ]]
+}
+
+reporting_test()
+{
+ local type=$1; shift
+ local instance=1
+
+ RET=0
+
+ [[ -n $(netdev_hwstats_used dummy1 $type) ]]
+ check_err $? "$type stats not reported"
+
+ netdev_check_unused dummy1 $type
+ check_err $? "$type stats reported as used before either device or netdevsim request"
+
+ nsim_hwstats_enable $instance dummy1 $type
+ netdev_check_unused dummy1 $type
+ check_err $? "$type stats reported as used before device request"
+ netdev_check_unrequested dummy1 $type
+ check_err $? "$type stats reported as requested before device request"
+
+ $IP stats set dev dummy1 ${type}_stats on
+ netdev_check_used dummy1 $type
+ check_err $? "$type stats reported as not used after both device and netdevsim request"
+ netdev_check_requested dummy1 $type
+ check_err $? "$type stats reported as not requested after device request"
+
+ nsim_hwstats_disable $instance dummy1 $type
+ netdev_check_unused dummy1 $type
+ check_err $? "$type stats reported as used after netdevsim request withdrawn"
+
+ nsim_hwstats_enable $instance dummy1 $type
+ netdev_check_used dummy1 $type
+ check_err $? "$type stats reported as not used after netdevsim request reenabled"
+
+ $IP stats set dev dummy1 ${type}_stats off
+ netdev_check_unused dummy1 $type
+ check_err $? "$type stats reported as used after device request withdrawn"
+ netdev_check_unrequested dummy1 $type
+ check_err $? "$type stats reported as requested after device request withdrawn"
+
+ nsim_hwstats_disable $instance dummy1 $type
+ netdev_check_unused dummy1 $type
+ check_err $? "$type stats reported as used after both requests withdrawn"
+
+ log_test "Reporting of $type stats usage"
+}
+
+l3_reporting_test()
+{
+ reporting_test l3
+}
+
+__fail_next_test()
+{
+ local instance=$1; shift
+ local type=$1; shift
+
+ RET=0
+
+ netdev_check_unused dummy1 $type
+ check_err $? "$type stats reported as used before either device or netdevsim request"
+
+ nsim_hwstats_enable $instance dummy1 $type
+ nsim_hwstats_fail_next_enable $instance dummy1 $type
+ netdev_check_unused dummy1 $type
+ check_err $? "$type stats reported as used before device request"
+ netdev_check_unrequested dummy1 $type
+ check_err $? "$type stats reported as requested before device request"
+
+ $IP stats set dev dummy1 ${type}_stats on 2>/dev/null
+ check_fail $? "$type stats request not bounced as it should have been"
+ netdev_check_unused dummy1 $type
+ check_err $? "$type stats reported as used after bounce"
+ netdev_check_unrequested dummy1 $type
+ check_err $? "$type stats reported as requested after bounce"
+
+ $IP stats set dev dummy1 ${type}_stats on
+ check_err $? "$type stats request failed when it shouldn't have"
+ netdev_check_used dummy1 $type
+ check_err $? "$type stats reported as not used after both device and netdevsim request"
+ netdev_check_requested dummy1 $type
+ check_err $? "$type stats reported as not requested after device request"
+
+ $IP stats set dev dummy1 ${type}_stats off
+ nsim_hwstats_disable $instance dummy1 $type
+
+ log_test "Injected failure of $type stats enablement (netdevsim #$instance)"
+}
+
+fail_next_test()
+{
+ __fail_next_test 1 "$@"
+ __fail_next_test 2 "$@"
+ __fail_next_test 3 "$@"
+}
+
+l3_fail_next_test()
+{
+ fail_next_test l3
+}
+
+get_hwstat()
+{
+ local netdev=$1; shift
+ local type=$1; shift
+ local selector=$1; shift
+
+ $IP -j stats show dev $netdev group offload subgroup ${type}_stats |
+ jq ".[0].stats64.${selector}"
+}
+
+counter_test()
+{
+ local type=$1; shift
+ local instance=1
+
+ RET=0
+
+ nsim_hwstats_enable $instance dummy1 $type
+ $IP stats set dev dummy1 ${type}_stats on
+ netdev_check_used dummy1 $type
+ check_err $? "$type stats reported as not used after both device and netdevsim request"
+
+ # Netdevsim counts 10 pps on ingress. We should see maybe a couple of
+ # packets, unless things take a really long time.
+ local pkts=$(get_hwstat dummy1 l3 rx.packets)
+ ((pkts < 10))
+ check_err $? "$type stats show >= 10 packets after first enablement"
+
+ sleep 2
+
+ local pkts=$(get_hwstat dummy1 l3 rx.packets)
+ ((pkts >= 20))
+ check_err $? "$type stats show < 20 packets after 2s passed"
+
+ $IP stats set dev dummy1 ${type}_stats off
+
+ sleep 2
+
+ $IP stats set dev dummy1 ${type}_stats on
+ local pkts=$(get_hwstat dummy1 l3 rx.packets)
+ ((pkts < 10))
+ check_err $? "$type stats show >= 10 packets after second enablement"
+
+ $IP stats set dev dummy1 ${type}_stats off
+ nsim_hwstats_fail_next_enable $instance dummy1 $type
+ $IP stats set dev dummy1 ${type}_stats on 2>/dev/null
+ check_fail $? "$type stats request not bounced as it should have been"
+
+ sleep 2
+
+ $IP stats set dev dummy1 ${type}_stats on
+ local pkts=$(get_hwstat dummy1 l3 rx.packets)
+ ((pkts < 10))
+ check_err $? "$type stats show >= 10 packets after post-fail enablement"
+
+ $IP stats set dev dummy1 ${type}_stats off
+
+ log_test "Counter values in $type stats"
+}
+
+l3_counter_test()
+{
+ counter_test l3
+}
+
+rollback_test()
+{
+ local type=$1; shift
+
+ RET=0
+
+ nsim_hwstats_enable 1 dummy1 l3
+ nsim_hwstats_enable 2 dummy1 l3
+ nsim_hwstats_enable 3 dummy1 l3
+
+ # The three netdevsim instances are registered one after another, in
+ # order of their number. It is reasonable to expect that whatever
+ # notifications take place hit no. 2 in between hitting nos. 1 and 3,
+ # whatever the actual order. This lets us test that a failure injected
+ # at no. 2 does not leave the system in a partial state, and that
+ # everything is rolled back.
+
+ nsim_hwstats_fail_next_enable 2 dummy1 l3
+ $IP stats set dev dummy1 ${type}_stats on 2>/dev/null
+ check_fail $? "$type stats request not bounced as it should have been"
+
+ netdev_check_unused dummy1 $type
+ check_err $? "$type stats reported as used after bounce"
+ netdev_check_unrequested dummy1 $type
+ check_err $? "$type stats reported as requested after bounce"
+
+ sleep 2
+
+ $IP stats set dev dummy1 ${type}_stats on
+ check_err $? "$type stats request not upheld as it should have been"
+
+ local pkts=$(get_hwstat dummy1 l3 rx.packets)
+ ((pkts < 10))
+ check_err $? "$type stats show $pkts packets after post-fail enablement"
+
+ $IP stats set dev dummy1 ${type}_stats off
+
+ nsim_hwstats_disable 3 dummy1 l3
+ nsim_hwstats_disable 2 dummy1 l3
+ nsim_hwstats_disable 1 dummy1 l3
+
+ log_test "Failure in $type stats enablement rolled back"
+}
+
+l3_rollback_test()
+{
+ rollback_test l3
+}
+
+l3_monitor_test()
+{
+ hw_stats_monitor_test dummy1 l3 \
+ "nsim_hwstats_enable 1 dummy1 l3" \
+ "nsim_hwstats_disable 1 dummy1 l3" \
+ "$IP"
+}
+
+trap cleanup EXIT
+
+setup_prepare
+tests_run
+
+exit $EXIT_STATUS
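
A note for hand-testing: get_hwstat() above is a thin wrapper around
"ip -j stats". A minimal interactive sketch, assuming an l3_stats-capable
device named dummy1 and jq being available:

    $ ip stats set dev dummy1 l3_stats on
    $ ip -j stats show dev dummy1 group offload subgroup l3_stats |
          jq '.[0].stats64.rx.packets'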
diff --git a/tools/testing/selftests/net/.gitignore b/tools/testing/selftests/net/.gitignore
index 7581a7348e1b..21a411b04890 100644
--- a/tools/testing/selftests/net/.gitignore
+++ b/tools/testing/selftests/net/.gitignore
@@ -35,4 +35,4 @@ test_unix_oob
gro
ioam6_parser
toeplitz
-cmsg_so_mark
+cmsg_sender
diff --git a/tools/testing/selftests/net/Makefile b/tools/testing/selftests/net/Makefile
index 0b1488616c55..3fe2515aa616 100644
--- a/tools/testing/selftests/net/Makefile
+++ b/tools/testing/selftests/net/Makefile
@@ -30,6 +30,7 @@ TEST_PROGS += ioam6.sh
TEST_PROGS += gro.sh
TEST_PROGS += gre_gso.sh
TEST_PROGS += cmsg_so_mark.sh
+TEST_PROGS += cmsg_time.sh
TEST_PROGS += srv6_end_dt46_l3vpn_test.sh
TEST_PROGS += srv6_end_dt4_l3vpn_test.sh
TEST_PROGS += srv6_end_dt6_l3vpn_test.sh
@@ -52,7 +53,7 @@ TEST_GEN_FILES += gro
TEST_GEN_PROGS = reuseport_bpf reuseport_bpf_cpu reuseport_bpf_numa
TEST_GEN_PROGS += reuseport_dualstack reuseaddr_conflict tls
TEST_GEN_FILES += toeplitz
-TEST_GEN_FILES += cmsg_so_mark
+TEST_GEN_FILES += cmsg_sender
TEST_FILES := settings
diff --git a/tools/testing/selftests/net/af_unix/test_unix_oob.c b/tools/testing/selftests/net/af_unix/test_unix_oob.c
index 3dece8b29253..b57e91e1c3f2 100644
--- a/tools/testing/selftests/net/af_unix/test_unix_oob.c
+++ b/tools/testing/selftests/net/af_unix/test_unix_oob.c
@@ -218,10 +218,10 @@ main(int argc, char **argv)
/* Test 1:
* verify that SIGURG is
- * delivered and 63 bytes are
- * read and oob is '@'
+ * delivered, 63 bytes are
+ * read, oob is '@', and POLLPRI works.
*/
- wait_for_data(pfd, POLLIN | POLLPRI);
+ wait_for_data(pfd, POLLPRI);
read_oob(pfd, &oob);
len = read_data(pfd, buf, 1024);
if (!signal_recvd || len != 63 || oob != '@') {
diff --git a/tools/testing/selftests/net/cmsg_ipv6.sh b/tools/testing/selftests/net/cmsg_ipv6.sh
new file mode 100755
index 000000000000..2d89cb0ad288
--- /dev/null
+++ b/tools/testing/selftests/net/cmsg_ipv6.sh
@@ -0,0 +1,156 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+
+ksft_skip=4
+
+NS=ns
+IP6=2001:db8:1::1/64
+TGT6=2001:db8:1::2
+TMPF=`mktemp`
+
+cleanup()
+{
+ rm -f $TMPF
+ ip netns del $NS
+}
+
+trap cleanup EXIT
+
+NSEXE="ip netns exec $NS"
+
+tcpdump -h | grep immediate-mode >> /dev/null
+if [ $? -ne 0 ]; then
+ echo "SKIP - tcpdump with --immediate-mode option required"
+ exit $ksft_skip
+fi
+
+# Namespaces
+ip netns add $NS
+
+$NSEXE sysctl -w net.ipv4.ping_group_range='0 2147483647' > /dev/null
+
+# Connectivity
+ip -netns $NS link add type dummy
+ip -netns $NS link set dev dummy0 up
+ip -netns $NS addr add $IP6 dev dummy0
+
+# Test
+BAD=0
+TOTAL=0
+
+check_result() {
+ ((TOTAL++))
+ if [ $1 -ne $2 ]; then
+ echo " Case $3 returned $1, expected $2"
+ ((BAD++))
+ fi
+}
+
+# IPV6_DONTFRAG
+for ovr in setsock cmsg both diff; do
+ for df in 0 1; do
+ for p in u i r; do
+ [ $p == "u" ] && prot=UDP
+ [ $p == "i" ] && prot=ICMP
+ [ $p == "r" ] && prot=RAW
+
+ [ $ovr == "setsock" ] && m="-F $df"
+ [ $ovr == "cmsg" ] && m="-f $df"
+ [ $ovr == "both" ] && m="-F $df -f $df"
+ [ $ovr == "diff" ] && m="-F $((1 - df)) -f $df"
+
+ $NSEXE ./cmsg_sender -s -S 2000 -6 -p $p $m $TGT6 1234
+ check_result $? $df "DONTFRAG $prot $ovr"
+ done
+ done
+done
+
+# IPV6_TCLASS
+TOS=0x10
+TOS2=0x20
+
+ip -6 -netns $NS rule add tos $TOS lookup 300
+ip -6 -netns $NS route add table 300 prohibit any
+
+for ovr in setsock cmsg both diff; do
+ for p in u i r; do
+ [ $p == "u" ] && prot=UDP
+ [ $p == "i" ] && prot=ICMP
+ [ $p == "r" ] && prot=RAW
+
+ [ $ovr == "setsock" ] && m="-C"
+ [ $ovr == "cmsg" ] && m="-c"
+ [ $ovr == "both" ] && m="-C $((TOS2)) -c"
+ [ $ovr == "diff" ] && m="-C $((TOS )) -c"
+
+ $NSEXE nohup tcpdump --immediate-mode -p -ni dummy0 -w $TMPF -c 4 2> /dev/null &
+ BG=$!
+ sleep 0.05
+
+ $NSEXE ./cmsg_sender -6 -p $p $m $((TOS2)) $TGT6 1234
+ check_result $? 0 "TCLASS $prot $ovr - pass"
+
+ while [ -d /proc/$BG ]; do
+ $NSEXE ./cmsg_sender -6 -p u $TGT6 1234
+ done
+
+ tcpdump -r $TMPF -v 2>&1 | grep "class $TOS2" >> /dev/null
+ check_result $? 0 "TCLASS $prot $ovr - packet data"
+ rm $TMPF
+
+ [ $ovr == "both" ] && m="-C $((TOS )) -c"
+ [ $ovr == "diff" ] && m="-C $((TOS2)) -c"
+
+ $NSEXE ./cmsg_sender -6 -p $p $m $((TOS)) -s $TGT6 1234
+ check_result $? 1 "TCLASS $prot $ovr - rejection"
+ done
+done
+
+# IPV6_HOPLIMIT
+LIM=4
+
+for ovr in setsock cmsg both diff; do
+ for p in u i r; do
+ [ $p == "u" ] && prot=UDP
+ [ $p == "i" ] && prot=ICMP
+ [ $p == "r" ] && prot=RAW
+
+ [ $ovr == "setsock" ] && m="-L"
+ [ $ovr == "cmsg" ] && m="-l"
+ [ $ovr == "both" ] && m="-L $LIM -l"
+ [ $ovr == "diff" ] && m="-L $((LIM + 1)) -l"
+
+ $NSEXE nohup tcpdump --immediate-mode -p -ni dummy0 -w $TMPF -c 4 2> /dev/null &
+ BG=$!
+ sleep 0.05
+
+ $NSEXE ./cmsg_sender -6 -p $p $m $LIM $TGT6 1234
+ check_result $? 0 "HOPLIMIT $prot $ovr - pass"
+
+ while [ -d /proc/$BG ]; do
+ $NSEXE ./cmsg_sender -6 -p u $TGT6 1234
+ done
+
+ tcpdump -r $TMPF -v 2>&1 | grep "hlim $LIM[^0-9]" >> /dev/null
+ check_result $? 0 "HOPLIMIT $prot $ovr - packet data"
+ rm $TMPF
+ done
+done
+
+# IPV6 exthdr
+for p in u i r; do
+ [ $p == "u" ] && prot=UDP
+ [ $p == "i" ] && prot=ICMP
+ [ $p == "r" ] && prot=RAW
+
+ # Very basic "does it crash" test
+ for h in h d r; do
+ $NSEXE ./cmsg_sender -p $p -6 -H $h $TGT6 1234
+ check_result $? 0 "ExtHdr $prot $h - pass"
+ done
+done
+
+# Summary
+if [ $BAD -ne 0 ]; then
+ echo "FAIL - $BAD/$TOTAL cases failed"
+ exit 1
+else
+ echo "OK"
+ exit 0
+fi
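
To reproduce a single DONTFRAG case outside the harness, a sketch reusing
the namespace and dummy0 setup from this script: send a datagram larger
than the 1500-byte MTU with don't-fragment requested via cmsg and check
that send() bounces.

    $ ip netns exec ns ./cmsg_sender -s -S 2000 -6 -p u -f 1 2001:db8:1::2 1234
    $ echo $?   # 1 (ERN_SEND): the oversized DF datagram was rejected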
diff --git a/tools/testing/selftests/net/cmsg_sender.c b/tools/testing/selftests/net/cmsg_sender.c
new file mode 100644
index 000000000000..bc2162909a1a
--- /dev/null
+++ b/tools/testing/selftests/net/cmsg_sender.c
@@ -0,0 +1,506 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+#include <errno.h>
+#include <error.h>
+#include <netdb.h>
+#include <stdbool.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <time.h>
+#include <unistd.h>
+#include <linux/errqueue.h>
+#include <linux/icmp.h>
+#include <linux/icmpv6.h>
+#include <linux/net_tstamp.h>
+#include <linux/types.h>
+#include <linux/udp.h>
+#include <sys/socket.h>
+
+#include "../kselftest.h"
+
+enum {
+ ERN_SUCCESS = 0,
+ /* Well defined errors, callers may depend on these */
+ ERN_SEND = 1,
+ /* Informational, can reorder */
+ ERN_HELP,
+ ERN_SEND_SHORT,
+ ERN_SOCK_CREATE,
+ ERN_RESOLVE,
+ ERN_CMSG_WR,
+ ERN_SOCKOPT,
+ ERN_GETTIME,
+ ERN_RECVERR,
+ ERN_CMSG_RD,
+ ERN_CMSG_RCV,
+};
+
+struct option_cmsg_u32 {
+ bool ena;
+ unsigned int val;
+};
+
+struct options {
+ bool silent_send;
+ const char *host;
+ const char *service;
+ unsigned int size;
+ struct {
+ unsigned int mark;
+ unsigned int dontfrag;
+ unsigned int tclass;
+ unsigned int hlimit;
+ } sockopt;
+ struct {
+ unsigned int family;
+ unsigned int type;
+ unsigned int proto;
+ } sock;
+ struct option_cmsg_u32 mark;
+ struct {
+ bool ena;
+ unsigned int delay;
+ } txtime;
+ struct {
+ bool ena;
+ } ts;
+ struct {
+ struct option_cmsg_u32 dontfrag;
+ struct option_cmsg_u32 tclass;
+ struct option_cmsg_u32 hlimit;
+ struct option_cmsg_u32 exthdr;
+ } v6;
+} opt = {
+ .size = 13,
+ .sock = {
+ .family = AF_UNSPEC,
+ .type = SOCK_DGRAM,
+ .proto = IPPROTO_UDP,
+ },
+};
+
+static struct timespec time_start_real;
+static struct timespec time_start_mono;
+
+static void __attribute__((noreturn)) cs_usage(const char *bin)
+{
+ printf("Usage: %s [opts] <dst host> <dst port / service>\n", bin);
+ printf("Options:\n"
+ "\t\t-s Silent send() failures\n"
+ "\t\t-S send() size\n"
+ "\t\t-4/-6 Force IPv4 / IPv6 only\n"
+ "\t\t-p prot Socket protocol\n"
+ "\t\t (u = UDP (default); i = ICMP; r = RAW)\n"
+ "\n"
+ "\t\t-m val Set SO_MARK via cmsg\n"
+ "\t\t-M val Set SO_MARK via setsockopt\n"
+ "\t\t-d val Set SO_TXTIME with given delay (usec)\n"
+ "\t\t-t Enable time stamp reporting\n"
+ "\t\t-f val Set don't fragment via cmsg\n"
+ "\t\t-F val Set don't fragment via setsockopt\n"
+ "\t\t-c val Set TCLASS via cmsg\n"
+ "\t\t-C val Set TCLASS via setsockopt\n"
+ "\t\t-l val Set HOPLIMIT via cmsg\n"
+ "\t\t-L val Set HOPLIMIT via setsockopt\n"
+ "\t\t-H type Add an IPv6 header option\n"
+ "\t\t (h = HOP; d = DST; r = RTDST)\n"
+ "");
+ exit(ERN_HELP);
+}
+
+static void cs_parse_args(int argc, char *argv[])
+{
+ int o; /* getopt() returns int; char misbehaves on unsigned-char ABIs */
+
+ while ((o = getopt(argc, argv, "46sS:p:m:M:d:tf:F:c:C:l:L:H:")) != -1) {
+ switch (o) {
+ case 's':
+ opt.silent_send = true;
+ break;
+ case 'S':
+ opt.size = atoi(optarg);
+ break;
+ case '4':
+ opt.sock.family = AF_INET;
+ break;
+ case '6':
+ opt.sock.family = AF_INET6;
+ break;
+ case 'p':
+ if (*optarg == 'u' || *optarg == 'U') {
+ opt.sock.proto = IPPROTO_UDP;
+ } else if (*optarg == 'i' || *optarg == 'I') {
+ opt.sock.proto = IPPROTO_ICMP;
+ } else if (*optarg == 'r') {
+ opt.sock.type = SOCK_RAW;
+ } else {
+ printf("Error: unknown protocol: %s\n", optarg);
+ cs_usage(argv[0]);
+ }
+ break;
+
+ case 'm':
+ opt.mark.ena = true;
+ opt.mark.val = atoi(optarg);
+ break;
+ case 'M':
+ opt.sockopt.mark = atoi(optarg);
+ break;
+ case 'd':
+ opt.txtime.ena = true;
+ opt.txtime.delay = atoi(optarg);
+ break;
+ case 't':
+ opt.ts.ena = true;
+ break;
+ case 'f':
+ opt.v6.dontfrag.ena = true;
+ opt.v6.dontfrag.val = atoi(optarg);
+ break;
+ case 'F':
+ opt.sockopt.dontfrag = atoi(optarg);
+ break;
+ case 'c':
+ opt.v6.tclass.ena = true;
+ opt.v6.tclass.val = atoi(optarg);
+ break;
+ case 'C':
+ opt.sockopt.tclass = atoi(optarg);
+ break;
+ case 'l':
+ opt.v6.hlimit.ena = true;
+ opt.v6.hlimit.val = atoi(optarg);
+ break;
+ case 'L':
+ opt.sockopt.hlimit = atoi(optarg);
+ break;
+ case 'H':
+ opt.v6.exthdr.ena = true;
+ switch (optarg[0]) {
+ case 'h':
+ opt.v6.exthdr.val = IPV6_HOPOPTS;
+ break;
+ case 'd':
+ opt.v6.exthdr.val = IPV6_DSTOPTS;
+ break;
+ case 'r':
+ opt.v6.exthdr.val = IPV6_RTHDRDSTOPTS;
+ break;
+ default:
+ printf("Error: hdr type: %s\n", optarg);
+ break;
+ }
+ break;
+ }
+ }
+
+ if (optind != argc - 2)
+ cs_usage(argv[0]);
+
+ opt.host = argv[optind];
+ opt.service = argv[optind + 1];
+}
+
+static void memrnd(void *s, size_t n)
+{
+ int *dword = s;
+ char *byte;
+
+ for (; n >= 4; n -= 4)
+ *dword++ = rand();
+ byte = (void *)dword;
+ while (n--)
+ *byte++ = rand();
+}
+
+static void
+ca_write_cmsg_u32(char *cbuf, size_t cbuf_sz, size_t *cmsg_len,
+ int level, int optname, struct option_cmsg_u32 *uopt)
+{
+ struct cmsghdr *cmsg;
+
+ if (!uopt->ena)
+ return;
+
+ cmsg = (struct cmsghdr *)(cbuf + *cmsg_len);
+ *cmsg_len += CMSG_SPACE(sizeof(__u32));
+ if (cbuf_sz < *cmsg_len)
+ error(ERN_CMSG_WR, EFAULT, "cmsg buffer too small");
+
+ cmsg->cmsg_level = level;
+ cmsg->cmsg_type = optname;
+ cmsg->cmsg_len = CMSG_LEN(sizeof(__u32));
+ *(__u32 *)CMSG_DATA(cmsg) = uopt->val;
+}
+
+static void
+cs_write_cmsg(int fd, struct msghdr *msg, char *cbuf, size_t cbuf_sz)
+{
+ struct cmsghdr *cmsg;
+ size_t cmsg_len;
+
+ msg->msg_control = cbuf;
+ cmsg_len = 0;
+
+ ca_write_cmsg_u32(cbuf, cbuf_sz, &cmsg_len,
+ SOL_SOCKET, SO_MARK, &opt.mark);
+ ca_write_cmsg_u32(cbuf, cbuf_sz, &cmsg_len,
+ SOL_IPV6, IPV6_DONTFRAG, &opt.v6.dontfrag);
+ ca_write_cmsg_u32(cbuf, cbuf_sz, &cmsg_len,
+ SOL_IPV6, IPV6_TCLASS, &opt.v6.tclass);
+ ca_write_cmsg_u32(cbuf, cbuf_sz, &cmsg_len,
+ SOL_IPV6, IPV6_HOPLIMIT, &opt.v6.hlimit);
+
+ if (opt.txtime.ena) {
+ struct sock_txtime so_txtime = {
+ .clockid = CLOCK_MONOTONIC,
+ };
+ __u64 txtime;
+
+ if (setsockopt(fd, SOL_SOCKET, SO_TXTIME,
+ &so_txtime, sizeof(so_txtime)))
+ error(ERN_SOCKOPT, errno, "setsockopt TXTIME");
+
+ txtime = time_start_mono.tv_sec * (1000ULL * 1000 * 1000) +
+ time_start_mono.tv_nsec +
+ opt.txtime.delay * 1000;
+
+ cmsg = (struct cmsghdr *)(cbuf + cmsg_len);
+ cmsg_len += CMSG_SPACE(sizeof(txtime));
+ if (cbuf_sz < cmsg_len)
+ error(ERN_CMSG_WR, EFAULT, "cmsg buffer too small");
+
+ cmsg->cmsg_level = SOL_SOCKET;
+ cmsg->cmsg_type = SCM_TXTIME;
+ cmsg->cmsg_len = CMSG_LEN(sizeof(txtime));
+ memcpy(CMSG_DATA(cmsg), &txtime, sizeof(txtime));
+ }
+ if (opt.ts.ena) {
+ __u32 val = SOF_TIMESTAMPING_SOFTWARE |
+ SOF_TIMESTAMPING_OPT_TSONLY;
+
+ if (setsockopt(fd, SOL_SOCKET, SO_TIMESTAMPING,
+ &val, sizeof(val)))
+ error(ERN_SOCKOPT, errno, "setsockopt TIMESTAMPING");
+
+ cmsg = (struct cmsghdr *)(cbuf + cmsg_len);
+ cmsg_len += CMSG_SPACE(sizeof(__u32));
+ if (cbuf_sz < cmsg_len)
+ error(ERN_CMSG_WR, EFAULT, "cmsg buffer too small");
+
+ cmsg->cmsg_level = SOL_SOCKET;
+ cmsg->cmsg_type = SO_TIMESTAMPING;
+ cmsg->cmsg_len = CMSG_LEN(sizeof(__u32));
+ *(__u32 *)CMSG_DATA(cmsg) = SOF_TIMESTAMPING_TX_SCHED |
+ SOF_TIMESTAMPING_TX_SOFTWARE;
+ }
+ if (opt.v6.exthdr.ena) {
+ cmsg = (struct cmsghdr *)(cbuf + cmsg_len);
+ cmsg_len += CMSG_SPACE(8);
+ if (cbuf_sz < cmsg_len)
+ error(ERN_CMSG_WR, EFAULT, "cmsg buffer too small");
+
+ cmsg->cmsg_level = SOL_IPV6;
+ cmsg->cmsg_type = opt.v6.exthdr.val;
+ cmsg->cmsg_len = CMSG_LEN(8);
+ *(__u64 *)CMSG_DATA(cmsg) = 0;
+ }
+
+ if (cmsg_len)
+ msg->msg_controllen = cmsg_len;
+ else
+ msg->msg_control = NULL;
+}
+
+static const char *cs_ts_info2str(unsigned int info)
+{
+ static const char *names[] = {
+ [SCM_TSTAMP_SND] = "SND",
+ [SCM_TSTAMP_SCHED] = "SCHED",
+ [SCM_TSTAMP_ACK] = "ACK",
+ };
+
+ if (info < ARRAY_SIZE(names))
+ return names[info];
+ return "unknown";
+}
+
+static void
+cs_read_cmsg(int fd, struct msghdr *msg, char *cbuf, size_t cbuf_sz)
+{
+ struct sock_extended_err *see;
+ struct scm_timestamping *ts;
+ struct cmsghdr *cmsg;
+ int i, err;
+
+ if (!opt.ts.ena)
+ return;
+ msg->msg_control = cbuf;
+ msg->msg_controllen = cbuf_sz;
+
+ while (true) {
+ ts = NULL;
+ see = NULL;
+ memset(cbuf, 0, cbuf_sz);
+
+ err = recvmsg(fd, msg, MSG_ERRQUEUE);
+ if (err < 0) {
+ if (errno == EAGAIN)
+ break;
+ error(ERN_RECVERR, errno, "recvmsg ERRQ");
+ }
+
+ for (cmsg = CMSG_FIRSTHDR(msg); cmsg != NULL;
+ cmsg = CMSG_NXTHDR(msg, cmsg)) {
+ if (cmsg->cmsg_level == SOL_SOCKET &&
+ cmsg->cmsg_type == SO_TIMESTAMPING_OLD) {
+ if (cmsg->cmsg_len < sizeof(*ts))
+ error(ERN_CMSG_RD, EINVAL, "TS cmsg");
+
+ ts = (void *)CMSG_DATA(cmsg);
+ }
+ if ((cmsg->cmsg_level == SOL_IP &&
+ cmsg->cmsg_type == IP_RECVERR) ||
+ (cmsg->cmsg_level == SOL_IPV6 &&
+ cmsg->cmsg_type == IPV6_RECVERR)) {
+ if (cmsg->cmsg_len < sizeof(*see))
+ error(ERN_CMSG_RD, EINVAL, "sock_err cmsg");
+
+ see = (void *)CMSG_DATA(cmsg);
+ }
+ }
+
+ if (!ts)
+ error(ERN_CMSG_RCV, ENOENT, "TS cmsg not found");
+ if (!see)
+ error(ERN_CMSG_RCV, ENOENT, "sock_err cmsg not found");
+
+ for (i = 0; i < 3; i++) {
+ unsigned long long rel_time;
+
+ if (!ts->ts[i].tv_sec && !ts->ts[i].tv_nsec)
+ continue;
+
+ rel_time = (ts->ts[i].tv_sec - time_start_real.tv_sec) *
+ (1000ULL * 1000) +
+ (ts->ts[i].tv_nsec - time_start_real.tv_nsec) /
+ 1000;
+ printf(" %5s ts%d %lluus\n",
+ cs_ts_info2str(see->ee_info),
+ i, rel_time);
+ }
+ }
+}
+
+static void ca_set_sockopts(int fd)
+{
+ if (opt.sockopt.mark &&
+ setsockopt(fd, SOL_SOCKET, SO_MARK,
+ &opt.sockopt.mark, sizeof(opt.sockopt.mark)))
+ error(ERN_SOCKOPT, errno, "setsockopt SO_MARK");
+ if (opt.sockopt.dontfrag &&
+ setsockopt(fd, SOL_IPV6, IPV6_DONTFRAG,
+ &opt.sockopt.dontfrag, sizeof(opt.sockopt.dontfrag)))
+ error(ERN_SOCKOPT, errno, "setsockopt IPV6_DONTFRAG");
+ if (opt.sockopt.tclass &&
+ setsockopt(fd, SOL_IPV6, IPV6_TCLASS,
+ &opt.sockopt.tclass, sizeof(opt.sockopt.tclass)))
+ error(ERN_SOCKOPT, errno, "setsockopt IPV6_TCLASS");
+ if (opt.sockopt.hlimit &&
+ setsockopt(fd, SOL_IPV6, IPV6_UNICAST_HOPS,
+ &opt.sockopt.hlimit, sizeof(opt.sockopt.hlimit)))
+ error(ERN_SOCKOPT, errno, "setsockopt IPV6_HOPLIMIT");
+}
+
+int main(int argc, char *argv[])
+{
+ struct addrinfo hints, *ai;
+ struct iovec iov[1];
+ struct msghdr msg;
+ char cbuf[1024];
+ char *buf;
+ int err;
+ int fd;
+
+ cs_parse_args(argc, argv);
+
+ buf = malloc(opt.size);
+ memrnd(buf, opt.size);
+
+ memset(&hints, 0, sizeof(hints));
+ hints.ai_family = opt.sock.family;
+
+ ai = NULL;
+ err = getaddrinfo(opt.host, opt.service, &hints, &ai);
+ if (err) {
+ fprintf(stderr, "Can't resolve address [%s]:%s\n",
+ opt.host, opt.service);
+ return ERN_RESOLVE;
+ }
+
+ if (ai->ai_family == AF_INET6 && opt.sock.proto == IPPROTO_ICMP)
+ opt.sock.proto = IPPROTO_ICMPV6;
+
+ fd = socket(ai->ai_family, opt.sock.type, opt.sock.proto);
+ if (fd < 0) {
+ fprintf(stderr, "Can't open socket: %s\n", strerror(errno));
+ freeaddrinfo(ai);
+ return ERN_SOCK_CREATE;
+ }
+
+ if (opt.sock.proto == IPPROTO_ICMP) {
+ buf[0] = ICMP_ECHO;
+ buf[1] = 0;
+ } else if (opt.sock.proto == IPPROTO_ICMPV6) {
+ buf[0] = ICMPV6_ECHO_REQUEST;
+ buf[1] = 0;
+ } else if (opt.sock.type == SOCK_RAW) {
+ struct udphdr hdr = { 1, 2, htons(opt.size), 0 };
+ struct sockaddr_in6 *sin6 = (void *)ai->ai_addr;
+
+ memcpy(buf, &hdr, sizeof(hdr));
+ sin6->sin6_port = htons(opt.sock.proto);
+ }
+
+ ca_set_sockopts(fd);
+
+ if (clock_gettime(CLOCK_REALTIME, &time_start_real))
+ error(ERN_GETTIME, errno, "gettime REALTIME");
+ if (clock_gettime(CLOCK_MONOTONIC, &time_start_mono))
+ error(ERN_GETTIME, errno, "gettime MONOTONIC");
+
+ iov[0].iov_base = buf;
+ iov[0].iov_len = opt.size;
+
+ memset(&msg, 0, sizeof(msg));
+ msg.msg_name = ai->ai_addr;
+ msg.msg_namelen = ai->ai_addrlen;
+ msg.msg_iov = iov;
+ msg.msg_iovlen = 1;
+
+ cs_write_cmsg(fd, &msg, cbuf, sizeof(cbuf));
+
+ err = sendmsg(fd, &msg, 0);
+ if (err < 0) {
+ if (!opt.silent_send)
+ fprintf(stderr, "send failed: %s\n", strerror(errno));
+ err = ERN_SEND;
+ goto err_out;
+ } else if (err != (int)opt.size) {
+ fprintf(stderr, "short send\n");
+ err = ERN_SEND_SHORT;
+ goto err_out;
+ } else {
+ err = ERN_SUCCESS;
+ }
+
+ /* Make sure all timestamps have time to loop back */
+ usleep(opt.txtime.delay);
+
+ cs_read_cmsg(fd, &msg, cbuf, sizeof(cbuf));
+
+err_out:
+ close(fd);
+ freeaddrinfo(ai);
+ return err;
+}
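
Putting the option parsing above together, typical invocations mirror what
the accompanying scripts do. A sketch; the addresses are the documentation
ranges the tests use, and values must be decimal since the parser uses
atoi():

    $ ./cmsg_sender -6 -p i -c 32 2001:db8:1::2 1234    # TCLASS 32 via cmsg
    $ ./cmsg_sender -6 -p u -M 42 2001:db8:1::2 1234    # SO_MARK via setsockopt
    $ ./cmsg_sender -4 -p u -t -d 1000 172.16.0.2 1234  # SCHED/SND timestamps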
diff --git a/tools/testing/selftests/net/cmsg_so_mark.c b/tools/testing/selftests/net/cmsg_so_mark.c
deleted file mode 100644
index 27f2804892a7..000000000000
--- a/tools/testing/selftests/net/cmsg_so_mark.c
+++ /dev/null
@@ -1,67 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-or-later
-#include <errno.h>
-#include <netdb.h>
-#include <stdio.h>
-#include <stdlib.h>
-#include <string.h>
-#include <unistd.h>
-#include <linux/types.h>
-#include <sys/socket.h>
-
-int main(int argc, const char **argv)
-{
- char cbuf[CMSG_SPACE(sizeof(__u32))];
- struct addrinfo hints, *ai;
- struct cmsghdr *cmsg;
- struct iovec iov[1];
- struct msghdr msg;
- int mark;
- int err;
- int fd;
-
- if (argc != 4) {
- fprintf(stderr, "Usage: %s <dst_ip> <port> <mark>\n", argv[0]);
- return 1;
- }
- mark = atoi(argv[3]);
-
- memset(&hints, 0, sizeof(hints));
- hints.ai_family = AF_UNSPEC;
- hints.ai_socktype = SOCK_DGRAM;
-
- ai = NULL;
- err = getaddrinfo(argv[1], argv[2], &hints, &ai);
- if (err) {
- fprintf(stderr, "Can't resolve address: %s\n", strerror(errno));
- return 1;
- }
-
- fd = socket(ai->ai_family, SOCK_DGRAM, IPPROTO_UDP);
- if (fd < 0) {
- fprintf(stderr, "Can't open socket: %s\n", strerror(errno));
- freeaddrinfo(ai);
- return 1;
- }
-
- iov[0].iov_base = "bla";
- iov[0].iov_len = 4;
-
- msg.msg_name = ai->ai_addr;
- msg.msg_namelen = ai->ai_addrlen;
- msg.msg_iov = iov;
- msg.msg_iovlen = 1;
- msg.msg_control = cbuf;
- msg.msg_controllen = sizeof(cbuf);
-
- cmsg = CMSG_FIRSTHDR(&msg);
- cmsg->cmsg_level = SOL_SOCKET;
- cmsg->cmsg_type = SO_MARK;
- cmsg->cmsg_len = CMSG_LEN(sizeof(__u32));
- *(__u32 *)CMSG_DATA(cmsg) = mark;
-
- err = sendmsg(fd, &msg, 0);
-
- close(fd);
- freeaddrinfo(ai);
- return err != 4;
-}
diff --git a/tools/testing/selftests/net/cmsg_so_mark.sh b/tools/testing/selftests/net/cmsg_so_mark.sh
index 19c6aab8d0e9..1650b8622f2f 100755
--- a/tools/testing/selftests/net/cmsg_so_mark.sh
+++ b/tools/testing/selftests/net/cmsg_so_mark.sh
@@ -18,6 +18,8 @@ trap cleanup EXIT
# Namespaces
ip netns add $NS
+ip netns exec $NS sysctl -w net.ipv4.ping_group_range='0 2147483647' > /dev/null
+
# Connectivity
ip -netns $NS link add type dummy
ip -netns $NS link set dev dummy0 up
@@ -41,15 +43,29 @@ check_result() {
fi
}
-ip netns exec $NS ./cmsg_so_mark $TGT4 1234 $((MARK + 1))
-check_result $? 0 "IPv4 pass"
-ip netns exec $NS ./cmsg_so_mark $TGT6 1234 $((MARK + 1))
-check_result $? 0 "IPv6 pass"
+for ovr in setsock cmsg both diff; do
+ for i in 4 6; do
+ [ $i == 4 ] && TGT=$TGT4 || TGT=$TGT6
+
+ for p in u i r; do
+ [ $p == "u" ] && prot=UDP
+ [ $p == "i" ] && prot=ICMP
+ [ $p == "r" ] && prot=RAW
+
+ [ $ovr == "setsock" ] && m="-M"
+ [ $ovr == "cmsg" ] && m="-m"
+ [ $ovr == "both" ] && m="-M $MARK -m"
+ [ $ovr == "diff" ] && m="-M $MARK -m"
+
+ ip netns exec $NS ./cmsg_sender -$i -p $p $m $((MARK + 1)) $TGT 1234
+ check_result $? 0 "$prot $ovr - pass"
+
+ [ $ovr == "diff" ] && m="-M $((MARK + 1)) -m"
-ip netns exec $NS ./cmsg_so_mark $TGT4 1234 $MARK
-check_result $? 1 "IPv4 rejection"
-ip netns exec $NS ./cmsg_so_mark $TGT6 1234 $MARK
-check_result $? 1 "IPv6 rejection"
+ ip netns exec $NS ./cmsg_sender -$i -p $p $m $MARK -s $TGT 1234
+ check_result $? 1 "$prot $ovr - rejection"
+ done
+ done
+done
# Summary
if [ $BAD -ne 0 ]; then
diff --git a/tools/testing/selftests/net/cmsg_time.sh b/tools/testing/selftests/net/cmsg_time.sh
new file mode 100755
index 000000000000..91161e1da734
--- /dev/null
+++ b/tools/testing/selftests/net/cmsg_time.sh
@@ -0,0 +1,83 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+
+NS=ns
+IP4=172.16.0.1/24
+TGT4=172.16.0.2
+IP6=2001:db8:1::1/64
+TGT6=2001:db8:1::2
+
+cleanup()
+{
+ ip netns del $NS
+}
+
+trap cleanup EXIT
+
+# Namespaces
+ip netns add $NS
+
+ip netns exec $NS sysctl -w net.ipv4.ping_group_range='0 2147483647' > /dev/null
+
+# Connectivity
+ip -netns $NS link add type dummy
+ip -netns $NS link set dev dummy0 up
+ip -netns $NS addr add $IP4 dev dummy0
+ip -netns $NS addr add $IP6 dev dummy0
+
+# Need FQ for TXTIME
+ip netns exec $NS tc qdisc replace dev dummy0 root fq
+
+# Test
+BAD=0
+TOTAL=0
+
+check_result() {
+ ((TOTAL++))
+ if [ $1 -ne 0 ]; then
+ echo " Case $4 returned $1, expected 0"
+ ((BAD++))
+ elif [ "$2" != "$3" ]; then
+ echo " Case $4 returned '$2', expected '$3'"
+ ((BAD++))
+ fi
+}
+
+for i in "-4 $TGT4" "-6 $TGT6"; do
+ for p in u i r; do
+ [ $p == "u" ] && prot=UDPv${i:1:1}
+ [ $p == "i" ] && prot=ICMPv${i:1:1}
+ [ $p == "r" ] && prot=RAWv${i:1:1}
+
+ ts=$(ip netns exec $NS ./cmsg_sender -p $p $i 1234)
+ check_result $? "$ts" "" "$prot - no options"
+
+ ts=$(ip netns exec $NS ./cmsg_sender -p $p $i 1234 -t | wc -l)
+ check_result $? "$ts" "2" "$prot - ts cnt"
+ ts=$(ip netns exec $NS ./cmsg_sender -p $p $i 1234 -t |
+ sed -n "s/.*SCHED ts0 [0-9].*/OK/p")
+ check_result $? "$ts" "OK" "$prot - ts0 SCHED"
+ ts=$(ip netns exec $NS ./cmsg_sender -p $p $i 1234 -t |
+ sed -n "s/.*SND ts0 [0-9].*/OK/p")
+ check_result $? "$ts" "OK" "$prot - ts0 SND"
+
+ ts=$(ip netns exec $NS ./cmsg_sender -p $p $i 1234 -t -d 1000 |
+ awk '/SND/ { if ($3 > 1000) print "OK"; }')
+ check_result $? "$ts" "OK" "$prot - TXTIME abs"
+
+ ts=$(ip netns exec $NS ./cmsg_sender -p $p $i 1234 -t -d 1000 |
+ awk '/SND/ {snd=$3}
+ /SCHED/ {sch=$3}
+ END { if (snd - sch > 500) print "OK"; }')
+ check_result $? "$ts" "OK" "$prot - TXTIME rel"
+ done
+done
+
+# Summary
+if [ $BAD -ne 0 ]; then
+ echo "FAIL - $BAD/$TOTAL cases failed"
+ exit 1
+else
+ echo "OK"
+ exit 0
+fi
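
Note the "tc qdisc replace ... root fq" step above: SO_TXTIME only takes
effect under a qdisc that honours transmit times (fq here, etf being the
other common choice); without it, SND would follow SCHED almost
immediately. Expected output shape for a delayed send, timing values
illustrative:

    $ ./cmsg_sender -4 -p u -t -d 1000 172.16.0.2 1234
     SCHED ts0 14us
       SND ts0 1042us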
diff --git a/tools/testing/selftests/net/fcnal-test.sh b/tools/testing/selftests/net/fcnal-test.sh
index 3f4c8cfe7aca..47c4d4b4a44a 100755
--- a/tools/testing/selftests/net/fcnal-test.sh
+++ b/tools/testing/selftests/net/fcnal-test.sh
@@ -750,7 +750,7 @@ ipv4_ping_vrf()
log_start
show_hint "Fails since address on vrf device is out of device scope"
run_cmd ping -c1 -w1 -I ${NSA_DEV} ${a}
- log_test_addr ${a} $? 1 "ping local, device bind"
+ log_test_addr ${a} $? 2 "ping local, device bind"
done
#
diff --git a/tools/testing/selftests/net/fib_rule_tests.sh b/tools/testing/selftests/net/fib_rule_tests.sh
index 43ea8407a82e..4f70baad867d 100755
--- a/tools/testing/selftests/net/fib_rule_tests.sh
+++ b/tools/testing/selftests/net/fib_rule_tests.sh
@@ -96,7 +96,7 @@ fib_rule6_del()
fib_rule6_del_by_pref()
{
- pref=$($IP -6 rule show | grep "$1 lookup $TABLE" | cut -d ":" -f 1)
+ pref=$($IP -6 rule show $1 table $RTABLE | cut -d ":" -f 1)
$IP -6 rule del pref $pref
}
@@ -104,17 +104,36 @@ fib_rule6_test_match_n_redirect()
{
local match="$1"
local getmatch="$2"
+ local description="$3"
$IP -6 rule add $match table $RTABLE
$IP -6 route get $GW_IP6 $getmatch | grep -q "table $RTABLE"
- log_test $? 0 "rule6 check: $1"
+ log_test $? 0 "rule6 check: $description"
fib_rule6_del_by_pref "$match"
- log_test $? 0 "rule6 del by pref: $match"
+ log_test $? 0 "rule6 del by pref: $description"
+}
+
+fib_rule6_test_reject()
+{
+ local match="$1"
+ local rc
+
+ $IP -6 rule add $match table $RTABLE 2>/dev/null
+ rc=$?
+ log_test $rc 2 "rule6 check: $match"
+
+ if [ $rc -eq 0 ]; then
+ $IP -6 rule del $match table $RTABLE
+ fi
}
fib_rule6_test()
{
+ local getmatch
+ local match
+ local cnt
+
# setup the fib rule redirect route
$IP -6 route add table $RTABLE default via $GW_IP6 dev $DEV onlink
@@ -124,8 +143,21 @@ fib_rule6_test()
match="from $SRC_IP6 iif $DEV"
fib_rule6_test_match_n_redirect "$match" "$match" "iif redirect to table"
+ # Reject dsfield (tos) options which have ECN bits set
+ for cnt in $(seq 1 3); do
+ match="dsfield $cnt"
+ fib_rule6_test_reject "$match"
+ done
+
+ # Don't take ECN bits into account when matching on dsfield
match="tos 0x10"
- fib_rule6_test_match_n_redirect "$match" "$match" "tos redirect to table"
+ for cnt in "0x10" "0x11" "0x12" "0x13"; do
+ # Using option 'tos' instead of 'dsfield' as old iproute2
+ # versions don't support 'dsfield' in ip rule show.
+ getmatch="tos $cnt"
+ fib_rule6_test_match_n_redirect "$match" "$getmatch" \
+ "$getmatch redirect to table"
+ done
match="fwmark 0x64"
getmatch="mark 0x64"
@@ -165,7 +197,7 @@ fib_rule4_del()
fib_rule4_del_by_pref()
{
- pref=$($IP rule show | grep "$1 lookup $TABLE" | cut -d ":" -f 1)
+ pref=$($IP rule show $1 table $RTABLE | cut -d ":" -f 1)
$IP rule del pref $pref
}
@@ -173,17 +205,36 @@ fib_rule4_test_match_n_redirect()
{
local match="$1"
local getmatch="$2"
+ local description="$3"
$IP rule add $match table $RTABLE
$IP route get $GW_IP4 $getmatch | grep -q "table $RTABLE"
- log_test $? 0 "rule4 check: $1"
+ log_test $? 0 "rule4 check: $description"
fib_rule4_del_by_pref "$match"
- log_test $? 0 "rule4 del by pref: $match"
+ log_test $? 0 "rule4 del by pref: $description"
+}
+
+fib_rule4_test_reject()
+{
+ local match="$1"
+ local rc
+
+ $IP rule add $match table $RTABLE 2>/dev/null
+ rc=$?
+ log_test $rc 2 "rule4 check: $match"
+
+ if [ $rc -eq 0 ]; then
+ $IP rule del $match table $RTABLE
+ fi
}
fib_rule4_test()
{
+ local getmatch
+ local match
+ local cnt
+
# setup the fib rule redirect route
$IP route add table $RTABLE default via $GW_IP4 dev $DEV onlink
@@ -192,14 +243,27 @@ fib_rule4_test()
# need enable forwarding and disable rp_filter temporarily as all the
# addresses are in the same subnet and egress device == ingress device.
- ip netns exec testns sysctl -w net.ipv4.ip_forward=1
- ip netns exec testns sysctl -w net.ipv4.conf.$DEV.rp_filter=0
+ ip netns exec testns sysctl -qw net.ipv4.ip_forward=1
+ ip netns exec testns sysctl -qw net.ipv4.conf.$DEV.rp_filter=0
match="from $SRC_IP iif $DEV"
fib_rule4_test_match_n_redirect "$match" "$match" "iif redirect to table"
- ip netns exec testns sysctl -w net.ipv4.ip_forward=0
+ ip netns exec testns sysctl -qw net.ipv4.ip_forward=0
+
+ # Reject dsfield (tos) options which have ECN bits set
+ for cnt in $(seq 1 3); do
+ match="dsfield $cnt"
+ fib_rule4_test_reject "$match"
+ done
+ # Don't take ECN bits into account when matching on dsfield
match="tos 0x10"
- fib_rule4_test_match_n_redirect "$match" "$match" "tos redirect to table"
+ for cnt in "0x10" "0x11" "0x12" "0x13"; do
+ # Using option 'tos' instead of 'dsfield' as old iproute2
+ # versions don't support 'dsfield' in ip rule show.
+ getmatch="tos $cnt"
+ fib_rule4_test_match_n_redirect "$match" "$getmatch" \
+ "$getmatch redirect to table"
+ done
match="fwmark 0x64"
getmatch="mark 0x64"
diff --git a/tools/testing/selftests/net/fib_tests.sh b/tools/testing/selftests/net/fib_tests.sh
index 996af1ae3d3d..2271a8727f62 100755
--- a/tools/testing/selftests/net/fib_tests.sh
+++ b/tools/testing/selftests/net/fib_tests.sh
@@ -9,7 +9,7 @@ ret=0
ksft_skip=4
# all tests in this script. Can be overridden with -t option
-TESTS="unregister down carrier nexthop suppress ipv6_rt ipv4_rt ipv6_addr_metric ipv4_addr_metric ipv6_route_metrics ipv4_route_metrics ipv4_route_v6_gw rp_filter ipv4_del_addr ipv4_mangle ipv6_mangle"
+TESTS="unregister down carrier nexthop suppress ipv6_rt ipv4_rt ipv6_addr_metric ipv4_addr_metric ipv6_route_metrics ipv4_route_metrics ipv4_route_v6_gw rp_filter ipv4_del_addr ipv4_mangle ipv6_mangle ipv4_bcast_neigh"
VERBOSE=0
PAUSE_ON_FAIL=no
@@ -988,12 +988,25 @@ ipv6_rt_replace()
ipv6_rt_replace_mpath
}
+ipv6_rt_dsfield()
+{
+ echo
+ echo "IPv6 route with dsfield tests"
+
+ run_cmd "$IP -6 route flush 2001:db8:102::/64"
+
+ # IPv6 doesn't support routing based on dsfield
+ run_cmd "$IP -6 route add 2001:db8:102::/64 dsfield 0x04 via 2001:db8:101::2"
+ log_test $? 2 "Reject route with dsfield"
+}
+
ipv6_route_test()
{
route_setup
ipv6_rt_add
ipv6_rt_replace
+ ipv6_rt_dsfield
route_cleanup
}
@@ -1447,6 +1460,81 @@ ipv4_local_rt_cache()
log_test $? 0 "Cached route removed from VRF port device"
}
+ipv4_rt_dsfield()
+{
+ echo
+ echo "IPv4 route with dsfield tests"
+
+ run_cmd "$IP route flush 172.16.102.0/24"
+
+ # New routes should reject dsfield options that interfere with ECN
+ run_cmd "$IP route add 172.16.102.0/24 dsfield 0x01 via 172.16.101.2"
+ log_test $? 2 "Reject route with dsfield 0x01"
+
+ run_cmd "$IP route add 172.16.102.0/24 dsfield 0x02 via 172.16.101.2"
+ log_test $? 2 "Reject route with dsfield 0x02"
+
+ run_cmd "$IP route add 172.16.102.0/24 dsfield 0x03 via 172.16.101.2"
+ log_test $? 2 "Reject route with dsfield 0x03"
+
+ # A generic route that doesn't take DSCP into account
+ run_cmd "$IP route add 172.16.102.0/24 via 172.16.101.2"
+
+ # A more specific route for DSCP 0x10
+ run_cmd "$IP route add 172.16.102.0/24 dsfield 0x10 via 172.16.103.2"
+
+ # DSCP 0x10 should match the specific route, no matter the ECN bits
+ $IP route get fibmatch 172.16.102.1 dsfield 0x10 | \
+ grep -q "via 172.16.103.2"
+ log_test $? 0 "IPv4 route with DSCP and ECN:Not-ECT"
+
+ $IP route get fibmatch 172.16.102.1 dsfield 0x11 | \
+ grep -q "via 172.16.103.2"
+ log_test $? 0 "IPv4 route with DSCP and ECN:ECT(1)"
+
+ $IP route get fibmatch 172.16.102.1 dsfield 0x12 | \
+ grep -q "via 172.16.103.2"
+ log_test $? 0 "IPv4 route with DSCP and ECN:ECT(0)"
+
+ $IP route get fibmatch 172.16.102.1 dsfield 0x13 | \
+ grep -q "via 172.16.103.2"
+ log_test $? 0 "IPv4 route with DSCP and ECN:CE"
+
+ # Unknown DSCP should match the generic route, no matter the ECN bits
+ $IP route get fibmatch 172.16.102.1 dsfield 0x14 | \
+ grep -q "via 172.16.101.2"
+ log_test $? 0 "IPv4 route with unknown DSCP and ECN:Not-ECT"
+
+ $IP route get fibmatch 172.16.102.1 dsfield 0x15 | \
+ grep -q "via 172.16.101.2"
+ log_test $? 0 "IPv4 route with unknown DSCP and ECN:ECT(1)"
+
+ $IP route get fibmatch 172.16.102.1 dsfield 0x16 | \
+ grep -q "via 172.16.101.2"
+ log_test $? 0 "IPv4 route with unknown DSCP and ECN:ECT(0)"
+
+ $IP route get fibmatch 172.16.102.1 dsfield 0x17 | \
+ grep -q "via 172.16.101.2"
+ log_test $? 0 "IPv4 route with unknown DSCP and ECN:CE"
+
+ # Null DSCP should match the generic route, no matter the ECN bits
+ $IP route get fibmatch 172.16.102.1 dsfield 0x00 | \
+ grep -q "via 172.16.101.2"
+ log_test $? 0 "IPv4 route with no DSCP and ECN:Not-ECT"
+
+ $IP route get fibmatch 172.16.102.1 dsfield 0x01 | \
+ grep -q "via 172.16.101.2"
+ log_test $? 0 "IPv4 route with no DSCP and ECN:ECT(1)"
+
+ $IP route get fibmatch 172.16.102.1 dsfield 0x02 | \
+ grep -q "via 172.16.101.2"
+ log_test $? 0 "IPv4 route with no DSCP and ECN:ECT(0)"
+
+ $IP route get fibmatch 172.16.102.1 dsfield 0x03 | \
+ grep -q "via 172.16.101.2"
+ log_test $? 0 "IPv4 route with no DSCP and ECN:CE"
+}
+
ipv4_route_test()
{
route_setup
@@ -1454,6 +1542,7 @@ ipv4_route_test()
ipv4_rt_add
ipv4_rt_replace
ipv4_local_rt_cache
+ ipv4_rt_dsfield
route_cleanup
}
@@ -1865,6 +1954,61 @@ ipv6_mangle_test()
route_cleanup
}
+ip_neigh_get_check()
+{
+ ip neigh help 2>&1 | grep -q 'ip neigh get'
+ if [ $? -ne 0 ]; then
+ echo "iproute2 command does not support neigh get. Skipping test"
+ return 1
+ fi
+
+ return 0
+}
+
+ipv4_bcast_neigh_test()
+{
+ local rc
+
+ echo
+ echo "IPv4 broadcast neighbour tests"
+
+ ip_neigh_get_check || return 1
+
+ setup
+
+ set -e
+ run_cmd "$IP neigh add 192.0.2.111 lladdr 00:11:22:33:44:55 nud perm dev dummy0"
+ run_cmd "$IP neigh add 192.0.2.255 lladdr 00:11:22:33:44:55 nud perm dev dummy0"
+
+ run_cmd "$IP neigh get 192.0.2.111 dev dummy0"
+ run_cmd "$IP neigh get 192.0.2.255 dev dummy0"
+
+ run_cmd "$IP address add 192.0.2.1/24 broadcast 192.0.2.111 dev dummy0"
+
+ run_cmd "$IP neigh add 203.0.113.111 nud failed dev dummy0"
+ run_cmd "$IP neigh add 203.0.113.255 nud failed dev dummy0"
+
+ run_cmd "$IP neigh get 203.0.113.111 dev dummy0"
+ run_cmd "$IP neigh get 203.0.113.255 dev dummy0"
+
+ run_cmd "$IP address add 203.0.113.1/24 broadcast 203.0.113.111 dev dummy0"
+ set +e
+
+ run_cmd "$IP neigh get 192.0.2.111 dev dummy0"
+ log_test $? 0 "Resolved neighbour for broadcast address"
+
+ run_cmd "$IP neigh get 192.0.2.255 dev dummy0"
+ log_test $? 0 "Resolved neighbour for network broadcast address"
+
+ run_cmd "$IP neigh get 203.0.113.111 dev dummy0"
+ log_test $? 2 "Unresolved neighbour for broadcast address"
+
+ run_cmd "$IP neigh get 203.0.113.255 dev dummy0"
+ log_test $? 2 "Unresolved neighbour for network broadcast address"
+
+ cleanup
+}
+
################################################################################
# usage
@@ -1939,6 +2083,7 @@ do
ipv4_route_v6_gw) ipv4_route_v6_gw_test;;
ipv4_mangle) ipv4_mangle_test;;
ipv6_mangle) ipv6_mangle_test;;
+ ipv4_bcast_neigh) ipv4_bcast_neigh_test;;
help) echo "Test names: $TESTS"; exit 0;;
esac
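
As with the other cases in this file, the new test can be run in isolation
via the existing -t switch:

    $ ./fib_tests.sh -t ipv4_bcast_neigh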
diff --git a/tools/testing/selftests/net/forwarding/Makefile b/tools/testing/selftests/net/forwarding/Makefile
index 72ee644d47bf..8fa97ae9af9e 100644
--- a/tools/testing/selftests/net/forwarding/Makefile
+++ b/tools/testing/selftests/net/forwarding/Makefile
@@ -1,6 +1,7 @@
# SPDX-License-Identifier: GPL-2.0+ OR MIT
TEST_PROGS = bridge_igmp.sh \
+ bridge_locked_port.sh \
bridge_port_isolation.sh \
bridge_sticky_fdb.sh \
bridge_vlan_aware.sh \
diff --git a/tools/testing/selftests/net/forwarding/bridge_locked_port.sh b/tools/testing/selftests/net/forwarding/bridge_locked_port.sh
new file mode 100755
index 000000000000..5b02b6b60ce7
--- /dev/null
+++ b/tools/testing/selftests/net/forwarding/bridge_locked_port.sh
@@ -0,0 +1,176 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+
+ALL_TESTS="locked_port_ipv4 locked_port_ipv6 locked_port_vlan"
+NUM_NETIFS=4
+CHECK_TC="no"
+source lib.sh
+
+h1_create()
+{
+ simple_if_init $h1 192.0.2.1/24 2001:db8:1::1/64
+ vlan_create $h1 100 v$h1 198.51.100.1/24
+}
+
+h1_destroy()
+{
+ vlan_destroy $h1 100
+ simple_if_fini $h1 192.0.2.1/24 2001:db8:1::1/64
+}
+
+h2_create()
+{
+ simple_if_init $h2 192.0.2.2/24 2001:db8:1::2/64
+ vlan_create $h2 100 v$h2 198.51.100.2/24
+}
+
+h2_destroy()
+{
+ vlan_destroy $h2 100
+ simple_if_fini $h2 192.0.2.2/24 2001:db8:1::2/64
+}
+
+switch_create()
+{
+ ip link add dev br0 type bridge vlan_filtering 1
+
+ ip link set dev $swp1 master br0
+ ip link set dev $swp2 master br0
+
+ bridge link set dev $swp1 learning off
+
+ ip link set dev br0 up
+ ip link set dev $swp1 up
+ ip link set dev $swp2 up
+}
+
+switch_destroy()
+{
+ ip link set dev $swp2 down
+ ip link set dev $swp1 down
+
+ ip link del dev br0
+}
+
+setup_prepare()
+{
+ h1=${NETIFS[p1]}
+ swp1=${NETIFS[p2]}
+
+ swp2=${NETIFS[p3]}
+ h2=${NETIFS[p4]}
+
+ vrf_prepare
+
+ h1_create
+ h2_create
+
+ switch_create
+}
+
+cleanup()
+{
+ pre_cleanup
+
+ switch_destroy
+
+ h2_destroy
+ h1_destroy
+
+ vrf_cleanup
+}
+
+locked_port_ipv4()
+{
+ RET=0
+
+ check_locked_port_support || return 0
+
+ ping_do $h1 192.0.2.2
+ check_err $? "Ping did not work before locking port"
+
+ bridge link set dev $swp1 locked on
+
+ ping_do $h1 192.0.2.2
+ check_fail $? "Ping worked after locking port, but before adding FDB entry"
+
+ bridge fdb add `mac_get $h1` dev $swp1 master static
+
+ ping_do $h1 192.0.2.2
+ check_err $? "Ping did not work after locking port and adding FDB entry"
+
+ bridge link set dev $swp1 locked off
+ bridge fdb del `mac_get $h1` dev $swp1 master static
+
+ ping_do $h1 192.0.2.2
+ check_err $? "Ping did not work after unlocking port and removing FDB entry."
+
+ log_test "Locked port ipv4"
+}
+
+locked_port_vlan()
+{
+ RET=0
+
+ check_locked_port_support || return 0
+
+ bridge vlan add vid 100 dev $swp1
+ bridge vlan add vid 100 dev $swp2
+
+ ping_do $h1.100 198.51.100.2
+ check_err $? "Ping through vlan did not work before locking port"
+
+ bridge link set dev $swp1 locked on
+ ping_do $h1.100 198.51.100.2
+ check_fail $? "Ping through vlan worked after locking port, but before adding FDB entry"
+
+ bridge fdb add `mac_get $h1` dev $swp1 vlan 100 master static
+
+ ping_do $h1.100 198.51.100.2
+ check_err $? "Ping through vlan did not work after locking port and adding FDB entry"
+
+ bridge link set dev $swp1 locked off
+ bridge fdb del `mac_get $h1` dev $swp1 vlan 100 master static
+
+ ping_do $h1.100 198.51.100.2
+ check_err $? "Ping through vlan did not work after unlocking port and removing FDB entry"
+
+ bridge vlan del vid 100 dev $swp1
+ bridge vlan del vid 100 dev $swp2
+ log_test "Locked port vlan"
+}
+
+locked_port_ipv6()
+{
+ RET=0
+ check_locked_port_support || return 0
+
+ ping6_do $h1 2001:db8:1::2
+ check_err $? "Ping6 did not work before locking port"
+
+ bridge link set dev $swp1 locked on
+
+ ping6_do $h1 2001:db8:1::2
+ check_fail $? "Ping6 worked after locking port, but before adding FDB entry"
+
+ bridge fdb add `mac_get $h1` dev $swp1 master static
+ ping6_do $h1 2001:db8:1::2
+ check_err $? "Ping6 did not work after locking port and adding FDB entry"
+
+ bridge link set dev $swp1 locked off
+ bridge fdb del `mac_get $h1` dev $swp1 master static
+
+ ping6_do $h1 2001:db8:1::2
+ check_err $? "Ping6 did not work after unlocking port and removing FDB entry"
+
+ log_test "Locked port ipv6"
+}
+
+trap cleanup EXIT
+
+setup_prepare
+setup_wait
+
+tests_run
+
+exit $EXIT_STATUS
diff --git a/tools/testing/selftests/net/forwarding/bridge_vlan_aware.sh b/tools/testing/selftests/net/forwarding/bridge_vlan_aware.sh
index b90dff8d3a94..64bd00fe9a4f 100755
--- a/tools/testing/selftests/net/forwarding/bridge_vlan_aware.sh
+++ b/tools/testing/selftests/net/forwarding/bridge_vlan_aware.sh
@@ -28,8 +28,9 @@ h2_destroy()
switch_create()
{
- # 10 Seconds ageing time.
- ip link add dev br0 type bridge vlan_filtering 1 ageing_time 1000 \
+ ip link add dev br0 type bridge \
+ vlan_filtering 1 \
+ ageing_time $LOW_AGEING_TIME \
mcast_snooping 0
ip link set dev $swp1 master br0
diff --git a/tools/testing/selftests/net/forwarding/bridge_vlan_unaware.sh b/tools/testing/selftests/net/forwarding/bridge_vlan_unaware.sh
index c15c6c85c984..1c8a26046589 100755
--- a/tools/testing/selftests/net/forwarding/bridge_vlan_unaware.sh
+++ b/tools/testing/selftests/net/forwarding/bridge_vlan_unaware.sh
@@ -27,8 +27,9 @@ h2_destroy()
switch_create()
{
- # 10 Seconds ageing time.
- ip link add dev br0 type bridge ageing_time 1000 mcast_snooping 0
+ ip link add dev br0 type bridge \
+ ageing_time $LOW_AGEING_TIME \
+ mcast_snooping 0
ip link set dev $swp1 master br0
ip link set dev $swp2 master br0
diff --git a/tools/testing/selftests/net/forwarding/fib_offload_lib.sh b/tools/testing/selftests/net/forwarding/fib_offload_lib.sh
index e134a5f529c9..1b3b46292179 100644
--- a/tools/testing/selftests/net/forwarding/fib_offload_lib.sh
+++ b/tools/testing/selftests/net/forwarding/fib_offload_lib.sh
@@ -99,15 +99,15 @@ fib_ipv4_tos_test()
fib4_trap_check $ns "192.0.2.0/24 dev dummy1 tos 0 metric 1024" false
check_err $? "Route not in hardware when should"
- ip -n $ns route add 192.0.2.0/24 dev dummy1 tos 2 metric 1024
- fib4_trap_check $ns "192.0.2.0/24 dev dummy1 tos 2 metric 1024" false
+ ip -n $ns route add 192.0.2.0/24 dev dummy1 tos 8 metric 1024
+ fib4_trap_check $ns "192.0.2.0/24 dev dummy1 tos 8 metric 1024" false
check_err $? "Highest TOS route not in hardware when should"
fib4_trap_check $ns "192.0.2.0/24 dev dummy1 tos 0 metric 1024" true
check_err $? "Lowest TOS route still in hardware when should not"
- ip -n $ns route add 192.0.2.0/24 dev dummy1 tos 1 metric 1024
- fib4_trap_check $ns "192.0.2.0/24 dev dummy1 tos 1 metric 1024" true
+ ip -n $ns route add 192.0.2.0/24 dev dummy1 tos 4 metric 1024
+ fib4_trap_check $ns "192.0.2.0/24 dev dummy1 tos 4 metric 1024" true
check_err $? "Middle TOS route in hardware when should not"
log_test "IPv4 routes with TOS"
@@ -277,11 +277,11 @@ fib_ipv4_replay_tos_test()
ip -n $ns link set dev dummy1 up
ip -n $ns route add 192.0.2.0/24 dev dummy1 tos 0
- ip -n $ns route add 192.0.2.0/24 dev dummy1 tos 1
+ ip -n $ns route add 192.0.2.0/24 dev dummy1 tos 4
devlink -N $ns dev reload $devlink_dev
- fib4_trap_check $ns "192.0.2.0/24 dev dummy1 tos 1" false
+ fib4_trap_check $ns "192.0.2.0/24 dev dummy1 tos 4" false
check_err $? "Highest TOS route not in hardware when should"
fib4_trap_check $ns "192.0.2.0/24 dev dummy1 tos 0" true
diff --git a/tools/testing/selftests/net/forwarding/forwarding.config.sample b/tools/testing/selftests/net/forwarding/forwarding.config.sample
index b0980a2efa31..4a546509de90 100644
--- a/tools/testing/selftests/net/forwarding/forwarding.config.sample
+++ b/tools/testing/selftests/net/forwarding/forwarding.config.sample
@@ -41,6 +41,8 @@ NETIF_CREATE=yes
# Timeout (in seconds) before ping exits regardless of how many packets have
# been sent or received
PING_TIMEOUT=5
+# Minimum ageing_time (in centiseconds) supported by hardware
+LOW_AGEING_TIME=1000
# Flag for tc match, supposed to be skip_sw/skip_hw which means do not process
# filter by software/hardware
TC_FLAG=skip_hw
diff --git a/tools/testing/selftests/net/forwarding/hw_stats_l3.sh b/tools/testing/selftests/net/forwarding/hw_stats_l3.sh
new file mode 100755
index 000000000000..1c11c4256d06
--- /dev/null
+++ b/tools/testing/selftests/net/forwarding/hw_stats_l3.sh
@@ -0,0 +1,332 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+
+# +--------------------+ +----------------------+
+# | H1 | | H2 |
+# | | | |
+# | $h1.200 + | | + $h2.200 |
+# | 192.0.2.1/28 | | | | 192.0.2.18/28 |
+# | 2001:db8:1::1/64 | | | | 2001:db8:2::1/64 |
+# | | | | | |
+# | $h1 + | | + $h2 |
+# | | | | | |
+# +------------------|-+ +-|--------------------+
+# | |
+# +------------------|-------------------------|--------------------+
+# | SW | | |
+# | | | |
+# | $rp1 + + $rp2 |
+# | | | |
+# | $rp1.200 + + $rp2.200 |
+# | 192.0.2.2/28 192.0.2.17/28 |
+# | 2001:db8:1::2/64 2001:db8:2::2/64 |
+# | |
+# +-----------------------------------------------------------------+
+
+ALL_TESTS="
+ ping_ipv4
+ ping_ipv6
+ test_stats_rx_ipv4
+ test_stats_tx_ipv4
+ test_stats_rx_ipv6
+ test_stats_tx_ipv6
+ respin_enablement
+ test_stats_rx_ipv4
+ test_stats_tx_ipv4
+ test_stats_rx_ipv6
+ test_stats_tx_ipv6
+ reapply_config
+ ping_ipv4
+ ping_ipv6
+ test_stats_rx_ipv4
+ test_stats_tx_ipv4
+ test_stats_rx_ipv6
+ test_stats_tx_ipv6
+ test_stats_report_rx
+ test_stats_report_tx
+ test_destroy_enabled
+ test_double_enable
+"
+NUM_NETIFS=4
+source lib.sh
+
+h1_create()
+{
+ simple_if_init $h1
+ vlan_create $h1 200 v$h1 192.0.2.1/28 2001:db8:1::1/64
+ ip route add 192.0.2.16/28 vrf v$h1 nexthop via 192.0.2.2
+ ip -6 route add 2001:db8:2::/64 vrf v$h1 nexthop via 2001:db8:1::2
+}
+
+h1_destroy()
+{
+ ip -6 route del 2001:db8:2::/64 vrf v$h1 nexthop via 2001:db8:1::2
+ ip route del 192.0.2.16/28 vrf v$h1 nexthop via 192.0.2.2
+ vlan_destroy $h1 200
+ simple_if_fini $h1
+}
+
+h2_create()
+{
+ simple_if_init $h2
+ vlan_create $h2 200 v$h2 192.0.2.18/28 2001:db8:2::1/64
+ ip route add 192.0.2.0/28 vrf v$h2 nexthop via 192.0.2.17
+ ip -6 route add 2001:db8:1::/64 vrf v$h2 nexthop via 2001:db8:2::2
+}
+
+h2_destroy()
+{
+ ip -6 route del 2001:db8:1::/64 vrf v$h2 nexthop via 2001:db8:2::2
+ ip route del 192.0.2.0/28 vrf v$h2 nexthop via 192.0.2.17
+ vlan_destroy $h2 200
+ simple_if_fini $h2
+}
+
+router_rp1_200_create()
+{
+ ip link add name $rp1.200 up \
+ link $rp1 addrgenmode eui64 type vlan id 200
+ ip address add dev $rp1.200 192.0.2.2/28
+ ip address add dev $rp1.200 2001:db8:1::2/64
+ ip stats set dev $rp1.200 l3_stats on
+}
+
+router_rp1_200_destroy()
+{
+ ip stats set dev $rp1.200 l3_stats off
+ ip address del dev $rp1.200 2001:db8:1::2/64
+ ip address del dev $rp1.200 192.0.2.2/28
+ ip link del dev $rp1.200
+}
+
+router_create()
+{
+ ip link set dev $rp1 up
+ router_rp1_200_create
+
+ ip link set dev $rp2 up
+ vlan_create $rp2 200 "" 192.0.2.17/28 2001:db8:2::2/64
+}
+
+router_destroy()
+{
+ vlan_destroy $rp2 200
+ ip link set dev $rp2 down
+
+ router_rp1_200_destroy
+ ip link set dev $rp1 down
+}
+
+setup_prepare()
+{
+ h1=${NETIFS[p1]}
+ rp1=${NETIFS[p2]}
+
+ rp2=${NETIFS[p3]}
+ h2=${NETIFS[p4]}
+
+ rp1mac=$(mac_get $rp1)
+ rp2mac=$(mac_get $rp2)
+
+ vrf_prepare
+
+ h1_create
+ h2_create
+
+ router_create
+
+ forwarding_enable
+}
+
+cleanup()
+{
+ pre_cleanup
+
+ forwarding_restore
+
+ router_destroy
+
+ h2_destroy
+ h1_destroy
+
+ vrf_cleanup
+}
+
+ping_ipv4()
+{
+ ping_test $h1.200 192.0.2.18 " IPv4"
+}
+
+ping_ipv6()
+{
+ ping_test $h1.200 2001:db8:2::1 " IPv6"
+}
+
+get_l3_stat()
+{
+ local selector=$1; shift
+
+ ip -j stats show dev $rp1.200 group offload subgroup l3_stats |
+ jq '.[0].stats64.'$selector
+}
+
+send_packets_rx_ipv4()
+{
+ # Send 21 packets instead of 20, because the first one might trap and go
+ # through the SW datapath, which might not bump the HW counter.
+ $MZ $h1.200 -c 21 -d 20msec -p 100 \
+ -a own -b $rp1mac -A 192.0.2.1 -B 192.0.2.18 \
+ -q -t udp sp=54321,dp=12345
+}
+
+send_packets_rx_ipv6()
+{
+ $MZ $h1.200 -6 -c 21 -d 20msec -p 100 \
+ -a own -b $rp1mac -A 2001:db8:1::1 -B 2001:db8:2::1 \
+ -q -t udp sp=54321,dp=12345
+}
+
+send_packets_tx_ipv4()
+{
+ $MZ $h2.200 -c 21 -d 20msec -p 100 \
+ -a own -b $rp2mac -A 192.0.2.18 -B 192.0.2.1 \
+ -q -t udp sp=54321,dp=12345
+}
+
+send_packets_tx_ipv6()
+{
+ $MZ $h2.200 -6 -c 21 -d 20msec -p 100 \
+ -a own -b $rp2mac -A 2001:db8:2::1 -B 2001:db8:1::1 \
+ -q -t udp sp=54321,dp=12345
+}
+
+___test_stats()
+{
+ local dir=$1; shift
+ local prot=$1; shift
+
+ local a
+ local b
+
+ a=$(get_l3_stat ${dir}.packets)
+ send_packets_${dir}_${prot}
+ "$@"
+ b=$(busywait "$TC_HIT_TIMEOUT" until_counter_is ">= $a + 20" \
+ get_l3_stat ${dir}.packets)
+ check_err $? "Traffic not reflected in the counter: $a -> $b"
+}
+
+__test_stats()
+{
+ local dir=$1; shift
+ local prot=$1; shift
+
+ RET=0
+ ___test_stats "$dir" "$prot"
+ log_test "Test $dir packets: $prot"
+}
+
+test_stats_rx_ipv4()
+{
+ __test_stats rx ipv4
+}
+
+test_stats_tx_ipv4()
+{
+ __test_stats tx ipv4
+}
+
+test_stats_rx_ipv6()
+{
+ __test_stats rx ipv6
+}
+
+test_stats_tx_ipv6()
+{
+ __test_stats tx ipv6
+}
+
+# Make sure everything works well even after stats have been disabled and
+# reenabled on the same device without touching the L3 configuration.
+respin_enablement()
+{
+ log_info "Turning stats off and on again"
+ ip stats set dev $rp1.200 l3_stats off
+ ip stats set dev $rp1.200 l3_stats on
+}
+
+# For the initial run, l3_stats is enabled on a completely set up netdevice. Now
+# do it the other way around: enable the L3 stats on an L2 netdevice, and only
+# then apply the L3 configuration.
+reapply_config()
+{
+ log_info "Reapplying configuration"
+
+ router_rp1_200_destroy
+
+ ip link add name $rp1.200 link $rp1 addrgenmode none type vlan id 200
+ ip stats set dev $rp1.200 l3_stats on
+ ip link set dev $rp1.200 up addrgenmode eui64
+ ip address add dev $rp1.200 192.0.2.2/28
+ ip address add dev $rp1.200 2001:db8:1::2/64
+}
+
+__test_stats_report()
+{
+ local dir=$1; shift
+ local prot=$1; shift
+
+ local a
+ local b
+
+ RET=0
+
+ a=$(get_l3_stat ${dir}.packets)
+ send_packets_${dir}_${prot}
+ ip address flush dev $rp1.200
+ b=$(busywait "$TC_HIT_TIMEOUT" until_counter_is ">= $a + 20" \
+ get_l3_stat ${dir}.packets)
+ check_err $? "Traffic not reflected in the counter: $a -> $b"
+ log_test "Test ${dir} packets: stats pushed on loss of L3"
+
+ ip stats set dev $rp1.200 l3_stats off
+ ip link del dev $rp1.200
+ router_rp1_200_create
+}
+
+test_stats_report_rx()
+{
+ __test_stats_report rx ipv4
+}
+
+test_stats_report_tx()
+{
+ __test_stats_report tx ipv4
+}
+
+test_destroy_enabled()
+{
+ RET=0
+
+ ip link del dev $rp1.200
+ router_rp1_200_create
+
+ log_test "Destroy l3_stats-enabled netdev"
+}
+
+test_double_enable()
+{
+ RET=0
+ ___test_stats rx ipv4 \
+ ip stats set dev $rp1.200 l3_stats on
+ log_test "Test stat retention across a spurious enablement"
+}
+
+trap cleanup EXIT
+
+setup_prepare
+setup_wait
+
+tests_run
+
+exit $EXIT_STATUS
diff --git a/tools/testing/selftests/net/forwarding/lib.sh b/tools/testing/selftests/net/forwarding/lib.sh
index 7da783d6f453..664b9ecaf228 100644
--- a/tools/testing/selftests/net/forwarding/lib.sh
+++ b/tools/testing/selftests/net/forwarding/lib.sh
@@ -24,6 +24,7 @@ PING_COUNT=${PING_COUNT:=10}
PING_TIMEOUT=${PING_TIMEOUT:=5}
WAIT_TIMEOUT=${WAIT_TIMEOUT:=20}
INTERFACE_TIMEOUT=${INTERFACE_TIMEOUT:=600}
+LOW_AGEING_TIME=${LOW_AGEING_TIME:=1000}
REQUIRE_JQ=${REQUIRE_JQ:=yes}
REQUIRE_MZ=${REQUIRE_MZ:=yes}
@@ -125,6 +126,14 @@ check_ethtool_lanes_support()
fi
}
+check_locked_port_support()
+{
+ if ! bridge -d link show | grep -q " locked"; then
+ echo "SKIP: iproute2 too old; Locked port feature not supported."
+ return $ksft_skip
+ fi
+}
+
if [[ "$(id -u)" -ne 0 ]]; then
echo "SKIP: need root privileges"
exit $ksft_skip
@@ -1489,3 +1498,63 @@ brmcast_check_sg_state()
check_err_fail $should_fail $? "Entry $src has blocked flag"
done
}
+
+start_ip_monitor()
+{
+ local mtype=$1; shift
+ local ip=${1-ip}; shift
+
+ # start the monitor in the background
+ tmpfile=`mktemp /var/run/nexthoptestXXX`
+ mpid=`($ip monitor $mtype > $tmpfile & echo $!) 2>/dev/null`
+ sleep 0.2
+ echo "$mpid $tmpfile"
+}
+
+stop_ip_monitor()
+{
+ local mpid=$1; shift
+ local tmpfile=$1; shift
+ local el=$1; shift
+ local what=$1; shift
+
+ sleep 0.2
+ kill $mpid
+ local lines=`grep '^\w' $tmpfile | wc -l`
+ test $lines -eq $el
+ check_err $? "$what: $lines lines of events, expected $el"
+ rm -rf $tmpfile
+}
+
+hw_stats_monitor_test()
+{
+ local dev=$1; shift
+ local type=$1; shift
+ local make_suitable=$1; shift
+ local make_unsuitable=$1; shift
+ local ip=${1-ip}; shift
+
+ RET=0
+
+ # Expect a notification about enablement.
+ local ipmout=$(start_ip_monitor stats "$ip")
+ $ip stats set dev $dev ${type}_stats on
+ stop_ip_monitor $ipmout 1 "${type}_stats enablement"
+
+ # Expect a notification about offload.
+ local ipmout=$(start_ip_monitor stats "$ip")
+ $make_suitable
+ stop_ip_monitor $ipmout 1 "${type}_stats installation"
+
+ # Expect a notification about loss of offload.
+ local ipmout=$(start_ip_monitor stats "$ip")
+ $make_unsuitable
+ stop_ip_monitor $ipmout 1 "${type}_stats deinstallation"
+
+ # Expect a notification about disablement
+ local ipmout=$(start_ip_monitor stats "$ip")
+ $ip stats set dev $dev ${type}_stats off
+ stop_ip_monitor $ipmout 1 "${type}_stats disablement"
+
+ log_test "${type}_stats notifications"
+}
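
A caller of hw_stats_monitor_test() supplies the device, the stats type,
and a pair of commands that make the device suitable or unsuitable for
offload. The netdevsim selftest above wires it up as:

    hw_stats_monitor_test dummy1 l3 \
        "nsim_hwstats_enable 1 dummy1 l3" \
        "nsim_hwstats_disable 1 dummy1 l3" \
        "$IP"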
diff --git a/tools/testing/selftests/net/forwarding/pedit_ip.sh b/tools/testing/selftests/net/forwarding/pedit_ip.sh
new file mode 100755
index 000000000000..d14efb2d23b2
--- /dev/null
+++ b/tools/testing/selftests/net/forwarding/pedit_ip.sh
@@ -0,0 +1,201 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+
+# This test sends traffic from H1 to H2. Either on ingress of $swp1, or on
+# egress of $swp2, the traffic is acted upon by a pedit action. An ingress
+# filter installed on $h2 verifies that the packet looks like expected.
+#
+# +----------------------+ +----------------------+
+# | H1 | | H2 |
+# | + $h1 | | $h2 + |
+# | | 192.0.2.1/28 | | 192.0.2.2/28 | |
+# +----|-----------------+ +----------------|-----+
+# | |
+# +----|----------------------------------------------------------------|-----+
+# | SW | | |
+# | +-|----------------------------------------------------------------|-+ |
+# | | + $swp1 BR $swp2 + | |
+# | +--------------------------------------------------------------------+ |
+# +---------------------------------------------------------------------------+
+
+ALL_TESTS="
+ ping_ipv4
+ ping_ipv6
+ test_ip4_src
+ test_ip4_dst
+ test_ip6_src
+ test_ip6_dst
+"
+
+NUM_NETIFS=4
+source lib.sh
+source tc_common.sh
+
+h1_create()
+{
+ simple_if_init $h1 192.0.2.1/28 2001:db8:1::1/64
+}
+
+h1_destroy()
+{
+ simple_if_fini $h1 192.0.2.1/28 2001:db8:1::1/64
+}
+
+h2_create()
+{
+ simple_if_init $h2 192.0.2.2/28 2001:db8:1::2/64
+ tc qdisc add dev $h2 clsact
+}
+
+h2_destroy()
+{
+ tc qdisc del dev $h2 clsact
+ simple_if_fini $h2 192.0.2.2/28 2001:db8:1::2/64
+}
+
+switch_create()
+{
+ ip link add name br1 up type bridge vlan_filtering 1
+ ip link set dev $swp1 master br1
+ ip link set dev $swp1 up
+ ip link set dev $swp2 master br1
+ ip link set dev $swp2 up
+
+ tc qdisc add dev $swp1 clsact
+ tc qdisc add dev $swp2 clsact
+}
+
+switch_destroy()
+{
+ tc qdisc del dev $swp2 clsact
+ tc qdisc del dev $swp1 clsact
+
+ ip link set dev $swp2 down
+ ip link set dev $swp2 nomaster
+ ip link set dev $swp1 down
+ ip link set dev $swp1 nomaster
+ ip link del dev br1
+}
+
+setup_prepare()
+{
+ h1=${NETIFS[p1]}
+ swp1=${NETIFS[p2]}
+
+ swp2=${NETIFS[p3]}
+ h2=${NETIFS[p4]}
+
+ h2mac=$(mac_get $h2)
+
+ vrf_prepare
+ h1_create
+ h2_create
+ switch_create
+}
+
+cleanup()
+{
+ pre_cleanup
+
+ switch_destroy
+ h2_destroy
+ h1_destroy
+ vrf_cleanup
+}
+
+ping_ipv4()
+{
+ ping_test $h1 192.0.2.2
+}
+
+ping_ipv6()
+{
+ ping6_test $h1 2001:db8:1::2
+}
+
+do_test_pedit_ip()
+{
+ local pedit_locus=$1; shift
+ local pedit_action=$1; shift
+ local match_prot=$1; shift
+ local match_flower=$1; shift
+ local mz_flags=$1; shift
+
+ tc filter add $pedit_locus handle 101 pref 1 \
+ flower action pedit ex munge $pedit_action
+ tc filter add dev $h2 ingress handle 101 pref 1 prot $match_prot \
+ flower skip_hw $match_flower action pass
+
+ RET=0
+
+ $MZ $mz_flags $h1 -c 10 -d 20msec -p 100 -a own -b $h2mac -q -t ip
+
+ local pkts
+ pkts=$(busywait "$TC_HIT_TIMEOUT" until_counter_is ">= 10" \
+ tc_rule_handle_stats_get "dev $h2 ingress" 101)
+ check_err $? "Expected to get 10 packets, but got $pkts."
+
+ pkts=$(tc_rule_handle_stats_get "$pedit_locus" 101)
+ ((pkts >= 10))
+ check_err $? "Expected to get 10 packets on pedit rule, but got $pkts."
+
+ log_test "$pedit_locus pedit $pedit_action"
+
+ tc filter del dev $h2 ingress pref 1
+ tc filter del $pedit_locus pref 1
+}
+
+do_test_pedit_ip6()
+{
+ local locus=$1; shift
+ local pedit_addr=$1; shift
+ local flower_addr=$1; shift
+
+ do_test_pedit_ip "$locus" "$pedit_addr set 2001:db8:2::1" ipv6 \
+ "$flower_addr 2001:db8:2::1" \
+ "-6 -A 2001:db8:1::1 -B 2001:db8:1::2"
+}
+
+do_test_pedit_ip4()
+{
+ local locus=$1; shift
+ local pedit_addr=$1; shift
+ local flower_addr=$1; shift
+
+ do_test_pedit_ip "$locus" "$pedit_addr set 198.51.100.1" ip \
+ "$flower_addr 198.51.100.1" \
+ "-A 192.0.2.1 -B 192.0.2.2"
+}
+
+test_ip4_src()
+{
+ do_test_pedit_ip4 "dev $swp1 ingress" "ip src" src_ip
+ do_test_pedit_ip4 "dev $swp2 egress" "ip src" src_ip
+}
+
+test_ip4_dst()
+{
+ do_test_pedit_ip4 "dev $swp1 ingress" "ip dst" dst_ip
+ do_test_pedit_ip4 "dev $swp2 egress" "ip dst" dst_ip
+}
+
+test_ip6_src()
+{
+ do_test_pedit_ip6 "dev $swp1 ingress" "ip6 src" src_ip
+ do_test_pedit_ip6 "dev $swp2 egress" "ip6 src" src_ip
+}
+
+test_ip6_dst()
+{
+ do_test_pedit_ip6 "dev $swp1 ingress" "ip6 dst" dst_ip
+ do_test_pedit_ip6 "dev $swp2 egress" "ip6 dst" dst_ip
+}
+
+trap cleanup EXIT
+
+setup_prepare
+setup_wait
+
+tests_run
+
+exit $EXIT_STATUS
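
Stripped of the test plumbing, the rewrite under test reduces to a single
flower+pedit rule. A sketch, with eth0 standing in for $swp1:

    $ tc qdisc add dev eth0 clsact
    $ tc filter add dev eth0 ingress pref 1 \
          flower action pedit ex munge ip src set 198.51.100.1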
diff --git a/tools/testing/selftests/net/forwarding/tc_police.sh b/tools/testing/selftests/net/forwarding/tc_police.sh
index 4f9f17cb45d6..0a51eef21b9e 100755
--- a/tools/testing/selftests/net/forwarding/tc_police.sh
+++ b/tools/testing/selftests/net/forwarding/tc_police.sh
@@ -37,6 +37,8 @@ ALL_TESTS="
police_tx_mirror_test
police_pps_rx_test
police_pps_tx_test
+ police_mtu_rx_test
+ police_mtu_tx_test
"
NUM_NETIFS=6
source tc_common.sh
@@ -346,6 +348,56 @@ police_pps_tx_test()
tc filter del dev $rp2 egress protocol ip pref 1 handle 101 flower
}
+police_mtu_common_test()
+{
+ RET=0
+
+ local test_name=$1; shift
+ local dev=$1; shift
+ local direction=$1; shift
+
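+ # the 10 packets exceeding the policer MTU of 1042 must be dropped
+ # and accounted as overlimits, while the 3 packets that fit within
+ # it conform and must be forwarded to $h2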
+ tc filter add dev $dev $direction protocol ip pref 1 handle 101 flower \
+ dst_ip 198.51.100.1 ip_proto udp dst_port 54321 \
+ action police mtu 1042 conform-exceed drop/ok
+
+ # to count "conform" packets
+ tc filter add dev $h2 ingress protocol ip pref 1 handle 101 flower \
+ dst_ip 198.51.100.1 ip_proto udp dst_port 54321 \
+ action drop
+
+ mausezahn $h1 -a own -b $(mac_get $rp1) -A 192.0.2.1 -B 198.51.100.1 \
+ -t udp sp=12345,dp=54321 -p 1001 -c 10 -q
+
+ mausezahn $h1 -a own -b $(mac_get $rp1) -A 192.0.2.1 -B 198.51.100.1 \
+ -t udp sp=12345,dp=54321 -p 1000 -c 3 -q
+
+ tc_check_packets "dev $dev $direction" 101 13
+ check_err $? "wrong packet counter"
+
+ # "exceed" packets
+ local overlimits_t0=$(tc_rule_stats_get ${dev} 1 ${direction} .overlimits)
+ test ${overlimits_t0} = 10
+ check_err $? "wrong overlimits, expected 10 got ${overlimits_t0}"
+
+ # "conform" packets
+ tc_check_packets "dev $h2 ingress" 101 3
+ check_err $? "forwarding error"
+
+ tc filter del dev $h2 ingress protocol ip pref 1 handle 101 flower
+ tc filter del dev $dev $direction protocol ip pref 1 handle 101 flower
+
+ log_test "$test_name"
+}
+
+police_mtu_rx_test()
+{
+ police_mtu_common_test "police mtu (rx)" $rp1 ingress
+}
+
+police_mtu_tx_test()
+{
+ police_mtu_common_test "police mtu (tx)" $rp2 egress
+}
+
setup_prepare()
{
h1=${NETIFS[p1]}
diff --git a/tools/testing/selftests/net/mptcp/mptcp_connect.sh b/tools/testing/selftests/net/mptcp/mptcp_connect.sh
index f0f4ab96b8f3..621af6895f4d 100755
--- a/tools/testing/selftests/net/mptcp/mptcp_connect.sh
+++ b/tools/testing/selftests/net/mptcp/mptcp_connect.sh
@@ -432,6 +432,8 @@ do_transfer()
local stat_ackrx_last_l=$(get_mib_counter "${listener_ns}" "MPTcpExtMPCapableACKRX")
local stat_cookietx_last=$(get_mib_counter "${listener_ns}" "TcpExtSyncookiesSent")
local stat_cookierx_last=$(get_mib_counter "${listener_ns}" "TcpExtSyncookiesRecv")
+ local stat_csum_err_s=$(get_mib_counter "${listener_ns}" "MPTcpExtDataCsumErr")
+ local stat_csum_err_c=$(get_mib_counter "${connector_ns}" "MPTcpExtDataCsumErr")
timeout ${timeout_test} \
ip netns exec ${listener_ns} \
@@ -524,6 +526,23 @@ do_transfer()
fi
fi
+ if $checksum; then
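+ # a transfer is only clean if the per-namespace MPTcpExtDataCsumErr
+ # counters did not increase while it was running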
+ local csum_err_s=$(get_mib_counter "${listener_ns}" "MPTcpExtDataCsumErr")
+ local csum_err_c=$(get_mib_counter "${connector_ns}" "MPTcpExtDataCsumErr")
+
+ local csum_err_s_nr=$((csum_err_s - stat_csum_err_s))
+ if [ $csum_err_s_nr -gt 0 ]; then
+ printf "[ FAIL ]\nserver got $csum_err_s_nr data checksum error[s]"
+ rets=1
+ fi
+
+ local csum_err_c_nr=$((csum_err_c - stat_csum_err_c))
+ if [ $csum_err_c_nr -gt 0 ]; then
+ printf "[ FAIL ]\nclient got $csum_err_c_nr data checksum error[s]"
+ retc=1
+ fi
+ fi
+
if [ $retc -eq 0 ] && [ $rets -eq 0 ]; then
printf "[ OK ]"
fi
diff --git a/tools/testing/selftests/net/mptcp/mptcp_join.sh b/tools/testing/selftests/net/mptcp/mptcp_join.sh
index 0c8a2a20b96c..7314257d248a 100755
--- a/tools/testing/selftests/net/mptcp/mptcp_join.sh
+++ b/tools/testing/selftests/net/mptcp/mptcp_join.sh
@@ -1,6 +1,11 @@
#!/bin/bash
# SPDX-License-Identifier: GPL-2.0
+# Double quoting to prevent globbing and word splitting is recommended in
+# new code, but we accept its absence here, especially because there were
+# too many instances to fix before having addressed all the other issues
+# detected by shellcheck.
+#shellcheck disable=SC2086
+
ret=0
sin=""
sinfail=""
@@ -9,15 +14,28 @@ cin=""
cinfail=""
cinsent=""
cout=""
+capout=""
+ns1=""
+ns2=""
ksft_skip=4
timeout_poll=30
timeout_test=$((timeout_poll * 2 + 1))
-mptcp_connect=""
capture=0
checksum=0
-do_all_tests=1
-
+ip_mptcp=0
+check_invert=0
+validate_checksum=0
+init=0
+
+declare -A all_tests
+declare -a only_tests_ids
+declare -a only_tests_names
+declare -A failed_tests
TEST_COUNT=0
+TEST_NAME=""
+nr_blank=40
+
+export FAILING_LINKS=""
# generated using "nfbpf_compile '(ip && (ip[54] & 0xf0) == 0x30) ||
# (ip6 && (ip6[74] & 0xf0) == 0x30)'"
@@ -37,16 +55,18 @@ CBPF_MPTCP_SUBOPTION_ADD_ADDR="14,
6 0 0 65535,
6 0 0 0"
-init()
+init_partial()
{
capout=$(mktemp)
- rndh=$(printf %x $sec)-$(mktemp -u XXXXXX)
+ local rndh
+ rndh=$(mktemp -u XXXXXX)
ns1="ns1-$rndh"
ns2="ns2-$rndh"
- for netns in "$ns1" "$ns2";do
+ local netns
+ for netns in "$ns1" "$ns2"; do
ip netns add $netns || exit $ksft_skip
ip -net $netns link set lo up
ip netns exec $netns sysctl -q net.mptcp.enabled=1
@@ -57,13 +77,18 @@ init()
fi
done
- # ns1 ns2
+ check_invert=0
+ validate_checksum=$checksum
+ FAILING_LINKS=""
+
+ # ns1 ns2
# ns1eth1 ns2eth1
# ns1eth2 ns2eth2
# ns1eth3 ns2eth3
# ns1eth4 ns2eth4
- for i in `seq 1 4`; do
+ local i
+ for i in $(seq 1 4); do
ip link add ns1eth$i netns "$ns1" type veth peer name ns2eth$i netns "$ns2"
ip -net "$ns1" addr add 10.0.$i.1/24 dev ns1eth$i
ip -net "$ns1" addr add dead:beef:$i::1/64 dev ns1eth$i nodad
@@ -81,7 +106,8 @@ init()
init_shapers()
{
- for i in `seq 1 4`; do
+ local i
+ for i in $(seq 1 4); do
tc -n $ns1 qdisc add dev ns1eth$i root netem rate 20mbit delay 1
tc -n $ns2 qdisc add dev ns2eth$i root netem rate 20mbit delay 1
done
@@ -91,12 +117,48 @@ cleanup_partial()
{
rm -f "$capout"
+ local netns
for netns in "$ns1" "$ns2"; do
ip netns del $netns
rm -f /tmp/$netns.{nstat,out}
done
}
+check_tools()
+{
+ if ! ip -Version &> /dev/null; then
+ echo "SKIP: Could not run test without ip tool"
+ exit $ksft_skip
+ fi
+
+ if ! iptables -V &> /dev/null; then
+ echo "SKIP: Could not run all tests without iptables tool"
+ exit $ksft_skip
+ fi
+
+ if ! ip6tables -V &> /dev/null; then
+ echo "SKIP: Could not run all tests without ip6tables tool"
+ exit $ksft_skip
+ fi
+}
+
+init()
+{
+ init=1
+
+ check_tools
+
+ sin=$(mktemp)
+ sout=$(mktemp)
+ cin=$(mktemp)
+ cinsent=$(mktemp)
+ cout=$(mktemp)
+
+ trap cleanup EXIT
+
+ make_file "$cin" "client" 1
+ make_file "$sin" "server" 1
+}
+
cleanup()
{
rm -f "$cin" "$cout" "$sinfail"
@@ -104,33 +166,73 @@ cleanup()
cleanup_partial
}
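+# return 0 to skip the current test: a test is run (return 1) when no
+# filter was given on the command line, or when its number or name
+# matches one of the requested tests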
+skip_test()
+{
+ if [ "${#only_tests_ids[@]}" -eq 0 ] && [ "${#only_tests_names[@]}" -eq 0 ]; then
+ return 1
+ fi
+
+ local i
+ for i in "${only_tests_ids[@]}"; do
+ if [ "${TEST_COUNT}" -eq "${i}" ]; then
+ return 1
+ fi
+ done
+ for i in "${only_tests_names[@]}"; do
+ if [ "${TEST_NAME}" = "${i}" ]; then
+ return 1
+ fi
+ done
+
+ return 0
+}
+
+# $1: test name
reset()
{
- cleanup_partial
- init
+ TEST_NAME="${1}"
+
+ TEST_COUNT=$((TEST_COUNT+1))
+
+ if skip_test; then
+ return 1
+ fi
+
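+ # the first test triggers the full one-time init; the following
+ # ones only need to tear down and recreate the netns environment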
+ if [ "${init}" != "1" ]; then
+ init
+ else
+ cleanup_partial
+ fi
+
+ init_partial
+
+ return 0
}
+# $1: test name
reset_with_cookies()
{
- reset
+ reset "${1}" || return 1
- for netns in "$ns1" "$ns2";do
+ local netns
+ for netns in "$ns1" "$ns2"; do
ip netns exec $netns sysctl -q net.ipv4.tcp_syncookies=2
done
}
+# $1: test name
reset_with_add_addr_timeout()
{
- local ip="${1:-4}"
+ local ip="${2:-4}"
local tables
+ reset "${1}" || return 1
+
tables="iptables"
if [ $ip -eq 6 ]; then
tables="ip6tables"
fi
- reset
-
ip netns exec $ns1 sysctl -q net.mptcp.add_addr_timeout=1
ip netns exec $ns2 $tables -A OUTPUT -p tcp \
-m tcp --tcp-option 30 \
@@ -139,45 +241,45 @@ reset_with_add_addr_timeout()
-j DROP
}
+# $1: test name
reset_with_checksum()
{
local ns1_enable=$1
local ns2_enable=$2
- reset
+ reset "checksum test ${1} ${2}" || return 1
ip netns exec $ns1 sysctl -q net.mptcp.checksum_enabled=$ns1_enable
ip netns exec $ns2 sysctl -q net.mptcp.checksum_enabled=$ns2_enable
+
+ validate_checksum=1
}
reset_with_allow_join_id0()
{
- local ns1_enable=$1
- local ns2_enable=$2
+ local ns1_enable=$2
+ local ns2_enable=$3
- reset
+ reset "${1}" || return 1
ip netns exec $ns1 sysctl -q net.mptcp.allow_join_initial_addr_port=$ns1_enable
ip netns exec $ns2 sysctl -q net.mptcp.allow_join_initial_addr_port=$ns2_enable
}
-ip -Version > /dev/null 2>&1
-if [ $? -ne 0 ];then
- echo "SKIP: Could not run test without ip tool"
- exit $ksft_skip
-fi
-
-iptables -V > /dev/null 2>&1
-if [ $? -ne 0 ];then
- echo "SKIP: Could not run all tests without iptables tool"
- exit $ksft_skip
-fi
+fail_test()
+{
+ ret=1
+ failed_tests[${TEST_COUNT}]="${TEST_NAME}"
+}
-ip6tables -V > /dev/null 2>&1
-if [ $? -ne 0 ];then
- echo "SKIP: Could not run all tests without ip6tables tool"
- exit $ksft_skip
-fi
+get_failed_tests_ids()
+{
+ # sorted
+ local i
+ for i in "${!failed_tests[@]}"; do
+ echo "${i}"
+ done | sort -n
+}
print_file_err()
{
@@ -188,47 +290,53 @@ print_file_err()
check_transfer()
{
- in=$1
- out=$2
- what=$3
-
- cmp "$in" "$out" > /dev/null 2>&1
- if [ $? -ne 0 ] ;then
- echo "[ FAIL ] $what does not match (in, out):"
- print_file_err "$in"
- print_file_err "$out"
- ret=1
-
- return 1
- fi
+ local in=$1
+ local out=$2
+ local what=$3
+ local i a b
+
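+ # when check_invert is set, a mismatching byte pair whose values
+ # sum up to 0xff (the received byte is the bitwise NOT of the sent
+ # one) is the expected outcome of the corruption tests and is only
+ # reported, not treated as an error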
+ local line
+ cmp -l "$in" "$out" | while read -r i a b; do
+ local sum=$((0${a} + 0${b}))
+ if [ $check_invert -eq 0 ] || [ $sum -ne $((0xff)) ]; then
+ echo "[ FAIL ] $what does not match (in, out):"
+ print_file_err "$in"
+ print_file_err "$out"
+ fail_test
+
+ return 1
+ else
+ echo "$what has inverted byte at ${i}"
+ fi
+ done
return 0
}
do_ping()
{
- listener_ns="$1"
- connector_ns="$2"
- connect_addr="$3"
+ local listener_ns="$1"
+ local connector_ns="$2"
+ local connect_addr="$3"
- ip netns exec ${connector_ns} ping -q -c 1 $connect_addr >/dev/null
- if [ $? -ne 0 ] ; then
+ if ! ip netns exec ${connector_ns} ping -q -c 1 $connect_addr >/dev/null; then
echo "$listener_ns -> $connect_addr connectivity [ FAIL ]" 1>&2
- ret=1
+ fail_test
fi
}
link_failure()
{
- ns="$1"
+ local ns="$1"
if [ -z "$FAILING_LINKS" ]; then
l=$((RANDOM%4))
FAILING_LINKS=$((l+1))
fi
+ local l
for l in $FAILING_LINKS; do
- veth="ns1eth$l"
+ local veth="ns1eth$l"
ip -net "$ns" link set "$veth" down
done
}
@@ -245,9 +353,10 @@ wait_local_port_listen()
local listener_ns="${1}"
local port="${2}"
- local port_hex i
-
+ local port_hex
port_hex="$(printf "%04X" "${port}")"
+
+ local i
for i in $(seq 10); do
ip netns exec "${listener_ns}" cat /proc/net/tcp* | \
awk "BEGIN {rc=1} {if (\$2 ~ /:${port_hex}\$/ && \$4 ~ /0A/) {rc=0; exit}} END {exit rc}" &&
@@ -258,7 +367,7 @@ wait_local_port_listen()
rm_addr_count()
{
- ns=${1}
+ local ns=${1}
ip netns exec ${ns} nstat -as | grep MPTcpExtRmAddr | awk '{print $2}'
}
@@ -269,8 +378,8 @@ wait_rm_addr()
local ns="${1}"
local old_cnt="${2}"
local cnt
- local i
+ local i
for i in $(seq 10); do
cnt=$(rm_addr_count ${ns})
[ "$cnt" = "${old_cnt}" ] || break
@@ -278,27 +387,227 @@ wait_rm_addr()
done
}
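+# poll (for up to ~1 second) until the MPJoinAckRx MIB counter of $ns
+# changes, i.e. until a new MP_JOIN handshake completes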
+wait_mpj()
+{
+ local ns="${1}"
+ local cnt old_cnt
+
+ old_cnt=$(ip netns exec ${ns} nstat -as | grep MPJoinAckRx | awk '{print $2}')
+
+ local i
+ for i in $(seq 10); do
+ cnt=$(ip netns exec ${ns} nstat -as | grep MPJoinAckRx | awk '{print $2}')
+ [ "$cnt" = "${old_cnt}" ] || break
+ sleep 0.1
+ done
+}
+
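+# the pm_nl_* helpers below wrap the two path-manager front-ends: the
+# iproute2 'ip mptcp' interface when ip_mptcp is set, and the pm_nl_ctl
+# selftest binary otherwise, so each test can exercise both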
+pm_nl_set_limits()
+{
+ local ns=$1
+ local addrs=$2
+ local subflows=$3
+
+ if [ $ip_mptcp -eq 1 ]; then
+ ip -n $ns mptcp limits set add_addr_accepted $addrs subflows $subflows
+ else
+ ip netns exec $ns ./pm_nl_ctl limits $addrs $subflows
+ fi
+}
+
+pm_nl_add_endpoint()
+{
+ local ns=$1
+ local addr=$2
+ local flags _flags
+ local port _port
+ local dev _dev
+ local id _id
+ local nr=2
+
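+ # scan the argument list for keywords; $nr always points one
+ # position past $p, so it indexes the value following each keyword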
+ local p
+ for p in "${@}"
+ do
+ if [ $p = "flags" ]; then
+ eval _flags=\$"$nr"
+ [ -n "$_flags" ]; flags="flags $_flags"
+ fi
+ if [ $p = "dev" ]; then
+ eval _dev=\$"$nr"
+ [ -n "$_dev" ]; dev="dev $_dev"
+ fi
+ if [ $p = "id" ]; then
+ eval _id=\$"$nr"
+ [ -n "$_id" ]; id="id $_id"
+ fi
+ if [ $p = "port" ]; then
+ eval _port=\$"$nr"
+ [ -n "$_port" ]; port="port $_port"
+ fi
+
+ nr=$((nr + 1))
+ done
+
+ if [ $ip_mptcp -eq 1 ]; then
+ ip -n $ns mptcp endpoint add $addr ${_flags//","/" "} $dev $id $port
+ else
+ ip netns exec $ns ./pm_nl_ctl add $addr $flags $dev $id $port
+ fi
+}
+
+pm_nl_del_endpoint()
+{
+ local ns=$1
+ local id=$2
+ local addr=$3
+
+ if [ $ip_mptcp -eq 1 ]; then
+ ip -n $ns mptcp endpoint delete id $id $addr
+ else
+ ip netns exec $ns ./pm_nl_ctl del $id $addr
+ fi
+}
+
+pm_nl_flush_endpoint()
+{
+ local ns=$1
+
+ if [ $ip_mptcp -eq 1 ]; then
+ ip -n $ns mptcp endpoint flush
+ else
+ ip netns exec $ns ./pm_nl_ctl flush
+ fi
+}
+
+pm_nl_show_endpoints()
+{
+ local ns=$1
+
+ if [ $ip_mptcp -eq 1 ]; then
+ ip -n $ns mptcp endpoint show
+ else
+ ip netns exec $ns ./pm_nl_ctl dump
+ fi
+}
+
+pm_nl_change_endpoint()
+{
+ local ns=$1
+ local id=$2
+ local flags=$3
+
+ if [ $ip_mptcp -eq 1 ]; then
+ ip -n $ns mptcp endpoint change id $id ${flags//","/" "}
+ else
+ ip netns exec $ns ./pm_nl_ctl set id $id flags $flags
+ fi
+}
+
+pm_nl_check_endpoint()
+{
+ local line expected_line
+ local need_title=$1
+ local msg="$2"
+ local ns=$3
+ local addr=$4
+ local _flags=""
+ local flags
+ local _port
+ local port
+ local dev
+ local _id
+ local id
+
+ if [ "${need_title}" = 1 ]; then
+ printf "%03u %-36s %s" "${TEST_COUNT}" "${TEST_NAME}" "${msg}"
+ else
+ printf "%-${nr_blank}s %s" " " "${msg}"
+ fi
+
+ shift 4
+ while [ -n "$1" ]; do
+ if [ $1 = "flags" ]; then
+ _flags=$2
+ [ -n "$_flags" ]; flags="flags $_flags"
+ shift
+ elif [ $1 = "dev" ]; then
+ [ -n "$2" ]; dev="dev $1"
+ shift
+ elif [ $1 = "id" ]; then
+ _id=$2
+ [ -n "$_id" ]; id="id $_id"
+ shift
+ elif [ $1 = "port" ]; then
+ _port=$2
+ [ -n "$_port" ]; port=" port $_port"
+ shift
+ fi
+
+ shift
+ done
+
+ if [ -z "$id" ]; then
+ echo "[skip] bad test - missing endpoint id"
+ return
+ fi
+
+ if [ $ip_mptcp -eq 1 ]; then
+ line=$(ip -n $ns mptcp endpoint show $id)
+ # the dump order is: address id flags port dev
+ expected_line="$addr"
+ [ -n "$addr" ] && expected_line="$expected_line $addr"
+ expected_line="$expected_line $id"
+ [ -n "$_flags" ] && expected_line="$expected_line ${_flags//","/" "}"
+ [ -n "$dev" ] && expected_line="$expected_line $dev"
+ [ -n "$port" ] && expected_line="$expected_line $port"
+ else
+ line=$(ip netns exec $ns ./pm_nl_ctl get $_id)
+ # the dump order is: id flags dev address port
+ expected_line="$id"
+ [ -n "$flags" ] && expected_line="$expected_line $flags"
+ [ -n "$dev" ] && expected_line="$expected_line $dev"
+ [ -n "$addr" ] && expected_line="$expected_line $addr"
+ [ -n "$_port" ] && expected_line="$expected_line $_port"
+ fi
+ if [ "$line" = "$expected_line" ]; then
+ echo "[ ok ]"
+ else
+ echo "[fail] expected '$expected_line' found '$line'"
+ fail_test
+ fi
+}
+
+filter_tcp_from()
+{
+ local ns="${1}"
+ local src="${2}"
+ local target="${3}"
+
+ ip netns exec "${ns}" iptables -A INPUT -s "${src}" -p tcp -j "${target}"
+}
+
do_transfer()
{
- listener_ns="$1"
- connector_ns="$2"
- cl_proto="$3"
- srv_proto="$4"
- connect_addr="$5"
- test_link_fail="$6"
- addr_nr_ns1="$7"
- addr_nr_ns2="$8"
- speed="$9"
- bkup="${10}"
-
- port=$((10000+$TEST_COUNT))
- TEST_COUNT=$((TEST_COUNT+1))
+ local listener_ns="$1"
+ local connector_ns="$2"
+ local cl_proto="$3"
+ local srv_proto="$4"
+ local connect_addr="$5"
+ local test_link_fail="$6"
+ local addr_nr_ns1="$7"
+ local addr_nr_ns2="$8"
+ local speed="$9"
+ local sflags="${10}"
+
+ local port=$((10000 + TEST_COUNT - 1))
+ local cappid
:> "$cout"
:> "$sout"
:> "$capout"
if [ $capture -eq 1 ]; then
+ local capuser
if [ -z $SUDO_USER ] ; then
capuser=""
else
@@ -319,12 +628,19 @@ do_transfer()
NSTAT_HISTORY=/tmp/${connector_ns}.nstat ip netns exec ${connector_ns} \
nstat -n
+ local extra_args
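+ # map the symbolic transfer speed onto mptcp_connect options;
+ # a "speed_N" value is passed through as "-r N"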
if [ $speed = "fast" ]; then
- mptcp_connect="./mptcp_connect -j"
+ extra_args="-j"
elif [ $speed = "slow" ]; then
- mptcp_connect="./mptcp_connect -r 50"
- elif [ $speed = "least" ]; then
- mptcp_connect="./mptcp_connect -r 10"
+ extra_args="-r 50"
+ elif [[ $speed = "speed_"* ]]; then
+ extra_args="-r ${speed:6}"
+ fi
+
+ if [[ "${addr_nr_ns2}" = "fastclose_"* ]]; then
+ # disconnect
+ extra_args="$extra_args -I ${addr_nr_ns2:10}"
+ addr_nr_ns2=0
fi
local local_addr
@@ -334,43 +650,51 @@ do_transfer()
local_addr="0.0.0.0"
fi
- if [ "$test_link_fail" -eq 2 ];then
+ if [ "$test_link_fail" -gt 1 ];then
timeout ${timeout_test} \
ip netns exec ${listener_ns} \
- $mptcp_connect -t ${timeout_poll} -l -p $port -s ${srv_proto} \
- ${local_addr} < "$sinfail" > "$sout" &
+ ./mptcp_connect -t ${timeout_poll} -l -p $port -s ${srv_proto} \
+ $extra_args ${local_addr} < "$sinfail" > "$sout" &
else
timeout ${timeout_test} \
ip netns exec ${listener_ns} \
- $mptcp_connect -t ${timeout_poll} -l -p $port -s ${srv_proto} \
- ${local_addr} < "$sin" > "$sout" &
+ ./mptcp_connect -t ${timeout_poll} -l -p $port -s ${srv_proto} \
+ $extra_args ${local_addr} < "$sin" > "$sout" &
fi
- spid=$!
+ local spid=$!
wait_local_port_listen "${listener_ns}" "${port}"
if [ "$test_link_fail" -eq 0 ];then
timeout ${timeout_test} \
ip netns exec ${connector_ns} \
- $mptcp_connect -t ${timeout_poll} -p $port -s ${cl_proto} \
- $connect_addr < "$cin" > "$cout" &
- else
+ ./mptcp_connect -t ${timeout_poll} -p $port -s ${cl_proto} \
+ $extra_args $connect_addr < "$cin" > "$cout" &
+ elif [ "$test_link_fail" -eq 1 ] || [ "$test_link_fail" -eq 2 ];then
( cat "$cinfail" ; sleep 2; link_failure $listener_ns ; cat "$cinfail" ) | \
tee "$cinsent" | \
timeout ${timeout_test} \
ip netns exec ${connector_ns} \
- $mptcp_connect -t ${timeout_poll} -p $port -s ${cl_proto} \
- $connect_addr > "$cout" &
+ ./mptcp_connect -t ${timeout_poll} -p $port -s ${cl_proto} \
+ $extra_args $connect_addr > "$cout" &
+ else
+ tee "$cinsent" < "$cinfail" | \
+ timeout ${timeout_test} \
+ ip netns exec ${connector_ns} \
+ ./mptcp_connect -t ${timeout_poll} -p $port -s ${cl_proto} \
+ $extra_args $connect_addr > "$cout" &
fi
- cpid=$!
+ local cpid=$!
# let the mptcp subflow be established in background before
# doing endpoint manipulation
- [ $addr_nr_ns1 = "0" -a $addr_nr_ns2 = "0" ] || sleep 1
+ if [ $addr_nr_ns1 != "0" ] || [ $addr_nr_ns2 != "0" ]; then
+ sleep 1
+ fi
if [ $addr_nr_ns1 -gt 0 ]; then
- let add_nr_ns1=addr_nr_ns1
- counter=2
+ local counter=2
+ local add_nr_ns1=${addr_nr_ns1}
while [ $add_nr_ns1 -gt 0 ]; do
local addr
if is_v6 "${connect_addr}"; then
@@ -378,35 +702,43 @@ do_transfer()
else
addr="10.0.$counter.1"
fi
- ip netns exec $ns1 ./pm_nl_ctl add $addr flags signal
- let counter+=1
- let add_nr_ns1-=1
+ pm_nl_add_endpoint $ns1 $addr flags signal
+ counter=$((counter + 1))
+ add_nr_ns1=$((add_nr_ns1 - 1))
done
elif [ $addr_nr_ns1 -lt 0 ]; then
- let rm_nr_ns1=-addr_nr_ns1
+ local rm_nr_ns1=$((-addr_nr_ns1))
if [ $rm_nr_ns1 -lt 8 ]; then
- counter=1
- pos=1
- dump=(`ip netns exec ${listener_ns} ./pm_nl_ctl dump`)
- if [ ${#dump[@]} -gt 0 ]; then
- while [ $counter -le $rm_nr_ns1 ]
- do
- id=${dump[$pos]}
- rm_addr=$(rm_addr_count ${connector_ns})
- ip netns exec ${listener_ns} ./pm_nl_ctl del $id
- wait_rm_addr ${connector_ns} ${rm_addr}
- let counter+=1
- let pos+=5
+ local counter=0
+ local line
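+ # delete the first $rm_nr_ns1 endpoints from the dump, extracting
+ # each id from the token that follows the "id" keyword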
+ pm_nl_show_endpoints ${listener_ns} | while read -r line; do
+ # shellcheck disable=SC2206 # we do want to split per word
+ local arr=($line)
+ local nr=0
+
+ local i
+ for i in "${arr[@]}"; do
+ if [ $i = "id" ]; then
+ if [ $counter -eq $rm_nr_ns1 ]; then
+ break
+ fi
+ id=${arr[$nr+1]}
+ rm_addr=$(rm_addr_count ${connector_ns})
+ pm_nl_del_endpoint ${listener_ns} $id
+ wait_rm_addr ${connector_ns} ${rm_addr}
+ counter=$((counter + 1))
+ fi
+ nr=$((nr + 1))
done
- fi
+ done
elif [ $rm_nr_ns1 -eq 8 ]; then
- ip netns exec ${listener_ns} ./pm_nl_ctl flush
+ pm_nl_flush_endpoint ${listener_ns}
elif [ $rm_nr_ns1 -eq 9 ]; then
- ip netns exec ${listener_ns} ./pm_nl_ctl del 0 ${connect_addr}
+ pm_nl_del_endpoint ${listener_ns} 0 ${connect_addr}
fi
fi
- flags="subflow"
+ local flags="subflow"
if [[ "${addr_nr_ns2}" = "fullmesh_"* ]]; then
flags="${flags},fullmesh"
addr_nr_ns2=${addr_nr_ns2:9}
@@ -414,11 +746,11 @@ do_transfer()
# if newly added endpoints must be deleted, give the background msk
# some time to create them
- [ $addr_nr_ns1 -gt 0 -a $addr_nr_ns2 -lt 0 ] && sleep 1
+ [ $addr_nr_ns1 -gt 0 ] && [ $addr_nr_ns2 -lt 0 ] && sleep 1
if [ $addr_nr_ns2 -gt 0 ]; then
- let add_nr_ns2=addr_nr_ns2
- counter=3
+ local add_nr_ns2=${addr_nr_ns2}
+ local counter=3
while [ $add_nr_ns2 -gt 0 ]; do
local addr
if is_v6 "${connect_addr}"; then
@@ -426,30 +758,40 @@ do_transfer()
else
addr="10.0.$counter.2"
fi
- ip netns exec $ns2 ./pm_nl_ctl add $addr flags $flags
- let counter+=1
- let add_nr_ns2-=1
+ pm_nl_add_endpoint $ns2 $addr flags $flags
+ counter=$((counter + 1))
+ add_nr_ns2=$((add_nr_ns2 - 1))
done
elif [ $addr_nr_ns2 -lt 0 ]; then
- let rm_nr_ns2=-addr_nr_ns2
+ local rm_nr_ns2=$((-addr_nr_ns2))
if [ $rm_nr_ns2 -lt 8 ]; then
- counter=1
- pos=1
- dump=(`ip netns exec ${connector_ns} ./pm_nl_ctl dump`)
- if [ ${#dump[@]} -gt 0 ]; then
- while [ $counter -le $rm_nr_ns2 ]
- do
- # rm_addr are serialized, allow the previous one to complete
- id=${dump[$pos]}
- rm_addr=$(rm_addr_count ${listener_ns})
- ip netns exec ${connector_ns} ./pm_nl_ctl del $id
- wait_rm_addr ${listener_ns} ${rm_addr}
- let counter+=1
- let pos+=5
+ local counter=0
+ local line
+ pm_nl_show_endpoints ${connector_ns} | while read -r line; do
+ # shellcheck disable=SC2206 # we do want to split per word
+ local arr=($line)
+ local nr=0
+
+ local i
+ for i in "${arr[@]}"; do
+ if [ $i = "id" ]; then
+ if [ $counter -eq $rm_nr_ns2 ]; then
+ break
+ fi
+ local id rm_addr
+ # rm_addr operations are serialized, allow the previous
+ # one to complete
+ id=${arr[$nr+1]}
+ rm_addr=$(rm_addr_count ${listener_ns})
+ pm_nl_del_endpoint ${connector_ns} $id
+ wait_rm_addr ${listener_ns} ${rm_addr}
+ counter=$((counter + 1))
+ fi
+ nr=$((nr + 1))
done
- fi
+ done
elif [ $rm_nr_ns2 -eq 8 ]; then
- ip netns exec ${connector_ns} ./pm_nl_ctl flush
+ pm_nl_flush_endpoint ${connector_ns}
elif [ $rm_nr_ns2 -eq 9 ]; then
local addr
if is_v6 "${connect_addr}"; then
@@ -457,26 +799,38 @@ do_transfer()
else
addr="10.0.1.2"
fi
- ip netns exec ${connector_ns} ./pm_nl_ctl del 0 $addr
+ pm_nl_del_endpoint ${connector_ns} 0 $addr
fi
fi
- if [ ! -z $bkup ]; then
+ if [ -n "${sflags}" ]; then
sleep 1
+
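+ # apply the requested flag change to every configured endpoint,
+ # looking up each endpoint id from the dump output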
+ local netns
for netns in "$ns1" "$ns2"; do
- dump=(`ip netns exec $netns ./pm_nl_ctl dump`)
- if [ ${#dump[@]} -gt 0 ]; then
- addr=${dump[${#dump[@]} - 1]}
- backup="ip netns exec $netns ./pm_nl_ctl set $addr flags $bkup"
- $backup
- fi
+ local line
+ pm_nl_show_endpoints $netns | while read -r line; do
+ # shellcheck disable=SC2206 # we do want to split per word
+ local arr=($line)
+ local nr=0
+ local id
+
+ local i
+ for i in "${arr[@]}"; do
+ if [ $i = "id" ]; then
+ id=${arr[$nr+1]}
+ fi
+ nr=$((nr + 1))
+ done
+ pm_nl_change_endpoint $netns $id $sflags
+ done
done
fi
wait $cpid
- retc=$?
+ local retc=$?
wait $spid
- rets=$?
+ local rets=$?
if [ $capture -eq 1 ]; then
sleep 1
@@ -498,11 +852,11 @@ do_transfer()
cat /tmp/${connector_ns}.out
cat "$capout"
- ret=1
+ fail_test
return 1
fi
- if [ "$test_link_fail" -eq 2 ];then
+ if [ "$test_link_fail" -gt 1 ];then
check_transfer $sinfail $cout "file received by client"
else
check_transfer $sin $cout "file received by client"
@@ -526,9 +880,9 @@ do_transfer()
make_file()
{
- name=$1
- who=$2
- size=$3
+ local name=$1
+ local who=$2
+ local size=$3
dd if=/dev/urandom of="$name" bs=1024 count=$size 2> /dev/null
echo -e "\nMPTCP_TEST_FILE_END_MARKER" >> "$name"
@@ -538,33 +892,49 @@ make_file()
run_tests()
{
- listener_ns="$1"
- connector_ns="$2"
- connect_addr="$3"
- test_linkfail="${4:-0}"
- addr_nr_ns1="${5:-0}"
- addr_nr_ns2="${6:-0}"
- speed="${7:-fast}"
- bkup="${8:-""}"
- lret=0
- oldin=""
-
+ local listener_ns="$1"
+ local connector_ns="$2"
+ local connect_addr="$3"
+ local test_linkfail="${4:-0}"
+ local addr_nr_ns1="${5:-0}"
+ local addr_nr_ns2="${6:-0}"
+ local speed="${7:-fast}"
+ local sflags="${8:-""}"
+
+ local size
+
+ # a test_linkfail value above 2 is reused as the size (in KiB)
+ # of the test files
+ if [ "$test_linkfail" -gt 2 ]; then
+ size=$test_linkfail
+
+ if [ -z "$cinfail" ]; then
+ cinfail=$(mktemp)
+ fi
+ make_file "$cinfail" "client" $size
# create the input file for the failure test when
# the first failure test runs
- if [ "$test_linkfail" -ne 0 -a -z "$cinfail" ]; then
+ elif [ "$test_linkfail" -ne 0 ] && [ -z "$cinfail" ]; then
# the client file must be considerably larger
# than the maximum expected cwin value, or the
# link utilization will not be predictable
size=$((RANDOM%2))
size=$((size+1))
size=$((size*8192))
- size=$((size + ( $RANDOM % 8192) ))
+ size=$((size + ( RANDOM % 8192) ))
cinfail=$(mktemp)
make_file "$cinfail" "client" $size
fi
- if [ "$test_linkfail" -eq 2 -a -z "$sinfail" ]; then
+ if [ "$test_linkfail" -gt 2 ]; then
+ size=$test_linkfail
+
+ if [ -z "$sinfail" ]; then
+ sinfail=$(mktemp)
+ fi
+ make_file "$sinfail" "server" $size
+ elif [ "$test_linkfail" -eq 2 ] && [ -z "$sinfail" ]; then
size=$((RANDOM%16))
size=$((size+1))
size=$((size*2048))
@@ -574,8 +944,7 @@ run_tests()
fi
do_transfer ${listener_ns} ${connector_ns} MPTCP MPTCP ${connect_addr} \
- ${test_linkfail} ${addr_nr_ns1} ${addr_nr_ns2} ${speed} ${bkup}
- lret=$?
+ ${test_linkfail} ${addr_nr_ns1} ${addr_nr_ns2} ${speed} ${sflags}
}
dump_stats()
@@ -588,31 +957,40 @@ dump_stats()
chk_csum_nr()
{
- local msg=${1:-""}
+ local csum_ns1=${1:-0}
+ local csum_ns2=${2:-0}
local count
local dump_stats
+ local allow_multi_errors_ns1=0
+ local allow_multi_errors_ns2=0
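+ # an expected value prefixed with '+' means "at least that many
+ # errors": the exact count is not deterministic when the corrupted
+ # data can be received more than once due to retransmissions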
- if [ ! -z "$msg" ]; then
- printf "%02u" "$TEST_COUNT"
- else
- echo -n " "
+ if [[ "${csum_ns1}" = "+"* ]]; then
+ allow_multi_errors_ns1=1
+ csum_ns1=${csum_ns1:1}
+ fi
+ if [[ "${csum_ns2}" = "+"* ]]; then
+ allow_multi_errors_ns2=1
+ csum_ns2=${csum_ns2:1}
fi
- printf " %-36s %s" "$msg" "sum"
- count=`ip netns exec $ns1 nstat -as | grep MPTcpExtDataCsumErr | awk '{print $2}'`
+
+ printf "%-${nr_blank}s %s" " " "sum"
+ count=$(ip netns exec $ns1 nstat -as | grep MPTcpExtDataCsumErr | awk '{print $2}')
[ -z "$count" ] && count=0
- if [ "$count" != 0 ]; then
- echo "[fail] got $count data checksum error[s] expected 0"
- ret=1
+ if { [ "$count" != $csum_ns1 ] && [ $allow_multi_errors_ns1 -eq 0 ]; } ||
+ { [ "$count" -lt $csum_ns1 ] && [ $allow_multi_errors_ns1 -eq 1 ]; }; then
+ echo "[fail] got $count data checksum error[s] expected $csum_ns1"
+ fail_test
dump_stats=1
else
echo -n "[ ok ]"
fi
echo -n " - csum "
- count=`ip netns exec $ns2 nstat -as | grep MPTcpExtDataCsumErr | awk '{print $2}'`
+ count=$(ip netns exec $ns2 nstat -as | grep MPTcpExtDataCsumErr | awk '{print $2}')
[ -z "$count" ] && count=0
- if [ "$count" != 0 ]; then
- echo "[fail] got $count data checksum error[s] expected 0"
- ret=1
+ if { [ "$count" != $csum_ns2 ] && [ $allow_multi_errors_ns2 -eq 0 ]; } ||
+ { [ "$count" -lt $csum_ns2 ] && [ $allow_multi_errors_ns2 -eq 1 ]; }; then
+ echo "[fail] got $count data checksum error[s] expected $csum_ns2"
+ fail_test
dump_stats=1
else
echo "[ ok ]"
@@ -622,28 +1000,28 @@ chk_csum_nr()
chk_fail_nr()
{
- local mp_fail_nr_tx=$1
- local mp_fail_nr_rx=$2
+ local fail_tx=$1
+ local fail_rx=$2
local count
local dump_stats
- printf "%-39s %s" " " "ftx"
- count=`ip netns exec $ns1 nstat -as | grep MPTcpExtMPFailTx | awk '{print $2}'`
+ printf "%-${nr_blank}s %s" " " "ftx"
+ count=$(ip netns exec $ns1 nstat -as | grep MPTcpExtMPFailTx | awk '{print $2}')
[ -z "$count" ] && count=0
- if [ "$count" != "$mp_fail_nr_tx" ]; then
- echo "[fail] got $count MP_FAIL[s] TX expected $mp_fail_nr_tx"
- ret=1
+ if [ "$count" != "$fail_tx" ]; then
+ echo "[fail] got $count MP_FAIL[s] TX expected $fail_tx"
+ fail_test
dump_stats=1
else
echo -n "[ ok ]"
fi
- echo -n " - frx "
- count=`ip netns exec $ns2 nstat -as | grep MPTcpExtMPFailRx | awk '{print $2}'`
+ echo -n " - failrx"
+ count=$(ip netns exec $ns2 nstat -as | grep MPTcpExtMPFailRx | awk '{print $2}')
[ -z "$count" ] && count=0
- if [ "$count" != "$mp_fail_nr_rx" ]; then
- echo "[fail] got $count MP_FAIL[s] RX expected $mp_fail_nr_rx"
- ret=1
+ if [ "$count" != "$fail_rx" ]; then
+ echo "[fail] got $count MP_FAIL[s] RX expected $fail_rx"
+ fail_test
dump_stats=1
else
echo "[ ok ]"
@@ -652,30 +1030,115 @@ chk_fail_nr()
[ "${dump_stats}" = 1 ] && dump_stats
}
+chk_fclose_nr()
+{
+ local fclose_tx=$1
+ local fclose_rx=$2
+ local count
+ local dump_stats
+
+ printf "%-${nr_blank}s %s" " " "ctx"
+ count=$(ip netns exec $ns2 nstat -as | grep MPTcpExtMPFastcloseTx | awk '{print $2}')
+ [ -z "$count" ] && count=0
+ if [ "$count" != "$fclose_tx" ]; then
+ echo "[fail] got $count MP_FASTCLOSE[s] TX expected $fclose_tx"
+ fail_test
+ dump_stats=1
+ else
+ echo -n "[ ok ]"
+ fi
+
+ echo -n " - fclzrx"
+ count=$(ip netns exec $ns1 nstat -as | grep MPTcpExtMPFastcloseRx | awk '{print $2}')
+ [ -z "$count" ] && count=0
+ if [ "$count" != "$fclose_rx" ]; then
+ echo "[fail] got $count MP_FASTCLOSE[s] RX expected $fclose_rx"
+ fail_test
+ dump_stats=1
+ else
+ echo "[ ok ]"
+ fi
+
+ [ "${dump_stats}" = 1 ] && dump_stats
+}
+
+chk_rst_nr()
+{
+ local rst_tx=$1
+ local rst_rx=$2
+ local ns_invert=${3:-""}
+ local count
+ local dump_stats
+ local ns_tx=$ns1
+ local ns_rx=$ns2
+ local extra_msg=""
+
+ if [[ $ns_invert = "invert" ]]; then
+ ns_tx=$ns2
+ ns_rx=$ns1
+ extra_msg=" invert"
+ fi
+
+ printf "%-${nr_blank}s %s" " " "rtx"
+ count=$(ip netns exec $ns_tx nstat -as | grep MPTcpExtMPRstTx | awk '{print $2}')
+ [ -z "$count" ] && count=0
+ if [ "$count" != "$rst_tx" ]; then
+ echo "[fail] got $count MP_RST[s] TX expected $rst_tx"
+ fail_test
+ dump_stats=1
+ else
+ echo -n "[ ok ]"
+ fi
+
+ echo -n " - rstrx "
+ count=$(ip netns exec $ns_rx nstat -as | grep MPTcpExtMPRstRx | awk '{print $2}')
+ [ -z "$count" ] && count=0
+ if [ "$count" != "$rst_rx" ]; then
+ echo "[fail] got $count MP_RST[s] RX expected $rst_rx"
+ fail_test
+ dump_stats=1
+ else
+ echo -n "[ ok ]"
+ fi
+
+ [ "${dump_stats}" = 1 ] && dump_stats
+
+ echo "$extra_msg"
+}
+
chk_join_nr()
{
- local msg="$1"
- local syn_nr=$2
- local syn_ack_nr=$3
- local ack_nr=$4
+ local syn_nr=$1
+ local syn_ack_nr=$2
+ local ack_nr=$3
+ local csum_ns1=${4:-0}
+ local csum_ns2=${5:-0}
+ local fail_nr=${6:-0}
+ local rst_nr=${7:-0}
+ local corrupted_pkts=${8:-0}
local count
local dump_stats
local with_cookie
+ local title="${TEST_NAME}"
+
+ if [ "${corrupted_pkts}" -gt 0 ]; then
+ title+=": ${corrupted_pkts} corrupted pkts"
+ fi
- printf "%02u %-36s %s" "$TEST_COUNT" "$msg" "syn"
- count=`ip netns exec $ns1 nstat -as | grep MPTcpExtMPJoinSynRx | awk '{print $2}'`
+ printf "%03u %-36s %s" "${TEST_COUNT}" "${title}" "syn"
+ count=$(ip netns exec $ns1 nstat -as | grep MPTcpExtMPJoinSynRx | awk '{print $2}')
[ -z "$count" ] && count=0
if [ "$count" != "$syn_nr" ]; then
echo "[fail] got $count JOIN[s] syn expected $syn_nr"
- ret=1
+ fail_test
dump_stats=1
else
echo -n "[ ok ]"
fi
echo -n " - synack"
- with_cookie=`ip netns exec $ns2 sysctl -n net.ipv4.tcp_syncookies`
- count=`ip netns exec $ns2 nstat -as | grep MPTcpExtMPJoinSynAckRx | awk '{print $2}'`
+ with_cookie=$(ip netns exec $ns2 sysctl -n net.ipv4.tcp_syncookies)
+ count=$(ip netns exec $ns2 nstat -as | grep MPTcpExtMPJoinSynAckRx | awk '{print $2}')
[ -z "$count" ] && count=0
if [ "$count" != "$syn_ack_nr" ]; then
# simult connections exceeding the limit with cookie enabled could go up to
@@ -685,7 +1148,7 @@ chk_join_nr()
echo -n "[ ok ]"
else
echo "[fail] got $count JOIN[s] synack expected $syn_ack_nr"
- ret=1
+ fail_test
dump_stats=1
fi
else
@@ -693,19 +1156,20 @@ chk_join_nr()
fi
echo -n " - ack"
- count=`ip netns exec $ns1 nstat -as | grep MPTcpExtMPJoinAckRx | awk '{print $2}'`
+ count=$(ip netns exec $ns1 nstat -as | grep MPTcpExtMPJoinAckRx | awk '{print $2}')
[ -z "$count" ] && count=0
if [ "$count" != "$ack_nr" ]; then
echo "[fail] got $count JOIN[s] ack expected $ack_nr"
- ret=1
+ fail_test
dump_stats=1
else
echo "[ ok ]"
fi
[ "${dump_stats}" = 1 ] && dump_stats
if [ $checksum -eq 1 ]; then
- chk_csum_nr
- chk_fail_nr 0 0
+ chk_csum_nr $csum_ns1 $csum_ns2
+ chk_fail_nr $fail_nr $fail_nr
+ chk_rst_nr $rst_nr $rst_nr
fi
}
@@ -724,19 +1188,19 @@ chk_stale_nr()
local stale_nr
local recover_nr
- printf "%-39s %-18s" " " "stale"
- stale_nr=`ip netns exec $ns nstat -as | grep MPTcpExtSubflowStale | awk '{print $2}'`
+ printf "%-${nr_blank}s %-18s" " " "stale"
+ stale_nr=$(ip netns exec $ns nstat -as | grep MPTcpExtSubflowStale | awk '{print $2}')
[ -z "$stale_nr" ] && stale_nr=0
- recover_nr=`ip netns exec $ns nstat -as | grep MPTcpExtSubflowRecover | awk '{print $2}'`
+ recover_nr=$(ip netns exec $ns nstat -as | grep MPTcpExtSubflowRecover | awk '{print $2}')
[ -z "$recover_nr" ] && recover_nr=0
if [ $stale_nr -lt $stale_min ] ||
- [ $stale_max -gt 0 -a $stale_nr -gt $stale_max ] ||
- [ $((stale_nr - $recover_nr)) -ne $stale_delta ]; then
+ { [ $stale_max -gt 0 ] && [ $stale_nr -gt $stale_max ]; } ||
+ [ $((stale_nr - recover_nr)) -ne $stale_delta ]; then
echo "[fail] got $stale_nr stale[s] $recover_nr recover[s], " \
" expected stale in range [$stale_min..$stale_max]," \
" stale-recover delta $stale_delta "
- ret=1
+ fail_test
dump_stats=1
else
echo "[ ok ]"
@@ -763,28 +1227,28 @@ chk_add_nr()
local dump_stats
local timeout
- timeout=`ip netns exec $ns1 sysctl -n net.mptcp.add_addr_timeout`
+ timeout=$(ip netns exec $ns1 sysctl -n net.mptcp.add_addr_timeout)
- printf "%-39s %s" " " "add"
- count=`ip netns exec $ns2 nstat -as MPTcpExtAddAddr | grep MPTcpExtAddAddr | awk '{print $2}'`
+ printf "%-${nr_blank}s %s" " " "add"
+ count=$(ip netns exec $ns2 nstat -as MPTcpExtAddAddr | grep MPTcpExtAddAddr | awk '{print $2}')
[ -z "$count" ] && count=0
# if the test configured a short timeout, tolerate more than the expected
# number of ADD_ADDR options, due to retransmissions
- if [ "$count" != "$add_nr" ] && [ "$timeout" -gt 1 -o "$count" -lt "$add_nr" ]; then
+ if [ "$count" != "$add_nr" ] && { [ "$timeout" -gt 1 ] || [ "$count" -lt "$add_nr" ]; }; then
echo "[fail] got $count ADD_ADDR[s] expected $add_nr"
- ret=1
+ fail_test
dump_stats=1
else
echo -n "[ ok ]"
fi
echo -n " - echo "
- count=`ip netns exec $ns1 nstat -as | grep MPTcpExtEchoAdd | awk '{print $2}'`
+ count=$(ip netns exec $ns1 nstat -as | grep MPTcpExtEchoAdd | awk '{print $2}')
[ -z "$count" ] && count=0
if [ "$count" != "$echo_nr" ]; then
echo "[fail] got $count ADD_ADDR echo[s] expected $echo_nr"
- ret=1
+ fail_test
dump_stats=1
else
echo -n "[ ok ]"
@@ -792,76 +1256,76 @@ chk_add_nr()
if [ $port_nr -gt 0 ]; then
echo -n " - pt "
- count=`ip netns exec $ns2 nstat -as | grep MPTcpExtPortAdd | awk '{print $2}'`
+ count=$(ip netns exec $ns2 nstat -as | grep MPTcpExtPortAdd | awk '{print $2}')
[ -z "$count" ] && count=0
if [ "$count" != "$port_nr" ]; then
echo "[fail] got $count ADD_ADDR[s] with a port-number expected $port_nr"
- ret=1
+ fail_test
dump_stats=1
else
echo "[ ok ]"
fi
- printf "%-39s %s" " " "syn"
- count=`ip netns exec $ns1 nstat -as | grep MPTcpExtMPJoinPortSynRx |
- awk '{print $2}'`
+ printf "%-${nr_blank}s %s" " " "syn"
+ count=$(ip netns exec $ns1 nstat -as | grep MPTcpExtMPJoinPortSynRx |
+ awk '{print $2}')
[ -z "$count" ] && count=0
if [ "$count" != "$syn_nr" ]; then
echo "[fail] got $count JOIN[s] syn with a different \
port-number expected $syn_nr"
- ret=1
+ fail_test
dump_stats=1
else
echo -n "[ ok ]"
fi
echo -n " - synack"
- count=`ip netns exec $ns2 nstat -as | grep MPTcpExtMPJoinPortSynAckRx |
- awk '{print $2}'`
+ count=$(ip netns exec $ns2 nstat -as | grep MPTcpExtMPJoinPortSynAckRx |
+ awk '{print $2}')
[ -z "$count" ] && count=0
if [ "$count" != "$syn_ack_nr" ]; then
echo "[fail] got $count JOIN[s] synack with a different \
port-number expected $syn_ack_nr"
- ret=1
+ fail_test
dump_stats=1
else
echo -n "[ ok ]"
fi
echo -n " - ack"
- count=`ip netns exec $ns1 nstat -as | grep MPTcpExtMPJoinPortAckRx |
- awk '{print $2}'`
+ count=$(ip netns exec $ns1 nstat -as | grep MPTcpExtMPJoinPortAckRx |
+ awk '{print $2}')
[ -z "$count" ] && count=0
if [ "$count" != "$ack_nr" ]; then
echo "[fail] got $count JOIN[s] ack with a different \
port-number expected $ack_nr"
- ret=1
+ fail_test
dump_stats=1
else
echo "[ ok ]"
fi
- printf "%-39s %s" " " "syn"
- count=`ip netns exec $ns1 nstat -as | grep MPTcpExtMismatchPortSynRx |
- awk '{print $2}'`
+ printf "%-${nr_blank}s %s" " " "syn"
+ count=$(ip netns exec $ns1 nstat -as | grep MPTcpExtMismatchPortSynRx |
+ awk '{print $2}')
[ -z "$count" ] && count=0
if [ "$count" != "$mis_syn_nr" ]; then
echo "[fail] got $count JOIN[s] syn with a mismatched \
port-number expected $mis_syn_nr"
- ret=1
+ fail_test
dump_stats=1
else
echo -n "[ ok ]"
fi
echo -n " - ack "
- count=`ip netns exec $ns1 nstat -as | grep MPTcpExtMismatchPortAckRx |
- awk '{print $2}'`
+ count=$(ip netns exec $ns1 nstat -as | grep MPTcpExtMismatchPortAckRx |
+ awk '{print $2}')
[ -z "$count" ] && count=0
if [ "$count" != "$mis_ack_nr" ]; then
echo "[fail] got $count JOIN[s] ack with a mismatched \
port-number expected $mis_ack_nr"
- ret=1
+ fail_test
dump_stats=1
else
echo "[ ok ]"
@@ -877,43 +1341,75 @@ chk_rm_nr()
{
local rm_addr_nr=$1
local rm_subflow_nr=$2
- local invert=${3:-""}
+ local invert
+ local simult
local count
local dump_stats
- local addr_ns
- local subflow_ns
+ local addr_ns=$ns1
+ local subflow_ns=$ns2
+ local extra_msg=""
+
+ shift 2
+ while [ -n "$1" ]; do
+ [ "$1" = "invert" ] && invert=true
+ [ "$1" = "simult" ] && simult=true
+ shift
+ done
if [ -z $invert ]; then
addr_ns=$ns1
subflow_ns=$ns2
- elif [ $invert = "invert" ]; then
+ elif [ $invert = "true" ]; then
addr_ns=$ns2
subflow_ns=$ns1
+ extra_msg=" invert"
fi
- printf "%-39s %s" " " "rm "
- count=`ip netns exec $addr_ns nstat -as | grep MPTcpExtRmAddr | awk '{print $2}'`
+ printf "%-${nr_blank}s %s" " " "rm "
+ count=$(ip netns exec $addr_ns nstat -as | grep MPTcpExtRmAddr | awk '{print $2}')
[ -z "$count" ] && count=0
if [ "$count" != "$rm_addr_nr" ]; then
echo "[fail] got $count RM_ADDR[s] expected $rm_addr_nr"
- ret=1
+ fail_test
dump_stats=1
else
echo -n "[ ok ]"
fi
- echo -n " - sf "
- count=`ip netns exec $subflow_ns nstat -as | grep MPTcpExtRmSubflow | awk '{print $2}'`
+ echo -n " - rmsf "
+ count=$(ip netns exec $subflow_ns nstat -as | grep MPTcpExtRmSubflow | awk '{print $2}')
[ -z "$count" ] && count=0
+ if [ -n "$simult" ]; then
+ local cnt suffix
+
+ cnt=$(ip netns exec $addr_ns nstat -as | grep MPTcpExtRmSubflow | awk '{print $2}')
+
+ # in case of simult flush, the subflow removal count on each side is
+ # unreliable
+ [ -z "$cnt" ] && cnt=0
+ count=$((count + cnt))
+ [ "$count" != "$rm_subflow_nr" ] && suffix="$count in [$rm_subflow_nr:$((rm_subflow_nr*2))]"
+ if [ $count -ge "$rm_subflow_nr" ] && \
+ [ "$count" -le "$((rm_subflow_nr *2 ))" ]; then
+ echo "[ ok ] $suffix"
+ else
+ echo "[fail] got $count RM_SUBFLOW[s] expected in range [$rm_subflow_nr:$((rm_subflow_nr*2))]"
+ fail_test
+ dump_stats=1
+ fi
+ return
+ fi
if [ "$count" != "$rm_subflow_nr" ]; then
echo "[fail] got $count RM_SUBFLOW[s] expected $rm_subflow_nr"
- ret=1
+ fail_test
dump_stats=1
else
- echo "[ ok ]"
+ echo -n "[ ok ]"
fi
[ "${dump_stats}" = 1 ] && dump_stats
+
+ echo "$extra_msg"
}
chk_prio_nr()
@@ -923,23 +1419,23 @@ chk_prio_nr()
local count
local dump_stats
- printf "%-39s %s" " " "ptx"
- count=`ip netns exec $ns1 nstat -as | grep MPTcpExtMPPrioTx | awk '{print $2}'`
+ printf "%-${nr_blank}s %s" " " "ptx"
+ count=$(ip netns exec $ns1 nstat -as | grep MPTcpExtMPPrioTx | awk '{print $2}')
[ -z "$count" ] && count=0
if [ "$count" != "$mp_prio_nr_tx" ]; then
echo "[fail] got $count MP_PRIO[s] TX expected $mp_prio_nr_tx"
- ret=1
+ fail_test
dump_stats=1
else
echo -n "[ ok ]"
fi
echo -n " - prx "
- count=`ip netns exec $ns1 nstat -as | grep MPTcpExtMPPrioRx | awk '{print $2}'`
+ count=$(ip netns exec $ns1 nstat -as | grep MPTcpExtMPPrioRx | awk '{print $2}')
[ -z "$count" ] && count=0
if [ "$count" != "$mp_prio_nr_rx" ]; then
echo "[fail] got $count MP_PRIO[s] RX expected $mp_prio_nr_rx"
- ret=1
+ fail_test
dump_stats=1
else
echo "[ ok ]"
@@ -954,29 +1450,33 @@ chk_link_usage()
local link=$2
local out=$3
local expected_rate=$4
- local tx_link=`ip netns exec $ns cat /sys/class/net/$link/statistics/tx_bytes`
- local tx_total=`ls -l $out | awk '{print $5}'`
- local tx_rate=$((tx_link * 100 / $tx_total))
+
+ local tx_link tx_total
+ tx_link=$(ip netns exec $ns cat /sys/class/net/$link/statistics/tx_bytes)
+ tx_total=$(stat --format=%s $out)
+ local tx_rate=$((tx_link * 100 / tx_total))
local tolerance=5
- printf "%-39s %-18s" " " "link usage"
- if [ $tx_rate -lt $((expected_rate - $tolerance)) -o \
- $tx_rate -gt $((expected_rate + $tolerance)) ]; then
+ printf "%-${nr_blank}s %-18s" " " "link usage"
+ if [ $tx_rate -lt $((expected_rate - tolerance)) ] || \
+ [ $tx_rate -gt $((expected_rate + tolerance)) ]; then
echo "[fail] got $tx_rate% usage, expected $expected_rate%"
- ret=1
+ fail_test
else
echo "[ ok ]"
fi
}
-wait_for_tw()
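+# poll the TcpAttemptFails MIB counter of $ns until the first failed
+# connection attempt is accounted, or the timeout elapses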
+wait_attempt_fail()
{
local timeout_ms=$((timeout_poll * 1000))
local time=0
local ns=$1
while [ $time -lt $timeout_ms ]; do
- local cnt=$(ip netns exec $ns nstat -as TcpAttemptFails | grep TcpAttemptFails | awk '{print $2}')
+ local cnt
+
+ cnt=$(ip netns exec $ns nstat -as TcpAttemptFails | grep TcpAttemptFails | awk '{print $2}')
[ "$cnt" = 1 ] && return 1
time=$((time + 100))
@@ -987,877 +1487,968 @@ wait_for_tw()
subflows_tests()
{
- reset
- run_tests $ns1 $ns2 10.0.1.1
- chk_join_nr "no JOIN" "0" "0" "0"
+ if reset "no JOIN"; then
+ run_tests $ns1 $ns2 10.0.1.1
+ chk_join_nr 0 0 0
+ fi
# subflow limited by client
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 0 0
- ip netns exec $ns2 ./pm_nl_ctl limits 0 0
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.3.2 flags subflow
- run_tests $ns1 $ns2 10.0.1.1
- chk_join_nr "single subflow, limited by client" 0 0 0
+ if reset "single subflow, limited by client"; then
+ pm_nl_set_limits $ns1 0 0
+ pm_nl_set_limits $ns2 0 0
+ pm_nl_add_endpoint $ns2 10.0.3.2 flags subflow
+ run_tests $ns1 $ns2 10.0.1.1
+ chk_join_nr 0 0 0
+ fi
# subflow limited by server
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 0 0
- ip netns exec $ns2 ./pm_nl_ctl limits 0 1
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.3.2 flags subflow
- run_tests $ns1 $ns2 10.0.1.1
- chk_join_nr "single subflow, limited by server" 1 1 0
+ if reset "single subflow, limited by server"; then
+ pm_nl_set_limits $ns1 0 0
+ pm_nl_set_limits $ns2 0 1
+ pm_nl_add_endpoint $ns2 10.0.3.2 flags subflow
+ run_tests $ns1 $ns2 10.0.1.1
+ chk_join_nr 1 1 0
+ fi
# subflow
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 0 1
- ip netns exec $ns2 ./pm_nl_ctl limits 0 1
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.3.2 flags subflow
- run_tests $ns1 $ns2 10.0.1.1
- chk_join_nr "single subflow" 1 1 1
+ if reset "single subflow"; then
+ pm_nl_set_limits $ns1 0 1
+ pm_nl_set_limits $ns2 0 1
+ pm_nl_add_endpoint $ns2 10.0.3.2 flags subflow
+ run_tests $ns1 $ns2 10.0.1.1
+ chk_join_nr 1 1 1
+ fi
# multiple subflows
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 0 2
- ip netns exec $ns2 ./pm_nl_ctl limits 0 2
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.3.2 flags subflow
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.2.2 flags subflow
- run_tests $ns1 $ns2 10.0.1.1
- chk_join_nr "multiple subflows" 2 2 2
+ if reset "multiple subflows"; then
+ pm_nl_set_limits $ns1 0 2
+ pm_nl_set_limits $ns2 0 2
+ pm_nl_add_endpoint $ns2 10.0.3.2 flags subflow
+ pm_nl_add_endpoint $ns2 10.0.2.2 flags subflow
+ run_tests $ns1 $ns2 10.0.1.1
+ chk_join_nr 2 2 2
+ fi
# multiple subflows limited by server
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 0 1
- ip netns exec $ns2 ./pm_nl_ctl limits 0 2
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.3.2 flags subflow
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.2.2 flags subflow
- run_tests $ns1 $ns2 10.0.1.1
- chk_join_nr "multiple subflows, limited by server" 2 2 1
+ if reset "multiple subflows, limited by server"; then
+ pm_nl_set_limits $ns1 0 1
+ pm_nl_set_limits $ns2 0 2
+ pm_nl_add_endpoint $ns2 10.0.3.2 flags subflow
+ pm_nl_add_endpoint $ns2 10.0.2.2 flags subflow
+ run_tests $ns1 $ns2 10.0.1.1
+ chk_join_nr 2 2 1
+ fi
# single subflow, dev
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 0 1
- ip netns exec $ns2 ./pm_nl_ctl limits 0 1
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.3.2 flags subflow dev ns2eth3
- run_tests $ns1 $ns2 10.0.1.1
- chk_join_nr "single subflow, dev" 1 1 1
+ if reset "single subflow, dev"; then
+ pm_nl_set_limits $ns1 0 1
+ pm_nl_set_limits $ns2 0 1
+ pm_nl_add_endpoint $ns2 10.0.3.2 flags subflow dev ns2eth3
+ run_tests $ns1 $ns2 10.0.1.1
+ chk_join_nr 1 1 1
+ fi
}
subflows_error_tests()
{
# If a single subflow is configured, and matches the MPC src
# address, no additional subflow should be created
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 0 1
- ip netns exec $ns2 ./pm_nl_ctl limits 0 1
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.1.2 flags subflow
- run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow
- chk_join_nr "no MPC reuse with single endpoint" 0 0 0
+ if reset "no MPC reuse with single endpoint"; then
+ pm_nl_set_limits $ns1 0 1
+ pm_nl_set_limits $ns2 0 1
+ pm_nl_add_endpoint $ns2 10.0.1.2 flags subflow
+ run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow
+ chk_join_nr 0 0 0
+ fi
# multiple subflows, with subflow creation error
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 0 2
- ip netns exec $ns2 ./pm_nl_ctl limits 0 2
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.3.2 flags subflow
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.2.2 flags subflow
- ip netns exec $ns1 iptables -A INPUT -s 10.0.3.2 -p tcp -j REJECT
- run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow
- chk_join_nr "multi subflows, with failing subflow" 1 1 1
+ if reset "multi subflows, with failing subflow"; then
+ pm_nl_set_limits $ns1 0 2
+ pm_nl_set_limits $ns2 0 2
+ pm_nl_add_endpoint $ns2 10.0.3.2 flags subflow
+ pm_nl_add_endpoint $ns2 10.0.2.2 flags subflow
+ filter_tcp_from $ns1 10.0.3.2 REJECT
+ run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow
+ chk_join_nr 1 1 1
+ fi
# multiple subflows, with subflow timeout on MPJ
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 0 2
- ip netns exec $ns2 ./pm_nl_ctl limits 0 2
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.3.2 flags subflow
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.2.2 flags subflow
- ip netns exec $ns1 iptables -A INPUT -s 10.0.3.2 -p tcp -j DROP
- run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow
- chk_join_nr "multi subflows, with subflow timeout" 1 1 1
+ if reset "multi subflows, with subflow timeout"; then
+ pm_nl_set_limits $ns1 0 2
+ pm_nl_set_limits $ns2 0 2
+ pm_nl_add_endpoint $ns2 10.0.3.2 flags subflow
+ pm_nl_add_endpoint $ns2 10.0.2.2 flags subflow
+ filter_tcp_from $ns1 10.0.3.2 DROP
+ run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow
+ chk_join_nr 1 1 1
+ fi
# multiple subflows, check that the endpoint corresponding to
# closed subflow (due to reset) is not reused if additional
# subflows are added later
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 0 1
- ip netns exec $ns2 ./pm_nl_ctl limits 0 1
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.3.2 flags subflow
- ip netns exec $ns1 iptables -A INPUT -s 10.0.3.2 -p tcp -j REJECT
- run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow &
-
- # updates in the child shell do not have any effect here, we
- # need to bump the test counter for the above case
- TEST_COUNT=$((TEST_COUNT+1))
-
- # mpj subflow will be in TW after the reset
- wait_for_tw $ns2
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.2.2 flags subflow
- wait
-
- # additional subflow could be created only if the PM select
- # the later endpoint, skipping the already used one
- chk_join_nr "multi subflows, fair usage on close" 1 1 1
+ if reset "multi subflows, fair usage on close"; then
+ pm_nl_set_limits $ns1 0 1
+ pm_nl_set_limits $ns2 0 1
+ pm_nl_add_endpoint $ns2 10.0.3.2 flags subflow
+ filter_tcp_from $ns1 10.0.3.2 REJECT
+ run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow &
+
+ # mpj subflow will be in TW after the reset
+ wait_attempt_fail $ns2
+ pm_nl_add_endpoint $ns2 10.0.2.2 flags subflow
+ wait
+
+ # an additional subflow could be created only if the PM selects
+ # the later endpoint, skipping the already used one
+ chk_join_nr 1 1 1
+ fi
}
signal_address_tests()
{
# add_address, unused
- reset
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.2.1 flags signal
- run_tests $ns1 $ns2 10.0.1.1
- chk_join_nr "unused signal address" 0 0 0
- chk_add_nr 1 1
+ if reset "unused signal address"; then
+ pm_nl_add_endpoint $ns1 10.0.2.1 flags signal
+ run_tests $ns1 $ns2 10.0.1.1
+ chk_join_nr 0 0 0
+ chk_add_nr 1 1
+ fi
# accept and use add_addr
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 0 1
- ip netns exec $ns2 ./pm_nl_ctl limits 1 1
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.2.1 flags signal
- run_tests $ns1 $ns2 10.0.1.1
- chk_join_nr "signal address" 1 1 1
- chk_add_nr 1 1
+ if reset "signal address"; then
+ pm_nl_set_limits $ns1 0 1
+ pm_nl_set_limits $ns2 1 1
+ pm_nl_add_endpoint $ns1 10.0.2.1 flags signal
+ run_tests $ns1 $ns2 10.0.1.1
+ chk_join_nr 1 1 1
+ chk_add_nr 1 1
+ fi
# accept and use add_addr with an additional subflow
# note: signal address in server ns and local addresses in client ns must
# belong to different subnets or one of the listed local address could be
# used for 'add_addr' subflow
- reset
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.2.1 flags signal
- ip netns exec $ns1 ./pm_nl_ctl limits 0 2
- ip netns exec $ns2 ./pm_nl_ctl limits 1 2
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.3.2 flags subflow
- run_tests $ns1 $ns2 10.0.1.1
- chk_join_nr "subflow and signal" 2 2 2
- chk_add_nr 1 1
+ if reset "subflow and signal"; then
+ pm_nl_add_endpoint $ns1 10.0.2.1 flags signal
+ pm_nl_set_limits $ns1 0 2
+ pm_nl_set_limits $ns2 1 2
+ pm_nl_add_endpoint $ns2 10.0.3.2 flags subflow
+ run_tests $ns1 $ns2 10.0.1.1
+ chk_join_nr 2 2 2
+ chk_add_nr 1 1
+ fi
# accept and use add_addr with additional subflows
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 0 3
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.2.1 flags signal
- ip netns exec $ns2 ./pm_nl_ctl limits 1 3
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.3.2 flags subflow
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.4.2 flags subflow
- run_tests $ns1 $ns2 10.0.1.1
- chk_join_nr "multiple subflows and signal" 3 3 3
- chk_add_nr 1 1
+ if reset "multiple subflows and signal"; then
+ pm_nl_set_limits $ns1 0 3
+ pm_nl_add_endpoint $ns1 10.0.2.1 flags signal
+ pm_nl_set_limits $ns2 1 3
+ pm_nl_add_endpoint $ns2 10.0.3.2 flags subflow
+ pm_nl_add_endpoint $ns2 10.0.4.2 flags subflow
+ run_tests $ns1 $ns2 10.0.1.1
+ chk_join_nr 3 3 3
+ chk_add_nr 1 1
+ fi
# signal addresses
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 3 3
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.2.1 flags signal
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.3.1 flags signal
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.4.1 flags signal
- ip netns exec $ns2 ./pm_nl_ctl limits 3 3
- run_tests $ns1 $ns2 10.0.1.1
- chk_join_nr "signal addresses" 3 3 3
- chk_add_nr 3 3
+ if reset "signal addresses"; then
+ pm_nl_set_limits $ns1 3 3
+ pm_nl_add_endpoint $ns1 10.0.2.1 flags signal
+ pm_nl_add_endpoint $ns1 10.0.3.1 flags signal
+ pm_nl_add_endpoint $ns1 10.0.4.1 flags signal
+ pm_nl_set_limits $ns2 3 3
+ run_tests $ns1 $ns2 10.0.1.1
+ chk_join_nr 3 3 3
+ chk_add_nr 3 3
+ fi
# signal invalid addresses
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 3 3
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.12.1 flags signal
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.3.1 flags signal
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.14.1 flags signal
- ip netns exec $ns2 ./pm_nl_ctl limits 3 3
- run_tests $ns1 $ns2 10.0.1.1
- chk_join_nr "signal invalid addresses" 1 1 1
- chk_add_nr 3 3
+ if reset "signal invalid addresses"; then
+ pm_nl_set_limits $ns1 3 3
+ pm_nl_add_endpoint $ns1 10.0.12.1 flags signal
+ pm_nl_add_endpoint $ns1 10.0.3.1 flags signal
+ pm_nl_add_endpoint $ns1 10.0.14.1 flags signal
+ pm_nl_set_limits $ns2 3 3
+ run_tests $ns1 $ns2 10.0.1.1
+ chk_join_nr 1 1 1
+ chk_add_nr 3 3
+ fi
# signal addresses race test
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 4 4
- ip netns exec $ns2 ./pm_nl_ctl limits 4 4
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.1.1 flags signal
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.2.1 flags signal
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.3.1 flags signal
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.4.1 flags signal
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.1.2 flags signal
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.2.2 flags signal
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.3.2 flags signal
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.4.2 flags signal
-
- # the peer could possibly miss some addr notification, allow retransmission
- ip netns exec $ns1 sysctl -q net.mptcp.add_addr_timeout=1
- run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow
- chk_join_nr "signal addresses race test" 3 3 3
-
- # the server will not signal the address terminating
- # the MPC subflow
- chk_add_nr 3 3
+ if reset "signal addresses race test"; then
+ pm_nl_set_limits $ns1 4 4
+ pm_nl_set_limits $ns2 4 4
+ pm_nl_add_endpoint $ns1 10.0.1.1 flags signal
+ pm_nl_add_endpoint $ns1 10.0.2.1 flags signal
+ pm_nl_add_endpoint $ns1 10.0.3.1 flags signal
+ pm_nl_add_endpoint $ns1 10.0.4.1 flags signal
+ pm_nl_add_endpoint $ns2 10.0.1.2 flags signal
+ pm_nl_add_endpoint $ns2 10.0.2.2 flags signal
+ pm_nl_add_endpoint $ns2 10.0.3.2 flags signal
+ pm_nl_add_endpoint $ns2 10.0.4.2 flags signal
+
+ # the peer could possibly miss some addr notification, allow retransmission
+ ip netns exec $ns1 sysctl -q net.mptcp.add_addr_timeout=1
+ run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow
+ chk_join_nr 3 3 3
+
+ # the server will not signal the address terminating
+ # the MPC subflow
+ chk_add_nr 3 3
+ fi
}
link_failure_tests()
{
# accept and use add_addr with additional subflows and link loss
- reset
-
- # without any b/w limit each veth could spool the packets and get
- # them acked at xmit time, so that the corresponding subflow will
- # have almost always no outstanding pkts, the scheduler will pick
- # always the first subflow and we will have hard time testing
- # active backup and link switch-over.
- # Let's set some arbitrary (low) virtual link limits.
- init_shapers
- ip netns exec $ns1 ./pm_nl_ctl limits 0 3
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.2.1 dev ns1eth2 flags signal
- ip netns exec $ns2 ./pm_nl_ctl limits 1 3
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.3.2 dev ns2eth3 flags subflow
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.4.2 dev ns2eth4 flags subflow
- run_tests $ns1 $ns2 10.0.1.1 1
- chk_join_nr "multiple flows, signal, link failure" 3 3 3
- chk_add_nr 1 1
- chk_stale_nr $ns2 1 5 1
+ if reset "multiple flows, signal, link failure"; then
+ # without any b/w limit each veth could spool the packets and get
+ # them acked at xmit time, so that the corresponding subflow will
+ # have almost always no outstanding pkts, the scheduler will pick
+ # always the first subflow and we will have hard time testing
+ # active backup and link switch-over.
+ # Let's set some arbitrary (low) virtual link limits.
+ init_shapers
+ pm_nl_set_limits $ns1 0 3
+ pm_nl_add_endpoint $ns1 10.0.2.1 dev ns1eth2 flags signal
+ pm_nl_set_limits $ns2 1 3
+ pm_nl_add_endpoint $ns2 10.0.3.2 dev ns2eth3 flags subflow
+ pm_nl_add_endpoint $ns2 10.0.4.2 dev ns2eth4 flags subflow
+ run_tests $ns1 $ns2 10.0.1.1 1
+ chk_join_nr 3 3 3
+ chk_add_nr 1 1
+ chk_stale_nr $ns2 1 5 1
+ fi
# accept and use add_addr with additional subflows and link loss
# for bidirectional transfer
- reset
- init_shapers
- ip netns exec $ns1 ./pm_nl_ctl limits 0 3
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.2.1 dev ns1eth2 flags signal
- ip netns exec $ns2 ./pm_nl_ctl limits 1 3
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.3.2 dev ns2eth3 flags subflow
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.4.2 dev ns2eth4 flags subflow
- run_tests $ns1 $ns2 10.0.1.1 2
- chk_join_nr "multi flows, signal, bidi, link fail" 3 3 3
- chk_add_nr 1 1
- chk_stale_nr $ns2 1 -1 1
+ if reset "multi flows, signal, bidi, link fail"; then
+ init_shapers
+ pm_nl_set_limits $ns1 0 3
+ pm_nl_add_endpoint $ns1 10.0.2.1 dev ns1eth2 flags signal
+ pm_nl_set_limits $ns2 1 3
+ pm_nl_add_endpoint $ns2 10.0.3.2 dev ns2eth3 flags subflow
+ pm_nl_add_endpoint $ns2 10.0.4.2 dev ns2eth4 flags subflow
+ run_tests $ns1 $ns2 10.0.1.1 2
+ chk_join_nr 3 3 3
+ chk_add_nr 1 1
+ chk_stale_nr $ns2 1 -1 1
+ fi
# 2 subflows plus 1 backup subflow with a lossy link, backup
# will never be used
- reset
- init_shapers
- ip netns exec $ns1 ./pm_nl_ctl limits 0 2
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.2.1 dev ns1eth2 flags signal
- ip netns exec $ns2 ./pm_nl_ctl limits 1 2
- export FAILING_LINKS="1"
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.3.2 dev ns2eth3 flags subflow,backup
- run_tests $ns1 $ns2 10.0.1.1 1
- chk_join_nr "backup subflow unused, link failure" 2 2 2
- chk_add_nr 1 1
- chk_link_usage $ns2 ns2eth3 $cinsent 0
+ if reset "backup subflow unused, link failure"; then
+ init_shapers
+ pm_nl_set_limits $ns1 0 2
+ pm_nl_add_endpoint $ns1 10.0.2.1 dev ns1eth2 flags signal
+ pm_nl_set_limits $ns2 1 2
+ FAILING_LINKS="1"
+ pm_nl_add_endpoint $ns2 10.0.3.2 dev ns2eth3 flags subflow,backup
+ run_tests $ns1 $ns2 10.0.1.1 1
+ chk_join_nr 2 2 2
+ chk_add_nr 1 1
+ chk_link_usage $ns2 ns2eth3 $cinsent 0
+ fi
# 2 lossy links after half transfer, backup will get half of
# the traffic
- reset
- init_shapers
- ip netns exec $ns1 ./pm_nl_ctl limits 0 2
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.2.1 dev ns1eth2 flags signal
- ip netns exec $ns2 ./pm_nl_ctl limits 1 2
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.3.2 dev ns2eth3 flags subflow,backup
- export FAILING_LINKS="1 2"
- run_tests $ns1 $ns2 10.0.1.1 1
- chk_join_nr "backup flow used, multi links fail" 2 2 2
- chk_add_nr 1 1
- chk_stale_nr $ns2 2 4 2
- chk_link_usage $ns2 ns2eth3 $cinsent 50
+ if reset "backup flow used, multi links fail"; then
+ init_shapers
+ pm_nl_set_limits $ns1 0 2
+ pm_nl_add_endpoint $ns1 10.0.2.1 dev ns1eth2 flags signal
+ pm_nl_set_limits $ns2 1 2
+ pm_nl_add_endpoint $ns2 10.0.3.2 dev ns2eth3 flags subflow,backup
+ FAILING_LINKS="1 2"
+ run_tests $ns1 $ns2 10.0.1.1 1
+ chk_join_nr 2 2 2
+ chk_add_nr 1 1
+ chk_stale_nr $ns2 2 4 2
+ chk_link_usage $ns2 ns2eth3 $cinsent 50
+ fi
# use a backup subflow with the first subflow on a lossy link
# for bidirectional transfer
- reset
- init_shapers
- ip netns exec $ns1 ./pm_nl_ctl limits 0 2
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.2.1 dev ns1eth2 flags signal
- ip netns exec $ns2 ./pm_nl_ctl limits 1 3
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.3.2 dev ns2eth3 flags subflow,backup
- run_tests $ns1 $ns2 10.0.1.1 2
- chk_join_nr "backup flow used, bidi, link failure" 2 2 2
- chk_add_nr 1 1
- chk_stale_nr $ns2 1 -1 2
- chk_link_usage $ns2 ns2eth3 $cinsent 50
+ if reset "backup flow used, bidi, link failure"; then
+ init_shapers
+ pm_nl_set_limits $ns1 0 2
+ pm_nl_add_endpoint $ns1 10.0.2.1 dev ns1eth2 flags signal
+ pm_nl_set_limits $ns2 1 3
+ pm_nl_add_endpoint $ns2 10.0.3.2 dev ns2eth3 flags subflow,backup
+ FAILING_LINKS="1 2"
+ run_tests $ns1 $ns2 10.0.1.1 2
+ chk_join_nr 2 2 2
+ chk_add_nr 1 1
+ chk_stale_nr $ns2 1 -1 2
+ chk_link_usage $ns2 ns2eth3 $cinsent 50
+ fi
}
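The comment at the top of link_failure_tests() explains the need for shaping: without a bandwidth limit each veth can spool and ack packets at xmit time, leaving the subflows with no outstanding packets, so the scheduler would always pick the first one. As a rough illustration, a shaper helper like init_shapers() could look as follows; the 20mbit rate, the 1ms delay and the four-link loop are assumptions for this sketch, not taken from the patch:

init_shapers()
{
	local i
	for i in $(seq 1 4); do
		# rate-limit both ends of each veth pair so subflows keep
		# packets in flight and link switch-over can be exercised
		tc -n "$ns1" qdisc add dev "ns1eth$i" root netem rate 20mbit delay 1ms
		tc -n "$ns2" qdisc add dev "ns2eth$i" root netem rate 20mbit delay 1ms
	done
}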
add_addr_timeout_tests()
{
# add_addr timeout
- reset_with_add_addr_timeout
- ip netns exec $ns1 ./pm_nl_ctl limits 0 1
- ip netns exec $ns2 ./pm_nl_ctl limits 1 1
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.2.1 flags signal
- run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow
- chk_join_nr "signal address, ADD_ADDR timeout" 1 1 1
- chk_add_nr 4 0
+ if reset_with_add_addr_timeout "signal address, ADD_ADDR timeout"; then
+ pm_nl_set_limits $ns1 0 1
+ pm_nl_set_limits $ns2 1 1
+ pm_nl_add_endpoint $ns1 10.0.2.1 flags signal
+ run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow
+ chk_join_nr 1 1 1
+ chk_add_nr 4 0
+ fi
# add_addr timeout IPv6
- reset_with_add_addr_timeout 6
- ip netns exec $ns1 ./pm_nl_ctl limits 0 1
- ip netns exec $ns2 ./pm_nl_ctl limits 1 1
- ip netns exec $ns1 ./pm_nl_ctl add dead:beef:2::1 flags signal
- run_tests $ns1 $ns2 dead:beef:1::1 0 0 0 slow
- chk_join_nr "signal address, ADD_ADDR6 timeout" 1 1 1
- chk_add_nr 4 0
+ if reset_with_add_addr_timeout "signal address, ADD_ADDR6 timeout" 6; then
+ pm_nl_set_limits $ns1 0 1
+ pm_nl_set_limits $ns2 1 1
+ pm_nl_add_endpoint $ns1 dead:beef:2::1 flags signal
+ run_tests $ns1 $ns2 dead:beef:1::1 0 0 0 slow
+ chk_join_nr 1 1 1
+ chk_add_nr 4 0
+ fi
# signal addresses timeout
- reset_with_add_addr_timeout
- ip netns exec $ns1 ./pm_nl_ctl limits 2 2
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.2.1 flags signal
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.3.1 flags signal
- ip netns exec $ns2 ./pm_nl_ctl limits 2 2
- run_tests $ns1 $ns2 10.0.1.1 0 0 0 least
- chk_join_nr "signal addresses, ADD_ADDR timeout" 2 2 2
- chk_add_nr 8 0
+ if reset_with_add_addr_timeout "signal addresses, ADD_ADDR timeout"; then
+ pm_nl_set_limits $ns1 2 2
+ pm_nl_add_endpoint $ns1 10.0.2.1 flags signal
+ pm_nl_add_endpoint $ns1 10.0.3.1 flags signal
+ pm_nl_set_limits $ns2 2 2
+ run_tests $ns1 $ns2 10.0.1.1 0 0 0 speed_10
+ chk_join_nr 2 2 2
+ chk_add_nr 8 0
+ fi
# signal invalid addresses timeout
- reset_with_add_addr_timeout
- ip netns exec $ns1 ./pm_nl_ctl limits 2 2
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.12.1 flags signal
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.3.1 flags signal
- ip netns exec $ns2 ./pm_nl_ctl limits 2 2
- run_tests $ns1 $ns2 10.0.1.1 0 0 0 least
- chk_join_nr "invalid address, ADD_ADDR timeout" 1 1 1
- chk_add_nr 8 0
+ if reset_with_add_addr_timeout "invalid address, ADD_ADDR timeout"; then
+ pm_nl_set_limits $ns1 2 2
+ pm_nl_add_endpoint $ns1 10.0.12.1 flags signal
+ pm_nl_add_endpoint $ns1 10.0.3.1 flags signal
+ pm_nl_set_limits $ns2 2 2
+ run_tests $ns1 $ns2 10.0.1.1 0 0 0 speed_10
+ chk_join_nr 1 1 1
+ chk_add_nr 8 0
+ fi
}
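Each test is now wrapped in if reset "<name>"; then ... fi, with the reset_with_* variants forwarding the name. The point of the pattern is that the reset helper can decline to run a test that was not selected on the command line (see the id/name handling at the bottom of the script). A minimal sketch of that gating logic; should_run_test and init_netns are hypothetical stand-ins for whatever helpers the script really uses:

reset()
{
	local name="${1}"

	TEST_COUNT=$((TEST_COUNT + 1))

	# skip tests not selected via the trailing "[test ids|names]"
	# arguments; returning non-zero makes the caller's "if" skip
	# the whole test body
	if ! should_run_test "${TEST_COUNT}" "${name}"; then	# hypothetical
		return 1
	fi

	init_netns	# hypothetical: rebuild clean namespaces per test
	return 0
}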
remove_tests()
{
# single subflow, remove
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 0 1
- ip netns exec $ns2 ./pm_nl_ctl limits 0 1
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.3.2 flags subflow
- run_tests $ns1 $ns2 10.0.1.1 0 0 -1 slow
- chk_join_nr "remove single subflow" 1 1 1
- chk_rm_nr 1 1
+ if reset "remove single subflow"; then
+ pm_nl_set_limits $ns1 0 1
+ pm_nl_set_limits $ns2 0 1
+ pm_nl_add_endpoint $ns2 10.0.3.2 flags subflow
+ run_tests $ns1 $ns2 10.0.1.1 0 0 -1 slow
+ chk_join_nr 1 1 1
+ chk_rm_nr 1 1
+ fi
# multiple subflows, remove
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 0 2
- ip netns exec $ns2 ./pm_nl_ctl limits 0 2
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.2.2 flags subflow
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.3.2 flags subflow
- run_tests $ns1 $ns2 10.0.1.1 0 0 -2 slow
- chk_join_nr "remove multiple subflows" 2 2 2
- chk_rm_nr 2 2
+ if reset "remove multiple subflows"; then
+ pm_nl_set_limits $ns1 0 2
+ pm_nl_set_limits $ns2 0 2
+ pm_nl_add_endpoint $ns2 10.0.2.2 flags subflow
+ pm_nl_add_endpoint $ns2 10.0.3.2 flags subflow
+ run_tests $ns1 $ns2 10.0.1.1 0 0 -2 slow
+ chk_join_nr 2 2 2
+ chk_rm_nr 2 2
+ fi
# single address, remove
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 0 1
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.2.1 flags signal
- ip netns exec $ns2 ./pm_nl_ctl limits 1 1
- run_tests $ns1 $ns2 10.0.1.1 0 -1 0 slow
- chk_join_nr "remove single address" 1 1 1
- chk_add_nr 1 1
- chk_rm_nr 1 1 invert
+ if reset "remove single address"; then
+ pm_nl_set_limits $ns1 0 1
+ pm_nl_add_endpoint $ns1 10.0.2.1 flags signal
+ pm_nl_set_limits $ns2 1 1
+ run_tests $ns1 $ns2 10.0.1.1 0 -1 0 slow
+ chk_join_nr 1 1 1
+ chk_add_nr 1 1
+ chk_rm_nr 1 1 invert
+ fi
# subflow and signal, remove
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 0 2
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.2.1 flags signal
- ip netns exec $ns2 ./pm_nl_ctl limits 1 2
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.3.2 flags subflow
- run_tests $ns1 $ns2 10.0.1.1 0 -1 -1 slow
- chk_join_nr "remove subflow and signal" 2 2 2
- chk_add_nr 1 1
- chk_rm_nr 1 1
+ if reset "remove subflow and signal"; then
+ pm_nl_set_limits $ns1 0 2
+ pm_nl_add_endpoint $ns1 10.0.2.1 flags signal
+ pm_nl_set_limits $ns2 1 2
+ pm_nl_add_endpoint $ns2 10.0.3.2 flags subflow
+ run_tests $ns1 $ns2 10.0.1.1 0 -1 -1 slow
+ chk_join_nr 2 2 2
+ chk_add_nr 1 1
+ chk_rm_nr 1 1
+ fi
# subflows and signal, remove
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 0 3
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.2.1 flags signal
- ip netns exec $ns2 ./pm_nl_ctl limits 1 3
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.3.2 flags subflow
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.4.2 flags subflow
- run_tests $ns1 $ns2 10.0.1.1 0 -1 -2 slow
- chk_join_nr "remove subflows and signal" 3 3 3
- chk_add_nr 1 1
- chk_rm_nr 2 2
+ if reset "remove subflows and signal"; then
+ pm_nl_set_limits $ns1 0 3
+ pm_nl_add_endpoint $ns1 10.0.2.1 flags signal
+ pm_nl_set_limits $ns2 1 3
+ pm_nl_add_endpoint $ns2 10.0.3.2 flags subflow
+ pm_nl_add_endpoint $ns2 10.0.4.2 flags subflow
+ run_tests $ns1 $ns2 10.0.1.1 0 -1 -2 slow
+ chk_join_nr 3 3 3
+ chk_add_nr 1 1
+ chk_rm_nr 2 2
+ fi
# addresses remove
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 3 3
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.2.1 flags signal id 250
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.3.1 flags signal
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.4.1 flags signal
- ip netns exec $ns2 ./pm_nl_ctl limits 3 3
- run_tests $ns1 $ns2 10.0.1.1 0 -3 0 slow
- chk_join_nr "remove addresses" 3 3 3
- chk_add_nr 3 3
- chk_rm_nr 3 3 invert
+ if reset "remove addresses"; then
+ pm_nl_set_limits $ns1 3 3
+ pm_nl_add_endpoint $ns1 10.0.2.1 flags signal id 250
+ pm_nl_add_endpoint $ns1 10.0.3.1 flags signal
+ pm_nl_add_endpoint $ns1 10.0.4.1 flags signal
+ pm_nl_set_limits $ns2 3 3
+ run_tests $ns1 $ns2 10.0.1.1 0 -3 0 slow
+ chk_join_nr 3 3 3
+ chk_add_nr 3 3
+ chk_rm_nr 3 3 invert
+ fi
# invalid addresses remove
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 3 3
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.12.1 flags signal
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.3.1 flags signal
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.14.1 flags signal
- ip netns exec $ns2 ./pm_nl_ctl limits 3 3
- run_tests $ns1 $ns2 10.0.1.1 0 -3 0 slow
- chk_join_nr "remove invalid addresses" 1 1 1
- chk_add_nr 3 3
- chk_rm_nr 3 1 invert
+ if reset "remove invalid addresses"; then
+ pm_nl_set_limits $ns1 3 3
+ pm_nl_add_endpoint $ns1 10.0.12.1 flags signal
+ pm_nl_add_endpoint $ns1 10.0.3.1 flags signal
+ pm_nl_add_endpoint $ns1 10.0.14.1 flags signal
+ pm_nl_set_limits $ns2 3 3
+ run_tests $ns1 $ns2 10.0.1.1 0 -3 0 slow
+ chk_join_nr 1 1 1
+ chk_add_nr 3 3
+ chk_rm_nr 3 1 invert
+ fi
# subflows and signal, flush
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 0 3
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.2.1 flags signal
- ip netns exec $ns2 ./pm_nl_ctl limits 1 3
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.3.2 flags subflow
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.4.2 flags subflow
- run_tests $ns1 $ns2 10.0.1.1 0 -8 -8 slow
- chk_join_nr "flush subflows and signal" 3 3 3
- chk_add_nr 1 1
- chk_rm_nr 2 2
+ if reset "flush subflows and signal"; then
+ pm_nl_set_limits $ns1 0 3
+ pm_nl_add_endpoint $ns1 10.0.2.1 flags signal
+ pm_nl_set_limits $ns2 1 3
+ pm_nl_add_endpoint $ns2 10.0.3.2 flags subflow
+ pm_nl_add_endpoint $ns2 10.0.4.2 flags subflow
+ run_tests $ns1 $ns2 10.0.1.1 0 -8 -8 slow
+ chk_join_nr 3 3 3
+ chk_add_nr 1 1
+ chk_rm_nr 1 3 invert simult
+ fi
# subflows flush
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 3 3
- ip netns exec $ns2 ./pm_nl_ctl limits 3 3
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.2.2 flags subflow id 150
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.3.2 flags subflow
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.4.2 flags subflow
- run_tests $ns1 $ns2 10.0.1.1 0 -8 -8 slow
- chk_join_nr "flush subflows" 3 3 3
- chk_rm_nr 3 3
+ if reset "flush subflows"; then
+ pm_nl_set_limits $ns1 3 3
+ pm_nl_set_limits $ns2 3 3
+ pm_nl_add_endpoint $ns2 10.0.2.2 flags subflow id 150
+ pm_nl_add_endpoint $ns2 10.0.3.2 flags subflow
+ pm_nl_add_endpoint $ns2 10.0.4.2 flags subflow
+ run_tests $ns1 $ns2 10.0.1.1 0 -8 -8 slow
+ chk_join_nr 3 3 3
+ chk_rm_nr 0 3 simult
+ fi
# addresses flush
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 3 3
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.2.1 flags signal id 250
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.3.1 flags signal
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.4.1 flags signal
- ip netns exec $ns2 ./pm_nl_ctl limits 3 3
- run_tests $ns1 $ns2 10.0.1.1 0 -8 -8 slow
- chk_join_nr "flush addresses" 3 3 3
- chk_add_nr 3 3
- chk_rm_nr 3 3 invert
+ if reset "flush addresses"; then
+ pm_nl_set_limits $ns1 3 3
+ pm_nl_add_endpoint $ns1 10.0.2.1 flags signal id 250
+ pm_nl_add_endpoint $ns1 10.0.3.1 flags signal
+ pm_nl_add_endpoint $ns1 10.0.4.1 flags signal
+ pm_nl_set_limits $ns2 3 3
+ run_tests $ns1 $ns2 10.0.1.1 0 -8 -8 slow
+ chk_join_nr 3 3 3
+ chk_add_nr 3 3
+ chk_rm_nr 3 3 invert simult
+ fi
# invalid addresses flush
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 3 3
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.12.1 flags signal
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.3.1 flags signal
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.14.1 flags signal
- ip netns exec $ns2 ./pm_nl_ctl limits 3 3
- run_tests $ns1 $ns2 10.0.1.1 0 -8 0 slow
- chk_join_nr "flush invalid addresses" 1 1 1
- chk_add_nr 3 3
- chk_rm_nr 3 1 invert
+ if reset "flush invalid addresses"; then
+ pm_nl_set_limits $ns1 3 3
+ pm_nl_add_endpoint $ns1 10.0.12.1 flags signal
+ pm_nl_add_endpoint $ns1 10.0.3.1 flags signal
+ pm_nl_add_endpoint $ns1 10.0.14.1 flags signal
+ pm_nl_set_limits $ns2 3 3
+ run_tests $ns1 $ns2 10.0.1.1 0 -8 0 slow
+ chk_join_nr 1 1 1
+ chk_add_nr 3 3
+ chk_rm_nr 3 1 invert
+ fi
# remove id 0 subflow
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 0 1
- ip netns exec $ns2 ./pm_nl_ctl limits 0 1
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.3.2 flags subflow
- run_tests $ns1 $ns2 10.0.1.1 0 0 -9 slow
- chk_join_nr "remove id 0 subflow" 1 1 1
- chk_rm_nr 1 1
+ if reset "remove id 0 subflow"; then
+ pm_nl_set_limits $ns1 0 1
+ pm_nl_set_limits $ns2 0 1
+ pm_nl_add_endpoint $ns2 10.0.3.2 flags subflow
+ run_tests $ns1 $ns2 10.0.1.1 0 0 -9 slow
+ chk_join_nr 1 1 1
+ chk_rm_nr 1 1
+ fi
# remove id 0 address
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 0 1
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.2.1 flags signal
- ip netns exec $ns2 ./pm_nl_ctl limits 1 1
- run_tests $ns1 $ns2 10.0.1.1 0 -9 0 slow
- chk_join_nr "remove id 0 address" 1 1 1
- chk_add_nr 1 1
- chk_rm_nr 1 1 invert
+ if reset "remove id 0 address"; then
+ pm_nl_set_limits $ns1 0 1
+ pm_nl_add_endpoint $ns1 10.0.2.1 flags signal
+ pm_nl_set_limits $ns2 1 1
+ run_tests $ns1 $ns2 10.0.1.1 0 -9 0 slow
+ chk_join_nr 1 1 1
+ chk_add_nr 1 1
+ chk_rm_nr 1 1 invert
+ fi
}
add_tests()
{
# add single subflow
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 0 1
- ip netns exec $ns2 ./pm_nl_ctl limits 0 1
- run_tests $ns1 $ns2 10.0.1.1 0 0 1 slow
- chk_join_nr "add single subflow" 1 1 1
+ if reset "add single subflow"; then
+ pm_nl_set_limits $ns1 0 1
+ pm_nl_set_limits $ns2 0 1
+ run_tests $ns1 $ns2 10.0.1.1 0 0 1 slow
+ chk_join_nr 1 1 1
+ fi
# add signal address
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 0 1
- ip netns exec $ns2 ./pm_nl_ctl limits 1 1
- run_tests $ns1 $ns2 10.0.1.1 0 1 0 slow
- chk_join_nr "add signal address" 1 1 1
- chk_add_nr 1 1
+ if reset "add signal address"; then
+ pm_nl_set_limits $ns1 0 1
+ pm_nl_set_limits $ns2 1 1
+ run_tests $ns1 $ns2 10.0.1.1 0 1 0 slow
+ chk_join_nr 1 1 1
+ chk_add_nr 1 1
+ fi
# add multiple subflows
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 0 2
- ip netns exec $ns2 ./pm_nl_ctl limits 0 2
- run_tests $ns1 $ns2 10.0.1.1 0 0 2 slow
- chk_join_nr "add multiple subflows" 2 2 2
+ if reset "add multiple subflows"; then
+ pm_nl_set_limits $ns1 0 2
+ pm_nl_set_limits $ns2 0 2
+ run_tests $ns1 $ns2 10.0.1.1 0 0 2 slow
+ chk_join_nr 2 2 2
+ fi
# add multiple subflows IPv6
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 0 2
- ip netns exec $ns2 ./pm_nl_ctl limits 0 2
- run_tests $ns1 $ns2 dead:beef:1::1 0 0 2 slow
- chk_join_nr "add multiple subflows IPv6" 2 2 2
+ if reset "add multiple subflows IPv6"; then
+ pm_nl_set_limits $ns1 0 2
+ pm_nl_set_limits $ns2 0 2
+ run_tests $ns1 $ns2 dead:beef:1::1 0 0 2 slow
+ chk_join_nr 2 2 2
+ fi
# add multiple addresses IPv6
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 0 2
- ip netns exec $ns2 ./pm_nl_ctl limits 2 2
- run_tests $ns1 $ns2 dead:beef:1::1 0 2 0 slow
- chk_join_nr "add multiple addresses IPv6" 2 2 2
- chk_add_nr 2 2
+ if reset "add multiple addresses IPv6"; then
+ pm_nl_set_limits $ns1 0 2
+ pm_nl_set_limits $ns2 2 2
+ run_tests $ns1 $ns2 dead:beef:1::1 0 2 0 slow
+ chk_join_nr 2 2 2
+ chk_add_nr 2 2
+ fi
}
ipv6_tests()
{
# subflow IPv6
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 0 1
- ip netns exec $ns2 ./pm_nl_ctl limits 0 1
- ip netns exec $ns2 ./pm_nl_ctl add dead:beef:3::2 dev ns2eth3 flags subflow
- run_tests $ns1 $ns2 dead:beef:1::1 0 0 0 slow
- chk_join_nr "single subflow IPv6" 1 1 1
+ if reset "single subflow IPv6"; then
+ pm_nl_set_limits $ns1 0 1
+ pm_nl_set_limits $ns2 0 1
+ pm_nl_add_endpoint $ns2 dead:beef:3::2 dev ns2eth3 flags subflow
+ run_tests $ns1 $ns2 dead:beef:1::1 0 0 0 slow
+ chk_join_nr 1 1 1
+ fi
# add_address, unused IPv6
- reset
- ip netns exec $ns1 ./pm_nl_ctl add dead:beef:2::1 flags signal
- run_tests $ns1 $ns2 dead:beef:1::1 0 0 0 slow
- chk_join_nr "unused signal address IPv6" 0 0 0
- chk_add_nr 1 1
+ if reset "unused signal address IPv6"; then
+ pm_nl_add_endpoint $ns1 dead:beef:2::1 flags signal
+ run_tests $ns1 $ns2 dead:beef:1::1 0 0 0 slow
+ chk_join_nr 0 0 0
+ chk_add_nr 1 1
+ fi
# signal address IPv6
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 0 1
- ip netns exec $ns1 ./pm_nl_ctl add dead:beef:2::1 flags signal
- ip netns exec $ns2 ./pm_nl_ctl limits 1 1
- run_tests $ns1 $ns2 dead:beef:1::1 0 0 0 slow
- chk_join_nr "single address IPv6" 1 1 1
- chk_add_nr 1 1
+ if reset "single address IPv6"; then
+ pm_nl_set_limits $ns1 0 1
+ pm_nl_add_endpoint $ns1 dead:beef:2::1 flags signal
+ pm_nl_set_limits $ns2 1 1
+ run_tests $ns1 $ns2 dead:beef:1::1 0 0 0 slow
+ chk_join_nr 1 1 1
+ chk_add_nr 1 1
+ fi
# single address IPv6, remove
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 0 1
- ip netns exec $ns1 ./pm_nl_ctl add dead:beef:2::1 flags signal
- ip netns exec $ns2 ./pm_nl_ctl limits 1 1
- run_tests $ns1 $ns2 dead:beef:1::1 0 -1 0 slow
- chk_join_nr "remove single address IPv6" 1 1 1
- chk_add_nr 1 1
- chk_rm_nr 1 1 invert
+ if reset "remove single address IPv6"; then
+ pm_nl_set_limits $ns1 0 1
+ pm_nl_add_endpoint $ns1 dead:beef:2::1 flags signal
+ pm_nl_set_limits $ns2 1 1
+ run_tests $ns1 $ns2 dead:beef:1::1 0 -1 0 slow
+ chk_join_nr 1 1 1
+ chk_add_nr 1 1
+ chk_rm_nr 1 1 invert
+ fi
# subflow and signal IPv6, remove
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 0 2
- ip netns exec $ns1 ./pm_nl_ctl add dead:beef:2::1 flags signal
- ip netns exec $ns2 ./pm_nl_ctl limits 1 2
- ip netns exec $ns2 ./pm_nl_ctl add dead:beef:3::2 dev ns2eth3 flags subflow
- run_tests $ns1 $ns2 dead:beef:1::1 0 -1 -1 slow
- chk_join_nr "remove subflow and signal IPv6" 2 2 2
- chk_add_nr 1 1
- chk_rm_nr 1 1
+ if reset "remove subflow and signal IPv6"; then
+ pm_nl_set_limits $ns1 0 2
+ pm_nl_add_endpoint $ns1 dead:beef:2::1 flags signal
+ pm_nl_set_limits $ns2 1 2
+ pm_nl_add_endpoint $ns2 dead:beef:3::2 dev ns2eth3 flags subflow
+ run_tests $ns1 $ns2 dead:beef:1::1 0 -1 -1 slow
+ chk_join_nr 2 2 2
+ chk_add_nr 1 1
+ chk_rm_nr 1 1
+ fi
}
v4mapped_tests()
{
# subflow IPv4-mapped to IPv4-mapped
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 0 1
- ip netns exec $ns2 ./pm_nl_ctl limits 0 1
- ip netns exec $ns2 ./pm_nl_ctl add "::ffff:10.0.3.2" flags subflow
- run_tests $ns1 $ns2 "::ffff:10.0.1.1"
- chk_join_nr "single subflow IPv4-mapped" 1 1 1
+ if reset "single subflow IPv4-mapped"; then
+ pm_nl_set_limits $ns1 0 1
+ pm_nl_set_limits $ns2 0 1
+ pm_nl_add_endpoint $ns2 "::ffff:10.0.3.2" flags subflow
+ run_tests $ns1 $ns2 "::ffff:10.0.1.1"
+ chk_join_nr 1 1 1
+ fi
# signal address IPv4-mapped with IPv4-mapped sk
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 0 1
- ip netns exec $ns2 ./pm_nl_ctl limits 1 1
- ip netns exec $ns1 ./pm_nl_ctl add "::ffff:10.0.2.1" flags signal
- run_tests $ns1 $ns2 "::ffff:10.0.1.1"
- chk_join_nr "signal address IPv4-mapped" 1 1 1
- chk_add_nr 1 1
+ if reset "signal address IPv4-mapped"; then
+ pm_nl_set_limits $ns1 0 1
+ pm_nl_set_limits $ns2 1 1
+ pm_nl_add_endpoint $ns1 "::ffff:10.0.2.1" flags signal
+ run_tests $ns1 $ns2 "::ffff:10.0.1.1"
+ chk_join_nr 1 1 1
+ chk_add_nr 1 1
+ fi
# subflow v4-map-v6
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 0 1
- ip netns exec $ns2 ./pm_nl_ctl limits 0 1
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.3.2 flags subflow
- run_tests $ns1 $ns2 "::ffff:10.0.1.1"
- chk_join_nr "single subflow v4-map-v6" 1 1 1
+ if reset "single subflow v4-map-v6"; then
+ pm_nl_set_limits $ns1 0 1
+ pm_nl_set_limits $ns2 0 1
+ pm_nl_add_endpoint $ns2 10.0.3.2 flags subflow
+ run_tests $ns1 $ns2 "::ffff:10.0.1.1"
+ chk_join_nr 1 1 1
+ fi
# signal address v4-map-v6
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 0 1
- ip netns exec $ns2 ./pm_nl_ctl limits 1 1
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.2.1 flags signal
- run_tests $ns1 $ns2 "::ffff:10.0.1.1"
- chk_join_nr "signal address v4-map-v6" 1 1 1
- chk_add_nr 1 1
+ if reset "signal address v4-map-v6"; then
+ pm_nl_set_limits $ns1 0 1
+ pm_nl_set_limits $ns2 1 1
+ pm_nl_add_endpoint $ns1 10.0.2.1 flags signal
+ run_tests $ns1 $ns2 "::ffff:10.0.1.1"
+ chk_join_nr 1 1 1
+ chk_add_nr 1 1
+ fi
# subflow v6-map-v4
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 0 1
- ip netns exec $ns2 ./pm_nl_ctl limits 0 1
- ip netns exec $ns2 ./pm_nl_ctl add "::ffff:10.0.3.2" flags subflow
- run_tests $ns1 $ns2 10.0.1.1
- chk_join_nr "single subflow v6-map-v4" 1 1 1
+ if reset "single subflow v6-map-v4"; then
+ pm_nl_set_limits $ns1 0 1
+ pm_nl_set_limits $ns2 0 1
+ pm_nl_add_endpoint $ns2 "::ffff:10.0.3.2" flags subflow
+ run_tests $ns1 $ns2 10.0.1.1
+ chk_join_nr 1 1 1
+ fi
# signal address v6-map-v4
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 0 1
- ip netns exec $ns2 ./pm_nl_ctl limits 1 1
- ip netns exec $ns1 ./pm_nl_ctl add "::ffff:10.0.2.1" flags signal
- run_tests $ns1 $ns2 10.0.1.1
- chk_join_nr "signal address v6-map-v4" 1 1 1
- chk_add_nr 1 1
+ if reset "signal address v6-map-v4"; then
+ pm_nl_set_limits $ns1 0 1
+ pm_nl_set_limits $ns2 1 1
+ pm_nl_add_endpoint $ns1 "::ffff:10.0.2.1" flags signal
+ run_tests $ns1 $ns2 10.0.1.1
+ chk_join_nr 1 1 1
+ chk_add_nr 1 1
+ fi
# no subflow IPv6 to v4 address
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 0 1
- ip netns exec $ns2 ./pm_nl_ctl limits 0 1
- ip netns exec $ns2 ./pm_nl_ctl add dead:beef:2::2 flags subflow
- run_tests $ns1 $ns2 10.0.1.1
- chk_join_nr "no JOIN with diff families v4-v6" 0 0 0
+ if reset "no JOIN with diff families v4-v6"; then
+ pm_nl_set_limits $ns1 0 1
+ pm_nl_set_limits $ns2 0 1
+ pm_nl_add_endpoint $ns2 dead:beef:2::2 flags subflow
+ run_tests $ns1 $ns2 10.0.1.1
+ chk_join_nr 0 0 0
+ fi
# no subflow IPv6 to v4 address even if v6 has a valid v4 at the end
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 0 1
- ip netns exec $ns2 ./pm_nl_ctl limits 0 1
- ip netns exec $ns2 ./pm_nl_ctl add dead:beef:2::10.0.3.2 flags subflow
- run_tests $ns1 $ns2 10.0.1.1
- chk_join_nr "no JOIN with diff families v4-v6-2" 0 0 0
+ if reset "no JOIN with diff families v4-v6-2"; then
+ pm_nl_set_limits $ns1 0 1
+ pm_nl_set_limits $ns2 0 1
+ pm_nl_add_endpoint $ns2 dead:beef:2::10.0.3.2 flags subflow
+ run_tests $ns1 $ns2 10.0.1.1
+ chk_join_nr 0 0 0
+ fi
# no subflow IPv4 to v6 address, no need to slow down too then
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 0 1
- ip netns exec $ns2 ./pm_nl_ctl limits 0 1
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.3.2 flags subflow
- run_tests $ns1 $ns2 dead:beef:1::1
- chk_join_nr "no JOIN with diff families v6-v4" 0 0 0
+ if reset "no JOIN with diff families v6-v4"; then
+ pm_nl_set_limits $ns1 0 1
+ pm_nl_set_limits $ns2 0 1
+ pm_nl_add_endpoint $ns2 10.0.3.2 flags subflow
+ run_tests $ns1 $ns2 dead:beef:1::1
+ chk_join_nr 0 0 0
+ fi
}
backup_tests()
{
# single subflow, backup
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 0 1
- ip netns exec $ns2 ./pm_nl_ctl limits 0 1
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.3.2 flags subflow,backup
- run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow nobackup
- chk_join_nr "single subflow, backup" 1 1 1
- chk_prio_nr 0 1
+ if reset "single subflow, backup"; then
+ pm_nl_set_limits $ns1 0 1
+ pm_nl_set_limits $ns2 0 1
+ pm_nl_add_endpoint $ns2 10.0.3.2 flags subflow,backup
+ run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow nobackup
+ chk_join_nr 1 1 1
+ chk_prio_nr 0 1
+ fi
# single address, backup
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 0 1
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.2.1 flags signal
- ip netns exec $ns2 ./pm_nl_ctl limits 1 1
- run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow backup
- chk_join_nr "single address, backup" 1 1 1
- chk_add_nr 1 1
- chk_prio_nr 1 0
+ if reset "single address, backup"; then
+ pm_nl_set_limits $ns1 0 1
+ pm_nl_add_endpoint $ns1 10.0.2.1 flags signal
+ pm_nl_set_limits $ns2 1 1
+ run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow backup
+ chk_join_nr 1 1 1
+ chk_add_nr 1 1
+ chk_prio_nr 1 1
+ fi
+
+ # single address with port, backup
+ if reset "single address with port, backup"; then
+ pm_nl_set_limits $ns1 0 1
+ pm_nl_add_endpoint $ns1 10.0.2.1 flags signal port 10100
+ pm_nl_set_limits $ns2 1 1
+ run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow backup
+ chk_join_nr 1 1 1
+ chk_add_nr 1 1
+ chk_prio_nr 1 1
+ fi
}
add_addr_ports_tests()
{
# signal address with port
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 0 1
- ip netns exec $ns2 ./pm_nl_ctl limits 1 1
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.2.1 flags signal port 10100
- run_tests $ns1 $ns2 10.0.1.1
- chk_join_nr "signal address with port" 1 1 1
- chk_add_nr 1 1 1
+ if reset "signal address with port"; then
+ pm_nl_set_limits $ns1 0 1
+ pm_nl_set_limits $ns2 1 1
+ pm_nl_add_endpoint $ns1 10.0.2.1 flags signal port 10100
+ run_tests $ns1 $ns2 10.0.1.1
+ chk_join_nr 1 1 1
+ chk_add_nr 1 1 1
+ fi
# subflow and signal with port
- reset
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.2.1 flags signal port 10100
- ip netns exec $ns1 ./pm_nl_ctl limits 0 2
- ip netns exec $ns2 ./pm_nl_ctl limits 1 2
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.3.2 flags subflow
- run_tests $ns1 $ns2 10.0.1.1
- chk_join_nr "subflow and signal with port" 2 2 2
- chk_add_nr 1 1 1
+ if reset "subflow and signal with port"; then
+ pm_nl_add_endpoint $ns1 10.0.2.1 flags signal port 10100
+ pm_nl_set_limits $ns1 0 2
+ pm_nl_set_limits $ns2 1 2
+ pm_nl_add_endpoint $ns2 10.0.3.2 flags subflow
+ run_tests $ns1 $ns2 10.0.1.1
+ chk_join_nr 2 2 2
+ chk_add_nr 1 1 1
+ fi
# single address with port, remove
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 0 1
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.2.1 flags signal port 10100
- ip netns exec $ns2 ./pm_nl_ctl limits 1 1
- run_tests $ns1 $ns2 10.0.1.1 0 -1 0 slow
- chk_join_nr "remove single address with port" 1 1 1
- chk_add_nr 1 1 1
- chk_rm_nr 1 1 invert
+ if reset "remove single address with port"; then
+ pm_nl_set_limits $ns1 0 1
+ pm_nl_add_endpoint $ns1 10.0.2.1 flags signal port 10100
+ pm_nl_set_limits $ns2 1 1
+ run_tests $ns1 $ns2 10.0.1.1 0 -1 0 slow
+ chk_join_nr 1 1 1
+ chk_add_nr 1 1 1
+ chk_rm_nr 1 1 invert
+ fi
# subflow and signal with port, remove
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 0 2
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.2.1 flags signal port 10100
- ip netns exec $ns2 ./pm_nl_ctl limits 1 2
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.3.2 flags subflow
- run_tests $ns1 $ns2 10.0.1.1 0 -1 -1 slow
- chk_join_nr "remove subflow and signal with port" 2 2 2
- chk_add_nr 1 1 1
- chk_rm_nr 1 1
+ if reset "remove subflow and signal with port"; then
+ pm_nl_set_limits $ns1 0 2
+ pm_nl_add_endpoint $ns1 10.0.2.1 flags signal port 10100
+ pm_nl_set_limits $ns2 1 2
+ pm_nl_add_endpoint $ns2 10.0.3.2 flags subflow
+ run_tests $ns1 $ns2 10.0.1.1 0 -1 -1 slow
+ chk_join_nr 2 2 2
+ chk_add_nr 1 1 1
+ chk_rm_nr 1 1
+ fi
# subflows and signal with port, flush
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 0 3
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.2.1 flags signal port 10100
- ip netns exec $ns2 ./pm_nl_ctl limits 1 3
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.3.2 flags subflow
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.4.2 flags subflow
- run_tests $ns1 $ns2 10.0.1.1 0 -8 -2 slow
- chk_join_nr "flush subflows and signal with port" 3 3 3
- chk_add_nr 1 1
- chk_rm_nr 2 2
+ if reset "flush subflows and signal with port"; then
+ pm_nl_set_limits $ns1 0 3
+ pm_nl_add_endpoint $ns1 10.0.2.1 flags signal port 10100
+ pm_nl_set_limits $ns2 1 3
+ pm_nl_add_endpoint $ns2 10.0.3.2 flags subflow
+ pm_nl_add_endpoint $ns2 10.0.4.2 flags subflow
+ run_tests $ns1 $ns2 10.0.1.1 0 -8 -2 slow
+ chk_join_nr 3 3 3
+ chk_add_nr 1 1
+ chk_rm_nr 1 3 invert simult
+ fi
# multiple addresses with port
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 2 2
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.2.1 flags signal port 10100
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.3.1 flags signal port 10100
- ip netns exec $ns2 ./pm_nl_ctl limits 2 2
- run_tests $ns1 $ns2 10.0.1.1
- chk_join_nr "multiple addresses with port" 2 2 2
- chk_add_nr 2 2 2
+ if reset "multiple addresses with port"; then
+ pm_nl_set_limits $ns1 2 2
+ pm_nl_add_endpoint $ns1 10.0.2.1 flags signal port 10100
+ pm_nl_add_endpoint $ns1 10.0.3.1 flags signal port 10100
+ pm_nl_set_limits $ns2 2 2
+ run_tests $ns1 $ns2 10.0.1.1
+ chk_join_nr 2 2 2
+ chk_add_nr 2 2 2
+ fi
# multiple addresses with ports
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 2 2
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.2.1 flags signal port 10100
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.3.1 flags signal port 10101
- ip netns exec $ns2 ./pm_nl_ctl limits 2 2
- run_tests $ns1 $ns2 10.0.1.1
- chk_join_nr "multiple addresses with ports" 2 2 2
- chk_add_nr 2 2 2
+ if reset "multiple addresses with ports"; then
+ pm_nl_set_limits $ns1 2 2
+ pm_nl_add_endpoint $ns1 10.0.2.1 flags signal port 10100
+ pm_nl_add_endpoint $ns1 10.0.3.1 flags signal port 10101
+ pm_nl_set_limits $ns2 2 2
+ run_tests $ns1 $ns2 10.0.1.1
+ chk_join_nr 2 2 2
+ chk_add_nr 2 2 2
+ fi
}
syncookies_tests()
{
# single subflow, syncookies
- reset_with_cookies
- ip netns exec $ns1 ./pm_nl_ctl limits 0 1
- ip netns exec $ns2 ./pm_nl_ctl limits 0 1
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.3.2 flags subflow
- run_tests $ns1 $ns2 10.0.1.1
- chk_join_nr "single subflow with syn cookies" 1 1 1
+ if reset_with_cookies "single subflow with syn cookies"; then
+ pm_nl_set_limits $ns1 0 1
+ pm_nl_set_limits $ns2 0 1
+ pm_nl_add_endpoint $ns2 10.0.3.2 flags subflow
+ run_tests $ns1 $ns2 10.0.1.1
+ chk_join_nr 1 1 1
+ fi
# multiple subflows with syn cookies
- reset_with_cookies
- ip netns exec $ns1 ./pm_nl_ctl limits 0 2
- ip netns exec $ns2 ./pm_nl_ctl limits 0 2
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.3.2 flags subflow
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.2.2 flags subflow
- run_tests $ns1 $ns2 10.0.1.1
- chk_join_nr "multiple subflows with syn cookies" 2 2 2
+ if reset_with_cookies "multiple subflows with syn cookies"; then
+ pm_nl_set_limits $ns1 0 2
+ pm_nl_set_limits $ns2 0 2
+ pm_nl_add_endpoint $ns2 10.0.3.2 flags subflow
+ pm_nl_add_endpoint $ns2 10.0.2.2 flags subflow
+ run_tests $ns1 $ns2 10.0.1.1
+ chk_join_nr 2 2 2
+ fi
# multiple subflows limited by server
- reset_with_cookies
- ip netns exec $ns1 ./pm_nl_ctl limits 0 1
- ip netns exec $ns2 ./pm_nl_ctl limits 0 2
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.3.2 flags subflow
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.2.2 flags subflow
- run_tests $ns1 $ns2 10.0.1.1
- chk_join_nr "subflows limited by server w cookies" 2 1 1
+ if reset_with_cookies "subflows limited by server w cookies"; then
+ pm_nl_set_limits $ns1 0 1
+ pm_nl_set_limits $ns2 0 2
+ pm_nl_add_endpoint $ns2 10.0.3.2 flags subflow
+ pm_nl_add_endpoint $ns2 10.0.2.2 flags subflow
+ run_tests $ns1 $ns2 10.0.1.1
+ chk_join_nr 2 1 1
+ fi
# test signal address with cookies
- reset_with_cookies
- ip netns exec $ns1 ./pm_nl_ctl limits 0 1
- ip netns exec $ns2 ./pm_nl_ctl limits 1 1
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.2.1 flags signal
- run_tests $ns1 $ns2 10.0.1.1
- chk_join_nr "signal address with syn cookies" 1 1 1
- chk_add_nr 1 1
+ if reset_with_cookies "signal address with syn cookies"; then
+ pm_nl_set_limits $ns1 0 1
+ pm_nl_set_limits $ns2 1 1
+ pm_nl_add_endpoint $ns1 10.0.2.1 flags signal
+ run_tests $ns1 $ns2 10.0.1.1
+ chk_join_nr 1 1 1
+ chk_add_nr 1 1
+ fi
# test cookie with subflow and signal
- reset_with_cookies
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.2.1 flags signal
- ip netns exec $ns1 ./pm_nl_ctl limits 0 2
- ip netns exec $ns2 ./pm_nl_ctl limits 1 2
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.3.2 flags subflow
- run_tests $ns1 $ns2 10.0.1.1
- chk_join_nr "subflow and signal w cookies" 2 2 2
- chk_add_nr 1 1
+ if reset_with_cookies "subflow and signal w cookies"; then
+ pm_nl_add_endpoint $ns1 10.0.2.1 flags signal
+ pm_nl_set_limits $ns1 0 2
+ pm_nl_set_limits $ns2 1 2
+ pm_nl_add_endpoint $ns2 10.0.3.2 flags subflow
+ run_tests $ns1 $ns2 10.0.1.1
+ chk_join_nr 2 2 2
+ chk_add_nr 1 1
+ fi
# accept and use add_addr with additional subflows
- reset_with_cookies
- ip netns exec $ns1 ./pm_nl_ctl limits 0 3
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.2.1 flags signal
- ip netns exec $ns2 ./pm_nl_ctl limits 1 3
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.3.2 flags subflow
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.4.2 flags subflow
- run_tests $ns1 $ns2 10.0.1.1
- chk_join_nr "subflows and signal w. cookies" 3 3 3
- chk_add_nr 1 1
+ if reset_with_cookies "subflows and signal w. cookies"; then
+ pm_nl_set_limits $ns1 0 3
+ pm_nl_add_endpoint $ns1 10.0.2.1 flags signal
+ pm_nl_set_limits $ns2 1 3
+ pm_nl_add_endpoint $ns2 10.0.3.2 flags subflow
+ pm_nl_add_endpoint $ns2 10.0.4.2 flags subflow
+ run_tests $ns1 $ns2 10.0.1.1
+ chk_join_nr 3 3 3
+ chk_add_nr 1 1
+ fi
}
checksum_tests()
{
# checksum test 0 0
- reset_with_checksum 0 0
- ip netns exec $ns1 ./pm_nl_ctl limits 0 1
- ip netns exec $ns2 ./pm_nl_ctl limits 0 1
- run_tests $ns1 $ns2 10.0.1.1
- chk_csum_nr "checksum test 0 0"
+ if reset_with_checksum 0 0; then
+ pm_nl_set_limits $ns1 0 1
+ pm_nl_set_limits $ns2 0 1
+ run_tests $ns1 $ns2 10.0.1.1
+ chk_join_nr 0 0 0
+ fi
# checksum test 1 1
- reset_with_checksum 1 1
- ip netns exec $ns1 ./pm_nl_ctl limits 0 1
- ip netns exec $ns2 ./pm_nl_ctl limits 0 1
- run_tests $ns1 $ns2 10.0.1.1
- chk_csum_nr "checksum test 1 1"
+ if reset_with_checksum 1 1; then
+ pm_nl_set_limits $ns1 0 1
+ pm_nl_set_limits $ns2 0 1
+ run_tests $ns1 $ns2 10.0.1.1
+ chk_join_nr 0 0 0
+ fi
# checksum test 0 1
- reset_with_checksum 0 1
- ip netns exec $ns1 ./pm_nl_ctl limits 0 1
- ip netns exec $ns2 ./pm_nl_ctl limits 0 1
- run_tests $ns1 $ns2 10.0.1.1
- chk_csum_nr "checksum test 0 1"
+ if reset_with_checksum 0 1; then
+ pm_nl_set_limits $ns1 0 1
+ pm_nl_set_limits $ns2 0 1
+ run_tests $ns1 $ns2 10.0.1.1
+ chk_join_nr 0 0 0
+ fi
# checksum test 1 0
- reset_with_checksum 1 0
- ip netns exec $ns1 ./pm_nl_ctl limits 0 1
- ip netns exec $ns2 ./pm_nl_ctl limits 0 1
- run_tests $ns1 $ns2 10.0.1.1
- chk_csum_nr "checksum test 1 0"
+ if reset_with_checksum 1 0; then
+ pm_nl_set_limits $ns1 0 1
+ pm_nl_set_limits $ns2 0 1
+ run_tests $ns1 $ns2 10.0.1.1
+ chk_join_nr 0 0 0
+ fi
}
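Note that reset_with_checksum is the one reset variant not handed a name string: it can derive the name from its two sysctl arguments. A plausible sketch, assuming the helper simply toggles net.mptcp.checksum_enabled in both namespaces (the exact body is an assumption):

reset_with_checksum()
{
	local ns1_enable="${1}"
	local ns2_enable="${2}"

	# build the test name from the arguments, so callers pass none
	reset "checksum test ${ns1_enable} ${ns2_enable}" || return 1

	ip netns exec $ns1 sysctl -q net.mptcp.checksum_enabled="${ns1_enable}"
	ip netns exec $ns2 sysctl -q net.mptcp.checksum_enabled="${ns2_enable}"
}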
deny_join_id0_tests()
{
# subflow allow join id0 ns1
- reset_with_allow_join_id0 1 0
- ip netns exec $ns1 ./pm_nl_ctl limits 1 1
- ip netns exec $ns2 ./pm_nl_ctl limits 1 1
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.3.2 flags subflow
- run_tests $ns1 $ns2 10.0.1.1
- chk_join_nr "single subflow allow join id0 ns1" 1 1 1
+ if reset_with_allow_join_id0 "single subflow allow join id0 ns1" 1 0; then
+ pm_nl_set_limits $ns1 1 1
+ pm_nl_set_limits $ns2 1 1
+ pm_nl_add_endpoint $ns2 10.0.3.2 flags subflow
+ run_tests $ns1 $ns2 10.0.1.1
+ chk_join_nr 1 1 1
+ fi
# subflow allow join id0 ns2
- reset_with_allow_join_id0 0 1
- ip netns exec $ns1 ./pm_nl_ctl limits 1 1
- ip netns exec $ns2 ./pm_nl_ctl limits 1 1
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.3.2 flags subflow
- run_tests $ns1 $ns2 10.0.1.1
- chk_join_nr "single subflow allow join id0 ns2" 0 0 0
+ if reset_with_allow_join_id0 "single subflow allow join id0 ns2" 0 1; then
+ pm_nl_set_limits $ns1 1 1
+ pm_nl_set_limits $ns2 1 1
+ pm_nl_add_endpoint $ns2 10.0.3.2 flags subflow
+ run_tests $ns1 $ns2 10.0.1.1
+ chk_join_nr 0 0 0
+ fi
# signal address allow join id0 ns1
# ADD_ADDRs are not affected by allow_join_id0 value.
- reset_with_allow_join_id0 1 0
- ip netns exec $ns1 ./pm_nl_ctl limits 1 1
- ip netns exec $ns2 ./pm_nl_ctl limits 1 1
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.2.1 flags signal
- run_tests $ns1 $ns2 10.0.1.1
- chk_join_nr "signal address allow join id0 ns1" 1 1 1
- chk_add_nr 1 1
+ if reset_with_allow_join_id0 "signal address allow join id0 ns1" 1 0; then
+ pm_nl_set_limits $ns1 1 1
+ pm_nl_set_limits $ns2 1 1
+ pm_nl_add_endpoint $ns1 10.0.2.1 flags signal
+ run_tests $ns1 $ns2 10.0.1.1
+ chk_join_nr 1 1 1
+ chk_add_nr 1 1
+ fi
# signal address allow join id0 ns2
# ADD_ADDRs are not affected by allow_join_id0 value.
- reset_with_allow_join_id0 0 1
- ip netns exec $ns1 ./pm_nl_ctl limits 1 1
- ip netns exec $ns2 ./pm_nl_ctl limits 1 1
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.2.1 flags signal
- run_tests $ns1 $ns2 10.0.1.1
- chk_join_nr "signal address allow join id0 ns2" 1 1 1
- chk_add_nr 1 1
+ if reset_with_allow_join_id0 "signal address allow join id0 ns2" 0 1; then
+ pm_nl_set_limits $ns1 1 1
+ pm_nl_set_limits $ns2 1 1
+ pm_nl_add_endpoint $ns1 10.0.2.1 flags signal
+ run_tests $ns1 $ns2 10.0.1.1
+ chk_join_nr 1 1 1
+ chk_add_nr 1 1
+ fi
# subflow and address allow join id0 ns1
- reset_with_allow_join_id0 1 0
- ip netns exec $ns1 ./pm_nl_ctl limits 2 2
- ip netns exec $ns2 ./pm_nl_ctl limits 2 2
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.2.1 flags signal
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.3.2 flags subflow
- run_tests $ns1 $ns2 10.0.1.1
- chk_join_nr "subflow and address allow join id0 1" 2 2 2
+ if reset_with_allow_join_id0 "subflow and address allow join id0 1" 1 0; then
+ pm_nl_set_limits $ns1 2 2
+ pm_nl_set_limits $ns2 2 2
+ pm_nl_add_endpoint $ns1 10.0.2.1 flags signal
+ pm_nl_add_endpoint $ns2 10.0.3.2 flags subflow
+ run_tests $ns1 $ns2 10.0.1.1
+ chk_join_nr 2 2 2
+ fi
# subflow and address allow join id0 ns2
- reset_with_allow_join_id0 0 1
- ip netns exec $ns1 ./pm_nl_ctl limits 2 2
- ip netns exec $ns2 ./pm_nl_ctl limits 2 2
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.2.1 flags signal
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.3.2 flags subflow
- run_tests $ns1 $ns2 10.0.1.1
- chk_join_nr "subflow and address allow join id0 2" 1 1 1
+ if reset_with_allow_join_id0 "subflow and address allow join id0 2" 0 1; then
+ pm_nl_set_limits $ns1 2 2
+ pm_nl_set_limits $ns2 2 2
+ pm_nl_add_endpoint $ns1 10.0.2.1 flags signal
+ pm_nl_add_endpoint $ns2 10.0.3.2 flags subflow
+ run_tests $ns1 $ns2 10.0.1.1
+ chk_join_nr 1 1 1
+ fi
}
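reset_with_allow_join_id0 follows the same shape, taking the test name first and the two per-namespace values after it. A hedged sketch, assuming it drives the net.mptcp.allow_join_initial_addr_port sysctl:

reset_with_allow_join_id0()
{
	local ns1_enable="${2}"
	local ns2_enable="${3}"

	reset "${1}" || return 1

	ip netns exec $ns1 sysctl -q net.mptcp.allow_join_initial_addr_port="${ns1_enable}"
	ip netns exec $ns2 sysctl -q net.mptcp.allow_join_initial_addr_port="${ns2_enable}"
}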
fullmesh_tests()
@@ -1865,177 +2456,237 @@ fullmesh_tests()
# fullmesh 1
# 2 fullmesh addrs in ns2, added before the connection,
# 1 non-fullmesh addr in ns1, added during the connection.
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 0 4
- ip netns exec $ns2 ./pm_nl_ctl limits 1 4
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.2.2 flags subflow,fullmesh
- ip netns exec $ns2 ./pm_nl_ctl add 10.0.3.2 flags subflow,fullmesh
- run_tests $ns1 $ns2 10.0.1.1 0 1 0 slow
- chk_join_nr "fullmesh test 2x1" 4 4 4
- chk_add_nr 1 1
+ if reset "fullmesh test 2x1"; then
+ pm_nl_set_limits $ns1 0 4
+ pm_nl_set_limits $ns2 1 4
+ pm_nl_add_endpoint $ns2 10.0.2.2 flags subflow,fullmesh
+ pm_nl_add_endpoint $ns2 10.0.3.2 flags subflow,fullmesh
+ run_tests $ns1 $ns2 10.0.1.1 0 1 0 slow
+ chk_join_nr 4 4 4
+ chk_add_nr 1 1
+ fi
# fullmesh 2
# 1 non-fullmesh addr in ns1, added before the connection,
# 1 fullmesh addr in ns2, added during the connection.
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 1 3
- ip netns exec $ns2 ./pm_nl_ctl limits 1 3
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.2.1 flags signal
- run_tests $ns1 $ns2 10.0.1.1 0 0 fullmesh_1 slow
- chk_join_nr "fullmesh test 1x1" 3 3 3
- chk_add_nr 1 1
+ if reset "fullmesh test 1x1"; then
+ pm_nl_set_limits $ns1 1 3
+ pm_nl_set_limits $ns2 1 3
+ pm_nl_add_endpoint $ns1 10.0.2.1 flags signal
+ run_tests $ns1 $ns2 10.0.1.1 0 0 fullmesh_1 slow
+ chk_join_nr 3 3 3
+ chk_add_nr 1 1
+ fi
# fullmesh 3
# 1 non-fullmesh addr in ns1, added before the connection,
# 2 fullmesh addrs in ns2, added during the connection.
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 2 5
- ip netns exec $ns2 ./pm_nl_ctl limits 1 5
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.2.1 flags signal
- run_tests $ns1 $ns2 10.0.1.1 0 0 fullmesh_2 slow
- chk_join_nr "fullmesh test 1x2" 5 5 5
- chk_add_nr 1 1
+ if reset "fullmesh test 1x2"; then
+ pm_nl_set_limits $ns1 2 5
+ pm_nl_set_limits $ns2 1 5
+ pm_nl_add_endpoint $ns1 10.0.2.1 flags signal
+ run_tests $ns1 $ns2 10.0.1.1 0 0 fullmesh_2 slow
+ chk_join_nr 5 5 5
+ chk_add_nr 1 1
+ fi
# fullmesh 4
# 1 non-fullmesh addr in ns1, added before the connection,
# 2 fullmesh addrs in ns2, added during the connection,
# limit max_subflows to 4.
- reset
- ip netns exec $ns1 ./pm_nl_ctl limits 2 4
- ip netns exec $ns2 ./pm_nl_ctl limits 1 4
- ip netns exec $ns1 ./pm_nl_ctl add 10.0.2.1 flags signal
- run_tests $ns1 $ns2 10.0.1.1 0 0 fullmesh_2 slow
- chk_join_nr "fullmesh test 1x2, limited" 4 4 4
- chk_add_nr 1 1
+ if reset "fullmesh test 1x2, limited"; then
+ pm_nl_set_limits $ns1 2 4
+ pm_nl_set_limits $ns2 1 4
+ pm_nl_add_endpoint $ns1 10.0.2.1 flags signal
+ run_tests $ns1 $ns2 10.0.1.1 0 0 fullmesh_2 slow
+ chk_join_nr 4 4 4
+ chk_add_nr 1 1
+ fi
+
+ # set fullmesh flag
+ if reset "set fullmesh flag test"; then
+ pm_nl_set_limits $ns1 4 4
+ pm_nl_add_endpoint $ns1 10.0.2.1 flags subflow
+ pm_nl_set_limits $ns2 4 4
+ run_tests $ns1 $ns2 10.0.1.1 0 0 1 slow fullmesh
+ chk_join_nr 2 2 2
+ chk_rm_nr 0 1
+ fi
+
+ # set nofullmesh flag
+ if reset "set nofullmesh flag test"; then
+ pm_nl_set_limits $ns1 4 4
+ pm_nl_add_endpoint $ns1 10.0.2.1 flags subflow,fullmesh
+ pm_nl_set_limits $ns2 4 4
+ run_tests $ns1 $ns2 10.0.1.1 0 0 fullmesh_1 slow nofullmesh
+ chk_join_nr 2 2 2
+ chk_rm_nr 0 1
+ fi
+
+ # set backup,fullmesh flags
+ if reset "set backup,fullmesh flags test"; then
+ pm_nl_set_limits $ns1 4 4
+ pm_nl_add_endpoint $ns1 10.0.2.1 flags subflow
+ pm_nl_set_limits $ns2 4 4
+ run_tests $ns1 $ns2 10.0.1.1 0 0 1 slow backup,fullmesh
+ chk_join_nr 2 2 2
+ chk_prio_nr 0 1
+ chk_rm_nr 0 1
+ fi
+
+ # set nobackup,nofullmesh flags
+ if reset "set nobackup,nofullmesh flags test"; then
+ pm_nl_set_limits $ns1 4 4
+ pm_nl_set_limits $ns2 4 4
+ pm_nl_add_endpoint $ns2 10.0.2.2 flags subflow,backup,fullmesh
+ run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow nobackup,nofullmesh
+ chk_join_nr 2 2 2
+ chk_prio_nr 0 1
+ chk_rm_nr 0 1
+ fi
}
-all_tests()
+fastclose_tests()
{
- subflows_tests
- subflows_error_tests
- signal_address_tests
- link_failure_tests
- add_addr_timeout_tests
- remove_tests
- add_tests
- ipv6_tests
- v4mapped_tests
- backup_tests
- add_addr_ports_tests
- syncookies_tests
- checksum_tests
- deny_join_id0_tests
- fullmesh_tests
+ if reset "fastclose test"; then
+ run_tests $ns1 $ns2 10.0.1.1 1024 0 fastclose_2
+ chk_join_nr 0 0 0
+ chk_fclose_nr 1 1
+ chk_rst_nr 1 1 invert
+ fi
+}
+
+implicit_tests()
+{
+ # userspace pm type prevents add_addr
+ if reset "implicit EP"; then
+ pm_nl_set_limits $ns1 2 2
+ pm_nl_set_limits $ns2 2 2
+ pm_nl_add_endpoint $ns1 10.0.2.1 flags signal
+ run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow &
+
+ wait_mpj $ns1
+ pm_nl_check_endpoint 1 "creation" \
+ $ns2 10.0.2.2 id 1 flags implicit
+
+ pm_nl_add_endpoint $ns2 10.0.2.2 id 33
+ pm_nl_check_endpoint 0 "ID change is prevented" \
+ $ns2 10.0.2.2 id 1 flags implicit
+
+ pm_nl_add_endpoint $ns2 10.0.2.2 flags signal
+ pm_nl_check_endpoint 0 "modif is allowed" \
+ $ns2 10.0.2.2 id 1 flags signal
+ wait
+ fi
}
+# [$1: error message]
usage()
{
+ if [ -n "${1}" ]; then
+ echo "${1}"
+ ret=1
+ fi
+
echo "mptcp_join usage:"
- echo " -f subflows_tests"
- echo " -e subflows_error_tests"
- echo " -s signal_address_tests"
- echo " -l link_failure_tests"
- echo " -t add_addr_timeout_tests"
- echo " -r remove_tests"
- echo " -a add_tests"
- echo " -6 ipv6_tests"
- echo " -4 v4mapped_tests"
- echo " -b backup_tests"
- echo " -p add_addr_ports_tests"
- echo " -k syncookies_tests"
- echo " -S checksum_tests"
- echo " -d deny_join_id0_tests"
- echo " -m fullmesh_tests"
+
+ local key
+ for key in "${!all_tests[@]}"; do
+ echo " -${key} ${all_tests[${key}]}"
+ done
+
echo " -c capture pcap files"
echo " -C enable data checksum"
+ echo " -i use ip mptcp"
echo " -h help"
-}
-sin=$(mktemp)
-sout=$(mktemp)
-cin=$(mktemp)
-cinsent=$(mktemp)
-cout=$(mktemp)
-init
-make_file "$cin" "client" 1
-make_file "$sin" "server" 1
-trap cleanup EXIT
+ echo "[test ids|names]"
-for arg in "$@"; do
- # check for "capture/checksum" args before launching tests
- if [[ "${arg}" =~ ^"-"[0-9a-zA-Z]*"c"[0-9a-zA-Z]*$ ]]; then
- capture=1
- fi
- if [[ "${arg}" =~ ^"-"[0-9a-zA-Z]*"C"[0-9a-zA-Z]*$ ]]; then
- checksum=1
- fi
+ exit ${ret}
+}
- # exception for the capture/checksum options, the rest means: a part of the tests
- if [ "${arg}" != "-c" ] && [ "${arg}" != "-C" ]; then
- do_all_tests=0
- fi
-done
-if [ $do_all_tests -eq 1 ]; then
- all_tests
- exit $ret
-fi
+# Use a "simple" array to force an specific order we cannot have with an associative one
+all_tests_sorted=(
+ f@subflows_tests
+ e@subflows_error_tests
+ s@signal_address_tests
+ l@link_failure_tests
+ t@add_addr_timeout_tests
+ r@remove_tests
+ a@add_tests
+ 6@ipv6_tests
+ 4@v4mapped_tests
+ b@backup_tests
+ p@add_addr_ports_tests
+ k@syncookies_tests
+ S@checksum_tests
+ d@deny_join_id0_tests
+ m@fullmesh_tests
+ z@fastclose_tests
+ I@implicit_tests
+)
+
+all_tests_args=""
+all_tests_names=()
+for subtests in "${all_tests_sorted[@]}"; do
+ key="${subtests%@*}"
+ value="${subtests#*@}"
+
+ all_tests_args+="${key}"
+ all_tests_names+=("${value}")
+ all_tests[${key}]="${value}"
+done
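With the table built this way, every getopts key maps to one test group, and any leftover arguments select individual tests by id or name. Illustrative invocations (the script path is a placeholder):

./mptcp_join.sh                  # run every test group
./mptcp_join.sh -f -t            # only subflows_tests and add_addr_timeout_tests
./mptcp_join.sh -C -m            # fullmesh_tests with data checksums enabled
./mptcp_join.sh 5 "remove single subflow"   # pick tests by id and by name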
-while getopts 'fesltra64bpkdmchCS' opt; do
+tests=()
+while getopts "${all_tests_args}cCih" opt; do
case $opt in
- f)
- subflows_tests
- ;;
- e)
- subflows_error_tests
- ;;
- s)
- signal_address_tests
- ;;
- l)
- link_failure_tests
- ;;
- t)
- add_addr_timeout_tests
- ;;
- r)
- remove_tests
- ;;
- a)
- add_tests
- ;;
- 6)
- ipv6_tests
- ;;
- 4)
- v4mapped_tests
- ;;
- b)
- backup_tests
- ;;
- p)
- add_addr_ports_tests
- ;;
- k)
- syncookies_tests
- ;;
- S)
- checksum_tests
- ;;
- d)
- deny_join_id0_tests
- ;;
- m)
- fullmesh_tests
+ ["${all_tests_args}"])
+ tests+=("${all_tests[${opt}]}")
;;
c)
+ capture=1
;;
C)
+ checksum=1
+ ;;
+ i)
+ ip_mptcp=1
;;
- h | *)
+ h)
usage
;;
+ *)
+ usage "Unknown option: -${opt}"
+ ;;
esac
done
+shift $((OPTIND - 1))
+
+for arg in "${@}"; do
+ if [[ "${arg}" =~ ^[0-9]+$ ]]; then
+ only_tests_ids+=("${arg}")
+ else
+ only_tests_names+=("${arg}")
+ fi
+done
+
+if [ ${#tests[@]} -eq 0 ]; then
+ tests=("${all_tests_names[@]}")
+fi
+
+for subtests in "${tests[@]}"; do
+ "${subtests}"
+done
+
+if [ ${ret} -ne 0 ]; then
+ echo
+ echo "${#failed_tests[@]} failure(s) has(ve) been detected:"
+ for i in $(get_failed_tests_ids); do
+ echo -e "\t- ${i}: ${failed_tests[${i}]}"
+ done
+ echo
+fi
+
exit $ret
diff --git a/tools/testing/selftests/net/mptcp/pm_netlink.sh b/tools/testing/selftests/net/mptcp/pm_netlink.sh
index cbacf9f6538b..89839d1ff9d8 100755
--- a/tools/testing/selftests/net/mptcp/pm_netlink.sh
+++ b/tools/testing/selftests/net/mptcp/pm_netlink.sh
@@ -164,4 +164,22 @@ id 253 flags 10.0.0.5
id 254 flags 10.0.0.2
id 255 flags 10.0.0.3" "wrap-around ids"
+ip netns exec $ns1 ./pm_nl_ctl flush
+ip netns exec $ns1 ./pm_nl_ctl add 10.0.1.1 flags subflow
+ip netns exec $ns1 ./pm_nl_ctl set 10.0.1.1 flags backup
+check "ip netns exec $ns1 ./pm_nl_ctl dump" "id 1 flags \
+subflow,backup 10.0.1.1" "set flags (backup)"
+ip netns exec $ns1 ./pm_nl_ctl set 10.0.1.1 flags nobackup
+check "ip netns exec $ns1 ./pm_nl_ctl dump" "id 1 flags \
+subflow 10.0.1.1" " (nobackup)"
+ip netns exec $ns1 ./pm_nl_ctl set id 1 flags fullmesh
+check "ip netns exec $ns1 ./pm_nl_ctl dump" "id 1 flags \
+subflow,fullmesh 10.0.1.1" " (fullmesh)"
+ip netns exec $ns1 ./pm_nl_ctl set id 1 flags nofullmesh
+check "ip netns exec $ns1 ./pm_nl_ctl dump" "id 1 flags \
+subflow 10.0.1.1" " (nofullmesh)"
+ip netns exec $ns1 ./pm_nl_ctl set id 1 flags backup,fullmesh
+check "ip netns exec $ns1 ./pm_nl_ctl dump" "id 1 flags \
+subflow,backup,fullmesh 10.0.1.1" " (backup,fullmesh)"
+
exit $ret
diff --git a/tools/testing/selftests/net/mptcp/pm_nl_ctl.c b/tools/testing/selftests/net/mptcp/pm_nl_ctl.c
index 354784512748..a75a68ad652e 100644
--- a/tools/testing/selftests/net/mptcp/pm_nl_ctl.c
+++ b/tools/testing/selftests/net/mptcp/pm_nl_ctl.c
@@ -28,7 +28,7 @@ static void syntax(char *argv[])
fprintf(stderr, "\tadd [flags signal|subflow|backup|fullmesh] [id <nr>] [dev <name>] <ip>\n");
fprintf(stderr, "\tdel <id> [<ip>]\n");
fprintf(stderr, "\tget <id>\n");
- fprintf(stderr, "\tset <ip> [flags backup|nobackup]\n");
+ fprintf(stderr, "\tset [<ip>] [id <nr>] flags [no]backup|[no]fullmesh [port <nr>]\n");
fprintf(stderr, "\tflush\n");
fprintf(stderr, "\tdump\n");
fprintf(stderr, "\tlimits [<rcv addr max> <subflow max>]\n");
@@ -436,6 +436,13 @@ static void print_addr(struct rtattr *attrs, int len)
printf(",");
}
+ if (flags & MPTCP_PM_ADDR_FLAG_IMPLICIT) {
+ printf("implicit");
+ flags &= ~MPTCP_PM_ADDR_FLAG_IMPLICIT;
+ if (flags)
+ printf(",");
+ }
+
/* bump unknown flags, if any */
if (flags)
printf("0x%x", flags);
@@ -657,8 +664,10 @@ int set_flags(int fd, int pm_family, int argc, char *argv[])
u_int32_t flags = 0;
u_int16_t family;
int nest_start;
+ int use_id = 0;
+ u_int8_t id;
int off = 0;
- int arg;
+ int arg = 2;
memset(data, 0, sizeof(data));
nh = (void *)data;
@@ -674,29 +683,45 @@ int set_flags(int fd, int pm_family, int argc, char *argv[])
nest->rta_len = RTA_LENGTH(0);
off += NLMSG_ALIGN(nest->rta_len);
- /* addr data */
- rta = (void *)(data + off);
- if (inet_pton(AF_INET, argv[2], RTA_DATA(rta))) {
- family = AF_INET;
- rta->rta_type = MPTCP_PM_ADDR_ATTR_ADDR4;
- rta->rta_len = RTA_LENGTH(4);
- } else if (inet_pton(AF_INET6, argv[2], RTA_DATA(rta))) {
- family = AF_INET6;
- rta->rta_type = MPTCP_PM_ADDR_ATTR_ADDR6;
- rta->rta_len = RTA_LENGTH(16);
+ if (!strcmp(argv[arg], "id")) {
+ if (++arg >= argc)
+ error(1, 0, " missing id value");
+
+ use_id = 1;
+ id = atoi(argv[arg]);
+ rta = (void *)(data + off);
+ rta->rta_type = MPTCP_PM_ADDR_ATTR_ID;
+ rta->rta_len = RTA_LENGTH(1);
+ memcpy(RTA_DATA(rta), &id, 1);
+ off += NLMSG_ALIGN(rta->rta_len);
} else {
- error(1, errno, "can't parse ip %s", argv[2]);
+ /* addr data */
+ rta = (void *)(data + off);
+ if (inet_pton(AF_INET, argv[arg], RTA_DATA(rta))) {
+ family = AF_INET;
+ rta->rta_type = MPTCP_PM_ADDR_ATTR_ADDR4;
+ rta->rta_len = RTA_LENGTH(4);
+ } else if (inet_pton(AF_INET6, argv[arg], RTA_DATA(rta))) {
+ family = AF_INET6;
+ rta->rta_type = MPTCP_PM_ADDR_ATTR_ADDR6;
+ rta->rta_len = RTA_LENGTH(16);
+ } else {
+ error(1, errno, "can't parse ip %s", argv[arg]);
+ }
+ off += NLMSG_ALIGN(rta->rta_len);
+
+ /* family */
+ rta = (void *)(data + off);
+ rta->rta_type = MPTCP_PM_ADDR_ATTR_FAMILY;
+ rta->rta_len = RTA_LENGTH(2);
+ memcpy(RTA_DATA(rta), &family, 2);
+ off += NLMSG_ALIGN(rta->rta_len);
}
- off += NLMSG_ALIGN(rta->rta_len);
- /* family */
- rta = (void *)(data + off);
- rta->rta_type = MPTCP_PM_ADDR_ATTR_FAMILY;
- rta->rta_len = RTA_LENGTH(2);
- memcpy(RTA_DATA(rta), &family, 2);
- off += NLMSG_ALIGN(rta->rta_len);
+ if (++arg >= argc)
+ error(1, 0, " missing flags keyword");
- for (arg = 3; arg < argc; arg++) {
+ for (; arg < argc; arg++) {
if (!strcmp(argv[arg], "flags")) {
char *tok, *str;
@@ -704,12 +729,14 @@ int set_flags(int fd, int pm_family, int argc, char *argv[])
if (++arg >= argc)
error(1, 0, " missing flags value");
- /* do not support flag list yet */
for (str = argv[arg]; (tok = strtok(str, ","));
str = NULL) {
if (!strcmp(tok, "backup"))
flags |= MPTCP_PM_ADDR_FLAG_BACKUP;
- else if (strcmp(tok, "nobackup"))
+ else if (!strcmp(tok, "fullmesh"))
+ flags |= MPTCP_PM_ADDR_FLAG_FULLMESH;
+ else if (strcmp(tok, "nobackup") &&
+ strcmp(tok, "nofullmesh"))
error(1, errno,
"unknown flag %s", argv[arg]);
}
@@ -719,6 +746,21 @@ int set_flags(int fd, int pm_family, int argc, char *argv[])
rta->rta_len = RTA_LENGTH(4);
memcpy(RTA_DATA(rta), &flags, 4);
off += NLMSG_ALIGN(rta->rta_len);
+ } else if (!strcmp(argv[arg], "port")) {
+ u_int16_t port;
+
+ if (use_id)
+ error(1, 0, " port can't be used with id");
+
+ if (++arg >= argc)
+ error(1, 0, " missing port value");
+
+ port = atoi(argv[arg]);
+ rta = (void *)(data + off);
+ rta->rta_type = MPTCP_PM_ADDR_ATTR_PORT;
+ rta->rta_len = RTA_LENGTH(2);
+ memcpy(RTA_DATA(rta), &port, 2);
+ off += NLMSG_ALIGN(rta->rta_len);
} else {
error(1, 0, "unknown keyword %s", argv[arg]);
}
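Taken together, the pm_nl_ctl changes let "set" select an endpoint either by address or by id, with an optional port that is only valid alongside an address. Illustrative invocations matching the updated syntax string (addresses, ids and port numbers are placeholders):

./pm_nl_ctl set 10.0.1.1 flags backup,fullmesh
./pm_nl_ctl set id 1 flags nobackup,nofullmesh
./pm_nl_ctl set 10.0.1.1 flags backup port 10100	# port pairs with <ip>, never with id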
diff --git a/tools/testing/selftests/net/mptcp/settings b/tools/testing/selftests/net/mptcp/settings
index a62d2fa1275c..79b65bdf05db 100644
--- a/tools/testing/selftests/net/mptcp/settings
+++ b/tools/testing/selftests/net/mptcp/settings
@@ -1 +1 @@
-timeout=600
+timeout=1200
diff --git a/tools/testing/selftests/net/pmtu.sh b/tools/testing/selftests/net/pmtu.sh
index 694732e4b344..736e358dc549 100755
--- a/tools/testing/selftests/net/pmtu.sh
+++ b/tools/testing/selftests/net/pmtu.sh
@@ -26,6 +26,15 @@
# - pmtu_ipv6
# Same as pmtu_ipv4, except for locked PMTU tests, using IPv6
#
+# - pmtu_ipv4_dscp_icmp_exception
+# Set up the same network topology as pmtu_ipv4, but use non-default
+# routing table in A. A fib-rule is used to jump to this routing table
+# based on DSCP. Send ICMPv4 packets with the expected DSCP value and
+# verify that ECN doesn't interfere with the creation of PMTU exceptions.
+#
+# - pmtu_ipv4_dscp_udp_exception
+# Same as pmtu_ipv4_dscp_icmp_exception, but use UDP instead of ICMP.
+#
# - pmtu_ipv4_vxlan4_exception
# Set up the same network topology as pmtu_ipv4, create a VXLAN tunnel
# over IPv4 between A and B, routed via R1. On the link between R1 and B,
@@ -203,6 +212,8 @@ which ping6 > /dev/null 2>&1 && ping6=$(which ping6) || ping6=$(which ping)
tests="
pmtu_ipv4_exception ipv4: PMTU exceptions 1
pmtu_ipv6_exception ipv6: PMTU exceptions 1
+ pmtu_ipv4_dscp_icmp_exception ICMPv4 with DSCP and ECN: PMTU exceptions 1
+ pmtu_ipv4_dscp_udp_exception UDPv4 with DSCP and ECN: PMTU exceptions 1
pmtu_ipv4_vxlan4_exception IPv4 over vxlan4: PMTU exceptions 1
pmtu_ipv6_vxlan4_exception IPv6 over vxlan4: PMTU exceptions 1
pmtu_ipv4_vxlan6_exception IPv4 over vxlan6: PMTU exceptions 1
@@ -323,6 +334,9 @@ routes_nh="
B 6 default 61
"
+policy_mark=0x04
+rt_table=main
+
veth4_a_addr="192.168.1.1"
veth4_b_addr="192.168.1.2"
veth4_c_addr="192.168.2.10"
@@ -346,6 +360,7 @@ dummy6_mask="64"
err_buf=
tcpdump_pids=
nettest_pids=
+socat_pids=
err() {
err_buf="${err_buf}${1}
@@ -723,7 +738,7 @@ setup_routing_old() {
ns_name="$(nsname ${ns})"
- ip -n ${ns_name} route add ${addr} via ${gw}
+ ip -n "${ns_name}" route add "${addr}" table "${rt_table}" via "${gw}"
ns=""; addr=""; gw=""
done
@@ -753,7 +768,7 @@ setup_routing_new() {
ns_name="$(nsname ${ns})"
- ip -n ${ns_name} -${fam} route add ${addr} nhid ${nhid}
+ ip -n "${ns_name}" -"${fam}" route add "${addr}" table "${rt_table}" nhid "${nhid}"
ns=""; fam=""; addr=""; nhid=""
done
@@ -798,6 +813,24 @@ setup_routing() {
return 0
}
+setup_policy_routing() {
+ setup_routing
+
+ ip -netns "${NS_A}" -4 rule add dsfield "${policy_mark}" \
+ table "${rt_table}"
+
+ # Set the IPv4 Don't Fragment bit with tc, since socat doesn't seem to
+ # have an option to do it.
+ tc -netns "${NS_A}" qdisc replace dev veth_A-R1 root prio
+ tc -netns "${NS_A}" qdisc replace dev veth_A-R2 root prio
+ tc -netns "${NS_A}" filter add dev veth_A-R1 \
+ protocol ipv4 flower ip_proto udp \
+ action pedit ex munge ip df set 0x40 pipe csum ip and udp
+ tc -netns "${NS_A}" filter add dev veth_A-R2 \
+ protocol ipv4 flower ip_proto udp \
+ action pedit ex munge ip df set 0x40 pipe csum ip and udp
+}
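One way to eyeball the result from namespace A once set up (illustrative commands, not part of the test; the rule priority shown is whatever the kernel assigned):

ip -netns "${NS_A}" -4 rule show	# expect a "tos 0x04 lookup <rt_table>" entry
tc -netns "${NS_A}" filter show dev veth_A-R1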
+
setup_bridge() {
run_cmd ${ns_a} ip link add br0 type bridge || return $ksft_skip
run_cmd ${ns_a} ip link set br0 up
@@ -903,6 +936,11 @@ cleanup() {
done
nettest_pids=
+ for pid in ${socat_pids}; do
+ kill "${pid}"
+ done
+ socat_pids=
+
for n in ${NS_A} ${NS_B} ${NS_C} ${NS_R1} ${NS_R2}; do
ip netns del ${n} 2> /dev/null
done
@@ -950,15 +988,21 @@ link_get_mtu() {
route_get_dst_exception() {
ns_cmd="${1}"
dst="${2}"
+ dsfield="${3}"
- ${ns_cmd} ip route get "${dst}"
+ if [ -z "${dsfield}" ]; then
+ dsfield=0
+ fi
+
+ ${ns_cmd} ip route get "${dst}" dsfield "${dsfield}"
}
route_get_dst_pmtu_from_exception() {
ns_cmd="${1}"
dst="${2}"
+ dsfield="${3}"
- mtu_parse "$(route_get_dst_exception "${ns_cmd}" ${dst})"
+ mtu_parse "$(route_get_dst_exception "${ns_cmd}" "${dst}" "${dsfield}")"
}
check_pmtu_value() {
@@ -1068,6 +1112,95 @@ test_pmtu_ipv6_exception() {
test_pmtu_ipvX 6
}
+test_pmtu_ipv4_dscp_icmp_exception() {
+ rt_table=100
+
+ setup namespaces policy_routing || return $ksft_skip
+ trace "${ns_a}" veth_A-R1 "${ns_r1}" veth_R1-A \
+ "${ns_r1}" veth_R1-B "${ns_b}" veth_B-R1 \
+ "${ns_a}" veth_A-R2 "${ns_r2}" veth_R2-A \
+ "${ns_r2}" veth_R2-B "${ns_b}" veth_B-R2
+
+ # Set up initial MTU values
+ mtu "${ns_a}" veth_A-R1 2000
+ mtu "${ns_r1}" veth_R1-A 2000
+ mtu "${ns_r1}" veth_R1-B 1400
+ mtu "${ns_b}" veth_B-R1 1400
+
+ mtu "${ns_a}" veth_A-R2 2000
+ mtu "${ns_r2}" veth_R2-A 2000
+ mtu "${ns_r2}" veth_R2-B 1500
+ mtu "${ns_b}" veth_B-R2 1500
+
+ len=$((2000 - 20 - 8)) # Fills MTU of veth_A-R1
+
+ dst1="${prefix4}.${b_r1}.1"
+ dst2="${prefix4}.${b_r2}.1"
+
+ # Create route exceptions
+ dsfield=${policy_mark} # No ECN bit set (Not-ECT)
+ run_cmd "${ns_a}" ping -q -M want -Q "${dsfield}" -c 1 -w 1 -s "${len}" "${dst1}"
+
+ dsfield=$(printf "%#x" $((policy_mark + 0x02))) # ECN=2 (ECT(0))
+ run_cmd "${ns_a}" ping -q -M want -Q "${dsfield}" -c 1 -w 1 -s "${len}" "${dst2}"
+
+ # Check that exceptions have been created with the correct PMTU
+ pmtu_1="$(route_get_dst_pmtu_from_exception "${ns_a}" "${dst1}" "${policy_mark}")"
+ check_pmtu_value "1400" "${pmtu_1}" "exceeding MTU" || return 1
+
+ pmtu_2="$(route_get_dst_pmtu_from_exception "${ns_a}" "${dst2}" "${policy_mark}")"
+ check_pmtu_value "1500" "${pmtu_2}" "exceeding MTU" || return 1
+}
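
The dsfield arithmetic in this test relies on the TOS byte layout: the low
two bits carry ECN and the upper six carry DSCP, so adding 0x02 to the
Not-ECT value yields ECT(0) without touching the DSCP part. A sketch of the
composition, using this test's values:

    dscp=0x01                                          # DSCP 1, i.e. policy_mark 0x04 once shifted
    ecn=0x02                                           # ECT(0)
    printf 'dsfield: %#x\n' $(( (dscp << 2) | ecn ))   # prints "dsfield: 0x6"
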
+
+test_pmtu_ipv4_dscp_udp_exception() {
+ rt_table=100
+
+ if ! which socat > /dev/null 2>&1; then
+ echo "'socat' command not found; skipping tests"
+ return $ksft_skip
+ fi
+
+ setup namespaces policy_routing || return $ksft_skip
+ trace "${ns_a}" veth_A-R1 "${ns_r1}" veth_R1-A \
+ "${ns_r1}" veth_R1-B "${ns_b}" veth_B-R1 \
+ "${ns_a}" veth_A-R2 "${ns_r2}" veth_R2-A \
+ "${ns_r2}" veth_R2-B "${ns_b}" veth_B-R2
+
+ # Set up initial MTU values
+ mtu "${ns_a}" veth_A-R1 2000
+ mtu "${ns_r1}" veth_R1-A 2000
+ mtu "${ns_r1}" veth_R1-B 1400
+ mtu "${ns_b}" veth_B-R1 1400
+
+ mtu "${ns_a}" veth_A-R2 2000
+ mtu "${ns_r2}" veth_R2-A 2000
+ mtu "${ns_r2}" veth_R2-B 1500
+ mtu "${ns_b}" veth_B-R2 1500
+
+ len=$((2000 - 20 - 8)) # Fills MTU of veth_A-R1
+
+ dst1="${prefix4}.${b_r1}.1"
+ dst2="${prefix4}.${b_r2}.1"
+
+ # Create route exceptions
+ run_cmd_bg "${ns_b}" socat UDP-LISTEN:50000 OPEN:/dev/null,wronly=1
+ socat_pids="${socat_pids} $!"
+
+ dsfield=${policy_mark} # No ECN bit set (Not-ECT)
+ run_cmd "${ns_a}" socat OPEN:/dev/zero,rdonly=1,readbytes="${len}" \
+ UDP:"${dst1}":50000,tos="${dsfield}"
+
+ dsfield=$(printf "%#x" $((policy_mark + 0x02))) # ECN=2 (ECT(0))
+ run_cmd "${ns_a}" socat OPEN:/dev/zero,rdonly=1,readbytes="${len}" \
+ UDP:"${dst2}":50000,tos="${dsfield}"
+
+ # Check that exceptions have been created with the correct PMTU
+ pmtu_1="$(route_get_dst_pmtu_from_exception "${ns_a}" "${dst1}" "${policy_mark}")"
+ check_pmtu_value "1400" "${pmtu_1}" "exceeding MTU" || return 1
+ pmtu_2="$(route_get_dst_pmtu_from_exception "${ns_a}" "${dst2}" "${policy_mark}")"
+ check_pmtu_value "1500" "${pmtu_2}" "exceeding MTU" || return 1
+}
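
The socat pair above is the UDP analogue of the ping-based exception
creation: a throwaway listener on B discards everything, while the sender on
A reads exactly ${len} bytes from /dev/zero so a single maximally-sized
datagram leaves with the requested tos byte. The bare shape of the pattern,
with a hypothetical address:

    socat UDP-LISTEN:50000 OPEN:/dev/null,wronly=1 &   # discard receiver
    socat OPEN:/dev/zero,rdonly=1,readbytes=1972 \
          UDP:192.0.2.1:50000,tos=0x04                 # one 1972-byte datagram
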
+
test_pmtu_ipvX_over_vxlanY_or_geneveY_exception() {
type=${1}
family=${2}
diff --git a/tools/testing/selftests/net/psock_fanout.c b/tools/testing/selftests/net/psock_fanout.c
index 3653d6468c67..1a736f700be4 100644
--- a/tools/testing/selftests/net/psock_fanout.c
+++ b/tools/testing/selftests/net/psock_fanout.c
@@ -53,6 +53,7 @@
#include <unistd.h>
#include "psock_lib.h"
+#include "../kselftest.h"
#define RING_NUM_FRAMES 20
@@ -117,7 +118,7 @@ static void sock_fanout_set_cbpf(int fd)
struct sock_fprog bpf_prog;
bpf_prog.filter = bpf_filter;
- bpf_prog.len = sizeof(bpf_filter) / sizeof(struct sock_filter);
+ bpf_prog.len = ARRAY_SIZE(bpf_filter);
if (setsockopt(fd, SOL_PACKET, PACKET_FANOUT_DATA, &bpf_prog,
sizeof(bpf_prog))) {
@@ -162,7 +163,7 @@ static void sock_fanout_set_ebpf(int fd)
memset(&attr, 0, sizeof(attr));
attr.prog_type = BPF_PROG_TYPE_SOCKET_FILTER;
attr.insns = (unsigned long) prog;
- attr.insn_cnt = sizeof(prog) / sizeof(prog[0]);
+ attr.insn_cnt = ARRAY_SIZE(prog);
attr.license = (unsigned long) "GPL";
attr.log_buf = (unsigned long) log_buf,
attr.log_size = sizeof(log_buf),
diff --git a/tools/testing/selftests/net/reuseport_bpf_numa.c b/tools/testing/selftests/net/reuseport_bpf_numa.c
index b2eebf669b8c..c9ba36aa688e 100644
--- a/tools/testing/selftests/net/reuseport_bpf_numa.c
+++ b/tools/testing/selftests/net/reuseport_bpf_numa.c
@@ -86,7 +86,7 @@ static void attach_bpf(int fd)
memset(&attr, 0, sizeof(attr));
attr.prog_type = BPF_PROG_TYPE_SOCKET_FILTER;
- attr.insn_cnt = sizeof(prog) / sizeof(prog[0]);
+ attr.insn_cnt = ARRAY_SIZE(prog);
attr.insns = (unsigned long) &prog;
attr.license = (unsigned long) &bpf_license;
attr.log_buf = (unsigned long) &bpf_log_buf;
diff --git a/tools/testing/selftests/net/rtnetlink.sh b/tools/testing/selftests/net/rtnetlink.sh
index c9ce3dfa42ee..0900c5438fbb 100755
--- a/tools/testing/selftests/net/rtnetlink.sh
+++ b/tools/testing/selftests/net/rtnetlink.sh
@@ -216,9 +216,9 @@ kci_test_route_get()
check_err $?
ip route get fe80::1 dev "$devdummy" > /dev/null
check_err $?
- ip route get 127.0.0.1 from 127.0.0.1 oif lo tos 0x1 mark 0x1 > /dev/null
+ ip route get 127.0.0.1 from 127.0.0.1 oif lo tos 0x10 mark 0x1 > /dev/null
check_err $?
- ip route get ::1 from ::1 iif lo oif lo tos 0x1 mark 0x1 > /dev/null
+ ip route get ::1 from ::1 iif lo oif lo tos 0x10 mark 0x1 > /dev/null
check_err $?
ip addr add dev "$devdummy" 10.23.7.11/24
check_err $?
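
The tos bump from 0x1 to 0x10 is presumably needed because 0x1 lies entirely
within the two-bit ECN field, which is no longer treated as part of the
route lookup key, while 0x10 keeps the ECN bits clear:

    # tos is DSCP<<2 | ECN; only the DSCP part acts as a lookup key.
    ip route get 127.0.0.1 tos 0x10   # DSCP 4, ECN 0: valid key
    ip route get 127.0.0.1 tos 0x1    # ECN-only value, no longer meaningful
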
diff --git a/tools/testing/selftests/net/test_vxlan_vnifiltering.sh b/tools/testing/selftests/net/test_vxlan_vnifiltering.sh
new file mode 100755
index 000000000000..704997ffc244
--- /dev/null
+++ b/tools/testing/selftests/net/test_vxlan_vnifiltering.sh
@@ -0,0 +1,579 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+
+# This test is for checking the VXLAN vni filtering api and
+# datapath.
+# It simulates two hypervisors, each running two VMs, using six network
+# namespaces: two for the HVs, four for the VMs. Each VM is
+# connected to a separate bridge. The VMs use overlapping vlans and
+# hence need the separate bridge domains. Each vxlan device is a collect
+# metadata device with vni filtering and hence can terminate only its
+# configured vnis.
+
+# +--------------------------------+ +------------------------------------+
+# | vm-11 netns | | vm-21 netns |
+# | | | |
+# |+------------+ +-------------+ | |+-------------+ +----------------+ |
+# ||veth-11.10 | |veth-11.20 | | ||veth-21.10 | | veth-21.20 | |
+# ||10.0.10.11/24 |10.0.20.11/24| | ||10.0.10.21/24| | 10.0.20.21/24 | |
+# |+------|-----+ +|------------+ | |+-----------|-+ +---|------------+ |
+# | | | | | | | |
+# | | | | | +------------+ |
+# | +------------+ | | | veth-21 | |
+# | | veth-11 | | | | | |
+# | | | | | +-----|------+ |
+# | +-----|------+ | | | |
+# | | | | | |
+# +------------|-------------------+ +---------------|--------------------+
+# +------------|-----------------------------------------|-------------------+
+# | +-----|------+ +-----|------+ |
+# | |vethhv-11 | |vethhv-21 | |
+# | +----|-------+ +-----|------+ |
+# | +---|---+ +---|--+ |
+# | | br1 | | br2 | |
+# | +---|---+ +---|--+ |
+# | +---|----+ +---|--+ |
+# | | vxlan1| |vxlan2| |
+# | +--|-----+ +--|---+ |
+# | | | |
+# | | +---------------------+ | |
+# | | |veth0 | | |
+# | +---------|172.16.0.1/24 -----------+ |
+# | |2002:fee1::1/64 | |
+# | hv-1 netns +--------|------------+ |
+# +-----------------------------|--------------------------------------------+
+# |
+# +-----------------------------|--------------------------------------------+
+# | hv-2 netns +--------|-------------+ |
+# | | veth0 | |
+# | +------| 172.16.0.2/24 |---+ |
+# | | | 2002:fee1::2/64 | | |
+# | | | | | |
+# | | +----------------------+ |   |
+# | | | |
+# | +-|-------+ +--------|-+ |
+# | | vxlan1 | | vxlan2 | |
+# | +----|----+ +---|------+ |
+# | +--|--+ +-|---+ |
+# | | br1 | | br2 | |
+# | +--|--+ +--|--+ |
+# | +-----|-------+ +----|-------+ |
+# | | vethhv-12 | |vethhv-22 | |
+# | +------|------+ +-------|----+ |
+# +-----------------|----------------------------|---------------------------+
+# | |
+# +-----------------|-----------------+ +--------|---------------------------+
+# | +-------|---+ | | +--|---------+ |
+# | | veth-12 | | | |veth-22 | |
+# | +-|--------|+ | | +--|--------|+ |
+# | | | | | | | |
+# |+----------|--+ +---|-----------+ | |+-------|-----+ +|---------------+ |
+# ||veth-12.10 | |veth-12.20 | | ||veth-22.10 | |veth-22.20 | |
+# ||10.0.10.12/24| |10.0.20.12/24 | | ||10.0.10.22/24| |10.0.20.22/24 | |
+# |+-------------+ +---------------+ | |+-------------+ +----------------+ |
+# | | | |
+# | | | |
+# | vm-12 netns | |vm-22 netns |
+# +-----------------------------------+ +------------------------------------+
+#
+#
+# This test exercises the new vxlan vnifiltering api.
+
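
Before the topology below, note that a vni filtering device is simply an
external (collect metadata) vxlan device created with the vnifilter flag,
with its accepted VNIs added one at a time through the bridge command. A
minimal sketch, mirroring commands this script runs later:

    # Hypothetical stand-alone example, outside the test flow.
    ip link add vx0 type vxlan external vnifilter local 172.16.0.1 dstport 4789
    bridge vni add dev vx0 vni 100                     # terminate vni 100
    bridge vni add dev vx0 vni 200 group 239.1.1.100   # per-vni BUM group
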
+ret=0
+# Kselftest framework requirement - SKIP code is 4.
+ksft_skip=4
+
+# all tests in this script. Can be overridden with -t option
+TESTS="
+ vxlan_vnifilter_api
+ vxlan_vnifilter_datapath
+ vxlan_vnifilter_datapath_pervni
+ vxlan_vnifilter_datapath_mgroup
+ vxlan_vnifilter_datapath_mgroup_pervni
+ vxlan_vnifilter_metadata_and_traditional_mix
+"
+VERBOSE=0
+PAUSE_ON_FAIL=no
+PAUSE=no
+
+which ping6 > /dev/null 2>&1 && ping6=$(which ping6) || ping6=$(which ping)
+
+log_test()
+{
+ local rc=$1
+ local expected=$2
+ local msg="$3"
+
+ if [ ${rc} -eq ${expected} ]; then
+ printf " TEST: %-60s [ OK ]\n" "${msg}"
+ nsuccess=$((nsuccess+1))
+ else
+ ret=1
+ nfail=$((nfail+1))
+ printf " TEST: %-60s [FAIL]\n" "${msg}"
+ if [ "${PAUSE_ON_FAIL}" = "yes" ]; then
+ echo
+ echo "hit enter to continue, 'q' to quit"
+ read a
+ [ "$a" = "q" ] && exit 1
+ fi
+ fi
+
+ if [ "${PAUSE}" = "yes" ]; then
+ echo
+ echo "hit enter to continue, 'q' to quit"
+ read a
+ [ "$a" = "q" ] && exit 1
+ fi
+}
+
+run_cmd()
+{
+ local cmd="$1"
+ local out
+ local stderr="2>/dev/null"
+
+ if [ "$VERBOSE" = "1" ]; then
+ printf "COMMAND: $cmd\n"
+ stderr=
+ fi
+
+ out=$(eval $cmd $stderr)
+ rc=$?
+ if [ "$VERBOSE" = "1" -a -n "$out" ]; then
+ echo " $out"
+ fi
+
+ return $rc
+}
+
+check_hv_connectivity() {
+ ip netns exec hv-1 ping -c 1 -W 1 $1 &>/dev/null
+ sleep 1
+ ip netns exec hv-1 ping -c 1 -W 1 $2 &>/dev/null
+
+ return $?
+}
+
+check_vm_connectivity() {
+ run_cmd "ip netns exec vm-11 ping -c 1 -W 1 10.0.10.12"
+ log_test $? 0 "VM connectivity over $1 (ipv4 default rdst)"
+
+ run_cmd "ip netns exec vm-21 ping -c 1 -W 1 10.0.10.22"
+ log_test $? 0 "VM connectivity over $1 (ipv6 default rdst)"
+}
+
+cleanup() {
+ ip link del veth-hv-1 2>/dev/null || true
+ ip link del vethhv-11 vethhv-12 vethhv-21 vethhv-22 2>/dev/null || true
+
+ for ns in hv-1 hv-2 vm-11 vm-21 vm-12 vm-22 vm-31 vm-32; do
+ ip netns del $ns 2>/dev/null || true
+ done
+}
+
+trap cleanup EXIT
+
+setup-hv-networking() {
+ hv=$1
+ local1=$2
+ mask1=$3
+ local2=$4
+ mask2=$5
+
+ ip netns add hv-$hv
+ ip link set veth-hv-$hv netns hv-$hv
+ ip -netns hv-$hv link set veth-hv-$hv name veth0
+ ip -netns hv-$hv addr add $local1/$mask1 dev veth0
+ ip -netns hv-$hv addr add $local2/$mask2 dev veth0
+ ip -netns hv-$hv link set veth0 up
+}
+
+# Setups a "VM" simulated by a netns an a veth pair
+# example: setup-vm <hvid> <vmid> <brid> <VATTRS> <mcast_for_bum>
+# VATTRS = comma separated "<vlan>-<v[46]>-<localip>-<remoteip>-<VTYPE>-<vxlandstport>"
+# VTYPE = vxlan device type: default = traditional device, metadata = metadata device,
+#	   vnifilter = vnifiltering device,
+#	   vnifilterg = vnifiltering device with per vni group/remote
+# example:
+# setup-vm 1 11 1 \
+# 10-v4-172.16.0.1-239.1.1.100-vnifilterg,20-v4-172.16.0.1-239.1.1.100-vnifilterg 1
+#
+setup-vm() {
+ hvid=$1
+ vmid=$2
+ brid=$3
+ vattrs=$4
+ mcast=$5
+ lastvxlandev=""
+
+ # create bridge
+ ip -netns hv-$hvid link add br$brid type bridge vlan_filtering 1 vlan_default_pvid 0 \
+ mcast_snooping 0
+ ip -netns hv-$hvid link set br$brid up
+
+ # create vm namespace and interfaces and connect to hypervisor
+ # namespace
+ ip netns add vm-$vmid
+ hvvethif="vethhv-$vmid"
+ vmvethif="veth-$vmid"
+ ip link add $hvvethif type veth peer name $vmvethif
+ ip link set $hvvethif netns hv-$hvid
+ ip link set $vmvethif netns vm-$vmid
+ ip -netns hv-$hvid link set $hvvethif up
+ ip -netns vm-$vmid link set $vmvethif up
+ ip -netns hv-$hvid link set $hvvethif master br$brid
+
+ # configure VM vlan/vni filtering on hypervisor
+ for vmap in $(echo $vattrs | cut -d "," -f1- --output-delimiter=' ')
+ do
+ local vid=$(echo $vmap | awk -F'-' '{print ($1)}')
+ local family=$(echo $vmap | awk -F'-' '{print ($2)}')
+ local localip=$(echo $vmap | awk -F'-' '{print ($3)}')
+ local group=$(echo $vmap | awk -F'-' '{print ($4)}')
+ local vtype=$(echo $vmap | awk -F'-' '{print ($5)}')
+ local port=$(echo $vmap | awk -F'-' '{print ($6)}')
+
+ ip -netns vm-$vmid link add name $vmvethif.$vid link $vmvethif type vlan id $vid
+ ip -netns vm-$vmid addr add 10.0.$vid.$vmid/24 dev $vmvethif.$vid
+ ip -netns vm-$vmid link set $vmvethif.$vid up
+
+ tid=$vid
+ vxlandev="vxlan$brid"
+ vxlandevflags=""
+
+ if [[ -n $vtype && $vtype == "metadata" ]]; then
+ vxlandevflags="$vxlandevflags external"
+ elif [[ -n $vtype && $vtype == "vnifilter" || $vtype == "vnifilterg" ]]; then
+ vxlandevflags="$vxlandevflags external vnifilter"
+ tid=$((vid+brid))
+ else
+ vxlandevflags="$vxlandevflags id $tid"
+ vxlandev="vxlan$tid"
+ fi
+
+ if [[ -n $vtype && $vtype != "vnifilterg" ]]; then
+ if [[ -n "$group" && "$group" != "null" ]]; then
+ if [ $mcast -eq 1 ]; then
+ vxlandevflags="$vxlandevflags group $group"
+ else
+ vxlandevflags="$vxlandevflags remote $group"
+ fi
+ fi
+ fi
+
+ if [[ -n "$port" && "$port" != "default" ]]; then
+ vxlandevflags="$vxlandevflags dstport $port"
+ fi
+
+ # create vxlan device
+ if [ "$vxlandev" != "$lastvxlandev" ]; then
+ ip -netns hv-$hvid link add $vxlandev type vxlan local $localip $vxlandevflags dev veth0 2>/dev/null
+ ip -netns hv-$hvid link set $vxlandev master br$brid
+ ip -netns hv-$hvid link set $vxlandev up
+ lastvxlandev=$vxlandev
+ fi
+
+ # add vlan
+ bridge -netns hv-$hvid vlan add vid $vid dev $hvvethif
+ bridge -netns hv-$hvid vlan add vid $vid pvid dev $vxlandev
+
+ # Add bridge vni filter for tx
+ if [[ -n $vtype && $vtype == "metadata" || $vtype == "vnifilter" || $vtype == "vnifilterg" ]]; then
+ bridge -netns hv-$hvid link set dev $vxlandev vlan_tunnel on
+ bridge -netns hv-$hvid vlan add dev $vxlandev vid $vid tunnel_info id $tid
+ fi
+
+ if [[ -n $vtype && $vtype == "metadata" ]]; then
+ bridge -netns hv-$hvid fdb add 00:00:00:00:00:00 dev $vxlandev \
+ src_vni $tid vni $tid dst $group self
+ elif [[ -n $vtype && $vtype == "vnifilter" ]]; then
+ # Add per vni rx filter with 'bridge vni' api
+ bridge -netns hv-$hvid vni add dev $vxlandev vni $tid
+ elif [[ -n $vtype && $vtype == "vnifilterg" ]]; then
+ # Add per vni group config with 'bridge vni' api
+ if [ -n "$group" ]; then
+ if [ "$family" == "v4" ]; then
+ if [ $mcast -eq 1 ]; then
+ bridge -netns hv-$hvid vni add dev $vxlandev vni $tid group $group
+ else
+ bridge -netns hv-$hvid vni add dev $vxlandev vni $tid remote $group
+ fi
+ else
+ if [ $mcast -eq 1 ]; then
+ bridge -netns hv-$hvid vni add dev $vxlandev vni $tid group6 $group
+ else
+ bridge -netns hv-$hvid vni add dev $vxlandev vni $tid remote6 $group
+ fi
+ fi
+ fi
+ fi
+ done
+}
+
+setup_vnifilter_api()
+{
+ ip link add veth-host type veth peer name veth-testns
+ ip netns add testns
+ ip link set veth-testns netns testns
+}
+
+cleanup_vnifilter_api()
+{
+ ip link del veth-host 2>/dev/null || true
+ ip netns del testns 2>/dev/null || true
+}
+
+# tests vxlan filtering api
+vxlan_vnifilter_api()
+{
+ hv1addr1="172.16.0.1"
+ hv2addr1="172.16.0.2"
+ hv1addr2="2002:fee1::1"
+ hv2addr2="2002:fee1::2"
+ localip="172.16.0.1"
+ group="239.1.1.101"
+
+ cleanup_vnifilter_api &>/dev/null
+ setup_vnifilter_api
+
+ # Duplicate vni test
+ # create non-vnifiltering traditional vni device
+ run_cmd "ip -netns testns link add vxlan100 type vxlan id 100 local $localip dev veth-testns dstport 4789"
+ log_test $? 0 "Create traditional vxlan device"
+
+ # create vni filtering device
+ run_cmd "ip -netns testns link add vxlan-ext1 type vxlan vnifilter local $localip dev veth-testns dstport 4789"
+ log_test $? 1 "Cannot create vnifilter device without external flag"
+
+ run_cmd "ip -netns testns link add vxlan-ext1 type vxlan external vnifilter local $localip dev veth-testns dstport 4789"
+ log_test $? 0 "Creating external vxlan device with vnifilter flag"
+
+ run_cmd "bridge -netns testns vni add dev vxlan-ext1 vni 100"
+ log_test $? 0 "Cannot set in-use vni id on vnifiltering device"
+
+ run_cmd "bridge -netns testns vni add dev vxlan-ext1 vni 200"
+ log_test $? 0 "Set new vni id on vnifiltering device"
+
+ run_cmd "ip -netns testns link add vxlan-ext2 type vxlan external vnifilter local $localip dev veth-testns dstport 4789"
+ log_test $? 0 "Create second external vxlan device with vnifilter flag"
+
+ run_cmd "bridge -netns testns vni add dev vxlan-ext2 vni 200"
+ log_test $? 255 "Cannot set in-use vni id on vnifiltering device"
+
+ run_cmd "bridge -netns testns vni add dev vxlan-ext2 vni 300"
+ log_test $? 0 "Set new vni id on vnifiltering device"
+
+ # check in bridge vni show
+ run_cmd "bridge -netns testns vni add dev vxlan-ext2 vni 300"
+ log_test $? 0 "Update vni id on vnifiltering device"
+
+ run_cmd "bridge -netns testns vni add dev vxlan-ext2 vni 400"
+ log_test $? 0 "Add new vni id on vnifiltering device"
+
+ # add multicast group per vni
+ run_cmd "bridge -netns testns vni add dev vxlan-ext1 vni 200 group $group"
+ log_test $? 0 "Set multicast group on existing vni"
+
+ # add multicast group per vni
+ run_cmd "bridge -netns testns vni add dev vxlan-ext2 vni 300 group $group"
+ log_test $? 0 "Set multicast group on existing vni"
+
+ # set vnifilter on an existing external vxlan device
+ run_cmd "ip -netns testns link set dev vxlan-ext1 type vxlan external vnifilter"
+ log_test $? 2 "Cannot set vnifilter flag on a device"
+
+ # change vxlan vnifilter flag
+ run_cmd "ip -netns testns link set dev vxlan-ext1 type vxlan external novnifilter"
+ log_test $? 2 "Cannot unset vnifilter flag on a device"
+}
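
Once the adds above succeed, the per-device vni table can be dumped for
inspection. A brief sketch, assuming the same testns setup (the script's
skip checks already guarantee an iproute2 with the bridge vni object):

    ip netns exec testns bridge vni show
    # vxlan-ext1 would be expected to list vni 200 (plus its group),
    # vxlan-ext2 vni 300 and 400.
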
+
+# Sanity test vnifilter datapath
+# vnifilter vnis inherit BUM group from
+# vxlan device
+vxlan_vnifilter_datapath()
+{
+ hv1addr1="172.16.0.1"
+ hv2addr1="172.16.0.2"
+ hv1addr2="2002:fee1::1"
+ hv2addr2="2002:fee1::2"
+
+ ip link add veth-hv-1 type veth peer name veth-hv-2
+ setup-hv-networking 1 $hv1addr1 24 $hv1addr2 64 $hv2addr1 $hv2addr2
+ setup-hv-networking 2 $hv2addr1 24 $hv2addr2 64 $hv1addr1 $hv1addr2
+
+	check_hv_connectivity $hv2addr1 $hv2addr2
+
+ setup-vm 1 11 1 10-v4-$hv1addr1-$hv2addr1-vnifilter,20-v4-$hv1addr1-$hv2addr1-vnifilter 0
+ setup-vm 1 21 2 10-v6-$hv1addr2-$hv2addr2-vnifilter,20-v6-$hv1addr2-$hv2addr2-vnifilter 0
+
+ setup-vm 2 12 1 10-v4-$hv2addr1-$hv1addr1-vnifilter,20-v4-$hv2addr1-$hv1addr1-vnifilter 0
+ setup-vm 2 22 2 10-v6-$hv2addr2-$hv1addr2-vnifilter,20-v6-$hv2addr2-$hv1addr2-vnifilter 0
+
+ check_vm_connectivity "vnifiltering vxlan"
+}
+
+# Sanity test vnifilter datapath
+# with vnifilter per vni configured BUM
+# group/remote
+vxlan_vnifilter_datapath_pervni()
+{
+ hv1addr1="172.16.0.1"
+ hv2addr1="172.16.0.2"
+ hv1addr2="2002:fee1::1"
+ hv2addr2="2002:fee1::2"
+
+ ip link add veth-hv-1 type veth peer name veth-hv-2
+ setup-hv-networking 1 $hv1addr1 24 $hv1addr2 64
+ setup-hv-networking 2 $hv2addr1 24 $hv2addr2 64
+
+	check_hv_connectivity $hv2addr1 $hv2addr2
+
+ setup-vm 1 11 1 10-v4-$hv1addr1-$hv2addr1-vnifilterg,20-v4-$hv1addr1-$hv2addr1-vnifilterg 0
+ setup-vm 1 21 2 10-v6-$hv1addr2-$hv2addr2-vnifilterg,20-v6-$hv1addr2-$hv2addr2-vnifilterg 0
+
+ setup-vm 2 12 1 10-v4-$hv2addr1-$hv1addr1-vnifilterg,20-v4-$hv2addr1-$hv1addr1-vnifilterg 0
+ setup-vm 2 22 2 10-v6-$hv2addr2-$hv1addr2-vnifilterg,20-v6-$hv2addr2-$hv1addr2-vnifilterg 0
+
+ check_vm_connectivity "vnifiltering vxlan pervni remote"
+}
+
+
+vxlan_vnifilter_datapath_mgroup()
+{
+ hv1addr1="172.16.0.1"
+ hv2addr1="172.16.0.2"
+ hv1addr2="2002:fee1::1"
+ hv2addr2="2002:fee1::2"
+ group="239.1.1.100"
+ group6="ff07::1"
+
+ ip link add veth-hv-1 type veth peer name veth-hv-2
+ setup-hv-networking 1 $hv1addr1 24 $hv1addr2 64
+ setup-hv-networking 2 $hv2addr1 24 $hv2addr2 64
+
+	check_hv_connectivity $hv2addr1 $hv2addr2
+
+ setup-vm 1 11 1 10-v4-$hv1addr1-$group-vnifilter,20-v4-$hv1addr1-$group-vnifilter 1
+ setup-vm 1 21 2 "10-v6-$hv1addr2-$group6-vnifilter,20-v6-$hv1addr2-$group6-vnifilter" 1
+
+ setup-vm 2 12 1 10-v4-$hv2addr1-$group-vnifilter,20-v4-$hv2addr1-$group-vnifilter 1
+ setup-vm 2 22 2 10-v6-$hv2addr2-$group6-vnifilter,20-v6-$hv2addr2-$group6-vnifilter 1
+
+ check_vm_connectivity "vnifiltering vxlan mgroup"
+}
+
+vxlan_vnifilter_datapath_mgroup_pervni()
+{
+ hv1addr1="172.16.0.1"
+ hv2addr1="172.16.0.2"
+ hv1addr2="2002:fee1::1"
+ hv2addr2="2002:fee1::2"
+ group="239.1.1.100"
+ group6="ff07::1"
+
+ ip link add veth-hv-1 type veth peer name veth-hv-2
+ setup-hv-networking 1 $hv1addr1 24 $hv1addr2 64
+ setup-hv-networking 2 $hv2addr1 24 $hv2addr2 64
+
+	check_hv_connectivity $hv2addr1 $hv2addr2
+
+ setup-vm 1 11 1 10-v4-$hv1addr1-$group-vnifilterg,20-v4-$hv1addr1-$group-vnifilterg 1
+ setup-vm 1 21 2 10-v6-$hv1addr2-$group6-vnifilterg,20-v6-$hv1addr2-$group6-vnifilterg 1
+
+ setup-vm 2 12 1 10-v4-$hv2addr1-$group-vnifilterg,20-v4-$hv2addr1-$group-vnifilterg 1
+ setup-vm 2 22 2 10-v6-$hv2addr2-$group6-vnifilterg,20-v6-$hv2addr2-$group6-vnifilterg 1
+
+ check_vm_connectivity "vnifiltering vxlan pervni mgroup"
+}
+
+vxlan_vnifilter_metadata_and_traditional_mix()
+{
+ hv1addr1="172.16.0.1"
+ hv2addr1="172.16.0.2"
+ hv1addr2="2002:fee1::1"
+ hv2addr2="2002:fee1::2"
+
+ ip link add veth-hv-1 type veth peer name veth-hv-2
+ setup-hv-networking 1 $hv1addr1 24 $hv1addr2 64
+ setup-hv-networking 2 $hv2addr1 24 $hv2addr2 64
+
+	check_hv_connectivity $hv2addr1 $hv2addr2
+
+ setup-vm 1 11 1 10-v4-$hv1addr1-$hv2addr1-vnifilter,20-v4-$hv1addr1-$hv2addr1-vnifilter 0
+ setup-vm 1 21 2 10-v6-$hv1addr2-$hv2addr2-vnifilter,20-v6-$hv1addr2-$hv2addr2-vnifilter 0
+ setup-vm 1 31 3 30-v4-$hv1addr1-$hv2addr1-default-4790,40-v6-$hv1addr2-$hv2addr2-default-4790,50-v4-$hv1addr1-$hv2addr1-metadata-4791 0
+
+
+ setup-vm 2 12 1 10-v4-$hv2addr1-$hv1addr1-vnifilter,20-v4-$hv2addr1-$hv1addr1-vnifilter 0
+ setup-vm 2 22 2 10-v6-$hv2addr2-$hv1addr2-vnifilter,20-v6-$hv2addr2-$hv1addr2-vnifilter 0
+ setup-vm 2 32 3 30-v4-$hv2addr1-$hv1addr1-default-4790,40-v6-$hv2addr2-$hv1addr2-default-4790,50-v4-$hv2addr1-$hv1addr1-metadata-4791 0
+
+ check_vm_connectivity "vnifiltering vxlan pervni remote mix"
+
+ # check VM connectivity over traditional/non-vxlan filtering vxlan devices
+ run_cmd "ip netns exec vm-31 ping -c 1 -W 1 10.0.30.32"
+ log_test $? 0 "VM connectivity over traditional vxlan (ipv4 default rdst)"
+
+ run_cmd "ip netns exec vm-31 ping -c 1 -W 1 10.0.40.32"
+ log_test $? 0 "VM connectivity over traditional vxlan (ipv6 default rdst)"
+
+ run_cmd "ip netns exec vm-31 ping -c 1 -W 1 10.0.50.32"
+ log_test $? 0 "VM connectivity over metadata nonfiltering vxlan (ipv4 default rdst)"
+}
+
+usage() {
+	echo "usage: $0 [-t tests] [-p] [-P] [-v] [-h]"
+}
+
+while getopts :t:pPhv o
+do
+	case $o in
+	t) TESTS=$OPTARG;;
+	p) PAUSE_ON_FAIL=yes;;
+	P) PAUSE=yes;;
+	v) VERBOSE=$(($VERBOSE + 1));;
+	h) usage; exit 0;;
+	*) usage; exit 1;;
+	esac
+done
+
+# make sure we don't pause twice
+[ "${PAUSE}" = "yes" ] && PAUSE_ON_FAIL=no
+
+if [ "$(id -u)" -ne 0 ];then
+ echo "SKIP: Need root privileges"
+ exit $ksft_skip;
+fi
+
+if [ ! -x "$(command -v ip)" ]; then
+ echo "SKIP: Could not run test without ip tool"
+ exit $ksft_skip
+fi
+
+ip link help vxlan 2>&1 | grep -q "vnifilter"
+if [ $? -ne 0 ]; then
+ echo "SKIP: iproute2 too old, missing vxlan dev vnifilter setting"
+ exit $ksft_skip
+fi
+
+bridge vni help 2>&1 | grep -q "Usage: bridge vni"
+if [ $? -ne 0 ]; then
+ echo "SKIP: iproute2 bridge lacks vxlan vnifiltering support"
+ exit $ksft_skip
+fi
+
+# start clean
+cleanup &> /dev/null
+
+for t in $TESTS
+do
+ case $t in
+ none) setup; exit 0;;
+ *) $t; cleanup;;
+ esac
+done
+
+if [ "$TESTS" != "none" ]; then
+ printf "\nTests passed: %3d\n" ${nsuccess}
+ printf "Tests failed: %3d\n" ${nfail}
+fi
+
+exit $ret
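
Per the getopts handling above, individual tests can be selected with -t and
run verbosely, for instance:

    # Run only the api test with command echoing, pausing on failure.
    # (Needs root; otherwise the script exits with the kselftest SKIP code.)
    ./test_vxlan_vnifiltering.sh -v -p -t vxlan_vnifilter_api
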
diff --git a/tools/testing/selftests/net/timestamping.c b/tools/testing/selftests/net/timestamping.c
index aee631c5284e..044bc0e9ed81 100644
--- a/tools/testing/selftests/net/timestamping.c
+++ b/tools/testing/selftests/net/timestamping.c
@@ -325,8 +325,8 @@ int main(int argc, char **argv)
struct ifreq device;
struct ifreq hwtstamp;
struct hwtstamp_config hwconfig, hwconfig_requested;
- struct so_timestamping so_timestamping_get = { 0, -1 };
- struct so_timestamping so_timestamping = { 0, -1 };
+ struct so_timestamping so_timestamping_get = { 0, 0 };
+ struct so_timestamping so_timestamping = { 0, 0 };
struct sockaddr_in addr;
struct ip_mreq imr;
struct in_addr iaddr;
diff --git a/tools/testing/selftests/net/toeplitz.c b/tools/testing/selftests/net/toeplitz.c
index c5489341cfb8..90026a27eac0 100644
--- a/tools/testing/selftests/net/toeplitz.c
+++ b/tools/testing/selftests/net/toeplitz.c
@@ -52,6 +52,8 @@
#include <sys/types.h>
#include <unistd.h>
+#include "../kselftest.h"
+
#define TOEPLITZ_KEY_MIN_LEN 40
#define TOEPLITZ_KEY_MAX_LEN 60
@@ -295,7 +297,7 @@ static void __set_filter(int fd, int off_proto, uint8_t proto, int off_dport)
struct sock_fprog prog = {};
prog.filter = filter;
- prog.len = sizeof(filter) / sizeof(struct sock_filter);
+ prog.len = ARRAY_SIZE(filter);
if (setsockopt(fd, SOL_SOCKET, SO_ATTACH_FILTER, &prog, sizeof(prog)))
error(1, errno, "setsockopt filter");
}
@@ -324,7 +326,7 @@ static void set_filter_null(int fd)
struct sock_fprog prog = {};
prog.filter = filter;
- prog.len = sizeof(filter) / sizeof(struct sock_filter);
+ prog.len = ARRAY_SIZE(filter);
if (setsockopt(fd, SOL_SOCKET, SO_ATTACH_FILTER, &prog, sizeof(prog)))
error(1, errno, "setsockopt filter");
}
diff --git a/tools/testing/selftests/net/txtimestamp.c b/tools/testing/selftests/net/txtimestamp.c
index fabb1d555ee5..10f2fde3686b 100644
--- a/tools/testing/selftests/net/txtimestamp.c
+++ b/tools/testing/selftests/net/txtimestamp.c
@@ -161,7 +161,7 @@ static void validate_timestamp(struct timespec *cur, int min_delay)
max_delay = min_delay + cfg_delay_tolerance_usec;
if (cur64 < start64 + min_delay || cur64 > start64 + max_delay) {
- fprintf(stderr, "ERROR: %lu us expected between %d and %d\n",
+ fprintf(stderr, "ERROR: %" PRId64 " us expected between %d and %d\n",
cur64 - start64, min_delay, max_delay);
test_failed = true;
}
@@ -170,9 +170,9 @@ static void validate_timestamp(struct timespec *cur, int min_delay)
static void __print_ts_delta_formatted(int64_t ts_delta)
{
if (cfg_print_nsec)
- fprintf(stderr, "%lu ns", ts_delta);
+ fprintf(stderr, "%" PRId64 " ns", ts_delta);
else
- fprintf(stderr, "%lu us", ts_delta / NSEC_PER_USEC);
+ fprintf(stderr, "%" PRId64 " us", ts_delta / NSEC_PER_USEC);
}
static void __print_timestamp(const char *name, struct timespec *cur,
diff --git a/tools/testing/selftests/ptp/testptp.c b/tools/testing/selftests/ptp/testptp.c
index c0f6a062364d..198ad5f32187 100644
--- a/tools/testing/selftests/ptp/testptp.c
+++ b/tools/testing/selftests/ptp/testptp.c
@@ -133,6 +133,7 @@ static void usage(char *progname)
" 0 - none\n"
" 1 - external time stamp\n"
" 2 - periodic output\n"
+ " -n val shift the ptp clock time by 'val' nanoseconds\n"
" -p val enable output with a period of 'val' nanoseconds\n"
" -H val set output phase to 'val' nanoseconds (requires -p)\n"
" -w val set output pulse width to 'val' nanoseconds (requires -p)\n"
@@ -165,6 +166,7 @@ int main(int argc, char *argv[])
clockid_t clkid;
int adjfreq = 0x7fffffff;
int adjtime = 0;
+ int adjns = 0;
int capabilities = 0;
int extts = 0;
int flagtest = 0;
@@ -186,7 +188,7 @@ int main(int argc, char *argv[])
progname = strrchr(argv[0], '/');
progname = progname ? 1+progname : argv[0];
- while (EOF != (c = getopt(argc, argv, "cd:e:f:ghH:i:k:lL:p:P:sSt:T:w:z"))) {
+ while (EOF != (c = getopt(argc, argv, "cd:e:f:ghH:i:k:lL:n:p:P:sSt:T:w:z"))) {
switch (c) {
case 'c':
capabilities = 1;
@@ -223,6 +225,9 @@ int main(int argc, char *argv[])
return -1;
}
break;
+ case 'n':
+ adjns = atoi(optarg);
+ break;
case 'p':
perout = atoll(optarg);
break;
@@ -305,11 +310,16 @@ int main(int argc, char *argv[])
}
}
- if (adjtime) {
+ if (adjtime || adjns) {
memset(&tx, 0, sizeof(tx));
- tx.modes = ADJ_SETOFFSET;
+ tx.modes = ADJ_SETOFFSET | ADJ_NANO;
tx.time.tv_sec = adjtime;
- tx.time.tv_usec = 0;
+ tx.time.tv_usec = adjns;
+ while (tx.time.tv_usec < 0) {
+ tx.time.tv_sec -= 1;
+ tx.time.tv_usec += 1000000000;
+ }
+
if (clock_adjtime(clkid, &tx) < 0) {
perror("clock_adjtime");
} else {
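
With the new option, sub-second clock steps become possible. Because
ADJ_NANO is set, tv_usec carries nanoseconds here, and the loop above
renormalizes negative values into the [0, 1e9) range expected by
clock_adjtime(). A hypothetical invocation (device path assumed):

    # Step the PTP clock back half a second; a lone -n exercises the
    # normalization loop (tv_sec 0, tv_usec -500000000 -> tv_sec -1, +500000000).
    ./testptp -d /dev/ptp0 -n -500000000
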
diff --git a/tools/testing/selftests/tc-testing/tdc_config.py b/tools/testing/selftests/tc-testing/tdc_config.py
index ea04f04c173e..ccb0f06ef9e3 100644
--- a/tools/testing/selftests/tc-testing/tdc_config.py
+++ b/tools/testing/selftests/tc-testing/tdc_config.py
@@ -21,7 +21,7 @@ NAMES = {
'BATCH_FILE': './batch.txt',
'BATCH_DIR': 'tmp',
# Length of time in seconds to wait before terminating a command
- 'TIMEOUT': 12,
+ 'TIMEOUT': 24,
# Name of the namespace to use
'NS': 'tcut',
# Directory containing eBPF test programs
diff --git a/tools/testing/vsock/vsock_test.c b/tools/testing/vsock/vsock_test.c
index 2a3638c0a008..dc577461afc2 100644
--- a/tools/testing/vsock/vsock_test.c
+++ b/tools/testing/vsock/vsock_test.c
@@ -16,6 +16,8 @@
#include <linux/kernel.h>
#include <sys/types.h>
#include <sys/socket.h>
+#include <time.h>
+#include <sys/mman.h>
#include "timeout.h"
#include "control.h"
@@ -391,6 +393,209 @@ static void test_seqpacket_msg_trunc_server(const struct test_opts *opts)
close(fd);
}
+static time_t current_nsec(void)
+{
+ struct timespec ts;
+
+ if (clock_gettime(CLOCK_REALTIME, &ts)) {
+ perror("clock_gettime(3) failed");
+ exit(EXIT_FAILURE);
+ }
+
+ return (ts.tv_sec * 1000000000ULL) + ts.tv_nsec;
+}
+
+#define RCVTIMEO_TIMEOUT_SEC 1
+#define READ_OVERHEAD_NSEC 250000000 /* 0.25 sec */
+
+static void test_seqpacket_timeout_client(const struct test_opts *opts)
+{
+ int fd;
+ struct timeval tv;
+ char dummy;
+ time_t read_enter_ns;
+ time_t read_overhead_ns;
+
+ fd = vsock_seqpacket_connect(opts->peer_cid, 1234);
+ if (fd < 0) {
+ perror("connect");
+ exit(EXIT_FAILURE);
+ }
+
+ tv.tv_sec = RCVTIMEO_TIMEOUT_SEC;
+ tv.tv_usec = 0;
+
+ if (setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, (void *)&tv, sizeof(tv)) == -1) {
+ perror("setsockopt 'SO_RCVTIMEO'");
+ exit(EXIT_FAILURE);
+ }
+
+ read_enter_ns = current_nsec();
+
+ if (read(fd, &dummy, sizeof(dummy)) != -1) {
+ fprintf(stderr,
+ "expected 'dummy' read(2) failure\n");
+ exit(EXIT_FAILURE);
+ }
+
+ if (errno != EAGAIN) {
+ perror("EAGAIN expected");
+ exit(EXIT_FAILURE);
+ }
+
+ read_overhead_ns = current_nsec() - read_enter_ns -
+ 1000000000ULL * RCVTIMEO_TIMEOUT_SEC;
+
+ if (read_overhead_ns > READ_OVERHEAD_NSEC) {
+ fprintf(stderr,
+ "too much time in read(2), %lu > %i ns\n",
+ read_overhead_ns, READ_OVERHEAD_NSEC);
+ exit(EXIT_FAILURE);
+ }
+
+ control_writeln("WAITDONE");
+ close(fd);
+}
+
+static void test_seqpacket_timeout_server(const struct test_opts *opts)
+{
+ int fd;
+
+ fd = vsock_seqpacket_accept(VMADDR_CID_ANY, 1234, NULL);
+ if (fd < 0) {
+ perror("accept");
+ exit(EXIT_FAILURE);
+ }
+
+ control_expectln("WAITDONE");
+ close(fd);
+}
+
+#define BUF_PATTERN_1 'a'
+#define BUF_PATTERN_2 'b'
+
+static void test_seqpacket_invalid_rec_buffer_client(const struct test_opts *opts)
+{
+ int fd;
+ unsigned char *buf1;
+ unsigned char *buf2;
+ int buf_size = getpagesize() * 3;
+
+ fd = vsock_seqpacket_connect(opts->peer_cid, 1234);
+ if (fd < 0) {
+ perror("connect");
+ exit(EXIT_FAILURE);
+ }
+
+ buf1 = malloc(buf_size);
+ if (!buf1) {
+ perror("'malloc()' for 'buf1'");
+ exit(EXIT_FAILURE);
+ }
+
+ buf2 = malloc(buf_size);
+ if (!buf2) {
+ perror("'malloc()' for 'buf2'");
+ exit(EXIT_FAILURE);
+ }
+
+ memset(buf1, BUF_PATTERN_1, buf_size);
+ memset(buf2, BUF_PATTERN_2, buf_size);
+
+ if (send(fd, buf1, buf_size, 0) != buf_size) {
+ perror("send failed");
+ exit(EXIT_FAILURE);
+ }
+
+ if (send(fd, buf2, buf_size, 0) != buf_size) {
+ perror("send failed");
+ exit(EXIT_FAILURE);
+ }
+
+ close(fd);
+}
+
+static void test_seqpacket_invalid_rec_buffer_server(const struct test_opts *opts)
+{
+ int fd;
+ unsigned char *broken_buf;
+ unsigned char *valid_buf;
+ int page_size = getpagesize();
+ int buf_size = page_size * 3;
+ ssize_t res;
+ int prot = PROT_READ | PROT_WRITE;
+ int flags = MAP_PRIVATE | MAP_ANONYMOUS;
+ int i;
+
+ fd = vsock_seqpacket_accept(VMADDR_CID_ANY, 1234, NULL);
+ if (fd < 0) {
+ perror("accept");
+ exit(EXIT_FAILURE);
+ }
+
+ /* Setup first buffer. */
+ broken_buf = mmap(NULL, buf_size, prot, flags, -1, 0);
+ if (broken_buf == MAP_FAILED) {
+ perror("mmap for 'broken_buf'");
+ exit(EXIT_FAILURE);
+ }
+
+ /* Unmap "hole" in buffer. */
+ if (munmap(broken_buf + page_size, page_size)) {
+ perror("'broken_buf' setup");
+ exit(EXIT_FAILURE);
+ }
+
+ valid_buf = mmap(NULL, buf_size, prot, flags, -1, 0);
+ if (valid_buf == MAP_FAILED) {
+ perror("mmap for 'valid_buf'");
+ exit(EXIT_FAILURE);
+ }
+
+ /* Try to fill buffer with unmapped middle. */
+ res = read(fd, broken_buf, buf_size);
+ if (res != -1) {
+ fprintf(stderr,
+ "expected 'broken_buf' read(2) failure, got %zi\n",
+ res);
+ exit(EXIT_FAILURE);
+ }
+
+ if (errno != ENOMEM) {
+ perror("unexpected errno of 'broken_buf'");
+ exit(EXIT_FAILURE);
+ }
+
+ /* Try to fill valid buffer. */
+ res = read(fd, valid_buf, buf_size);
+ if (res < 0) {
+ perror("unexpected 'valid_buf' read(2) failure");
+ exit(EXIT_FAILURE);
+ }
+
+ if (res != buf_size) {
+ fprintf(stderr,
+ "invalid 'valid_buf' read(2), expected %i, got %zi\n",
+ buf_size, res);
+ exit(EXIT_FAILURE);
+ }
+
+ for (i = 0; i < buf_size; i++) {
+ if (valid_buf[i] != BUF_PATTERN_2) {
+ fprintf(stderr,
+ "invalid pattern for 'valid_buf' at %i, expected %hhX, got %hhX\n",
+ i, BUF_PATTERN_2, valid_buf[i]);
+ exit(EXIT_FAILURE);
+ }
+ }
+
+ /* Unmap buffers. */
+ munmap(broken_buf, page_size);
+ munmap(broken_buf + page_size * 2, page_size);
+ munmap(valid_buf, buf_size);
+ close(fd);
+}
+
static struct test_case test_cases[] = {
{
.name = "SOCK_STREAM connection reset",
@@ -431,6 +636,16 @@ static struct test_case test_cases[] = {
.run_client = test_seqpacket_msg_trunc_client,
.run_server = test_seqpacket_msg_trunc_server,
},
+ {
+ .name = "SOCK_SEQPACKET timeout",
+ .run_client = test_seqpacket_timeout_client,
+ .run_server = test_seqpacket_timeout_server,
+ },
+ {
+ .name = "SOCK_SEQPACKET invalid receive buffer",
+ .run_client = test_seqpacket_invalid_rec_buffer_client,
+ .run_server = test_seqpacket_invalid_rec_buffer_server,
+ },
{},
};