path: root/tools/perf/scripts/python/export-to-postgresql.py
2024-03-06  libbpf: Sync progs autoload with maps autocreate for struct_ops maps  (Eduard Zingerman; 1 file, -0/+43)

Automatically select which struct_ops programs to load depending on which struct_ops maps are selected for automatic creation. E.g. for the BPF code below:

    SEC("struct_ops/test_1") int BPF_PROG(foo) { ... }
    SEC("struct_ops/test_2") int BPF_PROG(bar) { ... }

    SEC(".struct_ops.link")
    struct test_ops___v1 A = { .foo = (void *)foo };

    SEC(".struct_ops.link")
    struct test_ops___v2 B = {
        .foo = (void *)foo,
        .bar = (void *)bar,
    };

And the following libbpf API calls:

    bpf_map__set_autocreate(skel->maps.A, true);
    bpf_map__set_autocreate(skel->maps.B, false);

The autoload would be enabled for program 'foo' and disabled for program 'bar'.

During load, for each struct_ops program P, referenced from some struct_ops map M:
- set P.autoload = true if M.autocreate is true for some M;
- set P.autoload = false if M.autocreate is false for all M;
- don't change P.autoload if P is not referenced from any map.

Do this after bpf_object__init_kern_struct_ops_maps() to make sure that shadow vars assignment is done.

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20240306104529.6453-9-eddyz87@gmail.com

2024-03-06  selftests/bpf: Test autocreate behavior for struct_ops maps  (Eduard Zingerman; 2 files, -0/+118)

Check that bpf_map__set_autocreate() can be used to disable automatic creation for struct_ops maps.

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20240306104529.6453-8-eddyz87@gmail.com

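A minimal user-space sketch of such a check, assuming a skeleton named struct_ops_autocreate with maps A and B as in the entry above; the skeleton and map names are illustrative, not necessarily the actual selftest:

    #include <bpf/libbpf.h>
    #include "struct_ops_autocreate.skel.h"   /* illustrative skeleton name */

    int check_autocreate(void)
    {
            struct struct_ops_autocreate *skel;
            int err;

            skel = struct_ops_autocreate__open();
            if (!skel)
                    return -1;
            /* disable creation of one struct_ops map before load */
            err = bpf_map__set_autocreate(skel->maps.B, false);
            if (!err)
                    err = struct_ops_autocreate__load(skel);
            /* on success, map B was skipped during load */
            struct_ops_autocreate__destroy(skel);
            return err;
    }
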
2024-03-06  selftests/bpf: Bad_struct_ops test  (Eduard Zingerman; 4 files, -0/+88)

When loading struct_ops programs, the kernel requires the BTF id of the struct_ops type and the member index for the attachment point inside that type. This makes it impossible to use the same BPF program in several struct_ops maps that have different struct_ops types. Check that libbpf rejects such BPF object files.

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20240306104529.6453-7-eddyz87@gmail.com

2024-03-06  selftests/bpf: Utility functions to capture libbpf log in test_progs  (Eduard Zingerman; 2 files, -0/+62)

Several test_progs tests already capture the libbpf log in order to check for some expected output, e.g. bpf_tcp_ca.c, kfunc_dynptr_param.c, log_buf.c and a few others. This commit provides a, hopefully, simple API to capture the libbpf log without the need to define a new print callback in each test:

    /* Creates a global memstream capturing INFO and WARN level output
     * passed to libbpf_print_fn.
     * Returns 0 on success, negative value on failure.
     * On failure the description is printed using PRINT_FAIL and
     * current test case is marked as fail.
     */
    int start_libbpf_log_capture(void)

    /* Destroys global memstream created by start_libbpf_log_capture().
     * Returns a pointer to captured data which has to be freed.
     * Returned buffer is null terminated.
     */
    char *stop_libbpf_log_capture(void)

The intended usage is as follows:

    if (start_libbpf_log_capture())
            return;
    use_libbpf();
    char *log = stop_libbpf_log_capture();
    ASSERT_HAS_SUBSTR(log, "... expected ...", "expected some message");
    free(log);

As a safety measure, free(stop_libbpf_log_capture()) is invoked in the epilogue of test_progs.c:run_one_test().

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20240306104529.6453-6-eddyz87@gmail.com

2024-03-06  selftests/bpf: Test struct_ops map definition with type suffix  (Eduard Zingerman; 3 files, -10/+46)

Extend the struct_ops_module test case to check if it is possible to use '___' suffixes for struct_ops type specification.

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: David Vernet <void@manifault.com>
Link: https://lore.kernel.org/bpf/20240306104529.6453-5-eddyz87@gmail.com

2024-03-06  libbpf: Honor autocreate flag for struct_ops maps  (Eduard Zingerman; 1 file, -3/+15)

Skip load steps for struct_ops maps not marked for automatic creation. This should allow loading a bpf object in situations like below:

    SEC("struct_ops/foo") int BPF_PROG(foo) { ... }
    SEC("struct_ops/bar") int BPF_PROG(bar) { ... }

    struct test_ops___v1 {
        int (*foo)(void);
    };

    struct test_ops___v2 {
        int (*foo)(void);
        int (*does_not_exist)(void);
    };

    SEC(".struct_ops.link")
    struct test_ops___v1 map_for_old = {
        .foo = (void *)foo
    };

    SEC(".struct_ops.link")
    struct test_ops___v2 map_for_new = {
        .foo = (void *)foo,
        .does_not_exist = (void *)bar
    };

Suppose the program is loaded on an old kernel that does not have a definition for the 'does_not_exist' struct_ops member. After this commit it would be possible to load such an object file after the following tweaks:

    bpf_program__set_autoload(skel->progs.bar, false);
    bpf_map__set_autocreate(skel->maps.map_for_new, false);

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: David Vernet <void@manifault.com>
Link: https://lore.kernel.org/bpf/20240306104529.6453-4-eddyz87@gmail.com

2024-03-06  libbpf: Tie struct_ops programs to kernel BTF ids, not to local ids  (Eduard Zingerman; 1 file, -23/+26)

Enforce the following existing limitation on struct_ops programs based on kernel BTF id instead of program-local BTF id:

    struct_ops BPF prog can be re-used between multiple .struct_ops &
    .struct_ops.link as long as it's the same struct_ops struct
    definition and the same function pointer field

This allows reusing the same BPF program for versioned struct_ops map definitions, e.g.:

    SEC("struct_ops/test") int BPF_PROG(foo) { ... }

    struct some_ops___v1 { int (*test)(void); };
    struct some_ops___v2 { int (*test)(void); };

    SEC(".struct_ops.link") struct some_ops___v1 a = { .test = foo };
    SEC(".struct_ops.link") struct some_ops___v2 b = { .test = foo };

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20240306104529.6453-3-eddyz87@gmail.com

2024-03-06  libbpf: Allow version suffixes (___smth) for struct_ops types  (Eduard Zingerman; 1 file, -1/+5)

E.g. allow the following struct_ops definitions:

    struct bpf_testmod_ops___v1 { int (*test)(void); };
    struct bpf_testmod_ops___v2 { int (*test)(void); };

    SEC(".struct_ops.link") struct bpf_testmod_ops___v1 a = { .test = ... };
    SEC(".struct_ops.link") struct bpf_testmod_ops___v2 b = { .test = ... };

Where both bpf_testmod_ops___v1 and bpf_testmod_ops___v2 would be resolved as 'struct bpf_testmod_ops' from kernel BTF.

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: David Vernet <void@manifault.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20240306104529.6453-2-eddyz87@gmail.com

2024-03-06  selftests/bpf: Test may_goto  (Alexei Starovoitov; 2 files, -3/+101)

Add tests for the may_goto instruction via the cond_break macro.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Tested-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20240306031929.42666-5-alexei.starovoitov@gmail.com

2024-03-06  bpf: Add cond_break macro  (Alexei Starovoitov; 1 file, -0/+12)

Use the may_goto instruction to implement the cond_break macro. Ideally the macro should be written as:

    asm volatile goto(".byte 0xe5;
                       .byte 0;
                       .short %l[l_break] ...
                       .long 0;

but LLVM doesn't recognize fixup of 2 byte PC relative yet. Hence use:

    asm volatile goto(".byte 0xe5;
                       .byte 0;
                       .long %l[l_break] ...
                       .short 0;

that produces correct asm on little endian.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Tested-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20240306031929.42666-4-alexei.starovoitov@gmail.com

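A minimal sketch of how a program can use the macro, following the cond_break/may_goto semantics described in the may_goto entry below; the section name and loop body are illustrative, and cond_break is assumed to come from the selftests' bpf_experimental.h:

    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>
    #include "bpf_experimental.h"   /* assumed home of cond_break */

    char _license[] SEC("license") = "GPL";

    /* global variable keeps 'i' imprecise for the verifier, see below */
    int zero = 0;

    SEC("raw_tp/sys_enter")
    int sum_to_bound(void *ctx)
    {
            long sum = 0;
            int i;

            /* cond_break is a nop until the runtime decides to break out */
            for (i = zero; i < 100000; cond_break, i++)
                    sum += i;
            return sum & 1;
    }
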
2024-03-06  bpf: Recognize that two registers are safe when their ranges match  (Alexei Starovoitov; 1 file, -21/+30)

When open-coded iterators, bpf_loop or may_goto are used, the following two states are equivalent and it is safe to prune the search:

    cur state: fp-8_w=scalar(id=3,smin=umin=smin32=umin32=2,smax=umax=smax32=umax32=11,var_off=(0x0; 0xf))
    old state: fp-8_rw=scalar(id=2,smin=umin=smin32=umin32=1,smax=umax=smax32=umax32=11,var_off=(0x0; 0xf))

In other words, "exact" state match should ignore liveness and precision marks, since the open-coded iterator logic didn't complete their propagation. reg_old->type == NOT_INIT && reg_cur->type != NOT_INIT is also not safe to prune while looping, but the range_within logic that applies to scalars, ptr_to_mem, map_value and pkt_ptr is safe to rely on.

Avoid doing such a comparison when the regular infinite loop detection logic is used, otherwise the bounded loop logic will declare such an "infinite loop" as a false positive. Such an example is in progs/verifier_loops1.c, not_an_inifinite_loop().

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Tested-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20240306031929.42666-3-alexei.starovoitov@gmail.com

2024-03-06  bpf: Introduce may_goto instruction  (Alexei Starovoitov; 6 files, -30/+150)

Introduce the may_goto instruction, which from the verifier pov is similar to the open-coded iterators bpf_for()/bpf_repeat() and the bpf_loop() helper, but it doesn't iterate any objects. In assembly 'may_goto' is a nop most of the time, until the bpf runtime has to terminate the program for whatever reason. In the current implementation may_goto has a hidden counter, but other mechanisms can be used.

For programs written in C, a later patch introduces the 'cond_break' macro that combines 'may_goto' with a 'break' statement and has similar semantics: cond_break is a nop until the bpf runtime has to break out of this loop. It can be used in any normal "for" or "while" loop, like:

    for (i = zero; i < cnt; cond_break, i++) {

The verifier recognizes that may_goto is used in the program, reserves an additional 8 bytes of stack, initializes them in the subprog prologue, and replaces the may_goto instruction with:

    aux_reg = *(u64 *)(fp - 40)
    if aux_reg == 0 goto pc+off
    aux_reg -= 1
    *(u64 *)(fp - 40) = aux_reg

The may_goto instruction can be used by LLVM to implement __builtin_memcpy, __builtin_strcmp.

may_goto is not a full substitute for the bpf_for() macro. bpf_for() doesn't have an induction variable that the verifier sees, so 'i' in bpf_for(i, 0, 100) is seen as imprecise and bounded. But when the code is written as:

    for (i = 0; i < 100; cond_break, i++)

the verifier sees 'i' as a precise constant zero, hence cond_break (aka may_goto) doesn't help the loop to converge. A static or global variable can be used as a workaround:

    static int zero = 0;

    for (i = zero; i < 100; cond_break, i++) // works!

may_goto works well with arena pointers that don't need to be bounds-checked on access. Load/store from arena returns an imprecise unbounded scalar, and loops with may_goto pass the verifier.

Reserve the new opcode BPF_JMP | BPF_JCOND for the may_goto insn. JCOND stands for conditional pseudo jump. Since a goto_or_nop insn was proposed, it may use the same opcode. may_goto vs goto_or_nop can be distinguished by src_reg:

    code = BPF_JMP | BPF_JCOND
    src_reg = 0 - may_goto
    src_reg = 1 - goto_or_nop

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Tested-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20240306031929.42666-2-alexei.starovoitov@gmail.com

2024-03-06  mm: Introduce VM_SPARSE kind and vm_area_[un]map_pages().  (Alexei Starovoitov; 2 files, -2/+62)

The vmap/vmalloc APIs are used to map a set of pages into contiguous kernel virtual space. get_vm_area() with the appropriate flag is used to request an area of kernel address range. It's used for vmalloc, vmap, ioremap, and xen use cases:

- the vmalloc use case dominates the usage; such vm areas have the VM_ALLOC flag;
- the areas created by the vmap() function should be tagged with VM_MAP;
- ioremap areas are tagged with VM_IOREMAP.

BPF would like to extend the vmap API to implement a lazily-populated sparse, yet contiguous kernel virtual space. Introduce a VM_SPARSE flag and a vm_area_map_pages(area, start_addr, end_addr, pages) API to map a set of pages within a given area. It has the same sanity checks as vmap() does. It also checks that get_vm_area() was created with the VM_SPARSE flag, which identifies such areas in /proc/vmallocinfo, and returns zero pages on read through /proc/kcore.

The next commits will introduce bpf_arena, which is a sparsely populated shared memory region between a bpf program and a user space process. It will map privately-managed pages into a sparse vm area with the following steps:

    // request virtual memory region during bpf prog verification
    area = get_vm_area(area_size, VM_SPARSE);

    // on demand
    vm_area_map_pages(area, kaddr, kend, pages);
    vm_area_unmap_pages(area, kaddr, kend);

    // after bpf program is detached and unloaded
    free_vm_area(area);

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Pasha Tatashin <pasha.tatashin@soleen.com>
Link: https://lore.kernel.org/bpf/20240305030516.41519-3-alexei.starovoitov@gmail.com

2024-03-06  mm: Enforce VM_IOREMAP flag and range in ioremap_page_range.  (Alexei Starovoitov; 1 file, -0/+13)

There are various users of the get_vm_area() + ioremap_page_range() APIs. Enforce that the area was requested from get_vm_area() with the VM_IOREMAP type, and that the range passed to ioremap_page_range() matches the created vm_area, to avoid accidentally ioremapping into the wrong address range.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/bpf/20240305030516.41519-2-alexei.starovoitov@gmail.com

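A sketch of the pairing this enforces, assuming a caller that maps its whole area; the helper name and the PAGE_KERNEL_IO protection are illustrative (not taken from the patch):

    #include <linux/vmalloc.h>
    #include <linux/io.h>

    static void __iomem *map_dev_regs(phys_addr_t phys, unsigned long size)
    {
            struct vm_struct *area;
            unsigned long addr;

            /* the area must be requested with the VM_IOREMAP flag... */
            area = get_vm_area(size, VM_IOREMAP);
            if (!area)
                    return NULL;
            addr = (unsigned long)area->addr;
            /* ...and the mapped range must stay inside that area */
            if (ioremap_page_range(addr, addr + size, phys, PAGE_KERNEL_IO)) {
                    free_vm_area(area);
                    return NULL;
            }
            return (void __iomem *)addr;
    }
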
2024-03-04  selftests/bpf: Test struct_ops maps with a large number of struct_ops programs  (Kui-Feng Lee; 3 files, -0/+176)

Create and load a struct_ops map with a large number of struct_ops programs to generate trampolines taking a size over multiple pages. The map includes 40 programs. Their trampolines take 6.6k+ bytes, more than 1.5 pages, on x86.

Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
Link: https://lore.kernel.org/r/20240224223418.526631-4-thinker.li@gmail.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>

2024-03-04  bpf: struct_ops supports more than one page for trampolines.  (Kui-Feng Lee; 3 files, -50/+96)

The BPF struct_ops previously only allowed one page of trampolines. Each function pointer of a struct_ops is implemented by a struct_ops bpf program, and each struct_ops bpf program requires a trampoline. The following selftest patch shows each page can hold a little more than 20 trampolines.

While one page is more than enough for the tcp-cc use case, the sched_ext use case shows that one page is not always enough and hits the one-page limit. This patch overcomes the one-page limit by allocating additional pages when needed, up to a total of MAX_IMAGE_PAGES (8) pages, which is more than enough for reasonable usage.

The variable st_map->image has been changed to st_map->image_pages, and its type has been changed to an array of pointers to pages.

Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
Link: https://lore.kernel.org/r/20240224223418.526631-3-thinker.li@gmail.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>

2024-03-04  bpf, net: validate struct_ops when updating value.  (Kui-Feng Lee; 2 files, -10/+7)

Perform all validations when updating values of struct_ops maps. Doing validation in st_ops->reg() and st_ops->update() is no longer necessary. However, tcp_register_congestion_control() is called from various places and still needs to perform validation.

Cc: netdev@vger.kernel.org
Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
Link: https://lore.kernel.org/r/20240224223418.526631-2-thinker.li@gmail.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>

2024-03-04  selftests/bpf: xdp_hw_metadata reduce sleep interval  (Song Yoong Siang; 1 file, -1/+1)

In the current ping-pong design, xdp_hw_metadata waits until packet transmission is completely done before starting to receive the next packet. The current sleep interval is 10ms, which is unnecessarily large: typically, a NIC does not need such a long time to transmit a packet. Furthermore, during this 10ms sleep time, the app is unable to receive incoming packets.

Therefore, this commit reduces the sleep interval to 10us, so that xdp_hw_metadata is able to support periodic packets with a shorter interval. 10us * 500 = 5ms should be enough for packet transmission and status retrieval.

Signed-off-by: Song Yoong Siang <yoong.siang.song@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Stanislav Fomichev <sdf@google.com>
Link: https://lore.kernel.org/bpf/20240303083225.1184165-2-yoong.siang.song@intel.com

2024-03-04  selftests/bpf: Extend uprobe/uretprobe triggering benchmarks  (Andrii Nakryiko; 4 files, -46/+103)

Settle on three "flavors" of uprobe/uretprobe, installed on different kinds of instruction: nop, push, and ret. All three exercise different internal code paths emulating or single-stepping instructions, so they are interesting to compare and benchmark separately.

To guarantee a `push rbp` instruction, we ensure that uprobe_target_push() is not a leaf function by calling a (global __weak) noop function and returning something afterwards (if we don't do that, the compiler will just do a tail call optimization). Also, we need to make sure that the compiler isn't skipping frame pointer generation, so let's add `-fno-omit-frame-pointer` to the Makefile.

Just to give an idea of where we currently stand in terms of relative performance of different uprobe/uretprobe cases vs a cheap syscall (getpgid()) baseline, here are results from my local machine:

    $ benchs/run_bench_uprobes.sh
    base           : 1.561 ± 0.020M/s
    uprobe-nop     : 0.947 ± 0.007M/s
    uprobe-push    : 0.951 ± 0.004M/s
    uprobe-ret     : 0.443 ± 0.007M/s
    uretprobe-nop  : 0.471 ± 0.013M/s
    uretprobe-push : 0.483 ± 0.004M/s
    uretprobe-ret  : 0.306 ± 0.007M/s

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20240301214551.1686095-1-andrii@kernel.org

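A sketch of the non-leaf trick described above; the function names follow the description rather than the actual benchmark source:

    /* global weak noop the target calls so it cannot be a leaf */
    __attribute__((weak)) void uprobe_target_noop(void) { }

    /* compile with -fno-omit-frame-pointer so the prologue keeps
     * the `push rbp` this benchmark wants to probe
     */
    __attribute__((noinline)) int uprobe_target_push(void)
    {
            uprobe_target_noop();
            return 42;   /* returning after the call prevents a tail call */
    }
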
2024-03-04  libbpf: Correct debug message in btf__load_vmlinux_btf  (Chen Shen; 1 file, -1/+1)

In the function btf__load_vmlinux_btf, the debug message incorrectly refers to 'path' instead of 'sysfs_btf_path'.

Signed-off-by: Chen Shen <peterchenshen@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/bpf/20240302062218.3587-1-peterchenshen@gmail.com

2024-03-04  bpf, docs: Rename legacy conformance group to packet  (Dave Thaler; 1 file, -2/+2)

There could be other legacy conformance groups in the future, so use a more descriptive name. The status of the conformance group in the IANA registry is what designates it as legacy, not the name of the group.

Signed-off-by: Dave Thaler <dthaler1968@gmail.com>
Link: https://lore.kernel.org/r/20240302012229.16452-1-dthaler1968@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>

2024-03-02  bpf, docs: Use IETF format for field definitions in instruction-set.rst  (Dave Thaler; 1 file, -241/+290)

In preparation for publication as an IETF RFC, the WG chairs asked me to convert the document to use IETF packet format for field layout, so this patch attempts to make it consistent with other IETF documents.

Some fields that are not byte aligned were previously inconsistent in how values were defined. Some were defined as the value of the byte containing the field (like 0x20 for a field holding the high four bits of the byte), and others were defined as the value of the field itself (like 0x2). This patch makes them consistent in using just the values of the field itself, which is IETF convention.

As a result, some of the defines that used BPF_* would no longer match the value in the spec, and so this patch also drops the BPF_* prefix to avoid confusion with the defines that are the full-byte equivalent values. For consistency, BPF_* is then dropped from other fields too. BPF_<foo> is thus the Linux implementation-specific define for <foo> as it appears in the BPF ISA specification.

The syntax BPF_ADD | BPF_X | BPF_ALU only worked for full-byte values, so the convention {ADD, X, ALU} is proposed for referring to field values instead.

Also replace the redundant "LSB bits" with "least significant bits".

A preview of what the resulting Internet Draft would look like can be seen at:

    https://htmlpreview.github.io/?https://raw.githubusercontent.com/dthaler/ebpf-docs-1/format/draft-ietf-bpf-isa.html

v1->v2: Fix sphinx issue as recommended by David Vernet

Signed-off-by: Dave Thaler <dthaler1968@gmail.com>
Acked-by: David Vernet <void@manifault.com>
Link: https://lore.kernel.org/r/20240301222337.15931-1-dthaler1968@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>

2024-03-01  inet: use xa_array iterator to implement inet_dump_ifaddr()  (Eric Dumazet; 1 file, -58/+37)

1) inet_dump_ifaddr() can run under RCU protection instead of RTNL.

2) Properly return 0 at the end of a dump, avoiding an extra recvmsg() system call.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2024-03-01  inet: prepare inet_base_seq() to run without RTNL  (Eric Dumazet; 2 files, -3/+4)

In the following patch, inet_base_seq() will no longer be called with RTNL held. Add READ_ONCE()/WRITE_ONCE() annotations in dev_base_seq_inc() and inet_base_seq().

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2024-03-01  inet: annotate data-races around ifa->ifa_flags  (Eric Dumazet; 1 file, -9/+13)

ifa->ifa_flags can be read locklessly. Add appropriate READ_ONCE()/WRITE_ONCE() annotations.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

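The annotation pattern used by this and the following ifa_* commits, shown as a sketch rather than the exact hunks: writers (still serialized by a lock) publish with WRITE_ONCE(), and lockless readers pick the value up with READ_ONCE():

    /* writer side, still under the usual lock */
    WRITE_ONCE(ifa->ifa_flags, new_flags);

    /* lockless reader side, e.g. a dump running under RCU */
    unsigned int flags = READ_ONCE(ifa->ifa_flags);
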
2024-03-01  inet: annotate data-races around ifa->ifa_preferred_lft  (Eric Dumazet; 1 file, -8/+8)

ifa->ifa_preferred_lft can be read locklessly. Add appropriate READ_ONCE()/WRITE_ONCE() annotations.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2024-03-01  inet: annotate data-races around ifa->ifa_valid_lft  (Eric Dumazet; 1 file, -8/+8)

ifa->ifa_valid_lft can be read locklessly. Add appropriate READ_ONCE()/WRITE_ONCE() annotations.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2024-03-01  inet: annotate data-races around ifa->ifa_tstamp and ifa->ifa_cstamp  (Eric Dumazet; 1 file, -10/+13)

ifa->ifa_tstamp can be read locklessly. Add appropriate READ_ONCE()/WRITE_ONCE() annotations. Do the same for ifa->ifa_cstamp to prepare for upcoming changes.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2024-03-01  netdevsim: fix rtnetlink.sh selftest  (David Wei; 1 file, -0/+2)

I cleared the IFF_NOARP flag from netdevsim dev->flags in order to support skb forwarding. This breaks the kci_test_ipsec_offload() test in the rtnetlink.sh selftest, because ipsec does not connect to peers it cannot transmit to. Fix the issue by adding a neigh entry manually. The ipsec_offload test now passes.

Signed-off-by: David Wei <dw@davidwei.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>

2024-03-01  netdevsim: add selftest for forwarding skb between connected ports  (David Wei; 2 files, -0/+144)

Connect two netdevsim ports in different namespaces together, then send packets between them using socat.

Signed-off-by: David Wei <dw@davidwei.uk>
Reviewed-by: Maciek Machnikowski <maciek@machnikowski.net>
Signed-off-by: David S. Miller <davem@davemloft.net>

2024-03-01  netdevsim: add ndo_get_iflink() implementation  (David Wei; 1 file, -0/+16)

Add an implementation for ndo_get_iflink() in netdevsim that shows the ifindex of the linked peer, if any.

Signed-off-by: David Wei <dw@davidwei.uk>
Reviewed-by: Maciek Machnikowski <maciek@machnikowski.net>
Signed-off-by: David S. Miller <davem@davemloft.net>

2024-03-01  netdevsim: forward skbs from one connected port to another  (David Wei; 2 files, -5/+23)

Forward skbs sent from one netdevsim port to its connected netdevsim port using dev_forward_skb(), in a spirit similar to veth. Add a tx_dropped variable to struct netdevsim, tracking the number of skbs that could not be forwarded using dev_forward_skb().

The xmit() function accessing the peer ptr is protected by an RCU read critical section. The rcu_read_lock() is functionally redundant since, as of v5.0, all softirqs are implicitly RCU read critical sections; but it is useful for human readers.

If another CPU is concurrently in nsim_destroy(), then it will first set the peer ptr to NULL. This does not affect any existing readers that dereferenced a non-NULL peer. Then, in unregister_netdevice(), there is a synchronize_rcu() before the netdev is actually unregistered and freed. This ensures that any readers, i.e. xmit(), that got a non-NULL peer will complete before the netdev is freed. Any readers after the RCU_INIT_POINTER() but before synchronize_rcu() will dereference NULL, making it safe.

The codepath to nsim_destroy() and nsim_create() takes both the newly added nsim_dev_list_lock and rtnl_lock. This makes it safe against concurrent calls that link two netdevsims together.

Signed-off-by: David Wei <dw@davidwei.uk>
Reviewed-by: Maciek Machnikowski <maciek@machnikowski.net>
Signed-off-by: David S. Miller <davem@davemloft.net>

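A sketch of the xmit side of this pattern, assuming struct netdevsim has the netdev, peer (RCU-protected), and tx_dropped fields described above; names follow the description, not necessarily the actual file:

    #include <linux/netdevice.h>

    static netdev_tx_t nsim_start_xmit(struct sk_buff *skb, struct net_device *dev)
    {
            struct netdevsim *ns = netdev_priv(dev);
            struct netdevsim *peer;

            /* redundant in softirq context, but documents the RCU section */
            rcu_read_lock();
            peer = rcu_dereference(ns->peer);
            if (!peer) {
                    /* nsim_destroy() already unlinked us */
                    dev_kfree_skb(skb);
                    ns->tx_dropped++;
            } else if (dev_forward_skb(peer->netdev, skb) != NET_RX_SUCCESS) {
                    /* dev_forward_skb() frees the skb on failure */
                    ns->tx_dropped++;
            }
            rcu_read_unlock();
            return NETDEV_TX_OK;
    }
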
2024-03-01  netdevsim: allow two netdevsim ports to be connected  (David Wei; 3 files, -0/+157)

Add two netdevsim bus attributes to sysfs:

    /sys/bus/netdevsim/link_device
    /sys/bus/netdevsim/unlink_device

Writing "A M B N" to link_device will link netdevsim M in netnsid A with netdevsim N in netnsid B. Writing "A M" to unlink_device will unlink netdevsim M in netnsid A from its peer, if any. rtnl_lock is taken to ensure nothing changes during the linking.

Signed-off-by: David Wei <dw@davidwei.uk>
Reviewed-by: Maciek Machnikowski <maciek@machnikowski.net>
Signed-off-by: David S. Miller <davem@davemloft.net>

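A sketch of driving the new attribute from user space; the netnsids and device ids are made up for illustration:

    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    /* link netdevsim 10 in netnsid 1 with netdevsim 11 in netnsid 2 */
    static int link_nsims(void)
    {
            const char *cmd = "1 10 2 11";   /* "A M B N" */
            int fd, err;

            fd = open("/sys/bus/netdevsim/link_device", O_WRONLY);
            if (fd < 0)
                    return -1;
            err = write(fd, cmd, strlen(cmd)) < 0 ? -1 : 0;
            close(fd);
            return err;
    }
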
2024-03-01  selftests: ip_local_port_range: use XFAIL instead of SKIP  (Jakub Kicinski; 1 file, -3/+3)

SCTP does not support IP_LOCAL_PORT_RANGE and we know it, so use XFAIL instead of SKIP.

Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>

2024-03-01  selftests: kselftest_harness: support using xfail  (Jakub Kicinski; 1 file, -1/+48)

Currently some tests report a skip for things they expect to fail, e.g. when a given combination of parameters is known to be unsupported. This is confusing because in an ideal test environment and fully featured kernel no tests should be skipped.

The selftest summary line already includes xfail and xpass counters, e.g.:

    Totals: pass:725 fail:0 xfail:0 xpass:0 skip:0 error:0

but there's no way to use it from within the harness.

Add a new per-fixture+variant-combination list of test cases we expect to fail.

Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>

2024-03-01  selftests: kselftest_harness: let PASS / FAIL provide diagnostic  (Jakub Kicinski; 1 file, -5/+4)

Switch to printing the KTAP line for PASS / FAIL with ksft_test_result_code(), which gives us the ability to report diagnostic messages.

Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>

2024-03-01  selftests: kselftest_harness: separate diagnostic message with # in ksft_test_result_code()  (Jakub Kicinski; 2 files, -1/+6)

According to the spec we should always print a # if we add a diagnostic message. Having the caller pass in the newline as part of the diagnostic message makes handling this a bit counter-intuitive, so append the newline in the helper.

Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>

2024-03-01  selftests: kselftest_harness: print test name for SKIP  (Jakub Kicinski; 2 files, -4/+6)

Jakub points out that for parsers it's rather useful to always have the test name on the result line. Currently if we SKIP (or soon XFAIL or XPASS), we will print:

    ok 17 # SKIP SCTP doesn't support IP_BIND_ADDRESS_NO_PORT
          ^ no test name

Always print the test name. KTAP format seems to allow or even call for it, per:

    https://docs.kernel.org/dev-tools/ktap.html

Suggested-by: Jakub Sitnicki <jakub@cloudflare.com>
Link: https://lore.kernel.org/all/87jzn6lnou.fsf@cloudflare.com/
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>

2024-03-01  selftests: kselftest: add ksft_test_result_code(), handling all exit codes  (Jakub Kicinski; 2 files, -2/+46)

For generic test harness code it's more useful to deal with exit codes directly, rather than having to switch on them and call the right ksft_test_result_*() helper. Add such a function to kselftest.h. Note that "directive" and "diagnostic" are what the ktap docs call those parts of the message.

Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>

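A sketch of how harness code can then report an exit code directly, assuming the helper takes the code, the test name, and a printf-style diagnostic; run_one_case() and t are hypothetical stand-ins for the harness's own plumbing:

    /* code is one of KSFT_PASS, KSFT_FAIL, KSFT_XFAIL, KSFT_SKIP, ... */
    int code = run_one_case(t);

    /* one call instead of switching on the code ourselves */
    ksft_test_result_code(code, test_name, "errno %d", errno);
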
2024-03-01  selftests: kselftest_harness: use exit code to store skip  (Jakub Kicinski; 1 file, -14/+5)

We always use skip in combination with exit_code being 0 (KSFT_PASS). These are basic KSFT / KTAP semantics. Store the right KSFT_* code in exit_code directly. This makes it easier to support tests reporting other extended KSFT_* codes like XFAIL / XPASS.

Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>

2024-03-01  selftests: kselftest_harness: save full exit code in metadata  (Jakub Kicinski; 7 files, -35/+41)

Instead of tracking passed = 0/1, rename the field to exit_code and invert the values so that they match the KSFT_* exit codes. This will allow us to fold SKIP / XFAIL into the same value.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>

2024-03-01  selftests: kselftest_harness: generate test name once  (Jakub Kicinski; 1 file, -6/+10)

Since we added variant support, generating the full test case name takes 4 string arguments. We're about to need it in another two places. Stop the duplication and print it once into a temporary buffer.

Suggested-by: Jakub Sitnicki <jakub@cloudflare.com>
Acked-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>

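A sketch of the single-print approach, with the buffer size and field names chosen for illustration (fixture f, variant, and test t follow the harness's naming scheme):

    char test_name[256];

    /* fixture, optional ".variant", and test name printed exactly once */
    snprintf(test_name, sizeof(test_name), "%s%s%s.%s",
             f->name, variant->name[0] ? "." : "", variant->name, t->name);
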
2024-03-01  selftests: kselftest_harness: use KSFT_* exit codes  (Jakub Kicinski; 1 file, -6/+5)

Now that we no longer need low exit codes to communicate assertion steps, use normal KSFT exit codes.

Acked-by: Kees Cook <keescook@chromium.org>
Tested-by: Jakub Sitnicki <jakub@cloudflare.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>

2024-03-01  selftests/harness: Merge TEST_F_FORK() into TEST_F()  (Mickaël Salaün; 2 files, -91/+27)

Replace the Landlock-specific TEST_F_FORK() with an improved TEST_F() which brings four related changes:

- Run TEST_F()'s tests in a grandchild process to make it possible to drop privileges and delegate teardown to the parent. Compared to TEST_F_FORK(), simplify handling of the test grandchild process thanks to vfork(2), and make it generic (e.g. no explicit conversion between exit code and _metadata).

- Compared to TEST_F_FORK(), run teardown even when tests failed with an assert, thanks to commit 63e6b2a42342 ("selftests/harness: Run TEARDOWN for ASSERT failures").

- Simplify the test harness code by removing the no_print and step fields, which are not used. I added this feature just after I made kselftest_harness.h more broadly available, but this step counter remained even though it wasn't needed after all. See commit 369130b63178 ("selftests: Enhance kselftest_harness.h to print which assert failed").

- Replace spaces with tabs in one line of __TEST_F_IMPL().

Cc: Günther Noack <gnoack@google.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Will Drewry <wad@chromium.org>
Signed-off-by: Mickaël Salaün <mic@digikod.net>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>

2024-03-01  selftests/landlock: Redefine TEST_F() as TEST_F_FORK()  (Mickaël Salaün; 1 file, -1/+5)

This has the effect of creating a new test process for either TEST_F() or TEST_F_FORK(), which doesn't change tests but will ease potential backports. See next commit for the TEST_F_FORK() merge into TEST_F().

Cc: Günther Noack <gnoack@google.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Will Drewry <wad@chromium.org>
Signed-off-by: Mickaël Salaün <mic@digikod.net>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>

2024-03-01  net: bcmasp: Add support for PHY interrupts  (Justin Chen; 3 files, -0/+26)

Hook up the phy interrupts for internal phys to reduce mdio traffic and improve responsiveness of link changes.

Signed-off-by: Justin Chen <justin.chen@broadcom.com>
Acked-by: Florian Fainelli <florian.fainelli@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2024-03-01  net: bcmasp: Keep buffers through power management  (Justin Chen; 2 files, -108/+85)

There is no advantage to freeing and re-allocating buffers through suspend and resume. This wastes cycles and makes suspend/resume time longer. We also open ourselves up to failed allocations in systems with heavy memory fragmentation.

Signed-off-by: Justin Chen <justin.chen@broadcom.com>
Acked-by: Florian Fainelli <florian.fainelli@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2024-03-01  net: phy: mdio-bcm-unimac: Add asp v2.2 support  (Justin Chen; 1 file, -0/+1)

Add mdio compat string for ASP 2.0 ethernet driver.

Signed-off-by: Justin Chen <justin.chen@broadcom.com>
Acked-by: Florian Fainelli <florian.fainelli@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2024-03-01  net: bcmasp: Add support for ASP 2.2  (Justin Chen; 3 files, -10/+87)

ASP 2.2 improves power savings during low power modes. A new register was added to toggle to a slower clock during low power modes. EEE was broken for ASP 2.0/2.1. A HW workaround was added for ASP 2.2 that requires toggling a chicken bit.

Signed-off-by: Justin Chen <justin.chen@broadcom.com>
Acked-by: Florian Fainelli <florian.fainelli@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2024-03-01  dt-bindings: net: brcm,asp-v2.0: Add asp-v2.2  (Justin Chen; 1 file, -0/+4)

Add support for ASP 2.2.

Signed-off-by: Justin Chen <justin.chen@broadcom.com>
Acked-by: Florian Fainelli <florian.fainelli@broadcom.com>
Acked-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>