path: root/tools/testing/selftests
Age  Commit message  (Author)  Files changed, lines -/+
2019-12-06  net/tls: Fix return values to avoid ENOTSUPP  (Valentin Vidic)  1 file, -6/+2
ENOTSUPP is not available in userspace, for example: setsockopt failed, 524, Unknown error 524 Signed-off-by: Valentin Vidic <vvidic@valentin-vidic.from.hr> Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
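As an illustrative sketch (not the selftest's actual code) of why the error code matters to userspace: the helper name and the TCP_ULP-based kTLS setup below are assumptions, but they show how ENOTSUPP surfaces as "Unknown error 524" while EOPNOTSUPP yields a real message:

  #include <errno.h>
  #include <netinet/in.h>
  #include <netinet/tcp.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/socket.h>

  #ifndef TCP_ULP
  #define TCP_ULP 31
  #endif

  /* Enable kTLS on a connected TCP socket. If the kernel returns the
   * internal ENOTSUPP (524), strerror() can only say "Unknown error 524";
   * with EOPNOTSUPP userspace gets a meaningful message. */
  static int enable_ktls(int fd)
  {
          if (setsockopt(fd, IPPROTO_TCP, TCP_ULP, "tls", sizeof("tls"))) {
                  fprintf(stderr, "setsockopt failed, %d, %s\n",
                          errno, strerror(errno));
                  return -1;
          }
          return 0;
  }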
2019-12-04  selftests/bpf: Add a fexit/bpf2bpf test with target bpf prog no callees  (Yonghong Song)  3 files, -19/+81
The existing fexit_bpf2bpf test covers the target program with callees. This patch adds a test for the target program without callees. Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20191205010607.177904-1-yhs@fb.com
2019-12-04  selftests: add epoll selftests  (Heiher)  4 files, -0/+3083
This adds the promised selftest for epoll. It verifies epoll wakeups, covering leaf and nested modes, epoll_wait() and poll(), and multi-threaded use. Link: http://lkml.kernel.org/r/20191009121518.4027-1-r@hev.cc Signed-off-by: hev <r@hev.cc> Reviewed-by: Roman Penyaev <rpenyaev@suse.de> Cc: Jason Baron <jbaron@akamai.com> Cc: Shuah Khan <shuah@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
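A minimal sketch of the kind of wakeup check such a test performs (single-threaded, illustrative only, not taken from the 3000-line selftest):

  #include <assert.h>
  #include <stdint.h>
  #include <sys/epoll.h>
  #include <sys/eventfd.h>
  #include <unistd.h>

  int main(void)
  {
          int efd = epoll_create1(0);
          int evfd = eventfd(0, EFD_NONBLOCK);
          struct epoll_event ev = { .events = EPOLLIN, .data.fd = evfd };
          struct epoll_event out;
          uint64_t one = 1;

          assert(efd >= 0 && evfd >= 0);
          assert(epoll_ctl(efd, EPOLL_CTL_ADD, evfd, &ev) == 0);
          assert(write(evfd, &one, sizeof(one)) == sizeof(one));
          /* exactly one wakeup/event expected for the armed eventfd */
          assert(epoll_wait(efd, &out, 1, 0) == 1 && out.data.fd == evfd);
          return 0;
  }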
2019-12-04  selftests/bpf: De-flake test_tcpbpf  (Stanislav Fomichev)  3 files, -7/+20
It looks like the BPF program that handles the BPF_SOCK_OPS_STATE_CB state can race with the userspace bpf_map_lookup_elem("global_map"); I sometimes see failures in this test, and re-running helps. Since we know the callback is expected to be called 3 times (once for the listener socket, twice, once for each end of the connection), let's export this number and add simple retry logic around it. Also, let's make EXPECT_EQ() not return on failure but continue evaluating all conditions; that should make potential debugging easier. With this fix in place I don't observe the flakiness anymore. Signed-off-by: Stanislav Fomichev <sdf@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Cc: Lawrence Brakmo <brakmo@fb.com> Link: https://lore.kernel.org/bpf/20191204190955.170934-1-sdf@google.com
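A sketch of the retry idea described above; the helper name, map layout and retry budget are illustrative assumptions, not the actual test_tcpbpf code:

  #include <unistd.h>
  #include <linux/types.h>
  #include <bpf/bpf.h>

  /* Poll the map until the exported callback counter reaches the expected
   * value (3: once for the listener, once for each connection end) or the
   * retry budget runs out, instead of failing on the first stale read. */
  static int wait_for_callbacks(int map_fd, __u32 key, __u32 expected)
  {
          __u32 value = 0;
          int i;

          for (i = 0; i < 100; i++) {
                  if (!bpf_map_lookup_elem(map_fd, &key, &value) &&
                      value == expected)
                          return 0;
                  usleep(10 * 1000);      /* give the BPF program time to run */
          }
          return -1;                      /* still short: report a real failure */
  }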
2019-12-04  selftests/bpf: Bring back c++ include/link test  (Stanislav Fomichev)  3 files, -1/+26
Commit 5c26f9a78358 ("libbpf: Don't use cxx to test_libpf target") converted existing c++ test to c. We still want to include and link against libbpf from c++ code, so reinstate this test back, this time in a form of a selftest with a clear comment about its purpose. v2: * -lelf -> $(LDLIBS) (Andrii Nakryiko) Fixes: 5c26f9a78358 ("libbpf: Don't use cxx to test_libpf target") Signed-off-by: Stanislav Fomichev <sdf@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20191202215931.248178-1-sdf@google.com
2019-12-04  selftests/bpf: Don't hard-code root cgroup id  (Stanislav Fomichev)  1 file, -1/+1
Commit 40430452fd5d ("kernfs: use 64bit inos if ino_t is 64bit") changed the way cgroup ids are exposed to the userspace. Instead of assuming fixed root id, let's query it. Fixes: 40430452fd5d ("kernfs: use 64bit inos if ino_t is 64bit") Signed-off-by: Stanislav Fomichev <sdf@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20191202200143.250793-1-sdf@google.com
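One plausible way to query the id at runtime rather than hard-coding it, assuming a cgroup2 mount and 64-bit ino_t; this is a hedged sketch, and the actual selftest helper may use name_to_handle_at() instead:

  #include <stdint.h>
  #include <sys/stat.h>

  /* After 40430452fd5d, kernfs inode numbers carry the 64-bit id exposed to
   * BPF, so the root cgroup id can be read off the cgroup2 mount point. */
  static uint64_t root_cgroup_id(const char *cgroup2_mnt)
  {
          struct stat st;

          if (stat(cgroup2_mnt, &st))
                  return 0;
          return (uint64_t)st.st_ino;
  }

  /* usage sketch: uint64_t id = root_cgroup_id("/sys/fs/cgroup"); */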
2019-12-01  selftests: vm: add fragment CONFIG_TEST_VMALLOC  (Anders Roxell)  1 file, -0/+1
When running test_vmalloc.sh smoke the following print out states that the fragment is missing. # ./test_vmalloc.sh: You must have the following enabled in your kernel: # CONFIG_TEST_VMALLOC=m Rework to add the fragment 'CONFIG_TEST_VMALLOC=m' to the config file. Link: http://lkml.kernel.org/r/20190916095217.19665-1-anders.roxell@linaro.org Fixes: a05ef00c9790 ("selftests/vm: add script helper for CONFIG_TEST_VMALLOC_MODULE") Signed-off-by: Anders Roxell <anders.roxell@linaro.org> Cc: Shuah Khan <shuah@kernel.org> Cc: "Uladzislau Rezki (Sony)" <urezki@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-12-01  memfd: add test for COW on MAP_PRIVATE and F_SEAL_FUTURE_WRITE mappings  (Joel Fernandes (Google))  1 file, -0/+36
In this test, the parent and child both have writable private mappings. The test shows that, without the patch in this series, the parent and child share the same memory, which is incorrect. In other words, COW needs to be triggered so that any write to the child's copy stays local to the child. Link: http://lkml.kernel.org/r/20191107195355.80608-2-joel@joelfernandes.org Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org> Cc: Hugh Dickins <hughd@google.com> Cc: Nicolas Geoffray <ngeoffray@google.com> Cc: Shuah Khan <shuah@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
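The COW property being asserted, reduced to a hedged standalone sketch (the F_SEAL_FUTURE_WRITE-specific setup of the real test is omitted):

  #define _GNU_SOURCE
  #include <assert.h>
  #include <sys/mman.h>
  #include <sys/wait.h>
  #include <unistd.h>

  int main(void)
  {
          int fd = memfd_create("cow", 0);
          char *p;

          assert(fd >= 0 && ftruncate(fd, 4096) == 0);
          p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
          assert(p != MAP_FAILED);
          p[0] = 'A';

          if (fork() == 0) {
                  p[0] = 'B';     /* child writes its private copy ... */
                  _exit(0);
          }
          wait(NULL);
          assert(p[0] == 'A');    /* ... and the parent must not see it */
          return 0;
  }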
2019-11-30  selftests: forwarding: fix race between packet receive and tc check  (Jiri Pirko)  1 file, -8/+31
It is possible that tc stats get checked before the packet we check for actually arrived into the interface and accounted for. Fix it by checking for the expected result in a loop until timeout is reached (by default 1 second). Fixes: 07e5c75184a1 ("selftests: forwarding: Introduce tc flower matching tests") Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-11-29  selftests: pmtu: use -oneline for ip route list cache  (Thadeu Lima de Souza Cascardo)  1 file, -3/+2
Some versions of iproute2 will output more than one line per entry, which will cause the test to fail, like: TEST: ipv6: list and flush cached exceptions [FAIL] can't list cached exceptions That happens, for example, with iproute2 4.15.0. When using the -oneline option, this will work just fine: TEST: ipv6: list and flush cached exceptions [ OK ] This also works just fine with a more recent version of iproute2, like 5.4.0. For some reason, two lines are printed for the IPv4 test no matter what version of iproute2 is used. Use the same -oneline parameter there instead of counting the lines twice. Fixes: b964641e9925 ("selftests: pmtu: Make list_flush_ipv6_exception test more demanding") Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com> Acked-by: Stefano Brivio <sbrivio@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-11-28  selftests: bpf: correct perror strings  (Jakub Kicinski)  2 files, -20/+20
perror(str) is basically equivalent to fprintf(stderr, "%s: %s\n", str, strerror(errno)). A newline or colon at the end of str is a mistake and breaks the formatting. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Simon Horman <simon.horman@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
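What the fix amounts to in practice, as a tiny illustrative snippet:

  #include <errno.h>
  #include <stdio.h>
  #include <string.h>

  void perror_example(void)
  {
          errno = ENOENT;
          /* good: prints "open failed: No such file or directory" */
          perror("open failed");
          /* bad: the trailing colon+newline splits the message across lines:
           *   open failed:
           *   : No such file or directory */
          perror("open failed:\n");
          /* perror(str) behaves roughly like:
           * fprintf(stderr, "%s: %s\n", str, strerror(errno)); */
  }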
2019-11-28  selftests: bpf: test_sockmap: handle file creation failures gracefully  (Jakub Kicinski)  1 file, -0/+9
test_sockmap creates a temporary file to use for sendpage. This may fail for various reasons. Handle the error rather than segfaulting. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Simon Horman <simon.horman@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
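A sketch of the graceful-failure idea (the helper name is illustrative, not the actual test_sockmap code):

  #include <errno.h>
  #include <stdio.h>
  #include <string.h>

  /* Create the temporary file used for sendpage(); on failure report the
   * error so the caller can fail the test instead of dereferencing NULL. */
  static FILE *open_sendpage_tmpfile(void)
  {
          FILE *f = tmpfile();

          if (!f)
                  fprintf(stderr, "failed to create tmpfile: %s\n",
                          strerror(errno));
          return f;
  }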
2019-11-28  selftests/tls: add a test for fragmented messages  (Jakub Kicinski)  1 file, -0/+60
Add a sendmsg test with very fragmented messages. This should fill up sk_msg and test the boundary conditions. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Simon Horman <simon.horman@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
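A hedged sketch of what a heavily fragmented sendmsg() looks like; the fragment count and helper name are illustrative, not the selftest's exact values:

  #include <sys/socket.h>
  #include <sys/uio.h>

  #define FRAGS 64

  /* Send FRAGS bytes as FRAGS one-byte iovec entries so the record is
   * assembled from many small fragments, stressing sk_msg boundaries. */
  static ssize_t send_fragmented(int fd, const char *buf)
  {
          struct iovec iov[FRAGS];
          struct msghdr msg = { .msg_iov = iov, .msg_iovlen = FRAGS };
          int i;

          for (i = 0; i < FRAGS; i++) {
                  iov[i].iov_base = (void *)&buf[i];
                  iov[i].iov_len = 1;
          }
          return sendmsg(fd, &msg, 0);
  }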
2019-11-28  Revert "selftests: Fix O= and KBUILD_OUTPUT handling for relative paths"  (Shuah Khan)  1 file, -3/+2
This reverts commit 303e6218ecec475d5bc3e5922dec770ee5baf107. The reverted patch breaks several CI use-cases that run kselftest builds without using the main Makefile: its fix depends on abs_objtree, which is undefined when the kselftest build is invoked on the selftests Makefile without going through the main Makefile. Revert it for now as it impacts selftest runs. Fixes: 303e6218ecec ("selftests: Fix O= and KBUILD_OUTPUT handling for relative paths") Reported-by: Cristian Marussi <cristian.marussi@arm.com> Reported-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2019-11-27  libbpf: Fix global variable relocation  (Andrii Nakryiko)  4 files, -17/+17
Similarly to a0d7da26ce86 ("libbpf: Fix call relocation offset calculation bug"), relocations against global variables need to take into account referenced symbol's st_value, which holds offset into a corresponding data section (and, subsequently, offset into internal backing map). For static variables this offset is always zero and data offset is completely described by respective instruction's imm field. Convert a bunch of selftests to global variables. Previously they were relying on `static volatile` trick to ensure Clang doesn't inline static variables, which with global variables is not necessary anymore. Fixes: 393cdfbee809 ("libbpf: Support initialized global variables") Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20191127200651.1381348-1-andriin@fb.com
2019-11-26  selftests/x86/single_step_syscall: Check SYSENTER directly  (Andy Lutomirski)  1 file, -9/+85
We used to test SYSENTER only through the vDSO. Test it directly too, just in case. Signed-off-by: Andy Lutomirski <luto@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-11-24  bpf: Introduce BPF_TRACE_x helper for the tracing tests  (Martin KaFai Lau)  6 files, -174/+125
For BPF_PROG_TYPE_TRACING, the bpf_prog's ctx is an array of u64. This patch borrows the idea from BPF_CALL_x in filter.h to convert a u64 to the arg type of the traced function. The new BPF_TRACE_x has an arg to specify the return type of a bpf_prog. It will be used in the future TCP-ops bpf_prog that may return "void". The new macros are defined in the new header file "bpf_trace_helpers.h". It is under selftests/bpf/ for now. It could be moved to libbpf later after seeing more upcoming non-tracing use cases. The tests are changed to use these new macros also. Hence, the k[s]u8/16/32/64 are no longer needed and they are removed from the bpf_helpers.h. Signed-off-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20191123202504.1502696-1-kafai@fb.com
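A simplified, fixed-arity sketch of the idea (the real BPF_TRACE_x in bpf_trace_helpers.h is variadic; the macro and names here are illustrative, not the actual header):

  #include <linux/types.h>

  #ifndef __always_inline
  #define __always_inline inline __attribute__((always_inline))
  #endif

  /* The tracing ctx is an array of u64; the macro casts each slot to the
   * traced function's argument type and lets the user write a typed body. */
  #define BPF_TRACE_2(name, ret, t1, a1, t2, a2)                          \
  static __always_inline ret ____##name(t1 a1, t2 a2);                    \
  ret name(__u64 *ctx)                                                    \
  {                                                                       \
          return ____##name((t1)ctx[0], (t2)ctx[1]);                      \
  }                                                                       \
  static __always_inline ret ____##name(t1 a1, t2 a2)

A tracing program is then written as BPF_TRACE_2(my_prog, int, some_type *, arg1, int, arg2) { ... } under the appropriate SEC() annotation.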
2019-11-24  bpf, testing: Add various tail call test cases  (Daniel Borkmann)  6 files, -0/+698
Add several BPF kselftest cases for tail calls which test the various patch directions, and that multiple locations are patched in same and different programs.

  # ./test_progs -n 45
  #45/1 tailcall_1:OK
  #45/2 tailcall_2:OK
  #45/3 tailcall_3:OK
  #45/4 tailcall_4:OK
  #45/5 tailcall_5:OK
  #45 tailcalls:OK
  Summary: 1/5 PASSED, 0 SKIPPED, 0 FAILED

I've also verified that the JITed dump after each of the rewrite cases matches expectations. The regular test_verifier suite, which contains further tail call tests, also passes fine:

  # ./test_verifier
  [...]
  Summary: 1563 PASSED, 0 SKIPPED, 0 FAILED

Checked under JIT, interpreter and JIT + hardening. Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/3d6cbecbeb171117dccfe153306e479798fb608d.1574452833.git.daniel@iogearbox.net
2019-11-24  selftests/bpf: Add BPF trampoline performance test  (Alexei Starovoitov)  2 files, -0/+185
Add a test that benchmarks different ways of attaching BPF program to a kernel function. Here are the results for 2.4Ghz x86 cpu on a kernel without mitigations:

  $ ./test_progs -n 49 -v|grep events
  task_rename base        2743K events per sec
  task_rename kprobe      2419K events per sec
  task_rename kretprobe   1876K events per sec
  task_rename raw_tp      2578K events per sec
  task_rename fentry      2710K events per sec
  task_rename fexit       2685K events per sec

On a kernel with retpoline:

  $ ./test_progs -n 49 -v|grep events
  task_rename base        2401K events per sec
  task_rename kprobe      1930K events per sec
  task_rename kretprobe   1485K events per sec
  task_rename raw_tp      2053K events per sec
  task_rename fentry      2351K events per sec
  task_rename fexit       2185K events per sec

All 5 approaches:
  - kprobe/kretprobe in __set_task_comm()
  - raw tracepoint in trace_task_rename()
  - fentry/fexit in __set_task_comm()
are roughly equivalent. __set_task_comm() by itself is quite fast, so any extra instructions add up. Until BPF trampoline was introduced the fastest mechanism was raw tracepoint. kprobe via ftrace was second best. kretprobe is slow due to trap. New fentry/fexit methods via BPF trampoline are clearly the fastest and the difference is more pronounced with retpoline on, since BPF trampoline doesn't use indirect jumps. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: John Fastabend <john.fastabend@gmail.com> Link: https://lore.kernel.org/bpf/20191122011515.255371-1-ast@kernel.org
2019-11-24  selftests/bpf: Ensure core_reloc_kernel is reading test_progs's data only  (Andrii Nakryiko)  2 files, -5/+15
The test_core_reloc_kernel.c selftest is the only CO-RE test that reads and returns the calling thread's information (pid, tgid, comm) for validation. Thus it has to make sure that only test_progs' own invocations are honored. Fixes: df36e621418b ("selftests/bpf: add CO-RE relocs testing setup") Reported-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: John Fastabend <john.fastabend@gmail.com> Link: https://lore.kernel.org/bpf/20191121175900.3486133-1-andriin@fb.com
2019-11-24  selftests/bpf: Add verifier tests for better jmp32 register bounds  (Yonghong Song)  1 file, -0/+83
Three test cases are added. Test 1: jmp32 'reg op imm'. Test 2: jmp32 'reg op reg' where dst 'reg' has unknown constant and src 'reg' has known constant Test 3: jmp32 'reg op reg' where dst 'reg' has known constant and src 'reg' has unknown constant Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20191121170651.449096-1-yhs@fb.com
2019-11-24  selftests/bpf: Integrate verbose verifier log into test_progs  (Andrii Nakryiko)  4 files, -9/+27
Add an extra level of verbosity, activated by the -vvv argument. When -vv is specified, the verbose libbpf and verifier log (level 1) is output, even for successful tests. With -vvv, the verifier log goes to level 2. This is extremely useful to debug verifier failures, as well as to just see the state and flow of verification. Before this, you'd have to go and modify load_program()'s source code inside libbpf to specify extra log_level flags, which is suboptimal to say the least. Currently the -vv and -vvv triggering of verifier output is integrated into test_stub's bpf_prog_load as well as the bpf_verif_scale.c tests. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20191120003548.4159797-1-andriin@fb.com
2019-11-24  libbpf: Support initialized global variables  (Andrii Nakryiko)  13 files, -26/+26
Initialized global variables are no different in ELF from static variables, and don't require any extra support from libbpf. But they match the semantics of global data (backed by BPF maps) more closely, preventing LLVM/Clang from aggressively inlining constant values and not requiring volatile incantations to prevent that. This patch enables global variables. It still disallows uninitialized variables, which would be put into the special COM (common) ELF section, because BPF doesn't allow uninitialized data to be accessed. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20191121070743.1309473-5-andriin@fb.com
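The contrast, as a small illustrative BPF-side C snippet (the variable names are made up):

  /* old workaround: 'static volatile' keeps Clang from constant-folding */
  static volatile int debug_old = 0;

  /* now supported: a plain initialized global, backed by the data map,
   * no volatile incantation needed */
  int debug = 0;

  /* still rejected: an uninitialized global would land in the COM(mon)
   * ELF section, which libbpf continues to refuse */
  /* int not_yet_supported; */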
2019-11-24  selftests, bpftool: Skip the build test if not in tree  (Jakub Kicinski)  1 file, -0/+4
If selftests are copied over to another machine/location for execution the build test of bpftool will obviously not work, since the sources are not copied. Skip it if we can't find bpftool's Makefile. Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org> Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20191119105010.19189-3-quentin.monnet@netronome.com
2019-11-24  selftests, bpftool: Set EXIT trap after usage function  (Quentin Monnet)  1 file, -13/+13
The trap on EXIT is used to clean up any temporary directory left by the build attempts. It is not needed when the user simply calls the script with its --help option, and may not be needed either if we add checks (e.g. on the availability of bpftool files) before the build attempts. Let's move this trap and related variables lower down in the code, so that we don't accidentally change the value returned from the script on early exits at pre-checks. Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Link: https://lore.kernel.org/bpf/20191119105010.19189-2-quentin.monnet@netronome.com
2019-11-24  selftests/bpf: Ensure no DWARF relocations for BPF object files  (Andrii Nakryiko)  1 file, -1/+1
Add -mattr=dwarfris attribute to llc to avoid having relocations against DWARF data. These relocations make it impossible to inspect DWARF contents: all strings are invalid. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20191121070743.1309473-2-andriin@fb.com
2019-11-21  selftests/x86/sigreturn/32: Invalidate DS and ES when abusing the kernel  (Andy Lutomirski)  1 file, -0/+13
If the kernel accidentally uses DS or ES while the user values are loaded, it will work fine for sane userspace. In the interest of simulating maximally insane userspace, make sigreturn_32 zero out DS and ES for the nasty parts so that inadvertent use of these segments will crash. Signed-off-by: Andy Lutomirski <luto@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: stable@kernel.org
2019-11-21  selftests/x86/mov_ss_trap: Fix the SYSENTER test  (Andy Lutomirski)  1 file, -1/+2
For reasons that I haven't quite fully diagnosed, running mov_ss_trap_32 on a 32-bit kernel results in an infinite loop in userspace. This appears to be because the hacky SYSENTER test doesn't segfault as desired; instead it corrupts the program state such that it infinite loops. Fix it by explicitly clearing EBP before doing SYSENTER. This will give a more reliable segfault. Fixes: 59c2a7226fc5 ("x86/selftests: Add mov_to_ss test") Signed-off-by: Andy Lutomirski <luto@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: stable@kernel.org
2019-11-21  selftests/powerpc: spectre_v2 test must be built 64-bit  (Michael Ellerman)  1 file, -0/+2
The spectre_v2 test must be built 64-bit, it includes hand-written asm that is 64-bit only, and segfaults if built 32-bit. Fixes: c790c3d2b0ec ("selftests/powerpc: Add a test of spectre_v2 mitigations") Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20191120023924.13130-1-mpe@ellerman.id.au
2019-11-19  selftests/bpf: Enforce no-ALU32 for test_progs-no_alu32  (Andrii Nakryiko)  1 file, -0/+7
With the most recent Clang, alu32 is enabled by default if -mcpu=probe or -mcpu=v3 is specified. Use a separate build rule with -mcpu=v2 to enforce no ALU32 mode. Suggested-by: Yonghong Song <yhs@fb.com> Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20191120002510.4130605-1-andriin@fb.com
2019-11-19  libbpf: Fix call relocation offset calculation bug  (Andrii Nakryiko)  3 files, -6/+6
When relocating a subprogram call, libbpf doesn't take into account relo->text_off, which comes from the symbol's value. This generally works fine for subprograms implemented as static functions, but breaks for global functions. Taking a simplified test_pkt_access.c as an example:

  __attribute__ ((noinline))
  static int test_pkt_access_subprog1(volatile struct __sk_buff *skb)
  {
          return skb->len * 2;
  }

  __attribute__ ((noinline))
  static int test_pkt_access_subprog2(int val, volatile struct __sk_buff *skb)
  {
          return skb->len + val;
  }

  SEC("classifier/test_pkt_access")
  int test_pkt_access(struct __sk_buff *skb)
  {
          if (test_pkt_access_subprog1(skb) != skb->len * 2)
                  return TC_ACT_SHOT;
          if (test_pkt_access_subprog2(2, skb) != skb->len + 2)
                  return TC_ACT_SHOT;
          return TC_ACT_UNSPEC;
  }

When compiled, we get two relocations, pointing to the '.text' symbol. .text has st_value set to 0 (it points to the beginning of the .text section):

  0000000000000008  000000050000000a R_BPF_64_32  0000000000000000 .text
  0000000000000040  000000050000000a R_BPF_64_32  0000000000000000 .text

test_pkt_access_subprog1 and test_pkt_access_subprog2 offsets (targets of the two calls) are encoded within the call instruction's imm32 part as -1 and 2, respectively:

  0000000000000000 test_pkt_access_subprog1:
         0:  61 10 00 00 00 00 00 00  r0 = *(u32 *)(r1 + 0)
         1:  64 00 00 00 01 00 00 00  w0 <<= 1
         2:  95 00 00 00 00 00 00 00  exit
  0000000000000018 test_pkt_access_subprog2:
         3:  61 10 00 00 00 00 00 00  r0 = *(u32 *)(r1 + 0)
         4:  04 00 00 00 02 00 00 00  w0 += 2
         5:  95 00 00 00 00 00 00 00  exit
  0000000000000000 test_pkt_access:
         0:  bf 16 00 00 00 00 00 00  r6 = r1
  ===>   1:  85 10 00 00 ff ff ff ff  call -1
         2:  bc 01 00 00 00 00 00 00  w1 = w0
         3:  b4 00 00 00 02 00 00 00  w0 = 2
         4:  61 62 00 00 00 00 00 00  r2 = *(u32 *)(r6 + 0)
         5:  64 02 00 00 01 00 00 00  w2 <<= 1
         6:  5e 21 08 00 00 00 00 00  if w1 != w2 goto +8 <LBB0_3>
         7:  bf 61 00 00 00 00 00 00  r1 = r6
  ===>   8:  85 10 00 00 02 00 00 00  call 2
         9:  bc 01 00 00 00 00 00 00  w1 = w0
        10:  61 62 00 00 00 00 00 00  r2 = *(u32 *)(r6 + 0)
        11:  04 02 00 00 02 00 00 00  w2 += 2
        12:  b4 00 00 00 ff ff ff ff  w0 = -1
        13:  1e 21 01 00 00 00 00 00  if w1 == w2 goto +1 <LBB0_3>
        14:  b4 00 00 00 02 00 00 00  w0 = 2
  0000000000000078 LBB0_3:
        15:  95 00 00 00 00 00 00 00  exit

Now, if we compile the example with global functions, the setup changes.
Relocations are now specifically against the test_pkt_access_subprog1 and test_pkt_access_subprog2 symbols, with test_pkt_access_subprog2 pointing 24 bytes into its respective section (.text), i.e., 3 instructions in:

  0000000000000008  000000070000000a R_BPF_64_32  0000000000000000 test_pkt_access_subprog1
  0000000000000048  000000080000000a R_BPF_64_32  0000000000000018 test_pkt_access_subprog2

Call instructions now encode offsets relative to the function symbols and are both set to -1:

  0000000000000000 test_pkt_access_subprog1:
         0:  61 10 00 00 00 00 00 00  r0 = *(u32 *)(r1 + 0)
         1:  64 00 00 00 01 00 00 00  w0 <<= 1
         2:  95 00 00 00 00 00 00 00  exit
  0000000000000018 test_pkt_access_subprog2:
         3:  61 20 00 00 00 00 00 00  r0 = *(u32 *)(r2 + 0)
         4:  0c 10 00 00 00 00 00 00  w0 += w1
         5:  95 00 00 00 00 00 00 00  exit
  0000000000000000 test_pkt_access:
         0:  bf 16 00 00 00 00 00 00  r6 = r1
  ===>   1:  85 10 00 00 ff ff ff ff  call -1
         2:  bc 01 00 00 00 00 00 00  w1 = w0
         3:  b4 00 00 00 02 00 00 00  w0 = 2
         4:  61 62 00 00 00 00 00 00  r2 = *(u32 *)(r6 + 0)
         5:  64 02 00 00 01 00 00 00  w2 <<= 1
         6:  5e 21 09 00 00 00 00 00  if w1 != w2 goto +9 <LBB2_3>
         7:  b4 01 00 00 02 00 00 00  w1 = 2
         8:  bf 62 00 00 00 00 00 00  r2 = r6
  ===>   9:  85 10 00 00 ff ff ff ff  call -1
        10:  bc 01 00 00 00 00 00 00  w1 = w0
        11:  61 62 00 00 00 00 00 00  r2 = *(u32 *)(r6 + 0)
        12:  04 02 00 00 02 00 00 00  w2 += 2
        13:  b4 00 00 00 ff ff ff ff  w0 = -1
        14:  1e 21 01 00 00 00 00 00  if w1 == w2 goto +1 <LBB2_3>
        15:  b4 00 00 00 02 00 00 00  w0 = 2
  0000000000000080 LBB2_3:
        16:  95 00 00 00 00 00 00 00  exit

Thus the right formula to calculate the target call offset after relocation should take into account the relocation's target symbol value (offset within the section), the call instruction's imm32 offset, and (subtracting, to get a relative instruction offset) the instruction index of the call instruction itself. All that is shifted by the number of instructions in the main program, given all sub-programs are copied over after the main program. Convert a few selftests relying on bpf-to-bpf calls to use global functions instead of static ones. Fixes: 48cca7e44f9f ("libbpf: add support for bpf_call") Reported-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andriin@fb.com> Acked-by: Yonghong Song <yhs@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20191119224447.3781271-1-andriin@fb.com
2019-11-18  selftests: forwarding: Add speed and auto-negotiation test  (Amit Cohen)  1 file, -0/+318
Check configurations and packet transfer with different variations of autoneg and speed. Test plan:
1. Test force of same speed with autoneg off
2. Test force of different speeds with autoneg off (should fail)
3. One side is autoneg on and other side sets force of common speeds
4. One side is autoneg on and other side only advertises a subset of the common speeds (one speed of the subset)
5. One side is autoneg on and other side only advertises a subset of the common speeds. Check that highest speed is negotiated
6. Test autoneg on, but each side advertises different speeds (should fail)
Signed-off-by: Amit Cohen <amitc@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-11-18  selftests: forwarding: lib.sh: Add wait for dev with timeout  (Amit Cohen)  1 file, -3/+26
Add a function that waits for a device with a maximum number of iterations, which makes it possible to limit the waiting and prevent an infinite loop. This will be used by the subsequent patch, which sets two ports to different speeds in order to make sure they cannot negotiate a link. Waiting for the whole setup is limited to 10 minutes for each device. Signed-off-by: Amit Cohen <amitc@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-11-18  selftests: forwarding: Add ethtool_lib.sh  (Amit Cohen)  1 file, -0/+69
Functions:
1. speeds_arr_get: The function returns an array of speed values from /usr/include/linux/ethtool.h. The array looks as follows:
     [10baseT/Half] = 0,
     [10baseT/Full] = 1,
     ...
2. ethtool_set: params: cmd. The function runs ethtool by cmd (ethtool -s cmd) and checks if there was an error in configuration.
3. dev_speeds_get: params: dev, with_mode (0 or 1), adver (0 or 1). Return value: array of supported/advertised link modes, with/without mode.
     Example 1: speeds_get swp1 0 0, return: 1000 10000 40000
     Example 2: speeds_get swp1 1 1, return: 1000baseKX/Full 10000baseKR/Full 40000baseCR4/Full
4. common_speeds_get: params: dev1, dev2, with_mode (0 or 1), adver (0 or 1). Return value: array of common speeds of dev1 and dev2.
     Example: common_speeds_get swp1 swp2 0 0, return: 1000 10000, assuming that swp1 supports 1000 10000 40000 and swp2 supports 1000 10000
Signed-off-by: Amit Cohen <amitc@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-11-18  selftests: mlxsw: Check devlink device before running test  (Danielle Ratson)  1 file, -0/+5
The scale test for Spectrum-2 should only be invoked for Spectrum-2. Skip the test otherwise. Signed-off-by: Danielle Ratson <danieller@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-11-18  selftests: mlxsw: Add router scale test for Spectrum-2  (Danielle Ratson)  2 files, -1/+22
Same as for Spectrum-1, test the ability to add the maximum number of routes possible to the switch. Invoke the test from the 'resource_scale' wrapper script. Signed-off-by: Danielle Ratson <danieller@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-11-18  selftests, bpf: Workaround an alu32 sub-register spilling issue  (Yonghong Song)  1 file, -1/+3
Currently, with the latest llvm trunk, selftest test_progs fails on obj file test_seg6_loop.o with the following error in the verifier:

  infinite loop detected at insn 76

The byte code sequence looks like below; note that alu32 has been turned off by default for better generated code in general:

  48: w3 = 100
  49: *(u32 *)(r10 - 68) = r3
  ...
  ; if (tlv.type == SR6_TLV_PADDING) {
  76: if w3 == 5 goto -18 <LBB0_19>
  ...
  85: r1 = *(u32 *)(r10 - 68)
  ; for (int i = 0; i < 100; i++) {
  86: w1 += -1
  87: if w1 == 0 goto +5 <LBB0_20>
  88: *(u32 *)(r10 - 68) = r1

The main reason for the verification failure is partial spills at r10 - 68 for the induction variable "i". The current verifier only handles spills of 8-byte values. The above 4-byte value spill to the stack is treated as STACK_MISC and its content is not saved. For the above example:

  w3 = 100                  R3_w=inv100 fp-64_w=inv1086626730498
  *(u32 *)(r10 - 68) = r3   R3_w=inv100 fp-64_w=inv1086626730498
  ...
  r1 = *(u32 *)(r10 - 68)   R1_w=inv(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) fp-64=inv1086626730498

To resolve this issue, the verifier needs to be extended to track sub-registers in spilling, or llvm needs to be enhanced to prevent sub-register spilling in the register allocation phase. The former will increase verifier complexity and the latter will need some llvm "hacking". Let us work around this issue by declaring the induction variable as "long" type so spilling will happen at non sub-register level. We can revisit this later if sub-register spilling causes similar or other verification issues. Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20191117214036.1309510-1-yhs@fb.com
2019-11-18  selftests, bpf: Fix test_tc_tunnel hanging  (Jiri Benc)  1 file, -0/+5
When run_kselftests.sh is run, it hangs after test_tc_tunnel.sh. The reason is test_tc_tunnel.sh ensures the server ('nc -l') is run all the time, starting it again every time it is expected to terminate. The exception is the final client_connect: the server is not started anymore, which ensures no process is kept running after the test is finished. For a sit test, though, the script is terminated prematurely without the final client_connect and the 'nc' process keeps running. This in turn causes the run_one function in kselftest/runner.sh to hang forever, waiting for the runaway process to finish. Ensure a remaining server is terminated on cleanup. Fixes: f6ad6accaa99 ("selftests/bpf: expand test_tc_tunnel with SIT encap") Signed-off-by: Jiri Benc <jbenc@redhat.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Willem de Bruijn <willemb@google.com> Link: https://lore.kernel.org/bpf/60919291657a9ee89c708d8aababc28ebe1420be.1573821780.git.jbenc@redhat.com
2019-11-18  selftests, bpf: xdping is not meant to be run standalone  (Jiri Benc)  1 file, -2/+2
The actual test to run is test_xdping.sh, which is already in TEST_PROGS. The xdping program alone is not runnable with 'make run_tests'; it immediately fails due to missing arguments. Move xdping to TEST_GEN_PROGS_EXTENDED in order to be built but not run. Fixes: cd5385029f1d ("selftests/bpf: measure RTT from xdp using xdping") Signed-off-by: Jiri Benc <jbenc@redhat.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Alan Maguire <alan.maguire@oracle.com> Acked-by: Toke Høiland-Jørgensen <toke@redhat.com> Link: https://lore.kernel.org/bpf/4365c81198f62521344c2215909634407184387e.1573821726.git.jbenc@redhat.com
2019-11-18  selftests/bpf: Add BPF_TYPE_MAP_ARRAY mmap() tests  (Andrii Nakryiko)  3 files, -18/+292
Add selftests validating mmap()-ing BPF array maps: both single-element and multi-element ones. Check that plain bpf_map_update_elem() and bpf_map_lookup_elem() work correctly with memory-mapped array. Also convert CO-RE relocation tests to use memory-mapped views of global data. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Song Liu <songliubraving@fb.com> Link: https://lore.kernel.org/bpf/20191117172806.2195367-6-andriin@fb.com
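A hedged sketch of the userspace side of such a check, assuming a libbpf and kernel with BPF_F_MMAPABLE support (error handling trimmed, names illustrative, not the actual selftest):

  #include <sys/mman.h>
  #include <bpf/bpf.h>
  #include <linux/bpf.h>

  static int mmap_array_demo(void)
  {
          int key = 0, fd;
          long val = 0, *mem;

          fd = bpf_create_map_name(BPF_MAP_TYPE_ARRAY, "mmapable",
                                   sizeof(int), sizeof(long), 1,
                                   BPF_F_MMAPABLE);
          if (fd < 0)
                  return -1;

          mem = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
          if (mem == MAP_FAILED)
                  return -1;

          mem[0] = 42;                            /* plain store through the mapping */
          bpf_map_lookup_elem(fd, &key, &val);    /* must observe the same value */
          return val == 42 ? 0 : -1;
  }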
2019-11-18  selftests/clone3: skip if clone3() is ENOSYS  (Christian Brauner)  4 files, -30/+33
If the clone3() syscall is not implemented we should skip the tests. Fixes: 41585bbeeef9 ("selftests: add tests for clone3() with *set_tid") Fixes: 17a810699c18 ("selftests: add tests for clone3()") Signed-off-by: Christian Brauner <christian.brauner@ubuntu.com>
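A sketch of one common way to probe for clone3() without creating a child; the fallback __NR_clone3 value and the helper are assumptions, and the real tests use the kselftest harness helpers for skip reporting:

  #include <errno.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  #ifndef __NR_clone3
  #define __NR_clone3 435
  #endif

  /* A deliberately invalid size makes an implemented clone3() fail with
   * EINVAL, while an unimplemented one fails with ENOSYS; either way no
   * child is created by the probe. */
  static int clone3_supported(void)
  {
          long ret = syscall(__NR_clone3, NULL, (size_t)0);

          return !(ret == -1 && errno == ENOSYS);
  }

  /* usage sketch: if (!clone3_supported()) { report SKIP and return; } */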
2019-11-18  selftests/clone3: check that all pids are released on error paths  (Andrei Vagin)  1 file, -2/+14
This is a regression test case for an issue when pids have not been released on error paths. Signed-off-by: Andrei Vagin <avagin@gmail.com> Link: https://lore.kernel.org/r/20191118064750.408003-3-avagin@gmail.com Signed-off-by: Christian Brauner <christian.brauner@ubuntu.com>
2019-11-18  selftests/clone3: report a correct number of fails  (Andrei Vagin)  1 file, -7/+3
In clone3_set_tid, a few test cases are running in a child process. And right now, if one of these test cases fails, the whole test will exit with the success status. Fixes: 41585bbeeef9 ("selftests: add tests for clone3() with *set_tid") Signed-off-by: Andrei Vagin <avagin@gmail.com> Link: https://lore.kernel.org/r/20191118064750.408003-2-avagin@gmail.com Signed-off-by: Christian Brauner <christian.brauner@ubuntu.com>
2019-11-18  selftests/clone3: flush stdout and stderr before clone3() and _exit()  (Andrei Vagin)  2 files, -4/+13
Buffers have to be flushed before clone3() to avoid double messages in the log. Fixes: 41585bbeeef9 ("selftests: add tests for clone3() with *set_tid") Signed-off-by: Andrei Vagin <avagin@gmail.com> Link: https://lore.kernel.org/r/20191118064750.408003-1-avagin@gmail.com Signed-off-by: Christian Brauner <christian.brauner@ubuntu.com>
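Why the flush matters, as a small illustrative program (fork() stands in here for the raw clone3() call):

  #include <stdio.h>
  #include <unistd.h>

  int main(void)
  {
          /* When output is redirected to a file or pipe, stdio is fully
           * buffered; without a flush the buffered text is duplicated into
           * the child and ends up printed twice in the log. */
          printf("about to clone\n");
          fflush(stdout);
          fflush(stderr);

          if (fork() == 0)
                  _exit(0);       /* _exit() does not flush stdio either */
          return 0;
  }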
2019-11-16  selftests: net: avoid ptl lock contention in tcp_mmap  (Eric Dumazet)  1 file, -5/+53
tcp_mmap is used as a reference program for TCP rx zerocopy, so it is important to point out some potential issues. If multiple threads are concurrently using getsockopt(... TCP_ZEROCOPY_RECEIVE), there is a chance the low-level mm functions compete on a shared ptl lock, if VMAs are arbitrarily placed. Instead of letting the mm layer place the chunks back to back, this patch enforces an alignment so that each thread uses a different ptl lock. Performance measured on a 100 Gbit NIC, with 8 tcp_mmap clients launched at the same time:

  $ for f in {1..8}; do ./tcp_mmap -H 2002:a05:6608:290:: & done

In the following run, we reproduce the old behavior by requesting no alignment:

  $ tcp_mmap -sz -C $((128*1024)) -a 4096
  received 32768 MB (100 % mmap'ed) in 9.69532 s, 28.3516 Gbit, cpu usage user:0.08634 sys:3.86258, 120.511 usec per MB, 171839 c-switches
  received 32768 MB (100 % mmap'ed) in 25.4719 s, 10.7914 Gbit, cpu usage user:0.055268 sys:21.5633, 659.745 usec per MB, 9065 c-switches
  received 32768 MB (100 % mmap'ed) in 28.5419 s, 9.63069 Gbit, cpu usage user:0.057401 sys:23.8761, 730.392 usec per MB, 14987 c-switches
  received 32768 MB (100 % mmap'ed) in 28.655 s, 9.59268 Gbit, cpu usage user:0.059689 sys:23.8087, 728.406 usec per MB, 18509 c-switches
  received 32768 MB (100 % mmap'ed) in 28.7808 s, 9.55074 Gbit, cpu usage user:0.066042 sys:23.4632, 718.056 usec per MB, 24702 c-switches
  received 32768 MB (100 % mmap'ed) in 28.8259 s, 9.5358 Gbit, cpu usage user:0.056547 sys:23.6628, 723.858 usec per MB, 23518 c-switches
  received 32768 MB (100 % mmap'ed) in 28.8808 s, 9.51767 Gbit, cpu usage user:0.059357 sys:23.8515, 729.703 usec per MB, 14691 c-switches
  received 32768 MB (100 % mmap'ed) in 28.8879 s, 9.51534 Gbit, cpu usage user:0.047115 sys:23.7349, 725.769 usec per MB, 21773 c-switches

With the new behavior (automatic alignment based on Hugepagesize), we can see the system overhead being dramatically reduced:

  $ tcp_mmap -sz -C $((128*1024))
  received 32768 MB (100 % mmap'ed) in 13.5339 s, 20.3103 Gbit, cpu usage user:0.122644 sys:3.4125, 107.884 usec per MB, 168567 c-switches
  received 32768 MB (100 % mmap'ed) in 16.0335 s, 17.1439 Gbit, cpu usage user:0.132428 sys:3.55752, 112.608 usec per MB, 188557 c-switches
  received 32768 MB (100 % mmap'ed) in 17.5506 s, 15.6621 Gbit, cpu usage user:0.155405 sys:3.24889, 103.891 usec per MB, 226652 c-switches
  received 32768 MB (100 % mmap'ed) in 19.1924 s, 14.3222 Gbit, cpu usage user:0.135352 sys:3.35583, 106.542 usec per MB, 207404 c-switches
  received 32768 MB (100 % mmap'ed) in 22.3649 s, 12.2906 Gbit, cpu usage user:0.142429 sys:3.53187, 112.131 usec per MB, 250225 c-switches
  received 32768 MB (100 % mmap'ed) in 22.5336 s, 12.1986 Gbit, cpu usage user:0.140654 sys:3.61971, 114.757 usec per MB, 253754 c-switches
  received 32768 MB (100 % mmap'ed) in 22.5483 s, 12.1906 Gbit, cpu usage user:0.134035 sys:3.55952, 112.718 usec per MB, 252997 c-switches
  received 32768 MB (100 % mmap'ed) in 22.6442 s, 12.139 Gbit, cpu usage user:0.126173 sys:3.71251, 117.147 usec per MB, 253728 c-switches

Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Soheil Hassas Yeganeh <soheil@google.com> Cc: Arjun Roy <arjunroy@google.com> Acked-by: Soheil Hassas Yeganeh <soheil@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
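A hedged sketch of the alignment trick itself (over-reserve and round up; the real tcp_mmap derives the alignment from Hugepagesize, and the unused slack is left mapped here for brevity):

  #include <stdint.h>
  #include <sys/mman.h>

  /* Reserve chunk+align bytes and return an address aligned to 'align', so
   * each thread's chunk sits under its own page-table page and therefore
   * its own ptl lock rather than sharing one with its neighbours. */
  static void *reserve_aligned_chunk(size_t chunk, size_t align)
  {
          void *base = mmap(NULL, chunk + align, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
          uintptr_t p;

          if (base == MAP_FAILED)
                  return MAP_FAILED;
          p = ((uintptr_t)base + align - 1) & ~(uintptr_t)(align - 1);
          return (void *)p;
  }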
2019-11-16  selftests/x86/iopl: Extend test to cover IOPL emulation  (Thomas Gleixner)  1 file, -11/+118
Add tests that the now emulated iopl() functionality:
 - no longer allows user space to disable interrupts
 - does restore the I/O bitmap when IOPL is dropped
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2019-11-16  selftests/x86/ioperm: Extend testing so the shared bitmap is exercised  (Thomas Gleixner)  1 file, -1/+15
Add code to the fork path which forces the shared bitmap to be duplicated and the reference count to be dropped. Verify that the child modifications did not affect the parent. Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2019-11-15  selftests: add tests for clone3() with *set_tid  (Adrian Reber)  6 files, -26/+421
This tests clone3() with *set_tid to see if all desired PIDs are working as expected. The tests are trying multiple invalid input parameters as well as creating processes while specifying a certain PID in multiple PID namespaces at the same time. Additionally this moves common clone3() test code into clone3_selftests.h. Signed-off-by: Adrian Reber <areber@redhat.com> Acked-by: Christian Brauner <christian.brauner@ubuntu.com> Link: https://lore.kernel.org/r/20191115123621.142252-2-areber@redhat.com Signed-off-by: Christian Brauner <christian.brauner@ubuntu.com>
2019-11-15  selftests/bpf: Add a test for attaching BPF prog to another BPF prog and subprog  (Alexei Starovoitov)  2 files, -0/+167
Add a test that attaches one FEXIT program to main sched_cls networking program and two other FEXIT programs to subprograms. All three tracing programs access return values and skb->len of networking program and subprograms. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Song Liu <songliubraving@fb.com> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20191114185720.1641606-21-ast@kernel.org
2019-11-15  selftests/bpf: Extend test_pkt_access test  (Alexei Starovoitov)  1 file, -2/+36
test_pkt_access.o is used by multiple tests. Fix its section name so that the program type can be automatically detected by libbpf, and make it call other subprograms with an skb argument. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Song Liu <songliubraving@fb.com> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20191114185720.1641606-20-ast@kernel.org