2019-11-15 | bpf: Introduce BPF trampoline | Alexei Starovoitov | 9 files, -10/+735

Introduce the BPF trampoline concept to allow kernel code to call into BPF programs with practically zero overhead. The trampoline generation logic is architecture dependent: it converts the native calling convention into the BPF calling convention. The BPF ISA is 64-bit (even on 32-bit architectures). The registers R1 to R5 are used to pass arguments into BPF functions. The main BPF program accepts only a single argument, "ctx", in R1, whereas the CPU native calling convention differs: x86-64 passes the first 6 arguments in registers and the rest on the stack, x86-32 passes the first 3 arguments in registers, sparc64 passes the first 6 in registers, and so on.

Trampolines between BPF and the kernel already exist. The BPF_CALL_x macros in include/linux/filter.h statically compile trampolines from BPF into kernel helpers. They convert up to five u64 arguments into kernel C pointers and integers. On 64-bit architectures these BPF-to-kernel trampolines are nops; on 32-bit architectures they're meaningful. The opposite job, kernel-to-BPF trampolines, is done by the CAST_TO_U64 macros and __bpf_trace_##call() shim functions in include/trace/bpf_probe.h. They convert kernel function arguments into an array of u64s that the BPF program consumes via the R1=ctx pointer.

This patch set does the same job as the __bpf_trace_##call() static trampolines, but dynamically for any kernel function. There are ~22k global kernel functions that are attachable via a nop at function entry. The function arguments and types are described in BTF. The job of the btf_distill_func_proto() function is to extract useful information from BTF into a "function model" that architecture-dependent trampoline generators use to generate assembly code that casts kernel function arguments into an array of u64s. For example, the kernel function eth_type_trans has two pointers. They will be cast to u64 and stored on the stack of the generated trampoline. The pointer to that stack space is passed into the BPF program in R1. On x86-64 such a generated trampoline consumes 16 bytes of stack and two stores of %rdi and %rsi into the stack. The verifier makes sure that only two u64s are accessed read-only by the BPF program. The verifier also recognizes the precise type of the pointers being accessed and will not allow typecasting of a pointer to a different type within the BPF program.

The tracing use case in the datacenter demonstrated that certain key kernel functions (like tcp_retransmit_skb) have 2 or more kprobes that are always active. Other functions have both a kprobe and a kretprobe. So it is essential to keep both kernel code and BPF programs executing at maximum speed. Hence the generated BPF trampoline is re-generated every time a new program is attached or detached, to maintain maximum performance.

To avoid the high cost of retpoline, the attached BPF programs are called directly. __bpf_prog_enter/exit() are used to support per-program execution stats. In the future this logic will be optimized further by adding support for bpf_stats_enabled_key inside the generated assembly code. The introduction of preemptible and sleepable BPF programs will completely remove the need to call __bpf_prog_enter/exit().

Detach of a BPF program from the trampoline should not fail. To avoid memory allocation in the detach path, half of the page is used as a reserve and flipped after each attach/detach. 2k bytes is enough to call 40+ BPF programs directly, which is enough for BPF tracing use cases. This limit can be increased in the future.

BPF_TRACE_FENTRY programs have access to raw kernel function arguments, while BPF_TRACE_FEXIT programs have access to the kernel return value as well. Often a kprobe BPF program remembers function arguments in a map while the kretprobe fetches the arguments from the map and analyzes them together with the return value. BPF_TRACE_FEXIT accelerates this typical use case.

Recursion prevention for kprobe BPF programs is done via a per-cpu bpf_prog_active counter. In practice that turned out to be a mistake: it caused programs to randomly skip execution, and the tracing tools missed results they were looking for. Hence BPF trampoline doesn't provide built-in recursion prevention. It's the job of the BPF program itself and will be addressed in follow-up patches.

BPF trampoline is intended to be used beyond tracing and fentry/fexit use cases in the future, for example to remove the retpoline cost from XDP programs.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20191114185720.1641606-5-ast@kernel.org
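As a rough illustration of what the trampoline exposes to a program (a hedged sketch, not code from this patch): a BPF_TRACE_FEXIT program attached to eth_type_trans() sees the two pointer arguments plus the return value as consecutive u64 slots prepared by the trampoline. The struct layout, section name, and program shape below are assumptions based on the description above.

    #include <linux/bpf.h>
    #include "bpf_helpers.h"

    /* Hypothetical ctx layout for eth_type_trans(): the trampoline
     * stores each argument as a u64 slot, with the return value last
     * (FEXIT only), per the commit's description. */
    struct eth_type_trans_ctx {
            __u64 skb;      /* struct sk_buff *, cast to u64 */
            __u64 dev;      /* struct net_device *, cast to u64 */
            __u64 ret;      /* return value, FEXIT only */
    };

    SEC("fexit/eth_type_trans")
    int trace_eth_type_trans(struct eth_type_trans_ctx *ctx)
    {
            /* Only these two u64 args plus ret are readable; the
             * verifier rejects reads past them. */
            return ctx->ret == 0;
    }

    char _license[] SEC("license") = "GPL";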
2019-11-15 | bpf: Add bpf_arch_text_poke() helper | Alexei Starovoitov | 3 files, -0/+65

Add bpf_arch_text_poke() helper that is used by BPF trampoline logic to patch nops/calls in kernel text into calls into BPF trampoline and to patch calls/nops inside BPF programs too.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Song Liu <songliubraving@fb.com>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20191114185720.1641606-4-ast@kernel.org
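For reference, the helper's interface is roughly the following; this is a sketch from memory of the mainline shape, and the exact enum values and parameter names in this patch may differ:

    enum bpf_text_poke_type {
            BPF_MOD_CALL,   /* patch a call site (nop <-> call) */
            BPF_MOD_JUMP,   /* patch a jump site (nop <-> jump) */
    };

    /* Replace the instruction at ip: old_addr is the expected current
     * target (NULL when a nop sits there), new_addr the call/jump
     * target to install (NULL to restore the nop). Returns 0 or
     * -errno. */
    int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t,
                           void *old_addr, void *new_addr);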
2019-11-15 | bpf: Refactor x86 JIT into helpers | Alexei Starovoitov | 1 file, -54/+98

Refactor x86 JITing of LDX, STX, CALL instructions into separate helper functions. No functional changes in LDX and STX helpers. There is a minor change in CALL helper. It will populate target address correctly on the first pass of JIT instead of second pass. That won't reduce total number of JIT passes though.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Song Liu <songliubraving@fb.com>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20191114185720.1641606-3-ast@kernel.org
2019-11-15 | x86/alternatives: Teach text_poke_bp() to emulate instructions | Peter Zijlstra | 4 files, -46/+130

In preparation for static_call and variable size jump_label support, teach text_poke_bp() to emulate instructions, namely:

  JMP32, JMP8, CALL, NOP2, NOP_ATOMIC5, INT3

The current text_poke_bp() takes a @handler argument which is used as a jump target when the temporary INT3 is hit by a different CPU. When patching CALL instructions, this doesn't work because we'd miss the PUSH of the return address. Instead, teach poke_int3_handler() to emulate an instruction, typically the instruction we're patching in.

This fits almost all text_poke_bp() users, except arch_unoptimize_kprobe() which restores random text, and for that site we have to build an explicit emulate instruction.

Tested-by: Alexei Starovoitov <ast@kernel.org>
Tested-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
Reviewed-by: Daniel Bristot de Oliveira <bristot@redhat.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: https://lkml.kernel.org/r/20191111132457.529086974@infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 8c7eebc10687af45ac8e40ad1bac0cf7893dba9f)
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
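With this change text_poke_bp() grows a fourth parameter carrying the instruction to emulate. A hedged sketch of how a caller might patch in a CALL; the helper name patch_call_site is hypothetical, and the exact kernel-internal details are assumptions:

    /* Kernel-internal sketch; the declaration lives in
     * arch/x86/include/asm/text-patching.h. */
    static void patch_call_site(void *ip, void *target)
    {
            u8 insn[5];
            s32 rel = (s32)((long)target - ((long)ip + 5));

            insn[0] = 0xE8;                      /* CALL rel32 */
            memcpy(&insn[1], &rel, sizeof(rel));

            /* 4th argument: the instruction poke_int3_handler() should
             * emulate if another CPU hits the temporary INT3 -- here,
             * the very CALL being written in. */
            text_poke_bp(ip, insn, sizeof(insn), insn);
    }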
2019-11-15 | bpf, doc: Change right arguments for JIT example code | Mao Wenan | 1 file, -4/+4

The example code for the x86_64 JIT uses the wrong arguments when calling function bar().

Signed-off-by: Mao Wenan <maowenan@huawei.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191114034351.162740-1-maowenan@huawei.com
2019-11-15 | samples/bpf: Add missing option to xdpsock usage | Andre Guedes | 1 file, -0/+2

Commit 743e568c1586 (samples/bpf: Add a "force" flag to XDP samples) introduced the '-F' option but missed adding it to the usage() and the 'long_option' array.

Fixes: 743e568c1586 (samples/bpf: Add a "force" flag to XDP samples)
Signed-off-by: Andre Guedes <andre.guedes@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191114162847.221770-2-andre.guedes@intel.com
2019-11-15 | samples/bpf: Remove duplicate option from xdpsock | Andre Guedes | 1 file, -1/+0

The '-f' option is shown twice in the usage(). This patch removes the outdated version.

Signed-off-by: Andre Guedes <andre.guedes@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191114162847.221770-1-andre.guedes@intel.com
2019-11-15 | s390/bpf: Make sure JIT passes do not increase code size | Ilya Leoshkevich | 1 file, -8/+66

The upcoming s390 branch length extension patches rely on the "passes do not increase code size" property in order to consistently choose between short and long branches. Currently this property does not hold between the first and the second passes for register save/restore sequences, as well as for various code fragments that depend on SEEN_* flags.

Generate the code during the first pass conservatively: assume register save/restore sequences have the maximum possible length, and that all SEEN_* flags are set. Also refuse to JIT if this happens anyway (e.g. due to a bug), as this might lead to verifier bypass once long branches are introduced.

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191114151820.53222-1-iii@linux.ibm.com
2019-11-15 | bpf: Support doubleword alignment in bpf_jit_binary_alloc | Ilya Leoshkevich | 2 files, -2/+8

Currently passing alignment greater than 4 to bpf_jit_binary_alloc does not work: in such cases it silently aligns only to 4 bytes.

On s390, in order to load a constant from memory in a large (>512k) BPF program, one must use lgrl instruction, whose memory operand must be aligned on an 8-byte boundary.

This patch makes it possible to request 8-byte alignment from bpf_jit_binary_alloc, and also makes it issue a warning when an unsupported alignment is requested.

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191115123722.58462-1-iii@linux.ibm.com
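The allocator takes the requested alignment directly as a parameter. A hedged sketch of an arch JIT asking for 8-byte alignment; prog_size, jit_fill_hole, and orig_prog are illustrative names, not taken from the patch:

    struct bpf_binary_header *header;
    u8 *image;

    /* Request 8-byte alignment so instructions like s390's lgrl can
     * address literals in the JITed image directly. */
    header = bpf_jit_binary_alloc(prog_size, &image, 8, jit_fill_hole);
    if (!header)
            return orig_prog;   /* allocation failed, keep interpreter */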
2019-11-11 | bpf, testing: Add missing object file to TEST_FILES | Anders Roxell | 1 file, -1/+2

When installing kselftests to its own directory and running test_lwt_ip_encap.sh, it will complain that test_lwt_ip_encap.o can't be found. Same with the test_tc_edt.sh test; it will complain that test_tc_edt.o can't be found.

  $ ./test_lwt_ip_encap.sh
  starting egress IPv4 encap test
  Error opening object test_lwt_ip_encap.o: No such file or directory
  Object hashing failed!
  Cannot initialize ELF context!
  Failed to parse eBPF program: Invalid argument

Rework to add test_lwt_ip_encap.o and test_tc_edt.o to TEST_FILES so the object files get installed when installing kselftest.

Fixes: 74b5a5968fe8 ("selftests/bpf: Replace test_progs and test_maps w/ general rule")
Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20191111161728.8854-1-anders.roxell@linaro.org
2019-11-11 | bpf, testing: Workaround a verifier failure for test_progs | Yonghong Song | 1 file, -1/+4

With the latest llvm compiler, running test_progs will hit the following verifier failure for test_sysctl_loop1.o:

  libbpf: load bpf program failed: Permission denied
  libbpf: -- BEGIN DUMP LOG ---
  libbpf: invalid indirect read from stack var_off (0x0; 0xff)+196 size 7
  ...
  libbpf: -- END LOG --
  libbpf: failed to load program 'cgroup/sysctl'
  libbpf: failed to load object 'test_sysctl_loop1.o'

The related bytecode looks as below:

  0000000000000308 LBB0_8:
       97: r4 = r10
       98: r4 += -288
       99: r4 += r7
      100: w8 &= 255
      101: r1 = r10
      102: r1 += -488
      103: r1 += r8
      104: r2 = 7
      105: r3 = 0
      106: call 106
      107: w1 = w0
      108: w1 += -1
      109: if w1 > 6 goto -24 <LBB0_5>
      110: w0 += w8
      111: r7 += 8
      112: w8 = w0
      113: if r7 != 224 goto -17 <LBB0_8>

And the source code:

  for (i = 0; i < ARRAY_SIZE(tcp_mem); ++i) {
          ret = bpf_strtoul(value + off, MAX_ULONG_STR_LEN, 0,
                            tcp_mem + i);
          if (ret <= 0 || ret > MAX_ULONG_STR_LEN)
                  return 0;
          off += ret & MAX_ULONG_STR_LEN;
  }

The current verifier is not able to conclude that register w0 before the '+' at insn 110 has a range of 1 to 7, and thinks it is in the range 0 - 255. This leads to a more conservative range for w8 at insn 112, and a later verifier complaint.

Let us work around this issue until we find a compiler and/or verifier solution. The workaround in this patch is to make variable 'ret' volatile, which forces a reload; the '&' operation then ensures a better value range. With this patch, I got the below byte code for the loop:

  0000000000000328 LBB0_9:
      101: r4 = r10
      102: r4 += -288
      103: r4 += r7
      104: w8 &= 255
      105: r1 = r10
      106: r1 += -488
      107: r1 += r8
      108: r2 = 7
      109: r3 = 0
      110: call 106
      111: *(u32 *)(r10 - 64) = r0
      112: r1 = *(u32 *)(r10 - 64)
      113: if w1 s< 1 goto -28 <LBB0_5>
      114: r1 = *(u32 *)(r10 - 64)
      115: if w1 s> 7 goto -30 <LBB0_5>
      116: r1 = *(u32 *)(r10 - 64)
      117: w1 &= 7
      118: w1 += w8
      119: r7 += 8
      120: w8 = w1
      121: if r7 != 224 goto -21 <LBB0_9>

Insn 117 does the '&' operation and we get a more precise value range for 'w8' at insn 120. The test is happy then:

  #3/17 test_sysctl_loop1.o:OK

Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20191107170045.2503480-1-yhs@fb.com
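The resulting source-level change is small; a sketch of the fixed loop follows, where the exact declared type of 'ret' in the patch is an assumption:

    volatile int ret;   /* volatile forces a store/reload, so the
                         * following '&' re-narrows the value range
                         * the verifier tracks */

    for (i = 0; i < ARRAY_SIZE(tcp_mem); ++i) {
            ret = bpf_strtoul(value + off, MAX_ULONG_STR_LEN, 0,
                              tcp_mem + i);
            if (ret <= 0 || ret > MAX_ULONG_STR_LEN)
                    return 0;
            off += ret & MAX_ULONG_STR_LEN;
    }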
2019-11-10 | xsk: Extend documentation for Rx|Tx-only sockets and shared umems | Magnus Karlsson | 1 file, -5/+23

Add more documentation about the new Rx-only and Tx-only sockets in libbpf and also about how libbpf can now support shared umems. Two pieces of the existing text that could be improved were also found and fixed in this commit.

Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Tested-by: William Tu <u9012063@gmail.com>
Acked-by: Jonathan Lemon <jonathan.lemon@gmail.com>
Link: https://lore.kernel.org/bpf/1573148860-30254-6-git-send-email-magnus.karlsson@intel.com
2019-11-10 | samples/bpf: Use Rx-only and Tx-only sockets in xdpsock | Magnus Karlsson | 1 file, -12/+29

Use Rx-only sockets for the rxdrop sample and Tx-only sockets for the txpush sample in the xdpsock application, so that we exercise and showcase these socket types too.

Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Tested-by: William Tu <u9012063@gmail.com>
Acked-by: Jonathan Lemon <jonathan.lemon@gmail.com>
Link: https://lore.kernel.org/bpf/1573148860-30254-5-git-send-email-magnus.karlsson@intel.com
2019-11-10 | libbpf: Allow for creating Rx or Tx only AF_XDP sockets | Magnus Karlsson | 1 file, -2/+3

The libbpf AF_XDP code is extended to allow for the creation of Rx only or Tx only sockets. Previously it returned an error if the socket was not initialized for both Rx and Tx.

Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Tested-by: William Tu <u9012063@gmail.com>
Acked-by: Jonathan Lemon <jonathan.lemon@gmail.com>
Link: https://lore.kernel.org/bpf/1573148860-30254-4-git-send-email-magnus.karlsson@intel.com
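With this change, passing NULL for the ring you don't need is how a single-direction socket is created. A hedged usage sketch; the interface name, queue id, and the umem setup around it are illustrative:

    struct xsk_socket *xsk;
    struct xsk_ring_cons rx;
    struct xsk_socket_config cfg = {
            .rx_size = XSK_RING_CONS__DEFAULT_NUM_DESCS,
    };
    int err;

    /* NULL Tx ring => Rx-only socket; before this patch libbpf
     * rejected this combination. */
    err = xsk_socket__create(&xsk, "eth0", 0 /* queue */, umem,
                             &rx, NULL /* no Tx */, &cfg);
    if (err)
            fprintf(stderr, "xsk_socket__create: %s\n", strerror(-err));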
2019-11-10 | samples/bpf: Add XDP_SHARED_UMEM support to xdpsock | Magnus Karlsson | 4 files, -42/+135

Add support for the XDP_SHARED_UMEM mode to the xdpsock sample application. As libbpf does not have a built-in XDP program for this mode, we use an explicitly loaded XDP program. This also serves as an example of how to write your own XDP program that can route to an AF_XDP socket.

Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Tested-by: William Tu <u9012063@gmail.com>
Acked-by: Jonathan Lemon <jonathan.lemon@gmail.com>
Link: https://lore.kernel.org/bpf/1573148860-30254-3-git-send-email-magnus.karlsson@intel.com
2019-11-10 | libbpf: Support XDP_SHARED_UMEM with external XDP program | Magnus Karlsson | 1 file, -10/+17

Add support in libbpf to create multiple sockets that share a single umem. Note that an external XDP program needs to be supplied that routes the incoming traffic to the desired sockets. So you need to supply the libbpf_flag XSK_LIBBPF_FLAGS__INHIBIT_PROG_LOAD and load your own XDP program.

Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Tested-by: William Tu <u9012063@gmail.com>
Acked-by: Jonathan Lemon <jonathan.lemon@gmail.com>
Link: https://lore.kernel.org/bpf/1573148860-30254-2-git-send-email-magnus.karlsson@intel.com
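A minimal sketch of such an external XDP program, routing packets to the sockets sharing the umem through an XSKMAP keyed by Rx queue index; map and program names here are illustrative, not taken from the sample:

    #include <linux/bpf.h>
    #include "bpf_helpers.h"

    struct {
            __uint(type, BPF_MAP_TYPE_XSKMAP);
            __uint(max_entries, 4);
            __uint(key_size, sizeof(int));
            __uint(value_size, sizeof(int));
    } xsks_map SEC(".maps");

    SEC("xdp_sock")
    int xdp_sock_prog(struct xdp_md *ctx)
    {
            /* Route by Rx queue index; each AF_XDP socket registered
             * itself in the map slot for its queue. Packets with no
             * matching socket are dropped. */
            return bpf_redirect_map(&xsks_map, ctx->rx_queue_index, 0);
    }

    char _license[] SEC("license") = "GPL";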
2019-11-10 | libbpf: Add getter for program size | Toke Høiland-Jørgensen | 3 files, -0/+9

This adds a new getter for the BPF program size (in bytes). This is useful for a caller that is trying to predict how much memory will be locked by loading a BPF object into the kernel.

Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/157333185272.88376.10996937115395724683.stgit@toke.dk
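Assuming the getter is named bpf_program__size() per libbpf's naming convention (my reading, not quoted from the patch), a caller could estimate locked memory like this:

    struct bpf_program *prog;
    size_t total = 0;

    bpf_object__for_each_program(prog, obj)
            total += bpf_program__size(prog);   /* size in bytes */

    printf("programs in this object need roughly %zu bytes of locked memory\n",
           total);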
2019-11-10 | libbpf: Add bpf_get_link_xdp_info() function to get more XDP information | Toke Høiland-Jørgensen | 3 files, -28/+67

Currently, libbpf only provides a function to get a single ID for the XDP program attached to the interface. However, it can be useful to get the full set of program IDs attached, along with the attachment mode, in one go. Add a new getter function to support this, using an extendible structure to carry the information. Express the old bpf_get_link_xdp_id() function in terms of the new function.

Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Song Liu <songliubraving@fb.com>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/157333185164.88376.7520653040667637246.stgit@toke.dk
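The extendible structure and getter look roughly like this; the field names are recalled from libbpf's headers and should be treated as a sketch:

    struct xdp_link_info {
            __u32 prog_id;      /* ID for the currently effective mode */
            __u32 drv_prog_id;  /* native/driver mode */
            __u32 hw_prog_id;   /* hardware-offloaded */
            __u32 skb_prog_id;  /* generic/skb mode */
            __u8  attach_mode;  /* XDP_ATTACHED_* */
    };

    struct xdp_link_info info = {};

    /* Passing info_size lets the struct grow in later libbpf versions
     * without breaking old callers. */
    if (!bpf_get_link_xdp_info(ifindex, &info, sizeof(info), 0))
            printf("drv %u skb %u hw %u mode %u\n", info.drv_prog_id,
                   info.skb_prog_id, info.hw_prog_id, info.attach_mode);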
2019-11-10 | libbpf: Use pr_warn() when printing netlink errors | Toke Høiland-Jørgensen | 2 files, -6/+7

The netlink functions were using fprintf(stderr, ...) directly to print out error messages, instead of going through the usual logging macros. This makes it impossible for the calling application to silence or redirect those error messages. Fix this by switching to pr_warn() in nlattr.c and netlink.c.

Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/157333185055.88376.15999360127117901443.stgit@toke.dk
2019-11-10 | libbpf: Propagate EPERM to caller on program load | Toke Høiland-Jørgensen | 1 file, -16/+11

When loading an eBPF program, libbpf overrides the return code for EPERM errors instead of returning it to the caller. This makes it hard to figure out what went wrong on load. In particular, EPERM is returned when the system rlimit is too low to lock the memory required for the BPF program.

Previously, this was somewhat obscured because the rlimit error would be hit on map creation (which does return it correctly). However, since maps can now be reused, object load can proceed all the way to loading programs without hitting the error; propagating it even in this case makes it possible for the caller to react appropriately (and, e.g., attempt to raise the rlimit before retrying).

Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/157333184946.88376.11768171652794234561.stgit@toke.dk
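A sketch of the reaction this enables in a caller, raising RLIMIT_MEMLOCK on EPERM and retrying; the negative-errno return convention is assumed:

    #include <sys/resource.h>
    #include <errno.h>

    int err = bpf_object__load(obj);
    if (err == -EPERM) {
            struct rlimit r = { RLIM_INFINITY, RLIM_INFINITY };

            /* The locked-memory rlimit was likely the culprit; raise
             * it and retry once. */
            if (!setrlimit(RLIMIT_MEMLOCK, &r))
                    err = bpf_object__load(obj);
    }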
2019-11-10 | selftests/bpf: Add tests for automatic map unpinning on load failure | Toke Høiland-Jørgensen | 2 files, -4/+18

This adds tests for the different variations of automatic map unpinning on load failure.

Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/157333184838.88376.8243704248624814775.stgit@toke.dk
2019-11-10 | libbpf: Unpin auto-pinned maps if loading fails | Toke Høiland-Jørgensen | 1 file, -1/+8

Since the automatic map-pinning happens during load, it will leave pinned maps around if the load fails at a later stage. Fix this by unpinning any pinned maps on cleanup. To avoid unpinning pinned maps that were reused rather than newly pinned, add a new boolean property on struct bpf_map to keep track of whether that map was reused or not; and only unpin those maps that were not reused.

Fixes: 57a00f41644f ("libbpf: Add auto-pinning of maps when loading BPF objects")
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/157333184731.88376.9992935027056165873.stgit@toke.dk
2019-11-08 | samples: bpf: update map definition to new syntax BTF-defined map | Daniel T. Lee | 12 files, -178/+178

Since the new syntax for BTF-defined maps has been introduced, the syntax for maps under the samples directory is mixed up. For example, some samples already use the new syntax, and some use the existing syntax, referring to it as 'legacy'. As stated in commit abd29c931459 ("libbpf: allow specifying map definitions using BTF"), the BTF-defined map offers better compatibility for extending supported map definition features.

This commit doesn't replace all of the maps with new BTF-defined maps, because some of the samples still use bpf_load instead of libbpf, which can't properly create BTF-defined maps. It only updates the samples which use the libbpf API for loading bpf programs (e.g. bpf_prog_load_xattr).

Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
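For context, the two syntaxes side by side; the map names and key/value types here are illustrative, not taken from the samples:

    /* legacy definition, placed in SEC("maps") */
    struct bpf_map_def SEC("maps") my_map_legacy = {
            .type        = BPF_MAP_TYPE_ARRAY,
            .key_size    = sizeof(__u32),
            .value_size  = sizeof(long),
            .max_entries = 256,
    };

    /* BTF-defined map, placed in SEC(".maps"); key/value carry full
     * type info, which is what makes the definition extensible */
    struct {
            __uint(type, BPF_MAP_TYPE_ARRAY);
            __uint(max_entries, 256);
            __type(key, __u32);
            __type(value, long);
    } my_map_btf SEC(".maps");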
2019-11-08 | samples: bpf: Update outdated error message | Daniel T. Lee | 5 files, -7/+7

Currently, under samples, several methods are used to load bpf programs. Since using libbpf is the preferred solution, lots of previously used 'load_bpf_file' calls from bpf_load have been replaced with 'bpf_prog_load_xattr' from libbpf. But some of the error messages still show up as 'load_bpf_file' instead of 'bpf_prog_load_xattr'. This commit fixes the outdated error messages under samples and fixes some code style issues.

Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20191107005153.31541-2-danieltimlee@gmail.com
2019-11-07 | bpf: Add cb access in kfree_skb test | Martin KaFai Lau | 2 files, -16/+63

Access the skb->cb[] in the kfree_skb test.

Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191107180905.4097871-1-kafai@fb.com
2019-11-07 | bpf: Add array support to btf_struct_access | Martin KaFai Lau | 1 file, -29/+166

This patch adds array support to btf_struct_access(). It supports arrays of int, arrays of struct, and multidimensional arrays. It also allows using u8[] as a scratch space. For example, it allows accessing the "char cb[48]" with a size larger than the array's element type "char". Another potential use case is "u64 icsk_ca_priv[]" in the tcp congestion control.

btf_resolve_size() is added to resolve the size of any type. It will follow the modifier if there is any. Please see the function comment for details.

This patch also adds the "off < moff" check at the beginning of the for loop, to reject cases where "off" points to a "hole" in a struct.

Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191107180903.4097702-1-kafai@fb.com
2019-11-07 | libbpf: Improve handling of corrupted ELF during map initialization | Andrii Nakryiko | 1 file, -2/+2

If we get ELF file with "maps" section, but no symbols pointing to it, we'll end up with division by zero. Add check against this situation and exit early with error. Found by Coverity scan against Github libbpf sources.

Fixes: bf82927125dd ("libbpf: refactor map initialization")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191107020855.3834758-6-andriin@fb.com
2019-11-07 | libbpf: Make btf__resolve_size logic always check size error condition | Andrii Nakryiko | 1 file, -2/+1

Perform size check always in btf__resolve_size. Makes the logic a bit more robust against corrupted BTF and silences LGTM/Coverity complaining about always true (size < 0) check.

Fixes: 69eaab04c675 ("btf: extract BTF type size calculation")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191107020855.3834758-5-andriin@fb.com
2019-11-07 | libbpf: Fix another potential overflow issue in bpf_prog_linfo | Andrii Nakryiko | 1 file, -7/+7

Fix a few issues found by Coverity and LGTM.

Fixes: b053b439b72a ("bpf: libbpf: bpftool: Print bpf_line_info during prog dump")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191107020855.3834758-4-andriin@fb.com
2019-11-07 | libbpf: Fix potential overflow issue | Andrii Nakryiko | 1 file, -1/+1

Fix a potential overflow issue found by LGTM analysis, based on Github libbpf source code.

Fixes: 3d65014146c6 ("bpf: libbpf: Add btf_line_info support to libbpf")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191107020855.3834758-3-andriin@fb.com
2019-11-07 | libbpf: Fix memory leak/double free issue | Andrii Nakryiko | 1 file, -1/+1

Coverity scan against Github libbpf code found the issue of not freeing memory and leaving already freed memory still referenced from bpf_program. Fix it by re-assigning successfully reallocated memory sooner.

Fixes: 2993e0515bb4 ("tools/bpf: add support to read .BTF.ext sections")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191107020855.3834758-2-andriin@fb.com
2019-11-07 | libbpf: Fix negative FD close() in xsk_setup_xdp_prog() | Andrii Nakryiko | 1 file, -0/+2

Fix issue reported by static analysis (Coverity). If bpf_prog_get_fd_by_id() fails, xsk_lookup_bpf_maps() will fail as well and clean-up code will attempt close() with fd=-1. Fix by checking bpf_prog_get_fd_by_id() return result and exiting early.

Fixes: 10a13bb40e54 ("libbpf: remove qidconf and better support external bpf programs.")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191107054059.313884-1-andriin@fb.com
2019-11-07 | s390/bpf: Remove unused SEEN_RET0, SEEN_REG_AX and ret0_ip | Ilya Leoshkevich | 1 file, -16/+5

We don't need them since commit e1cf4befa297 ("bpf, s390x: remove ld_abs/ld_ind") and commit a3212b8f15d8 ("bpf, s390x: remove obsolete exception handling from div/mod"). Also, use BIT(n) instead of 1 << n, because checkpatch says so.

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191107114033.90505-1-iii@linux.ibm.com
2019-11-07 | s390/bpf: Wrap JIT macro parameter usages in parentheses | Ilya Leoshkevich | 1 file, -31/+31

This change does not alter JIT behavior; it only makes it possible to safely invoke JIT macros with complex arguments in the future.

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191107113211.90105-1-iii@linux.ibm.com
2019-11-07 | s390/bpf: Use kvcalloc for addrs array | Ilya Leoshkevich | 1 file, -2/+3

A BPF program may consist of 1m instructions, which means the JIT instruction-address mapping can be as large as 4m. s390 has FORCE_MAX_ZONEORDER=9 (for memory hotplug reasons), which means the maximum kmalloc size is 1m. This makes it impossible to JIT programs with more than 256k instructions.

Fix by using kvcalloc, which falls back to vmalloc for larger allocations. An alternative would be to use a radix tree, but that is not supported by bpf_prog_fill_jited_linfo.

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191107141838.92202-1-iii@linux.ibm.com
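A hedged sketch of the fix in the s390 JIT; the variable names and error path are assumptions:

    u32 *addrs;

    /* kvcalloc() falls back to vmalloc above the kmalloc limit, so a
     * 4 MB mapping for a 1M-insn program no longer fails. */
    addrs = kvcalloc(fp->len + 1, sizeof(*addrs), GFP_KERNEL);
    if (!addrs)
            return orig_fp;   /* fall back to the interpreter */

    /* ... JIT passes fill addrs[i] with the offset of insn i ... */

    kvfree(addrs);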
2019-11-07 | tools, bpf_asm: Warn when jumps are out of range | Ilya Leoshkevich | 1 file, -2/+12

When compiling larger programs with bpf_asm, it's possible to accidentally exceed jt/jf range, in which case it won't complain, but rather silently emit a truncated offset, leading to a "happy debugging" situation.

Add a warning to help detecting such issues. It could be made an error instead, but this might break compilation of existing code (which might be working by accident).

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191107100349.88976-1-iii@linux.ibm.com
2019-11-06 | bpf: Account for insn->off when doing bpf_probe_read_kernel | Martin KaFai Lau | 1 file, -1/+1

In bpf interpreter mode, bpf_probe_read_kernel is used to read from a PTR_TO_BTF_ID's kernel object. It currently fails to take insn->off into account. This patch fixes it.

Fixes: 2a02759ef5f8 ("bpf: Add support for BTF pointers to interpreter")
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20191107014640.384083-1-kafai@fb.com
2019-11-06 | libbpf: Simplify BPF_CORE_READ_BITFIELD_PROBED usage | Andrii Nakryiko | 2 files, -28/+18

Streamline the BPF_CORE_READ_BITFIELD_PROBED interface to follow BPF_CORE_READ_BITFIELD (direct) and BPF_CORE_READ, in general, i.e., just return the read result or 0, if the underlying bpf_probe_read() failed.

In practice, real applications rarely check bpf_probe_read() result, because it has to always work or otherwise it's a bug. So propagating the internal bpf_probe_read() error from this macro hurts usability without providing real benefits in practice. This patch fixes the issue and simplifies usage, noticeable even in the selftest itself.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20191106201500.2582438-1-andriin@fb.com
2019-11-06 | selftests/bpf: Clean up removed ints relocations negative tests | Andrii Nakryiko | 1 file, -6/+0

As part of 42765ede5c54 ("selftests/bpf: Remove too strict field offset relo test cases"), a few negative (supposed-to-fail) ints relocation tests were removed, but not completely. Because they were negative, some leftovers in prog_tests/core_reloc.c went unnoticed. Clean them up.

Fixes: 42765ede5c54 ("selftests/bpf: Remove too strict field offset relo test cases")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191106173659.1978131-1-andriin@fb.com
2019-11-04 | selftests/bpf: Add field size relocation tests | Andrii Nakryiko | 5 files, -5/+122

Add test verifying correctness and logic of field size relocation support in libbpf.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191101222810.1246166-6-andriin@fb.com
2019-11-04 | selftest/bpf: Add relocatable bitfield reading tests | Andrii Nakryiko | 9 files, -2/+294

Add a bunch of selftests verifying correctness of relocatable bitfield reading support in libbpf. Both bpf_probe_read()-based and direct read-based bitfield macros are tested. core_reloc.c "test_harness" is extended to support raw tracepoint and new typed raw tracepoints as test BPF program types.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191101222810.1246166-5-andriin@fb.com
2019-11-04 | libbpf: Add support for field size relocations | Andrii Nakryiko | 2 files, -8/+39

Add bpf_core_field_size() macro, capturing a relocation against field size. Adjust bits of internal libbpf relocation logic to allow capturing size relocations of various field types: arrays, structs/unions, enums, etc.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191101222810.1246166-4-andriin@fb.com
2019-11-04 | libbpf: Add support for relocatable bitfields | Andrii Nakryiko | 3 files, -60/+227

Add support for the new field relocation kinds, necessary to support relocatable bitfield reads. Provide macros abstracting the code needed to do full relocatable bitfield extraction into a u64 value. Two separate macros are provided (see the usage sketch below):

  - BPF_CORE_READ_BITFIELD macro for direct memory read-enabled BPF
    programs (e.g., typed raw tracepoints). It uses direct memory
    dereference to extract the bitfield's backing integer value.
  - BPF_CORE_READ_BITFIELD_PROBED macro for cases where
    bpf_probe_read() needs to be used to extract the same backing
    integer value.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191101222810.1246166-3-andriin@fb.com
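A hedged usage sketch; the struct and field are illustrative, and per the simplification described in the 2019-11-06 entry above, both macros simply return the extracted value:

    /* Extract a one-bit field from a CO-RE-relocatable struct. */
    static __always_inline __u64 read_sack_flag(struct tcp_sock *ts)
    {
            /* direct-dereference flavor, for program types allowed to
             * read memory directly (e.g. typed raw tracepoints) */
            __u64 a = BPF_CORE_READ_BITFIELD(ts, is_sack_reneg);

            /* bpf_probe_read()-backed flavor for everything else */
            __u64 b = BPF_CORE_READ_BITFIELD_PROBED(ts, is_sack_reneg);

            return a | b;
    }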
2019-11-04 | selftests/bpf: Remove too strict field offset relo test cases | Andrii Nakryiko | 9 files, -90/+4

As libbpf is going to gain support for more field relocations, including field size, some restrictions about exact size match are going to be lifted. Remove test cases that explicitly test such failures.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191101222810.1246166-2-andriin@fb.com
2019-11-03 | mISDN: remove unused variable 'faxmodulation_s' | YueHaibing | 1 file, -1/+0

  drivers/isdn/hardware/mISDN/mISDNisar.c:30:17: warning:
    faxmodulation_s defined but not used [-Wunused-const-variable=]

It is never used, so can be removed.

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-11-03 | ptp: Add a ptp clock driver for IDT ClockMatrix. | Vincent Cheng | 5 files, -0/+2201

The IDT ClockMatrix (TM) family includes integrated devices that provide eight PLL channels. Each PLL channel can be independently configured as a frequency synthesizer, jitter attenuator, digitally controlled oscillator (DCO), or a digital phase lock loop (DPLL). Typically these devices are used as timing references and clock sources for PTP applications. This patch adds support for the device.

Co-developed-by: Richard Cochran <richardcochran@gmail.com>
Signed-off-by: Richard Cochran <richardcochran@gmail.com>
Signed-off-by: Vincent Cheng <vincent.cheng.xh@renesas.com>
Signed-off-by: David S. Miller <davem@davemloft.net>