path: root/tools/testing/selftests/vDSO
Age  Commit message  (Author, Files, Lines)
2018-12-09  bpf: Add bpf_line_info support  (Martin KaFai Lau, 10 files, -33/+419)
This patch adds bpf_line_info support. It accepts an array of bpf_line_info objects during BPF_PROG_LOAD. The "line_info", "line_info_cnt" and "line_info_rec_size" are added to the "union bpf_attr". The "line_info_rec_size" makes bpf_line_info extensible in the future. The new "check_btf_line()" ensures the userspace line_info is valid for the kernel to use.

When the verifier is translating/patching the bpf_prog (through "bpf_patch_insn_single()"), the line_infos' insn_off is also adjusted by the newly added "bpf_adj_linfo()".

If the bpf_prog is jited, this patch also provides the jited addrs (in aux->jited_linfo) for the corresponding line_info.insn_off. "bpf_prog_fill_jited_linfo()" is added to fill the aux->jited_linfo. It is currently called by the x86 jit; other jits can also use "bpf_prog_fill_jited_linfo()", and that will be done in follow-up patches. In the future, if deemed necessary, a particular jit could also provide its own "bpf_prog_fill_jited_linfo()" implementation.

A few "*line_info*" fields are added to bpf_prog_info so that the user can get the xlated line_info back (i.e. the line_info with its insn_off reflecting the translated prog). The jited_line_info is available if the prog is jited; it is an array of __u64. If the prog is not jited, jited_line_info_cnt is 0.

The verifier's verbose log with line_info will be done in a follow-up patch.
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
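A minimal userspace sketch of how a loader might attach line_info at BPF_PROG_LOAD time, built only from the field names mentioned above; the prog_btf_fd member and the exact bpf_attr layout are assumptions from the UAPI of that era, not taken from this patch.

    #include <string.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <linux/bpf.h>

    /* Sketch only: attach line_info while loading a program.  The attr member
     * names follow the commit message; prog_btf_fd is assumed to reference the
     * BTF object that holds the file/line strings. */
    static int load_prog_with_line_info(const struct bpf_insn *insns, __u32 insn_cnt,
                                        int btf_fd, const struct bpf_line_info *linfo,
                                        __u32 linfo_cnt)
    {
            union bpf_attr attr;

            memset(&attr, 0, sizeof(attr));
            attr.prog_type = BPF_PROG_TYPE_XDP;          /* any program type works */
            attr.insns = (__u64)(unsigned long)insns;
            attr.insn_cnt = insn_cnt;
            attr.license = (__u64)(unsigned long)"GPL";
            attr.prog_btf_fd = btf_fd;                   /* assumed field name */
            attr.line_info = (__u64)(unsigned long)linfo;
            attr.line_info_cnt = linfo_cnt;
            attr.line_info_rec_size = sizeof(*linfo);    /* keeps the record extensible */

            return syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr));
    }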
2018-12-07  selftests: bpf: update testcases for BPF_ALU | BPF_ARSH  (Jiong Wang, 1 file, -4/+25)
"arsh32 on imm" and "arsh32 on reg" are now accepted. Also add two new testcases to make sure arsh32 is not treated as arsh64 during interpretation or JIT code-gen; in that case the high bits would be shifted into the low half, which the testcases would catch.
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-12-07  bpf: verifier remove the rejection on BPF_ALU | BPF_ARSH  (Jiong Wang, 1 file, -5/+0)
This patch removes the rejection of BPF_ALU | BPF_ARSH, as it is now supported by the interpreter and all JIT back-ends.
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-12-07  bpf: interpreter support BPF_ALU | BPF_ARSH  (Jiong Wang, 1 file, -22/+30)
This patch implements interpretation of BPF_ALU | BPF_ARSH: do an arithmetic right shift on the low 32-bit sub-register and zero the high 32 bits.
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
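A plain C model of the semantics described here, purely illustrative: the low 32 bits are shifted arithmetically (sign-preserving) and the result is zero-extended to 64 bits.

    #include <stdint.h>

    /* Illustrative model of 32-bit BPF_ARSH: operate on the low 32-bit
     * sub-register, zero the upper half of the destination. */
    static inline uint64_t alu32_arsh(uint64_t dst, uint32_t shift)
    {
            int32_t lo = (int32_t)dst;   /* low 32-bit sub-register, as signed */

            lo >>= (shift & 31);         /* arithmetic shift keeps the sign bit */
            return (uint32_t)lo;         /* zero-extend back to 64 bits */
    }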
2018-12-07  nfp: bpf: implement jitting of BPF_ALU | BPF_ARSH | BPF_*  (Jiong Wang, 1 file, -0/+45)
BPF_X support needs indirect shift mode; see the code comments for details.
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-12-07  s390: bpf: implement jitting of BPF_ALU | BPF_ARSH | BPF_*  (Jiong Wang, 1 file, -0/+12)
This patch implements code-gen for BPF_ALU | BPF_ARSH | BPF_*.
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-12-07  ppc: bpf: implement jitting of BPF_ALU | BPF_ARSH | BPF_*  (Jiong Wang, 3 files, -0/+12)
This patch implements code-gen for BPF_ALU | BPF_ARSH | BPF_*.
Cc: Naveen N. Rao <naveen.n.rao@linux.ibm.com>
Cc: Sandipan Das <sandipan@linux.ibm.com>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Reviewed-by: Sandipan Das <sandipan@linux.ibm.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-12-07  mips: bpf: implement jitting of BPF_ALU | BPF_ARSH | BPF_X  (Jiong Wang, 6 files, -4/+13)
Jitting of BPF_K is supported already, but not BPF_X. This patch completes the support for the latter on both MIPS and microMIPS.
Cc: Paul Burton <paul.burton@mips.com>
Cc: linux-mips@vger.kernel.org
Acked-by: Paul Burton <paul.burton@mips.com>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-12-07  mips: bpf: fix encoding bug for mm_srlv32_op  (Jiong Wang, 1 file, -1/+1)
For microMIPS, srlv inside the POOL32A encoding space should use the 0x50 sub-opcode, NOT 0x90. Some early versions of the ISA documentation describe the encoding as 0x90 for both srlv and srav; this looks like a typo. I checked the binutils libopcode implementation, which uses 0x50 for srlv and 0x90 for srav.

v1->v2:
 - Keep mm_srlv32_op sorted by value.
Fixes: f31318fdf324 ("MIPS: uasm: Add srlv uasm instruction")
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: Paul Burton <paul.burton@mips.com>
Cc: linux-mips@vger.kernel.org
Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-12-05  bpf: Expect !info.func_info and insn_off name changes in test_btf/libbpf/bpftool  (Martin KaFai Lau, 4 files, -9/+22)
Similar to info.jited_*, info.func_info could be 0 if bpf_dump_raw_ok() == false. This patch makes changes to test_btf and bpftool to expect that info.func_info could be 0. It also makes the needed changes for s/insn_offset/insn_off/.
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-12-05  bpf: tools: Sync uapi bpf.h for the name changes in bpf_func_info  (Martin KaFai Lau, 1 file, -1/+1)
This patch syncs the name changes in bpf_func_info to tools/.
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-12-05  bpf: Change insn_offset to insn_off in bpf_func_info  (Martin KaFai Lau, 2 files, -10/+10)
A later patch will introduce "struct bpf_line_info", which has the members "line_off" and "file_off" referring back to the string section in BTF. The "_off" suffix of line_off and file_off is more consistent with the naming convention in btf.h, where it means "offset" (e.g. name_off in "struct btf_type"). The to-be-added "struct bpf_line_info" also has another member, "insn_off", which is the same as the "insn_offset" in "struct bpf_func_info". Hence, this patch renames "insn_offset" to "insn_off" for "struct bpf_func_info".
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-12-05  bpf: Improve the info.func_info and info.func_info_rec_size behavior  (Martin KaFai Lau, 2 files, -27/+21)
1) When bpf_dump_raw_ok() == false and the kernel can provide >=1 func_info to the userspace, the current behavior is to set info.func_info_cnt to 0 instead of setting info.func_info to 0. This differs from the behavior for jited_func_lens/nr_jited_func_lens, jited_ksyms/nr_jited_ksyms, etc. This patch fixes it (i.e. sets func_info to 0 instead of func_info_cnt to 0 when bpf_dump_raw_ok() == false).

2) When the userspace passes in info.func_info_cnt == 0, the kernel writes the expected func_info size back to info.func_info_rec_size. This is a way for the userspace to learn the kernel's expected func_info_rec_size, introduced in commit 838e96904ff3 ("bpf: Introduce bpf_func_info"). An exception is that the expected size is not set when func_info is not available for a bpf_prog, which makes the returned info.func_info_rec_size have different values depending on the returned info.func_info_cnt. This patch sets the kernel's expected size in info.func_info_rec_size independent of info.func_info_cnt.

3) The current logic only rejects an invalid func_info_rec_size if func_info_cnt is non-zero. This patch also rejects a nonzero info.func_info_rec_size that does not equal the kernel's expected size.

4) Set info.btf_id as long as prog->aux->btf != NULL. That makes the later copy_to_user() code look the same as the others, which is easier to understand and maintain. prog->aux->btf is not NULL only if prog->aux->func_info_cnt > 0. Decoupling info.btf_id from prog->aux->func_info_cnt is needed for the later line-info patch anyway. A similar change is made to bpf_get_prog_name().
Fixes: 838e96904ff3 ("bpf: Introduce bpf_func_info")
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
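A hedged userspace sketch of the probing pattern from point 2: ask for zero func_info records and read back the record size the kernel expects. The field names follow the commit message's UAPI of the time, and bpf_obj_get_info_by_fd() is libbpf's wrapper around BPF_OBJ_GET_INFO_BY_FD; treat both as illustrative.

    #include <string.h>
    #include <linux/bpf.h>
    #include <bpf/bpf.h>        /* bpf_obj_get_info_by_fd() */

    /* Sketch: learn the kernel's expected func_info record size by querying
     * prog info with func_info_cnt left at 0. */
    static int probe_func_info_rec_size(int prog_fd, __u32 *rec_size)
    {
            struct bpf_prog_info info;
            __u32 info_len = sizeof(info);
            int err;

            memset(&info, 0, sizeof(info));  /* func_info_cnt == 0, func_info == NULL */
            err = bpf_obj_get_info_by_fd(prog_fd, &info, &info_len);
            if (err)
                    return err;

            *rec_size = info.func_info_rec_size;  /* now set regardless of cnt */
            return 0;
    }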
2018-12-05  tools: bpftool: add a command to dump the trace pipe  (Quentin Monnet, 5 files, -4/+176)
BPF programs can use the bpf_trace_printk() helper to print debug information into the trace pipe. Add a subcommand "bpftool prog tracelog" to simply dump this pipe to the console.

This is in good part copied from iproute2, where the feature is available with "tc exec bpf dbg". Changes include dumping pipe content to stdout instead of stderr and adding JSON support (content is dumped as an array of strings, one per line read from the pipe). This version is dual-licensed, with Daniel's permission.
Cc: Daniel Borkmann <daniel@iogearbox.net>
Suggested-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-12-05  arm64/bpf: don't allocate BPF JIT programs in module memory  (Ard Biesheuvel, 2 files, -1/+17)
The arm64 module region is a 128 MB region that is kept close to the core kernel, in order to ensure that relative branches are always in range. So using the same region for programs that do not have this restriction is wasteful, and preferably avoided.

Now that the core BPF JIT code permits the alloc/free routines to be overridden, implement them by vmalloc()/vfree() calls from a dedicated 128 MB region set aside for BPF programs. This ensures that BPF programs are still in branching range of each other, which is something the JIT currently depends upon (and is not guaranteed when using module_alloc() on KASLR kernels like we do currently). It also ensures that placement of BPF programs does not correlate with the placement of the core kernel or modules, making it less likely that leaking the former will reveal the latter.

This also solves an issue under KASAN, where shadow memory is needlessly allocated for all BPF programs (which don't require KASAN shadow pages since they are not KASAN instrumented).
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-12-05  bpf: add __weak hook for allocating executable memory  (Ard Biesheuvel, 1 file, -2/+12)
By default, BPF uses module_alloc() to allocate executable memory, but this is not necessary on all arches and potentially undesirable on some of them. So break out the module_alloc() and module_memfree() calls into __weak functions to allow them to be overridden in arch code.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
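A sketch of the __weak-hook shape this describes: the core provides module_alloc()/module_memfree() defaults, and an architecture overrides them with strong definitions (as the arm64 patch above does with a dedicated vmalloc region). The hook names used below are assumptions for illustration, not verified against the patch.

    #include <linux/moduleloader.h>

    /* Core BPF default: fall back to the module allocator.  A weak symbol,
     * so any architecture can supply its own strong definition instead. */
    void * __weak bpf_jit_alloc_exec(unsigned long size)   /* name assumed */
    {
            return module_alloc(size);
    }

    void __weak bpf_jit_free_exec(void *addr)              /* name assumed */
    {
            module_memfree(addr);
    }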
2018-12-04  selftests: add a test for bpf_prog_test_run_xattr  (Lorenz Bauer, 1 file, -1/+54)
Make sure that bpf_prog_test_run_xattr returns the correct length and that the kernel respects the output size hint. Also check that errno indicates ENOSPC if a short output buffer is given.
Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-12-04  libbpf: add bpf_prog_test_run_xattr  (Lorenz Bauer, 3 files, -0/+43)
Add a new function, which encourages safe usage of the test interface. bpf_prog_test_run continues to work as before, but should be considered unsafe.
Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-12-04  tools: sync uapi/linux/bpf.h  (Lorenz Bauer, 1 file, -2/+5)
Pull changes from "bpf: respect size hint to BPF_PROG_TEST_RUN if present".
Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-12-04  bpf: respect size hint to BPF_PROG_TEST_RUN if present  (Lorenz Bauer, 2 files, -4/+18)
Use data_size_out as a size hint when copying test output to user space. ENOSPC is returned if the output buffer is too small. Callers which so far did not set data_size_out are not affected.
Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
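A minimal raw-syscall sketch of how a caller would use the hint: data_size_out carries the output buffer size on entry and the actual output length on return, with ENOSPC expected when the buffer is too small. The bpf_attr test-run field names are assumed from the UAPI, not quoted from this patch.

    #include <errno.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <linux/bpf.h>

    /* Sketch: run a program once, passing the output buffer size as a hint. */
    static int test_run_with_hint(int prog_fd, const void *in, __u32 in_len,
                                  void *out, __u32 out_len, __u32 *retval)
    {
            union bpf_attr attr;

            memset(&attr, 0, sizeof(attr));
            attr.test.prog_fd = prog_fd;
            attr.test.data_in = (__u64)(unsigned long)in;
            attr.test.data_size_in = in_len;
            attr.test.data_out = (__u64)(unsigned long)out;
            attr.test.data_size_out = out_len;        /* the size hint */
            attr.test.repeat = 1;

            if (syscall(__NR_bpf, BPF_PROG_TEST_RUN, &attr, sizeof(attr)))
                    return -errno;                    /* -ENOSPC if out_len was too small */

            *retval = attr.test.retval;
            return attr.test.data_size_out;           /* actual output length */
    }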
2018-12-03  samples: bpf: fix: seg fault with NULL pointer arg  (Daniel T. Lee, 1 file, -1/+3)
When a NULL pointer is accidentally passed to write_kprobe_events, strlen(NULL) causes a segmentation fault. The changed code returns -1 to deal with this situation. The bug was found with Smatch static analysis.
Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com>
Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
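The shape of such a fix, as a hedged sketch: bail out before the pointer ever reaches strlen(). The helper body below is hypothetical; only the NULL guard reflects the change described.

    #include <string.h>
    #include <unistd.h>

    /* Hypothetical stand-in for the sample's write helper: reject NULL
     * before strlen() can dereference it. */
    static int write_kprobe_events_safe(int fd, const char *val)
    {
            if (val == NULL)
                    return -1;                       /* previously: strlen(NULL) faulted */

            return write(fd, val, strlen(val)) < 0 ? -1 : 0;
    }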
2018-12-03  bpf: fix documentation for eBPF helpers  (Quentin Monnet, 1 file, -45/+45)
The missing indentation on the "Return" sections for the bpf_map_pop_elem() and bpf_map_peek_elem() helpers breaks RST and man page generation. This patch fixes them, and moves the description of those two helpers towards the end of the list (even though they are somewhat related to the first three map helpers, the man page explicitly states that the helpers are sorted in chronological order).

While at it, make other minor formatting edits to the eBPF helpers documentation: mostly blank line removal, RST formatting, or other small nits for consistency.
Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-12-03  bpf: allow BPF read access to qdisc pkt_len  (Petar Penkov, 4 files, -0/+50)
The pkt_len field in qdisc_skb_cb stores the skb length as it will appear on the wire after segmentation. For byte accounting, this value is more accurate than skb->len. It is computed on entry to the TC layer, so it is only valid there. Allow read access to this field from BPF tc classifier and action programs. The implementation is analogous to tc_classid, aside from being restricted to read access. To distinguish it from skb->len and make it self-describing, export it as wire_len.

Changes v1->v2:
 - Rename pkt_len to wire_len
Signed-off-by: Petar Penkov <ppenkov@google.com>
Signed-off-by: Vlad Dumitrescu <vladum@google.com>
Signed-off-by: Willem de Bruijn <willemb@google.com>
Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
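A minimal tc classifier reading the new field, as a sketch (compile with clang -target bpf); the drop-oversized policy is purely illustrative and not from the patch.

    #include <linux/bpf.h>
    #include <linux/pkt_cls.h>

    #ifndef SEC
    #define SEC(name) __attribute__((section(name), used))
    #endif

    SEC("classifier")
    int wire_len_example(struct __sk_buff *skb)
    {
            /* wire_len is the post-segmentation length, so it is the better
             * value for byte accounting than skb->len on GSO packets. */
            if (skb->wire_len > 1514)
                    return TC_ACT_SHOT;   /* illustrative policy only */

            return TC_ACT_OK;
    }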
2018-12-03  libbpf: Fix license in README.rst  (Andrey Ignatov, 1 file, -1/+1)
The whole of libbpf is licensed as (LGPL-2.1 OR BSD-2-Clause). I missed it while adding README.rst. Fix it and use the same license as all other files in libbpf do. Since I'm the only author of README.rst so far, no one else's permission should be needed.
Fixes: 76d1b894c515 ("libbpf: Document API and ABI conventions")
Signed-off-by: Andrey Ignatov <rdna@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-12-02  bpf: Fix memleak in aux->func_info and aux->btf  (Martin KaFai Lau, 1 file, -0/+2)
The aux->func_info and aux->btf are leaked in the error-out cases during bpf_prog_load(). This patch fixes it.
Fixes: ba64e7d85252 ("bpf: btf: support proper non-jit func info")
Cc: Yonghong Song <yhs@fb.com>
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-11-30  samples: bpf: get ifindex from ifname  (Matteo Croce, 1 file, -2/+7)
Find the ifindex with if_nametoindex() instead of requiring the numeric ifindex.
Signed-off-by: Matteo Croce <mcroce@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
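A sketch of the pattern the sample moves to: resolve an interface name such as "eth0" to its index with the standard if_nametoindex() instead of asking the user for a number.

    #include <net/if.h>
    #include <stdio.h>

    /* Resolve an interface name to its ifindex; 0 means the lookup failed. */
    static int resolve_ifindex(const char *ifname)
    {
            unsigned int idx = if_nametoindex(ifname);

            if (idx == 0) {
                    perror("if_nametoindex");
                    return -1;
            }
            return (int)idx;
    }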
2018-11-30  samples: bpf: improve xdp1 example  (Matteo Croce, 1 file, -10/+8)
Store only the total packet count for every protocol, instead of the whole per-cpu array. Use bpf_map_get_next_key() to iterate the map, instead of looking up all the protocols.
Signed-off-by: Matteo Croce <mcroce@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
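A sketch of map iteration with bpf_map_get_next_key() (libbpf's low-level wrapper); the __u32 key / long value types are assumptions for illustration, not taken from the sample.

    #include <stdio.h>
    #include <bpf/bpf.h>   /* bpf_map_get_next_key(), bpf_map_lookup_elem() */

    /* Walk every key present in the map instead of probing all possible keys. */
    static void dump_counts(int map_fd)
    {
            __u32 key, next_key;
            __u32 *prev = NULL;        /* a NULL "previous" key starts the walk */
            long value;

            while (bpf_map_get_next_key(map_fd, prev, &next_key) == 0) {
                    if (bpf_map_lookup_elem(map_fd, &next_key, &value) == 0)
                            printf("proto %u: %ld packets\n", next_key, value);
                    key = next_key;
                    prev = &key;
            }
    }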
2018-11-30  bpf: Apply F_NEEDS_EFFICIENT_UNALIGNED_ACCESS to more ACCEPT test cases.  (David Miller, 1 file, -0/+17)
If a testcase has alignment problems but is expected to be ACCEPT, verify it using F_NEEDS_EFFICIENT_UNALIGNED_ACCESS too. Maybe in the future, if we add some architecture specific code to elide the unaligned memory access warnings during the test, we can execute these as well.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-11-30  bpf: Make more use of 'any' alignment in test_verifier.c  (David Miller, 1 file, -0/+44)
Use F_NEEDS_EFFICIENT_UNALIGNED_ACCESS in more tests where the expected result is REJECT.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-11-30  bpf: Adjust F_NEEDS_EFFICIENT_UNALIGNED_ACCESS handling in test_verifier.c  (David Miller, 1 file, -18/+24)
Make it set the flag argument to bpf_verify_program(), which relaxes the alignment restrictions. Now all such test cases will go through the verifier properly even on architectures with inefficient unaligned access. On those architectures, do not try to run such programs; instead, mark the test case as passing but annotate the result similarly to how it is already done in the presence of this flag.

So we get complete coverage for all REJECT test cases, and at least verifier-level coverage for ACCEPT test cases.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-11-30  bpf: Add BPF_F_ANY_ALIGNMENT.  (David Miller, 8 files, -9/+45)
Often we want to write test cases that check things like bad context offset accesses, and one way to do this is to use an odd offset on, for example, a 32-bit load. This unfortunately triggers the alignment checks first on platforms that do not set CONFIG_EFFICIENT_UNALIGNED_ACCESS, so the test case sees the alignment failure rather than what it was testing for.

It is often not completely possible to respect the original intention of the test, or even test the same exact thing, while solving the alignment issue. Another option could have been to check the alignment after the context and other validations are performed by the verifier, but that is a non-trivial change to the verifier.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-11-30  bpf: Fix verifier log string check for bad alignment.  (David Miller, 1 file, -1/+1)
The message was changed a long time ago. This was responsible for 36 test case failures on sparc64.
Fixes: f1174f77b50c ("bpf/verifier: rework value tracking")
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-11-30  tools: bpftool: add owner_prog_type and owner_jited to bpftool output  (Quentin Monnet, 3 files, -26/+75)
For prog array maps, the type of the owner program, and the JIT-ed state of that program, are available from the file descriptor information under /proc. Add them to "bpftool map show" output. Example output:

    # bpftool map show
    158225: prog_array  name jmp_table  flags 0x0
            key 4B  value 4B  max_entries 8  memlock 4096B
            owner_prog_type flow_dissector  owner jited

    # bpftool --json --pretty map show
    [{
            "id": 1337,
            "type": "prog_array",
            "name": "jmp_table",
            "flags": 0,
            "bytes_key": 4,
            "bytes_value": 4,
            "max_entries": 8,
            "bytes_memlock": 4096,
            "owner_prog_type": "flow_dissector",
            "owner_jited": true
        }
    ]

As we move the table used for associating names to program types, complete it with the missing types (lwt_seg6local and sk_reuseport). Also add missing types to the help message for "bpftool prog" (sk_reuseport and flow_dissector).
Suggested-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-11-30  tools: bpftool: mark offloaded programs more explicitly in plain output  (Quentin Monnet, 1 file, -1/+1)
In bpftool (plain) output for "bpftool prog show" or "bpftool map show", an offloaded BPF object is simply denoted with "dev ifname", which is not really explicit. Change it to something that clearly shows the program is offloaded. While at it, also add an additional space, as done between other information fields.

Example output, before:

    # bpftool prog show
    1337: xdp  tag a04f5eef06a7f555 dev foo
            loaded_at 2018-10-19T16:40:36+0100  uid 0
            xlated 16B  not jited  memlock 4096B

After:

    # bpftool prog show
    1337: xdp  tag a04f5eef06a7f555 offloaded_to foo
            loaded_at 2018-10-19T16:40:36+0100  uid 0
            xlated 16B  not jited  memlock 4096B

Suggested-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-11-30  tools: bpftool: fix bash completion for new map types (queue and stack)  (Quentin Monnet, 1 file, -1/+1)
Commit 197c2dac74e4 ("bpf: Add BPF_MAP_TYPE_QUEUE and BPF_MAP_TYPE_STACK to bpftool-map") added support for queue and stack eBPF map types in bpftool map handling. Let's update the bash completion accordingly.
Fixes: 197c2dac74e4 ("bpf: Add BPF_MAP_TYPE_QUEUE and BPF_MAP_TYPE_STACK to bpftool-map")
Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-11-30  tools: bpftool: fix bash completion for bpftool prog (attach|detach)  (Quentin Monnet, 1 file, -24/+49)
Fix bash completion for "bpftool prog (attach|detach) PROG TYPE MAP" so that the indices proposed for MAP are map indices, and not PROG indices. Also use variables for map and prog reference types ("id", "pinned", and "tag" for programs).
Fixes: b7d3826c2ed6 ("bpf: bpftool, add support for attaching programs to maps")
Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-11-30  tools: bpftool: use "/proc/self/" i.o. crafting links with getpid()  (Quentin Monnet, 2 files, -13/+3)
The getpid() function is called in a couple of places in bpftool to craft links of the shape "/proc/<pid>/...". Instead, it is possible to use the "/proc/self/" shortcut, which makes things a bit easier, in particular in jit_disasm.c. Do the replacement, and remove the includes of <sys/types.h> from the relevant files, now that we do not use getpid() anymore.
Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
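A sketch of the simplification: the kernel's /proc/self/ alias replaces the getpid()-based path. The "fd" target below is a hypothetical example, not the actual link bpftool crafts.

    #include <stdio.h>

    /* Build a path to one of our own file descriptors. */
    static void build_fd_path(char *buf, size_t len, int fd)
    {
            /* before: snprintf(buf, len, "/proc/%d/fd/%d", getpid(), fd); */
            snprintf(buf, len, "/proc/self/fd/%d", fd);
    }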
2018-11-30  arm64/bpf: use movn/movk/movk sequence to generate kernel addresses  (Ard Biesheuvel, 1 file, -11/+6)
On arm64, all executable code is guaranteed to reside in the vmalloc space (or the module space), and so jump targets will only use 48 bits at most, and the remaining bits are guaranteed to be 0x1. This means we can generate an immediate jump address using a sequence of one MOVN (move wide negated) and two MOVK instructions, where the first one sets the lower 16 bits but also sets all top bits to 0x1.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-11-30  tools/bpf: make libbpf _GNU_SOURCE friendly  (Yonghong Song, 2 files, -0/+3)
During porting libbpf to bcc, I got some warnings like below:

    [  2%] Building C object src/cc/CMakeFiles/bpf-shared.dir/libbpf/src/libbpf.c.o
    /home/yhs/work/bcc2/src/cc/libbpf/src/libbpf.c:12:0: warning: "_GNU_SOURCE" redefined [enabled by default]
     #define _GNU_SOURCE
    ...
    [  3%] Building C object src/cc/CMakeFiles/bpf-shared.dir/libbpf/src/libbpf_errno.c.o
    /home/yhs/work/bcc2/src/cc/libbpf/src/libbpf_errno.c: In function ‘libbpf_strerror’:
    /home/yhs/work/bcc2/src/cc/libbpf/src/libbpf_errno.c:45:7: warning: assignment makes integer from pointer without a cast [enabled by default]
       ret = strerror_r(err, buf, size);
    ...

bcc is built with _GNU_SOURCE defined, which caused the above warnings. This patch intends to make libbpf _GNU_SOURCE friendly by:
 - defining _GNU_SOURCE in libbpf.c unless it is already defined
 - undefining _GNU_SOURCE in libbpf_errno.c, as the non-GNU version of strerror_r is expected there
Signed-off-by: Yonghong Song <yhs@fb.com>
Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
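A sketch of the guard this describes for the libbpf.c side; libbpf_errno.c goes the other way and leaves _GNU_SOURCE undefined so the XSI, int-returning strerror_r() is selected. The exact placement is an assumption for illustration.

    /* libbpf.c: only define _GNU_SOURCE if the embedding build (e.g. bcc)
     * has not already defined it on the command line. */
    #ifndef _GNU_SOURCE
    #define _GNU_SOURCE
    #endif
    #include <string.h>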
2018-11-29  net: Don't default Aquantia USB driver to 'y'  (David S. Miller, 1 file, -1/+0)
Reported-by: Pavel Machek <pavel@ucw.cz>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-29  net: explain __skb_checksum_complete() with comments  (Cong Wang, 2 files, -1/+18)
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-29  tcp: remove loop to compute wscale  (Eric Dumazet, 1 file, -5/+3)
We can remove the loop and conditional branches and compute wscale efficiently thanks to ilog2().
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
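A sketch of the equivalence: the old pattern halves the window space until it fits in 16 bits while counting the shifts, and ilog2() yields that count directly, since the smallest w with (space >> w) <= 65535 is ilog2(space) - 15. The helper below is illustrative, not the patch's code; 14 is the RFC 7323 maximum window scale.

    #include <linux/kernel.h>   /* min() */
    #include <linux/log2.h>     /* ilog2() */

    /* Illustrative only: replace
     *   while (space > 65535 && wscale < 14) { space >>= 1; wscale++; }
     * with a closed-form computation. */
    static inline int wscale_from_space(unsigned long space)
    {
            if (space <= 65535)
                    return 0;
            /* smallest shift that brings space down to 16 bits, capped at 14 */
            return min(ilog2(space) - 15, 14);
    }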
2018-11-29  qede - Add a statistic for a case where driver drops tx packet due to memory allocation failure.  (Michael Shteinbok, 3 files, -2/+4)
skb linearization can fail due to a memory allocation failure, in which case the driver drops the packet. The driver used to print an error message in that case; this patch replaces the error message with a dedicated statistic.
Signed-off-by: Michael Shteinbok <michael.shteinbok@cavium.com>
Signed-off-by: Ariel Elior <ariel.elior@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-29  dpaa2-eth: Add "fall through" comments  (Ioana Ciocoi Radulescu, 1 file, -0/+2)
Add comments in the switch statement for XDP action to indicate fallthrough is intended.
Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-29  net: ethernet: ave: Preserve wol state in suspend/resume sequence  (Kunihiko Hayashi, 1 file, -0/+10)
Since the WoL state is forcibly initialized after reset, the state should be preserved across the suspend/resume sequence.
Signed-off-by: Kunihiko Hayashi <hayashi.kunihiko@socionext.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-29  net: ethernet: ave: Set initial wol state to disabled  (Kunihiko Hayashi, 1 file, -1/+5)
If the WoL state of the PHY hardware is enabled after reset, phy_ethtool_get_wol() reports that wol.wolopts is true. However, net_device.wol_enabled is zero and the WoL state is not applied until ethtool_set_wol() is called, so mdio_bus_phy_may_suspend() returns true; that is, the PHY is considered able to suspend even though its WoL state is enabled. Because of this inconsistency, phy_suspend() returns -EBUSY, and the suspend sequence ultimately fails with the following messages:

    dpm_run_callback(): mdio_bus_phy_suspend+0x0/0x58 returns -16
    PM: Device 65000000.ethernet-ffffffff:01 failed to suspend: error -16
    PM: Some devices failed to suspend, or early wake event detected

In order to fix the above issue, this patch forces the initial WoL state to disabled by default.
Signed-off-by: Kunihiko Hayashi <hayashi.kunihiko@socionext.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-29  net: ethernet: ave: Add suspend/resume support  (Kunihiko Hayashi, 1 file, -0/+44)
This patch introduces suspend and resume functions to the ave driver.
Signed-off-by: Kunihiko Hayashi <hayashi.kunihiko@socionext.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-28  bpf: Fix various lib and testsuite build failures on 32-bit.  (David Miller, 2 files, -6/+6)
A u64 cannot be cast to a pointer on 32-bit without an intervening (long) cast, otherwise GCC warns.
Signed-off-by: David S. Miller <davem@davemloft.net>
Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
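The idiom in question, as a sketch: on a 32-bit target, converting a 64-bit value straight to a pointer makes GCC warn about a cast from an integer of a different size, so the value is narrowed through (long) first.

    #include <stdint.h>

    /* Sketch of the 32-bit-safe conversion idiom the fix applies. */
    static inline void *u64_to_ptr(uint64_t addr)
    {
            /* return (void *)addr;           -- warns on 32-bit targets */
            return (void *)(long)addr;        /* intervening cast keeps GCC quiet */
    }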
2018-11-28  selftests/bpf: add config fragment CONFIG_FTRACE_SYSCALLS  (Naresh Kamboju, 1 file, -0/+1)
CONFIG_FTRACE_SYSCALLS=y is required for the get_cgroup_id_user test case; this test reads a file from the debug trace path /sys/kernel/debug/tracing/events/syscalls/sys_enter_nanosleep/id.
Signed-off-by: Naresh Kamboju <naresh.kamboju@linaro.org>
Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-11-28  bpf: test_sockmap, add options for msg_pop_data() helper  (John Fastabend, 2 files, -17/+180)
Similar to msg_pull_data and msg_push_data, add a set of options to exercise msg_pop_data().
Signed-off-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>