path: root/tools/perf/scripts/python/export-to-postgresql.py
Age | Commit message | Author | Files, lines changed
2020-05-21 | arm64: vdso: Don't prefix sigreturn trampoline with a BTI C instruction | Will Deacon | 3 files, -13/+20
For better or worse, GDB relies on the exact instruction sequence in the VDSO sigreturn trampoline in order to unwind from signals correctly. Commit c91db232da48 ("arm64: vdso: Convert to modern assembler annotations") unfortunately added a BTI C instruction to the start of __kernel_rt_sigreturn, which breaks this check. Thankfully, it's also not required, since the trampoline is called from a RET instruction when returning from the signal handler. Remove the unnecessary BTI C instruction from __kernel_rt_sigreturn, and do the same for the 32-bit VDSO as well for good measure. Cc: Daniel Kiss <daniel.kiss@arm.com> Cc: Tamas Zsoldos <tamas.zsoldos@arm.com> Reviewed-by: Dave Martin <dave.martin@arm.com> Reviewed-by: Mark Brown <broonie@kernel.org> Fixes: c91db232da48 ("arm64: vdso: Convert to modern assembler annotations") Signed-off-by: Will Deacon <will@kernel.org>
2020-05-12 | arm64: bti: Fix support for userspace only BTI | Mark Brown | 1 file, -0/+8
When setting PTE_MAYBE_GP we check system_supports_bti() but this is true for systems where only CONFIG_BTI is set causing us to enable BTI on some kernel text. Add an extra check for the kernel mode option, using an ifdef due to line length. Fixes: c8027285e366 ("arm64: Set GP bit in kernel page tables to enable BTI for the kernel") Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20200512113950.29996-1-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
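The shape of the fix is easy to sketch. The snippet below approximates the PTE_MAYBE_GP definition after this change, assuming the CONFIG_ARM64_BTI_KERNEL, PTE_GP and system_supports_bti() names used elsewhere in this series; it is an illustration, not the verbatim diff.

    /* Only mark kernel mappings as guarded when kernel-mode BTI is
     * configured; userspace-only BTI support must not set the GP bit
     * on kernel text. */
    #ifdef CONFIG_ARM64_BTI_KERNEL
    #define PTE_MAYBE_GP    (system_supports_bti() ? PTE_GP : 0)
    #else
    #define PTE_MAYBE_GP    0
    #endif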
2020-05-12 | arm64: kconfig: Update and comment GCC version check for kernel BTI | Will Deacon | 1 file, -1/+2
Some versions of GCC are known to suffer from a BTI code generation bug, meaning that CONFIG_CC_HAS_BRANCH_PROT_PAC_RET_BTI cannot be solely used to determine whether or not we can compile the kernel with BTI enabled. Update the BTI Kconfig entry to refer to the relevant GCC bugzilla entry (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=94697) and update the check now that the fix has been merged into GCC release 10.1. Acked-by: Mark Brown <broonie@kernel.org> Signed-off-by: Will Deacon <will@kernel.org>
2020-05-07 | arm64: vdso: Map the vDSO text with guarded pages when built for BTI | Mark Brown | 1 file, -1/+5
The kernel is responsible for mapping the vDSO into userspace processes, including mapping the text section as executable. Handle the mapping of the vDSO for BTI similarly, mapping the text section as guarded pages so the BTI annotations in the vDSO become effective when they are present. This will mean that we can have BTI active for the vDSO in processes that do not otherwise support BTI. This should not be an issue for any expected use of the vDSO. Signed-off-by: Mark Brown <broonie@kernel.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20200506195138.22086-12-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2020-05-07 | arm64: vdso: Force the vDSO to be linked as BTI when built for BTI | Mark Brown | 1 file, -1/+3
When the kernel and hence vDSO are built with BTI enabled force the linker to link the vDSO as BTI. This will cause the linker to warn if any of the input files do not have the BTI annotation, ensuring that we don't silently fail to provide a vDSO that is built and annotated for BTI when the kernel is being built with BTI. Signed-off-by: Mark Brown <broonie@kernel.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20200506195138.22086-11-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2020-05-07 | arm64: vdso: Annotate for BTI | Mark Brown | 3 files, -0/+9
Generate BTI annotations for all assembly files included in the 64 bit vDSO. Signed-off-by: Mark Brown <broonie@kernel.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20200506195138.22086-10-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2020-05-07 | arm64: asm: Provide a mechanism for generating ELF note for BTI | Mark Brown | 1 file, -0/+50
ELF files built for BTI should have a program property note section which identifies them as such. The linker expects to find this note in all object files it is linking into a BTI annotated output; the compiler will ensure that this happens for C files, but for assembler files we need to do this in the source, so provide a macro which can be used for this purpose. To support likely future requirements for additional notes we split the definition of the flags to set for BTI code from the macro that creates the note itself. This is mainly for use in the vDSO which should be a normal ELF shared library and should therefore include BTI annotations when built for BTI. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20200506195138.22086-9-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2020-05-07 | arm64: bti: Provide Kconfig for kernel mode BTI | Mark Brown | 1 file, -0/+19
Now that all the code is in place provide a Kconfig option allowing users to enable BTI for the kernel if their toolchain supports it, defaulting it on since this has security benefits. This is a separate configuration option since we currently don't support secondary CPUs that lack BTI if the boot CPU supports it. Code generation issues mean that current GCC 9 versions are not able to produce usable BTI binaries so we disable support for building with GCC versions prior to 10, once a fix is backported to GCC 9 the dependencies will be updated. Signed-off-by: Mark Brown <broonie@kernel.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20200506195138.22086-8-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2020-05-07 | arm64: mm: Mark executable text as guarded pages | Mark Brown | 1 file, -2/+2
When the kernel is built for BTI and running on a system which supports BTI, make all executable text guarded pages to ensure that loadable module and JITed BPF code is protected by BTI. Signed-off-by: Mark Brown <broonie@kernel.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20200506195138.22086-7-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2020-05-07 | arm64: bpf: Annotate JITed code for BTI | Mark Brown | 2 files, -0/+20
In order to extend the protection offered by BTI to all code executing in kernel mode we need to annotate JITed BPF code appropriately for BTI. To do this we need to add a landing pad to the start of each BPF function and also immediately after the function prologue if we are emitting a function which can be tail called. Jumps within BPF functions are all to immediate offsets and therefore do not require landing pads. Signed-off-by: Mark Brown <broonie@kernel.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20200506195138.22086-6-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
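A rough sketch of where the landing pads go in an arm64 BPF JIT prologue; the emit()/A64_BTI_C/A64_BTI_J helper names are assumed from this series and the control flow is heavily condensed, so treat this as an illustration of the placement rather than the patch itself.

    /* Entry into the JITed image via a branch-and-link needs a BTI C pad;
     * the tail-call path jumps in after the prologue, so it needs a
     * BTI J pad there (condensed, names assumed). */
    if (IS_ENABLED(CONFIG_ARM64_BTI_KERNEL))
            emit(A64_BTI_C, ctx);           /* start of the BPF function */

    /* ... save frame pointer and callee-saved registers ... */

    if (IS_ENABLED(CONFIG_ARM64_BTI_KERNEL))
            emit(A64_BTI_J, ctx);           /* target for tail-call jumps */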
2020-05-07 | arm64: Set GP bit in kernel page tables to enable BTI for the kernel | Mark Brown | 3 files, -0/+31
Now that the kernel is built with BTI annotations enable the feature by setting the GP bit in the stage 1 translation tables. This is done based on the features supported by the boot CPU so that we do not need to rewrite the translation tables. In order to avoid potential issues on big.LITTLE systems when there are a mix of BTI and non-BTI capable CPUs in the system when we have enabled kernel mode BTI we change BTI to be a _STRICT_BOOT_CPU_FEATURE when we have kernel BTI. This will prevent any CPUs that don't support BTI being started if the boot CPU supports BTI rather than simply not using BTI as we do when supporting BTI only in userspace. The main concern is the possibility of BTYPE being preserved by a CPU that does not implement BTI when a thread is migrated to it resulting in an incorrect state which could generate an exception when the thread migrates back to a CPU that does support BTI. If we encounter practical systems which mix BTI and non-BTI CPUs we will need to revisit this implementation. Since we currently do not generate landing pads in the BPF JIT we only map the base kernel text in this way. Signed-off-by: Mark Brown <broonie@kernel.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20200506195138.22086-5-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
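Conceptually the cpufeature side of this is a compile-time selection of the capability type; the fragment below is a sketch of that selection only (field layout approximate, capability constant names taken from the arm64 cpufeature code), not the full table entry.

    /* With kernel-mode BTI the capability must hold on every CPU that is
     * allowed to come online, not merely be used where present. */
    #ifdef CONFIG_ARM64_BTI_KERNEL
            .type = ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE,
    #else
            .type = ARM64_CPUCAP_SYSTEM_FEATURE,
    #endif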
2020-05-07 | arm64: asm: Override SYM_FUNC_START when building the kernel with BTI | Mark Brown | 1 file, -0/+46
When the kernel is built for BTI override SYM_FUNC_START and related macros to add a BTI landing pad to the start of all global functions, ensuring that they are BTI safe. The ; at the end of the BTI_x macros is for the benefit of the macro-generated functions in xen-hypercall.S. Signed-off-by: Mark Brown <broonie@kernel.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20200506195138.22086-4-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
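The override is easiest to see as a sketch of the linkage macros. This mirrors what the commit describes (BTI C is HINT #34, and the trailing ';' is the one called out above for xen-hypercall.S), though the exact header contents may differ slightly.

    /* Every globally visible assembly function gains a BTI C landing pad
     * when the kernel is built for BTI. */
    #define BTI_C hint 34 ;

    #define SYM_FUNC_START(name)                            \
            SYM_START(name, SYM_L_GLOBAL, SYM_A_ALIGN)      \
            BTI_C

    #define SYM_FUNC_START_LOCAL(name)                      \
            SYM_START(name, SYM_L_LOCAL, SYM_A_ALIGN)       \
            BTI_C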
2020-05-07 | arm64: bti: Support building kernel C code using BTI | Mark Brown | 1 file, -0/+4
When running with BTI enabled we need to ask the compiler to enable generation of BTI landing pads beyond those generated as a result of pointer authentication instructions being landing pads. Since the two features are practically speaking unlikely to be used separately we will make kernel mode BTI depend on pointer authentication in order to simplify the Makefile. Signed-off-by: Mark Brown <broonie@kernel.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20200506195138.22086-3-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2020-05-07 | arm64: Document why we enable PAC support for leaf functions | Mark Brown | 1 file, -0/+3
Document the fact that we enable pointer authentication protection for leaf functions since there is some narrow potential for ROP protection benefits and little overhead has been observed. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20200506195138.22086-2-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2020-05-04 | arm64: insn: Report PAC and BTI instructions as skippable | Mark Brown | 1 file, -0/+17
The PAC and BTI instructions can be safely skipped so report them as such, allowing them to be probed. Signed-off-by: Mark Brown <broonie@kernel.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20200504131326.18290-5-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
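The usual shape of such a change is an explicit allowlist in the hint-stepping helper. The sketch below assumes the constant names added earlier in this series and shows only a subset of the PAC hints; it illustrates the idea rather than reproducing the patch.

    /* Only HINTs known to behave as NOPs when single-stepped are reported
     * as steppable; the PAC and BTI hints are in that set. */
    static bool __kprobes aarch64_insn_is_steppable_hint(u32 insn)
    {
            if (!aarch64_insn_is_hint(insn))
                    return false;

            switch (insn & 0xFE0) {
            case AARCH64_INSN_HINT_PACIASP:         /* PAC hints (subset shown) */
            case AARCH64_INSN_HINT_BTI:             /* BTI landing pads */
            case AARCH64_INSN_HINT_BTIC:
            case AARCH64_INSN_HINT_BTIJ:
            case AARCH64_INSN_HINT_BTIJC:
            case AARCH64_INSN_HINT_NOP:
                    return true;
            default:
                    return false;
            }
    }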
2020-05-04 | arm64: insn: Don't assume unrecognized HINTs are skippable | Mark Brown | 1 file, -7/+3
Currently the kernel assumes that any HINT which it does not explicitly recognise is skippable. This is not robust as new instructions may be added which need special handling, and in any case software should only be using explicit NOP instructions for deliberate NOPs. This has the effect of rendering PAC and BTI instructions unprobeable which means that probes can't be inserted on the first instruction of functions built with those features. Signed-off-by: Mark Brown <broonie@kernel.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20200504131326.18290-4-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2020-05-04 | arm64: insn: Provide a better name for aarch64_insn_is_nop() | Mark Brown | 3 files, -4/+3
The current aarch64_insn_is_nop() has exactly one caller which uses it solely to identify if the instruction is a HINT that can safely be stepped, requiring us to list things that aren't NOPs and make things more confusing than they need to be. Rename the function to reflect the actual usage and make things more clear. Suggested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Mark Brown <broonie@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20200504131326.18290-3-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2020-05-04 | arm64: insn: Add constants for new HINT instruction decode | Mark Brown | 2 files, -3/+27
Add constants for decoding newer instructions defined in the HINT space. Since we are now decoding both the op2 and CRm fields rename the enum as well; this is compatible with what the existing users are doing. Signed-off-by: Mark Brown <broonie@kernel.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20200504131326.18290-2-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2020-05-04 | arm64: Disable old style assembly annotations | Mark Brown | 1 file, -0/+1
Now that we have converted arm64 over to the new style SYM_ assembler annotations select ARCH_USE_SYM_ANNOTATIONS so the old macros aren't available and we don't regress. Signed-off-by: Mark Brown <broonie@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20200501115430.37315-4-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2020-05-04 | arm64: kernel: Convert to modern annotations for assembly functions | Mark Brown | 10 files, -68/+68
In an effort to clarify and simplify the annotation of assembly functions in the kernel new macros have been introduced. These replace ENTRY and ENDPROC and also add a new annotation for static functions which previously had no ENTRY equivalent. Update the annotations in the core kernel code to the new macros. Signed-off-by: Mark Brown <broonie@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20200501115430.37315-3-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2020-05-04 | arm64: entry: Refactor and modernise annotation for ret_to_user | Mark Brown | 1 file, -13/+14
As part of an effort to clarify and clean up the assembler annotations new macros have been introduced which annotate the start and end of blocks of code in assembler files. Currently ret_to_user has an out of line slow path work_pending placed above the main function which makes annotating the start and end of these blocks of code awkward. Since work_pending is only referenced from within ret_to_user try to make things a bit clearer by moving it after the current ret_to_user and then marking both ret_to_user and work_pending as part of a single ret_to_user code block. Signed-off-by: Mark Brown <broonie@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20200501115430.37315-2-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2020-04-26 | Linux 5.7-rc3 | Linus Torvalds | 1 file, -1/+1
2020-04-26 | firmware_loader: revert removal of the fw_fallback_config export | Luis Chamberlain | 1 file, -0/+1
Christoph's patch removed two unused exported symbols; however, one symbol is used by the firmware_loader itself. If CONFIG_FW_LOADER=m, so the firmware_loader is modular, but CONFIG_FW_LOADER_USER_HELPER=y, we fail the build at modpost:

ERROR: modpost: "fw_fallback_config" [drivers/base/firmware_loader/firmware_class.ko] undefined!

This happens because the variable fw_fallback_config is always built into the kernel if CONFIG_FW_LOADER_USER_HELPER=y, so we need to grant access to the firmware loader module by exporting it. Revert only one hunk from his patch. Fixes: 739604734bd8 ("firmware_loader: remove unused exports") Reported-by: Randy Dunlap <rdunlap@infradead.org> Signed-off-by: Luis Chamberlain <mcgrof@kernel.org> Link: https://lore.kernel.org/r/20200424184916.22843-1-mcgrof@kernel.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-04-25 | s390/protvirt: fix compilation issue | Claudio Imbrenda | 2 files, -3/+2
The kernel fails to compile with CONFIG_PROTECTED_VIRTUALIZATION_GUEST set but CONFIG_KVM unset. This patch fixes the issue by making the needed variable always available. Link: https://lkml.kernel.org/r/20200423120114.2027410-1-imbrenda@linux.ibm.com Fixes: a0f60f843199 ("s390/protvirt: Add sysfs firmware interface for Ultravisor information") Reported-by: kbuild test robot <lkp@intel.com> Reported-by: Philipp Rudo <prudo@linux.ibm.com> Suggested-by: Philipp Rudo <prudo@linux.ibm.com> Reviewed-by: David Hildenbrand <david@redhat.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Vasily Gorbik <gor@linux.ibm.com> Acked-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2020-04-24 | selftests/bpf: Fix a couple of broken test_btf cases | Stanislav Fomichev | 4 files, -40/+16
Commit 51c39bb1d5d1 ("bpf: Introduce function-by-function verification") introduced a function linkage flag and changed the error message from "vlen != 0" to "Invalid func linkage", which broke some fake BPF programs. Adjust the test accordingly. AFAICT, the programs don't really need any arguments and only look at BTF for maps, so let's drop the args altogether.

Before:

BTF raw test[103] (func (Non zero vlen)): do_test_raw:3703:FAIL expected err_str:vlen != 0
magic: 0xeb9f
version: 1
flags: 0x0
hdr_len: 24
type_off: 0
type_len: 72
str_off: 72
str_len: 10
btf_total_size: 106
[1] INT (anon) size=4 bits_offset=0 nr_bits=32 encoding=SIGNED
[2] INT (anon) size=4 bits_offset=0 nr_bits=32 encoding=(none)
[3] FUNC_PROTO (anon) return=0 args=(1 a, 2 b)
[4] FUNC func type_id=3
Invalid func linkage

BTF libbpf test[1] (test_btf_haskv.o): libbpf: load bpf program failed: Invalid argument
libbpf: -- BEGIN DUMP LOG ---
libbpf: Validating test_long_fname_2() func#1...
Arg#0 type PTR in test_long_fname_2() is not supported yet.
processed 0 insns (limit 1000000) max_states_per_insn 0 total_states 0 peak_states 0 mark_read 0
libbpf: -- END LOG --
libbpf: failed to load program 'dummy_tracepoint'
libbpf: failed to load object 'test_btf_haskv.o'
do_test_file:4201:FAIL bpf_object__load: -4007

BTF libbpf test[2] (test_btf_newkv.o): libbpf: load bpf program failed: Invalid argument
libbpf: -- BEGIN DUMP LOG ---
libbpf: Validating test_long_fname_2() func#1...
Arg#0 type PTR in test_long_fname_2() is not supported yet.
processed 0 insns (limit 1000000) max_states_per_insn 0 total_states 0 peak_states 0 mark_read 0
libbpf: -- END LOG --
libbpf: failed to load program 'dummy_tracepoint'
libbpf: failed to load object 'test_btf_newkv.o'
do_test_file:4201:FAIL bpf_object__load: -4007

BTF libbpf test[3] (test_btf_nokv.o): libbpf: load bpf program failed: Invalid argument
libbpf: -- BEGIN DUMP LOG ---
libbpf: Validating test_long_fname_2() func#1...
Arg#0 type PTR in test_long_fname_2() is not supported yet.
processed 0 insns (limit 1000000) max_states_per_insn 0 total_states 0 peak_states 0 mark_read 0
libbpf: -- END LOG --
libbpf: failed to load program 'dummy_tracepoint'
libbpf: failed to load object 'test_btf_nokv.o'
do_test_file:4201:FAIL bpf_object__load: -4007

Fixes: 51c39bb1d5d1 ("bpf: Introduce function-by-function verification") Signed-off-by: Stanislav Fomichev <sdf@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200422003753.124921-1-sdf@google.com
2020-04-24 | tools/runqslower: Ensure own vmlinux.h is picked up first | Andrii Nakryiko | 1 file, -1/+1
Reorder include paths to ensure that runqslower sources are picking up vmlinux.h, generated by runqslower's own Makefile. When runqslower is built from selftests/bpf, due to current -I$(BPF_INCLUDE) -I$(OUTPUT) ordering, it might pick up not-yet-complete vmlinux.h, generated by selftests Makefile, which could lead to compilation errors like [0]. So ensure that -I$(OUTPUT) goes first and rely on runqslower's Makefile own dependency chain to ensure vmlinux.h is properly completed before source code relying on it is compiled. [0] https://travis-ci.org/github/libbpf/libbpf/jobs/677905925 Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200422012407.176303-1-andriin@fb.com
2020-04-24 | bpf: Make bpf_link_fops static | Zou Wei | 1 file, -1/+1
Fix the following sparse warning: kernel/bpf/syscall.c:2289:30: warning: symbol 'bpf_link_fops' was not declared. Should it be static? Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: Zou Wei <zou_wei@huawei.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/1587609160-117806-1-git-send-email-zou_wei@huawei.com
2020-04-24 | bpftool: Respect the -d option in struct_ops cmd | Martin KaFai Lau | 1 file, -1/+7
In the prog cmd, the "-d" option turns on the verifier log. This is missed in the "struct_ops" cmd and this patch fixes it. Fixes: 65c93628599d ("bpftool: Add struct_ops support") Signed-off-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Reviewed-by: Quentin Monnet <quentin@isovalent.com> Link: https://lore.kernel.org/bpf/20200424182911.1259355-1-kafai@fb.com
2020-04-24 | selftests/bpf: Add test for freplace program with expected_attach_type | Toke Høiland-Jørgensen | 3 files, -18/+58
This adds a new selftest that tests the ability to attach an freplace program to a program type that relies on the expected_attach_type of the target program to pass verification. Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/158773526831.293902.16011743438619684815.stgit@toke.dk
2020-04-24 | bpf: Propagate expected_attach_type when verifying freplace programs | Toke Høiland-Jørgensen | 1 file, -0/+8
For some program types, the verifier relies on the expected_attach_type of the program being verified in the verification process. However, for freplace programs, the attach type was not propagated along with the verifier ops, so the expected_attach_type would always be zero for freplace programs. This in turn caused the verifier to sometimes make the wrong call for freplace programs. For all existing uses of expected_attach_type for this purpose, the result of this was only false negatives (i.e., freplace functions would be rejected by the verifier even though they were valid programs for the target they were replacing). However, should a false positive be introduced, this can lead to out-of-bounds accesses and/or crashes. The fix introduced in this patch is to propagate the expected_attach_type to the freplace program during verification, and reset it after that is done. Fixes: be8704ff07d2 ("bpf: Introduce dynamic program extensions") Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/158773526726.293902.13257293296560360508.stgit@toke.dk
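Conceptually the fix is a temporary adopt-and-reset of the attach type around verification; the sketch below is a simplification, with do_verification() standing in for the real verifier flow rather than being an actual kernel function.

    /* An extension (freplace) program temporarily takes on the
     * expected_attach_type of the program it replaces so that any
     * attach-type-dependent checks see the right value, then resets it. */
    if (prog->type == BPF_PROG_TYPE_EXT && tgt_prog)
            prog->expected_attach_type = tgt_prog->expected_attach_type;

    err = do_verification(env);             /* placeholder for the real flow */

    if (prog->type == BPF_PROG_TYPE_EXT)
            prog->expected_attach_type = 0; /* reset, as described above */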
2020-04-24 | bpf: Fix leak in LINK_UPDATE and enforce empty old_prog_fd | Andrii Nakryiko | 1 file, -2/+9
Fix bug of not putting bpf_link in LINK_UPDATE command. Also enforce zeroed old_prog_fd if no BPF_F_REPLACE flag is specified. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200424052045.4002963-1-andriin@fb.com
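A simplified sketch of the two fixes in the LINK_UPDATE handler as described; the label name and surrounding error handling are assumed, not copied from the patch.

    /* Reject a stale old_prog_fd unless the caller asked for a
     * compare-and-swap style replace, and always drop the link
     * reference taken earlier in the syscall. */
    if (!(flags & BPF_F_REPLACE) && attr->link_update.old_prog_fd) {
            ret = -EINVAL;
            goto out_put_link;
    }
    /* ... swap in the new program ... */
    out_put_link:
            bpf_link_put(link);             /* previously leaked on this path */
            return ret;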
2020-04-24 | bpf, x86_32: Fix logic error in BPF_LDX zero-extension | Wang YanQing | 1 file, -1/+1
When verifier_zext is true, we don't need to emit code for zero-extension. Fixes: 836256bf5f37 ("x32: bpf: eliminate zero extension code-gen") Signed-off-by: Wang YanQing <udknight@gmail.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200423050637.GA4029@udknight
2020-04-24 | bpf, x86_32: Fix clobbering of dst for BPF_JSET | Luke Nelson | 1 file, -4/+18
The current JIT clobbers the destination register for BPF_JSET BPF_X and BPF_K by using "and" and "or" instructions. This is fine when the destination register is a temporary loaded from a register stored on the stack but not otherwise. This patch fixes the problem (for both BPF_K and BPF_X) by always loading the destination register into temporaries since BPF_JSET should not modify the destination register. This bug may not be currently triggerable as BPF_REG_AX is the only register not stored on the stack and the verifier uses it in a limited way. Fixes: 03f5781be2c7b ("bpf, x86_32: add eBPF JIT compiler for ia32") Signed-off-by: Xi Wang <xi.wang@gmail.com> Signed-off-by: Luke Nelson <luke.r.nels@gmail.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Wang YanQing <udknight@gmail.com> Link: https://lore.kernel.org/bpf/20200422173630.8351-2-luke.r.nels@gmail.com
2020-04-24 | bpf, x86_32: Fix incorrect encoding in BPF_LDX zero-extension | Luke Nelson | 1 file, -1/+3
The current JIT uses the following sequence to zero-extend into the upper 32 bits of the destination register for BPF_LDX BPF_{B,H,W}, when the destination register is not on the stack: EMIT3(0xC7, add_1reg(0xC0, dst_hi), 0); The problem is that C7 /0 encodes a MOV instruction that requires a 4-byte immediate; the current code emits only 1 byte of the immediate. This means that the first 3 bytes of the next instruction will be treated as the rest of the immediate, breaking the stream of instructions. This patch fixes the problem by instead emitting "xor dst_hi,dst_hi" to clear the upper 32 bits. This fixes the problem and is more efficient than using MOV to load a zero immediate. This bug may not be currently triggerable as BPF_REG_AX is the only register not stored on the stack and the verifier uses it in a limited way, and the verifier implements a zero-extension optimization. But the JIT should avoid emitting incorrect encodings regardless. Fixes: 03f5781be2c7b ("bpf, x86_32: add eBPF JIT compiler for ia32") Signed-off-by: Xi Wang <xi.wang@gmail.com> Signed-off-by: Luke Nelson <luke.r.nels@gmail.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Reviewed-by: H. Peter Anvin (Intel) <hpa@zytor.com> Acked-by: Wang YanQing <udknight@gmail.com> Link: https://lore.kernel.org/bpf/20200422173630.8351-1-luke.r.nels@gmail.com
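The replacement emit, as described, clears the high half with a register-register XOR rather than the truncated MOV immediate; a sketch assuming the JIT's EMIT2()/add_2reg() helpers, not the literal diff:

    /* "xor dst_hi,dst_hi": two bytes, no immediate, so the instruction
     * stream stays well-formed and the upper 32 bits are still zeroed. */
    EMIT2(0x33, add_2reg(0xC0, dst_hi, dst_hi));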
2020-04-24 | bpf: Fix reStructuredText markup | Jakub Wilk | 2 files, -2/+2
The patch fixes: $ scripts/bpf_helpers_doc.py > bpf-helpers.rst $ rst2man bpf-helpers.rst > bpf-helpers.7 bpf-helpers.rst:1105: (WARNING/2) Inline strong start-string without end-string. Signed-off-by: Jakub Wilk <jwilk@jwilk.net> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Reviewed-by: Quentin Monnet <quentin@isovalent.com> Link: https://lore.kernel.org/bpf/20200422082324.2030-1-jwilk@jwilk.net
2020-04-24 | net: systemport: suppress warnings on failed Rx SKB allocations | Doug Berger | 1 file, -1/+2
The driver is designed to drop Rx packets and reclaim the buffers when an allocation fails, and the network interface needs to safely handle this packet loss. Therefore, an allocation failure of Rx SKBs is relatively benign. However, the output of the warning message occurs with a high scheduling priority that can cause excessive jitter/latency for other high priority processing. This commit suppresses the warning messages to prevent scheduling problems while retaining the failure count in the statistics of the network interface. Signed-off-by: Doug Berger <opendmb@gmail.com> Acked-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-04-24 | net: bcmgenet: suppress warnings on failed Rx SKB allocations | Doug Berger | 1 file, -1/+2
The driver is designed to drop Rx packets and reclaim the buffers when an allocation fails, and the network interface needs to safely handle this packet loss. Therefore, an allocation failure of Rx SKBs is relatively benign. However, the output of the warning message occurs with a high scheduling priority that can cause excessive jitter/latency for other high priority processing. This commit suppresses the warning messages to prevent scheduling problems while retaining the failure count in the statistics of the network interface. Signed-off-by: Doug Berger <opendmb@gmail.com> Acked-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
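The conventional way to get this behaviour, and presumably what both drivers do, is to allocate with __GFP_NOWARN while still counting the drop; a generic sketch rather than the exact driver code:

    /* Suppress the allocation-failure splat but keep the rx_dropped
     * accounting so the loss remains visible in interface statistics. */
    skb = __netdev_alloc_skb(dev, buf_len, GFP_ATOMIC | __GFP_NOWARN);
    if (!skb) {
            dev->stats.rx_dropped++;
            return NULL;            /* caller reclaims the buffer */
    }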
2020-04-24 | macsec: avoid to set wrong mtu | Taehee Yoo | 1 file, -4/+8
When a macsec interface is created, the mtu is calculated with the lower interface's mtu value. If the lower interface's mtu is smaller than the overhead needed by the macsec interface, the computed macsec mtu underflows and wraps around to a huge value. So, if the lower interface's mtu is too low, the macsec interface's mtu should be set to 0.

Test commands:
ip link add dummy0 mtu 10 type dummy
ip link add macsec0 link dummy0 type macsec
ip link show macsec0

Before:
11: macsec0@dummy0: <BROADCAST,MULTICAST,M-DOWN> mtu 4294967274
After:
11: macsec0@dummy0: <BROADCAST,MULTICAST,M-DOWN> mtu 0

Fixes: c09440f7dcb3 ("macsec: introduce IEEE 802.1AE driver") Signed-off-by: Taehee Yoo <ap420073@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
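A sketch of the clamping described above; the icv_len and macsec_extra_len() names are approximations of the macsec driver, not a verbatim diff.

    /* Never let the MACsec overhead push the mtu below zero; assigning a
     * negative result to the unsigned dev->mtu is what produced the
     * 4294967274 seen in the "Before" output. */
    int mtu = real_dev->mtu - icv_len - macsec_extra_len(true);

    dev->mtu = mtu < 0 ? 0 : mtu;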
2020-04-24 | dt-bindings: phy: qcom-qusb2: Fix defaults | Douglas Anderson | 1 file, -3/+3
The defaults listed in the bindings don't match what the code is actually doing. Presumably existing users care more about keeping existing behavior the same, so change the bindings to match the code in Linux. The "qcom,preemphasis-level" default has been wrong for quite a long time (May 2018). The other two were recently added. As some evidence that these values are wrong, this is from the Linux driver:

- qcom,preemphasis-level: sets "PORT_TUNE1", lower 2 bits. Driver programs PORT_TUNE1 to 0x30 by default and (0x30 & 0x3) = 0.
- qcom,bias-ctrl-value: sets "PLL_BIAS_CONTROL_2", lower 6 bits. Driver programs PLL_BIAS_CONTROL_2 to 0x20 by default and (0x20 & 0x3f) = 0x20 = 32.
- qcom,hsdisc-trim-value: sets "PORT_TUNE2", lower 2 bits. Driver programs PORT_TUNE2 to 0x29 by default and (0x29 & 0x3) = 1.

Fixes: 1e6f134eb67a ("dt-bindings: phy: qcom-qusb2: Add support for overriding Phy tuning parameters") Fixes: a8b70ccf10e3 ("dt-bindings: phy-qcom-usb2: Add support to override tuning values") Signed-off-by: Douglas Anderson <dianders@chromium.org> Reviewed-by: Matthias Kaehlcke <mka@chromium.org> Signed-off-by: Rob Herring <robh@kernel.org>
2020-04-24 | dt-bindings: Fix erroneous 'additionalProperties' | Rob Herring | 6 files, -7/+17
There's several cases of json-schema 'additionalProperties' at the wrong indentation level which has the effect of making them DT properties. This is harmless, but let's fix them so a meta-schema check for this can be added. In all the cases, either the 'additionalProperties' was extra or doesn't work because there's a $ref to more properties. In the latter case, we can use 'unevaluatedProperties' instead. Reported-by: Iskren Chernev <iskren.chernev@gmail.com> Cc: Lee Jones <lee.jones@linaro.org> Cc: Saravanan Sekar <sravanhome@gmail.com> Cc: Liam Girdwood <lgirdwood@gmail.com> Acked-by: Mark Brown <broonie@kernel.org> Signed-off-by: Rob Herring <robh@kernel.org>
2020-04-24 | proc: Put thread_pid in release_task not proc_flush_pid | Eric W. Biederman | 2 files, -1/+1
Oleg pointed out that in the unlikely event the kernel is compiled with CONFIG_PROC_FS unset, release_task will now leak the pid. Move the put_pid out of proc_flush_pid into release_task to fix this and to guarantee I don't make that mistake again. When possible it makes sense to keep get and put in the same function so it can easily be seen how they pair up. Fixes: 7bc3e6e55acf ("proc: Use a list of inodes to flush from proc") Reported-by: Oleg Nesterov <oleg@redhat.com> Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
2020-04-24 | mm: check that mm is still valid in madvise() | Linus Torvalds | 1 file, -0/+18
IORING_OP_MADVISE can end up basically doing mprotect() on the VM of another process, which means that it can race with our crazy core dump handling which accesses the VM state without holding the mmap_sem (because it incorrectly thinks that it is the final user). This is clearly a core dumping problem, but we've never fixed it the right way, and instead have the notion of "check that the mm is still ok" using mmget_still_valid() after getting the mmap_sem for writing in any situation where we're not the original VM thread. See commit 04f5866e41fb ("coredump: fix race condition between mmget_not_zero()/get_task_mm() and core dumping") for more background on this whole mmget_still_valid() thing. You might want to have a barf bag handy when you do. We're discussing just fixing this properly in the only remaining core dumping routines. But even if we do that, let's make do_madvise() do the right thing, and then when we fix core dumping, we can remove all these mmget_still_valid() checks. Reported-and-tested-by: Jann Horn <jannh@google.com> Fixes: c1ca757bd6f4 ("io_uring: add IORING_OP_MADVISE") Acked-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
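In do_madvise() the added guard boils down to a single check before touching another process's VM; a simplified sketch, where the bail-out label and error handling are assumptions and the real patch's flow may differ.

    /* If we are operating on someone else's mm (the io_uring case), make
     * sure a concurrent core dump has not made it unsafe to modify. */
    if (current->mm != mm && !mmget_still_valid(mm))
            goto skip;              /* silently skip the madvise work */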
2020-04-24 | afs: Make record checking use TASK_UNINTERRUPTIBLE when appropriate | David Howells | 4 files, -12/+11
When an operation is meant to be done uninterruptibly (such as FS.StoreData), we should not be allowing volume and server record checking to be interrupted. Fixes: d2ddc776a458 ("afs: Overhaul volume and server record caching and fileserver rotation") Signed-off-by: David Howells <dhowells@redhat.com>
2020-04-24 | afs: Fix to actually set AFS_SERVER_FL_HAVE_EPOCH | David Howells | 1 file, -1/+1
AFS keeps track of the epoch value from the rxrpc protocol to note (a) when a fileserver appears to have restarted and (b) when different endpoints of a fileserver do not appear to be associated with the same fileserver (ie. all probes back from a fileserver from all of its interfaces should carry the same epoch). However, the AFS_SERVER_FL_HAVE_EPOCH flag that indicates that we've received the server's epoch is never set, though it is used. Fix this to set the flag when we first receive an epoch value from a probe sent to the filesystem client from the fileserver. Fixes: 3bf0fb6f33dd ("afs: Probe multiple fileservers simultaneously") Signed-off-by: David Howells <dhowells@redhat.com>
2020-04-24 | afs: Remove some unused bits | David Howells | 3 files, -8/+3
Remove three bits: (1) afs_server::no_epoch is neither set nor used. (2) afs_server::have_result is set and a wakeup is applied to it, but nothing looks at it or waits on it. (3) afs_vl_dump_edestaddrreq() prints afs_addr_list::probed, but nothing sets it for VL servers. Signed-off-by: David Howells <dhowells@redhat.com>
2020-04-24 | dt-bindings: Fix command line length limit calling dt-mk-schema | Rob Herring | 1 file, -8/+10
As the number of schemas has increased, we're starting to hit the error "execvp: /bin/sh: Argument list too long". This is due to passing all the schema files on the command line to dt-mk-schema. It currently only happens with out of tree builds and is intermittent depending on the file path lengths. Commit 2ba06cd8565b ("kbuild: Always validate DT binding examples") made hitting this problem more likely since the example validation now always gets the full list of schemas. Fix this by passing the schema file list in a pipe and using xargs. We end up doing the find twice, but the time is insignificant compared to the dt-mk-schema time. Reported-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Reviewed-by: Masahiro Yamada <masahiroy@kernel.org> Tested-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Signed-off-by: Rob Herring <robh@kernel.org>
2020-04-24 | mac80211: sta_info: Add lockdep condition for RCU list usage | Madhuparna Bhowmik | 1 file, -1/+2
The function sta_info_get_by_idx() uses RCU list primitive. It is called with local->sta_mtx held from mac80211/cfg.c. Add lockdep expression to avoid any false positive RCU list warnings. Signed-off-by: Madhuparna Bhowmik <madhuparnabhowmik10@gmail.com> Link: https://lore.kernel.org/r/20200409082906.27427-1-madhuparnabhowmik10@gmail.com Signed-off-by: Johannes Berg <johannes.berg@intel.com>
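The mechanism is the optional lockdep condition argument of list_for_each_entry_rcu(); a condensed sketch of how sta_info_get_by_idx() can use it (the real loop also filters on the sdata, which is omitted here):

    /* The iteration is legal either inside an RCU read-side critical
     * section or with sta_mtx held; telling lockdep about the mutex
     * avoids a false-positive RCU-usage warning. */
    list_for_each_entry_rcu(sta, &local->sta_list, list,
                            lockdep_is_held(&local->sta_mtx)) {
            if (i++ == idx)
                    return sta;
    }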
2020-04-24 | mac80211: populate debugfs only after cfg80211 init | Johannes Berg | 10 files, -25/+51
When fixing the initialization race, we neglected to account for the fact that debugfs is initialized in wiphy_register(), and some debugfs things went missing (or rather were rerooted to the global debugfs root). Fix this by adding debugfs entries only after wiphy_register(). This requires some changes in the rate control code since it currently adds debugfs at alloc time, which can no longer be done after the reordering. Reported-by: Jouni Malinen <j@w1.fi> Reported-by: kernel test robot <rong.a.chen@intel.com> Reported-by: Hauke Mehrtens <hauke@hauke-m.de> Reported-by: Felix Fietkau <nbd@nbd.name> Cc: stable@vger.kernel.org Fixes: 52e04b4ce5d0 ("mac80211: fix race in ieee80211_register_hw()") Signed-off-by: Johannes Berg <johannes.berg@intel.com> Acked-by: Sumit Garg <sumit.garg@linaro.org> Link: https://lore.kernel.org/r/20200423111344.0e00d3346f12.Iadc76a03a55093d94391fc672e996a458702875d@changeid Signed-off-by: Johannes Berg <johannes.berg@intel.com>
2020-04-24 | lib/mpi: Fix building for powerpc with clang | Nathan Chancellor | 1 file, -17/+17
0day reports over and over on a powerpc randconfig with clang:

lib/mpi/generic_mpih-mul1.c:37:13: error: invalid use of a cast in a inline asm context requiring an l-value: remove the cast or build with -fheinous-gnu-extensions

Remove the superfluous casts, which have been done previously for x86 and arm32 in commit dea632cadd12 ("lib/mpi: fix build with clang") and commit 7b7c1df2883d ("lib/mpi/longlong.h: fix building with 32-bit x86"). Reported-by: kbuild test robot <lkp@intel.com> Signed-off-by: Nathan Chancellor <natechancellor@gmail.com> Acked-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://github.com/ClangBuiltLinux/linux/issues/991 Link: https://lore.kernel.org/r/20200413195041.24064-1-natechancellor@gmail.com
2020-04-23 | net: bcmgenet: correct per TX/RX ring statistics | Doug Berger | 1 file, -0/+3
The change to track net_device_stats per ring to better support SMP missed updating the rx_dropped member. The ndo_get_stats method is also needed to combine the results for ethtool statistics (-S) before filling in the ethtool structure. Fixes: 37a30b435b92 ("net: bcmgenet: Track per TX/RX rings statistics") Signed-off-by: Doug Berger <opendmb@gmail.com> Acked-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>