path: root/arch/arm64/Kconfig
Age    Commit message    Author    Files/Lines changed
2020-08-09  Merge tag 'kvmarm-5.9' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into kvm-next-5.6  (Paolo Bonzini)  1 file changed, -19/+1
KVM/arm64 updates for Linux 5.9:
- Split the VHE and nVHE hypervisor code bases, build the EL2 code separately, allowing for the VHE code to now be built with instrumentation
- Level-based TLB invalidation support
- Restructure of the vcpu register storage to accommodate the NV code
- Pointer Authentication available for guests on nVHE hosts
- Simplification of the system register table parsing
- MMU cleanups and fixes
- A number of post-32bit cleanups and other fixes
2020-08-04  Merge tag 'fork-v5.9' of git://git.kernel.org/pub/scm/linux/kernel/git/brauner/linux  (Linus Torvalds)  1 file changed, -1/+0
Pull fork cleanups from Christian Brauner:
 "This is a cleanup series from when we reworked a chunk of the process creation paths in the kernel and switched to struct {kernel_}clone_args.

 High-level this does two main things:

 - Remove the double export of both do_fork() and _do_fork() where do_fork() used the inconsistent legacy clone calling convention. Now we only export _do_fork(), which is based on struct kernel_clone_args.

 - Remove the copy_thread_tls()/copy_thread() split, making the architecture-specific HAVE_COPY_THREAD_TLS config option obsolete. This switches all remaining architectures to select HAVE_COPY_THREAD_TLS and thus to the copy_thread_tls() calling convention.

 The current split makes the process creation codepaths more convoluted than they need to be. Each architecture has its own copy_thread() function unless it selects HAVE_COPY_THREAD_TLS, in which case it has a copy_thread_tls() function. The split is not needed anymore nowadays; all architectures support CLONE_SETTLS, but quite a few of them never bothered to select HAVE_COPY_THREAD_TLS and instead simply continued to use copy_thread() and the old calling convention. Removing this split cleans up the process creation codepaths and paves the way for implementing clone3() on such architectures, since it requires the copy_thread_tls() calling convention.

 After having made each architecture support copy_thread_tls(), this series simply renames that function back to copy_thread(). It also switches all architectures that call do_fork() directly over to _do_fork() and the struct kernel_clone_args calling convention. This is a corollary of switching the architectures that did not yet support it over to copy_thread_tls(), since do_fork() is conditional on not supporting copy_thread_tls() (mostly because it lacks a separate argument for tls, which is trivial to fix, but there's no need for this function to exist).

 The do_fork() removal is in itself already useful as it allows us to remove the export of both do_fork() and _do_fork() we currently have in favor of only _do_fork(). This has already been discussed back when we added clone3(). The legacy clone() calling convention is - as is probably well-known - somewhat odd:

   #
   # ABI hall of shame
   #
   config CLONE_BACKWARDS
   config CLONE_BACKWARDS2
   config CLONE_BACKWARDS3

 That is aggravated by the fact that some architectures such as sparc follow the CLONE_BACKWARDSx calling convention but don't really select the corresponding config option since they call do_fork() directly. So do_fork() enforces a somewhat arbitrary calling convention in the first place that doesn't really help the individual architectures that deviate from it. They can thus simply be switched to _do_fork(), enforcing a single calling convention. (I really hope that any new architectures will __not__ try to implement their own calling conventions...) Most architectures have already made a similar switch (m68k comes to mind).

 Overall this removes more code than it adds, even with a good portion of added comments. It simplifies a chunk of arch-specific assembly, either by moving the code into C or by simply rewriting the assembly.

 Architectures that have been touched in non-trivial ways have all been actually boot and stress tested: sparc and ia64 have been tested with Debian 9 images. They are the two architectures which have been touched the most. All non-trivial changes to architectures have seen acks from the relevant maintainers. nios2 was tested with a custom-built buildroot image.
 h8300 I couldn't get something bootable to test on, but the changes have been fairly automatic and I'm sure we'll hear people yell if I broke something there.

 All other architectures that have been touched in trivial ways have been compile tested for each single patch of the series via git rebase -x "make ..." v5.8-rc2. arm{64} and x86{_64} have been boot tested even though they have just been trivially touched (removal of the HAVE_COPY_THREAD_TLS macro from their Kconfig), because they are basically "core architectures" and it is trivial to get your hands on a usable image"

* tag 'fork-v5.9' of git://git.kernel.org/pub/scm/linux/kernel/git/brauner/linux:
  arch: rename copy_thread_tls() back to copy_thread()
  arch: remove HAVE_COPY_THREAD_TLS
  unicore: switch to copy_thread_tls()
  sh: switch to copy_thread_tls()
  nds32: switch to copy_thread_tls()
  microblaze: switch to copy_thread_tls()
  hexagon: switch to copy_thread_tls()
  c6x: switch to copy_thread_tls()
  alpha: switch to copy_thread_tls()
  fork: remove do_fork()
  h8300: select HAVE_COPY_THREAD_TLS, switch to kernel_clone_args
  nios2: enable HAVE_COPY_THREAD_TLS, switch to kernel_clone_args
  ia64: enable HAVE_COPY_THREAD_TLS, switch to kernel_clone_args
  sparc: unconditionally enable HAVE_COPY_THREAD_TLS
  sparc: share process creation helpers between sparc and sparc64
  sparc64: enable HAVE_COPY_THREAD_TLS
  fork: fold legacy_clone_args_valid() into _do_fork()
2020-07-31  Merge branch 'for-next/tlbi' into for-next/core  (Catalin Marinas)  1 file changed, -0/+14
* for-next/tlbi: : Support for TTL (translation table level) hint in the TLB operations arm64: tlb: Use the TLBI RANGE feature in arm64 arm64: enable tlbi range instructions arm64: tlb: Detect the ARMv8.4 TLBI RANGE feature arm64: tlb: don't set the ttl value in flush_tlb_page_nosync arm64: Shift the __tlbi_level() indentation left arm64: tlb: Set the TTL field in flush_*_tlb_range arm64: tlb: Set the TTL field in flush_tlb_range tlb: mmu_gather: add tlb_flush_*_range APIs arm64: Add tlbi_user_level TLB invalidation helper arm64: Add level-hinted TLB invalidation helper arm64: Document SW reserved PTE/PMD bits in Stage-2 descriptors arm64: Detect the ARMv8.4 TTL feature
2020-07-31  Merge branches 'for-next/misc', 'for-next/vmcoreinfo', 'for-next/cpufeature', 'for-next/acpi', 'for-next/perf', 'for-next/timens', 'for-next/msi-iommu' and 'for-next/trivial' into for-next/core  (Catalin Marinas)  1 file changed, -0/+1
* for-next/misc: : Miscellaneous fixes and cleanups arm64: use IRQ_STACK_SIZE instead of THREAD_SIZE for irq stack arm64/mm: save memory access in check_and_switch_context() fast switch path recordmcount: only record relocation of type R_AARCH64_CALL26 on arm64. arm64: Reserve HWCAP2_MTE as (1 << 18) arm64/entry: deduplicate SW PAN entry/exit routines arm64: s/AMEVTYPE/AMEVTYPER arm64/hugetlb: Reserve CMA areas for gigantic pages on 16K and 64K configs arm64: stacktrace: Move export for save_stack_trace_tsk() smccc: Make constants available to assembly arm64/mm: Redefine CONT_{PTE, PMD}_SHIFT arm64/defconfig: Enable CONFIG_KEXEC_FILE arm64: Document sysctls for emulated deprecated instructions arm64/panic: Unify all three existing notifier blocks arm64/module: Optimize module load time by optimizing PLT counting * for-next/vmcoreinfo: : Export the virtual and physical address sizes in vmcoreinfo arm64/crash_core: Export TCR_EL1.T1SZ in vmcoreinfo crash_core, vmcoreinfo: Append 'MAX_PHYSMEM_BITS' to vmcoreinfo * for-next/cpufeature: : CPU feature handling cleanups arm64/cpufeature: Validate feature bits spacing in arm64_ftr_regs[] arm64/cpufeature: Replace all open bits shift encodings with macros arm64/cpufeature: Add remaining feature bits in ID_AA64MMFR2 register arm64/cpufeature: Add remaining feature bits in ID_AA64MMFR1 register arm64/cpufeature: Add remaining feature bits in ID_AA64MMFR0 register * for-next/acpi: : ACPI updates for arm64 arm64/acpi: disallow writeable AML opregion mapping for EFI code regions arm64/acpi: disallow AML memory opregions to access kernel memory * for-next/perf: : perf updates for arm64 arm64: perf: Expose some new events via sysfs tools headers UAPI: Update tools's copy of linux/perf_event.h arm64: perf: Add cap_user_time_short perf: Add perf_event_mmap_page::cap_user_time_short ABI arm64: perf: Only advertise cap_user_time for arch_timer arm64: perf: Implement correct cap_user_time time/sched_clock: Use raw_read_seqcount_latch() sched_clock: Expose struct clock_read_data arm64: perf: Correct the event index in sysfs perf/smmuv3: To simplify code for ioremap page in pmcg * for-next/timens: : Time namespace support for arm64 arm64: enable time namespace support arm64/vdso: Restrict splitting VVAR VMA arm64/vdso: Handle faults on timens page arm64/vdso: Add time namespace page arm64/vdso: Zap vvar pages when switching to a time namespace arm64/vdso: use the fault callback to map vvar pages * for-next/msi-iommu: : Make the MSI/IOMMU input/output ID translation PCI agnostic, augment the : MSI/IOMMU ACPI/OF ID mapping APIs to accept an input ID bus-specific parameter : and apply the resulting changes to the device ID space provided by the : Freescale FSL bus bus: fsl-mc: Add ACPI support for fsl-mc bus/fsl-mc: Refactor the MSI domain creation in the DPRC driver of/irq: Make of_msi_map_rid() PCI bus agnostic of/irq: make of_msi_map_get_device_domain() bus agnostic dt-bindings: arm: fsl: Add msi-map device-tree binding for fsl-mc bus of/device: Add input id to of_dma_configure() of/iommu: Make of_map_rid() PCI agnostic ACPI/IORT: Add an input ID to acpi_dma_configure() ACPI/IORT: Remove useless PCI bus walk ACPI/IORT: Make iort_msi_map_rid() PCI agnostic ACPI/IORT: Make iort_get_device_domain IRQ domain agnostic ACPI/IORT: Make iort_match_node_callback walk the ACPI namespace for NC * for-next/trivial: : Trivial fixes arm64: sigcontext.h: delete duplicated word arm64: ptrace.h: delete duplicated word arm64: pgtable-hwdef.h: delete duplicated words
2020-07-28  Merge branch 'kvm-arm64/misc-5.9' into kvmarm-master/next-WIP  (Marc Zyngier)  1 file changed, -16/+0
Signed-off-by: Marc Zyngier <maz@kernel.org>
2020-07-28  Merge branch 'kvm-arm64/ptrauth-nvhe' into kvmarm-master/next-WIP  (Marc Zyngier)  1 file changed, -3/+1
Signed-off-by: Marc Zyngier <maz@kernel.org>
2020-07-28  KVM: arm64: Substitute RANDOMIZE_BASE for HARDEN_EL2_VECTORS  (David Brazdil)  1 file changed, -16/+0
The HARDEN_EL2_VECTORS config maps vectors at a fixed location on cores which are susceptible to Spectre variant 3a (A57, A72) to prevent defeating hyp layout randomization by leaking the value of VBAR_EL2. Since this feature is only applicable when EL2 layout randomization is enabled, unify both behind the same RANDOMIZE_BASE Kconfig option. The majority of the code remains conditional on a capability selected for the affected cores.
Signed-off-by: David Brazdil <dbrazdil@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20200721094445.82184-3-dbrazdil@google.com
2020-07-24  arm64: enable time namespace support  (Andrei Vagin)  1 file changed, -0/+1
CONFIG_TIME_NS depends on GENERIC_VDSO_TIME_NS.
Signed-off-by: Andrei Vagin <avagin@gmail.com>
Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Reviewed-by: Dmitry Safonov <dima@arista.com>
Link: https://lore.kernel.org/r/20200624083321.144975-7-avagin@gmail.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-07-15  arm64: enable tlbi range instructions  (Zhenyu Ye)  1 file changed, -0/+14
The TLBI RANGE feature introduces new assembly instructions and is only supported by binutils >= 2.30. Add the necessary Kconfig logic to allow this to be enabled, and pass '-march=armv8.4-a' to KBUILD_CFLAGS.
Signed-off-by: Zhenyu Ye <yezhenyu2@huawei.com>
Link: https://lore.kernel.org/r/20200715071945.897-3-yezhenyu2@huawei.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
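For orientation, the kind of Kconfig gating this introduces looks roughly like the following (a sketch with approximate symbol names and help text, not the literal hunk from this commit): the new option is only offered when the assembler accepts the ARMv8.4 architecture level, which is what ties it to binutils >= 2.30.

  # Sketch only: names and wording approximate.
  config AS_HAS_ARMV8_4
          def_bool $(cc-option,-Wa$(comma)-march=armv8.4-a)

  config ARM64_TLB_RANGE
          bool "Enable support for tlbi range feature"
          default y
          depends on AS_HAS_ARMV8_4
          help
            ARMv8.4-TLBI provides TLBI invalidation instructions that apply
            to a range of input addresses; this needs an assembler that
            understands -march=armv8.4-a.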
2020-07-04  arch: remove HAVE_COPY_THREAD_TLS  (Christian Brauner)  1 file changed, -1/+0
All architectures support copy_thread_tls() now, so remove the legacy copy_thread() function and the HAVE_COPY_THREAD_TLS config option. Everyone uses the same process creation calling convention based on copy_thread_tls() and struct kernel_clone_args. This will make it easier to maintain the core process creation code under kernel/, simplifies the callpaths and makes them identical for all architectures.
Cc: linux-arch@vger.kernel.org
Acked-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Acked-by: Greentime Hu <green.hu@gmail.com>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Christian Brauner <christian.brauner@ubuntu.com>
2020-07-02  arm64: Document sysctls for emulated deprecated instructions  (Mark Brown)  1 file changed, -2/+6
We have support for emulating a number of deprecated instructions in the kernel, with individual Kconfig options enabling this support per instruction. In addition to the Kconfig options we also provide runtime control via sysctls, but this is not currently mentioned in the Kconfig, so it is not very discoverable for users. This is particularly important for SWP/SWPB since this emulation is disabled by default at runtime and must be enabled via the sysctl, causing considerable frustration for users who have enabled the config option and are then confused to find that the instruction is still faulting. Add a reference to the sysctls in the help text for each of the config options, noting that SWP/SWPB is disabled by default, to improve the user experience.
Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20200625131507.32334-1-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
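As an illustration, the help-text additions follow the pattern below (a hedged sketch based on the existing SWP_EMULATION option and the abi.swp sysctl; the wording in the real patch may differ):

  config SWP_EMULATION
          bool "Emulate SWP/SWPB instructions"
          help
            ...
            SWP/SWPB emulation is disabled by default at runtime and can be
            enabled with the abi.swp sysctl (/proc/sys/abi/swp).

The same kind of note applies to the CP15 barrier and SETEND emulation options, each controlled by its own sysctl.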
2020-06-27  Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux  (Linus Torvalds)  1 file changed, -2/+2
Pull arm64 fixes from Will Deacon: "The big fix here is to our vDSO sigreturn trampoline as, after a painfully long stint of debugging, it turned out that fixing some of our CFI directives in the merge window lit up a bunch of logic in libgcc which has been shown to SEGV in some cases during asynchronous pthread cancellation. It looks like we can fix this by extending the directives to restore most of the interrupted register state from the sigcontext, but it's risky and hard to test so we opted to remove the CFI directives for now and rely on the unwinder fallback path like we used to. - Fix unwinding through vDSO sigreturn trampoline - Fix build warnings by raising minimum LD version for PAC - Whitelist some Kryo Cortex-A55 derivatives for Meltdown and SSB - Fix perf register PC reporting for compat tasks - Fix 'make clean' warning for arm64 signal selftests - Fix ftrace when BTI is compiled in - Avoid building the compat vDSO using GCC plugins" * tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: arm64: Add KRYO{3,4}XX silver CPU cores to SSB safelist arm64: perf: Report the PC value in REGS_ABI_32 mode kselftest: arm64: Remove redundant clean target arm64: kpti: Add KRYO{3, 4}XX silver CPU cores to kpti safelist arm64: Don't insert a BTI instruction at inner labels arm64: vdso: Don't use gcc plugins for building vgettimeofday.c arm64: vdso: Only pass --no-eh-frame-hdr when linker supports it arm64: Depend on newer binutils when building PAC arm64: compat: Remove 32-bit sigreturn code from the vDSO arm64: compat: Always use sigpage for sigreturn trampoline arm64: compat: Allow 32-bit vdso and sigpage to co-exist arm64: vdso: Disable dwarf unwinding through the sigreturn trampoline
2020-06-23  arm64: Depend on newer binutils when building PAC  (Mark Brown)  1 file changed, -2/+2
Versions of binutils prior to 2.33.1 don't understand the ELF notes that are added by modern compilers to indicate the PAC and BTI options used to build the code. This causes them to emit large numbers of warnings in the form:

  aarch64-linux-gnu-nm: warning: .tmp_vmlinux.kallsyms2: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0000000

during the kernel build, which is currently causing quite a bit of disruption for automated build testing using clang.

In commit 15cd0e675f3f76b (arm64: Kconfig: ptrauth: Add binutils version check to fix mismatch) we added a dependency on binutils to avoid this issue when building with versions of GCC that emit the notes, but did not do so for clang, as it was believed that the existing check for .cfi_negate_ra_state was already requiring a new enough binutils. This does not appear to be the case for some versions of binutils (eg, the binutils in Debian 10), so instead refactor so that we require a new enough GNU binutils in all cases other than when we are using an old GCC version that does not emit notes.

Other, more exotic combinations of tools, such as using clang, lld and gas together, are possible and may have further problems, but rather than adding further version checks it looks like the most robust thing will be to just test that we can build cleanly with the configured tools. That will require more review and discussion, so do this for now to address the immediate problem disrupting build testing.
Reported-by: KernelCI <bot@kernelci.org>
Reported-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Link: https://github.com/ClangBuiltLinux/linux/issues/1054
Link: https://lore.kernel.org/r/20200619123550.48098-1-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2020-06-22  KVM: arm64: Allow ARM64_PTR_AUTH when ARM64_VHE=n  (Marc Zyngier)  1 file changed, -3/+1
We currently prevent PtrAuth from even being built if KVM is selected, but VHE isn't. It is a bit of a pointless restriction, since we also check this at run time (rejecting the enabling of PtrAuth for the vcpu if we're not running with VHE). Just drop this apparently useless restriction. Acked-by: Andrew Scull <ascull@google.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org>
2020-06-21  Merge tag 'kbuild-fixes-v5.8' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild  (Linus Torvalds)  1 file changed, -1/+1
Pull Kbuild fixes from Masahiro Yamada: - fix -gz=zlib compiler option test for CONFIG_DEBUG_INFO_COMPRESSED - improve cc-option in scripts/Kbuild.include to clean up temp files - improve cc-option in scripts/Kconfig.include for more reliable compile option test - do not copy modules.builtin by 'make install' because it would break existing systems - use 'userprogs' syntax for watch_queue sample * tag 'kbuild-fixes-v5.8' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild: samples: watch_queue: build sample program for target architecture Revert "Makefile: install modules.builtin even if CONFIG_MODULES=n" scripts: Fix typo in headers_install.sh kconfig: unify cc-option and as-option kbuild: improve cc-option to clean up all temporary files Makefile: Improve compressed debug info support detection
2020-06-17  arm64: bti: Require clang >= 10.0.1 for in-kernel BTI support  (Will Deacon)  1 file changed, -0/+2
Unfortunately, most versions of clang that support BTI are capable of miscompiling the kernel when converting a switch statement into a jump table. As an example, attempting to spawn a KVM guest results in a panic:

  [   56.253312] Kernel panic - not syncing: bad mode
  [   56.253834] CPU: 0 PID: 279 Comm: lkvm Not tainted 5.8.0-rc1 #2
  [   56.254225] Hardware name: QEMU QEMU Virtual Machine, BIOS 0.0.0 02/06/2015
  [   56.254712] Call trace:
  [   56.254952]  dump_backtrace+0x0/0x1d4
  [   56.255305]  show_stack+0x1c/0x28
  [   56.255647]  dump_stack+0xc4/0x128
  [   56.255905]  panic+0x16c/0x35c
  [   56.256146]  bad_el0_sync+0x0/0x58
  [   56.256403]  el1_sync_handler+0xb4/0xe0
  [   56.256674]  el1_sync+0x7c/0x100
  [   56.256928]  kvm_vm_ioctl_check_extension_generic+0x74/0x98
  [   56.257286]  __arm64_sys_ioctl+0x94/0xcc
  [   56.257569]  el0_svc_common+0x9c/0x150
  [   56.257836]  do_el0_svc+0x84/0x90
  [   56.258083]  el0_sync_handler+0xf8/0x298
  [   56.258361]  el0_sync+0x158/0x180

This is because the switch in kvm_vm_ioctl_check_extension_generic() is executed as an indirect branch to tail-call through a jump table:

  ffff800010032dc8:  3869694c  ldrb  w12, [x10, x9]
  ffff800010032dcc:  8b0c096b  add   x11, x11, x12, lsl #2
  ffff800010032dd0:  d61f0160  br    x11

However, where the target case uses the stack, the landing pad is elided due to the presence of a paciasp instruction:

  ffff800010032e14:  d503233f  paciasp
  ffff800010032e18:  a9bf7bfd  stp   x29, x30, [sp, #-16]!
  ffff800010032e1c:  910003fd  mov   x29, sp
  ffff800010032e20:  aa0803e0  mov   x0, x8
  ffff800010032e24:  940017c0  bl    ffff800010038d24 <kvm_vm_ioctl_check_extension>
  ffff800010032e28:  93407c00  sxtw  x0, w0
  ffff800010032e2c:  a8c17bfd  ldp   x29, x30, [sp], #16
  ffff800010032e30:  d50323bf  autiasp
  ffff800010032e34:  d65f03c0  ret

Unfortunately, this results in a fatal exception because paciasp is compatible only with branch-and-link (call) instructions and not simple indirect branches. A fix is being merged into Clang 10.0.1 so that a 'bti j' instruction is emitted as an explicit landing pad in this situation. Make in-kernel BTI depend on that compiler version when building with clang.
Cc: Tom Stellard <tstellar@redhat.com>
Cc: Daniel Kiss <daniel.kiss@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Acked-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Nathan Chancellor <natechancellor@gmail.com>
Acked-by: Nick Desaulniers <ndesaulniers@google.com>
Link: https://lore.kernel.org/r/20200615105524.GA2694@willie-the-truck
Link: https://lore.kernel.org/r/20200616183630.2445-1-will@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2020-06-17  kconfig: unify cc-option and as-option  (Masahiro Yamada)  1 file changed, -1/+1
cc-option and as-option are almost the same; both pass the flag to $(CC). The main difference is that cc-option stops before the assemble stage (-S option) whereas as-option stops after it (-c option).

I chose -S because it is slightly faster, but $(cc-option,-gz=zlib) returns a wrong result (https://lkml.org/lkml/2020/6/9/1529). It has been fixed by commit 7b16994437c7 ("Makefile: Improve compressed debug info support detection"), but the assembler should always be invoked for more reliable compiler option tests. However, you cannot simply replace -S with -c because the following code in lib/Kconfig.debug would break:

  depends on $(cc-option,-gsplit-dwarf)

The combination of -c and -gsplit-dwarf does not accept /dev/null as output.

  $ cat /dev/null | gcc -gsplit-dwarf -S -x c - -o /dev/null
  $ echo $?
  0
  $ cat /dev/null | gcc -gsplit-dwarf -c -x c - -o /dev/null
  objcopy: Warning: '/dev/null' is not an ordinary file
  $ echo $?
  1
  $ cat /dev/null | gcc -gsplit-dwarf -c -x c - -o tmp.o
  $ echo $?
  0

There is another flag that creates a separate file based on the object file path:

  $ cat /dev/null | gcc -ftest-coverage -c -x c - -o /dev/null
  <stdin>:1: error: cannot open /dev/null.gcno

So, we cannot use /dev/null to sink the output. Align the cc-option implementation with scripts/Kbuild.include. With the -c option used in cc-option, as-option is unneeded.
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
2020-06-13  Merge tag 'kbuild-v5.8-2' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild  (Linus Torvalds)  1 file changed, -2/+2
Pull more Kbuild updates from Masahiro Yamada: - fix build rules in binderfs sample - fix build errors when Kbuild recurses to the top Makefile - covert '---help---' in Kconfig to 'help' * tag 'kbuild-v5.8-2' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild: treewide: replace '---help---' in Kconfig files with 'help' kbuild: fix broken builds because of GZIP,BZIP2,LZOP variables samples: binderfs: really compile this sample and fix build issues
2020-06-14  treewide: replace '---help---' in Kconfig files with 'help'  (Masahiro Yamada)  1 file changed, -2/+2
Since commit 84af7a6194e4 ("checkpatch: kconfig: prefer 'help' over '---help---'"), the number of '---help---' instances has been gradually decreasing, but there are still more than 2400 of them. This commit finishes the conversion. While I touched the lines, I also fixed the indentation.

There are a variety of indentation styles found:

  a) 4 spaces + '---help---'
  b) 7 spaces + '---help---'
  c) 8 spaces + '---help---'
  d) 1 space + 1 tab + '---help---'
  e) 1 tab + '---help---'    (correct indentation)
  f) 1 tab + 1 space + '---help---'
  g) 1 tab + 2 spaces + '---help---'

In order to convert all of them to 1 tab + 'help', I ran the following command:

  $ find . -name 'Kconfig*' | xargs sed -i 's/^[[:space:]]*---help---/\thelp/'

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
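For a concrete before/after view of what the conversion does to an individual entry (FOO is a hypothetical option used purely for illustration):

  # Before:
  config FOO
          bool "Enable foo"
          ---help---
            This option enables foo.

  # After:
  config FOO
          bool "Enable foo"
          help
            This option enables foo.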
2020-06-11  Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux  (Linus Torvalds)  1 file changed, -1/+10
Pull arm64 fixes from Will Deacon: "arm64 fixes that came in during the merge window. There will probably be more to come, but it doesn't seem like it's worth me sitting on these in the meantime. - Fix SCS debug check to report max stack usage in bytes as advertised - Fix typo: CONFIG_FTRACE_WITH_REGS => CONFIG_DYNAMIC_FTRACE_WITH_REGS - Fix incorrect mask in HiSilicon L3C perf PMU driver - Fix compat vDSO compilation under some toolchain configurations - Fix false UBSAN warning from ACPI IORT parsing code - Fix booting under bootloaders that ignore TEXT_OFFSET - Annotate debug initcall function with '__init'" * tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: arm64: warn on incorrect placement of the kernel by the bootloader arm64: acpi: fix UBSAN warning arm64: vdso32: add CONFIG_THUMB2_COMPAT_VDSO drivers/perf: hisi: Fix wrong value for all counters enable arm64: ftrace: Change CONFIG_FTRACE_WITH_REGS to CONFIG_DYNAMIC_FTRACE_WITH_REGS arm64: debug: mark a function as __init to save some memory scs: Report SCS usage in bytes rather than number of entries
2020-06-11  arm64: warn on incorrect placement of the kernel by the bootloader  (Ard Biesheuvel)  1 file changed, -1/+2
Commit cfa7ede20f133c ("arm64: set TEXT_OFFSET to 0x0 in preparation for removing it entirely") results in boot failures when booting kernels that are built without KASLR support on broken bootloaders that ignore the TEXT_OFFSET value passed via the header, and use the default of 0x80000 instead. To work around this, turn CONFIG_RELOCATABLE on by default, even if KASLR itself (CONFIG_RANDOMIZE_BASE) is turned off, and require CONFIG_EXPERT to be enabled to deviate from this. Then, emit a warning into the kernel log if we are not booting via the EFI stub (which is permitted to deviate from the placement restrictions) and the kernel base address is not placed according to the rules as laid out in Documentation/arm64/booting.rst. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20200611124330.252163-1-ardb@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
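The resulting shape of the option is roughly the following (an abbreviated sketch, not the literal hunk): default-on, with the prompt only visible when EXPERT is set.

  config RELOCATABLE
          bool "Build a relocatable kernel image" if EXPERT
          default y
          help
            Build the kernel as a Position Independent Executable that keeps
            the relocation metadata needed to run correctly when loaded at
            an address other than the one it was linked at.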
2020-06-10  arm64: vdso32: add CONFIG_THUMB2_COMPAT_VDSO  (Nick Desaulniers)  1 file changed, -0/+8
Allow the compat vdso (32b) to be compiled as either THUMB2 (default) or ARM. For THUMB2, the register r7 is reserved for the frame pointer, but code in arch/arm64/include/asm/vdso/compat_gettimeofday.h uses r7. Explicitly set -fomit-frame-pointer, since unwinding through interworked THUMB2 and ARM is unreliable anyways. See also how CONFIG_UNWINDER_FRAME_POINTER cannot be selected for CONFIG_THUMB2_KERNEL for ARCH=arm. This also helps toolchains that differ in their implicit value if the choice of -f{no-}omit-frame-pointer is left unspecified, to not error on the use of r7. 2019 Q4 ARM AAPCS seeks to standardize the use of r11 as the reserved frame pointer register, but no production compiler that can compile the Linux kernel currently implements this. We're actively discussing such a transition with ARM toolchain developers currently. Reported-by: Luis Lozano <llozano@google.com> Signed-off-by: Nick Desaulniers <ndesaulniers@google.com> Tested-by: Manoj Gupta <manojgupta@google.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Stephen Boyd <swboyd@google.com> Cc: Robin Murphy <robin.murphy@arm.com> Cc: Dave Martin <Dave.Martin@arm.com> Link: https://static.docs.arm.com/ihi0042/i/aapcs32.pdf Link: https://bugs.chromium.org/p/chromium/issues/detail?id=1084372 Link: https://lore.kernel.org/r/20200608205711.109418-1-ndesaulniers@google.com Signed-off-by: Will Deacon <will@kernel.org>
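The new option is roughly of this shape (a sketch; the exact prompt text and dependencies may differ from the merged patch):

  config THUMB2_COMPAT_VDSO
          bool "Compile the 32-bit vDSO for Thumb-2 mode" if EXPERT
          depends on COMPAT_VDSO
          default y
          help
            Compile the compat vDSO with '-mthumb -fomit-frame-pointer'
            if y, otherwise as ARM code.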
2020-06-04  mm/debug: add tests validating architecture page table helpers  (Anshuman Khandual)  1 file changed, -0/+1
This adds tests which will validate architecture page table helpers and other accessors in their compliance with expected generic MM semantics. This will help various architectures in validating changes to existing page table helpers or addition of new ones.

This test covers basic page table entry transformations including but not limited to old, young, dirty, clean, write, write protect etc at various levels, along with populating intermediate entries with the next page table page and validating them. Test page table pages are allocated from system memory with the required size and alignments. The mapped pfns at page table levels are derived from a real pfn representing a valid kernel text symbol. This test gets called via late_initcall().

This test gets built and run when CONFIG_DEBUG_VM_PGTABLE is selected. Any architecture that is willing to subscribe to this test will need to select ARCH_HAS_DEBUG_VM_PGTABLE. For now this is limited to arc, arm64, x86, s390 and powerpc platforms where the test is known to build and run successfully. Going forward, other architectures too can subscribe to the test after fixing any build or runtime problems with their page table helpers.

Folks interested in making sure that a given platform's page table helpers conform to expected generic MM semantics should enable the above config, which will just trigger this test during boot. Any non-conformity here will be reported as a warning which would need to be fixed. This test will help catch any changes to the agreed-upon semantics expected from generic MM and enable platforms to accommodate them thereafter.
[anshuman.khandual@arm.com: v17] Link: http://lkml.kernel.org/r/1587436495-22033-3-git-send-email-anshuman.khandual@arm.com
[anshuman.khandual@arm.com: v18] Link: http://lkml.kernel.org/r/1588564865-31160-3-git-send-email-anshuman.khandual@arm.com
Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Qian Cai <cai@lca.pw>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Tested-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> [s390]
Tested-by: Christophe Leroy <christophe.leroy@c-s.fr> [ppc32]
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Cc: Mike Rapoport <rppt@linux.ibm.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Kirill A. Shutemov <kirill@shutemov.name>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Link: http://lkml.kernel.org/r/1583919272-24178-1-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
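The opt-in has two halves: the architecture advertises that its helpers are ready for the test, and the user then turns the test on. A hedged sketch of that wiring (the mm-side entry is paraphrased, not quoted):

  # arch/arm64/Kconfig (and other subscribing architectures):
  config ARM64
          select ARCH_HAS_DEBUG_VM_PGTABLE

  # mm/Kconfig.debug (sketch):
  config DEBUG_VM_PGTABLE
          bool "Debug arch page table for semantics compliance"
          depends on MMU && ARCH_HAS_DEBUG_VM_PGTABLE
          default y if DEBUG_VM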
2020-06-03  arm64: mm: use ARCH_HAS_DEBUG_WX instead of arch defined  (Zong Li)  1 file changed, -0/+1
Extract DEBUG_WX to mm/Kconfig.debug for shared use. Change to use ARCH_HAS_DEBUG_WX instead of DEBUG_WX defined by arch port. Signed-off-by: Zong Li <zong.li@sifive.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will@kernel.org> Link: http://lkml.kernel.org/r/e19709e7576f65e303245fe520cad5f7bae72763.1587455584.git.zong.li@sifive.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-06-03  mm: remove CONFIG_HAVE_MEMBLOCK_NODE_MAP option  (Mike Rapoport)  1 file changed, -1/+0
CONFIG_HAVE_MEMBLOCK_NODE_MAP is used to differentiate initialization of nodes and zones structures between the systems that have region to node mapping in memblock and those that don't. Currently all the NUMA architectures enable this option and for the non-NUMA systems we can presume that all the memory belongs to node 0 and therefore the compile time configuration option is not required. The remaining few architectures that use DISCONTIGMEM without NUMA are easily updated to use memblock_add_node() instead of memblock_add() and thus have proper correspondence of memblock regions to NUMA nodes. Still, free_area_init_node() must have a backward compatible version because its semantics with and without CONFIG_HAVE_MEMBLOCK_NODE_MAP is different. Once all the architectures will use the new semantics, the entire compatibility layer can be dropped. To avoid addition of extra run time memory to store node id for architectures that keep memblock but have only a single node, the node id field of the memblock_region is guarded by CONFIG_NEED_MULTIPLE_NODES and the corresponding accessors presume that in those cases it is always 0. Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Tested-by: Hoan Tran <hoan@os.amperecomputing.com> [arm64] Acked-by: Catalin Marinas <catalin.marinas@arm.com> [arm64] Cc: Baoquan He <bhe@redhat.com> Cc: Brian Cain <bcain@codeaurora.org> Cc: "David S. Miller" <davem@davemloft.net> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Greentime Hu <green.hu@gmail.com> Cc: Greg Ungerer <gerg@linux-m68k.org> Cc: Guan Xuetao <gxt@pku.edu.cn> Cc: Guo Ren <guoren@kernel.org> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Helge Deller <deller@gmx.de> Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Ley Foon Tan <ley.foon.tan@intel.com> Cc: Mark Salter <msalter@redhat.com> Cc: Matt Turner <mattst88@gmail.com> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Hocko <mhocko@kernel.org> Cc: Michal Simek <monstr@monstr.eu> Cc: Nick Hu <nickhu@andestech.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Richard Weinberger <richard@nod.at> Cc: Rich Felker <dalias@libc.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Stafford Horne <shorne@gmail.com> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Tony Luck <tony.luck@intel.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Link: http://lkml.kernel.org/r/20200412194859.12663-4-rppt@kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-06-01  Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux  (Linus Torvalds)  1 file changed, -57/+105
Pull arm64 updates from Will Deacon: "A sizeable pile of arm64 updates for 5.8. Summary below, but the big two features are support for Branch Target Identification and Clang's Shadow Call stack. The latter is currently arm64-only, but the high-level parts are all in core code so it could easily be adopted by other architectures pending toolchain support Branch Target Identification (BTI): - Support for ARMv8.5-BTI in both user- and kernel-space. This allows branch targets to limit the types of branch from which they can be called and additionally prevents branching to arbitrary code, although kernel support requires a very recent toolchain. - Function annotation via SYM_FUNC_START() so that assembly functions are wrapped with the relevant "landing pad" instructions. - BPF and vDSO updates to use the new instructions. - Addition of a new HWCAP and exposure of BTI capability to userspace via ID register emulation, along with ELF loader support for the BTI feature in .note.gnu.property. - Non-critical fixes to CFI unwind annotations in the sigreturn trampoline. Shadow Call Stack (SCS): - Support for Clang's Shadow Call Stack feature, which reserves platform register x18 to point at a separate stack for each task that holds only return addresses. This protects function return control flow from buffer overruns on the main stack. - Save/restore of x18 across problematic boundaries (user-mode, hypervisor, EFI, suspend, etc). - Core support for SCS, should other architectures want to use it too. - SCS overflow checking on context-switch as part of the existing stack limit check if CONFIG_SCHED_STACK_END_CHECK=y. CPU feature detection: - Removed numerous "SANITY CHECK" errors when running on a system with mismatched AArch32 support at EL1. This is primarily a concern for KVM, which disabled support for 32-bit guests on such a system. - Addition of new ID registers and fields as the architecture has been extended. Perf and PMU drivers: - Minor fixes and cleanups to system PMU drivers. Hardware errata: - Unify KVM workarounds for VHE and nVHE configurations. - Sort vendor errata entries in Kconfig. Secure Monitor Call Calling Convention (SMCCC): - Update to the latest specification from Arm (v1.2). - Allow PSCI code to query the SMCCC version. Software Delegated Exception Interface (SDEI): - Unexport a bunch of unused symbols. - Minor fixes to handling of firmware data. Pointer authentication: - Add support for dumping the kernel PAC mask in vmcoreinfo so that the stack can be unwound by tools such as kdump. - Simplification of key initialisation during CPU bringup. BPF backend: - Improve immediate generation for logical and add/sub instructions. vDSO: - Minor fixes to the linker flags for consistency with other architectures and support for LLVM's unwinder. - Clean up logic to initialise and map the vDSO into userspace. ACPI: - Work around for an ambiguity in the IORT specification relating to the "num_ids" field. - Support _DMA method for all named components rather than only PCIe root complexes. - Minor other IORT-related fixes. Miscellaneous: - Initialise debug traps early for KGDB and fix KDB cacheflushing deadlock. - Minor tweaks to early boot state (documentation update, set TEXT_OFFSET to 0x0, increase alignment of PE/COFF sections). 
- Refactoring and cleanup" * tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (148 commits) KVM: arm64: Move __load_guest_stage2 to kvm_mmu.h KVM: arm64: Check advertised Stage-2 page size capability arm64/cpufeature: Add get_arm64_ftr_reg_nowarn() ACPI/IORT: Remove the unused __get_pci_rid() arm64/cpuinfo: Add ID_MMFR4_EL1 into the cpuinfo_arm64 context arm64/cpufeature: Add remaining feature bits in ID_AA64PFR1 register arm64/cpufeature: Add remaining feature bits in ID_AA64PFR0 register arm64/cpufeature: Add remaining feature bits in ID_AA64ISAR0 register arm64/cpufeature: Add remaining feature bits in ID_MMFR4 register arm64/cpufeature: Add remaining feature bits in ID_PFR0 register arm64/cpufeature: Introduce ID_MMFR5 CPU register arm64/cpufeature: Introduce ID_DFR1 CPU register arm64/cpufeature: Introduce ID_PFR2 CPU register arm64/cpufeature: Make doublelock a signed feature in ID_AA64DFR0 arm64/cpufeature: Drop TraceFilt feature exposure from ID_DFR0 register arm64/cpufeature: Add explicit ftr_id_isar0[] for ID_ISAR0 register arm64: mm: Add asid_gen_match() helper firmware: smccc: Fix missing prototype warning for arm_smccc_version_init arm64: vdso: Fix CFI directives in sigreturn trampoline arm64: vdso: Don't prefix sigreturn trampoline with a BTI C instruction ...
2020-05-28  Merge branch 'for-next/scs' into for-next/core  (Will Deacon)  1 file changed, -0/+5
Support for Clang's Shadow Call Stack in the kernel (Sami Tolvanen and Will Deacon) * for-next/scs: arm64: entry-ftrace.S: Update comment to indicate that x18 is live scs: Move DEFINE_SCS macro into core code scs: Remove references to asm/scs.h from core code scs: Move scs_overflow_check() out of architecture code arm64: scs: Use 'scs_sp' register alias for x18 scs: Move accounting into alloc/free functions arm64: scs: Store absolute SCS stack pointer value in thread_info efi/libstub: Disable Shadow Call Stack arm64: scs: Add shadow stacks for SDEI arm64: Implement Shadow Call Stack arm64: Disable SCS for hypervisor code arm64: vdso: Disable Shadow Call Stack arm64: efi: Restore register x18 if it was corrupted arm64: Preserve register x18 when CPU is suspended arm64: Reserve register x18 from general allocation with SCS scs: Disable when function graph tracing is enabled scs: Add support for stack usage debugging scs: Add page accounting for shadow call stack allocations scs: Add support for Clang's Shadow Call Stack (SCS)
2020-05-28  Merge branch 'for-next/kvm/errata' into for-next/core  (Will Deacon)  1 file changed, -21/+18
KVM CPU errata rework (Andrew Scull and Marc Zyngier) * for-next/kvm/errata: KVM: arm64: Move __load_guest_stage2 to kvm_mmu.h arm64: Unify WORKAROUND_SPECULATIVE_AT_{NVHE,VHE}
2020-05-28  Merge branch 'for-next/bti' into for-next/core  (Will Deacon)  1 file changed, -0/+46
Support for Branch Target Identification (BTI) in user and kernel (Mark Brown and others) * for-next/bti: (39 commits) arm64: vdso: Fix CFI directives in sigreturn trampoline arm64: vdso: Don't prefix sigreturn trampoline with a BTI C instruction arm64: bti: Fix support for userspace only BTI arm64: kconfig: Update and comment GCC version check for kernel BTI arm64: vdso: Map the vDSO text with guarded pages when built for BTI arm64: vdso: Force the vDSO to be linked as BTI when built for BTI arm64: vdso: Annotate for BTI arm64: asm: Provide a mechanism for generating ELF note for BTI arm64: bti: Provide Kconfig for kernel mode BTI arm64: mm: Mark executable text as guarded pages arm64: bpf: Annotate JITed code for BTI arm64: Set GP bit in kernel page tables to enable BTI for the kernel arm64: asm: Override SYM_FUNC_START when building the kernel with BTI arm64: bti: Support building kernel C code using BTI arm64: Document why we enable PAC support for leaf functions arm64: insn: Report PAC and BTI instructions as skippable arm64: insn: Don't assume unrecognized HINTs are skippable arm64: insn: Provide a better name for aarch64_insn_is_nop() arm64: insn: Add constants for new HINT instruction decode arm64: Disable old style assembly annotations ...
2020-05-25  Merge tag 'v5.7-rc7' into efi/core, to refresh the branch and pick up fixes  (Ingo Molnar)  1 file changed, -0/+1
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2020-05-15  arm64: Implement Shadow Call Stack  (Sami Tolvanen)  1 file changed, -0/+5
This change implements shadow stack switching, initial SCS set-up, and interrupt shadow stacks for arm64. Signed-off-by: Sami Tolvanen <samitolvanen@google.com> Reviewed-by: Kees Cook <keescook@chromium.org> Signed-off-by: Will Deacon <will@kernel.org>
2020-05-15  bpf: Restrict bpf_probe_read{, str}() only to archs where they work  (Daniel Borkmann)  1 file changed, -0/+1
Given the legacy bpf_probe_read{,str}() BPF helpers are broken on archs with overlapping address ranges, we should really take the next step to disable them from BPF use there. To generally fix the situation, we've recently added new helper variants bpf_probe_read_{user,kernel}() and bpf_probe_read_{user,kernel}_str(). For details on them, see 6ae08ae3dea2 ("bpf: Add probe_read_{user, kernel} and probe_read_{user,kernel}_str helpers"). Given bpf_probe_read{,str}() have been around for ~5 years by now, there are plenty of users at least on x86 still relying on them today, so we cannot remove them entirely w/o breaking the BPF tracing ecosystem. However, their use should be restricted to archs with non-overlapping address ranges where they are working in their current form. Therefore, move this behind a CONFIG_ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE and have x86, arm64, arm select it (other archs supporting it can follow-up on it as well). For the remaining archs, they can workaround easily by relying on the feature probe from bpftool which spills out defines that can be used out of BPF C code to implement the drop-in replacement for old/new kernels via: bpftool feature probe macro Suggested-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org> Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Cc: Brendan Gregg <brendan.d.gregg@gmail.com> Cc: Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/bpf/20200515101118.6508-2-daniel@iogearbox.net
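On the Kconfig side this boils down to a plain opt-in symbol that the named architectures select (a sketch; the helper-side wiring in the BPF core is not shown here):

  config ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE
          bool

  # selected from the arch Kconfig, e.g. for arm64:
  config ARM64
          select ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE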
2020-05-12  arm64: kconfig: Update and comment GCC version check for kernel BTI  (Will Deacon)  1 file changed, -1/+2
Some versions of GCC are known to suffer from a BTI code generation bug, meaning that CONFIG_CC_HAS_BRANCH_PROT_PAC_RET_BTI cannot be solely used to determine whether or not we can compile with kernel with BTI enabled. Update the BTI Kconfig entry to refer to the relevant GCC bugzilla entry (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=94697) and update the check now that the fix has been merged into GCC release 10.1. Acked-by: Mark Brown <broonie@kernel.org> Signed-off-by: Will Deacon <will@kernel.org>
2020-05-07  arm64: bti: Provide Kconfig for kernel mode BTI  (Mark Brown)  1 file changed, -0/+19
Now that all the code is in place provide a Kconfig option allowing users to enable BTI for the kernel if their toolchain supports it, defaulting it on since this has security benefits. This is a separate configuration option since we currently don't support secondary CPUs that lack BTI if the boot CPU supports it. Code generation issues mean that current GCC 9 versions are not able to produce usable BTI binaries so we disable support for building with GCC versions prior to 10, once a fix is backported to GCC 9 the dependencies will be updated. Signed-off-by: Mark Brown <broonie@kernel.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20200506195138.22086-8-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
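An abbreviated sketch of the new entry (the full entry carries further toolchain dependencies, including the GCC version cut-off discussed in the 2020-05-12 entry above):

  config ARM64_BTI_KERNEL
          bool "Use Branch Target Identification for kernel code"
          depends on ARM64_BTI
          depends on CC_HAS_BRANCH_PROT_PAC_RET_BTI
          default y
          help
            Build the kernel with Branch Target Identification annotations
            so that indirect branches must land on a suitable landing pad.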
2020-05-05  Merge branches 'for-next/asm' and 'for-next/insn' into for-next/bti  (Will Deacon)  1 file changed, -0/+1
Merge in dependencies for in-kernel Branch Target Identification support. * for-next/asm: arm64: Disable old style assembly annotations arm64: kernel: Convert to modern annotations for assembly functions arm64: entry: Refactor and modernise annotation for ret_to_user x86/asm: Provide a Kconfig symbol for disabling old assembly annotations x86/32: Remove CONFIG_DOUBLEFAULT * for-next/insn: arm64: insn: Report PAC and BTI instructions as skippable arm64: insn: Don't assume unrecognized HINTs are skippable arm64: insn: Provide a better name for aarch64_insn_is_nop() arm64: insn: Add constants for new HINT instruction decode
2020-05-05  Merge branch 'for-next/bti-user' into for-next/bti  (Will Deacon)  1 file changed, -0/+25
Merge in user support for Branch Target Identification, which narrowly missed the cut for 5.7 after a late ABI concern. * for-next/bti-user: arm64: bti: Document behaviour for dynamically linked binaries arm64: elf: Fix allnoconfig kernel build with !ARCH_USE_GNU_PROPERTY arm64: BTI: Add Kconfig entry for userspace BTI mm: smaps: Report arm64 guarded pages in smaps arm64: mm: Display guarded pages in ptdump KVM: arm64: BTI: Reset BTYPE when skipping emulated instructions arm64: BTI: Reset BTYPE when skipping emulated instructions arm64: traps: Shuffle code to eliminate forward declarations arm64: unify native/compat instruction skipping arm64: BTI: Decode BYTPE bits when printing PSTATE arm64: elf: Enable BTI at exec based on ELF program properties elf: Allow arch to tweak initial mmap prot flags arm64: Basic Branch Target Identification support ELF: Add ELF program property parsing support ELF: UAPI and Kconfig additions for ELF program properties
2020-05-05  arm64: Sort vendor-specific errata  (Geert Uytterhoeven)  1 file changed, -36/+36
Sort configuration options for vendor-specific errata by vendor, to increase uniformity. Move ARM64_WORKAROUND_REPEAT_TLBI up, as it is also selected by ARM64_ERRATUM_1286807.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-04  arm64: Unify WORKAROUND_SPECULATIVE_AT_{NVHE,VHE}  (Andrew Scull)  1 file changed, -21/+18
Errata 1165522, 1319367 and 1530923 each allow TLB entries to be allocated as a result of a speculative AT instruction. In order to avoid mandating VHE on certain affected CPUs, apply the workaround to both the nVHE and the VHE case for all affected CPUs. Signed-off-by: Andrew Scull <ascull@google.com> Acked-by: Will Deacon <will@kernel.org> CC: Marc Zyngier <maz@kernel.org> CC: James Morse <james.morse@arm.com> CC: Suzuki K Poulose <suzuki.poulose@arm.com> CC: Will Deacon <will@kernel.org> CC: Steven Price <steven.price@arm.com> Link: https://lore.kernel.org/r/20200504094858.108917-1-ascull@google.com Signed-off-by: Will Deacon <will@kernel.org>
2020-05-04  arm64: Disable old style assembly annotations  (Mark Brown)  1 file changed, -0/+1
Now that we have converted arm64 over to the new style SYM_ assembler annotations select ARCH_USE_SYM_ANNOTATIONS so the old macros aren't available and we don't regress. Signed-off-by: Mark Brown <broonie@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20200501115430.37315-4-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2020-04-23  efi/libstub: Move arm-stub to a common file  (Atish Patra)  1 file changed, -1/+1
Most of the arm-stub code is written in an architecture independent manner. As a result, RISC-V can reuse most of the arm-stub code. Rename the arm-stub.c to efi-stub.c so that ARM, ARM64 and RISC-V can use it. This patch doesn't introduce any functional changes. Signed-off-by: Atish Patra <atish.patra@wdc.com> Reviewed-by: Palmer Dabbelt <palmerdabbelt@google.com> Link: https://lore.kernel.org/r/20200415195422.19866-2-atish.patra@wdc.com Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2020-04-09  Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux  (Linus Torvalds)  1 file changed, -1/+4
Pull arm64 fixes from Catalin Marinas:
- Ensure that the compiler and linker versions are aligned so that ld doesn't complain about not understanding a .note.gnu.property section (emitted when pointer authentication is enabled).
- Force -mbranch-protection=none when the feature is not enabled, in case a compiler may choose a different default value.
- Remove CONFIG_DEBUG_ALIGN_RODATA. It was never in defconfig and rarely enabled.
- Fix the mask used for checking 16-bit Thumb-2 instructions in the emulation of the SETEND instruction (it could match the bottom half of a 32-bit Thumb-2 instruction).

* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
  arm64: armv8_deprecated: Fix undef_hook mask for thumb setend
  arm64: remove CONFIG_DEBUG_ALIGN_RODATA feature
  arm64: Always force a branch protection mode when the compiler has one
  arm64: Kconfig: ptrauth: Add binutils version check to fix mismatch
  init/kconfig: Add LD_VERSION Kconfig
2020-04-01  arm64: Kconfig: ptrauth: Add binutils version check to fix mismatch  (Amit Daniel Kachhap)  1 file changed, -1/+4
The recent addition of ARM64_PTR_AUTH exposed a mismatch issue with binutils. GCC versions 9.1+ insert a section note .note.gnu.property, but this can only be handled properly by binutils versions greater than 2.33.1. If older binutils are used then the following warnings are generated:

  aarch64-linux-ld: warning: arch/arm64/kernel/vdso/vgettimeofday.o: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0000000
  aarch64-linux-objdump: warning: arch/arm64/lib/csum.o: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0000000
  aarch64-linux-nm: warning: .tmp_vmlinux1: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0000000

This patch enables ARM64_PTR_AUTH only when the gcc and binutils versions are compatible with each other. Older gcc versions which do not insert such a section continue to work as before. This scenario may not occur with clang, as a recent commit 3b446c7d27ddd06 ("arm64: Kconfig: verify binutils support for ARM64_PTR_AUTH") masks binutils versions older than 2.34.
Reported-by: kbuild test robot <lkp@intel.com>
Suggested-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com>
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
[catalin.marinas@arm.com: slight adjustment to the comment]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
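The intent of the added dependency can be sketched as follows (a simplified illustration of the logic, not the literal line; LD_VERSION here encodes binutils 2.33.1 in the Kconfig version scheme of the time):

  config ARM64_PTR_AUTH
          bool "Enable support for pointer authentication"
          # sketch: allow the note-emitting GCC >= 9.1 case only when the
          # linker is new enough to understand .note.gnu.property
          depends on !(CC_IS_GCC && GCC_VERSION >= 90100) || LD_VERSION >= 233010000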
2020-03-31  Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux  (Linus Torvalds)  1 file changed, -3/+66
Pull arm64 updates from Catalin Marinas: "The bulk is in-kernel pointer authentication, activity monitors and lots of asm symbol annotations. I also queued the sys_mremap() patch commenting the asymmetry in the address untagging. Summary: - In-kernel Pointer Authentication support (previously only offered to user space). - ARM Activity Monitors (AMU) extension support allowing better CPU utilisation numbers for the scheduler (frequency invariance). - Memory hot-remove support for arm64. - Lots of asm annotations (SYM_*) in preparation for the in-kernel Branch Target Identification (BTI) support. - arm64 perf updates: ARMv8.5-PMU 64-bit counters, refactoring the PMU init callbacks, support for new DT compatibles. - IPv6 header checksum optimisation. - Fixes: SDEI (software delegated exception interface) double-lock on hibernate with shared events. - Minor clean-ups and refactoring: cpu_ops accessor, cpu_do_switch_mm() converted to C, cpufeature finalisation helper. - sys_mremap() comment explaining the asymmetric address untagging behaviour" * tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (81 commits) mm/mremap: Add comment explaining the untagging behaviour of mremap() arm64: head: Convert install_el2_stub to SYM_INNER_LABEL arm64: Introduce get_cpu_ops() helper function arm64: Rename cpu_read_ops() to init_cpu_ops() arm64: Declare ACPI parking protocol CPU operation if needed arm64: move kimage_vaddr to .rodata arm64: use mov_q instead of literal ldr arm64: Kconfig: verify binutils support for ARM64_PTR_AUTH lkdtm: arm64: test kernel pointer authentication arm64: compile the kernel with ptrauth return address signing kconfig: Add support for 'as-option' arm64: suspend: restore the kernel ptrauth keys arm64: __show_regs: strip PAC from lr in printk arm64: unwind: strip PAC from kernel addresses arm64: mask PAC bits of __builtin_return_address arm64: initialize ptrauth keys for kernel booting task arm64: initialize and switch ptrauth kernel keys arm64: enable ptrauth earlier arm64: cpufeature: handle conflicts based on capability arm64: cpufeature: Move cpu capability helpers inside C file ...
2020-03-25  Merge branch 'for-next/kernel-ptrauth' into for-next/core  (Catalin Marinas)  1 file changed, -1/+34
* for-next/kernel-ptrauth: : Return address signing - in-kernel support arm64: Kconfig: verify binutils support for ARM64_PTR_AUTH lkdtm: arm64: test kernel pointer authentication arm64: compile the kernel with ptrauth return address signing kconfig: Add support for 'as-option' arm64: suspend: restore the kernel ptrauth keys arm64: __show_regs: strip PAC from lr in printk arm64: unwind: strip PAC from kernel addresses arm64: mask PAC bits of __builtin_return_address arm64: initialize ptrauth keys for kernel booting task arm64: initialize and switch ptrauth kernel keys arm64: enable ptrauth earlier arm64: cpufeature: handle conflicts based on capability arm64: cpufeature: Move cpu capability helpers inside C file arm64: ptrauth: Add bootup/runtime flags for __cpu_setup arm64: install user ptrauth keys at kernel exit time arm64: rename ptrauth key structures to be user-specific arm64: cpufeature: add pointer auth meta-capabilities arm64: cpufeature: Fix meta-capability cpufeature check
2020-03-25  Merge branches 'for-next/memory-hotremove', 'for-next/arm_sdei', 'for-next/amu', 'for-next/final-cap-helper', 'for-next/cpu_ops-cleanup', 'for-next/misc' and 'for-next/perf' into for-next/core  (Catalin Marinas)  1 file changed, -2/+29
* for-next/memory-hotremove: : Memory hot-remove support for arm64 arm64/mm: Enable memory hot remove arm64/mm: Hold memory hotplug lock while walking for kernel page table dump * for-next/arm_sdei: : SDEI: fix double locking on return from hibernate and clean-up firmware: arm_sdei: clean up sdei_event_create() firmware: arm_sdei: Use cpus_read_lock() to avoid races with cpuhp firmware: arm_sdei: fix possible double-lock on hibernate error path firmware: arm_sdei: fix double-lock on hibernate with shared events * for-next/amu: : ARMv8.4 Activity Monitors support clocksource/drivers/arm_arch_timer: validate arch_timer_rate arm64: use activity monitors for frequency invariance cpufreq: add function to get the hardware max frequency Documentation: arm64: document support for the AMU extension arm64/kvm: disable access to AMU registers from kvm guests arm64: trap to EL1 accesses to AMU counters from EL0 arm64: add support for the AMU extension v1 * for-next/final-cap-helper: : Introduce cpus_have_final_cap_helper(), migrate arm64 KVM to it arm64: kvm: hyp: use cpus_have_final_cap() arm64: cpufeature: add cpus_have_final_cap() * for-next/cpu_ops-cleanup: : cpu_ops[] access code clean-up arm64: Introduce get_cpu_ops() helper function arm64: Rename cpu_read_ops() to init_cpu_ops() arm64: Declare ACPI parking protocol CPU operation if needed * for-next/misc: : Various fixes and clean-ups arm64: define __alloc_zeroed_user_highpage arm64/kernel: Simplify __cpu_up() by bailing out early arm64: remove redundant blank for '=' operator arm64: kexec_file: Fixed code style. arm64: add blank after 'if' arm64: fix spelling mistake "ca not" -> "cannot" arm64: entry: unmask IRQ in el0_sp() arm64: efi: add efi-entry.o to targets instead of extra-$(CONFIG_EFI) arm64: csum: Optimise IPv6 header checksum arch/arm64: fix typo in a comment arm64: remove gratuitious/stray .ltorg stanzas arm64: Update comment for ASID() macro arm64: mm: convert cpu_do_switch_mm() to C arm64: fix NUMA Kconfig typos * for-next/perf: : arm64 perf updates arm64: perf: Add support for ARMv8.5-PMU 64-bit counters KVM: arm64: limit PMU version to PMUv3 for ARMv8.1 arm64: cpufeature: Extract capped perfmon fields arm64: perf: Clean up enable/disable calls perf: arm-ccn: Use scnprintf() for robustness arm64: perf: Support new DT compatibles arm64: perf: Refactor PMU init callbacks perf: arm_spe: Remove unnecessary zero check on 'nr_pages'
2020-03-20arm64: Kconfig: verify binutils support for ARM64_PTR_AUTHNick Desaulniers1-0/+4
Clang currently relies on GNU as from binutils to assemble the Linux kernel. A recent patch to enable the armv8.3-a extension for pointer authentication checked for compiler support of the relevant flags. Everything works with binutils 2.34+, but for older versions we observe assembler errors:

  /tmp/vgettimeofday-36a54b.s: Assembler messages:
  /tmp/vgettimeofday-36a54b.s:40: Error: unknown pseudo-op: `.cfi_negate_ra_state'

When compiling with Clang, require the assembler to support .cfi_negate_ra_state directives in order to support CONFIG_ARM64_PTR_AUTH.

Link: https://github.com/ClangBuiltLinux/linux/issues/938
Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Nathan Chancellor <natechancellor@gmail.com>
Tested-by: Nathan Chancellor <natechancellor@gmail.com>
2020-03-18arm64: elf: Fix allnoconfig kernel build with !ARCH_USE_GNU_PROPERTYWill Deacon1-1/+1
Commit ab7876a98a21 ("arm64: elf: Enable BTI at exec based on ELF program properties") introduced the conditional selection of ARCH_USE_GNU_PROPERTY if BINFMT_ELF is enabled. With allnoconfig, this option is no longer selected and the arm64 arch_parse_elf_property() function clashes with the generic dummy implementation.

Link: http://lkml.kernel.org/r/20200318082830.GA31312@willie-the-truck
Fixes: ab7876a98a21 ("arm64: elf: Enable BTI at exec based on ELF program properties")
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
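The clash follows the usual pattern of a config-gated stub: when the option is not selected, a generic header supplies a static inline dummy, so an architecture that still provides its own definition collides with it. The following is a simplified stand-alone sketch of that pattern, not the kernel's actual declarations (the real prototype takes more parameters):

	/* gnu_property_toy.c - the config-gated stub pattern, simplified */
	#include <stddef.h>
	#include <stdio.h>

	/* #define CONFIG_ARCH_USE_GNU_PROPERTY 1 */

	#ifdef CONFIG_ARCH_USE_GNU_PROPERTY
	/* The architecture is expected to provide the real implementation. */
	int arch_parse_elf_property(unsigned int type, const void *data, size_t datasz);
	#else
	/* Generic dummy used when the option is not selected; an additional
	 * arch-provided definition now clashes with this one. */
	static inline int arch_parse_elf_property(unsigned int type, const void *data,
						  size_t datasz)
	{
		(void)type; (void)data; (void)datasz;
		return 0;
	}
	#endif

	int main(void)
	{
		printf("%d\n", arch_parse_elf_property(0, NULL, 0));
		return 0;
	}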
2020-03-18arm64: compile the kernel with ptrauth return address signingKristina Martsenko1-1/+23
Compile all functions with two ptrauth instructions: PACIASP in the prologue to sign the return address, and AUTIASP in the epilogue to authenticate the return address (from the stack). If authentication fails, the return will cause an instruction abort to be taken, followed by an oops and killing the task.

This should help protect the kernel against attacks using return-oriented programming. As ptrauth protects the return address, it can also serve as a replacement for CONFIG_STACKPROTECTOR, although note that it does not protect other parts of the stack.

The new instructions are in the HINT encoding space, so on a system without ptrauth they execute as NOPs.

CONFIG_ARM64_PTR_AUTH now not only enables ptrauth for userspace and KVM guests, but also automatically builds the kernel with ptrauth instructions if the compiler supports it. If there is no compiler support, we do not warn that the kernel was built without ptrauth instructions.

GCC 7 and 8 support the -msign-return-address option, while GCC 9 deprecates that option and replaces it with -mbranch-protection. Support both options.

Clang uses an external assembler, hence this patch makes sure that the correct parameters (-march=armv8.3-a) are passed down to help it recognize the ptrauth instructions.

The Ftrace function tracer works properly with ptrauth only when the patchable-function-entry feature is present; this is ensured by the Kconfig dependency.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com> # not co-dev parts
Co-developed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
[Amit: Cover leaf function, comments, Ftrace Kconfig]
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
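To see what this means for generated code, consider a minimal stand-alone C sketch (not kernel code): building it with -msign-return-address=all (GCC 7/8) or -mbranch-protection=pac-ret+leaf (GCC 9+/Clang) brackets each function with the signing and authentication instructions. The assembly shown in the comments is illustrative of what such a compiler typically emits, not taken from an actual build.

	/* demo.c - compile with, e.g.:
	 *   aarch64-linux-gnu-gcc -O2 -mbranch-protection=pac-ret+leaf -c demo.c
	 * and inspect the result with objdump -d demo.o
	 */
	#include <stdio.h>

	static int __attribute__((noinline)) square(int x)
	{
		return x * x;	/* leaf: with +leaf it still gets paciasp/autiasp */
	}

	int __attribute__((noinline)) sum_of_squares(int a, int b)
	{
		/*
		 * Expected prologue (illustrative):
		 *   paciasp                      // sign x30 (LR) against SP
		 *   stp  x29, x30, [sp, #-16]!   // spill the signed return address
		 */
		return square(a) + square(b);
		/*
		 * Expected epilogue (illustrative):
		 *   ldp  x29, x30, [sp], #16
		 *   autiasp                      // authenticate LR; a corrupted value faults on ret
		 *   ret
		 */
	}

	int main(void)
	{
		printf("%d\n", sum_of_squares(3, 4));
		return 0;
	}

Because the new instructions live in the HINT space, the same binary runs unchanged on CPUs without pointer authentication, where they behave as NOPs.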
2020-03-18arm64: mask PAC bits of __builtin_return_addressAmit Daniel Kachhap1-0/+1
Functions like vmap() record how much memory has been allocated by their callers, and callers are identified using __builtin_return_address(). Once the kernel is using pointer auth, the return address will be signed. This means it will not match any kernel symbol, and will vary between threads even for the same caller.

The output of /proc/vmallocinfo in this case may look like:

  0x(____ptrval____)-0x(____ptrval____)   20480 0x86e28000100e7c60 pages=4 vmalloc N0=4
  0x(____ptrval____)-0x(____ptrval____)   20480 0x86e28000100e7c60 pages=4 vmalloc N0=4
  0x(____ptrval____)-0x(____ptrval____)   20480 0xc5c78000100e7c60 pages=4 vmalloc N0=4

The above three 64-bit values should all resolve to the same symbol name, not appear as different (signed) LR values.

Use the pre-processor to add logic that clears the PAC from the value seen by __builtin_return_address() callers. This patch adds a new file, asm/compiler.h, which is transitively included via include/compiler_types.h on the compiler command line, so it is guaranteed to be loaded and users of this macro will not pick up a wrong version. Helper macros ptrauth_kernel_pac_mask/ptrauth_clear_pac are created for this purpose and added in this file. The existing macro ptrauth_user_pac_mask is moved there from asm/pointer_auth.h.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
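The stripping logic can be modelled in plain user-space C. The sketch below is not the kernel's asm/compiler.h (which derives the masks from vabits_actual at runtime); it assumes a fixed 48-bit virtual address size and uses bit 55 to distinguish kernel (TTBR1) pointers from user (TTBR0) ones:

	/* pac_strip.c - illustrative only; masks are hard-coded for VA_BITS=48 */
	#include <inttypes.h>
	#include <stdint.h>
	#include <stdio.h>

	#define VA_BITS		48
	/* The PAC lives in the bits above the virtual address: 63..VA_BITS for
	 * kernel pointers, 54..VA_BITS for user pointers (bit 55 selects TTBR). */
	#define PAC_KERNEL_MASK	(~0ULL << VA_BITS)
	#define PAC_USER_MASK	(((1ULL << 55) - 1) & (~0ULL << VA_BITS))

	static uint64_t clear_pac(uint64_t ptr)
	{
		/* Kernel pointers are "sign-extended" back to all-ones,
		 * user pointers have the PAC bits cleared to zero. */
		return (ptr & (1ULL << 55)) ? (ptr | PAC_KERNEL_MASK)
					    : (ptr & ~PAC_USER_MASK);
	}

	int main(void)
	{
		uint64_t signed_lr = 0x86e28000100e7c60ULL;	/* value from the log above */

		printf("0x%016" PRIx64 " -> 0x%016" PRIx64 "\n",
		       signed_lr, clear_pac(signed_lr));	/* -> 0xffff8000100e7c60 */
		return 0;
	}

In the kernel the same idea is applied by wrapping __builtin_return_address() in a macro that passes its result through ptrauth_clear_pac(), so callers see a symbolizable address again.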
2020-03-18arm64: enable ptrauth earlierKristina Martsenko1-0/+6
When the kernel is compiled with pointer auth instructions, the boot CPU needs to start using address auth very early, so change the cpucap to account for this. Pointer auth must be enabled before we call C functions, because it is not possible to enter a function with pointer auth disabled and exit it with pointer auth enabled. Note, mismatches between architected and IMPDEF algorithms will still be caught by the cpufeature framework (the separate *_ARCH and *_IMP_DEF cpucaps).

Note the change in behavior: if the boot CPU has address auth and a late CPU does not, then the late CPU is parked by the cpufeature framework. This is possible because the kernel will only contain NOP-space instructions for PAC, so such a mismatched late CPU will silently ignore those instructions in C functions. Also, if the boot CPU does not have address auth and a late CPU does, then the late CPU will still boot, but with the ptrauth feature disabled.

Leave generic authentication as a "system scope" cpucap for now, since initially the kernel will only use address authentication.

Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
[Amit: Re-worked ptrauth setup logic, comments]
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
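The resulting policy for secondary CPUs can be summarised with a small toy model (plain C, not the actual cpufeature framework, whose capability types and verification hooks are more involved):

	/* cpucap_toy.c - a toy model of a boot-CPU-scoped capability */
	#include <stdbool.h>
	#include <stdio.h>

	enum late_cpu_action {
		CPU_ONLINE,		/* comes up normally */
		CPU_ONLINE_FEATURE_OFF,	/* comes up, ptrauth stays disabled system-wide */
		CPU_PARKED,		/* refused: the kernel already relies on the feature */
	};

	/* Frozen from the boot CPU before any late CPU is brought up. */
	static bool boot_cpu_has_address_auth;

	static enum late_cpu_action verify_late_cpu(bool late_cpu_has_address_auth)
	{
		if (boot_cpu_has_address_auth && !late_cpu_has_address_auth)
			return CPU_PARKED;
		if (!boot_cpu_has_address_auth && late_cpu_has_address_auth)
			return CPU_ONLINE_FEATURE_OFF;
		return CPU_ONLINE;
	}

	int main(void)
	{
		boot_cpu_has_address_auth = true;
		printf("late CPU without ptrauth: %d\n", verify_late_cpu(false));	/* 2 = parked */

		boot_cpu_has_address_auth = false;
		printf("late CPU with ptrauth:    %d\n", verify_late_cpu(true));	/* 1 = feature off */
		return 0;
	}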