path: root/arch
Age | Commit message | Author | Files | Lines
2019-08-09 | PSCI: cpuidle: Refactor CPU suspend power_state parameter handling | Lorenzo Pieralisi | 1 | -4/+45
Current PSCI code handles idle state entry through the psci_cpu_suspend_enter() API, which takes an idle state index as a parameter and converts the index into a previously initialized power_state parameter before calling PSCI.CPU_SUSPEND() with it. This is unwieldy, since it forces the PSCI firmware layer to keep track of the power_state parameter for every idle state so that the index->power_state conversion can be made in the PSCI firmware layer instead of the CPUidle driver implementations. Move the power_state handling out of drivers/firmware/psci into the respective ACPI/DT PSCI CPUidle backends and convert the psci_cpu_suspend_enter() API to take the power_state parameter as input, which makes it closer to its firmware interface, the PSCI.CPU_SUSPEND() API. A notable side effect is that the PSCI ACPI/DT CPUidle backends can now directly handle (and if needed update) power_state parameters before handing them over to the PSCI firmware interface to trigger PSCI.CPU_SUSPEND() calls. Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org> Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org> Reviewed-by: Sudeep Holla <sudeep.holla@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Ulf Hansson <ulf.hansson@linaro.org> Cc: Sudeep Holla <sudeep.holla@arm.com> Cc: Daniel Lezcano <daniel.lezcano@linaro.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net> Signed-off-by: Will Deacon <will@kernel.org>
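A minimal sketch of the reworked flow, with illustrative names (psci_states and psci_enter_idle_state are assumptions, not necessarily the symbols used by the patch): the CPUidle backend owns the index-to-power_state table and hands the raw firmware-format parameter straight to psci_cpu_suspend_enter().

    static u32 psci_states[CPUIDLE_STATE_MAX];	/* power_state per idle state, from DT/ACPI */

    static int psci_enter_idle_state(struct cpuidle_device *dev,
				     struct cpuidle_driver *drv, int idx)
    {
	    u32 power_state = psci_states[idx];	/* index -> power_state in the backend */

	    /* Hand the firmware-format parameter directly to PSCI.CPU_SUSPEND */
	    return psci_cpu_suspend_enter(power_state) ? -1 : idx;
    }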
2019-08-09 | ARM: psci: cpuidle: Enable PSCI CPUidle driver | Lorenzo Pieralisi | 2 | -7/+4
Allow selection of the PSCI CPUidle in the kernel by updating the respective Kconfig entry. Remove PSCI callbacks from ARM/ARM64 generic CPU ops to prevent the PSCI idle driver from clashing with the generic ARM CPUidle driver initialization, that relies on CPU ops to initialize and enter idle states. Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org> Cc: Will Deacon <will@kernel.org> Cc: Ulf Hansson <ulf.hansson@linaro.org> Cc: Sudeep Holla <sudeep.holla@arm.com> Cc: Daniel Lezcano <daniel.lezcano@linaro.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net> Signed-off-by: Will Deacon <will@kernel.org>
2019-08-09 | arm64: mm: Really fix sparse warning in untagged_addr() | Will Deacon | 1 | -1/+1
untagged_addr() can be called with a '__user' pointer parameter and must therefore use '__force' casts both when passing this parameter through to sign_extend64() as a 'u64' and when casting the 's64' return value back to the '__user' pointer type. Signed-off-by: Will Deacon <will@kernel.org>
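The fixed macro roughly takes this shape (a sketch, not a verbatim copy of the patch); the '__force' annotations silence sparse in both directions of the cast:

    #define untagged_addr(addr)	\
	    ((__force __typeof__(addr))sign_extend64((__force u64)(addr), 55))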
2019-08-09 | arm64: mm: Simplify definition of virt_addr_valid() | Will Deacon | 1 | -4/+2
_virt_addr_valid() is defined as the same value in two places and rolls its own version of virt_to_pfn() in both cases. Consolidate these definitions by inlining a simplified version directly into virt_addr_valid(). Signed-off-by: Will Deacon <will@kernel.org>
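A hedged sketch of the consolidated definition, assuming the existing __is_lm_address() and virt_to_pfn() helpers (the exact body may differ):

    #define virt_addr_valid(addr)	({					\
	    __typeof__(addr) __addr = (addr);				\
	    __is_lm_address(__addr) && pfn_valid(virt_to_pfn(__addr));	\
    })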
2019-08-09 | arm64: mm: Remove vabits_user | Steve Capper | 7 | -16/+5
Previous patches have enabled 52-bit kernel + user VAs and there is no longer any scenario where user VA != kernel VA size. This patch removes the, now redundant, vabits_user variable and replaces usage with vabits_actual where appropriate. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Steve Capper <steve.capper@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
2019-08-09 | arm64: mm: Introduce 52-bit Kernel VAs | Steve Capper | 8 | -22/+39
Most of the machinery is now in place to enable 52-bit kernel VAs that are detectable at boot time. This patch adds a Kconfig option for 52-bit user and kernel addresses and plumbs in the requisite CONFIG_ macros as well as sets TCR.T1SZ, physvirt_offset and vmemmap at early boot. To simplify things this patch also removes the 52-bit user/48-bit kernel kconfig option. Signed-off-by: Steve Capper <steve.capper@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
2019-08-09 | arm64: mm: Modify calculation of VMEMMAP_SIZE | Steve Capper | 1 | -1/+8
In a later patch we will need to have a slightly larger VMEMMAP region to accommodate boot time selection between 48/52-bit kernel VAs. This patch modifies the formula for computing VMEMMAP_SIZE to depend explicitly on the PAGE_OFFSET and start of kernel addressable memory. (This allows for a slightly larger direct linear map in future). Signed-off-by: Steve Capper <steve.capper@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
2019-08-09 | arm64: mm: Separate out vmemmap | Steve Capper | 2 | -2/+7
vmemmap is a preprocessor definition that depends on a variable, memstart_addr. In a later patch we will need to expand the size of the VMEMMAP region and optionally modify vmemmap depending upon whether or not hardware support is available for 52-bit virtual addresses. This patch changes vmemmap to be a variable. As the old definition depended on a variable load, this should not affect performance noticeably. Signed-off-by: Steve Capper <steve.capper@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
2019-08-09 | arm64: mm: Logic to make offset_ttbr1 conditional | Steve Capper | 4 | -10/+18
When running with a 52-bit userspace VA and a 48-bit kernel VA we offset ttbr1_el1 to allow the kernel pagetables with a 52-bit PTRS_PER_PGD to be used for both userspace and kernel. Moving on to a 52-bit kernel VA we no longer require this offset to ttbr1_el1 should we be running on a system with HW support for 52-bit VAs. This patch introduces conditional logic to offset_ttbr1 to query SYS_ID_AA64MMFR2_EL1 whenever 52-bit VAs are selected. If there is HW support for 52-bit VAs then the ttbr1 offset is skipped. We choose to read a system register rather than vabits_actual because offset_ttbr1 can be called in places where the kernel data is not actually mapped. Calls to offset_ttbr1 appear to be made from rarely called code paths so this extra logic is not expected to adversely affect performance. Signed-off-by: Steve Capper <steve.capper@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
2019-08-09 | arm64: mm: Introduce vabits_actual | Steve Capper | 8 | -17/+31
In order to support 52-bit kernel addresses detectable at boot time, one needs to know the actual VA_BITS detected. A new variable vabits_actual is introduced in this commit and employed for the KVM hypervisor layout, KASAN, fault handling and phys-to/from-virt translation where there would normally be compile time constants. In order to maintain performance in phys_to_virt, another variable physvirt_offset is introduced. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Steve Capper <steve.capper@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
2019-08-09 | arm64: mm: Introduce VA_BITS_MIN | Steve Capper | 6 | -9/+17
In order to support 52-bit kernel addresses detectable at boot time, the kernel needs to know the most conservative VA_BITS possible should it need to fall back to this quantity due to lack of hardware support. A new compile time constant VA_BITS_MIN is introduced in this patch and it is employed in the KASAN end address, KASLR, and EFI stub. For Arm, if 52-bit VA support is unavailable the fallback is to 48-bits. In other words: VA_BITS_MIN = min (48, VA_BITS) Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Steve Capper <steve.capper@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
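Conceptually the new constant reduces to the following (illustrative; the real header may express it differently):

    #ifdef CONFIG_ARM64_VA_BITS_52
    #define VA_BITS_MIN	(48)
    #else
    #define VA_BITS_MIN	(VA_BITS)
    #endif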
2019-08-09 | arm64: dump: De-constify VA_START and KASAN_SHADOW_START | Steve Capper | 1 | -3/+16
The kernel page table dumper assumes that the placement of VA regions is constant and determined at compile time. As we are about to introduce variable VA logic, we need to be able to determine certain regions at boot time. Specifically the VA_START and KASAN_SHADOW_START will depend on whether or not the system is booted with 52-bit kernel VAs. This patch adds logic to the kernel page table dumper s.t. these regions can be computed at boot time. Signed-off-by: Steve Capper <steve.capper@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
2019-08-09 | arm64: kasan: Switch to using KASAN_SHADOW_OFFSET | Steve Capper | 4 | -18/+24
KASAN_SHADOW_OFFSET is a constant that is supplied to gcc as a command line argument and affects the codegen of the inline address sanitiser.

Essentially, for an example memory access:

    *ptr1 = val;

the compiler will insert logic similar to the below:

    shadowValue = *(ptr1 >> KASAN_SHADOW_SCALE_SHIFT + KASAN_SHADOW_OFFSET)
    if (somethingWrong(shadowValue))
        flagAnError();

This code sequence is inserted into many places, thus KASAN_SHADOW_OFFSET is essentially baked into many places in the kernel text. If we want to run a single kernel binary with multiple address spaces, then we need to do this with KASAN_SHADOW_OFFSET fixed. Thankfully, due to the way KASAN_SHADOW_OFFSET is used to provide shadow addresses, we know that the end of the shadow region is constant w.r.t. VA space size:

    KASAN_SHADOW_END = ~0 >> KASAN_SHADOW_SCALE_SHIFT + KASAN_SHADOW_OFFSET

This means that if we increase the size of the VA space, the start of the KASAN region expands into lower addresses whilst the end of the KASAN region is fixed. Currently the arm64 code computes KASAN_SHADOW_OFFSET at build time via build scripts with the VA size used as a parameter. (There are build time checks in the C code too to ensure that expected values are being derived.) It is sufficient, and indeed is a simplification, to remove the build scripts (and build time checks) entirely and instead provide KASAN_SHADOW_OFFSET values. This patch removes the logic to compute the KASAN_SHADOW_OFFSET in the arm64 Makefile, and instead we adopt the approach used by x86 to supply offset values in Kconfig. To help debug/develop future VA space changes, the Makefile logic has been preserved in a script file in the arm64 Documentation folder. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Steve Capper <steve.capper@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
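For reference, the generic shadow translation that bakes KASAN_SHADOW_OFFSET into the generated code looks roughly like this (a sketch of the common helper, not part of this patch):

    static inline void *kasan_mem_to_shadow(const void *addr)
    {
	    /* Every KASAN_SHADOW_SCALE bytes of memory map to one shadow byte */
	    return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
		    + KASAN_SHADOW_OFFSET;
    }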
2019-08-09 | arm64: mm: Flip kernel VA space | Steve Capper | 8 | -22/+16
In order to allow for a KASAN shadow that changes size at boot time, one must fix the KASAN_SHADOW_END for both 48 & 52-bit VAs and "grow" the start address. Also, it is highly desirable to maintain the same function addresses in the kernel .text between VA sizes. Both of these requirements necessitate us to flip the kernel address space halves s.t. the direct linear map occupies the lower addresses. This patch puts the direct linear map in the lower addresses of the kernel VA range and everything else in the higher ranges. We need to adjust: *) KASAN shadow region placement logic, *) KASAN_SHADOW_OFFSET computation logic, *) virt_to_phys, phys_to_virt checks, *) page table dumper. These are all small changes, that need to take place atomically, so they are bundled into this commit. As part of the re-arrangement, a guard region of 2MB (to preserve alignment for fixed map) is added after the vmemmap. Otherwise the vmemmap could intersect with IS_ERR pointers. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Steve Capper <steve.capper@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
2019-08-09 | arm64: mm: Remove bit-masking optimisations for PAGE_OFFSET and VMEMMAP_START | Steve Capper | 1 | -6/+5
Currently there are assumptions about the alignment of VMEMMAP_START and PAGE_OFFSET that won't be valid after this series is applied. These assumptions are in the form of bitwise operators being used instead of addition and subtraction when calculating addresses. This patch replaces these bitwise operators with addition/subtraction. Signed-off-by: Steve Capper <steve.capper@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
2019-08-07 | arm64/ptrace: Fix typoes in sve_set() comment | Julien Grall | 1 | -1/+1
The ptrace SVE flags are prefixed with SVE_PT_*. Update the comment accordingly. Reviewed-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Julien Grall <julien.grall@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
2019-08-07 | arm64: mm: print hexadecimal EC value in mem_abort_decode() | Miles Chen | 1 | -2/+2
This change prints the hexadecimal EC value in mem_abort_decode(), which makes it easier to look up the corresponding EC in the ARM Architecture Reference Manual. The commit 1f9b8936f36f ("arm64: Decode information from ESR upon mem faults") prints useful information when a memory abort occurs. It would be easier to look up "0x25" instead of "DABT" in the document. Then we can check the corresponding ISS. For example:

    Current info           Document
    EC                     Exception class
    "CP15 MCR/MRC"         0x3    "MCR or MRC access to CP15a..."
    "ASIMD"                0x7    "Access to SIMD or floating-point..."
    "DABT (current EL)"    0x25   "Data Abort taken without..."
    ...

Before:

    Unable to handle kernel paging request at virtual address 000000000000c000
    Mem abort info:
      ESR = 0x96000046
      Exception class = DABT (current EL), IL = 32 bits
      SET = 0, FnV = 0
      EA = 0, S1PTW = 0
    Data abort info:
      ISV = 0, ISS = 0x00000046
      CM = 0, WnR = 1

After:

    Unable to handle kernel paging request at virtual address 000000000000c000
    Mem abort info:
      ESR = 0x96000046
      EC = 0x25: DABT (current EL), IL = 32 bits
      SET = 0, FnV = 0
      EA = 0, S1PTW = 0
    Data abort info:
      ISV = 0, ISS = 0x00000046
      CM = 0, WnR = 1

Cc: Anshuman Khandual <anshuman.khandual@arm.com> Cc: James Morse <james.morse@arm.com> Acked-by: Mark Rutland <Mark.rutland@arm.com> Signed-off-by: Miles Chen <miles.chen@mediatek.com> Signed-off-by: Will Deacon <will@kernel.org>
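The printing change is roughly of this shape (a sketch only; ESR_ELx_EC(), ESR_ELx_IL and esr_get_class_string() are existing helpers, the exact format string may differ):

    static void mem_abort_decode_sketch(unsigned int esr)
    {
	    pr_alert("Mem abort info:\n");
	    pr_alert("  ESR = 0x%08x\n", esr);
	    pr_alert("  EC = 0x%02x: %s, IL = %u bits\n",
		     ESR_ELx_EC(esr), esr_get_class_string(esr),
		     (esr & ESR_ELx_IL) ? 32 : 16);
    }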
2019-08-07 | arm64/prefetch: fix a -Wtype-limits warning | Qian Cai | 2 | -11/+12
The commit d5370f754875 ("arm64: prefetch: add alternative pattern for CPUs without a prefetcher") introduced MIDR_IS_CPU_MODEL_RANGE() to be used in has_no_hw_prefetch() with rv_min=0, which generates a compilation warning from GCC:

    In file included from ./arch/arm64/include/asm/cache.h:8,
                     from ./include/linux/cache.h:6,
                     from ./include/linux/printk.h:9,
                     from ./include/linux/kernel.h:15,
                     from ./include/linux/cpumask.h:10,
                     from arch/arm64/kernel/cpufeature.c:11:
    arch/arm64/kernel/cpufeature.c: In function 'has_no_hw_prefetch':
    ./arch/arm64/include/asm/cputype.h:59:26: warning: comparison of unsigned expression >= 0 is always true [-Wtype-limits]
      _model == (model) && rv >= (rv_min) && rv <= (rv_max); \
                              ^~
    arch/arm64/kernel/cpufeature.c:889:9: note: in expansion of macro 'MIDR_IS_CPU_MODEL_RANGE'
      return MIDR_IS_CPU_MODEL_RANGE(midr, MIDR_THUNDERX,
             ^~~~~~~~~~~~~~~~~~~~~~~

Fix it by converting MIDR_IS_CPU_MODEL_RANGE to a static inline function. Signed-off-by: Qian Cai <cai@lca.pw> Signed-off-by: Will Deacon <will@kernel.org>
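The fix amounts to something like the following (a sketch using the MIDR field masks from asm/cputype.h): a static inline gives 'rv' and 'rv_min' real variable types, so GCC no longer flags the '>= 0' comparison as always true when the caller passes 0.

    static inline bool midr_is_cpu_model_range(u32 midr, u32 model,
					       u32 rv_min, u32 rv_max)
    {
	    u32 _model = midr & MIDR_CPU_MODEL_MASK;
	    u32 rv = midr & (MIDR_REVISION_MASK | MIDR_VARIANT_MASK);

	    return _model == model && rv >= rv_min && rv <= rv_max;
    }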
2019-08-07 | arm64: Add support for function error injection | Leo Yan | 4 | -0/+26
Inspired by the commit 7cd01b08d35f ("powerpc: Add support for function error injection"), this patch supports function error injection for Arm64. It mainly adds two functions: one is regs_set_return_value(), which is used to overwrite the return value; the other is override_function_with_return(), which makes the probed function return immediately to its caller instead of running its body. Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Leo Yan <leo.yan@linaro.org> Signed-off-by: Will Deacon <will@kernel.org>
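A hedged sketch of what the two arm64 helpers boil down to (simplified; the in-tree code goes through the usual ptrace accessors):

    static inline void regs_set_return_value(struct pt_regs *regs, unsigned long rc)
    {
	    regs->regs[0] = rc;		/* x0 carries the return value on arm64 */
    }

    void override_function_with_return(struct pt_regs *regs)
    {
	    /* Skip the probed body: resume at the saved link register (x30). */
	    regs->pc = regs->regs[30];
    }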
2019-08-07 | error-injection: Consolidate override function definition | Leo Yan | 2 | -26/+0
The function override_function_with_return() is defined separately for each architecture, and every architecture's definition is almost the same as the others. E.g. x86 and powerpc both define the function in their own asm/error-injection.h header, and override_function_with_return() has the same definition; the only difference is that x86 defines an extra function, just_return_func(), but it is specific to x86, is only used by x86's override_function_with_return(), and so does not need to be exported. This patch consolidates the override_function_with_return() definition into the asm-generic/error-injection.h header, so that all architectures can use the common definition. As a result, the architecture-specific headers are removed; the include/linux/error-injection.h header also changes to include the asm-generic/error-injection.h header rather than the architecture header, and furthermore it includes linux/compiler.h for successful compilation. Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Leo Yan <leo.yan@linaro.org> Signed-off-by: Will Deacon <will@kernel.org>
2019-08-06 | arm64: Introduce prctl() options to control the tagged user addresses ABI | Catalin Marinas | 5 | -1/+94
It is not desirable to relax the ABI to allow tagged user addresses into the kernel indiscriminately. This patch introduces a prctl() interface for enabling or disabling the tagged ABI with a global sysctl control for preventing applications from enabling the relaxed ABI (meant for testing user-space prctl() return error checking without reconfiguring the kernel). The ABI properties are inherited by threads of the same application and fork()'ed children but cleared on execve(). A Kconfig option allows the overall disabling of the relaxed ABI. The PR_SET_TAGGED_ADDR_CTRL will be expanded in the future to handle MTE-specific settings like imprecise vs precise exceptions. Reviewed-by: Kees Cook <keescook@chromium.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Andrey Konovalov <andreyknvl@google.com> Signed-off-by: Will Deacon <will@kernel.org>
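From user space the new interface is used roughly as below (illustrative only; the prctl constants come from <linux/prctl.h> on new enough kernels, so the fallback defines here are an assumption for older headers):

    #include <stdio.h>
    #include <sys/prctl.h>

    #ifndef PR_SET_TAGGED_ADDR_CTRL
    #define PR_SET_TAGGED_ADDR_CTRL	55
    #define PR_GET_TAGGED_ADDR_CTRL	56
    #define PR_TAGGED_ADDR_ENABLE	(1UL << 0)
    #endif

    int main(void)
    {
	    /* Opt this task (and future fork()'ed children) in to the tagged-address ABI. */
	    if (prctl(PR_SET_TAGGED_ADDR_CTRL, PR_TAGGED_ADDR_ENABLE, 0, 0, 0))
		    perror("PR_SET_TAGGED_ADDR_CTRL");	/* old kernel, or disabled via sysctl */
	    return 0;
    }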
2019-08-06 | arm64: untag user pointers in access_ok and __uaccess_mask_ptr | Andrey Konovalov | 2 | -4/+8
This patch is part of a series that extends the kernel ABI to allow tagged user pointers (with the top byte set to something other than 0x00) to be passed as syscall arguments. copy_from_user() (and a few other similar functions) are used to copy data from user memory into kernel memory or vice versa. Since a user can provide a tagged pointer to one of the syscalls that use copy_from_user(), we need to correctly handle such pointers. Do this by untagging user pointers in access_ok and in __uaccess_mask_ptr, before performing access validity checks. Note that this patch only temporarily untags the pointers to perform the checks, but then passes them as-is into the kernel internals. Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Reviewed-by: Kees Cook <keescook@chromium.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Andrey Konovalov <andreyknvl@google.com> [will: Add __force to casting in untagged_addr() to kill sparse warning] Signed-off-by: Will Deacon <will@kernel.org>
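The shape of the access_ok() change, as a sketch: the range check operates on the untagged address, while the pointer handed on to the rest of the kernel keeps its tag.

    #define access_ok(addr, size)	__range_ok(untagged_addr(addr), size)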
2019-08-05 | arm64: Add support for relocating the kernel with RELR relocations | Peter Collingbourne | 4 | -6/+114
RELR is a relocation packing format for relative relocations. The format is described in a generic-abi proposal: https://groups.google.com/d/topic/generic-abi/bX460iggiKg/discussion The LLD linker can be instructed to pack relocations in the RELR format by passing the flag --pack-dyn-relocs=relr. This patch adds a new config option, CONFIG_RELR. Enabling this option instructs the linker to pack vmlinux's relative relocations in the RELR format, and causes the kernel to apply the relocations at startup along with the RELA relocations. RELA relocations still need to be applied because the linker will emit RELA relative relocations if they are unrepresentable in the RELR format (i.e. address not a multiple of 2). Enabling CONFIG_RELR reduces the size of a defconfig kernel image with CONFIG_RANDOMIZE_BASE by 3.5MB/16% uncompressed, or 550KB/5% compressed (lz4). Signed-off-by: Peter Collingbourne <pcc@google.com> Tested-by: Nick Desaulniers <ndesaulniers@google.com> Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Will Deacon <will@kernel.org>
2019-08-05 | arm64: Move TIF_* documentation to individual definitions | Geert Uytterhoeven | 1 | -18/+7
Some TIF_* flags are documented in the comment block at the top, some next to their definitions, some in both places. Move all documentation to the individual definitions for consistency, and for easy lookup. Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Will Deacon <will@kernel.org>
2019-08-05 | arm64: mm: free the initrd reserved memblock in a aligned manner | Junhua Huang | 1 | -1/+5
We should free the initrd reserved memblock in an aligned manner, because the initrd reserves the memblock in an aligned manner in arm64_memblock_init(). Otherwise there are some fragments left in the memblock_reserved regions after free_initrd_mem(), e.g.:

    /sys/kernel/debug/memblock # cat reserved
       0: 0x0000000080080000..0x00000000817fafff
       1: 0x0000000083400000..0x0000000083ffffff
       2: 0x0000000090000000..0x000000009000407f
       3: 0x00000000b0000000..0x00000000b000003f
       4: 0x00000000b26184ea..0x00000000b2618fff

The fragments like the ranges from b0000000 to b000003f and from b26184ea to b2618fff should be freed. And we can do free_reserved_area() after memblock_free(), as free_reserved_area() calls __free_pages(); once we've done that, the memory could be allocated somewhere else, yet memblock and iomem would still say it is reserved. Fixes: 05c58752f9dc ("arm64: To remove initrd reserved area entry from memblock") Signed-off-by: Junhua Huang <huang.junhua@zte.com.cn> Signed-off-by: Will Deacon <will@kernel.org>
2019-08-05 | arm64: io: Relax implicit barriers in default I/O accessors | Will Deacon | 1 | -2/+2
The arm64 implementation of the default I/O accessors requires barrier instructions to satisfy the memory ordering requirements documented in memory-barriers.txt [1], which are largely derived from the behaviour of I/O accesses on x86. Of particular interest are the requirements that a write to a device must be ordered against prior writes to memory, and a read from a device must be ordered against subsequent reads from memory. We satisfy these requirements using various flavours of DSB: the most expensive barrier we have, since it implies completion of prior accesses. This was deemed necessary when we first implemented the accessors, since accesses to different endpoints could propagate independently and therefore the only way to enforce order is to rely on completion guarantees [2]. Since then, the Armv8 memory model has been retrospectively strengthened to require "other-multi-copy atomicity", a property that requires memory accesses from an observer to become visible to all other observers simultaneously [3]. In other words, propagation of accesses is limited to transitioning from locally observed to globally observed. It recently became apparent that this change also has a subtle impact on our I/O accessors for shared peripherals, allowing us to use the cheaper DMB instruction instead. As a concrete example, consider the following: memcpy(dma_buffer, data, bufsz); writel(DMA_START, dev->ctrl_reg); A DMB ST instruction between the final write to the DMA buffer and the write to the control register will ensure that the writes to the DMA buffer are observed before the write to the control register by all observers. Put another way, if an observer can see the write to the control register, it can also see the writes to memory. This has always been the case and is not sufficient to provide the ordering required by Linux, since there is no guarantee that the master interface of the DMA-capable device has observed either of the accesses. However, in an other-multi-copy atomic world, we can infer two things: 1. A write arriving at an endpoint shared between multiple CPUs is visible to all CPUs 2. A write that is visible to all CPUs is also visible to all other observers in the shareability domain Pieced together, this allows us to use DMB OSHST for our default I/O write accessors and DMB OSHLD for our default I/O read accessors (the outer-shareability is for handling non-cacheable mappings) for shared devices. Memory-mapped, DMA-capable peripherals that are private to a CPU (i.e. inaccessible to other CPUs) still require the DSB, however these are few and far between and typically require special treatment anyway which is outside of the scope of the portable driver API (e.g. GIC, page-table walker, SPE profiler). Note that our mandatory barriers remain as DSBs, since there are cases where they are used to flush the store buffer of the CPU, e.g. when publishing page table updates to the SMMU. [1] https://git.kernel.org/linus/4614bbdee357 [2] https://www.youtube.com/watch?v=i6DayghhA8Q [3] https://www.cl.cam.ac.uk/~pes20/armv8-mca/ Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
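Illustrative only (generic names, not the in-tree accessor macros): the ordering the commit relies on maps onto the existing dma_wmb()/dma_rmb() barriers, which are DMB OSHST and DMB OSHLD on arm64.

    static inline void my_writel(u32 val, volatile void __iomem *reg)
    {
	    dma_wmb();			/* DMB OSHST: prior normal-memory writes visible first */
	    __raw_writel(val, reg);
    }

    static inline u32 my_readl(const volatile void __iomem *reg)
    {
	    u32 val = __raw_readl(reg);

	    dma_rmb();			/* DMB OSHLD: device read completes before later reads */
	    return val;
    }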
2019-08-05 | arm64: Remove unused cpucap_multi_entry_cap_cpu_enable() | Mark Brown | 1 | -15/+0
The function cpucap_multi_entry_cap_cpu_enable() is unused, remove it to avoid any confusion reading the code and potential for bit rot. Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Will Deacon <will@kernel.org>
2019-08-05 | arm64: sysreg: Remove unused and rotting SCTLR_ELx field definitions | Will Deacon | 1 | -29/+0
Our SCTLR_ELx field definitions are somewhat over-engineered in that they carefully define masks describing the RES0/RES1 bits and then use these to construct further masks representing bits to be set/cleared for the _EL1 and _EL2 registers. However, most of the resulting definitions aren't actually used by anybody and have subsequently started to bit-rot when new fields have been added by the architecture, resulting in fields being part of the RES0 mask despite being defined and used elsewhere. Rather than fix up these masks, simply remove the unused parts entirely so that we can drop the maintenance burden. We can always add things back if we need them in the future. Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
2019-08-05 | arm64: esr: Add ESR exception class encoding for trapped ERET | Will Deacon | 2 | -1/+3
The ESR.EC encoding of 0b011010 (0x1a) describes an exception generated by an ERET, ERETAA or ERETAB instruction as a result of a nested virtualisation trap to EL2. Add an encoding for this EC and a string description so that we identify it correctly if we take one unexpectedly. Acked-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Marc Zyngier <maz@kernel.org> Signed-off-by: Will Deacon <will@kernel.org>
2019-08-05 | arm64: Replace strncmp with str_has_prefix | Chuhong Yuan | 2 | -2/+2
In commit b6b2735514bc ("tracing: Use str_has_prefix() instead of using fixed sizes") the newly introduced str_has_prefix() was used to replace the error-prone strncmp(str, const, len) pattern. Fix code with the same pattern here. Signed-off-by: Chuhong Yuan <hslester96@gmail.com> Signed-off-by: Will Deacon <will@kernel.org>
2019-08-05 | arm64: remove unneeded uapi/asm/stat.h | Masahiro Yamada | 1 | -17/+0
stat.h is listed in include/uapi/asm-generic/Kbuild, so Kbuild will automatically generate it. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Signed-off-by: Will Deacon <will@kernel.org>
2019-08-05 | arm64/kexec: Use consistent convention of initializing 'kxec_buf.mem' with KEXEC_BUF_MEM_UNKNOWN | Bhupesh Sharma | 2 | -3/+3
With commit b6664ba42f14 ("s390, kexec_file: drop arch_kexec_mem_walk()"), we introduced the KEXEC_BUF_MEM_UNKNOWN macro. If kexec_buf.mem is set to this value, kexec_locate_mem_hole() will try to allocate free memory. While other arches like s390 and x86_64 already use this macro to initialize kexec_buf.mem, arm64 uses the equivalent value of 0. Replace it with KEXEC_BUF_MEM_UNKNOWN, to keep the convention of initializing 'kexec_buf.mem' consistent across the various arches. Cc: takahiro.akashi@linaro.org Cc: james.morse@arm.com Reviewed-by: Matthias Brugger <mbrugger@suse.com> Signed-off-by: Bhupesh Sharma <bhsharma@redhat.com> Signed-off-by: Will Deacon <will@kernel.org>
2019-08-05 | arm64: remove pointless __KERNEL__ guards | Mark Rutland | 16 | -46/+3
For a number of years, UAPI headers have been split from kernel-internal headers. The latter are never exposed to userspace and are always built with __KERNEL__ defined. Most headers under arch/arm64 don't have __KERNEL__ guards, but there are a few stragglers lying around. To make things more consistent, and to set a good example going forward, let's remove these redundant __KERNEL__ guards. In a couple of cases, a trailing #endif lacked a comment describing its corresponding #if or #ifdef, so these are fixed up at the same time. Guards in auto-generated crypto code are left as-is, as these guards are generated by scripts imported from the upstream openssl project. Guards in UAPI headers are left as-is, as these can be included by userspace or the kernel. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
2019-08-05 | arm64: Remove unused assembly macro | Julien Thierry | 1 | -11/+0
As of commit 4141c857fd09dbed480f021b3eece4f46c653161 ("arm64: convert raw syscall invocation to C"), which moved syscall handling from assembly to C, the macro mask_nospec64 is no longer referenced. Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Julien Thierry <julien.thierry@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
2019-08-04 | Merge tag 'powerpc-5.3-3' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux | Linus Torvalds | 8 | -5/+52
Pull powerpc fixes from Michael Ellerman: "Some more powerpc fixes for 5.3: - Wire up the new clone3 syscall. - A fix for the PAPR SCM nvdimm driver, to fix a crash when firmware gives us a device that's attached to a non-online NUMA node. - A fix for a boot failure on 32-bit with KASAN enabled. - Three fixes for implicit fall through warnings, some of which are errors for us due to -Werror. Thanks to: Aneesh Kumar K.V, Christophe Leroy, Kees Cook, Santosh Sivaraj, Stephen Rothwell" * tag 'powerpc-5.3-3' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux: powerpc/kasan: fix early boot failure on PPC32 drivers/macintosh/smu.c: Mark expected switch fall-through powerpc/spe: Mark expected switch fall-throughs powerpc/nvdimm: Pick nearby online node if the device node is not online powerpc/kvm: Fall through switch case explicitly powerpc: Wire up clone3 syscall
2019-08-03 | Merge tag 'xtensa-20190803' of git://github.com/jcmvbkbc/linux-xtensa | Linus Torvalds | 1 | -0/+1
Pull Xtensa fix from Max Filippov: "Fix build for xtensa cores with coprocessors that was broken by entry/return abstraction patch" * tag 'xtensa-20190803' of git://github.com/jcmvbkbc/linux-xtensa: xtensa: fix build for cores with coprocessors
2019-08-03 | Merge branch 'timers-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip | Linus Torvalds | 2 | -0/+76
Pull vdso timer fixes from Thomas Gleixner: "A series of commits to deal with the regression caused by the generic VDSO implementation. The usage of clock_gettime64() for 32bit compat fallback syscalls caused seccomp filters to kill innocent processes because they only allow clock_gettime(). Handle the compat syscalls with clock_gettime() as before, which is not a functional problem for the VDSO as the legacy compat application interface is not y2038 safe anyway. It's just extra fallback code which needs to be implemented on every architecture. It's opt in for now so that it does not break the compile of already converted architectures in linux-next. Once these are fixed, the #ifdeffery goes away. So much for trying to be smart and reuse code..." * 'timers-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: arm64: compat: vdso: Use legacy syscalls as fallback x86/vdso/32: Use 32bit syscall fallback lib/vdso/32: Provide legacy syscall fallbacks lib/vdso: Move fallback invocation to the callers lib/vdso/32: Remove inconsistent NULL pointer checks
2019-08-03 | Merge branch 'akpm' (patches from Andrew) | Linus Torvalds | 1 | -0/+1
Merge misc fixes from Andrew Morton: "17 fixes" * emailed patches from Andrew Morton <akpm@linux-foundation.org>: drivers/acpi/scan.c: document why we don't need the device_hotplug_lock memremap: move from kernel/ to mm/ lib/test_meminit.c: use GFP_ATOMIC in RCU critical section asm-generic: fix -Wtype-limits compiler warnings cgroup: kselftest: relax fs_spec checks mm/memory_hotplug.c: remove unneeded return for void function mm/migrate.c: initialize pud_entry in migrate_vma() coredump: split pipe command whitespace before expanding template page flags: prioritize kasan bits over last-cpuid ubsan: build ubsan.c more conservatively kasan: remove clang version check for KASAN_STACK mm: compaction: avoid 100% CPU usage during compaction when a task is killed mm: migrate: fix reference check race between __find_get_block() and migration mm: vmscan: check if mem cgroup is disabled or not before calling memcg slab shrinker ocfs2: remove set but not used variable 'last_hash' Revert "kmemleak: allow to coexist with fault injection" kernel/signal.c: fix a kernel-doc markup
2019-08-03 | Merge tag 'riscv/for-v5.3-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux | Linus Torvalds | 3 | -7/+6
Pull RISC-V fixes from Paul Walmsley: "Three minor RISC-V-related changes for v5.3-rc3: - Add build ID to VDSO builds to avoid a double-free in perf when libelf isn't used - Align the RV64 defconfig to the output of "make savedefconfig" so subsequent defconfig patches don't get out of hand - Drop a superfluous DT property from the FU540 SoC DT data (since it must be already set in board data that includes it)" * tag 'riscv/for-v5.3-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux: riscv: defconfig: align RV64 defconfig to the output of "make savedefconfig" riscv: dts: fu540-c000: drop "timebase-frequency" riscv: Fix perf record without libelf support
2019-08-03 | page flags: prioritize kasan bits over last-cpuid | Arnd Bergmann | 1 | -0/+1
ARM64 randconfig builds regularly run into a build error, especially when NUMA_BALANCING and SPARSEMEM are enabled but not SPARSEMEM_VMEMMAP:

    #error "KASAN: not enough bits in page flags for tag"

The last-cpuid bits are already conditional on the available space, so the result of the calculation is a bit random as to whether they were already left out or not. Adding the kasan tag bits before last-cpuid makes it much more likely to end up with a successful build here, and should be reliable for randconfig at least, as long as that does not randomize NR_CPUS or NODES_SHIFT but uses the defaults. In order for the modified check to not trigger in the x86 vdso32 code, where all constants are wrong (building with -m32), enclose all the definitions with an #ifdef. [arnd@arndb.de: build fix] Link: http://lkml.kernel.org/r/CAK8P3a3Mno1SWTcuAOT0Wa9VS15pdU6EfnkxLbDpyS55yO04+g@mail.gmail.com Link: http://lkml.kernel.org/r/20190722115520.3743282-1-arnd@arndb.de Link: https://lore.kernel.org/lkml/20190618095347.3850490-1-arnd@arndb.de/ Fixes: 2813b9c02962 ("kasan, mm, arm64: tag non slab memory allocated via pagealloc") Signed-off-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Andrey Konovalov <andreyknvl@google.com> Reviewed-by: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Andrey Konovalov <andreyknvl@google.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Christoph Lameter <cl@linux.com> Cc: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-08-02 | Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux | Linus Torvalds | 17 | -65/+116
Pull arm64 fixes from Catalin Marinas: - Update the compat layer to allow single-byte watchpoints on all addresses (similar to the native support) - arm_pmu: fix the restoration of the counters on the CPU_PM_ENTER_FAILED path - Fix build regression with vDSO and Makefile not stripping CROSS_COMPILE_COMPAT - Fix the CTR_EL0 (cache type register) sanitisation on heterogeneous machines (e.g. big.LITTLE) - Fix the interrupt controller priority mask value when pseudo-NMIs are enabled - arm64 kprobes fixes: recovering of the PSTATE.D flag in the single-step exception handler, NOKPROBE annotations for unwind_frame() and walk_stackframe(), remove unneeded rcu_read_lock/unlock from debug handlers - Several gcc fall-through warnings - Unused variable warnings * tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: arm64: Make debug exception handlers visible from RCU arm64: kprobes: Recover pstate.D in single-step exception handler arm64/mm: fix variable 'tag' set but not used arm64/mm: fix variable 'pud' set but not used arm64: Remove unneeded rcu_read_lock from debug handlers arm64: unwind: Prohibit probing on return_address() arm64: Lower priority mask for GIC_PRIO_IRQON arm64/efi: fix variable 'si' set but not used arm64: cpufeature: Fix feature comparison for CTR_EL0.{CWG,ERG} arm64: vdso: Fix Makefile regression arm64: module: Mark expected switch fall-through arm64: smp: Mark expected switch fall-through arm64: hw_breakpoint: Fix warnings about implicit fallthrough drivers/perf: arm_pmu: Fix failure path in PM notifier arm64: compat: Allow single-byte watchpoints on all addresses
2019-08-02 | Merge branch 'parisc-5.3-4' of git://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux | Linus Torvalds | 7 | -6/+12
Pull parisc fixes from Helge Deller: "A few small fixes for the parisc architecture: - Fix fall-through warnings in parisc math emu code - Fix vmlinuz linking failure with debug-enabled kernels - Fix a race condition in kernel live-patching code - Add missing archclean Makefile target & defconfig adjustments" * 'parisc-5.3-4' of git://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux: parisc: Add archclean Makefile target parisc: Strip debug info from kernel before creating compressed vmlinuz parisc: Fix build of compressed kernel even with debug enabled parisc: fix race condition in patching code parisc: rename default_defconfig to defconfig parisc: Fix fall-through warnings in fpudispatch.c parisc: Mark expected switch fall-throughs in fault.c
2019-08-02 | Merge tag 's390-5.3-4' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux | Linus Torvalds | 12 | -236/+382
Pull s390 updates from Vasily Gorbik: - Default configs updates - Minor qdio cleanup - Sparse warnings fixes - Implicit-fallthrough warnings fixes * tag 's390-5.3-4' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux: s390/zcrypt: adjust switch fall through comments for -Wimplicit-fallthrough vfio-ccw: make vfio_ccw_async_region_ops static s390/3215: add switch fall through comment for -Wimplicit-fallthrough s390/tape: add fallthrough annotations s390/mm: add fallthrough annotations s390/mm: make gmap_test_and_clear_dirty_pmd static s390/kexec: add missing include to machine_kexec_reloc.c s390/perf: make cf_diag_csd static s390/lib: add missing include s390/boot: add missing declarations and includes s390: update configs s390: clean up qdio.h
2019-08-02 | Merge tag 'arm-swiotlb-5.3' of git://git.infradead.org/users/hch/dma-mapping | Linus Torvalds | 4 | -1/+74
Pull arm swiotlb support from Christoph Hellwig: "This fixes a cascade of regressions that originally started with the addition of the ia64 port, but only got fatal once we removed most uses of block layer bounce buffering in Linux 4.18. The reason is that while the original i386/PAE code that was the first architecture that supported > 4GB of memory without an iommu decided to leave bounce buffering to the subsystems, which in those days just mean block and networking as no one else consumed arbitrary userspace memory. Later with ia64, x86_64 and other ports we assumed that either an iommu or something that fakes it up ("software IOTLB" in beautiful Intel speak) is present and that subsystems can rely on that for dealing with addressing limitations in devices. Except that the ARM LPAE scheme that added larger physical address to 32-bit ARM did not follow that scheme and thus only worked by chance and only for block and networking I/O directly to highmem. Long story, short fix - add swiotlb support to arm when build for LPAE platforms, which actuallys turns out to be pretty trivial with the modern dma-direct / swiotlb code to fix the Linux 4.18-ish regression" * tag 'arm-swiotlb-5.3' of git://git.infradead.org/users/hch/dma-mapping: arm: use swiotlb for bounce buffering on LPAE configs dma-mapping: check pfn validity in dma_common_{mmap,get_sgtable}
2019-08-02 | arm64: Make debug exception handlers visible from RCU | Masami Hiramatsu | 1 | -8/+49
Make debug exceptions visible to RCU so that synchronize_rcu() correctly tracks the debug exception handler. This also introduces sanity checks for user-mode exceptions, similar to x86's ist_enter()/ist_exit(). A debug exception can interrupt the idle task. For example, the kernel warns if we put a kprobe on a function called from the idle task, as below. The warning message suggests that rcu_read_lock() caused the problem, but what it actually means is that RCU has lost track of a context that is already in NMI/IRQ.

    /sys/kernel/debug/tracing # echo p default_idle_call >> kprobe_events
    /sys/kernel/debug/tracing # echo 1 > events/kprobes/enable
    /sys/kernel/debug/tracing #
    [ 135.122237]
    [ 135.125035] =============================
    [ 135.125310] WARNING: suspicious RCU usage
    [ 135.125581] 5.2.0-08445-g9187c508bdc7 #20 Not tainted
    [ 135.125904] -----------------------------
    [ 135.126205] include/linux/rcupdate.h:594 rcu_read_lock() used illegally while idle!
    [ 135.126839]
    [ 135.126839] other info that might help us debug this:
    [ 135.126839]
    [ 135.127410]
    [ 135.127410] RCU used illegally from idle CPU!
    [ 135.127410] rcu_scheduler_active = 2, debug_locks = 1
    [ 135.128114] RCU used illegally from extended quiescent state!
    [ 135.128555] 1 lock held by swapper/0/0:
    [ 135.128944]  #0: (____ptrval____) (rcu_read_lock){....}, at: call_break_hook+0x0/0x178
    [ 135.130499]
    [ 135.130499] stack backtrace:
    [ 135.131192] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.2.0-08445-g9187c508bdc7 #20
    [ 135.131841] Hardware name: linux,dummy-virt (DT)
    [ 135.132224] Call trace:
    [ 135.132491]  dump_backtrace+0x0/0x140
    [ 135.132806]  show_stack+0x24/0x30
    [ 135.133133]  dump_stack+0xc4/0x10c
    [ 135.133726]  lockdep_rcu_suspicious+0xf8/0x108
    [ 135.134171]  call_break_hook+0x170/0x178
    [ 135.134486]  brk_handler+0x28/0x68
    [ 135.134792]  do_debug_exception+0x90/0x150
    [ 135.135051]  el1_dbg+0x18/0x8c
    [ 135.135260]  default_idle_call+0x0/0x44
    [ 135.135516]  cpu_startup_entry+0x2c/0x30
    [ 135.135815]  rest_init+0x1b0/0x280
    [ 135.136044]  arch_call_rest_init+0x14/0x1c
    [ 135.136305]  start_kernel+0x4d4/0x500
    [ 135.136597]

Making the debug exception visible to RCU fixes this warning. Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org> Acked-by: Paul E. McKenney <paulmck@linux.ibm.com> Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Will Deacon <will@kernel.org>
2019-08-02 | arm64: kprobes: Recover pstate.D in single-step exception handler | Masami Hiramatsu | 2 | -34/+8
kprobes manipulates the interrupted PSTATE for single step but doesn't restore it. Thus, if we put a kprobe where pstate.D (debug) is masked, the mask is cleared after the kprobe hits. Moreover, in the most complicated case, this can lead to a kernel crash with the message below when a nested kprobe hits:

    [ 152.118921] Unexpected kernel single-step exception at EL1

When the 1st kprobe hits, do_debug_exception() is called. At this point, the debug exception (= pstate.D) must be masked (=1). But if another kprobe hits before the single-step of the first kprobe (e.g. inside a user pre_handler), it unmasks the debug exception (pstate.D = 0) and returns. Then, when the 1st kprobe sets up the single-step, it saves the current DAIF, masks DAIF, enables single-step, and restores DAIF. However, since the "D" flag in DAIF was cleared by the 2nd kprobe, the single-step exception happens soon after restoring DAIF. This was introduced by commit 7419333fa15e ("arm64: kprobe: Always clear pstate.D in breakpoint exception handler"). To solve this issue, store all DAIF bits and restore them after single stepping. Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org> Fixes: 7419333fa15e ("arm64: kprobe: Always clear pstate.D in breakpoint exception handler") Reviewed-by: James Morse <james.morse@arm.com> Tested-by: James Morse <james.morse@arm.com> Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Will Deacon <will@kernel.org>
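A sketch of the idea behind the fix, with simplified names (the helper names and the saved-flag field are assumptions, not the exact symbols): save and restore the whole DAIF field around single-stepping rather than only the I bit.

    static void save_all_daif(struct kprobe_ctlblk *kcb, struct pt_regs *regs)
    {
	    kcb->saved_irqflag = regs->pstate & DAIF_MASK;	/* D, A, I and F bits */
    }

    static void restore_all_daif(struct kprobe_ctlblk *kcb, struct pt_regs *regs)
    {
	    regs->pstate &= ~DAIF_MASK;
	    regs->pstate |= kcb->saved_irqflag;
    }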
2019-08-01 | arm64/mm: fix variable 'tag' set but not used | Qian Cai | 1 | -3/+7
When CONFIG_KASAN_SW_TAGS=n, set_tag() is compiled away. GCC throws a warning:

    mm/kasan/common.c: In function '__kasan_kmalloc':
    mm/kasan/common.c:464:5: warning: variable 'tag' set but not used [-Wunused-but-set-variable]
      u8 tag = 0xff;
         ^~~

Fix it by making __tag_set() a static inline function, the same as arch_kasan_set_tag() in mm/kasan/kasan.h, for consistency, because there is a macro in arch/arm64/include/asm/kasan.h:

    #define arch_kasan_set_tag(addr, tag)	__tag_set(addr, tag)

However, when CONFIG_DEBUG_VIRTUAL=n and CONFIG_SPARSEMEM_VMEMMAP=y, page_to_virt() will call __tag_set() with an incorrect parameter type, so fix that as well. Also, still let page_to_virt() return "void *" instead of "const void *", so we will not need to add a similar cast in lowmem_page_address(). Signed-off-by: Qian Cai <cai@lca.pw> Signed-off-by: Will Deacon <will@kernel.org>
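A sketch of the static-inline form, assuming a __tag_shifted() helper as defined below (details may differ from the patch):

    #ifdef CONFIG_KASAN_SW_TAGS
    #define __tag_shifted(tag)	((u64)(tag) << 56)

    static inline const void *__tag_set(const void *addr, u8 tag)
    {
	    u64 __addr = (u64)addr & ~__tag_shifted(0xff);

	    return (const void *)(__addr | __tag_shifted(tag));
    }
    #else
    static inline const void *__tag_set(const void *addr, u8 tag)
    {
	    return addr;	/* 'tag' is still consumed, so GCC no longer warns */
    }
    #endif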
2019-08-01 | arm64/mm: fix variable 'pud' set but not used | Qian Cai | 1 | -2/+2
GCC throws a warning:

    arch/arm64/mm/mmu.c: In function 'pud_free_pmd_page':
    arch/arm64/mm/mmu.c:1033:8: warning: variable 'pud' set but not used [-Wunused-but-set-variable]
      pud_t pud;
            ^~~

because pud_table() is a macro and compiled away. Fix it by making it a static inline function, and do the same for pud_sect(). Signed-off-by: Qian Cai <cai@lca.pw> Signed-off-by: Will Deacon <will@kernel.org>
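Roughly the resulting helpers (a sketch for configurations with more than two page-table levels; the real definitions carry extra #ifdef handling):

    static inline bool pud_table(pud_t pud)
    {
	    return (pud_val(pud) & PUD_TYPE_MASK) == PUD_TYPE_TABLE;
    }

    static inline bool pud_sect(pud_t pud)
    {
	    return (pud_val(pud) & PUD_TYPE_MASK) == PUD_TYPE_SECT;
    }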
2019-08-01 | arm64: Remove unneeded rcu_read_lock from debug handlers | Masami Hiramatsu | 1 | -6/+8
Remove rcu_read_lock()/rcu_read_unlock() from debug exception handlers since we are sure those are not preemptible and interrupts are off. Acked-by: Paul E. McKenney <paulmck@linux.ibm.com> Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Will Deacon <will@kernel.org>
2019-08-01 | arm64: unwind: Prohibit probing on return_address() | Masami Hiramatsu | 2 | -0/+6
Prohibit probing on return_address() and the subroutines called from return_address(), since it is invoked from trace_hardirqs_off(), which is also kprobe-blacklisted. Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org> Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Will Deacon <will@kernel.org>