path: root/arch/arm64/include/asm/uaccess.h
Age  Commit message  Author  Files  Lines
2019-08-06  arm64: Introduce prctl() options to control the tagged user addresses ABI  [Catalin Marinas, 1 file, -1/+3]
It is not desirable to relax the ABI to allow tagged user addresses into the kernel indiscriminately. This patch introduces a prctl() interface for enabling or disabling the tagged ABI with a global sysctl control for preventing applications from enabling the relaxed ABI (meant for testing user-space prctl() return error checking without reconfiguring the kernel). The ABI properties are inherited by threads of the same application and fork()'ed children but cleared on execve(). A Kconfig option allows the overall disabling of the relaxed ABI. The PR_SET_TAGGED_ADDR_CTRL will be expanded in the future to handle MTE-specific settings like imprecise vs precise exceptions. Reviewed-by: Kees Cook <keescook@chromium.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Andrey Konovalov <andreyknvl@google.com> Signed-off-by: Will Deacon <will@kernel.org>
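A minimal userspace sketch of opting in to the relaxed ABI via the new prctl() (assuming a kernel and headers that provide PR_SET_TAGGED_ADDR_CTRL; the fallback constant values below mirror the mainline definitions):

  #include <stdio.h>
  #include <sys/prctl.h>

  /* Fallback definitions for older userspace headers; values follow mainline. */
  #ifndef PR_SET_TAGGED_ADDR_CTRL
  # define PR_SET_TAGGED_ADDR_CTRL 55
  # define PR_GET_TAGGED_ADDR_CTRL 56
  # define PR_TAGGED_ADDR_ENABLE   (1UL << 0)
  #endif

  int main(void)
  {
      /* Per-thread opt-in; inherited across fork(), cleared on execve(). */
      if (prctl(PR_SET_TAGGED_ADDR_CTRL, PR_TAGGED_ADDR_ENABLE, 0, 0, 0)) {
          perror("PR_SET_TAGGED_ADDR_CTRL");  /* kernel or sysctl may forbid it */
          return 1;
      }

      int ctrl = prctl(PR_GET_TAGGED_ADDR_CTRL, 0, 0, 0, 0);
      printf("tagged address ABI %s\n",
             (ctrl >= 0 && (ctrl & PR_TAGGED_ADDR_ENABLE)) ? "enabled" : "disabled");
      return 0;
  }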
2019-08-06  arm64: untag user pointers in access_ok and __uaccess_mask_ptr  [Andrey Konovalov, 1 file, -3/+7]
This patch is a part of a series that extends the kernel ABI to allow passing tagged user pointers (with the top byte set to something other than 0x00) as syscall arguments. copy_from_user (and a few other similar functions) are used to copy data from user memory into kernel memory or vice versa. Since a user can provide a tagged pointer to one of the syscalls that use copy_from_user, we need to correctly handle such pointers. Do this by untagging user pointers in access_ok and in __uaccess_mask_ptr, before performing access validity checks. Note that this patch only temporarily untags the pointers to perform the checks, but then passes them as is into the kernel internals. Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Reviewed-by: Kees Cook <keescook@chromium.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Andrey Konovalov <andreyknvl@google.com> [will: Add __force to casting in untagged_addr() to kill sparse warning] Signed-off-by: Will Deacon <will@kernel.org>
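For illustration, a standalone sketch of the untagging step (the in-tree helper sign-extends from bit 55 via sign_extend64(); this re-states the arithmetic and is not the exact kernel macro):

  #include <stdint.h>

  /* Clear an address tag (bits 63:56) by sign-extending from bit 55.
   * User (TTBR0) addresses have bit 55 clear, so a tagged user pointer is
   * restored; a kernel address keeps its high bits and still fails the
   * subsequent limit check. */
  static inline uint64_t untag_user_addr(uint64_t addr)
  {
      return (uint64_t)(((int64_t)(addr << 8)) >> 8);
  }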
2019-06-19  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 234  [Thomas Gleixner, 1 file, -12/+1]
Based on 1 normalized pattern(s): this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details you should have received a copy of the gnu general public license along with this program if not see http www gnu org licenses extracted by the scancode license scanner the SPDX license identifier GPL-2.0-only has been chosen to replace the boilerplate/reference in 503 file(s). Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Alexios Zavras <alexios.zavras@intel.com> Reviewed-by: Allison Randal <allison@lohutok.net> Reviewed-by: Enrico Weigelt <info@metux.net> Cc: linux-spdx@vger.kernel.org Link: https://lkml.kernel.org/r/20190602204653.811534538@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-03-10  Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux  [Linus Torvalds, 1 file, -24/+12]
Pull arm64 updates from Catalin Marinas:
 - Pseudo NMI support for arm64 using GICv3 interrupt priorities
 - uaccess macros clean-up (unsafe user accessors also merged but reverted, waiting for objtool support on arm64)
 - ptrace regsets for Pointer Authentication (ARMv8.3) key management
 - inX() ordering w.r.t. delay() on arm64 and riscv (acks in place by the riscv maintainers)
 - arm64/perf updates: PMU bindings converted to json-schema, unused variable and misleading comment removed
 - arm64/debug fixes to ensure checking of the triggering exception level and to avoid the propagation of the UNKNOWN FAR value into the si_code for debug signals
 - Workaround for Fujitsu A64FX erratum 010001
 - lib/raid6 ARM NEON optimisations
 - NR_CPUS now defaults to 256 on arm64
 - Minor clean-ups (documentation/comments, Kconfig warning, unused asm-offsets, clang warnings)
 - MAINTAINERS update for list information to the ARM64 ACPI entry

* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (54 commits)
  arm64: mmu: drop paging_init comments
  arm64: debug: Ensure debug handlers check triggering exception level
  arm64: debug: Don't propagate UNKNOWN FAR into si_code for debug signals
  Revert "arm64: uaccess: Implement unsafe accessors"
  arm64: avoid clang warning about self-assignment
  arm64: Kconfig.platforms: fix warning unmet direct dependencies
  lib/raid6: arm: optimize away a mask operation in NEON recovery routine
  lib/raid6: use vdupq_n_u8 to avoid endianness warnings
  arm64: io: Hook up __io_par() for inX() ordering
  riscv: io: Update __io_[p]ar() macros to take an argument
  asm-generic/io: Pass result of I/O accessor to __io_[p]ar()
  arm64: Add workaround for Fujitsu A64FX erratum 010001
  arm64: Rename get_thread_info()
  arm64: Remove documentation about TIF_USEDFPU
  arm64: irqflags: Fix clang build warnings
  arm64: Enable the support of pseudo-NMIs
  arm64: Skip irqflags tracing for NMI in IRQs disabled context
  arm64: Skip preemption when exiting an NMI
  arm64: Handle serror in NMI context
  irqchip/gic-v3: Allow interrupts to be set as pseudo-NMI
  ...
2019-03-04  get rid of legacy 'get_ds()' function  [Linus Torvalds, 1 file, -1/+0]
Every in-kernel use of this function defined it to KERNEL_DS (either as an actual define, or as an inline function). It's an entirely historical artifact, and long long long ago used to actually read the segment selector value of '%ds' on x86. Which in the kernel is always KERNEL_DS. Inspired by a patch from Jann Horn that just did this for a very small subset of users (the ones in fs/), along with Al who suggested a script. I then just took it to the logical extreme and removed all the remaining gunk. Roughly scripted with

  git grep -l '(get_ds())' -- :^tools/ | xargs sed -i 's/(get_ds())/(KERNEL_DS)/'
  git grep -lw 'get_ds' -- :^tools/ | xargs sed -i '/^#define get_ds()/d'

plus manual fixups to remove a few unusual usage patterns, the couple of inline function cases and to fix up a comment that had become stale. The 'get_ds()' function remains in an x86 kvm selftest, since in user space it actually does something relevant. Inspired-by: Jann Horn <jannh@google.com> Inspired-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-03-01  Revert "arm64: uaccess: Implement unsafe accessors"  [Catalin Marinas, 1 file, -59/+20]
This reverts commit 0bd3ef34d2a8dd4056560567073d8bfc5da92e39. There is ongoing work on objtool to identify incorrect uses of user_access_{begin,end}. Until this is sorted, do not enable the functionality on arm64. Also, on ARMv8.2 CPUs with hardware PAN and UAO support, there is no obvious performance benefit to the unsafe user accessors. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-01-25  arm64: uaccess: Implement unsafe accessors  [Julien Thierry, 1 file, -20/+59]
The current implementation of get/put_user_unsafe defaults to get/put_user, which toggles PAN before each access, despite having been told by the caller that multiple accesses to user memory are about to happen. Provide implementations for user_access_begin/end to turn PAN off/on, and implement unsafe accessors that assume PAN was already turned off. Signed-off-by: Julien Thierry <julien.thierry@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
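A hedged sketch of the intended calling convention, written against the modern generic form of the API (user_access_begin(ptr, len), unsafe_put_user(val, ptr, label)); the exact signatures at the time of this patch may differ, and fill_user_pair() is a made-up example function:

  static int fill_user_pair(u32 __user *p, u32 a, u32 b)
  {
      if (!user_access_begin(p, 2 * sizeof(u32)))  /* range check + PAN off, once */
          return -EFAULT;
      unsafe_put_user(a, &p[0], efault);           /* no per-access PAN toggling */
      unsafe_put_user(b, &p[1], efault);
      user_access_end();                           /* PAN back on */
      return 0;

  efault:
      user_access_end();
      return -EFAULT;
  }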
2019-01-25  arm64: uaccess: Cleanup get/put_user()  [Julien Thierry, 1 file, -24/+12]
__get/put_user_check() macro is made to return a value but this is never used. Get rid of them and just use directly __get/put_user_error() as a statement, reducing macro indirection. Also, take this opportunity to rename __get/put_user_err() as it gets a bit confusing having them along __get/put_user_error(). Signed-off-by: Julien Thierry <julien.thierry@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-01-03  Remove 'type' argument from access_ok() function  [Linus Torvalds, 1 file, -4/+4]
Nobody has actually used the type (VERIFY_READ vs VERIFY_WRITE) argument of the user address range verification function since we got rid of the old racy i386-only code to walk page tables by hand. It existed because the original 80386 would not honor the write protect bit when in kernel mode, so you had to do COW by hand before doing any user access. But we haven't supported that in a long time, and these days the 'type' argument is a purely historical artifact. A discussion about extending 'user_access_begin()' to do the range checking resulted in this patch, because there is no way we're going to move the old VERIFY_xyz interface to that model. And it's best done at the end of the merge window when I've done most of my merges, so let's just get this done once and for all. This patch was mostly done with a sed-script, with manual fix-ups for the cases that weren't of the trivial 'access_ok(VERIFY_xyz' form. There were a couple of notable cases:

 - csky still had the old "verify_area()" name as an alias.
 - the iter_iov code had magical hardcoded knowledge of the actual values of VERIFY_{READ,WRITE} (not that they mattered, since nothing really used it)
 - microblaze used the type argument for a debug printout

but other than those oddities this should be a total no-op patch. I tried to fix up all architectures, did fairly extensive grepping for access_ok() uses, and the changes are trivial, but I may have missed something. Any missed conversion should be trivially fixable, though. Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
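The mechanical shape of the conversion, at a hypothetical call site:

  /* before */
  if (!access_ok(VERIFY_WRITE, buf, len))
      return -EFAULT;

  /* after: the VERIFY_READ/VERIFY_WRITE argument is simply dropped */
  if (!access_ok(buf, len))
      return -EFAULT;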
2018-12-28  arm64: move untagged_addr macro from uaccess.h to memory.h  [Andrey Konovalov, 1 file, -7/+0]
Move the untagged_addr() macro from arch/arm64/include/asm/uaccess.h to arch/arm64/include/asm/memory.h to be later reused by KASAN. Also make the untagged_addr() macro accept all kinds of address types (void *, unsigned long, etc.), which removes the need for type casts at each place where the macro is used. This is done by using __typeof__. Link: http://lkml.kernel.org/r/2e9ef8d2ed594106eca514b268365b5419113f6a.1544099024.git.andreyknvl@google.com Signed-off-by: Andrey Konovalov <andreyknvl@google.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Christoph Lameter <cl@linux.com> Cc: Dmitry Vyukov <dvyukov@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
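A sketch of the type-preserving form described above (sign_extend64() is the kernel helper assumed here; bit 55 distinguishes TTBR0 from TTBR1 addresses):

  /* Accepts void *, unsigned long, etc. and returns the same type, so no
   * casts are needed at the call sites. */
  #define untagged_addr(addr) \
      ((__force __typeof__(addr))sign_extend64((__force u64)(addr), 55))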
2018-12-06  arm64: Add support for SB barrier and patch in over DSB; ISB sequences  [Will Deacon, 1 file, -2/+1]
We currently use a DSB; ISB sequence to inhibit speculation in set_fs(). Whilst this works for current CPUs, future CPUs may implement a new SB barrier instruction which acts as an architected speculation barrier. On CPUs that support it, patch in an SB; NOP sequence over the DSB; ISB sequence and advertise the presence of the new instruction to userspace. Signed-off-by: Will Deacon <will.deacon@arm.com>
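Roughly, the patching described above can be expressed with the alternatives framework; this is an illustrative sketch (upstream spells the SB instruction as a raw encoding so that older assemblers still build it):

  /* Speculation barrier: DSB; ISB on older cores, patched to SB; NOP when the
   * cpucap for the SB instruction is detected. */
  #define spec_bar()  asm volatile(ALTERNATIVE("dsb nsh\n\tisb",      \
                                               "sb\n\tnop",           \
                                               ARM64_HAS_SB) : : : "memory")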
2018-10-10  Revert "arm64: uaccess: implement unsafe accessors"  [James Morse, 1 file, -46/+15]
This reverts commit a1f33941f7e103bcf471eaf8461b212223c642d6. The unsafe accessors allow the PAN enable/disable calls to be made once for a group of accesses. Adding these means we can now have sequences that look like this:

 | user_access_begin();
 | unsafe_put_user(static-value, x, err);
 | unsafe_put_user(helper-that-sleeps(), x, err);
 | user_access_end();

Calling schedule() without taking an exception doesn't switch the PSTATE or TTBRs. We can switch out of a uaccess-enabled region, and run other code with uaccess enabled for a different thread. We can also switch from uaccess-disabled code back into this region, meaning the unsafe_put_user()s will fault. For software-PAN, threads that do this will get stuck as handle_mm_fault() will determine the page has already been mapped in, but we fault again as the page tables aren't loaded. To solve this we need code in __switch_to() that saves/restores the PAN state. Acked-by: Julien Thierry <julien.thierry@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2018-10-01  arm64: remove unused asm/compiler.h header file  [Ard Biesheuvel, 1 file, -1/+0]
arm64 does not define CONFIG_HAVE_ARCH_COMPILER_H, nor does it keep anything useful in its copy of asm/compiler.h, so let's remove it before anybody starts using it. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2018-09-10  arm64: uaccess: implement unsafe accessors  [Julien Thierry, 1 file, -15/+46]
The current implementation of get/put_user_unsafe defaults to get/put_user, which toggles PAN before each access, despite having been told by the caller that multiple accesses to user memory are about to happen. Provide implementations for user_access_begin/end to turn PAN off/on, and implement unsafe accessors that assume PAN was already turned off. Tested-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Julien Thierry <julien.thierry@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2018-02-19  arm64: uaccess: Formalise types for access_ok()  [Robin Murphy, 1 file, -6/+6]
In converting __range_ok() into a static inline, I inadvertently made it more type-safe, but without considering the ordering of the relevant conversions. This leads to quite a lot of Sparse noise about the fact that we use __chk_user_ptr() after addr has already been converted from a user pointer to an unsigned long. Rather than just adding another cast for the sake of shutting Sparse up, it seems reasonable to rework the types to make logical sense (although the resulting codegen for __range_ok() remains identical). The only callers this affects directly are our compat traps where the inferred "user-pointer-ness" of a register value now warrants explicit casting. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2018-02-06  arm64: uaccess: Mask __user pointers for __arch_{clear, copy_*}_user  [Will Deacon, 1 file, -7/+22]
Like we've done for get_user and put_user, ensure that user pointers are masked before invoking the underlying __arch_{clear,copy_*}_user operations. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2018-02-06  arm64: uaccess: Don't bother eliding access_ok checks in __{get, put}_user  [Will Deacon, 1 file, -22/+32]
access_ok isn't an expensive operation once the addr_limit for the current thread has been loaded into the cache. Given that the initial access_ok check preceding a sequence of __{get,put}_user operations will take the brunt of the miss, we can make the __* variants identical to the full-fat versions, which brings with it the benefits of address masking. The likely cost in these sequences will be from toggling PAN/UAO, which we can address later by implementing the *_unsafe versions. Reviewed-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2018-02-06  arm64: uaccess: Prevent speculative use of the current addr_limit  [Will Deacon, 1 file, -0/+7]
A mispredicted conditional call to set_fs could result in the wrong addr_limit being forwarded under speculation to a subsequent access_ok check, potentially forming part of a spectre-v1 attack using uaccess routines. This patch prevents this forwarding from taking place by putting heavy barriers in set_fs after writing the addr_limit. Reviewed-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
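A simplified sketch of the hardened set_fs() this describes (the in-tree version also raises the thread flag used for the return-to-user check; dsb()/isb() are the kernel barrier helpers):

  static inline void set_fs(mm_segment_t fs)
  {
      current_thread_info()->addr_limit = fs;

      /*
       * Prevent a mispredicted conditional call to set_fs from forwarding
       * the wrong addr_limit to a subsequent access_ok() under speculation.
       */
      dsb(nsh);
      isb();
  }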
2018-02-06  arm64: Use pointer masking to limit uaccess speculation  [Robin Murphy, 1 file, -3/+23]
Similarly to x86, mitigate speculation past an access_ok() check by masking the pointer against the address limit before use. Even if we don't expect speculative writes per se, it is plausible that a CPU may still speculate at least as far as fetching a cache line for writing, hence we also harden put_user() and clear_user() for peace of mind. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
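The idea in C terms (the in-tree __uaccess_mask_ptr() is branch-free inline assembly using bics/csel; this is only an illustration of the effect):

  /* Force a pointer that is outside the current limit to NULL, so that even a
   * speculatively executed access cannot dereference an attacker-controlled
   * kernel address. */
  static inline void __user *mask_user_ptr(void __user *ptr, unsigned long limit)
  {
      unsigned long addr = (unsigned long)ptr;
      unsigned long keep = (addr <= limit) ? ~0UL : 0UL;

      return (void __user *)(addr & keep);
  }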
2018-02-06  arm64: Make USER_DS an inclusive limit  [Robin Murphy, 1 file, -19/+26]
Currently, USER_DS represents an exclusive limit while KERNEL_DS is inclusive. In order to do some clever trickery for speculation-safe masking, we need them both to behave equivalently - there aren't enough bits to make KERNEL_DS exclusive, so we have precisely one option. This also happens to correct a longstanding false negative for a range ending on the very top byte of kernel memory. Mark Rutland points out that we've actually got the semantics of addresses vs. segments muddled up in most of the places we need to amend, so shuffle the {USER,KERNEL}_DS definitions around such that we can correct those properly instead of just pasting "-1"s everywhere. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2018-01-16  arm64: kpti: Fix the interaction between ASID switching and software PAN  [Catalin Marinas, 1 file, -3/+6]
With ARM64_SW_TTBR0_PAN enabled, the exception entry code checks the active ASID to decide whether user access was enabled (non-zero ASID) when the exception was taken. On return from exception, if user access was previously disabled, it re-instates TTBR0_EL1 from the per-thread saved value (updated in switch_mm() or efi_set_pgd()). Commit 7655abb95386 ("arm64: mm: Move ASID from TTBR0 to TTBR1") makes a TTBR0_EL1 + ASID switching non-atomic. Subsequently, commit 27a921e75711 ("arm64: mm: Fix and re-enable ARM64_SW_TTBR0_PAN") changes the __uaccess_ttbr0_disable() function and asm macro to first write the reserved TTBR0_EL1 followed by the ASID=0 update in TTBR1_EL1. If an exception occurs between these two, the exception return code will re-instate a valid TTBR0_EL1. Similar scenario can happen in cpu_switch_mm() between setting the reserved TTBR0_EL1 and the ASID update in cpu_do_switch_mm(). This patch reverts the entry.S check for ASID == 0 to TTBR0_EL1 and disables the interrupts around the TTBR0_EL1 and ASID switching code in __uaccess_ttbr0_disable(). It also ensures that, when returning from the EFI runtime services, efi_set_pgd() doesn't leave a non-zero ASID in TTBR1_EL1 by using uaccess_ttbr0_{enable,disable}. The accesses to current_thread_info()->ttbr0 are updated to use READ_ONCE/WRITE_ONCE. As a safety measure, __uaccess_ttbr0_enable() always masks out any existing non-zero ASID TTBR1_EL1 before writing in the new ASID. Fixes: 27a921e75711 ("arm64: mm: Fix and re-enable ARM64_SW_TTBR0_PAN") Acked-by: Will Deacon <will.deacon@arm.com> Reported-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Reviewed-by: James Morse <james.morse@arm.com> Tested-by: James Morse <james.morse@arm.com> Co-developed-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2018-01-14  arm64: Re-order reserved_ttbr0 in linker script  [Steve Capper, 1 file, -2/+2]
Currently one resolves the location of the reserved_ttbr0 for PAN by taking a positive offset from swapper_pg_dir. In a future patch we wish to extend the swapper s.t. its size is determined at link time rather than compile time, rendering SWAPPER_DIR_SIZE unsuitable for such a low level calculation. In this patch we re-arrange the order of the linker script s.t. instead one computes reserved_ttbr0 by subtracting RESERVED_TTBR0_SIZE from swapper_pg_dir. Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Steve Capper <steve.capper@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2018-01-13  arm64: uaccess: Add PAN helper  [James Morse, 1 file, -0/+12]
Add __uaccess_{en,dis}able_hw_pan() helpers to set/clear the PSTATE.PAN bit. Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
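In expanded form the helpers amount to an MSR of the PSTATE.PAN field; the in-tree version emits the instruction via a raw-encoding helper so pre-ARMv8.1 assemblers still build it. Sketch:

  static inline void __uaccess_disable_hw_pan(void)
  {
      asm volatile("msr pan, #0" ::: "memory");  /* kernel may access user memory */
  }

  static inline void __uaccess_enable_hw_pan(void)
  {
      asm volatile("msr pan, #1" ::: "memory");  /* kernel accesses to user memory fault again */
  }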
2017-12-11  arm64: mm: Introduce TTBR_ASID_MASK for getting at the ASID in the TTBR  [Will Deacon, 1 file, -2/+2]
There are now a handful of open-coded masks to extract the ASID from a TTBR value, so introduce a TTBR_ASID_MASK and use that instead. Suggested-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Laura Abbott <labbott@redhat.com> Tested-by: Shanker Donthineni <shankerd@codeaurora.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-12-11  arm64: mm: Fix and re-enable ARM64_SW_TTBR0_PAN  [Will Deacon, 1 file, -4/+17]
With the ASID now installed in TTBR1, we can re-enable ARM64_SW_TTBR0_PAN by ensuring that we switch to a reserved ASID of zero when disabling user access and restore the active user ASID on the uaccess enable path. Reviewed-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Laura Abbott <labbott@redhat.com> Tested-by: Shanker Donthineni <shankerd@codeaurora.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-09-05  Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux  [Linus Torvalds, 1 file, -0/+12]
Pull arm64 updates from Catalin Marinas:
 - VMAP_STACK support, allowing the kernel stacks to be allocated in the vmalloc space with a guard page for trapping stack overflows. One of the patches introduces THREAD_ALIGN and changes the generic alloc_thread_stack_node() to use this instead of THREAD_SIZE (no functional change for other architectures)
 - Contiguous PTE hugetlb support re-enabled (after being reverted a couple of times). We now have the semantics agreed in the generic mm layer together with API improvements so that the architecture code can detect between contiguous and non-contiguous huge PTEs
 - Initial support for persistent memory on ARM: DC CVAP instruction exposed to user space (HWCAP) and the in-kernel pmem API implemented
 - raid6 improvements for arm64: faster algorithm for the delta syndrome and implementation of the recovery routines using Neon
 - FP/SIMD refactoring and removal of support for Neon in interrupt context. This is in preparation for full SVE support
 - PTE accessors converted from inline asm to cmpxchg so that we can use LSE atomics if available (ARMv8.1)
 - Perf support for Cortex-A35 and A73
 - Non-urgent fixes and cleanups

* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (75 commits)
  arm64: cleanup {COMPAT_,}SET_PERSONALITY() macro
  arm64: introduce separated bits for mm_context_t flags
  arm64: hugetlb: Cleanup setup_hugepagesz
  arm64: Re-enable support for contiguous hugepages
  arm64: hugetlb: Override set_huge_swap_pte_at() to support contiguous hugepages
  arm64: hugetlb: Override huge_pte_clear() to support contiguous hugepages
  arm64: hugetlb: Handle swap entries in huge_pte_offset() for contiguous hugepages
  arm64: hugetlb: Add break-before-make logic for contiguous entries
  arm64: hugetlb: Spring clean huge pte accessors
  arm64: hugetlb: Introduce pte_pgprot helper
  arm64: hugetlb: set_huge_pte_at Add WARN_ON on !pte_present
  arm64: kexec: have own crash_smp_send_stop() for crash dump for nonpanic cores
  arm64: dma-mapping: Mark atomic_pool as __ro_after_init
  arm64: dma-mapping: Do not pass data to gen_pool_set_algo()
  arm64: Remove the !CONFIG_ARM64_HW_AFDBM alternative code paths
  arm64: Ignore hardware dirty bit updates in ptep_set_wrprotect()
  arm64: Move PTE_RDONLY bit handling out of set_pte_at()
  kvm: arm64: Convert kvm_set_s2pte_readonly() from inline asm to cmpxchg()
  arm64: Convert pte handling from inline asm to using (cmp)xchg
  arm64: neon/efi: Make EFI fpsimd save/restore variables static
  ...
2017-09-04  Merge branch 'x86-syscall-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  [Linus Torvalds, 1 file, -0/+3]
Pull syscall updates from Ingo Molnar: "Improve the security of set_fs(): we now check the address limit on a number of key platforms (x86, arm, arm64) before returning to user-space - without adding overhead to the typical system call fast path"

* 'x86-syscall-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  arm64/syscalls: Check address limit on user-mode return
  arm/syscalls: Check address limit on user-mode return
  x86/syscalls: Check address limit on user-mode return
2017-08-09  arm64: uaccess: Implement *_flushcache variants  [Robin Murphy, 1 file, -0/+12]
Implement the set of copy functions with guarantees of a clean cache upon completion necessary to support the pmem driver. Reviewed-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-07-20  arm64: uaccess: Remove redundant __force from addr cast in __range_ok  [Will Deacon, 1 file, -1/+1]
Casting a pointer to an integral type doesn't require a __force attribute, because you'll need to cast back to a pointer in order to dereference the thing anyway. This patch removes the redundant __force cast from __range_ok. Reported-by: Luc Van Oostenryck <luc.vanoostenryck@gmail.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-07-15  Merge branch 'work.uaccess-unaligned' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs  [Linus Torvalds, 1 file, -4/+0]
Pull uaccess-unaligned removal from Al Viro: "That stuff had just one user, and an exotic one, at that - binfmt_flat on arm and m68k"

* 'work.uaccess-unaligned' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
  kill {__,}{get,put}_user_unaligned()
  binfmt_flat: flat_{get,put}_addr_from_rp() should be able to fail
2017-07-08  arm64/syscalls: Check address limit on user-mode return  [Thomas Garnier, 1 file, -0/+3]
Ensure the address limit is a user-mode segment before returning to user-mode. Otherwise a process can corrupt kernel-mode memory and elevate privileges [1]. The set_fs function sets the TIF_SETFS flag to force a slow path on return. In the slow path, the address limit is checked to be USER_DS if needed. [1] https://bugs.chromium.org/p/project-zero/issues/detail?id=990 Signed-off-by: Thomas Garnier <thgarnie@google.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Mark Rutland <mark.rutland@arm.com> Cc: kernel-hardening@lists.openwall.com Cc: Will Deacon <will.deacon@arm.com> Cc: David Howells <dhowells@redhat.com> Cc: Dave Hansen <dave.hansen@intel.com> Cc: Miroslav Benes <mbenes@suse.cz> Cc: Chris Metcalf <cmetcalf@mellanox.com> Cc: Pratyush Anand <panand@redhat.com> Cc: Russell King <linux@armlinux.org.uk> Cc: Petr Mladek <pmladek@suse.com> Cc: Rik van Riel <riel@redhat.com> Cc: Kees Cook <keescook@chromium.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Andy Lutomirski <luto@kernel.org> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: linux-arm-kernel@lists.infradead.org Cc: Will Drewry <wad@chromium.org> Cc: linux-api@vger.kernel.org Cc: Oleg Nesterov <oleg@redhat.com> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Paolo Bonzini <pbonzini@redhat.com> Link: http://lkml.kernel.org/r/20170615011203.144108-3-thgarnie@google.com
2017-07-03  kill {__,}{get,put}_user_unaligned()  [Al Viro, 1 file, -4/+0]
no users left Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2017-05-15  kill strlen_user()  [Al Viro, 1 file, -1/+0]
no callers, no consistent semantics, no sane way to use it... Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2017-05-11  Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux  [Linus Torvalds, 1 file, -6/+7]
Pull more arm64 updates from Catalin Marinas:
 - Silence module allocation failures when CONFIG_ARM*_MODULE_PLTS is enabled. This requires a check for __GFP_NOWARN in alloc_vmap_area()
 - Improve/sanitise user tagged pointers handling in the kernel
 - Inline asm fixes/cleanups

* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
  arm64: Silence first allocation with CONFIG_ARM64_MODULE_PLTS=y
  ARM: Silence first allocation with CONFIG_ARM_MODULE_PLTS=y
  mm: Silence vmap() allocation failures based on caller gfp_flags
  arm64: uaccess: suppress spurious clang warning
  arm64: atomic_lse: match asm register sizes
  arm64: armv8_deprecated: ensure extension of addr
  arm64: uaccess: ensure extension of access_ok() addr
  arm64: ensure extension of smp_store_release value
  arm64: xchg: hazard against entire exchange variable
  arm64: documentation: document tagged pointer stack constraints
  arm64: entry: improve data abort handling of tagged pointers
  arm64: hw_breakpoint: fix watchpoint matching for tagged pointers
  arm64: traps: fix userspace cache maintenance emulation on a tagged pointer
2017-05-09  arm64: uaccess: suppress spurious clang warning  [Mark Rutland, 1 file, -2/+2]
Clang tries to warn when there's a mismatch between an operand's size, and the size of the register it is held in, as this may indicate a bug. Specifically, clang warns when the operand's type is less than 64 bits wide, and the register is used unqualified (i.e. %N rather than %xN or %wN). Unfortunately clang can generate these warnings for unreachable code. For example, for code like:

  do {                                            \
          typeof(*(ptr)) __v = (v);               \
          switch(sizeof(*(ptr))) {                \
          case 1:                                 \
                  // assume __v is 1 byte wide    \
                  asm ("{op}b %w0" : : "r" (v));  \
                  break;                          \
          case 8:                                 \
                  // assume __v is 8 bytes wide   \
                  asm ("{op} %0" : : "r" (v));    \
                  break;                          \
          }                                       \
  } while (0)

... if op() were passed a char value and pointer to char, clang may produce a warning for the unreachable case where sizeof(*(ptr)) is 8. For the same reasons, clang produces warnings when __put_user_err() is used for types that are less than 64 bits wide. We could avoid this with a cast to a fixed-width type in each of the cases. However, GCC will then warn that pointer types are being cast to mismatched integer sizes (in unreachable paths). Another option would be to use the same union trickery as we do for __smp_store_release() and __smp_load_acquire(), but this is fairly invasive. Instead, this patch suppresses the clang warning by using an x modifier in the assembly for the 8 byte case of __put_user_err(). No additional work is necessary as the value has been cast to typeof(*(ptr)), so the compiler will have performed any necessary extension for the reachable case. For consistency, __get_user_err() is also updated to use the x modifier for its 8 byte case. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reported-by: Matthias Kaehlcke <mka@chromium.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
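A self-contained illustration of the fix (store_user_sized is a made-up macro, not a kernel API): the 8-byte arm uses the "%x0" template modifier so the operand is always referenced as a 64-bit register, which silences the warning for the unreachable narrow-type instantiations:

  #define store_user_sized(v, ptr)                                        \
  do {                                                                    \
          __typeof__(*(ptr)) __v = (v);                                   \
          switch (sizeof(*(ptr))) {                                       \
          case 1:                                                         \
                  /* %w0: 32-bit register name, fine for a byte store */  \
                  asm volatile("strb %w0, [%1]"                           \
                               : : "r" (__v), "r" (ptr) : "memory");      \
                  break;                                                  \
          case 8:                                                         \
                  /* %x0: always the 64-bit register name */              \
                  asm volatile("str %x0, [%1]"                            \
                               : : "r" (__v), "r" (ptr) : "memory");      \
                  break;                                                  \
          }                                                               \
  } while (0)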
2017-05-09  arm64: uaccess: ensure extension of access_ok() addr  [Mark Rutland, 1 file, -1/+2]
Our access_ok() simply hands its arguments over to __range_ok(), which implicitly assumes that the addr parameter is 64 bits wide. This isn't necessarily true for compat code, which might pass down a 32-bit address parameter. In these cases, we don't have a guarantee that the address has been zero extended to 64 bits, and the upper bits of the register may contain unknown values, potentially resulting in a spurious failure. Avoid this by explicitly casting the addr parameter to an unsigned long (as is done on other architectures), ensuring that the parameter is widened appropriately. Fixes: 0aea86a2176c ("arm64: User access library functions") Cc: <stable@vger.kernel.org> # 3.7.x- Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-05-09  arm64: hw_breakpoint: fix watchpoint matching for tagged pointers  [Kristina Martsenko, 1 file, -3/+3]
When we take a watchpoint exception, the address that triggered the watchpoint is found in FAR_EL1. We compare it to the address of each configured watchpoint to see which one was hit. The configured watchpoint addresses are untagged, while the address in FAR_EL1 will have an address tag if the data access was done using a tagged address. The tag needs to be removed to compare the address to the watchpoints. Currently we don't remove it, and as a result can report the wrong watchpoint as being hit (specifically, always either the highest TTBR0 watchpoint or lowest TTBR1 watchpoint). This patch removes the tag. Fixes: d50240a5f6ce ("arm64: mm: permit use of tagged pointers at EL0") Cc: <stable@vger.kernel.org> # 3.12.x- Acked-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-03-28  arm64: switch to RAW_COPY_USER  [Al Viro, 1 file, -50/+5]
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2017-03-28  arm64: add extable.h  [Al Viro, 1 file, -22/+1]
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2017-03-05  uaccess: drop duplicate includes from asm/uaccess.h  [Al Viro, 1 file, -2/+0]
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2017-03-05  uaccess: move VERIFY_{READ,WRITE} definitions to linux/uaccess.h  [Al Viro, 1 file, -3/+0]
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2017-02-08  arm64: uaccess: consistently check object sizes  [Mark Rutland, 1 file, -2/+2]
Currently in arm64's copy_{to,from}_user, we only check the source/destination object size if access_ok() tells us the user access is permissible. However, in copy_from_user() we'll subsequently zero any remainder on the destination object. If we failed the access_ok() check, that applies to the whole object size, which we didn't check. To ensure that we catch that case, this patch hoists check_object_size() to the start of copy_from_user(), matching __copy_from_user() and __copy_to_user(). To make all of our uaccess copy primitives consistent, the same is done to copy_to_user(). Cc: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Kees Cook <keescook@chromium.org> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
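The resulting shape, slightly simplified (at this point access_ok() still took the VERIFY_* argument):

  static inline unsigned long copy_from_user(void *to, const void __user *from,
                                             unsigned long n)
  {
      unsigned long res = n;

      check_object_size(to, n, false);    /* hoisted: runs even if access_ok() fails */
      if (access_ok(VERIFY_READ, from, n))
          res = __arch_copy_from_user(to, from, n);
      if (res)
          memset(to + (n - res), 0, res); /* zero the uncopied remainder */
      return res;
  }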
2016-12-26  arm64: don't pull uaccess.h into *.S  [Al Viro, 1 file, -64/+0]
Split asm-only parts of arm64 uaccess.h into a new header and use that from *.S. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2016-12-12  arm64: Disable PAN on uaccess_enable()  [Marc Zyngier, 1 file, -1/+1]
Commit 4b65a5db3627 ("arm64: Introduce uaccess_{disable,enable} functionality based on TTBR0_EL1") added conditional user access enable/disable. Unfortunately, a typo prevents the PAN bit from being cleared for user access functions. Restore the PAN functionality by adding the missing '!'. Fixes: 4b65a5db3627 ("arm64: Introduce uaccess_{disable,enable} functionality based on TTBR0_EL1") Reported-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-11-21  arm64: Introduce uaccess_{disable,enable} functionality based on TTBR0_EL1  [Catalin Marinas, 1 file, -6/+102]
This patch adds the uaccess macros/functions to disable access to user space by setting TTBR0_EL1 to a reserved zeroed page. Since the value written to TTBR0_EL1 must be a physical address, for simplicity this patch introduces a reserved_ttbr0 page at a constant offset from swapper_pg_dir. The uaccess_disable code uses the ttbr1_el1 value adjusted by the reserved_ttbr0 offset. Enabling access to user is done by restoring TTBR0_EL1 with the value from the struct thread_info ttbr0 variable. Interrupts must be disabled during the uaccess_ttbr0_enable code to ensure the atomicity of the thread_info.ttbr0 read and TTBR0_EL1 write. This patch also moves the get_thread_info asm macro from entry.S to assembler.h for reuse in the uaccess_ttbr0_* macros. Cc: Will Deacon <will.deacon@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Kees Cook <keescook@chromium.org> Cc: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
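A rough C-level sketch of the mechanism (the in-tree code is assembly patched in via alternatives, and the reserved table is reserved_ttbr0 rather than the zero page used here purely for illustration):

  static inline void uaccess_ttbr0_disable(void)
  {
      /* Point TTBR0_EL1 at a reserved, zeroed table: user VAs no longer
       * translate, so stray kernel accesses to user memory fault. */
      write_sysreg(virt_to_phys(empty_zero_page), ttbr0_el1);
      isb();
  }

  static inline void uaccess_ttbr0_enable(void)
  {
      unsigned long flags;

      /* The thread_info.ttbr0 read and the TTBR0_EL1 write must not race
       * with a context switch, hence the interrupt disabling noted above. */
      local_irq_save(flags);
      write_sysreg(current_thread_info()->ttbr0, ttbr0_el1);
      isb();
      local_irq_restore(flags);
  }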
2016-11-21  arm64: Factor out PAN enabling/disabling into separate uaccess_* macros  [Catalin Marinas, 1 file, -10/+69]
This patch moves the directly coded alternatives for turning PAN on/off into separate uaccess_{enable,disable} macros or functions. The asm macros take a few arguments which will be used in subsequent patches. Note that any (unlikely) access that the compiler might generate between uaccess_enable() and uaccess_disable(), other than those explicitly specified by the user access code, will not be protected by PAN. Cc: Will Deacon <will.deacon@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Kees Cook <keescook@chromium.org> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-10-20  arm64: Cortex-A53 errata workaround: check for kernel addresses  [Andre Przywara, 1 file, -0/+8]
Commit 7dd01aef0557 ("arm64: trap userspace "dc cvau" cache operation on errata-affected core") adds code to execute cache maintenance instructions in the kernel on behalf of userland on CPUs with certain ARM CPU errata. It turns out that the address hasn't been checked to be a valid user space address, allowing userland to clean cache lines in kernel space. Fix this by introducing an address check before executing the instructions on behalf of userland. Since the address doesn't come via a syscall parameter, we can't just reject tagged pointers and instead have to remove the tag when checking against the user address limit. Cc: <stable@vger.kernel.org> Fixes: 7dd01aef0557 ("arm64: trap userspace "dc cvau" cache operation on errata-affected core") Reported-by: Kristina Martsenko <kristina.martsenko@arm.com> Signed-off-by: Andre Przywara <andre.przywara@arm.com> [will: rework commit message + replace access_ok with max_user_addr()] Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-15  arm64: don't zero in __copy_from_user{,_inatomic}  [Al Viro, 1 file, -4/+6]
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2016-08-08  Merge tag 'usercopy-v4.8' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux  [Linus Torvalds, 1 file, -5/+10]
Pull usercopy protection from Kees Cook: "This implements HARDENED_USERCOPY verification of copy_to_user and copy_from_user bounds checking for most architectures on SLAB and SLUB"

* tag 'usercopy-v4.8' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux:
  mm: SLUB hardened usercopy support
  mm: SLAB hardened usercopy support
  s390/uaccess: Enable hardened usercopy
  sparc/uaccess: Enable hardened usercopy
  powerpc/uaccess: Enable hardened usercopy
  ia64/uaccess: Enable hardened usercopy
  arm64/uaccess: Enable hardened usercopy
  ARM: uaccess: Enable hardened usercopy
  x86/uaccess: Enable hardened usercopy
  mm: Hardened usercopy
  mm: Implement stack frame object validation
  mm: Add is_migrate_cma_page
2016-07-26  arm64/uaccess: Enable hardened usercopy  [Kees Cook, 1 file, -7/+22]
Enables CONFIG_HARDENED_USERCOPY checks on arm64. As done by KASAN in -next, renames the low-level functions to __arch_copy_*_user() so a static inline can do additional work before the copy. Signed-off-by: Kees Cook <keescook@chromium.org>
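Simplified, the arrangement after the rename looks like this (the static inline does the HARDENED_USERCOPY object check before handing off to the renamed low-level copy routine):

  static inline unsigned long __must_check
  copy_to_user(void __user *to, const void *from, unsigned long n)
  {
      if (access_ok(VERIFY_WRITE, to, n)) {
          check_object_size(from, n, true);  /* bounds-check the kernel object */
          n = __arch_copy_to_user(to, from, n);
      }
      return n;  /* number of bytes NOT copied */
  }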