path: root/kernel/scs.c
Age | Commit message | Author | Files | Lines
2022-11-09 | scs: add support for dynamic shadow call stacks | Ard Biesheuvel | 1 | -2/+12

In order to allow arches to use code patching to conditionally emit the shadow stack pushes and pops, rather than always taking the performance hit even on CPUs that implement alternatives such as pointer authentication on arm64, add a Kconfig symbol that the arch can set to omit the SCS codegen itself, without otherwise affecting how SCS support code and compiler options (for register reservation, for instance) are emitted.

Also, add a static key and some plumbing to omit the allocation of the shadow call stack for dynamic SCS configurations if SCS is disabled at runtime.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Sami Tolvanen <samitolvanen@google.com>
Tested-by: Sami Tolvanen <samitolvanen@google.com>
Link: https://lore.kernel.org/r/20221027155908.1940624-3-ardb@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
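[Editor's sketch] The shape of this plumbing, in outline: the arch selects the new Kconfig symbol, and the core code gates shadow stack allocation behind a static key. Names below mirror the upstream ones, but treat the bodies as illustrative rather than the exact patch:

  /* Boot-time switch, flipped by the arch before tasks are created. */
  DEFINE_STATIC_KEY_FALSE(dynamic_scs_enabled);

  static inline bool scs_is_enabled(void)
  {
          /* Without CONFIG_DYNAMIC_SCS, SCS remains a build-time decision. */
          if (!IS_ENABLED(CONFIG_DYNAMIC_SCS))
                  return true;
          return static_branch_likely(&dynamic_scs_enabled);
  }

  int scs_prepare(struct task_struct *tsk, int node)
  {
          void *s;

          /* If the SCS pushes/pops were patched out, skip the allocation too. */
          if (!scs_is_enabled())
                  return 0;

          s = scs_alloc(node);
          if (!s)
                  return -ENOMEM;

          task_scs(tsk) = task_scs_sp(tsk) = s;
          return 0;
  }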
2022-03-24 | kasan, vmalloc: only tag normal vmalloc allocations | Andrey Konovalov | 1 | -4/+8

The kernel can use vmalloc to allocate executable memory. The only supported way to do that is via __vmalloc_node_range() with the executable bit set in the prot argument. (vmap() resets the bit via pgprot_nx().)

Once tag-based KASAN modes start tagging vmalloc allocations, executing code from such allocations will lead to the PC register getting a tag, which is not tolerated by the kernel.

Only tag the allocations for normal kernel pages.

[andreyknvl@google.com: pass KASAN_VMALLOC_PROT_NORMAL to kasan_unpoison_vmalloc()]
Link: https://lkml.kernel.org/r/9230ca3d3e40ffca041c133a524191fd71969a8d.1646233925.git.andreyknvl@google.com
[andreyknvl@google.com: support tagged vmalloc mappings]
Link: https://lkml.kernel.org/r/2f6605e3a358cf64d73a05710cb3da356886ad29.1646233925.git.andreyknvl@google.com
[andreyknvl@google.com: don't unintentionally disable poisoning]
Link: https://lkml.kernel.org/r/de4587d6a719232e83c760113e46ed2d4d8da61e.1646757322.git.andreyknvl@google.com
Link: https://lkml.kernel.org/r/fbfd9939a4dc375923c9a5c6b9e7ab05c26b8c6b.1643047180.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Acked-by: Marco Elver <elver@google.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Evgenii Stepanov <eugenis@google.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Collingbourne <pcc@google.com>
Cc: Vincenzo Frascino <vincenzo.frascino@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
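[Editor's sketch] The resulting check amounts to an early return in the tag-based unpoison hook. KASAN_VMALLOC_PROT_NORMAL is the flag named above; __kasan_unpoison_vmalloc_normal() is a hypothetical stand-in for the rest of the tagging path, not an upstream function:

  void *__kasan_unpoison_vmalloc(const void *start, unsigned long size,
                                 kasan_vmalloc_flags_t flags)
  {
          /*
           * Executable (non-PAGE_KERNEL) mappings must keep an untagged
           * pointer: a tagged PC is not tolerated by the kernel.
           */
          if (!(flags & KASAN_VMALLOC_PROT_NORMAL))
                  return (void *)start;

          return __kasan_unpoison_vmalloc_normal(start, size, flags);
  }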
2022-03-24 | kasan, vmalloc: add vmalloc tagging for HW_TAGS | Andrey Konovalov | 1 | -2/+2

Add vmalloc tagging support to HW_TAGS KASAN.

The key difference between HW_TAGS and the other two KASAN modes when it comes to vmalloc: HW_TAGS KASAN can only assign tags to physical memory. The other two modes have shadow memory covering every mapped virtual memory region.

Make __kasan_unpoison_vmalloc() for HW_TAGS KASAN:

- Skip non-VM_ALLOC mappings, as HW_TAGS KASAN can only tag a single mapping of normal physical memory; see the comment in the function.

- Generate a random tag, tag the returned pointer and the allocation, and initialize the allocation at the same time.

- Propagate the tag into the page structs to allow accesses through page_address(vmalloc_to_page()).

The rest of the vmalloc-related KASAN hooks are not needed:

- The shadow-related ones are fully skipped.

- __kasan_poison_vmalloc() is kept as a no-op with a comment.

Poisoning and zeroing of physical pages that are backing vmalloc() allocations are skipped via __GFP_SKIP_KASAN_UNPOISON and __GFP_SKIP_ZERO: __kasan_unpoison_vmalloc() does that instead.

Enabling CONFIG_KASAN_VMALLOC with HW_TAGS is not yet allowed.

Link: https://lkml.kernel.org/r/d19b2e9e59a9abc59d05b72dea8429dcaea739c6.1643047180.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Co-developed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Acked-by: Marco Elver <elver@google.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Evgenii Stepanov <eugenis@google.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Collingbourne <pcc@google.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
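[Editor's sketch] A condensed sketch of the HW_TAGS path those bullets describe. The flag names (KASAN_VMALLOC_VM_ALLOC, KASAN_VMALLOC_INIT) come from this series, but the body is illustrative rather than the verbatim upstream function:

  void *__kasan_unpoison_vmalloc(const void *start, unsigned long size,
                                 kasan_vmalloc_flags_t flags)
  {
          u8 tag;

          /* HW_TAGS can only tag one mapping of normal physical memory. */
          if (!(flags & KASAN_VMALLOC_VM_ALLOC))
                  return (void *)start;

          tag = kasan_random_tag();
          start = set_tag(start, tag);

          /*
           * Tag and (if requested) initialize the allocation in one pass;
           * the tag also reaches the backing page structs from here.
           */
          kasan_unpoison(start, size, flags & KASAN_VMALLOC_INIT);

          return (void *)start;
  }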
2021-09-30 | scs: Release kasan vmalloc poison in scs_free process | Yee Lee | 1 | -0/+1

Since scs allocation was moved to the vmalloc region, the shadow stack is protected by kasan_poison_vmalloc. However, the vfree_atomic operation needs to access its context during the scs_free process, which triggers the KASAN error dumped below. This patch adds kasan_unpoison_vmalloc() before vfree_atomic, which aligns with the prior flow when kmem_cache was used. The vmalloc region goes back to being poisoned in the subsequent vunmap() operations.

==================================================================
BUG: KASAN: vmalloc-out-of-bounds in llist_add_batch+0x60/0xd4
Write of size 8 at addr ffff8000100b9000 by task kthreadd/2

CPU: 0 PID: 2 Comm: kthreadd Not tainted 5.15.0-rc2-11681-g92477dd1faa6-dirty #1
Hardware name: linux,dummy-virt (DT)
Call trace:
 dump_backtrace+0x0/0x43c
 show_stack+0x1c/0x2c
 dump_stack_lvl+0x68/0x84
 print_address_description+0x80/0x394
 kasan_report+0x180/0x1dc
 __asan_report_store8_noabort+0x48/0x58
 llist_add_batch+0x60/0xd4
 vfree_atomic+0x60/0xe0
 scs_free+0x1dc/0x1fc
 scs_release+0xa4/0xd4
 free_task+0x30/0xe4
 __put_task_struct+0x1ec/0x2e0
 delayed_put_task_struct+0x5c/0xa0
 rcu_do_batch+0x62c/0x8a0
 rcu_core+0x60c/0xc14
 rcu_core_si+0x14/0x24
 __do_softirq+0x19c/0x68c
 irq_exit+0x118/0x2dc
 handle_domain_irq+0xcc/0x134
 gic_handle_irq+0x7c/0x1bc
 call_on_irq_stack+0x40/0x70
 do_interrupt_handler+0x78/0x9c
 el1_interrupt+0x34/0x60
 el1h_64_irq_handler+0x1c/0x2c
 el1h_64_irq+0x78/0x7c
 _raw_spin_unlock_irqrestore+0x40/0xcc
 sched_fork+0x4f0/0xb00
 copy_process+0xacc/0x3648
 kernel_clone+0x168/0x534
 kernel_thread+0x13c/0x1b0
 kthreadd+0x2bc/0x400
 ret_from_fork+0x10/0x20

Memory state around the buggy address:
 ffff8000100b8f00: f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8
 ffff8000100b8f80: f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8
>ffff8000100b9000: f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8
                   ^
 ffff8000100b9080: f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8
 ffff8000100b9100: f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8
==================================================================

Suggested-by: Kuan-Ying Lee <kuan-ying.lee@mediatek.com>
Acked-by: Will Deacon <will@kernel.org>
Tested-by: Will Deacon <will@kernel.org>
Reviewed-by: Sami Tolvanen <samitolvanen@google.com>
Signed-off-by: Yee Lee <yee.lee@mediatek.com>
Fixes: a2abe7cbd8fe ("scs: switch to vmapped shadow stacks")
Link: https://lore.kernel.org/r/20210930081619.30091-1-yee.lee@mediatek.com
Signed-off-by: Will Deacon <will@kernel.org>
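[Editor's sketch] Based on the description above, the fix is essentially one line in scs_free(); the upstream function also tries a small per-CPU cache of freed stacks first, omitted here:

  void scs_free(void *s)
  {
          /*
           * vfree_atomic() queues the region on an llist, i.e. it writes
           * into memory that kasan_poison_vmalloc() has already poisoned.
           * Unpoison it first; it becomes poisoned again once vunmap()
           * eventually runs.
           */
          kasan_unpoison_vmalloc(s, SCS_SIZE);
          vfree_atomic(s);
  }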
2020-12-01 | scs: switch to vmapped shadow stacks | Sami Tolvanen | 1 | -11/+60

The kernel currently uses kmem_cache to allocate shadow call stacks, which means an overflow may not be immediately detected and can potentially result in another task's shadow stack being overwritten.

This change switches SCS to use virtually mapped shadow stacks for tasks, which increases the shadow stack size to a full page and provides more robust overflow detection, similarly to VMAP_STACK.

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20201130233442.2562064-2-samitolvanen@google.com
Signed-off-by: Will Deacon <will@kernel.org>
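[Editor's sketch] Allocation becomes a guarded vmalloc rather than a slab object. A condensed sketch; the upstream version additionally keeps a per-CPU cache of freed stacks, omitted here:

  /*
   * One full page per shadow stack, surrounded by vmalloc guard pages,
   * so an overflow faults immediately instead of silently corrupting a
   * neighbouring task's stack.
   */
  #define SCS_SIZE        PAGE_SIZE
  #define GFP_SCS         (GFP_KERNEL | __GFP_ZERO)

  static void *scs_alloc(int node)
  {
          void *s;

          s = __vmalloc_node_range(SCS_SIZE, 1, VMALLOC_START, VMALLOC_END,
                                   GFP_SCS, PAGE_KERNEL, 0, node,
                                   __builtin_return_address(0));
          if (!s)
                  return NULL;

          *__scs_magic(s) = SCS_END_MAGIC;
          return s;
  }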
2020-08-07 | mm: memcontrol: account kernel stack per node | Shakeel Butt | 1 | -1/+1

Currently the kernel stack is being accounted per-zone. There is no need to do that. In addition, due to being per-zone, memcg has to keep a separate MEMCG_KERNEL_STACK_KB. Make the stat per-node and deprecate MEMCG_KERNEL_STACK_KB, as memcg_stat_item is an extension of node_stat_item. Also localize the kernel stack stats updates to account_kernel_stack().

Signed-off-by: Shakeel Butt <shakeelb@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Roman Gushchin <guro@fb.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@kernel.org>
Link: http://lkml.kernel.org/r/20200630161539.1759185-1-shakeelb@google.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
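[Editor's sketch] For this file the change is roughly a one-line switch of the SCS accounting helper from zone-granular to node-granular stats (illustrative, not the verbatim diff):

  static void scs_account(void *s, int account)
  {
          /* before: mod_zone_page_state(page_zone(...), NR_KERNEL_SCS_KB, ...) */
          mod_node_page_state(page_pgdat(virt_to_page(s)), NR_KERNEL_SCS_KB,
                              account * (SCS_SIZE / SZ_1K));
  }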
2020-06-04 | scs: Report SCS usage in bytes rather than number of entries | Will Deacon | 1 | -1/+1

Fix the SCS debug usage check so that we report the number of bytes used, rather than the number of entries.

Fixes: 5bbaf9d1fcb9 ("scs: Add support for stack usage debugging")
Reported-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Will Deacon <will@kernel.org>
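[Editor's sketch] In other words, the scanning loop in scs_check_usage() should accumulate sizeof(*p) per occupied slot instead of counting slots. A simplified sketch of the corrected accounting, with the high-water-mark bookkeeping elided:

  static void scs_check_usage(struct task_struct *tsk)
  {
          unsigned long *p, used = 0;

          for (p = task_scs(tsk); p < __scs_magic(task_scs(tsk)); ++p) {
                  if (!READ_ONCE_NOCHECK(*p))
                          break;
                  used += sizeof(*p);     /* previously used++, an entry count */
          }

          pr_info("%s (%d): highest shadow stack usage: %lu bytes\n",
                  tsk->comm, task_pid_nr(tsk), used);
  }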
2020-05-18 | scs: Remove references to asm/scs.h from core code | Will Deacon | 1 | -1/+0

asm/scs.h is no longer needed by the core code, so remove a redundant header inclusion and update the stale Kconfig text.

Tested-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-18 | scs: Move scs_overflow_check() out of architecture code | Will Deacon | 1 | -1/+2

There is nothing architecture-specific about scs_overflow_check() as it's just a trivial wrapper around scs_corrupted(). For parity with task_stack_end_corrupted(), rename scs_corrupted() to task_scs_end_corrupted() and call it from schedule_debug() when CONFIG_SCHED_STACK_END_CHECK is enabled, which better reflects its purpose as a debug feature to catch inadvertent overflow of the SCS. Finally, remove the unused scs_overflow_check() function entirely.

This has absolutely no impact on architectures that do not support SCS (currently arm64 only).

Tested-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
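[Editor's sketch] The scheduler-side hook then sits next to the existing stack-end check; a sketch of the schedule_debug() addition:

  static inline void schedule_debug(struct task_struct *prev, bool preempt)
  {
  #ifdef CONFIG_SCHED_STACK_END_CHECK
          if (task_stack_end_corrupted(prev))
                  panic("corrupted stack end detected inside scheduler\n");

          /* Same idea for the shadow call stack's end marker. */
          if (task_scs_end_corrupted(prev))
                  panic("corrupted shadow stack detected inside scheduler\n");
  #endif
          /* remaining debug checks elided */
  }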
2020-05-18 | scs: Move accounting into alloc/free functions | Will Deacon | 1 | -24/+21

There's no need to perform the shadow stack page accounting independently of the lifetime of the underlying allocation, so call the accounting code from the {alloc,free}() functions and simplify the code in the process.

Tested-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
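[Editor's sketch] Tying the accounting to the allocation lifetime means each path has exactly one counterpart; schematically (not the verbatim patch):

  static void *scs_alloc(int node)
  {
          void *s = kmem_cache_alloc_node(scs_cache, GFP_SCS, node);

          if (!s)
                  return NULL;

          *__scs_magic(s) = SCS_END_MAGIC;
          scs_account(s, 1);              /* account pages on allocation */
          return s;
  }

  static void scs_free(void *s)
  {
          scs_account(s, -1);             /* and unaccount on free */
          kmem_cache_free(scs_cache, s);
  }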
2020-05-18 | arm64: scs: Store absolute SCS stack pointer value in thread_info | Will Deacon | 1 | -2/+1

Storing the SCS information in thread_info as a {base,offset} pair introduces an additional load instruction on the ret-to-user path, since the SCS stack pointer in x18 has to be converted back to an offset by subtracting the base.

Replace the offset with the absolute SCS stack pointer value instead and avoid the redundant load.

Tested-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
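[Editor's sketch] Schematically, the thread_info fields change from a {base, offset} pair to a {base, absolute pointer} pair, so entry code can save and restore x18 with a plain store/load (field names are illustrative):

  struct thread_info {
          /* before: x18 had to be rebuilt as scs_base + scs_offset */
          void            *scs_base;
          void            *scs_sp;        /* absolute value of x18 */
  };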
2020-05-15 | scs: Add support for stack usage debugging | Sami Tolvanen | 1 | -1/+29

Implement CONFIG_DEBUG_STACK_USAGE for shadow stacks. When enabled, this also prints out the highest shadow stack usage per process.

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: Will Deacon <will@kernel.org>
[will: rewrote most of scs_check_usage()]
Signed-off-by: Will Deacon <will@kernel.org>
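[Editor's sketch] The interesting part is publishing a global high-water mark without a lock: the check ratchets the maximum up with a cmpxchg loop. The helper name below is hypothetical; the pattern mirrors what happens inside scs_check_usage():

  /* Racelessly raise the global maximum and report new records. */
  static void scs_update_highest(struct task_struct *tsk, unsigned long used)
  {
          static unsigned long highest;
          unsigned long prev, curr = highest;

          while (used > curr) {
                  prev = cmpxchg_relaxed(&highest, curr, used);

                  if (prev == curr) {
                          pr_info("%s (%d): highest shadow stack usage: %lu\n",
                                  tsk->comm, task_pid_nr(tsk), used);
                          break;
                  }

                  curr = prev;    /* lost the race; retry against the new max */
          }
  }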
2020-05-15 | scs: Add page accounting for shadow call stack allocations | Sami Tolvanen | 1 | -0/+15

This change adds accounting for the memory allocated for shadow stacks.

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-15 | scs: Add support for Clang's Shadow Call Stack (SCS) | Sami Tolvanen | 1 | -0/+65

This change adds generic support for Clang's Shadow Call Stack, which uses a shadow stack to protect return addresses from being overwritten by an attacker. Details are available here:

  https://clang.llvm.org/docs/ShadowCallStack.html

Note that security guarantees in the kernel differ from the ones documented for user space. The kernel must store addresses of shadow stacks in memory, which means an attacker capable of reading and writing arbitrary memory may be able to locate them and hijack control flow by modifying the stacks.

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
[will: Numerous cosmetic changes]
Signed-off-by: Will Deacon <will@kernel.org>
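[Editor's sketch] A condensed sketch of the core lifecycle this introduced, in its original kmem_cache form (the magic value is illustrative, not upstream's, and init/fork wiring is compressed):

  #define SCS_SIZE        SZ_1K
  #define GFP_SCS         (GFP_KERNEL | __GFP_ZERO)
  #define SCS_END_MAGIC   0x5f53435355UL  /* illustrative end-marker value */
  #define __scs_magic(s)  ((unsigned long *)((s) + SCS_SIZE) - 1)

  static struct kmem_cache *scs_cache;

  static void *scs_alloc(int node)
  {
          void *s = kmem_cache_alloc_node(scs_cache, GFP_SCS, node);

          if (s) {
                  /* A magic end marker makes later corruption detectable. */
                  *__scs_magic(s) = SCS_END_MAGIC;
          }

          return s;
  }

  void __init scs_init(void)
  {
          scs_cache = kmem_cache_create("scs_cache", SCS_SIZE, SCS_SIZE,
                                        0, NULL);
  }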