path: root/arch/arm64/kernel/process.c
Age | Commit message | Author | Files | Lines
2017-06-22 | arm64: ptrace: Flush user-RW TLS reg to thread_struct before reading | Dave Martin | 1 | -2/+6
When reading current's user-writable TLS register (which occurs when dumping core for native tasks), it is possible that userspace has modified it since the time the task was last scheduled out. The new TLS register value is not guaranteed to have been written immediately back to thread_struct in this case. As a result, a coredump can capture stale data for this register. Reading the register for a stopped task via ptrace is unaffected. For native tasks, this patch explicitly flushes the TPIDR_EL0 register back to thread_struct before dumping when operating on current, thus ensuring that coredump contents are up to date. For compat tasks, the TLS register is not user-writable and so cannot be out of sync, so no flush is required in compat_tls_get(). Signed-off-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
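The fix is conceptually small: before copying out current's native TLS value, write the live TPIDR_EL0 back into thread_struct. A minimal sketch of the idea (helper name and regset plumbing simplified, not the verbatim patch):

  /* arch/arm64/kernel/process.c -- sketch */
  void tls_preserve_current_state(void)
  {
          /* flush the user-writable TLS register for 'current' */
          current->thread.tp_value = read_sysreg(tpidr_el0);
  }

  /* arch/arm64/kernel/ptrace.c -- native TLS regset read, sketch */
  static int tls_get(struct task_struct *target, const struct user_regset *regset,
                     unsigned int pos, unsigned int count,
                     void *kbuf, void __user *ubuf)
  {
          if (target == current)
                  tls_preserve_current_state();

          return user_regset_copyout(&pos, &count, &kbuf, &ubuf,
                                     &target->thread.tp_value, 0, -1);
  }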
2017-05-30 | arm64: Add dump_backtrace() in show_regs | Kefeng Wang | 1 | -0/+1
Generic code expects show_regs() to dump the stack, but arm64's show_regs() does not. This makes it hard to debug softlockups and other issues that result in show_regs() being called. This patch updates arm64's show_regs() to dump the stack, as common code expects. Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com> [will: folded in bug_handler fix from mrutland] Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-03-23 | arm64: drop unnecessary newlines in show_regs() | Kefeng Wang | 1 | -2/+0
There are two unnecessary newlines: one in show_regs(), the other in __show_regs(). Drop them. Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-03-02 | sched/headers: Prepare for new header dependencies before moving code to <linux/sched/task_stack.h> | Ingo Molnar | 1 | -0/+1
We are going to split <linux/sched/task_stack.h> out of <linux/sched.h>, which will have to be picked up from other headers and a couple of .c files. Create a trivial placeholder <linux/sched/task_stack.h> file that just maps to <linux/sched.h> to make this patch obviously correct and bisectable. Include the new header in the files that are going to need it. Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-03-02 | sched/headers: Prepare for new header dependencies before moving code to <linux/sched/task.h> | Ingo Molnar | 1 | -0/+1
We are going to split <linux/sched/task.h> out of <linux/sched.h>, which will have to be picked up from other headers and a couple of .c files. Create a trivial placeholder <linux/sched/task.h> file that just maps to <linux/sched.h> to make this patch obviously correct and bisectable. Include the new header in the files that are going to need it. Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-03-02 | sched/headers: Prepare for new header dependencies before moving code to <linux/sched/debug.h> | Ingo Molnar | 1 | -0/+1
We are going to split <linux/sched/debug.h> out of <linux/sched.h>, which will have to be picked up from other headers and a couple of .c files. Create a trivial placeholder <linux/sched/debug.h> file that just maps to <linux/sched.h> to make this patch obviously correct and bisectable. Include the new header in the files that are going to need it. Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-02-09 | arm64: use linux/sizes.h for constants | Miles Chen | 1 | -2/+2
Use linux/sizes.h to improve code readability. Signed-off-by: Miles Chen <miles.chen@mediatek.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-01-10 | arm64: Don't trace __switch_to if function graph tracer is enabled | Joel Fernandes | 1 | -1/+1
Function graph tracer shows negative time (wrap around) when tracing __switch_to if the nosleep-time trace option is enabled. Time compensation for nosleep-time is done by an ftrace probe on sched_switch. This doesn't work well for the following sequence of events (with letters representing timestamps):

  A - sched_switch probe called for task T switch out
  B - __switch_to calltime is recorded
  C - sched_switch probe called for task T switch in
  D - __switch_to rettime is recorded

If C - A > D - B, then we end up overcompensating for the time spent in __switch_to, giving rise to negative times in the trace output. On x86, __switch_to is not traced if the function graph tracer is enabled. Do the same for arm64 as well. Cc: Todd Kjos <tkjos@google.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Will Deacon <will.deacon@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Joel Fernandes <joelaf@google.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
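On arm64 the change amounts to annotating __switch_to so the graph tracer skips it, mirroring x86; a hedged sketch of the shape of the change, with the function body heavily abbreviated:

  #include <linux/ftrace.h>        /* __notrace_funcgraph */

  __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev,
                                                      struct task_struct *next)
  {
          struct task_struct *last;

          /* ... fpsimd, tls, hw-breakpoint and contextidr switching ... */
          last = cpu_switch_to(prev, next);

          return last;
  }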
2016-11-16 | arm64: Add hypervisor safe helper for checking constant capabilities | Suzuki K Poulose | 1 | -1/+1
The hypervisor may not have full access to the kernel data structures and hence cannot safely use the cpus_have_cap() helper for checking a system capability. Add a safe helper for hypervisors to check a constant system capability, which *doesn't* fall back to checking the bitmap maintained by the kernel. With this, make cpus_have_cap() check only the bitmap, and force constant cap checks to use the new API for quicker checks. Cc: Robert Ritcher <rritcher@cavium.com> Cc: Tirumalesh Chalamarla <tchalamarla@cavium.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Reviewed-by: Will Deacon <will.deacon@arm.com> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-11-11 | arm64: split thread_info from task stack | Mark Rutland | 1 | -0/+16
This patch moves arm64's struct thread_info from the task stack into task_struct. This protects thread_info from corruption in the case of stack overflows, and makes its address harder to determine if stack addresses are leaked, making a number of attacks more difficult. Precise detection and handling of overflow is left for subsequent patches. Largely, this involves changing code to store the task_struct in sp_el0, and acquire the thread_info from the task struct. Core code now implements current_thread_info(), and as noted in <linux/sched.h> this relies on offsetof(task_struct, thread_info) == 0, enforced by core code. This change means that the 'tsk' register used in entry.S now points to a task_struct, rather than a thread_info as it used to. To make this clear, the TI_* field offsets are renamed to TSK_TI_*, with asm-offsets appropriately updated to account for the structural change. Userspace clobbers sp_el0, and we can no longer restore this from the stack. Instead, the current task is cached in a per-cpu variable that we can safely access from early assembly as interrupts are disabled (and we are thus not preemptible). Both secondary entry and idle are updated to stash the sp and task pointer separately. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Laura Abbott <labbott@redhat.com> Cc: AKASHI Takahiro <takahiro.akashi@linaro.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: James Morse <james.morse@arm.com> Cc: Kees Cook <keescook@chromium.org> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
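With sp_el0 caching the task_struct pointer, current can be derived straight from that register. Roughly what the arm64 <asm/current.h> side of this looks like (sketch, not the exact upstream file):

  static __always_inline struct task_struct *get_current(void)
  {
          unsigned long sp_el0;

          asm ("mrs %0, sp_el0" : "=r" (sp_el0));

          return (struct task_struct *)sp_el0;
  }

  #define current get_current()

  /* core code then provides, relying on offsetof(task_struct, thread_info) == 0: */
  #define current_thread_info() ((struct thread_info *)current)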
2016-11-11 | arm64: prep stack walkers for THREAD_INFO_IN_TASK | Mark Rutland | 1 | -6/+14
When CONFIG_THREAD_INFO_IN_TASK is selected, task stacks may be freed before a task is destroyed. To account for this, the stacks are refcounted, and when manipulating the stack of another task, it is necessary to get/put the stack to ensure it isn't freed and/or re-used while we do so. This patch reworks the arm64 stack walking code to account for this. When CONFIG_THREAD_INFO_IN_TASK is not selected these perform no refcounting, and this should only be a structural change that does not affect behaviour. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Laura Abbott <labbott@redhat.com> Cc: AKASHI Takahiro <takahiro.akashi@linaro.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
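The resulting pattern is to pin the target task's stack for the duration of the walk. A sketch under that assumption (dump_other_task_stack() is an illustrative name, not a function from the patch):

  static void dump_other_task_stack(struct task_struct *tsk)
  {
          /*
           * With THREAD_INFO_IN_TASK the stack is refcounted and may already
           * be freed for a dead task; without it this is a no-op that always
           * succeeds.
           */
          if (!try_get_task_stack(tsk))
                  return;

          /* ... set up a struct stackframe from thread_saved_*() and unwind ... */

          put_task_stack(tsk);
  }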
2016-10-20 | arm64: fix show_regs fallout from KERN_CONT changes | Mark Rutland | 1 | -3/+12
Recently in commit 4bcc595ccd80decb ("printk: reinstate KERN_CONT for printing continuation lines"), the behaviour of printk changed w.r.t. KERN_CONT. Now, KERN_CONT is mandatory to continue existing lines. Without this, prefixes are inserted, making output illegible, e.g.

  [ 1007.069010] pc : [<ffff00000871898c>] lr : [<ffff000008718948>] pstate: 40000145
  [ 1007.076329] sp : ffff000008d53ec0
  [ 1007.079606] x29: ffff000008d53ec0
  [ 1007.082797] x28: 0000000080c50018
  [ 1007.086160]
  [ 1007.087630] x27: ffff000008e0c7f8
  [ 1007.090820] x26: ffff80097631ca00
  [ 1007.094183]
  [ 1007.095653] x25: 0000000000000001
  [ 1007.098843] x24: 000000ea68b61cac
  [ 1007.102206]
  ...

or when dumped with the userspace dmesg tool, which has slightly different implicit newline behaviour, e.g.

  [ 1007.069010] pc : [<ffff00000871898c>] lr : [<ffff000008718948>] pstate: 40000145
  [ 1007.076329] sp : ffff000008d53ec0
  [ 1007.079606] x29: ffff000008d53ec0
  [ 1007.082797] x28: 0000000080c50018
  [ 1007.086160]
  [ 1007.087630] x27: ffff000008e0c7f8
  [ 1007.090820] x26: ffff80097631ca00
  [ 1007.094183]
  [ 1007.095653] x25: 0000000000000001
  [ 1007.098843] x24: 000000ea68b61cac
  [ 1007.102206]

We can't simply always use KERN_CONT for lines which may or may not be continuations. That causes line prefixes (e.g. timestamps) to be suppressed, and the alignment of all but the first line will be broken. For even more fun, we can't simply insert some dummy empty-string printk calls, as GCC warns for an empty printk string, and even if we pass KERN_DEFAULT explicitly to silence the warning, the prefix gets swallowed unless there is an additional part to the string. Instead, we must manually iterate over pairs of registers, which gives us the legible output we want in either case, e.g.

  [  169.771790] pc : [<ffff00000871898c>] lr : [<ffff000008718948>] pstate: 40000145
  [  169.779109] sp : ffff000008d53ec0
  [  169.782386] x29: ffff000008d53ec0 x28: 0000000080c50018
  [  169.787650] x27: ffff000008e0c7f8 x26: ffff80097631de00
  [  169.792913] x25: 0000000000000001 x24: 00000027827b2cf4

Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
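The resulting loop emits one fully-formed, prefixed line per printk() call, with pr_cont() appending only the second register of each pair. A sketch of the core loop (show_gp_regs() is a hypothetical wrapper; the real code lives inline in __show_regs()):

  static void show_gp_regs(struct pt_regs *regs, int top_reg)
  {
          int i = top_reg;        /* e.g. 29 for native tasks */

          while (i >= 0) {
                  /* printk starts a fresh, prefixed line */
                  printk("x%-2d: %016llx ", i, regs->regs[i]);
                  i--;

                  if (i % 2 == 0) {
                          /* pr_cont appends the pair's second register */
                          pr_cont("x%-2d: %016llx ", i, regs->regs[i]);
                          i--;
                  }

                  pr_cont("\n");
          }
  }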
2016-10-20 | arm64: suspend: Reconfigure PSTATE after resume from idle | James Morse | 1 | -1/+2
The suspend/resume path in kernel/sleep.S, as used by cpu-idle, does not save/restore PSTATE. As a result, cpufeatures that were detected and have bits in PSTATE get lost when we resume from idle. UAO gets set appropriately on the next context switch. PAN will be re-enabled next time we return from user-space, but on a preemptible kernel we may run work accessing user space before this point. Add code to re-enable these two features in __cpu_suspend_exit(). We re-use uao_thread_switch(), passing current. Signed-off-by: James Morse <james.morse@arm.com> Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
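A hedged sketch of the kind of code added to __cpu_suspend_exit(); the ALTERNATIVE/SET_PSTATE_PAN spellings follow the arm64 alternatives framework of that era and should be read as illustrative:

  static void __cpu_suspend_exit(void)
  {
          /* ... existing resume-side restoration ... */

          /*
           * PSTATE is not preserved across suspend/resume, so re-apply
           * features that live in PSTATE bits on CPUs that have them.
           */
          asm(ALTERNATIVE("nop", SET_PSTATE_PAN(1), ARM64_HAS_PAN,
                          CONFIG_ARM64_PAN));
          uao_thread_switch(current);     /* restores UAO for 'current' */
  }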
2016-10-11 | arm64: use simpler API for random address requests | Jason Cooper | 1 | -6/+2
Currently, all callers to randomize_range() set the length to 0 and calculate end by adding a constant to the start address. We can simplify the API to remove a bunch of needless checks and variables. Use the new randomize_addr(start, range) call to set the requested address. Link: http://lkml.kernel.org/r/20160803233913.32511-5-jason@lakedaemon.net Signed-off-by: Jason Cooper <jason@lakedaemon.net> Acked-by: Will Deacon <will.deacon@arm.com> Acked-by: Kees Cook <keescook@chromium.org> Cc: "Russell King - ARM Linux" <linux@arm.linux.org.uk> Cc: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-09-09 | arm64: simplify sysreg manipulation | Mark Rutland | 1 | -8/+6
A while back we added {read,write}_sysreg accessors to handle accesses to system registers, without the usual boilerplate asm volatile, temporary variable, etc. This patch makes use of these across arm64 to make code shorter and clearer. For sequences with a trailing ISB, the existing isb() macro is also used so that asm blocks can be removed entirely. A few uses of inline assembly for msr/mrs are left as-is. Those manipulating sp_el0 for the current thread_info value have special clobber requirements. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
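The flavour of the cleanup, sketched on a TLS register save/restore (tls_sysreg_sketch() is a hypothetical helper; the real changes are spread across several process.c functions):

  static void tls_sysreg_sketch(struct task_struct *next)
  {
          /* before: asm("mrs %0, tpidr_el0" : "=r" (tmp)); */
          current->thread.tp_value = read_sysreg(tpidr_el0);

          /* before: asm("msr tpidr_el0, %0" : : "r" (next->thread.tp_value)); */
          write_sysreg(next->thread.tp_value, tpidr_el0);
  }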
2016-05-20 | exit_thread: remove empty bodies | Jiri Slaby | 1 | -7/+0
Define HAVE_EXIT_THREAD for archs which want to do something in exit_thread. For others, let's define exit_thread as an empty inline. This is a cleanup before we change the prototype of exit_thread to accept a task parameter. [akpm@linux-foundation.org: fix mips] Signed-off-by: Jiri Slaby <jslaby@suse.cz> Cc: "David S. Miller" <davem@davemloft.net> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: "James E.J. Bottomley" <jejb@parisc-linux.org> Cc: Aurelien Jacquiot <a-jacquiot@ti.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Chen Liqin <liqin.linux@gmail.com> Cc: Chris Metcalf <cmetcalf@mellanox.com> Cc: Chris Zankel <chris@zankel.net> Cc: David Howells <dhowells@redhat.com> Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Guan Xuetao <gxt@mprc.pku.edu.cn> Cc: Haavard Skinnemoen <hskinnemoen@gmail.com> Cc: Hans-Christian Egtvedt <egtvedt@samfundet.no> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Helge Deller <deller@gmx.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru> Cc: James Hogan <james.hogan@imgtec.com> Cc: Jeff Dike <jdike@addtoit.com> Cc: Jesper Nilsson <jesper.nilsson@axis.com> Cc: Jiri Slaby <jslaby@suse.cz> Cc: Jonas Bonn <jonas@southpole.se> Cc: Koichi Yasutake <yasutake.koichi@jp.panasonic.com> Cc: Lennox Wu <lennox.wu@gmail.com> Cc: Ley Foon Tan <lftan@altera.com> Cc: Mark Salter <msalter@redhat.com> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Matt Turner <mattst88@gmail.com> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Simek <monstr@monstr.eu> Cc: Mikael Starvik <starvik@axis.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Rich Felker <dalias@libc.org> Cc: Richard Henderson <rth@twiddle.net> Cc: Richard Kuo <rkuo@codeaurora.org> Cc: Richard Weinberger <richard@nod.at> Cc: Russell King <linux@arm.linux.org.uk> Cc: Steven Miao <realmz6@gmail.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tony Luck <tony.luck@intel.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-05-12 | arm64: do not enforce strict 16 byte alignment to stack pointer | Colin Ian King | 1 | -3/+0
copy_thread should not be enforcing 16 byte alignment and returning -EINVAL. Other architectures trap misaligned stack access with SIGBUS, so arm64 should follow this convention; remove the strict enforcement check. For example, currently clone(2) fails with -EINVAL when passing a misaligned stack, and this gives little clue to what is wrong. Instead, it is arguable that a SIGBUS on the first access to a misaligned stack allows one to figure out that it is a misaligned stack issue rather than trying to figure out why an unconventional (and undocumented) -EINVAL is being returned. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-05-11 | arm64: kernel: Fix incorrect brk randomization | Kees Cook | 1 | -7/+8
This fixes two issues with the arm64 brk randomization. First, STACK_RND_MASK was being used incorrectly. The original code was:

  unsigned long range_end = base + (STACK_RND_MASK << PAGE_SHIFT) + 1;

STACK_RND_MASK is 0x7ff (32-bit) or 0x3ffff (64-bit), with 4K pages where PAGE_SHIFT is 12:

  #define STACK_RND_MASK (test_thread_flag(TIF_32BIT) ? \
                                  0x7ff >> (PAGE_SHIFT - 12) : \
                                  0x3ffff >> (PAGE_SHIFT - 12))

This means the resulting offset from base would be 0x7ff0001 or 0x3ffff0001, which is wrong since it creates an unaligned end address. It was likely intended to be:

  unsigned long range_end = base + ((STACK_RND_MASK + 1) << PAGE_SHIFT)

which would result in offsets of 0x800000 (32-bit) and 0x40000000 (64-bit). However, even this corrected 32-bit compat offset (0x00800000) is much smaller than native ARM's brk randomization value (0x02000000):

  unsigned long arch_randomize_brk(struct mm_struct *mm)
  {
          unsigned long range_end = mm->brk + 0x02000000;
          return randomize_range(mm->brk, range_end, 0) ? : mm->brk;
  }

So, instead of basing arm64's brk randomization on mistaken STACK_RND_MASK calculations, just use specific corrected values for compat (0x2000000) and native arm64 (0x40000000). Reviewed-by: Jon Medhurst <tixy@linaro.org> Signed-off-by: Kees Cook <keescook@chromium.org> [will: use is_compat_task() as suggested by tixy] Signed-off-by: Will Deacon <will.deacon@arm.com>
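With the corrected constants, arch_randomize_brk() reduces to something like the following sketch of the post-patch shape, still using the randomize_range() API that process.c relied on at the time:

  unsigned long arch_randomize_brk(struct mm_struct *mm)
  {
          unsigned long range_end = mm->brk;

          if (is_compat_task())
                  range_end += 0x02000000;        /* match native ARM's 32MB window */
          else
                  range_end += 0x40000000;        /* 1GB window for native arm64 */

          return randomize_range(mm->brk, range_end, 0) ? : mm->brk;
  }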
2016-02-18 | arm64: Remove the get_thread_info() function | Catalin Marinas | 1 | -9/+6
This function was introduced by previous commits implementing UAO. However, it can be replaced with task_thread_info() in uao_thread_switch() or get_fs() in do_page_fault() (the latter being called only on the current context, so no need for using the saved pt_regs). Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-02-18 | arm64: kernel: Add support for User Access Override | James Morse | 1 | -0/+19
'User Access Override' is a new ARMv8.2 feature which allows the unprivileged load and store instructions to be overridden to behave in the normal way. This patch converts {get,put}_user() and friends to use ldtr*/sttr* instructions - so that they can only access EL0 memory, then enables UAO when fs==KERNEL_DS so that these functions can access kernel memory. This allows user space's read/write permissions to be checked against the page tables, instead of testing addr<USER_DS, then using the kernel's read/write permissions. Signed-off-by: James Morse <james.morse@arm.com> [catalin.marinas@arm.com: move uao_thread_switch() above dsb()] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-12-21 | arm64: ftrace: fix a stack tracer's output under function graph tracer | AKASHI Takahiro | 1 | -0/+3
Function graph tracer modifies a return address (LR) in a stack frame to hook a function return. This will result in many useless entries (return_to_handler) showing up in:

  a) a stack tracer's output
  b) perf call graph (with perf record -g)
  c) dump_backtrace (at panic et al.)

For example, in case of a),

  $ echo function_graph > /sys/kernel/debug/tracing/current_tracer
  $ echo 1 > /proc/sys/kernel/stack_trace_enabled
  $ cat /sys/kernel/debug/tracing/stack_trace
        Depth    Size   Location    (54 entries)
        -----    ----   --------
  0)     4504      16   gic_raise_softirq+0x28/0x150
  1)     4488      80   smp_cross_call+0x38/0xb8
  2)     4408      48   return_to_handler+0x0/0x40
  3)     4360      32   return_to_handler+0x0/0x40
  ...

In case of b),

  $ echo function_graph > /sys/kernel/debug/tracing/current_tracer
  $ perf record -e mem:XXX:x -ag -- sleep 10
  $ perf report
  ...
  |          |          |--0.22%-- 0x550f8
  |          |          |          0x10888
  |          |          |          el0_svc_naked
  |          |          |          sys_openat
  |          |          |          return_to_handler
  |          |          |          return_to_handler
  ...

In case of c),

  $ echo function_graph > /sys/kernel/debug/tracing/current_tracer
  $ echo c > /proc/sysrq-trigger
  ...
  Call trace:
  [<ffffffc00044d3ac>] sysrq_handle_crash+0x24/0x30
  [<ffffffc000092250>] return_to_handler+0x0/0x40
  [<ffffffc000092250>] return_to_handler+0x0/0x40
  ...

This patch replaces such entries with real addresses preserved in current->ret_stack[] at unwind_frame(). This way, we can cover all the cases. Reviewed-by: Jungseok Lee <jungseoklee85@gmail.com> Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org> [will: fixed minor context changes conflicting with irq stack bits] Signed-off-by: Will Deacon <will.deacon@arm.com>
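The substitution itself is small; a hedged sketch of the check on the unwind path (the 'graph' index into ret_stack[] is bookkeeping introduced by this series, shown here as a stackframe field, and the wrapper name is illustrative):

  static void unwind_recover_return_address(struct task_struct *tsk,
                                            struct stackframe *frame)
  {
  #ifdef CONFIG_FUNCTION_GRAPH_TRACER
          if (tsk->ret_stack &&
              frame->pc == (unsigned long)return_to_handler)
                  /* replace the tracer's trampoline with the saved LR */
                  frame->pc = tsk->ret_stack[frame->graph--].ret;
  #endif
  }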
2015-12-21 | arm64: pass a task parameter to unwind_frame() | AKASHI Takahiro | 1 | -1/+1
Function graph tracer modifies a return address (LR) in a stack frame to hook a function's return. This will result in many useless entries (return_to_handler) showing up in a call stack list. We will fix this problem in a later patch ("arm64: ftrace: fix a stack tracer's output under function graph tracer"). But since the real return addresses are saved in the ret_stack[] array in struct task_struct, the unwind functions need to know, in addition to a stack pointer address, which task is being traced in order to find the real return addresses. This patch extends the unwind functions' interfaces by adding an extra argument: a pointer to task_struct. Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-10-19 | arm64: add cpu_idle tracepoints to arch_cpu_idle | Jisheng Zhang | 1 | -0/+3
Currently, if cpuidle is disabled or not supported, powertop reports zero wakeups and zero events. This is because the cpu_idle tracepoints are missing. This patch makes the cpu_idle tracepoints always available, even if cpuidle is disabled or not supported. Signed-off-by: Jisheng Zhang <jszhang@marvell.com> Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
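A sketch of arch_cpu_idle() with the tracepoints added, mirroring other architectures; trace_cpu_idle_rcuidle() and PWR_EVENT_EXIT come from the generic power trace events:

  void arch_cpu_idle(void)
  {
          /*
           * This should do all the clock switching and wait for
           * interrupt tricks.
           */
          trace_cpu_idle_rcuidle(1, smp_processor_id());
          cpu_do_idle();
          local_irq_enable();
          trace_cpu_idle_rcuidle(PWR_EVENT_EXIT, smp_processor_id());
  }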
2015-06-11 | arm64: kernel thread don't need to save fpsimd context. | Janet Liu | 1 | -1/+2
A kernel thread's default fpsimd state is zero. When forking a thread, if the parent is a kernel thread, we save the hardware FPSIMD context into the parent's fpsimd state even though that hardware context actually belongs to a user process. Because kernel threads don't use fpsimd this does not introduce a correctness issue, but it does add a little unnecessary cost. Signed-off-by: Janet Liu <janet.liu@spreadtrum.com> Signed-off-by: Chunyan Zhang <chunyan.zhang@spreadtrum.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-06-01 | arm64: context-switch user tls register tpidr_el0 for compat tasks | Will Deacon | 1 | -27/+20
Since commit a4780adeefd0 ("ARM: 7735/2: Preserve the user r/w register TPIDRURW on context switch and fork"), arch/arm/ has context switched the user-writable TLS register, so do the same for compat tasks running under the arm64 kernel. Reported-by: André Hentschel <nerv@dawncrow.de> Tested-by: André Hentschel <nerv@dawncrow.de> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
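After the change the context-switch path saves/restores the user-writable register for both native and compat tasks, and keeps the compat read-only value in TPIDRRO_EL0. A sketch under the assumption that task_user_tls() picks the right thread_struct slot (tp2_value for compat, tp_value for native):

  static void tls_thread_switch(struct task_struct *next)
  {
          unsigned long tpidr, tpidrro;

          /* save the outgoing task's user-writable TLS register */
          asm("mrs %0, tpidr_el0" : "=r" (tpidr));
          *task_user_tls(current) = tpidr;

          /* compat tasks additionally expose a read-only TLS value */
          tpidr = *task_user_tls(next);
          tpidrro = is_compat_thread(task_thread_info(next)) ?
                    next->thread.tp_value : 0;

          asm("msr tpidr_el0, %0\n"
              "msr tpidrro_el0, %1"
              : : "r" (tpidr), "r" (tpidrro));
  }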
2015-05-19 | arm64: kill flush_cache_all() | Mark Rutland | 1 | -11/+1
The documented semantics of flush_cache_all are not possible to provide for arm64 (short of flushing the entire physical address space by VA), and there are currently no users; KVM uses VA maintenance exclusively, cpu_reset is never called, and the only two users outside of arch code cannot be built for arm64. While cpu_soft_reset and related functions (which call flush_cache_all) were thought to be useful for kexec, their current implementations only serve to mask bugs. For correctness kexec will need to perform maintenance by VA anyway to account for system caches, line migration, and other subtleties of the cache architecture. As the extent of this cache maintenance will be kexec-specific, it should probably live in the kexec code. This patch removes flush_cache_all, and related unused components, preventing further abuse. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: AKASHI Takahiro <takahiro.akashi@linaro.org> Cc: Geoff Levand <geoff@infradead.org> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-03-14 | efi/arm64: use UEFI for system reset and poweroff | Ard Biesheuvel | 1 | -0/+8
If UEFI Runtime Services are available, they are preferred over direct PSCI calls or other methods to reset the system. For the reset case, we need to hook into machine_restart(), as the arm_pm_restart function pointer may be overwritten by modules. Tested-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Matt Fleming <matt.fleming@intel.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-10-24 | arm64: ASLR: Don't randomise text when randomise_va_space == 0 | Arun Chandran | 1 | -5/+0
When the user asks to turn off ASLR by writing "0" to /proc/sys/kernel/randomize_va_space, there should not be any randomization of the mmap base, stack, VDSO, libs, text or heap. Currently arm64 violates this behaviour by randomising text. Fix this by defining a constant ELF_ET_DYN_BASE. The randomisation of mm->mmap_base is done by setup_new_exec -> arch_pick_mmap_layout -> mmap_base -> mmap_rnd. Signed-off-by: Arun Chandran <achandran@mvista.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-10-10 | Merge tag 'restart-handler-for-v3.18' of git://git.kernel.org/pub/scm/linux/kernel/git/groeck/linux-staging | Linus Torvalds | 1 | -1/+2
Pull restart handler infrastructure from Guenter Roeck: "This series was supposed to be pulled through various trees using it, and I did not plan to send a separate pull request. As it turns out, the pinctrl tree did not merge with it, is now upstream, and uses it, meaning there are now build failures. Please pull this series directly to fix those build failures"

* tag 'restart-handler-for-v3.18' of git://git.kernel.org/pub/scm/linux/kernel/git/groeck/linux-staging:
  arm/arm64: unexport restart handlers
  watchdog: sunxi: register restart handler with kernel restart handler
  watchdog: alim7101: register restart handler with kernel restart handler
  watchdog: moxart: register restart handler with kernel restart handler
  arm: support restart through restart handler call chain
  arm64: support restart through restart handler call chain
  power/restart: call machine_restart instead of arm_pm_restart
  kernel: add support for kernel restart handler call chain
2014-10-08 | Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux | Linus Torvalds | 1 | -28/+2
Pull arm64 updates from Catalin Marinas:

 - eBPF JIT compiler for arm64
 - CPU suspend backend for PSCI (firmware interface) with standard idle states defined in DT (generic idle driver to be merged via a different tree)
 - Support for CONFIG_DEBUG_SET_MODULE_RONX
 - Support for unmapped cpu-release-addr (outside kernel linear mapping)
 - set_arch_dma_coherent_ops() implemented and bus notifiers removed
 - EFI_STUB improvements when base of DRAM is occupied
 - Typos in KGDB macros
 - Clean-up to (partially) allow kernel building with LLVM
 - Other clean-ups (extern keyword, phys_addr_t usage)

* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (51 commits)
  arm64: Remove unneeded extern keyword
  ARM64: make of_device_ids const
  arm64: Use phys_addr_t type for physical address
  aarch64: filter $x from kallsyms
  arm64: Use DMA_ERROR_CODE to denote failed allocation
  arm64: Fix typos in KGDB macros
  arm64: insn: Add return statements after BUG_ON()
  arm64: debug: don't re-enable debug exceptions on return from el1_dbg
  Revert "arm64: dmi: Add SMBIOS/DMI support"
  arm64: Implement set_arch_dma_coherent_ops() to replace bus notifiers
  of: amba: use of_dma_configure for AMBA devices
  arm64: dmi: Add SMBIOS/DMI support
  arm64: Correct ftrace calls to aarch64_insn_gen_branch_imm()
  arm64:mm: initialize max_mapnr using function set_max_mapnr
  setup: Move unmask of async interrupts after possible earlycon setup
  arm64: LLVMLinux: Fix inline arm64 assembly for use with clang
  arm64: pageattr: Correctly adjust unaligned start addresses
  net: bpf: arm64: fix module memory leak when JIT image build fails
  arm64: add PSCI CPU_SUSPEND based cpu_suspend support
  arm64: kernel: introduce cpu_init_idle CPU operation
  ...
2014-09-26 | arm/arm64: unexport restart handlers | Guenter Roeck | 1 | -1/+0
Implementing a restart handler in a module doesn't make sense, as there would be no guarantee that the module is loaded when a restart is needed. Unexport arm_pm_restart to ensure that no one gets the idea to do it anyway. Signed-off-by: Guenter Roeck <linux@roeck-us.net> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Heiko Stuebner <heiko@sntech.de> Cc: Arnd Bergmann <arnd@arndb.de> Cc: David Woodhouse <dwmw2@infradead.org> Cc: Dmitry Eremin-Solenikov <dbaryshkov@gmail.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Jonas Jensen <jonas.jensen@gmail.com> Cc: Maxime Ripard <maxime.ripard@free-electrons.com> Cc: Randy Dunlap <rdunlap@infradead.org> Cc: Russell King <linux@arm.linux.org.uk> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Tomasz Figa <t.figa@samsung.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Wim Van Sebroeck <wim@iguana.be> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2014-09-26 | arm64: support restart through restart handler call chain | Guenter Roeck | 1 | -0/+2
The kernel core now supports a restart handler call chain to restart the system. Call it if arm_pm_restart is not set. Signed-off-by: Guenter Roeck <linux@roeck-us.net> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Heiko Stuebner <heiko@sntech.de> Cc: Arnd Bergmann <arnd@arndb.de> Cc: David Woodhouse <dwmw2@infradead.org> Cc: Dmitry Eremin-Solenikov <dbaryshkov@gmail.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Jonas Jensen <jonas.jensen@gmail.com> Cc: Maxime Ripard <maxime.ripard@free-electrons.com> Cc: Randy Dunlap <rdunlap@infradead.org> Cc: Russell King <linux@arm.linux.org.uk> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Tomasz Figa <t.figa@samsung.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Wim Van Sebroeck <wim@iguana.be> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2014-09-11 | arm64: flush TLS registers during exec | Will Deacon | 1 | -0/+18
Nathan reports that we leak TLS information from the parent context during an exec, as we don't clear the TLS registers when flushing the thread state. This patch updates the flushing code so that we:

  (1) Unconditionally zero the tpidr_el0 register (since this is fully context switched for native tasks and zeroed for compat tasks)

  (2) Zero the tp_value state in thread_info before clearing the tpidrro_el0 register for compat tasks (since this is only writable by the set_tls compat syscall and therefore not fully switched).

A missing compiler barrier is also added to the compat set_tls syscall. Cc: <stable@vger.kernel.org> Acked-by: Nathan Lynch <Nathan_Lynch@mentor.com> Reported-by: Nathan Lynch <Nathan_Lynch@mentor.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
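A sketch close to the shape of the fix; the barrier() keeps the shadow tp_value update ordered against the register write:

  static void tls_thread_flush(void)
  {
          asm ("msr tpidr_el0, xzr");

          if (is_compat_task()) {
                  current->thread.tp_value = 0;

                  /*
                   * Order the shadow-state update against the register write
                   * so a preempting context switch cannot restore a stale
                   * value.
                   */
                  barrier();
                  asm ("msr tpidrro_el0, xzr");
          }
  }

  void flush_thread(void)
  {
          fpsimd_flush_thread();
          tls_thread_flush();
          flush_ptrace_hw_breakpoint(current);
  }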
2014-09-08 | arm64: convert part of soft_restart() to assembly | Arun Chandran | 1 | -28/+2
The current soft_restart() and setup_restart implementations incorrectly assume that the compiler will not spill/fill values to/from the stack. However this assumption seems to be wrong, revealed by the disassembly of the currently existing code (v3.16) built with Linaro GCC 4.9-2014.05.

  ffffffc000085224 <soft_restart>:
  ffffffc000085224:  a9be7bfd  stp   x29, x30, [sp,#-32]!
  ffffffc000085228:  910003fd  mov   x29, sp
  ffffffc00008522c:  f9000fa0  str   x0, [x29,#24]
  ffffffc000085230:  94003d21  bl    ffffffc0000946b4 <setup_mm_for_reboot>
  ffffffc000085234:  94003b33  bl    ffffffc000093f00 <flush_cache_all>
  ffffffc000085238:  94003dfa  bl    ffffffc000094a20 <cpu_cache_off>
  ffffffc00008523c:  94003b31  bl    ffffffc000093f00 <flush_cache_all>
  ffffffc000085240:  b0003321  adrp  x1, ffffffc0006ea000 <reset_devices>
  ffffffc000085244:  f9400fa0  ldr   x0, [x29,#24]    ----> spilled addr
  ffffffc000085248:  f942fc22  ldr   x2, [x1,#1528]   ----> global memstart_addr
  ffffffc00008524c:  f0000061  adrp  x1, ffffffc000094000 <__inval_cache_range+0x40>
  ffffffc000085250:  91290021  add   x1, x1, #0xa40
  ffffffc000085254:  8b010041  add   x1, x2, x1
  ffffffc000085258:  d2c00802  mov   x2, #0x4000000000  // #274877906944
  ffffffc00008525c:  8b020021  add   x1, x1, x2
  ffffffc000085260:  d63f0020  blr   x1
  ...

Here the compiler generates memory accesses after the cache is disabled, loading stale values for the spilled value and global variable. As we cannot control when the compiler will access memory we must rewrite the functions in assembly to stash values we need in registers prior to disabling the cache, avoiding the use of memory. Reviewed-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Arun Chandran <achandran@mvista.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-07-09 | arm64: Add CONFIG_CC_STACKPROTECTOR | Laura Abbott | 1 | -0/+6
arm64 currently lacks support for -fstack-protector. Add similar functionality to arm to detect stack corruption. Acked-by: Will Deacon <will.deacon@arm.com> Acked-by: Kees Cook <keescook@chromium.org> Signed-off-by: Laura Abbott <lauraa@codeaurora.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-05-16 | arm64: Fix deadlock scenario with smp_send_stop() | Arun KS | 1 | -2/+3
If one process calls sys_reboot and that process then stops other CPUs while those CPUs are within a spin_lock() region we can potentially encounter a deadlock scenario like below.

  CPU 0                        CPU 1
  -----                        -----
                               spin_lock(my_lock)
  smp_send_stop()
   <send IPI>                  handle_IPI()
                                disable_preemption/irqs
                                 while(1);
  <PREEMPT>
  spin_lock(my_lock) <--- Waits forever

We shouldn't attempt to run any other tasks after we send a stop IPI to a CPU so disable preemption so that this task runs to completion. We use local_irq_disable() here for cross-arch consistency with x86. Based-on-work-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Arun KS <getarunks@gmail.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-05-16 | arm64: Fix machine_shutdown() definition | Arun KS | 1 | -6/+35
This patch ports most of commit 19ab428f4b79 "ARM: 7759/1: decouple CPU offlining from reboot/shutdown" by Stephen Warren from arch/arm to arch/arm64. machine_shutdown() is a hook for kexec. Add a comment saying so, since it isn't obvious from the function name. Halt, power-off, and restart have different requirements re: stopping secondary CPUs than kexec has. The former simply require the secondary CPUs to be quiesced somehow, whereas kexec requires them to be completely non-operational, so that no matter where the kexec target images are written in RAM, they won't influence operation of the secondary CPUs, which could happen if the CPUs were still executing some kind of spin loop. To this end, modify machine_halt, power_off, and restart to call smp_send_stop() directly, rather than calling machine_shutdown(). In machine_shutdown(), replace the call to smp_send_stop() with a call to disable_nonboot_cpus(). This completely disables all but one CPU, thus satisfying the kexec requirements a couple of paragraphs above. Signed-off-by: Arun KS <getarunks@gmail.com> Acked-by: Stephen Warren <swarren@nvidia.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-05-16 | Merge tag 'for-3.16' of git://git.linaro.org/people/ard.biesheuvel/linux-arm into upstream | Catalin Marinas | 1 | -1/+1
FPSIMD register bank context switching and crypto algorithms optimisations for arm64 from Ard Biesheuvel.

* tag 'for-3.16' of git://git.linaro.org/people/ard.biesheuvel/linux-arm:
  arm64/crypto: AES-ECB/CBC/CTR/XTS using ARMv8 NEON and Crypto Extensions
  arm64: pull in <asm/simd.h> from asm-generic
  arm64/crypto: AES in CCM mode using ARMv8 Crypto Extensions
  arm64/crypto: AES using ARMv8 Crypto Extensions
  arm64/crypto: GHASH secure hash using ARMv8 Crypto Extensions
  arm64/crypto: SHA-224/SHA-256 using ARMv8 Crypto Extensions
  arm64/crypto: SHA-1 using ARMv8 Crypto Extensions
  arm64: add support for kernel mode NEON in interrupt context
  arm64: defer reloading a task's FPSIMD state to userland resume
  arm64: add abstractions for FPSIMD state manipulation
  asm-generic: allow generic unaligned access if the arch supports it

Conflicts:
  arch/arm64/include/asm/thread_info.h
2014-05-12 | arm64: is_compat_task is defined both in asm/compat.h and linux/compat.h | AKASHI Takahiro | 1 | -0/+1
Some kernel files may include both linux/compat.h and asm/compat.h, directly or indirectly. Since both header files contain is_compat_task() under !CONFIG_COMPAT, compiling them with !CONFIG_COMPAT will eventually fail. Such files include kernel/auditsc.c, kernel/seccomp.c and init/do_mountfs.c (do_mountfs.c may read asm/compat.h via asm/ftrace.h once ftrace is implemented). So this patch proactively 1) removes is_compat_task() under !CONFIG_COMPAT from asm/compat.h, and 2) replaces asm/compat.h with linux/compat.h in kernel/*.c; asm/compat.h is still necessary in ptrace.c and process.c because they use is_compat_thread(). Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-05-09 | arm64: barriers: make use of barrier options with explicit barriers | Will Deacon | 1 | -1/+1
When calling our low-level barrier macros directly, we can often suffice with more relaxed behaviour than the default "all accesses, full system" option. This patch updates the users of dsb() to specify the option which they actually require. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-05-08 | arm64: add abstractions for FPSIMD state manipulation | Ard Biesheuvel | 1 | -1/+1
There are two tacit assumptions in the FPSIMD handling code that will no longer hold after the next patch that optimizes away some FPSIMD state restores:

  . the FPSIMD registers of this CPU contain the userland FPSIMD state of task 'current';
  . when switching to a task, its FPSIMD state will always be restored from memory.

This patch adds the following functions to abstract away from straight FPSIMD register file saves and restores:

  - fpsimd_preserve_current_state -> ensure current's FPSIMD state is saved
  - fpsimd_update_current_state -> replace current's FPSIMD state

Where necessary, the signal handling and fork code are updated to use the above wrappers instead of poking into the FPSIMD registers directly. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
2014-03-31 | Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux | Linus Torvalds | 1 | -1/+10
Pull ARM64 updates from Catalin Marinas:

 - KGDB support for arm64
 - PCI I/O space extended to 16M (in preparation of PCIe support patches)
 - Dropping ZONE_DMA32 in favour of ZONE_DMA (we only need one for the time being), together with swiotlb late initialisation to correctly setup the bounce buffer
 - DMA API cache maintenance support (not all ARMv8 platforms have hardware cache coherency)
 - Crypto extensions advertising via ELF_HWCAP2 for compat user space
 - Perf support for dwarf unwinding in compat mode
 - asm/tlb.h converted to the generic mmu_gather code
 - asm-generic rwsem implementation
 - Code clean-up

* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (42 commits)
  arm64: Remove pgprot_dmacoherent()
  arm64: Support DMA_ATTR_WRITE_COMBINE
  arm64: Implement custom mmap functions for dma mapping
  arm64: Fix __range_ok macro
  arm64: Fix duplicated Kconfig entries
  arm64: mm: Route pmd thp functions through pte equivalents
  arm64: rwsem: use asm-generic rwsem implementation
  asm-generic: rwsem: de-PPCify rwsem.h
  arm64: enable generic CPU feature modalias matching for this architecture
  arm64: smp: make local symbol static
  arm64: debug: make local symbols static
  ARM64: perf: support dwarf unwinding in compat mode
  ARM64: perf: add support for frame pointer unwinding in compat mode
  ARM64: perf: add support for perf registers API
  arm64: Add boot time configuration of Intermediate Physical Address size
  arm64: Do not synchronise I and D caches for special ptes
  arm64: Make DMA coherent and strongly ordered mappings not executable
  arm64: barriers: add dmb barrier
  arm64: topology: Implement basic CPU topology support
  arm64: advertise ARMv8 extensions to 32-bit compat ELF binaries
  ...
2014-02-26 | arm64: Fix the soft_restart routine | Geoff Levand | 1 | -1/+10
Change the soft_restart() routine to call cpu_reset() at its identity mapped physical address. The cpu_reset() routine must be called at its identity mapped physical address so that when the MMU is turned off the instruction pointer will be at the correct location in physical memory. Signed-off-by: Geoff Levand <geoff@infradead.org> for Huawei, Linaro Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-02-22 | cpuidle/arm64: Remove redundant cpuidle_idle_call() | Nicolas Pitre | 1 | -5/+2
The core idle loop now takes care of it. Signed-off-by: Nicolas Pitre <nico@linaro.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Cc: Paul Burton <paul.burton@imgtec.com> Cc: Deepthi Dharwar <deepthi@linux.vnet.ibm.com> Cc: Preeti U Murthy <preeti@linux.vnet.ibm.com> Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net> Cc: Daniel Lezcano <daniel.lezcano@linaro.org> Cc: linux-pm@vger.kernel.org Cc: linaro-kernel@lists.linaro.org Link: http://lkml.kernel.org/n/tip-wk9vpc8dsn46s12pl602ljpo@git.kernel.org Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-01-30 | arm64: FIQs are unused | Nicolas Pitre | 1 | -6/+0
So any FIQ handling is superfluous at the moment. The functions to disable/enable FIQs are kept around in case someone needs them in the future, but existing calling sites, including arch_cpu_idle_prepare(), may go for now. Signed-off-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-12-19 | Merge tag 'arm64-suspend' of git://linux-arm.org/linux-2.6-lp into upstream | Catalin Marinas | 1 | -2/+5
* tag 'arm64-suspend' of git://linux-arm.org/linux-2.6-lp:
  arm64: add CPU power management menu/entries
  arm64: kernel: add PM build infrastructure
  arm64: kernel: add CPU idle call
  arm64: enable generic clockevent broadcast
  arm64: kernel: implement HW breakpoints CPU PM notifier
  arm64: kernel: refactor code to install/uninstall breakpoints
  arm: kvm: implement CPU PM notifier
  arm64: kernel: implement fpsimd CPU PM notifier
  arm64: kernel: cpu_{suspend/resume} implementation
  arm64: kernel: suspend/resume registers save/restore
  arm64: kernel: build MPIDR_EL1 hash function data structure
  arm64: kernel: add MPIDR_EL1 accessors macros

Conflicts:
  arch/arm64/Kconfig
2013-12-19 | ARM64: check stack pointer in get_wchan | Konstantin Khlebnikov | 1 | -2/+5
get_wchan() is lockless. Task may wakeup at any time and change its own stack, thus each next stack frame may be overwritten and filled with random stuff. Signed-off-by: Konstantin Khlebnikov <k.khlebnikov@samsung.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
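The defensive check is a bounds test on each frame before following it; a sketch close to the post-patch get_wchan():

  unsigned long get_wchan(struct task_struct *p)
  {
          struct stackframe frame;
          unsigned long stack_page;
          int count = 0;

          if (!p || p == current || p->state == TASK_RUNNING)
                  return 0;

          frame.fp = thread_saved_fp(p);
          frame.sp = thread_saved_sp(p);
          frame.pc = thread_saved_pc(p);
          stack_page = (unsigned long)task_stack_page(p);

          do {
                  /* refuse to follow a frame that points off p's stack */
                  if (frame.sp < stack_page ||
                      frame.sp >= stack_page + THREAD_SIZE ||
                      unwind_frame(&frame))
                          return 0;
                  if (!in_sched_functions(frame.pc))
                          return frame.pc;
          } while (count++ < 16);

          return 0;
  }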
2013-12-16 | arm64: kernel: add CPU idle call | Lorenzo Pieralisi | 1 | -2/+5
When CPU idle is enabled, the architectural idle call should go through the idle subsystem to allow CPUs to enter idle states defined by the platform CPU idle back-end operations. This patch, mirroring other archs behaviour, adds the CPU idle call to the architectural arch_cpu_idle implementation for arm64. Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org> Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
2013-10-25 | arm64: add CPU_HOTPLUG infrastructure | Mark Rutland | 1 | -0/+7
This patch adds the basic infrastructure necessary to support CPU_HOTPLUG on arm64, based on the arm implementation. Actual hotplug support will depend on an implementation's cpu_operations (e.g. PSCI). Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-09-20 | arm64: Correctly report LR and SP for compat tasks | Catalin Marinas | 1 | -5/+16
When a task crashes and we print debugging information, ensure that compat tasks show the actual AArch32 LR and SP registers rather than the AArch64 ones. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
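For compat tasks the AArch32 LR and SP live in x14 and x13 of the saved pt_regs, for which asm/ptrace.h provides compat_lr/compat_sp aliases; a sketch of how __show_regs() selects them (printing elided):

  static void __show_regs(struct pt_regs *regs)
  {
          int top_reg;
          u64 lr, sp;

          if (compat_user_mode(regs)) {
                  lr = regs->compat_lr;   /* alias of regs->regs[14] */
                  sp = regs->compat_sp;   /* alias of regs->regs[13] */
                  top_reg = 12;
          } else {
                  lr = regs->regs[30];
                  sp = regs->sp;
                  top_reg = 29;
          }

          /* ... print pc/lr/sp, then x0..x<top_reg> ... */
  }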