path: root/arch
2017-07-12  KVM: SVM: handle singlestep exception when skipping emulated instructions  (Ladi Prosek; 1 file, -26/+33)
kvm_skip_emulated_instruction handles the singlestep debug exception which is something we almost always want. This commit (specifically the change in rdmsr_interception) makes the debug.flat KVM unit test pass on AMD.

Two call sites still call skip_emulated_instruction directly:

 * In svm_queue_exception where it's used only for moving the rip forward
 * In task_switch_interception which is analogous to handle_task_switch in VMX

Signed-off-by: Ladi Prosek <lprosek@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
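A minimal sketch of the call-site pattern described above (not the exact upstream diff); the point is that returning kvm_skip_emulated_instruction() lets the common x86 code raise the trap-flag #DB that a bare skip_emulated_instruction() would swallow:

  /* Sketch only: the real rdmsr_interception() also reads the MSR and
   * copies the value into the guest's EDX:EAX before skipping. */
  static int rdmsr_interception(struct vcpu_svm *svm)
  {
  	struct kvm_vcpu *vcpu = &svm->vcpu;

  	/* ... emulate the RDMSR itself ... */

  	svm->next_rip = kvm_rip_read(vcpu) + 2;	/* RDMSR is 2 bytes */
  	/* Old pattern: skip_emulated_instruction(vcpu); return 1;
  	 * New pattern: let the common helper also inject the single-step
  	 * #DB when the guest runs with EFLAGS.TF set (as debug.flat does). */
  	return kvm_skip_emulated_instruction(vcpu);
  }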
2017-07-12  KVM: x86: take slots_lock in kvm_free_pit  (Radim Krčmář; 1 file, -0/+2)
kvm_vm_release() did not have slots_lock when calling kvm_io_bus_unregister_dev() and this went unnoticed until 4a12f9517728 ("KVM: mark kvm->busses as rcu protected") added dynamic checks. Luckily, there should be no race at that point:

  =============================
  WARNING: suspicious RCU usage
  4.12.0.kvm+ #0 Not tainted
  -----------------------------
  ./include/linux/kvm_host.h:479 suspicious rcu_dereference_check() usage!
   lockdep_rcu_suspicious+0xc5/0x100
   kvm_io_bus_unregister_dev+0x173/0x190 [kvm]
   kvm_free_pit+0x28/0x80 [kvm]
   kvm_arch_sync_events+0x2d/0x30 [kvm]
   kvm_put_kvm+0xa7/0x2a0 [kvm]
   kvm_vm_release+0x21/0x30 [kvm]

Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
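A sketch of the locking change, assuming kvm_free_pit() still unregisters the PIT's channel and speaker I/O devices as it did at the time:

  void kvm_free_pit(struct kvm *kvm)
  {
  	struct kvm_pit *pit = kvm->arch.vpit;

  	if (pit) {
  		/* kvm_io_bus_unregister_dev() now checks that kvm->buses is
  		 * accessed under SRCU or with slots_lock held, so take the
  		 * lock even though no ioctl can race with VM release. */
  		mutex_lock(&kvm->slots_lock);
  		kvm_io_bus_unregister_dev(kvm, KVM_PIO_BUS, &pit->dev);
  		kvm_io_bus_unregister_dev(kvm, KVM_PIO_BUS, &pit->speaker_dev);
  		mutex_unlock(&kvm->slots_lock);
  		/* ... destroy the PIT timer and free its state ... */
  	}
  }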
2017-07-12  kvm: vmx: Properly handle machine check during VM-entry  (Jim Mattson; 1 file, -6/+9)
vmx_complete_atomic_exit should call kvm_machine_check for any VM-entry failure due to a machine-check event. Such an exit should be recognized solely by its basic exit reason (i.e. the low 16 bits of the VMCS exit reason field). None of the other VMCS exit information fields contain valid information when the VM-exit is due to "VM-entry failure due to machine-check event". Signed-off-by: Jim Mattson <jmattson@google.com> Reviewed-by: Xiao Guangrong <xiaoguangrong@tencent.com> [Changed VM_EXIT_INTR_INFO condition to better describe its reason.] Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
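A condensed sketch of the check (the surrounding handling of NMIs and ordinary machine-check exceptions is elided; the exact upstream function differs):

  static void vmx_complete_atomic_exit(struct vcpu_vmx *vmx)
  {
  	/* Only the low 16 bits (the basic exit reason) are meaningful for a
  	 * "VM-entry failure due to machine-check event"; the interruption
  	 * information field must not be consulted in that case. */
  	u32 basic_exit_reason = vmx->exit_reason & 0xffff;

  	if (basic_exit_reason == EXIT_REASON_MCE_DURING_VMENTRY) {
  		kvm_machine_check();
  		return;
  	}
  	/* ... handle exceptions reported via the VM-exit interruption
  	 *     information field as before ... */
  }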
2017-07-12  KVM: x86: update master clock before computing kvmclock_offset  (Radim Krčmář; 1 file, -1/+7)
kvm master clock usually has a different frequency than the kernel boot clock. This is not a problem until the master clock is updated; update uses the current kernel boot clock to compute new kvm clock, which erases any kvm clock cycles that might have built up due to frequency difference over a long period. KVM_SET_CLOCK is one of places where we can safely update master clock as the guest-visible clock is going to be shifted anyway. The problem with current code is that it updates the kvm master clock after updating the offset. If the master clock was enabled before calling KVM_SET_CLOCK, then it might have built up a significant delta from kernel boot clock. In the worst case, the time set by userspace would be shifted by so much that it couldn't have been set at any point during KVM_SET_CLOCK. To fix this, move kvm_gen_update_masterclock() before computing kvmclock_offset, which means that the master clock and kernel boot clock will be sufficiently close together. Another solution would be to replace get_kvmclock_ns() with "ktime_get_boot_ns() + ka->kvmclock_offset", which is marginally more accurate, but would break symmetry with KVM_GET_CLOCK. Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
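A condensed sketch of the reordered KVM_SET_CLOCK path (ioctl plumbing and error handling elided):

  	case KVM_SET_CLOCK: {
  		struct kvm_clock_data user_ns;
  		u64 now_ns;

  		/* ... copy_from_user() and flags check elided ... */

  		/* Resync the master clock to the kernel boot clock *before*
  		 * sampling it, so the delta accumulated from the frequency
  		 * difference does not leak into kvmclock_offset. */
  		kvm_gen_update_masterclock(kvm);
  		now_ns = get_kvmclock_ns(kvm);
  		kvm->arch.kvmclock_offset += user_ns.clock - now_ns;
  		kvm_make_all_cpus_request(kvm, KVM_REQ_CLOCK_UPDATE);
  		break;
  	}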
2017-07-12  kvm: nVMX: Shadow "high" parts of shadowed 64-bit VMCS fields  (Jim Mattson; 1 file, -26/+34)
Inconsistencies result from shadowing only accesses to the full 64-bits of a 64-bit VMCS field, but not shadowing accesses to the high 32-bits of the field. The "high" part of a 64-bit field should be shadowed whenever the full 64-bit field is shadowed. Signed-off-by: Jim Mattson <jmattson@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-07-12  kvm: nVMX: Fix nested_vmx_check_msr_bitmap_controls  (Jim Mattson; 1 file, -11/+6)
Allow the L1 guest to specify the last page of addressable guest physical memory for an L2 MSR permission bitmap. Also remove the vmcs12_read_any() check that should never fail. Fixes: 3af18d9c5fe95 ("KVM: nVMX: Prepare for using hardware MSR bitmap") Signed-off-by: Jim Mattson <jmattson@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-07-12  kvm: nVMX: Validate the I/O bitmaps on nested VM-entry  (Jim Mattson; 1 file, -0/+16)
According to the SDM, if the "use I/O bitmaps" VM-execution control is 1, bits 11:0 of each I/O-bitmap address must be 0. Neither address should set any bits beyond the processor's physical-address width. Signed-off-by: Jim Mattson <jmattson@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
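A sketch of the added check; the function name is illustrative, while the vmcs12 field names (io_bitmap_a/io_bitmap_b) and helpers are the usual ones in the nested VMX code:

  static int nested_vmx_check_io_bitmap_controls(struct kvm_vcpu *vcpu,
  					       struct vmcs12 *vmcs12)
  {
  	if (!nested_cpu_has(vmcs12, CPU_BASED_USE_IO_BITMAPS))
  		return 0;

  	/* Bits 11:0 must be clear and each address must fit within the
  	 * guest's physical-address width. */
  	if (!PAGE_ALIGNED(vmcs12->io_bitmap_a) ||
  	    vmcs12->io_bitmap_a >> cpuid_maxphyaddr(vcpu) ||
  	    !PAGE_ALIGNED(vmcs12->io_bitmap_b) ||
  	    vmcs12->io_bitmap_b >> cpuid_maxphyaddr(vcpu))
  		return -EINVAL;

  	return 0;
  }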
2017-07-12  kvm: nVMX: Don't set vmcs12 to "launched" when VMLAUNCH fails  (Jim Mattson; 1 file, -2/+2)
The VMCS launch state is not set to "launched" unless the VMLAUNCH actually succeeds. VMLAUNCH failure includes VM-exits with bit 31 set. Note that this change does not address the general problem that a failure to launch/resume vmcs02 (i.e. vmx->fail) is not handled correctly. Signed-off-by: Jim Mattson <jmattson@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-07-12  powerpc/64: Fix atomic64_inc_not_zero() to return an int  (Michael Ellerman; 1 file, -2/+2)
Although it's not documented anywhere, there is an expectation that atomic64_inc_not_zero() returns a result which fits in an int. This is the behaviour implemented on all arches except powerpc. This has caused at least one bug in practice, in the percpu-refcount code, where the long result from our atomic64_inc_not_zero() was truncated to an int leading to lost references and stuck systems. That was worked around in that code in commit 966d2b04e070 ("percpu-refcount: fix reference leak during percpu-atomic transition"). To the best of my grepping abilities there are no other callers in-tree which truncate the value, but we should fix it anyway. Because the breakage is subtle and potentially very harmful I'm also tagging it for stable. Code generation is largely unaffected because in most cases the callers are just using the result for a test anyway. In particular the case of fget() that was mentioned in commit a6cf7ed5119f ("powerpc/atomic: Implement atomic*_inc_not_zero") generates exactly the same code. Fixes: a6cf7ed5119f ("powerpc/atomic: Implement atomic*_inc_not_zero") Cc: stable@vger.kernel.org # v3.4 Noticed-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
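A user-space illustration of the truncation hazard (hypothetical values, not the percpu-refcount code itself):

  #include <stdio.h>
  #include <stdint.h>

  int main(void)
  {
  	/* What a long-returning atomic64_inc_not_zero() might hand back once
  	 * the refcount has grown past 32 bits of significance. */
  	int64_t ref = INT64_C(0x100000000);	/* non-zero: "got a reference" */
  	int truncated = (int)ref;		/* what an int-typed caller sees */

  	printf("64-bit result: %lld -> as int: %d\n",
  	       (long long)ref, truncated);	/* prints 4294967296 -> 0 */
  	return 0;
  }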
2017-07-12  powerpc: Fix emulation of mfocrf in emulate_step()  (Anton Blanchard; 1 file, -0/+13)
From POWER4 onwards, mfocrf() only places the specified CR field into the destination GPR, and the rest of it is set to 0. The PowerPC AS from version 3.0 now requires this behaviour. The emulation code currently puts the entire CR into the destination GPR. Fix it. Fixes: 6888199f7fe5 ("[POWERPC] Emulate more instructions in software") Cc: stable@vger.kernel.org # v2.6.22+ Signed-off-by: Anton Blanchard <anton@samba.org> Acked-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
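A small user-space model of the corrected semantics: only the selected CR field survives into the destination GPR and everything else is zero (field 0 is the most significant nibble):

  #include <stdio.h>
  #include <stdint.h>

  /* Model of mfocrf rD,FXM with a single field selected in FXM. */
  static uint32_t mfocrf_one_field(uint32_t cr, int field /* 0..7, 0 = high */)
  {
  	int shift = 4 * (7 - field);	/* nibble position of the field */
  	return cr & (0xFu << shift);	/* keep that nibble, zero the rest */
  }

  int main(void)
  {
  	uint32_t cr = 0x12345678;
  	printf("CR0 only: 0x%08x\n", mfocrf_one_field(cr, 0)); /* 0x10000000 */
  	printf("CR7 only: 0x%08x\n", mfocrf_one_field(cr, 7)); /* 0x00000008 */
  	return 0;
  }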
2017-07-12  powerpc: Fix emulation of mcrf in emulate_step()  (Anton Blanchard; 1 file, -2/+4)
The mcrf emulation code was using the CR field number directly as the shift value, without taking into account that CR fields are numbered from 0-7 starting at the high bits. That meant it was looking at the CR fields in the reverse order. Fixes: cf87c3f6b647 ("powerpc: Emulate icbi, mcrf and conditional-trap instructions") Cc: stable@vger.kernel.org # v3.18+ Signed-off-by: Anton Blanchard <anton@samba.org> Acked-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
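The numbering pitfall in user-space form: CR field n occupies bits 4*(7-n) through 4*(7-n)+3, so using n directly as the shift walks the fields in reverse:

  #include <assert.h>
  #include <stdint.h>

  /* Extract CR field n (0 = most significant) from a 32-bit CR image. */
  static unsigned int cr_field(uint32_t cr, int n)
  {
  	return (cr >> (4 * (7 - n))) & 0xF;	/* correct: fields count from the top */
  }

  static unsigned int cr_field_buggy(uint32_t cr, int n)
  {
  	return (cr >> (4 * n)) & 0xF;		/* buggy: reads field 7-n instead of n */
  }

  int main(void)
  {
  	uint32_t cr = 0x12345678;
  	assert(cr_field(cr, 0) == 0x1);		/* CR0 is the top nibble */
  	assert(cr_field_buggy(cr, 0) == 0x8);	/* the bug reads CR7 instead */
  	return 0;
  }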
2017-07-11  Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc  (Linus Torvalds; 7 files, -20/+57)
Pull sparc fixes from David Miller:

 - Fix symbol version generation for assembler on sparc, from Nagarathnam Muthusamy.

 - Fix compound page handling in gup_huge_pmd(), from Nitin Gupta.

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc:
  sparc64: Fix gup_huge_pmd
  Adding the type of exported symbols
  sed regex in Makefile.build requires line break between exported symbols
  Adding asm-prototypes.h for genksyms to generate crc
2017-07-12  powerpc/perf: Add POWER9 alternate PM_RUN_CYC and PM_RUN_INST_CMPL events  (Anton Blanchard; 2 files, -0/+6)
Similar to POWER8, POWER9 can count run cycles and run instructions completed on more than one PMU. Signed-off-by: Anton Blanchard <anton@samba.org> Acked-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-07-11  Merge branch 'for-next' of git://git.kernel.org/pub/scm/linux/kernel/git/gerg/m68knommu  (Linus Torvalds; 6 files, -6/+0)
Pull m68knommu update from Greg Ungerer:
 "Only a single change, to remove old Kconfig options from defconfigs"

* 'for-next' of git://git.kernel.org/pub/scm/linux/kernel/git/gerg/m68knommu:
  m68k: defconfig: Cleanup from old Kconfig options
2017-07-11  xtensa: move generic-y of exported headers to uapi/asm/Kbuild  (Masahiro Yamada; 2 files, -9/+10)
Since commit fcc8487d477a ("uapi: export all headers under uapi directories"), all (and only) headers under uapi directories are exported, but asm-generic wrappers are still exceptions. To complete de-coupling the uapi from kernel headers, move generic-y of exported headers to uapi/asm/Kbuild. With this change, "make headers_install" will just need to parse uapi/asm/Kbuild to build up exported headers. Also, move "generic-y += kprobes.h" up in order to keep the entries sorted. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2017-07-11  unicore32: move generic-y of exported headers to uapi/asm/Kbuild  (Masahiro Yamada; 2 files, -29/+28)
Since commit fcc8487d477a ("uapi: export all headers under uapi directories"), all (and only) headers under uapi directories are exported, but asm-generic wrappers are still exceptions. To complete de-coupling the uapi from kernel headers, move generic-y of exported headers to uapi/asm/Kbuild. With this change, "make headers_install" will just need to parse uapi/asm/Kbuild to build up exported headers. Also, move "generic-y += kprobes.h" up in order to keep the entries sorted. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2017-07-11  tile: move generic-y of exported headers to uapi/asm/Kbuild  (Masahiro Yamada; 2 files, -19/+19)
Since commit fcc8487d477a ("uapi: export all headers under uapi directories"), all (and only) headers under uapi directories are exported, but asm-generic wrappers are still exceptions. To complete de-coupling the uapi from kernel headers, move generic-y of exported headers to uapi/asm/Kbuild. With this change, "make headers_install" will just need to parse uapi/asm/Kbuild to build up exported headers. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2017-07-11  sparc: move generic-y of exported headers to uapi/asm/Kbuild  (Masahiro Yamada; 2 files, -1/+2)
Since commit fcc8487d477a ("uapi: export all headers under uapi directories"), all (and only) headers under uapi directories are exported, but asm-generic wrappers are still exceptions. To complete de-coupling the uapi from kernel headers, move generic-y of exported headers to uapi/asm/Kbuild. With this change, "make headers_install" will just need to parse uapi/asm/Kbuild to build up exported headers. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2017-07-11  sh: move generic-y of exported headers to uapi/asm/Kbuild  (Masahiro Yamada; 2 files, -19/+18)
Since commit fcc8487d477a ("uapi: export all headers under uapi directories"), all (and only) headers under uapi directories are exported, but asm-generic wrappers are still exceptions. To complete de-coupling the uapi from kernel headers, move generic-y of exported headers to uapi/asm/Kbuild. With this change, "make headers_install" will just need to parse uapi/asm/Kbuild to build up exported headers. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2017-07-11  parisc: move generic-y of exported headers to uapi/asm/Kbuild  (Masahiro Yamada; 2 files, -6/+5)
Since commit fcc8487d477a ("uapi: export all headers under uapi directories"), all (and only) headers under uapi directories are exported, but asm-generic wrappers are still exceptions. To complete de-coupling the uapi from kernel headers, move generic-y of exported headers to uapi/asm/Kbuild. With this change, "make headers_install" will just need to parse uapi/asm/Kbuild to build up exported headers. Also, move "generic-y += kprobes.h" up in order to keep the entries sorted. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2017-07-11  openrisc: move generic-y of exported headers to uapi/asm/Kbuild  (Masahiro Yamada; 2 files, -28/+28)
Since commit fcc8487d477a ("uapi: export all headers under uapi directories"), all (and only) headers under uapi directories are exported, but asm-generic wrappers are still exceptions. To complete de-coupling the uapi from kernel headers, move generic-y of exported headers to uapi/asm/Kbuild. With this change, "make headers_install" will just need to parse uapi/asm/Kbuild to build up exported headers. Also, move "generic-y += kprobes.h" up in order to keep the entries sorted. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Acked-by: Stafford Horne <shorne@gmail.com>
2017-07-11  nios2: move generic-y of exported headers to uapi/asm/Kbuild  (Masahiro Yamada; 2 files, -24/+24)
Since commit fcc8487d477a ("uapi: export all headers under uapi directories"), all (and only) headers under uapi directories are exported, but asm-generic wrappers are still exceptions. To complete de-coupling the uapi from kernel headers, move generic-y of exported headers to uapi/asm/Kbuild. With this change, "make headers_install" will just need to parse uapi/asm/Kbuild to build up exported headers. Also, move "generic-y += kprobes.h" up in order to keep the entries sorted. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Reviewed-by: Tobias Klauser <tklauser@distanz.ch>
2017-07-11  nios2: remove unneeded arch/nios2/include/(generated/)asm/signal.h  (Masahiro Yamada; 2 files, -23/+0)
Currently, NIOS2 has three signal.h files under arch/nios2/include:

 [1] arch/nios2/include/asm/signal.h
 [2] arch/nios2/include/uapi/asm/signal.h
 [3] arch/nios2/include/generated/asm/signal.h

[3] is build-time generated by scripts/Makefile.asm-generic. However, -I$(srctree)/arch/$(hdr-arch)/include search path is listed before -I$(objtree)/arch/$(hdr-arch)/include/generated in LINUXINCLUDE. Therefore [1] is always included instead of [3]. Remove [3] which is never included. If we look at [1], it just includes [2]. So, [1] can be removed as well.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Reviewed-by: Tobias Klauser <tklauser@distanz.ch>
2017-07-11  powerpc/perf: Fix SDAR_MODE value for continuous sampling on Power9  (Madhavan Srinivasan; 1 file, -2/+4)
In case of continuous sampling (non-marked), the code currently sets MMCRA[SDAR_MODE] to 0b01 (Update on TLB miss) for Power9 DD1. On DD2 and later it copies the sdar_mode value from the event code, which for most events is 0b00 (No updates). However we must set a non-zero value for SDAR_MODE when doing continuous sampling, so honor the event code, unless it's zero, in which case we use 0b01 (Update on TLB miss).

Fixes: 78b4416aa249 ("powerpc/perf: Handle sdar_mode for marked event in power9")
Cc: stable@vger.kernel.org # v4.11+
Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
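A sketch of the decision described above; the helper name and constant are illustrative, not the exact isa207/power9 definitions:

  /* Illustrative only: pick the MMCRA[SDAR_MODE] bits for continuous
   * sampling on Power9 DD2 and later. */
  static inline u64 p9_sdar_mode_for_sampling(u64 event_sdar_mode)
  {
  	/* Honour the mode encoded in the event, but never leave SDAR_MODE
  	 * at 0b00 ("no updates"); fall back to 0b01 (update on TLB miss). */
  	return event_sdar_mode ? event_sdar_mode : 0x1;
  }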
2017-07-11  MIPS: Fix MIPS I ISA /proc/cpuinfo reporting  (Maciej W. Rozycki; 1 file, -1/+1)
Correct a commit 515a6393dbac ("MIPS: kernel: proc: Add MIPS R6 support to /proc/cpuinfo") regression that caused MIPS I systems to show no ISA levels supported in /proc/cpuinfo, e.g.:

  system type             : Digital DECstation 2100/3100
  machine                 : Unknown
  processor               : 0
  cpu model               : R3000 V2.0 FPU V2.0
  BogoMIPS                : 10.69
  wait instruction        : no
  microsecond timers      : no
  tlb_entries             : 64
  extra interrupt vector  : no
  hardware watchpoint     : no
  isa                     :
  ASEs implemented        :
  shadow register sets    : 1
  kscratch registers      : 0
  package                 : 0
  core                    : 0
  VCED exceptions         : not available
  VCEI exceptions         : not available

and similarly exclude `mips1' from the ISA list for any processors below MIPSr1. This is because the condition to show `mips1' on has been made `cpu_has_mips_r1' rather than newly-introduced `cpu_has_mips_1'. Use the correct condition then.

Fixes: 515a6393dbac ("MIPS: kernel: proc: Add MIPS R6 support to /proc/cpuinfo")
Signed-off-by: Maciej W. Rozycki <macro@linux-mips.org>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org # 3.19+
Patchwork: https://patchwork.linux-mips.org/patch/16758/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-07-11  MIPS: Fix minimum alignment requirement of IRQ stack  (Matt Redfearn; 1 file, -1/+1)
Commit db8466c581cc ("MIPS: IRQ Stack: Unwind IRQ stack onto task stack") erroneously set the initial stack pointer of the IRQ stack to a value with a 4 byte alignment. The MIPS32 ABI requires a minimum stack alignment of 8 bytes, and the MIPS64 ABIs (n32/n64) require a 16 byte minimum alignment. Fix IRQ_STACK_START such that it leaves space for the dummy stack frame (containing the interrupted task's kernel stack pointer) while also meeting the minimum alignment requirements.

Fixes: db8466c581cc ("MIPS: IRQ Stack: Unwind IRQ stack onto task stack")
Reported-by: Darius Ivanauskas <dasilt@yahoo.com>
Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Cc: Chris Metcalf <cmetcalf@mellanox.com>
Cc: Petr Mladek <pmladek@suse.com>
Cc: Aaron Tomlin <atomlin@redhat.com>
Cc: Jason A. Donenfeld <jason@zx2c4.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/16760/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
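An illustrative sketch of the constraint (not necessarily the literal header change): the initial IRQ stack pointer must leave room for the dummy frame while staying 16-byte aligned, which satisfies both the 8-byte (o32) and 16-byte (n32/n64) minimum alignments.

  /* Illustrative sketch only. */
  #define IRQ_STACK_SIZE		THREAD_SIZE

  /*
   * Top of the IRQ stack, minus space for the dummy frame holding the
   * interrupted task's kernel stack pointer, kept 16-byte aligned so the
   * result is valid for every MIPS ABI.
   */
  #define IRQ_STACK_START		(IRQ_STACK_SIZE - 16)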
2017-07-11  MIPS: generic: Support MIPS Boston development boards  (Paul Burton; 5 files, -0/+311)
Add support for the MIPS Boston development board to generic kernels, which essentially amounts to:

 - Adding the device tree source for the MIPS Boston board.
 - Adding a Kconfig fragment which enables the appropriate drivers for the MIPS Boston board.

With these changes in place generic kernels will support the board by default, and kernels with only the drivers needed for Boston enabled can be configured by setting BOARDS=boston during configuration. For example:

  $ make ARCH=mips 64r6el_defconfig BOARDS=boston

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16485/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-07-11  MIPS: DTS: img: Don't attempt to build-in all .dtb files  (Paul Burton; 1 file, -2/+1)
When building a FIT image we may want the kernel to build multiple .dtb files, but we don't want to build them all into the kernel binary as object files since they'll instead be included in the FIT image. Commit daa10170da27 ("MIPS: DTS: img: add device tree for Marduk board") however created arch/mips/boot/dts/img/Makefile with a line that builds any enabled .dtb files into the kernel. Remove this & build the pistachio object specifically, in preparation for adding .dtb targets which we don't want to build into the kernel. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: Rahul Bedarkar <rahulbedarkar89@gmail.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/16484/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-07-11  MIPS: Traced negative syscalls should return -ENOSYS  (James Hogan; 1 file, -0/+7)
If a negative system call number is used when system call tracing is enabled, syscall_trace_enter() will return that negative system call number without having written the return value and error flag into the pt_regs. The caller then treats it as a cancelled system call and assumes that the return value and error flag are already written, leaving the negative system call number in the return register ($v0), and the 4th system call argument in the error register ($a3). Add a special case to detect this at the end of syscall_trace_enter(), to set the return value to error -ENOSYS when this happens. Fixes: d218af78492a ("MIPS: scall: Always run the seccomp syscall filters") Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/16653/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
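A sketch of the special case added at the end of syscall_trace_enter(); the surrounding MIPS code is elided and the wrapper name is illustrative, but syscall_set_return_value() is the standard syscall tracing helper:

  /* Sketch: tail end of syscall_trace_enter() on MIPS (context elided). */
  static long syscall_trace_enter_tail(struct pt_regs *regs, long syscall)
  {
  	/*
  	 * A negative syscall number is reported to the caller as a
  	 * "cancelled" syscall, so the return value and error flag must be
  	 * filled in here; otherwise $v0/$a3 keep stale values.
  	 */
  	if (syscall < 0)
  		syscall_set_return_value(current, regs, -ENOSYS, 0);
  	return syscall;
  }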
2017-07-11  MIPS: Correct forced syscall errors  (James Hogan; 1 file, -1/+1)
When the system call return value is forced to be an error (for example due to SECCOMP_RET_ERRNO), syscall_set_return_value() puts the error code in the return register $v0 and -1 in the error register $a3. However normally executed system calls put 1 in the error register rather than -1, so fix syscall_set_return_value() to be consistent with that. I don't anticipate that anything would have been broken by this, since the most natural way to check the error register on MIPS would be a conditional branch if error register is [not] equal to zero (bnez or beqz). Fixes: 1d7bf993e073 ("MIPS: ftrace: Add support for syscall tracepoints.") Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/16652/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-07-11  MIPS: Negate error syscall return in trace  (James Hogan; 1 file, -1/+1)
The sys_exit trace event takes a single return value for the system call, for which MIPS passes the value of the $v0 (result) register; however MIPS returns positive error codes in $v0, with $a3 specifying that $v0 contains an error code. As a result erroring system calls are traced returning positive error numbers that can't always be distinguished from success. Use regs_return_value() to negate the error code if $a3 is set.

Fixes: 1d7bf993e073 ("MIPS: ftrace: Add support for syscall tracepoints.")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # 3.13+
Patchwork: https://patchwork.linux-mips.org/patch/16651/
Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
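A sketch of the convention regs_return_value() provides on MIPS (the function name here is illustrative; regs->regs[2] is $v0 and regs->regs[7] is $a3 in the MIPS pt_regs layout):

  static inline long mips_regs_return_value(const struct pt_regs *regs)
  {
  	/* $a3 non-zero flags an error; $v0 then holds a positive errno
  	 * that the sys_exit tracepoint wants reported as a negative value. */
  	if (regs->regs[7])
  		return -(long)regs->regs[2];
  	return regs->regs[2];
  }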
2017-07-11  MIPS: Drop duplicate HAVE_SYSCALL_TRACEPOINTS select  (James Hogan; 1 file, -1/+0)
MIPS selects HAVE_SYSCALL_TRACEPOINTS twice. The first was added back in v3.13 by commit 2d7bf993e073 ("MIPS: ftrace: Add support for syscall tracepoints."), but then a second redundant one was added in v4.2 by commit fb59e394c30c ("MIPS: ftrace: Enable support for syscall tracepoints."). Drop the duplicate select. Fixes: fb59e394c30c ("MIPS: ftrace: Enable support for syscall tracepoints.") Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/16654/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-07-11  MIPS16e2: Provide feature overrides for non-MIPS16 systems  (Maciej W. Rozycki; 16 files, -0/+16)
Hardcode the absence of the MIPS16e2 ASE for all the systems that do so for the MIPS16 ASE already, providing for code to be optimized away. Signed-off-by: Maciej W. Rozycki <macro@imgtec.com> Reviewed-by: James Hogan <james.hogan@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/16097/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-07-11  MIPS: MIPS16e2: Report ASE presence in /proc/cpuinfo  (Maciej W. Rozycki; 1 file, -0/+1)
Now that both feature determination and unaligned emulation are in place, add reporting to /proc/cpuinfo, so that the presence of "mips16e2" there indicates not only our recognition of the hardware feature, but correct unaligned emulation as well.

Signed-off-by: Maciej W. Rozycki <macro@imgtec.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16757/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-07-11  spufs: Implement show_options  (David Howells; 1 file, -3/+19)
Implement the show_options superblock op for spufs as part of a bid to get rid of s_options and generic_show_options() to make it easier to implement a context-based mount where the mount options can be passed individually over a file descriptor. Signed-off-by: David Howells <dhowells@redhat.com> cc: Jeremy Kerr <jk@ozlabs.org> cc: linuxppc-dev@lists.ozlabs.org Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2017-07-11  powerpc/asm: Mark cr0 as clobbered in mftb()  (Oliver O'Halloran; 1 file, -1/+1)
The workaround for the CELL timebase bug does not correctly mark cr0 as being clobbered. This means GCC doesn't know that the asm block changes cr0 and might leave the result of an unrelated comparison in cr0 across the block, which we then trash, leading to basically random behaviour. Fixes: 859deea949c3 ("[POWERPC] Cell timebase bug workaround") Cc: stable@vger.kernel.org # v2.6.19+ Signed-off-by: Oliver O'Halloran <oohall@gmail.com> [mpe: Tweak change log and flag for stable] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
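The general rule, as a minimal inline-asm example (not the literal mftb() macro): any condition register field the asm block writes must appear in the clobber list, otherwise GCC may keep an unrelated comparison result live in cr0 across the block.

  static inline unsigned long mftb_sketch(void)
  {
  	unsigned long tb;

  	/* The CELL workaround re-reads the timebase until it is non-zero;
  	 * the cmpwi/beq pair writes cr0, so cr0 must be a clobber. */
  	asm volatile(
  		"1:	mftb	%0\n"
  		"	cmpwi	%0,0\n"
  		"	beq-	1b\n"
  		: "=r" (tb)
  		:
  		: "cr0");
  	return tb;
  }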
2017-07-11  powerpc/powernv: Fix local TLB flush for boot and MCE on POWER9  (Nicholas Piggin; 3 files, -18/+67)
There are two cases outside the normal address space management where a CPU's local TLB is to be flushed:

 1. Host boot; in case something has left stale entries in the TLB (e.g., kexec).
 2. Machine check; to clean corrupted TLB entries.

CPU state restore from deep idle states also flushes the TLB. However this seems to be a side effect of reusing the boot code to set CPU state, rather than a requirement itself. The current flushing has a number of problems with ISA v3.0B:

 - The current radix mode of the MMU is not taken into account. tlbiel is undefined if the R field does not match the current radix mode.
 - ISA v3.0B hash must flush the partition and process table caches.
 - ISA v3.0B radix must flush partition and process scoped translations, partition and process table caches, and also the page walk cache.

Add POWER9 cases to handle these, with radix vs hash determined by the host MMU mode.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-07-10  s390: reduce ELF_ET_DYN_BASE  (Kees Cook; 1 file, -8/+7)
Now that explicitly executed loaders are loaded in the mmap region, we have more freedom to decide where we position PIE binaries in the address space to avoid possible collisions with mmap or stack regions. For 64-bit, align to 4GB to allow runtimes to use the entire 32-bit address space for 32-bit pointers. On 32-bit use 4MB, which is the traditional x86 minimum load location, likely to avoid historically requiring a 4MB page table entry when only a portion of the first 4MB would be used (since the NULL address is avoided). For s390 the position could be 0x10000, but that is needlessly close to the NULL address. Link: http://lkml.kernel.org/r/1498154792-49952-5-git-send-email-keescook@chromium.org Signed-off-by: Kees Cook <keescook@chromium.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: James Hogan <james.hogan@imgtec.com> Cc: Pratyush Anand <panand@redhat.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2017-07-10  powerpc: move ELF_ET_DYN_BASE to 4GB / 4MB  (Kees Cook; 1 file, -6/+7)
Now that explicitly executed loaders are loaded in the mmap region, we have more freedom to decide where we position PIE binaries in the address space to avoid possible collisions with mmap or stack regions. For 64-bit, align to 4GB to allow runtimes to use the entire 32-bit address space for 32-bit pointers. On 32-bit use 4MB, which is the traditional x86 minimum load location, likely to avoid historically requiring a 4MB page table entry when only a portion of the first 4MB would be used (since the NULL address is avoided). Link: http://lkml.kernel.org/r/1498154792-49952-4-git-send-email-keescook@chromium.org Signed-off-by: Kees Cook <keescook@chromium.org> Tested-by: Michael Ellerman <mpe@ellerman.id.au> Acked-by: Michael Ellerman <mpe@ellerman.id.au> Cc: Russell King <linux@armlinux.org.uk> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: James Hogan <james.hogan@imgtec.com> Cc: Pratyush Anand <panand@redhat.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
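In header form the new placement looks roughly like this (is_32bit_task() is powerpc's test for a 32-bit task; the exact constants and helper vary per architecture):

  /* Sketch: PIE binaries load at 4 GB for 64-bit tasks, leaving the whole
   * 32-bit range free for 32-bit pointers, and at the traditional 4 MB
   * minimum load address for 32-bit tasks. */
  #define ELF_ET_DYN_BASE	(is_32bit_task() ? 0x000400000UL : \
  					   0x100000000UL)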
2017-07-10  arm64: move ELF_ET_DYN_BASE to 4GB / 4MB  (Kees Cook; 1 file, -6/+6)
Now that explicitly executed loaders are loaded in the mmap region, we have more freedom to decide where we position PIE binaries in the address space to avoid possible collisions with mmap or stack regions. For 64-bit, align to 4GB to allow runtimes to use the entire 32-bit address space for 32-bit pointers. On 32-bit use 4MB, to match ARM. This could be 0x8000, the standard ET_EXEC load address, but that is needlessly close to the NULL address, and anyone running arm compat PIE will have an MMU, so the tight mapping is not needed. Link: http://lkml.kernel.org/r/1498251600-132458-4-git-send-email-keescook@chromium.org Signed-off-by: Kees Cook <keescook@chromium.org> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2017-07-10  arm: move ELF_ET_DYN_BASE to 4MB  (Kees Cook; 1 file, -6/+2)
Now that explicitly executed loaders are loaded in the mmap region, we have more freedom to decide where we position PIE binaries in the address space to avoid possible collisions with mmap or stack regions. 4MB is chosen here mainly to have parity with x86, where this is the traditional minimum load location, likely to avoid historically requiring a 4MB page table entry when only a portion of the first 4MB would be used (since the NULL address is avoided). For ARM the position could be 0x8000, the standard ET_EXEC load address, but that is needlessly close to the NULL address, and anyone running PIE on 32-bit ARM will have an MMU, so the tight mapping is not needed. Link: http://lkml.kernel.org/r/1498154792-49952-2-git-send-email-keescook@chromium.org Signed-off-by: Kees Cook <keescook@chromium.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: James Hogan <james.hogan@imgtec.com> Cc: Pratyush Anand <panand@redhat.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Alexander Viro <viro@zeniv.linux.org.uk> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Daniel Micay <danielmicay@gmail.com> Cc: Dmitry Safonov <dsafonov@virtuozzo.com> Cc: Grzegorz Andrejczuk <grzegorz.andrejczuk@intel.com> Cc: Kees Cook <keescook@chromium.org> Cc: Masahiro Yamada <yamada.masahiro@socionext.com> Cc: Qualys Security Advisory <qsa@qualys.com> Cc: Rik van Riel <riel@redhat.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2017-07-10  binfmt_elf: use ELF_ET_DYN_BASE only for PIE  (Kees Cook; 1 file, -6/+7)
The ELF_ET_DYN_BASE position was originally intended to keep loaders away from ET_EXEC binaries. (For example, running "/lib/ld-linux.so.2 /bin/cat" might cause the subsequent load of /bin/cat into where the loader had been loaded.) With the advent of PIE (ET_DYN binaries with an INTERP Program Header), ELF_ET_DYN_BASE continued to be used since the kernel was only looking at ET_DYN. However, since ELF_ET_DYN_BASE is traditionally set at the top 1/3rd of the TASK_SIZE, a substantial portion of the address space is unused. For 32-bit tasks when RLIMIT_STACK is set to RLIM_INFINITY, programs are loaded above the mmap region. This means they can be made to collide (CVE-2017-1000370) or nearly collide (CVE-2017-1000371) with pathological stack regions. Lowering ELF_ET_DYN_BASE solves both by moving programs below the mmap region in all cases, and will now additionally avoid programs falling back to the mmap region by enforcing MAP_FIXED for program loads (i.e. if it would have collided with the stack, now it will fail to load instead of falling back to the mmap region). To allow for a lower ELF_ET_DYN_BASE, loaders (ET_DYN without INTERP) are loaded into the mmap region, leaving space available for either an ET_EXEC binary with a fixed location or PIE being loaded into mmap by the loader. Only PIE programs are loaded offset from ELF_ET_DYN_BASE, which means architectures can now safely lower their values without risk of loaders colliding with their subsequently loaded programs. For 64-bit, ELF_ET_DYN_BASE is best set to 4GB to allow runtimes to use the entire 32-bit address space for 32-bit pointers. Thanks to PaX Team, Daniel Micay, and Rik van Riel for inspiration and suggestions on how to implement this solution. Fixes: d1fd836dcf00 ("mm: split ET_DYN ASLR from mmap ASLR") Link: http://lkml.kernel.org/r/20170621173201.GA114489@beast Signed-off-by: Kees Cook <keescook@chromium.org> Acked-by: Rik van Riel <riel@redhat.com> Cc: Daniel Micay <danielmicay@gmail.com> Cc: Qualys Security Advisory <qsa@qualys.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Alexander Viro <viro@zeniv.linux.org.uk> Cc: Dmitry Safonov <dsafonov@virtuozzo.com> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Grzegorz Andrejczuk <grzegorz.andrejczuk@intel.com> Cc: Masahiro Yamada <yamada.masahiro@socionext.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: James Hogan <james.hogan@imgtec.com> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Paul Mackerras <paulus@samba.org> Cc: Pratyush Anand <panand@redhat.com> Cc: Russell King <linux@armlinux.org.uk> Cc: Will Deacon <will.deacon@arm.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
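A condensed sketch of the ET_DYN placement in load_elf_binary() after this change (error handling elided; the variable names approximate the function's locals rather than quoting them exactly):

  	if (elf_ex->e_type == ET_DYN) {
  		if (interpreter) {
  			/* PIE proper (ET_DYN with INTERP): offset from
  			 * ELF_ET_DYN_BASE and demand that address, so a
  			 * collision fails the exec rather than silently
  			 * falling back into the mmap region. */
  			load_bias = ELF_ET_DYN_BASE;
  			if (current->flags & PF_RANDOMIZE)
  				load_bias += arch_mmap_rnd();
  			elf_flags |= MAP_FIXED;
  		} else {
  			/* A loader run directly: let mmap place it. */
  			load_bias = 0;
  		}
  		load_bias = ELF_PAGESTART(load_bias - vaddr);
  	}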
2017-07-10  lib/extable.c: use bsearch() library function in search_extable()  (Thomas Meyer; 4 files, -32/+36)
[thomas@m3y3r.de: v3: fix arch specific implementations] Link: http://lkml.kernel.org/r/1497890858.12931.7.camel@m3y3r.de Signed-off-by: Thomas Meyer <thomas@m3y3r.de> Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2017-07-10  bitmap: use memcmp optimisation in more situations  (Matthew Wilcox; 1 file, -0/+1)
Commit 7dd968163f7c ("bitmap: bitmap_equal memcmp optimization") was rather more restrictive than necessary; we can use memcmp() to implement bitmap_equal() as long as the number of bits can be proved to be a multiple of 8. And architectures other than s390 may be able to make good use of this optimisation. [arnd@arndb.de: fix build: add a memcmp() declaration] Link: http://lkml.kernel.org/r/20170630153908.3439707-1-arnd@arndb.de Link: http://lkml.kernel.org/r/20170628153221.11322-5-willy@infradead.org Signed-off-by: Matthew Wilcox <mawilcox@microsoft.com> Signed-off-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Rasmus Villemoes <linux@rasmusvillemoes.dk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
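A sketch of the relaxed fast path; the constant-folding test here is a simplification of the in-tree one, and the little-endian guard reflects that only on a little-endian layout are the first nbits/8 bytes exactly the first nbits of the bitmap:

  static inline int bitmap_equal_sketch(const unsigned long *src1,
  				      const unsigned long *src2,
  				      unsigned int nbits)
  {
  #if defined(__LITTLE_ENDIAN)
  	/* When the compiler can prove nbits is a multiple of 8, equality
  	 * degenerates to a byte-wise memcmp(), which most architectures
  	 * implement efficiently. */
  	if (__builtin_constant_p(nbits) && (nbits % 8) == 0)
  		return !memcmp(src1, src2, nbits / 8);
  #endif
  	return __bitmap_equal(src1, src2, nbits);
  }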
2017-07-10  ARM: fix rd_size declaration  (Bart Van Assche; 1 file, -2/+1)
The global variable 'rd_size' is declared as 'int' in source file arch/arm/kernel/atags_parse.c and as 'unsigned long' in drivers/block/brd.c. Fix this inconsistency. Additionally, remove the declarations of rd_image_start, rd_prompt and rd_doload from parse_tag_ramdisk() since these duplicate existing declarations in <linux/initrd.h>. Link: http://lkml.kernel.org/r/20170627065024.12347-1-bart.vanassche@wdc.com Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com> Acked-by: Russell King <rmk+kernel@armlinux.org.uk> Cc: Jens Axboe <axboe@kernel.dk> Cc: Jan Kara <jack@suse.cz> Cc: Jason Yan <yanaijie@huawei.com> Cc: Zhaohongjiang <zhaohongjiang@huawei.com> Cc: Miao Xie <miaoxie@huawei.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2017-07-10  frv: cmpxchg: implement cmpxchg64()  (Will Deacon; 1 file, -0/+1)
FRV supports 64-bit cmpxchg, which is provided by the arch code as __cmpxchg_64 and subsequently used to implement atomic64_cmpxchg. This patch hooks up the generic cmpxchg64 API using the same function, which also provides default definitions of the relaxed, acquire and release variants. This fixes the build when COMPILE_TEST=y and IOMMU_IO_PGTABLE_LPAE=y. Link: http://lkml.kernel.org/r/1499084670-6996-1-git-send-email-will.deacon@arm.com Signed-off-by: Will Deacon <will.deacon@arm.com> Reported-by: kbuild test robot <fengguang.wu@intel.com> Cc: Joerg Roedel <joro@8bytes.org> Cc: Robin Murphy <robin.murphy@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ingo Molnar <mingo@kernel.org> Cc: David Howells <dhowells@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2017-07-10  frv: use generic fb.h  (Tobias Klauser; 2 files, -12/+1)
The arch uses a verbatim copy of the asm-generic version and does not add any implementations of its own to the header, so use asm-generic/fb.h instead of duplicating code.

Link: http://lkml.kernel.org/r/20170517083307.1697-1-tklauser@distanz.ch
Signed-off-by: Tobias Klauser <tklauser@distanz.ch>
Reviewed-by: David Howells <dhowells@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2017-07-10  frv: remove wrapper header for asm/device.h  (Tobias Klauser; 2 files, -7/+1)
frv's asm/device.h is merely including asm-generic/device.h. Thus, the arch specific header can be omitted and the generic header can be used directly. Link: http://lkml.kernel.org/r/20170517124915.26904-1-tklauser@distanz.ch Signed-off-by: Tobias Klauser <tklauser@distanz.ch> Reviewed-by: David Howells <dhowells@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2017-07-10  arm64/kasan: don't allocate extra shadow memory  (Andrey Ryabinin; 1 file, -7/+1)
We used to read several bytes of the shadow memory in advance, so additional shadow memory was mapped to prevent a crash if a speculative load happened near the end of the mapped shadow memory. Now we don't have such speculative loads, so we no longer need to map additional shadow memory.

Link: http://lkml.kernel.org/r/20170601162338.23540-3-aryabinin@virtuozzo.com
Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2017-07-10  x86/kasan: don't allocate extra shadow memory  (Andrey Ryabinin; 1 file, -6/+1)
We used to read several bytes of the shadow memory in advance, so additional shadow memory was mapped to prevent a crash if a speculative load happened near the end of the mapped shadow memory. Now we don't have such speculative loads, so we no longer need to map additional shadow memory.

Link: http://lkml.kernel.org/r/20170601162338.23540-2-aryabinin@virtuozzo.com
Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>