2018-09-17  arm64: sysreg: Clean up instructions for modifying PSTATE fields  (Suzuki K Poulose; 2 files, -13/+23)
Instructions for modifying the PSTATE fields which were not supported in the older toolchains (e.g., PAN, UAO) are generated using macros. We have so far used the normal sys_reg() helper for defining the PSTATE fields. While this works fine, it is really difficult to correlate the code with the Arm ARM definition. As per the Arm ARM, the PSTATE fields are defined only using Op1 and Op2, with fixed values for Op0 and CRn; the CRm field is reserved for the immediate value of the instruction. So using sys_reg() looks quite confusing. This patch cleans up the instruction helpers by bringing them in line with the Arm ARM definitions, making it easier to correlate code with the document. No functional changes. Cc: Will Deacon <will.deacon@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
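For reference, helpers after a cleanup like this look roughly as follows (a sketch: the shift values follow the MSR (immediate) encoding and __emit_inst() is the arm64 helper for emitting raw opcodes; exact names may differ from the patch):

  #define Op1_shift	16
  #define CRm_shift	8
  #define Op2_shift	5

  /* A PSTATE field is identified by Op1/Op2 alone; Op0/CRn are fixed and
   * CRm carries the immediate operand of the MSR instruction. */
  #define pstate_field(op1, op2)	(((op1) << Op1_shift) | ((op2) << Op2_shift))
  #define PSTATE_Imm_shift	CRm_shift

  #define PSTATE_PAN	pstate_field(0, 4)
  #define PSTATE_UAO	pstate_field(0, 3)

  /* 0xd500401f is the base opcode of MSR (immediate) */
  #define SET_PSTATE_PAN(x)	__emit_inst(0xd500401f | PSTATE_PAN | \
  					    ((!!(x)) << PSTATE_Imm_shift))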
2018-09-14  arm64: fix for bad_mode() handler to always result in panic  (Hari Vyas; 1 file, -1/+0)
The bad_mode() handler is called if we encounter an unknown exception, with the expectation that the subsequent call to panic() will halt the system. Unfortunately, if the exception calling bad_mode() is taken from EL0, then the call to die() can end up killing the current user task and calling schedule() instead of falling through to panic(). Remove the die() call altogether, since we really want to bring down the machine in this "impossible" case. Signed-off-by: Hari Vyas <hari.vyas@broadcom.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2018-09-14  arm64: force_signal_inject: WARN if called from kernel context  (Will Deacon; 1 file, -1/+4)
force_signal_inject() is designed to send a fatal signal to userspace, so WARN if the current pt_regs indicates a kernel context. This can currently happen for the undefined instruction trap, so patch that up so we always BUG() if we didn't have a handler. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2018-09-14  arm64: cpu: Move errata and feature enable callbacks closer to callers  (Will Deacon; 5 files, -29/+28)
The cpu errata and feature enable callbacks are only called via their respective arm64_cpu_capabilities structure and therefore shouldn't exist in the global namespace. Move the PAN, RAS and cache maintenance emulation enable callbacks into the same files as their corresponding arm64_cpu_capabilities structures, making them static in the process. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2018-09-14  KVM: arm64: Set SCTLR_EL2.DSSBS if SSBD is forcefully disabled and !vhe  (Will Deacon; 2 files, -0/+22)
When running without VHE, it is necessary to set SCTLR_EL2.DSSBS if SSBD has been forcefully disabled on the kernel command-line. Acked-by: Christoffer Dall <christoffer.dall@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2018-09-14  arm64: ssbd: Add support for PSTATE.SSBS rather than trapping to EL3  (Will Deacon; 8 files, -2/+106)
On CPUs with support for PSTATE.SSBS, the kernel can toggle the SSBD state without needing to call into firmware. This patch hooks into the existing SSBD infrastructure so that SSBS is used on CPUs that support it, but it's all made horribly complicated by the very real possibility of big/little systems that don't uniformly provide the new capability. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
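Conceptually, the fast path on SSBS-capable CPUs is just a PSTATE write instead of an SMC to firmware; a minimal sketch, assuming SET_PSTATE_SSBS() emits the MSR (immediate) form described above and with illustrative function names:

  /* SSBS == 1: speculation may bypass stores (mitigation off).
   * SSBS == 0: "safe" mode (mitigation on). */
  static inline void ssbd_enable_mitigation(void)
  {
  	asm volatile(SET_PSTATE_SSBS(0));
  }

  static inline void ssbd_disable_mitigation(void)
  {
  	asm volatile(SET_PSTATE_SSBS(1));
  }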
2018-09-14  arm64: entry: Allow handling of undefined instructions from EL1  (Will Deacon; 2 files, -5/+8)
Rather than panic() when taking an undefined instruction exception from EL1, allow a hook to be registered in case we want to emulate the instruction, like we will for the SSBS PSTATE manipulation instructions. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
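arm64 already lets code register undef hooks for EL0 faults; this change lets a hook match EL1 as well. The SSBS emulation added later in the series then hooks the MSR (immediate) encoding roughly like this (a sketch based on the shape of the cpufeature code; details may differ):

  static int ssbs_emulation_handler(struct pt_regs *regs, u32 instr)
  {
  	if (instr & BIT(PSTATE_Imm_shift))
  		regs->pstate |= PSR_SSBS_BIT;
  	else
  		regs->pstate &= ~PSR_SSBS_BIT;

  	arm64_skip_faulting_instruction(regs, 4);
  	return 0;
  }

  static struct undef_hook ssbs_emulation_hook = {
  	.instr_mask	= ~(1U << PSTATE_Imm_shift),
  	.instr_val	= 0xd500401f | PSTATE_SSBS,
  	.fn		= ssbs_emulation_handler,
  };

  static void __init enable_ssbs_emulation(void)
  {
  	register_undef_hook(&ssbs_emulation_hook);
  }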
2018-09-14  arm64: ssbd: Drop #ifdefs for PR_SPEC_STORE_BYPASS  (Will Deacon; 1 file, -3/+0)
Now that we're all merged nicely into mainline, there's no need to check to see if PR_SPEC_STORE_BYPASS is defined. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2018-09-14  arm64: cpufeature: Detect SSBS and advertise to userspace  (Will Deacon; 5 files, -7/+33)
Armv8.5 introduces a new PSTATE bit known as Speculative Store Bypass Safe (SSBS) which can be used as a mitigation against Spectre variant 4. Additionally, a CPU may provide instructions to manipulate PSTATE.SSBS directly, so that userspace can toggle the SSBS control without trapping to the kernel. This patch probes for the existence of SSBS and advertises the new instructions to userspace if they exist. Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2018-09-14  arm64: Fix silly typo in comment  (Will Deacon; 1 file, -1/+1)
I was passing through and figured I'd fix this up: featuer -> feature Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2018-09-11  arm64: tlb: Rewrite stale comment in asm/tlbflush.h  (Will Deacon; 1 file, -25/+55)
Peter Z asked me to justify the barrier usage in asm/tlbflush.h, but actually that whole block comment needs to be rewritten. Reported-by: Peter Zijlstra <peterz@infradead.org> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2018-09-11  arm64: tlb: Avoid synchronous TLBIs when freeing page tables  (Will Deacon; 3 files, -14/+1)
By selecting HAVE_RCU_TABLE_INVALIDATE, we can rely on tlb_flush() being called if we fail to batch table pages for freeing. This in turn allows us to postpone walk-cache invalidation until tlb_finish_mmu(), which avoids lots of unnecessary DSBs and means we can shoot down the ASID if the range is large enough. Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2018-09-11  arm64: tlb: Adjust stride and type of TLBI according to mmu_gather  (Will Deacon; 1 file, -9/+10)
Now that the core mmu_gather code keeps track of both the levels of page table cleared and also whether or not these entries correspond to intermediate entries, we can use this in our tlb_flush() callback to reduce the number of invalidations we issue as well as their scope. Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2018-09-11  arm64: tlb: Remove redundant !CONFIG_HAVE_RCU_TABLE_FREE code  (Will Deacon; 1 file, -9/+3)
If there's one thing the RCU-based table freeing doesn't need, it's more ifdeffery. Remove the redundant !CONFIG_HAVE_RCU_TABLE_FREE code, since this option is unconditionally selected in our Kconfig. Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2018-09-11  arm64: tlbflush: Allow stride to be specified for __flush_tlb_range()  (Will Deacon; 2 files, -7/+10)
When we are unmapping intermediate page-table entries or huge pages, we don't need to issue a TLBI instruction for every PAGE_SIZE chunk in the VA range being unmapped. Allow the invalidation stride to be passed to __flush_tlb_range(), and adjust our "just nuke the ASID" heuristic to take this into account. Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
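A simplified rendering of the resulting loop (the threshold name and __TLBI_VADDR() details are illustrative, and the user-ASID TLBI variants are omitted for brevity):

  static inline void __flush_tlb_range(struct vm_area_struct *vma,
  				     unsigned long start, unsigned long end,
  				     unsigned long stride, bool last_level)
  {
  	unsigned long asid = ASID(vma->vm_mm);
  	unsigned long addr;

  	/* Too many operations even at this stride? Nuke the whole ASID. */
  	if ((end - start) > (MAX_TLBI_OPS * stride)) {
  		flush_tlb_mm(vma->vm_mm);
  		return;
  	}

  	/* The TLBI address operand is VA >> 12, so scale the stride too. */
  	stride >>= 12;
  	start = __TLBI_VADDR(start, asid);
  	end = __TLBI_VADDR(end, asid);

  	dsb(ishst);
  	for (addr = start; addr < end; addr += stride) {
  		if (last_level)
  			__tlbi(vale1is, addr);	/* leaf entries only */
  		else
  			__tlbi(vae1is, addr);	/* leaf + walk-cache */
  	}
  	dsb(ish);
  }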
2018-09-11  arm64: tlb: Justify non-leaf invalidation in flush_tlb_range()  (Will Deacon; 1 file, -0/+4)
Add a comment to explain why we can't get away with last-level invalidation in flush_tlb_range(). Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2018-09-11  arm64: pgtable: Implement p[mu]d_valid() and check in set_p[mu]d()  (Will Deacon; 1 file, -2/+8)
Now that our walk-cache invalidation routines imply a DSB before the invalidation, we no longer need one when we are clearing an entry during unmap. Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
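The resulting helper has this shape (a sketch for the pmd case; pmd_pte()/pte_valid() are the existing arm64 predicates):

  #define pmd_valid(pmd)	pte_valid(pmd_pte(pmd))

  static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
  {
  	WRITE_ONCE(*pmdp, pmd);

  	/* Only order the store when installing a valid entry; clearing an
  	 * entry on unmap now relies on the DSB issued by the TLBI path. */
  	if (pmd_valid(pmd)) {
  		dsb(ishst);
  		isb();
  	}
  }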
2018-09-11  arm64: tlb: Add DSB ISHST prior to TLBI in __flush_tlb_[kernel_]pgtable()  (Will Deacon; 1 file, -0/+2)
__flush_tlb_[kernel_]pgtable() rely on set_pXd() having a DSB after writing the new table entry and therefore avoid the barrier prior to the TLBI instruction. In preparation for delaying our walk-cache invalidation on the unmap() path, move the DSB into the TLB invalidation routines. Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2018-09-11  arm64: tlb: Use last-level invalidation in flush_tlb_kernel_range()  (Will Deacon; 1 file, -1/+1)
flush_tlb_kernel_range() is only ever used to invalidate last-level entries, so we can restrict the scope of the TLB invalidation instruction. Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2018-09-10  arm64: uaccess: implement unsafe accessors  (Julien Thierry; 1 file, -15/+46)
The current implementations of get/put_user_unsafe default to get/put_user, which toggle PAN before each access, despite the caller having indicated that multiple accesses to user memory were about to happen. Provide implementations for user_access_begin/end to turn PAN off/on and implement unsafe accessors that assume PAN was already turned off. Tested-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Julien Thierry <julien.thierry@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
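The idea, as a sketch (uaccess_enable_not_uao()/uaccess_disable_not_uao() are the existing arm64 PAN toggles of that era; treat the begin/end definitions as illustrative of the proposal rather than the merged code):

  static inline void user_access_begin(void)
  {
  	uaccess_enable_not_uao();	/* PAN off once */
  }

  static inline void user_access_end(void)
  {
  	uaccess_disable_not_uao();	/* PAN back on */
  }

A caller can then batch several accesses under one PAN toggle:

  static int read_pair(const u32 __user *uptr, u32 *a, u32 *b)
  {
  	user_access_begin();
  	unsafe_get_user(*a, &uptr[0], efault);
  	unsafe_get_user(*b, &uptr[1], efault);
  	user_access_end();
  	return 0;
  efault:
  	user_access_end();
  	return -EFAULT;
  }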
2018-09-10  arm64: dump: Use consistent capitalisation for page-table dumps  (Will Deacon; 1 file, -3/+3)
Being consistent in our capitalisation for page-table dumps helps when grepping for things like "end". Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2018-09-10  arm64/lib: add accelerated crc32 routines  (Ard Biesheuvel; 3 files, -0/+63)
Unlike crc32c(), which is wired up to the crypto API internally so the optimal driver is selected based on the platform's capabilities, crc32_le() is implemented as a library function using a slice-by-8 table based C implementation. Even though few of the call sites may be bottlenecks, calling a time variant implementation with a non-negligible D-cache footprint is a bit of a waste, given that ARMv8.1 and up mandates support for the CRC32 instructions that were optional in ARMv8.0, but are already widely available, even on the Cortex-A53 based Raspberry Pi. So implement routines that use these instructions if available, and fall back to the existing generic routines otherwise. The selection is based on alternatives patching. Note that this unconditionally selects CONFIG_CRC32 as a builtin. Since CRC32 is relied upon by core functionality such as CONFIG_OF_FLATTREE, this just codifies the status quo. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
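The arm64 routine itself is assembly patched via alternatives, but its logic is equivalent to something like this C sketch (using the ACLE intrinsics from <arm_acle.h>; crc32_le_base() is the generic fallback alias introduced by the lib/crc32 patch below):

  u32 __pure crc32_le(u32 crc, unsigned char const *p, size_t len)
  {
  	if (!cpus_have_const_cap(ARM64_HAS_CRC32))
  		return crc32_le_base(crc, p, len);

  	while (len >= 8) {
  		crc = __crc32d(crc, get_unaligned_le64(p));
  		p += 8;
  		len -= 8;
  	}
  	while (len--)
  		crc = __crc32b(crc, *p++);

  	return crc;
  }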
2018-09-10  arm64: cpufeature: add feature for CRC32 instructions  (Ard Biesheuvel; 2 files, -1/+11)
Add a CRC32 feature bit and wire it up to the CPU id register so we will be able to use alternatives patching for CRC32 operations. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2018-09-10  lib/crc32: make core crc32() routines weak so they can be overridden  (Ard Biesheuvel; 1 file, -4/+7)
Allow architectures to drop in accelerated CRC32 routines by making the crc32_le/__crc32c_le entry points weak, and exposing non-weak aliases for them that may be used by the accelerated versions as fallbacks in case the instructions they rely upon are not available. Acked-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
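The pattern, paraphrased from the patch:

  /* Weak: an architecture can provide its own optimised crc32_le(). */
  u32 __pure __weak crc32_le(u32 crc, unsigned char const *p, size_t len)
  {
  	return crc32_le_generic(crc, p, len, crc32table_le, CRC32_POLY_LE);
  }
  EXPORT_SYMBOL(crc32_le);

  /* Strong alias: the arch override can still reach the generic code. */
  u32 __pure crc32_le_base(u32, unsigned char const *, size_t) __alias(crc32_le);
  EXPORT_SYMBOL(crc32_le_base);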
2018-09-07  MAINTAINERS: Add entry for MMU GATHER AND TLB INVALIDATION  (Will Deacon; 1 file, -0/+13)
We recently had to debug a TLB invalidation problem on the munmap() path, which was made more difficult than necessary because:

 (a) The MMU gather code had changed without people realising
 (b) Many people subtly misunderstood the operation of the MMU gather code and its interactions with RCU and arch-specific TLB invalidation
 (c) Untangling the intended behaviour involved educated guesswork and plenty of discussion

Hopefully, we can avoid getting into this mess again by designating a cross-arch group of people to look after this code. It is not intended that they will have a separate tree, but they at least provide a point of contact for anybody working in this area and can co-ordinate any proposed future changes to the internal API. Cc: Peter Zijlstra <peterz@infradead.org> Cc: Nicholas Piggin <npiggin@gmail.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com> Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Michal Hocko <mhocko@suse.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-09-07  mm/memory: Move mmu_gather and TLB invalidation code into its own file  (Peter Zijlstra; 4 files, -252/+265)
In preparation for maintaining the mmu_gather code as its own entity, move the implementation out of memory.c and into its own file. Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Michal Hocko <mhocko@suse.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-09-04  asm-generic/tlb: Track which levels of the page tables have been cleared  (Will Deacon; 2 files, -9/+53)
It is common for architectures with hugepage support to require only a single TLB invalidation operation per hugepage during unmap(), rather than iterating through the mapping at a PAGE_SIZE increment. Currently, however, the level in the page table where the unmap() operation occurs is not stored in the mmu_gather structure, therefore forcing architectures to issue additional TLB invalidation operations or to give up and over-invalidate by e.g. invalidating the entire TLB. Ideally, we could add an interval rbtree to the mmu_gather structure, which would allow us to associate the correct mapping granule with the various sub-mappings within the range being invalidated. However, this is costly in terms of book-keeping and memory management, so instead we approximate by keeping track of the page table levels that are cleared and provide a means to query the smallest granule required for invalidation. Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
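The shape of the new tracking, close to the patch itself (struct fields elided where not relevant):

  struct mmu_gather {
  	/* ... existing fields ... */

  	/* set when the corresponding level is cleared during unmap */
  	unsigned int		cleared_ptes : 1;
  	unsigned int		cleared_pmds : 1;
  	unsigned int		cleared_puds : 1;
  	unsigned int		cleared_p4ds : 1;
  };

  static inline unsigned long tlb_get_unmap_shift(struct mmu_gather *tlb)
  {
  	if (tlb->cleared_ptes)
  		return PAGE_SHIFT;
  	if (tlb->cleared_pmds)
  		return PMD_SHIFT;
  	if (tlb->cleared_puds)
  		return PUD_SHIFT;
  	if (tlb->cleared_p4ds)
  		return P4D_SHIFT;

  	return PAGE_SHIFT;
  }

  static inline unsigned long tlb_get_unmap_size(struct mmu_gather *tlb)
  {
  	return 1UL << tlb_get_unmap_shift(tlb);
  }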
2018-09-04  asm-generic/tlb: Track freeing of page-table directories in struct mmu_gather  (Peter Zijlstra; 1 file, -8/+23)
Some architectures require different TLB invalidation instructions depending on whether it is only the last-level of page table being changed, or whether there are also changes to the intermediate (directory) entries higher up the tree. Add a new bit to the flags bitfield in struct mmu_gather so that the architecture code can operate accordingly if it's the intermediate levels being invalidated. Acked-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-09-04  asm-generic/tlb: Guard with #ifdef CONFIG_MMU  (Will Deacon; 1 file, -0/+4)
The inner workings of the mmu_gather-based TLB invalidation mechanism are not relevant to nommu configurations, so guard them with an #ifdef. This allows us to implement future functions using static inlines without breaking the build. Acked-by: Nicholas Piggin <npiggin@gmail.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-09-02  Linux 4.19-rc2  (Linus Torvalds; 1 file, -1/+1)
2018-09-02  x86/pti: Fix section mismatch warning/error  (Randy Dunlap; 1 file, -1/+1)
Fix the section mismatch warning in arch/x86/mm/pti.c:

  WARNING: vmlinux.o(.text+0x6972a): Section mismatch in reference from the function pti_clone_pgtable() to the function .init.text:pti_user_pagetable_walk_pte()
  The function pti_clone_pgtable() references the function __init pti_user_pagetable_walk_pte(). This is often because pti_clone_pgtable lacks a __init annotation or the annotation of pti_user_pagetable_walk_pte is wrong.
  FATAL: modpost: Section mismatches detected.

Fixes: 85900ea51577 ("x86/pti: Map the vsyscall page if needed") Reported-by: kbuild test robot <lkp@intel.com> Signed-off-by: Randy Dunlap <rdunlap@infradead.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Andy Lutomirski <luto@kernel.org> Link: https://lkml.kernel.org/r/43a6d6a3-d69d-5eda-da09-0b1c88215a2a@infradead.org
2018-09-01  x86/vdso: Fix lsl operand order  (Samuel Neves; 1 file, -1/+1)
In the __getcpu function, lsl is using the wrong target and destination registers. Luckily, the compiler tends to choose %eax for both variables, so it has been working so far. Fixes: a582c540ac1b ("x86/vdso: Use RDPID in preference to LSL when available") Signed-off-by: Samuel Neves <sneves@dei.uc.pt> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Andy Lutomirski <luto@kernel.org> Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/20180901201452.27828-1-sneves@dei.uc.pt
2018-09-01  x86/mce: Fix set_mce_nospec() to avoid #GP fault  (Tony Luck; 1 file, -1/+24)
The trick with flipping bit 63 to avoid loading the address of the 1:1 mapping of the poisoned page while the 1:1 map is updated used to work when unmapping the page. But it falls down horribly when attempting to directly set the page as uncacheable. The problem is that when the cache mode is changed to uncacheable, the page needs to be flushed from the cache first. But the decoy address is non-canonical due to bit 63 being flipped, and the CLFLUSH instruction throws a #GP fault. Add code to change_page_attr_set_clr() to fix the address before calling flush. Fixes: 284ce4011ba6 ("x86/memory_failure: Introduce {set, clear}_mce_nospec()") Suggested-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Tony Luck <tony.luck@intel.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Anvin <hpa@zytor.com> Cc: Borislav Petkov <bp@alien8.de> Cc: linux-edac <linux-edac@vger.kernel.org> Cc: Dan Williams <dan.j.williams@intel.com> Cc: Dave Jiang <dave.jiang@intel.com> Link: https://lkml.kernel.org/r/20180831165506.GA9605@agluck-desk
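The fix-up itself is a two-instruction trick; a sketch (the function name is as I recall it from the fix, so treat it as illustrative):

  static inline unsigned long make_addr_canonical_again(unsigned long addr)
  {
  #ifdef CONFIG_X86_64
  	/* Shift bit 63 out, then arithmetic-shift bit 62 back over it:
  	 * a no-op for canonical addresses, and it repairs the decoy. */
  	return (long)(addr << 1) >> 1;
  #else
  	return addr;
  #endif
  }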
2018-08-31  x86/efi: Load fixmap GDT in efi_call_phys_epilog()  (Joerg Roedel; 1 file, -6/+2)
When PTI is enabled on x86-32 the kernel uses the GDT mapped in the fixmap for the simple reason that this address is also mapped for user-space. The efi_call_phys_prolog()/efi_call_phys_epilog() wrappers change the GDT to call EFI runtime services and switch back to the kernel GDT when they return. But the switch-back uses the writable GDT, not the fixmap GDT. When that happens and the CPU returns to user-space, it switches to the user %cr3 and tries to restore the user segment registers. This fails because the writable GDT is not mapped in the user page-table, and without a GDT the fault handlers can't be launched either. The result is a triple fault and reboot of the machine. Fix that by restoring the GDT back to the fixmap GDT which is also mapped in the user page-table. Fixes: 7757d607c6b3 ("x86/pti: Allow CONFIG_PAGE_TABLE_ISOLATION for x86_32") Reported-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Guenter Roeck <linux@roeck-us.net> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Michal Hocko <mhocko@suse.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Dave Hansen <dave.hansen@intel.com> Cc: Pavel Machek <pavel@ucw.cz> Cc: hpa@zytor.com Cc: linux-efi@vger.kernel.org Link: https://lkml.kernel.org/r/1535702738-10971-1-git-send-email-joro@8bytes.org
2018-08-31  x86/nmi: Fix NMI uaccess race against CR3 switching  (Andy Lutomirski; 4 files, -1/+53)
An NMI can hit in the middle of context switching or in the middle of switch_mm_irqs_off(). In either case, CR3 might not match current->mm, which could cause copy_from_user_nmi() and friends to read the wrong memory. Fix it by adding a new nmi_uaccess_okay() helper and checking it in copy_from_user_nmi() and in __copy_from_user_nmi()'s callers. Signed-off-by: Andy Lutomirski <luto@kernel.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Rik van Riel <riel@surriel.com> Cc: Nadav Amit <nadav.amit@gmail.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Jann Horn <jannh@google.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/dd956eba16646fd0b15c3c0741269dfd84452dac.1535557289.git.luto@kernel.org
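A condensed sketch of the helper (the real version lives in the x86 tlbflush header; the comments are mine):

  static inline bool nmi_uaccess_okay(void)
  {
  	struct mm_struct *loaded_mm = this_cpu_read(cpu_tlbstate.loaded_mm);
  	struct mm_struct *current_mm = current->mm;

  	/* switch_mm_irqs_off() updates loaded_mm before writing CR3, so
  	 * this check is conservative: it may reject a usable window, but
  	 * it never accepts a stale CR3. */
  	if (loaded_mm != current_mm)
  		return false;

  	VM_WARN_ON_ONCE(current_mm->pgd != __va(read_cr3_pa()));
  	return true;
  }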
2018-08-31  x86: Allow generating user-space headers without a compiler  (Ben Hutchings; 1 file, -4/+7)
When bootstrapping an architecture, it's usual to generate the kernel's user-space headers (make headers_install) before building a compiler. Move the compiler check (for asm goto support) to the archprepare target so that it is only done when building code for the target. Fixes: e501ce957a78 ("x86: Force asm-goto") Reported-by: Helmut Grohne <helmutg@debian.org> Signed-off-by: Ben Hutchings <ben@decadent.org.uk> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/20180829194317.GA4765@decadent.org.uk
2018-08-31  x86/dumpstack: Don't dump kernel memory based on usermode RIP  (Jann Horn; 3 files, -5/+15)
show_opcodes() is used both for dumping kernel instructions and for dumping user instructions. If userspace causes #PF by jumping to a kernel address, show_opcodes() can be reached with regs->ip controlled by the user, pointing to kernel code. Make sure that userspace can't trick us into dumping kernel memory into dmesg. Fixes: 7cccf0725cf7 ("x86/dumpstack: Add a show_ip() function") Signed-off-by: Jann Horn <jannh@google.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Kees Cook <keescook@chromium.org> Reviewed-by: Borislav Petkov <bp@suse.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: security@kernel.org Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/20180828154901.112726-1-jannh@google.com
2018-08-31  of: Add device_type access helper functions  (Rob Herring; 1 file, -0/+12)
In preparation to remove direct access to device_node.type, add of_node_is_type() and of_node_get_device_type() helpers to check and retrieve the device type. Cc: Frank Rowand <frowand.list@gmail.com> Signed-off-by: Rob Herring <robh@kernel.org>
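The helpers are presumably along these lines (a sketch reconstructed from the description):

  static inline const char *of_node_get_device_type(const struct device_node *np)
  {
  	return of_get_property(np, "device_type", NULL);
  }

  static inline bool of_node_is_type(const struct device_node *np,
  				   const char *type)
  {
  	const char *match = of_node_get_device_type(np);

  	return np && match && type && !strcmp(match, type);
  }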
2018-08-31  cpu/hotplug: Remove skip_onerr field from cpuhp_step structure  (Mukesh Ojha; 1 file, -22/+4)
When notifiers were there, `skip_onerr` was used to avoid calling particular step startup/teardown callbacks in the CPU up/down rollback path, which made the hotplug asymmetric. As notifiers are gone now after the full state machine conversion, the `skip_onerr` field is no longer required. Remove it from the structure and its usage. Signed-off-by: Mukesh Ojha <mojha@codeaurora.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/1535439294-31426-1-git-send-email-mojha@codeaurora.org
2018-08-31  arm64: mm: always enable CONFIG_HOLES_IN_ZONE  (James Morse; 1 file, -1/+0)
Commit 6d526ee26ccd ("arm64: mm: enable CONFIG_HOLES_IN_ZONE for NUMA") only enabled HOLES_IN_ZONE for NUMA systems because the NUMA code was choking on the missing zone for nomap pages. This problem doesn't just apply to NUMA systems.

If the architecture doesn't set HAVE_ARCH_PFN_VALID, pfn_valid() will return true if the pfn is part of a valid sparsemem section. When working with multiple pages, the mm code uses pfn_valid_within() to test that each page it uses within the sparsemem section is valid. On most systems memory comes in MAX_ORDER_NR_PAGES chunks which all have valid/initialised struct pages, and in this case pfn_valid_within() is optimised out.

Systems where this isn't true (e.g. due to nomap) should set HOLES_IN_ZONE and provide HAVE_ARCH_PFN_VALID so that mm tests each page as it works with it. Currently non-NUMA arm64 systems can't enable HOLES_IN_ZONE, leading to a VM_BUG_ON():

  | page:fffffdff802e1780 is uninitialized and poisoned
  | raw: ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
  | raw: ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
  | page dumped because: VM_BUG_ON_PAGE(PagePoisoned(p))
  | ------------[ cut here ]------------
  | kernel BUG at include/linux/mm.h:978!
  | Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
  [...]
  | CPU: 1 PID: 25236 Comm: dd Not tainted 4.18.0 #7
  | Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
  | pstate: 40000085 (nZcv daIf -PAN -UAO)
  | pc : move_freepages_block+0x144/0x248
  | lr : move_freepages_block+0x144/0x248
  | sp : fffffe0071177680
  [...]
  | Process dd (pid: 25236, stack limit = 0x0000000094cc07fb)
  | Call trace:
  |  move_freepages_block+0x144/0x248
  |  steal_suitable_fallback+0x100/0x16c
  |  get_page_from_freelist+0x440/0xb20
  |  __alloc_pages_nodemask+0xe8/0x838
  |  new_slab+0xd4/0x418
  |  ___slab_alloc.constprop.27+0x380/0x4a8
  |  __slab_alloc.isra.21.constprop.26+0x24/0x34
  |  kmem_cache_alloc+0xa8/0x180
  |  alloc_buffer_head+0x1c/0x90
  |  alloc_page_buffers+0x68/0xb0
  |  create_empty_buffers+0x20/0x1ec
  |  create_page_buffers+0xb0/0xf0
  |  __block_write_begin_int+0xc4/0x564
  |  __block_write_begin+0x10/0x18
  |  block_write_begin+0x48/0xd0
  |  blkdev_write_begin+0x28/0x30
  |  generic_perform_write+0x98/0x16c
  |  __generic_file_write_iter+0x138/0x168
  |  blkdev_write_iter+0x80/0xf0
  |  __vfs_write+0xe4/0x10c
  |  vfs_write+0xb4/0x168
  |  ksys_write+0x44/0x88
  |  sys_write+0xc/0x14
  |  el0_svc_naked+0x30/0x34
  | Code: aa1303e0 90001a01 91296421 94008902 (d4210000)
  | ---[ end trace 1601ba47f6e883fe ]---

Remove the NUMA dependency.

Link: https://www.spinics.net/lists/arm-kernel/msg671851.html
Cc: <stable@vger.kernel.org>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reported-by: Mikulas Patocka <mpatocka@redhat.com>
Reviewed-by: Pavel Tatashin <pavel.tatashin@microsoft.com>
Tested-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-08-31  m68k/mac: Use correct PMU response format  (Finn Thain; 1 file, -6/+4)
Now that the 68k Mac port has adopted the via-pmu driver, it must decode the PMU response accordingly otherwise the date and time will be wrong. Fixes: ebd722275f9cfc67 ("macintosh/via-pmu: Replace via-pmu68k driver with via-pmu driver") Signed-off-by: Finn Thain <fthain@telegraphics.com.au> Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
2018-08-30  disable stringop truncation warnings for now  (Stephen Rothwell; 1 file, -0/+3)
They are too noisy Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-08-30  clk: x86: Set default parent to 48Mhz  (Akshu Agrawal; 1 file, -1/+1)
The system clk provided in the ST SoC can be set to either 48MHz (non-spread) or 25MHz (spread). To get an accurate rate, we need to set it to the non-spread option, which is 48MHz. Signed-off-by: Akshu Agrawal <akshu.agrawal@amd.com> Reviewed-by: Daniel Kurtz <djkurtz@chromium.org> Fixes: 421bf6a1f061 ("clk: x86: Add ST oscout platform clock") Signed-off-by: Stephen Boyd <sboyd@kernel.org>
2018-08-30  i2c: sh_mobile: fix leak when using DMA bounce buffer  (Wolfram Sang; 1 file, -2/+4)
We only freed the bounce buffer after successful DMA, missing the cases where DMA setup may have gone wrong. Use a better location which always gets called after each message and use 'stop_after_dma' as a flag for a successful transfer. Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com> Reviewed-by: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se> Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
2018-08-30  i2c: sh_mobile: define start_ch() void as it only returns 0 anyhow  (Wolfram Sang; 1 file, -6/+3)
After various refactoring over the years, start_ch() doesn't return errno anymore, so make the function return void. This saves the error handling when calling it which in turn eases cleanup of resources of a future patch. Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com> Reviewed-by: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se> Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
2018-08-30  i2c: refactor function to release a DMA safe buffer  (Wolfram Sang; 4 files, -10/+15)
a) rename to 'put' instead of 'release' to match 'get' when obtaining the buffer
b) change the argument order to have the buffer as first argument
c) add a new argument telling the function if the message was transferred. This allows the function to be used also in cases where setting up DMA failed, so the buffer needs to be freed without syncing to the message buffer.

Also convert the only user. Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com> Reviewed-by: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se> Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
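The refactored function then has this shape (a sketch matching the description above, with 'xferred' as the new argument):

  void i2c_put_dma_safe_msg_buf(u8 *buf, struct i2c_msg *msg, bool xferred)
  {
  	if (!buf || buf == msg->buf)
  		return;		/* no bounce buffer was used */

  	/* Only sync back to the message for a completed read. */
  	if (xferred && msg->flags & I2C_M_RD)
  		memcpy(msg->buf, buf, msg->len);

  	kfree(buf);
  }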
2018-08-30  i2c: algos: bit: make the error messages grepable  (Jan Kundrát; 1 file, -24/+31)
Yep, I went looking for one of these, and I wasn't able to find it easily. That's worse than a line which is 82-chars long, IMHO. Signed-off-by: Jan Kundrát <jan.kundrat@cesnet.cz> Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
2018-08-30  i2c: designware: Re-init controllers with pm_disabled set on resume  (Hans de Goede; 2 files, -2/+6)
On Bay Trail and Cherry Trail devices we set the pm_disabled flag for I2C busses which the OS shares with the PUNIT, as these need special handling. Until now we called dev_pm_syscore_device(dev, true) for I2C controllers with this flag set to keep these I2C controllers always on.

After commit 12864ff8545f ("ACPI / LPSS: Avoid PM quirks on suspend and resume from hibernation"), this no longer works. That commit modifies lpss_iosf_exit_d3_state() to only run if lpss_iosf_enter_d3_state() has run before it, so that it does not run on a resume from hibernate (or from S3). On these systems the conditions for lpss_iosf_enter_d3_state() to run never become true, so lpss_iosf_exit_d3_state() never gets called and the 2 LPSS DMA controllers never get forced into D0 mode; instead they are left in their default automatic power-on-when-needed mode.

Not forcing D0 mode for the DMA controllers enables these systems to properly enter S0ix modes, which is a good thing. But after entering S0ix modes the I2C controller connected to the PMIC no longer works, leading to e.g. broken battery monitoring.

The _PS3 method for this I2C controller looks like this:

    Method (_PS3, 0, NotSerialized)  // _PS3: Power State 3
    {
        If ((((PMID == 0x04) || (PMID == 0x05)) || (PMID == 0x06)))
        {
            Return (Zero)
        }

        PSAT |= 0x03
        Local0 = PSAT /* \_SB_.I2C5.PSAT */
    }

Where PMID = 0x05, so we enter the Return (Zero) path on these systems. So even if we were to not call dev_pm_syscore_device(dev, true), the I2C controller would be left in D0 rather than be switched to D3.

Yet on other Bay and Cherry Trail devices S0ix is not entered unless *all* I2C controllers are in D3 mode. This, combined with the I2C controller no longer working now that we reach S0ix states on these systems, leads me to believe that the PUNIT itself puts the I2C controller in D3 when all other conditions for entering S0ix states are true.

Since the I2C controller is now put in D3 over a suspend/resume, we must re-initialize it afterwards, and that does indeed fix it no longer working. This commit implements this fix by:

1) Making the suspend_late callback a no-op if pm_disabled is set, and making the resume_early callback skip the clock re-enable (since it now was not disabled) while still doing the necessary I2C controller re-init.

2) Removing the dev_pm_syscore_device(dev, true) call, so that the suspend and resume callbacks are actually called. Normally this would cause the ACPI pm code to call _PS3, putting the I2C controller in D3, wreaking havoc since it is shared with the PUNIT, but in this special case the _PS3 method is a no-op, so we can safely allow a "fake" suspend / resume.

Fixes: 12864ff8545f ("ACPI / LPSS: Avoid PM quirks on suspend and resume ...") Link: https://bugzilla.kernel.org/show_bug.cgi?id=200861 Cc: 4.15+ <stable@vger.kernel.org> # 4.15+ Signed-off-by: Hans de Goede <hdegoede@redhat.com> Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Acked-by: Jarkko Nikula <jarkko.nikula@linux.intel.com> Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
2018-08-30  i2c: i801: Allow ACPI AML access I/O ports not reserved for SMBus  (Mika Westerberg; 1 file, -1/+8)
Commit 7ae81952cda ("i2c: i801: Allow ACPI SystemIO OpRegion to conflict with PCI BAR") made it possible for AML code to access SMBus I/O ports by installing a custom SystemIO OpRegion handler and blocking i801 driver access upon the first AML read/write to this OpRegion.

However, while the ThinkPad T560 does have a SystemIO OpRegion declared under the SMBus device, it does not access any of the SMBus registers:

    Device (SMBU)
    {
        ...
        OperationRegion (SMBP, PCI_Config, 0x50, 0x04)
        Field (SMBP, DWordAcc, NoLock, Preserve)
        {
            ,       5,
            TCOB,   11,
            Offset (0x04)
        }

        Name (TCBV, 0x00)
        Method (TCBS, 0, NotSerialized)
        {
            If ((TCBV == 0x00))
            {
                TCBV = (\_SB.PCI0.SMBU.TCOB << 0x05)
            }

            Return (TCBV) /* \_SB_.PCI0.SMBU.TCBV */
        }

        OperationRegion (TCBA, SystemIO, TCBS (), 0x10)
        Field (TCBA, ByteAcc, NoLock, Preserve)
        {
            Offset (0x04),
            ,       9,
            CPSC,   1
        }
    }

The problem with the current approach is that it blocks all I/O port access, and because this system has a touchpad connected to the SMBus controller, after the first AML access (which happens during a suspend/resume cycle) the touchpad stops working.

Fix this so that we allow ACPI AML I/O port access if it does not touch the region reserved for the SMBus.

Fixes: 7ae81952cda ("i2c: i801: Allow ACPI SystemIO OpRegion to conflict with PCI BAR") Link: https://bugzilla.kernel.org/show_bug.cgi?id=200737 Reported-by: Yussuf Khalil <dev@pp3345.net> Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com> Reviewed-by: Jean Delvare <jdelvare@suse.de> Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
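The check presumably boils down to a range test before the OpRegion handler flags the bus as reserved; a simplified sketch (field names as in the i2c-i801 driver, but treat the details as illustrative):

  static bool i801_acpi_is_smbus_ioport(const struct i801_priv *priv,
  				      acpi_physical_address address)
  {
  	return address >= priv->smba &&
  	       address <= pci_resource_end(priv->pci_dev, SMBBAR);
  }

Only when this returns true would the handler set the "ACPI reserved" flag that locks out the host driver; all other SystemIO accesses are simply forwarded to acpi_os_read_port()/acpi_os_write_port().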
2018-08-30  of: add node name compare helper functions  (Rob Herring; 2 files, -0/+35)
In preparation to remove device_node.name pointer, add helper functions for node name comparisons which are a common pattern throughout the kernel. Cc: Frank Rowand <frowand.list@gmail.com> Signed-off-by: Rob Herring <robh@kernel.org>
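The comparison helpers are roughly as follows (a sketch: the names are matched on the node name before any '@unit-address' suffix):

  bool of_node_name_eq(const struct device_node *np, const char *name)
  {
  	const char *node_name;
  	size_t len;

  	if (!np)
  		return false;

  	node_name = kbasename(np->full_name);
  	len = strchrnul(node_name, '@') - node_name;

  	return (strlen(name) == len) && (strncmp(node_name, name, len) == 0);
  }

  bool of_node_name_prefix(const struct device_node *np, const char *prefix)
  {
  	if (!np)
  		return false;

  	return strncmp(kbasename(np->full_name), prefix, strlen(prefix)) == 0;
  }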