path: root/arch/arm64/kernel/debug-monitors.c
Age  Commit message  (Author; files changed, +lines/-lines)
2015-10-16  arm64: AArch32 user space PC alignment exception  (Mark Salyzyn; 1 file, +2/-0)
ARMv7 does not have a PC alignment exception. ARMv8 AArch32 user space, however, can produce one. Add a handler so that we do not dump an unexpected stack trace in the logs. Signed-off-by: Mark Salyzyn <salyzyn@android.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-16  arm64: Minor coding style fixes for kc_offset_to_vaddr and kc_vaddr_to_offset  (Catalin Marinas; 1 file, +3/-2)
These were introduced by commit 03875ad52fdd (arm64: add kc_offset_to_vaddr and kc_vaddr_to_offset macro). Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-13  Revert "arm64: ioremap: add ioremap_cache macro"  (Catalin Marinas; 1 file, +0/-1)
This reverts commit 1b6d7f8742d5d46c478f10c9e57da18d049b116d. This patch would conflict with Dan Williams' "tree-wide convert to memremap()" series (ioremap_cache replaced by arch_memremap) Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-13  arm64: add kc_offset_to_vaddr and kc_vaddr_to_offset macro  (yalin wang; 1 file, +2/-0)
This patch adds kc_offset_to_vaddr() and kc_vaddr_to_offset(). The default versions don't work on arm64 because some arm64 kernel addresses, such as module and vmemmap addresses, lie below PAGE_OFFSET. Signed-off-by: yalin wang <yalin.wang2010@gmail.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
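For reference, the two macros are small; a sketch, assuming they sit next to the VA_START definition in asm/memory.h (placement and exact parenthesisation are assumptions):

  /*
   * /proc/kcore offset <-> kernel VA conversion. All arm64 kernel virtual
   * addresses have the top (64 - VA_BITS) bits set, so masking with
   * ~VA_START yields a compact offset and OR-ing VA_START restores the VA.
   */
  #define kc_vaddr_to_offset(v)	((v) & ~VA_START)
  #define kc_offset_to_vaddr(o)	((o) | VA_START)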
2015-10-13  arm64: ioremap: add ioremap_cache macro  (yalin wang; 1 file, +1/-0)
Add an ioremap_cache macro, because some code tests whether this macro is defined and generates a generic version if it is not; memremap.c, for example, does this. Signed-off-by: yalin wang <yalin.wang2010@gmail.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
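Since arm64 already provides a real ioremap_cache(), the one-line addition only needs to make '#ifdef ioremap_cache' tests succeed; the usual idiom is a self-referential define (an assumption consistent with the +1 line count):

  /* asm/io.h: advertise to generic code that ioremap_cache exists */
  #define ioremap_cache ioremap_cache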
2015-10-13  arm64: kasan: fix issues reported by sparse  (Will Deacon; 2 files, +3/-1)
Sparse reports some new issues introduced by the kasan patches:

  arch/arm64/mm/kasan_init.c:91:13: warning: no previous prototype for 'kasan_early_init' [-Wmissing-prototypes]
   void __init kasan_early_init(void)
               ^
  arch/arm64/mm/kasan_init.c:91:13: warning: symbol 'kasan_early_init' was not declared. Should it be static? [sparse]

This patch resolves the problem by adding a prototype for kasan_early_init and marking the function as asmlinkage, since it's only called from head.S.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Acked-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
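The fix is essentially a one-line prototype; a sketch, assuming it lives in asm/kasan.h:

  /* Called only from head.S, hence asmlinkage; the prototype keeps both
   * sparse and -Wmissing-prototypes quiet. */
  asmlinkage void kasan_early_init(void);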
2015-10-12  Documentation/features/KASAN: arm64 supports KASAN now  (Andrey Ryabinin; 1 file, +1/-1)
Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-12  ARM64: kasan: print memory assignment  (Linus Walleij; 1 file, +6/-0)
This prints out the virtual memory assigned to KASan in the boot crawl along with other memory assignments, if and only if KASan is activated. Example dmesg from the Juno Development board:

  Memory: 1691156K/2080768K available (5465K kernel code, 444K rwdata, 2160K rodata, 340K init, 217K bss, 373228K reserved, 16384K cma-reserved)
  Virtual kernel memory layout:
      kasan   : 0xffffff8000000000 - 0xffffff9000000000   (    64 GB)
      vmalloc : 0xffffff9000000000 - 0xffffffbdbfff0000   (   182 GB)
      vmemmap : 0xffffffbdc0000000 - 0xffffffbfc0000000   (     8 GB maximum)
                0xffffffbdc2000000 - 0xffffffbdc3fc0000   (    31 MB actual)
      fixed   : 0xffffffbffabfd000 - 0xffffffbffac00000   (    12 KB)
      PCI I/O : 0xffffffbffae00000 - 0xffffffbffbe00000   (    16 MB)
      modules : 0xffffffbffc000000 - 0xffffffc000000000   (    64 MB)
      memory  : 0xffffffc000000000 - 0xffffffc07f000000   (  2032 MB)
        .init : 0xffffffc0007f5000 - 0xffffffc00084a000   (   340 KB)
        .text : 0xffffffc000080000 - 0xffffffc0007f45b4   (  7634 KB)
        .data : 0xffffffc000850000 - 0xffffffc0008bf200   (   445 KB)

Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-12  arm64: add KASAN support  (Andrey Ryabinin; 18 files, +288/-6)
This patch adds arch-specific code for the kernel address sanitizer (see Documentation/kasan.txt). 1/8 of the kernel address space is reserved for shadow memory. There was no hole big enough for this, so the virtual addresses for the shadow were stolen from the vmalloc area. At the early boot stage the whole shadow region is populated with just one physical page (kasan_zero_page). Later, this page is reused as a read-only zero shadow for some memory that KASan currently doesn't track (vmalloc). After mapping the physical memory, pages for shadow memory are allocated and mapped. Functions like memset/memmove/memcpy do a lot of memory accesses, and it is important to catch a bad pointer passed to one of these functions. The compiler's instrumentation cannot do this since these functions are written in assembly, so KASan replaces the memory functions with manually instrumented variants. The original functions are declared as weak symbols so that strong definitions in mm/kasan/kasan.c can replace them; they also have aliases with a '__' prefix in the name, so the non-instrumented variant can be called when needed. Some files are built without kasan instrumentation (e.g. mm/slub.c); in such files the original mem* functions are replaced (via #define) with the prefixed variants to disable memory access checks. Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com> Tested-by: Linus Walleij <linus.walleij@linaro.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
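The '#define' half of that arrangement can be sketched as follows (assuming it lives in asm/string.h and keys off the compiler's per-file __SANITIZE_ADDRESS__ marker):

  /*
   * Files compiled in a KASan kernel but opted out of instrumentation
   * must not call the checked mem* functions, so route them straight to
   * the raw '__'-prefixed implementations.
   */
  #if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)
  #define memcpy(dst, src, len)	__memcpy(dst, src, len)
  #define memmove(dst, src, len)	__memmove(dst, src, len)
  #define memset(s, c, n)		__memset(s, c, n)
  #endif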
2015-10-12  arm64: move PGD_SIZE definition to pgalloc.h  (Andrey Ryabinin; 2 files, +1/-2)
This will be used by KASAN later. Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-12  arm64: atomics: implement native {relaxed, acquire, release} atomics  (Will Deacon; 4 files, +371/-262)
Commit 654672d4ba1a ("locking/atomics: Add _{acquire|release|relaxed}() variants of some atomic operation") introduced a relaxed atomic API to Linux that maps nicely onto the arm64 memory model, including the new ARMv8.1 atomic instructions. This patch hooks up the API to our relaxed atomic instructions, rather than have them all expand to the full-barrier variants as they do currently. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
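A usage sketch of the API being wired up (the helpers atomic_add_return_relaxed, atomic_cmpxchg_acquire and atomic_xchg_release come from the generic API added by 654672d4ba1a; the flag-lock itself is a hypothetical example):

  #include <linux/atomic.h>

  static atomic_t nr_events;	/* statistics: no ordering required */
  static atomic_t owner;	/* hypothetical test-and-set flag */

  static void example(void)
  {
  	/* relaxed: compiles to a bare LDADD/LL-SC loop, no barriers */
  	atomic_add_return_relaxed(1, &nr_events);

  	/* acquire: the critical section cannot float above the lock */
  	while (atomic_cmpxchg_acquire(&owner, 0, 1) != 0)
  		cpu_relax();

  	/* ... critical section ... */

  	/* release: prior stores are visible before the flag clears */
  	atomic_xchg_release(&owner, 0);
  }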
2015-10-12  arm64/efi: isolate EFI stub from the kernel proper  (Ard Biesheuvel; 6 files, +139/-20)
Since arm64 does not use a builtin decompressor, the EFI stub is built into the kernel proper. So far, this has been working fine, but since the stub is in fact a PE/COFF relocatable binary that is executed at an unknown offset in the 1:1 mapping provided by the UEFI firmware, we should not be seamlessly sharing code with the kernel proper, which is a position-dependent executable linked at a high virtual offset. So instead, separate the contents of libstub and its dependencies by putting them into their own namespace, prefixing all of their symbols with __efistub_. This way, we have tight control over which parts of the kernel proper are referenced by the stub. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Reviewed-by: Matt Fleming <matt.fleming@intel.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-12  arm64: use ENDPIPROC() to annotate position independent assembler routines  (Ard Biesheuvel; 10 files, +24/-13)
For more control over which functions are called with the MMU off or with the UEFI 1:1 mapping active, annotate some assembler routines as position independent. This is done by introducing ENDPIPROC(), which replaces the ENDPROC() declaration of those routines. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
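The annotation itself is compact; a sketch of the macro (the __pi_ alias prefix follows this commit series; the exact layout is an assumption):

  /* asm/assembler.h: emit a '__pi_' (position independent) alias for the
   * routine, then end it as usual; code running with the MMU off or in
   * the UEFI 1:1 map should reference only the __pi_-prefixed symbols. */
  #define ENDPIPROC(x)			\
  	.globl	__pi_##x;		\
  	.type	__pi_##x, %function;	\
  	.set	__pi_##x, x;		\
  	.size	__pi_##x, . - x;	\
  	ENDPROC(x)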
2015-10-12  arm64/efi: remove /chosen/linux,uefi-stub-kern-ver DT property  (Ard Biesheuvel; 2 files, +0/-11)
With the stub to kernel interface being promoted to a proper interface so that other agents than the stub can boot the kernel proper in EFI mode, we can remove the linux,uefi-stub-kern-ver field, considering that its original purpose was to prevent this from happening in the first place. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Reviewed-by: Matt Fleming <matt.fleming@intel.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-12  arm64: Fix missing #include in hw_breakpoint.c  (Catalin Marinas; 1 file, +1/-0)
A prior commit, which detects the hw breakpoint ABI behaviour based on the target state, missed the asm/compat.h include, so the build fails with !CONFIG_COMPAT. Fixes: 8f48c0629049 ("arm64: hw_breakpoint: use target state to determine ABI behaviour") Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-09  arm64: fix a migrating irq bug when hotplug cpu  (Yang Yingliang; 4 files, +3/-64)
When a cpu is disabled, all of its irqs are migrated to another cpu. In some cases the new affinity differs from the old one, which then needs to be updated; but when irq_set_affinity's return value is IRQ_SET_MASK_OK_DONE, the old affinity cannot be updated. Fix this by using irq_do_set_affinity. Migrating interrupts is also a core code matter, so use the generic function irq_migrate_all_off_this_cpu() from kernel/irq/migration.c to migrate interrupts. Cc: Jiang Liu <jiang.liu@linux.intel.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Russell King - ARM Linux <linux@arm.linux.org.uk> Cc: Hanjun Guo <hanjun.guo@linaro.org> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-08  arm64: Mark kernel page ranges contiguous  (Jeremy Linton; 1 file, +61/-8)
With 64k pages, the next larger segment size is 512M. The Linux kernel also uses different protection flags to cover its code and data. Because of this requirement, the vast majority of the kernel code and data structures end up being mapped with 64k pages instead of the larger pages common with a 4k page kernel. Recent ARM processors support a contiguous bit in the page tables which allows a single TLB entry to cover a range larger than a single PTE if that range is mapped into physically contiguous RAM, so it's a good idea for the kernel to set this flag. Some basic micro benchmarks show it can significantly reduce the number of L1 dTLB refills. Add a boot option to enable/disable CONT marking, and fix a bug found by Steve Capper. Signed-off-by: Jeremy Linton <jeremy.linton@arm.com> [catalin.marinas@arm.com: remove CONFIG_ARM64_CONT_PTE altogether] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-08  arm64: Make the kernel page dump utility aware of the CONT bit  (Jeremy Linton; 1 file, +17/-1)
The kernel page dump utility needs to be aware of the CONT bit before it breaks up page ranges for display. Signed-off-by: Jeremy Linton <jeremy.linton@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-08  arm64: Default kernel pages should be contiguous  (Jeremy Linton; 1 file, +1/-0)
The default page attributes for a PMD being broken should have the CONT bit set. Create a new definition for an early-boot range of PTEs that are contiguous. Signed-off-by: Jeremy Linton <jeremy.linton@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-08  arm64: Macros to check/set/unset the contiguous bit  (Jeremy Linton; 1 file, +11/-0)
Add the supporting macros to check if the contiguous bit is set, set the bit, or clear it in a PTE entry. Signed-off-by: Jeremy Linton <jeremy.linton@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
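A sketch of the three helpers, following the pattern of the existing pte_mk* accessors in asm/pgtable.h (the set_pte_bit/clear_pte_bit helpers are an assumption based on those neighbours):

  #define pte_cont(pte)		(!!(pte_val(pte) & PTE_CONT))

  static inline pte_t pte_mkcont(pte_t pte)
  {
  	return set_pte_bit(pte, __pgprot(PTE_CONT));
  }

  static inline pte_t pte_mknoncont(pte_t pte)
  {
  	return clear_pte_bit(pte, __pgprot(PTE_CONT));
  }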
2015-10-08  arm64: PTE/PMD contiguous bit definition  (Jeremy Linton; 1 file, +9/-0)
Define the bit positions in the PTE and PMD for the contiguous bit. Signed-off-by: Jeremy Linton <jeremy.linton@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
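In the ARMv8 translation table descriptor format the contiguous hint is bit 52 at both levels, so the definitions reduce to (a sketch in the style of asm/pgtable-hwdef.h):

  #define PTE_CONT	(_AT(pteval_t, 1) << 52)	/* Contiguous range */
  #define PMD_SECT_CONT	(_AT(pmdval_t, 1) << 52)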
2015-10-08  arm64: Add contiguous page flag shifts and constants  (Jeremy Linton; 1 file, +7/-1)
Add the number of pages required to form a contiguous range, as well as some supporting constants. Signed-off-by: Jeremy Linton <jeremy.linton@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
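Concretely, the hardware groups 16 entries with 4k granules and 32 entries with 64k granules; a sketch of the constants (spellings are assumptions, the arithmetic follows the architecture):

  #ifdef CONFIG_ARM64_64K_PAGES
  #define CONT_SHIFT	5		/* 32 x 64k pages = one 2M range */
  #else
  #define CONT_SHIFT	4		/* 16 x 4k pages = one 64k range */
  #endif
  #define CONT_PTES	(1 << CONT_SHIFT)
  #define CONT_PTE_SIZE	(CONT_PTES * PAGE_SIZE)
  #define CONT_PTE_MASK	(~(CONT_PTE_SIZE - 1))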
2015-10-07  MAINTAINERS: add myself as arm perf reviewer  (Mark Rutland; 1 file, +1/-0)
As suggested by Will Deacon, add myself as a reviewer of the ARM PMU profiling and debugging code. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-07  MAINTAINERS: update ARM PMU profiling and debugging for arm64  (Mark Rutland; 1 file, +4/-4)
Will Deacon maintains the profiling and debugging code under both arch/arm and arch/arm64. Update MAINTAINERS to reflect this, in preparation for adding myself as a reviewer of said code. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-07  arm64: dts: juno: describe PMUs separately  (Mark Rutland; 2 files, +22/-14)
The A57 and A53 PMUs in Juno support different events, so describe them separately in both the Juno and Juno R1 DTs. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Liviu Dudau <liviu.dudau@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-07  arm64: perf: add Cortex-A57 support  (Mark Rutland; 2 files, +62/-0)
The Cortex-A57 PMU supports a few events outside of the required PMUv3 set that are rather useful. This patch adds the event map data for said events. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-07  arm64: perf: add Cortex-A53 support  (Mark Rutland; 2 files, +63/-2)
The Cortex-A53 PMU supports a few events outside of the required PMUv3 set that are rather useful. This patch adds the event map data for said events. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-07  arm64: perf: move to shared arm_pmu framework  (Mark Rutland; 4 files, +93/-951)
Now that the arm_pmu framework has been factored out to drivers/perf we can make use of it for arm64, gaining support for heterogeneous PMUs and unifying the two codebases before they diverge further. The as yet unused PMU name for PMUv3 is changed to armv8_pmuv3, matching the style previously applied to the 32-bit PMUs. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-07  arm64: hw_breakpoint: use target state to determine ABI behaviour  (Will Deacon; 1 file, +16/-2)
The arm64 hw_breakpoint interface is slightly less flexible than its 32-bit counterpart, thanks to some changes in the architecture rendering unaligned watchpoint addresses obsolete for AArch64. However, in a multi-arch environment (i.e. debugging a 32-bit target with a 64-bit GDB under a 64-bit kernel), we need to provide a feature-compatible interface to GDB in order for debugging to function correctly. This patch adds a new helper, is_compat_bp, to our hw_breakpoint implementation which changes the interface behaviour based on the architecture of the debug target as opposed to the debugger itself. This allows debugging to function as expected for multi-arch configurations without relying on deprecated architectural behaviours when debugging native applications. Cc: Yao Qi <yao.qi@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
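A sketch of the helper (the NULL-target case covers per-cpu breakpoints; the comment wording paraphrases the rationale above):

  static int is_compat_bp(struct perf_event *bp)
  {
  	struct task_struct *tsk = bp->hw.target;

  	/*
  	 * tsk is NULL for per-cpu (non-ptrace) breakpoints; there is no
  	 * notion of a "compat CPU", so use the native interface and avoid
  	 * relying on deprecated unaligned-watchpoint behaviour.
  	 */
  	return tsk && is_compat_thread(task_thread_info(tsk));
  }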
2015-10-07  arm64: mm: remove dsb from update_mmu_cache  (Will Deacon; 1 file, +3/-3)
update_mmu_cache() consists of a dsb(ishst) instruction so that new user mappings are guaranteed to be visible to the page table walker on exception return. In reality this can be a very expensive operation which is rarely needed. Removing this barrier shows a modest improvement in hackbench scores and, in the worst case, we re-take the user fault and establish that there was nothing to do. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
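After the change the hook is a deliberate no-op; a sketch, with the trade-off recorded where the barrier used to be:

  static inline void update_mmu_cache(struct vm_area_struct *vma,
  				    unsigned long addr, pte_t *ptep)
  {
  	/*
  	 * Doing nothing means we may occasionally re-take a user fault we
  	 * just fixed up, but an unconditional dsb(ishst) here would
  	 * penalise the common fast path.
  	 */
  }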
2015-10-07  arm64: tlb: remove redundant barrier from __flush_tlb_pgtable  (Will Deacon; 1 file, +0/-1)
__flush_tlb_pgtable is used to invalidate intermediate page table entries after they have been cleared and are about to be freed. Since the pXd_clear functions imply memory barriers, we don't need the extra one here. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-07  arm64: mm: kill mm_cpumask usage  (Will Deacon; 2 files, +0/-9)
mm_cpumask isn't actually used for anything on arm64, so remove all the code trying to keep it up-to-date. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-07  arm64: switch_mm: simplify mm and CPU checks  (Will Deacon; 2 files, +5/-3)
switch_mm performs some checks to try and avoid entering the ASID allocator:

  (1) If we're switching to the init_mm (no user mappings), then simply set a reserved TTBR0 value with no page table (the zero page).

  (2) If prev == next *and* the mm_cpumask indicates that we've run on this CPU before, then we can skip the allocator.

However, there is plenty of redundancy here. With the new ASID allocator, if prev == next, then we know that our ASID is valid and do not need to worry about re-allocation. Consequently, we can drop the mm_cpumask check in (2) and move the prev == next check before the init_mm check, since if prev == next == init_mm then there's nothing to do.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
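The resulting flow, sketched (check_and_switch_context is the entry point of the rewritten ASID allocator from this series; exact helper names are assumptions):

  static inline void
  switch_mm(struct mm_struct *prev, struct mm_struct *next,
  	  struct task_struct *tsk)
  {
  	unsigned int cpu = smp_processor_id();

  	/* prev == next implies a valid ASID, so there is nothing to do */
  	if (prev == next)
  		return;

  	/* init_mm has no user mappings: just install the reserved TTBR0 */
  	if (next == &init_mm) {
  		cpu_set_reserved_ttbr0();
  		return;
  	}

  	check_and_switch_context(next, cpu);
  }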
2015-10-07  arm64: tlbflush: avoid flushing when fullmm == 1  (Will Deacon; 1 file, +15/-11)
The TLB gather code sets fullmm=1 when tearing down the entire address space for an mm_struct on exit or execve. Given that the ASID allocator will never re-allocate a dirty ASID, this flushing is not needed and can simply be avoided in the flushing code. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
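So the gather's flush hook can simply return early; a sketch (field names follow the generic mmu_gather structure):

  static inline void tlb_flush(struct mmu_gather *tlb)
  {
  	struct vm_area_struct vma = { .vm_mm = tlb->mm, };

  	/*
  	 * The whole address space is going away and its dirty ASID will
  	 * never be re-allocated, so there is nothing to invalidate.
  	 */
  	if (tlb->fullmm)
  		return;

  	flush_tlb_range(&vma, tlb->start, tlb->end);
  }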
2015-10-07  arm64: tlbflush: remove redundant ASID casts to (unsigned long)  (Will Deacon; 1 file, +4/-5)
The ASID macro returns a 64-bit (long long) value, so there is no need to cast to (unsigned long) before shifting prior to a TLBI operation. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-07  arm64: mm: rewrite ASID allocator and MM context-switching code  (Will Deacon; 7 files, +166/-169)
Our current switch_mm implementation suffers from a number of problems:

  (1) The ASID allocator relies on IPIs to synchronise the CPUs on a rollover event.

  (2) Because of (1), we cannot allocate ASIDs with interrupts disabled and therefore make use of a TIF_SWITCH_MM flag to postpone the actual switch to finish_arch_post_lock_switch.

  (3) We run context switch with a reserved (invalid) TTBR0 value, even though the ASID and pgd are updated atomically.

  (4) We take a global spinlock (cpu_asid_lock) during context-switch.

  (5) We use h/w broadcast TLB operations when they are not required (e.g. in flush_context).

This patch addresses these problems by rewriting the ASID algorithm to match the bitmap-based arch/arm/ implementation more closely. This in turn allows us to remove much of the complications surrounding switch_mm, including the ugly thread flag.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
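A heavily simplified sketch of the fast/slow split at the core of the rewrite (identifiers such as asid_generation and ASID_BITS are illustrative, not verbatim):

  static void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
  {
  	u64 asid = atomic64_read(&mm->context.id);

  	/* fast path: our ASID is from the current generation, no lock */
  	if (!((asid ^ atomic64_read(&asid_generation)) >> ASID_BITS)) {
  		cpu_switch_mm(mm->pgd, mm);
  		return;
  	}

  	raw_spin_lock(&cpu_asid_lock);
  	/*
  	 * Slow path: allocate a fresh ASID from the bitmap; on rollover,
  	 * bump the generation and flush the local TLB instead of IPI-ing
  	 * every CPU as the old allocator did.
  	 */
  	raw_spin_unlock(&cpu_asid_lock);
  	cpu_switch_mm(mm->pgd, mm);
  }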
2015-10-07  arm64: flush: use local TLB and I-cache invalidation  (Will Deacon; 7 files, +22/-7)
There are a number of places where a single CPU is running with a private page-table and we need to perform maintenance on the TLB and I-cache in order to ensure correctness, but do not require the operation to be broadcast to other CPUs. This patch adds local variants of tlb_flush_all and __flush_icache_all to support these use-cases and updates the callers respectively. __local_flush_icache_all also implies an isb, since it is intended to be used synchronously. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: David Daney <david.daney@cavium.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
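A sketch of the two local variants, using non-shareable (nsh) barriers in place of the inner-shareable ones used by the broadcast versions:

  static inline void local_flush_tlb_all(void)
  {
  	dsb(nshst);
  	asm("tlbi	vmalle1");	/* local CPU only, no IS broadcast */
  	dsb(nsh);
  	isb();
  }

  static inline void __local_flush_icache_all(void)
  {
  	asm("ic	iallu");		/* invalidate I-cache, this PE only */
  	dsb(nsh);
  	isb();				/* implied isb: used synchronously */
  }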
2015-10-07  arm64: proc: de-scope TLBI operation during cold boot  (Will Deacon; 1 file, +2/-2)
When cold-booting a CPU, we must invalidate any junk entries from the local TLB prior to enabling the MMU. This doesn't require broadcasting within the inner-shareable domain, so de-scope the operation to apply only to the local CPU. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Tested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-07  arm64: mm: remove unused cpu_set_idmap_tcr_t0sz function  (Will Deacon; 1 file, +12/-23)
With commit b08d4640a3dc ("arm64: remove dead code"), cpu_set_idmap_tcr_t0sz is no longer called and can therefore be removed from the kernel. This patch removes the function and effectively inlines the helper function __cpu_set_tcr_t0sz into cpu_set_default_tcr_t0sz. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-07  arm64: introduce VA_START macro - the first kernel virtual address.  (Andrey Ryabinin; 2 files, +3/-1)
In order to not use lengthy (UL(0xffffffffffffffff) << VA_BITS) everywhere, replace it with VA_START. Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
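The definition follows directly from the text above:

  /* First kernel virtual address: all kernel VAs have the top bits set. */
  #define VA_START	(UL(0xffffffffffffffff) << VA_BITS)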
2015-10-07  arm64: copy_to-from-in_user optimization using copy template  (Feng Kan; 3 files, +120/-92)
This patch optimizes copy_to_user/copy_from_user/copy_in_user for the arm64 architecture. The copy template is used as a template file for all the copy*.S files, with minor changes made to it to accommodate the copy to/from/in user files. Signed-off-by: Feng Kan <fkan@apm.com> Signed-off-by: Balamurugan Shanmugam <bshanmugam@apm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-07  arm64: Change memcpy in kernel to use the copy template file  (Feng Kan; 2 files, +219/-153)
This converts memcpy.S to use the copy template file. The copy template file was originally based on memcpy.S. Signed-off-by: Feng Kan <fkan@apm.com> Signed-off-by: Balamurugan Shanmugam <bshanmugam@apm.com> [catalin.marinas@arm.com: removed tmp3(w) .req statements as they are not used] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-07  arm64: defconfig: Enable samsung serial and mmc  (Alim Akhtar; 1 file, +8/-0)
This patch updates the defconfig, adding Samsung serial and Synopsys DesignWare MMC configs related to the Exynos SoC. Signed-off-by: Alim Akhtar <alim.akhtar@samsung.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-04  Linux 4.3-rc4  (Linus Torvalds; 1 file, +1/-1)
2015-10-04  MIPS: scall: Always run the seccomp syscall filters  (Markos Chandras; 4 files, +42/-73)
The MIPS syscall handler code used to return -ENOSYS on invalid syscalls. Whilst this is expected, it caused problems for seccomp filters because said filters never had the chance to run, since the code returned -ENOSYS before triggering them. This caused problems on the chromium testsuite for filters looking for invalid syscalls. This has now changed and the seccomp filters are always run, even if the syscall is invalid; we return -ENOSYS once we return from the seccomp filters. Moreover, similar codepaths have been merged in the process, which somewhat simplifies the overall syscall code. Signed-off-by: Markos Chandras <markos.chandras@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/11236/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-10-02  clocksource: Fix abs() usage w/ 64bit values  (John Stultz; 1 file, +1/-1)
This patch fixes one case where abs() was being used with 64-bit nanosecond values, where the result may be capped at 32 bits. This could potentially cause watchdog false negatives on 32-bit systems, so the patch addresses the issue by using abs64(). Signed-off-by: John Stultz <john.stultz@linaro.org> Cc: Prarit Bhargava <prarit@redhat.com> Cc: Richard Cochran <richardcochran@gmail.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: stable@vger.kernel.org Link: http://lkml.kernel.org/r/1442279124-7309-2-git-send-email-john.stultz@linaro.org Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
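The pattern being fixed, sketched (the kernel's abs() truncates its argument to a C long, which is 32 bits on these systems; mark_unstable() and the variable names are illustrative, not the clocksource code verbatim):

  s64 delta = cs_nsec - wd_nsec;		/* 64-bit nanosecond difference */

  if (abs(delta) > WATCHDOG_THRESHOLD)	/* BROKEN: delta truncated to long */
  	mark_unstable();

  if (abs64(delta) > WATCHDOG_THRESHOLD)	/* FIXED: full 64-bit magnitude */
  	mark_unstable();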
2015-10-02  irqchip/gic-v3-its: Count additional LPIs for the aliased devices  (Marc Zyngier; 1 file, +1/-1)
When configuring the interrupt mapping for a new device, we iterate over all the possible aliases to account for their maximum MSI allocation. This was introduced by e8137f4f5088 ("irqchip: gicv3-its: Iterate over PCI aliases to generate ITS configuration"). Turns out that the code doing that is a bit braindead, and repeatedly accounts for the same device over and over. Fix this by counting the actual alias that is passed to us by the core code. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Cc: linux-arm-kernel@lists.infradead.org Cc: Alex Shi <alex.shi@linaro.org> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: David Daney <ddaney.cavm@gmail.com> Cc: Jason Cooper <jason@lakedaemon.net> Link: http://lkml.kernel.org/r/1443800646-8074-3-git-send-email-marc.zyngier@arm.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2015-10-02  irqchip/gic-v3-its: Silence warning when its_lpi_alloc_chunks gets inlined  (Marc Zyngier; 1 file, +3/-0)
More aggressive inlining in recent versions of GCC has uncovered a new set of warnings:

  drivers/irqchip/irq-gic-v3-its.c: In function its_msi_prepare:
  drivers/irqchip/irq-gic-v3-its.c:1148:26: warning: lpi_base may be used uninitialized in this function [-Wmaybe-uninitialized]
    dev->event_map.lpi_base = lpi_base;
                            ^
  drivers/irqchip/irq-gic-v3-its.c:1116:6: note: lpi_base was declared here
    int lpi_base;
        ^
  drivers/irqchip/irq-gic-v3-its.c:1149:25: warning: nr_lpis may be used uninitialized in this function [-Wmaybe-uninitialized]
    dev->event_map.nr_lpis = nr_lpis;
                           ^
  drivers/irqchip/irq-gic-v3-its.c:1117:6: note: nr_lpis was declared here
    int nr_lpis;
        ^

The warning is fairly benign (there is no code path that could actually use the uninitialized variables), but let's silence it anyway by zeroing the variables on the error path.
Reported-by: Alex Shi <alex.shi@linaro.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: David Daney <ddaney.cavm@gmail.com>
Cc: Jason Cooper <jason@lakedaemon.net>
Link: http://lkml.kernel.org/r/1443800646-8074-2-git-send-email-marc.zyngier@arm.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2015-10-02  Revert "Input: synaptics - fix handling of disabling gesture mode"  (Dmitry Torokhov; 1 file, +4/-8)
This reverts commit e51e38494a8ecc18650efb0c840600637891de2c: we actually do want the device to work in extended W mode, as this is the mode that allows us receiving multiple contact information. Cc: stable@vger.kernel.org
2015-10-02  MIPS: Octeon: Fix kernel panic on startup from memory corruption  (Matt Bennett; 1 file, +1/-1)
During development it was found that a number of builds would panic during the kernel init process, more specifically in 'delayed_fput()'. The panic showed the kernel trying to access a memory address of '0xb7fdc00' while traversing the 'delayed_fput_list' structure. Comparing this memory address to the value of the pointer used on builds that did not panic confirmed that the pointer on crashing builds must have been corrupted at some stage earlier in the init process. By traversing the list earlier and earlier in the code it was found that 'plat_mem_setup()' was responsible for corrupting the list. Specifically the line:

  memory = cvmx_bootmem_phy_alloc(mem_alloc_size,
                                  __pa_symbol(&__init_end), -1,
                                  0x100000,
                                  CVMX_BOOTMEM_FLAG_NO_LOCKING);

Which would eventually call:

  cvmx_bootmem_phy_set_size(new_ent_addr,
                            cvmx_bootmem_phy_get_size(ent_addr) -
                            (desired_min_addr - ent_addr));

Where 'new_ent_addr'=0x4800000 (the address of 'delayed_fput_list') and the second argument (size)=0xb7fdc00 (the address causing the kernel panic). The job of this part of 'plat_mem_setup()' is to allocate chunks of memory for the kernel to use. At the start of each chunk of memory the size of the chunk is written, hence the value 0xb7fdc00 is written onto memory at 0x4800000, therefore the kernel panics when it goes back to access 'delayed_fput_list' later on in the initialisation process. On builds that were not crashing it was found that the compiler had placed 'delayed_fput_list' at 0x4800008, meaning it wasn't corrupted (but something else in memory was overwritten). As can be seen in the first function call above, the code begins to allocate chunks of memory beginning from the symbol '__init_end'. The MIPS linker script (vmlinux.lds.S) however defines the .bss section to begin after '__init_end'. Therefore memory within the .bss section is allocated to the kernel to use (System.map shows 'delayed_fput_list' and other kernel structures to be in .bss). To stop the kernel panic (and the .bss section being corrupted) memory should begin being allocated from the symbol '_end'.
Signed-off-by: Matt Bennett <matt.bennett@alliedtelesis.co.nz>
Acked-by: David Daney <david.daney@cavium.com>
Cc: linux-mips@linux-mips.org
Cc: aleksey.makarov@auriga.com
Patchwork: https://patchwork.linux-mips.org/patch/11251/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>