path: root/tools/perf/scripts/python/export-to-postgresql.py
Age | Commit message | Author | Files | Lines
2019-02-26ARM: 8848/1: virt: Align GIC version check with arm64 counterpartVladimir Murzin1-2/+2
arm64 has relaxed the GIC version check at the early boot stage due to an update of the GIC architecture; let's align ARM with that. A Fixes tag is included to help backports (even though the code was correct at the time of writing). Fixes: e59941b9b381 ("ARM: 8527/1: virt: enable GICv3 system registers") Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2019-02-26ARM: 8847/1: pm: fix HYP/SVC mode mismatch when MCPM is usedMarek Szyprowski3-1/+14
MCPM does a soft reset of the CPUs and uses the common cpu_resume() routine to perform low-level platform initialization. This results in an attempt to install the HYP stubs a second time for each CPU, which triggers false HYP/SVC mode mismatch detection. The HYP stubs are already installed at the beginning of kernel initialization, on the boot CPU (head.S) or in secondary_startup() for the other CPUs. To fix this issue, the MCPM code should use a cpu_resume() routine that does not install the HYP stubs. This change fixes the HYP/SVC mode mismatch on Samsung Exynos5422-based Odroid XU3/XU4/HC1 boards. Fixes: 3721924c8154 ("ARM: 8081/1: MCPM: provide infrastructure to allow for MCPM loopback") Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Acked-by: Nicolas Pitre <nico@linaro.org> Tested-by: Anand Moon <linux.amoon@gmail.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2019-02-26ARM: 8845/1: use unified assembler in c filesStefan Agner3-3/+6
Use unified assembler syntax (UAL) in inline assembler. Divided syntax is considered deprecated. This will also allow building the kernel with LLVM's integrated assembler. When compiling non-Thumb2 code, GCC always emits a ".syntax divided" at the beginning of the inline assembly, which makes the integrated assembler fail. Since GCC 5 there is the -masm-syntax-unified GCC option, which makes GCC assume unified-syntax asm and hence emit ".syntax unified" even in ARM mode. However, the option is broken since GCC version 6 (see GCC PR88648 [1]). Work around this by adding ".syntax unified" as part of the inline assembly. [0] https://gcc.gnu.org/onlinedocs/gcc/ARM-Options.html#index-masm-syntax-unified [1] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=88648 Signed-off-by: Stefan Agner <stefan@agner.ch> Acked-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
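A minimal illustration of the described workaround (not a hunk from the patch): prefixing inline assembly with ".syntax unified" so the assembler no longer sees the ".syntax divided" that GCC emits for ARM-mode inline asm while -masm-syntax-unified is broken. The function name is made up for this sketch.

  static inline void example_barrier(void)
  {
  	asm volatile(
  		".syntax unified\n"	/* override GCC's ".syntax divided" */
  		"dsb\n"
  		: : : "memory");
  }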
2019-02-26ARM: 8844/1: use unified assembler in assembly filesStefan Agner31-124/+124
Use unified assembler syntax (UAL) in assembly files. Divided syntax is considered deprecated. This will also allow building the kernel with LLVM's integrated assembler. Signed-off-by: Stefan Agner <stefan@agner.ch> Acked-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2019-02-26ARM: 8843/1: use unified assembler in headersStefan Agner3-14/+14
Use unified assembler syntax (UAL) in headers. Divided syntax is considered deprecated. This will also allow building the kernel with LLVM's integrated assembler. Signed-off-by: Stefan Agner <stefan@agner.ch> Acked-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2019-02-26ARM: 8841/1: use unified assembler in macrosStefan Agner3-4/+4
Use unified assembler syntax (UAL) in macros. Divided syntax is considered deprecated. This will also allow building the kernel with LLVM's integrated assembler. Signed-off-by: Stefan Agner <stefan@agner.ch> Acked-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2019-02-26ARM: 8840/1: use a raw_spinlock_t in unwindSebastian Andrzej Siewior1-7/+7
Mostly, unwinding is done with irqs enabled; however, SLUB may call it with irqs disabled while creating a new SLUB cache. I had a system freeze while loading a module which called kmem_cache_create() on init. That means SLUB's __slab_alloc() disabled interrupts and then:
  ->new_slab_objects()
    ->new_slab()
      ->setup_object()
        ->setup_object_debug()
          ->init_tracking()
            ->set_track()
              ->save_stack_trace()
                ->save_stack_trace_tsk()
                  ->walk_stackframe()
                    ->unwind_frame()
                      ->unwind_find_idx()
                        =>spin_lock_irqsave(&unwind_lock);
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2019-02-26ARM: 8839/1: kprobe: make patch_lock a raw_spinlock_tYang Shi1-3/+3
When running kprobes on an -rt kernel, the bug below is caught:
  |BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:931
  |in_atomic(): 1, irqs_disabled(): 128, pid: 14, name: migration/0
  |Preemption disabled at:[<802f2b98>] cpu_stopper_thread+0xc0/0x140
  |CPU: 0 PID: 14 Comm: migration/0 Tainted: G O 4.8.3-rt2 #1
  |Hardware name: Freescale LS1021A
  |[<8025a43c>] (___might_sleep)
  |[<80b5b324>] (rt_spin_lock)
  |[<80b5c31c>] (__patch_text_real)
  |[<80b5c3ac>] (patch_text_stop_machine)
  |[<802f2920>] (multi_cpu_stop)
Since patch_text_stop_machine() is called from stop_machine(), which disables IRQs, a sleeping lock must not be used in this atomic context, so convert patch_lock to a raw lock.
Signed-off-by: Yang Shi <yang.shi@linaro.org> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Reviewed-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
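A sketch of the kind of conversion described above (not the literal hunk): a raw_spinlock_t stays a spinning lock on PREEMPT_RT, so it is safe to take in the stop_machine() context shown in the trace. The function name is made up for illustration.

  #include <linux/spinlock.h>

  static DEFINE_RAW_SPINLOCK(patch_lock);

  static void example_patch_text(void)
  {
  	unsigned long flags;

  	raw_spin_lock_irqsave(&patch_lock, flags);	/* was spin_lock_irqsave() */
  	/* ... write the new instruction into the kernel text ... */
  	raw_spin_unlock_irqrestore(&patch_lock, flags);
  }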
2019-02-26ARM: 8837/1: coresight: etmv4: Update ID register table to add UCI supportMike Leach2-9/+20
Adds a macro to enable UCI entries to be added to AMBA ID tables, and updates the ID register tables to contain a UCI entry for the A35 ETM device to allow correct matching of the driver in the amba bus code. Signed-off-by: Mike Leach <mike.leach@linaro.org> Reviewed-by: Mathieu Poirier <mathieu.poirier@linaro.org> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Tested-by: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2019-02-26ARM: 8836/1: drivers: amba: Update component matching to use the CoreSight UCI values.Mike Leach2-8/+43
The patches provide an update of amba_device and the matching code to handle the additional registers required for the Class 0x9 (CoreSight) UCI. The *data pointer in the amba_id is used by the driver to provide extended ID register values for matching. CoreSight components where the PID/CID pair is currently sufficient for unique identification need not provide this additional information. Signed-off-by: Mike Leach <mike.leach@linaro.org> Reviewed-by: Mathieu Poirier <mathieu.poirier@linaro.org> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Tested-by: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2019-02-26ARM: 8838/1: drivers: amba: Updates to component identification for driver matching.Mike Leach5-63/+90
The CoreSight specification (ARM IHI 0029E) updates the ID register requirements for components on an AMBA bus, to cover both traditional ARM PrimeCell type devices and newer CoreSight and other components. The Peripheral ID (PID) / Component ID (CID) pair is extended in certain cases to uniquely identify components. CoreSight components related to a single function can share Peripheral ID values, and must be further identified using a Unique Component Identifier (UCI); e.g. the ETM, CTI, PMU and Debug hardware of the A35 all share the same PID. Bits [15:12] of the CID are defined to be the device class:
  Class 0xF remains for PrimeCell and legacy components.
  Class 0x9 defines the component as CoreSight (CORESIGHT_CID above).
  Class 0x0, 0x1, 0xB, 0xE define components that do not have driver support at present.
  Class 0x2-0x8, 0xA and 0xC-0xD are presently reserved.
The specification further defines which classes of device use the standard CID/PID pair, and when additional ID registers are required. This patch introduces the amba_cs_uci_id structure, which will be used in all CoreSight drivers for identification via the private data pointer in the amba_id structure. Existing drivers that currently use the amba_id->data pointer for private data are updated to use the amba_cs_uci_id->data pointer. Macros and inline functions are added to simplify this code. Signed-off-by: Mike Leach <mike.leach@linaro.org> Reviewed-by: Mathieu Poirier <mathieu.poirier@linaro.org> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Tested-by: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
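A hedged sketch of the class decoding described above; the helper and constant names here are illustrative only, not the kernel's actual API.

  #include <linux/types.h>

  #define EXAMPLE_CID_CLASS_SHIFT	12
  #define EXAMPLE_CID_CLASS_MASK	0xf
  #define EXAMPLE_CLASS_CORESIGHT	0x9	/* Class 0x9: CoreSight component */

  static inline bool example_needs_uci(u32 cid)
  {
  	u32 class = (cid >> EXAMPLE_CID_CLASS_SHIFT) & EXAMPLE_CID_CLASS_MASK;

  	/* Only CoreSight components need the extra UCI registers, to tell
  	 * apart parts that share a PID (e.g. the A35 ETM/CTI/PMU/Debug). */
  	return class == EXAMPLE_CLASS_CORESIGHT;
  }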
2019-02-12ARM: 8835/1: dma-mapping: Clear DMA ops on teardownRobin Murphy1-0/+2
Installing the appropriate non-IOMMU DMA ops in arm_iommu_detach_device() serves the case where IOMMU-aware drivers choose to control their own mapping but still make DMA API calls. However, it also affects the case when the arch code itself tears down the mapping upon driver unbinding, where the ops now get left in place and can inhibit arch_setup_dma_ops() on subsequent re-probe attempts. Fix the latter case by making sure that arch_teardown_dma_ops() cleans up whenever the ops were automatically installed by its counterpart. Reported-by: Tobias Jakobi <tjakobi@math.uni-bielefeld.de> Reported-by: Marek Szyprowski <m.szyprowski@samsung.com> Fixes: 1874619a7df4 "ARM: dma-mapping: Set proper DMA ops in arm_iommu_detach_device()" Tested-by: Tobias Jakobi <tjakobi@math.uni-bielefeld.de> Tested-by: Thierry Reding <treding@nvidia.com> Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2019-02-12ARM: 8834/1: Fix: kprobes: optimized kprobes illegal instructionMathieu Desnoyers1-1/+1
commit e46daee53bb5 ("ARM: 8806/1: kprobes: Fix false positive with FORTIFY_SOURCE") introduced a regression in optimized kprobes. It triggers "invalid instruction" oopses when using kprobes instrumentation through lttng and perf. This commit was introduced in kernel v4.20, and has been backported to stable kernels 4.19 and 4.14. This crash was also reported by Hongzhi Song on the redhat bugzilla where the patch was originally introduced. Link: https://bugzilla.redhat.com/show_bug.cgi?id=1639397 Link: https://bugs.lttng.org/issues/1174 Link: https://lore.kernel.org/lkml/342740659.2887.1549307721609.JavaMail.zimbra@efficios.com Fixes: e46daee53bb5 ("ARM: 8806/1: kprobes: Fix false positive with FORTIFY_SOURCE") Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Reported-by: Robert Berger <Robert.Berger@ReliableEmbeddedSystems.com> Tested-by: Robert Berger <Robert.Berger@ReliableEmbeddedSystems.com> Acked-by: Kees Cook <keescook@chromium.org> Cc: Robert Berger <Robert.Berger@ReliableEmbeddedSystems.com> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: William Cohen <wcohen@redhat.com> Cc: Laura Abbott <labbott@redhat.com> Cc: Kees Cook <keescook@chromium.org> Cc: <stable@vger.kernel.org> # v4.14+ Cc: linux-arm-kernel@lists.infradead.org Cc: patches@armlinux.org.uk Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2019-02-12ARM: 8833/1: Ensure that NEON code always compiles with ClangNathan Chancellor4-5/+5
While building arm32 allyesconfig, I ran into the following errors:
  arch/arm/lib/xor-neon.c:17:2: error: You should compile this file with '-mfloat-abi=softfp -mfpu=neon'
  In file included from lib/raid6/neon1.c:27:
  /home/nathan/cbl/prebuilt/lib/clang/8.0.0/include/arm_neon.h:28:2: error: "NEON support not enabled"
Building with V=1 showed NEON_FLAGS getting passed along to Clang, but __ARM_NEON__ was not getting defined. Ultimately, it boils down to Clang only defining __ARM_NEON__ when targeting armv7, rather than armv6k, which is the '-march' value for allyesconfig. From lib/Basic/Targets/ARM.cpp in the Clang source:
  // This only gets set when Neon instructions are actually available, unlike
  // the VFP define, hence the soft float and arch check. This is subtly
  // different from gcc, we follow the intent which was that it should be set
  // when Neon instructions are actually available.
  if ((FPU & NeonFPU) && !SoftFloat && ArchVersion >= 7) {
    Builder.defineMacro("__ARM_NEON", "1");
    Builder.defineMacro("__ARM_NEON__");
    // current AArch32 NEON implementations do not support double-precision
    // floating-point even when it is present in VFP.
    Builder.defineMacro("__ARM_NEON_FP", "0x" + Twine::utohexstr(HW_FP & ~HW_FP_DP));
  }
Ard Biesheuvel recommended explicitly adding '-march=armv7-a' at the beginning of the NEON_FLAGS definitions so that __ARM_NEON__ always gets defined by Clang. This doesn't functionally change anything because that code will only run where NEON is supported, which is implicitly armv7.
Link: https://github.com/ClangBuiltLinux/linux/issues/287
Suggested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Reviewed-by: Stefan Agner <stefan@agner.ch>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2019-02-01ARM: avoid Cortex-A9 livelock on tight dmb loopsRussell King5-4/+17
machine_crash_nonpanic_core() does this:
  while (1)
  	cpu_relax();
because the kernel has crashed, and we have no known safe way to deal with the CPU. So, we place the CPU into an infinite loop which we expect it to never exit - at least not until the system as a whole is reset by some method. In the absence of erratum 754327, this code assembles to:
  b .
In other words, an infinite loop. When erratum 754327 is enabled, this becomes:
  1: dmb
     b 1b
It has been observed on some systems (eg, OMAP4) that, if a crash is triggered, the system tries to kexec into the panic kernel, but fails after taking the secondary CPU down, placing it into one of these loops. This causes the system to livelock, and the most noticeable effect is that the system stops after issuing:
  Loading crashdump kernel...
to the system console. The tested-as-working solution I came up with was to add wfe() to these infinite loops thusly:
  while (1) {
  	cpu_relax();
  	wfe();
  }
which, without 754327, builds to:
  1: wfe
     b 1b
or, with 754327 enabled:
  1: dmb
     wfe
     b 1b
Adding "wfe" does two things depending on the environment we're running under:
- where we're running on bare metal, and the processor implements "wfe", it stops us spinning endlessly in a loop where we're never going to do any useful work.
- if we're running in a VM, it allows the CPU to be given back to the hypervisor and rescheduled for other purposes (maybe a different VM) rather than wasting CPU cycles inside a crashed VM.
However, in light of erratum 794072, Will Deacon wanted to see 10 nops as well - which is reasonable to cover the case where we have erratum 754327 enabled _and_ we have a processor that doesn't implement the wfe hint. So, we now end up with:
  1: wfe
     b 1b
when erratum 754327 is disabled, or:
  1: dmb
     nop
     nop
     nop
     nop
     nop
     nop
     nop
     nop
     nop
     nop
     wfe
     b 1b
when erratum 754327 is enabled. We also get the dmb + 10 nop sequence elsewhere in the kernel, in terminating loops. This is reasonable - it means we get the workaround for erratum 794072 when erratum 754327 is enabled, but still relinquish the dead processor - either by placing it in a lower power mode when wfe is implemented as such, or by returning it to the hypervisor; in the case where wfe is a no-op, we use the workaround specified in erratum 794072 to avoid the problem. These are two entirely orthogonal problems - the 10 nops address erratum 794072, and the wfe is an optimisation that makes the system more efficient when crashed, either in terms of power consumption or by allowing the host/other VMs to make use of the CPU. I don't see any reason not to use kexec() inside a VM - it has the potential to provide automated recovery from a failure of the VM's kernel with the opportunity for saving a crashdump of the failure. A panic() with a reboot timeout won't do that, and reading the libvirt documentation, setting on_reboot to "preserve" won't either (the documentation states "The preserve action for an on_reboot event is treated as a destroy".) Surely it has to be a good thing to avoid having CPUs spinning inside a VM that is doing no useful work.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2019-02-01ARM: smp: remove arch-provided "pen_release"Russell King12-43/+56
Consolidating the "pen_release" stuff amongst the various SoC implementations gives credence to having a CPU holding pen for secondary CPUs. However, this is far from the truth. Many SoC implementations cargo-cult copied various bits of the pen release implementation from the initial Realview/Versatile Express implementation without understanding what it was or why it existed. The reason it existed is because these are _development_ platforms, and some board firmware is unable to individually control the startup of secondary CPUs. Moreover, they do not have a way to power down or reset secondary CPUs for hot-unplug. Hence, the pen_release implementation was designed for ARM Ltd's development platforms to provide a working implementation, even though it is very far from what is required. It was decided a while back to reduce the duplication by consolidating the "pen_release" variable, but this only made the situation worse - we have ended up with several implementations that read this variable but do not write it - again, showing the cargo-cult mentality at work, lack of proper review of new code, and in some cases a lack of testing. While it would be preferable to remove pen_release entirely from the kernel, this is not possible without help from the SoC maintainers, which seems to be lacking. However, I want to remove pen_release from arch code to remove the credence that having it gives. This patch removes pen_release from the arch code entirely, adding private per-SoC definitions for it instead, and explicitly stating that write_pen_release() is cargo-cult copied and should not be copied any further. Rename write_pen_release() in a similar fashion as well. Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2019-02-01ARM: actions: remove boot_lock and pen_releaseRussell King1-15/+0
The actions SMP implementation has several issues:
1. pen_release is only ever read and compared to -1, and is defined in arch/arm/kernel/smp.c to be -1. This test will always succeed.
2. We are already guaranteed to be single-threaded while bringing up a CPU, so the spinlock makes no sense; remove it.
3. owl_secondary_startup() is neither referenced nor defined, so the prototype is redundant; remove it.
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2019-02-01ARM: oxnas: remove CPU hotplug implementationRussell King3-114/+0
The CPU hotplug implementation on this platform is cargo-culted from the plat-versatile implementation, and is buggy. Once a CPU hits the "low power" loop, it will wait for pen_release to be set to the CPU number to wake up again - but nothing in this implementation does that. So, once a CPU has entered cpu_die() it will never, ever leave. Remove this useless cargo-culted implementation. Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2019-02-01ARM: qcom: remove unnecessary boot_lockRussell King1-26/+0
The boot_lock is something that was required for ARM development platforms to ensure that the delay calibration worked properly. This is not necessary for modern platforms that have better bus bandwidth and do not need to calibrate the delay loop for secondary cores. Remove the boot_lock entirely. Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2019-02-01ARM: 8824/1: fix a migrating irq bug when hotplug cpuDietmar Eggemann4-64/+2
Arm TC2 fails cpu hotplug stress test. This issue was tracked down to a missing copy of the new affinity cpumask for the vexpress-spc interrupt into struct irq_common_data.affinity when the interrupt is migrated in migrate_one_irq(). Fix it by replacing the arm specific hotplug cpu migration with the generic irq code. This is the counterpart implementation to commit 217d453d473c ("arm64: fix a migrating irq bug when hotplug cpu"). Tested with cpu hotplug stress test on Arm TC2 (multi_v7_defconfig plus CONFIG_ARM_BIG_LITTLE_CPUFREQ=y and CONFIG_ARM_VEXPRESS_SPC_CPUFREQ=y). The vexpress-spc interrupt (irq=22) on this board is affine to CPU0. Its affinity cpumask now changes correctly e.g. from 0 to 1-4 when CPU0 is hotplugged out. Suggested-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2019-02-01ARM: 8832/1: NOMMU: Limit visibility for CONFIG_FLASH_{MEM_BASE,SIZE}Vladimir Murzin1-0/+2
It looks like usage of CONFIG_FLASH_{MEM_BASE,SIZE} is limited to:
  arch/arm/mm/proc-arm740.S
  arch/arm/mm/proc-arm940.S
  arch/arm/mm/proc-arm946.S
So it might look confusing to see the option for anything except these. Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2019-02-01ARM: 8831/1: NOMMU: pmsa-v8: remove unneeded semicolonPeng Hao1-2/+2
Remove unneeded semicolon. [vladimir] proper tags in subject line Signed-off-by: Peng Hao <peng.hao2@zte.com.cn> Acked-by: Vladimir Murzin <vladimir.murzin@arm.com> Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2019-02-01ARM: 8830/1: NOMMU: Toggle only bits in EXC_RETURN we are really care ofVladimir Murzin4-2/+10
ARMv8M introduces support for the Security extension to the M class; among other things it affects exception handling, especially the encoding of EXC_RETURN. New bits have been added:
  Bit [6] Secure or Non-secure stack
  Bit [5] Default callee register stacking
  Bit [0] Exception Secure
which conflicts with the hard-coded value of EXC_RETURN. In fact, we only care about a few bits:
  Bit [3] Mode (0 - Handler, 1 - Thread)
  Bit [2] Stack pointer selection (0 - Main, 1 - Process)
We can toggle only those bits and leave the other bits as they were on exception entry. That is basically what the patch does: it saves EXC_RETURN when we transition from Thread to Handler mode (the first svc), so the saved value is used later instead of EXC_RET_THREADMODE_PROCESSSTACK.
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
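A conceptual sketch only; the real change lives in the v7-M exception entry assembly, and the macro and function names below are made up for illustration. Per the description, only bit [3] (Mode) and bit [2] (SPSEL) are forced, while the Security extension bits ([6], [5], [0]) keep the value saved at the first svc.

  #include <linux/types.h>

  #define EXAMPLE_EXC_RET_THREAD	(1U << 3)	/* 0 = Handler, 1 = Thread */
  #define EXAMPLE_EXC_RET_PSP	(1U << 2)	/* 0 = Main stack, 1 = Process stack */

  static inline u32 example_exc_return(u32 saved_exc_return)
  {
  	/* Return to Thread mode on the process stack, preserving the
  	 * other EXC_RETURN bits captured on exception entry. */
  	return saved_exc_return | EXAMPLE_EXC_RET_THREAD | EXAMPLE_EXC_RET_PSP;
  }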
2019-02-01ARM: 8829/1: spinlock: use unified assembler language syntaxStefan Agner1-1/+2
Convert the conditional infix to a postfix to make sure this inline assembly is unified syntax. Since gcc assumes non-unified syntax when emitting ARM instructions, make sure to define the syntax as unified. This allows using LLVM's integrated assembler. Signed-off-by: Stefan Agner <stefan@agner.ch> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
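A generic illustration of the infix-to-postfix conversion (not the exact hunk from this patch): in divided syntax a conditional byte load is spelled "ldrneb", while unified syntax puts the condition last, "ldrbne". The function is a made-up example.

  static inline unsigned char example_cond_load(int cond, unsigned char *p)
  {
  	unsigned char v = 0;

  	asm volatile(
  		".syntax unified\n"
  		"teq	%1, #0\n"
  		"ldrbne	%0, [%2]\n"	/* divided syntax would spell this ldrneb */
  		: "+r" (v)
  		: "r" (cond), "r" (p)
  		: "cc", "memory");
  	return v;
  }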
2019-02-01ARM: 8828/1: uaccess: use unified assembler language syntaxStefan Agner1-1/+2
Convert the conditional infix to a postfix to make sure this inline assembly is unified syntax. Since gcc assumes non-unified syntax when emitting ARM instructions, make sure to define the syntax as unified. This allows using LLVM's integrated assembler. Signed-off-by: Stefan Agner <stefan@agner.ch> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2019-02-01ARM: 8827/1: fix argument count to match macro definitionStefan Agner1-1/+1
The macro str8w takes 10 arguments, abort being the 10th. In this particular instantiation the abort argument is passed as the 11th argument, leading to an error when using LLVM's integrated assembler:
  <instantiation>:46:47: error: too many positional arguments
  str8w r0, r3, r4, r5, r6, r7, r8, r9, ip, , abort=19f
                                                ^
  arch/arm/lib/copy_template.S:277:5: note: while in macro instantiation
  18: forward_copy_shift pull=24 push=8
      ^
The argument is not used in the macro, hence this does not change code generation. Signed-off-by: Stefan Agner <stefan@agner.ch> Reviewed-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2019-02-01ARM: 8826/1: mm: initialize pfn limits with find_limits()Doug Berger1-16/+4
The max_low_pfn value must be set before sparse_init() is called to keep the early memblock allocations and frees balanced for kmemleak initialization when sparsemem is enabled. This commit accomplishes that by replacing the local variables min, max_low, and max_high with the global limit variables min_low_pfn, max_low_pfn, and max_pfn respectively in bootmem_init(). The global variables are initialized directly by find_limits() and used in the remainder of the function. Fixes: 9099daed9c69 ("mm: kmemleak: avoid using __va() on addresses that don't have a lowmem mapping") Cc: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Mike Rapoport <rppt@linux.ibm.com> Signed-off-by: Doug Berger <opendmb@gmail.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
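A minimal sketch of the shape of the change, assuming find_limits() takes pointers to the minimum, lowmem and highmem pfn limits as the message implies; this is an illustration, not the verbatim patch.

  /* Fill the global pfn limits directly so max_low_pfn is already valid
   * by the time sparse_init() runs later in bootmem_init(). */
  void __init example_bootmem_init(void)
  {
  	/* Previously: local min/max_low/max_high were filled here and only
  	 * copied to the globals at the end of the function. */
  	find_limits(&min_low_pfn, &max_low_pfn, &max_pfn);

  	/* ... sparse_init() and the rest of the function now use the
  	 * global limit variables ... */
  }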
2019-02-01ARM: 8823/1: Implement pgprot_device()Vincent Whitchurch1-0/+3
This is used when mmapping the PCI resource* files in sys. Because ARM currently lacks an implementation of pgprot_device(), it falls back to pgprot_uncached() (Strongly Ordered), but we should be able to use Device memory instead. Doing this speeds up large writes to the resource files by about 40% on one of my systems. It also ensures that mmaps on these resources use the same memory type as ioremap(). Signed-off-by: Vincent Whitchurch <vincent.whitchurch@axis.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
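A minimal sketch of what an ARM pgprot_device() could look like, assuming the existing __pgprot_modify()/L_PTE_MT_* machinery; the exact attribute bits chosen here are an assumption, not necessarily the merged change.

  #include <asm/pgtable.h>

  /* Map as shared Device memory, execute-never, instead of the Strongly
   * Ordered mapping that the pgprot_uncached() fallback would give. */
  #define example_pgprot_device(prot) \
  	__pgprot_modify(prot, L_PTE_MT_MASK, \
  			L_PTE_MT_DEV_SHARED | L_PTE_SHARED | L_PTE_DIRTY | L_PTE_XN)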
2019-02-01ARM: 8822/1: smp_twd: Remove legacy TWD registrationGeert Uytterhoeven3-83/+0
As of commit 7484c727b636a838 ("ARM: realview: delete the RealView board files"), the ARM Timer and Watchdog Unit is instantiated from DT only. Moreover, the driver is selected from ARCH_MULTIPLATFORM platforms only, which implies OF, TIMER_OF, and COMMON_CLK. Hence remove all unused legacy infrastructure from the driver. Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2019-02-01ARM: 8821/1: Correct meaning of SCU in HAVE_ARM_SCU help txtGeert Uytterhoeven1-1/+1
According to the ARM Cortex-A5 and Cortex-A9 Technical Reference Manuals, SCU stands for "Snoop Control Unit". Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2019-02-01ARM: 8820/1: mm: Stop printing the virtual memory layoutGeert Uytterhoeven1-49/+0
Since commit ad67b74d2469d9b8 ("printk: hash addresses printed with %p"), the virtual memory layout printed during boot up contains "ptrval" instead of actual addresses:
  Memory: 501296K/524288K available (6144K kernel code, 528K rwdata, 1944K rodata, 1024K init, 7584K bss, 22992K reserved, 0K cma-reserved)
  Virtual kernel memory layout:
      vector  : 0xffff0000 - 0xffff1000   (   4 kB)
      fixmap  : 0xffc00000 - 0xfff00000   (3072 kB)
      vmalloc : 0xe0800000 - 0xff800000   ( 496 MB)
      lowmem  : 0xc0000000 - 0xe0000000   ( 512 MB)
      modules : 0xbf000000 - 0xc0000000   (  16 MB)
        .text : 0x(ptrval) - 0x(ptrval)   (7136 kB)
        .init : 0x(ptrval) - 0x(ptrval)   (1024 kB)
        .data : 0x(ptrval) - 0x(ptrval)   ( 529 kB)
         .bss : 0x(ptrval) - 0x(ptrval)   (7585 kB)
Instead of changing the printing to "%px", and leaking virtual memory layout information again, just remove the printing completely, cfr. e.g. commits 071929dbdd865f77 ("arm64: Stop printing the virtual memory layout") and 31833332f7987636 ("m68k/mm: Stop printing the virtual memory layout"). All interesting information (actual section sizes) is already printed by mem_init_print_info() just above anyway.
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2019-02-01ARM: 8819/1: Remove '-p' from LDFLAGSNathan Chancellor3-4/+2
This option is not supported by lld:
  ld.lld: error: unknown argument: -p
This has been a no-op in binutils since 2004 (see commit dea514f51da1 in that tree). Given that the lowest officially supported version of binutils for the kernel is 2.20, which was released in 2009, nobody needs this flag around, so just remove it. Commit 1a381d4a0a9a ("arm64: remove no-op -p linker flag") did the same for arm64. Signed-off-by: Nathan Chancellor <natechancellor@gmail.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Nicolas Pitre <nico@linaro.org> Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Reviewed-by: Stefan Agner <stefan@agner.ch> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2019-02-01ARM: 8818/1: dma-mapping: update comment about handling dma_ops when detaching from IOMMUWolfram Sang (Renesas)1-1/+1
Update the comment because we don't set the pointer to NULL anymore. Also use the correct pointer name 'dma_ops' instead of 'dma_map_ops'. Fixes: 1874619a7df4 ("ARM: dma-mapping: Set proper DMA ops in arm_iommu_detach_device()") Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com> Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2019-02-01ARM: 8817/1: mm: skip cleaning of idmap page tables on LPAE capable coresArd Biesheuvel1-1/+3
Currently, init_static_idmap() installs some page table entries to cover the identity mapped part of the kernel image (which is only about 160 bytes in size in a multi_v7_defconfig Thumb2 build), and calls flush_cache_louis() to ensure that the updates are visible to the page table walker on the same core. When running under virtualization, flush_cache_louis() may take more than 10 seconds to complete:
  [    0.108192] Setting up static identity map for 0x40300000 - 0x403000a0
  [   13.078127] rcu: Hierarchical SRCU implementation.
This is due to the fact that set/way ops are not virtualizable, and so KVM may trap each one, resulting in a substantial delay. Since only LPAE capable CPUs may execute under virtualization, and considering that LPAE capable CPUs are guaranteed to have cache coherent page table walkers (per the architecture), let's only perform this cache maintenance on non-LPAE cores. Cc: Will Deacon <will.deacon@arm.com> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
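A sketch of the policy described above, with the LPAE test passed in as a parameter because the log message does not spell out how the real patch detects it; illustrative only.

  #include <asm/cacheflush.h>

  static void example_flush_idmap(bool cpu_has_lpae)
  {
  	/* LPAE cores have cache coherent page table walkers, and set/way
  	 * maintenance traps expensively under KVM, so skip it there. */
  	if (!cpu_has_lpae)
  		flush_cache_louis();
  }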
2019-01-06Linux 5.0-rc1Linus Torvalds1-3/+3
2019-01-06Change mincore() to count "mapped" pages rather than "cached" pagesLinus Torvalds1-81/+13
The semantics of what "in core" means for the mincore() system call are somewhat unclear, but Linux has always (since 2.3.52, which is when mincore() was initially done) treated it as "page is available in page cache" rather than "page is mapped in the mapping". The problem with that traditional semantic is that it exposes a lot of system cache state that it really probably shouldn't, and that users shouldn't really even care about. So let's try to avoid that information leak by simply changing the semantics to be that mincore() counts actual mapped pages, not pages that might be cheaply mapped if they were faulted (note the "might be" part of the old semantics: being in the cache doesn't actually guarantee that you can access them without IO anyway, since things like network filesystems may have to revalidate the cache before use). In many ways the old semantics were somewhat insane even aside from the information leak issue. From the very beginning (and that beginning is a long time ago: 2.3.52 was released in March 2000, I think), the code had a comment saying
  Later we can get more picky about what "in core" means precisely.
and this is that "later". Admittedly it is much later than is really comfortable. NOTE! This is a real semantic change, and it is for example known to change the output of "fincore", since that program literally does an mmap without populating it, and then does "mincore()" on that mapping that doesn't actually have any pages in it. I'm hoping that nobody actually has any workflow that cares, and the info leak is real. We may have to do something different if it turns out that people have valid reasons to want the old semantics, and if we can limit the information leak sanely. Cc: Kevin Easton <kevin@guarana.org> Cc: Jiri Kosina <jikos@kernel.org> Cc: Masatake YAMATO <yamato@redhat.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Greg KH <gregkh@linuxfoundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Michal Hocko <mhocko@suse.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
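A small userspace illustration of the semantic change, in the spirit of what "fincore" does: map a file without touching it, then ask mincore() what is "in core". Under the old semantics, pages already in the page cache were reported; under the new semantics, nothing is reported until the pages are actually mapped in. This is a standalone example, not kernel code.

  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/mman.h>
  #include <unistd.h>

  int main(int argc, char **argv)
  {
  	size_t len = 16 * 4096;
  	unsigned char vec[16];
  	int fd, resident = 0;
  	void *p;

  	if (argc < 2 || (fd = open(argv[1], O_RDONLY)) < 0)
  		return 1;
  	/* Map without faulting anything in (no MAP_POPULATE, no access). */
  	p = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, 0);
  	if (p == MAP_FAILED)
  		return 1;
  	if (mincore(p, len, vec) == 0) {
  		for (unsigned i = 0; i < sizeof(vec); i++)
  			resident += vec[i] & 1;
  		printf("%d of %zu pages reported in core\n", resident, sizeof(vec));
  	}
  	munmap(p, len);
  	close(fd);
  	return 0;
  }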
2019-01-06Fix 'acccess_ok()' on alpha and SHLinus Torvalds2-5/+10
Commit 594cc251fdd0 ("make 'user_access_begin()' do 'access_ok()'") broke both alpha and SH booting in qemu, as noticed by Guenter Roeck. It turns out that the bug wasn't actually in that commit itself (which would have been surprising: it was mostly a no-op), but in how the addition of access_ok() to the strncpy_from_user() and strnlen_user() functions now triggered the case where those functions would test the access of the very last byte of the user address space. The string functions actually did that user range test before too, but they did it manually by just comparing against user_addr_max(). But with user_access_begin() doing the check (using "access_ok()"), it now exposed problems in the architecture implementations of that function. For example, on alpha, the access_ok() helper macro looked like this:
  #define __access_ok(addr, size) \
  	((get_fs().seg & (addr | size | (addr+size))) == 0)
and what it basically tests is if any of the high bits get set (the USER_DS masking value is 0xfffffc0000000000). And that's completely wrong for the "addr+size" check, because it's off-by-one for the case where we check to the very end of the user address space, which is exactly what the strn*_user() functions do. Why? Because "addr+size" will be exactly the size of the address space, so trying to access the last byte of the user address space will fail the __access_ok() check, even though it shouldn't. As a result, the user string accessor functions failed consistently - because they literally don't know how long the string is going to be, and the max access is going to be that last byte of the user address space. Side note: that alpha macro is buggy for another reason too - it re-uses the arguments twice. And SH has another version of almost the exact same bug:
  #define __addr_ok(addr) \
  	((unsigned long __force)(addr) < current_thread_info()->addr_limit.seg)
so far so good: yes, a user address must be below the limit. But then:
  #define __access_ok(addr, size) \
  	(__addr_ok((addr) + (size)))
is wrong with the exact same off-by-one case: the case when "addr+size" is exactly _equal_ to the limit is actually perfectly fine (think "one byte access at the last address of the user address space"). The SH version is actually seriously buggy in another way: it doesn't actually check for overflow, even though it did copy the _comment_ that talks about overflow. So it turns out that both SH and alpha actually have completely buggy implementations of access_ok(), but they happened to work in practice (although the SH overflow one is a serious, serious security bug, not that anybody likely cares about SH security). This fixes the problems by using a similar macro on both alpha and SH. It isn't trying to be clever; the end address is based on this logic:
  unsigned long __ao_end = __ao_a + __ao_b - !!__ao_b;
which basically says "add start and length, and then subtract one unless the length was zero". We can't subtract one for a zero length, or we'd just hit an underflow instead. For a lot of access_ok() users the length is a constant, so this isn't actually as expensive as it initially looks.
Reported-and-tested-by: Guenter Roeck <linux@roeck-us.net>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
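A sketch of the repaired alpha-style check, built directly from the end-address logic quoted above; "seg_mask" stands in for the USER_DS high-bits mask in the description, and the macro name is made up for this sketch.

  #define example_access_ok(addr, size, seg_mask) ({			\
  	unsigned long __ao_a = (unsigned long)(addr);			\
  	unsigned long __ao_b = (unsigned long)(size);			\
  	/* end = start + length - 1, except for a zero length */	\
  	unsigned long __ao_end = __ao_a + __ao_b - !!__ao_b;		\
  	((seg_mask) & (__ao_a | __ao_end)) == 0;			\
  })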
2019-01-06fscrypt: add Adiantum supportEric Biggers7-188/+468
Add support for the Adiantum encryption mode to fscrypt. Adiantum is a tweakable, length-preserving encryption mode with security provably reducible to that of XChaCha12 and AES-256, subject to a security bound. It's also a true wide-block mode, unlike XTS. See the paper "Adiantum: length-preserving encryption for entry-level processors" (https://eprint.iacr.org/2018/720.pdf) for more details. Also see commit 059c2a4d8e16 ("crypto: adiantum - add Adiantum support"). On sufficiently long messages, Adiantum's bottlenecks are XChaCha12 and the NH hash function. These algorithms are fast even on processors without dedicated crypto instructions. Adiantum makes it feasible to enable storage encryption on low-end mobile devices that lack AES instructions; currently such devices are unencrypted. On ARM Cortex-A7, on 4096-byte messages Adiantum encryption is about 4 times faster than AES-256-XTS encryption; decryption is about 5 times faster. In fscrypt, Adiantum is suitable for encrypting both file contents and names. With filenames, it fixes a known weakness: when two filenames in a directory share a common prefix of >= 16 bytes, with CTS-CBC their encrypted filenames share a common prefix too, leaking information. Adiantum does not have this problem. Since Adiantum also accepts long tweaks (IVs), it's also safe to use the master key directly for Adiantum encryption rather than deriving per-file keys, provided that the per-file nonce is included in the IVs and the master key isn't used for any other encryption mode. This configuration saves memory and improves performance. A new fscrypt policy flag is added to allow users to opt-in to this configuration. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu>
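A hedged userspace sketch of opting a directory into Adiantum with the direct-key configuration described above. The struct, constant and ioctl names follow the fscrypt v1-policy UAPI as I recall it and should be treated as assumptions rather than verified API of this exact kernel version.

  #include <fcntl.h>
  #include <string.h>
  #include <sys/ioctl.h>
  #include <unistd.h>
  #include <linux/fs.h>

  static int example_set_adiantum_policy(const char *dir)
  {
  	struct fscrypt_policy policy;
  	int fd = open(dir, O_RDONLY);

  	if (fd < 0)
  		return -1;
  	memset(&policy, 0, sizeof(policy));
  	policy.version = 0;
  	policy.contents_encryption_mode = FS_ENCRYPTION_MODE_ADIANTUM;
  	policy.filenames_encryption_mode = FS_ENCRYPTION_MODE_ADIANTUM;
  	policy.flags = FS_POLICY_FLAG_DIRECT_KEY;	/* master key used directly */
  	/* policy.master_key_descriptor must name a key already added to the
  	 * keyring; left zeroed here for brevity. */
  	if (ioctl(fd, FS_IOC_SET_ENCRYPTION_POLICY, &policy)) {
  		close(fd);
  		return -1;
  	}
  	close(fd);
  	return 0;
  }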
2019-01-06kconfig: rename generated .*conf-cfg to *conf-cfgMasahiro Yamada2-18/+19
Remove the dot-prefixing since it is just a matter of the .gitignore file. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2019-01-06kbuild: remove unnecessary stubs for archheader and archscriptsMasahiro Yamada1-5/+1
Make simply skips a missing rule when it is marked as .PHONY. Remove the dummy targets. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2019-01-06kbuild: use assignment instead of define ... endef for filechk_* rulesMasahiro Yamada6-26/+12
You do not have to use define ... endef for filechk_* rules. For simple cases, the use of assignment looks cleaner, IMHO. I updated the usage for scripts/Kbuild.include in case somebody misunderstands 'define ... endef' to be a requirement. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
2019-01-06arch: remove redundant UAPI generic-y definesMasahiro Yamada24-408/+0
Now that Kbuild automatically creates asm-generic wrappers for missing mandatory headers, it is redundant to list the same headers in generic-y and mandatory-y. Suggested-by: Sam Ravnborg <sam@ravnborg.org> Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Acked-by: Sam Ravnborg <sam@ravnborg.org>
2019-01-06kbuild: generate asm-generic wrappers if mandatory headers are missingMasahiro Yamada3-10/+10
Some time ago, Sam pointed out a certain degree of overlap between generic-y and mandatory-y. (https://lkml.org/lkml/2017/7/10/121) I tweaked the meaning of mandatory-y a little bit; now it defines the minimum set of ASM headers that all architectures must have. If an arch does not have a specific implementation of a mandatory header, Kbuild will let it fall back to the asm-generic one by automatically generating a wrapper. This will allow dropping lots of redundant generic-y defines. Previously, "mandatory" was used in the context of UAPI, but I guess this can be extended to kernel-space ASM headers. Suggested-by: Sam Ravnborg <sam@ravnborg.org> Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Acked-by: Sam Ravnborg <sam@ravnborg.org>
2019-01-06arch: remove stale comments "UAPI Header export list"Masahiro Yamada24-25/+0
These comments are leftovers of commit fcc8487d477a ("uapi: export all headers under uapi directories"). Prior to that commit, exported headers must be explicitly added to header-y. Now, all headers under the uapi/ directories are exported. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2019-01-06riscv: remove redundant kernel-space generic-yMasahiro Yamada1-25/+0
This commit removes redundant generic-y defines in arch/riscv/include/asm/Kbuild.
[1] It is redundant to define the same generic-y in both arch/$(ARCH)/include/asm/Kbuild and arch/$(ARCH)/include/uapi/asm/Kbuild. Remove the following generic-y: errno.h fcntl.h ioctl.h ioctls.h ipcbuf.h mman.h msgbuf.h param.h poll.h posix_types.h resource.h sembuf.h setup.h shmbuf.h signal.h socket.h sockios.h stat.h statfs.h swab.h termbits.h termios.h types.h
[2] It is redundant to define generic-y when an arch-specific implementation exists in arch/$(ARCH)/include/asm/*.h. Remove the following generic-y: cacheflush.h module.h
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2019-01-06kbuild: change filechk to surround the given command with { }Masahiro Yamada7-12/+14
filechk_* rules often consist of multiple 'echo' lines. They must be surrounded with { } or ( ) to work correctly. Otherwise, only the string from the last 'echo' would be written into the target. Let's take care of that in the 'filechk' in scripts/Kbuild.include to clean up filechk_* rules. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2019-01-06kbuild: remove redundant target cleaning on failureMasahiro Yamada9-25/+16
Since commit 9c2af1c7377a ("kbuild: add .DELETE_ON_ERROR special target"), the target file is automatically deleted on failure. The boilerplate code ... || { rm -f $@; false; } is unneeded. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2019-01-06kbuild: clean up rule_dtc_dt_yamlMasahiro Yamada1-2/+2
Commit 3a2429e1faf4 ("kbuild: change if_changed_rule for multi-line recipe") and commit 4f0e3a57d6eb ("kbuild: Add support for DT binding schema checks") came in via different sub-systems. This is a follow-up cleanup. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2019-01-06kbuild: remove UIMAGE_IN and UIMAGE_OUTMasahiro Yamada1-4/+2
The only/last user of UIMAGE_IN/OUT was removed by commit 4722a3e6b716 ("microblaze: fix multiple bugs in arch/microblaze/boot/Makefile"). The input and output should always be $< and $@. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2019-01-06jump_label: move 'asm goto' support test to KconfigMasahiro Yamada42-119/+65
Currently, CONFIG_JUMP_LABEL just means "I _want_ to use jump label". The jump label is controlled by HAVE_JUMP_LABEL, which is defined like this:
  #if defined(CC_HAVE_ASM_GOTO) && defined(CONFIG_JUMP_LABEL)
  # define HAVE_JUMP_LABEL
  #endif
We can improve this by testing 'asm goto' support in Kconfig, then making JUMP_LABEL depend on CC_HAS_ASM_GOTO. The ugly #ifdef HAVE_JUMP_LABEL will go away, and CONFIG_JUMP_LABEL will match the real kernel capability.
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
Tested-by: Sedat Dilek <sedat.dilek@gmail.com>
Currently, CONFIG_JUMP_LABEL just means "I _want_ to use jump label". The jump label is controlled by HAVE_JUMP_LABEL, which is defined like this: #if defined(CC_HAVE_ASM_GOTO) && defined(CONFIG_JUMP_LABEL) # define HAVE_JUMP_LABEL #endif We can improve this by testing 'asm goto' support in Kconfig, then make JUMP_LABEL depend on CC_HAS_ASM_GOTO. Ugly #ifdef HAVE_JUMP_LABEL will go away, and CONFIG_JUMP_LABEL will match to the real kernel capability. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc) Tested-by: Sedat Dilek <sedat.dilek@gmail.com>