2015-06-08  arm64: fix missing syscall trace exit  (Josh Stone; 1 file, -1/+6)
If a syscall is entered without TIF_SYSCALL_TRACE set, then it goes on the fast path. It's then possible to have TIF_SYSCALL_TRACE added in the middle of the syscall, but ret_fast_syscall doesn't check this flag again. This causes a ptrace syscall-exit-stop to be missed. For instance, from a PTRACE_EVENT_FORK reported during do_fork, the tracer might resume with PTRACE_SYSCALL, setting TIF_SYSCALL_TRACE. Now the completion of the fork should have a syscall-exit-stop. Russell King fixed this on arm by re-checking _TIF_SYSCALL_WORK in the fast exit path. Do the same on arm64. Reviewed-by: Will Deacon <will.deacon@arm.com> Cc: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Josh Stone <jistone@redhat.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-06-05  arm64: alternative: Work around .inst assembler bugs  (Marc Zyngier; 1 file, -7/+18)
AArch64 toolchains suffer from the following bug:

    $ cat blah.S
    1:      .inst   0x01020304
            .if ((. - 1b) != 4)
            .error "blah"
            .endif
    $ aarch64-linux-gnu-gcc -c blah.S
    blah.S: Assembler messages:
    blah.S:3: Error: non-constant expression in ".if" statement

which precludes the use of msr_s and co as part of alternatives. We work around this issue by not directly testing the labels themselves, but by moving the current output pointer by a value that should always be zero. If this value is not null, then we will trigger a backward move, which is explicitly forbidden. This triggers the error we're after:

    AS      arch/arm64/kvm/hyp.o
    arch/arm64/kvm/hyp.S: Assembler messages:
    arch/arm64/kvm/hyp.S:1377: Error: attempt to move .org backwards
    scripts/Makefile.build:294: recipe for target 'arch/arm64/kvm/hyp.o' failed
    make[1]: *** [arch/arm64/kvm/hyp.o] Error 1
    Makefile:946: recipe for target 'arch/arm64/kvm' failed

Not pretty, but at least it works on the current toolchains.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-06-05  arm64: alternative: Merge alternative-asm.h into alternative.h  (Marc Zyngier; 4 files, -31/+29)
asm/alternative-asm.h and asm/alternative.h are extremely similar, and really deserve to live in the same file (as this makes further modifications a bit easier). Fold the content of alternative-asm.h into alternative.h, and update the few users. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-06-05  arm64: alternative: Allow immediate branch as alternative instruction  (Marc Zyngier; 1 file, -5/+66)
Since all branches are PC-relative on AArch64, these instructions cannot be used as an alternative with the simplistic approach we currently have (the immediate has been computed from the .altinstr_replacement section, and ends up being completely off if the target is outside of the replacement sequence). This patch handles the branch instructions in a different way, using the insn framework to recompute the immediate, and generate the right displacement in the above case. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-06-05  arm64: Rework alternate sequence for ARM erratum 845719  (Marc Zyngier; 1 file, -12/+15)
The workaround for erratum 845719 is currently using a branch between two alternate sequences, which is quite fragile, and which we are going to break as we rework the alternative code. This patch reworks the workaround to fit in a single alternative sequence. The generated code itself is unchanged. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-06-03  arm64: insn: Add aarch64_{get,set}_branch_offset  (Marc Zyngier; 2 files, -0/+63)
In order to deal with branches located in alternate sequences, but pointing to the main kernel text, it is required to extract the relative displacement encoded in the instruction, and to be able to update said instruction with a new offset (once it is known). For this, we introduce three new helpers:

- aarch64_insn_is_branch_imm is a predicate indicating if the instruction is an immediate branch
- aarch64_get_branch_offset returns a signed value representing the byte offset encoded in a branch instruction
- aarch64_set_branch_offset takes an instruction and an offset, and returns the corresponding updated instruction.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
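For illustration, a minimal C sketch of how a caller might relocate an immediate branch that was copied from an alternative sequence, using the helpers named above; the fixup_branch wrapper and its surrounding logic are hypothetical, not the kernel's actual patching code:

    #include <asm/insn.h>

    /* Hypothetical helper: 'insn' was read from address 'src' and will be
     * written to address 'dst'; keep it pointing at the same target. */
    static u32 fixup_branch(u32 insn, unsigned long src, unsigned long dst)
    {
            if (aarch64_insn_is_branch_imm(insn)) {
                    /* byte offset relative to the instruction's old location */
                    long off = aarch64_get_branch_offset(insn);
                    unsigned long target = src + off;

                    /* re-encode the displacement relative to the new location */
                    insn = aarch64_set_branch_offset(insn, target - dst);
            }
            return insn;
    }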
2015-06-02  arm64: drop sleep_idmap_phys and clean up cpu_resume()  (Ard Biesheuvel; 2 files, -8/+2)
Two cleanups of the asm function cpu_resume():

- The global variable sleep_idmap_phys always points to idmap_pg_dir, so we can just use that value directly in the CPU resume path.
- Unclutter the load of sleep_save_sp::save_ptr_stash_phys.

Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Tested-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-06-02  arm64: reduce ID map to a single page  (Ard Biesheuvel; 3 files, -7/+19)
Commit ea8c2e112445 ("arm64: Extend the idmap to the whole kernel image") changed the early page table code so that the entire kernel Image is covered by the identity map. This allows functions that need to enable or disable the MMU to reside anywhere in the kernel Image. However, this change has the unfortunate side effect that the Image cannot cross a physical 512 MB alignment boundary anymore, since the early page table code cannot deal with the Image crossing a /virtual/ 512 MB alignment boundary. So instead, reduce the ID map to a single page, that is populated by the contents of the .idmap.text section. Only three functions reside there at the moment: __enable_mmu(), cpu_resume_mmu() and cpu_reset(). If new code is introduced that needs to manipulate the MMU state, it should be added to this section as well. Reviewed-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
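As a purely hypothetical illustration of that last point (not code from the patch), new MMU-manipulating code would be placed in the .idmap.text section, for example via a section attribute in C:

    /* Hypothetical example: make sure this function is covered by the
     * single-page identity map by placing it in .idmap.text. */
    static void __attribute__((__section__(".idmap.text"), noinline))
    my_mmu_state_helper(void)
    {
            /* code that turns the MMU on or off would live here */
    }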
2015-06-02  arm64: use fixmap region for permanent FDT mapping  (Ard Biesheuvel; 9 files, -63/+115)
Currently, the FDT blob needs to be in the same 512 MB region as the kernel, so that it can be mapped into the kernel virtual memory space very early on using a minimal set of statically allocated translation tables. Now that we have early fixmap support, we can relax this restriction, by moving the permanent FDT mapping to the fixmap region instead. This way, the FDT blob may be anywhere in memory. This also moves the vetting of the FDT to mmu.c, since the early init code in head.S does not handle mapping of the FDT anymore. At the same time, fix up some comments in head.S that have gone stale. Reviewed-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-06-02  of/fdt: split off FDT self reservation from memreserve processing  (Ard Biesheuvel; 5 files, -5/+19)
This splits off the reservation of the memory occupied by the FDT binary itself from the processing of the memory reservations it contains. This is necessary because the physical address of the FDT, which is needed to perform the reservation, may not be known to the FDT driver core, i.e., it may be mapped outside the linear direct mapping, in which case __pa() returns a bogus value. Cc: Russell King <linux@arm.linux.org.uk> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Acked-by: Rob Herring <robh@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-06-01  arm64: context-switch user tls register tpidr_el0 for compat tasks  (Will Deacon; 2 files, -28/+38)
Since commit a4780adeefd0 ("ARM: 7735/2: Preserve the user r/w register TPIDRURW on context switch and fork"), arch/arm/ has context switched the user-writable TLS register, so do the same for compat tasks running under the arm64 kernel. Reported-by: André Hentschel <nerv@dawncrow.de> Tested-by: André Hentschel <nerv@dawncrow.de> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-05-27  arm64: psci: remove ACPI coupling  (Mark Rutland; 4 files, -18/+24)
The 32-bit ARM port doesn't have ACPI headers, and conditionally including them is going to look horrendous. In preparation for sharing the PSCI invocation code with 32-bit, move the acpi_psci_* function declarations and definitions such that the PSCI client code need not include ACPI headers. While it would seem like we could simply hide the ACPI includes in psci.h, the ACPI headers have hilarious circular dependencies which make this infeasible without reorganising most of ACPICA. So rather than doing that, move the acpi_psci_* prototypes into psci.h. The psci_acpi_init function is made dependent on CONFIG_ACPI (with a stub implementation in asm/psci.h) such that it need not be built for 32-bit ARM or kernels without ACPI support. The currently missing __init annotations are added to the prototypes in the header. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Hanjun Guo <hanjun.guo@linaro.org> Reviewed-by: Al Stone <al.stone@linaro.org> Reviewed-by: Ashwin Chaugule <ashwin.chaugule@linaro.org> Tested-by: Hanjun Guo <hanjun.guo@linaro.org> Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Cc: Will Deacon <will.deacon@arm.com>
2015-05-27  arm64: psci: kill psci_power_state  (Mark Rutland; 1 file, -52/+37)
A PSCI 1.0 implementation may choose to use the new extended StateID format, the presence of which may be queried via the PSCI_FEATURES call. The layout of this new StateID format is incompatible with the existing format, and so to handle both we must abstract attempts to parse the fields. In preparation for PSCI 1.0 support, this patch introduces psci_power_state_loses_context and psci_power_state_is_valid functions to query information from a PSCI power state, which is no longer decomposed (and hence the pack/unpack functions are removed). As it is no longer decomposed, it is now passed round as an opaque u32 token. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Tested-by: Hanjun Guo <hanjun.guo@linaro.org> Cc: Will Deacon <will.deacon@arm.com>
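A hedged sketch of what a context-loss query on the opaque u32 can look like for the original (PSCI 0.2) state format, where bit 16 is the StateType field (0 = standby, 1 = powerdown); the macro name and the exact in-kernel implementation may differ:

    #include <linux/types.h>

    #define MY_PSCI_POWER_STATE_TYPE_MASK   (0x1U << 16)    /* StateType bit, per PSCI 0.2 */

    static bool psci_power_state_loses_context(u32 state)
    {
            /* powerdown states lose context, standby states do not */
            return state & MY_PSCI_POWER_STATE_TYPE_MASK;
    }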
2015-05-27  arm64: psci: account for Trusted OS instances  (Mark Rutland; 1 file, -0/+66)
Software resident in the secure world (a "Trusted OS") may cause CPU_OFF calls for the CPU it is resident on to be denied. Such a denial would be fatal for the kernel, and so we must detect when this can happen before the point of no return. This patch implements Trusted OS detection for PSCI 0.2+ systems, using MIGRATE_INFO_TYPE and MIGRATE_INFO_UP_CPU. When a trusted OS is detected as resident on a particular CPU, attempts to hot unplug that CPU will be denied early, before they can prove fatal. Trusted OS migration is not implemented by this patch. Implementation of migratable UP trusted OSs seems unlikely, and the right policy for migration is unclear (and will likely differ across implementations). As such, it is likely that migration will require cooperation with Trusted OS drivers. PSCI implementations prior to 0.2 do not provide the facility to detect the presence of a Trusted OS, nor the CPU any such OS is resident on, so without additional information it is not possible to handle Trusted OSs with PSCI 0.1. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Tested-by: Hanjun Guo <hanjun.guo@linaro.org> Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Cc: Will Deacon <will.deacon@arm.com>
2015-05-27  arm64: psci: support unsigned return values  (Mark Rutland; 1 file, -29/+18)
PSCI_VERSION and MIGRATE_INFO_UP_CPU return unsigned values, with the latter returning a 64-bit value. However, the PSCI invocation functions have prototypes returning int. This patch upgrades the invocation functions to return unsigned long, with a new typedef to keep things legible. As PSCI_VERSION cannot return a negative value, the erroneous check against PSCI_RET_NOT_SUPPORTED is also removed. The unrelated psci_initcall_t typedef is moved closer to its first user, to avoid confusion with the invocation functions. In preparation for sharing the code with ARM, unsigned long is used in preference to u64. In the SMC32 calling convention, the relevant fields will be 32 bits wide. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org> Tested-by: Hanjun Guo <hanjun.guo@linaro.org> Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Cc: Will Deacon <will.deacon@arm.com>
2015-05-27  arm64: psci: remove unnecessary id indirection  (Mark Rutland; 1 file, -17/+3)
PSCI 0.1 did not define canonical IDs for CPU_ON, CPU_OFF, CPU_SUSPEND, or MIGRATE, and so these need to be provided when using firmware compliant to PSCI 0.1. However, functions introduced in 0.2 or later have canonical IDs, and these cannot be provided via DT. There's no need to indirect the IDs via a table; they can be used directly at callsites (and already are for SYSTEM_OFF and SYSTEM_RESET). This patch removes the unnecessary function ID indirection for AFFINITY_INFO and MIGRATE_INFO_TYPE. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org> Tested-by: Hanjun Guo <hanjun.guo@linaro.org> Cc: Will Deacon <will.deacon@arm.com>
2015-05-27  arm64: smp: consistently use error codes  (Mark Rutland; 2 files, -7/+10)
cpu_kill currently returns one for success and zero for failure, which is unlike all the other cpu_operations, which return zero for success and an error code upon failure. This difference is unnecessarily confusing. Make cpu_kill consistent with the other cpu_operations. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org> Tested-by: Hanjun Guo <hanjun.guo@linaro.org> Cc: Will Deacon <will.deacon@arm.com>
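Illustrative only (hypothetical helper names, not the actual diff): after this change, a cpu_kill implementation reports success as 0 and failure as a negative errno, like every other cpu_operations hook:

    static int foo_cpu_kill(unsigned int cpu)
    {
            if (!foo_wait_for_cpu_power_off(cpu))  /* hypothetical helper */
                    return -ETIMEDOUT;             /* error code on failure */
            return 0;                              /* zero on success */
    }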
2015-05-27  arm64: smp_plat: add get_logical_index  (Mark Rutland; 1 file, -0/+16)
The PSCI MIGRATE_INFO_UP_CPU call returns a physical ID, which we will need to map back to a Linux logical ID. Implement a reusable get_logical_index to map from a physical ID to a logical ID. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org> Tested-by: Hanjun Guo <hanjun.guo@linaro.org> Cc: Will Deacon <will.deacon@arm.com>
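The mapping described here is a simple reverse lookup over cpu_logical_map; a sketch along these lines (the in-tree helper may differ in detail):

    /* Return the logical cpu id owning 'mpidr', or -EINVAL if none. */
    static inline int get_logical_index(u64 mpidr)
    {
            int cpu;

            for (cpu = 0; cpu < nr_cpu_ids; cpu++)
                    if (cpu_logical_map(cpu) == mpidr)
                            return cpu;
            return -EINVAL;
    }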
2015-05-27  arm/arm64: kvm: add missing PSCI include  (Mark Rutland; 1 file, -0/+2)
We make use of the PSCI function IDs, but don't explicitly include the header which defines them. Relying on transitive header includes is fragile and will be broken as headers are refactored. This patch includes the relevant header file directly so as to avoid future breakage. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Christoffer Dall <christoffer.dall@linaro.org> Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org> Tested-by: Hanjun Guo <hanjun.guo@linaro.org> Cc: Marc Zyngier <marc.zyngier@arm.com>
2015-05-21  arm64: Use common outgoing-CPU-notification code  (Paul E. McKenney; 1 file, -4/+2)
This commit replaces the open-coded CPU-offline notification with new common code. In particular, this change avoids calling scheduler code using RCU from an offline CPU that RCU is ignoring. This is a minimal change. A more intrusive change might invoke the cpu_check_up_prepare() and cpu_set_state_online() functions at CPU-online time, which would allow onlining to throw an error if the CPU did not go offline properly. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: linux-arm-kernel@lists.infradead.org Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
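The common code referred to is the cpu_report_death()/cpu_wait_death() pair from kernel/smpboot.c; roughly (a simplified sketch with hypothetical wrapper names, not the exact arm64 call sites), the dying and surviving CPUs pair up like this:

    /* Hedged sketch: the dying CPU reports, a surviving CPU waits. */
    static void my_cpu_die(unsigned int cpu)        /* runs on the dying CPU */
    {
            (void)cpu_report_death();
            /* ... park or power down the CPU ... */
    }

    static void my__cpu_die(unsigned int cpu)       /* runs on a surviving CPU */
    {
            if (!cpu_wait_death(cpu, 5))            /* wait up to 5 seconds */
                    pr_err("CPU%u: failed to shut down\n", cpu);
    }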
2015-05-19  arm64: perf: Fix callchain parse error with kernel tracepoint events  (Hou Pengyang; 1 file, -0/+7)
For ARM64, when tracing with tracepoint events, the IP and pstate are set to 0, preventing the perf code from parsing the callchain and resolving the symbols correctly.

    ./perf record -e sched:sched_switch -g --call-graph dwarf ls
    [ perf record: Captured and wrote 0.146 MB perf.data ]
    ./perf report -f
    Samples: 194  of event 'sched:sched_switch', Event count (approx.): 194
    Children      Self  Command  Shared Object  Symbol
    100.00%    100.00%  ls       [unknown]      [.] 0000000000000000

The fix is to implement perf_arch_fetch_caller_regs for ARM64, which fills several necessary registers used for callchain unwinding, including pc, sp, fp and spsr. With this patch, the callchain can be parsed correctly as follows:

    ......
    +    2.63%     0.00%  ls  [kernel.kallsyms]  [k] vfs_symlink
    +    2.63%     0.00%  ls  [kernel.kallsyms]  [k] follow_down
    +    2.63%     0.00%  ls  [kernel.kallsyms]  [k] pfkey_get
    +    2.63%     0.00%  ls  [kernel.kallsyms]  [k] do_execveat_common.isra.33
    -    2.63%     0.00%  ls  [kernel.kallsyms]  [k] pfkey_send_policy_notify
         pfkey_send_policy_notify
         pfkey_get
         v9fs_vfs_rename
         page_follow_link_light
         link_path_walk
         el0_svc_naked
    .......

Signed-off-by: Hou Pengyang <houpengyang@huawei.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
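A sketch of the kind of macro this implies for arm64 (the register choices follow the commit text: pc, sp, fp/x29 and pstate; treat the details as an approximation rather than the exact patch):

    #define perf_arch_fetch_caller_regs(regs, __ip) {                       \
            (regs)->pc       = (__ip);                                      \
            (regs)->regs[29] = (unsigned long)__builtin_frame_address(0);   \
            (regs)->sp       = current_stack_pointer;                       \
            (regs)->pstate   = PSR_MODE_EL1h;                               \
    }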
2015-05-19  ARM64: kernel: unify ACPI and DT cpus initialization  (Lorenzo Pieralisi; 7 files, -198/+196)
The code that initializes cpus on arm64 is currently split in two different code paths that carry out DT and ACPI cpus initialization. Most of the code executing SMP initialization is common and should be merged to reduce discrepancies between ACPI and DT initialization and to have code initializing cpus in a single common place in the kernel. This patch refactors arm64 SMP cpus initialization code to merge ACPI and DT boot paths in a common file and to create sanity checks that can be reused by both boot methods. Current code assumes PSCI is the only available boot method when arm64 boots with ACPI; this can be easily extended if/when the ACPI parking protocol is merged into the kernel. Signed-off-by: Sudeep Holla <sudeep.holla@arm.com> Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Acked-by: Hanjun Guo <hanjun.guo@linaro.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Hanjun Guo <hanjun.guo@linaro.org> Tested-by: Mark Rutland <mark.rutland@arm.com> [DT] Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-05-19  ARM64: kernel: make cpu_ops hooks DT agnostic  (Lorenzo Pieralisi; 7 files, -46/+70)
ARM64 CPU operations such as cpu_init and cpu_init_idle take a struct device_node pointer as a parameter, which corresponds to the device tree node of the logical cpu on which the operation has to be applied. With the advent of ACPI on arm64, where MADT static table entries are used to initialize cpus, the device tree node parameter in cpu_ops hooks becomes useless when booting with ACPI, since in that case cpu device tree nodes are not present and cannot be used for cpu initialization. The current cpu_init hook requires a struct device_node pointer parameter because it is called while parsing the device tree to initialize CPUs, when the cpu_logical_map (that is used to match a cpu node reg property to a device tree node) for a given logical cpu id is not set up yet. This means that the cpu_init hook cannot rely on the of_get_cpu_node function to retrieve the device tree node corresponding to the logical cpu id passed in as parameter, so the cpu device tree node must be passed in as a parameter to fix this catch-22 dependency cycle. This patch reshuffles the cpu_logical_map initialization code so that the cpu_init cpu_ops hook can safely use the of_get_cpu_node function to retrieve the cpu device tree node, removing the need for the device tree node pointer parameter. In the process, the patch removes device tree node parameters from all cpu_ops hooks, in preparation for SMP DT/ACPI cpus initialization consolidation. Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Acked-by: Hanjun Guo <hanjun.guo@linaro.org> Acked-by: Sudeep Holla <sudeep.holla@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Hanjun Guo <hanjun.guo@linaro.org> Tested-by: Mark Rutland <mark.rutland@arm.com> [DT] Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
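In terms of the hooks themselves, the change boils down to dropping the device_node argument; a simplified before/after sketch (illustrative struct names, not the full cpu_operations definition):

    struct cpu_operations_before {
            /* caller had to supply the cpu's DT node */
            int (*cpu_init)(struct device_node *dn, unsigned int cpu);
    };

    struct cpu_operations_after {
            /* the hook derives what it needs from the logical cpu id
             * alone (e.g. via of_get_cpu_node()), so it works for ACPI too */
            int (*cpu_init)(unsigned int cpu);
    };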
2015-05-19  arm64: Rename temp variable in read*_relaxed()  (Michal Simek; 1 file, -4/+4)
This resolves the following sparse warning from readl() and other macros, which ends up embedding readl_relaxed() using the same variable.

Warning log:

    include/asm-generic/io.h:364:16: warning: symbol '__v' shadows an earlier one
    include/asm-generic/io.h:364:16: originally declared here
    include/asm-generic/io.h:372:16: warning: symbol '__v' shadows an earlier one
    include/asm-generic/io.h:372:16: originally declared here
    include/asm-generic/io.h:380:16: warning: symbol '__v' shadows an earlier one
    include/asm-generic/io.h:380:16: originally declared here
    include/asm-generic/io.h:568:16: warning: symbol '__v' shadows an earlier one
    include/asm-generic/io.h:568:16: originally declared here
    include/asm-generic/io.h:576:16: warning: symbol '__v' shadows an earlier one
    include/asm-generic/io.h:576:16: originally declared here
    include/asm-generic/io.h:584:16: warning: symbol '__v' shadows an earlier one
    include/asm-generic/io.h:584:16: originally declared here

The same patch was already applied to arm32 as "ARM: 7118/1: rename temp variable in read*_relaxed()" (sha1: b0c1264f534a1cb3c52036a23a04d238434a0df6).

Acked-by: Liviu Dudau <Liviu.Dudau@arm.com>
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
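The shadowing it refers to can be reproduced with a pair of nested statement-expression macros; a generic illustration (hypothetical my_read* names, not the kernel's io.h definitions):

    /* both macros declare a temporary called '__v'; when my_read()
     * expands my_read_relaxed() inside its own block, sparse warns
     * that the inner '__v' shadows the outer one */
    #define my_read_relaxed(p)  ({ u32 __v = *(volatile u32 *)(p); __v; })
    #define my_read(p)          ({ u32 __v = my_read_relaxed(p); barrier(); __v; })

    /* fix: rename the temporary in one of the macros */
    #define my_read_relaxed_fixed(p)  ({ u32 __val = *(volatile u32 *)(p); __val; })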
2015-05-19  arm64: kill flush_cache_all()  (Mark Rutland; 7 files, -141/+1)
The documented semantics of flush_cache_all are not possible to provide for arm64 (short of flushing the entire physical address space by VA), and there are currently no users; KVM uses VA maintenance exclusively, cpu_reset is never called, and the only two users outside of arch code cannot be built for arm64. While cpu_soft_reset and related functions (which call flush_cache_all) were thought to be useful for kexec, their current implementations only serve to mask bugs. For correctness kexec will need to perform maintenance by VA anyway to account for system caches, line migration, and other subtleties of the cache architecture. As the extent of this cache maintenance will be kexec-specific, it should probably live in the kexec code. This patch removes flush_cache_all, and related unused components, preventing further abuse. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: AKASHI Takahiro <takahiro.akashi@linaro.org> Cc: Geoff Levand <geoff@infradead.org> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-05-19  arm64: Allow forced irq threading  (Anders Roxell; 1 file, -0/+1)
Now it's safe to allow forced interrupt threading for arm64: all timer interrupts and the perf interrupt are marked NO_THREAD, as is the case with arch/arm: da0ec6f ("ARM: 7814/2: Allow forced irq threading"). Acked-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Suggested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Anders Roxell <anders.roxell@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-05-19  arm64: Mark PMU interrupt IRQF_NO_THREAD  (Anders Roxell; 1 file, -1/+1)
Mark the PMU interrupts as non-threadable, as is the case with arch/arm: d9c3365 ARM: 7813/1: Mark pmu interupt IRQF_NO_THREAD Acked-by: Will Deacon <will.deacon@arm.com> Suggested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Anders Roxell <anders.roxell@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-05-18  Linux 4.1-rc4  (Linus Torvalds; 1 file, -1/+1)
2015-05-18  watchdog: Fix merge 'conflict'  (Peter Zijlstra; 1 file, -5/+15)
Two watchdog changes that came through different trees had a non conflicting conflict, that is, one changed the semantics of a variable but no actual code conflict happened. So the merge appeared fine, but the resulting code did not behave as expected. Commit 195daf665a62 ("watchdog: enable the new user interface of the watchdog mechanism") changes the semantics of watchdog_user_enabled, which thereafter is only used by the functions introduced by b3738d293233 ("watchdog: Add watchdog enable/disable all functions"). There further appears to be a distinct lack of serialization between setting and using watchdog_enabled, so perhaps we should wrap the {en,dis}able_all() things in watchdog_proc_mutex. This patch fixes a s2r failure reported by Michal; which I cannot readily explain. But this does make the code internally consistent again. Reported-and-tested-by: Michal Hocko <mhocko@suse.cz> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2015-05-15  Documentation: dt: mtd: replace "nor-jedec" binding with "jedec, spi-nor"  (Brian Norris; 2 files, -6/+6)
In commit 8ff16cf77ce3 ("Documentation: devicetree: m25p80: add "nor-jedec" binding"), we added a generic "nor-jedec" binding to catch all mostly-compatible SPI NOR flash which can be detected via the READ ID opcode (0x9F). This was discussed and reviewed at the time, however objections have come up since then as part of this discussion: http://lkml.kernel.org/g/20150511224646.GJ32500@ld-irv-0074 It seems the parties involved agree that "jedec,spi-nor" does a better job of capturing the fact that this is SPI-specific, not just any NOR flash. This binding was only merged for v4.1-rc1, so it's still OK to change the naming. At the same time, let's move the documentation to a better name. Next up: stop referring to code (drivers/mtd/devices/m25p80.c) from the documentation. Signed-off-by: Brian Norris <computersforpeace@gmail.com> Cc: Marek Vasut <marex@denx.de> Cc: Rafał Miłecki <zajec5@gmail.com> Cc: Rob Herring <robh+dt@kernel.org> Cc: Pawel Moll <pawel.moll@arm.com> Cc: Ian Campbell <ijc+devicetree@hellion.org.uk> Cc: Kumar Gala <galak@codeaurora.org> Cc: devicetree@vger.kernel.org Acked-by: Stephen Warren <swarren@nvidia.com> Acked-by: Geert Uytterhoeven <geert+renesas@glider.be> Acked-by: Mark Rutland <mark.rutland@arm.com>
2015-05-15  MIPS: tlb-r4k: Fix PG_ELPA comment  (James Hogan; 1 file, -1/+1)
The ELPA bit in PageGrain is all about large *physical* addresses, so correct the reference to "large virtual address" in the comment above where it is set for MIPS64. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: David Daney <ddaney@caviumnetworks.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/10038/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-05-15  MIPS: Fix up obsolete cpu_set usage  (Ezequiel Garcia; 1 file, -1/+1)
cpu_set was removed (along with a bunch of cpumask helpers) by commit 2f0f267ea072 ("cpumask: remove deprecated functions."). Fix this by replacing cpu_set with cpumask_set_cpu. Without this fix the following error is triggered when CONFIG_MIPS_MT_FPAFF=y:

    arch/mips/kernel/smp-cps.c: In function 'cps_smp_setup':
    arch/mips/kernel/smp-cps.c:95:3: error: implicit declaration of function 'cpu_set' [-Werror=implicit-function-declaration]

Fixes: 90db024f140d ("MIPS: smp-cps: cpu_set FPU mask if FPU present")
Signed-off-by: Ezequiel Garcia <ezequiel.garcia@imgtec.com>
Acked-by: Niklas Cassel <niklass@axis.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/9912/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
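The replacement is mechanical; a small sketch of the new API (hypothetical wrapper function, the real fix lives in arch/mips/kernel/smp-cps.c):

    #include <linux/cpumask.h>

    static void mark_cpu_in_mask(int cpu, struct cpumask *mask)
    {
            /* cpu_set(cpu, *mask) is gone; cpumask_set_cpu() is the
             * replacement and takes a pointer to the mask instead */
            cpumask_set_cpu(cpu, mask);
    }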
2015-05-15  MAINTAINERS: Add dts entries for some of the Marvell SoCs  (Gregory CLEMENT; 1 file, -1/+9)
For many releases now, modifications to the mvebu and berlin device tree files have been merged through the mvebu subsystem. This patch makes that official, in order to help contributors using get_maintainer.pl find the right people. At the same time, update the mvebu description, which now includes the kirkwood SoCs and the new Armada SoCs. Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Acked-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com> Acked-by: Jason Cooper <jason@lakedaemon.net> Acked-by: Andrew Lunn <andrew@lunn.ch>
2015-05-15  ext4: fix an ext3 collapse range regression in xfstests  (Theodore Ts'o; 1 file, -0/+8)
The xfstests test suite assumes that an attempt to collapse range on the range (0, 1) will return EOPNOTSUPP if the file system does not support collapse range. Commit 280227a75b56: "ext4: move check under lock scope to close a race" broke this, and this caused xfstests to fail when run when testing file systems that did not have the extents feature enabled. Reported-by: Eric Whitney <enwlinux@gmail.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2015-05-14  mm, numa: really disable NUMA balancing by default on single node machines  (Mel Gorman; 1 file, -1/+1)
NUMA balancing is meant to be disabled by default on UMA machines but the check is using nr_node_ids (highest node) instead of num_online_nodes (online nodes). The consequence is that a UMA machine with a node ID of 1 or higher will enable NUMA balancing. This will incur useless overhead due to minor faults, with the impact depending on the workload. This is the impact on the stats when running a kernel build on a single node machine whose node ID happened to be 1:

                                 vanilla    patched
    NUMA base PTE updates        5113158          0
    NUMA huge PMD updates            643          0
    NUMA page range updates      5442374          0
    NUMA hint faults             2109622          0
    NUMA hint local faults       2109622          0
    NUMA hint local percent          100        100
    NUMA pages migrated                0          0

Signed-off-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: <stable@vger.kernel.org> [3.8+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
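A sketch of the corrected default check (the exact call site in the kernel may differ); the point is to test the number of online nodes rather than the highest node id:

    static void __init numa_balancing_default_sketch(void)
    {
            /* nr_node_ids is the highest node id + 1 and can exceed 1 on a
             * UMA machine; num_online_nodes() is what "single node" means */
            if (num_online_nodes() <= 1)
                    set_numabalancing_state(false);
    }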
2015-05-14  MAINTAINERS: update Jingoo Han's email address  (Jingoo Han; 1 file, -5/+5)
Change my private email address. Signed-off-by: Jingoo Han <jingoohan1@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2015-05-14  CMA: page_isolation: check buddy before accessing it  (Hui Zhu; 1 file, -1/+2)
I had an issue:

    Unable to handle kernel NULL pointer dereference at virtual address 0000082a
    pgd = cc970000
    [0000082a] *pgd=00000000
    Internal error: Oops: 5 [#1] PREEMPT SMP ARM
    PC is at get_pageblock_flags_group+0x5c/0xb0
    LR is at unset_migratetype_isolate+0x148/0x1b0
    pc : [<c00cc9a0>]    lr : [<c0109874>]    psr: 80000093
    sp : c7029d00  ip : 00000105  fp : c7029d1c
    r10: 00000001  r9 : 0000000a  r8 : 00000004
    r7 : 60000013  r6 : 000000a4  r5 : c0a357e4  r4 : 00000000
    r3 : 00000826  r2 : 00000002  r1 : 00000000  r0 : 0000003f
    Flags: Nzcv  IRQs off  FIQs on  Mode SVC_32  ISA ARM  Segment user
    Control: 10c5387d  Table: 2cb7006a  DAC: 00000015
    Backtrace:
    get_pageblock_flags_group+0x0/0xb0
    unset_migratetype_isolate+0x0/0x1b0
    undo_isolate_page_range+0x0/0xdc
    __alloc_contig_range+0x0/0x34c
    alloc_contig_range+0x0/0x18

This issue arises because, when calling unset_migratetype_isolate() to unset a part of CMA memory, it tries to access the buddy page to get its status:

    if (order >= pageblock_order) {
            page_idx = page_to_pfn(page) & ((1 << MAX_ORDER) - 1);
            buddy_idx = __find_buddy_index(page_idx, order);
            buddy = page + (buddy_idx - page_idx);

            if (!is_migrate_isolate_page(buddy)) {

But the beginning address of this part of CMA memory is very close to a part of memory that is reserved at boot time (not in the buddy system), so add a check before accessing it.

[akpm@linux-foundation.org: use conventional code layout]
Signed-off-by: Hui Zhu <zhuhui@xiaomi.com>
Suggested-by: Laura Abbott <labbott@redhat.com>
Suggested-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
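One way to add such a check, sketched against the snippet above (hedged; the merged patch may differ in detail), is to make sure the buddy pfn is valid before dereferencing the buddy page:

    buddy = page + (buddy_idx - page_idx);

    /* the buddy may fall inside a bootmem reservation that is not in
     * the buddy allocator; only inspect it if its pfn is valid */
    if (pfn_valid_within(page_to_pfn(buddy)) &&
        !is_migrate_isolate_page(buddy)) {
            /* ... existing unset-isolate handling ... */
    }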
2015-05-14  uidgid: make uid_valid and gid_valid work with !CONFIG_MULTIUSER  (Josh Triplett; 1 file, -2/+2)
{u,g}id_valid call {u,g}id_eq, which calls __k{u,g}id_val on both arguments and compares. With !CONFIG_MULTIUSER, __k{u,g}id_val return a constant 0, which makes {u,g}id_valid always return false. Change {u,g}id_valid to compare their argument against -1 instead. That produces identical results in the normal CONFIG_MULTIUSER=y case, but with !CONFIG_MULTIUSER will make {u,g}id_valid constant-fold into "return true;" rather than "return false;". This fixes uses of devpts without CONFIG_MULTIUSER. Signed-off-by: Josh Triplett <josh@joshtriplett.org> Reported-by: Fengguang Wu <fengguang.wu@intel.com>, Cc: Peter Hurley <peter@hurleysoftware.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
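A sketch of the comparison described above (hypothetical my_ prefix; the real change is in include/linux/uidgid.h):

    static inline bool my_uid_valid(kuid_t uid)
    {
            /* compare against the invalid id directly; with
             * !CONFIG_MULTIUSER this constant-folds to 'true' */
            return __kuid_val(uid) != (uid_t) -1;
    }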
2015-05-14  kernfs: do not account ino_ida allocations to memcg  (Vladimir Davydov; 1 file, -1/+8)
root->ino_ida is used for kernfs inode number allocations. Since IDA has a layered structure, different IDs can reside on the same layer, which is currently accounted to some memory cgroup. The problem is that each kmem cache of a memory cgroup has its own directory on sysfs (under /sys/fs/kernel/<cache-name>/cgroup). If the inode number of such a directory or any file in it gets allocated from a layer accounted to the cgroup which the cache is created for, the cgroup will get pinned for good, because one has to free all kmem allocations accounted to a cgroup in order to release it and destroy all its kmem caches. That said we must not account layers of ino_ida to any memory cgroup. Since per net init operations may create new sysfs entries directly (e.g. lo device) or indirectly (nf_conntrack creates a new kmem cache per each namespace, which, in turn, creates new sysfs entries), an easy way to reproduce this issue is by creating network namespace(s) from inside a kmem-active memory cgroup. Signed-off-by: Vladimir Davydov <vdavydov@parallels.com> Acked-by: Tejun Heo <tj@kernel.org> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Michal Hocko <mhocko@suse.cz> Cc: Christoph Lameter <cl@linux.com> Cc: Pekka Enberg <penberg@kernel.org> Cc: David Rientjes <rientjes@google.com> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Greg Thelen <gthelen@google.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: <stable@vger.kernel.org> [4.0.x] Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2015-05-14  gfp: add __GFP_NOACCOUNT  (Vladimir Davydov; 3 files, -1/+8)
Not all kmem allocations should be accounted to memcg. The following patch gives an example when accounting of a certain type of allocations to memcg can effectively result in a memory leak. This patch adds the __GFP_NOACCOUNT flag which if passed to kmalloc and friends will force the allocation to go through the root cgroup. It will be used by the next patch. Note, since in case of kmemleak enabled each kmalloc implies yet another allocation from the kmemleak_object cache, we add __GFP_NOACCOUNT to gfp_kmemleak_mask. Alternatively, we could introduce a per kmem cache flag disabling accounting for all allocations of a particular kind, but (a) we would not be able to bypass accounting for kmalloc then and (b) a kmem cache with this flag set could not be merged with a kmem cache without this flag, which would increase the number of global caches and therefore fragmentation even if the memory cgroup controller is not used. Despite its generic name, currently __GFP_NOACCOUNT disables accounting only for kmem allocations while user page allocations are always charged. To catch abusing of this flag, a warning is issued on an attempt of passing it to mem_cgroup_try_charge. Signed-off-by: Vladimir Davydov <vdavydov@parallels.com> Cc: Tejun Heo <tj@kernel.org> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Michal Hocko <mhocko@suse.cz> Cc: Christoph Lameter <cl@linux.com> Cc: Pekka Enberg <penberg@kernel.org> Cc: David Rientjes <rientjes@google.com> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Greg Thelen <gthelen@google.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: <stable@vger.kernel.org> [4.0.x] Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
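Usage is a one-liner; for example (illustrative wrapper, not taken from the patch), an allocation that must never pin a memory cgroup:

    static void *alloc_unaccounted(size_t size)
    {
            /* charged to the root cgroup regardless of the caller's memcg */
            return kmalloc(size, GFP_KERNEL | __GFP_NOACCOUNT);
    }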
2015-05-14  tools/vm: fix page-flags build  (Andi Kleen; 1 file, -1/+1)
libabikfs.a doesn't exist anymore, so we now need to link with libapi.a. Signed-off-by: Andi Kleen <ak@linux.intel.com> Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2015-05-14  drivers/rtc/rtc-armada38x.c: remove unused local `flags'  (Andrew Morton; 1 file, -1/+1)
Reported-by: Fengguang Wu <fengguang.wu@gmail.com> Cc: Gregory CLEMENT <gregory.clement@free-electrons.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2015-05-14  jbd2: fix r_count overflows leading to buffer overflow in journal recovery  (Darrick J. Wong; 2 files, -9/+19)
The journal revoke block recovery code does not check r_count for sanity, which means that an evil value of r_count could result in the kernel reading off the end of the revoke table and into whatever garbage lies beyond. This could crash the kernel, so fix that. However, in testing this fix, I discovered that the code to write out the revoke tables also was not correctly checking to see if the block was full -- the current offset check is fine so long as the revoke table space size is a multiple of the record size, but this is not true when either journal_csum_v[23] are set. Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu> Reviewed-by: Jan Kara <jack@suse.cz> Cc: stable@vger.kernel.org
2015-05-14  ext4: check for zero length extent explicitly  (Eryu Guan; 1 file, -1/+1)
A bug in the check for zero length extents was introduced by commit 5946d08 ("ext4: check for overlapping extents in ext4_valid_extent_entries()"): a zero length extent could pass the check if lblock is zero. Add the explicit check for zero length back. Signed-off-by: Eryu Guan <guaneryu@gmail.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu> Cc: stable@vger.kernel.org
2015-05-14  ext4: fix NULL pointer dereference when journal restart fails  (Lukas Czerner; 2 files, -9/+22)
Currently when journal restart fails, we'll have the h_transaction of the handle set to NULL to indicate that the handle has been effectively aborted. We handle this situation quietly in the jbd2_journal_stop() and just free the handle and exit because everything else has been done before we attempted (and failed) to restart the journal. Unfortunately there are a number of problems with that approach introduced with commit 41a5b913197c "jbd2: invalidate handle if jbd2_journal_restart() fails" First of all in ext4 jbd2_journal_stop() will be called through __ext4_journal_stop() where we would try to get a hold of the superblock by dereferencing h_transaction which in this case would lead to NULL pointer dereference and crash. In addition we're going to free the handle regardless of the refcount which is bad as well, because others up the call chain will still reference the handle so we might potentially reference already freed memory. Moreover it's expected that we'll get aborted handle as well as detached handle in some of the journalling function as the error propagates up the stack, so it's unnecessary to call WARN_ON every time we get detached handle. And finally we might leak some memory by forgetting to free reserved handle in jbd2_journal_stop() in the case where handle was detached from the transaction (h_transaction is NULL). Fix the NULL pointer dereference in __ext4_journal_stop() by just calling jbd2_journal_stop() quietly as suggested by Jan Kara. Also fix the potential memory leak in jbd2_journal_stop() and use proper handle refcounting before we attempt to free it to avoid use-after-free issues. And finally remove all WARN_ON(!transaction) from the code so that we do not get random traces when something goes wrong because when journal restart fails we will get to some of those functions. Cc: stable@vger.kernel.org Signed-off-by: Lukas Czerner <lczerner@redhat.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu> Reviewed-by: Jan Kara <jack@suse.cz>
2015-05-14  ext4: remove unused function prototype from ext4.h  (Theodore Ts'o; 1 file, -1/+0)
The ext4_extent_tree_init() function hasn't been in the ext4 code for a long time, except as an unused function prototype in ext4.h. Google-Bug-Id: 4530137 Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2015-05-14  ext4: don't save the error information if the block device is read-only  (Theodore Ts'o; 1 file, -0/+2)
Google-Bug-Id: 20939131 Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2015-05-14  ext4: fix lazytime optimization  (Theodore Ts'o; 1 file, -1/+1)
We had a fencepost error in the lazytime optimization which means that timestamp would get written to the wrong inode. Cc: stable@vger.kernel.org Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2015-05-14  mtd: readtest: don't clobber error reports  (Brian Norris; 1 file, -2/+4)
Commit 2a6a28e7922c ("mtd: Make MTD tests cancelable") accidentally clobbered any read failure reports. Coverity CID #1296020 Signed-off-by: Brian Norris <computersforpeace@gmail.com>
2015-05-14  MAINTAINERS: ARM: EXYNOS: Add Krzysztof Kozlowski as co-maintainer  (Krzysztof Kozlowski; 1 file, -0/+1)
Add Krzysztof Kozlowski as a co-maintainer of Samsung Exynos ARM architecture to review the patches. Patches will go as usual - picked up by Kukjin Kim. Cc: Russell King <linux@arm.linux.org.uk> Cc: Kukjin Kim <kgene@kernel.org> Cc: Kevin Hilman <khilman@kernel.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Olof Johansson <olof@lixom.net> Cc: linux-samsung-soc@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com> Acked-by: Javier Martinez Canillas <javier.martinez@collabora.co.uk> Acked-by: Tobias Jakobi <liquid.acid@gmx.net> Acked-by: Olof Johansson <olof@lixom.net> Acked-by: Kevin Hilman <khilman@linaro.org> Signed-off-by: Kevin Hilman <khilman@linaro.org>