2014-05-09  arm64: mm: use inner-shareable barriers for inner-shareable maintenance  (Will Deacon, 2 files, -4/+4)
In order to ensure ordering and completion of inner-shareable maintenance instructions (cache and TLB) on AArch64, we can use the -ish suffix to the dmb and dsb instructions respectively. This patch updates our low-level cache and tlb maintenance routines to use the inner-shareable barrier variants where appropriate. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
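For illustration only (not the exact hunk from this commit), the kind of relaxation involved looks roughly like this: broadcast TLB invalidation only has to be ordered and completed within the Inner Shareable domain, so the surrounding full-system barriers can be narrowed:

    asm volatile(
    "       dsb     ishst\n"        /* was "dsb sy": order prior page-table updates */
    "       tlbi    vmalle1is\n"    /* broadcast invalidate, Inner Shareable */
    "       dsb     ish\n"          /* was "dsb sy": wait for completion */
    "       isb"
    : : : "memory");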
2014-05-09  arm64: kvm: use inner-shareable barriers for inner-shareable maintenance  (Will Deacon, 1 file, -3/+9)
In order to ensure completion of inner-shareable maintenance instructions (cache and TLB) on AArch64, we can use the -ish suffix to the dsb instruction. This patch relaxes our dsb sy instructions to dsb ish where possible. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-05-09  arm64: head: fix cache flushing and barriers in set_cpu_boot_mode_flag  (Will Deacon, 1 file, -5/+3)
set_cpu_boot_mode_flag is used to identify which exception levels are encountered across the system by CPUs trying to enter the kernel. The basic algorithm is: if a CPU is booting at EL2, it will set a flag at an offset of #4 from __boot_cpu_mode, a cacheline-aligned variable. Otherwise, a flag is set at an offset of zero into the same cacheline. This enables us to check that all CPUs booted at the same exception level. This cacheline is written with the stage-1 MMU off (that is, via a strongly-ordered mapping) and will bypass any clean lines in the cache, leading to potential coherence problems when the variable is later checked via the normal, cacheable mapping of the kernel image. This patch reworks the broken flushing code so that we: (1) Use a DMB to order the strongly-ordered write of the cacheline against the subsequent cache-maintenance operation (by-VA operations only hazard against normal, cacheable accesses). (2) Use a single dc ivac instruction to invalidate any clean lines containing a stale copy of the line after it has been updated. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-05-09  arm64: barriers: use barrier() instead of smp_mb() when !SMP  (Will Deacon, 1 file, -2/+2)
The recently introduced acquire/release accessors refer to smp_mb() in the !CONFIG_SMP case. This is confusing when reading the code, so use barrier() directly when we know we're UP. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-05-09  arm64: barriers: wire up new barrier options  (Will Deacon, 1 file, -7/+7)
Now that all callers of the barrier macros are updated to pass the mandatory options, update the macros so the option is actually used. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
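As a rough sketch of what "wiring up" the option means (assuming the macros simply paste the mnemonic suffix into the instruction, as asm/barrier.h does):

    #define dmb(opt)    asm volatile("dmb " #opt : : : "memory")
    #define dsb(opt)    asm volatile("dsb " #opt : : : "memory")

    /* Callers can now actually relax the scope/type of the barrier: */
    dsb(ishst);    /* stores only, Inner Shareable domain */
    dsb(sy);       /* old behaviour: all accesses, full system */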
2014-05-09  arm64: barriers: make use of barrier options with explicit barriers  (Will Deacon, 6 files, -15/+15)
When calling our low-level barrier macros directly, we can often suffice with more relaxed behaviour than the default "all accesses, full system" option. This patch updates the users of dsb() to specify the option which they actually require. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-05-09  arm64: mm: Optimise tlb flush logic where we have >4K granule  (Steve Capper, 3 files, -77/+26)
The tlb maintenance functions: __cpu_flush_user_tlb_range and __cpu_flush_kern_tlb_range do not take into consideration the page granule when looping through the address range, and repeatedly flush tlb entries for the same page when operating with 64K pages. This patch re-works the logic s.t. we instead advance the loop by 1 << (PAGE_SHIFT - 12), so as to avoid repeating ourselves. Also the routines have been converted from assembler to static inline functions to aid with legibility and potential compiler optimisations. The isb() has been removed from flush_tlb_kernel_range(.) as it is only needed when changing the execute permission of a mapping. If one needs to set an area of the kernel as execute/non-execute, an isb() must be inserted after the call to flush_tlb_kernel_range. Cc: Laura Abbott <lauraa@codeaurora.org> Signed-off-by: Steve Capper <steve.capper@linaro.org> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
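A hedged sketch of the reworked loop (identifiers simplified; the arm64 TLBI instructions take the VA shifted right by 12, which is why the stride is expressed that way):

    static inline void __flush_tlb_range_sketch(unsigned long asid,
                                                unsigned long start, unsigned long end)
    {
            unsigned long addr;

            start = asid | (start >> 12);
            end   = asid | (end >> 12);

            dsb(ishst);
            /* One iteration per page: with 64K pages the stride is 16 in the
             * ">> 12" encoding, so each page is invalidated exactly once. */
            for (addr = start; addr < end; addr += 1 << (PAGE_SHIFT - 12))
                    asm("tlbi vae1is, %0" : : "r" (addr));
            dsb(ish);
    }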
2014-05-09  arm64: xchg: prevent warning if return value is unused  (Will Deacon, 1 file, -1/+6)
Some users of xchg() don't bother using the return value, which results in a compiler warning like the following (from kgdb): In file included from linux/arch/arm64/include/asm/atomic.h:27:0, from include/linux/atomic.h:4, from include/linux/spinlock.h:402, from include/linux/seqlock.h:35, from include/linux/time.h:5, from include/uapi/linux/timex.h:56, from include/linux/timex.h:56, from include/linux/sched.h:19, from include/linux/pid_namespace.h:4, from kernel/debug/debug_core.c:30: kernel/debug/debug_core.c: In function ‘kgdb_cpu_enter’: linux/arch/arm64/include/asm/cmpxchg.h:75:3: warning: value computed is not used [-Wunused-value] ((__typeof__(*(ptr)))__xchg((unsigned long)(x),(ptr),sizeof(*(ptr)))) ^ linux/arch/arm64/include/asm/atomic.h:132:30: note: in expansion of macro ‘xchg’ #define atomic_xchg(v, new) (xchg(&((v)->counter), new)) kernel/debug/debug_core.c:504:4: note: in expansion of macro ‘atomic_xchg’ atomic_xchg(&kgdb_active, cpu); ^ This patch makes use of the same trick as we do for cmpxchg, by assigning the return value to a dummy variable in the xchg() macro itself. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
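The trick in question is roughly the following macro shape (a sketch mirroring cmpxchg): the result is assigned to a local first, so using xchg() purely for its side effect no longer trips -Wunused-value:

    #define xchg(ptr, x) \
    ({ \
            __typeof__(*(ptr)) __ret; \
            __ret = (__typeof__(*(ptr))) \
                    __xchg((unsigned long)(x), (ptr), sizeof(*(ptr))); \
            __ret; \
    })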
2014-05-09  arm64: mm: Create gigabyte kernel logical mappings where possible  (Steve Capper, 3 files, -1/+36)
We have the capability to map 1GB level 1 blocks when using a 4K granule. This patch adjusts the create_mapping logic s.t. when mapping physical memory on boot, we attempt to use a 1GB block if both the VA and PA start and end are 1GB aligned. This both reduces the levels of lookup required to resolve a kernel logical address, as well as reduces TLB pressure on cores that support 1GB TLB entries. Signed-off-by: Steve Capper <steve.capper@linaro.org> Tested-by: Jungseok Lee <jays.lee@samsung.com> [catalin.marinas@arm.com: s/prot_sect_kernel/PROT_SECT_NORMAL_EXEC/] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
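A sketch of the alignment test described above (assuming a 4K granule, where a level-1 block spans PUD_SIZE = 1GB; the exact form of the hunk may differ):

    /* Use a 1GB block only if the VA range and the PA are all 1GB aligned. */
    if (PAGE_SHIFT == 12 && ((addr | next | phys) & ~PUD_MASK) == 0)
            set_pud(pud, __pud(phys | PROT_SECT_NORMAL_EXEC));
    else
            alloc_init_pmd(pud, addr, next, phys); /* fall back to 2MB blocks/pages */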
2014-05-09  arm64: Make atomic64_t() return "long", not "long long"  (Bjorn Helgaas, 1 file, -1/+1)
arm64 sets CONFIG_64BIT=y and hence uses the "long counter" atomic64_t definition from include/linux/types.h. Make atomic64_read() return "long", not "long long". Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-05-09  arm64: Clean up the default pgprot setting  (Catalin Marinas, 5 files, -95/+50)
The primary aim of this patchset is to remove the pgprot_default and prot_sect_default global variables and rely strictly on predefined values. The original goal was to be able to run SMP kernels on UP hardware by not setting the Shareability bit. However, it is unlikely to see UP ARMv8 hardware and even if we do, the Shareability bit is no longer assumed to disable cacheable accesses. A side effect is that the device mappings now have the Shareability attribute set. The hardware, however, should ignore it since Device accesses are always Outer Shareable. Following the removal of the two global variables, there is some PROT_* macro reshuffling and cleanup, including the __PAGE_* macros (replaced by PAGE_*). Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Will Deacon <will.deacon@arm.com>
2014-05-09  arm64: Introduce execute-only page access permissions  (Catalin Marinas, 2 files, -8/+8)
The ARMv8 architecture allows execute-only user permissions by clearing the PTE_UXN and PTE_USER bits. The kernel, however, can still access such page, so execute-only page permission does not protect against read(2)/write(2) etc. accesses. Systems requiring such protection must implement/enable features like SECCOMP. This patch changes the arm64 __P100 and __S100 protection_map[] macros to the new __PAGE_EXECONLY attributes. A side effect is that pte_valid_user() no longer triggers for __PAGE_EXECONLY since PTE_USER isn't set. To work around this, the check is done on the PTE_NG bit via the pte_valid_ng() macro. VM_READ is also checked now for page faults. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-05-09  arm64: Expose ESR_EL1 information to user when SIGSEGV/SIGBUS  (Catalin Marinas, 2 files, -0/+17)
This information is useful for instruction emulators to detect read/write and access size without having to decode the faulting instruction. The current patch exports it via sigcontext (struct esr_context) and is only valid for SIGSEGV and SIGBUS. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
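The record presumably follows the usual _aarch64_ctx layout on the signal frame, along these lines (sketch; treat the magic constant as illustrative):

    #define ESR_MAGIC    0x45535201    /* "ESR" + version */

    struct esr_context {
            struct _aarch64_ctx head;  /* .magic = ESR_MAGIC, .size = sizeof(struct esr_context) */
            __u64 esr;                 /* ESR_EL1 value recorded at fault time */
    };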
2014-05-09  arm64: Remove the aux_context structure  (Catalin Marinas, 2 files, -41/+17)
This patch removes the aux_context structure (and the containing file) to allow the placement of the _aarch64_ctx end magic based on the context stored on the signal stack. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-05-09  arm64: Provide read/write fault information in compat signal handlers  (Catalin Marinas, 5 files, -9/+20)
For AArch32, bit 11 (WnR) of the FSR/ESR register is set when the fault was caused by a write access and applications like Qemu rely on such information being provided in sigcontext. This patch introduces the ESR_EL1 tracking for the arm64 kernel faults and sets bit 11 accordingly in compat sigcontext. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-05-09  arm64: Remove boot thread synchronisation for spin-table release method  (Catalin Marinas, 1 file, -38/+1)
The synchronisation with the boot thread already happens in __cpu_up() via wait_for_completion_timeout(). In addition, __cpu_up() calls are protected by the cpu_add_remove_lock mutex and already serialised. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-05-09  arm64: Implement cache_line_size() based on CTR_EL0.CWG  (Catalin Marinas, 4 files, -1/+41)
The hardware provides the maximum cache line size in the system via the CTR_EL0.CWG bits. This patch implements the cache_line_size() function to read such information, together with a sanity check if the statically defined L1_CACHE_BYTES is smaller than the hardware value. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Will Deacon <will.deacon@arm.com>
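A sketch of how such a helper can be derived from CTR_EL0 (CWG lives in bits [27:24] and encodes log2 of the writeback granule in words, hence the 4 << cwg):

    static inline u32 cache_type_cwg(void)
    {
            u64 ctr;

            asm volatile("mrs %0, ctr_el0" : "=r" (ctr));
            return (ctr >> 24) & 0xf;               /* CWG field */
    }

    static inline int cache_line_size(void)
    {
            u32 cwg = cache_type_cwg();

            /* CWG == 0 means "not reported": fall back to the build-time value. */
            return cwg ? 4 << cwg : L1_CACHE_BYTES;
    }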
2014-05-04  Linux 3.15-rc4  (Linus Torvalds, 1 file, -1/+1)
2014-05-04  vexpress: Initialise the sysregs before setting up the clocks  (Catalin Marinas, 1 file, -0/+2)
Following arm64 commit bc3ee18a7a57 (arm64: init: Move of_clk_init to time_init()), vexpress_osc_of_setup() is called via of_clk_init() long before initcalls are issued. Initialising the vexpress oscillators requires the vexpress sysregs to be already initialised, so this patch adds an explicit call to vexpress_sysreg_of_early_init() in the vexpress oscillator setup function. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Tested-by: Will Deacon <will.deacon@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Tested-by: Pawel Moll <pawel.moll@arm.com> Acked-by: Pawel Moll <pawel.moll@arm.com> Cc: Mike Turquette <mturquette@linaro.org>
2014-05-03  arm64: Mark the Applied Micro X-Gene SATA controller as DMA coherent  (Catalin Marinas, 2 files, -0/+6)
Since the default DMA ops for arm64 are non-coherent, mark the X-Gene controller explicitly as dma-coherent to avoid additional cache maintenance. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Loc Ho <lho@apm.com>
2014-05-03  arm64: Use bus notifiers to set per-device coherent DMA ops  (Catalin Marinas, 2 files, -2/+33)
Recently, the default DMA ops have been changed to non-coherent for alignment with 32-bit ARM platforms (and DT files). This patch adds bus notifiers to be able to set the coherent DMA ops (with no cache maintenance) for devices explicitly marked as coherent via the "dma-coherent" DT property. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
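A hedged sketch of the mechanism (the notifier and ops names here are illustrative, not necessarily those used in the commit):

    static int dma_bus_notifier(struct notifier_block *nb,
                                unsigned long event, void *_dev)
    {
            struct device *dev = _dev;

            if (event != BUS_NOTIFY_ADD_DEVICE)
                    return NOTIFY_DONE;

            /* Devices marked "dma-coherent" in the DT skip cache maintenance. */
            if (of_property_read_bool(dev->of_node, "dma-coherent"))
                    set_dma_ops(dev, &coherent_swiotlb_dma_ops);

            return NOTIFY_OK;
    }

    static struct notifier_block dma_coherent_nb = {
            .notifier_call = dma_bus_notifier,
    };

    /* Registered early for the relevant buses, e.g.:
     * bus_register_notifier(&platform_bus_type, &dma_coherent_nb); */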
2014-05-03  arm64: Make default dma_ops to be noncoherent  (Ritesh Harjani, 1 file, -1/+1)
Currently the arm64 dma_ops are coherent by default, which is the opposite of the default policy on arm. Make the default dma_ops noncoherent (same as arm), as there currently aren't any DMA-capable drivers which assume coherent ops. Signed-off-by: Ritesh Harjani <ritesh.harjani@gmail.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-05-03  arm64: fixmap: fix missing sub-page offset for earlyprintk  (Marc Zyngier, 2 files, -4/+5)
Commit d57c33c5daa4 (add generic fixmap.h) added (among other similar things) set_fixmap_io to deal with early ioremap of devices. More recently, commit bf4b558eba92 (arm64: add early_ioremap support) converted the arm64 earlyprintk to use set_fixmap_io. A side effect of this conversion is that my virtual machines have stopped booting when I pass "earlyprintk=uart8250-8bit,0x3f8" to the guest kernel. Turns out that the new earlyprintk code doesn't care at all about sub-page offsets, and just assumes that the earlyprintk device will be page-aligned. Obviously, that doesn't play well with the above example. Further investigation shows that set_fixmap_io uses __set_fixmap instead of __set_fixmap_offset. A fix is to introduce a set_fixmap_offset_io that uses the latter, and to remove the superfluous call to fix_to_virt (which only returns the value that set_fixmap_io has already given us). With this applied, my VMs are back in business. Tested on a Cortex-A57 platform with kvmtool as platform emulation. Cc: Will Deacon <will.deacon@arm.com> Acked-by: Mark Salter <msalter@redhat.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
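The new helper is presumably just the offset-preserving twin of set_fixmap_io, along these lines (sketch based on the generic fixmap macros):

    /* Page-aligned variant: throws away the sub-page offset of phys. */
    #define set_fixmap_io(idx, phys) \
            __set_fixmap(idx, phys, FIXMAP_PAGE_IO)

    /* Offset-preserving variant: the returned mapping keeps the low bits of
     * phys, e.g. the 0x3f8 of the earlyprintk UART above. */
    #define set_fixmap_offset_io(idx, phys) \
            __set_fixmap_offset(idx, phys, FIXMAP_PAGE_IO)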
2014-05-03  arm64: Fix for the arm64 kern_addr_valid() function  (Dave Anderson, 1 file, -0/+3)
Fix for the arm64 kern_addr_valid() function to recognize virtual addresses in the kernel logical memory map. The function fails as written because it does not check whether the addresses in that region are mapped at the pmd level to 2MB or 512MB pages, continues the page table walk to the pte level, and issues a garbage value to pfn_valid(). Tested on 4K-page and 64K-page kernels. Signed-off-by: Dave Anderson <anderson@redhat.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
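A sketch of the missing check in the walk: stop at the pmd when it maps a block, instead of descending to a pte level that does not exist:

    pmd = pmd_offset(pud, addr);
    if (pmd_none(*pmd))
            return 0;

    /* 2MB (4K granule) or 512MB (64K granule) block mapping: no pte level. */
    if (pmd_sect(*pmd))
            return pfn_valid(pmd_pfn(*pmd));

    pte = pte_offset_kernel(pmd, addr);
    if (pte_none(*pte))
            return 0;

    return pfn_valid(pte_pfn(*pte));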
2014-05-02  tracing: Use rcu_dereference_sched() for trace event triggers  (Steven Rostedt (Red Hat), 1 file, -1/+1)
As trace event triggers are now part of the mainline kernel, I added my trace event trigger tests to my test suite I run on all my kernels. Now these tests get run under different config options, and one of those options is CONFIG_PROVE_RCU, which checks under lockdep that the rcu locking primitives are being used correctly. This triggered the following splat:

===============================
[ INFO: suspicious RCU usage. ]
3.15.0-rc2-test+ #11 Not tainted
-------------------------------
kernel/trace/trace_events_trigger.c:80 suspicious rcu_dereference_check() usage!

other info that might help us debug this:

rcu_scheduler_active = 1, debug_locks = 0
4 locks held by swapper/1/0:
 #0:  ((&(&j_cdbs->work)->timer)){..-...}, at: [<ffffffff8104d2cc>] call_timer_fn+0x5/0x1be
 #1:  (&(&pool->lock)->rlock){-.-...}, at: [<ffffffff81059856>] __queue_work+0x140/0x283
 #2:  (&p->pi_lock){-.-.-.}, at: [<ffffffff8106e961>] try_to_wake_up+0x2e/0x1e8
 #3:  (&rq->lock){-.-.-.}, at: [<ffffffff8106ead3>] try_to_wake_up+0x1a0/0x1e8

stack backtrace:
CPU: 1 PID: 0 Comm: swapper/1 Not tainted 3.15.0-rc2-test+ #11
Hardware name: /DG965MQ, BIOS MQ96510J.86A.0372.2006.0605.1717 06/05/2006
 0000000000000001 ffff88007e083b98 ffffffff819f53a5 0000000000000006
 ffff88007b0942c0 ffff88007e083bc8 ffffffff81081307 ffff88007ad96d20
 0000000000000000 ffff88007af2d840 ffff88007b2e701c ffff88007e083c18
Call Trace:
 <IRQ>  [<ffffffff819f53a5>] dump_stack+0x4f/0x7c
 [<ffffffff81081307>] lockdep_rcu_suspicious+0x107/0x110
 [<ffffffff810ee51c>] event_triggers_call+0x99/0x108
 [<ffffffff810e8174>] ftrace_event_buffer_commit+0x42/0xa4
 [<ffffffff8106aadc>] ftrace_raw_event_sched_wakeup_template+0x71/0x7c
 [<ffffffff8106bcbf>] ttwu_do_wakeup+0x7f/0xff
 [<ffffffff8106bd9b>] ttwu_do_activate.constprop.126+0x5c/0x61
 [<ffffffff8106eadf>] try_to_wake_up+0x1ac/0x1e8
 [<ffffffff8106eb77>] wake_up_process+0x36/0x3b
 [<ffffffff810575cc>] wake_up_worker+0x24/0x26
 [<ffffffff810578bc>] insert_work+0x5c/0x65
 [<ffffffff81059982>] __queue_work+0x26c/0x283
 [<ffffffff81059999>] ? __queue_work+0x283/0x283
 [<ffffffff810599b7>] delayed_work_timer_fn+0x1e/0x20
 [<ffffffff8104d3a6>] call_timer_fn+0xdf/0x1be
 [<ffffffff8104d2cc>] ? call_timer_fn+0x5/0x1be
 [<ffffffff81059999>] ? __queue_work+0x283/0x283
 [<ffffffff8104d823>] run_timer_softirq+0x1a4/0x22f
 [<ffffffff8104696d>] __do_softirq+0x17b/0x31b
 [<ffffffff81046d03>] irq_exit+0x42/0x97
 [<ffffffff81a08db6>] smp_apic_timer_interrupt+0x37/0x44
 [<ffffffff81a07a2f>] apic_timer_interrupt+0x6f/0x80
 <EOI>  [<ffffffff8100a5d8>] ? default_idle+0x21/0x32
 [<ffffffff8100a5d6>] ? default_idle+0x1f/0x32
 [<ffffffff8100ac10>] arch_cpu_idle+0xf/0x11
 [<ffffffff8107b3a4>] cpu_startup_entry+0x1a3/0x213
 [<ffffffff8102a23c>] start_secondary+0x212/0x219

The cause is that the triggers are protected by rcu_read_lock_sched() but the data is dereferenced with rcu_dereference() which expects it to be protected with rcu_read_lock(). The proper reference should be rcu_dereference_sched(). Cc: Tom Zanussi <tom.zanussi@linux.intel.com> Cc: stable@vger.kernel.org # 3.14+ Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
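In other words, the accessor has to match the flavour of the enclosing read-side critical section; a minimal sketch of the correct pairing (gp and do_something are placeholders):

    rcu_read_lock_sched();                  /* or implied by preempt_disable() */
    p = rcu_dereference_sched(gp);          /* matches the _sched read side */
    if (p)
            do_something(p);
    rcu_read_unlock_sched();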
2014-05-01  dm cache: fix writethrough mode quiescing in cache_map  (Mike Snitzer, 1 file, -0/+1)
Commit 2ee57d58735 ("dm cache: add passthrough mode") inadvertently removed the deferred set reference that was taken in cache_map()'s writethrough mode support. Restore taking this reference. This issue was found with code inspection. Signed-off-by: Mike Snitzer <snitzer@redhat.com> Acked-by: Joe Thornber <ejt@redhat.com> Cc: stable@vger.kernel.org # 3.13+
2014-05-01  parisc: Use generic uapi/asm/resource.h file  (Helge Deller, 2 files, -7/+2)
Signed-off-by: Helge Deller <deller@gmx.de>
2014-05-01  parisc: remove _STK_LIM_MAX override  (John David Anglin, 1 file, -1/+0)
There are only a couple of architectures that override _STK_LIM_MAX to a non-infinity value. This changes the stack allocation semantics in subtle ways. For example, GNU make changes its stack allocation to the hard maximum defined by _STK_LIM_MAX. As a result, threads executed by processes running under make are allocated a stack size of _STK_LIM_MAX rather than a sensible default value. This causes various thread stress tests to fail when they can't muster more than about 50 threads. The attached change implements the default behavior used by the majority of architectures. Signed-off-by: John David Anglin <dave.anglin@bell.net> Reviewed-by: Carlos O'Donell <carlos@systemhalted.org> Cc: stable@vger.kernel.org # 3.14 Signed-off-by: Helge Deller <deller@gmx.de>
2014-05-01  Hexagon: Delete stale barrier.h  (Vineet Gupta, 1 file, -37/+0)
Commit 93ea02bb8435 ("arch: Clean up asm/barrier.h implementations") wired generic barrier.h for hexagon, but failed to delete the existing file. Cc: Richard Kuo <rkuo@codeaurora.org> Cc: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Vineet Gupta <vgupta@synopsys.com> Compile-tested-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-05-01  word-at-a-time: simplify big-endian zero_bytemask macro  (H. Peter Anvin, 1 file, -1/+1)
This is simpler and cleaner. Depending on architecture, a smart compiler may or may not generate the same code. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-05-01  aio: fix potential leak in aio_run_iocb().  (Leon Yu, 1 file, -4/+2)
The iovec should be reclaimed whenever the caller of rw_copy_check_uvector() returns, but that doesn't hold when a failure happens right after aio_setup_vectored_rw(). Fix that in such a way as to avoid a hairy goto. Signed-off-by: Leon Yu <chianglungyu@gmail.com> Signed-off-by: Benjamin LaHaise <bcrl@kvack.org> Cc: stable@vger.kernel.org
2014-05-01Revert "hwmon: (coretemp) Refine TjMax detection"Guenter Roeck1-2/+2
This reverts commit 9fb6c9c73b11bef65ba80a362547fd116c1e1c9d. Tjmax on some Intel CPUs is below 85 degrees C. One known example is L5630 with Tjmax of 71 degrees C. There are other Xeon processors with Tjmax of 70 or 80 degrees C. Also, the Intel IA32 System Programming document states that the temperature target is in bits 23:16 of MSR 0x1a2 (MSR_TEMPERATURE_TARGET), which is 8 bits, not 7. So even if turbostat uses similar checks to validate Tjmax, there is no evidence that the checks are actually required. On the contrary, the checks are known to cause problems and therefore need to be removed. This fixes https://bugzilla.kernel.org/show_bug.cgi?id=75071. Fixes: 9fb6c9c hwmon: (coretemp) Refine TjMax detection Reviewed-by: Jean Delvare <jdelvare@suse.de> Cc: stable@vger.kernel.org # 3.14+ Signed-off-by: Guenter Roeck <linux@roeck-us.net>
2014-05-01  ACPI / processor: Fix failure of loading acpi-cpufreq driver  (Lan Tianyu, 1 file, -3/+4)
According to commit d640113fe (ACPI: processor: fix acpi_get_cpuid for UP processor), the BIOS may not provide _MAT or MADT tables, in which case acpi_get_apicid() always returns -1. For these cases, the original code would pass an apic_id of -1 to acpi_map_cpuid(), which then checks the acpi_id: if acpi_id is zero, it ignores apic_id and returns zero for CPU0. Commit b981513 (ACPI / scan: bail out early if failed to parse APIC ID for CPU) changed this behaviour to return -ENODEV whenever apic_id is less than zero after calling acpi_get_apicid(). This causes the acpi-cpufreq driver to fail to load on some machines. This patch fixes it. Fixes: b981513f806d (ACPI / scan: bail out early if failed to parse APIC ID for CPU) References: https://bugzilla.kernel.org/show_bug.cgi?id=73781 Cc: 3.14+ <stable@vger.kernel.org> # 3.14+ Reported-and-tested-by: KATO Hiroshi <katoh@mikage.ne.jp> Reported-and-tested-by: Stuart Foster <smf.linux@ntlworld.com> Signed-off-by: Lan Tianyu <tianyu.lan@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2014-04-30  PNP / ACPI: Do not return errors if _DIS or _SRS are not present  (Rafael J. Wysocki, 1 file, -18/+26)
The ACPI PNP subsystem returns errors from pnpacpi_set_resources() and pnpacpi_disable_resources() if the _SRS or _DIS methods are not present, respectively, but it should not do that, because those methods are optional. For this reason, modify pnpacpi_set_resources() and pnpacpi_disable_resources(), respectively, to ignore missing _SRS or _DIS. This problem has been uncovered by commit 202317a573b2 (ACPI / scan: Add acpi_device objects for all device nodes in the namespace) and manifested itself by causing serial port suspend to fail on some systems. Fixes: 202317a573b2 (ACPI / scan: Add acpi_device objects for all device nodes in the namespace) References: https://bugzilla.kernel.org/show_bug.cgi?id=74371 Reported-by: wxg4net <wxg4net@gmail.com> Reported-and-tested-by: <nonproffessional@gmail.com> Cc: 3.14+ <stable@vger.kernel.org> # 3.14+ Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2014-04-30  MAINTAINERS: email address change for Jeff Layton  (Jeff Layton, 1 file, -1/+1)
jlayton@redhat.com -> jlayton@poochiereds.net Signed-off-by: Jeff Layton <jlayton@poochiereds.net>
2014-04-30  ARC: !PREEMPT: Ensure Return to kernel mode is IRQ safe  (Vineet Gupta, 1 file, -3/+5)
There was a very small race window where resume to kernel mode from an exception path (or pure kernel mode, which is true for most ARC exceptions anyway) was not disabling interrupts in restore_regs, clobbering the exception regs. Anton found the culprit call flow (after many sleepless nights): | 1. we got a Trap from user land | 2. started to service it. | 3. While doing some stuff on user-land memory (I think it is padzero()), | we got a DataTlbMiss | 4. On return from it we are taking "resume_kernel_mode" path | 5. NEED_RESHED is not set, so we go to "return from exception" path in | restore regs. | 6. there seems to be IRQ happening Signed-off-by: Vineet Gupta <vgupta@synopsys.com> Cc: <stable@vger.kernel.org> #3.10, 3.12, 3.13, 3.14 Cc: Anton Kolesov <Anton.Kolesov@synopsys.com> Cc: Francois Bedard <Francois.Bedard@synopsys.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-04-30  perf tests x86: Fix stack map lookup in dwarf unwind test  (Jiri Olsa, 1 file, -1/+1)
Previous commit 'perf x86: Fix perf to use non-executable stack, again' moved stack map into MAP__VARIABLE map type again. Fixing the dwarf unwind test stack map lookup appropriately. Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com> Cc: David Ahern <dsahern@gmail.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Arnaldo Carvalho de Melo <acme@kernel.org> Cc: Jean Pihet <jean.pihet@linaro.org> Link: http://lkml.kernel.org/n/tip-ttzyhbe4zls24z7ednkmhvxl@git.kernel.org Signed-off-by: Jiri Olsa <jolsa@kernel.org>
2014-04-30  perf x86: Fix perf to use non-executable stack, again  (Mathias Krause, 2 files, -1/+11)
arch/x86/tests/regs_load.S is missing the linker note about the stack requirements, therefore making the linker fall back to an executable stack. As this object gets linked against the final perf binary, it'll needlessly end up with an executable stack. Fix this by adding the appropriate linker note. Also add a global linker flag to prevent future regressions, as suggested by Jiri. This way perf won't get an executable stack even if we fail to add the .GNU-stack linker note to future assembler files. Though, doing so might create regressions the other way around, when (statically) linking against libraries needing an executable stack. But, apparently, regressing in that direction is wanted as it is an indicator of poor code quality -- or just missing linker notes. Fixes: 3c8b06f981 ("perf tests x86: Introduce perf_regs_load function") Signed-off-by: Mathias Krause <minipli@googlemail.com> Acked-by: Ingo Molnar <mingo@kernel.org> Cc: Arnaldo Carvalho de Melo <acme@kernel.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/1398617466-22749-1-git-send-email-minipli@googlemail.com Signed-off-by: Jiri Olsa <jolsa@kernel.org>
2014-04-30  perf tools: Remove extra '/' character in events file path  (Xia Kaixu, 1 file, -2/+2)
The array debugfs_known_mountpoints[] will cause extra '/' character output. Remove it. pre: $ perf probe -l /sys/kernel/debug//tracing/uprobe_events file does not exist - please rebuild kernel with CONFIG_UPROBE_EVENTS. post: $ perf probe -l /sys/kernel/debug/tracing/uprobe_events file does not exist - please rebuild kernel with CONFIG_UPROBE_EVENTS. Signed-off-by: Xia Kaixu <xiakaixu@huawei.com> Acked-by: Namhyung Kim <namhyung@kernel.org> Link: http://lkml.kernel.org/r/535B6660.2060001@huawei.com Signed-off-by: Jiri Olsa <jolsa@kernel.org>
2014-04-30  perf machine: Search for modules in %s/lib/modules/%s  (Richard Yao, 1 file, -4/+12)
Modules installed outside of the kernel's build system should go into "%s/lib/modules/%s/extra", but at present, perf will only look at them when they are in "%s/lib/modules/%s/kernel". Let's encourage good citizenship by relaxing this requirement to "%s/lib/modules/%s". This way open source modules that are out-of-tree have no incentive to start populating a directory reserved for in-kernel modules and I can stop hex-editing my system's perf binary when profiling OSS out-of-tree modules. Feedback from Namhyung Kim correctly revealed that the hex-edits that I had been doing meant that perf was also traversing the build and source symlinks in %s/lib/modules/%s. That is undesirable, so we explicitly exclude them from traversal with a minor tweak to the traversal routine. Signed-off-by: Richard Yao <ryao@gentoo.org> Acked-by: Namhyung Kim <namhyung@kernel.org> Link: http://lkml.kernel.org/r/1398532675-13684-1-git-send-email-ryao@gentoo.org Signed-off-by: Jiri Olsa <jolsa@kernel.org>
2014-04-30  perf tests: Add static build make test  (Jiri Olsa, 1 file, -0/+2)
Adding test for building static perf build into the automated suite. Also available via following commands: $ make -f tests/make make_static - make_static: cd . && make -f Makefile DESTDIR=/tmp/tmp.7u5MlB4njo LDFLAGS=-static $ make -f tests/make make_static_O - make_static_O: cd . && make -f Makefile O=/tmp/tmp.Ay6r3wEmtX DESTDIR=/tmp/tmp.vK0KQwO0Vi LDFLAGS=-static Acked-by: David Ahern <dsahern@gmail.com> Cc: Arnaldo Carvalho de Melo <acme@kernel.org> Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com> Cc: David Ahern <dsahern@gmail.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/1398760413-7574-1-git-send-email-jolsa@kernel.org Signed-off-by: Jiri Olsa <jolsa@kernel.org>
2014-04-30  perf tools: Fix bfd dependency libraries detection  (Jiri Olsa, 1 file, -11/+23)
There's a false assumption in the library detection code that -liberty and -lz are always present once bfd is detected. This fails on Ubuntu (14.04), as reported by Ingo. Force detection of the bfd dependency libraries any time the bfd library is detected. Reported-by: Ingo Molnar <mingo@kernel.org> Tested-by: Ingo Molnar <mingo@kernel.org> Cc: Arnaldo Carvalho de Melo <acme@kernel.org> Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com> Cc: David Ahern <dsahern@gmail.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/1398676935-6615-1-git-send-email-jolsa@kernel.org Signed-off-by: Jiri Olsa <jolsa@kernel.org>
2014-04-30  perf tools: Use LDFLAGS instead of ALL_LDFLAGS  (Jiri Olsa, 1 file, -1/+1)
We no longer use ALL_LDFLAGS; replace it with LDFLAGS. Acked-by: Namhyung Kim <namhyung@kernel.org> Cc: Arnaldo Carvalho de Melo <acme@kernel.org> Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com> Cc: David Ahern <dsahern@gmail.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/1398675770-3109-1-git-send-email-jolsa@kernel.org Signed-off-by: Jiri Olsa <jolsa@kernel.org>
2014-04-30  timer: Prevent overflow in apply_slack  (Jiri Bohac, 1 file, -1/+1)
On architectures with sizeof(int) < sizeof(long), the computation of mask inside apply_slack() can be undefined if the computed bit is > 32. E.g. with: expires = 0xffffe6f5 and slack = 25, we get: expires_limit = 0x20000000e bit = 33 mask = (1 << 33) - 1 /* undefined */ On x86, mask becomes 1 and the slack is not applied properly. On s390, mask is -1, expires is set to 0 and the timer fires immediately. Use 1UL << bit to solve that issue. Suggested-by: Deborah Townsend <dstownse@us.ibm.com> Signed-off-by: Jiri Bohac <jbohac@suse.cz> Cc: stable@vger.kernel.org Link: http://lkml.kernel.org/r/20140418152310.GA13654@midget.suse.cz Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
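A sketch of the relevant part of apply_slack() (simplified; the only point is the UL suffix):

    unsigned long expires_limit, mask;
    int bit;

    expires_limit = expires + delta;        /* delta derived from the allowed slack */
    mask = expires ^ expires_limit;         /* bits permitted to change */
    if (mask) {
            bit = find_last_bit(&mask, BITS_PER_LONG);
            mask = (1UL << bit) - 1;        /* 1UL: "1 << 33" is undefined for a 32-bit int */
            expires_limit &= ~mask;         /* round down to the slack granularity */
    }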
2014-04-30  hrtimer: Prevent remote enqueue of leftmost timers  (Leon Ma, 1 file, -0/+5)
If a cpu is idle and starts an hrtimer which is not pinned on that same cpu, the nohz code might target the timer to a different cpu. In the case that we switch the cpu base of the timer we already have a sanity check in place, which determines whether the timer is earlier than the current leftmost timer on the target cpu. In that case we enqueue the timer on the current cpu because we cannot reprogram the clock event device on the target. If the timers base is already the target CPU we do not have this sanity check in place so we enqueue the timer as the leftmost timer in the target cpus rb tree, but we cannot reprogram the clock event device on the target cpu. So the timer expires late and subsequently prevents the reprogramming of the target cpu clock event device until the previously programmed event fires or a timer with an earlier expiry time gets enqueued on the target cpu itself. Add the same target check as we have for the switch base case and start the timer on the current cpu if it would become the leftmost timer on the target. [ tglx: Rewrote subject and changelog ] Signed-off-by: Leon Ma <xindong.ma@intel.com> Link: http://lkml.kernel.org/r/1398847391-5994-1-git-send-email-xindong.ma@intel.com Cc: stable@vger.kernel.org Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2014-04-30  hrtimer: Prevent all reprogramming if hang detected  (Stuart Hayes, 1 file, -0/+17)
If the last hrtimer interrupt detected a hang it sets hang_detected=1 and programs the clock event device with a delay to let the system make progress. If hang_detected == 1, we prevent reprogramming of the clock event device in hrtimer_reprogram() but not in hrtimer_force_reprogram(). This can lead to the following situation: hrtimer_interrupt() hang_detected = 1; program ce device to Xms from now (hang delay) We have two timers pending: T1 expires 50ms from now T2 expires 5s from now Now T1 gets canceled, which causes hrtimer_force_reprogram() to be invoked, which in turn programs the clock event device to T2 (5 seconds from now). Any hrtimer_start after that will not reprogram the hardware due to hang_detected still being set. So we effectively block all timers until the T2 event fires and cleans up the hang situation. Add a check for hang_detected to hrtimer_force_reprogram() which prevents the reprogramming of the hang delay in the hardware timer. The subsequent hrtimer_interrupt will resolve all outstanding issues. [ tglx: Rewrote subject and changelog and fixed up the comment in hrtimer_force_reprogram() ] Signed-off-by: Stuart Hayes <stuart.w.hayes@gmail.com> Link: http://lkml.kernel.org/r/53602DC6.2060101@gmail.com Cc: stable@vger.kernel.org Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
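The added guard is essentially this (sketch; the exact placement inside the function may differ):

    static void hrtimer_force_reprogram(struct hrtimer_cpu_base *cpu_base, int skip_equal)
    {
            /* Bail out like hrtimer_reprogram() already does, so the short "hang
             * delay" event stays programmed and the next interrupt can recover. */
            if (cpu_base->hang_detected)
                    return;

            /* ... find the new leftmost expiry and program the clock event device ... */
    }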
2014-04-30  drm/exynos: use %pad for dma_addr_t  (Jingoo Han, 2 files, -2/+2)
Use %pad for dma_addr_t, because a dma_addr_t type can vary based on build options. So, it prevents possible build warnings in printks. Signed-off-by: Jingoo Han <jg1.han@samsung.com> Reviewed-by: Daniel Kurtz <djkurtz@chromium.org> Signed-off-by: Inki Dae <inki.dae@samsung.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
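Usage sketch (dev and size assumed to be in scope): %pad takes a pointer to the dma_addr_t, so the printed width follows the type whatever the configuration:

    dma_addr_t dma_addr;
    void *cpu_addr;

    cpu_addr = dma_alloc_coherent(dev, size, &dma_addr, GFP_KERNEL);
    if (!cpu_addr)
            return -ENOMEM;

    /* Previously "%x"/"%llx" could warn depending on how dma_addr_t is configured. */
    dev_info(dev, "dma buffer at %pad (cpu %p)\n", &dma_addr, cpu_addr);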
2014-04-30  drm/exynos: dsi: use IS_ERR() to check devm_ioremap_resource() results  (Jingoo Han, 1 file, -2/+2)
devm_ioremap_resource() returns an error pointer, not NULL. Thus, the result should be checked with IS_ERR(). Signed-off-by: Jingoo Han <jg1.han@samsung.com> Signed-off-by: Inki Dae <inki.dae@samsung.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
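The corrected idiom, as a sketch:

    struct resource *res;
    void __iomem *regs;

    res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
    regs = devm_ioremap_resource(&pdev->dev, res);
    if (IS_ERR(regs))                       /* never NULL on failure */
            return PTR_ERR(regs);           /* propagate the encoded errno */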
2014-04-30  MAINTAINERS: update maintainer entry for Exynos DP driver  (Jingoo Han, 1 file, -0/+6)
Recently, Exynos DP driver was moved from drivers/video/exynos/ directory to drivers/gpu/drm/exynos/ directory. So, I update and add maintainer entry for Exynos DP driver. Signed-off-by: Jingoo Han <jg1.han@samsung.com> Signed-off-by: Inki Dae <inki.dae@samsung.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
2014-04-30  drm/exynos: balance framebuffer refcount  (Andrzej Hajda, 1 file, -0/+1)
exynos_drm_crtc_mode_set assigns primary framebuffer to plane without taking reference. Then during framebuffer removal it is dereferenced twice, causing oops. The patch fixes it. Signed-off-by: Andrzej Hajda <a.hajda@samsung.com> Signed-off-by: Inki Dae <inki.dae@samsung.com> Signed-off-by: Dave Airlie <airlied@redhat.com>