path: root/tools/perf/scripts/python/call-graph-from-postgresql.py
2015-10-20  sched/deadline: Fix migration of SCHED_DEADLINE tasks  [Luca Abeni, 1 file, -3/+5]
Commit: 9d5142624256 ("sched/deadline: Reduce rq lock contention by eliminating locking of non-feasible target") broke select_task_rq_dl() and find_lock_later_rq(), because it introduced a comparison between the local task's deadline and dl.earliest_dl.curr of the remote queue. However, if the remote runqueue does not contain any SCHED_DEADLINE task its earliest_dl.curr is 0 (always smaller than the deadline of the local task) and the remote runqueue is not selected for pushing. As a result, if an application creates multiple SCHED_DEADLINE threads, they will never be pushed to runqueues that do not already contain SCHED_DEADLINE tasks. This patch fixes the issue by checking if dl.dl_nr_running == 0. Signed-off-by: Luca Abeni <luca.abeni@unitn.it> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Juri Lelli <juri.lelli@arm.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Wanpeng Li <wanpeng.li@linux.intel.com> Fixes: 9d5142624256 ("sched/deadline: Reduce rq lock contention by eliminating locking of non-feasible target") Link: http://lkml.kernel.org/r/1444982781-15608-1-git-send-email-luca.abeni@unitn.it Signed-off-by: Ingo Molnar <mingo@kernel.org>
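A minimal sketch of the fixed feasibility test, assuming the shape described above (the exact placement inside select_task_rq_dl()/find_lock_later_rq() may differ):

    /*
     * An empty remote runqueue has earliest_dl.curr == 0, which the plain
     * deadline comparison would wrongly treat as "earlier than us"; treat
     * "no deadline tasks at all" as an always-feasible push target instead.
     */
    bool feasible = later_rq->dl.dl_nr_running == 0 ||
                    dl_time_before(task->dl.deadline,
                                   later_rq->dl.earliest_dl.curr);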
2015-10-20  nohz: Revert "nohz: Set isolcpus when nohz_full is set"  [Frederic Weisbecker, 1 file, -3/+0]
This reverts: 8cb9764fc88b ("nohz: Set isolcpus when nohz_full is set") We assumed that full-nohz users always want scheduler isolation on full dynticks CPUs, therefore we included full-nohz CPUs in cpu_isolated_map. This means that tasks run by default on CPUs outside the nohz_full range unless their affinity is explicitly overwritten. This suits pure isolation workloads, but when the machine is needed to run common workloads, the set of CPUs available to run common tasks shrinks. We reach an extreme case when CONFIG_NO_HZ_FULL_ALL is enabled, as it leaves only CPU 0 for non-isolation tasks, which makes people think that their supercomputer regressed to 90's UP - which is true in a sense. Some full-nohz users appear to be interested in running normal workloads either before or after an isolation workload. Full-nohz isn't optimized toward normal workloads, but it's still better than UP performance. We are reaching a limitation in kernel presets here. Let's revert this cpu_isolated_map inclusion and let userspace do its own scheduler isolation using cpusets or explicit affinity settings. Reported-by: Ingo Molnar <mingo@kernel.org> Reported-by: Mike Galbraith <umgwanakikbuti@gmail.com> Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Thomas Gleixner <tglx@linutronix.de> Cc: Alexey Dobriyan <adobriyan@gmail.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Chris Metcalf <cmetcalf@ezchip.com> Cc: Christoph Lameter <cl@linux.com> Cc: Dave Jones <davej@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rik van Riel <riel@redhat.com> Link: http://lkml.kernel.org/r/1444663283-30068-1-git-send-email-fweisbec@gmail.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-10-20  sched/fair: Update task group's load_avg after task migration  [Yuyang Du, 1 file, -2/+3]
When cfs_rq has cfs_rq->removed_load_avg set (when a task migrates from this cfs_rq), we need to update the cfs_rq's contribution to the group's load_avg. This should not add much update overhead for the tg, because in most cases the cfs_rq has already decayed its load_avg. Tested-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Signed-off-by: Yuyang Du <yuyang.du@intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/1444699103-20272-2-git-send-email-yuyang.du@intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-10-20  sched/fair: Fix overly small weight for interactive group entities  [Yuyang Du, 1 file, -2/+2]
Commit: 9d89c257dfb9 ("sched/fair: Rewrite runnable load and utilization average tracking") led to an overly small weight for interactive group entities. The bad case can be easily reproduced when a number of CPU hogs compete for the CPUs at the same time (thanks to Mike). This is largely because the task group's load average tracking across CPUs lags behind the real changes. To fix this we accelerate the group share distribution process by using the load.weight of the cfs_rq. This may increase the entire group's share, but we have to do so to protect the (fragile) interactive tasks, especially from CPU hogs. Reported-by: Mike Galbraith <umgwanakikbuti@gmail.com> Tested-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Tested-by: Mike Galbraith <umgwanakikbuti@gmail.com> Signed-off-by: Yuyang Du <yuyang.du@intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/1444699103-20272-1-git-send-email-yuyang.du@intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-10-19  sched/x86: Fix typo in __switch_to() comments  [Chuck Ebbert, 1 file, -1/+1]
Fix obvious mistake: FS/GS should be DS/ES. Signed-off-by: Chuck Ebbert <cebbert.lkml@gmail.com> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/20151014143119.78858eeb@r5 Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-10-12  sched, tracing: Stop/start critical timings around the idle=poll idle loop  [Daniel Bristot de Oliveira, 1 file, -0/+2]
When using idle=poll, the preemptoff tracer always shows the idle task as the culprit for long latencies. That happens because critical timings are not stopped before the idle loop. This patch stops critical timings before entering the idle loop and starts them again after leaving it. This problem does not affect the irqsoff tracer because interrupts are enabled before entering the idle loop. Signed-off-by: Daniel Bristot de Oliveira <bristot@redhat.com> Reviewed-by: Luis Claudio R. Goncalves <lgoncalv@redhat.com> Acked-by: Steven Rostedt <rostedt@goodmis.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/10fc3705874aef11dbe152a068b591a7be1899b4.1444314899.git.bristot@redhat.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
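A sketch of the shape of the fix in the polling idle path (helper names as in kernel/sched/idle.c of this era; tracepoints and other details elided):

    static int cpu_idle_poll(void)
    {
            rcu_idle_enter();
            local_irq_enable();
            stop_critical_timings();        /* the fix: exclude the poll loop... */
            while (!tif_need_resched())
                    cpu_relax();
            start_critical_timings();       /* ...from the preemptoff tracer */
            rcu_idle_exit();
            return 1;
    }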
2015-10-11  Linux 4.3-rc5  [Linus Torvalds, 1 file, -2/+2]
2015-10-11  MAINTAINERS: Change Matt Fleming's email address  [Matt Fleming, 1 file, -2/+2]
My Intel email address will soon expire. Replace it with my personal address so people still know where to send patches. Signed-off-by: Matt Fleming <matt.fleming@intel.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/1444494136-10333-1-git-send-email-matt@codeblueprint.co.uk Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-10-10  namei: results of d_is_negative() should be checked after dentry revalidation  [Trond Myklebust, 1 file, -2/+6]
Leandro Awa writes:

 "After switching to version 4.1.6, our parallelized and distributed workflows now fail consistently with errors of the form:

 T34: ./regex.c:39:22: error: config.h: No such file or directory

 From our 'git bisect' testing, the following commit appears to be the possible cause of the behavior we've been seeing: commit 766c4cbfacd8"

Al Viro says:

 "What happens is that 766c4cbfacd8 got the things subtly wrong. We used to treat d_is_negative() after lookup_fast() as "fall with ENOENT". That was wrong - checking ->d_flags outside of ->d_seq protection is unreliable and failing with a hard error on what should've fallen back to non-RCU pathname resolution is a bug. Unfortunately, we'd pulled the test too far up and ran afoul of another kind of staleness. The dentry might have been absolutely stable from the RCU point of view (and we might be on UP, etc.), but stale from the remote fs point of view. If ->d_revalidate() returns "it's actually stale", the dentry gets thrown away and the original code wouldn't even have looked at its ->d_flags. What we need is to check ->d_flags where 766c4cbfacd8 does (prior to ->d_seq validation) but only use the result in cases where we do not discard this dentry outright"

Reported-by: Leandro Awa <lawa@nvidia.com> Link: https://bugzilla.kernel.org/show_bug.cgi?id=104911 Fixes: 766c4cbfacd8 ("namei: d_is_negative() should be checked...") Tested-by: Leandro Awa <lawa@nvidia.com> Cc: stable@vger.kernel.org # v4.1+ Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com> Acked-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2015-10-09  dm snapshot: add new persistent store option to support overflow  [Mike Snitzer, 6 files, -18/+37]
Commit 76c44f6d80 introduced the possibility for "Overflow" to be reported by the snapshot device's status. Older userspace (e.g. lvm2) does not handle the "Overflow" status response. Fix this incompatibility by requiring that newer userspace code, which can cope with "Overflow", request the persistent store with overflow support by using "PO" (Persistent with Overflow) as the snapshot store type. Reported-by: Zdenek Kabelac <zkabelac@redhat.com> Fixes: 76c44f6d80 ("dm snapshot: don't invalidate on-disk image on snapshot write overflow") Reviewed-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
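For illustration, a hypothetical dmsetup invocation requesting the new store type (device names, the 16384-sector length and the 8-sector chunk size are invented for the example); the only change from a plain persistent snapshot is "PO" in place of "P":

    # snapshot target line: <start> <len> snapshot <origin> <COW device> <P|PO|N> <chunksize>
    dmsetup create snap --table "0 16384 snapshot /dev/vg/origin /dev/vg/cow PO 8"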
2015-10-09  irqdomain: Add an accessor for the of_node field  [Marc Zyngier, 1 file, -0/+5]
As we're about to remove the of_node field from the irqdomain structure, introduce an accessor for it. Subsequent patches will take care of the actual repainting. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Cc: Jiang Liu <jiang.liu@linux.intel.com> Cc: Jason Cooper <jason@lakedaemon.net> Link: http://lkml.kernel.org/r/1444402211-1141-1-git-send-email-marc.zyngier@arm.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
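The accessor is presumably a trivial inline wrapper; a sketch (helper name assumed from later kernels):

    static inline struct device_node *irq_domain_get_of_node(struct irq_domain *d)
    {
            return d->of_node;
    }

Once all users go through the helper, the field itself can be moved or replaced without touching the callers again.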
2015-10-09  genirq: Fix handle_bad_irq kerneldoc comment  [Arnd Bergmann, 1 file, -1/+0]
A recent cleanup removed the 'irq' parameter from many functions, but left the documentation for this in place for at least one function. This removes it. Fixes: bd0b9ac405e1 ("genirq: Remove irq argument from irq flow handlers") Reported-by: kbuild test robot <lkp@intel.com> Signed-off-by: Arnd Bergmann <arnd@arndb.de> Cc: Grygorii Strashko <grygorii.strashko@ti.com> Cc: Tony Lindgren <tony@atomide.com> Cc: Linus Walleij <linus.walleij@linaro.org> Cc: kbuild-all@01.org Cc: Austin Schuh <austin@peloton-tech.com> Cc: Santosh Shilimkar <ssantosh@kernel.org> Cc: linux-arm-kernel@lists.infradead.org Link: http://lkml.kernel.org/r/5400000.cD19rmgWjV@wuerfel Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2015-10-09  genirq: Export handle_bad_irq  [Arnd Bergmann, 1 file, -0/+1]
A cleanup of the omap gpio driver introduced a use of the handle_bad_irq() function in a device driver that can be a loadable module. This broke the ARM allmodconfig build: ERROR: "handle_bad_irq" [drivers/gpio/gpio-omap.ko] undefined! This patch exports the handle_bad_irq symbol in order to allow the use in modules. Signed-off-by: Arnd Bergmann <arnd@arndb.de> Cc: Grygorii Strashko <grygorii.strashko@ti.com> Cc: Santosh Shilimkar <ssantosh@kernel.org> Cc: Linus Walleij <linus.walleij@linaro.org> Cc: Austin Schuh <austin@peloton-tech.com> Cc: Tony Lindgren <tony@atomide.com> Cc: linux-arm-kernel@lists.infradead.org Link: http://lkml.kernel.org/r/5847725.4IBopItaOr@wuerfel Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
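The fix itself amounts to a one-line export next to the function definition in kernel/irq/handle.c (assuming a GPL-only export was chosen):

    EXPORT_SYMBOL_GPL(handle_bad_irq);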
2015-10-09  dm cache: fix NULL pointer when switching from cleaner policy  [Joe Thornber, 1 file, -1/+1]
The cleaner policy doesn't make use of the per cache block hint space in the metadata (unlike the other policies). When switching from the cleaner policy to mq or smq, a NULL pointer crash (in dm_tm_new_block) was observed. The crash was caused by bugs in dm-cache-metadata.c when trying to skip creation of the hint btree. The minimal fix is to change the hint size for the cleaner policy to 4 bytes (the only supported hint size). Signed-off-by: Joe Thornber <ejt@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com> Cc: stable@vger.kernel.org
2015-10-09  crash in md-raid1 and md-raid10 due to incorrect list manipulation  [Mikulas Patocka, 2 files, -4/+4]
The commit 55ce74d4bfe1b9444436264c637f39a152d1e5ac (md/raid1: ensure device failure recorded before write request returns) is causing a crash in the LVM2 testsuite test shell/lvchange-raid.sh. For me the crash is 100% reproducible. The reason for the crash is that the newly added code in raid1d moves the list from conf->bio_end_io_list to tmp, tests if tmp is non-empty, and then incorrectly pops the bio from conf->bio_end_io_list (which is empty because the list was already moved). Raid-10 has a similar bug.

Kernel Fault: Code=15 regs=000000006ccb8640 (Addr=0000000100000000)
CPU: 3 PID: 1930 Comm: mdX_raid1 Not tainted 4.2.0-rc5-bisect+ #35
task: 000000006cc1f258 ti: 000000006ccb8000 task.ti: 000000006ccb8000
YZrvWESTHLNXBCVMcbcbcbcbOGFRQPDI
PSW: 00001000000001001111111000001111 Not tainted
r00-03  000000ff0804fe0f 000000001059d000 000000001059f818 000000007f16be38
r04-07  000000001059d000 000000007f16be08 0000000000200200 0000000000000001
r08-11  000000006ccb8260 000000007b7934d0 0000000000000001 0000000000000000
r12-15  000000004056f320 0000000000000000 0000000000013dd0 0000000000000000
r16-19  00000000f0d00ae0 0000000000000000 0000000000000000 0000000000000001
r20-23  000000000800000f 0000000042200390 0000000000000000 0000000000000000
r24-27  0000000000000001 000000000800000f 000000007f16be08 000000001059d000
r28-31  0000000100000000 000000006ccb8560 000000006ccb8640 0000000000000000
sr00-03  0000000000249800 0000000000000000 0000000000000000 0000000000249800
sr04-07  0000000000000000 0000000000000000 0000000000000000 0000000000000000
IASQ: 0000000000000000 0000000000000000 IAOQ: 000000001059f61c 000000001059f620
IIR: 0f8010c6    ISR: 0000000000000000  IOR: 0000000100000000
CPU: 3   CR30: 000000006ccb8000 CR31: 0000000000000000
ORIG_R28: 000000001059d000
IAOQ[0]: call_bio_endio+0x34/0x1a8 [raid1]
IAOQ[1]: call_bio_endio+0x38/0x1a8 [raid1]
RP(r2): raid_end_bio_io+0x88/0x168 [raid1]
Backtrace:
 [<000000001059f818>] raid_end_bio_io+0x88/0x168 [raid1]
 [<00000000105a4f64>] raid1d+0x144/0x1640 [raid1]
 [<000000004017fd5c>] kthread+0x144/0x160

Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Fixes: 55ce74d4bfe1 ("md/raid1: ensure device failure recorded before write request returns.") Fixes: 95af587e95aa ("md/raid10: ensure device failure recorded before write request returns.") Signed-off-by: NeilBrown <neilb@suse.com>
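A sketch of the buggy pattern and its fix, reconstructed from the description above (structure and field names from the raid1 code; error handling elided):

    LIST_HEAD(tmp);

    /* the whole list is spliced onto the local head... */
    list_splice_init(&conf->bio_end_io_list, &tmp);
    while (!list_empty(&tmp)) {
            /* ...so entries must be popped from 'tmp'; the buggy code
             * kept popping from the now-empty conf->bio_end_io_list */
            struct r1bio *r1_bio = list_first_entry(&tmp, struct r1bio,
                                                    retry_list);
            list_del(&r1_bio->retry_list);
            raid_end_bio_io(r1_bio);
    }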
2015-10-08  cpufreq: prevent lockup on reading scaling_available_frequencies  [Srinivas Pandruvada, 1 file, -1/+3]
When scaling_available_frequencies is read for an offlined cpu, either a lockup or junk values are displayed. This is caused by the freed freq_table, which the policy is still using. Signed-off-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2015-10-08  cpufreq: acpi_cpufreq: prevent crash on reading freqdomain_cpus  [Srinivas Pandruvada, 1 file, -0/+3]
When the freqdomain_cpus attribute is read from an offlined cpu, it causes a crash. This change prevents calling cpufreq_show_cpus when the policy driver_data is NULL. Crash info:

[  170.814949] BUG: unable to handle kernel NULL pointer dereference at 0000000000000018
[  170.814990] IP: [<ffffffff813b2490>] _find_next_bit.part.0+0x10/0x70
[  170.815021] PGD 227d30067 PUD 229e56067 PMD 0
[  170.815043] Oops: 0000 [#2] SMP
[  170.816022] CPU: 3 PID: 3121 Comm: cat Tainted: G D OE 4.3.0-rc3+ #33
...
...
[  170.816657] Call Trace:
[  170.816672]  [<ffffffff813b2505>] ? find_next_bit+0x15/0x20
[  170.816696]  [<ffffffff8160e47c>] cpufreq_show_cpus+0x5c/0xd0
[  170.816722]  [<ffffffffa031a409>] show_freqdomain_cpus+0x19/0x20 [acpi_cpufreq]
[  170.816749]  [<ffffffff8160e65b>] show+0x3b/0x60
[  170.816769]  [<ffffffff8129b31c>] sysfs_kf_seq_show+0xbc/0x130
[  170.816793]  [<ffffffff81299be3>] kernfs_seq_show+0x23/0x30
[  170.816816]  [<ffffffff81240f2c>] seq_read+0xec/0x390
[  170.816837]  [<ffffffff8129a64a>] kernfs_fop_read+0x10a/0x160
[  170.816861]  [<ffffffff8121d9b7>] __vfs_read+0x37/0x100
[  170.816883]  [<ffffffff813217c0>] ? security_file_permission+0xa0/0xc0
[  170.816909]  [<ffffffff8121e2e3>] vfs_read+0x83/0x130
[  170.816930]  [<ffffffff8121f035>] SyS_read+0x55/0xc0
...
...
[  170.817185] ---[ end trace bc6eadf82b2b965a ]---

Signed-off-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Cc: 4.2+ <stable@vger.kernel.org> # 4.2+ Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
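A sketch of the guard, assuming the sysfs show helper visible in the backtrace above:

    static ssize_t show_freqdomain_cpus(struct cpufreq_policy *policy, char *buf)
    {
            struct acpi_cpufreq_data *data = policy->driver_data;

            if (unlikely(!data))    /* CPU went offline: per-policy data is gone */
                    return -ENODEV;

            return cpufreq_show_cpus(data->freqdomain_cpus, buf);
    }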
2015-10-08  mmc: sdhci-of-at91: use SDHCI_QUIRK2_NEED_DELAY_AFTER_INT_CLK_RST quirk  [ludovic.desroches@atmel.com, 1 file, -0/+1]
The Atmel sdhci device needs the SDHCI_QUIRK2_NEED_DELAY_AFTER_INT_CLK_RST quirk. Without it, the internal clock could never stabilise when changing the sd clock frequency. Signed-off-by: Ludovic Desroches <ludovic.desroches@atmel.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
2015-10-08  mmc: sdhci: add quirk SDHCI_QUIRK2_NEED_DELAY_AFTER_INT_CLK_RST  [ludovic.desroches@atmel.com, 2 files, -0/+7]
The Atmel sdhci device needs a new quirk. sdhci_set_clock() sets the Clock Control Register to 0 before computing the new value and writing it. This disables the internal clock, which causes a reset mechanism. If we write the new value before this reset mechanism is done, it will prevent the stabilisation of the internal clock, so a delay is needed. This delay is about 2-3 cycles of the base clock. To be safe, a 1 ms delay is used. Signed-off-by: Ludovic Desroches <ludovic.desroches@atmel.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
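A sketch of where such a quirk would take effect in sdhci_set_clock(); a busy-wait delay is assumed here since the function runs under the host lock:

    sdhci_writew(host, 0, SDHCI_CLOCK_CONTROL);     /* internal clock off */
    if (host->quirks2 & SDHCI_QUIRK2_NEED_DELAY_AFTER_INT_CLK_RST)
            mdelay(1);      /* ~2-3 base clock cycles; 1 ms is safely above */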
2015-10-08  mmc: sdhci-pxav3: fix error handling of armada_38x_quirks  [Marcin Wojtas, 1 file, -1/+1]
In case of an armada_38x_quirks error, all clocks should be cleaned up, just as after a mv_conf_mbus_windows failure. Signed-off-by: Marcin Wojtas <mw@semihalf.com> Cc: <stable@vger.kernel.org> # v4.2 Reviewed-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
2015-10-08  mmc: sdhci-pxav3: disable clock inversion for HS MMC cards  [Nadav Haklai, 1 file, -0/+3]
According to the 'FE-2946959' erratum, the clock inversion option is needed to support slow frequencies when the card input hold time requirement is high. This setting is not required for high speed MMC and might cause a timing violation. Signed-off-by: Nadav Haklai <nadavh@marvell.com> Cc: <stable@vger.kernel.org> # v4.2 Reviewed-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
2015-10-08  mmc: sdhci-pxav3: remove broken clock base quirk for Armada 38x sdhci driver  [Nadav Haklai, 1 file, -0/+1]
The sdhci-pxav3 driver enables the SDHCI_QUIRK_CAP_CLOCK_BASE_BROKEN quirk by default. However, this quirk is not required for Armada 38x and leads to a wrong clock setting in the divider. Signed-off-by: Nadav Haklai <nadavh@marvell.com> Signed-off-by: Marcin Wojtas <mw@semihalf.com> Cc: <stable@vger.kernel.org> # v4.2 Reviewed-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
2015-10-08  arch/powerpc: provide zero_bytemask() for big-endian  [Chris Metcalf, 1 file, -0/+5]
For some reason, only the little-endian flavor of powerpc provided the zero_bytemask() implementation. Reported-by: Michal Sojka <sojkam1@fel.cvut.cz> Acked-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Chris Metcalf <cmetcalf@ezchip.com>
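From memory, the big-endian helper is a one-liner keyed off the highest set bit of the mask; the exact form in arch/powerpc/include/asm/word-at-a-time.h may differ:

    static inline unsigned long zero_bytemask(unsigned long mask)
    {
            return ~1ul << __fls(mask);
    }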
2015-10-08  mmc: host: omap_hsmmc: Fix MMC for omap3 legacy booting  [Tony Lindgren, 1 file, -3/+3]
Starting with commit 7d607f917008 ("mmc: host: omap_hsmmc: use devm_regulator_get_optional() for vmmc") MMC on omap3 stopped working for legacy booting. This is because legacy booting sets up some of the resources in the platform init code, and optional regulators always seem to return -EPROBE_DEFER for legacy booting. Let's fix the issue by checking for device tree based booting for now. Then, when omap3 boots in device tree only mode, this patch can simply be reverted. Fixes: 7d607f917008 ("mmc: host: omap_hsmmc: use devm_regulator_get_optional() for vmmc") Cc: Felipe Balbi <balbi@ti.com> Cc: Kishon Vijay Abraham I <kishon@ti.com> Cc: Nishanth Menon <nm@ti.com> Cc: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Tony Lindgren <tony@atomide.com> Tested-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
2015-10-08Revert "mmc: host: omap_hsmmc: use regulator_is_enabled to find pbias status"Tony Lindgren1-2/+6
This reverts commit c55d7a0553643a7e8f120688b82b594471084d3c. Without reverting this commit we get "unbalanced disables for pbias_mmc_omap4" errors on omap4430. It seems that 4430 and 4460 behave in a different way for the PBIAS regulator registers, and until that has been debugged further we cannot rely on the regulator status registers in hardware on 4430. Fixes: 7d607f917008 ("mmc: host: omap_hsmmc: use devm_regulator_get_optional() for vmmc") Cc: Felipe Balbi <balbi@ti.com> Cc: Kishon Vijay Abraham I <kishon@ti.com> Cc: Nishanth Menon <nm@ti.com> Cc: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Tony Lindgren <tony@atomide.com> Tested-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
2015-10-07  swiotlb: Enable it under x86 PAE  [Christian Melki, 1 file, -0/+1]
Most distributions end up enabling SWIOTLB already with 32-bit kernels due to the combination of CONFIG_HYPERVISOR_GUEST|CONFIG_XEN=y, as those end up requiring the SWIOTLB. However, those that are not interested in virtualization and run 32-bit will discover that:

"32-bit PAE 4.2.0 kernel (no IOMMU code) would hang when writing to my USB disk. The kernel spews million(-ish) messages per sec to syslog, effectively "hanging" userspace with my kernel.

Oct 2 14:33:06 voodoochild kernel: [  223.287447] nommu_map_sg: overflow 25dcac000+1024 of device mask ffffffff
Oct 2 14:33:06 voodoochild kernel: [  223.287448] nommu_map_sg: overflow 25dcac000+1024 of device mask ffffffff
Oct 2 14:33:06 voodoochild kernel: [  223.287449] nommu_map_sg: overflow 25dcac000+1024 of device mask ffffffff
... etc ..."

Enabling it makes the problem go away.

N.B. With a6dfa128ce5c414ab46b1d690f7a1b8decb8526d "config: Enable NEED_DMA_MAP_STATE by default when SWIOTLB is selected" we also have the important part of the SG macros enabled to make this work properly - in case anybody wants to backport this patch.

Reported-and-Tested-by: Christian Melki <christian.melki@t2data.com> Signed-off-by: Christian Melki <christian.melki@t2data.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2015-10-07  3w-9xxx: don't unmap bounce buffered commands  [Christoph Hellwig, 1 file, -7/+21]
3w controllers don't DMA map small single SGL entry commands but instead bounce buffer them. Add a helper to identify these commands and don't call scsi_dma_unmap for them. Based on an earlier patch from James Bottomley. Fixes: 118c85 ("3w-9xxx: fix command completion race") Reported-by: Tóth Attila <atoth@atoth.sote.hu> Tested-by: Tóth Attila <atoth@atoth.sote.hu> Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Adam Radford <aradford@gmail.com> Signed-off-by: James Bottomley <JBottomley@Odin.com>
2015-10-07  perf tools: Fix build break on powerpc due to sample_reg_masks  [Sukadev Bhattiprolu, 3 files, -1/+4]
perf_regs.c does not get built on Powerpc as CONFIG_PERF_REGS is false. So the weak definition for 'sample_reg_masks' doesn't get picked up. Adding perf_regs.o to util/Build unconditionally exposes a redefinition error for the 'perf_reg_value()' function (due to the static inline version in util/perf_regs.h). So use '#ifdef HAVE_PERF_REGS_SUPPORT' around that function. Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Cc: Stephane Eranian <eranian@google.com> Cc: linuxppc-dev@ozlabs.org Link: http://lkml.kernel.org/r/20150930182836.GA27858@us.ibm.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2015-10-07  video: of: fix memory leak  [Sudip Mukherjee, 1 file, -0/+1]
If of_parse_display_timing() fails, we print an error message and jump to the error path, but we missed freeing "dt". Signed-off-by: Sudip Mukherjee <sudip@vectorindia.org> Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ti.com>
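A sketch of the error path with the missing free added (label and message wording assumed):

    struct display_timing *dt;

    dt = kzalloc(sizeof(*dt), GFP_KERNEL);
    if (!dt)
            goto entryfail;

    if (of_parse_display_timing(entry, dt)) {
            pr_err("error parsing display timing\n");
            kfree(dt);              /* the fix: don't leak the allocation */
            goto timingfail;
    }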
2015-10-07Revert "fs: do not prefault sys_write() user buffer pages"Linus Torvalds1-18/+16
This reverts commit 998ef75ddb5709bbea0bf1506cd2717348a3c647. The commit itself does not appear to be buggy per se, but it is exposing a bug in ext4 (and Ted thinks ext3 too, but we solved that by getting rid of it). It's too late in the release cycle to really worry about this, even if Dave Hansen has a patch that may actually fix the underlying ext4 problem. We can (and should) revisit this for the next release. The problem is that moving the prefaulting later now exposes a special case with partially successful writes that isn't handled correctly. And the prefaulting likely isn't normally even that much of a performance issue - it looks like at least one reason Dave saw this in his performance tests is that he also ran them on Skylake that now supports the new SMAP code, which makes the normally very cheap user space prefaulting noticeably more expensive. Bisected-and-acked-by: Ted Ts'o <tytso@mit.edu> Analyzed-and-acked-by: Dave Hansen <dave.hansen@linux.intel.com> Cc: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2015-10-06  NFS: Fix a tracepoint NULL-pointer dereference  [Anna Schumaker, 1 file, -1/+1]
Running xfstest generic/013 with the tracepoint nfs:nfs4_open_file enabled produces a NULL-pointer dereference when calculating fileid and filehandle of the opened file. Fix this by checking if state is NULL before trying to use the inode pointer. Reported-by: Olga Kornievskaia <aglo@umich.edu> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com> Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
2015-10-06  strscpy: zero any trailing garbage bytes in the destination  [Chris Metcalf, 1 file, -1/+2]
It's possible that the destination can be shadowed in userspace (as, for example, the perf buffers are now). So we should take care not to leak data that could be inspected by userspace. Signed-off-by: Chris Metcalf <cmetcalf@ezchip.com>
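A sketch of the word-at-a-time tail handling with the fix, using the helpers from include/asm-generic/word-at-a-time.h (surrounding copy loop elided):

    if (has_zero(c, &data, &constants)) {
            data = prep_zero_mask(c, data, &constants);
            data = create_zero_mask(data);
            /* the fix: mask off the bytes after the NUL instead of storing
             * the raw source word, so no garbage reaches the destination */
            *(unsigned long *)(dest + res) = c & zero_bytemask(data);
            return res + find_zero(data);
    }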
2015-10-06  word-at-a-time.h: support zero_bytemask() on alpha and tile  [Chris Metcalf, 2 files, -1/+9]
Both alpha and tile needed implementations of zero_bytemask. The alpha version is untested. Signed-off-by: Chris Metcalf <cmetcalf@ezchip.com>
2015-10-06  word-at-a-time.h: fix some Kbuild files  [Chris Metcalf, 3 files, -2/+1]
arch/tile added word-at-a-time.h after the patch that added generic-y entries; the generic-y entry is now stale. arch/h8300 is newer than the generic-y patch for word-at-a-time.h, and needs a generic-y entry. arch/powerpc seems to have gotten a generic-y entry by mistake in the first patch; this change removes it. Signed-off-by: Chris Metcalf <cmetcalf@ezchip.com>
2015-10-06  arm64: replace read_lock to rcu lock in call_break_hook  [Yang Shi, 1 file, -10/+11]
BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:917
in_atomic(): 0, irqs_disabled(): 128, pid: 342, name: perf
1 lock held by perf/342:
 #0: (break_hook_lock){+.+...}, at: [<ffffffc0000851ac>] call_break_hook+0x34/0xd0
irq event stamp: 62224
hardirqs last enabled at (62223): [<ffffffc00010b7bc>] __call_rcu.constprop.59+0x104/0x270
hardirqs last disabled at (62224): [<ffffffc0000fbe20>] vprintk_emit+0x68/0x640
softirqs last enabled at (0): [<ffffffc000097928>] copy_process.part.8+0x428/0x17f8
softirqs last disabled at (0): [< (null)>] (null)
CPU: 0 PID: 342 Comm: perf Not tainted 4.1.6-rt5 #4
Hardware name: linux,dummy-virt (DT)
Call trace:
 [<ffffffc000089968>] dump_backtrace+0x0/0x128
 [<ffffffc000089ab0>] show_stack+0x20/0x30
 [<ffffffc0007030d0>] dump_stack+0x7c/0xa0
 [<ffffffc0000c878c>] ___might_sleep+0x174/0x260
 [<ffffffc000708ac8>] __rt_spin_lock+0x28/0x40
 [<ffffffc000708db0>] rt_read_lock+0x60/0x80
 [<ffffffc0000851a8>] call_break_hook+0x30/0xd0
 [<ffffffc000085a70>] brk_handler+0x30/0x98
 [<ffffffc000082248>] do_debug_exception+0x50/0xb8
Exception stack(0xffffffc00514fe30 to 0xffffffc00514ff50)
fe20: 00000000 00000000 c1594680 0000007f
fe40: ffffffff ffffffff 92063940 0000007f 0550dcd8 ffffffc0 00000000 00000000
fe60: 0514fe70 ffffffc0 000be1f8 ffffffc0 0514feb0 ffffffc0 0008948c ffffffc0
fe80: 00000004 00000000 0514fed0 ffffffc0 ffffffff ffffffff 9282a948 0000007f
fea0: 00000000 00000000 9282b708 0000007f c1592820 0000007f 00083914 ffffffc0
fec0: 00000000 00000000 00000010 00000000 00000064 00000000 00000001 00000000
fee0: 005101e0 00000000 c1594680 0000007f c1594740 0000007f ffffffd8 ffffff80
ff00: 00000000 00000000 00000000 00000000 c1594770 0000007f c1594770 0000007f
ff20: 00665e10 00000000 7f7f7f7f 7f7f7f7f 01010101 01010101 00000000 00000000
ff40: 928e4cc0 0000007f 91ff11e8 0000007f

call_break_hook is called in atomic context (hard irqs disabled), so replace the sleepable lock with an RCU lock, convert the relevant list operations to their RCU versions, and call synchronize_rcu() in unregister_break_hook(). Also, replace the write lock with a spinlock in {un}register_break_hook. Signed-off-by: Yang Shi <yang.shi@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-10-06  arm64: Don't relocate non-existent initrd  [Mark Rutland, 1 file, -0/+2]
When booting a kernel without an initrd, the kernel reports that it moves -1 bytes worth, having gone through the motions with initrd_start equal to initrd_end:

  Moving initrd from [4080000000-407fffffff] to [9fff49000-9fff48fff]

Prevent this by bailing out early when the initrd size is zero (i.e. we have no initrd), avoiding the confusing message and other associated work. Fixes: 1570f0d7ab425c1e ("arm64: support initrd outside kernel linear map") Cc: Mark Salter <msalter@redhat.com> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-10-06  sched/core: Remove a parameter in the migrate_task_rq() function  [xiaofeng.yan, 3 files, -3/+3]
The parameter "int next_cpu" in the following function is unused: migrate_task_rq(struct task_struct *p, int next_cpu) Remove it. Signed-off-by: xiaofeng.yan <yanxiaofeng@inspur.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Link: http://lkml.kernel.org/r/1442991360-31945-1-git-send-email-yanxiaofeng@inspur.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-10-06  sched/core: Drop unlikely behind BUG_ON()  [Geliang Tang, 1 file, -1/+1]
(1) For !CONFIG_BUG cases, the bug call is a no-op, so we couldn't care less and the change is ok.

(2) PPC and MIPS, which HAVE_ARCH_BUG_ON, do not rely on branch predictions as it seems to be pointless [1] and thus callers should not be trying to push an optimization in the first place.

(3) For CONFIG_BUG and !HAVE_ARCH_BUG_ON cases, BUG_ON() contains an unlikely compiler flag already.

Hence, we can drop unlikely behind BUG_ON(). [1] http://lkml.iu.edu/hypermail/linux/kernel/1101.3/02289.html Signed-off-by: Geliang Tang <geliangtang@163.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Link: http://lkml.kernel.org/r/6fa7125979f98bbeac26e268271769b6ca935c8d.1444051018.git.geliangtang@163.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
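For illustration, the change at a typical call site (the exact site in kernel/sched/core.c is assumed):

    /* before: BUG_ON(unlikely(task_stack_end_corrupted(prev))); */
    BUG_ON(task_stack_end_corrupted(prev));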
2015-10-06  sched/core: Fix task and run queue sched_info::run_delay inconsistencies  [Peter Zijlstra, 2 files, -25/+33]
Mike Meyer reported the following bug:

> During evaluation of some performance data, it was discovered thread
> and run queue run_delay accounting data was inconsistent with the other
> accounting data that was collected. Further investigation found under
> certain circumstances execution time was leaking into the task and
> run queue accounting of run_delay.
>
> Consider the following sequence:
>
> a. thread is running.
> b. thread moves between cgroups, changes scheduling class or priority.
> c. thread sleeps OR
> d. thread involuntarily gives up cpu.
>
> a. implies:
>
>   thread->sched_info.last_queued = 0
>
> a. and b. results in the following:
>
> 1. dequeue_task(rq, thread)
>
>      sched_info_dequeued(rq, thread)
>        delta = 0
>
>      sched_info_reset_dequeued(thread)
>        thread->sched_info.last_queued = 0
>
>      thread->sched_info.run_delay += delta
>
> 2. enqueue_task(rq, thread)
>
>      sched_info_queued(rq, thread)
>
>        /* thread is still on cpu at this point. */
>        thread->sched_info.last_queued = task_rq(thread)->clock;
>
> c. results in:
>
>   dequeue_task(rq, thread)
>
>     sched_info_dequeued(rq, thread)
>
>       /* delta is execution time not run_delay. */
>       delta = task_rq(thread)->clock - thread->sched_info.last_queued
>
>     sched_info_reset_dequeued(thread)
>       thread->sched_info.last_queued = 0
>
>     thread->sched_info.run_delay += delta
>
> Since thread was running between enqueue_task(rq, thread) and
> dequeue_task(rq, thread), the delta above is really execution
> time and not run_delay.
>
> d. results in:
>
>   __sched_info_switch(thread, next_thread)
>
>     sched_info_depart(rq, thread)
>
>       sched_info_queued(rq, thread)
>
>         /* last_queued not updated due to being non-zero */
>         return
>
> Since thread was running between enqueue_task(rq, thread) and
> __sched_info_switch(thread, next_thread), the execution time
> between enqueue_task(rq, thread) and
> __sched_info_switch(thread, next_thread) now will become
> associated with run_delay due to when last_queued was last updated.

This alternative patch solves the problem by not calling sched_info_{de,}queued() in {de,en}queue_task(). Therefore the sched_info state is preserved and things work as expected. By inlining the {de,en}queue_task() functions the new condition becomes (mostly) a compile-time constant and we'll not emit any new branch instructions. It even shrinks the code (due to inlining {en,de}queue_task()):

  $ size defconfig-build/kernel/sched/core.o defconfig-build/kernel/sched/core.o.orig
     text    data     bss     dec     hex filename
    64019   23378    2344   89741   15e8d defconfig-build/kernel/sched/core.o
    64149   23378    2344   89871   15f0f defconfig-build/kernel/sched/core.o.orig

Reported-by: Mike Meyer <Mike.Meyer@Teradata.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Link: http://lkml.kernel.org/r/20150930154413.GO3604@twins.programming.kicks-ass.net Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-10-06  sched/numa: Fix task_tick_fair() from disabling numa_balancing  [Srikar Dronamraju, 1 file, -1/+1]
If static branch 'sched_numa_balancing' is enabled, it should kickstart NUMA balancing through task_tick_numa(). However the following commit: 2a595721a1fa ("sched/numa: Convert sched_numa_balancing to a static_branch") erroneously disables this. Fix this anomaly by enabling task_tick_numa() when the static branch 'sched_numa_balancing' is enabled. Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mel Gorman <mgorman@suse.de> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rik van Riel <riel@redhat.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Link: http://lkml.kernel.org/r/1443752305-27413-1-git-send-email-srikar@linux.vnet.ibm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
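The fix flips the polarity of the static-branch test in task_tick_fair(); a sketch of the corrected condition:

    /* was: if (!static_branch_unlikely(&sched_numa_balancing)) */
    if (static_branch_unlikely(&sched_numa_balancing))
            task_tick_numa(rq, curr);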
2015-10-06  sched/core: Add preempt_count invariant check  [Peter Zijlstra, 1 file, -0/+4]
Ingo requested I keep my debug check for the preempt_count invariant. Requested-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-10-06  sched/core: More notrace annotations  [Peter Zijlstra, 1 file, -3/+3]
preempt_schedule_common() is marked notrace, but it does not use the _notrace() preempt_count functions, and __schedule() is also not marked notrace, which means that it's perfectly possible to end up in the tracer from preempt_schedule_common().

Steve says:

| Yep, there's some history to this. This was originally the issue that
| caused function tracing to go into infinite recursion. But now we have
| preempt_schedule_notrace(), which is used by the function tracer, and
| that function must not be traced till preemption is disabled.
|
| Now if function tracing is running and we take an interrupt when
| NEED_RESCHED is set, it calls
|
|   preempt_schedule_common() (not traced)
|
| But then that calls preempt_disable() (traced)
|
| function tracer calls preempt_disable_notrace() followed by
| preempt_enable_notrace() which will see NEED_RESCHED set, and it will
| call preempt_schedule_notrace(), which stops the recursion, but
| still calls __schedule() here, and that means when we return, we call
| the __schedule() from preempt_schedule_common().
|
| That said, I prefer this patch. Preemption is disabled before calling
| __schedule(), and we get rid of a one round recursion with the
| scheduler.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Steven Rostedt <rostedt@goodmis.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-10-06  sched/core: Kill PREEMPT_ACTIVE  [Peter Zijlstra, 1 file, -5/+0]
It's unused; kill the definition. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Steven Rostedt <rostedt@goodmis.org> Reviewed-by: Frederic Weisbecker <fweisbec@gmail.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-10-06  sched/core, sched/x86: Kill thread_info::saved_preempt_count  [Peter Zijlstra, 4 files, -22/+1]
With the introduction of the context switch preempt_count invariant, and the demise of PREEMPT_ACTIVE, it's pointless to save/restore the per-cpu preemption count; it must always be 2. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Frederic Weisbecker <fweisbec@gmail.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-10-06  sched/core: Simplify preempt_count tests  [Peter Zijlstra, 2 files, -3/+2]
Since we stopped setting PREEMPT_ACTIVE, there is no need to mask it out of preempt_count() tests. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Steven Rostedt <rostedt@goodmis.org> Reviewed-by: Frederic Weisbecker <fweisbec@gmail.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-10-06  sched/core: Robustify preemption leak checks  [Peter Zijlstra, 2 files, -2/+6]
When we warn about a preempt_count leak, reset the preempt_count to the known good value such that the problem does not ripple forward. This is most important on x86, which has a per-cpu preempt_count that is not saved/restored (after this series). So if you schedule with an invalid (!2*PREEMPT_DISABLE_OFFSET) preempt_count, the next task is messed up too. Enforcing this invariant limits the borkage to just the one task. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Frederic Weisbecker <fweisbec@gmail.com> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Steven Rostedt <rostedt@goodmis.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-10-06  sched/core: Stop setting PREEMPT_ACTIVE  [Peter Zijlstra, 2 files, -25/+6]
Now that nothing tests for PREEMPT_ACTIVE anymore, stop setting it. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Steven Rostedt <rostedt@goodmis.org> Reviewed-by: Frederic Weisbecker <fweisbec@gmail.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-10-06  sched/core: Fix trace_sched_switch()  [Peter Zijlstra, 5 files, -17/+14]
__trace_sched_switch_state() is the last remaining PREEMPT_ACTIVE user; move trace_sched_switch() from prepare_task_switch() to __schedule() and propagate the @preempt argument. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Steven Rostedt <rostedt@goodmis.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-10-06  sched/core: Add preempt argument to __schedule()  [Peter Zijlstra, 1 file, -6/+6]
There is only a single PREEMPT_ACTIVE use in the regular __schedule() path and that is to circumvent the task->state check. Since the code setting PREEMPT_ACTIVE is the immediate caller of __schedule() we can replace this with a function argument. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Frederic Weisbecker <fweisbec@gmail.com> Reviewed-by: Steven Rostedt <rostedt@goodmis.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
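A sketch of the resulting shape (state handling details elided):

    static void __sched notrace __schedule(bool preempt)
    {
            /* ... */
            /* previously: if (!(preempt_count() & PREEMPT_ACTIVE) && prev->state) */
            if (!preempt && prev->state)
                    deactivate_task(rq, prev, DEQUEUE_SLEEP);
            /* ... */
    }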
2015-10-06  sched/core: Create preempt_count invariant  [Peter Zijlstra, 4 files, -9/+35]
Assuming units of PREEMPT_DISABLE_OFFSET for preempt_count() numbers. Now that TASK_DEAD no longer results in preempt_count() == 3 during scheduling, we will always call context_switch() with preempt_count() == 2. However, we don't always end up with preempt_count() == 2 in finish_task_switch() because new tasks get created with preempt_count() == 1. Create FORK_PREEMPT_COUNT and set it to 2 and use that in the right places. Note that we cannot use INIT_PREEMPT_COUNT as that serves another purpose (boot). After this, preempt_count() is invariant across the context switch, with exception of PREEMPT_ACTIVE. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>