path: root/drivers/net/usb/usbnet.c
Age | Commit message | Author | Files | Lines
2014-01-13 | inet_diag: fix inet_diag_dump_icsk() to use correct state for timewait sockets | Neal Cardwell | 1 | -1/+4
Fix inet_diag_dump_icsk() to reflect the fact that both TCP_TIME_WAIT and TCP_FIN_WAIT2 connections are represented by inet_timewait_sock (not just TIME_WAIT), and for such sockets the tw_substate field holds the real state, which can be either TCP_TIME_WAIT or TCP_FIN_WAIT2. This brings the inet_diag state-matching code in line with the field it uses to populate idiag_state. This is also analogous to the info exported in /proc/net/tcp, where get_tcp4_sock() exports sk->sk_state and get_timewait4_sock() exports tw->tw_substate. Before fixing this, (a) neither "ss -nemoi" nor "ss -nemoi state fin-wait-2" would return a socket in TCP_FIN_WAIT2; and (b) "ss -nemoi state time-wait" would also return sockets in state TCP_FIN_WAIT2. This is an old bug that predates 05dbc7b ("tcp/dccp: remove twchain"). Signed-off-by: Neal Cardwell <ncardwell@google.com> Cc: Eric Dumazet <edumazet@google.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
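A rough sketch of the state-matching idea (variable names taken from the surrounding dump loop; not the literal hunk): when the socket is an inet_timewait_sock, the bit tested against the idiag_states filter should come from tw_substate rather than sk_state.

    /* sketch only: pick the state to match against the idiag_states mask */
    int state = (sk->sk_state == TCP_TIME_WAIT) ?
                inet_twsk(sk)->tw_substate : sk->sk_state;

    if (!(r->idiag_states & (1 << state)))
            continue;       /* the filter did not ask for sockets in this state */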
2014-01-13 | arch: Re-sort some Kbuild files to hopefully help avoid some conflicts | Stephen Rothwell | 7 | -7/+7
Checkin: 93ea02bb8435 arch: Clean up asm/barrier.h implementations using asm-generic/barrier.h ... unfortunately left some Kbuild files out of order, which caused unnecessary merge conflicts, in particular with checkin: e3fec2f74f7f lib: Add missing arch generic-y entries for asm-generic/hash.h Put them back in order to make the upcoming merges cleaner. Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au> Link: http://lkml.kernel.org/r/20140114164420.d296fbcc4be3a5f126c86069@canb.auug.org.au Signed-off-by: H. Peter Anvin <hpa@linux.intel.com> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> Cc: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca> Cc: Peter Zijlstra <peterz@infradead.org> Cc: David Miller <davem@davemloft.net>
2014-01-14 | md: fix problem when adding device to read-only array with bitmap. | NeilBrown | 2 | -3/+18
If an array is started degraded, and then the missing device is found it can be re-added and a minimal bitmap-based recovery will bring it fully up-to-date. If the array is read-only a recovery would not be allowed. But also if the array is read-only and the missing device was present very recently, then there could be no need for any recovery at all, so we simply include the device in the read-only array without any recovery. However... if the missing device was removed a little longer ago it could be missing some updates, but if a bitmap is present it will be conditionally accepted pending a bitmap-based update. We don't currently detect this case properly and will include that old device into the read-only array with no recovery even though it really needs a recovery. This patch keeps track of whether a bitmap-based-recovery is really needed or not in the new Bitmap_sync rdev flag. If that is set, then the device will not be added to a read-only array. Cc: Andrei Warkentin <andreiw@vmware.com> Fixes: d70ed2e4fafdbef0800e73942482bb075c21578b Cc: stable@vger.kernel.org (3.2+) Signed-off-by: NeilBrown <neilb@suse.de>
2014-01-14 | md/raid10: fix bug when raid10 recovery fails to recover a block. | NeilBrown | 1 | -4/+4
commit e875ecea266a543e643b19e44cf472f1412708f9 md/raid10 record bad blocks as needed during recovery. added code to the "cannot recover this block" path to record a bad block rather than fail the whole recovery. Unfortunately this new case was placed *after* r10bio was freed rather than *before*, yet it still uses r10bio. This will crash with a null dereference. So move the freeing of r10bio down where it is safe. Cc: stable@vger.kernel.org (v3.1+) Fixes: e875ecea266a543e643b19e44cf472f1412708f9 Reported-by: Damian Nowak <spam@nowaker.net> URL: https://bugzilla.kernel.org/show_bug.cgi?id=68181 Signed-off-by: NeilBrown <neilb@suse.de>
2014-01-14 | md/raid5: fix a recently broken BUG_ON(). | NeilBrown | 1 | -1/+2
commit 6d183de4077191d1201283a9035ce57a9b05254d md/raid5: fix newly-broken locking in get_active_stripe. simplified a BUG_ON, but removed too much so now it sometimes fires when it shouldn't. When the STRIPE_EXPANDING flag is set, the stripe_head might be on a special list while multiple stripe_heads are collected, or it might not be on any list, even a 'free' list when the refcount is zero. As long as STRIPE_EXPANDING is set, it will be found and added back to a list eventually. So both of the BUG_ONs which test for the ->lru being empty or not need to avoid the case where STRIPE_EXPANDING is set. The patch which broke this was marked for -stable, so this patch needs to be applied to any branch that received 6d183de4 Fixes: 6d183de4077191d1201283a9035ce57a9b05254d Cc: stable@vger.kernel.org (any release to which above was applied) Signed-off-by: NeilBrown <neilb@suse.de>
2014-01-14 | md/raid1: fix request counting bug in new 'barrier' code. | NeilBrown | 1 | -2/+1
The new iobarrier implementation in raid1 (which keeps normal writes and resync activity separate) counts every request that is not before the current resync point in either next_window_requests or current_window_requests. It flags that the request is counted by setting ->start_next_window. allow_barrier follows this model exactly and decrements one of the *_window_requests if and only if ->start_next_window is set. However wait_barrier(), which increments *_window_requests, uses a slightly different test for setting ->start_next_window (which is set from the return value of this function). So there is a possibility of the counts getting out of sync, and this leads to the resync hanging. So change wait_barrier() to return a non-zero value in exactly the same cases that it increments *_window_requests. The bug was introduced in 3.13-rc1. Reported-by: Bruno Wolff III <bruno@wolff.to> URL: https://bugzilla.kernel.org/show_bug.cgi?id=68061 Fixes: 79ef3a8aa1cb1523cc231c9a90a278333c21f761 Cc: majianpeng <majianpeng@gmail.com> Signed-off-by: NeilBrown <neilb@suse.de>
2014-01-14 | md/raid10: fix two bugs in handling of known-bad-blocks. | NeilBrown | 1 | -2/+2
If we discover a bad block when reading we split the request and potentially read some of it from a different device. The code path of this has two bugs in RAID10. 1/ we get a spin_lock with _irq, but unlock without _irq!! 2/ The calculation of 'sectors_handled' is wrong, as can be clearly seen by comparison with raid1.c. This leads to at least 2 warnings and a probable crash if a RAID10 ever had known bad blocks. Cc: stable@vger.kernel.org (v3.1+) Fixes: 856e08e23762dfb92ffc68fd0a8d228f9e152160 Reported-by: Damian Nowak <spam@nowaker.net> URL: https://bugzilla.kernel.org/show_bug.cgi?id=68181 Signed-off-by: NeilBrown <neilb@suse.de>
2014-01-14 | md/raid5: Fix possible confusion when multiple write errors occur. | NeilBrown | 1 | -2/+2
commit 5d8c71f9e5fbdd95650be00294d238e27a363b5c md: raid5 crash during degradation Fixed a crash in an overly simplistic way which could leave R5_WriteError or R5_MadeGood set in the stripe cache for devices for which it is no longer relevant. When those devices are removed and spares added the flags are still set and can cause incorrect behaviour. commit 14a75d3e07c784c004b4b44b34af996b8e4ac453 md/raid5: preferentially read from replacement device if possible. Fixed the same bug in a more effective way, so we can now revert the original commit. Reported-and-tested-by: Alexander Lyakas <alex.bolshoy@gmail.com> Cc: stable@vger.kernel.org (3.2+ - 3.2 will need a different fix though) Fixes: 5d8c71f9e5fbdd95650be00294d238e27a363b5c Signed-off-by: NeilBrown <neilb@suse.de>
2014-01-14Revert "drm: copy mode type in drm_mode_connector_list_update()"Dave Airlie1-1/+1
This reverts commit 3fbd6439e4639ecaeaae6c079e0aa497a1ac3482. This caused some strange booting lockup issues on an Intel G33 belonging to Daniel Vetter, very unusual, I was hoping Daniel would track this down, but it looks like instead I'll have to hack a different fix for -next. Signed-off-by: Dave Airlie <airlied@redhat.com>
2014-01-13 | cxgb4: silence shift wrapping static checker warning | Dan Carpenter | 1 | -1/+1
I don't know how large "tp->vlan_shift" is but static checkers worry about shift wrapping bugs here. Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Acked-by: Dimitris Michailidis <dm@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-01-13 | s390: delete new instances of __cpuinit usage | Paul Gortmaker | 1 | -2/+2
The patch "s390/perf: add support for the CPU-Measurement Sampling Facility" added a new instance of the __cpuinit macro usage. We removed this a couple versions ago; we now want to remove the compat no-op stubs. Introducing new users is not what we want to see at this point in time, as it will break once the stubs are gone. Cc: Hendrik Brueckner <brueckner@linux.vnet.ibm.com> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2014-01-13 | s390/compat: fix PSW32_USER_BITS definition | Heiko Carstens | 1 | -1/+2
PSW32_USER_BITS should define the primary address space for user space instead of the home address space. Symptom of this bug is that gdb doesn't work in compat mode. The bug was introduced with e258d719ff28 "s390/uaccess: always run the kernel in home space" and f26946d7ecad "s390/compat: make psw32_user_bits a constant value again". Cc: stable@vger.kernel.org # v3.13+ Reported-by: Andreas Arnez <arnez@linux.vnet.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2014-01-13 | futexes: Avoid taking the hb->lock if there's nothing to wake up | Davidlohr Bueso | 1 | -25/+92
In futex_wake() there is clearly no point in taking the hb->lock if we know beforehand that there are no tasks to be woken. While the hash bucket's plist head is a cheap way of knowing this, we cannot rely 100% on it as there is a racy window between the futex_wait call and when the task is actually added to the plist. To this end, we couple it with the spinlock check as tasks trying to enter the critical region are most likely potential waiters that will be added to the plist, thus preventing tasks sleeping forever if wakers don't acknowledge all possible waiters. Furthermore, the futex ordering guarantees are preserved, ensuring that waiters either observe the changed user space value before blocking or are woken by a concurrent waker. For wakers, this is done by relying on the barriers in get_futex_key_refs() -- for archs that do not have implicit mb in atomic_inc(), we explicitly add them through a new futex_get_mm function. For waiters we rely on the fact that spin_lock calls already update the head counter, so spinners are visible even if the lock hasn't been acquired yet. For more details please refer to the updated comments in the code and related discussion: https://lkml.org/lkml/2013/11/26/556 Special thanks to tglx for careful review and feedback. Suggested-by: Linus Torvalds <torvalds@linux-foundation.org> Reviewed-by: Darren Hart <dvhart@linux.intel.com> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Davidlohr Bueso <davidlohr@hp.com> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Mike Galbraith <efault@gmx.de> Cc: Jeff Mahoney <jeffm@suse.com> Cc: Scott Norton <scott.norton@hp.com> Cc: Tom Vaden <tom.vaden@hp.com> Cc: Aswin Chandramouleeswaran <aswin@hp.com> Cc: Waiman Long <Waiman.Long@hp.com> Cc: Jason Low <jason.low2@hp.com> Cc: Andrew Morton <akpm@linux-foundation.org> Link: http://lkml.kernel.org/r/1389569486-25487-5-git-send-email-davidlohr@hp.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
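A sketch of the fast path this adds, following the description above (the helper name is illustrative): couple the plist emptiness test with the bucket spinlock state before ever taking hb->lock.

    /* sketch: cheap "any waiters?" check done before taking hb->lock */
    static inline int hb_waiters_pending(struct futex_hash_bucket *hb)
    {
            /*
             * Tasks spinning on hb->lock are about to queue themselves,
             * so a locked bucket is treated as potentially having waiters.
             */
            return !plist_head_empty(&hb->chain) || spin_is_locked(&hb->lock);
    }

    /* in futex_wake(), before the bucket lock is taken: */
    if (!hb_waiters_pending(hb))
            goto out_put_key;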
2014-01-13 | futexes: Document multiprocessor ordering guarantees | Thomas Gleixner | 1 | -0/+57
That's essential, if you want to hack on futexes. Reviewed-by: Darren Hart <dvhart@linux.intel.com> Reviewed-by: Peter Zijlstra <peterz@infradead.org> Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Davidlohr Bueso <davidlohr@hp.com> Cc: Mike Galbraith <efault@gmx.de> Cc: Jeff Mahoney <jeffm@suse.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Randy Dunlap <rdunlap@infradead.org> Cc: Scott Norton <scott.norton@hp.com> Cc: Tom Vaden <tom.vaden@hp.com> Cc: Aswin Chandramouleeswaran <aswin@hp.com> Cc: Waiman Long <Waiman.Long@hp.com> Cc: Jason Low <jason.low2@hp.com> Cc: Andrew Morton <akpm@linux-foundation.org> Link: http://lkml.kernel.org/r/1389569486-25487-4-git-send-email-davidlohr@hp.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-01-13 | futexes: Increase hash table size for better performance | Davidlohr Bueso | 1 | -7/+19
Currently, the futex global hash table suffers from its fixed, smallish (for today's standards) size of 256 entries, as well as its lack of NUMA awareness. Large systems, using many futexes, can be prone to high amounts of collisions; where these futexes hash to the same bucket and lead to extra contention on the same hb->lock. Furthermore, cacheline bouncing is a reality when we have multiple hb->locks residing on the same cacheline and different futexes hash to adjacent buckets. This patch keeps the current static size of 16 entries for small systems, or otherwise, 256 * ncpus (or larger as we need to round the number to a power of 2). Note that this number of CPUs accounts for all CPUs that can ever be available in the system, taking into consideration things like hotplugging. While we do impose extra overhead at bootup by making the hash table larger, this is a one time thing, and does not shadow the benefits of this patch. Furthermore, as suggested by tglx, by cache aligning the hash buckets we can avoid access across cacheline boundaries and also avoid massive cache line bouncing if multiple cpus are hammering away at different hash buckets which happen to reside in the same cache line. Also, similar to other core kernel components (pid, dcache, tcp), by using alloc_large_system_hash() we benefit from its NUMA awareness and thus the table is distributed among the nodes instead of in a single one. For a custom microbenchmark that pounds on the uaddr hashing -- making the wait path fail at futex_wait_setup() returning -EWOULDBLOCK for large amounts of futexes, we can see the following benefits on an 80-core, 8-socket 1Tb server:
+---------+--------------------+------------------------+-----------------------+-------------------------------+
| threads | baseline (ops/sec) | aligned-only (ops/sec) | large table (ops/sec) | large table+aligned (ops/sec) |
+---------+--------------------+------------------------+-----------------------+-------------------------------+
|     512 |              32426 | 50531  (+55.8%)        | 255274  (+687.2%)     | 292553  (+802.2%)             |
|     256 |              65360 | 99588  (+52.3%)        | 443563  (+578.6%)     | 508088  (+677.3%)             |
|     128 |             125635 | 200075 (+59.2%)        | 742613  (+491.1%)     | 835452  (+564.9%)             |
|      80 |             193559 | 323425 (+67.1%)        | 1028147 (+431.1%)     | 1130304 (+483.9%)             |
|      64 |             247667 | 443740 (+79.1%)        | 997300  (+302.6%)     | 1145494 (+362.5%)             |
|      32 |             628412 | 721401 (+14.7%)        | 965996  (+53.7%)      | 1122115 (+78.5%)              |
+---------+--------------------+------------------------+-----------------------+-------------------------------+
Reviewed-by: Darren Hart <dvhart@linux.intel.com> Reviewed-by: Peter Zijlstra <peterz@infradead.org> Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Reviewed-by: Waiman Long <Waiman.Long@hp.com> Reviewed-and-tested-by: Jason Low <jason.low2@hp.com> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Davidlohr Bueso <davidlohr@hp.com> Cc: Mike Galbraith <efault@gmx.de> Cc: Jeff Mahoney <jeffm@suse.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Scott Norton <scott.norton@hp.com> Cc: Tom Vaden <tom.vaden@hp.com> Cc: Aswin Chandramouleeswaran <aswin@hp.com> Link: http://lkml.kernel.org/r/1389569486-25487-3-git-send-email-davidlohr@hp.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
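A sketch of the boot-time sizing and allocation described above (close to, but not necessarily identical with, the final patch):

    /*
     * Sketch: size the table as 256 * number of possible CPUs, rounded up
     * to a power of two, and let alloc_large_system_hash() distribute the
     * buckets across NUMA nodes.
     */
    futex_hashsize = roundup_pow_of_two(256 * num_possible_cpus());

    futex_queues = alloc_large_system_hash("futex", sizeof(*futex_queues),
                                           futex_hashsize, 0,
                                           futex_hashsize < 256 ? HASH_SMALL : 0,
                                           &futex_shift, NULL,
                                           futex_hashsize, futex_hashsize);
    futex_hashsize = 1UL << futex_shift;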
2014-01-13 | futexes: Clean up various details | Jason Low | 1 | -27/+12
- Remove unnecessary head variables.
- Delete unused parameter in queue_unlock().
Reviewed-by: Darren Hart <dvhart@linux.intel.com> Reviewed-by: Peter Zijlstra <peterz@infradead.org> Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Jason Low <jason.low2@hp.com> Signed-off-by: Davidlohr Bueso <davidlohr@hp.com> Cc: Mike Galbraith <efault@gmx.de> Cc: Jeff Mahoney <jeffm@suse.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Scott Norton <scott.norton@hp.com> Cc: Tom Vaden <tom.vaden@hp.com> Cc: Aswin Chandramouleeswaran <aswin@hp.com> Cc: Waiman Long <Waiman.Long@hp.com> Cc: Andrew Morton <akpm@linux-foundation.org> Link: http://lkml.kernel.org/r/1389569486-25487-2-git-send-email-davidlohr@hp.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-01-13 | m68k/irq - Use polled IRQ flag for MFP timer cascaded interrupts | Michael Schmitz | 1 | -0/+3
Some Atari hardware has no capacity to raise interrupts (e.g. network or USB adapter hardware attached via ROM port). The driver interrupt routine is called from a timer interrupt (timer D) in these cases, using chained device specific pseudo interrupts (IRQ_MFP_TIMER1 ff.) These interrupts will, more often than not, return IRQ_NONE as there is not always work for the device handler when called. Too many unhandled interrupts will result in the interrupt being disabled by the stuck interrupt watchdog. As the preferred option to flag interrupts as needing exclusion from the watchdog mechanism, tglx added the IRQ_IS_POLLED flag for use in such a case. Currently, two interrupts need to use this flag. Add more users as needed. Signed-off-by: Michael Schmitz <schmitz@debian.org> Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
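In code, the flagging amounts to marking each pseudo interrupt as polled when it is set up, roughly:

    /*
     * Sketch: tell the spurious-IRQ watchdog that this interrupt is polled
     * from timer D, so frequent IRQ_NONE returns are expected and must not
     * get the line disabled.
     */
    irq_set_status_flags(IRQ_MFP_TIMER1, IRQ_IS_POLLED);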
2014-01-13 | powerpc: Check return value of instance-to-package OF call | Benjamin Herrenschmidt | 1 | -9/+13
On PA-Semi firmware, the instance-to-package callback doesn't seem to be implemented. We didn't check for error, however, and thus subsequently passed the -1 value returned into stdout_node to things like prom_getprop etc... This caused the firmware to load values around 0 (physical) internally as node structures. It somewhat "worked" as long as we had a NULL in the right place (address 8) at the beginning of the kernel, so we didn't "see" the bug. But commit 5c0484e25ec03243d4c2f2d4416d4a13efc77f6a "powerpc: Endian safe trampoline" changed the kernel entry point causing that old bug to now cause a crash early during boot. This fixes booting on the PA-Semi board by properly checking the return value from instance-to-package. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Tested-by: Olof Johansson <olof@lixom.net>
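Conceptually the fix is just to test the value returned by the firmware against PROM_ERROR before using it; a sketch with illustrative variable names (the exact code in prom_init.c differs):

    /* sketch: do not blindly trust the instance-to-package callback */
    val = call_prom("instance-to-package", 1, 1, prom.stdout);
    if (val != PROM_ERROR)
            stdout_node = val;      /* only use it if the call succeeded */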
2014-01-12 | ARM: 7938/1: OMAP4/highbank: Flush L2 cache before disabling | Taras Kondratiuk | 2 | -0/+2
Kexec disables the outer cache before jumping to reboot code, but it doesn't flush it explicitly. Flush is done implicitly inside of l2x0_disable(). But some SoCs override the default .disable handler and don't flush the cache. This may lead to corrupted memory during kexec reboot on these platforms. This patch adds a cache flush inside of the OMAP4 and Highbank outer_cache.disable() handlers to make it consistent with the default l2x0_disable(). Acked-by: Rob Herring <rob.herring@calxeda.com> Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Acked-by: Tony Lindgren <tony@atomide.com> Signed-off-by: Taras Kondratiuk <taras.kondratiuk@linaro.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
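For OMAP4 the change boils down to flushing before the SMC call that turns the controller off, along these lines (a sketch, not the exact hunk):

    static void omap4_l2x0_disable(void)
    {
            /* push out everything the L2 still holds before turning it off */
            outer_flush_all();
            /* Disable PL310 L2 Cache controller */
            omap_smc1(0x102, 0x0);
    }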
2014-01-12 | Linux 3.13-rc8 | Linus Torvalds | 1 | -1/+1
2014-01-12 | SELinux: Fix possible NULL pointer dereference in selinux_inode_permission() | Steven Rostedt | 2 | -3/+22
While running stress tests on adding and deleting ftrace instances I hit this bug: BUG: unable to handle kernel NULL pointer dereference at 0000000000000020 IP: selinux_inode_permission+0x85/0x160 PGD 63681067 PUD 7ddbe067 PMD 0 Oops: 0000 [#1] PREEMPT CPU: 0 PID: 5634 Comm: ftrace-test-mki Not tainted 3.13.0-rc4-test-00033-gd2a6dde-dirty #20 Hardware name: /DG965MQ, BIOS MQ96510J.86A.0372.2006.0605.1717 06/05/2006 task: ffff880078375800 ti: ffff88007ddb0000 task.ti: ffff88007ddb0000 RIP: 0010:[<ffffffff812d8bc5>] [<ffffffff812d8bc5>] selinux_inode_permission+0x85/0x160 RSP: 0018:ffff88007ddb1c48 EFLAGS: 00010246 RAX: 0000000000000000 RBX: 0000000000800000 RCX: ffff88006dd43840 RDX: 0000000000000001 RSI: 0000000000000081 RDI: ffff88006ee46000 RBP: ffff88007ddb1c88 R08: 0000000000000000 R09: ffff88007ddb1c54 R10: 6e6576652f6f6f66 R11: 0000000000000003 R12: 0000000000000000 R13: 0000000000000081 R14: ffff88006ee46000 R15: 0000000000000000 FS: 00007f217b5b6700(0000) GS:ffffffff81e21000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033^M CR2: 0000000000000020 CR3: 000000006a0fe000 CR4: 00000000000007f0 Call Trace: security_inode_permission+0x1c/0x30 __inode_permission+0x41/0xa0 inode_permission+0x18/0x50 link_path_walk+0x66/0x920 path_openat+0xa6/0x6c0 do_filp_open+0x43/0xa0 do_sys_open+0x146/0x240 SyS_open+0x1e/0x20 system_call_fastpath+0x16/0x1b Code: 84 a1 00 00 00 81 e3 00 20 00 00 89 d8 83 c8 02 40 f6 c6 04 0f 45 d8 40 f6 c6 08 74 71 80 cf 02 49 8b 46 38 4c 8d 4d cc 45 31 c0 <0f> b7 50 20 8b 70 1c 48 8b 41 70 89 d9 8b 78 04 e8 36 cf ff ff RIP selinux_inode_permission+0x85/0x160 CR2: 0000000000000020 Investigating, I found that the inode->i_security was NULL, and the dereference of it caused the oops. in selinux_inode_permission(): isec = inode->i_security; rc = avc_has_perm_noaudit(sid, isec->sid, isec->sclass, perms, 0, &avd); Note, the crash came from stressing the deletion and reading of debugfs files. I was not able to recreate this via normal files. But I'm not sure they are safe. It may just be that the race window is much harder to hit. What seems to have happened (and what I have traced), is the file is being opened at the same time the file or directory is being deleted. As the dentry and inode locks are not held during the path walk, nor is the inodes ref counts being incremented, there is nothing saving these structures from being discarded except for an rcu_read_lock(). The rcu_read_lock() protects against freeing of the inode, but it does not protect freeing of the inode_security_struct. Now if the freeing of the i_security happens with a call_rcu(), and the i_security field of the inode is not changed (it gets freed as the inode gets freed) then there will be no issue here. (Linus Torvalds suggested not setting the field to NULL such that we do not need to check if it is NULL in the permission check). Note, this is a hack, but it fixes the problem at hand. A real fix is to restructure the destroy_inode() to call all the destructor handlers from the RCU callback. But that is a major job to do, and requires a lot of work. For now, we just band-aid this bug with this fix (it works), and work on a more maintainable solution in the future. Link: http://lkml.kernel.org/r/20140109101932.0508dec7@gandalf.local.home Link: http://lkml.kernel.org/r/20140109182756.17abaaa8@gandalf.local.home Cc: stable@vger.kernel.org Signed-off-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
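The band-aid described above amounts to deferring the isec free by an RCU grace period so a lockless path walker never dereferences freed memory; a sketch, assuming a rcu_head member is added to inode_security_struct:

    static void inode_free_rcu(struct rcu_head *head)
    {
            struct inode_security_struct *isec;

            isec = container_of(head, struct inode_security_struct, rcu);
            kmem_cache_free(sel_inode_cache, isec);
    }

    static void inode_free_security(struct inode *inode)
    {
            struct inode_security_struct *isec = inode->i_security;

            /* ... unlink isec from the superblock list as before ... */

            /*
             * Defer the actual free; inode->i_security is deliberately left
             * pointing at the object so a reader under rcu_read_lock() never
             * sees a NULL or freed pointer during the grace period.
             */
            call_rcu(&isec->rcu, inode_free_rcu);
    }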
2014-01-12 | thp: fix copy_page_rep GPF by testing is_huge_zero_pmd once only | Hugh Dickins | 1 | -3/+3
We see General Protection Fault on RSI in copy_page_rep: that RSI is what you get from a NULL struct page pointer. RIP: 0010:[<ffffffff81154955>] [<ffffffff81154955>] copy_page_rep+0x5/0x10 RSP: 0000:ffff880136e15c00 EFLAGS: 00010286 RAX: ffff880000000000 RBX: ffff880136e14000 RCX: 0000000000000200 RDX: 6db6db6db6db6db7 RSI: db73880000000000 RDI: ffff880dd0c00000 RBP: ffff880136e15c18 R08: 0000000000000200 R09: 000000000005987c R10: 000000000005987c R11: 0000000000000200 R12: 0000000000000001 R13: ffffea00305aa000 R14: 0000000000000000 R15: 0000000000000000 FS: 00007f195752f700(0000) GS:ffff880c7fc20000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000093010000 CR3: 00000001458e1000 CR4: 00000000000027e0 Call Trace: copy_user_huge_page+0x93/0xab do_huge_pmd_wp_page+0x710/0x815 handle_mm_fault+0x15d8/0x1d70 __do_page_fault+0x14d/0x840 do_page_fault+0x2f/0x90 page_fault+0x22/0x30 do_huge_pmd_wp_page() tests is_huge_zero_pmd(orig_pmd) four times: but since shrink_huge_zero_page() can free the huge_zero_page, and we have no hold of our own on it here (except where the fourth test holds page_table_lock and has checked pmd_same), it's possible for it to answer yes the first time, but no to the second or third test. Change all those last three to tests for NULL page. (Note: this is not the same issue as trinity's DEBUG_PAGEALLOC BUG in copy_page_rep with RSI: ffff88009c422000, reported by Sasha Levin in https://lkml.org/lkml/2013/3/29/103. I believe that one is due to the source page being split, and a tail page freed, while copy is in progress; and not a problem without DEBUG_PAGEALLOC, since the pmd_same check will prevent a miscopy from being made visible.) Fixes: 97ae17497e99 ("thp: implement refcounting for huge zero page") Signed-off-by: Hugh Dickins <hughd@google.com> Cc: stable@vger.kernel.org # v3.10 v3.11 v3.12 Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
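The shape of the fix is to decide once whether there is a source page and test that captured value later, instead of re-evaluating is_huge_zero_pmd() after the lock may have been dropped; a simplified sketch:

    /* sketch: decide once whether there is a source page to copy from */
    if (is_huge_zero_pmd(orig_pmd)) {
            page = NULL;            /* huge zero page: nothing to copy */
    } else {
            page = pmd_page(orig_pmd);
            /* ... get_page(), checks, possible unlock/relock ... */
    }

    /* later tests look at the captured value, not the pmd */
    if (!page)
            clear_huge_page(new_page, haddr, HPAGE_PMD_NR);
    else
            copy_user_huge_page(new_page, page, haddr, vma, HPAGE_PMD_NR);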
2014-01-12 | arch: Introduce smp_load_acquire(), smp_store_release() | Peter Zijlstra | 11 | -2/+234
A number of situations currently require the heavyweight smp_mb(), even though there is no need to order prior stores against later loads. Many architectures have much cheaper ways to handle these situations, but the Linux kernel currently has no portable way to make use of them. This commit therefore supplies smp_load_acquire() and smp_store_release() to remedy this situation. The new smp_load_acquire() primitive orders the specified load against any subsequent reads or writes, while the new smp_store_release() primitive orders the specified store against any prior reads or writes. These primitives allow array-based circular FIFOs to be implemented without an smp_mb(), and also allow a theoretical hole in rcu_assign_pointer() to be closed at no additional expense on most architectures. In addition, the RCU experience transitioning from explicit smp_read_barrier_depends() and smp_wmb() to rcu_dereference() and rcu_assign_pointer(), respectively resulted in substantial improvements in readability. It therefore seems likely that replacing other explicit barriers with smp_load_acquire() and smp_store_release() will provide similar benefits. It appears that roughly half of the explicit barriers in core kernel code might be so replaced. [Changelog by PaulMck] Reviewed-by: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Acked-by: Will Deacon <will.deacon@arm.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca> Cc: Michael Ellerman <michael@ellerman.id.au> Cc: Michael Neuling <mikey@neuling.org> Cc: Russell King <linux@arm.linux.org.uk> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Victor Kaplansky <VICTORK@il.ibm.com> Cc: Tony Luck <tony.luck@intel.com> Cc: Oleg Nesterov <oleg@redhat.com> Link: http://lkml.kernel.org/r/20131213150640.908486364@infradead.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
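The canonical use case is a single-producer/single-consumer ring index; a minimal sketch (the ring fields and process() are illustrative):

    /* producer: make the data visible before publishing the new head index */
    buf[head & (size - 1)] = item;
    smp_store_release(&ring->head, head + 1);

    /*
     * consumer: the acquire pairs with the release above, so everything
     * written before the index update is visible after this load.
     */
    head = smp_load_acquire(&ring->head);
    while (tail != head) {
            process(buf[tail & (size - 1)]);
            tail++;
    }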
2014-01-12 | arch: Clean up asm/barrier.h implementations using asm-generic/barrier.h | Peter Zijlstra | 24 | -414/+58
We're going to be adding a few new barrier primitives, and in order to avoid endless duplication make more aggressive use of asm-generic/barrier.h. Change the asm-generic/barrier.h such that it allows partial barrier definitions and fills out the rest with defaults. There are a few architectures (m32r, m68k) that could probably do away with their barrier.h file entirely but are kept for now due to their unconventional nop() implementation. Suggested-by: Geert Uytterhoeven <geert@linux-m68k.org> Reviewed-by: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Cc: Michael Ellerman <michael@ellerman.id.au> Cc: Michael Neuling <mikey@neuling.org> Cc: Russell King <linux@arm.linux.org.uk> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Victor Kaplansky <VICTORK@il.ibm.com> Cc: Tony Luck <tony.luck@intel.com> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Frederic Weisbecker <fweisbec@gmail.com> Link: http://lkml.kernel.org/r/20131213150640.846368594@infradead.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
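The "partial definitions with defaults" pattern means an architecture header only defines the barriers it can implement more cheaply, and asm-generic/barrier.h fills in the rest; illustratively (not the exact header):

    /* asm-generic/barrier.h style: only provide a default when the
     * architecture header did not already define the macro */
    #ifndef mb
    #define mb()            barrier()
    #endif

    #ifdef CONFIG_SMP
    #ifndef smp_mb
    #define smp_mb()        mb()
    #endif
    #else
    #ifndef smp_mb
    #define smp_mb()        barrier()
    #endif
    #endif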
2014-01-12 | arch: Move smp_mb__{before,after}_atomic_{inc,dec}.h into asm/atomic.h | Peter Zijlstra | 4 | -10/+10
Move the barriers functions that depend on the atomic implementation into the atomic implementation. Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Acked-by: Vineet Gupta <vgupta@synopsys.com> [for arch/arc bits] Cc: Peter Zijlstra <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Link: http://lkml.kernel.org/r/20131213150640.786183683@infradead.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-01-12 | locking/doc: Rename LOCK/UNLOCK to ACQUIRE/RELEASE | Peter Zijlstra | 1 | -116/+121
The LOCK and UNLOCK barriers as described in our barrier document are generally known as ACQUIRE and RELEASE barriers in other literature. Since we plan to introduce the acquire and release nomenclature in generic kernel primitives we should amend the document to avoid confusion as to what an acquire/release means. Reviewed-by: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Acked-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Michael Ellerman <michael@ellerman.id.au> Cc: Michael Neuling <mikey@neuling.org> Cc: Russell King <linux@arm.linux.org.uk> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Victor Kaplansky <VICTORK@il.ibm.com> Cc: Tony Luck <tony.luck@intel.com> Cc: Oleg Nesterov <oleg@redhat.com> Link: http://lkml.kernel.org/r/20131217092435.GC21999@twins.programming.kicks-ass.net Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-01-12 | block: null_blk: fix queue leak inside removing device | Ming Lei | 1 | -8/+2
When queue_mode is NULL_Q_MQ and null_blk is being removed, blk_cleanup_queue() isn't called to cleanup queue, so the queue allocated won't be freed. This patch calls blk_cleanup_queue() for MQ to drain all pending requests first and release the reference counter of queue kobject, then blk_mq_free_queue() will be called in queue kobject's release handler when queue kobject's reference counter drops to zero. Signed-off-by: Ming Lei <tom.leiming@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-01-12 | sched_clock: Disable seqlock lockdep usage in sched_clock() | John Stultz | 1 | -3/+3
Unfortunately the seqlock lockdep enablement can't be used in sched_clock(), since the lockdep infrastructure eventually calls into sched_clock(), which causes a deadlock. Thus, this patch changes all generic sched_clock() usage to use the raw_* methods. Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Reviewed-by: Stephen Boyd <sboyd@codeaurora.org> Reported-by: Krzysztof Hałasa <khalasa@piap.pl> Signed-off-by: John Stultz <john.stultz@linaro.org> Cc: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> Cc: Willy Tarreau <w@1wt.eu> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1388704274-5278-2-git-send-email-john.stultz@linaro.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-01-12 | seqlock: Use raw_ prefix instead of _no_lockdep | John Stultz | 2 | -12/+23
Linus disliked the _no_lockdep() naming, so instead use the more-consistent raw_* prefix to the non-lockdep enabled seqcount methods. This also adds raw_ methods for the write operations as well, which will be utilized in a following patch. Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Reviewed-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: John Stultz <john.stultz@linaro.org> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Cc: Krzysztof Hałasa <khalasa@piap.pl> Cc: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> Cc: Willy Tarreau <w@1wt.eu> Link: http://lkml.kernel.org/r/1388704274-5278-1-git-send-email-john.stultz@linaro.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
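After the rename, a reader that must not go through lockdep (such as sched_clock()) looks roughly like this (the seqcount and field names here are illustrative):

    unsigned int seq;
    u64 ns;

    do {
            /* raw_ variant: no lockdep, safe in paths lockdep depends on */
            seq = raw_read_seqcount_begin(&clock_seq);
            ns  = epoch_ns;         /* state protected by clock_seq */
    } while (read_seqcount_retry(&clock_seq, seq));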
2014-01-12 | sched: Calculate effective load even if local weight is 0 | Rik van Riel | 1 | -1/+1
Thomas Hellstrom bisected a regression where erratic 3D performance is experienced on virtual machines as measured by glxgears. It identified commit 58d081b5 ("sched/numa: Avoid overloading CPUs on a preferred NUMA node") as the problem which had modified the behaviour of effective_load. Effective load calculates the difference to the system-wide load if a scheduling entity was moved to another CPU. The task group is not heavier as a result of the move but overall system load can increase/decrease as a result of the change. Commit 58d081b5 ("sched/numa: Avoid overloading CPUs on a preferred NUMA node") changed effective_load to make it suitable for calculating if a particular NUMA node was compute overloaded. To reduce the cost of the function, it assumed that a current sched entity weight of 0 was uninteresting but that is not the case. wake_affine() uses a weight of 0 for sync wakeups on the grounds that it is assuming the waking task will sleep and not contribute to load in the near future. In this case, we still want to calculate the effective load of the sched entity hierarchy. As effective_load is no longer used by task_numa_compare since commit fb13c7ee (sched/numa: Use a system-wide search to find swap/migration candidates), this patch simply restores the historical behaviour. Reported-and-tested-by: Thomas Hellstrom <thellstrom@vmware.com> Signed-off-by: Rik van Riel <riel@redhat.com> [ Wrote changelog] Signed-off-by: Mel Gorman <mgorman@suse.de> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/20140106113912.GC6178@suse.de Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-01-11 | x86, fpu, amd: Clear exceptions in AMD FXSAVE workaround | Linus Torvalds | 1 | -6/+7
Before we do an EMMS in the AMD FXSAVE information leak workaround we need to clear any pending exceptions, otherwise we trap with a floating-point exception inside this code. Reported-by: halfdog <me@halfdog.net> Tested-by: Borislav Petkov <bp@suse.de> Link: http://lkml.kernel.org/r/CA%2B55aFxQnY_PCG_n4=0w-VG=YLXL-yr7oMxyy0WU2gCBAf3ydg@mail.gmail.com Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2014-01-11 | ARM: 7939/1: traps: fix opcode endianness when read from user memory | Taras Kondratiuk | 1 | -2/+3
Currently the code has inverted logic: the opcode read from user memory is swapped to the proper endianness only in case of a read error, while normally the opcode should be swapped only if it was read correctly from user memory. Reviewed-by: Victor Kamensky <victor.kamensky@linaro.org> Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk> Signed-off-by: Taras Kondratiuk <taras.kondratiuk@linaro.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
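The corrected logic reads roughly as follows (simplified; pc stands for the faulting user address, and the byteswap helper only runs when the load succeeded):

    u32 instr = 0;

    if (get_user(instr, (u32 __user *)pc) == 0)
            /* loaded ok: convert from in-memory layout to the opcode
             * representation the undef handlers work with */
            instr = __mem_to_opcode_arm(instr);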
2014-01-11 | ARM: 7937/1: perf_event: Silence sparse warning | Stephen Boyd | 1 | -1/+1
arch/arm/kernel/perf_event_cpu.c:274:25: warning: incorrect type in assignment (different modifiers) arch/arm/kernel/perf_event_cpu.c:274:25: expected int ( *init_fn )( ... ) arch/arm/kernel/perf_event_cpu.c:274:25: got void const *const data Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-01-11 | ARM: 7934/1: DT/kernel: fix arch_match_cpu_phys_id to avoid erroneous match | Sudeep Holla | 1 | -1/+1
The MPIDR contains specific bitfields(MPIDR.Aff{2..0}) which uniquely identify a CPU, in addition to some non-identifying information and reserved bits. The ARM cpu binding defines the 'reg' property to only contain the affinity bits, and any cpu nodes with other bits set in their 'reg' entry are skipped. As such it is not necessary to mask the phys_id with MPIDR_HWID_BITMASK, and doing so could lead to matching erroneous CPU nodes in the device tree. This patch removes the masking of the physical identifier. Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Sudeep Holla <sudeep.holla@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-01-10 | leds: lp5521/5523: Remove duplicate mutex | Milo Kim | 2 | -16/+8
It can be a problem when a pattern is loaded via the firmware interface. LP55xx common driver has already locked the mutex in 'lp55xx_firmware_loaded()'. So it should be deleted. On the other hand, locks are required in store_engine_load() on updating program memory. Reported-by: Pali Rohár <pali.rohar@gmail.com> Reported-by: Pavel Machek <pavel@ucw.cz> Signed-off-by: Milo Kim <milo.kim@ti.com> Signed-off-by: Bryan Wu <cooloney@gmail.com> Cc: <stable@vger.kernel.org>
2014-01-10 | drm/i915/bdw: make sure south port interrupts are enabled properly v2 | Jesse Barnes | 1 | -0/+2
We were apparently relying on the defaults on BDW, which resulted in no hotplug or AUX interrupts. So be sure to call the ibx_irq_preinstall to enable all interrupts. v2: use preinstall instead of redundant SDIER write References: https://bugs.freedesktop.org/show_bug.cgi?id=72834 References: https://bugs.freedesktop.org/show_bug.cgi?id=72833 Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-01-10 | xfs: Calling destroy_work_on_stack() to pair with INIT_WORK_ONSTACK() | Chuansheng Liu | 1 | -0/+1
In case CONFIG_DEBUG_OBJECTS_WORK is defined, destroy_work_on_stack() needs to be called to free the debug object, pairing with INIT_WORK_ONSTACK(). Signed-off-by: Liu, Chuansheng <chuansheng.liu@intel.com> Reviewed-by: Ben Myers <bpm@sgi.com> Signed-off-by: Ben Myers <bpm@sgi.com> (cherry picked from commit 6f96b3063cdd473c68664a190524ed966ac0cd92)
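The general pairing looks like this (names are illustrative rather than the exact xfs hunk):

    struct flush_args {
            struct work_struct      work;
            struct completion       done;
    } args;

    INIT_WORK_ONSTACK(&args.work, my_worker_fn);
    queue_work(my_wq, &args.work);
    flush_work(&args.work);
    /*
     * Required when CONFIG_DEBUG_OBJECTS_WORK=y: releases the debug object
     * that INIT_WORK_ONSTACK() registered for this on-stack work item.
     */
    destroy_work_on_stack(&args.work);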
2014-01-10 | xfs: fix off-by-one error in xfs_attr3_rmt_verify | Jie Liu | 1 | -1/+1
With CRC checks enabled, trying to set an attribute value exactly equal to the maximum size XATTR_SIZE_MAX causes the v3 remote attr write verification procedure to fail, which yields a backtrace like the one below: <snip> XFS (sda7): Internal error xfs_attr3_rmt_write_verify at line 191 of file fs/xfs/xfs_attr_remote.c <snip> Call Trace: [<ffffffff816f0042>] dump_stack+0x45/0x56 [<ffffffffa0d99c8b>] xfs_error_report+0x3b/0x40 [xfs] [<ffffffffa0d96edd>] ? _xfs_buf_ioapply+0x6d/0x390 [xfs] [<ffffffffa0d99ce5>] xfs_corruption_error+0x55/0x80 [xfs] [<ffffffffa0dbef6b>] xfs_attr3_rmt_write_verify+0x14b/0x1a0 [xfs] [<ffffffffa0d96edd>] ? _xfs_buf_ioapply+0x6d/0x390 [xfs] [<ffffffffa0d97315>] ? xfs_bdstrat_cb+0x55/0xb0 [xfs] [<ffffffffa0d96edd>] _xfs_buf_ioapply+0x6d/0x390 [xfs] [<ffffffff81184cda>] ? vm_map_ram+0x31a/0x460 [<ffffffff81097230>] ? wake_up_state+0x20/0x20 [<ffffffffa0d97315>] ? xfs_bdstrat_cb+0x55/0xb0 [xfs] [<ffffffffa0d9726b>] xfs_buf_iorequest+0x6b/0xc0 [xfs] [<ffffffffa0d97315>] xfs_bdstrat_cb+0x55/0xb0 [xfs] [<ffffffffa0d97906>] xfs_bwrite+0x46/0x80 [xfs] [<ffffffffa0dbfa94>] xfs_attr_rmtval_set+0x334/0x490 [xfs] [<ffffffffa0db84aa>] xfs_attr_leaf_addname+0x24a/0x410 [xfs] [<ffffffffa0db8893>] xfs_attr_set_int+0x223/0x470 [xfs] [<ffffffffa0db8b76>] xfs_attr_set+0x96/0xb0 [xfs] [<ffffffffa0db13b2>] xfs_xattr_set+0x42/0x70 [xfs] [<ffffffff811df9b2>] generic_setxattr+0x62/0x80 [<ffffffff811e0213>] __vfs_setxattr_noperm+0x63/0x1b0 [<ffffffff81307afe>] ? evm_inode_setxattr+0xe/0x10 [<ffffffff811e0415>] vfs_setxattr+0xb5/0xc0 [<ffffffff811e054e>] setxattr+0x12e/0x1c0 [<ffffffff811c6e82>] ? final_putname+0x22/0x50 [<ffffffff811c708b>] ? putname+0x2b/0x40 [<ffffffff811cc4bf>] ? user_path_at_empty+0x5f/0x90 [<ffffffff811bdfd9>] ? __sb_start_write+0x49/0xe0 [<ffffffff81168589>] ? vm_mmap_pgoff+0x99/0xc0 [<ffffffff811e07df>] SyS_setxattr+0x8f/0xe0 [<ffffffff81700c2d>] system_call_fastpath+0x1a/0x1f Tests: setfattr -n user.longxattr -v `perl -e 'print "A"x65536'` testfile This patch fixes it by checking whether the remote EA size is greater than XATTR_SIZE_MAX rather than greater than or equal to it, because a specified EA value size equal to the limit is valid per the VFS setxattr interface. Signed-off-by: Jie Liu <jeff.liu@oracle.com> Reviewed-by: Mark Tinguely <tinguely@sgi.com> Signed-off-by: Ben Myers <bpm@sgi.com> (cherry picked from commit 85dd0707f0cad26d60f2dc574d17a5ab948d10f7)
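The fix itself is the classic off-by-one change in the verifier's size check; schematically (variable name and surrounding code simplified):

    /* before: a value exactly at the VFS limit is (wrongly) rejected */
    if (size >= XATTR_SIZE_MAX)
            return false;

    /* after: only values beyond the limit are invalid */
    if (size > XATTR_SIZE_MAX)
            return false;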
2014-01-10 | qlcnic: Fix ethtool statistics length calculation | Shahed Shaikh | 1 | -18/+21
o Consider the number of Tx queues while calculating the length of Tx statistics as part of ethtool stats.
o Calculate statistics length properly for 82xx and 83xx adapters.
Signed-off-by: Shahed Shaikh <shahed.shaikh@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-01-10 | qlcnic: Fix bug in TX statistics | Manish Chopra | 3 | -1/+5
o Driver was not updating TX stats so it was not populating statistics in `ifconfig` command output. Signed-off-by: Manish Chopra <manish.chopra@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-01-10 | net: core: explicitly select a txq before doing l2 forwarding | Jason Wang | 21 | -62/+80
Currently, the tx queue was selected implicitly in ndo_dfwd_start_xmit(). This causes several issues:
- NETIF_F_LLTX was removed for macvlan, so the txq lock was taken for macvlan instead of the lower device, which misses the necessary txq synchronization for the lower device such as txq stopping or freezing required by the dev watchdog or control path.
- dev_hard_start_xmit() was called with a NULL txq which bypasses the net device watchdog.
- dev_hard_start_xmit() does not check txq everywhere which will lead to a crash when tso is disabled for the lower device.
Fix this by explicitly introducing a new param for .ndo_select_queue() for just selecting queues in the case of l2 forwarding offload. netdev_pick_tx() was also extended to accept this parameter and dev_queue_xmit_accel() was used to do l2 forwarding transmission. With these fixes, NETIF_F_LLTX could be preserved for macvlan and there's no need to check txq against NULL in dev_hard_start_xmit(). Also there's no need to keep a dedicated ndo_dfwd_start_xmit() and we can just reuse the code of dev_queue_xmit() to do the transmission. In the future, this will also be required for macvtap l2 forwarding support since it provides a necessary synchronization method. Cc: John Fastabend <john.r.fastabend@intel.com> Cc: Neil Horman <nhorman@tuxdriver.com> Cc: e1000-devel@lists.sourceforge.net Signed-off-by: Jason Wang <jasowang@redhat.com> Acked-by: Neil Horman <nhorman@tuxdriver.com> Acked-by: John Fastabend <john.r.fastabend@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
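The interface change described above adds an accel_priv argument so a real txq can be picked explicitly for L2-forwarded traffic; roughly:

    /* netdevice.h: queue selection now takes the forwarding context */
    u16 (*ndo_select_queue)(struct net_device *dev, struct sk_buff *skb,
                            void *accel_priv);

    /*
     * Transmit path for an L2-forwarded skb: pick a txq on the lower
     * device, then go through the normal qdisc/txq-lock path.
     */
    int dev_queue_xmit_accel(struct sk_buff *skb, void *accel_priv);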
2014-01-10 | macvlan: forbid L2 fowarding offload for macvtap | Jason Wang | 1 | -1/+4
L2 forwarding offload will bypass the rx handler of the real device. This means the packet cannot be forwarded to the macvtap device. Another problem is that the dev_hard_start_xmit() called for macvtap does not have any synchronization. Fix this by forbidding L2 forwarding for macvtap. Cc: John Fastabend <john.r.fastabend@intel.com> Cc: Neil Horman <nhorman@tuxdriver.com> Acked-by: Neil Horman <nhorman@tuxdriver.com> Signed-off-by: Jason Wang <jasowang@redhat.com> Acked-by: John Fastabend <john.r.fastabend@intel.com.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-01-10 | bnx2x: fix DMA unmapping of TSO split BDs | Michal Schmidt | 1 | -4/+9
bnx2x triggers warnings with CONFIG_DMA_API_DEBUG=y: WARNING: CPU: 0 PID: 2253 at lib/dma-debug.c:887 check_unmap+0xf8/0x920() bnx2x 0000:28:00.0: DMA-API: device driver frees DMA memory with different size [device address=0x00000000da2b389e] [map size=1490 bytes] [unmap size=66 bytes] The reason is that bnx2x splits a TSO BD into two BDs (headers + data) using one DMA mapping for both, but it uses only the length of the first BD when unmapping. This patch fixes the bug by unmapping the whole length of the two BDs. Signed-off-by: Michal Schmidt <mschmidt@redhat.com> Reviewed-by: Eric Dumazet <edumazet@google.com> Acked-by: Dmitry Kravkov <dmitry@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
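Conceptually, the unmap path now adds the second (data) BD's length back in before unmapping; a sketch with illustrative flag and variable names:

    u16 split_bd_len = 0;

    if (tx_buf->flags & BNX2X_TSO_SPLIT_BD) {
            /*
             * The headers BD and the following data BD were carved out of
             * one DMA mapping, so account for the data BD's bytes as well.
             */
            split_bd_len = BD_UNMAP_LEN(tx_data_bd);
    }

    dma_unmap_single(&bp->pdev->dev, BD_UNMAP_ADDR(tx_start_bd),
                     BD_UNMAP_LEN(tx_start_bd) + split_bd_len,
                     DMA_TO_DEVICE);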
2014-01-10 | EDAC: Don't try to cancel workqueue when it's never setup | Stephen Boyd | 1 | -0/+3
We only setup a workqueue for edac devices that use the polling method. We still try to cancel the workqueue if an edac_device uses the irq method though. This causes a warning from debug objects when we remove an edac device: WARNING: CPU: 0 PID: 56 at lib/debugobjects.c:260 debug_print_object+0x98/0xc0() ODEBUG: assert_init not available (active state 0) object type: timer_list hint: stub_timer+0x0/0x28 Modules linked in: krait_edac(-) CPU: 0 PID: 56 Comm: rmmod Not tainted 3.12.0-rc2-00035-g15292a0 #69 (unwind_backtrace+0x0/0x144) (show_stack+0x20/0x24) (dump_stack+0x74/0xb4) (warn_slowpath_common+0x78/0x9c) (warn_slowpath_fmt+0x40/0x48) (debug_print_object+0x98/0xc0) (debug_object_assert_init+0xdc/0xec) (del_timer+0x24/0x7c) (try_to_grab_pending+0xc0/0x1b0) (cancel_delayed_work+0x2c/0xa0) (edac_device_workq_teardown+0x1c/0x38) (edac_device_del_device+0xb8/0xe4) (krait_edac_remove+0x50/0x70 [krait_edac]) (platform_drv_remove+0x24/0x28) (__device_release_driver+0x68/0xc0) (driver_detach+0xc4/0xc8) (bus_remove_driver+0xac/0x114) (driver_unregister+0x38/0x58) (platform_driver_unregister+0x1c/0x20) (krait_edac_driver_exit+0x14/0x1c [krait_edac]) (SyS_delete_module+0x178/0x2b4) Fix it by skipping the workqueue teardown for such devices. Signed-off-by: Stephen Boyd <sboyd@codeaurora.org> Link: http://lkml.kernel.org/r/1388434457-4194-2-git-send-email-sboyd@codeaurora.org Signed-off-by: Borislav Petkov <bp@suse.de>
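The fix boils down to bailing out of the teardown when no polling workqueue was ever created; a sketch (using the presence of the edac_check callback as the poll-vs-irq test is an assumption):

    static void edac_device_workq_teardown(struct edac_device_ctl_info *edac_dev)
    {
            /*
             * Only polling devices (those with an edac_check callback) ever
             * set up the delayed work; skip the teardown otherwise.
             */
            if (!edac_dev->edac_check)
                    return;

            /* ... existing cancel/flush of edac_dev->work ... */
    }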
2014-01-10 | drm/i915: Don't grab crtc mutexes in intel_modeset_gem_init() | Ville Syrjälä | 1 | -2/+2
When the pipe A force quirk is applied the code will attempt to grab a crtc mutex during intel_modeset_setup_hw_state(). If we're already holding all crtc mutexes this will obviously deadlock every time. So instead of using drm_modeset_lock_all() just grab the mode_config.mutex. This is enough to avoid the unlocked mutex warnings from certain lower level functions. The regression was introduced in: commit 027476642811f8559cbe00ef6cc54db230e48a20 Author: Ville Syrjälä <ville.syrjala@linux.intel.com> Date: Mon Dec 2 11:08:06 2013 +0200 drm/i915: Take modeset locks around intel_modeset_setup_hw_state() Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Cc: stable@vger.kernel.org [danvet: Add cc: stable since the offending commit has that, too.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
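In code terms the change is from drm_modeset_lock_all() to taking just the top-level mode_config mutex around the hw state readout, roughly:

    /*
     * intel_modeset_gem_init(): enough to silence the "unlocked mutex"
     * warnings in lower-level helpers without deadlocking on crtc locks
     * that may already be held when the pipe A quirk is applied.
     */
    mutex_lock(&dev->mode_config.mutex);
    intel_modeset_setup_hw_state(dev, false);
    mutex_unlock(&dev->mode_config.mutex);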
2014-01-09 | ipv6: add link-local, sit and loopback address with INFINITY_LIFE_TIME | Hannes Frederic Sowa | 1 | -2/+4
In the past the IFA_PERMANENT flag indicated that the valid and preferred lifetime were ignored. Since change fad8da3e085ddf ("ipv6 addrconf: fix preferred lifetime state-changing behavior while valid_lft is infinity") we honour at least the preferred lifetime on those addresses. As such the valid lifetime gets recalculated and updated to 0. If the loopback address is added manually this problem does not occur. Also if NetworkManager manages IPv6, those addresses will get added via inet6_rtm_newaddr and thus will have a correct lifetime, too. Reported-by: François-Xavier Le Bail <fx.lebail@yahoo.com> Reported-by: Damien Wyart <damien.wyart@gmail.com> Fixes: fad8da3e085ddf ("ipv6 addrconf: fix preferred lifetime state-changing behavior while valid_lft is infinity") Cc: Yasushi Asano <yasushi.asano@jp.fujitsu.com> Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-01-09 | bnx2x: prevent WARN during driver unload | Yuval Mintz | 2 | -18/+38
Starting with commit 80c33dd "net: add might_sleep() call to napi_disable" bnx2x fails the might_sleep tests causing a stack trace to appear whenever the driver is unloaded, as local_bh_disable() is being called before napi_disable(). This changes the locking schematics related to CONFIG_NET_RX_BUSY_POLL, preventing the need for calling local_bh_disable() and thus eliminating the issue. Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com> Signed-off-by: Dmitry Kravkov <dmitry@broadcom.com> Signed-off-by: Ariel Elior <ariele@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-01-10 | intel_idle: close avn_cstates array with correct marker | Jiang Liu | 1 | -1/+3
Close avn_cstates array with correct marker to avoid overflow in function intel_idle_cpu_init(). [rjw: The problem was introduced when commit 22e580d07f65 was merged on top of eba682a5aeb6 (intel_idle: shrink states tables).] Fixes: 22e580d07f65 (intel_idle: Fixed C6 state on Avoton/Rangeley processors) Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
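The cstate tables are walked until an entry with a NULL enter handler is found, so every table must end with such a sentinel; schematically (fields trimmed, latency values omitted):

    static struct cpuidle_state avn_cstates[] = {
            { /* C1 */
                    .name = "C1-AVN",
                    .enter = &intel_idle, },
            { /* C6 */
                    .name = "C6-AVN",
                    .enter = &intel_idle, },
            {
                    .enter = NULL }         /* end-of-table marker that
                                             * intel_idle_cpu_init() stops on */
    };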
2014-01-09 | ftrace/x86: Load ftrace_ops in parameter not the variable holding it | Steven Rostedt | 2 | -3/+3
Function tracing callbacks expect to have the ftrace_ops that registered it passed to them, not the address of the variable that holds the ftrace_ops that registered it. Use a mov instead of a lea to store the ftrace_ops into the parameter of the function tracing callback. Signed-off-by: Steven Rostedt <rostedt@goodmis.org> Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com> Link: http://lkml.kernel.org/r/20131113152004.459787f9@gandalf.local.home Signed-off-by: H. Peter Anvin <hpa@linux.intel.com> Cc: <stable@vger.kernel.org> # v3.8+
2014-01-09Revert "intel_idle: mark states tables with __initdata tag"Jiang Liu1-5/+5
This reverts commit 9d046ccb98085f1d437585f84748c783a04ba240. Commit 9d046ccb98085 marks all state tables with __initdata, but the state table may be accessed when doing CPU online, which then causing system crash as below: [ 204.188841] BUG: unable to handle kernel paging request at ffffffff8227cce8 [ 204.196844] IP: [<ffffffff814aa1c0>] intel_idle_cpu_init+0x40/0x130 [ 204.203996] PGD 1e11067 PUD 1e12063 PMD 455859063 PTE 800000000227c062 [ 204.211638] Oops: 0000 [#1] SMP DEBUG_PAGEALLOC [ 204.216975] Modules linked in: x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm crct10dif_pclmul crc32_pclmul ghash_clmulni_intel aesni_intel aes_x86_64 lrw gf128mul glue_helper ablk_helper cryptd gpio_ich microcode joydev sb_edac edac_core ipmi_si lpc_ich ipmi_msghandler lp tpm_tis parport wmi mac_hid acpi_pad hid_generic ixgbe isci usbhid dca hid libsas ptp ahci libahci scsi_transport_sas megaraid_sas pps_core mdio [ 204.262815] CPU: 11 PID: 1489 Comm: bash Not tainted 3.13.0-rc7+ #48 [ 204.269993] Hardware name: Intel Corporation BRICKLAND/BRICKLAND, BIOS BRIVTIN1.86B.0047.L09.1312061514 12/06/2013 [ 204.281646] task: ffff8804303a24a0 ti: ffff880440fac000 task.ti: ffff880440fac000 [ 204.290311] RIP: 0010:[<ffffffff814aa1c0>] [<ffffffff814aa1c0>] intel_idle_cpu_init+0x40/0x130 [ 204.300184] RSP: 0018:ffff880440fadd28 EFLAGS: 00010286 [ 204.306192] RAX: ffffffff8227cca0 RBX: ffffe8fff1a03400 RCX: 0000000000000007 [ 204.314244] RDX: ffff88045f400000 RSI: 0000000000000009 RDI: 0000000000001120 [ 204.322296] RBP: ffff880440fadd38 R08: 0000000000000000 R09: 0000000000000001 [ 204.330411] R10: 0000000000000001 R11: 0000000000000000 R12: 000000000000001e [ 204.338482] R13: 00000000ffffffdb R14: 0000000000000001 R15: 0000000000000000 [ 204.346743] FS: 00007f64f7b0c740(0000) GS:ffff88045ce00000(0000) knlGS:0000000000000000 [ 204.355919] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 204.362449] CR2: ffffffff8227cce8 CR3: 0000000444ab0000 CR4: 00000000001407e0 [ 204.370520] Stack: [ 204.372853] 000000000000001e ffffffff81f10240 ffff880440fadd50 ffffffff814aa307 [ 204.381519] ffffffff81ea80e0 ffff880440fadda0 ffffffff8185a230 0000000000000000 [ 204.390196] 000000000000001e 0000000000000002 0000000000000002 0000000000000000 [ 204.398856] Call Trace: [ 204.401683] [<ffffffff814aa307>] cpu_hotplug_notify+0x57/0x70 [ 204.408638] [<ffffffff8185a230>] notifier_call_chain+0x100/0x150 [ 204.415553] [<ffffffff810a7dae>] __raw_notifier_call_chain+0xe/0x10 [ 204.422772] [<ffffffff81072163>] cpu_notify+0x23/0x50 [ 204.428616] [<ffffffff810723b2>] _cpu_up+0x132/0x1a0 [ 204.434361] [<ffffffff8107249d>] cpu_up+0x7d/0xa0 [ 204.439819] [<ffffffff81836c9c>] cpu_subsys_online+0x3c/0x90 [ 204.446345] [<ffffffff81554625>] device_online+0x45/0xa0 [ 204.452471] [<ffffffff815546ce>] online_store+0x4e/0x80 [ 204.458511] [<ffffffff815519a8>] dev_attr_store+0x18/0x30 [ 204.464744] [<ffffffff812a68f1>] sysfs_write_file+0x151/0x1c0 [ 204.471681] [<ffffffff81217ef1>] vfs_write+0xe1/0x160 [ 204.477524] [<ffffffff8121889c>] SyS_write+0x4c/0x90 [ 204.483270] [<ffffffff8185f2ed>] system_call_fastpath+0x1a/0x1f [ 204.490081] Code: 41 54 41 89 fc 8b 3d 48 25 85 01 53 48 8b 1d 30 25 85 01 48 03 1c c5 40 90 fb 81 48 8b 05 19 25 85 01 c7 43 0c 01 00 00 00 66 90 <48> 83 78 48 00 74 4f 41 83 c0 01 41 39 f0 7e 10 48 c7 c7 38 79 [ 204.515723] RIP [<ffffffff814aa1c0>] intel_idle_cpu_init+0x40/0x130 [ 204.522996] RSP <ffff880440fadd28> [ 204.526976] CR2: ffffffff8227cce8 [ 204.530766] ---[ end trace 336f56cc3d1cfc8c ]--- Fixes: 
9d046ccb98085 (intel_idle: mark states tables with __initdata tag) Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>