2010-10-21  fib: introduce fib_alias_accessed() helper  (Eric Dumazet, 4 files changed, -3/+12)
A perf tools session at NFWS 2010 pointed out false sharing on struct fib_alias that can be avoided pretty easily if we set the FA_S_ACCESSED bit only when needed (i.e., not already set).
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
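The fix is a classic check-before-set: never dirty a shared cache line when the flag is already set. Below is a minimal userspace C sketch of that pattern; struct alias_like, FA_S_ACCESSED_BIT and mark_accessed are illustrative names, not the kernel code.

    #include <stdatomic.h>

    #define FA_S_ACCESSED_BIT 0x1u      /* illustrative flag value */

    struct alias_like {
        atomic_uint flags;              /* shared, read-mostly word */
    };

    /* Only write when the bit is not already set, so the common
     * "already accessed" case stays a pure read and the cache line
     * does not bounce between CPUs (the false sharing the commit
     * message refers to). */
    static void mark_accessed(struct alias_like *fa)
    {
        if (!(atomic_load_explicit(&fa->flags, memory_order_relaxed) &
              FA_S_ACCESSED_BIT))
            atomic_fetch_or_explicit(&fa->flags, FA_S_ACCESSED_BIT,
                                     memory_order_relaxed);
    }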
2010-10-21  net/sched: fix missing spinlock init  (Eric Dumazet, 1 file changed, -0/+2)
Under network load, doing:

    tc qdisc del dev eth0 root

triggers:

[ 167.193087] BUG: spinlock bad magic on CPU#3, udpflood/4928
[ 167.193139] lock: c15bc324, .magic: 00000000, .owner: <none>/-1, .owner_cpu: -1
[ 167.193193] Pid: 4928, comm: udpflood Not tainted 2.6.36-rc7-11417-g215340c-dirty #323
[ 167.193245] Call Trace:
[ 167.193292]  [<c13abaa0>] ? printk+0x18/0x20
[ 167.193342]  [<c11afb53>] spin_bug+0xa3/0xf0
[ 167.193389]  [<c11afcdd>] do_raw_spin_lock+0x7d/0x160
[ 167.193440]  [<c1313d4e>] ? __dev_xmit_skb+0x27e/0x2b0
[ 167.193496]  [<c107382b>] ? trace_hardirqs_on+0xb/0x10
[ 167.193545]  [<c13ae99a>] _raw_spin_lock+0x3a/0x40
[ 167.193593]  [<c1313d4e>] ? __dev_xmit_skb+0x27e/0x2b0
[ 167.193641]  [<c1313d4e>] __dev_xmit_skb+0x27e/0x2b0

commit 79640a4ca695 ("add additional lock to qdisc to increase throughput") forgot to initialize the noop_qdisc and noqueue_qdisc busylock.

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
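The two added lines give spinlocks embedded in statically defined qdiscs a valid initial state. A rough userspace analogue of this bug class, with a pthread spinlock standing in for the kernel spinlock (fake_qdisc and busylock are illustrative names):

    #include <pthread.h>

    /* A statically defined object with an embedded lock: unless someone
     * explicitly initializes the lock, using it is undefined behaviour
     * (with CONFIG_DEBUG_SPINLOCK the kernel reports this as
     * "spinlock bad magic"). */
    struct fake_qdisc {
        pthread_spinlock_t busylock;
        unsigned long      enqueued;
    };

    static struct fake_qdisc noop_like_qdisc;

    /* The fix is the one-liner that is easy to forget: give the lock a
     * valid initial state before the first lock/unlock on it. */
    static void fake_qdisc_init(struct fake_qdisc *q)
    {
        pthread_spin_init(&q->busylock, PTHREAD_PROCESS_PRIVATE);
        q->enqueued = 0;
    }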
2010-10-21  r8169: print errors when DMA mapping fails  (Stanislaw Gruszka, 1 file changed, -3/+13)
If a DMA mapping fails we drop the packet or fail to open the device, but the exact reason for the drop/failure stays unknown to the user, so print errors.
Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2010-10-21  r8169: (re)init phy on resume  (Stanislaw Gruszka, 1 file changed, -0/+5)
Fix the device switching to low-speed mode after resume, reported in: https://bugzilla.redhat.com/show_bug.cgi?id=502974
Reported-and-tested-by: Laurentiu Badea <bugzilla-redhat@wotevah.com>
Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2010-10-21  r8169: changing mtu clean up  (Stanislaw Gruszka, 1 file changed, -41/+6)
Since we no longer change the rx buffer size, we can clean up rtl8169_change_mtu and, in consequence, rtl8169_down.
Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2010-10-21  r8169: do not account fragments as packets  (Stanislaw Gruszka, 1 file changed, -5/+3)
Only increase the tx_{packets,dropped} statistics when a full skb is transmitted or dropped, not just a fragment.
Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2010-10-21  r8169: use pointer to struct device as local variable  (Stanislaw Gruszka, 1 file changed, -26/+25)
Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2010-10-21  r8169: replace PCI_DMA_{TO,FROM}DEVICE with DMA_{TO,FROM}_DEVICE  (Stanislaw Gruszka, 1 file changed, -7/+7)
Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2010-10-21  r8169: init rx ring cleanup  (Stanislaw Gruszka, 1 file changed, -30/+22)
Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2010-10-21  r8169: check dma mapping failures  (Stanislaw Gruszka, 1 file changed, -18/+48)
Check for possible DMA mapping errors and clean up if one happens. Fix a wrap-around bug in rtl8169_tx_clear along the way.
Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
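The usual shape of such a fix is "check every mapping, and unwind the ones already made before reporting failure". A self-contained C sketch of that pattern; fake_dma_map/fake_dma_unmap and map_tx_frags are illustrative stand-ins, not the r8169 code:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    struct frag { const void *buf; size_t len; uint64_t bus; };

    /* stand-ins for dma_map_single()/dma_unmap_single(); 0 = mapping failed */
    static uint64_t fake_dma_map(const void *p, size_t len)
    {
        (void)len;
        return (uint64_t)(uintptr_t)p;
    }

    static void fake_dma_unmap(uint64_t bus, size_t len)
    {
        (void)bus; (void)len;
    }

    /* Map every fragment of a packet; on failure, unmap what was already
     * mapped so nothing leaks, and let the caller drop the packet. */
    static bool map_tx_frags(struct frag *frags, int n)
    {
        int i;

        for (i = 0; i < n; i++) {
            frags[i].bus = fake_dma_map(frags[i].buf, frags[i].len);
            if (!frags[i].bus)
                goto unwind;
        }
        return true;

    unwind:
        while (--i >= 0)
            fake_dma_unmap(frags[i].bus, frags[i].len);
        return false;
    }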
2010-10-21  bnx2x: Update bnx2x to use new vlan acceleration.  (Hao Zheng, 4 files changed, -84/+27)
Make the bnx2x driver use the new vlan acceleration model.
Signed-off-by: Hao Zheng <hzheng@nicira.com>
Signed-off-by: Jesse Gross <jesse@nicira.com>
CC: Eilon Greenstein <eilong@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2010-10-21  ixgbe: Update ixgbe to use new vlan acceleration.  (Jesse Gross, 3 files changed, -79/+74)
Make the ixgbe driver use the new vlan acceleration model.
Signed-off-by: Jesse Gross <jesse@nicira.com>
CC: Peter Waskiewicz <peter.p.waskiewicz.jr@intel.com>
CC: Emil Tantilov <emil.s.tantilov@intel.com>
CC: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2010-10-21  bnx2: Update bnx2 to use new vlan acceleration.  (Jesse Gross, 2 files changed, -73/+28)
Make the bnx2 driver use the new vlan acceleration model.
Signed-off-by: Jesse Gross <jesse@nicira.com>
CC: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2010-10-21  bridge: Add support for TX vlan offload.  (Jesse Gross, 1 file changed, -1/+7)
If some of the underlying devices support it, enable vlan offload on transmit for bridge devices. This allows senders to take advantage of the hardware support, similar to other forms of acceleration.
Signed-off-by: Jesse Gross <jesse@nicira.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2010-10-21  ethtool: Add support for vlan acceleration.  (Jesse Gross, 2 files changed, -1/+4)
Now that vlan acceleration is handled consistently regardless of usage, it is possible to enable and disable it at will. This adds support for Ethtool operations that change the offloading status for debugging purposes, similar to other forms of hardware acceleration.
Signed-off-by: Jesse Gross <jesse@nicira.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2010-10-21  vlan: Centralize handling of hardware acceleration.  (Jesse Gross, 5 files changed, -140/+48)
Currently each driver that is capable of vlan hardware acceleration must be aware of the vlan groups that are configured and then pass the stripped tag to a specialized receive function. This is different from other types of hardware offload in that it places a significant amount of knowledge in the driver itself rather than keeping it in the networking core.

This makes vlan offloading function more similarly to other forms of offloading (such as checksum offloading or TSO) by doing the following:
* On receive, stripped vlans are passed directly to the network core, without attempting to check for vlan groups or reconstructing the header if there is no group
* vlans are made less special by folding the logic into the main receive routines
* On transmit, the device layer will add the vlan header in software if the hardware doesn't support it, instead of spreading that logic out in upper layers, such as bonding

There are a number of advantages to this:
* Fixes all bugs with drivers incorrectly dropping vlan headers at once.
* Avoids having to disable VLAN acceleration when in promiscuous mode (good for bridging since it always puts devices in promiscuous mode).
* Keeps the VLAN tag separate until given to the ultimate consumer, which avoids needing to do header reconstruction as in tg3 unless absolutely necessary.
* Consolidates common code in core networking.

Signed-off-by: Jesse Gross <jesse@nicira.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
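One of the pieces described above is the software transmit fallback: if the NIC cannot insert the 802.1Q tag itself, the stack splices it into the frame before handing it down. A minimal C sketch of that header insertion; vlan_insert_tag here is an illustrative helper operating on a raw buffer, not the kernel's skb-based one:

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    #define ETH_ALEN  6
    #define VLAN_TPID 0x8100u   /* 802.1Q EtherType */

    /* Insert a 4-byte 802.1Q header between the MAC addresses and the
     * original EtherType. 'buf' must have at least len + 4 bytes of
     * capacity. Returns the new frame length. */
    static size_t vlan_insert_tag(uint8_t *buf, size_t len, uint16_t tci)
    {
        if (len < 2 * ETH_ALEN)
            return len;                       /* not a full Ethernet header */

        /* shift everything after the two MAC addresses 4 bytes right */
        memmove(buf + 2 * ETH_ALEN + 4, buf + 2 * ETH_ALEN,
                len - 2 * ETH_ALEN);

        buf[2 * ETH_ALEN]     = VLAN_TPID >> 8;
        buf[2 * ETH_ALEN + 1] = VLAN_TPID & 0xff;
        buf[2 * ETH_ALEN + 2] = tci >> 8;     /* PCP/DEI/VID, network order */
        buf[2 * ETH_ALEN + 3] = tci & 0xff;
        return len + 4;
    }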
2010-10-21  vlan: Avoid hash table lookup to find group.  (Jesse Gross, 5 files changed, -73/+34)
A struct net_device always maps to zero or one vlan group, and we always know the device when we are looking up a group. We currently do a hash table lookup on the device to find the group, but it is much simpler to just store a pointer.
Signed-off-by: Jesse Gross <jesse@nicira.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
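A tiny sketch of the data-structure change; the struct names are illustrative, not the kernel definitions, and VLAN_N_VID = 4096 reflects the 12-bit VID space:

    #define VLAN_N_VID 4096

    struct vlan_group_like {
        void *vlan_devices[VLAN_N_VID];   /* one slot per possible VID */
    };

    struct netdev_like {
        const char *name;
        struct vlan_group_like *vlgrp;    /* NULL or the single group */
    };

    /* Before: hash the device pointer and walk a bucket to find its group.
     * After: the device carries the pointer, so lookup is a single load. */
    static struct vlan_group_like *find_group(const struct netdev_like *dev)
    {
        return dev->vlgrp;
    }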
2010-10-21  vlan: Enable software emulation for vlan acceleration.  (Jesse Gross, 2 files changed, -6/+44)
Currently users of hardware vlan acceleration need to know whether the device supports it before generating packets. However, vlan acceleration will soon be available in a more flexible manner, so knowing ahead of time becomes much more difficult. This adds a software fallback path for vlan packets on devices without the necessary offloading support, similar to other types of hardware acceleration.
Signed-off-by: Jesse Gross <jesse@nicira.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2010-10-21  vlan: Don't check for vlan group before vlan_tx_tag_present.  (Jesse Gross, 29 files changed, -43/+38)
Many (but not all) drivers check to see whether there is a vlan group configured before using a tag stored in the skb. There's not much point in this check since it just throws away data that should only be present in the expected circumstances. However, it will soon be legal and expected to get a vlan tag when no vlan group is configured, so remove this check from all drivers to avoid dropping the tags.
Signed-off-by: Jesse Gross <jesse@nicira.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2010-10-21  vlan: Rename VLAN_GROUP_ARRAY_LEN to VLAN_N_VID.  (Jesse Gross, 16 files changed, -27/+27)
VLAN_GROUP_ARRAY_LEN is simply the number of possible vlan VIDs. Since vlan groups will soon be more of an implementation detail for vlan devices, rename the constant to be descriptive of its actual purpose.
Signed-off-by: Jesse Gross <jesse@nicira.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2010-10-21  ebtables: Allow filtering of hardware accelerated vlan frames.  (Jesse Gross, 3 files changed, -18/+34)
An upcoming commit will allow packets with hardware vlan acceleration information to be passed through more parts of the network stack, including packets trunked through the bridge. This adds support for matching and filtering those packets through ebtables.
Signed-off-by: Jesse Gross <jesse@nicira.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
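Matching such frames means the VLAN ID can live in either of two places: out of band (a tag the hardware already stripped) or in band (a real 802.1Q header in the frame). A generic C sketch of that dual-source match; struct frame and vlan_id_matches are illustrative, not the ebtables code:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    struct frame {
        bool            has_oob_tag;   /* tag stripped by hardware */
        uint16_t        oob_tci;       /* out-of-band tag control info */
        const uint8_t  *data;          /* starts at the Ethernet header */
        size_t          len;
    };

    static bool vlan_id_matches(const struct frame *f, uint16_t want_vid)
    {
        uint16_t tci;

        if (f->has_oob_tag) {
            tci = f->oob_tci;
        } else {
            if (f->len < 16)
                return false;
            /* EtherType at offset 12; 0x8100 means an in-band 802.1Q tag */
            if (f->data[12] != 0x81 || f->data[13] != 0x00)
                return false;
            tci = ((uint16_t)f->data[14] << 8) | f->data[15];
        }
        return (tci & 0x0fff) == want_vid;   /* low 12 bits are the VID */
    }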
2010-10-21  enic: Fix log message  (Vasanthy Kolluri, 1 file changed, -1/+1)
Fix a log message.
Signed-off-by: Vasanthy Kolluri <vkolluri@cisco.com>
Signed-off-by: Roopa Prabhu <roprabhu@cisco.com>
Signed-off-by: David Wang <dwang2@cisco.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2010-10-21  enic: Change min MTU  (Vasanthy Kolluri, 1 file changed, -1/+1)
Change min MTU to 68.
Signed-off-by: Vasanthy Kolluri <vkolluri@cisco.com>
Signed-off-by: Roopa Prabhu <roprabhu@cisco.com>
Signed-off-by: David Wang <dwang2@cisco.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2010-10-21  enic: Replace firmware devcmd CMD_ENABLE with CMD_ENABLE_WAIT  (Vasanthy Kolluri, 3 files changed, -4/+10)
Replace the no-wait CMD_ENABLE firmware devcmd with CMD_ENABLE_WAIT.
Signed-off-by: Vasanthy Kolluri <vkolluri@cisco.com>
Signed-off-by: Roopa Prabhu <roprabhu@cisco.com>
Signed-off-by: David Wang <dwang2@cisco.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2010-10-21  enic: Make firmware cognizant of the user set mac address  (Vasanthy Kolluri, 1 file changed, -1/+12)
Let the firmware know about the MAC address set by the user via ndo_set_mac_address.
Signed-off-by: Vasanthy Kolluri <vkolluri@cisco.com>
Signed-off-by: Roopa Prabhu <roprabhu@cisco.com>
Signed-off-by: David Wang <dwang2@cisco.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2010-10-21  enic: Add support for multiple hardware receive queues  (Vasanthy Kolluri, 5 files changed, -124/+368)
Add support for multiple hardware receive queues. Ingress traffic is hashed into one of the receive queues based on the IP header, the TCP header, or both. The maximum number of receive queues supported is 8.
Signed-off-by: Vasanthy Kolluri <vkolluri@cisco.com>
Signed-off-by: Roopa Prabhu <roprabhu@cisco.com>
Signed-off-by: David Wang <dwang2@cisco.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
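Conceptually, each incoming flow is hashed over its headers and the hash selects one of the (at most 8) receive queues, so all packets of a flow stay in order on a single queue. A toy C sketch of that selection; pick_rx_queue and the mixing function are illustrative only, the NIC uses its own hardware hash:

    #include <stdint.h>

    #define NUM_RQS 8   /* at most 8 receive queues per the commit above */

    /* Mix the IP addresses and, when requested, the TCP ports, then map
     * the result onto one of the configured receive queues. */
    static unsigned int pick_rx_queue(uint32_t saddr, uint32_t daddr,
                                      uint16_t sport, uint16_t dport,
                                      int hash_tcp_ports)
    {
        uint32_t h = saddr ^ daddr;

        if (hash_tcp_ports)
            h ^= ((uint32_t)sport << 16) | dport;

        h ^= h >> 16;
        h *= 0x9e3779b1u;       /* cheap integer mixing step */
        return (h >> 24) % NUM_RQS;
    }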
2010-10-21  ibmveth: Free irq on error path  (Denis Kirjanov, 1 file changed, -2/+4)
Free irq on error path.
Signed-off-by: Denis Kirjanov <dkirjanov@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2010-10-21  ibmveth: Cleanup error handling inside ibmveth_open  (Denis Kirjanov, 1 file changed, -24/+20)
Remove duplicated code in one place.
Signed-off-by: Denis Kirjanov <dkirjanov@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2010-10-20  net: avoid RCU for NOCACHE dst  (Eric Dumazet, 3 files changed, -19/+33)
There is no point using RCU for dst entries we allocate for a very short time (used once). Change dst_release() to take DST_NOCACHE into account, but also change skb_dst_set_noref() to force a refcount increment for such dst entries.

This is a _huge_ gain, because we don't waste memory to store xx thousand dsts. Instead of queueing them to RCU, we can free them instantly. CPU caches can stay hot, re-using the same memory blocks to hold temporary dsts.

Note: the unneeded smp_mb__before_atomic_dec() in dst_release() is removed, since atomic_dec_return() implies a full memory barrier.

Stress test, 160,000,000 udp frames sent, IP route cache disabled (DDOS):

Before:
    real    0m38.091s
    user    0m13.189s
    sys     7m53.018s

After:
    real    0m29.946s
    user    0m12.157s
    sys     7m40.605s

For reference, with the IP route cache enabled:
    real    0m32.030s
    user    0m10.521s
    sys     8m15.243s

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2010-10-20  net: allocate tx queues in register_netdevice  (Tom Herbert, 2 files changed, -55/+55)
This patch introduces netif_alloc_netdev_queues, which is called from register_netdevice instead of alloc_netdev_mq. This makes TX queue allocation symmetric with RX allocation. Also, queue lock allocation is done in netdev_init_one_queue. Change set_real_num_tx_queues to fail if the requested number is < 1 or greater than the number of allocated queues.
Signed-off-by: Tom Herbert <therbert@google.com>
Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
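The new sanity check is simple: the number of queues actually in use must be at least 1 and can never exceed what was allocated at registration time. A small C sketch of that invariant; netdev_like and the function name are illustrative, not the kernel definitions:

    #include <errno.h>

    struct netdev_like {
        unsigned int num_tx_queues;       /* allocated at register time */
        unsigned int real_num_tx_queues;  /* currently in use */
    };

    /* Reject a count outside [1, num_tx_queues]; otherwise record it. */
    static int set_real_num_tx_queues(struct netdev_like *dev,
                                      unsigned int txq)
    {
        if (txq < 1 || txq > dev->num_tx_queues)
            return -EINVAL;

        dev->real_num_tx_queues = txq;
        return 0;
    }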
2010-10-20  net: cleanups in RX queue allocation  (Tom Herbert, 1 file changed, -19/+17)
Clean up RX queue allocation. In netif_set_real_num_rx_queues, return an error on an attempt to set zero queues or a number greater than the number of allocated queues. In netif_alloc_rx_queues, BUG_ON if queue_count is zero.
Signed-off-by: Tom Herbert <therbert@google.com>
Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2010-10-20  net: fail alloc_netdev_mq if queue count < 1  (Tom Herbert, 1 file changed, -0/+6)
In alloc_netdev_mq, fail if the requested queue_count is < 1.
Signed-off-by: Tom Herbert <therbert@google.com>
Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2010-10-20  phonet: remove the unused variable pn  (Changli Gao, 1 file changed, -1/+0)
Signed-off-by: Changli Gao <xiaosuo@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2010-10-20  netpoll: Revert napi_poll fix for bonding driver  (Neil Horman, 1 file changed, -8/+1)
In an earlier patch I modified napi_poll so that devices with IFF_MASTER polled the per_cpu list instead of the device list for napi. I did this because the bonding driver has no napi instances to poll; it instead expects to check the slave devices' napi instances, which napi_poll was unaware of.

Looking at this more closely, however, I now see this isn't strictly needed: the bond driver's poll_controller calls the slaves' poll_controller via netpoll_poll_dev, which recursively calls poll_napi on each slave, allowing those napi instances to get serviced.

The earlier patch isn't at all harmful, it's just not needed, so let's revert it to make the code cleaner. Sorry for the noise.

Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Reviewed-by: WANG Cong <amwang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2010-10-20  netpoll: Remove netpoll blocking from uninit path  (Neil Horman, 1 file changed, -3/+0)
Some recent testing in netpoll with bonding showed this backtrace:

------------[ cut here ]------------
kernel BUG at drivers/net/bonding/bonding.h:134!
invalid opcode: 0000 [#1] SMP
last sysfs file: /sys/devices/pci0000:00/0000:00:1d.2/usb7/devnum
CPU 0
Pid: 1876, comm: rmmod Not tainted 2.6.36-rc3+ #10 D26928/
RIP: 0010:[<ffffffffa0514ba4>]  [<ffffffffa0514ba4>] bond_uninit+0x6f4/0x7a0
RSP: 0018:ffff88003b1b5d58  EFLAGS: 00010296
RAX: ffff88003b9b6200 RBX: ffff8800373e8e00 RCX: 00000000000f4240
RDX: 00000000ffffffff RSI: 0000000000000286 RDI: 0000000000000286
RBP: ffff88003b1b5dc8 R08: 0000000000000000 R09: 00000001af7de920
R10: 0000000000000000 R11: ffff880002495e98 R12: ffff880037922700
R13: ffff880038c31000 R14: ffff880037922730 R15: 0000000000000286
FS:  00007f90e6d72700(0000) GS:ffff880002400000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 000000346f0d9ad0 CR3: 000000003b263000 CR4: 00000000000006f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process rmmod (pid: 1876, threadinfo ffff88003b1b4000, task ffff88003b36aa80)
Stack:
 00000000ffffffff ffff88003b1b5d7a ffff8800379221e8 ffff880037922000
<0> ffff88003b1b5dc8 ffffffff813eb5fb ffff88003b1b5da8 0000000031b177a3
<0> ffff88003b1b5da8 ffff880037922000 ffff88003b1b5e48 ffff88003b1b5e48
Call Trace:
 [<ffffffff813eb5fb>] ? rtmsg_ifinfo+0xcb/0xf0
 [<ffffffff813daad8>] rollback_registered_many+0x168/0x280
 [<ffffffff813dac09>] unregister_netdevice_many+0x19/0x80
 [<ffffffff813e97b3>] __rtnl_kill_links+0x63/0x90
 [<ffffffff813e980b>] __rtnl_link_unregister+0x2b/0x60
 [<ffffffff813e9bde>] rtnl_link_unregister+0x1e/0x30
 [<ffffffffa052124b>] bonding_exit+0x37/0x51 [bonding]
 [<ffffffff81098b2e>] sys_delete_module+0x19e/0x270
 [<ffffffff810bb2b2>] ? audit_syscall_entry+0x252/0x280
 [<ffffffff8100b0b2>] system_call_fastpath+0x16/0x1b
RIP  [<ffffffffa0514ba4>] bond_uninit+0x6f4/0x7a0 [bonding]
RSP <ffff88003b1b5d58>
---[ end trace 1395ad691cea24d1 ]---

It occurs because of my recent netpoll blocking patches, which I added to avoid recursive deadlock in the bonding driver. It relies on some per-cpu bits, but the shutdown path forces some rescheduling as we cancel workqueues for the driver and wait for some device refcounts. If, after the forced reschedule, we wind up on a different cpu, we trigger the bughalt in unblock_netpoll_tx.

The fix is to remove the netpoll block/unblock calls from bond_release_all. This is safe to do because bond_uninit, which is called via ndo_uninit in rollback_registered_many, doesn't occur until we send a NETDEV_UNREGISTER event, which triggers netconsole to remove us as a netpoll client, so we are guaranteed not to recurse into our own tx path here.

Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Reviewed-by: WANG Cong <amwang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2010-10-19  bnx2x: update version to 1.60.00-3  (Dmitry Kravkov, 1 file changed, -2/+2)
Signed-off-by: Dmitry Kravkov <dmitry@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2010-10-19  bnx2x: prevent false parity error in MSI-X memory of HC block  (Vladislav Zolotarov, 2 files changed, -2/+32)
Signed-off-by: Dmitry Kravkov <dmitry@broadcom.com>
Signed-off-by: Vladislav Zolotarov <vladz@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2010-10-19  bnx2x: fix possible deadlock in HC hw block  (Dmitry Kravkov, 1 file changed, -8/+29)
The possible deadlock (on 57710 devices only) would prevent the device from generating interrupts.
Signed-off-by: Dmitry Kravkov <dmitry@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2010-10-19  inet: RCU changes in inetdev_by_index()  (Eric Dumazet, 4 files changed, -22/+16)
Convert inetdev_by_index() to not increment the in_dev refcount. Callers hold RCU or RTNL and should not decrement the in_dev refcount.
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2010-10-19  net: avoid a dev refcount in ip_mc_find_dev()  (Eric Dumazet, 2 files changed, -3/+3)
We hold RTNL in ip_mc_find_dev(), so there is no need to touch the device refcount.
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2010-10-19  3c52x: remove IRQF_SAMPLE_RANDOM from legacy MCA drivers.  (Paul Gortmaker, 2 files changed, -2/+2)
If you are genuinely using one of these legacy MCA drivers, then you are tragically on hardware where you really don't have the extra CPU cycles to be wasting on this. In addition, it makes two fewer cases for people to inadvertently and blindly copy flags from without explicitly thinking about whether it makes sense -- see the addition to feature-removal.txt as per commit 9d9b8fb0e5ebf4b0398e579f6061d4451fea3242.
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2010-10-18  bonding: Re-enable netpoll over bonding  (Neil Horman, 1 file changed, -19/+10)
With the inclusion of the previous fixup patches, netpoll over bonding appears to work reliably under failover conditions. This reverts Gospo's previous commit c22d7ac844f1cb9c6a5fd20f89ebadc2feef891b and allows access again to the netpoll functionality in the bonding driver.
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2010-10-18  bonding: Fix netconsole to not deadlock on rmmod  (Neil Horman, 1 file changed, -1/+8)
Netconsole calls netpoll_cleanup on receipt of a NETDEV_UNREGISTER event. The notifier subsystem calls these event handlers with rtnl_lock held, which netpoll_cleanup also takes, resulting in deadlock. Fix this by calling the __netpoll_cleanup interior function instead, and fixing up the additional pointers.
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
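The general shape of the fix is the common "locked wrapper around an unlocked __helper" convention: a caller that already holds the lock must use the __ variant. A hedged pthread sketch of the pattern; cfg_lock, cleanup_client and __cleanup_client are illustrative names, not the netpoll code:

    #include <pthread.h>

    /* plays the role of rtnl_lock in this sketch */
    static pthread_mutex_t cfg_lock = PTHREAD_MUTEX_INITIALIZER;

    /* "__" variant (mirroring the kernel naming convention): the caller
     * already holds cfg_lock, e.g. a notifier callback invoked under it. */
    static void __cleanup_client(void)
    {
        /* tear down state protected by cfg_lock */
    }

    /* Public variant: takes the lock itself. Calling this while already
     * holding cfg_lock self-deadlocks -- analogous to netpoll_cleanup()
     * being called from a notifier that runs under rtnl_lock; the fix is
     * to call the __ variant from that context. */
    static void cleanup_client(void)
    {
        pthread_mutex_lock(&cfg_lock);
        __cleanup_client();
        pthread_mutex_unlock(&cfg_lock);
    }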
2010-10-18  bonding: Fix napi poll for bonding driver  (Neil Horman, 1 file changed, -1/+8)
Usually the netpoll path, when performing a napi poll, can get away with just polling all the napi instances of the configured device. That's not the case for the bonding driver, however, as the napi instances that may wind up getting flagged as needing polling after the poll_controller call don't belong to the bonded device but rather to the slave devices. Fix this by checking the device in question for the IFF_MASTER flag; if it is set, we know we need to check the full poll list for this cpu rather than just the device's napi instance list.
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2010-10-18  bonding: Fix deadlock in bonding driver resulting from internal locking when using netpoll  (Neil Horman, 3 files changed, -4/+80)
The monitoring paths in the bonding driver take write locks that are shared with the tx path. If netconsole is in use, these paths can call printk, which puts us in the netpoll tx path, which, if netconsole is attached to the bonding driver, results in deadlock (the xmit_lock guards are useless in netpoll_send_skb, as the monitor paths in the bonding driver don't claim the xmit_lock, nor should they).

The solution is to use a per-cpu flag internal to the driver to indicate when a cpu is holding the lock in a path that might recurse into the tx path for the driver via netconsole. By checking this flag on transmit, we can defer the sending of the netconsole frames until a later time using the retransmit feature of netpoll_send_skb that is triggered on the return code NETDEV_TX_BUSY.

I've tested this and am able to transmit via netconsole while causing failover conditions on the bond slave links.

Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
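A rough userspace analogue of that guard, with a thread-local flag standing in for the per-cpu flag and TX_BUSY standing in for NETDEV_TX_BUSY (all names here are illustrative, not the bonding code):

    #include <stdbool.h>

    /* per-thread stand-in for the driver's per-cpu "in locked monitor
     * path" flag (per-cpu in the kernel, thread-local here) */
    static __thread bool in_locked_monitor_path;

    enum tx_status { TX_OK, TX_BUSY };

    /* Monitoring path: set the flag while holding the locks that the
     * transmit path would also need. */
    static void monitor_slaves(void (*do_monitor)(void))
    {
        in_locked_monitor_path = true;
        /* ... take the bond's write locks here ... */
        do_monitor();                 /* may log -> netconsole -> xmit */
        /* ... release the locks ... */
        in_locked_monitor_path = false;
    }

    /* Transmit hook used by the netconsole-like path: if this CPU is
     * already inside the locked monitor path, report BUSY so the caller
     * retries the frame later instead of deadlocking on the same locks. */
    static enum tx_status netpoll_start_xmit(void)
    {
        if (in_locked_monitor_path)
            return TX_BUSY;           /* maps to NETDEV_TX_BUSY in the kernel */
        /* ... normal transmit ... */
        return TX_OK;
    }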
2010-10-18  bonding: Fix bonding drivers improper modification of netpoll structure  (Neil Horman, 3 files changed, -11/+19)
The bonding driver currently modifies the netpoll structure in its xmit path while sending frames from netpoll. This is racy, as other cpus can access the netpoll structure in parallel. Since the bonding driver points np->dev to a slave device, other cpus can inadvertently attempt to send data directly to slave devices, leading to improper locking with the bonding master, lost frames, and deadlocks. This patch fixes that up.

This patch also removes the real_dev pointer from the netpoll structure, as that data is really only used by bonding in the poll_controller, and we can emulate its behavior by checking each slave for IS_UP.

Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2010-10-18  e1000e: Fix for offline diag test failure at first call  (Carolyn Wyborny, 1 file changed, -11/+8)
Move the link test call later in the offline sequence, move the settings-restore block after it, and add another reset to ensure the hardware is in a known state afterwards.
Signed-off-by: Carolyn Wyborny <carolyn.wyborny@intel.com>
Acked-by: Bruce Allan <bruce.w.allan@intel.com>
Tested-by: Jeff Pieper <jeffrey.e.pieper@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2010-10-18  igbvf: Remove unneeded pm_qos* calls  (Greg Rose, 1 file changed, -5/+0)
Power Management Quality of Service is not supported or used by the VF driver.
Signed-off-by: Greg Rose <gregory.v.rose@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2010-10-18  igb: fix stats handling  (Eric Dumazet, 3 files changed, -45/+129)
There are currently some problems with igb:
- On 32bit arches, 64bit counters are maintained without proper synchronization between writers and readers.
- Stats are updated only every two seconds, as reported by Jesper (Jesper provided a patch for this).
- There is a potential problem between the worker thread and ethtool -S.

This patch uses u64_stats_sync and converts everything to be 64bit safe and SMP safe, even on 32bit arches. It integrates Jesper's idea of providing accurate stats at the time the user reads them.

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Tested-by: Emil Tantilov <emil.s.tantilov@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
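The core idea behind u64_stats_sync is a sequence counter: the single writer bumps a counter around its update, and readers retry if they saw the counter odd or changed, so a torn 64-bit read on a 32-bit machine is simply redone. A simplified, single-writer C sketch of that idea; stats64, stats_add and stats_read are illustrative, and the plain 64-bit fields are intentionally non-atomic to show what the retry loop protects against:

    #include <stdatomic.h>
    #include <stdint.h>

    struct stats64 {
        atomic_uint seq;     /* even = stable, odd = update in progress */
        uint64_t    packets; /* may tear on 32-bit; the seq retry hides it */
        uint64_t    bytes;
    };

    /* Writer side: exactly one writer per instance (per-queue in the
     * driver), so a plain pair of increments brackets the update. */
    static void stats_add(struct stats64 *s, uint64_t pkts, uint64_t bytes)
    {
        atomic_fetch_add(&s->seq, 1);   /* seq becomes odd */
        s->packets += pkts;
        s->bytes   += bytes;
        atomic_fetch_add(&s->seq, 1);   /* seq even again */
    }

    /* Reader side: snapshot, then retry if a writer was active or ran in
     * between, so the two 64-bit values are always mutually consistent. */
    static void stats_read(struct stats64 *s, uint64_t *pkts, uint64_t *bytes)
    {
        unsigned int start;

        do {
            do {
                start = atomic_load(&s->seq);
            } while (start & 1);        /* writer in progress, wait */
            *pkts  = s->packets;
            *bytes = s->bytes;
        } while (atomic_load(&s->seq) != start);
    }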
2010-10-18  netxen: mask correctable error  (Amit Salecha, 2 files changed, -2/+28)
HW workaround: disable logging of correctable errors for some NX3031-based adapters.
Signed-off-by: Amit Kumar Salecha <amit.salecha@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>