path: root/drivers/net/ethernet/intel

2012-10-29  igb: Enable auto-crossover during forced operation on 82580 and above.  (Matthew Vick, 2 files changed, -12/+21)

Newer devices supported by igb can support auto-crossover detection in forced operation modes. Enable this in the driver, rather than clobbering this functionality in forced operation.

Signed-off-by: Matthew Vick <matthew.vick@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

2012-10-29  igbvf: Check for error on dma_map_single call  (Greg Rose, 1 file changed, -0/+13)

Ignoring the return value from a call to the kernel dma_map API functions can cause data corruption and system instability. Check the return value and take appropriate action.

Signed-off-by: Greg Rose <gregory.v.rose@intel.com>
Tested-by: Sibai Li <sibai.li@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

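For illustration, the pattern being described relies on the kernel's dma_mapping_error() helper; the function and variable names below are invented for this sketch and are not the actual igbvf code:

#include <linux/dma-mapping.h>
#include <linux/skbuff.h>

/* Hypothetical helper: map an skb for transmit and report the failure to
 * the caller instead of handing an invalid DMA address to the hardware. */
static int example_map_tx_buffer(struct device *dev, struct sk_buff *skb,
                                 dma_addr_t *dma)
{
        *dma = dma_map_single(dev, skb->data, skb_headlen(skb),
                              DMA_TO_DEVICE);
        if (dma_mapping_error(dev, *dma))
                return -ENOMEM; /* caller drops the frame and counts the error */

        return 0;
}
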
2012-10-29ixgbevf: Do not forward LLDP type framesGreg Rose1-0/+5
The driver should not forward LLDP type frames. Inspect the ether type and do not send if it is an LLDP ethertype frame. Signed-off-by: Greg Rose <gregory.v.rose@intel.com> Tested-by: Sibai Li <sibai.li@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
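A minimal sketch of such an ethertype check (the 0x88CC LLDP ethertype constant and the helper name are assumptions for illustration, not taken from the ixgbevf source):

#include <linux/if_ether.h>
#include <linux/skbuff.h>

#define EXAMPLE_ETH_P_LLDP 0x88CC      /* LLDP ethertype */

/* Return true if the frame queued for transmit is an LLDP frame and
 * should be dropped rather than handed to the hardware. */
static bool example_is_lldp(const struct sk_buff *skb)
{
        const struct ethhdr *eth = (const struct ethhdr *)skb->data;

        return eth->h_proto == cpu_to_be16(EXAMPLE_ETH_P_LLDP);
}
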
2012-10-29  ixgbe: reduce PTP rx path overhead  (Jiri Benc, 1 file changed, -2/+4)

HW timestamping code caused a performance regression in the ixgbe driver when timestamping is not enabled. The culprit is the IXGBE_READ_REG call in the Rx path, which is executed for every received skb. This call is not needed when timestamping is disabled or for non-PTP packets.

netperf results:
The ixgbe side of the connection was acting as a server; the netperf command line on the other side was:

    netperf -H 192.168.1.23 -T0,0 -t UDP_STREAM -l 20

The values below are throughput as reported by netperf (local/remote), for 3 runs, with timestamping not enabled.

3.7.0-rc1+ with CONFIG_IXGBE_PTP off:
    5373.83 / 3329.32
    5721.88 / 3033.89
    5653.42 / 3112.38

3.7.0-rc1+ with CONFIG_IXGBE_PTP on:
    5233.64 / 1226.85
    5448.67 / 1039.32
    5421.36 / 1095.66

Patched 3.7.0-rc1+ with CONFIG_IXGBE_PTP on:
    5594.72 / 2942.53
    5428.95 / 3110.16
    5343.56 / 3200.48

Reported-by: Jesper Brouer <jbrouer@redhat.com>
Signed-off-by: Jiri Benc <jbenc@redhat.com>
Signed-off-by: Andy Gospodarek <gospo@redhat.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

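The shape of the fix, as described, is to gate the register read behind cheap software checks. A sketch follows; the adapter/descriptor structures and flag names are invented for illustration, and only the ordering of the checks reflects the change described above:

#include <linux/types.h>
#include <linux/skbuff.h>

struct example_adapter {
        bool rx_hwtstamp_enabled;               /* set via SIOCSHWTSTAMP */
};

struct example_rx_desc {
        u32 status;
};

#define EXAMPLE_RXD_STAT_TS (1u << 16)          /* "timestamped packet" bit */

static void example_rx_hwtstamp(struct example_adapter *adapter,
                                const struct example_rx_desc *rx_desc,
                                struct sk_buff *skb)
{
        /* Common case: timestamping off, or not a PTP packet - return
         * before any MMIO (IXGBE_READ_REG-style) access happens. */
        if (!adapter->rx_hwtstamp_enabled)
                return;
        if (!(rx_desc->status & EXAMPLE_RXD_STAT_TS))
                return;

        /* Only here would the driver read the Rx timestamp registers
         * and attach the timestamp to the skb. */
}
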
2012-10-29  ixgbe: add/update descriptor maps in comments  (Josh Hay, 1 file changed, -6/+55)

Adds/updates ASCII descriptor maps for 82598 and 82599 Tx/Rx descriptors. Current descriptor maps were out of date for 82598 and incorrect for 82599.

Signed-off-by: Josh Hay <joshua.a.hay@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

2012-10-29  ixgbe: Do not decrement budget in ixgbe_clean_rx_irq  (Alexander Duyck, 1 file changed, -5/+4)

This change makes it so that we compare the total_rx_packets cleaned against budget instead of decrementing budget. The advantage to this approach is that budget can now be const and we only end up modifying total_rx_packets instead of modifying both it and budget.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

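A sketch of the control flow described, with illustrative names and the descriptor-processing details elided:

#include <linux/kernel.h>

struct example_q_vector;        /* opaque here; illustrative only */

static int example_clean_rx_irq(struct example_q_vector *q_vector,
                                unsigned int descs_ready, const int budget)
{
        unsigned int total_rx_packets = 0;

        /* budget stays const; the comparison replaces "budget--". */
        while (likely(total_rx_packets < budget)) {
                if (!descs_ready--)
                        break;          /* no more completed descriptors */
                /* ... process one completed Rx descriptor ... */
                total_rx_packets++;
        }

        /* Caller treats (total_rx_packets < budget) as "all work done". */
        return total_rx_packets;
}
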
2012-10-29  ixgbe: Return success or failure on VF MAC filter set  (Greg Rose, 1 file changed, -1/+1)

When setting a MAC filter for the VF the function should return a success or failure code, not the index of the new filter. Returning the index causes spurious NACKs to be sent to the VF driver.

Signed-off-by: Greg Rose <gregory.v.rose@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Tested-by: Sibai Li <sibai.li@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

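The fix boils down to not passing the low-level helper's return value straight through. A one-function illustration (hypothetical helper name; the real function differs):

/* filter_index is what the add-filter helper returned: the RAR slot used
 * (>= 0) on success, or a negative errno on failure. Returning it directly
 * makes every successful non-zero index look like an error to the mailbox
 * code, so normalize it first. */
static int example_normalize_filter_rc(int filter_index)
{
        return filter_index < 0 ? filter_index : 0;
}
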
2012-10-29  ixgbe: clean up the condition for turning on/off the laser  (Emil Tantilov, 1 file changed, -23/+8)

This patch simplifies the check for calling the en/disable_tx_laser() function pointer. The pointer is only set on parts that can use it.

Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

2012-10-29  net, ixgbe: handle link local multicast addresses in SR-IOV mode  (John Fastabend, 2 files changed, -1/+4)

In SR-IOV mode the PF driver acts as the uplink port and is used to send control packets, e.g. lldpad, stp, etc.

      eth0.1   eth0.2   eth0
        VF       VF      PF
        |        |       |   <-- stand-in for uplink
        |        |       |
      --------------------------
      |     Embedded Switch    |
      --------------------------
                  |
                 MAC   <-- uplink

But the embedded switch is set up to forward multicast addresses to all interfaces, both VFs and PF, and onto the physical link. This results in reserved MAC addresses used by control protocols being forwarded over the switch onto the VF. In the LLDP case the PF sends an LLDPDU and it is currently being forwarded to all the VFs, which then see the PF as a peer. This is incorrect. This patch adds the multicast addresses to the RAR table in the hardware to prevent this behavior.

Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Tested-by: Sibai Li <sibai.li@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

2012-10-29  ixgbe: Fix return value from macvlan filter function  (Greg Rose, 1 file changed, -1/+2)

The function to set the macvlan filter should return success or failure instead of the index of the filter. The message processing function was misinterpreting the index as a non-zero return code indicating failure and NACKing MAC filter set messages that actually succeeded.

Signed-off-by: Greg Rose <gregory.v.rose@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

2012-10-29  ixgbe: Add support for pipeline reset  (Don Skidmore, 3 files changed, -32/+153)

Calling the ixgbe_reset_pipeline_82599 function will ensure a full pipeline reset on all 82599 devices. This is necessary to avoid possible link issues. Since this patch accomplishes this by modifying AUTOC.LMS, we need to wrap all AUTOC writes when LESM is enabled.

v2: fix LMS behaviour based on feedback by Martin Josefsson

CC: Martin Josefsson <gandalf@mjufs.se>
Signed-off-by: Don Skidmore <donald.c.skidmore@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

2012-10-23  ixgbevf: Update version string  (Greg Rose, 1 file changed, -1/+1)

Signed-off-by: Greg Rose <gregory.v.rose@intel.com>
Tested-by: Sibai Li <sibai.li@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

2012-10-23  ixgbevf: fix softirq-safe to unsafe splat on internal mbx_lock  (John Fastabend, 1 file changed, -20/+20)

The lockdep splat below identifies a case where an irq-safe to irq-unsafe lock order is detected. Resolved by making mbx_lock BH-safe.

======================================================
[ INFO: SOFTIRQ-safe -> SOFTIRQ-unsafe lock order detected ]
3.6.0-rc5jk-net-next+ #119 Not tainted
------------------------------------------------------
ip/2608 [HC0[0]:SC0[2]:HE1:SE0] is trying to acquire:
 (&(&adapter->mbx_lock)->rlock){+.+...}, at: [<ffffffffa008114e>] ixgbevf_set_rx_mode+0x36/0xd2 [ixgbevf]

and this task is already holding:
 (_xmit_ETHER){+.....}, at: [<ffffffff814097c8>] dev_set_rx_mode+0x1e/0x33
which would create a new lock dependency:
 (_xmit_ETHER){+.....} -> (&(&adapter->mbx_lock)->rlock){+.+...}

but this new dependency connects a SOFTIRQ-irq-safe lock:
 (&(&mc->mca_lock)->rlock){+.-...}
... which became SOFTIRQ-irq-safe at:
  [<ffffffff81092ee5>] __lock_acquire+0x2f2/0xdf3
  [<ffffffff81093b11>] lock_acquire+0x12b/0x158
  [<ffffffff814bdbcd>] _raw_spin_lock_bh+0x4a/0x7d
  [<ffffffffa011a740>] mld_ifc_timer_expire+0x1b2/0x282 [ipv6]
  [<ffffffff81054580>] run_timer_softirq+0x2a2/0x3ee
  [<ffffffff8104cc42>] __do_softirq+0x161/0x2b9
  [<ffffffff814c6a7c>] call_softirq+0x1c/0x30
  [<ffffffff81011bc7>] do_softirq+0x4b/0xa3
  [<ffffffff8104c8d5>] irq_exit+0x53/0xd7
  [<ffffffff814c734d>] do_IRQ+0x9d/0xb4
  [<ffffffff814be56f>] ret_from_intr+0x0/0x1a
  [<ffffffff813de21c>] cpuidle_enter+0x12/0x14
  [<ffffffff813de235>] cpuidle_enter_state+0x17/0x3f
  [<ffffffff813deb6c>] cpuidle_idle_call+0x140/0x21c
  [<ffffffff8101764c>] cpu_idle+0x79/0xcd
  [<ffffffff814a59f5>] rest_init+0x149/0x150
  [<ffffffff81ca2cbc>] start_kernel+0x37c/0x389
  [<ffffffff81ca22dd>] x86_64_start_reservations+0xb8/0xbd
  [<ffffffff81ca23e3>] x86_64_start_kernel+0x101/0x110

to a SOFTIRQ-irq-unsafe lock:
 (&(&adapter->mbx_lock)->rlock){+.+...}
... which became SOFTIRQ-irq-unsafe at:
...
  [<ffffffff81092f59>] __lock_acquire+0x366/0xdf3
  [<ffffffff81093b11>] lock_acquire+0x12b/0x158
  [<ffffffff814bd862>] _raw_spin_lock+0x45/0x7a
  [<ffffffffa0080fde>] ixgbevf_negotiate_api+0x3d/0x6d [ixgbevf]
  [<ffffffffa008404b>] ixgbevf_open+0x6c/0x43e [ixgbevf]
  [<ffffffff8140b2c1>] __dev_open+0xa0/0xe6
  [<ffffffff814099b6>] __dev_change_flags+0xbe/0x142
  [<ffffffff8140b1eb>] dev_change_flags+0x21/0x57
  [<ffffffff8141a523>] do_setlink+0x2e2/0x7f4
  [<ffffffff8141ad8c>] rtnl_newlink+0x277/0x4bb
  [<ffffffff81419c08>] rtnetlink_rcv_msg+0x236/0x253
  [<ffffffff8142f92d>] netlink_rcv_skb+0x43/0x94
  [<ffffffff814199cb>] rtnetlink_rcv+0x26/0x2d
  [<ffffffff8142f6dc>] netlink_unicast+0xee/0x174
  [<ffffffff8142ff12>] netlink_sendmsg+0x26a/0x288
  [<ffffffff813f5a0d>] __sock_sendmsg_nosec+0x58/0x61
  [<ffffffff813f7d57>] __sock_sendmsg+0x3d/0x48
  [<ffffffff813f7ed9>] sock_sendmsg+0x6e/0x87
  [<ffffffff813f93d4>] __sys_sendmsg+0x206/0x288
  [<ffffffff813f95ce>] sys_sendmsg+0x42/0x60
  [<ffffffff814c57a9>] system_call_fastpath+0x16/0x1b

other info that might help us debug this:

Chain exists of:
  &(&mc->mca_lock)->rlock --> _xmit_ETHER --> &(&adapter->mbx_lock)->rlock

Possible interrupt unsafe locking scenario:

      CPU0                    CPU1
      ----                    ----
 lock(&(&adapter->mbx_lock)->rlock);
                              local_irq_disable();
                              lock(&(&mc->mca_lock)->rlock);
                              lock(_xmit_ETHER);
 <Interrupt>
   lock(&(&mc->mca_lock)->rlock);

*** DEADLOCK ***

Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Acked-by: Greg Rose <gregory.v.rose@intel.com>
Tested-by: Sibai Li <sibai.li@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

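A minimal sketch of the locking change being described, with an illustrative lock and function name:

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(example_mbx_lock);

static void example_mbx_request(void)
{
        /* BH-safe variants: softirq context can no longer interleave with
         * this lock in an unsafe order. */
        spin_lock_bh(&example_mbx_lock);
        /* ... mailbox exchange with the PF ... */
        spin_unlock_bh(&example_mbx_lock);
}
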
2012-10-23  ixgbevf: Check for error on dma_map_single call  (Greg Rose, 1 file changed, -1/+7)

Ignoring the return value from a call to the kernel dma_map API functions can cause data corruption and system instability. Check the return value and take appropriate action.

Signed-off-by: Greg Rose <gregory.v.rose@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Tested-by: Sibai Li <sibai.li@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

2012-10-23  ixgbevf: make netif_napi_add and netif_napi_del symmetric  (John Fastabend, 1 file changed, -7/+2)

ixgbevf_alloc_q_vectors() calls netif_napi_add for each q_vector, where the number of q_vectors is determined by the number of MSI-X vectors. This makes perfect sense. However on cleanup, when ixgbevf_free_q_vectors() is called and netif_napi_del should be called once for each q_vector, there is some extra logic that adds a dependency on the RX queues. This patch makes the add/del operations symmetric by removing the RX queues dependency. Without this, if free_netdev() is called we see the general protection fault below in netif_napi_del when list_del_init() is called.

# addr2line -e ./vmlinux ffffffff8140810c
net-next/include/linux/list.h:88

general protection fault: 0000 [#1] SMP
Modules linked in: bonding ixgbevf ixgbe(-) mdio libfc scsi_transport_fc scsi_tgt 8021q garp stp llc cpufreq_ondemand acpi_cpufreq freq_table mperf ipv6 uinput coretemp lpc_ich i2c_i801 shpchp hwmon i2c_core serio_raw crc32c_intel mfd_core joydev pcspkr microcode ioatdma igb dca pata_acpi ata_generic usb_storage pata_jmicron [last unloaded: bonding]
CPU 10
Pid: 4174, comm: rmmod Tainted: G W 3.6.0-rc3jk-net-next+ #104 Supermicro X8DTN/X8DTN
RIP: 0010:[<ffffffff8140810c>] [<ffffffff8140810c>] netif_napi_del+0x24/0x87
RSP: 0018:ffff88027f5e9b48 EFLAGS: 00010293
RAX: ffff8806224b4768 RBX: ffff8806224b46e8 RCX: 6b6b6b6b6b6b6b6b
RDX: 6b6b6b6b6b6b6b6b RSI: ffffffff810bf6c5 RDI: ffff8806224b46e8
RBP: ffff88027f5e9b58 R08: ffff88033200b180 R09: ffff88027f5e98a8
R10: ffff88033320b000 R11: ffff88027f5e9ae8 R12: 6b6b6b6b6b6b6aeb
R13: ffff8806221d11c0 R14: 0000000000000000 R15: ffff88027f5e9cf8
FS: 00007f5e58b9b700(0000) GS:ffff880333200000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 00000000010ef2b8 CR3: 0000000281fff000 CR4: 00000000000007e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process rmmod (pid: 4174, threadinfo ffff88027f5e8000, task ffff88032f888000)
Stack:
 ffff8806221d1160 6b6b6b6b6b6b6aeb ffff88027f5e9b88 ffffffff81408e46
 ffff8806221d1160 ffff8806221d1160 ffff8806221d1ae0 ffff8806221d5668
 ffff88027f5e9bb8 ffffffffa009153c ffffffffa0092a30 ffff8806221d5700
Call Trace:
 [<ffffffff81408e46>] free_netdev+0x64/0xd7
 [<ffffffffa009153c>] ixgbevf_remove+0xa6/0xbc [ixgbevf]
 [<ffffffff8127a7a1>] pci_device_remove+0x2d/0x51
 [<ffffffff8131f503>] __device_release_driver+0x6c/0xc2
 [<ffffffff8131f640>] device_release_driver+0x25/0x32
 [<ffffffff8131e821>] bus_remove_device+0x148/0x15d
 [<ffffffff8131cb6b>] device_del+0x130/0x1a4
 [<ffffffff8131cc2a>] device_unregister+0x4b/0x57
 [<ffffffff81275c27>] pci_stop_bus_device+0x63/0x85
 [...]

Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Acked-by: Greg Rose <gregory.v.rose@intel.com>
Acked-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Sibai Li <sibai.li@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

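A sketch of the symmetry being described: every q_vector that gets a netif_napi_add() gets exactly one matching netif_napi_del(), with no extra dependency on how many Rx rings exist. All structure and function names are invented for the sketch, and the four-argument netif_napi_add() of that kernel generation (with an explicit weight) is assumed:

#include <linux/netdevice.h>

#define EXAMPLE_MAX_Q_VECTORS 8

struct example_q_vector {
        struct napi_struct napi;
};

struct example_adapter {
        struct net_device *netdev;
        int num_q_vectors;
        struct example_q_vector *q_vector[EXAMPLE_MAX_Q_VECTORS]; /* allocated elsewhere */
};

static int example_poll(struct napi_struct *napi, int budget)
{
        return 0;       /* stub */
}

static void example_register_napi(struct example_adapter *adapter)
{
        int v;

        for (v = 0; v < adapter->num_q_vectors; v++)
                netif_napi_add(adapter->netdev, &adapter->q_vector[v]->napi,
                               example_poll, 64);
}

static void example_unregister_napi(struct example_adapter *adapter)
{
        int v;

        /* Mirror the loop above exactly: one del per add. */
        for (v = 0; v < adapter->num_q_vectors; v++)
                netif_napi_del(&adapter->q_vector[v]->napi);
}
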
2012-10-23  igb: Update version  (Carolyn Wyborny, 1 file changed, -1/+1)

This patch updates the igb driver version to 4.0.17.

Signed-off-by: Carolyn Wyborny <carolyn.wyborny@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

2012-10-23  igb: Update get cable length function for i210/i211  (Carolyn Wyborny, 1 file changed, -0/+20)

The initial implementation of the get cable length function for i210 did not work properly. This patch fixes that problem for i210/i211 devices.

Signed-off-by: Carolyn Wyborny <carolyn.wyborny@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

2012-10-23  e1000e: Minimum packet size must be 17 bytes  (Tushar Dave, 1 file changed, -0/+11)

This is a HW requirement. Although a buffer as short as 1 byte is allowed, the total length of the packet, before padding and CRC insertion, must be at least 17 bytes. So pad all small packets manually up to 17 bytes before delivering them to the HW.

Signed-off-by: Tushar Dave <tushar.n.dave@intel.com>
Tested-by: Jeff Pieper <jeffrey.e.pieper@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

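A sketch of this kind of manual padding on the transmit path; the helper name and constant are illustrative, and the real e1000e code may differ in the details:

#include <linux/skbuff.h>

#define EXAMPLE_MIN_TX_LEN 17

/* Pad undersized frames before handing them to the hardware. Returns
 * false if the skb had to be dropped because padding failed. */
static bool example_pad_small_packet(struct sk_buff *skb)
{
        unsigned int pad;

        if (likely(skb->len >= EXAMPLE_MIN_TX_LEN))
                return true;

        pad = EXAMPLE_MIN_TX_LEN - skb->len;
        if (skb_pad(skb, pad))
                return false;   /* allocation failed; skb_pad freed the skb */

        /* skb_pad() only zeroes the tailroom; len and tail still need fixing. */
        __skb_put(skb, pad);
        return true;
}
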
2012-10-22  ixgbe: Fix possible memory leak in ixgbe_set_ringparam  (Alexander Duyck, 1 file changed, -52/+50)

We were not correctly freeing the temporary rings on error in ixgbe_set_ringparam. In order to correct this I am unwinding a number of changes that were made, in order to get things back to the original working form with modification for the current ring layouts. This approach has multiple advantages, including a smaller memory footprint and the fact that the interface is stopped while we are allocating the rings, meaning that there is less potential for some sort of memory corruption on the ring. The only disadvantage I see with this approach is that on an Rx allocation failure we will report an error and only update the Tx rings. However the adapter should be fully functional in this state and the likelihood of such an error is very low. In addition it is not unreasonable to expect the user to need to recheck the ring configuration should they experience an error setting the ring sizes.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

2012-10-22  ixgbe: Add function ixgbe_reset_pipeline_82599  (Don Skidmore, 2 files changed, -0/+45)

This patch adds a function that forces a full pipeline reset. This function will be used in following patches to completely reset the PHY during resets.

Signed-off-by: Don Skidmore <donald.c.skidmore@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

2012-10-22  ixgbe: Drop unnecessary addition from ixgbe_set_rx_buffer_len  (Alexander Duyck, 1 file changed, -3/+0)

We still had some code floating around from the old single buffer receive path. As a result we were adding VLAN_HLEN to max_frame although the resultant value was never used. Since that is the case we can drop this from the function.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

2012-10-22  ixgbe: Correcting small packet padding  (Tushar Dave, 1 file changed, -0/+1)

The driver pads the skb up to 17 bytes because of the HW requirement. However, that code messed up the skb tail pointer after padding. This patch sets skb->tail correctly.

Signed-off-by: Tushar Dave <tushar.n.dave@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

2012-10-22  ixgbe: using is_zero_ether_addr() to simplify the code  (Wei Yongjun, 1 file changed, -2/+1)

Using is_zero_ether_addr() to simplify the code.

Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

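For reference, the helper (from <linux/etherdevice.h>) replaces an open-coded all-zeroes comparison; a trivial illustration with an invented wrapper name:

#include <linux/etherdevice.h>

/* Before, something like:
 *     if (!(addr[0] | addr[1] | addr[2] | addr[3] | addr[4] | addr[5]))
 * After: */
static bool example_mac_is_unset(const u8 *addr)
{
        return is_zero_ether_addr(addr);
}
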
2012-10-22  ixgbe: (PTP) refactor init, cyclecounter and reset  (Jacob Keller, 3 files changed, -73/+66)

This patch modifies when and where PTP registers and data are set. Previously a work-around was used inside cyclecounter_start in order to reset some of the time registers. This patch creates a new ixgbe_ptp_reset specifically for this purpose. The cyclecounter configuration has been trimmed down to only modify what is necessary. Due to hardware conditions after probe and before open, PTP init has now been moved into the ixgbe_open call. This allows the ptp device name in the sysfs to be the ethernet device name instead of the MAC address. The cyclecounter check flag is renamed to PTP_ENABLED and is used to prevent PTP init from happening when PTP has not been enabled.

CC: Richard Cochran <richardcochran@gmail.com>
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

2012-10-22  ixgbe: add WOL support for new subdevice id  (Emil Tantilov, 2 files changed, -0/+2)

This patch adds a subdevice id for a new 82599 device. The define is needed to allow enabling WOL support.

Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

2012-10-22  ixgbevf: Add VF DCB + SR-IOV support  (Alexander Duyck, 6 files changed, -8/+232)

This change adds support for DCB and SR-IOV from the VF. With this change in place the VF will correctly use a traffic class other than 0 in the case that the PF is configured with the default user priority belonging to a traffic class other than 0.

Cc: Greg Rose <gregory.v.rose@intel.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Sibai Li <sibai.li@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

2012-10-22  ixgbe: Enable support for VF API version 1.1 in the PF.  (Alexander Duyck, 1 file changed, -8/+21)

This change switches on the last few bits needed to enable version 1.1 VF support in the PF.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Tested-by: Robert Garrett <RobertX.Garrett@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

2012-10-22  ixgbe: Add support for GET_QUEUES message to get DCB configuration  (Alexander Duyck, 2 files changed, -0/+52)

This patch addresses several issues in regards to the combination of DCB and SR-IOV. Specifically it allows us to send information to the VF on which queues it should be using.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

2012-10-22  ixgbe: Add support for tracking the default user priority to SR-IOV  (Alexander Duyck, 3 files changed, -20/+55)

It is necessary to track the default user priority in the PF so that we can force it upon the VFs. The motivation behind this is to keep the VFs from getting access to user priorities meant for things like storage.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

2012-10-22  ixgbe: Add support for IPv6 and UDP to ixgbe_get_headlen  (Alexander Duyck, 1 file changed, -1/+14)

This change adds support for IPv6 and UDP to ixgbe_get_headlen. The advantage to this is that we can now handle IPv4/UDP, IPv6/TCP, and IPv6/UDP with a single memcpy instead of having to do them in multiple pskb_may_pull calls. A quick bit of testing shows that we increase throughput for a single session of netperf from 8800 Mbps to about 9300 Mbps in the case of IPv6/TCP. As such overall IPv6 performance should improve with this change.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Stephen Ko <stephen.s.ko@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

2012-10-19  igb: Split igb_update_dca into separate Tx and Rx functions  (Alexander Duyck, 2 files changed, -31/+52)

This change makes it so that igb_update_dca is broken into two halves, one for Rx and one for Tx. The advantage to this is primarily readability. In addition I am enabling relaxed ordering for reads from hardware since this is supported on all of the igb parts.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

2012-10-19  igb: Move the calls to set the Tx and Rx queues into igb_open  (Alexander Duyck, 1 file changed, -13/+18)

This change helps to address locking issues seen with netif_set_real_num_tx_queues and netif_set_real_num_rx_queues when used in the igb_set_interrupt_capability function. To resolve these locking issues I have moved the two function calls into __igb_open so that they can be called while the RTNL lock is held. An added advantage to this is that the number of queues is not updated until the last possible moment, so if there are any issues in allocating MSI-X interrupts or resources for the rings we have time to change the values prior to updating the netdev.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

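A sketch of the call pattern being moved into the open path, where the RTNL lock is already held; the wrapper name is illustrative:

#include <linux/netdevice.h>

/* Called from an __igb_open()-style context, i.e. under RTNL, after the
 * MSI-X/interrupt setup has settled on the final queue counts. */
static int example_set_queue_counts(struct net_device *netdev,
                                    unsigned int num_tx, unsigned int num_rx)
{
        int err;

        err = netif_set_real_num_tx_queues(netdev, num_tx);
        if (err)
                return err;

        return netif_set_real_num_rx_queues(netdev, num_rx);
}
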
2012-10-19  igb: Combine q_vector and ring allocation into a single function  (Alexander Duyck, 2 files changed, -202/+215)

This change combines the allocation of q_vectors and rings into a single function. The advantage of this is that we are guaranteed we will avoid overlap in the L1 cache sets.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

2012-10-19  igb: Lock buffer size at 2K even on systems with larger pages  (Alexander Duyck, 3 files changed, -15/+23)

This change locks us in at 2K buffers even on a system that supports larger frames. The reason for this change is to make better use of pages and to reduce the overall truesize of frames generated by igb.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

2012-10-19  igb: Move rx_buffer related code in Rx cleanup path into separate function  (Alexander Duyck, 1 file changed, -86/+120)

In order to try and isolate things a bit further I am moving the code related to retrieving data from the rx_buffer_info structure into a separate function.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

2012-10-19  igb: Map entire page and sync half instead of mapping and unmapping half pages  (Alexander Duyck, 3 files changed, -57/+181)

This change makes it so that we map the entire page and just sync half of it for the device at a time. The advantage to this approach is that we can avoid the locking on map/unmap seen in many IOMMU implementations.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

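A sketch of the map-once/sync-half technique using the standard DMA API; the function is illustrative, and the real igb buffer bookkeeping is more involved:

#include <linux/dma-mapping.h>

static int example_map_rx_page(struct device *dev, struct page *page,
                               dma_addr_t *dma)
{
        /* Map the whole page once... */
        *dma = dma_map_page(dev, page, 0, PAGE_SIZE, DMA_FROM_DEVICE);
        if (dma_mapping_error(dev, *dma))
                return -ENOMEM;

        /* ...but hand only half of it to the device at a time, so the other
         * half can be synced for the CPU and reused without a map/unmap. */
        dma_sync_single_range_for_device(dev, *dma, 0, PAGE_SIZE / 2,
                                         DMA_FROM_DEVICE);
        return 0;
}
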
2012-10-19  igb: Combine post-processing of skb into a single function  (Alexander Duyck, 1 file changed, -25/+44)

This change is meant to just clean-up a number of function calls that were made at the end of the Rx clean-up path by combining them into a single function call.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

2012-10-19  igb: Do not use header split, instead receive all frames into a single buffer  (Alexander Duyck, 3 files changed, -152/+312)

This change makes it so that we no longer use header split. The idea is to reduce partial cache line writes by hardware when handling frames larger than the header size. We can compensate for the extra overhead of having to memcpy the header buffer by avoiding the cache misses seen by leaving a full skb allocated and sitting on the ring.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

2012-10-19  igb: Split Rx timestamping into two separate functions  (Alexander Duyck, 2 files changed, -19/+53)

In order to support page based receive we will need to split up the two different types of timestamping into two separate functions. The first one will handle legacy timestamps with the value in the register, and the new one will handle timestamps in the Rx buffer itself.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Acked-by: Matthew Vick <matthew.vick@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

2012-10-19  igb: Correcting and improving small packet check and padding  (Tushar Dave, 1 file changed, -2/+3)

The current implementation messes up the tail pointer. This patch sets skb->tail correctly. Also, the small packet check and padding is optimized by using unlikely and calling skb_pad directly.

Signed-off-by: Tushar Dave <tushar.n.dave@intel.com>
Tested-by: Jeff Pieper <jeffrey.e.pieper@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

2012-10-19  ixgbe: Add mailbox API version negotiation support to ixgbe PF  (Alexander Duyck, 3 files changed, -7/+48)

This change allows us to add a mailbox versioning API. This will allow us to determine the features supported by the VFs from the PF. For example we will be implementing a version 1.1 API for the VF that will indicate that it can support us enabling Jumbo frames as the VF will support buffer chaining.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Tested-by: Robert Garrett <RobertX.Garrett@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

2012-10-19  ixgbe: Move message handling routines into their own functions  (Alexander Duyck, 1 file changed, -93/+130)

Instead of trying to maintain one large monolithic function that handles most of the different messages from the VF it makes sense to break the message handling function up so that we can just go through one switch statement and call the correct routine for a given message.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

2012-10-19  ixgbe: Enable jumbo frames support w/ SR-IOV  (Alexander Duyck, 3 files changed, -31/+93)

This change makes it so that we can have limited support for jumbo frames when SR-IOV is enabled. In order to accomplish this it is necessary to disable all VFs when the PF has jumbo frames enabled. If the VFs then request the same maximum frame size as the PF they will be re-enabled. A follow on patch will add a means of identifying when a VF can support spanning buffers and does not need to be worried about the actual supported max frame size.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Tested-by: Robert Garrett <robertx.e.garrett@intel.com>
Tested-by: Sibai Li <Sibai.li@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

2012-10-19  ixgbe: Initialize q_vector cpu and affinity masks correctly  (Alexander Duyck, 1 file changed, -2/+5)

When enabling DCB the rings belonging to a q_vector on CPU 0 were not reinitializing their DCA registers. Upon closer inspection the issue was that the q_vector CPU variable was left at 0, resulting in the driver not updating the DCA registers. In order to guarantee the DCA registers will be updated I am adding a couple-line change so that we initialize the CPU variable to -1, which will force a DCA update the first time an interrupt fires on that q_vector. In addition we were setting the CPU affinity hint to all CPUs when we were not specifying a CPU. Instead we should leave it as all zeros to avoid any possible confusion about the fact that we shouldn't be giving a hint.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

2012-10-10  e1000e: Change wthresh to 1 to avoid possible Tx stalls  (Hiroaki SHIMODA, 2 files changed, -4/+4)

This patch originated from Hiroaki SHIMODA but has been modified by Intel with some minor cleanups and additional commit log text.

Denys Fedoryshchenko and others reported Tx stalls on e1000e with BQL enabled. The issue was root caused to hardware delays. They were introduced because some of the e1000e hardware, with transmit writeback bursting enabled, waits until the driver does an explicit flush OR there are WTHRESH descriptors to write back. Sometimes the delays in question were on the order of seconds, causing visible lag for ssh sessions and unacceptable tx completion latency, especially for BQL enabled kernels.

To avoid possible Tx stalls, change WTHRESH back to 1. The current plan is to investigate a method for re-enabling WTHRESH while not harming BQL, but those patches will be later for net-next if they work.

Please enqueue for stable since v3.3, as this bug was introduced in:

commit 3f0cfa3bc11e7f00c9994e0f469cbc0e7da7b00c
Author: Tom Herbert <therbert@google.com>
Date:   Mon Nov 28 16:33:16 2011 +0000

    e1000e: Support for byte queue limits

    Changes to e1000e to use byte queue limits.

Reported-by: Denys Fedoryshchenko <denys@visp.net.lb>
Tested-by: Denys Fedoryshchenko <denys@visp.net.lb>
Signed-off-by: Hiroaki SHIMODA <shimoda.hiroaki@gmail.com>
CC: eric.dumazet@gmail.com
CC: therbert@google.com
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2012-10-09  e1000e: add device IDs for i218  (Bruce Allan, 2 files changed, -0/+4)

i218 is the next-generation LOM that will be available on systems with the Lynx Point LP Platform Controller Hub (PCH) chipset from Intel. This patch provides the initial support of those devices.

Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

2012-10-09  ixgbe/ixgbevf: Limit maximum jumbo frame size to 9.5K to avoid Tx hangs  (Alexander Duyck, 2 files changed, -2/+2)

This change limits the PF/VF driver to a 9.5K max jumbo frame size in order to prevent a possible Tx hang in the adapter when sending frames between pools. All of the parts in ixgbe support a maximum frame of 15.5K for standard traffic, however with SR-IOV or DCB enabled they should be limiting the MTU size to 9.5K. Instead of adding extra checks which would have to change the MTU when we go into or out of these modes, it is preferred to just use a standard 9.5K MTU limit for all modes so that this extra overhead can be avoided.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Tested-by: Sibai Li <sibai.li@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

2012-10-09  ixgbevf: Set the netdev number of Tx queues  (Greg Rose, 1 file changed, -0/+7)

The driver was not setting the number of real Tx queues in the net_device structure. This caused some serious issues such as Tx hangs and extremely poor performance with some usages of the driver. The issue is best observed by running:

    iperf -c <host> -P <n>

Where n is greater than one. The greater the value of n the more likely the problem is to show up.

Signed-off-by: Greg Rose <gregory.v.rose@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

2012-10-03  ixgbe: add support for X540-AT1  (joshua.a.hay@intel.com, 3 files changed, -0/+4)

This patch adds device support for Ethernet Controller X540-AT1.

Signed-off-by: Josh Hay <joshua.a.hay@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

2012-10-03  ixgbe: fix poll loop for FDIRCTRL.INIT_DONE bit  (Emil Tantilov, 1 file changed, -1/+1)

The loop in ixgbe_reinit_fdir_tables_82599() only polls for up to 100us, resulting in failures to update the FDIR filter table at 1Gbps and 10Gbps when under load. The poll times for FDIRCTRL.INIT_DONE are 55us, 550us and 5.5ms for 10Gbps, 1Gbps and 100Mbps respectively. This patch sets the wait time to be the same as in ixgbe_fdir_enable_82599().

Reported-by: Bhushan <shashi-sm@users.sf.net>
Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

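The shape of the fix is simply a longer per-iteration wait in the poll loop, so the total poll window comfortably covers the 5.5 ms worst case at 100 Mbps. A sketch follows; the constants and the register-read callback are illustrative, not the ixgbe definitions:

#include <linux/delay.h>
#include <linux/errno.h>
#include <linux/types.h>

#define EXAMPLE_INIT_DONE_POLL     10
#define EXAMPLE_FDIRCTRL_INIT_DONE (1u << 3)

/* fdirctrl_read() stands in for the driver's FDIRCTRL register accessor. */
static int example_wait_fdir_init_done(u32 (*fdirctrl_read)(void *hw), void *hw)
{
        int i;

        for (i = 0; i < EXAMPLE_INIT_DONE_POLL; i++) {
                if (fdirctrl_read(hw) & EXAMPLE_FDIRCTRL_INIT_DONE)
                        return 0;
                usleep_range(1000, 2000);       /* ~10-20 ms total worst case */
        }

        return -ETIMEDOUT;
}
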