path: root/drivers/net/ethernet/calxeda/xgmac.c
Age | Commit message | Author | Files | Lines
2013-10-02 | net: calxedaxgmac: fix clearing of old filter addresses | Rob Herring | 1 | -2/+2
In commit 2ee68f621af280 (net: calxedaxgmac: fix various errors in xgmac_set_rx_mode), a fix to clean up old address entries was added. However, the loop that zeroes out the entries failed to increment the register address, so only one entry got cleared. Fix this to correctly use the loop index. The end-of-loop condition was also off by one and should have been <= rather than <.
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
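A minimal sketch of the corrected clean-up loop, assuming the driver's xgmac_set_mac_addr() helper and an XGMAC_MAX_FILTER_ADDR register count (names taken on trust from the driver, not verified here):

    /* Zero every remaining perfect-filter slot: index with the loop
     * variable i (not the fixed start register) and include the last
     * register by using <= rather than <. */
    for (i = reg; i <= XGMAC_MAX_FILTER_ADDR; i++)
            xgmac_set_mac_addr(ioaddr, NULL, i);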
2013-09-03 | net: calxedaxgmac: fix xgmac_xmit DMA mapping error handling | Rob Herring | 1 | -5/+26
On a DMA mapping error in xgmac_xmit, we should simply free the skb and return NETDEV_TX_OK rather than -EIO. In the case of errors when mapping frags, we need to undo everything that has been set up.
Reported-by: Andreas Herrmann <andreas.herrmann@calxeda.com>
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
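A rough sketch of the head-mapping error path described above (the priv/device field names are assumptions):

    /* Map the linear part of the skb; on failure, consume the packet
     * instead of returning an error code the stack would keep retrying. */
    paddr = dma_map_single(priv->device, skb->data,
                           skb_headlen(skb), DMA_TO_DEVICE);
    if (dma_mapping_error(priv->device, paddr)) {
            dev_kfree_skb_any(skb);
            return NETDEV_TX_OK;
    }

A frag-mapping failure needs the extra step of unmapping and releasing the descriptors already set up for this skb before freeing it.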
2013-09-03 | net: calxedaxgmac: fix rx DMA mapping API size mismatches | Rob Herring | 1 | -6/+12
Fix the mismatch in the DMA mapping and unmapping sizes for receive. The unmap size must be equal to the map size and should not be the actual received frame length. The map size should also be adjusted by the NET_IP_ALIGN size since the h/w buffer size (dma_buf_sz) includes this offset. Also, add a missing dma_mapping_error check in xgmac_rx_refill.
Reported-by: Lennert Buytenhek <buytenh@wantstofly.org>
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
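A sketch of the symmetric sizing this describes; dma_buf_sz and the desc_get_buf_addr() accessor follow the driver's naming but are assumptions here:

    /* refill path: map dma_buf_sz minus the NET_IP_ALIGN offset it includes */
    paddr = dma_map_single(priv->device, skb->data,
                           priv->dma_buf_sz - NET_IP_ALIGN, DMA_FROM_DEVICE);
    if (dma_mapping_error(priv->device, paddr)) {
            dev_kfree_skb_any(skb);
            break;
    }

    /* receive path: unmap with the same constant size,
     * never the received frame length */
    dma_unmap_single(priv->device, desc_get_buf_addr(p),
                     priv->dma_buf_sz - NET_IP_ALIGN, DMA_FROM_DEVICE);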
2013-09-03 | net: calxedaxgmac: remove some unused statistic counters | Rob Herring | 1 | -3/+0
The rx_sa_filter_fail and tx_undeflow events are unused and cannot occur given how the h/w is used: we never filter on source MAC address, and TX store-and-forward mode prevents underflow events.
Reported-by: Lennert Buytenhek <buytenh@wantstofly.org>
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2013-09-03 | net: calxedaxgmac: fix various errors in xgmac_set_rx_mode | Rob Herring | 1 | -4/+13
Fix xgmac_set_rx_mode to handle several conditions that were not handled correctly, as Lennert Buytenhek describes:

If we have, say, 100 unicast addresses and 5 multicast addresses, the unicast address count check will evaluate to true and set use_hash to true. The multicast address check will, however, evaluate to false, use_hash won't be set to true again, and XGMAC_FRAME_FILTER_HMC won't be OR'd into XGMAC_FRAME_FILTER. But since use_hash was still true from the unicast check, netdev_for_each_mc_addr() will program the multicast addresses into the hash table instead of using the MAC address registers. Because the HMC bit wasn't set, the hash table won't be checked for multicast addresses on receive, and we'll stop receiving multicast packets entirely.

Also, there is no code that zeroes out MAC address registers reg..31 at the end of this function, meaning that under the right conditions, unicast/multicast addresses that were previously in the filter but have since been deleted won't be cleared out.

Reported-by: Lennert Buytenhek <buytenh@wantstofly.org>
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
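One way to structure the corrected decision is to derive the hash-enable bits from each address class separately, so the filter register always matches how the addresses were actually programmed. A sketch only, with assumed threshold expressions and with register bit names beyond those quoted above treated as assumptions:

    bool use_hash = false;

    if (netdev_uc_count(dev) > XGMAC_MAX_FILTER_ADDR) {
            use_hash = true;
            value |= XGMAC_FRAME_FILTER_HUC;   /* unicast matched via hash */
    }
    if (netdev_mc_count(dev) > XGMAC_MAX_FILTER_ADDR - netdev_uc_count(dev)) {
            use_hash = true;
            value |= XGMAC_FRAME_FILTER_HMC;   /* multicast matched via hash */
    }
    /* program addresses into the hash table or the filter registers,
     * zero any filter registers left over from the previous configuration,
     * then write value to XGMAC_FRAME_FILTER */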
2013-09-03 | net: calxedaxgmac: enable interrupts after napi_enable | Rob Herring | 1 | -3/+5
Fix a race condition where the interrupt handler may have called napi_schedule before napi_enable is called. This would disable interrupts and never actually schedule napi to run.
Reported-by: Lennert Buytenhek <buytenh@wantstofly.org>
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
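The ordering this fix establishes, as a sketch (the register and mask names follow the driver's style but are assumptions):

    napi_enable(&priv->napi);
    netif_start_queue(dev);

    /* Unmask device interrupts only after NAPI is enabled, so an IRQ
     * that fires immediately can successfully schedule the poll. */
    writel(DMA_INTR_DEFAULT_MASK, priv->base + XGMAC_DMA_INTR_ENA);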
2013-09-03 | net: calxedaxgmac: fix race with tx queue stop/wake | Rob Herring | 1 | -5/+15
Since the xgmac transmit start and completion work locklessly, it is possible for xgmac_xmit to stop the tx queue after xgmac_tx_complete has run, resulting in the tx queue never being woken up. Fix this by ensuring that ring buffer index updates are visible and by rechecking the ring space after stopping the queue. Also fix an off-by-one bug: we need to stop the queue when the ring buffer space is equal to MAX_SKB_FRAGS. The implementation used here was copied from drivers/net/ethernet/broadcom/tg3.c.
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Reviewed-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
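A sketch of the tg3-style pattern referenced above; tx_ring_space() stands in for whatever helper computes free descriptors and is hypothetical here:

    /* transmit path */
    if (unlikely(tx_ring_space(priv) <= MAX_SKB_FRAGS)) {
            netif_stop_queue(dev);
            /* make the stop visible before re-reading ring state;
             * pairs with the barrier in the completion path */
            smp_mb();
            if (tx_ring_space(priv) > MAX_SKB_FRAGS)
                    netif_start_queue(dev);
    }

    /* completion path, after freeing ring entries */
    smp_mb();
    if (netif_queue_stopped(dev) && tx_ring_space(priv) > MAX_SKB_FRAGS)
            netif_wake_queue(dev);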
2013-09-03 | net: calxedaxgmac: update ring buffer tx_head after barriers | Rob Herring | 1 | -1/+2
Ensure that the descriptor writes are visible before the ring buffer head is updated. Since writel is a barrier, we can simply update the head after the writel.
Reported-by: Lennert Buytenhek <buytenh@wantstofly.org>
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
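The reordering, sketched (the ring helper and register name are assumptions):

    /* writel() implies the write barrier that makes the descriptor
     * updates visible, so kick the DMA first and only then publish
     * the new head index. */
    writel(1, priv->base + XGMAC_DMA_TX_POLL);
    priv->tx_head = dma_ring_incr(entry, DMA_TX_RING_SZ);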
2013-09-03 | net: calxedaxgmac: fix possible skb free before tx complete | Rob Herring | 1 | -31/+24
The TX completion code may have freed an skb before the entire sg list was transmitted. The DMA unmap calls for the fragments could also get skipped. Now set the skb pointer on every entry in the ring, not just the head of the sg list. We then use the FS (first segment) bit in the descriptors to determine skb head vs. fragment. This also fixes a similar bug in xgmac_free_tx_skbufs, where an sg list that wraps at the end of the ring buffer would not get fully unmapped during clean-up.
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
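Sketched completion logic under this scheme; the desc_* accessor names mirror the driver's style but should be treated as assumptions:

    struct sk_buff *skb = priv->tx_skbuff[entry];

    /* Every entry carries its skb pointer; the FS bit tells us whether
     * this descriptor mapped the linear head or a fragment. */
    if (desc_get_tx_fs(p))
            dma_unmap_single(priv->device, desc_get_buf_addr(p),
                             desc_get_buf_len(p), DMA_TO_DEVICE);
    else
            dma_unmap_page(priv->device, desc_get_buf_addr(p),
                           desc_get_buf_len(p), DMA_TO_DEVICE);

    /* free the skb only on the descriptor carrying the last segment */
    if (desc_get_tx_ls(p))
            dev_kfree_skb(skb);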
2013-09-03 | net: calxedaxgmac: fix race between xgmac_tx_complete and xgmac_tx_err | Rob Herring | 1 | -20/+18
It is possible for xgmac_tx_complete to run concurrently with xgmac_tx_err since there are no locks. Fix this by moving the tx error handling to a workqueue so we can disable napi while we reset the transmitter.
Reported-by: Andreas Herrmann <andreas.herrmann@calxeda.com>
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2013-09-03 | net: calxedaxgmac: read correct field in xgmac_desc_get_buf_len | Rob Herring | 1 | -1/+1
xgmac_desc_get_buf_len appears to have a copy/paste error: flags is the wrong field to read, we should be reading the buf_size field, and cpu_to_le32 should be le32_to_cpu. This never really mattered, as this function is only used for DMA mapping calls, which happen to be no-ops with coherent DMA.
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
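The corrected accessor, roughly (the buffer-size mask names are assumptions):

    static int desc_get_buf_len(struct xgmac_dma_desc *p)
    {
            /* read buf_size, not flags, and convert *from* little-endian */
            u32 len = le32_to_cpu(p->buf_size);

            return (len & DESC_BUFFER1_SZ_MASK) +
                   ((len & DESC_BUFFER2_SZ_MASK) >> DESC_BUFFER2_SZ_OFFSET);
    }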
2013-09-03 | net: calxedaxgmac: remove NETIF_F_FRAGLIST setting | Rob Herring | 1 | -1/+1
The xgmac does not actually handle frag lists, so this option should not be set. It does not appear to have had any impact though.
Reported-by: Lennert Buytenhek <buytenh@wantstofly.org>
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2013-05-27 | net: ethernet: remove unnecessary platform_set_drvdata() | Jingoo Han | 1 | -2/+0
The driver core clears the driver data to NULL after device_release or on probe failure, since commit 0998d0631001288a5974afc0b2a5f568bcdecb4d (device-core: Ensure drvdata = NULL when no driver is bound). Thus, there is no need to manually clear the device driver data to NULL.
Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Acked-by: Rob Herring <rob.herring@calxeda.com>
Acked-by: Roland Stigge <stigge@antcom.de>
Acked-by: Mugunthan V N <mugunthanvnm@ti.com>
Reviewed-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Tested-by: Roland Stigge <stigge@antcom.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
2013-04-25 | net: calxedaxgmac: fix condition in xgmac_set_features() | Dan Carpenter | 1 | -1/+1
The "changed" variable should be a 64-bit type, otherwise it can't store all the features. As the code stands, the test for whether NETIF_F_RXCSUM changed is always false and we return immediately.
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2013-04-16 | xgmac: Remove unneeded PM_OPS definitions | Fabio Estevam | 1 | -5/+2
The SIMPLE_DEV_PM_OPS macro handles the !CONFIG_PM_SLEEP case nicely, so there is no need to define PM_OPS for both the CONFIG_PM_SLEEP and !CONFIG_PM_SLEEP cases. Remove the unneeded definitions.
Signed-off-by: Fabio Estevam <fabio.estevam@freescale.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
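The resulting pattern, sketched (the callback names follow the driver's convention and are assumptions):

    /* SIMPLE_DEV_PM_OPS compiles the callbacks out when CONFIG_PM_SLEEP
     * is disabled, so no #ifdef-selected PM_OPS definition is needed. */
    static SIMPLE_DEV_PM_OPS(xgmac_pm_ops, xgmac_suspend, xgmac_resume);

    /* referenced from the platform_driver as .driver.pm = &xgmac_pm_ops */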
2013-03-29 | net: calxedaxgmac: Wake-on-LAN fixes | Rob Herring | 1 | -1/+5
WOL is broken because the magic packet status bit is being set rather than the enable bit. The PMT interrupt is also not getting serviced: it is additionally enabled on the global interrupt, which has higher priority but does not clear it. This fixes both of these issues to get WOL working. There's still a problem with receive after resume, but at least now we can wake up.
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2013-03-29 | net: calxedaxgmac: fix rx ring handling when OOM | Rob Herring | 1 | -0/+3
If skb allocation for the rx ring fails repeatedly, we can reach a point where the ring is empty. In this condition, the driver is out of sync with the h/w. While this has always been possible, the removal of skb recycling seems to have made triggering this problem easier.
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2013-01-29 | Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net | David S. Miller | 1 | -0/+4
Bring in the 'net' tree so that we can get some ipv4/ipv6 bug fixes that some net-next work will build upon.
Signed-off-by: David S. Miller <davem@davemloft.net>
2013-01-18 | net: calxedaxgmac: throw away overrun frames | Rob Herring | 1 | -0/+4
The xgmac driver assumes one frame per descriptor. If a frame larger than the descriptor's buffer size is received, the frame will spill over into the next descriptor. So check for received frames that span more than one descriptor and discard them. This prevents a crash if we receive erroneously large packets.
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Cc: netdev@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: David S. Miller <davem@davemloft.net>
2013-01-03 | net: remove unnecessary NET_ADDR_RANDOM "bitclean" | Jiri Pirko | 1 | -1/+0
NET_ADDR_SET is set in dev_set_mac_address(), so there is no need to alter the dev->addr_assign_type value in drivers.
Signed-off-by: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: David S. Miller <davem@davemloft.net>
2012-11-07 | net: calxedaxgmac: ip align receive buffers | Rob Herring | 1 | -5/+6
On gcc 4.7, we will get alignment traps in the ip stack if we don't align the ip headers on receive. The h/w can support this, so use ip-aligned allocations. Cut down the unnecessary padding on the allocation. The buffer can start on any byte alignment, but the size including the beginning offset must be 8-byte aligned, so the h/w buffer size must include the NET_IP_ALIGN offset. Thanks to Eric Dumazet for the initial patch highlighting the padding issues.
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
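A sketch of the aligned refill allocation; the driver's exact buffer-size arithmetic is not reproduced here, and bufsz is a placeholder:

    /* Reserve NET_IP_ALIGN bytes so the 14-byte Ethernet header leaves
     * the IP header 4-byte aligned for the stack. The h/w buffer size
     * handed to the descriptor must include this offset, and the overall
     * size must stay 8-byte aligned. */
    skb = netdev_alloc_skb_ip_align(priv->dev, bufsz);
    if (!skb)
            break;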
2012-11-07 | net: calxedaxgmac: rework transmit ring handling | Rob Herring | 1 | -12/+12
Only generate tx interrupts on every ring size / 4 descriptors. Move the netif_stop_queue call to the end of the xmit function rather than checking at the beginning.
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2012-11-07 | net: calxedaxgmac: drop some unnecessary register writes | Rob Herring | 1 | -6/+0
The interrupts have already been cleared, so we don't need to clear them again. Also, clearing them here could cause us to miss interrupts if the status is cleared but the packet is not processed.
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2012-11-07 | net: calxedaxgmac: use raw i/o accessors in rx and tx paths | Rob Herring | 1 | -6/+6
The standard readl/writel accessors involve a spinlock and cache sync operation on ARM platforms with an outer cache. Only DMA triggering accesses need this, so use the raw variants instead in the critical paths. The relaxed variants would be more appropriate, but don't exist on all arches.
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
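For example, an interrupt-status acknowledge in the hot path might become (the register and bit names are assumptions):

    /* __raw_writel() skips the outer-cache sync that writel() performs
     * on these ARM platforms; keep writel() for writes that must be
     * ordered against DMA-visible memory, such as the poll-demand kick. */
    __raw_writel(DMA_STATUS_NIS, priv->base + XGMAC_DMA_STATUS);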
2012-11-07 | net: calxedaxgmac: remove explicit rx dma buffer polling | Rob Herring | 1 | -3/+1
New received frames will trigger the rx DMA to poll the DMA descriptors, so there is no need to tell the h/w to poll. We also want to enable dropping frames from the fifo when there is no buffer.
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2012-11-07 | net: calxedaxgmac: enable operate on 2nd frame mode | Rob Herring | 1 | -2/+2
Enable the tx dma to start reading the next frame while sending the current frame.
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2012-10-07 | net: remove skb recycling | Eric Dumazet | 1 | -17/+2
Over time, the skb recycling infrastructure got little interest and many bugs. Generic rx path skb allocation now uses page fragments for efficient GRO / TCP coalescing, and recycling a tx skb for the rx path is not worth the pain. The last identified bug is that fat skbs can be recycled and end up using high-order pages after a few iterations. With help from Maxime Bizon, who pointed out that commit 87151b8689d (net: allow pskb_expand_head() to get maximum tailroom) introduced this regression for recycled skbs. Instead of fixing this bug, let's remove skb recycling. Drivers wanting really hot skbs should use build_skb() anyway, to allocate/populate the sk_buff right before netif_receive_skb().
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Maxime Bizon <mbizon@freebox.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
2012-07-10 | net: calxedaxgmac: enable rx cut-thru mode | Rob Herring | 1 | -2/+3
Enabling RX cut-thru mode yields better performance as received frames start getting written to memory before a whole frame is received.
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2012-07-10 | net: calxedaxgmac: set outstanding AXI bus transactions to 8 | Rob Herring | 1 | -1/+1
Increase the number of outstanding read and write AXI transactions from 1 to 8 for better performance.
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2012-07-10 | net: calxedaxgmac: fix hang on rx refill | Rob Herring | 1 | -15/+12
Fix intermittent hangs in xgmac_rx_refill. If a ring buffer entry already had an skb allocated, then xgmac_rx_refill would get stuck in a loop. This can happen on a rx error when we just leave the skb allocated to the entry.

[ 7884.510000] INFO: rcu_preempt detected stall on CPU 0 (t=727315 jiffies)
[ 7884.510000] [<c0010a59>] (unwind_backtrace+0x1/0x98) from [<c006fd93>] (__rcu_pending+0x11b/0x2c4)
[ 7884.510000] [<c006fd93>] (__rcu_pending+0x11b/0x2c4) from [<c0070b95>] (rcu_check_callbacks+0xed/0x1a8)
[ 7884.510000] [<c0070b95>] (rcu_check_callbacks+0xed/0x1a8) from [<c0036abb>] (update_process_times+0x2b/0x48)
[ 7884.510000] [<c0036abb>] (update_process_times+0x2b/0x48) from [<c004e8fd>] (tick_sched_timer+0x51/0x94)
[ 7884.510000] [<c004e8fd>] (tick_sched_timer+0x51/0x94) from [<c0045527>] (__run_hrtimer+0x4f/0x1e8)
[ 7884.510000] [<c0045527>] (__run_hrtimer+0x4f/0x1e8) from [<c0046003>] (hrtimer_interrupt+0xd7/0x1e4)
[ 7884.510000] [<c0046003>] (hrtimer_interrupt+0xd7/0x1e4) from [<c00101d3>] (twd_handler+0x17/0x24)
[ 7884.510000] [<c00101d3>] (twd_handler+0x17/0x24) from [<c006be39>] (handle_percpu_devid_irq+0x59/0x114)
[ 7884.510000] [<c006be39>] (handle_percpu_devid_irq+0x59/0x114) from [<c0069aab>] (generic_handle_irq+0x17/0x2c)
[ 7884.510000] [<c0069aab>] (generic_handle_irq+0x17/0x2c) from [<c000cc8d>] (handle_IRQ+0x35/0x7c)
[ 7884.510000] [<c000cc8d>] (handle_IRQ+0x35/0x7c) from [<c033b153>] (__irq_svc+0x33/0xb8)
[ 7884.510000] [<c033b153>] (__irq_svc+0x33/0xb8) from [<c0244b06>] (xgmac_rx_refill+0x3a/0x140)
[ 7884.510000] [<c0244b06>] (xgmac_rx_refill+0x3a/0x140) from [<c02458ed>] (xgmac_poll+0x265/0x3bc)
[ 7884.510000] [<c02458ed>] (xgmac_poll+0x265/0x3bc) from [<c029fcbf>] (net_rx_action+0xc3/0x200)
[ 7884.510000] [<c029fcbf>] (net_rx_action+0xc3/0x200) from [<c0030cab>] (__do_softirq+0xa3/0x1bc)

Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2012-07-10 | net: calxedaxgmac: fix net timeout recovery | Rob Herring | 1 | -0/+1
Fix net tx watchdog timeout recovery. The descriptor ring was reset, but the DMA engine was not reset to the beginning of the ring.
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2012-02-15 | net: use eth_hw_addr_random() and reset addr_assign_type | Danny Kukawka | 1 | -1/+2
Use eth_hw_addr_random() instead of calling random_ether_addr() to set addr_assign_type correctly to NET_ADDR_RANDOM. Reset the state to NET_ADDR_PERM as soon as the MAC gets changed via .ndo_set_mac_address.
v2: adapt to renamed eth_hw_addr_random()
Signed-off-by: Danny Kukawka <danny.kukawka@bisect.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
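In probe, the pattern looks roughly like this (the surrounding validity check is an assumption):

    /* eth_hw_addr_random() fills dev_addr with a random MAC and sets
     * addr_assign_type to NET_ADDR_RANDOM in one step. */
    if (!is_valid_ether_addr(ndev->dev_addr))
            eth_hw_addr_random(ndev);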
2012-01-05 | xgmac: cleanups | stephen hemminger | 1 | -2/+2
Make local function static, make ethtool_ops const. Compile tested only.
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2011-11-29 | net: add calxeda xgmac ethernet driver | Rob Herring | 1 | -0/+1928
Add support for the XGMAC 10Gb ethernet device in the Calxeda Highbank SOC.
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: David S. Miller <davem@davemloft.net>