|
CPDMA interrupts are not properly acknowledged, which leads to an
interrupt storm; only CPDMA interrupt 0 is acknowledged in the Davinci
CPDMA driver. Change the cpdma_ctlr_eoi API to also acknowledge
interrupts 1 and 2, which are used for rx and tx respectively.
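A minimal sketch of the shape of the change; the vector numbering follows
the commit text, but the helper and register names below are illustrative
rather than the driver's actual identifiers:

  /* Hypothetical EOI vector numbers for the CPDMA interrupt lines. */
  enum cpdma_eoi_vector {
          CPDMA_EOI_RX_THRESH = 0,  /* previously the only one acked */
          CPDMA_EOI_RX        = 1,  /* rx completion interrupt */
          CPDMA_EOI_TX        = 2,  /* tx completion interrupt */
  };

  /* cpdma_ctlr_eoi() now takes the vector to acknowledge instead of
   * always acking vector 0; the EOI register is modeled as bare MMIO. */
  static inline void cpdma_ctlr_eoi(u32 __iomem *eoi_reg,
                                    enum cpdma_eoi_vector vec)
  {
          writel(vec, eoi_reg);
  }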
Reported-by: Pantelis Antoniou <panto@antoniou-consulting.com>
Signed-off-by: Mugunthan V N <mugunthanvnm@ti.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Upper-case macros for various chip attributes are slightly
difficult to read and are a bit out of character with the other
tg3_<foo> attribute functions.
Convert:
GET_ASIC_REV(tp->pci_chip_rev_id) -> tg3_asic_rev(tp)
GET_CHIP_REV(tp->pci_chip_rev_id) -> tg3_chip_rev(tp)
Remove:
GET_METAL_REV(tp->pci_chip_rev_id) -> tg3_metal_rev(tp) (unused)
Add:
tg3_chip_rev_id(tp) for tp->pci_chip_rev_id so access styles
are similar to tg3_asic_rev and tg3_chip_rev.
These macros are not converted to static inline functions
because gcc (tested with 4.7.2) is currently unable to
optimize the resulting object code the same way, and the code
is otherwise larger.
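For reference, the new accessors are thin macro wrappers; the shift
amounts shown here follow the old GET_ASIC_REV()/GET_CHIP_REV()
definitions:

  #define tg3_asic_rev(tp)      ((tp)->pci_chip_rev_id >> 12)
  #define tg3_chip_rev(tp)      ((tp)->pci_chip_rev_id >> 8)
  #define tg3_chip_rev_id(tp)   ((tp)->pci_chip_rev_id)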
Signed-off-by: Joe Perches <joe@perches.com>
Acked-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
It's the same value as tp->pci_chip_rev_id so use that
instead. This makes all CHIPREV_ID_<foo> tests the same.
Signed-off-by: Joe Perches <joe@perches.com>
Acked-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Without this patch I get many unaligned access warnings per packet;
this patch fixes them all. This should improve performance on some
systems such as MIPS.
Signed-off-by: Hauke Mehrtens <hauke@hauke-m.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Same problem as IPv6
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
We should not use an assignment in a conditional:
warning: suggest parentheses around assignment used as truth value [-Wparentheses]
Problem introduced by:
commit 14bbd6a565e1bcdc240d44687edb93f721cfdf99
Author: Pravin B Shelar <pshelar@nicira.com>
Date: Thu Feb 14 09:44:49 2013 +0000
net: Add skb_unclone() helper function.
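A hypothetical before/after illustrating the pattern the warning flags
(the variable name is illustrative):

  /* Before: the assignment doubles as the truth value, so gcc warns. */
  if (err = skb_unclone(skb, GFP_ATOMIC))
          return err;

  /* After: separate the assignment from the test. */
  err = skb_unclone(skb, GFP_ATOMIC);
  if (err)
          return err;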
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
They well deserve a separate unit.
Cc: "David S. Miller" <davem@davemloft.net>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
In ehea.h the minimum number of entries is 2^7 - 1:
#define EHEA_MIN_ENTRIES_QP 127
Thus change the module param description accordingly.
Signed-off-by: Dave Young <dyoung@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Use ipv6_addr_hash() and a single jhash invocation.
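ipv6_addr_hash() folds the 128-bit address into 32 bits by XORing its
four words, so the whole hash reduces to one jhash call; a sketch (the
seed and addr variables are illustrative):

  /* ipv6_addr_hash(), as defined in include/net/ipv6.h: */
  static inline u32 ipv6_addr_hash(const struct in6_addr *a)
  {
          return (__force u32)(a->s6_addr32[0] ^ a->s6_addr32[1] ^
                               a->s6_addr32[2] ^ a->s6_addr32[3]);
  }

  /* One invocation instead of hashing the raw 128-bit address: */
  hash = jhash_1word(ipv6_addr_hash(&addr), seed);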
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Recent changes have made it so that MAX_SKB_FRAGS is now never less than 16.
As a result we were seeing issues on systems with 64K pages as it would
cause DESC_NEEDED to increase to 68, and we would need over 136 descriptors
free before clean_tx_irq would wake the queue.
This patch makes it so that DESC_NEEDED is always MAX_SKB_FRAGS + 4. This
should prevent any possible deadlocks on the systems with 64K pages as we will
now only require 42 descriptors to wake.
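The resulting definition and arithmetic, per the numbers above (the
2 * DESC_NEEDED wake threshold is the driver's existing convention):

  #define DESC_NEEDED (MAX_SKB_FRAGS + 4)
  /* With MAX_SKB_FRAGS = 17 this is 21, so clean_tx_irq wakes the
   * queue once 2 * DESC_NEEDED = 42 descriptors are free, instead of
   * 2 * 68 = 136 on 64K-page systems under the old page-size-derived
   * formula. */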
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
This patch makes sure that TXDCTL.WTHRESH is set to 1 when BQL is enabled
and EITR is set to more than 100k interrupts per second to avoid Tx timeouts.
Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
This patch adds support for reading data from SFP+ modules over i2c.
Signed-off-by: Aurélien Guillaume <footplus@gmail.com>
Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
This patch replaces instances where a return code from i2c operations
was checked against a list of error codes with a much simpler
if (status != 0) check.
Some whitespace cleanups are included.
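A hypothetical before/after; the real error constants in the driver
differ, this only shows the shape of the simplification:

  /* Before: each failure code enumerated explicitly. */
  if (status == ERR_I2C_NACK || status == ERR_I2C_TIMEOUT ||
      status == ERR_SWFW_SYNC)
          goto fail;

  /* After: any nonzero status means failure. */
  if (status != 0)
          goto fail;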
Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
This patch makes sure that the SW lock is released after all i2c
operations complete in the retry code path.
Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
This change adds support for the ethtool set_channels operation.
Since the ixgbe driver has to support DCB as well as the other modes the
assumption I made here is that the number of channels in DCB modes refers
to the number of queues per traffic class, not the number of queues total.
CC: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
This patch adds support for the ethtool get_channels operation.
Since the ixgbe driver has to support DCB as well as the other modes the
assumption I made here is that the number of channels in DCB modes refers
to the number of queues per traffic class, not the number of queues total.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Reviewed-by: John Fastabend <john.r.fastabend@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
The ixgbe_setup_tc code is essentially the same code we need any time we have
to update the number of queues. As such I am making it available always and
just stripping the DCB specific bits out when DCB is disabled instead of
stripping the entire function.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Reviewed-by: John Fastabend <john.r.fastabend@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
This change updates the ixgbe driver to use __netdev_pick_tx instead of
the current logic it is using to select a queue. The main result of this
change is that ixgbe can now fully support XPS, and in the case of non-FCoE
enabled configs it means we don't need to have our own ndo_select_queue.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Reviewed-by: John Fastabend <john.r.fastabend@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
This change adds support for ixgbe to configure the XPS queue mapping on
load. The result of this change is that on open we will now be resetting
the number of Tx queues, and then setting the default configuration for XPS
based on whether ATR is enabled or disabled.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Reviewed-by: John Fastabend <john.r.fastabend@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
Instead of adjusting the FCoE and Flow director limits based on the number
of CPUs we can define them much sooner. This allows the user to come
through later and adjust them once we have updated the code to support the
set_channels ethtool operation.
I am still allowing for FCoE and RSS queues to be separated if the number
of queues is less than the number of CPUs. This essentially treats the two
groupings like they are two separate traffic classes.
In addition I am changing the initialization to use the MAX_TX/RX_QUEUES
defines instead of trying to compute the value as it will be possible in
upcoming patches for the user to request the maximum number of queues.
I have also updated things so that the upper limit on queues is exactly 63
instead of allowing it to go up to 64. The reason for this change is to
address the fact that the driver only supports up to 63 queue vectors since
the hardware supports 64 MSI-X vectors, but one must be reserved for "other"
causes.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
On several machines with i350 adapters the ethtool offline self-test sometimes
fails. This happens because link auto-negotiation may take longer than the
timeout of 4 seconds. Increasing the timeout by 1 second resolves the issue.
Output from a failing i350 offline self-test:
while [ 1 ]; do ethtool -t eth2 offline; done
The test result is PASS
The test extra info:
Register test (offline) 0
Eeprom test (offline) 0
Interrupt test (offline) 0
Loopback test (offline) 0
Link test (on/offline) 0
The test result is FAIL
The test extra info:
Register test (offline) 0
Eeprom test (offline) 0
Interrupt test (offline) 0
Loopback test (offline) 0
Link test (on/offline) 1
The test result is PASS
The test extra info:
Register test (offline) 0
Eeprom test (offline) 0
Interrupt test (offline) 0
Loopback test (offline) 0
Link test (on/offline) 0
Signed-off-by: Stefan Assmann <sassmann@kpanic.de>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
This change is meant to address several race issues that become possible
because next_to_watch could possibly be set to a value that shows that the
descriptor is done when it is not. In order to correct that we instead make
next_to_watch a pointer that is set to NULL during cleanup, and set to the
eop_desc after the descriptor rings have been written.
To enforce proper ordering, the next_to_watch pointer is not set until after
a wmb() that follows writing the values to the last descriptor in a transmit.
In order to guarantee that the descriptor is not read until after the
eop_desc, we use read_barrier_depends(), which is only really necessary on
the Alpha architecture.
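A condensed sketch of the publish/consume ordering described above
(field and bit names follow igb conventions but are illustrative):

  /* Transmit side: fill every descriptor, then publish the pointer. */
  first->next_to_watch = NULL;       /* invisible while incomplete */
  /* ... write all tx descriptors for this packet ... */
  wmb();                             /* descriptors hit memory first */
  first->next_to_watch = eop_desc;   /* now cleanup may act on it */

  /* Cleanup side: trust the descriptor only after seeing the pointer. */
  eop_desc = tx_buffer->next_to_watch;
  if (!eop_desc)
          break;                     /* nothing published yet */
  read_barrier_depends();            /* order the eop_desc dereference */
  if (!(eop_desc->wb.status & cpu_to_le32(E1000_TXD_STAT_DD)))
          break;                     /* hardware not done yet */
  tx_buffer->next_to_watch = NULL;   /* consume exactly once */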
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Acked-by: Greg Rose <gregory.v.rose@intel.com>
Tested-by: Sibai Li <sibai.li@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
The current e1000e driver doesn't report anything when link speed is
downgraded due to SmartSpeed. As a result, users suspect that there is
something wrong with the NIC, but if the cause is SmartSpeed there is no
need to replace the NIC. This patch makes e1000e notify users that
SmartSpeed kicked in.
Signed-off-by: Koki Sanagi <sanagi.koki@jp.fujitsu.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
Fixes whitespace issues, such as lines exceeding 80 chars, needless blank
lines and the use of spaces where tabs are needed. In addition, fix
multi-line comments to align with the networking standard.
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
|
|
As the number of iovecs in a send request is already limited to
UIO_MAXIOV (i.e. 1024) in __sys_sendmsg(), it's unnecessary to check it
again in the TIPC stack.
Signed-off-by: Ying Xue <ying.xue@windriver.com>
Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
|
|
Change overload control to be purely byte-based, using
sk->sk_rmem_alloc as byte counter, and compare it to a calculated
upper limit for the socket receive queue.
For all connection messages, irrespective of message importance,
the overload limit is set to a constant value (i.e., 67MB). This
limit should normally never be reached because of the lower
limit used by the flow control algorithm, and is there only
as a last resort in case a faulty peer doesn't respect the send
window limit.
For datagram messages, message importance is taken into account
when calculating the overload limit. The calculation is based
on sk->sk_rcvbuf, and is hence configurable via the socket option
SO_RCVBUF.
Cc: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Ying Xue <ying.xue@windriver.com>
Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
|
|
The tipc function discard_rx_queue() is just a duplicated
implementation of __skb_queue_purge(). Remove the former
and directly invoke __skb_queue_purge().
In doing so, the underscores convey to the code reader more
information about the current locking state that is assumed.
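For reference, __skb_queue_purge() from skbuff.h is exactly the loop the
removed helper open-coded:

  static inline void __skb_queue_purge(struct sk_buff_head *list)
  {
          struct sk_buff *skb;

          while ((skb = __skb_dequeue(list)) != NULL)
                  kfree_skb(skb);
  }

so the call site becomes __skb_queue_purge(&sk->sk_receive_queue).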
Signed-off-by: Ying Xue <ying.xue@windriver.com>
Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
|
|
We no longer need to use mac_len, so let's clean things up.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
There is no need to implement empty suspend/resume callbacks if there is nothing
to do during suspend/resume. The drivers will behave the same with no callbacks
or empty callbacks during suspend/resume.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch adds a GRE protocol offload handler so that
skb_gso_segment() can segment GRE packets.
An SKB GSO CB is added to keep track of the total header length so that
skb_segment() can push the entire header; e.g. in the case of GRE,
skb_segment() needs to push the inner and outer headers to every segment.
A new NETIF_F_GRE_GSO feature is added for devices which support HW
GRE TSO offload. Currently no device supports it, therefore GRE GSO
always falls back to software GSO.
[ Compute pkt_len before ip_local_out() invocation. -DaveM ]
Signed-off-by: Pravin B Shelar <pshelar@nicira.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This function will be used in the next GRE_GSO patch. This patch does
not change any functionality; it only exports the
skb_mac_gso_segment() function.
[ Use skb_reset_mac_len() -DaveM ]
Signed-off-by: Pravin B Shelar <pshelar@nicira.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This function will be used in the next GRE_GSO patch. This patch does
not change any functionality.
Signed-off-by: Pravin B Shelar <pshelar@nicira.com>
Acked-by: Eric Dumazet <edumazet@google.com>
|
|
Signed-off-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Before the device is opened, the carrier state should be off. It
will not race with the link interrupt if we set it before calling
register_netdev().
Signed-off-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Don't set the default size to 128K if it is 5762. Instead, rely on the
size we obtain from NVRAM location 0xf0.
Signed-off-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This chip supports Energy Efficient Ethernet. The existing code only
supports a smaller set of devices with the 5718 PCI ID. Expand support
to all devices with the same 5717 B0 chip ID.
Signed-off-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The patch also adds a couple of fixes
- For the 57766 and non-Ax versions of the 57765, bootcode needs to set up
the PCIE Fast Training Sequence (FTS) value to prevent transmit hangs.
Unfortunately, it does not have enough room in the selfboot case (i.e.
devices with no NVRAM), so the driver needs to implement this.
- For performance reasons, the 2k DMA engine mode on the 57766 should
be enabled, and the DMA size limited to 2k for standard sized packets.
Signed-off-by: Nithin Nayak Sujir <nsujir@broadcom.com>
Signed-off-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch reshuffles the switch/case structure of the flag assignment to
allow for the flags to be set for each MAC type separately. This is needed
for new HW that does not have feature parity with older HW.
Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Tested-by: Jack Morgan <jack.morgan@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
This patch simplifies the igb_get_invariants function by moving all of the
function pointers implemented in it out to individual separate functions,
based on their functionality; this makes debugging much easier.
Signed-off-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
This patch initializes MAC function pointers for device configuration.
Signed-off-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
This patch initializes NVM function pointers for device configuration.
Signed-off-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
This patch initializes PHY function pointers for device configuration.
Signed-off-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
After reviewing the igb and ixgbe code I realized there are a few issues in
how the code is structured. Specifically we are not checking the size of the
buffers being used in transmits and we are not using the same value to
determine when to stop or start a Tx queue. As such the code is prone to
bugs.
This patch makes it so that we have one value DESC_NEEDED that we will use for
starting and stopping the queue. In addition we will check the size of
buffers being used when setting up a transmit so as to avoid a possible buffer
overrun if we were to receive a frame with a block of data larger than 32K in
skb->data.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
This patch correctly resolves the sparse warnings found with this
function.
Signed-off-by: Carolyn Wyborny <carolyn.wyborny@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
This patch fixes the allocation function in igb_get_i2c_client to use
GFP_ATOMIC instead of GFP_KERNEL because we have a spinlock.
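Illustrative shape of the fix (the lock and variable names are
assumptions, not the driver's exact identifiers):

  spin_lock_irqsave(&i2c_client_lock, flags);
  /* A sleeping allocation is not allowed under a spinlock, so
   * GFP_KERNEL (which may sleep to reclaim memory) must become
   * GFP_ATOMIC here. */
  client = kzalloc(sizeof(*client), GFP_ATOMIC);
  /* ... register and record the client ... */
  spin_unlock_irqrestore(&i2c_client_lock, flags);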
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Carolyn Wyborny <carolyn.wyborny@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
This patch fixes an issue where we check whether IRQs are disabled, and
exit if so, right after having explicitly disabled them with
spin_lock_irqsave.
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Carolyn Wyborny <carolyn.wyborny@intel.com>
Tested-by: Aaron Brown <arron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
This change makes it so that we can enable the use of build_skb for cases
where jumbo frames are disabled. The advantage to this is that we do not
have to perform a memcpy to populate the header and as a result we see a
significant performance improvement.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
Even for non-pfmalloc SKBs, __netif_receive_skb() will do a
tsk_restore_flags() on current unconditionally.
Make __netif_receive_skb() a shim around the existing code, renamed to
__netif_receive_skb_core(). Let __netif_receive_skb() wrap the
__netif_receive_skb_core() call with the task flag modifications, if
necessary.
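The wrapper ends up roughly as follows (matching the description above;
treat the exact signature as a sketch):

  static int __netif_receive_skb(struct sk_buff *skb)
  {
          int ret;

          if (sk_memalloc_socks() && skb_pfmemalloc(skb)) {
                  unsigned long pflags = current->flags;

                  /* PF_MEMALLOC lets the skb dip into the reserves;
                   * the flag restore now happens only on this path. */
                  current->flags |= PF_MEMALLOC;
                  ret = __netif_receive_skb_core(skb, true);
                  tsk_restore_flags(current, pflags, PF_MEMALLOC);
          } else {
                  ret = __netif_receive_skb_core(skb, false);
          }

          return ret;
  }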
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This fixes a less obvious error on the one hand, and prevents further
similar errors by disambiguating and optimizing the RxFCB indication on
the other.
The error consists in the NETIF_F_HW_VLAN_TX flag being used as an
indication of Rx FCB insertion. This happened as soon as gfar_uses_fcb(),
which despite its name indicates Rx FCB insertion, started
incorporating is_vlan_on().
is_vlan_on(), in turn, is also a misleading construct because we need to
differentiate between hw VLAN extraction/VLEX (marked by the VLAN_RX
flag) and hw VLAN insertion/VLINS (the VLAN_TX flag), which are
different mechanisms using different types of FCBs.
The hw spec for the RxFCB feature is as follows:
In the case of RxBD rings, FCBs (Frame Control Block) are inserted by
the eTSEC whenever RCTRL[PRSDEP] is set to a non-zero value. Only one
FCB is inserted per frame (in the buffer pointed to by the RxBD with
bit F set). TOE acceleration for receive is enabled for all rx frames
in this case.
This patch introduces priv->uses_rxfcb field to quickly signal RxFCB
insertion in accordance with the specification above.
The dependency on FSL_GIANFAR_DEV_HAS_TIMER was also eliminated as
another source of confusion. The actual dependency is to priv->hwts_rx_en.
Upon changing priv->hwts_rx_en via IOCTL, the gfar device is being
restarted and on init_mac() the priv->hwts_rx_en flag determines RxFCB
insertion, and rctrl is programmed accordingly. The patch takes care
of this case too.
Though maybe not as self-documenting as the inline version, uses_fcb(),
priv->uses_rxfcb has the main purpose of quickly signaling, on the hot path,
that the incoming frame has an *Rx* FCB block inserted which needs to be
pulled out before passing the skb to the stack. This is a performance
critical operation, it needs to happen fast, that's why uses_rxfcb is
placed in the first cacheline of gfar_private.
This is also why a cached rctrl cannot be used instead: 1) because
we don't have 32 bits available in the first cacheline of gfar_priv
(but only 16); 2) bit operations are expensive on the hot path.
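On the Rx hot path the flag therefore reduces to a single test before
the skb goes up the stack (sketch; GMAC_FCB_LEN is gianfar's FCB size):

  if (priv->uses_rxfcb)
          skb_pull(skb, GMAC_FCB_LEN);  /* strip the inserted Rx FCB */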
Signed-off-by: Claudiu Manoil <claudiu.manoil@freescale.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The controller's ref manual states clearly that when the hw Rx vlan
offload feature is enabled, meaning that the VLEX bit from RCTRL is
correctly enabled, then the hw performs automatic VLAN tag extraction
and deletion from the ethernet frames. So there's no point in trying to
increase the rx buff size when rxvlan is on, as the frame is actually
smaller.
And the Tx vlan hw accel feature (VLINS) has nothing to do with rx buff
size computation.
Signed-off-by: Claudiu Manoil <claudiu.manoil@freescale.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|