|
All DSA tag receive functions do strictly the same thing after they have located
the originating source port from their tag specific protocol:
- push ETH_HLEN bytes
- set pkt_type to PACKET_HOST
- call eth_type_trans()
- bump up counters
- call netif_receive_skb()
Factor all of that into dsa_switch_rcv(). This also makes us return a pointer to
an sk_buff, keeping us symmetric with the xmit function.
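Roughly, the shared tail of dsa_switch_rcv() becomes (a condensed sketch, not
the verbatim diff; the tag driver has already located the source port and
stripped its own header):

	skb_push(skb, ETH_HLEN);
	skb->pkt_type = PACKET_HOST;
	skb->protocol = eth_type_trans(skb, skb->dev);

	skb->dev->stats.rx_packets++;
	skb->dev->stats.rx_bytes += skb->len;

	netif_receive_skb(skb);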
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
All DSA tag receive functions need to unshare the skb before mangling it. Move
this into the generic dsa_switch_rcv() function, which allows the tag receive
functions to return their mangled skb without caring about freeing a
NULL skb.
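A sketch of the hoisted check in dsa_switch_rcv() (assuming the usual int
return convention of a packet_type handler, where 0 consumes the packet):

	skb = skb_unshare(skb, GFP_ATOMIC);
	if (!skb)
		return 0;	/* drop here, not in each tag driver */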
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
dsa_switch_rcv() already tests for dst == NULL, so there is no need to duplicate
the same check within the tag receive functions.
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
All available gso_type flags are currently in use, so
extend gso_type from 'unsigned short' to 'unsigned int'
to be able to add further flags.
We reorder struct skb_shared_info to use
two bytes of the four-byte hole before dataref.
All fields before dataref are cleared, i.e.
four bytes more than before the change.
The remaining two-byte hole is moved to the
beginning of the structure; this protects us
from immediate overwrites on out-of-bounds writes
to the sk_buff head.
Structure layout on x86-64 before the change:
struct skb_shared_info {
	unsigned char               nr_frags;        /*   0   1 */
	__u8                        tx_flags;        /*   1   1 */
	short unsigned int          gso_size;        /*   2   2 */
	short unsigned int          gso_segs;        /*   4   2 */
	short unsigned int          gso_type;        /*   6   2 */
	struct sk_buff *            frag_list;       /*   8   8 */
	struct skb_shared_hwtstamps hwtstamps;       /*  16   8 */
	u32                         tskey;           /*  24   4 */
	__be32                      ip6_frag_id;     /*  28   4 */
	atomic_t                    dataref;         /*  32   4 */

	/* XXX 4 bytes hole, try to pack */

	void *                      destructor_arg;  /*  40   8 */
	skb_frag_t                  frags[17];       /*  48 272 */
	/* --- cacheline 5 boundary (320 bytes) --- */

	/* size: 320, cachelines: 5, members: 12 */
	/* sum members: 316, holes: 1, sum holes: 4 */
};
Structure layout on x86-64 after the change:
struct skb_shared_info {
	short unsigned int          _unused;         /*   0   2 */
	unsigned char               nr_frags;        /*   2   1 */
	__u8                        tx_flags;        /*   3   1 */
	short unsigned int          gso_size;        /*   4   2 */
	short unsigned int          gso_segs;        /*   6   2 */
	struct sk_buff *            frag_list;       /*   8   8 */
	struct skb_shared_hwtstamps hwtstamps;       /*  16   8 */
	unsigned int                gso_type;        /*  24   4 */
	u32                         tskey;           /*  28   4 */
	__be32                      ip6_frag_id;     /*  32   4 */
	atomic_t                    dataref;         /*  36   4 */
	void *                      destructor_arg;  /*  40   8 */
	skb_frag_t                  frags[17];       /*  48 272 */
	/* --- cacheline 5 boundary (320 bytes) --- */

	/* size: 320, cachelines: 5, members: 13 */
};
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
For security reasons, NIC firmware does not allow VF to set its VLAN if PF
set it already. Firmware allows VF to set its VLAN if PF did not set it.
After the VF instructs the firmware to set the VLAN, the VF always
indicates (by returning 0) that the operation is successful, even when
it isn't.
Put in a mechanism for the VF's set VLAN function to receive the firmware
response code, then make that function return -EPERM if the firmware
forbids the operation.
Make that mechanism available for other functions that may, in the future,
be interested in receiving the response code from the firmware. That
mechanism involves adding new fields to struct octnic_ctrl_pkt, so make all
users of struct octnic_ctrl_pkt initialize the struct to zero before using
it; otherwise, the mechanism might act on uninitialized garbage.
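The resulting calling pattern looks roughly like this (a minimal sketch;
octnic_ctrl_pkt and octnet_send_nic_ctrl_pkt() are the driver's, but the
response-status field name below is assumed for illustration):

	struct octnic_ctrl_pkt nctrl;
	int ret;

	memset(&nctrl, 0, sizeof(nctrl));	/* zero the new fields */
	nctrl.ncmd.u64 = 0;
	nctrl.ncmd.s.cmd = OCTNET_CMD_ADD_VLAN_FILTER;
	nctrl.ncmd.s.param1 = vid;

	ret = octnet_send_nic_ctrl_pkt(oct, &nctrl);
	if (ret)
		return -EIO;
	if (nctrl.status)	/* firmware refused: PF owns the VLAN */
		return -EPERM;
	return 0;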
Signed-off-by: Felix Manlunas <felix.manlunas@cavium.com>
Signed-off-by: Derek Chickles <derek.chickles@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Prior to opening the channel we should have all the state set up to handle
interrupts. The current code does not do that; fix the bug. This bug
can result in faults in the interrupt path.
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The SMI Clause 22 and Clause 45 read/write operations are local to the
global2.c file, so make them static. This eliminates the following warnings:
drivers/net/dsa/mv88e6xxx/global2.c:571:5: warning: no previous prototype for 'mv88e6xxx_g2_smi_phy_read_c45' [-Wmissing-prototypes]
int mv88e6xxx_g2_smi_phy_read_c45(struct mv88e6xxx_chip *chip, int addr,
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
drivers/net/dsa/mv88e6xxx/global2.c:602:5: warning: no previous prototype for 'mv88e6xxx_g2_smi_phy_read_c22' [-Wmissing-prototypes]
int mv88e6xxx_g2_smi_phy_read_c22(struct mv88e6xxx_chip *chip, int addr,
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
drivers/net/dsa/mv88e6xxx/global2.c:635:5: warning: no previous prototype for 'mv88e6xxx_g2_smi_phy_write_c45' [-Wmissing-prototypes]
int mv88e6xxx_g2_smi_phy_write_c45(struct mv88e6xxx_chip *chip, int addr,
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
drivers/net/dsa/mv88e6xxx/global2.c:664:5: warning: no previous prototype for 'mv88e6xxx_g2_smi_phy_write_c22' [-Wmissing-prototypes]
int mv88e6xxx_g2_smi_phy_write_c22(struct mv88e6xxx_chip *chip, int addr,
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Suggested-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Reviewed-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
It's rather confusing that the netlink message flags are
numbered 1, 2, 4, 8, 16, 32, <unused>, 0x100. Make that
more understandable by numbering the lower ones with hex
constants as well.
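The result reads along these lines (values as in
include/uapi/linux/netlink.h):

	#define NLM_F_REQUEST		0x01	/* It is request message. */
	#define NLM_F_MULTI		0x02	/* Multipart message */
	#define NLM_F_ACK		0x04	/* Reply with ack */
	#define NLM_F_ECHO		0x08	/* Echo this request */
	#define NLM_F_DUMP_INTR		0x10	/* Dump was inconsistent */
	#define NLM_F_DUMP_FILTERED	0x20	/* Dump was filtered */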
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
In the case where nn->eth_port is NULL, the warning message
prints the port by dereferencing this NULL pointer.
Remove the dereference to avoid a crash when printing the
warning message.
Detected by CoverityScan, CID#1426198 ("Dereference after null check")
Fixes: ce22f5a2cbe3c627 ("nfp: separate high level and low level NSP headers")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Add a per-socket traffic monitoring option to illustrate the usage
of the new getsockopt SO_COOKIE. The program is based on the socket traffic
monitoring program using xt_eBPF; with the new option, the data entry
can be accessed directly using the socket cookie. The retrieved cookie
allows us to look up an element in the eBPF map for a specific socket.
Signed-off-by: Chenbo Feng <fengc@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Introduce a new getsockopt operation to retrieve the socket cookie
for a specific socket based on the socket fd. It returns a unique
non-decreasing cookie for each socket.
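Userspace usage boils down to (a sketch; error handling trimmed, and
SO_COOKIE requires uapi headers new enough to define it):

	#include <stdint.h>
	#include <sys/socket.h>

	static uint64_t get_socket_cookie(int fd)
	{
		uint64_t cookie = 0;
		socklen_t len = sizeof(cookie);

		if (getsockopt(fd, SOL_SOCKET, SO_COOKIE, &cookie, &len) < 0)
			cookie = 0;	/* older kernels: ENOPROTOOPT */
		return cookie;
	}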
Tested: https://android-review.googlesource.com/#/c/358163/
Acked-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Chenbo Feng <fengc@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch is meant to improve the performance of the Rx path.
Specifically by using build_skb we have several distinct advantages.
In the case of small frames we were previously using a copy-break approach.
This means that we were allocating a page fragment to use for skb->head,
and were having to copy the packet into that region. Both of those
operations are now avoided since we just build the skb around the data.
In the case of large frames the gains are much more significant.
Specifically we were having to allocate skb->head, and copy the headers as
before. However, in addition we were having to parse the header using
eth_get_headlen, which could be quite expensive. All of this is avoided by
building the frame around the data. I have seen gains as high as 30% when
using VXLAN, for instance, due to just the header-pulling overhead.
Finally with all this in place it also sets us up to start looking at
enabling XDP. Specifically we now have a path in which the data is in the
page and the frame is built around it. So if we parse it with XDP before
we call build_skb we can take care of any necessary processing there.
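The core of the new path is roughly (a sketch; variable names follow the
driver's but are simplified here):

	void *va = page_address(rx_buffer->page) + rx_buffer->page_offset;
	struct sk_buff *skb;

	/* build the skb around the DMA buffer instead of copying out of it */
	skb = build_skb(va - I40E_SKB_PAD, truesize);
	if (unlikely(!skb))
		return NULL;

	skb_reserve(skb, I40E_SKB_PAD);	/* headroom reserved at alloc time */
	__skb_put(skb, size);		/* size = received frame length */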
Change-ID: Id4bdd618e94473d41f892417e5d8019639e421e3
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
This patch adds padding to the start of frames to provide the headroom we
need to eventually start using build_skb. Right now we guarantee at
least NET_SKB_PAD + NET_IP_ALIGN; however, we allocate more space if more is
available. For example, on x86 the headroom should be 192 bytes.
On systems that have too large of a cache line size to support storing 1.5K
padding and shared info we default to using 3K buffers and reserve
everything that isn't used for skb_shared_info or the data buffer for
headroom.
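A sketch of the size math described above (the helper name is assumed):
guarantee the NET_SKB_PAD + NET_IP_ALIGN minimum, and hand any slack in the
half-page buffer to headroom:

	static inline int rx_headroom(int rx_buf_len)
	{
		int half_page = ALIGN(rx_buf_len, PAGE_SIZE / 2);

		/* 2048 - shared-info overhead - 1536 data = 192 on x86 */
		return max_t(int, NET_SKB_PAD + NET_IP_ALIGN,
			     SKB_WITH_OVERHEAD(half_page) - rx_buf_len);
	}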
Change-ID: I33c641c9a1ea10cf7cc484c2d20985368d2d709a
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
There are situations where adding padding to the front and back of an Rx
buffer will require that we add additional padding. Specifically if
NET_IP_ALIGN is non-zero, or the MTU size is larger than 7.5K we would need
to use 2K buffers which leaves us with no room for the padding.
To preemptively address these cases I am adding support for 3K buffers to
the Rx path so that we can provide the additional padding needed in the
event of NET_IP_ALIGN being non-zero or a cache line being greater than 64.
Change-ID: I938bc1ba611285428df39a613cd66f98e60b55c7
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
A few flags have been unused since an early commit.
Remove these definitions to reduce code clutter.
Change-ID: I3589be4622574e747013cd4dc403e18b039f4965
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
The I40E_FLAG_NEED_LINK_UPDATE was never used. Remove the flag
definitions.
Change-ID: If59d0c6b4af85ca27281f3183c54b055adb439a4
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
We can simply check both Tx and Rx queues in a single loop, rather than
repeating the loop twice.
Change-ID: Ic06f26b0e3c2620e0e33c1a2999edda488e647ad
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
Look up the MAC address via eth_platform_get_mac_address() first,
before checking what the firmware provides. We already handle the
case of re-writing the MAC-VLAN filter, so there is no need to add extra
code for this. However, update the comment where we do this to indicate
that it does impact the Open Firmware MAC address case.
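The lookup order is then (a sketch; the firmware fallback shown is
simplified):

	u8 mac_addr[ETH_ALEN];

	if (eth_platform_get_mac_address(&pf->pdev->dev, mac_addr))
		/* no platform/Open Firmware address: use what the
		 * firmware reported */
		ether_addr_copy(mac_addr, hw->mac.addr);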
Change-ID: I73e59fbe0b0e7e6f3ee9f5170d0bd3a4d5faf4db
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
This patch greatly reduces the unneeded complexity in the
i40e_detect_recover_hung_queue code path. The previous implementation
set a 'hung bit' which would then get cleared while polling. If the
detection routine was called a second time with the bit already set, we
would issue a software interrupt. This patch makes it such that if
interrupts are disabled and we have pending TX descriptors, we trigger a
software interrupt, since in the worst case the queues are already clean
and we merely take an extra interrupt.
Additionally this patch removes the workaround for lost interrupts as
calling napi_reschedule in this context can cause software interrupts to
fire on the wrong CPU.
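The resulting rule is simply (a sketch; i40e_force_wb() is the driver's
existing software-interrupt trigger, the two locals are assumed names):

	/* pending Tx work while interrupts are disarmed: kick a SW
	 * interrupt; at worst the queue was already clean and we take
	 * one extra interrupt */
	if (tx_pending && !armed)
		i40e_force_wb(vsi, tx_ring->q_vector);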
Change-ID: Iae108582a3ceb6229ed1d22e4ed6e69cf97aad8d
Signed-off-by: Alan Brady <alan.brady@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
Previously the rtnl lock was held during the whole reset procedure, which
stopped other PFs from running their reset procedures. As a result,
the reset was not handled properly and a host reset was the only way
to recover.
Change-ID: I23c0771c0303caaa7bd64badbf0c667e25142954
Signed-off-by: Maciej Sosin <maciej.sosin@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
This is a minor cleanup so that we are always updating pf->flags when we
make a change to the private flags instead of updating a mix of either
pf->flags and/or pf->hw_disabled_flags.
In addition, I went through and cleaned out all the spots where we were
using the X722 define with regard to this flag.
Lastly since we changed the logic I went through and flushed out any
redundancy and cleaned up the handling of the flags in the Tx path.
Change-ID: I79ff95a7272bb2533251ff11ef91e89ccb80b610
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
Re-word the error message displayed when adding a filter with an
invalid flow type. Additionally, report a distinct error message when
the IPv4 protocol is at fault.
Change-ID: Iba3d85b87f8d383c97c8bdd180df34a6adf3ee67
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
The client interface is only intended for use on devices that support
iWarp. Only register with the client if this is the case.
This fixes a panic when loading i40iw on X710 devices.
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Reported-by: Stefan Assmann <sassmann@kpanic.de>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
Add support for TSO and checksum hardware offloads for IPv6.
Signed-off-by: Thanneeru Srinivasulu <tsrinivasulu@cavium.com>
Signed-off-by: Sunil Goutham <sgoutham@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
MT7530 is a 7-port Gigabit Ethernet switch found on Mediatek router
platforms such as the MT7623A or MT7623N, which include a 7-port
Gigabit Ethernet MAC and a 5-port Gigabit Ethernet PHY.
Among these ports, ports 0 to 4 are user ports connecting to remote
devices, while ports 5 and 6 are CPU ports connecting to the Mediatek
Ethernet GMAC.
Port 6 can communicate with the CPU via the Mediatek Ethernet GMAC
through either TRGMII or RGMII, selected via the phy-mode property
in the dt-bindings; for port 5, only RGMII can be specified. However,
currently only port 6 is supported by this DSA driver.
The driver is written with reference to qca8k and other existing DSA
drivers. Most of the essential DSA callbacks are already supported
by the driver, including tag insertion for user port distinguishing,
port control, bridge offloading, STP setup and ethtool operations, allowing
DSA to model each user port as a standalone netdevice as the other DSA
drivers do.
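For orientation, the callbacks mentioned map onto the usual dsa_switch_ops
skeleton, roughly as follows (the ops fields are the real DSA hooks; the
mt7530_* implementations are assumed names):

	static const struct dsa_switch_ops mt7530_switch_ops = {
		.get_tag_protocol	= mt7530_get_tag_protocol,
		.setup			= mt7530_setup,
		.phy_read		= mt7530_phy_read,
		.phy_write		= mt7530_phy_write,
		.get_strings		= mt7530_get_strings,
		.get_ethtool_stats	= mt7530_get_ethtool_stats,
		.get_sset_count		= mt7530_get_sset_count,
		.port_enable		= mt7530_port_enable,
		.port_disable		= mt7530_port_disable,
		.port_stp_state_set	= mt7530_stp_state_set,
		.port_bridge_join	= mt7530_port_bridge_join,
		.port_bridge_leave	= mt7530_port_bridge_leave,
	};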
Signed-off-by: Sean Wang <sean.wang@mediatek.com>
Signed-off-by: Landen Chao <Landen.Chao@mediatek.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The patch adds the setup of the corresponding GMAC device node on the
netdev instance, which allows other modules such as DSA to find the
instance through the node in the dt-bindings using the
of_find_net_device_by_node() call.
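In essence (a sketch with assumed variable names):

	/* provider side: publish the GMAC's DT node on the net_device */
	eth->netdev[id]->dev.of_node = np;

	/* consumer side (e.g. DSA) can then resolve it */
	struct net_device *master = of_find_net_device_by_node(ethernet_np);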
Signed-off-by: Sean Wang <sean.wang@mediatek.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The patch adds the setup allowing the CDM to recognize packets carrying
the port-distinguishing tag; otherwise, these tagged packets would be
handled incorrectly by the CDM. The setup also works for general
untagged packets.
Signed-off-by: Sean Wang <sean.wang@mediatek.com>
Signed-off-by: Landen Chao <Landen.Chao@mediatek.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Add support for the 4-byte tag used for DSA port distinguishing,
allowing packets to be received and transmitted via a particular port.
The tag is inserted after the source MAC address in the Ethernet
header.
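The xmit-side insertion looks roughly like this (a sketch following the
usual tag driver pattern; the tag byte layout shown is illustrative):

	u8 *mtk_tag;

	if (skb_cow_head(skb, 4) < 0)
		return NULL;

	skb_push(skb, 4);
	memmove(skb->data, skb->data + 4, 2 * ETH_ALEN);

	mtk_tag = skb->data + 2 * ETH_ALEN;	/* right after the source MAC */
	mtk_tag[0] = 0;
	mtk_tag[1] = BIT(port);			/* destination port bitmap */
	mtk_tag[2] = 0;
	mtk_tag[3] = 0;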
Signed-off-by: Sean Wang <sean.wang@mediatek.com>
Signed-off-by: Landen Chao <Landen.Chao@mediatek.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|