2022-10-03  octeontx2-af: cn10k: mcs: Handle MCS block interrupts  (Geetha sowjanya, 8 files, -12/+865)
Hardware triggers an interrupt for events like the PN wrapping to zero or the PN crossing a set threshold. This interrupt is received by the MCS AF. The MCS AF then finds the PF/VF to which the SA is mapped and notifies it using the mcs_intr_notify mbox message. A PF/VF can use the mcs_intr_cfg mbox to configure the list of interrupts for which it wants to receive notifications from the AF.
Signed-off-by: Geetha sowjanya <gakula@marvell.com>
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
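As a rough illustration of this subscribe/notify flow (the structs and fields below are hypothetical, not the driver's actual mbox definitions):

/* Hypothetical sketch of the mbox pair described above. */
struct mcs_intr_cfg {		/* PF/VF -> AF: subscribe to events */
	u64 intr_mask;		/* bitmap of MCS events of interest */
};

struct mcs_intr_notify {	/* AF -> PF/VF: event report */
	u64 event;		/* which event fired, e.g. PN wrap to zero */
	u8 lmac_id;		/* port on which the event occurred */
	u8 sa_id;		/* SA mapped to the notified PF/VF */
};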
2022-10-03  octeontx2-af: cn10k: mcs: Support for stats collection  (Geetha sowjanya, 6 files, -0/+1048)
Add mailbox messages to return the resource stats to the caller. Stats for SecYs, SCs and SAs as per the macsec standard, TCAM flow id hits/misses, and a mailbox to clear the stats are implemented.
Signed-off-by: Geetha sowjanya <gakula@marvell.com>
Signed-off-by: Ankur Dwivedi <adwivedi@marvell.com>
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-03  octeontx2-af: cn10k: mcs: Install a default TCAM for normal traffic  (Geetha sowjanya, 3 files, -0/+69)
Out of all the TCAM entries, reserve the last TX and RX TCAM flow entries (low priority) so that normal traffic can be sent out and received. Traffic which needs macsec processing hits the high-priority TCAM flows. Also install an FLR handler to free the allocated resources for a PF/VF.
Signed-off-by: Geetha sowjanya <gakula@marvell.com>
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-03  octeontx2-af: cn10k: mcs: Manage the MCS block hardware resources  (Geetha sowjanya, 6 files, -1/+1530)
To establish a macsec connection association, the netdev driver needs hardware resources like SecYs, TCAM flows, SCs and SAs. This patch manages allocating, freeing and configuring those resources. AF consumers can request resources and configure them via these mailbox messages. The AF can keep allocating until it runs out of hardware resources.
Signed-off-by: Geetha sowjanya <gakula@marvell.com>
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-03  octeontx2-af: cn10k: mcs: Add mailboxes for port related operations  (Geetha sowjanya, 5 files, -4/+376)
There is a set of configurations to be done at the MCS port level, like bringing the port out of reset and making the port operational or bypassed. This patch adds all the port-related mailbox message handlers so that AF consumers can use them.
Signed-off-by: Geetha sowjanya <gakula@marvell.com>
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-03  octeontx2-af: cn10k: Introduce driver for macsec block.  (Geetha sowjanya, 8 files, -1/+678)
CN10K-B and CNF10K-B have a macsec block (MCS) to encrypt and decrypt packets at the MAC level. This block is a global resource with hardware resources like SecYs, SCs and SAs, and sits between the NIX block and the RPM LMAC. CN10K-B silicon has only one MCS block which receives packets from all LMACs, whereas CNF10K-B has seven MCS blocks for seven LMACs. Both MCS blocks are similar in operation except for a few register offsets, and some configurations require writing to different registers. Those differences between the IPs are handled using separate ops. This patch adds the basic driver and does the initial hardware calibration and parser configuration.
Signed-off-by: Geetha sowjanya <gakula@marvell.com>
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-03  net: lan966x: Add port mirroring support using tc-matchall  (Horatiu Vultur, 5 files, -1/+193)
Add support for port mirroring. It is possible to mirror only one port at a time, and it is possible to have both ingress and egress mirroring. Frames injected by the CPU don't get egress-mirrored because they bypass the analyzer module.
Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
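For reference, such an offload is typically exercised with a tc-matchall filter along these lines (interface names are illustrative):

# tc qdisc add dev eth0 clsact
# tc filter add dev eth0 ingress matchall skip_sw \
      action mirred egress mirror dev eth1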
2022-10-03  net: lan966x: Add port police support using tc-matchall  (Horatiu Vultur, 6 files, -1/+468)
Add support for port policing. It is possible to police only on the ingress side. Adding police support also required adding tc-matchall classifier offload support.
Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
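A typical way to exercise the offload (rate, burst and interface are illustrative):

# tc qdisc add dev eth0 clsact
# tc filter add dev eth0 ingress matchall skip_sw \
      action police rate 100mbit burst 10k drop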
2022-10-03  net: fec: using page pool to manage RX buffers  (Shenwei Wang, 3 files, -58/+119)
This patch optimizes the RX buffer management by using the page pool. The purpose of this change is to prepare for the upcoming XDP support. The current driver uses one frame per page for easy management.

Added the __maybe_unused attribute to the following functions to avoid compiler warnings. Those functions will be removed by a separate patch once this page pool solution is accepted.
- fec_enet_new_rxbdp
- fec_enet_copybreak

The following compares the page pool implementation with the original (non page pool) implementation. Small packet (64 bytes) results are almost the same, no matter what the implementation is, on both the i.MX8 and i.MX6SX platforms.

shenwei@5810:~/pktgen$ iperf -c 10.81.16.245 -w 2m -i 1 -l 64
------------------------------------------------------------
Client connecting to 10.81.16.245, TCP port 5001
TCP window size: 416 KByte (WARNING: requested 1.91 MByte)
------------------------------------------------------------
[  1] local 10.81.17.20 port 39728 connected with 10.81.16.245 port 5001
[ ID] Interval       Transfer     Bandwidth
[  1] 0.0000-1.0000 sec  37.0 MBytes  311 Mbits/sec
[  1] 1.0000-2.0000 sec  36.6 MBytes  307 Mbits/sec
[  1] 2.0000-3.0000 sec  37.2 MBytes  312 Mbits/sec
[  1] 3.0000-4.0000 sec  37.1 MBytes  312 Mbits/sec
[  1] 4.0000-5.0000 sec  37.2 MBytes  312 Mbits/sec
[  1] 5.0000-6.0000 sec  37.2 MBytes  312 Mbits/sec
[  1] 6.0000-7.0000 sec  37.2 MBytes  312 Mbits/sec
[  1] 7.0000-8.0000 sec  37.2 MBytes  312 Mbits/sec
[  1] 0.0000-8.0943 sec   299 MBytes  310 Mbits/sec

--- Page Pool implementation on i.MX8 ----
shenwei@5810:~$ iperf -c 10.81.16.245 -w 2m -i 1
------------------------------------------------------------
Client connecting to 10.81.16.245, TCP port 5001
TCP window size: 416 KByte (WARNING: requested 1.91 MByte)
------------------------------------------------------------
[  1] local 10.81.17.20 port 43204 connected with 10.81.16.245 port 5001
[ ID] Interval       Transfer     Bandwidth
[  1] 0.0000-1.0000 sec   111 MBytes  933 Mbits/sec
[  1] 1.0000-2.0000 sec   111 MBytes  934 Mbits/sec
[  1] 2.0000-3.0000 sec   112 MBytes  935 Mbits/sec
[  1] 3.0000-4.0000 sec   111 MBytes  933 Mbits/sec
[  1] 4.0000-5.0000 sec   111 MBytes  934 Mbits/sec
[  1] 5.0000-6.0000 sec   111 MBytes  933 Mbits/sec
[  1] 6.0000-7.0000 sec   111 MBytes  931 Mbits/sec
[  1] 7.0000-8.0000 sec   112 MBytes  935 Mbits/sec
[  1] 8.0000-9.0000 sec   111 MBytes  933 Mbits/sec
[  1] 9.0000-10.0000 sec  112 MBytes  935 Mbits/sec
[  1] 0.0000-10.0077 sec  1.09 GBytes  933 Mbits/sec

--- Non Page Pool implementation on i.MX8 ----
shenwei@5810:~$ iperf -c 10.81.16.245 -w 2m -i 1
------------------------------------------------------------
Client connecting to 10.81.16.245, TCP port 5001
TCP window size: 416 KByte (WARNING: requested 1.91 MByte)
------------------------------------------------------------
[  1] local 10.81.17.20 port 49154 connected with 10.81.16.245 port 5001
[ ID] Interval       Transfer     Bandwidth
[  1] 0.0000-1.0000 sec   104 MBytes  868 Mbits/sec
[  1] 1.0000-2.0000 sec   105 MBytes  878 Mbits/sec
[  1] 2.0000-3.0000 sec   105 MBytes  881 Mbits/sec
[  1] 3.0000-4.0000 sec   105 MBytes  879 Mbits/sec
[  1] 4.0000-5.0000 sec   105 MBytes  878 Mbits/sec
[  1] 5.0000-6.0000 sec   105 MBytes  878 Mbits/sec
[  1] 6.0000-7.0000 sec   104 MBytes  875 Mbits/sec
[  1] 7.0000-8.0000 sec   104 MBytes  875 Mbits/sec
[  1] 8.0000-9.0000 sec   104 MBytes  873 Mbits/sec
[  1] 9.0000-10.0000 sec  104 MBytes  875 Mbits/sec
[  1] 0.0000-10.0073 sec  1.02 GBytes  875 Mbits/sec

--- Page Pool implementation on i.MX6SX ----
shenwei@5810:~/pktgen$ iperf -c 10.81.16.245 -w 2m -i 1
------------------------------------------------------------
Client connecting to 10.81.16.245, TCP port 5001
TCP window size: 416 KByte (WARNING: requested 1.91 MByte)
------------------------------------------------------------
[  1] local 10.81.17.20 port 57288 connected with 10.81.16.245 port 5001
[ ID] Interval       Transfer     Bandwidth
[  1] 0.0000-1.0000 sec  78.8 MBytes  661 Mbits/sec
[  1] 1.0000-2.0000 sec  82.5 MBytes  692 Mbits/sec
[  1] 2.0000-3.0000 sec  82.4 MBytes  691 Mbits/sec
[  1] 3.0000-4.0000 sec  82.4 MBytes  691 Mbits/sec
[  1] 4.0000-5.0000 sec  82.5 MBytes  692 Mbits/sec
[  1] 5.0000-6.0000 sec  82.4 MBytes  691 Mbits/sec
[  1] 6.0000-7.0000 sec  82.5 MBytes  692 Mbits/sec
[  1] 7.0000-8.0000 sec  82.4 MBytes  691 Mbits/sec
[  1] 8.0000-9.0000 sec  82.4 MBytes  691 Mbits/sec
[  1] 9.0000-9.5506 sec  45.0 MBytes  686 Mbits/sec
[  1] 0.0000-9.5506 sec   783 MBytes  688 Mbits/sec

--- Non Page Pool implementation on i.MX6SX ----
shenwei@5810:~/pktgen$ iperf -c 10.81.16.245 -w 2m -i 1
------------------------------------------------------------
Client connecting to 10.81.16.245, TCP port 5001
TCP window size: 416 KByte (WARNING: requested 1.91 MByte)
------------------------------------------------------------
[  1] local 10.81.17.20 port 36486 connected with 10.81.16.245 port 5001
[ ID] Interval       Transfer     Bandwidth
[  1] 0.0000-1.0000 sec  70.5 MBytes  591 Mbits/sec
[  1] 1.0000-2.0000 sec  64.5 MBytes  541 Mbits/sec
[  1] 2.0000-3.0000 sec  73.6 MBytes  618 Mbits/sec
[  1] 3.0000-4.0000 sec  73.6 MBytes  618 Mbits/sec
[  1] 4.0000-5.0000 sec  72.9 MBytes  611 Mbits/sec
[  1] 5.0000-6.0000 sec  73.4 MBytes  616 Mbits/sec
[  1] 6.0000-7.0000 sec  73.5 MBytes  617 Mbits/sec
[  1] 7.0000-8.0000 sec  73.4 MBytes  616 Mbits/sec
[  1] 8.0000-9.0000 sec  73.4 MBytes  616 Mbits/sec
[  1] 9.0000-10.0000 sec  73.9 MBytes  620 Mbits/sec
[  1] 0.0000-10.0174 sec   723 MBytes  605 Mbits/sec

Signed-off-by: Shenwei Wang <shenwei.wang@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
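For context, a minimal sketch of the page pool pattern such a conversion follows (a driver-agnostic illustration, not the fec code itself):

#include <net/page_pool.h>

/* One pool per RX ring; the pool DMA-maps and recycles full pages. */
static struct page_pool *rx_pool_create(struct device *dev, int ring_size)
{
	struct page_pool_params pp = {
		.order		= 0,			/* one frame per page */
		.pool_size	= ring_size,
		.nid		= NUMA_NO_NODE,
		.dev		= dev,
		.dma_dir	= DMA_FROM_DEVICE,
		.flags		= PP_FLAG_DMA_MAP,	/* pool handles DMA mapping */
	};

	return page_pool_create(&pp);
}

On refill, pages then come from page_pool_dev_alloc_pages(pool) instead of alloc_page(); on cleanup, page_pool_put_full_page(pool, page, false) recycles them back into the pool.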
2022-10-03  net: Remove DECnet leftovers from flow.h.  (Guillaume Nault, 1 file, -26/+0)
DECnet was removed by commit 1202cdd66531 ("Remove DECnet support from kernel"). Let's also remove its flow structure. Compile-tested only (allmodconfig).
Signed-off-by: Guillaume Nault <gnault@redhat.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-03  gro: add support of (hw)gro packets to gro stack  (Coco Li, 2 files, -6/+29)
The current GRO stack only supports incoming packets containing one frame/MSS. This patch changes GRO to accept packets that have already been GROed. HW-GRO (a.k.a. RSC for some vendors) is very often limited in the presence of interleaved packets. The Linux SW GRO stack can complete the job and provide larger GRO packets, thus reducing the rate of ACK packets and CPU overhead. This also means BIG TCP can still be used, even if HW-GRO/RSC was able to cook ~64 KB GRO packets.

v2: fix logic in tcp_gro_receive()
    Only support TCP for the moment (Paolo)

Co-developed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Coco Li <lixiaoyan@google.com>
Acked-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-03  mptcp: update misleading comments.  (Paolo Abeni, 1 file, -7/+7)
The MPTCP data path is quite complex and hard to understand even without some foggy comments referring to modified code and/or being completely misleading from the beginning. Update a few of them to more accurately describe the current status.
Reviewed-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-03  selftests: mptcp: update and extend fastclose test-cases  (Paolo Abeni, 2 files, -25/+130)
After the previous patches, the MPTCP protocol can generate fast-closes on both ends of the connection. Rework the relevant test-case to carefully trigger the fast-close code-path on a single end at a time, while ensuring that a predictable amount of data is spooled on both ends. Additionally, add another test-case for the passive socket fast-close.
Reviewed-by: Matthieu Baerts <matthieu.baerts@tessares.net>
Reviewed-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-03  mptcp: use fastclose on more edge scenarios  (Paolo Abeni, 1 file, -19/+44)
Daire reported a user-space application hang-up when the peer is forcibly closed before the data transfer completes. The relevant application expects the peer to either do an application-level clean shutdown or a transport-level connection reset. We can accommodate such a user by extending the fastclose usage: at fd close time, if the msk socket has some unread data, and at FIN_WAIT timeout. Note that at MPTCP close time we must ensure that the TCP subflows will reset: set the linger socket option to a suitable value.
Reviewed-by: Matthieu Baerts <matthieu.baerts@tessares.net>
Reviewed-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
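The linger trick relies on standard TCP semantics: a linger time of zero makes close() abort the connection with a RST rather than a FIN handshake. A minimal user-space illustration of the same knob (not the kernel-side code this patch touches):

#include <sys/socket.h>

/* Force a RST on close(): l_onoff=1 with l_linger=0 aborts the
 * connection instead of performing an orderly shutdown. */
static int set_linger_reset(int fd)
{
	struct linger lg = { .l_onoff = 1, .l_linger = 0 };

	return setsockopt(fd, SOL_SOCKET, SO_LINGER, &lg, sizeof(lg));
}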
2022-10-03  mptcp: propagate fastclose error  (Paolo Abeni, 1 file, -11/+36)
When an mptcp socket is closed due to an incoming FASTCLOSE option, no specific sk_err is set and later syscalls will fail, usually with EPIPE. Align the current fastclose error handling with TCP reset, properly setting the socket error according to the current msk state and propagating such error. Additionally, sendmsg() currently does not handle sk_err properly, always returning EPIPE.
Reviewed-by: Matthieu Baerts <matthieu.baerts@tessares.net>
Reviewed-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-03  net: sfp: add support for multigig RollBall transceivers  (Marek Behún, 1 file, -10/+39)
This adds support for multigig copper SFP modules from RollBall/Hilink. These modules have a specific way to access clause 45 registers of the internal PHY. We also need to wait at least 22 seconds after deasserting TX disable before accessing the PHY. The code waits for 25 seconds just to be sure.
Signed-off-by: Marek Behún <kabel@kernel.org>
Reviewed-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-03  net: phy: mdio-i2c: support I2C MDIO protocol for RollBall SFP modules  (Marek Behún, 3 files, -7/+313)
Some multigig SFPs from RollBall and Hilink do not expose functional MDIO access to the internal PHY of the SFP via I2C address 0x56 (although there seems to be read-only clause 22 access at this address). Instead, the PHY of these SFPs can be accessed via the SFP Enhanced Digital Diagnostic Interface at I2C address 0x51. SFP_PAGE has to be set to 3 and the password must be filled with 0xff bytes for this PHY communication to work. This extends the mdio-i2c driver to support this protocol by adding a special parameter to the mdio_i2c_alloc function via which the RollBall protocol can be selected.
Signed-off-by: Marek Behún <kabel@kernel.org>
Cc: Andrew Lunn <andrew@lunn.ch>
Cc: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
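A sketch of the page-select sequence described above, assuming the standard SFF-8472 layout of the 0x51 (A2h) address space (password entry area at bytes 123..126, page select at byte 127); the function name is illustrative:

#include <linux/i2c.h>

static int rollball_select_page3(struct i2c_client *ddm) /* client at 0x51 */
{
	int i, ret;

	/* Fill the password entry area with 0xff bytes. */
	for (i = 123; i <= 126; i++) {
		ret = i2c_smbus_write_byte_data(ddm, i, 0xff);
		if (ret < 0)
			return ret;
	}

	/* Select SFP_PAGE 3, where the PHY access registers live. */
	return i2c_smbus_write_byte_data(ddm, 127, 3);
}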
2022-10-03  net: sfp: create/destroy I2C mdiobus before PHY probe/after PHY release  (Marek Behún, 2 files, -13/+57)
Instead of configuring the I2C mdiobus when the SFP driver is probed, create/destroy the mdiobus before the PHY is probed for/after it is released. This way we can tell the mdio-i2c code which protocol to use for each SFP transceiver. Move the code that determines the MDIO I2C protocol from sfp_sm_probe_for_phy() to sfp_sm_mod_probe(), where most of the SFP ID parsing is done. Don't allocate the I2C bus if no PHY is expected.
Signed-off-by: Marek Behún <kabel@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-03  net: sfp: Add and use macros for SFP quirks definitions  (Marek Behún, 1 file, -35/+26)
Add macros SFP_QUIRK(), SFP_QUIRK_M() and SFP_QUIRK_F() for defining SFP quirk table entries. Use them to deduplicate the code a little bit.
Signed-off-by: Marek Behún <kabel@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
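The macros plausibly take a vendor/part pair plus a modes hook and/or a fixup hook; a sketch of their shape (field names are assumptions, the real definitions live in drivers/net/phy/sfp.c):

#define SFP_QUIRK(_v, _p, _m, _f) \
	{ .vendor = _v, .part = _p, .modes = _m, .fixup = _f }
#define SFP_QUIRK_M(_v, _p, _m)	SFP_QUIRK(_v, _p, _m, NULL)	/* modes only */
#define SFP_QUIRK_F(_v, _p, _f)	SFP_QUIRK(_v, _p, NULL, _f)	/* fixup only */

so a table entry collapses to a single line such as
	SFP_QUIRK_F("Vendor", "Part-123", sfp_fixup_example),	/* hypothetical entry */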
2022-10-03  net: phylink: allow attaching phy for SFP modules on 802.3z mode  (Marek Behún, 1 file, -4/+1)
Some SFPs may contain an internal PHY which may in some cases want to connect with the host interface in 1000base-x/2500base-x mode. Do not fail if such a PHY is being attached in one of these PHY interface modes.
Signed-off-by: Marek Behún <kabel@kernel.org>
Reviewed-by: Russell King <rmk+kernel@armlinux.org.uk>
Reviewed-by: Pali Rohár <pali@kernel.org>
Cc: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-03  net: phy: marvell10g: select host interface configuration  (Russell King, 1 file, -0/+112)
Select the host interface configuration according to the capabilities of the host, if the host provided them. This is currently provided only when connecting a PHY that is inside an SFP.

The PHY supports several configurations of host communication:
- always communicate with the host in 10gbase-r, even if the copper speed is lower (rate matching mode),
- the same as above but using xaui/rxaui instead of 10gbase-r,
- switch the host SerDes mode between 10gbase-r, 5gbase-r, 2500base-x and sgmii according to the copper speed,
- the same as above but using xaui/rxaui instead of 10gbase-r.

This mode of host communication, called MACTYPE, is by default selected by strapping pins, but it can be changed in software. This adds support for selecting this mode according to which modes are supported by the host. This allows the kernel to support SFP modules with an 88X33X0 or 88E21X0 inside them.

Note: we use mv3310_select_mactype() for both the 88X3310 and 88X3340, although the 88X3340 does not support XAUI. This is not a problem, because the 88X3340 does not declare XAUI in its supported_interfaces, so this function will never choose that MACTYPE.
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
[ rebase, updated, also added support for 88E21X0 ]
Signed-off-by: Marek Behún <kabel@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
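The selection logic boils down to walking the host's supported-interface bitmap and picking a matching MACTYPE; roughly (the MACTYPE constants here are placeholders, not the driver's register values):

static int select_mactype_sketch(const unsigned long *interfaces)
{
	/* Prefer a SerDes mode that follows the copper speed... */
	if (test_bit(PHY_INTERFACE_MODE_SGMII, interfaces) &&
	    test_bit(PHY_INTERFACE_MODE_10GBASER, interfaces))
		return MACTYPE_FOLLOW_COPPER_SPEED;	/* placeholder */

	/* ...otherwise fall back to rate matching over fixed 10gbase-r. */
	if (test_bit(PHY_INTERFACE_MODE_10GBASER, interfaces))
		return MACTYPE_10GBASER_RATE_MATCH;	/* placeholder */

	return -EINVAL;
}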
2022-10-03  net: phy: marvell10g: Use tabs instead of spaces for indentation  (Marek Behún, 1 file, -9/+9)
Some register definitions were defined with spaces used for indentation. Change them to tabs.
Signed-off-by: Marek Behún <kabel@kernel.org>
Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-03  net: phylink: pass supported host PHY interface modes to phylib for SFP's PHYs  (Marek Behún, 2 files, -0/+21)
Pass the supported PHY interface types to phylib if the PHY we are connecting is inside an SFP, so that the PHY driver can select an appropriate host configuration mode for its interface according to the host capabilities. For example, the Marvell 88X3310 PHY inside RollBall SFP modules defaults to 10gbase-r mode on the host's side, and the marvell10g driver currently does not change this setting. But a host may not support 10gbase-r. For example, Turris Omnia only supports sgmii, 1000base-x and 2500base-x modes. The PHY can be configured to use those modes, but in order for the PHY driver to do that, it needs to know which modes are supported by the host.
Signed-off-by: Marek Behún <kabel@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-03  net: phylink: rename phylink_sfp_config()  (Russell King (Oracle), 1 file, -6/+5)
phylink_sfp_config() now only deals with configuring the MAC for a SFP containing a PHY. Rename it to be specific.
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Signed-off-by: Marek Behún <kabel@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-03  net: phylink: use phy_interface_t bitmaps for optical modules  (Russell King, 1 file, -30/+134)
Where a MAC provides a phy_interface_t bitmap, use these bitmaps to select the operating interface mode for optical SFP modules, rather than using the linkmode bitmaps.
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Marek Behún <kabel@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-03  net: sfp: augment SFP parsing with phy_interface_t bitmap  (Russell King, 9 files, -28/+78)
We currently parse the SFP EEPROM to a bitmap of ethtool link modes, and then attempt to convert the link modes to a PHY interface mode. While this works at present, there are cases where this is sub-optimal. For example, where a module can operate with several different PHY interface modes. To start addressing this, arrange for the SFP EEPROM parsing to also provide a bitmap of the possible PHY interface modes.
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Marek Behún <kabel@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
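A sketch of the idea: while walking the EEPROM capability bits, record every interface mode the module could drive (the EEPROM field names below come from struct sfp_eeprom_id; the surrounding function is illustrative):

static void sfp_parse_interfaces_sketch(const struct sfp_eeprom_id *id,
					unsigned long *interfaces)
{
	if (id->base.e10g_base_sr || id->base.e10g_base_lr)
		__set_bit(PHY_INTERFACE_MODE_10GBASER, interfaces);

	if (id->base.e1000_base_sx || id->base.e1000_base_lx ||
	    id->base.e1000_base_t)
		__set_bit(PHY_INTERFACE_MODE_1000BASEX, interfaces);
}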
2022-10-03  net: phylink: add ability to validate a set of interface modes  (Russell King (Oracle), 1 file, -7/+10)
Rather than having the ability to validate all supported interface modes or a single interface mode, introduce the ability to validate a subset of supported modes.
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
[ rebased on current net-next ]
Signed-off-by: Marek Behún <kabel@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-03  net: Add helper function to parse netlink msg of ip_tunnel_parm  (Liu Jian, 4 files, -49/+37)
Add ip_tunnel_netlink_parms() to parse the netlink message of ip_tunnel_parm. This reduces duplicate code; no actual functional changes.
Signed-off-by: Liu Jian <liujian56@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-03  net: Add helper function to parse netlink msg of ip_tunnel_encap  (Liu Jian, 5 files, -107/+44)
Add ip_tunnel_netlink_encap_parms() to parse the netlink message of ip_tunnel_encap. This reduces duplicate code; no actual functional changes.
Signed-off-by: Liu Jian <liujian56@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
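The duplication these two helpers remove looks roughly like the following per-tunnel-type attribute handling (a sketch for the ip_tunnel_parm case; see the in-tree ip_tunnel_netlink_parms() for the real version):

static void tunnel_parms_sketch(struct nlattr *data[],
				struct ip_tunnel_parm *parms)
{
	if (data[IFLA_IPTUN_LOCAL])
		parms->iph.saddr = nla_get_in_addr(data[IFLA_IPTUN_LOCAL]);

	if (data[IFLA_IPTUN_REMOTE])
		parms->iph.daddr = nla_get_in_addr(data[IFLA_IPTUN_REMOTE]);

	if (data[IFLA_IPTUN_TTL])
		parms->iph.ttl = nla_get_u8(data[IFLA_IPTUN_TTL]);

	if (data[IFLA_IPTUN_TOS])
		parms->iph.tos = nla_get_u8(data[IFLA_IPTUN_TOS]);
}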
2022-10-02  net: sched: use tc_cls_bind_class() in filter  (Zhengchao Shao, 9 files, -54/+9)
Use tc_cls_bind_class() in filter.
Signed-off-by: Zhengchao Shao <shaozhengchao@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-02  net: sched: cls_api: introduce tc_cls_bind_class() helper  (Zhengchao Shao, 1 file, -0/+12)
All the bind_class callbacks duplicate the same logic; this patch introduces the tc_cls_bind_class() helper for common usage.
Signed-off-by: Zhengchao Shao <shaozhengchao@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
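The duplicated logic has roughly this shape (a sketch; the in-tree helper lives in include/net/pkt_cls.h):

static inline void tc_cls_bind_class_sketch(u32 classid, unsigned long cl,
					    unsigned long base,
					    struct tcf_result *res,
					    struct Qdisc *q)
{
	if (res->classid != classid)
		return;

	/* A non-zero class handle binds the filter result to the class;
	 * zero unbinds it. */
	if (cl)
		__tcf_bind_filter(q, res, base);
	else
		__tcf_unbind_filter(q, res);
}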
2022-10-02  net: sched: ensure n arg not empty before call bind_class  (Zhengchao Shao, 1 file, -1/+1)
All bind_class callbacks return directly when the n argument is empty. Therefore, bind_class needs to be invoked only when n is not empty.
Signed-off-by: Zhengchao Shao <shaozhengchao@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-01  net/mlx5e: xsk: Use queue indices starting from 0 for XSK queues  (Maxim Mikityanskiy, 14 files, -205/+52)
In the initial implementation of XSK in mlx5e, XSK RQs coexisted with regular RQs in the same channel. The main idea was to allow RSS to work the same for regular traffic, without the need to reconfigure RSS to exclude XSK queues.

However, this scheme didn't prove to be beneficial, mainly because of incompatibility with other vendors. Some tools don't properly support using higher indices for XSK queues, and some tools get confused by the double amount of RQs exposed in sysfs. Some use cases are purely XSK, and allocating the same amount of unused regular RQs is a waste of resources.

This commit changes the queuing scheme to the standard one, where XSK RQs replace regular RQs on the channels where XSK sockets are open. Two RQs still exist in the channel to allow failsafe disable of XSK, but only one is exposed at a time. The next commit will achieve the desired memory save by flushing the buffers when the regular RQ is unused.

As the result of this transition:
1. It's possible to use RSS contexts over XSK RQs.
2. It's possible to dedicate all queues to XSK.
3. When XSK RQs coexist with regular RQs, the admin should make sure no unwanted traffic goes into the XSK RQs by either excluding them from RSS or setting up the XDP program to return XDP_PASS for non-XSK traffic (see the sketch after this entry).
4. When using a mixed fleet of mlx5e devices and other netdevs, the same configuration can be applied. If the application supports the fallback to copy mode on unsupported drivers, it will work too.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
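A sketch of point 3 above: an XDP program that redirects to an XSK socket only on queues that have one bound, and passes everything else to the regular stack (the map size is illustrative):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_XSKMAP);
	__uint(max_entries, 64);
	__type(key, __u32);
	__type(value, __u32);
} xsks_map SEC(".maps");

SEC("xdp")
int xsk_filter(struct xdp_md *ctx)
{
	__u32 q = ctx->rx_queue_index;

	/* Redirect only if an XSK socket is bound to this queue;
	 * non-XSK traffic falls through to the kernel stack. */
	if (bpf_map_lookup_elem(&xsks_map, &q))
		return bpf_redirect_map(&xsks_map, q, XDP_PASS);

	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";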
2022-10-01  net/mlx5e: Introduce the mlx5e_flush_rq function  (Maxim Mikityanskiy, 3 files, -24/+29)
Add a function to flush an RQ: clean up descriptors, release pages and reset the RQ. This procedure is used by the recovery flow, and it will also be used in a following commit to free some memory when switching a channel to the XSK mode.
Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-01  net/mlx5e: xsk: Support XDP metadata on XSK RQs  (Maxim Mikityanskiy, 1 file, -8/+12)
Add support for XDP metadata on XSK RQs for cross-program communication. The driver no longer calls xdp_set_data_meta_invalid; on XDP_PASS, it copies the metadata to the newly allocated SKB.
Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-01  net/mlx5e: Optimize RQ page deallocation  (Maxim Mikityanskiy, 2 files, -19/+24)
mlx5e_free_rx_mpwqe loops over all pages of a MPWQE, calling mlx5e_page_release for the ones that are not scheduled for XDP_TX or XDP_REDIRECT; and mlx5e_page_release checks whether it's an XSK RQ or a regular one for each page/XSK frame. This check can be moved outside the loop to reduce the number of branches.

mlx5e_free_rx_wqe loops over all fragments, calling mlx5e_page_release for the ones that are last in a page; and mlx5e_page_release checks whether it's an XSK RQ or a regular one for each fragment. Using the fact that XSK doesn't support multiple fragments, it can be optimized for both XSK and regular usages:
1. Make an early check for XSK and call its deallocator directly, saving 3 branches (loop condition, frag->last_in_page and selection of deallocator).
2. Call the regular deallocator directly in the non-XSK case, saving a branch per fragment, except the first one.

After the changes, mlx5e_page_release is removed, as there are no callers left.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-01  net/mlx5e: Call mlx5e_page_release_dynamic directly where possible  (Maxim Mikityanskiy, 1 file, -16/+4)
mlx5e_page_release calls the appropriate deallocator depending on whether it's an XSK RQ or a regular one. Some flows that call this function are not compatible with XSK, so they can call the non-XSK deallocator directly to save a branch.
Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-01  net/mlx5e: Use non-XSK page allocator in SHAMPO  (Maxim Mikityanskiy, 1 file, -11/+1)
The SHAMPO flow is not compatible with XSK, so it can call the page pool allocator directly to save a branch. mlx5e_page_alloc is removed, as it's no longer used in any flow.
Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-01  net/mlx5e: xsk: Use xsk_buff_alloc_batch on striding RQ  (Maxim Mikityanskiy, 4 files, -48/+106)
XSK provides a function to allocate frames in batches for more efficient processing. This commit starts using this function on striding RQ and creates an optimized flow for XSK. A side effect is an opportunity to optimize the regular RX flow by dropping branching for XSK cases.

Performance improvement is up to 6.4% in the aligned mode and up to 7.5% in the unaligned mode.

Aligned mode, 2048-byte frames: 12.9 Mpps -> 13.8 Mpps
Aligned mode, 4096-byte frames: 11.8 Mpps -> 12.5 Mpps
Unaligned mode, 2048-byte frames: 11.9 Mpps -> 12.8 Mpps
Unaligned mode, 3072-byte frames: 11.4 Mpps -> 12.1 Mpps
Unaligned mode, 4096-byte frames: 11.0 Mpps -> 11.2 Mpps

CPU: Intel(R) Xeon(R) Gold 6240 CPU @ 2.60GHz

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-01  net/mlx5e: xsk: Use xsk_buff_alloc_batch on legacy RQ  (Maxim Mikityanskiy, 4 files, -0/+55)
XSK provides a function to allocate frames in batches for more efficient processing. This commit starts using this function on legacy RQ, adding a special case for XSK. The new branch introduced basically replaces the branch that was removed from the same place a few commits before.

A check is made that DMA sync is not needed, because the batching allocator falls back to returning one frame when DMA sync is needed, and this is best handled by the loop in the standard case.

Performance improvement is up to 8% in the aligned mode and up to 9% in the unaligned mode.

Aligned mode, 2048-byte frames: 12.8 Mpps -> 13.5 Mpps
Aligned mode, 4096-byte frames: 11.5 Mpps -> 12.4 Mpps
Unaligned mode, 2048-byte frames: 12.2 Mpps -> 13.4 Mpps
Unaligned mode, 3072-byte frames: 11.6 Mpps -> 12.5 Mpps
Unaligned mode, 4096-byte frames: 11.2 Mpps -> 12.2 Mpps

CPU: Intel(R) Xeon(R) Gold 6240 CPU @ 2.60GHz

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
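The batching API pattern looks roughly like this (a simplified sketch, not the driver's actual refill code; names are illustrative):

#include <net/xdp_sock_drv.h>

static u32 xsk_refill_sketch(struct xsk_buff_pool *pool,
			     struct xdp_buff **buffs, u32 want)
{
	u32 done, i;

	/* Returns as many frames as it could get, possibly fewer than
	 * 'want' (e.g. it degrades to per-frame allocation when a DMA
	 * sync is needed). */
	done = xsk_buff_alloc_batch(pool, buffs, want);

	for (i = 0; i < done; i++) {
		dma_addr_t addr = xsk_buff_xdp_get_frame_dma(buffs[i]);
		/* ... post 'addr' into RX descriptor i ... */
	}

	return done;
}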
2022-10-01  net/mlx5e: xsk: Split out WQE allocation for legacy XSK RQ  (Maxim Mikityanskiy, 3 files, -4/+34)
Allocation of XSK frames on legacy RQ may be made more efficient with a specialized routine that relies on certain assumptions: there is only one fragment, and allocation units (XSK frames) are not shared among multiple packets. It reduces the number of branches both in the XSK code and in the regular RQ, because with this approach there is only a single check whether it's an XSK or regular RQ.
Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-01  net/mlx5e: Remove the outer loop when allocating legacy RQ WQEs  (Maxim Mikityanskiy, 2 files, -22/+17)
Legacy RQ WQEs are allocated in a loop in small batches (8 WQEs). As partial batches are allowed, there is no point in having a loop in a loop, so the outer loop is removed, and the batch size is increased up to the total number of WQEs to allocate, still no smaller than 8.
Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-01  net/mlx5e: xsk: Use partial batches in legacy RQ with XSK  (Maxim Mikityanskiy, 1 file, -13/+1)
The previous commit allowed allocating WQE batches in legacy RQ partially; however, XSK still checks whether there are enough frames in the fill ring. Remove this check to allow allocating batches partially with XSK as well.
Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-01  net/mlx5e: Use partial batches in legacy RQ  (Maxim Mikityanskiy, 1 file, -18/+21)
Legacy RQ allocates WQEs in batches. If the batch allocation fails, the pages of the allocated part are released. This commit changes this behavior to allow using the pages that have already been allocated.

After this change, we need to be careful about indexing rq->wqe.frags[]. The WQ size is a power of two that divides by wqe_bulk (8), and the old code used whole bulks, which allowed using indices [8*K; 8*K+7] without overflowing. Now that the bulks may be partial, the range can start at any location (not only at 8*K), so we need to wrap the indices around to avoid out-of-bounds array access.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
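The wrap-around is cheap because the WQ size is a power of two, so a bitmask can stand in for a modulo; a sketch of the indexing (variable names are illustrative):

u32 mask = wq_size - 1;			/* e.g. wq_size = 1024 -> mask = 0x3ff */
u32 i, j;

for (i = 0; i < bulk; i++) {
	j = (ix + i) & mask;		/* same as (ix + i) % wq_size */
	/* ... safely access rq->wqe.frags[j] even when ix + i >= wq_size ... */
}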
2022-10-01  net/mlx5e: Make the wqe_index_mask calculation more exact  (Maxim Mikityanskiy, 1 file, -1/+20)
The old calculation of wqe_index_mask may give false positives, i.e. request bulking of pairs of WQEs when not strictly needed. For example, when the first fragment size is equal to PAGE_SIZE, bulking is not needed, even if the number of fragments is odd. Make the calculation more exact to cut false positives.
Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-01  net/mlx5e: Introduce wqe_index_mask for legacy RQ  (Maxim Mikityanskiy, 2 files, -4/+22)
When fragments of different WQEs share the same page, mlx5e_post_rx_wqes must wait until the old WQE stops using the page; only then can the new WQE allocate the new page. Essentially, it means that if WQE index i is still in use, the allocation must stop before `i % bulk`, where bulk is the number of WQEs that may share the same page. As bulk is always a power of two, `i % bulk == i & (bulk - 1)`, and the new wqe_index_mask field will be equal to `bulk - 1`. At the same time, wqe_bulk remains for optimization purposes and stores `max(bulk, 8)`, which allows skipping the allocation until we have at least 8 WQEs free.
Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-01  net/mlx5e: xsk: Drop the check for XSK state in mlx5e_xsk_wakeup  (Maxim Mikityanskiy, 2 files, -4/+1)
The MLX5E_CHANNEL_STATE_XSK flag checked in mlx5e_xsk_wakeup indicates that XSK queues are open, but not necessarily activated. This check is not very useful, because:

0. Both XSK setup and netdev state transitions take the same state_lock mutex, so they can't happen at the same time.
1. If the netdev is up, xsk_is_bound can return true only when MLX5E_CHANNEL_STATE_XSK is set on the corresponding channel. mlx5e_xsk_wakeup is only called when xsk_is_bound is true.
2. If the XSK socket is bound, and the netdev is going up or down, mlx5e_xsk_wakeup can take one of two branches, depending on the return value of napi_if_scheduled_mark_missed:
2.1. True means one of two things: either NAPI was enabled at this point, which means MLX5E_CHANNEL_STATE_XSK was also set; or NAPI was disabled, and nothing really happened.
2.2. False means that NAPI was enabled by this point, which also implies MLX5E_CHANNEL_STATE_XSK was set.

Additionally, mlx5e_xsk_wakeup contains a following check for MLX5E_SQ_STATE_ENABLED on async_icosq, and this flag implies MLX5E_CHANNEL_STATE_XSK too on XSK channels. As checking this flag doesn't cut any flows, remove the check.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-01  net/mlx5e: xsk: Use mlx5e_trigger_napi_icosq for XSK wakeup  (Maxim Mikityanskiy, 1 file, -3/+1)
mlx5e_xsk_wakeup triggers an IRQ by posting a NOP to async_icosq, taking a spinlock to protect from concurrent access. There is already a function that does the same: mlx5e_trigger_napi_icosq. Use this function in mlx5e_xsk_wakeup.
Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-30  nfp: add support restart of link auto-negotiation  (Fei Qin, 1 file, -0/+33)
Add support for restarting link auto-negotiation. This may be initiated using:

# ethtool -r <intf>

Signed-off-by: Fei Qin <fei.qin@corigine.com>
Signed-off-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-30  nfp: add support for link auto negotiation  (Yinjun Zhang, 4 files, -5/+33)
Report the auto-negotiation capability if it's supported in the management firmware, and advertise it if it's enabled. Changing the port speed is not allowed when autoneg is enabled. The ethtool <intf> command displays the auto-neg capability:

# ethtool enp1s0np0
Settings for enp1s0np0:
        Supported ports: [ FIBRE ]
        Supported link modes:   Not reported
        Supported pause frame use: Symmetric
        Supports auto-negotiation: Yes
        Supported FEC modes: None RS BASER
        Advertised link modes:  Not reported
        Advertised pause frame use: Symmetric
        Advertised auto-negotiation: Yes
        Advertised FEC modes: None RS BASER
        Speed: 25000Mb/s
        Duplex: Full
        Auto-negotiation: on
        Port: FIBRE
        PHYAD: 0
        Transceiver: internal
        Link detected: yes

Signed-off-by: Yinjun Zhang <yinjun.zhang@corigine.com>
Signed-off-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>