path: root/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h

2020-01-22  net/mlx5e: Enable all available stats for uplink reps  (Vlad Buslov; 1 file, -2/+1)

Extend the stats group array of the uplink representor with all stats that are available for the PF in legacy mode, except IPsec and TLS, which are not supported. Don't output vport stats for the uplink representor, because they are already handled by the 802_3 group (under different names: {tx|rx}_{bytes|packets}_phy).

Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
Reviewed-by: Roi Dayan <roid@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>

2020-01-22  net/mlx5e: IPoIB, use separate stats groups  (Saeed Mahameed; 1 file, -0/+14)

Don't copy all of the stats groups used by the mlx5e Ethernet NIC profile; give IPoIB its own stats groups containing only the needed stats.

Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>

2020-01-22  net/mlx5e: Convert stats groups array to array of group pointers  (Saeed Mahameed; 1 file, -2/+10)

Convert the stats groups array to an array of "stats group" pointers, to allow sharing and individual selection of groups per profile, as illustrated in the next patches.

Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>

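To make the conversion concrete, here is a minimal sketch of the idea in C (the group and array names are assumptions for illustration, not the driver's exact definitions):

struct mlx5e_stats_grp { int dummy; }; /* callbacks sketched at the 2018-01-19 entry below */

static const struct mlx5e_stats_grp sw_grp, qcnt_grp, ipsec_grp; /* hypothetical groups */

/* Before: struct mlx5e_stats_grp grps[] = { ... }; one by-value list that
 * every profile had to take wholesale. After: arrays of pointers, so each
 * profile selects its own subset of shared group objects:
 */
static const struct mlx5e_stats_grp *nic_stats_grps[] = {
	&sw_grp, &qcnt_grp, &ipsec_grp,
};
static const struct mlx5e_stats_grp *ipoib_stats_grps[] = {
	&sw_grp, &qcnt_grp, /* e.g. no IPsec group for IPoIB */
};
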
2020-01-22  net/mlx5e: Declare stats groups via macro  (Saeed Mahameed; 1 file, -1/+24)

Introduce new macros to declare stats callbacks and groups, for better code reuse and for individual group selection per profile, which will be introduced in the next patches.

Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>

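A hedged sketch of what such declaration macros can look like (the macro names and the set of wired callbacks are assumptions, not the exact ones added by the patch):

#define MLX5E_DECLARE_STATS_GRP_OP_NUM_STATS(grp) \
	int mlx5e_grp_##grp##_get_num_stats(struct mlx5e_priv *priv)

#define MLX5E_DEFINE_STATS_GRP(grp) \
	const struct mlx5e_stats_grp mlx5e_stats_grp_##grp = { \
		.get_num_stats = mlx5e_grp_##grp##_get_num_stats, \
		/* .fill_strings, .fill_stats, .update_stats wired likewise */ \
	}

Each group then becomes a one-liner, e.g. MLX5E_DEFINE_STATS_GRP(sw);, keeping the per-group boilerplate in one place.
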
2020-01-22  net/mlx5e: Profile specific stats groups  (Saeed Mahameed; 1 file, -15/+22)

Attach a stats groups array to the profiles and make the stats utility functions (get_num, update, fill, fill_strings) generic, using profile->stats_grps rather than the hardcoded NIC stats groups. This will allow a future extension to have per-profile stats groups. In this patch, the mlx5e NIC and IPoIB still share the same stats groups.

Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>

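As an illustration of the generic pattern (a sketch; the helper name and the stats_grps/stats_grps_num fields are assumptions), a utility now just iterates whatever group array the profile supplies:

struct mlx5e_priv;
struct mlx5e_stats_grp { int (*get_num_stats)(struct mlx5e_priv *priv); };

/* called as: stats_total(profile->stats_grps, profile->stats_grps_num, priv) */
static int stats_total(const struct mlx5e_stats_grp * const *grps, int num,
		       struct mlx5e_priv *priv)
{
	int i, total = 0;

	for (i = 0; i < num; i++)
		total += grps[i]->get_num_stats(priv);

	return total;
}
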
2019-10-18  net/mlx5e: kTLS, Enhance TX resync flow  (Tariq Toukan; 1 file, -4/+6)

Once the kTLS TX resync function is called, it used to return a binary value, for success or failure. However, when the TLS SKB is a retransmission of the connection handshake, it initiates the resync flow (as the TCP seq check holds), while regular packet handling is expected. In this patch, we identify this case and skip the resync operation accordingly.

Counters:
- Add a counter (tls_skip_no_sync_data) to monitor this.
- Bump the dump counters up, as they are used more frequently.
- Add a missing counter descriptor declaration for tls_resync_bytes in sq_stats_desc.

Fixes: d2ead1f360e8 ("net/mlx5e: Add kTLS TX HW offload support")
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>

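Conceptually, the resync result moves from a boolean to a tri-state; a sketch (these enum names are invented for illustration, not the driver's):

enum ktls_tx_resync_result {
	KTLS_RESYNC_DONE, /* resync flow performed */
	KTLS_RESYNC_FAIL, /* error path */
	KTLS_RESYNC_SKIP, /* handshake retransmission: handle as a regular
			   * packet and bump tls_skip_no_sync_data
			   */
};
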
2019-09-05  net/mlx5e: Add port buffer's congestion counters  (Aya Levin; 1 file, -0/+2)

Add 3 counters per priority to ethtool using PPCNT:
1) rx_prio[p]_buf_discard - the number of packets discarded by the device due to lack of per-host receive buffers
2) rx_prio[p]_cong_discard - the number of packets discarded by the device due to per-host congestion
3) rx_prio[p]_marked - the number of packets ECN-marked by the device due to per-host congestion

Signed-off-by: Aya Levin <ayal@mellanox.com>
Reviewed-by: Moshe Shemesh <moshe@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>

2019-08-20  net/mlx5e: Report and recover from CQE error on ICOSQ  (Aya Levin; 1 file, -0/+2)

Add support for reporting and recovering from a completion error on the ICOSQ: deactivate the RQ and flush, then deactivate the ICOSQ. Set the queue back to ready state (firmware) and reset the ICOSQ and the RQ (software resources). Finally, activate the ICOSQ and the RQ.

Signed-off-by: Aya Levin <ayal@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>

2019-08-01  net/mlx5e: XDP, Close TX MPWQE session when no room for inline packet left  (Shay Agroskin; 1 file, -0/+3)

In MPWQE mode, when transmitting packets with XDP, a packet smaller than a certain size (set to 256 bytes) is sent inline within its WQE TX descriptor (mem-copied), in case the hardware TX queue is congested beyond a pre-defined watermark. If an MPWQE cannot contain an additional inline packet, we close this MPWQE session and send the packet inline within the next MPWQE.

To save some MPWQE session close+open operations, we don't open an MPWQE session when the contiguous room left is smaller than a certain size (set to the HW MPWQE maximum size). If there isn't enough contiguous room in the send queue, we fill it with NOPs and wrap the send queue index around. This way, qualified packets are always sent inline.

Perf tests: tested packet rate for UDP 64B multi-stream over two dual-port ConnectX-5 100Gbps NICs. CPU: Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz.

XDP_TX, with 24 channels:
| ------ | bounced packets | inlined packets | inline ratio |
| before | 113.6Mpps       | 96.3Mpps        | 84%          |
| after  | 115Mpps         | 99.5Mpps        | 86%          |

XDP_TX, with one channel:
| ------ | bounced packets | inlined packets | inline ratio |
| before | 6.7Mpps         | 0pps            | 0%           |
| after  | 6.8Mpps         | 0pps            | 0%           |

As we can see, there is an improvement in both the inline ratio and the overall packet rate for 24 channels, and no degradation in the one-channel case.

Signed-off-by: Shay Agroskin <shayag@mellanox.com>
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>

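A minimal sketch of the close-session decision described above (the descriptor budget and names are assumptions):

#include <stdbool.h>

#define MPWQE_MAX_DS 64 /* hypothetical per-session descriptor budget */

/* If one more inline packet doesn't fit, the caller closes the session
 * and sends the packet inline in the next MPWQE.
 */
static bool mpwqe_fits(unsigned int ds_used, unsigned int ds_needed)
{
	return ds_used + ds_needed <= MPWQE_MAX_DS;
}
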
2019-07-05  net/mlx5e: Add kTLS TX HW offload support  (Tariq Toukan; 1 file, -0/+14)

Add support for transmit-side kernel-TLS acceleration. Offload the crypto encryption to HW.

Per TLS connection:
- Use a separate TIS to maintain the HW context.
- Use a separate encryption key.
- Maintain static and progress HW contexts by posting the proper WQEs at creation time, or upon resync.
- Use a special DUMP opcode to replay the previous frags and sync the HW context.

To make sure the SQ is able to serve an xmit request, increase the SQ stop room to cover:
- a static params WQE,
- a progress params WQE, and
- a resync DUMP per frag.

Currently supporting TLS 1.2 with a 128-bit key size. Tested over the SimX simulator.

Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Eran Ben Elisha <eranbe@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

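A back-of-the-envelope sketch of the stop-room accounting listed above (the WQE sizes are placeholders, not the real values):

#define TLS_STATIC_PARAMS_WQEBBS   2 /* assumption */
#define TLS_PROGRESS_PARAMS_WQEBBS 1 /* assumption */
#define TLS_DUMP_WQEBBS            1 /* assumption */

/* Worst case per xmit: both param WQEs plus one DUMP per frag. */
static unsigned int ktls_tx_stop_room(unsigned int max_frags)
{
	return TLS_STATIC_PARAMS_WQEBBS + TLS_PROGRESS_PARAMS_WQEBBS +
	       max_frags * TLS_DUMP_WQEBBS;
}
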
2019-06-27  net/mlx5e: Add XSK zero-copy support  (Maxim Mikityanskiy; 1 file, -0/+30)

This commit adds support for AF_XDP zero-copy RX and TX.

We create a dedicated XSK RQ inside the channel, which means that two RQs are running simultaneously: one for non-XSK traffic and the other for XSK traffic. The regular and XSK RQs use a single ID namespace split into two halves: the lower half is regular RQs, and the upper half is XSK RQs. When any zero-copy AF_XDP socket is active, changing the number of channels is not allowed, because it would break the mapping between XSK RQ IDs and channels.

XSK requires different page allocation and release routines. Such functions as mlx5e_{alloc,free}_rx_mpwqe and mlx5e_{get,put}_rx_frag are generic enough to be used for both regular and XSK RQs, and they use the mlx5e_page_{alloc,release} wrappers around the real allocation functions. Function pointers are not used, to avoid losing performance with retpolines. Wherever it's certain that the regular (non-XSK) page release function should be used, it's called directly.

Only the stats that could be meaningful for XSK are exposed to userspace. Those that don't take part in the XSK flow are not considered. Note that we don't wait for WQEs on the XSK RQ (unlike the regular RQ), because the newer xdpsock sample doesn't provide any Fill Ring entries at the setup stage.

We create a dedicated XSK SQ in the channel. This separation has its advantages:
1. When the UMEM is closed, the XSK SQ can also be closed and stop receiving completions. If an existing SQ were used for XSK, it would continue receiving completions for the packets of the closed socket. If a new UMEM was opened at that point, it would start getting completions that don't belong to it.
2. Calculating statistics separately.

When the userspace kicks the TX, the driver triggers a hardware interrupt by posting a NOP to a dedicated XSK ICO (internal control operations) SQ, in order to trigger NAPI on the right CPU core. This XSK ICO SQ is protected by a spinlock, as the userspace application may kick the TX from any core.

Store the pointers to the UMEMs in the net device private context, independently from the kernel. This way the driver can distinguish between the zero-copy and non-zero-copy UMEMs. The kernel function xdp_get_umem_from_qid does not care about this difference, but the driver is only interested in zero-copy UMEMs; in particular, on cleanup it determines whether to close the XSK RQ and SQ or not by looking at the presence of the UMEM. Use state_lock to protect access to this area of UMEM pointers.

LRO isn't compatible with XDP, but there may be active UMEMs while XDP is off. If this is the case, don't allow LRO, to ensure XDP can be re-enabled at any time.

The validation of XSK parameters typically happens when XSK queues open. However, when the interface is down or the XDP program isn't set, it's still possible to have active AF_XDP sockets and even to open new ones, but the XSK queues will be closed. To cover these cases, perform the validation also in these flows:
1. A new UMEM is registered, but the XSK queues aren't going to be created due to a missing XDP program or the interface being down.
2. MTU changes while there are UMEMs registered.

Having this early check prevents mlx5e_open_channels from failing at a later stage, where recovery is impossible and the application has no chance to handle the error, because it got a successful return value for an MTU change or XSK open operation.

The performance testing was performed on a machine with the following configuration:
- 24 cores of Intel Xeon E5-2620 v3 @ 2.40 GHz
- Mellanox ConnectX-5 Ex with 100 Gbit/s link

The results with retpoline disabled, single stream:
txonly: 33.3 Mpps (21.5 Mpps with queue and app pinned to the same CPU)
rxdrop: 12.2 Mpps
l2fwd: 9.4 Mpps

The results with retpoline enabled, single stream:
txonly: 21.3 Mpps (14.1 Mpps with queue and app pinned to the same CPU)
rxdrop: 9.9 Mpps
l2fwd: 6.8 Mpps

Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Acked-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>

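The split RQ ID namespace described above can be pictured with a small sketch (the helper names are invented for illustration): IDs in [0, num_channels) address regular RQs, and IDs in [num_channels, 2 * num_channels) address the XSK RQs of the same channels.

#include <stdbool.h>

static bool qid_is_xsk(unsigned int qid, unsigned int num_channels)
{
	return qid >= num_channels;
}

static unsigned int qid_to_channel(unsigned int qid, unsigned int num_channels)
{
	return qid_is_xsk(qid, num_channels) ? qid - num_channels : qid;
}

This also shows why changing the number of channels is refused while zero-copy sockets are active: shrinking num_channels would silently re-bind existing XSK IDs to different channels.
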
2019-04-23  net/mlx5e: Remove unused rx_page_reuse stat  (Maxim Mikityanskiy; 1 file, -2/+0)

Remove the no longer used page_reuse stat of RQs.

Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>

2019-04-23  net/mlx5e: XDP, Inline small packets into the TX MPWQE in XDP xmit flow  (Shay Agroskin; 1 file, -0/+3)

Upon high packet rate with multiple-CPU TX workloads, much of the HCA's resources are spent on prefetching TX descriptors, thus affecting transmission rates. This patch mitigates the problem by moving some workload to the CPU and reducing the HW data prefetch overhead for small packets (<= 256B).

When forwarding packets with XDP, a packet smaller than a certain size (set to ~256 bytes) is sent inline within its WQE TX descriptor (mem-copied) when the hardware TX queue is congested beyond a pre-defined watermark. This better utilizes the HW resources (one less packet data prefetch) and allows better scalability, at the cost of CPU usage (which now memcpy's the packet into the WQE).

To load-balance between HW and CPU and get the maximum packet rate, we use watermarks to detect how congested the HW is and move the workload back and forth between HW and CPU.

Performance: tested packet rate for UDP 64B multi-stream over two dual-port ConnectX-5 100Gbps NICs. CPU: Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz (hyper-threading disabled).

XDP_TX:
|          | before | after   |       |
| 24 rings | 51Mpps | 116Mpps | +126% |
| 1 ring   | 12Mpps | 12Mpps  | same  |

XDP_REDIRECT (below is the transmit rate, not the redirection rate, which might be larger and is not affected by this patch):
|          | before  | after   |      |
| 32 rings | 64Mpps  | 92Mpps  | +43% |
| 1 ring   | 6.4Mpps | 6.4Mpps | same |

As we can see, the feature significantly improves scaling without hurting single-ring performance.

Signed-off-by: Shay Agroskin <shayag@mellanox.com>
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>

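The watermark policy above reduces to a two-condition check per packet; a hedged sketch (the 256B threshold follows the text, the watermark value is an assumption):

#include <stdbool.h>

#define XDP_INLINE_MAX_SZ 256 /* bytes, per the commit message */
#define SQ_CONG_WATERMARK 128 /* hypothetical outstanding-descriptor mark */

/* memcpy small packets into the WQE only while the HW queue is congested */
static bool xdp_should_inline(unsigned int pkt_len, unsigned int sq_outstanding)
{
	return pkt_len <= XDP_INLINE_MAX_SZ &&
	       sq_outstanding >= SQ_CONG_WATERMARK;
}
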
2019-04-23  net/mlx5e: XDP, Add TX MPWQE session counter  (Shay Agroskin; 1 file, -0/+3)

This counter tracks how many TX MPWQE sessions are started in the XDP SQ in the XDP TX/REDIRECT flow. It counts per-channel and global stats.

Signed-off-by: Shay Agroskin <shayag@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>

2019-04-09  net/mlx5e: Rx, Fixup skb checksum for packets with tail padding  (Saeed Mahameed; 1 file, -0/+4)

When an Ethernet frame with an IP payload is padded, the padding octets are not covered by the hardware checksum. Prior to the cited commit, the skb checksum was forced to be CHECKSUM_NONE when padding was detected. After it, the kernel will try to trim the padding bytes and subtract their checksum from skb->csum.

In this patch we fix up skb->csum for any IP packet with tail padding of any size, if any padding is found. The FCS case is just one special case of this general-purpose patch; hence, it is removed.

Fixes: 88078d98d1bb ("net: pskb_trim_rcsum() and CHECKSUM_COMPLETE are friends")
Cc: Eric Dumazet <edumazet@google.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>

2019-01-25  net/mlx5e: Separate between ethtool and netdev software stats folding  (Saeed Mahameed; 1 file, -1/+0)

mlx5e_grp_sw_update_stats can be called from two threads:
1) ndo_get_stats64
2) get_ethtool_stats

For this reason, and to minimize the impact of concurrency issues on 64-bit machines, mlx5e_grp_sw_update_stats folds the software stats into a temporary variable and then copies it to the global driver stats; both the ethtool and ndo statistics callbacks then use the global software stats variable to report whatever stats they need.

Actually, ndo_get_stats64 doesn't need to fold the whole software stats (mlx5e_grp_sw_update_stats); all it needs is five counters to fill the relevant fields of its rtnl_link_stats64 stats parameter. Hence this patch introduces a simpler helper function that folds software stats for ndo_get_stats64, working directly on the rtnl_link_stats64 stats parameter rather than on the global (or even a temporary) mlx5e_sw_stats variable. Since mlx5e_grp_sw_update_stats is no longer called by ndo_get_stats64, we can make it static and remove the temp var.

Unlike mlx5e_grp_sw_update_stats, the new fold-stats function doesn't need to zero out the output statistics parameter, since that is already done by the stack in dev_get_stats().

This patch fixes the stack usage of mlx5e_grp_sw_update_stats on x86 with gcc 4.9 and higher; the concurrency issue between mlx5's ndo_get_stats64 and get_ethtool_stats is resolved as well.

Fixes: 8bfaf07f7806 ("net/mlx5e: Present SW stats when state is not opened")
Reported-by: Arnd Bergmann <arnd@arndb.de>
Reported-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>

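For illustration, a condensed sketch of the new fold helper's shape (the per-channel stats layout here is an assumption; accumulating only five counters follows the commit text):

typedef unsigned long long u64; /* stand-in for the kernel type */

struct ch_sw_stats { u64 rx_packets, rx_bytes, tx_packets, tx_bytes, tx_dropped; };
struct link_stats64 { u64 rx_packets, rx_bytes, tx_packets, tx_bytes, tx_dropped; }; /* mimics the relevant rtnl_link_stats64 fields */

/* The output is already zeroed by the stack in dev_get_stats(), so the
 * helper only accumulates.
 */
static void fold_sw_stats64(const struct ch_sw_stats *ch, int nch,
			    struct link_stats64 *s)
{
	int i;

	for (i = 0; i < nch; i++) {
		s->rx_packets += ch[i].rx_packets;
		s->rx_bytes   += ch[i].rx_bytes;
		s->tx_packets += ch[i].tx_packets;
		s->tx_bytes   += ch[i].tx_bytes;
		s->tx_dropped += ch[i].tx_dropped;
	}
}
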
2018-12-20  Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net  (David S. Miller; 1 file, -2/+0)

Lots of conflicts, but happily all were cases of overlapping changes, parallel adds, and things of that nature. Thanks to Stephen Rothwell, Saeed Mahameed, and others for their guidance in these resolutions.

Signed-off-by: David S. Miller <davem@davemloft.net>

2018-12-17  net/mlx5e: Use dedicated uplink vport netdev representor  (Or Gerlitz; 1 file, -0/+1)

Currently, when running in SR-IOV switchdev mode, we use the PF netdevice as the uplink representor. This is problematic from a few aspects:
- it will break when the PF isn't the eswitch manager (e.g. smart NIC environments)
- it is misaligned with other NIC switchdev drivers
- it makes us have and maintain special code, which hurts driver quality/robustness
- and that, in turn, opens the door for future bugs

Due to each and all of the above, we move to a dedicated netdev representor for the uplink vport, in a similar manner to what is done for the VF vports. This includes the following:
1. have an uplink rep netdev as we have for VF reps
2. all reps use the same load/unload functions
3. HW stats for the uplink are based on physical port counters and not vport counters
4. link state for the uplink is managed through PAOS and not vport state
5. the uplink rep has a sysfs link to the PF PCI function and uses the PF MAC address

Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>

2018-12-13  net/mlx5e: Remove unused UDP GSO remaining counter  (Mikhael Goikhman; 1 file, -2/+0)

Remove the tx_udp_seg_rem counter from the ethtool output, as it is no longer being updated in the driver's data flow.

Fixes: 3f44899ef2ce ("net/mlx5e: Use PARTIAL_GSO for UDP segmentation")
Signed-off-by: Mikhael Goikhman <migo@mellanox.com>
Reviewed-by: Eran Ben Elisha <eranbe@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>

2018-11-19  net/mlx5e: RX, verify received packet size in Linear Striding RQ  (Moshe Shemesh; 1 file, -0/+2)

In the case of a striding RQ, we use MPWRQ (Multi Packet WQE RQ), which means that a WQE (RX descriptor) can be used for many packets, so the WQE is much bigger than the MTU. In virtualization setups where the port MTU can be larger than the VF MTU, a received packet that is bigger than the MTU won't be dropped by HW for a too-small receive WQE. If we use a linear SKB in a striding RQ, then, since each stride has room only for an MTU-size payload plus skb info, an oversized packet can lead to a crash by crossing the allocated page boundary upon the call to build_skb. So the driver needs to check the packet size and drop such packets.

Introduce a new SW RX counter, rx_oversize_pkts_sw_drop, which counts the number of packets dropped by the driver for being too large.

As a new field is added to the RQ struct, re-open the channels whenever this field is used in the datapath (i.e., in the case of a linear striding RQ).

Fixes: 619a8f2a42f1 ("net/mlx5e: Use linear SKB in Striding RQ")
Signed-off-by: Moshe Shemesh <moshe@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>

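The added datapath check is essentially one comparison; a sketch (the names are assumptions):

#include <stdbool.h>

/* Drop (and count in rx_oversize_pkts_sw_drop) any packet whose CQE byte
 * count exceeds the payload room reserved in a single stride.
 */
static bool rx_pkt_oversized(unsigned int cqe_bcnt, unsigned int stride_payload_room)
{
	return cqe_bcnt > stride_payload_room;
}
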
2018-10-03  net/mlx5e: Add new counter for aRFS rule insertion failures  (Eran Ben Elisha; 1 file, -0/+2)

Count aRFS rule insertion failures for the ethtool output. In addition, move the error print into the debug prints mechanism, as it could flood the dmesg and reduce system BW dramatically.

Signed-off-by: Eran Ben Elisha <eranbe@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>

2018-09-05  net/mlx5e: Set ECN for received packets using CQE indication  (Natali Shechtman; 1 file, -0/+2)

In a multi-host (MH) NIC scheme, a single HW port serves multiple hosts or sockets on the same host. The HW uses a mechanism in the PCIe buffer which monitors the amount of consumed PCIe buffers per host. In a certain configuration, under congestion, the HW emulates a switch doing ECN marking on packets, using the ECN indication on the completion descriptor (CQE). The driver needs to set the ECN bits on the packet SKB so that the network stack can react to that; this commit does that.

Signed-off-by: Natali Shechtman <natali@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>

2018-07-26  net/mlx5e: Add support for XDP_REDIRECT in device-out side  (Tariq Toukan; 1 file, -0/+5)

Add an implementation for the ndo_xdp_xmit callback. Dedicate a new set of XDP-SQ instances to satisfy XDP_REDIRECT requests. These instances are totally separate from the existing XDP-SQ objects that satisfy local XDP_TX actions.

Performance tests: xdp_redirect_map from ConnectX-5 to ConnectX-5. CPU: Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz. Packet rate of 64B packets:
Single queue: 7 Mpps.
Multi queue: 55 Mpps.

Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Eugenia Emantayev <eugenia@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>

2018-07-26  net/mlx5e: Re-order fields of struct mlx5e_xdpsq  (Tariq Toukan; 1 file, -1/+2)

In the downstream patch that adds support for XDP_REDIRECT-out, the XDP xmit frame function doesn't share the same run context as the NAPI that polls the XDP-SQ completion queue. Hence, we need to re-order the XDP-SQ fields to avoid cacheline false sharing. Take redirect_flush and doorbell out of the DB, into separate cachelines. Add a cacheline breaker within the stats struct.

Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>

2018-07-26  net/mlx5e: Refactor XDP counters  (Tariq Toukan; 1 file, -5/+11)

Separate the XDP counters into two sets:
(1) One set resides in the RQ stats and monitors XDP stats on the RQ side.
(2) Another set is per XDP-SQ and monitors XDP stats related to the XDP transmit flow.

Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Eugenia Emantayev <eugenia@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>

2018-07-26  net/mlx5e: Add counter for XDP redirect in RX  (Tariq Toukan; 1 file, -0/+2)

Add per-ring and total stats for received packets that go into XDP redirection.

Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>

2018-06-28  net/mlx5e: Add counter for total num of NOP operations  (Tariq Toukan; 1 file, -0/+1)

A per-ring counter for NOP operations already exists. Here I add a counter that sums them up.

Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>

2018-06-28  net/mlx5e: Add counter for MPWQE filler strides  (Tariq Toukan; 1 file, -2/+4)

Add an ethtool counter to indicate the number of strides consumed by filler CQEs.

Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>

2018-06-28  net/mlx5e: Add channel events counter  (Tariq Toukan; 1 file, -0/+2)

Add per-channel and global ethtool counters for channel events. Each event indicates an interrupt on one of the channel's completion queues.

Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>

2018-06-28  net/mlx5e: Add a counter for congested UMRs  (Tariq Toukan; 1 file, -0/+2)

Add per-ring and global ethtool counters for congested UMR requests. These events indicate congestion in the UMR handlers in HW. Such an event is inferred when there's an outstanding UMR post, yet the SW has consumed at least two additional MPWQEs in the meanwhile.

Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>

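The stated condition translates directly into a predicate; a sketch (the parameter names are assumptions):

#include <stdbool.h>

/* Congestion: a UMR post is still outstanding while SW has already
 * consumed at least two more MPWQEs.
 */
static bool umr_congested(bool umr_post_outstanding, unsigned int mpwqes_consumed_since)
{
	return umr_post_outstanding && mpwqes_consumed_since >= 2;
}
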
2018-06-28  net/mlx5e: Add NAPI statistics  (Tariq Toukan; 1 file, -0/+6)

Add per-channel and global ethtool counters for NAPI. This helps us monitor and analyze performance in general.
- ch[i]_poll: the number of times the channel's NAPI poll was invoked.
- ch[i]_arm: the number of times the channel's NAPI poll completed and armed the completion queues.
- ch[i]_aff_change: the number of times the channel's NAPI poll explicitly stopped execution on a CPU due to a change in affinity.

Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>

2018-06-28  net/mlx5e: Add XDP_TX completions statistics  (Tariq Toukan; 1 file, -0/+2)

Add per-ring and global ethtool counters for XDP_TX completions. This helps us monitor and analyze XDP_TX flow performance.

Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>

2018-06-28  net/mlx5e: Add TX completions statistics  (Tariq Toukan; 1 file, -1/+3)

Add per-ring and global ethtool counters for TX completions. This helps us monitor and analyze TX flow performance.

Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>

2018-06-28  net/mlx5e: Add UDP GSO remaining counter  (Boris Pismenny; 1 file, -0/+2)

This patch adds a counter for TX UDP GSO packets that contain a segment that is not aligned to the MSS - a remaining segment.

Signed-off-by: Boris Pismenny <borisp@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>

2018-06-01  net/mlx5e: TX, Separate cachelines of xmit and completion stats  (Tariq Toukan; 1 file, -4/+5)

Avoid false sharing of cachelines by separating the cachelines of the TX stats that are dirtied in the xmit flow from those dirtied in the completion flow.

Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>

2018-05-25  net/mlx5e: Introducing new statistics rwlock  (Shalom Lagziel; 1 file, -0/+2)

Introduce a new read/write lock that protects statistics gathering from netdev channel configuration changes. E.g., when channels are being replaced (increasing/decreasing the number of rings), prevent statistics gathering (ndo_get_stats64) from reading the statistics of inactive channels (channels that are being closed). Plus, update channel software statistics on the fly when calling ndo_get_stats64, and remove that work from the periodic stats work.

Fixes: 9218b44dcc05 ("net/mlx5e: Statistics handling refactoring")
Signed-off-by: Shalom Lagziel <shaloml@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>

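A hedged usage sketch with a generic kernel rwlock (the driver's actual lock object and call sites may differ): stats readers take the read side, channel reconfiguration takes the write side.

#include <linux/spinlock.h>

static DEFINE_RWLOCK(stats_cfg_lock); /* hypothetical lock */

static void gather_stats(void) /* e.g. from ndo_get_stats64 */
{
	read_lock(&stats_cfg_lock);
	/* fold per-channel counters of currently active channels only */
	read_unlock(&stats_cfg_lock);
}

static void replace_channels(void)
{
	write_lock(&stats_cfg_lock);
	/* swap the ring set; readers can't observe half-closed channels */
	write_unlock(&stats_cfg_lock);
}
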
2018-05-25  net/mlx5e: Move phy link down events counter out of SW stats  (Saeed Mahameed; 1 file, -3/+0)

The PHY link down events counter belongs to the phy_counters group. Although it has special handling, that doesn't mean it can't live there. Move it to the phy_counters_grp handler.

Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>

2018-05-01  net/mlx5e: TLS, Add Innova TLS TX offload data path  (Ilya Lesokhin; 1 file, -0/+9)

Implement the TLS TX offload data path according to the requirements of the TLS generic NIC offload infrastructure. A special metadata ethertype is used to pass information to the hardware.

Signed-off-by: Ilya Lesokhin <ilyal@mellanox.com>
Signed-off-by: Boris Pismenny <borisp@mellanox.com>
Acked-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2018-03-27  net/mlx5e: Recover Send Queue (SQ) from error state  (Eran Ben Elisha; 1 file, -0/+2)

An error TX completion (CQE) which arrives on a specific SQ indicates that this SQ got moved by the hardware to error state, which means all pending and incoming TX requests are dropped or will be dropped, and no further "good" CQEs will be generated for that SQ.

Before this patch, error TX completions (CQEs) were not monitored and were handled as regular CQEs. This caused the SQ to stay in an error state, making it useless for transmitting new packets.

Mitigation plan: in case of an error completion, schedule a recovery work which does the following:
- Mark the TXQ as DRV_XOFF to disable new packets from arriving from the stack.
- NAPI to flush all pending SQ WQEs (via the flush_in_error_en bit) to release SW and HW resources (SKB, DMA, etc.) and have the SQ and CQ consumer/producer indices synced.
- Modify the SQ state ERR -> RST -> RDY (restart the SQ).
- Reactivate the SQ and reset the SQ cc and pc.

If we identify two consecutive requests to recover an SQ in less than 500 msecs, drop the recover request to avoid CPU overload, as this scenario most likely happened due to a severe repeated bug.

In addition, add an SQ recover SW counter to monitor successful recoveries.

Signed-off-by: Eran Ben Elisha <eranbe@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>

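The 500 msec rate limit mentioned above boils down to a simple time check; a sketch using jiffies (the helper shape is an assumption):

#include <linux/types.h>
#include <linux/jiffies.h>

#define SQ_RECOVER_MIN_INTERVAL (HZ / 2) /* 500 msecs */

/* Refuse back-to-back recoveries: most likely a severe repeated bug. */
static bool sq_recover_allowed(unsigned long last_recover_jiffies)
{
	return time_after(jiffies,
			  last_recover_jiffies + SQ_RECOVER_MIN_INTERVAL);
}
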
2018-03-27  net/mlx5e: Dump xmit error completions  (Eran Ben Elisha; 1 file, -0/+2)

Monitor and dump xmit error completions. In addition, add an err_cqe counter to track the number of error completions per send queue.

Signed-off-by: Eran Ben Elisha <eranbe@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>

2018-03-26  net/mlx5e: Add interface down dropped packets statistics  (Moshe Shemesh; 1 file, -0/+1)

Added the following packet drop counter: RX interface down dropped packets - counts packets which were received while the ETH interface was down. This counter is shown in ethtool as a new counter called rx_if_down_packets. The implementation allocates a q_counter for the drop RQ, which gets all the received traffic while the interface is down.

Signed-off-by: Moshe Shemesh <moshe@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>

2018-03-26  net/mlx5e: Add vnic steering drop statistics  (Moshe Shemesh; 1 file, -0/+5)

Added the following packet drop counter: RX steering missed dropped packets - counts packets which were dropped due to a miss on the NIC RX steering rules. This counter is shown in ethtool as a new counter called rx_steer_missed_packets.

Signed-off-by: Moshe Shemesh <moshe@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>

2018-01-19  net/mlx5e: Extend the stats group API to have update_stats()  (Kamal Heib; 1 file, -0/+6)

Extend the stats group API to have an update_stats() callback, which will be used to fetch the hardware or software counter data.

Signed-off-by: Kamal Heib <kamalh@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>

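For orientation, a sketch of the group object after this change (member names follow the surrounding commits; the exact signatures are assumptions):

struct mlx5e_priv;

struct mlx5e_stats_grp {
	int (*get_num_stats)(struct mlx5e_priv *priv);
	int (*fill_strings)(struct mlx5e_priv *priv, char *data, int idx);
	int (*fill_stats)(struct mlx5e_priv *priv, unsigned long long *data, int idx);
	void (*update_stats)(struct mlx5e_priv *priv); /* the new callback */
};
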
2018-01-19  net/mlx5e: Add per-channel counters infrastructure, use it upon TX timeout  (Eran Ben Elisha; 1 file, -0/+6)

Add per-channel counter ch#_eq_rearm to monitor how many lost interrupt recovery actions happened upon TX timeouts.

Signed-off-by: Eran Ben Elisha <eranbe@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>

2017-11-09  net/mlx5e: Add VLAN offloads statistics  (Gal Pressman; 1 file, -0/+4)

The following counters are now exposed through ethtool -S:
rx[i]_removed_vlan_packets (per channel)
rx_removed_vlan_packets
tx[i]_added_vlan_packets (per channel)
tx_added_vlan_packets

rx_removed_vlan_packets: the number of packets that had their outer VLAN header stripped to the CQE by the hardware.
tx_added_vlan_packets: the number of packets that had their outer VLAN header inserted by the hardware.

Signed-off-by: Gal Pressman <galp@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>

2017-10-31  net/mlx5e: Switch channels counters to use stats group API  (Kamal Heib; 1 file, -45/+0)

Switch the channels counters to use the new stats group API.

Signed-off-by: Kamal Heib <kamalh@mellanox.com>
Reviewed-by: Gal Pressman <galp@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>

2017-10-31  net/mlx5e: Switch pme counters to use stats group API  (Kamal Heib; 1 file, -10/+0)

Switch the pme counters to use the new stats group API.

Signed-off-by: Kamal Heib <kamalh@mellanox.com>
Reviewed-by: Gal Pressman <galp@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>

2017-10-31  net/mlx5e: Switch per prio pfc counters to use stats group API  (Kamal Heib; 1 file, -14/+0)

Switch the per prio pfc counters to use the new stats group API.

Signed-off-by: Kamal Heib <kamalh@mellanox.com>
Reviewed-by: Gal Pressman <galp@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>

2017-10-31  net/mlx5e: Switch per prio traffic counters to use stats group API  (Kamal Heib; 1 file, -11/+0)

Switch the per prio traffic counters to use the new stats group API.

Signed-off-by: Kamal Heib <kamalh@mellanox.com>
Reviewed-by: Gal Pressman <galp@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>

2017-10-31  net/mlx5e: Switch pcie counters to use stats group API  (Kamal Heib; 1 file, -32/+0)

Switch the pcie counters to use the new stats group API.

Signed-off-by: Kamal Heib <kamalh@mellanox.com>
Reviewed-by: Gal Pressman <galp@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>