path: root/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
Age  Commit message  Author  Files  Lines
2021-09-13  net: hns3: pad the short tunnel frame before sending to hardware  (Yufeng Mo, 1 file, -2/+6)
The hardware cannot handle tunnel frames shorter than 65 bytes, which causes a missing VLAN tag problem. So pad tunnel frames to 65 bytes to fix this bug. Fixes: 3db084d28dc0 ("net: hns3: Fix for vxlan tx checksum bug") Signed-off-by: Yufeng Mo <moyufeng@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
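A minimal sketch of the idea behind the fix; the helper name and the constant name are assumptions, only skb_put_padto() is the real kernel API:

    #include <linux/skbuff.h>

    #define HNS3_MIN_TUN_PKT_LEN 65U    /* assumed minimum tunnel frame length */

    /* Zero-pad short tunnel frames so the hardware keeps the VLAN tag intact. */
    static int hns3_pad_tunnel_frame(struct sk_buff *skb)
    {
        if (skb->len >= HNS3_MIN_TUN_PKT_LEN)
            return 0;

        /* skb_put_padto() pads with zeroes and frees the skb on failure */
        return skb_put_padto(skb, HNS3_MIN_TUN_PKT_LEN);
    }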
2021-09-13  net: hns3: add option to turn off page pool feature  (Yunsheng Lin, 1 file, -1/+5)
When page pool support was added to the hns3 driver, it was always enabled unconditionally, which means the split page handling in the hns3 driver is dead code. As there is a requirement to compare the performance of the driver's split page handling and the page pool, add a module parameter to support disabling the page pool. When the page pool is proved to perform better in most cases, the split page handling in the driver can be removed. Fixes: 93188e9642c3 ("net: hns3: support skb's frag page recycling based on page pool") Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
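A minimal sketch of such a module parameter (the parameter name is an assumption, not necessarily the one used by the patch):

    #include <linux/module.h>

    /* read-only at runtime; set hns3.page_pool_enabled=0 on the kernel command
     * line or at modprobe time to fall back to the split page handling */
    static bool page_pool_enabled = true;
    module_param(page_pool_enabled, bool, 0400);
    MODULE_PARM_DESC(page_pool_enabled, "enable page pool for rx buffers (default: true)");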
2021-08-31  net: hns3: remove unnecessary spaces  (Hao Chen, 1 file, -1/+1)
This patch removes some unnecessary spaces for cleanup. Signed-off-by: Hao Chen <chenhao288@hisilicon.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-31  net: hns3: add some required spaces  (Hao Chen, 1 file, -1/+1)
Add some required spaces to improve readability. Signed-off-by: Hao Chen <chenhao288@hisilicon.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-31  net: hns3: refine function hns3_set_default_feature()  (Jian Shen, 1 file, -46/+16)
Currently, the driver sets default features for netdev->features, netdev->hw_features, netdev->vlan_features and netdev->hw_enc_features separately. It's fussy, because most of the feature bits are the same. So refine it by copying the value from netdev->features. Signed-off-by: Jian Shen <shenjian15@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
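A minimal sketch of the refinement, assuming a deliberately small illustrative feature set; the fields are the standard struct net_device ones:

    #include <linux/netdevice.h>

    static void hns3_set_default_feature_sketch(struct net_device *netdev)
    {
        /* build the common feature set once ... */
        netdev->features |= NETIF_F_SG | NETIF_F_GRO | NETIF_F_RXCSUM |
                            NETIF_F_HW_CSUM | NETIF_F_TSO;

        /* ... then derive the other masks from it instead of repeating it */
        netdev->hw_features |= netdev->features;
        netdev->vlan_features |= netdev->features;
        netdev->hw_enc_features |= netdev->features;
    }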
2021-08-30  net: hnss3: use max() to simplify code  (Hao Chen, 1 file, -2/+1)
Replace the "? :" statement with max() to simplify the code. Signed-off-by: Hao Chen <chenhao288@hisilicon.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-27  net: hns3: add hns3_state_init() to do state initialization  (Huazhong Tan, 1 file, -10/+19)
To improve the readability and maintainability, add hns3_state_init() to initialize the state, and this new function will be used to add more state initialization in the future. Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-24  net: hns3: add ethtool support for CQE/EQE mode configuration  (Yufeng Mo, 1 file, -3/+3)
Add support in ethtool for switching EQE/CQE mode. Signed-off-by: Yufeng Mo <moyufeng@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-08-24  net: hns3: add support for EQE/CQE mode configuration  (Yufeng Mo, 1 file, -2/+47)
For devices whose version is V3 or above, the GL can select EQE or CQE mode, so add support for it. In CQE mode, the coalesce timer restarts when the first new completion occurs, while in EQE mode, the timer does not restart. Signed-off-by: Yufeng Mo <moyufeng@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-08-09  net: hns3: support skb's frag page recycling based on page pool  (Yunsheng Lin, 1 file, -5/+74)
This patch adds skb frag page recycling support based on the frag page support in page pool. The performance improves by 10~20% for a single-thread iperf TCP flow with IOMMU disabled when the iperf server and irq/NAPI run on different CPUs. The performance improves by about 135% (14Gbit to 33Gbit) for a single-thread iperf TCP flow when the IOMMU is in strict mode and the iperf server shares the same CPU with irq/NAPI. Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
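A minimal sketch of a frag-recycling page pool setup under the 5.15-era page pool API; the pool parameters and helper names here are illustrative, not the driver's exact ones:

    #include <linux/dma-mapping.h>
    #include <net/page_pool.h>

    static struct page_pool *hns3_page_pool_create_sketch(struct device *dev,
                                                          unsigned int ring_size)
    {
        struct page_pool_params pp_params = {
            /* let the pool DMA-map pages and hand out sub-page frags */
            .flags     = PP_FLAG_DMA_MAP | PP_FLAG_PAGE_FRAG,
            .order     = 0,
            .pool_size = ring_size,
            .nid       = dev_to_node(dev),
            .dev       = dev,
            .dma_dir   = DMA_FROM_DEVICE,
            .max_len   = PAGE_SIZE,
            .offset    = 0,
        };

        return page_pool_create(&pp_params);
    }

    /* rx buffer refill becomes a frag allocation; the pool recycles the page
     * once all frags carved from it have been released by the stack */
    static struct page *hns3_alloc_rx_frag_sketch(struct page_pool *pool,
                                                  unsigned int *offset,
                                                  unsigned int buf_size)
    {
        return page_pool_dev_alloc_frag(pool, offset, buf_size);
    }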
2021-07-27  dev_ioctl: split out ndo_eth_ioctl  (Arnd Bergmann, 1 file, -1/+1)
Most users of ndo_do_ioctl are ethernet drivers that implement the MII commands SIOCGMIIPHY/SIOCGMIIREG/SIOCSMIIREG, or hardware timestamping with SIOCSHWTSTAMP/SIOCGHWTSTAMP. Separate these from the few drivers that use ndo_do_ioctl to implement SIOCBOND, SIOCBR and SIOCWANDEV commands. This is a purely cosmetic change intended to help readers find their way through the implementation. Cc: Doug Ledford <dledford@redhat.com> Cc: Jason Gunthorpe <jgg@ziepe.ca> Cc: Jay Vosburgh <j.vosburgh@gmail.com> Cc: Veaceslav Falico <vfalico@gmail.com> Cc: Andy Gospodarek <andy@greyhouse.net> Cc: Andrew Lunn <andrew@lunn.ch> Cc: Vivien Didelot <vivien.didelot@gmail.com> Cc: Florian Fainelli <f.fainelli@gmail.com> Cc: Vladimir Oltean <olteanv@gmail.com> Cc: Leon Romanovsky <leon@kernel.org> Cc: linux-rdma@vger.kernel.org Signed-off-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Jason Gunthorpe <jgg@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-06-18  net: hns3: fix reuse conflict of the rx page  (Yunsheng Lin, 1 file, -7/+19)
In the current rx page reuse handling process, the rx page buffer may be contended between driver and stack in high-pressure scenarios. To fix this problem, we need to check whether the page is only owned by the driver at the beginning and at the end of a page, to make sure there is no reuse conflict between driver and stack when desc_cb->page_offset is rolled back to zero or increased. Fixes: fa7711b888f2 ("net: hns3: optimize the rx page reuse handling process") Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-06-16  net: hns3: use bounce buffer when rx page can not be reused  (Yunsheng Lin, 1 file, -0/+23)
Currently the rx page will be reused to receive future packets when the stack releases the previous skb quickly. If the old page can not be reused, a new page will be allocated and mapped, which consumes a lot of CPU when the IOMMU is in strict mode, especially when the application and irq/NAPI happen to run on the same CPU. So allocate a new frag and memcpy the data into it to avoid the costly IOMMU unmapping/mapping operation, and add "frag_alloc_err" and "frag_alloc" stats to the "ethtool -S ethX" cmd. The throughput improves by more than 50% when running a single thread of iperf using TCP when the IOMMU is in strict mode and iperf shares the same CPU with irq/NAPI (rx_copybreak = 2048 and mtu = 1500). Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
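A minimal sketch of the copy path with illustrative names; napi_alloc_skb() and skb_put_data() are the real kernel helpers:

    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    static struct sk_buff *hns3_rx_copybreak_sketch(struct napi_struct *napi,
                                                    void *buf, unsigned int len)
    {
        struct sk_buff *skb;

        /* small frame: copy into a freshly allocated linear skb so the
         * original rx page stays mapped and can be reused directly */
        skb = napi_alloc_skb(napi, len);
        if (unlikely(!skb))
            return NULL;    /* caller would bump frag_alloc_err */

        skb_put_data(skb, buf, len);
        return skb;
    }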
2021-06-16  net: hns3: optimize the rx page reuse handling process  (Yunsheng Lin, 1 file, -22/+22)
Currently the rx page offset is only reset to zero when all the below conditions are satisfied: 1. the rx page is only owned by the driver. 2. the rx page is reusable. 3. the page offset that is about to be given to the stack has reached the end of the page. If the page offset is over hns3_buf_size(), it means the buffer below the offset of the page is usable when the above conditions 1 & 2 are satisfied, so the page offset can be reset to zero instead of increasing the offset. We may be able to always reuse the first 4K buffer of a 64K page, which means we can limit the hot buffer size as much as possible. The above optimization is a side effect of refactoring the rx page reuse handling in order to support rx copybreak. Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-06-16  net: hns3: support dma_map_sg() for multi frags skb  (Yunsheng Lin, 1 file, -5/+106)
Using the queue based tx buffer, it is also possible to allocate a sgl buffer and use skb_to_sgvec() to convert the skb to a sgvec, in order to use dma_map_sg() to decrease the overhead of IOMMU mapping and unmapping. Firstly, it reduces the number of buffers. For example, a TCP skb may have a 66-byte header and 3 fragments of 4328, 32768, and 28064 bytes. With this patch, dma_map_sg() will combine them into two buffers, a 66-byte header and one 65160-byte fragment, by using the IOMMU. Secondly, it reduces the number of dma mappings and unmappings. All the original 4 buffers are mapped only once rather than 4 times. The throughput improves by more than 10% when running a single thread of iperf using TCP when the IOMMU is in strict mode. Suggested-by: Barry Song <song.bao.hua@hisilicon.com> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
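A minimal sketch of mapping a multi-frag skb in one go; the wrapper name is illustrative, skb_to_sgvec() and dma_map_sg() are the real APIs:

    #include <linux/dma-mapping.h>
    #include <linux/scatterlist.h>
    #include <linux/skbuff.h>

    /* returns the number of mapped segments, 0 on DMA mapping failure,
     * or a negative errno from skb_to_sgvec() */
    static int hns3_map_skb_sg_sketch(struct device *dev, struct sk_buff *skb,
                                      struct scatterlist *sgl, int max_ents)
    {
        int nents;

        sg_init_table(sgl, max_ents);
        nents = skb_to_sgvec(skb, sgl, 0, skb->len);
        if (nents < 0)
            return nents;

        /* the IOMMU may merge entries, so fewer BDs reach the hardware */
        return dma_map_sg(dev, sgl, nents, DMA_TO_DEVICE);
    }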
2021-06-16  net: hns3: add support to query tx spare buffer size for pf  (Huazhong Tan, 1 file, -2/+5)
Add support to query tx spare buffer size from configuration file, and use this info to do spare buffer initialization when the module parameter 'tx_spare_buf_size' is not specified. Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-06-16  net: hns3: use tx bounce buffer for small packets  (Yunsheng Lin, 1 file, -6/+283)
When the packet or frag size is small, it causes both security and performance issues. As DMA can't map at sub-page granularity, some extra kernel data is visible to the device. On the other hand, the overhead of dma map and unmap is huge when the IOMMU is on. So add a queue based tx shared bounce buffer and memcpy the small packet into it when the len of the xmitted skb is below tx_copybreak. Add a tx_spare_buf_size module param to set the size of the tx spare buffer, and add set/get_tunable to set or query the tx_copybreak. The throughput improves from 30 Gbps to 90+ Gbps when running 16 netperf threads with 32KB UDP message size when the IOMMU is in strict mode (tx_copybreak = 2000 and mtu = 1500). Suggested-by: Barry Song <song.bao.hua@hisilicon.com> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
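A minimal sketch of the copybreak decision; the helper name and the exact conditions are assumptions (the real driver also checks available spare-buffer space), only skb_has_frag_list() and the ETHTOOL_TX_COPYBREAK tunable id are known kernel interfaces:

    #include <linux/skbuff.h>

    /* copy small, simple skbs into the pre-mapped per-ring spare buffer so
     * only one already-mapped DMA region is handed to the hardware */
    static bool hns3_can_use_tx_bounce_sketch(struct sk_buff *skb, u32 tx_copybreak)
    {
        return skb->len <= tx_copybreak && !skb_has_frag_list(skb);
    }

The tx_copybreak threshold itself would be exposed through ethtool's get/set_tunable hooks under the ETHTOOL_TX_COPYBREAK id.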
2021-06-16  net: hns3: refactor for hns3_fill_desc() function  (Yunsheng Lin, 1 file, -39/+48)
Factor out hns3_fill_desc() so that it can be reused when adding tx bounce buffer support. Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-06-16  net: hns3: minor refactor related to desc_cb handling  (Yunsheng Lin, 1 file, -22/+18)
desc_cb is used to store mapping and freeing info for the corresponding desc, which is used in the cleaning process. There will be more desc_cb types coming up when supporting the tx bounce buffer, so change the desc_cb type to a bit-wise value in order to reduce the desc_cb type checking operations in the data path. Also move the desc_cb type definition to hns3_enet.h because it is only used in hns3_enet.c, and declare a local variable desc_cb in hns3_clear_desc() to reduce lines of code. Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
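A minimal sketch of bit-wise desc_cb types; the names approximate the driver's and the mask test is illustrative:

    #include <linux/bits.h>
    #include <linux/types.h>

    enum hns3_desc_type_sketch {
        DESC_TYPE_SKB         = BIT(0),
        DESC_TYPE_PAGE        = BIT(1),
        DESC_TYPE_BOUNCE_HEAD = BIT(2),
        DESC_TYPE_BOUNCE_ALL  = BIT(3),
    };

    /* cleanup path: one mask test replaces a chain of equality checks */
    static bool desc_cb_uses_bounce_buffer(unsigned int type)
    {
        return type & (DESC_TYPE_BOUNCE_HEAD | DESC_TYPE_BOUNCE_ALL);
    }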
2021-06-11  net: hns3: add support for PTP  (Huazhong Tan, 1 file, -0/+27)
Adds PTP support for HNS3 ethernet driver. Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: Yufeng Mo <moyufeng@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-05-31  net: hns3: add support for modify VLAN filter state  (Jian Shen, 1 file, -25/+14)
Previously, due to a hardware limitation, the port VLAN filter was effective for both the PF and its VFs simultaneously, so a single function was not able to enable/disable it separately, and the VLAN filter state was enabled by default. Now some devices support letting each function bypass the port VLAN filter, so each function can switch the VLAN filter separately. Add a capability flag to check whether the device supports modifying the VLAN filter state. If the flag is on, the user will be able to modify the VLAN filter state with ethtool -K. Furthermore, the default VLAN filter state is also changed according to whether a non-zero VLAN is used. Then the device can receive packets with any VLAN tag if only VLAN 0 is used. The function hclge_need_enable_vport_vlan_filter() is used to help implement the above changes. The VLAN filter handling for promisc mode can also be simplified by this function. Signed-off-by: Jian Shen <shenjian15@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-05-27  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Jakub Kicinski, 1 file, -59/+51)
cdc-wdm: s/kill_urbs/poison_urbs/ to fix build Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-05-25  net: hns3: switch to dim algorithm for adaptive interrupt moderation  (Huazhong Tan, 1 file, -124/+69)
The Linux kernel has support for a dynamic interrupt moderation algorithm known as "dimlib". Replace the custom driver-specific implementation of dynamic interrupt moderation with the kernel's algorithm. Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
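A minimal sketch of feeding dimlib from the NAPI poll path; the wrapper name is an assumption, dim_update_sample() and net_dim() are the dimlib APIs:

    #include <linux/dim.h>

    static void hns3_update_dim_sketch(struct dim *dim, u16 event_cnt,
                                       u64 packets, u64 bytes)
    {
        struct dim_sample sample;

        /* snapshot per-vector counters; dimlib's worker then picks the next
         * moderation profile, which the driver applies to the hardware GL */
        dim_update_sample(event_cnt, packets, bytes, &sample);
        net_dim(dim, sample);
    }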
2021-05-18  net: hns3: check the return of skb_checksum_help()  (Yunsheng Lin, 1 file, -7/+3)
Currently the return value of skb_checksum_help() is ignored, but it may return an error when it fails to allocate memory while linearizing. So add a check of the return value of skb_checksum_help(). Fixes: 76ad4f0ee747 ("net: hns3: Add support of HNS3 Ethernet Driver for hip08 SoC") Fixes: 3db084d28dc0 ("net: hns3: Fix for vxlan tx checksum bug") Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-05-18  net: hns3: fix user's coalesce configuration lost issue  (Huazhong Tan, 1 file, -44/+40)
Currently, when adaptive is on, the user's coalesce configuration may be overwritten by the dynamic one. The reason is that the user's configuration is saved in struct hns3_enet_tqp_vector, whose values may be changed by the dynamic algorithm. To fix it, use struct hns3_nic_priv instead of struct hns3_enet_tqp_vector to save and get the user's configuration. BTW, the operations of storing and restoring coalesce info in the reset process are unnecessary now, so remove them as well. Fixes: 434776a5fae2 ("net: hns3: add ethtool_ops.set_coalesce support to PF") Fixes: 7e96adc46633 ("net: hns3: add ethtool_ops.get_coalesce support to PF") Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-05-18  net: hns3: put off calling register_netdev() until client initialize complete  (Jian Shen, 1 file, -8/+8)
Currently, the netdevice is registered before client initialization completes. So there is a time window during which the netdevice is available but not yet usable. In this case, if the user tries to change the channel number or ring parameters, hns3_set_rx_cpu_rmap() may be called twice, triggering a kernel BUG:
[47199.416502] hns3 0000:35:00.0 eth1: set channels: tqp_num=1, rxfh=0
[47199.430340] hns3 0000:35:00.0 eth1: already uninitialized
[47199.438554] hns3 0000:35:00.0: rss changes from 4 to 1
[47199.511854] hns3 0000:35:00.0: Channels changed, rss_size from 4 to 1, tqps from 4 to 1
[47200.163524] ------------[ cut here ]------------
[47200.171674] kernel BUG at lib/cpu_rmap.c:142!
[47200.177847] Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
[47200.185259] Modules linked in: hclge(+) hns3(-) hns3_cae(O) hns_roce_hw_v2 hnae3 vfio_iommu_type1 vfio_pci vfio_virqfd vfio pv680_mii(O) [last unloaded: hclge]
[47200.205912] CPU: 1 PID: 8260 Comm: ethtool Tainted: G O 5.11.0-rc3+ #1
[47200.215601] Hardware name: , xxxxxx 02/04/2021
[47200.223052] pstate: 60400009 (nZCv daif +PAN -UAO -TCO BTYPE=--)
[47200.230188] pc : cpu_rmap_add+0x38/0x40
[47200.237472] lr : irq_cpu_rmap_add+0x84/0x140
[47200.243291] sp : ffff800010e93a30
[47200.247295] x29: ffff800010e93a30 x28: ffff082100584880
[47200.254155] x27: 0000000000000000 x26: 0000000000000000
[47200.260712] x25: 0000000000000000 x24: 0000000000000004
[47200.267241] x23: ffff08209ba03000 x22: ffff08209ba038c0
[47200.273789] x21: 000000000000003f x20: ffff0820e2bc1680
[47200.280400] x19: ffff0820c970ec80 x18: 00000000000000c0
[47200.286944] x17: 0000000000000000 x16: ffffb43debe4a0d0
[47200.293456] x15: fffffc2082990600 x14: dead000000000122
[47200.300059] x13: ffffffffffffffff x12: 000000000000003e
[47200.306606] x11: ffff0820815b8080 x10: ffff53e411988000
[47200.313171] x9 : 0000000000000000 x8 : ffff0820e2bc1700
[47200.319682] x7 : 0000000000000000 x6 : 000000000000003f
[47200.326170] x5 : 0000000000000040 x4 : ffff800010e93a20
[47200.332656] x3 : 0000000000000004 x2 : ffff0820c970ec80
[47200.339168] x1 : ffff0820e2bc1680 x0 : 0000000000000004
[47200.346058] Call trace:
[47200.349324] cpu_rmap_add+0x38/0x40
[47200.354300] hns3_set_rx_cpu_rmap+0x6c/0xe0 [hns3]
[47200.362294] hns3_reset_notify_init_enet+0x1cc/0x340 [hns3]
[47200.370049] hns3_change_channels+0x40/0xb0 [hns3]
[47200.376770] hns3_set_channels+0x12c/0x2a0 [hns3]
[47200.383353] ethtool_set_channels+0x140/0x250
[47200.389772] dev_ethtool+0x714/0x23d0
[47200.394440] dev_ioctl+0x4cc/0x640
[47200.399277] sock_do_ioctl+0x100/0x2a0
[47200.404574] sock_ioctl+0x28c/0x470
[47200.409079] __arm64_sys_ioctl+0xb4/0x100
[47200.415217] el0_svc_common.constprop.0+0x84/0x210
[47200.422088] do_el0_svc+0x28/0x34
[47200.426387] el0_svc+0x28/0x70
[47200.431308] el0_sync_handler+0x1a4/0x1b0
[47200.436477] el0_sync+0x174/0x180
[47200.441562] Code: 11000405 79000c45 f8247861 d65f03c0 (d4210000)
[47200.448869] ---[ end trace a01efe4ce42e5f34 ]---
The process is like below:
executing hns3_client_init
|
register_netdev()
|                          hns3_set_channels()
|                          |
hns3_set_rx_cpu_rmap()     hns3_reset_notify_uninit_enet()
|                          |
|                          quit without calling function
|                          hns3_free_rx_cpu_rmap for flag
|                          HNS3_NIC_STATE_INITED is unset.
|                          |
|                          hns3_reset_notify_init_enet()
|                          |
set HNS3_NIC_STATE_INITED  call hns3_set_rx_cpu_rmap() -- crash
Fix it by calling register_netdev() at the end of function hns3_client_init().
Fixes: 08a100689d4b ("net: hns3: re-organize vector handle") Signed-off-by: Jian Shen <shenjian15@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-05-14  net: hns3: refactor dump bd info of debugfs  (Huazhong Tan, 1 file, -1/+1)
Currently, the debugfs command for bd info is implemented by "echo xxxx > cmd" and records the information in dmesg. It's unnecessary and heavy. To improve it, add two debugfs directories "tx_bd_info" and "rx_bd_info", create a file for each queue under these two directories, and query the bd info of a specific queue by "cat tx_bd_info/tx_bd_queue*" or "cat rx_bd_info/rx_bd_queue*", returning the result to userspace rather than recording it in dmesg. The display style is below:
$ cat rx_bd_info/rx_bd_queue0
Queue 0 rx bd info:
BD_IDX  L234_INFO  PKT_LEN  SIZE...
0       0x0        60       60...
1       0x0        1512     1512...
$ cat tx_bd_info/tx_bd_queue0
Queue 0 tx bd info:
BD_IDX  ADDRESS  VLAN_TAG  SIZE...
0       0x0      0         0...
1       0x0      0         0...
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
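A minimal sketch of the per-queue debugfs scheme; the directory and file names follow the commit, while the show callback body and helper names are illustrative:

    #include <linux/debugfs.h>
    #include <linux/kernel.h>
    #include <linux/seq_file.h>

    /* "cat tx_bd_info/tx_bd_queueN" lands here; s->private is the queue
     * pointer passed to debugfs_create_file() below */
    static int hns3_tx_bd_queue_show(struct seq_file *s, void *data)
    {
        seq_puts(s, "BD_IDX ADDRESS VLAN_TAG SIZE ...\n");
        return 0;
    }
    DEFINE_SHOW_ATTRIBUTE(hns3_tx_bd_queue);

    static void hns3_dbg_create_tx_bd_files_sketch(struct dentry *root,
                                                   void **queues, int num)
    {
        struct dentry *dir = debugfs_create_dir("tx_bd_info", root);
        char name[24];
        int i;

        for (i = 0; i < num; i++) {
            snprintf(name, sizeof(name), "tx_bd_queue%d", i);
            debugfs_create_file(name, 0400, dir, queues[i],
                                &hns3_tx_bd_queue_fops);
        }
    }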
2021-05-14  net: hns3: refactor the debugfs process  (Yufeng Mo, 1 file, -1/+6)
Currently, each debugfs command needs to create a file to get the information. To better support more debugfs commands, the debugfs process is reconstructed, including the process of creating dentries and files, and obtaining information. Signed-off-by: Yufeng Mo <moyufeng@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-05-14  net: hns3: refactor out RX completion checksum  (Huazhong Tan, 1 file, -15/+17)
Only when RXD advanced layout is enabled will the checksum of the entire packet, in some cases (e.g. IP fragments), be calculated and filled in the least significant 16 bits of the unused addr field. So refactor out the handling of the RX completion checksum: adjust the location of the checksum in the RX descriptor, and use the ptype table to identify whether this kind of checksum has been calculated. Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-05-14  net: hns3: support RXD advanced layout  (Huazhong Tan, 1 file, -26/+332)
Currently, the driver gets the packet type by parsing the L3_ID/L4_ID/OL3_ID/OL4_ID from the RX descriptor, which is time-consuming. Now some new devices support RXD advanced layout, which combines the previous OL3_ID/OL4_ID into an 8-bit ptype field, so the driver gets the packet type by looking up only one table, and L3_ID/L4_ID become reserved fields. Considering compatibility, the firmware reports the capability of RXD advanced layout, and the driver identifies and enables it by default. This patch provides the basic function: identify and enable the RXD advanced layout, and refactor out hns3_rx_checksum() by using the ptype table to handle the RX checksum if supported. Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-04-30  net: hns3: use netif_tx_disable to stop the transmit queue  (Peng Li, 1 file, -1/+1)
Currently, netif_tx_stop_all_queues() is used to ensure that the xmit path is not running, but for the concurrent case it will not take effect, since netif_tx_stop_all_queues() just sets a flag without locking to indicate that the xmit queue(s) should not be run. So use netif_tx_disable() to replace netif_tx_stop_all_queues(); it takes the xmit queue lock while marking the queue stopped. Fixes: 76ad4f0ee747 ("net: hns3: Add support of HNS3 Ethernet Driver for hip08 SoC") Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
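A brief sketch of the difference; the wrapper is illustrative, the two queue-control helpers are the real kernel APIs:

    #include <linux/netdevice.h>

    static void hns3_quiesce_tx_sketch(struct net_device *netdev)
    {
        /* netif_tx_disable() takes each queue's xmit lock while marking it
         * stopped, so a concurrently running start_xmit cannot slip through;
         * netif_tx_stop_all_queues() only sets the stopped flags */
        netif_tx_disable(netdev);
    }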
2021-04-30  net: hns3: fix for vxlan gpe tx checksum bug  (Hao Chen, 1 file, -2/+3)
When skb->ip_summed is CHECKSUM_PARTIAL, for a non-tunnel UDP packet whose destination port is the IANA-assigned one, the hardware is expected to do the checksum offload, but hardware whose version is below V3 will not do the checksum offload when the UDP destination port is 4790. So fix it by doing the checksum in software for this case. Fixes: 76ad4f0ee747 ("net: hns3: Add support of HNS3 Ethernet Driver for hip08 SoC") Signed-off-by: Hao Chen <chenhao288@hisilicon.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-04-29  net: hns3: add check for HNS3_NIC_STATE_INITED in hns3_reset_notify_up_enet()  (Jian Shen, 1 file, -0/+5)
In some cases, the device is not initialized because reset failed. If another task calls hns3_reset_notify_up_enet() before the reset retry, it will cause an error due to uninitialized pointer access. So add a check for HNS3_NIC_STATE_INITED before calling hns3_nic_net_open() in hns3_reset_notify_up_enet(). Fixes: bb6b94a896d4 ("net: hns3: Add reset interface implementation in client") Signed-off-by: Jian Shen <shenjian15@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-04-12  net: hns3: Fix potential null pointer defererence of null ae_dev  (Colin Ian King, 1 file, -4/+4)
The reset_prepare and reset_done calls have a null pointer check on ae_dev; however, ae_dev is being dereferenced via the call to hns3_is_phys_func with the ae_dev->pdev argument. Fix this by performing a null pointer check on ae_dev and hence short-circuiting the dereference of ae_dev on the call to hns3_is_phys_func. Addresses-Coverity: ("Dereference before null check") Fixes: 715c58e94f0d ("net: hns3: add suspend and resume pm_ops") Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-04-08  net: hns3: add suspend and resume pm_ops  (Jiaran Zhang, 1 file, -0/+29)
To implement the system suspend/resume functions, the NIC driver needs to support: 1. When the system enters suspend mode, the driver implements the suspend callback function of the NIC device: it mutes the device, stops all RX/TX activities of the device, and unmaps the interrupt. 2. When the system enters resume mode, the driver implements the resume callback function of the NIC device and restores the device to the state before suspension. When the system enters suspend and resume mode, the NIC driver actually executes the PF function reset process. When the PFs are suspending/resuming, the VFs also enter the suspend/resume state because the PFs trigger the VFs to reset; therefore no operation is required when the VF pci_driver is suspending or resuming. Signed-off-by: Jiaran Zhang <zhangjiaran@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
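A minimal sketch of wiring suspend/resume into the PCI driver; the callback bodies are placeholders and the names are assumptions (per the commit, the real callbacks route through the PF reset prepare/rebuild path):

    #include <linux/device.h>
    #include <linux/pm.h>

    static int __maybe_unused hns3_suspend_sketch(struct device *dev)
    {
        /* quiesce the NIC: stop tx/rx and tear down interrupts,
         * i.e. the "prepare for reset" half of a PF function reset */
        return 0;
    }

    static int __maybe_unused hns3_resume_sketch(struct device *dev)
    {
        /* rebuild the NIC state, the "rebuild for reset" half */
        return 0;
    }

    static SIMPLE_DEV_PM_OPS(hns3_pm_ops_sketch, hns3_suspend_sketch,
                             hns3_resume_sketch);
    /* hooked up via the pci_driver's .driver.pm = &hns3_pm_ops_sketch */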
2021-04-08  net: hns3: change flr_prepare/flr_done function names  (Jiaran Zhang, 1 file, -4/+4)
The flr_prepare/flr_done functions are not only used in the FLR scenario, but also used in the suspend/resume. Change the function names to prepare_for_reset/rebuild_for_reset, change the flr_prepare/flr_done to reset_prepare/reset_done in hnae3_ae_ops. Signed-off-by: Jiaran Zhang <zhangjiaran@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-04-05  net: hns3: Limiting the scope of vector_ring_chain variable  (Salil Mehta, 1 file, -1/+2)
Limiting the scope of the variable vector_ring_chain to the block where it is used. Fixes: 424eb834a9be ("net: hns3: Unified HNS3 {VF|PF} Ethernet Driver for hip08 SoC") Signed-off-by: Salil Mehta <salil.mehta@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-03-29  net: hns3: add stats logging when skb padding fails  (Yunsheng Lin, 1 file, -0/+5)
skb_put_padto() may fail because of memory failure. sw_err_cnt is already used to log memory failures in hns3_skb_linearize(), so use it to log the memory failure for skb_put_padto() too. Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-03-29  net: hns3: add tx send size handling for tso skb  (Yunsheng Lin, 1 file, -7/+18)
The actual size on wire for a tso skb should be (gso_segs - 1) * hdr + skb->len instead of skb->len, which can be seen by the user with the 'ethtool -S ethX' cmd, and 'Byte Queue Limits' also uses the send size stat to do the queue limiting, so add send_bytes in the desc_cb to record the actual send size for a skb. And since send_bytes is only for the tx desc_cb and page_offset is only for the rx desc, reuse the same space for both of them. Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
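A minimal sketch of computing the on-wire bytes for a TSO skb; the function name is illustrative and the header length is approximated as L2+L3+L4 via standard helpers:

    #include <linux/skbuff.h>
    #include <linux/tcp.h>

    static unsigned int hns3_tx_send_bytes_sketch(struct sk_buff *skb)
    {
        unsigned int hdr_len;

        if (!skb_is_gso(skb))
            return skb->len;

        /* every extra segment carries its own copy of the headers */
        hdr_len = skb_transport_offset(skb) + tcp_hdrlen(skb);
        return (skb_shinfo(skb)->gso_segs - 1) * hdr_len + skb->len;
    }
    /* this value, not plain skb->len, is what would be fed to
     * netdev_tx_sent_queue()/BQL and the per-ring tx byte counter */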
2021-03-29  net: hns3: add handling for xmit skb with recursive fraglist  (Yunsheng Lin, 1 file, -41/+74)
Currently the hns3 driver only handles xmit skbs with one level of fraglist skb. Add handling for multiple levels by calling hns3_tx_bd_num() recursively when calculating the bd num and calling hns3_fill_skb_to_desc() recursively when filling the tx desc. When the skb has a fraglist level of 24, the skb is simply dropped and stats.max_recursion_level is incremented to record the error. Move the stat handling from hns3_nic_net_xmit() to hns3_nic_maybe_stop_tx() in order to handle the different error stats, and add the 'max_recursion_level' and 'hw_limitation' stats. Note that the max recursion level of 24 is chosen according to commit 48a1df65334b ("skbuff: return -EMSGSIZE in skb_to_sgvec to prevent overflow"), and that we were not able to find a testcase to verify the recursive fraglist case, so a Fixes tag is not provided. Reported-by: Barry Song <song.bao.hua@hisilicon.com> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
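A minimal sketch of the recursive BD counting; the names and the depth constant mirror the description above but are not the driver's exact code:

    #include <linux/errno.h>
    #include <linux/skbuff.h>

    #define HNS3_MAX_RECURSION_LEVEL 24    /* matches skb_to_sgvec()'s limit */

    static int hns3_tx_bd_num_sketch(struct sk_buff *skb, unsigned int level)
    {
        struct sk_buff *frag_skb;
        int bd_num, ret;

        if (level >= HNS3_MAX_RECURSION_LEVEL)
            return -EMSGSIZE;    /* caller drops the skb and bumps the stat */

        bd_num = skb_shinfo(skb)->nr_frags + 1;    /* page frags + linear part */

        skb_walk_frags(skb, frag_skb) {
            ret = hns3_tx_bd_num_sketch(frag_skb, level + 1);
            if (ret < 0)
                return ret;
            bd_num += ret;
        }

        return bd_num;
    }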
2021-03-29  net: hns3: optimize the process of queue reset  (Yufeng Mo, 1 file, -4/+4)
Currently, queues need to be reset one by one, which is inefficient. However, the queue resets of the same function are always performed at the same time. Therefore, according to the UM, the command HCLGE_OPC_CFG_RST_TRIGGER can be used to reset all queues of the same function at a time, in order to optimize the queue reset process. Signed-off-by: Yufeng Mo <moyufeng@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-03-26  net: hns3: remove redundant blank lines  (Peng Li, 1 file, -7/+0)
Remove some redundant blank lines. Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-03-22  net: hns3: refine for hns3_del_all_fd_entries()  (Jian Shen, 1 file, -10/+0)
Since only the PF driver can configure flow director rules, it's better to call hclge_del_all_fd_entries() directly in the hclge layer, rather than calling hns3_del_all_fd_entries() in the hns3 layer. Then ae_algo->ops.del_all_fd_entries can be removed. Signed-off-by: Jian Shen <shenjian15@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-02-11  net: hns3: fix return of random stack value  (Gustavo A. R. Silva, 1 file, -2/+1)
Currently, a random stack value is being returned because the variable _ret_ is not properly initialized. This variable is actually not used anymore and should be removed. Fix this by removing all instances of the variable ret and returning 0. Fixes: 64749c9c38a9 ("net: hns3: remove redundant return value of hns3_uninit_all_ring()") Addresses-Coverity-ID: 1501700 ("Uninitialized scalar variable") Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-02-09  net: hns3: remove redundant return value of hns3_uninit_all_ring()  (Huazhong Tan, 1 file, -9/+3)
Since hns3_uninit_all_ring() only returns 0, remove this redundant return value and the function declaration in hns3_enet.h. Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-02-09  net: hns3: modify some unmacthed types print parameter  (Jiaran Zhang, 1 file, -1/+1)
Fix an issue where the format specifier of a print function does not match the actual type of the argument. Signed-off-by: Jiaran Zhang <zhangjiaran@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-02-09  net: hns3: clean up unnecessary parentheses in macro definitions  (Yufeng Mo, 1 file, -1/+1)
In macro definitions, parentheses are unnecessary in some cases, such as around a function call argument or around the variable on the left side of an assignment. So remove these unnecessary parentheses according to these rules. Signed-off-by: Yufeng Mo <moyufeng@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-02-09  net: hns3: remove redundant client_setup_tc handle  (Jian Shen, 1 file, -15/+0)
Since the real tx queue number and real rx queue number are always updated when the netdev opens, it's redundant to call hclge_client_setup_tc to do the same thing. So remove it. Signed-off-by: Jian Shen <shenjian15@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-02-06  net: hns3: add support for obtaining the maximum frame size  (Yufeng Mo, 1 file, -2/+1)
Since newer hardware may support a different maximum frame size, add support for obtaining the capability from the firmware instead of using a fixed value. Signed-off-by: Yufeng Mo <moyufeng@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-02-04  net: use the new dev_page_is_reusable() instead of private versions  (Alexander Lobakin, 1 file, -11/+6)
Now we can remove a bunch of identical functions from the drivers and make them use the common dev_page_is_reusable(). All {,un}likely() checks are omitted since they are already present in this helper. Also update some comments near the call sites. Suggested-by: David Rientjes <rientjes@google.com> Suggested-by: Jakub Kicinski <kuba@kernel.org> Cc: John Hubbard <jhubbard@nvidia.com> Signed-off-by: Alexander Lobakin <alobakin@pm.me> Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
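A minimal sketch of what the conversion looks like on an rx reuse path; the wrapper and the pagecnt_bias check are illustrative, dev_page_is_reusable() is the new common helper:

    #include <linux/skbuff.h>

    static bool hns3_can_reuse_page_sketch(struct page *page, int pagecnt_bias)
    {
        /* dev_page_is_reusable() already covers the pfmemalloc and
         * NUMA-node checks drivers used to open-code */
        return dev_page_is_reusable(page) && pagecnt_bias > 0;
    }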