2019-10-22  r8169: remove rtl_hw_start_8168bef  [Heiner Kallweit, 1 file, -11/+4]
We can remove rtl_hw_start_8168bef() and use rtl_hw_start_8168b() instead, because setting register Config4 is done in rtl_jumbo_config(), which is called from rtl_hw_start().
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
2019-10-22  r8169: remove rtl_hw_start_8168dp  [Heiner Kallweit, 1 file, -8/+1]
We can remove rtl_hw_start_8168dp() because it's the same as rtl_hw_start_8168d() now.
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
2019-10-22  r8169: simplify setting PCI_EXP_DEVCTL_NOSNOOP_EN  [Heiner Kallweit, 1 file, -24/+10]
r8168b_0_hw_jumbo_enable() and r8168b_0_hw_jumbo_disable() both do the same thing: set PCI_EXP_DEVCTL_NOSNOOP_EN. We can simplify the code by moving this setting for RTL8168B to rtl_hw_start_8168().
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
2019-10-22  r8169: remove fiddling with the PCIe max read request size  [Heiner Kallweit, 1 file, -36/+4]
The attempt to improve performance by changing the PCIe max read request size was added to the vendor driver more than 10 years ago and copied to the r8169 driver. It was removed from the vendor driver long ago. Evidently it had no effect, and in my tests I didn't see any difference either. Typically the max payload size is less than 512 bytes anyway, and the PCI core takes care that the maximum supported value is set. So let's remove the fiddling with the PCIe max read request size from r8169 too.
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
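For context, the removed tuning amounted to a call like the one below (a hedged sketch; the function name and the request size are illustrative, not the exact r8169 code):

#include <linux/pci.h>

/* Sketch of the kind of PCIe tuning the patch removes. pcie_set_readrq()
 * is the standard helper for the max read request size; the PCI core
 * already clamps the value to what the device and link support.
 */
static void sketch_tune_read_request(struct pci_dev *pdev)
{
        pcie_set_readrq(pdev, 4096);    /* 4096 is illustrative only */
}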
2019-10-22  net/smc: remove close abort worker  [Ursula Braun, 4 files, -10/+19]
With the introduction of the link group termination worker there is no longer a need to postpone smc_close_active_abort() to a worker. To protect against premature socket destruction during normal and abnormal socket closing, the socket refcount is increased.
Signed-off-by: Ursula Braun <ubraun@linux.ibm.com>
Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
2019-10-22  net/smc: introduce link group termination worker  [Ursula Braun, 4 files, -6/+22]
Use a worker for link group termination to guarantee process context.
Signed-off-by: Ursula Braun <ubraun@linux.ibm.com>
Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
2019-10-22  net/smc: improve abnormal termination of link groups  [Ursula Braun, 1 file, -11/+29]
If a link group and its connections must be terminated,
* wake up socket waiters
* do not enable buffer reuse
A link group might be terminated while normal connection closing is running. Avoid buffer reuse and its related LLC DELETE RKEY call if link group termination has started, and use the earliest possible indication of link group termination, namely the removal from the link group list.
Signed-off-by: Ursula Braun <ubraun@linux.ibm.com>
Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
2019-10-22  net/smc: tell peers about abnormal link group termination  [Ursula Braun, 3 files, -6/+6]
There are many link group termination scenarios. Most of them still allow the peers of the terminating sockets to be informed about the abort. This patch tries to call smc_close_abort() for terminating sockets, and the internal TCP socket is reset with tcp_abort().
Signed-off-by: Ursula Braun <ubraun@linux.ibm.com>
Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
2019-10-22  net/smc: improve link group freeing  [Ursula Braun, 2 files, -17/+32]
Usually link groups are freed with a delay, to enable quick connection creation for a follow-on SMC socket. Terminated link groups are freed faster. This patch makes sure that a fast schedule of link group freeing is not superseded by a delayed schedule, and that link group freeing is not rescheduled if the actual freeing is already running.
Signed-off-by: Ursula Braun <ubraun@linux.ibm.com>
Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
2019-10-22  net/smc: improve abnormal termination locking  [Ursula Braun, 1 file, -5/+8]
The locking hierarchy requires that the link group conns_lock can be taken while the socket lock is held, but not vice versa. Nevertheless, socket termination during abnormal link group termination should be protected by the socket lock. This patch reduces the periods during which the link group conns_lock is held, to enable the use of lock_sock() in smc_lgr_terminate().
Signed-off-by: Ursula Braun <ubraun@linux.ibm.com>
Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
2019-10-22  net/smc: terminate link group without holding lgr lock  [Ursula Braun, 1 file, -8/+17]
When a link group is to be terminated, it is sufficient to hold the lgr lock when unlinking the link group from its list. Move the lock-protected link group unlinking into smc_lgr_terminate().
Signed-off-by: Ursula Braun <ubraun@linux.ibm.com>
Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
2019-10-22  net/smc: cancel send and receive for terminated socket  [Ursula Braun, 6 files, -17/+32]
The resources for a terminated socket are being cleaned up. This patch makes sure
* no more data is received for an actively terminated socket
* no more data is sent for an actively or passively terminated socket
Signed-off-by: Ursula Braun <ubraun@linux.ibm.com>
Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
2019-10-22  mlxsw: core: Extend QSFP EEPROM size for ethtool  [Vadim Pasternak, 1 file, -6/+17]
Extend the size of the QSFP EEPROM for the cable types SFF-8436 and SFF-8636 from 256 to 640 bytes, in order to expose all the EEPROM pages through ethtool. For the SFF-8636 and SFF-8436 specifications, the driver exposed 256 bytes of data in ethtool's get_module_eeprom() callback, because it used the defines 'ETH_MODULE_SFF_8636_LEN' and 'ETH_MODULE_SFF_8436_LEN' (both 256) to specify the SFF module length in ethtool's get_module_info() callback. As a result of exposing only 256 bytes, ethtool showed wrong "zero" info for pages 1, 2 and 3. The patch changes the length returned by get_module_info() to the values of the defines 'ETH_MODULE_SFF_8636_MAX_LEN' and 'ETH_MODULE_SFF_8436_MAX_LEN' (both 640), to allow exposing upper pages 1, 2 and 3.
Signed-off-by: Vadim Pasternak <vadimp@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
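The pattern being changed looks roughly like this (a hedged sketch; the real driver derives the module type from the EEPROM identifier rather than hard-coding it):

#include <linux/ethtool.h>
#include <linux/netdevice.h>

/* Sketch: report the _MAX_ lengths so ethtool reads all upper pages
 * instead of only the first 256 bytes.
 */
static int sketch_get_module_info(struct net_device *dev,
                                  struct ethtool_modinfo *modinfo)
{
        modinfo->type = ETH_MODULE_SFF_8636;
        /* was ETH_MODULE_SFF_8636_LEN (256) */
        modinfo->eeprom_len = ETH_MODULE_SFF_8636_MAX_LEN;      /* 640 */
        return 0;
}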
2019-10-22  mlxsw: reg: Add macro for getting QSFP module EEPROM page number  [Vadim Pasternak, 1 file, -0/+9]
Provide a macro for getting the QSFP module EEPROM page number from the optional upper page number row offset specified in the request.
Signed-off-by: Vadim Pasternak <vadimp@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
2019-10-22  xen/netback: cleanup init and deinit code  [Juergen Gross, 1 file, -60/+54]
Do some cleanup of the netback init and deinit code:
- add an omnipotent queue deinit function usable from xenvif_disconnect_data() and the error path of xenvif_connect_data()
- only install the irq handlers after initializing all relevant items (especially the kthreads related to the queue)
- there is no need to use get_task_struct() after creating a kthread, nor put_task_struct() after having stopped it
- use kthread_run() instead of kthread_create() to spare the call to wake_up_process()
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Paul Durrant <pdurrant@gmail.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
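The kthread part of the cleanup follows a standard kernel idiom, sketched below (field and function names follow xen-netback but are illustrative here, not the exact diff):

#include <linux/err.h>
#include <linux/kthread.h>

/* Sketch: kthread_run() creates and wakes the thread in one call, and
 * kthread_stop() works without the driver holding an extra task
 * reference, so get_task_struct()/put_task_struct() can go away.
 */
static int sketch_start_rx_thread(struct xenvif_queue *queue)
{
        struct task_struct *task;

        task = kthread_run(xenvif_kthread_guest_rx, queue,
                           "%s-guest-rx", queue->name);
        if (IS_ERR(task))
                return PTR_ERR(task);

        queue->task = task;     /* no get_task_struct() needed */
        return 0;
}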
2019-10-22  r8152: support firmware of PHY NC for RTL8153A  [Hayes Wang, 1 file, -2/+282]
Support the firmware of PHY NC, which is used to fix issues found for the PHY. Currently only RTL_VER_04, RTL_VER_05, and RTL_VER_06 need it. The order of loading the PHY firmware is
RTL_FW_PHY_START
RTL_FW_PHY_NC
RTL_FW_PHY_STOP
RTL_FW_PHY_START/RTL_FW_PHY_STOP are used to lock/unlock the PHY, and to set/clear the patch key from the firmware file.
Signed-off-by: Hayes Wang <hayeswang@realtek.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
2019-10-22  r8152: move r8153_patch_request forward  [Hayes Wang, 1 file, -27/+27]
Move r8153_patch_request() forward for use by a later patch.
Signed-off-by: Hayes Wang <hayeswang@realtek.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
2019-10-22  r8152: add checking fw_offset field of struct fw_mac  [Hayes Wang, 1 file, -3/+9]
Make sure the @fw_offset field of struct fw_mac is not less than the size of struct fw_mac.
Signed-off-by: Hayes Wang <hayeswang@realtek.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
2019-10-22  r8152: rename fw_type_1 with fw_mac  [Hayes Wang, 1 file, -41/+41]
The struct fw_type_1 is used by the MAC only, so rename it to something more meaningful. Besides, adjust two messages: replace "load xxx fail" with "check xxx fail".
Signed-off-by: Hayes Wang <hayeswang@realtek.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
2019-10-21  net/sched: act_police: re-use tcf_tm_dump()  [Davide Caratti, 1 file, -4/+1]
Use tcf_tm_dump() instead of an open-coded variant (no functional change in this patch).
Signed-off-by: Davide Caratti <dcaratti@redhat.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
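The conversion looks roughly like this (a sketch, not the exact diff from act_police):

#include <net/act_api.h>

/* Sketch: the open-coded jiffies-to-clock_t conversions for
 * install/lastuse/firstuse/expires collapse into one helper call.
 */
static void sketch_dump_tm(struct tcf_t *dst, const struct tcf_t *src)
{
        /* before: dst->install = jiffies_to_clock_t(jiffies - src->install);
         *         dst->lastuse = jiffies_to_clock_t(jiffies - src->lastuse);
         *         ...
         */
        tcf_tm_dump(dst, src);
}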
2019-10-21  net: phy: marvell: remove superseded function marvell_set_downshift  [Heiner Kallweit, 1 file, -53/+35]
Instead of the superseded function marvell_set_downshift() we can use the new function m88e1111_set_downshift() in m88e1116r_config_init(). For this, m88e1116r_config_init() has to be moved within the code.
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-10-21  net: phy: marvell: support downshift as PHY tunable  [Heiner Kallweit, 1 file, -1/+87]
So far downshift is implemented for one small use case only and can't be controlled from userspace. So let's implement this feature properly as a PHY tunable, so that it can be controlled via ethtool. More Marvell PHYs may support downshift, but for now I restricted it to the ones for which I have the datasheet.
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
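The PHY-tunable plumbing has this general shape (a hedged sketch; sketch_downshift_get() stands in for the Marvell register access and is not a real function):

#include <linux/ethtool.h>
#include <linux/phy.h>

static int sketch_downshift_get(struct phy_device *phydev, u8 *count);

/* Sketch: ethtool's downshift tunable reaches the PHY driver through a
 * get_tunable/set_tunable callback pair in struct phy_driver.
 */
static int sketch_get_tunable(struct phy_device *phydev,
                              struct ethtool_tunable *tuna, void *data)
{
        switch (tuna->id) {
        case ETHTOOL_PHY_DOWNSHIFT:
                return sketch_downshift_get(phydev, data);
        default:
                return -EOPNOTSUPP;
        }
}

From userspace this is then reachable with something like "ethtool --get-phy-tunable eth0 downshift".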
2019-10-21  tc-testing: updated pedit TDC tests  [Roman Mashak, 1 file, -0/+200]
Added test cases for IP header operations:
- set tos/precedence
- add value to tos/precedence
- clear tos/precedence
- invert tos/precedence
Signed-off-by: Roman Mashak <mrv@mojatatu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-10-21  net: mvneta: add XDP_TX support  [Lorenzo Bianconi, 1 file, -7/+121]
Implement the XDP_TX verdict and the ndo_xdp_xmit net_device_ops function pointer.
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-10-21  net: mvneta: make tx buffer array agnostic  [Lorenzo Bianconi, 1 file, -23/+43]
Allow the tx buffer array to contain both skb and xdp buffers, in order to enable xdp frame recycling when adding XDP_TX verdict support.
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-10-21  net: mvneta: move header prefetch in mvneta_swbm_rx_frame  [Lorenzo Bianconi, 1 file, -4/+3]
Move the data buffer prefetch in mvneta_swbm_rx_frame() to after dma_sync_single_range_for_cpu().
Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-10-21  net: mvneta: add basic XDP support  [Lorenzo Bianconi, 1 file, -9/+139]
Add basic XDP support to the mvneta driver for devices that rely on software buffer management. Currently supported verdicts are:
- XDP_DROP
- XDP_PASS
- XDP_REDIRECT
- XDP_ABORTED

- iptables drop:
$ iptables -t raw -I PREROUTING -p udp --dport 9 -j DROP
$ nstat -n && sleep 1 && nstat
IpInReceives      151169   0.0
IpExtInOctets     6953544  0.0
IpExtInNoECTPkts  151165   0.0

- XDP_DROP via xdp1:
$ ./samples/bpf/xdp1 3
proto 0:  421419 pkt/s
proto 0:  421444 pkt/s
proto 0:  421393 pkt/s
proto 0:  421440 pkt/s
proto 0:  421184 pkt/s

Tested-by: Matteo Croce <mcroce@redhat.com>
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
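The per-frame verdict handling follows the usual driver pattern, sketched below (generic shape only, not the exact mvneta code; the caller is responsible for recycling the buffer on XDP_DROP and flushing redirects at the end of the NAPI poll):

#include <linux/filter.h>
#include <linux/netdevice.h>
#include <trace/events/xdp.h>

/* Sketch: run the attached XDP program on one RX buffer and map every
 * verdict to either "pass up the stack", "redirect" or "drop/recycle".
 */
static u32 sketch_run_xdp(struct net_device *dev, struct bpf_prog *prog,
                          struct xdp_buff *xdp)
{
        u32 act = bpf_prog_run_xdp(prog, xdp);

        switch (act) {
        case XDP_PASS:
        case XDP_DROP:
                break;
        case XDP_REDIRECT:
                if (xdp_do_redirect(dev, xdp, prog) < 0)
                        act = XDP_DROP;         /* recycle the buffer */
                break;
        default:
                bpf_warn_invalid_xdp_action(act);
                /* fall through */
        case XDP_ABORTED:
                trace_xdp_exception(dev, prog, act);
                act = XDP_DROP;
                break;
        }
        return act;
}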
2019-10-21  net: mvneta: rely on build_skb in mvneta_rx_swbm poll routine  [Lorenzo Bianconi, 1 file, -84/+95]
Refactor the mvneta_rx_swbm() code, introducing the mvneta_swbm_rx_frame() and mvneta_swbm_add_rx_fragment() routines. Rely on build_skb() in order to allocate the skb, since the previous patch introduced buffer recycling using the page_pool API. This patch also fixes an issue in the original driver where DMA buffers were accessed before the DMA sync. The mvneta driver can run on non-cache-coherent devices, so it is necessary to sync DMA buffers before sending them to the device in order to avoid memory corruption. Running perf analysis we can see a performance cost associated with this DMA sync (it is already there in the original driver code, anyway). In follow-up patches we will add more logic to reduce the DMA sync as much as possible.
Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-10-21  net: mvneta: introduce page pool API for sw buffer manager  [Lorenzo Bianconi, 2 files, -19/+65]
Use the page_pool API for allocations and DMA handling instead of __dev_alloc_page()/dma_map_page() and free_page()/dma_unmap_page(). Pages are unmapped using page_pool_release_page() before packets go into the network stack. The page_pool API offers buffer recycling capabilities for XDP, but allocates one page per packet unless the driver splits and manages the allocated page. This is a preliminary patch to add XDP support to the mvneta driver.
Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
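Setting up a pool has this general shape (a hedged sketch of the per-RX-queue setup; sizes and flags are illustrative):

#include <linux/dma-mapping.h>
#include <net/page_pool.h>

/* Sketch: create one page_pool per RX queue and let it own the DMA
 * mapping of its pages (PP_FLAG_DMA_MAP).
 */
static struct page_pool *sketch_create_pool(struct device *dev,
                                            unsigned int pool_size)
{
        struct page_pool_params pp_params = {
                .order = 0,                     /* one page per packet */
                .flags = PP_FLAG_DMA_MAP,       /* pool handles DMA mapping */
                .pool_size = pool_size,
                .nid = NUMA_NO_NODE,
                .dev = dev,
                .dma_dir = DMA_FROM_DEVICE,
        };

        return page_pool_create(&pp_params);    /* ERR_PTR() on failure */
}

Buffers then come from page_pool_dev_alloc_pages() and, as the commit message notes, are handed to the stack only after page_pool_release_page().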
2019-10-21  net: mvneta: introduce mvneta_update_stats routine  [Lorenzo Bianconi, 1 file, -21/+22]
Introduce the mvneta_update_stats() routine to collect rx/tx statistics (packets and bytes). This is a preliminary patch to add XDP support to the mvneta driver.
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-10-21  igc: Clean up unused shadow_vfta pointer  [Sasha Neftin, 2 files, -2/+0]
The VLAN filter table array is not implemented yet and the shadow_vfta pointer is not used. Clean up the code and remove the unused shadow_vfta pointer.
Signed-off-by: Sasha Neftin <sasha.neftin@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2019-10-21  igc: Add Rx checksum support  [Sasha Neftin, 2 files, -1/+47]
Extend the socket buffer field processing and add Rx checksum functionality. Minor: fix indentation with tabs instead of spaces.
Signed-off-by: Sasha Neftin <sasha.neftin@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2019-10-21  igc: Add set_rx_mode support  [Sasha Neftin, 5 files, -0/+291]
Add the multicast address list to the MTA table. Implement basic Rx mode support. Add an option for IPv6 address settings.
Signed-off-by: Sasha Neftin <sasha.neftin@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2019-10-21  e1000e: Add support for S0ix  [Sasha Neftin, 2 files, -0/+186]
Implement the flow for S0ix support. Modern SoCs support S0ix low power states during idle periods; these are sub-states of ACPI S0 that increase power saving while supporting an instant-on experience with lower latency than ACPI S0. The S0ix states shut off parts of the SoC when they are not in use, while still maintaining optimal performance. This patch adds support for S0ix, starting with an Ice Lake platform.
Suggested-by: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
Signed-off-by: Vitaly Lifshits <vitaly.lifshits@intel.com>
Signed-off-by: Rajneesh Bhardwaj <rajneesh.bhardwaj@linux.intel.com>
Signed-off-by: Sasha Neftin <sasha.neftin@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2019-10-21  igc: Add SCTP CRC checksumming functionality  [Sasha Neftin, 1 file, -0/+1]
Add Stream Control Transmission Protocol (SCTP) CRC checksumming.
Signed-off-by: Sasha Neftin <sasha.neftin@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2019-10-21  net: hns3: log and clear hardware error after reset complete  [Jian Shen, 1 file, -0/+3]
When the device is resetting, the CMDQ service may be stopped until the reset completes. If a new RAS error occurs at this moment, the driver will not be able to clear the RAS source. This patch fixes it by clearing the RAS source after the reset completes.
Signed-off-by: Jian Shen <shenjian15@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-10-21  net: hns3: do not allocate linear data for fraglist skb  [Yunsheng Lin, 1 file, -2/+1]
Currently, napi_alloc_skb() is used to allocate the skb for the fraglist when the head skb is not big enough to hold the remaining data, and the remaining data is added to the frags part of the fraglist skb, leaving the linear part unused. So this patch passes a length of 0 to allocate the fraglist skb with zero linear data.
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
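The idea in sketch form (illustrative, not the hns3 code itself):

#include <linux/skbuff.h>

/* Sketch: an skb whose payload lives entirely in frags can be
 * allocated with a zero-length linear area.
 */
static struct sk_buff *sketch_alloc_frag_skb(struct napi_struct *napi,
                                             struct page *page,
                                             unsigned int len)
{
        struct sk_buff *skb = napi_alloc_skb(napi, 0);  /* no linear data */

        if (!skb)
                return NULL;

        /* all payload is attached as a page fragment */
        skb_add_rx_frag(skb, 0, page, 0, len, len);
        return skb;
}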
2019-10-21  net: hns3: minor cleanup for hns3_handle_rx_bd()  [Yunsheng Lin, 1 file, -25/+13]
Since commit e55970950556 ("net: hns3: Add handling of GRO Pkts not fully RX'ed in NAPI poll"), ring->skb is used to record the current skb when processing the RX BD in hns3_handle_rx_bd(), so the out_skb parameter is unnecessary. This patch also adjusts the error checking to reduce duplication in hns3_handle_rx_bd(); since "err == -ENXIO" is a rare case, it is put under the unlikely annotation.
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-10-21  net: hns3: make struct hns3_enet_ring cacheline aligned  [Yunsheng Lin, 1 file, -1/+1]
Since struct hns3_enet_ring is frequently used in the critical data path, make it cacheline aligned, like struct hns3_enet_tqp_vector.
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
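The annotation in question, in sketch form (struct name and fields are illustrative):

#include <linux/cache.h>

/* Sketch: place the ring structure on its own cacheline boundary so
 * that hot data-path fields do not share a line with unrelated data.
 */
struct sketch_ring {
        int next_to_use;        /* hot data-path fields ... */
        int next_to_clean;
} ____cacheline_internodealigned_in_smp;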
2019-10-21  net: hns3: introduce ring_to_netdev() in enet module  [Yunsheng Lin, 2 files, -7/+9]
There are a few places that need to access the netdev of a ring through ring->tqp->handle->kinfo.netdev, but ring->tqp is a struct used in both the enet and hclge modules, and it is better to use a struct that is used only in the enet module. This patch adds ring_to_netdev() to access the netdev of a ring through ring->tqp_vector->napi.dev.
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-10-21  net: hns3: minor optimization for barrier in IO path  [Yunsheng Lin, 1 file, -3/+7]
Currently the TX and RX rings of a queue are bound to the same IRQ, so there may be an unnecessary barrier op when only one of the rings needs to be processed. This patch adjusts the location of rmb() in hns3_clean_tx_ring() and adds a check in hns3_clean_rx_ring() to avoid the unnecessary barrier op when there is nothing to do for the ring.
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-10-21  net: hns3: optimized MAC address in management table  [Guojia Liao, 2 files, -4/+3]
mac_addr_hi32 and mac_addr_lo16 are used to store the MAC address for the management table. Using an array mac_addr[ETH_ALEN] would be more general and would remove the need to care about the endianness of the CPU.
Signed-off-by: Guojia Liao <liaoguojia@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-10-21  net: hns3: remove struct hns3_nic_ring_data in hns3_enet module  [Yunsheng Lin, 4 files, -138/+74]
Only the queue_index field of struct hns3_nic_ring_data is used; the other fields are unused and unnecessary for the hns3 driver. This patch removes the struct and moves the queue_index field into hns3_enet_ring. It also removes an unused declaration of struct hns_queue.
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-10-19  net: reorder 'struct net' fields to avoid false sharing  [Eric Dumazet, 1 file, -8/+17]
The Intel test robot reported a ~7% regression on TCP_CRR tests that they bisected to the cited commit. Indeed, every time a TCP socket is created or deleted, the atomic counter net->count is touched (via get_net(net) and put_net(net) calls), so cpus might have to reload a contended cache line in net_hash_mix(net) calls. We need to reorder the 'struct net' fields to move @hash_mix into a read-mostly cache line, and to move into the first cache line the fields that can be dirtied often. We will probably have to address in a follow-up patch the __randomize_layout that was added in linux-4.13, since it might break our placement choices.
Fixes: 355b98553789 ("netns: provide pure entropy for net_hash_mix()")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: kernel test robot <oliver.sang@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-10-19  net: dsa: fix switch tree list  [Vivien Didelot, 1 file, -1/+1]
If there are multiple switch trees on the device, only the last one will be listed, because the arguments of list_add_tail are swapped.
Fixes: 83c0afaec7b7 ("net: dsa: Add new binding implementation")
Signed-off-by: Vivien Didelot <vivien.didelot@gmail.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
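The bug pattern in a nutshell (sketch; list and function names are illustrative): list_add_tail() takes the new entry first and the list head second. With the arguments swapped, the global list head is spliced onto each tree instead, so only the most recently added tree stays reachable.

#include <linux/list.h>

static LIST_HEAD(sketch_tree_list);

static void sketch_tree_add(struct list_head *tree_node)
{
        list_add_tail(tree_node, &sketch_tree_list);    /* correct */
        /* buggy variant: list_add_tail(&sketch_tree_list, tree_node); */
}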
2019-10-19  net: ethernet: dwmac-sun8i: show message only when switching to promisc  [Mans Rullgard, 1 file, -1/+2]
Printing the info message every time more than the max number of MAC addresses is requested generates unnecessary log spam. Showing it only when the hw is not already in promiscuous mode is equally informative without being annoying.
Signed-off-by: Mans Rullgard <mans@mansr.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-10-19  net: aquantia: add an error handling in aq_nic_set_multicast_list  [Chenwandun, 1 file, -0/+2]
Add error handling in aq_nic_set_multicast_list(): the operation may not work when hw_multicast_list_set() returns an error. At the same time this removes a gcc Wunused-but-set-variable warning.
Signed-off-by: Chenwandun <chenwandun@huawei.com>
Reviewed-by: Igor Russkikh <igor.russkikh@aquantia.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-10-19  net: netem: correct the parent's backlog when corrupted packet was dropped  [Jakub Kicinski, 1 file, -0/+2]
If packet corruption failed, we jump to finish_segs and return NET_XMIT_SUCCESS. Seeing success makes the parent qdisc increment its backlog, which is incorrect; we need to return NET_XMIT_DROP.
Fixes: 6071bd1aa13e ("netem: Segment GSO packets on enqueue")
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
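The shape of the fix, reduced to a sketch (not the exact netem diff; the helper name is illustrative):

#include <linux/netdevice.h>

/* Sketch: after requeuing the remaining segments, report DROP to the
 * parent if nothing was actually enqueued, so its backlog accounting
 * stays consistent.
 */
static int sketch_finish_segs(int segs_enqueued)
{
        return segs_enqueued > 0 ? NET_XMIT_SUCCESS : NET_XMIT_DROP;
}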
2019-10-19  net: netem: fix error path for corrupted GSO frames  [Jakub Kicinski, 1 file, -3/+6]
To corrupt a GSO frame we first perform segmentation. We then proceed using the first segment instead of the full GSO skb and requeue the rest of the segments as separate packets. If there are any issues with processing the first segment we still want to process the rest; therefore we jump to the finish_segs label. Commit 177b8007463c ("net: netem: fix backlog accounting for corrupted GSO frames") started using the pointer to the first segment in the "rest of segments processing", but as mentioned above the first segment may have already been freed at this point. Backlog corrections for parent qdiscs have to be adjusted.
Fixes: 177b8007463c ("net: netem: fix backlog accounting for corrupted GSO frames")
Reported-by: kbuild test robot <lkp@intel.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Reported-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-10-19  macb: propagate errors when getting optional clocks  [Michael Tretter, 1 file, -6/+6]
The tx_clk, rx_clk, and tsu_clk are optional. Currently the macb driver marks a clock as not available if it receives an error when trying to get it. This is wrong, because a clock controller might return -EPROBE_DEFER if a clock is not available yet but will eventually become available. In these cases, the driver would probe successfully but would never be able to adjust the clocks, because the clocks were not available during probe but became available later. For example, the clock controller for the ZynqMP is implemented in the PMU firmware, and the clocks are only available after the firmware driver has been probed. Use devm_clk_get_optional() instead of devm_clk_get() to get the optional clocks, and propagate all errors to the calling function.
Signed-off-by: Michael Tretter <m.tretter@pengutronix.de>
Acked-by: Nicolas Ferre <nicolas.ferre@microchip.com>
Tested-by: Nicolas Ferre <nicolas.ferre@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
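The conversion for one clock looks roughly like this (a hedged sketch; the helper name is illustrative, while "tx_clk" matches the macb binding):

#include <linux/clk.h>
#include <linux/err.h>

/* Sketch: devm_clk_get_optional() returns NULL (a valid "no clock"
 * handle) when the clock is simply not described, but still propagates
 * real errors such as -EPROBE_DEFER.
 */
static int sketch_get_tx_clk(struct device *dev, struct clk **tx_clk)
{
        *tx_clk = devm_clk_get_optional(dev, "tx_clk");
        if (IS_ERR(*tx_clk))
                return PTR_ERR(*tx_clk);        /* e.g. -EPROBE_DEFER */

        return 0;       /* *tx_clk may be NULL; the clk_* calls accept that */
}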