Age | Commit message | Author | Files | Lines
2019-04-18 | net: phy: remove dead code from phy_sanitize_settings | Heiner Kallweit | 1 | -4/+0
phy_sanitize_settings() is called from phy_start_aneg() only, and only if phydev->autoneg isn't set. Therefore the removed code does nothing. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-18 | net: phy: don't set autoneg if it's not supported | Heiner Kallweit | 1 | -0/+4
In phy_device_create() we set phydev->autoneg = 1. This isn't changed even if the PHY doesn't support autoneg. This seems to affect very few PHYs, and those disable phydev->autoneg in their config_init callback. So it's more of an improvement, therefore net-next. The patch also wouldn't apply to older kernel versions because the link mode bitmaps were introduced only recently. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
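For illustration, the guard described above might look roughly like this; this is only a sketch, and the helper name and its exact placement in the PHY setup path are assumptions:

```c
#include <linux/phy.h>
#include <linux/linkmode.h>

/* Sketch only: clear the default autoneg setting when the PHY's supported
 * link-mode bitmap says autonegotiation is not available. The function
 * name and call site are assumptions, not the driver's actual code.
 */
static void phy_autoneg_fixup(struct phy_device *phydev)
{
	if (!linkmode_test_bit(ETHTOOL_LINK_MODE_Autoneg_BIT,
			       phydev->supported))
		phydev->autoneg = 0;
}
```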
2019-04-18 | ice: Calculate ITR increment based on direct calculation | Brett Creeley | 1 | -72/+63
Currently when calculating how much to increment ITR by inside of ice_update_itr() we do some estimations and intermediate calculations. Instead of doing estimations, just do the calculation directly. This allows for a more accurate value and makes it easier for the next person to understand and update. Also, stop dividing the ITR value by 2 when latency driven, because the ITR values are already so low for 100Gbps speed. This should help get to the desired ITR value faster. Signed-off-by: Brett Creeley <brett.creeley@intel.com> Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2019-04-18 | ice: Bump driver version | Anirudh Venkataramanan | 1 | -1/+1
Update driver version to 0.7.4 Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2019-04-18 | ice: Add code to control FW LLDP and DCBX | Anirudh Venkataramanan | 5 | -2/+159
This patch adds code to start or stop LLDP and DCBX in firmware through use of ethtool private flags. Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2019-04-18 | ice: Add code for DCB rebuild | Anirudh Venkataramanan | 3 | -0/+85
This patch introduces a new function ice_dcb_rebuild which reinitializes DCB after a reset. Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2019-04-18 | ice: Add code to get DCB related statistics | Anirudh Venkataramanan | 6 | -2/+86
This patch adds a new function ice_update_dcb_stats to get DCB stats from the hardware and ethtool support for displaying these stats. Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2019-04-18 | ice: Add priority information into VLAN header | Anirudh Venkataramanan | 4 | -3/+54
This patch introduces a new function ice_tx_prepare_vlan_flags_dcb to insert 802.1p priority information into the VLAN header Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2019-04-18 | ice: Update rings based on TC information | Anirudh Venkataramanan | 5 | -0/+52
This patch adds a new function ice_vsi_cfg_dcb_rings which updates a VSI's rings based on DCB traffic class information. Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2019-04-18 | ice: Add code to process LLDP MIB change events | Anirudh Venkataramanan | 5 | -3/+47
This patch adds support to process LLDP MIB change notifications sent by the firmware. Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2019-04-18 | ice: Add code for DCB initialization part 4/4 | Anirudh Venkataramanan | 2 | -2/+81
When the firmware doesn't support LLDP or DCBX, the driver should switch to "software LLDP mode". This patch adds support for that fallback. Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2019-04-18 | ice: Add code for DCB initialization part 3/4 | Anirudh Venkataramanan | 14 | -82/+997
This patch adds a new function ice_pf_dcb_cfg (and related helpers) which applies the DCB configuration obtained from the firmware. As part of this, VSIs/netdevs are updated with traffic class information. This patch requires a bit of a refactor of existing code.

1. For a MIB change event, the associated VSI is closed and brought up again. The gap between closing and opening the VSI can cause a race condition. Fix this by grabbing the rtnl_lock prior to closing the VSI and releasing it only after re-opening the VSI during a MIB change event, as sketched below.

2. ice_sched_query_elem is used in ice_sched.c and, with this patch, in ice_dcb.c as well. However, ice_dcb.c is not built when CONFIG_DCB is unset. This results in namespace warnings (ice_sched.o: Externally defined symbols with no external references) when CONFIG_DCB is unset. To avoid this, move ice_sched_query_elem from ice_sched.c to ice_common.c.

Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
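The locking change in point 1 amounts to the usual pattern of holding rtnl across the down/up cycle; a rough sketch, where the helper names (ice_dis_vsi/ice_ena_vsi, ice_pf_dcb_cfg's arguments) are used only for illustration:

```c
/* Sketch of the MIB-change handling described above; names are assumptions.
 * rtnl is taken before the VSI is closed and released only after it has
 * been re-opened, closing the race window between the two operations.
 */
rtnl_lock();
ice_dis_vsi(vsi, true);		/* 'true': rtnl is already held */
/* ... apply the updated DCB/TC configuration from firmware here ... */
ice_ena_vsi(vsi, true);
rtnl_unlock();
```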
2019-04-18 | ice: Add code for DCB initialization part 2/4 | Anirudh Venkataramanan | 6 | -1/+1057
This patch introduces a new top level function ice_init_dcb (and related lower level helper functions) which continues the DCB init flow. This function uses ice_get_dcb_cfg to get, parse and store the DCB configuration. Once this is done, it sets itself up to be notified by the firmware on LLDP MIB change events. Reviewed-by: Bruce Allan <bruce.w.allan@intel.com> Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2019-04-18 | ice: Add code for DCB initialization part 1/4 | Anirudh Venkataramanan | 10 | -0/+230
This patch introduces a skeleton for ice_init_pf_dcb, the top level function for DCB initialization. Subsequent patches will add to this DCB init flow. In this patch, ice_init_pf_dcb checks if DCB is a supported capability. If so, an admin queue call is issued to start LLDP and DCBX in firmware. If not, an error is reported. Note that we don't fail the driver init if DCB init fails. Reviewed-by: Bruce Allan <bruce.w.allan@intel.com> Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2019-04-18 | ice: Bump version | Anirudh Venkataramanan | 1 | -1/+1
Bump driver version to 0.7.3 Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2019-04-18 | ice: Fix incorrect use of abbreviations | Anirudh Venkataramanan | 16 | -294/+294
Capitalize abbreviations and spell out some that aren't obvious. Reviewed-by: Bruce Allan <bruce.w.allan@intel.com> Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2019-04-18 | ice: Fix typos in code comments | Anirudh Venkataramanan | 4 | -6/+6
This patch fixes typos in code comments. Reviewed-by: Bruce Allan <bruce.w.allan@intel.com> Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2019-04-18 | rtlwifi: rtl8188ee: Remove extraneous file | Larry Finger | 1 | -10/+0
Somehow file drivers/net/wireless/realtek/rtlwifi/rtl8188ee/trx.c.rej was incorporated into the sources. Obviously, it can be removed. Signed-off-by: Larry Finger <Larry.Finger@lwfinger.net> Reported-by: Andrew Morton <akpm@linux-foundation.org> Cc: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
2019-04-17 | net ipv6: Prevent neighbor add if protocol is disabled on device | David Ahern | 3 | -0/+24
Disabling IPv6 on an interface removes existing entries but nothing prevents new entries from being manually added. To that end, add a new neigh_table operation, allow_add, that is called on RTM_NEWNEIGH to see if neighbor entries are allowed on a given device. If IPv6 is disabled on the device, allow_add returns false and passes a message back to the user via extack.

    $ echo 1 > /proc/sys/net/ipv6/conf/eth1/disable_ipv6
    $ ip -6 neigh add fe80::4c88:bff:fe21:2704 dev eth1 lladdr de:ad:be:ef:01:01
    Error: IPv6 is disabled on this device.

Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
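A sketch of what the IPv6 side of the new hook could look like; the callback shape is inferred from the description above, and the function name is an assumption:

```c
/* Sketch of an allow_add neigh_table operation for IPv6: refuse
 * RTM_NEWNEIGH on devices where IPv6 is administratively disabled and
 * report the reason back to userspace through extack.
 */
static bool ndisc_allow_add(const struct net_device *dev,
			    struct netlink_ext_ack *extack)
{
	struct inet6_dev *idev = __in6_dev_get(dev);

	if (!idev || idev->cnf.disable_ipv6) {
		NL_SET_ERR_MSG(extack, "IPv6 is disabled on this device");
		return false;
	}

	return true;
}
```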
2019-04-17 | ipv6: Add fib6_type and fib6_flags to fib6_result | David Ahern | 4 | -39/+52
Add the fib6_flags and fib6_type to fib6_result. Update the lookup helpers to set them, and update post-lookup users to use the versions from the result. This allows nexthop objects to have a blackhole nexthop. Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-17 | ipv6: Pass fib6_result to fib lookups | David Ahern | 7 | -53/+46
Change fib6_lookup and fib6_table_lookup to take a fib6_result and set f6i and nh rather than returning a fib6_info. For now both always return 0. A later patch set can make these more like the IPv4 counterparts and return EINVAL, EACCES, etc. based on fib6_type. Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
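Roughly, the lookup entry points now look like this; the prototypes are sketched from the description and the exact argument lists may differ:

```c
/* Sketch: both lookups fill a caller-provided fib6_result (setting f6i
 * and nh) instead of returning a fib6_info, and for now always return 0.
 */
int fib6_lookup(struct net *net, int oif, struct flowi6 *fl6,
		struct fib6_result *res, int flags);

int fib6_table_lookup(struct net *net, struct fib6_table *table, int oif,
		      struct flowi6 *fl6, struct fib6_result *res,
		      int strict);
```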
2019-04-17 | ipv6: Pass fib6_result to fib6_table_lookup tracepoint | David Ahern | 2 | -11/+11
Change fib6_table_lookup tracepoint to take the fib6_result and use the fib6_info and fib6_nh from it. Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-17 | ipv6: Pass fib6_result to rt6_select and find_rr_leaf | David Ahern | 1 | -39/+43
Pass fib6_result to rt6_select. Instead of returning the fib entry, it will set f6i and nh based on the lookup. find_rr_leaf is changed to remove the match option in favor of taking fib6_result and having __find_rr_leaf set f6i in the result. In the process, update fib6_info references in __find_rr_leaf to f6i names. Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-17 | ipv6: Pass fib6_result to rt6_device_match | David Ahern | 1 | -19/+30
Pass fib6_result to rt6_device_match with f6i set. rt6_device_match updates f6i in the result if it finds a better match and sets nh. Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-17 | ipv6: Pass fib6_result to ip6_mtu_from_fib6 and fib6_mtu | David Ahern | 5 | -18/+25
Change ip6_mtu_from_fib6 and fib6_mtu to take a fib6_result over a fib6_info. Update both to use the fib6_nh from fib6_result. Since the signature of ip6_mtu_from_fib6 is already changing, add const to daddr and saddr. Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-17 | ipv6: Pass fib6_result to rt6_insert_exception | David Ahern | 1 | -16/+17
Update rt6_insert_exception to take a fib6_result over a fib6_info. Rename the local ort variable, now taken from the fib6_result, to f6i to better reflect what it references (a fib6_info). Since this function is already getting changed, update the comments to reference fib6_info variables rather than the older rt6_info. Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-17 | ipv6: Pass fib6_result to ip6_rt_get_dev_rcu and ip6_rt_copy_init | David Ahern | 1 | -22/+27
Now that all callers are updated to have a fib6_result, pass it down to ip6_rt_get_dev_rcu, ip6_rt_copy_init, and ip6_rt_init_dst. In the process, change ort to f6i in ip6_rt_copy_init to make it clear it is a reference to a fib6_info. Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-17 | ipv6: Pass fib6_result to pcpu route functions | David Ahern | 1 | -13/+14
Update ip6_rt_pcpu_alloc, rt6_get_pcpu_route and rt6_make_pcpu_route to take a fib6_result over a fib6_info. Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-17 | ipv6: Pass fib6_result to ip6_create_rt_rcu | David Ahern | 1 | -16/+21
Change ip6_create_rt_rcu to take fib6_result over a fib6_info. Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-17 | ipv6: Pass fib6_result to ip6_rt_cache_alloc | David Ahern | 1 | -22/+26
Change ip6_rt_cache_alloc to take a fib6_result over a fib6_info. Since ip6_rt_cache_alloc is its only caller, update the rt6_is_gw_or_nonexthop helper to take a fib6_result as well. Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-17 | ipv6: Pass fib6_result to rt6_find_cached_rt | David Ahern | 1 | -14/+21
Simplify rt6_find_cached_rt for the fast path cases and pass fib6_result to rt6_find_cached_rt. Rename the local return variable to ret to maintain consistency with the fib6_result naming. Update the comment in rt6_find_cached_rt to reference the new fib6_info names rather than the old names from when fib entries were rt6_info. Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-17 | ipv6: Rename fib6_multipath_select and pass fib6_result | David Ahern | 6 | -64/+68
Add 'struct fib6_result' to hold the fib entry and fib6_nh from a fib lookup as separate entries, similar to what IPv4 now has with fib_result. Rename fib6_multipath_select to fib6_select_path, pass fib6_result to it, and set f6i and nh in the result once a path selection is done. Call fib6_select_path unconditionally for path selection, which means moving the sibling and oif check to fib6_select_path. To handle the two different call paths (two of them call multipath_select only if flowi6_oif == 0, and the other always calls it), add a new have_oif_match argument that controls the sibling walk if relevant. Update callers of fib6_multipath_select accordingly and have them use the fib6_info and fib6_nh from the result. This is needed for multipath nexthop objects where a single f6i can point to multiple fib6_nh (similar to IPv4). Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
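Pieced together from this entry and the follow-up that adds fib6_flags/fib6_type, the new aggregate looks roughly like this; a sketch only, and the field set may not be complete:

```c
/* Sketch of struct fib6_result as described in this series: the fib
 * entry and the selected nexthop travel together, plus the copies of
 * flags/type added in a later patch. Not necessarily the full definition.
 */
struct fib6_result {
	struct fib6_nh		*nh;
	struct fib6_info	*f6i;
	u32			fib6_flags;
	u8			fib6_type;
};
```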
2019-04-17 | s390/qeth: stop/wake TX queues based on their fill level | Julian Wiedmann | 5 | -48/+71
Current xmit code only stops the txq after attempting to fill an IO buffer that hasn't been TX-completed yet. In many-connection scenarios, this can result in frequent rejected TX attempts, requeuing of skbs with NETDEV_TX_BUSY and extra overhead. Now that we have a proper 1-to-1 relation between stack-side txqs and our HW Queues, overhaul the stop/wake logic so that the xmit code stops the txq as needed. Given that we might map multiple skbs into a single buffer, it's crucial to ensure that the queue always provides an _entirely_ empty IO buffer. Otherwise large skbs (e.g. TSO) might not fit into the last available buffer. So whenever qeth_do_send_packet() first utilizes an _empty_ buffer, it updates & checks the used_buffers count. This now ensures that an skb passed to qeth_xmit() can always be mapped into an IO buffer, so remove all of the -EBUSY roll-back handling in the TX path. We preserve the minimal safety-checks ("Is this IO buffer really available?"), just in case some nasty future bug ever attempts to corrupt an in-use buffer. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
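The stop side of that scheme boils down to a fragment along these lines; the field names and the exact threshold are assumptions for illustration:

```c
/* Sketch: stop the txq once the queue can no longer guarantee a fully
 * empty IO buffer for the next skb; the TX completion handler wakes the
 * queue again after buffers are reclaimed. Names are illustrative only.
 */
if (atomic_read(&queue->used_buffers) >= QDIO_MAX_BUFFERS_PER_Q - 1)
	netif_tx_stop_queue(txq);
```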
2019-04-17 | s390/qeth: simplify QoS code | Julian Wiedmann | 2 | -20/+7
qeth_get_priority_queue() is no longer used for IQD devices, remove the special-casing of their mcast queue. This effectively reverts commit 70deb01662b1 ("qeth: omit outbound queue 3 for unicast packets in Priority Queuing on HiperSockets"). Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-17 | s390/qeth: add TX multiqueue support for OSA devices | Julian Wiedmann | 4 | -52/+51
This adds trivial support for multiple TX queues on OSA-style devices (both real HW and z/VM NICs). For now we expose the driver's existing QoS mechanism via .ndo_select_queue, and adjust the number of available TX queues when qeth_update_from_chp_desc() detects that the HW configuration has changed. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-17 | s390/qeth: add TX multiqueue support for IQD devices | Julian Wiedmann | 6 | -26/+97
qeth has been supporting multiple HW Output Queues for a long time. But rather than exposing those queues to the stack, it uses its own queue selection logic in .ndo_start_xmit... with all the drawbacks that entails. Start off by switching IQD devices over to a proper mqs net_device, and converting all the netdev_queue management code. One oddity with IQD devices is the requirement to place all mcast traffic on the _highest_ established HW queue. Doing so via .ndo_select_queue seems straightforward - but that won't work if only some of the HW queues are active (i.e. when dev->real_num_tx_queues < dev->num_tx_queues), since netdev_cap_txqueue() will not allow us to put skbs on the higher queues. To make this work, we

1. let .ndo_select_queue() map all mcast traffic to netdev_queue 0 (see the sketch after this entry), and
2. later re-map the netdev_queue and HW queue indices in .ndo_start_xmit and the TX completion handler.

With this patch we default to a fixed set of 1 ucast and 1 mcast queue. Support for dynamic reconfiguration is added at a later time. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
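Step 1 of that scheme can be sketched as a tiny queue-picking helper; the name is invented for illustration, and the re-mapping of step 2 happens later in the xmit path:

```c
#include <linux/etherdevice.h>

/* Sketch only: all mcast is pinned to netdev_queue 0, which the xmit path
 * later re-maps to the highest established HW queue; everything else uses
 * the single default ucast queue. Helper name is an assumption.
 */
static u16 qeth_iqd_pick_txq(struct sk_buff *skb)
{
	if (is_multicast_ether_addr(eth_hdr(skb)->h_dest))
		return 0;	/* fixed mcast netdev_queue */

	return 1;		/* default ucast queue */
}
```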
2019-04-17 | s390/qeth: don't keep statistics for tx timeout | Julian Wiedmann | 2 | -3/+0
struct netdev_queue contains a counter for tx timeouts, which gets updated by dev_watchdog(). So let's not attempt to maintain our own statistics, in particular not by overloading the skb-error counter. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-17 | s390/qeth: don't bother updating the last-tx time | Julian Wiedmann | 1 | -1/+0
As the documentation for netif_trans_update() says, netdev_start_xmit() already updates the last-tx time after every good xmit. So don't duplicate that effort. One odd case is that qeth_flush_buffers() also gets called from our TX completion handler, to flush out any partially filled buffer when we switch the queue to non-packing mode. But as the TX completion handler will _always_ wake the txq, we don't have to worry about the TX watchdog there. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-17 | s390/qeth: handle error from qeth_update_from_chp_desc() | Julian Wiedmann | 1 | -4/+10
Subsequent code relies on the values that qeth_update_from_chp_desc() reads from the CHP descriptor. Rather than dealing with weird errors later on, just handle it properly here. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-17 | s390/qeth: clarify naming for some QDIO helpers | Julian Wiedmann | 4 | -23/+22
The naming of several QDIO helpers doesn't match their actual functionality, or the structures they operate on. Clean this up.

    s/qeth_alloc_qdio_buffers/qeth_alloc_qdio_queues
    s/qeth_free_qdio_buffers/qeth_free_qdio_queues
    s/qeth_alloc_qdio_out_buf/qeth_alloc_output_queue
    s/qeth_clear_outq_buffers/qeth_drain_output_queue
    s/qeth_clear_qdio_buffers/qeth_drain_output_queues

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-17 | net: stmmac: Set Flow Control to automatic mode in the driver | Jose Abreu | 1 | -1/+1
By default, the Flow Control feature is not enabled in stmmac. This is a useful feature that can prevent loss of packets, and now that XGMAC already supports it (along with GMAC and QoS) it makes sense to activate it. Switch the module parameter to FLOW_AUTO so that Flow Control is activated. Signed-off-by: Jose Abreu <joabreu@synopsys.com> Cc: Joao Pinto <jpinto@synopsys.com> Cc: David S. Miller <davem@davemloft.net> Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com> Cc: Alexandre Torgue <alexandre.torgue@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-17 | net: stmmac: dwxgmac: Finish the Flow Control implementation | Jose Abreu | 2 | -0/+51
Finish the implementation of Flow Control feature. In order for it to work correctly we need to set EHFC bit and the correct threshold values for activating and deactivating it. Signed-off-by: Jose Abreu <joabreu@synopsys.com> Cc: Joao Pinto <jpinto@synopsys.com> Cc: David S. Miller <davem@davemloft.net> Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com> Cc: Alexandre Torgue <alexandre.torgue@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-17 | ipmi: fix sleep-in-atomic in free_user at cleanup SRCU user->release_barrier | Corey Minyard | 1 | -2/+17
free_user() could be called in atomic context. This patch pushes the free operation off into a workqueue.

Example:

    BUG: sleeping function called from invalid context at kernel/workqueue.c:2856
    in_atomic(): 1, irqs_disabled(): 0, pid: 177, name: ksoftirqd/27
    CPU: 27 PID: 177 Comm: ksoftirqd/27 Not tainted 4.19.25-3 #1
    Hardware name: AIC 1S-HV26-08/MB-DPSB04-06, BIOS IVYBV060 10/21/2015
    Call Trace:
     dump_stack+0x5c/0x7b
     ___might_sleep+0xec/0x110
     __flush_work+0x48/0x1f0
     ? try_to_del_timer_sync+0x4d/0x80
     _cleanup_srcu_struct+0x104/0x140
     free_user+0x18/0x30 [ipmi_msghandler]
     ipmi_free_recv_msg+0x3a/0x50 [ipmi_msghandler]
     deliver_response+0xbd/0xd0 [ipmi_msghandler]
     deliver_local_response+0xe/0x30 [ipmi_msghandler]
     handle_one_recv_msg+0x163/0xc80 [ipmi_msghandler]
     ? dequeue_entity+0xa0/0x960
     handle_new_recv_msgs+0x15c/0x1f0 [ipmi_msghandler]
     tasklet_action_common.isra.22+0x103/0x120
     __do_softirq+0xf8/0x2d7
     run_ksoftirqd+0x26/0x50
     smpboot_thread_fn+0x11d/0x1e0
     kthread+0x103/0x140
     ? sort_range+0x20/0x20
     ? kthread_destroy_worker+0x40/0x40
     ret_from_fork+0x1f/0x40

Fixes: 77f8269606bf ("ipmi: fix use-after-free of user->release_barrier.rda")
Reported-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Signed-off-by: Corey Minyard <cminyard@mvista.com>
Cc: stable@vger.kernel.org # 5.0
Cc: Yang Yingliang <yangyingliang@huawei.com>
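The deferral described here is the standard work-item pattern; a generic sketch rather than the exact ipmi_msghandler code, with field names assumed:

```c
/* Sketch: free_user() may run in atomic context, so the sleeping SRCU
 * cleanup is pushed to a work item that runs in process context.
 */
static void free_user_work(struct work_struct *work)
{
	struct ipmi_user *user = container_of(work, struct ipmi_user,
					      remove_work);

	cleanup_srcu_struct(&user->release_barrier);
	kfree(user);
}

static void free_user(struct kref *ref)
{
	struct ipmi_user *user = container_of(ref, struct ipmi_user, refcount);

	INIT_WORK(&user->remove_work, free_user_work);
	schedule_work(&user->remove_work);
}
```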
2019-04-16 | socket: fix compat SO_RCVTIMEO_NEW/SO_SNDTIMEO_NEW | Arnd Bergmann | 1 | -2/+2
It looks like the new socket options only work correctly for native execution, but in compat mode they fall back to the old behavior because we ignore the 'old_timeval' flag. Rework this so we treat SO_RCVTIMEO_NEW/SO_SNDTIMEO_NEW the same way in compat and native 32-bit mode. Cc: Deepa Dinamani <deepa.kernel@gmail.com> Fixes: a9beb86ae6e5 ("sock: Add SO_RCVTIMEO_NEW and SO_SNDTIMEO_NEW") Signed-off-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Deepa Dinamani <deepa.kernel@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-16 | tcp: tcp_grow_window() needs to respect tcp_space() | Eric Dumazet | 1 | -5/+5
For some reason, tcp_grow_window() correctly tests if enough room is present before attempting to increase tp->rcv_ssthresh, but does not prevent it from growing past tcp_space(). This is causing hard-to-debug issues, like failing the (__tcp_select_window(sk) >= tp->rcv_wnd) test in __tcp_ack_snd_check(), causing ACK delays and possibly slow flows. Depending on tcp_rmem[2], MTU, skb->len/skb->truesize ratio, we can see the problem happening on "netperf -t TCP_RR -- -r 2000,2000" after about 60 round trips, when the active side no longer sends immediate acks. This bug predates git history. Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Soheil Hassas Yeganeh <soheil@google.com> Acked-by: Neal Cardwell <ncardwell@google.com> Acked-by: Wei Wang <weiwan@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
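The essence of the fix is to bound the growth by tcp_space(); a sketch of the idea, not the exact diff:

```c
/* Sketch: never let rcv_ssthresh grow beyond what window_clamp and
 * tcp_space() can actually advertise; the existing heuristics then only
 * decide how much of the remaining room to use.
 */
static void tcp_grow_window(struct sock *sk, const struct sk_buff *skb)
{
	struct tcp_sock *tp = tcp_sk(sk);
	int room;

	room = min_t(int, tp->window_clamp, tcp_space(sk)) - tp->rcv_ssthresh;

	if (room > 0 && !tcp_under_memory_pressure(sk)) {
		int incr = 2 * tp->advmss;	/* placeholder for the heuristics */

		tp->rcv_ssthresh += min(room, incr);
	}
}
```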
2019-04-16 | dpaa2-eth: Add flow steering support without masking | Ioana Ciocoi Radulescu | 3 | -32/+127
On platforms that lack a TCAM (like LS1088A), masking of flow steering keys is not supported. Until now we didn't offer flow steering capabilities at all on these platforms, since our driver implementation configured a "comprehensive" FS key (containing all supported header fields), with masks used to ignore the fields not present in the rules provided by the user. We now allow ethtool rules that share a common key (i.e. have the same header fields). The FS key is now kept in the driver private data and initialized when the first rule is added to an empty table, rather than at probe time. If a rule with a new composition key is wanted, the user must first manually delete all previous rules. When building a FS table entry to pass to firmware, we still use the old building algorithm, which assumes an all-supported-fields key, and later collapse the fields which aren't actually needed. Masked rules are not supported; if provided, the mask value will be ignored. For firmware versions older than MC10.7.0 (that only offer the legacy ABIs for configuring distribution keys) flow steering without masking support remains unavailable. Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-16 | dpaa2-eth: Update hash key composition code | Ioana Ciocoi Radulescu | 2 | -2/+30
Introduce an internal id bitfield to uniquely identify header fields supported by the Rx distribution keys. For the hash key, add a conversion from the RXH_* bitmask provided by ethtool to the internal ids. Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-16 | dpaa2-eth: Add a couple of macros | Ioana Ciocoi Radulescu | 2 | -2/+7
Add two macros to simplify reading DPNI options. Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-16 | dpaa2-eth: Fix Rx classification status | Ioana Ciocoi Radulescu | 1 | -1/+6
Set the Rx flow classification enable flag only if the key config operation is successful. Fixes: 3f9b5c9 ("dpaa2-eth: Configure Rx flow classification key") Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-16 | ocelot: Clean up stats update deferred work | Claudiu Manoil | 1 | -8/+14
This is preventive cleanup that may save trouble later. No need to cancel repeatedly queued work if the code is properly refactored. Don't let the ethtool -s process interfere with the stats workqueue scheduling. Signed-off-by: Claudiu Manoil <claudiu.manoil@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
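On the teardown side, the usual way to stop a self-rearming stats work cleanly looks like the following; the field names follow the ocelot driver but are assumptions here:

```c
/* Sketch: cancel the delayed stats work once, synchronously, before the
 * workqueue is destroyed, so a queued instance cannot re-arm itself.
 */
cancel_delayed_work_sync(&ocelot->stats_work);
destroy_workqueue(ocelot->stats_queue);
```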