path: root/drivers/s390/net/qeth_l3_main.c
Age | Commit message | Author | Files | Lines
2020-01-26 | s390/qeth: remove HARDSETUP state | Julian Wiedmann | 1 | -11/+3
qeth_l?_stop_card() is _never_ called while in HARDSETUP state, and there's no other usage of the card state that relies on the DOWN -> HARDSETUP -> SOFTSETUP transition. As related cleanup, remove the check in qeth_realloc_buffer_pool() as it is already done by the callers. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-01-26 | s390/qeth: consolidate online/offline code | Julian Wiedmann | 1 | -83/+4
Large parts of the online/offline code are identical now, and cleaning up the remaining stuff is easier with a shared core. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-01-26 | s390/qeth: consolidate QDIO queue setup | Julian Wiedmann | 1 | -7/+1
Move some duplicated logic into a shared code path. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-12-31 | Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net | David S. Miller | 1 | -1/+1
Simple overlapping changes in bpf land wrt. bpf_helper_defs.h handling. Signed-off-by: David S. Miller <davem@davemloft.net>
2019-12-26 | s390/qeth: consolidate RX code | Julian Wiedmann | 1 | -91/+0
To reduce the path length and levels of indirection, move the RX processing from the sub-drivers into the core. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-12-24 | s390/qeth: fix qdio teardown after early init error | Julian Wiedmann | 1 | -1/+1
qeth_l?_set_online() goes through a number of initialization steps, and on any error uses qeth_l?_stop_card() to tear down the residual state. The first initialization step is qeth_core_hardsetup_card(). When this fails after having established a QDIO context on the device (ie. somewhere after qeth_mpc_initialize()), qeth_l?_stop_card() doesn't shut down this QDIO context again (since the card state hasn't progressed from DOWN at this stage). Even worse, we then call qdio_free() as the final teardown step to free the QDIO data structures - while some of them are still hooked into wider QDIO infrastructure such as the IRQ list. This is inevitably followed by use-after-frees and other nastiness. Fix this by unconditionally calling qeth_qdio_clear_card() to shut down the QDIO context, and also to halt/clear any pending activity on the various IO channels. Remove the naive attempt at handling the teardown in qeth_mpc_initialize(); it clearly doesn't suffice, and we're handling it properly now in the wider teardown code. Fixes: 4a71df50047f ("qeth: new qeth device driver") Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-12-22 | Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net | David S. Miller | 1 | -0/+1
Mere overlapping changes in the conflicts here. Signed-off-by: David S. Miller <davem@davemloft.net>
2019-12-20 | s390/qeth: fix promiscuous mode after reset | Julian Wiedmann | 1 | -0/+1
When managing the promiscuous mode during an RX modeset, qeth caches the current HW state to avoid repeated programming of the same state on each modeset. But while tearing down a device, we forget to clear the cached state. So when the device is later set online again, the initial RX modeset doesn't program the promiscuous mode since we believe it is already enabled. Fix this by clearing the cached state in the tear-down path. Note that for the SBP variant of promiscuous mode, this accidentally works right now because we unconditionally restore the SBP role while re-initializing. Fixes: 4a71df50047f ("qeth: new qeth device driver") Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Reviewed-by: Alexandra Winter <wintera@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-12-18 | s390/qeth: stop yielding the ip_lock during IPv4 registration | Julian Wiedmann | 1 | -32/+2
As commit df2a2a5225cc ("s390/qeth: convert IP table spinlock to mutex") converted the ip_lock to a mutex, we no longer have to yield it while the subsequent IO sleep-waits for completion. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-12-18 | s390/qeth: don't raise NETDEV_REBOOT event from L3 offline path | Julian Wiedmann | 1 | -6/+0
This is a leftover from back when a recovery action didn't go through dev_close(), and was meant to shoot down all remaining af_iucv sockets on the interface. Now that the offline path always calls dev_close(), the NETDEV_GOING_DOWN event from __dev_close_many() is sufficient and this hack can be removed. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-12-18 | s390/qeth: remove open-coded inet_make_mask() | Julian Wiedmann | 1 | -19/+15
Use inet_make_mask() to replace some complicated bit-fiddling. Also use the right data types to replace some raw memcpy calls with proper assignments. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
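As a rough illustration of the helper mentioned above (the wrapper name is hypothetical, not the driver's actual code): inet_make_mask() from <linux/inetdevice.h> turns a prefix length into a big-endian IPv4 netmask, replacing the open-coded bit-fiddling.

#include <linux/inetdevice.h>
#include <linux/types.h>

/* hypothetical wrapper: prefix length -> big-endian netmask */
static __be32 prefix_to_netmask(unsigned int prefix_len)
{
        return inet_make_mask(prefix_len);      /* e.g. 24 -> htonl(0xffffff00) */
}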
2019-12-18 | s390/qeth: clean up L3 sysfs code | Julian Wiedmann | 1 | -1/+11
Consolidate some duplicated code for adding RXIP/VIPA addresses, and move the locking to where it's actually needed. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-12-18 | s390/qeth: overhaul L3 IP address dump code | Julian Wiedmann | 1 | -15/+5
The current code that dumps the RXIP/VIPA/IPATO addresses via sysfs first checks whether the buffer still provides sufficient space to hold another formatted address. But the maximum length of a formatted IPv4 address is 15 characters, not 12. So we underestimate the max required length, and if the buffer was previously filled to _just_ the right level, a formatted address can end up being truncated. Revamp these code paths to use the _actually_ required length of the formatted IP address, and while at it suppress a gratuitous newline. Also use scnprintf() to format the output. In case of a truncation, this would allow us to return the number of characters that were actually written. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
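A minimal sketch of the dump pattern described above, with a hypothetical helper and an illustrative address array: the running length is advanced by what scnprintf() actually wrote, rather than by a guessed fixed per-address width.

#include <linux/kernel.h>
#include <linux/types.h>

/* hypothetical: format 'count' IPv4 addresses into a sysfs buffer */
static ssize_t dump_ipv4_addrs(char *buf, size_t size,
                               const __be32 *addrs, unsigned int count)
{
        ssize_t len = 0;
        unsigned int i;

        /* scnprintf() returns the number of characters actually written,
         * so 'len' always reflects the real fill level of the buffer.
         */
        for (i = 0; i < count; i++)
                len += scnprintf(buf + len, size - len, "%pI4 ", &addrs[i]);

        return len;
}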
2019-12-05 | s390/qeth: fix dangling IO buffers after halt/clear | Julian Wiedmann | 1 | -6/+7
The cio layer's intparm logic does not align itself well with how qeth manages cmd IOs. When an active IO gets terminated via halt/clear, the corresponding IRQ's intparm does not reflect the cmd buffer but rather the intparm that was passed to ccw_device_halt() / ccw_device_clear(). This behaviour was recently clarified in commit b91d9e67e50b ("s390/cio: fix intparm documentation"). As a result, qeth_irq() currently doesn't cancel a cmd that was terminated via halt/clear. This primarily causes us to leak card->read_cmd after the qeth device is removed, since our IO path still holds a refcount for this cmd. For qeth this means that we need to keep track of which IO is pending on a device ('active_cmd'), and use this as the intparm when calling halt/clear. Otherwise qeth_irq() can't match the subsequent IRQ to its cmd buffer. Since we now keep track of the _expected_ intparm, we can also detect any mismatch; this would constitute a bug somewhere in the lower layers. In this case cancel the active cmd - we effectively "lost" the IRQ and should not expect any further notification for this IO. Fixes: 405548959cc7 ("s390/qeth: add support for dynamically allocated cmds") Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-11-14 | s390/qeth: replace qeth_l3_get_addr_buffer() | Julian Wiedmann | 1 | -40/+22
The remaining usage effectively is a kmemdup() of the query object. By not wrapping it, some of the callers can now use GFP_KERNEL for the allocation. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-11-14 | s390/qeth: remove VLAN tracking for L3 devices | Julian Wiedmann | 1 | -31/+11
Use vlan_for_each() instead of tracking each registered VID internally. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
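A minimal sketch of the vlan_for_each() pattern (callback and argument names are hypothetical): walk every VLAN device registered on top of the carrier device instead of maintaining a private VID list. vlan_for_each() expects the RTNL to be held.

#include <linux/if_vlan.h>
#include <linux/netdevice.h>

/* hypothetical callback: invoked once per registered VLAN device */
static int handle_one_vlan(struct net_device *vlan_dev, int vid, void *arg)
{
        /* (re-)program whatever per-VLAN state is needed */
        return 0;
}

/* usage, with rtnl_lock() held by the caller:
 *      vlan_for_each(carrier_dev, handle_one_vlan, priv);
 */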
2019-11-14 | s390/qeth: consolidate L3 mcast registration code | Julian Wiedmann | 1 | -90/+30
Current code processes each (VLAN) device twice - once to inspect the IPv4 mcast addresses, and then a second time to walk the IPv6 mcast addresses. Unify all this into a single helper, thus removing some checks and a duplicated VLAN lookup. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-11-14 | s390/qeth: remove gratuitous RX modeset | Julian Wiedmann | 1 | -2/+0
Trust the IPv4/IPv6 code to properly remove its mcast addresses when a VLAN device is unregistered, and then also trigger an RX modeset whenever it's needed. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-11-14 | s390/qeth: fine-tune L3 mcast locking | Julian Wiedmann | 1 | -4/+3
Push the inet6_dev locking down into the helper that actually needs it for walking the mc_list. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-11-14 | s390/qeth: drop unwanted packets earlier in RX path | Julian Wiedmann | 1 | -18/+8
Packets with an unexpected HW format are currently first extracted from the RX buffer, passed upwards to the layer-specific driver and only then finally dropped. Enhance the RX path so that we can drop such packets before even allocating an skb. For this, add some additional logic so that when a packet is meant to be dropped, we can still walk along the packet's data chunks in the RX buffer. This allows us to extract the following packet(s) from the buffer. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-11-14 | s390/qeth: gather more detailed RX dropped/error statistics | Julian Wiedmann | 1 | -0/+1
Where available, use the fine-grained counters in rtnl_link_stats64 to indicate different RX error causes. For drop reasons, use driver-private ethtool counters. In particular this patch allows us to keep track of driver-side drops due to unknown/unsupported HW descriptor format. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
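A minimal sketch of the split described above, with hypothetical counter names: RX error causes go into the fine-grained rtnl_link_stats64 fields, while drops caused by an unsupported descriptor format stay in a driver-private counter that would only be exposed via ethtool -S.

#include <linux/netdevice.h>

struct my_rx_counters {                 /* hypothetical per-device counters */
        u64 length_errors;
        u64 frame_errors;
        u64 dropped_bad_format;         /* ethtool-only drop counter */
};

static void my_get_stats64(struct net_device *dev,
                           struct rtnl_link_stats64 *stats)
{
        const struct my_rx_counters *rx = netdev_priv(dev);

        stats->rx_length_errors = rx->length_errors;
        stats->rx_frame_errors  = rx->frame_errors;
        stats->rx_errors = rx->length_errors + rx->frame_errors;
        /* dropped_bad_format is deliberately not folded into rx_dropped */
}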
2019-10-31 | s390/qeth: don't cache MAC addresses for multicast IPs | Julian Wiedmann | 1 | -14/+10
Instead of storing the multicast-mapped MAC address in an IP address object, just calculate the MAC address when actually building a cmd for the IP address. While at it, also clean up some rather verbose copying of IP addresses. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-10-31 | s390/qeth: use helpers for IP address hashing | Julian Wiedmann | 1 | -4/+4
Replace our custom implementations with the stack's version of IP address hashing. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
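A minimal sketch of using the stack's hashing helpers (the address struct and function names are hypothetical): ipv4_addr_hash() from <net/ip.h> and ipv6_addr_hash() from <net/ipv6.h> replace a hand-rolled hash as the hashtable key.

#include <net/ip.h>
#include <net/ipv6.h>

struct my_ipaddr {                      /* hypothetical */
        int proto;                      /* 4 or 6 */
        union {
                __be32 addr4;
                struct in6_addr addr6;
        };
};

static u32 my_ipaddr_hash(const struct my_ipaddr *addr)
{
        return addr->proto == 4 ? ipv4_addr_hash(addr->addr4)
                                : ipv6_addr_hash(&addr->addr6);
}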
2019-10-31 | s390/qeth: consolidate some duplicated HW cmd code | Julian Wiedmann | 1 | -9/+0
When setting a device online, both subdrivers have the same code to program the HW trap and Isolation mode. Move that code into a single place. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-08-24 | s390/qeth: collect accurate TX statistics | Julian Wiedmann | 1 | -6/+3
This consolidates the SW statistics code, and improves it to (1) account for the header overhead of each segment on a TSO skb, (2) count dangling packets as in-error (during eg. shutdown), and (3) only count offloads when the skb was successfully transmitted. We also count each segment of a TSO skb as one packet - except for tx_dropped, to be consistent with dev->tx_dropped. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
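A minimal sketch of the accounting rule (function and parameter names are hypothetical): a TSO skb counts one packet per segment, and every additional segment carries its own copy of the header on the wire.

#include <linux/skbuff.h>

static void count_tx_completion(const struct sk_buff *skb, unsigned int hdr_len,
                                u64 *tx_packets, u64 *tx_bytes)
{
        unsigned int segs = skb_is_gso(skb) ? skb_shinfo(skb)->gso_segs : 1;

        *tx_packets += segs;
        /* skb->len plus the replicated header for every extra segment */
        *tx_bytes += skb->len + (segs - 1) * hdr_len;
}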
2019-08-20 | s390/qeth: streamline control code for promisc mode | Julian Wiedmann | 1 | -17/+7
We have logic to determine the desired promisc mode in _each_ code path. Change things around so that there is a clean split between (a) high-level code that selects the new mode, and (b) implementations of the various mechanisms to program this mode. This also keeps qeth_promisc_to_bridge() from polluting the debug logs on each RX modeset. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Reviewed-by: Alexandra Winter <wintera@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-27 | s390/qeth: move cast type selection into fill_header() | Julian Wiedmann | 1 | -24/+19
The cast type currently gets selected in .ndo_start_xmit, and is then piped through several layers until it's stored into the HW header. Push the selection down into qeth_l?_fill_header() to (1) reduce the number of xmit-wide parameters, and (2) merge the two route validation checks into just one. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-27 | s390/qeth: extract helper for route validation | Julian Wiedmann | 1 | -28/+21
As follow-up to commit 0cd6783d3c7d ("s390/qeth: check dst entry before use"), consolidate the dst_check() logic into a single helper and add a wrapper around the cast type selection. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-27 | s390/qeth: consolidate skb RX processing in L3 driver | Julian Wiedmann | 1 | -18/+12
Use napi_gro_receive() to pass up all types of packets that a L3 device may receive. 1) For proper L2 packets received by the IQD sniffer, this is the obvious thing to do. 2) For af_iucv (which doesn't provide a GRO assist), the GRO code will transparently fall back to netif_receive_skb(). So there's no need to special-case this traffic in our code. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-27 | s390/qeth: consolidate pm code | Julian Wiedmann | 1 | -30/+0
De-duplicate the pm callback implementations from the two sub-drivers, replacing them with core helpers that delegate to the .set_online and .set_offline callbacks. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-27 | s390/qeth: remove static cmd buffer infrastructure | Julian Wiedmann | 1 | -1/+0
Now that all cmds are dynamically allocated, the code for static cmd buffers can go away entirely. Resulting in a nice reduction of code/data size & complexity, while removing the risk that qeth_clear_cmd_buffers() releases cmds that are still in-flight. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-27 | s390/qeth: dynamically allocate diag cmds | Julian Wiedmann | 1 | -3/+1
Add a new wrapper that allocates DIAG cmds of the right size, and fills in the common fields. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-27 | s390/qeth: dynamically allocate various cmds with sub-types | Julian Wiedmann | 1 | -5/+5
This patch converts the adapter, assist and bridgeport cmd paths to dynamic allocation. Most of the work is about re-organizing the cmd headers, calculating the correct cmd length, and filling in the right value in the sub-cmd's length field. Since we now also set the correct length for cmds that are not reflected by a fixed struct (ie SNMP), we can remove the work-around from qeth_snmp_command(). Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-27 | s390/qeth: clarify parameter for simple assist cmds | Julian Wiedmann | 1 | -12/+14
For code that uses qeth_send_simple_setassparms_prot(), we currently can't differentiate whether the cmd should contain (1) no parameter, or (2) a 4-byte parameter with value 0. At the moment this doesn't cause any trouble. But when using dynamically allocated cmds, we need to know whether to allocate & transmit an additional 4 bytes of zeroes. So instead of the raw parameter value, pass a parameter pointer (or NULL) to qeth_send_simple_setassparms_prot(). Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-27 | s390/qeth: dynamically allocate simple IPA cmds | Julian Wiedmann | 1 | -7/+10
This patch reduces the usage of the write channel's static cmd buffers, by dynamically allocating all simple IPA cmds (eg. STARTLAN, SETVMAC). It also converts the OSN path. Doing so requires some changes to how we calculate the cmd length. Currently when building IPA cmds, we're quite generous in how much data we send down to the device (basically the size of the biggest cmd we know). This is no real concern at the moment, since the static cmd buffers are backed with zeroed pages. But for dynamic allocations, the exact length matters. So this patch also adds the needed length calculations to each cmd path. Commands that have multiple subtypes (eg. SETADP) of differing length will be converted with follow-up patches. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-13 | s390/qeth: allocate a single cmd on read channel | Julian Wiedmann | 1 | -1/+0
We statically allocate 8 cmd buffers on the read channel, when the only IO left that's still using them is the long-running READ. Replace this with a single allocated cmd, that gets restarted whenever the READ completed. This introduces refcounting for allocated cmds, so that the READ cmd can survive the IO completion. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-13 | s390/qeth: convert device-specific trace entries | Julian Wiedmann | 1 | -19/+16
The vast majority of SETUP-classified trace entries can be moved to their device-specific trace file. This reduces pollution of the global SETUP file, and provides a consistent trace view of all activity on the device. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-13 | s390/qeth: simplify DOWN state handling | Julian Wiedmann | 1 | -5/+2
When the tear down sequence in qeth_l?_stop_card() has finished, the card is guaranteed to be in DOWN state and we don't have to check for it again. With this insight we can also remove the redundant setting of card->state in qeth_l?_set_online()'s error path. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-05 | s390/qeth: check dst entry before use | Julian Wiedmann | 1 | -5/+25
While qeth_l3 uses netif_keep_dst() to hold onto the dst, a skb's dst may still have been obsoleted (via dst_dev_put()) by the time that we end up using it. The dst then points to the loopback interface, which means the neighbour lookup in qeth_l3_get_cast_type() determines a bogus cast type of RTN_BROADCAST. For IQD interfaces this causes us to place such skbs on the wrong HW queue, resulting in TX errors. Fix up the various call sites to first validate the dst entry with dst_check(), and fall back accordingly. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
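A minimal sketch of the validation (the helper name is hypothetical, and the caller is assumed to hold rcu_read_lock()): re-check the skb's cached dst with dst_check() before using it for cast-type/next-hop decisions; for IPv6 the check needs the route's cookie.

#include <linux/skbuff.h>
#include <net/dst.h>
#include <net/ip6_fib.h>

static struct dst_entry *check_skb_dst(struct sk_buff *skb, int ipv)
{
        struct dst_entry *dst = skb_dst(skb);

        if (!dst)
                return NULL;
        /* a NULL result means the dst was obsoleted; fall back to headers */
        return dst_check(dst, ipv == 6 ?
                              rt6_get_cookie((struct rt6_info *)dst) : 0);
}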
2019-06-05 | s390/qeth: handle limited IPv4 broadcast in L3 TX path | Julian Wiedmann | 1 | -0/+2
When selecting the cast type of a neighbourless IPv4 skb (eg. on a raw socket), qeth_l3 falls back to the packet's destination IP address. For this case we should classify traffic sent to 255.255.255.255 as broadcast. This fixes DHCP requests, which were misclassified as unicast (and for IQD interfaces thus ended up on the wrong HW queue). Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
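A minimal sketch of the neighbourless IPv4 fallback (the function name is hypothetical): classify by destination address, treating the limited broadcast 255.255.255.255 as RTN_BROADCAST so that e.g. DHCP requests land on the broadcast queue.

#include <linux/in.h>
#include <linux/ip.h>
#include <linux/rtnetlink.h>
#include <linux/skbuff.h>

static int ipv4_dest_cast_type(const struct sk_buff *skb)
{
        __be32 daddr = ip_hdr(skb)->daddr;

        if (ipv4_is_lbcast(daddr))
                return RTN_BROADCAST;           /* 255.255.255.255 */
        if (ipv4_is_multicast(daddr))
                return RTN_MULTICAST;
        return RTN_UNICAST;
}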
2019-04-26 | s390/qeth: trust non-IP cast type in qeth_l3_fill_header() | Julian Wiedmann | 1 | -8/+3
When building the L3 HW header for non-IP packets, trust the cast type that was passed as parameter. qeth_l3_get_cast_type() has most likely also used h_dest to determine the cast type, so we get the same result, and can remove that duplicated code. In the unlikely case that we would get a _different_ cast type, then that's based off a route lookup and should be considered authoritative. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-26 | s390/qeth: extract helper to determine L2 cast type | Julian Wiedmann | 1 | -7/+1
This de-duplicates the L2 and L3 cast-type code, and makes the L2 code a bit more robust by removing the fragile assumption that skb->data always points to the Ethernet Header. This would break in code paths where we pushed the HW header onto the skb. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
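A minimal sketch of such a shared helper (the name is hypothetical): classify by the Ethernet destination taken via eth_hdr(skb), which goes through skb->mac_header and therefore stays valid even after a HW header has been pushed in front of skb->data.

#include <linux/etherdevice.h>
#include <linux/if_ether.h>
#include <linux/rtnetlink.h>
#include <linux/skbuff.h>

static int ether_cast_type(const struct sk_buff *skb)
{
        const unsigned char *dst = eth_hdr(skb)->h_dest;

        if (is_broadcast_ether_addr(dst))
                return RTN_BROADCAST;
        if (is_multicast_ether_addr(dst))
                return RTN_MULTICAST;
        return RTN_UNICAST;
}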
2019-04-26 | s390/qeth: use IS_* helpers for checking device type | Julian Wiedmann | 1 | -13/+11
We have helper macros for all possible device types, replace all remaining open-coded accesses to the type fields. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
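As a rough illustration of what such helpers typically look like (the real macro bodies live in qeth_core.h and may differ): one readable predicate per device type instead of open-coded comparisons of the type field.

/* illustrative only - see qeth_core.h for the actual definitions */
#define IS_IQD(card)    ((card)->info.type == QETH_CARD_TYPE_IQD)
#define IS_OSD(card)    ((card)->info.type == QETH_CARD_TYPE_OSD)

/* so that call sites read:
 *      if (IS_IQD(card))
 * instead of:
 *      if (card->info.type == QETH_CARD_TYPE_IQD)
 */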
2019-04-17 | s390/qeth: stop/wake TX queues based on their fill level | Julian Wiedmann | 1 | -16/+2
Current xmit code only stops the txq after attempting to fill an IO buffer that hasn't been TX-completed yet. In many-connection scenarios, this can result in frequent rejected TX attempts, requeuing of skbs with NETDEV_TX_BUSY and extra overhead. Now that we have a proper 1-to-1 relation between stack-side txqs and our HW Queues, overhaul the stop/wake logic so that the xmit code stops the txq as needed. Given that we might map multiple skbs into a single buffer, it's crucial to ensure that the queue always provides an _entirely_ empty IO buffer. Otherwise large skbs (eg TSO) might not fit into the last available buffer. So whenever qeth_do_send_packet() first utilizes an _empty_ buffer, it updates & checks the used_buffers count. This now ensures that an skb passed to qeth_xmit() can always be mapped into an IO buffer, so remove all of the -EBUSY roll-back handling in the TX path. We preserve the minimal safety-checks ("Is this IO buffer really available?"), just in case some nasty future bug ever attempts to corrupt an in-use buffer. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
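A minimal sketch of the stop/wake policy, assuming a simple buffer-count threshold (field and function names are hypothetical): xmit stops the txq once it can no longer guarantee an entirely empty IO buffer for the next skb, and the TX completion path wakes it again once buffers have been reclaimed.

#include <linux/netdevice.h>

/* called from the xmit path after an empty buffer was taken into use */
static void txq_maybe_stop(struct netdev_queue *txq,
                           unsigned int used_buffers, unsigned int max_buffers)
{
        if (used_buffers >= max_buffers)
                netif_tx_stop_queue(txq);
}

/* called from TX completion after buffers were returned to the queue */
static void txq_maybe_wake(struct netdev_queue *txq,
                           unsigned int used_buffers, unsigned int max_buffers)
{
        if (netif_tx_queue_stopped(txq) && used_buffers < max_buffers)
                netif_tx_wake_queue(txq);
}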
2019-04-17 | s390/qeth: add TX multiqueue support for OSA devices | Julian Wiedmann | 1 | -1/+10
This adds trivial support for multiple TX queues on OSA-style devices (both real HW and z/VM NICs). For now we expose the driver's existing QoS mechanism via .ndo_select_queue, and adjust the number of available TX queues when qeth_update_from_chp_desc() detects that the HW configuration has changed. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-17 | s390/qeth: add TX multiqueue support for IQD devices | Julian Wiedmann | 1 | -7/+23
qeth has been supporting multiple HW Output Queues for a long time. But rather than exposing those queues to the stack, it uses its own queue selection logic in .ndo_start_xmit... with all the drawbacks that entails. Start off by switching IQD devices over to a proper mqs net_device, and converting all the netdev_queue management code. One oddity with IQD devices is the requirement to place all mcast traffic on the _highest_ established HW queue. Doing so via .ndo_select_queue seems straight-forward - but that won't work if only some of the HW queues are active (ie. when dev->real_num_tx_queues < dev->num_tx_queues), since netdev_cap_txqueue() will not allow us to put skbs on the higher queues. To make this work, we 1. let .ndo_select_queue() map all mcast traffic to netdev_queue 0, and 2. later re-map the netdev_queue and HW queue indices in .ndo_start_xmit and the TX completion handler. With this patch we default to a fixed set of 1 ucast and 1 mcast queue. Support for dynamic reconfiguration is added at a later time. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
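A minimal sketch of the queue-selection split under the default 1-ucast/1-mcast configuration (names are hypothetical, and the mcast test is simplified to the Ethernet destination): .ndo_select_queue reports all mcast traffic as stack queue 0 so netdev_cap_txqueue() cannot reject it, and the xmit/completion code later re-maps that index onto the highest established HW queue.

#include <linux/etherdevice.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>

static u16 my_iqd_select_queue(struct net_device *dev, struct sk_buff *skb,
                               struct net_device *sb_dev)
{
        /* mcast -> stack txq 0; .ndo_start_xmit re-maps this index to the
         * highest established HW queue
         */
        if (is_multicast_ether_addr(eth_hdr(skb)->h_dest))
                return 0;
        return 1;       /* single ucast txq in the default configuration */
}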
2019-04-17 | s390/qeth: clarify naming for some QDIO helpers | Julian Wiedmann | 1 | -1/+1
The naming of several QDIO helpers doesn't match their actual functionality, or the structures they operate on. Clean this up.
s/qeth_alloc_qdio_buffers/qeth_alloc_qdio_queues
s/qeth_free_qdio_buffers/qeth_free_qdio_queues
s/qeth_alloc_qdio_out_buf/qeth_alloc_output_queue
s/qeth_clear_outq_buffers/qeth_drain_output_queue
s/qeth_clear_qdio_buffers/qeth_drain_output_queues
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-03-28 | s390/qeth: convert IP table spinlock to mutex | Julian Wiedmann | 1 | -17/+17
All users of the lock are running in process context now. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-03-28 | s390/qeth: defer IPv6 address notifier events | Julian Wiedmann | 1 | -6/+54
The inet6addr_chain is atomic. So instead of starting the cmd IO for SETIP / DELIP straight from the notifier callback, run it from a workqueue. This is the last step towards removal of cmd IO completion polling. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
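A minimal sketch of deferring the notifier event to process context (struct and function names are hypothetical): the inet6addr chain is atomic, so the callback only captures the address in a small work item, and a workqueue later performs the sleeping SETIP/DELIP cmd IO.

#include <linux/in6.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

struct addr_event {
        struct work_struct work;
        struct in6_addr addr;
        bool add;
};

static void addr_event_worker(struct work_struct *work)
{
        struct addr_event *ev = container_of(work, struct addr_event, work);

        /* sleeping SETIP/DELIP cmd IO would run here, in process context */
        kfree(ev);
}

/* called from the atomic inet6addr notifier */
static int defer_addr_event(const struct in6_addr *addr, bool add)
{
        struct addr_event *ev = kzalloc(sizeof(*ev), GFP_ATOMIC);

        if (!ev)
                return -ENOMEM;
        INIT_WORK(&ev->work, addr_event_worker);
        ev->addr = *addr;
        ev->add = add;
        queue_work(system_wq, &ev->work);
        return 0;
}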
2019-03-28 | s390/qeth: add wrapper for IP table access | Julian Wiedmann | 1 | -16/+17
Extract a little helper, so that high-level callers can manipulate the IP table without worrying about the locking. This will make it easier to convert the code to a different locking primitive later on. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>