2017-03-21  Cleanup some warnings from timestamping code  (Ezequiel Lara Gomez; 1 file, -11/+8)

Follow checkpatch.pl recommendations, which include replacing <asm/io.h> with <linux/io.h>, since linux/io.h includes it.

Signed-off-by: Ezequiel Lara Gomez <ezegomez@amazon.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-03-21  Enable tx timestamping on loopback and dummy  (Ezequiel Lara Gomez; 2 files, -0/+30)

This enables developing code that uses SOF_TIMESTAMPING_TX_SOFTWARE against localhost addresses (without needing to send packets outside), as well as unit and functional testing of TX timestamping code without hardware support or network access. It also fulfills the expectation that software network devices support software-based timestamping.

Tested on qemu using txtimestamping.c from the kernel selftests, and with ethtool -T.

Signed-off-by: Ezequiel Lara Gomez <ezegomez@amazon.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
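As an illustration of what this change enables, here is a minimal userspace sketch of requesting software TX timestamps over loopback; it assumes a Linux system with SO_TIMESTAMPING support, and the discard-port destination is arbitrary:

    #include <stdio.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <linux/net_tstamp.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        int flags = SOF_TIMESTAMPING_TX_SOFTWARE |   /* stamp on transmit */
                    SOF_TIMESTAMPING_SOFTWARE |      /* report software stamps */
                    SOF_TIMESTAMPING_OPT_TSONLY;     /* don't loop the payload back */
        struct sockaddr_in dst = {
            .sin_family = AF_INET,
            .sin_port = htons(9),                    /* discard port */
            .sin_addr.s_addr = htonl(INADDR_LOOPBACK),
        };

        if (setsockopt(fd, SOL_SOCKET, SO_TIMESTAMPING, &flags, sizeof(flags)))
            perror("SO_TIMESTAMPING");

        sendto(fd, "ping", 4, 0, (struct sockaddr *)&dst, sizeof(dst));
        /* The timestamp arrives as an SCM_TIMESTAMPING control message on the
         * socket's error queue; read it with recvmsg(fd, &msg, MSG_ERRQUEUE).
         */
        return 0;
    }

Before this patch, the sendto() over loopback would simply produce no timestamp, since lo and dummy did not advertise software timestamping.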
2017-03-21  Merge branch '40GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue  (David S. Miller; 5 files, -224/+273)

Jeff Kirsher says:

====================
40GbE Intel Wired LAN Driver Updates 2017-03-20

This series contains updates to i40e and i40evf only.

Philippe Reynes updates i40e and i40evf to use the new ethtool API for {get|set}_link_ksettings.

Jake provides the remaining patches in the series, starting with a fix for i40e where the firmware expected the port numbers for the offloaded UDP tunnels in Little Endian format but we were sending them in Big Endian format, which caused the wrong port number to be put in the UDP tunnel list. Changed the driver to use __be32 values instead of arrays for (src|dst)_ip. Refactored the exit flow of i40e_add_fdir_ethtool(), which removes the dependency on having a non-zero return value. Fixed a memory leak by calling kfree() and returning immediately when we fail to add a flow director filter. Fixed a potential issue where we could update the filter count without actually succeeding in adding a filter, by moving the ATR exit check to after we have sent the TCP/IPv4 filter to the ring successfully. Ensured that the fd_tcp_rule count is reset to 0 before we reprogram the filters, so that we do not end up with a stale count which does not correctly reflect the number of programmed filters. Added a check for whether we have TCP/IPv4 filters before re-enabling ATR after flushing and replaying FDIR filters. Added counters for each filter type in preparation for adding code to properly check the mask value. Fixed potential issues by explicitly checking the flow type at the start of i40e_add_fdir_ethtool(). To avoid possible memory leaks, we now unconditionally delete the old filter, even if it is identical to the new filter; this ensures we will always update the filters as expected.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2017-03-21  Merge git://git.kernel.org/pub/scm/linux/kernel/git/pablo/nf-next  (David S. Miller; 54 files, -297/+615)

Pablo Neira Ayuso says:

====================
Netfilter/IPVS updates for net-next

The following patchset contains Netfilter/IPVS updates for your net-next tree. A couple of new features for nf_tables, and unsorted cleanups and incremental updates for the Netfilter tree. More specifically, they are:

1) Allow checking for TCP option presence via nft_exthdr, patch from Phil Sutter.
2) Add symmetric hash support to nft_hash, from Laura Garcia Liebana.
3) Use pr_cont() in ebt_log, from Joe Perches.
4) Remove some dead code in arp_tables reported via static analysis tool, from Colin Ian King.
5) Consolidate nf_tables expression validation, from Liping Zhang.
6) Consolidate set lookup via nft_set_lookup().
7) Remove unnecessary rcu read lock side in bridge netfilter, from Florian Westphal.
8) Remove unused variable in nf_reject_ipv4, from Tahee Yoo.
9) Pass nft_ctx struct to object initialization indirections, from Florian Westphal.
10) Add code to integrate conntrack helper into nf_tables, also from Florian.
11) Allow checking whether an interface index or name exists via NFTA_FIB_F_PRESENT, from Phil Sutter.
12) Simplify resolve_normal_ct(), from Florian.
13) Use per-limit spinlock in nft_limit and xt_limit, from Liping Zhang.
14) Use rwlock in nft_set_rbtree set, also from Liping Zhang.
15) One patch to remove a useless printk at netns init path in ipvs, and several patches to document IPVS knobs.
16) Use refcount_t for reference counters in the Netfilter/IPVS code, from Elena Reshetova.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2017-03-21  Merge branch '1GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue  (David S. Miller; 5 files, -203/+439)

Jeff Kirsher says:

====================
1GbE Intel Wired LAN Driver Updates 2017-03-17

This series contains updates to mainly igb, with one fix for ixgbe.

Alex does all the changes in the series, starting with adding support for DMA_ATTR_WEAK_ORDERING to improve performance on some platforms. Modified igb to use the length of the packet instead of the DD status bit to determine if a new descriptor is ready to be processed. Modified the driver to only go through the region of the receive ring that was designated to be cleaned up, instead of going through the entire ring on cleanup. Cleaned up the transmit side by clearing the transmit buffer_info only when resetting the rings. Added a new upper limit for receive, based on the size of a 2K buffer minus padding, which will allow us to support build_skb going forward. Fixed ethtool testing to only sync on the size of the frame that is being tested, instead of the entire receive buffer. Updated the handling of page addresses to always use a void pointer with the consistent name of "va" to indicate that we are working with a virtual address. Added a "chicken bit" so that we can turn off the new receive allocation feature in case we need to fall back to the legacy receive path. Added support for using 3K buffers in order-1 pages the same way we were using 2K buffers in 4K pages. Added support for padding packets: since we limit the size of the frame, we are able to write to an offset within the buffer instead of having to write at the very start of the buffer, which leaves padding room for things like supporting XDP in the future. Refactored the receive buffer page management: there are 2-3 paths that can be taken depending on which receive modes are enabled, so to improve maintainability the common bits are broken out into their own functions. Added support for build_skb, again. Lastly, fixed a typo in igb and ixgbe code comments.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2017-03-20  i40e: always remove old filter when adding new FDir filter  (Jacob Keller; 1 file, -25/+7)

The previous code relied on i40e_match_fdir_input_set when determining whether to free the old filter. Change this code so that we simply unconditionally delete the old filter, even if it's identical to the new filter. This ensures that we don't leak any memory, and that we always update the filters as expected.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2017-03-20  i40e: explicitly fail on extended MAC field for ethtool_rx_flow_spec  (Jacob Keller; 1 file, -0/+4)

Although we will fail the filter later when checking flow_type (which will have a bogus invalid type), it is possible future refactoring will remove this hidden failure case. Avoid a possible issue in the future by explicitly checking the flow type at the start.

Change-Id: Ia98eb26f7b93ccbe38c7141e8f203ef496fc6598
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2017-03-20  i40e: add counters for UDP/IPv4 and IPv4 filters  (Jacob Keller; 3 files, -11/+34)

In preparation for adding code to properly check the mask values, we will need to know the number of active filters for each type. Add counters for each filter type. Rename the already existing fd_tcp_rule to fd_tcp4_filter_cnt to match the style of other names. To avoid style warnings, avoid assigning multiple parameters at once, and fix up one other case where we did so previously.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2017-03-20  i40e: don't re-enable ATR when flushing filters if SB has TCP4/IPv4 rules  (Jacob Keller; 1 file, -1/+1)

When flushing and replaying FDIR filters, it is possible we would disable ATR and then re-enable it, even though we should have kept it disabled due to existing TCP/IPv4 filters. Fix this by checking whether we have TCP4/IPv4 filters before re-enabling. Alternatively, we could instead restore ATR and then replay filters; however, this would cause us to rapidly enable and then disable ATR in some cases.

Change-ID: I076e4cc1e4409bce7f98f3c213295433a4ff43d8
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Reviewed-by: Avinash Dayanand <avinash.dayanand@intel.com>
Reviewed-by: Alan Brady <alan.brady@intel.com>
Reviewed-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2017-03-20  i40e: reset fd_tcp_rule count when restoring filters  (Jacob Keller; 1 file, -0/+3)

Since we're about to reprogram the filters, we need to ensure that the fd_tcp_rule count is correctly reset to 0. Otherwise, we will keep a stale count that does not accurately reflect the number of programmed TCPv4 filters.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2017-03-20  i40e: remove redundant check for fd_tcp_rule when restoring filters  (Jacob Keller; 1 file, -6/+0)

i40e_fdir_filter_restore re-adds all existing filters, and the add path already checks whether ATR should be disabled when adding a TCPv4 filter. We don't need to make the check twice, so remove this redundant code.

Change-ID: Ia0b0690e23523915199d601494557def135c9d7f
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2017-03-20  i40e: exit ATR mode only when adding TCP/IPv4 filter succeeds  (Jacob Keller; 1 file, -17/+17)

Move the ATR exit check to after we have sent the TCP/IPv4 filter to the ring successfully. This avoids an issue where we potentially update the filter count without actually succeeding in adding the filter. Now, we only increment fd_tcp_rule after we've succeeded. Additionally, we will re-enable ATR mode only after deletion of the filter is actually posted to the FDIR ring.

Change-ID: If5c1dea422081cc5e2de65618b01b4c3bf6bd586
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Reviewed-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2017-03-20  i40e: return immediately when failing to add fdir filter  (Jacob Keller; 1 file, -19/+14)

Instead of setting err=true and checking it to determine when to free the raw_packet near the end of the function, simply kfree() and return immediately. The resulting code is a bit cleaner and has one less variable. This also resolves a subtle bug in the ipv4 case, which could fail to add the first filter and then never free the memory, resulting in a small memory leak.

Change-ID: I7583aac033481dc794b4acaa14445059c8930ff1
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Reviewed-by: Avinash Dayanand <avinash.dayanand@intel.com>
Reviewed-by: Alan Brady <alan.brady@intel.com>
Reviewed-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2017-03-20  i40e: rework exit flow of i40e_add_fdir_ethtool  (Jacob Keller; 1 file, -4/+11)

Refactor the exit flow of the i40e_add_fdir_ethtool function. Move the input_label to the end of the function, removing the dependency on having a non-zero return value. Add a comment explaining why it is ok not to free the fdir data structure: the structure is now stored in the fdir_filter_list.

Change-Id: I723342181d59cd0c9f3b31140c37961ba37bb242
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2017-03-20  i40e: don't use arrays for (src|dst)_ip  (Jacob Keller; 3 files, -14/+14)

The code originally included src_ip and dst_ip with enough space to support ipv6 filters. However, no actual support for ipv6 filters has been implemented. Thus, remove the arrays and just use __be32 values. Should ipv6 support be added in the future, we can replace these with a union that has sizes for both values.

Change-Id: I1bc04032244a80eb6ebc8a4e6c723a4a665c1dd5
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
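A hypothetical sketch of the kind of structure change described; the actual i40e structure and field layout may differ:

    /* before: arrays sized for an IPv6 filter that was never implemented */
    struct fdir_filter_before {
        __be32 src_ip[4];
        __be32 dst_ip[4];
    };

    /* after: plain network-order IPv4 addresses */
    struct fdir_filter_after {
        __be32 src_ip;
        __be32 dst_ip;
    };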
2017-03-20  i40e: send correct port number to AdminQ when enabling UDP tunnels  (Jacob Keller; 2 files, -10/+10)

The firmware expects the port numbers for offloaded UDP tunnels in Little Endian format. We accidentally sent the value in Big Endian format, which obviously will cause the wrong port number to be put into the UDP tunnels list. This results in VxLAN and Geneve tunnel Rx offloads being essentially disabled, unless the port number happens to be identical after byte swapping. Note that i40e_aq_add_udp_tunnel() will byteswap the parameter from host order into Little Endian, so we don't need to worry about passing strictly a __le16 value to the command. This patch essentially reverts b3f5c7bc88ba ("i40e: Fix for extra byte swap in tunnel setup", 2016-08-24), but in a way that makes the result much more clear to the reader.

Fixes: b3f5c7bc88ba ("i40e: Fix for extra byte swap in tunnel setup", 2016-08-24)
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Reviewed-by: Williams, Mitch A <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
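A sketch of the endianness handling described above; variable names are illustrative rather than the exact i40e code. The stack hands the driver a network-order (__be16) port, and the AQ helper wants host order, doing its own host-to-LE conversion internally:

    /* port arrives in network (big-endian) order, e.g. from
     * struct udp_tunnel_info's __be16 port field */
    __be16 tnl_port_be = ti->port;
    u16 tnl_port = ntohs(tnl_port_be);    /* host order for the AQ call */

    /* wrong: passing the raw big-endian value straight through,
     * so the helper's internal byteswap produces a garbled port:
     *     i40e_aq_add_udp_tunnel(hw, (u16)tnl_port_be, ...);
     *
     * right: hand the helper a host-order value and let it do the
     * single host-to-little-endian conversion itself:
     *     i40e_aq_add_udp_tunnel(hw, tnl_port, ...);
     */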
2017-03-20  i40evf: use new api ethtool_{get|set}_link_ksettings  (Philippe Reynes; 1 file, -16/+15)

The ethtool api {get|set}_settings is deprecated. We move this driver to the new api {get|set}_link_ksettings. As I don't have the hardware, I'd be very pleased if someone may test this patch.

Signed-off-by: Philippe Reynes <tremyfr@gmail.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
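A generic sketch of the callback shape the new API expects; driver names here are illustrative, not the i40evf implementation:

    #include <linux/ethtool.h>
    #include <linux/netdevice.h>

    static int example_get_link_ksettings(struct net_device *netdev,
                                          struct ethtool_link_ksettings *cmd)
    {
        /* link modes live in bitmaps manipulated via helpers,
         * instead of the old fixed-width 'supported' u32 */
        ethtool_link_ksettings_zero_link_mode(cmd, supported);
        ethtool_link_ksettings_add_link_mode(cmd, supported, 10000baseT_Full);

        cmd->base.speed = SPEED_10000;      /* u32, no more u16 speed limit */
        cmd->base.duplex = DUPLEX_FULL;
        cmd->base.autoneg = AUTONEG_DISABLE;
        return 0;
    }

    static const struct ethtool_ops example_ethtool_ops = {
        .get_link_ksettings = example_get_link_ksettings,
        /* .set_link_ksettings replaces the old .set_settings */
    };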
2017-03-20  netfilter: fix the warning on unused refcount variable  (Reshetova, Elena; 1 file, -1/+0)

net/netfilter/nfnetlink_acct.c: In function 'nfnl_acct_try_del':
net/netfilter/nfnetlink_acct.c:329:15: warning: unused variable 'refcount' [-Wunused-variable]
   unsigned int refcount;
                ^

Fixes: b54ab92b84b6 ("netfilter: refcounter conversions")
Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2017-03-17  i40e: use new api ethtool_{get|set}_link_ksettings  (Philippe Reynes; 1 file, -111/+153)

The ethtool api {get|set}_settings is deprecated. We move this driver to the new api {get|set}_link_ksettings. As I don't have the hardware, I'd be very pleased if someone may test this patch.

Signed-off-by: Philippe Reynes <tremyfr@gmail.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2017-03-17  igb/ixgbe: Fix typo in igb_build_skb and/or ixgbe_build_skb code comment  (Alexander Duyck; 2 files, -2/+2)

There was a typo that I had left in the code comments for the igb and ixgbe functions that enabled build_skb support.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2017-03-17  igb: Re-add support for build_skb in igb  (Alexander Duyck; 1 file, -0/+47)

This reverts commit f9d40f6a9921 ("igb: Revert support for build_skb in igb") and adds a few changes to update it to work with the latest version of igb. We are now able to revert the removal because, with the recent changes to the page count and the use of DMA_ATTR_SKIP_CPU_SYNC, we can make the pages writable, so we should not be invalidating the additional data added when we call build_skb.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
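A minimal sketch of the build_skb receive path, assuming a buffer laid out with headroom ahead of the packet data; names like 'headroom' and the wrapper function are illustrative of the pattern, not quoted from the driver:

    static struct sk_buff *rx_build_skb(struct page *page,
                                        unsigned int page_offset,
                                        unsigned int size,
                                        unsigned int truesize,
                                        unsigned int headroom)
    {
        void *va = page_address(page) + page_offset;
        struct sk_buff *skb;

        /* build_skb() wraps the existing DMA buffer instead of
         * allocating and copying into a fresh one, which is the
         * whole point of this path */
        skb = build_skb(va - headroom, truesize);
        if (unlikely(!skb))
            return NULL;

        skb_reserve(skb, headroom);   /* skip the reserved headroom */
        __skb_put(skb, size);         /* mark received bytes as data */
        return skb;
    }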
2017-03-17  igb: Break out Rx buffer page management  (Alexander Duyck; 1 file, -114/+121)

At this point we have 2 to 3 paths that can be taken depending on what Rx modes are enabled. In order to better support that and improve the maintainability, I am breaking out the common bits from those paths and making them into their own functions.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2017-03-17  igb: Add support for padding packet  (Alexander Duyck; 2 files, -2/+23)

With the size of the frame limited, we can now write to an offset within the buffer instead of having to write at the very start of the buffer. The advantage to this is that it allows us to leave padding room for things like supporting XDP in the future.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2017-03-17  igb: Add support for using order 1 pages to receive large frames  (Alexander Duyck; 2 files, -18/+76)

This patch adds support for using 3K buffers in order-1 pages the same way we were using 2K buffers in 4K pages. We are reserving 1K of room for now to have space available for future headroom and tailroom when we enable build_skb support. One side effect of this patch is that we can end up using a larger buffer if jumbo frames are enabled. The impact shouldn't be too great, but it could hurt small packet performance for UDP workloads if jumbo frames are enabled, as the truesize of frames will be larger.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
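Back-of-the-envelope for the buffer split described above; the constants mirror the description rather than the driver's exact macros:

    #define BUF_2K  2048  /* two 2K buffers fill one order-0 (4K) page */
    #define BUF_3K  3072  /* two 3K buffers in one order-1 (8K) page:
                           * 8192 - 2*3072 = 2048 bytes left over, i.e.
                           * roughly the 1K per buffer reserved here for
                           * future build_skb head/tailroom */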
2017-03-17  igb: Add support for ethtool private flag to allow use of legacy Rx  (Alexander Duyck; 2 files, -0/+49)

Since there are potential drawbacks to the new Rx allocation approach, I thought it best to add a "chicken bit" so that we can turn the feature off in the event that a problem is found. It also provides a means of validating the legacy Rx path in the event that we are forced to fall back. At some point in the future, when we are convinced we don't need it anymore, we might be able to drop the legacy-rx flag.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2017-03-17  igb: Use page_address offset from page instead of masking virtual address  (Alexander Duyck; 3 files, -9/+7)

Update the handling of page addresses so that we always refer to them using a void pointer, and try to use the consistent name of "va" indicating we are working with a virtual address.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2017-03-17  igb: Only sync size of expected frame in ethtool testing  (Alexander Duyck; 1 file, -2/+2)

We only need to sync the size of the frame that is read to test. We don't need to sync the entire Rx buffer. This way the testing is more consistent with how we handle things in the receive path.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2017-03-17  igb: Limit maximum frame Rx based on MTU  (Alexander Duyck; 2 files, -5/+26)

In order to support the use of build_skb going forward, it will be necessary to place a maximum limit on the amount of data we can receive when jumbo frames are not enabled. In order to do this I am adding a new upper limit for receive based on the size of a 2K buffer minus padding.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
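An illustrative sketch of such a limit (not the driver's exact macros): with a 2K buffer, build_skb needs skb_shared_info space at the end and padding at the front, so the usable frame size is whatever remains:

    #include <linux/skbuff.h>

    #define RX_BUF_2K   2048
    #define RX_SKB_PAD  (NET_SKB_PAD + NET_IP_ALIGN)  /* reserved headroom */

    /* SKB_WITH_OVERHEAD() subtracts the aligned skb_shared_info
     * that build_skb() places at the end of the buffer */
    #define RX_MAX_FRAME_BUILD_SKB \
        (SKB_WITH_OVERHEAD(RX_BUF_2K) - RX_SKB_PAD)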
2017-03-17  igb: Don't bother clearing Tx buffer_info in igb_clean_tx_ring  (Alexander Duyck; 3 files, -49/+83)

In the case of the Tx rings, we need to clear the Tx buffer_info only when we are resetting the rings. Ideally we do this when we configure the ring to bring it back up, instead of when we are taking it down, in order to avoid dirtying pages we don't need to. In addition, we don't need to clear the Tx descriptor ring, since we will fully repopulate it when we begin transmitting frames, and next_to_watch can be cleared to prevent the ring from being cleaned beyond that point instead of needing to touch anything in the Tx descriptor ring. Finally, with these changes we can avoid having to reset the skb member of the Tx buffer_info structure in the cleanup path, since the skb will always be associated with the first buffer, which has next_to_watch set.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2017-03-17  igb: Clear Rx buffer_info in configure instead of clean  (Alexander Duyck; 1 file, -14/+10)

This change makes it so that, instead of going through the entire ring on Rx cleanup, we only go through the region that was designated to be cleaned up and stop when we reach the region where new allocations should start. In addition, we can avoid having to perform a memset on the Rx buffer_info structures until we are about to start using the ring again. By deferring this we can avoid dirtying the cache any more than we have to, which can help to improve the time needed to bring the interface down and then back up again in a reset or suspend/resume cycle.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2017-03-17  igb: Use length to determine if descriptor is done  (Alexander Duyck; 2 files, -7/+9)

This change makes it so that we use the length of the packet, instead of the DD status bit, to determine if a new descriptor is ready to be processed. The obvious advantage is that it cuts down on reads, as we don't really even need the DD bit if going from a 0 to a non-zero value on size is enough to inform us that the packet has been completed. In addition, I have updated the code so that we only reset the Rx descriptor length for descriptor zero when resetting a ring, instead of having to do a memset with 0 over the entire ring. By doing this we can save some time on initialization.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
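A sketch of the length-based completion test inside the clean loop; the descriptor layout follows the igb advanced Rx descriptor, but treat the exact field access as illustrative:

    union e1000_adv_rx_desc *rx_desc =
        IGB_RX_DESC(rx_ring, rx_ring->next_to_clean);
    unsigned int size = le16_to_cpu(rx_desc->wb.upper.length);

    if (!size)
        break;   /* writeback hasn't landed yet: descriptor not done */

    /* This barrier keeps us from reading any other descriptor
     * fields before we know the writeback (including size) has
     * completed, since size now stands in for the DD bit. */
    dma_rmb();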
2017-03-17  igb: Add support for DMA_ATTR_WEAK_ORDERING  (Alexander Duyck; 2 files, -3/+6)

Since we are already using DMA attributes in igb for Rx, there is no reason why we can't also apply DMA_ATTR_WEAK_ORDERING, which is needed on some platforms to improve performance.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
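A sketch of the mapping call with the extra attribute; the ring and page variables are illustrative context, not the exact igb code:

    #include <linux/dma-mapping.h>

    /* DMA_ATTR_WEAK_ORDERING lets the IOMMU relax ordering for this
     * mapping on platforms that support it; combined here with
     * DMA_ATTR_SKIP_CPU_SYNC, which the driver already used */
    dma_addr_t dma = dma_map_page_attrs(rx_ring->dev, page, 0, PAGE_SIZE,
                                        DMA_FROM_DEVICE,
                                        DMA_ATTR_SKIP_CPU_SYNC |
                                        DMA_ATTR_WEAK_ORDERING);
    if (dma_mapping_error(rx_ring->dev, dma))
        __free_page(page);   /* give the page back on failure */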
2017-03-17  netfilter: refcounter conversions  (Reshetova, Elena; 21 files, -75/+85)

The refcount_t type and corresponding API (see include/linux/refcount.h) should be used instead of atomic_t when the variable is used as a reference counter. This allows us to avoid accidental refcounter overflows that might lead to use-after-free situations.

Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: Hans Liljestrand <ishkamiel@gmail.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: David Windsor <dwindsor@gmail.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
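A generic sketch of an atomic_t to refcount_t conversion; 'struct obj' is illustrative, not one of the converted netfilter structures:

    #include <linux/refcount.h>
    #include <linux/slab.h>

    struct obj {
        refcount_t ref;            /* was: atomic_t ref; */
        /* ... payload ... */
    };

    static struct obj *obj_alloc(void)
    {
        struct obj *o = kzalloc(sizeof(*o), GFP_KERNEL);

        if (o)
            refcount_set(&o->ref, 1);   /* was: atomic_set(&o->ref, 1) */
        return o;
    }

    static void obj_get(struct obj *o)
    {
        /* was atomic_inc(); refcount_inc() saturates and warns on
         * overflow instead of silently wrapping to zero */
        refcount_inc(&o->ref);
    }

    static void obj_put(struct obj *o)
    {
        /* was: if (atomic_dec_and_test(&o->ref)) */
        if (refcount_dec_and_test(&o->ref))
            kfree(o);
    }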
2017-03-16  liquidio: fix wrong information about link modes reported to ethtool  (Manish Awasthi; 1 file, -4/+10)

Information reported to ethtool about link modes is wrong for 25G NICs. Fix it by checking for the presence of a 25G NIC, checking the link speed reported by the NIC firmware, and then assigning proper values to the ethtool_link_ksettings struct.

Signed-off-by: Manish Awasthi <manish.awasthi@cavium.com>
Signed-off-by: Felix Manlunas <felix.manlunas@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-03-16  Merge branch 'netvsc-small-changes'  (David S. Miller; 3 files, -22/+27)

Stephen Hemminger says:

====================
netvsc: small changes for net-next

One bugfix, and two non-code patches.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2017-03-16  netvsc: remove unused #define  (stephen hemminger; 1 file, -3/+0)

Not used anywhere.

Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-03-16  netvsc: add comments about callbacks and NAPI  (stephen hemminger; 1 file, -1/+12)

Add a short description of how callbacks and NAPI interoperate.

Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-03-16  netvsc: avoid race with callback  (stephen hemminger; 2 files, -18/+15)

Change the argument to the channel callback from the channel pointer to the internal data structure containing per-channel info. This avoids any possible races when the callback happens during initialization, and makes the IRQ code simpler.

Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
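A hedged sketch of the idea: hand vmbus_open() the per-channel state as the callback context, so the handler never has to derive it from the channel while initialization may still be in flight. Structure and function names are illustrative, not the exact netvsc code:

    #include <linux/hyperv.h>
    #include <linux/netdevice.h>

    struct example_channel {
        struct vmbus_channel *channel;
        struct napi_struct napi;
    };

    static void example_channel_cb(void *context)
    {
        struct example_channel *nvchan = context;

        /* per-channel NAPI state is reachable directly, no lookup */
        napi_schedule(&nvchan->napi);
    }

    static int example_open_channel(struct vmbus_channel *channel,
                                    struct example_channel *nvchan,
                                    u32 ring_bytes)
    {
        /* pass nvchan, not the raw channel, as the callback context */
        return vmbus_open(channel, ring_bytes, ring_bytes, NULL, 0,
                          example_channel_cb, nvchan);
    }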
2017-03-16  Merge branch 'bpf-inline-lookups'  (David S. Miller; 9 files, -65/+261)

Alexei Starovoitov says:

====================
bpf: inline bpf_map_lookup_elem()

bpf_map_lookup_elem() is one of the most frequently used helper functions. Improve JITed program performance by inlining this helper.

bpf_map_type    before  after
hash            58M     74M
array           174M    280M

The values are the number of lookups per second in ideal conditions, measured by the micro-benchmark in patch 6.

The 'perf report' for the HASH map type:

before:
    54.23% map_perf_test [kernel.kallsyms] [k] __htab_map_lookup_elem
    14.24% map_perf_test [kernel.kallsyms] [k] lookup_elem_raw
     8.84% map_perf_test [kernel.kallsyms] [k] htab_map_lookup_elem
     5.93% map_perf_test [kernel.kallsyms] [k] bpf_map_lookup_elem
     2.30% map_perf_test [kernel.kallsyms] [k] bpf_prog_da4fc6a3f41761a2
     1.49% map_perf_test [kernel.kallsyms] [k] kprobe_ftrace_handler

after:
    60.03% map_perf_test [kernel.kallsyms] [k] __htab_map_lookup_elem
    18.07% map_perf_test [kernel.kallsyms] [k] lookup_elem_raw
     2.91% map_perf_test [kernel.kallsyms] [k] bpf_prog_da4fc6a3f41761a2
     1.94% map_perf_test [kernel.kallsyms] [k] _einittext
     1.90% map_perf_test [kernel.kallsyms] [k] __audit_syscall_exit
     1.72% map_perf_test [kernel.kallsyms] [k] kprobe_ftrace_handler

so the cost of htab_map_lookup_elem() and bpf_map_lookup_elem() is gone after inlining.

'per-cpu' and 'lru' map types can be optimized similarly in the future.

Note that sparse will complain that bpf is addictive ;)
kernel/bpf/hashtab.c:438:19: sparse: subtraction of functions? Share your drugs
kernel/bpf/verifier.c:3342:38: sparse: subtraction of functions? Share your drugs
It's not a new warning, just in new places.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2017-03-16  samples/bpf: add map_lookup microbenchmark  (Alexei Starovoitov; 2 files, -0/+65)

$ map_perf_test 128

speed of HASH bpf_map_lookup_elem() in lookups per second:
        w/o JIT  w/JIT
before  46M      58M
after   42M      74M

perf report, before:
    54.23% map_perf_test [kernel.kallsyms] [k] __htab_map_lookup_elem
    14.24% map_perf_test [kernel.kallsyms] [k] lookup_elem_raw
     8.84% map_perf_test [kernel.kallsyms] [k] htab_map_lookup_elem
     5.93% map_perf_test [kernel.kallsyms] [k] bpf_map_lookup_elem
     2.30% map_perf_test [kernel.kallsyms] [k] bpf_prog_da4fc6a3f41761a2
     1.49% map_perf_test [kernel.kallsyms] [k] kprobe_ftrace_handler

after:
    60.03% map_perf_test [kernel.kallsyms] [k] __htab_map_lookup_elem
    18.07% map_perf_test [kernel.kallsyms] [k] lookup_elem_raw
     2.91% map_perf_test [kernel.kallsyms] [k] bpf_prog_da4fc6a3f41761a2
     1.94% map_perf_test [kernel.kallsyms] [k] _einittext
     1.90% map_perf_test [kernel.kallsyms] [k] __audit_syscall_exit
     1.72% map_perf_test [kernel.kallsyms] [k] kprobe_ftrace_handler

Notice that bpf_map_lookup_elem() and htab_map_lookup_elem() are trivial functions, yet they take a sizeable amount of cpu time. htab_map_gen_lookup() removes bpf_map_lookup_elem() and converts htab_map_lookup_elem() into three BPF insns, which causes the cpu time for bpf_prog_da4fc6a3f41761a2() to increase slightly.

$ map_perf_test 256

speed of ARRAY bpf_map_lookup_elem() in lookups per second:
        w/o JIT  w/JIT
before  97M      174M
after   64M      280M

before:
    37.33% map_perf_test [kernel.kallsyms] [k] array_map_lookup_elem
    13.95% map_perf_test [kernel.kallsyms] [k] bpf_map_lookup_elem
     6.54% map_perf_test [kernel.kallsyms] [k] bpf_prog_da4fc6a3f41761a2
     4.57% map_perf_test [kernel.kallsyms] [k] kprobe_ftrace_handler

after:
    32.86% map_perf_test [kernel.kallsyms] [k] bpf_prog_da4fc6a3f41761a2
     6.54% map_perf_test [kernel.kallsyms] [k] kprobe_ftrace_handler

array_map_gen_lookup() removes calls to array_map_lookup_elem() and bpf_map_lookup_elem() and replaces them with 7 bpf insns. The performance without JIT is slower, since executing extra insns in the interpreter is slower than running native C code, but with JIT the performance gains are obvious, since native C->x86 code is replaced with fewer bpf->x86 instructions.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-03-16  bpf: inline htab_map_lookup_elem()  (Alexei Starovoitov; 1 file, -1/+30)

Optimize:

  bpf_call bpf_map_lookup_elem
    map->ops->map_lookup_elem
      htab_map_lookup_elem
        __htab_map_lookup_elem

into:

  bpf_call __htab_map_lookup_elem

to improve performance of JITed programs.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
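The mechanism is a map_gen_lookup callback the verifier invokes to emit instructions in place of the generic helper call. A sketch approximating what such a callback emits for the hash map; treat this as an illustration of the approach, not the verbatim in-tree code:

    static u32 htab_map_gen_lookup(struct bpf_map *map,
                                   struct bpf_insn *insn_buf)
    {
        struct bpf_insn *insn = insn_buf;
        const int ret = BPF_REG_0;

        /* call the internal lookup directly, skipping two wrappers */
        *insn++ = BPF_EMIT_CALL((u64 (*)(u64, u64, u64, u64, u64))
                                __htab_map_lookup_elem);
        /* NULL means "not found": skip the address fixup */
        *insn++ = BPF_JMP_IMM(BPF_JEQ, ret, 0, 1);
        /* advance from the element header to the stored value */
        *insn++ = BPF_ALU64_IMM(BPF_ADD, ret,
                                offsetof(struct htab_elem, key) +
                                round_up(map->key_size, 8));
        return insn - insn_buf;
    }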
2017-03-16  bpf: add helper inlining infra and optimize map_array lookup  (Alexei Starovoitov; 5 files, -4/+77)

Optimize bpf_call -> bpf_map_lookup_elem() -> array_map_lookup_elem() into a sequence of bpf instructions. When JIT is on, that sequence of bpf instructions becomes a sequence of native cpu instructions with significantly faster performance than an indirect call and two functions' prologue/epilogue.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-03-16  bpf: adjust insn_aux_data when patching insns  (Alexei Starovoitov; 1 file, -5/+39)

convert_ctx_accesses() replaces a single bpf instruction with a set of instructions. Adjust the corresponding insn_aux_data while patching. It's needed to make sure subsequent 'for (all insn)' loops have matching insn and insn_aux_data.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-03-16  bpf: refactor fixup_bpf_calls()  (Alexei Starovoitov; 1 file, -41/+35)

Reduce indent and make it iterate over instructions similar to convert_ctx_accesses(). Also convert a hard BUG_ON into a soft verifier error.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-03-16  bpf: move fixup_bpf_calls() function  (Alexei Starovoitov; 2 files, -56/+57)

No functional change. Move fixup_bpf_calls() to verifier.c; it's being refactored in the next patch.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-03-16  tcp: remove tcp_tw_recycle  (Soheil Hassas Yeganeh; 9 files, -59/+9)

tcp_tw_recycle was already broken for connections behind NAT, since the per-destination timestamp is not monotonically increasing for multiple machines behind a single destination address. After the randomization of TCP timestamp offsets in commit 8a5bd45f6616 (tcp: randomize tcp timestamp offsets for each connection), tcp_tw_recycle is broken for all types of connections for the same reason: the timestamps received from a single machine are not monotonically increasing anymore.

Remove tcp_tw_recycle, since it is not functional. Also, remove the PAWSPassive SNMP counter, since it is only used for tcp_tw_recycle, and simplify tcp_v4_route_req and tcp_v6_route_req, since the strict argument is only set when tcp_tw_recycle is enabled.

Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Cc: Lutz Vieweg <lvml@5t9.de>
Cc: Florian Westphal <fw@strlen.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-03-16  tcp: remove per-destination timestamp cache  (Soheil Hassas Yeganeh; 6 files, -179/+11)

Commit 8a5bd45f6616 (tcp: randomize tcp timestamp offsets for each connection) randomizes TCP timestamps per connection. After this commit, there is no guarantee that the timestamps received from the same destination are monotonically increasing. As a result, the per-destination timestamp cache in TCP metrics (i.e., tcpm_ts in struct tcp_metrics_block) is broken and cannot be relied upon. Remove the per-destination timestamp cache and all related code paths.

Note that this cache was already broken for caching timestamps of multiple machines behind a NAT sharing the same address.

Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Cc: Lutz Vieweg <lvml@5t9.de>
Cc: Florian Westphal <fw@strlen.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-03-16  Merge branch 'sunvnet-better-connection-management'  (David S. Miller; 4 files, -25/+201)

Shannon Nelson says:

====================
sunvnet: better connection management

These patches remove some problems in the handling of carrier state with the ldmvsw vswitch, remove an xoff misuse in sunvnet, and add stats for debug and tracking of point-to-point connections between the ldom VMs.

v2:
 - added ldmvsw ndo_open to reset the LDC channel
 - updated copyrights
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2017-03-16  sunvnet: xoff not needed when removing port link  (Shannon Nelson; 1 file, -4/+0)

The sunvnet netdev is connected to the controlling ldom's vswitch for network bridging. However, for higher performance between ldoms, there is also a channel between each client ldom. These connections are represented in the sunvnet driver by a queue for each ldom. The driver uses select_queue to tell the stack which queue to use, by tracking the mac addresses on the other end of each port. When a connected ldom shuts down, the driver receives an LDC_EVENT_RESET and the port is removed from the driver; thus a queue with no ldom on the other end will never be selected for Tx.

The driver was trying to reinforce the "don't use this queue" notion with netif_tx_stop_queue() and netif_tx_wake_queue(), which really should only be used to signal that a Tx queue is full (aka XOFF). This misuse of queue state resulted in NETDEV WATCHDOG messages and lots of unnecessary calls into the driver's tx_timeout handler. Simply removing these takes care of the problem.

Orabug: 25190537
Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
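For contrast, a sketch of the one legitimate use of those calls: flow control when a Tx ring fills. The helper tx_ring_avail() is hypothetical, standing in for whatever ring-space check a driver has; the rest is the common netdev pattern, not the sunvnet code:

    static netdev_tx_t example_xmit(struct sk_buff *skb, struct net_device *dev)
    {
        struct netdev_queue *txq =
            netdev_get_tx_queue(dev, skb_get_queue_mapping(skb));

        if (unlikely(tx_ring_avail(dev) < MAX_SKB_FRAGS + 1)) {
            netif_tx_stop_queue(txq);      /* XOFF: ring is full */
            /* re-check: a completion may have freed space meanwhile */
            if (tx_ring_avail(dev) < MAX_SKB_FRAGS + 1)
                return NETDEV_TX_BUSY;
            netif_tx_wake_queue(txq);      /* XON: space appeared */
        }

        /* ... post skb to the hardware ring ... */
        return NETDEV_TX_OK;
    }

Stopping a queue for any other reason, such as "no peer on this queue", trips the stack's watchdog exactly as described above.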
2017-03-16  sunvnet: count multicast packets  (Shannon Nelson; 1 file, -0/+2)

Make sure multicast packets get counted in the device.

Orabug: 25190537
Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>