path: root/net/ipv6/ip6_tunnel.c
Age | Commit message | Author | Files | Lines
2017-12-03 | enic: add sw timestamp support | Govindarajulu Varadarajan | 2 | -0/+13
Add ethtool ops to advertise sw timestamping. Call skb_tx_timestamp() just before ringing the wq doorbell. Signed-off-by: Govindarajulu Varadarajan <gvaradar@cisco.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-03 | vmbus: make hv_get_ringbuffer_availbytes local | Stephen Hemminger | 2 | -22/+23
The last use of hv_get_ringbuffer_availbytes in drivers is now gone. Only used by the debug info routine so make it static. Also, add READ_ONCE() to avoid any possible issues with potentially volatile index values. Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
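For illustration only, a minimal userspace sketch of the READ_ONCE() idea mentioned above: both ring indices are loaded exactly once, so the available-bytes calculation works on one consistent snapshot even if the other side updates the indices concurrently. The macro and ring structure below are simplified stand-ins, not the Hyper-V ring buffer code.

    #include <stdint.h>
    #include <stdio.h>

    /* Simplified stand-in for the kernel's READ_ONCE(): force a single,
     * untorn load through a volatile-qualified pointer. */
    #define READ_ONCE(x) (*(const volatile __typeof__(x) *)&(x))

    struct ring {
        uint32_t size;          /* total ring size in bytes */
        uint32_t read_index;    /* advanced by the consumer */
        uint32_t write_index;   /* advanced by the producer */
    };

    /* Bytes available to read: snapshot both indices exactly once so the
     * result is computed from a single consistent view of the ring. */
    static uint32_t ring_avail_read(const struct ring *r)
    {
        uint32_t wr = READ_ONCE(r->write_index);
        uint32_t rd = READ_ONCE(r->read_index);

        return (wr >= rd) ? wr - rd : r->size - (rd - wr);
    }

    int main(void)
    {
        struct ring r = { .size = 4096, .read_index = 100, .write_index = 612 };

        printf("avail to read: %u bytes\n", ring_avail_read(&r));
        return 0;
    }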
2017-12-03 | hv_netvsc: optimize initialization of RNDIS header | Stephen Hemminger | 1 | -31/+26
The memset of the whole maximum possible RNDIS header is unnecessary. For the main part of the header use a structure assignment. No need to memset the whole per packet info. Instead rely on caller to set what it wants. Also get rid of cast to void and signed/unsigned conversion. Now return pointer to per packet data (rather than the header) which simplifies use by code setting up the packet data. Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-03 | hv_netvsc: use reciprocal divide to speed up percent calculation | Stephen Hemminger | 4 | -26/+21
Every packet sent checks the available ring space. The calculation can be sped up by using a reciprocal divide, which turns the division into a multiplication. Since ring_size can only be configured by module parameter, it doesn't have to be passed around everywhere. Also, it should be unsigned since it is a number of pages. Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
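To make the reciprocal-divide idea concrete, here is a small self-contained sketch (not the netvsc code; in the kernel the helpers live in <linux/reciprocal_div.h>): the divisor is fixed, so a multiplier is computed once and every later division becomes a multiply and a shift. The ring size and value range below are made-up examples, and the loop verifies the shortcut against plain division for that range.

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Replace "a / ring_bytes" with a multiply and a shift: the divisor is
     * fixed, so m = ceil(2^48 / ring_bytes) is computed once and every
     * later division becomes (a * m) >> 48.  The loop in main() verifies
     * the shortcut against plain division for the range used here. */
    static uint64_t reciprocal48(uint32_t d)
    {
        return ((1ULL << 48) + d - 1) / d;
    }

    static uint32_t recip_div48(uint32_t a, uint64_t m)
    {
        return (uint32_t)(((uint64_t)a * m) >> 48);
    }

    int main(void)
    {
        const uint32_t ring_bytes = 16368;            /* made-up ring size */
        const uint64_t m = reciprocal48(ring_bytes);  /* computed once     */
        uint32_t avail = 5000;

        /* "percent of ring available" style calculation */
        printf("%u of %u bytes = %u%%\n", avail, ring_bytes,
               recip_div48(avail * 100u, m));

        /* sanity check: multiply+shift matches real division on this range */
        for (uint32_t a = 0; a <= 100u * ring_bytes; a++)
            assert(recip_div48(a, m) == a / ring_bytes);

        return 0;
    }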
2017-12-03 | hv_netvsc: replace divide with mask when computing padding | Stephen Hemminger | 1 | -1/+2
Packet alignment is always a power of 2, therefore the modulus can be replaced with a faster AND operation. Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
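A minimal sketch of the divide-versus-mask point, assuming a power-of-two alignment as the commit states; the helper name is hypothetical:

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Padding needed to bring 'len' up to the next multiple of 'align',
     * where 'align' must be a power of two: the AND with (align - 1)
     * replaces the more expensive modulus. */
    static uint32_t pad_to_align(uint32_t len, uint32_t align)
    {
        assert(align && (align & (align - 1)) == 0);    /* power of two */
        return (align - (len & (align - 1))) & (align - 1);
    }

    int main(void)
    {
        /* confirm equivalence with the modulus form for a sample range */
        for (uint32_t len = 0; len < 10000; len++)
            assert(pad_to_align(len, 8) == (8 - len % 8) % 8);

        printf("padding for len=26, align=8: %u\n", pad_to_align(26, 8));
        return 0;
    }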
2017-12-03 | hv_netvsc: don't need local xmit_more | Stephen Hemminger | 1 | -2/+1
Since skb is always non-NULL in the copy portion of netvsc_send(), the local xmit_more variable is not needed. Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-03 | hv_netvsc: drop unused macros | Stephen Hemminger | 1 | -26/+0
Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-03 | ipvlan: Add new func ipvlan_is_valid_dev instead of duplicated codes | Gao Feng | 1 | -16/+17
There are multiple duplicated condition checks in the current code, so add a new function, ipvlan_is_valid_dev(), that checks whether a netdev is a real ipvlan device, and use it in place of the duplicated code. Signed-off-by: Gao Feng <gfree.wind@vip.163.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-03 | net: phy: realtek: add utility functions to read/write page addresses | Martin Blumenstingl | 1 | -30/+53
Realtek PHYs implement the concept of so-called "extension pages". The reason for this is probably that these PHYs expose more registers than are available in the standard address range. After all read/write operations on such a page are done, the driver should switch back to page 0, where the standard MII registers (such as MII_BMCR) are available.

When referring to such a register, the datasheets of RTL8211E and RTL8211F always specify:
- the page / "ext. page" which has to be written to RTL821x_PAGE_SELECT
- an address (sometimes also called reg)

These new utility functions make the existing code easier to read since they remove some duplication (switching back to page 0 is done within the new helpers, for example). No functional changes are intended. Signed-off-by: Martin Blumenstingl <martin.blumenstingl@googlemail.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
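For illustration, a sketch of what such paged accessors typically look like: select the extension page via RTL821x_PAGE_SELECT, perform the access, then restore page 0. The struct, the stub MDIO accessors, the helper names and the register value below are hypothetical stand-ins, not the driver's actual code.

    #include <stdint.h>
    #include <stdio.h>

    #define RTL821x_PAGE_SELECT 0x1f   /* page-select register named in the commit
                                        * (the value here is illustrative) */

    /* Hypothetical, minimal MDIO accessor interface, just for this sketch. */
    struct phy_dev {
        int (*mdio_read)(struct phy_dev *phydev, uint32_t regnum);
        int (*mdio_write)(struct phy_dev *phydev, uint32_t regnum, uint16_t val);
        uint16_t regs[8][32];   /* fake register file: [page][reg] */
        uint16_t cur_page;
    };

    /* Fake bus accessors backed by the in-memory register file above. */
    static int stub_read(struct phy_dev *p, uint32_t regnum)
    {
        return p->regs[p->cur_page][regnum];
    }

    static int stub_write(struct phy_dev *p, uint32_t regnum, uint16_t val)
    {
        if (regnum == RTL821x_PAGE_SELECT)
            p->cur_page = val & 0x7;
        else
            p->regs[p->cur_page][regnum] = val;
        return 0;
    }

    /* Read 'regnum' from extension page 'page', then restore page 0 so the
     * standard MII registers (MII_BMCR, ...) stay reachable afterwards. */
    static int rtl821x_read_page_reg(struct phy_dev *phydev, uint16_t page,
                                     uint32_t regnum)
    {
        int ret;

        phydev->mdio_write(phydev, RTL821x_PAGE_SELECT, page);
        ret = phydev->mdio_read(phydev, regnum);
        phydev->mdio_write(phydev, RTL821x_PAGE_SELECT, 0);
        return ret;
    }

    /* Same select/access/restore pattern for writes. */
    static int rtl821x_write_page_reg(struct phy_dev *phydev, uint16_t page,
                                      uint32_t regnum, uint16_t val)
    {
        phydev->mdio_write(phydev, RTL821x_PAGE_SELECT, page);
        phydev->mdio_write(phydev, regnum, val);
        phydev->mdio_write(phydev, RTL821x_PAGE_SELECT, 0);
        return 0;
    }

    int main(void)
    {
        struct phy_dev phy = { .mdio_read = stub_read, .mdio_write = stub_write };

        rtl821x_write_page_reg(&phy, 5, 0x10, 0xabcd);
        printf("page 5, reg 0x10 = 0x%04x (page restored to %u)\n",
               (unsigned)rtl821x_read_page_reg(&phy, 5, 0x10),
               (unsigned)phy.cur_page);
        return 0;
    }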
2017-12-03 | net: phy: realtek: use the same indentation for all #defines | Martin Blumenstingl | 1 | -13/+14
This simply makes the code easier to read. No functional changes. Signed-off-by: Martin Blumenstingl <martin.blumenstingl@googlemail.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-03 | net: phy: realtek: group all register bit #defines for RTL821x_INER | Martin Blumenstingl | 1 | -2/+5
This simply moves all register bit #defines which describe the (PHY specific) bits in the RTL821x_INER right below the RTL821x_INER register definition. This makes it easier to spot which registers and bits belong together. No functional changes. Signed-off-by: Martin Blumenstingl <martin.blumenstingl@googlemail.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-03 | net: phy: realtek: rename RTL821x_INER_INIT to RTL8211B_INER_INIT | Martin Blumenstingl | 1 | -2/+2
This macro is only used by the RTL8211B code. RTL8211E and RTL8211F both use other bits to initialize the RTL821x_INER register. No functional changes. Signed-off-by: Martin Blumenstingl <martin.blumenstingl@googlemail.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-03 | net: phy: realtek: use the BIT and GENMASK macros | Martin Blumenstingl | 1 | -5/+6
This makes it easier to compare the #defines with the datasheets. No functional changes. Signed-off-by: Martin Blumenstingl <martin.blumenstingl@googlemail.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
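A small illustration of the style this refers to. The register names below are made-up placeholders, not the RTL821x defines; BIT() and GENMASK() come from <linux/bits.h> in the kernel and are re-created here so the example is self-contained:

    #include <stdint.h>
    #include <stdio.h>

    /* Userspace re-creations of the kernel macros (see <linux/bits.h>). */
    #define BIT(n)        (1UL << (n))
    #define GENMASK(h, l) (((~0UL) << (l)) & \
                           (~0UL >> (8 * sizeof(unsigned long) - 1 - (h))))

    /* Made-up example defines in the style the commit describes: the value
     * now reads as "bit 10" or "bits 7..0", matching a datasheet table. */
    #define EXAMPLE_INER_LINK_STATUS  BIT(10)
    #define EXAMPLE_PAGE_MASK         GENMASK(7, 0)

    int main(void)
    {
        printf("BIT(10)       = 0x%04lx\n", EXAMPLE_INER_LINK_STATUS);
        printf("GENMASK(7, 0) = 0x%04lx\n", EXAMPLE_PAGE_MASK);
        return 0;
    }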
2017-12-02 | net: dsa: support cross-chip FDB operations | Vivien Didelot | 1 | -10/+4
When a MAC address is added to or removed from a switch port in the fabric, the target switch must program its port and adjacent switches must program their local DSA port used to reach the target switch. For this purpose, use the dsa_towards_port() helper to identify the local switch port which must be programmed. Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-02 | net: dsa: introduce dsa_towards_port helper | Vivien Didelot | 1 | -10/+13
Add a new helper returning the local port used to reach an arbitrary switch port in the fabric. Its only user at the moment is the dsa_upstream_port helper, which returns the local port reaching the dedicated CPU port, but it will be used in cross-chip FDB operations. Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Signed-off-by: David S. Miller <davem@davemloft.net>
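A hedged sketch of the routing idea behind such a helper, using a deliberately simplified, hypothetical view of the fabric (the real helper works on the DSA switch tree and its routing information):

    #include <stdio.h>

    #define MAX_SWITCHES 4

    /* Hypothetical, simplified view of one switch in a DSA fabric:
     * rtable[i] is the local port of *this* switch leading towards
     * switch i (the real code keeps such routing data per switch). */
    struct sketch_switch {
        int index;                  /* this switch's index in the fabric */
        int rtable[MAX_SWITCHES];   /* local port towards each switch    */
    };

    /* Local port to use to reach (device, port) somewhere in the fabric:
     * the port itself if it lives on this switch, otherwise the DSA link
     * port that leads towards the target switch. */
    static int sketch_towards_port(const struct sketch_switch *ds, int device, int port)
    {
        if (device == ds->index)
            return port;
        return ds->rtable[device];
    }

    int main(void)
    {
        /* switch 0: port 5 is the DSA link towards switch 1 */
        struct sketch_switch sw0 = { .index = 0, .rtable = { [1] = 5 } };

        printf("reach (0,3) from sw0 via port %d\n", sketch_towards_port(&sw0, 0, 3));
        printf("reach (1,2) from sw0 via port %d\n", sketch_towards_port(&sw0, 1, 2));
        return 0;
    }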
2017-12-02 | net: dsa: add switch mdb bitmap functions | Vivien Didelot | 1 | -15/+33
This patch brings no functional changes. It moves out the MDB code iterating on a multicast group into new dsa_switch_mdb_{prepare,add}_bitmap() functions. This gives us a better isolation of the two switchdev phases. Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-02 | net: dsa: add switch vlan bitmap functions | Vivien Didelot | 1 | -15/+34
This patch brings no functional changes. It moves out the VLAN code iterating on a list of VLAN members into new dsa_switch_vlan_{prepare,add}_bitmap() functions. This gives us a better isolation of the two switchdev phases. Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-02 | net: dsa: remove trans argument from mdb ops | Vivien Didelot | 5 | -20/+12
The DSA switch MDB ops pass the switchdev_trans structure down to the drivers, but no one is using them and they aren't supposed to anyway. Remove the trans argument from MDB prepare and add operations. Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-02 | net: dsa: remove trans argument from vlan ops | Vivien Didelot | 7 | -29/+18
The DSA switch VLAN ops pass the switchdev_trans structure down to the drivers, but no one is using them and they aren't supposed to anyway. Remove the trans argument from VLAN prepare and add operations. At the same time, fix the following checkpatch warning:

    WARNING: line over 80 characters
    #74: FILE: drivers/net/dsa/dsa_loop.c:177:
    + const struct switchdev_obj_port_vlan *vlan)

Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-02 | openvswitch: do not propagate headroom updates to internal port | Paolo Abeni | 1 | -8/+1
After commit 3a927bc7cf9d ("ovs: propagate per dp max headroom to all vports") the need_headroom for the internal vport is updated according to the max needed headroom in its datapath. That avoids the pskb_expand_head() costs when sending/forwarding packets towards tunnel devices, at least for some scenarios.

We still require such a copy when using the ovs-preferred configuration for vxlan tunnels:

        br_int
       /      \
     tap     vxlan (remote_ip:X)

     br_phy
           \
            NIC

where the route towards the IP 'X' is via 'br_phy'. When forwarding traffic from the tap towards the vxlan device, we will call pskb_expand_head() in vxlan_build_skb() because br-phy->needed_headroom is equal to tun->needed_headroom.

With this change we avoid updating the internal vport needed_headroom, so that in the above scenario no head copy is needed, giving a 5% performance improvement in a UDP throughput test. As a trade-off, packets sent from the internal port towards a tunnel device will now experience the head copy overhead. The rationale is that the latter use-case is less relevant performance-wise. Signed-off-by: Paolo Abeni <pabeni@redhat.com> Acked-by: Pravin B Shelar <pshelar@ovn.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01 | net: ethernet: ti: ale: fix port check in cpsw_ale_control_set/get | Grygorii Strashko | 1 | -2/+2
The ALE port count includes the host port and the external ports, and ALE port numbering starts from 0, so correct the corresponding port checks in cpsw_ale_control_set/get(). Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01 | net: ethernet: ti: ale: use devm_kzalloc in cpsw_ale_create() | Grygorii Strashko | 4 | -22/+8
Use devm_kzalloc() in cpsw_ale_create(). This also makes the cpsw_ale_destroy() function a nop, so remove it. Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01 | net: ethernet: ti: ale: move static initialization in cpsw_ale_create() | Grygorii Strashko | 1 | -29/+28
Move static initialization from cpsw_ale_start() to cpsw_ale_create(), as it does not make much sense to perform static initialization in cpsw_ale_start(), which is called every time a netif is opened. Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01 | net: ethernet: ti: ale: optimize ale entry mask bits configuration | Grygorii Strashko | 1 | -10/+3
The ale->params.ale_ports parameter can be used to derive values for all ALE entry mask bits: port_mask_bits and port_num_bits. Hence, calculate the above values and drop all hardcoded values. For the port_num_bits calculation, use the order_base_2() API. Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
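As a rough illustration of what order_base_2() computes (the smallest number of bits able to index the given number of ports), here is a userspace equivalent; the real helper lives in <linux/log2.h>:

    #include <stdio.h>

    /* Userspace equivalent of the kernel's order_base_2(): the smallest k
     * such that 2^k >= n, i.e. the number of bits needed for a field that
     * must be able to hold a port number. */
    static unsigned int order_base_2(unsigned long n)
    {
        unsigned int k = 0;

        while ((1UL << k) < n)
            k++;
        return k;
    }

    int main(void)
    {
        /* e.g. a 3-port ALE (host port + 2 external ports) needs 2 bits */
        for (unsigned long ports = 1; ports <= 9; ports++)
            printf("ale_ports=%lu -> port_num_bits=%u\n", ports, order_base_2(ports));
        return 0;
    }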
2017-12-01 | net: ethernet: ti: ale: disable ale from stop() | Grygorii Strashko | 1 | -1/+1
ALE is enabled from cpsw_ale_start() now, but disabled only from cpsw_ale_destroy(), which is inconsistent, as cpsw_ale_start() is called when a netif is opened but cpsw_ale_destroy() is called when the driver is removed. Hence, move ALE disabling into cpsw_ale_stop(). Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01 | net: ethernet: ti: ale: use proper io apis | Grygorii Strashko | 1 | -13/+13
Switch to use writel_relaxed/readl_relaxed() IO API instead of raw version as it is recommended. Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01 | net: ethernet: ti: cpsw: fix ale port numbers | Grygorii Strashko | 1 | -1/+2
TI OMAP/Sitara SoCs have a fixed number of ALE ports (3), which includes the host port. Hence, use the fixed value instead of a value calculated from DT, which can be set by the user and might not reflect the actual HW configuration. Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01 | net: ethernet: ti: cpsw: move mac_hi/lo defines in cpsw.h | Grygorii Strashko | 3 | -8/+5
Move the mac_hi/lo defines into the common header cpsw.h and re-use them in netcp_ethss.c. Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01 | net: ethernet: ti: cpsw: move platform data struct to .c file | Grygorii Strashko | 2 | -21/+21
The CPSW platform data structs cpsw_platform_data and cpsw_slave_data are used only inside the cpsw.c module, so move these definitions there. Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01 | net: ethernet: ti: cpsw: use proper io apis | Grygorii Strashko | 1 | -18/+18
Switch to the writel_relaxed()/readl_relaxed() IO API instead of the raw version, as recommended. Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01 | net: ethernet: ti: cpsw: drop unused var poll from cpsw_update_channels_res | Grygorii Strashko | 1 | -3/+0
Drop unused variable "poll" from cpsw_update_channels_res(). Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01 | net: phy: remove generic settings for callbacks config_aneg and read_status from drivers | Heiner Kallweit | 28 | -169/+0
Remove generic settings for callbacks config_aneg and read_status from drivers. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01 | net: phy: core: use genphy version of callbacks read_status and config_aneg per default | Heiner Kallweit | 2 | -16/+22
read_status and config_aneg are the only mandatory callbacks, and most of the time the generic implementation is used by drivers. So make the core fall back to the generic version if a driver doesn't implement the respective callback. Also, the core currently doesn't seem to verify that drivers implement the mandatory calls; if a driver doesn't do so, we'd just get a NULL pointer dereference. With this patch this potential issue no longer exists. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
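A minimal sketch of the fallback pattern the commit describes, with deliberately simplified stand-in types (not the real struct phy_device/phy_driver layout; sk_genphy_read_status() stands in for the kernel's genphy helper):

    #include <stdio.h>

    /* Deliberately simplified stand-ins for struct phy_driver / phy_device. */
    struct sk_phy_device;

    struct sk_phy_driver {
        const char *name;
        int (*read_status)(struct sk_phy_device *phydev);   /* may be NULL */
    };

    struct sk_phy_device {
        struct sk_phy_driver *drv;
        int link;
    };

    /* Stand-in for the generic implementation. */
    static int sk_genphy_read_status(struct sk_phy_device *phydev)
    {
        phydev->link = 1;
        printf("generic read_status used\n");
        return 0;
    }

    /* Core helper: use the driver callback when it exists, otherwise fall
     * back to the generic version instead of dereferencing a NULL pointer. */
    static int sk_phy_read_status(struct sk_phy_device *phydev)
    {
        if (phydev->drv->read_status)
            return phydev->drv->read_status(phydev);
        return sk_genphy_read_status(phydev);
    }

    static int my_phy_read_status(struct sk_phy_device *phydev)
    {
        phydev->link = 1;
        printf("driver-specific read_status used\n");
        return 0;
    }

    int main(void)
    {
        struct sk_phy_driver plain = { .name = "plain phy" };
        struct sk_phy_driver fancy = { .name = "fancy phy",
                                       .read_status = my_phy_read_status };
        struct sk_phy_device a = { .drv = &plain };
        struct sk_phy_device b = { .drv = &fancy };

        sk_phy_read_status(&a);   /* falls back to the generic version */
        sk_phy_read_status(&b);   /* uses the driver's own callback    */
        return 0;
    }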
2017-12-01 | ip6_gre: Add ERSPAN native tunnel support | William Tu | 2 | -4/+267
The patch adds support for ERSPAN tunnel over ipv6. Signed-off-by: William Tu <u9012063@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01 | ip6_gre: Refactor ip6gre xmit codes | William Tu | 1 | -48/+75
This patch refactors the ip6gre_xmit_{ipv4, ipv6}. It is a prep work to add the ip6erspan tunnel. Signed-off-by: William Tu <u9012063@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01 | ip_gre: Refactor the erspan tunnel code. | William Tu | 2 | -49/+56
Move two erspan functions to a header file, erspan.h, so the ipv6 erspan implementation can use them. Signed-off-by: William Tu <u9012063@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01 | bnxt_en: Add ETH_RESET_AP support | Scott Branden | 2 | -0/+12
Add ETH_RESET_AP handling to reset the internal Application Processor(s) of the SmartNIC card. Signed-off-by: Scott Branden <scott.branden@broadcom.com> Acked-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01 | net: ethtool: add support for reset of AP inside NIC interface. | Scott Branden | 1 | -0/+1
Add ETH_RESET_AP to reset the application processor(s) inside the NIC interface. The existing ETH_RESET_MGMT covers the management processor inside the NIC, which is typically used for remote NIC management purposes. Application processors exist inside some SmartNICs to run various applications inside the NIC processor, ranging from a simple algorithm without an OS to something as complex as hosting multiple VMs. Signed-off-by: Scott Branden <scott.branden@broadcom.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01 | rds: tcp: atomically purge entries from rds_tcp_conn_list during netns delete | Sowmini Varadhan | 2 | -2/+8
The rds_tcp_kill_sock() function parses the rds_tcp_conn_list to find the rds_connection entries marked for deletion as part of the netns deletion, under the protection of the rds_tcp_conn_lock. Since the rds_tcp_conn_list tracks rds_tcp_connections (which have a 1:1 mapping with rds_conn_path), multiple tc entries in the rds_tcp_conn_list will map to a single rds_connection, and will be deleted as part of the rds_conn_destroy() operation that is done outside the rds_tcp_conn_lock. The rds_tcp_conn_list traversal done under the protection of rds_tcp_conn_lock should not leave any doomed tc entries in the list after the rds_tcp_conn_lock is released, else another concurrently executing netns delete (for a different netns) thread may trip on these entries. Reported-by: syzbot <syzkaller@googlegroups.com> Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com> Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01 | rds: tcp: correctly sequence cleanup on netns deletion. | Sowmini Varadhan | 3 | -6/+7
Commit 8edc3affc077 ("rds: tcp: Take explicit refcounts on struct net") introduces a regression in rds-tcp netns cleanup. The cleanup_net() (and thus the rds_tcp_dev_event notification) is only called from put_net() when all netns refcounts go to 0, but this cannot happen if the rds_connection itself is holding a c_net ref that it expects to release in rds_tcp_kill_sock. Instead, the rds_tcp_kill_sock callback should make sure to tear down state carefully, ensuring that the socket teardown is only done after all data-structures and workqs that depend on it are quiesced.

The original motivation for commit 8edc3affc077 ("rds: tcp: Take explicit refcounts on struct net") was to resolve a race condition reported by syzkaller where workqs for tx/rx/connect were triggered after the namespace was deleted. Those worker threads should have been cancelled/flushed before socket tear-down and indeed, rds_conn_path_destroy() does try to sequence this by doing:

    /* cancel cp_send_w */
    /* cancel cp_recv_w */
    /* flush cp_down_w */
    /* free data structures */

Here the "flush cp_down_w" will trigger rds_conn_shutdown and thus invoke rds_tcp_conn_path_shutdown() to close the tcp socket, so that we ought to have satisfied the requirement that "socket-close is done after all other dependent state is quiesced". However, rds_conn_shutdown has a bug in that it *always* triggers the reconnect workq (and if the connection is successful, we always restart the tx/rx workqs, so with the right timing we risk the race conditions reported by syzkaller).

Netns deletion is like module teardown: no need to restart a reconnect in this case. We can use the c_destroy_in_prog bit to avoid restarting the reconnect. Fixes: 8edc3affc077 ("rds: tcp: Take explicit refcounts on struct net") Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com> Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01 | rds: tcp: remove redundant function rds_tcp_conn_paths_destroy() | Sowmini Varadhan | 1 | -24/+1
A side-effect of Commit c14b0366813a ("rds: tcp: set linger to 1 when unloading a rds-tcp") is that we always send a RST on the tcp connection for rds_conn_destroy(), so rds_tcp_conn_paths_destroy() is not needed any more and is removed in this patch. Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com> Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01 | tipc: fall back to smaller MTU if allocation of local send skb fails | Jon Maloy | 4 | -13/+55
When sending node local messages the code is using an 'mtu' of 66060 bytes to avoid unnecessary fragmentation. During situations of low memory tipc_msg_build() may sometimes fail to allocate such large buffers, resulting in unnecessary send failures. This can easily be remedied by falling back to a smaller MTU, and then reassemble the buffer chain as if the message were arriving from a remote node. At the same time, we change the initial MTU setting of the broadcast link to a lower value, so that large messages always are fragmented into smaller buffers even when we run in single node mode. Apart from obtaining the same advantage as for the 'fallback' solution above, this turns out to give a significant performance improvement. This can probably be explained with the __pskb_copy() operation performed on the buffer for each recipient during reception. We found the optimal value for this, considering the most relevant skb pool, to be 3744 bytes. Acked-by: Ying Xue <ying.xue@ericsson.com> Signed-off-by: Jon Maloy <jon.maloy@ericsson.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-11-30 | net: macb: Added support for RX filtering | Rafal Ozieblo | 2 | -1/+444
This patch allows received packets to be filtered to different hardware queues (aka ntuple filtering). Signed-off-by: Rafal Ozieblo <rafalo@cadence.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-11-30 | net: macb: Added some queue statistics | Rafal Ozieblo | 2 | -4/+64
Added statistics per queue:
- qX_rx_packets
- qX_rx_bytes
- qX_rx_dropped
- qX_tx_packets
- qX_tx_bytes
- qX_tx_dropped
Signed-off-by: Rafal Ozieblo <rafalo@cadence.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-11-30 | net: macb: Added support for many RX queues | Rafal Ozieblo | 2 | -141/+191
To be able to receive packets on different RX queues, some configuration has to be performed. This patch checks how many hardware queues the GEM supports and initializes them. Signed-off-by: Rafal Ozieblo <rafalo@cadence.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-11-30 | vmxnet3: increase default rx ring sizes | Shrikrishna Khare | 1 | -4/+4
There are several reasons for increasing the receive ring sizes:

1. The original ring size of 256 was chosen about 10 years ago when vmxnet3 was first created. At that time, 10Gbps Ethernet was not prevalent and servers were dominated by 1Gbps Ethernet. Now 10Gbps is commonplace, and higher bandwidth links -- 25Gbps, 40Gbps, 50Gbps -- are starting to appear. 256 Rx ring entries are simply not enough to keep up with higher link speed when there is a burst of network frames coming from these high speed links. Even with full MTU size frames, they are gone in a short time. It is also more common to have a mix of frame sizes, and more likely a bi-modal distribution of frame sizes, so the average frame size is not close to full MTU. If we consider an average frame size of 800B, 1024 frames that come in a burst take ~0.65 ms to arrive at 10Gbps. With 256 entries, it takes ~0.16 ms to arrive at 10Gbps. At 25Gbps or 40Gbps, this time is reduced accordingly.

2. On a hypervisor where there are many VMs and CPU is over committed, i.e. the number of VCPUs is more than the number of PCPUs, each PCPU is in effect time shared between multiple VMs/VCPUs. The time granularity at which this multiplexing occurs is typically coarser than between processes on a guest OS. Trying to time slice more finely is not efficient, for example, if the memory cache is barely warmed up when switching from one VM to another occurs. This CPU overcommit adds delay to when the driver in a VM can service incoming packets. Whether CPU is over committed really depends on customer workloads. For certain situations, it is very common. For example, workloads of desktop VMs and product testing setups. Consolidation and sharing is what drives efficiency of a customer setup for such workloads. In these situations, the raw network bandwidth may not be very high, but the delays between when a VM is running or not running can also be relatively long.

Signed-off-by: Shrikrishna Khare <skhare@vmware.com> Acked-by: Jin Heo <heoj@vmware.com> Acked-by: Guolin Yang <gyang@vmware.com> Acked-by: Boon Ang <bang@vmware.com> Signed-off-by: David S. Miller <davem@davemloft.net>
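To make the quoted burst numbers easy to re-check, a quick back-of-the-envelope calculation (not from the commit):

    #include <stdio.h>

    /* Time (ms) for 'frames' frames of 'bytes' bytes each to arrive on a
     * link of 'gbps' gigabits per second, ignoring framing overhead. */
    static double burst_ms(double frames, double bytes, double gbps)
    {
        return frames * bytes * 8.0 / (gbps * 1e9) * 1e3;
    }

    int main(void)
    {
        printf("1024 x 800B @ 10Gbps: %.2f ms\n", burst_ms(1024, 800, 10)); /* ~0.66 */
        printf(" 256 x 800B @ 10Gbps: %.2f ms\n", burst_ms(256, 800, 10));  /* ~0.16 */
        printf("1024 x 800B @ 40Gbps: %.2f ms\n", burst_ms(1024, 800, 40)); /* ~0.16 */
        return 0;
    }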
2017-11-30 | net: dsa: bcm_sf2: Utilize b53_get_tag_protocol() | Florian Fainelli | 3 | -9/+4
Utilize the much more capable b53_get_tag_protocol() which takes care of all Broadcom switches specifics to resolve which port can have Broadcom tags enabled or not. Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-11-30 | net/reuseport: drop legacy code | Paolo Abeni | 5 | -55/+15
Since commit e32ea7e74727 ("soreuseport: fast reuseport UDP socket selection") and commit c125e80b8868 ("soreuseport: fast reuseport TCP socket selection") the relevant reuseport socket matching the current packet is selected by the reuseport_select_sock() call. The only exceptions are invalid BPF filters/filters returning out-of-range indices. In the latter case the code implicitly falls back to using the hash demultiplexing, but instead of selecting the socket inside the reuseport_select_sock() function, it relies on the hash selection logic introduced with the early soreuseport implementation. With this patch, in case of a BPF filter returning a bad socket index value, we fall back to hash-based selection inside the reuseport_select_sock() body, so that we can drop some duplicate code in the ipv4 and ipv6 stack. This also allows faster lookup in the above scenario and will allow us to avoid computing the hash value for successful BPF-based demultiplexing in a later patch. Signed-off-by: Paolo Abeni <pabeni@redhat.com> Acked-by: Craig Gallek <kraig@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
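A hedged sketch of the selection logic described above, with hypothetical simplified types in place of the real reuseport group and BPF program:

    #include <stdint.h>
    #include <stdio.h>

    #define NUM_SOCKS 4

    /* Hypothetical stand-in for a reuseport group: an array of socket ids. */
    struct sketch_reuseport_group {
        unsigned int num_socks;
        int socks[NUM_SOCKS];
    };

    /* BPF program stand-in: may return any index, including an invalid one. */
    static int run_bpf_stub(uint32_t hash)
    {
        return (hash & 1) ? 2 : 42;   /* 42 is deliberately out of range */
    }

    /* Pick a socket: prefer the BPF-chosen index, but if it is out of range
     * fall back to hash-based selection here, in one place, rather than in
     * each caller (the point the commit makes). */
    static int sketch_select_sock(const struct sketch_reuseport_group *grp,
                                  uint32_t hash)
    {
        unsigned int idx = (unsigned int)run_bpf_stub(hash);

        if (idx >= grp->num_socks)
            idx = hash % grp->num_socks;   /* hash-based fallback */
        return grp->socks[idx];
    }

    int main(void)
    {
        struct sketch_reuseport_group grp = {
            .num_socks = NUM_SOCKS,
            .socks = { 100, 101, 102, 103 },
        };

        printf("hash=7 -> sock %d\n", sketch_select_sock(&grp, 7));  /* BPF picks 2   */
        printf("hash=8 -> sock %d\n", sketch_select_sock(&grp, 8));  /* fallback: 8%4 */
        return 0;
    }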
2017-11-30 | Documentation: net: dsa: Cut set_addr() documentation | Linus Walleij | 1 | -5/+0
This is not supported anymore; devices needing a MAC address just assign one at random, it's just a driver peculiarity. Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-11-30 | net: Remove dst->next | David Miller | 2 | -5/+0
There are no more users. Signed-off-by: David S. Miller <davem@davemloft.net> Reviewed-by: Eric Dumazet <edumazet@google.com>