path: root/net/ipv4/netfilter/Makefile
2019-06-15  tcp: add tcp_min_snd_mss sysctl  (Eric Dumazet, 5 files, -2/+22)
Some TCP peers announce a very small MSS option in their SYN and/or SYN/ACK messages. This forces the stack to send packets with a very high network/cpu overhead. Linux has enforced a minimal value of 48. Since this value includes the size of TCP options, and options can consume up to 40 bytes, each segment can include only 8 bytes of payload. In some cases, it can be useful to raise this minimum to a saner value. We still keep the default at 48 (TCP_MIN_SND_MSS) for compatibility reasons. Note that the TCP_MAXSEG socket option enforces a minimal value of TCP_MIN_MSS. David Miller increased this minimal value in commit c39508d6f118 ("tcp: Make TCP_MAXSEG minimum more correct.") from 64 to 88. We might merge TCP_MIN_SND_MSS and TCP_MIN_MSS in the future.

CVE-2019-11479 -- tcp mss hardcoded to 48

Signed-off-by: Eric Dumazet <edumazet@google.com>
Suggested-by: Jonathan Looney <jtl@netflix.com>
Acked-by: Neal Cardwell <ncardwell@google.com>
Cc: Yuchung Cheng <ycheng@google.com>
Cc: Tyler Hicks <tyhicks@canonical.com>
Cc: Bruce Curtis <brucec@netflix.com>
Cc: Jonathan Lemon <jonathan.lemon@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
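To make the arithmetic above concrete, here is a small stand-alone C sketch (illustration only, not part of the patch; TCP_MIN_SND_MSS is the only name taken from the commit):

  #include <stdio.h>

  int main(void)
  {
          const int tcp_min_snd_mss = 48; /* TCP_MIN_SND_MSS */
          const int max_tcp_options = 40; /* worst-case TCP option space */
          int payload = tcp_min_snd_mss - max_tcp_options;

          printf("payload per segment: %d bytes\n", payload);          /* 8 */
          printf("segments to send 1 MiB: %d\n", (1 << 20) / payload); /* 131072 */
          return 0;
  }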
2019-06-15  tcp: tcp_fragment() should apply sane memory limits  (Eric Dumazet, 3 files, -0/+7)
Jonathan Looney reported that a malicious peer can force a sender to fragment its retransmit queue into tiny skbs, inflating memory usage and/or overflowing 32-bit counters. TCP allows an application to queue up to sk_sndbuf bytes, so we need to give some allowance for non-malicious splitting of the retransmit queue. A new SNMP counter is added to monitor how many times TCP refused to split an skb because the allowance was exceeded. Note that this counter might increase when applications use the SO_SNDBUF socket option to lower sk_sndbuf.

CVE-2019-11478 : tcp_fragment, prevent fragmenting a packet when the socket is already using more than half the allowed space

Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Jonathan Looney <jtl@netflix.com>
Acked-by: Neal Cardwell <ncardwell@google.com>
Acked-by: Yuchung Cheng <ycheng@google.com>
Reviewed-by: Tyler Hicks <tyhicks@canonical.com>
Cc: Bruce Curtis <brucec@netflix.com>
Cc: Jonathan Lemon <jonathan.lemon@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
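A hedged sketch of a guard of the kind described, as it might sit at the top of tcp_fragment(); the exact threshold expression and the counter name LINUX_MIB_TCPWQUEUETOOBIG are assumptions based on the text:

  /* Sketch only: refuse to split an skb once the send queue is
   * already far beyond sk_sndbuf (plus a small allowance). */
  if (unlikely((sk->sk_wmem_queued >> 1) >
               sk->sk_sndbuf + 2 * SKB_TRUESIZE(0))) {
          NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPWQUEUETOOBIG);
          return -ENOMEM;
  }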
2019-06-15  tcp: limit payload size of sacked skbs  (Eric Dumazet, 5 files, -9/+30)
Jonathan Looney reported that TCP can trigger the following crash in tcp_shifted_skb():

  BUG_ON(tcp_skb_pcount(skb) < pcount);

This can happen if the remote peer has advertised the smallest MSS that Linux TCP accepts: 48. An skb can hold 17 fragments, and each fragment can hold 32KB on x86 or 64KB on PowerPC. This means that the 16-bit width of TCP_SKB_CB(skb)->tcp_gso_segs can overflow. Note that tcp_sendmsg() builds skbs with less than 64KB of payload, so this problem needs SACK to be enabled. SACK blocks allow TCP to coalesce multiple skbs in the retransmit queue, thus filling the 17 fragments to maximal capacity.

CVE-2019-11477 -- u16 overflow of TCP_SKB_CB(skb)->tcp_gso_segs

Fixes: 832d11c5cd07 ("tcp: Try to restore large SKBs while SACK processing")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Jonathan Looney <jtl@netflix.com>
Acked-by: Neal Cardwell <ncardwell@google.com>
Reviewed-by: Tyler Hicks <tyhicks@canonical.com>
Cc: Yuchung Cheng <ycheng@google.com>
Cc: Bruce Curtis <brucec@netflix.com>
Cc: Jonathan Lemon <jonathan.lemon@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
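The overflow itself is plain integer arithmetic; a stand-alone C sketch (illustration only, using the 48-byte MSS and x86 fragment sizes from the text):

  #include <stdio.h>
  #include <stdint.h>

  int main(void)
  {
          const long skb_bytes = 17L * 32 * 1024;  /* 17 frags of 32KB (x86) */
          const long payload_per_seg = 48 - 40;    /* 48-byte MSS minus options */
          long segs = skb_bytes / payload_per_seg; /* 69632 */
          uint16_t gso_segs = (uint16_t)segs;      /* wraps to 4096 */

          printf("segments needed: %ld, stored in u16: %u\n", segs, gso_segs);
          return 0;
  }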
2019-06-15Revert "net: phylink: set the autoneg state in phylink_phy_change"David S. Miller1-1/+0
This reverts commit ef7bfa84725d891bbdb88707ed55b2cbf94942bb. Russell King espressed some strong opposition to this change, explaining that this is trying to make phylink behave outside of how it has been designed. Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-15  bpf: fix nested bpf tracepoints with per-cpu data  (Matt Mullins, 1 file, -16/+84)
BPF_PROG_TYPE_RAW_TRACEPOINTs can be executed nested on the same CPU, as they do not increment bpf_prog_active while executing. This enables three levels of nesting, to support

  - a kprobe or raw tp or perf event,
  - another one of the above that irq context happens to call, and
  - another one in nmi context (at most one of which may be a kprobe or perf event).

Fixes: 20b9d7ac4852 ("bpf: avoid excessive stack usage for perf_sample_data")
Signed-off-by: Matt Mullins <mmullins@fb.com>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-06-15  bpf: Fix out of bounds memory access in bpf_sk_storage  (Arthur Fabre, 1 file, -1/+2)
bpf_sk_storage maps use multiple spin locks to reduce contention. The number of locks to use is determined by the number of possible CPUs. With only 1 possible CPU, bucket_log == 0, and 2^0 = 1 lock is used. When updating elements, the correct lock is determined with hash_ptr(). Calling hash_ptr() with 0 bits is undefined behavior, as it does:

  x >> (64 - bits)

Using the resulting value leads to an out of bounds memory access. In my case, this manifested itself as a page fault when raw_spin_lock_bh() was called later, when running the self tests:

  ./tools/testing/selftests/bpf/test_verifier 773 775
  [   16.366342] BUG: unable to handle page fault for address: ffff8fe7a66f93f8

Force the minimum number of locks to two.

Signed-off-by: Arthur Fabre <afabre@cloudflare.com>
Fixes: 6ac99e8f23d4 ("bpf: Introduce bpf sk local storage")
Acked-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
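A stand-alone C sketch of the fixed sizing logic (illustration only, not the kernel code): forcing bucket_log to at least 1 means hash_ptr() is never asked for a 0-bit hash, which would evaluate x >> (64 - 0):

  #include <stdio.h>

  static unsigned int bucket_log(unsigned int ncpus)
  {
          unsigned int log = 0;

          while ((1u << log) < ncpus)
                  log++;
          return log > 1 ? log : 1; /* the fix: a floor of 1, i.e. two buckets */
  }

  int main(void)
  {
          printf("1 cpu  -> bucket_log %u (%u locks)\n",
                 bucket_log(1), 1u << bucket_log(1));
          printf("8 cpus -> bucket_log %u (%u locks)\n",
                 bucket_log(8), 1u << bucket_log(8));
          return 0;
  }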
2019-06-15  vsock/virtio: set SOCK_DONE on peer shutdown  (Stephen Barber, 1 file, -1/+3)
Set the SOCK_DONE flag to match the TCP_CLOSING state when a peer has shut down and there is nothing left to read. This fixes the following bug:

  1) Peer sends SHUTDOWN(RDWR).
  2) Socket enters TCP_CLOSING but SOCK_DONE is not set.
  3) read() returns -ENOTCONN until close() is called, then returns 0.

Signed-off-by: Stephen Barber <smbarber@chromium.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
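A hedged sketch of the change; SOCK_DONE, TCP_CLOSING and SHUTDOWN_MASK come from the text and the vsock code, while the surrounding function shape is assumed:

  /* Sketch: once the peer has shut down both directions, mark the
   * socket done so a drained read() returns 0 instead of -ENOTCONN. */
  if ((vsk->peer_shutdown & SHUTDOWN_MASK) == SHUTDOWN_MASK) {
          sk->sk_state = TCP_CLOSING;
          sock_set_flag(sk, SOCK_DONE);
  }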
2019-06-15  net: dsa: rtl8366: Fix up VLAN filtering  (Linus Walleij, 1 file, -3/+4)
We get this regression when using RTL8366RB as part of a bridge with OpenWrt:

  WARNING: CPU: 0 PID: 1347 at net/switchdev/switchdev.c:291 switchdev_port_attr_set_now+0x80/0xa4
  lan0: Commit of attribute (id=7) failed.
  (...)
  realtek-smi switch lan0: failed to initialize vlan filtering on this port

This is because it is trying to disable VLAN filtering on VLAN0, as we forgot to add 1 to the port number to get the right VLAN in rtl8366_vlan_filtering(): when we initialize the VLAN we associate VLAN1 with port 0, VLAN2 with port 1, etc., so we need to add 1 to the port offset.

Fixes: d8652956cf37 ("net: dsa: realtek-smi: Add Realtek SMI driver")
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-15  net: phylink: set the autoneg state in phylink_phy_change  (Ioana Ciornei, 1 file, -0/+1)
The phy_state field of phylink should carry only valid information, especially when it can be passed to the .mac_config callback. Update the an_enabled field with the autoneg state in the phylink_phy_change function.

Fixes: 9525ae83959b ("phylink: add phylink infrastructure")
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-14  net: add high_order_alloc_disable sysctl/static key  (Eric Dumazet, 3 files, -1/+12)
From linux-3.7 (commit 5640f7685831 "net: use a per task frag allocator"), TCP sendmsg() has preferred using order-3 allocations. While this gives good results for most cases, we had reports that heavy uses of TCP over loopback were hitting a spinlock contention in page allocations/freeing. This commit adds a sysctl so that admins can opt in to order-0 allocations. Hopefully the mm layer will optimize order-3 allocations in the future, since that could give us a nice boost (see the benchmark below).

The following benchmark shows a win when more than 8 TCP_STREAM threads are running (56 x86 cores server in my tests):

  for thr in {1..30}
  do
    sysctl -wq net.core.high_order_alloc_disable=0
    T0=`./super_netperf $thr -H 127.0.0.1 -l 15`
    sysctl -wq net.core.high_order_alloc_disable=1
    T1=`./super_netperf $thr -H 127.0.0.1 -l 15`
    echo $thr:$T0:$T1
  done

  1: 49979: 37267
  2: 98745: 76286
  3: 141088: 110051
  4: 177414: 144772
  5: 197587: 173563
  6: 215377: 208448
  7: 241061: 234087
  8: 267155: 263373
  9: 295069: 297402
  10: 312393: 335213
  11: 340462: 368778
  12: 371366: 403954
  13: 412344: 443713
  14: 426617: 473580
  15: 474418: 507861
  16: 503261: 538539
  17: 522331: 563096
  18: 532409: 567084
  19: 550824: 605240
  20: 525493: 641988
  21: 564574: 665843
  22: 567349: 690868
  23: 583846: 710917
  24: 588715: 736306
  25: 603212: 763494
  26: 604083: 792654
  27: 602241: 796450
  28: 604291: 797993
  29: 611610: 833249
  30: 577356: 841062

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
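A hedged sketch of how such a static key could gate the order-3 attempt in skb_page_frag_refill(); the key name follows the sysctl, and the exact call site is an assumption:

  /* Sketch: only attempt the order-3 (32 KB) frag when the opt-out
   * static key is off; otherwise fall through to order-0 pages. */
  if (SKB_FRAG_PAGE_ORDER &&
      !static_branch_unlikely(&net_high_order_alloc_disable_key)) {
          /* try alloc_pages(gfp | __GFP_COMP | __GFP_NOWARN,
           * SKB_FRAG_PAGE_ORDER) and fall back to order-0 on failure */
  }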
2019-06-14  tcp: add tcp_tx_skb_cache sysctl  (Eric Dumazet, 2 files, -1/+11)
Feng Tang reported a performance regression after the introduction of per-TCP-socket tx/rx caches, for TCP over loopback (netperf). There is a high chance the regression is caused by a change in how well the 32 KB per-thread page (current->task_frag) can be recycled, and the lack of pcp caches for order-3 pages. I could not reproduce the regression myself, CPUs all spinning on the mm spinlocks for page allocs/freeing, regardless of enabling or disabling the per-TCP-socket caches. It seems best to disable the feature by default, and let admins enable it. The MM layer either needs to provide scalable order-3 page allocations, or could attempt a trylock on zone->lock if the caller only attempts to get a high-order page and is able to fall back to order-0 ones in case of pressure.

Tests run on a 56 cores host (112 hyper threads):

  - 35.49% netperf   [kernel.vmlinux] [k] queued_spin_lock_slowpath
     - 35.49% queued_spin_lock_slowpath
        - 18.18% get_page_from_freelist
           - __alloc_pages_nodemask
              - 18.18% alloc_pages_current
                   skb_page_frag_refill
                   sk_page_frag_refill
                   tcp_sendmsg_locked
                   tcp_sendmsg
                   inet_sendmsg
                   sock_sendmsg
                   __sys_sendto
                   __x64_sys_sendto
                   do_syscall_64
                   entry_SYSCALL_64_after_hwframe
                   __libc_send
        + 17.31% __free_pages_ok
  + 31.43% swapper   [kernel.vmlinux] [k] intel_idle
  +  9.12% netperf   [kernel.vmlinux] [k] copy_user_enhanced_fast_string
  +  6.53% netserver [kernel.vmlinux] [k] copy_user_enhanced_fast_string
  +  0.69% netserver [kernel.vmlinux] [k] queued_spin_lock_slowpath
  +  0.68% netperf   [kernel.vmlinux] [k] skb_release_data
  +  0.52% netperf   [kernel.vmlinux] [k] tcp_sendmsg_locked
     0.46% netperf   [kernel.vmlinux] [k] _raw_spin_lock_irqsave

Fixes: 472c2e07eef0 ("tcp: add one skb cache for tx")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Feng Tang <feng.tang@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-14  tcp: add tcp_rx_skb_cache sysctl  (Eric Dumazet, 3 files, -4/+19)
Instead of relying on rps_needed, it is safer to use a separate static key, since we do not want to enable TCP rx_skb_cache by default. This feature can cause a huge increase in memory usage on hosts with millions of sockets.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-14  sysctl: define proc_do_static_key()  (Eric Dumazet, 4 files, -23/+26)
Convert proc_dointvec_minmax_bpf_stats() into a more generic helper, since we are going to use jump labels more often. Note that sysctl_bpf_stats_enabled is removed, since it is no longer needed/used. Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
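A hedged sketch of a sysctl table entry wired to a static key through the new generic handler, using the high_order_alloc_disable key from the entry above (exact table layout assumed):

  static struct ctl_table net_core_table[] = {
          {
                  .procname     = "high_order_alloc_disable",
                  .data         = &net_high_order_alloc_disable_key.key,
                  .maxlen       = sizeof(net_high_order_alloc_disable_key),
                  .mode         = 0644,
                  .proc_handler = proc_do_static_key,
          },
          { }
  };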
2019-06-14  hv_netvsc: Set probe mode to sync  (Haiyang Zhang, 1 file, -1/+1)
For better consistency of synthetic NIC names, we set the probe mode to PROBE_FORCE_SYNCHRONOUS, so that the names are aligned with the vmbus channel offer sequence.

Fixes: af0a5646cb8d ("use the new async probing feature for the hyperv drivers")
Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-14  net: sched: flower: don't call synchronize_rcu() on mask creation  (Vlad Buslov, 1 file, -16/+18)
The current flower mask creation code assumes that the temporary mask used when inserting a new filter is stack allocated. To prevent a race condition with the data path, synchronize_rcu() is called every time fl_create_new_mask() replaces the temporary stack-allocated mask. As reported by Jiri, this increases the runtime of creating 20000 flower classifiers from 4 seconds to 163 seconds. However, this design is no longer necessary since the temporary mask was converted to be dynamically allocated by commit 2cddd2014782 ("net/sched: cls_flower: allocate mask dynamically in fl_change()"). Remove the synchronize_rcu() calls from the mask creation code. Instead, refactor fl_change() to always deallocate the temporary mask with an rcu grace period.

Fixes: 195c234d15c9 ("net: sched: flower: handle concurrent mask insertion")
Reported-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
Tested-by: Jiri Pirko <jiri@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-14  net: dsa: fix warning same module names  (Anders Roxell, 5 files, -5/+5)
When building with CONFIG_NET_DSA_REALTEK_SMI and CONFIG_REALTEK_PHY enabled as loadable modules, we see the following warning:

  warning: same module names found:
    drivers/net/phy/realtek.ko
    drivers/net/dsa/realtek.ko

Rework so the driver name is realtek-smi instead of realtek.

Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-14  sctp: Free cookie before we memdup a new one  (Neil Horman, 1 file, -0/+8)
Based on comments from Xin, even after the fixes for our recent syzbot report of cookie memory leaks, it's possible to get a resend of an INIT chunk, which would lead to us leaking cookie memory. To ensure that we don't leak cookie memory, free any previously allocated cookie first.

Change notes:
  v1->v2: update subsystem tag in subject (davem); repeat kfree check for peer_random and peer_hmacs (xin)
  v2->v3: subject tag net -> sctp; also free peer_chunks
  v3->v4: fix subject tags
  v4->v5: remove cut line

Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Reported-by: syzbot+f7e9153b037eac9b1df8@syzkaller.appspotmail.com
CC: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
CC: Xin Long <lucien.xin@gmail.com>
CC: "David S. Miller" <davem@davemloft.net>
CC: netdev@vger.kernel.org
Acked-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-14  net: dsa: microchip: Don't try to read stats for unused ports  (Robert Hancock, 1 file, -0/+3)
If some of the switch ports were not listed in the device tree, due to being unused, the ksz_mib_read_work function ended up accessing a NULL dp->slave pointer and causing an oops. Skip checking statistics for any unused ports. Fixes: 7c6ff470aa867f53 ("net: dsa: microchip: add MIB counter reading support") Signed-off-by: Robert Hancock <hancock@sedsystems.ca> Reviewed-by: Vivien Didelot <vivien.didelot@gmail.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-14  qmi_wwan: extend permitted QMAP mux_id value range  (Reinhard Speyerer, 2 files, -4/+4)
Permit mux_id values up to 254 to be used in qmimux_register_device() for compatibility with ip(8) and the rmnet driver. Fixes: c6adf77953bc ("net: usb: qmi_wwan: add qmap mux protocol support") Cc: Daniele Palmas <dnlplm@gmail.com> Signed-off-by: Reinhard Speyerer <rspmn@arcor.de> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-14  qmi_wwan: avoid RCU stalls on device disconnect when in QMAP mode  (Reinhard Speyerer, 1 file, -4/+7)
Switch qmimux_unregister_device() and qmi_wwan_disconnect() to use unregister_netdevice_queue() and unregister_netdevice_many() instead of unregister_netdevice(). This avoids RCU stalls which have been observed on device disconnect in certain setups otherwise. Fixes: c6adf77953bc ("net: usb: qmi_wwan: add qmap mux protocol support") Cc: Daniele Palmas <dnlplm@gmail.com> Signed-off-by: Reinhard Speyerer <rspmn@arcor.de> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-14  qmi_wwan: add network device usage statistics for qmimux devices  (Reinhard Speyerer, 1 file, -5/+71)
Add proper network device usage statistics for qmimux devices instead of reporting all-zero values for them. Fixes: c6adf77953bc ("net: usb: qmi_wwan: add qmap mux protocol support") Cc: Daniele Palmas <dnlplm@gmail.com> Signed-off-by: Reinhard Speyerer <rspmn@arcor.de> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-14  qmi_wwan: add support for QMAP padding in the RX path  (Reinhard Speyerer, 1 file, -3/+9)
The QMAP code in the qmi_wwan driver is based on the CodeAurora GobiNet driver which does not process QMAP padding in the RX path correctly. Add support for QMAP padding to qmimux_rx_fixup() according to the description of the rmnet driver. Fixes: c6adf77953bc ("net: usb: qmi_wwan: add qmap mux protocol support") Cc: Daniele Palmas <dnlplm@gmail.com> Signed-off-by: Reinhard Speyerer <rspmn@arcor.de> Signed-off-by: David S. Miller <davem@davemloft.net>
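A hedged sketch of the padding handling in qmimux_rx_fixup(); struct qmimux_hdr matches the driver's header layout (u8 pad, u8 mux_id, __be16 pkt_len), but the exact validation flow is an assumption:

  /* Sketch: pkt_len covers payload plus trailing padding, and the low
   * 6 bits of the pad byte give the padding length to strip. */
  pkt_len = be16_to_cpu(hdr->pkt_len);
  pad_len = hdr->pad & 0x3f;
  if (pkt_len < pad_len)
          goto skip;                        /* bogus length info */
  skb_put(skbn, pkt_len - pad_len);         /* hand up payload only */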
2019-06-14  bpf, x64: fix stack layout of JITed bpf code  (Alexei Starovoitov, 1 file, -53/+21)
Since commit 177366bf7ceb, %rbp stopped pointing to %rbp of the previous stack frame. That broke frame pointer based stack unwinding. This commit is a partial revert of it. Note that the location of tail_call_cnt is fixed, since the verifier enforces MAX_BPF_STACK stack size for programs with tail calls.

Fixes: 177366bf7ceb ("bpf: change x86 JITed program stack layout")
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-06-15  bpf, devmap: Add missing RCU read lock on flush  (Toshiaki Makita, 1 file, -0/+4)
.ndo_xdp_xmit() assumes it is called under RCU. For example, virtio_net uses RCU to detect that it has set up the resources for tx. This assumption was accidentally broken when the bulk queue was introduced in devmap.

Fixes: 5d053f9da431 ("bpf: devmap prepare xdp frames for bulking")
Reported-by: David Ahern <dsahern@gmail.com>
Signed-off-by: Toshiaki Makita <toshiaki.makita1@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-06-15  bpf, devmap: Add missing bulk queue free  (Toshiaki Makita, 1 file, -0/+1)
dev_map_free() forgot to free the bulk queue when freeing its entries.

Fixes: 5d053f9da431 ("bpf: devmap prepare xdp frames for bulking")
Signed-off-by: Toshiaki Makita <toshiaki.makita1@gmail.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-06-15  bpf, devmap: Fix premature entry free on destroying map  (Toshiaki Makita, 1 file, -2/+2)
dev_map_free() waits for the flush_needed bitmap to be empty in order to ensure that all flush operations have completed before freeing its entries. However, the corresponding clear_bit() was called before using the entries, so the entries could be used after free. All access to the entries needs to be done before clearing the bit. It seems commit a5e2da6e9787 ("bpf: netdev is never null in __dev_map_flush") accidentally changed the clear_bit() and memory access order. Note that the problem happens only in __dev_map_flush(), not in dev_map_flush_old(): dev_map_flush_old() is called only after nulling out the corresponding netdev_map entry, so dev_map_free() never frees the entry and no such race happens there.

Fixes: a5e2da6e9787 ("bpf: netdev is never null in __dev_map_flush")
Signed-off-by: Toshiaki Makita <toshiaki.makita1@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-06-14  net: phylink: further mac_config documentation improvements  (Russell King - ARM Linux admin, 1 file, -1/+12)
While reviewing the DPAA2 work, it became apparent that we need better documentation about which members of the phylink link state structure are valid in the mac_config call. Improve this documentation.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-14  nfc: Ensure presence of required attributes in the deactivate_target handler  (Young Xiao, 1 file, -1/+2)
Check that the NFC_ATTR_TARGET_INDEX attribute (in addition to NFC_ATTR_DEVICE_INDEX) is provided by the netlink client prior to accessing it. This prevents potential unhandled NULL pointer dereference exceptions, which can be triggered by malicious user-mode programs if they omit one or both of these attributes.

Signed-off-by: Young Xiao <92siuyang@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
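The usual fix pattern for this class of bug is a presence check before the attributes are read; a hedged sketch (placement at the top of the handler assumed):

  if (!info->attrs[NFC_ATTR_DEVICE_INDEX] ||
      !info->attrs[NFC_ATTR_TARGET_INDEX])
          return -EINVAL;

  /* only now is it safe to dereference the attributes */
  idx = nla_get_u32(info->attrs[NFC_ATTR_DEVICE_INDEX]);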
2019-06-14  cfg80211: report measurement start TSF correctly  (Avraham Stern, 1 file, -2/+2)
Instead of reporting the AP's TSF, host time was reported. Fix it. Signed-off-by: Avraham Stern <avraham.stern@intel.com> Signed-off-by: Luca Coelho <luciano.coelho@intel.com> Signed-off-by: Johannes Berg <johannes.berg@intel.com>
2019-06-14  cfg80211: fix memory leak of wiphy device name  (Eric Biggers, 1 file, -1/+1)
In wiphy_new_nm(), if an error occurs after dev_set_name() and device_initialize() have already been called, it's necessary to call put_device() (via wiphy_free()) to avoid a memory leak. Reported-by: syzbot+7fddca22578bc67c3fe4@syzkaller.appspotmail.com Fixes: 1f87f7d3a3b4 ("cfg80211: add rfkill support") Cc: stable@vger.kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Johannes Berg <johannes.berg@intel.com>
2019-06-14  cfg80211: util: fix bit count off by one  (Mordechay Goodstein, 1 file, -1/+1)
The bits of the Rx MCS Map in the VHT capability were enumerated with an off-by-one index transform: bit pair i was taken to describe nss i, while it should describe nss i + 1. The bug was exposed in commit a53b2a0b1245 ("iwlwifi: mvm: implement VHT extended NSS support in rs.c"), where iwlwifi started using the function.

Signed-off-by: Mordechay Goodstein <mordechay.goodstein@intel.com>
Fixes: b0aa75f0b1b2 ("ieee80211: add new VHT capability fields/parsing")
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
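A hedged sketch of the corrected enumeration; only the i versus i + 1 mapping comes from the text, while the loop shape and the use of IEEE80211_VHT_MCS_NOT_SUPPORTED are assumptions:

  /* Each 2-bit field i of the Rx MCS Map describes nss i + 1. */
  for (i = 7; i >= 0; i--) {
          u8 mcs = (rx_mcs_map >> (2 * i)) & 3;

          if (mcs != IEEE80211_VHT_MCS_NOT_SUPPORTED)
                  return i + 1;   /* highest supported nss; was i */
  }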
2019-06-14  mac80211: do not start any work during reconfigure flow  (Naftali Goldstein, 2 files, -0/+11)
It is not a good idea to try to perform any work (e.g. send an auth frame) during reconfigure flow. Prevent this from happening, and at the end of the reconfigure flow requeue all the works. Signed-off-by: Naftali Goldstein <naftali.goldstein@intel.com> Signed-off-by: Luca Coelho <luciano.coelho@intel.com> Signed-off-by: Johannes Berg <johannes.berg@intel.com>
2019-06-14  cfg80211: use BIT_ULL in cfg80211_parse_mbssid_data()  (Luca Coelho, 1 file, -2/+2)
The seen_indices variable is u64, and in other parts of the code we assume mbssid_index_ie[2] can be up to 45, so we should use the 64-bit version of BIT, namely BIT_ULL().

Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
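A stand-alone C sketch of why BIT_ULL() matters here (illustration only; the kernel's BIT() uses unsigned long, which is 32 bits wide on 32-bit targets):

  #include <stdio.h>
  #include <stdint.h>

  #define BIT(nr)     (1UL << (nr))   /* 32 bits wide on 32-bit targets */
  #define BIT_ULL(nr) (1ULL << (nr))  /* always 64 bits wide */

  int main(void)
  {
          uint64_t seen_indices = 0;

          /* An mbssid index can reach 45; BIT(45) would overflow a
           * 32-bit unsigned long, while BIT_ULL() is safe up to 63. */
          seen_indices |= BIT_ULL(45);
          printf("seen_indices = 0x%llx\n",
                 (unsigned long long)seen_indices);
          return 0;
  }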
2019-06-14  mac80211: only warn once on chanctx_conf being NULL  (Yibo Zhao, 1 file, -1/+1)
In multiple-SSID cases, it takes time for every AP interface to become ready in the initializing phase. If a station already knows everything it needs to join one of the APs and sends an authentication frame to an AP that is not fully prepared at that point in time, the AP's channel context can be NULL. As a result, a warning message occurs. Even worse, if the AP is under attack via tools such as MDK3 and massive authentication requests are received in a very short time, the console will hang due to kernel warning messages. WARN_ON_ONCE() reports the problem without flooding the console with duplicate messages.

Johannes: We still need to address the underlying problem, but we don't really have a good handle on it yet. Suppress the worst side-effects for now.

Signed-off-by: Zhi Chen <zhichen@codeaurora.org>
Signed-off-by: Yibo Zhao <yiboz@codeaurora.org>
[johannes: add note, change subject]
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
2019-06-14  mac80211: drop robust management frames from unknown TA  (Johannes Berg, 1 file, -0/+2)
When receiving a robust management frame, drop it if we don't have rx->sta since then we don't have a security association and thus couldn't possibly validate the frame. Cc: stable@vger.kernel.org Signed-off-by: Johannes Berg <johannes.berg@intel.com>
2019-06-13  powerpc/bpf: use unsigned division instruction for 64-bit operations  (Naveen N. Rao, 3 files, -5/+6)
BPF_ALU64 div/mod operations are currently using signed division, unlike BPF_ALU32 operations. Fix this. The DIV64 and MOD64 overflow tests pass with this fix.

Fixes: 156d0e290e969c ("powerpc/ebpf/jit: Implement JIT compiler for extended BPF")
Cc: stable@vger.kernel.org # v4.8+
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
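A hedged sketch of the JIT change; PPC_DIVDU as the unsigned 64-bit divide emitter is an assumption, while the BPF opcode encoding is standard:

  case BPF_ALU64 | BPF_DIV | BPF_X: /* dst /= src */
          /* BPF div/mod are unsigned; emit divdu, not the signed divd */
          PPC_DIVDU(dst_reg, dst_reg, src_reg);
          break;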
2019-06-13  bpf: fix div64 overflow tests to properly detect errors  (Naveen N. Rao, 1 file, -4/+10)
If the result of the division is LLONG_MIN, current tests do not detect the error since the return value is truncated to a 32-bit value and ends up being 0. Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-06-13  bpf: sync BPF_FIB_LOOKUP flag changes with BPF uapi  (Martynas Pumputis, 1 file, -2/+2)
Sync the changes to the flags made in "bpf: simplify definition of BPF_FIB_LOOKUP related flags" with the BPF UAPI headers. This is done in a separate commit to ease syncing of github/libbpf.

Signed-off-by: Martynas Pumputis <m@lambda.lt>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-06-13  bpf: simplify definition of BPF_FIB_LOOKUP related flags  (Martynas Pumputis, 1 file, -2/+2)
Previously, the BPF_FIB_LOOKUP_{DIRECT,OUTPUT} flags in the BPF UAPI were defined with the help of the BIT macro. This had the following issues:

  - In order to use any of the flags, a user was required to depend on <linux/bits.h>.
  - No other flag in bpf.h uses the macro, so it seems that an unwritten convention is to use (1 << (nr)) to define BPF-related flags.

Fixes: 87f5fc7e48dd ("bpf: Provide helper to do forwarding lookups in kernel FIB table")
Signed-off-by: Martynas Pumputis <m@lambda.lt>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
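A hedged sketch of the resulting UAPI definition (values unchanged; only the <linux/bits.h> dependency goes away):

  enum {
          BPF_FIB_LOOKUP_DIRECT = (1U << 0),  /* was BIT(0) */
          BPF_FIB_LOOKUP_OUTPUT = (1U << 1),  /* was BIT(1) */
  };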
2019-06-12  net: mvpp2: prs: Use the correct helpers when removing all VID filters  (Maxime Chevallier, 1 file, -2/+4)
When removing all VID filters, mvpp2_prs_vid_entry_remove() would be called with the TCAM id incorrectly used as a VID, causing the wrong TCAM entries to be invalidated. Fix this by directly invalidating entries in the VID range.

Fixes: 56beda3db602 ("net: mvpp2: Add hardware offloading for VLAN filtering")
Suggested-by: Yuri Chipchev <yuric@marvell.com>
Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-12  net: mvpp2: prs: Fix parser range for VID filtering  (Maxime Chevallier, 1 file, -9/+8)
VID filtering is implemented in the Header Parser, with one range of 11 VIDs assigned to each non-loopback port. Make sure we use the per-port range when looking for existing entries in the parser. Since we used a global range instead of a per-port one, VIDs were removed from the whitelist of all ports of the same PPv2 instance.

Fixes: 56beda3db602 ("net: mvpp2: Add hardware offloading for VLAN filtering")
Suggested-by: Yuri Chipchev <yuric@marvell.com>
Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-12  mlxsw: spectrum: Disallow prio-tagged packets when PVID is removed  (Ido Schimmel, 1 file, -1/+1)
When PVID is removed from a bridge port, the Linux bridge drops both untagged and prio-tagged packets. Align mlxsw with this behavior. Fixes: 148f472da5db ("mlxsw: reg: Add the Switch Port Acceptable Frame Types register") Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-12  mlxsw: spectrum_buffers: Reduce pool size on Spectrum-2  (Petr Machata, 1 file, -2/+2)
Due to an issue on Spectrum-2, when front-panel ports are split four ways, 2 out of 32 port buffers cannot be used. To work around this, the next FW release will mark them as unused and will report a correspondingly lower total shared buffer size. mlxsw will pick up the new value through a query to the cap_total_buffer_size resource. However, the initial size for shared buffer pool 0 is hard-coded and therefore needs to be updated. Thus reduce the pool size by 2.7 MiB (which corresponds to 2/32 of the total size of 42 MiB), rounded down to a whole number of cells.

Fixes: fe099bf682ab ("mlxsw: spectrum_buffers: Add Spectrum-2 shared buffer configuration")
Signed-off-by: Petr Machata <petrm@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-12  selftests: tc_flower: Add TOS matching test  (Jiri Pirko, 1 file, -1/+35)
Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-12  mlxsw: spectrum_flower: Fix TOS matching  (Jiri Pirko, 1 file, -2/+2)
The TOS value was not extracted correctly. Fix it. Fixes: 87996f91f739 ("mlxsw: spectrum_flower: Add support for ip tos") Reported-by: Alexander Petrovskiy <alexpe@mellanox.com> Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-12  selftests: mlxsw: Test nexthop offload indication  (Ido Schimmel, 1 file, -0/+47)
Test that IPv4 and IPv6 nexthops are correctly marked with offload indication in response to neighbour events. Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-12  mlxsw: spectrum_router: Refresh nexthop neighbour when it becomes dead  (Ido Schimmel, 1 file, -3/+70)
The driver tries to periodically refresh neighbours that are used to reach nexthops. This is done by periodically calling neigh_event_send(). However, if the neighbour becomes dead, there is nothing we can do to return it to a connected state and the above function call is basically a NOP. This results in the nexthop never being written to the device's adjacency table and therefore never used to forward packets. Fix this by dropping our reference to the dead neighbour and associating the nexthop with a new neighbour, which we will try to refresh.

Fixes: a7ff87acd995 ("mlxsw: spectrum_router: Implement next-hop routing")
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Reported-by: Alex Veber <alexve@mellanox.com>
Tested-by: Alex Veber <alexve@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-12  mlxsw: spectrum: Use different seeds for ECMP and LAG hash  (Ido Schimmel, 1 file, -1/+4)
The same hash function and seed are used for both ECMP and LAG hash. Therefore, when a LAG device is used as a nexthop device as part of an ECMP group, hash polarization can occur and all the traffic will be hashed to a single LAG slave. Fix this by using a different seed for the LAG hash. Fixes: fa73989f2697 ("mlxsw: spectrum: Use a stable ECMP/LAG seed") Signed-off-by: Ido Schimmel <idosch@mellanox.com> Reported-by: Alex Veber <alexve@mellanox.com> Tested-by: Alex Veber <alexve@mellanox.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-12  net: tls, correctly account for copied bytes with multiple sk_msgs  (John Fastabend, 1 file, -1/+0)
tls_sw_do_sendpage needs to return the total number of bytes sent regardless of how many sk_msgs are allocated. Unfortunately, copied (the value we return up the stack) is zero'd before each new sk_msg is allocated so we only return the copied size of the last sk_msg used. The caller (splice, etc.) of sendpage will then believe only part of its data was sent and send the missing chunks again. However, because the data actually was sent the receiver will get multiple copies of the same data. To reproduce this do multiple sendfile calls with a length close to the max record size. This will in turn call splice/sendpage, sendpage may use multiple sk_msg in this case and then returns the incorrect number of bytes. This will cause splice to resend creating duplicate data on the receiver. Andre created a C program that can easily generate this case so we will push a similar selftest for this to bpf-next shortly. The fix is to _not_ zero the copied field so that the total sent bytes is returned. Reported-by: Steinar H. Gunderson <steinar+kernel@gunderson.no> Reported-by: Andre Tomt <andre@tomt.net> Tested-by: Andre Tomt <andre@tomt.net> Fixes: d829e9c4112b ("tls: convert to generic sk_msg interface") Signed-off-by: John Fastabend <john.fastabend@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-12  vrf: Increment Icmp6InMsgs on the original netdev  (Stephen Suryaputra, 3 files, -8/+29)
Get the ingress interface and increment ICMP counters based on that instead of skb->dev when the dev is a VRF device. This is a follow-up on the following message: https://www.spinics.net/lists/netdev/msg560268.html

v2: Avoid changing skb->dev since it has unintended effects for local delivery (David Ahern).

Signed-off-by: Stephen Suryaputra <ssuryaextr@gmail.com>
Reviewed-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>