|
Macvlan devices are similar to VLANs and do not update their
own trans_start. In order for ARP monitoring to work for a bond device
when the slaves are macvlans, obtain the underlying real device.
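A minimal sketch of the idea (helper name illustrative, not the exact diff;
needs <linux/if_macvlan.h> for macvlan_dev_real_dev()):

	static struct net_device *bond_get_tx_dev(struct net_device *slave_dev)
	{
		/* macvlans don't update trans_start; use the underlying real device */
		if (netif_is_macvlan(slave_dev))
			return macvlan_dev_real_dev(slave_dev);
		return slave_dev;
	}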
Signed-off-by: Chris Dion <christopher.dion@dell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Using a period after a newline in a logging message causes bad output.
Signed-off-by: Joe Perches <joe@perches.com>
Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Offload the IP header checksum to the NIC.
This fixes a previous patch which disabled checksum offloading
for both IPv4 and IPv6 packets, so L3 checksum offload was also
getting disabled for IPv4 packets, and the hardware drops these
packets for some reason.
Without this patch IPv4 TSO appears to be broken: copying files via scp
from a test box to my home workstation runs at ~16 kB/s, and with the
patch at close to 2 MB/s.
Looking at tcpdump on the sender, the hardware appears to drop IPv4 TSO skbs.
This patch restores performance for me; IPv6 looks good too.
Fixes: fa6d7cb5d76c ("net: thunderx: Fix TCP/UDP checksum offload for IPv6 pkts")
Cc: Sunil Goutham <sgoutham@cavium.com>
Cc: Aleksey Makarov <aleksey.makarov@auriga.com>
Cc: Eric Dumazet <edumazet@google.com>
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This changes calling conventions (and simplifies the hell out of
the callers). New rules: once a struct socket has been passed
to sock_alloc_file(), it is consumed either by the new struct file
or by the sock_release() done by sock_alloc_file(). Either way
the caller must not do sock_release() after that point.
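A sketch of the resulting caller pattern (error handling illustrative):

	file = sock_alloc_file(sock, flags, NULL);
	if (IS_ERR(file))
		return PTR_ERR(file);	/* sock was already released, don't touch it */
	/* from here on, the socket reference is dropped only via fput(file) */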
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
simplifies failure exits considerably...
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
1) it's fput() or sock_release(), not both
2) don't do fd_install() until the last failure exit.
3) not a bug per se, but... don't attach socket to struct file
until it's set up.
Take reserving the descriptor into the caller, move fd_install() to the
caller, and sanitize failure exits and calling conventions.
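A sketch of the sanitized ordering in a caller (simplified, not the exact diff):

	fd = get_unused_fd_flags(flags);
	if (fd < 0)
		return fd;
	file = sock_alloc_file(newsock, flags, NULL);
	if (IS_ERR(file)) {
		put_unused_fd(fd);
		return PTR_ERR(file);
	}
	fd_install(fd, file);	/* only once nothing can fail anymore */
	return fd;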
Cc: stable@vger.kernel.org # v4.6+
Acked-by: Tom Herbert <tom@herbertland.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Whenever the sock object is in DCCP_CLOSED state,
dccp_disconnect() must free dccps_hc_tx_ccid and
dccps_hc_rx_ccid and set them to NULL.
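A minimal sketch of the intended cleanup (placement and exact helpers may differ):

	struct dccp_sock *dp = dccp_sk(sk);

	ccid_hc_rx_delete(dp->dccps_hc_rx_ccid, sk);
	ccid_hc_tx_delete(dp->dccps_hc_tx_ccid, sk);
	dp->dccps_hc_rx_ccid = NULL;
	dp->dccps_hc_tx_ccid = NULL;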
Signed-off-by: Mohamed Ghannam <simo.ghannam@gmail.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Alexander Potapenko reported a use of uninitialized memory [1].
This happens when inserting a request socket into the TCP ehash,
in __sk_nulls_add_node_rcu(), since sk_reuseport is not initialized.
The bug was added by commit d894ba18d4e4 ("soreuseport: fix ordering for
mixed v4/v6 sockets").
Note that d296ba60d8e2 ("soreuseport: Resolve merge conflict for v4/v6
ordering fix") missed the opportunity to get rid of
hlist_nulls_add_tail_rcu() :
Both UDP sockets and TCP/DCCP listeners no longer use
__sk_nulls_add_node_rcu() for their hash insertion.
Since all other sockets have a unique 4-tuple, the reuseport status
has no special meaning, so we can always use hlist_nulls_add_head_rcu()
for them and save a few cycles/instructions.
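A sketch of the resulting helper (a minimal version; the actual header may differ slightly):

	static inline void __sk_nulls_add_node_rcu(struct sock *sk,
						   struct hlist_nulls_head *list)
	{
		hlist_nulls_add_head_rcu(&sk->sk_nulls_node, list);
	}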
[1]
==================================================================
BUG: KMSAN: use of uninitialized memory in inet_ehash_insert+0xd40/0x1050
CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.13.0+ #3288
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
Call Trace:
<IRQ>
__dump_stack lib/dump_stack.c:16
dump_stack+0x185/0x1d0 lib/dump_stack.c:52
kmsan_report+0x13f/0x1c0 mm/kmsan/kmsan.c:1016
__msan_warning_32+0x69/0xb0 mm/kmsan/kmsan_instr.c:766
__sk_nulls_add_node_rcu ./include/net/sock.h:684
inet_ehash_insert+0xd40/0x1050 net/ipv4/inet_hashtables.c:413
reqsk_queue_hash_req net/ipv4/inet_connection_sock.c:754
inet_csk_reqsk_queue_hash_add+0x1cc/0x300 net/ipv4/inet_connection_sock.c:765
tcp_conn_request+0x31e7/0x36f0 net/ipv4/tcp_input.c:6414
tcp_v4_conn_request+0x16d/0x220 net/ipv4/tcp_ipv4.c:1314
tcp_rcv_state_process+0x42a/0x7210 net/ipv4/tcp_input.c:5917
tcp_v4_do_rcv+0xa6a/0xcd0 net/ipv4/tcp_ipv4.c:1483
tcp_v4_rcv+0x3de0/0x4ab0 net/ipv4/tcp_ipv4.c:1763
ip_local_deliver_finish+0x6bb/0xcb0 net/ipv4/ip_input.c:216
NF_HOOK ./include/linux/netfilter.h:248
ip_local_deliver+0x3fa/0x480 net/ipv4/ip_input.c:257
dst_input ./include/net/dst.h:477
ip_rcv_finish+0x6fb/0x1540 net/ipv4/ip_input.c:397
NF_HOOK ./include/linux/netfilter.h:248
ip_rcv+0x10f6/0x15c0 net/ipv4/ip_input.c:488
__netif_receive_skb_core+0x36f6/0x3f60 net/core/dev.c:4298
__netif_receive_skb net/core/dev.c:4336
netif_receive_skb_internal+0x63c/0x19c0 net/core/dev.c:4497
napi_skb_finish net/core/dev.c:4858
napi_gro_receive+0x629/0xa50 net/core/dev.c:4889
e1000_receive_skb drivers/net/ethernet/intel/e1000/e1000_main.c:4018
e1000_clean_rx_irq+0x1492/0x1d30
drivers/net/ethernet/intel/e1000/e1000_main.c:4474
e1000_clean+0x43aa/0x5970 drivers/net/ethernet/intel/e1000/e1000_main.c:3819
napi_poll net/core/dev.c:5500
net_rx_action+0x73c/0x1820 net/core/dev.c:5566
__do_softirq+0x4b4/0x8dd kernel/softirq.c:284
invoke_softirq kernel/softirq.c:364
irq_exit+0x203/0x240 kernel/softirq.c:405
exiting_irq+0xe/0x10 ./arch/x86/include/asm/apic.h:638
do_IRQ+0x15e/0x1a0 arch/x86/kernel/irq.c:263
common_interrupt+0x86/0x86
Fixes: d894ba18d4e4 ("soreuseport: fix ordering for mixed v4/v6 sockets")
Fixes: d296ba60d8e2 ("soreuseport: Resolve merge conflict for v4/v6 ordering fix")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Alexander Potapenko <glider@google.com>
Acked-by: Craig Gallek <kraig@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
If rmnet device creation fails in newlink, either while
registering with the physical device or during subsequent
operations, the rmnet endpoint information is never freed.
Fixes: ceed73a2cf4a ("drivers: net: ethernet: qualcomm: rmnet: Initial implementation")
Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
If a skb in transmit path does not have sufficient headroom to add
the map header, the skb is not sent out and is never freed.
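A minimal sketch of the idea (names and return path illustrative):

	required_headroom = sizeof(struct rmnet_map_header);
	if (skb_headroom(skb) < required_headroom) {
		kfree_skb(skb);		/* don't leak the skb on this error path */
		return NETDEV_TX_OK;
	}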
Fixes: ceed73a2cf4a ("drivers: net: ethernet: qualcomm: rmnet: Initial implementation")
Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Since commit 3b4477d2dcf2709d0be89e2a8dced3d0f4a017f2 ("VSOCK: use TCP
state constants for sk_state") VSOCK has used TCP_* constants for
sk_state.
Commit b4562ca7925a3bedada87a3dd072dd5bad043288 ("hv_sock: add locking
in the open/close/release code paths") reintroduced the SS_DISCONNECTING
constant.
This patch replaces the old SS_DISCONNECTING with the new TCP_CLOSING
constant.
CC: Dexuan Cui <decui@microsoft.com>
CC: Cathy Avery <cavery@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Jorgen Hansen <jhansen@vmware.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
When the function tipc_accept_from_sock() fails to create an instance of
struct tipc_subscriber, it omits to free the already created instance of
struct tipc_conn before it returns.
We fix that with this commit.
Reported-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
In tipc_topsrv_kern_subscr(), when s->tipc_conn_new() fails
we call tipc_close_conn() to clean up, but in this case
calling conn_put() is enough.
This fixes the following crash:
kasan: GPF could be caused by NULL-ptr deref or user memory access
general protection fault: 0000 [#1] SMP KASAN
Dumping ftrace buffer:
(ftrace buffer empty)
Modules linked in:
CPU: 0 PID: 3085 Comm: syzkaller064164 Not tainted 4.15.0-rc1+ #137
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
task: 00000000c24413a5 task.stack: 000000005e8160b5
RIP: 0010:__lock_acquire+0xd55/0x47f0 kernel/locking/lockdep.c:3378
RSP: 0018:ffff8801cb5474a8 EFLAGS: 00010002
RAX: dffffc0000000000 RBX: 0000000000000000 RCX: 0000000000000000
RDX: 0000000000000004 RSI: 0000000000000000 RDI: ffffffff85ecb400
RBP: ffff8801cb547830 R08: 0000000000000001 R09: 0000000000000000
R10: 0000000000000000 R11: ffffffff87489d60 R12: ffff8801cd2980c0
R13: 0000000000000000 R14: 0000000000000001 R15: 0000000000000020
FS: 00000000014ee880(0000) GS:ffff8801db400000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007ffee2426e40 CR3: 00000001cb85a000 CR4: 00000000001406f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
lock_acquire+0x1d5/0x580 kernel/locking/lockdep.c:4004
__raw_spin_lock_bh include/linux/spinlock_api_smp.h:135 [inline]
_raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:175
spin_lock_bh include/linux/spinlock.h:320 [inline]
tipc_subscrb_subscrp_delete+0x8f/0x470 net/tipc/subscr.c:201
tipc_subscrb_delete net/tipc/subscr.c:238 [inline]
tipc_subscrb_release_cb+0x17/0x30 net/tipc/subscr.c:316
tipc_close_conn+0x171/0x270 net/tipc/server.c:204
tipc_topsrv_kern_subscr+0x724/0x810 net/tipc/server.c:514
tipc_group_create+0x702/0x9c0 net/tipc/group.c:184
tipc_sk_join net/tipc/socket.c:2747 [inline]
tipc_setsockopt+0x249/0xc10 net/tipc/socket.c:2861
SYSC_setsockopt net/socket.c:1851 [inline]
SyS_setsockopt+0x189/0x360 net/socket.c:1830
entry_SYSCALL_64_fastpath+0x1f/0x96
Fixes: 14c04493cb77 ("tipc: add ability to order and receive topology events in driver")
Reported-by: syzbot <syzkaller@googlegroups.com>
Cc: Jon Maloy <jon.maloy@ericsson.com>
Cc: Ying Xue <ying.xue@windriver.com>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Using NULL as the device argument for the DMA mapping API is bogus, as the
DMA mapping API may use information from the "struct device" to perform
the DMA mapping operation. Therefore, pass the appropriate "struct
device".
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Acked-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
There are two types of "struct device": the one representing the
physical device on its physical bus (platform, SPI, PCI, etc.), and
the one representing the logical device in its device class (net,
etc.).
The DMA mapping API expects to receive as argument a "struct device"
representing the physical device, as the "struct device" contains
information about the bus that the DMA API needs.
However, the sh_eth driver mistakenly uses the "struct device"
representing the logical device (embedded in "struct net_device")
rather than the "struct device" representing the physical device on
its bus.
This commit fixes that by adjusting all calls to the DMA mapping API.
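A sketch of the change pattern (buffer and length names illustrative):

	/* before: used the logical net_device's embedded struct device */
	dma_addr = dma_map_single(&ndev->dev, buf, len, DMA_FROM_DEVICE);

	/* after: use the physical device sitting on the bus */
	dma_addr = dma_map_single(&mdp->pdev->dev, buf, len, DMA_FROM_DEVICE);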
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Acked-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Check that the qmin & qmax values don't overflow for the given Wlog value.
Check that qmin <= qmax.
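A sketch of the validation, assuming tc_red_qopt-style fields (exact form illustrative):

	if (qth_max < qth_min)
		return false;
	/* qmin/qmax are later scaled by Wlog; reject values that would overflow u32 */
	if (fls(qth_min) + Wlog > 32 || fls(qth_max) + Wlog > 32)
		return false;
	return true;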
Fixes: a783474591f2 ("[PKT_SCHED]: Generic RED layer")
Signed-off-by: Nogah Frankel <nogahf@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Do not allow the delta value to be zero since it is used as a divisor.
Fixes: 8af2a218de38 ("sch_red: Adaptative RED AQM")
Signed-off-by: Nogah Frankel <nogahf@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
According to the LS1021A RM, the value of PAL can be set so that the start of the
IP header in the receive data buffer is aligned to a 32-bit boundary. Normally,
setting PAL = 2 provides minimal padding to ensure such alignment of the IP
header.
However, when hardware time stamping is enabled, every incoming packet's
8-byte time stamp is inserted into the packet data buffer as padding
alignment bytes.
So we set the padding to 8+2 here to avoid the flood of alignment faults:
root@128:~# cat /proc/cpu/alignment
User: 0
System: 17539 (inet_gro_receive+0x114/0x2c0)
Skipped: 0
Half: 0
Word: 0
DWord: 0
Multi: 17539
User faults: 2 (fixup)
This is also shown when exception reporting is enabled:
CPU: 0 PID: 161 Comm: irq/66-eth1_g0_ Not tainted 4.1.21-rt13-WR8.0.0.0_preempt-rt #16
Hardware name: Freescale LS1021A
[<8001b420>] (unwind_backtrace) from [<8001476c>] (show_stack+0x20/0x24)
[<8001476c>] (show_stack) from [<807cfb48>] (dump_stack+0x94/0xac)
[<807cfb48>] (dump_stack) from [<80025d70>] (do_alignment+0x720/0x958)
[<80025d70>] (do_alignment) from [<80009224>] (do_DataAbort+0x40/0xbc)
[<80009224>] (do_DataAbort) from [<80015398>] (__dabt_svc+0x38/0x60)
Exception stack(0x86ad1cc0 to 0x86ad1d08)
1cc0: f9b3e080 86b3d072 2d78d287 00000000 866816c0 86b3d05e 86e785d0 00000000
1ce0: 00000011 0000000e 80840ab0 86ad1d3c 86ad1d08 86ad1d08 806d7fc0 806d806c
1d00: 40070013 ffffffff
[<80015398>] (__dabt_svc) from [<806d806c>] (inet_gro_receive+0x114/0x2c0)
[<806d806c>] (inet_gro_receive) from [<80660eec>] (dev_gro_receive+0x21c/0x3c0)
[<80660eec>] (dev_gro_receive) from [<8066133c>] (napi_gro_receive+0x44/0x17c)
[<8066133c>] (napi_gro_receive) from [<804f0538>] (gfar_clean_rx_ring+0x39c/0x7d4)
[<804f0538>] (gfar_clean_rx_ring) from [<804f0bf4>] (gfar_poll_rx_sq+0x58/0xe0)
[<804f0bf4>] (gfar_poll_rx_sq) from [<80660b10>] (net_rx_action+0x27c/0x43c)
[<80660b10>] (net_rx_action) from [<80033638>] (do_current_softirqs+0x1e0/0x3dc)
[<80033638>] (do_current_softirqs) from [<800338c4>] (__local_bh_enable+0x90/0xa8)
[<800338c4>] (__local_bh_enable) from [<8008025c>] (irq_forced_thread_fn+0x70/0x84)
[<8008025c>] (irq_forced_thread_fn) from [<800805e8>] (irq_thread+0x16c/0x244)
[<800805e8>] (irq_thread) from [<8004e490>] (kthread+0xe8/0x104)
[<8004e490>] (kthread) from [<8000fda8>] (ret_from_fork+0x14/0x2c)
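A minimal sketch of the padding setup described above (flag and field names illustrative):

	if (priv->device_flags & FSL_GIANFAR_DEV_HAS_TIMER)
		priv->padding = 8 + 2;	/* timestamp bytes + IP header alignment */
	else
		priv->padding = 2;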
Signed-off-by: Zumeng Chen <zumeng.chen@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This reverts commit d6f295e9def0; some userspace (in the case
we noticed, wpa_supplicant) relies on the current
error code to determine that a fixed-name interface already
exists.
Reported-by: Jouni Malinen <j@w1.fi>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Previously we swapped the tx_packets, tx_bytes and tx_dropped counters
with the rx_packets, rx_bytes and rx_dropped counters, respectively. This
behaviour is correct and expected for VF representors, but the counters
should not be swapped for physical port MAC representors.
Fixes: eadfa4c3be99 ("nfp: add stats and xmit helpers for representors")
Signed-off-by: Pieter Jansen van Vuuren <pieter.jansenvanvuuren@netronome.com>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
We had to disable BH _before_ calling __inet_twsk_hashdance() in commit
cfac7f836a71 ("tcp/dccp: block bh before arming time_wait timer").
This means we can revert 614bdd4d6e61 ("tcp: must block bh in
__inet_twsk_hashdance()").
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The chip family of TILEPro and TILE-Gx was developed by Tilera, which
was eventually acquired by Mellanox. The tile architecture was added to
the kernel in 2010 and first appeared in 2.6.36.
Now at Mellanox we are developing new chips based on the ARM64
architecture; our last TILE-Gx chip (the Gx72) was released in 2013, and
our customers using tile architecture products are not, as far as we
know, looking to upgrade to newer kernel releases. In the absence of
someone in the community stepping up to take over maintainership, this
commit marks the architecture as orphaned.
Cc: Chris Metcalf <metcalf@alum.mit.edu>
Signed-off-by: Chris Metcalf <cmetcalf@mellanox.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
After the preceding fix ("tcp: add tcp_v4_fill_cb()/tcp_v4_restore_cb()"),
socket lookups happen while skb->cb[] has not yet been mangled by TCP.
Fixes: a04a480d4392 ("net: Require exact match for TCP socket lookups if dif is l3mdev")
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
James Morris reported a kernel stack corruption bug [1] while
running the SELinux testsuite, and bisected it to a recent
commit bffa72cf7f9d ("net: sk_buff rbnode reorg").
We believe this commit is fine, but it exposes an older bug.
SELinux code runs from tcp_filter() and might send an ICMP,
expecting IP options to be found in skb->cb[] using the regular IPCB placement.
We need to defer TCP's mangling of skb->cb[] until after tcp_filter() is called.
This patch adds tcp_v4_fill_cb()/tcp_v4_restore_cb() in a way very
similar to how we added them for IPv6.
[1]
[ 339.806024] SELinux: failure in selinux_parse_skb(), unable to parse packet
[ 339.822505] Kernel panic - not syncing: stack-protector: Kernel stack is corrupted in: ffffffff81745af5
[ 339.822505]
[ 339.852250] CPU: 4 PID: 3642 Comm: client Not tainted 4.15.0-rc1-test #15
[ 339.868498] Hardware name: LENOVO 10FGS0VA1L/30BC, BIOS FWKT68A 01/19/2017
[ 339.885060] Call Trace:
[ 339.896875] <IRQ>
[ 339.908103] dump_stack+0x63/0x87
[ 339.920645] panic+0xe8/0x248
[ 339.932668] ? ip_push_pending_frames+0x33/0x40
[ 339.946328] ? icmp_send+0x525/0x530
[ 339.958861] ? kfree_skbmem+0x60/0x70
[ 339.971431] __stack_chk_fail+0x1b/0x20
[ 339.984049] icmp_send+0x525/0x530
[ 339.996205] ? netlbl_skbuff_err+0x36/0x40
[ 340.008997] ? selinux_netlbl_err+0x11/0x20
[ 340.021816] ? selinux_socket_sock_rcv_skb+0x211/0x230
[ 340.035529] ? security_sock_rcv_skb+0x3b/0x50
[ 340.048471] ? sk_filter_trim_cap+0x44/0x1c0
[ 340.061246] ? tcp_v4_inbound_md5_hash+0x69/0x1b0
[ 340.074562] ? tcp_filter+0x2c/0x40
[ 340.086400] ? tcp_v4_rcv+0x820/0xa20
[ 340.098329] ? ip_local_deliver_finish+0x71/0x1a0
[ 340.111279] ? ip_local_deliver+0x6f/0xe0
[ 340.123535] ? ip_rcv_finish+0x3a0/0x3a0
[ 340.135523] ? ip_rcv_finish+0xdb/0x3a0
[ 340.147442] ? ip_rcv+0x27c/0x3c0
[ 340.158668] ? inet_del_offload+0x40/0x40
[ 340.170580] ? __netif_receive_skb_core+0x4ac/0x900
[ 340.183285] ? rcu_accelerate_cbs+0x5b/0x80
[ 340.195282] ? __netif_receive_skb+0x18/0x60
[ 340.207288] ? process_backlog+0x95/0x140
[ 340.218948] ? net_rx_action+0x26c/0x3b0
[ 340.230416] ? __do_softirq+0xc9/0x26a
[ 340.241625] ? do_softirq_own_stack+0x2a/0x40
[ 340.253368] </IRQ>
[ 340.262673] ? do_softirq+0x50/0x60
[ 340.273450] ? __local_bh_enable_ip+0x57/0x60
[ 340.285045] ? ip_finish_output2+0x175/0x350
[ 340.296403] ? ip_finish_output+0x127/0x1d0
[ 340.307665] ? nf_hook_slow+0x3c/0xb0
[ 340.318230] ? ip_output+0x72/0xe0
[ 340.328524] ? ip_fragment.constprop.54+0x80/0x80
[ 340.340070] ? ip_local_out+0x35/0x40
[ 340.350497] ? ip_queue_xmit+0x15c/0x3f0
[ 340.361060] ? __kmalloc_reserve.isra.40+0x31/0x90
[ 340.372484] ? __skb_clone+0x2e/0x130
[ 340.382633] ? tcp_transmit_skb+0x558/0xa10
[ 340.393262] ? tcp_connect+0x938/0xad0
[ 340.403370] ? ktime_get_with_offset+0x4c/0xb0
[ 340.414206] ? tcp_v4_connect+0x457/0x4e0
[ 340.424471] ? __inet_stream_connect+0xb3/0x300
[ 340.435195] ? inet_stream_connect+0x3b/0x60
[ 340.445607] ? SYSC_connect+0xd9/0x110
[ 340.455455] ? __audit_syscall_entry+0xaf/0x100
[ 340.466112] ? syscall_trace_enter+0x1d0/0x2b0
[ 340.476636] ? __audit_syscall_exit+0x209/0x290
[ 340.487151] ? SyS_connect+0xe/0x10
[ 340.496453] ? do_syscall_64+0x67/0x1b0
[ 340.506078] ? entry_SYSCALL64_slow_path+0x25/0x25
Fixes: 971f10eca186 ("tcp: better TCP_SKB_CB layout to reduce cache line misses")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: James Morris <james.l.morris@oracle.com>
Tested-by: James Morris <james.l.morris@oracle.com>
Tested-by: Casey Schaufler <casey@schaufler-ca.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Fix the MAINTAINERS record so that it's more obvious who the maintainer for
AF_RXRPC is.
Reported-by: Joe Perches <joe@perches.com>
Reported-by: David Miller <davem@davemloft.net>
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
In rxrpc_release_sock() there may be no rx->local value to access, so we
can't unconditionally follow it to the rxrpc network namespace information
to poke the connection reapers.
Instead, use the socket's namespace pointer to find the namespace.
This unfixed code causes the following static checker warning:
net/rxrpc/af_rxrpc.c:898 rxrpc_release_sock()
error: we previously assumed 'rx->local' could be null (see line 887)
Fixes: 3d18cbb7fd0c ("rxrpc: Fix conn expiry timers")
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Remove one extraneous level of indentation on an assignment statement.
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The mss variable tracks the last max segment size sent to the TSO
engine. We do not update the hardware as long as we receive skbs with
the same value in gso_size.
During a network device down/up cycle (mapped to the stmmac_release() and
stmmac_open() callbacks) we issue a reset to the hardware and it
forgets the setting for mss. However we did not zero out our mss
variable, so the next transmission of a gso packet happens with an
undefined hardware setting.
This triggers a hang in the TSO engine and eventually the netdev
watchdog will bark.
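A minimal sketch of the fix, assuming the cached value lives in the driver state (field name illustrative):

	/* in stmmac_open(): the core has just been reset, so forget the cached
	 * MSS and force the next TSO transmission to reprogram it */
	priv->mss = 0;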
Fixes: f748be531d70 ("stmmac: support new GMAC4")
Signed-off-by: Lars Persson <larper@axis.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The current code doesn't use skb->mark to assign flowi4_mark, which
makes policy routing rules that match on fwmark not work as expected.
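A minimal sketch of the idea (surrounding flow setup omitted):

	struct flowi4 fl4;

	memset(&fl4, 0, sizeof(fl4));
	fl4.flowi4_mark = skb->mark;	/* let fwmark-based policy routing see the mark */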
Signed-off-by: Gao Feng <gfree.wind@vip.163.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The current GSO skb size limit was copy&pasted over from the L3 path,
where it is needed due to a TSO limitation.
As L2 devices don't offer TSO support (and thus all GSO skbs are
segmented before they reach the driver), there's no reason to restrict
the stack in how large it may build the GSO skbs.
Fixes: d52aec97e5bc ("qeth: enable scatter/gather in layer 2 mode")
Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Using GSO with small MTUs currently results in a substantial throughput
regression - which is caused by how qeth needs to map non-linear skbs
into its IO buffer elements:
compared to a linear skb, each GSO-segmented skb effectively consumes
twice as many buffer elements (ie two instead of one) due to the
additional header-only part. This causes the Output Queue to be
congested with low-utilized IO buffers.
Fix this as follows:
If the MSS is low enough that a non-SG GSO segmentation produces
order-0 skbs (currently ~3500 bytes), opt out of NETIF_F_SG. This is
where we anticipate the biggest savings, since an SG-enabled
GSO segmentation produces skbs that always consume at least two
buffer elements.
Larger MSS values continue to get an SG-enabled GSO segmentation, since
1) the relative overhead of the additional header-only buffer element
becomes less noticeable, and
2) the linearization overhead increases.
With the throughput regression fixed, re-enable NETIF_F_SG by default to
reap the significant CPU savings of GSO.
Fixes: 5722963a8e83 ("qeth: do not turn on SG per default")
Reported-by: Nils Hoppmann <niho@de.ibm.com>
Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Commit 5f78e29ceebf ("qeth: optimize IP handling in rx_mode callback")
reworked how secondary addresses are managed for qeth devices.
Instead of dropping & subsequently re-adding all addresses on every
ndo_set_rx_mode() call, qeth now keeps track of the addresses that are
currently registered with the HW.
On a ndo_set_rx_mode(), we thus only need to do (de-)registration
requests for the addresses that have actually changed.
On L3 devices, the lookup for IPv4 Multicast addresses checks the wrong
hashtable - and thus never finds a match. As a result, we first delete
*all* such addresses, and then re-add them again. So each set_rx_mode()
causes a short period where the IPv4 Multicast addresses are not
registered, and the card stops forwarding inbound traffic for them.
Fix this by setting the ->is_multicast flag on the lookup object, thus
enabling qeth_l3_ip_from_hash() to search the correct hashtable and
find a match there.
Fixes: 5f78e29ceebf ("qeth: optimize IP handling in rx_mode callback")
Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
tap_recvmsg() supports accepting an skb via msg_control after
commit 3b4ba04acca8 ("tap: support receiving skb from msg_control"),
so the skb, if present, should be freed within the function; otherwise
it is leaked.
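A sketch of the intended behaviour (early-error path illustrative):

	struct sk_buff *skb = m->msg_control;

	if (flags & ~(MSG_DONTWAIT | MSG_TRUNC)) {
		kfree_skb(skb);		/* don't leak an skb handed in via msg_control */
		return -EINVAL;
	}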
Signed-off-by: Wei Xu <wexu@redhat.com>
Reported-by: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
tun_recvmsg() supports accepting an skb via msg_control after
commit ac77cfd4258f ("tun: support receiving skb through msg_control"),
so the skb, if present, should be freed no matter how far it gets;
otherwise it is leaked.
This patch fixes several missed cases.
Signed-off-by: Wei Xu <wexu@redhat.com>
Reported-by: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Matthew found a roughly 40% TCP throughput regression with commit
c67df11f ("vhost_net: try batch dequing from skb array") as discussed
in the following thread:
https://www.mail-archive.com/netdev@vger.kernel.org/msg187936.html
Eventually we figured out that it was an skb leak in handle_rx()
when sending packets to the VM. This usually happens when the guest
cannot drain the vq as fast as vhost fills it; the resulting traffic
jam then leaks skbs, because there is no headcount left to send on the
vq from the vhost side.
This can be avoided by making sure we have enough headcount before
actually consuming an skb from the batched rx array while
transmitting, which is done simply by moving the zero-headcount check
a bit earlier.
Signed-off-by: Wei Xu <wexu@redhat.com>
Reported-by: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The short_input variable is assigned to another data pointer which is
referenced outside its scope. Fix this by moving the short_input
definition to the beginning of the bnxt_hwrm_do_send_msg() function.
No failure has been reported so far due to this issue.
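A sketch of the lifetime problem (condition and surrounding code illustrative):

	void *data = &request;
	if (use_short_cmd) {
		struct hwrm_short_input short_input = {0};
		data = &short_input;	/* pointer escapes this block... */
	}
	/* ...but short_input's storage is gone here, so 'data' may dangle.
	 * Fix: declare short_input at the top of bnxt_hwrm_do_send_msg(). */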
Fixes: e605db801bde ("bnxt_en: Support for Short Firmware Message")
Signed-off-by: Vasundhara Volam <vasundhara-v.volam@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
For flows that involve a vxlan encap action, the vxlan sock
interface may be specified as the outgoing interface. The driver
must resolve the outgoing PF interface used by this socket and
use the dst_fid of the PF in the hwrm_cfa_encap_record_alloc cmd.
Similarly, for flows that have a vxlan decap action, the
fid of the incoming PF interface must be used as the src_fid in
the hwrm_cfa_decap_filter_alloc cmd.
Fixes: 8c95f773b4a3 ("bnxt_en: add support for Flower based vxlan encap/decap offload")
Signed-off-by: Sathya Perla <sathya.perla@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
While creating a decap filter the tunnel smac need not (and must not) be
specified as we cannot ascertain the neighbor in the recv path. 'ttl'
match is also not needed for the decap filter and must be wild-carded.
Fixes: f484f6782e01 ("bnxt_en: add hwrm FW cmds for cfa_encap_record and decap_filter")
Signed-off-by: Sunil Challa <sunilkumar.challa@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The current 'bnxt_shutdown' implementation only invokes
'bnxt_ulp_shutdown' to shut down RoCE in the case when the system is in
the path of power off (SYSTEM_POWER_OFF). While this may work in most
cases, it does not work in the smart NIC case, when Linux 'reboot'
command is initiated from the Linux that runs on the ARM cores of the
NIC card. In this particular case, Linux 'reboot' results in a system
'L3' level reset where the entire ARM and associated subsystems are
being reset, but at the same time, Nitro core is being kept in sane state
(to allow external PCIe connected servers to continue to work). Without
properly shutting down RoCE and freeing all associated resources, it
results in the ARM core hanging immediately after the 'reboot'.
Always invoking 'bnxt_ulp_shutdown' in 'bnxt_shutdown' fixes the
above issue.
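A sketch of the change (simplified; locking and NULL checks omitted):

	static void bnxt_shutdown(struct pci_dev *pdev)
	{
		struct net_device *dev = pci_get_drvdata(pdev);
		struct bnxt *bp = netdev_priv(dev);

		bnxt_ulp_shutdown(bp);		/* moved out of the POWER_OFF-only branch */

		if (system_state == SYSTEM_POWER_OFF) {
			/* existing power-off teardown stays here */
		}
	}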
Fixes: 0efd2fc65c92 ("bnxt_en: Add a callback to inform RDMA driver during PCI shutdown.")
Signed-off-by: Ray Jui <ray.jui@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Whoops -- I must have just been being an idiot again. Thanks to Segher
for finding the bug :).
CC: Segher Boessenkool <segher@kernel.crashing.org>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
|
|
Introducing a new include/lib directory just for this file totally
messes up tab completion for include/linux, which is highly annoying.
Move it to include/linux where we have headers for all kinds of other
lib/ code as well.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
|
|
Ensure that we tell the MAC to take the link down when phylink_stop()
is called, and that this completes before phylink_stop() returns.
Reported-by: Florian Fainelli <f.fainelli@gmail.com>
Tested-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
We do not support SFP modules which require the address change sequence
as detailed by SFF 8472 revision 1.22 section 8.9. Warn when these
modules are inserted, and treat them as SFF8079 modules for ethtool.
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
There are two bits in the option word for the RX_LOS signal. One
reports that the RX_LOS signal is active high, the other reports that
it is active low. When both or neither are set, the result is not
well defined in the specification.
Rather than assuming that neither set means normal RX_LOS, take this
as meaning no RX_LOS signal available, thereby ignoring the signal.
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The options word is a be16 quantity, so we need to test the flags
after converting the endianness. Convert the flag constants to be16,
which the compiler can do at compile time, rather than converting the
variable at runtime.
Reported-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Fix an obvious typo: the first return value is set but never checked.
Signed-off-by: Max Uvarov <muvarov@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Remove the second tipc_rcv() call in tipc_udp_recv(). We have just
checked that the bearer is not up, and calling tipc_rcv() with a bearer
that is not up leads to a TIPC div-by-zero crash in
tipc_node_calculate_timer(). The crash is rare in practice, but can
happen like this:
We're enabling a bearer, but it's not yet up and fully initialized.
At the same time we receive a discovery packet, and in tipc_udp_recv()
we end up calling tipc_rcv() with the not-yet-initialized bearer,
causing later the div-by-zero crash in tipc_node_calculate_timer().
Jon Maloy explains the impact of removing the second tipc_rcv() call:
"link setup in the worst case will be delayed until the next arriving
discovery messages, 1 sec later, and this is an acceptable delay."
As the tipc_rcv() call is removed, just leave the function via the
rcu_out label, so that we will kfree_skb().
[ 12.590450] Own node address <1.1.1>, network identity 1
[ 12.668088] divide error: 0000 [#1] SMP
[ 12.676952] CPU: 2 PID: 0 Comm: swapper/2 Not tainted 4.14.2-dirty #1
[ 12.679225] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-2.fc27 04/01/2014
[ 12.682095] task: ffff8c2a761edb80 task.stack: ffffa41cc0cac000
[ 12.684087] RIP: 0010:tipc_node_calculate_timer.isra.12+0x45/0x60 [tipc]
[ 12.686486] RSP: 0018:ffff8c2a7fc838a0 EFLAGS: 00010246
[ 12.688451] RAX: 0000000000000000 RBX: ffff8c2a5b382600 RCX: 0000000000000000
[ 12.691197] RDX: 0000000000000000 RSI: ffff8c2a5b382600 RDI: ffff8c2a5b382600
[ 12.693945] RBP: ffff8c2a7fc838b0 R08: 0000000000000001 R09: 0000000000000001
[ 12.696632] R10: 0000000000000000 R11: 0000000000000000 R12: ffff8c2a5d8949d8
[ 12.699491] R13: ffffffff95ede400 R14: 0000000000000000 R15: ffff8c2a5d894800
[ 12.702338] FS: 0000000000000000(0000) GS:ffff8c2a7fc80000(0000) knlGS:0000000000000000
[ 12.705099] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 12.706776] CR2: 0000000001bb9440 CR3: 00000000bd009001 CR4: 00000000003606e0
[ 12.708847] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 12.711016] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 12.712627] Call Trace:
[ 12.713390] <IRQ>
[ 12.714011] tipc_node_check_dest+0x2e8/0x350 [tipc]
[ 12.715286] tipc_disc_rcv+0x14d/0x1d0 [tipc]
[ 12.716370] tipc_rcv+0x8b0/0xd40 [tipc]
[ 12.717396] ? minmax_running_min+0x2f/0x60
[ 12.718248] ? dst_alloc+0x4c/0xa0
[ 12.718964] ? tcp_ack+0xaf1/0x10b0
[ 12.719658] ? tipc_udp_is_known_peer+0xa0/0xa0 [tipc]
[ 12.720634] tipc_udp_recv+0x71/0x1d0 [tipc]
[ 12.721459] ? dst_alloc+0x4c/0xa0
[ 12.722130] udp_queue_rcv_skb+0x264/0x490
[ 12.722924] __udp4_lib_rcv+0x21e/0x990
[ 12.723670] ? ip_route_input_rcu+0x2dd/0xbf0
[ 12.724442] ? tcp_v4_rcv+0x958/0xa40
[ 12.725039] udp_rcv+0x1a/0x20
[ 12.725587] ip_local_deliver_finish+0x97/0x1d0
[ 12.726323] ip_local_deliver+0xaf/0xc0
[ 12.726959] ? ip_route_input_noref+0x19/0x20
[ 12.727689] ip_rcv_finish+0xdd/0x3b0
[ 12.728307] ip_rcv+0x2ac/0x360
[ 12.728839] __netif_receive_skb_core+0x6fb/0xa90
[ 12.729580] ? udp4_gro_receive+0x1a7/0x2c0
[ 12.730274] __netif_receive_skb+0x1d/0x60
[ 12.730953] ? __netif_receive_skb+0x1d/0x60
[ 12.731637] netif_receive_skb_internal+0x37/0xd0
[ 12.732371] napi_gro_receive+0xc7/0xf0
[ 12.732920] receive_buf+0x3c3/0xd40
[ 12.733441] virtnet_poll+0xb1/0x250
[ 12.733944] net_rx_action+0x23e/0x370
[ 12.734476] __do_softirq+0xc5/0x2f8
[ 12.734922] irq_exit+0xfa/0x100
[ 12.735315] do_IRQ+0x4f/0xd0
[ 12.735680] common_interrupt+0xa2/0xa2
[ 12.736126] </IRQ>
[ 12.736416] RIP: 0010:native_safe_halt+0x6/0x10
[ 12.736925] RSP: 0018:ffffa41cc0cafe90 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff4d
[ 12.737756] RAX: 0000000000000000 RBX: ffff8c2a761edb80 RCX: 0000000000000000
[ 12.738504] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
[ 12.739258] RBP: ffffa41cc0cafe90 R08: 0000014b5b9795e5 R09: ffffa41cc12c7e88
[ 12.740118] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000002
[ 12.740964] R13: ffff8c2a761edb80 R14: 0000000000000000 R15: 0000000000000000
[ 12.741831] default_idle+0x2a/0x100
[ 12.742323] arch_cpu_idle+0xf/0x20
[ 12.742796] default_idle_call+0x28/0x40
[ 12.743312] do_idle+0x179/0x1f0
[ 12.743761] cpu_startup_entry+0x1d/0x20
[ 12.744291] start_secondary+0x112/0x120
[ 12.744816] secondary_startup_64+0xa5/0xa5
[ 12.745367] Code: b9 f4 01 00 00 48 89 c2 48 c1 ea 02 48 3d d3 07 00
00 48 0f 47 d1 49 8b 0c 24 48 39 d1 76 07 49 89 14 24 48 89 d1 31 d2 48
89 df <48> f7 f1 89 c6 e8 81 6e ff ff 5b 41 5c 5d c3 66 90 66 2e 0f 1f
[ 12.747527] RIP: tipc_node_calculate_timer.isra.12+0x45/0x60 [tipc] RSP: ffff8c2a7fc838a0
[ 12.748555] ---[ end trace 1399ab83390650fd ]---
[ 12.749296] Kernel panic - not syncing: Fatal exception in interrupt
[ 12.750123] Kernel Offset: 0x13200000 from 0xffffffff82000000
(relocation range: 0xffffffff80000000-0xffffffffbfffffff)
[ 12.751215] Rebooting in 60 seconds..
Fixes: c9b64d492b1f ("tipc: add replicast peer discovery")
Signed-off-by: Tommi Rantala <tommi.t.rantala@nokia.com>
Cc: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Maciej Żenczykowski reported some panics in tcp_twsk_destructor()
that might be caused by the following bug.
The timewait timer is pinned to the CPU, because we want to transition
the timewait refcount from 0 to 4 in one go, once everything has been
initialized.
At the time commit ed2e92394589 ("tcp/dccp: fix timewait races in timer
handling") was merged, TCP was always running from the BH handler.
After commit 5413d1babe8f ("net: do not block BH while processing
socket backlog") we definitely can run tcp_time_wait() from process
context.
We need to block BH in the critical section so that the pinned timer
still has its purpose.
This bug is more likely to happen under stress and when very small RTOs
are used in datacenter flows.
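A sketch of the critical section in tcp_time_wait() (simplified):

	local_bh_disable();
	/* the refcount goes from 0 to 4 and the timer is pinned to this CPU;
	 * BH must not run the timewait timer on this CPU in between */
	inet_twsk_schedule(tw, timeo);
	__inet_twsk_hashdance(tw, sk, &tcp_hashinfo);
	inet_twsk_put(tw);
	local_bh_enable();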
Fixes: 5413d1babe8f ("net: do not block BH while processing socket backlog")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Maciej Żenczykowski <maze@google.com>
Acked-by: Maciej Żenczykowski <maze@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Currently, for abandoned chunks in the unsent outq, we just free the chunks.
Because no TSN has been assigned to them yet, there is no need to send a
FWD TSN to the peer, unlike for abandoned chunks in the sent outq.
The problem is that when parts of a msg have already been sent and the other
frags are still in the unsent outq, abandoning/dropping those frags means the
peer can never get the msg reassembled.
So frags in the unsent outq can't be dropped if the msg already has
outstanding frags.
This patch does the check in sctp_chunk_abandoned and
sctp_prsctp_prune_unsent.
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Acked-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|