author      David S. Miller <davem@davemloft.net>  2013-03-20 12:10:46 -0400
committer   David S. Miller <davem@davemloft.net>  2013-03-20 12:10:46 -0400
commit      e0ebcb80cbf2022db839186b52f8de4591883559 (patch)
tree        115d199ee9cd9fb9dbc296d3daad259cd9e0d682 /net/ipv4
parent      genetlink: trigger BUG_ON if a group name is too long (diff)
parent      l2tp: unhash l2tp sessions on delete, not on free (diff)
Merge branch 'l2tp'
Tom Parkin says:

====================
This l2tp bugfix patchset addresses a number of issues.

The first five patches in the series prevent l2tp sessions pinning an l2tp
tunnel open. This occurs because the l2tp tunnel is torn down in the tunnel
socket destructor, but each session holds a tunnel socket reference which
prevents tunnels with sessions being deleted. The solution I've implemented
here involves adding a .destroy hook to udp code, as discussed previously
on netdev[1].

The subsequent seven patches address further bugs exposed by fixing the
problem above, or exposed through stress testing the implementation above.

Patch 11 (avoid deadlock in l2tp stats update) isn't directly related to
tunnel/session lifetimes, but it does prevent deadlocks on i386 kernels
running on 64 bit hardware.

This patchset has been tested on 32 and 64 bit preempt/non-preempt kernels,
using iproute2, openl2tp, and custom-made stress test code.

[1] http://comments.gmane.org/gmane.linux.network/259169
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
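For context, the .destroy hook added in this series gives a UDP encapsulation user a place to run protocol-specific teardown when its socket is destroyed, instead of keeping the tunnel pinned through socket references held by sessions. The sketch below shows how a tunnel driver might wire the hook up; my_tunnel_destroy() and my_tunnel_sock_init() are hypothetical names used purely for illustration, not functions from this patchset.

	/* Illustrative sketch only -- a hypothetical tunnel driver, not part of
	 * this series.  It registers the new encap_destroy callback on its
	 * UDP socket so udp_destroy_sock() calls back into the protocol.
	 */
	#include <linux/udp.h>
	#include <net/udp.h>

	static void my_tunnel_destroy(struct sock *sk)
	{
		/* Protocol-specific teardown: close sessions and release
		 * tunnel state here, now that socket destruction notifies
		 * the encapsulation protocol directly.
		 */
	}

	static void my_tunnel_sock_init(struct sock *sk)
	{
		struct udp_sock *up = udp_sk(sk);

		up->encap_type = UDP_ENCAP_L2TPINUDP;	/* mark socket as encapsulating */
		up->encap_destroy = my_tunnel_destroy;	/* invoked from udp_destroy_sock() */
		udp_encap_enable();			/* enable the udp_encap_needed static key */
	}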
Diffstat (limited to 'net/ipv4')
-rw-r--r--  net/ipv4/udp.c | 7 +++++++
1 file changed, 7 insertions(+), 0 deletions(-)
diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index 265c42cf963c..0a073a263720 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -1762,9 +1762,16 @@ int udp_rcv(struct sk_buff *skb)
 
 void udp_destroy_sock(struct sock *sk)
 {
+	struct udp_sock *up = udp_sk(sk);
 	bool slow = lock_sock_fast(sk);
 	udp_flush_pending_frames(sk);
 	unlock_sock_fast(sk, slow);
+	if (static_key_false(&udp_encap_needed) && up->encap_type) {
+		void (*encap_destroy)(struct sock *sk);
+		encap_destroy = ACCESS_ONCE(up->encap_destroy);
+		if (encap_destroy)
+			encap_destroy(sk);
+	}
 }
 
 /*
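The ordering of the guards in the hunk above is worth noting: static_key_false(&udp_encap_needed) keeps the teardown path essentially free of overhead for ordinary, non-encapsulating UDP sockets, and ACCESS_ONCE() reads up->encap_destroy exactly once so the compiler cannot reload the pointer between the NULL check and the call.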