path: root/include/linux/udp.h
author		Eric Dumazet <edumazet@google.com>	2016-12-08 11:41:56 -0800
committer	David S. Miller <davem@davemloft.net>	2016-12-09 22:12:21 -0500
commit		6b229cf77d683f634f0edd876c6d1015402303ad (patch)
tree		ec877d3e324da74f7bbb179d4cd4a8db0172865c /include/linux/udp.h
parent		udp: copy skb->truesize in the first cache line (diff)
download	linux-dev-6b229cf77d683f634f0edd876c6d1015402303ad.tar.xz
		linux-dev-6b229cf77d683f634f0edd876c6d1015402303ad.zip
udp: add batching to udp_rmem_release()
If udp_recvmsg() constantly releases sk_rmem_alloc for every read packet, it gives opportunity for producers to immediately grab spinlocks and desperately try adding another packet, causing false sharing.

We can add a simple heuristic to give the signal by batches of ~25 % of the queue capacity.

This patch considerably increases performance under flood by about 50 %, since the thread draining the queue is no longer slowed by false sharing.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
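The batching itself happens in udp_rmem_release() in net/ipv4/udp.c, which is outside this diffstat. A minimal sketch of the heuristic, with the forward-alloc and protocol memory accounting elided, could look like the following (the release threshold of sk_rcvbuf >> 2 corresponds to the ~25 % mentioned above):

	/* Sketch only: the accounting that returns memory to the socket's
	 * forward-alloc pool is elided.
	 */
	static void udp_rmem_release(struct sock *sk, int size, int partial)
	{
		struct udp_sock *up = udp_sk(sk);

		if (likely(partial)) {
			/* Accumulate freed bytes instead of publishing
			 * them one packet at a time.
			 */
			up->forward_deficit += size;
			size = up->forward_deficit;
			/* Only release once ~25 % of the receive buffer
			 * has been drained.
			 */
			if (size < (sk->sk_rcvbuf >> 2))
				return;
		} else {
			size += up->forward_deficit;
		}
		up->forward_deficit = 0;

		/* Publish the batched release in one shot, so producers
		 * no longer bounce the sk_rmem_alloc cache line on every
		 * dequeued packet.
		 */
		atomic_sub(size, &sk->sk_rmem_alloc);

		/* ... return the memory to sk_forward_alloc and the
		 * protocol memory accounting as before ...
		 */
	}

Producers now see sk_rmem_alloc drop in large steps instead of per packet, which is what removes the false sharing described above.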
Diffstat (limited to 'include/linux/udp.h')
-rw-r--r--	include/linux/udp.h	3
1 file changed, 3 insertions(+), 0 deletions(-)
diff --git a/include/linux/udp.h b/include/linux/udp.h
index d1fd8cd39478..c0f530809d1f 100644
--- a/include/linux/udp.h
+++ b/include/linux/udp.h
@@ -79,6 +79,9 @@ struct udp_sock {
 	int		(*gro_complete)(struct sock *sk,
 					struct sk_buff *skb,
 					int nhoff);
+
+	/* This field is dirtied by udp_recvmsg() */
+	int		forward_deficit;
 };
 
 static inline struct udp_sock *udp_sk(const struct sock *sk)