author		Peter Oskolkov <posk@google.com>	2018-08-11 20:27:25 +0000
committer	David S. Miller <davem@davemloft.net>	2018-08-11 17:54:18 -0700
commit		a4fd284a1f8fd4b6c59aa59db2185b1e17c5c11c (patch)
tree		d75f09d0acf9a10a81723bd1a2028f555e9dc54c /net/ipv4/inet_fragment.c
parent		ip: add helpers to process in-order fragments faster. (diff)
ip: process in-order fragments efficiently
This patch changes the runtime behavior of the IP defrag queue: incoming
in-order fragments are appended to the tail of the current list/"run" of
in-order fragments.

On some workloads, UDP stream performance is substantially improved:

  RX: ./udp_stream -F 10 -T 2 -l 60
  TX: ./udp_stream -c -H <host> -F 10 -T 5 -l 60

with this patchset applied on a 10Gbps receiver:

  throughput=9524.18
  throughput_units=Mbit/s

upstream (net-next):

  throughput=4608.93
  throughput_units=Mbit/s

Reported-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Peter Oskolkov <posk@google.com>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Florian Westphal <fw@strlen.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
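As background for the message above, here is a minimal sketch of the in-order
fast path, building on the helpers introduced by the parent patch ("ip: add
helpers to process in-order fragments faster."). The names frag_cb_sketch,
FRAG_CB_SKETCH and try_append_in_order are illustrative assumptions, not the
kernel's identifiers; the real code keeps the equivalent state in the fragment
queue and the skb control block.

#include <linux/rbtree.h>
#include <linux/skbuff.h>

/* Hypothetical per-fragment bookkeeping kept in skb->cb (names are
 * illustrative, not the kernel's): each node of the defrag rbtree is the
 * head skb of a "run" of consecutive fragments, and the rest of the run
 * is chained off the head through next_frag.
 */
struct frag_cb_sketch {
	struct sk_buff *next_frag;	/* next fragment in the same run */
};

#define FRAG_CB_SKETCH(skb)	((struct frag_cb_sketch *)((skb)->cb))

/* Fast path for in-order arrival: if the new fragment starts exactly where
 * the last run ends, link it after that run's tail in O(1).  Only
 * out-of-order fragments make the caller walk the rbtree for an insertion
 * point.  On success the caller advances its tail pointer and run end.
 */
static bool try_append_in_order(struct sk_buff *run_tail, int run_end,
				struct sk_buff *skb, int offset)
{
	if (!run_tail || offset != run_end)
		return false;

	FRAG_CB_SKETCH(skb)->next_frag = NULL;
	FRAG_CB_SKETCH(run_tail)->next_frag = skb;
	return true;
}

The point of the fast path is that an in-order fragment never touches the
rbtree: only out-of-order arrivals pay for a tree walk, which is what yields
the throughput numbers quoted above.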
Diffstat (limited to 'net/ipv4/inet_fragment.c')
-rw-r--r--	net/ipv4/inet_fragment.c	2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/ipv4/inet_fragment.c b/net/ipv4/inet_fragment.c
index 6d258a5669e7..bcb11f3a27c0 100644
--- a/net/ipv4/inet_fragment.c
+++ b/net/ipv4/inet_fragment.c
@@ -146,7 +146,7 @@ void inet_frag_destroy(struct inet_frag_queue *q)
 			fp = xp;
 		} while (fp);
 	} else {
-		sum_truesize = skb_rbtree_purge(&q->rb_fragments);
+		sum_truesize = inet_frag_rbtree_purge(&q->rb_fragments);
 	}
 	sum = sum_truesize + f->qsize;
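The one-line change above is needed because, after the parent patch, each node
of q->rb_fragments is the head of a run of chained fragments, so the generic
skb_rbtree_purge() would free only the run heads. The helper body below is a
hedged sketch of what a run-aware purge such as inet_frag_rbtree_purge() has
to do, reusing the hypothetical cb layout from the earlier sketch; it is not
the verbatim kernel implementation.

#include <linux/rbtree.h>
#include <linux/skbuff.h>

/* Same hypothetical cb layout as in the earlier sketch. */
struct frag_cb_sketch {
	struct sk_buff *next_frag;	/* next fragment in the same run */
};

#define FRAG_CB_SKETCH(skb)	((struct frag_cb_sketch *)((skb)->cb))

/* Sketch of a run-aware purge: unlike the generic skb_rbtree_purge(), it
 * must also free the fragments chained behind each run head, and it
 * returns the accumulated truesize so the caller can fix up memory
 * accounting (as inet_frag_destroy() does with sum_truesize).
 */
static unsigned int frag_rbtree_purge_sketch(struct rb_root *root)
{
	struct rb_node *p = rb_first(root);
	unsigned int sum = 0;

	while (p) {
		/* Each tree node is the head skb of a run. */
		struct sk_buff *skb = rb_entry(p, struct sk_buff, rbnode);

		p = rb_next(p);
		rb_erase(&skb->rbnode, root);

		/* Free the head and every fragment chained behind it. */
		while (skb) {
			struct sk_buff *next = FRAG_CB_SKETCH(skb)->next_frag;

			sum += skb->truesize;
			kfree_skb(skb);
			skb = next;
		}
	}
	return sum;
}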