path: root/net/ipv4/tcp_fastopen.c
author Eric Dumazet <edumazet@google.com> 2014-09-28 22:18:47 -0700
committer David S. Miller <davem@davemloft.net> 2014-09-29 12:27:20 -0400
commit b1937227316417aa7568d01e6fa1f272e98fb890 (patch)
tree 93891f7672c803b767de6621c028f45edf242f17 /net/ipv4/tcp_fastopen.c
parent Merge branch 'qca7000_spi' (diff)
net: reorganize sk_buff for faster __copy_skb_header()
With the proliferation of bit fields in sk_buff, __copy_skb_header() became quite expensive, showing up as the most expensive function in a GSO workload.

__copy_skb_header() performance is also critical for non-GSO TCP operations, as it is used from skb_clone().

This patch carefully moves all the fields that were not copied into a separate zone: cloned, nohdr, fclone, peeked, head_frag, xmit_more.

Then I moved all other copied fields into a section delimited by headers_start[0]/headers_end[0], so that we can use a single memcpy() call, inlined by the compiler using long word loads/stores.

I also tried to keep all copies in the natural order of sk_buff, to help hardware prefetching.

I made sure sk_buff size did not change.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
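As an illustration of the technique described above, the sketch below shows how zero-size marker fields plus offsetof() allow a group of adjacent struct fields to be copied with a single memcpy(). This is not the kernel code: the demo_buff struct and its fields are made up for the example (the real layout is in include/linux/skbuff.h, and the actual copy happens in __copy_skb_header() in net/core/skbuff.c).

/*
 * Minimal userspace sketch of the marker-delimited copy technique.
 * demo_buff is NOT the real sk_buff; it only illustrates how zero-size
 * marker fields let all copied fields be transferred with one memcpy().
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct demo_buff {
	/* fields that must NOT be copied to a clone */
	uint8_t cloned:1,
		nohdr:1,
		fclone:2;

	/* everything between the two markers is copied as one block */
	char headers_start[0];
	uint32_t priority;
	uint16_t protocol;
	uint8_t  pkt_type;
	uint32_t mark;
	char headers_end[0];
};

/* copy only the marked section, mirroring the idea behind __copy_skb_header() */
static void demo_copy_header(struct demo_buff *new, const struct demo_buff *old)
{
	memcpy(&new->headers_start, &old->headers_start,
	       offsetof(struct demo_buff, headers_end) -
	       offsetof(struct demo_buff, headers_start));
}

int main(void)
{
	struct demo_buff a = { .cloned = 1, .priority = 7, .protocol = 0x0800,
			       .pkt_type = 3, .mark = 42 };
	struct demo_buff b = { 0 };

	demo_copy_header(&b, &a);
	/* b receives the header fields, but not the 'cloned' bit */
	printf("priority=%u protocol=0x%x mark=%u cloned=%u\n",
	       b.priority, b.protocol, b.mark, b.cloned);
	return 0;
}

Note that zero-length arrays used as in-struct markers are a GNU C extension (the kernel relies on it), so this sketch assumes a gcc or clang build.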
Diffstat (limited to 'net/ipv4/tcp_fastopen.c')
0 files changed, 0 insertions, 0 deletions