commit    3f4888adae7c1619b990d98a9b967536f71822b8
tree      acd4d2976939d6de5f4495bbebc4a9420e40d38e
parent    Merge branch 'bnxt_en-dcbnl'
parent    tcp: tsq: move tsq_flags close to sk_wmem_alloc
author    David S. Miller <davem@davemloft.net>  2016-12-05 13:32:25 -0500
committer David S. Miller <davem@davemloft.net>  2016-12-05 13:32:25 -0500
Merge branch 'tcp-tsq-perf'
Eric Dumazet says:

====================
tcp: tsq: performance series

Under very high TX stress, the CPU handling NIC TX completions can spend
a considerable amount of cycles handling TSQ (TCP Small Queues) logic.

This patch series avoids some atomic operations, but the most notable
patch is the 3rd one, allowing other CPUs processing ACK packets and
calling tcp_write_xmit() to grab TCP_TSQ_DEFERRED so that
tcp_tasklet_func() can skip already processed sockets.

This avoids lots of lock acquisitions and cache line accesses,
particularly under load.

In v2, I added:

- A tcp_small_queue_check() change to allow the 1st and 2nd packets in
  the write queue to be sent, even when TX completion of already
  acknowledged packets has not happened yet. This helps when TX
  completion coalescing parameters are set to extreme values, and/or
  busy polling is used.

- A reorganization of struct sock fields to lower false sharing and
  increase data locality.

- A move of tsq_flags from tcp_sock to struct sock, also to reduce
  cache line misses during TX completions.

I measured an overall throughput gain of 22% for heavy TCP use over a
single TX queue.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
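The core trick in the 3rd patch is an atomic claim on pending transmit
work: whichever path reaches the socket first grabs the deferred bit,
and the other path sees the bit already cleared and skips the socket
entirely, saving a lock acquisition. Below is a minimal user-space
model of that handshake using C11 atomics; struct sock_model,
SOCK_TSQ_DEFERRED, ack_path_xmit() and tasklet_func() are hypothetical
stand-ins for the kernel's sk->sk_tsq_flags machinery, not the kernel
code itself.

#include <stdatomic.h>
#include <stdio.h>

#define SOCK_TSQ_DEFERRED (1UL << 0)	/* pending transmit work */

struct sock_model {
	atomic_ulong tsq_flags;		/* models sk->sk_tsq_flags */
};

/* ACK-processing path: atomically claim the deferred work. */
static void ack_path_xmit(struct sock_model *sk)
{
	unsigned long old = atomic_fetch_and(&sk->tsq_flags,
					     ~SOCK_TSQ_DEFERRED);
	if (old & SOCK_TSQ_DEFERRED)
		printf("ACK path: sent deferred data\n");
}

/* Tasklet path: skip sockets whose work was already claimed. */
static void tasklet_func(struct sock_model *sk)
{
	unsigned long old = atomic_fetch_and(&sk->tsq_flags,
					     ~SOCK_TSQ_DEFERRED);
	if (!(old & SOCK_TSQ_DEFERRED)) {
		/* Another CPU already serviced this socket: no lock
		 * acquisition, no extra cache line traffic. */
		printf("tasklet: already serviced, skipping\n");
		return;
	}
	printf("tasklet: sent deferred data\n");
}

int main(void)
{
	struct sock_model sk = { .tsq_flags = SOCK_TSQ_DEFERRED };

	ack_path_xmit(&sk);	/* claims and performs the work */
	tasklet_func(&sk);	/* finds the bit cleared, skips */
	return 0;
}

The v2 struct sock reorganization attacks a different cost, false
sharing: fields dirtied by TX completions are separated from
read-mostly fast-path fields so they land on different cache lines. A
layout sketch under the same caveats, with hypothetical field names and
an assumed 64-byte cache line:

#include <stdalign.h>

#define CACHELINE 64	/* assumed cache line size */

struct sock_layout {
	/* read-mostly on the RX/ACK fast path */
	unsigned int rcv_buf;
	unsigned int snd_buf;

	/* frequently written by TX completion handlers; aligning this
	 * group to its own cache line keeps writers from invalidating
	 * the readers' line (false sharing) */
	alignas(CACHELINE) unsigned long wmem_alloc;
	unsigned long tsq_flags;
};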
Diffstat (limited to 'MAINTAINERS')
0 files changed, 0 insertions, 0 deletions