author | 2022-06-10 16:21:39 -0700
---|---
committer | 2022-06-10 16:21:40 -0700
commit | e10b02ee5b6c95872064cf0a8e65f31951a31967 (patch)
tree | e061107c999e33aac6a61f87cd45a24cd4258422 /drivers/net/ethernet/intel/ice/ice_lib.c
parent | Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net (diff)
parent | net: unexport __sk_mem_{raise|reduce}_allocated (diff)
Merge branch 'net-reduce-tcp_memory_allocated-inflation'
Eric Dumazet says:
====================
net: reduce tcp_memory_allocated inflation
Hosts with a lot of sockets tend to hit so-called TCP memory pressure,
leading to very bad TCP performance and/or OOM.
The problem is that some TCP sockets can hold up to 2MB of 'forward
allocations' in their per-socket cache (sk->sk_forward_alloc),
and there is no mechanism to make them relinquish their share
under memory pressure; 10,000 such sockets can, for example,
pin roughly 20 GB between them.
Their share is reclaimed only on some potentially rare events,
one socket at a time.
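For illustration, here is a minimal user-space C sketch of such a
per-socket forward-alloc cache. It is not the kernel implementation;
the names (this simplified struct sock, fwd_alloc_charge,
fwd_alloc_uncharge) and the page-at-a-time refill are assumptions
made for the example.

```c
/* Illustrative-only sketch of per-socket "forward allocation":
 * a socket pre-charges whole pages to the shared counter, then
 * serves byte-sized requests from its own cache. */
#include <stdatomic.h>

#define PAGE_SIZE 4096L

static atomic_long memory_allocated;    /* shared counter, in pages */

struct sock {
	long sk_forward_alloc;          /* per-socket cache, in bytes */
};

/* Charge @bytes: refill the per-socket cache one page at a time. */
static void fwd_alloc_charge(struct sock *sk, long bytes)
{
	while (sk->sk_forward_alloc < bytes) {
		atomic_fetch_add(&memory_allocated, 1);
		sk->sk_forward_alloc += PAGE_SIZE;
	}
	sk->sk_forward_alloc -= bytes;
}

/* Uncharge @bytes: freed memory goes back to the socket cache.
 * Nothing here shrinks the shared counter, so an idle socket can
 * sit on a large cached share until some rare event reclaims it. */
static void fwd_alloc_uncharge(struct sock *sk, long bytes)
{
	sk->sk_forward_alloc += bytes;
}

int main(void)
{
	struct sock sk = { 0 };

	fwd_alloc_charge(&sk, 1000);    /* charges 1 page, caches 3096 B */
	fwd_alloc_uncharge(&sk, 1000);  /* cache grows back to 4096 B */
	return 0;
}
```

The uncharge path never returns pages to the shared counter, which is
exactly the inflation this series addresses.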
In this series, I implemented a per-CPU cache instead of a per-socket one.
Each CPU has a +1/-1 MB (256 pages on x86 with 4KB pages) forward-alloc
cache, so that the shared tcp_memory_allocated cache line is not dirtied
too often.
We keep sk->sk_forward_alloc values as small as possible, to meet the
memcg page-granularity constraint.
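A minimal user-space C sketch of this batching scheme, assuming 4KB
pages and a fixed CPU count; the names memory_allocated_add,
per_cpu_fw_alloc, and PCPU_RESERVE are illustrative, not the kernel's:

```c
#include <stdatomic.h>
#include <stdio.h>

#define PAGE_SHIFT   12                        /* 4KB pages */
#define PCPU_RESERVE (1L << (20 - PAGE_SHIFT)) /* 1 MB = 256 pages */
#define NR_CPUS      4

static atomic_long memory_allocated;    /* shared counter, in pages */
static long per_cpu_fw_alloc[NR_CPUS];  /* per-CPU batch, in pages */

/* Account @amt pages (positive or negative) on behalf of @cpu.
 * The shared counter is touched only when the local batch leaves
 * the +/-1 MB window, so most updates stay CPU-local. */
static void memory_allocated_add(int cpu, long amt)
{
	long local = per_cpu_fw_alloc[cpu] + amt;

	if (local >= PCPU_RESERVE || local <= -PCPU_RESERVE) {
		atomic_fetch_add(&memory_allocated, local); /* flush */
		local = 0;
	}
	per_cpu_fw_alloc[cpu] = local;
}

int main(void)
{
	/* 300 one-page charges on CPU 0: one flush of 256 pages,
	 * then 44 pages remain batched locally. */
	for (int i = 0; i < 300; i++)
		memory_allocated_add(0, 1);
	printf("shared=%ld local=%ld\n",
	       atomic_load(&memory_allocated), per_cpu_fw_alloc[0]);
	return 0;
}
```

The design trades a bounded error in the shared counter (at most 1 MB
per CPU) for far fewer dirtying writes to its cache line.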
Note that memcg already has a per-cpu cache, although MEMCG_CHARGE_BATCH
is defined as 32 pages, which seems a bit small.
Note that while this cover letter mentions TCP, this work is generic
and supports TCP, UDP, DECnet, and SCTP.
====================
Link: https://lore.kernel.org/r/20220609063412.2205738-1-eric.dumazet@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Diffstat (limited to 'drivers/net/ethernet/intel/ice/ice_lib.c')
0 files changed, 0 insertions(+), 0 deletions(-)