path: root/tools/testing/selftests/bpf/progs/test_assign_reuse.c
author    Lorenz Bauer <lmb@isovalent.com>    2023-07-20 17:30:05 +0200
committer Martin KaFai Lau <martin.lau@kernel.org>    2023-07-25 13:51:43 -0700
commit    f0ea27e7bfe1c34e1f451a63eb68faa1d4c3a86d (patch)
tree      d50c7e7bcfcda04a65cadc9616b4ba027bb7a853 /tools/testing/selftests/bpf/progs/test_assign_reuse.c
parent    MAINTAINERS: Replace my email address (diff)
udp: re-score reuseport groups when connected sockets are present
Contrary to TCP, UDP reuseport groups can contain TCP_ESTABLISHED sockets.
To support these properly we remember whether a group has a connected
socket and skip the fast reuseport early-return. In effect we continue
scoring all reuseport sockets and then choose the one with the highest
score.

The current code fails to re-calculate the score for the result of
lookup_reuseport. According to Kuniyuki Iwashima:

  1) SO_INCOMING_CPU is set
     -> selected sk might have +1 score

  2) BPF prog returns ESTABLISHED and/or SO_INCOMING_CPU sk
     -> selected sk will have more than 8

Using the old score could trigger more lookups depending on the
order that sockets are created.

  sk -> sk (SO_INCOMING_CPU) -> sk (ESTABLISHED)
  |     |
  |     `-> select the next SO_INCOMING_CPU sk
  |
  `-> select itself (We should save this lookup)

Fixes: efc6b6f6c311 ("udp: Improve load balancing for SO_REUSEPORT.")
Reviewed-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Signed-off-by: Lorenz Bauer <lmb@isovalent.com>
Link: https://lore.kernel.org/r/20230720-so-reuseport-v6-1-7021b683cdae@isovalent.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Diffstat (limited to 'tools/testing/selftests/bpf/progs/test_assign_reuse.c')
0 files changed, 0 insertions, 0 deletions