author     Mirco Barone <mirco.barone@polito.it>   2025-05-28 17:26:34 +0000
committer  Jason A. Donenfeld <Jason@zx2c4.com>    2025-05-28 20:08:08 +0200
commit     ec2dc2aa19745137cdcbf2d01c90a023586da3fb (patch)
tree       711751c8c0850fa2f2d6aef110b3f99ca37a0a93
parent     Doc: networking: Fix various typos in rds.rst (diff)
wireguard: device: enable threaded NAPI
Enable threaded NAPI by default for WireGuard devices in response to low performance behavior that we observed when multiple tunnels (and thus multiple wg devices) are deployed on a single host. This affects any kind of multi-tunnel deployment, regardless of whether the tunnels share the same endpoints or not (i.e., a VPN concentrator type of gateway would also be affected).

The problem is caused by the fact that, in case of a traffic surge that involves multiple tunnels at the same time, the polling of the NAPI instance of all these wg devices tends to converge onto the same core, causing underutilization of the CPU and bottlenecking performance.

This happens because NAPI polling is hosted by default in softirq context, but the WireGuard driver only raises this softirq after the rx peer queue has been drained, which doesn't happen during high traffic. In this case, the softirq already active on a core is reused instead of raising a new one.

As a result, once two or more tunnel softirqs have been scheduled on the same core, they remain pinned there until the surge ends. In our experiments, this almost always leads to all tunnel NAPIs being handled on a single core shortly after a surge begins, limiting scalability to less than 3x the performance of a single tunnel, despite plenty of unused CPU cores being available.

The proposed mitigation is to enable threaded NAPI for all WireGuard devices. This moves the NAPI polling context to a dedicated per-device kernel thread, allowing the scheduler to balance the load across all available cores.

On our 32-core gateways, enabling threaded NAPI yields a ~4x performance improvement with 16 tunnels, increasing throughput from ~13 Gbps to ~48 Gbps. Meanwhile, CPU usage on the receiver (which is the bottleneck) jumps from 20% to 100%.

We have found no performance regressions in any scenario we tested. Single-tunnel throughput remains unchanged.

More details are available in our Netdev paper.

Link: https://netdevconf.info/0x18/docs/netdev-0x18-paper23-talk-paper.pdf
Signed-off-by: Mirco Barone <mirco.barone@polito.it>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
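For reference, a minimal sketch of the driver-side pattern this change relies on. The function and variable names other than netif_napi_add() and dev_set_threaded() are illustrative, and the netif_napi_add() call shown assumes a kernel recent enough to have dropped the weight argument:

#include <linux/netdevice.h>

/* Illustrative setup path for a net_device that hosts a NAPI instance:
 * register the poll callback as usual, then mark the device as threaded
 * so the core services its NAPI instances from kernel threads (scheduled
 * like ordinary tasks) instead of from softirq context on whichever CPU
 * last raised NET_RX_SOFTIRQ.
 */
static int example_napi_setup(struct net_device *dev,
			      struct napi_struct *napi,
			      int (*poll)(struct napi_struct *, int))
{
	netif_napi_add(dev, napi, poll);	/* >= v5.19 signature */
	return dev_set_threaded(dev, true);
}

In wg_newlink(), the dev_set_threaded() call sits just before register_netdevice(), as the diff below shows.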
-rw-r--r--  drivers/net/wireguard/device.c | 1 +
1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/drivers/net/wireguard/device.c b/drivers/net/wireguard/device.c
index 3ffeeba5dccf..4a529f1f9bea 100644
--- a/drivers/net/wireguard/device.c
+++ b/drivers/net/wireguard/device.c
@@ -366,6 +366,7 @@ static int wg_newlink(struct net_device *dev,
 	if (ret < 0)
 		goto err_free_handshake_queue;
 
+	dev_set_threaded(dev, true);
 	ret = register_netdevice(dev);
 	if (ret < 0)
 		goto err_uninit_ratelimiter;
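As a usage note (not part of this patch): threaded NAPI can also be toggled at runtime through the generic per-device "threaded" sysfs attribute; this change merely makes it the default for WireGuard devices. A small sketch, where the helper name and the "wg0" interface are assumed examples:

#include <stdio.h>

/* Sketch: enable (1) or disable (0) threaded NAPI for an interface by
 * writing the generic /sys/class/net/<iface>/threaded attribute.
 * Requires sufficient privileges.
 */
static int set_napi_threaded(const char *iface, int enable)
{
	char path[128];
	FILE *f;

	snprintf(path, sizeof(path), "/sys/class/net/%s/threaded", iface);
	f = fopen(path, "w");
	if (!f)
		return -1;
	fprintf(f, "%d\n", enable);
	return fclose(f);
}

int main(void)
{
	return set_napi_threaded("wg0", 1) == 0 ? 0 : 1;
}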