author     Jason A. Donenfeld <Jason@zx2c4.com>  2017-09-19 02:56:21 +0200
committer  Jason A. Donenfeld <Jason@zx2c4.com>  2017-09-19 03:04:21 +0200
commit     939c122c57ce834ac5624409272ece1428f70a4a (patch)
tree       7f794d67d7bbd9175fbceeceed594e0960ab76f1 /src/receive.c
parent     version: bump snapshot (diff)
receive: use netif_receive_skb instead of netif_rx
netif_rx queues things up to a per-cpu backlog, whereas netif_receive_skb immediately delivers the packet to the underlying network device and mostly never fails. In the event that decrypting packets is actually happening faster than the networking subsystem can receive them -- as with 65k packets of UDPv6 in `make test-qemu` -- this backlog fills up and we wind up dropping some packets. That is fine and not altogether terrible, but it does raise the question of why we bothered spending CPU cycles decrypting those packets if they were just going to be dropped anyway.

So, moving from netif_rx to netif_receive_skb means that whatever time netif_receive_skb needs winds up slowing down the dequeuing of decrypted packets, which in turn means the decryption receive queue fills up sooner, so that we drop packets before decryption rather than after, thus saving precious CPU cycles.

Potential downsides include not keeping the cache hot and not inundating the network subsystem with as many packets per second as possible, but in preliminary benchmarks no difference has yet been observed.
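As a minimal sketch of the pattern this commit adopts -- not code from the WireGuard tree; the function name deliver_decrypted_skb and the generic dev->stats bookkeeping are illustrative stand-ins for packet_consume_data_done and rx_stats -- the contrast between the two delivery paths looks roughly like this:

    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    static void deliver_decrypted_skb(struct sk_buff *skb, struct net_device *dev)
    {
    	/* The skb is consumed by delivery either way, so record its
    	 * length before handing it off. */
    	unsigned int len = skb->len;

    	/* Before: netif_rx(skb) enqueues onto the per-cpu backlog and
    	 * returns NET_RX_DROP once that backlog is full, i.e. drops
    	 * happen after we have already paid for decryption. */

    	/* After: netif_receive_skb() runs the protocol layers
    	 * synchronously on this CPU and almost never fails; its cost is
    	 * paid here, slowing the dequeuing of the decryption queue so
    	 * that overload drops move to before decryption instead. */
    	if (likely(netif_receive_skb(skb) == NET_RX_SUCCESS)) {
    		dev->stats.rx_packets++;
    		dev->stats.rx_bytes += len;
    	} else {
    		++dev->stats.rx_dropped;
    	}
    }

Because netif_receive_skb() charges the delivery cost to the calling decryption worker, the backpressure the commit message describes falls out automatically: when delivery is slow, the upstream queue of still-encrypted packets fills and drops first.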
Diffstat (limited to 'src/receive.c')
-rw-r--r--  src/receive.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/receive.c b/src/receive.c
index a7f6004..efe53f4 100644
--- a/src/receive.c
+++ b/src/receive.c
@@ -299,7 +299,7 @@ void packet_consume_data_done(struct sk_buff *skb, struct wireguard_peer *peer,
goto dishonest_packet_peer;
len = skb->len;
- if (likely(netif_rx(skb) == NET_RX_SUCCESS))
+ if (likely(netif_receive_skb(skb) == NET_RX_SUCCESS))
rx_stats(peer, len);
else {
++dev->stats.rx_dropped;