author    Herbert Xu <herbert@gondor.apana.org.au>  2014-12-20 11:23:27 +1100
committer David S. Miller <davem@davemloft.net>     2014-12-22 16:10:12 -0500
commit    8acdf999accfd95093db17f33a58429a38782060 (patch)
tree      ffb5a2a9eac75039bd120628fed70ad8ad2b7c18 /drivers/net/virtio_net.c
parent    stmmac: Don't init ptp again when resume from suspend/hibernation (diff)
virtio_net: Fix napi poll list corruption
The commit d75b1ade567ffab085e8adbbdacf0092d10cd09c (net: less interrupt masking in NAPI) breaks virtio_net in an insidious way.

It is now required that if the entire budget is consumed when poll returns, the napi poll_list must remain empty. However, like some other drivers virtio_net tries to do a last-ditch check and if there is more work it will call napi_schedule and then immediately process some of this new work. Should the entire budget be consumed while processing such new work then we will violate the new caller contract.

This patch fixes this by not touching any work when we reschedule in virtio_net.

The worst part of this bug is that the list corruption causes other napi users to be moved off-list. In my case I was chasing a stall in IPsec (IPsec uses netif_rx) and I only belatedly realised that it was virtio_net which caused the stall even though the virtio_net poll was still functioning perfectly after IPsec stalled.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
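The caller contract introduced by d75b1ade567f can be summed up as: a poll handler that consumes its full budget must return without completing NAPI and without touching the poll_list; only when it finds less work than the budget may it complete and, at most, reschedule itself. The sketch below illustrates that pattern. It is illustrative only, not the driver's actual code: my_rx_queue, my_receive() and my_more_work_pending() are hypothetical stand-ins for the virtio_net equivalents.

/*
 * Sketch of a NAPI poll handler that honours the post-d75b1ade567f
 * contract.  my_rx_queue, my_receive() and my_more_work_pending() are
 * hypothetical stand-ins, not real virtio_net symbols.
 */
#include <linux/netdevice.h>

static int my_poll(struct napi_struct *napi, int budget)
{
	struct my_rx_queue *rq = container_of(napi, struct my_rx_queue, napi);
	int received;

	received = my_receive(rq, budget);

	/* Full budget consumed: return immediately, leaving NAPI state
	 * (and the softirq poll_list) strictly alone. */
	if (received >= budget)
		return budget;

	/* Out of packets: complete NAPI and re-enable the interrupt. */
	napi_complete(napi);

	/* Last-ditch race check.  If new work slipped in, only reschedule;
	 * do NOT loop back and process it here, or the budget could be
	 * exhausted after napi_complete() and the poll_list corrupted. */
	if (my_more_work_pending(rq) && napi_schedule_prep(napi))
		__napi_schedule(napi);

	return received;
}

After the fix below, virtnet_poll() follows the same shape: the goto again loop is gone, so any work discovered by the last-ditch check is simply left for the next softirq pass.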
Diffstat (limited to 'drivers/net/virtio_net.c')
-rw-r--r--  drivers/net/virtio_net.c | 2 --
1 file changed, 0 insertions(+), 2 deletions(-)
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index b8bd7191572d..5ca97713bfb3 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -760,7 +760,6 @@ static int virtnet_poll(struct napi_struct *napi, int budget)
 		container_of(napi, struct receive_queue, napi);
 	unsigned int r, received = 0;
 
-again:
 	received += virtnet_receive(rq, budget - received);
 
 	/* Out of packets? */
@@ -771,7 +770,6 @@ again:
 		if (unlikely(virtqueue_poll(rq->vq, r)) &&
 		    napi_schedule_prep(napi)) {
 			virtqueue_disable_cb(rq->vq);
 			__napi_schedule(napi);
-			goto again;
 		}
 	}