path: root/block/bio.c
author    Pavel Begunkov <asml.silence@gmail.com>  2022-11-02 15:18:20 +0000
committer Jens Axboe <axboe@kernel.dk>             2022-11-16 09:44:26 -0700
commit    759aa12f19155fe4e4fb4740450b4aa4233b7d9f (patch)
tree      f88d17edd9dfb7e74e988208e0c366093b71753f /block/bio.c
parent    mempool: introduce mempool_is_saturated (diff)
bio: don't rob starving biosets of bios
Biosets keep a mempool, so as long as requests complete we can always allocate and have forward progress. Percpu bio caches break that assumption, as we may complete into the cache of one CPU and later try, and fail, to allocate on another CPU. We also can't grab from another CPU's cache without tricky synchronisation.

If we're allocating a bio while the mempool is undersaturated, remove the REQ_ALLOC_CACHE flag, so on put the bio will go straight back to the mempool. It might free more requests into the mempool than strictly required, but assuming there is no memory starvation in the system it'll stabilise and never hit that path.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/aa150caf9c263fa92269e86d7826cc8fa65f38de.1667384020.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
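For context, the parent commit adds the mempool_is_saturated() helper that this patch keys off. A minimal sketch of the check, assuming mempool's existing curr_nr/min_nr bookkeeping from include/linux/mempool.h:

/*
 * A pool is "saturated" once it holds at least its reserved minimum of
 * elements; freeing more into it buys no additional forward progress.
 */
static inline bool mempool_is_saturated(mempool_t *pool)
{
	return READ_ONCE(pool->curr_nr) >= pool->min_nr;
}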
Diffstat (limited to 'block/bio.c')
-rw-r--r--  block/bio.c | 2 ++
1 file changed, 2 insertions(+), 0 deletions(-)
diff --git a/block/bio.c b/block/bio.c
index aa1de6a367f9..91f4a6926a1b 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -526,6 +526,8 @@ struct bio *bio_alloc_bioset(struct block_device *bdev, unsigned short nr_vecs,
}
if (unlikely(!p))
return NULL;
+ if (!mempool_is_saturated(&bs->bio_pool))
+ opf &= ~REQ_ALLOC_CACHE;
bio = p + bs->front_pad;
if (nr_vecs > BIO_INLINE_VECS) {
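Why clearing the flag at allocation time is sufficient: the put path branches on REQ_ALLOC_CACHE, so a bio without the flag bypasses the percpu cache and is freed back into bs->bio_pool. A simplified sketch of that branch (the real bio_put() also handles BIO_REFFED reference counting first):

void bio_put(struct bio *bio)
{
	/* ... BIO_REFFED refcount handling elided ... */
	if (bio->bi_opf & REQ_ALLOC_CACHE)
		bio_put_percpu_cache(bio);	/* completes into this CPU's cache */
	else
		bio_free(bio);			/* mempool_free() into bs->bio_pool */
}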