author      2025-08-08 06:35:14 -0600
committer   2025-08-08 06:35:14 -0600
commit      33503c083fda048c77903460ac0429e1e2c0e341 (patch)
tree        161b8c3a6de755f59ffb92b4120037d34ab8948e
parent      io_uring/net: Allow to do vectorized send (diff)
io_uring/memmap: cast nr_pages to size_t before shifting
If the allocated size exceeds UINT_MAX, then the mr->nr_pages value must
be cast to size_t before shifting to prevent the result from overflowing.
In practice this isn't much of a concern, as the required memory size
will have been validated upfront and accounted to the user, and sizes
above 4GB are needed for the missing cast to become a problem, which
greatly exceeds normal user locked_vm limits that are generally in the
kB to MB range. However, if root is used, no accounting is done, and it
becomes possible to hit this issue.
Link: https://lore.kernel.org/all/6895b298.050a0220.7f033.0059.GAE@google.com/
Cc: stable@vger.kernel.org
Reported-by: syzbot+23727438116feb13df15@syzkaller.appspotmail.com
Fixes: 087f997870a9 ("io_uring/memmap: implement mmap for regions")
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-rw-r--r-- | io_uring/memmap.c | 2
1 file changed, 1 insertion, 1 deletion
diff --git a/io_uring/memmap.c b/io_uring/memmap.c
index 725dc0bec24c..2e99dffddfc5 100644
--- a/io_uring/memmap.c
+++ b/io_uring/memmap.c
@@ -156,7 +156,7 @@ static int io_region_allocate_pages(struct io_ring_ctx *ctx,
 					    unsigned long mmap_offset)
 {
 	gfp_t gfp = GFP_KERNEL_ACCOUNT | __GFP_ZERO | __GFP_NOWARN;
-	unsigned long size = mr->nr_pages << PAGE_SHIFT;
+	size_t size = (size_t) mr->nr_pages << PAGE_SHIFT;
 	unsigned long nr_allocated;
 	struct page **pages;
 	void *p;
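
For illustration only, here is a minimal user-space C sketch (not kernel
code) of the truncation the one-line change above avoids. It assumes a
4 KiB page (PAGE_SHIFT of 12), a 32-bit unsigned int, a 64-bit size_t as
on x86-64, and a made-up nr_pages value describing an 8 GiB region.

#include <stdio.h>
#include <stddef.h>

#define PAGE_SHIFT 12	/* assumes 4 KiB pages */

int main(void)
{
	unsigned int nr_pages = 2 * 1024 * 1024;	/* 2M pages = 8 GiB */

	/* The shift is evaluated in 32-bit unsigned arithmetic, so the
	 * high bits are discarded before the value is widened for the
	 * assignment; the result here is 0. */
	size_t truncated = nr_pages << PAGE_SHIFT;

	/* Casting the operand first widens it to size_t, so the shift
	 * happens in 64-bit arithmetic and nothing is lost. */
	size_t correct = (size_t)nr_pages << PAGE_SHIFT;

	printf("without cast: %zu bytes\n", truncated);
	printf("with cast:    %zu bytes\n", correct);
	return 0;
}

The cast-before-shift pattern is what the patch applies to mr->nr_pages:
widening the left operand before the shift keeps the intermediate result
in size_t rather than unsigned int.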