path: root/io_uring/kbuf.c
author      Pavel Begunkov <asml.silence@gmail.com>    2022-06-16 10:22:12 +0100
committer   Jens Axboe <axboe@kernel.dk>                2022-07-24 18:39:14 -0600
commit      9ca9fb24d5febccea354089c41f96a8ad0d853f8 (patch)
tree        1a08b01d113fce77e375769430ea06f40c2280d4 /io_uring/kbuf.c
parent      io_uring: propagate locking state to poll cancel (diff)
io_uring: mutex locked poll hashing
Currently we do two extra spin lock/unlock pairs to add a poll/apoll request to the cancellation hash table and to remove it from there.

On the submission side we often already hold ->uring_lock, and tw completion is likely to hold it as well. Add a second cancellation hash table protected by ->uring_lock. Because of latency concerns about needing the mutex locked on the completion side, use the new table only in the following cases:

1) IORING_SETUP_SINGLE_ISSUER: only one task grabs uring_lock, so there is little to no contention, and the main tw handler will almost always end up grabbing it before calling callbacks.

2) IORING_SETUP_SQPOLL: same as with single issuer, only one task is a major user of ->uring_lock.

3) apoll: we normally grab the lock on the completion side anyway to execute the request, so it's free.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/1bbad9c78c454b7b92f100bbf46730a37df7194f.1655371007.git.asml.silence@gmail.com
Reviewed-by: Hao Xu <howeyxu@tencent.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
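The sketch below is a minimal user-space illustration of the idea described in the commit message, not the kernel's io_uring code: keep two cancellation hash tables, one guarded by a spinlock and one guarded by the context mutex, and route a request to the mutex-protected table only when the completion side is expected to hold that mutex anyway. All names here (fake_ctx, fake_req, use_mutex_hash, the SETUP_* flags) are hypothetical stand-ins.

/*
 * Illustrative sketch only -- NOT the kernel implementation. It models the
 * decision of which cancellation hash table a poll request should go into.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

#define SETUP_SINGLE_ISSUER (1u << 0)   /* stand-in for IORING_SETUP_SINGLE_ISSUER */
#define SETUP_SQPOLL        (1u << 1)   /* stand-in for IORING_SETUP_SQPOLL */

struct fake_ctx {
	unsigned flags;
	pthread_spinlock_t hash_lock;   /* guards the classic, spinlock-protected table */
	pthread_mutex_t    uring_lock;  /* guards the second, mutex-protected table */
	/* the hash tables themselves are omitted for brevity */
};

struct fake_req {
	bool apoll;                     /* armed via async poll; completion runs under the lock */
};

/*
 * Mirror the commit's rule: the mutex-protected table is only worth using
 * when the completion side is likely to hold ->uring_lock anyway.
 */
static bool use_mutex_hash(const struct fake_ctx *ctx, const struct fake_req *req)
{
	if (req->apoll)
		return true;    /* case 3: the lock is taken to execute the request anyway */
	if (ctx->flags & SETUP_SINGLE_ISSUER)
		return true;    /* case 1: a single submitter, little to no contention */
	if (ctx->flags & SETUP_SQPOLL)
		return true;    /* case 2: only one task is a major user of the lock */
	return false;           /* otherwise stay with the spinlock-protected table */
}

static void hash_request(struct fake_ctx *ctx, struct fake_req *req)
{
	if (use_mutex_hash(ctx, req)) {
		pthread_mutex_lock(&ctx->uring_lock);
		/* ... insert into the mutex-protected table ... */
		pthread_mutex_unlock(&ctx->uring_lock);
	} else {
		pthread_spin_lock(&ctx->hash_lock);
		/* ... insert into the spinlock-protected table ... */
		pthread_spin_unlock(&ctx->hash_lock);
	}
}

int main(void)
{
	struct fake_ctx ctx = { .flags = SETUP_SINGLE_ISSUER };
	struct fake_req req = { .apoll = false };

	pthread_spin_init(&ctx.hash_lock, PTHREAD_PROCESS_PRIVATE);
	pthread_mutex_init(&ctx.uring_lock, NULL);

	hash_request(&ctx, &req);
	printf("uses mutex table: %s\n", use_mutex_hash(&ctx, &req) ? "yes" : "no");
	return 0;
}

The point of the predicate is that the second table only pays off when taking ->uring_lock on the completion path is effectively free; in all other configurations the spinlock-protected table remains the cheaper option.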
Diffstat (limited to 'io_uring/kbuf.c')
0 files changed, 0 insertions, 0 deletions