author     Hao Xu <haoxu@linux.alibaba.com>  2021-10-18 21:34:45 +0800
committer  Jens Axboe <axboe@kernel.dk>      2021-10-22 19:20:57 -0600
commit     90fa02883f063b971ebfd9f5b2184b38b83b7ee3 (patch)
tree       6c98bada53527473d9bc8d37ef1b938f2f471419 /fs
parent     io_uring: Use ERR_CAST() instead of ERR_PTR(PTR_ERR()) (diff)
download   linux-dev-90fa02883f063b971ebfd9f5b2184b38b83b7ee3.tar.xz
           linux-dev-90fa02883f063b971ebfd9f5b2184b38b83b7ee3.zip
io_uring: implement async hybrid mode for pollable requests
The current logic for requests with IOSQE_ASYNC is to queue them to an io-worker first and then execute them there in a synchronous way. For unbound work such as pollable requests (e.g. read/write on a socket fd), the io-worker may get stuck waiting for events for a long time, and other work then sits in the list for a long time too.

Introduce a new mode for unbound work (currently pollable requests): a request is still queued to an io-worker first, but the worker then does a nonblocking try instead of a synchronous one. If that fails, it arms the poll machinery for the request, after which the worker can move on to other work. In detail, such a request goes through:

step1: original context:
           queue it to io-worker
step2: io-worker context:
           nonblock try (the old logic is a synchronous try here)
               |
               |--fail--> arm poll
                            |
                            |--(fail/ready)--> synchronous issue
                            |
                            |--(succeed)--> worker finishes its job, tw
                                            takes over the req

This works much better than the old IOSQE_ASYNC logic in cases where the unbound max_worker limit is relatively small: the number of io-workers easily reaches max_worker, new workers cannot be created, and the running workers stay stuck handling old IOSQE_ASYNC work.

On a 64-core machine with unbound max_worker set to 20, an echo-server benchmark (registered files, 1000 connections, 12-byte messages) shows:

original IOSQE_ASYNC: 76664.151 tps
after this patch:     166934.985 tps

Suggested-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Hao Xu <haoxu@linux.alibaba.com>
Link: https://lore.kernel.org/r/20211018133445.103438-1-haoxu@linux.alibaba.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
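For context, this is the kind of request the patch targets. Below is a minimal userspace sketch using liburing (not part of this patch; the socket fd, buffer, and helper name are illustrative only) that forces a recv onto an io-worker with IOSQE_ASYNC. Before this change such a request would pin an io-worker until the socket became readable; with it, the worker does a nonblocking try and arms poll instead.

/* Hypothetical userspace example: recv on a socket with IOSQE_ASYNC set. */
#include <liburing.h>
#include <stddef.h>

static int recv_forced_async(int sockfd, void *buf, size_t len)
{
        struct io_uring ring;
        struct io_uring_sqe *sqe;
        struct io_uring_cqe *cqe;
        int ret;

        ret = io_uring_queue_init(8, &ring, 0);
        if (ret < 0)
                return ret;

        sqe = io_uring_get_sqe(&ring);
        io_uring_prep_recv(sqe, sockfd, buf, len, 0);
        /* Force the request to an io-worker; with this patch the worker
         * first tries nonblocking and arms poll rather than blocking. */
        io_uring_sqe_set_flags(sqe, IOSQE_ASYNC);

        io_uring_submit(&ring);
        ret = io_uring_wait_cqe(&ring, &cqe);
        if (ret == 0) {
                ret = cqe->res;
                io_uring_cqe_seen(&ring, cqe);
        }
        io_uring_queue_exit(&ring);
        return ret;
}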
Diffstat (limited to 'fs')
-rw-r--r--  fs/io_uring.c  |  36
1 file changed, 35 insertions(+), 1 deletion(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 88c5ee4dc242..736d456e7913 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -6739,8 +6739,18 @@ static void io_wq_submit_work(struct io_wq_work *work)
                 ret = -ECANCELED;
 
         if (!ret) {
+                bool needs_poll = false;
+                unsigned int issue_flags = IO_URING_F_UNLOCKED;
+
+                if (req->flags & REQ_F_FORCE_ASYNC) {
+                        needs_poll = req->file && file_can_poll(req->file);
+                        if (needs_poll)
+                                issue_flags |= IO_URING_F_NONBLOCK;
+                }
+
                 do {
-                        ret = io_issue_sqe(req, IO_URING_F_UNLOCKED);
+issue_sqe:
+                        ret = io_issue_sqe(req, issue_flags);
                         /*
                          * We can get EAGAIN for polled IO even though we're
                          * forcing a sync submission from here, since we can't
@@ -6748,6 +6758,30 @@ static void io_wq_submit_work(struct io_wq_work *work)
                          */
                         if (ret != -EAGAIN)
                                 break;
+                        if (needs_poll) {
+                                bool armed = false;
+
+                                ret = 0;
+                                needs_poll = false;
+                                issue_flags &= ~IO_URING_F_NONBLOCK;
+
+                                switch (io_arm_poll_handler(req)) {
+                                case IO_APOLL_READY:
+                                        goto issue_sqe;
+                                case IO_APOLL_ABORTED:
+                                        /*
+                                         * somehow we failed to arm the poll infra,
+                                         * fallback it to a normal async worker try.
+                                         */
+                                        break;
+                                case IO_APOLL_OK:
+                                        armed = true;
+                                        break;
+                                }
+
+                                if (armed)
+                                        break;
+                        }
                         cond_resched();
                 } while (1);
         }
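Taken together, the two hunks give io_wq_submit_work() roughly the shape below for forced-async requests. This is a simplified reading aid, not the literal kernel function: it assumes the surrounding fs/io_uring.c context (io_issue_sqe(), io_arm_poll_handler(), REQ_F_FORCE_ASYNC, and friends), the name io_wq_submit_work_hybrid is mine, and the cancel and failure paths are omitted.

/* Simplified sketch of the resulting worker-side flow (assumed kernel context). */
static void io_wq_submit_work_hybrid(struct io_kiocb *req)
{
        unsigned int issue_flags = IO_URING_F_UNLOCKED;
        bool needs_poll = false;
        int ret;

        /* Only forced-async requests on pollable files take the new path. */
        if (req->flags & REQ_F_FORCE_ASYNC)
                needs_poll = req->file && file_can_poll(req->file);
        if (needs_poll)
                issue_flags |= IO_URING_F_NONBLOCK;

        do {
issue_sqe:
                ret = io_issue_sqe(req, issue_flags);
                if (ret != -EAGAIN)
                        break;

                if (needs_poll) {
                        /* The nonblocking try is attempted only once. */
                        needs_poll = false;
                        issue_flags &= ~IO_URING_F_NONBLOCK;

                        switch (io_arm_poll_handler(req)) {
                        case IO_APOLL_READY:
                                goto issue_sqe;   /* became ready, retry right away */
                        case IO_APOLL_OK:
                                return;           /* poll armed; task_work finishes the req */
                        case IO_APOLL_ABORTED:
                                break;            /* fall back to a synchronous issue */
                        }
                }
                cond_resched();
        } while (1);
}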