| author | 2022-09-22 11:41:51 -0600 |
|---|---|
| committer | 2022-09-30 07:49:11 -0600 |
| commit | 851eb780decb7180bcf09fad0035cba9aae669df (patch) |
| tree | d089afb600e24f9fdca9c0e2bbe2d2871b832b12 /scripts/generate_rust_analyzer.py |
| parent | nvme: split out metadata vs non metadata end_io uring_cmd completions (diff) |
nvme: enable batched completions of passthrough IO
Now that the normal passthrough end_io path doesn't need the request
anymore, we can kill the explicit blk_mq_free_request() and just pass
back RQ_END_IO_FREE instead. This lets the batched completion path free
requests in batches rather than one at a time.
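For illustration, here is a minimal sketch of the shape of this change in
an end_io handler. It is not the verbatim kernel diff: the handler names
are hypothetical and the elided bodies are marked with comments, but
enum rq_end_io_ret, RQ_END_IO_NONE/RQ_END_IO_FREE, and
blk_mq_free_request() are the real block-layer interfaces involved.

```c
#include <linux/blk-mq.h>

/* Before: the handler frees the request itself, so the completion
 * path has nothing left to batch.
 */
static enum rq_end_io_ret nvme_end_io_before(struct request *req,
					     blk_status_t err)
{
	/* ... record the result, punt completion to task work ... */
	blk_mq_free_request(req);
	return RQ_END_IO_NONE;
}

/* After: hand ownership back to the block layer, which can then free
 * completed requests in batches.
 */
static enum rq_end_io_ret nvme_end_io_after(struct request *req,
					    blk_status_t err)
{
	/* ... record the result, punt completion to task work ... */
	return RQ_END_IO_FREE;
}
```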
This brings passthrough IO performance at least on par with bdev-based
O_DIRECT with io_uring. Together with batched allocations, peak performance
goes from 110M IOPS to 122M IOPS. For IRQ-based IO, passthrough is now also
about 10% faster than before, going from ~61M to ~67M IOPS.
Reviewed-by: Anuj Gupta <anuj20.g@samsung.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Co-developed-by: Stefan Roesch <shr@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>