path: root/fs/xfs/xfs_bmap_util.c
author    Linus Torvalds <torvalds@linux-foundation.org>  2019-12-27 11:17:08 -0800
committer Linus Torvalds <torvalds@linux-foundation.org>  2019-12-27 11:17:08 -0800
commit    534121d289e06827ee84c9262267ca5ebf1d9fd9 (patch)
tree      6bfe9ff157e94a3d244f3c2321507cff91aa6e39 /fs/xfs/xfs_bmap_util.c
parent    Merge tag 'libata-5.5-20191226' of git://git.kernel.dk/linux-block (diff)
parent    io-wq: add cond_resched() to worker thread (diff)
download  linux-dev-534121d289e06827ee84c9262267ca5ebf1d9fd9.tar.xz
          linux-dev-534121d289e06827ee84c9262267ca5ebf1d9fd9.zip
Merge tag 'io_uring-5.5-20191226' of git://git.kernel.dk/linux-block
Pull io_uring fixes from Jens Axboe:

 - Removal of now unused busy wqe list (Hillf)

 - Add cond_resched() to io-wq work processing (Hillf)

 - And then the series that I hinted at from last week, which removes
   the sqe from the io_kiocb and keeps all sqe handling on the prep
   side. This guarantees that an opcode can't do the wrong thing and
   read the sqe more than once. This is unchanged from last week, no
   issues have been observed with this in testing. Hence I really think
   we should fold this into 5.5.

* tag 'io_uring-5.5-20191226' of git://git.kernel.dk/linux-block:
  io-wq: add cond_resched() to worker thread
  io-wq: remove unused busy list from io_sqe
  io_uring: pass in 'sqe' to the prep handlers
  io_uring: standardize the prep methods
  io_uring: read 'count' for IORING_OP_TIMEOUT in prep handler
  io_uring: move all prep state for IORING_OP_{SEND,RECV}_MGS to prep handler
  io_uring: move all prep state for IORING_OP_CONNECT to prep handler
  io_uring: add and use struct io_rw for read/writes
  io_uring: use u64_to_user_ptr() consistently
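The pattern behind the series can be sketched as follows. This is a
hypothetical, heavily simplified illustration, not the kernel code: the
struct names mirror the commit titles (struct io_rw, a cut-down sqe),
but the real definitions live in fs/io_uring.c and
include/uapi/linux/io_uring.h and carry many more fields. The point is
only the shape of the fix: the prep handler reads everything it needs
from the sqe exactly once and stores it in per-request state, so the
issue path never touches the sqe again even if the submission-queue
slot has since been reused.

```c
#include <stdint.h>

/* Stand-in for struct io_uring_sqe (simplified, hypothetical). */
struct sqe {
	uint64_t addr;   /* user buffer address */
	uint32_t len;    /* buffer length */
	uint64_t off;    /* file offset */
};

/* Per-opcode state copied out at prep time, as in
 * "io_uring: add and use struct io_rw for read/writes". */
struct io_rw {
	uint64_t addr;
	uint32_t len;
	uint64_t pos;
};

/* Prep step: the only place the sqe is ever read. */
static void io_read_prep(const struct sqe *sqe, struct io_rw *rw)
{
	rw->addr = sqe->addr;
	rw->len  = sqe->len;
	rw->pos  = sqe->off;
}

/* Issue step: consults only the copied state; by now the sqe slot
 * may have been reused without affecting this request. */
static uint32_t io_read_issue(const struct io_rw *rw)
{
	return rw->len;  /* placeholder for the actual read */
}
```

Because the issue path has no pointer back to the sqe, an opcode
structurally cannot read it twice, which is the guarantee the pull
request describes.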
Diffstat (limited to 'fs/xfs/xfs_bmap_util.c')
0 files changed, 0 insertions, 0 deletions