author		Ming Lei <ming.lei@redhat.com>	2018-06-25 19:31:47 +0800
committer	Jens Axboe <axboe@kernel.dk>	2018-07-09 09:07:52 -0600
commit		5815839b3ca16bb1d45939270871169f6803a121 (patch)
tree		84b6f71475cc133ccd5e0ded503ee0ba0b4a46ad /include/linux/blk-mq.h
parent		blk-mq: don't pass **hctx to blk_mq_mark_tag_wait() (diff)
blk-mq: introduce new lock for protecting hctx->dispatch_wait
Now hctx->lock is only acquired when adding hctx->dispatch_wait to a wait queue, but it is not held when removing it from the wait queue. An IO hang can be observed easily if SCHED RESTART is disabled, which means RESTART now exists just to fix the issue in blk_mq_mark_tag_wait().

This patch fixes the issue by introducing hctx->dispatch_wait_lock and holding it while removing hctx->dispatch_wait in blk_mq_dispatch_wake(), since we need to avoid acquiring hctx->lock in irq context.

Fixes: eb619fdb2d4cb8b3d3419 ("blk-mq: fix issue with shared tag queue re-running")
Cc: Christoph Hellwig <hch@lst.de>
Cc: Omar Sandoval <osandov@fb.com>
Cc: Bart Van Assche <bart.vanassche@wdc.com>
Tested-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Diffstat (limited to 'include/linux/blk-mq.h')
-rw-r--r--	include/linux/blk-mq.h	1
1 file changed, 1 insertion, 0 deletions
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index e3147eb74222..ea690254dab7 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -39,6 +39,7 @@ struct blk_mq_hw_ctx {
 	struct blk_mq_ctx	**ctxs;
 	unsigned int		nr_ctx;
 
+	spinlock_t		dispatch_wait_lock;
 	wait_queue_entry_t	dispatch_wait;
 	atomic_t		wait_index;
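
The diffstat above is limited to the header, so only the new field is shown; per the commit message, the lock is taken in block/blk-mq.c when removing hctx->dispatch_wait in blk_mq_dispatch_wake(). The following is a rough sketch of that described usage, not the literal block/blk-mq.c hunks of this commit:

/*
 * Sketch based on the commit description: the wait entry is removed
 * under hctx->dispatch_wait_lock instead of hctx->lock, so the wakeup
 * path (which may run in irq context) never takes hctx->lock.
 */
static int blk_mq_dispatch_wake(wait_queue_entry_t *wait, unsigned mode,
				int flags, void *key)
{
	struct blk_mq_hw_ctx *hctx =
		container_of(wait, struct blk_mq_hw_ctx, dispatch_wait);

	spin_lock(&hctx->dispatch_wait_lock);
	list_del_init(&wait->entry);
	spin_unlock(&hctx->dispatch_wait_lock);

	blk_mq_run_hw_queue(hctx, true);
	return 1;
}

The plain spin_lock() is enough here on the assumption that dispatch_wait_lock is only ever taken nested inside an irq-disabled wait-queue lock (as on the blk_mq_mark_tag_wait() side), and the lock itself would be initialized with spin_lock_init(&hctx->dispatch_wait_lock) when the hardware context is set up.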