From 651438bb0af5213f1f70d66e75bf11d08cb5537a Mon Sep 17 00:00:00 2001 From: Wen Xiong Date: Thu, 15 Feb 2018 14:05:10 -0600 Subject: nvme-pci: Fix EEH failure on ppc Triggering PPC EEH detection and handling requires a memory mapped read failure. The NVMe driver removed the periodic health check MMIO, so there's no early detection mechanism to trigger the recovery. Instead, the detection now happens when the nvme driver handles an IO timeout event. This takes the pci channel offline, so we do not want the driver to proceed with escalating its own recovery efforts that may conflict with the EEH handler. This patch ensures the driver will observe the channel was set to offline after a failed MMIO read and resets the IO timer so the EEH handler has a chance to recover the device. Signed-off-by: Wen Xiong [updated change log] Signed-off-by: Keith Busch --- drivers/nvme/host/pci.c | 13 +++++++------ 1 file changed, 7 insertions(+), 6 deletions(-) (limited to 'drivers') diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c index 5933a5c732e8..e5ce07f4966f 100644 --- a/drivers/nvme/host/pci.c +++ b/drivers/nvme/host/pci.c @@ -1153,12 +1153,6 @@ static bool nvme_should_reset(struct nvme_dev *dev, u32 csts) if (!(csts & NVME_CSTS_CFS) && !nssro) return false; - /* If PCI error recovery process is happening, we cannot reset or * the recovery mechanism will surely fail. */ - if (pci_channel_offline(to_pci_dev(dev->dev))) - return false; - return true; } @@ -1189,6 +1183,13 @@ static enum blk_eh_timer_return nvme_timeout(struct request *req, bool reserved) struct nvme_command cmd; u32 csts = readl(dev->bar + NVME_REG_CSTS); + /* If PCI error recovery process is happening, we cannot reset or * the recovery mechanism will surely fail. */ + mb(); + if (pci_channel_offline(to_pci_dev(dev->dev))) + return BLK_EH_RESET_TIMER; + /* * Reset immediately if the controller is failed */ -- cgit v1.2.3-59-g8ed1b From 16ccfff2897613007b5eda9e29d65303c6280026 Mon Sep 17 00:00:00 2001 From: Ming Lei Date: Tue, 6 Feb 2018 20:17:42 +0800 Subject: nvme: pci: pass max vectors as num_possible_cpus() to pci_alloc_irq_vectors 84676c1f21 ("genirq/affinity: assign vectors to all possible CPUs") has switched to spreading irq vectors among all possible CPUs, so pass num_possible_cpus() as max vectors to be assigned. For example, in an 8-core system, 0~3 online, 4~7 offline/not present, see 'lscpu': [ming@box]$lscpu Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 4 On-line CPU(s) list: 0-3 Thread(s) per core: 1 Core(s) per socket: 2 Socket(s): 2 NUMA node(s): 2 ... NUMA node0 CPU(s): 0-3 NUMA node1 CPU(s): ...
1) before this patch, follows the allocated vectors and their affinity: irq 47, cpu list 0,4 irq 48, cpu list 1,6 irq 49, cpu list 2,5 irq 50, cpu list 3,7 2) after this patch, follows the allocated vectors and their affinity: irq 43, cpu list 0 irq 44, cpu list 1 irq 45, cpu list 2 irq 46, cpu list 3 irq 47, cpu list 4 irq 48, cpu list 6 irq 49, cpu list 5 irq 50, cpu list 7 Cc: Keith Busch Cc: Sagi Grimberg Cc: Thomas Gleixner Cc: Christoph Hellwig Signed-off-by: Ming Lei Reviewed-by: Christoph Hellwig Signed-off-by: Keith Busch --- drivers/nvme/host/pci.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) (limited to 'drivers') diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c index e5ce07f4966f..b6f43b738f03 100644 --- a/drivers/nvme/host/pci.c +++ b/drivers/nvme/host/pci.c @@ -1914,7 +1914,7 @@ static int nvme_setup_io_queues(struct nvme_dev *dev) int result, nr_io_queues; unsigned long size; - nr_io_queues = num_present_cpus(); + nr_io_queues = num_possible_cpus(); result = nvme_set_queue_count(&dev->ctrl, &nr_io_queues); if (result < 0) return result; -- cgit v1.2.3-59-g8ed1b From cc40daf91bdddbba72a4a8cd0860640e06668309 Mon Sep 17 00:00:00 2001 From: Tang Junhui Date: Mon, 5 Mar 2018 13:41:54 -0800 Subject: bcache: fix crashes in duplicate cache device register Kernel crashed when register a duplicate cache device, the call trace is bellow: [ 417.643790] CPU: 1 PID: 16886 Comm: bcache-register Tainted: G W OE 4.15.5-amd64-preempt-sysrq-20171018 #2 [ 417.643861] Hardware name: LENOVO 20ERCTO1WW/20ERCTO1WW, BIOS N1DET41W (1.15 ) 12/31/2015 [ 417.643870] RIP: 0010:bdevname+0x13/0x1e [ 417.643876] RSP: 0018:ffffa3aa9138fd38 EFLAGS: 00010282 [ 417.643884] RAX: 0000000000000000 RBX: ffff8c8f2f2f8000 RCX: ffffd6701f8 c7edf [ 417.643890] RDX: ffffa3aa9138fd88 RSI: ffffa3aa9138fd88 RDI: 00000000000 00000 [ 417.643895] RBP: ffffa3aa9138fde0 R08: ffffa3aa9138fae8 R09: 00000000000 1850e [ 417.643901] R10: ffff8c8eed34b271 R11: ffff8c8eed34b250 R12: 00000000000 00000 [ 417.643906] R13: ffffd6701f78f940 R14: ffff8c8f38f80000 R15: ffff8c8ea7d 90000 [ 417.643913] FS: 00007fde7e66f500(0000) GS:ffff8c8f61440000(0000) knlGS: 0000000000000000 [ 417.643919] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 417.643925] CR2: 0000000000000314 CR3: 00000007e6fa0001 CR4: 00000000003 606e0 [ 417.643931] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 00000000000 00000 [ 417.643938] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 00000000000 00400 [ 417.643946] Call Trace: [ 417.643978] register_bcache+0x1117/0x1270 [bcache] [ 417.643994] ? slab_pre_alloc_hook+0x15/0x3c [ 417.644001] ? slab_post_alloc_hook.isra.44+0xa/0x1a [ 417.644013] ? kernfs_fop_write+0xf6/0x138 [ 417.644020] kernfs_fop_write+0xf6/0x138 [ 417.644031] __vfs_write+0x31/0xcc [ 417.644043] ? current_kernel_time64+0x10/0x36 [ 417.644115] ? 
__audit_syscall_entry+0xbf/0xe3 [ 417.644124] vfs_write+0xa5/0xe2 [ 417.644133] SyS_write+0x5c/0x9f [ 417.644144] do_syscall_64+0x72/0x81 [ 417.644161] entry_SYSCALL_64_after_hwframe+0x3d/0xa2 [ 417.644169] RIP: 0033:0x7fde7e1c1974 [ 417.644175] RSP: 002b:00007fff13009a38 EFLAGS: 00000246 ORIG_RAX: 0000000 000000001 [ 417.644183] RAX: ffffffffffffffda RBX: 0000000001658280 RCX: 00007fde7e1c 1974 [ 417.644188] RDX: 000000000000000a RSI: 0000000001658280 RDI: 000000000000 0001 [ 417.644193] RBP: 000000000000000a R08: 0000000000000003 R09: 000000000000 0077 [ 417.644198] R10: 000000000000089e R11: 0000000000000246 R12: 000000000000 0001 [ 417.644203] R13: 000000000000000a R14: 7fffffffffffffff R15: 000000000000 0000 [ 417.644213] Code: c7 c2 83 6f ee 98 be 20 00 00 00 48 89 df e8 6c 27 3b 0 0 48 89 d8 5b c3 0f 1f 44 00 00 48 8b 47 70 48 89 f2 48 8b bf 80 00 00 00 <8 b> b0 14 03 00 00 e9 73 ff ff ff 0f 1f 44 00 00 48 8b 47 40 39 [ 417.644302] RIP: bdevname+0x13/0x1e RSP: ffffa3aa9138fd38 [ 417.644306] CR2: 0000000000000314 When registering duplicate cache device in register_cache(), after failure on calling register_cache_set(), bch_cache_release() will be called, then bdev will be freed, so bdevname(bdev, name) caused kernel crash. Since bch_cache_release() will free bdev, so in this patch we make sure bdev being freed if register_cache() fail, and do not free bdev again in register_bcache() when register_cache() fail. Signed-off-by: Tang Junhui Reported-by: Marc MERLIN Tested-by: Michael Lyle Reviewed-by: Michael Lyle Cc: Signed-off-by: Jens Axboe --- drivers/md/bcache/super.c | 16 ++++++++++------ 1 file changed, 10 insertions(+), 6 deletions(-) (limited to 'drivers') diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c index 4d1d8dfb2d2a..58d8998529de 100644 --- a/drivers/md/bcache/super.c +++ b/drivers/md/bcache/super.c @@ -1204,7 +1204,7 @@ static void register_bdev(struct cache_sb *sb, struct page *sb_page, return; err: - pr_notice("error opening %s: %s", bdevname(bdev, name), err); + pr_notice("error %s: %s", bdevname(bdev, name), err); bcache_device_stop(&dc->disk); } @@ -1883,6 +1883,8 @@ static int register_cache(struct cache_sb *sb, struct page *sb_page, const char *err = NULL; /* must be set for any error case */ int ret = 0; + bdevname(bdev, name); + memcpy(&ca->sb, sb, sizeof(struct cache_sb)); ca->bdev = bdev; ca->bdev->bd_holder = ca; @@ -1891,11 +1893,12 @@ static int register_cache(struct cache_sb *sb, struct page *sb_page, bio_first_bvec_all(&ca->sb_bio)->bv_page = sb_page; get_page(sb_page); - if (blk_queue_discard(bdev_get_queue(ca->bdev))) + if (blk_queue_discard(bdev_get_queue(bdev))) ca->discard = CACHE_DISCARD(&ca->sb); ret = cache_alloc(ca); if (ret != 0) { + blkdev_put(bdev, FMODE_READ|FMODE_WRITE|FMODE_EXCL); if (ret == -ENOMEM) err = "cache_alloc(): -ENOMEM"; else @@ -1918,14 +1921,14 @@ static int register_cache(struct cache_sb *sb, struct page *sb_page, goto out; } - pr_info("registered cache device %s", bdevname(bdev, name)); + pr_info("registered cache device %s", name); out: kobject_put(&ca->kobj); err: if (err) - pr_notice("error opening %s: %s", bdevname(bdev, name), err); + pr_notice("error %s: %s", name, err); return ret; } @@ -2014,6 +2017,7 @@ static ssize_t register_bcache(struct kobject *k, struct kobj_attribute *attr, if (err) goto err_close; + err = "failed to register device"; if (SB_IS_BDEV(sb)) { struct cached_dev *dc = kzalloc(sizeof(*dc), GFP_KERNEL); if (!dc) @@ -2028,7 +2032,7 @@ static ssize_t register_bcache(struct kobject *k, 
struct kobj_attribute *attr, goto err_close; if (register_cache(sb, sb_page, bdev, ca) != 0) - goto err_close; + goto err; } out: if (sb_page) @@ -2041,7 +2045,7 @@ out: err_close: blkdev_put(bdev, FMODE_READ|FMODE_WRITE|FMODE_EXCL); err: - pr_info("error opening %s: %s", path, err); + pr_info("error %s: %s", path, err); ret = -EINVAL; goto out; } -- cgit v1.2.3-59-g8ed1b From 86755b7a96faed57f910f9e6b8061e019ac1ec08 Mon Sep 17 00:00:00 2001 From: Michael Lyle Date: Mon, 5 Mar 2018 13:41:55 -0800 Subject: bcache: don't attach backing with duplicate UUID This can happen e.g. during disk cloning. This is an incomplete fix: it does not catch duplicate UUIDs earlier when things are still unattached. It does not unregister the device. Further changes to cope better with this are planned but conflict with Coly's ongoing improvements to handling device errors. In the meantime, one can manually stop the device after this has happened. Attempts to attach a duplicate device result in: [ 136.372404] loop: module loaded [ 136.424461] bcache: register_bdev() registered backing device loop0 [ 136.424464] bcache: bch_cached_dev_attach() Tried to attach loop0 but duplicate UUID already attached My test procedure is: dd if=/dev/sdb1 of=imgfile bs=1024 count=262144 losetup -f imgfile Signed-off-by: Michael Lyle Reviewed-by: Tang Junhui Cc: Signed-off-by: Jens Axboe --- drivers/md/bcache/super.c | 11 +++++++++++ 1 file changed, 11 insertions(+) (limited to 'drivers') diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c index 58d8998529de..f2273143b3cb 100644 --- a/drivers/md/bcache/super.c +++ b/drivers/md/bcache/super.c @@ -963,6 +963,7 @@ int bch_cached_dev_attach(struct cached_dev *dc, struct cache_set *c, uint32_t rtime = cpu_to_le32(get_seconds()); struct uuid_entry *u; char buf[BDEVNAME_SIZE]; + struct cached_dev *exist_dc, *t; bdevname(dc->bdev, buf); @@ -987,6 +988,16 @@ int bch_cached_dev_attach(struct cached_dev *dc, struct cache_set *c, return -EINVAL; } + /* Check whether already attached */ + list_for_each_entry_safe(exist_dc, t, &c->cached_devs, list) { + if (!memcmp(dc->sb.uuid, exist_dc->sb.uuid, 16)) { + pr_err("Tried to attach %s but duplicate UUID already attached", + buf); + + return -EINVAL; + } + } + u = uuid_find(c, dc->sb.uuid); if (u && -- cgit v1.2.3-59-g8ed1b From 8a30ecc6e0ecbb9ae95daf499b2680b885ed0349 Mon Sep 17 00:00:00 2001 From: Christoph Hellwig Date: Wed, 7 Mar 2018 14:13:58 +0100 Subject: Revert "nvme: create 'slaves' and 'holders' entries for hidden controllers" This reverts commit e9a48034d7d1318ece7d4a235838a86c94db9d68. The slaves and holders link for the hidden gendisks confuse lsblk so that it errors out on, or doesn't report the nvme multipath devices. Given that we don't need holder relationships for something that can't even be directly accessed we should just stop creating those links. 
Signed-off-by: Christoph Hellwig Reported-by: Potnuri Bharat Teja Cc: stable@vger.kernel.org Signed-off-by: Keith Busch --- drivers/nvme/host/core.c | 2 -- drivers/nvme/host/multipath.c | 30 ------------------------------ drivers/nvme/host/nvme.h | 8 -------- 3 files changed, 40 deletions(-) (limited to 'drivers') diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c index 817e5e2766da..7aeca5db7916 100644 --- a/drivers/nvme/host/core.c +++ b/drivers/nvme/host/core.c @@ -3033,7 +3033,6 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid) ns->disk->disk_name); nvme_mpath_add_disk(ns->head); - nvme_mpath_add_disk_links(ns); return; out_unlink_ns: mutex_lock(&ctrl->subsys->lock); @@ -3053,7 +3052,6 @@ static void nvme_ns_remove(struct nvme_ns *ns) return; if (ns->disk && ns->disk->flags & GENHD_FL_UP) { - nvme_mpath_remove_disk_links(ns); sysfs_remove_group(&disk_to_dev(ns->disk)->kobj, &nvme_ns_id_attr_group); if (ns->ndev) diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c index b7e5c6db4d92..060f69e03427 100644 --- a/drivers/nvme/host/multipath.c +++ b/drivers/nvme/host/multipath.c @@ -210,25 +210,6 @@ void nvme_mpath_add_disk(struct nvme_ns_head *head) mutex_unlock(&head->subsys->lock); } -void nvme_mpath_add_disk_links(struct nvme_ns *ns) -{ - struct kobject *slave_disk_kobj, *holder_disk_kobj; - - if (!ns->head->disk) - return; - - slave_disk_kobj = &disk_to_dev(ns->disk)->kobj; - if (sysfs_create_link(ns->head->disk->slave_dir, slave_disk_kobj, - kobject_name(slave_disk_kobj))) - return; - - holder_disk_kobj = &disk_to_dev(ns->head->disk)->kobj; - if (sysfs_create_link(ns->disk->part0.holder_dir, holder_disk_kobj, - kobject_name(holder_disk_kobj))) - sysfs_remove_link(ns->head->disk->slave_dir, - kobject_name(slave_disk_kobj)); -} - void nvme_mpath_remove_disk(struct nvme_ns_head *head) { if (!head->disk) @@ -243,14 +224,3 @@ void nvme_mpath_remove_disk(struct nvme_ns_head *head) blk_cleanup_queue(head->disk->queue); put_disk(head->disk); } - -void nvme_mpath_remove_disk_links(struct nvme_ns *ns) -{ - if (!ns->head->disk) - return; - - sysfs_remove_link(ns->disk->part0.holder_dir, - kobject_name(&disk_to_dev(ns->head->disk)->kobj)); - sysfs_remove_link(ns->head->disk->slave_dir, - kobject_name(&disk_to_dev(ns->disk)->kobj)); -} diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h index 0521e4707d1c..d733b14ede9d 100644 --- a/drivers/nvme/host/nvme.h +++ b/drivers/nvme/host/nvme.h @@ -410,9 +410,7 @@ bool nvme_req_needs_failover(struct request *req, blk_status_t error); void nvme_kick_requeue_lists(struct nvme_ctrl *ctrl); int nvme_mpath_alloc_disk(struct nvme_ctrl *ctrl,struct nvme_ns_head *head); void nvme_mpath_add_disk(struct nvme_ns_head *head); -void nvme_mpath_add_disk_links(struct nvme_ns *ns); void nvme_mpath_remove_disk(struct nvme_ns_head *head); -void nvme_mpath_remove_disk_links(struct nvme_ns *ns); static inline void nvme_mpath_clear_current_path(struct nvme_ns *ns) { @@ -454,12 +452,6 @@ static inline void nvme_mpath_add_disk(struct nvme_ns_head *head) static inline void nvme_mpath_remove_disk(struct nvme_ns_head *head) { } -static inline void nvme_mpath_add_disk_links(struct nvme_ns *ns) -{ -} -static inline void nvme_mpath_remove_disk_links(struct nvme_ns *ns) -{ -} static inline void nvme_mpath_clear_current_path(struct nvme_ns *ns) { } -- cgit v1.2.3-59-g8ed1b From 7ed8ce1c5fc7cf25b3602c73bef897a3466a6645 Mon Sep 17 00:00:00 2001 From: Bhavesh Davda Date: Fri, 22 Dec 2017 14:17:13 -0800 Subject: 
xen-blkfront: move negotiate_mq to cover all cases of new VBDs negotiate_mq should happen in all cases of a new VBD being discovered by xen-blkfront, whether called through _probe() or a hot-attached new VBD from dom-0 via xenstore. Otherwise, hot-attached new VBDs are left configured without multi-queue. Signed-off-by: Bhavesh Davda Reviewed-by: Konrad Rzeszutek Wilk Signed-off-by: Konrad Rzeszutek Wilk --- drivers/block/xen-blkfront.c | 17 ++++++++--------- 1 file changed, 8 insertions(+), 9 deletions(-) (limited to 'drivers') diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c index 891265acb10e..7d23225f79ed 100644 --- a/drivers/block/xen-blkfront.c +++ b/drivers/block/xen-blkfront.c @@ -262,6 +262,7 @@ static DEFINE_SPINLOCK(minor_lock); static int blkfront_setup_indirect(struct blkfront_ring_info *rinfo); static void blkfront_gather_backend_features(struct blkfront_info *info); +static int negotiate_mq(struct blkfront_info *info); static int get_id_from_freelist(struct blkfront_ring_info *rinfo) { @@ -1774,11 +1775,18 @@ static int talk_to_blkback(struct xenbus_device *dev, unsigned int i, max_page_order; unsigned int ring_page_order; + if (!info) + return -ENODEV; + max_page_order = xenbus_read_unsigned(info->xbdev->otherend, "max-ring-page-order", 0); ring_page_order = min(xen_blkif_max_ring_order, max_page_order); info->nr_ring_pages = 1 << ring_page_order; + err = negotiate_mq(info); + if (err) + goto destroy_blkring; + for (i = 0; i < info->nr_rings; i++) { struct blkfront_ring_info *rinfo = &info->rinfo[i]; @@ -1978,11 +1986,6 @@ static int blkfront_probe(struct xenbus_device *dev, } info->xbdev = dev; - err = negotiate_mq(info); - if (err) { - kfree(info); - return err; - } mutex_init(&info->mutex); info->vdevice = vdevice; @@ -2099,10 +2102,6 @@ static int blkfront_resume(struct xenbus_device *dev) blkif_free(info, info->connected == BLKIF_STATE_CONNECTED); - err = negotiate_mq(info); - if (err) - return err; - err = talk_to_blkback(dev, info); if (!err) blk_mq_update_nr_hw_queues(&info->tag_set, info->nr_rings); -- cgit v1.2.3-59-g8ed1b From 0475821e229cfd9954b7501113d1acbc57b68689 Mon Sep 17 00:00:00 2001 From: Roland Dreier Date: Mon, 5 Mar 2018 11:59:53 -0800 Subject: nvme-fabrics: Ignore nr_io_queues option for discovery controllers This removes a dependency on the order options are passed when creating a fabrics controller. With the old code, if "nr_io_queues" appears before an "nqn" option specifying the discovery controller, then nr_io_queues is overridden with zero. If "nr_io_queues" appears after specifying the discovery controller, then the nr_io_queues option is used to set the number of queues, and the driver attempts to establish IO connections to the discovery controller (which doesn't work). It seems better to ignore (and warn about) the "nr_io_queues" option if userspace has already asked to connect to the discovery controller. 
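A rough sketch of how the added guard sits inside the option parser (the integer parsing above the inserted lines is abbreviated and assumed here; only the field names and the debug message come from the hunk below):

	case NVMF_OPT_NR_IO_QUEUES:
		/* ... match_int() parsing and range checks elided ... */
		if (opts->discovery_nqn) {
			/* nqn= already selected the discovery controller; drop the value */
			pr_debug("Ignoring nr_io_queues value for discovery controller\n");
			break;
		}
		opts->nr_io_queues = min_t(unsigned int, num_online_cpus(), token);
		break;

Since opts->discovery_nqn is only set once the nqn option has been parsed, this covers the case where nr_io_queues comes after the discovery NQN; the opposite order was already handled because parsing the discovery NQN overrides nr_io_queues with zero.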
Signed-off-by: Roland Dreier Reviewed-by: James Smart Reviewed-by: Christoph Hellwig Signed-off-by: Keith Busch --- drivers/nvme/host/fabrics.c | 5 +++++ 1 file changed, 5 insertions(+) (limited to 'drivers') diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c index a1c58e35075e..8f0f34d06d46 100644 --- a/drivers/nvme/host/fabrics.c +++ b/drivers/nvme/host/fabrics.c @@ -650,6 +650,11 @@ static int nvmf_parse_options(struct nvmf_ctrl_options *opts, ret = -EINVAL; goto out; } + if (opts->discovery_nqn) { + pr_debug("Ignoring nr_io_queues value for discovery controller\n"); + break; + } + opts->nr_io_queues = min_t(unsigned int, num_online_cpus(), token); break; -- cgit v1.2.3-59-g8ed1b From d157e5343cb360a525e6b3e7924612a9a90df3b7 Mon Sep 17 00:00:00 2001 From: James Smart Date: Wed, 7 Mar 2018 15:59:36 -0800 Subject: nvme_fc: rework sqsize handling Corrected four outstanding issues in the transport around sqsize. 1: Create Connection LS is sending the 1's-based sqsize, should be sending the 0's-based value. 2: allocation of hw queue is using the 0's-based size. It should be using the 1's-based value. 3: normalization of ctrl.sqsize by MQES is using MQES+1 (1's-based value). It should be MQES (0's-based value). 4: Missing clause to ensure queue_count not larger than ctrl->sqsize. Corrected by: Clean up routines that pass queue size around. The queue size value is the actual count (1's-based) value and determined from ctrl->sqsize + 1. Routines that send 0's-based value adapt from queue size. Set ctrl->sqsize properly for MQES. Added clause to ensure queue_count not larger than ctrl->sqsize + 1. Signed-off-by: James Smart Reviewed-by: Sagi Grimberg Signed-off-by: Keith Busch --- drivers/nvme/host/fc.c | 27 +++++++++++++++++---------- 1 file changed, 17 insertions(+), 10 deletions(-) (limited to 'drivers') diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c index 7f51f8414b97..1dc1387b7134 100644 --- a/drivers/nvme/host/fc.c +++ b/drivers/nvme/host/fc.c @@ -1206,7 +1206,7 @@ nvme_fc_connect_admin_queue(struct nvme_fc_ctrl *ctrl, sizeof(struct fcnvme_lsdesc_cr_assoc_cmd)); assoc_rqst->assoc_cmd.ersp_ratio = cpu_to_be16(ersp_ratio); - assoc_rqst->assoc_cmd.sqsize = cpu_to_be16(qsize); + assoc_rqst->assoc_cmd.sqsize = cpu_to_be16(qsize - 1); /* Linux supports only Dynamic controllers */ assoc_rqst->assoc_cmd.cntlid = cpu_to_be16(0xffff); uuid_copy(&assoc_rqst->assoc_cmd.hostid, &ctrl->ctrl.opts->host->id); @@ -1321,7 +1321,7 @@ nvme_fc_connect_queue(struct nvme_fc_ctrl *ctrl, struct nvme_fc_queue *queue, sizeof(struct fcnvme_lsdesc_cr_conn_cmd)); conn_rqst->connect_cmd.ersp_ratio = cpu_to_be16(ersp_ratio); conn_rqst->connect_cmd.qid = cpu_to_be16(queue->qnum); - conn_rqst->connect_cmd.sqsize = cpu_to_be16(qsize); + conn_rqst->connect_cmd.sqsize = cpu_to_be16(qsize - 1); lsop->queue = queue; lsreq->rqstaddr = conn_rqst; @@ -2481,11 +2481,11 @@ nvme_fc_create_io_queues(struct nvme_fc_ctrl *ctrl) goto out_free_tag_set; } - ret = nvme_fc_create_hw_io_queues(ctrl, ctrl->ctrl.opts->queue_size); + ret = nvme_fc_create_hw_io_queues(ctrl, ctrl->ctrl.sqsize + 1); if (ret) goto out_cleanup_blk_queue; - ret = nvme_fc_connect_io_queues(ctrl, ctrl->ctrl.opts->queue_size); + ret = nvme_fc_connect_io_queues(ctrl, ctrl->ctrl.sqsize + 1); if (ret) goto out_delete_hw_queues; @@ -2532,11 +2532,11 @@ nvme_fc_reinit_io_queues(struct nvme_fc_ctrl *ctrl) if (ret) goto out_free_io_queues; - ret = nvme_fc_create_hw_io_queues(ctrl, ctrl->ctrl.opts->queue_size); + ret =
nvme_fc_create_hw_io_queues(ctrl, ctrl->ctrl.sqsize + 1); if (ret) goto out_free_io_queues; - ret = nvme_fc_connect_io_queues(ctrl, ctrl->ctrl.opts->queue_size); + ret = nvme_fc_connect_io_queues(ctrl, ctrl->ctrl.sqsize + 1); if (ret) goto out_delete_hw_queues; @@ -2632,13 +2632,12 @@ nvme_fc_create_association(struct nvme_fc_ctrl *ctrl) nvme_fc_init_queue(ctrl, 0); ret = __nvme_fc_create_hw_queue(ctrl, &ctrl->queues[0], 0, - NVME_AQ_BLK_MQ_DEPTH); + NVME_AQ_DEPTH); if (ret) goto out_free_queue; ret = nvme_fc_connect_admin_queue(ctrl, &ctrl->queues[0], - NVME_AQ_BLK_MQ_DEPTH, - (NVME_AQ_BLK_MQ_DEPTH / 4)); + NVME_AQ_DEPTH, (NVME_AQ_DEPTH / 4)); if (ret) goto out_delete_hw_queue; @@ -2666,7 +2665,7 @@ nvme_fc_create_association(struct nvme_fc_ctrl *ctrl) } ctrl->ctrl.sqsize = - min_t(int, NVME_CAP_MQES(ctrl->ctrl.cap) + 1, ctrl->ctrl.sqsize); + min_t(int, NVME_CAP_MQES(ctrl->ctrl.cap), ctrl->ctrl.sqsize); ret = nvme_enable_ctrl(&ctrl->ctrl, ctrl->ctrl.cap); if (ret) @@ -2699,6 +2698,14 @@ nvme_fc_create_association(struct nvme_fc_ctrl *ctrl) opts->queue_size = ctrl->ctrl.maxcmd; } + if (opts->queue_size > ctrl->ctrl.sqsize + 1) { + /* warn if sqsize is lower than queue_size */ + dev_warn(ctrl->ctrl.device, + "queue_size %zu > ctrl sqsize %u, clamping down\n", + opts->queue_size, ctrl->ctrl.sqsize + 1); + opts->queue_size = ctrl->ctrl.sqsize + 1; + } + ret = nvme_fc_init_aen_ops(ctrl); if (ret) goto out_term_aen_ops; -- cgit v1.2.3-59-g8ed1b From 1d037577c323e5090ce281e96bc313ab2eee5be2 Mon Sep 17 00:00:00 2001 From: Ross Zwisler Date: Fri, 9 Mar 2018 08:36:36 -0700 Subject: loop: Fix lost writes caused by missing flag The following commit: commit aa4d86163e4e ("block: loop: switch to VFS ITER_BVEC") replaced __do_lo_send_write(), which used ITER_KVEC iterators, with lo_write_bvec() which uses ITER_BVEC iterators. In this change, though, the WRITE flag was lost: - iov_iter_kvec(&from, ITER_KVEC | WRITE, &kvec, 1, len); + iov_iter_bvec(&i, ITER_BVEC, bvec, 1, bvec->bv_len); This flag is necessary for the DAX case because we make decisions based on whether or not the iterator is a READ or a WRITE in dax_iomap_actor() and in dax_iomap_rw(). We end up going through this path in configurations where we combine a PMEM device with 4k sectors, a loopback device and DAX. The consequence of this missed flag is that what we intend as a write actually turns into a read in the DAX code, so no data is ever written. The very simplest test case is to create a loopback device and try and write a small string to it, then hexdump a few bytes of the device to see if the write took. Without this patch you read back all zeros, with this you read back the string you wrote. For XFS this causes us to fail or panic during the following xfstests: xfs/074 xfs/078 xfs/216 xfs/217 xfs/250 For ext4 we have a similar issue where writes never happen, but we don't currently have any xfstests that use loopback and show this issue. Fix this by restoring the WRITE flag argument to iov_iter_bvec(). This causes the xfstests to all pass. 
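To make the direction issue concrete, an illustrative fragment (not from the patch; only the call shape matches lo_write_bvec() in the hunk below): the iov_iter type flags carry the data direction, READ is zero, and the DAX paths named above branch on iov_iter_rw(), so leaving WRITE out silently selects the read direction.

	#include <linux/bvec.h>
	#include <linux/uio.h>

	static void lo_direction_demo(struct bio_vec *bvec)
	{
		struct iov_iter i;

		iov_iter_bvec(&i, ITER_BVEC, bvec, 1, bvec->bv_len);
		/* iov_iter_rw(&i) == READ: DAX copies from the backing store into bvec */

		iov_iter_bvec(&i, ITER_BVEC | WRITE, bvec, 1, bvec->bv_len);
		/* iov_iter_rw(&i) == WRITE: vfs_iter_write()/DAX perform an actual write */
	}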
Cc: Al Viro Cc: stable@vger.kernel.org Fixes: commit aa4d86163e4e ("block: loop: switch to VFS ITER_BVEC") Reviewed-by: Christoph Hellwig Reviewed-by: Ming Lei Signed-off-by: Ross Zwisler Signed-off-by: Jens Axboe --- drivers/block/loop.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) (limited to 'drivers') diff --git a/drivers/block/loop.c b/drivers/block/loop.c index 87855b5123a6..ee62d2d517bf 100644 --- a/drivers/block/loop.c +++ b/drivers/block/loop.c @@ -266,7 +266,7 @@ static int lo_write_bvec(struct file *file, struct bio_vec *bvec, loff_t *ppos) struct iov_iter i; ssize_t bw; - iov_iter_bvec(&i, ITER_BVEC, bvec, 1, bvec->bv_len); + iov_iter_bvec(&i, ITER_BVEC | WRITE, bvec, 1, bvec->bv_len); file_start_write(file); bw = vfs_iter_write(file, &i, ppos, 0); -- cgit v1.2.3-59-g8ed1b
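For anyone reproducing the symptom described above, a hypothetical user-space check along the lines of the commit's write-then-hexdump test might look like this. The device path, buffer sizes, and the assumption that /dev/loop0 is backed by a file on a DAX-mounted pmem filesystem are all illustrative; O_DIRECT is used so the read-back goes through the loop driver rather than the block-device page cache. Unfixed kernels read back zeros, fixed ones return the string.

	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>
	#include <unistd.h>

	#define BUF_SZ 4096

	int main(void)
	{
		void *wbuf, *rbuf;
		int fd = open("/dev/loop0", O_RDWR | O_DIRECT);

		if (fd < 0 || posix_memalign(&wbuf, BUF_SZ, BUF_SZ) ||
		    posix_memalign(&rbuf, BUF_SZ, BUF_SZ)) {
			perror("setup");
			return 1;
		}
		memset(wbuf, 0, BUF_SZ);
		strcpy(wbuf, "loop-dax-write-test");

		if (pwrite(fd, wbuf, BUF_SZ, 0) != BUF_SZ ||
		    pread(fd, rbuf, BUF_SZ, 0) != BUF_SZ) {
			perror("io");
			return 1;
		}
		/* Compare what came back with what was written. */
		printf("read back: \"%.32s\"\n", (char *)rbuf);
		close(fd);
		return 0;
	}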