commit 071ba4cc559de47160761b9500b72e8fa09d923d
Author:    Jason Gunthorpe <jgg@nvidia.com>	2020-10-26 11:25:49 -0300
Committer: Jason Gunthorpe <jgg@nvidia.com>	2020-10-28 09:14:49 -0300
tree       0433f309f2aae8e868f09260cd1647671236af85
parent     RDMA/uverbs: Fix false error in query gid IOCTL
RDMA: Add rdma_connect_locked()
There are two flows for handling RDMA_CM_EVENT_ROUTE_RESOLVED: either the
handler triggers a completion and another thread does rdma_connect(), or the
handler directly calls rdma_connect(). In all cases rdma_connect() needs to
hold the handler_mutex, but when handlers are invoked this is already held by
the core code. This causes ULPs using the second method to deadlock.

Provide rdma_connect_locked() and have all such ULPs call it from their
handlers.

Link: https://lore.kernel.org/r/0-v2-53c22d5c1405+33-rdma_connect_locking_jgg@nvidia.com
Reported-and-tested-by: Guoqing Jiang <guoqing.jiang@cloud.ionos.com>
Fixes: 2a7cec538169 ("RDMA/cma: Fix locking for the RDMA_CM_CONNECT state")
Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
Acked-by: Jack Wang <jinpu.wang@cloud.ionos.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
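To illustrate the second flow the message describes (a ULP connecting
directly from its CM event handler), here is a minimal sketch. The cma core
already holds the cm_id's handler_mutex when it invokes the handler, so plain
rdma_connect(), which takes that mutex itself, would self-deadlock there; the
rdma_connect_locked() added by this commit is the variant to call.
ulp_cm_handler() and the empty conn_param are hypothetical; a real ULP fills
in private_data and queue parameters, as the nvme/rdma hunk below does.

	#include <rdma/rdma_cm.h>

	static int ulp_cm_handler(struct rdma_cm_id *cm_id,
				  struct rdma_cm_event *event)
	{
		struct rdma_conn_param param = {};
		int ret;

		switch (event->event) {
		case RDMA_CM_EVENT_ROUTE_RESOLVED:
			/*
			 * The cma core holds cm_id's handler_mutex while
			 * this handler runs, so rdma_connect() would
			 * deadlock; use the _locked variant instead.
			 */
			ret = rdma_connect_locked(cm_id, &param);
			if (ret)
				pr_err("rdma_connect_locked failed (%d)\n",
				       ret);
			return ret;
		default:
			return 0;
		}
	}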
Diffstat (limited to 'drivers/nvme/host/rdma.c')
 drivers/nvme/host/rdma.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index aad829a2b50d..8bbc48cc45dc 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -1890,10 +1890,10 @@ static int nvme_rdma_route_resolved(struct nvme_rdma_queue *queue)
 		priv.hsqsize = cpu_to_le16(queue->ctrl->ctrl.sqsize);
 	}
 
-	ret = rdma_connect(queue->cm_id, &param);
+	ret = rdma_connect_locked(queue->cm_id, &param);
 	if (ret) {
 		dev_err(ctrl->ctrl.device,
-			"rdma_connect failed (%d).\n", ret);
+			"rdma_connect_locked failed (%d).\n", ret);
 		goto out_destroy_queue_ib;
 	}
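For contrast, a sketch of the first flow, where plain rdma_connect() remains
correct: the handler only signals a completion, and a separate thread does
the connect, so handler_mutex is not held at the call site. struct ulp_conn,
its resolved completion (assumed initialized with init_completion() at
setup), and ulp_do_connect() are hypothetical names for illustration.

	#include <linux/completion.h>
	#include <rdma/rdma_cm.h>

	struct ulp_conn {
		struct rdma_cm_id *cm_id;
		struct completion resolved;
	};

	static int ulp_resolve_handler(struct rdma_cm_id *cm_id,
				       struct rdma_cm_event *event)
	{
		struct ulp_conn *conn = cm_id->context;

		if (event->event == RDMA_CM_EVENT_ROUTE_RESOLVED)
			complete(&conn->resolved); /* connect happens elsewhere */
		return 0;
	}

	static void ulp_do_connect(struct ulp_conn *conn)
	{
		struct rdma_conn_param param = {};

		wait_for_completion(&conn->resolved);
		/* Not inside the handler, so handler_mutex is not held here. */
		rdma_connect(conn->cm_id, &param);
	}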