From 074fca3a18e7e1e0d4d7dcc9d7badc43b90232f4 Mon Sep 17 00:00:00 2001
From: Majd Dibbiny
Date: Mon, 5 Nov 2018 08:07:37 +0200
Subject: RDMA/mlx5: Fix fence type for IB_WR_LOCAL_INV WR

Currently, for an IB_WR_LOCAL_INV WR, when the next fence is None the
current fence will be SMALL instead of a Normal fence.

Without this patch krping doesn't work on CX-5 devices. The error
messages from the CX5 driver (server side) are:

[ 710.434014] mlx5_0:dump_cqe:278:(pid 2712): dump error cqe
[ 710.434016] 00000000 00000000 00000000 00000000
[ 710.434016] 00000000 00000000 00000000 00000000
[ 710.434017] 00000000 00000000 00000000 00000000
[ 710.434018] 00000000 93003204 100000b8 000524d2
[ 710.434019] krping: cq completion failed with wr_id 0 status 4 opcode 128 vender_err 32

Fix the logic to set the correct fence type.

Fixes: 6e8484c5cf07 ("RDMA/mlx5: set UMR wqe fence according to HCA cap")
Signed-off-by: Majd Dibbiny
Signed-off-by: Leon Romanovsky
Signed-off-by: Jason Gunthorpe
---
 drivers/infiniband/hw/mlx5/qp.c | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

(limited to 'drivers/infiniband')

diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 6841c0f9237f..8c74afc91a47 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -4678,17 +4678,18 @@ static int _mlx5_ib_post_send(struct ib_qp *ibqp, const struct ib_send_wr *wr,
 			goto out;
 		}
 
-		if (wr->opcode == IB_WR_LOCAL_INV ||
-		    wr->opcode == IB_WR_REG_MR) {
+		if (wr->opcode == IB_WR_REG_MR) {
 			fence = dev->umr_fence;
 			next_fence = MLX5_FENCE_MODE_INITIATOR_SMALL;
-		} else if (wr->send_flags & IB_SEND_FENCE) {
-			if (qp->next_fence)
-				fence = MLX5_FENCE_MODE_SMALL_AND_FENCE;
-			else
-				fence = MLX5_FENCE_MODE_FENCE;
-		} else {
-			fence = qp->next_fence;
+		} else {
+			if (wr->send_flags & IB_SEND_FENCE) {
+				if (qp->next_fence)
+					fence = MLX5_FENCE_MODE_SMALL_AND_FENCE;
+				else
+					fence = MLX5_FENCE_MODE_FENCE;
+			} else {
+				fence = qp->next_fence;
+			}
 		}
 
 		switch (ibqp->qp_type) {
--
cgit v1.2.3-59-g8ed1b


From d52ef88a9f4be523425730da3239cf87bee936da Mon Sep 17 00:00:00 2001
From: Parav Pandit
Date: Mon, 19 Nov 2018 09:58:24 +0200
Subject: RDMA/core: Add GIDs while changing MAC addr only for registered ndev

Currently, when the MAC address is changed, GID entries are removed and
added to reflect the new MAC address and new default GID entries,
regardless of the netdev reg_state.

When a bonding device is used and the underlying PCI device is removed,
several netdevice events are generated. Two events of interest are
CHANGEADDR and UNREGISTER on the lower (slave) netdevice of the bond
netdevice. Sometimes the CHANGEADDR event is generated when the netdev
state is UNREGISTERING (after the UNREGISTER event is generated). In
this scenario, GID entries for default GIDs are added and never deleted,
because GID entries are deleted only when the netdev state is <
UNREGISTERED.

This leads to a non-zero reference count on the netdevice. Due to this,
the PCI device unbind operation gets stuck.

To avoid this, when changing the MAC address, add GID entries only if
the netdev is in the REGISTERED state.
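
As an illustration only (not part of the patch), the guard can be thought
of as the following minimal sketch; gid_add_allowed() is a hypothetical
helper, while reg_state and NETREG_REGISTERED are the real netdevice
fields the fix relies on:

	/* Queue GID-add commands only while the netdev is still
	 * registered; once unregistration starts, reg_state moves past
	 * NETREG_REGISTERED and no new GID entry may pin the netdev.
	 */
	static bool gid_add_allowed(const struct net_device *ndev)
	{
		return ndev->reg_state == NETREG_REGISTERED;
	}
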
Fixes: 03db3a2d81e6 ("IB/core: Add RoCE GID table management")
Signed-off-by: Parav Pandit
Reviewed-by: Mark Bloch
Signed-off-by: Leon Romanovsky
Signed-off-by: Jason Gunthorpe
---
 drivers/infiniband/core/roce_gid_mgmt.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

(limited to 'drivers/infiniband')

diff --git a/drivers/infiniband/core/roce_gid_mgmt.c b/drivers/infiniband/core/roce_gid_mgmt.c
index ee366199b169..25d43c8f1c2a 100644
--- a/drivers/infiniband/core/roce_gid_mgmt.c
+++ b/drivers/infiniband/core/roce_gid_mgmt.c
@@ -767,8 +767,10 @@ static int netdevice_event(struct notifier_block *this, unsigned long event,
 
 	case NETDEV_CHANGEADDR:
 		cmds[0] = netdev_del_cmd;
-		cmds[1] = add_default_gid_cmd;
-		cmds[2] = add_cmd;
+		if (ndev->reg_state == NETREG_REGISTERED) {
+			cmds[1] = add_default_gid_cmd;
+			cmds[2] = add_cmd;
+		}
 		break;
 
 	case NETDEV_CHANGEUPPER:
--
cgit v1.2.3-59-g8ed1b


From 3c4b1419c33c2417836a63f8126834ee36968321 Mon Sep 17 00:00:00 2001
From: Selvin Xavier
Date: Wed, 21 Nov 2018 00:05:00 -0800
Subject: RDMA/bnxt_re: Fix system hang when registration with L2 driver fails

The driver doesn't release the rtnl lock if registration with the L2
driver (bnxt_re_register_netdev) fails, and this causes a hang when the
lock is requested again.

[ 371.635416] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 371.635417] kworker/u48:1 D 0 634 2 0x80000000
[ 371.635423] Workqueue: bnxt_re bnxt_re_task [bnxt_re]
[ 371.635424] Call Trace:
[ 371.635426] ? __schedule+0x36b/0xbd0
[ 371.635429] schedule+0x39/0x90
[ 371.635430] schedule_preempt_disabled+0x11/0x20
[ 371.635431] __mutex_lock+0x45b/0x9c0
[ 371.635433] ? __mutex_lock+0x16d/0x9c0
[ 371.635435] ? bnxt_re_ib_reg+0x2b/0xb30 [bnxt_re]
[ 371.635438] ? wake_up_klogd+0x37/0x40
[ 371.635442] bnxt_re_ib_reg+0x2b/0xb30 [bnxt_re]
[ 371.635447] bnxt_re_task+0xfd/0x180 [bnxt_re]
[ 371.635449] process_one_work+0x216/0x5b0
[ 371.635450] ? process_one_work+0x189/0x5b0
[ 371.635453] worker_thread+0x4e/0x3d0
[ 371.635455] kthread+0x10e/0x140
[ 371.635456] ? process_one_work+0x5b0/0x5b0
[ 371.635458] ? kthread_stop+0x220/0x220
[ 371.635460] ret_from_fork+0x3a/0x50
[ 371.635477] INFO: task NetworkManager:1228 blocked for more than 120 seconds.
[ 371.635478] Tainted: G B OE 4.20.0-rc1+ #42
[ 371.635479] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.

Release the rtnl_lock correctly in the failure path.

Fixes: de5c95d0f518 ("RDMA/bnxt_re: Fix system crash during RDMA resource initialization")
Signed-off-by: Selvin Xavier
Signed-off-by: Jason Gunthorpe
---
 drivers/infiniband/hw/bnxt_re/main.c | 1 +
 1 file changed, 1 insertion(+)

(limited to 'drivers/infiniband')

diff --git a/drivers/infiniband/hw/bnxt_re/main.c b/drivers/infiniband/hw/bnxt_re/main.c
index cf2282654210..bd5ded5809f5 100644
--- a/drivers/infiniband/hw/bnxt_re/main.c
+++ b/drivers/infiniband/hw/bnxt_re/main.c
@@ -1268,6 +1268,7 @@ static int bnxt_re_ib_reg(struct bnxt_re_dev *rdev)
 	/* Registered a new RoCE device instance to netdev */
 	rc = bnxt_re_register_netdev(rdev);
 	if (rc) {
+		rtnl_unlock();
 		pr_err("Failed to register with netedev: %#x\n", rc);
 		return -EINVAL;
 	}
--
cgit v1.2.3-59-g8ed1b


From a6c66d6a08b88cc10aca9d3f65cfae31e7652a99 Mon Sep 17 00:00:00 2001
From: Selvin Xavier
Date: Wed, 21 Nov 2018 00:05:01 -0800
Subject: RDMA/bnxt_re: Avoid accessing the device structure after it is freed

When bnxt_re_ib_reg returns failure, the device structure gets freed.
The driver then tries to access the device pointer after it is freed.
[ 4871.034744] Failed to register with netedev: 0xffffffa1
[ 4871.034765] infiniband (null): Failed to register with IB: 0xffffffea
[ 4871.046430] ==================================================================
[ 4871.046437] BUG: KASAN: use-after-free in bnxt_re_task+0x63/0x180 [bnxt_re]
[ 4871.046439] Write of size 4 at addr ffff880fa8406f48 by task kworker/u48:2/17813
[ 4871.046443] CPU: 20 PID: 17813 Comm: kworker/u48:2 Kdump: loaded Tainted: G B OE 4.20.0-rc1+ #42
[ 4871.046444] Hardware name: Dell Inc. PowerEdge R730/0599V5, BIOS 1.0.4 08/28/2014
[ 4871.046447] Workqueue: bnxt_re bnxt_re_task [bnxt_re]
[ 4871.046449] Call Trace:
[ 4871.046454] dump_stack+0x91/0xeb
[ 4871.046458] print_address_description+0x6a/0x2a0
[ 4871.046461] kasan_report+0x176/0x2d0
[ 4871.046463] ? bnxt_re_task+0x63/0x180 [bnxt_re]
[ 4871.046466] bnxt_re_task+0x63/0x180 [bnxt_re]
[ 4871.046470] process_one_work+0x216/0x5b0
[ 4871.046471] ? process_one_work+0x189/0x5b0
[ 4871.046475] worker_thread+0x4e/0x3d0
[ 4871.046479] kthread+0x10e/0x140
[ 4871.046480] ? process_one_work+0x5b0/0x5b0
[ 4871.046482] ? kthread_stop+0x220/0x220
[ 4871.046486] ret_from_fork+0x3a/0x50
[ 4871.046492] The buggy address belongs to the page:
[ 4871.046494] page:ffffea003ea10180 count:0 mapcount:0 mapping:0000000000000000 index:0x0
[ 4871.046495] flags: 0x57ffffc0000000()
[ 4871.046498] raw: 0057ffffc0000000 0000000000000000 ffffea003ea10188 0000000000000000
[ 4871.046500] raw: 0000000000000000 0000000000000000 00000000ffffffff 0000000000000000
[ 4871.046501] page dumped because: kasan: bad access detected

Avoid accessing the device structure once it is freed.

Fixes: 497158aa5f52 ("RDMA/bnxt_re: Fix the ib_reg failure cleanup")
Signed-off-by: Selvin Xavier
Signed-off-by: Jason Gunthorpe
---
 drivers/infiniband/hw/bnxt_re/main.c | 2 ++
 1 file changed, 2 insertions(+)

(limited to 'drivers/infiniband')

diff --git a/drivers/infiniband/hw/bnxt_re/main.c b/drivers/infiniband/hw/bnxt_re/main.c
index bd5ded5809f5..77f095e5fbe3 100644
--- a/drivers/infiniband/hw/bnxt_re/main.c
+++ b/drivers/infiniband/hw/bnxt_re/main.c
@@ -1467,6 +1467,7 @@ static void bnxt_re_task(struct work_struct *work)
 				"Failed to register with IB: %#x", rc);
 			bnxt_re_remove_one(rdev);
 			bnxt_re_dev_unreg(rdev);
+			goto exit;
 		}
 		break;
 	case NETDEV_UP:
@@ -1490,6 +1491,7 @@ static void bnxt_re_task(struct work_struct *work)
 	}
 	smp_mb__before_atomic();
 	atomic_dec(&rdev->sched_count);
+exit:
 	kfree(re_work);
 }
 
--
cgit v1.2.3-59-g8ed1b


From 13f8d9c16693afb908ead3d2a758adbe6a79eccd Mon Sep 17 00:00:00 2001
From: Yonatan Cohen
Date: Wed, 21 Nov 2018 13:48:39 +0200
Subject: IB/mlx5: Fix XRC QP support after introducing extended atomic

Extended atomics are supported with RC and XRC QP types, but the commit
cited in the Fixes line added an unneeded check to to_mlx5_access_flags.
This broke XRC QPs.

The following ib_atomic_bw invocation over XRC reproduces the issue:

  ib_atomic_bw -d mlx5_1 --connection=XRC --atomic_type=FETCH_AND_ADD

It is safe to remove such checks because the QP type was already checked
in ib_modify_qp_is_ok(), which was previously called from
mlx5_ib_modify_qp.
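
For illustration only (not part of the patch), the resulting division of
labor can be sketched as below; get_atomic_mode() and the flag names are
taken from the diff that follows:

	/* ib_modify_qp_is_ok() has already validated the QP type for
	 * this transition, so the flag translation only asks the device
	 * for an atomic mode; unsupported QP types fail there instead
	 * of being filtered out with an extra qp_type check.
	 */
	if (access_flags & IB_ACCESS_REMOTE_ATOMIC) {
		int atomic_mode = get_atomic_mode(dev, qp->ibqp.qp_type);

		if (atomic_mode < 0)
			return -EOPNOTSUPP;
	}
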
Fixes: a60109dc9a95 ("IB/mlx5: Add support for extended atomic operations")
Signed-off-by: Yonatan Cohen
Signed-off-by: Leon Romanovsky
Signed-off-by: Jason Gunthorpe
---
 drivers/infiniband/hw/mlx5/qp.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

(limited to 'drivers/infiniband')

diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 8c74afc91a47..3747cc681b18 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -2633,8 +2633,7 @@ static int to_mlx5_access_flags(struct mlx5_ib_qp *qp,
 	if (access_flags & IB_ACCESS_REMOTE_READ)
 		*hw_access_flags |= MLX5_QP_BIT_RRE;
 
-	if ((access_flags & IB_ACCESS_REMOTE_ATOMIC) &&
-	    qp->ibqp.qp_type == IB_QPT_RC) {
+	if (access_flags & IB_ACCESS_REMOTE_ATOMIC) {
 		int atomic_mode;
 
 		atomic_mode = get_atomic_mode(dev, qp->ibqp.qp_type);
--
cgit v1.2.3-59-g8ed1b


From db7a691a1551a748cb92d9c89c6b190ea87e28d5 Mon Sep 17 00:00:00 2001
From: Michael Guralnik
Date: Wed, 21 Nov 2018 15:03:54 +0200
Subject: IB/mlx5: Avoid load failure due to unknown link width

If the firmware reports a connection width that is not 1x, 4x, 8x or
12x, it causes the driver to fail during initialization.

To prevent this failure every time a new width is introduced to the RDMA
stack, set a default width of 4x for widths that are unknown to the
driver. This is needed to allow old kernels to run with new firmware.

Cc: # 4.1
Fixes: 1b5daf11b015 ("IB/mlx5: Avoid using the MAD_IFC command under ISSI > 0 mode")
Signed-off-by: Michael Guralnik
Reviewed-by: Majd Dibbiny
Signed-off-by: Leon Romanovsky
Signed-off-by: Jason Gunthorpe
---
 drivers/infiniband/hw/mlx5/main.c | 29 +++++++++++------------------
 1 file changed, 11 insertions(+), 18 deletions(-)

(limited to 'drivers/infiniband')

diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index e9c428071df3..3569fda07e07 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -1094,31 +1094,26 @@ enum mlx5_ib_width {
 	MLX5_IB_WIDTH_12X	= 1 << 4
 };
 
-static int translate_active_width(struct ib_device *ibdev, u8 active_width,
+static void translate_active_width(struct ib_device *ibdev, u8 active_width,
 				  u8 *ib_width)
 {
 	struct mlx5_ib_dev *dev = to_mdev(ibdev);
-	int err = 0;
 
-	if (active_width & MLX5_IB_WIDTH_1X) {
+	if (active_width & MLX5_IB_WIDTH_1X)
 		*ib_width = IB_WIDTH_1X;
-	} else if (active_width & MLX5_IB_WIDTH_2X) {
-		mlx5_ib_dbg(dev, "active_width %d is not supported by IB spec\n",
-			    (int)active_width);
-		err = -EINVAL;
-	} else if (active_width & MLX5_IB_WIDTH_4X) {
+	else if (active_width & MLX5_IB_WIDTH_4X)
 		*ib_width = IB_WIDTH_4X;
-	} else if (active_width & MLX5_IB_WIDTH_8X) {
+	else if (active_width & MLX5_IB_WIDTH_8X)
 		*ib_width = IB_WIDTH_8X;
-	} else if (active_width & MLX5_IB_WIDTH_12X) {
+	else if (active_width & MLX5_IB_WIDTH_12X)
 		*ib_width = IB_WIDTH_12X;
-	} else {
-		mlx5_ib_dbg(dev, "Invalid active_width %d\n",
+	else {
+		mlx5_ib_dbg(dev, "Invalid active_width %d, setting width to default value: 4x\n",
 			    (int)active_width);
-		err = -EINVAL;
+		*ib_width = IB_WIDTH_4X;
 	}
 
-	return err;
+	return;
 }
 
 static int mlx5_mtu_to_ib_mtu(int mtu)
@@ -1225,10 +1220,8 @@ static int mlx5_query_hca_port(struct ib_device *ibdev, u8 port,
 	if (err)
 		goto out;
 
-	err = translate_active_width(ibdev, ib_link_width_oper,
-				     &props->active_width);
-	if (err)
-		goto out;
+	translate_active_width(ibdev, ib_link_width_oper, &props->active_width);
+
 	err = mlx5_query_port_ib_proto_oper(mdev, &props->active_speed, port);
 	if (err)
 		goto out;
--
cgit v1.2.3-59-g8ed1b


From 4f32fb921b153ae9ea280e02a3e91509fffc03d3 Mon Sep 17 00:00:00 2001
From: Kamal Heib
Date: Thu, 15 Nov 2018 09:49:38 -0800
Subject: RDMA/rdmavt: Fix rvt_create_ah function signature

rdmavt uses a crazy system that loses the type checking when assigning
functions to struct ib_device function pointers. Because of this, the
signature of this function was not changed when the commit below revised
things. Fix the signature so we are not calling a function pointer with a
mismatched signature.

Fixes: 477864c8fcd9 ("IB/core: Let create_ah return extended response to user")
Signed-off-by: Kamal Heib
Reviewed-by: Dennis Dalessandro
Signed-off-by: Jason Gunthorpe
---
 drivers/infiniband/sw/rdmavt/ah.c | 4 +++-
 drivers/infiniband/sw/rdmavt/ah.h | 3 ++-
 2 files changed, 5 insertions(+), 2 deletions(-)

(limited to 'drivers/infiniband')

diff --git a/drivers/infiniband/sw/rdmavt/ah.c b/drivers/infiniband/sw/rdmavt/ah.c
index 89ec0f64abfc..084bb4baebb5 100644
--- a/drivers/infiniband/sw/rdmavt/ah.c
+++ b/drivers/infiniband/sw/rdmavt/ah.c
@@ -91,13 +91,15 @@ EXPORT_SYMBOL(rvt_check_ah);
  * rvt_create_ah - create an address handle
  * @pd: the protection domain
  * @ah_attr: the attributes of the AH
+ * @udata: pointer to user's input output buffer information.
  *
  * This may be called from interrupt context.
 *
 * Return: newly allocated ah
 */
 struct ib_ah *rvt_create_ah(struct ib_pd *pd,
-			    struct rdma_ah_attr *ah_attr)
+			    struct rdma_ah_attr *ah_attr,
+			    struct ib_udata *udata)
 {
 	struct rvt_ah *ah;
 	struct rvt_dev_info *dev = ib_to_rvt(pd->device);
diff --git a/drivers/infiniband/sw/rdmavt/ah.h b/drivers/infiniband/sw/rdmavt/ah.h
index 16105af99189..25271b48a683 100644
--- a/drivers/infiniband/sw/rdmavt/ah.h
+++ b/drivers/infiniband/sw/rdmavt/ah.h
@@ -51,7 +51,8 @@
 #include
 
 struct ib_ah *rvt_create_ah(struct ib_pd *pd,
-			    struct rdma_ah_attr *ah_attr);
+			    struct rdma_ah_attr *ah_attr,
+			    struct ib_udata *udata);
 int rvt_destroy_ah(struct ib_ah *ibah);
 int rvt_modify_ah(struct ib_ah *ibah, struct rdma_ah_attr *ah_attr);
 int rvt_query_ah(struct ib_ah *ibah, struct rdma_ah_attr *ah_attr);
--
cgit v1.2.3-59-g8ed1b


From 24c3456c8d5ee6fc1933ca40f7b4406130682668 Mon Sep 17 00:00:00 2001
From: Sagi Grimberg
Date: Wed, 14 Nov 2018 10:17:01 -0800
Subject: iser: set sector for ambiguous mr status errors

If for some reason we failed to query the mr status, we need to make
sure to provide sufficient information for an ambiguous error (guard
error on sector 0).
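
For illustration only (not part of the patch), the fixed failure path
follows this shape; mr and sector stand in for the function's real
arguments:

	/* If the status query itself fails we cannot localize the bad
	 * block, so report an ambiguous guard error at sector 0 rather
	 * than leaving *sector unset.
	 */
	ret = ib_check_mr_status(mr, IB_MR_CHECK_SIG_STATUS, &mr_status);
	if (ret) {
		*sector = 0;	/* ambiguous location */
		return 0x1;	/* guard check failed */
	}
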
Fixes: 0a7a08ad6f5f ("IB/iser: Implement check_protection")
Cc:
Reported-by: Dan Carpenter
Signed-off-by: Sagi Grimberg
Signed-off-by: Jason Gunthorpe
---
 drivers/infiniband/ulp/iser/iser_verbs.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

(limited to 'drivers/infiniband')

diff --git a/drivers/infiniband/ulp/iser/iser_verbs.c b/drivers/infiniband/ulp/iser/iser_verbs.c
index 946b623ba5eb..4ff3d98fa6a4 100644
--- a/drivers/infiniband/ulp/iser/iser_verbs.c
+++ b/drivers/infiniband/ulp/iser/iser_verbs.c
@@ -1124,7 +1124,9 @@ u8 iser_check_task_pi_status(struct iscsi_iser_task *iser_task,
 					 IB_MR_CHECK_SIG_STATUS, &mr_status);
 		if (ret) {
 			pr_err("ib_check_mr_status failed, ret %d\n", ret);
-			goto err;
+			/* Not a lot we can do, return ambiguous guard error */
+			*sector = 0;
+			return 0x1;
 		}
 
 		if (mr_status.fail_status & IB_MR_CHECK_SIG_STATUS) {
@@ -1152,9 +1154,6 @@ u8 iser_check_task_pi_status(struct iscsi_iser_task *iser_task,
 	}
 
 	return 0;
-err:
-	/* Not alot we can do here, return ambiguous guard error */
-	return 0x1;
 }
 
 void iser_err_comp(struct ib_wc *wc, const char *type)
--
cgit v1.2.3-59-g8ed1b


From ca088320a02537f36c243ac21794525d8eabb3bd Mon Sep 17 00:00:00 2001
From: Yixian Liu
Date: Fri, 23 Nov 2018 15:46:07 +0800
Subject: RDMA/hns: Bugfix pbl configuration for rereg mr

Currently the hns driver assigns the first two PBL page addresses from
the previously registered MR to the hardware when a rereg-MR operation
changes the memory locations. This leads to a PBL addressing error, as
the old PBL has already been released. Fix this wrong assignment by
using the page addresses from the newly allocated PBL.

Fixes: a2c80b7b4119 ("RDMA/hns: Add rereg mr support for hip08")
Signed-off-by: Yixian Liu
Signed-off-by: Jason Gunthorpe
---
 drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 128 ++++++++++++++---------------
 1 file changed, 60 insertions(+), 68 deletions(-)

(limited to 'drivers/infiniband')

diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
index a4c62ae23a9a..3beb1523e17c 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
@@ -1756,10 +1756,9 @@ static int hns_roce_v2_set_mac(struct hns_roce_dev *hr_dev, u8 phy_port,
 	return hns_roce_cmq_send(hr_dev, &desc, 1);
 }
 
-static int hns_roce_v2_write_mtpt(void *mb_buf, struct hns_roce_mr *mr,
-				  unsigned long mtpt_idx)
+static int set_mtpt_pbl(struct hns_roce_v2_mpt_entry *mpt_entry,
+			struct hns_roce_mr *mr)
 {
-	struct hns_roce_v2_mpt_entry *mpt_entry;
 	struct scatterlist *sg;
 	u64 page_addr;
 	u64 *pages;
@@ -1767,6 +1766,53 @@ static int hns_roce_v2_write_mtpt(void *mb_buf, struct hns_roce_mr *mr,
 	int len;
 	int entry;
 
+	mpt_entry->pbl_size = cpu_to_le32(mr->pbl_size);
+	mpt_entry->pbl_ba_l = cpu_to_le32(lower_32_bits(mr->pbl_ba >> 3));
+	roce_set_field(mpt_entry->byte_48_mode_ba,
+		       V2_MPT_BYTE_48_PBL_BA_H_M, V2_MPT_BYTE_48_PBL_BA_H_S,
+		       upper_32_bits(mr->pbl_ba >> 3));
+
+	pages = (u64 *)__get_free_page(GFP_KERNEL);
+	if (!pages)
+		return -ENOMEM;
+
+	i = 0;
+	for_each_sg(mr->umem->sg_head.sgl, sg, mr->umem->nmap, entry) {
+		len = sg_dma_len(sg) >> PAGE_SHIFT;
+		for (j = 0; j < len; ++j) {
+			page_addr = sg_dma_address(sg) +
+				    (j << mr->umem->page_shift);
+			pages[i] = page_addr >> 6;
+			/* Record the first 2 entry directly to MTPT table */
+			if (i >= HNS_ROCE_V2_MAX_INNER_MTPT_NUM - 1)
+				goto found;
+			i++;
+		}
+	}
+
+found:
+	mpt_entry->pa0_l = cpu_to_le32(lower_32_bits(pages[0]));
+	roce_set_field(mpt_entry->byte_56_pa0_h, V2_MPT_BYTE_56_PA0_H_M,
+		       V2_MPT_BYTE_56_PA0_H_S, upper_32_bits(pages[0]));
+
+	mpt_entry->pa1_l = cpu_to_le32(lower_32_bits(pages[1]));
+	roce_set_field(mpt_entry->byte_64_buf_pa1, V2_MPT_BYTE_64_PA1_H_M,
+		       V2_MPT_BYTE_64_PA1_H_S, upper_32_bits(pages[1]));
+	roce_set_field(mpt_entry->byte_64_buf_pa1,
+		       V2_MPT_BYTE_64_PBL_BUF_PG_SZ_M,
+		       V2_MPT_BYTE_64_PBL_BUF_PG_SZ_S,
+		       mr->pbl_buf_pg_sz + PG_SHIFT_OFFSET);
+
+	free_page((unsigned long)pages);
+
+	return 0;
+}
+
+static int hns_roce_v2_write_mtpt(void *mb_buf, struct hns_roce_mr *mr,
+				  unsigned long mtpt_idx)
+{
+	struct hns_roce_v2_mpt_entry *mpt_entry;
+	int ret;
+
 	mpt_entry = mb_buf;
 	memset(mpt_entry, 0, sizeof(*mpt_entry));
 
@@ -1781,7 +1827,6 @@ static int hns_roce_v2_write_mtpt(void *mb_buf, struct hns_roce_mr *mr,
 		       mr->pbl_ba_pg_sz + PG_SHIFT_OFFSET);
 	roce_set_field(mpt_entry->byte_4_pd_hop_st, V2_MPT_BYTE_4_PD_M,
 		       V2_MPT_BYTE_4_PD_S, mr->pd);
-	mpt_entry->byte_4_pd_hop_st = cpu_to_le32(mpt_entry->byte_4_pd_hop_st);
 
 	roce_set_bit(mpt_entry->byte_8_mw_cnt_en, V2_MPT_BYTE_8_RA_EN_S, 0);
 	roce_set_bit(mpt_entry->byte_8_mw_cnt_en, V2_MPT_BYTE_8_R_INV_EN_S, 1);
@@ -1796,13 +1841,11 @@ static int hns_roce_v2_write_mtpt(void *mb_buf, struct hns_roce_mr *mr,
 		     (mr->access & IB_ACCESS_REMOTE_WRITE ? 1 : 0));
 	roce_set_bit(mpt_entry->byte_8_mw_cnt_en, V2_MPT_BYTE_8_LW_EN_S,
 		     (mr->access & IB_ACCESS_LOCAL_WRITE ? 1 : 0));
-	mpt_entry->byte_8_mw_cnt_en = cpu_to_le32(mpt_entry->byte_8_mw_cnt_en);
 
 	roce_set_bit(mpt_entry->byte_12_mw_pa, V2_MPT_BYTE_12_PA_S,
 		     mr->type == MR_TYPE_MR ? 0 : 1);
 	roce_set_bit(mpt_entry->byte_12_mw_pa, V2_MPT_BYTE_12_INNER_PA_VLD_S,
 		     1);
-	mpt_entry->byte_12_mw_pa = cpu_to_le32(mpt_entry->byte_12_mw_pa);
 
 	mpt_entry->len_l = cpu_to_le32(lower_32_bits(mr->size));
 	mpt_entry->len_h = cpu_to_le32(upper_32_bits(mr->size));
@@ -1813,53 +1856,9 @@ static int hns_roce_v2_write_mtpt(void *mb_buf, struct hns_roce_mr *mr,
 	if (mr->type == MR_TYPE_DMA)
 		return 0;
 
-	mpt_entry->pbl_size = cpu_to_le32(mr->pbl_size);
-
-	mpt_entry->pbl_ba_l = cpu_to_le32(lower_32_bits(mr->pbl_ba >> 3));
-	roce_set_field(mpt_entry->byte_48_mode_ba, V2_MPT_BYTE_48_PBL_BA_H_M,
-		       V2_MPT_BYTE_48_PBL_BA_H_S,
-		       upper_32_bits(mr->pbl_ba >> 3));
-	mpt_entry->byte_48_mode_ba = cpu_to_le32(mpt_entry->byte_48_mode_ba);
-
-	pages = (u64 *)__get_free_page(GFP_KERNEL);
-	if (!pages)
-		return -ENOMEM;
-
-	i = 0;
-	for_each_sg(mr->umem->sg_head.sgl, sg, mr->umem->nmap, entry) {
-		len = sg_dma_len(sg) >> PAGE_SHIFT;
-		for (j = 0; j < len; ++j) {
-			page_addr = sg_dma_address(sg) +
-				    (j << mr->umem->page_shift);
-			pages[i] = page_addr >> 6;
-
-			/* Record the first 2 entry directly to MTPT table */
-			if (i >= HNS_ROCE_V2_MAX_INNER_MTPT_NUM - 1)
-				goto found;
-			i++;
-		}
-	}
-
-found:
-	mpt_entry->pa0_l = cpu_to_le32(lower_32_bits(pages[0]));
-	roce_set_field(mpt_entry->byte_56_pa0_h, V2_MPT_BYTE_56_PA0_H_M,
-		       V2_MPT_BYTE_56_PA0_H_S,
-		       upper_32_bits(pages[0]));
-	mpt_entry->byte_56_pa0_h = cpu_to_le32(mpt_entry->byte_56_pa0_h);
-
-	mpt_entry->pa1_l = cpu_to_le32(lower_32_bits(pages[1]));
-	roce_set_field(mpt_entry->byte_64_buf_pa1, V2_MPT_BYTE_64_PA1_H_M,
-		       V2_MPT_BYTE_64_PA1_H_S, upper_32_bits(pages[1]));
+	ret = set_mtpt_pbl(mpt_entry, mr);
 
-	free_page((unsigned long)pages);
-
-	roce_set_field(mpt_entry->byte_64_buf_pa1,
-		       V2_MPT_BYTE_64_PBL_BUF_PG_SZ_M,
-		       V2_MPT_BYTE_64_PBL_BUF_PG_SZ_S,
-		       mr->pbl_buf_pg_sz + PG_SHIFT_OFFSET);
-	mpt_entry->byte_64_buf_pa1 = cpu_to_le32(mpt_entry->byte_64_buf_pa1);
-
-	return 0;
+	return ret;
 }
 
 static int hns_roce_v2_rereg_write_mtpt(struct hns_roce_dev *hr_dev,
@@ -1868,6 +1867,7 @@ static int hns_roce_v2_rereg_write_mtpt(struct hns_roce_dev *hr_dev,
 					u64 size, void *mb_buf)
 {
 	struct hns_roce_v2_mpt_entry *mpt_entry = mb_buf;
+	int ret = 0;
 
 	if (flags & IB_MR_REREG_PD) {
 		roce_set_field(mpt_entry->byte_4_pd_hop_st, V2_MPT_BYTE_4_PD_M,
@@ -1880,14 +1880,14 @@ static int hns_roce_v2_rereg_write_mtpt(struct hns_roce_dev *hr_dev,
 			     V2_MPT_BYTE_8_BIND_EN_S,
 			     (mr_access_flags & IB_ACCESS_MW_BIND ? 1 : 0));
 		roce_set_bit(mpt_entry->byte_8_mw_cnt_en,
-			     V2_MPT_BYTE_8_ATOMIC_EN_S,
-			     (mr_access_flags & IB_ACCESS_REMOTE_ATOMIC ? 1 : 0));
+			     V2_MPT_BYTE_8_ATOMIC_EN_S,
+			     mr_access_flags & IB_ACCESS_REMOTE_ATOMIC ? 1 : 0);
 		roce_set_bit(mpt_entry->byte_8_mw_cnt_en, V2_MPT_BYTE_8_RR_EN_S,
-			     (mr_access_flags & IB_ACCESS_REMOTE_READ ? 1 : 0));
+			     mr_access_flags & IB_ACCESS_REMOTE_READ ? 1 : 0);
 		roce_set_bit(mpt_entry->byte_8_mw_cnt_en, V2_MPT_BYTE_8_RW_EN_S,
-			     (mr_access_flags & IB_ACCESS_REMOTE_WRITE ? 1 : 0));
+			     mr_access_flags & IB_ACCESS_REMOTE_WRITE ? 1 : 0);
 		roce_set_bit(mpt_entry->byte_8_mw_cnt_en, V2_MPT_BYTE_8_LW_EN_S,
-			     (mr_access_flags & IB_ACCESS_LOCAL_WRITE ? 1 : 0));
+			     mr_access_flags & IB_ACCESS_LOCAL_WRITE ? 1 : 0);
 	}
 
 	if (flags & IB_MR_REREG_TRANS) {
@@ -1896,21 +1896,13 @@ static int hns_roce_v2_rereg_write_mtpt(struct hns_roce_dev *hr_dev,
 		mpt_entry->len_l = cpu_to_le32(lower_32_bits(size));
 		mpt_entry->len_h = cpu_to_le32(upper_32_bits(size));
 
-		mpt_entry->pbl_size = cpu_to_le32(mr->pbl_size);
-		mpt_entry->pbl_ba_l =
-			cpu_to_le32(lower_32_bits(mr->pbl_ba >> 3));
-		roce_set_field(mpt_entry->byte_48_mode_ba,
-			       V2_MPT_BYTE_48_PBL_BA_H_M,
-			       V2_MPT_BYTE_48_PBL_BA_H_S,
-			       upper_32_bits(mr->pbl_ba >> 3));
-		mpt_entry->byte_48_mode_ba =
-			cpu_to_le32(mpt_entry->byte_48_mode_ba);
-
 		mr->iova = iova;
 		mr->size = size;
+
+		ret = set_mtpt_pbl(mpt_entry, mr);
 	}
 
-	return 0;
+	return ret;
 }
 
 static int hns_roce_v2_frmr_write_mtpt(void *mb_buf, struct hns_roce_mr *mr)
--
cgit v1.2.3-59-g8ed1b


From 4d5422a309deecec906c491f8aea77593a46321d Mon Sep 17 00:00:00 2001
From: Artemy Kovalyov
Date: Sun, 25 Nov 2018 20:34:23 +0200
Subject: IB/mlx5: Skip non-ODP MR when handling a page fault

It is possible that we call pagefault_single_data_segment() with an MKey
that belongs to a memory region which is not on demand (i.e. pinned
pages). This can happen if, for instance, a WQE points to multiple MRs,
some of which are ODP MRs and some of which are not. In this case we
don't need to handle this MR in the ODP context besides reporting
success. Otherwise the code will call pagefault_mr(), which will do
to_ib_umem_odp() on a non-ODP MR and thus access out of bounds.
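
For illustration only (not part of the patch): to_ib_umem_odp() is
essentially a container_of() cast along the lines sketched below
(approximate definition), which is why applying it to a umem that was
never embedded in an ib_umem_odp reads past the real allocation:

	/* Sketch of the cast in question */
	static inline struct ib_umem_odp *to_ib_umem_odp(struct ib_umem *umem)
	{
		return container_of(umem, struct ib_umem_odp, umem);
	}
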
Fixes: 7bdf65d411c1 ("IB/mlx5: Handle page faults")
Signed-off-by: Artemy Kovalyov
Signed-off-by: Moni Shoua
Signed-off-by: Leon Romanovsky
Signed-off-by: Jason Gunthorpe
---
 drivers/infiniband/hw/mlx5/odp.c | 8 ++++++++
 1 file changed, 8 insertions(+)

(limited to 'drivers/infiniband')

diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c
index b04eb6775326..2a0743808bd8 100644
--- a/drivers/infiniband/hw/mlx5/odp.c
+++ b/drivers/infiniband/hw/mlx5/odp.c
@@ -674,6 +674,14 @@ next_mr:
 		goto srcu_unlock;
 	}
 
+	if (!mr->umem->is_odp) {
+		mlx5_ib_dbg(dev, "skipping non ODP MR (lkey=0x%06x) in page fault handler.\n",
+			    key);
+		if (bytes_mapped)
+			*bytes_mapped += bcnt;
+		goto srcu_unlock;
+	}
+
 	ret = pagefault_mr(dev, mr, io_virt, bcnt, bytes_mapped);
 	if (ret < 0)
 		goto srcu_unlock;
--
cgit v1.2.3-59-g8ed1b


From 605728e65ad303a1b639bcae7c0abd2e24e6a930 Mon Sep 17 00:00:00 2001
From: Artemy Kovalyov
Date: Sun, 25 Nov 2018 20:34:25 +0200
Subject: IB/umem: Set correct address to the invalidation function

The invalidate range was using PAGE_SIZE instead of the computed 'end',
and had the wrong transformation of page_index due to the weird
construction. This can trigger during error unwind and would cause
malfunction.

Inline the code and correct the math.

Fixes: 403cd12e2cf7 ("IB/umem: Add contiguous ODP support")
Signed-off-by: Artemy Kovalyov
Signed-off-by: Moni Shoua
Signed-off-by: Leon Romanovsky
Signed-off-by: Jason Gunthorpe
---
 drivers/infiniband/core/umem_odp.c | 20 ++++++--------------
 1 file changed, 6 insertions(+), 14 deletions(-)

(limited to 'drivers/infiniband')

diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c
index 2b4c5e7dd5a1..676c1fd1119d 100644
--- a/drivers/infiniband/core/umem_odp.c
+++ b/drivers/infiniband/core/umem_odp.c
@@ -137,15 +137,6 @@ static void ib_umem_notifier_release(struct mmu_notifier *mn,
 	up_read(&per_mm->umem_rwsem);
 }
 
-static int invalidate_page_trampoline(struct ib_umem_odp *item, u64 start,
-				      u64 end, void *cookie)
-{
-	ib_umem_notifier_start_account(item);
-	item->umem.context->invalidate_range(item, start, start + PAGE_SIZE);
-	ib_umem_notifier_end_account(item);
-	return 0;
-}
-
 static int invalidate_range_start_trampoline(struct ib_umem_odp *item,
 					     u64 start, u64 end, void *cookie)
 {
@@ -553,12 +544,13 @@ out:
 		put_page(page);
 
 	if (remove_existing_mapping && umem->context->invalidate_range) {
-		invalidate_page_trampoline(
+		ib_umem_notifier_start_account(umem_odp);
+		umem->context->invalidate_range(
 			umem_odp,
-			ib_umem_start(umem) + (page_index >> umem->page_shift),
-			ib_umem_start(umem) + ((page_index + 1) >>
-					       umem->page_shift),
-			NULL);
+			ib_umem_start(umem) + (page_index << umem->page_shift),
+			ib_umem_start(umem) +
+			((page_index + 1) << umem->page_shift));
+		ib_umem_notifier_end_account(umem_odp);
 		ret = -EAGAIN;
 	}
 
--
cgit v1.2.3-59-g8ed1b


From 75b7b86bdb0df37e08e44b6c1f99010967f81944 Mon Sep 17 00:00:00 2001
From: Artemy Kovalyov
Date: Sun, 25 Nov 2018 20:34:26 +0200
Subject: IB/mlx5: Fix page fault handling for MW

Memory windows are implemented with an indirect MKey. When a page fault
event comes for a MW MKey, we need to find the MR at the end of the list
of indirect MKeys by iterating over all items from the first to the
last. The offset calculated during this process has to be zeroed after
the first iteration, or the next iteration will start from a wrong
address, resulting in incorrect ODP faulting behavior.
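
For illustration only (not part of the patch), the intended walk looks
roughly like this; process_segment() and the next field are hypothetical,
while frame->bcnt and offset come from the code being fixed:

	/* Only the first indirect-MKey segment starts at the requested
	 * byte offset; every later segment must start from 0.
	 */
	while (frame) {
		process_segment(frame, offset);	/* hypothetical helper */
		bcnt -= frame->bcnt;
		offset = 0;	/* the fix: reset after the first hop */
		frame = frame->next;	/* hypothetical link */
	}
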
Fixes: db570d7deafb ("IB/mlx5: Add ODP support to MW")
Signed-off-by: Artemy Kovalyov
Signed-off-by: Moni Shoua
Signed-off-by: Leon Romanovsky
Signed-off-by: Jason Gunthorpe
---
 drivers/infiniband/hw/mlx5/odp.c | 1 +
 1 file changed, 1 insertion(+)

(limited to 'drivers/infiniband')

diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c
index 2a0743808bd8..b711a0f3aa35 100644
--- a/drivers/infiniband/hw/mlx5/odp.c
+++ b/drivers/infiniband/hw/mlx5/odp.c
@@ -743,6 +743,7 @@ next_mr:
 			head = frame;
 
 			bcnt -= frame->bcnt;
+			offset = 0;
 		}
 		break;
 
--
cgit v1.2.3-59-g8ed1b


From 7bca603a69c0c239654a8f0bcb99e1a60b30040c Mon Sep 17 00:00:00 2001
From: Leon Romanovsky
Date: Thu, 29 Nov 2018 12:25:29 +0200
Subject: RDMA/mlx5: Initialize return variable in case pagefault was skipped

Page faults that occur on a non-ODP MR are completely valid events, so
initialize the return variable to 0.

Fixes: 4d5422a309de ("IB/mlx5: Skip non-ODP MR when handling a page fault")
Reported-by: Dan Carpenter
Signed-off-by: Leon Romanovsky
Signed-off-by: Jason Gunthorpe
---
 drivers/infiniband/hw/mlx5/odp.c | 1 +
 1 file changed, 1 insertion(+)

(limited to 'drivers/infiniband')

diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c
index b711a0f3aa35..2cc3d69ab6f6 100644
--- a/drivers/infiniband/hw/mlx5/odp.c
+++ b/drivers/infiniband/hw/mlx5/odp.c
@@ -679,6 +679,7 @@ next_mr:
 			key);
 		if (bytes_mapped)
 			*bytes_mapped += bcnt;
+		ret = 0;
 		goto srcu_unlock;
 	}
 
--
cgit v1.2.3-59-g8ed1b
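
For illustration only (not part of the patch), the control flow being
fixed is roughly the following; the skip path exits through the common
unlock label, which returns ret, so ret must be assigned before the goto:

	if (!mr->umem->is_odp) {
		if (bytes_mapped)
			*bytes_mapped += bcnt;
		ret = 0;	/* valid, fully handled fault */
		goto srcu_unlock;
	}
	/* ... */
srcu_unlock:
	/* drop the SRCU read lock, then: */
	return ret;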