path: root/drivers/infiniband
2022-09-07  RDMA/irdma: Report RNR NAK generation in device caps  [Sindhu-Devale, 1 file, -1/+4]
Report RNR NAK generation when device capabilities are queried.
Fixes: b48c24c2d710 ("RDMA/irdma: Implement device supported verb APIs")
Signed-off-by: Sindhu-Devale <sindhu.devale@intel.com>
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Link: https://lore.kernel.org/r/20220906223244.1119-6-shiraz.saleem@intel.com
Signed-off-by: Leon Romanovsky <leon@kernel.org>

2022-09-07  RDMA/irdma: Use s/g array in post send only when its valid  [Sindhu-Devale, 1 file, -1/+2]
A send-with-invalidate verb call can pass in an uninitialized s/g array with zero SGEs, which is then filled into the irdma WQE and causes a HW asynchronous event. Fix this by using the s/g array in irdma post send only when it is valid.
Fixes: 551c46e ("RDMA/irdma: Add user/kernel shared libraries")
Signed-off-by: Sindhu-Devale <sindhu.devale@intel.com>
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Link: https://lore.kernel.org/r/20220906223244.1119-5-shiraz.saleem@intel.com
Signed-off-by: Leon Romanovsky <leon@kernel.org>

2022-09-07  RDMA/irdma: Return correct WC error for bind operation failure  [Sindhu-Devale, 1 file, -1/+3]
When a QP and an MR on a local host are in different PDs, the HW generates an asynchronous event (AE). The same AE is generated when a QP and an MW are in different PDs during a bind operation. Return the more appropriate IBV_WC_MW_BIND_ERR for the latter case by checking the OP type from the CQE in error.
Fixes: 551c46edc769 ("RDMA/irdma: Add user/kernel shared libraries")
Signed-off-by: Sindhu-Devale <sindhu.devale@intel.com>
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Link: https://lore.kernel.org/r/20220906223244.1119-4-shiraz.saleem@intel.com
Signed-off-by: Leon Romanovsky <leon@kernel.org>

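For illustration, a minimal sketch of the CQE-based distinction described above; the opcode constant and field names are assumptions for readability, not the exact irdma definitions:

        /* Hedged sketch: pick the completion status from the opcode
         * recorded in the erroring CQE; names are illustrative. */
        if (cqe_info->op_type == IRDMA_OP_TYPE_BIND_MW)
                entry->status = IB_WC_MW_BIND_ERR;
        else
                entry->status = IB_WC_LOC_PROT_ERR;
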
2022-09-07  RDMA/irdma: Return error on MR deregister CQP failure  [Sindhu-Devale, 2 files, -6/+13]
The MR deregister CQP can fail if an MW is bound to it. Return an appropriate error for this case.
Fixes: b48c24c2d710 ("RDMA/irdma: Implement device supported verb APIs")
Signed-off-by: Sindhu-Devale <sindhu.devale@intel.com>
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Link: https://lore.kernel.org/r/20220906223244.1119-3-shiraz.saleem@intel.com
Signed-off-by: Leon Romanovsky <leon@kernel.org>

2022-09-07  RDMA/irdma: Report the correct max cqes from query device  [Sindhu-Devale, 1 file, -1/+1]
Report the correct max cqes available to an application, taking into account a reserved entry used to detect overflow.
Fixes: b48c24c2d710 ("RDMA/irdma: Implement device supported verb APIs")
Signed-off-by: Sindhu-Devale <sindhu.devale@intel.com>
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Link: https://lore.kernel.org/r/20220906223244.1119-2-shiraz.saleem@intel.com
Signed-off-by: Leon Romanovsky <leon@kernel.org>

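A minimal sketch of the adjustment described above; the variable names are illustrative placeholders, not the actual irdma structures:

        /* Hedged sketch: advertise one less CQE than the HW limit so the
         * entry reserved for overflow detection is never handed to users. */
        props->max_cqe = hw_max_cqe - 1;    /* hw_max_cqe: HW-reported limit */
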
2022-09-05  RDMA/mlx5: Fix UMR cleanup on error flow of driver init  [Maor Gottlieb, 2 files, -0/+4]
The cited commit removed the checks from the UMR cleanup flow that verified whether the resources had been created. This can lead to a null-ptr-deref if the mlx5_ib_stage_ib_reg_init stage fails. Fix it by adding a new state to the UMR that records whether the resources were created, and check it in the UMR cleanup flow before destroying the resources.
Fixes: 04876c12c19e ("RDMA/mlx5: Move init and cleanup of UMR to umr.c")
Reviewed-by: Michael Guralnik <michaelgur@nvidia.com>
Signed-off-by: Maor Gottlieb <maorg@nvidia.com>
Link: https://lore.kernel.org/r/4cfa61386cf202e9ce330e8d228ce3b25a36326e.1661763459.git.leonro@nvidia.com
Signed-off-by: Leon Romanovsky <leon@kernel.org>

2022-09-05  RDMA/mlx5: Set local port to one when accessing counters  [Chris Mi, 1 file, -0/+6]
When accessing the Ports Performance Counters Register (PPCNT), the local port must be 1 if the device is a Function-Per-Port HCA, i.e. HCA_CAP.num_ports is 1. The offending patch can change the local port to other values when accessing PPCNT after enabling switchdev mode. The following syndrome will be printed:
  # cat /sys/class/infiniband/rdmap4s0f0/ports/2/counters/*
  # dmesg
  mlx5_core 0000:04:00.0: mlx5_cmd_check:756:(pid 12450): ACCESS_REG(0x805) op_mod(0x1) failed, status bad parameter(0x3), syndrome (0x1e5585)
Fix it by setting the local port to one for a Function-Per-Port HCA.
Fixes: 210b1f78076f ("IB/mlx5: When not in dual port RoCE mode, use provided port as native")
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Chris Mi <cmi@nvidia.com>
Link: https://lore.kernel.org/r/6c5086c295c76211169e58dbd610fb0402360bab.1661763459.git.leonro@nvidia.com
Signed-off-by: Leon Romanovsky <leon@kernel.org>

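A hedged sketch of the check described above; the surrounding variable name (port_num) is an assumption, though MLX5_CAP_GEN() and the num_ports capability field are standard mlx5 definitions:

        /* Hedged sketch: a function-per-port HCA reports num_ports == 1,
         * and PPCNT must then always be queried with local_port = 1. */
        if (MLX5_CAP_GEN(dev->mdev, num_ports) == 1)
                port_num = 1;
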
2022-09-05  RDMA/mlx5: Rely on RoCE fw cap instead of devlink when setting profile  [Maher Sanalla, 1 file, -1/+1]
When the RDMA auxiliary driver probes, it sets its profile based on the devlink driverinit value. The latter might not be in sync with FW yet (in case devlink reload is not performed), thus causing a mismatch between the RDMA driver and FW. This results in the following FW syndrome when the RDMA driver tries to adjust the RoCE state, which fails the probe:
  "0xC1F678 | modify_nic_vport_context: roce_en set on a vport that doesn't support roce"
To prevent this, select the PF profile based on the FW RoCE capability instead of relying on the devlink driverinit value. To provide backward compatibility of the RoCE disable feature, on older FW versions where roce_rw is not set (the FW RoCE capability is read-only), keep the current behavior, i.e. rely on the devlink driverinit value.
Fixes: fbfa97b4d79f ("net/mlx5: Disable roce at HCA level")
Reviewed-by: Shay Drory <shayd@nvidia.com>
Reviewed-by: Michael Guralnik <michaelgur@nvidia.com>
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Maher Sanalla <msanalla@nvidia.com>
Link: https://lore.kernel.org/r/cb34ce9a1df4a24c135cb804db87f7d2418bd6cc.1661763459.git.leonro@nvidia.com
Signed-off-by: Leon Romanovsky <leon@kernel.org>

2022-09-05  IB/core: Fix a nested dead lock as part of ODP flow  [Yishai Hadas, 1 file, -1/+1]
Fix a nested deadlock in the ODP flow by using mmput_async(). From the call trace below [1] we can see that calling mmput() while umem_odp->umem_mutex is held, as required by ib_umem_odp_map_dma_and_lock(), might trigger in the same task the exit_mmap()->__mmu_notifier_release()->mlx5_ib_invalidate_range() path, which may deadlock when trying to lock the same mutex. Moving to mmput_async() solves the problem, as the above exit_mmap() flow will be called in another task and will be executed once the lock becomes available.
[1]
  [64843.077665] task:kworker/u133:2 state:D stack: 0 pid:80906 ppid: 2 flags:0x00004000
  [64843.077672] Workqueue: mlx5_ib_page_fault mlx5_ib_eqe_pf_action [mlx5_ib]
  [64843.077719] Call Trace:
  [64843.077722] <TASK>
  [64843.077724] __schedule+0x23d/0x590
  [64843.077729] schedule+0x4e/0xb0
  [64843.077735] schedule_preempt_disabled+0xe/0x10
  [64843.077740] __mutex_lock.constprop.0+0x263/0x490
  [64843.077747] __mutex_lock_slowpath+0x13/0x20
  [64843.077752] mutex_lock+0x34/0x40
  [64843.077758] mlx5_ib_invalidate_range+0x48/0x270 [mlx5_ib]
  [64843.077808] __mmu_notifier_release+0x1a4/0x200
  [64843.077816] exit_mmap+0x1bc/0x200
  [64843.077822] ? walk_page_range+0x9c/0x120
  [64843.077828] ? __cond_resched+0x1a/0x50
  [64843.077833] ? mutex_lock+0x13/0x40
  [64843.077839] ? uprobe_clear_state+0xac/0x120
  [64843.077860] mmput+0x5f/0x140
  [64843.077867] ib_umem_odp_map_dma_and_lock+0x21b/0x580 [ib_core]
  [64843.077931] pagefault_real_mr+0x9a/0x140 [mlx5_ib]
  [64843.077962] pagefault_mr+0xb4/0x550 [mlx5_ib]
  [64843.077992] pagefault_single_data_segment.constprop.0+0x2ac/0x560 [mlx5_ib]
  [64843.078022] mlx5_ib_eqe_pf_action+0x528/0x780 [mlx5_ib]
  [64843.078051] process_one_work+0x22b/0x3d0
  [64843.078059] worker_thread+0x53/0x410
  [64843.078065] ? process_one_work+0x3d0/0x3d0
  [64843.078073] kthread+0x12a/0x150
  [64843.078079] ? set_kthread_struct+0x50/0x50
  [64843.078085] ret_from_fork+0x22/0x30
  [64843.078093] </TASK>
Fixes: 36f30e486dce ("IB/core: Improve ODP to use hmm_range_fault()")
Reviewed-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Yishai Hadas <yishaih@nvidia.com>
Link: https://lore.kernel.org/r/74d93541ea533ef7daec6f126deb1072500aeb16.1661251841.git.leonro@nvidia.com
Signed-off-by: Leon Romanovsky <leon@kernel.org>

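For illustration, a minimal sketch of the locking pattern described above, under the assumption that the mm reference is dropped while the ODP mutex is still held; the mapping step is a placeholder, not the real helper:

        /* Hedged sketch: while umem_odp->umem_mutex is held, drop the mm
         * reference with mmput_async() so that a final-put exit_mmap()
         * (and thus mlx5_ib_invalidate_range()) runs from another task
         * instead of deadlocking on the mutex we already hold. */
        mutex_lock(&umem_odp->umem_mutex);
        /* ... DMA-map the faulted pages under the mutex ... */
        mmput_async(owning_mm);         /* was: mmput(owning_mm) */
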
2022-09-04  RDMA/siw: Pass a pointer to virt_to_page()  [Linus Walleij, 1 file, -4/+14]
Functions that work on a pointer to virtual memory, such as virt_to_pfn() and users of that function such as virt_to_page(), are supposed to be passed a pointer to virtual memory, ideally a (void *) or other pointer. However, since many architectures implement virt_to_pfn() as a macro, this function becomes polymorphic and accepts both an (unsigned long) and a (void *). If we instead implement a proper virt_to_pfn(void *addr) function, the following happens (observed on arch/arm):
  drivers/infiniband/sw/siw/siw_qp_tx.c:32:23: warning: incompatible integer to pointer conversion passing 'dma_addr_t' (aka 'unsigned int') to parameter of type 'const void *' [-Wint-conversion]
  drivers/infiniband/sw/siw/siw_qp_tx.c:32:37: warning: passing argument 1 of 'virt_to_pfn' makes pointer from integer without a cast [-Wint-conversion]
  drivers/infiniband/sw/siw/siw_qp_tx.c:538:36: warning: incompatible integer to pointer conversion passing 'unsigned long long' to parameter of type 'const void *' [-Wint-conversion]
Fix this with an explicit cast. In one case, where the SIW SGE uses an unaligned u64, we need a double cast, converting the virtual address (va) to a platform-specific uintptr_t before casting to a (void *).
Fixes: b9be6f18cf9e ("rdma/siw: transmit path")
Cc: linux-rdma@vger.kernel.org
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Link: https://lore.kernel.org/r/20220902215918.603761-1-linus.walleij@linaro.org
Signed-off-by: Leon Romanovsky <leon@kernel.org>

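A minimal sketch of the double cast described above; the SGE field name is taken to be the siw 64-bit local address, which may not match the driver exactly:

        /* Hedged sketch: the SGE address is an unaligned u64, so go through
         * uintptr_t before handing a (void *) to virt_to_page(). */
        struct page *p = virt_to_page((void *)(uintptr_t)sge->laddr);
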
2022-09-01  RDMA/srp: Set scmnd->result only when scmnd is not NULL  [yangx.jy@fujitsu.com, 1 file, -1/+2]
This change fixes the following kernel NULL pointer dereference, which is reproduced by blktests srp/007 occasionally.
  BUG: kernel NULL pointer dereference, address: 0000000000000170
  PGD 0 P4D 0
  Oops: 0002 [#1] PREEMPT SMP NOPTI
  CPU: 0 PID: 9 Comm: kworker/0:1H Kdump: loaded Not tainted 6.0.0-rc1+ #37
  Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.15.0-29-g6a62e0cb0dfe-prebuilt.qemu.org 04/01/2014
  Workqueue: 0x0 (kblockd)
  RIP: 0010:srp_recv_done+0x176/0x500 [ib_srp]
  Code: 00 4d 85 ff 0f 84 52 02 00 00 48 c7 82 80 02 00 00 00 00 00 00 4c 89 df 4c 89 14 24 e8 53 d3 4a f6 4c 8b 14 24 41 0f b6 42 13 <41> 89 87 70 01 00 00 41 0f b6 52 12 f6 c2 02 74 44 41 8b 42 1c b9
  RSP: 0018:ffffaef7c0003e28 EFLAGS: 00000282
  RAX: 0000000000000000 RBX: ffff9bc9486dea60 RCX: 0000000000000000
  RDX: 0000000000000102 RSI: ffffffffb76bbd0e RDI: 00000000ffffffff
  RBP: ffff9bc980099a00 R08: 0000000000000001 R09: 0000000000000001
  R10: ffff9bca53ef0000 R11: ffff9bc980099a10 R12: ffff9bc956e14000
  R13: ffff9bc9836b9cb0 R14: ffff9bc9557b4480 R15: 0000000000000000
  FS: 0000000000000000(0000) GS:ffff9bc97ec00000(0000) knlGS:0000000000000000
  CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
  CR2: 0000000000000170 CR3: 0000000007e04000 CR4: 00000000000006f0
  Call Trace:
   <IRQ>
   __ib_process_cq+0xb7/0x280 [ib_core]
   ib_poll_handler+0x2b/0x130 [ib_core]
   irq_poll_softirq+0x93/0x150
   __do_softirq+0xee/0x4b8
   irq_exit_rcu+0xf7/0x130
   sysvec_apic_timer_interrupt+0x8e/0xc0
   </IRQ>
Fixes: ad215aaea4f9 ("RDMA/srp: Make struct scsi_cmnd and struct srp_request adjacent")
Link: https://lore.kernel.org/r/20220831081626.18712-1-yangx.jy@fujitsu.com
Signed-off-by: Xiao Yang <yangx.jy@fujitsu.com>
Acked-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Leon Romanovsky <leon@kernel.org>

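A hedged sketch of the guard implied by the subject line; the exact message text and surrounding code in srp_recv_done() are assumptions:

        /* Hedged sketch: only dereference scmnd when the lookup matched an
         * outstanding request; otherwise just log and drop the RSP. */
        if (scmnd) {
                scmnd->result = rsp->status;
        } else {
                shost_printk(KERN_ERR, target->scsi_host,
                             "null scmnd for RSP with tag %#016llx\n",
                             be64_to_cpu(rsp->tag));
        }
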
2022-08-30  RDMA/hns: Remove the num_qpc_timer variable  [Yixing Liu, 4 files, -5/+3]
The bt number of qpc_timer on HIP09 increases compared with that of HIP08. Therefore, qpc_timer_bt_num and num_qpc_timer do not match, and as a result the driver may fail to allocate qpc_timer. So the driver needs to use qpc_timer_bt_num exclusively to represent the bt number of qpc_timer.
Fixes: 0e40dc2f70cd ("RDMA/hns: Add timer allocation support for hip08")
Link: https://lore.kernel.org/r/20220829105021.1427804-4-liangwenpeng@huawei.com
Signed-off-by: Yixing Liu <liuyixing1@huawei.com>
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>

2022-08-30  RDMA/hns: Fix wrong fixed value of qp->rq.wqe_shift  [Wenpeng Liang, 1 file, -5/+2]
The value of qp->rq.wqe_shift of HIP08 is always determined by the number of sge. So delete the wrong branch.
Fixes: cfc85f3e4b7f ("RDMA/hns: Add profile support for hip08 driver")
Fixes: 926a01dc000d ("RDMA/hns: Add QP operations support for hip08 SoC")
Link: https://lore.kernel.org/r/20220829105021.1427804-3-liangwenpeng@huawei.com
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>

2022-08-30  RDMA/hns: Fix supported page size  [Chengchang Tang, 1 file, -1/+1]
The supported page size range for hns is 4K to 128M, not 4K to 2G.
Fixes: cfc85f3e4b7f ("RDMA/hns: Add profile support for hip08 driver")
Link: https://lore.kernel.org/r/20220829105021.1427804-2-liangwenpeng@huawei.com
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>

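For illustration only, one way to express that range as a page-size bitmap; the field name is an assumption, not the actual hns capability structure:

        /* Hedged sketch: supported page sizes from 4K (bit 12) up to
         * 128M (bit 27), expressed as a bitmap of page-size exponents. */
        caps->page_size_cap = GENMASK(27, 12);
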
2022-08-29  RDMA/cma: Fix arguments order in net device validation  [Michael Guralnik, 1 file, -2/+2]
Fix the order of source and destination addresses when resolving the route between server and client to validate use of the correct net device. The reverse order we had so far didn't actually validate the net device, as the server would try to resolve the route to itself, thus always getting the server's net device. The issue was discovered when running cm applications on a single host between 2 interfaces with the same subnet and source based routing rules. When resolving the reverse route the source based route rules were ignored.
Fixes: f887f2ac87c2 ("IB/cma: Validate routing of incoming requests")
Link: https://lore.kernel.org/r/1c1ec2277a131d277ebcceec987fd338d35b775f.1661251872.git.leonro@nvidia.com
Signed-off-by: Michael Guralnik <michaelgur@nvidia.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>

2022-08-28  RDMA/irdma: Fix drain SQ hang with no completion  [Shiraz Saleem, 1 file, -1/+1]
SW-generated completions for outstanding WRs posted on the SQ after the QP enters the error state target the wrong CQ. This causes ib_drain_sq to hang with no completion. Fix this by generating completions on the right CQ.
  [ 863.969340] INFO: task kworker/u52:2:671 blocked for more than 122 seconds.
  [ 863.979224] Not tainted 5.14.0-130.el9.x86_64 #1
  [ 863.986588] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  [ 863.996997] task:kworker/u52:2 state:D stack: 0 pid: 671 ppid: 2 flags:0x00004000
  [ 864.007272] Workqueue: xprtiod xprt_autoclose [sunrpc]
  [ 864.014056] Call Trace:
  [ 864.017575] __schedule+0x206/0x580
  [ 864.022296] schedule+0x43/0xa0
  [ 864.026736] schedule_timeout+0x115/0x150
  [ 864.032185] __wait_for_common+0x93/0x1d0
  [ 864.037717] ? usleep_range_state+0x90/0x90
  [ 864.043368] __ib_drain_sq+0xf6/0x170 [ib_core]
  [ 864.049371] ? __rdma_block_iter_next+0x80/0x80 [ib_core]
  [ 864.056240] ib_drain_sq+0x66/0x70 [ib_core]
  [ 864.062003] rpcrdma_xprt_disconnect+0x82/0x3b0 [rpcrdma]
  [ 864.069365] ? xprt_prepare_transmit+0x5d/0xc0 [sunrpc]
  [ 864.076386] xprt_rdma_close+0xe/0x30 [rpcrdma]
  [ 864.082593] xprt_autoclose+0x52/0x100 [sunrpc]
  [ 864.088718] process_one_work+0x1e8/0x3c0
  [ 864.094170] worker_thread+0x50/0x3b0
  [ 864.099109] ? rescuer_thread+0x370/0x370
  [ 864.104473] kthread+0x149/0x170
  [ 864.109022] ? set_kthread_struct+0x40/0x40
  [ 864.114713] ret_from_fork+0x22/0x30
Fixes: 81091d7696ae ("RDMA/irdma: Add SW mechanism to generate completions on error")
Link: https://lore.kernel.org/r/20220824154358.117-1-shiraz.saleem@intel.com
Reported-by: Kamal Heib <kamalheib1@gmail.com>
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>

2022-08-21  RDMA/rtrs-srv: Pass the correct number of entries for dma mapped SGL  [Jack Wang, 1 file, -7/+7]
ib_dma_map_sg() augments the SGL into a 'dma mapped SGL'. This process may change the number of entries and the lengths of each entry. Code that touches dma_address is iterating over the 'dma mapped SGL' and must use the dma_nents value returned from ib_dma_map_sg(). So use the count returned from ib_dma_map_sg() for further usage.
Fixes: 9cb837480424e ("RDMA/rtrs: server: main functionality")
Link: https://lore.kernel.org/r/20220818105355.110344-4-haris.iqbal@ionos.com
Signed-off-by: Jack Wang <jinpu.wang@ionos.com>
Reviewed-by: Aleksei Marov <aleksei.marov@ionos.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>

2022-08-21  RDMA/rtrs-clt: Use the right sg_cnt after ib_dma_map_sg  [Jack Wang, 1 file, -4/+5]
When the iommu is enabled, we hit warnings like this:
  WARNING: at rtrs/rtrs.c:178 rtrs_iu_post_rdma_write_imm+0x9b/0x110
rtrs warns that one SGE entry has length 0, which is unexpected. The problem is that ib_dma_map_sg() augments the SGL into a 'dma mapped SGL'. This process may change the number of entries and the lengths of each entry. Code that touches dma_address is iterating over the 'dma mapped SGL' and must use the dma_nents value returned from ib_dma_map_sg(). So pass the count returned from ib_dma_map_sg().
Fixes: 6a98d71daea1 ("RDMA/rtrs: client: main functionality")
Link: https://lore.kernel.org/r/20220818105355.110344-3-haris.iqbal@ionos.com
Signed-off-by: Jack Wang <jinpu.wang@ionos.com>
Reviewed-by: Aleksei Marov <aleksei.marov@ionos.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>

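For illustration, a minimal sketch of the rule both rtrs fixes follow; variable names are placeholders, while ib_dma_map_sg(), for_each_sg() and struct ib_sge are the standard kernel APIs:

        /* Hedged sketch: iterate the mapped SGL with the count returned by
         * ib_dma_map_sg(), since mapping may merge entries. */
        nents = ib_dma_map_sg(ibdev, sglist, sg_cnt, DMA_TO_DEVICE);
        if (!nents)
                return -EIO;
        for_each_sg(sglist, sg, nents, i) {
                sge[i].addr   = sg_dma_address(sg);
                sge[i].length = sg_dma_len(sg);
                sge[i].lkey   = lkey;
        }
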
2022-08-16  RDMA: Handle the return code from dma_resv_wait_timeout() properly  [Jason Gunthorpe, 1 file, -1/+7]
ib_umem_dmabuf_map_pages() returns 0 on success and -ERRNO on failure. dma_resv_wait_timeout() uses a different scheme:
  * Returns -ERESTARTSYS if interrupted, 0 if the wait timed out, or
  * greater than zero on success.
This results in ib_umem_dmabuf_map_pages() being non-functional, as a positive return will be understood to be an error by drivers.
Fixes: f30bceab16d1 ("RDMA: use dma_resv_wait() instead of extracting the fence")
Cc: stable@kernel.org
Link: https://lore.kernel.org/r/0-v1-d8f4e1fa84c8+17-rdma_dmabuf_fix_jgg@nvidia.com
Tested-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>

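A hedged sketch of the return-code conversion this implies; the wait arguments are assumptions based on the cited commit, not a verbatim copy of the fix:

        /* Hedged sketch: dma_resv_wait_timeout() returns <0 on error, 0 on
         * timeout and >0 on success; fold that into 0/-ERRNO for callers. */
        ret = dma_resv_wait_timeout(umem_dmabuf->attach->dmabuf->resv,
                                    DMA_RESV_USAGE_KERNEL, false,
                                    MAX_SCHEDULE_TIMEOUT);
        if (ret < 0)
                return ret;             /* e.g. -ERESTARTSYS */
        if (ret == 0)
                return -ETIMEDOUT;      /* wait timed out */
        return 0;                       /* fences signaled */
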
2022-08-16  RDMA/erdma: Correct the max_qp and max_cq capacities of the device  [Cheng Xu, 1 file, -2/+2]
QP0 in HW is used for the CMDQ, and the rest are available for RDMA QPs. So the actual max_qp capacity reported to the core should be the HW-reported max_qp minus 1. The same applies to max_cq.
Fixes: 155055771704 ("RDMA/erdma: Add verbs implementation")
Link: https://lore.kernel.org/all/20220810014320.88026-1-chengyou@linux.alibaba.com
Signed-off-by: Cheng Xu <chengyou@linux.alibaba.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>

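A minimal sketch of the adjustment; the structure and field names are illustrative, not the exact erdma attribute layout:

        /* Hedged sketch: QP0/CQ0 are reserved for the command queue, so
         * advertise one less than the hardware-reported limits. */
        attr->max_qp = dev_attrs->max_qp - 1;
        attr->max_cq = dev_attrs->max_cq - 1;
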
2022-08-16  RDMA/erdma: Using the key in FMR WR instead of MR structure  [Cheng Xu, 1 file, -1/+1]
The RDMA driver should get the MR key from the FMR WR, not from the MR structure passed in.
Fixes: 155055771704 ("RDMA/erdma: Add verbs implementation")
Link: https://lore.kernel.org/r/20220810014320.88026-2-chengyou@linux.alibaba.com
Signed-off-by: Cheng Xu <chengyou@linux.alibaba.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>

2022-08-16  RDMA/cxgb4: fix accept failure due to increased cpl_t5_pass_accept_rpl size  [Potnuri Bharat Teja, 1 file, -16/+9]
Commit c2ed5611afd7 increased the cpl_t5_pass_accept_rpl{} structure size by 8B to avoid a roundup. cpl_t5_pass_accept_rpl{} is a HW-specific structure, and increasing its size leads to unwanted adapter errors. This commit reverts cpl_t5_pass_accept_rpl{} back to its original size and allocates a zeroed skb buffer, thereby avoiding the memset for the iss field. Reorder the code to minimize chip type checks.
Fixes: c2ed5611afd7 ("iw_cxgb4: Use memset_startat() for cpl_t5_pass_accept_rpl")
Link: https://lore.kernel.org/r/20220809184118.2029-1-rahul.lakkireddy@chelsio.com
Signed-off-by: Potnuri Bharat Teja <bharat@chelsio.com>
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Leon Romanovsky <leon@kernel.org>

2022-08-16  RDMA/mlx5: Use the proper number of ports  [Mark Bloch, 1 file, -18/+16]
The cited commit allowed the driver to operate over HCAs that have 4 physical ports. Use the number of ports of the RDMA device in the for loop instead of using the struct size.
Fixes: 4cd14d44b11d ("net/mlx5: Support devices with more than 2 ports")
Link: https://lore.kernel.org/r/a54a56c2ede16044a29d119209b35189c662ac72.1659944855.git.leonro@nvidia.com
Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>

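For illustration only, a sketch of the loop-bound change; the callee is a placeholder, while num_ports is the usual mlx5_ib device field:

        /* Hedged sketch: bound the loop by the device's actual port count
         * rather than the size of the port array. */
        for (i = 0; i < dev->num_ports; i++)
                cleanup_one_port(dev, i);       /* illustrative callee */
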
2022-08-16  IB/iser: Fix login with authentication  [Sergey Gorenko, 1 file, -1/+6]
The iSER Initiator uses two types of receive buffers:
 - one big login buffer posted by iser_post_recvl();
 - several small message buffers posted by iser_post_recvm().
The login buffer is used in the login phase and in the full feature phase of the discovery session. It may take a few requests and responses to complete the login phase. The message buffers are only used in the normal operational session at the full feature phase. After the commit referred to in the Fixes line, the login operation fails if authentication is enabled. That happens because the Initiator posts a small receive buffer after the first response from the Target, so the next send operation fails because the Target's second response does not fit into the small receive buffer. This commit adds additional checks to prevent posting small receive buffers until the full feature phase.
Fixes: 39b169ea0d36 ("IB/iser: Fix RNR errors")
Link: https://lore.kernel.org/r/20220805060135.18493-1-sergeygo@nvidia.com
Signed-off-by: Sergey Gorenko <sergeygo@nvidia.com>
Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>

2022-08-06  Merge tag 'dma-mapping-5.20-2022-08-06' of git://git.infradead.org/users/hch/dma-mapping  [Linus Torvalds, 1 file, -36/+9]
Pull dma-mapping updates from Christoph Hellwig:
 - convert arm32 to the common dma-direct code (Arnd Bergmann, Robin Murphy, Christoph Hellwig)
 - restructure the PCIe peer to peer mapping support (Logan Gunthorpe)
 - allow the IOMMU code to communicate an optional DMA mapping length and use that in scsi and libata (John Garry)
 - split the global swiotlb lock (Tianyu Lan)
 - various fixes and cleanup (Chao Gao, Dan Carpenter, Dongli Zhang, Lukas Bulwahn, Robin Murphy)
* tag 'dma-mapping-5.20-2022-08-06' of git://git.infradead.org/users/hch/dma-mapping: (45 commits)
  swiotlb: fix passing local variable to debugfs_create_ulong()
  dma-mapping: reformat comment to suppress htmldoc warning
  PCI/P2PDMA: Remove pci_p2pdma_[un]map_sg()
  RDMA/rw: drop pci_p2pdma_[un]map_sg()
  RDMA/core: introduce ib_dma_pci_p2p_dma_supported()
  nvme-pci: convert to using dma_map_sgtable()
  nvme-pci: check DMA ops when indicating support for PCI P2PDMA
  iommu/dma: support PCI P2PDMA pages in dma-iommu map_sg
  iommu: Explicitly skip bus address marked segments in __iommu_map_sg()
  dma-mapping: add flags to dma_map_ops to indicate PCI P2PDMA support
  dma-direct: support PCI P2PDMA pages in dma-direct map_sg
  dma-mapping: allow EREMOTEIO return code for P2PDMA transfers
  PCI/P2PDMA: Introduce helpers for dma_map_sg implementations
  PCI/P2PDMA: Attempt to set map_type if it has not been set
  lib/scatterlist: add flag for indicating P2PDMA segments in an SGL
  swiotlb: clean up some coding style and minor issues
  dma-mapping: update comment after dmabounce removal
  scsi: sd: Add a comment about limiting max_sectors to shost optimal limit
  ata: libata-scsi: cap ata_device->max_sectors according to shost->max_sectors
  scsi: scsi_transport_sas: cap shost opt_sectors according to DMA optimal limit
  ...

2022-08-05  Merge tag 'trace-v6.0' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace  [Linus Torvalds, 1 file, -6/+2]
Pull tracing updates from Steven Rostedt:
 - Runtime verification infrastructure. This is the biggest change here. It introduces the runtime verification that is necessary for running Linux on safety critical systems. It allows for deterministic automata models to be inserted into the kernel that will attach to tracepoints, where the information on these tracepoints will move the model from state to state. If a state is encountered that does not belong to the model, it will then activate a given reactor, that could just inform the user or even panic the kernel (for which safety critical systems will detect and can recover from).
 - Two monitor models are also added: Wakeup In Preemptive (WIP - not to be confused with "work in progress"), and Wakeup While Not Running (WWNR).
 - Added __vstring() helper to the TRACE_EVENT() macro to replace several vsnprintf() usages that were all doing it wrong.
 - eprobes now can have their event autogenerated when the event name is left off.
 - The rest is various cleanups and fixes.
* tag 'trace-v6.0' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace: (50 commits)
  rv: Unlock on error path in rv_unregister_reactor()
  tracing: Use alignof__(struct {type b;}) instead of offsetof()
  tracing/eprobe: Show syntax error logs in error_log file
  scripts/tracing: Fix typo 'the the' in comment
  tracepoints: It is CONFIG_TRACEPOINTS not CONFIG_TRACEPOINT
  tracing: Use free_trace_buffer() in allocate_trace_buffers()
  tracing: Use a struct alignof to determine trace event field alignment
  rv/reactor: Add the panic reactor
  rv/reactor: Add the printk reactor
  rv/monitor: Add the wwnr monitor
  rv/monitor: Add the wip monitor
  rv/monitor: Add the wip monitor skeleton created by dot2k
  Documentation/rv: Add deterministic automata instrumentation documentation
  Documentation/rv: Add deterministic automata monitor synthesis documentation
  tools/rv: Add dot2k
  Documentation/rv: Add deterministic automaton documentation
  tools/rv: Add dot2c
  Documentation/rv: Add a basic documentation
  rv/include: Add instrumentation helper functions
  rv/include: Add deterministic automata monitor definition via C macros
  ...

2022-08-04  Merge tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma  [Linus Torvalds, 78 files, -982/+8247]
Pull rdma updates from Jason Gunthorpe:
 "This cycle we got a new RDMA driver "ERDMA" for the Alibaba cloud environment. Otherwise the changes are dominated by rxe fixes. There is another RDMA driver on the list that might get merged next cycle, 'MANA' for the Azure cloud environment.
 Summary:
 - Bug fixes and small features for irdma, hns, siw, qedr, hfi1, mlx5
 - General spelling/grammar fixes
 - rdma cm can follow changes in neighbours for control packets
 - Significant amounts of rxe fixes and spec compliance changes
 - Use the modern NAPI API
 - Use the bitmap API instead of open coding
 - Performance improvements for rtrs
 - Add the ERDMA driver for Alibaba cloud
 - Fix a use after free bug in SRP"
* tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma: (99 commits)
  RDMA/ib_srpt: Unify checking rdma_cm_id condition in srpt_cm_req_recv()
  RDMA/rxe: Fix error unwind in rxe_create_qp()
  RDMA/mlx5: Add missing check for return value in get namespace flow
  RDMA/rxe: Split qp state for requester and completer
  RDMA/rxe: Generate error completion for error requester QP state
  RDMA/rxe: Update wqe_index for each wqe error completion
  RDMA/srpt: Fix a use-after-free
  RDMA/srpt: Introduce a reference count in struct srpt_device
  RDMA/srpt: Duplicate port name members
  IB/qib: Fix repeated "in" within comments
  RDMA/erdma: Add driver to kernel build environment
  RDMA/erdma: Add the ABI definitions
  RDMA/erdma: Add the erdma module
  RDMA/erdma: Add connection management (CM) support
  RDMA/erdma: Add verbs implementation
  RDMA/erdma: Add verbs header file
  RDMA/erdma: Add event queue implementation
  RDMA/erdma: Add cmdq implementation
  RDMA/erdma: Add main include file
  RDMA/erdma: Add the hardware related definitions
  ...

2022-08-04  Merge tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi  [Linus Torvalds, 1 file, -2/+2]
Pull SCSI updates from James Bottomley:
 "Updates to the usual drivers (ufs, qla2xx, target, lpfc, smartpqi, mpi3mr). The main driver change that might cause issues on down the road is the conversion of some of our oldest surviving drivers to the DMA API (should only affect m68k). The only major core change is the rework of async resume; the rest are either completely trivial or for updating deprecated APIs"
* tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi: (195 commits)
  scsi: target: Remove XDWRITEREAD emulated support
  scsi: megaraid: Remove the static variable initialisation
  scsi: ch: Do not initialise statics to 0
  scsi: ufs: core: Fix spelling mistake "Cannnot" -> "Cannot"
  scsi: target: iscsi: Do not require target authentication
  scsi: target: iscsi: Allow AuthMethod=None
  scsi: target: iscsi: Support base64 in CHAP
  scsi: target: iscsi: Add support for extended CDB AHS
  scsi: ufs: dt-bindings: Add SC8280XP binding
  scsi: target: iscsi: Fix clang -Wformat warnings
  scsi: ufs: core: Read device property for ref clock
  scsi: libsas: Resume SAS host for phy reset or enable via sysfs
  scsi: hisi_sas: Modify v3 HW SATA completion error processing
  scsi: hisi_sas: Relocate DMA unmap of SMP task
  scsi: hisi_sas: Remove unnecessary variable to hold DMA map elements
  scsi: hisi_sas: Call hisi_sas_slave_configure() from slave_configure_v3_hw()
  scsi: mpi3mr: Delete a stray tab
  scsi: mpi3mr: Unlock on error path
  scsi: mpi3mr: Reduce VD queue depth on detecting throttling
  scsi: mpi3mr: Resource Based Metering
  ...

2022-08-03  Merge tag 'net-next-6.0' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next  [Linus Torvalds, 3 files, -19/+37]
Pull networking changes from Paolo Abeni:
 "Core:
 - Refactor the forward memory allocation to better cope with memory pressure with many open sockets, moving from a per socket cache to a per-CPU one
 - Replace rwlocks with RCU for better fairness in ping, raw sockets and IP multicast router.
 - Network-side support for IO uring zero-copy send.
 - A few skb drop reason improvements, including codegen the source file with string mapping instead of using macro magic.
 - Rename reference tracking helpers to a more consistent netdev_* schema.
 - Adapt u64_stats_t type to address load/store tearing issues.
 - Refine debug helper usage to reduce the log noise caused by bots.
 BPF:
 - Improve socket map performance, avoiding skb cloning on read operation.
 - Add support for 64 bits enum, to match types exposed by kernel.
 - Introduce support for sleepable uprobes program.
 - Introduce support for enum textual representation in libbpf.
 - New helpers to implement synproxy with eBPF/XDP.
 - Improve loop performances, inlining indirect calls when possible.
 - Removed all the deprecated libbpf APIs.
 - Implement new eBPF-based LSM flavor.
 - Add type match support, which allow accurate queries to the eBPF used types.
 - A few TCP congestion control framework usability improvements.
 - Add new infrastructure to manipulate CT entries via eBPF programs.
 - Allow for livepatch (KLP) and BPF trampolines to attach to the same kernel function.
 Protocols:
 - Introduce per network namespace lookup tables for unix sockets, increasing scalability and reducing contention.
 - Preparation work for Wi-Fi 7 Multi-Link Operation (MLO) support.
 - Add support to forcibly close TIME_WAIT TCP sockets via user-space tools.
 - Significant performance improvement for the TLS 1.3 receive path, both for zero-copy and not-zero-copy.
 - Support for changing the initial MPTCP subflow priority/backup status
 - Introduce virtually contiguous buffers for sockets over RDMA, to cope better with memory pressure.
 - Extend CAN ethtool support with timestamping capabilities
 - Refactor CAN build infrastructure to allow building only the needed features.
 Driver API:
 - Remove devlink mutex to allow parallel commands on multiple links.
 - Add support for pause stats in distributed switch.
 - Implement devlink helpers to query and flash line cards.
 - New helper for phy mode to register conversion.
 New hardware / drivers:
 - Ethernet DSA driver for the rockchip mt7531 on BPI-R2 Pro.
 - Ethernet DSA driver for the Renesas RZ/N1 A5PSW switch.
 - Ethernet DSA driver for the Microchip LAN937x switch.
 - Ethernet PHY driver for the Aquantia AQR113C EPHY.
 - CAN driver for the OBD-II ELM327 interface.
 - CAN driver for RZ/N1 SJA1000 CAN controller.
 - Bluetooth: Infineon CYW55572 Wi-Fi plus Bluetooth combo device.
 Drivers:
 - Intel Ethernet NICs:
   - i40e: add support for vlan pruning
   - i40e: add support for XDP fragmented packets
   - ice: improved vlan offload support
   - ice: add support for PPPoE offload
 - Mellanox Ethernet (mlx5):
   - refactor packet steering offload for performance and scalability
   - extend support for TC offload
   - refactor devlink code to clean-up the locking schema
   - support stacked vlans for bridge offloads
   - use TLS objects pool to improve connection rate
 - Netronome Ethernet NICs (nfp):
   - extend support for IPv6 fields mangling offload
   - add support for vepa mode in HW bridge
   - better support for virtio data path acceleration (VDPA)
   - enable TSO by default
 - Microsoft vNIC driver (mana):
   - add support for XDP redirect
 - Other Ethernet drivers:
   - bonding: add per-port priority support
   - microchip lan743x: extend phy support
   - Fungible funeth: support UDP segmentation offload and XDP xmit
   - Solarflare EF100: add support for virtual function representors
   - MediaTek SoC: add XDP support
 - Mellanox Ethernet/IB switch (mlxsw):
   - dropped support for unreleased H/W (XM router)
   - improved stats accuracy
   - unified bridge model conversion improving scalability (parts 1-6)
   - support for PTP in Spectrum-2 asics
 - Broadcom PHYs:
   - add PTP support for BCM54210E
   - add support for the BCM53128 internal PHY
 - Marvell Ethernet switches (prestera):
   - implement support for multicast forwarding offload
 - Embedded Ethernet switches:
   - refactor OcteonTx MAC filter for better scalability
   - improve TC H/W offload for the Felix driver
   - refactor the Microchip ksz8 and ksz9477 drivers to share the probe code (parts 1, 2), add support for phylink mac configuration
 - Other WiFi:
   - Microchip wilc1000: disable WEP support and enable WPA3
   - Atheros ath10k: encapsulation offload support
 Old code removal:
 - Neterion vxge ethernet driver: this is untouched since more than 10 years"
* tag 'net-next-6.0' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next: (1890 commits)
  doc: sfp-phylink: Fix a broken reference
  wireguard: selftests: support UML
  wireguard: allowedips: don't corrupt stack when detecting overflow
  wireguard: selftests: update config fragments
  wireguard: ratelimiter: use hrtimer in selftest
  net/mlx5e: xsk: Discard unaligned XSK frames on striding RQ
  net: usb: ax88179_178a: Bind only to vendor-specific interface
  selftests: net: fix IOAM test skip return code
  net: usb: make USB_RTL8153_ECM non user configurable
  net: marvell: prestera: remove reduntant code
  octeontx2-pf: Reduce minimum mtu size to 60
  net: devlink: Fix missing mutex_unlock() call
  net/tls: Remove redundant workqueue flush before destroy
  net: txgbe: Fix an error handling path in txgbe_probe()
  net: dsa: Fix spelling mistakes and cleanup code
  Documentation: devlink: add add devlink-selftests to the table of contents
  dccp: put dccp_qpolicy_full() and dccp_qpolicy_push() in the same lock
  net: ionic: fix error check for vlan flags in ionic_set_nic_features()
  net: ice: fix error NETIF_F_HW_VLAN_CTAG_FILTER check in ice_vsi_sync_fltr()
  nfp: flower: add support for tunnel offload without key ID
  ...

2022-08-02  Merge tag 'for-5.20/block-2022-07-29' of git://git.kernel.dk/linux-block  [Linus Torvalds, 1 file, -2/+1]
Pull block updates from Jens Axboe:
 - Improve the type checking of request flags (Bart)
 - Ensure queue mapping for a single queues always picks the right queue (Bart)
 - Sanitize the io priority handling (Jan)
 - rq-qos race fix (Jinke)
 - Reserved tags handling improvements (John)
 - Separate memory alignment from file/disk offset alignment for O_DIRECT (Keith)
 - Add new ublk driver, userspace block driver using io_uring for communication with the userspace backend (Ming)
 - Use try_cmpxchg() to cleanup the code in various spots (Uros)
 - Finally remove bdevname() (Christoph)
 - Clean up the zoned device handling (Christoph)
 - Clean up independent access range support (Christoph)
 - Clean up and improve block sysfs handling (Christoph)
 - Clean up and improve teardown of block devices. This turns the usual two step process into something that is simpler to implement and handle in block drivers (Christoph)
 - Clean up chunk size handling (Christoph)
 - Misc cleanups and fixes (Bart, Bo, Dan, GuoYong, Jason, Keith, Liu, Ming, Sebastian, Yang, Ying)
* tag 'for-5.20/block-2022-07-29' of git://git.kernel.dk/linux-block: (178 commits)
  ublk_drv: fix double shift bug
  ublk_drv: make sure that correct flags(features) returned to userspace
  ublk_drv: fix error handling of ublk_add_dev
  ublk_drv: fix lockdep warning
  block: remove __blk_get_queue
  block: call blk_mq_exit_queue from disk_release for never added disks
  blk-mq: fix error handling in __blk_mq_alloc_disk
  ublk: defer disk allocation
  ublk: rewrite ublk_ctrl_get_queue_affinity to not rely on hctx->cpumask
  ublk: fold __ublk_create_dev into ublk_ctrl_add_dev
  ublk: cleanup ublk_ctrl_uring_cmd
  ublk: simplify ublk_ch_open and ublk_ch_release
  ublk: remove the empty open and release block device operations
  ublk: remove UBLK_IO_F_PREFLUSH
  ublk: add a MAINTAINERS entry
  block: don't allow the same type rq_qos add more than once
  mmc: fix disk/queue leak in case of adding disk failure
  ublk_drv: fix an IS_ERR() vs NULL check
  ublk: remove UBLK_IO_F_INTEGRITY
  ublk_drv: remove unneeded semicolon
  ...

2022-08-02  RDMA/ib_srpt: Unify checking rdma_cm_id condition in srpt_cm_req_recv()  [Li Zhijian, 1 file, -4/+4]
Although the rdma_cm_id and ib_cm_id passed to srpt_cm_req_recv() are currently mutually exclusive, all other checking conditions use rdma_cm_id. So unify the 'if' condition to make the code clearer.
Link: https://lore.kernel.org/r/1659336226-2-1-git-send-email-lizhijian@fujitsu.com
Signed-off-by: Li Zhijian <lizhijian@fujitsu.com>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

2022-08-02  RDMA/rxe: Fix error unwind in rxe_create_qp()  [Zhu Yanjun, 1 file, -4/+8]
In the function rxe_create_qp(), rxe_qp_from_init() is called to initialize the qp; internally, things like the spin locks are not set up until rxe_qp_init_req(). If an error occurs before this point, the unwind will call rxe_cleanup() and eventually rxe_qp_do_cleanup()/rxe_cleanup_task(), which will oops when trying to access the uninitialized spinlock. Move the spinlock initializations earlier, before any failures can occur.
Fixes: 8700e3e7c485 ("Soft RoCE driver")
Link: https://lore.kernel.org/r/20220731063621.298405-1-yanjun.zhu@linux.dev
Reported-by: syzbot+833061116fa28df97f3b@syzkaller.appspotmail.com
Signed-off-by: Zhu Yanjun <yanjun.zhu@linux.dev>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

2022-08-02  RDMA/mlx5: Add missing check for return value in get namespace flow  [Maor Gottlieb, 1 file, -4/+2]
Add a missing check of the return value when calling mlx5_ib_ft_type_to_namespace(), even though it can't really fail in this specific call.
Fixes: 52438be44112 ("RDMA/mlx5: Allow inserting a steering rule to the FDB")
Link: https://lore.kernel.org/r/7b9ceda217d9368a51dc47a46b769bad4af9ac92.1659256069.git.leonro@nvidia.com
Reviewed-by: Itay Aveksis <itayav@nvidia.com>
Signed-off-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

2022-08-02  RDMA/rxe: Split qp state for requester and completer  [Bob Pearson, 4 files, -3/+10]
Currently the requester can continue to process send wqes after a local qp operation error is detected, because setting the qp state to the error state is deferred until later. This patch splits the qp state for the completer and requester into two separate states and sets qp->req.state = QP_STATE_ERROR as soon as the error is detected, before another wqe can be executed.
Link: https://lore.kernel.org/r/1658307368-1851-4-git-send-email-lizhijian@fujitsu.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

2022-08-02  RDMA/rxe: Generate error completion for error requester QP state  [Li Zhijian, 1 file, -1/+12]
As per the IBTA specification, all subsequent WQEs while the QP is in the error state should be completed with a flush error. Here we check QP_STATE_ERROR after req_next_wqe() so that rxe_completer() has a chance to be called, where it will set the CQ state to FLUSH ERROR and the completion can be associated with its WQE.
Link: https://lore.kernel.org/r/1658307368-1851-3-git-send-email-lizhijian@fujitsu.com
Signed-off-by: Li Zhijian <lizhijian@fujitsu.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

2022-08-02  RDMA/rxe: Update wqe_index for each wqe error completion  [Li Zhijian, 1 file, -0/+2]
Previously, if user space kept sending abnormal wqes, queue.index would keep increasing while qp->req.wqe_index did not. Once qp->req.wqe_index == queue.index in the next round, req_next_wqe() would treat the queue as empty, and in that case no new completion would be generated. Update wqe_index for each wqe completion so that req_next_wqe() can get the next wqe properly.
Link: https://lore.kernel.org/r/1658307368-1851-2-git-send-email-lizhijian@fujitsu.com
Signed-off-by: Li Zhijian <lizhijian@fujitsu.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

2022-07-29  RDMA/srpt: Fix a use-after-free  [Bart Van Assche, 2 files, -46/+94]
Change the LIO port members inside struct srpt_port from regular members into pointers. Allocate the LIO port data structures from inside srpt_make_tport() and free these from inside srpt_make_tport(). Keep struct srpt_device as long as either an RDMA port or a LIO target port is associated with it. This patch decouples the lifetime of struct srpt_port (controlled by the RDMA core) and struct srpt_port_id (controlled by LIO). This patch fixes the following KASAN complaint:
  BUG: KASAN: use-after-free in srpt_enable_tpg+0x31/0x70 [ib_srpt]
  Read of size 8 at addr ffff888141cc34b8 by task check/5093
  Call Trace:
   <TASK>
   show_stack+0x4e/0x53
   dump_stack_lvl+0x51/0x66
   print_address_description.constprop.0.cold+0xea/0x41e
   print_report.cold+0x90/0x205
   kasan_report+0xb9/0xf0
   __asan_load8+0x69/0x90
   srpt_enable_tpg+0x31/0x70 [ib_srpt]
   target_fabric_tpg_base_enable_store+0xe2/0x140 [target_core_mod]
   configfs_write_iter+0x18b/0x210
   new_sync_write+0x1f2/0x2f0
   vfs_write+0x3e3/0x540
   ksys_write+0xbb/0x140
   __x64_sys_write+0x42/0x50
   do_syscall_64+0x34/0x80
   entry_SYSCALL_64_after_hwframe+0x46/0xb0
   </TASK>
Link: https://lore.kernel.org/r/20220727193415.1583860-4-bvanassche@acm.org
Reported-by: Li Zhijian <lizhijian@fujitsu.com>
Tested-by: Li Zhijian <lizhijian@fujitsu.com>
Fixes: a42d985bd5b2 ("ib_srpt: Initial SRP Target merge for v3.3-rc1")
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

2022-07-29  RDMA/srpt: Introduce a reference count in struct srpt_device  [Bart Van Assche, 2 files, -2/+17]
This will be used to keep struct srpt_device around as long as either the RDMA port exists or a LIO target port is associated with the struct srpt_device.
Link: https://lore.kernel.org/r/20220727193415.1583860-3-bvanassche@acm.org
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

2022-07-29  RDMA/srpt: Duplicate port name members  [Bart Van Assche, 2 files, -6/+13]
Prepare for decoupling the lifetimes of struct srpt_port and struct srpt_port_id by duplicating the port name into struct srpt_port.
Link: https://lore.kernel.org/r/20220727193415.1583860-2-bvanassche@acm.org
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

2022-07-28  IB/qib: Fix repeated "in" within comments  [wangjianli, 1 file, -1/+1]
Delete the redundant word 'in'.
Link: https://lore.kernel.org/r/20220724074407.18552-1-wangjianli@cdjrlc.com
Signed-off-by: wangjianli <wangjianli@cdjrlc.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

2022-07-27  Merge branch 'erdma' into rdma.git for-next  [Jason Gunthorpe, 15 files, -7/+6420]
Cheng Xu says:
====================
This v14 patch set introduces the Elastic RDMA Adapter (ERDMA) driver, which was released by Alibaba at the Apsara Conference 2021. The PR for the ERDMA userspace provider has already been created [1].
ERDMA enables large-scale RDMA acceleration capability in the Alibaba ECS environment, initially offered in the g7re instance. It can significantly improve the efficiency of large-scale distributed computing and communication and expand dynamically with the cluster scale of Alibaba Cloud.
ERDMA is an RDMA networking adapter based on the Alibaba MOC hardware. It works in the VPC network environment (overlay network) and uses the iWarp transport protocol. ERDMA supports reliable connection (RC), and supports both kernel space and user space verbs. We already support HPC/AI applications with libfabric, NoF and some other internal verbs libraries, such as xrdma, epsl, etc.
For an ECS instance with RDMA enabled, our MOC hardware generates two kinds of PCI devices: one for ERDMA, and one for the original net device (virtio-net). They are separate PCI devices.
====================
* branch 'erdma':
  RDMA/erdma: Add driver to kernel build environment
  RDMA/erdma: Add the ABI definitions
  RDMA/erdma: Add the erdma module
  RDMA/erdma: Add connection management (CM) support
  RDMA/erdma: Add verbs implementation
  RDMA/erdma: Add verbs header file
  RDMA/erdma: Add event queue implementation
  RDMA/erdma: Add cmdq implementation
  RDMA/erdma: Add main include file
  RDMA/erdma: Add the hardware related definitions
  RDMA: Add ERDMA to rdma_driver_id definition
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

2022-07-27  RDMA/erdma: Add driver to kernel build environment  [Cheng Xu, 4 files, -7/+25]
Add erdma to the kernel build environment, and sort the source order in drivers/infiniband/Kconfig.
Link: https://lore.kernel.org/r/20220727014927.76564-12-chengyou@linux.alibaba.com
Signed-off-by: Cheng Xu <chengyou@linux.alibaba.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

2022-07-27  RDMA/erdma: Add the erdma module  [Cheng Xu, 1 file, -0/+608]
Add the main erdma module, which provides the interface to the infiniband subsystem. This commit includes a modification from Christophe that uses the bitmap API to allocate bitmaps instead of hand-writing them. The commit also fixes warnings reported by static checkers.
Link: https://lore.kernel.org/r/20220727014927.76564-10-chengyou@linux.alibaba.com
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Cheng Xu <chengyou@linux.alibaba.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

2022-07-27  RDMA/erdma: Add connection management (CM) support  [Cheng Xu, 2 files, -0/+1597]
ERDMA's transport protocol is iWarp, so the driver must support the CM interface. In the CM part, we use the same approach as SoftiWarp: a kernel socket sets up the connection, then MPA negotiation is performed in the kernel. So this part of the code mainly comes from SoftiWarp; based on it, we add some more features, such as a non-blocking iw_connect implementation. This commit also fixes a duplicated include issue reported by the Abaci Robot.
Link: https://lore.kernel.org/r/20220727014927.76564-9-chengyou@linux.alibaba.com
Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Signed-off-by: Yang Li <yang.lee@linux.alibaba.com>
Signed-off-by: Cheng Xu <chengyou@linux.alibaba.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

2022-07-27  RDMA/erdma: Add verbs implementation  [Cheng Xu, 3 files, -0/+2231]
The RDMA verbs implementation of erdma is divided into three files: erdma_qp.c, erdma_cq.c, and erdma_verbs.c. Internally used functions and the datapath functions of the QP/CQ are put in erdma_qp.c and erdma_cq.c; the rest is in erdma_verbs.c. This commit also fixes some static check warnings.
Link: https://lore.kernel.org/r/20220727014927.76564-8-chengyou@linux.alibaba.com
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Signed-off-by: Yang Li <yang.lee@linux.alibaba.com>
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: Cheng Xu <chengyou@linux.alibaba.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

2022-07-27  RDMA/erdma: Add verbs header file  [Cheng Xu, 1 file, -0/+342]
This header file defines the main structures and functions used for RDMA verbs, including the qp, cq, mr, ucontext, etc.
Link: https://lore.kernel.org/r/20220727014927.76564-7-chengyou@linux.alibaba.com
Signed-off-by: Cheng Xu <chengyou@linux.alibaba.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

2022-07-27  RDMA/erdma: Add event queue implementation  [Cheng Xu, 1 file, -0/+329]
The event queue (EQ) is the main notification path from the erdma hardware to its driver. Each erdma device contains two kinds of EQs: the asynchronous EQ (AEQ) and completion EQs (CEQs). Each device has one AEQ, which is used for RDMA asynchronous event reporting, and up to 32 CEQs (numbered CEQ0 to CEQ31). CEQ0 is used for cmdq completion event reporting, and the remaining CEQs are used for RDMA completion event reporting.
Link: https://lore.kernel.org/r/20220727014927.76564-6-chengyou@linux.alibaba.com
Signed-off-by: Cheng Xu <chengyou@linux.alibaba.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

2022-07-27  RDMA/erdma: Add cmdq implementation  [Cheng Xu, 1 file, -0/+493]
The cmdq is the main control plane channel between the erdma driver and the hardware. After the erdma device is initialized, the cmdq channel stays active for the whole lifecycle of the driver. This commit also includes two modifications from Christophe: one uses the bitmap API to allocate bitmaps instead of hand-writing them, and the other uses the non-atomic bitmap API when applicable.
Link: https://lore.kernel.org/r/20220727014927.76564-5-chengyou@linux.alibaba.com
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: Cheng Xu <chengyou@linux.alibaba.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

2022-07-27  RDMA/erdma: Add main include file  [Cheng Xu, 1 file, -0/+287]
Add the ERDMA driver main header file, defining internally used data structures and operations. The defined data structures include *cmdq*, which is used as the communication channel between the ERDMA driver and the hardware.
Link: https://lore.kernel.org/r/20220727014927.76564-4-chengyou@linux.alibaba.com
Signed-off-by: Cheng Xu <chengyou@linux.alibaba.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

2022-07-27  RDMA/erdma: Add the hardware related definitions  [Cheng Xu, 1 file, -0/+508]
ERDMA is a PCIe device, and this file provides the ERDMA hardware-related definitions, mainly including PCIe device capabilities and restrictions, device register definitions, the doorbell space, doorbell structure definitions, and WQE definitions.
Link: https://lore.kernel.org/r/20220727014927.76564-3-chengyou@linux.alibaba.com
Signed-off-by: Cheng Xu <chengyou@linux.alibaba.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>