path: root/drivers/infiniband
Age         Commit message  (Author, files changed, lines -/+)
2020-12-01  RDMA/hns: Bugfix for calculation of extended sge  (Yangyang Li, 1 file, -1/+6)

    The hardware requires the number of extended sges to be page-aligned.
    If the space needed for the extended sges is larger than one page,
    roundup_pow_of_two() already guarantees that. But if the number of
    needed extended sges is non-zero and does not fill a whole page, the
    driver must align it explicitly.

    Fixes: 54d6638765b0 ("RDMA/hns: Optimize WQE buffer size calculating process")
    Link: https://lore.kernel.org/r/1606558959-48510-3-git-send-email-liweihang@huawei.com
    Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
    Signed-off-by: Weihang Li <liweihang@huawei.com>
    Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
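    A minimal sketch of the corrected calculation, assuming a 16-byte sge
    entry; hns_roce_ext_sge_cnt() and HNS_SGE_SIZE are hypothetical names,
    not the driver's actual identifiers:

        #include <linux/log2.h>  /* roundup_pow_of_two() */
        #include <linux/mm.h>    /* PAGE_SIZE, ALIGN() */

        #define HNS_SGE_SIZE 16  /* assumed size of one extended sge */

        static unsigned int hns_roce_ext_sge_cnt(unsigned int cnt)
        {
                unsigned int sge_per_page = PAGE_SIZE / HNS_SGE_SIZE;

                if (!cnt)
                        return 0;

                /* A power of two above one page is already page-aligned. */
                if (cnt > sge_per_page)
                        return roundup_pow_of_two(cnt);

                /* Otherwise, pad the count out to a full page explicitly. */
                return ALIGN(cnt, sge_per_page);
        }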
2020-12-01  RDMA/hns: Fix 0-length sge calculation error  (Lang Cheng, 1 file, -14/+10)

    One RC SQ WQE can store 2 sges inline, but a UD WQE cannot. So when
    setting the extended sges for RC, skip the first 2 valid sges of
    wr.sg_list, which have already been filled into the WQE. Neither RC
    nor UD may contain 0-length sges, so such sges should be skipped.

    Fixes: 54d6638765b0 ("RDMA/hns: Optimize WQE buffer size calculating process")
    Link: https://lore.kernel.org/r/1606558959-48510-2-git-send-email-liweihang@huawei.com
    Signed-off-by: Lang Cheng <chenglang@huawei.com>
    Signed-off-by: Weihang Li <liweihang@huawei.com>
    Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
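    A rough sketch of the fill loop this implies; fill_ext_sge() and
    HNS_SGE_IN_WQE are illustrative names, not the driver's real ones:

        #include <rdma/ib_verbs.h>

        #define HNS_SGE_IN_WQE 2  /* sges an RC WQE stores inline */

        static void fill_ext_sge(const struct ib_send_wr *wr, bool is_rc)
        {
                unsigned int skip = is_rc ? HNS_SGE_IN_WQE : 0;
                unsigned int i, valid = 0;

                for (i = 0; i < wr->num_sge; i++) {
                        /* 0-length sges are invalid for both RC and UD. */
                        if (!wr->sg_list[i].length)
                                continue;

                        /* For RC, the first 2 valid sges are in the WQE. */
                        if (valid++ < skip)
                                continue;

                        /* write wr->sg_list[i] into the extended sge space */
                }
        }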
2020-12-01  RDMA/efa: Use the correct current and new states in modify QP  (Gal Pressman, 1 file, -2/+2)

    The local variables cur_state and new_state hold the state that should
    be used for the modify QP operation instead of the ones in the
    ib_qp_attr struct.

    Fixes: 40909f664d27 ("RDMA/efa: Add EFA verbs implementation")
    Link: https://lore.kernel.org/r/20201201091724.37016-1-galpress@amazon.com
    Reviewed-by: Firas JahJah <firasj@amazon.com>
    Reviewed-by: Yossi Leybovich <sleybo@amazon.com>
    Signed-off-by: Gal Pressman <galpress@amazon.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-12-01  RDMA/qedr: iWARP invalid (zero) doorbell address fix  (Alok Prasad, 1 file, -0/+9)

    This patch fixes an issue introduced by a previous commit, where the
    iWARP doorbell address wasn't initialized, causing a call trace
    whenever an RDMA application tried to use this interface:

      Illegal doorbell address: 0000000000000000. Legal range for doorbell addresses is [0000000011431e08..00000000ec3799d3]
      WARNING: CPU: 11 PID: 11990 at drivers/net/ethernet/qlogic/qed/qed_dev.c:93 qed_db_rec_sanity.isra.12+0x48/0x70 [qed]
      ...
      hpsa scsi_transport_sas [last unloaded: crc8]
      CPU: 11 PID: 11990 Comm: rping Tainted: G S 5.10.0-rc1 #29
      Hardware name: HP ProLiant DL380 Gen9/ProLiant DL380 Gen9, BIOS P89 01/22/2018
      RIP: 0010:qed_db_rec_sanity.isra.12+0x48/0x70 [qed]
      ...
      RSP: 0018:ffffafc28458fa88 EFLAGS: 00010286
      RAX: 0000000000000000 RBX: ffff8d0d4c620000 RCX: 0000000000000000
      RDX: ffff8d10afde7d50 RSI: ffff8d10afdd8b40 RDI: ffff8d10afdd8b40
      RBP: ffffafc28458fe38 R08: 0000000000000003 R09: 0000000000007fff
      R10: 0000000000000001 R11: ffffafc28458f888 R12: 0000000000000000
      R13: 0000000000000000 R14: ffff8d0d43ccbbd0 R15: ffff8d0d48dae9c0
      FS: 00007fbd5267e740(0000) GS:ffff8d10afdc0000(0000) knlGS:0000000000000000
      CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      CR2: 00007fbd4f258fb8 CR3: 0000000108d96003 CR4: 00000000001706e0
      Call Trace:
       qed_db_recovery_add+0x6d/0x1f0 [qed]
       qedr_create_user_qp+0x57e/0xd30 [qedr]
       qedr_create_qp+0x5f3/0xab0 [qedr]
       ? lookup_get_idr_uobject.part.12+0x45/0x90 [ib_uverbs]
       create_qp+0x45d/0xb30 [ib_uverbs]
       ? ib_uverbs_cq_event_handler+0x30/0x30 [ib_uverbs]
       ib_uverbs_create_qp+0xb9/0xe0 [ib_uverbs]
       ib_uverbs_write+0x3f9/0x570 [ib_uverbs]
       ? security_mmap_file+0x62/0xe0
       vfs_write+0xb7/0x200
       ksys_write+0xaf/0xd0
       ? syscall_trace_enter.isra.25+0x152/0x200
       do_syscall_64+0x2d/0x40
       entry_SYSCALL_64_after_hwframe+0x44/0xa9

    Fixes: 06e8d1df46ed ("RDMA/qedr: Add support for user mode XRC-SRQ's")
    Link: https://lore.kernel.org/r/20201127163251.14533-1-palok@marvell.com
    Signed-off-by: Michal Kalderon <mkalderon@marvell.com>
    Signed-off-by: Igor Russkikh <irusskikh@marvell.com>
    Signed-off-by: Alok Prasad <palok@marvell.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-12-01  RDMA/i40iw: Remove push code from i40iw  (Shiraz Saleem, 8 files, -224/+18)

    The push feature does not work as expected in x722 and has
    historically been disabled in the driver. Purge all remaining code
    related to the push feature in i40iw.

    Link: https://lore.kernel.org/r/20201125005616.1800-3-shiraz.saleem@intel.com
    Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-12-01  Merge tag 'v5.10-rc6' into rdma.git for-next  (Jason Gunthorpe, 13 files, -93/+98)

    For dependencies in following patches.

    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-11-27  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Jakub Kicinski, 13 files, -93/+98)

    Trivial conflict in CAN, keep the net-next + the byteswap wrapper.

    Conflicts:
        drivers/net/can/usb/gs_usb.c

    Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-27  RDMA/hns: Add support for QP stash  (Lang Cheng, 2 files, -61/+75)

    Stash is a mechanism that uses the core information carried by the ARM
    AXI bus to access the L3 cache. It can be used to improve performance
    by increasing the L3 cache hit ratio. QPs should enable stash by
    default.

    Link: https://lore.kernel.org/r/1606374251-21512-3-git-send-email-liweihang@huawei.com
    Signed-off-by: Lang Cheng <chenglang@huawei.com>
    Signed-off-by: Weihang Li <liweihang@huawei.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-11-27  RDMA/hns: Add support for CQ stash  (Lang Cheng, 4 files, -16/+37)

    Stash is a mechanism that uses the core information carried by the ARM
    AXI bus to access the L3 cache. It can be used to improve performance
    by increasing the L3 cache hit ratio. CQs should enable stash by
    default.

    Link: https://lore.kernel.org/r/1606374251-21512-2-git-send-email-liweihang@huawei.com
    Signed-off-by: Lang Cheng <chenglang@huawei.com>
    Signed-off-by: Weihang Li <liweihang@huawei.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-11-27  RDMA/hns: Create QP with selected QPN for bank load balance  (Yangyang Li, 2 files, -19/+96)

    In order to improve performance by balancing the load between
    different banks of cache, the QPC cache is designed to choose one of 8
    banks according to the lower 3 bits of the QPN. The hns driver
    therefore counts the number of QPs on each bank and assigns a QP being
    created to the bank with the minimum load first.

    Link: https://lore.kernel.org/r/1606220649-1465-1-git-send-email-liweihang@huawei.com
    Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
    Signed-off-by: Weihang Li <liweihang@huawei.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
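    A minimal sketch of the bank selection; the names are hypothetical,
    and the real driver must also find a free QPN within the chosen bank's
    range:

        #include <linux/types.h>

        #define QP_BANK_NUM 8  /* bank index = QPN & 0x7 */

        static u8 least_loaded_bank(const u32 bank_cnt[QP_BANK_NUM])
        {
                u8 i, least = 0;

                for (i = 1; i < QP_BANK_NUM; i++)
                        if (bank_cnt[i] < bank_cnt[least])
                                least = i;

                /* allocate a QPN whose low 3 bits equal 'least' */
                return least;
        }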
2020-11-27  RDMA/restrack: Support all QP types  (Leon Romanovsky, 4 files, -27/+17)

    The latest changes in restrack name handling made it possible to
    simplify the QP creation code to support all types of QPs. For
    example, XRC QPs are now presented by rdmatool:

      $ ibv_xsrq_pingpong &
      $ rdma res show qp
      link ibp0s9/1 lqpn 0 type SMI state RTS sq-psn 0 comm [ib_core]
      link ibp0s9/1 lqpn 1 type GSI state RTS sq-psn 0 comm [ib_core]
      link ibp0s9/1 lqpn 7 type UD state RTS sq-psn 0 comm [mlx5_ib]
      link ibp0s9/1 lqpn 42 type XRC_TGT state INIT sq-psn 0 path-mig-state MIGRATED comm [ib_uverbs]
      link ibp0s9/1 lqpn 43 type XRC_INI state INIT sq-psn 0 path-mig-state MIGRATED pdn 197 pid 419 comm ibv_xsrq_pingpong

    Link: https://lore.kernel.org/r/20201117070148.1974114-4-leon@kernel.org
    Reviewed-by: Mark Zhang <markz@nvidia.com>
    Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-11-27  RDMA/core: Allow drivers to disable restrack DB  (Leon Romanovsky, 4 files, -4/+17)

    Driver QP types are a special case with no IBTA restrictions. For
    example, EFA implements creation of this QP type as a regular QP,
    while mlx5 separates the creation into two steps: create and modify.
    That separation leads to a situation where a DC QP (mlx5) is always
    added at the same xarray index, zero. This change allows drivers like
    mlx5 to simply disable restrack DB tracking; it does not disable the
    kref on the memory.

    Fixes: 52e0a118a203 ("RDMA/restrack: Track driver QP types in resource tracker")
    Link: https://lore.kernel.org/r/20201117070148.1974114-3-leon@kernel.org
    Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-11-27  RDMA/core: Track device memory MRs  (Leon Romanovsky, 1 file, -0/+4)

    Device memory (DM) is registered as an MR during the initialization
    flow, but these MRs were not tracked by the resource tracker and had
    res->valid set to false. Update the code to manage them too.

    Before this change:
      $ ibv_rc_pingpong -j &
      $ rdma res show mr   <-- shows nothing

    After this change:
      $ ibv_rc_pingpong -j &
      $ rdma res show mr
      dev ibp0s9 mrn 0 mrlen 4096 pdn 3 pid 734 comm ibv_rc_pingpong

    Fixes: be934cca9e98 ("IB/uverbs: Add device memory registration ioctl support")
    Link: https://lore.kernel.org/r/20201117070148.1974114-2-leon@kernel.org
    Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-11-26  RDMA/mlx5: Use PCI device for dma mappings  (Parav Pandit, 1 file, -9/+11)

    DMA operations of the IB device are done using ib_device->dma_device.
    Instead of accessing the parent of the IB device, use the PCI DMA
    device, which is set as ib_device->dma_device during IB device
    registration.

    Link: https://lore.kernel.org/r/20201125064628.8431-1-leon@kernel.org
    Signed-off-by: Parav Pandit <parav@nvidia.com>
    Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-11-26  RDMA/mlx5: Silence the overflow warning while building offset mask  (Leon Romanovsky, 1 file, -1/+1)

    Coverity reports a "Potentially overflowing expression ..." warning,
    which is a correct complaint from the compiler's point of view, even
    though the overflow is not possible in the current code. Still, this
    is a latent issue, as future code might need to use a 32-bit offset.
    Use ULL so the calculation works for shifts up to 63.

    Fixes: b045db62f6f6 ("RDMA/mlx5: Use ib_umem_find_best_pgoff() for SRQ")
    Link: https://lore.kernel.org/r/20201125061704.6580-1-leon@kernel.org
    Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-11-26  RDMA/mlx5: Check for ERR_PTR from uverbs_zalloc()  (Jason Gunthorpe, 1 file, -1/+1)

    The return code from uverbs_zalloc() was wrongly checked; it is
    ERR_PTR, not NULL like other allocators:

      drivers/infiniband/hw/mlx5/devx.c:2110 devx_umem_reg_cmd_alloc() warn: passing zero to 'PTR_ERR'

    Fixes: 878f7b31c3a7 ("RDMA/mlx5: Use ib_umem_find_best_pgsz() for devx")
    Link: https://lore.kernel.org/r/0-v1-4d05ccc1c223+173-devx_err_ptr_jgg@nvidia.com
    Reported-by: kernel test robot <lkp@intel.com>
    Acked-by: Leon Romanovsky <leonro@nvidia.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
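    A sketch of the corrected pattern; alloc_cmd() is a made-up wrapper,
    but uverbs_zalloc() and the IS_ERR()/PTR_ERR() helpers are the real
    kernel API:

        #include <linux/err.h>
        #include <rdma/uverbs_ioctl.h>

        static int alloc_cmd(struct uverbs_attr_bundle *attrs, size_t len,
                             void **out)
        {
                void *cmd = uverbs_zalloc(attrs, len);

                /* previously tested with "if (!cmd)", which never fires */
                if (IS_ERR(cmd))
                        return PTR_ERR(cmd);

                *out = cmd;
                return 0;
        }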
2020-11-26  RDMA/hns: Add UD support for HIP09  (Weihang Li, 4 files, -12/+37)

    HIP09 supports the Unreliable Datagram service type. Add the necessary
    processing to enable this feature.

    Link: https://lore.kernel.org/r/1605526408-6936-7-git-send-email-liweihang@huawei.com
    Signed-off-by: Weihang Li <liweihang@huawei.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-11-26  RDMA/hns: Simplify process of filling UD SQ WQE  (Weihang Li, 2 files, -69/+43)

    Some code in set_ud_wqe() can be simplified or encapsulated to make it
    easier to understand.

    Link: https://lore.kernel.org/r/1605526408-6936-6-git-send-email-liweihang@huawei.com
    Signed-off-by: Weihang Li <liweihang@huawei.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-11-26  RDMA/hns: Remove the portn field in UD SQ WQE  (Weihang Li, 2 files, -5/+0)

    This field in the UD WQE is not used by the hardware.

    Fixes: 7bdee4158b37 ("RDMA/hns: Fill sq wqe context of ud type in hip08")
    Link: https://lore.kernel.org/r/1605526408-6936-5-git-send-email-liweihang@huawei.com
    Signed-off-by: Weihang Li <liweihang@huawei.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-11-26  RDMA/hns: Avoid setting loopback indicator when smac is same as dmac  (Weihang Li, 1 file, -9/+0)

    The loopback flag will be set to 1 by the hardware when the source MAC
    address is the same as the destination MAC address, so the driver
    doesn't need to compare them.

    Fixes: d6a3627e311c ("RDMA/hns: Optimize wqe buffer set flow for post send")
    Link: https://lore.kernel.org/r/1605526408-6936-4-git-send-email-liweihang@huawei.com
    Signed-off-by: Weihang Li <liweihang@huawei.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-11-26  RDMA/hns: Fix missing fields in address vector  (Weihang Li, 1 file, -4/+6)

    The traffic class and hop limit in the address vector are not assigned
    from the GRH, but they are filled into the UD SQ WQE, so the hardware
    gets wrong values.

    Fixes: 82e620d9c3a0 ("RDMA/hns: Modify the data structure of hns_roce_av")
    Link: https://lore.kernel.org/r/1605526408-6936-3-git-send-email-liweihang@huawei.com
    Signed-off-by: Weihang Li <liweihang@huawei.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-11-26  RDMA/hns: Only record vlan info for HIP08  (Weihang Li, 3 files, -33/+33)

    Information about VLAN is stored in the GMV (GID/MAC/VLAN) table on
    HIP09, so there is no need to copy it to the address vector.

    Link: https://lore.kernel.org/r/1605526408-6936-2-git-send-email-liweihang@huawei.com
    Signed-off-by: Weihang Li <liweihang@huawei.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-11-26  RDMA/mlx4: Enable querying AH for XRC QP types  (Avihai Horon, 1 file, -1/+3)

    Address handle is set for connected QP types such as RC and UC, and
    thus can also be queried. Since XRC QP types INI and TGT are
    connected, it should be possible to query their address handle as
    well. Until now it was not the case, and although the firmware
    supported it, the driver allowed querying the address handle only for
    RC and UC. Hence, we enable it now for INI and TGT QPs as well.

    Link: https://lore.kernel.org/r/20201115121425.139833-3-leon@kernel.org
    Reviewed-by: Maor Gottlieb <maorg@nvidia.com>
    Signed-off-by: Avihai Horon <avihaih@nvidia.com>
    Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-11-26  RDMA/mlx5: Enable querying AH for XRC QP types  (Avihai Horon, 1 file, -1/+3)

    Address handle is set for connected QP types such as RC and UC, and
    thus can also be queried. Since XRC QP types INI and TGT are
    connected, it should be possible to query their address handle as
    well. Until now it was not the case, and although the firmware
    supported it, the driver allowed querying the address handle only for
    RC and UC. Hence, we enable it now for INI and TGT QPs as well.

    Link: https://lore.kernel.org/r/20201115121425.139833-2-leon@kernel.org
    Reviewed-by: Maor Gottlieb <maorg@nvidia.com>
    Signed-off-by: Avihai Horon <avihaih@nvidia.com>
    Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-11-26  RDMA/hns: Bugfix for memory window mtpt configuration  (Yixian Liu, 1 file, -0/+1)

    When a memory window is bound to a memory region, the local write
    access should be set for its mtpt table.

    Fixes: c7c28191408b ("RDMA/hns: Add MW support for hip08")
    Link: https://lore.kernel.org/r/1606386372-21094-1-git-send-email-liweihang@huawei.com
    Signed-off-by: Yixian Liu <liuyixian@huawei.com>
    Signed-off-by: Weihang Li <liweihang@huawei.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-11-26  RDMA/hns: Fix retry_cnt and rnr_cnt when querying QP  (Wenpeng Liang, 1 file, -4/+4)

    The maximum number of retransmissions should be returned when querying
    a QP, not the current value of the retransmission counter.

    Fixes: 99fcf82521d9 ("RDMA/hns: Fix the wrong value of rnr_retry when querying qp")
    Fixes: 926a01dc000d ("RDMA/hns: Add QP operations support for hip08 SoC")
    Link: https://lore.kernel.org/r/1606382977-21431-1-git-send-email-liweihang@huawei.com
    Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
    Signed-off-by: Weihang Li <liweihang@huawei.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-11-26  RDMA/hns: Fix wrong field of SRQ number the device supports  (Wenpeng Liang, 1 file, -1/+1)

    The SRQ capacity is read from the firmware; its field should end at
    bit 19.

    Fixes: ba6bb7e97421 ("RDMA/hns: Add interfaces to get pf capabilities from firmware")
    Link: https://lore.kernel.org/r/1606382812-23636-1-git-send-email-liweihang@huawei.com
    Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
    Signed-off-by: Weihang Li <liweihang@huawei.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-11-25  IB/hfi1: Ensure correct mm is used at all times  (Dennis Dalessandro, 8 files, -49/+79)

    Two earlier bug fixes have created a security problem in the hfi1
    driver. One fix aimed to solve an issue where current->mm was not
    valid when closing the hfi1 cdev. It attempted to do this by saving a
    cached value of the current->mm pointer at file open time. This is a
    problem if another process with access to the FD calls in via write()
    or ioctl() to pin pages via the hfi driver. The other fix tried to
    solve a use after free by taking a reference on the mm.

    To fix this correctly, use the existing cached value of the mm in the
    mmu notifier. Now we can check in the insert, evict, etc. routines
    that current->mm matches what the notifier was registered for, and if
    not, deny access. The registration of the mmu notifier saves the mm
    pointer. Since exit_mm() is called before exit_files() in do_exit(),
    which would call our close routine, a reference on the mm is needed.
    We rely on the mmgrab done by the registration of the notifier,
    whereas before it was explicit. The mmu notifier deregistration
    happens when the user context, whose creation triggered the
    registration, is torn down.

    Also of note: we do not do any explicit work to protect the interval
    tree notifier. This does not seem to be needed since we aren't
    actually doing anything with current->mm there. The interval tree
    notifier still has a FIXME noted from a previous commit that will be
    addressed in a follow-on patch.

    Cc: <stable@vger.kernel.org>
    Fixes: e0cf75deab81 ("IB/hfi1: Fix mm_struct use after free")
    Fixes: 3faa3d9a308e ("IB/hfi1: Make use of mm consistent")
    Link: https://lore.kernel.org/r/20201125210112.104301.51331.stgit@awfm-01.aw.intel.com
    Suggested-by: Jann Horn <jannh@google.com>
    Reported-by: Jason Gunthorpe <jgg@nvidia.com>
    Reviewed-by: Ira Weiny <ira.weiny@intel.com>
    Reviewed-by: Mike Marciniszyn <mike.marciniszyn@cornelisnetworks.com>
    Signed-off-by: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-11-25  RDMA/cma: Fix deadlock on &lock in rdma_cma_listen_on_all() error unwind  (Jason Gunthorpe, 1 file, -7/+18)

    rdma_destroy_id() cannot be called under &lock; we must instead keep
    the error'd ID around until &lock can be released, then destroy it.

    This is complicated by the usual way listen IDs are destroyed, through
    cma_process_remove(), which can run at any time and will
    asynchronously destroy the same ID. Remove the ID from visibility of
    cma_process_remove() before going down the destroy path outside the
    locking.

    Fixes: c80a0c52d85c ("RDMA/cma: Add missing error handling of listen_id")
    Link: https://lore.kernel.org/r/20201118133756.GK244516@ziepe.ca
    Reported-by: syzbot+1bc48bf7f78253f664a9@syzkaller.appspotmail.com
    Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
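    A sketch of the resulting unwind shape, with hypothetical helper and
    list-field names; the point is that rdma_destroy_id() runs only after
    &lock is dropped, and the ID is unhooked first so cma_process_remove()
    cannot race to destroy it:

        mutex_lock(&lock);
        ret = setup_listeners(id_priv);  /* hypothetical helper */
        if (ret)
                /* hide the ID from cma_process_remove() */
                list_del_init(&id_priv->list);
        mutex_unlock(&lock);

        if (ret)
                rdma_destroy_id(&id_priv->id);  /* safe: no lock held */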
2020-11-25  RDMA/i40iw: Address an mmap handler exploit in i40iw  (Shiraz Saleem, 2 files, -35/+7)

    i40iw_mmap manipulates vma->vm_pgoff to differentiate a push page mmap
    from a doorbell mmap, and uses it to compute the pfn in
    remap_pfn_range without any validation. This is vulnerable to an mmap
    exploit as described in:
    https://lore.kernel.org/r/20201119093523.7588-1-zhudi21@huawei.com

    The push feature is currently disabled in the driver, so no push mmaps
    are issued from user-space; the feature does not work as expected in
    the x722 product. Remove the push module parameter and all VMA
    attribute manipulations for this feature in i40iw_mmap. Update
    i40iw_mmap to only allow doorbell user mappings at offset = 0: check
    vm_pgoff for zero and that the mmap is bound to a single page.

    Cc: <stable@kernel.org>
    Fixes: d37498417947 ("i40iw: add files for iwarp interface")
    Link: https://lore.kernel.org/r/20201125005616.1800-2-shiraz.saleem@intel.com
    Reported-by: Di Zhu <zhudi21@huawei.com>
    Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
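    A minimal sketch of the hardened handler; db_mmap() and the db_pfn
    parameter are illustrative, but remap_pfn_range() and
    pgprot_noncached() are the real kernel APIs:

        #include <linux/mm.h>

        static int db_mmap(struct vm_area_struct *vma, unsigned long db_pfn)
        {
                /* reject non-zero offsets and multi-page requests */
                if (vma->vm_pgoff ||
                    vma->vm_end - vma->vm_start != PAGE_SIZE)
                        return -EINVAL;

                /* map the fixed doorbell pfn, never one derived from
                 * the user-controlled vm_pgoff */
                vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
                return remap_pfn_range(vma, vma->vm_start, db_pfn,
                                       PAGE_SIZE, vma->vm_page_prot);
        }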
2020-11-23  RDMA/hns: Refactor the hns_roce_buf allocation flow  (Xi Wang, 3 files, -105/+113)

    Add a group of flags to control the 'struct hns_roce_buf' allocation
    flow; this is used to support the caller running in atomic context.

    Link: https://lore.kernel.org/r/1605347916-15964-1-git-send-email-liweihang@huawei.com
    Signed-off-by: Xi Wang <wangxi11@huawei.com>
    Signed-off-by: Weihang Li <liweihang@huawei.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-11-23  IB/qib: Use dma_set_mask_and_coherent to simplify code  (Christophe JAILLET, 1 file, -9/+2)

    'pci_set_dma_mask()' + 'pci_set_consistent_dma_mask()' can be replaced
    by an equivalent 'dma_set_mask_and_coherent()', which is much less
    verbose.

    Link: https://lore.kernel.org/r/20201121095127.1335228-1-christophe.jaillet@wanadoo.fr
    Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
    Acked-by: Mike Marciniszyn <mike.marciniszyn@cornelisnetworks.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
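    A sketch of the substitution; setup_dma() is a made-up wrapper, while
    dma_set_mask_and_coherent() and DMA_BIT_MASK() are the real APIs:

        #include <linux/dma-mapping.h>
        #include <linux/pci.h>

        static int setup_dma(struct pci_dev *pdev)
        {
                /* replaces pci_set_dma_mask(pdev, DMA_BIT_MASK(64)) +
                 * pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64)) */
                return dma_set_mask_and_coherent(&pdev->dev,
                                                 DMA_BIT_MASK(64));
        }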
2020-11-23  RDMA/i40iw: Constify ops structs  (Rikard Falkeborn, 2 files, -20/+20)

    The ops structs are never modified. Make them const to allow the
    compiler to put them in read-only memory.

    Link: https://lore.kernel.org/r/20201121002529.89148-1-rikard.falkeborn@gmail.com
    Signed-off-by: Rikard Falkeborn <rikard.falkeborn@gmail.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
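    A generic sketch of the pattern (the struct and function names are
    placeholders, not i40iw's):

        struct dev_ops {
                int (*open)(void);
        };

        static int dev_open(void)
        {
                return 0;
        }

        /* const lets the table live in .rodata, so it cannot be
         * modified (or hijacked) at runtime */
        static const struct dev_ops ops = {
                .open = dev_open,
        };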
2020-11-23  IB/mthca: fix return value of error branch in mthca_init_cq()  (Xiongfeng Wang, 1 file, -4/+6)

    We return 'err' in the error branch, but this variable may have been
    set to zero by the preceding code. Fix it by setting 'err' to a
    negative value before jumping to the error label.

    Fixes: 74c2174e7be5 ("IB uverbs: add mthca user CQ support")
    Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
    Link: https://lore.kernel.org/r/1605837422-42724-1-git-send-email-wangxiongfeng2@huawei.com
    Reported-by: Hulk Robot <hulkci@huawei.com>
    Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
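    A self-contained sketch of the bug class; step_one() and step_two()
    are stand-ins for the allocations in mthca_init_cq():

        #include <errno.h>
        #include <stddef.h>

        static int step_one(void) { return 0; }      /* succeeds */
        static void *step_two(void) { return NULL; } /* fails */

        static int init_cq_sketch(void)
        {
                int err = step_one();

                if (err)
                        goto err_out;

                if (!step_two()) {
                        /* without this, 'err' is still 0 from step_one()
                         * and the caller would see success */
                        err = -ENOMEM;
                        goto err_out;
                }
                return 0;

        err_out:
                return err;
        }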
2020-11-23  RDMA/cxgb4: Validate the number of CQEs  (Kamal Heib, 1 file, -0/+3)

    Before creating a CQ, make sure that the requested number of CQEs is
    in the supported range.

    Fixes: cfdda9d76436 ("RDMA/cxgb4: Add driver for Chelsio T4 RNIC")
    Link: https://lore.kernel.org/r/20201108132007.67537-1-kamalheib1@gmail.com
    Signed-off-by: Kamal Heib <kamalheib1@gmail.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
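    A sketch of the kind of range check added; check_cq_entries() is a
    made-up helper, and max_cqe stands for the device's advertised limit:

        #include <errno.h>

        static int check_cq_entries(int entries, int max_cqe)
        {
                if (entries < 1 || entries > max_cqe)
                        return -EINVAL;
                return 0;
        }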
2020-11-23  RDMA/siw,rxe: Make emulated devices virtual in the device tree  (Jason Gunthorpe, 3 files, -28/+1)

    This moves siw and rxe to be virtual devices in the device tree:

      lrwxrwxrwx 1 root root 0 Nov 6 13:55 /sys/class/infiniband/rxe0 -> ../../devices/virtual/infiniband/rxe0/

    Previously they were trying to parent themselves to the physical
    device of their attached netdev, which doesn't make a lot of sense. My
    hope is this will solve some weird syzkaller hits related to sysfs, as
    it is possible for the parent of a netdev to be another netdev, e.g.
    under bonding or some other syzkaller-found netdev configuration.
    Nesting an ib_device under anything but a physical device is going to
    cause inconsistencies in sysfs during destruction.

    Link: https://lore.kernel.org/r/0-v1-dcbfc68c4b4a+d6-virtual_dev_jgg@nvidia.com
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-11-23  IB/mlx5: Fix fall-through warnings for Clang  (Gustavo A. R. Silva, 1 file, -0/+1)

    In preparation to enable -Wimplicit-fallthrough for Clang, fix a
    warning by explicitly adding the new pseudo-keyword fallthrough;
    instead of letting the code fall through to the next case.

    Link: https://lore.kernel.org/r/2b0c87362bc86f6adfe56a5a6685837b71022bbf.1605896059.git.gustavoars@kernel.org
    Link: https://github.com/KSPP/linux/issues/115
    Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
    Acked-by: Leon Romanovsky <leonro@nvidia.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
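    A minimal sketch of the pattern (handle() and the case values are
    placeholders; 'fallthrough' is the real pseudo-keyword from
    linux/compiler_attributes.h):

        #include <linux/compiler_attributes.h>

        static void handle(int event)
        {
                switch (event) {
                case 1:
                        /* ... case-1 work ... */
                        fallthrough; /* explicit, so Clang stays quiet */
                case 2:
                        /* ... work shared with case 1 ... */
                        break;
                }
        }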
2020-11-23  IB/qedr: Fix fall-through warnings for Clang  (Gustavo A. R. Silva, 1 file, -0/+1)

    In preparation to enable -Wimplicit-fallthrough for Clang, fix a
    warning by explicitly adding a break statement instead of just letting
    the code fall through to the next case.

    Link: https://lore.kernel.org/r/8d7cf00ec3a4b27a895534e02077c2c9ed8a5f8e.1605896059.git.gustavoars@kernel.org
    Link: https://github.com/KSPP/linux/issues/115
    Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
    Acked-by: Michal Kalderon <michal.kalderon@marvell.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-11-23  IB/mlx4: Fix fall-through warnings for Clang  (Gustavo A. R. Silva, 1 file, -0/+1)

    In preparation to enable -Wimplicit-fallthrough for Clang, fix a
    warning by explicitly adding a break statement instead of just letting
    the code fall through to the next case.

    Link: https://lore.kernel.org/r/0153716933e01608d46155941c447d011c59c1e4.1605896059.git.gustavoars@kernel.org
    Link: https://github.com/KSPP/linux/issues/115
    Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-11-23  IB/hfi1: Fix fall-through warnings for Clang  (Gustavo A. R. Silva, 2 files, -0/+6)

    In preparation to enable -Wimplicit-fallthrough for Clang, fix
    multiple warnings by explicitly adding multiple break statements
    instead of just letting the code fall through to the next case.

    Link: https://lore.kernel.org/r/13cc2fe2cf8a71a778dbb3d996b07f5e5d04fd40.1605896059.git.gustavoars@kernel.org
    Link: https://github.com/KSPP/linux/issues/115
    Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
    Tested-by: Mike Marciniszyn <mike.marciniszyn@cornelisnetworks.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-11-20  RDMA/ipoib: Distribute cq completion vector better  (Jack Wang, 1 file, -2/+2)

    Currently ipoib chooses the CQ completion vector based on the port
    number. When the HCA has only one port, all the interfaces' receive
    queue completions are bound to completion vector 0. To distribute the
    load better, use the same method as __ib_alloc_cq_any() to choose the
    completion vector; with this change, each interface now uses a
    different completion vector.

    Link: https://lore.kernel.org/r/20201013074342.15867-1-jinpu.wang@cloud.ionos.com
    Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
    Reviewed-by: Gioh Kim <gi-oh.kim@cloud.ionos.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
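    A sketch of the __ib_alloc_cq_any()-style selection this refers to: a
    global counter round-robins new CQs across the device's completion
    vectors (pick_comp_vector() is an illustrative wrapper):

        #include <linux/atomic.h>
        #include <linux/cpumask.h>
        #include <linux/kernel.h>
        #include <rdma/ib_verbs.h>

        static atomic_t counter = ATOMIC_INIT(0);

        static int pick_comp_vector(struct ib_device *dev)
        {
                return atomic_inc_return(&counter) %
                       min_t(int, dev->num_comp_vectors,
                             num_online_cpus());
        }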
2020-11-19  Merge https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Jakub Kicinski, 7 files, -10/+16)

    Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-17  RDMA/core: remove use of dma_virt_ops  (Christoph Hellwig, 11 files, -52/+29)

    Use the ib_dma_* helpers to skip the DMA translation instead. This
    removes the last user of dma_virt_ops and keeps the weird layering
    violation inside the RDMA core instead of burdening the DMA mapping
    subsystems with it. This also means the software RDMA drivers now
    don't have to mess with DMA parameters that are not relevant to them
    at all, and that in the future we can use PCI P2P transfers even for
    software RDMA, as there is no first fake layer of DMA mapping that the
    P2P DMA support would have to undo.

    Link: https://lore.kernel.org/r/20201106181941.1878556-8-hch@lst.de
    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Tested-by: Mike Marciniszyn <mike.marciniszyn@cornelisnetworks.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-11-17  Merge branch 'for-rc' into rdma.git  (Jason Gunthorpe, 21 files, -44/+119)

    From https://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma.git

    The rc RDMA branch is needed due to dependencies on the next patches.

    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-11-16  RDMA/mlx5: Lower setting the umem's PAS for SRQ  (Jason Gunthorpe, 3 files, -29/+79)

    Some of the SRQ types are created using a WQ, and the WQ requires a
    different parameter set to mlx5_umem_find_best_quantized_pgoff(), as
    it has a 5 bit page_offset.

    Add the umem to the mlx5_srq_attr and defer computing the PAS data
    until the code has figured out what kind of mailbox to use. Compute
    the PAS directly from the umem for each of the four unique mailbox
    types.

    This also avoids allocating memory to store the user PAS; instead it
    is written directly to the mailbox, as in most other cases.

    Fixes: 01949d0109ee ("net/mlx5_core: Enable XRCs and SRQs when using ISSI > 0")
    Link: https://lore.kernel.org/r/20201115114311.136250-8-leon@kernel.org
    Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-11-16  RDMA/mlx5: Use ib_umem_find_best_pgsz() for devx  (Jason Gunthorpe, 3 files, -91/+35)

    Since devx uses the new rdma_for_each_block() to fill the PAS, it can
    also use ib_umem_find_best_pgsz(). However, the umem construction in
    devx is complicated: the umem must still respect all the HW limits,
    such as page_offset_quantized and the IOVA alignment. Since we don't
    know what the user intends to use the umem for, we have to limit it to
    PAGE_SIZE.

    There are users trying to mix umems with mkeys, so this makes them
    work reliably, at least for an identity IOVA, by ensuring the IOVA
    matches the selected page size.

    This was the last user of mlx5_ib_get_buf_offset(), so it can also be
    removed.

    Fixes: aeae94579caf ("IB/mlx5: Add DEVX support for memory registration")
    Link: https://lore.kernel.org/r/20201115114311.136250-7-leon@kernel.org
    Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-11-16  RDMA/mlx5: mlx5_umem_find_best_quantized_pgoff() for CQ  (Jason Gunthorpe, 2 files, -14/+44)

    This fixes a bug where the page_offset was not being considered when
    building a CQ. The HW specification says it 'must be zero', so use a
    variant of mlx5_umem_find_best_quantized_pgoff() with a 0
    pgoff_bitmask to force this result.

    Fixes: e126ba97dba9 ("mlx5: Add driver for Mellanox Connect-IB adapters")
    Link: https://lore.kernel.org/r/20201115114311.136250-6-leon@kernel.org
    Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-11-16  RDMA/mlx5: Use mlx5_umem_find_best_quantized_pgoff() for QP  (Jason Gunthorpe, 3 files, -70/+23)

    Delete the custom logic in the QP in favor of the more general
    variant.

    Link: https://lore.kernel.org/r/20201115114311.136250-5-leon@kernel.org
    Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-11-16  RDMA/mlx5: Directly compute the PAS list for raw QP RQ's  (Jason Gunthorpe, 1 file, -25/+16)

    The RQ WQ created when making a raw ethernet QP copies the PAS list
    from a dummy QPC command created earlier in the flow. The WQC and QPC
    PAS lists are not fully compatible, as the page_offset is a different
    size.

    Create the RQ WQ's PAS list directly and do not try to copy it from
    another command structure. Like the prior patch, this also means that
    badly aligned buffers were not correctly rejected.

    Link: https://lore.kernel.org/r/20201115114311.136250-4-leon@kernel.org
    Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-11-16  RDMA/mlx5: Use mlx5_umem_find_best_quantized_pgoff() for WQ  (Jason Gunthorpe, 1 file, -22/+31)

    This fixes a subtle bug: the WQ mailbox has only 5 bits to describe
    the page_offset, while mlx5_ib_get_buf_offset() is hard-wired to only
    work with 6-bit page_offsets. Thus it did not properly reject badly
    aligned buffers.

    Fixes: 79b20a6c3014 ("IB/mlx5: Add receive Work Queue verbs")
    Fixes: 0fb2ed66a14c ("IB/mlx5: Add create and destroy functionality for Raw Packet QP")
    Link: https://lore.kernel.org/r/20201115114311.136250-3-leon@kernel.org
    Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>