author	Shiraz Saleem <shiraz.saleem@intel.com>	2019-03-28 11:49:46 -0500
committer	Jason Gunthorpe <jgg@mellanox.com>	2019-03-28 14:13:27 -0300
commit	93923d309bda99bc52f8cee6ea4774895b18ae5b (patch)
tree	e2fa64df48b595595a7c56f7e12ee29158f675fd /drivers
parent	RDMA/mthca: Use correct sizing on buffers holding page DMA addresses (diff)
RDMA/rxe: Use correct sizing on buffers holding page DMA addresses
The buffer that holds the page DMA addresses is sized off umem->nmap.
This can potentially cause out of bound accesses on the PBL array when
iterating the umem DMA-mapped SGL. This is because if umem pages are
combined, umem->nmap can be much lower than the number of system pages
in umem.

Use ib_umem_num_pages() to size this buffer.

Cc: Moni Shoua <monis@mellanox.com>
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
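To make the sizing issue concrete, here is a minimal userspace sketch (not the kernel helpers themselves; num_pages() only mimics the idea behind ib_umem_num_pages(), and "nmap" is a stand-in for the DMA-mapped SG entry count): when physically contiguous pages are merged into a single SG entry, the entry count drops below the page count, so a buffer that must hold one DMA address per page has to be sized from the page count.

	/*
	 * Illustrative sketch only: compares the number of system pages
	 * spanned by a user buffer with the number of DMA-mapped SG
	 * entries after contiguous pages have been combined.
	 */
	#include <stdio.h>
	#include <stdint.h>

	#define PAGE_SHIFT 12
	#define PAGE_SIZE  (1UL << PAGE_SHIFT)

	/* Count every PAGE_SIZE page touched by [addr, addr + length). */
	static unsigned long num_pages(uint64_t addr, uint64_t length)
	{
		uint64_t first = addr & ~(PAGE_SIZE - 1);
		uint64_t last  = (addr + length + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1);

		return (last - first) >> PAGE_SHIFT;
	}

	int main(void)
	{
		uint64_t addr = 0x1000, length = 8 * PAGE_SIZE;

		/* Suppose the 8 pages are physically contiguous and were
		 * merged into a single SG entry, so "nmap" is 1. */
		unsigned long nmap = 1;

		printf("pages spanned: %lu, SG entries: %lu\n",
		       num_pages(addr, length), nmap);
		/* A PBL sized off nmap (1 slot) would overflow when all 8
		 * page addresses are stored; size it off the page count. */
		return 0;
	}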
Diffstat (limited to 'drivers')
-rw-r--r--	drivers/infiniband/sw/rxe/rxe_mr.c | 2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index ec89fbd06c53..f501f72489d8 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -179,7 +179,7 @@ int rxe_mem_init_user(struct rxe_pd *pd, u64 start,
}
mem->umem = umem;
- num_buf = umem->nmap;
+ num_buf = ib_umem_num_pages(umem);
rxe_mem_init(access, mem);