author | Mike Marciniszyn <mike.marciniszyn@intel.com> | 2016-02-14 12:45:53 -0800
---|---|---
committer | Doug Ledford <dledford@redhat.com> | 2016-03-10 20:38:14 -0500
commit | a545f5308b6cf476def8a9326f7e82f89623bb03 (patch) |
tree | b89b9c1f95d75f69178dc3cdc9bcbe042f71622f /drivers/staging/rdma/hfi1/qp.c |
parent | IB/qib, staging/rdma/hfi1, IB/rdmavt: progress selection changes (diff) |
staging/rdma/hfi: fix CQ completion order issue
The current implementation of the sdma_wait variable
has a timing hole that can allow a completion queue entry
for a pio send to be returned ahead of an older
sdma packet's completion queue entry.
The sdma_wait variable used to be decremented before
calling the packet complete routine. The hole lies between
that decrement and the verbs completion: a send engine using
pio could return an out-of-order completion in that window.
This patch closes the hole by adding an API option to
specify an sdma_drained callback. The atomic decrement
is positioned after the complete callback, which closes
the window as long as the pio path does not execute while
the sdma count is non-zero.
Reviewed-by: Jubin John <jubin.john@intel.com>
Signed-off-by: Dean Luick <dean.luick@intel.com>
Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Diffstat (limited to 'drivers/staging/rdma/hfi1/qp.c')
-rw-r--r-- | drivers/staging/rdma/hfi1/qp.c | 20 |
1 files changed, 19 insertions, 1 deletions
diff --git a/drivers/staging/rdma/hfi1/qp.c b/drivers/staging/rdma/hfi1/qp.c
index 2d157054576a..77e91f280b21 100644
--- a/drivers/staging/rdma/hfi1/qp.c
+++ b/drivers/staging/rdma/hfi1/qp.c
@@ -73,6 +73,7 @@ static int iowait_sleep(
 	struct sdma_txreq *stx,
 	unsigned seq);
 static void iowait_wakeup(struct iowait *wait, int reason);
+static void iowait_sdma_drained(struct iowait *wait);
 static void qp_pio_drain(struct rvt_qp *qp);
 
 static inline unsigned mk_qpn(struct rvt_qpn_table *qpt,
@@ -509,6 +510,22 @@ static void iowait_wakeup(struct iowait *wait, int reason)
 	hfi1_qp_wakeup(qp, RVT_S_WAIT_DMA_DESC);
 }
 
+static void iowait_sdma_drained(struct iowait *wait)
+{
+	struct rvt_qp *qp = iowait_to_qp(wait);
+
+	/*
+	 * This happens when the send engine notes
+	 * a QP in the error state and cannot
+	 * do the flush work until that QP's
+	 * sdma work has finished.
+	 */
+	if (qp->s_flags & RVT_S_WAIT_DMA) {
+		qp->s_flags &= ~RVT_S_WAIT_DMA;
+		hfi1_schedule_send(qp);
+	}
+}
+
 /**
  *
  * qp_to_sdma_engine - map a qp to a send engine
@@ -773,7 +790,8 @@ void notify_qp_reset(struct rvt_qp *qp)
 		1,
 		_hfi1_do_send,
 		iowait_sleep,
-		iowait_wakeup);
+		iowait_wakeup,
+		iowait_sdma_drained);
 	priv->r_adefered = 0;
 	clear_ahg(qp);
 }