author:    2017-07-26 11:22:25 -0400
committer: 2017-07-30 08:08:31 -0700
commit:    e16ffa838b2060dd4025125dffd2f83b9d9d6b43
tree:      ef46ff749c038eab433b25bfaee5e1cd7e5a2a00
parent:    staging: lustre: ptlrpc: correct use of list_add_tail()
staging: lustre: ptlrpc: no need to reassign mbits for replay
It is not necessary to reassign and re-adjust rq_mbits for a replay
request in ptlrpc_set_bulk_mbits(); they must all have already been
correctly assigned before.
Such an unnecessary reassignment can leave the first matchbits not
PTLRPC_BULK_OPS_MASK aligned, which triggers the LASSERT in
ptlrpc_register_bulk():
- ptlrpc_set_bulk_mbits() is called when the request is sent for the
  first time; rq_mbits is set to the xid, which is BULK_OPS aligned;
- ptlrpc_set_bulk_mbits() then adjusts the mbits for a multi-bulk
  RPC, so rq_mbits is no longer aligned; if the client is connecting
  to an old server, rq_xid is changed accordingly, so rq_xid becomes
  unaligned too;
- when the request is replayed, ptlrpc_set_bulk_mbits() reassigns
  rq_mbits to rq_xid, which is already unaligned, but
  ptlrpc_register_bulk() still takes this value as the first
  matchbits and LASSERTs that it is BULK_OPS aligned (see the sketch
  below).
Signed-off-by: Niu Yawei <yawei.niu@intel.com>
Intel-bug-id: https://jira.hpdd.intel.com/browse/LU-6808
Reviewed-on: http://review.whamcloud.com/23048
Reviewed-by: Fan Yong <fan.yong@intel.com>
Reviewed-by: Alex Zhuravlev <alexey.zhuravlev@intel.com>
Reviewed-by: Oleg Drokin <oleg.drokin@intel.com>
Signed-off-by: James Simmons <jsimmons@infradead.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>