author     2018-06-05 09:22:18 +0300
committer  2018-09-05 21:14:57 -0700
commit     64109f1dc41f25f4a9c6b114e04b6266bf4128ad (patch)
tree       de38f98480af0b34b5fd01e0c81a182169cdfed5
parent     net/mlx5e: Move Q counters allocation and drop RQ to init_rx (diff)
net/mlx5e: Replace PTP clock lock from RW lock to seq lock
Changed "priv.clock.lock" lock from 'rw_lock' to 'seq_lock'
in order to improve packet rate performance.
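For illustration, the type change itself looks roughly like the sketch
below (structure and function names are illustrative stand-ins, not the
exact mlx5 driver code):

  #include <linux/seqlock.h>
  #include <linux/timecounter.h>

  /* Illustrative stand-in for the driver's clock state; the real
   * structure holds more fields. */
  struct example_clock {
          seqlock_t           lock;    /* was: rwlock_t lock; */
          struct cyclecounter cycles;
          struct timecounter  tc;
  };

  static void example_clock_init(struct example_clock *clock)
  {
          seqlock_init(&clock->lock);  /* was: rwlock_init(&clock->lock); */
  }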
Tested on Intel(R) Xeon(R) CPU E5-2660 v2 @ 2.20GHz.
Sent 64b packets between two peers connected by ConnectX-5,
and measured packet rate for the receiver in three modes:
 - no timestamping (base rate)
 - timestamping using an rwlock (old lock) for the critical region
 - timestamping using a seqlock (new lock) for the critical region
Only the receiver time stamped its packets.
The measured packet rates (in millions of packets per second, Mpps) are:

Single flow (multiple TX rings to a single RX ring):
 - without timestamping:    4.26 Mpps
 - with rwlock (old lock):  4.10 Mpps
 - with seqlock (new lock): 4.16 Mpps
   => 1.46% improvement over the rwlock

Multiple flows (multiple TX rings to six RX rings):
 - without timestamping:    22.0 Mpps
 - with rwlock (old lock):  11.7 Mpps
 - with seqlock (new lock): 21.3 Mpps
   => 82.05% improvement over the rwlock
The packet rate improvement comes from the seqlock not requiring atomic
operations on the reader side. Since readers of this lock vastly
outnumber writers, almost all atomic operations are eliminated, which
dramatically reduces overall cache misses.
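For reference, the read and write sides follow the standard kernel
seqlock pattern, sketched below using the illustrative example_clock
structure from the sketch above (the real driver helpers differ in
detail):

  /* Readers take no lock and perform no atomic writes: they only
   * sample the sequence counter before and after the read, and
   * retry if a writer ran in between. */
  static u64 example_cyc2time(struct example_clock *clock, u64 cycles)
  {
          unsigned int seq;
          u64 ns;

          do {
                  seq = read_seqbegin(&clock->lock);
                  ns = timecounter_cyc2time(&clock->tc, cycles);
          } while (read_seqretry(&clock->lock, seq));

          return ns;
  }

  /* Writers still serialize against each other and bump the sequence
   * counter so that concurrent readers retry. */
  static void example_clock_overflow(struct example_clock *clock)
  {
          write_seqlock(&clock->lock);
          timecounter_read(&clock->tc);
          write_sequnlock(&clock->lock);
  }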
Signed-off-by: Shay Agroskin <shayag@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>