author    David S. Miller <davem@davemloft.net>  2017-10-07 23:05:58 +0100
committer David S. Miller <davem@davemloft.net>  2017-10-07 23:05:58 +0100
commit    a1d753d29066e9b568884b0f12c029b0b811f0c8 (patch)
tree      b5ec33cc6476e31c8e30c6623e635c6b75be80ae /net
parent    ip_tunnel: add mpls over gre support (diff)
parent    bpf: add a test case for helper bpf_perf_prog_read_value (diff)
Merge branch 'bpf-perf-time-helpers'
Yonghong Song says:

====================
bpf: add two helpers to read perf event enabled/running time

Hardware pmu counters are limited resources. When more pmu-based perf events are opened than there are available counters, the kernel multiplexes the events so that each event gets a certain percentage (but not 100%) of the pmu time. When multiplexing happens, the number of samples or the counter value no longer reflects what it would be without multiplexing, which makes comparisons between different runs difficult. Typically, the number of samples or the counter value should be normalized before being compared to other experiments. The typical normalization is:

    normalized_num_samples   = num_samples   * time_enabled / time_running
    normalized_counter_value = counter_value * time_enabled / time_running

where time_enabled is the time the event has been enabled and time_running is the time the event has been running since the last normalization.

This patch set implements two helper functions. The helper bpf_perf_event_read_value reads counter/time_enabled/time_running for a perf event array map. The helper bpf_perf_prog_read_value reads counter/time_enabled/time_running for a bpf prog of type BPF_PROG_TYPE_PERF_EVENT.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
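The sketch below (not taken from the patch set itself) illustrates how the two new helpers might be called from BPF programs, written in modern libbpf style. The map name "events", the kprobe target and the section names are illustrative assumptions, not part of this series.

/* Minimal sketch of using bpf_perf_event_read_value and
 * bpf_perf_prog_read_value; map/section/attach names are illustrative.
 */
#include <linux/bpf.h>
#include <linux/bpf_perf_event.h>
#include <linux/ptrace.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
	__uint(key_size, sizeof(int));
	__uint(value_size, sizeof(int));
	__uint(max_entries, 64);
} events SEC(".maps");

/* bpf_perf_event_read_value: read counter/time_enabled/time_running
 * for the event stored at the current CPU's slot of the event array.
 */
SEC("kprobe/finish_task_switch")
int read_event_value(struct pt_regs *ctx)
{
	struct bpf_perf_event_value val = {};

	if (bpf_perf_event_read_value(&events, BPF_F_CURRENT_CPU,
				      &val, sizeof(val)))
		return 0;

	/* Normalize as in the cover letter to compensate for multiplexing. */
	if (val.running)
		bpf_printk("normalized counter: %llu\n",
			   val.counter * val.enabled / val.running);
	return 0;
}

/* bpf_perf_prog_read_value: same data, but for the event that this
 * BPF_PROG_TYPE_PERF_EVENT program is attached to.
 */
SEC("perf_event")
int read_prog_value(struct bpf_perf_event_data *ctx)
{
	struct bpf_perf_event_value val = {};

	if (!bpf_perf_prog_read_value(ctx, &val, sizeof(val)))
		bpf_printk("enabled %llu running %llu\n",
			   val.enabled, val.running);
	return 0;
}

char _license[] SEC("license") = "GPL";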
Diffstat (limited to 'net')
0 files changed, 0 insertions, 0 deletions