|
Currently the lock contention timestamp is maintained in a hash map keyed
by pid. That means it needs to get and release a map element (which is
protected by a spinlock!) on each contention begin and end pair. This
can impact performance if there is a lot of contention (usually from
spinlocks).
It used to use task local storage, but that had an issue with memory
allocation in some critical paths. Although it's addressed in recent
kernels IIUC, the tool should support old kernels too, so it cannot
simply switch to task local storage, at least for now.
As spinlocks create lots of contention and they disable preemption while
spinning, a per-cpu array can be used to keep the timestamp, avoiding
the overhead of hashmap updates and deletes.
In contention_begin it's easy to check the lock type since the flags are
visible, but contention_end cannot see them. So try the per-cpu array
first (unconditionally): if it has an active element (lock != 0), it
should be used, and the per-task tstamp map should not be used until the
per-cpu array element is cleared, which means the nested spinlock
contention (if any) has finished and it now sees the (outer) lock.
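A minimal sketch of the idea (struct, map, and flag names are assumed
for illustration; the real implementation is in lock_contention.bpf.c):

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>

  #define LCB_F_SPIN  (1U << 0)    /* assumed flag bit marking spinlocks */

  struct tstamp_data {
      __u64 timestamp;
      __u64 lock;                  /* lock address; 0 means the slot is free */
      __u32 flags;
  };

  /* one slot per CPU is enough: spinning disables preemption, so at
   * most one spinlock contention can be active on a CPU at a time */
  struct {
      __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
      __uint(max_entries, 1);
      __type(key, __u32);
      __type(value, struct tstamp_data);
  } tstamp_cpu SEC(".maps");

  /* contention_begin path: the flags are visible here, so spinlock
   * contention can be routed to the per-cpu slot directly */
  static struct tstamp_data *get_tstamp_elem(__u32 flags)
  {
      __u32 idx = 0;
      struct tstamp_data *pelem;

      if (flags & LCB_F_SPIN) {
          pelem = bpf_map_lookup_elem(&tstamp_cpu, &idx);
          if (pelem && pelem->lock == 0)
              return pelem;
      }
      /* otherwise fall back to the per-task hash map (not shown) */
      return NULL;
  }

The contention_end path cannot see the flags, so it would check the
per-cpu slot first and use it whenever the element is active (lock != 0).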
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Acked-by: Ian Rogers <irogers@google.com>
Cc: Hao Luo <haoluo@google.com>
Cc: Song Liu <song@kernel.org>
Cc: bpf@vger.kernel.org
Link: https://lore.kernel.org/r/20231020204741.1869520-3-namhyung@kernel.org
|
|
When pelem is NULL, it'd create a new entry with zero data. But the task
might be preempted by an IRQ/NMI just before calling
bpf_map_update_elem(), and then there's a chance of calling it twice for
the same pid. So it'd be better to use the BPF_NOEXIST flag and check
the return value to prevent the race.
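A sketch of the create-only update (map and struct names assumed to
match the tool's tstamp map):

  struct tstamp_data zero = {};
  __u32 pid = bpf_get_current_pid_tgid();
  struct tstamp_data *pelem;

  /* BPF_NOEXIST makes the insert fail instead of silently
   * overwriting an entry an IRQ/NMI handler already created */
  if (bpf_map_update_elem(&tstamp, &pid, &zero, BPF_NOEXIST) < 0)
      return 0;

  pelem = bpf_map_lookup_elem(&tstamp, &pid);
  if (pelem == NULL)
      return 0;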
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Acked-by: Ian Rogers <irogers@google.com>
Cc: Hao Luo <haoluo@google.com>
Cc: Song Liu <song@kernel.org>
Cc: bpf@vger.kernel.org
Link: https://lore.kernel.org/r/20231020204741.1869520-2-namhyung@kernel.org
|
|
It checks the current lock to calculate the delta of the contention
time. The address is saved in the tstamp map, which is allocated at the
beginning of the contention and released at the end.
But it's possible for bpf_map_delete_elem() to fail. In that case, the
element in the tstamp map is kept for the current lock, and it makes the
next contention for the same lock be tracked incorrectly. Specifically,
the next contention begin will see the existing element for the task and
just return. Then the next contention end will see the element and
calculate the time using the timestamp from the previous begin.
This can result in a single large value where there were actually two
small contentions some time apart. Let's clear the lock address so that
it can be updated next time even if bpf_map_delete_elem() failed.
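A sketch of the fixed end-of-contention path (field and map names
assumed):

  /* pelem was looked up for the current lock at contention end */
  __u64 duration = bpf_ktime_get_ns() - pelem->timestamp;

  /* ... account the duration into the contention stats ... */

  /* clear the lock address first: even if the delete below fails,
   * the stale element can no longer match the next contention */
  pelem->lock = 0;
  bpf_map_delete_elem(&tstamp, &pid);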
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Acked-by: Ian Rogers <irogers@google.com>
Cc: Hao Luo <haoluo@google.com>
Cc: Song Liu <song@kernel.org>
Cc: bpf@vger.kernel.org
Link: https://lore.kernel.org/r/20231020204741.1869520-1-namhyung@kernel.org
|
|
The evsel__increase_rlimit() helper does nothing with evsel, and the
description of its functionality is inaccurate. Rename it and move it to
util/rlimit.c.
While at it, fix a checkpatch warning about a misplaced license tag:
WARNING: Misplaced SPDX-License-Identifier tag - use line 1 instead
#160: FILE: tools/perf/util/rlimit.h:3:
/* SPDX-License-Identifier: LGPL-2.1 */
No functional change.
Signed-off-by: Yang Jihong <yangjihong1@huawei.com>
Link: https://lore.kernel.org/r/20231023033144.1011896-1-yangjihong1@huawei.com
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
|
|
The -G/--cgroups option puts the sender and receiver in different
cgroups in order to measure cgroup context switch overhead.
Users need to make sure the cgroups exist and are accessible. The
following example shows the effect of this change. Please don't forget
to use taskset before perf bench to measure cgroup switches properly;
otherwise each task would run on a different CPU and generate cgroup
switches regardless of this change.
# perf stat -e context-switches,cgroup-switches \
> taskset -c 0 perf bench sched pipe -l 10000 > /dev/null
Performance counter stats for 'taskset -c 0 perf bench sched pipe -l 10000':
20,001 context-switches
2 cgroup-switches
0.053449651 seconds time elapsed
0.011286000 seconds user
0.041869000 seconds sys
# perf stat -e context-switches,cgroup-switches \
> taskset -c 0 perf bench sched pipe -l 10000 -G AAA,BBB > /dev/null
Performance counter stats for 'taskset -c 0 perf bench sched pipe -l 10000 -G AAA,BBB':
20,001 context-switches
20,001 cgroup-switches
0.052768627 seconds time elapsed
0.006284000 seconds user
0.046266000 seconds sys
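For reference, placing a task in a cgroup comes down to writing its pid
into the cgroup's cgroup.procs file. A hypothetical helper (not the
benchmark's actual code, assuming a cgroup v2 mount at /sys/fs/cgroup)
might look like:

  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>

  /* move the calling process into /sys/fs/cgroup/<name> */
  static int enter_cgroup(const char *name)
  {
      char path[128], buf[32];
      int fd, len, ret = 0;

      snprintf(path, sizeof(path), "/sys/fs/cgroup/%s/cgroup.procs", name);
      fd = open(path, O_WRONLY);
      if (fd < 0)
          return -1;

      len = snprintf(buf, sizeof(buf), "%d", getpid());
      if (write(fd, buf, len) < 0)
          ret = -1;
      close(fd);
      return ret;
  }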
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Link: https://lore.kernel.org/r/20231017202342.1353124-1-namhyung@kernel.org
|
|
CoreSight might not be available; in such a case, skip the tests.
Signed-off-by: Michael Petlan <mpetlan@redhat.com>
Reviewed-by: Leo Yan <leo.yan@linaro.org>
Reviewed-by: Carsten Haitzler <carsten.haitzler@arm.com>
Cc: vmolnaro@redhat.com
Link: https://lore.kernel.org/r/20231019091137.22525-1-mpetlan@redhat.com
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
|
|
If using parallel threads to collect data, perf record needs at least 6
fds per CPU: one for sys_perf_event_open(), four for the msg and ack
pipes (see record__thread_data_open_pipes()), and one for opening
perf.data.XXX.
For an environment with more than 100 cores, if perf record uses both
the `-a` and `--threads` options, it is easy to exceed the file
descriptor limit. When we run out of file descriptors, try to increase
the limit.
Before:
$ ulimit -n
1024
$ lscpu | grep 'On-line CPU(s)'
On-line CPU(s) list: 0-159
$ perf record --threads -a sleep 1
Failed to create data directory: Too many open files
After:
$ ulimit -n
1024
$ lscpu | grep 'On-line CPU(s)'
On-line CPU(s) list: 0-159
$ perf record --threads -a sleep 1
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.394 MB perf.data (1576 samples) ]
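Conceptually, the retry raises the soft RLIMIT_NOFILE towards the hard
limit, as in the following sketch (a hypothetical helper, not the exact
code perf uses):

  #include <sys/resource.h>

  /* returns 0 if the soft limit was raised and a retry makes sense */
  static int bump_nofile_limit(void)
  {
      struct rlimit rlim;

      if (getrlimit(RLIMIT_NOFILE, &rlim))
          return -1;

      if (rlim.rlim_cur < rlim.rlim_max) {
          rlim.rlim_cur = rlim.rlim_max;
          return setrlimit(RLIMIT_NOFILE, &rlim);
      }
      return -1;   /* already at the hard limit */
  }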
Signed-off-by: Yang Jihong <yangjihong1@huawei.com>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Link: https://lore.kernel.org/r/20231013075945.698874-1-yangjihong1@huawei.com
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
|
|
The CPI_STALL_RATIO metric group can be used to present the high
level CPI stall breakdown metrics in powerpc, which will show:
- DISPATCH_STALL_CPI ( Dispatch stall cycles per insn )
- ISSUE_STALL_CPI ( Issue stall cycles per insn )
- EXECUTION_STALL_CPI ( Execution stall cycles per insn )
- COMPLETION_STALL_CPI ( Completion stall cycles per insn )
Commit cf26e043c2a9 ("perf vendor events power10: Add JSON metric
events to present CPI stall cycles in powerpc"), which added the
CPI_STALL_RATIO metric group, also modified the PMC value used in the
PM_RUN_INST_CMPL event from PMC4 to PMC5, to avoid multiplexing of
events.
But that got reverted in recent changes. Fix this issue by changing the
PMC value used in PM_RUN_INST_CMPL back to PMC5.
Result with the fix:
./perf stat --metric-no-group -M CPI_STALL_RATIO <workload>
Performance counter stats for 'workload':
68,745,426 PM_CMPL_STALL # 0.21 COMPLETION_STALL_CPI
7,692,827 PM_ISSUE_STALL # 0.02 ISSUE_STALL_CPI
322,638,223 PM_RUN_INST_CMPL # 0.05 DISPATCH_STALL_CPI
# 0.48 EXECUTION_STALL_CPI
16,858,553 PM_DISP_STALL_CYC
153,880,133 PM_EXEC_STALL
0.089774592 seconds time elapsed
"--metric-no-group" is used for forcing PM_RUN_INST_CMPL to be scheduled
in all group for more accuracy.
Fixes: 7d473f475b2a ("perf vendor events: Move JSON/events to appropriate files for power10 platform")
Reported-by: Disha Goel <disgoel@linux.vnet.ibm.com>
Signed-off-by: Kajol Jain <kjain@linux.ibm.com>
Reviewed-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
Tested-by: Disha Goel<disgoel@linux.ibm.com>
Cc: maddy@linux.ibm.com
Link: https://lore.kernel.org/r/20231016143110.244255-1-kjain@linux.ibm.com
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
|
|
Perf test case 111 "Check open filename arg using perf trace +
vfs_getname" fails on s390. This is caused by the function
bpf_probe_read() failing in file util/bpf_skel/augmented_raw_syscalls.bpf.c.
The root cause is the lookup by address: bpf_probe_read() works only on
architectures with ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE. On s390 it is
not possible to determine from an address alone which address space it
belongs to (user or kernel space).
Replace bpf_probe_read() with bpf_probe_read_user() and
bpf_probe_read_str() with bpf_probe_read_user_str() to explicitly
specify the address space the address refers to (the augmented syscall
arguments, such as filenames, live in user space).
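As an illustration, the filename argument is then read with an explicit
user-space accessor (buffer size and args[] index assumed; which index
holds the filename depends on the syscall, e.g. 0 for open, 1 for
openat):

  char buf[256];
  const char *filename = (const char *)args->args[0];
  long len;

  /* explicit user-space read: needed on architectures like s390
   * where an address alone does not identify the address space */
  len = bpf_probe_read_user_str(buf, sizeof(buf), filename);
  if (len <= 0)
      return 0;   /* read failed or empty string */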
Output before:
# ./perf trace -eopen,openat -- touch /tmp/111
libbpf: prog 'sys_enter': BPF program load failed: Invalid argument
libbpf: prog 'sys_enter': -- BEGIN PROG LOAD LOG --
reg type unsupported for arg#0 function sys_enter#75
0: R1=ctx(off=0,imm=0) R10=fp0
; int sys_enter(struct syscall_enter_args *args)
0: (bf) r6 = r1 ; R1=ctx(off=0,imm=0) R6_w=ctx(off=0,imm=0)
; return bpf_get_current_pid_tgid();
1: (85) call bpf_get_current_pid_tgid#14 ; R0_w=scalar()
2: (63) *(u32 *)(r10 -8) = r0 ; R0_w=scalar() R10=fp0 fp-8=????mmmm
3: (bf) r2 = r10 ; R2_w=fp0 R10=fp0
;
.....
lines deleted here
.....
23: (bf) r3 = r6 ; R3_w=ctx(off=0,imm=0) R6=ctx(off=0,imm=0)
24: (85) call bpf_probe_read#4
unknown func bpf_probe_read#4
processed 23 insns (limit 1000000) max_states_per_insn 0 \
total_states 2 peak_states 2 mark_read 2
-- END PROG LOAD LOG --
libbpf: prog 'sys_enter': failed to load: -22
libbpf: failed to load object 'augmented_raw_syscalls_bpf'
libbpf: failed to load BPF skeleton 'augmented_raw_syscalls_bpf': -22
....
Output after:
# ./perf test -Fv 111
111: Check open filename arg using perf trace + vfs_getname :
--- start ---
1.085 ( 0.011 ms): touch/320753 openat(dfd: CWD, filename: \
"/tmp/temporary_file.SWH85", \
flags: CREAT|NOCTTY|NONBLOCK|WRONLY, mode: IRUGO|IWUGO) = 3
---- end ----
Check open filename arg using perf trace + vfs_getname: Ok
#
Test with the sleep command shows:
Output before:
# ./perf trace -e *sleep sleep 1.234567890
0.000 (1234.681 ms): sleep/63114 clock_nanosleep(rqtp: \
{ .tv_sec: 0, .tv_nsec: 0 }, rmtp: 0x3ffe0979720) = 0
#
Output after:
# ./perf trace -e *sleep sleep 1.234567890
0.000 (1234.686 ms): sleep/64277 clock_nanosleep(rqtp: \
{ .tv_sec: 1, .tv_nsec: 234567890 }, rmtp: 0x3fff3df9ea0) = 0
#
Fixes: 14e4b9f4289a ("perf trace: Raw augmented syscalls fix libbpf 1.0+ compatibility")
Signed-off-by: Thomas Richter <tmricht@linux.ibm.com>
Co-developed-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Acked-by: Ilya Leoshkevich <iii@linux.ibm.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Ian Rogers <irogers@google.com>
Cc: gor@linux.ibm.com
Cc: hca@linux.ibm.com
Cc: sumanthk@linux.ibm.com
Cc: svens@linux.ibm.com
Link: https://lore.kernel.org/r/20231019082642.3286650-1-tmricht@linux.ibm.com
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
|