path: root/tools/perf/scripts/python/export-to-sqlite.py
2022-08-01  tools build: Add feature test for init_disassemble_info API changes  (Andres Freund, 4 files changed, -0/+22)
binutils changed the signature of init_disassemble_info(), which now causes compilation failures for tools/{perf,bpf}, e.g. on debian unstable. Relevant binutils commit: https://sourceware.org/git/?p=binutils-gdb.git;a=commit;h=60a3da00bd5407f07 This commit adds a feature test to detect the new signature. Subsequent commits will use it to fix the build failures. Signed-off-by: Andres Freund <andres@anarazel.de> Acked-by: Quentin Monnet <quentin@isovalent.com> Cc: Alexei Starovoitov <ast@kernel.org> Cc: Ben Hutchings <benh@debian.org> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Quentin Monnet <quentin@isovalent.com> Cc: Sedat Dilek <sedat.dilek@gmail.com> Cc: bpf@vger.kernel.org Link: http://lore.kernel.org/lkml/20220622181918.ykrs5rsnmx3og4sv@alap3.anarazel.de Link: https://lore.kernel.org/r/20220801013834.156015-2-andres@anarazel.de Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
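For reference, the detection amounts to compiling a call against the new four-argument prototype; a minimal sketch of such a feature test is below (the actual file name and contents under tools/build/feature/ may differ):

#include <stdio.h>
#include <dis-asm.h>

/* Matches the post-change binutils prototype, which takes an extra
 * styled-printing callback; this only compiles against new binutils. */
static int fprintf_styled(void *stream, enum disassembler_style style,
                          const char *fmt, ...)
{
        return 0;
}

int main(void)
{
        struct disassemble_info info;

        init_disassemble_info(&info, stdout, (fprintf_ftype)fprintf,
                              fprintf_styled);
        return 0;
}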
2022-08-01  perf test: Add ARM SPE system wide test  (Namhyung Kim, 1 file changed, -3/+27)
In the past it had a problem not setting the pid/tid on the sample correctly when system-wide mode is used. Although it's fixed now it'd be nice if we have a test case for it. Reviewed-by: German Gomez <german.gomez@arm.com> Signed-off-by: Namhyung Kim <namhyung@kernel.org> Tested-by: Leo Yan <leo.yan@linaro.org> Cc: German Gomez <german.gomez@arm.com> Cc: Ian Rogers <irogers@google.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: James Clark <james.clark@arm.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: John Garry <john.garry@huawei.com> Cc: Leo Yan <leo.yan@linaro.org> Cc: Mike Leach <mike.leach@linaro.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20220701230932.1000495-1-namhyung@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-08-01  perf tools: Rework prologue generation code  (Jiri Olsa, 1 file changed, -29/+175)
Some functions we use for bpf prologue generation are going to be deprecated. This change reworks current code not to use them. We need to replace following functions/struct: bpf_program__set_prep bpf_program__nth_fd struct bpf_prog_prep_result Currently we use bpf_program__set_prep to hook perf callback before program is loaded and provide new instructions with the prologue. We replace this function/ality by taking instructions for specific program, attaching prologue to them and load such new ebpf programs with prologue using separate bpf_prog_load calls (outside libbpf load machinery). Before we can take and use program instructions, we need libbpf to actually load it. This way we get the final shape of its instructions with all relocations and verifier adjustments). There's one glitch though.. perf kprobe program already assumes generated prologue code with proper values in argument registers, so loading such program directly will fail in the verifier. That's where the fallback pre-load handler fits in and prepends the initialization code to the program. Once such program is loaded we take its instructions, cut off the initialization code and prepend the prologue. I know.. sorry ;-) To have access to the program when loading this patch adds support to register 'fallback' section handler to take care of perf kprobe programs. The fallback means that it handles any section definition besides the ones that libbpf handles. The handler serves two purposes: - allows perf programs to have special arguments in section name - allows perf to use pre-load callback where we can attach init code (zeroing all argument registers) to each perf program Suggested-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Jiri Olsa <jolsa@kernel.org> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Acked-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Alexei Starovoitov <ast@kernel.org> Cc: Daniel Borkmann <daniel@iogearbox.net> Cc: Ian Rogers <irogers@google.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: John Fastabend <john.fastabend@gmail.com> Cc: Martin KaFai Lau <kafai@fb.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Song Liu <songliubraving@fb.com> Cc: Yonghong Song <yhs@fb.com> Cc: bpf@vger.kernel.org Cc: netdev@vger.kernel.org Link: https://lore.kernel.org/r/20220616202214.70359-2-jolsa@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
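A rough sketch of the new load path follows (illustrative only; the helper name, the amount of init code stripped, and the program name are assumptions, not the actual perf code):

#include <errno.h>
#include <stdlib.h>
#include <string.h>
#include <bpf/bpf.h>
#include <bpf/libbpf.h>

static int load_with_prologue(struct bpf_program *prog,
                              const struct bpf_insn *prologue,
                              size_t prologue_cnt, size_t init_cnt)
{
        /* Instructions of the already-loaded program, after relocations. */
        const struct bpf_insn *orig = bpf_program__insns(prog);
        size_t orig_cnt = bpf_program__insn_cnt(prog);
        size_t cnt = prologue_cnt + orig_cnt - init_cnt;
        struct bpf_insn *insns;
        int fd;

        insns = calloc(cnt, sizeof(*insns));
        if (!insns)
                return -ENOMEM;

        /* Cut off the pre-load init code and prepend the real prologue. */
        memcpy(insns, prologue, prologue_cnt * sizeof(*insns));
        memcpy(insns + prologue_cnt, orig + init_cnt,
               (orig_cnt - init_cnt) * sizeof(*insns));

        /* Load the patched instructions outside the libbpf load machinery. */
        fd = bpf_prog_load(BPF_PROG_TYPE_KPROBE, "perf_kprobe", "GPL",
                           insns, cnt, NULL);
        free(insns);
        return fd;
}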
2022-08-01  perf bpf: Convert legacy map definition to BTF-defined  (Jiri Olsa, 2 files changed, -13/+24)
libbpf is switching off support for legacy map definitions [1], which will break the perf llvm tests. Move the base source map definition to a BTF-defined one; this requires building with the -g compile option so that debug/BTF info is emitted. [1] https://lore.kernel.org/bpf/20220627211527.2245459-1-andrii@kernel.org/ Signed-off-by: Jiri Olsa <jolsa@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Ian Rogers <irogers@google.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: bpf@vger.kernel.org Link: https://lore.kernel.org/r/20220704152721.352046-1-jolsa@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
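For illustration, the shape of the conversion looks roughly like this (map name and fields are illustrative, not necessarily the exact perf test source):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Legacy definition that libbpf is retiring: */
struct bpf_map_def SEC("maps") flip_table = {
        .type = BPF_MAP_TYPE_ARRAY,
        .key_size = sizeof(int),
        .value_size = sizeof(int),
        .max_entries = 1,
};

/* BTF-defined equivalent; requires building with -g so the type info
 * (BTF) for the key/value types is emitted: */
struct {
        __uint(type, BPF_MAP_TYPE_ARRAY);
        __uint(max_entries, 1);
        __type(key, int);
        __type(value, int);
} flip_table_btf SEC(".maps");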
2022-08-01  perf symbol: Fail to read phdr workaround  (Ian Rogers, 1 file changed, -7/+20)
The perf jvmti agent doesn't create program headers, in this case fallback on section headers as happened previously. Committer notes: To test this, from a public post by Ian: 1) download a Java workload dacapo-9.12-MR1-bach.jar from https://sourceforge.net/projects/dacapobench/ 2) build perf such as "make -C tools/perf O=/tmp/perf NO_LIBBFD=1" it should detect Java and create /tmp/perf/libperf-jvmti.so 3) run perf with the jvmti agent: perf record -k 1 java -agentpath:/tmp/perf/libperf-jvmti.so -jar dacapo-9.12-MR1-bach.jar -n 10 fop 4) run perf inject: perf inject -i perf.data -o perf-injected.data -j 5) run perf report perf report -i perf-injected.data | grep org.apache.fop With this patch reverted I see lots of symbols like: 0.00% java jitted-388040-4656.so [.] org.apache.fop.fo.FObj.bind(org.apache.fop.fo.PropertyList) With the patch (2d86612aacb7805f ("perf symbol: Correct address for bss symbols")) I see lots of: dso__load_sym_internal: failed to find program header for symbol: Lorg/apache/fop/fo/FObj;bind(Lorg/apache/fop/fo/PropertyList;)V st_value: 0x40 Fixes: 2d86612aacb7805f ("perf symbol: Correct address for bss symbols") Reviewed-by: Leo Yan <leo.yan@linaro.org> Signed-off-by: Ian Rogers <irogers@google.com> Tested-by: Leo Yan <leo.yan@linaro.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Leo Yan <leo.yan@linaro.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Link: http://lore.kernel.org/lkml/20220731164923.691193-1-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-08-01  perf lock: Implement cpu and task filters for BPF  (Namhyung Kim, 5 files changed, -13/+162)
Add -a/--all-cpus and -C/--cpu options for cpu filtering. Also -p/--pid and --tid options are added for task filtering. The short -t option is taken for --threads already. Tracking the command line workload is possible as well. $ sudo perf lock contention -a -b sleep 1 Signed-off-by: Namhyung Kim <namhyung@kernel.org> Cc: Blake Jones <blakejones@google.com> Cc: Boqun Feng <boqun.feng@gmail.com> Cc: Davidlohr Bueso <dave@stgolabs.net> Cc: Ian Rogers <irogers@google.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Waiman Long <longman@redhat.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20220729200756.666106-4-namhyung@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-08-01  perf lock: Use BPF for lock contention analysis  (Namhyung Kim, 7 files changed, -115/+453)
Add -b/--use-bpf option to use BPF to collect lock contention stats. For simplicity it now runs system-wide and requires C-c to stop. Upcoming changes will add the usual filtering. $ sudo perf lock con -b ^C contended total wait max wait avg wait type caller 42 192.67 us 13.64 us 4.59 us spinlock queue_work_on+0x20 23 85.54 us 10.28 us 3.72 us spinlock worker_thread+0x14a 6 13.92 us 6.51 us 2.32 us mutex kernfs_iop_permission+0x30 3 11.59 us 10.04 us 3.86 us mutex kernfs_dop_revalidate+0x3c 1 7.52 us 7.52 us 7.52 us spinlock kthread+0x115 1 7.24 us 7.24 us 7.24 us rwlock:W sys_epoll_wait+0x148 2 7.08 us 3.99 us 3.54 us spinlock delayed_work_timer_fn+0x1b 1 6.41 us 6.41 us 6.41 us spinlock idle_balance+0xa06 2 2.50 us 1.83 us 1.25 us mutex kernfs_iop_lookup+0x2f 1 1.71 us 1.71 us 1.71 us mutex kernfs_iop_getattr+0x2c Signed-off-by: Namhyung Kim <namhyung@kernel.org> Cc: Blake Jones <blakejones@google.com> Cc: Boqun Feng <boqun.feng@gmail.com> Cc: Davidlohr Bueso <dave@stgolabs.net> Cc: Ian Rogers <irogers@google.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Waiman Long <longman@redhat.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20220729200756.666106-3-namhyung@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-08-01  perf lock: Pass machine pointer to is_lock_function()  (Namhyung Kim, 1 file changed, -8/+7)
This is a preparation for later change to expose the function externally so that it can be used without the implicit session data. Signed-off-by: Namhyung Kim <namhyung@kernel.org> Cc: Blake Jones <blakejones@google.com> Cc: Boqun Feng <boqun.feng@gmail.com> Cc: Davidlohr Bueso <dave@stgolabs.net> Cc: Ian Rogers <irogers@google.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Waiman Long <longman@redhat.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20220729200756.666106-2-namhyung@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-08-01  perf test: Add user space counter reading tests  (Ian Rogers, 1 file changed, -1/+126)
These tests are based on test_stat_user_read in tools/lib/perf/tests/test-evsel.c. The tests are modified to skip if perf_event_open fails or rdpmc isn't supported. Committer testing: ⬢[acme@toolbox perf]$ perf test "mmap interface" 4: mmap interface tests : 4.1: Read samples using the mmap interface : Skip (permissions) ⬢[acme@toolbox perf]$ [root@five ~]# perf test "mmap interface" 4: mmap interface tests : 4.1: Read samples using the mmap interface : Ok [root@five ~]# Signed-off-by: Ian Rogers <irogers@google.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Anshuman Khandual <anshuman.khandual@arm.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Kajol Jain <kjain@linux.ibm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rob Herring <robh@kernel.org> Cc: Stephane Eranian <eranian@google.com> Link: http://lore.kernel.org/lkml/20220719223946.176299-4-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
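The core of such a user-space read (as documented for the perf mmap page) looks roughly like the sketch below; it is x86-only, skips the time-enabled/time-running scaling, and is not the test's actual code:

#include <linux/perf_event.h>
#include <stdint.h>

static inline uint64_t read_pmc(uint32_t counter)
{
        uint32_t low, high;

        asm volatile("rdpmc" : "=a" (low), "=d" (high) : "c" (counter));
        return low | ((uint64_t)high << 32);
}

/* pc points at the event's mmap'ed struct perf_event_mmap_page. */
static uint64_t read_user_counter(struct perf_event_mmap_page *pc)
{
        uint64_t count;
        uint32_t seq, idx;

        do {
                seq = pc->lock;
                __sync_synchronize();
                idx = pc->index;
                count = pc->offset;
                if (pc->cap_user_rdpmc && idx)
                        count += read_pmc(idx - 1);
                __sync_synchronize();
        } while (pc->lock != seq);

        return count;
}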
2022-08-01  perf test: Remove x86 rdpmc test  (Ian Rogers, 3 files changed, -185/+0)
This test has been superseded by test_stat_user_read in: tools/lib/perf/tests/test-evsel.c The updated test doesn't divide-by-0 when running time of a counter is 0. It also supports ARM64. Signed-off-by: Ian Rogers <irogers@google.com> Acked-by: Rob Herring <robh@kernel.org> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Anshuman Khandual <anshuman.khandual@arm.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Kajol Jain <kjain@linux.ibm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Link: http://lore.kernel.org/lkml/20220719223946.176299-3-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-07-31  Linux 5.19  (Linus Torvalds, 1 file changed, -1/+1)
2022-07-30  locking/rwsem: Allow slowpath writer to ignore handoff bit if not set by first waiter  (Waiman Long, 1 file changed, -10/+20)
With commit d257cc8cb8d5 ("locking/rwsem: Make handoff bit handling more consistent"), the writer that sets the handoff bit can be interrupted out without clearing the bit if the wait queue isn't empty. This disables reader and writer optimistic lock spinning and stealing. Now if a non-first writer in the queue is somehow woken up or a new waiter enters the slowpath, it can't acquire the lock. This is not the case before commit d257cc8cb8d5 as the writer that set the handoff bit will clear it when exiting out via the out_nolock path. This is less efficient as the busy rwsem stays in an unlock state for a longer time. In some cases, this new behavior may cause lockups as shown in [1] and [2]. This patch allows a non-first writer to ignore the handoff bit if it is not originally set or initiated by the first waiter. This patch is shown to be effective in fixing the lockup problem reported in [1]. [1] https://lore.kernel.org/lkml/20220617134325.GC30825@techsingularity.net/ [2] https://lore.kernel.org/lkml/3f02975c-1a9d-be20-32cf-f1d8e3dfafcc@oracle.com/ Fixes: d257cc8cb8d5 ("locking/rwsem: Make handoff bit handling more consistent") Signed-off-by: Waiman Long <longman@redhat.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: John Donnelly <john.p.donnelly@oracle.com> Tested-by: Mel Gorman <mgorman@techsingularity.net> Link: https://lore.kernel.org/r/20220622200419.778799-1-longman@redhat.com
2022-07-29  docs/kernel-parameters: Update descriptions for "mitigations=" param with retbleed  (Eiichi Tsukata, 1 file changed, -0/+2)
Updates descriptions for "mitigations=off" and "mitigations=auto,nosmt" with the respective retbleed= settings. Signed-off-by: Eiichi Tsukata <eiichi.tsukata@nutanix.com> Signed-off-by: Borislav Petkov <bp@suse.de> Cc: corbet@lwn.net Link: https://lore.kernel.org/r/20220728043907.165688-1-eiichi.tsukata@nutanix.com
2022-07-29  mm/hmm: fault non-owner device private entries  (Ralph Campbell, 1 file changed, -11/+8)
If hmm_range_fault() is called with the HMM_PFN_REQ_FAULT flag and a device private PTE is found, the hmm_range::dev_private_owner page is used to determine if the device private page should not be faulted in. However, if the device private page is not owned by the caller, hmm_range_fault() returns an error instead of calling migrate_to_ram() to fault in the page. For example, if a page is migrated to GPU private memory and a RDMA fault capable NIC tries to read the migrated page, without this patch it will get an error. With this patch, the page will be migrated back to system memory and the NIC will be able to read the data. Link: https://lkml.kernel.org/r/20220727000837.4128709-2-rcampbell@nvidia.com Link: https://lkml.kernel.org/r/20220725183615.4118795-2-rcampbell@nvidia.com Fixes: 08ddddda667b ("mm/hmm: check the device private page owner in hmm_range_fault()") Signed-off-by: Ralph Campbell <rcampbell@nvidia.com> Reported-by: Felix Kuehling <felix.kuehling@amd.com> Reviewed-by: Alistair Popple <apopple@nvidia.com> Cc: Philip Yang <Philip.Yang@amd.com> Cc: Jason Gunthorpe <jgg@nvidia.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-07-29  page_alloc: fix invalid watermark check on a negative value  (Jaewon Kim, 1 file changed, -4/+8)
There was a report that a task is waiting at the throttle_direct_reclaim. The pgscan_direct_throttle in vmstat was increasing. This is a bug where zone_watermark_fast returns true even when the free is very low. The commit f27ce0e14088 ("page_alloc: consider highatomic reserve in watermark fast") changed the watermark fast to consider highatomic reserve. But it did not handle a negative value case which can be happened when reserved_highatomic pageblock is bigger than the actual free. If watermark is considered as ok for the negative value, allocating contexts for order-0 will consume all free pages without direct reclaim, and finally free page may become depleted except highatomic free. Then allocating contexts may fall into throttle_direct_reclaim. This symptom may easily happen in a system where wmark min is low and other reclaimers like kswapd does not make free pages quickly. Handle the negative case by using MIN. Link: https://lkml.kernel.org/r/20220725095212.25388-1-jaewon31.kim@samsung.com Fixes: f27ce0e14088 ("page_alloc: consider highatomic reserve in watermark fast") Signed-off-by: Jaewon Kim <jaewon31.kim@samsung.com> Reported-by: GyeongHwan Hong <gh21.hong@samsung.com> Acked-by: Mel Gorman <mgorman@techsingularity.net> Cc: Minchan Kim <minchan@kernel.org> Cc: Baoquan He <bhe@redhat.com> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Michal Hocko <mhocko@kernel.org> Cc: Yong-Taek Lee <ytk.lee@samsung.com> Cc: <stable@vger.kerenl.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
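Schematically, the fixed check clamps the reserve before the comparison; the sketch below is illustrative (standalone names, not the exact mm/page_alloc.c code):

#include <stdbool.h>

/* 'mark' is unsigned, so if the reserve subtraction drove the free count
 * negative, the signed value would be promoted to a huge unsigned number
 * and the comparison would wrongly pass.  Clamping with MIN keeps it >= 0. */
static bool watermark_fast_ok(long free_pages, long highatomic_reserve,
                              unsigned long mark, long lowmem_reserve)
{
        long usable = free_pages;
        long reserved = highatomic_reserve;

        usable -= (usable < reserved) ? usable : reserved;      /* min() */
        return usable > mark + lowmem_reserve;
}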
2022-07-29  workqueue: Avoid a false warning in unbind_workers()  (Lai Jiangshan, 1 file changed, -1/+4)
Calling set_cpus_allowed_ptr() with wq_unbound_cpumask can fail and trigger a false warning. Use cpu_possible_mask instead when wq_unbound_cpumask has no active CPUs. It is very easy to trigger the warning: set wq_unbound_cpumask to a small set of CPUs, offline all the CPUs of wq_unbound_cpumask, then offline an extra CPU and the warning fires. Fixes: 10a5a651e3af ("workqueue: Restrict kworker in the offline CPU pool running on housekeeping CPUs") Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com> Signed-off-by: Tejun Heo <tj@kernel.org>
2022-07-29  perf stat: Add topdown metrics in the default perf stat on the hybrid machine  (Zhengjun Xing, 7 files changed, -25/+66)
Topdown metrics are missed in the default perf stat on the hybrid machine, add Topdown metrics in default perf stat for hybrid systems. Currently, we support the perf metrics Topdown for the p-core PMU in the perf stat default, the perf metrics Topdown support for e-core PMU will be implemented later separately. Refactor the code adds two x86 specific functions. Widen the size of the event name column by 7 chars, so that all metrics after the "#" become aligned again. The perf metrics topdown feature is supported on the cpu_core of ADL. The dedicated perf metrics counter and the fixed counter 3 are used for the topdown events. Adding the topdown metrics doesn't trigger multiplexing. Before: # ./perf stat -a true Performance counter stats for 'system wide': 53.70 msec cpu-clock # 25.736 CPUs utilized 80 context-switches # 1.490 K/sec 24 cpu-migrations # 446.951 /sec 52 page-faults # 968.394 /sec 2,788,555 cpu_core/cycles/ # 51.931 M/sec 851,129 cpu_atom/cycles/ # 15.851 M/sec 2,974,030 cpu_core/instructions/ # 55.385 M/sec 416,919 cpu_atom/instructions/ # 7.764 M/sec 586,136 cpu_core/branches/ # 10.916 M/sec 79,872 cpu_atom/branches/ # 1.487 M/sec 14,220 cpu_core/branch-misses/ # 264.819 K/sec 7,691 cpu_atom/branch-misses/ # 143.229 K/sec 0.002086438 seconds time elapsed After: # ./perf stat -a true Performance counter stats for 'system wide': 61.39 msec cpu-clock # 24.874 CPUs utilized 76 context-switches # 1.238 K/sec 24 cpu-migrations # 390.968 /sec 52 page-faults # 847.097 /sec 2,753,695 cpu_core/cycles/ # 44.859 M/sec 903,899 cpu_atom/cycles/ # 14.725 M/sec 2,927,529 cpu_core/instructions/ # 47.690 M/sec 428,498 cpu_atom/instructions/ # 6.980 M/sec 581,299 cpu_core/branches/ # 9.470 M/sec 83,409 cpu_atom/branches/ # 1.359 M/sec 13,641 cpu_core/branch-misses/ # 222.216 K/sec 8,008 cpu_atom/branch-misses/ # 130.453 K/sec 14,761,308 cpu_core/slots/ # 240.466 M/sec 3,288,625 cpu_core/topdown-retiring/ # 22.3% retiring 1,323,323 cpu_core/topdown-bad-spec/ # 9.0% bad speculation 5,477,470 cpu_core/topdown-fe-bound/ # 37.1% frontend bound 4,679,199 cpu_core/topdown-be-bound/ # 31.7% backend bound 646,194 cpu_core/topdown-heavy-ops/ # 4.4% heavy operations # 17.9% light operations 1,244,999 cpu_core/topdown-br-mispredict/ # 8.4% branch mispredict # 0.5% machine clears 3,891,800 cpu_core/topdown-fetch-lat/ # 26.4% fetch latency # 10.7% fetch bandwidth 1,879,034 cpu_core/topdown-mem-bound/ # 12.7% memory bound # 19.0% Core bound 0.002467839 seconds time elapsed Reviewed-by: Kan Liang <kan.liang@linux.intel.com> Signed-off-by: Xing Zhengjun <zhengjun.xing@linux.intel.com> Acked-by: Ian Rogers <irogers@google.com> Acked-by: Namhyung Kim <namhyung@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20220721065706.2886112-6-zhengjun.xing@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-07-29  perf x86 evlist: Add default hybrid events for perf stat  (Kan Liang, 3 files changed, -2/+54)
Provide a new solution to replace the reverted commit ac2dc29edd21f9ec ("perf stat: Add default hybrid events") For the default software attrs, nothing is changed. For the default hardware attrs, create a new evsel for each hybrid pmu. With the new solution, adding a new default attr will not require the special support for the hybrid platform anymore. Also, the "--detailed" is supported on the hybrid platform With the patch, $ perf stat -a -ddd sleep 1 Performance counter stats for 'system wide': 32,231.06 msec cpu-clock # 32.056 CPUs utilized 529 context-switches # 16.413 /sec 32 cpu-migrations # 0.993 /sec 69 page-faults # 2.141 /sec 176,754,151 cpu_core/cycles/ # 5.484 M/sec (41.65%) 161,695,280 cpu_atom/cycles/ # 5.017 M/sec (49.92%) 48,595,992 cpu_core/instructions/ # 1.508 M/sec (49.98%) 32,363,337 cpu_atom/instructions/ # 1.004 M/sec (58.26%) 10,088,639 cpu_core/branches/ # 313.010 K/sec (58.31%) 6,390,582 cpu_atom/branches/ # 198.274 K/sec (58.26%) 846,201 cpu_core/branch-misses/ # 26.254 K/sec (66.65%) 676,477 cpu_atom/branch-misses/ # 20.988 K/sec (58.27%) 14,290,070 cpu_core/L1-dcache-loads/ # 443.363 K/sec (66.66%) 9,983,532 cpu_atom/L1-dcache-loads/ # 309.749 K/sec (58.27%) 740,725 cpu_core/L1-dcache-load-misses/ # 22.982 K/sec (66.66%) <not supported> cpu_atom/L1-dcache-load-misses/ 480,441 cpu_core/LLC-loads/ # 14.906 K/sec (66.67%) 326,570 cpu_atom/LLC-loads/ # 10.132 K/sec (58.27%) 329 cpu_core/LLC-load-misses/ # 10.208 /sec (66.68%) 0 cpu_atom/LLC-load-misses/ # 0.000 /sec (58.32%) <not supported> cpu_core/L1-icache-loads/ 21,982,491 cpu_atom/L1-icache-loads/ # 682.028 K/sec (58.43%) 4,493,189 cpu_core/L1-icache-load-misses/ # 139.406 K/sec (33.34%) 4,711,404 cpu_atom/L1-icache-load-misses/ # 146.176 K/sec (50.08%) 13,713,090 cpu_core/dTLB-loads/ # 425.462 K/sec (33.34%) 9,384,727 cpu_atom/dTLB-loads/ # 291.170 K/sec (50.08%) 157,387 cpu_core/dTLB-load-misses/ # 4.883 K/sec (33.33%) 108,328 cpu_atom/dTLB-load-misses/ # 3.361 K/sec (50.08%) <not supported> cpu_core/iTLB-loads/ <not supported> cpu_atom/iTLB-loads/ 37,655 cpu_core/iTLB-load-misses/ # 1.168 K/sec (33.32%) 61,661 cpu_atom/iTLB-load-misses/ # 1.913 K/sec (50.03%) <not supported> cpu_core/L1-dcache-prefetches/ <not supported> cpu_atom/L1-dcache-prefetches/ <not supported> cpu_core/L1-dcache-prefetch-misses/ <not supported> cpu_atom/L1-dcache-prefetch-misses/ 1.005466919 seconds time elapsed Signed-off-by: Kan Liang <kan.liang@linux.intel.com> Acked-by: Ian Rogers <irogers@google.com> Acked-by: Namhyung Kim <namhyung@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20220721065706.2886112-5-zhengjun.xing@linux.intel.com Signed-off-by: Xing Zhengjun <zhengjun.xing@linux.intel.com> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-07-29  perf evlist: Always use arch_evlist__add_default_attrs()  (Kan Liang, 4 files changed, -6/+23)
Current perf stat uses the evlist__add_default_attrs() to add the generic default attrs, and uses arch_evlist__add_default_attrs() to add the Arch specific default attrs, e.g., Topdown for x86. It works well for the non-hybrid platforms. However, for a hybrid platform, the hard code generic default attrs don't work. Uses arch_evlist__add_default_attrs() to replace the evlist__add_default_attrs(). The arch_evlist__add_default_attrs() is modified to invoke the same __evlist__add_default_attrs() for the generic default attrs. No functional change. Add default_null_attrs[] to indicate the arch specific attrs. No functional change for the arch specific default attrs either. Signed-off-by: Kan Liang <kan.liang@linux.intel.com> Acked-by: Ian Rogers <irogers@google.com> Acked-by: Namhyung Kim <namhyung@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20220721065706.2886112-4-zhengjun.xing@linux.intel.com Signed-off-by: Xing Zhengjun <zhengjun.xing@linux.intel.com> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-07-29  perf evsel: Add arch_evsel__hw_name()  (Kan Liang, 3 files changed, -1/+27)
The commit 55bcf6ef314ae8ba ("perf: Extend PERF_TYPE_HARDWARE and PERF_TYPE_HW_CACHE") extends the two types to become PMU aware types for a hybrid system. However, current evsel__hw_name doesn't take the PMU type into account. It mistakenly returns the "unknown-hardware" for the hardware event with a specific PMU type. Add an arch specific arch_evsel__hw_name() to specially handle the PMU aware hardware event. Currently, the extend PERF_TYPE_HARDWARE and PERF_TYPE_HW_CACHE is only supported by X86. Only implement the specific arch_evsel__hw_name() for X86 in the patch. Nothing is changed for the other archs. Signed-off-by: Kan Liang <kan.liang@linux.intel.com> Acked-by: Ian Rogers <irogers@google.com> Acked-by: Namhyung Kim <namhyung@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20220721065706.2886112-3-zhengjun.xing@linux.intel.com Signed-off-by: Xing Zhengjun <zhengjun.xing@linux.intel.com> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
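Schematically (constant names here are illustrative, not the kernel's), the extended encoding places the PMU type in the upper 32 bits of attr.config, so a PMU-aware hw_name() helper has to mask it off before indexing the generic hardware event names:

#include <stdint.h>

#define HYBRID_PMU_TYPE_SHIFT 32
#define HYBRID_HW_EVENT_MASK  0xffffffffULL

/* Build an extended PERF_TYPE_HARDWARE config for a given hybrid PMU. */
static uint64_t hybrid_hw_config(uint32_t pmu_type, uint64_t hw_event)
{
        return ((uint64_t)pmu_type << HYBRID_PMU_TYPE_SHIFT) |
               (hw_event & HYBRID_HW_EVENT_MASK);
}

/* Recover the generic event id (e.g. PERF_COUNT_HW_INSTRUCTIONS) so the
 * name lookup doesn't fall back to "unknown-hardware". */
static uint64_t hybrid_hw_event(uint64_t config)
{
        return config & HYBRID_HW_EVENT_MASK;
}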
2022-07-29  perf stat: Revert "perf stat: Add default hybrid events"  (Kan Liang, 1 file changed, -30/+0)
This reverts commit ac2dc29edd21f9ec ("perf stat: Add default hybrid events"). Between this patch and the reverted patch, commit 6c1912898ed21bef ("perf parse-events: Rename parse_events_error functions") and commit 07eafd4e053a41d7 ("perf parse-event: Add init and exit to parse_event_error") cleaned up the parse_events_error_*() code, so the related change is also reverted. The reverted patch is hard to extend to support new default events, e.g. Topdown events, and the existing "--detailed" option on a hybrid platform. A new solution will be proposed in the following patch to enable the perf stat default on a hybrid platform. Signed-off-by: Kan Liang <kan.liang@linux.intel.com> Acked-by: Ian Rogers <irogers@google.com> Acked-by: Namhyung Kim <namhyung@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20220721065706.2886112-2-zhengjun.xing@linux.intel.com Signed-off-by: Xing Zhengjun <zhengjun.xing@linux.intel.com> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-07-29  perf test: Fix test case 95 ("Check branch stack sampling") on s390 and use same event  (Thomas Richter, 1 file changed, -1/+1)
On linux-next tree 'perf test 95' ("Check branch stack sampling") was added recently. s390 does not support branch sampling at all and the test case fails despite for checking branch support before hand. The check for support of branching uses the software event named "dummy", as seen in the line: perf record -b -o- -e dummy -B true > /dev/null 2>&1 || exit 2 However when the branch recording is actually done, a different event is used, as seen in the line: perf record -o $TMPDIR/... --branch-filter any,save_type,u -- ... The event is omitted and for "perf record" the default event is cycles, which is not supported by s390 and this fails when executed on s390: # perf record --branch-filter any,save_type,u -- /tmp/__perf_test.program.iDSmQ/a.out Error: cycles: PMU Hardware or event type doesn't support branch stack sampling. # Therefore fix this and use the same event cycles for testing support and actually running the test. Output before: # ./perf test -Fv 95 95: Check branch stack sampling : --- start --- Testing user branch stack sampling ---- end ---- Check branch stack sampling: FAILED! # Output after: # ./perf test -Fv 95 95: Check branch stack sampling : --- start --- ---- end ---- Check branch stack sampling: Skip # Fixes: b55878c90ab92a24 ("perf test: Add test for branch stack sampling") Reviewed-by: James Clark <james.clark@arm.com> Signed-off-by: Thomas Richter <tmricht@linux.ibm.com> Acked-by: German Gomez <german.gomez@arm.com> Cc: German Gomez <german.gomez@arm.com> Cc: Heiko Carstens <hca@linux.ibm.com> Cc: Sumanth Korikkar <sumanthk@linux.ibm.com> Cc: Sven Schnelle <svens@linux.ibm.com> Cc: Vasily Gorbik <gor@linux.ibm.com> Link: https://lore.kernel.org/r/20220727141439.712582-1-tmricht@linux.ibm.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-07-29  LoongArch: Fix wrong "ROM Size" of boardinfo  (Tiezhu Yang, 1 file changed, -1/+1)
We can see the "ROM Size" is different in the following outputs: [root@linux loongson]# cat /sys/firmware/loongson/boardinfo BIOS Information Vendor : Loongson Version : vUDK2018-LoongArch-V2.0.pre-beta8 ROM Size : 63 KB Release Date : 06/15/2022 Board Information Manufacturer : Loongson Board Name : Loongson-LS3A5000-7A1000-1w-A2101 Family : LOONGSON64 [root@linux loongson]# dmidecode | head -11 ... Handle 0x0000, DMI type 0, 26 bytes BIOS Information Vendor: Loongson Version: vUDK2018-LoongArch-V2.0.pre-beta8 Release Date: 06/15/2022 ROM Size: 4 MB According to "BIOS Information (Type 0) structure" in the SMBIOS Reference Specification [1], it shows 64K * (n+1) is the size of the physical device containing the BIOS if the size is less than 16M. Additionally, we can see the related code in dmidecode [2]: u64 s = { .l = (code1 + 1) << 6 }; So the output of dmidecode is correct, the output of boardinfo is wrong, fix it. By the way, at present no need to consider the size is 16M or greater on LoongArch, because it is usually 4M or 8M which is enough to use. [1] https://www.dmtf.org/sites/default/files/standards/documents/DSP0134_3.6.0.pdf [2] https://git.savannah.nongnu.org/cgit/dmidecode.git/tree/dmidecode.c#n347 Fixes: 628c3bb40e9a ("LoongArch: Add boot and setup routines") Reviewed-by: WANG Xuerui <git@xen0n.name> Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
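In code form, the Type 0 decode described above amounts to the following small standalone example, matching the dmidecode expression quoted in the message:

#include <stdio.h>

/* Below the extended-size range, the physical ROM is 64 KB * (n + 1),
 * i.e. dmidecode's (code1 + 1) << 6 in KB. */
static unsigned int bios_rom_size_kb(unsigned char code1)
{
        return ((unsigned int)code1 + 1) * 64;
}

int main(void)
{
        /* 0x3F (63) therefore means 4096 KB = 4 MB, not "63 KB". */
        printf("%u KB\n", bios_rom_size_kb(0x3F));
        return 0;
}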
2022-07-29  LoongArch: Fix missing fcsr in ptrace's fpr_set  (Qi Hu, 1 file changed, -5/+7)
In ptrace.c, fpr_set() does not copy fcsr data from ubuf to kbuf, which is why fcsr cannot be modified by ptrace. This patch fixes the problem and allows users to modify the fcsr via ptrace. Co-developed-by: Xu Li <lixu@loongson.cn> Signed-off-by: Qi Hu <huqi@loongson.cn> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2022-07-29  LoongArch: Fix shared cache size calculation  (Huacai Chen, 1 file changed, -2/+9)
The current calculation of the shared cache size is from the node (die) scope, but we want 'lscpu' to show the shared cache size of the whole package for multi-die chips (e.g., Loongson-3C5000L, which contains 4 dies in one package). So fix it by multiplying by nodes_per_package. Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2022-07-29  LoongArch: Disable executable stack by default  (Huacai Chen, 1 file changed, -2/+0)
Disable executable stack for LoongArch by default, as all modern architectures do. Reported-by: Andreas Schwab <schwab@suse.de> Suggested-by: WANG Xuerui <git@xen0n.name> Link: https://sourceware.org/pipermail/binutils/2022-July/121992.html Tested-by: WANG Xuerui <git@xen0n.name> Tested-by: Xi Ruoyao <xry111@xry111.site> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2022-07-29  LoongArch: Remove unused variables  (Bibo Mao, 2 files changed, -32/+0)
There are some variables that are never used or referenced; this patch removes these variables and makes the code cleaner. Reviewed-by: WANG Xuerui <git@xen0n.name> Signed-off-by: Bibo Mao <maobibo@loongson.cn> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2022-07-29  LoongArch: Remove clock setting during cpu hotplug stage  (Bibo Mao, 2 files changed, -101/+13)
On physical machine we can save power by disabling clock of hot removed cpu. However as different platforms require different methods to configure clocks, the code is platform-specific, and probably belongs to firmware/pmu or cpu regulator, rather than generic arch/loongarch code. Also, there is no such register on QEMU virt machine since the clock/frequency regulation is not emulated. This patch removes the hard-coded clock register accesses in generic LoongArch cpu hotplug flow. Reviewed-by: WANG Xuerui <git@xen0n.name> Signed-off-by: Bibo Mao <maobibo@loongson.cn> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2022-07-29  LoongArch: Remove useless header compiler.h  (Jun Yi, 8 files changed, -32/+6)
The content of LoongArch's compiler.h is trivial, with some definitions not used anywhere, so inline the definitions and remove the header. Signed-off-by: Jun Yi <yijun@loongson.cn> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2022-07-29  LoongArch: Remove several syntactic sugar macros for branches  (WANG Xuerui, 1 file changed, -12/+0)
These syntactic sugars have been supported by upstream binutils from the beginning, so no need to patch them locally. Signed-off-by: WANG Xuerui <git@xen0n.name> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2022-07-29  LoongArch: Re-tab the assembly files  (WANG Xuerui, 7 files changed, -163/+163)
Reflow the *.S files for better stylistic consistency, namely hard tabs after mnemonic position, and vertical alignment of the first operand with hard tabs. Tab width is obviously 8. Some pre-existing intra-block vertical alignments are preserved. Signed-off-by: WANG Xuerui <git@xen0n.name> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2022-07-29LoongArch: Simplify "BGT foo, zero" with BGTZWANG Xuerui2-2/+2
Support for the syntactic sugar is present in upstream binutils port from the beginning. Use it for shorter lines and better consistency. Generated code should be identical. Signed-off-by: WANG Xuerui <git@xen0n.name> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2022-07-29LoongArch: Simplify "BLT foo, zero" with BLTZWANG Xuerui2-7/+7
Support for the syntactic sugar is present in upstream binutils port from the beginning. Use it for shorter lines and better consistency. Generated code should be identical. Signed-off-by: WANG Xuerui <git@xen0n.name> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2022-07-29LoongArch: Simplify "BEQ/BNE foo, zero" with BEQZ/BNEZWANG Xuerui4-22/+22
While B{EQ,NE}Z and B{EQ,NE} are different instructions, and the vastly expanded range for branch destination does not really matter in the few cases touched, use the B{EQ,NE}Z where possible for shorter lines and better consistency (e.g. some places used "BEQ foo, zero", while some used "BEQ zero, foo"). Signed-off-by: WANG Xuerui <git@xen0n.name> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2022-07-29LoongArch: Use the "move" pseudo-instruction where applicableWANG Xuerui5-8/+8
Some of the assembly code in the LoongArch port likely originated from a time when the assembler did not support pseudo-instructions like "move" or "jr", so the desugared form was used and readability suffers (to a minor degree) as a result. As the upstream toolchain supports these pseudo-instructions from the beginning, migrate the existing few usages to them for better readability. Signed-off-by: WANG Xuerui <git@xen0n.name> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2022-07-29LoongArch: Use the "jr" pseudo-instruction where applicableWANG Xuerui5-15/+15
Some of the assembly code in the LoongArch port likely originated from a time when the assembler did not support pseudo-instructions like "move" or "jr", so the desugared form was used and readability suffers (to a minor degree) as a result. As the upstream toolchain supports these pseudo-instructions from the beginning, migrate the existing few usages to them for better readability. Signed-off-by: WANG Xuerui <git@xen0n.name> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2022-07-29  LoongArch: Use ABI names of registers where appropriate  (WANG Xuerui, 7 files changed, -55/+55)
Some of the assembly in the LoongArch port seem to come from a prehistoric time, when the assembler didn't even have support for the ABI names we all come to know and love, thus used raw register numbers which hampered readability. The usages are found with a regex match inside arch/loongarch, then manually adjusted for those non-definitions. Signed-off-by: WANG Xuerui <git@xen0n.name> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2022-07-29  ARM: findbit: fix overflowing offset  (Russell King (Oracle), 1 file changed, -8/+8)
When offset is larger than the size of the bit array, we should not attempt to access the array as we can perform an access beyond the end of the array. Fix this by changing the pre-condition. Using "cmp r2, r1; bhs ..." covers us for the size == 0 case, since this will always take the branch when r1 is zero, irrespective of the value of r2. This means we can fix this bug without adding any additional code! Tested-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
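In C terms, the fixed precondition reads roughly as follows (the real change is in the ARM assembly of arch/arm/lib/findbit.S; names below are illustrative):

/* Guard used before scanning: taken whenever offset >= maxbit, which also
 * covers maxbit == 0 ("cmp r2, r1; bhs" is always taken when r1 is zero),
 * so no separate zero-size check is needed. */
static unsigned int findbit_guard(unsigned int maxbit, unsigned int offset)
{
        if (offset >= maxbit)
                return maxbit;          /* out of range: nothing to search */
        /* ... placeholder for the actual scan starting at 'offset' ... */
        return maxbit;
}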
2022-07-29  x86/bugs: Do not enable IBPB at firmware entry when IBPB is not available  (Thadeu Lima de Souza Cascardo, 1 file changed, -0/+1)
Some cloud hypervisors do not provide IBPB on very recent CPU processors, including AMD processors affected by Retbleed. Using IBPB before firmware calls on such systems would cause a GPF at boot like the one below. Do not enable such calls when IBPB support is not present. EFI Variables Facility v0.08 2004-May-17 general protection fault, maybe for address 0x1: 0000 [#1] PREEMPT SMP NOPTI CPU: 0 PID: 24 Comm: kworker/u2:1 Not tainted 5.19.0-rc8+ #7 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015 Workqueue: efi_rts_wq efi_call_rts RIP: 0010:efi_call_rts Code: e8 37 33 58 ff 41 bf 48 00 00 00 49 89 c0 44 89 f9 48 83 c8 01 4c 89 c2 48 c1 ea 20 66 90 b9 49 00 00 00 b8 01 00 00 00 31 d2 <0f> 30 e8 7b 9f 5d ff e8 f6 f8 ff ff 4c 89 f1 4c 89 ea 4c 89 e6 48 RSP: 0018:ffffb373800d7e38 EFLAGS: 00010246 RAX: 0000000000000001 RBX: 0000000000000006 RCX: 0000000000000049 RDX: 0000000000000000 RSI: ffff94fbc19d8fe0 RDI: ffff94fbc1b2b300 RBP: ffffb373800d7e70 R08: 0000000000000000 R09: 0000000000000000 R10: 000000000000000b R11: 000000000000000b R12: ffffb3738001fd78 R13: ffff94fbc2fcfc00 R14: ffffb3738001fd80 R15: 0000000000000048 FS: 0000000000000000(0000) GS:ffff94fc3da00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: ffff94fc30201000 CR3: 000000006f610000 CR4: 00000000000406f0 Call Trace: <TASK> ? __wake_up process_one_work worker_thread ? rescuer_thread kthread ? kthread_complete_and_exit ret_from_fork </TASK> Modules linked in: Fixes: 28a99e95f55c ("x86/amd: Use IBPB for firmware calls") Reported-by: Dimitri John Ledkov <dimitri.ledkov@canonical.com> Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com> Signed-off-by: Borislav Petkov <bp@suse.de> Cc: <stable@vger.kernel.org> Link: https://lore.kernel.org/r/20220728122602.2500509-1-cascardo@canonical.com
2022-07-28  nouveau/svm: Fix to migrate all requested pages  (Alistair Popple, 1 file changed, -1/+5)
Users may request that pages from an OpenCL SVM allocation be migrated to the GPU with clEnqueueSVMMigrateMem(). In Nouveau this will call into nouveau_dmem_migrate_vma() to do the migration. If the total range to be migrated exceeds SG_MAX_SINGLE_ALLOC the pages will be migrated in chunks of size SG_MAX_SINGLE_ALLOC. However a typo in updating the starting address means that only the first chunk will get migrated. Fix the calculation so that the entire range will get migrated if possible. Signed-off-by: Alistair Popple <apopple@nvidia.com> Fixes: e3d8b0890469 ("drm/nouveau/svm: map pages after migration") Reviewed-by: Ralph Campbell <rcampbell@nvidia.com> Reviewed-by: Lyude Paul <lyude@redhat.com> Signed-off-by: Lyude Paul <lyude@redhat.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220720062745.960701-1-apopple@nvidia.com Cc: <stable@vger.kernel.org> # v5.8+
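The intended loop shape is roughly the following (purely illustrative, not the nouveau code): each pass migrates at most one chunk, and the start address must advance past it, otherwise only the first chunk is ever processed.

static void migrate_range_in_chunks(unsigned long addr, unsigned long end,
                                    unsigned long chunk_bytes)
{
        while (addr < end) {
                unsigned long next = addr + chunk_bytes;

                if (next > end)
                        next = end;
                /* ... migrate the pages in [addr, next) ... */
                addr = next;    /* advance past the chunk just migrated */
        }
}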
2022-07-28  perf vendor events arm64: Arm Cortex-A78C and X1C  (Nick Forrington, 1 file changed, -0/+2)
Add PMU events for the Arm Cortex-A78C and Arm Cortex-X1C CPUs. Events for Arm Cortex-A78C match those for Arm Cortex-A78. Events for Arm Cortex-X1C match those for Arm Cortex- X1. As such, this is just a mapfile change. Main ID Register (MIDR) and event data is sourced from the corresponding Arm Technical Reference Manuals: Arm Cortex-A78C: https://developer.arm.com/documentation/102226/ Arm Cortex-X1C: https://developer.arm.com/documentation/101968/ Reviewed-by: John Garry <john.garry@huawei.com> Signed-off-by: Nick Forrington <nick.forrington@arm.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andrew Kilroy <andrew.kilroy@arm.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: James Clark <james.clark@arm.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Leo Yan <leo.yan@linaro.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mike Leach <mike.leach@linaro.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Will Deacon <will@kernel.org> Cc: linux-arm-kernel@lists.infradead.org Link: https://lore.kernel.org/r/20220610174459.615995-1-nick.forrington@arm.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-07-28  perf vendor events: Update Intel snowridgex  (Ian Rogers, 1 file changed, -1/+1)
Update to v1.20, the metrics are based on TMA 4.4 full. Use script at: https://github.com/intel/event-converter-for-linux-perf/blob/master/download_and_gen.py to download and generate the latest events and metrics. Manually copy the snowridgex files into perf and update mapfile.csv. Tested on a non-snowridgex with 'perf test': 10: PMU events : 10.1: PMU event table sanity : Ok 10.2: PMU event map aliases : Ok 10.3: Parsing of PMU event table metrics : Ok 10.4: Parsing of PMU event table metrics with fake PMUs : Ok This change just updates the version number. Signed-off-by: Ian Rogers <irogers@google.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Alexandre Torgue <alexandre.torgue@foss.st.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Caleb Biggers <caleb.biggers@intel.com> Cc: James Clark <james.clark@arm.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: John Garry <john.garry@huawei.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Kshipra Bopardikar <kshipra.bopardikar@intel.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Maxime Coquelin <mcoquelin.stm32@gmail.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Perry Taylor <perry.taylor@intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Sedat Dilek <sedat.dilek@gmail.com> Cc: Stephane Eranian <eranian@google.com> Cc: Xing Zhengjun <zhengjun.xing@linux.intel.com> Link: http://lore.kernel.org/lkml/20220727220832.2865794-31-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-07-28  perf vendor events: Update Intel westmereex  (Ian Rogers, 4 files changed, -4/+4)
Update to v3, the metrics are based on TMA 4.4 full. Use script at: https://github.com/intel/event-converter-for-linux-perf/blob/master/download_and_gen.py to download and generate the latest events and metrics. Manually copy the westmereex files into perf and update mapfile.csv. This change just aligns whitespace and updates the version number. Signed-off-by: Ian Rogers <irogers@google.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Alexandre Torgue <alexandre.torgue@foss.st.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Caleb Biggers <caleb.biggers@intel.com> Cc: James Clark <james.clark@arm.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: John Garry <john.garry@huawei.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Kshipra Bopardikar <kshipra.bopardikar@intel.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Maxime Coquelin <mcoquelin.stm32@gmail.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Perry Taylor <perry.taylor@intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Sedat Dilek <sedat.dilek@gmail.com> Cc: Stephane Eranian <eranian@google.com> Cc: Xing Zhengjun <zhengjun.xing@linux.intel.com> Link: http://lore.kernel.org/lkml/20220727220832.2865794-30-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-07-28  perf vendor events: Update Intel westmereep-sp  (Ian Rogers, 4 files changed, -4/+4)
Update to v3, the metrics are based on TMA 4.4 full. Use script at: https://github.com/intel/event-converter-for-linux-perf/blob/master/download_and_gen.py to download and generate the latest events and metrics. Manually copy the westmereep-sp files into perf and update mapfile.csv. This change just aligns whitespace and updates the version number. Signed-off-by: Ian Rogers <irogers@google.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Alexandre Torgue <alexandre.torgue@foss.st.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Caleb Biggers <caleb.biggers@intel.com> Cc: James Clark <james.clark@arm.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: John Garry <john.garry@huawei.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Kshipra Bopardikar <kshipra.bopardikar@intel.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Maxime Coquelin <mcoquelin.stm32@gmail.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Perry Taylor <perry.taylor@intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Sedat Dilek <sedat.dilek@gmail.com> Cc: Stephane Eranian <eranian@google.com> Cc: Xing Zhengjun <zhengjun.xing@linux.intel.com> Link: http://lore.kernel.org/lkml/20220727220832.2865794-29-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-07-28  perf vendor events: Update Intel westmereep-dp  (Ian Rogers, 5 files changed, -5/+5)
Update to v2, the metrics are based on TMA 4.4 full. Use script at: https://github.com/intel/event-converter-for-linux-perf/blob/master/download_and_gen.py to download and generate the latest events and metrics. Manually copy the westmereep-dp files into perf and update mapfile.csv. This change just aligns whitespace. Signed-off-by: Ian Rogers <irogers@google.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Alexandre Torgue <alexandre.torgue@foss.st.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Caleb Biggers <caleb.biggers@intel.com> Cc: James Clark <james.clark@arm.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: John Garry <john.garry@huawei.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Kshipra Bopardikar <kshipra.bopardikar@intel.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Maxime Coquelin <mcoquelin.stm32@gmail.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Perry Taylor <perry.taylor@intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Sedat Dilek <sedat.dilek@gmail.com> Cc: Stephane Eranian <eranian@google.com> Cc: Xing Zhengjun <zhengjun.xing@linux.intel.com> Link: http://lore.kernel.org/lkml/20220727220832.2865794-28-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-07-28  perf vendor events: Update Intel tigerlake  (Ian Rogers, 10 files changed, -67/+439)
Update to v1.07, the metrics are based on TMA 4.4 full. Use script at: https://github.com/intel/event-converter-for-linux-perf/blob/master/download_and_gen.py to download and generate the latest events and metrics. Manually copy the tigerlake files into perf and update mapfile.csv. Tested on a non-tigerlake with 'perf test': 10: PMU events : 10.1: PMU event table sanity : Ok 10.2: PMU event map aliases : Ok 10.3: Parsing of PMU event table metrics : Ok 10.4: Parsing of PMU event table metrics with fake PMUs : Ok Signed-off-by: Ian Rogers <irogers@google.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Alexandre Torgue <alexandre.torgue@foss.st.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Caleb Biggers <caleb.biggers@intel.com> Cc: James Clark <james.clark@arm.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: John Garry <john.garry@huawei.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Kshipra Bopardikar <kshipra.bopardikar@intel.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Maxime Coquelin <mcoquelin.stm32@gmail.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Perry Taylor <perry.taylor@intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Sedat Dilek <sedat.dilek@gmail.com> Cc: Stephane Eranian <eranian@google.com> Cc: Xing Zhengjun <zhengjun.xing@linux.intel.com> Link: http://lore.kernel.org/lkml/20220727220832.2865794-27-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-07-28  perf vendor events: Update Intel skylakex  (Ian Rogers, 9 files changed, -77/+1414)
Update to v1.28, the metrics are based on TMA 4.4 full. Use script at: https://github.com/intel/event-converter-for-linux-perf/blob/master/download_and_gen.py to download and generate the latest events and metrics. Manually copy the skylakex files into perf and update mapfile.csv. Tested with 'perf test': 10: PMU events : 10.1: PMU event table sanity : Ok 10.2: PMU event map aliases : Ok 10.3: Parsing of PMU event table metrics : Ok 10.4: Parsing of PMU event table metrics with fake PMUs : Ok 90: perf all metricgroups test : Ok 91: perf all metrics test : Skip 93: perf all PMU test : Ok Signed-off-by: Ian Rogers <irogers@google.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Alexandre Torgue <alexandre.torgue@foss.st.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Caleb Biggers <caleb.biggers@intel.com> Cc: James Clark <james.clark@arm.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: John Garry <john.garry@huawei.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Kshipra Bopardikar <kshipra.bopardikar@intel.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Maxime Coquelin <mcoquelin.stm32@gmail.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Perry Taylor <perry.taylor@intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Sedat Dilek <sedat.dilek@gmail.com> Cc: Stephane Eranian <eranian@google.com> Cc: Xing Zhengjun <zhengjun.xing@linux.intel.com> Link: http://lore.kernel.org/lkml/20220727220832.2865794-26-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-07-28  perf vendor events: Update Intel skylake  (Ian Rogers, 9 files changed, -319/+345)
Update to v53, the metrics are based on TMA 4.4 full. Use script at: https://github.com/intel/event-converter-for-linux-perf/blob/master/download_and_gen.py to download and generate the latest events and metrics. Manually copy the skylake files into perf and update mapfile.csv. Tested on a non-skylake with 'perf test': 10: PMU events : 10.1: PMU event table sanity : Ok 10.2: PMU event map aliases : Ok 10.3: Parsing of PMU event table metrics : Ok 10.4: Parsing of PMU event table metrics with fake PMUs : Ok Signed-off-by: Ian Rogers <irogers@google.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Alexandre Torgue <alexandre.torgue@foss.st.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Caleb Biggers <caleb.biggers@intel.com> Cc: James Clark <james.clark@arm.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: John Garry <john.garry@huawei.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Kshipra Bopardikar <kshipra.bopardikar@intel.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Maxime Coquelin <mcoquelin.stm32@gmail.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Perry Taylor <perry.taylor@intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Sedat Dilek <sedat.dilek@gmail.com> Cc: Stephane Eranian <eranian@google.com> Cc: Xing Zhengjun <zhengjun.xing@linux.intel.com> Link: http://lore.kernel.org/lkml/20220727220832.2865794-25-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-07-28  perf vendor events: Update Intel silvermont  (Ian Rogers, 8 files changed, -10/+8)
Update to v14, the metrics are based on TMA 4.4 full. Use script at: https://github.com/intel/event-converter-for-linux-perf/blob/master/download_and_gen.py to download and generate the latest events and metrics. Manually copy the silvermont files into perf and update mapfile.csv. Other than aligning whitespace this change just folds the mapfile.csv entries for silvertmont onto one line. Signed-off-by: Ian Rogers <irogers@google.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Alexandre Torgue <alexandre.torgue@foss.st.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Caleb Biggers <caleb.biggers@intel.com> Cc: James Clark <james.clark@arm.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: John Garry <john.garry@huawei.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Kshipra Bopardikar <kshipra.bopardikar@intel.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Maxime Coquelin <mcoquelin.stm32@gmail.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Perry Taylor <perry.taylor@intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Sedat Dilek <sedat.dilek@gmail.com> Cc: Stephane Eranian <eranian@google.com> Cc: Xing Zhengjun <zhengjun.xing@linux.intel.com> Link: http://lore.kernel.org/lkml/20220727220832.2865794-24-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-07-28  perf vendor events: Update Intel sapphirerapids  (Ian Rogers, 7 files changed, -25/+691)
Update to v1.04, the metrics are based on TMA 4.4 full. Use script at: https://github.com/intel/event-converter-for-linux-perf/blob/master/download_and_gen.py to download and generate the latest events and metrics. Manually copy the sapphirerapids files into perf and update mapfile.csv. Tested on a non-sapphirerapids with 'perf test': 10: PMU events : 10.1: PMU event table sanity : Ok 10.2: PMU event map aliases : Ok 10.3: Parsing of PMU event table metrics : Ok 10.4: Parsing of PMU event table metrics with fake PMUs : Ok Signed-off-by: Ian Rogers <irogers@google.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Alexandre Torgue <alexandre.torgue@foss.st.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Caleb Biggers <caleb.biggers@intel.com> Cc: James Clark <james.clark@arm.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: John Garry <john.garry@huawei.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Kshipra Bopardikar <kshipra.bopardikar@intel.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Maxime Coquelin <mcoquelin.stm32@gmail.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Perry Taylor <perry.taylor@intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Sedat Dilek <sedat.dilek@gmail.com> Cc: Stephane Eranian <eranian@google.com> Cc: Xing Zhengjun <zhengjun.xing@linux.intel.com> Link: http://lore.kernel.org/lkml/20220727220832.2865794-23-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>