path: root/tools/perf/arch/x86/util/evlist.c
2025-03-11  perf x86 evlist: Update comments on topdown regrouping  (Dapeng Mi, 1 file changed, -21/+6)

Update the comments to remove the statements about groupings not working, and to note that the

```
perf stat -e "{instructions,slots},{cycles,topdown-retiring}"
```

case now works.

Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Signed-off-by: Ian Rogers <irogers@google.com>
Link: https://lore.kernel.org/r/20250307023906.1135613-4-irogers@google.com
Signed-off-by: Namhyung Kim <namhyung@kernel.org>

2025-03-11  perf parse-events: Corrections to topdown sorting  (Ian Rogers, 1 file changed, -6/+9)

In the case of '{instructions,slots},faults,topdown-retiring' the first event that must be grouped, slots, is ignored, causing the topdown-retiring event not to be adjacent to the group it needs to be inserted into. Don't ignore the group members when computing the force_grouped_index.

Make the force_grouped_index be for the leader of the group it is within, and always use it first rather than a group leader index, so that topdown events may be sorted from one group into another. As the PMU name comparison applies to moving events in the same group, ensure the name ordering is always respected.

Change the group splitting logic to not group if there are no other topdown events, and to fix cases where the force group leader wasn't being grouped with the other members of its group.

Reported-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Closes: https://lore.kernel.org/lkml/20250224083306.71813-2-dapeng1.mi@linux.intel.com/
Closes: https://lore.kernel.org/lkml/f7e4f7e8-748c-4ec7-9088-0e844392c11a@linux.intel.com/
Signed-off-by: Ian Rogers <irogers@google.com>
Tested-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Link: https://lore.kernel.org/r/20250307023906.1135613-3-irogers@google.com
Signed-off-by: Namhyung Kim <namhyung@kernel.org>

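A minimal standalone sketch of the sorting idea in the commit above, not the perf implementation: the struct, its fields and the leader indices are invented for the example. Each parsed event carries the index of the group leader it must sort with, so the topdown event from '{instructions,slots},faults,topdown-retiring' is tagged with the slots group's leader index and lands adjacent to that group; comparing (leader index, PMU name, parse order) the same way in both directions keeps the comparator symmetric.

```
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct ev {
	const char *name;
	const char *pmu;
	int leader_idx;	/* index of the group leader this event sorts with */
	int idx;	/* original parse order, used as the final tie break */
};

static int cmp_ev(const void *pa, const void *pb)
{
	const struct ev *a = pa, *b = pb;
	int ret;

	if (a->leader_idx != b->leader_idx)
		return a->leader_idx - b->leader_idx;
	ret = strcmp(a->pmu, b->pmu);
	if (ret)
		return ret;
	return a->idx - b->idx;
}

int main(void)
{
	/* '{instructions,slots},faults,topdown-retiring': the topdown event
	 * is given the slots group's leader index (0) so it sorts next to
	 * that group rather than staying after 'faults'. */
	struct ev evs[] = {
		{ "instructions",     "cpu", 0, 0 },
		{ "slots",            "cpu", 0, 1 },
		{ "faults",           "cpu", 2, 2 },
		{ "topdown-retiring", "cpu", 0, 3 },
	};
	size_t n = sizeof(evs) / sizeof(evs[0]);

	qsort(evs, n, sizeof(evs[0]), cmp_ev);
	for (size_t i = 0; i < n; i++)
		printf("%s\n", evs[i].name);
	return 0;
}
```
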
2024-10-16  perf x86/topdown: Make topdown metrics comparators be symmetric  (Dapeng Mi, 1 file changed, -6/+9)

The commit 3b5edc0421e2 ("perf x86/topdown: Don't move topdown metric events in group") modifies the topdown metrics comparator to move topdown metrics events which are not in the same group as the previous event. But it only modifies the 2nd comparator and causes the comparators to become asymmetric.

Thus modify the 1st topdown metrics comparator to make the two comparators symmetric, and refine the comments as well.

Suggested-by: Ian Rogers <irogers@google.com>
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Reviewed-by: Ian Rogers <irogers@google.com>
Link: https://lore.kernel.org/r/20241011110207.1032235-1-dapeng1.mi@linux.intel.com
Signed-off-by: Namhyung Kim <namhyung@kernel.org>

2024-09-30  perf x86/topdown: Don't move topdown metric events in group  (Dapeng Mi, 1 file changed, -1/+61)

When running the perf command below, an error is reported.

  perf record -e "{slots,instructions,topdown-retiring}:S" -vv -C0 sleep 1
  ------------------------------------------------------------
  perf_event_attr:
    type                 4 (cpu)
    size                 168
    config               0x400 (slots)
    sample_type          IP|TID|TIME|READ|CPU|PERIOD|IDENTIFIER
    read_format          ID|GROUP|LOST
    disabled             1
    sample_id_all        1
    exclude_guest        1
  ------------------------------------------------------------
  sys_perf_event_open: pid -1  cpu 0  group_fd -1  flags 0x8 = 5
  ------------------------------------------------------------
  perf_event_attr:
    type                 4 (cpu)
    size                 168
    config               0x8000 (topdown-retiring)
    { sample_period, sample_freq }   4000
    sample_type          IP|TID|TIME|READ|CPU|PERIOD|IDENTIFIER
    read_format          ID|GROUP|LOST
    freq                 1
    sample_id_all        1
    exclude_guest        1
  ------------------------------------------------------------
  sys_perf_event_open: pid -1  cpu 0  group_fd 5  flags 0x8
  sys_perf_event_open failed, error -22

  Error:
  The sys_perf_event_open() syscall returned with 22 (Invalid argument) for event (topdown-retiring).

The reason for the error is that the events are regrouped and the topdown-retiring event is moved to be closely after the slots event; the topdown-retiring event then needs to do sampling, but the Intel PMU driver doesn't support sampling topdown metrics events.

Topdown metrics events are only required to be in a group which has the slots event as leader. They are not required to be closely after the slots event. Thus it's overkill to move topdown metrics events closely after the slots event when regrouping, and doing so further causes the issue above.

Thus don't move topdown metrics events forward if they are already in a group.

Reviewed-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Cc: Yongwei Ma <yongwei.ma@intel.com>
Link: https://lore.kernel.org/r/20240913084712.13861-4-dapeng1.mi@linux.intel.com
Signed-off-by: Namhyung Kim <namhyung@kernel.org>

2024-09-30  perf x86/topdown: Complete topdown slots/metrics events check  (Dapeng Mi, 1 file changed, -4/+4)

It's not sufficient to check whether an event is a topdown slots or topdown metrics event by only comparing the event name, since the user may specify the event in RAW format, e.g.

  perf stat -e '{instructions,cpu/r400/,cpu/r8300/}' sleep 1

   Performance counter stats for 'sleep 1':

       <not counted>      instructions
       <not counted>      cpu/r400/
     <not supported>      cpu/r8300/

         1.002917796 seconds time elapsed

         0.002955000 seconds user
         0.000000000 seconds sys

The RAW format slots and topdown-be-bound events are not recognized, so the events are not regrouped, which eventually causes an error.

Thus add two helpers, arch_is_topdown_slots()/arch_is_topdown_metrics(), to detect whether an event is a topdown slots/metrics event by comparing the event config directly, and use these two helpers to replace the original event name comparisons.

Reviewed-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Cc: Yongwei Ma <yongwei.ma@intel.com>
Link: https://lore.kernel.org/r/20240913084712.13861-2-dapeng1.mi@linux.intel.com
Signed-off-by: Namhyung Kim <namhyung@kernel.org>

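A rough sketch of what a config-based check could look like, not the perf source: the slots config 0x400 and the topdown-retiring config 0x8000 come from the -vv dump earlier in this log and the r400/r8300 RAW events quoted above; treating umask 0x80..0x87 as the full range of the eight topdown metrics events is an assumption made for this example.

```
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Detect topdown slots/metrics events from the raw event config instead
 * of the name, so cpu/r400/ and cpu/r8300/ are recognized too. */
static bool is_topdown_slots_config(uint64_t config)
{
	/* fixed-counter-3 pseudo event: event 0x00, umask 0x04 -> 0x0400 */
	return (config & 0xffff) == 0x0400;
}

static bool is_topdown_metrics_config(uint64_t config)
{
	uint64_t event = config & 0xff;
	uint64_t umask = (config >> 8) & 0xff;

	/* assumed range covering topdown-retiring (0x8000) .. mem-bound */
	return event == 0x00 && umask >= 0x80 && umask <= 0x87;
}

int main(void)
{
	printf("r400  -> slots:   %d\n", is_topdown_slots_config(0x400));
	printf("r8300 -> metrics: %d\n", is_topdown_metrics_config(0x8300));
	return 0;
}
```
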
2024-09-26  perf stat: Remove evlist__add_default_attrs use strings  (Ian Rogers, 1 file changed, -71/+3)

add_default_attributes() would add evsels from pre-created perf_event_attr; however, this needed fixing for hybrid as the extended PMU type was necessary for each core PMU. The logic for this was in an arch specific x86 function and wasn't present for ARM, meaning that default events weren't being opened on all PMUs on ARM.

Change the creation of the default events to use parse_events() and strings, as that will open the events on all PMUs. Rather than try to detect events on PMUs before parsing, parse the event but skip its output in stat-display.

The previous order of hardware events was: cycles, stalled-cycles-frontend, stalled-cycles-backend, instructions. As instructions is a more fundamental concept, the order is changed to: instructions, cycles, stalled-cycles-frontend, stalled-cycles-backend.

Closes: https://lore.kernel.org/lkml/CAP-5=fVABSBZnsmtRn1uF-k-G1GWM-L5SgiinhPTfHbQsKXb_g@mail.gmail.com/
Acked-by: Namhyung Kim <namhyung@kernel.org>
Signed-off-by: Ian Rogers <irogers@google.com>
[Don't display unsupported default events except 'cycles']
Acked-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: James Clark <james.clark@linaro.org>
Cc: Yang Jihong <yangjihong@bytedance.com>
Cc: Dominique Martinet <asmadeus@codewreck.org>
Cc: Colin Ian King <colin.i.king@gmail.com>
Cc: Howard Chu <howardchu95@gmail.com>
Cc: Ze Gao <zegao2021@gmail.com>
Cc: Yicong Yang <yangyicong@hisilicon.com>
Cc: Weilin Wang <weilin.wang@intel.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mike Leach <mike.leach@linaro.org>
Cc: Jing Zhang <renyu.zj@linux.alibaba.com>
Cc: Yang Li <yang.lee@linux.alibaba.com>
Cc: Leo Yan <leo.yan@linux.dev>
Cc: ak@linux.intel.com
Cc: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: Sun Haiyong <sunhaiyong@loongson.cn>
Cc: John Garry <john.g.garry@oracle.com>
Link: https://lore.kernel.org/r/20240926144851.245903-4-james.clark@linaro.org
Signed-off-by: Namhyung Kim <namhyung@kernel.org>

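As a small illustration of the idea (not the perf code): the defaults become parse strings in the order described above, so the regular event parser, with its wildcard PMU handling, can open them on every core PMU instead of relying on hand-built perf_event_attr.

```
#include <stdio.h>

/* Default hardware events expressed as parse strings; a real tool would
 * hand each string to its event parser rather than print it. */
static const char *default_hw_events[] = {
	"instructions",
	"cycles",
	"stalled-cycles-frontend",
	"stalled-cycles-backend",
};

int main(void)
{
	size_t n = sizeof(default_hw_events) / sizeof(default_hw_events[0]);

	for (size_t i = 0; i < n; i++)
		printf("-e %s\n", default_hw_events[i]);
	return 0;
}
```
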
2024-08-13  perf stat: Fork and launch 'perf record' when 'perf stat' needs to get retire latency value for a metric.  (Weilin Wang, 1 file changed, -0/+6)

When a retire_latency value is used in a metric formula, evsel would fork a 'perf record' process with the "-e" and "-W" options. 'perf record' will collect the required retire_latency values in parallel while 'perf stat' is collecting counting values.

At the point when 'perf stat' stops counting, evsel would stop 'perf record' by sending a SIGTERM signal to the 'perf record' process. The sampled data will be processed to get the retire latency value. Another thread is required to synchronize between 'perf stat' and 'perf record' when we pass data through a pipe.

The retire_latency evsel is not opened for 'perf stat', so no counter is wasted on it.

This commit includes code suggested by Namhyung to adjust the reading size for groups that include retire_latency evsels.

In the current :R parsing implementation, the parser would recognize events with the retire_latency modifier and insert them into the evlist like a normal event. Ideally, we need to avoid counting these events.

In this commit, at the time a retire_latency evsel is read, set the retire latency value processed from the sampled data as its count value. This sampled retire latency value will be used for metric calculation and the final event count printout. No special metric calculation or event printout code is required for retire_latency events.

Reviewed-by: Namhyung Kim <namhyung@kernel.org>
Signed-off-by: Weilin Wang <weilin.wang@intel.com>
Acked-by: Ian Rogers <irogers@google.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Caleb Biggers <caleb.biggers@intel.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Perry Taylor <perry.taylor@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Samantha Alt <samantha.alt@intel.com>
Link: https://lore.kernel.org/r/20240720062102.444578-4-weilin.wang@intel.com
[ Squashed the 3rd and 4th commit in the series to keep it building patch by patch ]
[ Constified the 'struct perf_tool' pointer in process_sample_event() ]
[ Use perf_tool__init(&tool, false) to address a segfault I reported and Ian/Weilin diagnosed ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

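A bare-bones sketch of the fork/launch/terminate flow described above, not the perf implementation: 'cycles' stands in for the real retire_latency event, the one-second sleep stands in for the counting interval, and the pipe-based synchronization is omitted.

```
#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
	pid_t child = fork();

	if (child < 0)
		return 1;
	if (child == 0) {
		/* Child: sample with weights (-W); the event is a placeholder
		 * for the retire_latency events the metric actually needs. */
		execlp("perf", "perf", "record", "-a", "-W", "-e", "cycles",
		       (char *)NULL);
		_exit(127);	/* exec failed */
	}

	sleep(1);		/* stand-in for the counting interval */

	kill(child, SIGTERM);	/* counting done: stop the sampling child */
	waitpid(child, NULL, 0);
	return 0;
}
```
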
2023-08-03  perf parse-events x86: Avoid sorting uops_retired.slots  (Ian Rogers, 1 file changed, -3/+4)

As topdown.slots may appear as slots, it may get confused with uops_retired.slots, which is an invalid perf metric event group leader. Special case uops_retired.slots to avoid this confusion.

Reviewed-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Ian Rogers <irogers@google.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ravi Bangoria <ravi.bangoria@amd.com>
Cc: Weilin Wang <weilin.wang@intel.com>
Cc: Xing Zhengjun <zhengjun.xing@linux.intel.com>
Link: https://lore.kernel.org/r/20230801053634.1142634-2-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

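A sketch of the special case, not the perf source: a loose substring match on "slots" would also catch uops_retired.slots, so that name is excluded explicitly. strcasestr() is a GNU extension, hence the feature-test macro.

```
#define _GNU_SOURCE
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Treat a name as the topdown slots event only if it contains "slots"
 * and is not the uops_retired.slots event, which must not be made a
 * perf metric group leader. */
static bool name_is_topdown_slots(const char *name)
{
	return strcasestr(name, "slots") &&
	       !strcasestr(name, "uops_retired.slots");
}

int main(void)
{
	printf("slots:              %d\n", name_is_topdown_slots("slots"));
	printf("cpu/slots/:         %d\n", name_is_topdown_slots("cpu/slots/"));
	printf("uops_retired.slots: %d\n",
	       name_is_topdown_slots("uops_retired.slots"));
	return 0;
}
```
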
2023-05-27  perf pmus: Remove perf_pmus__has_hybrid  (Ian Rogers, 1 file changed, -1/+1)

perf_pmus__has_hybrid was used to detect when there was more than one core PMU. This can be achieved with perf_pmus__num_core_pmus, which doesn't depend upon is_pmu_hybrid and PMU name comparisons.

When modifying the function calls, take the opportunity to improve comments, to enable/simplify tests that were previously failing for hybrid but now pass, and to simplify generic code.

Reviewed-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Ian Rogers <irogers@google.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Ali Saidi <alisaidi@amazon.com>
Cc: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
Cc: Dmitrii Dolgov <9erthalion6@gmail.com>
Cc: Huacai Chen <chenhuacai@kernel.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James Clark <james.clark@arm.com>
Cc: Jing Zhang <renyu.zj@linux.alibaba.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: John Garry <john.g.garry@oracle.com>
Cc: Kajol Jain <kjain@linux.ibm.com>
Cc: Kang Minchul <tegongkang@gmail.com>
Cc: Leo Yan <leo.yan@linaro.org>
Cc: Madhavan Srinivasan <maddy@linux.ibm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Mike Leach <mike.leach@linaro.org>
Cc: Ming Wang <wangming01@loongson.cn>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ravi Bangoria <ravi.bangoria@amd.com>
Cc: Rob Herring <robh@kernel.org>
Cc: Sandipan Das <sandipan.das@amd.com>
Cc: Sean Christopherson <seanjc@google.com>
Cc: Suzuki Poulouse <suzuki.poulose@arm.com>
Cc: Thomas Richter <tmricht@linux.ibm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Xing Zhengjun <zhengjun.xing@linux.intel.com>
Cc: coresight@lists.linaro.org
Cc: linux-arm-kernel@lists.infradead.org
Link: https://lore.kernel.org/r/20230527072210.2900565-34-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

2023-05-27  perf pmus: Allow just core PMU scanning  (Ian Rogers, 1 file changed, -4/+1)

Scanning all PMUs is expensive as all PMUs' sysfs entries are loaded; benchmarking shows more than 4x the cost:

```
$ perf bench internals pmu-scan -i 1000
Computing performance of sysfs PMU event scan for 1000 times
  Average core PMU scanning took: 989.231 usec (+- 1.535 usec)
  Average PMU scanning took: 4309.425 usec (+- 74.322 usec)
```

Add a new perf_pmus__scan_core routine that scans just core PMUs. Replace perf_pmus__scan calls with perf_pmus__scan_core when non-core PMUs are being ignored.

Reviewed-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Ian Rogers <irogers@google.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Ali Saidi <alisaidi@amazon.com>
Cc: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
Cc: Dmitrii Dolgov <9erthalion6@gmail.com>
Cc: Huacai Chen <chenhuacai@kernel.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James Clark <james.clark@arm.com>
Cc: Jing Zhang <renyu.zj@linux.alibaba.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: John Garry <john.g.garry@oracle.com>
Cc: Kajol Jain <kjain@linux.ibm.com>
Cc: Kang Minchul <tegongkang@gmail.com>
Cc: Leo Yan <leo.yan@linaro.org>
Cc: Madhavan Srinivasan <maddy@linux.ibm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Mike Leach <mike.leach@linaro.org>
Cc: Ming Wang <wangming01@loongson.cn>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ravi Bangoria <ravi.bangoria@amd.com>
Cc: Rob Herring <robh@kernel.org>
Cc: Sandipan Das <sandipan.das@amd.com>
Cc: Sean Christopherson <seanjc@google.com>
Cc: Suzuki Poulouse <suzuki.poulose@arm.com>
Cc: Thomas Richter <tmricht@linux.ibm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Xing Zhengjun <zhengjun.xing@linux.intel.com>
Cc: coresight@lists.linaro.org
Cc: linux-arm-kernel@lists.infradead.org
Link: https://lore.kernel.org/r/20230527072210.2900565-30-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

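A sketch of the core-only scan idea, not the perf code: it walks the sysfs event_source directory and keeps only PMUs whose name looks like a core PMU. The name heuristic ("cpu" or a "cpu_" prefix, matching the cpu_core/cpu_atom PMUs seen elsewhere in this log) is an x86-flavoured assumption for the example; real perf uses its own core-PMU detection.

```
#include <dirent.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

static bool is_core_pmu_name(const char *name)
{
	return !strcmp(name, "cpu") || !strncmp(name, "cpu_", 4);
}

int main(void)
{
	DIR *dir = opendir("/sys/bus/event_source/devices");
	struct dirent *d;

	if (!dir)
		return 1;
	/* Only core PMUs are visited; the many uncore PMUs that make a
	 * full scan expensive are skipped entirely. */
	while ((d = readdir(dir)) != NULL) {
		if (is_core_pmu_name(d->d_name))
			printf("core PMU: %s\n", d->d_name);
	}
	closedir(dir);
	return 0;
}
```
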
2023-05-27  perf pmu: Separate pmu and pmus  (Ian Rogers, 1 file changed, -2/+3)

Separate and hide the pmus list in pmus.[ch]. Move pmus functionality out of pmu.[ch] into pmus.[ch], renaming pmus functions which were prefixed perf_pmu__ to perf_pmus__.

Reviewed-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Ian Rogers <irogers@google.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Ali Saidi <alisaidi@amazon.com>
Cc: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
Cc: Dmitrii Dolgov <9erthalion6@gmail.com>
Cc: Huacai Chen <chenhuacai@kernel.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James Clark <james.clark@arm.com>
Cc: Jing Zhang <renyu.zj@linux.alibaba.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: John Garry <john.g.garry@oracle.com>
Cc: Kajol Jain <kjain@linux.ibm.com>
Cc: Kang Minchul <tegongkang@gmail.com>
Cc: Leo Yan <leo.yan@linaro.org>
Cc: Madhavan Srinivasan <maddy@linux.ibm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Mike Leach <mike.leach@linaro.org>
Cc: Ming Wang <wangming01@loongson.cn>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ravi Bangoria <ravi.bangoria@amd.com>
Cc: Rob Herring <robh@kernel.org>
Cc: Sandipan Das <sandipan.das@amd.com>
Cc: Sean Christopherson <seanjc@google.com>
Cc: Suzuki Poulouse <suzuki.poulose@arm.com>
Cc: Thomas Richter <tmricht@linux.ibm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Xing Zhengjun <zhengjun.xing@linux.intel.com>
Cc: coresight@lists.linaro.org
Cc: linux-arm-kernel@lists.infradead.org
Link: https://lore.kernel.org/r/20230527072210.2900565-28-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

2023-05-27  perf x86: Iterate hybrid PMUs as core PMUs  (Ian Rogers, 1 file changed, -8/+17)

Rather than iterating over a separate hybrid list, iterate all PMUs with the hybrid ones having is_core as true.

Reviewed-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Ian Rogers <irogers@google.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Ali Saidi <alisaidi@amazon.com>
Cc: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
Cc: Dmitrii Dolgov <9erthalion6@gmail.com>
Cc: Huacai Chen <chenhuacai@kernel.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James Clark <james.clark@arm.com>
Cc: Jing Zhang <renyu.zj@linux.alibaba.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: John Garry <john.g.garry@oracle.com>
Cc: Kajol Jain <kjain@linux.ibm.com>
Cc: Kang Minchul <tegongkang@gmail.com>
Cc: Leo Yan <leo.yan@linaro.org>
Cc: Madhavan Srinivasan <maddy@linux.ibm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Mike Leach <mike.leach@linaro.org>
Cc: Ming Wang <wangming01@loongson.cn>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ravi Bangoria <ravi.bangoria@amd.com>
Cc: Rob Herring <robh@kernel.org>
Cc: Sandipan Das <sandipan.das@amd.com>
Cc: Sean Christopherson <seanjc@google.com>
Cc: Suzuki Poulouse <suzuki.poulose@arm.com>
Cc: Thomas Richter <tmricht@linux.ibm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Xing Zhengjun <zhengjun.xing@linux.intel.com>
Cc: coresight@lists.linaro.org
Cc: linux-arm-kernel@lists.infradead.org
Link: https://lore.kernel.org/r/20230527072210.2900565-18-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

2023-05-15  perf parse-events: Don't reorder atom cpu events  (Ian Rogers, 1 file changed, -2/+2)

On hybrid systems the topdown events don't share a fixed counter on the atom core, so they don't require the sorting that the perf-metric-supporting PMUs do.

Signed-off-by: Ian Rogers <irogers@google.com>
Tested-by: Kan Liang <kan.liang@linux.intel.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Ahmad Yasin <ahmad.yasin@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
Cc: Caleb Biggers <caleb.biggers@intel.com>
Cc: Edward Baker <edward.baker@intel.com>
Cc: Florian Fischer <florian.fischer@muhq.space>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James Clark <james.clark@arm.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: John Garry <john.g.garry@oracle.com>
Cc: Kajol Jain <kjain@linux.ibm.com>
Cc: Kang Minchul <tegongkang@gmail.com>
Cc: Leo Yan <leo.yan@linaro.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Perry Taylor <perry.taylor@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ravi Bangoria <ravi.bangoria@amd.com>
Cc: Rob Herring <robh@kernel.org>
Cc: Samantha Alt <samantha.alt@intel.com>
Cc: Stephane Eranian <eranian@google.com>
Cc: Sumanth Korikkar <sumanthk@linux.ibm.com>
Cc: Suzuki Poulouse <suzuki.poulose@arm.com>
Cc: Thomas Richter <tmricht@linux.ibm.com>
Cc: Tiezhu Yang <yangtiezhu@loongson.cn>
Cc: Weilin Wang <weilin.wang@intel.com>
Cc: Xing Zhengjun <zhengjun.xing@linux.intel.com>
Cc: Yang Jihong <yangjihong1@huawei.com>
Link: https://lore.kernel.org/r/20230502223851.2234828-38-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

2023-03-13  perf parse-events: Sort and group parsed events  (Ian Rogers, 1 file changed, -23/+16)

This change is intended to be a no-op for most current cases; the default sort order is the order the events were parsed. Where it varies is in how groups are handled. Previously an uncore and core event that are grouped would most often cause the group to be removed:

```
$ perf stat -e '{instructions,uncore_imc_free_running_0/data_total/}' -a sleep 1
WARNING: grouped events cpus do not match, disabling group:
  anon group { instructions, uncore_imc_free_running_0/data_total/ }
...
```

However, when wildcards are used the events should be re-sorted and re-grouped in parse_events__set_leader, but this currently fails for simple examples:

```
$ perf stat -e '{uncore_imc_free_running/data_read/,uncore_imc_free_running/data_write/}' -a sleep 1

 Performance counter stats for 'system wide':

     <not counted> MiB  uncore_imc_free_running/data_read/
     <not counted> MiB  uncore_imc_free_running/data_write/

       1.000996992 seconds time elapsed
```

A further failure mode, fixed in this patch, is to force topdown events into a group.

This change moves sorting the evsels in the evlist to after parsing. It requires parsing to set up groups. First the evsels are sorted respecting the existing groupings and parse order, but also reordering to ensure evsels of the same PMU and group appear together. So that software and aux events respect groups, their pmu_name is taken from the group leader. The sorting is done with list_sort, removing a memory allocation.

After sorting, a pass is done to correct the group leaders and, for topdown events, to ensure they have a group leader.

This fixes the problems seen before:

```
$ perf stat -e '{uncore_imc_free_running/data_read/,uncore_imc_free_running/data_write/}' -a sleep 1

 Performance counter stats for 'system wide':

            727.42 MiB  uncore_imc_free_running/data_read/
             81.84 MiB  uncore_imc_free_running/data_write/

       1.000948615 seconds time elapsed
```

As well as making groups not fail for cases like:

```
$ perf stat -e '{imc_free_running_0/data_total/,imc_free_running_1/data_total/}' -a sleep 1

 Performance counter stats for 'system wide':

            256.47 MiB  imc_free_running_0/data_total/
            256.48 MiB  imc_free_running_1/data_total/

       1.001165442 seconds time elapsed
```

Signed-off-by: Ian Rogers <irogers@google.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Florian Fischer <florian.fischer@muhq.space>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James Clark <james.clark@arm.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: John Garry <john.g.garry@oracle.com>
Cc: Kajol Jain <kjain@linux.ibm.com>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Kim Phillips <kim.phillips@amd.com>
Cc: Leo Yan <leo.yan@linaro.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ravi Bangoria <ravi.bangoria@amd.com>
Cc: Sean Christopherson <seanjc@google.com>
Cc: Steinar H. Gunderson <sesse@google.com>
Cc: Stephane Eranian <eranian@google.com>
Cc: Suzuki Poulouse <suzuki.poulose@arm.com>
Cc: Xing Zhengjun <zhengjun.xing@linux.intel.com>
Link: https://lore.kernel.org/r/20230312021543.3060328-2-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

2023-02-19  perf stat: Add TopdownL1 metric as a default if present  (Ian Rogers, 1 file changed, -3/+3)

When there are no events and on Intel, the topdown events will be added by default if present. Displaying the metrics associated with these requires special handling in stat-shadow.c. To more easily update these metrics, use the json metric version via the TopdownL1 group. This makes the handling less platform specific.

Modify the metricgroup__has_metric code to also cover metric groups.

Signed-off-by: Ian Rogers <irogers@google.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Alexandre Torgue <alexandre.torgue@foss.st.com>
Cc: Andrii Nakryiko <andrii@kernel.org>
Cc: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
Cc: Caleb Biggers <caleb.biggers@intel.com>
Cc: Eduard Zingerman <eddyz87@gmail.com>
Cc: Florian Fischer <florian.fischer@muhq.space>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James Clark <james.clark@arm.com>
Cc: Jing Zhang <renyu.zj@linux.alibaba.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: John Garry <john.g.garry@oracle.com>
Cc: Kajol Jain <kjain@linux.ibm.com>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Leo Yan <leo.yan@linaro.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Maxime Coquelin <mcoquelin.stm32@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Perry Taylor <perry.taylor@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ravi Bangoria <ravi.bangoria@amd.com>
Cc: Sandipan Das <sandipan.das@amd.com>
Cc: Sean Christopherson <seanjc@google.com>
Cc: Stephane Eranian <eranian@google.com>
Cc: Suzuki Poulouse <suzuki.poulose@arm.com>
Cc: Xing Zhengjun <zhengjun.xing@linux.intel.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-stm32@st-md-mailman.stormreply.com
Link: https://lore.kernel.org/r/20230219092848.639226-40-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

2022-07-29  perf stat: Add topdown metrics in the default perf stat on the hybrid machine  (Zhengjun Xing, 1 file changed, -11/+2)

Topdown metrics are missing in the default perf stat on hybrid machines; add Topdown metrics to the default perf stat for hybrid systems.

Currently, we support the perf metrics Topdown for the p-core PMU in the perf stat default; the perf metrics Topdown support for the e-core PMU will be implemented later separately. The refactoring adds two x86 specific functions. Widen the size of the event name column by 7 chars, so that all metrics after the "#" become aligned again.

The perf metrics topdown feature is supported on the cpu_core of ADL. The dedicated perf metrics counter and the fixed counter 3 are used for the topdown events. Adding the topdown metrics doesn't trigger multiplexing.

Before:

 # ./perf stat -a true

 Performance counter stats for 'system wide':

             53.70 msec cpu-clock                 #   25.736 CPUs utilized
                80      context-switches          #    1.490 K/sec
                24      cpu-migrations            #  446.951 /sec
                52      page-faults               #  968.394 /sec
         2,788,555      cpu_core/cycles/          #   51.931 M/sec
           851,129      cpu_atom/cycles/          #   15.851 M/sec
         2,974,030      cpu_core/instructions/    #   55.385 M/sec
           416,919      cpu_atom/instructions/    #    7.764 M/sec
           586,136      cpu_core/branches/        #   10.916 M/sec
            79,872      cpu_atom/branches/        #    1.487 M/sec
            14,220      cpu_core/branch-misses/   #  264.819 K/sec
             7,691      cpu_atom/branch-misses/   #  143.229 K/sec

       0.002086438 seconds time elapsed

After:

 # ./perf stat -a true

 Performance counter stats for 'system wide':

             61.39 msec cpu-clock                        #   24.874 CPUs utilized
                76      context-switches                 #    1.238 K/sec
                24      cpu-migrations                   #  390.968 /sec
                52      page-faults                      #  847.097 /sec
         2,753,695      cpu_core/cycles/                 #   44.859 M/sec
           903,899      cpu_atom/cycles/                 #   14.725 M/sec
         2,927,529      cpu_core/instructions/           #   47.690 M/sec
           428,498      cpu_atom/instructions/           #    6.980 M/sec
           581,299      cpu_core/branches/               #    9.470 M/sec
            83,409      cpu_atom/branches/               #    1.359 M/sec
            13,641      cpu_core/branch-misses/          #  222.216 K/sec
             8,008      cpu_atom/branch-misses/          #  130.453 K/sec
        14,761,308      cpu_core/slots/                  #  240.466 M/sec
         3,288,625      cpu_core/topdown-retiring/       #  22.3% retiring
         1,323,323      cpu_core/topdown-bad-spec/       #   9.0% bad speculation
         5,477,470      cpu_core/topdown-fe-bound/       #  37.1% frontend bound
         4,679,199      cpu_core/topdown-be-bound/       #  31.7% backend bound
           646,194      cpu_core/topdown-heavy-ops/      #   4.4% heavy operations     #  17.9% light operations
         1,244,999      cpu_core/topdown-br-mispredict/  #   8.4% branch mispredict    #   0.5% machine clears
         3,891,800      cpu_core/topdown-fetch-lat/      #  26.4% fetch latency        #  10.7% fetch bandwidth
         1,879,034      cpu_core/topdown-mem-bound/      #  12.7% memory bound         #  19.0% Core bound

       0.002467839 seconds time elapsed

Reviewed-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Xing Zhengjun <zhengjun.xing@linux.intel.com>
Acked-by: Ian Rogers <irogers@google.com>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: https://lore.kernel.org/r/20220721065706.2886112-6-zhengjun.xing@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

2022-07-29  perf x86 evlist: Add default hybrid events for perf stat  (Kan Liang, 1 file changed, -1/+51)

Provide a new solution to replace the reverted commit ac2dc29edd21f9ec ("perf stat: Add default hybrid events").

For the default software attrs, nothing is changed. For the default hardware attrs, create a new evsel for each hybrid PMU.

With the new solution, adding a new default attr will not require special support for the hybrid platform anymore. Also, "--detailed" is supported on the hybrid platform.

With the patch:

 $ perf stat -a -ddd sleep 1

 Performance counter stats for 'system wide':

         32,231.06 msec cpu-clock                            #  32.056 CPUs utilized
               529      context-switches                     #  16.413 /sec
                32      cpu-migrations                       #   0.993 /sec
                69      page-faults                          #   2.141 /sec
       176,754,151      cpu_core/cycles/                     #   5.484 M/sec    (41.65%)
       161,695,280      cpu_atom/cycles/                     #   5.017 M/sec    (49.92%)
        48,595,992      cpu_core/instructions/               #   1.508 M/sec    (49.98%)
        32,363,337      cpu_atom/instructions/               #   1.004 M/sec    (58.26%)
        10,088,639      cpu_core/branches/                   # 313.010 K/sec    (58.31%)
         6,390,582      cpu_atom/branches/                   # 198.274 K/sec    (58.26%)
           846,201      cpu_core/branch-misses/              #  26.254 K/sec    (66.65%)
           676,477      cpu_atom/branch-misses/              #  20.988 K/sec    (58.27%)
        14,290,070      cpu_core/L1-dcache-loads/            # 443.363 K/sec    (66.66%)
         9,983,532      cpu_atom/L1-dcache-loads/            # 309.749 K/sec    (58.27%)
           740,725      cpu_core/L1-dcache-load-misses/      #  22.982 K/sec    (66.66%)
   <not supported>      cpu_atom/L1-dcache-load-misses/
           480,441      cpu_core/LLC-loads/                  #  14.906 K/sec    (66.67%)
           326,570      cpu_atom/LLC-loads/                  #  10.132 K/sec    (58.27%)
               329      cpu_core/LLC-load-misses/            #  10.208 /sec     (66.68%)
                 0      cpu_atom/LLC-load-misses/            #   0.000 /sec     (58.32%)
   <not supported>      cpu_core/L1-icache-loads/
        21,982,491      cpu_atom/L1-icache-loads/            # 682.028 K/sec    (58.43%)
         4,493,189      cpu_core/L1-icache-load-misses/      # 139.406 K/sec    (33.34%)
         4,711,404      cpu_atom/L1-icache-load-misses/      # 146.176 K/sec    (50.08%)
        13,713,090      cpu_core/dTLB-loads/                 # 425.462 K/sec    (33.34%)
         9,384,727      cpu_atom/dTLB-loads/                 # 291.170 K/sec    (50.08%)
           157,387      cpu_core/dTLB-load-misses/           #   4.883 K/sec    (33.33%)
           108,328      cpu_atom/dTLB-load-misses/           #   3.361 K/sec    (50.08%)
   <not supported>      cpu_core/iTLB-loads/
   <not supported>      cpu_atom/iTLB-loads/
            37,655      cpu_core/iTLB-load-misses/           #   1.168 K/sec    (33.32%)
            61,661      cpu_atom/iTLB-load-misses/           #   1.913 K/sec    (50.03%)
   <not supported>      cpu_core/L1-dcache-prefetches/
   <not supported>      cpu_atom/L1-dcache-prefetches/
   <not supported>      cpu_core/L1-dcache-prefetch-misses/
   <not supported>      cpu_atom/L1-dcache-prefetch-misses/

       1.005466919 seconds time elapsed

Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Acked-by: Ian Rogers <irogers@google.com>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: https://lore.kernel.org/r/20220721065706.2886112-5-zhengjun.xing@linux.intel.com
Signed-off-by: Xing Zhengjun <zhengjun.xing@linux.intel.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

2022-07-29  perf evlist: Always use arch_evlist__add_default_attrs()  (Kan Liang, 1 file changed, -1/+6)

Currently 'perf stat' uses evlist__add_default_attrs() to add the generic default attrs, and uses arch_evlist__add_default_attrs() to add the arch specific default attrs, e.g., Topdown for x86. This works well for non-hybrid platforms. However, for a hybrid platform, the hard-coded generic default attrs don't work.

Use arch_evlist__add_default_attrs() to replace evlist__add_default_attrs(). The arch_evlist__add_default_attrs() is modified to invoke the same __evlist__add_default_attrs() for the generic default attrs. No functional change.

Add default_null_attrs[] to indicate the arch specific attrs. No functional change for the arch specific default attrs either.

Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Acked-by: Ian Rogers <irogers@google.com>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: https://lore.kernel.org/r/20220721065706.2886112-4-zhengjun.xing@linux.intel.com
Signed-off-by: Xing Zhengjun <zhengjun.xing@linux.intel.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

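A minimal sketch of the weak-default / arch-override pattern that commits like the one above rely on; this is not the perf code and the argument-free signature is simplified for illustration only. Generic code ships a weak hook that just adds the common attrs; an architecture (x86 for topdown) can provide a strong definition that also appends its own default events.

```
#include <stdio.h>

static void add_generic_default_attrs(void)
{
	printf("adding generic default attrs\n");
}

/* Weak default: generic attrs only. An arch object file that defines a
 * non-weak arch_evlist__add_default_attrs() overrides this at link time. */
__attribute__((weak)) void arch_evlist__add_default_attrs(void)
{
	add_generic_default_attrs();
}

int main(void)
{
	arch_evlist__add_default_attrs();
	return 0;
}
```
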
2022-05-20  perf parse-events: Move slots event for the hybrid platform too  (Kan Liang, 1 file changed, -2/+3)

The commit 94dbfd6781a0e87b ("perf parse-events: Architecture specific leader override") introduced a feature to reorder the slots event to fulfill the restriction of the perf metrics topdown group. But the feature doesn't work on hybrid machines.

 $ perf stat -e "{cpu_core/instructions/,cpu_core/slots/,cpu_core/topdown-retiring/}" -a sleep 1

 Performance counter stats for 'system wide':

     <not counted>      cpu_core/instructions/
     <not counted>      cpu_core/slots/
   <not supported>      cpu_core/topdown-retiring/

       1.002871801 seconds time elapsed

A hybrid platform has a different PMU name for the core PMUs, while current perf hard-codes the PMU name "cpu".

Introduce a new function to check whether the system supports the perf metrics feature. The result is cached for future use. For x86, the core PMU name always has a "cpu" prefix.

With the patch:

 $ perf stat -e "{cpu_core/instructions/,cpu_core/slots/,cpu_core/topdown-retiring/}" -a sleep 1

 Performance counter stats for 'system wide':

        76,337,010      cpu_core/slots/
        10,416,809      cpu_core/instructions/
        11,692,372      cpu_core/topdown-retiring/

       1.002805453 seconds time elapsed

Reviewed-by: Ian Rogers <irogers@google.com>
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Xing Zhengjun <zhengjun.xing@linux.intel.com>
Link: https://lore.kernel.org/r/20220518143900.1493980-5-kan.liang@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

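A sketch of the cached capability check described above, not the perf implementation: it probes sysfs for a slots event on the core PMUs (the "cpu"-prefixed names from the commit, e.g. cpu and cpu_core) and remembers the answer. The exact set of PMU names probed here is an assumption for the example.

```
#include <stdbool.h>
#include <stdio.h>

static bool pmu_have_event(const char *pmu, const char *event)
{
	char path[256];
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/bus/event_source/devices/%s/events/%s", pmu, event);
	f = fopen(path, "r");
	if (!f)
		return false;
	fclose(f);
	return true;
}

static bool sys_has_perf_metrics(void)
{
	static int cached = -1;	/* computed once, reused afterwards */

	if (cached < 0)
		cached = pmu_have_event("cpu", "slots") ||
			 pmu_have_event("cpu_core", "slots");
	return cached;
}

int main(void)
{
	printf("perf metrics supported: %d\n", sys_has_perf_metrics());
	return 0;
}
```
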
2022-05-20  perf parse-events: Support different format of the topdown event name  (Kan Liang, 1 file changed, -1/+1)

The evsel->name may have a different format for a topdown event: a pure topdown name (e.g., topdown-fe-bound), or a PMU name + a topdown name (e.g., cpu/topdown-fe-bound/). The cpu/topdown-fe-bound/ kind of format isn't supported by arch_evlist__leader(). This format is very common on a hybrid platform, which requires specifying the PMU name for each event.

Without the patch:

 $ perf stat -e '{instructions,slots,cpu/topdown-fe-bound/}' -a sleep 1

 Performance counter stats for 'system wide':

     <not counted>      instructions
     <not counted>      slots
   <not supported>      cpu/topdown-fe-bound/

       1.003482041 seconds time elapsed

 Some events weren't counted. Try disabling the NMI watchdog:
         echo 0 > /proc/sys/kernel/nmi_watchdog
         perf stat ...
         echo 1 > /proc/sys/kernel/nmi_watchdog
 The events in group usually have to be from the same PMU. Try reorganizing the group.

With the patch:

 $ perf stat -e '{instructions,slots,cpu/topdown-fe-bound/}' -a sleep 1

 Performance counter stats for 'system wide':

       157,383,996      slots
        25,011,711      instructions
        27,441,686      cpu/topdown-fe-bound/

       1.003530890 seconds time elapsed

Fixes: bc355822f0d9623b ("perf parse-events: Move slots only with topdown")
Reviewed-by: Ian Rogers <irogers@google.com>
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Ian Rogers <irogers@google.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Xing Zhengjun <zhengjun.xing@linux.intel.com>
Link: https://lore.kernel.org/r/20220518143900.1493980-4-kan.liang@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

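A small sketch of accepting both name formats, not the perf source: strip an optional "pmu/" prefix before matching the topdown name, so topdown-fe-bound and cpu/topdown-fe-bound/ (or cpu_core/...) are treated the same. The string handling is illustrative only.

```
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

static bool is_topdown_name(const char *name)
{
	const char *slash = strchr(name, '/');

	if (slash)			/* skip a "cpu/" or "cpu_core/" prefix */
		name = slash + 1;
	return strncmp(name, "topdown-", 8) == 0;
}

int main(void)
{
	printf("%d\n", is_topdown_name("topdown-fe-bound"));      /* 1 */
	printf("%d\n", is_topdown_name("cpu/topdown-fe-bound/")); /* 1 */
	printf("%d\n", is_topdown_name("instructions"));          /* 0 */
	return 0;
}
```
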
2022-03-22  perf parse-events: Move slots only with topdown  (Ian Rogers, 1 file changed, -4/+14)

If slots isn't with a topdown event then moving it is unnecessary. For example {instructions,slots} is re-ordered:

 $ perf stat -e '{instructions,slots}' -a sleep 1

 Performance counter stats for 'system wide':

       936,600,825      slots
       144,440,968      instructions

       1.006061423 seconds time elapsed

which can break tools expecting the command line order to match the printed order. It is only necessary to move the slots event first when it appears with topdown events. Add extra checking so that the slots event is only moved when there is a topdown event, like:

 $ perf stat -e '{instructions,slots,topdown-fe-bound}' -a sleep 1

 Performance counter stats for 'system wide':

        2427568570      slots
         300927614      instructions
         551021649      topdown-fe-bound

       1.001771803 seconds time elapsed

Fixes: 94dbfd6781a0e87b ("perf parse-events: Architecture specific leader override")
Reported-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Ian Rogers <irogers@google.com>
Tested-by: Kan Liang <kan.liang@linux.intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Alexandre Torgue <alexandre.torgue@foss.st.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James Clark <james.clark@arm.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: John Garry <john.garry@huawei.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Maxime Coquelin <mcoquelin.stm32@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Xing Zhengjun <zhengjun.xing@linux.intel.com>
Link: https://lore.kernel.org/r/20220321223344.1034479-1-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

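A sketch of the extra check described above, not the perf code: slots is promoted to group leader only when a topdown event is also present, so a plain {instructions,slots} keeps its command-line order. The simple string checks and array representation are illustrative.

```
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

static bool is_topdown(const char *name)
{
	return strncmp(name, "topdown-", 8) == 0;
}

static int pick_leader(const char *names[], int n)
{
	bool has_topdown = false;
	int slots = -1;

	for (int i = 0; i < n; i++) {
		if (!strcmp(names[i], "slots"))
			slots = i;
		else if (is_topdown(names[i]))
			has_topdown = true;
	}
	/* Move slots first only when a topdown event needs it. */
	return (slots >= 0 && has_topdown) ? slots : 0;
}

int main(void)
{
	const char *no_td[] = { "instructions", "slots" };
	const char *with_td[] = { "instructions", "slots", "topdown-fe-bound" };

	printf("leader without topdown: %s\n", no_td[pick_leader(no_td, 2)]);
	printf("leader with topdown:    %s\n", with_td[pick_leader(with_td, 3)]);
	return 0;
}
```
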
2022-03-18  perf parse-events: Ignore case in topdown.slots check  (Ian Rogers, 1 file changed, -1/+1)

An issue with the icelakex metrics:

 https://git.kernel.org/pub/scm/linux/kernel/git/acme/linux.git/tree/tools/perf/pmu-events/arch/x86/icelakex/icx-metrics.json?h=perf/core&id=65eab2bc7dab326ee892ec5a4c749470b368b51a#n48

causes the slots event not to be placed first.

Fixes: 94dbfd6781a0e87b ("perf parse-events: Architecture specific leader override")
Reported-by: Caleb Biggers <caleb.biggers@intel.com>
Signed-off-by: Ian Rogers <irogers@google.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Alexandre Torgue <alexandre.torgue@foss.st.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James Clark <james.clark@arm.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: John Garry <john.garry@huawei.com>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Maxime Coquelin <mcoquelin.stm32@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Zhengjun Xing <zhengjun.xing@linux.intel.com>
Link: https://lore.kernel.org/r/20220317224309.543736-1-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

2021-12-07  perf parse-events: Architecture specific leader override  (Ian Rogers, 1 file changed, -0/+17)

Currently topdown events must appear after a slots event:

 $ perf stat -e '{slots,topdown-fe-bound}' /bin/true

 Performance counter stats for '/bin/true':

         3,183,090      slots
           986,133      topdown-fe-bound

Reversing the events yields:

 $ perf stat -e '{topdown-fe-bound,slots}' /bin/true
 Error:
 The sys_perf_event_open() syscall returned with 22 (Invalid argument) for event (topdown-fe-bound).

For metrics the order of events is determined by iterating over a hashmap, and so slots isn't guaranteed to be first, which can yield this error.

Change the set_leader in parse-events, called when a group is closed, so that rather than always making the first event the leader, if the slots event exists then it is made the leader. It is then moved to the head of the evlist, otherwise it won't be opened in the correct order.

The result is:

 $ perf stat -e '{topdown-fe-bound,slots}' /bin/true

 Performance counter stats for '/bin/true':

         3,274,795      slots
         1,001,702      topdown-fe-bound

A problem with this approach is that the slots event is identified by name; names can be overwritten, like 'cpu/slots,name=foo/', and this causes the leader change to fail.

The change also modifies and fixes mixed groups. With the change:

 $ perf stat -e '{instructions,slots,topdown-fe-bound}' -a -- sleep 2

 Performance counter stats for 'system wide':

        5574985410      slots
         971981616      instructions
        1348461887      topdown-fe-bound

       2.001263120 seconds time elapsed

Without the change:

 $ perf stat -e '{instructions,slots,topdown-fe-bound}' -a -- sleep 2

 Performance counter stats for 'system wide':

     <not counted>      instructions
     <not counted>      slots
   <not supported>      topdown-fe-bound

       2.006247990 seconds time elapsed

Something that may be undesirable here is that the events are reordered in the output.

Reviewed-by: Kajol Jain <kjain@linux.ibm.com>
Signed-off-by: Ian Rogers <irogers@google.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: John Garry <john.garry@huawei.com>
Cc: Kajol Jain <kjain@linux.ibm.com>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Clarke <pc@us.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Riccardo Mancini <rickyman7@gmail.com>
Cc: Stephane Eranian <eranian@google.com>
Cc: Vineet Singh <vineet.singh@intel.com>
Link: http://lore.kernel.org/lkml/20211130174945.247604-2-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

2021-07-09  perf stat: Add Topdown metrics L2 events as default events  (Kan Liang, 1 file changed, -1/+5)

The Topdown Microarchitecture Analysis (TMA) Method is a structured analysis methodology to identify critical performance bottlenecks in out-of-order processors. The Topdown metrics L1 events were added as default in 42641d6f4d15e6db ("perf stat: Add Topdown metrics events as default events").

From the Sapphire Rapids server and later platforms, the same dedicated "metrics" register is extended to support both L1 and L2 events. Add both L1 and L2 Topdown metrics events as default to enrich the default measuring information if the new measurement register is available. On legacy systems there is no change, to avoid extra multiplexing.

The topdown_level indicates the max metrics level for the top-down statistics. Set it to 2 to display all L1 and L2 Topdown metrics events.

With the patch:

 $ perf stat sleep 1

 Performance counter stats for 'sleep 1':

              0.59 msec task-clock                #    0.001 CPUs utilized
                 1      context-switches          #    1.687 K/sec
                 0      cpu-migrations            #    0.000 /sec
                76      page-faults               #  128.198 K/sec
         1,405,318      cycles                    #    2.371 GHz
         1,471,136      instructions              #    1.05  insn per cycle
           310,132      branches                  #  523.136 M/sec
            10,435      branch-misses             #    3.36% of all branches
         8,431,908      slots                     #   14.223 G/sec
         1,554,116      topdown-retiring          #   18.4% retiring
         1,289,585      topdown-bad-spec          #   15.2% bad speculation
         2,810,636      topdown-fe-bound          #   33.2% frontend bound
         2,810,636      topdown-be-bound          #   33.2% backend bound
           231,464      topdown-heavy-ops         #    2.7% heavy operations     #  15.6% light operations
         1,223,453      topdown-br-mispredict     #   14.5% branch mispredict    #   0.8% machine clears
         1,884,779      topdown-fetch-lat         #   22.3% fetch latency        #  10.9% fetch bandwidth
         1,454,917      topdown-mem-bound         #   17.2% memory bound         #  16.0% Core bound

       1.001179699 seconds time elapsed

       0.000000000 seconds user
       0.001238000 seconds sys

Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Cc: Jin Yao <yao.jin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lore.kernel.org/lkml/1625760169-18396-1-git-send-email-kan.liang@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

2021-02-03  perf stat: Add Topdown metrics events as default events  (Kan Liang, 1 file changed, -0/+15)

The Topdown Microarchitecture Analysis (TMA) Method is a structured analysis methodology to identify critical performance bottlenecks in out-of-order processors.

From the Ice Lake and later platforms, the Topdown information can be retrieved from the dedicated "metrics" register, which isn't impacted by other events. Also, the Topdown metrics support both per thread/process and per core measuring. Adding Topdown metrics events as default events can enrich the default measuring information, and would not cost any extra multiplexing.

Introduce arch_evlist__add_default_attrs() to allow architecture specific default events. Add the Topdown metrics events in the x86 specific arch_evlist__add_default_attrs(). Other architectures can add their own default events later separately.

With the patch:

 $ perf stat sleep 1

 Performance counter stats for 'sleep 1':

              0.82 msec task-clock:u              #    0.001 CPUs utilized
                 0      context-switches:u        #    0.000 K/sec
                 0      cpu-migrations:u          #    0.000 K/sec
                61      page-faults:u             #    0.074 M/sec
           319,941      cycles:u                  #    0.388 GHz
           242,802      instructions:u            #    0.76  insn per cycle
            54,380      branches:u                #   66.028 M/sec
             4,043      branch-misses:u           #    7.43% of all branches
         1,585,555      slots:u                   # 1925.189 M/sec
           238,941      topdown-retiring:u        #   15.0% retiring
           410,378      topdown-bad-spec:u        #   25.8% bad speculation
           634,222      topdown-fe-bound:u        #   39.9% frontend bound
           304,675      topdown-be-bound:u        #   19.2% backend bound

       1.001791625 seconds time elapsed

       0.000000000 seconds user
       0.001572000 seconds sys

Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jin Yao <yao.jin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lore.kernel.org/lkml/20210121133752.118327-1-kan.liang@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>