path: root/tools/perf/scripts/python/export-to-sqlite.py
Age | Commit message | Author | Files, lines changed (-/+)
2022-09-10 | arm64: mm: fix resume for 52-bit enabled builds | Joey Gouly | 1 file, -0/+3
__cpu_setup() was changed to take the actual number of VA bits in x0, however the resume path was not updated at the same time. Load `vabits_actual` in the resume path, to ensure that the correct number of VA bits is used. This fixes booting v6.0-rc kernels on my Juno. Signed-off-by: Joey Gouly <joey.gouly@arm.com> Fixes: 0aaa68532e9d ("arm64: mm: fix booting with 52-bit address space") Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Ard Biesheuvel <ardb@kernel.org> Acked-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20220909124311.38489-1-joey.gouly@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2022-09-08 | arm64/ptrace: Don't clear calling process' TIF_SME on OOM | Mark Brown | 1 file, -2/+0
If allocating memory for the target SVE state in za_set() fails we clear TIF_SME for the ptracing task which is obviously not correct. If we are here we know that the target task already had neither TIF_SVE nor TIF_SME set since we only need to allocate if either the target had not used either SVE or SME and had no need to allocate state before or we just changed the vector length with vec_set_vector_length() which clears TIF_ for us on allocation failure so just remove the clear entirely. Reported-by: Wang ShaoBo <bobo.shaobowang@huawei.com> Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20220902132802.39682-1-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2022-09-06 | arm64/bti: Disable in kernel BTI when cross section thunks are broken | Mark Brown | 1 file, -0/+2
GCC does not insert a `bti c` instruction at the beginning of a function when it believes that all callers reach the function through a direct branch[1]. Unfortunately the logic it uses to determine this is not sufficiently robust, for example not taking account of functions being placed in different sections which may be loaded separately, so we may still see thunks being generated to these functions. If that happens, the first instruction in the callee function will result in a Branch Target Exception due to the missing landing pad. While this has currently only been observed in the case of modules having their main code loaded sufficiently far from their init section to require thunks it could potentially happen for other cases so the safest thing is to disable BTI for the kernel when building with an affected toolchain. [1]: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=106671 Reported-by: D Scott Phillips <scott@os.amperecomputing.com> [Bits of the commit message are lifted from his report & workaround] Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20220905142255.591990-1-broonie@kernel.org Cc: <stable@vger.kernel.org> # v5.10+ Signed-off-by: Will Deacon <will@kernel.org>
2022-09-01 | arm64: mm: Reserve enough pages for the initial ID map | Ard Biesheuvel | 1 file, -13/+13
The logic that conditionally allocates one additional page at each swapper page table level if KASLR is enabled is also applied to the initial ID map, now that we have started using the same set of macros to allocate the space for it. However, the placement of the kernel in physical memory might result in additional pages being needed at any level, even if KASLR is disabled in the build. So account for this in the computation. Fixes: c3cee924bd85 ("arm64: head: cover entire kernel image in initial ID map") Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20220826164800.2059148-1-ardb@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2022-09-01 | perf/arm_pmu_platform: fix tests for platform_get_irq() failure | Yu Zhe | 1 file, -1/+1
platform_get_irq() returns negative error codes on failure; it can't actually return zero, so check for a negative value when testing for failure. Signed-off-by: Yu Zhe <yuzhe@nfschina.com> Link: https://lore.kernel.org/r/20220825011844.8536-1-yuzhe@nfschina.com Signed-off-by: Will Deacon <will@kernel.org>
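A standalone sketch of the pattern behind this fix (the fake_platform_get_irq() stub below only stands in for the real kernel helper, which returns the IRQ number on success or a negative errno, never zero):

#include <errno.h>
#include <stdio.h>

/* Stand-in for platform_get_irq(): returns the IRQ number on success
 * or a negative errno on failure; it never returns zero. */
static int fake_platform_get_irq(int present)
{
	return present ? 42 : -ENXIO;
}

int main(void)
{
	int irq = fake_platform_get_irq(0);	/* simulate a failing lookup */

	/* Buggy test: only checks for zero, so a negative errno slips through. */
	if (!irq)
		printf("buggy check: error detected\n");
	else
		printf("buggy check: %d would be treated as a valid IRQ\n", irq);

	/* Correct test: a negative return value is the only failure indication. */
	if (irq < 0)
		printf("correct check: request failed with %d\n", irq);

	return 0;
}

Since zero is not a possible return value, a plain negative-value check is the precise failure test.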
2022-09-01 | arm64: head: Ignore bogus KASLR displacement on non-relocatable kernels | Ard Biesheuvel | 1 file, -0/+2
Even non-KASLR kernels can be built as relocatable, to work around broken bootloaders that violate the rules regarding physical placement of the kernel image - in this case, the physical offset modulo 2 MiB is used as the KASLR offset, and all absolute symbol references are fixed up in the usual way. This workaround is enabled by default. CONFIG_RELOCATABLE can also be disabled entirely, in which case the relocation code and the code that captures the offset are omitted from the build. However, since commit aacd149b6238 ("arm64: head: avoid relocating the kernel twice for KASLR"), this code got out of sync, and we still add the offset to the kernel virtual address before populating the page tables even though we never capture it. This means we add a bogus value instead, breaking the boot entirely. Fixes: aacd149b6238 ("arm64: head: avoid relocating the kernel twice for KASLR") Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Tested-by: Mikulas Patocka <mpatocka@redhat.com> Link: https://lore.kernel.org/r/20220827070904.2216989-1-ardb@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2022-09-01 | arm64/kexec: Fix missing extra range for crashkres_low. | Levi Yun | 1 file, -1/+1
Like crashk_res, calling crash_exclude_mem_range() with the crashk_low_res area needs an extra crash_mem range too. Add one more extra cmem slot in case crashk_low_res is used. Signed-off-by: Levi Yun <ppbuk5246@gmail.com> Fixes: 944a45abfabc ("arm64: kdump: Reimplement crashkernel=X") Cc: <stable@vger.kernel.org> # 5.19.x Acked-by: Baoquan He <bhe@redhat.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20220831103913.12661-1-ppbuk5246@gmail.com Signed-off-by: Will Deacon <will@kernel.org>
2022-08-23 | arm64/sme: Don't flush SVE register state when handling SME traps | Mark Brown | 1 file, -11/+0
Currently as part of handling a SME access trap we flush the SVE register state. This is not needed and would corrupt register state if the task has access to the SVE registers already. For non-streaming mode accesses the required flushing will be done in the SVE access trap. For streaming mode SVE register accesses the architecture guarantees that the register state will be flushed when streaming mode is entered or exited so there is no need for us to do so. Simply remove the register initialisation. Fixes: 8bd7f91c03d8 ("arm64/sme: Implement traps and syscall handling for SME") Signed-off-by: Mark Brown <broonie@kernel.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20220817182324.638214-5-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2022-08-23 | arm64/sme: Don't flush SVE register state when allocating SME storage | Mark Brown | 4 files, -10/+12
Currently when taking a SME access trap we allocate storage for the SVE register state in order to be able to handle storage of streaming mode SVE. Because of its original use in a purely SVE context, the SVE register state allocation also flushes the register state for SVE if storage was already allocated, but in the SME context this is not desirable. For a SME access trap to be taken the task must not be in streaming mode, so either there already is SVE register state present for regular SVE mode which would be corrupted, or the task does not have TIF_SVE and the flush is redundant. Fix this by adding a flag to sve_alloc() indicating if we are in a SVE context and need to flush the state. Freshly allocated storage is always zeroed either way. Fixes: 8bd7f91c03d8 ("arm64/sme: Implement traps and syscall handling for SME") Signed-off-by: Mark Brown <broonie@kernel.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20220817182324.638214-4-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2022-08-23 | arm64/signal: Flush FPSIMD register state when disabling streaming mode | Mark Brown | 1 file, -0/+10
When handling a signal delivered to a context with streaming mode enabled we will disable streaming mode for the signal handler, when doing so we should also flush the saved FPSIMD register state like exiting streaming mode in the hardware would do so that if that state is reloaded we get the same behaviour. Without this we will reload whatever the last FPSIMD state that was saved for the task was. Fixes: 40a8e87bb328 ("arm64/sme: Disable ZA and streaming mode when handling signals") Signed-off-by: Mark Brown <broonie@kernel.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20220817182324.638214-3-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2022-08-23 | arm64/signal: Raise limit on stack frames | Mark Brown | 1 file, -1/+1
The signal code has a limit of 64K on the size of a stack frame that it will generate, if this limit is exceeded then a process will be killed if it receives a signal. Unfortunately with the advent of SME this limit is too small - the maximum possible size of the ZA register alone is 64K. This is not an issue for practical systems at present but is easily seen using virtual platforms. Raise the limit to 256K, this is substantially more than could be used by any current architecture extension. Signed-off-by: Mark Brown <broonie@kernel.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20220817182324.638214-2-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2022-08-23 | arm64/cache: Fix cache_type_cwg() for register generation | Mark Brown | 1 file, -1/+1
Ard noticed that since we converted CTR_EL0 to automatic generation we have been seeing errors on some systems handling the value of cache_type_cwg() such as CPU features: No Cache Writeback Granule information, assuming 128 This is because the manual definition of CTR_EL0_CWG_MASK was done without a shift while our convention is to define the mask after shifting. This means that the user in cache_type_cwg() was broken as it was written for the manually written shift then mask. Fix this by converting to use SYS_FIELD_GET(). The only other field where the _MASK for this register is used is IminLine which is at offset 0 so unaffected. Fixes: 9a3634d02301 ("arm64/sysreg: Convert CTR_EL0 to automatic generation") Reported-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20220818213613.733091-4-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
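A standalone illustration of the mask convention issue (the GENMASK()/FIELD_GET() macros below are simplified stand-ins for the kernel helpers, and the field position is shown only for illustration): code written to shift first and then apply an unshifted mask silently breaks when handed a shifted mask, whereas a FIELD_GET()-style accessor derives the shift from the mask itself.

#include <stdint.h>
#include <stdio.h>

/* Simplified stand-ins for the kernel's GENMASK()/FIELD_GET() helpers. */
#define GENMASK(h, l)	(((~0u) << (l)) & (~0u >> (31 - (h))))
#define FIELD_GET(mask, val) \
	(((val) & (mask)) >> __builtin_ctz(mask))

/* CWG lives in CTR_EL0 bits [27:24]; shown here purely for illustration. */
#define CWG_SHIFT		24
#define CWG_MASK_SHIFTED	GENMASK(27, 24)	/* mask after shifting: 0x0f000000 */
#define CWG_MASK_UNSHIFTED	0xf		/* mask before shifting */

int main(void)
{
	uint32_t ctr = 0x4 << CWG_SHIFT;	/* pretend CWG == 4 */

	/* Old convention: shift first, then apply an unshifted mask. */
	printf("shift-then-mask: %u\n", (ctr >> CWG_SHIFT) & CWG_MASK_UNSHIFTED);

	/* The same code fed a shifted mask extracts garbage (here: 0). */
	printf("broken mix:      %u\n", (ctr >> CWG_SHIFT) & CWG_MASK_SHIFTED);

	/* FIELD_GET-style accessor works as long as the mask is shifted. */
	printf("FIELD_GET style: %u\n", (unsigned)FIELD_GET(CWG_MASK_SHIFTED, ctr));

	return 0;
}

Defining masks pre-shifted and extracting with a FIELD_GET()-style helper keeps the definition and its users in agreement, which is the convention the fix moves cache_type_cwg() to.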
2022-08-23 | arm64/sysreg: Guard SYS_FIELD_ macros for asm | Mark Brown | 1 file, -2/+2
The SYS_FIELD_ macros are not safe for assembly contexts, move them inside the guarded section. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20220818213613.733091-3-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2022-08-23 | arm64/sysreg: Directly include bitfield.h | Mark Brown | 1 file, -0/+1
The SYS_FIELD_ macros in sysreg.h use definitions from bitfield.h but there is no direct inclusion of it, add one to ensure that sysreg.h is directly usable. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20220818213613.733091-2-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2022-08-23 | arm64: cacheinfo: Fix incorrect assignment of signed error value to unsigned fw_level | Sudeep Holla | 1 file, -1/+5
Though acpi_find_last_cache_level() always returned a signed value and the documentation states it will return any errors caused by lack of a PPTT table, it never returned negative values before. Commit 0c80f9e165f8 ("ACPI: PPTT: Leave the table mapped for the runtime usage") however changed it by returning -ENOENT if no PPTT was found. The value returned from acpi_find_last_cache_level() is then assigned to the unsigned fw_level. This results in the number of cache leaves being calculated incorrectly as a huge value, which then causes the following warning from __alloc_pages as the order would be greater than MAX_ORDER because of the incorrect and huge cache leaves value. | WARNING: CPU: 0 PID: 1 at mm/page_alloc.c:5407 __alloc_pages+0x74/0x314 | Modules linked in: | CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.19.0-10393-g7c2a8d3ac4c0 #73 | pstate: 20000005 (nzCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--) | pc : __alloc_pages+0x74/0x314 | lr : alloc_pages+0xe8/0x318 | Call trace: | __alloc_pages+0x74/0x314 | alloc_pages+0xe8/0x318 | kmalloc_order_trace+0x68/0x1dc | __kmalloc+0x240/0x338 | detect_cache_attributes+0xe0/0x56c | update_siblings_masks+0x38/0x284 | store_cpu_topology+0x78/0x84 | smp_prepare_cpus+0x48/0x134 | kernel_init_freeable+0xc4/0x14c | kernel_init+0x2c/0x1b4 | ret_from_fork+0x10/0x20 Fix this by changing fw_level to be a signed integer and returning the error from init_cache_level() early in case of error. Reported-and-Tested-by: Bruno Goncalves <bgoncalv@redhat.com> Signed-off-by: Sudeep Holla <sudeep.holla@arm.com> Link: https://lore.kernel.org/r/20220808084640.3165368-1-sudeep.holla@arm.com Signed-off-by: Will Deacon <will@kernel.org>
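The underlying C pitfall can be reproduced in a standalone sketch (not the kernel code; the variable names only echo the commit): a negative errno stored in an unsigned variable turns into a huge number, and any size derived from it explodes.

#include <errno.h>
#include <stdio.h>

int main(void)
{
	/* Stand-in for acpi_find_last_cache_level() now returning -ENOENT. */
	int ret = -ENOENT;

	unsigned int fw_level = ret;	/* the bug: negative error, unsigned variable */
	int fixed_level = ret;		/* the fix: keep it signed, check for errors */

	/* A size computed from the bogus "level" becomes enormous, similar
	 * to the oversized cache-leaves allocation in the report above. */
	unsigned long alloc_size = (unsigned long)fw_level * 64;

	printf("fw_level (unsigned) = %u\n", fw_level);
	printf("derived alloc size  = %lu bytes\n", alloc_size);

	if (fixed_level < 0)
		printf("signed check catches the error: %d\n", fixed_level);

	return 0;
}

Keeping the value signed until it has been checked for errors avoids the wrap-around entirely.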
2022-08-23 | arm64: errata: add detection for AMEVCNTR01 incrementing incorrectly | Ionela Voinescu | 6 files, -3/+64
The AMU counter AMEVCNTR01 (constant counter) should increment at the same rate as the system counter. On affected Cortex-A510 cores, AMEVCNTR01 increments incorrectly, giving a significantly higher output value. This results in inaccurate task scheduler utilization tracking and incorrect feedback on CPU frequency. Work around this problem by returning 0 when reading the affected counter in key locations, which results in disabling all users of this counter from using it either for frequency invariance or as an FFH reference counter. The effect is the same as firmware disabling the affected counters. Details on how the two features are affected by this erratum: - AMU counters will not be used for frequency invariance for affected CPUs and CPUs in the same cpufreq policy. AMUs can still be used for frequency invariance for unaffected CPUs in the system. Although unlikely, if no alternative method can be found to support frequency invariance for affected CPUs (cpufreq based or solution based on platform counters) frequency invariance will be disabled. Please check the chapter on frequency invariance at Documentation/scheduler/sched-capacity.rst for details of its effect. - Given that FFH can be used to fetch either the core or constant counter values, restrictions are lifted regarding any of these counters returning a valid (!0) value. Therefore FFH is considered supported if there is at least one CPU that supports AMUs, independent of any counters being disabled or affected by this erratum. Clarifying comments are now added to the cpc_ffh_supported(), cpu_read_constcnt() and cpu_read_corecnt() functions. The above is achieved through adding a new erratum: ARM64_ERRATUM_2457168. Signed-off-by: Ionela Voinescu <ionela.voinescu@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Cc: James Morse <james.morse@arm.com> Link: https://lore.kernel.org/r/20220819103050.24211-1-ionela.voinescu@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2022-08-23 | arm64: fix rodata=full | Mark Rutland | 4 files, -21/+34
On arm64, "rodata=full" has been supported (but not documented) since commit: c55191e96caa9d78 ("arm64: mm: apply r/o permissions of VM areas to its linear alias as well") As it's necessary to determine the rodata configuration early during boot, arm64 has an early_param() handler for this, whereas init/main.c has a __setup() handler which is run later. Unfortunately, this split meant that since commit: f9a40b0890658330 ("init/main.c: return 1 from handled __setup() functions") ... passing "rodata=full" would result in a spurious warning from the __setup() handler (though RO permissions would be configured appropriately). Further, "rodata=full" has been broken since commit: 0d6ea3ac94ca77c5 ("lib/kstrtox.c: add "false"/"true" support to kstrtobool()") ... which caused strtobool() to parse "full" as false (in addition to many other values not documented for the "rodata=" kernel parameter). This patch fixes this breakage by: * Moving the core parameter parser to an __early_param(), such that it is available early. * Adding an (optional) arch hook which arm64 can use to parse "full". * Updating the documentation to mention that "full" is valid for arm64. * Having the core parameter parser handle "on" and "off" explicitly, such that any undocumented values (e.g. typos such as "ful") are reported as errors rather than being silently accepted. Note that __setup() and early_param() have opposite conventions for their return values, where __setup() uses 1 to indicate a parameter was handled and early_param() uses 0 to indicate a parameter was handled. Fixes: f9a40b089065 ("init/main.c: return 1 from handled __setup() functions") Fixes: 0d6ea3ac94ca ("lib/kstrtox.c: add "false"/"true" support to kstrtobool()") Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Andy Shevchenko <andy.shevchenko@gmail.com> Cc: Ard Biesheuvel <ardb@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Jagdish Gediya <jvgediya@linux.ibm.com> Cc: Matthew Wilcox <willy@infradead.org> Cc: Randy Dunlap <rdunlap@infradead.org> Cc: Will Deacon <will@kernel.org> Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20220817154022.3974645-1-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
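A rough userspace sketch of the parsing scheme described above (function names and structure are illustrative, not the actual kernel code): "on" and "off" are handled explicitly, anything else is offered to an optional arch hook, and unrecognised values are rejected instead of being silently parsed as false.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

static bool rodata_enabled = true;
static bool rodata_full;

/* Optional arch hook: arm64 would accept "full" here; other arches
 * accept nothing extra. Purely illustrative, not the kernel's API. */
static bool arch_parse_debug_rodata(const char *arg)
{
	if (strcmp(arg, "full") == 0) {
		rodata_enabled = true;
		rodata_full = true;
		return true;
	}
	return false;
}

/* Core parser: handle "on"/"off" explicitly, defer anything else to the
 * arch hook, and report unknown values (e.g. the typo "ful") as errors. */
static int set_debug_rodata(const char *arg)
{
	if (arch_parse_debug_rodata(arg))
		return 0;

	if (strcmp(arg, "on") == 0) {
		rodata_enabled = true;
		rodata_full = false;
	} else if (strcmp(arg, "off") == 0) {
		rodata_enabled = false;
		rodata_full = false;
	} else {
		fprintf(stderr, "invalid 'rodata' option: '%s'\n", arg);
		return -1;
	}
	return 0;
}

int main(void)
{
	const char *tests[] = { "on", "off", "full", "ful" };

	for (unsigned i = 0; i < sizeof(tests) / sizeof(tests[0]); i++) {
		int ret = set_debug_rodata(tests[i]);
		printf("rodata=%s -> ret=%d enabled=%d full=%d\n",
		       tests[i], ret, rodata_enabled, rodata_full);
	}
	return 0;
}

With this split the core parser stays arch-neutral while arm64 opts in to the extra "full" keyword.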
2022-08-23 | arm64: Fix comment typo | Kuan-Ying Lee | 1 file, -1/+1
Replace wrong 'FIQ EL1h' comment with 'FIQ EL1t'. Signed-off-by: Kuan-Ying Lee <Kuan-Ying.Lee@mediatek.com> Link: https://lore.kernel.org/r/20220721030531.21234-1-Kuan-Ying.Lee@mediatek.com Signed-off-by: Will Deacon <will@kernel.org>
2022-08-23 | docs/arm64: elf_hwcaps: unify newlines in HWCAP lists | Martin Liška | 1 file, -10/+0
Unify horizontal spacing (remove extra newlines) which are sensitive to visual presentation by Sphinx. Fixes: 5e64b862c482 ("arm64/sme: Basic enumeration support") Signed-off-by: Martin Liska <mliska@suse.cz> Reviewed-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/84e3d6cc-75cf-d6f3-9bb8-be02075aaf6d@suse.cz Signed-off-by: Will Deacon <will@kernel.org>
2022-08-17 | arm64: adjust KASLR relocation after ARCH_RANDOM removal | Lukas Bulwahn | 1 file, -5/+3
Commit aacd149b6238 ("arm64: head: avoid relocating the kernel twice for KASLR") adds the new file arch/arm64/kernel/pi/kaslr_early.c with a small code part guarded by '#ifdef CONFIG_ARCH_RANDOM'. Concurrently, commit 9592eef7c16e ("random: remove CONFIG_ARCH_RANDOM") removes the config CONFIG_ARCH_RANDOM and turns all '#ifdef CONFIG_ARCH_RANDOM' code parts into unconditional code parts, which is generally safe to do. Remove a needless ifdef guard after the ARCH_RANDOM removal. Signed-off-by: Lukas Bulwahn <lukas.bulwahn@gmail.com> Acked-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20220721100433.18286-1-lukas.bulwahn@gmail.com Signed-off-by: Will Deacon <will@kernel.org>
2022-08-17 | arm64: Fix match_list for erratum 1286807 on Arm Cortex-A76 | Zenghui Yu | 1 file, -0/+2
Since commit 51f559d66527 ("arm64: Enable repeat tlbi workaround on KRYO4XX gold CPUs"), we have failed to detect erratum 1286807 on Cortex-A76 because its entry in arm64_repeat_tlbi_list[] was accidentally corrupted by this commit. Fix this issue by creating a separate entry for Kryo4xx Gold. Fixes: 51f559d66527 ("arm64: Enable repeat tlbi workaround on KRYO4XX gold CPUs") Cc: Shreyas K K <quic_shrekk@quicinc.com> Signed-off-by: Zenghui Yu <yuzenghui@huawei.com> Acked-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20220809043848.969-1-yuzenghui@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
2022-08-14 | Linux 6.0-rc1 | Linus Torvalds | 1 file, -4/+4
2022-08-14 | radix-tree: replace gfp.h inclusion with gfp_types.h | Yury Norov | 1 file, -1/+1
Radix tree header includes gfp.h for __GFP_BITS_SHIFT only. Now we have gfp_types.h for this. Fixes powerpc allmodconfig build: In file included from include/linux/nodemask.h:97, from include/linux/mmzone.h:17, from include/linux/gfp.h:7, from include/linux/radix-tree.h:12, from include/linux/idr.h:15, from include/linux/kernfs.h:12, from include/linux/sysfs.h:16, from include/linux/kobject.h:20, from include/linux/pci.h:35, from arch/powerpc/kernel/prom_init.c:24: include/linux/random.h: In function 'add_latent_entropy': >> include/linux/random.h:25:46: error: 'latent_entropy' undeclared (first use in this function); did you mean 'add_latent_entropy'? 25 | add_device_randomness((const void *)&latent_entropy, sizeof(latent_entropy)); | ^~~~~~~~~~~~~~ | add_latent_entropy include/linux/random.h:25:46: note: each undeclared identifier is reported only once for each function it appears in Reported-by: kernel test robot <lkp@intel.com> CC: Andy Shevchenko <andriy.shevchenko@linux.intel.com> CC: Andrew Morton <akpm@linux-foundation.org> CC: Jason A. Donenfeld <Jason@zx2c4.com> Signed-off-by: Yury Norov <yury.norov@gmail.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2022-08-14 | take care to handle NULL ->proc_lseek() | Al Viro | 1 file, -0/+3
Easily done now, just by clearing FMODE_LSEEK in ->f_mode during proc_reg_open() for such entries. Fixes: 868941b14441 "fs: remove no_llseek" Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2022-08-13 | afs: Enable multipage folio support | David Howells | 2 files, -1/+3
Enable multipage folio support for the afs filesystem. Support has already been implemented in netfslib, fscache and cachefiles and in most of afs, but I've waited for Matthew Wilcox's latest folio changes. Note that it does require a change to afs_write_begin() to return the correct subpage. This is a "temporary" change as we're working on getting rid of the need for ->write_begin() and ->write_end() completely, at least as far as network filesystems are concerned - but it doesn't prevent afs from making use of the capability. Signed-off-by: David Howells <dhowells@redhat.com> Acked-by: Matthew Wilcox (Oracle) <willy@infradead.org> Tested-by: kafs-testing@auristor.com Cc: Marc Dionne <marc.dionne@auristor.com> Cc: linux-afs@lists.infradead.org Link: https://lore.kernel.org/lkml/2274528.1645833226@warthog.procyon.org.uk/ Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2022-08-13 | perf test: Refactor shell tests allowing subdirs | Carsten Haitzler | 4 files, -134/+238
This is a prelude to adding more tests to the shell tests, and in order to support putting those tests into subdirectories, I need to change the test code that scans/finds and runs them. To support subdirs I have to recurse, so it's time to refactor the code to allow this and centralize the shell script finding into one location, with only one single scan that builds a list of all the found tests in memory instead of it being duplicated in 3 places. This code also optimizes things like knowing the max width of description strings (as we can do that while we scan instead of a whole new pass of opening files). It also more cleanly filters scripts to see only *.sh files, thus skipping random other files in directories like *~ backup files and other random junk/data files that may appear, and the scripts must be executable to make the cut (this ensures the script lib dir is not seen as scripts to run). This avoids perf test running previous older versions of test scripts that are editor backup files, as well as skipping perf.data files that may appear, and so on. Reviewed-by: Leo Yan <leo.yan@linaro.org> Signed-off-by: Carsten Haitzler <carsten.haitzler@arm.com> Tested-by: Leo Yan <leo.yan@linaro.org> Cc: Mathieu Poirier <mathieu.poirier@linaro.org> Cc: Mike Leach <mike.leach@linaro.org> Cc: Suzuki Poulouse <suzuki.poulose@arm.com> Cc: coresight@lists.linaro.org Link: https://lore.kernel.org/r/20220812121641.336465-2-carsten.haitzler@foss.arm.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-08-13 | perf vendor events: Update events for snowridgex | Zhengjun Xing | 1 file, -84/+27
Update the events to v1.20, update events for snowridgex by the latest event converter tools. Use script at: https://github.com/intel/event-converter-for-linux-perf/blob/master/download_and_gen.py to download and generate the latest events and metrics. Manually copy the snowridgex files into perf. Signed-off-by: Xing Zhengjun <zhengjun.xing@linux.intel.com> Tested-by: Ian Rogers <irogers@google.com> Cc: Alexander Shishkin <alexander.shishkin@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Ian Rogers <irogers@google.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20220812085239.3089231-12-zhengjun.xing@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-08-13 | perf vendor events: Update events and metrics for skylakex | Zhengjun Xing | 4 files, -1157/+24918
Update the events to v1.28, the metrics are based on TMA 4.4 full, update events and metrics for skylakex by the latest event converter tools. Use script at: https://github.com/intel/event-converter-for-linux-perf/blob/master/download_and_gen.py to download and generate the latest events and metrics. Manually copy the skylakex files into perf. Signed-off-by: Xing Zhengjun <zhengjun.xing@linux.intel.com> Tested-by: Ian Rogers <irogers@google.com> Cc: Alexander Shishkin <alexander.shishkin@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Ian Rogers <irogers@google.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20220812085239.3089231-11-zhengjun.xing@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-08-13 | perf vendor events: Update metrics for sapphirerapids | Zhengjun Xing | 1 file, -0/+6
The metrics are based on TMA 4.4 full, add new metrics “UNCORE_FREQ” for sapphirerapids. Use script at: https://github.com/intel/event-converter-for-linux-perf/blob/master/download_and_gen.py to download and generate the latest events and metrics. Manually copy the sapphirerapids files into perf. Signed-off-by: Xing Zhengjun <zhengjun.xing@linux.intel.com> Tested-by: Ian Rogers <irogers@google.com> Cc: Alexander Shishkin <alexander.shishkin@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Ian Rogers <irogers@google.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20220812085239.3089231-10-zhengjun.xing@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-08-13 | perf vendor events: Update events for knightslanding | Zhengjun Xing | 1 file, -0/+213
Update the events to v9, update events for knightslanding by the latest event converter tools. Use script at: https://github.com/intel/event-converter-for-linux-perf/blob/master/download_and_gen.py to download and generate the latest events and metrics. Manually copy the knightslanding files into perf. Signed-off-by: Xing Zhengjun <zhengjun.xing@linux.intel.com> Tested-by: Ian Rogers <irogers@google.com> Cc: Alexander Shishkin <alexander.shishkin@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Ian Rogers <irogers@google.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20220812085239.3089231-9-zhengjun.xing@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-08-13 | perf vendor events: Update metrics for jaketown | Zhengjun Xing | 1 file, -0/+6
The metrics are based on TMA 4.4 full, add new metrics “UNCORE_FREQ” for jaketown. Use script at: https://github.com/intel/event-converter-for-linux-perf/blob/master/download_and_gen.py to download and generate the latest events and metrics. Manually copy the jaketown files into perf. Signed-off-by: Xing Zhengjun <zhengjun.xing@linux.intel.com> Tested-by: Ian Rogers <irogers@google.com> Cc: Alexander Shishkin <alexander.shishkin@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Ian Rogers <irogers@google.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20220812085239.3089231-8-zhengjun.xing@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-08-13 | perf vendor events: Update metrics for ivytown | Zhengjun Xing | 1 file, -0/+6
The metrics are based on TMA 4.4 full, add new metrics “UNCORE_FREQ” for ivytown. Use script at: https://github.com/intel/event-converter-for-linux-perf/blob/master/download_and_gen.py to download and generate the latest events and metrics. Manually copy the ivytown files into perf. Signed-off-by: Xing Zhengjun <zhengjun.xing@linux.intel.com> Tested-by: Ian Rogers <irogers@google.com> Cc: Alexander Shishkin <alexander.shishkin@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Ian Rogers <irogers@google.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20220812085239.3089231-7-zhengjun.xing@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-08-13 | perf vendor events: Update events and metrics for icelakex | Zhengjun Xing | 4 files, -556/+38332
Update the events to v1.15, the metrics are based on TMA 4.4 full, update events and metrics for icelakex by the latest event converter tools. Use script at: https://github.com/intel/event-converter-for-linux-perf/blob/master/download_and_gen.py to download and generate the latest events and metrics. Manually copy the icelakex files into perf. Signed-off-by: Xing Zhengjun <zhengjun.xing@linux.intel.com> Tested-by: Ian Rogers <irogers@google.com> Cc: Alexander Shishkin <alexander.shishkin@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Ian Rogers <irogers@google.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20220812085239.3089231-6-zhengjun.xing@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-08-13 | perf vendor events: Update events and metrics for haswellx | Zhengjun Xing | 2 files, -171/+413
Update the events to v25, the metrics are based on TMA 4.4 full, update events and metrics for haswellx by the latest event converter tools. Use script at: https://github.com/intel/event-converter-for-linux-perf/blob/master/download_and_gen.py to download and generate the latest events and metrics. Manually copy the haswellx files into perf. Signed-off-by: Xing Zhengjun <zhengjun.xing@linux.intel.com> Tested-by: Ian Rogers <irogers@google.com> Cc: Alexander Shishkin <alexander.shishkin@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Ian Rogers <irogers@google.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20220812085239.3089231-5-zhengjun.xing@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-08-13 | perf vendor events: Update events and metrics for cascadelakex | Zhengjun Xing | 4 files, -1241/+25848
Update to v16, the metrics are based on TMA 4.4 full, update events and add new metrics “UNCORE_FREQ” for cascadelakex. Use script at: https://github.com/intel/event-converter-for-linux-perf/blob/master/download_and_gen.py to download and generate the latest events and metrics. Manually copy the cascadelakex files into perf. Signed-off-by: Xing Zhengjun <zhengjun.xing@linux.intel.com> Tested-by: Ian Rogers <irogers@google.com> Cc: Alexander Shishkin <alexander.shishkin@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Ian Rogers <irogers@google.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20220812085239.3089231-4-zhengjun.xing@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-08-13 | perf vendor events: Update events and metrics for broadwellx | Zhengjun Xing | 2 files, -159/+10
Update to v19, the metrics are based on TMA 4.4 full, update events and add new metrics “UNCORE_FREQ” for broadwellx. Use script at: https://github.com/intel/event-converter-for-linux-perf/blob/master/download_and_gen.py to download and generate the latest events and metrics. Manually copy the broadwellx files into perf. Signed-off-by: Xing Zhengjun <zhengjun.xing@linux.intel.com> Tested-by: Ian Rogers <irogers@google.com> Cc: Alexander Shishkin <alexander.shishkin@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Ian Rogers <irogers@google.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20220812085239.3089231-3-zhengjun.xing@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-08-13 | perf vendor events: Update metrics for broadwellde | Zhengjun Xing | 1 file, -0/+6
The metrics are based on TMA 4.4 full, add new metrics “UNCORE_FREQ” for broadwellde. Use script at: https://github.com/intel/event-converter-for-linux-perf/blob/master/download_and_gen.py to download and generate the latest events and metrics. Manually copy the broadwellde files into perf. Signed-off-by: Xing Zhengjun <zhengjun.xing@linux.intel.com> Tested-by: Ian Rogers <irogers@google.com> Cc: Alexander Shishkin <alexander.shishkin@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20220812085239.3089231-2-zhengjun.xing@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-08-13 | perf jevents: Fold strings optimization | Ian Rogers | 1 file, -7/+48
If a shorter string ends a longer string then the shorter string may reuse the longer string at an offset. For example, on x86 the events arith.cycles_div_busy and cycles_div_busy can be folded: even though they have different names, the strings are identical after 6 characters. cycles_div_busy can reuse the arith.cycles_div_busy string at an offset of 6. In pmu-events.c this looks like the following, where the 'also:' lists folded strings: /* offset=177541 */ "arith.cycles_div_busy\000\000pipeline\000Cycles the divider is busy\000\000\000event=0x14,period=2000000,umask=0x1\000\000\000\000\000\000\000\000\000" /* also: cycles_div_busy\000\000pipeline\000Cycles the divider is busy\000\000\000event=0x14,period=2000000,umask=0x1\000\000\000\000\000\000\000\000\000 */ As jevents.py combines multiple strings for an event into a larger string, the amount of folding is minimal as all parts of the event must align. Other organizations can benefit more from folding, but lose space by, say, recording more offsets. Signed-off-by: Ian Rogers <irogers@google.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: James Clark <james.clark@arm.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: John Garry <john.garry@huawei.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Leo Yan <leo.yan@linaro.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mike Leach <mike.leach@linaro.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ravi Bangoria <ravi.bangoria@amd.com> Cc: Stephane Eranian <eranian@google.com> Cc: Will Deacon <will@kernel.org> Cc: Xing Zhengjun <zhengjun.xing@linux.intel.com> Cc: linux-arm-kernel@lists.infradead.org Link: https://lore.kernel.org/r/20220812230949.683239-15-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
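The folding idea can be sketched standalone (illustrative only, not the jevents.py implementation): if one string is a suffix of another, the shorter one can be emitted as a pointer into the longer one instead of as a separate copy.

#include <stdio.h>
#include <string.h>

/* If 'shorter' ends 'longer', return the offset at which it can be
 * reused inside 'longer'; otherwise return -1. */
static long fold_offset(const char *longer, const char *shorter)
{
	size_t ll = strlen(longer), ls = strlen(shorter);

	if (ls <= ll && strcmp(longer + (ll - ls), shorter) == 0)
		return (long)(ll - ls);
	return -1;
}

int main(void)
{
	const char *big = "arith.cycles_div_busy";
	const char *small = "cycles_div_busy";
	long off = fold_offset(big, small);

	if (off >= 0)
		/* One allocation serves both names: 'small' is 'big' + off. */
		printf("\"%s\" folds into \"%s\" at offset %ld: \"%s\"\n",
		       small, big, off, big + off);
	return 0;
}

jevents.py applies the same idea to whole multi-field event strings, which is why only fully aligned suffixes can be folded.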
2022-08-13 | perf jevents: Compress the pmu_events_table | Ian Rogers | 1 file, -45/+162
The pmu_events array requires 15 pointers per entry which in position independent code need relocating. Change the array to be an array of offsets within a big C string. Only the offset of the first variable is required, subsequent variables are stored in order after the \0 terminator (requiring a byte per variable rather than 4 bytes per offset). The file size savings are: no jevents - the same 19,788,464bytes x86 jevents - ~16.7% file size saving 23,744,288bytes vs 28,502,632bytes all jevents - ~19.5% file size saving 24,469,056bytes vs 30,379,920bytes default build options plus NO_LIBBFD=1. For example, the x86 build savings come from .rela.dyn and .data.rel.ro becoming smaller by 3,157,032bytes and 3,042,016bytes respectively. .rodata increases by 1,432,448bytes, giving an overall 4,766,600bytes saving. To make metric strings more shareable, the topic is changed from say 'skx metrics' to just 'metrics'. To try to help with the memory layout the pmu_events are ordered as used by perf qsort comparator functions. Signed-off-by: Ian Rogers <irogers@google.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: James Clark <james.clark@arm.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: John Garry <john.garry@huawei.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Leo Yan <leo.yan@linaro.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mike Leach <mike.leach@linaro.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ravi Bangoria <ravi.bangoria@amd.com> Cc: Stephane Eranian <eranian@google.com> Cc: Will Deacon <will@kernel.org> Cc: Xing Zhengjun <zhengjun.xing@linux.intel.com> Cc: linux-arm-kernel@lists.infradead.org Link: https://lore.kernel.org/r/20220812230949.683239-14-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
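A compact sketch of the offset encoding described above (the field layout and event names are invented for illustration): one big C string holds all fields back to back, the table records only the offset of an entry's first field, and later fields are reached by walking past each '\0' terminator.

#include <stdio.h>
#include <string.h>

/* One big string; each entry is name\0 topic\0 desc\0, entries back to back. */
static const char big_c_string[] =
	"branch-misses\0" "pipeline\0" "Mispredicted branches\0"
	"cache-misses\0" "cache\0" "Last level cache misses\0";

/* The table stores only the offset of each entry's first field. */
static const unsigned int event_offsets[] = {
	0,
	sizeof("branch-misses") + sizeof("pipeline") + sizeof("Mispredicted branches"),
};

/* Step over the current field's terminating '\0' to reach the next field. */
static const char *next_field(const char *p)
{
	return p + strlen(p) + 1;
}

int main(void)
{
	for (unsigned i = 0; i < sizeof(event_offsets) / sizeof(event_offsets[0]); i++) {
		const char *name  = big_c_string + event_offsets[i];
		const char *topic = next_field(name);
		const char *desc  = next_field(topic);

		printf("%-14s topic=%-9s desc=%s\n", name, topic, desc);
	}
	return 0;
}

Because the table holds small integer offsets into read-only data instead of pointers, position-independent builds no longer need a relocation per field, which is where the quoted size savings come from.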
2022-08-13 | perf metrics: Copy entire pmu_event in find metric | Ian Rogers | 2 files, -17/+18
The pmu_event passed to the pmu_events_table_for_each_event is invalid after the loop. Copy the entire struct in metricgroup__find_metric. Reduce the scope of this function to static. Signed-off-by: Ian Rogers <irogers@google.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: James Clark <james.clark@arm.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: John Garry <john.garry@huawei.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Leo Yan <leo.yan@linaro.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mike Leach <mike.leach@linaro.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ravi Bangoria <ravi.bangoria@amd.com> Cc: Stephane Eranian <eranian@google.com> Cc: Will Deacon <will@kernel.org> Cc: Xing Zhengjun <zhengjun.xing@linux.intel.com> Cc: linux-arm-kernel@lists.infradead.org Link: https://lore.kernel.org/r/20220812230949.683239-13-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-08-13 | perf pmu-events: Hide the pmu_events | Ian Rogers | 12 files, -91/+101
Hide that the pmu_event structs are an array with a new wrapper struct. Signed-off-by: Ian Rogers <irogers@google.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: James Clark <james.clark@arm.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: John Garry <john.garry@huawei.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Leo Yan <leo.yan@linaro.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mike Leach <mike.leach@linaro.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ravi Bangoria <ravi.bangoria@amd.com> Cc: Stephane Eranian <eranian@google.com> Cc: Will Deacon <will@kernel.org> Cc: Xing Zhengjun <zhengjun.xing@linux.intel.com> Cc: linux-arm-kernel@lists.infradead.org Link: https://lore.kernel.org/r/20220812230949.683239-12-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-08-13 | perf pmu-events: Don't assume pmu_event is an array | Ian Rogers | 7 files, -182/+313
The current code assumes that a struct pmu_event can be iterated over forward until a NULL pmu_event is encountered. This makes it difficult to refactor pmu_event. Add a loop function taking a callback function that's passed the struct pmu_event. This way the pmu_event is only needed for one element and not an entire array. Switch existing code iterating over the pmu_event arrays to use the new loop function pmu_events_table_for_each_event. Signed-off-by: Ian Rogers <irogers@google.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: James Clark <james.clark@arm.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: John Garry <john.garry@huawei.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Leo Yan <leo.yan@linaro.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mike Leach <mike.leach@linaro.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ravi Bangoria <ravi.bangoria@amd.com> Cc: Stephane Eranian <eranian@google.com> Cc: Will Deacon <will@kernel.org> Cc: Xing Zhengjun <zhengjun.xing@linux.intel.com> Cc: linux-arm-kernel@lists.infradead.org Link: https://lore.kernel.org/r/20220812230949.683239-11-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
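A minimal sketch of this iteration style (names loosely follow the commit, the struct layout is invented): callers see only an opaque table plus a callback invoked once per event, so the backing storage can later change without touching them.

#include <stdio.h>

struct pmu_event {
	const char *name;
	const char *desc;
};

/* Opaque-ish table wrapper; callers never index the array directly. */
struct pmu_events_table {
	const struct pmu_event *entries;
	int num_entries;
};

typedef int (*pmu_event_iter_fn)(const struct pmu_event *pe, void *data);

/* Invoke the callback once per event; a non-zero return stops the walk. */
static int table_for_each_event(const struct pmu_events_table *table,
				pmu_event_iter_fn fn, void *data)
{
	for (int i = 0; i < table->num_entries; i++) {
		int ret = fn(&table->entries[i], data);

		if (ret)
			return ret;
	}
	return 0;
}

static int print_event(const struct pmu_event *pe, void *data)
{
	int *count = data;

	printf("%s: %s\n", pe->name, pe->desc);
	(*count)++;
	return 0;
}

int main(void)
{
	static const struct pmu_event events[] = {
		{ "cycles", "CPU cycles" },
		{ "instructions", "Retired instructions" },
	};
	struct pmu_events_table table = { events, 2 };
	int count = 0;

	table_for_each_event(&table, print_event, &count);
	printf("%d events\n", count);
	return 0;
}

Callers written against the callback API keep working when the underlying array is later replaced by a different representation.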
2022-08-13 | perf pmu-events: Move test events/metrics to JSON | Ian Rogers | 4 files, -81/+132
Move arrays of pmu_events into the JSON code so that it may be regenerated and modified by the jevents.py script. Signed-off-by: Ian Rogers <irogers@google.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: James Clark <james.clark@arm.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: John Garry <john.garry@huawei.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Leo Yan <leo.yan@linaro.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mike Leach <mike.leach@linaro.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ravi Bangoria <ravi.bangoria@amd.com> Cc: Stephane Eranian <eranian@google.com> Cc: Will Deacon <will@kernel.org> Cc: Xing Zhengjun <zhengjun.xing@linux.intel.com> Cc: linux-arm-kernel@lists.infradead.org Link: https://lore.kernel.org/r/20220812230949.683239-10-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-08-13 | perf test: Use full metric resolution | Ian Rogers | 1 file, -145/+77
The simple metric resolution doesn't handle recursion properly, switch to use the full resolution as with the parse-metric tests which also increases coverage. Don't set the values for the metric backward as failures to generate a result are ignored. Signed-off-by: Ian Rogers <irogers@google.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: James Clark <james.clark@arm.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: John Garry <john.garry@huawei.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Leo Yan <leo.yan@linaro.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mike Leach <mike.leach@linaro.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ravi Bangoria <ravi.bangoria@amd.com> Cc: Stephane Eranian <eranian@google.com> Cc: Will Deacon <will@kernel.org> Cc: Xing Zhengjun <zhengjun.xing@linux.intel.com> Cc: linux-arm-kernel@lists.infradead.org Link: https://lore.kernel.org/r/20220812230949.683239-9-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-08-13 | perf pmu-events: Hide pmu_events_map | Ian Rogers | 7 files, -180/+280
Move usage of the table to pmu-events.c so it may be hidden. By abstracting the table the implementation can later be changed. Signed-off-by: Ian Rogers <irogers@google.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: James Clark <james.clark@arm.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: John Garry <john.garry@huawei.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Leo Yan <leo.yan@linaro.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mike Leach <mike.leach@linaro.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ravi Bangoria <ravi.bangoria@amd.com> Cc: Stephane Eranian <eranian@google.com> Cc: Will Deacon <will@kernel.org> Cc: Xing Zhengjun <zhengjun.xing@linux.intel.com> Cc: linux-arm-kernel@lists.infradead.org Link: https://lore.kernel.org/r/20220812230949.683239-8-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-08-13 | perf pmu-events: Avoid passing pmu_events_map | Ian Rogers | 9 files, -120/+101
Preparation for hiding pmu_events_map as an implementation detail. While the map is passed, the table of events is all that is normally wanted. While modifying the function's types, rename pmu_events_map__find to pmu_events_table__find to match later encapsulation. Similarly rename pmu_add_cpu_aliases_map to pmu_add_cpu_aliases_table. Signed-off-by: Ian Rogers <irogers@google.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: James Clark <james.clark@arm.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: John Garry <john.garry@huawei.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Leo Yan <leo.yan@linaro.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mike Leach <mike.leach@linaro.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ravi Bangoria <ravi.bangoria@amd.com> Cc: Stephane Eranian <eranian@google.com> Cc: Will Deacon <will@kernel.org> Cc: Xing Zhengjun <zhengjun.xing@linux.intel.com> Cc: linux-arm-kernel@lists.infradead.org Link: https://lore.kernel.org/r/20220812230949.683239-7-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-08-13 | perf pmu-events: Hide pmu_sys_event_tables | Ian Rogers | 6 files, -52/+84
Move usage of the table to pmu-events.c so it may be hidden. By abstracting the table the implementation can later be changed. Signed-off-by: Ian Rogers <irogers@google.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: James Clark <james.clark@arm.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: John Garry <john.garry@huawei.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Leo Yan <leo.yan@linaro.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mike Leach <mike.leach@linaro.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ravi Bangoria <ravi.bangoria@amd.com> Cc: Stephane Eranian <eranian@google.com> Cc: Will Deacon <will@kernel.org> Cc: Xing Zhengjun <zhengjun.xing@linux.intel.com> Cc: linux-arm-kernel@lists.infradead.org Link: https://lore.kernel.org/r/20220812230949.683239-6-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-08-13 | perf jevents: Sort JSON files entries | Ian Rogers | 1 file, -13/+35
Sort the JSON files entries on conversion to C. The sort order tries to replicated cmp_sevent from pmu.c so that the input there is already sorted except for sysfs events. Specifically, the sort order is given by the tuple: (not j.desc is None, fix_none(j.topic), fix_none(j.name), fix_none(j.pmu), fix_none(j.metric_name)) which is putting events with descriptions and topics before those without, then sorting by name, then pmu and finally metric_name Add the topic to JsonEvent on reading to simplify. Remove an unnecessary lambda in the JSON reading. Signed-off-by: Ian Rogers <irogers@google.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: James Clark <james.clark@arm.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: John Garry <john.garry@huawei.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Leo Yan <leo.yan@linaro.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mike Leach <mike.leach@linaro.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ravi Bangoria <ravi.bangoria@amd.com> Cc: Stephane Eranian <eranian@google.com> Cc: Will Deacon <will@kernel.org> Cc: Xing Zhengjun <zhengjun.xing@linux.intel.com> Cc: linux-arm-kernel@lists.infradead.org Link: https://lore.kernel.org/r/20220812230949.683239-5-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
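A rough C analogue of the described sort order (the struct and comparator below are illustrative, not cmp_sevent itself): events with a description sort first, then ties are broken by topic, name, pmu and metric_name, with missing fields treated as empty strings.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct jevent {
	const char *desc;	/* may be NULL */
	const char *topic;
	const char *name;
	const char *pmu;
	const char *metric_name;
};

/* Compare with NULL treated as the empty string, mirroring fix_none(). */
static int cmp_str(const char *a, const char *b)
{
	return strcmp(a ? a : "", b ? b : "");
}

/* Sort key: events with a description first, then topic, name, pmu,
 * metric_name; a rough analogue of the tuple described above. */
static int cmp_event(const void *va, const void *vb)
{
	const struct jevent *a = va, *b = vb;
	int ret = (a->desc == NULL) - (b->desc == NULL);

	if (!ret)
		ret = cmp_str(a->topic, b->topic);
	if (!ret)
		ret = cmp_str(a->name, b->name);
	if (!ret)
		ret = cmp_str(a->pmu, b->pmu);
	if (!ret)
		ret = cmp_str(a->metric_name, b->metric_name);
	return ret;
}

int main(void)
{
	struct jevent events[] = {
		{ NULL, "cache", "llc-misses", "cpu", NULL },
		{ "CPU cycles", "pipeline", "cycles", "cpu", NULL },
	};

	qsort(events, 2, sizeof(events[0]), cmp_event);
	printf("first after sort: %s\n", events[0].name);	/* cycles */
	return 0;
}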
2022-08-13 | perf jevents: Provide path to JSON file on error | Ian Rogers | 1 file, -1/+6
If a JSONDecodeError or similar is raised then it is useful to know the path. Print this and then raise the exception again. Signed-off-by: Ian Rogers <irogers@google.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: James Clark <james.clark@arm.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: John Garry <john.garry@huawei.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Leo Yan <leo.yan@linaro.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mike Leach <mike.leach@linaro.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ravi Bangoria <ravi.bangoria@amd.com> Cc: Stephane Eranian <eranian@google.com> Cc: Will Deacon <will@kernel.org> Cc: Xing Zhengjun <zhengjun.xing@linux.intel.com> Cc: linux-arm-kernel@lists.infradead.org Link: https://lore.kernel.org/r/20220812230949.683239-4-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-08-13 | perf jevents: Remove the type/version variables | Ian Rogers | 5 files, -16/+0
pmu_events_map has a type variable that is always initialized to "core" and a version variable that is never read. Remove these from the API as it is straightforward to add them back when necessary. Signed-off-by: Ian Rogers <irogers@google.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: James Clark <james.clark@arm.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: John Garry <john.garry@huawei.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Leo Yan <leo.yan@linaro.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mike Leach <mike.leach@linaro.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ravi Bangoria <ravi.bangoria@amd.com> Cc: Stephane Eranian <eranian@google.com> Cc: Will Deacon <will@kernel.org> Cc: Xing Zhengjun <zhengjun.xing@linux.intel.com> Cc: linux-arm-kernel@lists.infradead.org Link: https://lore.kernel.org/r/20220812230949.683239-3-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>