2023-12-29powerpc/86xx: Drop unused CONFIG_MPC8610Michael Ellerman1-7/+0
The MPC8610 symbol used to be default y if MPC8610_HPCD, but since MPC8610_HPCD was removed, MPC8610 is now never used. Remove it. Fixes: 248667f8bbde ("powerpc: drop HPCD/MPC8610 evaluation platform support") Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231123032902.2760818-1-mpe@ellerman.id.au
2023-12-21powerpc/powernv: Add error handling to opal_prd_range_is_validHaoran Liu1-0/+2
In the opal_prd_range_is_valid function within opal-prd.c, error handling was missing for the of_get_address call. This patch adds necessary error checking, ensuring that the function gracefully handles scenarios where of_get_address fails. Signed-off-by: Haoran Liu <liuhaoran14@163.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231127144108.29782-1-liuhaoran14@163.com
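A minimal sketch of the added check, with illustrative names rather than the patch's exact context:

  #include <linux/of.h>
  #include <linux/of_address.h>

  /*
   * Sketch only: of_get_address() returns NULL when the node has no
   * usable address entry, so treat the range as invalid rather than
   * using the result unchecked.
   */
  static bool prd_range_is_valid(struct device_node *node, u64 addr, u64 size)
  {
          u64 range_size;
          const __be32 *addrp = of_get_address(node, 0, &range_size, NULL);

          if (!addrp)
                  return false;

          return addr >= of_translate_address(node, addrp) &&
                 addr + size <= of_translate_address(node, addrp) + range_size;
  }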
2023-12-21selftests/powerpc: Fix spelling mistake "EACCESS" -> "EACCES"Colin Ian King1-1/+1
There is a spelling mistake of the EACCES error name, fix it. Signed-off-by: Colin Ian King <colin.i.king@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231215112456.13554-1-colin.i.king@gmail.com
2023-12-21powerpc/hvcall: Reorder Nestedv2 hcall opcodesVaibhav Jain1-10/+10
Reorder the newly introduced hcall opcodes for Nestedv2 to follow the increasing-opcode-number convention used in 'hvcall.h'. Also update the value of MAX_HCALL_OPCODE, which is used in various places in arch code for range checking, notably in the KVM enabled-hcall logic and in hcall tracing. Fixes: 19d31c5f1157 ("KVM: PPC: Add support for nestedv2 guests") Suggested-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Vaibhav Jain <vaibhav@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231219092309.118151-1-vaibhav@linux.ibm.com
2023-12-21powerpc/ps3: Add missing set_freezable() for ps3_probe_thread()Kevin Hao1-0/+1
The kernel thread function ps3_probe_thread() invokes try_to_freeze() in its loop. But all kernel threads are non-freezable by default, so to make a kernel thread freezable we have to invoke set_freezable() explicitly. Signed-off-by: Kevin Hao <haokexin@gmail.com> Acked-by: Geoff Levand <geoff@infradead.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231221044510.1802429-4-haokexin@gmail.com
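As a sketch of the pattern (the loop body here is a placeholder, not the driver's actual logic):

  #include <linux/freezer.h>
  #include <linux/kthread.h>

  static int ps3_probe_thread(void *data)
  {
          set_freezable();        /* kthreads are non-freezable by default */

          while (!kthread_should_stop()) {
                  try_to_freeze();
                  /* ... wait for and handle probe events ... */
          }
          return 0;
  }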
2023-12-21powerpc/mpc83xx: Use wait_event_freezable() for freezable kthreadKevin Hao1-2/+1
A freezable kernel thread can enter the frozen state during freezing by either calling try_to_freeze() or using wait_event_freezable() and its variants. So for the following snippet of code in a kernel thread loop:

  wait_event_interruptible();
  try_to_freeze();

we can change it to a simple wait_event_freezable() and then eliminate a function call. Signed-off-by: Kevin Hao <haokexin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231221044510.1802429-3-haokexin@gmail.com
2023-12-21powerpc/mpc83xx: Add the missing set_freezable() for agent_thread_fn()Kevin Hao1-0/+2
The kernel thread function agent_thread_fn() invokes try_to_freeze() in its loop. But all kernel threads are non-freezable by default, so to make a kernel thread freezable we have to invoke set_freezable() explicitly. Signed-off-by: Kevin Hao <haokexin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231221044510.1802429-2-haokexin@gmail.com
2023-12-19powerpc/fsl: Fix fsl,tmu-calibration to match the schemaDavid Heidelberg2-74/+76
fsl,tmu-calibration is defined as a u32 matrix in Documentation/devicetree/bindings/thermal/qoriq-thermal.yaml. Use matching property syntax. No functional changes. Signed-off-by: David Heidelberg <david@ixit.cz> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231212184515.82886-2-david@ixit.cz
2023-12-15powerpc/smp: Dynamically build Powerpc topologySrikar Dronamraju1-50/+28
Currently there are four powerpc-specific sched topologies. These are all statically defined. However, not all of these topologies are used by all powerpc systems. To avoid unnecessary degenerations by the scheduler, masks and flags are compared. However, if the sched topologies are built dynamically, the code is simpler and there is a greater chance of avoiding degenerations. Note: even x86 builds its sched topologies dynamically, and the proposed changes are very similar to the way x86 builds its topologies. Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231214180720.310852-6-srikar@linux.vnet.ibm.com
2023-12-15powerpc/smp: Avoid asym packing within thread_group of a coreSrikar Dronamraju1-0/+13
The PowerVM hypervisor schedules at a core granularity. However, each core can have more than one thread group. For better utilization in the case of a shared processor, it's preferable for the scheduler to pack to the lowest core. However, there is no benefit to moving a thread between two thread groups of the same core. Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231214180720.310852-5-srikar@linux.vnet.ibm.com
2023-12-15powerpc/smp: Add __ro_after_init attributeSrikar Dronamraju1-5/+5
There are some variables that are only updated at boot time, so add the __ro_after_init attribute to such variables. Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231214180720.310852-4-srikar@linux.vnet.ibm.com
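A sketch of the attribute's effect, with an assumed variable name: the value is written during boot, and the containing page becomes read-only once init completes.

  #include <linux/cache.h>
  #include <linux/init.h>

  static bool has_big_cores __ro_after_init;

  static int __init detect_big_cores(void)
  {
          has_big_cores = true;   /* decided once from the device tree */
          return 0;
  }
  early_initcall(detect_big_cores);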
2023-12-15powerpc/smp: Disable MC domain for shared processorSrikar Dronamraju1-0/+4
Like the L2-cache info, the coregroup information which is used to determine MC sched domains is only present on dedicated LPARs, i.e. PowerVM doesn't export coregroup information for shared processor LPARs. Hence disable creating MC domains on shared LPAR systems. Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231214180720.310852-3-srikar@linux.vnet.ibm.com
2023-12-15powerpc/smp: Enable Asym packing for cores on shared processorSrikar Dronamraju1-2/+23
If there are shared processor LPARs, the underlying hypervisor can have more virtual cores to handle than there are actual physical cores. Starting with Power9, a big core (aka SMT8 core) has 2 nearly independent thread groups. On shared processor LPARs, it helps to pack threads onto fewer cores so that overall system performance and utilization improve. PowerVM schedules at a big-core level, hence packing to fewer cores helps. Since each thread group is independent, running threads on both thread groups of an SMT8 core should have a minimal adverse impact in non-over-provisioned scenarios. The changes in this patchset have no effect in the over-provisioned scenario: if there are more threads than SMT domains, asym_packing will not kick in. For example: say there are two 8-core shared LPARs actually sharing an 8-core physical pool, each running 8 threads. Consolidating the 8 threads onto 4 cores on each LPAR helps them perform better, because each LPAR gets 100% of the time to run its applications and no switching is required by the hypervisor. To achieve this, enable the SD_ASYM_PACKING flag at the CACHE, MC and DIE levels when the system is running in shared processor mode and has big cores. Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231214180720.310852-2-srikar@linux.vnet.ibm.com
2023-12-15powerpc/sched: Cleanup vcpu_is_preempted()Aneesh Kumar K.V1-8/+25
No functional change in this patch. A helper is added to determine whether a vCPU is dispatched by the hypervisor; use it instead of open-coding the check. Also clarify some of the comments. Signed-off-by: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231114071219.198222-1-aneesh.kumar@linux.ibm.com
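A sketch of such a helper, assuming the name and the PAPR convention that the hypervisor increments yield_count on every dispatch and preempt, so an even value means the vCPU is currently dispatched:

  #include <asm/lppaca.h>

  static inline bool is_vcpu_dispatched(int cpu)
  {
          /* Sketch: even yield_count => vCPU currently dispatched. */
          return !(be32_to_cpu(lppaca_of(cpu).yield_count) & 1);
  }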
2023-12-13powerpc: add cpu_spec.cpu_features to vmcoreinfoAditya Gupta1-0/+1
CPU features can be determined in makedumpfile, using 'cur_cpu_spec.cpu_features'. This provides more data to makedumpfile about the crashed system, and can help in filtering the vmcore accordingly. Signed-off-by: Aditya Gupta <adityag@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230920105706.853626-2-adityag@linux.ibm.com
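A sketch of the export via the standard vmcoreinfo macros; whether the patch uses exactly these lines is an assumption:

  #include <linux/crash_core.h>

  void arch_crash_save_vmcoreinfo(void)
  {
          VMCOREINFO_SYMBOL(cur_cpu_spec);
          VMCOREINFO_OFFSET(cpu_spec, cpu_features);
  }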
2023-12-13powerpc/imc-pmu: Add a null pointer check in update_events_in_group()Kunwu Chan1-0/+6
kasprintf() returns a pointer to dynamically allocated memory which can be NULL upon failure. Fixes: 885dcd709ba9 ("powerpc/perf: Add nest IMC PMU support") Signed-off-by: Kunwu Chan <chentao@kylinos.cn> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231126093719.1440305-1-chentao@kylinos.cn
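The fix pattern, shared by this and the next few kasprintf() commits, is roughly the following sketch (the function name and format string are illustrative, not taken from the patches):

  #include <linux/kernel.h>
  #include <linux/slab.h>

  static int add_event_attr(const char *node_name)
  {
          char *name = kasprintf(GFP_KERNEL, "event_%s", node_name);

          if (!name)              /* kasprintf() returns NULL on failure */
                  return -ENOMEM; /* bail out instead of passing NULL on */

          /* ... use name; free it on any later error path ... */
          kfree(name);
          return 0;
  }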
2023-12-13powerpc/powernv: Add a null pointer check in opal_powercap_init()Kunwu Chan1-0/+6
kasprintf() returns a pointer to dynamically allocated memory which can be NULL upon failure. Fixes: b9ef7b4b867f ("powerpc: Convert to using %pOFn instead of device_node.name") Signed-off-by: Kunwu Chan <chentao@kylinos.cn> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231126095739.1501990-1-chentao@kylinos.cn
2023-12-13powerpc/powernv: Add a null pointer check in opal_event_init()Kunwu Chan1-0/+2
kasprintf() returns a pointer to dynamically allocated memory which can be NULL upon failure. Fixes: 2717a33d6074 ("powerpc/opal-irqchip: Use interrupt names if present") Signed-off-by: Kunwu Chan <chentao@kylinos.cn> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231127030755.1546750-1-chentao@kylinos.cn
2023-12-13powerpc/powernv: Add a null pointer check to scom_debug_init_one()Kunwu Chan1-0/+5
kasprintf() returns a pointer to dynamically allocated memory which can be NULL upon failure. Add a null pointer check, and release 'ent' to avoid memory leaks. Fixes: bfd2f0d49aef ("powerpc/powernv: Get rid of old scom_controller abstraction") Signed-off-by: Kunwu Chan <chentao@kylinos.cn> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231208085937.107210-1-chentao@kylinos.cn
2023-12-13powerpc/mm: Fix null-pointer dereference in pgtable_cache_addKunwu Chan1-2/+3
kasprintf() returns a pointer to dynamically allocated memory which can be NULL upon failure. Ensure the allocation was successful by checking the pointer validity. Suggested-by: Christophe Leroy <christophe.leroy@csgroup.eu> Suggested-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Kunwu Chan <chentao@kylinos.cn> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231204023223.2447523-1-chentao@kylinos.cn
2023-12-13powerpc/Kconfig: Select FUNCTION_ALIGNMENT_4BSathvika Vasireddy2-3/+1
Commit d49a0626216b95 ("arch: Introduce CONFIG_FUNCTION_ALIGNMENT") introduced a generic function-alignment infrastructure. Move to using FUNCTION_ALIGNMENT_4B on powerpc, to use the same alignment as that of the existing _GLOBAL macro. Signed-off-by: Sathvika Vasireddy <sv@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/21892186ec44abe24df0daf64f577dac0e78783f.1702045299.git.naveen@kernel.org
2023-12-13powerpc/ftrace: Remove nops after the call to ftrace_stubNaveen N Rao1-2/+0
ftrace_stub is within the same CU, so there is no need for a subsequent nop instruction. Signed-off-by: Naveen N Rao <naveen@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/8ee5ec520e37d5523654bb2cd65a17512fb774e2.1702045299.git.naveen@kernel.org
2023-12-13powerpc/ftrace: Fix indentation in ftrace.hNaveen N Rao1-1/+1
Replace seven spaces with a tab character to fix an indentation issue reported by the kernel test robot. Reported-by: kernel test robot <lkp@intel.com> Closes: https://lore.kernel.org/oe-kbuild-all/202311221731.alUwTDIm-lkp@intel.com/ Signed-off-by: Naveen N Rao <naveen@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/9f058227bd9243f0842786ef7228d87ab10d29f6.1702045299.git.naveen@kernel.org
2023-12-13powerpc/selftests: Add test for papr-sysparmNathan Lynch4-0/+210
Consistently testing system parameter access is a bit difficult by nature -- the set of parameters available depends on the model and system configuration, and updating a parameter should be considered a destructive operation reserved for the admin. So we validate some of the error paths and retrieve the SPLPAR characteristics string, but not much else. Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231212-papr-sys_rtas-vs-lockdown-v6-13-e9eafd0c8c6c@linux.ibm.com
2023-12-13powerpc/selftests: Add test for papr-vpdNathan Lynch4-0/+366
Add selftests for /dev/papr-vpd, exercising the common expected use cases:

* Retrieve all VPD by passing an empty location code.
* Retrieve the "system VPD" by passing a location code derived from DT root node properties, as done by the vpdupdate command.

The tests also verify that certain intended properties of the driver hold:

* Passing an unterminated location code to PAPR_VPD_CREATE_HANDLE gets EINVAL.
* Passing a NULL location code pointer to PAPR_VPD_CREATE_HANDLE gets EFAULT.
* Closing the device node without first issuing a PAPR_VPD_CREATE_HANDLE command to it succeeds.
* Releasing a handle without first consuming any data from it succeeds.
* Re-reading the contents of a handle returns the same data as the first time.

Some minimal validation of the returned data is performed. The tests are skipped on systems where the papr-vpd driver does not initialize, making this useful only on PowerVM LPARs at this point. Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231212-papr-sys_rtas-vs-lockdown-v6-12-e9eafd0c8c6c@linux.ibm.com
2023-12-13powerpc/pseries/papr-sysparm: Expose character device to user spaceNathan Lynch4-8/+227
Until now the papr_sysparm APIs have been kernel-internal. But user space needs access to PAPR system parameters too. The only method available to user space today to get or set system parameters is using sys_rtas() and /dev/mem to pass RTAS-addressable buffers between user space and firmware. This is incompatible with lockdown and should be deprecated. So provide an alternative ABI to user space in the form of a /dev/papr-sysparm character device with just two ioctl commands (get and set). The data payloads involved are small enough to fit in the ioctl argument buffer, making the code relatively simple. Exposing the system parameters through sysfs has been considered but it would be too awkward:

* The kernel currently does not have to contain an exhaustive list of defined system parameters. This is a convenient property to maintain because we don't have to update the kernel whenever a new parameter is added to PAPR. Exporting a named attribute in sysfs for each parameter would negate this.
* Some system parameters are text-based and some are not.
* Retrieval of at least one system parameter requires input data, which a simple read-oriented interface can't support.

Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231212-papr-sys_rtas-vs-lockdown-v6-11-e9eafd0c8c6c@linux.ibm.com
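A hypothetical user-space sketch of the get command; the struct layout, ioctl command name, and parameter token (20, the SPLPAR characteristics parameter) are assumptions to be checked against the final uapi header:

  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/ioctl.h>
  #include <asm/papr-sysparm.h>

  int main(void)
  {
          struct papr_sysparm_io_block sp = { .parameter = 20 };
          int fd = open("/dev/papr-sysparm", O_RDWR);

          if (fd < 0 || ioctl(fd, PAPR_SYSPARM_IOC_GET, &sp) < 0) {
                  perror("papr-sysparm");
                  return 1;
          }
          printf("%.*s\n", sp.length, sp.data);
          return 0;
  }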
2023-12-13powerpc/pseries/papr-sysparm: Validate buffer object lengthsNathan Lynch1-0/+47
The ability to get and set system parameters will be exposed to user space, so let's get a little more strict about malformed papr_sysparm_buf objects.

* Create accessors for the length field of struct papr_sysparm_buf. The length is always stored in MSB order and this is better than spreading the necessary conversions all over.
* Reject attempts to submit invalid buffers to RTAS.
* Warn if RTAS returns a buffer with an invalid length, clamping the returned length to a safe value that won't overrun the buffer.

These are meant as precautionary measures to mitigate both firmware and kernel bugs in this area, should they arise, but I am not aware of any. Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231212-papr-sys_rtas-vs-lockdown-v6-10-e9eafd0c8c6c@linux.ibm.com
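A sketch of the accessors, assuming a __be16 length field named len and a payload named val:

  static u16 papr_sysparm_buf_get_length(const struct papr_sysparm_buf *buf)
  {
          return be16_to_cpu(buf->len);
  }

  static void papr_sysparm_buf_set_length(struct papr_sysparm_buf *buf, u16 length)
  {
          /* Clamp a bogus length instead of letting it overrun val. */
          WARN_ONCE(length > sizeof(buf->val),
                    "bogus length %u, clamping to safe value", length);
          buf->len = cpu_to_be16(min_t(size_t, length, sizeof(buf->val)));
  }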
2023-12-13powerpc/pseries: Add papr-vpd character driver for VPD retrievalNathan Lynch5-0/+575
PowerVM LPARs may retrieve Vital Product Data (VPD) for system components using the ibm,get-vpd RTAS function. We can expose this to user space with a /dev/papr-vpd character device, where the programming model is:

  struct papr_location_code plc = { .str = "", }; /* obtain all VPD */
  int devfd = open("/dev/papr-vpd", O_RDONLY);
  int vpdfd = ioctl(devfd, PAPR_VPD_CREATE_HANDLE, &plc);
  size_t size = lseek(vpdfd, 0, SEEK_END);
  char *buf = malloc(size);
  pread(vpdfd, buf, size, 0);

When a file descriptor is obtained from ioctl(PAPR_VPD_CREATE_HANDLE), the file contains the result of a complete ibm,get-vpd sequence. The file contents are immutable from the POV of user space. To get a new view of the VPD, the client must create a new handle. This design choice insulates user space from most of the complexities that ibm,get-vpd brings:

* ibm,get-vpd must be called more than once to obtain complete results.
* Only one ibm,get-vpd call sequence should be in progress at a time; interleaved sequences will disrupt each other. Callers must have a protocol for serializing their use of the function.
* A call sequence in progress may receive a "VPD changed, try again" status, requiring the client to abandon the sequence and start over.

The memory required for the VPD buffers seems acceptable, around 20KB for all VPD on one of my systems. And the value of the /rtas/ibm,vpd-size DT property (the estimated maximum size of VPD) is consistently 300KB across various systems I've checked. I've implemented support for this new ABI in the rtas_get_vpd() function in librtas, which the vpdupdate command currently uses to populate its VPD database. I've verified that an unmodified vpdupdate binary generates an identical database when using a librtas.so that prefers the new ABI. Along with the papr-vpd.h header exposed to user space, this introduces a common papr-miscdev.h uapi header to share a base ioctl ID with similar drivers to come. Tested-by: Michal Suchánek <msuchanek@suse.de> Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231212-papr-sys_rtas-vs-lockdown-v6-9-e9eafd0c8c6c@linux.ibm.com
2023-12-13powerpc/rtas: Warn if per-function lock isn't heldNathan Lynch1-12/+9
If the function descriptor has a populated lock member, then callers are required to hold it across calls. Now that the firmware activation sequence is appropriately guarded, we can warn when the requirement isn't satisfied. __do_enter_rtas_trace() gets reorganized a bit as a result of performing the function descriptor lookup unconditionally now. Reviewed-by: "Aneesh Kumar K.V (IBM)" <aneesh.kumar@kernel.org> Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231212-papr-sys_rtas-vs-lockdown-v6-8-e9eafd0c8c6c@linux.ibm.com
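A sketch of the guard (the descriptor and field names are assumptions based on the series):

  /* Somewhere on the RTAS entry path, with the descriptor in hand: */
  static void assert_rtas_lock_held(const struct rtas_function *func)
  {
          /* Callers must hold the per-function lock, when one exists. */
          if (func->lock)
                  WARN_ON_ONCE(!mutex_is_locked(func->lock));
  }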
2023-12-13powerpc/rtas: Serialize firmware activation sequencesNathan Lynch1-0/+4
Use rtas_ibm_activate_firmware_lock to prevent interleaving call sequences of the ibm,activate-firmware RTAS function, which typically requires multiple calls to complete the update. While the spec does not specifically prohibit interleaved sequences, there's almost certainly no advantage to allowing them. Reviewed-by: "Aneesh Kumar K.V (IBM)" <aneesh.kumar@kernel.org> Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231212-papr-sys_rtas-vs-lockdown-v6-7-e9eafd0c8c6c@linux.ibm.com
2023-12-13powerpc/rtas: Facilitate high-level call sequencesNathan Lynch2-0/+86
On RTAS platforms there is a general restriction that the OS must not enter RTAS on more than one CPU at a time. This low-level serialization requirement is satisfied by holding a spin lock (rtas_lock) across most RTAS function invocations. However, some pseries RTAS functions require multiple successive calls to complete a logical operation. Beginning a new call sequence for such a function may disrupt any other sequences of that function already in progress. Safe and reliable use of these functions effectively requires higher-level serialization beyond what is already done at the level of RTAS entry and exit. Where a sequence-based RTAS function is invoked only through sys_rtas(), with no in-kernel users, there is no issue as far as the kernel is concerned. User space is responsible for appropriately serializing its call sequences. (Whether user space code actually takes measures to prevent sequence interleaving is another matter.) Examples of such functions currently include ibm,platform-dump and ibm,get-vpd. But where a sequence-based RTAS function has both user space and in-kernel users, there is a hazard. Even if the in-kernel call sites of such a function serialize their sequences correctly, a user of sys_rtas() can invoke the same function at any time, potentially disrupting a sequence in progress. So in order to prevent disruption of kernel-based RTAS call sequences, they must serialize not only with themselves but also with sys_rtas() users, somehow. Preferably without adding more function-specific hacks to sys_rtas(). This is a prerequisite for adding an in-kernel call sequence of ibm,get-vpd, which is in a change to follow. Note that it has never been feasible for the kernel to prevent sys_rtas()-based sequences from being disrupted, because control returns to user space on every call. sys_rtas()-based users of these functions have always been, and continue to be, responsible for coordinating their call sequences with other users, even those which may invoke the RTAS functions through less direct means than sys_rtas(). This is an unavoidable consequence of exposing sequence-based RTAS functions through sys_rtas().

* Add an optional mutex member to struct rtas_function.
* Statically define a mutex for each RTAS function with known call sequence serialization requirements, and assign its address to the .lock member of the corresponding function table entry, along with justifying commentary.
* In sys_rtas(), if the table entry for the RTAS function being called has a populated lock member, acquire it before taking rtas_lock and entering RTAS.
* Kernel-based RTAS call sequences are expected to access the appropriate mutex explicitly by name. For example, a user of the ibm,activate-firmware RTAS function would do:

  int token = rtas_function_token(RTAS_FN_IBM_ACTIVATE_FIRMWARE);
  int fwrc;

  mutex_lock(&rtas_ibm_activate_firmware_lock);

  do {
          fwrc = rtas_call(token, 0, 1, NULL);
  } while (rtas_busy_delay(fwrc));

  mutex_unlock(&rtas_ibm_activate_firmware_lock);

There should be no perceivable change introduced here except that concurrent callers of the same RTAS function via sys_rtas() may block on a mutex instead of spinning on rtas_lock. Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231212-papr-sys_rtas-vs-lockdown-v6-6-e9eafd0c8c6c@linux.ibm.com
2023-12-13powerpc/rtas: Move token validation from block_rtas_call() to sys_rtas()Nathan Lynch1-16/+16
The rtas system call handler sys_rtas() delegates certain input validation steps to a helper function: block_rtas_call(). One of these steps ensures that the user-supplied token value maps to a known RTAS function. This is done by performing a "reverse" token-to-function lookup via rtas_token_to_function_untrusted() to obtain an rtas_function object. In changes to come, sys_rtas() itself will need the function descriptor for the token. To prepare:

* Move the lookup and validation up into sys_rtas() and pass the resulting rtas_function pointer to block_rtas_call(), which is otherwise unconcerned with the token value.
* Change block_rtas_call() to report the RTAS function name instead of the token value on validation failures, since it can now rely on having a valid function descriptor.

One behavior change is that sys_rtas() now silently errors out when passed a bad token, before calling block_rtas_call(). So we will no longer log "RTAS call blocked - exploit attempt?" on invalid tokens. This is consistent with how sys_rtas() currently handles other "metadata" (nargs and nret), while block_rtas_call() is primarily concerned with validating the arguments to be passed to specific RTAS functions. Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231212-papr-sys_rtas-vs-lockdown-v6-5-e9eafd0c8c6c@linux.ibm.com
2023-12-13powerpc/rtas: Add function return status constantsNathan Lynch1-6/+19
Not all of the generic RTAS function statuses specified in PAPR have symbolic constants and descriptions in rtas.h. Fix this, providing a little more background, slightly updating the existing wording, and improving the formatting. Reviewed-by: "Aneesh Kumar K.V (IBM)" <aneesh.kumar@kernel.org> Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231212-papr-sys_rtas-vs-lockdown-v6-4-e9eafd0c8c6c@linux.ibm.com
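A representative subset of the generic statuses (values per PAPR; the exact set and wording added by the patch may differ):

  #define RTAS_SUCCESS             0  /* Success. */
  #define RTAS_HARDWARE_ERROR     -1  /* Hardware or other unspecified error. */
  #define RTAS_BUSY               -2  /* Retry immediately. */
  #define RTAS_INVALID_PARAMETER  -3  /* Invalid indicator/domain/sensor etc. */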
2023-12-13powerpc/rtas: Fall back to linear search on failed token->function lookupNathan Lynch1-4/+14
Enabling any of the powerpc:rtas_* tracepoints at boot is likely to result in an oops on RTAS platforms. For example, booting a QEMU pseries model with 'trace_event=powerpc:rtas_input' in the command line leads to:

  BUG: Kernel NULL pointer dereference on read at 0x00000008
  Oops: Kernel access of bad area, sig: 7 [#1]
  NIP [c00000000004231c] do_enter_rtas+0x1bc/0x460
  LR [c00000000004231c] do_enter_rtas+0x1bc/0x460
  Call Trace:
    do_enter_rtas+0x1bc/0x460 (unreliable)
    rtas_call+0x22c/0x4a0
    rtas_get_boot_time+0x80/0x14c
    read_persistent_clock64+0x124/0x150
    read_persistent_wall_and_boot_offset+0x28/0x58
    timekeeping_init+0x70/0x348
    start_kernel+0xa0c/0xc1c
    start_here_common+0x1c/0x20

(This is preceded by a warning for the failed lookup in rtas_token_to_function().) This happens when __do_enter_rtas_trace() attempts a token to function descriptor lookup before the xarray containing the mappings has been set up. Fall back to linear scan of the table if rtas_token_to_function_xarray is empty. Fixes: 24098f580e2b ("powerpc/rtas: add tracepoints around RTAS entry") Reviewed-by: "Aneesh Kumar K.V (IBM)" <aneesh.kumar@kernel.org> Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231212-papr-sys_rtas-vs-lockdown-v6-3-e9eafd0c8c6c@linux.ibm.com
2023-12-13powerpc/rtas: Add for_each_rtas_function() iteratorNathan Lynch1-2/+7
Add a convenience macro for iterating over every element of the internal function table and convert the one site that can use it. An additional user of the macro is anticipated in changes to follow. Reviewed-by: "Aneesh Kumar K.V (IBM)" <aneesh.kumar@kernel.org> Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231212-papr-sys_rtas-vs-lockdown-v6-2-e9eafd0c8c6c@linux.ibm.com
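A plausible shape for the macro and its use (the real definition may differ in detail):

  #define for_each_rtas_function(funcp)                                       \
          for (funcp = &rtas_function_table[0];                               \
               funcp < &rtas_function_table[ARRAY_SIZE(rtas_function_table)]; \
               funcp++)

  /* Usage: walk the whole internal table. */
  static void rtas_dump_function_names(void)
  {
          struct rtas_function *func;

          for_each_rtas_function(func)
                  pr_debug("%s\n", func->name);
  }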
2023-12-13powerpc/rtas: Avoid warning on invalid token argument to sys_rtas()Nathan Lynch1-2/+17
rtas_token_to_function() WARNs when passed an invalid token; it's meant to catch bugs in kernel-based users of RTAS functions. However, user space controls the token value passed to rtas_token_to_function() by block_rtas_call(), so user space with sufficient privilege to use sys_rtas() can trigger the warnings at will:

  unexpected failed lookup for token 2048
  WARNING: CPU: 20 PID: 2247 at arch/powerpc/kernel/rtas.c:556 rtas_token_to_function+0xfc/0x110
  ...
  NIP rtas_token_to_function+0xfc/0x110
  LR rtas_token_to_function+0xf8/0x110
  Call Trace:
    rtas_token_to_function+0xf8/0x110 (unreliable)
    sys_rtas+0x188/0x880
    system_call_exception+0x268/0x530
    system_call_common+0x160/0x2c4

It's desirable to continue warning on bogus tokens in rtas_token_to_function(). Currently it is used to look up RTAS function descriptors when tracing, where we know there has to have been a successful descriptor lookup by different means already, and it would be a serious inconsistency for the reverse lookup to fail. So instead of weakening rtas_token_to_function()'s contract by removing the warnings, introduce rtas_token_to_function_untrusted(), which has no opinion on failed lookups. Convert block_rtas_call() and rtas_token_to_function() to use it. Fixes: 8252b88294d2 ("powerpc/rtas: improve function information lookups") Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231212-papr-sys_rtas-vs-lockdown-v6-1-e9eafd0c8c6c@linux.ibm.com
2023-12-13powerpc/hv-gpci: Add return value check in affinity_domain_via_partition_show functionKajol Jain1-0/+3
To access hv-gpci kernel interface files' data, the "Enable Performance Information Collection" option has to be set in the HMC. In case that option is not set and a user tries to read the interface files, reading should fail with an "Operation not permitted" error. Result of accessing the added interface files with the performance collection option disabled:

  [command]# cat processor_bus_topology
  cat: processor_bus_topology: Operation not permitted

  [command]# cat processor_config
  cat: processor_config: Operation not permitted

  [command]# cat affinity_domain_via_domain
  cat: affinity_domain_via_domain: Operation not permitted

  [command]# cat affinity_domain_via_virtual_processor
  cat: affinity_domain_via_virtual_processor: Operation not permitted

  [command]# cat affinity_domain_via_partition

Based on the above result, there is no error message when reading the affinity_domain_via_partition file, because of a missing check for a failed hcall. Fix this issue by adding a check at the start of the affinity_domain_via_partition_show function, to return an error in case the hcall fails with an error type other than H_PARAMETER. Fixes: a15e0d6a6929 ("powerpc/hv_gpci: Add sysfs file inside hv_gpci device to show affinity domain via partition information") Reported-by: Disha Goel <disgoel@linux.vnet.ibm.com> Signed-off-by: Kajol Jain <kjain@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231116122033.160964-1-kjain@linux.ibm.com
2023-12-13selftests/powerpc: Check all FPRs in fpu_syscall testMichael Ellerman2-6/+9
There is a selftest that checks if FPRs are corrupted across a fork, aka clone. It was added as part of the series that optimised the clone path to save the parent's FP state without "giving up" (turning off FP). See commit 8792468da5e1 ("powerpc: Add the ability to save FPU without giving it up"). The test encodes the assumption that FPRs 0-13 are volatile across the syscall, by only checking the volatile FPRs are not changed by the fork. There was also a comment in the fpu_preempt test alluding to that:

  The check_fpu function in asm only checks the non volatile registers
  as it is reused from the syscall test

It is true that the function call ABI treats f0-f13 as volatile, however the syscall ABI has since been documented as *not* treating those registers as volatile. See commit 7b8845a2a2ec ("powerpc/64: Document the syscall ABI"). So change the test to check all FPRs are not corrupted by the syscall. Note that this currently fails, because save_fpu() etc. do not restore f0/vsr0. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231128132748.1990179-5-mpe@ellerman.id.au
2023-12-13selftests/powerpc: Run fpu_preempt test for 60 secondsMichael Ellerman1-1/+1
The FPU preempt test only runs for 20 seconds, which is not particularly long. Run it for 60 seconds to increase the chance of detecting corruption. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231128132748.1990179-4-mpe@ellerman.id.au
2023-12-13selftests/powerpc: Generate better bit patterns for FPU testsMichael Ellerman2-4/+27
The fpu_preempt test randomly initialises an array of doubles to try and detect FPU register corruption. However the values it generates do not occupy the full range of values possible in the 64-bit double, meaning some partial register corruption could go undetected. Without getting too carried away, add some better initialisation to generate values that occupy more bits. Sample values before:

  f0 902677510 (raw 0x41cae6e203000000)
  f1 325217596 (raw 0x41b3626d3c000000)
  f2 1856578300 (raw 0x41dbaa48bf000000)
  f3 1247189984 (raw 0x41d295a6f8000000)

And after:

  f0 1.1078153481413311e-09 (raw 0x3e13083932805cc2)
  f1 1.0576648474801922e+17 (raw 0x43777c20eb88c261)
  f2 -6.6245033413594075e-10 (raw 0xbe06c2f989facae9)
  f3 3.0085988827156291e+18 (raw 0x43c4e0585f2df37b)

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231128132748.1990179-3-mpe@ellerman.id.au
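One way to realise this, as a sketch (the selftest's actual helper may differ): build a random 64-bit pattern and reinterpret it as a double, so sign, exponent and mantissa bits are all exercised.

  #include <stdint.h>
  #include <stdlib.h>

  static double random_double(void)
  {
          union { uint64_t bits; double d; } u;

          /* rand() yields ~31 bits; combine shifted calls to cover all 64. */
          u.bits = ((uint64_t)rand() << 40) ^ ((uint64_t)rand() << 20) ^
                   (uint64_t)rand();
          return u.d;
  }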
2023-12-13selftests/powerpc: Check all FPRs in fpu_preemptMichael Ellerman2-13/+43
There's a selftest that checks FPRs aren't corrupted by preemption, or just process scheduling. However it only checks the non-volatile FPRs, meaning corruption of the volatile FPRs could go undetected. The check_fpu function it calls is used by several other tests, so for now add a new routine to check all the FPRs. Increase the size of the array of FPRs to 32, and initialise them all with random values. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231128132748.1990179-2-mpe@ellerman.id.au
2023-12-13selftests/powerpc: Fix error handling in FPU/VMX preemption testsMichael Ellerman2-8/+11
The FPU & VMX preemption tests do not check for errors returned by the low-level asm routines, preempt_fpu() / preempt_vsx() respectively. That means any register corruption detected by the asm routines does not result in a test failure. Fix it by returning the return value of the asm routines from the pthread child routines. Fixes: e5ab8be68e44 ("selftests/powerpc: Test preservation of FPU and VMX regs across preemption") Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231128132748.1990179-1-mpe@ellerman.id.au
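A sketch of the fix, with an assumed argument list for the asm routine: the pthread child returns the routine's result so the parent can fail the test on corruption.

  #include <stdint.h>

  extern double darray[];
  extern int threads_starting, running;

  extern int preempt_fpu(double *darray, int *threads_starting, int *running);

  static void *preempt_fpu_thread(void *p)
  {
          uintptr_t rc = preempt_fpu(darray, &threads_starting, &running);

          return (void *)rc;      /* collected via pthread_join() */
  }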
2023-12-07powerpc/Makefile: Auto detect cross compilerMichael Ellerman1-0/+11
If no cross compiler is specified, try to auto detect one. Look for various combinations, matching:

  powerpc(64(le)?)?(-unknown)?-linux(-gnu)?-

There are more possibilities, but the above is known to find a compiler on Fedora and Ubuntu (which use linux-gnu-), and also detects the kernel.org cross compilers (which use linux-). This allows cross compiling with simply:

  # Ubuntu
  $ sudo apt install gcc-powerpc-linux-gnu
  # Fedora
  $ sudo dnf install gcc-powerpc64-linux-gnu

  $ make ARCH=powerpc defconfig
  $ make ARCH=powerpc -j 4

Inspired by arch/parisc/Makefile. Acked-by: Segher Boessenkool <segher@kernel.crashing.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231206115548.1466874-4-mpe@ellerman.id.au
2023-12-07powerpc/Makefile: Default to ppc64le_defconfig when cross buildingMichael Ellerman1-2/+2
If the kernel is being cross compiled, there is no information from uname on which defconfig is most appropriate, so the Makefile defaults to ppc64. However these days almost all distros that support powerpc are little endian, so it's more likely that defaulting to ppc64le_defconfig will produce something useful for a user. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231206115548.1466874-3-mpe@ellerman.id.au
2023-12-07powerpc/vdso: No need to undef powerpc for 64-bit buildMichael Ellerman1-1/+1
The vdso Makefile adds -U$(ARCH) to CPPFLAGS for the vdso64.lds linker script. ARCH is always powerpc, so it becomes -Upowerpc, which means undefine the "powerpc" symbol. But the 64-bit compiler doesn't define powerpc in the first place, compare:

  $ gcc-5.1.0-nolibc/powerpc64-linux/bin/powerpc64-linux-gcc -m32 -E -dM - </dev/null | grep -w powerpc
  #define powerpc 1

  $ gcc-5.1.0-nolibc/powerpc64-linux/bin/powerpc64-linux-gcc -m64 -E -dM - </dev/null | grep -w powerpc
  $

So there's no need to undefine it for the 64-bit linker script. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231206115548.1466874-2-mpe@ellerman.id.au
2023-12-07powerpc/Makefile: Don't use $(ARCH) unnecessarilyMichael Ellerman1-5/+5
There's no need to use $(ARCH) for references to the arch directory in the source tree, it is always arch/powerpc. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231206115548.1466874-1-mpe@ellerman.id.au
2023-12-07MAINTAINERS: powerpc: Transfer PPC83XX to ChristopheMichael Ellerman1-3/+3
Christophe volunteered[1] to maintain PPC83XX. 1: https://lore.kernel.org/all/7b1bf4dc-d09d-35b8-f4df-16bf00429b6d@csgroup.eu/ Acked-by: Christophe Leroy <christophe.leroy@csgroup.eu> Acked-by: Crystal Wood <oss@buserror.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231205051239.737384-1-mpe@ellerman.id.au
2023-12-07powerpc/book3s64: Avoid __pte_protnone() check in __pte_flags_need_flush()Aneesh Kumar K.V (IBM)1-7/+2
This reverts commit 1abce0580b89 ("powerpc/64s: Fix __pte_needs_flush() false positive warning") The previous patch dropped the usage of _PAGE_PRIVILEGED with PAGE_NONE. Hence this check can be dropped. Signed-off-by: "Aneesh Kumar K.V (IBM)" <aneesh.kumar@kernel.org> Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231204093638.71503-2-aneesh.kumar@kernel.org
2023-12-07powerpc/book3s/hash: Drop _PAGE_PRIVILEGED from PAGE_NONEAneesh Kumar K.V (IBM)2-8/+9
There used to be a dependency on _PAGE_PRIVILEGED with pte_savedwrite, but that got dropped by commit 6a56ccbcf6c6 ("mm/autonuma: use can_change_(pte|pmd)_writable() to replace savedwrite"). With the change in this patch, a NUMA fault pte (pte_protnone()) gets mapped as a regular user pte with RWX cleared (no access), whereas earlier it was mapped with _PAGE_PRIVILEGED. The hash fault handling code gains some WARN_ONs in this patch, because those functions are not expected to be called with _PAGE_READ cleared. Commit 18061c17c8ec ("powerpc/mm: Update PROTFAULT handling in the page fault path") explains the details. Signed-off-by: "Aneesh Kumar K.V (IBM)" <aneesh.kumar@kernel.org> Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231204093638.71503-1-aneesh.kumar@kernel.org
2023-12-01powerpc/pseries/memhp: Log more error conditions in add pathNathan Lynch1-1/+6
When an add operation for multiple LMBs fails, there is currently little indication from the kernel of what went wrong. Be a little more verbose about error conditions in the add paths. Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231114-pseries-memhp-fixes-v1-3-fb8f2bb7c557@linux.ibm.com
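A sketch of the kind of reporting added (the message text and surrounding context are assumptions, not the patch's exact strings):

  static int dlpar_add_lmb_verbose(struct drmem_lmb *lmb)
  {
          int rc = dlpar_acquire_drc(lmb->drc_index);

          if (rc) {
                  pr_err("failed to acquire DRC, rc %d, drc index %x\n",
                         rc, lmb->drc_index);
                  return rc;
          }
          /* ... add the memory block, logging failures similarly ... */
          return 0;
  }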