Age | Commit message | Author | Files | Lines
2025-04-16 | sched/debug: Print the local group's asym_prefer_cpu | K Prateek Nayak | 1 | -0/+4
Add a file to read local group's "asym_prefer_cpu" from debugfs. This information was useful when debugging issues where "asym_prefer_cpu" was incorrectly set to a CPU with a lower asym priority. Signed-off-by: K Prateek Nayak <kprateek.nayak@amd.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20250409053446.23367-5-kprateek.nayak@amd.com
2025-04-16 | cpufreq/amd-pstate: Update asym_prefer_cpu when core rankings change | K Prateek Nayak | 1 | -1/+3
A subset of AMD systems supporting Preferred Core rankings can have their rankings changed dynamically at runtime. Update the "sg->asym_prefer_cpu" across the local hierarchy of the CPU when the preferred core ranking changes. Signed-off-by: K Prateek Nayak <kprateek.nayak@amd.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Mario Limonciello <mario.limonciello@amd.com> Link: https://lore.kernel.org/r/20250409053446.23367-4-kprateek.nayak@amd.com
2025-04-16 | sched/topology: Introduce sched_update_asym_prefer_cpu() | K Prateek Nayak | 2 | -0/+64
A subset of AMD Processors supporting Preferred Core Rankings also features the ability to dynamically switch these rankings at runtime to bias load balancing towards or away from the LLC domain with larger cache. To support dynamically updating "sg->asym_prefer_cpu" without needing to rebuild the sched domain, introduce sched_update_asym_prefer_cpu() which recomputes the "asym_prefer_cpu" when the core-ranking of a CPU changes. sched_update_asym_prefer_cpu() swaps the "sg->asym_prefer_cpu" with the CPU whose ranking has changed if the new ranking is greater than that of the "asym_prefer_cpu". If the CPU whose ranking has changed is the current "asym_prefer_cpu", it scans the CPUs of the sched group to find the new "asym_prefer_cpu" and sets it accordingly. get_group() for non-overlapping sched domains returns the sched group for the first CPU in the sched_group_span(), which ensures all CPUs in the group see the updated value of "asym_prefer_cpu". Overlapping groups are allocated differently and will require moving the "asym_prefer_cpu" to "sg->sgc", but since the current implementations do not set "SD_ASYM_PACKING" at NUMA domains, skip the additional indirection and place a SCHED_WARN_ON() to alert any future users. Signed-off-by: K Prateek Nayak <kprateek.nayak@amd.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20250409053446.23367-3-kprateek.nayak@amd.com
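A minimal C sketch of the update rule described above, assuming the existing arch_asym_cpu_priority() ranking helper; the function name and body are illustrative, not the exact patch:

    static void update_group_asym_prefer_cpu(struct sched_group *sg, int cpu)
    {
            int prev = READ_ONCE(sg->asym_prefer_cpu);
            int best, c;

            /* The re-ranked CPU now outranks the cached preference: swap. */
            if (arch_asym_cpu_priority(cpu) > arch_asym_cpu_priority(prev)) {
                    WRITE_ONCE(sg->asym_prefer_cpu, cpu);
                    return;
            }

            /* Only a demotion of the current preferred CPU forces a rescan. */
            if (cpu != prev)
                    return;

            best = cpu;
            for_each_cpu(c, sched_group_span(sg)) {
                    if (arch_asym_cpu_priority(c) > arch_asym_cpu_priority(best))
                            best = c;
            }
            WRITE_ONCE(sg->asym_prefer_cpu, best);
    }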
2025-04-16 | sched/fair: Use READ_ONCE() to read sg->asym_prefer_cpu | K Prateek Nayak | 1 | -2/+3
Subsequent commits add the support to dynamically update the sched_group struct's "asym_prefer_cpu" member from a remote CPU. Use READ_ONCE() when reading the "sg->asym_prefer_cpu" to ensure load balancer always reads the latest value. Signed-off-by: K Prateek Nayak <kprateek.nayak@amd.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20250409053446.23367-2-kprateek.nayak@amd.com
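The pattern, sketched below with an illustrative accessor name, pairs READ_ONCE() on the load-balancer side with WRITE_ONCE() on the remote-update side:

    /* Reader side (load balancer); paired with WRITE_ONCE() by the remote updater. */
    static inline int group_asym_prefer_cpu(struct sched_group *sg)
    {
            return READ_ONCE(sg->asym_prefer_cpu);
    }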
2025-04-08 | sched/isolation: Make use of more than one housekeeping cpu | Phil Auld | 2 | -2/+2
The existing code uses housekeeping_any_cpu() to select a cpu for a given housekeeping task. However, this often ends up calling cpumask_any_and(), which is defined as cpumask_first_and() and therefore always uses the first cpu among those available. The same applies when multiple NUMA nodes are involved. In that case the first cpu in the local node is chosen, which does provide a bit of spreading, but with multiple HK cpus per node the same issues arise. We have numerous cases where a single HK cpu just cannot keep up and the remote_tick warning fires. It can also lead to other things (orchestration software, HA keepalives, etc.) on the HK cpus getting starved, which leads to further issues. In these cases we recommend increasing the number of HK cpus. But... that only helps the userspace tasks somewhat. It does not help the actual housekeeping part. Spread the HK work out by having housekeeping_any_cpu() and sched_numa_find_closest() use cpumask_any_and_distribute() instead of cpumask_any_and(). Signed-off-by: Phil Auld <pauld@redhat.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Waiman Long <longman@redhat.com> Reviewed-by: Vishal Chourasia <vishalc@linux.ibm.com> Acked-by: Frederic Weisbecker <frederic@kernel.org> Link: https://lore.kernel.org/r/20250218184618.1331715-1-pauld@redhat.com
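A minimal sketch of the selection change, assuming a plain cpumask argument; the real housekeeping_any_cpu() also handles nohz_full and the per-node fallback:

    static int hk_pick_cpu(const struct cpumask *hk_mask)
    {
            /*
             * cpumask_any_and() is cpumask_first_and() in disguise and always
             * lands on the lowest-numbered housekeeping CPU; the _distribute
             * variant rotates through the eligible CPUs on successive calls.
             */
            return cpumask_any_and_distribute(hk_mask, cpu_online_mask);
    }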
2025-04-08 | sched/rt: Fix race in push_rt_task | Harshit Agarwal | 1 | -28/+26
Overview ======== When a CPU chooses to call push_rt_task and picks a task to push to another CPU's runqueue then it will call find_lock_lowest_rq method which would take a double lock on both CPUs' runqueues. If one of the locks aren't readily available, it may lead to dropping the current runqueue lock and reacquiring both the locks at once. During this window it is possible that the task is already migrated and is running on some other CPU. These cases are already handled. However, if the task is migrated and has already been executed and another CPU is now trying to wake it up (ttwu) such that it is queued again on the runqeue (on_rq is 1) and also if the task was run by the same CPU, then the current checks will pass even though the task was migrated out and is no longer in the pushable tasks list. Crashes ======= This bug resulted in quite a few flavors of crashes triggering kernel panics with various crash signatures such as assert failures, page faults, null pointer dereferences, and queue corruption errors all coming from scheduler itself. Some of the crashes: -> kernel BUG at kernel/sched/rt.c:1616! BUG_ON(idx >= MAX_RT_PRIO) Call Trace: ? __die_body+0x1a/0x60 ? die+0x2a/0x50 ? do_trap+0x85/0x100 ? pick_next_task_rt+0x6e/0x1d0 ? do_error_trap+0x64/0xa0 ? pick_next_task_rt+0x6e/0x1d0 ? exc_invalid_op+0x4c/0x60 ? pick_next_task_rt+0x6e/0x1d0 ? asm_exc_invalid_op+0x12/0x20 ? pick_next_task_rt+0x6e/0x1d0 __schedule+0x5cb/0x790 ? update_ts_time_stats+0x55/0x70 schedule_idle+0x1e/0x40 do_idle+0x15e/0x200 cpu_startup_entry+0x19/0x20 start_secondary+0x117/0x160 secondary_startup_64_no_verify+0xb0/0xbb -> BUG: kernel NULL pointer dereference, address: 00000000000000c0 Call Trace: ? __die_body+0x1a/0x60 ? no_context+0x183/0x350 ? __warn+0x8a/0xe0 ? exc_page_fault+0x3d6/0x520 ? asm_exc_page_fault+0x1e/0x30 ? pick_next_task_rt+0xb5/0x1d0 ? pick_next_task_rt+0x8c/0x1d0 __schedule+0x583/0x7e0 ? update_ts_time_stats+0x55/0x70 schedule_idle+0x1e/0x40 do_idle+0x15e/0x200 cpu_startup_entry+0x19/0x20 start_secondary+0x117/0x160 secondary_startup_64_no_verify+0xb0/0xbb -> BUG: unable to handle page fault for address: ffff9464daea5900 kernel BUG at kernel/sched/rt.c:1861! BUG_ON(rq->cpu != task_cpu(p)) -> kernel BUG at kernel/sched/rt.c:1055! BUG_ON(!rq->nr_running) Call Trace: ? __die_body+0x1a/0x60 ? die+0x2a/0x50 ? do_trap+0x85/0x100 ? dequeue_top_rt_rq+0xa2/0xb0 ? do_error_trap+0x64/0xa0 ? dequeue_top_rt_rq+0xa2/0xb0 ? exc_invalid_op+0x4c/0x60 ? dequeue_top_rt_rq+0xa2/0xb0 ? asm_exc_invalid_op+0x12/0x20 ? dequeue_top_rt_rq+0xa2/0xb0 dequeue_rt_entity+0x1f/0x70 dequeue_task_rt+0x2d/0x70 __schedule+0x1a8/0x7e0 ? blk_finish_plug+0x25/0x40 schedule+0x3c/0xb0 futex_wait_queue_me+0xb6/0x120 futex_wait+0xd9/0x240 do_futex+0x344/0xa90 ? get_mm_exe_file+0x30/0x60 ? audit_exe_compare+0x58/0x70 ? audit_filter_rules.constprop.26+0x65e/0x1220 __x64_sys_futex+0x148/0x1f0 do_syscall_64+0x30/0x80 entry_SYSCALL_64_after_hwframe+0x62/0xc7 -> BUG: unable to handle page fault for address: ffff8cf3608bc2c0 Call Trace: ? __die_body+0x1a/0x60 ? no_context+0x183/0x350 ? spurious_kernel_fault+0x171/0x1c0 ? exc_page_fault+0x3b6/0x520 ? plist_check_list+0x15/0x40 ? plist_check_list+0x2e/0x40 ? asm_exc_page_fault+0x1e/0x30 ? _cond_resched+0x15/0x30 ? futex_wait_queue_me+0xc8/0x120 ? futex_wait+0xd9/0x240 ? try_to_wake_up+0x1b8/0x490 ? futex_wake+0x78/0x160 ? do_futex+0xcd/0xa90 ? plist_check_list+0x15/0x40 ? plist_check_list+0x2e/0x40 ? plist_del+0x6a/0xd0 ? plist_check_list+0x15/0x40 ? plist_check_list+0x2e/0x40 ? 
dequeue_pushable_task+0x20/0x70 ? __schedule+0x382/0x7e0 ? asm_sysvec_reschedule_ipi+0xa/0x20 ? schedule+0x3c/0xb0 ? exit_to_user_mode_prepare+0x9e/0x150 ? irqentry_exit_to_user_mode+0x5/0x30 ? asm_sysvec_reschedule_ipi+0x12/0x20 Above are some of the common examples of the crashes that were observed due to this issue. Details ======= Let's look at the following scenario to understand this race. 1) CPU A enters push_rt_task a) CPU A has chosen next_task = task p. b) CPU A calls find_lock_lowest_rq(Task p, CPU Z’s rq). c) CPU A identifies CPU X as a destination CPU (X < Z). d) CPU A enters double_lock_balance(CPU Z’s rq, CPU X’s rq). e) Since X is lower than Z, CPU A unlocks CPU Z’s rq. Someone else has locked CPU X’s rq, and thus, CPU A must wait. 2) At CPU Z a) Previous task has completed execution and thus, CPU Z enters schedule, locks its own rq after CPU A releases it. b) CPU Z dequeues previous task and begins executing task p. c) CPU Z unlocks its rq. d) Task p yields the CPU (ex. by doing IO or waiting to acquire a lock) which triggers the schedule function on CPU Z. e) CPU Z enters schedule again, locks its own rq, and dequeues task p. f) As part of dequeue, it sets p.on_rq = 0 and unlocks its rq. 3) At CPU B a) CPU B enters try_to_wake_up with input task p. b) Since CPU Z dequeued task p, p.on_rq = 0, and CPU B updates B.state = WAKING. c) CPU B via select_task_rq determines CPU Y as the target CPU. 4) The race a) CPU A acquires CPU X’s lock and relocks CPU Z. b) CPU A reads task p.cpu = Z and incorrectly concludes task p is still on CPU Z. c) CPU A failed to notice task p had been dequeued from CPU Z while CPU A was waiting for locks in double_lock_balance. If CPU A knew that task p had been dequeued, it would return NULL forcing push_rt_task to give up the task p's migration. d) CPU B updates task p.cpu = Y and calls ttwu_queue. e) CPU B locks Ys rq. CPU B enqueues task p onto Y and sets task p.on_rq = 1. f) CPU B unlocks CPU Y, triggering memory synchronization. g) CPU A reads task p.on_rq = 1, cementing its assumption that task p has not migrated. h) CPU A decides to migrate p to CPU X. This leads to A dequeuing p from Y's queue and various crashes down the line. Solution ======== The solution here is fairly simple. After obtaining the lock (at 4a), the check is enhanced to make sure that the task is still at the head of the pushable tasks list. If not, then it is anyway not suitable for being pushed out. Testing ======= The fix is tested on a cluster of 3 nodes, where the panics due to this are hit every couple of days. A fix similar to this was deployed on such cluster and was stable for more than 30 days. Co-developed-by: Jon Kohler <jon@nutanix.com> Signed-off-by: Jon Kohler <jon@nutanix.com> Co-developed-by: Gauri Patwardhan <gauri.patwardhan@nutanix.com> Signed-off-by: Gauri Patwardhan <gauri.patwardhan@nutanix.com> Co-developed-by: Rahul Chunduru <rahul.chunduru@nutanix.com> Signed-off-by: Rahul Chunduru <rahul.chunduru@nutanix.com> Signed-off-by: Harshit Agarwal <harshit@nutanix.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: "Steven Rostedt (Google)" <rostedt@goodmis.org> Reviewed-by: Phil Auld <pauld@redhat.com> Tested-by: Will Ton <william.ton@nutanix.com> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20250225180553.167995-1-harshit@nutanix.com
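A hedged sketch of the added re-validation, reusing the existing pick_next_pushable_task() helper from rt.c; the exact placement inside find_lock_lowest_rq()/push_rt_task() is abridged here:

    static bool task_still_first_pushable(struct rq *rq, struct task_struct *p)
    {
            /*
             * double_lock_balance() may have dropped rq->lock.  "p" is only
             * safe to push if it is still the head of rq's pushable list;
             * a concurrent dequeue plus remote wakeup removes it from there.
             */
            return pick_next_pushable_task(rq) == p;
    }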
2025-04-08 | sched: Add annotations to RT_GROUP_SCHED fields | Michal Koutný | 1 | -4/+4
Update comments to ease RT throttling understanding. Signed-off-by: Michal Koutný <mkoutny@suse.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20250310170442.504716-10-mkoutny@suse.com
2025-04-08 | sched: Add RT_GROUP WARN checks for non-root task_groups | Michal Koutný | 1 | -2/+12
With CONFIG_RT_GROUP_SCHED but runtime disabling of RT_GROUPs we expect the existence of the root task_group only and all rt_sched_entity'ies should be queued on root's rt_rq. If we get a non-root RT_GROUP something went wrong. Signed-off-by: Michal Koutný <mkoutny@suse.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20250310170442.504716-9-mkoutny@suse.com
2025-04-08 | sched: Do not construct nor expose RT_GROUP_SCHED structures if disabled | Michal Koutný | 2 | -12/+33
Thanks to kernel cmdline being available early, before any cgroup hierarchy exists, we can achieve the RT_GROUP_SCHED boottime disabling goal by simply skipping any creation (and destruction) of RT_GROUP data and its exposure via RT attributes. We can do this thanks to previously placed runtime guards that would redirect all operations to root_task_group's data when RT_GROUP_SCHED disabled. Signed-off-by: Michal Koutný <mkoutny@suse.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20250310170442.504716-8-mkoutny@suse.com
2025-04-08 | sched: Bypass bandwidth checks with runtime disabled RT_GROUP_SCHED | Michal Koutný | 3 | -3/+8
When RT_GROUPs are compiled but not exposed, their bandwidth cannot be configured (and it is not initialized for non-root task_groups either). Therefore bypass any checks of task vs task_group bandwidth. This will achieve behavior very similar to setups that have !CONFIG_RT_GROUP_SCHED and attach the cpu controller to a cgroup v2 hierarchy. (On a related note, this may allow having RT tasks with CONFIG_RT_GROUP_SCHED and a cgroup v2 hierarchy.) Signed-off-by: Michal Koutný <mkoutny@suse.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20250310170442.504716-7-mkoutny@suse.com
2025-04-08 | sched: Skip non-root task_groups with disabled RT_GROUP_SCHED | Michal Koutný | 3 | -4/+14
First, we want to prevent placement of RT tasks on non-root rt_rqs which we achieve in the task migration code that'd fall back to root_task_group's rt_rq. Second, we want to work with only root_task_group's rt_rq when iterating all "real" rt_rqs when RT_GROUP is disabled. To achieve this we keep root_task_group as the first one on the task_groups and break out quickly. Signed-off-by: Michal Koutný <mkoutny@suse.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20250310170442.504716-6-mkoutny@suse.com
2025-04-08 | sched: Add commandline option for RT_GROUP_SCHED toggling | Michal Koutný | 4 | -0/+58
Only a simple implementation with a static key wrapper; it will be wired up later. Signed-off-by: Michal Koutný <mkoutny@suse.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20250310170442.504716-5-mkoutny@suse.com
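A minimal sketch of what such a static-key wrapper plus boot parameter could look like; the key, helper, and parameter names here are assumptions, not the actual patch:

    DEFINE_STATIC_KEY_TRUE(rt_group_sched_key);

    static inline bool rt_group_sched_enabled(void)
    {
            return static_branch_likely(&rt_group_sched_key);
    }

    static int __init setup_rt_group_sched(char *str)
    {
            bool enabled;

            if (kstrtobool(str, &enabled))
                    return -EINVAL;
            /*
             * Sketch: the real code may have to defer flipping the key
             * until jump labels are initialized.
             */
            if (!enabled)
                    static_branch_disable(&rt_group_sched_key);
            return 0;
    }
    early_param("rt_group_sched", setup_rt_group_sched);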
2025-04-08 | sched: Always initialize rt_rq's task_group | Michal Koutný | 2 | -5/+4
rt_rq->tg may be NULL which denotes the root task_group. Store the pointer to root_task_group directly so that callers may use rt_rq->tg homogenously. root_task_group exists always with CONFIG_CGROUPS_SCHED, CONFIG_RT_GROUP_SCHED depends on that. This changes root level rt_rq's default limit from infinity to the value of (originally) global RT throttling. Signed-off-by: Michal Koutný <mkoutny@suse.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20250310170442.504716-4-mkoutny@suse.com
2025-04-08 | sched: Remove unneeded macro wrap | Michal Koutný | 1 | -2/+0
rt_entity_is_task has split definitions based on CONFIG_RT_GROUP_SCHED, therefore we can use it always. No functional change intended. Signed-off-by: Michal Koutný <mkoutny@suse.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20250310170442.504716-3-mkoutny@suse.com
2025-04-08 | sched: Convert CONFIG_RT_GROUP_SCHED macros to code conditions | Michal Koutný | 2 | -7/+5
Convert the blocks guarded by macros to regular code so that the RT group code gets more compile validation. Reasoning is in Documentation/process/coding-style.rst 21) Conditional Compilation. With that, no functional change is expected. Signed-off-by: Michal Koutný <mkoutny@suse.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20250310170442.504716-2-mkoutny@suse.com
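For illustration, the shape of such a conversion; do_rt_group_work() is a hypothetical stand-in for the guarded code:

    /* Before: the block is invisible to the compiler when the option is off. */
    #ifdef CONFIG_RT_GROUP_SCHED
            do_rt_group_work(tg);
    #endif

    /* After: always parsed and type-checked, still compiled away when off. */
            if (IS_ENABLED(CONFIG_RT_GROUP_SCHED))
                    do_rt_group_work(tg);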
2025-04-08 | sched/fair: Allow decaying util_est when util_avg > CPU capacity | Pierre Gondois | 1 | -7/+0
commit 10a35e6812aa ("sched/pelt: Skip updating util_est when utilization is higher than CPU's capacity") prevents util_est from being updated if util_avg is higher than the underlying CPU capacity, to avoid overestimating the task when the CPU is capped (due to a thermal issue, for instance). In this scenario, the task will miss its deadlines and, for instance, start overlapping its wake-up events. The task will appear as always running when the CPU is just not powerful enough to allow having a good estimation of the task. commit b8c96361402a ("sched/fair/util_est: Implement faster ramp-up EWMA on utilization increases") sets ewma to util_avg when ewma > util_avg, allowing ewma to grow quickly instead of slowly converging to the new util_avg value when a task profile changes from small to big. However, the two conditions: - Check util_avg against max CPU capacity - Check whether util_est > util_avg are placed in an order such that it is possible to set util_est to a value higher than the CPU capacity if util_est > util_avg, but util_est is prevented from decaying as long as: CPU capacity < util_avg < util_est. Just remove the check as either: 1. There is idle time on the CPU. In that case the util_avg value of the task is actually correct. It is possible that the task missed a deadline and appears bigger, but this is also the case when the util_avg of the task is lower than the maximum CPU capacity. 2. There is no idle time. In that case, the util_avg value might as well be an underestimation of the size of the task. It is possible that undesired frequency spikes will appear when the task is later enqueued with an inflated util_est value, but the frequency spike might as well be deserved. The absence of idle time prevents drawing any conclusion. Signed-off-by: Pierre Gondois <pierre.gondois@arm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org> Link: https://lore.kernel.org/r/20250325150542.1077344-1-pierre.gondois@arm.com
2025-04-08 | sched/topology: Refinement to topology_span_sane speedup | Steve Wahl | 1 | -33/+19
Simplify the topology_span_sane code further, removing the need to allocate an array and gotos used to make sure the array gets freed. This version is in a separate commit because it could return a different sanity result than the previous code, but only in odd circumstances that are not expected to actually occur; for example, when a CPU is not listed in its own mask. Signed-off-by: Steve Wahl <steve.wahl@hpe.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Valentin Schneider <vschneid@redhat.com> Reviewed-by: Madadi Vineeth Reddy <vineethr@linux.ibm.com> Tested-by: K Prateek Nayak <kprateek.nayak@amd.com> Tested-by: Valentin Schneider <vschneid@redhat.com> Tested-by: Madadi Vineeth Reddy <vineethr@linux.ibm.com> Link: https://lore.kernel.org/r/20250304160844.75373-3-steve.wahl@hpe.com
2025-04-08 | sched/topology: improve topology_span_sane speed | Steve Wahl | 1 | -25/+58
Use a different approach to topology_span_sane(), that checks for the same constraint of no partial overlaps for any two CPU sets for non-NUMA topology levels, but does so in a way that is O(N) rather than O(N^2). Instead of comparing with all other masks to detect collisions, keep one mask that includes all CPUs seen so far and detect collisions with a single cpumask_intersects test. If the current mask has no collisions with previously seen masks, it should be a new mask, which can be uniquely identified by the lowest bit set in this mask. Keep a pointer to this mask for future reference (in an array indexed by the lowest bit set), and add the CPUs in this mask to the list of those seen. If the current mask does collide with previously seen masks, it should be exactly equal to a mask seen before, looked up in the same array indexed by the lowest bit set in the mask, a single comparison. Move the topology_span_sane() check out of the existing topology level loop, let it use its own loop so that the array allocation can be done only once, shared across levels. On a system with 1920 processors (16 sockets, 60 cores, 2 threads), the average time to take one processor offline is reduced from 2.18 seconds to 1.01 seconds. (Off-lining 959 of 1920 processors took 34m49.765s without this change, 16m10.038s with this change in place.) Signed-off-by: Steve Wahl <steve.wahl@hpe.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Valentin Schneider <vschneid@redhat.com> Reviewed-by: Madadi Vineeth Reddy <vineethr@linux.ibm.com> Tested-by: K Prateek Nayak <kprateek.nayak@amd.com> Tested-by: Valentin Schneider <vschneid@redhat.com> Tested-by: Madadi Vineeth Reddy <vineethr@linux.ibm.com> Link: https://lore.kernel.org/r/20250304160844.75373-2-steve.wahl@hpe.com
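A hedged sketch of the O(N) check described above; mask_of() stands in for the per-level tl->mask(cpu) callback, and allocation-failure handling is simplified:

    static bool span_sane(const struct cpumask *(*mask_of)(int cpu))
    {
            const struct cpumask **id_seen;
            cpumask_var_t covered;
            bool sane = true;
            int cpu;

            id_seen = kcalloc(nr_cpu_ids, sizeof(*id_seen), GFP_KERNEL);
            if (!id_seen)
                    return true;
            if (!zalloc_cpumask_var(&covered, GFP_KERNEL)) {
                    kfree(id_seen);
                    return true;
            }

            for_each_cpu(cpu, cpu_online_mask) {
                    const struct cpumask *m = mask_of(cpu);
                    /* Lowest set bit names the mask; assumes cpu is in its own mask. */
                    int id = cpumask_first(m);

                    if (!cpumask_intersects(m, covered)) {
                            /* A new mask: remember it and mark its CPUs as seen. */
                            id_seen[id] = m;
                            cpumask_or(covered, covered, m);
                    } else if (!id_seen[id] || !cpumask_equal(id_seen[id], m)) {
                            /* Partial overlap with an earlier mask: broken topology. */
                            sane = false;
                            break;
                    }
            }

            free_cpumask_var(covered);
            kfree(id_seen);
            return sane;
    }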
2025-04-08 | sched: Fix trace_sched_switch(.prev_state) | Peter Zijlstra | 1 | -2/+4
Gabriele noted that in case of signal_pending_state(), the tracepoint sees a stale task-state. Fixes: fa2c3254d7cf ("sched/tracing: Don't re-read p->state when emitting sched_switch event") Reported-by: Gabriele Monaco <gmonaco@redhat.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Valentin Schneider <vschneid@redhat.com>
2025-04-04 | sched/tracepoints: Move and extend the sched_process_exit() tracepoint | Andrii Nakryiko | 2 | -5/+31
It is useful to be able to access current->mm at task exit to, say, record a bunch of VMA information right before the task exits (e.g., for stack symbolization reasons when dealing with short-lived processes that exit in the middle of profiling session). Currently, trace_sched_process_exit() is triggered after exit_mm() which resets current->mm to NULL making this tracepoint unsuitable for inspecting and recording task's mm_struct-related data when tracing process lifetimes. There is a particularly suitable place, though, right after taskstats_exit() is called, but before we do exit_mm() and other exit_*() resource teardowns. taskstats performs a similar kind of accounting that some applications do with BPF, and so co-locating them seems like a good fit. So that's where trace_sched_process_exit() is moved with this patch. Also, existing trace_sched_process_exit() tracepoint is notoriously missing `group_dead` flag that is certainly useful in practice and some of our production applications have to work around this. So plumb `group_dead` through while at it, to have a richer and more complete tracepoint. Note that we can't use sched_process_template anymore, and so we use TRACE_EVENT()-based tracepoint definition. But all the field names and order, as well as assign and output logic remain intact. We just add one extra field at the end in backwards-compatible way. Document the dependency to sched_process_template anyway. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Acked-by: Steven Rostedt (Google) <rostedt@goodmis.org> Acked-by: Oleg Nesterov <oleg@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20250402180925.90914-1-andrii@kernel.org
2025-04-02 | mm/page_alloc: Fix try_alloc_pages | Alexei Starovoitov | 1 | -0/+3
Fix an obvious bug. try_alloc_pages() should set_page_refcounted. [ Not so obvious: it was probably correct at the time it was written but was at some point then rebased on top of v6.14-rc1. And at that point there was a semantic conflict with commit efabfe1420f5 ("mm/page_alloc: move set_page_refcounted() to callers of get_page_from_freelist()") and became buggy. - Linus ] Fixes: 97769a53f117 ("mm, bpf: Introduce try_alloc_pages() for opportunistic page allocation") Signed-off-by: Alexei Starovoitov <ast@kernel.org> Reviewed-by: Shakeel Butt <shakeel.butt@linux.dev> Acked-by: Michal Hocko <mhocko@suse.com> Acked-by: Vlastimil BAbka <vbabka@suse.cz> Reviewed-by: Harry Yoo <harry.yoo@oracle.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2025-04-02 | nfs: Add missing release on error in nfs_lock_and_join_requests() | Dan Carpenter | 1 | -1/+3
Call nfs_release_request() on this error path before returning. Fixes: c3f2235782c3 ("nfs: fold nfs_folio_find_and_lock_request into nfs_lock_and_join_requests") Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org> Link: https://lore.kernel.org/r/3aaaa3d5-1c8a-41e4-98c7-717801ddd171@stanley.mountain Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
2025-04-02 | docs: Fix references to IBM CAPI (cxl) removal version | Michael Ellerman | 2 | -3/+3
The IBM CAPI (cxl) driver was removed in 6.15, not 6.14. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2025-04-01 | Documentation/EDAC: Fix warning document isn't included in any toctree | Shiju Jose | 1 | -0/+1
Fix the build (htmldocs) warning: Documentation/edac/index.rst: WARNING: document isn't included in any toctree. Fixes: db99ea5f2c03 ("EDAC: Add support for EDAC device features control") Closes: https://lore.kernel.org/all/20250228185102.15842f8b@canb.auug.org.au/ Reported-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Shiju Jose <shiju.jose@huawei.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Link: https://lore.kernel.org/r/20250401115823.573-1-shiju.jose@huawei.com
2025-04-01 | cpufreq: Reference count policy in cpufreq_update_limits() | Rafael J. Wysocki | 1 | -0/+6
Since acpi_processor_notify() can be called before registering a cpufreq driver or even in cases when a cpufreq driver is not registered at all, cpufreq_update_limits() needs to check if a cpufreq driver is present and prevent it from being unregistered. For this purpose, make it call cpufreq_cpu_get() to obtain a cpufreq policy pointer for the given CPU and reference count the corresponding policy object, if present. Fixes: 5a25e3f7cc53 ("cpufreq: intel_pstate: Driver-specific handling of _PPC updates") Closes: https://lore.kernel.org/linux-acpi/Z-ShAR59cTow0KcR@mail-itl Reported-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com> Cc: All applicable <stable@vger.kernel.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Link: https://patch.msgid.link/1928789.tdWV9SEqCh@rjwysocki.net
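The refcounting pattern being added, in sketch form; the real cpufreq_update_limits() does more work between the get and the put:

    void cpufreq_update_limits(unsigned int cpu)
    {
            struct cpufreq_policy *policy;

            /* NULL if no cpufreq driver/policy is registered for this CPU. */
            policy = cpufreq_cpu_get(cpu);
            if (!policy)
                    return;

            /* ... recompute and apply the limits while the policy is pinned ... */

            cpufreq_cpu_put(policy);        /* drop the reference taken above */
    }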
2025-04-01 | objtool/loongarch: Add unwind hints in prepare_frametrace() | Josh Poimboeuf | 2 | -1/+12
If 'regs' points to a local stack variable, prepare_frametrace() stores all registers to the stack. This confuses objtool as it expects them to be restored from the stack later. The stores don't affect stack tracing, so use unwind hints to hide them from objtool. Fixes the following warnings: arch/loongarch/kernel/traps.o: warning: objtool: show_stack+0xe0: stack state mismatch: reg1[22]=-1+0 reg2[22]=-2-160 arch/loongarch/kernel/traps.o: warning: objtool: show_stack+0xe0: stack state mismatch: reg1[23]=-1+0 reg2[23]=-2-152 Fixes: cb8a2ef0848c ("LoongArch: Add ORC stack unwinder support") Reported-by: kernel test robot <lkp@intel.com> Tested-by: Tiezhu Yang <yangtiezhu@loongson.cn> Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Link: https://lore.kernel.org/r/270cadd8040dda74db2307f23497bb68e65db98d.1743481539.git.jpoimboe@kernel.org Closes: https://lore.kernel.org/oe-kbuild-all/202503280703.OARM8SrY-lkp@intel.com/
2025-04-01 | rcu-tasks: Always inline rcu_irq_work_resched() | Josh Poimboeuf | 1 | -1/+1
Thanks to CONFIG_DEBUG_SECTION_MISMATCH, empty functions can be generated out of line. rcu_irq_work_resched() can be called from noinstr code, so make sure it's always inlined. Fixes: 564506495ca9 ("rcu/context-tracking: Move deferred nocb resched to context tracking") Reported-by: Randy Dunlap <rdunlap@infradead.org> Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Frederic Weisbecker <frederic@kernel.org> Cc: Paul E. McKenney <paulmck@kernel.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Link: https://lore.kernel.org/r/e84f15f013c07e4c410d972e75620c53b62c1b3e.1743481539.git.jpoimboe@kernel.org Closes: https://lore.kernel.org/d1eca076-fdde-484a-b33e-70e0d167c36d@infradead.org
2025-04-01 | context_tracking: Always inline ct_{nmi,irq}_{enter,exit}() | Josh Poimboeuf | 1 | -4/+4
Thanks to CONFIG_DEBUG_SECTION_MISMATCH, empty functions can be generated out of line. These can be called from noinstr code, so make sure they're always inlined. Fixes the following warnings: vmlinux.o: warning: objtool: irqentry_nmi_enter+0xa2: call to ct_nmi_enter() leaves .noinstr.text section vmlinux.o: warning: objtool: irqentry_nmi_exit+0x16: call to ct_nmi_exit() leaves .noinstr.text section vmlinux.o: warning: objtool: irqentry_exit+0x78: call to ct_irq_exit() leaves .noinstr.text section Fixes: 6f0e6c1598b1 ("context_tracking: Take IRQ eqs entrypoints over RCU") Reported-by: Randy Dunlap <rdunlap@infradead.org> Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Frederic Weisbecker <frederic@kernel.org> Cc: Paul E. McKenney <paulmck@kernel.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Link: https://lore.kernel.org/r/8509bce3f536bcd4ae7af3a2cf6930d48c5e631a.1743481539.git.jpoimboe@kernel.org Closes: https://lore.kernel.org/d1eca076-fdde-484a-b33e-70e0d167c36d@infradead.org
2025-04-01 | sched/smt: Always inline sched_smt_active() | Josh Poimboeuf | 1 | -1/+1
sched_smt_active() can be called from noinstr code, so it should always be inlined. The CONFIG_SCHED_SMT version already has __always_inline. Do the same for its !CONFIG_SCHED_SMT counterpart. Fixes the following warning: vmlinux.o: error: objtool: intel_idle_ibrs+0x13: call to sched_smt_active() leaves .noinstr.text section Fixes: 321a874a7ef8 ("sched/smt: Expose sched_smt_present static key") Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Link: https://lore.kernel.org/r/1d03907b0a247cf7fb5c1d518de378864f603060.1743481539.git.jpoimboe@kernel.org Closes: https://lore.kernel.org/r/202503311434.lyw2Tveh-lkp@intel.com/
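The fix amounts to the following kind of stub (a sketch, not the verbatim hunk):

    /* !CONFIG_SCHED_SMT stub: must never be emitted out of line for noinstr callers. */
    static __always_inline bool sched_smt_active(void) { return false; }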
2025-04-01 | objtool: Fix verbose disassembly if CROSS_COMPILE isn't set | David Laight | 1 | -0/+2
In verbose mode, when printing the disassembly of affected functions, if CROSS_COMPILE isn't set, the objdump command string gets prefixed with "(null)". Somehow this worked before. Maybe some versions of glibc return an empty string instead of NULL. Fix it regardless. [ jpoimboe: Rewrite commit log. ] Fixes: ca653464dd097 ("objtool: Add verbose option for disassembling affected functions") Signed-off-by: David Laight <david.laight.linux@gmail.com> Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Link: https://lore.kernel.org/r/20250215142321.14081-1-david.laight.linux@gmail.com Link: https://lore.kernel.org/r/b931a4786bc0127aa4c94e8b35ed617dcbd3d3da.1743481539.git.jpoimboe@kernel.org
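A sketch of the guard in question; cmd and objname are illustrative locals, not objtool's actual variable names:

    #include <stdio.h>
    #include <stdlib.h>

    static void build_objdump_cmd(char *cmd, size_t len, const char *objname)
    {
            const char *cross = getenv("CROSS_COMPILE");

            if (!cross)
                    cross = "";     /* avoid "(null)objdump ..." when unset */

            snprintf(cmd, len, "%sobjdump -dr %s", cross, objname);
    }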
2025-04-01 | objtool: Change "warning:" to "error: " for fatal errors | Josh Poimboeuf | 11 | -222/+226
This is similar to GCC's behavior and makes it more obvious why the build failed. Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Link: https://lore.kernel.org/r/0ea76f4b0e7a370711ed9f75fd0792bb5979c2bf.1743481539.git.jpoimboe@kernel.org
2025-04-01 | objtool: Always fail on fatal errors | Josh Poimboeuf | 1 | -11/+4
Objtool writes several object annotations which are used to enable critical kernel runtime functionalities like static calls and retpoline/rethunk patching. In the rare case where it fails to read or write an object, the annotations don't get written, causing runtime code patching to fail and code to become corrupted. Due to the catastrophic nature of such warnings, convert them to errors which fail the build regardless of CONFIG_OBJTOOL_WERROR. Reported-by: Chaitanya Kumar Borah <chaitanya.kumar.borah@intel.com> Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Link: https://lore.kernel.org/r/7d35684ca61eac56eb2424f300ca43c5d257b170.1743481539.git.jpoimboe@kernel.org Closes: https://lore.kernel.org/SJ1PR11MB61295789E25C2F5197EFF2F6B9A72@SJ1PR11MB6129.namprd11.prod.outlook.com
2025-04-01 | Revert "objtool: Increase per-function WARN_FUNC() rate limit" | Josh Poimboeuf | 3 | -14/+6
This reverts commit 0a7fb6f07e3ad497d31ae9a2082d2cacab43d54a. The "skipping duplicate warnings" warning is technically not an actual warning, which can cause confusion. This feature isn't all that useful anyway. It's exceedingly rare for a function to have more than one unrelated warning. Suggested-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Link: https://lore.kernel.org/r/e5abe5e858acf1a9207a5dfa0f37d17ac9dca872.1743481539.git.jpoimboe@kernel.org
2025-04-01 | objtool: Append "()" to function name in "unexpected end of section" warning | Josh Poimboeuf | 1 | -1/+1
Append with "()" to clarify it's a function. Before: vmlinux.o: warning: objtool: cdns_mrvl_xspi_setup_clock: unexpected end of section .text.cdns_mrvl_xspi_setup_clock After: vmlinux.o: warning: objtool: cdns_mrvl_xspi_setup_clock(): unexpected end of section .text.cdns_mrvl_xspi_setup_clock Fixes: c5995abe1547 ("objtool: Improve error handling") Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Link: https://lore.kernel.org/r/692e1e0d0b15a71bd35c6b4b87f3c75cd5a57358.1743481539.git.jpoimboe@kernel.org
2025-04-01 | objtool: Ignore end-of-section jumps for KCOV/GCOV | Josh Poimboeuf | 1 | -6/+16
When KCOV or GCOV is enabled, dead code can be left behind, in which case objtool silences unreachable and undefined behavior (fallthrough) warnings. Fallthrough warnings, and their variant "end of section" warnings, were silenced with the following commit: 6b023c784204 ("objtool: Silence more KCOV warnings") Another variant of a fallthrough warning is a jump to the end of a function. If that function happens to be at the end of a section, the jump destination doesn't actually exist. Normally that would be a fatal objtool error, but for KCOV/GCOV it's just another undefined behavior fallthrough. Silence it like the others. Fixes the following warning: drivers/iommu/dma-iommu.o: warning: objtool: iommu_dma_sw_msi+0x92: can't find jump dest instruction at .text+0x54d5 Fixes: 6b023c784204 ("objtool: Silence more KCOV warnings") Reported-by: Randy Dunlap <rdunlap@infradead.org> Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Link: https://lore.kernel.org/r/08fbe7d7e1e20612206f1df253077b94f178d93e.1743481539.git.jpoimboe@kernel.org Closes: https://lore.kernel.org/314f8809-cd59-479b-97d7-49356bf1c8d1@infradead.org/
2025-04-01 | objtool: Silence more KCOV warnings, part 2 | Josh Poimboeuf | 1 | -1/+1
Similar to GCOV, KCOV can leave behind dead code and undefined behavior. Warnings related to those should be ignored. The previous commit: 6b023c784204 ("objtool: Silence more KCOV warnings") ... only did so for CONFIG_CGOV_KERNEL. Also do it for CONFIG_KCOV, but for real this time. Fixes the following warning: vmlinux.o: warning: objtool: synaptics_report_mt_data: unexpected end of section .text.synaptics_report_mt_data Fixes: 6b023c784204 ("objtool: Silence more KCOV warnings") Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Link: https://lore.kernel.org/r/a44ba16e194bcbc52c1cef3d3cd9051a62622723.1743481539.git.jpoimboe@kernel.org Closes: https://lore.kernel.org/oe-kbuild-all/202503282236.UhfRsF3B-lkp@intel.com/
2025-03-31 | Revert "tcp: avoid atomic operations on sk->sk_rmem_alloc" | Eric Dumazet | 4 | -35/+6
This reverts commit 0de2a5c4b824da2205658ebebb99a55c43cdf60f. I forgot that a TCP socket could receive messages in its error queue. sock_queue_err_skb() can be called without socket lock being held, and changes sk->sk_rmem_alloc. The fact that skbs in error queue are limited by sk->sk_rcvbuf means that error messages can be dropped if socket receive queues are full, which is an orthogonal issue. In future kernels, we could use a separate sk->sk_error_mem_alloc counter specifically for the error queue. Fixes: 0de2a5c4b824 ("tcp: avoid atomic operations on sk->sk_rmem_alloc") Signed-off-by: Eric Dumazet <edumazet@google.com> Link: https://patch.msgid.link/20250331075946.31960-1-edumazet@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-31 | bnxt_en: bring back rtnl lock in bnxt_shutdown | Stanislav Fomichev | 1 | -0/+2
Taehee reports missing rtnl from bnxt_shutdown path: inetdev_event (./include/linux/inetdevice.h:256 net/ipv4/devinet.c:1585) notifier_call_chain (kernel/notifier.c:85) __dev_close_many (net/core/dev.c:1732 (discriminator 3)) kernel/locking/mutex.c:713 kernel/locking/mutex.c:732) dev_close_many (net/core/dev.c:1786) netif_close (./include/linux/list.h:124 ./include/linux/list.h:215 bnxt_shutdown (drivers/net/ethernet/broadcom/bnxt/bnxt.c:16707) bnxt_en pci_device_shutdown (drivers/pci/pci-driver.c:511) device_shutdown (drivers/base/core.c:4820) kernel_restart (kernel/reboot.c:271 kernel/reboot.c:285) Bring back the rtnl lock. Link: https://lore.kernel.org/netdev/CAMArcTV4P8PFsc6O2tSgzRno050DzafgqkLA2b7t=Fv_SY=brw@mail.gmail.com/ Fixes: 004b5008016a ("eth: bnxt: remove most dependencies on RTNL") Reported-by: Taehee Yoo <ap420073@gmail.com> Signed-off-by: Stanislav Fomichev <sdf@fomichev.me> Tested-by: Taehee Yoo <ap420073@gmail.com> Tested-by: Breno Leitao <leitao@debian.org> Link: https://patch.msgid.link/20250328174216.3513079-1-sdf@fomichev.me Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-31 | eth: gve: add missing netdev locks on reset and shutdown paths | Jakub Kicinski | 1 | -0/+4
All the misc entry points end up calling into either gve_open() or gve_close(), they take rtnl_lock today but since the recent instance locking changes should also take the instance lock. Found by code inspection and untested. Fixes: cae03e5bdd9e ("net: hold netdev instance lock during queue operations") Acked-by: Stanislav Fomichev <sdf@fomichev.me> Reviewed-by: Harshitha Ramamurthy <hramamurthy@google.com> Link: https://patch.msgid.link/20250328164742.1268069-1-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-31 | selftests: mptcp: ignore mptcp_diag binary | Matthieu Baerts (NGI0) | 1 | -0/+1
A new binary is now generated by the MPTCP selftests: mptcp_diag. Like the other binaries from this directory, there is no need to track this in Git, it should then be ignored. Fixes: 00f5e338cf7e ("selftests: mptcp: Add a tool to get specific msk_info") Reviewed-by: Mat Martineau <martineau@kernel.org> Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Link: https://patch.msgid.link/20250328-net-mptcp-misc-fixes-6-15-v1-4-34161a482a7f@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-31 | selftests: mptcp: close fd_in before returning in main_loop | Geliang Tang | 1 | -2/+5
The file descriptor 'fd_in' is opened when cfg_input is configured, but it is not closed in main_loop(); this patch fixes that. Fixes: 05be5e273c84 ("selftests: mptcp: add disconnect tests") Cc: stable@vger.kernel.org Co-developed-by: Cong Liu <liucong2@kylinos.cn> Signed-off-by: Cong Liu <liucong2@kylinos.cn> Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn> Reviewed-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Link: https://patch.msgid.link/20250328-net-mptcp-misc-fixes-6-15-v1-3-34161a482a7f@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-31 | selftests: mptcp: fix incorrect fd checks in main_loop | Cong Liu | 1 | -2/+2
Fix a bug where the code was checking the wrong file descriptors when opening the input files. The code was checking 'fd' instead of 'fd_in', which could lead to incorrect error handling. Fixes: 05be5e273c84 ("selftests: mptcp: add disconnect tests") Cc: stable@vger.kernel.org Fixes: ca7ae8916043 ("selftests: mptcp: mptfo Initiator/Listener") Co-developed-by: Geliang Tang <geliang@kernel.org> Signed-off-by: Geliang Tang <geliang@kernel.org> Signed-off-by: Cong Liu <liucong2@kylinos.cn> Reviewed-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Link: https://patch.msgid.link/20250328-net-mptcp-misc-fixes-6-15-v1-2-34161a482a7f@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
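In sketch form, with fd_in/cfg_input taken from the commit message; the error call and surrounding selftest code are illustrative:

    fd_in = open(cfg_input, O_RDONLY);
    if (fd_in < 0)          /* previously this mistakenly tested "fd" */
            error(1, errno, "open %s", cfg_input);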
2025-03-31 | mptcp: fix NULL pointer in can_accept_new_subflow | Gang Yan | 1 | -7/+8
When testing valkey benchmark tool with MPTCP, the kernel panics in 'mptcp_can_accept_new_subflow' because subflow_req->msk is NULL. Call trace: mptcp_can_accept_new_subflow (./net/mptcp/subflow.c:63 (discriminator 4)) (P) subflow_syn_recv_sock (./net/mptcp/subflow.c:854) tcp_check_req (./net/ipv4/tcp_minisocks.c:863) tcp_v4_rcv (./net/ipv4/tcp_ipv4.c:2268) ip_protocol_deliver_rcu (./net/ipv4/ip_input.c:207) ip_local_deliver_finish (./net/ipv4/ip_input.c:234) ip_local_deliver (./net/ipv4/ip_input.c:254) ip_rcv_finish (./net/ipv4/ip_input.c:449) ... According to the debug log, the same req received two SYN-ACK in a very short time, very likely because the client retransmits the syn ack due to multiple reasons. Even if the packets are transmitted with a relevant time interval, they can be processed by the server on different CPUs concurrently). The 'subflow_req->msk' ownership is transferred to the subflow the first, and there will be a risk of a null pointer dereference here. This patch fixes this issue by moving the 'subflow_req->msk' under the `own_req == true` conditional. Note that the !msk check in subflow_hmac_valid() can be dropped, because the same check already exists under the own_req mpj branch where the code has been moved to. Fixes: 9466a1ccebbe ("mptcp: enable JOIN requests even if cookies are in use") Cc: stable@vger.kernel.org Suggested-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: Gang Yan <yangang@kylinos.cn> Reviewed-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Link: https://patch.msgid.link/20250328-net-mptcp-misc-fixes-6-15-v1-1-34161a482a7f@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-31 | octeontx2-af: Free NIX_AF_INT_VEC_GEN irq | Geetha sowjanya | 1 | -1/+1
Due to the incorrect initial vector number in rvu_nix_unregister_interrupts(), NIX_AF_INT_VEC_GEN is not getting freed. Fix the vector number to include the NIX_AF_INT_VEC_GEN irq. Fixes: 5ed66306eab6 ("octeontx2-af: Add devlink health reporters for NIX") Signed-off-by: Geetha sowjanya <gakula@marvell.com> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://patch.msgid.link/20250327094054.2312-1-gakula@marvell.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-31 | octeontx2-af: Fix mbox INTR handler when num VFs > 64 | Geetha sowjanya | 1 | -1/+1
When number of RVU VFs > 64, the vfs value passed to "rvu_queue_work" function is incorrect. Due to which mbox workqueue entries for VFs 0 to 63 never gets added to workqueue. Fixes: 9bdc47a6e328 ("octeontx2-af: Mbox communication support btw AF and it's VFs") Signed-off-by: Geetha sowjanya <gakula@marvell.com> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://patch.msgid.link/20250327091441.1284-1-gakula@marvell.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-31 | net: fix use-after-free in the netdev_nl_sock_priv_destroy() | Taehee Yoo | 1 | -2/+4
In netdev_nl_sock_priv_destroy(), the instance lock is acquired before calling net_devmem_unbind_dmabuf() and then released via netdev_unlock(binding->dev). However, the binding is freed inside net_devmem_unbind_dmabuf(), so dereferencing it afterwards is a use-after-free. To fix this UAF, use a temporary variable. Fixes: ba6f418fbf64 ("net: bubble up taking netdev instance lock to callers of net_devmem_unbind_dmabuf()") Signed-off-by: Taehee Yoo <ap420073@gmail.com> Reviewed-by: Jakub Kicinski <kuba@kernel.org> Reviewed-by: Mina Almasry <almasrymina@google.com> Reviewed-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://patch.msgid.link/20250328062237.3746875-1-ap420073@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
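A sketch of the fix shape: save the pointer before the binding is freed (names follow the commit text, the surrounding function body is abridged):

    struct net_device *dev = binding->dev;  /* hold the pointer in a local */

    netdev_lock(dev);
    net_devmem_unbind_dmabuf(binding);      /* frees "binding" */
    netdev_unlock(dev);                     /* use the saved pointer, not binding->dev */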
2025-03-31 | selftests: net: use Path helpers in ping | Jakub Kicinski | 1 | -10/+5
Now that net and net-next have converged we can use the Path helpers in the ping test without conflicts. Reviewed-by: Willem de Bruijn <willemb@google.com> Link: https://patch.msgid.link/20250327222315.1098596-4-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-31 | selftests: net: use the dummy bpf from net/lib | Jakub Kicinski | 7 | -31/+5
Commit 29b036be1b0b ("selftests: drv-net: test XDP, HDS auto and the ioctl path") added a sample XDP_PASS prog in net/lib, so that we can reuse it in various sub-directories. Delete the old sample and use the one from the lib in existing tests. Acked-by: Stanislav Fomichev <sdf@fomichev.me> Reviewed-by: Willem de Bruijn <willemb@google.com> Link: https://patch.msgid.link/20250327222315.1098596-3-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-31 | selftests: drv-net: replace the rpath helper with Path objects | Jakub Kicinski | 5 | -18/+13
The single letter + "path" helpers do not have many fans (see Link). Use a Path object with a better name. test_dir is the replacement for rpath(), net_lib_dir is a new path of the $ksft/net/lib directory. The Path() class overloads the "/" operator and can be cast to string automatically, so to get a path to a file tests can do: path = env.test_dir / "binary" Link: https://lore.kernel.org/CA+FuTSemTNVZ5MxXkq8T9P=DYm=nSXcJnL7CJBPZNAT_9UFisQ@mail.gmail.com Reviewed-by: Willem de Bruijn <willemb@google.com> Link: https://patch.msgid.link/20250327222315.1098596-2-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-31 | net: lapbether: use netdev_lockdep_set_classes() helper | Eric Dumazet | 1 | -0/+2
drivers/net/wan/lapbether.c uses stacked devices. Like similar drivers, it must use netdev_lockdep_set_classes() to avoid LOCKDEP splats. This is similar to commit 9bfc9d65a1dc ("hamradio: use netdev_lockdep_set_classes() helper") Fixes: 7e4d784f5810 ("net: hold netdev instance lock during rtnetlink operations") Reported-by: syzbot+377b71db585c9c705f8e@syzkaller.appspotmail.com Closes: https://lore.kernel.org/lkml/67cd611c.050a0220.14db68.0073.GAE@google.com/T/#u Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Stanislav Fomichev <sdf@fomichev.me> Link: https://patch.msgid.link/20250327144439.2463509-1-edumazet@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
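A sketch of the one-line pattern; lapbeth_setup() is assumed here to be the device setup hook, mirroring the hamradio change referenced above:

    static void lapbeth_setup(struct net_device *dev)
    {
            /*
             * Stacked device: give it its own lockdep class keys so nesting
             * under the lower Ethernet device does not trigger false splats.
             */
            netdev_lockdep_set_classes(dev);

            /* ... remaining setup elided ... */
    }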