Age | Commit message | Author | Files | Lines |
|
timer_delete[_sync]() replaces del_timer[_sync](). Convert the whole tree
over and remove the historical wrapper inlines.
Conversion was done with coccinelle plus manual fixups where necessary.
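For illustration, the mechanical shape of the conversion, shown on an invented
structure rather than one from the tree:
struct poll_ctx {                               /* illustrative only */
        struct timer_list poll_timer;
};

static void poll_ctx_stop(struct poll_ctx *ctx)
{
        /* was: del_timer_sync(&ctx->poll_timer); */
        timer_delete_sync(&ctx->poll_timer);
}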
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
The mask operation link->flags | DL_FLAG_PM_RUNTIME is always true, which
is incorrect: the test should use the bit-wise & operator to check whether
the flag is actually set. Fix this.
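A minimal sketch of the difference (the helper name below is made up for
illustration):
static bool link_has_runtime_pm(struct device_link *link)
{
        /* buggy form: link->flags | DL_FLAG_PM_RUNTIME is always nonzero */
        return link->flags & DL_FLAG_PM_RUNTIME;        /* intended mask test */
}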
Fixes: bca84a7b93fd ("PM: sleep: Use DPM_FLAG_SMART_SUSPEND conditionally")
Signed-off-by: Colin Ian King <colin.i.king@gmail.com>
Link: https://patch.msgid.link/20250319114324.791829-1-colin.i.king@gmail.com
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
When dpm_suspend() fails, some devices with power.direct_complete set
may not have been handled by device_suspend() yet, so runtime PM has
not been disabled for them yet even though power.direct_complete is set.
Since device_resume() expects that runtime PM has been disabled for all
devices with power.direct_complete set, it will attempt to reenable
runtime PM for the devices that have not been processed by device_suspend(),
which does not make sense. Had those devices had runtime PM disabled
before device_suspend() had run, device_resume() would have inadvertently
enabled runtime PM for them, but this is not expected to happen because
it would require ->prepare() callbacks to return positive values for
devices with runtime PM disabled, which would be invalid.
In practice, this issue is most likely benign because pm_runtime_enable()
will not allow the "disable depth" counter to underflow, but it causes a
warning message to be printed for each affected device.
To allow device_resume() to distinguish the "direct complete" devices
that have been processed by device_suspend() from those which have not
been handled by it, make device_suspend() set power.is_suspended for
"direct complete" devices.
Next, move the power.is_suspended check in device_resume() before the
power.direct_complete check in it to make it skip the "direct complete"
devices that have not been handled by device_suspend().
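A simplified sketch of the reordered checks (illustrative helper, not the full
device_resume() body):
static void device_resume_checks(struct device *dev)
{
        if (!dev->power.is_suspended)
                return;         /* device_suspend() never reached this device */

        if (dev->power.direct_complete) {
                /* match the pm_runtime_disable() done by device_suspend() */
                pm_runtime_enable(dev);
                return;
        }

        /* ... invoke the resume callbacks ... */
}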
This change is based on a preliminary patch from Saravana Kannan.
Fixes: aae4518b3124 ("PM / sleep: Mechanism to avoid resuming runtime-suspended devices unnecessarily")
Link: https://lore.kernel.org/linux-pm/20241114220921.2529905-2-saravanak@google.com/
Reported-by: Saravana Kannan <saravanak@google.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Saravana Kannan <saravanak@google.com>
Link: https://patch.msgid.link/12627587.O9o76ZdvQC@rjwysocki.net
|
|
The body of dpm_wait_for_children() is indented by 7 spaces instead of a
single TAB.
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Link: https://patch.msgid.link/9c8ff2b103c3ba7b0d27bdc8248b05e3b1dc9551.1741776430.git.geert+renesas@glider.be
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
In preparation for subsequent changes, move the power.completion
reinitialization along with clearing power.work_in_progress into a
separate function called dpm_clear_async_state() and rearrange
dpm_async_fn() to get rid of unnecessary indentation.
No intentional functional impact.
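A sketch of the helper, with the body assumed from the description above:
static void dpm_clear_async_state(struct device *dev)
{
        reinit_completion(&dev->power.completion);
        dev->power.work_in_progress = false;
}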
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
Link: https://patch.msgid.link/8494650.T7Z3S40VBb@rjwysocki.net
|
|
Rename the async_in_progress field in struct dev_pm_info to
work_in_progress, as after subsequent changes it will refer to work in
general rather than just async work.
No functional impact.
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
Link: https://patch.msgid.link/3338693.aeNJFYEL58@rjwysocki.net
|
|
Modify pm_runtime_block_if_disabled() to return true when runtime PM
is disabled for the device, regardless of the power.last_status value.
This effectively prevents "smart suspend" from being enabled for
devices with runtime PM disabled in device_prepare(), even transiently,
so update the related comment in that function accordingly.
If a device has runtime PM disabled in device_prepare(), it is not
actually known whether or not runtime PM will be enabled for that
device going forward, so it is more appropriate to postpone the
"smart suspend" optimization for the device in the given system
suspend-resume cycle than to enable it and get confused going
forward.
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
Link: https://patch.msgid.link/13718674.uLZWGnKmhe@rjwysocki.net
|
|
Put the update of the power.smart_suspend device flag under the PM
spinlock of the device in case multiple bit fields in struct dev_pm_info
occupy one memory location which needs to be updated via RMW every time
any of these bit fields is updated.
The lock in question is already held around the power.direct_complete
flag update in device_prepare() for the same reason, so this change does
not add locking-related overhead to the code.
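The pattern in question, as a sketch (the wrapper name is made up):
static void dpm_set_smart_suspend(struct device *dev, bool enable)
{
        spin_lock_irq(&dev->power.lock);
        dev->power.smart_suspend = enable;      /* RMW of the shared bit-field word */
        spin_unlock_irq(&dev->power.lock);
}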
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
Link: https://patch.msgid.link/2368159.ElGaqSPkdT@rjwysocki.net
|
|
The check before setting power.must_resume in device_suspend_noirq()
does not take power.child_count into account, but it should do that, so
use pm_runtime_need_not_resume() in it for this purpose and adjust the
comment next to it accordingly.
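For reference, the helper reads roughly as follows (reproduced from memory, so
treat it as a sketch); note the power.child_count term:
static bool pm_runtime_need_not_resume(struct device *dev)
{
        return atomic_read(&dev->power.usage_count) <= 1 &&
                (atomic_read(&dev->power.child_count) == 0 ||
                 dev->power.ignore_children);
}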
Fixes: 107d47b2b95e ("PM: sleep: core: Simplify the SMART_SUSPEND flag handling")
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
Link: https://patch.msgid.link/3353728.44csPzL39Z@rjwysocki.net
|
|
Currently, if power.no_callbacks is set, device_prepare() will also set
power.direct_complete for the device. If power.direct_complete is set
in device_resume(), the clearing of power.is_prepared will be skipped
and if new children appear under the device at that point, a warning
will be printed.
After commit f76b168b6f11 ("PM: Rename dev_pm_info.in_suspend to
is_prepared"), power.is_prepared is generally cleared in device_resume()
before invoking the resume callback for the device which allows that
callback to add new children without triggering the warning, but this
does not happen for devices with power.direct_complete set.
This problem is visible in USB where usb_set_interface() can be called
before device_complete() clears power.is_prepared for interface devices
and since ep devices are added then, the warning is printed:
usb 1-1: reset high-speed USB device number 3 using ci_hdrc
ep_81: PM: parent 1-1:1.1 should not be sleeping
PM: resume devices took 0.936 seconds
Since it is legitimate to add the ep devices at that point, the
warning above is not particularly useful, so get rid of it by
clearing power.is_prepared in device_resume() for devices with
power.direct_complete set if they have no PM callbacks, in which
case they need not actually resume for the new children to work.
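A minimal sketch of the clearing described above (the helper name is made up;
the real change lives inside device_resume()):
static void direct_complete_allow_children(struct device *dev)
{
        if (dev->power.direct_complete && dev->power.no_pm_callbacks)
                dev->power.is_prepared = false; /* new children may appear before ->complete() */
}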
Suggested-by: Rafael J. Wysocki <rafael@kernel.org>
Signed-off-by: Xu Yang <xu.yang_2@nxp.com>
Link: https://patch.msgid.link/20250224070049.3338646-1-xu.yang_2@nxp.com
[ rjw: New subject, changelog edits, rephrased new code comment ]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
Add an optimization (on top of previous changes) to avoid calling
pm_runtime_blocked(), which involves acquiring the device's PM spinlock,
for devices with no PM callbacks and runtime PM "blocked".
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
Link: https://patch.msgid.link/2978873.e9J7NaK4W3@rjwysocki.net
|
|
A recent discussion has revealed that using DPM_FLAG_SMART_SUSPEND
unconditionally is generally problematic because it may lead to
situations in which the device's runtime PM information is internally
inconsistent or does not reflect its real state [1].
For this reason, change the handling of DPM_FLAG_SMART_SUSPEND so that
it is only taken into account if it is consistently set by the drivers
of all devices having any PM callbacks throughout dependency graphs in
accordance with the following rules:
- The "smart suspend" feature is only enabled for devices whose drivers
ask for it (that is, set DPM_FLAG_SMART_SUSPEND) and for devices
without PM callbacks unless they have never had runtime PM enabled.
- The "smart suspend" feature is not enabled for a device if it has not
been enabled for the device's parent unless the parent does not take
children into account or it has never had runtime PM enabled.
- The "smart suspend" feature is not enabled for a device if it has not
been enabled for one of the device's suppliers taking runtime PM into
account unless that supplier has never had runtime PM enabled.
Namely, introduce a new device PM flag called smart_suspend that is only
set if the above conditions are met and update all DPM_FLAG_SMART_SUSPEND
users to check power.smart_suspend instead of checking the driver flag
directly.
At the same time, drop the power.set_active flag introduced recently
in commit 3775fc538f53 ("PM: sleep: core: Synchronize runtime PM status
of parents and children") because it is now sufficient to check
power.smart_suspend along with the dev_pm_skip_resume() return value
to decide whether or not pm_runtime_set_active() needs to be called
for the device.
Link: https://lore.kernel.org/linux-pm/CAPDyKFroyU3YDSfw_Y6k3giVfajg3NQGwNWeteJWqpW29BojhQ@mail.gmail.com/ [1]
Fixes: 7585946243d6 ("PM: sleep: core: Restrict power.set_active propagation")
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
Acked-by: Bjorn Helgaas <bhelgaas@google.com> # drivers/pci
Link: https://patch.msgid.link/1914558.tdWV9SEqCh@rjwysocki.net
|
|
If device_prepare() runs on a device that has never had runtime
PM enabled so far, it may reasonably assume that runtime PM will
not be enabled for that device during the system suspend-resume
cycle currently in progress, but this has never been guaranteed.
To verify this assumption, make device_prepare() arrange for
triggering a device warning accompanied by a call trace dump if
runtime PM is enabled for such a device after it has returned.
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
Link: https://patch.msgid.link/6131109.lOV4Wx5bFT@rjwysocki.net
|
|
There are only two callers of __pm_runtime_disable(), one of which is
device_suspend_late() and the other is pm_runtime_disable() that has
its own kerneldoc comment and there are no plans to add any more of
them. Since they use different values of the __pm_runtime_disable()
second parameter, the actual code behavior is different in each case,
but it is all documented in the __pm_runtime_disable() kerneldoc comment
which is not particularly straightforward.
For this reason, move the information from the __pm_runtime_disable()
kerneldoc comment to the pm_runtime_disable() one and into a separate
comment in device_suspend_late() and remove the __pm_runtime_disable()
kerneldoc comment altogether.
No functional impact.
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
Link: https://patch.msgid.link/12617588.O9o76ZdvQC@rjwysocki.net
|
|
Commit 3775fc538f53 ("PM: sleep: core: Synchronize runtime PM status of
parents and children") exposed an issue related to simple_pm_bus_pm_ops
that uses pm_runtime_force_suspend() and pm_runtime_force_resume() as
bus type PM callbacks for the noirq phases of system-wide suspend and
resume.
The problem is that pm_runtime_force_suspend() does not distinguish
runtime-suspended devices from devices for which runtime PM has never
been enabled, so if it sees a device with runtime PM status set to
RPM_ACTIVE, it will assume that runtime PM is enabled for that device
and so it will attempt to suspend it with the help of its runtime PM
callbacks which may not be ready for that. As it turns out, this
causes simple_pm_bus_runtime_suspend() to crash due to a NULL pointer
dereference.
Another problem related to the above commit and simple_pm_bus_pm_ops is
that setting runtime PM status of a device handled by the latter to
RPM_ACTIVE will actually prevent it from being resumed because
pm_runtime_force_resume() only resumes devices with runtime PM status
set to RPM_SUSPENDED.
To mitigate these issues, do not allow power.set_active to propagate
beyond the parent of the device with DPM_FLAG_SMART_SUSPEND set that
will need to be resumed, which should be a sufficient stop-gap for the
time being, but they will need to be properly addressed in the future
because in general during system-wide resume it is necessary to resume
all devices in a dependency chain in which at least one device is going
to be resumed.
Fixes: 3775fc538f53 ("PM: sleep: core: Synchronize runtime PM status of parents and children")
Closes: https://lore.kernel.org/linux-pm/1c2433d4-7e0f-4395-b841-b8eac7c25651@nvidia.com/
Reported-by: Jon Hunter <jonathanh@nvidia.com>
Tested-by: Johan Hovold <johan+linaro@kernel.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Link: https://patch.msgid.link/6137505.lOV4Wx5bFT@rjwysocki.net
|
|
Commit 6e176bf8d461 ("PM: sleep: core: Do not skip callbacks in the
resume phase") overlooked the case in which the parent of a device with
DPM_FLAG_SMART_SUSPEND set did not use that flag and could be runtime-
suspended before a transition into a system-wide sleep state. In that
case, if the child is resumed during the subsequent transition from
that state into the working state, its runtime PM status will be set to
RPM_ACTIVE, but the runtime PM status of the parent will not be updated
accordingly, even though the parent will be resumed too, because of the
dev_pm_skip_suspend() check in device_resume_noirq().
Address this problem by tracking the need to set the runtime PM status
to RPM_ACTIVE during system-wide resume transitions for devices with
DPM_FLAG_SMART_SUSPEND set and all of the devices depended on by them.
Fixes: 6e176bf8d461 ("PM: sleep: core: Do not skip callbacks in the resume phase")
Closes: https://lore.kernel.org/linux-pm/Z30p2Etwf3F2AUvD@hovoldconsulting.com/
Reported-by: Johan Hovold <johan@kernel.org>
Tested-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Johan Hovold <johan+linaro@kernel.org>
Tested-by: Johan Hovold <johan+linaro@kernel.org>
Link: https://patch.msgid.link/12619233.O9o76ZdvQC@rjwysocki.net
|
|
Allow configuring the DPM watchdog to warn about slow suspend/resume
functions without causing a system panic(). This allows you to set the
DPM_WATCHDOG_WARNING_TIMEOUT to something like 5 or 10 seconds to get
warnings about slow suspend/resume functions that eventually succeed.
Signed-off-by: Douglas Anderson <dianders@chromium.org>
Reviewed-by: Tomasz Figa <tfiga@chromium.org>
Link: https://patch.msgid.link/20250109125957.v2.1.I4554f931b8da97948f308ecc651b124338ee9603@changeid
[ rjw: Subject edit ]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
There is no function called __device_suspend() any more and it is still
mentioned in a comment in device_resume(), so update that comment.
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Link: https://patch.msgid.link/2787627.mvXUDI8C0e@rjwysocki.net
|
|
initcall_debug previous and new output:
...PM: calling pci_pm_suspend+0x0/0x1b0 @ 3233, parent: pci0000:00
...PM: calling pci_pm_suspend @ 3233, parent: pci0000:00
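The change boils down to switching the printk pointer format from %pS (symbol
plus offset/size) to %ps (symbol only); a sketch of the kind of line involved,
reconstructed from memory rather than quoted from the tree:
static void initcall_debug_print(struct device *dev, void *cb)
{
        /* %ps prints the bare symbol name; %pS would append "+0x0/0x1b0" */
        dev_info(dev, "calling  %ps @ %i, parent: %s\n", cb,
                 task_pid_nr(current),
                 dev->parent ? dev_name(dev->parent) : "none");
}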
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
Simplify the system-wide suspend of devices by invoking dpm_async_fn()
directly from the main loop in each suspend phase instead of using an
additional wrapper function for running it.
No intentional functional impact.
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Stanislaw Gruszka <stanislaw.gruszka@linux.intel.com>
Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
|
|
During a system-wide suspend of devices, dpm_noirq_suspend_devices(),
dpm_suspend_late() and dpm_suspend() move devices from one list to
another. They do it with each device after its PM callback in the
given suspend phase has run or has been scheduled for asynchronous
execution, in case it is deleted from the current list in the
meantime.
However, devices can be moved to a new list before invoking their PM
callbacks (which usually is the case for the devices whose callbacks
are executed asynchronously anyway), because doing so does not affect
the ordering of that list. In either case, each device is moved to
the new list after the previous device has been moved to it or gone
away, and if a device is removed, it does not matter which list it is
in at that point, because deleting an entry from a list does not change
the ordering of the other entries in it.
Accordingly, modify the functions mentioned above to move devices to
new lists without waiting for their PM callbacks to run regardless of
whether or not they run asynchronously.
No intentional functional impact.
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Stanislaw Gruszka <stanislaw.gruszka@linux.intel.com>
Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
|
|
The async_error and pm_transition variables are set under dpm_list_mtx
in multiple places in the system-wide device PM core code, which is
unnecessary and confusing, so rearrange the code so that the variables
in question are set before acquiring the lock.
While at it, add some empty code lines around locking to improve the
consistency of the code.
No intentional functional impact.
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Stanislaw Gruszka <stanislaw.gruszka@linux.intel.com>
Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
|
|
The error logging and failure statistics updates are carried out in two
places in each system-wide device suspend phase, which is unnecessary
code duplication, so do that in one place in each phase, right after
invoking device suspend callbacks.
While at it, add "noirq" or "late" to the "async" string printed when
the failing device callback in the "noirq" or "late" suspend phase,
respectively, was run asynchronously.
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Stanislaw Gruszka <stanislaw.gruszka@linux.intel.com>
Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
|
|
If the handling of two or more devices fails in one suspend-resume
phase, it should be counted once in the statistics, which is not
guaranteed to happen during system-wide resume of devices due to
the possible asynchronous execution of device callbacks.
Address this by using the async_error static variable during system-wide
device resume to indicate that there has been a device resume error and
the given suspend-resume phase should be counted as failing.
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Stanislaw Gruszka <stanislaw.gruszka@linux.intel.com>
Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
|
|
It is not necessary to define struct suspend_stats in a header file and the
suspend_stats variable in the core device system-wide PM code. They both
can be defined in kernel/power/main.c, next to the sysfs and debugfs code
accessing suspend_stats, which can be static.
Modify the code in question in accordance with the above observation and
replace the static inline functions manipulating suspend_stats with
regular ones defined in kernel/power/main.c.
While at it, move the enum suspend_stat_step to the end of suspend.h which
is a more suitable place for it.
No intentional functional impact.
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
|
|
Instead of using a set of individual struct suspend_stats fields
representing suspend step failure counters, use an array of counters
indexed by enum suspend_stat_step for this purpose, which allows
dpm_save_failed_step() to increment the appropriate counter
automatically, so that its callers don't need to do that directly.
It also allows suspend_stats_show() to carry out a loop over the
counters array to print their values.
Because the counters cannot become negative, use unsigned int for
representing them.
The only user-observable impact of this change is a different
ordering of entries in the suspend_stats debugfs file which is not
expected to matter.
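A simplified sketch of the layout described above (an illustrative subset of
the real steps, with zero-based numbering assumed):
enum suspend_stat_step {
        SUSPEND_SUSPEND,
        SUSPEND_SUSPEND_LATE,
        SUSPEND_SUSPEND_NOIRQ,
        SUSPEND_RESUME_NOIRQ,
        SUSPEND_RESUME_EARLY,
        SUSPEND_RESUME,
        SUSPEND_NR_STEPS
};

static unsigned int step_failures[SUSPEND_NR_STEPS];

void dpm_save_failed_step(enum suspend_stat_step step)
{
        step_failures[step]++;  /* the enum value selects the counter */
}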
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Stanislaw Gruszka <stanislaw.gruszka@linux.intel.com>
Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
|
|
Move is_async() and dpm_async_fn() in the PM core to a more suitable
place.
No functional impact.
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Stanislaw Gruszka <stanislaw.gruszka@linux.intel.com>
|
|
Notice that devices can be moved to dpm_prepared_list before running
their resume callbacks, in analogy with dpm_noirq_resume_devices() and
dpm_resume_early(), because doing so will not affect the final ordering
of that list.
Namely, if a device is the first dpm_suspended_list entry while
dpm_list_mtx is held, it has not been removed so far and it cannot be
removed until dpm_list_mtx is released, so moving it to dpm_prepared_list
at that point is valid. If it is removed later, while its resume
callback is running, it will be deleted from dpm_prepared_list without
changing the ordering of the other devices in that list.
Accordingly, rearrange the while () loop in dpm_resume() to move
devices to dpm_prepared_list before running their resume callbacks and
simplify the locking and device reference counting in it.
No intentional functional impact.
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Stanislaw Gruszka <stanislaw.gruszka@linux.intel.com>
|
|
Before commit 7839d0078e0d ("PM: sleep: Fix possible deadlocks in core
system-wide PM code"), the resume of devices that were allowed to resume
asynchronously was scheduled before starting the resume of the other
devices, so the former did not have to wait for the latter unless
functional dependencies were present.
Commit 7839d0078e0d removed that optimization in order to address a
correctness issue, but it can be restored with the help of a new device
power management flag, so do that now.
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Stanislaw Gruszka <stanislaw.gruszka@linux.intel.com>
|
|
It is reported that in low-memory situations the system-wide resume core
code deadlocks, because async_schedule_dev() executes its argument
function synchronously if it cannot allocate memory (and not only in
that case) and that function attempts to acquire a mutex that is already
held. Executing the argument function synchronously from within
dpm_async_fn() may also be problematic for ordering reasons (it may
cause a consumer device's resume callback to be invoked before a
requisite supplier device's one, for example).
Address this by changing the code in question to use
async_schedule_dev_nocall() for scheduling the asynchronous
execution of device suspend and resume functions and to directly
run them synchronously if async_schedule_dev_nocall() returns false.
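A simplified sketch of the resulting scheduling pattern (is_async() is the
file-local policy check):
static bool dpm_async_fn(struct device *dev, async_func_t func)
{
        reinit_completion(&dev->power.completion);

        if (!is_async(dev))
                return false;

        get_device(dev);

        if (async_schedule_dev_nocall(func, dev))
                return true;

        put_device(dev);

        return false;   /* caller falls back to running func() synchronously */
}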
Link: https://lore.kernel.org/linux-pm/ZYvjiqX6EsL15moe@perf/
Reported-by: Youngmin Nam <youngmin.nam@samsung.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Stanislaw Gruszka <stanislaw.gruszka@linux.intel.com>
Tested-by: Youngmin Nam <youngmin.nam@samsung.com>
Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
Cc: 5.7+ <stable@vger.kernel.org> # 5.7+: 6aa09a5bccd8 async: Split async_schedule_node_domain()
Cc: 5.7+ <stable@vger.kernel.org> # 5.7+: 7d4b5d7a37bd async: Introduce async_schedule_dev_nocall()
Cc: 5.7+ <stable@vger.kernel.org> # 5.7+
|
|
Assignments from pointer variables of type (void *) do not require
explicit type casts, so remove such casts from the code in
drivers/base/power/main.c where applicable.
Signed-off-by: Li zeming <zeming@nfschina.com>
[ rjw: Subject and changelog edits ]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
Merge changes related to system sleep, PM domains changes and power
management documentation changes for 5.18-rc1:
- Fix load_image_and_restore() error path (Ye Bin).
- Fix typos in comments in the system wakeup handling code (Tom Rix).
- Clean up non-kernel-doc comments in hibernation code (Jiapeng
Chong).
- Fix __setup handler error handling in system-wide suspend and
hibernation core code (Randy Dunlap).
- Add device name to suspend_report_result() (Youngjin Jang).
- Make virtual guests honour ACPI S4 hardware signature by
default (David Woodhouse).
- Block power off of a parent PM domain unless child is in deepest
state (Ulf Hansson).
- Use dev_err_probe() to simplify error handling for generic PM
domains (Ahmad Fatoum).
- Fix sleep-in-atomic bug caused by genpd_debug_remove() (Shawn Guo).
- Document Intel uncore frequency scaling (Srinivas Pandruvada).
* pm-sleep:
PM: hibernate: Honour ACPI hardware signature by default for virtual guests
PM: sleep: Add device name to suspend_report_result()
PM: suspend: fix return value of __setup handler
PM: hibernate: fix __setup handler error handling
PM: hibernate: Clean up non-kernel-doc comments
PM: sleep: wakeup: Fix typos in comments
PM: hibernate: fix load_image_and_restore() error path
* pm-domains:
PM: domains: Fix sleep-in-atomic bug caused by genpd_debug_remove()
PM: domains: use dev_err_probe() to simplify error handling
PM: domains: Prevent power off for parent unless child is in deepest state
* pm-docs:
Documentation: admin-guide: pm: Document uncore frequency scaling
|
|
The function device_pm_check_callbacks() can be called under a spin
lock (in the reported case it happens from genpd_add_device() ->
dev_pm_domain_set(), when the genpd uses spinlocks rather than mutexes).
However, this function unconditionally uses spin_lock_irq() /
spin_unlock_irq(), thus not preserving the CPU flags. Use the
irqsave/irqrestore variants instead.
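A sketch of the locking pattern after the change (the wrapper name is made up;
the real update happens inline in device_pm_check_callbacks()):
static void update_no_pm_callbacks(struct device *dev, bool value)
{
        unsigned long flags;

        spin_lock_irqsave(&dev->power.lock, flags);     /* was: spin_lock_irq() */
        dev->power.no_pm_callbacks = value;
        spin_unlock_irqrestore(&dev->power.lock, flags);
}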
The backtrace, for reference:
[ 2.752010] ------------[ cut here ]------------
[ 2.756769] raw_local_irq_restore() called with IRQs enabled
[ 2.762596] WARNING: CPU: 4 PID: 1 at kernel/locking/irqflag-debug.c:10 warn_bogus_irq_restore+0x34/0x50
[ 2.772338] Modules linked in:
[ 2.775487] CPU: 4 PID: 1 Comm: swapper/0 Tainted: G S 5.17.0-rc6-00384-ge330d0d82eff-dirty #684
[ 2.781384] Freeing initrd memory: 46024K
[ 2.785839] pstate: 60400005 (nZCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
[ 2.785841] pc : warn_bogus_irq_restore+0x34/0x50
[ 2.785844] lr : warn_bogus_irq_restore+0x34/0x50
[ 2.785846] sp : ffff80000805b7d0
[ 2.785847] x29: ffff80000805b7d0 x28: 0000000000000000 x27: 0000000000000002
[ 2.785850] x26: ffffd40e80930b18 x25: ffff7ee2329192b8 x24: ffff7edfc9f60800
[ 2.785853] x23: ffffd40e80930b18 x22: ffffd40e80930d30 x21: ffff7edfc0dffa00
[ 2.785856] x20: ffff7edfc09e3768 x19: 0000000000000000 x18: ffffffffffffffff
[ 2.845775] x17: 6572206f74206465 x16: 6c696166203a3030 x15: ffff80008805b4f7
[ 2.853108] x14: 0000000000000000 x13: ffffd40e809550b0 x12: 00000000000003d8
[ 2.860441] x11: 0000000000000148 x10: ffffd40e809550b0 x9 : ffffd40e809550b0
[ 2.867774] x8 : 00000000ffffefff x7 : ffffd40e809ad0b0 x6 : ffffd40e809ad0b0
[ 2.875107] x5 : 000000000000bff4 x4 : 0000000000000000 x3 : 0000000000000000
[ 2.882440] x2 : 0000000000000000 x1 : 0000000000000000 x0 : ffff7edfc03a8000
[ 2.889774] Call trace:
[ 2.892290] warn_bogus_irq_restore+0x34/0x50
[ 2.896770] _raw_spin_unlock_irqrestore+0x94/0xa0
[ 2.901690] genpd_unlock_spin+0x20/0x30
[ 2.905724] genpd_add_device+0x100/0x2d0
[ 2.909850] __genpd_dev_pm_attach+0xa8/0x23c
[ 2.914329] genpd_dev_pm_attach_by_id+0xc4/0x190
[ 2.919167] genpd_dev_pm_attach_by_name+0x3c/0xd0
[ 2.924086] dev_pm_domain_attach_by_name+0x24/0x30
[ 2.929102] psci_dt_attach_cpu+0x24/0x90
[ 2.933230] psci_cpuidle_probe+0x2d4/0x46c
[ 2.937534] platform_probe+0x68/0xe0
[ 2.941304] really_probe.part.0+0x9c/0x2fc
[ 2.945605] __driver_probe_device+0x98/0x144
[ 2.950085] driver_probe_device+0x44/0x15c
[ 2.954385] __device_attach_driver+0xb8/0x120
[ 2.958950] bus_for_each_drv+0x78/0xd0
[ 2.962896] __device_attach+0xd8/0x180
[ 2.966843] device_initial_probe+0x14/0x20
[ 2.971144] bus_probe_device+0x9c/0xa4
[ 2.975092] device_add+0x380/0x88c
[ 2.978679] platform_device_add+0x114/0x234
[ 2.983067] platform_device_register_full+0x100/0x190
[ 2.988344] psci_idle_init+0x6c/0xb0
[ 2.992113] do_one_initcall+0x74/0x3a0
[ 2.996060] kernel_init_freeable+0x2fc/0x384
[ 3.000543] kernel_init+0x28/0x130
[ 3.004132] ret_from_fork+0x10/0x20
[ 3.007817] irq event stamp: 319826
[ 3.011404] hardirqs last enabled at (319825): [<ffffd40e7eda0268>] __up_console_sem+0x78/0x84
[ 3.020332] hardirqs last disabled at (319826): [<ffffd40e7fd6d9d8>] el1_dbg+0x24/0x8c
[ 3.028458] softirqs last enabled at (318312): [<ffffd40e7ec90410>] _stext+0x410/0x588
[ 3.036678] softirqs last disabled at (318299): [<ffffd40e7ed1bf68>] __irq_exit_rcu+0x158/0x174
[ 3.045607] ---[ end trace 0000000000000000 ]---
Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
Currently, suspend_report_result() prints only function information.
If multiple drivers use a common PM function, there is no way to tell
which device the failing callback was invoked for.
A device pointer is needed to identify the failing device.
For example:
PM: dpm_run_callback(): pnp_bus_suspend+0x0/0x10 returns 0
PM: dpm_run_callback(): pci_pm_suspend+0x0/0x150 returns 0
become after the change:
serial 00:05: PM: dpm_run_callback(): pnp_bus_suspend+0x0/0x10 returns 0
pci 0000:00:01.3: PM: dpm_run_callback(): pci_pm_suspend+0x0/0x150 returns 0
Signed-off-by: Youngjin Jang <yj84.jang@samsung.com>
[ rjw: Changelog edits ]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
Commit 2aa36604e824 ("PM: sleep: Avoid calling put_device() under
dpm_list_mtx") forgot to update the while () loop termination
condition to also break the loop if error is nonzero, which
causes the loop to become infinite if device_prepare() returns
an error for one device.
Add the missing !error check.
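A sketch of the fixed loop (prepare_next_device() is a made-up stand-in for the
real loop body):
static int dpm_prepare_loop(void)
{
        int error = 0;

        /* the missing "&& !error" is what let the loop spin forever */
        while (!list_empty(&dpm_list) && !error)
                error = prepare_next_device();

        return error;
}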
Fixes: 2aa36604e824 ("PM: sleep: Avoid calling put_device() under dpm_list_mtx")
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reported-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
Cc: All applicable <stable@vger.kernel.org>
|
|
It is generally unsafe to call put_device() with dpm_list_mtx held,
because the given device's release routine may carry out an action
depending on that lock which then may deadlock, so modify the
system-wide suspend and resume of devices to always drop dpm_list_mtx
before calling put_device() (and adjust white space somewhat while
at it).
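The recurring pattern introduced by this change, as a sketch:
static void dpm_put_device_unlocked(struct device *dev)
{
        mutex_unlock(&dpm_list_mtx);

        put_device(dev);        /* the release routine may take other locks safely here */

        mutex_lock(&dpm_list_mtx);
}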
For instance, this prevents the following splat from showing up in
the kernel log after a system resume in certain configurations:
[ 3290.969514] ======================================================
[ 3290.969517] WARNING: possible circular locking dependency detected
[ 3290.969519] 5.15.0+ #2420 Tainted: G S
[ 3290.969523] ------------------------------------------------------
[ 3290.969525] systemd-sleep/4553 is trying to acquire lock:
[ 3290.969529] ffff888117ab1138 ((wq_completion)hci0#2){+.+.}-{0:0}, at: flush_workqueue+0x87/0x4a0
[ 3290.969554]
but task is already holding lock:
[ 3290.969556] ffffffff8280fca8 (dpm_list_mtx){+.+.}-{3:3}, at: dpm_resume+0x12e/0x3e0
[ 3290.969571]
which lock already depends on the new lock.
[ 3290.969573]
the existing dependency chain (in reverse order) is:
[ 3290.969575]
-> #3 (dpm_list_mtx){+.+.}-{3:3}:
[ 3290.969583] __mutex_lock+0x9d/0xa30
[ 3290.969591] device_pm_add+0x2e/0xe0
[ 3290.969597] device_add+0x4d5/0x8f0
[ 3290.969605] hci_conn_add_sysfs+0x43/0xb0 [bluetooth]
[ 3290.969689] hci_conn_complete_evt.isra.71+0x124/0x750 [bluetooth]
[ 3290.969747] hci_event_packet+0xd6c/0x28a0 [bluetooth]
[ 3290.969798] hci_rx_work+0x213/0x640 [bluetooth]
[ 3290.969842] process_one_work+0x2aa/0x650
[ 3290.969851] worker_thread+0x39/0x400
[ 3290.969859] kthread+0x142/0x170
[ 3290.969865] ret_from_fork+0x22/0x30
[ 3290.969872]
-> #2 (&hdev->lock){+.+.}-{3:3}:
[ 3290.969881] __mutex_lock+0x9d/0xa30
[ 3290.969887] hci_event_packet+0xba/0x28a0 [bluetooth]
[ 3290.969935] hci_rx_work+0x213/0x640 [bluetooth]
[ 3290.969978] process_one_work+0x2aa/0x650
[ 3290.969985] worker_thread+0x39/0x400
[ 3290.969993] kthread+0x142/0x170
[ 3290.969999] ret_from_fork+0x22/0x30
[ 3290.970004]
-> #1 ((work_completion)(&hdev->rx_work)){+.+.}-{0:0}:
[ 3290.970013] process_one_work+0x27d/0x650
[ 3290.970020] worker_thread+0x39/0x400
[ 3290.970028] kthread+0x142/0x170
[ 3290.970033] ret_from_fork+0x22/0x30
[ 3290.970038]
-> #0 ((wq_completion)hci0#2){+.+.}-{0:0}:
[ 3290.970047] __lock_acquire+0x15cb/0x1b50
[ 3290.970054] lock_acquire+0x26c/0x300
[ 3290.970059] flush_workqueue+0xae/0x4a0
[ 3290.970066] drain_workqueue+0xa1/0x130
[ 3290.970073] destroy_workqueue+0x34/0x1f0
[ 3290.970081] hci_release_dev+0x49/0x180 [bluetooth]
[ 3290.970130] bt_host_release+0x1d/0x30 [bluetooth]
[ 3290.970195] device_release+0x33/0x90
[ 3290.970201] kobject_release+0x63/0x160
[ 3290.970211] dpm_resume+0x164/0x3e0
[ 3290.970215] dpm_resume_end+0xd/0x20
[ 3290.970220] suspend_devices_and_enter+0x1a4/0xba0
[ 3290.970229] pm_suspend+0x26b/0x310
[ 3290.970236] state_store+0x42/0x90
[ 3290.970243] kernfs_fop_write_iter+0x135/0x1b0
[ 3290.970251] new_sync_write+0x125/0x1c0
[ 3290.970257] vfs_write+0x360/0x3c0
[ 3290.970263] ksys_write+0xa7/0xe0
[ 3290.970269] do_syscall_64+0x3a/0x80
[ 3290.970276] entry_SYSCALL_64_after_hwframe+0x44/0xae
[ 3290.970284]
other info that might help us debug this:
[ 3290.970285] Chain exists of:
(wq_completion)hci0#2 --> &hdev->lock --> dpm_list_mtx
[ 3290.970297] Possible unsafe locking scenario:
[ 3290.970299] CPU0 CPU1
[ 3290.970300] ---- ----
[ 3290.970302] lock(dpm_list_mtx);
[ 3290.970306] lock(&hdev->lock);
[ 3290.970310] lock(dpm_list_mtx);
[ 3290.970314] lock((wq_completion)hci0#2);
[ 3290.970319]
*** DEADLOCK ***
[ 3290.970321] 7 locks held by systemd-sleep/4553:
[ 3290.970325] #0: ffff888103bcd448 (sb_writers#4){.+.+}-{0:0}, at: ksys_write+0xa7/0xe0
[ 3290.970341] #1: ffff888115a14488 (&of->mutex){+.+.}-{3:3}, at: kernfs_fop_write_iter+0x103/0x1b0
[ 3290.970355] #2: ffff888100f719e0 (kn->active#233){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x10c/0x1b0
[ 3290.970369] #3: ffffffff82661048 (autosleep_lock){+.+.}-{3:3}, at: state_store+0x12/0x90
[ 3290.970384] #4: ffffffff82658ac8 (system_transition_mutex){+.+.}-{3:3}, at: pm_suspend+0x9f/0x310
[ 3290.970399] #5: ffffffff827f2a48 (acpi_scan_lock){+.+.}-{3:3}, at: acpi_suspend_begin+0x4c/0x80
[ 3290.970416] #6: ffffffff8280fca8 (dpm_list_mtx){+.+.}-{3:3}, at: dpm_resume+0x12e/0x3e0
[ 3290.970428]
stack backtrace:
[ 3290.970431] CPU: 3 PID: 4553 Comm: systemd-sleep Tainted: G S 5.15.0+ #2420
[ 3290.970438] Hardware name: Dell Inc. XPS 13 9380/0RYJWW, BIOS 1.5.0 06/03/2019
[ 3290.970441] Call Trace:
[ 3290.970446] dump_stack_lvl+0x44/0x57
[ 3290.970454] check_noncircular+0x105/0x120
[ 3290.970468] ? __lock_acquire+0x15cb/0x1b50
[ 3290.970474] __lock_acquire+0x15cb/0x1b50
[ 3290.970487] lock_acquire+0x26c/0x300
[ 3290.970493] ? flush_workqueue+0x87/0x4a0
[ 3290.970503] ? __raw_spin_lock_init+0x3b/0x60
[ 3290.970510] ? lockdep_init_map_type+0x58/0x240
[ 3290.970519] flush_workqueue+0xae/0x4a0
[ 3290.970526] ? flush_workqueue+0x87/0x4a0
[ 3290.970544] ? drain_workqueue+0xa1/0x130
[ 3290.970552] drain_workqueue+0xa1/0x130
[ 3290.970561] destroy_workqueue+0x34/0x1f0
[ 3290.970572] hci_release_dev+0x49/0x180 [bluetooth]
[ 3290.970624] bt_host_release+0x1d/0x30 [bluetooth]
[ 3290.970687] device_release+0x33/0x90
[ 3290.970695] kobject_release+0x63/0x160
[ 3290.970705] dpm_resume+0x164/0x3e0
[ 3290.970710] ? dpm_resume_early+0x251/0x3b0
[ 3290.970718] dpm_resume_end+0xd/0x20
[ 3290.970723] suspend_devices_and_enter+0x1a4/0xba0
[ 3290.970737] pm_suspend+0x26b/0x310
[ 3290.970746] state_store+0x42/0x90
[ 3290.970755] kernfs_fop_write_iter+0x135/0x1b0
[ 3290.970764] new_sync_write+0x125/0x1c0
[ 3290.970777] vfs_write+0x360/0x3c0
[ 3290.970785] ksys_write+0xa7/0xe0
[ 3290.970794] do_syscall_64+0x3a/0x80
[ 3290.970803] entry_SYSCALL_64_after_hwframe+0x44/0xae
[ 3290.970811] RIP: 0033:0x7f41b1328164
[ 3290.970819] Code: 00 f7 d8 64 89 02 48 c7 c0 ff ff ff ff eb b7 0f 1f 80 00 00 00 00 8b 05 4a d2 2c 00 48 63 ff 85 c0 75 13 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 54 f3 c3 66 90 55 53 48 89 d5 48 89 f3 48 83
[ 3290.970824] RSP: 002b:00007ffe6ae21b28 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
[ 3290.970831] RAX: ffffffffffffffda RBX: 0000000000000004 RCX: 00007f41b1328164
[ 3290.970836] RDX: 0000000000000004 RSI: 000055965e651070 RDI: 0000000000000004
[ 3290.970839] RBP: 000055965e651070 R08: 000055965e64f390 R09: 00007f41b1e3d1c0
[ 3290.970843] R10: 000000000000000a R11: 0000000000000246 R12: 0000000000000004
[ 3290.970846] R13: 0000000000000001 R14: 000055965e64f2b0 R15: 0000000000000004
Cc: All applicable <stable@vger.kernel.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
In the cpuidle-psci case, runtime PM in combination with the generic PM
domain (genpd) may be used when entering/exiting a shared idle state. More
precisely, genpd relies on runtime PM to be enabled for the attached device
(in this case it belongs to a CPU), to properly manage the reference
counting of its PM domain.
This works fine most of the time, but during system suspend in
dpm_suspend_late(), the PM core disables runtime PM for all devices. Beyond
this point, calls to pm_runtime_get_sync() to runtime resume a device may
fail and therefore it could also mess up the reference counting in genpd.
To fix this problem, let's call wake_up_all_idle_cpus() in
dpm_suspend_late(), prior to disabling runtime PM. In this way a device
that belongs to a CPU, becomes runtime resumed through cpuidle-psci and
stays like that, because the runtime PM usage count has been bumped in
device_prepare().
Diagnosed-by: Maulik Shah <mkshah@codeaurora.org>
Suggested-by: Rafael J. Wysocki <rafael@kernel.org>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
Commit 8651f97bd951 ("PM / cpuidle: System resume hang fix with
cpuidle") that introduced cpuidle pausing during system suspend
did that to work around a platform firmware issue causing systems
to hang during resume if CPUs were allowed to enter idle states
in the system suspend and resume code paths.
However, pausing cpuidle before the last phase of suspending
devices is the source of an otherwise arbitrary difference between
the suspend-to-idle path and other system suspend variants, so it is
cleaner to do that later, before taking secondary CPUs offline (it
is still safer to take secondary CPUs offline with cpuidle paused,
though).
Modify the code accordingly, but in order to avoid code duplication,
introduce new wrapper functions, pm_sleep_disable_secondary_cpus()
and pm_sleep_enable_secondary_cpus(), to combine cpuidle_pause()
and cpuidle_resume(), respectively, with the handling of secondary
CPUs during system-wide transitions to sleep states.
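A sketch of the wrappers as described above (the exact call order is assumed
from the reasoning in this changelog):
static inline void pm_sleep_disable_secondary_cpus(void)
{
        cpuidle_pause();
        suspend_disable_secondary_cpus();
}

static inline void pm_sleep_enable_secondary_cpus(void)
{
        suspend_enable_secondary_cpus();
        cpuidle_resume();
}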
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
Tested-by: Ulf Hansson <ulf.hansson@linaro.org>
|
|
It is pointless to pause cpuidle in the suspend-to-idle path,
because it is going to be resumed in the same path later and
pausing it does not serve any particular purpose in that case.
Rework the code to avoid doing that.
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
Tested-by: Ulf Hansson <ulf.hansson@linaro.org>
|
|
There is no reason to allow "syscore" devices to runtime-suspend
during system-wide PM transitions, because they are subject to the
same possible failure modes as any other devices in that respect.
Accordingly, change device_prepare() and device_complete() to call
pm_runtime_get_noresume() and pm_runtime_put(), respectively, for
"syscore" devices too.
Fixes: 057d51a1268f ("Merge branch 'pm-sleep'")
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Cc: 3.10+ <stable@vger.kernel.org> # 3.10+
Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
|
|
There are variables (power.may_skip_resume and power.must_resume) and the
DPM_FLAG_MAY_SKIP_RESUME flag to control the resume of devices after a
system-wide suspend transition.
Setting the DPM_FLAG_MAY_SKIP_RESUME flag means that the driver allows
its "noirq" and "early" resume callbacks to be skipped if the device
can be left in suspend after a system-wide transition into the working
state. The PM core decides whether or not the driver's "noirq" and
"early" resume callbacks should be skipped with the help of
dev_pm_skip_resume(), which checks the power.may_skip_resume variable.
The power.must_resume variable is set to false in __device_suspend()
without checking the device's DPM_FLAG_MAY_SKIP_RESUME setting.
In the problematic scenario, all devices suspend successfully in the
"late" phase, but some device fails to suspend in the "noirq" phase.
The devices that suspended successfully in the "late" phase then never
run __device_suspend_noirq(), so their power.must_resume is never set
to true and they are not resumed in the "early" resume phase.
Add a check of the device's DPM_FLAG_MAY_SKIP_RESUME flag before
setting the power.must_resume variable in __device_suspend().
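A sketch of the kind of check described (not necessarily the exact mainline
diff; the wrapper name is made up):
static void device_suspend_set_resume_flags(struct device *dev)
{
        dev->power.may_skip_resume = true;
        /* only devices whose drivers opted in may have their resume skipped */
        dev->power.must_resume =
                !dev_pm_test_driver_flags(dev, DPM_FLAG_MAY_SKIP_RESUME);
}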
Fixes: 6e176bf8d461 ("PM: sleep: core: Do not skip callbacks in the resume phase")
Signed-off-by: Prasad Sodagudi <psodagud@codeaurora.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
Use ktime_us_delta() to make the debug log more precise instead of
shifting the return value of ktime_to_ns() applied to a ktime_sub()
result by 10 bit positions to the right.
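In sketch form, with variable and helper names assumed:
static s64 suspend_duration_us(ktime_t starttime, ktime_t endtime)
{
        /* was: ktime_to_ns(ktime_sub(endtime, starttime)) >> 10 (approximate) */
        return ktime_us_delta(endtime, starttime);      /* exact microseconds */
}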
Signed-off-by: Mark-PK Tsai <mark-pk.tsai@mediatek.com>
[ rjw: Changelog rewrite, subject edits ]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
Use dev_printk() when possible to make messages more consistent with other
device-related messages.
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
Add a dev_wakeup_path() helper to avoid spreading the
dev->power.wakeup_path test across drivers.
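The helper is expected to be a trivial accessor, along these lines:
static inline bool dev_wakeup_path(struct device *dev)
{
        return dev->power.wakeup_path;
}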
Signed-off-by: Patrice Chotard <patrice.chotard@st.com>
Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
Currently there are 4 driver flags to control system suspend/resume
behavior: DPM_FLAG_NO_DIRECT_COMPLETE, DPM_FLAG_SMART_PREPARE,
DPM_FLAG_SMART_SUSPEND and DPM_FLAG_MAY_SKIP_RESUME.
Print these flags during suspend/resume so as to get a brief
understanding of the expected behavior of each device, and to
facilitate suspend/resume debugging/tuning.
To enable this tracing:
echo 'file drivers/base/power/main.c +p' >
/sys/kernel/debug/dynamic_debug/control
Signed-off-by: Chen Yu <yu.c.chen@intel.com>
[ rjw: Subject and changelog edits ]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
A break following a return statement is pointless, so drop it.
Signed-off-by: Tom Rix <trix@redhat.com>
[ rjw: Subject and changelog edits ]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
It has been reported that system-wide suspend may be aborted in the
absence of any wakeup events due to unforeseen interactions between it
and the runtime PM framework.
One failing scenario is when there are multiple devices sharing an
ACPI power resource and runtime-resume needs to be carried out for
one of them during system-wide suspend (for example, because it needs
to be reconfigured before the whole system goes to sleep). In that
case, the runtime-resume of that device involves turning the ACPI
power resource "on" which in turn causes runtime-resume requests
to be queued up for all of the other devices sharing it. Those
requests go to the runtime PM workqueue which is frozen during
system-wide suspend, so they are not actually taken care of until
the resume of the whole system, but the pm_runtime_barrier()
call in __device_suspend() sees them and triggers system wakeup
events for them which then cause the system-wide suspend to be
aborted if wakeup source objects are in active use.
Of course, the logic that leads to triggering those wakeup events is
questionable in the first place, because clearly there are cases in
which a pending runtime resume request for a device is not connected
to any real wakeup events in any way (like the one above). Moreover,
it is racy, because the device may be resuming already by the time
the pm_runtime_barrier() runs and so if the driver doesn't take care
of signaling the wakeup event as appropriate, it will be lost.
However, if the driver does take care of that, the extra
pm_wakeup_event() call in the core is redundant.
Accordingly, drop the conditional pm_wakeup_event() call from
__device_suspend() and make the latter call pm_runtime_barrier()
alone. Also modify the comment next to that call to reflect the new
code and extend it to mention the need to avoid unwanted interactions
between runtime PM and system-wide device suspend callbacks.
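A before/after sketch of the __device_suspend() change (reconstructed and
simplified, with a made-up wrapper name):
static void device_suspend_runtime_barrier(struct device *dev)
{
        /*
         * was:
         *      if (pm_runtime_barrier(dev) && device_may_wakeup(dev))
         *              pm_wakeup_event(dev, 0);
         */
        pm_runtime_barrier(dev);        /* just wait for runtime PM in flight */
}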
Fixes: 1e2ef05bb8cf8 ("PM: Limit race conditions between runtime PM and system sleep (v2)")
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Alan Stern <stern@rowland.harvard.edu>
Reported-by: Utkarsh H Patel <utkarsh.h.patel@intel.com>
Tested-by: Utkarsh H Patel <utkarsh.h.patel@intel.com>
Tested-by: Pengfei Xu <pengfei.xu@intel.com>
Cc: All applicable <stable@vger.kernel.org>
|
|
Now that the last users of show_stack() have been converted to use an
explicit log level, show_stack_loglvl() can drop its redundant suffix and
become once again the well-known show_stack().
Signed-off-by: Dmitry Safonov <dima@arista.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Link: http://lkml.kernel.org/r/20200418201944.482088-51-dima@arista.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Align with other watchdog messages printed just before panic: use KERN_EMERG.
Signed-off-by: Dmitry Safonov <dima@arista.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Len Brown <len.brown@intel.com>
Cc: Pavel Machek <pavel@ucw.cz>
Link: http://lkml.kernel.org/r/20200418201944.482088-47-dima@arista.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
|