author    Linus Torvalds <torvalds@linux-foundation.org>  2017-11-13 19:43:50 -0800
committer Linus Torvalds <torvalds@linux-foundation.org>  2017-11-13 19:43:50 -0800
commit    bd2cd7d5a8f83ddc761025f42a3ca8e56351a6cc
tree      6ea70f09f32544f895020e198dac632145332cc2 /drivers
parent    x86 / CPU: Avoid unnecessary IPIs in arch_freq_get_on_cpu()
parent    Merge branches 'pm-devfreq' and 'pm-tools'
Merge tag 'pm-4.15-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Pull power management updates from Rafael Wysocki:

"There are no real big-ticket items here this time.

The most noticeable change is probably the relocation of the OPP (Operating Performance Points) framework to its own directory under drivers/, as it has grown big enough for that. Also, Viresh is now going to maintain it and send pull requests for it to me, so you will see this change in the git history going forward (but still not right now).

Another noticeable set of changes is the modification of the PM core, the PCI subsystem and the ACPI PM domain to allow for more integration between system-wide suspend/resume and runtime PM. For now it is just a way to avoid resuming devices from runtime suspend unnecessarily during system suspend (if the driver sets a flag to indicate its readiness for that); a mechanism to let devices stay suspended after system resume is in the works.

In addition to that, there are changes related to supporting frequency-invariant CPU utilization metrics in the scheduler and in the schedutil cpufreq governor on ARM, and changes adding support for device performance states to the generic power domains (genpd) framework.

The rest is mostly fixes and cleanups of various sorts.

Specifics:

 - Relocate the OPP (Operating Performance Points) framework to its own directory under drivers/ and add support for power domain performance states to it (Viresh Kumar).

 - Modify the PM core, the PCI bus type and the ACPI PM domain to support power management driver flags that let device drivers specify their capabilities and preferences for handling devices with runtime PM enabled during system suspend/resume, and clean up that code somewhat (Rafael Wysocki, Ulf Hansson).

 - Add frequency-invariant accounting support to the task scheduler on ARM and ARM64 (Dietmar Eggemann).

 - Fix the PM QoS device resume latency framework to prevent "no restriction" requests from overriding requests with specific requirements, and drop the confusing PM_QOS_FLAG_REMOTE_WAKEUP device PM QoS flag (Rafael Wysocki).

 - Drop legacy class suspend/resume operations from the PM core and drop legacy bus type suspend and resume callbacks from ARM/locomo (Rafael Wysocki).

 - Add min/max frequency support to devfreq and clean it up somewhat (Chanwoo Choi).

 - Rework wakeup support in the generic power domains (genpd) framework and update some of its users accordingly (Geert Uytterhoeven).

 - Convert timers in the PM core to use timer_setup() (Kees Cook).

 - Add support for exposing the SLP_S0 (Low Power S0 Idle) residency counter based on the LPIT ACPI table on Intel platforms (Srinivas Pandruvada).

 - Add per-CPU PM QoS resume latency support to the ladder cpuidle governor (Ramesh Thomas).

 - Fix a deadlock between the wakeup notify handler and the notifier removal in the ACPI core (Ville Syrjälä).

 - Fix a cpufreq schedutil governor issue that sometimes caused it to use stale cached frequency values (Viresh Kumar).

 - Fix an issue in the system suspend core support code that caused wakeup event detection to fail in some cases (Rajat Jain).

 - Fix the generic power domains (genpd) framework to prevent the PM core from using the direct-complete optimization with it, as that is guaranteed to fail (Ulf Hansson).

 - Fix a minor issue in the cpuidle core and clean it up a bit (Gaurav Jindal, Nicholas Piggin).

 - Fix and clean up the intel_idle and ARM cpuidle drivers (Jason Baron, Len Brown, Leo Yan).

 - Fix a couple of minor issues in the OPP framework and clean it up (Arvind Yadav, Fabio Estevam, Sudeep Holla, Tobias Jordan).

 - Fix and clean up some cpufreq drivers and fix a minor issue in the cpufreq statistics code (Arvind Yadav, Bhumika Goyal, Fabio Estevam, Gautham Shenoy, Gustavo Silva, Marek Szyprowski, Masahiro Yamada, Robert Jarzmik, Zumeng Chen).

 - Fix minor issues in the system suspend and hibernation core, in power management documentation and in the AVS (Adaptive Voltage Scaling) framework (Helge Deller, Himanshu Jha, Joe Perches, Rafael Wysocki).

 - Fix some issues in the cpupower utility and document that Shuah Khan is going to maintain it going forward (Prarit Bhargava, Shuah Khan)"

* tag 'pm-4.15-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (88 commits)
  tools/power/cpupower: add libcpupower.so.0.0.1 to .gitignore
  tools/power/cpupower: Add 64 bit library detection
  intel_idle: Graceful probe failure when MWAIT is disabled
  cpufreq: schedutil: Reset cached_raw_freq when not in sync with next_freq
  freezer: Fix typo in freezable_schedule_timeout() comment
  PM / s2idle: Clear the events_check_enabled flag
  cpufreq: stats: Handle the case when trans_table goes beyond PAGE_SIZE
  cpufreq: arm_big_little: make cpufreq_arm_bL_ops structures const
  cpufreq: arm_big_little: make function arguments and structure pointer const
  cpuidle: Avoid assignment in if () argument
  cpuidle: Clean up cpuidle_enable_device() error handling a bit
  ACPI / PM: Fix acpi_pm_notifier_lock vs flush_workqueue() deadlock
  PM / Domains: Fix genpd to deal with drivers returning 1 from ->prepare()
  cpuidle: ladder: Add per CPU PM QoS resume latency support
  PM / QoS: Fix device resume latency framework
  PM / domains: Rework governor code to be more consistent
  PM / Domains: Remove gpd_dev_ops.active_wakeup() callback
  soc: rockchip: power-domain: Use GENPD_FLAG_ACTIVE_WAKEUP
  soc: mediatek: Use GENPD_FLAG_ACTIVE_WAKEUP
  ARM: shmobile: pm-rmobile: Use GENPD_FLAG_ACTIVE_WAKEUP
  ...
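The DPM_FLAG_SMART_PREPARE and DPM_FLAG_SMART_SUSPEND driver flags referred to above are set via dev_pm_set_driver_flags(), all of which appear in the diff below. A minimal sketch of a PCI driver opting in; the driver name and probe body are hypothetical, only the helper and the flags come from this series:

#include <linux/pci.h>
#include <linux/pm.h>

static int foo_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	/*
	 * Ask the middle layer to honor this driver's ->prepare() return
	 * value when considering the direct-complete optimization
	 * (SMART_PREPARE), and allow the device to be left runtime-suspended
	 * across system suspend instead of being resumed first
	 * (SMART_SUSPEND).
	 */
	dev_pm_set_driver_flags(&pdev->dev,
				DPM_FLAG_SMART_PREPARE | DPM_FLAG_SMART_SUSPEND);
	return 0;
}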
Diffstat (limited to 'drivers')
-rw-r--r-- drivers/Kconfig | 2
-rw-r--r-- drivers/Makefile | 1
-rw-r--r-- drivers/acpi/Kconfig | 5
-rw-r--r-- drivers/acpi/Makefile | 1
-rw-r--r-- drivers/acpi/acpi_lpit.c | 162
-rw-r--r-- drivers/acpi/acpi_lpss.c | 95
-rw-r--r-- drivers/acpi/device_pm.c | 277
-rw-r--r-- drivers/acpi/internal.h | 6
-rw-r--r-- drivers/acpi/osl.c | 42
-rw-r--r-- drivers/acpi/scan.c | 1
-rw-r--r-- drivers/base/arch_topology.c | 29
-rw-r--r-- drivers/base/cpu.c | 3
-rw-r--r-- drivers/base/dd.c | 2
-rw-r--r-- drivers/base/power/Makefile | 1
-rw-r--r-- drivers/base/power/domain.c | 226
-rw-r--r-- drivers/base/power/domain_governor.c | 73
-rw-r--r-- drivers/base/power/generic_ops.c | 23
-rw-r--r-- drivers/base/power/main.c | 53
-rw-r--r-- drivers/base/power/qos.c | 5
-rw-r--r-- drivers/base/power/runtime.c | 9
-rw-r--r-- drivers/base/power/sysfs.c | 53
-rw-r--r-- drivers/base/power/wakeup.c | 11
-rw-r--r-- drivers/cpufreq/arm_big_little.c | 16
-rw-r--r-- drivers/cpufreq/arm_big_little.h | 4
-rw-r--r-- drivers/cpufreq/arm_big_little_dt.c | 2
-rw-r--r-- drivers/cpufreq/cpufreq-dt-platdev.c | 3
-rw-r--r-- drivers/cpufreq/cpufreq-dt.c | 12
-rw-r--r-- drivers/cpufreq/cpufreq.c | 6
-rw-r--r-- drivers/cpufreq/cpufreq_stats.c | 7
-rw-r--r-- drivers/cpufreq/imx6q-cpufreq.c | 85
-rw-r--r-- drivers/cpufreq/powernow-k8.c | 2
-rw-r--r-- drivers/cpufreq/pxa2xx-cpufreq.c | 191
-rw-r--r-- drivers/cpufreq/scpi-cpufreq.c | 2
-rw-r--r-- drivers/cpufreq/spear-cpufreq.c | 4
-rw-r--r-- drivers/cpufreq/speedstep-lib.c | 2
-rw-r--r-- drivers/cpufreq/ti-cpufreq.c | 6
-rw-r--r-- drivers/cpufreq/vexpress-spc-cpufreq.c | 2
-rw-r--r-- drivers/cpuidle/cpuidle-arm.c | 153
-rw-r--r-- drivers/cpuidle/cpuidle.c | 14
-rw-r--r-- drivers/cpuidle/governors/ladder.c | 7
-rw-r--r-- drivers/cpuidle/governors/menu.c | 4
-rw-r--r-- drivers/devfreq/devfreq.c | 139
-rw-r--r-- drivers/devfreq/exynos-bus.c | 5
-rw-r--r-- drivers/devfreq/governor_passive.c | 2
-rw-r--r-- drivers/devfreq/governor_performance.c | 2
-rw-r--r-- drivers/devfreq/governor_powersave.c | 2
-rw-r--r-- drivers/devfreq/governor_simpleondemand.c | 2
-rw-r--r-- drivers/devfreq/governor_userspace.c | 2
-rw-r--r-- drivers/devfreq/rk3399_dmc.c | 2
-rw-r--r-- drivers/gpu/drm/i915/i915_drv.c | 2
-rw-r--r-- drivers/idle/intel_idle.c | 23
-rw-r--r-- drivers/misc/mei/pci-me.c | 2
-rw-r--r-- drivers/misc/mei/pci-txe.c | 2
-rw-r--r-- drivers/opp/Kconfig | 13
-rw-r--r-- drivers/opp/Makefile (renamed from drivers/base/power/opp/Makefile) | 0
-rw-r--r-- drivers/opp/core.c (renamed from drivers/base/power/opp/core.c) | 143
-rw-r--r-- drivers/opp/cpu.c (renamed from drivers/base/power/opp/cpu.c) | 0
-rw-r--r-- drivers/opp/debugfs.c (renamed from drivers/base/power/opp/debugfs.c) | 10
-rw-r--r-- drivers/opp/of.c (renamed from drivers/base/power/opp/of.c) | 6
-rw-r--r-- drivers/opp/opp.h (renamed from drivers/base/power/opp/opp.h) | 6
-rw-r--r-- drivers/pci/pci-driver.c | 134
-rw-r--r-- drivers/pci/pci.c | 3
-rw-r--r-- drivers/power/avs/smartreflex.c | 10
-rw-r--r-- drivers/soc/mediatek/mtk-scpsys.c | 14
-rw-r--r-- drivers/soc/rockchip/pm_domains.c | 14
65 files changed, 1373 insertions(+), 767 deletions(-)
diff --git a/drivers/Kconfig b/drivers/Kconfig
index 1d7af3c2ff27..152744c5ef0f 100644
--- a/drivers/Kconfig
+++ b/drivers/Kconfig
@@ -209,4 +209,6 @@ source "drivers/tee/Kconfig"
source "drivers/mux/Kconfig"
+source "drivers/opp/Kconfig"
+
endmenu
diff --git a/drivers/Makefile b/drivers/Makefile
index d242d3514d30..1d034b680431 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -126,6 +126,7 @@ obj-$(CONFIG_ACCESSIBILITY) += accessibility/
obj-$(CONFIG_ISDN) += isdn/
obj-$(CONFIG_EDAC) += edac/
obj-$(CONFIG_EISA) += eisa/
+obj-$(CONFIG_PM_OPP) += opp/
obj-$(CONFIG_CPU_FREQ) += cpufreq/
obj-$(CONFIG_CPU_IDLE) += cpuidle/
obj-y += mmc/
diff --git a/drivers/acpi/Kconfig b/drivers/acpi/Kconfig
index 5b1938f4b626..4cb763a01f4d 100644
--- a/drivers/acpi/Kconfig
+++ b/drivers/acpi/Kconfig
@@ -81,6 +81,11 @@ endif
config ACPI_SPCR_TABLE
bool
+config ACPI_LPIT
+ bool
+ depends on X86_64
+ default y
+
config ACPI_SLEEP
bool
depends on SUSPEND || HIBERNATION
diff --git a/drivers/acpi/Makefile b/drivers/acpi/Makefile
index cd1abc9bc325..168e14d29d31 100644
--- a/drivers/acpi/Makefile
+++ b/drivers/acpi/Makefile
@@ -57,6 +57,7 @@ acpi-$(CONFIG_DEBUG_FS) += debugfs.o
acpi-$(CONFIG_ACPI_NUMA) += numa.o
acpi-$(CONFIG_ACPI_PROCFS_POWER) += cm_sbs.o
acpi-y += acpi_lpat.o
+acpi-$(CONFIG_ACPI_LPIT) += acpi_lpit.o
acpi-$(CONFIG_ACPI_GENERIC_GSI) += irq.o
acpi-$(CONFIG_ACPI_WATCHDOG) += acpi_watchdog.o
diff --git a/drivers/acpi/acpi_lpit.c b/drivers/acpi/acpi_lpit.c
new file mode 100644
index 000000000000..e94e478dd18b
--- /dev/null
+++ b/drivers/acpi/acpi_lpit.c
@@ -0,0 +1,162 @@
+/*
+ * acpi_lpit.c - LPIT table processing functions
+ *
+ * Copyright (C) 2017 Intel Corporation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version
+ * 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/cpu.h>
+#include <linux/acpi.h>
+#include <asm/msr.h>
+#include <asm/tsc.h>
+
+struct lpit_residency_info {
+ struct acpi_generic_address gaddr;
+ u64 frequency;
+ void __iomem *iomem_addr;
+};
+
+/* Storage for memory-mapped and FFH-based entries */
+static struct lpit_residency_info residency_info_mem;
+static struct lpit_residency_info residency_info_ffh;
+
+static int lpit_read_residency_counter_us(u64 *counter, bool io_mem)
+{
+ int err;
+
+ if (io_mem) {
+ u64 count = 0;
+ int error;
+
+ error = acpi_os_read_iomem(residency_info_mem.iomem_addr, &count,
+ residency_info_mem.gaddr.bit_width);
+ if (error)
+ return error;
+
+ *counter = div64_u64(count * 1000000ULL, residency_info_mem.frequency);
+ return 0;
+ }
+
+ err = rdmsrl_safe(residency_info_ffh.gaddr.address, counter);
+ if (!err) {
+ u64 mask = GENMASK_ULL(residency_info_ffh.gaddr.bit_offset +
+ residency_info_ffh.gaddr.bit_width - 1,
+ residency_info_ffh.gaddr.bit_offset);
+
+ *counter &= mask;
+ *counter >>= residency_info_ffh.gaddr.bit_offset;
+ *counter = div64_u64(*counter * 1000000ULL, residency_info_ffh.frequency);
+ return 0;
+ }
+
+ return -ENODATA;
+}
+
+static ssize_t low_power_idle_system_residency_us_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ u64 counter;
+ int ret;
+
+ ret = lpit_read_residency_counter_us(&counter, true);
+ if (ret)
+ return ret;
+
+ return sprintf(buf, "%llu\n", counter);
+}
+static DEVICE_ATTR_RO(low_power_idle_system_residency_us);
+
+static ssize_t low_power_idle_cpu_residency_us_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ u64 counter;
+ int ret;
+
+ ret = lpit_read_residency_counter_us(&counter, false);
+ if (ret)
+ return ret;
+
+ return sprintf(buf, "%llu\n", counter);
+}
+static DEVICE_ATTR_RO(low_power_idle_cpu_residency_us);
+
+int lpit_read_residency_count_address(u64 *address)
+{
+ if (!residency_info_mem.gaddr.address)
+ return -EINVAL;
+
+ *address = residency_info_mem.gaddr.address;
+
+ return 0;
+}
+
+static void lpit_update_residency(struct lpit_residency_info *info,
+ struct acpi_lpit_native *lpit_native)
+{
+ info->frequency = lpit_native->counter_frequency ?
+ lpit_native->counter_frequency : tsc_khz * 1000;
+ if (!info->frequency)
+ info->frequency = 1;
+
+ info->gaddr = lpit_native->residency_counter;
+ if (info->gaddr.space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY) {
+ info->iomem_addr = ioremap_nocache(info->gaddr.address,
+ info->gaddr.bit_width / 8);
+ if (!info->iomem_addr)
+ return;
+
+ /* Silently fail if the cpuidle attribute group is not present */
+ sysfs_add_file_to_group(&cpu_subsys.dev_root->kobj,
+ &dev_attr_low_power_idle_system_residency_us.attr,
+ "cpuidle");
+ } else if (info->gaddr.space_id == ACPI_ADR_SPACE_FIXED_HARDWARE) {
+ /* Silently fail if the cpuidle attribute group is not present */
+ sysfs_add_file_to_group(&cpu_subsys.dev_root->kobj,
+ &dev_attr_low_power_idle_cpu_residency_us.attr,
+ "cpuidle");
+ }
+}
+
+static void lpit_process(u64 begin, u64 end)
+{
+ while (begin + sizeof(struct acpi_lpit_native) < end) {
+ struct acpi_lpit_native *lpit_native = (struct acpi_lpit_native *)begin;
+
+ if (!lpit_native->header.type && !lpit_native->header.flags) {
+ if (lpit_native->residency_counter.space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY &&
+ !residency_info_mem.gaddr.address) {
+ lpit_update_residency(&residency_info_mem, lpit_native);
+ } else if (lpit_native->residency_counter.space_id == ACPI_ADR_SPACE_FIXED_HARDWARE &&
+ !residency_info_ffh.gaddr.address) {
+ lpit_update_residency(&residency_info_ffh, lpit_native);
+ }
+ }
+ begin += lpit_native->header.length;
+ }
+}
+
+void acpi_init_lpit(void)
+{
+ acpi_status status;
+ u64 lpit_begin;
+ struct acpi_table_lpit *lpit;
+
+ status = acpi_get_table(ACPI_SIG_LPIT, 0, (struct acpi_table_header **)&lpit);
+
+ if (ACPI_FAILURE(status))
+ return;
+
+ lpit_begin = (u64)lpit + sizeof(*lpit);
+ lpit_process(lpit_begin, lpit_begin + lpit->header.length);
+}
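The two DEVICE_ATTR_RO() counters above are added to the CPU subsystem's "cpuidle" attribute group, which should surface them under /sys/devices/system/cpu/cpuidle/ (the path is inferred from the attribute and group names, not stated in the patch). A hedged userspace sketch reading the system residency counter:

#include <stdio.h>

int main(void)
{
	/* Path assumed from the sysfs attribute/group names above. */
	const char *p = "/sys/devices/system/cpu/cpuidle/low_power_idle_system_residency_us";
	unsigned long long us;
	FILE *f = fopen(p, "r");

	if (!f) {
		perror(p);
		return 1;
	}
	if (fscanf(f, "%llu", &us) == 1)
		printf("SLP_S0 residency: %llu us\n", us);
	fclose(f);
	return 0;
}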
diff --git a/drivers/acpi/acpi_lpss.c b/drivers/acpi/acpi_lpss.c
index 032ae44710e5..de7385b824e1 100644
--- a/drivers/acpi/acpi_lpss.c
+++ b/drivers/acpi/acpi_lpss.c
@@ -693,7 +693,7 @@ static int acpi_lpss_activate(struct device *dev)
struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev));
int ret;
- ret = acpi_dev_runtime_resume(dev);
+ ret = acpi_dev_resume(dev);
if (ret)
return ret;
@@ -713,43 +713,9 @@ static int acpi_lpss_activate(struct device *dev)
static void acpi_lpss_dismiss(struct device *dev)
{
- acpi_dev_runtime_suspend(dev);
+ acpi_dev_suspend(dev, false);
}
-#ifdef CONFIG_PM_SLEEP
-static int acpi_lpss_suspend_late(struct device *dev)
-{
- struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev));
- int ret;
-
- ret = pm_generic_suspend_late(dev);
- if (ret)
- return ret;
-
- if (pdata->dev_desc->flags & LPSS_SAVE_CTX)
- acpi_lpss_save_ctx(dev, pdata);
-
- return acpi_dev_suspend_late(dev);
-}
-
-static int acpi_lpss_resume_early(struct device *dev)
-{
- struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev));
- int ret;
-
- ret = acpi_dev_resume_early(dev);
- if (ret)
- return ret;
-
- acpi_lpss_d3_to_d0_delay(pdata);
-
- if (pdata->dev_desc->flags & LPSS_SAVE_CTX)
- acpi_lpss_restore_ctx(dev, pdata);
-
- return pm_generic_resume_early(dev);
-}
-#endif /* CONFIG_PM_SLEEP */
-
/* IOSF SB for LPSS island */
#define LPSS_IOSF_UNIT_LPIOEP 0xA0
#define LPSS_IOSF_UNIT_LPIO1 0xAB
@@ -835,19 +801,15 @@ static void lpss_iosf_exit_d3_state(void)
mutex_unlock(&lpss_iosf_mutex);
}
-static int acpi_lpss_runtime_suspend(struct device *dev)
+static int acpi_lpss_suspend(struct device *dev, bool wakeup)
{
struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev));
int ret;
- ret = pm_generic_runtime_suspend(dev);
- if (ret)
- return ret;
-
if (pdata->dev_desc->flags & LPSS_SAVE_CTX)
acpi_lpss_save_ctx(dev, pdata);
- ret = acpi_dev_runtime_suspend(dev);
+ ret = acpi_dev_suspend(dev, wakeup);
/*
* This call must be last in the sequence, otherwise PMC will return
@@ -860,7 +822,7 @@ static int acpi_lpss_runtime_suspend(struct device *dev)
return ret;
}
-static int acpi_lpss_runtime_resume(struct device *dev)
+static int acpi_lpss_resume(struct device *dev)
{
struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev));
int ret;
@@ -872,7 +834,7 @@ static int acpi_lpss_runtime_resume(struct device *dev)
if (lpss_quirks & LPSS_QUIRK_ALWAYS_POWER_ON && iosf_mbi_available())
lpss_iosf_exit_d3_state();
- ret = acpi_dev_runtime_resume(dev);
+ ret = acpi_dev_resume(dev);
if (ret)
return ret;
@@ -881,7 +843,41 @@ static int acpi_lpss_runtime_resume(struct device *dev)
if (pdata->dev_desc->flags & LPSS_SAVE_CTX)
acpi_lpss_restore_ctx(dev, pdata);
- return pm_generic_runtime_resume(dev);
+ return 0;
+}
+
+#ifdef CONFIG_PM_SLEEP
+static int acpi_lpss_suspend_late(struct device *dev)
+{
+ int ret;
+
+ if (dev_pm_smart_suspend_and_suspended(dev))
+ return 0;
+
+ ret = pm_generic_suspend_late(dev);
+ return ret ? ret : acpi_lpss_suspend(dev, device_may_wakeup(dev));
+}
+
+static int acpi_lpss_resume_early(struct device *dev)
+{
+ int ret = acpi_lpss_resume(dev);
+
+ return ret ? ret : pm_generic_resume_early(dev);
+}
+#endif /* CONFIG_PM_SLEEP */
+
+static int acpi_lpss_runtime_suspend(struct device *dev)
+{
+ int ret = pm_generic_runtime_suspend(dev);
+
+ return ret ? ret : acpi_lpss_suspend(dev, true);
+}
+
+static int acpi_lpss_runtime_resume(struct device *dev)
+{
+ int ret = acpi_lpss_resume(dev);
+
+ return ret ? ret : pm_generic_runtime_resume(dev);
}
#endif /* CONFIG_PM */
@@ -894,13 +890,20 @@ static struct dev_pm_domain acpi_lpss_pm_domain = {
#ifdef CONFIG_PM
#ifdef CONFIG_PM_SLEEP
.prepare = acpi_subsys_prepare,
- .complete = pm_complete_with_resume_check,
+ .complete = acpi_subsys_complete,
.suspend = acpi_subsys_suspend,
.suspend_late = acpi_lpss_suspend_late,
+ .suspend_noirq = acpi_subsys_suspend_noirq,
+ .resume_noirq = acpi_subsys_resume_noirq,
.resume_early = acpi_lpss_resume_early,
.freeze = acpi_subsys_freeze,
+ .freeze_late = acpi_subsys_freeze_late,
+ .freeze_noirq = acpi_subsys_freeze_noirq,
+ .thaw_noirq = acpi_subsys_thaw_noirq,
.poweroff = acpi_subsys_suspend,
.poweroff_late = acpi_lpss_suspend_late,
+ .poweroff_noirq = acpi_subsys_suspend_noirq,
+ .restore_noirq = acpi_subsys_resume_noirq,
.restore_early = acpi_lpss_resume_early,
#endif
.runtime_suspend = acpi_lpss_runtime_suspend,
diff --git a/drivers/acpi/device_pm.c b/drivers/acpi/device_pm.c
index fbcc73f7a099..e4ffaeec9ec2 100644
--- a/drivers/acpi/device_pm.c
+++ b/drivers/acpi/device_pm.c
@@ -387,6 +387,7 @@ EXPORT_SYMBOL(acpi_bus_power_manageable);
#ifdef CONFIG_PM
static DEFINE_MUTEX(acpi_pm_notifier_lock);
+static DEFINE_MUTEX(acpi_pm_notifier_install_lock);
void acpi_pm_wakeup_event(struct device *dev)
{
@@ -443,24 +444,25 @@ acpi_status acpi_add_pm_notifier(struct acpi_device *adev, struct device *dev,
if (!dev && !func)
return AE_BAD_PARAMETER;
- mutex_lock(&acpi_pm_notifier_lock);
+ mutex_lock(&acpi_pm_notifier_install_lock);
if (adev->wakeup.flags.notifier_present)
goto out;
- adev->wakeup.ws = wakeup_source_register(dev_name(&adev->dev));
- adev->wakeup.context.dev = dev;
- adev->wakeup.context.func = func;
-
status = acpi_install_notify_handler(adev->handle, ACPI_SYSTEM_NOTIFY,
acpi_pm_notify_handler, NULL);
if (ACPI_FAILURE(status))
goto out;
+ mutex_lock(&acpi_pm_notifier_lock);
+ adev->wakeup.ws = wakeup_source_register(dev_name(&adev->dev));
+ adev->wakeup.context.dev = dev;
+ adev->wakeup.context.func = func;
adev->wakeup.flags.notifier_present = true;
+ mutex_unlock(&acpi_pm_notifier_lock);
out:
- mutex_unlock(&acpi_pm_notifier_lock);
+ mutex_unlock(&acpi_pm_notifier_install_lock);
return status;
}
@@ -472,7 +474,7 @@ acpi_status acpi_remove_pm_notifier(struct acpi_device *adev)
{
acpi_status status = AE_BAD_PARAMETER;
- mutex_lock(&acpi_pm_notifier_lock);
+ mutex_lock(&acpi_pm_notifier_install_lock);
if (!adev->wakeup.flags.notifier_present)
goto out;
@@ -483,14 +485,15 @@ acpi_status acpi_remove_pm_notifier(struct acpi_device *adev)
if (ACPI_FAILURE(status))
goto out;
+ mutex_lock(&acpi_pm_notifier_lock);
adev->wakeup.context.func = NULL;
adev->wakeup.context.dev = NULL;
wakeup_source_unregister(adev->wakeup.ws);
-
adev->wakeup.flags.notifier_present = false;
+ mutex_unlock(&acpi_pm_notifier_lock);
out:
- mutex_unlock(&acpi_pm_notifier_lock);
+ mutex_unlock(&acpi_pm_notifier_install_lock);
return status;
}
@@ -581,8 +584,7 @@ static int acpi_dev_pm_get_state(struct device *dev, struct acpi_device *adev,
d_min = ret;
wakeup = device_may_wakeup(dev) && adev->wakeup.flags.valid
&& adev->wakeup.sleep_state >= target_state;
- } else if (dev_pm_qos_flags(dev, PM_QOS_FLAG_REMOTE_WAKEUP) !=
- PM_QOS_FLAGS_NONE) {
+ } else {
wakeup = adev->wakeup.flags.valid;
}
@@ -848,48 +850,48 @@ static int acpi_dev_pm_full_power(struct acpi_device *adev)
}
/**
- * acpi_dev_runtime_suspend - Put device into a low-power state using ACPI.
+ * acpi_dev_suspend - Put device into a low-power state using ACPI.
* @dev: Device to put into a low-power state.
+ * @wakeup: Whether or not to enable wakeup for the device.
*
- * Put the given device into a runtime low-power state using the standard ACPI
+ * Put the given device into a low-power state using the standard ACPI
* mechanism. Set up remote wakeup if desired, choose the state to put the
* device into (this checks if remote wakeup is expected to work too), and set
* the power state of the device.
*/
-int acpi_dev_runtime_suspend(struct device *dev)
+int acpi_dev_suspend(struct device *dev, bool wakeup)
{
struct acpi_device *adev = ACPI_COMPANION(dev);
- bool remote_wakeup;
+ u32 target_state = acpi_target_system_state();
int error;
if (!adev)
return 0;
- remote_wakeup = dev_pm_qos_flags(dev, PM_QOS_FLAG_REMOTE_WAKEUP) >
- PM_QOS_FLAGS_NONE;
- if (remote_wakeup) {
- error = acpi_device_wakeup_enable(adev, ACPI_STATE_S0);
+ if (wakeup && acpi_device_can_wakeup(adev)) {
+ error = acpi_device_wakeup_enable(adev, target_state);
if (error)
return -EAGAIN;
+ } else {
+ wakeup = false;
}
- error = acpi_dev_pm_low_power(dev, adev, ACPI_STATE_S0);
- if (error && remote_wakeup)
+ error = acpi_dev_pm_low_power(dev, adev, target_state);
+ if (error && wakeup)
acpi_device_wakeup_disable(adev);
return error;
}
-EXPORT_SYMBOL_GPL(acpi_dev_runtime_suspend);
+EXPORT_SYMBOL_GPL(acpi_dev_suspend);
/**
- * acpi_dev_runtime_resume - Put device into the full-power state using ACPI.
+ * acpi_dev_resume - Put device into the full-power state using ACPI.
* @dev: Device to put into the full-power state.
*
* Put the given device into the full-power state using the standard ACPI
- * mechanism at run time. Set the power state of the device to ACPI D0 and
- * disable remote wakeup.
+ * mechanism. Set the power state of the device to ACPI D0 and disable wakeup.
*/
-int acpi_dev_runtime_resume(struct device *dev)
+int acpi_dev_resume(struct device *dev)
{
struct acpi_device *adev = ACPI_COMPANION(dev);
int error;
@@ -901,7 +903,7 @@ int acpi_dev_runtime_resume(struct device *dev)
acpi_device_wakeup_disable(adev);
return error;
}
-EXPORT_SYMBOL_GPL(acpi_dev_runtime_resume);
+EXPORT_SYMBOL_GPL(acpi_dev_resume);
/**
* acpi_subsys_runtime_suspend - Suspend device using ACPI.
@@ -913,7 +915,7 @@ EXPORT_SYMBOL_GPL(acpi_dev_runtime_resume);
int acpi_subsys_runtime_suspend(struct device *dev)
{
int ret = pm_generic_runtime_suspend(dev);
- return ret ? ret : acpi_dev_runtime_suspend(dev);
+ return ret ? ret : acpi_dev_suspend(dev, true);
}
EXPORT_SYMBOL_GPL(acpi_subsys_runtime_suspend);
@@ -926,68 +928,33 @@ EXPORT_SYMBOL_GPL(acpi_subsys_runtime_suspend);
*/
int acpi_subsys_runtime_resume(struct device *dev)
{
- int ret = acpi_dev_runtime_resume(dev);
+ int ret = acpi_dev_resume(dev);
return ret ? ret : pm_generic_runtime_resume(dev);
}
EXPORT_SYMBOL_GPL(acpi_subsys_runtime_resume);
#ifdef CONFIG_PM_SLEEP
-/**
- * acpi_dev_suspend_late - Put device into a low-power state using ACPI.
- * @dev: Device to put into a low-power state.
- *
- * Put the given device into a low-power state during system transition to a
- * sleep state using the standard ACPI mechanism. Set up system wakeup if
- * desired, choose the state to put the device into (this checks if system
- * wakeup is expected to work too), and set the power state of the device.
- */
-int acpi_dev_suspend_late(struct device *dev)
+static bool acpi_dev_needs_resume(struct device *dev, struct acpi_device *adev)
{
- struct acpi_device *adev = ACPI_COMPANION(dev);
- u32 target_state;
- bool wakeup;
- int error;
-
- if (!adev)
- return 0;
-
- target_state = acpi_target_system_state();
- wakeup = device_may_wakeup(dev) && acpi_device_can_wakeup(adev);
- if (wakeup) {
- error = acpi_device_wakeup_enable(adev, target_state);
- if (error)
- return error;
- }
+ u32 sys_target = acpi_target_system_state();
+ int ret, state;
- error = acpi_dev_pm_low_power(dev, adev, target_state);
- if (error && wakeup)
- acpi_device_wakeup_disable(adev);
+ if (!pm_runtime_suspended(dev) || !adev ||
+ device_may_wakeup(dev) != !!adev->wakeup.prepare_count)
+ return true;
- return error;
-}
-EXPORT_SYMBOL_GPL(acpi_dev_suspend_late);
+ if (sys_target == ACPI_STATE_S0)
+ return false;
-/**
- * acpi_dev_resume_early - Put device into the full-power state using ACPI.
- * @dev: Device to put into the full-power state.
- *
- * Put the given device into the full-power state using the standard ACPI
- * mechanism during system transition to the working state. Set the power
- * state of the device to ACPI D0 and disable remote wakeup.
- */
-int acpi_dev_resume_early(struct device *dev)
-{
- struct acpi_device *adev = ACPI_COMPANION(dev);
- int error;
+ if (adev->power.flags.dsw_present)
+ return true;
- if (!adev)
- return 0;
+ ret = acpi_dev_pm_get_state(dev, adev, sys_target, NULL, &state);
+ if (ret)
+ return true;
- error = acpi_dev_pm_full_power(adev);
- acpi_device_wakeup_disable(adev);
- return error;
+ return state != adev->power.state;
}
-EXPORT_SYMBOL_GPL(acpi_dev_resume_early);
/**
* acpi_subsys_prepare - Prepare device for system transition to a sleep state.
@@ -996,39 +963,53 @@ EXPORT_SYMBOL_GPL(acpi_dev_resume_early);
int acpi_subsys_prepare(struct device *dev)
{
struct acpi_device *adev = ACPI_COMPANION(dev);
- u32 sys_target;
- int ret, state;
- ret = pm_generic_prepare(dev);
- if (ret < 0)
- return ret;
-
- if (!adev || !pm_runtime_suspended(dev)
- || device_may_wakeup(dev) != !!adev->wakeup.prepare_count)
- return 0;
+ if (dev->driver && dev->driver->pm && dev->driver->pm->prepare) {
+ int ret = dev->driver->pm->prepare(dev);
- sys_target = acpi_target_system_state();
- if (sys_target == ACPI_STATE_S0)
- return 1;
+ if (ret < 0)
+ return ret;
- if (adev->power.flags.dsw_present)
- return 0;
+ if (!ret && dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_PREPARE))
+ return 0;
+ }
- ret = acpi_dev_pm_get_state(dev, adev, sys_target, NULL, &state);
- return !ret && state == adev->power.state;
+ return !acpi_dev_needs_resume(dev, adev);
}
EXPORT_SYMBOL_GPL(acpi_subsys_prepare);
/**
+ * acpi_subsys_complete - Finalize device's resume during system resume.
+ * @dev: Device to handle.
+ */
+void acpi_subsys_complete(struct device *dev)
+{
+ pm_generic_complete(dev);
+ /*
+ * If the device had been runtime-suspended before the system went into
+ * the sleep state it is going out of and it has never been resumed till
+ * now, resume it in case the firmware powered it up.
+ */
+ if (dev->power.direct_complete && pm_resume_via_firmware())
+ pm_request_resume(dev);
+}
+EXPORT_SYMBOL_GPL(acpi_subsys_complete);
+
+/**
* acpi_subsys_suspend - Run the device driver's suspend callback.
* @dev: Device to handle.
*
- * Follow PCI and resume devices suspended at run time before running their
- * system suspend callbacks.
+ * Follow PCI and resume devices from runtime suspend before running their
+ * system suspend callbacks, unless the driver can cope with runtime-suspended
+ * devices during system suspend and there are no ACPI-specific reasons for
+ * resuming them.
*/
int acpi_subsys_suspend(struct device *dev)
{
- pm_runtime_resume(dev);
+ if (!dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND) ||
+ acpi_dev_needs_resume(dev, ACPI_COMPANION(dev)))
+ pm_runtime_resume(dev);
+
return pm_generic_suspend(dev);
}
EXPORT_SYMBOL_GPL(acpi_subsys_suspend);
@@ -1042,12 +1023,48 @@ EXPORT_SYMBOL_GPL(acpi_subsys_suspend);
*/
int acpi_subsys_suspend_late(struct device *dev)
{
- int ret = pm_generic_suspend_late(dev);
- return ret ? ret : acpi_dev_suspend_late(dev);
+ int ret;
+
+ if (dev_pm_smart_suspend_and_suspended(dev))
+ return 0;
+
+ ret = pm_generic_suspend_late(dev);
+ return ret ? ret : acpi_dev_suspend(dev, device_may_wakeup(dev));
}
EXPORT_SYMBOL_GPL(acpi_subsys_suspend_late);
/**
+ * acpi_subsys_suspend_noirq - Run the device driver's "noirq" suspend callback.
+ * @dev: Device to suspend.
+ */
+int acpi_subsys_suspend_noirq(struct device *dev)
+{
+ if (dev_pm_smart_suspend_and_suspended(dev))
+ return 0;
+
+ return pm_generic_suspend_noirq(dev);
+}
+EXPORT_SYMBOL_GPL(acpi_subsys_suspend_noirq);
+
+/**
+ * acpi_subsys_resume_noirq - Run the device driver's "noirq" resume callback.
+ * @dev: Device to handle.
+ */
+int acpi_subsys_resume_noirq(struct device *dev)
+{
+ /*
+ * Devices with DPM_FLAG_SMART_SUSPEND may be left in runtime suspend
+ * during system suspend, so update their runtime PM status to "active"
+ * as they will be put into D0 going forward.
+ */
+ if (dev_pm_smart_suspend_and_suspended(dev))
+ pm_runtime_set_active(dev);
+
+ return pm_generic_resume_noirq(dev);
+}
+EXPORT_SYMBOL_GPL(acpi_subsys_resume_noirq);
+
+/**
* acpi_subsys_resume_early - Resume device using ACPI.
* @dev: Device to Resume.
*
@@ -1057,7 +1074,7 @@ EXPORT_SYMBOL_GPL(acpi_subsys_suspend_late);
*/
int acpi_subsys_resume_early(struct device *dev)
{
- int ret = acpi_dev_resume_early(dev);
+ int ret = acpi_dev_resume(dev);
return ret ? ret : pm_generic_resume_early(dev);
}
EXPORT_SYMBOL_GPL(acpi_subsys_resume_early);
@@ -1074,11 +1091,60 @@ int acpi_subsys_freeze(struct device *dev)
* runtime-suspended devices should not be touched during freeze/thaw
* transitions.
*/
- pm_runtime_resume(dev);
+ if (!dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND))
+ pm_runtime_resume(dev);
+
return pm_generic_freeze(dev);
}
EXPORT_SYMBOL_GPL(acpi_subsys_freeze);
+/**
+ * acpi_subsys_freeze_late - Run the device driver's "late" freeze callback.
+ * @dev: Device to handle.
+ */
+int acpi_subsys_freeze_late(struct device *dev)
+{
+ if (dev_pm_smart_suspend_and_suspended(dev))
+ return 0;
+
+ return pm_generic_freeze_late(dev);
+}
+EXPORT_SYMBOL_GPL(acpi_subsys_freeze_late);
+
+/**
+ * acpi_subsys_freeze_noirq - Run the device driver's "noirq" freeze callback.
+ * @dev: Device to handle.
+ */
+int acpi_subsys_freeze_noirq(struct device *dev)
+{
+ if (dev_pm_smart_suspend_and_suspended(dev))
+ return 0;
+
+ return pm_generic_freeze_noirq(dev);
+}
+EXPORT_SYMBOL_GPL(acpi_subsys_freeze_noirq);
+
+/**
+ * acpi_subsys_thaw_noirq - Run the device driver's "noirq" thaw callback.
+ * @dev: Device to handle.
+ */
+int acpi_subsys_thaw_noirq(struct device *dev)
+{
+ /*
+ * If the device is in runtime suspend, the "thaw" code may not work
+ * correctly with it, so skip the driver callback and make the PM core
+ * skip all of the subsequent "thaw" callbacks for the device.
+ */
+ if (dev_pm_smart_suspend_and_suspended(dev)) {
+ dev->power.direct_complete = true;
+ return 0;
+ }
+
+ return pm_generic_thaw_noirq(dev);
+}
+EXPORT_SYMBOL_GPL(acpi_subsys_thaw_noirq);
#endif /* CONFIG_PM_SLEEP */
static struct dev_pm_domain acpi_general_pm_domain = {
@@ -1087,13 +1153,20 @@ static struct dev_pm_domain acpi_general_pm_domain = {
.runtime_resume = acpi_subsys_runtime_resume,
#ifdef CONFIG_PM_SLEEP
.prepare = acpi_subsys_prepare,
- .complete = pm_complete_with_resume_check,
+ .complete = acpi_subsys_complete,
.suspend = acpi_subsys_suspend,
.suspend_late = acpi_subsys_suspend_late,
+ .suspend_noirq = acpi_subsys_suspend_noirq,
+ .resume_noirq = acpi_subsys_resume_noirq,
.resume_early = acpi_subsys_resume_early,
.freeze = acpi_subsys_freeze,
+ .freeze_late = acpi_subsys_freeze_late,
+ .freeze_noirq = acpi_subsys_freeze_noirq,
+ .thaw_noirq = acpi_subsys_thaw_noirq,
.poweroff = acpi_subsys_suspend,
.poweroff_late = acpi_subsys_suspend_late,
+ .poweroff_noirq = acpi_subsys_suspend_noirq,
+ .restore_noirq = acpi_subsys_resume_noirq,
.restore_early = acpi_subsys_resume_early,
#endif
},
diff --git a/drivers/acpi/internal.h b/drivers/acpi/internal.h
index 4361c4415b4f..fc8c43e76707 100644
--- a/drivers/acpi/internal.h
+++ b/drivers/acpi/internal.h
@@ -248,4 +248,10 @@ void acpi_watchdog_init(void);
static inline void acpi_watchdog_init(void) {}
#endif
+#ifdef CONFIG_ACPI_LPIT
+void acpi_init_lpit(void);
+#else
+static inline void acpi_init_lpit(void) { }
+#endif
+
#endif /* _ACPI_INTERNAL_H_ */
diff --git a/drivers/acpi/osl.c b/drivers/acpi/osl.c
index db78d353bab1..3bb46cb24a99 100644
--- a/drivers/acpi/osl.c
+++ b/drivers/acpi/osl.c
@@ -663,6 +663,29 @@ acpi_status acpi_os_write_port(acpi_io_address port, u32 value, u32 width)
EXPORT_SYMBOL(acpi_os_write_port);
+int acpi_os_read_iomem(void __iomem *virt_addr, u64 *value, u32 width)
+{
+ switch (width) {
+ case 8:
+ *(u8 *) value = readb(virt_addr);
+ break;
+ case 16:
+ *(u16 *) value = readw(virt_addr);
+ break;
+ case 32:
+ *(u32 *) value = readl(virt_addr);
+ break;
+ case 64:
+ *(u64 *) value = readq(virt_addr);
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
acpi_status
acpi_os_read_memory(acpi_physical_address phys_addr, u64 *value, u32 width)
{
@@ -670,6 +693,7 @@ acpi_os_read_memory(acpi_physical_address phys_addr, u64 *value, u32 width)
unsigned int size = width / 8;
bool unmap = false;
u64 dummy;
+ int error;
rcu_read_lock();
virt_addr = acpi_map_vaddr_lookup(phys_addr, size);
@@ -684,22 +708,8 @@ acpi_os_read_memory(acpi_physical_address phys_addr, u64 *value, u32 width)
if (!value)
value = &dummy;
- switch (width) {
- case 8:
- *(u8 *) value = readb(virt_addr);
- break;
- case 16:
- *(u16 *) value = readw(virt_addr);
- break;
- case 32:
- *(u32 *) value = readl(virt_addr);
- break;
- case 64:
- *(u64 *) value = readq(virt_addr);
- break;
- default:
- BUG();
- }
+ error = acpi_os_read_iomem(virt_addr, value, width);
+ BUG_ON(error);
if (unmap)
iounmap(virt_addr);
diff --git a/drivers/acpi/scan.c b/drivers/acpi/scan.c
index 602f8ff212f2..81367edc8a10 100644
--- a/drivers/acpi/scan.c
+++ b/drivers/acpi/scan.c
@@ -2122,6 +2122,7 @@ int __init acpi_scan_init(void)
acpi_int340x_thermal_init();
acpi_amba_init();
acpi_watchdog_init();
+ acpi_init_lpit();
acpi_scan_add_handler(&generic_device_handler);
diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
index 6df7d6676a48..0739c5b953bf 100644
--- a/drivers/base/arch_topology.c
+++ b/drivers/base/arch_topology.c
@@ -22,14 +22,23 @@
#include <linux/string.h>
#include <linux/sched/topology.h>
-static DEFINE_MUTEX(cpu_scale_mutex);
-static DEFINE_PER_CPU(unsigned long, cpu_scale) = SCHED_CAPACITY_SCALE;
+DEFINE_PER_CPU(unsigned long, freq_scale) = SCHED_CAPACITY_SCALE;
-unsigned long topology_get_cpu_scale(struct sched_domain *sd, int cpu)
+void arch_set_freq_scale(struct cpumask *cpus, unsigned long cur_freq,
+ unsigned long max_freq)
{
- return per_cpu(cpu_scale, cpu);
+ unsigned long scale;
+ int i;
+
+ scale = (cur_freq << SCHED_CAPACITY_SHIFT) / max_freq;
+
+ for_each_cpu(i, cpus)
+ per_cpu(freq_scale, i) = scale;
}
+static DEFINE_MUTEX(cpu_scale_mutex);
+DEFINE_PER_CPU(unsigned long, cpu_scale) = SCHED_CAPACITY_SCALE;
+
void topology_set_cpu_scale(unsigned int cpu, unsigned long capacity)
{
per_cpu(cpu_scale, cpu) = capacity;
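For illustration (not part of the patch): with SCHED_CAPACITY_SHIFT equal to 10, so that SCHED_CAPACITY_SCALE is 1024, a CPU currently clocked at 1.2 GHz out of a 2.4 GHz maximum gets scale = (1200000 << 10) / 2400000 = 512 from arch_set_freq_scale() above, i.e. its utilization contributions are weighted at half of full capacity until the next frequency change.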
@@ -212,6 +221,8 @@ static struct notifier_block init_cpu_capacity_notifier __initdata = {
static int __init register_cpufreq_notifier(void)
{
+ int ret;
+
/*
* on ACPI-based systems we need to use the default cpu capacity
* until we have the necessary code to parse the cpu capacity, so
@@ -227,8 +238,13 @@ static int __init register_cpufreq_notifier(void)
cpumask_copy(cpus_to_visit, cpu_possible_mask);
- return cpufreq_register_notifier(&init_cpu_capacity_notifier,
- CPUFREQ_POLICY_NOTIFIER);
+ ret = cpufreq_register_notifier(&init_cpu_capacity_notifier,
+ CPUFREQ_POLICY_NOTIFIER);
+
+ if (ret)
+ free_cpumask_var(cpus_to_visit);
+
+ return ret;
}
core_initcall(register_cpufreq_notifier);
@@ -236,6 +252,7 @@ static void __init parsing_done_workfn(struct work_struct *work)
{
cpufreq_unregister_notifier(&init_cpu_capacity_notifier,
CPUFREQ_POLICY_NOTIFIER);
+ free_cpumask_var(cpus_to_visit);
}
#else
diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c
index a73ab95558f5..58a9b608d821 100644
--- a/drivers/base/cpu.c
+++ b/drivers/base/cpu.c
@@ -386,7 +386,8 @@ int register_cpu(struct cpu *cpu, int num)
per_cpu(cpu_sys_devices, num) = &cpu->dev;
register_cpu_under_node(num, cpu_to_node(num));
- dev_pm_qos_expose_latency_limit(&cpu->dev, 0);
+ dev_pm_qos_expose_latency_limit(&cpu->dev,
+ PM_QOS_RESUME_LATENCY_NO_CONSTRAINT);
return 0;
}
diff --git a/drivers/base/dd.c b/drivers/base/dd.c
index ad44b40fe284..45575e134696 100644
--- a/drivers/base/dd.c
+++ b/drivers/base/dd.c
@@ -464,6 +464,7 @@ pinctrl_bind_failed:
if (dev->pm_domain && dev->pm_domain->dismiss)
dev->pm_domain->dismiss(dev);
pm_runtime_reinit(dev);
+ dev_pm_set_driver_flags(dev, 0);
switch (ret) {
case -EPROBE_DEFER:
@@ -869,6 +870,7 @@ static void __device_release_driver(struct device *dev, struct device *parent)
if (dev->pm_domain && dev->pm_domain->dismiss)
dev->pm_domain->dismiss(dev);
pm_runtime_reinit(dev);
+ dev_pm_set_driver_flags(dev, 0);
klist_remove(&dev->p->knode_driver);
device_pm_check_callbacks(dev);
diff --git a/drivers/base/power/Makefile b/drivers/base/power/Makefile
index 29cd71d8b360..e1bb691cf8f1 100644
--- a/drivers/base/power/Makefile
+++ b/drivers/base/power/Makefile
@@ -2,7 +2,6 @@
obj-$(CONFIG_PM) += sysfs.o generic_ops.o common.o qos.o runtime.o wakeirq.o
obj-$(CONFIG_PM_SLEEP) += main.o wakeup.o
obj-$(CONFIG_PM_TRACE_RTC) += trace.o
-obj-$(CONFIG_PM_OPP) += opp/
obj-$(CONFIG_PM_GENERIC_DOMAINS) += domain.o domain_governor.o
obj-$(CONFIG_HAVE_CLK) += clock_ops.o
diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
index e8ca5e2cf1e5..0c80bea05bcb 100644
--- a/drivers/base/power/domain.c
+++ b/drivers/base/power/domain.c
@@ -124,6 +124,7 @@ static const struct genpd_lock_ops genpd_spin_ops = {
#define genpd_status_on(genpd) (genpd->status == GPD_STATE_ACTIVE)
#define genpd_is_irq_safe(genpd) (genpd->flags & GENPD_FLAG_IRQ_SAFE)
#define genpd_is_always_on(genpd) (genpd->flags & GENPD_FLAG_ALWAYS_ON)
+#define genpd_is_active_wakeup(genpd) (genpd->flags & GENPD_FLAG_ACTIVE_WAKEUP)
static inline bool irq_safe_dev_in_no_sleep_domain(struct device *dev,
const struct generic_pm_domain *genpd)
@@ -237,6 +238,95 @@ static void genpd_update_accounting(struct generic_pm_domain *genpd)
static inline void genpd_update_accounting(struct generic_pm_domain *genpd) {}
#endif
+/**
+ * dev_pm_genpd_set_performance_state - Set performance state of device's power
+ * domain.
+ *
+ * @dev: Device for which the performance-state needs to be set.
+ * @state: Target performance state of the device. This can be set to 0 when
+ * the device no longer has any performance-state constraints (in which
+ * case the device no longer takes part in determining the target
+ * performance state of the genpd).
+ *
+ * It is assumed that the users guarantee that the genpd won't be detached
+ * while this routine is being called.
+ *
+ * Returns 0 on success and negative error values on failures.
+ */
+int dev_pm_genpd_set_performance_state(struct device *dev, unsigned int state)
+{
+ struct generic_pm_domain *genpd;
+ struct generic_pm_domain_data *gpd_data, *pd_data;
+ struct pm_domain_data *pdd;
+ unsigned int prev;
+ int ret = 0;
+
+ genpd = dev_to_genpd(dev);
+ if (IS_ERR(genpd))
+ return -ENODEV;
+
+ if (unlikely(!genpd->set_performance_state))
+ return -EINVAL;
+
+ if (unlikely(!dev->power.subsys_data ||
+ !dev->power.subsys_data->domain_data)) {
+ WARN_ON(1);
+ return -EINVAL;
+ }
+
+ genpd_lock(genpd);
+
+ gpd_data = to_gpd_data(dev->power.subsys_data->domain_data);
+ prev = gpd_data->performance_state;
+ gpd_data->performance_state = state;
+
+ /* The new requested state is the same as the max requested state */
+ if (state == genpd->performance_state)
+ goto unlock;
+
+ /* The new requested state is higher than the max requested state */
+ if (state > genpd->performance_state)
+ goto update_state;
+
+ /* Traverse all devices within the domain */
+ list_for_each_entry(pdd, &genpd->dev_list, list_node) {
+ pd_data = to_gpd_data(pdd);
+
+ if (pd_data->performance_state > state)
+ state = pd_data->performance_state;
+ }
+
+ if (state == genpd->performance_state)
+ goto unlock;
+
+ /*
+ * We aren't propagating performance state changes of a subdomain to its
+ * masters, as we don't have hardware that needs it. Moreover, the
+ * performance states of a subdomain and its masters may not have a
+ * one-to-one mapping and would require additional information. We can
+ * get back to this once we have hardware that needs it. For that
+ * reason, we don't have to consider the performance states of the
+ * subdomains of genpd here.
+ */
+
+update_state:
+ if (genpd_status_on(genpd)) {
+ ret = genpd->set_performance_state(genpd, state);
+ if (ret) {
+ gpd_data->performance_state = prev;
+ goto unlock;
+ }
+ }
+
+ genpd->performance_state = state;
+
+unlock:
+ genpd_unlock(genpd);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(dev_pm_genpd_set_performance_state);
+
static int _genpd_power_on(struct generic_pm_domain *genpd, bool timed)
{
unsigned int state_idx = genpd->state_idx;
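A hedged consumer-side sketch of the API added above; the wrapper and its caller are hypothetical, only dev_pm_genpd_set_performance_state() itself comes from this patch:

#include <linux/pm_domain.h>

static int foo_request_perf_state(struct device *dev, unsigned int state)
{
	/*
	 * Vote for at least "state" from the enclosing genpd; the domain
	 * aggregates the maximum state requested across all of its member
	 * devices. Passing 0 later drops this device's vote again.
	 */
	return dev_pm_genpd_set_performance_state(dev, state);
}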
@@ -256,6 +346,15 @@ static int _genpd_power_on(struct generic_pm_domain *genpd, bool timed)
return ret;
elapsed_ns = ktime_to_ns(ktime_sub(ktime_get(), time_start));
+
+ if (unlikely(genpd->set_performance_state)) {
+ ret = genpd->set_performance_state(genpd, genpd->performance_state);
+ if (ret) {
+ pr_warn("%s: Failed to set performance state %d (%d)\n",
+ genpd->name, genpd->performance_state, ret);
+ }
+ }
+
if (elapsed_ns <= genpd->states[state_idx].power_on_latency_ns)
return ret;
@@ -346,9 +445,7 @@ static int genpd_power_off(struct generic_pm_domain *genpd, bool one_dev_on,
list_for_each_entry(pdd, &genpd->dev_list, list_node) {
enum pm_qos_flags_status stat;
- stat = dev_pm_qos_flags(pdd->dev,
- PM_QOS_FLAG_NO_POWER_OFF
- | PM_QOS_FLAG_REMOTE_WAKEUP);
+ stat = dev_pm_qos_flags(pdd->dev, PM_QOS_FLAG_NO_POWER_OFF);
if (stat > PM_QOS_FLAGS_NONE)
return -EBUSY;
@@ -749,11 +846,7 @@ late_initcall(genpd_power_off_unused);
#if defined(CONFIG_PM_SLEEP) || defined(CONFIG_PM_GENERIC_DOMAINS_OF)
-/**
- * pm_genpd_present - Check if the given PM domain has been initialized.
- * @genpd: PM domain to check.
- */
-static bool pm_genpd_present(const struct generic_pm_domain *genpd)
+static bool genpd_present(const struct generic_pm_domain *genpd)
{
const struct generic_pm_domain *gpd;
@@ -771,12 +864,6 @@ static bool pm_genpd_present(const struct generic_pm_domain *genpd)
#ifdef CONFIG_PM_SLEEP
-static bool genpd_dev_active_wakeup(const struct generic_pm_domain *genpd,
- struct device *dev)
-{
- return GENPD_DEV_CALLBACK(genpd, bool, active_wakeup, dev);
-}
-
/**
* genpd_sync_power_off - Synchronously power off a PM domain and its masters.
* @genpd: PM domain to power off, if possible.
@@ -863,7 +950,7 @@ static void genpd_sync_power_on(struct generic_pm_domain *genpd, bool use_lock,
* @genpd: PM domain the device belongs to.
*
* There are two cases in which a device that can wake up the system from sleep
- * states should be resumed by pm_genpd_prepare(): (1) if the device is enabled
+ * states should be resumed by genpd_prepare(): (1) if the device is enabled
* to wake up the system and it has to remain active for this purpose while the
* system is in the sleep state and (2) if the device is not enabled to wake up
* the system from sleep states and it generally doesn't generate wakeup signals
@@ -881,12 +968,12 @@ static bool resume_needed(struct device *dev,
if (!device_can_wakeup(dev))
return false;
- active_wakeup = genpd_dev_active_wakeup(genpd, dev);
+ active_wakeup = genpd_is_active_wakeup(genpd);
return device_may_wakeup(dev) ? active_wakeup : !active_wakeup;
}
/**
- * pm_genpd_prepare - Start power transition of a device in a PM domain.
+ * genpd_prepare - Start power transition of a device in a PM domain.
* @dev: Device to start the transition of.
*
* Start a power transition of a device (during a system-wide power transition)
@@ -894,7 +981,7 @@ static bool resume_needed(struct device *dev,
* an object of type struct generic_pm_domain representing a PM domain
* consisting of I/O devices.
*/
-static int pm_genpd_prepare(struct device *dev)
+static int genpd_prepare(struct device *dev)
{
struct generic_pm_domain *genpd;
int ret;
@@ -921,7 +1008,7 @@ static int pm_genpd_prepare(struct device *dev)
genpd_unlock(genpd);
ret = pm_generic_prepare(dev);
- if (ret) {
+ if (ret < 0) {
genpd_lock(genpd);
genpd->prepared_count--;
@@ -929,7 +1016,8 @@ static int pm_genpd_prepare(struct device *dev)
genpd_unlock(genpd);
}
- return ret;
+ /* Never return 1, as genpd doesn't cope with the direct_complete path. */
+ return ret >= 0 ? 0 : ret;
}
/**
@@ -950,7 +1038,7 @@ static int genpd_finish_suspend(struct device *dev, bool poweroff)
if (IS_ERR(genpd))
return -EINVAL;
- if (dev->power.wakeup_path && genpd_dev_active_wakeup(genpd, dev))
+ if (dev->power.wakeup_path && genpd_is_active_wakeup(genpd))
return 0;
if (poweroff)
@@ -975,13 +1063,13 @@ static int genpd_finish_suspend(struct device *dev, bool poweroff)
}
/**
- * pm_genpd_suspend_noirq - Completion of suspend of device in an I/O PM domain.
+ * genpd_suspend_noirq - Completion of suspend of device in an I/O PM domain.
* @dev: Device to suspend.
*
* Stop the device and remove power from the domain if all devices in it have
* been stopped.
*/
-static int pm_genpd_suspend_noirq(struct device *dev)
+static int genpd_suspend_noirq(struct device *dev)
{
dev_dbg(dev, "%s()\n", __func__);
@@ -989,12 +1077,12 @@ static int pm_genpd_suspend_noirq(struct device *dev)
}
/**
- * pm_genpd_resume_noirq - Start of resume of device in an I/O PM domain.
+ * genpd_resume_noirq - Start of resume of device in an I/O PM domain.
* @dev: Device to resume.
*
* Restore power to the device's PM domain, if necessary, and start the device.
*/
-static int pm_genpd_resume_noirq(struct device *dev)
+static int genpd_resume_noirq(struct device *dev)
{
struct generic_pm_domain *genpd;
int ret = 0;
@@ -1005,7 +1093,7 @@ static int pm_genpd_resume_noirq(struct device *dev)
if (IS_ERR(genpd))
return -EINVAL;
- if (dev->power.wakeup_path && genpd_dev_active_wakeup(genpd, dev))
+ if (dev->power.wakeup_path && genpd_is_active_wakeup(genpd))
return 0;
genpd_lock(genpd);
@@ -1024,7 +1112,7 @@ static int pm_genpd_resume_noirq(struct device *dev)
}
/**
- * pm_genpd_freeze_noirq - Completion of freezing a device in an I/O PM domain.
+ * genpd_freeze_noirq - Completion of freezing a device in an I/O PM domain.
* @dev: Device to freeze.
*
* Carry out a late freeze of a device under the assumption that its
@@ -1032,7 +1120,7 @@ static int pm_genpd_resume_noirq(struct device *dev)
* struct generic_pm_domain representing a power domain consisting of I/O
* devices.
*/
-static int pm_genpd_freeze_noirq(struct device *dev)
+static int genpd_freeze_noirq(struct device *dev)
{
const struct generic_pm_domain *genpd;
int ret = 0;
@@ -1054,13 +1142,13 @@ static int pm_genpd_freeze_noirq(struct device *dev)
}
/**
- * pm_genpd_thaw_noirq - Early thaw of device in an I/O PM domain.
+ * genpd_thaw_noirq - Early thaw of device in an I/O PM domain.
* @dev: Device to thaw.
*
* Start the device, unless power has been removed from the domain already
* before the system transition.
*/
-static int pm_genpd_thaw_noirq(struct device *dev)
+static int genpd_thaw_noirq(struct device *dev)
{
const struct generic_pm_domain *genpd;
int ret = 0;
@@ -1081,14 +1169,14 @@ static int pm_genpd_thaw_noirq(struct device *dev)
}
/**
- * pm_genpd_poweroff_noirq - Completion of hibernation of device in an
+ * genpd_poweroff_noirq - Completion of hibernation of device in an
* I/O PM domain.
* @dev: Device to poweroff.
*
* Stop the device and remove power from the domain if all devices in it have
* been stopped.
*/
-static int pm_genpd_poweroff_noirq(struct device *dev)
+static int genpd_poweroff_noirq(struct device *dev)
{
dev_dbg(dev, "%s()\n", __func__);
@@ -1096,13 +1184,13 @@ static int pm_genpd_poweroff_noirq(struct device *dev)
}
/**
- * pm_genpd_restore_noirq - Start of restore of device in an I/O PM domain.
+ * genpd_restore_noirq - Start of restore of device in an I/O PM domain.
* @dev: Device to resume.
*
* Make sure the domain will be in the same power state as before the
* hibernation the system is resuming from and start the device if necessary.
*/
-static int pm_genpd_restore_noirq(struct device *dev)
+static int genpd_restore_noirq(struct device *dev)
{
struct generic_pm_domain *genpd;
int ret = 0;
@@ -1139,7 +1227,7 @@ static int pm_genpd_restore_noirq(struct device *dev)
}
/**
- * pm_genpd_complete - Complete power transition of a device in a power domain.
+ * genpd_complete - Complete power transition of a device in a power domain.
* @dev: Device to complete the transition of.
*
* Complete a power transition of a device (during a system-wide power
@@ -1147,7 +1235,7 @@ static int pm_genpd_restore_noirq(struct device *dev)
* domain member of an object of type struct generic_pm_domain representing
* a power domain consisting of I/O devices.
*/
-static void pm_genpd_complete(struct device *dev)
+static void genpd_complete(struct device *dev)
{
struct generic_pm_domain *genpd;
@@ -1180,7 +1268,7 @@ static void genpd_syscore_switch(struct device *dev, bool suspend)
struct generic_pm_domain *genpd;
genpd = dev_to_genpd(dev);
- if (!pm_genpd_present(genpd))
+ if (!genpd_present(genpd))
return;
if (suspend) {
@@ -1206,14 +1294,14 @@ EXPORT_SYMBOL_GPL(pm_genpd_syscore_poweron);
#else /* !CONFIG_PM_SLEEP */
-#define pm_genpd_prepare NULL
-#define pm_genpd_suspend_noirq NULL
-#define pm_genpd_resume_noirq NULL
-#define pm_genpd_freeze_noirq NULL
-#define pm_genpd_thaw_noirq NULL
-#define pm_genpd_poweroff_noirq NULL
-#define pm_genpd_restore_noirq NULL
-#define pm_genpd_complete NULL
+#define genpd_prepare NULL
+#define genpd_suspend_noirq NULL
+#define genpd_resume_noirq NULL
+#define genpd_freeze_noirq NULL
+#define genpd_thaw_noirq NULL
+#define genpd_poweroff_noirq NULL
+#define genpd_restore_noirq NULL
+#define genpd_complete NULL
#endif /* CONFIG_PM_SLEEP */
@@ -1239,7 +1327,7 @@ static struct generic_pm_domain_data *genpd_alloc_dev_data(struct device *dev,
gpd_data->base.dev = dev;
gpd_data->td.constraint_changed = true;
- gpd_data->td.effective_constraint_ns = -1;
+ gpd_data->td.effective_constraint_ns = PM_QOS_RESUME_LATENCY_NO_CONSTRAINT_NS;
gpd_data->nb.notifier_call = genpd_dev_pm_qos_notifier;
spin_lock_irq(&dev->power.lock);
@@ -1574,14 +1662,14 @@ int pm_genpd_init(struct generic_pm_domain *genpd,
genpd->accounting_time = ktime_get();
genpd->domain.ops.runtime_suspend = genpd_runtime_suspend;
genpd->domain.ops.runtime_resume = genpd_runtime_resume;
- genpd->domain.ops.prepare = pm_genpd_prepare;
- genpd->domain.ops.suspend_noirq = pm_genpd_suspend_noirq;
- genpd->domain.ops.resume_noirq = pm_genpd_resume_noirq;
- genpd->domain.ops.freeze_noirq = pm_genpd_freeze_noirq;
- genpd->domain.ops.thaw_noirq = pm_genpd_thaw_noirq;
- genpd->domain.ops.poweroff_noirq = pm_genpd_poweroff_noirq;
- genpd->domain.ops.restore_noirq = pm_genpd_restore_noirq;
- genpd->domain.ops.complete = pm_genpd_complete;
+ genpd->domain.ops.prepare = genpd_prepare;
+ genpd->domain.ops.suspend_noirq = genpd_suspend_noirq;
+ genpd->domain.ops.resume_noirq = genpd_resume_noirq;
+ genpd->domain.ops.freeze_noirq = genpd_freeze_noirq;
+ genpd->domain.ops.thaw_noirq = genpd_thaw_noirq;
+ genpd->domain.ops.poweroff_noirq = genpd_poweroff_noirq;
+ genpd->domain.ops.restore_noirq = genpd_restore_noirq;
+ genpd->domain.ops.complete = genpd_complete;
if (genpd->flags & GENPD_FLAG_PM_CLK) {
genpd->dev_ops.stop = pm_clk_suspend;
@@ -1795,7 +1883,7 @@ int of_genpd_add_provider_simple(struct device_node *np,
mutex_lock(&gpd_list_lock);
- if (pm_genpd_present(genpd)) {
+ if (genpd_present(genpd)) {
ret = genpd_add_provider(np, genpd_xlate_simple, genpd);
if (!ret) {
genpd->provider = &np->fwnode;
@@ -1831,7 +1919,7 @@ int of_genpd_add_provider_onecell(struct device_node *np,
for (i = 0; i < data->num_domains; i++) {
if (!data->domains[i])
continue;
- if (!pm_genpd_present(data->domains[i]))
+ if (!genpd_present(data->domains[i]))
goto error;
data->domains[i]->provider = &np->fwnode;
@@ -2274,7 +2362,7 @@ EXPORT_SYMBOL_GPL(of_genpd_parse_idle_states);
#include <linux/seq_file.h>
#include <linux/init.h>
#include <linux/kobject.h>
-static struct dentry *pm_genpd_debugfs_dir;
+static struct dentry *genpd_debugfs_dir;
/*
* TODO: This function is a slightly modified version of rtpm_status_show
@@ -2302,8 +2390,8 @@ static void rtpm_status_str(struct seq_file *s, struct device *dev)
seq_puts(s, p);
}
-static int pm_genpd_summary_one(struct seq_file *s,
- struct generic_pm_domain *genpd)
+static int genpd_summary_one(struct seq_file *s,
+ struct generic_pm_domain *genpd)
{
static const char * const status_lookup[] = {
[GPD_STATE_ACTIVE] = "on",
@@ -2373,7 +2461,7 @@ static int genpd_summary_show(struct seq_file *s, void *data)
return -ERESTARTSYS;
list_for_each_entry(genpd, &gpd_list, gpd_list_node) {
- ret = pm_genpd_summary_one(s, genpd);
+ ret = genpd_summary_one(s, genpd);
if (ret)
break;
}
@@ -2559,23 +2647,23 @@ define_genpd_debugfs_fops(active_time);
define_genpd_debugfs_fops(total_idle_time);
define_genpd_debugfs_fops(devices);
-static int __init pm_genpd_debug_init(void)
+static int __init genpd_debug_init(void)
{
struct dentry *d;
struct generic_pm_domain *genpd;
- pm_genpd_debugfs_dir = debugfs_create_dir("pm_genpd", NULL);
+ genpd_debugfs_dir = debugfs_create_dir("pm_genpd", NULL);
- if (!pm_genpd_debugfs_dir)
+ if (!genpd_debugfs_dir)
return -ENOMEM;
d = debugfs_create_file("pm_genpd_summary", S_IRUGO,
- pm_genpd_debugfs_dir, NULL, &genpd_summary_fops);
+ genpd_debugfs_dir, NULL, &genpd_summary_fops);
if (!d)
return -ENOMEM;
list_for_each_entry(genpd, &gpd_list, gpd_list_node) {
- d = debugfs_create_dir(genpd->name, pm_genpd_debugfs_dir);
+ d = debugfs_create_dir(genpd->name, genpd_debugfs_dir);
if (!d)
return -ENOMEM;
@@ -2595,11 +2683,11 @@ static int __init pm_genpd_debug_init(void)
return 0;
}
-late_initcall(pm_genpd_debug_init);
+late_initcall(genpd_debug_init);
-static void __exit pm_genpd_debug_exit(void)
+static void __exit genpd_debug_exit(void)
{
- debugfs_remove_recursive(pm_genpd_debugfs_dir);
+ debugfs_remove_recursive(genpd_debugfs_dir);
}
-__exitcall(pm_genpd_debug_exit);
+__exitcall(genpd_debug_exit);
#endif /* CONFIG_DEBUG_FS */
diff --git a/drivers/base/power/domain_governor.c b/drivers/base/power/domain_governor.c
index 281f949c5ffe..99896fbf18e4 100644
--- a/drivers/base/power/domain_governor.c
+++ b/drivers/base/power/domain_governor.c
@@ -14,23 +14,29 @@
static int dev_update_qos_constraint(struct device *dev, void *data)
{
s64 *constraint_ns_p = data;
- s32 constraint_ns = -1;
+ s64 constraint_ns;
- if (dev->power.subsys_data && dev->power.subsys_data->domain_data)
+ if (dev->power.subsys_data && dev->power.subsys_data->domain_data) {
+ /*
+ * Only take suspend-time QoS constraints of devices into
+ * account, because constraints updated after the device has
+ * been suspended are not guaranteed to be taken into account
+ * anyway. In order for them to take effect, the device has to
+ * be resumed and suspended again.
+ */
constraint_ns = dev_gpd_data(dev)->td.effective_constraint_ns;
-
- if (constraint_ns < 0) {
+ } else {
+ /*
+ * The child is not in a domain and there's no info on its
+ * suspend/resume latencies, so assume them to be negligible and
+ * take its current PM QoS constraint (that's the only thing
+ * known at this point anyway).
+ */
constraint_ns = dev_pm_qos_read_value(dev);
constraint_ns *= NSEC_PER_USEC;
}
- if (constraint_ns == 0)
- return 0;
- /*
- * constraint_ns cannot be negative here, because the device has been
- * suspended.
- */
- if (constraint_ns < *constraint_ns_p || *constraint_ns_p == 0)
+ if (constraint_ns < *constraint_ns_p)
*constraint_ns_p = constraint_ns;
return 0;
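The hunk above changes how each child device's constraint is folded into the parent's latency budget. A minimal user-space sketch of that folding, assuming the kernel's convention that 0 means "no suspend at all" and using a stand-in for the PM_QOS_RESUME_LATENCY_NO_CONSTRAINT_NS sentinel (fold_constraint() is a hypothetical helper mirroring dev_update_qos_constraint()):

#include <stdio.h>

#define NSEC_PER_USEC 1000LL
/* Stand-in for PM_QOS_RESUME_LATENCY_NO_CONSTRAINT_NS ("no restriction"). */
#define NO_CONSTRAINT_NS ((long long)0x7fffffff * NSEC_PER_USEC)

/* Fold one child's effective constraint (in ns) into the running minimum. */
static void fold_constraint(long long child_ns, long long *constraint_ns_p)
{
	if (child_ns < *constraint_ns_p)
		*constraint_ns_p = child_ns;
}

int main(void)
{
	long long budget = NO_CONSTRAINT_NS;		/* start unrestricted */

	fold_constraint(500 * NSEC_PER_USEC, &budget);	/* child A: 500 us */
	fold_constraint(NO_CONSTRAINT_NS, &budget);	/* child B: none */
	fold_constraint(100 * NSEC_PER_USEC, &budget);	/* child C: 100 us */

	printf("effective constraint: %lld ns\n", budget);	/* 100000 ns */
	return 0;
}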
@@ -58,12 +64,12 @@ static bool default_suspend_ok(struct device *dev)
}
td->constraint_changed = false;
td->cached_suspend_ok = false;
- td->effective_constraint_ns = -1;
+ td->effective_constraint_ns = 0;
constraint_ns = __dev_pm_qos_read_value(dev);
spin_unlock_irqrestore(&dev->power.lock, flags);
- if (constraint_ns < 0)
+ if (constraint_ns == 0)
return false;
constraint_ns *= NSEC_PER_USEC;
@@ -76,14 +82,32 @@ static bool default_suspend_ok(struct device *dev)
device_for_each_child(dev, &constraint_ns,
dev_update_qos_constraint);
- if (constraint_ns > 0) {
+ if (constraint_ns == PM_QOS_RESUME_LATENCY_NO_CONSTRAINT_NS) {
+ /* "No restriction", so the device is allowed to suspend. */
+ td->effective_constraint_ns = PM_QOS_RESUME_LATENCY_NO_CONSTRAINT_NS;
+ td->cached_suspend_ok = true;
+ } else if (constraint_ns == 0) {
+ /*
+ * This triggers if one of the children that don't belong to a
+ * domain has a zero PM QoS constraint and it's better not to
+ * suspend then. effective_constraint_ns is zero already and
+ * cached_suspend_ok is false, so bail out.
+ */
+ return false;
+ } else {
constraint_ns -= td->suspend_latency_ns +
td->resume_latency_ns;
- if (constraint_ns == 0)
+ /*
+ * effective_constraint_ns is zero already and cached_suspend_ok
+ * is false, so if the computed value is not positive, return
+ * right away.
+ */
+ if (constraint_ns <= 0)
return false;
+
+ td->effective_constraint_ns = constraint_ns;
+ td->cached_suspend_ok = true;
}
- td->effective_constraint_ns = constraint_ns;
- td->cached_suspend_ok = constraint_ns >= 0;
/*
* The children have been suspended already, so we don't need to take
@@ -144,18 +168,13 @@ static bool __default_power_down_ok(struct dev_pm_domain *pd,
*/
td = &to_gpd_data(pdd)->td;
constraint_ns = td->effective_constraint_ns;
- /* default_suspend_ok() need not be called before us. */
- if (constraint_ns < 0) {
- constraint_ns = dev_pm_qos_read_value(pdd->dev);
- constraint_ns *= NSEC_PER_USEC;
- }
- if (constraint_ns == 0)
- continue;
-
/*
- * constraint_ns cannot be negative here, because the device has
- * been suspended.
+ * Zero means "no suspend at all" and this runs only when all
+ * devices in the domain are suspended, so it must be positive.
*/
+ if (constraint_ns == PM_QOS_RESUME_LATENCY_NO_CONSTRAINT_NS)
+ continue;
+
if (constraint_ns <= off_on_time_ns)
return false;
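The remaining check above is the core of the domain power-down decision. A standalone sketch of it, with NO_CONSTRAINT_NS standing in for the kernel sentinel and power_down_ok() a hypothetical reduction of __default_power_down_ok():

#include <stdbool.h>
#include <stdio.h>

#define NO_CONSTRAINT_NS (0x7fffffffLL * 1000)	/* "no restriction" stand-in */

/* May the domain power off? Every suspended device's latency budget must
 * exceed the domain's combined power-off plus power-on time. */
static bool power_down_ok(const long long *constraint_ns, int n,
			  long long off_on_time_ns)
{
	for (int i = 0; i < n; i++) {
		if (constraint_ns[i] == NO_CONSTRAINT_NS)
			continue;		/* unrestricted device */
		if (constraint_ns[i] <= off_on_time_ns)
			return false;		/* budget too tight */
	}
	return true;
}

int main(void)
{
	long long budgets[] = { 2000000, NO_CONSTRAINT_NS, 500000 };

	printf("%d\n", power_down_ok(budgets, 3, 400000));	/* 1: ok */
	printf("%d\n", power_down_ok(budgets, 3, 600000));	/* 0: 500 us too tight */
	return 0;
}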
diff --git a/drivers/base/power/generic_ops.c b/drivers/base/power/generic_ops.c
index 07c3c4a9522d..b2ed606265a8 100644
--- a/drivers/base/power/generic_ops.c
+++ b/drivers/base/power/generic_ops.c
@@ -9,7 +9,6 @@
#include <linux/pm.h>
#include <linux/pm_runtime.h>
#include <linux/export.h>
-#include <linux/suspend.h>
#ifdef CONFIG_PM
/**
@@ -298,26 +297,4 @@ void pm_generic_complete(struct device *dev)
if (drv && drv->pm && drv->pm->complete)
drv->pm->complete(dev);
}
-
-/**
- * pm_complete_with_resume_check - Complete a device power transition.
- * @dev: Device to handle.
- *
- * Complete a device power transition during a system-wide power transition and
- * optionally schedule a runtime resume of the device if the system resume in
- * progress has been initated by the platform firmware and the device had its
- * power.direct_complete flag set.
- */
-void pm_complete_with_resume_check(struct device *dev)
-{
- pm_generic_complete(dev);
- /*
- * If the device had been runtime-suspended before the system went into
- * the sleep state it is going out of and it has never been resumed till
- * now, resume it in case the firmware powered it up.
- */
- if (dev->power.direct_complete && pm_resume_via_firmware())
- pm_request_resume(dev);
-}
-EXPORT_SYMBOL_GPL(pm_complete_with_resume_check);
#endif /* CONFIG_PM_SLEEP */
diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index ae47b2ec84b4..db2f04415927 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -526,7 +526,7 @@ static void dpm_watchdog_clear(struct dpm_watchdog *wd)
/*------------------------- Resume routines -------------------------*/
/**
- * device_resume_noirq - Execute an "early resume" callback for given device.
+ * device_resume_noirq - Execute a "noirq resume" callback for given device.
* @dev: Device to handle.
* @state: PM transition of the system being carried out.
* @async: If true, the device is being resumed asynchronously.
@@ -846,16 +846,10 @@ static int device_resume(struct device *dev, pm_message_t state, bool async)
goto Driver;
}
- if (dev->class) {
- if (dev->class->pm) {
- info = "class ";
- callback = pm_op(dev->class->pm, state);
- goto Driver;
- } else if (dev->class->resume) {
- info = "legacy class ";
- callback = dev->class->resume;
- goto End;
- }
+ if (dev->class && dev->class->pm) {
+ info = "class ";
+ callback = pm_op(dev->class->pm, state);
+ goto Driver;
}
if (dev->bus) {
@@ -1081,7 +1075,7 @@ static pm_message_t resume_event(pm_message_t sleep_state)
}
/**
- * device_suspend_noirq - Execute a "late suspend" callback for given device.
+ * __device_suspend_noirq - Execute a "noirq suspend" callback for given device.
* @dev: Device to handle.
* @state: PM transition of the system being carried out.
* @async: If true, the device is being suspended asynchronously.
@@ -1241,7 +1235,7 @@ int dpm_suspend_noirq(pm_message_t state)
}
/**
- * device_suspend_late - Execute a "late suspend" callback for given device.
+ * __device_suspend_late - Execute a "late suspend" callback for given device.
* @dev: Device to handle.
* @state: PM transition of the system being carried out.
* @async: If true, the device is being suspended asynchronously.
@@ -1443,7 +1437,7 @@ static void dpm_clear_suppliers_direct_complete(struct device *dev)
}
/**
- * device_suspend - Execute "suspend" callbacks for given device.
+ * __device_suspend - Execute "suspend" callbacks for given device.
* @dev: Device to handle.
* @state: PM transition of the system being carried out.
* @async: If true, the device is being suspended asynchronously.
@@ -1506,17 +1500,10 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async)
goto Run;
}
- if (dev->class) {
- if (dev->class->pm) {
- info = "class ";
- callback = pm_op(dev->class->pm, state);
- goto Run;
- } else if (dev->class->suspend) {
- pm_dev_dbg(dev, state, "legacy class ");
- error = legacy_suspend(dev, state, dev->class->suspend,
- "legacy class ");
- goto End;
- }
+ if (dev->class && dev->class->pm) {
+ info = "class ";
+ callback = pm_op(dev->class->pm, state);
+ goto Run;
}
if (dev->bus) {
@@ -1663,6 +1650,9 @@ static int device_prepare(struct device *dev, pm_message_t state)
if (dev->power.syscore)
return 0;
+ WARN_ON(dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND) &&
+ !pm_runtime_enabled(dev));
+
/*
* If a device's parent goes into runtime suspend at the wrong time,
* it won't be possible to resume the device. To prevent this we
@@ -1711,7 +1701,9 @@ unlock:
* applies to suspend transitions, however.
*/
spin_lock_irq(&dev->power.lock);
- dev->power.direct_complete = ret > 0 && state.event == PM_EVENT_SUSPEND;
+ dev->power.direct_complete = state.event == PM_EVENT_SUSPEND &&
+ pm_runtime_suspended(dev) && ret > 0 &&
+ !dev_pm_test_driver_flags(dev, DPM_FLAG_NEVER_SKIP);
spin_unlock_irq(&dev->power.lock);
return 0;
}
@@ -1860,11 +1852,16 @@ void device_pm_check_callbacks(struct device *dev)
dev->power.no_pm_callbacks =
(!dev->bus || (pm_ops_is_empty(dev->bus->pm) &&
!dev->bus->suspend && !dev->bus->resume)) &&
- (!dev->class || (pm_ops_is_empty(dev->class->pm) &&
- !dev->class->suspend && !dev->class->resume)) &&
+ (!dev->class || pm_ops_is_empty(dev->class->pm)) &&
(!dev->type || pm_ops_is_empty(dev->type->pm)) &&
(!dev->pm_domain || pm_ops_is_empty(&dev->pm_domain->ops)) &&
(!dev->driver || (pm_ops_is_empty(dev->driver->pm) &&
!dev->driver->suspend && !dev->driver->resume));
spin_unlock_irq(&dev->power.lock);
}
+
+bool dev_pm_smart_suspend_and_suspended(struct device *dev)
+{
+ return dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND) &&
+ pm_runtime_status_suspended(dev);
+}
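A rough sketch of how a driver would opt in to the new flag (foo_probe() is hypothetical; dev_pm_set_driver_flags() and DPM_FLAG_SMART_SUSPEND are the interfaces introduced by this series, and the WARN_ON added to device_prepare() above fires if the flag is set while runtime PM is disabled):

#include <linux/device.h>
#include <linux/pm.h>
#include <linux/pm_runtime.h>

/* Hypothetical probe: set the flag before the first system suspend and
 * keep runtime PM enabled, so dev_pm_smart_suspend_and_suspended() can
 * report the device as safely left in runtime suspend. */
static int foo_probe(struct device *dev)
{
	dev_pm_set_driver_flags(dev, DPM_FLAG_SMART_SUSPEND);
	pm_runtime_enable(dev);
	return 0;
}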
diff --git a/drivers/base/power/qos.c b/drivers/base/power/qos.c
index 277d43a83f53..3382542b39b7 100644
--- a/drivers/base/power/qos.c
+++ b/drivers/base/power/qos.c
@@ -139,6 +139,9 @@ static int apply_constraint(struct dev_pm_qos_request *req,
switch(req->type) {
case DEV_PM_QOS_RESUME_LATENCY:
+ if (WARN_ON(action != PM_QOS_REMOVE_REQ && value < 0))
+ value = 0;
+
ret = pm_qos_update_target(&qos->resume_latency,
&req->data.pnode, action, value);
break;
@@ -189,7 +192,7 @@ static int dev_pm_qos_constraints_allocate(struct device *dev)
plist_head_init(&c->list);
c->target_value = PM_QOS_RESUME_LATENCY_DEFAULT_VALUE;
c->default_value = PM_QOS_RESUME_LATENCY_DEFAULT_VALUE;
- c->no_constraint_value = PM_QOS_RESUME_LATENCY_DEFAULT_VALUE;
+ c->no_constraint_value = PM_QOS_RESUME_LATENCY_NO_CONSTRAINT;
c->type = PM_QOS_MIN;
c->notifiers = n;
diff --git a/drivers/base/power/runtime.c b/drivers/base/power/runtime.c
index 41d7c2b99f69..2362b9e9701e 100644
--- a/drivers/base/power/runtime.c
+++ b/drivers/base/power/runtime.c
@@ -253,7 +253,7 @@ static int rpm_check_suspend_allowed(struct device *dev)
|| (dev->power.request_pending
&& dev->power.request == RPM_REQ_RESUME))
retval = -EAGAIN;
- else if (__dev_pm_qos_read_value(dev) < 0)
+ else if (__dev_pm_qos_read_value(dev) == 0)
retval = -EPERM;
else if (dev->power.runtime_status == RPM_SUSPENDED)
retval = 1;
@@ -894,9 +894,9 @@ static void pm_runtime_work(struct work_struct *work)
*
* Check if the time is right and queue a suspend request.
*/
-static void pm_suspend_timer_fn(unsigned long data)
+static void pm_suspend_timer_fn(struct timer_list *t)
{
- struct device *dev = (struct device *)data;
+ struct device *dev = from_timer(dev, t, power.suspend_timer);
unsigned long flags;
unsigned long expires;
@@ -1499,8 +1499,7 @@ void pm_runtime_init(struct device *dev)
INIT_WORK(&dev->power.work, pm_runtime_work);
dev->power.timer_expires = 0;
- setup_timer(&dev->power.suspend_timer, pm_suspend_timer_fn,
- (unsigned long)dev);
+ timer_setup(&dev->power.suspend_timer, pm_suspend_timer_fn, 0);
init_waitqueue_head(&dev->power.wait_queue);
}
diff --git a/drivers/base/power/sysfs.c b/drivers/base/power/sysfs.c
index 156ab57bca77..e153e28b1857 100644
--- a/drivers/base/power/sysfs.c
+++ b/drivers/base/power/sysfs.c
@@ -218,7 +218,14 @@ static ssize_t pm_qos_resume_latency_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
- return sprintf(buf, "%d\n", dev_pm_qos_requested_resume_latency(dev));
+ s32 value = dev_pm_qos_requested_resume_latency(dev);
+
+ if (value == 0)
+ return sprintf(buf, "n/a\n");
+ else if (value == PM_QOS_RESUME_LATENCY_NO_CONSTRAINT)
+ value = 0;
+
+ return sprintf(buf, "%d\n", value);
}
static ssize_t pm_qos_resume_latency_store(struct device *dev,
@@ -228,11 +235,21 @@ static ssize_t pm_qos_resume_latency_store(struct device *dev,
s32 value;
int ret;
- if (kstrtos32(buf, 0, &value))
- return -EINVAL;
+ if (!kstrtos32(buf, 0, &value)) {
+ /*
+ * Prevent users from writing negative or "no constraint" values
+ * directly.
+ */
+ if (value < 0 || value == PM_QOS_RESUME_LATENCY_NO_CONSTRAINT)
+ return -EINVAL;
- if (value < 0)
+ if (value == 0)
+ value = PM_QOS_RESUME_LATENCY_NO_CONSTRAINT;
+ } else if (!strcmp(buf, "n/a") || !strcmp(buf, "n/a\n")) {
+ value = 0;
+ } else {
return -EINVAL;
+ }
ret = dev_pm_qos_update_request(dev->power.qos->resume_latency_req,
value);
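The store() branches above encode a small translation table: a written "0" is stored as the "no constraint" sentinel, "n/a" is stored as 0 (suspend forbidden), and negative or sentinel values are rejected. A user-space sketch of that mapping, with NO_CONSTRAINT standing in for PM_QOS_RESUME_LATENCY_NO_CONSTRAINT and parse_resume_latency() a hypothetical helper:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NO_CONSTRAINT 0x7fffffff	/* sentinel stand-in */

/* Returns 0 and sets *value on success, -1 on rejected input. */
static int parse_resume_latency(const char *buf, int *value)
{
	char *end;
	long v = strtol(buf, &end, 0);

	if (end != buf && (*end == '\0' || *end == '\n')) {
		if (v < 0 || v == NO_CONSTRAINT)
			return -1;	/* never writable directly */
		*value = v ? v : NO_CONSTRAINT;	/* "0" means no constraint */
		return 0;
	}
	if (!strcmp(buf, "n/a") || !strcmp(buf, "n/a\n")) {
		*value = 0;		/* stored 0 now forbids suspend */
		return 0;
	}
	return -1;
}

int main(void)
{
	int v;

	if (!parse_resume_latency("0", &v))
		printf("\"0\"   -> %d (no constraint)\n", v);
	if (!parse_resume_latency("n/a", &v))
		printf("\"n/a\" -> %d (suspend forbidden)\n", v);
	return 0;
}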
@@ -309,33 +326,6 @@ static ssize_t pm_qos_no_power_off_store(struct device *dev,
static DEVICE_ATTR(pm_qos_no_power_off, 0644,
pm_qos_no_power_off_show, pm_qos_no_power_off_store);
-static ssize_t pm_qos_remote_wakeup_show(struct device *dev,
- struct device_attribute *attr,
- char *buf)
-{
- return sprintf(buf, "%d\n", !!(dev_pm_qos_requested_flags(dev)
- & PM_QOS_FLAG_REMOTE_WAKEUP));
-}
-
-static ssize_t pm_qos_remote_wakeup_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t n)
-{
- int ret;
-
- if (kstrtoint(buf, 0, &ret))
- return -EINVAL;
-
- if (ret != 0 && ret != 1)
- return -EINVAL;
-
- ret = dev_pm_qos_update_flags(dev, PM_QOS_FLAG_REMOTE_WAKEUP, ret);
- return ret < 0 ? ret : n;
-}
-
-static DEVICE_ATTR(pm_qos_remote_wakeup, 0644,
- pm_qos_remote_wakeup_show, pm_qos_remote_wakeup_store);
-
#ifdef CONFIG_PM_SLEEP
static const char _enabled[] = "enabled";
static const char _disabled[] = "disabled";
@@ -671,7 +661,6 @@ static const struct attribute_group pm_qos_latency_tolerance_attr_group = {
static struct attribute *pm_qos_flags_attrs[] = {
&dev_attr_pm_qos_no_power_off.attr,
- &dev_attr_pm_qos_remote_wakeup.attr,
NULL,
};
static const struct attribute_group pm_qos_flags_attr_group = {
diff --git a/drivers/base/power/wakeup.c b/drivers/base/power/wakeup.c
index cdd6f256da59..680ee1d36ac9 100644
--- a/drivers/base/power/wakeup.c
+++ b/drivers/base/power/wakeup.c
@@ -54,7 +54,7 @@ static unsigned int saved_count;
static DEFINE_SPINLOCK(events_lock);
-static void pm_wakeup_timer_fn(unsigned long data);
+static void pm_wakeup_timer_fn(struct timer_list *t);
static LIST_HEAD(wakeup_sources);
@@ -176,7 +176,7 @@ void wakeup_source_add(struct wakeup_source *ws)
return;
spin_lock_init(&ws->lock);
- setup_timer(&ws->timer, pm_wakeup_timer_fn, (unsigned long)ws);
+ timer_setup(&ws->timer, pm_wakeup_timer_fn, 0);
ws->active = false;
ws->last_time = ktime_get();
@@ -481,8 +481,7 @@ static bool wakeup_source_not_registered(struct wakeup_source *ws)
* Use timer struct to check if the given source is initialized
* by wakeup_source_add.
*/
- return ws->timer.function != pm_wakeup_timer_fn ||
- ws->timer.data != (unsigned long)ws;
+ return ws->timer.function != (TIMER_FUNC_TYPE)pm_wakeup_timer_fn;
}
/*
@@ -724,9 +723,9 @@ EXPORT_SYMBOL_GPL(pm_relax);
* in @data if it is currently active and its timer has not been canceled and
* the expiration time of the timer is not in future.
*/
-static void pm_wakeup_timer_fn(unsigned long data)
+static void pm_wakeup_timer_fn(struct timer_list *t)
{
- struct wakeup_source *ws = (struct wakeup_source *)data;
+ struct wakeup_source *ws = from_timer(ws, t, timer);
unsigned long flags;
spin_lock_irqsave(&ws->lock, flags);
diff --git a/drivers/cpufreq/arm_big_little.c b/drivers/cpufreq/arm_big_little.c
index 17504129fd77..65ec5f01aa8d 100644
--- a/drivers/cpufreq/arm_big_little.c
+++ b/drivers/cpufreq/arm_big_little.c
@@ -57,7 +57,7 @@ static bool bL_switching_enabled;
#define VIRT_FREQ(cluster, freq) ((cluster == A7_CLUSTER) ? freq >> 1 : freq)
static struct thermal_cooling_device *cdev[MAX_CLUSTERS];
-static struct cpufreq_arm_bL_ops *arm_bL_ops;
+static const struct cpufreq_arm_bL_ops *arm_bL_ops;
static struct clk *clk[MAX_CLUSTERS];
static struct cpufreq_frequency_table *freq_table[MAX_CLUSTERS + 1];
static atomic_t cluster_usage[MAX_CLUSTERS + 1];
@@ -213,6 +213,7 @@ static int bL_cpufreq_set_target(struct cpufreq_policy *policy,
{
u32 cpu = policy->cpu, cur_cluster, new_cluster, actual_cluster;
unsigned int freqs_new;
+ int ret;
cur_cluster = cpu_to_cluster(cpu);
new_cluster = actual_cluster = per_cpu(physical_cluster, cpu);
@@ -229,7 +230,14 @@ static int bL_cpufreq_set_target(struct cpufreq_policy *policy,
}
}
- return bL_cpufreq_set_rate(cpu, actual_cluster, new_cluster, freqs_new);
+ ret = bL_cpufreq_set_rate(cpu, actual_cluster, new_cluster, freqs_new);
+
+ if (!ret) {
+ arch_set_freq_scale(policy->related_cpus, freqs_new,
+ policy->cpuinfo.max_freq);
+ }
+
+ return ret;
}
static inline u32 get_table_count(struct cpufreq_frequency_table *table)
@@ -609,7 +617,7 @@ static int __bLs_register_notifier(void) { return 0; }
static int __bLs_unregister_notifier(void) { return 0; }
#endif
-int bL_cpufreq_register(struct cpufreq_arm_bL_ops *ops)
+int bL_cpufreq_register(const struct cpufreq_arm_bL_ops *ops)
{
int ret, i;
@@ -653,7 +661,7 @@ int bL_cpufreq_register(struct cpufreq_arm_bL_ops *ops)
}
EXPORT_SYMBOL_GPL(bL_cpufreq_register);
-void bL_cpufreq_unregister(struct cpufreq_arm_bL_ops *ops)
+void bL_cpufreq_unregister(const struct cpufreq_arm_bL_ops *ops)
{
if (arm_bL_ops != ops) {
pr_err("%s: Registered with: %s, can't unregister, exiting\n",
diff --git a/drivers/cpufreq/arm_big_little.h b/drivers/cpufreq/arm_big_little.h
index 184d7c3a112a..88a176e466c8 100644
--- a/drivers/cpufreq/arm_big_little.h
+++ b/drivers/cpufreq/arm_big_little.h
@@ -37,7 +37,7 @@ struct cpufreq_arm_bL_ops {
void (*free_opp_table)(const struct cpumask *cpumask);
};
-int bL_cpufreq_register(struct cpufreq_arm_bL_ops *ops);
-void bL_cpufreq_unregister(struct cpufreq_arm_bL_ops *ops);
+int bL_cpufreq_register(const struct cpufreq_arm_bL_ops *ops);
+void bL_cpufreq_unregister(const struct cpufreq_arm_bL_ops *ops);
#endif /* CPUFREQ_ARM_BIG_LITTLE_H */
diff --git a/drivers/cpufreq/arm_big_little_dt.c b/drivers/cpufreq/arm_big_little_dt.c
index 39b3f51d9a30..b944f290c8a4 100644
--- a/drivers/cpufreq/arm_big_little_dt.c
+++ b/drivers/cpufreq/arm_big_little_dt.c
@@ -61,7 +61,7 @@ static int dt_get_transition_latency(struct device *cpu_dev)
return transition_latency;
}
-static struct cpufreq_arm_bL_ops dt_bL_ops = {
+static const struct cpufreq_arm_bL_ops dt_bL_ops = {
.name = "dt-bl",
.get_transition_latency = dt_get_transition_latency,
.init_opp_table = dev_pm_opp_of_cpumask_add_table,
diff --git a/drivers/cpufreq/cpufreq-dt-platdev.c b/drivers/cpufreq/cpufreq-dt-platdev.c
index a753c50e9e41..ecc56e26f8f6 100644
--- a/drivers/cpufreq/cpufreq-dt-platdev.c
+++ b/drivers/cpufreq/cpufreq-dt-platdev.c
@@ -48,7 +48,6 @@ static const struct of_device_id whitelist[] __initconst = {
{ .compatible = "samsung,exynos3250", },
{ .compatible = "samsung,exynos4210", },
- { .compatible = "samsung,exynos4212", },
{ .compatible = "samsung,exynos5250", },
#ifndef CONFIG_BL_SWITCHER
{ .compatible = "samsung,exynos5800", },
@@ -83,8 +82,6 @@ static const struct of_device_id whitelist[] __initconst = {
{ .compatible = "rockchip,rk3368", },
{ .compatible = "rockchip,rk3399", },
- { .compatible = "socionext,uniphier-ld6b", },
-
{ .compatible = "st-ericsson,u8500", },
{ .compatible = "st-ericsson,u8540", },
{ .compatible = "st-ericsson,u9500", },
diff --git a/drivers/cpufreq/cpufreq-dt.c b/drivers/cpufreq/cpufreq-dt.c
index d83ab94d041a..545946ad0752 100644
--- a/drivers/cpufreq/cpufreq-dt.c
+++ b/drivers/cpufreq/cpufreq-dt.c
@@ -43,9 +43,17 @@ static struct freq_attr *cpufreq_dt_attr[] = {
static int set_target(struct cpufreq_policy *policy, unsigned int index)
{
struct private_data *priv = policy->driver_data;
+ unsigned long freq = policy->freq_table[index].frequency;
+ int ret;
+
+ ret = dev_pm_opp_set_rate(priv->cpu_dev, freq * 1000);
- return dev_pm_opp_set_rate(priv->cpu_dev,
- policy->freq_table[index].frequency * 1000);
+ if (!ret) {
+ arch_set_freq_scale(policy->related_cpus, freq,
+ policy->cpuinfo.max_freq);
+ }
+
+ return ret;
}
/*
diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
index ea43b147a7fe..41d148af7748 100644
--- a/drivers/cpufreq/cpufreq.c
+++ b/drivers/cpufreq/cpufreq.c
@@ -161,6 +161,12 @@ u64 get_cpu_idle_time(unsigned int cpu, u64 *wall, int io_busy)
}
EXPORT_SYMBOL_GPL(get_cpu_idle_time);
+__weak void arch_set_freq_scale(struct cpumask *cpus, unsigned long cur_freq,
+ unsigned long max_freq)
+{
+}
+EXPORT_SYMBOL_GPL(arch_set_freq_scale);
+
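The weak stub above exists so an architecture can publish the current-to-maximum frequency ratio for frequency-invariant utilization accounting. A sketch of the arithmetic such an override typically performs, assuming 1024-based fixed point (the real arm implementation lives in arch code; freq_scale() here is only illustrative):

#include <stdio.h>

#define SCHED_CAPACITY_SCALE 1024UL

/* Ratio of current to maximum frequency in fixed point, so utilization
 * accumulated at cur can be compared as if the CPU always ran at max. */
static unsigned long freq_scale(unsigned long cur, unsigned long max)
{
	return cur * SCHED_CAPACITY_SCALE / max;
}

int main(void)
{
	/* 1.2 GHz out of 2.4 GHz: half capacity, i.e. 512/1024. */
	printf("%lu\n", freq_scale(1200000, 2400000));
	return 0;
}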
/*
* This is a generic cpufreq init() routine which can be used by cpufreq
* drivers of SMP systems. It will do following:
diff --git a/drivers/cpufreq/cpufreq_stats.c b/drivers/cpufreq/cpufreq_stats.c
index e75880eb037d..1e55b5790853 100644
--- a/drivers/cpufreq/cpufreq_stats.c
+++ b/drivers/cpufreq/cpufreq_stats.c
@@ -118,8 +118,11 @@ static ssize_t show_trans_table(struct cpufreq_policy *policy, char *buf)
break;
len += snprintf(buf + len, PAGE_SIZE - len, "\n");
}
- if (len >= PAGE_SIZE)
- return PAGE_SIZE;
+
+ if (len >= PAGE_SIZE) {
+ pr_warn_once("cpufreq transition table exceeds PAGE_SIZE. Disabling\n");
+ return -EFBIG;
+ }
return len;
}
cpufreq_freq_attr_ro(trans_table);
diff --git a/drivers/cpufreq/imx6q-cpufreq.c b/drivers/cpufreq/imx6q-cpufreq.c
index 14466a9b01c0..628fe899cb48 100644
--- a/drivers/cpufreq/imx6q-cpufreq.c
+++ b/drivers/cpufreq/imx6q-cpufreq.c
@@ -12,6 +12,7 @@
#include <linux/err.h>
#include <linux/module.h>
#include <linux/of.h>
+#include <linux/of_address.h>
#include <linux/pm_opp.h>
#include <linux/platform_device.h>
#include <linux/regulator/consumer.h>
@@ -191,6 +192,57 @@ static struct cpufreq_driver imx6q_cpufreq_driver = {
.suspend = cpufreq_generic_suspend,
};
+#define OCOTP_CFG3 0x440
+#define OCOTP_CFG3_SPEED_SHIFT 16
+#define OCOTP_CFG3_SPEED_1P2GHZ 0x3
+#define OCOTP_CFG3_SPEED_996MHZ 0x2
+#define OCOTP_CFG3_SPEED_852MHZ 0x1
+
+static void imx6q_opp_check_speed_grading(struct device *dev)
+{
+ struct device_node *np;
+ void __iomem *base;
+ u32 val;
+
+ np = of_find_compatible_node(NULL, NULL, "fsl,imx6q-ocotp");
+ if (!np)
+ return;
+
+ base = of_iomap(np, 0);
+ if (!base) {
+ dev_err(dev, "failed to map ocotp\n");
+ goto put_node;
+ }
+
+ /*
+ * SPEED_GRADING[1:0] defines the max speed of ARM:
+ * 2b'11: 1200000000Hz;
+ * 2b'10: 996000000Hz;
+ * 2b'01: 852000000Hz; -- i.MX6Q Only, exclusive with 996MHz.
+ * 2b'00: 792000000Hz;
+ * We need to set the max speed of ARM according to fuse map.
+ */
+ val = readl_relaxed(base + OCOTP_CFG3);
+ val >>= OCOTP_CFG3_SPEED_SHIFT;
+ val &= 0x3;
+
+ if ((val != OCOTP_CFG3_SPEED_1P2GHZ) &&
+ of_machine_is_compatible("fsl,imx6q"))
+ if (dev_pm_opp_disable(dev, 1200000000))
+ dev_warn(dev, "failed to disable 1.2GHz OPP\n");
+ if (val < OCOTP_CFG3_SPEED_996MHZ)
+ if (dev_pm_opp_disable(dev, 996000000))
+ dev_warn(dev, "failed to disable 996MHz OPP\n");
+ if (of_machine_is_compatible("fsl,imx6q")) {
+ if (val != OCOTP_CFG3_SPEED_852MHZ)
+ if (dev_pm_opp_disable(dev, 852000000))
+ dev_warn(dev, "failed to disable 852MHz OPP\n");
+ }
+ iounmap(base);
+put_node:
+ of_node_put(np);
+}
+
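A standalone sketch of the fuse decoding above, reusing the same OCOTP_CFG3 bit definitions (max_arm_hz() is a hypothetical helper that maps the two fuse bits to the highest permitted ARM clock):

#include <stdio.h>

#define OCOTP_CFG3_SPEED_SHIFT	16
#define OCOTP_CFG3_SPEED_1P2GHZ	0x3
#define OCOTP_CFG3_SPEED_996MHZ	0x2
#define OCOTP_CFG3_SPEED_852MHZ	0x1

/* Map SPEED_GRADING[1:0] of a raw OCOTP_CFG3 word to the highest
 * permitted ARM frequency, per the table in the comment above. */
static unsigned int max_arm_hz(unsigned int cfg3)
{
	switch ((cfg3 >> OCOTP_CFG3_SPEED_SHIFT) & 0x3) {
	case OCOTP_CFG3_SPEED_1P2GHZ: return 1200000000U;
	case OCOTP_CFG3_SPEED_996MHZ: return 996000000U;
	case OCOTP_CFG3_SPEED_852MHZ: return 852000000U; /* i.MX6Q only */
	default:		      return 792000000U;
	}
}

int main(void)
{
	printf("%u Hz\n", max_arm_hz(0x2 << OCOTP_CFG3_SPEED_SHIFT));
	return 0;
}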
static int imx6q_cpufreq_probe(struct platform_device *pdev)
{
struct device_node *np;
@@ -252,28 +304,21 @@ static int imx6q_cpufreq_probe(struct platform_device *pdev)
goto put_reg;
}
- /*
- * We expect an OPP table supplied by the platform.
- * Just in case the platform did not supply the OPP
- * table, it will try to get it.
- */
- num = dev_pm_opp_get_opp_count(cpu_dev);
- if (num < 0) {
- ret = dev_pm_opp_of_add_table(cpu_dev);
- if (ret < 0) {
- dev_err(cpu_dev, "failed to init OPP table: %d\n", ret);
- goto put_reg;
- }
+ ret = dev_pm_opp_of_add_table(cpu_dev);
+ if (ret < 0) {
+ dev_err(cpu_dev, "failed to init OPP table: %d\n", ret);
+ goto put_reg;
+ }
- /* Because we have added the OPPs here, we must free them */
- free_opp = true;
+ imx6q_opp_check_speed_grading(cpu_dev);
- num = dev_pm_opp_get_opp_count(cpu_dev);
- if (num < 0) {
- ret = num;
- dev_err(cpu_dev, "no OPP table is found: %d\n", ret);
- goto out_free_opp;
- }
+ /* Because we have added the OPPs here, we must free them */
+ free_opp = true;
+ num = dev_pm_opp_get_opp_count(cpu_dev);
+ if (num < 0) {
+ ret = num;
+ dev_err(cpu_dev, "no OPP table is found: %d\n", ret);
+ goto out_free_opp;
}
ret = dev_pm_opp_init_cpufreq_table(cpu_dev, &freq_table);
diff --git a/drivers/cpufreq/powernow-k8.c b/drivers/cpufreq/powernow-k8.c
index 062d71434e47..b01e31db5f83 100644
--- a/drivers/cpufreq/powernow-k8.c
+++ b/drivers/cpufreq/powernow-k8.c
@@ -1043,7 +1043,7 @@ static int powernowk8_cpu_init(struct cpufreq_policy *pol)
data = kzalloc(sizeof(*data), GFP_KERNEL);
if (!data) {
- pr_err("unable to alloc powernow_k8_data");
+ pr_err("unable to alloc powernow_k8_data\n");
return -ENOMEM;
}
diff --git a/drivers/cpufreq/pxa2xx-cpufreq.c b/drivers/cpufreq/pxa2xx-cpufreq.c
index ce345bf34d5d..06b024a3e474 100644
--- a/drivers/cpufreq/pxa2xx-cpufreq.c
+++ b/drivers/cpufreq/pxa2xx-cpufreq.c
@@ -58,56 +58,40 @@ module_param(pxa27x_maxfreq, uint, 0);
MODULE_PARM_DESC(pxa27x_maxfreq, "Set the pxa27x maxfreq in MHz "
"(typically 624=>pxa270, 416=>pxa271, 520=>pxa272)");
+struct pxa_cpufreq_data {
+ struct clk *clk_core;
+};
+static struct pxa_cpufreq_data pxa_cpufreq_data;
+
struct pxa_freqs {
unsigned int khz;
- unsigned int membus;
- unsigned int cccr;
- unsigned int div2;
- unsigned int cclkcfg;
int vmin;
int vmax;
};
-/* Define the refresh period in mSec for the SDRAM and the number of rows */
-#define SDRAM_TREF 64 /* standard 64ms SDRAM */
-static unsigned int sdram_rows;
-
-#define CCLKCFG_TURBO 0x1
-#define CCLKCFG_FCS 0x2
-#define CCLKCFG_HALFTURBO 0x4
-#define CCLKCFG_FASTBUS 0x8
-#define MDREFR_DB2_MASK (MDREFR_K2DB2 | MDREFR_K1DB2)
-#define MDREFR_DRI_MASK 0xFFF
-
-#define MDCNFG_DRAC2(mdcnfg) (((mdcnfg) >> 21) & 0x3)
-#define MDCNFG_DRAC0(mdcnfg) (((mdcnfg) >> 5) & 0x3)
-
/*
* PXA255 definitions
*/
-/* Use the run mode frequencies for the CPUFREQ_POLICY_PERFORMANCE policy */
-#define CCLKCFG CCLKCFG_TURBO | CCLKCFG_FCS
-
static const struct pxa_freqs pxa255_run_freqs[] =
{
- /* CPU MEMBUS CCCR DIV2 CCLKCFG run turbo PXbus SDRAM */
- { 99500, 99500, 0x121, 1, CCLKCFG, -1, -1}, /* 99, 99, 50, 50 */
- {132700, 132700, 0x123, 1, CCLKCFG, -1, -1}, /* 133, 133, 66, 66 */
- {199100, 99500, 0x141, 0, CCLKCFG, -1, -1}, /* 199, 199, 99, 99 */
- {265400, 132700, 0x143, 1, CCLKCFG, -1, -1}, /* 265, 265, 133, 66 */
- {331800, 165900, 0x145, 1, CCLKCFG, -1, -1}, /* 331, 331, 166, 83 */
- {398100, 99500, 0x161, 0, CCLKCFG, -1, -1}, /* 398, 398, 196, 99 */
+ /* CPU MEMBUS run turbo PXbus SDRAM */
+ { 99500, -1, -1}, /* 99, 99, 50, 50 */
+ {132700, -1, -1}, /* 133, 133, 66, 66 */
+ {199100, -1, -1}, /* 199, 199, 99, 99 */
+ {265400, -1, -1}, /* 265, 265, 133, 66 */
+ {331800, -1, -1}, /* 331, 331, 166, 83 */
+ {398100, -1, -1}, /* 398, 398, 196, 99 */
};
/* Use the turbo mode frequencies for the CPUFREQ_POLICY_POWERSAVE policy */
static const struct pxa_freqs pxa255_turbo_freqs[] =
{
- /* CPU MEMBUS CCCR DIV2 CCLKCFG run turbo PXbus SDRAM */
- { 99500, 99500, 0x121, 1, CCLKCFG, -1, -1}, /* 99, 99, 50, 50 */
- {199100, 99500, 0x221, 0, CCLKCFG, -1, -1}, /* 99, 199, 50, 99 */
- {298500, 99500, 0x321, 0, CCLKCFG, -1, -1}, /* 99, 287, 50, 99 */
- {298600, 99500, 0x1c1, 0, CCLKCFG, -1, -1}, /* 199, 287, 99, 99 */
- {398100, 99500, 0x241, 0, CCLKCFG, -1, -1}, /* 199, 398, 99, 99 */
+ /* CPU run turbo PXbus SDRAM */
+ { 99500, -1, -1}, /* 99, 99, 50, 50 */
+ {199100, -1, -1}, /* 99, 199, 50, 99 */
+ {298500, -1, -1}, /* 99, 287, 50, 99 */
+ {298600, -1, -1}, /* 199, 287, 99, 99 */
+ {398100, -1, -1}, /* 199, 398, 99, 99 */
};
#define NUM_PXA25x_RUN_FREQS ARRAY_SIZE(pxa255_run_freqs)
@@ -122,47 +106,14 @@ static unsigned int pxa255_turbo_table;
module_param(pxa255_turbo_table, uint, 0);
MODULE_PARM_DESC(pxa255_turbo_table, "Selects the frequency table (0 = run table, !0 = turbo table)");
-/*
- * PXA270 definitions
- *
- * For the PXA27x:
- * Control variables are A, L, 2N for CCCR; B, HT, T for CLKCFG.
- *
- * A = 0 => memory controller clock from table 3-7,
- * A = 1 => memory controller clock = system bus clock
- * Run mode frequency = 13 MHz * L
- * Turbo mode frequency = 13 MHz * L * N
- * System bus frequency = 13 MHz * L / (B + 1)
- *
- * In CCCR:
- * A = 1
- * L = 16 oscillator to run mode ratio
- * 2N = 6 2 * (turbo mode to run mode ratio)
- *
- * In CCLKCFG:
- * B = 1 Fast bus mode
- * HT = 0 Half-Turbo mode
- * T = 1 Turbo mode
- *
- * For now, just support some of the combinations in table 3-7 of
- * PXA27x Processor Family Developer's Manual to simplify frequency
- * change sequences.
- */
-#define PXA27x_CCCR(A, L, N2) (A << 25 | N2 << 7 | L)
-#define CCLKCFG2(B, HT, T) \
- (CCLKCFG_FCS | \
- ((B) ? CCLKCFG_FASTBUS : 0) | \
- ((HT) ? CCLKCFG_HALFTURBO : 0) | \
- ((T) ? CCLKCFG_TURBO : 0))
-
static struct pxa_freqs pxa27x_freqs[] = {
- {104000, 104000, PXA27x_CCCR(1, 8, 2), 0, CCLKCFG2(1, 0, 1), 900000, 1705000 },
- {156000, 104000, PXA27x_CCCR(1, 8, 3), 0, CCLKCFG2(1, 0, 1), 1000000, 1705000 },
- {208000, 208000, PXA27x_CCCR(0, 16, 2), 1, CCLKCFG2(0, 0, 1), 1180000, 1705000 },
- {312000, 208000, PXA27x_CCCR(1, 16, 3), 1, CCLKCFG2(1, 0, 1), 1250000, 1705000 },
- {416000, 208000, PXA27x_CCCR(1, 16, 4), 1, CCLKCFG2(1, 0, 1), 1350000, 1705000 },
- {520000, 208000, PXA27x_CCCR(1, 16, 5), 1, CCLKCFG2(1, 0, 1), 1450000, 1705000 },
- {624000, 208000, PXA27x_CCCR(1, 16, 6), 1, CCLKCFG2(1, 0, 1), 1550000, 1705000 }
+ {104000, 900000, 1705000 },
+ {156000, 1000000, 1705000 },
+ {208000, 1180000, 1705000 },
+ {312000, 1250000, 1705000 },
+ {416000, 1350000, 1705000 },
+ {520000, 1450000, 1705000 },
+ {624000, 1550000, 1705000 }
};
#define NUM_PXA27x_FREQS ARRAY_SIZE(pxa27x_freqs)
@@ -241,51 +192,29 @@ static void pxa27x_guess_max_freq(void)
}
}
-static void init_sdram_rows(void)
-{
- uint32_t mdcnfg = __raw_readl(MDCNFG);
- unsigned int drac2 = 0, drac0 = 0;
-
- if (mdcnfg & (MDCNFG_DE2 | MDCNFG_DE3))
- drac2 = MDCNFG_DRAC2(mdcnfg);
-
- if (mdcnfg & (MDCNFG_DE0 | MDCNFG_DE1))
- drac0 = MDCNFG_DRAC0(mdcnfg);
-
- sdram_rows = 1 << (11 + max(drac0, drac2));
-}
-
-static u32 mdrefr_dri(unsigned int freq)
-{
- u32 interval = freq * SDRAM_TREF / sdram_rows;
-
- return (interval - (cpu_is_pxa27x() ? 31 : 0)) / 32;
-}
-
static unsigned int pxa_cpufreq_get(unsigned int cpu)
{
- return get_clk_frequency_khz(0);
+ struct pxa_cpufreq_data *data = cpufreq_get_driver_data();
+
+ return (unsigned int) clk_get_rate(data->clk_core) / 1000;
}
static int pxa_set_target(struct cpufreq_policy *policy, unsigned int idx)
{
struct cpufreq_frequency_table *pxa_freqs_table;
const struct pxa_freqs *pxa_freq_settings;
- unsigned long flags;
- unsigned int new_freq_cpu, new_freq_mem;
- unsigned int unused, preset_mdrefr, postset_mdrefr, cclkcfg;
+ struct pxa_cpufreq_data *data = cpufreq_get_driver_data();
+ unsigned int new_freq_cpu;
int ret = 0;
/* Get the current policy */
find_freq_tables(&pxa_freqs_table, &pxa_freq_settings);
new_freq_cpu = pxa_freq_settings[idx].khz;
- new_freq_mem = pxa_freq_settings[idx].membus;
if (freq_debug)
- pr_debug("Changing CPU frequency to %d Mhz, (SDRAM %d Mhz)\n",
- new_freq_cpu / 1000, (pxa_freq_settings[idx].div2) ?
- (new_freq_mem / 2000) : (new_freq_mem / 1000));
+ pr_debug("Changing CPU frequency from %d Mhz to %d Mhz\n",
+ policy->cur / 1000, new_freq_cpu / 1000);
if (vcc_core && new_freq_cpu > policy->cur) {
ret = pxa_cpufreq_change_voltage(&pxa_freq_settings[idx]);
@@ -293,53 +222,7 @@ static int pxa_set_target(struct cpufreq_policy *policy, unsigned int idx)
return ret;
}
- /* Calculate the next MDREFR. If we're slowing down the SDRAM clock
- * we need to preset the smaller DRI before the change. If we're
- * speeding up we need to set the larger DRI value after the change.
- */
- preset_mdrefr = postset_mdrefr = __raw_readl(MDREFR);
- if ((preset_mdrefr & MDREFR_DRI_MASK) > mdrefr_dri(new_freq_mem)) {
- preset_mdrefr = (preset_mdrefr & ~MDREFR_DRI_MASK);
- preset_mdrefr |= mdrefr_dri(new_freq_mem);
- }
- postset_mdrefr =
- (postset_mdrefr & ~MDREFR_DRI_MASK) | mdrefr_dri(new_freq_mem);
-
- /* If we're dividing the memory clock by two for the SDRAM clock, this
- * must be set prior to the change. Clearing the divide must be done
- * after the change.
- */
- if (pxa_freq_settings[idx].div2) {
- preset_mdrefr |= MDREFR_DB2_MASK;
- postset_mdrefr |= MDREFR_DB2_MASK;
- } else {
- postset_mdrefr &= ~MDREFR_DB2_MASK;
- }
-
- local_irq_save(flags);
-
- /* Set new the CCCR and prepare CCLKCFG */
- writel(pxa_freq_settings[idx].cccr, CCCR);
- cclkcfg = pxa_freq_settings[idx].cclkcfg;
-
- asm volatile(" \n\
- ldr r4, [%1] /* load MDREFR */ \n\
- b 2f \n\
- .align 5 \n\
-1: \n\
- str %3, [%1] /* preset the MDREFR */ \n\
- mcr p14, 0, %2, c6, c0, 0 /* set CCLKCFG[FCS] */ \n\
- str %4, [%1] /* postset the MDREFR */ \n\
- \n\
- b 3f \n\
-2: b 1b \n\
-3: nop \n\
- "
- : "=&r" (unused)
- : "r" (MDREFR), "r" (cclkcfg),
- "r" (preset_mdrefr), "r" (postset_mdrefr)
- : "r4", "r5");
- local_irq_restore(flags);
+ clk_set_rate(data->clk_core, new_freq_cpu * 1000);
/*
* Even if voltage setting fails, we don't report it, as the frequency
@@ -369,8 +252,6 @@ static int pxa_cpufreq_init(struct cpufreq_policy *policy)
pxa_cpufreq_init_voltages();
- init_sdram_rows();
-
/* set default policy and cpuinfo */
policy->cpuinfo.transition_latency = 1000; /* FIXME: 1 ms, assumed */
@@ -429,11 +310,17 @@ static struct cpufreq_driver pxa_cpufreq_driver = {
.init = pxa_cpufreq_init,
.get = pxa_cpufreq_get,
.name = "PXA2xx",
+ .driver_data = &pxa_cpufreq_data,
};
static int __init pxa_cpu_init(void)
{
int ret = -ENODEV;
+
+ pxa_cpufreq_data.clk_core = clk_get_sys(NULL, "core");
+ if (IS_ERR(pxa_cpufreq_data.clk_core))
+ return PTR_ERR(pxa_cpufreq_data.clk_core);
+
if (cpu_is_pxa25x() || cpu_is_pxa27x())
ret = cpufreq_register_driver(&pxa_cpufreq_driver);
return ret;
diff --git a/drivers/cpufreq/scpi-cpufreq.c b/drivers/cpufreq/scpi-cpufreq.c
index 8de2364b5995..05d299052c5c 100644
--- a/drivers/cpufreq/scpi-cpufreq.c
+++ b/drivers/cpufreq/scpi-cpufreq.c
@@ -53,7 +53,7 @@ static int scpi_init_opp_table(const struct cpumask *cpumask)
return ret;
}
-static struct cpufreq_arm_bL_ops scpi_cpufreq_ops = {
+static const struct cpufreq_arm_bL_ops scpi_cpufreq_ops = {
.name = "scpi",
.get_transition_latency = scpi_get_transition_latency,
.init_opp_table = scpi_init_opp_table,
diff --git a/drivers/cpufreq/spear-cpufreq.c b/drivers/cpufreq/spear-cpufreq.c
index 4894924a3ca2..195f27f9c1cb 100644
--- a/drivers/cpufreq/spear-cpufreq.c
+++ b/drivers/cpufreq/spear-cpufreq.c
@@ -177,7 +177,7 @@ static int spear_cpufreq_probe(struct platform_device *pdev)
np = of_cpu_device_node_get(0);
if (!np) {
- pr_err("No cpu node found");
+ pr_err("No cpu node found\n");
return -ENODEV;
}
@@ -187,7 +187,7 @@ static int spear_cpufreq_probe(struct platform_device *pdev)
prop = of_find_property(np, "cpufreq_tbl", NULL);
if (!prop || !prop->value) {
- pr_err("Invalid cpufreq_tbl");
+ pr_err("Invalid cpufreq_tbl\n");
ret = -ENODEV;
goto out_put_node;
}
diff --git a/drivers/cpufreq/speedstep-lib.c b/drivers/cpufreq/speedstep-lib.c
index ccab452a4ef5..8085ec9000d1 100644
--- a/drivers/cpufreq/speedstep-lib.c
+++ b/drivers/cpufreq/speedstep-lib.c
@@ -367,7 +367,7 @@ unsigned int speedstep_detect_processor(void)
} else
return SPEEDSTEP_CPU_PIII_C;
}
-
+ /* fall through */
default:
return 0;
}
diff --git a/drivers/cpufreq/ti-cpufreq.c b/drivers/cpufreq/ti-cpufreq.c
index 4bf47de6101f..923317f03b4b 100644
--- a/drivers/cpufreq/ti-cpufreq.c
+++ b/drivers/cpufreq/ti-cpufreq.c
@@ -205,6 +205,7 @@ static int ti_cpufreq_init(void)
np = of_find_node_by_path("/");
match = of_match_node(ti_cpufreq_of_match, np);
+ of_node_put(np);
if (!match)
return -ENODEV;
@@ -217,7 +218,8 @@ static int ti_cpufreq_init(void)
opp_data->cpu_dev = get_cpu_device(0);
if (!opp_data->cpu_dev) {
pr_err("%s: Failed to get device for CPU0\n", __func__);
- return -ENODEV;
+ ret = -ENODEV;
+ goto free_opp_data;
}
opp_data->opp_node = dev_pm_opp_of_get_opp_desc_node(opp_data->cpu_dev);
@@ -262,6 +264,8 @@ register_cpufreq_dt:
fail_put_node:
of_node_put(opp_data->opp_node);
+free_opp_data:
+ kfree(opp_data);
return ret;
}
diff --git a/drivers/cpufreq/vexpress-spc-cpufreq.c b/drivers/cpufreq/vexpress-spc-cpufreq.c
index 87e5bdc5ec74..53237289e606 100644
--- a/drivers/cpufreq/vexpress-spc-cpufreq.c
+++ b/drivers/cpufreq/vexpress-spc-cpufreq.c
@@ -42,7 +42,7 @@ static int ve_spc_get_transition_latency(struct device *cpu_dev)
return 1000000; /* 1 ms */
}
-static struct cpufreq_arm_bL_ops ve_spc_cpufreq_ops = {
+static const struct cpufreq_arm_bL_ops ve_spc_cpufreq_ops = {
.name = "vexpress-spc",
.get_transition_latency = ve_spc_get_transition_latency,
.init_opp_table = ve_spc_init_opp_table,
diff --git a/drivers/cpuidle/cpuidle-arm.c b/drivers/cpuidle/cpuidle-arm.c
index 52a75053ee03..ddee1b601b89 100644
--- a/drivers/cpuidle/cpuidle-arm.c
+++ b/drivers/cpuidle/cpuidle-arm.c
@@ -72,12 +72,94 @@ static const struct of_device_id arm_idle_state_match[] __initconst = {
};
/*
- * arm_idle_init
+ * arm_idle_init_cpu
*
* Registers the arm specific cpuidle driver with the cpuidle
* framework. It relies on core code to parse the idle states
* and initialize them using driver data structures accordingly.
*/
+static int __init arm_idle_init_cpu(int cpu)
+{
+ int ret;
+ struct cpuidle_driver *drv;
+ struct cpuidle_device *dev;
+
+ drv = kmemdup(&arm_idle_driver, sizeof(*drv), GFP_KERNEL);
+ if (!drv)
+ return -ENOMEM;
+
+ drv->cpumask = (struct cpumask *)cpumask_of(cpu);
+
+ /*
+ * Initialize idle states data, starting at index 1. This
+ * driver is DT only, if no DT idle states are detected (ret
+ * == 0) let the driver initialization fail accordingly since
+ * there is no reason to initialize the idle driver if only
+ * wfi is supported.
+ */
+ ret = dt_init_idle_driver(drv, arm_idle_state_match, 1);
+ if (ret <= 0) {
+ ret = ret ? : -ENODEV;
+ goto out_kfree_drv;
+ }
+
+ ret = cpuidle_register_driver(drv);
+ if (ret) {
+ pr_err("Failed to register cpuidle driver\n");
+ goto out_kfree_drv;
+ }
+
+ /*
+ * Call arch CPU operations in order to initialize
+ * idle states suspend back-end specific data
+ */
+ ret = arm_cpuidle_init(cpu);
+
+ /*
+ * Skip the cpuidle device initialization if the reported
+ * failure is a HW misconfiguration/breakage (-ENXIO).
+ */
+ if (ret == -ENXIO)
+ return 0;
+
+ if (ret) {
+ pr_err("CPU %d failed to init idle CPU ops\n", cpu);
+ goto out_unregister_drv;
+ }
+
+ dev = kzalloc(sizeof(*dev), GFP_KERNEL);
+ if (!dev) {
+ pr_err("Failed to allocate cpuidle device\n");
+ ret = -ENOMEM;
+ goto out_unregister_drv;
+ }
+ dev->cpu = cpu;
+
+ ret = cpuidle_register_device(dev);
+ if (ret) {
+ pr_err("Failed to register cpuidle device for CPU %d\n",
+ cpu);
+ goto out_kfree_dev;
+ }
+
+ return 0;
+
+out_kfree_dev:
+ kfree(dev);
+out_unregister_drv:
+ cpuidle_unregister_driver(drv);
+out_kfree_drv:
+ kfree(drv);
+ return ret;
+}
+
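The function above is a textbook instance of the kernel's goto-unwind idiom; reduced to a standalone skeleton (init_one() is hypothetical; malloc() and the fail_at markers stand in for the cpuidle allocation and registration calls):

#include <stdio.h>
#include <stdlib.h>

/* Each failure jumps to the label that releases exactly what has been
 * acquired so far, in reverse order; fail_at selects a failure point. */
static int init_one(int fail_at)
{
	char *drv, *dev;
	int ret = 0;

	drv = malloc(16);		/* cf. kmemdup() of the driver */
	if (!drv)
		return -1;

	if (fail_at == 1) {		/* cf. cpuidle_register_driver() */
		ret = -1;
		goto out_kfree_drv;
	}

	dev = malloc(16);		/* cf. kzalloc() of the device */
	if (!dev) {
		ret = -1;
		goto out_kfree_drv;
	}

	if (fail_at == 2) {		/* cf. cpuidle_register_device() */
		ret = -1;
		goto out_kfree_dev;
	}

	free(dev);			/* demo only: avoid leaking */
	free(drv);
	return 0;

out_kfree_dev:
	free(dev);
out_kfree_drv:
	free(drv);
	return ret;
}

int main(void)
{
	printf("%d %d %d\n", init_one(0), init_one(1), init_one(2));
	return 0;
}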
+/*
+ * arm_idle_init - Initializes arm cpuidle driver
+ *
+ * Initializes the arm cpuidle driver for all CPUs; if any CPU
+ * fails to register, roll back and cancel the registration of
+ * all CPUs already set up.
+ */
static int __init arm_idle_init(void)
{
int cpu, ret;
@@ -85,79 +167,20 @@ static int __init arm_idle_init(void)
struct cpuidle_device *dev;
for_each_possible_cpu(cpu) {
-
- drv = kmemdup(&arm_idle_driver, sizeof(*drv), GFP_KERNEL);
- if (!drv) {
- ret = -ENOMEM;
- goto out_fail;
- }
-
- drv->cpumask = (struct cpumask *)cpumask_of(cpu);
-
- /*
- * Initialize idle states data, starting at index 1. This
- * driver is DT only, if no DT idle states are detected (ret
- * == 0) let the driver initialization fail accordingly since
- * there is no reason to initialize the idle driver if only
- * wfi is supported.
- */
- ret = dt_init_idle_driver(drv, arm_idle_state_match, 1);
- if (ret <= 0) {
- ret = ret ? : -ENODEV;
- goto init_fail;
- }
-
- ret = cpuidle_register_driver(drv);
- if (ret) {
- pr_err("Failed to register cpuidle driver\n");
- goto init_fail;
- }
-
- /*
- * Call arch CPU operations in order to initialize
- * idle states suspend back-end specific data
- */
- ret = arm_cpuidle_init(cpu);
-
- /*
- * Skip the cpuidle device initialization if the reported
- * failure is a HW misconfiguration/breakage (-ENXIO).
- */
- if (ret == -ENXIO)
- continue;
-
- if (ret) {
- pr_err("CPU %d failed to init idle CPU ops\n", cpu);
- goto out_fail;
- }
-
- dev = kzalloc(sizeof(*dev), GFP_KERNEL);
- if (!dev) {
- pr_err("Failed to allocate cpuidle device\n");
- ret = -ENOMEM;
+ ret = arm_idle_init_cpu(cpu);
+ if (ret)
goto out_fail;
- }
- dev->cpu = cpu;
-
- ret = cpuidle_register_device(dev);
- if (ret) {
- pr_err("Failed to register cpuidle device for CPU %d\n",
- cpu);
- kfree(dev);
- goto out_fail;
- }
}
return 0;
-init_fail:
- kfree(drv);
+
out_fail:
while (--cpu >= 0) {
dev = per_cpu(cpuidle_devices, cpu);
+ drv = cpuidle_get_cpu_driver(dev);
cpuidle_unregister_device(dev);
- kfree(dev);
- drv = cpuidle_get_driver();
cpuidle_unregister_driver(drv);
+ kfree(dev);
kfree(drv);
}
diff --git a/drivers/cpuidle/cpuidle.c b/drivers/cpuidle/cpuidle.c
index 484cc8909d5c..68a16827f45f 100644
--- a/drivers/cpuidle/cpuidle.c
+++ b/drivers/cpuidle/cpuidle.c
@@ -208,6 +208,7 @@ int cpuidle_enter_state(struct cpuidle_device *dev, struct cpuidle_driver *drv,
return -EBUSY;
}
target_state = &drv->states[index];
+ broadcast = false;
}
/* Take note of the planned idle state. */
@@ -387,9 +388,12 @@ int cpuidle_enable_device(struct cpuidle_device *dev)
if (dev->enabled)
return 0;
+ if (!cpuidle_curr_governor)
+ return -EIO;
+
drv = cpuidle_get_cpu_driver(dev);
- if (!drv || !cpuidle_curr_governor)
+ if (!drv)
return -EIO;
if (!dev->registered)
@@ -399,9 +403,11 @@ int cpuidle_enable_device(struct cpuidle_device *dev)
if (ret)
return ret;
- if (cpuidle_curr_governor->enable &&
- (ret = cpuidle_curr_governor->enable(drv, dev)))
- goto fail_sysfs;
+ if (cpuidle_curr_governor->enable) {
+ ret = cpuidle_curr_governor->enable(drv, dev);
+ if (ret)
+ goto fail_sysfs;
+ }
smp_wmb();
diff --git a/drivers/cpuidle/governors/ladder.c b/drivers/cpuidle/governors/ladder.c
index ce1a2ffffb2a..1ad8745fd6d6 100644
--- a/drivers/cpuidle/governors/ladder.c
+++ b/drivers/cpuidle/governors/ladder.c
@@ -17,6 +17,7 @@
#include <linux/pm_qos.h>
#include <linux/jiffies.h>
#include <linux/tick.h>
+#include <linux/cpu.h>
#include <asm/io.h>
#include <linux/uaccess.h>
@@ -67,10 +68,16 @@ static int ladder_select_state(struct cpuidle_driver *drv,
struct cpuidle_device *dev)
{
struct ladder_device *ldev = this_cpu_ptr(&ladder_devices);
+ struct device *device = get_cpu_device(dev->cpu);
struct ladder_device_state *last_state;
int last_residency, last_idx = ldev->last_state_idx;
int first_idx = drv->states[0].flags & CPUIDLE_FLAG_POLLING ? 1 : 0;
int latency_req = pm_qos_request(PM_QOS_CPU_DMA_LATENCY);
+ int resume_latency = dev_pm_qos_raw_read_value(device);
+
+ if (resume_latency < latency_req &&
+ resume_latency != PM_QOS_RESUME_LATENCY_NO_CONSTRAINT)
+ latency_req = resume_latency;
/* Special case when user has set very strict latency requirement */
if (unlikely(latency_req == 0)) {
diff --git a/drivers/cpuidle/governors/menu.c b/drivers/cpuidle/governors/menu.c
index 48eaf2879228..aa390404e85f 100644
--- a/drivers/cpuidle/governors/menu.c
+++ b/drivers/cpuidle/governors/menu.c
@@ -298,8 +298,8 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev)
data->needs_update = 0;
}
- /* resume_latency is 0 means no restriction */
- if (resume_latency && resume_latency < latency_req)
+ if (resume_latency < latency_req &&
+ resume_latency != PM_QOS_RESUME_LATENCY_NO_CONSTRAINT)
latency_req = resume_latency;
/* Special case when user has set very strict latency requirement */
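Both governors now apply the same rule; a standalone sketch of it, with NO_CONSTRAINT standing in for PM_QOS_RESUME_LATENCY_NO_CONSTRAINT and effective_latency_req() a hypothetical reduction of the code above:

#include <stdio.h>

#define NO_CONSTRAINT 0x7fffffff	/* sentinel stand-in */

/* A per-CPU-device resume-latency value tightens the global
 * CPU_DMA_LATENCY requirement unless it is the "no constraint"
 * sentinel; a value of 0 now means "no idle state at all". */
static int effective_latency_req(int latency_req, int resume_latency)
{
	if (resume_latency < latency_req &&
	    resume_latency != NO_CONSTRAINT)
		return resume_latency;
	return latency_req;
}

int main(void)
{
	printf("%d\n", effective_latency_req(200, 50));		/* 50 */
	printf("%d\n", effective_latency_req(200, NO_CONSTRAINT)); /* 200 */
	printf("%d\n", effective_latency_req(200, 0));		/* 0 */
	return 0;
}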
diff --git a/drivers/devfreq/devfreq.c b/drivers/devfreq/devfreq.c
index a1c4ee818614..78fb496ecb4e 100644
--- a/drivers/devfreq/devfreq.c
+++ b/drivers/devfreq/devfreq.c
@@ -28,6 +28,9 @@
#include <linux/of.h>
#include "governor.h"
+#define MAX(a, b) ((a) > (b) ? (a) : (b))
+#define MIN(a, b) ((a) < (b) ? (a) : (b))
+
static struct class *devfreq_class;
/*
@@ -69,6 +72,34 @@ static struct devfreq *find_device_devfreq(struct device *dev)
return ERR_PTR(-ENODEV);
}
+static unsigned long find_available_min_freq(struct devfreq *devfreq)
+{
+ struct dev_pm_opp *opp;
+ unsigned long min_freq = 0;
+
+ opp = dev_pm_opp_find_freq_ceil(devfreq->dev.parent, &min_freq);
+ if (IS_ERR(opp))
+ min_freq = 0;
+ else
+ dev_pm_opp_put(opp);
+
+ return min_freq;
+}
+
+static unsigned long find_available_max_freq(struct devfreq *devfreq)
+{
+ struct dev_pm_opp *opp;
+ unsigned long max_freq = ULONG_MAX;
+
+ opp = dev_pm_opp_find_freq_floor(devfreq->dev.parent, &max_freq);
+ if (IS_ERR(opp))
+ max_freq = 0;
+ else
+ dev_pm_opp_put(opp);
+
+ return max_freq;
+}
+
/**
* devfreq_get_freq_level() - Lookup freq_table for the frequency
* @devfreq: the devfreq instance
@@ -85,11 +116,7 @@ static int devfreq_get_freq_level(struct devfreq *devfreq, unsigned long freq)
return -EINVAL;
}
-/**
- * devfreq_set_freq_table() - Initialize freq_table for the frequency
- * @devfreq: the devfreq instance
- */
-static void devfreq_set_freq_table(struct devfreq *devfreq)
+static int set_freq_table(struct devfreq *devfreq)
{
struct devfreq_dev_profile *profile = devfreq->profile;
struct dev_pm_opp *opp;
@@ -99,7 +126,7 @@ static void devfreq_set_freq_table(struct devfreq *devfreq)
/* Initialize the freq_table from OPP table */
count = dev_pm_opp_get_opp_count(devfreq->dev.parent);
if (count <= 0)
- return;
+ return -EINVAL;
profile->max_state = count;
profile->freq_table = devm_kcalloc(devfreq->dev.parent,
@@ -108,7 +135,7 @@ static void devfreq_set_freq_table(struct devfreq *devfreq)
GFP_KERNEL);
if (!profile->freq_table) {
profile->max_state = 0;
- return;
+ return -ENOMEM;
}
for (i = 0, freq = 0; i < profile->max_state; i++, freq++) {
@@ -116,11 +143,13 @@ static void devfreq_set_freq_table(struct devfreq *devfreq)
if (IS_ERR(opp)) {
devm_kfree(devfreq->dev.parent, profile->freq_table);
profile->max_state = 0;
- return;
+ return PTR_ERR(opp);
}
dev_pm_opp_put(opp);
profile->freq_table[i] = freq;
}
+
+ return 0;
}
/**
@@ -227,7 +256,7 @@ static int devfreq_notify_transition(struct devfreq *devfreq,
int update_devfreq(struct devfreq *devfreq)
{
struct devfreq_freqs freqs;
- unsigned long freq, cur_freq;
+ unsigned long freq, cur_freq, min_freq, max_freq;
int err = 0;
u32 flags = 0;
@@ -245,19 +274,21 @@ int update_devfreq(struct devfreq *devfreq)
return err;
/*
- * Adjust the frequency with user freq and QoS.
+ * Adjust the frequency with user freq, QoS and available freq.
*
* List from the highest priority
* max_freq
* min_freq
*/
+ max_freq = MIN(devfreq->scaling_max_freq, devfreq->max_freq);
+ min_freq = MAX(devfreq->scaling_min_freq, devfreq->min_freq);
- if (devfreq->min_freq && freq < devfreq->min_freq) {
- freq = devfreq->min_freq;
+ if (min_freq && freq < min_freq) {
+ freq = min_freq;
flags &= ~DEVFREQ_FLAG_LEAST_UPPER_BOUND; /* Use GLB */
}
- if (devfreq->max_freq && freq > devfreq->max_freq) {
- freq = devfreq->max_freq;
+ if (max_freq && freq > max_freq) {
+ freq = max_freq;
flags |= DEVFREQ_FLAG_LEAST_UPPER_BOUND; /* Use LUB */
}
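A standalone sketch of the clamping order above, showing why the max limit wins over the min limit when the two conflict (clamp_freq(), MAXF() and MINF() are hypothetical stand-ins for the logic and macros in update_devfreq()):

#include <stdio.h>

#define MAXF(a, b) ((a) > (b) ? (a) : (b))
#define MINF(a, b) ((a) < (b) ? (a) : (b))

/* User limits combine with OPP-derived scaling limits; max is applied
 * after min, so when the two conflict the max limit wins. */
static unsigned long clamp_freq(unsigned long freq,
				unsigned long user_min, unsigned long user_max,
				unsigned long scal_min, unsigned long scal_max)
{
	unsigned long min_freq = MAXF(scal_min, user_min);
	unsigned long max_freq = MINF(scal_max, user_max);

	if (min_freq && freq < min_freq)
		freq = min_freq;
	if (max_freq && freq > max_freq)
		freq = max_freq;
	return freq;
}

int main(void)
{
	/* User asks for at least 800 MHz, hardware tops out at 600 MHz:
	 * the result is 600000 kHz, not 800000 kHz. */
	printf("%lu\n", clamp_freq(400000, 800000, ~0UL, 100000, 600000));
	return 0;
}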
@@ -280,10 +311,9 @@ int update_devfreq(struct devfreq *devfreq)
freqs.new = freq;
devfreq_notify_transition(devfreq, &freqs, DEVFREQ_POSTCHANGE);
- if (devfreq->profile->freq_table)
- if (devfreq_update_status(devfreq, freq))
- dev_err(&devfreq->dev,
- "Couldn't update frequency transition information.\n");
+ if (devfreq_update_status(devfreq, freq))
+ dev_err(&devfreq->dev,
+ "Couldn't update frequency transition information.\n");
devfreq->previous_freq = freq;
return err;
@@ -466,6 +496,19 @@ static int devfreq_notifier_call(struct notifier_block *nb, unsigned long type,
int ret;
mutex_lock(&devfreq->lock);
+
+ devfreq->scaling_min_freq = find_available_min_freq(devfreq);
+ if (!devfreq->scaling_min_freq) {
+ mutex_unlock(&devfreq->lock);
+ return -EINVAL;
+ }
+
+ devfreq->scaling_max_freq = find_available_max_freq(devfreq);
+ if (!devfreq->scaling_max_freq) {
+ mutex_unlock(&devfreq->lock);
+ return -EINVAL;
+ }
+
ret = update_devfreq(devfreq);
mutex_unlock(&devfreq->lock);
@@ -555,10 +598,28 @@ struct devfreq *devfreq_add_device(struct device *dev,
if (!devfreq->profile->max_state && !devfreq->profile->freq_table) {
mutex_unlock(&devfreq->lock);
- devfreq_set_freq_table(devfreq);
+ err = set_freq_table(devfreq);
+ if (err < 0)
+ goto err_out;
mutex_lock(&devfreq->lock);
}
+ devfreq->min_freq = find_available_min_freq(devfreq);
+ if (!devfreq->min_freq) {
+ mutex_unlock(&devfreq->lock);
+ err = -EINVAL;
+ goto err_dev;
+ }
+ devfreq->scaling_min_freq = devfreq->min_freq;
+
+ devfreq->max_freq = find_available_max_freq(devfreq);
+ if (!devfreq->max_freq) {
+ mutex_unlock(&devfreq->lock);
+ err = -EINVAL;
+ goto err_dev;
+ }
+ devfreq->scaling_max_freq = devfreq->max_freq;
+
dev_set_name(&devfreq->dev, "devfreq%d",
atomic_inc_return(&devfreq_no));
err = device_register(&devfreq->dev);
@@ -1082,6 +1143,14 @@ unlock:
return ret;
}
+static ssize_t min_freq_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ struct devfreq *df = to_devfreq(dev);
+
+ return sprintf(buf, "%lu\n", MAX(df->scaling_min_freq, df->min_freq));
+}
+
static ssize_t max_freq_store(struct device *dev, struct device_attribute *attr,
const char *buf, size_t count)
{
@@ -1108,17 +1177,15 @@ unlock:
mutex_unlock(&df->lock);
return ret;
}
+static DEVICE_ATTR_RW(min_freq);
-#define show_one(name) \
-static ssize_t name##_show \
-(struct device *dev, struct device_attribute *attr, char *buf) \
-{ \
- return sprintf(buf, "%lu\n", to_devfreq(dev)->name); \
-}
-show_one(min_freq);
-show_one(max_freq);
+static ssize_t max_freq_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ struct devfreq *df = to_devfreq(dev);
-static DEVICE_ATTR_RW(min_freq);
+ return sprintf(buf, "%lu\n", MIN(df->scaling_max_freq, df->max_freq));
+}
static DEVICE_ATTR_RW(max_freq);
static ssize_t available_frequencies_show(struct device *d,
@@ -1126,22 +1193,16 @@ static ssize_t available_frequencies_show(struct device *d,
char *buf)
{
struct devfreq *df = to_devfreq(d);
- struct device *dev = df->dev.parent;
- struct dev_pm_opp *opp;
ssize_t count = 0;
- unsigned long freq = 0;
+ int i;
- do {
- opp = dev_pm_opp_find_freq_ceil(dev, &freq);
- if (IS_ERR(opp))
- break;
+ mutex_lock(&df->lock);
- dev_pm_opp_put(opp);
+ for (i = 0; i < df->profile->max_state; i++)
count += scnprintf(&buf[count], (PAGE_SIZE - count - 2),
- "%lu ", freq);
- freq++;
- } while (1);
+ "%lu ", df->profile->freq_table[i]);
+ mutex_unlock(&df->lock);
/* Truncate the trailing space */
if (count)
count--;
diff --git a/drivers/devfreq/exynos-bus.c b/drivers/devfreq/exynos-bus.c
index 49f68929e024..c25658b26598 100644
--- a/drivers/devfreq/exynos-bus.c
+++ b/drivers/devfreq/exynos-bus.c
@@ -436,7 +436,8 @@ static int exynos_bus_probe(struct platform_device *pdev)
ondemand_data->downdifferential = 5;
/* Add devfreq device to monitor and handle the exynos bus */
- bus->devfreq = devm_devfreq_add_device(dev, profile, "simple_ondemand",
+ bus->devfreq = devm_devfreq_add_device(dev, profile,
+ DEVFREQ_GOV_SIMPLE_ONDEMAND,
ondemand_data);
if (IS_ERR(bus->devfreq)) {
dev_err(dev, "failed to add devfreq device\n");
@@ -488,7 +489,7 @@ passive:
passive_data->parent = parent_devfreq;
/* Add devfreq device for exynos bus with passive governor */
- bus->devfreq = devm_devfreq_add_device(dev, profile, "passive",
+ bus->devfreq = devm_devfreq_add_device(dev, profile, DEVFREQ_GOV_PASSIVE,
passive_data);
if (IS_ERR(bus->devfreq)) {
dev_err(dev,
diff --git a/drivers/devfreq/governor_passive.c b/drivers/devfreq/governor_passive.c
index 673ad8cc9a1d..3bc29acbd54e 100644
--- a/drivers/devfreq/governor_passive.c
+++ b/drivers/devfreq/governor_passive.c
@@ -183,7 +183,7 @@ static int devfreq_passive_event_handler(struct devfreq *devfreq,
}
static struct devfreq_governor devfreq_passive = {
- .name = "passive",
+ .name = DEVFREQ_GOV_PASSIVE,
.immutable = 1,
.get_target_freq = devfreq_passive_get_target_freq,
.event_handler = devfreq_passive_event_handler,
diff --git a/drivers/devfreq/governor_performance.c b/drivers/devfreq/governor_performance.c
index c72f942f30a8..4d23ecfbd948 100644
--- a/drivers/devfreq/governor_performance.c
+++ b/drivers/devfreq/governor_performance.c
@@ -42,7 +42,7 @@ static int devfreq_performance_handler(struct devfreq *devfreq,
}
static struct devfreq_governor devfreq_performance = {
- .name = "performance",
+ .name = DEVFREQ_GOV_PERFORMANCE,
.get_target_freq = devfreq_performance_func,
.event_handler = devfreq_performance_handler,
};
diff --git a/drivers/devfreq/governor_powersave.c b/drivers/devfreq/governor_powersave.c
index 0c6bed567e6d..0c42f23249ef 100644
--- a/drivers/devfreq/governor_powersave.c
+++ b/drivers/devfreq/governor_powersave.c
@@ -39,7 +39,7 @@ static int devfreq_powersave_handler(struct devfreq *devfreq,
}
static struct devfreq_governor devfreq_powersave = {
- .name = "powersave",
+ .name = DEVFREQ_GOV_POWERSAVE,
.get_target_freq = devfreq_powersave_func,
.event_handler = devfreq_powersave_handler,
};
diff --git a/drivers/devfreq/governor_simpleondemand.c b/drivers/devfreq/governor_simpleondemand.c
index ae72ba5e78df..28e0f2de7100 100644
--- a/drivers/devfreq/governor_simpleondemand.c
+++ b/drivers/devfreq/governor_simpleondemand.c
@@ -125,7 +125,7 @@ static int devfreq_simple_ondemand_handler(struct devfreq *devfreq,
}
static struct devfreq_governor devfreq_simple_ondemand = {
- .name = "simple_ondemand",
+ .name = DEVFREQ_GOV_SIMPLE_ONDEMAND,
.get_target_freq = devfreq_simple_ondemand_func,
.event_handler = devfreq_simple_ondemand_handler,
};
diff --git a/drivers/devfreq/governor_userspace.c b/drivers/devfreq/governor_userspace.c
index 77028c27593c..080607c3f34d 100644
--- a/drivers/devfreq/governor_userspace.c
+++ b/drivers/devfreq/governor_userspace.c
@@ -87,7 +87,7 @@ static struct attribute *dev_entries[] = {
NULL,
};
static const struct attribute_group dev_attr_group = {
- .name = "userspace",
+ .name = DEVFREQ_GOV_USERSPACE,
.attrs = dev_entries,
};
diff --git a/drivers/devfreq/rk3399_dmc.c b/drivers/devfreq/rk3399_dmc.c
index 1b89ebbad02c..5dfbfa3cc878 100644
--- a/drivers/devfreq/rk3399_dmc.c
+++ b/drivers/devfreq/rk3399_dmc.c
@@ -431,7 +431,7 @@ static int rk3399_dmcfreq_probe(struct platform_device *pdev)
data->devfreq = devm_devfreq_add_device(dev,
&rk3399_devfreq_dmc_profile,
- "simple_ondemand",
+ DEVFREQ_GOV_SIMPLE_ONDEMAND,
&data->ondemand_data);
if (IS_ERR(data->devfreq))
return PTR_ERR(data->devfreq);
diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index 9f45cfeae775..f124de3a0668 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -1304,7 +1304,7 @@ int i915_driver_load(struct pci_dev *pdev, const struct pci_device_id *ent)
* because the HDA driver may require us to enable the audio power
* domain during system suspend.
*/
- pdev->dev_flags |= PCI_DEV_FLAGS_NEEDS_RESUME;
+ dev_pm_set_driver_flags(&pdev->dev, DPM_FLAG_NEVER_SKIP);
ret = i915_driver_init_early(dev_priv, ent);
if (ret < 0)
diff --git a/drivers/idle/intel_idle.c b/drivers/idle/intel_idle.c
index f0b06b14e782..b2ccce5fb071 100644
--- a/drivers/idle/intel_idle.c
+++ b/drivers/idle/intel_idle.c
@@ -913,10 +913,9 @@ static __cpuidle int intel_idle(struct cpuidle_device *dev,
struct cpuidle_state *state = &drv->states[index];
unsigned long eax = flg2MWAIT(state->flags);
unsigned int cstate;
+ bool uninitialized_var(tick);
int cpu = smp_processor_id();
- cstate = (((eax) >> MWAIT_SUBSTATE_SIZE) & MWAIT_CSTATE_MASK) + 1;
-
/*
* leave_mm() to avoid costly and often unnecessary wakeups
* for flushing the user TLB's associated with the active mm.
@@ -924,12 +923,19 @@ static __cpuidle int intel_idle(struct cpuidle_device *dev,
if (state->flags & CPUIDLE_FLAG_TLB_FLUSHED)
leave_mm(cpu);
- if (!(lapic_timer_reliable_states & (1 << (cstate))))
- tick_broadcast_enter();
+ if (!static_cpu_has(X86_FEATURE_ARAT)) {
+ cstate = (((eax) >> MWAIT_SUBSTATE_SIZE) &
+ MWAIT_CSTATE_MASK) + 1;
+ tick = false;
+ if (!(lapic_timer_reliable_states & (1 << (cstate)))) {
+ tick = true;
+ tick_broadcast_enter();
+ }
+ }
mwait_idle_with_hints(eax, ecx);
- if (!(lapic_timer_reliable_states & (1 << (cstate))))
+ if (!static_cpu_has(X86_FEATURE_ARAT) && tick)
tick_broadcast_exit();
return index;
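The cstate computation this hunk makes conditional decodes the MWAIT hint; a standalone sketch of the arithmetic, with the same bit-layout constants (hint_to_cstate() is a hypothetical helper):

#include <stdio.h>

#define MWAIT_SUBSTATE_SIZE	4
#define MWAIT_CSTATE_MASK	0xf

/* An MWAIT hint encodes (cstate - 1) in bits [7:4] and the substate in
 * bits [3:0]; shifting out the substate and adding 1 recovers the
 * C-state number used to index lapic_timer_reliable_states. */
static unsigned int hint_to_cstate(unsigned long eax)
{
	return ((eax >> MWAIT_SUBSTATE_SIZE) & MWAIT_CSTATE_MASK) + 1;
}

int main(void)
{
	printf("C%u\n", hint_to_cstate(0x20));	/* hint 0x20 -> C3 */
	return 0;
}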
@@ -1061,7 +1067,7 @@ static const struct idle_cpu idle_cpu_dnv = {
};
#define ICPU(model, cpu) \
- { X86_VENDOR_INTEL, 6, model, X86_FEATURE_MWAIT, (unsigned long)&cpu }
+ { X86_VENDOR_INTEL, 6, model, X86_FEATURE_ANY, (unsigned long)&cpu }
static const struct x86_cpu_id intel_idle_ids[] __initconst = {
ICPU(INTEL_FAM6_NEHALEM_EP, idle_cpu_nehalem),
@@ -1125,6 +1131,11 @@ static int __init intel_idle_probe(void)
return -ENODEV;
}
+ if (!boot_cpu_has(X86_FEATURE_MWAIT)) {
+ pr_debug("Please enable MWAIT in BIOS SETUP\n");
+ return -ENODEV;
+ }
+
if (boot_cpu_data.cpuid_level < CPUID_MWAIT_LEAF)
return -ENODEV;
diff --git a/drivers/misc/mei/pci-me.c b/drivers/misc/mei/pci-me.c
index 78b3172c8e6e..f4f17552c9b8 100644
--- a/drivers/misc/mei/pci-me.c
+++ b/drivers/misc/mei/pci-me.c
@@ -225,7 +225,7 @@ static int mei_me_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
* MEI needs to resume from runtime suspend mode
* in order to perform the link reset flow upon system suspend.
*/
- pdev->dev_flags |= PCI_DEV_FLAGS_NEEDS_RESUME;
+ dev_pm_set_driver_flags(&pdev->dev, DPM_FLAG_NEVER_SKIP);
/*
* ME maps runtime suspend/resume to D0i states,
diff --git a/drivers/misc/mei/pci-txe.c b/drivers/misc/mei/pci-txe.c
index 0566f9bfa7de..e1b909123fb0 100644
--- a/drivers/misc/mei/pci-txe.c
+++ b/drivers/misc/mei/pci-txe.c
@@ -141,7 +141,7 @@ static int mei_txe_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
* MEI needs to resume from runtime suspend mode
* in order to perform the link reset flow upon system suspend.
*/
- pdev->dev_flags |= PCI_DEV_FLAGS_NEEDS_RESUME;
+ dev_pm_set_driver_flags(&pdev->dev, DPM_FLAG_NEVER_SKIP);
/*
* TXE maps runtime suspend/resume to its own power gating states,
diff --git a/drivers/opp/Kconfig b/drivers/opp/Kconfig
new file mode 100644
index 000000000000..a7fbb93f302c
--- /dev/null
+++ b/drivers/opp/Kconfig
@@ -0,0 +1,13 @@
+config PM_OPP
+ bool
+ select SRCU
+ ---help---
+ SOCs have a standard set of tuples consisting of frequency and
+ voltage pairs that the device will support per voltage domain. This
+ is called Operating Performance Point or OPP. The actual definitions
+ of OPP vary over silicon within the same family of devices.
+
+ The OPP layer organizes the data internally using device pointers
+ representing individual voltage domains and provides SOC
+ implementations with a ready-to-use framework to manage OPPs.
+ For more information, read <file:Documentation/power/opp.txt>
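
To make the help text concrete, here is a minimal sketch of an OPP consumer; the frequencies, voltages and the foo_init_opps() name are invented for illustration:

#include <linux/pm_opp.h>

static int foo_init_opps(struct device *dev)
{
	int ret;

	/* Two made-up static OPPs: 400 MHz @ 950 mV and 800 MHz @ 1.1 V. */
	ret = dev_pm_opp_add(dev, 400000000, 950000);
	if (ret)
		return ret;

	ret = dev_pm_opp_add(dev, 800000000, 1100000);
	if (ret)
		return ret;

	/* Switch the device to the OPP matching 800 MHz. */
	return dev_pm_opp_set_rate(dev, 800000000);
}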
diff --git a/drivers/base/power/opp/Makefile b/drivers/opp/Makefile
index e70ceb406fe9..e70ceb406fe9 100644
--- a/drivers/base/power/opp/Makefile
+++ b/drivers/opp/Makefile
diff --git a/drivers/base/power/opp/core.c b/drivers/opp/core.c
index a6de32530693..92fa94a6dcc1 100644
--- a/drivers/base/power/opp/core.c
+++ b/drivers/opp/core.c
@@ -19,6 +19,7 @@
#include <linux/slab.h>
#include <linux/device.h>
#include <linux/export.h>
+#include <linux/pm_domain.h>
#include <linux/regulator/consumer.h>
#include "opp.h"
@@ -296,7 +297,7 @@ int dev_pm_opp_get_opp_count(struct device *dev)
opp_table = _find_opp_table(dev);
if (IS_ERR(opp_table)) {
count = PTR_ERR(opp_table);
- dev_err(dev, "%s: OPP table not found (%d)\n",
+ dev_dbg(dev, "%s: OPP table not found (%d)\n",
__func__, count);
return count;
}
@@ -535,6 +536,44 @@ _generic_set_opp_clk_only(struct device *dev, struct clk *clk,
return ret;
}
+static inline int
+_generic_set_opp_domain(struct device *dev, struct clk *clk,
+ unsigned long old_freq, unsigned long freq,
+ unsigned int old_pstate, unsigned int new_pstate)
+{
+ int ret;
+
+ /* Scaling up? Scale domain performance state before frequency */
+ if (freq > old_freq) {
+ ret = dev_pm_genpd_set_performance_state(dev, new_pstate);
+ if (ret)
+ return ret;
+ }
+
+ ret = _generic_set_opp_clk_only(dev, clk, old_freq, freq);
+ if (ret)
+ goto restore_domain_state;
+
+ /* Scaling down? Scale domain performance state after frequency */
+ if (freq < old_freq) {
+ ret = dev_pm_genpd_set_performance_state(dev, new_pstate);
+ if (ret)
+ goto restore_freq;
+ }
+
+ return 0;
+
+restore_freq:
+ if (_generic_set_opp_clk_only(dev, clk, freq, old_freq))
+ dev_err(dev, "%s: failed to restore old-freq (%lu Hz)\n",
+ __func__, old_freq);
+restore_domain_state:
+ if (freq > old_freq)
+ dev_pm_genpd_set_performance_state(dev, old_pstate);
+
+ return ret;
+}
+
static int _generic_set_opp_regulator(const struct opp_table *opp_table,
struct device *dev,
unsigned long old_freq,
@@ -653,7 +692,16 @@ int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq)
/* Only frequency scaling */
if (!opp_table->regulators) {
- ret = _generic_set_opp_clk_only(dev, clk, old_freq, freq);
+ /*
+ * We don't support devices with both regulators and
+ * domain performance states for now.
+ */
+ if (opp_table->genpd_performance_state)
+ ret = _generic_set_opp_domain(dev, clk, old_freq, freq,
+ IS_ERR(old_opp) ? 0 : old_opp->pstate,
+ opp->pstate);
+ else
+ ret = _generic_set_opp_clk_only(dev, clk, old_freq, freq);
} else if (!opp_table->set_opp) {
ret = _generic_set_opp_regulator(opp_table, dev, old_freq, freq,
IS_ERR(old_opp) ? NULL : old_opp->supplies,
@@ -988,6 +1036,9 @@ int _opp_add(struct device *dev, struct dev_pm_opp *new_opp,
return ret;
}
+ if (opp_table->get_pstate)
+ new_opp->pstate = opp_table->get_pstate(dev, new_opp->rate);
+
list_add(&new_opp->node, head);
mutex_unlock(&opp_table->lock);
@@ -1476,13 +1527,13 @@ err:
EXPORT_SYMBOL_GPL(dev_pm_opp_register_set_opp_helper);
/**
- * dev_pm_opp_register_put_opp_helper() - Releases resources blocked for
+ * dev_pm_opp_unregister_set_opp_helper() - Releases resources blocked for
* set_opp helper
* @opp_table: OPP table returned from dev_pm_opp_register_set_opp_helper().
*
* Release resources blocked for platform specific set_opp helper.
*/
-void dev_pm_opp_register_put_opp_helper(struct opp_table *opp_table)
+void dev_pm_opp_unregister_set_opp_helper(struct opp_table *opp_table)
{
if (!opp_table->set_opp) {
pr_err("%s: Doesn't have custom set_opp helper set\n",
@@ -1497,7 +1548,82 @@ void dev_pm_opp_register_put_opp_helper(struct opp_table *opp_table)
dev_pm_opp_put_opp_table(opp_table);
}
-EXPORT_SYMBOL_GPL(dev_pm_opp_register_put_opp_helper);
+EXPORT_SYMBOL_GPL(dev_pm_opp_unregister_set_opp_helper);
+
+/**
+ * dev_pm_opp_register_get_pstate_helper() - Register get_pstate() helper.
+ * @dev: Device for which the helper is getting registered.
+ * @get_pstate: Helper.
+ *
+ * TODO: Remove this callback after the same information is available via Device
+ * Tree.
+ *
+ * This allows a platform to initialize the performance states of individual
+ * OPPs for its devices, until we get similar information directly from DT.
+ *
+ * This must be called before the OPPs are initialized for the device.
+ */
+struct opp_table *dev_pm_opp_register_get_pstate_helper(struct device *dev,
+ int (*get_pstate)(struct device *dev, unsigned long rate))
+{
+ struct opp_table *opp_table;
+ int ret;
+
+ if (!get_pstate)
+ return ERR_PTR(-EINVAL);
+
+ opp_table = dev_pm_opp_get_opp_table(dev);
+ if (!opp_table)
+ return ERR_PTR(-ENOMEM);
+
+ /* This should be called before OPPs are initialized */
+ if (WARN_ON(!list_empty(&opp_table->opp_list))) {
+ ret = -EBUSY;
+ goto err;
+ }
+
+ /* Already have genpd_performance_state set */
+ if (WARN_ON(opp_table->genpd_performance_state)) {
+ ret = -EBUSY;
+ goto err;
+ }
+
+ opp_table->genpd_performance_state = true;
+ opp_table->get_pstate = get_pstate;
+
+ return opp_table;
+
+err:
+ dev_pm_opp_put_opp_table(opp_table);
+
+ return ERR_PTR(ret);
+}
+EXPORT_SYMBOL_GPL(dev_pm_opp_register_get_pstate_helper);
+
+/**
+ * dev_pm_opp_unregister_get_pstate_helper() - Releases resources blocked for
+ * get_pstate() helper
+ * @opp_table: OPP table returned from dev_pm_opp_register_get_pstate_helper().
+ *
+ * Release resources blocked for platform specific get_pstate() helper.
+ */
+void dev_pm_opp_unregister_get_pstate_helper(struct opp_table *opp_table)
+{
+ if (!opp_table->genpd_performance_state) {
+ pr_err("%s: Doesn't have performance states set\n",
+ __func__);
+ return;
+ }
+
+ /* Make sure there are no concurrent readers while updating opp_table */
+ WARN_ON(!list_empty(&opp_table->opp_list));
+
+ opp_table->genpd_performance_state = false;
+ opp_table->get_pstate = NULL;
+
+ dev_pm_opp_put_opp_table(opp_table);
+}
+EXPORT_SYMBOL_GPL(dev_pm_opp_unregister_get_pstate_helper);
/**
* dev_pm_opp_add() - Add an OPP table from a table definition
@@ -1706,6 +1832,13 @@ void _dev_pm_opp_remove_table(struct opp_table *opp_table, struct device *dev,
if (remove_all || !opp->dynamic)
dev_pm_opp_put(opp);
}
+
+ /*
+ * The OPP table is getting removed, drop the performance state
+ * constraints.
+ */
+ if (opp_table->genpd_performance_state)
+ dev_pm_genpd_set_performance_state(dev, 0);
} else {
_remove_opp_dev(_find_opp_dev(dev, opp_table), opp_table);
}
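
A minimal sketch of how platform code might use the get_pstate() helper documented above, before any OPPs are added for the device; foo_get_pstate() and its rate-to-state mapping are hypothetical:

#include <linux/err.h>
#include <linux/pm_opp.h>

/* Made-up mapping: rates above 400 MHz need performance state 2. */
static int foo_get_pstate(struct device *dev, unsigned long rate)
{
	return rate > 400000000 ? 2 : 1;
}

static int foo_platform_init(struct device *dev)
{
	struct opp_table *opp_table;

	/* Must run before the OPPs are initialized for the device. */
	opp_table = dev_pm_opp_register_get_pstate_helper(dev, foo_get_pstate);
	if (IS_ERR(opp_table))
		return PTR_ERR(opp_table);

	return 0;
}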
diff --git a/drivers/base/power/opp/cpu.c b/drivers/opp/cpu.c
index 2d87bc1adf38..2d87bc1adf38 100644
--- a/drivers/base/power/opp/cpu.c
+++ b/drivers/opp/cpu.c
diff --git a/drivers/base/power/opp/debugfs.c b/drivers/opp/debugfs.c
index 81cf120fcf43..b03c03576a62 100644
--- a/drivers/base/power/opp/debugfs.c
+++ b/drivers/opp/debugfs.c
@@ -41,16 +41,15 @@ static bool opp_debug_create_supplies(struct dev_pm_opp *opp,
{
struct dentry *d;
int i;
- char *name;
for (i = 0; i < opp_table->regulator_count; i++) {
- name = kasprintf(GFP_KERNEL, "supply-%d", i);
+ char name[15];
+
+ snprintf(name, sizeof(name), "supply-%d", i);
/* Create per-opp directory */
d = debugfs_create_dir(name, pdentry);
- kfree(name);
-
if (!d)
return false;
@@ -100,6 +99,9 @@ int opp_debug_create_one(struct dev_pm_opp *opp, struct opp_table *opp_table)
if (!debugfs_create_bool("suspend", S_IRUGO, d, &opp->suspend))
return -ENOMEM;
+ if (!debugfs_create_u32("performance_state", S_IRUGO, d, &opp->pstate))
+ return -ENOMEM;
+
if (!debugfs_create_ulong("rate_hz", S_IRUGO, d, &opp->rate))
return -ENOMEM;
diff --git a/drivers/base/power/opp/of.c b/drivers/opp/of.c
index 0b718886479b..cb716aa2f44b 100644
--- a/drivers/base/power/opp/of.c
+++ b/drivers/opp/of.c
@@ -16,7 +16,7 @@
#include <linux/cpu.h>
#include <linux/errno.h>
#include <linux/device.h>
-#include <linux/of.h>
+#include <linux/of_device.h>
#include <linux/slab.h>
#include <linux/export.h>
@@ -397,6 +397,7 @@ static int _of_add_opp_table_v2(struct device *dev, struct device_node *opp_np)
dev_err(dev, "%s: Failed to add OPP, %d\n", __func__,
ret);
_dev_pm_opp_remove_table(opp_table, dev, false);
+ of_node_put(np);
goto put_opp_table;
}
}
@@ -603,7 +604,7 @@ int dev_pm_opp_of_get_sharing_cpus(struct device *cpu_dev,
if (cpu == cpu_dev->id)
continue;
- cpu_np = of_get_cpu_node(cpu, NULL);
+ cpu_np = of_cpu_device_node_get(cpu);
if (!cpu_np) {
dev_err(cpu_dev, "%s: failed to get cpu%d node\n",
__func__, cpu);
@@ -613,6 +614,7 @@ int dev_pm_opp_of_get_sharing_cpus(struct device *cpu_dev,
/* Get OPP descriptor node */
tmp_np = _opp_of_get_opp_desc_node(cpu_np);
+ of_node_put(cpu_np);
if (!tmp_np) {
pr_err("%pOF: Couldn't find opp node\n", cpu_np);
ret = -ENOENT;
diff --git a/drivers/base/power/opp/opp.h b/drivers/opp/opp.h
index 166eef990599..4d00061648a3 100644
--- a/drivers/base/power/opp/opp.h
+++ b/drivers/opp/opp.h
@@ -58,6 +58,7 @@ extern struct list_head opp_tables;
* @dynamic: not-created from static DT entries.
* @turbo: true if turbo (boost) OPP
* @suspend: true if suspend OPP
+ * @pstate: Device's power domain's performance state.
* @rate: Frequency in hertz
* @supplies: Power supplies voltage/current values
* @clock_latency_ns: Latency (in nanoseconds) of switching to this OPP's
@@ -76,6 +77,7 @@ struct dev_pm_opp {
bool dynamic;
bool turbo;
bool suspend;
+ unsigned int pstate;
unsigned long rate;
struct dev_pm_opp_supply *supplies;
@@ -135,8 +137,10 @@ enum opp_table_access {
* @clk: Device's clock handle
* @regulators: Supply regulators
* @regulator_count: Number of power supply regulators
+ * @genpd_performance_state: Device's power domain supports performance states.
* @set_opp: Platform specific set_opp callback
* @set_opp_data: Data to be passed to set_opp callback
+ * @get_pstate: Platform specific get_pstate callback
* @dentry: debugfs dentry pointer of the real device directory (not links).
* @dentry_name: Name of the real dentry.
*
@@ -170,9 +174,11 @@ struct opp_table {
struct clk *clk;
struct regulator **regulators;
unsigned int regulator_count;
+ bool genpd_performance_state;
int (*set_opp)(struct dev_pm_set_opp_data *data);
struct dev_pm_set_opp_data *set_opp_data;
+ int (*get_pstate)(struct device *dev, unsigned long rate);
#ifdef CONFIG_DEBUG_FS
struct dentry *dentry;
diff --git a/drivers/pci/pci-driver.c b/drivers/pci/pci-driver.c
index 11bd267fc137..07b8a9b385ab 100644
--- a/drivers/pci/pci-driver.c
+++ b/drivers/pci/pci-driver.c
@@ -680,17 +680,13 @@ static int pci_pm_prepare(struct device *dev)
{
struct device_driver *drv = dev->driver;
- /*
- * Devices having power.ignore_children set may still be necessary for
- * suspending their children in the next phase of device suspend.
- */
- if (dev->power.ignore_children)
- pm_runtime_resume(dev);
-
if (drv && drv->pm && drv->pm->prepare) {
int error = drv->pm->prepare(dev);
- if (error)
+ if (error < 0)
return error;
+
+ if (!error && dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_PREPARE))
+ return 0;
}
return pci_dev_keep_suspended(to_pci_dev(dev));
}
@@ -731,18 +727,25 @@ static int pci_pm_suspend(struct device *dev)
if (!pm) {
pci_pm_default_suspend(pci_dev);
- goto Fixup;
+ return 0;
}
/*
- * PCI devices suspended at run time need to be resumed at this point,
- * because in general it is necessary to reconfigure them for system
- * suspend. Namely, if the device is supposed to wake up the system
- * from the sleep state, we may need to reconfigure it for this purpose.
- * In turn, if the device is not supposed to wake up the system from the
- * sleep state, we'll have to prevent it from signaling wake-up.
+ * PCI devices suspended at run time may need to be resumed at this
+ * point, because in general it may be necessary to reconfigure them for
+ * system suspend. Namely, if the device is expected to wake up the
+ * system from the sleep state, it may have to be reconfigured for this
+ * purpose, or if the device is not expected to wake up the system from
+ * the sleep state, it should be prevented from signaling wakeup events
+ * going forward.
+ *
+ * Also if the driver of the device does not indicate that its system
+ * suspend callbacks can cope with runtime-suspended devices, it is
+ * better to resume the device from runtime suspend here.
*/
- pm_runtime_resume(dev);
+ if (!dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND) ||
+ !pci_dev_keep_suspended(pci_dev))
+ pm_runtime_resume(dev);
pci_dev->state_saved = false;
if (pm->suspend) {
@@ -762,17 +765,27 @@ static int pci_pm_suspend(struct device *dev)
}
}
- Fixup:
- pci_fixup_device(pci_fixup_suspend, pci_dev);
-
return 0;
}
+static int pci_pm_suspend_late(struct device *dev)
+{
+ if (dev_pm_smart_suspend_and_suspended(dev))
+ return 0;
+
+ pci_fixup_device(pci_fixup_suspend, to_pci_dev(dev));
+
+ return pm_generic_suspend_late(dev);
+}
+
static int pci_pm_suspend_noirq(struct device *dev)
{
struct pci_dev *pci_dev = to_pci_dev(dev);
const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
+ if (dev_pm_smart_suspend_and_suspended(dev))
+ return 0;
+
if (pci_has_legacy_pm_support(pci_dev))
return pci_legacy_suspend_late(dev, PMSG_SUSPEND);
@@ -805,6 +818,9 @@ static int pci_pm_suspend_noirq(struct device *dev)
pci_prepare_to_sleep(pci_dev);
}
+ dev_dbg(dev, "PCI PM: Suspend power state: %s\n",
+ pci_power_name(pci_dev->current_state));
+
pci_pm_set_unknown_state(pci_dev);
/*
@@ -831,6 +847,14 @@ static int pci_pm_resume_noirq(struct device *dev)
struct device_driver *drv = dev->driver;
int error = 0;
+ /*
+ * Devices with DPM_FLAG_SMART_SUSPEND may be left in runtime suspend
+ * during system suspend, so update their runtime PM status to "active"
+ * as they are going to be put into D0 shortly.
+ */
+ if (dev_pm_smart_suspend_and_suspended(dev))
+ pm_runtime_set_active(dev);
+
pci_pm_default_resume_early(pci_dev);
if (pci_has_legacy_pm_support(pci_dev))
@@ -873,6 +897,7 @@ static int pci_pm_resume(struct device *dev)
#else /* !CONFIG_SUSPEND */
#define pci_pm_suspend NULL
+#define pci_pm_suspend_late NULL
#define pci_pm_suspend_noirq NULL
#define pci_pm_resume NULL
#define pci_pm_resume_noirq NULL
@@ -907,7 +932,8 @@ static int pci_pm_freeze(struct device *dev)
* devices should not be touched during freeze/thaw transitions,
* however.
*/
- pm_runtime_resume(dev);
+ if (!dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND))
+ pm_runtime_resume(dev);
pci_dev->state_saved = false;
if (pm->freeze) {
@@ -919,17 +945,25 @@ static int pci_pm_freeze(struct device *dev)
return error;
}
- if (pcibios_pm_ops.freeze)
- return pcibios_pm_ops.freeze(dev);
-
return 0;
}
+static int pci_pm_freeze_late(struct device *dev)
+{
+ if (dev_pm_smart_suspend_and_suspended(dev))
+ return 0;
+
+ return pm_generic_freeze_late(dev);
+}
+
static int pci_pm_freeze_noirq(struct device *dev)
{
struct pci_dev *pci_dev = to_pci_dev(dev);
struct device_driver *drv = dev->driver;
+ if (dev_pm_smart_suspend_and_suspended(dev))
+ return 0;
+
if (pci_has_legacy_pm_support(pci_dev))
return pci_legacy_suspend_late(dev, PMSG_FREEZE);
@@ -959,6 +993,16 @@ static int pci_pm_thaw_noirq(struct device *dev)
struct device_driver *drv = dev->driver;
int error = 0;
+ /*
+ * If the device is in runtime suspend, the code below may not work
+ * correctly with it, so skip that code and make the PM core skip all of
+ * the subsequent "thaw" callbacks for the device.
+ */
+ if (dev_pm_smart_suspend_and_suspended(dev)) {
+ dev->power.direct_complete = true;
+ return 0;
+ }
+
if (pcibios_pm_ops.thaw_noirq) {
error = pcibios_pm_ops.thaw_noirq(dev);
if (error)
@@ -983,12 +1027,6 @@ static int pci_pm_thaw(struct device *dev)
const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
int error = 0;
- if (pcibios_pm_ops.thaw) {
- error = pcibios_pm_ops.thaw(dev);
- if (error)
- return error;
- }
-
if (pci_has_legacy_pm_support(pci_dev))
return pci_legacy_resume(dev);
@@ -1014,11 +1052,13 @@ static int pci_pm_poweroff(struct device *dev)
if (!pm) {
pci_pm_default_suspend(pci_dev);
- goto Fixup;
+ return 0;
}
/* The reason to do that is the same as in pci_pm_suspend(). */
- pm_runtime_resume(dev);
+ if (!dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND) ||
+ !pci_dev_keep_suspended(pci_dev))
+ pm_runtime_resume(dev);
pci_dev->state_saved = false;
if (pm->poweroff) {
@@ -1030,13 +1070,17 @@ static int pci_pm_poweroff(struct device *dev)
return error;
}
- Fixup:
- pci_fixup_device(pci_fixup_suspend, pci_dev);
+ return 0;
+}
- if (pcibios_pm_ops.poweroff)
- return pcibios_pm_ops.poweroff(dev);
+static int pci_pm_poweroff_late(struct device *dev)
+{
+ if (dev_pm_smart_suspend_and_suspended(dev))
+ return 0;
- return 0;
+ pci_fixup_device(pci_fixup_suspend, to_pci_dev(dev));
+
+ return pm_generic_poweroff_late(dev);
}
static int pci_pm_poweroff_noirq(struct device *dev)
@@ -1044,6 +1088,9 @@ static int pci_pm_poweroff_noirq(struct device *dev)
struct pci_dev *pci_dev = to_pci_dev(dev);
struct device_driver *drv = dev->driver;
+ if (dev_pm_smart_suspend_and_suspended(dev))
+ return 0;
+
if (pci_has_legacy_pm_support(to_pci_dev(dev)))
return pci_legacy_suspend_late(dev, PMSG_HIBERNATE);
@@ -1085,6 +1132,10 @@ static int pci_pm_restore_noirq(struct device *dev)
struct device_driver *drv = dev->driver;
int error = 0;
+ /* This is analogous to the pci_pm_resume_noirq() case. */
+ if (dev_pm_smart_suspend_and_suspended(dev))
+ pm_runtime_set_active(dev);
+
if (pcibios_pm_ops.restore_noirq) {
error = pcibios_pm_ops.restore_noirq(dev);
if (error)
@@ -1108,12 +1159,6 @@ static int pci_pm_restore(struct device *dev)
const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
int error = 0;
- if (pcibios_pm_ops.restore) {
- error = pcibios_pm_ops.restore(dev);
- if (error)
- return error;
- }
-
/*
* This is necessary for the hibernation error path in which restore is
* called without restoring the standard config registers of the device.
@@ -1139,10 +1184,12 @@ static int pci_pm_restore(struct device *dev)
#else /* !CONFIG_HIBERNATE_CALLBACKS */
#define pci_pm_freeze NULL
+#define pci_pm_freeze_late NULL
#define pci_pm_freeze_noirq NULL
#define pci_pm_thaw NULL
#define pci_pm_thaw_noirq NULL
#define pci_pm_poweroff NULL
+#define pci_pm_poweroff_late NULL
#define pci_pm_poweroff_noirq NULL
#define pci_pm_restore NULL
#define pci_pm_restore_noirq NULL
@@ -1258,10 +1305,13 @@ static const struct dev_pm_ops pci_dev_pm_ops = {
.prepare = pci_pm_prepare,
.complete = pci_pm_complete,
.suspend = pci_pm_suspend,
+ .suspend_late = pci_pm_suspend_late,
.resume = pci_pm_resume,
.freeze = pci_pm_freeze,
+ .freeze_late = pci_pm_freeze_late,
.thaw = pci_pm_thaw,
.poweroff = pci_pm_poweroff,
+ .poweroff_late = pci_pm_poweroff_late,
.restore = pci_pm_restore,
.suspend_noirq = pci_pm_suspend_noirq,
.resume_noirq = pci_pm_resume_noirq,
diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
index 6078dfc11b11..374f5686e2bc 100644
--- a/drivers/pci/pci.c
+++ b/drivers/pci/pci.c
@@ -2166,8 +2166,7 @@ bool pci_dev_keep_suspended(struct pci_dev *pci_dev)
if (!pm_runtime_suspended(dev)
|| pci_target_state(pci_dev, wakeup) != pci_dev->current_state
- || platform_pci_need_resume(pci_dev)
- || (pci_dev->dev_flags & PCI_DEV_FLAGS_NEEDS_RESUME))
+ || platform_pci_need_resume(pci_dev))
return false;
/*
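
For reference, a sketch of how a PCI driver would opt in to the flags these callbacks test, allowing the bus type to leave it runtime-suspended across system suspend; the baz_* names are placeholders, not taken from this series:

#include <linux/pci.h>
#include <linux/pm_runtime.h>

static int baz_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
{
	/*
	 * SMART_SUSPEND: the system suspend/resume callbacks can cope
	 * with a runtime-suspended device.
	 * SMART_PREPARE: consult the driver's ->prepare() result before
	 * using the direct-complete optimization.
	 */
	dev_pm_set_driver_flags(&pdev->dev,
				DPM_FLAG_SMART_SUSPEND |
				DPM_FLAG_SMART_PREPARE);

	return 0;
}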
diff --git a/drivers/power/avs/smartreflex.c b/drivers/power/avs/smartreflex.c
index 974fd684bab2..89bf4d6cb486 100644
--- a/drivers/power/avs/smartreflex.c
+++ b/drivers/power/avs/smartreflex.c
@@ -355,7 +355,7 @@ int sr_configure_errgen(struct omap_sr *sr)
u8 senp_shift, senn_shift;
if (!sr) {
- pr_warn("%s: NULL omap_sr from %pF\n",
+ pr_warn("%s: NULL omap_sr from %pS\n",
__func__, (void *)_RET_IP_);
return -EINVAL;
}
@@ -422,7 +422,7 @@ int sr_disable_errgen(struct omap_sr *sr)
u32 vpboundint_en, vpboundint_st;
if (!sr) {
- pr_warn("%s: NULL omap_sr from %pF\n",
+ pr_warn("%s: NULL omap_sr from %pS\n",
__func__, (void *)_RET_IP_);
return -EINVAL;
}
@@ -477,7 +477,7 @@ int sr_configure_minmax(struct omap_sr *sr)
u8 senp_shift, senn_shift;
if (!sr) {
- pr_warn("%s: NULL omap_sr from %pF\n",
+ pr_warn("%s: NULL omap_sr from %pS\n",
__func__, (void *)_RET_IP_);
return -EINVAL;
}
@@ -562,7 +562,7 @@ int sr_enable(struct omap_sr *sr, unsigned long volt)
int ret;
if (!sr) {
- pr_warn("%s: NULL omap_sr from %pF\n",
+ pr_warn("%s: NULL omap_sr from %pS\n",
__func__, (void *)_RET_IP_);
return -EINVAL;
}
@@ -614,7 +614,7 @@ int sr_enable(struct omap_sr *sr, unsigned long volt)
void sr_disable(struct omap_sr *sr)
{
if (!sr) {
- pr_warn("%s: NULL omap_sr from %pF\n",
+ pr_warn("%s: NULL omap_sr from %pS\n",
__func__, (void *)_RET_IP_);
return;
}
diff --git a/drivers/soc/mediatek/mtk-scpsys.c b/drivers/soc/mediatek/mtk-scpsys.c
index e1ce8b1b5090..e570b6af2e6f 100644
--- a/drivers/soc/mediatek/mtk-scpsys.c
+++ b/drivers/soc/mediatek/mtk-scpsys.c
@@ -361,17 +361,6 @@ out:
return ret;
}
-static bool scpsys_active_wakeup(struct device *dev)
-{
- struct generic_pm_domain *genpd;
- struct scp_domain *scpd;
-
- genpd = pd_to_genpd(dev->pm_domain);
- scpd = container_of(genpd, struct scp_domain, genpd);
-
- return scpd->data->active_wakeup;
-}
-
static void init_clks(struct platform_device *pdev, struct clk **clk)
{
int i;
@@ -466,7 +455,8 @@ static struct scp *init_scp(struct platform_device *pdev,
genpd->name = data->name;
genpd->power_off = scpsys_power_off;
genpd->power_on = scpsys_power_on;
- genpd->dev_ops.active_wakeup = scpsys_active_wakeup;
+ if (scpd->data->active_wakeup)
+ genpd->flags |= GENPD_FLAG_ACTIVE_WAKEUP;
}
return scp;
diff --git a/drivers/soc/rockchip/pm_domains.c b/drivers/soc/rockchip/pm_domains.c
index 40b75748835f..5c342167b9db 100644
--- a/drivers/soc/rockchip/pm_domains.c
+++ b/drivers/soc/rockchip/pm_domains.c
@@ -358,17 +358,6 @@ static void rockchip_pd_detach_dev(struct generic_pm_domain *genpd,
pm_clk_destroy(dev);
}
-static bool rockchip_active_wakeup(struct device *dev)
-{
- struct generic_pm_domain *genpd;
- struct rockchip_pm_domain *pd;
-
- genpd = pd_to_genpd(dev->pm_domain);
- pd = container_of(genpd, struct rockchip_pm_domain, genpd);
-
- return pd->info->active_wakeup;
-}
-
static int rockchip_pm_add_one_domain(struct rockchip_pmu *pmu,
struct device_node *node)
{
@@ -489,8 +478,9 @@ static int rockchip_pm_add_one_domain(struct rockchip_pmu *pmu,
pd->genpd.power_on = rockchip_pd_power_on;
pd->genpd.attach_dev = rockchip_pd_attach_dev;
pd->genpd.detach_dev = rockchip_pd_detach_dev;
- pd->genpd.dev_ops.active_wakeup = rockchip_active_wakeup;
pd->genpd.flags = GENPD_FLAG_PM_CLK;
+ if (pd_info->active_wakeup)
+ pd->genpd.flags |= GENPD_FLAG_ACTIVE_WAKEUP;
pm_genpd_init(&pd->genpd, NULL, false);
pmu->genpd_data.domains[id] = &pd->genpd;
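
Both conversions above follow the same pattern: instead of supplying a per-device active_wakeup() callback, a genpd provider now sets GENPD_FLAG_ACTIVE_WAKEUP on the domain itself. A minimal sketch with a made-up domain:

#include <linux/pm_domain.h>

static struct generic_pm_domain foo_pd = {
	.name = "foo-pd",
	/* Keep devices in this domain powered while they are wakeup-active. */
	.flags = GENPD_FLAG_ACTIVE_WAKEUP,
};

static int __init foo_pd_init(void)
{
	/* No governor; the domain starts powered on. */
	return pm_genpd_init(&foo_pd, NULL, false);
}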