2019-02-01  soc: qcom: rpmh: Avoid accessing freed memory from batch API  (Stephen Boyd; 1 file changed, -13/+21)
Using the batch API from the interconnect driver sometimes leads to a KASAN error due to an access to freed memory. This is easier to trigger with threadirqs on the kernel commandline.

  BUG: KASAN: use-after-free in rpmh_tx_done+0x114/0x12c
  Read of size 1 at addr fffffff51414ad84 by task irq/110-apps_rs/57

  CPU: 0 PID: 57 Comm: irq/110-apps_rs Tainted: G W 4.19.10 #72
  Call trace:
   dump_backtrace+0x0/0x2f8
   show_stack+0x20/0x2c
   __dump_stack+0x20/0x28
   dump_stack+0xcc/0x10c
   print_address_description+0x74/0x240
   kasan_report+0x250/0x26c
   __asan_report_load1_noabort+0x20/0x2c
   rpmh_tx_done+0x114/0x12c
   tcs_tx_done+0x450/0x768
   irq_forced_thread_fn+0x58/0x9c
   irq_thread+0x120/0x1dc
   kthread+0x248/0x260
   ret_from_fork+0x10/0x18

  Allocated by task 385:
   kasan_kmalloc+0xac/0x148
   __kmalloc+0x170/0x1e4
   rpmh_write_batch+0x174/0x540
   qcom_icc_set+0x8dc/0x9ac
   icc_set+0x288/0x2e8
   a6xx_gmu_stop+0x320/0x3c0
   a6xx_pm_suspend+0x108/0x124
   adreno_suspend+0x50/0x60
   pm_generic_runtime_suspend+0x60/0x78
   __rpm_callback+0x214/0x32c
   rpm_callback+0x54/0x184
   rpm_suspend+0x3f8/0xa90
   pm_runtime_work+0xb4/0x178
   process_one_work+0x544/0xbc0
   worker_thread+0x514/0x7d0
   kthread+0x248/0x260
   ret_from_fork+0x10/0x18

  Freed by task 385:
   __kasan_slab_free+0x12c/0x1e0
   kasan_slab_free+0x10/0x1c
   kfree+0x134/0x588
   rpmh_write_batch+0x49c/0x540
   qcom_icc_set+0x8dc/0x9ac
   icc_set+0x288/0x2e8
   a6xx_gmu_stop+0x320/0x3c0
   a6xx_pm_suspend+0x108/0x124
   adreno_suspend+0x50/0x60
  cr50_spi spi5.0: SPI transfer timed out
   pm_generic_runtime_suspend+0x60/0x78
   __rpm_callback+0x214/0x32c
   rpm_callback+0x54/0x184
   rpm_suspend+0x3f8/0xa90
   pm_runtime_work+0xb4/0x178
   process_one_work+0x544/0xbc0
   worker_thread+0x514/0x7d0
   kthread+0x248/0x260
   ret_from_fork+0x10/0x18

  The buggy address belongs to the object at fffffff51414ac80
   which belongs to the cache kmalloc-512 of size 512
  The buggy address is located 260 bytes inside of
   512-byte region [fffffff51414ac80, fffffff51414ae80)
  The buggy address belongs to the page:
  page:ffffffbfd4505200 count:1 mapcount:0 mapping:fffffff51e00c680 index:0x0 compound_mapcount: 0
  flags: 0x4000000000008100(slab|head)
  raw: 4000000000008100 ffffffbfd4529008 ffffffbfd44f9208 fffffff51e00c680
  raw: 0000000000000000 0000000000200020 00000001ffffffff 0000000000000000
  page dumped because: kasan: bad access detected

  Memory state around the buggy address:
   fffffff51414ac80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
   fffffff51414ad00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
  >fffffff51414ad80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                     ^
   fffffff51414ae00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
   fffffff51414ae80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc

The batch API sets the same completion for each rpmh message that's sent and then loops through all the messages and waits for that single completion declared on the stack to be completed before returning from the function and freeing the message structures. Unfortunately, some messages may still be in process and 'stuck' in the TCS. At some later point, the tcs_tx_done() interrupt will run and try to process messages that have already been freed at the end of rpmh_write_batch(). This will in turn access the 'needs_free' member of the rpmh_request structure and cause KASAN to complain.

Furthermore, if there's a message that's completed in rpmh_tx_done() and freed immediately after the complete() call is made we'll be racing with potentially freed memory when accessing the 'needs_free' member:

  CPU0                           CPU1
  ----                           ----
  rpmh_tx_done()
   complete(&compl)
                                 wait_for_completion(&compl)
                                 kfree(rpm_msg)
   if (rpm_msg->needs_free)
   <KASAN warning splat>

Let's fix this by allocating a chunk of completions for each message and waiting for all of them to be completed before returning from the batch API. Alternatively, we could wait for the last message in the batch, but that may be a more complicated change because it looks like tcs_tx_done() just iterates through the indices of the queue and completes each message instead of tracking the last inserted message and completing that first.

Fixes: c8790cb6da58 ("drivers: qcom: rpmh: add support for batch RPMH request")
Cc: Lina Iyer <ilina@codeaurora.org>
Cc: "Raju P.L.S.S.S.N" <rplsssn@codeaurora.org>
Cc: Matthias Kaehlcke <mka@chromium.org>
Cc: Evan Green <evgreen@chromium.org>
Cc: stable@vger.kernel.org
Reviewed-by: Lina Iyer <ilina@codeaurora.org>
Reviewed-by: Evan Green <evgreen@chromium.org>
Signed-off-by: Stephen Boyd <swboyd@chromium.org>
Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
Signed-off-by: Andy Gross <andy.gross@linaro.org>
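For illustration, a minimal sketch of the pattern the fix describes: one completion per message, all of which must fire before anything is freed. The names rpm_msgs, count and timeout are assumptions for the sketch, not the driver's actual code.

  /* Hypothetical sketch, not the rpmh driver source. */
  struct completion *compls;
  int i, ret = 0;

  compls = kcalloc(count, sizeof(*compls), GFP_KERNEL);
  if (!compls)
          return -ENOMEM;

  for (i = 0; i < count; i++) {
          init_completion(&compls[i]);
          rpm_msgs[i].completion = &compls[i];    /* each message signals its own */
  }

  /* ... submit every message to the TCS here ... */

  /* Wait for *all* of them; only then is it safe to free the request
   * structures, so a late tcs_tx_done() can never touch freed memory. */
  for (i = 0; i < count; i++)
          if (!wait_for_completion_timeout(&compls[i], timeout))
                  ret = -ETIMEDOUT;

  kfree(compls);
  return ret;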
2019-02-01  drivers: qcom: rpmh: avoid sending sleep/wake sets immediately  (Raju P.L.S.S.S.N; 1 file changed, -2/+1)
Fix the redundant call that sends the sleep and wake requests to the controller immediately. As per the patch [1], the sleep and wake request votes are cached in the rpmh controller and sent during rpmh_flush(). These requests need to be sent only when entering deeper system low-power modes or suspend.

[1] https://patchwork.kernel.org/patch/10477533/

Reviewed-by: Lina Iyer <ilina@codeaurora.org>
Reviewed-by: Evan Green <evgreen@chromium.org>
Signed-off-by: Raju P.L.S.S.S.N <rplsssn@codeaurora.org>
Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
Signed-off-by: Andy Gross <andy.gross@linaro.org>
2019-02-01  soc: qcom: rmtfs-mem: Make sysfs attributes world-readable  (Evan Green; 1 file changed, -3/+3)
In order to run an rmtfs daemon as an unprivileged user, that user would need access to the phys_addr and size sysfs attributes. Sharing these attributes with unprivileged users doesn't really leak anything sensitive, since if you have access to physical memory, the jig is up anyway. Make those attributes readable by all. Reviewed-by: Brian Norris <briannorris@chromium.org> Reviewed-by: Bjorn Andersson <bjorn.andersson@linaro.org> Signed-off-by: Evan Green <evgreen@chromium.org> Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org> Signed-off-by: Andy Gross <andy.gross@linaro.org>
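In sysfs terms the change boils down to the attribute mode. A hedged sketch of what a world-readable attribute looks like in a driver; the show callback and drvdata layout here are illustrative, not the actual rmtfs-mem source:

  /* Illustrative only: the substance of the change is the mode going
   * from 0400 (root-only) to 0444 (readable by any user). */
  static ssize_t phys_addr_show(struct device *dev,
                                struct device_attribute *attr, char *buf)
  {
          phys_addr_t addr = *(phys_addr_t *)dev_get_drvdata(dev); /* hypothetical */

          return sprintf(buf, "%pa\n", &addr);
  }
  static DEVICE_ATTR(phys_addr, 0444, phys_addr_show, NULL);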
2019-02-01  soc: qcom: rmtfs-mem: Add class to enable uevents  (Evan Green; 1 file changed, -5/+21)
Currently the qcom_rmtfs_memN devices are entirely invisible to the udev world. Add a class to the rmtfs device so that uevents fire when the device is added. Reviewed-by: Brian Norris <briannorris@chromium.org> Reviewed-by: Bjorn Andersson <bjorn.andersson@linaro.org> Signed-off-by: Evan Green <evgreen@chromium.org> Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org> Signed-off-by: Andy Gross <andy.gross@linaro.org>
2019-01-22  soc: qcom: update config dependencies for QCOM_RPMPD  (Rajendra Nayak; 1 file changed, -1/+1)
Since QCOM_RPMPD is bool and it depends on QCOM_SMD_RPM which is tristate, configurations such as arm64:allmodconfig result in

  CONFIG_QCOM_RPMPD=y
  CONFIG_QCOM_SMD_RPM=m

This in turn results in

  drivers/soc/qcom/rpmpd.o: In function `rpmpd_send_corner':
  rpmpd.c:(.text+0x10c): undefined reference to `qcom_rpm_smd_write'
  drivers/soc/qcom/rpmpd.o: In function `rpmpd_power_on':
  rpmpd.c:(.text+0x3b4): undefined reference to `qcom_rpm_smd_write'
  drivers/soc/qcom/rpmpd.o: In function `rpmpd_power_off':
  rpmpd.c:(.text+0x520): undefined reference to `qcom_rpm_smd_write'
  make: *** [vmlinux] Error 1

Fix it by making QCOM_RPMPD depend on QCOM_SMD_RPM=y.

Reported-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Rajendra Nayak <rnayak@codeaurora.org>
Signed-off-by: Andy Gross <andy.gross@linaro.org>
2019-01-22  soc: qcom: rpmpd: Drop family A RPM dependency  (Bjorn Andersson; 2 files changed, -7/+5)
The MFD_QCOM_RPM is the RPM in family A, but the rpmpd driver only implements support for SMD based devices. Drop the dependency and remove includes of the family A headers. No functional change. Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org> Reviewed-by: Rajendra Nayak <rnayak@codeaurora.org> Signed-off-by: Andy Gross <andy.gross@linaro.org>
2019-01-22  MAINTAINERS: update list of qcom drivers  (Amit Kucheria; 1 file changed, -7/+25)
Several drivers didn't have a specific maintainer (other than the subsystem maintainer). Add some generic regex patterns to capture most qcom drivers in the list of supported drivers. For the rest, add explicit filenames. Sort the entries, while we're at it. Signed-off-by: Amit Kucheria <amit.kucheria@linaro.org> Signed-off-by: Andy Gross <andy.gross@linaro.org>
2019-01-22  soc: qcom: rpmhpd: Mark mx as a parent for cx  (Rajendra Nayak; 1 file changed, -0/+11)
Specify the active + sleep and active-only MX power domains as the parents of the corresponding CX power domains. This will ensure that performance state requests on CX automatically generate equivalent requests on MX power domains. This is used to enforce a requirement that exists for various hardware blocks on SDM845 that MX performance state >= CX performance state for a given operating frequency. Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Reviewed-by: Stephen Boyd <swboyd@chromium.org> Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org> Signed-off-by: Rajendra Nayak <rnayak@codeaurora.org> Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org> Signed-off-by: Andy Gross <andy.gross@linaro.org>
2019-01-22  soc: qcom: rpmhpd: Add RPMh power domain driver  (Rajendra Nayak; 3 files changed, -0/+405)
The RPMh power domain driver aggregates the corner votes from various consumers for the ARC resources and communicates them to RPMh.

With RPMh we use two different numbering spaces for corners: one used by the clients to express their performance needs, and another used to communicate to the RPMh hardware. The clients express their performance requirements using a sparse numbering space which is mapped to meaningful levels like RET, SVS, NOMINAL, TURBO etc., which then get mapped to another number space between 0 and 15 that is communicated to RPMh. The sparse number space, also referred to as vlvl, is mapped to the continuous number space of 0 to 15, also referred to as hlvl, using command DB.

Some power domain clients could request a performance state only while the CPU is active, while others could request a certain performance state all the time regardless of the state of the CPU. We handle this by internally aggregating the votes from both types of clients and then sending the aggregated votes to RPMh.

There are also 3 different types of votes that are communicated to RPMh for every resource:

1. ACTIVE_ONLY: the requirement for the resource when the CPU is active
2. SLEEP: the requirement for the resource when the CPU is going to sleep
3. WAKE_ONLY: the requirement for the resource when the CPU is coming out of sleep to the active state

We add data for all power domains on the sdm845 SoC as part of the patch. The driver can be extended to support other SoCs which support RPMh.

Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
Reviewed-by: Stephen Boyd <swboyd@chromium.org>
Signed-off-by: Rajendra Nayak <rnayak@codeaurora.org>
Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
Signed-off-by: Andy Gross <andy.gross@linaro.org>
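From a consumer's point of view the corner votes go through the generic power-domain performance-state API. A hedged sketch of such a consumer; the level constants are assumed to come from the dt-bindings header that accompanies this driver series and should be treated as illustrative:

  /* Hypothetical consumer raising and lowering its corner vote.
   * Requires <linux/pm_domain.h>; 'dev' is attached to a CX/MX domain via DT. */
  int ret;

  ret = dev_pm_genpd_set_performance_state(dev, RPMH_REGULATOR_LEVEL_NOM);
  if (ret)
          return ret;

  /* ... high-performance use case runs here ... */

  return dev_pm_genpd_set_performance_state(dev, RPMH_REGULATOR_LEVEL_RETENTION);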
2019-01-22  soc: qcom: rpmpd: Add support for get/set performance state  (Rajendra Nayak; 1 file changed, -0/+35)
Add support for the .set_performance_state() and .opp_to_performance_state() callbacks in the rpmpd driver.

Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
Reviewed-by: Stephen Boyd <swboyd@chromium.org>
Signed-off-by: Rajendra Nayak <rnayak@codeaurora.org>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
Signed-off-by: Andy Gross <andy.gross@linaro.org>
2019-01-22  soc: qcom: rpmpd: Add a Power domain driver to model corners  (Rajendra Nayak; 3 files changed, -0/+292)
The Power domains for corners just pass the performance state set by the consumers to the RPM (Remote Power manager) which then takes care of setting the appropriate voltage on the corresponding rails to meet the performance needs. We add all power domain data needed on msm8996 here. This driver can easily be extended by adding data for other qualcomm SoCs as well. Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org> Acked-by: Rob Herring <robh@kernel.org> Reviewed-by: Stephen Boyd <swboyd@chromium.org> Signed-off-by: Rajendra Nayak <rnayak@codeaurora.org> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org> Signed-off-by: Andy Gross <andy.gross@linaro.org>
2019-01-22  dt-bindings: power: Add qcom rpm power domain driver bindings  (Rajendra Nayak; 2 files changed, -0/+184)
Add DT bindings to describe the rpm/rpmh power domains found on Qualcomm Technologies, Inc. SoCs. These power domains communicate a performance state to RPM/RPMh, which then translates it into corresponding voltage on a PMIC rail. Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org> Reviewed-by: Stephen Boyd <swboyd@chromium.org> Reviewed-by: Rob Herring <robh@kernel.org> Signed-off-by: Rajendra Nayak <rnayak@codeaurora.org> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org> Signed-off-by: Andy Gross <andy.gross@linaro.org>
2019-01-22  OPP: Add support for parsing the 'opp-level' property  (Rajendra Nayak; 4 files changed, -0/+29)
Now that the OPP bindings are updated to include an optional 'opp-level' property, add support to parse it from the device tree and store it as part of the dev_pm_opp structure. Also add and export a helper, 'dev_pm_opp_get_level()', that can be used to get the level value read from the device tree when present.

Reviewed-by: Stephen Boyd <swboyd@chromium.org>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rajendra Nayak <rnayak@codeaurora.org>
Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
Signed-off-by: Andy Gross <andy.gross@linaro.org>
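A hedged example of how a caller might consume the new helper once its OPP table has been parsed; the lookup frequency and variable names are illustrative:

  /* Requires <linux/pm_opp.h>.  Find the OPP at or above 'freq', then
   * read the DT 'opp-level' value stored for it. */
  struct dev_pm_opp *opp;
  unsigned long freq = 300000000;       /* illustrative */
  unsigned int level;

  opp = dev_pm_opp_find_freq_ceil(dev, &freq);
  if (IS_ERR(opp))
          return PTR_ERR(opp);

  level = dev_pm_opp_get_level(opp);
  dev_pm_opp_put(opp);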
2019-01-22  dt-bindings: opp: Introduce opp-level bindings  (Rajendra Nayak; 1 file changed, -0/+3)
Add opp-level as an additional property in the OPP node to describe the performance level of the device. On some SoCs (especially from Qualcomm and MediaTek) this value is communicated to a remote microprocessor by the CPU, which then takes some actions (like adjusting voltage values across various rails) based on the value passed. Reviewed-by: Stephen Boyd <swboyd@chromium.org> Reviewed-by: Douglas Anderson <dianders@chromium.org> Reviewed-by: Rob Herring <robh@kernel.org> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rajendra Nayak <rnayak@codeaurora.org> Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org> Signed-off-by: Andy Gross <andy.gross@linaro.org>
2019-01-10  qcom-scm: Include <linux/err.h> header  (Fabio Estevam; 1 file changed, -0/+1)
Since commit e6f6d63ed14c ("drm/msm: add headless gpu device for imx5") the DRM_MSM symbol can be selected by SOC_IMX5, causing the following error when building imx_v6_v7_defconfig:

  In file included from ../drivers/gpu/drm/msm/adreno/a5xx_gpu.c:17:0:
  ../include/linux/qcom_scm.h: In function 'qcom_scm_set_cold_boot_addr':
  ../include/linux/qcom_scm.h:73:10: error: 'ENODEV' undeclared (first use in this function)
     return -ENODEV;

Include the <linux/err.h> header file to fix this problem.

Reported-by: kernelci.org bot <bot@kernelci.org>
Fixes: e6f6d63ed14c ("drm/msm: add headless gpu device for imx5")
Signed-off-by: Fabio Estevam <festevam@gmail.com>
Reviewed-by: Bjorn Andersson <bjorn.andersson@linaro.org>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Andy Gross <andy.gross@linaro.org>
2019-01-06  Linux 5.0-rc1  (Linus Torvalds; 1 file changed, -3/+3)
2019-01-06  Change mincore() to count "mapped" pages rather than "cached" pages  (Linus Torvalds; 1 file changed, -81/+13)
The semantics of what "in core" means for the mincore() system call are somewhat unclear, but Linux has always (since 2.3.52, which is when mincore() was initially done) treated it as "page is available in page cache" rather than "page is mapped in the mapping".

The problem with that traditional semantic is that it exposes a lot of system cache state that it really probably shouldn't, and that users shouldn't really even care about.

So let's try to avoid that information leak by simply changing the semantics to be that mincore() counts actual mapped pages, not pages that might be cheaply mapped if they were faulted (note the "might be" part of the old semantics: being in the cache doesn't actually guarantee that you can access them without IO anyway, since things like network filesystems may have to revalidate the cache before use).

In many ways the old semantics were somewhat insane even aside from the information leak issue. From the very beginning (and that beginning is a long time ago: 2.3.52 was released in March 2000, I think), the code had a comment saying

  Later we can get more picky about what "in core" means precisely.

and this is that "later". Admittedly it is much later than is really comfortable.

NOTE! This is a real semantic change, and it is for example known to change the output of "fincore", since that program literally does an mmap without populating it, and then does "mincore()" on that mapping, which doesn't actually have any pages in it. I'm hoping that nobody actually has any workflow that cares, and the info leak is real.

We may have to do something different if it turns out that people have valid reasons to want the old semantics, and if we can limit the information leak sanely.

Cc: Kevin Easton <kevin@guarana.org>
Cc: Jiri Kosina <jikos@kernel.org>
Cc: Masatake YAMATO <yamato@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Michal Hocko <mhocko@suse.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
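For reference, a small userspace sketch of the call whose semantics change here; the file path is an arbitrary example:

  #define _DEFAULT_SOURCE
  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/mman.h>
  #include <unistd.h>

  int main(void)
  {
          int fd = open("/etc/hostname", O_RDONLY);   /* any small file */
          unsigned char vec[1];
          void *addr;

          if (fd < 0)
                  return 1;

          /* Map one page of the file but never fault it in. */
          addr = mmap(NULL, 4096, PROT_READ, MAP_PRIVATE, fd, 0);
          if (addr == MAP_FAILED)
                  return 1;

          /*
           * Old semantics: vec[0] & 1 could be 1 whenever the file data was
           * in the page cache, even though this mapping never faulted it in.
           * New semantics: it stays 0 until the page is actually mapped here.
           */
          if (mincore(addr, 4096, vec) == 0)
                  printf("page 0: %d\n", vec[0] & 1);

          munmap(addr, 4096);
          close(fd);
          return 0;
  }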
2019-01-06  Fix 'acccess_ok()' on alpha and SH  (Linus Torvalds; 2 files changed, -5/+10)
Commit 594cc251fdd0 ("make 'user_access_begin()' do 'access_ok()'") broke both alpha and SH booting in qemu, as noticed by Guenter Roeck.

It turns out that the bug wasn't actually in that commit itself (which would have been surprising: it was mostly a no-op), but in how the addition of access_ok() to the strncpy_from_user() and strnlen_user() functions now triggered the case where those functions would test the access of the very last byte of the user address space. The string functions actually did that user range test before too, but they did it manually by just comparing against user_addr_max(). But with user_access_begin() doing the check (using "access_ok()"), it now exposed problems in the architecture implementations of that function.

For example, on alpha, the access_ok() helper macro looked like this:

  #define __access_ok(addr, size) \
          ((get_fs().seg & (addr | size | (addr+size))) == 0)

and what it basically tests is if any of the high bits get set (the USER_DS masking value is 0xfffffc0000000000). And that's completely wrong for the "addr+size" check. Because it's off-by-one for the case where we check to the very end of the user address space, which is exactly what the strn*_user() functions do.

Why? Because "addr+size" will be exactly the size of the address space, so trying to access the last byte of the user address space will fail the __access_ok() check, even though it shouldn't. As a result, the user string accessor functions failed consistently - because they literally don't know how long the string is going to be, and the max access is going to be that last byte of the user address space.

Side note: that alpha macro is buggy for another reason too - it re-uses the arguments twice.

And SH has another version of almost the exact same bug:

  #define __addr_ok(addr) \
          ((unsigned long __force)(addr) < current_thread_info()->addr_limit.seg)

so far so good: yes, a user address must be below the limit. But then:

  #define __access_ok(addr, size) \
          (__addr_ok((addr) + (size)))

is wrong with the exact same off-by-one case: the case when "addr+size" is exactly _equal_ to the limit is actually perfectly fine (think "one byte access at the last address of the user address space").

The SH version is actually seriously buggy in another way: it doesn't actually check for overflow, even though it did copy the _comment_ that talks about overflow.

So it turns out that both SH and alpha actually have completely buggy implementations of access_ok(), but they happened to work in practice (although the SH overflow one is a serious serious security bug, not that anybody likely cares about SH security).

This fixes the problems by using a similar macro on both alpha and SH. It isn't trying to be clever, the end address is based on this logic:

  unsigned long __ao_end = __ao_a + __ao_b - !!__ao_b;

which basically says "add start and length, and then subtract one unless the length was zero". We can't subtract one for a zero length, or we'd just hit an underflow instead. For a lot of access_ok() users the length is a constant, so this isn't actually as expensive as it initially looks.

Reported-and-tested-by: Guenter Roeck <linux@roeck-us.net>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
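A minimal, self-contained illustration of that end-address logic in plain C (not the actual architecture macros; the limit value is made up):

  #include <assert.h>

  /* Last byte touched by an access of 'len' bytes starting at 'addr';
   * for len == 0 just return 'addr' so we never underflow. */
  static unsigned long access_end(unsigned long addr, unsigned long len)
  {
          return addr + len - !!len;
  }

  int main(void)
  {
          unsigned long limit = 0x1000;   /* illustrative user_addr_max() */

          /* A one-byte access at the very last user address is allowed ... */
          assert(access_end(0x0fff, 1) < limit);
          /* ... while an access that runs past the limit is rejected. */
          assert(!(access_end(0x0fff, 2) < limit));
          /* Zero-length accesses do not underflow. */
          assert(access_end(0x0fff, 0) == 0x0fff);
          return 0;
  }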
2019-01-06  fscrypt: add Adiantum support  (Eric Biggers; 7 files changed, -188/+468)
Add support for the Adiantum encryption mode to fscrypt. Adiantum is a tweakable, length-preserving encryption mode with security provably reducible to that of XChaCha12 and AES-256, subject to a security bound. It's also a true wide-block mode, unlike XTS. See the paper "Adiantum: length-preserving encryption for entry-level processors" (https://eprint.iacr.org/2018/720.pdf) for more details. Also see commit 059c2a4d8e16 ("crypto: adiantum - add Adiantum support"). On sufficiently long messages, Adiantum's bottlenecks are XChaCha12 and the NH hash function. These algorithms are fast even on processors without dedicated crypto instructions. Adiantum makes it feasible to enable storage encryption on low-end mobile devices that lack AES instructions; currently such devices are unencrypted. On ARM Cortex-A7, on 4096-byte messages Adiantum encryption is about 4 times faster than AES-256-XTS encryption; decryption is about 5 times faster. In fscrypt, Adiantum is suitable for encrypting both file contents and names. With filenames, it fixes a known weakness: when two filenames in a directory share a common prefix of >= 16 bytes, with CTS-CBC their encrypted filenames share a common prefix too, leaking information. Adiantum does not have this problem. Since Adiantum also accepts long tweaks (IVs), it's also safe to use the master key directly for Adiantum encryption rather than deriving per-file keys, provided that the per-file nonce is included in the IVs and the master key isn't used for any other encryption mode. This configuration saves memory and improves performance. A new fscrypt policy flag is added to allow users to opt-in to this configuration. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu>
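For orientation, roughly how a userspace tool might opt a directory into Adiantum with the new direct-key flag. The constant names FS_ENCRYPTION_MODE_ADIANTUM and FS_POLICY_FLAG_DIRECT_KEY are assumptions based on the uapi headers of this era and should be checked against <linux/fs.h> before relying on them:

  #include <fcntl.h>
  #include <string.h>
  #include <sys/ioctl.h>
  #include <linux/fs.h>   /* struct fscrypt_policy, FS_IOC_SET_ENCRYPTION_POLICY */

  static int set_adiantum_policy(int dirfd, const __u8 key_desc[8])
  {
          struct fscrypt_policy policy = { 0 };

          policy.version = 0;
          policy.contents_encryption_mode  = FS_ENCRYPTION_MODE_ADIANTUM;  /* assumed name */
          policy.filenames_encryption_mode = FS_ENCRYPTION_MODE_ADIANTUM;  /* assumed name */
          policy.flags = FS_POLICY_FLAG_DIRECT_KEY;                        /* assumed name */
          memcpy(policy.master_key_descriptor, key_desc, 8);

          return ioctl(dirfd, FS_IOC_SET_ENCRYPTION_POLICY, &policy);
  }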
2019-01-06  kconfig: rename generated .*conf-cfg to *conf-cfg  (Masahiro Yamada; 2 files changed, -18/+19)
Remove the dot-prefixing since it is just a matter of the .gitignore file. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2019-01-06  kbuild: remove unnecessary stubs for archheader and archscripts  (Masahiro Yamada; 1 file changed, -5/+1)
Make simply skips a missing rule when it is marked as .PHONY. Remove the dummy targets. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2019-01-06  kbuild: use assignment instead of define ... endef for filechk_* rules  (Masahiro Yamada; 6 files changed, -26/+12)
You do not have to use define ... endef for filechk_* rules. For simple cases, the use of assignment looks cleaner, IMHO.

I updated the usage in scripts/Kbuild.include in case somebody misunderstands 'define ... endef' to be a requirement.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
2019-01-06  arch: remove redundant UAPI generic-y defines  (Masahiro Yamada; 24 files changed, -408/+0)
Now that Kbuild automatically creates asm-generic wrappers for missing mandatory headers, it is redundant to list the same headers in generic-y and mandatory-y. Suggested-by: Sam Ravnborg <sam@ravnborg.org> Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Acked-by: Sam Ravnborg <sam@ravnborg.org>
2019-01-06  kbuild: generate asm-generic wrappers if mandatory headers are missing  (Masahiro Yamada; 3 files changed, -10/+10)
Some time ago, Sam pointed out a certain degree of overlap between generic-y and mandatory-y. (https://lkml.org/lkml/2017/7/10/121)

I tweaked the meaning of mandatory-y a little bit; now it defines the minimum set of ASM headers that all architectures must have. If an arch does not have a specific implementation of a mandatory header, Kbuild will let it fall back to the asm-generic one by automatically generating a wrapper. This will allow us to drop lots of redundant generic-y defines.

Previously, "mandatory" was used in the context of UAPI, but I guess this can be extended to kernel-space ASM headers.

Suggested-by: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Acked-by: Sam Ravnborg <sam@ravnborg.org>
2019-01-06  arch: remove stale comments "UAPI Header export list"  (Masahiro Yamada; 24 files changed, -25/+0)
These comments are leftovers of commit fcc8487d477a ("uapi: export all headers under uapi directories"). Prior to that commit, exported headers must be explicitly added to header-y. Now, all headers under the uapi/ directories are exported. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2019-01-06  riscv: remove redundant kernel-space generic-y  (Masahiro Yamada; 1 file changed, -25/+0)
This commit removes redundant generic-y defines in arch/riscv/include/asm/Kbuild. [1] It is redundant to define the same generic-y in both arch/$(ARCH)/include/asm/Kbuild and arch/$(ARCH)/include/uapi/asm/Kbuild. Remove the following generic-y: errno.h fcntl.h ioctl.h ioctls.h ipcbuf.h mman.h msgbuf.h param.h poll.h posix_types.h resource.h sembuf.h setup.h shmbuf.h signal.h socket.h sockios.h stat.h statfs.h swab.h termbits.h termios.h types.h [2] It is redundant to define generic-y when arch-specific implementation exists in arch/$(ARCH)/include/asm/*.h Remove the following generic-y: cacheflush.h module.h Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2019-01-06  kbuild: change filechk to surround the given command with { }  (Masahiro Yamada; 7 files changed, -12/+14)
filechk_* rules often consist of multiple 'echo' lines. They must be surrounded with { } or ( ) to work correctly. Otherwise, only the string from the last 'echo' would be written into the target. Let's take care of that in the 'filechk' in scripts/Kbuild.include to clean up filechk_* rules. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2019-01-06  kbuild: remove redundant target cleaning on failure  (Masahiro Yamada; 9 files changed, -25/+16)
Since commit 9c2af1c7377a ("kbuild: add .DELETE_ON_ERROR special target"), the target file is automatically deleted on failure. The boilerplate code

  ... || { rm -f $@; false; }

is unneeded.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2019-01-06  kbuild: clean up rule_dtc_dt_yaml  (Masahiro Yamada; 1 file changed, -2/+2)
Commit 3a2429e1faf4 ("kbuild: change if_changed_rule for multi-line recipe") and commit 4f0e3a57d6eb ("kbuild: Add support for DT binding schema checks") came in via different sub-systems. This is a follow-up cleanup. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2019-01-06  kbuild: remove UIMAGE_IN and UIMAGE_OUT  (Masahiro Yamada; 1 file changed, -4/+2)
The only/last user of UIMAGE_IN/OUT was removed by commit 4722a3e6b716 ("microblaze: fix multiple bugs in arch/microblaze/boot/Makefile"). The input and output should always be $< and $@. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2019-01-06  jump_label: move 'asm goto' support test to Kconfig  (Masahiro Yamada; 42 files changed, -119/+65)
Currently, CONFIG_JUMP_LABEL just means "I _want_ to use jump label". The jump label is controlled by HAVE_JUMP_LABEL, which is defined like this:

  #if defined(CC_HAVE_ASM_GOTO) && defined(CONFIG_JUMP_LABEL)
  # define HAVE_JUMP_LABEL
  #endif

We can improve this by testing 'asm goto' support in Kconfig, then making JUMP_LABEL depend on CC_HAS_ASM_GOTO. The ugly #ifdef HAVE_JUMP_LABEL will go away, and CONFIG_JUMP_LABEL will match the real kernel capability.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
Tested-by: Sedat Dilek <sedat.dilek@gmail.com>
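For readers unfamiliar with the feature being probed: 'asm goto' lets inline assembly transfer control to a C label, which is what jump labels are built on. A minimal sketch of the construct, illustrative only and not the kernel's actual static-key implementation:

  static inline int branch_taken(void)
  {
          /* With no instruction emitted the label is never reached; the
           * kernel's static keys patch a 'jmp' into this spot at runtime. */
          asm goto("" : : : : taken);
          return 0;
  taken:
          return 1;
  }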
2019-01-06  kallsyms: lower alignment on ARM  (Mathias Krause; 1 file changed, -2/+2)
As mentioned in the info pages of gas, the '.align' pseudo-op's interpretation of the alignment value is architecture specific: it is either a byte count or taken as a power of two. On ARM it's actually the latter, which leads to unnecessarily large alignments of 16 bytes for 32-bit builds or 256 bytes for 64-bit builds. Fix this by switching to '.balign' instead, which is consistent across all architectures.

Signed-off-by: Mathias Krause <minipli@googlemail.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2019-01-06  scripts: coccinelle: boolinit: drop warnings on named constants  (Julia Lawall; 1 file changed, -0/+5)
Coccinelle doesn't always have access to the values of named (#define) constants, and they may likely often be bound to true and false values anyway, resulting in false positives. So stop warning about them. Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr> Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2019-01-06  scripts: coccinelle: check for redeclaration  (Julia Lawall; 1 file changed, -0/+3)
Avoid reporting on the use of an iterator index variable when the variable is redeclared. Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr> Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2019-01-06  kconfig: remove unused "file" field of yylval union  (Masahiro Yamada; 1 file changed, -1/+0)
This has never been used. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2019-01-06  nds32: remove redundant kernel-space generic-y  (Masahiro Yamada; 1 file changed, -10/+0)
This commit removes redundant generic-y defines in arch/nds32/include/asm/Kbuild. [1] It is redundant to define the same generic-y in both arch/$(ARCH)/include/asm/Kbuild and arch/$(ARCH)/include/uapi/asm/Kbuild. Remove the following generic-y: bitsperlong.h bpf_perf_event.h errno.h fcntl.h ioctl.h ioctls.h mman.h shmbuf.h stat.h [2] It is redundant to define generic-y when arch-specific implementation exists in arch/$(ARCH)/include/asm/*.h Remove the following generic-y: ftrace.h Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2019-01-06  nios2: remove unneeded HAS_DMA define  (Masahiro Yamada; 1 file changed, -3/+0)
kernel/dma/Kconfig globally defines HAS_DMA as follows: config HAS_DMA bool depends on !NO_DMA default y Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Reviewed-by: Christoph Hellwig <hch@lst.de>
2019-01-05  lib/genalloc.c: include vmalloc.h  (Olof Johansson; 1 file changed, -0/+1)
Fixes build break on most ARM/ARM64 defconfigs: lib/genalloc.c: In function 'gen_pool_add_virt': lib/genalloc.c:190:10: error: implicit declaration of function 'vzalloc_node'; did you mean 'kzalloc_node'? lib/genalloc.c:190:8: warning: assignment to 'struct gen_pool_chunk *' from 'int' makes pointer from integer without a cast [-Wint-conversion] lib/genalloc.c: In function 'gen_pool_destroy': lib/genalloc.c:254:3: error: implicit declaration of function 'vfree'; did you mean 'kfree'? Fixes: 6862d2fc8185 ('lib/genalloc.c: use vzalloc_node() to allocate the bitmap') Cc: Huang Shijie <sjhuang@iluvatar.ai> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Alexey Skidanov <alexey.skidanov@intel.com> Signed-off-by: Olof Johansson <olof@lixom.net> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-01-05  dma-direct: fix DMA_ATTR_NO_KERNEL_MAPPING for remapped allocations  (Christoph Hellwig; 1 file changed, -6/+7)
We need to return a dma_addr_t even if we don't have a kernel mapping. Do so by consolidating the phys_to_dma call in a single place and jumping to it from all the branches that return successfully.

Fixes: bfd56cd60521 ("dma-mapping: support highmem in the generic remap allocator")
Reported-by: Liviu Dudau <liviu@dudau.co.uk>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: Liviu Dudau <liviu@dudau.co.uk>
2019-01-05  x86/amd_gart: fix unmapping of non-GART mappings  (Christoph Hellwig; 1 file changed, -1/+9)
In many cases we don't have to create a GART mapping at all, which also means there is nothing to unmap. Fix the range check that was incorrectly modified when removing the mapping_error method. Fixes: 9e8aa6b546 ("x86/amd_gart: remove the mapping_error dma_map_ops method") Reported-by: Michal Kubecek <mkubecek@suse.cz> Signed-off-by: Christoph Hellwig <hch@lst.de> Tested-by: Michal Kubecek <mkubecek@suse.cz>
2019-01-04  ia64: fix compile without swiotlb  (Christoph Hellwig; 2 files changed, -1/+3)
Some non-generic ia64 configs don't build swiotlb, and thus should not pull in the generic non-coherent DMA infrastructure. Fixes: 68c608345c ("swiotlb: remove dma_mark_clean") Reported-by: Tony Luck <tony.luck@gmail.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Tony Luck <tony.luck@intel.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-01-04  x86: re-introduce non-generic memcpy_{to,from}io  (Linus Torvalds; 4 files changed, -18/+51)
This has been broken forever, and nobody ever really noticed because it's purely a performance issue.

Long long ago, in commit 6175ddf06b61 ("x86: Clean up mem*io functions") Brian Gerst simplified the memory copies to and from iomem, since on x86, the instructions to access iomem are exactly the same as the regular instructions. That is technically true, and things worked, and nobody said anything. Besides, back then the regular memcpy was pretty simple and worked fine.

Nobody noticed except for David Laight, that is. David has a TLP monitor he was writing for an FPGA, and has been occasionally complaining about how memcpy_toio() writes things one byte at a time. Which is completely unacceptable from a performance standpoint, even if it happens to technically work.

The reason it's writing one byte at a time is because while it's technically true that accesses to iomem are the same as accesses to regular memory on x86, the _granularity_ (and ordering) of accesses matter to iomem in ways that they don't matter to regular cached memory. In particular, when ERMS is set, we default to using "rep movsb" for larger memory copies. That is indeed perfectly fine for real memory, since the whole point is that the CPU is going to do cacheline optimizations and execute the memory copy efficiently for cached memory.

With iomem? Not so much. With iomem, "rep movsb" will indeed work, but it will copy things one byte at a time. Slowly and ponderously.

Now, originally, back in 2010 when commit 6175ddf06b61 was done, we didn't use ERMS, and this was much less noticeable. Our normal memcpy() was simpler in other ways too. Because in fact, it's not just about using the string instructions. Our memcpy() these days does things like "read and write overlapping values" to handle the last bytes of the copy. Again, for normal memory, overlapping accesses aren't an issue. For iomem? They can be.

So this re-introduces the specialized memcpy_toio(), memcpy_fromio() and memset_io() functions. It doesn't particularly optimize them, but it tries to at least not be horrid, or do overlapping accesses. In fact, this uses the existing __inline_memcpy() function that we still had lying around that uses our very traditional "rep movsl" loop followed by movsw/movsb for the final bytes.

Somebody may decide to try to improve on it, but if we've gone almost a decade with only one person really ever noticing and complaining, maybe it's not worth worrying about further, once it's not _completely_ broken?

Reported-by: David Laight <David.Laight@aculab.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
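For context, the driver-facing helpers in question. A hedged usage sketch; the register offsets and lengths are made up, only the helper names and signatures are the real <linux/io.h> interfaces:

  /* Push a 64-byte command block to MMIO space and read 16 status bytes
   * back, using the io copy helpers this commit re-specializes on x86. */
  static void talk_to_device(void __iomem *regs, const void *cmd, void *status)
  {
          memcpy_toio(regs + 0x100, cmd, 64);       /* multi-byte MMIO write */
          memset_io(regs + 0x200, 0, 32);           /* clear a scratch window */
          memcpy_fromio(status, regs + 0x300, 16);  /* multi-byte MMIO read */
  }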
2019-01-04  Use __put_user_goto in __put_user_size() and unsafe_put_user()  (Linus Torvalds; 1 file changed, -31/+22)
This actually enables the __put_user_goto() functionality in unsafe_put_user().

For an example of the effect of this, this is the code generated for the

  unsafe_put_user(signo, &infop->si_signo, Efault);

in the waitid() system call:

  movl %ecx,(%rbx) # signo, MEM[(struct __large_struct *)_2]

It's just one single store instruction, along with generating an exception table entry pointing to the Efault label case in case that instruction faults.

Before, we would generate this:

  xorl %edx, %edx
  movl %ecx,(%rbx) # signo, MEM[(struct __large_struct *)_3]
  testl %edx, %edx
  jne .L309

with the exception table generated for that 'mov' instruction causing us to jump to a stub that set %edx to -EFAULT and then jumped back to the 'testl' instruction.

So not only do we now get rid of the extra code in the normal sequence, we also avoid unnecessarily keeping that extra error register live across it all.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
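For reference, the calling convention around unsafe_put_user() that the generated code above comes from. A hedged sketch: the struct is a stand-in for the waitid() infop argument, not the kernel's actual definition:

  /* Hypothetical user-space layout, standing in for waitid()'s infop. */
  struct info_like {
          int si_signo;
  };

  static int put_signo(struct info_like __user *infop, int signo)
  {
          if (!user_access_begin(infop, sizeof(*infop)))
                  return -EFAULT;
          /* One plain store; a fault jumps straight to Efault via the
           * exception-table entry emitted for this instruction. */
          unsafe_put_user(signo, &infop->si_signo, Efault);
          user_access_end();
          return 0;
  Efault:
          user_access_end();
          return -EFAULT;
  }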
2019-01-04  x86 uaccess: Introduce __put_user_goto  (Linus Torvalds; 1 file changed, -11/+17)
This is finally the actual reason for the odd error handling in the "unsafe_get/put_user()" functions, introduced over three years ago. Using a "jump to error label" interface is somewhat odd, but very convenient as a programming interface, and more importantly, it fits very well with simply making the target be the exception handler address directly from the inline asm. The reason it took over three years to actually do this? We need "asm goto" support for it, which only became the default on x86 last year. It's now been a year that we've forced asm goto support (see commit e501ce957a78 "x86: Force asm-goto"), and so let's just do it here too. [ Side note: this commit was originally done back in 2016. The above commentary about timing is obviously about it only now getting merged into my real upstream tree - Linus ] Sadly, gcc still only supports "asm goto" with asms that do not have any outputs, so we are limited to only the put_user case for this. Maybe in several more years we can do the get_user case too. Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-01-05  parisc: Remap hugepage-aligned pages in set_kernel_text_rw()  (Helge Deller; 1 file changed, -2/+2)
The alternative coding patch for parisc in kernel 4.20 broke booting machines with PA8500-PA8700 CPUs. The problem is that for such machines the parisc kernel automatically utilizes huge pages to access kernel text code, but the set_kernel_text_rw() function, which is used shortly before applying any alternative patches, didn't use the correct hugepage-aligned addresses to remap the kernel text read-writeable.

Fixes: 3847dab77421 ("parisc: Add alternative coding infrastructure")
Cc: <stable@vger.kernel.org> [4.20]
Signed-off-by: Helge Deller <deller@gmx.de>
2019-01-04  ARM: multi_v7_defconfig: enable CONFIG_UNIPHIER_MDMAC  (Masahiro Yamada; 1 file changed, -0/+1)
Enable the UniPhier MIO DMAC driver. This is used as the DMA engine for accelerating the SD/eMMC controller drivers. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Signed-off-by: Olof Johansson <olof@lixom.net>
2019-01-04  Add CREDITS entry for Shaohua Li  (Jens Axboe; 1 file changed, -0/+6)
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-04  mm/page_io.c: fix polled swap page in  (Jens Axboe; 1 file changed, -1/+3)
swap_readpage() wants to do polling to bring in pages if asked to, but it doesn't mark the bio as being polled. Additionally, the looping around the blk_poll() check isn't correct - if we get a zero return, we should call io_schedule(), we can't just assume that the bio has completed. The regular bio->bi_private check should be used for that. Link: http://lkml.kernel.org/r/e15243a8-2cdf-c32c-ecee-f289377c8ef9@kernel.dk Signed-off-by: Jens Axboe <axboe@kernel.dk> Reviewed-by: Andrew Morton <akpm@linux-foundation.org> Cc: Christoph Hellwig <hch@lst.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-01-04  checkpatch: add Co-developed-by to signature tags  (Jorge Ramirez-Ortiz; 1 file changed, -0/+1)
As per Documentation/process/submitting-patches, Co-developed-by is a valid signature. This commit removes the warning. Link: http://lkml.kernel.org/r/1544808928-20002-3-git-send-email-jorge.ramirez-ortiz@linaro.org Signed-off-by: Jorge Ramirez-Ortiz <jorge.ramirez-ortiz@linaro.org> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Himanshu Jha <himanshujha199640@gmail.com> Cc: Joe Perches <joe@perches.com> Cc: Jonathan Cameron <jic23@kernel.org> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Niklas Cassel <niklas.cassel@linaro.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-01-04  docs: fix Co-Developed-by docs  (Jorge Ramirez-Ortiz; 1 file changed, -2/+2)
The accepted terminology will be Co-developed-by therefore lose the capital letter from now on. Link: http://lkml.kernel.org/r/1544808928-20002-2-git-send-email-jorge.ramirez-ortiz@linaro.org Signed-off-by: Jorge Ramirez-Ortiz <jorge.ramirez-ortiz@linaro.org> Acked-by: Himanshu Jha <himanshujha199640@gmail.com> Cc: Jonathan Cameron <jic23@kernel.org> Cc: Joe Perches <joe@perches.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Niklas Cassel <niklas.cassel@linaro.org> Cc: Jonathan Corbet <corbet@lwn.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>