path: root/tools/perf/scripts/python/export-to-postgresql.py
Age | Commit message | Author | Diffstat
2018-12-07 | arm64: hibernate: Avoid sending cross-calling with interrupts disabled | Will Deacon | 1 file, -1/+1
Since commit 3b8c9f1cdfc50 ("arm64: IPI each CPU after invalidating the I-cache for kernel mappings"), a call to flush_icache_range() will use an IPI to cross-call other online CPUs so that any stale instructions are flushed from their pipelines. This triggers a WARN during the hibernation resume path, where flush_icache_range() is called with interrupts disabled and is therefore prone to deadlock: | Disabling non-boot CPUs ... | CPU1: shutdown | psci: CPU1 killed. | CPU2: shutdown | psci: CPU2 killed. | CPU3: shutdown | psci: CPU3 killed. | WARNING: CPU: 0 PID: 1 at ../kernel/smp.c:416 smp_call_function_many+0xd4/0x350 | Modules linked in: | CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.20.0-rc4 #1 Since all secondary CPUs have been taken offline prior to invalidating the I-cache, there's actually no need for an IPI and we can simply call __flush_icache_range() instead. Cc: <stable@vger.kernel.org> Fixes: 3b8c9f1cdfc50 ("arm64: IPI each CPU after invalidating the I-cache for kernel mappings") Reported-by: Kunihiko Hayashi <hayashi.kunihiko@socionext.com> Tested-by: Kunihiko Hayashi <hayashi.kunihiko@socionext.com> Tested-by: James Morse <james.morse@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2018-12-07 | blk-mq: punt failed direct issue to dispatch list | Jens Axboe | 1 file, -28/+5
After the direct dispatch corruption fix, we permanently disallow direct dispatch of non read/write requests. This works fine off the normal IO path, as they will be retried like any other failed direct dispatch request. But for the blk_insert_cloned_request() that only DM uses to bypass the bottom level scheduler, we always first attempt direct dispatch. For some types of requests, that's now a permanent failure, and no amount of retrying will make that succeed. This results in a livelock. Instead of making special cases for what we can direct issue, and now having to deal with DM solving the livelock while still retaining a BUSY condition feedback loop, always just add a request that has been through ->queue_rq() to the hardware queue dispatch list. These are safe to use as no merging can take place there. Additionally, if requests do have prepped data from drivers, we aren't dependent on them not sharing space in the request structure to safely add them to the IO scheduler lists. This basically reverts ffe81d45322c and is based on a patch from Ming, but with the list insert case covered as well. Fixes: ffe81d45322c ("blk-mq: fix corruption with direct issue") Cc: stable@vger.kernel.org Suggested-by: Ming Lei <ming.lei@redhat.com> Reported-by: Bart Van Assche <bvanassche@acm.org> Tested-by: Ming Lei <ming.lei@redhat.com> Acked-by: Mike Snitzer <snitzer@redhat.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-12-07 | nvmet-rdma: fix response use after free | Israel Rukshin | 1 file, -1/+2
nvmet_rdma_release_rsp() may free the response before using it at error flow. Fixes: 8407879 ("nvmet-rdma: fix possible bogus dereference under heavy load") Signed-off-by: Israel Rukshin <israelr@mellanox.com> Reviewed-by: Sagi Grimberg <sagi@grimberg.me> Reviewed-by: Max Gurtovoy <maxg@mellanox.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
2018-12-07 | nvme: validate controller state before rescheduling keep alive | James Smart | 1 file, -1/+9
Delete operations are seeing NULL pointer references in call_timer_fn. Tracking these back, the timer appears to be the keep alive timer.

nvme_keep_alive_work(), which is tied to the timer that is cancelled by nvme_stop_keep_alive(), simply starts the keep alive io but doesn't wait for its completion. So nvme_stop_keep_alive() only stops a timer when it's pending. When a keep alive is in flight, there is no timer running and nvme_stop_keep_alive() will have no effect on the keep alive io. Thus, if the io completes successfully, the keep alive timer will be rescheduled.

In the failure case, delete is called, the controller state is changed, nvme_stop_keep_alive() is called while the io is outstanding, and the delete path continues on. The keep alive happens to successfully complete before the delete paths mark it as aborted as part of the queue termination, so the timer is restarted. The delete paths then tear down the controller, and later on the timer code fires and the timer entry is now corrupt.

Fix by validating the controller state before rescheduling the keep alive. Testing with the fix has confirmed the condition above was hit.

Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
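The fix described above boils down to a state check before re-arming the timer. Below is a minimal userspace sketch of that idea; the state names and the helper are illustrative stand-ins, not the nvme driver's actual code.

    #include <stdbool.h>
    #include <stdio.h>

    /* Stand-in controller states; the real driver has more. */
    enum ctrl_state { CTRL_LIVE, CTRL_CONNECTING, CTRL_DELETING };

    /* Only re-arm the keep-alive work while the controller still wants
     * keep-alives; a controller being torn down must not get a new timer. */
    static bool keep_alive_should_reschedule(enum ctrl_state state)
    {
        switch (state) {
        case CTRL_LIVE:
        case CTRL_CONNECTING:
            return true;
        default:
            return false;
        }
    }

    int main(void)
    {
        printf("live: %d, deleting: %d\n",
               keep_alive_should_reschedule(CTRL_LIVE),
               keep_alive_should_reschedule(CTRL_DELETING));
        return 0;
    }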
2018-12-07 | block, bfq: fix decrement of num_active_groups | Paolo Valente | 3 files, -25/+107
Since commit '2d29c9f89fcd ("block, bfq: improve asymmetric scenarios detection")', if there are process groups with I/O requests waiting for completion, then BFQ tags the scenario as 'asymmetric'. This detection is needed for preserving service guarantees (for details, see comments on the computation of the variable asymmetric_scenario in the function bfq_better_to_idle).

Unfortunately, commit '2d29c9f89fcd ("block, bfq: improve asymmetric scenarios detection")' contains an error exactly in the updating of the number of groups with I/O requests waiting for completion: if a group has more than one descendant process, then the above number of groups, which is renamed from num_active_groups to a more appropriate num_groups_with_pending_reqs by this commit, may happen to be wrongly decremented multiple times, namely every time one of the descendant processes gets all its pending I/O requests completed.

A correct, complete solution should work as follows. Consider a group that is inactive, i.e., that has no descendant process with pending I/O inside BFQ queues. Then suppose that num_groups_with_pending_reqs is still accounting for this group, because the group still has some descendant process with some I/O request still in flight. num_groups_with_pending_reqs should be decremented when the in-flight request of the last descendant process is finally completed (assuming that nothing else has changed for the group in the meantime, in terms of composition of the group and active/inactive state of child groups and processes). To accomplish this, an additional pending-request counter must be added to entities, and must be updated correctly.

To avoid this additional field and operations, this commit resorts to the following tradeoff between simplicity and accuracy: for an inactive group that is still counted in num_groups_with_pending_reqs, this commit decrements num_groups_with_pending_reqs when the first descendant process of the group remains with no request waiting for completion.

This simplified scheme provides a fix to the unbalanced decrements introduced by 2d29c9f89fcd. Since this error was also caused by lack of comments on this non-trivial issue, this commit also adds related comments.

Fixes: 2d29c9f89fcd ("block, bfq: improve asymmetric scenarios detection")
Reported-by: Steven Barrett <steven@liquorix.net>
Tested-by: Steven Barrett <steven@liquorix.net>
Tested-by: Lucjan Lucjanov <lucjan.lucjanov@gmail.com>
Reviewed-by: Federico Motta <federico@willer.it>
Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-12-07 | crypto: user - Disable statistics interface | Herbert Xu | 1 file, -1/+1
Since this user-space API is still undergoing significant changes, this patch disables it for the current merge window. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-12-06 | i2c: uniphier-f: fix violation of tLOW requirement for Fast-mode | Masahiro Yamada | 1 file, -1/+18
Currently, the clock duty is set as tLOW/tHIGH = 1/1. For Fast-mode, tLOW is set to 1.25 us while the I2C spec requires tLOW >= 1.3 us. tLOW/tHIGH = 5/4 would meet both Standard-mode and Fast-mode: Standard-mode: tLOW = 5.56 us, tHIGH = 4.44 us Fast-mode: tLOW = 1.39 us, tHIGH = 1.11 us Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
2018-12-06 | i2c: uniphier: fix violation of tLOW requirement for Fast-mode | Masahiro Yamada | 1 file, -1/+7
Currently, the clock duty is set as tLOW/tHIGH = 1/1. For Fast-mode, tLOW is set to 1.25 us while the I2C spec requires tLOW >= 1.3 us. tLOW/tHIGH = 5/4 would meet both Standard-mode and Fast-mode: Standard-mode: tLOW = 5.56 us, tHIGH = 4.44 us Fast-mode: tLOW = 1.39 us, tHIGH = 1.11 us Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
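For reference, the tLOW/tHIGH figures quoted in the two entries above fall straight out of splitting the SCL period 5:4. The small standalone C program below is illustrative arithmetic only, not driver code.

    #include <stdio.h>

    /* A 5/9 vs 4/9 split of the SCL period reproduces the numbers above. */
    static void print_duty(const char *mode, unsigned int bus_khz)
    {
        double period_us = 1000.0 / bus_khz;   /* SCL period in microseconds */
        double tlow  = period_us * 5.0 / 9.0;  /* tLOW/tHIGH = 5/4 */
        double thigh = period_us * 4.0 / 9.0;

        printf("%s: tLOW = %.2f us, tHIGH = %.2f us\n", mode, tlow, thigh);
    }

    int main(void)
    {
        print_duty("Standard-mode (100 kHz)", 100);
        print_duty("Fast-mode (400 kHz)", 400);
        return 0;
    }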
2018-12-06 | i2c: uniphier-f: fill TX-FIFO only in IRQ handler for repeated START | Masahiro Yamada | 1 file, -4/+9
- For a repeated START condition, this controller starts data transfer immediately after the slave address is written to the TX-FIFO. - Once the TX-FIFO empty interrupt is asserted, the controller makes a pause even if additional data are written to the TX-FIFO. Given those circumstances, the data after a repeated START may not be transferred if the interrupt is asserted while the TX-FIFO is being filled up. A more reliable way is to append TX data only in the interrupt handler. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
2018-12-06 | i2c: uniphier-f: fix timeout error after reading 8 bytes | Masahiro Yamada | 1 file, -3/+14
I was totally screwed up in commit eaba68785c2d ("i2c: uniphier-f: fix race condition when IRQ is cleared"). Since that commit, if the number of read bytes is a multiple of the FIFO size (8, 16, 24... bytes), the STOP condition could be issued twice, depending on the timing. If this happens, the controller will go wrong, resulting in the timeout error.

It was more than 3 years ago when I wrote this driver, so my memory about this hardware was vague. Please let me correct the description in the commit log of eaba68785c2d. Clearing the IRQ status on exiting the IRQ handler is absolutely fine. This controller makes a pause while any IRQ status is asserted. If the IRQ status is cleared first, the hardware may start the next transaction before the IRQ handler finishes what it is supposed to do. This partially reverts the bad commit with clear comments so that I will never repeat this mistake.

I also investigated what is happening at the last moment of the read mode. The UNIPHIER_FI2C_INT_RF interrupt is asserted a bit earlier (by half a period of the clock cycle) than UNIPHIER_FI2C_INT_RB. I consulted a hardware engineer, and I got the following information:

UNIPHIER_FI2C_INT_RF: asserted at the falling edge of SCL at the 8th bit.
UNIPHIER_FI2C_INT_RB: asserted at the rising edge of SCL at the 9th (ACK) bit.

In order to avoid calling uniphier_fi2c_stop() twice, check the latter interrupt. I also commented this because it is obscure hardware internal.

Fixes: eaba68785c2d ("i2c: uniphier-f: fix race condition when IRQ is cleared")
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
2018-12-06 | i2c: scmi: Fix probe error on devices with an empty SMB0001 ACPI device node | Hans de Goede | 1 file, -3/+7
Some AMD based HP laptops have a SMB0001 ACPI device node which does not define any methods. This leads to the following error in dmesg: [ 5.222731] cmi: probe of SMB0001:00 failed with error -5 This commit makes acpi_smbus_cmi_add() return -ENODEV instead in this case silencing the error. In case of a failure of the i2c_add_adapter() call this commit now propagates the error from that call instead of -EIO. Signed-off-by: Hans de Goede <hdegoede@redhat.com> Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
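The change above follows a common error-handling pattern: an "empty" device is not an I/O error, and a failing sub-call's error code should be passed through rather than flattened to -EIO. Here is a hedged, generic C model of it; the names and the -ENOMEM stand-in are made up, only the -ENODEV/propagation logic mirrors the description.

    #include <errno.h>
    #include <stdio.h>

    static int add_adapter_sketch(int fail)
    {
        return fail ? -ENOMEM : 0;      /* stand-in for i2c_add_adapter() */
    }

    static int probe_sketch(int nr_methods, int adapter_fails)
    {
        int ret;

        if (nr_methods == 0)
            return -ENODEV;             /* empty ACPI node: not worth an error message */

        ret = add_adapter_sketch(adapter_fails);
        if (ret)
            return ret;                 /* propagate the real error, don't return -EIO */
        return 0;
    }

    int main(void)
    {
        printf("%d %d %d\n", probe_sketch(0, 0), probe_sketch(3, 1), probe_sketch(3, 0));
        return 0;
    }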
2018-12-06 | i2c: axxia: properly handle master timeout | Adamski, Krzysztof (Nokia - PL/Wroclaw) | 1 file, -11/+29
According to the Intel (R) Axxia TM Lionfish Communication Processor Peripheral Subsystem Hardware Reference Manual, the AXXIA I2C module has a programmable Master Wait Timer, which, among others, checks the time between commands sent in manual mode. When a timeout (25ms) passes, the TSS bit is set in the Master Interrupt Status register and a Stop command is issued by the hardware.

The axxia_i2c_xfer(), however, does not properly handle this situation. For each message a separate axxia_i2c_xfer_msg() is called and this function incorrectly assumes that any interrupt might happen only when waiting for completion. This is mostly correct but there is one exception - a master timeout can trigger if enough time has passed between individual transfers. It will, by definition, happen between transfers when the interrupts are disabled by the code.

If that happens, the hardware issues a Stop command. The interrupt indicating the timeout will not be triggered as soon as we enable them since the Master Interrupt Status is cleared when master mode is entered again (which happens before enabling irqs), meaning this error is lost and the transfer is continued even though the Stop was issued on the bus. The subsequent operations complete without error but a bogus value (0xFF in case of read) is read as the client device is confused because of the aborted transfer. No error is returned from master_xfer(), making the caller believe that a valid value was read.

To fix the problem, the TSS bit (indicating timeout) in the Master Interrupt Status register is checked before each transfer. If it is set, there was a timeout before this transfer and (as described above) the hardware already issued a Stop command, so the transaction should be aborted and thus -ETIMEDOUT is returned from the master_xfer() callback.

In order to be sure no timeout was issued, we can't just read the status just before starting a new transaction as there will always be a small window of time (a few CPU cycles at best) where this might still happen. For this reason we have to temporarily disable the timer before checking for the TSS bit. Disabling it will, however, clear the TSS bit, so in order to preserve that information we have to read it in the ISR, and so we have to ensure that the TSS interrupt is not masked between transfers of one transaction. There is no need to call bus recovery or controller reinitialization if that happens, so it's skipped.

Signed-off-by: Krzysztof Adamski <krzysztof.adamski@nokia.com>
Reviewed-by: Alexander Sverdlin <alexander.sverdlin@nokia.com>
Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
2018-12-06 | vhost/vsock: fix use-after-free in network stack callers | Stefan Hajnoczi | 1 file, -24/+33
If the network stack calls .send_pkt()/.cancel_pkt() during .release(), a struct vhost_vsock use-after-free is possible. This occurs because .release() does not wait for other CPUs to stop using struct vhost_vsock. Switch to an RCU-enabled hashtable (indexed by guest CID) so that .release() can wait for other CPUs by calling synchronize_rcu(). This also eliminates vhost_vsock_lock acquisition in the data path so it could have a positive effect on performance. This is CVE-2018-14625 "kernel: use-after-free Read in vhost_transport_send_pkt". Cc: stable@vger.kernel.org Reported-and-tested-by: syzbot+bd391451452fb0b93039@syzkaller.appspotmail.com Reported-by: syzbot+e3e074963495f92a89ed@syzkaller.appspotmail.com Reported-by: syzbot+d5a0a170c5069658b141@syzkaller.appspotmail.com Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com>
2018-12-06 | virtio/s390: fix race in ccw_io_helper() | Halil Pasic | 1 file, -1/+6
While ccw_io_helper() seems to be intended to be exclusive, in the sense that it is supposed to facilitate I/O for at most one thread at any given time, there is actually nothing ensuring that threads won't pile up at vcdev->wait_q. If they do, all threads get woken up and see the status that belongs to some other request than their own. This can lead to bugs. For an example see: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1788432

This race normally does not cause any problems. The operations provided by struct virtio_config_ops are usually invoked in a well defined sequence, normally don't fail, and are normally used quite infrequently too. Yet, if some of these operations are directly triggered via sysfs attributes, like in the case described by the referenced bug, userspace is given an opportunity to force races by increasing the frequency of the given operations.

Let us fix the problem by ensuring that, for each device, we finish processing the previous request before starting with a new one.

Signed-off-by: Halil Pasic <pasic@linux.ibm.com>
Reported-by: Colin Ian King <colin.king@canonical.com>
Cc: stable@vger.kernel.org
Message-Id: <20180925121309.58524-3-pasic@linux.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
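As a userspace analogy of the fix (the kernel version uses its own wait queue and per-device state, not pthreads), serializing submissions per device guarantees each caller reads back its own completion status:

    #include <pthread.h>
    #include <stdio.h>

    struct vcdev_sketch {
        pthread_mutex_t io_lock;   /* one in-flight request per device */
        int io_status;             /* status of the current request */
    };

    static int ccw_io_helper_sketch(struct vcdev_sketch *dev)
    {
        int status;

        pthread_mutex_lock(&dev->io_lock);
        dev->io_status = 0;        /* pretend the channel I/O completed OK */
        status = dev->io_status;   /* guaranteed to be our own status */
        pthread_mutex_unlock(&dev->io_lock);
        return status;
    }

    int main(void)
    {
        struct vcdev_sketch dev = { .io_lock = PTHREAD_MUTEX_INITIALIZER };

        printf("status = %d\n", ccw_io_helper_sketch(&dev));
        return 0;
    }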
2018-12-06 | virtio/s390: avoid race on vcdev->config | Halil Pasic | 1 file, -2/+8
Currently we have a race on vcdev->config in virtio_ccw_get_config() and in virtio_ccw_set_config(). This normally does not cause problems, as these are usually infrequent operations. However, for some devices writing to/reading from the config space can be triggered through sysfs attributes. For these, userspace can force the race by increasing the frequency. Signed-off-by: Halil Pasic <pasic@linux.ibm.com> Cc: stable@vger.kernel.org Message-Id: <20180925121309.58524-2-pasic@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2018-12-06 | vhost/vsock: fix reset orphans race with close timeout | Stefan Hajnoczi | 1 file, -7/+15
If a local process has closed a connected socket and hasn't received a RST packet yet, then the socket remains in the table until a timeout expires. When a vhost_vsock instance is released with the timeout still pending, the socket is never freed because vhost_vsock has already set the SOCK_DONE flag. Check if the close timer is pending and let it close the socket. This prevents the race which can leak sockets. Reported-by: Maximilian Riemensberger <riemensberger@cadami.net> Cc: Graham Whaley <graham.whaley@gmail.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2018-12-06 | dmaengine: dw: Fix FIFO size for Intel Merrifield | Andy Shevchenko | 1 file, -3/+3
Intel Merrifield has a reduced size of FIFO used in iDMA 32-bit controller, i.e. 512 bytes instead of 1024. Fix this by partitioning it as 64 bytes per channel. Note, in the future we might switch to 'fifo-size' property instead of hard coded value. Fixes: 199244d69458 ("dmaengine: dw: add support of iDMA 32-bit hardware") Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Cc: stable@vger.kernel.org Signed-off-by: Vinod Koul <vkoul@kernel.org>
2018-12-06 | stackleak: Register the 'stackleak_cleanup' pass before the '*free_cfg' pass | Alexander Popov | 1 file, -3/+5
Currently the 'stackleak_cleanup' pass deleting a CALL insn is executed after the 'reload' pass. That allows gcc to do some weird optimization in function prologues and epilogues, which are generated later [1]. Let's avoid that by registering the 'stackleak_cleanup' pass before the '*free_cfg' pass. It's the moment when the stack frame size is already final, function prologues and epilogues are generated, and the machine-dependent code transformations are not done. [1] https://www.openwall.com/lists/kernel-hardening/2018/11/23/2 Reported-by: kbuild test robot <lkp@intel.com> Signed-off-by: Alexander Popov <alex.popov@linux.com> Signed-off-by: Kees Cook <keescook@chromium.org>
2018-12-06 | ARM: ensure that processor vtables is not lost after boot | Russell King | 1 file, -0/+10
Marek Szyprowski reported problems with CPU hotplug in current kernels. This was tracked down to the processor vtables being located in an init section, and therefore discarded after kernel boot, despite being required after boot to properly initialise the non-boot CPUs. Arrange for these tables to end up in .rodata when required. Reported-by: Marek Szyprowski <m.szyprowski@samsung.com> Tested-by: Krzysztof Kozlowski <krzk@kernel.org> Fixes: 383fb3ee8024 ("ARM: spectre-v2: per-CPU vtables to work around big.Little systems") Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2018-12-06 | asm-generic: unistd.h: fixup broken macro include. | Guo Ren | 1 file, -0/+4
The broken macros cause a glibc compile error. If there is no __NR3264_fstat*, the related definitions should also be removed.

Reported-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
Fixes: bf4b6a7d371e ("y2038: Remove stat64 family from default syscall set")
[arnd: Both Marcin and Guo provided this patch to fix up my clearly broken commit, I applied the version with the better changelog.]
Signed-off-by: Guo Ren <ren_guo@c-sky.com>
Signed-off-by: Mao Han <han_mao@c-sky.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
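The shape of such a fix is a preprocessor guard: the legacy aliases must only exist when the underlying __NR3264_* numbers do. A self-contained toy version follows; the names mirror the asm-generic header purely for illustration and the values are demo stand-ins.

    #include <stdio.h>

    /* Comment out the two defines below to see the guard keep the legacy
     * aliases from dangling. */
    #define __NR3264_fstatat 79
    #define __NR3264_fstat   80

    #ifdef __NR3264_fstatat
    #define __NR_newfstatat __NR3264_fstatat
    #endif
    #ifdef __NR3264_fstat
    #define __NR_fstat __NR3264_fstat
    #endif

    int main(void)
    {
    #ifdef __NR_fstat
        printf("__NR_fstat = %d\n", __NR_fstat);
    #else
        printf("__NR_fstat is not provided on this ABI\n");
    #endif
        return 0;
    }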
2018-12-06 | drm/ast: Fix connector leak during driver unload | Sam Bobroff | 1 file, -0/+1
When unloading the ast driver, a warning message is printed by drm_mode_config_cleanup() because a reference is still held to one of the drm_connector structs. Correct this by calling drm_crtc_force_disable_all() in ast_fbdev_destroy(). Signed-off-by: Sam Bobroff <sbobroff@linux.ibm.com> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Dave Airlie <airlied@redhat.com> Link: https://patchwork.freedesktop.org/patch/msgid/1e613f3c630c7bbc72e04a44b178259b9164d2f6.1543798395.git.sbobroff@linux.ibm.com
2018-12-05 | Uprobes: Fix kernel oops with delayed_uprobe_remove() | Ravi Bangoria | 1 file, -0/+2
There could be a race between task exit and probe unregister: exit_mm() mmput() __mmput() uprobe_unregister() uprobe_clear_state() put_uprobe() delayed_uprobe_remove() delayed_uprobe_remove() put_uprobe() is calling delayed_uprobe_remove() without taking delayed_uprobe_lock and thus the race sometimes results in a kernel crash. Fix this by taking delayed_uprobe_lock before calling delayed_uprobe_remove() from put_uprobe(). Detailed crash log can be found at: Link: http://lkml.kernel.org/r/000000000000140c370577db5ece@google.com Link: http://lkml.kernel.org/r/20181205033423.26242-1-ravi.bangoria@linux.ibm.com Acked-by: Oleg Nesterov <oleg@redhat.com> Reviewed-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Reported-by: syzbot+cb1fb754b771caca0a88@syzkaller.appspotmail.com Fixes: 1cc33161a83d ("uprobes: Support SDT markers having reference count (semaphore)") Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
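The essence of the fix is that the delayed list must never be touched without its lock, including from the put path. A tiny userspace model of that rule, with a pthread mutex standing in for delayed_uprobe_lock:

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t delayed_lock = PTHREAD_MUTEX_INITIALIZER;
    static int delayed_entries = 1;

    /* Must only run with delayed_lock held. */
    static void delayed_remove_sketch(void)
    {
        if (delayed_entries > 0)
            delayed_entries--;
    }

    /* The fix in miniature: the put path takes the lock before touching the
     * shared delayed list, closing the race with the unregister path. */
    static void put_uprobe_sketch(void)
    {
        pthread_mutex_lock(&delayed_lock);
        delayed_remove_sketch();
        pthread_mutex_unlock(&delayed_lock);
    }

    int main(void)
    {
        put_uprobe_sketch();
        printf("delayed entries left: %d\n", delayed_entries);
        return 0;
    }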
2018-12-05 | stackleak: Mark stackleak_track_stack() as notrace | Anders Roxell | 1 file, -1/+1
Function graph tracing recurses into itself when stackleak is enabled, causing the ftrace graph selftest to run for up to 90 seconds and trigger the softlockup watchdog. Breakpoint 2, ftrace_graph_caller () at ../arch/arm64/kernel/entry-ftrace.S:200 200 mcount_get_lr_addr x0 // pointer to function's saved lr (gdb) bt \#0 ftrace_graph_caller () at ../arch/arm64/kernel/entry-ftrace.S:200 \#1 0xffffff80081d5280 in ftrace_caller () at ../arch/arm64/kernel/entry-ftrace.S:153 \#2 0xffffff8008555484 in stackleak_track_stack () at ../kernel/stackleak.c:106 \#3 0xffffff8008421ff8 in ftrace_ops_test (ops=0xffffff8009eaa840 <graph_ops>, ip=18446743524091297036, regs=<optimized out>) at ../kernel/trace/ftrace.c:1507 \#4 0xffffff8008428770 in __ftrace_ops_list_func (regs=<optimized out>, ignored=<optimized out>, parent_ip=<optimized out>, ip=<optimized out>) at ../kernel/trace/ftrace.c:6286 \#5 ftrace_ops_no_ops (ip=18446743524091297036, parent_ip=18446743524091242824) at ../kernel/trace/ftrace.c:6321 \#6 0xffffff80081d5280 in ftrace_caller () at ../arch/arm64/kernel/entry-ftrace.S:153 \#7 0xffffff800832fd10 in irq_find_mapping (domain=0xffffffc03fc4bc80, hwirq=27) at ../kernel/irq/irqdomain.c:876 \#8 0xffffff800832294c in __handle_domain_irq (domain=0xffffffc03fc4bc80, hwirq=27, lookup=true, regs=0xffffff800814b840) at ../kernel/irq/irqdesc.c:650 \#9 0xffffff80081d52b4 in ftrace_graph_caller () at ../arch/arm64/kernel/entry-ftrace.S:205 Rework so we mark stackleak_track_stack as notrace Co-developed-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Anders Roxell <anders.roxell@linaro.org> Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org> Signed-off-by: Kees Cook <keescook@chromium.org>
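The fix itself is a one-word annotation. The sketch below shows its shape outside the kernel, with a fallback attribute definition so it builds standalone with GCC or Clang; in the kernel, notrace already exists and the function body does real work.

    #include <stdio.h>

    #ifndef notrace
    #define notrace __attribute__((no_instrument_function))
    #endif

    /* The tracker must never itself be instrumented, otherwise the graph
     * tracer recurses straight back into it. */
    static notrace void stackleak_track_stack_sketch(void)
    {
        /* record the current stack depth here */
    }

    int main(void)
    {
        stackleak_track_stack_sketch();
        puts("tracker invoked without being traced");
        return 0;
    }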
2018-12-05 | mm, thp: restore node-local hugepage allocations | David Rientjes | 3 files, -29/+17
This is a full revert of ac5b2c18911f ("mm: thp: relax __GFP_THISNODE for MADV_HUGEPAGE mappings") and a partial revert of 89c83fb539f9 ("mm, thp: consolidate THP gfp handling into alloc_hugepage_direct_gfpmask"). By not setting __GFP_THISNODE, applications can allocate remote hugepages when the local node is fragmented or low on memory when either the thp defrag setting is "always" or the vma has been madvised with MADV_HUGEPAGE. Remote access to hugepages often has much higher latency than local pages of the native page size. On Haswell, ac5b2c18911f was shown to have a 13.9% access regression after this commit for binaries that remap their text segment to be backed by transparent hugepages. The intent of ac5b2c18911f is to address an issue where a local node is low on memory or fragmented such that a hugepage cannot be allocated. In every scenario where this was described as a fix, there is abundant and unfragmented remote memory available to allocate from, even with a greater access latency. If remote memory is also low or fragmented, not setting __GFP_THISNODE was also measured on Haswell to have a 40% regression in allocation latency. Restore __GFP_THISNODE for thp allocations. Fixes: ac5b2c18911f ("mm: thp: relax __GFP_THISNODE for MADV_HUGEPAGE mappings") Fixes: 89c83fb539f9 ("mm, thp: consolidate THP gfp handling into alloc_hugepage_direct_gfpmask") Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Mel Gorman <mgorman@techsingularity.net> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: Michal Hocko <mhocko@kernel.org> Cc: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: David Rientjes <rientjes@google.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-12-05 | drm/amdgpu/vcn: Update vcn.cur_state during suspend | James Zhu | 1 file, -1/+2
Replace vcn_v1_0_stop with vcn_v1_0_set_powergating_state during suspend, to keep adev->vcn.cur_state updated. This fixes a VCN S3 hang issue.

Signed-off-by: James Zhu <James.Zhu@amd.com>
Reviewed-by: Leo Liu <leo.liu@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2018-12-05 | ALSA: hda/realtek: Fix mic issue on Acer AIO Veriton Z4860G/Z6860G | Chris Chiu | 1 file, -0/+2
Acer AIO Veriton Z4860G/Z6860G with the same ALC286 codec has issues with the input from external microphone. The issue can be fixed by the fixup ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE for Veriton Z4660G. Signed-off-by: Jian-Hong Pan <jian-hong@endlessm.com> Signed-off-by: Daniel Drake <drake@endlessm.com> Signed-off-by: Chris Chiu <chiu@endlessm.com> Cc: <stable@vger.kernel.org> Signed-off-by: Takashi Iwai <tiwai@suse.de>
2018-12-05 | ALSA: hda/realtek: Fix mic issue on Acer AIO Veriton Z4660G | Chris Chiu | 1 file, -0/+1
Acer AIO Veriton Z4660G with ALC286 codec has issue with the input from external microphones connecting via 'Front Mic' jack. The fixup ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE enables the jack sensing of the headset and fix the audio input issue of external microphone. Signed-off-by: Jian-Hong Pan <jian-hong@endlessm.com> Signed-off-by: Daniel Drake <drake@endlessm.com> Signed-off-by: Chris Chiu <chiu@endlessm.com> Cc: <stable@vger.kernel.org> Signed-off-by: Takashi Iwai <tiwai@suse.de>
2018-12-05 | ALSA: hda/realtek - Add support for Acer Aspire C24-860 headset mic | Chris Chiu | 1 file, -0/+1
The Acer AIO Aspire C24-860 with ALC286 can't detect the headset microphone. Just like another Acer AIO U27-880, it needs a different pin value for 0x18 and the headset fixup to make headset mic work. Signed-off-by: Jian-Hong Pan <jian-hong@endlessm.com> Signed-off-by: Daniel Drake <drake@endlessm.com> Signed-off-by: Chris Chiu <chiu@endlessm.com> Cc: <stable@vger.kernel.org> Signed-off-by: Takashi Iwai <tiwai@suse.de>
2018-12-05 | ALSA: hda/realtek: ALC286 mic and headset-mode fixups for Acer Aspire U27-880 | Chris Chiu | 1 file, -0/+14
Acer Aspire U27-880(AIO) with ALC286 codec can not detect headset mic and internal mic not working either. It needs the similar quirk like Sony laptops to fix headphone jack sensing and enables use of the internal microphone. Unfortunately jack sensing for the headset mic is still not working. Signed-off-by: Jian-Hong Pan <jian-hong@endlessm.com> Signed-off-by: Daniel Drake <drake@endlessm.com> Signed-off-by: Chris Chiu <chiu@endlessm.com> Cc: <stable@vger.kernel.org> Signed-off-by: Takashi Iwai <tiwai@suse.de>
2018-12-05 | thermal: broadcom: constify thermal_zone_of_device_ops structure | Julia Lawall | 1 file, -1/+1
The thermal_zone_of_device_ops structure can be const as it is only passed as the last argument of thermal_zone_of_sensor_register and the corresponding parameter is declared as const. Done with the help of Coccinelle. Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr> Reviewed-by: Daniel Lezcano <daniel.lezcano@linaro.org> Signed-off-by: Eduardo Valentin <edubezval@gmail.com>
2018-12-05 | thermal: armada: constify thermal_zone_of_device_ops structure | Julia Lawall | 1 file, -1/+1
The thermal_zone_of_device_ops structure can be const as it is only passed as the last argument of devm_thermal_zone_of_sensor_register and the corresponding parameter is declared as const. Done with the help of Coccinelle. Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr> Reviewed-by: Daniel Lezcano <daniel.lezcano@linaro.org> Signed-off-by: Eduardo Valentin <edubezval@gmail.com>
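Both constification patches above rely on the same property: an ops table of function pointers that is only ever read can be declared const and end up in read-only data. A minimal standalone illustration; the struct and names are stand-ins, not the thermal framework's types.

    #include <stdio.h>

    struct sensor_ops_sketch {
        int (*get_temp)(void *data, int *temp);
    };

    static int get_temp_sketch(void *data, int *temp)
    {
        (void)data;
        *temp = 42000;   /* millidegrees, made up for the demo */
        return 0;
    }

    /* const: the table is never written after initialization, so the
     * compiler may place it in .rodata. */
    static const struct sensor_ops_sketch of_ops = {
        .get_temp = get_temp_sketch,
    };

    int main(void)
    {
        int t;

        of_ops.get_temp(NULL, &t);
        printf("temp = %d\n", t);
        return 0;
    }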
2018-12-05 | SUNRPC: Don't force a redundant disconnection in xs_read_stream() | Trond Myklebust | 1 file, -7/+1
If the connection is broken, then xs_tcp_state_change() will take care of scheduling the socket close as soon as appropriate. xs_read_stream() just needs to report the error. Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
2018-12-05 | SUNRPC: Fix up socket polling | Trond Myklebust | 1 file, -2/+2
Ensure that we do not exit the socket read callback without clearing XPRT_SOCK_DATA_READY. Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
2018-12-05 | SUNRPC: Use the discard iterator rather than MSG_TRUNC | Trond Myklebust | 1 file, -2/+3
When discarding message data from the stream, we're better off using the discard iterator, since that will work with non-TCP streams. Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
2018-12-05 | SUNRPC: Treat EFAULT as a truncated message in xs_read_stream_request() | Trond Myklebust | 1 file, -1/+2
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
2018-12-05 | SUNRPC: Fix up handling of the XDRBUF_SPARSE_PAGES flag | Trond Myklebust | 1 file, -11/+11
If the allocator fails before it has reached the target number of pages, then we need to recheck that we're not seeking past the page buffer. Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
2018-12-05 | SUNRPC: Fix RPC receive hangs | Trond Myklebust | 1 file, -20/+19
The RPC code is occasionally hanging when the receive code fails to empty the socket buffer due to a partial read of the data. When we convert that to an EAGAIN, it appears we occasionally leave data in the socket. The fix is to just keep reading until the socket returns EAGAIN/EWOULDBLOCK. Reported-by: Catalin Marinas <catalin.marinas@arm.com> Reported-by: Cristian Marussi <cristian.marussi@arm.com> Reported-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com> Tested-by: Catalin Marinas <catalin.marinas@arm.com> Tested-by: Cristian Marussi <cristian.marussi@arm.com>
2018-12-05Revert "mfd: cros_ec: Use devm_kzalloc for private data"Enric Balletbo i Serra1-1/+7
This reverts commit 3aa2177e47878f7e7616da8a2050c44f22301b6e. That commit triggered a new WARN when unloading the module (see at the end of the commit message). When a class_dev is embedded in a structure then that class_dev is the thing that controls the lifetime of that structure, for that reason device managed allocations can't be used here. See Documentation/kobject.txt. Revert the above patch, so the struct is allocated using kzalloc and we have a release function for it that frees the allocated memory, otherwise it is broken. ------------[ cut here ]------------ Device 'cros_ec' does not have a release() function, it is broken and must be fixed. WARNING: CPU: 3 PID: 3675 at drivers/base/core.c:895 device_release+0x80/0x90 Modules linked in: btusb btrtl btintel btbcm bluetooth ... CPU: 3 PID: 3675 Comm: rmmod Not tainted 4.20.0-rc4 #76 Hardware name: Google Kevin (DT) pstate: 40000005 (nZcv daif -PAN -UAO) pc : device_release+0x80/0x90 lr : device_release+0x80/0x90 sp : ffff00000c47bc70 x29: ffff00000c47bc70 x28: ffff8000e86b0d40 x27: 0000000000000000 x26: 0000000000000000 x25: 0000000056000000 x24: 0000000000000015 x23: ffff8000f0bbf860 x22: ffff000000d320a0 x21: ffff8000ee93e100 x20: ffff8000ed931428 x19: ffff8000ed931418 x18: 0000000000000020 x17: 0000000000000000 x16: 0000000000000000 x15: 0000000000000400 x14: 0000000000000143 x13: 0000000000000000 x12: 0000000000000400 x11: 0000000000000157 x10: 0000000000000960 x9 : ffff00000c47b9b0 x8 : ffff8000e86b1700 x7 : 0000000000000000 x6 : ffff8000f7d520b8 x5 : ffff8000f7d520b8 x4 : 0000000000000000 x3 : ffff8000f7d58e68 x2 : ffff8000e86b0d40 x1 : 37d859939c964800 x0 : 0000000000000000 Call trace: device_release+0x80/0x90 kobject_put+0x74/0xe8 device_unregister+0x20/0x30 ec_device_remove+0x34/0x48 [cros_ec_dev] platform_drv_remove+0x28/0x48 device_release_driver_internal+0x1a8/0x240 driver_detach+0x40/0x80 bus_remove_driver+0x54/0xa8 driver_unregister+0x2c/0x58 platform_driver_unregister+0x10/0x18 cros_ec_dev_exit+0x1c/0x2d8 [cros_ec_dev] __arm64_sys_delete_module+0x16c/0x1f8 el0_svc_common+0x84/0xd8 el0_svc_handler+0x2c/0x80 el0_svc+0x8/0xc ---[ end trace a57c4625f3c60ae8 ]--- Cc: stable@vger.kernel.org Fixes: 3aa2177e4787 ("mfd: cros_ec: Use devm_kzalloc for private data") Signed-off-by: Enric Balletbo i Serra <enric.balletbo@collabora.com> Reviewed-by: Guenter Roeck <groeck@chromium.org> Reviewed-by: Dmitry Torokhov <dtor@chromium.org> Signed-off-by: Lee Jones <lee.jones@linaro.org>
2018-12-05 | dmaengine: cppi41: delete channel from pending list when stop channel | Bin Liu | 1 file, -1/+15
The driver defines three states for a cppi channel. - idle: .chan_busy == 0 && not in .pending list - pending: .chan_busy == 0 && in .pending list - busy: .chan_busy == 1 && not in .pending list There are cases in which the cppi channel could be in the pending state when cppi41_dma_issue_pending() is called after cppi41_runtime_suspend() is called. cppi41_stop_chan() has a bug for these cases to set channels to idle state. It only checks the .chan_busy flag, but not the .pending list, then later when cppi41_runtime_resume() is called the channels in .pending list will be transitioned to busy state. Removing channels from the .pending list solves the problem. Fixes: 975faaeb9985 ("dma: cppi41: start tear down only if channel is busy") Cc: stable@vger.kernel.org # v3.15+ Signed-off-by: Bin Liu <b-liu@ti.com> Reviewed-by: Peter Ujfalusi <peter.ujfalusi@ti.com> Signed-off-by: Vinod Koul <vkoul@kernel.org>
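The three states listed above make the bug easy to model: before the fix, stopping a channel cleared chan_busy but could leave it on the pending list, so it was silently "pending" rather than "idle" and got re-issued on resume. A toy standalone model of the corrected stop path:

    #include <stdbool.h>
    #include <stdio.h>

    struct chan_sketch {
        bool chan_busy;
        bool on_pending_list;
    };

    static void stop_chan(struct chan_sketch *c)
    {
        c->chan_busy = false;
        c->on_pending_list = false;   /* the piece that was missing before the fix */
    }

    static const char *state(const struct chan_sketch *c)
    {
        if (c->chan_busy)
            return "busy";
        return c->on_pending_list ? "pending" : "idle";
    }

    int main(void)
    {
        struct chan_sketch c = { .on_pending_list = true };

        stop_chan(&c);
        printf("state after stop: %s\n", state(&c));   /* idle, not pending */
        return 0;
    }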
2018-12-05 | dmaengine: imx-sdma: use GFP_NOWAIT for dma descriptor allocations | Lucas Stach | 1 file, -1/+1
DMA buffer descriptors aren't allocated from atomic context, so they can use the less heavyweight GFP_NOWAIT.

Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Signed-off-by: Robin Gong <yibin.gong@nxp.com>
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2018-12-05 | dmaengine: imx-sdma: implement channel termination via worker | Lucas Stach | 1 file, -13/+38
The dmaengine documentation states that device_terminate_all may be asynchronous and need not wait for the active transfers to stop. This allows us to move most of the functionality currently implemented in the sdma channel termination function to run in a worker, outside of any atomic context. Moving this out of atomic context has two benefits: we can now sleep while waiting for the channel to terminate, instead of busy waiting and the freeing of the dma descriptors happens with IRQs enabled, getting rid of a warning in the dma mapping code. As the termination is now async, we need to implement the device_synchronize dma engine function which simply waits for the worker to finish its execution. Signed-off-by: Lucas Stach <l.stach@pengutronix.de> Signed-off-by: Robin Gong <yibin.gong@nxp.com> Cc: stable <stable@vger.kernel.org> Signed-off-by: Vinod Koul <vkoul@kernel.org>
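A userspace analogy of the new split described above (a plain thread stands in for the kernel work item; the real driver uses a work_struct and a flush, not pthreads): terminate only kicks the worker, and synchronize is the point that actually waits.

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static pthread_t terminate_worker_thread;

    /* Stand-in for the work item: runs in a context that may sleep, waits
     * for the channel to drain and frees the descriptors. */
    static void *terminate_worker(void *arg)
    {
        (void)arg;
        usleep(1000);
        puts("channel drained, descriptors freed in sleeping context");
        return NULL;
    }

    /* device_terminate_all(): only kicks the worker, returns immediately. */
    static void terminate_all_sketch(void)
    {
        pthread_create(&terminate_worker_thread, NULL, terminate_worker, NULL);
    }

    /* device_synchronize(): the point that actually waits for termination. */
    static void synchronize_sketch(void)
    {
        pthread_join(terminate_worker_thread, NULL);
    }

    int main(void)
    {
        terminate_all_sketch();
        synchronize_sketch();
        return 0;
    }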
2018-12-05Revert "dmaengine: imx-sdma: alloclate bd memory from dma pool"Lucas Stach1-12/+6
This reverts commit fe5b85c656bc. The SDMA engine needs the descriptors to be contiguous in memory. As the dma pool API is only able to provide a single descriptor per alloc invocation there is no guarantee that multiple descriptors satisfy this requirement. Also the code in question is broken as it only allocates memory for a single descriptor, without looking at the number of descriptors required for the transfer, leading to out-of-bounds accesses when the descriptors are written. Signed-off-by: Lucas Stach <l.stach@pengutronix.de> Signed-off-by: Robin Gong <yibin.gong@nxp.com> Cc: stable <stable@vger.kernel.org> Signed-off-by: Vinod Koul <vkoul@kernel.org>
2018-12-05Revert "dmaengine: imx-sdma: Use GFP_NOWAIT for dma allocations"Lucas Stach1-2/+2
This reverts commit c1199875d327, as this depends on another commit that is going to be reverted. Signed-off-by: Lucas Stach <l.stach@pengutronix.de> Signed-off-by: Robin Gong <yibin.gong@nxp.com> Cc: stable <stable@vger.kernel.org> Signed-off-by: Vinod Koul <vkoul@kernel.org>
2018-12-04 | thermal: bcm2835: Switch to SPDX identifier | Stefan Wahren | 1 file, -10/+1
Adopt the SPDX license identifier headers to ease license compliance management. Cc: Martin Sperl <kernel@martin.sperl.org> Signed-off-by: Stefan Wahren <stefan.wahren@i2se.com> Signed-off-by: Eduardo Valentin <edubezval@gmail.com>
2018-12-04 | thermal: armada: fix legacy resource fixup | Russell King | 1 file, -13/+11
When the armada thermal module is inserted, removed and then reinserted, the system panics as per the messages below. The reason is that "edit" a live resource in the resource tree twice, and end up with it pointing to some other hardware. Editing live resources (resources that are part of the registered resource tree) is not permissible - the resource tree is an ordered set of resources, sorted by start address, and when a new resource is inserted, it is validated that it (a) fits within its parent resource and (b) does not overlap a neighbouring resource. Get rid of this resource editing. We can instead adjust the return value from ioremap() as ioremap() deals with the creation of page- based mappings - provided the adjustment does not cross a page boundary. SError Interrupt on CPU1, code 0xbf000000 -- SError CPU: 1 PID: 2749 Comm: modprobe Not tainted 4.19.0+ #175 Hardware name: Marvell 8040 MACCHIATOBin Double shot (DT) pstate: 20400085 (nzCv daIf +PAN -UAO) pc : regmap_mmio_read+0x3c/0x60 lr : regmap_mmio_read+0x3c/0x60 sp : ffffff800d453900 x29: ffffff800d453900 x28: ffffff800096a1d0 x27: 0000000000000100 x26: ffffff80009696d8 x25: ffffff8000969000 x24: ffffffc13a588918 x23: ffffffc13a9a28a8 x22: ffffff800d4539dc x21: 0000000000000084 x20: ffffff800d4539dc x19: ffffffc13a5d5480 x18: 0000000000000000 x17: 0000000000000000 x16: 0000000000000000 x15: 0000000000000000 x14: 0000000000000000 x13: 0000000000000000 x12: 0000000000000030 x11: 0101010101010101 x10: 7f7f7f7f7f7f7f7f x9 : 0000000000000000 x8 : ffffffc13a5d5a80 x7 : 0000000000000000 x6 : 000000000000003f x5 : 0000000000000000 x4 : 0000000000000000 x3 : ffffff800851be70 x2 : ffffff800851bd60 x1 : ffffff800d492ff8 x0 : 0000000000000000 Kernel panic - not syncing: Asynchronous SError Interrupt CPU: 1 PID: 2749 Comm: modprobe Not tainted 4.19.0+ #175 Hardware name: Marvell 8040 MACCHIATOBin Double shot (DT) Call trace: dump_backtrace+0x0/0x158 show_stack+0x14/0x1c dump_stack+0x90/0xb0 panic+0x128/0x298 print_tainted+0x0/0xa8 arm64_serror_panic+0x74/0x80 do_serror+0x5c/0xb8 el1_error+0xb4/0x144 regmap_mmio_read+0x3c/0x60 _regmap_bus_reg_read+0x18/0x20 _regmap_read+0x64/0x180 regmap_read+0x44/0x6c armada_ap806_init+0x24/0x5c [armada_thermal] armada_thermal_probe+0x2c8/0x37c [armada_thermal] platform_drv_probe+0x4c/0xb0 really_probe+0x21c/0x2b4 driver_probe_device+0x58/0xfc __driver_attach+0xd4/0xd8 bus_for_each_dev+0x50/0xa0 driver_attach+0x20/0x28 bus_add_driver+0x1c4/0x228 driver_register+0x6c/0x124 __platform_driver_register+0x4c/0x54 armada_thermal_driver_init+0x20/0x1000 [armada_thermal] do_one_initcall+0x30/0x204 do_init_module+0x5c/0x1d4 load_module+0x1a88/0x212c __se_sys_finit_module+0xa0/0xac __arm64_sys_finit_module+0x1c/0x24 el0_svc_common+0x94/0xf0 el0_svc_handler+0x24/0x80 el0_svc+0x8/0x3c0 SMP: stopping secondary CPUs Kernel Offset: disabled CPU features: 0x0,21806000 Memory Limit: none Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Tested-by: Miquel Raynal <miquel.raynal@bootlin.com> Signed-off-by: Eduardo Valentin <edubezval@gmail.com>
2018-12-04 | thermal: armada: fix legacy validity test sense | Russell King | 1 file, -1/+1
Commit 8c0e64ac4075 ("thermal: armada: get rid of the ->is_valid() pointer") removed the unnecessary indirection through a function pointer, but in doing so, also removed the negation operator too: - if (priv->data->is_valid && !priv->data->is_valid(priv)) { + if (armada_is_valid(priv)) { which results in: armada_thermal f06f808c.thermal: Temperature sensor reading not valid armada_thermal f2400078.thermal: Temperature sensor reading not valid armada_thermal f4400078.thermal: Temperature sensor reading not valid at boot, or whenever the "temp" sysfs file is read. Replace the negation operator. Fixes: 8c0e64ac4075 ("thermal: armada: get rid of the ->is_valid() pointer") Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: Eduardo Valentin <edubezval@gmail.com>
2018-12-04 | blk-mq: fix corruption with direct issue | Jens Axboe | 1 file, -1/+25
If we attempt a direct issue to a SCSI device, and it returns BUSY, then we queue the request up normally. However, the SCSI layer may have already setup SG tables etc for this particular command. If we later merge with this request, then the old tables are no longer valid. Once we issue the IO, we only read/write the original part of the request, not the new state of it. This causes data corruption, and is most often noticed with the file system complaining about the just read data being invalid: [ 235.934465] EXT4-fs error (device sda1): ext4_iget:4831: inode #7142: comm dpkg-query: bad extra_isize 24937 (inode size 256) because most of it is garbage... This doesn't happen from the normal issue path, as we will simply defer the request to the hardware queue dispatch list if we fail. Once it's on the dispatch list, we never merge with it. Fix this from the direct issue path by flagging the request as REQ_NOMERGE so we don't change the size of it before issue. See also: https://bugzilla.kernel.org/show_bug.cgi?id=201685 Tested-by: Guenter Roeck <linux@roeck-us.net> Fixes: 6ce3dd6eec1 ("blk-mq: issue directly if hw queue isn't busy in case of 'none'") Cc: stable@vger.kernel.org Signed-off-by: Jens Axboe <axboe@kernel.dk>
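Conceptually the fix freezes the request's size before handing it to the driver. A toy model of that invariant follows; the flag name and layout are illustrative, not the block layer's definitions.

    #include <stdbool.h>
    #include <stdio.h>

    #define REQ_NOMERGE_SKETCH (1u << 0)

    struct request_sketch {
        unsigned int cmd_flags;
        unsigned int nr_bytes;
    };

    static bool try_merge(struct request_sketch *rq, unsigned int extra)
    {
        if (rq->cmd_flags & REQ_NOMERGE_SKETCH)
            return false;               /* size is frozen for this request */
        rq->nr_bytes += extra;
        return true;
    }

    int main(void)
    {
        struct request_sketch rq = { .nr_bytes = 4096 };

        rq.cmd_flags |= REQ_NOMERGE_SKETCH;   /* set before the direct issue attempt */
        printf("merged: %d, bytes: %u\n", try_merge(&rq, 4096), rq.nr_bytes);
        return 0;
    }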
2018-12-04 | ARM: 8816/1: dma-mapping: fix potential uninitialized return | Nathan Jones | 1 file, -1/+1
While trying to use the dma_mmap_*() interface, it was noticed that this interface returns strange values when passed an incorrect length. If neither of the if() statements fire then the return value is uninitialized. In the worst case it returns 0 which means the caller will think the function succeeded. Fixes: 1655cf8829d8 ("ARM: dma-mapping: Remove traces of NOMMU code") Signed-off-by: Nathan Jones <nathanj439@gmail.com> Reviewed-by: Robin Murphy <robin.murphy@arm.com> Acked-by: Vladimir Murzin <vladimir.murzin@arm.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
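This bug class is easy to demonstrate in isolation: if neither branch assigns the return value, the caller sees whatever happened to be on the stack. A minimal sketch of the fixed shape; the bounds logic is simplified and -ENXIO mirrors the style of such checks, not necessarily the exact upstream code.

    #include <errno.h>
    #include <stdio.h>

    static int mmap_pages_sketch(unsigned long nr_vma_pages, unsigned long nr_pages,
                                 unsigned long pgoff)
    {
        int ret = -ENXIO;   /* the fix: fail by default instead of returning garbage */

        if (pgoff < nr_pages && nr_vma_pages <= nr_pages - pgoff) {
            /* ... perform the remap here ... */
            ret = 0;
        }
        return ret;
    }

    int main(void)
    {
        printf("%d\n", mmap_pages_sketch(4, 16, 20));   /* out of range -> -ENXIO */
        printf("%d\n", mmap_pages_sketch(4, 16, 2));    /* in range     -> 0 */
        return 0;
    }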
2018-12-04 | ARM: 8815/1: V7M: align v7m_dma_inv_range() with v7 counterpart | Vladimir Murzin | 1 file, -5/+9
Chris has discovered and reported that v7_dma_inv_range() may corrupt memory if address range is not aligned to cache line size. Since the whole cache-v7m.S was lifted form cache-v7.S the same observation applies to v7m_dma_inv_range(). So the fix just mirrors what has been done for v7 with a little specific of M-class. Cc: Chris Cole <chris@sageembedded.com> Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2018-12-04 | ARM: 8814/1: mm: improve/fix ARM v7_dma_inv_range() unaligned address handling | Chris Cole | 1 file, -3/+5
This patch addresses possible memory corruption when v7_dma_inv_range(start_address, end_address) address parameters are not aligned to whole cache lines. This function issues "invalidate" cache management operations to all cache lines from start_address (inclusive) to end_address (exclusive). When start_address and/or end_address are not aligned, the start and/or end cache lines are first issued a "clean & invalidate" operation. The assumption is that this is done to ensure that any dirty data addresses outside the address range (but part of the first or last cache lines) are cleaned/flushed so that data is not lost, which could happen if just an invalidate is issued.

The problem is that these first/last partial cache lines are issued "clean & invalidate" and then "invalidate". This second "invalidate" is not required and worse can cause "lost" writes to addresses outside the address range but part of the cache line. If another component writes to its part of the cache line between the "clean & invalidate" and "invalidate" operations, the write can get lost. This fix is to remove the extra "invalidate" operation when unaligned addresses are used.

A kernel module is available that has a stress test to reproduce the issue and a unit test of the updated v7_dma_inv_range(). It can be downloaded from http://ftp.sageembedded.com/outgoing/linux/cache-test-20181107.tgz.

v7_dma_inv_range() is called by dmac_[un]map_area(addr, len, direction) when the direction is DMA_FROM_DEVICE. One can (I believe) successfully argue that DMA from a device to main memory should use buffers aligned to cache line size, because the "clean & invalidate" might overwrite data that the device just wrote using DMA. But if a driver does use unaligned buffers, at least this fix will prevent memory corruption outside the buffer.

Signed-off-by: Chris Cole <chris@sageembedded.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>