2019-02-25  dmaengine: imx-sdma: add a test for imx8mq multi sdma devices  (Angus Ainslie (Purism); 2 files, -0/+7)

On i.mx8mq there are two SDMA instances, and the common DMA framework
will get a channel dynamically from whichever instance is available,
whether it is the first or the second SDMA device. Some IPs like SAI
only work with sdma2, not sdma1. To make sure the SDMA channel comes
from the correct SDMA device, use the device-tree node pointer to
match.

Signed-off-by: Angus Ainslie (Purism) <angus@akkea.ca>
Reviewed-by: Lucas Stach <l.stach@pengutronix.de>
Tested-by: Daniel Baluta <daniel.baluta@nxp.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
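A node-matching filter in this spirit might look like the following
sketch (function and parameter names are hypothetical, not the actual
diff):

    /* Accept only channels whose controller's OF node matches the
     * node the client asked for, so e.g. SAI reliably lands on sdma2. */
    static bool sdma_filter_match_node(struct dma_chan *chan, void *param)
    {
    	struct device_node *wanted = param;	/* assumed parameter */

    	return chan->device->dev->of_node == wanted;
    }
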
2019-02-25  dmaengine: imx-sdma: add clock ratio 1:1 check  (Angus Ainslie (Purism); 1 file, -4/+14)

On the i.MX8 mscale B0 chip, an AHB/SDMA clock ratio of 2:1 can't be
supported: the SDMA clock has to be increased to 250 MHz, and AHB can't
reach 500 MHz, so use a 1:1 ratio instead. Based on NXP commit
MLK-16841-1 by Robin Gong <yibin.gong@nxp.com>.

Signed-off-by: Angus Ainslie (Purism) <angus@akkea.ca>
Reviewed-by: Lucas Stach <l.stach@pengutronix.de>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
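The check presumably boils down to comparing the two clock rates and
programming the ratio bit accordingly; a sketch under assumed clock and
bit names:

    /* If AHB and the SDMA core clock run at the same rate, select the
     * 1:1 ratio instead of the default 2:1. */
    bool ratio_1_1 = clk_get_rate(sdma->clk_ahb) ==
    		     clk_get_rate(sdma->clk_ipg);

    if (ratio_1_1)
    	config |= SDMA_H_CONFIG_ACR;	/* assumed bit name */
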
2019-02-25  dmaengine: dmatest: move test data alloc & free into functions  (Alexandru Ardelean; 1 file, -55/+55)

This patch starts to take advantage of the `dmatest_data` struct by
moving the common allocation & free-ing bits into functions.

Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-02-25  dmaengine: dmatest: add short-hand `buf_size` var in dmatest_func()  (Alexandru Ardelean; 1 file, -15/+17)

This is just a cosmetic change, since this variable gets used quite a
bit inside the dmatest_func() routine.

Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-02-25  dmaengine: dmatest: wrap src & dst data into a struct  (Alexandru Ardelean; 1 file, -94/+101)

This change wraps the data for the source & destination buffers into a
`struct dmatest_data`. The rename patterns are:

 * src_cnt       -> src->cnt
 * dst_cnt       -> dst->cnt
 * src_off       -> src->off
 * dst_off       -> dst->off
 * thread->srcs  -> src->aligned
 * thread->usrcs -> src->raw
 * thread->dsts  -> dst->aligned
 * thread->udsts -> dst->raw

The intent is to make a function that moves duplicate parts of the code
into common alloc & free functions, which will unclutter the
`dmatest_func()` function; see the sketch after this entry.

Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
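Inferred from the rename pattern above, the struct would look roughly
like this (a sketch, not necessarily the exact definition merged):

    struct dmatest_data {
    	u8		**raw;		/* buffers as allocated */
    	u8		**aligned;	/* aligned pointers used for DMA */
    	unsigned int	cnt;		/* number of buffers */
    	unsigned int	off;		/* test offset within each buffer */
    };
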
2019-02-25  dmaengine: ioatdma: support latency tolerance report (LTR) for v3.4  (Dave Jiang; 2 files, -0/+41)

IOATDMA 3.4 supports the PCIe LTR mechanism, but through non-standard
registers rather than the standard PCIe LTR capability. These need to
be set up in order not to suffer a performance impact and to provide
proper power management. The channel is set to active when it is
allocated, and to passive when it is freed.

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-02-25  dmaengine: ioatdma: add descriptor pre-fetch support for v3.4  (Dave Jiang; 3 files, -2/+28)

Add support for a new feature on ioatdma 3.4 hardware that provides
descriptor pre-fetching in order to reduce small DMA latencies.

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-02-25  dmaengine: ioatdma: disable DCA enabling on IOATDMA v3.4  (Dave Jiang; 2 files, -0/+3)

IOATDMA v3.4 does not support DCA, so disable it on that version.

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-02-25  dmaengine: ioatdma: Add Snow Ridge ioatdma device id  (Dave Jiang; 3 files, -1/+6)

Add the Snow Ridge Xeon-D ioatdma PCI device id; it also applies to
Ice Lake SP Xeon. This introduces the ioatdma v3.4 platform. Also bump
the driver version to 5.0, since additional code for 3.4 support is
being added.

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-02-25  dmaengine: sprd: Change channel id to slave id for DMA cell specifier  (Baolin Wang; 1 file, -15/+4)

We will describe the slave id in the DMA cell specifier instead of the
DMA channel id, so save the slave id in the DMA engine translation
function and remove the channel id validation. We also no longer need
to set a default slave id in sprd_dma_alloc_chan_resources(), so remove
that as well.

Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-02-25  dt-bindings: dmaengine: sprd: Change channel id to slave id for DMA cell specifier  (Baolin Wang; 1 file, -1/+1)

For the Spreadtrum DMA engine, all channels are equal: a slave can
request any channel by setting a unique slave id that triggers it. Thus
we can remove the channel id from the device tree and assign channels
dynamically; instead, the slave id should be added to the device tree.

Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
Reviewed-by: Rob Herring <robh@kernel.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-02-19  dmaengine: mv_xor: Use correct device for DMA API  (Robin Murphy; 1 file, -1/+1)

Using dma_dev->dev for mappings before it's assigned with the correct
device is unlikely to work as expected, and with future dma-direct
changes, passing a NULL device may end up crashing entirely. I don't
know enough about this hardware or the mv_xor_prep_dma_interrupt()
operation to implement the appropriate error-handling logic that would
have revealed those dma_map_single() calls failing on arm64 for as long
as the driver has been enabled there, but moving the assignment earlier
will at least make the current code operate as intended.

Fixes: 22843545b200 ("dma: mv_xor: Add support for DMA_INTERRUPT")
Reported-by: John David Anglin <dave.anglin@bell.net>
Tested-by: John David Anglin <dave.anglin@bell.net>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Tested-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-02-11  Documentation: dmaengine: clarify DMA desc. pointer after submission  (Federico Vaga; 1 file, -0/+7)

Clarify that the DMA descriptor pointer returned by the
`dmaengine_prep_*` functions should not be used after submission.

Signed-off-by: Federico Vaga <federico.vaga@cern.ch>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
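For context, the usual client pattern looks like this (an illustrative
sketch of ordinary dmaengine usage, not code from the patch):

    struct dma_async_tx_descriptor *desc;
    dma_cookie_t cookie;

    desc = dmaengine_prep_slave_sg(chan, sgl, nents,
    				   DMA_MEM_TO_DEV, DMA_PREP_INTERRUPT);
    if (!desc)
    	return -ENOMEM;

    cookie = dmaengine_submit(desc);
    dma_async_issue_pending(chan);
    /* From here on, 'desc' belongs to the engine and must not be
     * dereferenced; track the transfer via 'cookie' instead. */
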
2019-02-11  Documentation: dmaengine: fix dmatest.rst warning  (Randy Dunlap; 1 file, -0/+1)

Fix markup warning: insert a blank line before the hint.

    Documentation/driver-api/dmaengine/dmatest.rst:63: WARNING: Unexpected indentation.

Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Acked-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-02-04  dmaengine: bcm2835: Drop outdated comment on supported transactions  (Lukas Wunner; 1 file, -3/+0)

Remove an outdated comment claiming the driver only supports cyclic
transactions. The driver has been supporting other transaction types
for more than two years.

Signed-off-by: Lukas Wunner <lukas@wunner.de>
Cc: Frank Pavlic <f.pavlic@kunbus.de>
Cc: Martin Sperl <kernel@martin.sperl.org>
Cc: Florian Meier <florian.meier@koalo.de>
Tested-by: Stefan Wahren <stefan.wahren@i2se.com>
Acked-by: Florian Kauer <florian.kauer@koalo.de>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-02-04  dmaengine: bcm2835: Drop gratuitous list deletion  (Lukas Wunner; 1 file, -10/+0)

The BCM2835 DMA driver deletes a channel from a list upon termination
without having added it to a list first. Moreover that operation is
protected by a spinlock which isn't taken anywhere else. These appear
to be remnants of an older version of the driver which accidentally got
mainlined. Remove the dead code.

Signed-off-by: Lukas Wunner <lukas@wunner.de>
Cc: Frank Pavlic <f.pavlic@kunbus.de>
Cc: Martin Sperl <kernel@martin.sperl.org>
Cc: Florian Meier <florian.meier@koalo.de>
Tested-by: Stefan Wahren <stefan.wahren@i2se.com>
Acked-by: Florian Kauer <florian.kauer@koalo.de>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-02-04  dmaengine: bcm2835: Enforce control block alignment  (Lukas Wunner; 1 file, -1/+5)

Per section 4.2.1.1 of the BCM2835 ARM Peripherals spec, control blocks
"must start at a 256 bit aligned address":
https://www.raspberrypi.org/app/uploads/2012/02/BCM2835-ARM-Peripherals.pdf

This rule is currently satisfied only by accident because struct
bcm2835_dma_cb has a size of 256 bit and the DMA pool API happens to
allocate blocks consecutively. It seems safer to be explicit and tell
the DMA pool allocator about the required alignment.

Signed-off-by: Lukas Wunner <lukas@wunner.de>
Cc: Frank Pavlic <f.pavlic@kunbus.de>
Cc: Martin Sperl <kernel@martin.sperl.org>
Cc: Florian Meier <florian.meier@koalo.de>
Tested-by: Stefan Wahren <stefan.wahren@i2se.com>
Acked-by: Florian Kauer <florian.kauer@koalo.de>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
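The explicit version amounts to passing an alignment to the pool at
creation time, roughly like this (a sketch; 256 bits = 32 bytes, and
the pool name and variables are assumed):

    od->pool = dma_pool_create("bcm2835-dma", dev,
    			       sizeof(struct bcm2835_dma_cb),
    			       32 /* 256-bit alignment */, 0);
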
2019-02-04  dmaengine: bcm2835: Return void from abort of transactions  (Lukas Wunner; 1 file, -3/+2)

bcm2835_dma_abort() returns an int but bcm2835_dma_terminate_all() (its
sole caller) does not evaluate the return value. Change the return type
to void.

Signed-off-by: Lukas Wunner <lukas@wunner.de>
Cc: Frank Pavlic <f.pavlic@kunbus.de>
Cc: Martin Sperl <kernel@martin.sperl.org>
Cc: Florian Meier <florian.meier@koalo.de>
Tested-by: Stefan Wahren <stefan.wahren@i2se.com>
Acked-by: Florian Kauer <florian.kauer@koalo.de>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-02-04  dmaengine: bcm2835: Fix abort of transactions  (Lukas Wunner; 1 file, -32/+9)

There are multiple issues with bcm2835_dma_abort() (which is called on
termination of a transaction):

 * The algorithm to abort the transaction first pauses the channel by
   clearing the ACTIVE flag in the CS register, then waits for the
   PAUSED flag to clear. Page 49 of the spec documents the latter as
   follows:

       "Indicates if the DMA is currently paused and not transferring
       data. This will occur if the active bit has been cleared [...]"

   https://www.raspberrypi.org/app/uploads/2012/02/BCM2835-ARM-Peripherals.pdf

   So the function is entering an infinite loop because it is waiting
   for PAUSED to clear, which is always set due to the function having
   cleared the ACTIVE flag. The only thing saving it from itself is the
   upper bound of 10000 loop iterations. The code comment says that the
   intention is to "wait for any current AXI transfer to complete", so
   the author probably wanted to check the
   WAITING_FOR_OUTSTANDING_WRITES flag instead. Amend the function
   accordingly.

 * The CS register is only read at the beginning of the function. It
   needs to be read again after pausing the channel and before checking
   for outstanding writes; otherwise writes which were issued between
   the register read at the beginning of the function and pausing the
   channel may not be waited for.

 * The function seeks to abort the transfer by writing 0 to the
   NEXTCONBK register and setting the ABORT and ACTIVE flags. Thereby,
   the 0 in NEXTCONBK is sought to be loaded into the CONBLK_AD
   register. However experimentation has shown this approach not to
   work: the CONBLK_AD register remains the same as before and the CS
   register contains 0x00000030 (PAUSED | DREQ_STOPS_DMA). In other
   words, the control block is not aborted but merely paused, and it
   will be resumed once the next DMA transaction is started. That is
   absolutely not the desired behavior.

   A simpler approach is to set the channel's RESET flag instead. This
   reliably zeroes the NEXTCONBK as well as the CS register. It
   requires less code and only a single MMIO write. This is also what
   popular user space DMA drivers do, e.g.:
   https://github.com/metachris/RPIO/blob/master/source/c_pwm/pwm.c

   Note that the spec is contradictory on whether the NEXTCONBK
   register is writeable at all. On the one hand, page 41 claims:

       "The value loaded into the NEXTCONBK register can be overwritten
       so that the linked list of Control Block data structures can be
       dynamically altered. However it is only safe to do this when the
       DMA is paused."

   On the other hand, page 40 specifies:

       "Only three registers in each channel's register set are
       directly writeable (CS, CONBLK_AD and DEBUG). The other
       registers (TI, SOURCE_AD, DEST_AD, TXFR_LEN, STRIDE &
       NEXTCONBK), are automatically loaded from a Control Block data
       structure held in external memory."

Fixes: 96286b576690 ("dmaengine: Add support for BCM2835")
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Cc: stable@vger.kernel.org # v3.14+
Cc: Frank Pavlic <f.pavlic@kunbus.de>
Cc: Martin Sperl <kernel@martin.sperl.org>
Cc: Florian Meier <florian.meier@koalo.de>
Cc: Clive Messer <clive.m.messer@gmail.com>
Cc: Matthias Reichl <hias@horus.com>
Tested-by: Stefan Wahren <stefan.wahren@i2se.com>
Acked-by: Florian Kauer <florian.kauer@koalo.de>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
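Put together, the fixed abort path can be reduced to something like the
following sketch (flag and register names as discussed above; not the
literal diff):

    /* Pause the channel, wait for outstanding AXI writes to drain,
     * then reset it - which zeroes CS and NEXTCONBK in one write. */
    writel(0, chan_base + BCM2835_DMA_CS);		/* clear ACTIVE */

    while (readl(chan_base + BCM2835_DMA_CS) &
           BCM2835_DMA_WAITING_FOR_WRITES)	/* re-read CS each pass */
    	cpu_relax();

    writel(BCM2835_DMA_RESET, chan_base + BCM2835_DMA_CS);
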
2019-02-04  dmaengine: bcm2835: Fix interrupt race on RT  (Lukas Wunner; 1 file, -15/+18)

If IRQ handlers are threaded (either because CONFIG_PREEMPT_RT_BASE is
enabled or "threadirqs" was passed on the command line) and if system
load is sufficiently high that wakeup latency of IRQ threads degrades,
SPI DMA transactions on the BCM2835 occasionally break like this:

    ks8851 spi0.0: SPI transfer timed out
    bcm2835-dma 3f007000.dma: DMA transfer could not be terminated
    ks8851 spi0.0 eth2: ks8851_rdfifo: spi_sync() failed

The root cause is an assumption made by the DMA driver which is
documented in a code comment in bcm2835_dma_terminate_all():

    /*
     * Stop DMA activity: we assume the callback will not be called
     * after bcm_dma_abort() returns (even if it does, it will see
     * c->desc is NULL and exit.)
     */

That assumption falls apart if the IRQ handler bcm2835_dma_callback()
is threaded: a client may terminate a descriptor and issue a new one
before the IRQ handler had a chance to run. In fact the IRQ handler may
miss an *arbitrary* number of descriptors. The result is the following
race condition:

 1. A descriptor finishes, its interrupt is deferred to the IRQ thread.
 2. A client calls dma_terminate_async() which sets
    channel->desc = NULL.
 3. The client issues a new descriptor. Because channel->desc is NULL,
    bcm2835_dma_issue_pending() immediately starts the descriptor.
 4. Finally the IRQ thread runs and writes BCM2835_DMA_INT to the CS
    register to acknowledge the interrupt. This clears the ACTIVE flag,
    so the newly issued descriptor is paused in the middle of the
    transaction. Because channel->desc is not NULL, the IRQ thread
    finalizes the descriptor and tries to start the next one.

I see two possible solutions: The first is to call synchronize_irq() in
bcm2835_dma_issue_pending() to wait until the IRQ thread has finished
before issuing a new descriptor. The downside of this approach is
unnecessary latency if clients desire rapidly terminating and
re-issuing descriptors and don't have any use for an IRQ callback. (The
SPI TX DMA channel is a case in point.)

A better alternative is to make the IRQ thread recognize that it has
missed descriptors and avoid finalizing the newly issued descriptor. So
first of all, set the ACTIVE flag when acknowledging the interrupt.
This keeps a newly issued descriptor running. If the descriptor was
finished, the channel remains idle despite the ACTIVE flag being set.
However the ACTIVE flag can then no longer be used to check whether the
channel is idle, so instead check whether the register containing the
current control block address is zero and finalize the current
descriptor only if so.

That way, there is no impact on latency and throughput if the client
doesn't care for the interrupt: only minimal additional overhead is
introduced for non-cyclic descriptors, as one further MMIO read is
necessary per interrupt to check for idleness of the channel. Cyclic
descriptors are sped up slightly by removing one MMIO write per
interrupt.

Fixes: 96286b576690 ("dmaengine: Add support for BCM2835")
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Cc: stable@vger.kernel.org # v3.14+
Cc: Frank Pavlic <f.pavlic@kunbus.de>
Cc: Martin Sperl <kernel@martin.sperl.org>
Cc: Florian Meier <florian.meier@koalo.de>
Cc: Clive Messer <clive.m.messer@gmail.com>
Cc: Matthias Reichl <hias@horus.com>
Tested-by: Stefan Wahren <stefan.wahren@i2se.com>
Acked-by: Florian Kauer <florian.kauer@koalo.de>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
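In the interrupt handler, the acknowledge-and-check-idle logic would
look roughly like this (a sketch using the flag and register names
discussed above, with an assumed finalization helper):

    /* Ack with ACTIVE set so a descriptor issued after a missed
     * interrupt keeps running instead of being paused. */
    writel(BCM2835_DMA_INT | BCM2835_DMA_ACTIVE,
           chan_base + BCM2835_DMA_CS);

    /* ACTIVE no longer indicates idleness; a zero control block
     * address does. Only then finalize the current descriptor. */
    if (c->desc && !readl(chan_base + BCM2835_DMA_ADDR))
    	vchan_cookie_complete(&c->desc->vd);	/* assumed helper */
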
2019-01-20  dmaengine: axi-dmac: Use struct_size() in kzalloc()  (Gustavo A. R. Silva; 1 file, -2/+1)

One of the more common cases of allocation size calculations is finding
the size of a structure that has a zero-sized array at the end, along
with memory for some number of elements for that array. For example:

    struct foo {
    	int stuff;
    	void *entry[];
    };

    instance = kzalloc(sizeof(struct foo) + sizeof(void *) * count,
    		       GFP_KERNEL);

Instead of leaving these open-coded and prone to type mistakes, we can
now use the new struct_size() helper:

    instance = kzalloc(struct_size(instance, entry, count), GFP_KERNEL);

This code was detected with the help of Coccinelle.

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-01-20  dmaengine: timb_dma: Use struct_size() in kzalloc()  (Gustavo A. R. Silva; 1 file, -2/+2)

One of the more common cases of allocation size calculations is finding
the size of a structure that has a zero-sized array at the end, along
with memory for some number of elements for that array. For example:

    struct foo {
    	int stuff;
    	void *entry[];
    };

    instance = kzalloc(sizeof(struct foo) + sizeof(void *) * count,
    		       GFP_KERNEL);

Instead of leaving these open-coded and prone to type mistakes, we can
now use the new struct_size() helper:

    instance = kzalloc(struct_size(instance, entry, count), GFP_KERNEL);

This code was detected with the help of Coccinelle.

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-01-20  dmaengine: qcom_hidma: assign channel cookie correctly  (Shunyong Yang; 1 file, -8/+9)

When dma_cookie_complete() is called in hidma_process_completed(),
dma_cookie_status() will return DMA_COMPLETE in hidma_tx_status().
Then, hidma_txn_is_success() will be called, using the channel cookie
mchan->last_success to do an additional DMA status check.

Current code assigns mchan->last_success after dma_cookie_complete().
This opens a race window in which dma_cookie_status() returns
DMA_COMPLETE before mchan->last_success is assigned correctly. The race
will cause hidma_tx_status() to return DMA_ERROR even though the
transaction actually succeeded. Moreover, in the async_tx case, it will
cause a timeout panic in async_tx_quiesce():

    Kernel panic - not syncing: async_tx_quiesce: DMA error waiting for transaction
    ...
    Call trace:
    [<ffff000008089994>] dump_backtrace+0x0/0x1f4
    [<ffff000008089bac>] show_stack+0x24/0x2c
    [<ffff00000891e198>] dump_stack+0x84/0xa8
    [<ffff0000080da544>] panic+0x12c/0x29c
    [<ffff0000045d0334>] async_tx_quiesce+0xa4/0xc8 [async_tx]
    [<ffff0000045d03c8>] async_trigger_callback+0x70/0x1c0 [async_tx]
    [<ffff0000048b7d74>] raid_run_ops+0x86c/0x1540 [raid456]
    [<ffff0000048bd084>] handle_stripe+0x5e8/0x1c7c [raid456]
    [<ffff0000048be9ec>] handle_active_stripes.isra.45+0x2d4/0x550 [raid456]
    [<ffff0000048beff4>] raid5d+0x38c/0x5d0 [raid456]
    [<ffff000008736538>] md_thread+0x108/0x168
    [<ffff0000080fb1cc>] kthread+0x10c/0x138
    [<ffff000008084d34>] ret_from_fork+0x10/0x18

Cc: Joey Zheng <yu.zheng@hxt-semitech.com>
Reviewed-by: Sinan Kaya <okaya@kernel.org>
Signed-off-by: Shunyong Yang <shunyong.yang@hxt-semitech.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
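The fix is essentially an ordering change in the completion path, along
these lines (a sketch with assumed local variable names):

    /* Record the last successful cookie *before* completing it, so a
     * concurrent hidma_tx_status() can never observe DMA_COMPLETE
     * while last_success is still stale. */
    if (llstat == DMA_COMPLETE)
    	mchan->last_success = last_cookie;

    dma_cookie_complete(&mdesc->desc);
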
2019-01-20  dmaengine: qcom_hidma: initialize tx flags in hidma_prep_dma_*  (Shunyong Yang; 1 file, -0/+2)

In async_tx_test_ack(), the flags in struct dma_async_tx_descriptor are
used to check the ACK status. As hidma reuses descriptors from a free
list when hidma_prep_dma_*(memcpy/memset) is called, the flag will stay
ACKed if the descriptor has been used before. This will cause a BUG_ON
in async_tx_quiesce():

    kernel BUG at crypto/async_tx/async_tx.c:282!
    Internal error: Oops - BUG: 0 [#1] SMP
    ...
    task: ffff8017dd3ec000 task.stack: ffff8017dd3e8000
    PC is at async_tx_quiesce+0x54/0x78 [async_tx]
    LR is at async_trigger_callback+0x98/0x110 [async_tx]

This patch initializes the flags in dma_async_tx_descriptor from the
flags passed by the caller when hidma_prep_dma_*(memcpy/memset) is
called.

Cc: Joey Zheng <yu.zheng@hxt-semitech.com>
Reviewed-by: Sinan Kaya <okaya@kernel.org>
Signed-off-by: Shunyong Yang <shunyong.yang@hxt-semitech.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
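The change itself should amount to a one-liner in each prep routine,
roughly (variable names assumed):

    /* Descriptors come from a free list and may carry a stale ACK bit
     * from a previous use; re-initialize flags per transaction. */
    mdesc->desc.flags = flags;
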
2019-01-07  dmaengine: qcom: bam_dma: use struct_size() in kzalloc()  (Gustavo A. R. Silva; 1 file, -2/+2)

One of the more common cases of allocation size calculations is finding
the size of a structure that has a zero-sized array at the end, along
with memory for some number of elements for that array. For example:

    struct foo {
    	int stuff;
    	void *entry[];
    };

    instance = kzalloc(sizeof(struct foo) + sizeof(void *) * count,
    		       GFP_KERNEL);

Instead of leaving these open-coded and prone to type mistakes, we can
now use the new struct_size() helper:

    instance = kzalloc(struct_size(instance, entry, count), GFP_KERNEL);

This code was detected with the help of Coccinelle.

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-01-07  dmaengine: st_fdma: use struct_size() in kzalloc()  (Gustavo A. R. Silva; 1 file, -2/+1)

One of the more common cases of allocation size calculations is finding
the size of a structure that has a zero-sized array at the end, along
with memory for some number of elements for that array. For example:

    struct foo {
    	int stuff;
    	void *entry[];
    };

    instance = kzalloc(sizeof(struct foo) + sizeof(void *) * count,
    		       GFP_KERNEL);

Instead of leaving these open-coded and prone to type mistakes, we can
now use the new struct_size() helper:

    instance = kzalloc(struct_size(instance, entry, count), GFP_KERNEL);

This code was detected with the help of Coccinelle.

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Acked-by: Patrice Chotard <patrice.chotard@st.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-01-07  dmaengine: dma-jz4780: Use struct_size() in devm_kzalloc()  (Gustavo A. R. Silva; 1 file, -3/+2)

One of the more common cases of allocation size calculations is finding
the size of a structure that has a zero-sized array at the end, along
with memory for some number of elements for that array. For example:

    struct foo {
    	int stuff;
    	void *entry[];
    };

    instance = devm_kzalloc(dev, sizeof(struct foo) +
    			    sizeof(void *) * count, GFP_KERNEL);

Instead of leaving these open-coded and prone to type mistakes, we can
now use the new struct_size() helper:

    instance = devm_kzalloc(dev, struct_size(instance, entry, count),
    			    GFP_KERNEL);

This issue was detected with the help of Coccinelle.

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-01-07  dmaengine: bcm2835: Use struct_size() in kzalloc()  (Gustavo A. R. Silva; 1 file, -2/+1)

One of the more common cases of allocation size calculations is finding
the size of a structure that has a zero-sized array at the end, along
with memory for some number of elements for that array. For example:

    struct foo {
    	int stuff;
    	void *entry[];
    };

    instance = kzalloc(sizeof(struct foo) + sizeof(void *) * count,
    		       GFP_KERNEL);

Instead of leaving these open-coded and prone to type mistakes, we can
now use the new struct_size() helper:

    instance = kzalloc(struct_size(instance, entry, count), GFP_KERNEL);

This code was detected with the help of Coccinelle.

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-01-07  dmaengine: qcom_hidma: Check for driver register failure  (Aditya Pakki; 1 file, -2/+1)

While initializing the driver, platform_driver_register() can fail and
return an error. Consistent with other invocations, this patch returns
the error upstream.

Signed-off-by: Aditya Pakki <pakki001@umn.edu>
Acked-by: Sinan Kaya <okaya@kernel.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-01-07  dmaengine: sa11x0: drop useless LIST_HEAD  (Julia Lawall; 1 file, -2/+0)

Drop LIST_HEAD where the variable it declares has never been used.

The semantic patch that fixes this problem is as follows
(http://coccinelle.lip6.fr/):

    // <smpl>
    @@
    identifier x;
    @@
    - LIST_HEAD(x);
      ... when != x
    // </smpl>

Fixes: 4a533218fccf ("dmaengine: sa11x0: Split device_control")
Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-01-07  dmaengine: pl330: drop useless LIST_HEAD  (Julia Lawall; 1 file, -1/+0)

Drop LIST_HEAD where the variable it declares is never used. The
variable has not been used since the function was introduced in
740aa95703c5 ("dmaengine: pl330: Split device_control").

The semantic patch that fixes this problem is as follows
(http://coccinelle.lip6.fr/):

    // <smpl>
    @@
    identifier x;
    @@
    - LIST_HEAD(x);
      ... when != x
    // </smpl>

Fixes: 740aa95703c5 ("dmaengine: pl330: Split device_control")
Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-01-07  dmaengine: st_fdma: drop useless LIST_HEAD  (Julia Lawall; 1 file, -3/+0)

Drop LIST_HEAD where the variable it declares is never used. The
declarations were introduced with the file, but the declared variables
were not used.

The semantic patch that fixes this problem is as follows
(http://coccinelle.lip6.fr/):

    // <smpl>
    @@
    identifier x;
    @@
    - LIST_HEAD(x);
      ... when != x
    // </smpl>

Fixes: 6b4cd727eaf1 ("dmaengine: st_fdma: Add STMicroelectronics FDMA engine driver support")
Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-01-07  dmaengine: dw: drop useless LIST_HEAD  (Julia Lawall; 1 file, -1/+0)

Drop LIST_HEAD where the variable it declares is never used. Commit
ab703f818ac3 ("dmaengine: dw: lazy allocation of dma descriptors")
removed the uses, but not the declaration.

The semantic patch that fixes this problem is as follows
(http://coccinelle.lip6.fr/):

    // <smpl>
    @@
    identifier x;
    @@
    - LIST_HEAD(x);
      ... when != x
    // </smpl>

Fixes: ab703f818ac3 ("dmaengine: dw: lazy allocation of dma descriptors")
Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-01-07  dmaengine: at_hdmac: drop useless LIST_HEAD  (Julia Lawall; 1 file, -5/+0)

Drop LIST_HEAD where the variable it declares is never used. tmp_list
has been declared since the introduction of the driver and has never
been used. The two declarations of list were introduced with the
containing functions but were also not used.

The semantic patch that fixes this problem is as follows
(http://coccinelle.lip6.fr/):

    // <smpl>
    @@
    identifier x;
    @@
    - LIST_HEAD(x);
      ... when != x
    // </smpl>

Fixes: dc78baa2b90b ("dmaengine: at_hdmac: new driver for the Atmel AHB DMA Controller")
Fixes: 4facfe7f09f2b ("dmaengine: hdmac: Split device_control")
Acked-by: Ludovic Desroches <ludovic.desroches@microchip.com>
Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-01-06  Linux 5.0-rc1  (Linus Torvalds; 1 file, -3/+3)
2019-01-06  Change mincore() to count "mapped" pages rather than "cached" pages  (Linus Torvalds; 1 file, -81/+13)

The semantics of what "in core" means for the mincore() system call are
somewhat unclear, but Linux has always (since 2.3.52, which is when
mincore() was initially done) treated it as "page is available in page
cache" rather than "page is mapped in the mapping".

The problem with that traditional semantic is that it exposes a lot of
system cache state that it really probably shouldn't, and that users
shouldn't really even care about. So let's try to avoid that
information leak by simply changing the semantics to be that mincore()
counts actual mapped pages, not pages that might be cheaply mapped if
they were faulted (note the "might be" part of the old semantics: being
in the cache doesn't actually guarantee that you can access them
without IO anyway, since things like network filesystems may have to
revalidate the cache before use).

In many ways the old semantics were somewhat insane even aside from the
information leak issue. From the very beginning (and that beginning is
a long time ago: 2.3.52 was released in March 2000, I think), the code
had a comment saying

    Later we can get more picky about what "in core" means precisely.

and this is that "later". Admittedly it is much later than is really
comfortable.

NOTE! This is a real semantic change, and it is for example known to
change the output of "fincore", since that program literally does an
mmap without populating it, and then does "mincore()" on that mapping
that doesn't actually have any pages in it.

I'm hoping that nobody actually has any workflow that cares, and the
info leak is real. We may have to do something different if it turns
out that people have valid reasons to want the old semantics, and if we
can limit the information leak sanely.

Cc: Kevin Easton <kevin@guarana.org>
Cc: Jiri Kosina <jikos@kernel.org>
Cc: Masatake YAMATO <yamato@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Michal Hocko <mhocko@suse.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
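The fincore-style case can be demonstrated with a few lines of user
space code (illustrative only; error handling omitted, file name
assumed):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
    	unsigned char vec[1];
    	int fd = open("somefile", O_RDONLY);	/* any cached file */
    	/* Map one page but never touch it. */
    	void *p = mmap(NULL, 4096, PROT_READ, MAP_PRIVATE, fd, 0);

    	mincore(p, 4096, vec);
    	/* Old semantics: prints 1 if the page is in the page cache.
    	 * New semantics: prints 0 until this mapping faults it in. */
    	printf("resident: %d\n", vec[0] & 1);
    	close(fd);
    	return 0;
    }
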
2019-01-06  Fix 'access_ok()' on alpha and SH  (Linus Torvalds; 2 files, -5/+10)

Commit 594cc251fdd0 ("make 'user_access_begin()' do 'access_ok()'")
broke both alpha and SH booting in qemu, as noticed by Guenter Roeck.

It turns out that the bug wasn't actually in that commit itself (which
would have been surprising: it was mostly a no-op), but in how the
addition of access_ok() to the strncpy_from_user() and strnlen_user()
functions now triggered the case where those functions would test the
access of the very last byte of the user address space.

The string functions actually did that user range test before too, but
they did it manually by just comparing against user_addr_max(). But
with user_access_begin() doing the check (using "access_ok()"), it now
exposed problems in the architecture implementations of that function.

For example, on alpha, the access_ok() helper macro looked like this:

    #define __access_ok(addr, size) \
    	((get_fs().seg & (addr | size | (addr+size))) == 0)

and what it basically tests is if any of the high bits get set (the
USER_DS masking value is 0xfffffc0000000000). And that's completely
wrong for the "addr+size" check: it's off-by-one for the case where we
check to the very end of the user address space, which is exactly what
the strn*_user() functions do.

Why? Because "addr+size" will be exactly the size of the address space,
so trying to access the last byte of the user address space will fail
the __access_ok() check, even though it shouldn't. As a result, the
user string accessor functions failed consistently - because they
literally don't know how long the string is going to be, and the max
access is going to be that last byte of the user address space.

Side note: that alpha macro is buggy for another reason too - it
re-uses the arguments twice.

And SH has another version of almost the exact same bug:

    #define __addr_ok(addr) \
    	((unsigned long __force)(addr) < current_thread_info()->addr_limit.seg)

So far so good: yes, a user address must be below the limit. But then:

    #define __access_ok(addr, size) \
    	(__addr_ok((addr) + (size)))

is wrong with the exact same off-by-one case: the case when "addr+size"
is exactly _equal_ to the limit is actually perfectly fine (think "one
byte access at the last address of the user address space").

The SH version is actually seriously buggy in another way: it doesn't
actually check for overflow, even though it did copy the _comment_ that
talks about overflow.

So it turns out that both SH and alpha actually have completely buggy
implementations of access_ok(), but they happened to work in practice
(although the SH overflow one is a serious serious security bug, not
that anybody likely cares about SH security).

This fixes the problems by using a similar macro on both alpha and SH.
It isn't trying to be clever; the end address is based on this logic:

    unsigned long __ao_end = __ao_a + __ao_b - !!__ao_b;

which basically says "add start and length, and then subtract one
unless the length was zero". We can't subtract one for a zero length,
or we'd just hit an underflow instead.

For a lot of access_ok() users the length is a constant, so this isn't
actually as expensive as it initially looks.

Reported-and-tested-by: Guenter Roeck <linux@roeck-us.net>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
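Assembled from that logic, the repaired alpha-style macro would look
roughly like this (a sketch; the merged version may differ in detail):

    #define __access_ok(addr, size) ({				\
    	unsigned long __ao_a = (addr), __ao_b = (size);		\
    	/* end = addr + size - 1, except when size is 0 */	\
    	unsigned long __ao_end = __ao_a + __ao_b - !!__ao_b;	\
    	/* no kernel-space high bits in either start or end */	\
    	(get_fs().seg & (__ao_a | __ao_end)) == 0; })
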
2019-01-06  fscrypt: add Adiantum support  (Eric Biggers; 7 files, -188/+468)

Add support for the Adiantum encryption mode to fscrypt. Adiantum is a
tweakable, length-preserving encryption mode with security provably
reducible to that of XChaCha12 and AES-256, subject to a security
bound. It's also a true wide-block mode, unlike XTS. See the paper
"Adiantum: length-preserving encryption for entry-level processors"
(https://eprint.iacr.org/2018/720.pdf) for more details. Also see
commit 059c2a4d8e16 ("crypto: adiantum - add Adiantum support").

On sufficiently long messages, Adiantum's bottlenecks are XChaCha12 and
the NH hash function. These algorithms are fast even on processors
without dedicated crypto instructions. Adiantum makes it feasible to
enable storage encryption on low-end mobile devices that lack AES
instructions; currently such devices are unencrypted. On ARM Cortex-A7,
on 4096-byte messages Adiantum encryption is about 4 times faster than
AES-256-XTS encryption; decryption is about 5 times faster.

In fscrypt, Adiantum is suitable for encrypting both file contents and
names. With filenames, it fixes a known weakness: when two filenames in
a directory share a common prefix of >= 16 bytes, with CTS-CBC their
encrypted filenames share a common prefix too, leaking information.
Adiantum does not have this problem.

Since Adiantum also accepts long tweaks (IVs), it's also safe to use
the master key directly for Adiantum encryption rather than deriving
per-file keys, provided that the per-file nonce is included in the IVs
and the master key isn't used for any other encryption mode. This
configuration saves memory and improves performance. A new fscrypt
policy flag is added to allow users to opt in to this configuration.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2019-01-06  kconfig: rename generated .*conf-cfg to *conf-cfg  (Masahiro Yamada; 2 files, -18/+19)

Remove the dot-prefixing since it is just a matter of the .gitignore
file.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2019-01-06  kbuild: remove unnecessary stubs for archheader and archscripts  (Masahiro Yamada; 1 file, -5/+1)

Make simply skips a missing rule when it is marked as .PHONY. Remove
the dummy targets.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2019-01-06  kbuild: use assignment instead of define ... endef for filechk_* rules  (Masahiro Yamada; 6 files, -26/+12)

You do not have to use define ... endef for filechk_* rules. For simple
cases, the use of assignment looks cleaner, IMHO.

I updated the usage for scripts/Kbuild.include in case somebody
misunderstands that 'define ... endef' is a requirement.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
2019-01-06  arch: remove redundant UAPI generic-y defines  (Masahiro Yamada; 24 files, -408/+0)

Now that Kbuild automatically creates asm-generic wrappers for missing
mandatory headers, it is redundant to list the same headers in both
generic-y and mandatory-y.

Suggested-by: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Acked-by: Sam Ravnborg <sam@ravnborg.org>
2019-01-06  kbuild: generate asm-generic wrappers if mandatory headers are missing  (Masahiro Yamada; 3 files, -10/+10)

Some time ago, Sam pointed out a certain degree of overlap between
generic-y and mandatory-y (https://lkml.org/lkml/2017/7/10/121).

I tweaked the meaning of mandatory-y a little bit; now it defines the
minimum set of ASM headers that all architectures must have. If an arch
does not have a specific implementation of a mandatory header, Kbuild
will let it fall back to the asm-generic one by automatically
generating a wrapper. This will allow lots of redundant generic-y
defines to be dropped.

Previously, "mandatory" was used in the context of UAPI, but I guess
this can be extended to kernel-space ASM headers.

Suggested-by: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Acked-by: Sam Ravnborg <sam@ravnborg.org>
2019-01-06  arch: remove stale comments "UAPI Header export list"  (Masahiro Yamada; 24 files, -25/+0)

These comments are leftovers of commit fcc8487d477a ("uapi: export all
headers under uapi directories"). Prior to that commit, exported
headers had to be explicitly added to header-y. Now, all headers under
the uapi/ directories are exported.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2019-01-06  riscv: remove redundant kernel-space generic-y  (Masahiro Yamada; 1 file, -25/+0)

This commit removes redundant generic-y defines in
arch/riscv/include/asm/Kbuild.

[1] It is redundant to define the same generic-y in both
    arch/$(ARCH)/include/asm/Kbuild and
    arch/$(ARCH)/include/uapi/asm/Kbuild. Remove the following
    generic-y:

        errno.h fcntl.h ioctl.h ioctls.h ipcbuf.h mman.h msgbuf.h
        param.h poll.h posix_types.h resource.h sembuf.h setup.h
        shmbuf.h signal.h socket.h sockios.h stat.h statfs.h swab.h
        termbits.h termios.h types.h

[2] It is redundant to define generic-y when an arch-specific
    implementation exists in arch/$(ARCH)/include/asm/*.h. Remove the
    following generic-y:

        cacheflush.h module.h

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2019-01-06  kbuild: change filechk to surround the given command with { }  (Masahiro Yamada; 7 files, -12/+14)

filechk_* rules often consist of multiple 'echo' lines. They must be
surrounded with { } or ( ) to work correctly; otherwise, only the
string from the last 'echo' would be written into the target.

Let's take care of that in 'filechk' in scripts/Kbuild.include to clean
up the filechk_* rules.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2019-01-06  kbuild: remove redundant target cleaning on failure  (Masahiro Yamada; 9 files, -25/+16)

Since commit 9c2af1c7377a ("kbuild: add .DELETE_ON_ERROR special
target"), the target file is automatically deleted on failure. The
boilerplate code

    ... || { rm -f $@; false; }

is unneeded.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2019-01-06  kbuild: clean up rule_dtc_dt_yaml  (Masahiro Yamada; 1 file, -2/+2)

Commit 3a2429e1faf4 ("kbuild: change if_changed_rule for multi-line
recipe") and commit 4f0e3a57d6eb ("kbuild: Add support for DT binding
schema checks") came in via different sub-systems. This is a follow-up
cleanup.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2019-01-06  kbuild: remove UIMAGE_IN and UIMAGE_OUT  (Masahiro Yamada; 1 file, -4/+2)

The only/last user of UIMAGE_IN/OUT was removed by commit 4722a3e6b716
("microblaze: fix multiple bugs in arch/microblaze/boot/Makefile"). The
input and output should always be $< and $@.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2019-01-06  jump_label: move 'asm goto' support test to Kconfig  (Masahiro Yamada; 42 files, -119/+65)

Currently, CONFIG_JUMP_LABEL just means "I _want_ to use jump label".
The jump label is controlled by HAVE_JUMP_LABEL, which is defined like
this:

    #if defined(CC_HAVE_ASM_GOTO) && defined(CONFIG_JUMP_LABEL)
    # define HAVE_JUMP_LABEL
    #endif

We can improve this by testing 'asm goto' support in Kconfig, then make
JUMP_LABEL depend on CC_HAS_ASM_GOTO. The ugly #ifdef HAVE_JUMP_LABEL
will go away, and CONFIG_JUMP_LABEL will match the real kernel
capability.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
Tested-by: Sedat Dilek <sedat.dilek@gmail.com>
Currently, CONFIG_JUMP_LABEL just means "I _want_ to use jump label". The jump label is controlled by HAVE_JUMP_LABEL, which is defined like this: #if defined(CC_HAVE_ASM_GOTO) && defined(CONFIG_JUMP_LABEL) # define HAVE_JUMP_LABEL #endif We can improve this by testing 'asm goto' support in Kconfig, then make JUMP_LABEL depend on CC_HAS_ASM_GOTO. Ugly #ifdef HAVE_JUMP_LABEL will go away, and CONFIG_JUMP_LABEL will match to the real kernel capability. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc) Tested-by: Sedat Dilek <sedat.dilek@gmail.com>