path: drivers/dma/dw
Age | Commit message | Author | Files | Lines
2017-05-24 | dmaengine: DW DMAC: Handle return value of clk_prepare_enable | Arvind Yadav | 1 | -1/+5
clk_prepare_enable() can fail here and we must check its return value.
Signed-off-by: Arvind Yadav <arvind.yadav.cs@gmail.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Acked-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
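The pattern enforced here is the standard kernel one: clk_prepare_enable() returns an int and a failure must abort the probe path. A minimal sketch of the idea, not the actual dw_probe() diff (function name and error label are illustrative):

```c
static int dw_probe_sketch(struct platform_device *pdev, struct dw_dma_chip *chip)
{
	int ret;

	chip->clk = devm_clk_get(&pdev->dev, "hclk");
	if (IS_ERR(chip->clk))
		return PTR_ERR(chip->clk);

	/* clk_prepare_enable() can fail; propagate the error instead of ignoring it */
	ret = clk_prepare_enable(chip->clk);
	if (ret)
		return ret;

	ret = dw_dma_probe(chip);
	if (ret)
		goto err_disable_clk;

	return 0;

err_disable_clk:
	clk_disable_unprepare(chip->clk);
	return ret;
}
```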
2017-05-15 | dmaengine: dw: Remove AVR32 bits from the driver | Andy Shevchenko | 3 | -375/+14
AVR32 is gone. Now it's time to clean up the driver by removing leftovers that were used by the AVR32-related code.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
2017-02-21 | Merge tag 'dmaengine-4.11-rc1' of git://git.infradead.org/users/vkoul/slave-dma | Linus Torvalds | 4 | -67/+223
Pull dmaengine updates from Vinod Koul:
 "This time we have a fairly boring and rather small update:
  - Support for Intel iDMA 32-bit hardware
  - deprecate broken support for channel switching in async_tx
  - bunch of updates on stm32-dma
  - Cyclic support for zx dma, and making it a generic zx dma driver
  - Small updates to a bunch of other drivers"

* tag 'dmaengine-4.11-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (29 commits)
  async_tx: deprecate broken support for channel switching
  dmaengine: rcar-dmac: Widen DMA mask to 40 bits
  dmaengine: sun6i: allow build on ARM64 platforms (sun50i)
  dmaengine: Provide a wrapper for memcpy operations
  dmaengine: zx: fix build warning
  dmaengine: dw: we do support Merrifield SoC in PCI mode
  dmaengine: dw: add support of iDMA 32-bit hardware
  dmaengine: dw: introduce register mappings for iDMA 32-bit
  dmaengine: dw: introduce block2bytes() and bytes2block()
  dmaengine: dw: extract dwc_chan_pause() for future use
  dmaengine: dw: replace convert_burst() with one liner
  dmaengine: dw: register IRQ and DMA pool with instance ID
  dmaengine: dw: Fix data corruption in large device to memory transfers
  dmaengine: ste_dma40: indicate granularity on channels
  dmaengine: ste_dma40: indicate directions on channels
  dmaengine: stm32-dma: Add error messages if xlate fails
  dmaengine: dw: pci: remove LPE Audio DMA ID
  dmaengine: stm32-dma: Add max_burst support
  dmaengine: stm32-dma: Add synchronization support
  dmaengine: stm32-dma: Fix residue computation issue in cyclic mode
  ...
2017-01-25 | dmaengine: dw: we do support Merrifield SoC in PCI mode | Andy Shevchenko | 1 | -0/+15
Intel Merrifield platform contains Intel integrated DMA (iDMA 32-bit) which has a slightly different register mapping, e.g. some bits in CTL_* and CFG_* channel registers, and has to use platform data since there is no autoconfiguration. The iDMA 32-bit specification is available in the publicly available documentation for Intel Braswell and BayTrail SoCs as LPE Audio DMA.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-01-25 | dmaengine: dw: add support of iDMA 32-bit hardware | Andy Shevchenko | 1 | -9/+92
iDMA 32-bit is an Intel-designed DMA controller that behaves like the Synopsys DesignWare DMA. This patch adds support for the new Intel hardware. Since iDMA 32-bit has no autoconfiguration, the platform code must provide platform data to dw_dma_probe(). By default the full FIFO (1024 bytes) is assigned to channel 0. Here we slice the FIFO into equal parts between the channels for the iDMA 32-bit case.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
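A hedged sketch of the FIFO-slicing idea described above, assuming a 1024-byte shared FIFO; the per-channel field and helper name are illustrative, not the driver's actual code:

```c
#define IDMA32_TOTAL_FIFO_SIZE	1024	/* bytes; the whole FIFO defaults to channel 0 */

/* Split the shared FIFO evenly across all channels (illustrative helper). */
static void idma32_slice_fifo_sketch(struct dw_dma *dw, unsigned int nr_channels)
{
	unsigned int fifo_partition = IDMA32_TOTAL_FIFO_SIZE / nr_channels;
	unsigned int i;

	for (i = 0; i < nr_channels; i++)
		dw->chan[i].fifo_depth = fifo_partition;	/* hypothetical field */
}
```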
2017-01-25 | dmaengine: dw: introduce register mappings for iDMA 32-bit | Andy Shevchenko | 1 | -3/+49
The integrated DMA (iDMA 32-bit) is an Intel-designed DMA controller which mimics the Synopsys DesignWare DMA. This patch appends register mappings for the parts that differ slightly from the DesignWare hardware.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-01-25 | dmaengine: dw: introduce block2bytes() and bytes2block() | Andy Shevchenko | 2 | -23/+35
The newly introduced helpers prepare the driver to support new DMA controller hardware. While here, introduce the DWC_CTLH_BLOCK_TS() macro as well.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
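For context, the DW DMAC expresses a block length (BLOCK_TS in CTL_HI) in elements of the source transfer width, so the conversion to and from bytes is a shift by the width encoding. A hedged sketch of what such helpers can look like (names and signatures are illustrative, not the driver's exact ones):

```c
/* width encoding: 0 = 8-bit, 1 = 16-bit, 2 = 32-bit, ... */
static inline size_t block2bytes_sketch(u32 block, u32 width)
{
	return (size_t)block << width;
}

static inline u32 bytes2block_sketch(size_t bytes, u32 width, size_t *leftover)
{
	u32 block = bytes >> width;

	/* Bytes that do not fit the block granularity stay for the next chunk. */
	*leftover = bytes - ((size_t)block << width);
	return block;
}
```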
2017-01-25 | dmaengine: dw: extract dwc_chan_pause() for future use | Andy Shevchenko | 1 | -5/+9
iDMA 32-bit has special handling of the FIFO during pause() / terminate_all(). Prepare the code to implement that.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-01-25 | dmaengine: dw: replace convert_burst() with one liner | Andy Shevchenko | 1 | -18/+11
Replace convert_burst() with a one-liner in place. The change simplifies further extension of the driver to cover new DMA controller hardware.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
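For reference, the DW MSIZE field encodes burst length as a power of two (1, 4, 8, 16, ... map to 0, 1, 2, 3, ...), so the old helper collapses into a single expression. A hedged sketch of the equivalent logic, not necessarily the exact line used in the driver:

```c
/* Encode a dmaengine maxburst value (in transfers) into the DW MSIZE field. */
static inline u32 dw_encode_maxburst_sketch(u32 maxburst)
{
	/* 1 -> 0, 4 -> 1, 8 -> 2, 16 -> 3, ...; fls() is the kernel "find last set" helper */
	return maxburst > 1 ? fls(maxburst) - 2 : 0;
}
```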
2017-01-25 | dmaengine: dw: register IRQ and DMA pool with instance ID | Andy Shevchenko | 4 | -2/+8
It is really useful, and not only for debugging, to have the IRQ line and DMA pool labeled with the driver name and its instance ID. Do this for the DesignWare DMA driver. All current users of this IP will be enhanced later on.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
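A hedged sketch of the labeling idea: build an instance-specific name and use it both for the shared IRQ line and the descriptor pool. The field names (dw->name, chip->id) are illustrative; only the kernel APIs used are assumed to be real:

```c
	int err;

	/* Label visible in /proc/interrupts and in the DMA pool statistics. */
	snprintf(dw->name, sizeof(dw->name), "dw:dmac%d", chip->id);	/* hypothetical fields */

	err = request_irq(chip->irq, dw_dma_interrupt, IRQF_SHARED, dw->name, dw);
	if (err)
		return err;

	dw->desc_pool = dmam_pool_create(dw->name, chip->dev,
					 sizeof(struct dw_desc), 4, 0);
	if (!dw->desc_pool)
		return -ENOMEM;
```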
2017-01-25 | dmaengine: dw: Fix data corruption in large device to memory transfers | Jarkko Nikula | 1 | -11/+9
When transferring more data than the maximum block size supported by the HW multiplied by the source width, the transfer is split into smaller chunks. Currently the code calculates the memory width, and thus the alignment, before splitting for both memory-to-device and device-to-memory transfers. For memory-to-device transfers this works fine since the alignment is preserved through the splitting and the split blocks are still memory-width aligned. However, in device-to-memory transfers the alignment breaks when the maximum block size multiplied by the register width doesn't have the same alignment as the buffer. For instance, this happens when transferring 4100 bytes (32-bit aligned) from an 8-bit register on a DW DMA that has a maximum block size of 4095 elements. An attempt to do such transfers caused data corruption. Fix this by calculating and setting the destination memory width after splitting, using the split block's alignment and length.
Signed-off-by: Jarkko Nikula <jarkko.nikula@linux.intel.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
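The underlying idea is that the usable memory width is bounded by the alignment of both the buffer address and the post-split chunk length. A hedged sketch of that calculation using the kernel's __ffs(); the helper name is illustrative:

```c
/*
 * Pick the widest memory access that both the buffer address and the
 * chunk length are aligned to, capped by the master's data width.
 * __ffs() returns the index of the lowest set bit, i.e. log2 of the
 * largest power-of-two divisor of (addr | len).
 */
static u32 pick_mem_width_sketch(dma_addr_t mem_addr, size_t chunk_len, u32 max_width)
{
	return min_t(u32, max_width, __ffs((unsigned long)(mem_addr | chunk_len)));
}
```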
2017-01-10 | dmaengine: dw: pci: remove LPE Audio DMA ID | Andy Shevchenko | 1 | -2/+1
The LPE Audio driver should take care of its DMA IPs by itself. Keeping an ID like this in dw_dma_pci.c is wrong anyway, since that block has two DMA controllers under one ID (like an MFD device). That's also why I didn't include an LPE Audio ID for Intel Merrifield previously.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-01-02 | dmaengine: dw: fix typo in Kconfig | Jean Delvare | 1 | -1/+1
platfroms -> platforms
Signed-off-by: Jean Delvare <jdelvare@suse.de>
Fixes: fed42c198b45 ("dma: dw: add PCI part of the driver")
Cc: Viresh Kumar <vireshk@kernel.org>
Acked-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-11-30 | dmaengine: DW DMAC: add multi-block property to device tree | Eugeniy Paltsev | 3 | -3/+14
Several versions of the DW DMAC have hardware support for multi-block transfers. Hardware multi-block support is disabled by default if we use DT to configure the DMAC, and software emulation of multi-block transfers is used instead. Add a multi-block property, so it is possible to enable hardware multi-block transfers (if present) via DT. Switch from the per-device is_nollp variable to a multi_block array, so multi-block transfers can be enabled or disabled separately per channel.
Acked-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
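A hedged sketch of how such a per-channel property could be parsed from DT; the property name comes from the commit subject, while the platform-data field and array bound are assumptions for illustration:

```c
	u32 multi_block[8];	/* one cell per channel; 0 = emulate, 1 = use HW multi-block */
	unsigned int i;

	/* Default to hardware multi-block; DT may disable it per channel. */
	for (i = 0; i < nr_channels; i++)
		pdata->multi_block[i] = 1;	/* field added by this change (assumed name) */

	if (!of_property_read_u32_array(np, "multi-block", multi_block, nr_channels))
		for (i = 0; i < nr_channels; i++)
			pdata->multi_block[i] = multi_block[i];
```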
2016-11-30 | dmaengine: DW DMAC: enable memory-to-memory transfers support | Eugeniy Paltsev | 1 | -0/+6
All known devices which use DT for configuration support memory-to-memory transfers, so enable it by default when we read the configuration from DT.
Acked-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-10-06 | Merge tag 'dmaengine-4.9-rc1' of git://git.infradead.org/users/vkoul/slave-dma | Linus Torvalds | 1 | -8/+6
Pull dmaengine updates from Vinod Koul:
 "This is a fairly large pile of code which brings in some nice additions:
  - Error reporting: we have added a new mechanism for users of dmaengine to register a callback_result which tells them the result of the dma transaction. Right now only one user (ntb) is using it.
  - As we discussed on the KS mailing list and pointed out, NO_IRQ has no place in the kernel; this also removes NO_IRQ from the dmaengine subsystem (both arm and ppc users)
  - Support for IOMMU slave transfers and its implementation for arm.
  - To get better build coverage, enable COMPILE_TEST for a bunch of drivers, and fix the warnings and sparse complaints on these.
  - Apart from the above, the usual updates spread across drivers"

* tag 'dmaengine-4.9-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (169 commits)
  async_pq_val: fix DMA memory leak
  dmaengine: virt-dma: move function declarations
  dmaengine: omap-dma: Enable burst and data pack for SG
  DT: dmaengine: rcar-dmac: document R8A7743/5 support
  dmaengine: fsldma: Unmap region obtained by of_iomap
  dmaengine: jz4780: fix resource leaks on error exit return
  dma-debug: fix ia64 build, use PHYS_PFN
  dmaengine: coh901318: fix integer overflow when shifting more than 32 places
  dmaengine: edma: avoid uninitialized variable use
  dma-mapping: fix m32r build warning
  dma-mapping: fix ia64 build, use PHYS_PFN
  dmaengine: ti-dma-crossbar: enable COMPILE_TEST
  dmaengine: omap-dma: enable COMPILE_TEST
  dmaengine: edma: enable COMPILE_TEST
  dmaengine: ti-dma-crossbar: Fix of_device_id data parameter usage
  dmaengine: ti-dma-crossbar: Correct type for of_find_property() third parameter
  dmaengine/ARM: omap-dma: Fix the DMAengine compile test on non OMAP configs
  dmaengine: edma: Rename set_bits and remove unused clear_bits helper
  dmaengine: edma: Use correct type for of_find_property() third parameter
  dmaengine: edma: Fix of_device_id data parameter usage (legacy vs TPCC)
  ...
2016-08-31 | dmaengine: dw: override LLP support if asked in platform data | Andy Shevchenko | 1 | -5/+1
There are at least two known devices, e.g. the DMA controller found on the ARC AXS101 SDP board, that have the LLP register but no multi-block transfer support at the same time. Override the autodetection with user-provided data.
Reported-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Reviewed-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Tested-by: Bryan O'Donoghue <pure.logic@nexus-software.ie>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-08-31 | dmaengine: dw: set polarity of handshake interface | Andy Shevchenko | 1 | -0/+4
Intel Quark UART uses the DesignWare DMA IP. However, the DMA IP is connected in such a way that the handshake interface uses inverted polarity. We have to provide a way to set this in the DMA driver when configuring a channel. Introduce a new member of the custom slave configuration called 'hs_polarity' and set active-low polarity when this value is 'true'.
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Tested-by: Bryan O'Donoghue <pure.logic@nexus-software.ie>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
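A hedged sketch of how a platform might describe the inverted handshake and how the channel setup could honor it; the CFG_LO polarity bit names follow the driver's regs.h style, the surrounding code is illustrative:

```c
/* Platform side: mark the request lines as active low for this slave. */
static struct dw_dma_slave quark_uart_dma_sketch = {
	.src_id      = 0,
	.dst_id      = 1,
	.hs_polarity = true,	/* handshake interface is wired inverted */
};

/* Driver side: fold the polarity into the channel CFG_LO value. */
static u32 dwc_cfglo_sketch(struct dw_dma_chan *dwc)
{
	u32 cfglo = 0;

	if (dwc->dws.hs_polarity)	/* dws is the cached slave config */
		cfglo |= DWC_CFGL_HS_DST_POL | DWC_CFGL_HS_SRC_POL;

	return cfglo;
}
```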
2016-08-31 | dmaengine: dw: keep copy of custom slave config in dwc | Andy Shevchenko | 2 | -23/+11
It seems we need to extend the custom slave configuration by one more member to support the Intel Quark UART. It becomes a burden to manage all members of struct dw_dma_slave one by one. Replace the set of fields by embedding struct dw_dma_slave into struct dw_dma_chan.
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Tested-by: Bryan O'Donoghue <pure.logic@nexus-software.ie>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-08-08 | dmaengine: dw: convert callback to helper function | Dave Jiang | 1 | -8/+6
This is in preparation for moving to a callback that provides results for the transaction. The conversion maintains current behavior; the driver must convert to the new callback mechanism at a later time in order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Cc: Viresh Kumar <vireshk@kernel.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
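The helpers in question are the dmaengine core's callback wrappers. A hedged sketch of the pattern (the wrapping function is illustrative, not the driver's exact diff):

```c
static void dwc_complete_sketch(struct dw_dma_chan *dwc, struct dw_desc *desc)
{
	struct dmaengine_desc_callback cb;

	/* Snapshot the callback and its argument before the descriptor is recycled. */
	dmaengine_desc_get_callback(&desc->txd, &cb);

	/*
	 * Passing NULL as the result keeps today's behavior; a later conversion
	 * can pass a struct dmaengine_result to report success or error.
	 */
	dmaengine_desc_callback_invoke(&cb, NULL);
}
```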
2016-05-02 | dmaengine: dw: pass platform data via struct dw_dma_chip | Andy Shevchenko | 3 | -8/+11
We pass struct dw_dma_chip to dw_dma_probe() anyway, so we may use it to pass the platform data as well. While here, constify the source of the platform data.
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-05-02 | dmaengine: dw: keep entire platform data in struct dw_dma | Andy Shevchenko | 3 | -20/+19
Keep the entire platform data in struct dw_dma. It makes the driver a bit cleaner.
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-05-02 | dmaengine: dw: revisit data_width property | Andy Shevchenko | 2 | -33/+14
Several changes are done here:
- Convert the property to be in bytes. Besides being common practice for such a property, a value in bytes is much more convenient to handle than the encoded one.
- Rename data_width to data-width in the device tree bindings. The change keeps support for the old format as well, just in case someone uses a newer kernel with an old device tree blob.
- While here, replace dwc_fast_ffs() with __ffs().
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-05-02 | dmaengine: dw: platform: check nr_masters to be non-zero | Andy Shevchenko | 1 | -10/+10
A value of nr_masters equal to 0 is invalid since this DMA controller has to have at least one master. Check this before we proceed with the rest of the properties.
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-19 | dmaengine: dw: lazy allocation of dma descriptors | Christian Lamparter | 2 | -119/+48
This patch changes the driver to allocate DMA descriptors when needed. This stops memory resources from being wasted and sitting idle in the free_list structure when the device doesn't need them. It also solves the problem that a driver has to guess in advance how many descriptors it needs to allocate. Currently, the dma engine will just fail when put under load by sata_dwc_460ex.
Signed-off-by: Christian Lamparter <chunkeey@googlemail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
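A hedged sketch of what on-demand descriptor allocation from the DMA pool can look like; field and helper names are illustrative where they go beyond the commit text:

```c
static struct dw_desc *dwc_desc_get_sketch(struct dw_dma_chan *dwc)
{
	struct dw_dma *dw = to_dw_dma(dwc->chan.device);
	struct dw_desc *desc;
	dma_addr_t phys;

	/* Allocate on demand instead of drawing from a preallocated free list. */
	desc = dma_pool_zalloc(dw->desc_pool, GFP_ATOMIC, &phys);
	if (!desc)
		return NULL;

	dma_async_tx_descriptor_init(&desc->txd, &dwc->chan);
	desc->txd.phys = phys;
	return desc;
}
```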
2016-04-13 | dmaengine: dw: set cdesc to NULL when free cyclic transfers | Andy Shevchenko | 1 | -0/+2
To be sure the cyclic transfers are really gone, we set cdesc to NULL. This prevents a double free.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-13 | dmaengine: dw: move residue to a descriptor | Andy Shevchenko | 2 | -21/+41
Residue is a property of any active descriptor: any descriptor may be in a different state, but residue only makes sense for the active one. Check whether the requested descriptor is active and return the proper residue value for it.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-13 | dmaengine: dw: move dwc->initialized to dwc->flags | Andy Shevchenko | 2 | -5/+5
We already have a dedicated variable for flags, so there is no need to create additional storage for this. Convert dwc->initialized to use dwc->flags.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-13 | dmaengine: dw: move dwc->paused to dwc->flags | Andy Shevchenko | 2 | -8/+6
We already have a dedicated variable for flags, so there is no need to create additional storage for this. Convert dwc->paused to use dwc->flags.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-13 | dmaengine: dw: define counter variables as unsigned int | Andy Shevchenko | 1 | -5/+5
The code is fixed to satisfy the compiler, which otherwise warns:
drivers/dma/dw/core.c: In function ‘dwc_handle_cyclic’:
drivers/dma/dw/core.c:568: warning: comparison between signed and unsigned
drivers/dma/dw/core.c: In function ‘dw_dma_tasklet’:
drivers/dma/dw/core.c:590: warning: comparison between signed and unsigned
drivers/dma/dw/core.c: In function ‘dw_dma_off’:
drivers/dma/dw/core.c:1103: warning: comparison between signed and unsigned
drivers/dma/dw/core.c: In function ‘dw_dma_cyclic_free’:
drivers/dma/dw/core.c:1469: warning: comparison between signed and unsigned
drivers/dma/dw/core.c: In function ‘dw_dma_probe’:
drivers/dma/dw/core.c:1574: warning: comparison between signed and unsigned
There is no functional change.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-13 | dmaengine: dw: substitute dma_read_byaddr by dma_readl_native | Andy Shevchenko | 2 | -9/+3
Since struct dw_dma is allocated and the regs member is assigned properly, we can use the standard IO accessors for the DMA registers.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-13 | dmaengine: dw: clear LLP_[SD]_EN bits in last descriptor of a chain | Mans Rullgard | 1 | -0/+2
The datasheet requires that the LLP_[SD]_EN bits be cleared whenever LLP.LOC is zero, i.e. in the last descriptor of a multi-block chain. Make the driver do this.
Signed-off-by: Mans Rullgard <mans@mansr.com>
Acked-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
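A hedged sketch of what closing a descriptor chain then looks like; the CTL_LO bit names follow the driver's regs.h, the helper itself is illustrative (the driver accesses the hardware LLI through its own accessors):

```c
/* Illustrative: terminate a multi-block chain as the datasheet requires. */
static void dwc_close_chain_sketch(struct dw_desc *prev)
{
	prev->lli.llp = 0;	/* LLP.LOC == 0: no next block */
	prev->lli.ctllo &= ~(DWC_CTLL_LLP_D_EN | DWC_CTLL_LLP_S_EN);
}
```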
2016-04-13 | dmaengine: dw: set LMS field in descriptors | Mans Rullgard | 2 | -12/+18
The LMS field indicates from which master the descriptor is to be read. This patch assumes this is always the same as the memory side in a peripheral transfer, which is true for all known systems.
Signed-off-by: Mans Rullgard <mans@mansr.com>
Acked-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-13 | dmaengine: dw: fix byte order of hw descriptor fields | Mans Rullgard | 2 | -68/+87
If the DMA controller uses a different byte order than the host CPU, the hardware linked list descriptor fields need to be byte-swapped. This patch makes the driver write these fields using the same byte order it uses for mmio accesses to the DMA engine. I do not know if this is guaranteed to always be correct.
Signed-off-by: Mans Rullgard <mans@mansr.com>
Acked-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-13 | dmaengine: dw: set src and dst master select according to xfer direction | Mans Rullgard | 1 | -2/+6
On some architectures the DMA controller can have two masters connected to different buses, so that memory is accessible only through one master and the peripheral only through the other. This patch changes the src and dst master settings to match the direction of the transfer.
Signed-off-by: Mans Rullgard <mans@mansr.com>
Acked-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-13 | dmaengine: dw: rename masters to reflect actual topology | Andy Shevchenko | 3 | -18/+17
The source and destination masters reflect the buses, or their layers, to which the different devices can be connected. The patch changes the master names to reflect which one is related to which, independently of the transfer direction. The outcome of the change is that the memory data width is now always limited by the data width of the master which is dedicated to communicating with memory. The patch will not break anything, since all current users have the same data width for all masters. Still, it would be nice to revisit the avr32 platforms to check what the actual hardware topology in use there is. It seems that it has one bus and two masters on it, as stated by Table 8-2; that's why everything works independently of the master in use. The purpose of the follow-up patch is to fix the driver for configurations with more than one bus. The change is done on the assumption that src_master and dst_master reflect a connection to the memory and peripheral respectively on avr32, and the other way round on the rest.
Acked-by: Hans-Christian Egtvedt <egtvedt@samfundet.no>
Acked-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-13 | dmaengine: dw: fix master selection | Andy Shevchenko | 1 | -15/+19
Commit 895005202987 ("dmaengine: dw: apply both HS interfaces and remove slave_id usage") cleaned up the code to avoid usage of the deprecated slave_id member of the generic slave configuration. Meanwhile it broke the master selection by removing an important call to dwc_set_masters() in ->device_alloc_chan_resources(), which copied the masters from the custom slave configuration to the internal channel structure. Everything has worked until now since there is no customized connection of the DesignWare DMA IP to the bus, i.e. one bus and one or more masters are in use. Configurations where the two masters are connected to different buses are not working anymore. We are expecting one user of such a configuration and need to select the masters properly. Besides that, it is obviously a performance regression since only one master is in use in a multi-master configuration. Select the masters in accordance with what the user asked for. Keep this patch in a form more suitable for backporting. We are safe to take the necessary data in ->device_alloc_chan_resources() because we don't support a generic slave configuration embedded into the custom one, and thus the only way to provide one is through the parameter to a filter function, which is called exactly before channel resource allocation. While here, replace the BUG_ON with a less noisy dev_warn() and prevent channel allocation in case of error.
Fixes: 895005202987 ("dmaengine: dw: apply both HS interfaces and remove slave_id usage")
Cc: stable@vger.kernel.org
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-03-17 | Merge tag 'dmaengine-4.6-rc1' of git://git.infradead.org/users/vkoul/slave-dma | Linus Torvalds | 1 | -1/+1
Pull dmaengine updates from Vinod Koul:
 "This is a smallish update with minor changes to the core, a new driver and the usual updates. Nothing super exciting here...
  - We have made the slave address physical to enable drivers to do the mapping.
  - We now expose the maxburst for slave dma as a new capability so clients can know this and program accordingly
  - addition of device synchronize callbacks on omap and edma
  - pl330 updates to support DMAFLUSHP for Rockchip platforms
  - Updates and improved sg handling in the Xilinx VDMA driver
  - New hidma qualcomm dma driver, though some bits are still in progress"

* tag 'dmaengine-4.6-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (40 commits)
  dmaengine: IOATDMA: revise channel reset workaround on CB3.3 platforms
  dmaengine: add Qualcomm Technologies HIDMA channel driver
  dmaengine: add Qualcomm Technologies HIDMA management driver
  dmaengine: hidma: Add Device Tree binding
  dmaengine: qcom_bam_dma: move to qcom directory
  dmaengine: tegra: Move of_device_id table near to its user
  dmaengine: xilinx_vdma: Remove unnecessary variable initializations
  dmaengine: sirf: use __maybe_unused to hide pm functions
  dmaengine: rcar-dmac: clear pertinence number of channels
  dmaengine: sh: shdmac: don't open code of_device_get_match_data()
  dmaengine: tegra: don't open code of_device_get_match_data()
  dmaengine: qcom_bam_dma: Make driver work for BE
  dmaengine: sun4i: support module autoloading
  dma/mic_x100_dma: IS_ERR() vs PTR_ERR() typo
  dmaengine: xilinx_vdma: Use readl_poll_timeout instead of do while loop's
  dmaengine: xilinx_vdma: Simplify spin lock handling
  dmaengine: xilinx_vdma: Fix issues with non-parking mode
  dmaengine: xilinx_vdma: Improve SG engine handling
  dmaengine: pl330: fix to support the burst mode
  dmaengine: make slave address physical
  ...
2016-02-15 | dmaengine: dw: disable BLOCK IRQs for non-cyclic xfer | Andy Shevchenko | 1 | -5/+10
Commit 2895b2cad6e7 ("dmaengine: dw: fix cyclic transfer callbacks") re-enabled BLOCK interrupts to make cyclic transfers work. However, this change became a regression for non-cyclic transfers, as the interrupt counters under a stress test grew enormously (approximately one interrupt per 4-5 bytes in a UART loopback test). Taking the above into consideration, enable BLOCK interrupts if and only if the channel is programmed to perform a cyclic transfer.
Fixes: 2895b2cad6e7 ("dmaengine: dw: fix cyclic transfer callbacks")
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Mans Rullgard <mans@mansr.com>
Tested-by: Mans Rullgard <mans@mansr.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
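A hedged sketch of the condition: DW_DMA_IS_CYCLIC is the driver's existing per-channel flag and channel_set_bit()/channel_clear_bit() are its existing mask helpers, while the wrapping function is illustrative:

```c
static void dwc_program_interrupts_sketch(struct dw_dma *dw, struct dw_dma_chan *dwc)
{
	/* Transfer-complete and error interrupts are always wanted. */
	channel_set_bit(dw, MASK.XFER, dwc->mask);
	channel_set_bit(dw, MASK.ERROR, dwc->mask);

	/* BLOCK interrupts fire per block; only cyclic transfers need them. */
	if (test_bit(DW_DMA_IS_CYCLIC, &dwc->flags))
		channel_set_bit(dw, MASK.BLOCK, dwc->mask);
	else
		channel_clear_bit(dw, MASK.BLOCK, dwc->mask);
}
```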
2016-02-08 | dmaengine: dw: pci: add ID for WildcatPoint PCH | Andy Shevchenko | 1 | -0/+4
The WildcatPoint PCH, as seen on the MacBook (12-inch, Early 2015), has a PCI-enabled DesignWare DMA controller. Enable it by adding its ID to the corresponding driver.
Reported-by: Leif Liddy <leif.liddy@gmail.com>
BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=110901
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-01-25 | dmaengine: dw: fix a typo for bitfields of CTL_LO | Jie Yang | 1 | -1/+1
The offset of SINC should be 9, not 7; fix this typo.
Signed-off-by: Jie Yang <yang.jie@intel.com>
Acked-by: Andy Shevchenko <andy.shevchenko@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
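For reference, in the DW DMAC CTL_LO register the destination address-increment field (DINC) occupies bits 8:7 and the source field (SINC) bits 10:9, so the source macros must shift by 9. A sketch of the corrected definitions in the driver's regs.h style (macro names assumed to match that style):

```c
/* CTLx_LO address-increment fields: destination at bits 8:7, source at bits 10:9 */
#define DWC_CTLL_DST_INC	(0 << 7)	/* DAR increments */
#define DWC_CTLL_DST_DEC	(1 << 7)
#define DWC_CTLL_DST_FIX	(2 << 7)
#define DWC_CTLL_SRC_INC	(0 << 9)	/* SAR increments (was wrongly shifted by 7) */
#define DWC_CTLL_SRC_DEC	(1 << 9)
#define DWC_CTLL_SRC_FIX	(2 << 9)
```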
2016-01-20 | Merge tag 'dmaengine-fix-4.5-rc1' of git://git.infradead.org/users/vkoul/slave-dma | Linus Torvalds | 1 | -28/+16
Pull dmaengine fixes from Vinod Koul:
 "Here is my second pull request for this window: a few driver fixes have piled up, plus one missed rcar bindings patch which somehow got lost in the for-linus branch, so that one was cherry-picked. Fixes are for dw, at_hdmac, edma."

* tag 'dmaengine-fix-4.5-rc1' of git://git.infradead.org/users/vkoul/slave-dma:
  dmaengine: rcar-dmac: Document SoC specific bindings
  dmaengine: at_xdmac: fix resume for cyclic transfers
  dmaengine: dw: fix cyclic transfer callbacks
  dmaengine: dw: fix cyclic transfer setup
  dmaengine: edma: Fix paRAM slot allocation for entry channel 0
2016-01-14 | dmaengine: dw: fix cyclic transfer callbacks | Mans Rullgard | 1 | -6/+15
Cyclic transfer callbacks rely on block completion interrupts which were disabled in commit ff7b05f29fd4 ("dmaengine/dw_dmac: Don't handle block interrupts"). This re-enables block interrupts so the cyclic callbacks can work. Other transfer types are not affected as they set the INT_EN bit only on the last block.
Fixes: ff7b05f29fd4 ("dmaengine/dw_dmac: Don't handle block interrupts")
Signed-off-by: Mans Rullgard <mans@mansr.com>
Reviewed-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Cc: <stable@vger.kernel.org>
2016-01-14 | dmaengine: dw: fix cyclic transfer setup | Mans Rullgard | 1 | -22/+1
Commit 61e183f83069 ("dmaengine/dw_dmac: Reconfigure interrupt and chan_cfg register on resume") moved some channel initialisation to a new function which must be called before starting a transfer. This updates dw_dma_cyclic_start() to use dwc_dostart() like the other modes, thus ensuring dwc_initialize() gets called and removing some code duplication.
Fixes: 61e183f83069 ("dmaengine/dw_dmac: Reconfigure interrupt and chan_cfg register on resume")
Signed-off-by: Mans Rullgard <mans@mansr.com>
Reviewed-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Cc: <stable@vger.kernel.org>
2016-01-13 | Merge tag 'dmaengine-4.5-rc1' of git://git.infradead.org/users/vkoul/slave-dma | Linus Torvalds | 1 | -2/+5
Pull dmaengine updates from Vinod Koul:
 "This round we have a few new features, a new driver and updates to a few drivers. The new features in the dmaengine core are:
  - Synchronized transfer termination API to terminate dmaengine transfers in a synchronized and async fashion as required by users. We have its users now in the ALSA dmaengine lib, img, at_xdma, axi_dmac drivers.
  - Universal API for channel request and start of consolidation of request flows. Its user is the omap-dma driver.
  - Introduce reuse of descriptors and use it in the pxa_dma driver

 Add/Remove:
  - New STM32 DMA driver
  - Removal of unused R-Car HPB-DMAC driver

 Updates:
  - ti-dma-crossbar updates for supporting eDMA
  - tegra-apb pm updates
  - idma64
  - mv_xor updates
  - ste_dma updates"

* tag 'dmaengine-4.5-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (54 commits)
  dmaengine: mv_xor: add suspend/resume support
  dmaengine: mv_xor: de-duplicate mv_chan_set_mode*()
  dmaengine: mv_xor: remove mv_xor_chan->current_type field
  dmaengine: omap-dma: Add support for DMA filter mapping to slave devices
  dmaengine: edma: Add support for DMA filter mapping to slave devices
  dmaengine: core: Introduce new, universal API to request a channel
  dmaengine: core: Move and merge the code paths using private_candidate
  dmaengine: core: Skip mask matching when it is not provided to private_candidate
  dmaengine: mdc: Correct terminate_all handling
  dmaengine: edma: Add probe callback to edma_tptc_driver
  dmaengine: dw: fix potential memory leak in dw_dma_parse_dt()
  dmaengine: stm32-dma: Fix unchecked deference of chan->desc
  dmaengine: sh: Remove unused R-Car HPB-DMAC driver
  dmaengine: usb-dmac: Document SoC specific compatibility strings
  ste_dma40: Delete an unnecessary variable initialisation in d40_probe()
  ste_dma40: Delete another unnecessary check in d40_probe()
  ste_dma40: Delete an unnecessary check before the function call "kmem_cache_destroy"
  dmaengine: tegra-apb: Free interrupts before killing tasklets
  dmaengine: tegra-apb: Update driver to use GFP_NOWAIT
  dmaengine: tegra-apb: Only save channel state for those in use
  ...
2016-01-07Revert "dmaengine: dw: platform: provide platform data for Intel"Andy Shevchenko1-16/+1
Since we have a work around to prevent a system hangup we don't need to provide a platform data explicitly anymore. This reverts commit 175267b389f781748e2bbb6c737e76b5c9bc4c88. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Acked-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2016-01-07 | dmaengine: dw: return immediately from IRQ when DMA isn't in use | Andy Shevchenko | 1 | -2/+7
There is no need to bother the hardware when all channels are idle; we are not expected to get any interrupts then.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
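A hedged sketch of the early return in the interrupt handler; the in_use check mirrors the commit's intent, the rest of the handler body is abbreviated and illustrative:

```c
static irqreturn_t dw_dma_interrupt_sketch(int irq, void *dev_id)
{
	struct dw_dma *dw = dev_id;
	u32 status;

	/* All channels idle: do not touch the hardware, the IRQ is not ours. */
	if (!dw->in_use)
		return IRQ_NONE;

	status = dma_readl(dw, STATUS_INT);
	if (!status)
		return IRQ_NONE;

	/* Normal handling of XFER/BLOCK/ERROR status follows. */
	tasklet_schedule(&dw->tasklet);
	return IRQ_HANDLED;
}
```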
2016-01-07 | dmaengine: dw: platform: power on device on shutdown | Andy Shevchenko | 1 | -0/+12
We have to call dw_dma_disable() to stop any ongoing transfer. On some platforms we can't do that, since the DMA device is powered off. Moreover, at that point we have no way to check whether the platform is affected or not. That's why we call pm_runtime_get_sync() / pm_runtime_put() unconditionally. On the other hand we can't use pm_runtime_suspended(), because the runtime PM framework is not fully used by the driver.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
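A hedged sketch of what such a shutdown callback can look like, following the commit's description (the exact callback body in the driver may differ):

```c
static void dw_shutdown_sketch(struct platform_device *pdev)
{
	struct dw_dma_chip *chip = platform_get_drvdata(pdev);

	/*
	 * Power the device up unconditionally: on some platforms it may be
	 * runtime-suspended here, and dw_dma_disable() would then touch
	 * powered-off hardware.
	 */
	pm_runtime_get_sync(chip->dev);
	dw_dma_disable(chip);
	pm_runtime_put(chip->dev);

	clk_disable_unprepare(chip->clk);
}
```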
2015-12-18 | dmaengine: dw: fix potential memory leak in dw_dma_parse_dt() | Mans Rullgard | 1 | -2/+5
If the "dma-channels" DT property is missing, the dw_dma_parse_dt() function returns NULL, but not before allocating memory for a struct dw_dma_platform_data through devres. If the device supports parameter detection, the probe still succeeds and the allocated memory is not released until the device is removed. Fix this by deferring the allocation until after checking the "dma-channels" property.
Signed-off-by: Mans Rullgard <mans@mansr.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-11-10 | Merge tag 'dmaengine-4.4-rc1' of git://git.infradead.org/users/vkoul/slave-dma | Linus Torvalds | 3 | -51/+61
Pull dmaengine updates from Vinod Koul:
 "This time we have a very typical update which is mostly fixes and updates to drivers and no new drivers:
  - the biggest change is coming from Peter for the edma cleanup, which even caused some last-minute regression; things seem settled now
  - idma64 and dw updates
  - ioatdma updates
  - module autoload fixes for various drivers
  - scatter gather support for hdmac"

* tag 'dmaengine-4.4-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (77 commits)
  dmaengine: edma: Add dummy driver skeleton for edma3-tptc
  Revert "ARM: DTS: am33xx: Use the new DT bindings for the eDMA3"
  Revert "ARM: DTS: am437x: Use the new DT bindings for the eDMA3"
  dmaengine: dw: some Intel devices has no memcpy support
  dmaengine: dw: platform: provide platform data for Intel
  dmaengine: dw: don't override platform data with autocfg
  dmaengine: hdmac: Add scatter-gathered memset support
  dmaengine: hdmac: factorise memset descriptor allocation
  dmaengine: virt-dma: Fix kernel-doc annotations
  ARM: DTS: am437x: Use the new DT bindings for the eDMA3
  ARM: DTS: am33xx: Use the new DT bindings for the eDMA3
  dmaengine: edma: New device tree binding
  dmaengine: Kconfig: edma: Select TI_DMA_CROSSBAR in case of ARCH_OMAP
  dmaengine: ti-dma-crossbar: Add support for crossbar on AM33xx/AM43xx
  dmaengine: edma: Merge the of parsing functions
  dmaengine: edma: Do not allocate memory for edma_rsv_info in case of DT boot
  dmaengine: edma: Refactor the dma device and channel struct initialization
  dmaengine: edma: Get qDMA channel information from HW also
  dmaengine: edma: Merge map_dmach_to_queue into assign_channel_eventq
  dmaengine: edma: Correct PaRAM access function names (_parm_ to _param_)
  ...