2015-06-25  dmaengine: fsl-edma: clear pending interrupts on initialization  [Stefan Agner, 1 file, -4/+5]
Clear pending interrupts before requesting interrupts and move interrupt initialization after channels have been initialized. This avoids a NULL pointer dereference panic when using kexec while DMA requests were running.
Signed-off-by: Stefan Agner <stefan@agner.ch>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
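As an illustration of the ordering this fix enforces (initialize channels, clear any interrupt status left over from a previous kernel, and only then request the IRQ), a minimal probe-path sketch could look like this; the register offsets, struct and helper names are placeholders, not the actual fsl-edma code:

    #include <linux/interrupt.h>
    #include <linux/io.h>
    #include <linux/platform_device.h>

    #define DMA_INT_STATUS 0x24   /* hypothetical status register */
    #define DMA_INT_CLEAR  0x24   /* hypothetical write-1-to-clear register */

    struct example_dma {
        void __iomem *base;
        int irq;
    };

    static irqreturn_t example_dma_irq(int irq, void *data)
    {
        /* Channel handling; channels must already be initialized here. */
        return IRQ_HANDLED;
    }

    static int example_dma_setup_irq(struct platform_device *pdev,
                                     struct example_dma *d)
    {
        /*
         * Clear interrupts left pending by a previous kernel (e.g. after
         * kexec) before the handler can possibly run ...
         */
        writel(readl(d->base + DMA_INT_STATUS), d->base + DMA_INT_CLEAR);

        /* ... and only then request the interrupt line. */
        return devm_request_irq(&pdev->dev, d->irq, example_dma_irq, 0,
                                dev_name(&pdev->dev), d);
    }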
2015-06-25  dmaengine: xdmac: Add memset support  [Maxime Ripard, 1 file, -0/+89]
The XDMAC supports memset transfers, both over contiguous areas and over discontiguous areas through an LLI. For now, add support for the contiguous case only; scatter-gathered memset will come eventually.
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
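For context, a client consumes this capability through the generic dmaengine memset prep call. The sketch below assumes the dmaengine_prep_dma_memset() wrapper is available and that the channel advertises memset support; error handling is reduced to a minimum:

    #include <linux/dmaengine.h>

    static int fill_buffer_with_dma(struct dma_chan *chan, dma_addr_t dst,
                                    int value, size_t len)
    {
        struct dma_async_tx_descriptor *tx;
        dma_cookie_t cookie;

        /* Ask the engine to set 'len' bytes at 'dst' to 'value'. */
        tx = dmaengine_prep_dma_memset(chan, dst, value, len,
                                       DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
        if (!tx)
            return -ENXIO;    /* channel has no memset support */

        cookie = dmaengine_submit(tx);
        if (dma_submit_error(cookie))
            return -EIO;

        dma_async_issue_pending(chan);
        return 0;
    }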
2015-06-17  Documentation: dmaengine: document DMA_CTRL_ACK  [Robert Jarzmik, 1 file, -5/+6]
Add documentation about acking the transfers, and their reusability.
Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-06-17  dmaengine: virt-dma: don't always free descriptor upon completion  [Robert Jarzmik, 2 files, -7/+25]
This patch improves the case of a transfer that is submitted multiple times and where the cost of creating the descriptor chain is not negligible. This happens with big video buffers (several megabytes, i.e. several thousand linked descriptors in one scatter-gather list). In these cases, a video driver would want to do:
- tx = dmaengine_prep_slave_sg()
- dmaengine_submit(tx)
- dma_async_issue_pending()
- wait for video completion
- read video data (or not, skipping a frame is also possible)
- dmaengine_submit(tx)
  => here, recalculating the descriptor chain takes time
  => repeating the dma coherent allocation over and over might create holes in the dma pool, which is counter-productive
- dma_async_issue_pending()
- etc.
To cope with this case, virt-dma is modified so that descriptors are not freed upon completion if the DMA_CTRL_ACK flag is set in the transfer.
Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
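A hedged client-side sketch of the pattern described above: the scatter-gather chain is prepared once with DMA_CTRL_ACK and resubmitted for every frame, so the expensive chain construction is not repeated. The polling with dma_sync_wait() stands in for a real completion callback and is for illustration only:

    #include <linux/dmaengine.h>
    #include <linux/scatterlist.h>

    static int capture_frames(struct dma_chan *chan, struct scatterlist *sgl,
                              unsigned int sg_len, int nframes)
    {
        struct dma_async_tx_descriptor *tx;
        dma_cookie_t cookie;
        int i;

        /* Done once: building this chain may mean thousands of descriptors. */
        tx = dmaengine_prep_slave_sg(chan, sgl, sg_len, DMA_DEV_TO_MEM,
                                     DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
        if (!tx)
            return -ENXIO;

        for (i = 0; i < nframes; i++) {
            /* Cheap: the already-built chain is submitted again. */
            cookie = dmaengine_submit(tx);
            if (dma_submit_error(cookie))
                return -EIO;

            dma_async_issue_pending(chan);

            /* Poll for completion; a real driver would use a callback. */
            if (dma_sync_wait(chan, cookie) != DMA_COMPLETE)
                return -EIO;

            /* The frame is now in memory; consuming it is out of scope. */
        }

        return 0;
    }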
2015-06-12  dmaengine: Revert "drivers/dma: remove unused support for MEMSET operations"  [Maxime Ripard, 2 files, -0/+26]
This reverts commit 48a9db462d99494583dad829969616ac90a8df4e. Some platforms actually need support for the memset operations. Bring it back.
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-06-12  dmaengine: hdmac: Implement interleaved transfers  [Maxime Ripard, 2 files, -0/+111]
The AT91 HDMAC controller supports interleaved transfers through what's called Picture-in-Picture mode, which allows transferring a square portion of a framebuffer. This means the driver only supports interleaved transfers whose chunk size and ICGs are fixed across all the chunks. While this is quite a drastic restriction compared to what the dmaengine API allows, it is still useful, and the driver simply rejects transfers that do not conform to it.
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Acked-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
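As an illustration of the restricted shape the driver accepts, the sketch below builds an interleaved template that copies a width x height window out of a framebuffer with a line pitch of 'pitch' bytes: a single chunk per frame with a fixed size and a fixed ICG, which is exactly the Picture-in-Picture case. The geometry and the assumption that the template may be freed right after preparation are illustrative:

    #include <linux/dmaengine.h>
    #include <linux/slab.h>

    static struct dma_async_tx_descriptor *
    prep_window_copy(struct dma_chan *chan, dma_addr_t src, dma_addr_t dst,
                     size_t width, size_t height, size_t pitch)
    {
        struct dma_interleaved_template *xt;
        struct dma_async_tx_descriptor *tx;

        xt = kzalloc(sizeof(*xt) + sizeof(struct data_chunk), GFP_KERNEL);
        if (!xt)
            return NULL;

        xt->src_start = src;
        xt->dst_start = dst;
        xt->dir = DMA_MEM_TO_MEM;
        xt->src_inc = true;
        xt->dst_inc = true;
        xt->src_sgl = true;              /* honour the ICG on the source side */
        xt->dst_sgl = false;             /* destination is packed */
        xt->numf = height;               /* one frame per line */
        xt->frame_size = 1;              /* a single chunk per frame... */
        xt->sgl[0].size = width;         /* ...of fixed size... */
        xt->sgl[0].icg = pitch - width;  /* ...and fixed gap, as required */

        tx = dmaengine_prep_interleaved_dma(chan, xt, DMA_CTRL_ACK);
        kfree(xt);  /* assumes the driver consumed the template at prep time */
        return tx;
    }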
2015-06-12  dmaengine: Move icg helpers to global header  [Maxime Ripard, 2 files, -42/+31]
Now that we can have ICGs set for both the source and destination (using the icg field of struct data_chunk) or for only the source or the destination (using the src_icg or dst_icg fields respectively), and since these fields can be ignored depending on other parameters (src_inc, src_sgl, etc.), the logic to get the actual ICG value can be quite tricky. The XDMAC driver was already implementing it, but since we will need it in other drivers, move it to the main header file.
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Acked-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
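The decision the helpers encode is roughly the following (a sketch of the logic rather than a copy of the header): the per-direction ICG wins if set, otherwise the shared icg applies only when the corresponding *_sgl flag is set, and everything is ignored when the address does not increment:

    #include <linux/dmaengine.h>

    static size_t example_get_icg(bool inc, bool sgl, size_t icg, size_t dir_icg)
    {
        if (inc) {
            if (dir_icg)
                return dir_icg;  /* dedicated src_icg/dst_icg wins */
            else if (sgl)
                return icg;      /* fall back to the shared icg */
        }

        return 0;                /* address does not increment: no gap */
    }

    /* Per-direction wrapper, mirroring what the source-side helper does. */
    static size_t example_get_src_icg(struct dma_interleaved_template *xt,
                                      struct data_chunk *chunk)
    {
        return example_get_icg(xt->src_inc, xt->src_sgl,
                               chunk->icg, chunk->src_icg);
    }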
2015-06-10  dmaengine: mv_xor: improve descriptors list handling and reduce locking  [Lior Amsalem, 2 files, -105/+53]
This patch changes the way free descriptors are marked. Instead of having a per-descriptor "in use" field, all the descriptors in the all_slots list are free for use. This simplifies the allocation method and reduces the locking needed.
Signed-off-by: Lior Amsalem <alior@marvell.com>
Reviewed-by: Ofer Heifetz <oferh@marvell.com>
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-06-10  dmaengine: mv_xor: Enlarge descriptor pool size  [Lior Amsalem, 1 file, -1/+1]
Now that we have 2 channels assigned to 2 CPUs and all requests are chained on the same channels, we need many more descriptors available to satisfy the async_tx workload. In our lab, 3072 descriptors was found to be the number that allows the async_tx stack to work without waiting for free descriptors when new requests are submitted.
Signed-off-by: Lior Amsalem <alior@marvell.com>
Reviewed-by: Nadav Haklai <nadavh@marvell.com>
Tested-by: Nadav Haklai <nadavh@marvell.com>
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-06-10  dmaengine: mv_xor: add support for a38x command in descriptor mode  [Lior Amsalem, 3 files, -15/+76]
The Marvell Armada 38x SoC introduces new features to the XOR engine, notably that the engine mode (MEMCPY/XOR/PQ/etc.) can be part of the descriptor instead of being set through the controller registers. This new feature allows mixing different commands (even PQ) on the same channel/chain without stopping the engine to reconfigure its mode. Refactor the driver to use that new feature on the Armada 38x, while keeping the old behaviour on the older SoCs.
Signed-off-by: Lior Amsalem <alior@marvell.com>
Reviewed-by: Ofer Heifetz <oferh@marvell.com>
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-06-10  dmaengine: mv_xor: Rename function for consistent naming  [Maxime Ripard, 1 file, -43/+44]
The current function naming isn't very consistent, and functions with the same prefix might operate on either a channel or a descriptor, which is confusing. Rename these functions to have a consistent and clearer naming scheme.
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-06-10  dmaengine: mv_xor: bug fix for racing condition in descriptors cleanup  [Lior Amsalem, 2 files, -26/+47]
This patch fixes a bug in the XOR driver where the cleanup function could be called and free descriptors that had never been processed by the engine (which results in data errors). With this fix, the cleanup function frees descriptors based on the ownership bit in the descriptors.
Fixes: ff7b04796d98 ("dmaengine: DMA engine driver for Marvell XOR engine")
Signed-off-by: Lior Amsalem <alior@marvell.com>
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Reviewed-by: Ofer Heifetz <oferh@marvell.com>
Cc: stable@vger.kernel.org
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
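The rule the fix enforces can be sketched as follows: the cleanup path walks the chain and only completes descriptors the hardware has handed back, stopping at the first one still owned by the engine. The descriptor structures, the ownership flag and the endianness handling are hypothetical, not the real mv_xor layout:

    #include <linux/bitops.h>
    #include <linux/dmaengine.h>
    #include <linux/list.h>
    #include "dmaengine.h"          /* dma_cookie_complete(), drivers/dma only */

    #define OWNED_BY_HW BIT(31)     /* hypothetical ownership bit */

    struct example_hw_desc {
        u32 status;
        /* ... */
    };

    struct example_desc {
        struct dma_async_tx_descriptor async_tx;
        struct example_hw_desc *hw;
        struct list_head node;
    };

    struct example_chan {
        struct list_head chain;
        struct list_head free_list;
    };

    static void example_cleanup(struct example_chan *chan)
    {
        struct example_desc *desc, *tmp;

        list_for_each_entry_safe(desc, tmp, &chan->chain, node) {
            /* Stop at the first descriptor the engine still owns. */
            if (desc->hw->status & OWNED_BY_HW)
                break;

            dma_cookie_complete(&desc->async_tx);
            list_move_tail(&desc->node, &chan->free_list);
        }
    }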
2015-06-08  dmaengine: pl330: fix wording in mcbufsz message  [Michal Suchanek, 1 file, -2/+2]
The kernel is not trying to increase mcbufsz. It suggests you should try doing so. Also print the calculated required size of mcbufsz.
Signed-off-by: Michal Suchanek <hramrach@gmail.com>
Reviewed-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-06-08  dmaengine: sirf: add CSRatlas7 SoC support  [Hao Liu, 2 files, -90/+336]
Add support for the new CSR atlas7 SoC. The atlas7 carries both V1 and V2 DMA IP: atlas7 DMAv1 is basically moved from marco, which was never delivered to customers and is renamed in this patch; atlas7 DMAv2 supports chained DMA through a chain table, and this patch also adds that chain DMA support. DMAv1 and DMAv2 co-exist in the same chip. There are some hardware configuration differences (register offsets, etc.) compared with the old prima2 chips, so we use the compatible string to differentiate the old prima2 from the new atlas7 and apply the appropriate hardware settings for each.
Signed-off-by: Hao Liu <Hao.Liu@csr.com>
Signed-off-by: Yanchang Li <Yanchang.Li@csr.com>
Signed-off-by: Barry Song <Baohua.Song@csr.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
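The compatible-string approach mentioned above typically looks like the sketch below, where each compatible entry carries per-SoC configuration data; the structure fields, offsets and exact compatible strings are illustrative, not the real sirf-dma values:

    #include <linux/of.h>
    #include <linux/of_device.h>

    struct example_dmadata {
        bool has_chain_table;   /* atlas7 DMAv2 chained DMA */
        u32 int_offset;         /* example of a register offset that differs */
    };

    static const struct example_dmadata prima2_data = {
        .has_chain_table = false,
        .int_offset = 0x04,
    };

    static const struct example_dmadata atlas7_v2_data = {
        .has_chain_table = true,
        .int_offset = 0x10,
    };

    static const struct of_device_id example_dma_match[] = {
        { .compatible = "sirf,prima2-dmac", .data = &prima2_data },
        { .compatible = "sirf,atlas7-dmac-v2", .data = &atlas7_v2_data },
        { /* sentinel */ },
    };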
2015-06-02  dmaengine: xgene-dma: Fix "incorrect type in assignement" warnings  [Rameshwar Prasad Sahu, 1 file, -107/+66]
This patch fixes sparse warnings such as "incorrect type in assignment (different base types)" and "cast to restricted __le64".
Reported-by: kbuild test robot <fengguang.wu@intel.com>
Signed-off-by: Rameshwar Prasad Sahu <rsahu@apm.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
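The class of fix sparse asks for here can be shown in a few lines: descriptor fields declared as __le64 must be written through the endianness accessors instead of plain assignment. The descriptor layout below is a placeholder:

    #include <linux/types.h>
    #include <asm/byteorder.h>

    struct example_hw_desc {
        __le64 src_addr;
        __le64 dst_addr;
    };

    static void fill_desc(struct example_hw_desc *hw, u64 src, u64 dst)
    {
        /*
         * hw->src_addr = src;  would trigger
         * "incorrect type in assignment (different base types)".
         */
        hw->src_addr = cpu_to_le64(src);
        hw->dst_addr = cpu_to_le64(dst);
    }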
2015-06-02  dmaengine: fix kernel-doc documentation  [Stefan Agner, 1 file, -2/+2]
Fix function names in kernel-doc function comments.
Signed-off-by: Stefan Agner <stefan@agner.ch>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
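For reference, this is the class of mismatch being fixed: the name on the kernel-doc opening line must match the function it documents. A minimal, made-up example:

    /**
     * example_dma_flush - flush all pending descriptors on a channel
     * @chan: channel to flush
     *
     * Return: 0 on success, a negative errno otherwise.
     */
    static int example_dma_flush(struct dma_chan *chan)
    {
        return 0;
    }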
2015-05-29  dmaengine: pxa_dma: add support for legacy transition  [Robert Jarzmik, 1 file, -0/+28]
In order to achieve a smooth transition of pxa drivers from the old legacy dma handling to the new dmaengine, introduce a function to "hide" dma physical channels from dmaengine. This is a temporary situation in which pxa dma will be handled in two places:
- arch/arm/plat-pxa/dma.c
- drivers/dma/pxa_dma.c
The resources, i.e. the dma channels, will be controlled by pxa_dma. The legacy code will request or release a channel with pxad_toggle_reserved_channel(). This is not very pretty, but it ensures that both legacy and dmaengine consumers can live in the same kernel until the conversion is done.
Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
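A minimal sketch of how the legacy side might use this hook, assuming (this is an assumption, not taken from the patch) that pxad_toggle_reserved_channel() takes the legacy physical channel number and toggles its reservation:

    /* Assumed prototype, for illustration only. */
    void pxad_toggle_reserved_channel(int legacy_channel);

    static int legacy_grab_channel(int channel)
    {
        /* Hide this physical channel from dmaengine while legacy code owns it. */
        pxad_toggle_reserved_channel(channel);

        /* ... legacy arch/arm/plat-pxa/dma.c setup continues here ... */
        return channel;
    }

    static void legacy_release_channel(int channel)
    {
        /* Give the channel back to pxa_dma / dmaengine. */
        pxad_toggle_reserved_channel(channel);
    }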
2015-05-26  dmaengine: pxa_dma: add debug information  [Robert Jarzmik, 1 file, -0/+244]
Reuse the debugging features that were available in the pxa architecture. This is a copy of the code from arch/arm/plat-pxa/dma, which is doomed to disappear once the conversion to dmaengine is completed. This is a transfer of the commit "[ARM] pxa/dma: add debugfs entries" (d294948c2ce4e1c85f452154469752cc9b8e876d).
Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
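The debugfs plumbing being carried over follows the usual seq_file pattern; the directory and file names below are placeholders rather than the actual pxa_dma entries:

    #include <linux/debugfs.h>
    #include <linux/module.h>
    #include <linux/seq_file.h>

    static int example_state_show(struct seq_file *s, void *unused)
    {
        /* A real driver would dump channel registers and descriptor state. */
        seq_puts(s, "channel state dump goes here\n");
        return 0;
    }

    static int example_state_open(struct inode *inode, struct file *file)
    {
        return single_open(file, example_state_show, inode->i_private);
    }

    static const struct file_operations example_state_fops = {
        .owner   = THIS_MODULE,
        .open    = example_state_open,
        .read    = seq_read,
        .llseek  = seq_lseek,
        .release = single_release,
    };

    static struct dentry *example_debugfs_init(void *drvdata)
    {
        struct dentry *dir = debugfs_create_dir("example_dma", NULL);

        debugfs_create_file("state", 0400, dir, drvdata, &example_state_fops);
        return dir;
    }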
2015-05-26  dmaengine: pxa: add pxa dmaengine driver  [Robert Jarzmik, 4 files, -0/+1234]
This is a new driver for pxa SoCs, which is also compatible with the former mmp_pdma. The rationale behind a new driver (as opposed to incremental patching) was:
- the new driver relies on virt-dma, which obsoletes all the internal structures of mmp_pdma (sw_desc, hw_desc, ...) and, as a consequence, all the functions
- mmp_pdma allocates dma coherent descriptors containing not only hardware descriptors but also linked-list information; the new driver only puts the dma hardware descriptors (i.e. 4 u32) into the dma pool allocated memory, which completely changes the way descriptors are handled
- the architecture behind the interrupt/tasklet management was rewritten to conform more closely to virt-dma
- buffer alignment is handled differently: the former driver assumed that the DMA channel stopped between each descriptor, while the new one chains descriptors to keep the channel running. This is a necessary guarantee for real-time high-bandwidth use cases such as video capture on "old" architectures such as pxa
- hot chaining / cold chaining / no chaining: whenever possible, submitting a descriptor "hot chains" it to a running channel. There is still no guarantee that the descriptor will be issued, as the channel might be stopped just before the descriptor is submitted; yet this allows submitting several video buffers and resubmitting a buffer while another is being handled. As before, dma_async_issue_pending() is the only guarantee that all the buffers get issued. When an alignment issue is detected (i.e. one address in a descriptor is not a multiple of 8) and the already running channel is in "aligned mode", the channel is stopped and restarted in "misaligned mode" to finish the issued list
- descriptor reuse: a submitted, issued and completed descriptor can be reused, i.e. resubmitted, if it was prepared with the proper flag (DMA_PREP_ACK). Only a release of the channel resources will then free that buffer. This allows a rolling ring of buffers to be reused, where several thousands of hardware descriptors are in use (video buffers for example)
Additionally, a set of more casual features is introduced:
- debugging traces
- a lockless way to know whether a descriptor is terminated or not
The driver was tested on the zylonite board (pxa3xx) and mioa701 (pxa27x), with dmatest, pxa_camera and pxamci.
Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
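On the client side, the state of a submitted descriptor is queried by cookie; a short hedged sketch of that check (the driver-internal lockless tracking mentioned above is what ultimately answers it):

    #include <linux/dmaengine.h>

    static bool transfer_done(struct dma_chan *chan, dma_cookie_t cookie)
    {
        struct dma_tx_state state;

        return dmaengine_tx_status(chan, cookie, &state) == DMA_COMPLETE;
    }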
2015-05-26  MAINTAINERS: add pxa dma driver to pxa architecture  [Robert Jarzmik, 1 file, -0/+1]
Add the pxa dma driver as maintained by the pxa architecture maintainers, as it is part of the core IP.
Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>