path: root/drivers/dma/ioat
Age | Commit message | Author | Files | Lines
2013-11-14 | ioat: kill msix_single_vector support | Dan Williams | 3 | -32/+3
Once we have determined that we will not have all of our desired msix vectors, there is no point in attempting a single msix allocation. The driver will already need to read registers to determine the source of the interrupt, so the fact that it is msix is moot. Fall back directly to msi. Reported-by: Alexander Gordeev <agordeev@redhat.com> Cc: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2013-11-14 | ioatdma: clean up sed pool kmem_cache | Dan Williams | 4 | -42/+22
Use a single cache for all sed allocations; there is no need to make it per channel. This also avoids the slub_debug warnings for multiple caches with the same name. Switching to dmam_pool_create() fixes leaking the dma pools on initialization failure and lets us kill ioat3_dma_remove(). Cc: Dave Jiang <dave.jiang@intel.com> Acked-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
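A hedged sketch of the shape of this change (the names ioat_sed_cache/ioat_sed_hw, the size parameters, and the 64-byte alignment are illustrative assumptions, not the driver's exact code): one kmem_cache shared by every channel, plus a device-managed DMA pool so a failed probe cannot leak it:

    #include <linux/slab.h>
    #include <linux/dmapool.h>
    #include <linux/device.h>
    #include <linux/errno.h>

    static struct kmem_cache *ioat_sed_cache;   /* one cache shared by every channel */

    static int ioat_sed_cache_init(size_t sed_size)
    {
            ioat_sed_cache = kmem_cache_create("ioat_sed", sed_size, 0, 0, NULL);
            return ioat_sed_cache ? 0 : -ENOMEM;
    }

    /* devm-managed hardware-descriptor pool: released automatically if probe fails */
    static struct dma_pool *ioat_sed_hw_pool_create(struct device *dev, size_t hw_size)
    {
            return dmam_pool_create("ioat_sed_hw", dev, hw_size, 64, 0);
    }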
2013-11-14 | ioatdma: fix selection of 16 vs 8 source path | Dan Williams | 1 | -15/+15
When performing continuations there are implied sources that need to be added to the source count. Quoting the dma_maxpq() documentation:

    /* dma_maxpq - reduce maxpq in the face of continued operations
     * @dma - dma device with PQ capability
     * @flags - to check if DMA_PREP_CONTINUE and DMA_PREP_PQ_DISABLE_P are set
     *
     * When an engine does not support native continuation we need 3 extra
     * source slots to reuse P and Q with the following coefficients:
     * 1/ {00} * P : remove P from Q', but use it as a source for P'
     * 2/ {01} * Q : use Q to continue Q' calculation
     * 3/ {00} * Q : subtract Q from P' to cancel (2)
     *
     * In the case where P is disabled we only need 1 extra source:
     * 1/ {01} * Q : use Q to continue Q' calculation
     */

...fix the selection of the 16-source path to take these implied sources into account. Note this also kills the BUG_ON(src_cnt < 9) check in __ioat3_prep_pq16_lock(). Besides not accounting for implied sources, the check is redundant given we already made the path selection. Cc: <stable@vger.kernel.org> Cc: Dave Jiang <dave.jiang@intel.com> Acked-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
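A hedged sketch of the resulting path selection (helper names and shape are simplified assumptions, not the driver's exact code): the 8- versus 16-source decision is made on the effective source count, i.e. src_cnt plus the implied continuation sources described above:

    #include <linux/dmaengine.h>

    static int pq_effective_src_cnt(int src_cnt, unsigned long flags)
    {
            if (flags & DMA_PREP_CONTINUE)
                    return src_cnt + ((flags & DMA_PREP_PQ_DISABLE_P) ? 1 : 3);
            return src_cnt;
    }

    static bool pq_needs_16_src_path(int src_cnt, unsigned long flags)
    {
            /* legacy PQ descriptors handle at most 8 sources */
            return pq_effective_src_cnt(src_cnt, flags) > 8;
    }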
2013-11-14 | ioatdma: fix sed pool selection | Dan Williams | 1 | -15/+1
The array used to look up the sed pool based on the number of sources (pq16_idx_to_sedi) has 16 entries and expects the highest valid source index (src_cnt - 1). However, we pass the total source count, which runs off the end of the array when src_cnt == 16. The minimal fix is to just pass src_cnt - 1, but given we know the source count is > 8 we can just calculate the sed pool with (src_cnt - 2) >> 3. Cc: Dave Jiang <dave.jiang@intel.com> Cc: <stable@vger.kernel.org> Acked-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
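A small illustration of the arithmetic (the helper name is hypothetical): for the 16-source path src_cnt is known to be in 9..16, so the pool index can be computed directly instead of indexing a 16-entry table with a value that overflows at src_cnt == 16:

    /*
     *   src_cnt:            9 10 11 12 13 14 15 16
     *   (src_cnt - 2) >> 3: 0  1  1  1  1  1  1  1
     */
    static unsigned int pq16_sed_pool(unsigned int src_cnt)
    {
            return (src_cnt - 2) >> 3;
    }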
2013-11-14 | ioatdma: Fix bug in selftest after removal of DMA_MEMSET. | Dave Jiang | 1 | -0/+2
Commit 48a9db4 (3.11) removed the memset op in the xor selftest for ioatdma. The issue is that the removed operation was never replaced with a CPU memset, so the memory being operated on, which is expected to be zeroed, was not. This causes the xor selftest to fail. Cc: <stable@vger.kernel.org> Signed-off-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
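A minimal sketch of the kind of fix described (assumed function name; the selftest works on page-sized buffers): zero the destination with the CPU now that no DMA_MEMSET op is issued before the xor-validate pass:

    #include <linux/mm.h>
    #include <linux/string.h>

    static void zero_xor_dest(struct page *dest)
    {
            /* replaces the hardware memset removed with DMA_MEMSET support */
            memset(page_address(dest), 0, PAGE_SIZE);
    }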
2013-11-14 | dmaengine: remove DMA unmap flags | Bartlomiej Zolnierkiewicz | 2 | -11/+4
Remove no longer needed DMA unmap flags:
 - DMA_COMPL_SKIP_SRC_UNMAP
 - DMA_COMPL_SKIP_DEST_UNMAP
 - DMA_COMPL_SRC_UNMAP_SINGLE
 - DMA_COMPL_DEST_UNMAP_SINGLE
Cc: Vinod Koul <vinod.koul@intel.com> Cc: Tomasz Figa <t.figa@samsung.com> Cc: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com> Acked-by: Jon Mason <jon.mason@intel.com> Acked-by: Mark Brown <broonie@linaro.org> [djbw: clean up straggling skip unmap flags in ntb] Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2013-11-14 | dmaengine: remove DMA unmap from drivers | Bartlomiej Zolnierkiewicz | 4 | -195/+0
Remove support for DMA unmapping from drivers as it is no longer needed (DMA core code is now handling it). Cc: Vinod Koul <vinod.koul@intel.com> Cc: Tomasz Figa <t.figa@samsung.com> Cc: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com> [djbw: fix up chan2parent() unused warning in drivers/dma/dw/core.c] Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2013-11-13 | dmaengine: prepare for generic 'unmap' data | Dan Williams | 3 | -0/+3
Add a hook for a common dma unmap implementation to enable removal of the per driver custom unmap code. (A reworked version of Bartlomiej Zolnierkiewicz's patches to remove the custom callbacks and the size increase of dma_async_tx_descriptor for drivers that don't care about raid). Cc: Vinod Koul <vinod.koul@intel.com> Cc: Tomasz Figa <t.figa@samsung.com> Cc: Dave Jiang <dave.jiang@intel.com> [bzolnier: prepare pl330 driver for adding missing unmap while at it] Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2013-10-25 | dmaengine: ioat: use DMA_COMPLETE for dma completion status | Vinod Koul | 2 | -6/+6
Acked-by: Dan Williams <dan.j.williams@intel.com> Acked-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-23 | ioatdma: silence GCC warnings | Paul Bolle | 1 | -1/+1
Building dma_v3.o triggers a GCC warning:

    drivers/dma/ioat/dma_v3.c: In function ‘__ioat3_prep_pq16_lock’:
    drivers/dma/ioat/dma_v3.c:264:11: warning: array subscript is below array bounds [-Warray-bounds]
    drivers/dma/ioat/dma_v3.c:264:11: warning: array subscript is below array bounds [-Warray-bounds]

This warning is caused by pq16_set_src(). It uses "int idx" as an index to an eight element array. Changing "idx" to "unsigned" silences this warning. Apparently GCC can then determine that "idx" will never be negative. Signed-off-by: Paul Bolle <pebolle@tiscali.nl> Acked-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Dan Williams <djbw@fb.com>
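A hedged illustration (simplified; not the driver's actual code): an eight-entry descriptor-address array indexed by a signed int lets GCC consider negative subscripts, while an unsigned index cannot be negative:

    #include <linux/types.h>

    static dma_addr_t pq16_srcs[8];

    static void pq16_set_src_sketch(dma_addr_t addr, unsigned idx)  /* was: int idx */
    {
            pq16_srcs[idx] = addr;
    }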
2013-08-22 | ioatdma: disable RAID on non-Atom platforms and reenable unaligned copies | Brice Goglin | 1 | -23/+1
Disable RAID on non-Atom platforms and remove related fixups such as the 64-byte alignment restriction on legacy DMA operations (introduced in commit f26df1a1 as a workaround for silicon errata). Signed-off-by: Brice Goglin <Brice.Goglin@inria.fr> Acked-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Jon Mason <jon.mason@intel.com> Signed-off-by: Dan Williams <djbw@fb.com>
2013-07-03 | drivers/dma: remove unused support for MEMSET operations | Bartlomiej Zolnierkiewicz | 4 | -142/+3
There have never been any real users of MEMSET operations since they were introduced in January 2007 by commit 7405f74badf4 ("dmaengine: refactor dmaengine around dma_async_tx_descriptor"). Therefore remove support for them for now; it can always be brought back when needed. [sebastian.hesselbarth@gmail.com: fix drivers/dma/mv_xor] Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com> Signed-off-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com> Cc: Vinod Koul <vinod.koul@intel.com> Acked-by: Dan Williams <djbw@fb.com> Cc: Tomasz Figa <t.figa@samsung.com> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Olof Johansson <olof@lixom.net> Cc: Kevin Hilman <khilman@linaro.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-05-09 | Merge branch 'for-linus' of git://git.infradead.org/users/vkoul/slave-dma | Linus Torvalds | 7 | -137/+950
Pull slave-dmaengine updates from Vinod Koul:
 "This time we have dmatest improvements from Andy along with dw_dmac fixes. He has also done support for acpi for dmaengine. Also we have bunch of fixes going in DT support for dmaengine for various folks. Then Haswell and other ioat changes from Dave and SUDMAC support from Shimoda."

* 'for-linus' of git://git.infradead.org/users/vkoul/slave-dma: (53 commits)
  dma: tegra: implement suspend/resume callbacks
  dma:of: Use a mutex to protect the of_dma_list
  dma: of: Fix of_node reference leak
  dmaengine: sirf: move driver init from module_init to subsys_initcall
  sudmac: add support for SUDMAC
  dma: sh: add Kconfig
  at_hdmac: move to generic DMA binding
  ioatdma: ioat3_alloc_sed can be static
  ioatdma: Adding write back descriptor error status support for ioatdma 3.3
  ioatdma: S1200 platforms ioatdma channel 2 and 3 falsely advertise RAID cap
  ioatdma: Adding support for 16 src PQ ops and super extended descriptors
  ioatdma: Removing hw bug workaround for CB3.x .2 and earlier
  dw_dmac: add ACPI support
  dmaengine: call acpi_dma_request_slave_channel as well
  dma: acpi-dma: introduce ACPI DMA helpers
  dma: of: Remove unnecessary list_empty check
  DMA: OF: Check properties value before running be32_to_cpup() on it
  DMA: of: Constant names
  ioatdma: skip silicon bug workaround for pq_align for cb3.3
  ioatdma: Removing PQ val disable for cb3.3
  ...
2013-04-16 | ioatdma: ioat3_alloc_sed can be static | Fengguang Wu | 1 | -2/+2
Reported-by: Fengguang Wu <fengguang.wu@intel.com> Signed-off-by: Fengguang Wu <fengguang.wu@intel.com> Acked-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-04-15 | ioatdma: Adding write back descriptor error status support for ioatdma 3.3 | Dave Jiang | 4 | -25/+105
v3.3 provides support for write back descriptor error status. This allows reporting of errors in a descriptor field. In supporting this, certain errors such as P/Q validation errors no longer halt the channel. The DMA engine can continue to execute until the end of the chain and allow software to report the "errors" up the stack. We are also going to mask those error interrupts and handle them once the "chain" has completed. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-04-15 | ioatdma: S1200 platforms ioatdma channel 2 and 3 falsely advertise RAID cap | Dave Jiang | 1 | -0/+15
This workaround checks for channels 2 and 3 and removes the RAID cap from them. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-04-15 | ioatdma: Adding support for 16 src PQ ops and super extended descriptors | Dave Jiang | 6 | -22/+438
v3.3 introduced 16-source PQ operations, along with super extended descriptors (SEDs) to support them. This patch adds support for the 16-source ops and in turn adds the super extended descriptors for those ops. Five SED pools are created depending on the descriptor sizes. An SED is a descriptor of 64 bytes or larger and must be physically contiguous. A kmem cache pool is created for allocating the software descriptor that manages the hardware descriptor. Under certain operations the super extended descriptor takes the place of the extended descriptor and is "attached" to the op descriptor during the operation. This is a new feature for ioatdma v3.3. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-04-15 | ioatdma: Removing hw bug workaround for CB3.x .2 and earlier | Dave Jiang | 1 | -11/+20
CB3.2 and earlier hardware has silicon bugs whose workarounds are no longer needed on the new hardware. We don't have to use a NULL op to signal an interrupt for RAID ops any longer. This code makes sure the legacy workarounds only happen on legacy hardware. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-04-15 | ioatdma: skip silicon bug workaround for pq_align for cb3.3 | Dave Jiang | 1 | -2/+10
The alignment workaround is only necessary for cb3.2 or earlier platforms. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-04-15 | ioatdma: Removing PQ val disable for cb3.3 | Dave Jiang | 3 | -12/+125
The PQ Val ops work on the newer hardware, so we should actually provide support for them and remove the disabling bits. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-04-15 | ioatdma: skip legacy reset bits since v3.3 platform doesn't need it | Dave Jiang | 1 | -13/+21
Make it so that only 3.2 and earlier platforms need the PCI config register clearing, since this implementation does not have those registers. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-04-15 | ioatdma: channel reset scheme fixup on Intel Atom S1200 platforms | Dave Jiang | 3 | -83/+171
The Intel Atom S1200 family ioatdma changed the channel reset behavior. It does a reset similar to PCI FLR by resetting all the MSIX registers. We have to re-init msix interrupts because of this. This workaround is only specific to this platform and is not expected to carry over to the later generations. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-04-15 | ioatdma: Add 64bit chansts register read for ioat v3.3. | Dave Jiang | 1 | -1/+21
The channel status register for v3.3 is now 64bit. Use readq if available on v3.3 platforms. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
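A hedged sketch of the idea (the accessor and offset parameter are assumptions; the real driver has its own register helpers): read the 64-bit channel status in a single readq() where the build provides it, otherwise combine two 32-bit reads:

    #include <linux/io.h>
    #include <linux/types.h>

    static u64 ioat_chansts_sketch(void __iomem *chan_regs, unsigned int chansts_off)
    {
    #ifdef CONFIG_64BIT
            return readq(chan_regs + chansts_off);          /* single 64-bit read */
    #else
            u64 lo = readl(chan_regs + chansts_off);
            u64 hi = readl(chan_regs + chansts_off + 4);

            return lo | (hi << 32);                         /* two 32-bit reads */
    #endif
    }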
2013-04-15 | ioatdma: Adding PCI IDs for Intel Atom S1200 product family ioatdma devices | Dave Jiang | 2 | -0/+12
These should be good for the IOAT DMA devices on the Intel Atom S1269, S1279, and S1289 platforms. We are also adding IOAT v3.3 definition for the new DMA engine. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-04-15 | ioatdma: Adding Haswell devid for ioatdma | Dave Jiang | 3 | -6/+55
Add Haswell PCI device IDs for ioatdma and simplify the detection of certain Xeon CPUs that have alignment bugs, so that future modifications can be made in a single place. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-04-15 | ioatdma: allow all channels to have irq coalescing support | Dave Jiang | 1 | -9/+3
Looks like only the RAID channels are allowed to have irq coalescing support in the existing code. Fix that; the ioat3 cleanup code can handle memcpy ops anyway. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-04-15 | ioatdma: make debug output more readable | Dave Jiang | 2 | -2/+3
Print the OP field as hex instead of a decimal integer to make it more readable. Also add a dump of the NEXT field. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-03-22 | ioat/dca: Update DCA BIOS workarounds to use TAINT_FIRMWARE_WORKAROUND | Alexander Duyck | 1 | -3/+8
This patch is meant to be a follow-up for a patch originally submitted under the title "ioat: Do not enable DCA if tag map is invalid". It was brought to my attention that the preferred approach for BIOS workarounds is to set the taint flag for TAINT_FIRMWARE_WORKAROUND for systems that require BIOS workarounds. This change makes it so that the DCA workarounds for broken BIOSes will now use WARN_TAINT_ONCE(1, TAINT_FIRMWARE_WORKAROUND, ...) instead of just printing a message via dev_err. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
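A minimal sketch of the pattern described (the message text and helper name are illustrative, not the driver's exact wording):

    #include <linux/pci.h>
    #include <linux/bug.h>
    #include <linux/device.h>

    static void ioat_warn_bad_tag_map(struct pci_dev *pdev)
    {
            /* taint the kernel once instead of only logging via dev_err() */
            WARN_TAINT_ONCE(1, TAINT_FIRMWARE_WORKAROUND,
                            "%s: broken BIOS DCA tag map, disabling DCA\n",
                            dev_name(&pdev->dev));
    }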
2013-02-26 | Merge branch 'next' of git://git.infradead.org/users/vkoul/slave-dma | Linus Torvalds | 6 | -136/+227
Pull slave-dmaengine updates from Vinod Koul:
 "This is fairly big pull by my standards as I had missed last merge window. So we have the support for device tree for slave-dmaengine, large updates to dw_dmac driver from Andy for reusing on different architectures. Along with this we have fixes on bunch of the drivers"

Fix up trivial conflicts, usually due to #include line movement next to each other.

* 'next' of git://git.infradead.org/users/vkoul/slave-dma: (111 commits)
  Revert "ARM: SPEAr13xx: Pass DW DMAC platform data from DT"
  ARM: dts: pl330: Add #dma-cells for generic dma binding support
  DMA: PL330: Register the DMA controller with the generic DMA helpers
  DMA: PL330: Add xlate function
  DMA: PL330: Add new pl330 filter for DT case.
  dma: tegra20-apb-dma: remove unnecessary assignment
  edma: do not waste memory for dma_mask
  dma: coh901318: set residue only if dma is in progress
  dma: coh901318: avoid unbalanced locking
  dmaengine.h: remove redundant else keyword
  dma: of-dma: protect list write operation by spin_lock
  dmaengine: ste_dma40: do not remove descriptors for cyclic transfers
  dma: of-dma.c: fix memory leakage
  dw_dmac: apply default dma_mask if needed
  dmaengine: ioat - fix spare sparse complain
  dmaengine: move drivers/of/dma.c -> drivers/dma/of-dma.c
  ioatdma: fix race between updating ioat->head and IOAT_COMPLETION_PENDING
  dw_dmac: add support for Lynxpoint DMA controllers
  dw_dmac: return proper residue value
  dw_dmac: fill individual length of descriptor
  ...
2013-02-13 | dmaengine: ioat - fix spare sparse complain | Fengguang Wu | 1 | -1/+1
>> drivers/dma/ioat/dma_v3.c:371:6: sparse: symbol 'ioat3_timer_event' was not declared. Reported-by: Fengguang Wu <fengguang.wu@intel.com> Signed-off-by: Fengguang Wu <fengguang.wu@intel.com> Acked-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-02-12 | ioatdma: fix race between updating ioat->head and IOAT_COMPLETION_PENDING | Dave Jiang | 3 | -97/+128
There is a race that can hit during __cleanup() when the ioat->head pointer is incremented during descriptor submission. The __cleanup() can clear the PENDING flag when it does not see any active descriptors. This causes newly submitted descriptors to be ignored because the COMPLETION_PENDING flag is cleared. This was introduced when code was adapted from ioatdma v1 to ioatdma v2. For v2 and v3, the IOAT_COMPLETION_PENDING flag will be abandoned and a new flag, IOAT_CHAN_ACTIVE, will be utilized. This flag will also be protected under the prep_lock when being modified in order to avoid the race. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Reviewed-by: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
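A hedged sketch of the locking pattern described (the structure and field names are simplified stand-ins for the driver's channel state): the active flag is only changed while holding prep_lock, the same lock that serializes descriptor submission, so cleanup cannot observe a stale state:

    #include <linux/spinlock.h>
    #include <linux/bitops.h>

    #define CHAN_ACTIVE 0   /* bit number, hypothetical */

    struct chan_state {
            spinlock_t prep_lock;
            unsigned long flags;
            unsigned int head;
    };

    static void submit_desc(struct chan_state *c)
    {
            spin_lock_bh(&c->prep_lock);
            set_bit(CHAN_ACTIVE, &c->flags);    /* marked active under prep_lock */
            c->head++;                          /* descriptor handed to hardware */
            spin_unlock_bh(&c->prep_lock);
    }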
2013-01-24 | Merge branch 'fixes' of git://git.infradead.org/users/vkoul/slave-dma | Linus Torvalds | 1 | -1/+1
Pull slave-dmaengine fixes from Vinod Koul:
 "A few fixes on slave dmaengine. There are trivial fixes in imx-dma, tegra-dma & ioat driver"

* 'fixes' of git://git.infradead.org/users/vkoul/slave-dma:
  dma: tegra: implement flags parameters for cyclic transfer
  dmaengine: imx-dma: Disable use of hw_chain to fix sg_dma transfers.
  ioat: Fix DMA memory sync direction correct flag
2013-01-07 | ioat: remove chanerr mask setting for IOAT v3.x | Dave Jiang | 1 | -6/+1
The existing code sets a value in the PCI_CHANERRMSK_INT register as a workaround for a pre-silicon bug on the Intel 5520 IO hub that had been fixed by the time the hardware was released. There is no need for this code. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Dan Williams <djbw@fb.com>
2013-01-07 | ioat: Add alignment workaround for IVB platforms | Dave Jiang | 3 | -12/+32
The PCI IDs for IvyBridge IOAT DMA need to go into a header file since dma_v3.c looks them up for certain hardware workarounds. We also need to add IVB to the alignment workaround for IOAT 3.2 since it wasn't fixed in IVB. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Dan Williams <djbw@fb.com>
2013-01-07 | ioat3: add missing DMA unmap to ioat_xor_val_self_test() | Bartlomiej Zolnierkiewicz | 1 | -17/+59
Make ioat_xor_val_self_test() do DMA unmapping itself and fix handling of failure cases. Cc: Dan Williams <djbw@fb.com> Cc: Tomasz Figa <t.figa@samsung.com> Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com> Signed-off-by: Dan Williams <djbw@fb.com>
2013-01-07 | ioat: add missing DMA unmap to ioat_dma_self_test() | Bartlomiej Zolnierkiewicz | 1 | -4/+7
Make ioat_dma_self_test() do DMA unmapping itself and fix handling of failure cases. Cc: Dan Williams <djbw@fb.com> Cc: Tomasz Figa <t.figa@samsung.com> Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com> Signed-off-by: Dan Williams <djbw@fb.com>
2013-01-06 | ioat: Fix DMA memory sync direction correct flag | Shuah Khan | 1 | -1/+1
ioat does DMA memory sync with DMA_TO_DEVICE direction on a buffer allocated for DMA_FROM_DEVICE dma, resulting in the following warning from dma debug. Fixed the dma_sync_single_for_device() call to use the correct direction.

    [  226.288947] WARNING: at lib/dma-debug.c:990 check_sync+0x132/0x550()
    [  226.288948] Hardware name: ProLiant DL380p Gen8
    [  226.288951] ioatdma 0000:00:04.0: DMA-API: device driver syncs DMA memory with different direction [device address=0x00000000ffff7000] [size=4096 bytes] [mapped with DMA_FROM_DEVICE] [synced with DMA_TO_DEVICE]
    [  226.288953] Modules linked in: iTCO_wdt(+) sb_edac(+) ioatdma(+) microcode serio_raw pcspkr edac_core hpwdt(+) iTCO_vendor_support hpilo(+) dca acpi_power_meter ata_generic pata_acpi sd_mod crc_t10dif ata_piix libata hpsa tg3 netxen_nic(+) sunrpc dm_mirror dm_region_hash dm_log dm_mod
    [  226.288967] Pid: 1055, comm: work_for_cpu Tainted: G W 3.3.0-0.20.el7.x86_64 #1
    [  226.288968] Call Trace:
    [  226.288974] [<ffffffff810644cf>] warn_slowpath_common+0x7f/0xc0
    [  226.288977] [<ffffffff810645c6>] warn_slowpath_fmt+0x46/0x50
    [  226.288980] [<ffffffff81345502>] check_sync+0x132/0x550
    [  226.288983] [<ffffffff81345c9f>] debug_dma_sync_single_for_device+0x3f/0x50
    [  226.288988] [<ffffffff81661002>] ? wait_for_common+0x72/0x180
    [  226.288995] [<ffffffffa019590f>] ioat_xor_val_self_test+0x3e5/0x832 [ioatdma]
    [  226.288999] [<ffffffff811a5739>] ? kfree+0x259/0x270
    [  226.289004] [<ffffffffa0195d77>] ioat3_dma_self_test+0x1b/0x20 [ioatdma]
    [  226.289008] [<ffffffffa01952c3>] ioat_probe+0x2f8/0x348 [ioatdma]
    [  226.289011] [<ffffffffa0195f51>] ioat3_dma_probe+0x1d5/0x2aa [ioatdma]
    [  226.289016] [<ffffffffa0194d12>] ioat_pci_probe+0x139/0x17c [ioatdma]
    [  226.289020] [<ffffffff81354b8c>] local_pci_probe+0x5c/0xd0
    [  226.289023] [<ffffffff81083e50>] ? destroy_work_on_stack+0x20/0x20
    [  226.289025] [<ffffffff81083e68>] do_work_for_cpu+0x18/0x30
    [  226.289029] [<ffffffff8108d997>] kthread+0xb7/0xc0
    [  226.289033] [<ffffffff8166cef4>] kernel_thread_helper+0x4/0x10
    [  226.289036] [<ffffffff81662d20>] ? _raw_spin_unlock_irq+0x30/0x50
    [  226.289038] [<ffffffff81663234>] ? retint_restore_args+0x13/0x13
    [  226.289041] [<ffffffff8108d8e0>] ? kthread_worker_fn+0x1a0/0x1a0
    [  226.289044] [<ffffffff8166cef0>] ? gs_change+0x13/0x13
    [  226.289045] ---[ end trace e1618afc7a606089 ]---
    [  226.289047] Mapped at:
    [  226.289048] [<ffffffff81345307>] debug_dma_map_page+0x87/0x150
    [  226.289050] [<ffffffffa019653c>] dma_map_page.constprop.18+0x70/0xb34 [ioatdma]
    [  226.289054] [<ffffffffa0195702>] ioat_xor_val_self_test+0x1d8/0x832 [ioatdma]
    [  226.289058] [<ffffffffa0195d77>] ioat3_dma_self_test+0x1b/0x20 [ioatdma]
    [  226.289061] [<ffffffffa01952c3>] ioat_probe+0x2f8/0x348 [ioatdma]

Signed-off-by: Shuah Khan <shuah.khan@hp.com> CC: <stable@vger.kernel.org> Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
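A minimal sketch of the correction described (the wrapper and parameter names are placeholders): a buffer mapped with DMA_FROM_DEVICE must also be synced with DMA_FROM_DEVICE:

    #include <linux/dma-mapping.h>

    static void sync_dest_for_device(struct device *dev, dma_addr_t dest_dma, size_t len)
    {
            /* dest_dma was mapped with DMA_FROM_DEVICE, so sync with the same
             * direction; the old code passed DMA_TO_DEVICE here, which dma-debug flags */
            dma_sync_single_for_device(dev, dest_dma, len, DMA_FROM_DEVICE);
    }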
2013-01-03 | Drivers: dma: remove __dev* attributes. | Greg Kroah-Hartman | 7 | -33/+28
CONFIG_HOTPLUG is going away as an option. As a result, the __dev* markings need to be removed. This change removes the use of __devinit, __devexit_p, __devinitconst, and __devexit from these drivers. Based on patches originally written by Bill Pemberton, but redone by me in order to handle some of the coding style issues better, by hand. Cc: Bill Pemberton <wfp5p@virginia.edu> Cc: Viresh Kumar <viresh.linux@gmail.com> Cc: Dan Williams <djbw@fb.com> Cc: Vinod Koul <vinod.koul@intel.com> Cc: Barry Song <baohua.song@csr.com> Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Cc: Alexander Duyck <alexander.h.duyck@intel.com> Cc: Russell King <rmk+kernel@arm.linux.org.uk> Cc: Linus Walleij <linus.walleij@linaro.org> Cc: Jassi Brar <jassisinghbrar@gmail.com> Cc: Dave Jiang <dave.jiang@intel.com> Cc: Bill Pemberton <wfp5p@virginia.edu> Cc: Guennadi Liakhovetski <g.liakhovetski@gmx.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-12-12 | Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next | Linus Torvalds | 1 | -0/+23
Pull networking changes from David Miller:

 1) Allow to dump, monitor, and change the bridge multicast database using netlink. From Cong Wang.
 2) RFC 5961 TCP blind data injection attack mitigation, from Eric Dumazet.
 3) Networking user namespace support from Eric W. Biederman.
 4) tuntap/virtio-net multiqueue support by Jason Wang.
 5) Support for checksum offload of encapsulated packets (basically, tunneled traffic can still be checksummed by HW). From Joseph Gasparakis.
 6) Allow BPF filter access to VLAN tags, from Eric Dumazet and Daniel Borkmann.
 7) Bridge port parameters over netlink and BPDU blocking support from Stephen Hemminger.
 8) Improve data access patterns during inet socket demux by rearranging socket layout, from Eric Dumazet.
 9) TIPC protocol updates and cleanups from Ying Xue, Paul Gortmaker, and Jon Maloy.
 10) Update TCP socket hash sizing to be more in line with current day realities. The existing heuristics were chosen a decade ago. From Eric Dumazet.
 11) Fix races, queue bloat, and excessive wakeups in ATM and associated drivers, from Krzysztof Mazur and David Woodhouse.
 12) Support DOVE (Distributed Overlay Virtual Ethernet) extensions in VXLAN driver, from David Stevens.
 13) Add "oops_only" mode to netconsole, from Amerigo Wang.
 14) Support set and query of VEB/VEPA bridge mode via PF_BRIDGE, also allow DCB netlink to work on namespaces other than the initial namespace. From John Fastabend.
 15) Support PTP in the Tigon3 driver, from Matt Carlson.
 16) tun/vhost zero copy fixes and improvements, plus turn it on by default, from Michael S. Tsirkin.
 17) Support per-association statistics in SCTP, from Michele Baldessari.

And many, many, driver updates, cleanups, and improvements. Too numerous to mention individually.

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next: (1722 commits)
  net/mlx4_en: Add support for destination MAC in steering rules
  net/mlx4_en: Use generic etherdevice.h functions.
  net: ethtool: Add destination MAC address to flow steering API
  bridge: add support of adding and deleting mdb entries
  bridge: notify mdb changes via netlink
  ndisc: Unexport ndisc_{build,send}_skb().
  uapi: add missing netconf.h to export list
  pkt_sched: avoid requeues if possible
  solos-pci: fix double-free of TX skb in DMA mode
  bnx2: Fix accidental reversions.
  bna: Driver Version Updated to 3.1.2.1
  bna: Firmware update
  bna: Add RX State
  bna: Rx Page Based Allocation
  bna: TX Intr Coalescing Fix
  bna: Tx and Rx Optimizations
  bna: Code Cleanup and Enhancements
  ath9k: check pdata variable before dereferencing it
  ath5k: RX timestamp is reported at end of frame
  ath9k_htc: RX timestamp is reported at end of frame
  ...
2012-11-28 | dma: remove use of __devexit_p | Bill Pemberton | 1 | -1/+1
CONFIG_HOTPLUG is going away as an option so __devexit_p is no longer needed. Signed-off-by: Bill Pemberton <wfp5p@virginia.edu> Acked-by: Barry Song <baohua.song@csr.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Acked-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-11-15 | ioat: Do not enable DCA if tag map is invalid | Alexander Duyck | 1 | -0/+23
I have encountered several systems that have an invalid tag map. This invalid tag map results in only two tags being generated: 0x1F, which is usually applied to the first core in a Hyper-threaded pair, and 0x00, which is applied to the second core in a Hyper-threaded pair. The net result of all this is that DCA causes significant cache thrash because the 0x1F tag will send traffic to the second socket, while the 0x00 tag will not DCA tag the frame, resulting in no benefit. For now the best solution from the driver's perspective is to just disable DCA if the tag map is invalid. The correct solution for this would be to have the BIOS on affected systems updated so that the correct tags are generated for a given APIC ID. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Stephen Ko <stephen.s.ko@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
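A hedged sketch of the kind of check described (the test pattern is simplified; the real driver has its own notion of a valid map): treat the tag map as invalid when it only ever yields the 0x1F/0x00 pair, and skip DCA registration in that case:

    #include <linux/types.h>

    /* true if every entry is 0x1F or 0x00, the broken-BIOS pattern described above */
    static bool dca_tag_map_looks_invalid(const u8 *tag_map, int len)
    {
            int i;

            for (i = 0; i < len; i++)
                    if (tag_map[i] != 0x1f && tag_map[i] != 0x00)
                            return false;
            return true;
    }

    /* caller: if (dca_tag_map_looks_invalid(map, len)) do not register DCA */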
2012-10-10 | Merge branch 'next' of git://git.infradead.org/users/vkoul/slave-dma | Linus Torvalds | 2 | -2/+23
Pull slave-dmaengine updates from Vinod Koul:
 "This time we have Andy updates on dw_dmac which is attempting to make this IP block available as PCI and platform device though not fully complete this time. We also have TI EDMA moving the dma driver to use dmaengine APIs, also have a new driver for mmp-tdma, along with bunch of small updates. Now for your excitement the merge is little unusual here, while merging the auto merge on linux-next picks wrong choice for pl330 (drivers/dma/pl330.c) and this causes build failure. The correct resolution is in linux-next. (DMA: PL330: Fix build error) I didn't back merge your tree this time as you are better than me so no point in doing that for me :)"

Fixed the pl330 conflict as in linux-next, along with trivial header file conflicts due to changed includes.

* 'next' of git://git.infradead.org/users/vkoul/slave-dma: (29 commits)
  dma: tegra: fix interrupt name issue with apb dma.
  dw_dmac: fix a regression in dwc_prep_dma_memcpy
  dw_dmac: introduce software emulation of LLP transfers
  dw_dmac: autoconfigure data_width or get it via platform data
  dw_dmac: autoconfigure block_size or use platform data
  dw_dmac: get number of channels from hardware if possible
  dw_dmac: fill optional encoded parameters in register structure
  dw_dmac: mark dwc_dump_chan_regs as inline
  DMA: PL330: return ENOMEM instead of 0 from pl330_alloc_chan_resources
  DMA: PL330: Remove redundant runtime_suspend/resume functions
  DMA: PL330: Remove controller clock enable/disable
  dmaengine: use kmem_cache_zalloc instead of kmem_cache_alloc/memset
  DMA: PL330: Set the capability of pdm0 and pdm1 as DMA_PRIVATE
  ARM: EXYNOS: Set the capability of pdm0 and pdm1 as DMA_PRIVATE
  dma: tegra: use list_move_tail instead of list_del/list_add_tail
  mxs/dma: Enlarge the CCW descriptor area to 4 pages
  dw_dmac: utilize slave_id to pass request line
  dmaengine: mmp_tdma: add dt support
  dmaengine: mmp-pdma support
  spi: davici - make davinci select edma
  ...
2012-09-18 | dmaengine: use kmem_cache_zalloc instead of kmem_cache_alloc/memset | Wei Yongjun | 1 | -2/+1
Use kmem_cache_zalloc() instead of kmem_cache_alloc() and memset(). spatch with a semantic match is used to find this problem. (http://coccinelle.lip6.fr/) Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn> Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
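A minimal before/after sketch of the transformation (the wrapper name and return type are placeholders, not the driver's code):

    #include <linux/slab.h>

    static void *alloc_zeroed_desc(struct kmem_cache *cache, gfp_t flags)
    {
            /* before:
             *   desc = kmem_cache_alloc(cache, flags);
             *   if (desc)
             *           memset(desc, 0, size);
             */
            return kmem_cache_zalloc(cache, flags);   /* allocate and zero in one call */
    }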
2012-09-01 | ioat: remove unused #defines | Jon Mason | 1 | -4/+0
IOAT has a redefine of PCI Vendor, PCI Subvendor, etc for PCI_VENDOR_ID_INTEL but they are never used. Remove them. Signed-off-by: Jon Mason <jdmason@kudzu.us> Cc: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Jiri Kosina <jkosina@suse.cz>
2012-08-31 | ioat: Adding Ivy Bridge IOATDMA PCI device IDs | Dave Jiang | 1 | -0/+22
Signed-off-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Dan Williams <djbw@db.com> Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
2012-04-10 | Merge tag 'dmaengine-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/djbw/dmaengine | Linus Torvalds | 5 | -23/+64
Pull dmaengine fixes from Dan Williams:

 1/ regression fix for Xen as it now trips over a broken assumption about the dma address size on 32-bit builds
 2/ new quirk for netdma to ignore dma channels that cannot meet netdma alignment requirements
 3/ fixes for two long standing issues in ioatdma (ring size overflow) and iop-adma (potential stack corruption)

* tag 'dmaengine-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/djbw/dmaengine:
  netdma: adding alignment check for NETDMA ops
  ioatdma: DMA copy alignment needed to address IOAT DMA silicon errata
  ioat: ring size variables need to be 32bit to avoid overflow
  iop-adma: Corrected array overflow in RAID6 Xscale(R) test.
  ioat: fix size of 'completion' for Xen
2012-04-05 | ioatdma: DMA copy alignment needed to address IOAT DMA silicon errata | Dave Jiang | 1 | -0/+41
Silicon errata: when RAID and legacy descriptors are mixed, the legacy (memcpy and friends) operations must be 64-byte aligned to avoid hanging. This affects Intel Xeon C55xx, C35xx, and E5-2600. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
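A hedged sketch of how such a constraint is commonly advertised through dmaengine (copy_align is a real struct dma_device member; whether this commit touches exactly this field is an assumption): alignment is expressed as a power of two, so 64 bytes is 1 << 6:

    #include <linux/dmaengine.h>

    static void apply_copy_align_quirk(struct dma_device *dma)
    {
            /* 1 << 6 == 64-byte alignment required when RAID and legacy ops mix */
            dma->copy_align = 6;
    }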
2012-04-05 | ioat: ring size variables need to be 32bit to avoid overflow | Dave Jiang | 2 | -4/+4
The alloc order can be up to 16, and 1 << 16 will overflow a 16-bit integer. Change the appropriate variables to 32bit to avoid overflow. Reported-by: Jim Harris <james.r.harris@intel.com> Signed-off-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
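A minimal illustration of the overflow (the helper name is hypothetical): with an alloc order of 16 the descriptor count is 65536, which does not fit in a u16, so the count must live in a 32-bit type:

    #include <linux/types.h>

    static u32 ring_descs_for_order(unsigned int order)   /* was effectively a u16 */
    {
            /* order can be up to 16; 1 << 16 == 65536 overflows a 16-bit counter */
            return 1u << order;
    }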
2012-03-23 | ioat: fix size of 'completion' for Xen | Dan Williams | 4 | -19/+19
Starting with v3.2 Jonathan reports that Xen crashes loading the ioatdma driver. A debug run shows:

    ioatdma 0000:00:16.4: desc[0]: (0x300cc7000->0x300cc7040) cookie: 0 flags: 0x2 ctl: 0x29 (op: 0 int_en: 1 compl: 1)
    ...
    ioatdma 0000:00:16.4: ioat_get_current_completion: phys_complete: 0xcc7000

...which shows that in this environment GFP_KERNEL memory may be backed by a 64-bit dma address. This breaks the driver's assumption that an unsigned long should be able to contain the physical address for descriptor memory. Switch to dma_addr_t which, beyond being the right size, is the true type for the data, i.e. an io-virtual address indicating the engine's last processed descriptor. [stable: 3.2+] Cc: <stable@vger.kernel.org> Reported-by: Jonathan Nieder <jrnieder@gmail.com> Reported-by: William Dauchy <wdauchy@gmail.com> Tested-by: William Dauchy <wdauchy@gmail.com> Tested-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
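A minimal sketch of the type change described (the structure and field names are illustrative): the completion address is stored as dma_addr_t, which is wide enough on platforms whose DMA addresses exceed 32 bits:

    #include <linux/types.h>

    struct completion_state_sketch {
            /* was: unsigned long phys_complete; truncates addresses above 4 GiB
             * on 32-bit builds such as the Xen case reported here */
            dma_addr_t phys_complete;   /* engine's last processed descriptor address */
    };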
2012-03-13 | dmaengine: fix for cookie changes and merge | Vinod Koul | 1 | -0/+1
Fixed trivial issues in drivers:
  drivers/dma/imx-sdma.c
  drivers/dma/intel_mid_dma.c
  drivers/dma/ioat/dma_v3.c
  drivers/dma/iop-adma.c
  drivers/dma/sirf-dma.c
  drivers/dma/timb_dma.c
Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>