path: root/drivers/crypto
Age | Commit message | Author | Files | Lines
2016-07-26 | Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6 | Linus Torvalds | 55 | -1060/+5136
Pull crypto updates from Herbert Xu:
 "Here is the crypto update for 4.8:

  API:
   - first part of skcipher low-level conversions
   - add KPP (Key-agreement Protocol Primitives) interface

  Algorithms:
   - fix IPsec/cryptd reordering issues that affect aesni
   - RSA no longer does explicit leading zero removal
   - add SHA3
   - add DH
   - add ECDH
   - improve DRBG performance by not doing CTR by hand

  Drivers:
   - add x86 AVX2 multibuffer SHA256/512
   - add POWER8 optimised crc32c
   - add xts support to vmx
   - add DH support to qat
   - add RSA support to caam
   - add Layerscape support to caam
   - add SEC1 AEAD support to talitos
   - improve performance by chaining requests in marvell/cesa
   - add support for Araneus Alea I USB RNG
   - add support for Broadcom BCM5301 RNG
   - add support for Amlogic Meson RNG
   - add support for Broadcom NSP SoC RNG"

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (180 commits)
  crypto: vmx - Fix aes_p8_xts_decrypt build failure
  crypto: vmx - Ignore generated files
  crypto: vmx - Adding support for XTS
  crypto: vmx - Adding asm subroutines for XTS
  crypto: skcipher - add comment for skcipher_alg->base
  crypto: testmgr - Print akcipher algorithm name
  crypto: marvell - Fix wrong flag used for GFP in mv_cesa_dma_add_iv_op
  crypto: nx - off by one bug in nx_of_update_msc()
  crypto: rsa-pkcs1pad - fix rsa-pkcs1pad request struct
  crypto: scatterwalk - Inline start/map/done
  crypto: scatterwalk - Remove unnecessary BUG in scatterwalk_start
  crypto: scatterwalk - Remove unnecessary advance in scatterwalk_pagedone
  crypto: scatterwalk - Fix test in scatterwalk_done
  crypto: api - Optimise away crypto_yield when hard preemption is on
  crypto: scatterwalk - add no-copy support to copychunks
  crypto: scatterwalk - Remove scatterwalk_bytes_sglen
  crypto: omap - Stop using crypto scatterwalk_bytes_sglen
  crypto: skcipher - Remove top-level givcipher interface
  crypto: user - Remove crypto_lookup_skcipher call
  crypto: cts - Convert to skcipher
  ...
2016-07-26 | Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux | Linus Torvalds | 1 | -0/+13
Pull s390 updates from Martin Schwidefsky:
 "There are a couple of new things for s390 with this merge request:

  - a new scheduling domain "drawer" is added to reflect the unusual topology found on z13 machines. Performance tests showed up to 8 percent gain with the additional domain.
  - the new crc-32 checksum crypto module uses the vector-galois-field multiply and sum SIMD instruction to speed up crc-32 and crc-32c.
  - proper __ro_after_init support, this requires RO_AFTER_INIT_DATA in the generic vmlinux.lds linker script definitions.
  - kcov instrumentation support. A prerequisite for that is the inline assembly basic block cleanup, which is the reason for the net/iucv/iucv.c change.
  - support for 2GB pages is added to the hugetlbfs backend.

  Then there are two removals:

  - the oprofile hardware sampling support is dead code and is removed. The oprofile user space uses the perf interface nowadays.
  - the ETR clock synchronization is removed, as it has been superseded by the STP clock synchronization. And it always has been "interesting" code.

  And the usual bug fixes and cleanups"

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux: (82 commits)
  s390/pci: Delete an unnecessary check before the function call "pci_dev_put"
  s390/smp: clean up a condition
  s390/cio/chp : Remove deprecated create_singlethread_workqueue
  s390/chsc: improve channel path descriptor determination
  s390/chsc: sanitize fmt check for chp_desc determination
  s390/cio: make fmt1 channel path descriptor optional
  s390/chsc: fix ioctl CHSC_INFO_CU command
  s390/cio/device_ops: fix kernel doc
  s390/cio: allow to reset channel measurement block
  s390/console: Make preferred console handling more consistent
  s390/mm: fix gmap tlb flush issues
  s390/mm: add support for 2GB hugepages
  s390: have unique symbol for __switch_to address
  s390/cpuinfo: show maximum thread id
  s390/ptrace: clarify bits in the per_struct
  s390: stack address vs thread_info
  s390: remove pointless load within __switch_to
  s390: enable kcov support
  s390/cpumf: use basic block for ecctr inline assembly
  s390/hypfs: use basic block for diag inline assembly
  ...
2016-07-21 | Merge git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6 | Herbert Xu | 4 | -5/+5
Merge the crypto tree to resolve conflict in qat Makefile.
2016-07-21 | crypto: qat - make qat_asym_algs.o depend on asn1 headers | Jan Stancek | 1 | -0/+1
Parallel build can sporadically fail because asn1 headers may not be built yet by the time qat_asym_algs.o is compiled:

drivers/crypto/qat/qat_common/qat_asym_algs.c:55:32: fatal error: qat_rsapubkey-asn1.h: No such file or directory
 #include "qat_rsapubkey-asn1.h"

Cc: stable@vger.kernel.org Signed-off-by: Jan Stancek <jstancek@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-20 | crypto: vmx - Fix aes_p8_xts_decrypt build failure | Herbert Xu | 1 | -2/+0
We use _GLOBAL, so there is no need to do the manual alignment; in fact, it causes a build failure. Reported-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-20 | crypto: vmx - Ignore generated files | Paulo Flabiano Smorigo | 1 | -0/+2
Ignore assembly files generated by the perl script. Signed-off-by: Paulo Flabiano Smorigo <pfsmorigo@linux.vnet.ibm.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-19 | crypto: vmx - Adding support for XTS | Leonidas S. Barbosa | 3 | -1/+193
This patch adds XTS support using the VMX crypto driver. Signed-off-by: Leonidas S. Barbosa <leosilva@linux.vnet.ibm.com> Signed-off-by: Paulo Flabiano Smorigo <pfsmorigo@linux.vnet.ibm.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-19 | crypto: vmx - Adding asm subroutines for XTS | Paulo Flabiano Smorigo | 2 | -2/+1867
This patch adds XTS asm subroutines to the VMX crypto driver. It gives a 20x boost when using XTS. This code has been adapted from the OpenSSL project in collaboration with the original author (Andy Polyakov <appro@openssl.org>). Signed-off-by: Leonidas S. Barbosa <leosilva@linux.vnet.ibm.com> Signed-off-by: Paulo Flabiano Smorigo <pfsmorigo@linux.vnet.ibm.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-19 | crypto: marvell - Fix wrong flag used for GFP in mv_cesa_dma_add_iv_op | Romain Perier | 1 | -1/+1
Use the parameter 'gfp_flags' instead of 'flag' as the second argument of dma_pool_alloc(). The 'flag' parameter describes the TDMA descriptor; its contents are meaningless to the allocator. Fixes: bac8e805a30d ("crypto: marvell - Copy IV vectors by DMA...") Signed-off-by: Romain Perier <romain.perier@free-electrons.com> Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-19 | crypto: nx - off by one bug in nx_of_update_msc() | Dan Carpenter | 1 | -1/+1
The props->ap[] array is defined like this: struct alg_props ap[NX_MAX_FC][NX_MAX_MODE][3]; The valid indices therefore run from 0 to NX_MAX_FC - 1 and NX_MAX_MODE - 1, so if msc->fc or msc->mode is equal to NX_MAX_FC or NX_MAX_MODE, we're off by one. Fixes: ae0222b7289d ('powerpc/crypto: nx driver code supporting nx encryption') Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
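For illustration, here is a minimal standalone C sketch of the bounds check in question; the array dimensions follow the commit text, but the struct layout, constant values and surrounding code are placeholders, not the actual nx driver:

#include <stdio.h>

/* Illustrative values; the real constants live in the nx driver headers. */
#define NX_MAX_FC   6
#define NX_MAX_MODE 3

struct msc_entry { unsigned int fc; unsigned int mode; };

/* Valid indices for ap[NX_MAX_FC][NX_MAX_MODE][3] run from 0 to MAX - 1,
 * so an index equal to the bound must be rejected: compare with >=, not >. */
static int msc_index_valid(const struct msc_entry *msc)
{
	return msc->fc < NX_MAX_FC && msc->mode < NX_MAX_MODE;
}

int main(void)
{
	struct msc_entry off_by_one = { .fc = NX_MAX_FC, .mode = 0 };
	struct msc_entry in_range   = { .fc = NX_MAX_FC - 1, .mode = NX_MAX_MODE - 1 };

	printf("fc == NX_MAX_FC:     %s\n", msc_index_valid(&off_by_one) ? "accepted (bug)" : "rejected");
	printf("fc == NX_MAX_FC - 1: %s\n", msc_index_valid(&in_range) ? "accepted" : "rejected");
	return 0;
}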
2016-07-18 | crypto: omap - Stop using crypto scatterwalk_bytes_sglen | Herbert Xu | 2 | -10/+20
We already have a generic function sg_nents_for_len which does the same thing. This patch switches omap over to it and also adds error handling in case the SG list is short. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-11 | crypto: qat - Stop dropping leading zeros from RSA output | Salvatore Benedetto | 1 | -20/+0
There is no need to drop leading zeros from the results of RSA output operations. Signed-off-by: Salvatore Benedetto <salvatore.benedetto@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-11 | crypto: qat - Add DH support | Salvatore Benedetto | 2 | -72/+522
Add DH support under the KPP API. Drop struct qat_rsa_request and introduce a more generic struct qat_asym_request, shared between RSA and DH requests. Signed-off-by: Salvatore Benedetto <salvatore.benedetto@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-05 | crypto: qat - Add RSA CRT mode | Salvatore Benedetto | 1 | -25/+209
Extend qat driver to use RSA CRT mode when all CRT related components are present in the private key. Simplify code in qat_rsa_setkey by adding qat_rsa_clear_ctx. Signed-off-by: Salvatore Benedetto <salvatore.benedetto@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-05 | crypto: qat - Use alternative reset methods depending on the specific device | Conor McLoughlin | 6 | -9/+43
Different product families will use FLR or SBR. Virtual Function devices have no reset method. Signed-off-by: Conor McLoughlin <conor.mcloughlin@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-05 | crypto: bfin_crc - Simplify use of devm_ioremap_resource | Amitoj Kaur Chawla | 1 | -5/+0
Remove unneeded error handling on the result of a call to platform_get_resource when the value is passed to devm_ioremap_resource. The Coccinelle semantic patch that makes this change is as follows:

// <smpl>
@@
expression pdev,res,n,e,e1;
expression ret != 0;
identifier l;
@@

-  res = platform_get_resource(pdev, IORESOURCE_MEM, n);
   ... when != res
-  if (res == NULL) { ... \(goto l;\|return ret;\) }
   ... when != res
+  res = platform_get_resource(pdev, IORESOURCE_MEM, n);
   e = devm_ioremap_resource(e1, res);
// </smpl>

Signed-off-by: Amitoj Kaur Chawla <amitoj1606@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-05 | crypto: caam - add support for RSA algorithm | Tudor Ambarus | 9 | -1/+789
Add RSA support to caam driver. Initial author is Yashpal Dutta <yashpal.dutta@freescale.com>. Signed-off-by: Tudor Ambarus <tudor-dan.ambarus@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-05 | crypto: qat - Switch to new rsa_helper functions | Salvatore Benedetto | 5 | -55/+21
Drop all asn1-related code and use the new rsa_helper functions rsa_parse_[pub|priv]_key for parsing the key. Signed-off-by: Salvatore Benedetto <salvatore.benedetto@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-01 | crypto: omap-sham - increase cra_priority to 400 | Bin Liu | 1 | -12/+12
The arm-neon-sha implementations have cra_priority values in the 150...300 range, so increase the omap-sham priority to 400 to ensure it ranks above any software algorithm. Signed-off-by: Bin Liu <b-liu@ti.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
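For context, a toy model of the priority-based selection this commit relies on: among all registered implementations of an algorithm, the crypto API picks the one with the highest cra_priority. The driver names and numbers below are illustrative only, and the loop is not the kernel's actual lookup code:

#include <stddef.h>
#include <stdio.h>

struct toy_alg { const char *driver; int cra_priority; };

/* Hypothetical competing SHA-256 implementations. */
static const struct toy_alg impls[] = {
	{ "sha256-generic", 100 },
	{ "sha256-neon",    300 },
	{ "omap-sham",      400 },	/* raised above the NEON range by this patch */
};

int main(void)
{
	const struct toy_alg *best = &impls[0];

	for (size_t i = 1; i < sizeof(impls) / sizeof(impls[0]); i++)
		if (impls[i].cra_priority > best->cra_priority)
			best = &impls[i];

	printf("selected: %s (cra_priority %d)\n", best->driver, best->cra_priority);
	return 0;
}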
2016-07-01 | crypto: sahara - Use skcipher for fallback | Herbert Xu | 1 | -62/+50
This patch replaces use of the obsolete ablkcipher with skcipher. It also removes shash_fallback which is totally unused. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-01 | crypto: qce - Use skcipher for fallback | Herbert Xu | 2 | -12/+17
This patch replaces use of the obsolete ablkcipher with skcipher. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-01 | crypto: picoxcell - Use skcipher for fallback | Herbert Xu | 1 | -29/+31
This patch replaces use of the obsolete ablkcipher with skcipher. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-01 | crypto: mxs-dcp - Use skcipher for fallback | Herbert Xu | 1 | -26/+21
This patch replaces use of the obsolete ablkcipher with skcipher. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-01 | crypto: ccp - Use skcipher for fallback | Herbert Xu | 2 | -25/+21
This patch replaces use of the obsolete ablkcipher with skcipher. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-06-28 | crypto: ux500 - do not build with -O0 | Arnd Bergmann | 2 | -4/+4
The ARM allmodconfig build currently warns because the ux500 crypto driver does not work well with the jump label implementation that we started using for dynamic debug, which breaks building with 'gcc -O0':

In file included from /git/arm-soc/include/linux/jump_label.h:105:0,
                 from /git/arm-soc/include/linux/dynamic_debug.h:5,
                 from /git/arm-soc/include/linux/printk.h:289,
                 from /git/arm-soc/include/linux/kernel.h:13,
                 from /git/arm-soc/include/linux/clk.h:16,
                 from /git/arm-soc/drivers/crypto/ux500/hash/hash_core.c:16:
/git/arm-soc/arch/arm/include/asm/jump_label.h: In function 'hash_set_dma_transfer':
/git/arm-soc/arch/arm/include/asm/jump_label.h:13:7: error: asm operand 0 probably doesn't match constraints [-Werror]
  asm_volatile_goto("1:\n\t"

Turning off compiler optimizations has never really been supported here, and it is only used when debugging the driver. I have not found a good reason for doing this here, other than a misguided attempt to produce more readable assembly output. Also, the driver is only used in obsolete hardware that almost certainly nobody will spend time debugging any more. This just removes the -O0 flag from the compiler options. Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-06-24 | crypto: omap-sham - set sw fallback to 240 bytes | Bin Liu | 1 | -4/+8
Add software fallback support for small crypto requests. In these cases it is undesirable to use DMA, as setting it up is itself a rather heavy operation. This gives about 40% extra performance in the IPsec use case. Signed-off-by: Bin Liu <b-liu@ti.com> [t-kristo@ti.com: dropped the extra traces, updated some comments on the code] Signed-off-by: Tero Kristo <t-kristo@ti.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-06-24 | crypto: omap - do not call dmaengine_terminate_all | Lokesh Vutla | 2 | -3/+0
The extra call to dmaengine_terminate_all is not needed, as the DMA is not running at this point. This improves performance slightly. Signed-off-by: Lokesh Vutla <lokeshvutla@ti.com> Signed-off-by: Tero Kristo <t-kristo@ti.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-06-24 | crypto: omap-sham - change queue size from 1 to 10 | Tero Kristo | 1 | -1/+1
Change crypto queue size from 1 to 10 for omap SHA driver. This should allow clients to enqueue requests more effectively to avoid serializing whole crypto sequences, giving extra performance. Signed-off-by: Tero Kristo <t-kristo@ti.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-06-24 | crypto: omap-sham - use runtime_pm autosuspend for clock handling | Tero Kristo | 1 | -1/+7
Calling the runtime PM API for every block causes a serious performance hit for crypto operations performed on long buffers. As crypto is performed on a page boundary, encrypting a large buffer results in a series of per-page crypto operations, and the runtime PM API is called for each of them. Convert the driver to use runtime PM autosuspend instead, with a default timeout value of 1 second. This results in up to ~50% speedup. Signed-off-by: Tero Kristo <t-kristo@ti.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
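For reference, a hedged sketch of the generic runtime PM autosuspend pattern the commit describes, written as kernel-style fragments; the my_* function names and their placement are hypothetical, and this is not the actual omap-sham patch:

#include <linux/pm_runtime.h>

#define MY_AUTOSUSPEND_DELAY_MS 1000	/* "default timeout value of 1 second" */

/* Probe-time setup: enable autosuspend with the chosen delay. */
static void my_pm_setup(struct device *dev)
{
	pm_runtime_set_autosuspend_delay(dev, MY_AUTOSUSPEND_DELAY_MS);
	pm_runtime_use_autosuspend(dev);
	pm_runtime_enable(dev);
}

/* Per-block completion: instead of a synchronous put for every block,
 * mark the device busy and let it suspend only after the delay expires. */
static void my_block_done(struct device *dev)
{
	pm_runtime_mark_last_busy(dev);
	pm_runtime_put_autosuspend(dev);
}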
2016-06-23 | crypto: marvell - Increase the size of the crypto queue | Romain Perier | 1 | -1/+1
Now that crypto requests are chained together at the DMA level, we increase the size of the crypto queue for each engine. As a result the backlog list is reached later, so it does not stop the crypto stack from sending asynchronous requests, and more cryptographic tasks are processed by the engines. Signed-off-by: Romain Perier <romain.perier@free-electrons.com> Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-06-23 | crypto: marvell - Add support for chaining crypto requests in TDMA mode | Romain Perier | 5 | -27/+221
The Cryptographic Engines and Security Accelerators (CESA) support the Multi-Packet Chain Mode. With this mode enabled, multiple TDMA requests can be chained and processed by the hardware without software intervention. This mode was already activated, but the crypto requests were not chained together. Chaining them significantly reduces the number of IRQs: instead of being interrupted at the end of each crypto request, we are interrupted only at the end of the last cryptographic request processed by the engine. This commit refactors the code, changes the code architecture and adds the required data structures to chain cryptographic requests together before sending them to an engine (stopped or possibly already running). Signed-off-by: Romain Perier <romain.perier@free-electrons.com> Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-06-23 | crypto: marvell - Add load balancing between engines | Romain Perier | 4 | -86/+84
This commit adds support for fine-grained load balancing on multi-engine IPs. The engine is pre-selected based on its current load and on the weight of the crypto request that is about to be processed. The global crypto queue is also moved to each engine. These changes are required to allow chaining crypto requests at the DMA level. By using a crypto queue per engine, we make sure that we keep the state of the TDMA chain synchronized with the crypto queue. We also reduce contention on 'cesa_dev->lock' and improve parallelism. Signed-off-by: Romain Perier <romain.perier@free-electrons.com> Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-06-23 | crypto: marvell - Move SRAM I/O operations to step functions | Romain Perier | 2 | -12/+12
Currently, crypto requests are sent to engines sequentially. This commit moves the SRAM I/O operations from the prepare functions to the step functions. It provides flexibility for future work and allows a request to be prepared while the engine is running. Signed-off-by: Romain Perier <romain.perier@free-electrons.com> Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-06-23 | crypto: marvell - Add a complete operation for async requests | Romain Perier | 4 | -15/+39
So far, the 'process' operation was used both to check whether the current request was correctly handled by the engine and, if so, to copy information from the SRAM to main memory. Now we split this operation: we keep the 'process' operation, which still checks whether the request was correctly handled by the engine, and we add a new 'complete' operation that copies the content of the SRAM to memory. This will soon become useful when we want to call the process and complete operations from different locations depending on the type of the request (different cleanup logic). Signed-off-by: Romain Perier <romain.perier@free-electrons.com> Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-06-23 | crypto: marvell - Move tdma chain out of mv_cesa_tdma_req and remove it | Romain Perier | 5 | -98/+84
Currently, the only way to access the tdma chain is to use the 'req' union from a mv_cesa_{ablkcipher,ahash}. This will soon become a problem if we want to handle the TDMA chaining vs standard/non-DMA processing in a generic way (with generic functions at the cesa.c level detecting whether the request should be queued at the DMA level or not). Hence the decision to move the chain field to the mv_cesa_req level, at the expense of adding 2 void * fields to all request contexts (including non-DMA ones), and to remove the type completely. To limit the overhead, we get rid of the type field, which can now be deduced from the req->chain.first value. Once these changes are done the union is no longer needed, so remove it and move mv_cesa_ablkcipher_std_req and mv_cesa_req to mv_cesa_ablkcipher_req directly. There is also no need to keep the 'base' field in the union of mv_cesa_ahash_req, so move it into the upper structure. Signed-off-by: Romain Perier <romain.perier@free-electrons.com> Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-06-23 | crypto: marvell - Copy IV vectors by DMA transfers for acipher requests | Romain Perier | 4 | -9/+60
Add a TDMA descriptor at the end of the request for copying the output IV vector via a DMA transfer. This is a good way to offload as much processing as possible to the DMA and the crypto engine. It is also required for processing multiple cipher requests in chained mode, otherwise the content of the IV vector would be overwritten by the last processed request. Signed-off-by: Romain Perier <romain.perier@free-electrons.com> Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-06-23 | crypto: marvell - Fix wrong type check in dma functions | Romain Perier | 1 | -2/+3
So far, the type of a TDMA operation was checked incorrectly. We have to apply the type mask in order to extract the part of the flags that contains the type of the operation. Signed-off-by: Romain Perier <romain.perier@free-electrons.com> Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-06-23 | crypto: marvell - Check engine is not already running when enabling a req | Romain Perier | 3 | -0/+6
Add a BUG_ON() call when the driver tries to launch a crypto request while the engine is still processing the previous one. This replaces a silent system hang with a verbose kernel panic and the associated backtrace, to let the user know that something went wrong in the CESA driver. Signed-off-by: Romain Perier <romain.perier@free-electrons.com> Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-06-23 | crypto: marvell - Add a macro constant for the size of the crypto queue | Romain Perier | 1 | -1/+4
Add a macro constant for the size of the crypto queue instead of using a numeric value directly. This will be easier to maintain if we add more than one crypto queue of the same size. Signed-off-by: Romain Perier <romain.perier@free-electrons.com> Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-06-20 | crypto: caam - replace deprecated EXTRA_CFLAGS | Tudor Ambarus | 1 | -1/+1
EXTRA_CFLAGS is still supported but its usage is deprecated. Signed-off-by: Tudor Ambarus <tudor-dan.ambarus@nxp.com> Reviewed-by: Horia Geantă <horia.geanta@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-06-20 | crypto: caam - fix misspelled upper_32_bits | Arnd Bergmann | 1 | -2/+2
An endianness fix mistakenly used higher_32_bits() instead of upper_32_bits(), which doesn't exist:

drivers/crypto/caam/desc_constr.h: In function 'append_ptr':
drivers/crypto/caam/desc_constr.h:84:75: error: implicit declaration of function 'higher_32_bits' [-Werror=implicit-function-declaration]
  *offset = cpu_to_caam_dma(ptr);

Signed-off-by: Arnd Bergmann <arnd@arndb.de> Fixes: 261ea058f016 ("crypto: caam - handle core endianness != caam endianness") Reviewed-by: Horia Geantă <horia.geanta@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-06-15 | s390/crc32-vx: add crypto API module for optimized CRC-32 algorithms | Hendrik Brueckner | 1 | -0/+13
Add a crypto API module to access the vector extension based CRC-32 implementations. Users can request the optimized implementation through the shash crypto API interface. Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2016-06-13 | crypto: qat - Remove deprecated create_workqueue | Bhaktipriya Shridhar | 3 | -3/+4
alloc_workqueue replaces the deprecated create_workqueue(). The workqueue device_reset_wq has workitem &reset_data->reset_work per adf_reset_dev_data. The workqueue pf2vf_resp_wq, used for PF2VF responses, has workitem &pf2vf_resp->pf2vf_resp_work per pf2vf_resp. The workqueue adf_vf_stop_wq is used to call adf_dev_stop() asynchronously. Dedicated workqueues have been used in all cases since the work items are involved in crypto operations, which can be used in the I/O path that is depended upon during memory reclaim. Hence, WQ_MEM_RECLAIM has been set to guarantee forward progress under memory pressure. Since there are only a fixed number of work items, an explicit concurrency limit is unnecessary. Signed-off-by: Bhaktipriya Shridhar <bhaktipriya96@gmail.com> Acked-by: Tejun Heo <tj@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
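For illustration, a hedged sketch of the create_workqueue() to alloc_workqueue() conversion pattern described above; the queue name string, init function and max_active choice are hypothetical, not the actual qat code:

#include <linux/errno.h>
#include <linux/workqueue.h>

static struct workqueue_struct *device_reset_wq;

static int my_reset_wq_init(void)
{
	/* WQ_MEM_RECLAIM guarantees forward progress under memory pressure;
	 * max_active = 0 keeps the default concurrency limit, since only a
	 * fixed number of work items are ever queued. */
	device_reset_wq = alloc_workqueue("device_reset_wq", WQ_MEM_RECLAIM, 0);
	return device_reset_wq ? 0 : -ENOMEM;
}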
2016-06-13 | crypto: ux500 - memmove the right size | Linus Walleij | 1 | -2/+2
The hash buffer is really HASH_BLOCK_SIZE bytes; someone must have mistakenly thought that memmove takes a count of u32 words. Tests work as well (or as badly) as before after this patch. Cc: Joakim Bech <joakim.bech@linaro.org> Cc: stable@vger.kernel.org Reported-by: David Binderman <linuxdev.baldrick@gmail.com> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
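To make the byte-count pitfall concrete, a small standalone C example; HASH_BLOCK_SIZE here is an illustrative value, not taken from the ux500 driver:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define HASH_BLOCK_SIZE 64	/* placeholder value for the demo */

int main(void)
{
	uint32_t src[HASH_BLOCK_SIZE / sizeof(uint32_t)];
	uint32_t dst[HASH_BLOCK_SIZE / sizeof(uint32_t)] = { 0 };

	memset(src, 0xab, sizeof(src));

	/* Wrong: passes a count of u32 words, so only a quarter of the
	 * buffer is moved. */
	memmove(dst, src, HASH_BLOCK_SIZE / sizeof(uint32_t));

	/* Right: memmove's third argument is a byte count. */
	memmove(dst, src, HASH_BLOCK_SIZE);

	printf("last byte after the full copy: 0x%02x\n",
	       ((unsigned char *)dst)[HASH_BLOCK_SIZE - 1]);
	return 0;
}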
2016-06-13 | crypto: vmx - Increase priority of aes-cbc cipher | Anton Blanchard | 2 | -2/+2
All of the VMX AES ciphers (AES, AES-CBC and AES-CTR) are set at priority 1000. Unfortunately this means we never use AES-CBC and AES-CTR, because the base AES-CBC cipher that is implemented on top of AES inherits its priority. To fix this, AES-CBC and AES-CTR have to be a higher priority. Set them to 2000. Testing on a POWER8 with 'cryptsetup benchmark --cipher aes --key-size 256' shows a decryption speed increase from 402.4 MB/s to 3069.2 MB/s, over 7x faster. Thanks to Mike Strosaker for helping me debug this issue. Fixes: 8c755ace357c ("crypto: vmx - Adding CBC routines for VMX module") Cc: stable@vger.kernel.org Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-06-13 | crypto: vmx - Fix ABI detection | Anton Blanchard | 1 | -1/+1
When calling ppc-xlate.pl, we pass it either linux-ppc64 or linux-ppc64le. The script however was expecting linux64le, a result of its OpenSSL origins. This means we aren't obeying the ppc64le ABIv2 rules. Fix this by checking for linux-ppc64le. Fixes: 5ca55738201c ("crypto: vmx - comply with ABIs that specify vrsave as reserved.") Cc: stable@vger.kernel.org Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-06-08 | crypto: talitos - templates for AEAD using HMAC_SNOOP_NO_AFEU | LEROY Christophe | 1 | -0/+180
This will allow IPsec on SEC1. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-06-08 | crypto: talitos - implement cra_priority | LEROY Christophe | 1 | -1/+5
SEC1 doesn't have the IPSEC_ESP descriptor type, but it is able to perform IPsec using HMAC_SNOOP_NO_AFEU, which also exists on SEC2. In order to define descriptor templates for SEC1 without breaking SEC2+, we have to give HMAC_SNOOP_NO_AFEU a lower priority so that SEC2+ selects IPSEC_ESP rather than the less performant HMAC_SNOOP_NO_AFEU. This is done by adding a priority field to the template: if the field is 0, we use the default priority, otherwise we use the one in the field. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-06-08 | crypto: talitos - sg_to_link_tbl() not used anymore, remove it | LEROY Christophe | 1 | -8/+0
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-06-08 | crypto: talitos - Implement AEAD for SEC1 using HMAC_SNOOP_NO_AFEU | LEROY Christophe | 1 | -85/+124
This patch enhances the IPSEC_ESP-related functions so that they also support the same operations with descriptor type HMAC_SNOOP_NO_AFEU. The differences between the two descriptor types are:
* pointers 2 and 3 are swapped (Confidentiality key and Primary EU Context IN)
* HMAC_SNOOP_NO_AFEU has the CICV out in pointer 6
* HMAC_SNOOP_NO_AFEU has no primary EU context out, so we get it from the end of data out
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>