path: root/lib/cpumask.c
Age | Commit message | Author | Files | Lines
2019-04-08 | crypto: scompress - Use per-CPU struct instead of multiple variables | Sebastian Andrzej Siewior | 1 | -71/+54
Two per-CPU variables are allocated as pointers to per-CPU memory, which are then used as scratch buffers. We could be smarter about this and instead use a per-CPU struct that already contains the pointers, so that only the scratch buffers themselves need to be allocated. Add a lock to the struct. By doing so we can avoid the get_cpu() statement and gain lockdep coverage (if enabled) to ensure that the lock is always acquired in the right context. On non-preemptible kernels the lock vanishes. It is okay to use raw_cpu_ptr() to get a pointer to the struct, since it is protected by the spinlock.

The diffstat of this is negative, and according to size(1) on scompress.o:

   text    data     bss     dec     hex filename
   1847     160      24    2031     7ef dbg_before.o
   1754     232       4    1990     7c6 dbg_after.o
   1799      64      24    1887     75f no_dbg-before.o
   1703      88       4    1795     703 no_dbg-after.o

The overall size difference is also negative; the data section grows by only four bytes without lockdep.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
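A minimal sketch of the pattern described above; the struct name, its fields and the helper are illustrative, not the actual crypto/scompress.c definitions:

    #include <linux/percpu.h>
    #include <linux/spinlock.h>

    /* Illustrative only: names and fields are assumptions, not the driver code. */
    struct scomp_scratch {
            spinlock_t lock;
            void *src;      /* per-CPU scratch buffer */
            void *dst;      /* per-CPU scratch buffer */
    };

    static DEFINE_PER_CPU(struct scomp_scratch, scomp_scratch) = {
            .lock = __SPIN_LOCK_UNLOCKED(scomp_scratch.lock),
    };

    static int scomp_use_scratch(void)
    {
            /* raw_cpu_ptr() is fine here: the spinlock, not get_cpu(),
             * serializes access to the per-CPU buffers. */
            struct scomp_scratch *scratch = raw_cpu_ptr(&scomp_scratch);

            spin_lock(&scratch->lock);
            /* ... compress/decompress through scratch->src / scratch->dst ... */
            spin_unlock(&scratch->lock);
            return 0;
    }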
2019-04-08 | crypto: scompress - return proper error code for allocation failure | Sebastian Andrzej Siewior | 1 | -1/+3
If scomp_acomp_comp_decomp() fails to allocate memory for the destination, then we never copy back the data we compressed. It is probably best to return an error code instead of 0 in case of failure. I haven't found any user that is using acomp_request_set_params() without the `dst' buffer, so there is probably no harm.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
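A hedged sketch of the shape of such a fix; the helper name is made up and sgl_alloc() is used here purely as an example allocator, the point being -ENOMEM instead of a silent 0:

    #include <crypto/acompress.h>
    #include <linux/scatterlist.h>

    /* Illustrative helper, not the scompress.c code verbatim. */
    static int scomp_alloc_dst(struct acomp_req *req)
    {
            if (!req->dst) {
                    req->dst = sgl_alloc(req->dlen, GFP_ATOMIC, NULL);
                    if (!req->dst)
                            return -ENOMEM; /* previously fell through and returned 0 */
            }
            return 0;
    }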
2019-04-08 | crypto: ccp - introduce SEV_GET_ID2 command | Singh, Brijesh | 3 | -6/+82
The current definition and implementation of the SEV_GET_ID command does not provide the length of the unique ID returned by the firmware. As per the firmware specification, the firmware may return an ID length that is not restricted to the 64 bytes assumed by the SEV_GET_ID command. Introduce the SEV_GET_ID2 command to overcome the SEV_GET_ID limitations, and deprecate SEV_GET_ID in favor of SEV_GET_ID2. At the same time, update the SEV API web link.

Cc: Janakarajan Natarajan <Janakarajan.Natarajan@amd.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Gary Hook <gary.hook@amd.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Nathaniel McCallum <npmccallum@redhat.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
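The shape of the new userspace interface, sketched from the description above; treat the field names as an assumption rather than a copy of the uapi header:

    #include <linux/types.h>

    /* Sketch: userspace passes a buffer and its size; the firmware-reported
     * ID length comes back in 'length', so the ID is no longer assumed to
     * be 64 bytes. */
    struct sev_user_data_get_id2 {
            __u64 address;  /* In: userspace buffer for the unique ID */
            __u32 length;   /* In/Out: buffer size in, actual ID length out */
    } __attribute__((packed));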
2019-04-08 | crypto: caam/qi - Change a couple IS_ERR_OR_NULL() checks to IS_ERR() | Dan Carpenter | 1 | -2/+2
create_caam_req_fq() doesn't return NULL pointers so there is no need to check. The NULL checks are problematic because it's hard to say how a NULL return should be handled, so removing the checks is a nice cleanup. Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Reviewed-by: Horia Geantă <horia.geanta@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-04-08 | crypto: cavium/nitrox - Added rfc4106(gcm(aes)) cipher support | Nagadheeraj Rottela | 2 | -83/+300
Added rfc4106(gcm(aes)) cipher. Signed-off-by: Nagadheeraj Rottela <rnagadheeraj@marvell.com> Reviewed-by: Srikanth Jampala <jsrikanth@marvell.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-03-28 | crypto: caam - limit AXI pipeline to a depth of 1 | Iuliana Prodan | 1 | -0/+20
Some i.MX6 devices (imx6D, imx6Q, imx6DL, imx6S, imx6DP and imx6DQ) have an issue wherein AXI bus transactions may not occur in the correct order. This isn't a problem running single descriptors, but can be if running multiple concurrent descriptors. Reworking the CAAM driver to throttle to single requests is impractical, so this patch limits the AXI pipeline to a depth of one (from a default of 4) to preclude this situation from occurring. This patch applies to known affected platforms. Signed-off-by: Radu Solea <radu.solea@nxp.com> Signed-off-by: Iuliana Prodan <iuliana.prodan@nxp.com> Reviewed-by: Horia Geantă <horia.geanta@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-03-28 | crypto: caam/jr - Remove extra memory barrier during job ring enqueue | Vakul Garg | 1 | -2/+4
In caam_jr_enqueue(), a write barrier is needed to order the stores to the job ring slot before declaring the addition of a new job into the input job ring. The register write is done using wr_reg32(), which internally uses iowrite32() for the write operation. iowrite32() already issues a write barrier before performing the write, so the wmb() preceding wr_reg32() can be safely removed.

Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Reviewed-by: Horia Geanta <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
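A before/after sketch of the enqueue path being described; the register and variable names are recalled from the driver and should be treated as assumptions:

    /* before */
    wmb();                                    /* order ring-slot stores ...   */
    wr_reg32(&jrp->rregs->inpring_jobadd, 1); /* ... before publishing a job  */

    /* after: the iowrite32() behind wr_reg32() already issues the barrier */
    wr_reg32(&jrp->rregs->inpring_jobadd, 1);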
2019-03-28 | crypto: caam/jr - Removed redundant vars from job ring private data | Vakul Garg | 2 | -7/+1
For each job ring, the variable 'ringsize' is initialised but never used. Similarly, the variables 'inp_ring_write_index' and 'head' always track the same value, so caam_jr_enqueue() can use 'head' itself instead of 'inp_ring_write_index'. Both redundant variables have been removed.

Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-03-28 | crypto: caam/jr - Remove spinlock for output job ring | Vakul Garg | 2 | -7/+1
For each job ring pair, the output ring is processed by exactly one CPU at a time, under a tasklet context (one per ring). Therefore, there is no need to protect access to the job ring and its private data structure with a lock, and the lock can be removed.

Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Reviewed-by: Horia Geanta <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-03-28 | crypto: vmx - Make p8_init and p8_exit static | YueHaibing | 1 | -2/+2
Fix sparse warnings:

  drivers/crypto/vmx/vmx.c:44:12: warning: symbol 'p8_init' was not declared. Should it be static?
  drivers/crypto/vmx/vmx.c:70:13: warning: symbol 'p8_exit' was not declared. Should it be static?

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-03-28 | crypto: fips - Grammar s/options/option/, s/to/the/ | Geert Uytterhoeven | 1 | -2/+2
Fixes: ccb778e1841ce04b ("crypto: api - Add fips_enable flag") Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Reviewed-by: Mukesh Ojha <mojha@codeaurora.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-03-28 | crypto: cavium - Make cptvf_device_init static | YueHaibing | 1 | -1/+1
Fix sparse warning:

  drivers/crypto/cavium/cpt/cptvf_main.c:644:6: warning: symbol 'cptvf_device_init' was not declared. Should it be static?

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-03-28 | crypto: bcm - remove unused array tag_to_hash_idx | YueHaibing | 1 | -3/+0
It has never been used since its introduction in commit 9d12ba86f818 ("crypto: brcm - Add Broadcom SPU driver").

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-03-28 | crypto: zip - Make some functions static | YueHaibing | 1 | -4/+4
Fix following sparse warnings:

  drivers/crypto/cavium/zip/zip_crypto.c:72:5: warning: symbol 'zip_ctx_init' was not declared. Should it be static?
  drivers/crypto/cavium/zip/zip_crypto.c:110:6: warning: symbol 'zip_ctx_exit' was not declared. Should it be static?
  drivers/crypto/cavium/zip/zip_crypto.c:122:5: warning: symbol 'zip_compress' was not declared. Should it be static?
  drivers/crypto/cavium/zip/zip_crypto.c:158:5: warning: symbol 'zip_decompress' was not declared. Should it be static?

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-03-28 | crypto: ccp - Make ccp_register_rsa_alg static | YueHaibing | 1 | -1/+2
Fix sparse warning:

  drivers/crypto/ccp/ccp-crypto-rsa.c:251:5: warning: symbol 'ccp_register_rsa_alg' was not declared. Should it be static?

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Acked-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-03-28 | crypto: cavium - Make some functions static | YueHaibing | 1 | -3/+3
Fix sparse warnings:

  drivers/crypto/cavium/cpt/cptvf_reqmanager.c:226:5: warning: symbol 'send_cpt_command' was not declared. Should it be static?
  drivers/crypto/cavium/cpt/cptvf_reqmanager.c:273:6: warning: symbol 'do_request_cleanup' was not declared. Should it be static?
  drivers/crypto/cavium/cpt/cptvf_reqmanager.c:319:6: warning: symbol 'do_post_process' was not declared. Should it be static?

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-03-28 | crypto: cavium - remove unused functions | YueHaibing | 1 | -17/+0
cptvf_mbox_send_ack and cptvf_mbox_send_nack have never been used since their introduction in commit c694b233295b ("crypto: cavium - Add the Virtual Function driver for CPT").

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-03-22 | crypto: Kconfig - fix typos AEGSI -> AEGIS | Ondrej Mosnacek | 1 | -3/+3
Spotted while reviewing patches from Eric Biggers.

Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-03-22 | crypto: salsa20-generic - use crypto_xor_cpy() | Eric Biggers | 1 | -5/+4
In salsa20_docrypt(), use crypto_xor_cpy() instead of crypto_xor(). This avoids having to memcpy() the src buffer to the dst buffer. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-03-22 | crypto: chacha-generic - use crypto_xor_cpy() | Eric Biggers | 1 | -5/+3
In chacha_docrypt(), use crypto_xor_cpy() instead of crypto_xor(). This avoids having to memcpy() the src buffer to the dst buffer. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
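This and the salsa20-generic change above are the same transformation; a minimal sketch of the idea, where the helper and buffer names are illustrative but crypto_xor()/crypto_xor_cpy() are the real <crypto/algapi.h> routines:

    #include <crypto/algapi.h>      /* crypto_xor(), crypto_xor_cpy() */
    #include <linux/string.h>

    /* Illustrative helper, not the actual chacha_docrypt(). */
    static void xor_keystream(u8 *dst, const u8 *src, const u8 *keystream,
                              unsigned int bytes)
    {
            /* before: memcpy(dst, src, bytes); crypto_xor(dst, keystream, bytes); */
            crypto_xor_cpy(dst, src, keystream, bytes);  /* after: one pass, no memcpy() */
    }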
2019-03-22 | crypto: vmx - fix copy-paste error in CTR mode | Daniel Axtens | 1 | -2/+2
The original assembly imported from OpenSSL has two copy-paste errors in handling CTR mode. When dealing with a 2 or 3 block tail, the code branches to the CBC decryption exit path, rather than to the CTR exit path. This leads to corruption of the IV, which leads to subsequent blocks being corrupted. This can be detected with libkcapi test suite, which is available at https://github.com/smuellerDD/libkcapi Reported-by: Ondrej Mosnáček <omosnacek@gmail.com> Fixes: 5c380d623ed3 ("crypto: vmx - Add support for VMS instructions by ASM") Cc: stable@vger.kernel.org Signed-off-by: Daniel Axtens <dja@axtens.net> Tested-by: Michael Ellerman <mpe@ellerman.id.au> Tested-by: Ondrej Mosnacek <omosnacek@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-03-22 | crypto: ccree - reduce kernel stack usage with clang | Arnd Bergmann | 1 | -1/+1
Building with clang for a 32-bit architecture runs over the stack frame limit in the setkey function:

  drivers/crypto/ccree/cc_cipher.c:318:12: error: stack frame size of 1152 bytes in function 'cc_cipher_setkey' [-Werror,-Wframe-larger-than=]

The problem is that there are two large variables: the temporary 'tmp' array and the SHASH_DESC_ON_STACK() declaration. Moving the first into the block in which it is used reduces the total frame size to 768 bytes, which seems more reasonable and is under the warning limit.

Fixes: 63ee04c8b491 ("crypto: ccree - add skcipher support")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-By: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
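A sketch of the kind of change described, not the ccree code itself; the function, the key-length threshold and the buffer size are placeholders:

    #include <crypto/hash.h>

    /* Illustrative only. */
    static int setkey_sketch(struct crypto_shash *hash_tfm,
                             const u8 *key, unsigned int keylen)
    {
            SHASH_DESC_ON_STACK(desc, hash_tfm);  /* needed for the whole function */
            int err = 0;

            desc->tfm = hash_tfm;

            if (keylen > 32) {  /* placeholder condition */
                    /* The large temporary now lives only in this block, so
                     * its stack slot is not live for the whole function and
                     * the frame stays under the -Wframe-larger-than= limit. */
                    u8 tmp[64];

                    err = crypto_shash_digest(desc, key, keylen, tmp);
                    /* ... use tmp ... */
            }
            return err;
    }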
2019-03-22 | crypto: testmgr - test the !may_use_simd() fallback code | Eric Biggers | 1 | -24/+92
All crypto API algorithms are supposed to support the case where they are called in a context where SIMD instructions are unusable, e.g. IRQ context on some architectures. However, this isn't tested for by the self-tests, causing bugs to go undetected. Now that all algorithms have been converted to use crypto_simd_usable(), update the self-tests to test the no-SIMD case.

First, a bool testvec_config::nosimd is added. When set, the crypto operation is executed with preemption disabled and with crypto_simd_usable() mocked out to return false on the current CPU. A bool test_sg_division::nosimd is also added. For hash algorithms it's honored by the corresponding ->update(). By setting just a subset of these bools, the case where some ->update()s are done in SIMD context and some are done in no-SIMD context is also tested. These bools are then randomly set by generate_random_testvec_config().

For now, all no-SIMD testing is limited to the extra crypto self-tests, because it might be a bit too invasive for the regular self-tests. But this could be changed later.

This has already found bugs in the arm64 AES-GCM and ChaCha algorithms. This would have found some past bugs as well.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-03-22 | crypto: simd - convert to use crypto_simd_usable() | Eric Biggers | 1 | -4/+4
Replace all calls to may_use_simd() in the shared SIMD helpers with crypto_simd_usable(), in order to allow testing the no-SIMD code paths. Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-03-22 | crypto: arm64 - convert to use crypto_simd_usable() | Eric Biggers | 15 | -34/+47
Replace all calls to may_use_simd() in the arm64 crypto code with crypto_simd_usable(), in order to allow testing the no-SIMD code paths. Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-03-22 | crypto: arm - convert to use crypto_simd_usable() | Eric Biggers | 10 | -19/+29
Replace all calls to may_use_simd() in the arm crypto code with crypto_simd_usable(), in order to allow testing the no-SIMD code paths. Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-03-22 | crypto: x86 - convert to use crypto_simd_usable() | Eric Biggers | 12 | -36/+44
Replace all calls to irq_fpu_usable() in the x86 crypto code with crypto_simd_usable(), in order to allow testing the no-SIMD code paths. Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-03-22 | crypto: simd,testmgr - introduce crypto_simd_usable() | Eric Biggers | 2 | -1/+49
So that the no-SIMD fallback code can be tested by the crypto self-tests, add a macro crypto_simd_usable() which wraps may_use_simd(), but also returns false if the crypto self-tests have set a per-CPU bool to disable SIMD in crypto code on the current CPU. Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
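The mechanism, sketched from the description above; the config gate and the per-CPU flag name are assumptions about the actual header rather than a quotation of it:

    #ifdef CONFIG_CRYPTO_MANAGER_EXTRA_TESTS
    DECLARE_PER_CPU(bool, crypto_simd_disabled_for_test);
    #define crypto_simd_usable() \
            (may_use_simd() && !this_cpu_read(crypto_simd_disabled_for_test))
    #else
    #define crypto_simd_usable() may_use_simd()
    #endif

The self-tests can then force the no-SIMD path on one CPU by setting the per-CPU bool with preemption disabled, without touching the architecture's real may_use_simd() logic.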
2019-03-22 | crypto: arm64/gcm-aes-ce - fix no-NEON fallback code | Eric Biggers | 1 | -4/+6
The arm64 gcm-aes-ce algorithm is failing the extra crypto self-tests following my patches to test the !may_use_simd() code paths, which previously were untested. The problem is that in the !may_use_simd() case, an odd number of AES blocks can be processed within each step of the skcipher_walk. However, the skcipher_walk is being done with a "stride" of 2 blocks and is advanced by an even number of blocks after each step. This causes the encryption to produce the wrong ciphertext and authentication tag, and causes the decryption to incorrectly fail. Fix it by only processing an even number of blocks per step. Fixes: c2b24c36e0a3 ("crypto: arm64/aes-gcm-ce - fix scatterwalk API violation") Fixes: 71e52c278c54 ("crypto: arm64/aes-ce-gcm - operate on two input blocks at a time") Cc: <stable@vger.kernel.org> # v4.19+ Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-03-22 | crypto: chacha-generic - fix use as arm64 no-NEON fallback | Eric Biggers | 1 | -1/+1
The arm64 implementations of ChaCha and XChaCha are failing the extra crypto self-tests following my patches to test the !may_use_simd() code paths, which previously were untested.

The problem is as follows: when !may_use_simd(), the arm64 NEON implementations fall back to the generic implementation, which uses the skcipher_walk API to iterate through the src/dst scatterlists. Due to how the skcipher_walk API works, walk.stride is set from the skcipher_alg actually being used, which in this case is the arm64 NEON algorithm. Thus walk.stride is 5*CHACHA_BLOCK_SIZE, not CHACHA_BLOCK_SIZE.

This unnecessarily large stride shouldn't cause an actual problem. However, the generic implementation computes round_down(nbytes, walk.stride). round_down() assumes the round amount is a power of 2, which 5*CHACHA_BLOCK_SIZE is not, so it gives the wrong result.

This causes the following case in skcipher_walk_done() to be hit, causing a WARN() and failing the encryption operation:

        if (WARN_ON(err)) {
                /* unexpected case; didn't process all bytes */
                err = -EINVAL;
                goto finish;
        }

Fix it by rounding down to CHACHA_BLOCK_SIZE instead of walk.stride. (Or we could replace round_down() with rounddown(), but that would add a slow division operation every time, which I think we should avoid.)

Fixes: 2fe55987b262 ("crypto: arm64/chacha - use combined SIMD/ALU routine for more speed")
Cc: <stable@vger.kernel.org> # v5.0+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
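A small stand-alone demonstration of the rounding pitfall, using simplified stand-ins for the kernel's round_down()/rounddown() helpers:

    #include <stdio.h>

    #define CHACHA_BLOCK_SIZE 64
    #define round_down(x, y)  ((x) & ~((y) - 1))   /* mask-based: assumes y is a power of 2 */
    #define rounddown(x, y)   (((x) / (y)) * (y))  /* divide-based: works for any y */

    int main(void)
    {
            unsigned int nbytes = 500;
            unsigned int stride = 5 * CHACHA_BLOCK_SIZE;   /* 320, not a power of 2 */

            printf("%u\n", round_down(nbytes, stride));             /* 192: wrong   */
            printf("%u\n", rounddown(nbytes, stride));              /* 320: correct, but divides */
            printf("%u\n", round_down(nbytes, CHACHA_BLOCK_SIZE));  /* 448: the fix */
            return 0;
    }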
2019-03-22 | hwrng: omap - Set default quality | Rouven Czerwinski | 1 | -0/+1
Newer combinations of glibc, the kernel and openssh can result in long initial startup times on OMAP devices:

  [    6.671425] systemd-rc-once[102]: Creating ED25519 key; this may take some time ...
  [  142.652491] systemd-rc-once[102]: Creating ED25519 key; done.

due to the blocking getrandom(2) system call:

  [  142.610335] random: crng init done

Set the quality level for the omap hwrng driver, allowing the kernel to use the hwrng as an entropy source at boot.

Signed-off-by: Rouven Czerwinski <r.czerwinski@pengutronix.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
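Roughly what such a one-line change looks like for a hwrng driver; the callback and the quality value below are placeholders, not the omap-rng driver's actual code:

    #include <linux/hw_random.h>

    /* Illustrative read callback. */
    static int demo_rng_read(struct hwrng *rng, void *buf, size_t max, bool wait)
    {
            return 0;  /* would return the number of random bytes produced */
    }

    static struct hwrng demo_rng = {
            .name    = "demo",
            .read    = demo_rng_read,
            /* Non-zero quality (entropy estimate per 1024 bits of output)
             * lets the kernel credit this source toward crng initialization
             * instead of treating it as providing no entropy. */
            .quality = 900,  /* placeholder value */
    };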
2019-03-22 | crypto: testmgr - remove workaround for AEADs that modify aead_request | Eric Biggers | 1 | -3/+0
Now that all AEAD algorithms (that I have the hardware to test, at least) have been fixed to not modify the user-provided aead_request, remove the workaround from testmgr that reset aead_request::tfm after each AEAD encryption/decryption. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-03-22 | crypto: x86/morus1280 - convert to use AEAD SIMD helpers | Eric Biggers | 5 | -153/+37
Convert the x86 implementations of MORUS-1280 to use the AEAD SIMD helpers, rather than hand-rolling the same functionality. This simplifies the code and also fixes the bug where the user-provided aead_request is modified. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-03-22 | crypto: x86/morus640 - convert to use AEAD SIMD helpers | Eric Biggers | 4 | -148/+30
Convert the x86 implementation of MORUS-640 to use the AEAD SIMD helpers, rather than hand-rolling the same functionality. This simplifies the code and also fixes the bug where the user-provided aead_request is modified. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-03-22 | crypto: x86/aegis256 - convert to use AEAD SIMD helpers | Eric Biggers | 2 | -128/+31
Convert the x86 implementation of AEGIS-256 to use the AEAD SIMD helpers, rather than hand-rolling the same functionality. This simplifies the code and also fixes the bug where the user-provided aead_request is modified. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-03-22 | crypto: x86/aegis128l - convert to use AEAD SIMD helpers | Eric Biggers | 2 | -128/+31
Convert the x86 implementation of AEGIS-128L to use the AEAD SIMD helpers, rather than hand-rolling the same functionality. This simplifies the code and also fixes the bug where the user-provided aead_request is modified. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-03-22 | crypto: x86/aegis128 - convert to use AEAD SIMD helpers | Eric Biggers | 2 | -128/+31
Convert the x86 implementation of AEGIS-128 to use the AEAD SIMD helpers, rather than hand-rolling the same functionality. This simplifies the code and also fixes the bug where the user-provided aead_request is modified. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-03-22 | crypto: x86/aesni - convert to use AEAD SIMD helpers | Eric Biggers | 1 | -146/+15
Convert the AES-NI implementations of "gcm(aes)" and "rfc4106(gcm(aes))" to use the AEAD SIMD helpers, rather than hand-rolling the same functionality. This simplifies the code and also fixes the bug where the user-provided aead_request is modified. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-03-22 | crypto: x86/aesni - convert to use skcipher SIMD bulk registration | Eric Biggers | 1 | -36/+7
Convert the AES-NI glue code to use simd_register_skciphers_compat() to create SIMD wrappers for all the internal skcipher algorithms at once, rather than wrapping each one individually. This simplifies the code. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
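A hedged sketch of the bulk registration call; the array contents are elided and the init function is illustrative, while the helper and its argument order follow crypto/simd.c as best recalled:

    #include <crypto/internal/simd.h>
    #include <crypto/skcipher.h>
    #include <linux/kernel.h>

    static struct skcipher_alg aesni_skciphers[] = {
            /* ... the internal "__cbc(aes)", "__ctr(aes)", ... algorithms ... */
    };
    static struct simd_skcipher_alg *aesni_simd_skciphers[ARRAY_SIZE(aesni_skciphers)];

    static int __init aesni_init_sketch(void)
    {
            /* One call registers every internal alg and creates its SIMD
             * wrapper, replacing the per-algorithm wrapping done before. */
            return simd_register_skciphers_compat(aesni_skciphers,
                                                  ARRAY_SIZE(aesni_skciphers),
                                                  aesni_simd_skciphers);
    }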
2019-03-22 | crypto: simd - support wrapping AEAD algorithms | Eric Biggers | 2 | -0/+289
Update the crypto_simd module to support wrapping AEAD algorithms. Previously it only supported skciphers. The code for each is similar. I'll be converting the x86 implementations of AES-GCM, AEGIS, and MORUS to use this. Currently they each independently implement the same functionality. This will not only simplify the code, but it will also fix the bug detected by the improved self-tests: the user-provided aead_request is modified. This is because these algorithms currently reuse the original request, whereas the crypto_simd helpers build a new request in the original request's context. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
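The "new request in the original request's context" idea, as a hedged sketch of the general pattern rather than the crypto_simd code verbatim; get_wrapped_tfm() is a placeholder for however the wrapper reaches the underlying tfm:

    #include <crypto/aead.h>
    #include <crypto/internal/aead.h>

    static int simd_aead_encrypt_sketch(struct aead_request *req)
    {
            struct crypto_aead *tfm = crypto_aead_reqtfm(req);
            struct crypto_aead *child = get_wrapped_tfm(tfm);  /* placeholder */
            /* The wrapper's reqsize reserves room for a child request, so a
             * fresh aead_request is built inside req's own ctx instead of
             * mutating the caller's request. */
            struct aead_request *subreq = aead_request_ctx(req);

            aead_request_set_tfm(subreq, child);
            aead_request_set_callback(subreq, req->base.flags,
                                      req->base.complete, req->base.data);
            aead_request_set_crypt(subreq, req->src, req->dst,
                                   req->cryptlen, req->iv);
            aead_request_set_ad(subreq, req->assoclen);

            return crypto_aead_encrypt(subreq);
    }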
2019-03-22 | crypto: caam/jr - optimize job ring enqueue and dequeue operations | Vakul Garg | 2 | -2/+11
Instead of reading the job ring's occupancy registers for every request/response enqueued/dequeued, we read these registers once and store the values in memory. After completing a job enqueue/dequeue, we decrement the stored values. When they reach zero, we refresh the snapshot of the job ring's occupancy registers. This eliminates the need for expensive device register reads for every job enqueued and dequeued, and hence makes caam_jr_enqueue() and caam_jr_dequeue() faster. The performance of kernel IPsec improved by about 6% on ls1028 (for a frame size of 408 bytes).

Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Reviewed-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
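A generic sketch of the caching scheme described; the struct, its fields and the register name are illustrative, not the caam driver verbatim:

    /* Illustrative private data; only rd_reg32() is a real caam accessor. */
    struct jr_private_sketch {
            unsigned int inpring_avail;     /* cached snapshot of free slots */
            struct caam_job_ring __iomem *rregs;
    };

    static int jr_reserve_slot(struct jr_private_sketch *jrp)
    {
            if (!jrp->inpring_avail) {
                    /* Refresh the snapshot only when the cached count runs
                     * out, not on every enqueue. */
                    jrp->inpring_avail = rd_reg32(&jrp->rregs->inpring_avail);
                    if (!jrp->inpring_avail)
                            return -EBUSY;
            }
            jrp->inpring_avail--;   /* consume one slot from the snapshot */
            return 0;
    }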
2019-03-17 | Linux 5.1-rc1 | Linus Torvalds | 1 | -2/+2
2019-03-17 | perf/x86/intel: Make dev_attr_allow_tsx_force_abort static | kbuild test robot | 1 | -1/+1
Fixes: 400816f60c54 ("perf/x86/intel: Implement support for TSX Force Abort")
Signed-off-by: kbuild test robot <lkp@intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: "Peter Zijlstra (Intel)" <peterz@infradead.org>
Cc: kbuild-all@01.org
Cc: Borislav Petkov <bp@alien8.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20190313184243.GA10820@lkp-sb-ep06
2019-03-17 | kconfig: remove stale lxdialog/.gitignore | Masahiro Yamada | 1 | -4/+0
When this .gitignore was added, lxdialog was an independent hostprogs-y. Now that all objects in lxdialog/ are directly linked to mconf, the lxdialog is no longer generated. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2019-03-17 | kbuild: force all architectures except um to include mandatory-y | Masahiro Yamada | 29 | -47/+18
Currently, every arch/*/include/uapi/asm/Kbuild explicitly includes the common Kbuild.asm file. Factor out the duplicated include directives to scripts/Makefile.asm-generic so that no architecture would opt out of the mandatory-y mechanism. um is not forced to include mandatory-y since it is a very exceptional case which does not support UAPI. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2019-03-17 | kbuild: warn redundant generic-y | Masahiro Yamada | 12 | -13/+6
The generic-y is redundant under any of the following conditions:

 - arch has its own implementation
 - the same header is added to generated-y
 - the same header is added to mandatory-y

If a redundant generic-y is found, a warning like the following is displayed:

  scripts/Makefile.asm-generic:20: redundant generic-y found in arch/arm/include/asm/Kbuild: timex.h

I fixed up arch Kbuild files found by this.

Suggested-by: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2019-03-17 | Revert "modsign: Abort modules_install when signing fails" | Douglas Anderson | 1 | -1/+1
This reverts commit caf6fe91ddf62a96401e21e9b7a07227440f4185. The commit was fine but is no longer needed as of commit 3a2429e1faf4 ("kbuild: change if_changed_rule for multi-line recipe"). Let's go back to using ";" to be consistent. For some discussion, see: https://lkml.kernel.org/r/CAK7LNASde0Q9S5GKeQiWhArfER4S4wL1=R_FW8q0++_X3T5=hQ@mail.gmail.com Signed-off-by: Douglas Anderson <dianders@chromium.org> Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2019-03-17 | kbuild: Make NOSTDINC_FLAGS a simply expanded variable | Douglas Anderson | 1 | -1/+1
During a simple no-op (nothing changed) build I saw 39 invocations of the C compiler with the argument "-print-file-name=include". We don't need to call the C compiler 39 times for this--one time will suffice. Let's change NOSTDINC_FLAGS to a simply expanded variable to avoid this, since there doesn't appear to be any reason it should be recursively expanded. On my build this shaved ~400 ms off my "no-op" build.

Note that the recursive expansion seems to date back to the (really old) commit e8f5bdb02ce0 ("[PATCH] Makefile include path ordering"). It's a little unclear to me if the point of that patch was to switch the variable to be recursively expanded (which it did) or to avoid directly assigning to NOSTDINC_FLAGS (AKA to switch to +=) because someone else (out of tree?) was setting it. I presume the latter, since if the only goal was to switch to recursive expansion the patch would have just removed the ":".

Signed-off-by: Douglas Anderson <dianders@chromium.org>
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2019-03-17 | kbuild: deb-pkg: avoid implicit effects | Arseny Maslennikov | 1 | -1/+4
* The man page for dpkg-source(1) notes:

  > -b, --build directory [format-specific-parameters]
  >     Build a source package (--build since dpkg 1.17.14).
  >     <...>
  >
  >     dpkg-source will build the source package with the first
  >     format found in this ordered list: the format indicated
  >     with the --format command line option, the format
  >     indicated in debian/source/format, “1.0”. The fallback
  >     to “1.0” is deprecated and will be removed at some point
  >     in the future, you should always document the desired
  >     source format in debian/source/format. See section
  >     SOURCE PACKAGE FORMATS for an extensive description of
  >     the various source package formats.

  Thus it would be more foolproof to explicitly use 1.0 (as we always did) than to rely on dpkg-source's defaults.

* In a similar vein, debian/rules is not made executable by mkdebian, and dpkg-source warns about that but still silently fixes the file. Let's be explicit once again.

Signed-off-by: Arseny Maslennikov <ar@cs.msu.ru>
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>