As there are no in-kernel users of the Crypto API poly1305 left,
remove it.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
As poly1305 no longer has any in-kernel users, remove its tests.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Since the poly1305 algorithm is fixed, there is no point in going
through the Crypto API for it. Use the lib/crypto poly1305 interface
instead.
For compatibility, keep the poly1305 parameter in the algorithm name.
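A minimal sketch of the lib/crypto interface now used underneath; the one-shot helper below is illustrative, not part of the patch:

#include <crypto/poly1305.h>

static void poly1305_digest_sketch(const u8 key[POLY1305_KEY_SIZE],
                                   const u8 *data, unsigned int len,
                                   u8 mac[POLY1305_DIGEST_SIZE])
{
        struct poly1305_desc_ctx desc;

        poly1305_init(&desc, key);
        poly1305_update(&desc, data, len);
        poly1305_final(&desc, mac);
}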
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Cross-merge networking fixes after downstream PR (net-6.15-rc5).
No conflicts or adjacent changes.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
The recent patch that made the rfc3961 simplified code use sg_miter,
rather than manually walking the scatterlist to hash the contents of a
buffer described by that scatterlist, failed to take the starting
offset into account.
This is indicated by the selftests reporting:
krb5: Running aes128-cts-hmac-sha256-128 mic
krb5: !!! TESTFAIL crypto/krb5/selftest.c:446
krb5: MIC mismatch
Fix this by calling sg_miter_skip() before doing the loop to advance
by the offset.
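A hedged sketch of the corrected walk (function and variable names are illustrative; the actual krb5 code differs in detail):

static int hash_sg_region(struct shash_desc *desc, struct scatterlist *sg,
                          unsigned int offset, unsigned int len)
{
        struct sg_mapping_iter miter;
        int ret = 0;

        sg_miter_start(&miter, sg, sg_nents(sg), SG_MITER_FROM_SG);
        if (offset && !sg_miter_skip(&miter, offset))
                ret = -EINVAL;
        while (!ret && len && sg_miter_next(&miter)) {
                unsigned int n = min_t(size_t, miter.length, len);

                ret = crypto_shash_update(desc, miter.addr, n);
                len -= n;
        }
        sg_miter_stop(&miter);
        return ret;
}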
This only affects packet signing modes and not full encryption in RxGK
because, for full encryption, the message digest is handled inside the
authenc and krb5enc drivers.
Note: Nothing in linus/master uses the krb5lib, though the bug is there.
It is used by AF_RXRPC's RxGK implementation in -next, so there is no
need to backport.
Fixes: da6f9bf40ac2 ("crypto: krb5 - Use SG miter instead of doing it by hand")
Reported-by: Marc Dionne <marc.dionne@auristor.com>
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Link: https://patch.msgid.link/3824017.1745835726@warthog.procyon.org.uk
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Since crc32_generic.c and crc32c_generic.c now expose both the generic
and architecture-optimized implementations via the crypto_shash API,
rather than just the generic implementations as they originally did,
remove the "generic" part of the filenames and module names:
crypto/crc32_generic.c => crypto/crc32.c
crypto/crc32c_generic.c => crypto/crc32c.c
crc32_generic.ko => crc32-cryptoapi.ko
crc32c_generic.ko => crc32c-cryptoapi.ko
The reason for adding the -cryptoapi suffixes to the module names is to
avoid a module name collision with crc32.ko, which is the library API.
We could instead rename the library module to libcrc32.ko. However,
while lib/crypto/ uses that convention, the rest of lib/ doesn't. Since
the library API is the primary API for CRC-32, I'd like to keep the
unsuffixed name for it and make the Crypto API modules use a suffix.
Acked-by: Arnd Bergmann <arnd@arndb.de>
Link: https://lore.kernel.org/r/20250428162458.29732-1-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
|
|
Move the generic part of skcipher walk into scatterwalk, and use
it to implement memcpy_sglist.
This makes memcpy_sglist do the right thing when two distinct SG
lists contain identical subsets (e.g., the AD part of AEAD).
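For callers the semantics are now what one would expect; a sketch of the AEAD case mentioned above, where both lists cover the same associated data (names illustrative):

/* Copies assoclen bytes correctly even when dst and src alias
 * the same AD pages. */
memcpy_sglist(dst, src, assoclen);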
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Use the Crypto API partial block handling.
The accelerated export format on x86/arm64 is easier to use, so
switch the generic polyval algorithm to use that format instead.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Do not copy the exit function in crypto_clone_tfm as it should
only be set after init_tfm or clone_tfm has succeeded.
Move the setting into crypto_clone_ahash and crypto_clone_shash
instead.
Also clone the fb (fallback) if necessary.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Add a helper to clone crypto requests and eliminate code duplication.
Use kmemdup in the helper.
Also add an fb field to crypto_tfm.
This also happens to fix the existing implementations, which were
buggy.
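A hedged sketch of the kmemdup-based clone for the ahash case; the real helper's name and its fb handling may differ:

static struct ahash_request *clone_ahash_request(struct ahash_request *req,
                                                 gfp_t gfp)
{
        struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
        size_t total = sizeof(*req) + crypto_ahash_reqsize(tfm);

        /* One allocation duplicates the request and its context. */
        return kmemdup(req, total, gfp);
}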
Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202504230118.1CxUaUoX-lkp@intel.com/
Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202504230004.c7mrY0C6-lkp@intel.com/
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Now that the architecture-optimized Poly1305 kconfig symbols are defined
regardless of CRYPTO, there is no need for CRYPTO_LIB_POLY1305 to select
CRYPTO. So, remove that. This makes the indirection through the
CRYPTO_LIB_POLY1305_INTERNAL symbol unnecessary, so get rid of that and
just use CRYPTO_LIB_POLY1305 directly. Finally, make the fallback to
the generic implementation use a default value instead of a select; this
makes it consistent with how the arch-optimized code gets enabled and
also with how CRYPTO_LIB_BLAKE2S_GENERIC gets enabled.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Now that the architecture-optimized ChaCha kconfig symbols are defined
regardless of CRYPTO, there is no need for CRYPTO_LIB_CHACHA to select
CRYPTO. So, remove that. This makes the indirection through the
CRYPTO_LIB_CHACHA_INTERNAL symbol unnecessary, so get rid of that and
just use CRYPTO_LIB_CHACHA directly. Finally, make the fallback to the
generic implementation use a default value instead of a select; this
makes it consistent with how the arch-optimized code gets enabled and
also with how CRYPTO_LIB_BLAKE2S_GENERIC gets enabled.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Remove the private and obsolete CRYPTO_ALG_ENGINE bit which is
conflicting with the new CRYPTO_ALG_DUP_FIRST bit.
Reported-by: Corentin Labbe <clabbe.montjoie@gmail.com>
Fixes: f1440a90465b ("crypto: api - Add support for duplicating algorithms before registration")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Merge crypto tree to pick up the scompress scratch refcount fix. The
merge resolution is slightly non-trivial as the context has shifted.
|
|
Commit ddd0a42671c0 only increments scomp_scratch_users when it is 0,
causing a panic when using ipcomp:
Oops: general protection fault, probably for non-canonical address 0xdffffc0000000000: 0000 [#1] SMP KASAN NOPTI
KASAN: null-ptr-deref in range [0x0000000000000000-0x0000000000000007]
CPU: 1 UID: 0 PID: 619 Comm: ping Tainted: G N 6.15.0-rc3-net-00032-ga79be02bba5c #41 PREEMPT(full)
Tainted: [N]=TEST
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Arch Linux 1.16.3-1-1 04/01/2014
RIP: 0010:inflate_fast+0x5a2/0x1b90
[...]
Call Trace:
<IRQ>
zlib_inflate+0x2d60/0x6620
deflate_sdecompress+0x166/0x350
scomp_acomp_comp_decomp+0x45f/0xa10
scomp_acomp_decompress+0x21/0x120
acomp_do_req_chain+0x3e5/0x4e0
ipcomp_input+0x212/0x550
xfrm_input+0x2de2/0x72f0
[...]
Kernel panic - not syncing: Fatal exception in interrupt
Kernel Offset: disabled
---[ end Kernel panic - not syncing: Fatal exception in interrupt ]---
Instead, let's keep the old increment, and decrement back to 0 if the
scratch allocation fails.
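A sketch of the resulting logic, following the description above rather than the exact code:

static int scomp_get_scratches_sketch(void)
{
        int ret = 0;

        mutex_lock(&scomp_lock);
        if (!scomp_scratch_users++) {
                ret = crypto_scomp_alloc_scratches();
                if (ret)
                        scomp_scratch_users--;  /* back to 0 on failure */
        }
        mutex_unlock(&scomp_lock);
        return ret;
}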
Fixes: ddd0a42671c0 ("crypto: scompress - Fix scratch allocation failure handling")
Signed-off-by: Sabrina Dubroca <sd@queasysnail.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Use the Crypto API partial block handling.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Use the Crypto API partial block handling.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Use the Crypto API partial block handling.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Use the Crypto API partial block handling.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Use the Crypto API partial block handling.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Use the Crypto API partial block handling.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Use the Crypto API partial block handling.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Use the Crypto API partial block handling.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Use the Crypto API partial block handling.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Use the Crypto API partial block handling.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Use the Crypto API partial block handling.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Use the Crypto API partial block handling.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Provide an option to handle partial blocks in the shash API.
Almost every hash algorithm has a block size and is only able
to hash partial blocks on finalisation.
Rather than duplicating the partial block handling many times,
add this functionality to the shash API.
It is optional (e.g., hmac never needs it, as it relies on the
partial block handling of the underlying hash); to enable it,
set the bit CRYPTO_AHASH_ALG_BLOCK_ONLY.
The export format is always that of the underlying hash export,
plus the partial block buffer, followed by a single byte for the
partial block length.
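Expressed as a struct, the layout is roughly this (a sketch only; CORE_EXPORT_SIZE and BLOCK_SIZE stand in for the underlying algorithm's actual values):

struct block_only_export {
        u8 core[CORE_EXPORT_SIZE];      /* underlying hash export */
        u8 buf[BLOCK_SIZE];             /* buffered partial block */
        u8 buflen;                      /* bytes used in buf */
};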
Set the bit CRYPTO_AHASH_ALG_FINAL_NONZERO to withhold an extra
byte in the partial block. This will come in handy when the code
is extended to ahash, where hardware often can't deal with a
zero-length final.
It will also be used for algorithms requiring an extra block for
finalisation (e.g., cmac).
As an optimisation, set the bit CRYPTO_AHASH_ALG_FINUP_MAX if
the algorithm wishes to get as much data as possible instead of
just the last partial block.
The descriptor will be zeroed after finalisation.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Merge crypto tree to pick up the scompress off-by-one patch. The
merge resolution is non-trivial as the dst handling code has been
moved in front of the src.
|
|
Fix off-by-one bug in the last page calculation for src and dst.
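For reference, the usual form of the last-page calculation (a generic illustration, not the literal patch):

/* Index of the last page touched by len bytes starting at offset. */
unsigned int last_page = (offset + len - 1) / PAGE_SIZE;
/* (offset + len) / PAGE_SIZE overshoots by one whenever the region
 * ends exactly on a page boundary. */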
Reported-by: Nhat Pham <nphamcs@gmail.com>
Fixes: 2d3553ecb4e3 ("crypto: scomp - Remove support for some non-trivial SG lists")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The return statements were missing, which caused REQ_CHAIN algorithms
to execute twice for every request.
Reported-by: Eric Biggers <ebiggers@kernel.org>
Fixes: 64929fe8c0a4 ("crypto: acomp - Remove request chaining")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This reverts commit 99585c2192cb1ce212876e82ef01d1c98c7f4699.
Remove the acomp multibuffer tests as they are buggy.
Reported-by: Dmitry Antipov <dmantipov@yandex.ru>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The recent code changes in this function triggered a false-positive
maybe-uninitialized warning in software_key_query. Rearrange the
code by moving the sig/tfm variables into the if clause where they
are actually used.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Add an atomic flag to the acomp walk and use that in deflate.
Due to the use of a per-cpu context, it is impossible to sleep
during the walk in deflate.
Reported-by: kernel test robot <oliver.sang@intel.com>
Closes: https://lore.kernel.org/oe-lkp/202504151654.4c3b6393-lkp@intel.com
Fixes: 08cabc7d3c86 ("crypto: deflate - Convert to acomp")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Following the example of the crc32, crc32c, and chacha code, make the
crypto subsystem register both generic and architecture-optimized
poly1305 shash algorithms, both implemented on top of the appropriate
library functions. This eliminates the need for every architecture to
implement the same shash glue code.
Note that the poly1305 shash requires that the key be prepended to the
data, which differs from the library functions where the key is simply a
parameter to poly1305_init(). Previously this was handled at a fairly
low level, polluting the library code with shash-specific code.
Reorganize things so that the shash code handles this quirk itself.
Also, to register the architecture-optimized shashes only when
architecture-optimized code is actually being used, add a function
poly1305_is_arch_optimized() and make each arch implement it. Change
each architecture's Poly1305 module_init function to arch_initcall so
that the CPU feature detection is guaranteed to run before
poly1305_is_arch_optimized() gets called by crypto/poly1305.c. (In
cases where poly1305_is_arch_optimized() just returns true
unconditionally, using arch_initcall is not strictly needed, but it's
still good to be consistent across architectures.)
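A hedged sketch of the per-arch hook; the static key name is illustrative:

bool poly1305_is_arch_optimized(void)
{
        /* The static key is set by CPU feature detection in the
         * arch module's arch_initcall. */
        return static_key_enabled(&have_poly1305_simd);
}
EXPORT_SYMBOL(poly1305_is_arch_optimized);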
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Ard's recent series of patches removing 'comp' implementations
left behind a bunch of trivial structs; remove them.
These are:
crypto842_ctx - commit 2d985ff0072f ("crypto: 842 - drop obsolete 'comp' implementation")
lz4_ctx - commit 33335afe33c9 ("crypto: lz4 - drop obsolete 'comp' implementation")
lz4hc_ctx - commit dbae96559eef ("crypto: lz4hc - drop obsolete 'comp' implementation")
lzo_ctx - commit a3e43a25bad0 ("crypto: lzo - drop obsolete 'comp' implementation")
lzorle_ctx - commit d32da55c5b0c ("crypto: lzo-rle - drop obsolete 'comp' implementation")
Signed-off-by: Dr. David Alan Gilbert <linux@treblig.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The block size of a hash algorithm is meant to be the number of
bytes its block function can handle. For cbcmac that should be
the block size of the underlying block cipher instead of one.
Set the block size of all cbcmac implementations accordingly.
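A sketch of the corresponding change at instance creation time (surrounding code omitted):

/* Inherit the underlying cipher's block size instead of using 1. */
inst->alg.base.cra_blocksize = alg->cra_blocksize;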
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Move the sm3 library code into lib/crypto.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Allow any ahash to be used with a stack request, with optional
dynamic allocation when async is needed. The intended usage is:
HASH_REQUEST_ON_STACK(req, tfm);
...
err = crypto_ahash_digest(req);

/* The request cannot complete synchronously. */
if (err == -EAGAIN) {
        /* This will not fail. */
        req = HASH_REQUEST_CLONE(req, gfp);

        /* Redo operation. */
        err = crypto_ahash_digest(req);
}
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
As all users of the dynamic descsize have been converted to use
a static one instead, remove support for dynamic descsize.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Rather than setting descsize in init_tfm, make it an algorithm
attribute and set it during instance construction.
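A sketch of the pattern, with the template computing descsize once while building the instance (names illustrative):

/* In the template's create function, instead of per-tfm init: */
inst->alg.descsize = sizeof(struct my_desc_ctx) +
                     crypto_shash_descsize(base_hash);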
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
If the bit CRYPTO_ALG_DUP_FIRST is set, an algorithm will be
duplicated by kmemdup before registration. This is intended for
hardware-based algorithms that may be unplugged at will.
Do not use this if the algorithm data structure is embedded in a
bigger data structure. Perform the duplication in the driver
instead.
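A sketch of the intended use from such a driver:

/* Ask the core to register a kmemdup()ed copy of *alg. */
alg->cra_flags |= CRYPTO_ALG_DUP_FIRST;
err = crypto_register_alg(alg);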
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The current algorithm unregistration mechanism originated from
software crypto. The code relies on module reference counts to
stop in-use algorithms from being unregistered. Therefore, if
the unregistration function is reached, it is assumed that the
module reference count has hit zero and thus the algorithm reference
count should be exactly 1.
This is completely broken for hardware devices, which can be
unplugged at random.
Fix this by allowing algorithms to be destroyed later if a destroy
callback is provided.
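A hedged sketch of such a callback; the wrapper struct is illustrative:

static void my_hw_alg_destroy(struct crypto_alg *alg)
{
        /* Runs when the last reference is finally dropped, which for
         * an unplugged device may be long after unregistration. */
        kfree(container_of(alg, struct my_hw_alg, alg));
}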
Reported-by: Sean Anderson <sean.anderson@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
If the destination buffer has a fixed length, strscpy() automatically
determines its size using sizeof() when the argument is omitted. This
makes the explicit size argument unnecessary - remove it.
No functional changes intended.
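For example (buffer and source names are illustrative):

char name[CRYPTO_MAX_ALG_NAME];

strscpy(name, src, sizeof(name));       /* before */
strscpy(name, src);                     /* after: size inferred */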
Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
When user space issues a KEYCTL_PKEY_QUERY system call for a NIST P521
key, the key_size is incorrectly reported as 528 bits instead of 521.
That's because the key size obtained through crypto_sig_keysize() is in
bytes and software_key_query() multiplies by 8 to yield the size in bits.
The underlying assumption is that the key size is always a multiple of 8.
With the recent addition of NIST P521, that's no longer the case.
Fix by returning the key_size in bits from crypto_sig_keysize() and
adjusting the calculations in software_key_query().
The ->key_size() callbacks of sig_alg algorithms now return the size in
bits, whereas the ->digest_size() and ->max_size() callbacks return the
size in bytes. This matches the units in struct keyctl_pkey_query.
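The P-521 numbers illustrate the discrepancy:

unsigned int nbytes = DIV_ROUND_UP(521, 8);     /* 66 */
unsigned int old_bits = nbytes * 8;             /* 528, as misreported */
unsigned int new_bits = 521;                    /* after the fix */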
Fixes: a7d45ba77d3d ("crypto: ecdsa - Register NIST P521 and extend test suite")
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Reviewed-by: Ignat Korchagin <ignat@cloudflare.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
KEYCTL_PKEY_QUERY system calls for ecdsa keys return the key size as
max_enc_size and max_dec_size, even though such keys cannot be used for
encryption/decryption. They're exclusively for signature generation or
verification.
Only rsa keys with pkcs1 encoding can also be used for encryption or
decryption.
Return 0 instead for ecdsa keys (as well as ecrdsa keys).
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Reviewed-by: Ignat Korchagin <ignat@cloudflare.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Use the common reqsize field and remove reqsize from ahash_alg.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Remove the type-specific reqsize field in favour of the common one.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Use the common reqsize if present.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Rather than storing the folio as is and handling it later, convert
it to a scatterlist right away.
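A sketch of the conversion for a single folio (names illustrative):

struct scatterlist sg;

sg_init_table(&sg, 1);
sg_set_folio(&sg, folio, len, offset);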
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|