path: root/drivers/char/random.c
2019-05-26  random: fix soft lockup when trying to read from an uninitialized blocking pool  (Theodore Ts'o, 1 file, -3/+13)

Fixes: eb9d1bf079bb ("random: only read from /dev/random after its pool has received 128 bits")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2019-04-20  random: add a spinlock_t to struct batched_entropy  (Sebastian Andrzej Siewior, 1 file, -25/+27)

The per-CPU variable batched_entropy_uXX is protected by get_cpu_var(). This is just a preempt_disable(), which ensures that the variable is only accessed from the local CPU. It does not protect against users on the same CPU from another context. It is possible that a preemptible context reads slot 0 and then an interrupt occurs and the same value is read again.

The above scenario is confirmed by lockdep if we add a spinlock:

| ================================
| WARNING: inconsistent lock state
| 5.1.0-rc3+ #42 Not tainted
| --------------------------------
| inconsistent {SOFTIRQ-ON-W} -> {IN-SOFTIRQ-W} usage.
| ksoftirqd/9/56 [HC0[0]:SC1[1]:HE0:SE0] takes:
| (____ptrval____) (batched_entropy_u32.lock){+.?.}, at: get_random_u32+0x3e/0xe0
| {SOFTIRQ-ON-W} state was registered at:
|   _raw_spin_lock+0x2a/0x40
|   get_random_u32+0x3e/0xe0
|   new_slab+0x15c/0x7b0
|   ___slab_alloc+0x492/0x620
|   __slab_alloc.isra.73+0x53/0xa0
|   kmem_cache_alloc_node+0xaf/0x2a0
|   copy_process.part.41+0x1e1/0x2370
|   _do_fork+0xdb/0x6d0
|   kernel_thread+0x20/0x30
|   kthreadd+0x1ba/0x220
|   ret_from_fork+0x3a/0x50
…
| other info that might help us debug this:
|  Possible unsafe locking scenario:
|
|        CPU0
|        ----
|   lock(batched_entropy_u32.lock);
|   <Interrupt>
|     lock(batched_entropy_u32.lock);
|
|  *** DEADLOCK ***
|
| stack backtrace:
| Call Trace:
…
|  kmem_cache_alloc_trace+0x20e/0x270
|  ipmi_alloc_recv_msg+0x16/0x40
…
|  __do_softirq+0xec/0x48d
|  run_ksoftirqd+0x37/0x60
|  smpboot_thread_fn+0x191/0x290
|  kthread+0xfe/0x130
|  ret_from_fork+0x3a/0x50

Add a spinlock_t to the batched_entropy data structure and acquire the lock while accessing it. Acquire the lock with disabled interrupts, because this function may be used from interrupt context.

Remove the batched_entropy_reset_lock lock. Now that we have a lock for the data structure, we can access it from a remote CPU.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
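For illustration, the resulting per-batch locking looks roughly like this (a sketch reconstructed from the description above, not the verbatim kernel code; field names may differ):

    struct batched_entropy {
        union {
            u64 entropy_u64[CHACHA_BLOCK_SIZE / sizeof(u64)];
            u32 entropy_u32[CHACHA_BLOCK_SIZE / sizeof(u32)];
        };
        unsigned int position;
        spinlock_t batch_lock;    /* replaces batched_entropy_reset_lock */
    };

    u32 get_random_u32(void)
    {
        u32 ret;
        unsigned long flags;
        struct batched_entropy *batch;

        batch = raw_cpu_ptr(&batched_entropy_u32);
        /* irqsave: this can be called from interrupt context */
        spin_lock_irqsave(&batch->batch_lock, flags);
        if (batch->position % ARRAY_SIZE(batch->entropy_u32) == 0) {
            extract_crng((u8 *)batch->entropy_u32);
            batch->position = 0;
        }
        ret = batch->entropy_u32[batch->position++];
        spin_unlock_irqrestore(&batch->batch_lock, flags);
        return ret;
    }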
2019-04-19  random: document get_random_int() family  (George Spelvin, 1 file, -7/+76)

Explain what these functions are for and when they offer an advantage over get_random_bytes(). (We still need documentation on rng_is_initialized(), the random_ready_callback system, and early boot in general.)

Signed-off-by: George Spelvin <lkml@sdf.org>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
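As a rough usage contrast (a hedged example; 'table' and 'key' are hypothetical variables, not from the patch):

    /* Small fixed-size value: the cheap batched interface is enough. */
    u32 idx = get_random_u32() % ARRAY_SIZE(table);

    /* Key material: prefer get_random_bytes() (or a _wait variant if
     * this can run before the CRNG is seeded). */
    u8 key[32];
    get_random_bytes(key, sizeof(key));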
2019-04-19  random: fix CRNG initialization when random.trust_cpu=1  (Jon DeVree, 1 file, -1/+4)

When the system boots with random.trust_cpu=1 it doesn't initialize the per-NUMA CRNGs because it skips the rest of the CRNG startup code. This means that the code from 1e7f583af67b ("random: make /dev/urandom scalable for silly userspace programs") is not used when random.trust_cpu=1.

crash> dmesg | grep random:
[    0.000000] random: get_random_bytes called from start_kernel+0x94/0x530 with crng_init=0
[    0.314029] random: crng done (trusting CPU's manufacturer)
crash> print crng_node_pool
$6 = (struct crng_state **) 0x0

After adding the missing call to numa_crng_init() the per-NUMA CRNGs are initialized again:

crash> dmesg | grep random:
[    0.000000] random: get_random_bytes called from start_kernel+0x94/0x530 with crng_init=0
[    0.314031] random: crng done (trusting CPU's manufacturer)
crash> print crng_node_pool
$1 = (struct crng_state **) 0xffff9a915f4014a0

The call to invalidate_batched_entropy() was also missing. This is important for architectures like PPC and S390 which only have the arch_get_random_seed_* functions.

Fixes: 39a8883a2b98 ("random: add a config option to trust the CPU's hwrng")
Signed-off-by: Jon DeVree <nuxi@vault24.org>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
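The shape of the fix in the trust-cpu fast path is roughly this (a sketch; the exact surrounding code in crng_initialize() may differ):

    if (crng == &primary_crng && trust_cpu && arch_init) {
        invalidate_batched_entropy();    /* previously missing */
        numa_crng_init();                /* previously missing */
        crng_init = 2;
        pr_notice("random: crng done (trusting CPU's manufacturer)\n");
    }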
2019-04-19  random: move rand_initialize() earlier  (Kees Cook, 1 file, -3/+2)

Right now rand_initialize() is run as an early_initcall(), but it only depends on timekeeping_init() (for mixing ktime_get_real() into the pools). However, the call to boot_init_stack_canary() for stack canary initialization runs earlier, which triggers a warning at boot:

  random: get_random_bytes called from start_kernel+0x357/0x548 with crng_init=0

Instead, this moves rand_initialize() to after timekeeping_init(), and moves canary initialization there as well.

Note that this warning may still remain for machines that do not have UEFI RNG support (which initializes the RNG pools during setup_arch()), or for x86 machines without RDRAND (or booting without "random.trust_cpu=on" or CONFIG_RANDOM_TRUST_CPU=y).

Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
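The resulting call sequence in init/main.c looks approximately like this (a sketch of the ordering described above, not a verbatim diff):

    /* in start_kernel(): */
    timekeeping_init();

    /*
     * Seed the canary after everything that can feed it has run:
     * timekeeping, the RNG pools, latent entropy, and the command line.
     */
    rand_initialize();
    add_latent_entropy();
    add_device_randomness(command_line, strlen(command_line));
    boot_init_stack_canary();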
2019-04-17  random: only read from /dev/random after its pool has received 128 bits  (Theodore Ts'o, 1 file, -22/+22)

Immediately after boot, we allow reads from /dev/random before its entropy pool has been fully initialized. Fix this so that reads are not allowed until the blocking pool has received 128 bits.

We do this by repurposing the initialized flag in the entropy pool struct, and using the initialized flag of the blocking pool to indicate whether it is safe to pull from it. To do this, we needed to rework when we decide to push entropy from the input pool to the blocking pool, since the initialized flag for the input pool was used for this purpose. To simplify things, we no longer use the initialized flag for that purpose, nor do we use the entropy_total field any more.

Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2019-04-17  drivers/char/random.c: make primary_crng static  (Rasmus Villemoes, 1 file, -1/+1)

Since the definition of struct crng_state is private to random.c, and primary_crng is neither declared nor used elsewhere, there's no reason for that symbol to have external linkage.

Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2019-04-17  drivers/char/random.c: remove unused struct poolinfo::poolbits  (Rasmus Villemoes, 1 file, -3/+3)

This field is never used, so we might as well remove it.

Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2019-04-17  drivers/char/random.c: constify poolinfo_table  (Rasmus Villemoes, 1 file, -1/+1)

It is never modified, so it might as well be put in .rodata.

Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2018-11-20  crypto: chacha20-generic - refactor to allow varying number of rounds  (Eric Biggers, 1 file, -26/+25)

In preparation for adding XChaCha12 support, rename/refactor chacha20-generic to support different numbers of rounds. The justification for needing XChaCha12 support is explained in more detail in the patch "crypto: chacha - add XChaCha12 support".

The only difference between ChaCha{8,12,20} is the number of rounds itself; all other parts of the algorithm are the same. Therefore, remove the "20" from all definitions, structures, functions, files, etc. that will be shared by all ChaCha versions.

Also make ->setkey() store the round count in the chacha_ctx (previously chacha20_ctx). The generic code then passes the round count through to chacha_block(). There will be a ->setkey() function for each explicitly allowed round count; the encrypt/decrypt functions will be the same. I decided not to do it the opposite way (same ->setkey() function for all round counts, with different encrypt/decrypt functions) because that would have required more boilerplate code in architecture-specific implementations of ChaCha and XChaCha.

Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Martin Willi <martin@strongswan.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
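The shape of the refactor is roughly as follows (a sketch following the naming in the description; signatures may differ from the actual patch):

    struct chacha_ctx {
        u32 key[8];
        int nrounds;    /* 8, 12, or 20 */
    };

    /* One ->setkey() per explicitly allowed round count... */
    static int chacha20_setkey(struct crypto_skcipher *tfm,
                               const u8 *key, unsigned int keysize)
    {
        return chacha_setkey(tfm, key, keysize, 20);
    }

    /* ...and one shared core that is told how many rounds to run. */
    void chacha_block(u32 *state, u8 *stream, int nrounds);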
2018-10-25  Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6  (Linus Torvalds, 1 file, -12/+12)

Pull crypto updates from Herbert Xu:

 "API:
   - Remove VLA usage
   - Add cryptostat user-space interface
   - Add notifier for new crypto algorithms

  Algorithms:
   - Add OFB mode
   - Remove speck

  Drivers:
   - Remove x86/sha*-mb as they are buggy
   - Remove pcbc(aes) from x86/aesni
   - Improve performance of arm/ghash-ce by up to 85%
   - Implement CTS-CBC in arm64/aes-blk, faster by up to 50%
   - Remove PMULL based arm64/crc32 driver
   - Use PMULL in arm64/crct10dif
   - Add aes-ctr support in s5p-sss
   - Add caam/qi2 driver

  Others:
   - Pick better transform if one becomes available in crc-t10dif"

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (124 commits)
  crypto: chelsio - Update ntx queue received from cxgb4
  crypto: ccree - avoid implicit enum conversion
  crypto: caam - add SPDX license identifier to all files
  crypto: caam/qi - simplify CGR allocation, freeing
  crypto: mxs-dcp - make symbols 'sha1_null_hash' and 'sha256_null_hash' static
  crypto: arm64/aes-blk - ensure XTS mask is always loaded
  crypto: testmgr - fix sizeof() on COMP_BUF_SIZE
  crypto: chtls - remove set but not used variable 'csk'
  crypto: axis - fix platform_no_drv_owner.cocci warnings
  crypto: x86/aes-ni - fix build error following fpu template removal
  crypto: arm64/aes - fix handling sub-block CTS-CBC inputs
  crypto: caam/qi2 - avoid double export
  crypto: mxs-dcp - Fix AES issues
  crypto: mxs-dcp - Fix SHA null hashes and output length
  crypto: mxs-dcp - Implement sha import/export
  crypto: aegis/generic - fix for big endian systems
  crypto: morus/generic - fix for big endian systems
  crypto: lrw - fix rebase error after out of bounds fix
  crypto: cavium/nitrox - use pci_alloc_irq_vectors() while enabling MSI-X.
  crypto: cavium/nitrox - NITROX command queue changes.
  ...
2018-09-21  crypto: chacha20 - Fix chacha20_block() keystream alignment (again)  (Eric Biggers, 1 file, -12/+12)

In commit 9f480faec58c ("crypto: chacha20 - Fix keystream alignment for chacha20_block()"), I had missed that chacha20_block() can be called directly on the buffer passed to get_random_bytes(), which can have any alignment. So, while my commit didn't break anything, it didn't fully solve the alignment problems.

Revert my solution and just update chacha20_block() to use put_unaligned_le32(), so the output buffer need not be aligned. This is simpler, and on many CPUs it's the same speed.

But, I kept the 'tmp' buffers in extract_crng_user() and _get_random_bytes() 4-byte aligned, since that alignment is actually needed for _crng_backtrack_protect() too.

Reported-by: Stephan Müller <smueller@chronox.de>
Cc: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
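The core of the change is to emit each keystream word with an unaligned store (a sketch of the output loop; variable names assumed):

    #include <asm/unaligned.h>

    /* 'stream' is a plain u8 *, with no alignment requirement. */
    for (i = 0; i < ARRAY_SIZE(x); i++)
        put_unaligned_le32(x[i] + state[i], &stream[i * sizeof(u32)]);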
2018-09-01  random: make CPU trust a boot parameter  (Kees Cook, 1 file, -3/+8)

Instead of forcing a distro or other system builder to choose at build time whether the CPU is trusted for CRNG seeding via CONFIG_RANDOM_TRUST_CPU, provide a boot-time parameter for end users to control the choice. The CONFIG will set the default state instead.

Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
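Mechanically this is the usual early_param() pattern (a sketch; the commit wires the Kconfig option in as the default value):

    static bool trust_cpu __ro_after_init = IS_ENABLED(CONFIG_RANDOM_TRUST_CPU);

    static int __init parse_trust_cpu(char *arg)
    {
        return kstrtobool(arg, &trust_cpu);
    }
    early_param("random.trust_cpu", parse_trust_cpu);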
2018-08-02  random: Make crng state queryable  (Jason A. Donenfeld, 1 file, -0/+15)

It is very useful to know whether get_random_bytes_wait / wait_for_random_bytes is going to block, or whether plain get_random_bytes is going to return good or bad randomness.

The particular use case is for mitigating certain attacks in WireGuard. A handshake packet arrives and is queued up. Elsewhere a worker thread takes items from the queue and processes them. In replying to these items, it needs to use some random data, and it has to be good random data. If we simply block until we can have good randomness, then it's possible for an attacker to fill the queue up with packets waiting to be processed. Upon realizing the queue is full, WireGuard will detect that it's under a denial of service attack, and behave accordingly. A better approach is just to drop incoming handshake packets if the crng is not yet initialized.

This patch, therefore, makes that information directly accessible.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
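A consumer in the spirit of the WireGuard example might then do something like this (a hedged sketch; 'skb', 'pkt', and 'handshake_wq' are hypothetical names, and only rng_is_initialized() comes from the patch):

    if (!rng_is_initialized()) {
        /* CRNG not seeded yet: drop the handshake packet instead of
         * queueing work that would block on good randomness. */
        kfree_skb(skb);
        return;
    }
    queue_work(handshake_wq, &pkt->work);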
2018-07-24  random: remove preempt disabled region  (Ingo Molnar, 1 file, -4/+0)

No need to keep preemption disabled across the whole function. mix_pool_bytes() uses a spin_lock() to protect the pool, and there are other places like write_pool() which invoke mix_pool_bytes() without disabling preemption. credit_entropy_bits() is invoked from other places like add_hwgenerator_randomness() without disabling preemption.

Before commit 95b709b6be49 ("random: drop trickle mode") the function used __this_cpu_inc_return(), which would require disabled preemption. The preempt_disable() section was added in commit 43d5d3018c37 ("[PATCH] random driver preempt robustness", history tree). It was claimed that the code relied on "vt_ioctl() being called under BKL".

Cc: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
[bigeasy: enhance the commit message]
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2018-07-24  random: add a config option to trust the CPU's hwrng  (Theodore Ts'o, 1 file, -1/+10)

This gives the user building their own kernel (or a Linux distribution) the option of deciding whether or not to trust the CPU's hardware random number generator (e.g., RDRAND for x86 CPUs) as being correctly implemented and not having a back door introduced (perhaps courtesy of a Nation State's law enforcement or intelligence agencies).

This will prevent getrandom(2) from blocking, if there is a willingness to trust the CPU manufacturer.

Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2018-07-17  random: Return nbytes filled from hw RNG  (Tobin C. Harding, 1 file, -7/+9)

Currently the function get_random_bytes_arch() has return value 'void'. If the hw RNG fails we currently fall back to using get_random_bytes(). This defeats the purpose of requesting random material from the hw RNG in the first place.

There are currently no in-tree users of get_random_bytes_arch().

Only get random bytes from the hw RNG, and make the function return the number of bytes retrieved from the hw RNG.

Acked-by: Theodore Ts'o <tytso@mit.edu>
Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Tobin C. Harding <me@tobin.cc>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
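The reworked function then looks roughly like this (a sketch matching the description; tracing and other details omitted):

    int __must_check get_random_bytes_arch(void *buf, int nbytes)
    {
        int left = nbytes;
        char *p = buf;

        while (left) {
            unsigned long v;
            int chunk = min_t(int, left, sizeof(unsigned long));

            if (!arch_get_random_long(&v))
                break;    /* no fallback to get_random_bytes() */
            memcpy(p, &v, chunk);
            p += chunk;
            left -= chunk;
        }
        return nbytes - left;    /* may be short if the hw RNG failed */
    }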
2018-07-17  random: Fix whitespace pre random-bytes work  (Tobin C. Harding, 1 file, -2/+1)

There are a couple of whitespace issues around the function get_random_bytes_arch(). In preparation for patching this function let's clean them up.

Acked-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Tobin C. Harding <me@tobin.cc>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2018-07-17  random: mix rdrand with entropy sent in from userspace  (Theodore Ts'o, 1 file, -1/+9)

Fedora has integrated the jitter entropy daemon to work around slow boot problems, especially on VM's that don't support virtio-rng:

    https://bugzilla.redhat.com/show_bug.cgi?id=1572944

It's understandable why they did this, but the Jitter entropy daemon works fundamentally on the principle: "the CPU microarchitecture is **so** complicated and we can't figure it out, so it *must* be random". Yes, it uses statistical tests to "prove" it is secure, but AES_ENCRYPT(NSA_KEY, COUNTER++) will also pass statistical tests with flying colors.

So if RDRAND is available, mix it into entropy submitted from userspace. It can't hurt, and if you believe the NSA has backdoored RDRAND, then they probably have enough details about the Intel microarchitecture that they can reverse engineer how the Jitter entropy daemon affects the microarchitecture, and attack its output stream. And if RDRAND is in fact an honest DRNG, it will immeasurably improve on what the Jitter entropy daemon might produce.

This also provides some protection against someone who is able to read or set the entropy seed file.

Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Cc: stable@vger.kernel.org
Cc: Arnd Bergmann <arnd@arndb.de>
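Concretely, the write path XORs RDRAND output over the user-supplied bytes before mixing them into the pool (a sketch; buffer handling simplified):

    static ssize_t write_pool(struct entropy_store *r,
                              const char __user *p, size_t count)
    {
        size_t bytes;
        __u32 t, buf[16];
        int i, b;

        while (count > 0) {
            bytes = min(count, sizeof(buf));
            if (copy_from_user(&buf, p, bytes))
                return -EFAULT;

            /* Mix in RDRAND; it can't hurt even if it is backdoored. */
            for (b = bytes, i = 0; b > 0; b -= sizeof(__u32), i++) {
                if (!arch_get_random_int(&t))
                    break;
                buf[i] ^= t;
            }

            count -= bytes;
            p += bytes;
            mix_pool_bytes(r, buf, bytes);
            cond_resched();
        }
        return 0;
    }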
2018-06-28  Revert changes to convert to ->poll_mask() and aio IOCB_CMD_POLL  (Linus Torvalds, 1 file, -16/+13)

The poll() changes were not well thought out, and completely unexplained. They also caused a huge performance regression, because "->poll()" was no longer a trivial file operation that just called down to the underlying file operations, but instead did at least two indirect calls.

Indirect calls are sadly slow now with the Spectre mitigation, but the performance problem could at least be largely mitigated by changing the "->get_poll_head()" operation to just have a per-file-descriptor pointer to the poll head instead. That gets rid of one of the new indirections.

But that doesn't fix the new complexity that is completely unwarranted for the regular case. The (undocumented) reason for the poll() changes was some alleged AIO poll race fixing, but we don't make the common case slower and more complex for some uncommon special case, so this all really needs way more explanations and most likely a fundamental redesign.

[ This revert is a revert of about 30 different commits, not reverted individually because that would just be unnecessarily messy - Linus ]

Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Christoph Hellwig <hch@lst.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-05-26  random: convert to ->poll_mask  (Christoph Hellwig, 1 file, -13/+16)

The big change is that random_read_wait and random_write_wait are merged into a single waitqueue that uses keyed wakeups. Because wait_event_* doesn't know about that, this will lead to occasional spurious wakeups in _random_read and add_hwgenerator_randomness, but wait_event_* is designed to handle these and we are not in a hot path there.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Theodore Ts'o <tytso@mit.edu>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-04-25  random: rate limit unseeded randomness warnings  (Theodore Ts'o, 1 file, -5/+34)

On systems without sufficient boot randomness, there is no point in spamming dmesg.

Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Cc: stable@vger.kernel.org
2018-04-24  random: fix possible sleeping allocation from irq context  (Theodore Ts'o, 1 file, -1/+8)

We could end up doing a sleeping allocation from an irq context when CONFIG_NUMA is enabled. Fix this by initializing the NUMA crng instances in a workqueue.

Reported-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Reported-by: syzbot+9de458f6a5e713ee8c1a@syzkaller.appspotmail.com
Fixes: 8ef35c866f8862df ("random: set up the NUMA crng instances...")
Cc: stable@vger.kernel.org
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
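The deferral itself is the standard workqueue pattern (a sketch; the per-node allocation body is elided):

    static void do_numa_crng_init(struct work_struct *work)
    {
        /* Runs in process context, so GFP_KERNEL allocations are safe:
         * allocate and initialize the per-node crng instances here. */
    }

    static DECLARE_WORK(numa_crng_init_work, do_numa_crng_init);

    static void numa_crng_init(void)
    {
        schedule_work(&numa_crng_init_work);
    }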
2018-04-14  random: add new ioctl RNDRESEEDCRNG  (Theodore Ts'o, 1 file, -1/+12)

Add a new ioctl which forces the crng to be reseeded.

Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Cc: stable@kernel.org
2018-04-14  random: crng_reseed() should lock the crng instance that it is modifying  (Theodore Ts'o, 1 file, -2/+2)

Reported-by: Jann Horn <jannh@google.com>
Fixes: 1e7f583af67b ("random: make /dev/urandom scalable for silly...")
Cc: stable@kernel.org # 4.8+
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Reviewed-by: Jann Horn <jannh@google.com>
2018-04-14  random: set up the NUMA crng instances after the CRNG is fully initialized  (Theodore Ts'o, 1 file, -19/+27)

Until the primary_crng is fully initialized, don't initialize the NUMA crng nodes. Otherwise users of /dev/urandom on NUMA systems before the CRNG is fully initialized can get very bad quality randomness. Of course everyone should move to getrandom(2) where this won't be an issue, but there's a lot of legacy code out there.

This is related to CVE-2018-1108.

Reported-by: Jann Horn <jannh@google.com>
Fixes: 1e7f583af67b ("random: make /dev/urandom scalable for silly...")
Cc: stable@kernel.org # 4.8+
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2018-04-14  random: use a different mixing algorithm for add_device_randomness()  (Theodore Ts'o, 1 file, -4/+51)

add_device_randomness()'s use of crng_fast_load() was highly problematic. Some callers of add_device_randomness() can pass in a large amount of static information. This would immediately promote the crng_init state from 0 to 1, without really doing much to initialize the primary_crng's internal state with something even vaguely unpredictable.

Since we don't have the speed constraints of add_interrupt_randomness(), we can do a better job mixing in whatever unpredictability a device driver or architecture maintainer might see fit to give us, and do it in a way which does not bump the crng_init_cnt variable.

Also, since add_device_randomness() doesn't bump any entropy accounting in crng_init state 0, mix the device randomness into the input_pool entropy pool as well. This is related to CVE-2018-1108.

Reported-by: Jann Horn <jannh@google.com>
Fixes: ee7998c50c26 ("random: do not ignore early device randomness")
Cc: stable@kernel.org # 4.13+
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
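The reworked entry point then looks roughly like this (a sketch; crng_slow_load() is the new mixer that avoids bumping crng_init_cnt):

    void add_device_randomness(const void *buf, unsigned int size)
    {
        unsigned long time = random_get_entropy() ^ jiffies;
        unsigned long flags;

        /* Mix into the CRNG without crediting or bumping crng_init_cnt. */
        if (!crng_ready() && size)
            crng_slow_load(buf, size);

        /* Also mix into the input pool (no entropy credit either). */
        spin_lock_irqsave(&input_pool.lock, flags);
        _mix_pool_bytes(&input_pool, buf, size);
        _mix_pool_bytes(&input_pool, &time, sizeof(time));
        spin_unlock_irqrestore(&input_pool.lock, flags);
    }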
2018-04-14  random: fix crng_ready() test  (Theodore Ts'o, 1 file, -5/+5)

The crng_init variable has three states:

  0: The CRNG is not initialized at all
  1: The CRNG has a small amount of entropy, hopefully good enough for
     early-boot, non-cryptographic use cases
  2: The CRNG is fully initialized and we are sure it is safe for
     cryptographic use cases.

The crng_ready() function should only return true once we are in the last state. This addresses CVE-2018-1108.

Reported-by: Jann Horn <jannh@google.com>
Fixes: e192be9d9a30 ("random: replace non-blocking pool...")
Cc: stable@kernel.org # 4.8+
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Reviewed-by: Jann Horn <jannh@google.com>
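The fix itself is a one-line predicate change (sketch):

    static bool crng_ready(void)
    {
        return crng_init > 1;    /* was crng_init > 0, which also
                                  * accepted the weak early-boot state 1 */
    }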
2018-02-28  drivers/char/random.c: remove unused dont_count_entropy  (Rasmus Villemoes, 1 file, -28/+25)

Ever since "random: kill dead extract_state struct" [1], the dont_count_entropy member of struct timer_rand_state has been effectively unused. Since it hasn't found a new use in 12 years, it's probably safe to finally kill it.

[1] Pre-git, https://git.kernel.org/pub/scm/linux/kernel/git/tglx/history.git/commit/?id=c1c48e61c251f57e7a3f1bf11b3c462b2de9dcb5

Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2018-02-28  random: optimize add_interrupt_randomness  (Andi Kleen, 1 file, -1/+2)

add_interrupt_randomness always wakes up code blocking on /dev/random. This wake up is done unconditionally. Unfortunately this means all interrupts take the wait queue spinlock, which can be rather expensive on large systems processing lots of interrupts.

We saw 1% cpu time spinning on this on a large macro workload running on a large system. I believe it's a recent regression.

Always check if there is a waiter on the wait queue before waking up. This check can be done without taking a spinlock.

1.06%  10460  [kernel.vmlinux]  [k] native_queued_spin_lock_slowpath
       |
       ---native_queued_spin_lock_slowpath
          |
          --0.57%--_raw_spin_lock_irqsave
                   |
                   --0.56%--__wake_up_common_lock
                            credit_entropy_bits
                            add_interrupt_randomness
                            handle_irq_event_percpu
                            handle_irq_event
                            handle_edge_irq
                            handle_irq
                            do_IRQ
                            common_interrupt

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
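The wakeup path then becomes something like this (a sketch of the pattern; the check is lockless because wq_has_sleeper() only reads the waitqueue state):

    /* Only take the waitqueue lock if somebody is actually waiting. */
    if (entropy_bits >= random_read_wakeup_bits &&
        wq_has_sleeper(&random_read_wait)) {
        wake_up_interruptible(&random_read_wait);
        kill_fasync(&fasync, SIGIO, POLL_IN);
    }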
2018-02-28  random: use a tighter cap in credit_entropy_bits_safe()  (Theodore Ts'o, 1 file, -1/+1)

This fixes a harmless UBSAN warning where root could potentially end up causing an overflow while bumping the entropy_total field (which is ignored once the entropy pool has been initialized, and this generally is completed during the boot sequence).

This is marginal for the stable kernel series, but it's a really trivial patch, and it fixes a UBSAN warning that might cause security folks to get overly excited for no reason.

Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Reported-by: Chen Feng <puck.chen@hisilicon.com>
Cc: stable@vger.kernel.org
2018-02-11  vfs: do bulk POLL* -> EPOLL* replacement  (Linus Torvalds, 1 file, -2/+2)

This is the mindless scripted replacement of kernel use of POLL* variables as described by Al, done by this script:

    for V in IN OUT PRI ERR RDNORM RDBAND WRNORM WRBAND HUP RDHUP NVAL MSG; do
        L=`git grep -l -w POLL$V | grep -v '^t' | grep -v /um/ | grep -v '^sa' | grep -v '/poll.h$'|grep -v '^D'`
        for f in $L; do sed -i "-es/^\([^\"]*\)\(\<POLL$V\>\)/\\1E\\2/" $f; done
    done

with de-mangling cleanups yet to come.

NOTE! On almost all architectures, the EPOLL* constants have the same values as the POLL* constants do. But the keyword here is "almost". For various bad reasons they aren't the same, and epoll() doesn't actually work quite correctly in some cases due to this on Sparc et al.

The next patch from Al will sort out the final differences, and we should be all done.

Scripted-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-01-31  Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6  (Linus Torvalds, 1 file, -12/+12)

Pull crypto updates from Herbert Xu:

 "API:
   - Enforce the setting of keys for keyed aead/hash/skcipher algorithms.
   - Add multibuf speed tests in tcrypt.

  Algorithms:
   - Improve performance of sha3-generic.
   - Add native sha512 support on arm64.
   - Add v8.2 Crypto Extensions version of sha3/sm3 on arm64.
   - Avoid hmac nesting by requiring underlying algorithm to be unkeyed.
   - Add cryptd_max_cpu_qlen module parameter to cryptd.

  Drivers:
   - Add support for EIP97 engine in inside-secure.
   - Add inline IPsec support to chelsio.
   - Add RevB core support to crypto4xx.
   - Fix AEAD ICV check in crypto4xx.
   - Add stm32 crypto driver.
   - Add support for BCM63xx platforms in bcm2835 and remove bcm63xx.
   - Add Derived Key Protocol (DKP) support in caam.
   - Add Samsung Exynos True RNG driver.
   - Add support for Exynos5250+ SoCs in exynos PRNG driver"

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (166 commits)
  crypto: picoxcell - Fix error handling in spacc_probe()
  crypto: arm64/sha512 - fix/improve new v8.2 Crypto Extensions code
  crypto: arm64/sm3 - new v8.2 Crypto Extensions implementation
  crypto: arm64/sha3 - new v8.2 Crypto Extensions implementation
  crypto: testmgr - add new testcases for sha3
  crypto: sha3-generic - export init/update/final routines
  crypto: sha3-generic - simplify code
  crypto: sha3-generic - rewrite KECCAK transform to help the compiler optimize
  crypto: sha3-generic - fixes for alignment and big endian operation
  crypto: aesni - handle zero length dst buffer
  crypto: artpec6 - remove select on non-existing CRYPTO_SHA384
  hwrng: bcm2835 - Remove redundant dev_err call in bcm2835_rng_probe()
  crypto: stm32 - remove redundant dev_err call in stm32_cryp_probe()
  crypto: axis - remove unnecessary platform_get_resource() error check
  crypto: testmgr - test misuse of result in ahash
  crypto: inside-secure - make function safexcel_try_push_requests static
  crypto: aes-generic - fix aes-generic regression on powerpc
  crypto: chelsio - Fix indentation warning
  crypto: arm64/sha1-ce - get rid of literal pool
  crypto: arm64/sha2-ce - move the round constant table to .rodata section
  ...
2017-11-29  crypto: chacha20 - Fix keystream alignment for chacha20_block()  (Eric Biggers, 1 file, -12/+12)

When chacha20_block() outputs the keystream block, it uses 'u32' stores directly. However, the callers (crypto/chacha20_generic.c and drivers/char/random.c) declare the keystream buffer as a 'u8' array, which is not guaranteed to have the needed alignment.

Fix it by having both callers declare the keystream as a 'u32' array. For now this is preferable to switching over to the unaligned access macros because chacha20_block() is only being used in cases where we can easily control the alignment (stack buffers).

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-11-28  the rest of drivers/*: annotate ->poll() instances  (Al Viro, 1 file, -2/+2)

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2017-11-15  kmemcheck: remove annotations  (Levin, Alexander (Sasha Levin), 1 file, -1/+0)

Patch series "kmemcheck: kill kmemcheck", v2.

As discussed at LSF/MM, kill kmemcheck. KASan is a replacement that is able to work without the limitation of kmemcheck (single CPU, slow). KASan is already upstream.

We are also not aware of any users of kmemcheck (or users who don't consider KASan as a suitable replacement).

The only objection was that since KASAN wasn't supported by all GCC versions provided by distros at that time we should hold off for 2 years, and try again. Now that 2 years have passed, and all distros provide gcc that supports KASAN, kill kmemcheck again for the very same reasons.

This patch (of 4): Remove kmemcheck annotations, and calls to kmemcheck from the kernel.

[alexander.levin@verizon.com: correctly remove kmemcheck call from dma_map_sg_attrs]
Link: http://lkml.kernel.org/r/20171012192151.26531-1-alexander.levin@verizon.com
Link: http://lkml.kernel.org/r/20171007030159.22241-2-alexander.levin@verizon.com
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Eric W. Biederman <ebiederm@xmission.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Tim Hansen <devtimhansen@gmail.com>
Cc: Vegard Nossum <vegardno@ifi.uio.no>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2017-10-25  locking/atomics: COCCINELLE/treewide: Convert trivial ACCESS_ONCE() patterns to READ_ONCE()/WRITE_ONCE()  (Mark Rutland, 1 file, -2/+2)

Please do not apply this to mainline directly, instead please re-run the coccinelle script shown below and apply its output.

For several reasons, it is desirable to use {READ,WRITE}_ONCE() in preference to ACCESS_ONCE(), and new code is expected to use one of the former. So far, there's been no reason to change most existing uses of ACCESS_ONCE(), as these aren't harmful, and changing them results in churn.

However, for some features, the read/write distinction is critical to correct operation. To distinguish these cases, separate read/write accessors must be used. This patch migrates (most) remaining ACCESS_ONCE() instances to {READ,WRITE}_ONCE(), using the following coccinelle script:

----
// Convert trivial ACCESS_ONCE() uses to equivalent READ_ONCE() and
// WRITE_ONCE()

// $ make coccicheck COCCI=/home/mark/once.cocci SPFLAGS="--include-headers" MODE=patch

virtual patch

@ depends on patch @
expression E1, E2;
@@

- ACCESS_ONCE(E1) = E2
+ WRITE_ONCE(E1, E2)

@ depends on patch @
expression E;
@@

- ACCESS_ONCE(E)
+ READ_ONCE(E)
----

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: davem@davemloft.net
Cc: linux-arch@vger.kernel.org
Cc: mpe@ellerman.id.au
Cc: shuah@kernel.org
Cc: snitzer@redhat.com
Cc: thor.thayer@linux.intel.com
Cc: tj@kernel.org
Cc: viro@zeniv.linux.org.uk
Cc: will.deacon@arm.com
Link: http://lkml.kernel.org/r/1508792849-3115-19-git-send-email-paulmck@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-08-08  random: fix warning message on ia64 and parisc  (Helge Deller, 1 file, -1/+1)

Fix the warning message on the parisc and IA64 architectures to show the correct function name of the caller by using %pS instead of %pF. The message is printed with the value of _RET_IP_, which calls __builtin_return_address(0) and as such returns the IP address of the caller instead of a pointer to a function descriptor of the caller.

The effect of this patch is visible on the parisc and ia64 architectures only, since those are the ones which use function descriptors, while on all others %pS and %pF will behave the same.

Cc: Theodore Ts'o <tytso@mit.edu>
Cc: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Helge Deller <deller@gmx.de>
Fixes: eecabf567422 ("random: suppress spammy warnings about unseeded randomness")
Fixes: d06bfd1989fe ("random: warn when kernel uses unseeded randomness")
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2017-07-15  Merge tag 'random_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random  (Linus Torvalds, 1 file, -20/+76)

Pull random updates from Ted Ts'o:

 "Add wait_for_random_bytes() and get_random_*_wait() functions so that callers can more safely get random bytes if they can block until the CRNG is initialized.

  Also print a warning if get_random_*() is called before the CRNG is initialized. By default, only one single-line warning will be printed per boot. If CONFIG_WARN_ALL_UNSEEDED_RANDOM is defined, then a warning will be printed for each function which tries to get random bytes before the CRNG is initialized. This can get spammy for certain architecture types, so it is not enabled by default"

* tag 'random_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random:
  random: reorder READ_ONCE() in get_random_uXX
  random: suppress spammy warnings about unseeded randomness
  random: warn when kernel uses unseeded randomness
  net/route: use get_random_int for random counter
  net/neighbor: use get_random_u32 for 32-bit hash random
  rhashtable: use get_random_u32 for hash_rnd
  ceph: ensure RNG is seeded before using
  iscsi: ensure RNG is seeded before use
  cifs: use get_random_u32 for 32-bit lock random
  random: add get_random_{bytes,u32,u64,int,long,once}_wait family
  random: add wait_for_random_bytes() API
2017-07-15  random: reorder READ_ONCE() in get_random_uXX  (Sebastian Andrzej Siewior, 1 file, -2/+4)

Avoid the READ_ONCE in commit 4a072c71f49b ("random: silence compiler warnings and fix race") if we can leave the function after arch_get_random_XXX().

Cc: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
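The reordering amounts to taking the arch fast path before any batch-lock bookkeeping (a sketch; the elided tail is the per-cpu batch handling):

    u64 get_random_u64(void)
    {
        u64 ret;
        bool use_lock;

        if (arch_get_random_u64(&ret))
            return ret;    /* leave before any batch-lock bookkeeping */

        /* The READ_ONCE() now runs only when the batched path is taken. */
        use_lock = READ_ONCE(crng_init) < 2;
        /* ... per-cpu batch handling as before, filling 'ret' ... */
        return ret;
    }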
2017-07-15  random: suppress spammy warnings about unseeded randomness  (Theodore Ts'o, 1 file, -17/+39)

Unfortunately, on some models of some architectures getting a fully seeded CRNG is extremely difficult, and so this can result in dmesg getting spammed for a surprisingly long time. This is really bad from a security perspective, and so architecture maintainers really need to do what they can to get the CRNG seeded sooner after the system is booted. However, users can't do anything actionable to address this, and spamming the kernel messages log will only just annoy people.

For developers who want to work on improving this situation, CONFIG_WARN_UNSEEDED_RANDOM has been renamed to CONFIG_WARN_ALL_UNSEEDED_RANDOM. By default the kernel will always print the first use of unseeded randomness. This way, hopefully the security obsessed will be happy that there is _some_ indication when the kernel boots there may be a potential issue with that architecture or subarchitecture. To see all uses of unseeded randomness, developers can enable CONFIG_WARN_ALL_UNSEEDED_RANDOM.

Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2017-07-12  random: do not ignore early device randomness  (Kees Cook, 1 file, -0/+5)

The add_device_randomness() function would ignore incoming bytes if the crng wasn't ready. This additionally makes sure to make an early enough call to add_latent_entropy() to influence the initial stack canary, which is especially important on non-x86 systems where it stays the same through the life of the boot.

Link: http://lkml.kernel.org/r/20170626233038.GA48751@beast
Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: "Theodore Ts'o" <tytso@mit.edu>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jessica Yu <jeyu@redhat.com>
Cc: Steven Rostedt (VMware) <rostedt@goodmis.org>
Cc: Viresh Kumar <viresh.kumar@linaro.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: Prarit Bhargava <prarit@redhat.com>
Cc: Lokesh Vutla <lokeshvutla@ti.com>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2017-06-19  random: warn when kernel uses unseeded randomness  (Jason A. Donenfeld, 1 file, -2/+13)

This enables an important dmesg notification about when drivers have used the crng without it being seeded first. Prior, these errors would occur silently, and so there hasn't been a great way of diagnosing these types of bugs for obscure setups. By adding this as a config option, we can leave it on by default, so that we learn where these issues happen, in the field, while still allowing some people to turn it off, if they really know what they're doing and do not want the log entries.

However, we don't turn it on _completely_ by default. An earlier version of this patch simply had `default y`. I'd really love that, but it turns out, this problem with unseeded randomness being used is really quite present and is going to take a long time to fix. Thus, as a compromise between log-messages-for-all and nobody-knows, this is `default y`, except it is also `depends on DEBUG_KERNEL`. This will ensure that the curious see the messages while others don't have to.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2017-06-19  random: add wait_for_random_bytes() API  (Jason A. Donenfeld, 1 file, -10/+31)

This enables users of get_random_{bytes,u32,u64,int,long} to wait until the pool is ready before using this function, in case they actually want to have reliable randomness.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
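Typical usage (a sketch; get_random_bytes_wait() is the combined helper added in the same series, see the merge above):

    u8 key[32];
    int err;

    /* Block (interruptibly) until the CRNG is seeded... */
    err = wait_for_random_bytes();
    if (unlikely(err))
        return err;    /* e.g. interrupted by a signal */
    get_random_bytes(key, sizeof(key));

    /* ...or use the combined helper: */
    err = get_random_bytes_wait(key, sizeof(key));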
2017-06-19  random: silence compiler warnings and fix race  (Jason A. Donenfeld, 1 file, -6/+6)

Odd versions of gcc for the sh4 architecture will actually warn about flags being used while uninitialized, so we set them to zero. Non-crazy gccs will optimize that out again, so it doesn't make a difference.

Next, over-aggressive gccs could inline the expression that defines use_lock, which could then introduce a race resulting in a lock imbalance. By using READ_ONCE, we prevent that fate. We also make that assignment const, so that gcc can still optimize a nice amount.

Finally, we fix a potential deadlock between primary_crng.lock and batched_entropy_reset_lock, where they could be called in opposite order. Moving the call to invalidate_batched_entropy to outside the lock rectifies this issue.

Fixes: b169c13de473 ("random: invalidate batched entropy after crng init")
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Cc: stable@vger.kernel.org
2017-06-07  random: invalidate batched entropy after crng init  (Jason A. Donenfeld, 1 file, -0/+37)

It's possible that get_random_{u32,u64} is used before the crng has initialized, in which case, its output might not be cryptographically secure. For this problem, directly, this patch set is introducing the *_wait variety of functions, but even with that, there's a subtle issue: what happens to our batched entropy that was generated before initialization. Prior to this commit, it'd stick around, supplying bad numbers. After this commit, we force the entropy to be re-extracted after each phase of the crng has initialized.

In order to avoid a race condition with the position counter, we introduce a simple rwlock for this invalidation. Since it's only during this awkward transition period, after things are all set up, we stop using it, so that it doesn't have an impact on performance.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Cc: stable@vger.kernel.org # v4.11+
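The invalidation itself is small (a sketch; resetting 'position' forces a fresh extraction from the CRNG on the next use of each batch):

    static void invalidate_batched_entropy(void)
    {
        int cpu;
        unsigned long flags;

        write_lock_irqsave(&batched_entropy_reset_lock, flags);
        for_each_possible_cpu(cpu) {
            per_cpu_ptr(&batched_entropy_u32, cpu)->position = 0;
            per_cpu_ptr(&batched_entropy_u64, cpu)->position = 0;
        }
        write_unlock_irqrestore(&batched_entropy_reset_lock, flags);
    }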
2017-06-07  random: use lockless method of accessing and updating f->reg_idx  (Theodore Ts'o, 1 file, -6/+6)

Linus pointed out that there is a much more efficient way of avoiding the problem that we were trying to address in commit 9dfa7bba35ac ("fix race in drivers/char/random.c:get_reg()").

Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2017-05-24  fix race in drivers/char/random.c:get_reg()  (Michael Schmitz, 1 file, -1/+5)

get_reg() can be reentered on architectures with prioritized interrupts (m68k in this case), causing f->reg_index to be incremented after the range check. Out of bounds memory access past the pt_regs struct results. This will go mostly undetected unless access is beyond end of memory.

Prevent the race by disabling interrupts in get_reg().

Tested on m68k (Atari Falcon, and ARAnyM emulator). Kudos to Geert Uytterhoeven for helping to trace this race.

Signed-off-by: Michael Schmitz <schmitzmic@gmail.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
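The fixed function, roughly (a sketch; the essential change is the local_irq_save()/local_irq_restore() pair around the index update):

    static __u32 get_reg(struct fast_pool *f, struct pt_regs *regs)
    {
        __u32 *ptr = (__u32 *) regs;
        unsigned int idx;
        unsigned long flags;

        if (regs == NULL)
            return 0;
        local_irq_save(flags);    /* block reentry from higher-prio irqs */
        idx = f->reg_idx;
        if (idx >= sizeof(struct pt_regs) / sizeof(__u32))
            idx = 0;
        ptr += idx++;
        f->reg_idx = idx;
        local_irq_restore(flags);
        return *ptr;
    }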
2017-02-06  random: move random_min_urandom_seed into CONFIG_SYSCTL ifdef block  (Fabio Estevam, 1 file, -5/+1)

Building arm allnodefconfig causes the following build warning:

  drivers/char/random.c:318:12: warning: 'random_min_urandom_seed' defined but not used [-Wunused-variable]

Fix the warning by moving the 'random_min_urandom_seed' declaration inside the CONFIG_SYSCTL ifdef block, where it is actually used. While at it, remove the comment prior to the variable declaration.

Signed-off-by: Fabio Estevam <fabio.estevam@nxp.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2017-01-27  random: convert get_random_int/long into get_random_u32/u64  (Jason A. Donenfeld, 1 file, -28/+27)

Many times, when a user wants a random number, he wants a random number of a guaranteed size. So, thinking of get_random_int and get_random_long in terms of get_random_u32 and get_random_u64 makes it much easier to achieve this. It also makes the code simpler.

On 32-bit platforms, get_random_int and get_random_long are both aliased to get_random_u32. On 64-bit platforms, int->u32 and long->u64.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
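The aliasing is a thin header-level shim (a sketch of the resulting interface):

    u32 get_random_u32(void);
    u64 get_random_u64(void);

    static inline unsigned int get_random_int(void)
    {
        return get_random_u32();
    }

    static inline unsigned long get_random_long(void)
    {
    #if BITS_PER_LONG == 64
        return get_random_u64();
    #else
        return get_random_u32();
    #endif
    }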