path: root/kernel/kcsan
2022-01-11  Merge tag 'kcsan.2022.01.09a' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu  (Linus Torvalds; 5 files, -106/+863)

Pull KCSAN updates from Paul McKenney:
 "This provides KCSAN fixes and also the ability to take memory barriers into account for weakly-ordered systems. This last can increase the probability of detecting certain types of data races"

* tag 'kcsan.2022.01.09a' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu: (29 commits)
  kcsan: Only test clear_bit_unlock_is_negative_byte if arch defines it
  kcsan: Avoid nested contexts reading inconsistent reorder_access
  kcsan: Turn barrier instrumentation into macros
  kcsan: Make barrier tests compatible with lockdep
  kcsan: Support WEAK_MEMORY with Clang where no objtool support exists
  compiler_attributes.h: Add __disable_sanitizer_instrumentation
  objtool, kcsan: Remove memory barrier instrumentation from noinstr
  objtool, kcsan: Add memory barrier instrumentation to whitelist
  sched, kcsan: Enable memory barrier instrumentation
  mm, kcsan: Enable barrier instrumentation
  x86/qspinlock, kcsan: Instrument barrier of pv_queued_spin_unlock()
  x86/barriers, kcsan: Use generic instrumentation for non-smp barriers
  asm-generic/bitops, kcsan: Add instrumentation for barriers
  locking/atomics, kcsan: Add instrumentation for barriers
  locking/barriers, kcsan: Support generic instrumentation
  locking/barriers, kcsan: Add instrumentation for barriers
  kcsan: selftest: Add test case to check memory barrier instrumentation
  kcsan: Ignore GCC 11+ warnings about TSan runtime support
  kcsan: test: Add test cases for memory barrier instrumentation
  kcsan: test: Match reordered or normal accesses
  ...
2021-12-14  arm64: Enable KCSAN  (Kefeng Wang; 1 file, -0/+1)

This patch enables KCSAN for arm64, with updates to the build rules so that KCSAN is not used for several incompatible compilation units.

Recent GCC versions (at least GCC 10) enable outline-atomics by default (unlike Clang), which causes linker errors for kernel/kcsan/core.o. Disable the out-of-line atomics via -mno-outline-atomics to fix the linker errors.

Meanwhile, as Mark said [1], some latent issues still need to be fixed, and these are not just a KCSAN problem; make KCSAN depend on EXPERT for now. Tested selftest and kcsan_test (built with GCC 11 and Clang 13), and all passed.

[1] https://lkml.kernel.org/r/YadiUPpJ0gADbiHQ@FVFF77S0Q05N

Acked-by: Marco Elver <elver@google.com> # kernel/kcsan
Tested-by: Joey Gouly <joey.gouly@arm.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Link: https://lore.kernel.org/r/20211211131734.126874-1-wangkefeng.wang@huawei.com
[catalin.marinas@arm.com: added comment to justify EXPERT]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-12-09  kcsan: Only test clear_bit_unlock_is_negative_byte if arch defines it  (Marco Elver; 2 files, -6/+10)

Some architectures do not define clear_bit_unlock_is_negative_byte(). Only test it when it is actually defined (similar to other usage, such as in lib/test_kasan.c).

Link: https://lkml.kernel.org/r/202112050757.x67rHnFU-lkp@intel.com
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
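A minimal sketch of the guard this describes, with an assumed test variable and function name (the in-tree selftest differs in detail):

  #include <linux/bitops.h>

  static unsigned long kcsan_test_var;

  static void __init test_optional_bitop(void)
  {
  #ifdef clear_bit_unlock_is_negative_byte
          /* Only exercise the operation when the architecture defines it. */
          clear_bit_unlock_is_negative_byte(0, &kcsan_test_var);
  #endif
  }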
2021-12-09  kcsan: Avoid nested contexts reading inconsistent reorder_access  (Marco Elver; 1 file, -0/+9)

Nested contexts, such as nested interrupts or scheduler code, share the same kcsan_ctx. When such a nested context reads an inconsistent reorder_access due to an interrupt during set_reorder_access(), we can observe the following warning:

 | ------------[ cut here ]------------
 | Cannot find frame for torture_random kernel/torture.c:456 in stack trace
 | WARNING: CPU: 13 PID: 147 at kernel/kcsan/report.c:343 replace_stack_entry kernel/kcsan/report.c:343
 | ...
 | Call Trace:
 |  <TASK>
 |  sanitize_stack_entries kernel/kcsan/report.c:351 [inline]
 |  print_report kernel/kcsan/report.c:409
 |  kcsan_report_known_origin kernel/kcsan/report.c:693
 |  kcsan_setup_watchpoint kernel/kcsan/core.c:658
 |  rcutorture_one_extend kernel/rcu/rcutorture.c:1475
 |  rcutorture_loop_extend kernel/rcu/rcutorture.c:1558 [inline]
 |  ...
 |  </TASK>
 | ---[ end trace ee5299cb933115f5 ]---
 | ==================================================================
 | BUG: KCSAN: data-race in _raw_spin_lock_irqsave / rcutorture_one_extend
 |
 | write (reordered) to 0xffffffff8c93b300 of 8 bytes by task 154 on cpu 12:
 |  queued_spin_lock include/asm-generic/qspinlock.h:80 [inline]
 |  do_raw_spin_lock include/linux/spinlock.h:185 [inline]
 |  __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:111 [inline]
 |  _raw_spin_lock_irqsave kernel/locking/spinlock.c:162
 |  try_to_wake_up kernel/sched/core.c:4003
 |  sysvec_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1097
 |  asm_sysvec_apic_timer_interrupt arch/x86/include/asm/idtentry.h:638
 |  set_reorder_access kernel/kcsan/core.c:416 [inline]  <-- inconsistent reorder_access
 |  kcsan_setup_watchpoint kernel/kcsan/core.c:693
 |  rcutorture_one_extend kernel/rcu/rcutorture.c:1475
 |  rcutorture_loop_extend kernel/rcu/rcutorture.c:1558 [inline]
 |  rcu_torture_one_read kernel/rcu/rcutorture.c:1600
 |  rcu_torture_reader kernel/rcu/rcutorture.c:1692
 |  kthread kernel/kthread.c:327
 |  ret_from_fork arch/x86/entry/entry_64.S:295
 |
 | read to 0xffffffff8c93b300 of 8 bytes by task 147 on cpu 13:
 |  rcutorture_one_extend kernel/rcu/rcutorture.c:1475
 |  rcutorture_loop_extend kernel/rcu/rcutorture.c:1558 [inline]
 | ...

The warning is telling us that there was a data race which KCSAN wants to report, but the function where the original access (that is now reordered) happened cannot be found in the stack trace, which prevents KCSAN from generating the right stack trace. The stack trace of "write (reordered)" now only shows where the access was reordered to, but should instead show the stack trace of the original write, with a final line saying "reordered to".

At the point where set_reorder_access() is interrupted, it has just set reorder_access->ptr and size, at which point size is non-zero. This is sufficient (if ctx->disable_scoped is zero) for further accesses from nested contexts to perform checking of this reorder_access. That then happened in _raw_spin_lock_irqsave(), which is called by scheduler code. However, since reorder_access->ip is still stale (ptr and size belong to a different ip not yet set), this finally leads to replace_stack_entry() not finding the frame in reorder_access->ip and generating the above warning.

Fix it by ensuring that a nested context cannot access reorder_access while we update it in set_reorder_access(): set ctx->disable_scoped for the duration that reorder_access is updated, which effectively locks reorder_access and prevents concurrent use by nested contexts.

Note, set_reorder_access() can do the update only if disable_scoped is zero on entry, and must therefore set disable_scoped to non-zero after the initial check in set_reorder_access().

Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
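A simplified sketch of the fix described above, assuming the kcsan_ctx field names used elsewhere in kernel/kcsan (the exact in-tree code may differ):

  static void set_reorder_access(struct kcsan_ctx *ctx, const volatile void *ptr,
                                 size_t size, int type, unsigned long ip)
  {
          struct kcsan_scoped_access *reorder_access = ctx->reorder_access;

          if (!reorder_access)
                  return;

          /* A nested context may be using reorder_access; do not touch it. */
          if (ctx->disable_scoped)
                  return;

          /* Lock out nested contexts for the duration of the update. */
          ctx->disable_scoped++;
          reorder_access->ptr  = ptr;
          reorder_access->size = size;
          reorder_access->type = type | KCSAN_ACCESS_SCOPED;
          reorder_access->ip   = ip;
          ctx->disable_scoped--;
  }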
2021-12-09  kcsan: Make barrier tests compatible with lockdep  (Marco Elver; 2 files, -21/+30)

The barrier tests in selftest and the kcsan_test module only need the spinlock and mutex to test correct barrier instrumentation. Therefore, these were initially placed on the stack. However, lockdep asserts that locks are in static storage, and will generate this warning:

 | INFO: trying to register non-static key.
 | The code is fine but needs lockdep annotation, or maybe
 | you didn't initialize this object before use?
 | turning off the locking correctness validator.
 | CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.16.0-rc1+ #3208
 | Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.13.0-1ubuntu1.1 04/01/2014
 | Call Trace:
 |  <TASK>
 |  dump_stack_lvl+0x88/0xd8
 |  dump_stack+0x15/0x1b
 |  register_lock_class+0x6b3/0x840
 |  ...
 |  test_barrier+0x490/0x14c7
 |  kcsan_selftest+0x47/0xa0
 |  ...

To fix, move the test locks into static storage.

Fixing the above also revealed that lock operations are strengthened on first use with lockdep enabled, due to lockdep calling out into non-instrumented files (recall that kernel/locking/lockdep.c is not instrumented with KCSAN). Only kcsan_test checks for over-instrumentation of *_lock() operations, where we can simply "warm up" the test locks to avoid the test case failing with lockdep.

Reported-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
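A rough sketch of the two-part fix (static storage plus a warm-up), with illustrative names rather than the in-tree ones:

  #include <linux/spinlock.h>
  #include <linux/mutex.h>

  /* Static storage satisfies lockdep's static-key requirement. */
  static DEFINE_SPINLOCK(test_spinlock);
  static DEFINE_MUTEX(test_mutex);

  static void kcsan_test_warmup_locks(void)
  {
          /* With lockdep, first use strengthens the lock operations; warm
           * up so the over-instrumentation check sees steady state. */
          spin_lock(&test_spinlock);
          spin_unlock(&test_spinlock);
          mutex_lock(&test_mutex);
          mutex_unlock(&test_mutex);
  }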
2021-12-09  kcsan: selftest: Add test case to check memory barrier instrumentation  (Marco Elver; 2 files, -0/+143)

Memory barrier instrumentation is crucial to avoid false positives. To avoid surprises, run a simple test case in the boot-time selftest to ensure memory barriers are still instrumented correctly.

Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2021-12-09  kcsan: test: Add test cases for memory barrier instrumentation  (Marco Elver; 1 file, -0/+319)

Add test cases to check that memory barriers are instrumented correctly, and that detection of missing memory barriers works as intended if CONFIG_KCSAN_STRICT=y.

Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2021-12-09  kcsan: test: Match reordered or normal accesses  (Marco Elver; 1 file, -29/+63)

Due to reordering accesses with weak memory modeling, any access can now appear as "(reordered)". Match any permutation of accesses if CONFIG_KCSAN_WEAK_MEMORY=y, so that we effectively match an access whether or not it is denoted "(reordered)".

Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2021-12-09  kcsan: Show location access was reordered to  (Marco Elver; 1 file, -12/+23)

Also show the location the access was reordered to. An example report:

 | ==================================================================
 | BUG: KCSAN: data-race in test_kernel_wrong_memorder / test_kernel_wrong_memorder
 |
 | read-write to 0xffffffffc01e61a8 of 8 bytes by task 2311 on cpu 5:
 |  test_kernel_wrong_memorder+0x57/0x90
 |  access_thread+0x99/0xe0
 |  kthread+0x2ba/0x2f0
 |  ret_from_fork+0x22/0x30
 |
 | read-write (reordered) to 0xffffffffc01e61a8 of 8 bytes by task 2310 on cpu 7:
 |  test_kernel_wrong_memorder+0x57/0x90
 |  access_thread+0x99/0xe0
 |  kthread+0x2ba/0x2f0
 |  ret_from_fork+0x22/0x30
 |  |
 |  +-> reordered to: test_kernel_wrong_memorder+0x80/0x90
 |
 | Reported by Kernel Concurrency Sanitizer on:
 | CPU: 7 PID: 2310 Comm: access_thread Not tainted 5.14.0-rc1+ #18
 | Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014
 | ==================================================================

Reviewed-by: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2021-12-09  kcsan: Call scoped accesses reordered in reports  (Marco Elver; 2 files, -10/+10)

The scoping of an access simply denotes the scope in which it may be reordered. However, in reports, it'll be less confusing to say the access is "reordered", which more accurately describes what happened by the time the race occurred.

Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2021-12-09  kcsan: Add core memory barrier instrumentation functions  (Marco Elver; 1 file, -1/+67)

Add the core memory barrier instrumentation functions. These invalidate the current in-flight reordered access based on the rules for the respective barrier types and in-flight access type.

To obtain barrier instrumentation that can be disabled via __no_kcsan with appropriate compiler support (and not just with objtool help), barrier instrumentation repurposes __atomic_signal_fence(), instead of inserting explicit calls. Crucially, __atomic_signal_fence() normally does not map to any real instructions, but is still intercepted by -fsanitize=thread. As a result, like any other instrumentation done by the compiler, barrier instrumentation can be disabled with __no_kcsan.

Unfortunately Clang and GCC currently differ in their __no_kcsan aka __no_sanitize_thread behaviour with respect to builtin atomics (and __tsan_func_{entry,exit}) instrumentation. This is already reflected in Kconfig.kcsan's dependencies for KCSAN_WEAK_MEMORY. A later change will introduce support for newer versions of Clang that can implement __no_kcsan to also remove the additional instrumentation introduced by KCSAN_WEAK_MEMORY.

Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
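A minimal sketch of the trick described above; the macro names follow KCSAN's style, but the exact memory-order mapping here is an assumption:

  /* Emits no instructions, yet -fsanitize=thread still instruments the
   * call, so the KCSAN runtime can decode the memory order into a
   * barrier type and invalidate the in-flight reordered access. */
  #define kcsan_mb()      __atomic_signal_fence(__ATOMIC_SEQ_CST)
  #define kcsan_release() __atomic_signal_fence(__ATOMIC_RELEASE)

  /* Because this is compiler-inserted instrumentation, a __no_kcsan
   * function (with compiler support) loses it like any other check. */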
2021-12-09  kcsan: Add core support for a subset of weak memory modeling  (Marco Elver; 1 file, -15/+187)

Add support for modeling a subset of weak memory, which will enable detection of a subset of data races due to missing memory barriers.

KCSAN's approach to detecting missing memory barriers is based on modeling access reordering, and enabled if `CONFIG_KCSAN_WEAK_MEMORY=y`, which depends on `CONFIG_KCSAN_STRICT=y`. The feature can be enabled or disabled at boot and runtime via the `kcsan.weak_memory` boot parameter.

Each memory access for which a watchpoint is set up is also selected for simulated reordering within the scope of its function (at most 1 in-flight access). We are limited to modeling the effects of "buffering" (delaying the access), since the runtime cannot "prefetch" accesses (therefore no acquire modeling).

Once an access has been selected for reordering, it is checked along every other access until the end of the function scope. If an appropriate memory barrier is encountered, the access will no longer be considered for reordering. When the result of a memory operation should be ordered by a barrier, KCSAN can then detect data races where the conflict only occurs as a result of a missing barrier due to reordering accesses.

Suggested-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
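A purely conceptual sketch of the buffering model, with a hypothetical helper (check_one_access() stands in for the runtime's real check path):

  static void check_access(const volatile void *ptr, size_t size, int type,
                           unsigned long ip)
  {
          struct kcsan_ctx *ctx = get_ctx();
          struct kcsan_scoped_access *ra = ctx->reorder_access;

          /* Re-check the buffered ("delayed") access against every later
           * access in its function scope, as if it executed here. */
          if (ra && ra->size != 0)
                  check_one_access(ra->ptr, ra->size, ra->type, ra->ip);

          /* ... normal watchpoint logic for (ptr, size, type); this access
           * may in turn be selected as the new in-flight access. A matching
           * barrier, or leaving the function scope, clears the in-flight
           * access by setting ra->size = 0. */
  }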
2021-12-09  kcsan: Avoid checking scoped accesses from nested contexts  (Marco Elver; 1 file, -3/+15)

Avoid checking scoped accesses from nested contexts (such as nested interrupts or in scheduler code) which share the same kcsan_ctx.

This is to avoid detecting false positive races of accesses in the same thread with currently scoped accesses: consider setting up a watchpoint for a non-scoped (normal) access that also "conflicts" with a current scoped access. In a nested interrupt (or in the scheduler), which shares the same kcsan_ctx, we cannot check scoped accesses set up in the parent context -- simply ignore them in this case.

With the introduction of kcsan_ctx::disable_scoped, we can also clean up kcsan_check_scoped_accesses()'s recursion guard, and do not need to modify the list's prev pointer.

Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2021-12-09  kcsan: Remove redundant zero-initialization of globals  (Marco Elver; 1 file, -5/+0)

They are implicitly zero-initialized, so remove the explicit initialization. It keeps the upcoming additions to kcsan_ctx consistent with the rest.

No functional change intended.

Signed-off-by: Marco Elver <elver@google.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2021-12-09  kcsan: Refactor reading of instrumented memory  (Marco Elver; 1 file, -34/+17)

Factor out the switch statement reading instrumented memory into a helper read_instrumented_memory().

No functional change.

Signed-off-by: Marco Elver <elver@google.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
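A plausible shape for the factored-out helper (a sketch; the in-tree version may differ in detail):

  static __always_inline u64
  read_instrumented_memory(const volatile void *ptr, size_t size)
  {
          /* Snapshot the watched location at its natural width; values are
           * zero-extended into the common u64 representation. */
          switch (size) {
          case 1:  return READ_ONCE(*(const volatile u8 *)ptr);
          case 2:  return READ_ONCE(*(const volatile u16 *)ptr);
          case 4:  return READ_ONCE(*(const volatile u32 *)ptr);
          case 8:  return READ_ONCE(*(const volatile u64 *)ptr);
          default: return 0; /* sizes we do not snapshot */
          }
  }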
2021-09-13  kcsan: selftest: Cleanup and add missing __init  (Marco Elver; 1 file, -42/+30)

Make test_encode_decode() more readable and add missing __init.

Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2021-09-13  kcsan: Move ctx to start of argument list  (Marco Elver; 1 file, -4/+4)

It is clearer if ctx is at the start of the function argument list; it'll be more consistent when adding functions with varying arguments but all requiring ctx.

No functional change intended.

Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2021-09-13  kcsan: Support reporting scoped read-write access type  (Marco Elver; 2 files, -3/+9)

Support generating the string representation of scoped read-write accesses for completeness. They will become required in planned changes.

Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2021-09-13  kcsan: Start stack trace with explicit location if provided  (Marco Elver; 2 files, -13/+61)

If an explicit access address is set, as is done for scoped accesses, always start the stack trace from that location. get_stack_skipnr() is changed into sanitize_stack_entries(), which, if given an address, scans the stack trace for a matching function and then replaces that entry with the explicitly provided address.

The previous reports for scoped accesses were all over the place, which could be quite confusing. We now always point at the start of the scope.

Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2021-09-13  kcsan: Save instruction pointer for scoped accesses  (Marco Elver; 1 file, -3/+9)

Save the instruction pointer for scoped accesses, so that it becomes possible for the reporting code to construct more accurate stack traces that will show the start of the scope.

Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2021-09-13  kcsan: Add ability to pass instruction pointer of access to reporting  (Marco Elver; 3 files, -38/+45)

Add the ability to pass an explicitly set instruction pointer of access from check_access() all the way through to reporting, in preparation for using it in reporting.

Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2021-09-13  kcsan: test: Fix flaky test case  (Marco Elver; 1 file, -4/+18)

If CONFIG_KCSAN_REPORT_VALUE_CHANGE_ONLY=n, then we may also see data races between the writers only. If we get unlucky and never capture a read-write data race, but only the write-write data races, then the test_no_value_change* test cases may incorrectly fail.

The second problem is that the initial value needs to be reset, as otherwise we might actually observe a value change at the start.

Fix it by also looking for the write-write data races, and resetting the value to what will be written.

Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2021-09-13  kcsan: test: Use kunit_skip() to skip tests  (Marco Elver; 1 file, -4/+7)

Use the new kunit_skip() to skip tests if requirements were not met.

Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2021-09-13  kcsan: test: Defer kcsan_test_init() after kunit initialization  (Marco Elver; 1 file, -1/+1)

When the test is built into the kernel (not a module), kcsan_test_init() and kunit_init() both use late_initcall(), which means kcsan_test_init() might see a NULL debugfs_rootdir as parent dentry, resulting in kcsan_test_init() and kcsan_debugfs_init() both trying to create a debugfs node named "kcsan" in debugfs root. One of them will show an error and be unsuccessful.

Defer kcsan_test_init() until we're sure kunit was initialized.

Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2021-09-02  Merge tag 'locking-debug-2021-09-01' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds; 4 files, -51/+175)

Pull memory model updates from Ingo Molnar:
 "LKMM updates:

   - Update documentation and code example

  KCSAN updates:

   - Introduce CONFIG_KCSAN_STRICT (which RCU uses)

   - Optimize use of get_ctx() by kcsan_found_watchpoint()

   - Rework atomic.h into permissive.h

   - Add the ability to ignore writes that change only one bit of a
     given data-racy variable.

   - Improve comments"

* tag 'locking-debug-2021-09-01' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  tools/memory-model: Document data_race(READ_ONCE())
  tools/memory-model: Heuristics using data_race() must handle all values
  tools/memory-model: Add example for heuristic lockless reads
  tools/memory-model: Make read_foo_diagnostic() more clearly diagnostic
  kcsan: Make strict mode imply interruptible watchers
  kcsan: permissive: Ignore data-racy 1-bit value changes
  kcsan: Print if strict or non-strict during init
  kcsan: Rework atomic.h into permissive.h
  kcsan: Reduce get_ctx() uses in kcsan_found_watchpoint()
  kcsan: Introduce CONFIG_KCSAN_STRICT
  kcsan: Remove CONFIG_KCSAN_DEBUG
  kcsan: Improve some Kconfig comments
2021-07-30  kcsan: use u64 instead of cycles_t  (Heiko Carstens; 1 file, -1/+1)

cycles_t has a different type across architectures: unsigned int, unsigned long, or unsigned long long. Depending on architecture this will generate this warning:

  kernel/kcsan/debugfs.c: In function ‘microbenchmark’:
  ./include/linux/kern_levels.h:5:25: warning: format ‘%llu’ expects argument of type ‘long long unsigned int’, but argument 3 has type ‘cycles_t’ {aka ‘long unsigned int’} [-Wformat=]

To avoid this simply change the type of cycle to u64 in microbenchmark(), since u64 is of type unsigned long long for all architectures.

Acked-by: Marco Elver <elver@google.com>
Link: https://lore.kernel.org/r/20210729142811.1309391-1-hca@linux.ibm.com
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2021-07-20  kcsan: permissive: Ignore data-racy 1-bit value changes  (Marco Elver; 2 files, -1/+80)

Add rules to ignore data-racy reads with only 1-bit value changes. Details about the rules are captured in comments in kernel/kcsan/permissive.h. More background follows.

While investigating a number of data races, we've encountered data-racy accesses on flags variables to be very common. The typical pattern is a reader masking all but one bit, and/or the writer setting/clearing only 1 bit (current->flags being a frequently encountered case; more examples in mm/sl[au]b.c, which disable KCSAN for this reason).

Since these types of data-racy accesses are common (with the assumption they are intentional and hard to miscompile), having the option (with CONFIG_KCSAN_PERMISSIVE=y) to filter them will avoid forcing everyone to mark them; it is deliberately left to preference at this time.

One important motivation for having this option built-in is to move closer to being able to enable KCSAN on CI systems or for testers wishing to test the whole kernel, while more easily filtering less interesting data races with higher probability.

For the implementation, we considered several alternatives, but had one major requirement: that the rules be kept together with the Linux-kernel tree. Adding them to the compiler would preclude us from making changes quickly; if the rules require tweaks, having them part of the compiler requires waiting another ~1 year for the next release -- that's not realistic. We are left with the following options:

 1. Maintain compiler plugins as part of the kernel tree that remove
    instrumentation for some accesses (e.g. plain-& with 1-bit mask). The
    analysis would be reader-side focused, as no assumption can be made
    about racing writers. Because it seems unrealistic to maintain 2
    plugins, one for LLVM and GCC, we would likely pick LLVM. Furthermore,
    no kernel infrastructure exists to maintain LLVM plugins, and the
    build-system implications and maintenance overheads do not look great
    (historically, plugins written against old LLVM APIs are not
    guaranteed to work with newer LLVM APIs).

 2. Find a set of rules that can be expressed in terms of observed value
    changes, and make it part of the KCSAN runtime. The analysis is
    writer-side focused, given we rely on observed value changes.

The approach taken here is (2). While a complete approach requires both (1) and (2), experiments show that the majority of data races involving trivial bit operations on flags variables can be removed with (2) alone.

It goes without saying that the filtering of data races using (1) or (2) does _not_ guarantee they are safe! Therefore, limiting ourselves to (2) for now is the conservative choice for setups that wish to enable CONFIG_KCSAN_PERMISSIVE=y.

Signed-off-by: Marco Elver <elver@google.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
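A sketch of the kind of value-change rule this adds; the real rules live in kernel/kcsan/permissive.h and are more involved, so the signature and checks below are assumptions:

  static bool kcsan_ignore_data_race(size_t size, int type, u64 old, u64 new)
  {
          u64 diff = old ^ new;

          /* Never filter asserted accesses, only plain ones. */
          if (type & KCSAN_ACCESS_ASSERT)
                  return false;

          /* Ignore if exactly one bit changed: diff is a power of two. */
          return diff != 0 && (diff & (diff - 1)) == 0;
  }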
2021-07-20  kcsan: Print if strict or non-strict during init  (Marco Elver; 1 file, -0/+9)

Show a brief message if KCSAN is strict or non-strict, and if non-strict also say that CONFIG_KCSAN_STRICT=y can be used to see all data races.

This is to hint to users of KCSAN who blindly use the default config that their configuration might miss data races of interest.

Signed-off-by: Marco Elver <elver@google.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2021-07-20  kcsan: Rework atomic.h into permissive.h  (Marco Elver; 3 files, -32/+71)

Rework atomic.h into permissive.h to better reflect its purpose, and introduce kcsan_ignore_address() and kcsan_ignore_data_race().

Introduce CONFIG_KCSAN_PERMISSIVE and update the stub functions in preparation for subsequent changes.

As before, developers who choose to use KCSAN in "strict" mode will see all data races and are not affected. Furthermore, by relying on the value-change filter logic for kcsan_ignore_data_race(), even if the permissive rules are enabled, the opt-outs in report.c:skip_report() override them (such as for RCU-related functions by default).

The option CONFIG_KCSAN_PERMISSIVE is disabled by default, so that the documented default behaviour of KCSAN does not change. Instead, like CONFIG_KCSAN_IGNORE_ATOMICS, the option needs to be explicitly opted in.

Signed-off-by: Marco Elver <elver@google.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2021-07-20  kcsan: Reduce get_ctx() uses in kcsan_found_watchpoint()  (Marco Elver; 1 file, -10/+16)

There are a number of get_ctx() calls that are close to each other, which results in poor codegen (repeated preempt_count loads). Specifically in kcsan_found_watchpoint() (even though it's a slow-path) it is beneficial to keep the race window small until the watchpoint has actually been consumed, to avoid missed opportunities to report a race.

Let's clean it up a bit before we add more code in kcsan_found_watchpoint().

Signed-off-by: Marco Elver <elver@google.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2021-07-20  kcsan: Remove CONFIG_KCSAN_DEBUG  (Marco Elver; 1 file, -9/+0)

By this point CONFIG_KCSAN_DEBUG is pretty useless, as the system just isn't usable with it due to it spamming the console (I imagine a randconfig test robot will run into this sooner or later). Remove it.

Back in 2019 I used it occasionally to record traces of watchpoints and verify the encoding is correct, but these days we have proper tests. If something similar is needed in future, just add it back ad hoc.

Signed-off-by: Marco Elver <elver@google.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2021-07-04  Merge branch 'kcsan.2021.05.18a' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu  (Linus Torvalds; 3 files, -134/+127)

Pull KCSAN updates from Paul McKenney.

* 'kcsan.2021.05.18a' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu:
  kcsan: Use URL link for pointing access-marking.txt
  kcsan: Document "value changed" line
  kcsan: Report observed value changes
  kcsan: Remove kcsan_report_type
  kcsan: Remove reporting indirection
  kcsan: Refactor access_info initialization
  kcsan: Fold panic() call into print_report()
  kcsan: Refactor passing watchpoint/other_info
  kcsan: Distinguish kcsan_report() calls
  kcsan: Simplify value change detection
  kcsan: Add pointer to access-marking.txt to data_race() bullet
2021-06-18  sched: Introduce task_is_running()  (Peter Zijlstra; 1 file, -1/+1)

Replace a bunch of 'p->state == TASK_RUNNING' with a new helper: task_is_running(p).

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Davidlohr Bueso <dave@stgolabs.net>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210611082838.222401495@infradead.org
2021-05-18  kcsan: Report observed value changes  (Mark Rutland; 3 files, -9/+33)

When a thread detects that a memory location was modified without its watchpoint being hit, the report notes that a change was detected, but does not provide concrete values for the change. Knowing the concrete values can be very helpful in tracking down any racy writers (e.g. as specific values may only be written in some portions of code, or under certain conditions).

When we detect a modification, let's report the concrete old/new values, along with the access's mask of relevant bits (and which relevant bits were modified). This can make it easier to identify potential racy writers. As the snapshots are at most 8 bytes, we can only report values for accesses up to this size, but this appears to cater for the common case.

When we detect a race via a watchpoint, we may or may not have concrete values for the modification. To be helpful, let's attempt to log them when we do, as they can be ignored where irrelevant.

The resulting reports appear as follows, with values zero-padded to the access width:

 | ==================================================================
 | BUG: KCSAN: data-race in el0_svc_common+0x34/0x25c arch/arm64/kernel/syscall.c:96
 |
 | race at unknown origin, with read to 0xffff00007ae6aa00 of 8 bytes by task 223 on cpu 1:
 |  el0_svc_common+0x34/0x25c arch/arm64/kernel/syscall.c:96
 |  do_el0_svc+0x48/0xec arch/arm64/kernel/syscall.c:178
 |  el0_svc arch/arm64/kernel/entry-common.c:226 [inline]
 |  el0_sync_handler+0x1a4/0x390 arch/arm64/kernel/entry-common.c:236
 |  el0_sync+0x140/0x180 arch/arm64/kernel/entry.S:674
 |
 | value changed: 0x0000000000000000 -> 0x0000000000000002
 |
 | Reported by Kernel Concurrency Sanitizer on:
 | CPU: 1 PID: 223 Comm: syz-executor.1 Not tainted 5.8.0-rc3-00094-ga73f923ecc8e-dirty #3
 | Hardware name: linux,dummy-virt (DT)
 | ==================================================================

If an access mask is set, it is shown underneath the "value changed" line as "bits changed: 0x<bits changed> with mask 0x<non-zero mask>".

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
[ elver@google.com: align "value changed" and "bits changed" lines, which required massaging the message; do not print bits+mask if no mask set. ]
Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2021-05-18  kcsan: Remove kcsan_report_type  (Mark Rutland; 2 files, -42/+20)

Now that the reporting code has been refactored, it's clear by construction that print_report() can only be passed KCSAN_REPORT_RACE_SIGNAL or KCSAN_REPORT_RACE_UNKNOWN_ORIGIN, and these can also be distinguished by the presence of `other_info`.

Let's simplify things and remove the report type enum, and instead let's check `other_info` to distinguish these cases. This allows us to remove code for cases which are impossible and generally makes the code simpler.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
[ elver@google.com: add updated comments to kcsan_report_*() functions ]
Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2021-05-18  kcsan: Remove reporting indirection  (Mark Rutland; 1 file, -66/+49)

Now that we have separate kcsan_report_*() functions, we can factor the distinct logic for each of the report cases out of kcsan_report(). While this means each case has to handle mutual exclusion independently, this minimizes the conditionality of code and makes it easier to read, and will permit passing distinct bits of information to print_report() in future.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
[ elver@google.com: retain comment about lockdep_off() ]
Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2021-05-18  kcsan: Refactor access_info initialization  (Mark Rutland; 1 file, -17/+25)

In subsequent patches we'll want to split kcsan_report() into distinct handlers for each report type. The largest bit of common work is initializing the `access_info`, so let's factor this out into a helper, and have the kcsan_report_*() functions pass the `access_info` as a parameter to kcsan_report().

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2021-05-18  kcsan: Fold panic() call into print_report()  (Mark Rutland; 1 file, -13/+8)

So that we can add more callers of print_report(), let's fold the panic() call into print_report() so the caller doesn't have to handle it explicitly.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2021-05-18  kcsan: Refactor passing watchpoint/other_info  (Mark Rutland; 1 file, -9/+4)

The `watchpoint_idx` argument to kcsan_report() isn't meaningful for races which were not detected by a watchpoint, and it would be clearer if callers passed the other_info directly so that a NULL value can be passed in this case.

Given that callers manipulate their watchpoints before passing the index into kcsan_report_*(), and given we index the `other_infos` array using this before we sanity-check it, the subsequent sanity check isn't all that useful. Let's remove the `watchpoint_idx` sanity check, and move the job of finding the `other_info` out of kcsan_report().

Other than the removal of the check, there should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2021-05-18  kcsan: Distinguish kcsan_report() calls  (Mark Rutland; 3 files, -15/+33)

Currently kcsan_report() is used to handle three distinct cases:

 * The caller hit a watchpoint when attempting an access. Some
   information regarding the caller and access are recorded, but no
   output is produced.

 * A caller which previously set up a watchpoint detected that the
   watchpoint has been hit, and possibly detected a change to the
   location in memory being watched. This may result in output reporting
   the interaction between this caller and the caller which hit the
   watchpoint.

 * A caller detected a modification to a memory location which wasn't
   detected by a watchpoint, for which there is no information on the
   other thread. This may result in output reporting the unexpected
   change.

... depending on the specific case the caller has distinct pieces of information available, but the prototype of kcsan_report() has to handle all three cases. This means that in some cases we pass redundant information, and in others we don't pass all the information we could pass. This also means that the report code has to demux these three cases.

So that we can pass some additional information while also simplifying the callers and report code, add separate kcsan_report_*() functions for the distinct cases, updating callers accordingly. As the watchpoint_idx is unused in the case of kcsan_report_unknown_origin(), this passes a dummy value into kcsan_report(). Subsequent patches will refactor the report code to avoid this.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
[ elver@google.com: try to make kcsan_report_*() names more descriptive ]
Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2021-05-18  kcsan: Simplify value change detection  (Mark Rutland; 1 file, -24/+16)

In kcsan_setup_watchpoint() we store snapshots of a watched value into a union of u8/u16/u32/u64 sized fields, modify this in place using a consistent field, then later check for any changes via the u64 field.

We can achieve the same effect more simply by always treating the field as a u64, as smaller values will be zero-extended. As the values are zero-extended, we don't need to truncate the access_mask when we apply it, and can always apply the full 64-bit access_mask to the 64-bit value.

Finally, we can store the two snapshots and calculated difference separately, which makes the code a little easier to read, and will permit reporting the old/new values in subsequent patches.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
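A sketch of the simplified check under these assumptions (zero-extended u64 snapshots; helper name illustrative):

  static bool value_changed(u64 old, u64 new, u64 access_mask)
  {
          u64 diff = old ^ new;   /* snapshots already zero-extended */

          /* The full 64-bit mask applies directly; no truncation needed. */
          if (access_mask)
                  diff &= access_mask;
          return diff != 0;
  }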
2021-05-18  kcsan: Fix debugfs initcall return type  (Arnd Bergmann; 1 file, -1/+2)

clang with CONFIG_LTO_CLANG points out that an initcall function should return an 'int', due to the changes made to the initcall macros in commit 3578ad11f3fb ("init: lto: fix PREL32 relocations"):

  kernel/kcsan/debugfs.c:274:15: error: returning 'void' from a function with incompatible result type 'int'
  late_initcall(kcsan_debugfs_init);
  ~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~
  include/linux/init.h:292:46: note: expanded from macro 'late_initcall'
  #define late_initcall(fn) __define_initcall(fn, 7)

Fixes: e36299efe7d7 ("kcsan, debugfs: Move debugfs file creation out of early init")
Cc: stable <stable@vger.kernel.org>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Marco Elver <elver@google.com>
Reviewed-by: Nathan Chancellor <nathan@kernel.org>
Reviewed-by: Miguel Ojeda <ojeda@kernel.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
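The fix follows the standard initcall pattern; debugfs_ops here is an illustrative file_operations instance, not necessarily the in-tree name:

  static int __init kcsan_debugfs_init(void)
  {
          debugfs_create_file("kcsan", 0644, NULL, NULL, &debugfs_ops);
          return 0;       /* initcalls must return int */
  }
  late_initcall(kcsan_debugfs_init);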
2021-04-22  kcsan: Fix printk format string  (Arnd Bergmann; 1 file, -1/+1)

Printing a 'long' variable using the '%d' format string is wrong and causes a warning from gcc:

  kernel/kcsan/kcsan_test.c: In function 'nthreads_gen_params':
  include/linux/kern_levels.h:5:25: error: format '%d' expects argument of type 'int', but argument 3 has type 'long int' [-Werror=format=]

Use the appropriate format modifier.

Fixes: f6a149140321 ("kcsan: Switch to KUNIT_CASE_PARAM for parameterized tests")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Marco Elver <elver@google.com>
Acked-by: Paul E. McKenney <paulmck@kernel.org>
Link: https://lkml.kernel.org/r/20210421135059.3371701-1-arnd@kernel.org
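To illustrate the class of fix (the variable and message here are assumed, not the test's actual code):

  long nthreads = 8;

  pr_info("%d threads\n", nthreads);    /* wrong: '%d' expects int */
  pr_info("%ld threads\n", nthreads);   /* fixed: '%ld' matches long */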
2021-03-08  kcsan: Add missing license and copyright headers  (Marco Elver; 7 files, -1/+32)

Adds missing license and/or copyright headers for KCSAN source files.

Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2021-03-08  kcsan: Switch to KUNIT_CASE_PARAM for parameterized tests  (Marco Elver; 1 file, -62/+54)

Since KUnit now supports parameterized tests via KUNIT_CASE_PARAM, update KCSAN's test to switch to it for parameterized tests. This simplifies parameterized tests and gets rid of the "parameters in case name" workaround (hack).

At the same time, we can increase the maximum number of threads used, because on systems with too few CPUs, KUnit now allows us to stop at the maximum useful number of threads and not unnecessarily execute redundant test cases with (the same) limited threads, as had been the case before.

Reviewed-by: David Gow <davidgow@google.com>
Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2021-03-08  kcsan: Make test follow KUnit style recommendations  (Marco Elver; 2 files, -3/+3)

Per recently added KUnit style recommendations at Documentation/dev-tools/kunit/style.rst, make the following changes to the KCSAN test:

 1. Rename 'kcsan-test.c' to 'kcsan_test.c'.
 2. Rename suite name 'kcsan-test' to 'kcsan'.
 3. Rename CONFIG_KCSAN_TEST to CONFIG_KCSAN_KUNIT_TEST and default to
    KUNIT_ALL_TESTS.

Reviewed-by: David Gow <davidgow@google.com>
Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2021-03-08  kcsan, debugfs: Move debugfs file creation out of early init  (Marco Elver; 3 files, -8/+3)

Commit 56348560d495 ("debugfs: do not attempt to create a new file before the filesystem is initalized") forbids creating new debugfs files until debugfs is fully initialized. This means that KCSAN's debugfs file creation, which happened at the end of __init(), no longer works. And was apparently never supposed to work!

However, there is no reason to create KCSAN's debugfs file so early. This commit therefore moves its creation to a late_initcall() callback.

Cc: "Rafael J. Wysocki" <rafael@kernel.org>
Cc: stable <stable@vger.kernel.org>
Fixes: 56348560d495 ("debugfs: do not attempt to create a new file before the filesystem is initalized")
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2021-01-04  kcsan: Rewrite kcsan_prandom_u32_max() without prandom_u32_state()  (Marco Elver; 1 file, -13/+13)

Rewrite kcsan_prandom_u32_max() to not depend on code that might be instrumented, removing any dependency on lib/random32.c. The rewrite implements a simple linear congruential generator, that is sufficient for our purposes (for udelay() and skip_watch counter randomness).

The initial motivation for this was to allow enabling KCSAN for kernel/sched (remove KCSAN_SANITIZE := n from kernel/sched/Makefile), with CONFIG_DEBUG_PREEMPT=y. Without this change, we could observe recursion:

  check_access() [via instrumentation]
    kcsan_setup_watchpoint()
      reset_kcsan_skip()
        kcsan_prandom_u32_max()
          get_cpu_var()
            preempt_disable()
              preempt_count_add() [in kernel/sched/core.c]
                check_access() [via instrumentation]

Note, while this currently does not affect an unmodified kernel, it'd be good to keep a KCSAN kernel working when KCSAN_SANITIZE := n is removed from kernel/sched/Makefile to permit testing scheduler code with KCSAN if desired.

Fixes: cd290ec24633 ("kcsan: Use tracing-safe version of prandom")
Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
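A self-contained sketch of such a generator (classic rand()-style constants; per-CPU state assumed, and details may differ from the in-tree code):

  static DEFINE_PER_CPU(u32, kcsan_rand_state);

  static u32 kcsan_prandom_u32_max(u32 ep_ro)
  {
          u32 state = this_cpu_read(kcsan_rand_state);

          /* x = a*x + c (mod 2^32); no calls into instrumentable code. */
          state = state * 1103515245 + 12345;
          this_cpu_write(kcsan_rand_state, state);
          return state % ep_ro;   /* good enough for delay/skip randomness */
  }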
2020-11-06  kcsan: Fix encoding masks and regain address bit  (Marco Elver; 1 file, -8/+6)

The watchpoint encoding masks for size and address were off-by-one bit each, with the size mask using 1 unnecessary bit and the address mask missing 1 bit. However, due to the way the size is shifted into the encoded watchpoint, we were effectively wasting and never using the extra bit.

For example, on x86 with PAGE_SIZE==4K, we have 1 bit for the is-write bit, 14 bits for the size bits, and then 49 bits left for the address. Prior to this fix we would end up with this usage:

  [ write<1> | size<14> | wasted<1> | address<48> ]

Fix it by subtracting 1 bit from the GENMASK() end and start ranges of size and address respectively. The added static_assert()s verify that the masks are as expected. With the fixed version, we get the expected usage:

  [ write<1> | size<14> | address<49> ]

Functionally no change is expected, since that extra address bit is insignificant for enabled architectures.

Acked-by: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
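A sketch of the fixed layout in GENMASK() terms (reconstructed, so treat the exact macro names as assumptions):

  #define WATCHPOINT_SIZE_BITS    14
  #define WATCHPOINT_ADDR_BITS    (BITS_PER_LONG - 1 - WATCHPOINT_SIZE_BITS)

  #define WATCHPOINT_WRITE_MASK   BIT(BITS_PER_LONG - 1)
  #define WATCHPOINT_SIZE_MASK    GENMASK(BITS_PER_LONG - 2, WATCHPOINT_ADDR_BITS)
  #define WATCHPOINT_ADDR_MASK    GENMASK(WATCHPOINT_ADDR_BITS - 1, 0)

  /* The three disjoint masks must tile the word with no wasted bit. */
  static_assert((WATCHPOINT_WRITE_MASK ^ WATCHPOINT_SIZE_MASK ^
                 WATCHPOINT_ADDR_MASK) == ~0UL);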
2020-11-02  kcsan: Never set up watchpoints on NULL pointers  (Marco Elver; 1 file, -1/+5)

Avoid setting up watchpoints on NULL pointers, as otherwise we would crash inside the KCSAN runtime (when checking for value changes) instead of the instrumented code. Because that may be confusing, skip any address less than PAGE_SIZE.

Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
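A plausible sketch of the guard (the in-tree check lives in the watchpoint encoding logic; names assumed):

  static inline bool check_encodable(unsigned long addr, size_t size)
  {
          /* Skip the whole first page: covers NULL and near-NULL offsets,
           * letting the instrumented code fault instead of the runtime. */
          return addr >= PAGE_SIZE && size <= MAX_ENCODABLE_SIZE;
  }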