MARKING SHARED-MEMORY ACCESSES
==============================

This document provides guidelines for marking intentionally concurrent
normal accesses to shared memory, that is "normal" as in accesses that do
not use read-modify-write atomic operations.  It also describes how to
document these accesses, both with comments and with special assertions
processed by the Kernel Concurrency Sanitizer (KCSAN).  This discussion
builds on an earlier LWN article [1].


ACCESS-MARKING OPTIONS
======================

The Linux kernel provides the following access-marking options:

1.	Plain C-language accesses (unmarked), for example, "a = b;"

2.	Data-race marking, for example, "data_race(a = b);"

3.	READ_ONCE(), for example, "a = READ_ONCE(b);"
	The various forms of atomic_read() also fit in here.

4.	WRITE_ONCE(), for example, "WRITE_ONCE(a, b);"
	The various forms of atomic_set() also fit in here.


These may be used in combination, as shown in this admittedly improbable
example:

	WRITE_ONCE(a, b + data_race(c + d) + READ_ONCE(e));

Neither plain C-language accesses nor data_race() (#1 and #2 above) place
any sort of constraint on the compiler's choice of optimizations [2].
In contrast, READ_ONCE() and WRITE_ONCE() (#3 and #4 above) restrict the
compiler's use of code-motion and common-subexpression optimizations.
Therefore, if a given access is involved in an intentional data race,
using READ_ONCE() for loads and WRITE_ONCE() for stores is usually
preferable to data_race(), which in turn is usually preferable to plain
C-language accesses.  It is permissible to combine #2 and #3, for example,
data_race(READ_ONCE(a)), which will both restrict compiler optimizations
and disable KCSAN diagnostics.

KCSAN will complain about many types of data races involving plain
C-language accesses, but marking all accesses involved in a given data
race with one of data_race(), READ_ONCE(), or WRITE_ONCE() will prevent
KCSAN from complaining.  Of course, lack of KCSAN complaints does not
imply correct code.  Therefore, please take a thoughtful approach
when responding to KCSAN complaints.  Churning the code base with
ill-considered additions of data_race(), READ_ONCE(), and WRITE_ONCE()
is unhelpful.

In fact, the following sections describe situations where use of
data_race() and even plain C-language accesses is preferable to
READ_ONCE() and WRITE_ONCE().


Use of the data_race() Macro
----------------------------

Here are some situations where data_race() should be used instead of
READ_ONCE() and WRITE_ONCE():

1.	Data-racy loads from shared variables whose values are used only
	for diagnostic purposes.

2.	Data-racy reads whose values are checked against a marked reload.

3.	Reads whose values feed into error-tolerant heuristics.

4.	Writes setting values that feed into error-tolerant heuristics.


Data-Racy Reads for Approximate Diagnostics

Approximate diagnostics include lockdep reports, monitoring/statistics
(including /proc and /sys output), WARN*()/BUG*() checks whose return
values are ignored, and other situations where reads from shared variables
are not an integral part of the core concurrency design.
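
For example, a shared statistics counter that is updated only under a
lock, but that is sampled locklessly for /proc output, might be read as
follows.  This is only a sketch: the variable, lock, and function names
are illustrative rather than taken from actual kernel code.

	/* nfrobnicated is updated only with frob_lock held. */
	unsigned long nfrobnicated;

	/* Diagnostic read: racy by design, so use data_race(). */
	int frob_stats_show(struct seq_file *m, void *v)
	{
		seq_printf(m, "frobnicated: %lu\n", data_race(nfrobnicated));
		return 0;
	}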

In fact, use of data_race() instead of READ_ONCE() for these diagnostic
reads can enable better checking of the remaining accesses implementing
the core concurrency design.  For example, suppose that the core design
prevents any non-diagnostic reads from shared variable x from running
concurrently with updates to x.  Then using plain C-language writes
to x allows KCSAN to detect reads from x from within regions of code
that fail to exclude the updates.  In this case, it is important to use
data_race() for the diagnostic reads because otherwise KCSAN would give
false-positive warnings about these diagnostic reads.

If it is necessary to both restrict compiler optimizations and disable
KCSAN diagnostics, use both data_race() and READ_ONCE(), for example,
data_race(READ_ONCE(a)).

In theory, plain C-language loads can also be used for this use case.
However, in practice this will have the disadvantage of causing KCSAN
to generate false positives because KCSAN will have no way of knowing
that the resulting data race was intentional.


Data-Racy Reads That Are Checked Against Marked Reload

The values from some reads are not implicitly trusted.  They are instead
fed into some operation that checks the full value against a later marked
load from memory, which means that the occasional arbitrarily bogus value
is not a problem.  For example, if a bogus value is fed into cmpxchg(),
all that happens is that this cmpxchg() fails, which normally results
in a retry.  Unless the race condition that resulted in the bogus value
recurs, this retry will with high probability succeed, so no harm done.

However, please keep in mind that a data_race() load feeding into
a cmpxchg_relaxed() might still be subject to load fusing on some
architectures.  Therefore, it is best to capture the return value from
the failing cmpxchg() for the next iteration of the loop, an approach
that provides the compiler much less scope for mischievous optimizations.
Capturing the return value from cmpxchg() also saves a memory reference
in many cases.
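
For example, the xor_shift_foo() function in the "Checking Stress-Test
Race Coverage" section below uses this pattern.  A minimal sketch of
the same idiom, with illustrative names, might look like this:

	int add_to_foo(int x)
	{
		int old, newold;

		newold = data_race(foo); /* Checked by cmpxchg(). */
		do {
			old = newold;
			newold = cmpxchg(&foo, old, old + x);
		} while (newold != old);
		return old;
	}

Note that foo is reloaded only implicitly, by capturing cmpxchg()'s
return value, rather than by rereading foo on each pass through
the loop.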

In theory, plain C-language loads can also be used for this use case.
However, in practice this will have the disadvantage of causing KCSAN
to generate false positives because KCSAN will have no way of knowing
that the resulting data race was intentional.


Reads Feeding Into Error-Tolerant Heuristics

Values from some reads feed into heuristics that can tolerate occasional
errors.  Such reads can use data_race(), thus allowing KCSAN to focus on
the other accesses to the relevant shared variables.  But please note
that data_race() loads are subject to load fusing, which can result in
consistent errors, which in turn are quite capable of breaking heuristics.
Therefore use of data_race() should be limited to cases where some other
code (such as a barrier() call) will force the occasional reload.
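
For example, a heuristic that spins waiting for a shared counter to
drain might rely on cpu_relax(), which implies a compiler barrier, to
force a reload on each pass through the loop.  The following sketch
uses illustrative names:

	/* Heuristic: wait until the pool looks nearly drained. */
	while (data_race(nr_free_slots) > data_race(nr_slots_low_water)) {
		do_background_work();
		cpu_relax(); /* Compiler barrier forces reload. */
	}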

Note that this use case requires that the heuristic be able to handle
any possible error.  In contrast, if the heuristics might be fatally
confused by one or more of the possible erroneous values, use READ_ONCE()
instead of data_race().

In theory, plain C-language loads can also be used for this use case.
However, in practice this will have the disadvantage of causing KCSAN
to generate false positives because KCSAN will have no way of knowing
that the resulting data race was intentional.


Writes Setting Values Feeding Into Error-Tolerant Heuristics

The values read into error-tolerant heuristics come from somewhere,
for example, from sysfs.  This means that some code in sysfs writes
to this same variable, and these writes can also use data_race().
After all, if the heuristic can tolerate the occasional bogus value
due to compiler-mangled reads, it can also tolerate the occasional
compiler-mangled write, at least assuming that the proper value is in
place once the write completes.
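
For example, a sysfs store function that updates the nr_slots_low_water
threshold used by the earlier sketch might look as follows, again with
illustrative names:

	ssize_t low_water_store(struct kobject *kobj, struct kobj_attribute *attr,
				const char *buf, size_t count)
	{
		int val;

		if (kstrtoint(buf, 0, &val) < 0)
			return -EINVAL;
		data_race(nr_slots_low_water = val);
		return count;
	}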

Plain C-language stores can also be used for this use case.  However,
in kernels built with CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=n, this
will have the disadvantage of causing KCSAN to generate false positives
because KCSAN will have no way of knowing that the resulting data race
was intentional.


Use of Plain C-Language Accesses
--------------------------------

Here are some example situations where plain C-language accesses should
be used instead of READ_ONCE(), WRITE_ONCE(), and data_race():

1.	Accesses protected by mutual exclusion, including strict locking
	and sequence locking.

2.	Initialization-time and cleanup-time accesses.	This covers a
	wide variety of situations, including the uniprocessor phase of
	system boot, variables to be used by not-yet-spawned kthreads,
	structures not yet published to reference-counted or RCU-protected
	data structures, and the cleanup side of any of these situations.

3.	Per-CPU variables that are not accessed from other CPUs.

4.	Private per-task variables, including on-stack variables, some
	fields in the task_struct structure, and task-private heap data.

5.	Any other loads for which there is not supposed to be a concurrent
	store to that same variable.

6.	Any other stores for which there should be neither concurrent
	loads nor concurrent stores to that same variable.

	But note that KCSAN makes two explicit exceptions to this rule
	by default, refraining from flagging plain C-language stores:

	a.	When the store writes the value already contained in
		that variable.  You can override this default by building
		with CONFIG_KCSAN_REPORT_VALUE_CHANGE_ONLY=n.

	b.	When one of the stores is in an interrupt handler and
		the other in the interrupted code.  You can override this
		default by building with CONFIG_KCSAN_INTERRUPT_WATCHER=y.

Note that it is important to use plain C-language accesses in these cases,
because doing otherwise prevents KCSAN from detecting violations of your
code's synchronization rules.


ACCESS-DOCUMENTATION OPTIONS
============================

It is important to comment marked accesses so that people reading your
code, yourself included, are reminded of the synchronization design.
However, it is even more important to comment plain C-language accesses
that are intentionally involved in data races.  Such comments are
needed to remind people reading your code, again, yourself included,
of how the compiler has been prevented from optimizing those accesses
into concurrency bugs.

It is also possible to tell KCSAN about your synchronization design.
For example, ASSERT_EXCLUSIVE_ACCESS(foo) tells KCSAN that any
concurrent access to variable foo by any other CPU is an error, even
if that concurrent access is marked with READ_ONCE().  In addition,
ASSERT_EXCLUSIVE_WRITER(foo) tells KCSAN that although it is OK for there
to be concurrent reads from foo from other CPUs, it is an error for some
other CPU to be concurrently writing to foo, even if that concurrent
write is marked with data_race() or WRITE_ONCE().
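
For example, code that is about to free an object, and that therefore
expects no concurrent accesses of any kind, might use
ASSERT_EXCLUSIVE_ACCESS() as follows (a sketch, with illustrative
names):

	void free_foo(struct foo *fp)
	{
		ASSERT_EXCLUSIVE_ACCESS(*fp);
		kfree(fp);
	}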

Note that although KCSAN will call out data races involving either
ASSERT_EXCLUSIVE_ACCESS() or ASSERT_EXCLUSIVE_WRITER() on the one hand
and data_race() writes on the other, KCSAN will not report the location
of these data_race() writes.


EXAMPLES
========

As noted earlier, the goal is to prevent the compiler from destroying
your concurrent algorithm, to help the human reader, and to inform
KCSAN of aspects of your concurrency design.  This section looks at a
few examples showing how this can be done.


Lock Protection With Lockless Diagnostic Access
-----------------------------------------------

For example, suppose a shared variable "foo" is read only while a
reader-writer spinlock is read-held, written only while that same
spinlock is write-held, except that it is also read locklessly for
diagnostic purposes.  The code might look as follows:

	int foo;
	DEFINE_RWLOCK(foo_rwlock);

	void update_foo(int newval)
	{
		write_lock(&foo_rwlock);
		foo = newval;
		do_something(newval);
		write_unlock(&foo_rwlock);
	}

	int read_foo(void)
	{
		int ret;

		read_lock(&foo_rwlock);
		do_something_else();
		ret = foo;
		read_unlock(&foo_rwlock);
		return ret;
	}

	void read_foo_diagnostic(void)
	{
		pr_info("Current value of foo: %d\n", data_race(foo));
	}

The reader-writer lock prevents the compiler from introducing concurrency
bugs into any part of the main algorithm using foo, which means that
the accesses to foo within both update_foo() and read_foo() can (and
should) be plain C-language accesses.  One benefit of making them be
plain C-language accesses is that KCSAN can detect any erroneous lockless
reads from or updates to foo.  The data_race() in read_foo_diagnostic()
tells KCSAN that data races are expected, and should be silently
ignored.  This data_race() also tells the human reading the code that
read_foo_diagnostic() might sometimes return a bogus value.

If it is necessary to suppress compiler optimization and also detect
buggy lockless writes, read_foo_diagnostic() can be updated as follows:

	void read_foo_diagnostic(void)
	{
		pr_info("Current value of foo: %d\n", data_race(READ_ONCE(foo)));
	}

Alternatively, given that KCSAN is to ignore all accesses in this function,
this function can be marked __no_kcsan and the data_race() can be dropped:

	void __no_kcsan read_foo_diagnostic(void)
	{
		pr_info("Current value of foo: %d\n", READ_ONCE(foo));
	}

However, in order for KCSAN to detect buggy lockless writes, your kernel
must be built with CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=n.  If you
need KCSAN to detect such a write even if that write did not change
the value of foo, you also need CONFIG_KCSAN_REPORT_VALUE_CHANGE_ONLY=n.
If you need KCSAN to detect such a write happening in an interrupt handler
running on the same CPU doing the legitimate lock-protected write, you
also need CONFIG_KCSAN_INTERRUPT_WATCHER=y.  With some or all of these
Kconfig options set properly, KCSAN can be quite helpful, although
it is not necessarily a full replacement for hardware watchpoints.
On the other hand, neither are hardware watchpoints a full replacement
for KCSAN because it is not always easy to tell hardware watchpoints to
conditionally trap on accesses.


Lock-Protected Writes With Lockless Reads
-----------------------------------------

For another example, suppose a shared variable "foo" is updated only
while holding a spinlock, but is read locklessly.  The code might look
as follows:

	int foo;
	DEFINE_SPINLOCK(foo_lock);

	void update_foo(int newval)
	{
		spin_lock(&foo_lock);
		WRITE_ONCE(foo, newval);
		ASSERT_EXCLUSIVE_WRITER(foo);
		do_something(newval);
		spin_unlock(&foo_lock);
	}

	int read_foo(void)
	{
		do_something_else();
		return READ_ONCE(foo);
	}

Because foo is read locklessly, all accesses are marked.  The purpose
of the ASSERT_EXCLUSIVE_WRITER() is to allow KCSAN to check for a buggy
concurrent lockless write.
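
For example, KCSAN would flag the following buggy write, which is
marked but does not hold foo_lock (this function is illustrative and
not part of the design):

	/* BUGGY: writes foo without holding foo_lock. */
	void buggy_clear_foo(void)
	{
		WRITE_ONCE(foo, 0);
	}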


Lock-Protected Writes With Heuristic Lockless Reads
---------------------------------------------------

For another example, suppose that the code can normally make use of
a per-data-structure lock, but there are times when a global lock
is required.  These times are indicated via a global flag.  The code
might look as follows, and is based loosely on nf_conntrack_lock(),
nf_conntrack_all_lock(), and nf_conntrack_all_unlock():

	bool global_flag;
	DEFINE_SPINLOCK(global_lock);
	struct foo {
		spinlock_t f_lock;
		int f_data;
	};

	/* All foo structures are in the following array. */
	int nfoo;
	struct foo *foo_array;

	void do_something_locked(struct foo *fp)
	{
		/* This works even if data_race() returns nonsense. */
		if (!data_race(global_flag)) {
			spin_lock(&fp->f_lock);
			if (!smp_load_acquire(&global_flag)) {
				do_something(fp);
				spin_unlock(&fp->f_lock);
				return;
			}
			spin_unlock(&fp->f_lock);
		}
		spin_lock(&global_lock);
		/* global_lock held, thus global flag cannot be set. */
		spin_lock(&fp->f_lock);
		spin_unlock(&global_lock);
		/*
		 * global_flag might be set here, but begin_global()
		 * will wait for ->f_lock to be released.
		 */
		do_something(fp);
		spin_unlock(&fp->f_lock);
	}

	void begin_global(void)
	{
		int i;

		spin_lock(&global_lock);
		WRITE_ONCE(global_flag, true);
		for (i = 0; i < nfoo; i++) {
			/*
			 * Wait for pre-existing local locks.  One at
			 * a time to avoid lockdep limitations.
			 */
			spin_lock(&foo_array[i].f_lock);
			spin_unlock(&foo_array[i].f_lock);
		}
	}

	void end_global(void)
	{
		smp_store_release(&global_flag, false);
		spin_unlock(&global_lock);
	}

All code paths leading from the do_something_locked() function's first
read from global_flag acquire a lock, so endless load fusing cannot
happen.

If the value read from global_flag is false, then global_flag is
rechecked while holding ->f_lock, which, if global_flag is now false,
prevents begin_global() from completing.  It is therefore safe to invoke
do_something().

Otherwise, if either value read from global_flag is true, then after
global_lock is acquired global_flag must be false.  The acquisition of
->f_lock will prevent any call to begin_global() from returning, which
means that it is safe to release global_lock and invoke do_something().

For this to work, only those foo structures in foo_array[] may be passed
to do_something_locked().  The reason for this is that the synchronization
with begin_global() relies on momentarily holding the lock of each and
every foo structure.

The smp_load_acquire() and smp_store_release() are required because
changes to a foo structure between calls to begin_global() and
end_global() are carried out without holding that structure's ->f_lock.
The smp_load_acquire() and smp_store_release() ensure that the next
invocation of do_something() from do_something_locked() will see those
changes.
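
For example, an update performed between begin_global() and
end_global() might use plain C-language stores, as in this sketch (the
update_all_foo() function is illustrative):

	void update_all_foo(int newval)
	{
		int i;

		begin_global();
		for (i = 0; i < nfoo; i++)
			foo_array[i].f_data = newval; /* ->f_lock holders excluded. */
		end_global();
	}

The smp_store_release() in end_global() then ensures that a
do_something_locked() instance whose smp_load_acquire() sees
global_flag as false will also see these updates.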


Lockless Reads and Writes
-------------------------

For another example, suppose a shared variable "foo" is both read and
updated locklessly.  The code might look as follows:

	int foo;

	int update_foo(int newval)
	{
		int ret;

		ret = xchg(&foo, newval);
		do_something(newval);
		return ret;
	}

	int read_foo(void)
	{
		do_something_else();
		return READ_ONCE(foo);
	}

Because foo is accessed locklessly, all accesses are marked.  It does
not make sense to use ASSERT_EXCLUSIVE_WRITER() in this case because
there really can be concurrent lockless writers.  KCSAN would
flag any concurrent plain C-language reads from foo, and given
CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=n, also any concurrent plain
C-language writes to foo.


Lockless Reads and Writes, But With Single-Threaded Initialization
------------------------------------------------------------------

For yet another example, suppose that foo is initialized in a
single-threaded manner, but that a number of kthreads are then created
that locklessly and concurrently access foo.  Some snippets of this code
might look as follows:

	int foo;

	void initialize_foo(int initval, int nkthreads)
	{
		int i;

		foo = initval;
		ASSERT_EXCLUSIVE_ACCESS(foo);
		for (i = 0; i < nkthreads; i++)
			kthread_run(access_foo_concurrently, ...);
	}

	/* Called from access_foo_concurrently(). */
	int update_foo(int newval)
	{
		int ret;

		ret = xchg(&foo, newval);
		do_something(newval);
		return ret;
	}

	/* Also called from access_foo_concurrently(). */
	int read_foo(void)
	{
		do_something_else();
		return READ_ONCE(foo);
	}

The initialize_foo() uses a plain C-language write to foo because there
are not supposed to be concurrent accesses during initialization.  The
ASSERT_EXCLUSIVE_ACCESS() allows KCSAN to flag buggy concurrent unmarked
reads, and the ASSERT_EXCLUSIVE_ACCESS() call further allows KCSAN to
flag buggy concurrent writes, even if:  (1) Those writes are marked or
(2) The kernel was built with CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=y.


Checking Stress-Test Race Coverage
----------------------------------

When designing stress tests it is important to ensure that race conditions
of interest really do occur.  For example, consider the following code
fragment:

	int foo;

	int update_foo(int newval)
	{
		return xchg(&foo, newval);
	}

	int xor_shift_foo(int shift, int mask)
	{
		int old, new, newold;

		newold = data_race(foo); /* Checked by cmpxchg(). */
		do {
			old = newold;
			new = (old << shift) ^ mask;
			newold = cmpxchg(&foo, old, new);
		} while (newold != old);
		return old;
	}

	int read_foo(void)
	{
		return READ_ONCE(foo);
	}

If it is possible for update_foo(), xor_shift_foo(), and read_foo() to be
invoked concurrently, the stress test should force this concurrency to
actually happen.  KCSAN can evaluate the stress test when the above code
is modified to read as follows:

	int foo;

	int update_foo(int newval)
	{
		ASSERT_EXCLUSIVE_ACCESS(foo);
		return xchg(&foo, newval);
	}

	int xor_shift_foo(int shift, int mask)
	{
		int old, new, newold;

		newold = data_race(foo); /* Checked by cmpxchg(). */
		do {
			old = newold;
			new = (old << shift) ^ mask;
			ASSERT_EXCLUSIVE_ACCESS(foo);
			newold = cmpxchg(&foo, old, new);
		} while (newold != old);
		return old;
	}


	int read_foo(void)
	{
		ASSERT_EXCLUSIVE_ACCESS(foo);
		return READ_ONCE(foo);
	}

If a given stress-test run does not result in KCSAN complaints from
each possible pair of ASSERT_EXCLUSIVE_ACCESS() invocations, the
stress test needs improvement.  If the stress test is to be evaluated
on a regular basis, it would be wise to place the above instances of
ASSERT_EXCLUSIVE_ACCESS() under #ifdef so that they do not result in
false positives when not evaluating the stress test.
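
One way to arrange this is to wrap the assertions in a macro controlled
by a build option, as in the following sketch (the option and macro
names are purely illustrative):

	#ifdef CONFIG_MY_STRESS_TEST_COVERAGE
	#define COVERAGE_ASSERT_EXCLUSIVE_ACCESS(x) ASSERT_EXCLUSIVE_ACCESS(x)
	#else
	#define COVERAGE_ASSERT_EXCLUSIVE_ACCESS(x) do { } while (0)
	#endif

The functions shown above would then invoke
COVERAGE_ASSERT_EXCLUSIVE_ACCESS() in place of the
ASSERT_EXCLUSIVE_ACCESS() calls.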


REFERENCES
==========

[1] "Concurrency bugs should fear the big bad data-race detector (part 2)"
    https://lwn.net/Articles/816854/

[2] "Who's afraid of a big bad optimizing compiler?"
    https://lwn.net/Articles/793253/