path: root/sys/kern/kern_smr.c
* Small smr_grace_wait() optimization (visa, 2020-12-25; 1 file, -6/+26)

  Make the SMR thread maintain an explicit system-wide grace period and
  make CPUs observe the current grace period when crossing a quiescent
  state. This lets the SMR thread avoid a forced context switch for
  CPUs that have already entered the latest grace period.

  This change provides a small improvement in smr_grace_wait()'s
  performance in terms of context switching.

  OK mpi@, anton@
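  A minimal sketch of the scheme described above, with illustrative
  names for the counters (a reconstruction of the idea, not the
  committed diff):

      /*
       * Sketch: smr_grace_period and spc_smrgp are assumed names for
       * the system-wide and per-CPU grace period counters.
       */
      static unsigned int smr_grace_period;

      /* Each CPU records the current period at a quiescent state. */
      void
      smr_observe(struct cpu_info *ci)
      {
              ci->ci_schedstate.spc_smrgp = READ_ONCE(smr_grace_period);
      }

      /* The SMR thread opens a new period, then disturbs only the
       * CPUs that have not observed it yet. */
      void
      smr_grace_wait(void)
      {
              CPU_INFO_ITERATOR cii;
              struct cpu_info *ci;
              unsigned int smrgp;

              smrgp = READ_ONCE(smr_grace_period) + 1;
              WRITE_ONCE(smr_grace_period, smrgp);

              CPU_INFO_FOREACH(cii, ci) {
                      /* Already in the latest grace period: no
                       * forced context switch needed. */
                      if (READ_ONCE(ci->ci_schedstate.spc_smrgp) == smrgp)
                              continue;
                      /* Force a pass through a quiescent state by
                       * pegging curproc onto that CPU. */
                      sched_peg_curproc(ci);
              }
              atomic_clearbits_int(&curproc->p_flag, P_CPUPEG);
      }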
* Adjust SMR_ASSERT_CRITICAL() and SMR_ASSERT_NONCRITICAL() so that the
  panic message shows the actual code location of the assert
  (visa, 2020-04-03; 1 file, -23/+1)

  Do this by moving the assert logic inside the macros.

  Prompted by and OK claudio@
  OK mpi@
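  Why moving the logic into the macros helps: KASSERT() captures
  __FILE__ and __LINE__ where it expands, so an assert that expands at
  the call site reports the caller's location rather than a shared
  helper function's. A sketch of the pattern (the per-CPU nesting
  counter name is illustrative):

      #define SMR_ASSERT_CRITICAL() do {                                 \
              if (panicstr == NULL && !db_active)                        \
                      KASSERT(curcpu()->ci_schedstate.spc_smrdepth > 0); \
      } while (0)

      #define SMR_ASSERT_NONCRITICAL() do {                               \
              if (panicstr == NULL && !db_active)                         \
                      KASSERT(curcpu()->ci_schedstate.spc_smrdepth == 0); \
      } while (0)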
* Start the SMR thread when all CPUs are ready for scheduling
  (visa, 2020-02-25; 1 file, -5/+3)

  This prevents the appearance of a "smr: dispatch took N seconds"
  message during boot when there is an early smr_call(). Such a call
  can happen with mfii(4). The initial dispatch cannot make progress
  until smr_grace_wait() can visit all CPUs.

  This fix is essentially a hack. It makes use of the fact that there
  is no hard guarantee on how quickly the callback of smr_call() gets
  invoked. It is assumed that the SMR call backlog does not grow large
  during boot.

  An alternative fix is to make smr_grace_wait() skip secondary CPUs
  until they have been started. However, this could break if the spinup
  logic of secondary CPUs was changed.

  Delayed SMR dispatch reported and fix tested by Hrvoje Popovski
  Discussed with and OK kettenis@, claudio@
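  The shape of the fix, as a hedged sketch with assumed function names:
  smr_call() only appends work to a backlog, so it is safe before the
  dispatch thread exists, and the thread is created from a hook that
  runs once every CPU can schedule:

      /* Assumed late-startup hook, run once all CPUs are ready for
       * scheduling.  Any smr_call() backlog queued earlier in boot is
       * dispatched once the thread starts running. */
      void
      smr_startup_thread(void)
      {
              if (kthread_create(smr_thread, NULL, NULL, "smr") != 0)
                      panic("could not create smr thread");
      }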
* convert infinite msleep(9) to msleep_nsec(9)
  (jsg, 2019-12-30; 1 file, -3/+3)

  ok mpi@
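  The conversion is mechanical: a timeout argument of 0 ("sleep
  forever") for msleep(9) becomes INFSLP for msleep_nsec(9). A sketch
  with illustrative identifiers:

      /* Before: 0 ticks means no timeout for msleep(9). */
      error = msleep(&smr_ndeferred, &smr_lock, PVM, "bored", 0);

      /* After: INFSLP (UINT64_MAX) means no timeout. */
      error = msleep_nsec(&smr_ndeferred, &smr_lock, PVM, "bored",
          INFSLP);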
* Add tsleep_nsec(9), msleep_nsec(9), and rwsleep_nsec(9)
  (cheloha, 2019-07-03; 1 file, -3/+3)

  Equivalent to their unsuffixed counterparts except that (a) they take
  a timeout in terms of nanoseconds, and (b) INFSLP, aka UINT64_MAX
  (not zero) indicates that a timeout should not be set.

  For now, zero nanoseconds is not a strictly valid invocation: we log
  a warning on DIAGNOSTIC kernels if we see such a call. We still sleep
  until the next tick in such a case, however. In the future this could
  become some sort of poll... TBD.

  To facilitate conversions to these interfaces: add inline conversion
  functions to sys/time.h for turning your timeout into nanoseconds.
  Also do a few easy conversions for warmup and to demonstrate how
  further conversions should be done.

  Lots of input from mpi@ and ratchov@. Additional input from tedu@,
  deraadt@, mortimer@, millert@, and claudio@.

  Partly inspired by FreeBSD r247787.

  positive feedback from deraadt@, ok mpi@
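  A usage sketch (kernel context assumed), using the inline conversion
  helpers the commit adds to sys/time.h, such as MSEC_TO_NSEC():

      #include <sys/time.h>

      /* Illustrative caller; "ident" is whatever channel the code
       * sleeps on. */
      int
      wait_for_event(void *ident)
      {
              int error;

              /* Sleep for up to 500 ms, converting at the call site. */
              error = tsleep_nsec(ident, PWAIT, "demo",
                  MSEC_TO_NSEC(500));
              if (error == EWOULDBLOCK)
                      return (error);         /* timed out */

              /* Sleep with no timeout: pass INFSLP, not 0. */
              return (tsleep_nsec(ident, PWAIT, "demo", INFSLP));
      }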
* Add SMR_ASSERT_NONCRITICAL() in assertwaitok()
  (visa, 2019-05-17; 1 file, -4/+1)

  This eases debugging because now the error is detected before the
  context switch. The sleep code path eventually calls assertwaitok()
  in mi_switch(), so the assertwaitok() in the SMR barrier function is
  somewhat redundant and can be removed.

  OK mpi@
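  A sketch of the placement (the body of assertwaitok() is
  abbreviated): the sleep primitives call assertwaitok() early, so a
  sleep attempted inside an SMR read-side critical section is caught
  while the offending caller is still on the stack:

      void
      assertwaitok(void)
      {
              if (panicstr || db_active)
                      return;

              /* Sleeping inside an SMR read-side critical section
               * is a bug; catch it here, before any context switch. */
              SMR_ASSERT_NONCRITICAL();
      }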
* Remove incorrect optimization (visa, 2019-05-16; 1 file, -21/+3)

  The current logic for skipping idle CPUs does not establish strong
  enough ordering between CPUs. Consequently, smr_grace_wait() might
  incorrectly skip a CPU and invoke an SMR callback too early.

  Prompted by haesbaert@
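  A hypothetical reconstruction of why the skipped-CPU fast path was
  unsafe (the code below is illustrative, not the removed diff):

      /* Inside smr_grace_wait(). */
      CPU_INFO_ITERATOR cii;
      struct cpu_info *ci;

      CPU_INFO_FOREACH(cii, ci) {
              /*
               * Unsafe: this plain load is not ordered against the
               * stores that smr_read_enter() performs on the remote
               * CPU.  That CPU may leave idle and enter a read-side
               * critical section right after the check, so a
               * callback could run while a reader still holds a
               * reference.
               */
              if (cpu_is_idle(ci))
                      continue;
              sched_peg_curproc(ci);  /* the retained, safe path */
      }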
* Add lock order checking for smr_barrier(9)
  (visa, 2019-05-14; 1 file, -1/+22)

  This is similar to the checking done in taskq_barrier(9) and
  timeout_barrier(9).

  OK mpi@
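  A sketch of the pattern, assuming the same WITNESS hooks that
  taskq_barrier(9) and timeout_barrier(9) use: the barrier is
  represented by a pseudo lock object, and WITNESS_CHECKORDER() records
  it in the lock-order graph before the caller blocks (initialization
  details such as lo_flags are elided):

      #ifdef WITNESS
      static const char smr_lock_name[] = "smrbar";
      static struct lock_type smr_lock_type = {
              .lt_name = smr_lock_name
      };
      static struct lock_object smr_lock_obj = {
              .lo_name = smr_lock_name,
              .lo_type = &smr_lock_type,
      };
      #endif

      void
      smr_barrier(void)
      {
              /* Detect lock-order violations before sleeping. */
              WITNESS_CHECKORDER(&smr_lock_obj, LOP_NEWORDER, NULL);

              /* ... then queue a wakeup callback and sleep until
               * the grace period has expired. */
      }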
* Introduce safe memory reclamation, a mechanism for reclaiming shared
  objects that readers can access without locking
  (visa, 2019-02-26; 1 file, -0/+295)

  This provides a basis for read-copy-update operations.

  Readers access SMR-protected shared objects inside an SMR read-side
  critical section where sleeping is not allowed.

  To reclaim an SMR-protected object, the writer has to ensure mutual
  exclusion of other writers, remove the object's shared reference and
  wait until read-side references cannot exist any longer. As an
  alternative to waiting, the writer can schedule a callback that gets
  invoked when reclamation is safe.

  The mechanism relies on CPU quiescent states to determine when an
  SMR-protected object is ready for reclamation.

  The <sys/smr.h> header additionally provides an implementation of
  singly- and doubly-linked lists that can be used together with SMR.
  These lists allow lockless read access with a concurrent writer.

  Discussed with many
  OK mpi@ sashan@
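  A hedged usage sketch built on the interfaces described above
  (smr_read_enter(), smr_call(), and the SMR_LIST macros from
  <sys/smr.h>); the example structure and its writer-side locking are
  illustrative:

      #include <sys/param.h>
      #include <sys/malloc.h>
      #include <sys/smr.h>

      struct entry {
              int                     value;
              SMR_LIST_ENTRY(entry)   link;
              struct smr_entry        smr;
      };

      SMR_LIST_HEAD(entry_list, entry);
      struct entry_list list = SMR_LIST_HEAD_INITIALIZER(list);

      /* Reader: no lock held, but sleeping is forbidden inside the
       * read-side critical section. */
      int
      entry_exists(int value)
      {
              struct entry *e;
              int found = 0;

              smr_read_enter();
              SMR_LIST_FOREACH(e, &list, link) {
                      if (e->value == value) {
                              found = 1;
                              break;
                      }
              }
              smr_read_leave();
              return (found);
      }

      static void
      entry_free(void *arg)
      {
              free(arg, M_DEVBUF, sizeof(struct entry));
      }

      /* Writer: with writer-vs-writer exclusion ensured by the
       * caller, unlink the object and defer the free until no
       * reader can hold a reference. */
      void
      entry_remove(struct entry *e)
      {
              SMR_LIST_REMOVE(e, link);
              smr_init(&e->smr);
              smr_call(&e->smr, entry_free, e);
      }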