path: root/sys/kern/kern_lock.c
* Let SP kernel work with WITNESS. [visa, 2019-06-04, 1 file, -1/+5]
  The necessary instrumentation was missing from the SP variant of
  mtx_enter() and mtx_enter_try(). mtx_leave() was correct already.
  Prompted by and OK patrick@
* Remove file name and line number output from witness(4) [visa, 2019-04-23, 1 file, -55/+25]
  Reduce code clutter by removing the file name and line number output
  from witness(4). Typically it is easy enough to locate offending locks
  using the stack traces that are shown in lock order conflict reports.
  Tricky cases can be tracked using sysctl kern.witness.locktrace=1 .
  This patch additionally removes the witness(4) wrapper for mutexes.
  Now each mutex implementation has to invoke the WITNESS_*() macros in
  order to utilize the checker.
  Discussed with and OK dlg@, OK mpi@
* Add a simple spinning mutex for ddb. [visa, 2019-03-23, 1 file, -1/+53]
  Unlike mutex(9), this lock keeps on spinning even if `db_active' or
  `panicstr' has been set. The new mutex also disables IPIs in the
  critical section. OK mpi@ patrick@
* Fix memory barrier in __mtx_leave(). [visa, 2019-02-25, 1 file, -2/+2]
  membar_exit_before_atomic() cannot be used in the routine because
  there is no subsequent atomic operation. membar_exit() has to be
  used instead.
  The mistake has not caused problems because on most platforms
  membar_exit_before_atomic() is membar_exit(). Only amd64 and i386
  have a dedicated membar_exit_before_atomic(), and their exit
  barriers are no-ops.
  OK dlg@
* Simplify #ifdefs. [visa, 2018-06-15, 1 file, -6/+4]
  The kernel_lock symbol is no longer needed when building a
  uniprocessor kernel with WITNESS. OK mpi@
* Constipate all the struct lock_type's so they go into .rodata [guenther, 2018-06-08, 1 file, -3/+3]
  ok visa@
* Stop counting and reporting CPU time spent spinning on a lock as system time. [mpi, 2018-05-14, 1 file, -1/+7]
  Introduce a new CP_SPIN "scheduler state" and modify userland tools
  to display the % of time a CPU spends spinning.
  Based on a diff from jmatthew@, ok pirofti@, bluhm@, visa@, deraadt@
* Drop into ddb(4) if pmap_tlb_shoot*() take too much time in MP_LOCKDEBUG kernels. [mpi, 2018-04-26, 1 file, -7/+6]
  While here sync all MP_LOCKDEBUG/while loops.
  ok mlarkin@, visa@
* Teach mtx_enter_try(9) to avoid deadlocks after a panic. [mpi, 2018-04-25, 1 file, -5/+5]
  ok deraadt@
* Try harder to execute code protected by mutexes after entering ddb(4). [mpi, 2018-03-27, 1 file, -1/+13]
  Should prevent a panic after panic reported by mlarkin@.
  ok mlarkin@, visa@
* Do not panic from ddb(4) when a lock requirement isn't fulfilled. [mpi, 2018-03-20, 1 file, -2/+2]
  Extend the logic already present for panic() to any DDB-related
  operation such that if ddb(4) is entered because of a fault or
  other trap it is still possible to call 'boot reboot'.
  While here stop printing splassert() messages as well, to not fill
  the buffer.
  ok visa@, deraadt@
* Include <sys/mutex.h> directly instead of relying on other headers to include it. [mpi, 2018-02-19, 1 file, -1/+2]
* Directly include sys/mplock.h when needed instead of depending on indirect inclusion. [jsg, 2018-02-19, 1 file, -5/+6]
  Fixes non-MULTIPROCESSOR WITNESS build.
  ok visa@ mpi@
* Put WITNESS only functions with the rest of the locking primitives. [mpi, 2018-02-14, 1 file, -1/+51]
* Merge license blocks now that they are identical. [mpi, 2018-02-10, 1 file, -16/+2]
* Artur Grabowski agreed to relicense his C mutex implementation under ISC. [mpi, 2018-02-10, 1 file, -20/+11]
  This will prevent a copyright-o-rama in kern_lock.c
* Remove CSRG copyright, there isn't any code left from Berkeley here. [mpi, 2018-02-08, 1 file, -35/+1]
  In 2016 natano@ removed the last two functions remaining from the
  CSRG time: lockinit() and lockstatus(). At that time they were
  already wrappers around recursive rwlocks functions from thib@ that
  tedu@ committed in 2013.
  ok deraadt@
* Move common mutex implementations to a MI place. [mpi, 2018-01-25, 1 file, -2/+136]
  Archs not yet converted can do the jump by defining __USE_MI_MUTEX.
  ok visa@
* Change __mp_lock_held() to work with an arbitrary CPU info structure. [mpi, 2017-12-04, 1 file, -7/+7]
  Also extend ddb(4) "ps /o" output to print which CPU is currently
  holding the KERNEL_LOCK().
  Tested by dhill@, ok visa@
* Add a machine-independent implementation for the mplock. [visa, 2017-10-17, 1 file, -9/+173]
  This reduces code duplication and makes it easier to instrument
  lock primitives.
  The MI mplock uses the ticket lock code that has been in use on
  amd64, i386 and sparc64. These are the architectures that now
  switch to the MI code.
  The lock_machdep.c files are unhooked from the build but not
  removed yet, in case something goes wrong.
  OK mpi@, kettenis@
* Make _kernel_lock_held() always succeed after panic(9). [mpi, 2017-10-09, 1 file, -1/+3]
  ok visa@
* Drop unnecessary headers. [visa, 2017-04-20, 1 file, -3/+1]
  This fixes kernel build on platforms without <machine/mplock.h>.
* Hook up mplock to witness(4) on amd64 and i386. [visa, 2017-04-20, 1 file, -2/+28]
* Remove the lockmgr() API. [natano, 2016-06-19, 1 file, -58/+1]
  It is only used by filesystems, where it is a trivial change to use
  rrw locks instead. All it needs is LK_* defines for the RW_* flags.
  tested by naddy and sthen on package building infrastructure
  input and ok jmc mpi tedu
* remove unneeded proc.h includes [jsg, 2014-09-14, 1 file, -2/+1]
  ok mpi@ kspillner@
* KERNEL_ASSERT_LOCKED(9): Assertion for kernel lock (Rev. 3) [uebayasi, 2014-07-13, 1 file, -1/+7]
  This adds a new assertion macro, KERNEL_ASSERT_LOCKED(), to assert
  that kernel_lock is held. In the long process of removing
  kernel_lock, there will be many uses (hundreds or thousands) of
  this; virtually all functions in !MP-safe subsystems should have
  this assertion. Thus the assertion should have a short, good name.
  "KERNEL_ASSERT_LOCKED" is also consistent with the other KERNEL_*
  macros and with SCHED_ASSERT_LOCKED().
  Input from dlg@ guenther@ kettenis@. OK dlg@ guenther@
* Teach rw_status() and rrw_status() to return LK_EXCLOTHER if it's write locked by a different thread. [guenther, 2014-07-09, 1 file, -1/+3]
  Teach lockstatus() to return LK_EXCLUSIVE if an exclusive lock is
  held by some other thread.
  ok beck@ tedu@
* bzero -> memset [tedu, 2014-01-21, 1 file, -2/+2]
* restore original gangster lockstatus return values for compat [tedu, 2013-05-06, 1 file, -2/+10]
* a few tweaks noticed by jsing [tedu, 2013-05-01, 1 file, -2/+2]
* exorcise lockmgr. the api remains, but is now backed by recursive rwlocks. [tedu, 2013-05-01, 1 file, -281/+31]
  originally by thib. ok deraadt jsing and anyone who tested
* do not include machine/cpu.h from a .c file; it is the responsibility of .h files to pull it in, if needed [deraadt, 2013-03-28, 1 file, -2/+1]
  ok tedu
* lockmgr() wants to use a different address for the wchan when draining the lock, but a change in member ordering meant it was using the same address. [guenther, 2011-08-28, 1 file, -7/+8]
  Explicitly use different members instead of mixing address of
  member and address of the lock itself.
  ok miod@
* Clean up after P_BIGLOCK removal. [art, 2011-07-06, 1 file, -18/+1]
  KERNEL_PROC_LOCK -> KERNEL_LOCK
  KERNEL_PROC_UNLOCK -> KERNEL_UNLOCK
  oga@ ok
* Stop using the P_BIGLOCK flag to figure out when we should release the biglock in mi_switch and just check if we're holding the biglock. [art, 2011-07-06, 1 file, -3/+1]
  The idea is that the first entry point into the kernel uses
  KERNEL_PROC_LOCK and recursive calls use KERNEL_LOCK. This
  assumption is violated in at least one place and has been causing
  confusion for lots of people.
  Initial bug report and analysis from Pedro.
  kettenis@ beck@ oga@ thib@ dlg@ ok
* cut down simple locks (so simple that they don't even lock) to the point where there is almost nothing left to them, so that we can continue getting rid of them [deraadt, 2010-04-26, 1 file, -415/+7]
  ok oga
* fix typos in comments, no code changes [schwarze, 2010-01-14, 1 file, -2/+2]
  from Brad Tilley <brad at 16systems dot com>; ok oga@
* ntfs was the last user, LK_SLEEPFAIL can die now. [oga, 2009-03-25, 1 file, -5/+1]
  ok blambert@
* Surround WEHOLDIT() macro with braces to make it more safe. [grange, 2009-01-15, 1 file, -2/+2]
  No binary change.
  ok otto@
* Remove some dead code that is confusing my greps. [art, 2007-11-26, 1 file, -20/+1]
* remove p_lock from struct proc; unused debug goo for lockmgr, which gets set and never checked etc... [thib, 2007-05-31, 1 file, -15/+1]
  ok art@, tedu@
* Don't use LK_CANRECURSE for the kernel lock, okay miod@ art@ [pedro, 2007-05-11, 1 file, -3/+2]
* lockmgr_printinfo() is only called from #ifdef DIAGNOSTIC positions, so #ifdef DIAGNOSTIC it too [deraadt, 2007-05-08, 1 file, -1/+3]
* Remove the lk_interlock from struct lock; also remove the LK_INTERLOCK flag. [thib, 2007-04-12, 1 file, -23/+3]
  This effectively makes the simplelock argument to lockmgr() fluff.
  ok miod@
* lockmgr keeps losing code, call 911! [miod, 2007-04-11, 1 file, -66/+8]
  ok pedro@ art@
* Since p_flag is often manipulated in interrupts and without biglock it's a good idea to use atomic.h operations on it. [art, 2007-03-15, 1 file, -3/+3]
  This mechanic change updates all bit operations on p_flag to
  atomic_{set,clear}bits_int. Only exception is that P_OWEUPC is set
  by MI code before calling need_proftick and it's automatically
  cleared by ADDUPC. There's no reason for MD handling of that flag
  since everyone handles it the same way.
  kettenis@ ok
* Consistently spell FALLTHROUGH to appease lint. [jsg, 2007-02-14, 1 file, -3/+3]
  ok kettenis@ cloder@ tom@ henning@
* Remove unused functionality from lockmgr(): [miod, 2007-02-03, 1 file, -385/+46]
  - LK_EXCLUPGRADE is never used.
  - LK_REENABLE is never used.
  - LK_SETRECURSE is never used. Because of this, the lk_recurselevel
    field is always zero, so it can be removed too.
  - the spinlock version (and LK_SPIN) is never used, since it was
    decided to use a different locking structure for MP-safe
    protection.
  Tested by many
* remove duplicate comment [jmc, 2006-01-03, 1 file, -6/+1]
  from thordur i. bjornsson
* ansi/deregister. [jsg, 2005-11-28, 1 file, -29/+9]
  'go for it' deraadt@