path: root/sys/kern/kern_sched.c
Commit log, most recent first. Each entry ends with (author, date, files changed, -lines/+lines).
* Short-circuit if we're running on the CPU that we want to sync with. Fixes
  suspend on machines with em(4) now that it uses intr_barrier(9). ok krw@
  (kettenis, 2015-09-20, 1 file, -2/+5)
* Introduce sched_barrier(9), an interface that acts as a scheduler barrier in
  the sense that it guarantees that the specified CPU went through the
  scheduler. This also guarantees that interrupt handlers running on that CPU
  will have finished when sched_barrier() returns. ok miod@, guenther@
  (kettenis, 2015-09-13, 1 file, -1/+46)
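  A minimal sketch of how a teardown path might use the barrier described
  above; sched_barrier(struct cpu_info *) follows the commit's description,
  while example_detach() and its surrounding context are assumptions:

      #include <sys/param.h>
      #include <sys/systm.h>
      #include <sys/sched.h>

      /* Hypothetical teardown helper, not code from the commit. */
      void
      example_detach(struct cpu_info *ci)
      {
              /*
               * Once sched_barrier() returns, ci has gone through the
               * scheduler at least once, so any interrupt handler that
               * was running there has finished; freeing the handler's
               * resources is now safe.
               */
              sched_barrier(ci);
      }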
* Remove some includes include-what-you-use claims don't have any direct
  symbols used. Tested for indirect use by compiling amd64/i386/sparc64
  kernels. ok tedu@ deraadt@
  (jsg, 2015-03-14, 1 file, -4/+1)
* Keep under #ifdef MULTIPROCESSOR the code that deals with SPCF_SHOULDHALT
  and SPCF_HALTED; these flags only make sense on secondary CPUs, which are
  unlikely to be present on an SP kernel. ok kettenis@
  (mpi, 2014-09-24, 1 file, -1/+5)
* If we're stopping a secondary cpu, don't let sched_choosecpu() short-circuit
  and return the current CPU, otherwise sched_stop_secondary_cpus() will spin
  forever trying to empty its run queues. Fixes hangs during suspend that many
  people reported over the last couple of days. ok bcook@, guenther@
  (kettenis, 2014-07-26, 1 file, -1/+3)
* Fix sched_stop_secondary_cpus() to properly drain CPUs. TAILQ_FOREACH()
  isn't safe to use in sched_chooseproc() to iterate over the run queues,
  because within the loop body we remove the threads from their run queues
  and reinsert them elsewhere. As a result, we end up only draining the first
  thread of each run queue rather than all of them. ok kettenis
  (matthew, 2014-07-13, 1 file, -2/+2)
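  A minimal sketch of the iteration pattern behind this fix, assuming a
  simplified run queue; drain_runq() and its requeue step are illustrative,
  only the TAILQ_FOREACH()/TAILQ_FOREACH_SAFE() distinction comes from
  <sys/queue.h>:

      #include <sys/queue.h>

      TAILQ_HEAD(prochead, proc);
      struct proc {
              TAILQ_ENTRY(proc) p_runq;
              /* ... */
      };

      void
      drain_runq(struct prochead *rq)
      {
              struct proc *p, *pnext;

              /*
               * Plain TAILQ_FOREACH() would follow p's next pointer after
               * p has been reinserted on another queue, so it stops after
               * the first element. The _SAFE variant latches the next
               * pointer in pnext before the body runs.
               */
              TAILQ_FOREACH_SAFE(p, rq, p_runq, pnext) {
                      TAILQ_REMOVE(rq, p, p_runq);
                      /* ... reinsert p on another CPU's run queue ... */
              }
      }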
* Add PS_SYSTEM, the process-level mirror of the thread-level P_SYSTEM, and
  FORK_SYSTEM as a flag to set them. This eliminates needing to peek into
  other processes' threads in various places. Inspired by NetBSD.
  ok miod@ matthew@
  (guenther, 2014-05-04, 1 file, -7/+2)
* Eliminate the exit sig handling, which was only invokable via the
  Linux-compat clone() syscall when *not* using CLONE_THREAD. pirofti@
  confirms Opera runs in compat without this, so out it goes; one less hair
  to choke on in kern_exit.c. ok tedu@ pirofti@
  (guenther, 2014-02-12, 1 file, -2/+2)
* Prevent idle thread from being stolen on startup. There is a race condition
  which might trigger a case where two cpus try to run the same idle thread.
  The problem arises when one cpu steals the idle proc of another cpu and
  this other cpu ends up running the idle thread via spc->spc_idleproc,
  resulting in two cpus trying to cpu_switchto(idleX).
  On startup, idle procs are scattered around different runqueues, and the
  scheduling decision is:
    1. look at my runqueue.
    2. if empty, look at other dudes' runqueues.
    3. if empty, select idle proc via spc->spc_idleproc.
  The problem is that cpu0's idle0 might be running on cpu1 due to step 1 or
  2 while cpu0 hits step 3, so cpu0 will select idle0 while cpu1 is in fact
  already running it. The solution is to never place idle on a runqueue,
  making it selectable only through spc->spc_idleproc. This race is more
  easily triggered on a HT cpu in virtualized environments, where the guest
  more often than not doesn't have the cpu for itself, so timing gets
  shuffled. ok tedu@ guenther@; go ahead after t2k13 deraadt@
  (haesbaert, 2013-06-06, 1 file, -2/+13)
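  A schematic of the three-step selection order from the commit message, with
  the fix's invariant at step 3; the helper functions are hypothetical, only
  spc_idleproc names a real field:

      #include <sys/sched.h>    /* struct schedstate_percpu, spc_idleproc */

      /* Hypothetical helpers standing in for the real runqueue walks. */
      struct proc *pop_own_runqueue(struct schedstate_percpu *);
      struct proc *steal_other_runqueue(void);

      struct proc *
      choose_next(struct schedstate_percpu *spc)
      {
              struct proc *p;

              if ((p = pop_own_runqueue(spc)) != NULL)        /* step 1 */
                      return (p);
              if ((p = steal_other_runqueue()) != NULL)       /* step 2 */
                      return (p);
              /*
               * Step 3: with idle procs kept off every runqueue, this is
               * the only path that can select one, so no other cpu can
               * have grabbed it in step 1 or 2.
               */
              return (spc->spc_idleproc);
      }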
* Convert some internal APIs to use timespecs instead of timevals.
  ok matthew@ deraadt@
  (guenther, 2013-06-03, 1 file, -5/+5)
* sprinkle ifdef MP to disable cpu migration code when not needed.
  ok deraadt
  (tedu, 2013-04-19, 1 file, -7/+17)
* Make sure that we don't schedule processes on CPUs that we have taken out
  of the scheduler. ok haesbaert@, deraadt@
  (kettenis, 2012-07-10, 1 file, -1/+5)
* Make rusage totals, itimers, and profile settings per-process instead of
  per-rthread. Handling of per-thread tick and runtime counters inspired by
  how FreeBSD does it. ok kettenis@
  (guenther, 2012-03-23, 1 file, -2/+2)
* Account for sched_noidle and document the scheduler variables. ok tedu@
  (haesbaert, 2012-03-10, 1 file, -11/+13)
* Remove all MD diagnostics in cpu_switchto(), and move them to MI code if
  they apply. ok oga@ deraadt@
  (miod, 2011-10-12, 1 file, -1/+4)
* Clean up after P_BIGLOCK removal:
    KERNEL_PROC_LOCK   -> KERNEL_LOCK
    KERNEL_PROC_UNLOCK -> KERNEL_UNLOCK
  oga@ ok
  (art, 2011-07-06, 1 file, -3/+3)
* Delete a fallback definition for CPU_INFO_UNIT that's both unnecessary and
  incorrect. Kills an XXX comment.
  ok syuu, thib, art, kettenis, millert, deraadt
  (guenther, 2010-05-28, 1 file, -8/+1)
* Actively remove processes from the runqueues of a CPU when we stop it.
  Also make sure not to take the scheduler lock once we have stopped a CPU,
  such that we can safely take it away without having to worry about deadlock
  because it happened to own the scheduler lock. Fixes issues with suspend on
  SMP machines. ok mlarkin@, marco@, art@, deraadt@
  (kettenis, 2010-05-25, 1 file, -5/+22)
* Make sure we initialize sched_lock before we try to use it.
  ok miod@, thib@, oga@, jsing@
  (kettenis, 2010-05-14, 1 file, -4/+1)
* Merge the only relevant (for now) parts of simplelock.h into lock.h, since
  it is time to start transitioning away from the no-op behaviour.
  ok oga kettenis
  (deraadt, 2010-04-23, 1 file, -2/+1)
* Implement functions to take away the secondary CPUs from the scheduler and
  give them back again, effectively stopping and starting these CPUs. Use the
  stop function in sys_reboot(). ok marco@, deraadt@
  (kettenis, 2010-04-06, 1 file, -2/+51)
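  A sketch of how the stop/start pair might bracket a single-CPU critical
  phase; sched_stop_secondary_cpus() matches the name in the 2014-07-13
  entry above, while the surrounding flow is schematic:

      #include <sys/sched.h>

      void
      example_suspend_path(void)
      {
              /* Drain the run queues of all but the boot CPU, halt the rest. */
              sched_stop_secondary_cpus();

              /* ... work that must run with only the boot processor ... */

              /* Hand the secondary CPUs back to the scheduler afterwards. */
              sched_start_secondary_cpus();
      }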
* Add code to stop scheduling processes on CPUs, effectively halting that
  CPU. Use this to do a shutdown with only the boot processor running. This
  should avoid nasty races during shutdown. help from art@, ok deraadt@,
  miod@
  (kettenis, 2010-01-09, 1 file, -2/+15)
* Backout previous commit. There is a possible race which makes it possible
  for sys_reboot() to hang forever.
  (kettenis, 2009-11-29, 1 file, -13/+1)
* Add a mechanism to stop the scheduler from scheduling processes on a
  particular CPU, such that it just sits and spins in the idle loop,
  effectively halting that CPU. ok deraadt@, miod@
  (kettenis, 2009-11-25, 1 file, -1/+13)
* Don't drop the big lock at the end of exit1(), but move it into the middle
  of sched_exit(). This means that cpu_exit() and whatever it does (for
  instance calling free()), as well as the deadproc p_hash handling, are now
  locked as well. This may have been one of the causes of the reaper panics,
  especially with rthread patches, which were terminating a lot of threads
  very quickly onto the deadproc p_hash list. ok kurt kettenis miod
  (deraadt, 2009-10-05, 1 file, -4/+3)
* When starting up idle, explicitly set p_cpu and the peg flag for the idle
  proc. p_cpu might be necessary in the future, and pegging is just to be
  extra safe (although we'll be horribly broken if the idle proc ever ends
  up where that flag is checked).
  (art, 2009-04-22, 1 file, -1/+3)
* Make pegging a proc work when there are idle cpus that are looking for
  something to do. Walk the highest priority queue looking for a proc to
  steal and skip those that are pegged. We could consider walking the other
  queues in the future too, but this should do for now.
  kettenis@ guenther@ ok
  (art, 2009-04-20, 1 file, -7/+12)
* Some tweaks to the cpu affinity code:
  - Split up choosing of cpu between fork and "normal" cases. Fork is very
    different and should be treated as such.
  - Instead of implicitly choosing a cpu in setrunqueue, do it outside,
    where it actually makes sense.
  - Just because a cpu is marked as idle doesn't mean it will be soon. There
    could be a thundering herd effect if we call wakeup from an interrupt
    handler, so subtract cpus with queued processes when deciding which cpu
    is actually idle.
  - Some simplifications allowed by the above.
  kettenis@ ok (except one bugfix that was not in the initial diff)
  (art, 2009-04-14, 1 file, -56/+103)
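  A schematic of the "subtract cpus with queued processes" idea from the
  list above; the cpuset type and helper names here are assumptions modeled
  on the "set operations for sets of cpus" mentioned in the 2009-03-23
  entry, not the committed API:

      #include <stdint.h>

      struct cpuset { uint32_t cs_mask; };      /* one bit per cpu (assumed) */

      struct cpuset sched_idle_cpus;            /* cpus in the idle loop */
      struct cpuset sched_queued_cpus;          /* cpus with procs queued */

      /* to = b minus a: clear a's bits out of b */
      void
      cpuset_complement(struct cpuset *to, struct cpuset *a, struct cpuset *b)
      {
              to->cs_mask = b->cs_mask & ~a->cs_mask;
      }

      /*
       * A cpu is only a safe wakeup target when it is idle AND nothing is
       * queued on it yet; otherwise wakeups from an interrupt handler can
       * herd onto the same "idle" cpu before it gets to reschedule.
       */
      void
      find_wakeup_targets(struct cpuset *out)
      {
              cpuset_complement(out, &sched_queued_cpus, &sched_idle_cpus);
      }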
* sched_peg_curproc_to_cpu() - function to force a proc to stay on a cpu
  forever.
  (art, 2009-04-03, 1 file, -1/+27)
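  A minimal sketch of the pegging idea, assuming a P_CPUPEG-style flag that
  the cpu-choosing and stealing paths check; the body below is schematic,
  not the committed function:

      #include <sys/param.h>
      #include <sys/proc.h>
      #include <sys/atomic.h>

      void
      peg_curproc_sketch(struct cpu_info *ci)
      {
              struct proc *p = curproc;

              p->p_cpu = ci;                            /* run on ci ... */
              atomic_setbits_int(&p->p_flag, P_CPUPEG); /* ... and only ci */
              /*
               * If we aren't already on ci, a reschedule is still needed;
               * from here on, sched_choosecpu() and the stealing code must
               * honor the flag and never migrate p.
               */
      }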
* Processor affinity for processes:
  - Split up run queues so that every cpu has one.
  - Make setrunqueue choose the cpu where we want to make this process
    runnable (this should be refined and less brutal in the future).
  - When choosing the cpu where we want to run, make some kind of educated
    guess where it will be best to run (very naive right now).
  Other:
  - Set operations for sets of cpus.
  - Load average calculations per cpu.
  - sched_is_idle() -> curcpu_is_idle()
  tested, debugged and prodded by many@
  (art, 2009-03-23, 1 file, -35/+344)
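  A trimmed sketch of the per-cpu run queue state this change introduces;
  the field names follow OpenBSD's later schedstate_percpu, but the struct
  here is cut down to the queue bookkeeping and is not the committed layout:

      #include <sys/types.h>
      #include <sys/queue.h>

      #define SCHED_NQS 32    /* number of priority queues per cpu (assumed) */

      struct proc;

      struct schedstate_percpu_sketch {
              /* one run queue per priority band, owned by this cpu */
              TAILQ_HEAD(prochead, proc) spc_qs[SCHED_NQS];
              volatile uint32_t spc_whichqs;  /* bit n set: spc_qs[n] non-empty */
              u_int spc_nrun;                 /* procs queued on this cpu */
              struct proc *spc_idleproc;      /* this cpu's idle proc */
      };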
* oops
  (deraadt, 2008-11-06, 1 file, -2/+2)
* panic if cpu_switchto() returns from a dead process
  (deraadt, 2008-11-06, 1 file, -1/+2)
* Bring biomem diff back into the tree after the nfs_bio.c fix went in.
  ok thib beck art
  (deraadt, 2008-06-12, 1 file, -1/+3)
* back out biomem diff since it is not right yet. Doing very large file
  copies to nfsv2 causes the system to eventually peg the console. On the
  console, ^T indicates that the load is increasing rapidly, ddb indicates
  many calls to getbuf, and there is some very slow nfs traffic making no
  (or extremely slow) progress. Eventually some machines seize up entirely.
  (deraadt, 2008-06-11, 1 file, -3/+1)
* Buffer cache revamp:
  1) remove multiple size queues, introduced as a stopgap.
  2) decouple pages containing data from their mappings.
  3) only keep buffers mapped when they actually have to be mapped (right
     now, this is when buffers are B_BUSY).
  4) new functions to make a buffer busy, and release the busy flag
     (buf_acquire and buf_release).
  5) move high/low water marks and statistics counters into a structure.
  6) add a sysctl to retrieve buffer cache statistics.
  Tested in several variants and beat upon by bob and art for a year. Run
  accidentally on henning's nfs server for a few months...
  ok deraadt@, krw@, art@ - who promises to be around to deal with any
  fallout
  (beck, 2008-06-10, 1 file, -1/+3)
* use sched_is_idle() and nuke the sched_chooseproc prototype since we
  already have one in sched.h
  (thib, 2008-06-08, 1 file, -3/+2)
* Move the implementation of __mp_lock (biglock) into machine dependent
  code. At this moment all architectures get the copy of the old code,
  except i386, which gets a new shiny implementation that doesn't spin at
  splhigh (doh!) and doesn't try to grab the biglock when releasing the
  biglock (double doh!). Shaves 10% of system time during kernel compile and
  might solve a few bugs as a bonus. Other architectures coming shortly.
  miod@ deraadt@ ok
  (art, 2007-11-26, 1 file, -2/+2)
* Make context switching much more MI:
  - Move the functionality of choosing a process from cpu_switch into a much
    simpler function: cpu_switchto. Instead of having the locore code walk
    the run queues, let the MI code choose the process we want to run and
    only implement the context switching itself in MD code.
  - Let MD context switching run without worrying about spls or locks.
  - Instead of having the idle loop implemented with special contexts in MD
    code, implement one idle proc for each cpu. Make the idle loop MI with
    MD hooks.
  - Change the proc lists from the old style vax queues to TAILQs.
  - Change the sleep queue from vax queues to TAILQs. This makes wakeup() go
    from O(n^2) to O(n).
  There will be some MD fallout, but it will be fixed shortly. There are
  also a few cleanups to be done after this.
  deraadt@, kettenis@ ok
  (art, 2007-10-10, 1 file, -0/+242)
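  A schematic of the MI/MD split described above: MI code picks the next
  proc and the MD cpu_switchto() only swaps register state. The body here
  is simplified far past the real kern_sched.c; only the
  cpu_switchto(old, new) shape comes from the commit:

      #include <sys/param.h>
      #include <sys/proc.h>
      #include <sys/sched.h>

      void
      mi_switch_sketch(void)
      {
              struct proc *old = curproc;
              struct proc *new = sched_chooseproc();  /* MI: walk run queues */

              if (new != old)
                      cpu_switchto(old, new);  /* MD: save old, resume new */
      }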