path: root/sys/kern/sched_bsd.c
Commit log, newest first; each entry shows the author, date, files changed, and lines removed/added.
...
* Move the implementation of __mp_lock (biglock) into machine dependent code. (art, 2007-11-26, 1 file, -2/+4)
  At this moment all architectures get a copy of the old code, except i386, which gets a shiny new
  implementation that doesn't spin at splhigh (doh!) and doesn't try to grab the biglock when releasing
  the biglock (double doh!). Shaves 10% of system time during a kernel compile and might solve a few
  bugs as a bonus. Other architectures coming shortly.
  miod@ deraadt@ ok
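
The "doesn't spin at splhigh" point is easy to illustrate with a toy lock. This is only a sketch of
the idea, not the real __mp_lock: the lock structure and function names are invented, and it assumes
the usual splhigh()/splx() interface plus an atomic_swap_uint()-style swap primitive.

#include <sys/param.h>
/* assumes splhigh()/splx() and atomic_swap_uint() are available from the MD headers */

/* Toy lock for illustration only; this is not struct __mp_lock. */
struct toylock {
	volatile unsigned int locked;
};

/* Old behaviour: raise the ipl first and then spin, so interrupts stay
 * blocked for the whole wait, however long the lock is contended. */
int
toylock_enter_old(struct toylock *l)
{
	int s = splhigh();

	while (atomic_swap_uint(&l->locked, 1) != 0)
		continue;		/* spinning at splhigh */
	return s;			/* caller passes s to splx() on release */
}

/* New behaviour (what the commit describes for i386): hold splhigh only
 * for the brief acquisition attempt and wait with interrupts enabled. */
int
toylock_enter_new(struct toylock *l)
{
	int s;

	for (;;) {
		s = splhigh();
		if (atomic_swap_uint(&l->locked, 1) == 0)
			return s;	/* won the lock; stay at splhigh */
		splx(s);		/* lost the race: drop the ipl... */
		while (l->locked)
			continue;	/* ...and spin with interrupts on */
	}
}
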
* sched_lock_idle and sched_unlock_idle are obsolete now. (art, 2007-10-11, 1 file, -15/+1)
* Make context switching much more MI: (art, 2007-10-10, 1 file, -39/+29)
  - Move the functionality of choosing a process from cpu_switch into a much simpler function:
    cpu_switchto. Instead of having the locore code walk the run queues, let the MI code choose the
    process we want to run and only implement the context switching itself in MD code.
  - Let MD context switching run without worrying about spls or locks.
  - Instead of having the idle loop implemented with special contexts in MD code, implement one idle
    proc for each cpu. Make the idle loop MI with MD hooks.
  - Change the proc lists from the old style vax queues to TAILQs.
  - Change the sleep queue from vax queues to TAILQs. This makes wakeup() go from O(n^2) to O(n).
  There will be some MD fallout, but it will be fixed shortly. There are also a few cleanups to be
  done after this.
  deraadt@, kettenis@ ok
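
A minimal sketch of the split this commit describes, with invented names (mi_pick_and_switch, the
runqueue head, cpu_idle as a placeholder MD hook): the MI side keeps runnable procs on a TAILQ and
picks the next one, and the MD side only has to implement cpu_switchto() for the actual
register/stack switch.

#include <sys/queue.h>

struct proc {
	TAILQ_ENTRY(proc) p_runq;	/* run queue linkage */
	/* ... */
};
TAILQ_HEAD(prochead, proc);

struct prochead runqueue = TAILQ_HEAD_INITIALIZER(runqueue);

void	cpu_switchto(struct proc *, struct proc *);	/* MD: switch registers/stack */
void	cpu_idle(void);					/* MD hook: wait for work */

/* MI replacement for the old locore run-queue walk (name made up). */
void
mi_pick_and_switch(struct proc *curp)
{
	struct proc *next;

	/* The per-cpu idle proc spins here until something is runnable. */
	while ((next = TAILQ_FIRST(&runqueue)) == NULL)
		cpu_idle();
	TAILQ_REMOVE(&runqueue, next, p_runq);
	cpu_switchto(curp, next);	/* MD code does only the switch itself */
}
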
* Widen the SCHED_LOCK in two cases to protect p_estcpu and p_priority. (art, 2007-05-18, 1 file, -6/+4)
  kettenis@ ok
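
A hedged sketch of the pattern being widened, assuming the era's SCHED_LOCK()/SCHED_UNLOCK() macros
that take an spl variable; the helper function and the particular field updates are invented for the
example.

#include <sys/param.h>
#include <sys/proc.h>
#include <sys/sched.h>

/* Invented helper: update per-proc scheduler state under SCHED_LOCK. */
static void
example_update_sched_state(struct proc *p)
{
	int s;

	SCHED_LOCK(s);			/* raises the ipl and takes the scheduler lock */
	p->p_estcpu++;			/* scheduler state, now under the lock */
	p->p_priority = p->p_usrpri;	/* likewise covered by SCHED_LOCK */
	SCHED_UNLOCK(s);
}
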
* The world of __HAVEs and __HAVE_NOTs is reducing. (art, 2007-05-16, 1 file, -78/+1)
  All architectures have cpu_info now, so kill the option.
  eyeballed by jsg@ and grange@
* Use atomic.h operations for manipulating p_siglist in struct proc. (art, 2007-02-06, 1 file, -2/+2)
  Solves the problem with lost signals in MP kernels.
  miod@, kettenis@ ok
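
A sketch of what the atomic.h operations buy here, assuming the atomic_setbits_int()/
atomic_clearbits_int() helpers; the wrapper functions are invented, and mask would typically be
built with sigmask(signum). A plain p->p_siglist |= mask is a read-modify-write that can drop a bit
when two cpus deliver signals concurrently, whereas the atomic helpers cannot.

#include <sys/param.h>
#include <sys/proc.h>
#include <machine/atomic.h>

/* Mark the signals in mask as pending for p without racing other cpus. */
static void
post_signal_bits(struct proc *p, int mask)
{
	atomic_setbits_int(&p->p_siglist, mask);
}

/* Take pending signals back off the list, again atomically. */
static void
clear_signal_bits(struct proc *p, int mask)
{
	atomic_clearbits_int(&p->p_siglist, mask);
}
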
* Kernel stack can be swapped. (miod, 2006-11-29, 1 file, -8/+3)
  This means that stuff that's on the stack should never be referenced outside the context of the
  process to which this stack belongs, unless we do the PHOLD/PRELE dance. Loads of code doesn't
  follow the rules here. Instead of trying to track down all offenders and fix this hairy situation,
  it makes much more sense to not swap kernel stacks.
  From art@, tested by many some time ago.
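
For reference, the "PHOLD/PRELE dance" looks roughly like this under the old rules; the accessor in
the middle is hypothetical, and the point is only that the hold keeps the target's kernel stack
resident while you look at it.

#include <sys/param.h>
#include <sys/proc.h>

/* Hypothetical example: examine state kept on another proc's kernel stack. */
static void
peek_at_sleeping_proc(struct proc *p)
{
	PHOLD(p);	/* pin p (and its kernel stack) in memory */
	/* ... safe to follow pointers into p's kernel stack here ... */
	PRELE(p);	/* drop the hold; p may be swapped out again */
}
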
* typos; from bret lambert (jmc, 2006-11-15, 1 file, -2/+2)
* tbert sent me a diff to change some 0 to NULL (tedu, 2006-10-21, 1 file, -6/+6)
  i got carried away and deleted a whole bunch of useless casts. this is C, not C++.
  ok md5
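
An illustrative before/after in the same spirit (not lines from the actual diff): in C, 0 used as a
pointer reads better as NULL, and the cast on a void * return value such as the kernel malloc()'s is
noise. The buffer and sizes are made up.

#include <sys/param.h>
#include <sys/malloc.h>

static void
example(void)
{
	/* before: 0 as a null pointer, plus a cast that C does not need */
	char *old = (char *)0;
	old = (char *)malloc(128, M_TEMP, M_WAITOK);

	/* after: NULL for null pointers; malloc()'s void * converts implicitly */
	char *buf = NULL;
	buf = malloc(128, M_TEMP, M_WAITOK);
}
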
* bret lambert sent a patch removing register. i made it ansi. (tedu, 2006-10-09, 1 file, -21/+15)
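
Roughly what such a conversion looks like, on an invented function rather than anything from the
actual patch: the obsolete "register" hints go away and the old K&R parameter list becomes an ANSI
prototype-style definition.

/* before: K&R definition with register variables */
int
sum3_old(a, b, c)
	register int a, b, c;
{
	register int t = a + b;

	return (t + c);
}

/* after: ANSI definition, no register */
int
sum3(int a, int b, int c)
{
	int t = a + b;

	return (t + c);
}
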
* A second approach at fixing the telnet localhost & problem (niklas, 2005-06-17, 1 file, -14/+7)
  (but I tend to call it ssh localhost & now that telnetd is history). This is a more localized patch,
  but it leaves us with a recursive lock for protecting scheduling and signal state. Better care is
  taken to actually be symmetric over mi_switch. Also, the dolock cruft in psignal can go with this
  solution. Better test runs, by more people and for a longer time, have been carried out compared to
  the c2k5 patch. Long term, the current mess with interruptible sleep, the default action on stop
  signals and wakeup interactions needs to be revisited.
  ok deraadt@, art@
* sched work by niklas and art backed out; causes panics (deraadt, 2005-05-29, 1 file, -39/+42)
* Fix yield() to change p_stat from SONPROC to SRUN. (art, 2005-05-26, 1 file, -1/+2)
  yield() is not used anywhere yet, that's why we didn't notice this. Noticed by tedu@, who just
  started using it.
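
A simplified sketch of the corrected yield(), using the scheduler interfaces of that era
(SCHED_LOCK/SCHED_UNLOCK, setrunqueue(), mi_switch()); illustrative, not the verbatim commit. The
key line is the state transition: a voluntarily yielding process is still runnable, so it must be
marked SRUN before it goes back on the run queue.

#include <sys/param.h>
#include <sys/proc.h>
#include <sys/sched.h>

void
yield(void)
{
	struct proc *p = curproc;
	int s;

	SCHED_LOCK(s);
	p->p_priority = p->p_usrpri;
	p->p_stat = SRUN;	/* the fix: previously left as SONPROC */
	setrunqueue(p);
	mi_switch();
	SCHED_UNLOCK(s);
}
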
* This patch is mostly art's work and was done *a year* ago. (niklas, 2005-05-25, 1 file, -42/+39)
  Art wants to thank everyone for the prompt review and ok of this work ;-) Yeah, that includes me
  too, or maybe especially me. I am sorry.
  Change the sched_lock to a mutex. This fixes, among other things, the infamous "telnet localhost &"
  problem. The real bug in that case was that the sched_lock, which is by design a non-recursive lock,
  was recursively acquired, and not enough releases made us hold the lock in the idle loop, blocking
  scheduling on the other processors. Some of the other processors would hold the biglock though,
  which made it impossible for cpu 0 to enter the kernel... A nice deadlock.
  Let me just say that debugging this for days, only to realize that it was all fixed in an old diff
  no one ever ok'd, was somewhat of an anti-climax.
  This diff also changes splsched to be correct for all our architectures.
* put the scheduler in its own file. (tedu, 2004-07-29, 1 file, -0/+688)
  reduces clutter, and logically separates the "put this process to sleep" and "find a process to run"
  operations. no functional change.
  ok art@