path: root/sys/kern/sched_bsd.c
Commit history, most recent first. Each entry lists the commit message, followed by the author, date, files changed, and lines removed/added. Some commit messages are truncated in the log view.

* Use sysctl_int_bounded for sysctl_hwsetperf (gnezdo, 2020-12-10, 1 file, -13/+7)
* Stop asserting that the NET_LOCK() shouldn't be held in yield(). (mpi, 2020-10-15, 1 file, -3/+1)
* In automatic performance mode on systems with offline CPUs because of SMT (solene, 2020-05-30, 1 file, -1/+3)
* Split `p_priority' into `p_runpri' and `p_slppri'. (mpi, 2020-01-30, 1 file, -12/+11)
* Import dt(4) a driver and framework for Dynamic Profiling. (mpi, 2020-01-21, 1 file, -1/+6)
* Replace p_xstat with ps_xexit and ps_xsig (guenther, 2019-12-11, 1 file, -6/+7)
* Restore the old way of dispatching dead procs through idle proc. (visa, 2019-11-04, 1 file, -2/+1)
* Move dead procs to the reaper queue immediately after context switch. (visa, 2019-11-02, 1 file, -1/+2)
* Kill resched_proc() and instead call need_resched() when a thread is (mpi, 2019-11-01, 1 file, -27/+1)
* Reduce the number of places where `p_priority' and `p_stat' are set. (mpi, 2019-10-15, 1 file, -12/+5)
* Stop calling resched_proc() after changing the nice(3) value of a process. (mpi, 2019-07-15, 1 file, -4/+5)
* Untangle code setting the scheduling priority of a thread. (mpi, 2019-07-08, 1 file, -33/+35)
* Revert to using the SCHED_LOCK() to protect time accounting. (mpi, 2019-06-01, 1 file, -2/+4)
* Use a per-process mutex to protect time accounting instead of SCHED_LOCK(). (mpi, 2019-05-31, 1 file, -4/+2)
* Do not account spinning time as running time when a thread crosses a (mpi, 2019-05-25, 1 file, -2/+2)
* Introduce safe memory reclamation, a mechanism for reclaiming shared (visa, 2019-02-26, 1 file, -1/+4)
* Stop accounting/updating priorities for Idle threads. (mpi, 2019-01-28, 1 file, -1/+13)
* Fix unsafe use of ptsignal() in mi_switch(). (visa, 2019-01-06, 1 file, -19/+1)
* Use _kernel_lock_held() instead of __mp_lock_held(&kernel_lock). (mpi, 2017-12-04, 1 file, -2/+2)
* Convert most of the manual checks for CPU hogging to sched_pause(). (mpi, 2017-02-14, 1 file, -8/+2)
* Do not select a CPU to execute the current thread when being preempt()ed. (mpi, 2017-02-09, 1 file, -2/+1)
* Enable the NET_LOCK(), take 2. (mpi, 2017-01-25, 1 file, -1/+3)
* Correct some comments and definitions, from Michal Mazurek. (mpi, 2016-03-09, 1 file, -11/+7)
* keep all the setperf timeout(9) handling in one place; ok tedu@ (naddy, 2015-11-08, 1 file, -2/+2)
* Remove some includes include-what-you-use claims don't (jsg, 2015-03-14, 1 file, -2/+1)
* yet more mallocarray() changes. (doug, 2014-12-13, 1 file, -3/+3)
* take a few more ticks to actually throttle down. hopefully helps in (tedu, 2014-11-12, 1 file, -2/+5)
* pass size argument to free() (deraadt, 2014-11-03, 1 file, -2/+3)
* cpu_setperf and perflevel must remain exposed, otherwise a bunch of (deraadt, 2014-10-17, 1 file, -7/+7)
* redo the performance throttling in the kernel. (tedu, 2014-10-17, 1 file, -1/+151)
* Track whether a process is a zombie or not yet fully built via flags (guenther, 2014-07-04, 1 file, -2/+1)
* Move from struct proc to process the reference-count-holding pointers (guenther, 2014-05-15, 1 file, -2/+1)
* Convert some internal APIs to use timespecs instead of timevals (guenther, 2013-06-03, 1 file, -10/+10)
* Use long long and %lld for printing tv_sec values (guenther, 2013-06-02, 1 file, -3/+4)
* do not include machine/cpu.h from a .c file; it is the responsibility of (deraadt, 2013-03-28, 1 file, -2/+1)
* Tedu old comment concerning cpu affinity which does not apply anymore. (haesbaert, 2012-07-09, 1 file, -11/+2)
* Make rusage totals, itimers, and profile settings per-process instead (guenther, 2012-03-23, 1 file, -6/+12)
* First steps for making ptrace work with rthreads: (guenther, 2012-02-20, 1 file, -2/+2)
* Functions used in files other than where they are defined should be (guenther, 2011-07-07, 1 file, -6/+1)
* Stop using the P_BIGLOCK flag to figure out when we should release the (art, 2011-07-06, 1 file, -3/+5)
* The scheduling 'nice' value is per-process, not per-thread, so move it (guenther, 2011-03-07, 1 file, -2/+3)
* Add stricter asserts to DIAGNOSTIC kernels to help catch mutex and (matthew, 2010-09-24, 1 file, -1/+2)
* This comment is unnecessarily confusing. (art, 2010-06-30, 1 file, -2/+2)
* Use atomic operations to access the per-cpu scheduler flags. (kettenis, 2010-01-03, 1 file, -7/+6)
* Some tweaks to the cpu affinity code. (art, 2009-04-14, 1 file, -1/+3)
* Processor affinity for processes. (art, 2009-03-23, 1 file, -4/+6)
* Some paranoia and deconfusion. (art, 2008-11-06, 1 file, -5/+3)
* Convert timeout_add() calls using multiples of hz to timeout_add_sec() (blambert, 2008-09-10, 1 file, -2/+2)
* Add a macro that clears the want_resched flag that need_resched sets. (art, 2008-07-18, 1 file, -1/+3)
* kill 2 bogus ARGUSED and use the LIST_FOREACH() macro (thib, 2008-05-22, 1 file, -4/+2)