It currently creates a lock ordering problem because SCHED_LOCK() is taken
by hardclock(). That means the "priorities" of a thread should be moved
out of the SCHED_LOCK() first in order to make progress.
Reported-by: syzbot+8e4863b3dde88eb706dc@syzkaller.appspotmail.com
via anton@ as well as by kettenis@
|
Note that hardclock(9) still increments p_{u,s,i}ticks without holding a
lock.
ok visa@, cheloha@
|
tick boundary of schedlock().
This reduces the contention on the SCHED_LOCK() when the current thread
is already spinning.
Prompted by deraadt@, ok visa@
|
objects that readers can access without locking. This provides a basis
for read-copy-update operations.
Readers access SMR-protected shared objects inside SMR read-side
critical sections, where sleeping is not allowed. To reclaim
an SMR-protected object, the writer has to ensure mutual exclusion of
other writers, remove the object's shared reference and wait until
read-side references cannot exist any longer. As an alternative to
waiting, the writer can schedule a callback that gets invoked when
reclamation is safe.
The mechanism relies on CPU quiescent states to determine when an
SMR-protected object is ready for reclamation.
The <sys/smr.h> header additionally provides an implementation of
singly- and doubly-linked lists that can be used together with SMR.
These lists allow lockless read access with a concurrent writer.
Discussed with many
OK mpi@ sashan@
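
To make the pattern concrete, here is a minimal sketch of how the
mechanism described above might be used. The struct foo, foo_list and
foo_lock names are invented for illustration; the smr_* functions and
SMR_SLIST_* macros are the interfaces <sys/smr.h> is said to provide.

    #include <sys/smr.h>

    struct foo {
            SMR_SLIST_ENTRY(foo) f_next;
            struct smr_entry f_smr;
            int f_value;
    };

    SMR_SLIST_HEAD(, foo) foo_list;     /* hypothetical shared list */
    struct mutex foo_lock;              /* serializes the writers */

    /* Reader: lockless, but must not sleep inside the section. */
    int
    foo_contains(int value)
    {
            struct foo *f;
            int found = 0;

            smr_read_enter();
            SMR_SLIST_FOREACH(f, &foo_list, f_next) {
                    if (f->f_value == value) {
                            found = 1;
                            break;
                    }
            }
            smr_read_leave();
            return (found);
    }

    /* Writer: exclude other writers, unlink, wait for readers. */
    void
    foo_remove(struct foo *f)
    {
            mtx_enter(&foo_lock);
            SMR_SLIST_REMOVE_LOCKED(&foo_list, f, foo, f_next);
            mtx_leave(&foo_lock);

            smr_barrier();  /* read-side references can no longer exist */
            free(f, M_TEMP, sizeof(*f));
            /* Alternative to waiting: smr_init(&f->f_smr) and
             * smr_call(&f->f_smr, foo_free_cb, f) to defer the free. */
    }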
|
Idle threads are never placed on the runqueue so their priority doesn't
matter.
This fixes an accounting bug where top(1) would report a high CPU usage
for Idle threads of secondary CPUs right after booting. That's because
schedcpu() would give 100% CPU time to the Idle thread until "real"
threads get scheduled on the corresponding CPU.
Issue reported by bluhm@, ok visa@, kettenis@
|
ptsignal() has to be called with the kernel lock held. As ensuring the
locking in mi_switch() is not easy, and deferring the signaling using
the task API is not possible because of lock order issues in
mi_switch(), move the CPU time checking into a periodic timer where
the kernel can be locked without issues.
With this change, each process has a dedicated resource check timer.
The timer gets activated only when a CPU time limit is set. Because the
checking is not done as frequently as before, some precision is lost.
Use of timers adapted from FreeBSD.
OK tedu@
Reported-by: syzbot+2f5d62256e3280634623@syzkaller.appspotmail.com
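
In sketch form, the check now lives in a timeout(9) handler per process.
The rucheck name, the interval and the field names below are
illustrative assumptions, not copied from the commit:

    #define RUCHECK_INTERVAL 3      /* seconds; coarser than before */

    /* Runs in timeout context, where taking the kernel lock is safe. */
    void
    rucheck(void *arg)
    {
            struct process *pr = arg;

            if (pr->ps_tu.tu_runtime.tv_sec >=
                pr->ps_limit->pl_rlimit[RLIMIT_CPU].rlim_cur) {
                    KERNEL_LOCK();
                    prsignal(pr, SIGXCPU);  /* kernel lock is held */
                    KERNEL_UNLOCK();
            }
            timeout_add_sec(&pr->ps_rucheck_to, RUCHECK_INTERVAL);
    }

The timer is armed only once a CPU time limit is actually set, e.g.
timeout_set(&pr->ps_rucheck_to, rucheck, pr) followed by a first
timeout_add_sec(); processes without a limit pay nothing.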
|
ok visa@
|
The distinction between preempt() and yield() stays, as it is useful
to know if a thread decided to yield by itself or if the kernel told
it to go away.
ok tedu@, guenther@
|
Calling sched_choosecpu() at this moment often results in moving the thread
to a different CPU. This does not help the scheduler and creates a domino
effect, resulting in kernel threads moving to other CPUs.
Tested by many without performance impact. Simon Mages measured a small
performance improvement and a smaller variance with an http proxy.
Discussed with kettenis@, ok martijn@, beck@, visa@
|
Recursions are currently known and marked as XXXSMP.
Please report any assert to bugs@
|
have any direct symbols used. Tested for indirect use by compiling
amd64/i386/sparc64 kernels.
ok tedu@ deraadt@
|
ok tedu@ deraadt@
|
situations where e.g. web browsing is cpu intensive but intermittently idle.
subject to further refinement and tuning.
|
ok doug tedu
|
MD code needs excess #ifndef SMALL_KERNEL
|
introduce a new sysctl, hw.perfpolicy, that governs the policy.
when set to anything other than manual, hw.setperf then becomes read only.
phessler was heading in this direction, but this is slightly different. :)
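
From userland the policy can then be flipped with sysctl(3); a small
sketch, assuming the HW_PERFPOLICY mib and the string values "manual",
"auto" and "high":

    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <string.h>

    /* Set hw.perfpolicy; hw.setperf then becomes read only. */
    int
    set_perfpolicy(const char *policy)
    {
            int mib[2] = { CTL_HW, HW_PERFPOLICY };

            return (sysctl(mib, 2, NULL, NULL, (void *)policy,
                strlen(policy) + 1));
    }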
|
PS_{ZOMBIE,EMBRYO} on the process instead of peeking into the process's
thread data. This eliminates the need for the thread-level SDEAD state.
Change kvm_getprocs() (both the sysctl() and kvm backends) to report the
"most active" scheduler state for the process's threads.
tweaks kettenis@
feedback and ok matthew@
|
to the process's vmspace and filedescs. struct proc continues to
keep copies of the pointers, copying them on fork, clearing them
on exit, and (for vmspace) refreshing on exec.
Also, make uvm_swapout_threads() thread-aware, eliminating p_swtime
in kernel.
particular testing by ajacoutot@ and sebastia@
|
ok matthew@ deraadt@
|
ok deraadt@
|
.h files to pull it in, if needed
ok tedu
|
ok blambert@ krw@ tedu@ miod@
|
of per-rthread. Handling of per-thread tick and runtime counters
inspired by how FreeBSD does it.
ok kettenis@
|
- move the P_TRACED and P_INEXEC flags, and p_oppid, p_ptmask, and
p_ptstat members from struct proc to struct process
- sort the PT_* requests into those that take a PID vs those that
can also take a TID
- stub in PT_GET_THREAD_FIRST and PT_GET_THREAD_NEXT
ok kettenis@
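
The iteration these stubs are meant to enable would look roughly like
this from a debugger. This is a sketch only: the ptrace_thread_state
structure, its pts_tid field and the end-of-list convention are
assumptions based on the description above.

    #include <sys/types.h>
    #include <sys/ptrace.h>
    #include <stdio.h>

    /* Walk the threads of an already-traced, stopped process. */
    void
    list_threads(pid_t pid)
    {
            struct ptrace_thread_state pts;

            if (ptrace(PT_GET_THREAD_FIRST, pid, (caddr_t)&pts,
                sizeof(pts)) == -1)
                    return;
            while (pts.pts_tid != -1) {
                    printf("tid %d\n", (int)pts.pts_tid);
                    if (ptrace(PT_GET_THREAD_NEXT, pid, (caddr_t)&pts,
                        sizeof(pts)) == -1)
                            break;  /* assumed: error or -1 tid ends walk */
            }
    }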
|
declared in .h files, not in each .c. Apply that rule to endtsleep(),
scheduler_start(), updatepri(), and realitexpire()
ok deraadt@ tedu@
|
biglock in mi_switch and just check if we're holding the biglock.
The idea is that the first entry point into the kernel uses KERNEL_PROC_LOCK
and recursive calls use KERNEL_LOCK. This assumption is violated in at
least one place and has been causing confusion for lots of people.
Initial bug report and analysis from Pedro.
kettenis@ beck@ oga@ thib@ dlg@ ok
|
into struct process.
ok tedu@ deraadt@
|
rwlock misuse. In particular, this commit makes the following
changes:
1. i386 and amd64 now count the number of active mutexes so that
assertwaitok(9) can detect attempts to sleep while holding a mutex.
2. i386 and amd64 check that we actually hold the mutex passed to
mtx_leave().
3. Calls to rw_exit*() now call rw_assert_{rd,wr}lock() as
appropriate.
ok krw@, oga@; "sounds good to me" deraadt@; assembly bits double
checked by pirofti@
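
A sketch of what changes 1 and 2 amount to; the per-cpu counter name
is illustrative:

    /* Change 1: count mutexes held by this cpu. */
    void
    mtx_enter(struct mutex *mtx)
    {
            /* ... spin until the mutex is acquired ... */
            curcpu()->ci_mutex_level++;
    }

    /* Change 2: leaving a mutex we do not hold is now fatal. */
    void
    mtx_leave(struct mutex *mtx)
    {
            MUTEX_ASSERT_LOCKED(mtx);
            curcpu()->ci_mutex_level--;
            /* ... release the mutex ... */
    }

    /* ... so assertwaitok(9) can catch sleeping with a mutex held. */
    void
    assertwaitok(void)
    {
            if (curcpu()->ci_mutex_level != 0)
                    panic("assertwaitok: mutex count %d",
                        curcpu()->ci_mutex_level);
    }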
|
- Split up choosing of cpu between fork and "normal" cases. Fork is
very different and should be treated as such.
- Instead of implicitly choosing a cpu in setrunqueue, do it outside
where it actually makes sense.
- Just because a cpu is marked as idle doesn't mean it will be soon.
There could be a thundering herd effect if we call wakeup from an
interrupt handler, so subtract cpus with queued processes when
deciding which cpu is actually idle.
- Some simplifications allowed by the above.
kettenis@ ok (except one bugfix that was not in the initial diff)
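
The "subtract cpus with queued processes" step might look like this,
reusing the cpuset operations introduced by the run queue split below
(sched_idle_cpus and sched_queued_cpus are assumed set names):

    struct cpuset set;
    struct cpu_info *ci;

    /* Idle cpus minus those that already have work queued; a cpu
     * with queued processes will not stay idle for long. */
    cpuset_complement(&set, &sched_queued_cpus, &sched_idle_cpus);
    ci = cpuset_first(&set);
    if (ci == NULL)
            ci = curcpu();  /* nothing truly idle: stay put */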
|
- Split up run queues so that every cpu has one.
- Make setrunqueue choose the cpu where we want to make this process
runnable (this should be refined and less brutal in the future).
- When choosing the cpu where we want to run, make some kind of educated
guess where it will be best to run (very naive right now).
Other:
- Set operations for sets of cpus.
- Load average calculations per cpu.
- sched_is_idle() -> curcpu_is_idle()
tested, debugged and prodded by many@
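
In rough outline (simplified, field names approximate), every cpu's
scheduler state now carries its own queues, and setrunqueue files the
process on the chosen cpu:

    struct schedstate_percpu {
            TAILQ_HEAD(prochead, proc) spc_qs[SCHED_NQS];
            volatile uint32_t spc_whichqs;  /* bitmask: non-empty queues */
            u_int spc_nrun;                 /* procs on the run queues */
    };

    void
    setrunqueue(struct proc *p)
    {
            struct schedstate_percpu *spc;
            int queue = p->p_priority >> 2;

            /* The educated (for now naive) guess mentioned above. */
            p->p_cpu = sched_choosecpu(p);
            spc = &p->p_cpu->ci_schedstate;

            TAILQ_INSERT_TAIL(&spc->spc_qs[queue], p, p_runq);
            spc->spc_whichqs |= 1U << queue;
            spc->spc_nrun++;
    }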
|
- setrunnable should never be run on SIDL processes. That's a bug and will
cause all kinds of trouble. Change the switch statement to panic
if that happens.
- p->p_stat == SRUN implies that p != curproc since curproc will always be
SONPROC. This is a leftover from before SONPROC.
deraadt@ "commit"
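
Concretely, the hardened switch in setrunnable() looks roughly like
this (sketch):

    void
    setrunnable(struct proc *p)
    {
            SCHED_ASSERT_LOCKED();

            switch (p->p_stat) {
            case 0:
            case SIDL:      /* never legal: still being set up by fork */
            case SRUN:      /* implies p != curproc; curproc is SONPROC */
            case SONPROC:
            case SZOMB:
            case SDEAD:
            default:
                    panic("setrunnable");
            case SSTOP:
            case SSLEEP:
                    break;
            }
            /* ... put p on the run queue ... */
    }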
|
Really just the low-hanging fruit of (hopefully) forthcoming timeout
conversions.
ok art@, krw@
|
Previously, when mi_switch picked up the same proc, we didn't clear the
flag, which meant that every time we serviced an AST we would attempt
a context switch. For some architectures, amd64 probably being the
most extreme, that meant attempting to context switch for every
trap and interrupt.
Now we call clear_resched() explicitly after every context switch, even
if it didn't do anything, which also allows us to remove some more code
in cpu_switchto (not done yet).
miod@ ok
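
The fix, in sketch form (function names approximate; the rest of
mi_switch() is elided):

    void
    mi_switch(void)
    {
            struct proc *p = curproc;
            struct proc *nextproc;

            /* ... accounting, run queue handling ... */
            nextproc = sched_chooseproc();
            if (p != nextproc)
                    cpu_switchto(p, nextproc);

            /* Clear the hint even if we kept running the same proc,
             * or every later AST would try to switch again. */
            clear_resched(curcpu());
    }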
|
instead of handrolling...
ok miod@
|
code. At this moment all architectures get a copy of the old code
except i386 which gets a new shiny implementation that doesn't spin
at splhigh (doh!) and doesn't try to grab the biglock when releasing
the biglock (double doh!).
Shaves 10% of system time during kernel compile and might solve a few
bugs as a bonus.
Other architectures coming shortly.
miod@ deraadt@ ok
|
- Move the functionality of choosing a process from cpu_switch into
a much simpler function: cpu_switchto. Instead of having the locore
code walk the run queues, let the MI code choose the process we
want to run and only implement the context switching itself in MD
code.
- Let MD context switching run without worrying about spls or locks.
- Instead of having the idle loop implemented with special contexts
in MD code, implement one idle proc for each cpu. Make the idle
loop MI with MD hooks.
- Change the proc lists from the old style vax queues to TAILQs.
- Change the sleep queue from vax queues to TAILQs. This makes
wakeup() go from O(n^2) to O(n).
There will be some MD fallout, but it will be fixed shortly.
There are also a few cleanups to be done after this.
deraadt@, kettenis@ ok
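
The resulting MI/MD contract, summarized as prototypes (a sketch; the
idle hook names are assumptions based on the description above):

    /* MD: switch context only; run queue walking moved to MI code. */
    void cpu_switchto(struct proc *old, struct proc *new);

    /* MI: pick what runs next; one idle proc per cpu as the fallback. */
    struct proc *sched_chooseproc(void);

    /* MD hooks driven by the MI idle loop. */
    void cpu_idle_enter(void);
    void cpu_idle_cycle(void);  /* e.g. halt until the next interrupt */
    void cpu_idle_leave(void);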
|
kettenis@ ok
|
have cpu_info now, so kill the option.
eyeballed by jsg@ and grange@
|
the problem with lost signals in MP kernels.
miod@, kettenis@ ok
|
should never be referenced outside the context of the process to which
this stack belongs unless we do the PHOLD/PRELE dance. Loads of code
doesn't follow the rules here. Instead of trying to track down all
offenders and fix this hairy situation, it makes much more sense
to not swap kernel stacks.
From art@, tested by many some time ago.
|
i got carried away and deleted a whole bunch of useless casts.
this is C, not C++. ok md5
|
(but I tend to call it ssh localhost & now when telnetd is
history). This is a more localized patch, but it leaves us with
a recursive lock for protecting scheduling and signal state.
Better care is taken to actually be symmetric over mi_switch.
Also, the dolock cruft in psignal can go with this solution.
Better test runs by more people over a longer time have been
carried out compared to the c2k5 patch.
Long term the current mess with interruptible sleep, the
default action on stop signals and wakeup interactions need
to be revisited. ok deraadt@, art@