author    2004-08-04 21:49:18 +0000
committer 2004-08-04 21:49:18 +0000
commit    b1748765addf5ec3fc3b40d531278766ef9086c1
tree      2bfb89b53aefa6ed9a314c60b1a7085575d3af74 /sys/kern/kern_clock.c
parent    use CIRCLEQ_XXX; ok mcbride, miod
hardclock detects if ITIMER_VIRTUAL and ITIMER_PROF have expired and
sends SIGVTALRM and SIGPROF to the process if they have. There is a big
problem with calling psignal from hardclock on MULTIPROCESSOR machines
though. It means we need to protect all signal state in the process
with a lock because hardclock doesn't obtain KERNEL_LOCK. Trying to
track down all the tentacles of this quickly becomes very messy. What
saves us at the moment is that SCHED_LOCK (which is used to protect
parts of the signal state, but not all) happens to be recursive and
forgives small and big errors. That's about to change.
So instead of trying to hunt down all the locking problems here, just
make hardclock not send signals. Instead hardclock schedules a timeout
that will send the signal later. There are many reasons why this works
just as well as the previous code, all explained in a comment written
in big, friendly letters in kern_clock.
miod@ ok; no one else dared to ok this, but no one screamed in agony either.
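
The diff below is limited to kern_clock.c, so the declaration and setup of the
two timeouts are not shown on this page. Assuming they live in struct pstats
under the names p_virt_to and p_prof_to used by the hardclock hunk, the
companion setup would look roughly like the following sketch; the exact file
and placement of the declarations and the timeout_set() calls are assumptions,
not part of this diff:

/*
 * Sketch only: hypothetical companion declarations and setup, not part
 * of this diff.  Assumes the timeouts live in struct pstats and are
 * bound to their trampolines once per process, before the itimers can
 * first be armed.
 */
struct pstats {
	/* ... existing members ... */
	struct timeout p_virt_to;	/* ITIMER_VIRTUAL -> SIGVTALRM */
	struct timeout p_prof_to;	/* ITIMER_PROF -> SIGPROF */
};

/* once p->p_stats exists for the new process */
timeout_set(&p->p_stats->p_virt_to, virttimer_trampoline, p);
timeout_set(&p->p_stats->p_prof_to, proftimer_trampoline, p);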
Diffstat (limited to 'sys/kern/kern_clock.c')
-rw-r--r--  sys/kern/kern_clock.c | 47
1 file changed, 44 insertions(+), 3 deletions(-)
diff --git a/sys/kern/kern_clock.c b/sys/kern/kern_clock.c
index b7fcfa717c5..524688b58cb 100644
--- a/sys/kern/kern_clock.c
+++ b/sys/kern/kern_clock.c
@@ -1,4 +1,4 @@
-/*	$OpenBSD: kern_clock.c,v 1.48 2004/08/04 16:29:32 art Exp $	*/
+/*	$OpenBSD: kern_clock.c,v 1.49 2004/08/04 21:49:19 art Exp $	*/
 /*	$NetBSD: kern_clock.c,v 1.34 1996/06/09 04:51:03 briggs Exp $	*/
 
 /*-
@@ -171,6 +171,47 @@ initclocks()
 }
 
 /*
+ * hardclock does the accounting needed for ITIMER_PROF and ITIMER_VIRTUAL.
+ * We don't want to send signals with psignal from hardclock because it makes
+ * MULTIPROCESSOR locking very complicated. Instead we use a small trick
+ * to send the signals safely and without blocking too many interrupts
+ * while doing that (signal handling can be heavy).
+ *
+ * hardclock detects that the itimer has expired, and schedules a timeout
+ * to deliver the signal. This works because of the following reasons:
+ *  - The timeout structures can be in struct pstats because the timers
+ *    can only be activated on curproc (never swapped). Swapout can
+ *    only happen from a kernel thread and softclock runs before threads
+ *    are scheduled.
+ *  - The timeout can be scheduled with a 1 tick time because we're
+ *    doing it before the timeout processing in hardclock. So it will
+ *    be scheduled to run as soon as possible.
+ *  - The timeout will be run in softclock which will run before we
+ *    return to userland and process pending signals.
+ *  - If the system is so busy that several VIRTUAL/PROF ticks are
+ *    sent before softclock processing, we'll send only one signal.
+ *    But if we'd send the signal from hardclock only one signal would
+ *    be delivered to the user process. So userland will only see one
+ *    signal anyway.
+ */
+
+void
+virttimer_trampoline(void *v)
+{
+	struct proc *p = v;
+
+	psignal(p, SIGVTALRM);
+}
+
+void
+proftimer_trampoline(void *v)
+{
+	struct proc *p = v;
+
+	psignal(p, SIGPROF);
+}
+
+/*
  * The real-time timer, interrupting hz times per second.
  */
 void
@@ -197,10 +238,10 @@ hardclock(struct clockframe *frame)
 		if (CLKF_USERMODE(frame) &&
 		    timerisset(&pstats->p_timer[ITIMER_VIRTUAL].it_value) &&
 		    itimerdecr(&pstats->p_timer[ITIMER_VIRTUAL], tick) == 0)
-			psignal(p, SIGVTALRM);
+			timeout_add(&pstats->p_virt_to, 1);
 		if (timerisset(&pstats->p_timer[ITIMER_PROF].it_value) &&
 		    itimerdecr(&pstats->p_timer[ITIMER_PROF], tick) == 0)
-			psignal(p, SIGPROF);
+			timeout_add(&pstats->p_prof_to, 1);
 	}
 
 	/*
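
Because signal delivery is now deferred, a timeout scheduled by hardclock can
still be pending when the process goes away. A minimal teardown sketch,
assuming the exit path is responsible for cancelling the per-process timeouts
(that file is not part of the diff shown here):

/*
 * Sketch only: cancel any pending itimer timeouts before the process's
 * pstats go away, so a late softclock run cannot psignal() a process
 * that is being torn down.
 */
timeout_del(&p->p_stats->p_virt_to);
timeout_del(&p->p_stats->p_prof_to);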