| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
| |
No functional change.
ok semarie@
|
|
|
|
|
|
|
|
| |
single_thread_set() is modified to indicate explicitly when waiting until
sibling threads are parked is required. This is obviously not required if
a traced thread is switching away from a CPU after handling a STOP signal.
ok claudio@
|
|
|
|
|
|
| |
Kill SINGLE_PTRACE and use SINGLE_SUSPEND which has almost the same semantics.
This diff did not properly kill SINGLE_PTRACE and broke RAMDISK kernels.
|
|
|
|
|
|
|
|
| |
Ze big lock is currently necessary to ensure that two sibling threads
are not racing against each other when processing signals. However it
is not strictly necessary to unpark sibling threads.
ok claudio@
|
|
|
|
|
|
|
|
| |
single_thread_set() is modified to indicate explicitly when waiting until
sibling threads are parked is required. This is obviously not required if
a traced thread is switching away from a CPU after handling a STOP signal.
ok claudio@
|
|
|
|
|
|
| |
This makes some redundant & racy checks apparent.
ok semarie@
|
|
|
|
|
|
|
| |
Use the SCHED_LOCK() to ensure `ps_thread' isn't being modified by a sibling
when entering tsleep(9) w/o KERNEL_LOCK().
ok visa@
|
|
|
|
|
| |
We did not reach a consensus about using SMR to unlock single_thread_set()
so there's no point in keeping this change.
|
|
|
|
|
|
|
|
|
|
|
|
| |
the SCHED_LOCK().
Putting a thread on a sleep queue is reduced to the following:
sleep_setup();
/* check condition or release lock */
sleep_finish();
Previous version ok cheloha@, jmatthew@, ok claudio@
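As a sketch only (the exact sleep_setup()/sleep_finish() prototypes have
changed over time, so the arguments and the struct sleep_state below are
assumptions, not the authoritative kernel API), the pattern above could
look like this:

void
example_wait(volatile int *flag, struct mutex *mtx)
{
	struct sleep_state sls;

	mtx_enter(mtx);
	while (*flag == 0) {
		/* register on the sleep queue before dropping the lock */
		sleep_setup(&sls, flag, PWAIT, "exwait");
		mtx_leave(mtx);
		/* go to sleep; a later wakeup(flag) ends the sleep */
		sleep_finish(&sls, 1);
		mtx_enter(mtx);
	}
	mtx_leave(mtx);
}

Registering on the sleep queue before releasing the mutex is what keeps a
wakeup() issued between mtx_leave() and sleep_finish() from being lost.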
|
|
|
|
|
|
|
|
|
|
|
|
| |
Rename klist_{insert,remove}() to klist_{insert,remove}_locked().
These functions assume that the caller has locked the klist. The current
state of locking remains intact because the kernel lock is still used
with all klists.
Add new functions klist_insert() and klist_remove() that lock the klist
internally. This allows some code simplification.
OK mpi@
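A minimal sketch of the two flavors, assuming the prototypes implied by the
description above (the list and kn arguments are hypothetical placeholders):

void
example_attach(struct klist *list, struct knote *kn)
{
	/* Self-locking variant: the klist lock is taken internally. */
	klist_insert(list, kn);
}

void
example_attach_locked(struct klist *list, struct knote *kn)
{
	/* Caller already holds the lock, currently the kernel lock. */
	KERNEL_ASSERT_LOCKED();
	klist_insert_locked(list, kn);
}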
|
|
|
|
|
|
|
|
| |
Make it obvious where the thread is blocked. "pause" is ambiguous.
Tweaked by kettenis@.
Probably ok kettenis@.
|
|
|
|
|
|
|
| |
Currently all iterations are done under KERNEL_LOCK() and therefore use
the *_LOCKED() variant.
From and ok claudio@
|
|
|
|
|
|
|
| |
Make sure `ps_single' is set only once by checking then updating it without
releasing the lock.
Analyzed by and ok claudio@
|
|
|
|
| |
Panic reported by dhill@
|
|
|
|
|
|
|
| |
Make sure `ps_single' is set only once by checking then updating it without
releasing the lock.
Analyzed by and ok claudio@
|
|
|
|
|
|
|
| |
Simplify MD code and reduce the amount of recursion into the signal code
which helps when dealing with locks.
ok cheloha@, deraadt@
|
|
|
|
| |
ok claudio@
|
|
|
|
| |
struct sigacts since that is the only thing that is modified by siginit.
|
|
|
|
| |
ok claudio@, pirofti@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Extend the scope of SCHED_LOCK() to better synchronize
single_thread_set(), single_thread_clear() and single_thread_check().
This prevents threads from suspending before single_thread_set() has
finished. If a thread suspended early, ps_singlecount might get
decremented too much, which in turn could make single_thread_wait()
get stuck.
The race could be triggered for example by trying to stop
a multithreaded process with a debugger. When triggered, the race
prevents the debugger from finishing a wait4(2) call on the debuggee.
This kind of gdb hang was reported by Julian Smith on misc@.
Unfortunately, single-thread mode switching still has issues and hangs
are still possible.
OK mpi@
|
|
|
|
| |
ok kettenis@, visa@
|
|
|
|
|
|
|
| |
The list can be accessed from interrupt context if a signal is sent
from an interrupt handler.
OK anton@ cheloha@ mpi@
|
|
|
|
| |
subsystem and ps_klist handling still run under the kernel lock.
|
|
|
|
|
|
| |
for example, with locking assertions.
OK mpi@, anton@
|
|
|
|
|
|
|
| |
single_thread_check() safe to be called without KERNEL_LOCK().
single_thread_wait() needs to use sleep_setup() and sleep_finish()
instead of tsleep() to make sure no wakeup() is lost.
Input kettenis@, with and OK visa@
|
|
|
|
|
|
|
|
|
|
|
|
| |
This ensures that the conditions checked are still in force. The sleep
breaks atomicity, allowing another thread to alter the state.
single_thread_set() should return immediately after sleep when called
from dowait4() because there is no guarantee that the process pr still
exists. When called from single_thread_set(), the process is that of
the calling thread, which prevents process pr from disappearing.
OK anton@, mpi@, claudio@
|
|
|
|
|
|
|
|
|
|
| |
This shows that atomic_* operations should not be necessary to write
to this field, unlike with the process one.
The advantage of using a somewhat-unique prefix for struct members is
moot when multiple definitions use the same prefix :o)
From Amit Kulkarni, ok claudio@
|
|
|
|
|
|
| |
kern_sig.c where they are currently added by the include. While doing
that mark the sigprop array as const.
OK mpi@ anton@ millert@
|
|
|
|
|
|
|
| |
proc0 which is used for kthreads and idle threads. proc0 and all those
other kernel threads don't handle signals so there is no benefit in sharing.
Simplifies the code a fair bit since the refcnt is gone.
OK kettenis@
|
| |
|
|
|
|
|
|
| |
adding more filter properties without cluttering the struct.
OK mpi@, anton@
|
|
|
|
|
|
|
|
| |
interrupt is enough to defer the signal handling. This is a leftover
from the times when not all archs had generic soft interrupts.
It is possible that deferring signal handling to a soft interrupt will
be removed at a later stage.
Input anton@, mpi@ OK kettenis@
|
|
|
|
|
|
| |
process.
ok bluhm@ claudio@ visa@
|
|
|
|
|
|
|
|
|
|
|
| |
The 3 subsystems: signal, poll/select and kqueue can now be addressed
separately.
Note that bpf(4) and audio(4) currently delay the wakeups to a separate
context in order to respect the KERNEL_LOCK() requirement. Sockets (UDP,
TCP) and pipes spin to grab the lock for the same reasons.
ok anton@, visa@
|
|
|
|
| |
asked for more oks; my bad!
|
|
|
|
|
|
|
|
|
|
| |
operating on the process structure and issuing signals. This is similar
to what sigio_setown() already does.
With this in place, the pipe subsystem is no longer required to grab the
kernel lock before calling pgsigio().
ok visa@
|
|
|
|
|
|
|
|
|
|
|
| |
Using different fields to remember in which runqueue or sleepqueue
threads currently are will make it easier to split the SCHED_LOCK().
With this change, the (potentially boosted) sleeping priority is no
longer overwriting the thread priority. This lets us get rid of the
logic required to synchronize `p_priority' with `p_usrpri'.
Tested by many, ok visa@
|
|
|
|
|
|
|
| |
This moves most of the SCHED_LOCK() usage related to protecting the sleepqueue
and its states to kern/kern_sync.c
Name suggestion from jsg@, ok kettenis@, visa@
|
|
|
|
|
|
| |
tsleep(9) to tsleep_nsec(9).
ok bluhm@
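As a rough illustration of the conversion, assuming the tsleep_nsec(9)
interface that takes its timeout in nanoseconds and the SEC_TO_NSEC()
helper (the surrounding identifiers are hypothetical):

int
example_sleep(const volatile void *ident)
{
	/*
	 * Old style, timeout in clock ticks:
	 *	tsleep(ident, PWAIT, "exmpl", hz);
	 * New style, timeout in nanoseconds:
	 */
	return tsleep_nsec(ident, PWAIT, "exmpl", SEC_TO_NSEC(1));
}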
|
|
|
|
|
|
|
|
|
|
|
|
| |
FIOGETOWN/SIOCGPGRP/TIOCGPGRP. Do this by determining the meaning of
the ID parameter inside the sigio code. Also add cases for FIOSETOWN
and FIOGETOWN where there have been TIOCSPGRP and TIOCGPGRP before.
These changes allow removing the ID translation from sys_fcntl() and
sys_ioctl().
Idea from NetBSD
OK mpi@, claudio@
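For illustration only, a userland view of the usual F_SETOWN-style ID
convention handled here (a positive ID names a process, a negative ID
names the process group -ID); this example is not part of the commit and
omits error handling:

#include <sys/ioctl.h>
#include <unistd.h>

void
set_async_owner(int fd)
{
	int owner;

	owner = getpid();		/* deliver SIGIO to this process */
	ioctl(fd, FIOSETOWN, &owner);

	owner = -getpgrp();		/* or to our process group instead */
	ioctl(fd, FIOSETOWN, &owner);

	ioctl(fd, FIOGETOWN, &owner);	/* read back the current setting */
}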
|
|
|
|
| |
OK visa@ anton@
|
|
|
|
|
|
| |
make the structs const so that the data are put in .rodata.
OK mpi@, deraadt@, anton@, bluhm@
|
|
|
|
| |
ok visa@
|
|
|
|
|
|
|
|
|
| |
Convert those to a consolidated status when needed in wait4(), kevent(),
and sysctl()
Pass exit code and signal separately to exit1()
(This also serves as prep for adding waitid(2))
ok mpi@
|
|
|
|
|
|
|
|
| |
sweep tree to correct NDINIT op and flags ahead of time. document
the requirement. This allows KERNELPATH to be used to bypass
unveil for crash dumps with nosuidcoredump=2 or 3
ok visa@ deraadt@ florian@
|
|
|
|
|
| |
with a sleep between. Reorganize the code for a single check.
ok anton beck florian mpi
|
|
|
|
|
| |
it from the pool.
ok bluhm visa
|
|
|
|
|
|
|
| |
This allows enforcing that sleeping priorities will now always be <
PUSER.
ok visa@, ratchov@
|
|
|
|
|
|
|
|
|
|
| |
of resource limit structs has been done between processes. By applying
copy-on-write also between threads, threads can read rlimits in
a nearly lock-free manner.
Inspired by code in DragonFly BSD and FreeBSD.
OK mpi@, agreement from jmatthew@ and anton@
|
|
|
|
|
|
|
|
|
|
| |
does not block the signal. If all threads block the signal, we
delivered it to the main thread. This does not conform to POSIX.
If any thread unblocks the signal, it should be delivered immediately
to this thread.
Mark such signals pending at the process instead of a single thread.
Then any thread can handle it later.
OK kettenis@ guenther@
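A userland illustration of the POSIX behavior described above (hypothetical
example, not part of the commit): one thread blocks SIGUSR1, the main thread
leaves it unblocked, so a process-directed SIGUSR1 must be handled by the
main thread:

#include <pthread.h>
#include <signal.h>
#include <unistd.h>

static void *
blocked_thread(void *arg)
{
	sigset_t set;

	/* This thread blocks SIGUSR1, so it must not receive it. */
	sigemptyset(&set);
	sigaddset(&set, SIGUSR1);
	pthread_sigmask(SIG_BLOCK, &set, NULL);
	for (;;)
		pause();
	return NULL;
}

static void
handler(int sig)
{
	/* Runs in whichever thread has SIGUSR1 unblocked. */
}

int
main(void)
{
	pthread_t t;

	signal(SIGUSR1, handler);
	pthread_create(&t, NULL, blocked_thread, NULL);
	sleep(1);
	/* Process-directed signal: must be delivered to the main thread,
	 * the only thread that leaves SIGUSR1 unblocked. */
	kill(getpid(), SIGUSR1);
	pause();
	return 0;
}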
|