Commit messages
| |
of per-rthread. Handling of per-thread tick and runtime counters
inspired by how FreeBSD does it.
ok kettenis@
| |
- move the P_TRACED and P_INEXEC flags, and p_oppid, p_ptmask, and
  p_ptstat members from struct proc to struct process
- sort the PT_* requests into those that take a PID vs those that
can also take a TID
- stub in PT_GET_THREAD_FIRST and PT_GET_THREAD_NEXT
ok kettenis@
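A minimal sketch of how a debugger can walk a traced process's threads with the PT_GET_THREAD_FIRST/PT_GET_THREAD_NEXT requests stubbed in here, assuming the target pid is already being traced and stopped. The loop shape and the pts_tid == -1 end-of-list check reflect how these requests ended up being used later, so treat the details as assumptions rather than guarantees of this commit.

/*
 * List the threads of an already-traced, stopped process.
 */
#include <sys/types.h>
#include <sys/ptrace.h>
#include <err.h>
#include <stdio.h>

static void
list_threads(pid_t pid)
{
	struct ptrace_thread_state pts;

	if (ptrace(PT_GET_THREAD_FIRST, pid, (caddr_t)&pts, sizeof pts) == -1)
		err(1, "PT_GET_THREAD_FIRST");
	while (pts.pts_tid != -1) {		/* assumed end-of-list marker */
		printf("thread %d\n", pts.pts_tid);
		if (ptrace(PT_GET_THREAD_NEXT, pid, (caddr_t)&pts,
		    sizeof pts) == -1)
			err(1, "PT_GET_THREAD_NEXT");
	}
}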
| |
struct process; KTRFAC_ACTIVE becomes P_INKTR. Also, save the credentials
used to open the file in sys_ktrace() and use them for all writes to the
vnode.
much feedback and ok jsing@
| |
copied area, and initialize it properly in the FORK_THREAD case.
This restores the behavior of a forked process inheriting its parent's
signal stack.
ok guenther@
| |
and use curp vs p instead of p1 vs p2. Add curpr and pr variables
for the respective struct processes. Make sigactsshare() return
the shared sigacts instead of taking the struct proc to update.
ok deraadt@
| |
during the big rework at c2k10, but it's too early as signals can be posted
before the process is fully built. Move those list adds back down to the
late stage they were before.
Problem seen on sebastia@'s sparc.
ok deraadt@ miod@
| |
for pointing to the thread-control-block. Support for mapping this
to the correct hardware register can be added as it's finished;
start with support for amd64, sparc, and sparc64. Includes syscalls
for getting and setting it (for a portable __errno implementation) as
well as creating a new thread with an initial value for it.
discussed with miod@, kettenis@, deraadt@; committing to get the syscalls
in with the impending libc bump and do further refinements in tree
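A rough userland sketch of what the new TCB get/set syscalls make possible for a portable __errno(): each thread points its TCB slot at a per-thread block, and errno lives inside that block. The struct layout, the my_errno() name, and the hand-written prototypes are illustrative assumptions, not libc's real implementation; the save/restore dance is only there so the demo doesn't clobber whatever the C library has installed.

#include <stdio.h>

void	*__get_tcb(void);		/* OpenBSD syscall, prototype written out here */
void	 __set_tcb(void *tcb);		/* OpenBSD syscall, prototype written out here */

struct tcb {				/* hypothetical minimal per-thread block */
	int	tcb_errno;
};

static int *
my_errno(void)				/* stand-in for libc's __errno() */
{
	return &((struct tcb *)__get_tcb())->tcb_errno;
}

int
main(void)
{
	static struct tcb main_tcb;
	void *saved, *slot;

	saved = __get_tcb();		/* remember whatever libc installed */
	__set_tcb(&main_tcb);		/* point the TCB at our block */
	*my_errno() = 0;		/* errno now lives in main_tcb */
	slot = my_errno();
	__set_tcb(saved);		/* restore before touching stdio again */

	printf("per-thread errno slot was %p\n", slot);
	return 0;
}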
| |
declared in .h files, not in each .c. Apply that rule to endtsleep(),
scheduler_start(), updatepri(), and realitexpire().
ok deraadt@ tedu@
| |
KERNEL_PROC_LOCK -> KERNEL_LOCK
KERNEL_PROC_UNLOCK -> KERNEL_UNLOCK
oga@ ok
| |
appears to be safe now. If not, we'll know soon where the bugs lie, so
that we can fix them. This diff has been in snapshots for many months.
ok oga miod
| |
a vforked child behave correctly. Have the parent in a vfork()
wait on a (different) flag in *its* process instead of the child
to prevent a possible use-after-free. When ktracing the child
return from a fork, call it rfork if an rthread was created.
ok blambert@
| |
that you can't evade the checks by doing the dirty work in an rthread
ok blambert@, deraadt@
| |
Fixes rthread breakage observed by Vladimir Kirillov.
| |
so that the process-level stuff is to/from struct process and not
struct proc. This fixes a bunch of problem cases in rthreads.
Based on earlier work by blambert and myself, but mostly written
at c2k10.
Tested by many: deraadt, sthen, krw, ray, and in snapshots
| |
1.119.
| |
rwlock, the thread will release biglock if it sleeps, which means that
atomicity from before the rw_enter() to after it is not guaranteed.
The change didn't address those, so pulling it until it does.
"go for it" tedu@
| |
Use uvm_km_kmemalloc_pla with the dma constraint to allocate kernel stacks.
Yes, that means DMA is possible to kernel stacks, but only until we've fixed
all the scary drivers.
deraadt@ ok
| |
process_new() handle the new struct process like fork1() does struct proc,
with a range of members zeroed and a range copied from the parent process.
ok tedu@
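A self-contained sketch of the zero-a-range/copy-a-range pattern the message refers to: members between one pair of markers are bulk-zeroed, members between another pair are bulk-copied from the parent. The struct, member names, and marker names below are made up for illustration; in the kernel the markers bracket real members of struct proc and struct process.

#include <stdio.h>
#include <string.h>
#include <stddef.h>

struct process_demo {
	int	ps_refcnt;		/* set up by hand, neither zeroed nor copied */

	/* zeroed region: from ps_startzero up to (not including) ps_endzero */
	int	ps_startzero;
	int	ps_sigcount;
	int	ps_endzero;

	/* copied region: from ps_startcopy up to (not including) ps_endcopy */
	int	ps_startcopy;
	int	ps_limit;
	int	ps_flags;
	int	ps_endcopy;
};

static void
process_new_demo(struct process_demo *pr, const struct process_demo *parent)
{
	memset(&pr->ps_startzero, 0,
	    offsetof(struct process_demo, ps_endzero) -
	    offsetof(struct process_demo, ps_startzero));
	memcpy(&pr->ps_startcopy, &parent->ps_startcopy,
	    offsetof(struct process_demo, ps_endcopy) -
	    offsetof(struct process_demo, ps_startcopy));
	pr->ps_refcnt = 1;
}

int
main(void)
{
	struct process_demo parent = { 0 }, child;

	parent.ps_sigcount = 7;		/* should not be inherited */
	parent.ps_limit = 42;		/* should be inherited */
	process_new_demo(&child, &parent);
	printf("sigcount %d, limit %d\n", child.ps_sigcount, child.ps_limit);
	return 0;
}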
| |
count was always one. That's pointless, so remove the member and the code.
ok tedu@
| |
sproc() support, but we don't have COMPAT_IRIX.
ok krw@ tedu@
| |
(not done) hasn't changed, but now it's less work to test things.
ok art deraadt
| |
grabbing allproclk in proc_zap(); don't dereference the process's p_pgrp
if that happens.
ok art@ thib@
| |
the allproclk before searching for a free pid so that we don't sleep
between picking one and adding it to the list that is searched.
Also, keep holding the lock until after the PIDHASH update.
ok art@, tedu@
| |
list walkers in sysctl that can block. As a reward, no more vslock.
With some feedback from art, guenther, phessler. ok guenther.
| |
from Brad Tilley <brad at 16systems dot com>;
ok oga@
| |
and catching FORK_THREAD when RTHREADS wasn't compiled in. Simplify
sys_rfork() based on that.
Flesh out the Linux clone support with more flags, but stricter
checks for missing support or bad combos. Still not enough for
NPTL to work, mind you.
ok kettenis@
| |
so put it in struct process instead of struct proc. While at it,
move the p_emul member inside struct proc so that it gets copied
automatically instead of requiring manual assignment.
ok deraadt@
| |
catch the libc major bump per request from deraadt@
Diff by reyk.
ok guenther@
| |
which is exactly what the macro does.
Macros that are nothing more than:
#define FUNCTION(arg) function(arg)
are almost always pointless and should go away.
OK blambert@
Agreed by many.
| |
- Split up choosing of cpu between fork and "normal" cases. Fork is
very different and should be treated as such.
- Instead of implicitly choosing a cpu in setrunqueue, do it outside
where it actually makes sense.
- Just because a cpu is marked as idle doesn't mean it will be soon.
There could be a thundering herd effect if we call wakeup from an
interrupt handler, so subtract cpus with queued processes when
deciding which cpu is actually idle.
- Some simplifications allowed by the above.
kettenis@ ok (except one bugfix that was not in the initial diff)
|
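A tiny illustration of the "subtract cpus with queued processes" point above, assuming the idle and queued CPUs are tracked as bitmasks; the kernel uses its own cpuset type and names, so this is only a model of the logic.

#include <stdio.h>

int
main(void)
{
	unsigned int idle_cpus   = 0x0b;	/* cpus 0, 1, 3 marked idle */
	unsigned int queued_cpus = 0x01;	/* cpu 0 already has a proc queued */
	unsigned int candidates  = idle_cpus & ~queued_cpus;

	printf("wakeup candidates: %#x\n", candidates);	/* 0xa: cpus 1 and 3 */
	return 0;
}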
| |
- Split up run queues so that every cpu has one.
- Make setrunqueue choose the cpu where we want to make this process
runnable (this should be refined and less brutal in the future).
- When choosing the cpu where we want to run, make some kind of educated
guess where it will be best to run (very naive right now).
Other:
- Set operations for sets of cpus.
- load average calculations per cpu.
- sched_is_idle() -> curcpu_is_idle()
tested, debugged and prodded by many@
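A simplified, self-contained model of per-cpu run queues with setrunqueue() choosing where to enqueue, as described above. The "least loaded queue" guess and all names (proc_demo, cpu_demo, choose_cpu) are illustrative stand-ins for the kernel's versions, which make a smarter choice.

#include <sys/queue.h>
#include <stdio.h>

#define NCPU	4

struct proc_demo {
	const char		*p_name;
	TAILQ_ENTRY(proc_demo)	 p_runq;
};

struct cpu_demo {
	int				ci_nrun;	/* procs queued here */
	TAILQ_HEAD(, proc_demo)		ci_runq;	/* this cpu's own run queue */
} cpus[NCPU];

/* Pick a cpu for a newly runnable proc: here, simply the least loaded one. */
static struct cpu_demo *
choose_cpu(void)
{
	struct cpu_demo *best = &cpus[0];
	int i;

	for (i = 1; i < NCPU; i++)
		if (cpus[i].ci_nrun < best->ci_nrun)
			best = &cpus[i];
	return best;
}

static void
setrunqueue_demo(struct proc_demo *p)
{
	struct cpu_demo *ci = choose_cpu();

	TAILQ_INSERT_TAIL(&ci->ci_runq, p, p_runq);
	ci->ci_nrun++;
	printf("%s queued on cpu%ld\n", p->p_name, (long)(ci - cpus));
}

int
main(void)
{
	struct proc_demo a = { "proc-a" }, b = { "proc-b" };
	int i;

	for (i = 0; i < NCPU; i++)
		TAILQ_INIT(&cpus[i].ci_runq);
	setrunqueue_demo(&a);
	setrunqueue_demo(&b);
	return 0;
}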
| |
ok deraadt
| |
fork(), I worry about it a lot but cannot prove yet that sleeping there
is bad. Anyway, this change makes us never sleep in that area -- the
memory needed is allocated earlier, like the ptrace state. Tested by many
developers.
| |
in case we need it. The idea is to try to get rid of some potential
sleeps.
ok tedu
| |
Put a reference count in struct process to prevent use-after-free
if the main thread reaches the reaper ahead of some other thread
in the process. Use the reference count to update the user process
count correctly when changing the real uid.
"please re-commit before something else nasty comes in" deraadt@
|
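A minimal model of the reference-count idea: each live thread holds a reference on the shared per-process data, and whichever thread the reaper sees last is the one that frees it, regardless of the order threads exit. Field and function names are illustrative, and in the kernel the count is manipulated under the appropriate lock rather than in plain C as here.

#include <stdlib.h>
#include <string.h>
#include <stdio.h>

struct process_demo {
	int	 ps_refcnt;	/* one reference per live thread */
	char	*ps_shared;	/* stand-in for shared per-process state */
};

static void
process_unref(struct process_demo *pr)
{
	if (--pr->ps_refcnt > 0)
		return;		/* other threads still reference it */
	free(pr->ps_shared);	/* last thread out frees the shared data */
	free(pr);
	printf("process freed\n");
}

int
main(void)
{
	struct process_demo *pr = calloc(1, sizeof(*pr));

	pr->ps_refcnt = 2;		/* main thread plus one rthread */
	pr->ps_shared = strdup("shared");
	process_unref(pr);		/* rthread reaches the reaper first... */
	process_unref(pr);		/* ...main thread may get there later */
	return 0;
}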
| |
if the main thread reaches the reaper ahead of some other thread
in the process.
ok art@ tedu@
| |
free things when exiting a threaded proc. from philip guenther
| |
- Move the functionality of choosing a process from cpu_switch into
a much simpler function: cpu_switchto. Instead of having the locore
code walk the run queues, let the MI code choose the process we
want to run and only implement the context switching itself in MD
code.
- Let MD context switching run without worrying about spls or locks.
- Instead of having the idle loop implemented with special contexts
in MD code, implement one idle proc for each cpu. Make the idle
loop MI with MD hooks.
- Change the proc lists from the old style vax queues to TAILQs.
- Change the sleep queue from vax queues to TAILQs. This makes
wakeup() go from O(n^2) to O(n).
There will be some MD fallout, but it will be fixed shortly.
There are also a few cleanups to be done after this.
deraadt@, kettenis@ ok
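A small self-contained model of the sleep-queue part of this change: with a TAILQ bucket, wakeup() can unhook every proc sleeping on a given wait channel in a single pass, which is where the O(n^2) to O(n) claim comes from. Hashing and all the real bookkeeping are omitted, and the names are illustrative.

#include <sys/queue.h>
#include <stdio.h>

struct proc_demo {
	const char		*p_name;
	const void		*p_wchan;	/* what it is sleeping on */
	TAILQ_ENTRY(proc_demo)	 p_sleepq;
};

TAILQ_HEAD(, proc_demo) sleepq = TAILQ_HEAD_INITIALIZER(sleepq);

static void
wakeup_demo(const void *wchan)
{
	struct proc_demo *p, *next;

	/* one pass over the bucket, removing every matching sleeper */
	TAILQ_FOREACH_SAFE(p, &sleepq, p_sleepq, next) {
		if (p->p_wchan != wchan)
			continue;
		TAILQ_REMOVE(&sleepq, p, p_sleepq);
		printf("%s is runnable again\n", p->p_name);
	}
}

int
main(void)
{
	int chan_a, chan_b;
	struct proc_demo p1 = { "p1", &chan_a };
	struct proc_demo p2 = { "p2", &chan_b };
	struct proc_demo p3 = { "p3", &chan_a };

	TAILQ_INSERT_TAIL(&sleepq, &p1, p_sleepq);
	TAILQ_INSERT_TAIL(&sleepq, &p2, p_sleepq);
	TAILQ_INSERT_TAIL(&sleepq, &p3, p_sleepq);
	wakeup_demo(&chan_a);		/* wakes p1 and p3 in one pass */
	return 0;
}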
| |
bug in the code, but as soon as I try to fix it, it seems to trigger
some other bugs. Instead of trying to figure out what's going on
while everyone suffers, it's better to back out and figure out
the bugs outside the tree.
| |
have cpu_info now, so kill the option.
eyeballed by jsg@ and grange@
| |
leave macros behind for now to keep the commit small
ok art beck miod pedro
| |
a new struct. Instead of doing a huge rename and dealing with the fallout
for weeks, like other projects that need no mention, we will slowly and
carefully move things out of struct proc into a new struct process.
- Create struct process and the infrastructure to create and remove them.
- Move threads in a process into struct process.
deraadt@, tedu@ ok
| |
Instead, keep the proc pointer in it and put the selinfo on a list
in struct proc in selrecord. Then clean up the list when leaving
sys_select and sys_poll.
miod@ ok, testing by many, including Bob's spamd boxes.
|
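A toy model of the bookkeeping described above, under assumed names: selrecord() links the selinfo onto a list hanging off the selecting proc, and that list is torn down on the way out of sys_select()/sys_poll() so no stale selinfo keeps pointing at the proc.

#include <sys/queue.h>
#include <stdio.h>

struct proc_demo;

struct selinfo_demo {
	struct proc_demo		*si_proc;	/* who is selecting on us */
	TAILQ_ENTRY(selinfo_demo)	 si_link;
};

struct proc_demo {
	TAILQ_HEAD(, selinfo_demo)	 p_selinfos;	/* what we selected on */
};

static void
selrecord_demo(struct proc_demo *p, struct selinfo_demo *si)
{
	si->si_proc = p;
	TAILQ_INSERT_TAIL(&p->p_selinfos, si, si_link);
}

/* Called on the way out of select/poll so nothing keeps a stale proc pointer. */
static void
selclean_demo(struct proc_demo *p)
{
	struct selinfo_demo *si;

	while ((si = TAILQ_FIRST(&p->p_selinfos)) != NULL) {
		TAILQ_REMOVE(&p->p_selinfos, si, si_link);
		si->si_proc = NULL;
	}
}

int
main(void)
{
	struct proc_demo p;
	struct selinfo_demo si_tty, si_sock;

	TAILQ_INIT(&p.p_selinfos);
	selrecord_demo(&p, &si_tty);
	selrecord_demo(&p, &si_sock);
	selclean_demo(&p);
	printf("selinfo list cleaned up\n");
	return 0;
}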
| |
it's a good idea to use atomic.h operations on it. This mechanical
change updates all bit operations on p_flag to atomic_{set,clear}bits_int.
Only exception is that P_OWEUPC is set by MI code before calling
need_proftick and it's automatically cleared by ADDUPC. There's
no reason for MD handling of that flag since everyone handles it the
same way.
kettenis@ ok
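A sketch of what the conversion looks like: plain read-modify-write flag updates become atomic_setbits_int()/atomic_clearbits_int(). The helpers below are userland stand-ins built on compiler builtins so the example compiles on its own; the kernel gets the real ones from its atomic headers, and the P_PROFIL value here is purely illustrative.

#include <stdio.h>

#define P_PROFIL	0x00000010	/* example flag value, illustrative only */

static void
atomic_setbits_int(volatile unsigned int *p, unsigned int bits)
{
	__sync_or_and_fetch(p, bits);
}

static void
atomic_clearbits_int(volatile unsigned int *p, unsigned int bits)
{
	__sync_and_and_fetch(p, ~bits);
}

int
main(void)
{
	volatile unsigned int p_flag = 0;

	/* before: p->p_flag |= P_PROFIL;   (a racy read-modify-write on MP) */
	atomic_setbits_int(&p_flag, P_PROFIL);
	/* before: p->p_flag &= ~P_PROFIL; */
	atomic_clearbits_int(&p_flag, P_PROFIL);

	printf("p_flag = %#x\n", p_flag);
	return 0;
}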
| |
later when more of its resources have been allocated and thus
kill(2)ing such a process has more predictable results.
now w/ a couple of kettenis remarks; kettenis@ miod@ ok