matthew@ noticed I wasn't populating npages in the kinfo_pool sent to
userland.

inline is the new __inline

protect pool_list rather than the rwlock that made i386 blow up:
use pool_count to report the number of pools to userland rather
than walking the list and counting the elements as we go.
use sysctl_rdint, sysctl_rdstring, and sysctl_rdstruct instead of
handcrafted copyouts.
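As a rough sketch of what that looks like, the handler below answers the
three KERN_POOL queries with the stock read-only sysctl helpers.  It is a
simplified illustration, not the real sys/kern code: find_pool() and
fill_kinfo_pool() are invented stand-ins for the list lookup and copy code,
and pool_count is the global introduced in the entry below.

```c
#include <sys/param.h>
#include <sys/errno.h>
#include <sys/pool.h>
#include <sys/sysctl.h>

/* pool_count, find_pool() and fill_kinfo_pool() assumed declared elsewhere. */
int
sysctl_pool_sketch(int *name, u_int namelen, void *oldp, size_t *oldlenp,
    void *newp)
{
	struct kinfo_pool pi;
	struct pool *pp;

	switch (name[0]) {
	case KERN_POOL_NPOOLS:
		/* one integer, no list walk, no hand-rolled copyout */
		return (sysctl_rdint(oldp, oldlenp, newp, pool_count));
	case KERN_POOL_NAME:
	case KERN_POOL_POOL:
		if (namelen != 2)
			return (ENOTDIR);
		if ((pp = find_pool(name[1])) == NULL)
			return (ENOENT);
		if (name[0] == KERN_POOL_NAME)
			return (sysctl_rdstring(oldp, oldlenp, newp,
			    pp->pr_wchan));
		fill_kinfo_pool(pp, &pi);
		return (sysctl_rdstruct(oldp, oldlenp, newp, &pi, sizeof(pi)));
	default:
		return (EOPNOTSUPP);
	}
}
```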
provide a pool_count global so we can figure out how many pools are
active without having to walk the global pool_list.
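A minimal sketch of the idea: keep the counter in step with the list, under
whatever lock already serializes pool creation and destruction.  The lock,
list head and pr_entry link name here are illustrative, not the real ones
from subr_pool.c.

```c
#include <sys/queue.h>
#include <sys/rwlock.h>

struct rwlock pool_lock = RWLOCK_INITIALIZER("pools");
TAILQ_HEAD(pool_head, pool) pool_list = TAILQ_HEAD_INITIALIZER(pool_list);
int pool_count;

void
pool_list_insert(struct pool *pp)
{
	rw_enter_write(&pool_lock);
	TAILQ_INSERT_TAIL(&pool_list, pp, pr_entry);	/* pr_entry: assumed link */
	pool_count++;
	rw_exit_write(&pool_lock);
}

void
pool_list_remove(struct pool *pp)
{
	rw_enter_write(&pool_lock);
	TAILQ_REMOVE(&pool_list, pp, pr_entry);
	pool_count--;
	rw_exit_write(&pool_lock);
}
```

With that in place, answering KERN_POOL_NPOOLS is a single integer read
instead of a walk over every pool.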
take the pool's mutex when copying stats out of it in the sysctl
path so we are guaranteed a consistent snapshot.
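The pattern is roughly the following; the field names follow struct pool and
struct kinfo_pool loosely, and the point is simply that every counter is
read inside one mtx_enter()/mtx_leave() pair.

```c
#include <sys/systm.h>
#include <sys/mutex.h>
#include <sys/pool.h>

void
pool_snapshot(struct pool *pp, struct kinfo_pool *pi)
{
	memset(pi, 0, sizeof(*pi));

	mtx_enter(&pp->pr_mtx);
	pi->pr_size = pp->pr_size;
	pi->pr_npages = pp->pr_npages;
	pi->pr_nitems = pp->pr_nitems;
	pi->pr_nout = pp->pr_nout;
	pi->pr_nget = pp->pr_nget;
	pi->pr_nput = pp->pr_nput;
	mtx_leave(&pp->pr_mtx);
}
```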
used on some archs.

number) rather than rely on implicit process exclusion, splhigh and splvm.
the only things touching the global state come from process context so we
can get away with an rwlock instead of a mutex. thankfully.
ok matthew@
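The read side then looks something like this.  pool_lock, pool_list,
pr_entry and pr_serial are the illustrative names from the sketches above
(pr_serial standing in for the pool serial number), and pool_snapshot() is
the copy helper sketched earlier; creation and destruction take the write
side of the lock.

```c
int
pool_find_and_copy(int serial, struct kinfo_pool *pi)
{
	struct pool *pp;
	int found = 0;

	rw_enter_read(&pool_lock);
	TAILQ_FOREACH(pp, &pool_list, pr_entry) {
		if (pp->pr_serial != serial)
			continue;
		pool_snapshot(pp, pi);	/* per-pool mutex taken inside */
		found = 1;
		break;
	}
	rw_exit_read(&pool_lock);

	return (found);
}
```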
pools struct out. however, struct pool in the kernel contains lots
of things that userland probably isn't interested in, like actual
mutexes, and probably shouldn't get easy access to, like pointers
to kernel memory via all the lists/trees.
this implements a kinfo_pool structure that has only the data that
userland needs to know about. it cuts the sysctl code over to
building it from struct pool as required and copying that out
instead, and cuts userland over to only handling kinfo_pool.
the only problem with this is vmstat, which can read kernel images
via kvm and so needs some understanding of struct pool. to cope,
the struct pool definition is guarded by if defined(_KERNEL) ||
defined(_LIBKVM), as inspired by sysctl which needs to do the same
thing sometimes. struct pool itself is generally not visible to
userland though, which is good.
matthew@ suggested struct kinfo_pool instead of struct pool_info.
the kinfo prefix has precedent.
lots of people liked this.
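A trimmed illustration of the split (not the full definitions): the exported
struct is plain data, and the kernel-only struct stays behind the guard.

```c
/* Visible to userland: plain counters only, no pointers or locks. */
struct kinfo_pool {
	unsigned int	pr_size;	/* size of a pool item */
	unsigned int	pr_npages;	/* pages owned by the pool */
	unsigned int	pr_nitems;	/* items currently available */
	unsigned int	pr_nout;	/* items currently handed out */
	unsigned long	pr_nget;	/* lifetime pool_get() count */
	unsigned long	pr_nput;	/* lifetime pool_put() count */
};

#if defined(_KERNEL) || defined(_LIBKVM)
/* The kernel (and libkvm, for vmstat reading kernel images) see the rest. */
struct pool {
	/* mutexes, page lists, allocator hooks, ... */
};
#endif /* _KERNEL || _LIBKVM */
```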
fully deterministic behavior. ok deraadt

(then immediately reacquire it). this has the effect of giving interrupts
on other CPUs a chance to run and reduces latency in many cases.
ok deraadt
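A sketch of the pattern (the function and helper names are invented, since
the first line of the message is missing here): while doing a long run of
work under a pool mutex, drop it for an instant every so often so anything
blocked on it gets a window to run.

```c
void
pool_release_batch_sketch(struct pool *pp)
{
	int batch = 0;

	mtx_enter(&pp->pr_mtx);
	while (pool_work_remains(pp)) {		/* assumed predicate */
		pool_do_one_unit_of_work(pp);	/* assumed helper */
		if (++batch == 8) {
			batch = 0;
			/* give interrupts blocked on pr_mtx a chance to run */
			mtx_leave(&pp->pr_mtx);
			mtx_enter(&pp->pr_mtx);
		}
	}
	mtx_leave(&pp->pr_mtx);
}
```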
<uvm/uvm.h> if possible and remove double inclusions.
ok beck@, mlarkin@, deraadt@

get and put, so they don't save us anything by caching constructed
objects. there were no real users of them, and this api was never
documented. removing conditionals in a hot path can't be a bad idea
either.
ok deraadt@ krw@ kettenis@

tested on vax (gcc3) ok miod@

ok deraadt

ok miod

this adds a tiny bit more protection from list manipulation.

order to detect double-init mistakes. add a similar check and rearrange
pool_destroy to detect the opposite mistake.
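One way to express both checks, reusing the illustrative pool_lock/pool_list
names from the sketches above (the real checks sit inside pool_init() and
pool_destroy() themselves): a pool being initialised must not already be on
the list, and a pool being destroyed must be.

```c
void
pool_init_dup_check(struct pool *pp)
{
	struct pool *iter;

	rw_enter_read(&pool_lock);
	TAILQ_FOREACH(iter, &pool_list, pr_entry)
		if (iter == pp)
			panic("pool_init: %s is already initialized",
			    pp->pr_wchan);
	rw_exit_read(&pool_lock);
}

void
pool_destroy_sanity_check(struct pool *pp)
{
	struct pool *iter;
	int onlist = 0;

	rw_enter_read(&pool_lock);
	TAILQ_FOREACH(iter, &pool_list, pr_entry)
		if (iter == pp)
			onlist = 1;
	rw_exit_read(&pool_lock);

	if (!onlist)
		panic("pool_destroy: %s was never initialized", pp->pr_wchan);
}
```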
allow some more pool debug code to be enabled if not compiled in
bump poison size back up to 64

ok deraadt

ok deraadt

in MI code; gcc 2.95 does not accept such annotation for function pointer
declarations, only function prototypes.
To be uncommented once gcc 2.95 bites the dust.

function pointer arguments which are {used as,} wrappers around the kernel
printf function.
No functional change.

ok jsing@ krw@ mikeb@

internals. this fixes a panic I got where a network interrupt tried to use
the mbuf pool's mutex while pool_reclaim_all already held it, which led
to the same CPU trying to lock that mutex twice.
ok deraadt@

aren't necessarily atomic.
this is an update of a diff matthew@ posted to tech@ over a year ago.

write to ph.
ok blambert@ matthew@ deraadt@

also, rmpage updates curpage, no need to do it twice.
ok art deraadt guenther

leaves an empty page in curpage, and this inconsistency slowly spreads
until finally one of the other pool checks freaks out.
ok art deraadt

and a pool_init flag to aggressively run pool_chk. ok art deraadt

The problems during the hackathon were not caused by this (most likely).
prodded by deraadt@ and beck@

and we aren't sure what's causing them.
shouted oks by many before I even built a kernel with the diff.

- Use km_alloc for all backend allocations in pools.
- Use km_alloc for the emergency kentry allocations in uvm_mapent_alloc.
- Garbage collect uvm_km_getpage, uvm_km_getpage_pla and uvm_km_putpage.
ariane@ ok
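For reference, the basic km_alloc(9)/km_free(9) call that a pool backend
page allocator ends up making looks roughly like this.  kv_any, kp_dirty and
kd_waitok/kd_nowait are the stock mode constants from the km_alloc(9)
interface; the wrapper names are invented and error handling is minimal.

```c
#include <sys/param.h>
#include <uvm/uvm_extern.h>

void *
pool_backend_page_get(size_t pgsize, int canwait)
{
	/* virtual address anywhere, physical pages not zeroed */
	return (km_alloc(pgsize, &kv_any, &kp_dirty,
	    canwait ? &kd_waitok : &kd_nowait));
}

void
pool_backend_page_put(void *v, size_t pgsize)
{
	km_free(v, pgsize, &kv_any, &kp_dirty);
}
```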
to on, if POOL_DEBUG is compiled in, so that boot-time pool corruption
can be found. When the sysctl is turned off, performance is almost as
good as compiling with POOL_DEBUG compiled out. Not all pool page
headers can be purged of the magic checks.
performance tests by henning
ok ariane kettenis mikeb
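The mechanism is essentially a global flag consulted by the debug paths; a
compressed sketch, with the flag standing in for the sysctl-backed variable
and the actual poison verification elided:

```c
int pool_debug = 1;	/* toggled via sysctl; defaults on with POOL_DEBUG */

void
pool_poison_check_sketch(struct pool *pp, void *item)
{
	if (!pool_debug)
		return;

	/* verify the poison pattern / magic words in the free item here */
}
```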
Allow reclaiming pages from all pools.
Allow zeroing all pages.
Allocate the more equal pig.
mlarkin@ needs this.
Not called yet.
ok mlarkin@, theo@

ok claudio tedu

have real values, no 0 values anymore.
ok deraadt kettenis krw matthew oga thib

Currently only checks that we're not in an interrupt context, but will
soon check that we're not holding any mutexes either.
Update malloc(9) and pool(9) to use assertwaitok(9) as appropriate.
"i like it" art@, oga@, marco@; "i see no harm" deraadt@; too trivial
for me to bother prying actual oks from people.
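Usage is as simple as it sounds: a code path that may sleep asserts that up
front.  A sketch of a sleeping allocation path follows; my_alloc() is purely
illustrative, and per the message above assertwaitok(9) currently only
checks that we are not in interrupt context.

```c
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/pool.h>

void *
my_alloc(struct pool *pp, int flags)
{
	if (ISSET(flags, PR_WAITOK))
		assertwaitok();	/* sleeping must be legal here */

	return (pool_get(pp, flags));
}
```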
This is clearer, and as thib pointed out, the default in softraid was
wrong. ok thib.

Use uvm_km_kmemalloc_pla with the dma constraint to allocate kernel stacks.
Yes, that means DMA is possible to kernel stacks, but only until we've fixed
all the scary drivers.
deraadt@ ok

ok tedu@, beck@, oga@
|