| Commit message | Author | Age | Files | Lines |
slackers now get more bugs to fix, yay!
discussed with deraadt@.
is invoked with the pool mutex held, the asserts are satisfied by design.
ok tedu@
in pool_init so the pool struct doesn't have to be zeroed before
you init it.
since it is essentially free. To turn on the checking of the rest of the
allocation, use 'option POOL_DEBUG'
ok tedu
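The cheap part of the check is thus always compiled in; the rest is enabled at kernel build time. Assuming a stock GENERIC-style configuration, that is a one-line change to the kernel config file before rebuilding with config(8) and make:

```
# kernel configuration file: enable full pool debugging checks
option POOL_DEBUG
```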
between releases we may want to turn it on, since it has uncovered real
bugs)
ok miod henning etc etc
in fullpages that have been allocated.
spotted by claudio@
this can be used to walk over all the items allocated with a pool and have
them examined by a function the caller provides.
with help from and ok tedu@
on, aka, its coloring.
ok tedu@
works, and there are even some sanity checks that it actually returns
what we expect it to return.
borked, and instead of stressing out over how to fix it, I'll
let people's kernels work.
by otto@
ok otto@
This should make dlg happy.
This is solved by special allocators and an obfuscated compare function
for the page header splay tree and some other minor adjustments.
At this moment, the allocator will be picked automagically by pool_init
and you can get a kernel_map allocator if you specify PR_WAITOK in flags
(XXX), default is kmem_map. This will be changed in the future once the
allocator code is slightly reworked. But people want to use it now.
"nag nag nag nag" dlg@
is a lot slower. Before release this should be backed out, but for now
we need everyone to run with this and start finding the use-after-free
style bugs this exposes. original version from tedu
ok everyone in the room
add a new arg to the backend so it can tell pool to slow down. when we get
this flag, yield *after* putting the page in the pool's free list. whatever
we do, don't let the thread sleep.
this makes things better by still letting the thread run when a huge pf
request comes in, but without artificially increasing pressure on the backend
by eating pages without feeding them forward.
ok deraadt
Not sure what's more surprising: how long it took for NetBSD to
catch up to the rest of the BSDs (including UCB), or the amount of
code that NetBSD has claimed for itself without attributing to the
actual authors.
OK deraadt@
for pool_sethardlimit.
prodded by and ok tedu@
malloc flag, does the same thing.
use it in a few places.
OK tedu@, "then go ahead. and don't forget the manpage (-:" miod@
remove pool_cache code. it was barely used, and quite complex. it's
silly to have both a "fast" and "faster" allocation interface. provide
a ctor/dtor interface, and convert the few cache users to use it. no
caching at this time.
use mutexes to protect pools. they should be initialized with pool_setipl
if the pool may be used in an interrupt context, without existing spl
protection.
ok art deraadt thib
"array" index start at 1, the code also abused index 0 to detect that we
were doing a KERN_POOL_NPOOLS.
Just look at *name == KERN_POOL_NPOOLS instead of using index == 0 for that.
deraadt@ ok
probably a better idea to just let reclaim have the emptypages. we can
still use the partial pages.
this lets dlg sling many many more packets
ok dlg henning miod pedro ryan
for splassert inside pool_get and pool_put (DIAGNOSTIC only)
ok miod pedro thib
miod@ ok
and make sure that nothing can ever be mapped at these addresses.
Only i386 overrides the default for now.
From mickey@, ok art@ miod@
kmem_object) just so that we can remove them, just use pmap_extract
to get the pages to free and simplify a lot of code to not deal with
the list of intrsafe maps, intrsafe objects, etc.
miod@ ok
1. drain hooks and lists of allocators make the code complicated
2. the only hooks in the system are the mbuf reclaim routines
3. if reclaim is actually able to put a meaningful amount of memory back
in the system, i think something else is dicked up. ie, if reclaiming
your ip fragment buffers makes the difference between thrashing swap and not,
your system is in a load of trouble.
4. it's a scary amount of code running with very weird spl requirements
and i'd say it's pretty much totally untested. raise your hand if your
router is running at the edge of swap.
5. the reclaim stuff goes back to when mbufs lived in a tiny vm_map and
you could run out of va. that's very unlikely (like impossible) now.
ok/tested pedro krw sturm
encapsulating all such access into well-defined functions
that make sure locking is done as needed.
It also cleans up some uses of wall time vs. uptime in some
places, but there are sure to be more of these needed as
well, particularly in MD code. Also, many current calls
to microtime() should probably be changed to getmicrotime(),
or to the {,get}microuptime() versions.
ok art@ deraadt@ aaron@ matthieu@ beck@ sturm@ millert@ others
"Oh, that is not your problem!" from miod@
the new one remains the default and _nointr.
_kmem is restored to its former position, and _oldnointr is
introduced.
this is to allow some pool users who don't like the new allocator
to continue working. testing/ok beck@ cedric@
change both the nointr and default pool allocators to using uvm_km_getpage.
change pools to default to a maxpages value of 8, so they hoard less memory.
change mbuf pools to use default pool allocator.
pools are now more efficient, use less of kmem_map, and a bit faster.
tested mcbride, deraadt, pedro, drahn, miod to work everywhere
we're looking for. change small page_header hash table to a splay tree.
from Chuck Silvers.
tested by brad grange henning mcbride naddy otto
- Allow a pool to be initialized with PR_DEBUG which will cause it to
allocate with malloc_debug.
- sprinkle some splassert.
- uvm_km_alloc_poolpage1 has its own spl protection, no need to add
additional layer around it.
One relevant change: round up pool element size to the alignment.
give us pages. PR_NOWAIT most likely means "hey, we're coming from an
interrupt, don't mess with stuff that doesn't have proper protection".
- pool_allocator_free is called in too many places so I don't feel
comfortable without that added protection from splvm (and besides,
pool_allocator_free is rarely called anyway, so the extra spl will be
unnoticeable). It shouldn't matter when fiddling with those flags, but
you never know.
- Remove a wakeup without a matching tsleep. It's a left-over from
some other code path I was investigating when I reworked the
pool a while ago, and it should have been removed before that commit.
deraadt@ ok