| |
encapsulating all such access into well-defined functions
that make sure locking is done as needed.
It also cleans up some uses of wall time vs. uptime in some
places, but there are sure to be more of these needed as
well, particularly in MD code. Also, many current calls
to microtime() should probably be changed to getmicrotime(),
or to the {,get}microuptime() versions.
ok art@ deraadt@ aaron@ matthieu@ beck@ sturm@ millert@ others
"Oh, that is not your problem!" from miod@
|
| |
|
|
|
|
|
|
|
|
| |
the new one remains the default and _nointr.
_kmem is restored to its former position, and _oldnointr is
introduced.
this is to allow some pool users who don't like the new allocator
to continue working. testing/ok beck@ cedric@
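For context, a hedged sketch of how a pool picks among these allocators at
pool_init() time ("struct foo" and the pool name are invented; passing NULL
selects the default allocator):

    struct pool foopl;

    /* the interrupt-safe default allocator */
    pool_init(&foopl, sizeof(struct foo), 0, 0, 0, "foopl", NULL);

    /* ...or name one explicitly, e.g. the non-interrupt-safe allocator */
    pool_init(&foopl, sizeof(struct foo), 0, 0, 0, "foopl",
        &pool_allocator_nointr);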
|
|
|
|
|
|
|
|
| |
change both the nointr and default pool allocators to use uvm_km_getpage.
change pools to default to a maxpages value of 8, so they hoard less memory.
change mbuf pools to use the default pool allocator.
pools are now more efficient, use less of kmem_map, and are a bit faster.
tested by mcbride, deraadt, pedro, drahn, miod to work everywhere
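If a particular pool really does benefit from hoarding more than the new
default of 8 idle pages, its high-water mark can still be raised per pool.
A sketch, assuming pool_sethiwat() keeps its pool(9) behaviour of taking a
number of items and converting it to a page limit internally:

    /* let this (hypothetical) pool cache up to ~1024 idle items,
     * trading memory for fewer trips to the backend allocator */
    pool_sethiwat(&foopl, 1024);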
|
|
|
|
|
|
| |
we're looking for. change small page_header hash table to a splay tree.
from Chuck Silvers.
tested by brad grange henning mcbride naddy otto
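The splay tree machinery comes from <sys/tree.h>; a minimal sketch of the
pattern (struct, field, and function names are invented for illustration,
not the ones used in subr_pool.c):

    #include <sys/tree.h>

    struct ph {                          /* stand-in page header */
            SPLAY_ENTRY(ph) ph_node;     /* tree linkage */
            caddr_t         ph_page;     /* key: address of the backing page */
    };

    static int
    ph_cmp(struct ph *a, struct ph *b)
    {
            return (a->ph_page < b->ph_page ? -1 : a->ph_page > b->ph_page);
    }

    SPLAY_HEAD(ph_tree, ph) phtree = SPLAY_INITIALIZER(&phtree);
    SPLAY_PROTOTYPE(ph_tree, ph, ph_node, ph_cmp);
    SPLAY_GENERATE(ph_tree, ph, ph_node, ph_cmp);

Lookups then go through SPLAY_FIND(), which splays the node it finds to the
root, so frequently referenced page headers stay cheap to reach no matter
how many pages the pool holds.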
|
|
|
|
|
|
| |
- Allow a pool to be initialized with PR_DEBUG which will cause it to
allocate with malloc_debug.
- sprinkle some splassert.
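A hedged sketch of both pieces (pool, struct, and IPL names chosen for
illustration):

    /* route this pool's page allocations through malloc_debug */
    pool_init(&foopl, sizeof(struct foo), 0, 0, PR_DEBUG, "foopl", NULL);

    /* splassert() complains, per the kern.splassert sysctl, if the caller
     * is not already running at the claimed interrupt priority level */
    splassert(IPL_VM);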
|
|
|
|
|
| |
- uvm_km_alloc_poolpage1 has its own spl protection, no need to add an
additional layer around it.
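Purely for illustration (the arguments are placeholders), the pattern being
removed looks like this; spl levels nest, so the outer pair was harmless but
bought nothing:

    int s = splvm();                                 /* redundant outer layer */
    va = uvm_km_alloc_poolpage1(map, obj, waitok);   /* raises and restores
                                                      * splvm() by itself */
    splx(s);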
|
|
|
|
|
| |
One relevant change: round up pool element size to the alignment.
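The rounding is the usual arithmetic from <sys/param.h>; a one-line
illustration:

    /* keep every item within a page aligned: e.g. size 20, align 8 -> 24 */
    size = roundup(size, align);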
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
give us pages. PR_NOWAIT most likely means "hey, we're coming from an
interrupt, don't mess with stuff that doesn't have proper protection".
- pool_allocator_free is called in too many places so I don't feel
comfortable without that added protection from splvm (and besides,
pool_allocator_free is rarely called anyway, so the extra spl will be
unnoticeable). It shouldn't matter when fiddling with those flags, but
you never know.
- Remove a wakeup without a matching tsleep. It's a left-over from
some other code path I was investigating when reworking the pool a
while ago, and it should have been removed before that commit.
deraadt@ ok
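Schematically, the extra splvm protection around the allocator flags
mentioned above amounts to this (flag and field names are illustrative):

    int s = splvm();             /* keep interrupt-level pool users away */
    pa->pa_flags &= ~PA_WANT;    /* while the allocator's flags change   */
    splx(s);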
|
| |
|
|
|
|
| |
Diff generated by Chris Kuethe.
|
|
|
|
|
|
|
|
| |
Just because the pool allocates from intrsafe memory doesn't mean that the
pool has to be protected by splvm. We can have intrsafe pools at splbio
or splsoftnet.
pool_page_alloc and pool_page_free must do their own splvm protection.
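A simplified sketch of what "do their own splvm protection" means here
(not the actual code; uvm_km_alloc_poolpage() stands in for whatever the
backend really calls):

    void *
    pool_page_alloc(struct pool *pp, int flags)
    {
            int s, waitok = (flags & PR_WAITOK) != 0;
            void *v;

            s = splvm();        /* the allocator, not its callers, guards
                                 * the intrsafe map it allocates from */
            v = (void *)uvm_km_alloc_poolpage(waitok);
            splx(s);
            return (v);
    }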
|
|
|
|
|
|
| |
When trying the drain hook in pool_allocator_alloc, don't leak memory
when the drain succeeds, and don't skip draining other pools just because
this pool doesn't have a drain hook.
|
| |
|
|
|
|
| |
From thorpej@netbsd.org
|
|
|
|
| |
the current size of the pool. ok art@
|
|
|
|
| |
let other parts of the kernel call it.
|
| |
|
|
|
|
|
| |
and don't static inline big functions that are called multiple
times and are not time critical.
|
|
|
|
| |
using printf(). Makes ddb sessions more fruitful.
|
|
|
|
|
| |
PR_MALLOC wasn't used at all in the code,
and PR_STATIC was missing pieces and should be handled with allocators instead.
|
|
|
|
|
|
|
|
|
|
|
|
| |
1. When a pool hits the hard limit, just before bailing out/sleeping.
2. When an allocator fails to allocate memory (with PR_NOWAIT).
3. Just before trying to reclaim some page in pool_reclaim.
The hook function should try to free some items back to the pool if
possible.
Convert m_reclaim hooks that were embedded in MCLGET, MGET and MGETHDR
into a pool drain hook (making the code much cleaner).
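Roughly, the mbuf conversion boils down to registering the reclaim routine
with the cluster pool (a sketch, assuming m_reclaim was given the drain-hook
signature of void (*)(void *, int)):

    /* the hook fires when the pool hits its hard limit, when the backend
     * allocator fails a PR_NOWAIT request, and from pool_reclaim() */
    pool_set_drain_hook(&mclpool, m_reclaim, NULL);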
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
well (not at all) with shortages of the vm_map where the pages are mapped
(usually kmem_map).
Try to deal with it:
- group all the information about the backend allocator for a pool in a
separate struct. The pool will only have a pointer to that struct.
- change the pool_init API to reflect that.
- link all pools allocating from the same allocator on a linked list.
- Since an allocator is responsible for waiting for physical memory, it
will only fail (waitok) when it runs out of its backing vm_map; in that
case, carefully drain pools using the same allocator so that va space is
freed. (see comments in code for caveats and details).
- change pool_reclaim to return whether it actually succeeded in freeing
some memory, and use that information to make draining easier and more
efficient.
- get rid of PR_URGENT, no one uses it.
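A sketch of the resulting shape of the interface, going by the description
above (the struct layout is abbreviated and illustrative, not copied from
the header):

    /* one of these describes a backend; every pool points at one */
    struct pool_allocator {
            void    *(*pa_alloc)(struct pool *, int);    /* grab a page of va */
            void     (*pa_free)(struct pool *, void *);  /* hand a page back  */
            int        pa_pagesz;                        /* page size served  */
            TAILQ_HEAD(, pool) pa_list;                  /* all pools using it,
                                                          * for cross-draining */
            /* locking and flags omitted */
    };

    /* pool_init() grows an allocator argument; NULL means the default */
    pool_init(&foopl, sizeof(struct foo), 0, 0, 0, "foopl",
        &pool_allocator_nointr);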
|
|
|
|
| |
From NetBSD.
|
|
|
|
| |
- use ltsleep instead of simple_unlock ; tsleep
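Schematically, assuming the struct pool fields pr_slock (interlock) and
pr_wchan (wait channel) of that era:

    /* before: a window exists between dropping the lock and sleeping */
    simple_unlock(&pp->pr_slock);
    tsleep(pp, PSWP, pp->pr_wchan, 0);

    /* after: ltsleep() releases the interlock only once the thread is
     * safely queued on the wait channel (and, without PNORELOCK, retakes
     * it on wakeup), closing that window */
    ltsleep(pp, PSWP, pp->pr_wchan, 0, &pp->pr_slock);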
|
| |
|
| |
|
| |
|
|
|
|
| |
(Look ma, I might have broken the tree)
|
| |
|
| |
|
| |
|
| |
|
|
|
|
| |
art@ ok
|
| |
|
| |
|
|
|
|
|
|
|
| |
- pool_cache similar to the slab allocator in Solaris.
- clean up locking a bit.
- Don't pass __LINE__ and __FILE__ to pool_get and pool_put unless
POOL_DIAGNOSTIC is defined.
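For reference, a hedged sketch of the pool_cache interface as it appears in
the NetBSD code this comes from (object type, constructor, and destructor
are invented):

    extern struct pool foopl;            /* an already-initialized pool */
    struct pool_cache  foo_cache;

    int  foo_ctor(void *arg, void *obj, int flags);  /* build a fresh object */
    void foo_dtor(void *arg, void *obj);             /* tear one down        */

    pool_cache_init(&foo_cache, &foopl, foo_ctor, foo_dtor, NULL);

    /* objects handed out by the cache are already constructed, so the
     * ctor/dtor cost is paid only when the cache itself grows or shrinks */
    struct foo *f = pool_cache_get(&foo_cache, PR_WAITOK);
    pool_cache_put(&foo_cache, f);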
|
|
|
|
|
|
|
| |
old vm system, and I hoped that it would make people help me switch all
archs to uvm. But that didn't help.
Fix pool to work with the old vm system (not optimal, ha!).
|
| |
|
| |
|
| |
|
|
|