path: root/sys/kern/subr_pool.c
Commit message | Author | Age | Files | Lines
...
* uvm constraints. Add two mandatory MD symbols, uvm_md_constraints  (thib, 2010-06-27, 1 file, -7/+36)
  which contains the constraints for DMA/memory allocation for each architecture, and
  dma_constraints which contains the range of addresses that are DMA accessible by the
  system. This is based on ariane@'s physcontig diff, with lots of bugfixes and the
  following additions by myself: introduce a new function pool_set_constraints() which
  sets the address range from which we allocate pages for the pool; this is now used
  for the mbuf/mbuf cluster pools to keep them DMA accessible. The !direct archs no
  longer stuff pages into the kernel object in uvm_km_getpage_pla but rather do a
  pmap_extract() in uvm_km_putpages. Tested heavily by myself on i386, amd64 and
  sparc64. Some tests on alpha and SGI.
  "commit it" beck, art, oga, deraadt; "i like the diff" deraadt
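  A rough illustration of how the new hook could be used: point a DMA-sensitive pool at
  the dma_constraints range right after pool_init(). The pool_set_constraints() argument
  list below is an assumption based on this commit message, not the verbatim interface,
  and the pool and wmesg names are invented.

    #include <sys/param.h>
    #include <sys/pool.h>

    extern struct uvm_constraint_range dma_constraints;  /* MD range of DMA-able addresses */

    struct pool mclpool;

    void
    mclpool_setup(void)
    {
        /* 2048-byte cluster-sized items; names are illustrative only */
        pool_init(&mclpool, 2048, 0, 0, 0, "mclpl", NULL);

        /* Hypothetical call form: constrain page allocation to DMA-reachable memory. */
        pool_set_constraints(&mclpool, &dma_constraints, 1);
    }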
* aligment -> alignment  (miod, 2010-06-17, 1 file, -2/+2)
* When allocating from the item header pool, we can't sleep, as we may be holding a mutex which won't be released. From Christian Ehrhardt.  (tedu, 2010-01-16, 1 file, -4/+3)
  While here, fix another buglet: no need to pass down PR_ZERO either, as noticed by blambert@.
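  The rule behind the fix: a pool_get() issued while a mutex is held must use PR_NOWAIT
  so it can fail instead of sleeping with the lock held. A minimal sketch of that
  pattern, with invented mutex and pool names:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/errno.h>
    #include <sys/mutex.h>
    #include <sys/pool.h>

    struct mutex foo_mtx;   /* initialized elsewhere with mtx_init() */
    struct pool foo_pool;   /* initialized elsewhere with pool_init() */

    int
    foo_add(void)
    {
        void *item;

        mtx_enter(&foo_mtx);
        /* Sleeping here would hold foo_mtx across the sleep, so ask for PR_NOWAIT. */
        item = pool_get(&foo_pool, PR_NOWAIT);
        if (item == NULL) {
            mtx_leave(&foo_mtx);
            return (ENOMEM);
        }
        /* ... link item into a structure protected by foo_mtx ... */
        mtx_leave(&foo_mtx);
        return (0);
    }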
* sync comment to reality, off-page page headers go into  (thib, 2009-09-05, 1 file, -3/+3)
  an RB tree, not into a hashtable.
* add commented out options for PAGEFASTRECYCLE, KVA_GUARDPAGES, shuffle VFSDEBUG  (thib, 2009-08-26, 1 file, -2/+1)
  around and add POOL_DEBUG as an enabled option, removing the define from subr_pool.c.
  comments & ok deraadt@.
* add a show all vnodes command, use dlg's nice pool_walk() to accomplish  (thib, 2009-08-13, 1 file, -4/+5)
  this. ok beck@, dlg@
* Use an RB tree instead of a SPLAY tree for the page headers tree.  (thib, 2009-08-09, 1 file, -8/+8)
  ok beck@, dlg@
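  For reference, the <sys/tree.h> machinery the switch relies on looks roughly like this
  for a page-header tree keyed on page address; the struct and field names here are
  illustrative, not the ones in subr_pool.c:

    #include <sys/types.h>
    #include <sys/tree.h>

    /* Illustrative page header keyed by the address of the page it describes. */
    struct ph_example {
        RB_ENTRY(ph_example)    ph_node;    /* red-black tree linkage */
        vaddr_t                 ph_page;    /* start address of the backing page (key) */
    };

    static int
    ph_compare(struct ph_example *a, struct ph_example *b)
    {
        /* Equal addresses must compare equal, otherwise lookups break. */
        if (a->ph_page < b->ph_page)
            return (-1);
        if (a->ph_page > b->ph_page)
            return (1);
        return (0);
    }

    RB_HEAD(ph_tree, ph_example);
    RB_PROTOTYPE(ph_tree, ph_example, ph_node, ph_compare);
    RB_GENERATE(ph_tree, ph_example, ph_node, ph_compare);

    /* Lookup: build a key on the stack and RB_FIND() it. */
    struct ph_example *
    ph_lookup(struct ph_tree *tree, vaddr_t page)
    {
        struct ph_example key;

        key.ph_page = page;
        return (RB_FIND(ph_tree, tree, &key));
    }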
* We enable POOL_DEBUG (except in a release)  (deraadt, 2009-07-30, 1 file, -2/+2)
* turn off POOL_DEBUG as we go into release; pointed out by mpf  (deraadt, 2009-06-24, 1 file, -2/+2)
* rework pool_get() a bit so that if you call it with a constructor set  (oga, 2009-06-12, 1 file, -9/+16)
  *and* PR_ZERO in flags, you will no longer zero out your nicely constructed object.
  Instead, now if you have a constructor set and you set PR_ZERO, you will panic (it's
  invalid due to how constructors work).
  ok miod@ deraadt@ on earlier versions of the diff. ok tedu@ after he pointed out a
  couple of places I messed up. Problem initially noticed by ariane@ a while ago.
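  In other words, once a constructor is attached the zeroing belongs in the constructor,
  not in the pool_get() flags. A small sketch of the rule, with invented names (the
  ctor/dtor hook itself is the one introduced in the 2007-12-09 entry further down):

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/pool.h>

    struct widget {
        int     w_refs;
        char    w_buf[64];
    };

    struct pool widget_pool;    /* initialized elsewhere, with widget_ctor attached */

    /* Constructor: do the zeroing (and any other setup) here, not via PR_ZERO. */
    int
    widget_ctor(void *arg, void *obj, int flags)
    {
        struct widget *w = obj;

        memset(w, 0, sizeof(*w));
        w->w_refs = 1;
        return (0);
    }

    void
    widget_example(void)
    {
        struct widget *w;

        /* OK: the constructor handles initialization. */
        w = pool_get(&widget_pool, PR_WAITOK);

        /* Not OK after this commit: PR_ZERO with a constructor set panics,
         * because zeroing would wipe what the constructor just built. */
        /* w = pool_get(&widget_pool, PR_WAITOK | PR_ZERO); */

        pool_put(&widget_pool, w);
    }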
* POOL_DEBUG and DIAGNOSTIC should be better friends  (deraadt, 2009-06-04, 1 file, -4/+4)
* the POOL_DEBUG checks needed to be more friendly with DIAGNOSTIC  (deraadt, 2009-06-04, 1 file, -4/+4)
* enable POOL_DEBUG again just for the hackathon.  (oga, 2009-06-04, 1 file, -1/+2)
  slackers now get more bugs to fix, yay! discussed with deraadt@.
* Move splassert checks from pool_do_get to pool_get(). Since the former  (miod, 2009-05-31, 1 file, -8/+6)
  is invoked with the pool mutex held, the asserts are satisfied by design. ok tedu@
* initialise the constructor and destructor function pointers to NULL  (dlg, 2009-04-22, 1 file, -1/+6)
  in pool_init so the pool struct doesn't have to be zeroed before you init it.
* ensure all pi_magic checks are inside DIAGNOSTIC  (deraadt, 2009-02-17, 1 file, -2/+4)
* at tedu's request, bring back the basic single "first word" PI_MAGIC check  (deraadt, 2009-02-16, 1 file, -16/+21)
  since it is essentially free. To turn on the checking of the rest of the allocation,
  use 'option POOL_DEBUG'. ok tedu
* Disable pool debug stuff for the release (it has a performance hit, but  (deraadt, 2009-02-16, 1 file, -14/+17)
  between releases we may want to turn it on, since it has uncovered real bugs)
  ok miod henning etc etc
* i got emptypages and fullpages mixed up in pool_walk. this now shows items  (dlg, 2008-12-23, 1 file, -2/+2)
  in fullpages that have been allocated. spotted by claudio@
* add pool_walk as debug code.  (dlg, 2008-12-23, 1 file, -1/+38)
  this can be used to walk over all the items allocated with a pool and have them
  examined by a function the caller provides. with help from and ok tedu@
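  A rough picture of how such a walker is used from ddb: hand it a callback that
  inspects each live item. The pool_walk() call form shown below is an assumption; the
  real prototype added here may take additional arguments, and the pool name is invented.

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/pool.h>

    extern struct pool mbpool;      /* some pool we want to inspect from ddb */

    /* Callback invoked for every allocated item in the pool. */
    static void
    dump_item(void *item)
    {
        printf("item %p\n", item);
    }

    /* Assumed call form; the real pool_walk() prototype may differ. */
    void
    dump_mbpool(void)
    {
        pool_walk(&mbpool, dump_item);
    }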
* record the offset into each pool page that item allocations actually begin  (dlg, 2008-12-23, 1 file, -1/+3)
  on, aka, its coloring. ok tedu@
* Put back the support for pools > PAGE_SIZE. This time the compare function  (art, 2008-12-04, 1 file, -20/+125)
  works and there are even some sanity checks that it actually returns what we expect
  it to return.
* Back out the large page pools for now. The compare function is  (art, 2008-11-25, 1 file, -120/+20)
  borked and instead of stressing to figure out how to fix it, I'll let people's
  kernels work.
* Make sure that equal elements always compare equal. Logic error spotted  (art, 2008-11-25, 1 file, -3/+5)
  by otto@ ok otto@
* Protect kmem_map allocations with splvm.  (art, 2008-11-24, 1 file, -3/+12)
  This should make dlg happy.
* Allow allocations larger than PAGE_SIZE from pools.  (art, 2008-11-24, 1 file, -21/+110)
  This is solved by special allocators and an obfuscated compare function for the page
  header splay tree and some other minor adjustments. At this moment, the allocator
  will be picked automagically by pool_init and you can get a kernel_map allocator if
  you specify PR_WAITOK in flags (XXX), default is kmem_map. This will be changed in
  the future once the allocator code is slightly reworked. But people want to use it
  now. "nag nag nag nag" dlg@
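  From the caller's point of view, a pool of multi-page items needs no explicit
  allocator: pass NULL and pool_init() picks one. A minimal sketch under that
  assumption, with invented names:

    #include <sys/param.h>
    #include <sys/pool.h>

    struct pool bigbuf_pool;

    void
    bigbuf_pool_setup(void)
    {
        /*
         * 16 KB items are larger than PAGE_SIZE on most architectures.
         * Passing NULL as the allocator lets pool_init() choose one; per
         * this commit, PR_WAITOK in the flags selects the kernel_map-backed
         * allocator, otherwise kmem_map is the default.
         */
        pool_init(&bigbuf_pool, 16 * 1024, 0, 0, 0, "bigbufpl", NULL);
    }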
* Do deadbeef-style protection in pools too, by default, even though it  (deraadt, 2008-11-22, 1 file, -51/+68)
  is a lot slower. Before release this should be backed out, but for now we need
  everyone to run with this and start finding the use-after-free style bugs this
  exposes. original version from tedu ok everyone in the room
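  The technique itself is simple: fill freed items with a recognizable pattern and check
  that the pattern is still intact when the item is handed out again, so writes through
  stale pointers show up as corruption. A generic sketch of the idea, not the actual
  subr_pool.c code:

    #include <sys/types.h>

    #define POISON  0xdeadbeefU     /* illustrative "deadbeef" fill value */

    /* Called when an item is freed: overwrite its contents with the poison word. */
    static void
    poison_fill(void *v, size_t size)
    {
        uint32_t *p = v;
        size_t i;

        for (i = 0; i < size / sizeof(*p); i++)
            p[i] = POISON;
    }

    /* Called when an item is allocated: any word that changed while the item
     * sat on the free list points at a use-after-free (or a stray write). */
    static int
    poison_check(const void *v, size_t size)
    {
        const uint32_t *p = v;
        size_t i;

        for (i = 0; i < size / sizeof(*p); i++)
            if (p[i] != POISON)
                return (1);     /* modified while free */
        return (0);
    }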
* accidental commit ... backout  (deraadt, 2008-10-31, 1 file, -70/+50)
* kern_sysctl.c  (deraadt, 2008-10-31, 1 file, -50/+70)
* yet again i prove unable to commit what i really wanted. spotted by deraadt  (tedu, 2008-10-24, 1 file, -24/+5)
* a better fix for the "uvm_km thread runs out of memory" problem.  (tedu, 2008-10-23, 1 file, -16/+42)
  add a new arg to the backend so it can tell pool to slow down. when we get this flag,
  yield *after* putting the page in the pool's free list. whatever we do, don't let the
  thread sleep. this makes things better by still letting the thread run when a huge pf
  request comes in, but without artificially increasing pressure on the backend by
  eating pages without feeding them forward. ok deraadt
* First pass at removing clauses 3 and 4 from NetBSD licenses.  (ray, 2008-06-26, 1 file, -8/+1)
  Not sure what's more surprising: how long it took for NetBSD to catch up to the rest
  of the BSDs (including UCB), or the amount of code that NetBSD has claimed for itself
  without attributing to the actual authors. OK deraadt@
* oldnointr pool allocator is no longer used or necessary.  (art, 2008-06-14, 1 file, -26/+5)
* unsigned -> u_int and warnmess -> warnmsg  (thib, 2008-05-16, 1 file, -3/+3)
  for pool_sethardlimit. prodded by and ok tedu@
* Add a PR_ZERO flag for pools, to complement the M_ZERO  (thib, 2008-05-06, 1 file, -2/+5)
  malloc flag, does the same thing. use it in a few places.
  OK tedu@, "then go ahead. and don't forget the manpage (-:" miod@
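  Usage mirrors malloc(9)'s M_ZERO: OR the flag into pool_get() and the item comes back
  zeroed. For example, with an invented pool name:

    #include <sys/param.h>
    #include <sys/pool.h>

    extern struct pool sc_pool;     /* some existing pool of fixed-size items */

    void *
    alloc_zeroed(void)
    {
        /* Equivalent to pool_get() followed by zeroing, like malloc(..., M_ZERO). */
        return (pool_get(&sc_pool, PR_WAITOK | PR_ZERO));
    }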
* remove an overlooked simple_lock everybody likes to point out to me.  (tedu, 2007-12-11, 1 file, -3/+1)
* release the pool mutex if we may sleep in the backend  (tedu, 2007-12-11, 1 file, -2/+10)
* big patch to simplify pool code.  (tedu, 2007-12-09, 1 file, -800/+97)
  remove pool_cache code. it was barely used, and quite complex. it's silly to have
  both a "fast" and "faster" allocation interface. provide a ctor/dtor interface, and
  convert the few cache users to use it. no caching at this time.
  use mutexes to protect pools. they should be initialized with pool_setipl if the
  pool may be used in an interrupt context, without existing spl protection.
  ok art deraadt thib
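  A sketch of what the ctor/dtor interface looks like to a consumer; the
  pool_set_ctordtor() argument order (pool, ctor, dtor, arg) is assumed here rather
  than quoted from pool(9) of that era, and the names are invented:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/pool.h>

    struct conn {
        int     c_state;
        /* ... */
    };

    struct pool conn_pool;

    /* Constructor/destructor hooks replace the old pool_cache "fast path". */
    static int
    conn_ctor(void *arg, void *obj, int flags)
    {
        struct conn *c = obj;

        c->c_state = 0;
        return (0);
    }

    static void
    conn_dtor(void *arg, void *obj)
    {
        /* tear down anything conn_ctor() set up */
    }

    void
    conn_pool_setup(void)
    {
        pool_init(&conn_pool, sizeof(struct conn), 0, 0, 0, "connpl", NULL);

        /* Assumed argument order: pool, ctor, dtor, opaque arg. */
        pool_set_ctordtor(&conn_pool, conn_ctor, conn_dtor, NULL);
    }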
* I don't really know what I was thinking when I wrote this. Not only does the  (art, 2007-08-16, 1 file, -2/+2)
  "array" index start at 1, the code also abused index 0 to detect that we were doing
  a KERN_POOL_NPOOLS. Just look at *name == KERN_POOL_NPOOLS instead of using
  index == 0 for that. deraadt@ ok
* some remnants of the timestamping code i missed  (tedu, 2007-05-28, 1 file, -7/+1)
* remove time from pool header. it slows us down quite a bit, and it's  (tedu, 2007-05-28, 1 file, -19/+3)
  probably a better idea to just let reclaim have the emptypages. we can still use the
  partial pages. this lets dlg sling many many more packets
  ok dlg henning miod pedro ryan
* add a pool_setipl function, which allows setting an appropriate ipl  (tedu, 2007-05-28, 1 file, -1/+16)
  for splassert inside pool_get and pool_put (DIAGNOSTIC only)
  ok miod pedro thib
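  The ipl hint tells the splassert checks which interrupt priority level pool_get() and
  pool_put() may legitimately be called from. A small sketch for a pool touched from
  network interrupt context, with an invented pool name (IPL_NET comes from the MD
  interrupt headers):

    #include <sys/param.h>
    #include <sys/pool.h>

    struct pool pkt_pool;

    void
    pkt_pool_setup(void)
    {
        pool_init(&pkt_pool, 256, 0, 0, 0, "pktpl", NULL);

        /*
         * On DIAGNOSTIC kernels, splassert() inside pool_get()/pool_put()
         * will now check that callers block network interrupts.
         */
        pool_setipl(&pkt_pool, IPL_NET);
    }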
* Clean up an obsolete allocator.  (art, 2007-04-23, 1 file, -22/+1)
  miod@ ok
* Allow machine-dependent overrides for the ``deadbeef'' sentinel values,  (miod, 2007-04-12, 1 file, -1/+5)
  and make sure that nothing can ever be mapped at these addresses.
  Only i386 overrides the default for now.
  From mickey@, ok art@ miod@
* Instead of managing pages for intrsafe maps in special objects (aka.  (art, 2007-04-11, 1 file, -3/+2)
  kmem_object) just so that we can remove them, just use pmap_extract to get the pages
  to free and simplify a lot of code to not deal with the list of intrsafe maps,
  intrsafe objects, etc. miod@ ok
* typos from bret lambert;  (jmc, 2006-11-17, 1 file, -4/+4)
* add show all pools command listing all pools as vmstat -m does; miod@ ok  (mickey, 2006-05-20, 1 file, -1/+77)
* remove drain hooks from pool.  (tedu, 2006-05-07, 1 file, -149/+5)
  1. drain hooks and lists of allocators make the code complicated
  2. the only hooks in the system are the mbuf reclaim routines
  3. if reclaim is actually able to put a meaningful amount of memory back in the
     system, i think something else is dicked up. ie, if reclaiming your ip fragment
     buffers makes the difference between thrashing swap and not, your system is in a
     load of trouble.
  4. it's a scary amount of code running with very weird spl requirements and i'd say
     it's pretty much totally untested. raise your hand if your router is running at
     the edge of swap.
  5. the reclaim stuff goes back to when mbufs lived in a tiny vm_map and you could
     run out of va. that's very unlikely (like impossible) now.
  ok/tested pedro krw sturm
* proper condition for freeing a page and fix a comment appropriately; art@ tedu@ ok  (mickey, 2004-07-29, 1 file, -2/+2)
* ifdef DDB a few functions only used (or usable) from DDB.  (art, 2004-07-20, 1 file, -1/+5)