path: root/sys/kern/kern_malloc.c
Commit message (Author, Age, Files, Lines)
* add a size argument to free. will be used soon, but for now default to 0. (tedu, 2014-07-12, 1 file, -2/+2)
  after discussions with beck deraadt kettenis.
* instead of defining two versions of bucketidx, just don't inline for small. (tedu, 2014-07-10, 1 file, -14/+6)
  ok deraadt
* Add mallocarray(9) (matthew, 2014-07-10, 1 file, -2/+36)
  While here, change malloc(9)'s size argument from "unsigned long" to "size_t". ok tedu
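The point of mallocarray(9) is to fold the `nmemb * size` overflow check into the allocator instead of repeating it at every call site. A minimal userland sketch of that check (function name is hypothetical; the real kernel version also takes a malloc type and flags, and may panic rather than return NULL):

```c
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/*
 * If both operands are below sqrt(SIZE_MAX) the product cannot
 * overflow, so the division is only done in the rare large case.
 */
#define MUL_NO_OVERFLOW ((size_t)1 << (sizeof(size_t) * 4))

static void *
mallocarray_sketch(size_t nmemb, size_t size)
{
	if ((nmemb >= MUL_NO_OVERFLOW || size >= MUL_NO_OVERFLOW) &&
	    nmemb > 0 && SIZE_MAX / nmemb < size)
		return NULL;	/* overflow: refuse, don't allocate short */
	return malloc(nmemb * size);
}
```

The fast path costs one comparison per operand; only requests with a huge factor pay for the division.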
* pool_debug still needed for non-DIAGNOSTIC kernels (deraadt, 2014-07-10, 1 file, -2/+2)
* hide the biglock thrashing under pool_debug so it can be turned off (tedu, 2014-07-10, 1 file, -2/+2)
* you've had 12+ years to update your kernel config. (daniel, 2014-06-21, 1 file, -5/+1)
  ok deraadt@
* consistent use of uint32_t for poison values (tedu, 2014-05-19, 1 file, -2/+2)
* if it's ok to wait, it must also be ok to give up the kernel lock. do so (then immediately reacquire it). (tedu, 2014-04-03, 1 file, -3/+7)
  this has the effect of giving interrupts on other CPUs a chance to run and reduces latency in many cases. ok deraadt
* Reduce uvm include madness. Use <uvm/uvm_extern.h> instead of <uvm/uvm.h> if possible and remove double inclusions. (mpi, 2014-03-28, 1 file, -2/+2)
  ok beck@, mlarkin@, deraadt@
* bzero -> memset (tedu, 2014-01-21, 1 file, -3/+3)
* Uncomment kprintf format attributes for sys/kern (syl, 2013-08-08, 1 file, -2/+2)
  tested on vax (gcc3) ok miod@
* permit free(NULL) to work. ok deraadt (tedu, 2013-07-04, 1 file, -1/+4)
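Permitting free(NULL) amounts to a single early-out guard, which lets callers drop their own `if (p != NULL)` checks. A trivial userland sketch of the shape of that change (name hypothetical; the kernel free() also takes a type and, later, a size):

```c
#include <stdlib.h>

/* free(NULL) becomes a no-op instead of a panic on a bad address */
static void
kfree_sketch(void *addr)
{
	if (addr == NULL)
		return;
	free(addr);
}
```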
* open up some races. if pool_debug == 2, force a yield() whenever waitok. (tedu, 2013-05-31, 1 file, -2/+8)
  ok miod
* switch the malloc and pool freelists to using xor simpleq. (tedu, 2013-05-03, 1 file, -11/+13)
  this adds a tiny bit more protection from list manipulation.
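The idea behind an XOR'ed freelist is that stored link pointers are XOR'ed with a secret value, so an attacker who overwrites a link cannot aim it at an address of their choosing: the decode step scrambles a forged pointer into garbage. A much-simplified sketch (names and the fixed cookie are illustrative; the kernel derives its cookie randomly, and the real XSIMPLEQ macros also protect the head and tail):

```c
#include <stddef.h>
#include <stdint.h>

struct xnode {
	uintptr_t next_x;	/* next pointer, XOR'ed with the cookie */
};

/* fixed for the sketch; the kernel would use a random per-boot value */
static uintptr_t xq_cookie = 0x5a5aa5a5UL;
static struct xnode *xq_head;

static void
xq_push(struct xnode *n)
{
	n->next_x = (uintptr_t)xq_head ^ xq_cookie;
	xq_head = n;
}

static struct xnode *
xq_pop(void)
{
	struct xnode *n = xq_head;

	if (n != NULL)
		xq_head = (struct xnode *)(n->next_x ^ xq_cookie);
	return n;
}
```

A raw memory dump of a node never shows a valid pointer, which is the "tiny bit more protection" the commit mentions.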
* shuffle around some poison code, prototypes, values... (tedu, 2013-04-06, 1 file, -13/+18)
  allow some more pool debug code to be enabled if not compiled in; bump poison size back up to 64
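Memory poisoning, as used here and in the deadbeef commits below, fills freed memory with a recognizable 32-bit pattern and verifies it on reallocation, so a use-after-free write is caught rather than silently corrupting the next owner. A hedged sketch of the fill/check pair (names and the single fixed pattern are illustrative; the kernel rotates among several patterns and only poisons a bounded prefix of each item):

```c
#include <stddef.h>
#include <stdint.h>

#define POISON	0xdeadbeefU	/* one recognizable uint32_t pattern */

static void
poison_mem(void *p, size_t len)
{
	uint32_t *v = p;
	size_t i;

	for (i = 0; i < len / sizeof(*v); i++)
		v[i] = POISON;
}

/* returns 1 if the poison is intact, 0 if someone scribbled on it */
static int
poison_check(const void *p, size_t len)
{
	const uint32_t *v = p;
	size_t i;

	for (i = 0; i < len / sizeof(*v); i++)
		if (v[i] != POISON)
			return 0;
	return 1;
}
```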
* separate memory poisoning code to a new file and make it usable kernel wide (tedu, 2013-03-28, 1 file, -48/+15)
  ok deraadt
* replace kern malloc's hand rolled freelist with simpleq macros. (tedu, 2013-03-26, 1 file, -65/+31)
  ok deraadt mpi
* use PAGE_SHIFT instead of PGSHIFT (deraadt, 2013-03-21, 1 file, -2/+2)
* factor out the deadbeef code for legibility. (tedu, 2013-03-15, 1 file, -37/+51)
  ok deraadt
* Comment out recently added __attribute__((__format__(__kprintf__))) annotations in MI code (miod, 2013-02-17, 1 file, -2/+2)
  gcc 2.95 does not accept such an annotation on function pointer declarations, only on function prototypes. To be uncommented once gcc 2.95 bites the dust.
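The distinction the commit draws can be shown with the standard `__printf__` format attribute (the kernel uses its own `__kprintf__` archetype; all names below are hypothetical). The prototype form is accepted by every gcc the tree supported, while the function-pointer form is what gcc 2.95 rejected and therefore had to be commented out:

```c
#include <stdarg.h>
#include <stdio.h>

/* attribute on a prototype: fine everywhere, enables format checking */
static int kprint(char *buf, size_t len, const char *fmt, ...)
    __attribute__((__format__(__printf__, 3, 4)));

struct pr_ops {
	/*
	 * The problematic form: the same attribute on a function
	 * pointer declaration, rejected by gcc 2.95 (modern gcc and
	 * clang accept it), hence left commented out here.
	 */
	int (*pr)(char *, size_t, const char *, ...)
	    /* __attribute__((__format__(__printf__, 3, 4))) */;
};

static int
kprint(char *buf, size_t len, const char *fmt, ...)
{
	va_list ap;
	int n;

	va_start(ap, fmt);
	n = vsnprintf(buf, len, fmt, ap);
	va_end(ap);
	return n;
}
```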
* Add explicit __attribute__((__format__(__kprintf__))) to the functions and function pointer arguments which are {used as,} wrappers around the kernel printf function. (miod, 2013-02-09, 1 file, -5/+6)
  No functional change.
* Expand the panic to show the malloc type and size. Okay deraadt@. (pirofti, 2012-03-30, 1 file, -2/+3)
* New vmmap implementation. (ariane, 2012-03-09, 1 file, -3/+8)
  no oks (it is really a pain to review properly); extensively tested, I'm confident it'll be stable; 'now is the time' from several icb inhabitants. Diff provides:
  - ability to specify different allocators for different regions/maps
  - a simpler implementation of the current allocator
  - currently in compatibility mode: it will generate similar addresses as the old allocator
* Improve kernel malloc type checking. (jsing, 2011-09-22, 1 file, -2/+2)
  ok deraadt@
* Backout vmmap in order to repair virtual address selection algorithms outside the tree. (ariane, 2011-06-06, 1 file, -8/+3)
* push kernel malloc(9) and kernel stacks into non-dma memory, since that appears to be safe now. (deraadt, 2011-06-06, 1 file, -2/+2)
  If not, we'll know soon where the bugs lie, so that we can fix them. This diff has been in snapshots for many months. ok oga miod
* Reimplement uvm/uvm_map. (ariane, 2011-05-24, 1 file, -3/+8)
  vmmap is designed to perform address space randomized allocations, without letting fragmentation of the address space go through the roof. Some highlights:
  - kernel address space randomization
  - proper implementation of guardpages
  - roughly 10% system time reduction during kernel build
  Tested by a lot of people on tech@ and developers. Theo's machines are still happy.
* unify some pool and malloc flag values. the important bit is that all flags have real values, no 0 values anymore. (tedu, 2010-09-26, 1 file, -1/+3)
  ok deraadt kettenis krw matthew oga thib
* Add assertwaitok(9) to declare code paths that assume they can sleep. (matthew, 2010-09-21, 1 file, -1/+4)
  Currently only checks that we're not in an interrupt context, but will soon check that we're not holding any mutexes either. Update malloc(9) and pool(9) to use assertwaitok(9) as appropriate. "i like it" art@, oga@, marco@; "i see no harm" deraadt@; too trivial for me to bother prying actual oks from people.
* We have this nice KMEMSTATS option to control when we use kmemstats, so no point in reserving space for kmemstats unless it's enabled. (matthew, 2010-07-22, 1 file, -1/+3)
  ok thib@, deraadt@
* add an align argument to uvm_km_kmemalloc_pla. (art, 2010-07-02, 1 file, -2/+2)
  Use uvm_km_kmemalloc_pla with the dma constraint to allocate kernel stacks. Yes, that means DMA is possible to kernel stacks, but only until we've fixed all the scary drivers. deraadt@ ok
* constrain malloc to only grab pages from dma reachable memory. (thib, 2010-07-01, 1 file, -4/+6)
  Do this by calling uvm_km_kmemalloc_pla with the dma_constraint. ok art@ (of course, he eats his cereal and okeys everything). OK beck@, deraadt@
* If option DIAGNOSTIC, do not bother doing sanity checks, including an uvm_map_checkprot() call, if the memory we're about to return has just been allocated with uvm_km_kmemalloc() instead of coming from the freelist. (miod, 2009-08-25, 1 file, -16/+28)
  No functional change but a very small speedup when the freelist for the given bucket is empty.
* The BUCKETINDX() giant macro is used to compute the base 2 logarithm of its input, in order to pick the appropriate malloc() bucket. (miod, 2009-08-25, 1 file, -1/+33)
  Replace it with an inline function in kern_malloc.c, which will either do a tightest-but-slower loop (if option SMALL_KERNEL), or a geometric search equivalent to what the macro does, but producing smaller code (especially on platforms which can not load large constants in one instruction).
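The bucket index is the ceiling of log2 of the request, clamped to a minimum bucket. The "tightest-but-slower loop" variant the commit describes for SMALL_KERNEL can be sketched as follows (function name and the MINBUCKET value of 4, i.e. a 16-byte smallest bucket, are assumptions for the sketch):

```c
#define MINBUCKET	4	/* assumption: smallest bucket is 2^4 = 16 bytes */

/* smallest j >= MINBUCKET such that (1UL << j) >= size */
static int
bucketidx_sketch(unsigned long size)
{
	int j = MINBUCKET;

	while ((1UL << j) < size)
		j++;
	return j;
}
```

The geometric-search variant reaches the same j by testing halves of the remaining exponent range, trading a few extra comparisons at small sizes for a bounded worst case and smaller code on platforms where large immediate constants are expensive.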
* Don't enforce a minimum size for nkmempages by default; if the computed value (based on physmem) is below NKMEMPAGES_MIN, we are on a low memory machine and can not afford more anyway. (miod, 2009-02-22, 1 file, -2/+2)
  ok deraadt@ tedu@
* Revert the change to use pools for <= PAGE_SIZE allocations. (kettenis, 2008-10-18, 1 file, -124/+229)
  It changes the pressure on the uvm system, uncovering several bugs. Some of those bugs result in provable deadlocks. We'll have to reconsider integrating this diff again after fixing those bugs. ok art@
* Since malloc_page_alloc() is a pool allocator it should check for PR_WAITOK instead of M_NOWAIT. (kettenis, 2008-10-11, 1 file, -2/+3)
  Checking for M_NOWAIT made many malloc calls that used that flag actually wait. This probably explains many of the strange hangs people have seen recently. ok miod@
* In malloc_page_free(), restore the correct wire_count value. (miod, 2008-10-05, 1 file, -2/+2)
* Use pools to do allocations for all sizes <= PAGE_SIZE. (art, 2008-09-29, 1 file, -229/+123)
  This will allow us to escape the limitations of kmem_map. At this moment, the per-type limits are still enforced for all sizes, but we might loosen that limit in the future after some thinking. Original diff from Mickey in kernel/5761; I massaged it a little to obey the per-type limits. miod@ ok
* Prevent possible free list corruption when malloc(9) sleeps. (kettenis, 2008-02-21, 1 file, -3/+2)
  From NetBSD, kindly pointed out by YAMAMOTO Takashi. ok miod@
* replace ctob and btoc with ptoa and atop respectively (martin, 2007-09-15, 1 file, -2/+2)
  help and ok miod@ thib@
* Add the long requested M_ZERO flag to malloc(9). (art, 2007-09-07, 1 file, -4/+10)
  But the reason for this isn't some kind of "we can make it use the pre-zeroed pages and zero the freelist in the idle loop and OMG I can has optimisatiuns", which would require tons of infrastructure and make everything slower. The reason is that it shrinks other code. And that's good. dlg@ ok, henning@ ok (before he read the diff)
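As the commit says, M_ZERO is deliberately just an explicit memset after a successful allocation, not a pre-zeroed-page scheme. A userland sketch of the flag's effect (the function name and the flag's numeric value are illustrative, not the kernel's):

```c
#include <stdlib.h>
#include <string.h>

#define M_ZERO	0x0008	/* illustrative value for the sketch */

/* allocate, and zero the block only when the caller asked for it */
static void *
kmalloc_sketch(size_t size, int flags)
{
	void *p = malloc(size);

	if (p != NULL && (flags & M_ZERO))
		memset(p, 0, size);
	return p;
}
```

Call sites shrink from a malloc-then-bzero pair to a single call, which is the code-size win the commit is after.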
* replace the machine dependent bytes-to-clicks macro by the MI ptoa() version for i386 (martin, 2007-09-01, 1 file, -3/+3)
  more architectures and a ctob() replacement are being worked on. prodded by and ok miod
* Add a name argument to the RWLOCK_INITIALIZER macro. (thib, 2007-05-29, 1 file, -2/+2)
  Pick reasonable names for the locks involved. ok tedu@, art@
* Allow machine-dependent overrides for the ``deadbeef'' sentinel values, and make sure that nothing can ever be mapped at these addresses. (miod, 2007-04-12, 1 file, -1/+5)
  Only i386 overrides the default for now. From mickey@, ok art@ miod@
* Instead of managing pages for intrsafe maps in special objects (aka. kmem_object) just so that we can remove them, just use pmap_extract to get the pages to free, and simplify a lot of code to not deal with the list of intrsafe maps, intrsafe objects, etc. (art, 2007-04-11, 1 file, -4/+4)
  miod@ ok
* remove a few void * casts that are useless (tedu, 2007-03-25, 1 file, -4/+4)
* Switch some lockmgr locks to rwlocks. (art, 2007-01-12, 1 file, -4/+5)
  In this commit:
  - gdt lock on amd64
  - sysctl lock
  - malloc sysctl lock
  - disk sysctl lock
  - swap syscall lock
  miod@, pedro@ ok (and "looks good" others@)
* Make malloc() print out a warning message when returning NULL due to M_CANFAIL. (pedro, 2006-11-28, 1 file, -3/+14)
  idea from miod@, okay deraadt@
* If M_CANFAIL is set and the malloc() size is too big, return NULL instead of panic()'ing. (thib, 2006-11-22, 1 file, -3/+8)
  ok pedro@, deraadt@
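Taken together, the two M_CANFAIL commits above amount to: an oversized request returns NULL (with a warning) when the caller declared it can cope, and panics otherwise. A hedged userland sketch of that control flow (the flag value, size threshold, and function name are all made up for the sketch; abort() stands in for panic()):

```c
#include <stdio.h>
#include <stdlib.h>

#define M_CANFAIL	0x0004		/* illustrative value */
#define MAX_ALLOC	(1UL << 20)	/* hypothetical "too big" threshold */

static void *
malloc_canfail_sketch(size_t size, int flags)
{
	if (size > MAX_ALLOC) {
		if (flags & M_CANFAIL) {
			/* warn so the failure is diagnosable, then fail softly */
			fprintf(stderr, "malloc: allocation too large\n");
			return NULL;
		}
		abort();	/* stands in for panic() in the kernel */
	}
	return malloc(size);
}
```

Callers passing M_CANFAIL must therefore check the return value; callers that don't pass it keep the old "too big means a bug, so panic" contract.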