path: root/include/linux/slub_def.h
Age | Commit message | Author | Files | Lines
2008-08-05 | SLUB: dynamic per-cache MIN_PARTIAL | Pekka Enberg | 1 | -0/+1
2008-07-26 | SL*B: drop kmem cache argument from constructor | Alexey Dobriyan | 1 | -1/+1
2008-07-04 | Christoph has moved | Christoph Lameter | 1 | -1/+1
2008-07-03 | slub: Do not use 192 byte sized cache if minimum alignment is 128 byte | Christoph Lameter | 1 | -0/+2
2008-04-27 | slub: Fallback to minimal order during slab page allocation | Christoph Lameter | 1 | -0/+2
2008-04-27 | slub: Update statistics handling for variable order slabs | Christoph Lameter | 1 | -0/+2
2008-04-27 | slub: Add kmem_cache_order_objects struct | Christoph Lameter | 1 | -2/+10
2008-04-14 | slub: No need for per node slab counters if !SLUB_DEBUG | Christoph Lameter | 1 | -1/+1
2008-03-03 | slub: Fix up comments | Christoph Lameter | 1 | -2/+2
2008-02-14 | slub: Support 4k kmallocs again to compensate for page allocator slowness | Christoph Lameter | 1 | -3/+3
2008-02-14 | slub: Determine gfpflags once and not every time a slab is allocated | Christoph Lameter | 1 | -0/+1
2008-02-14 | slub: kmalloc page allocator pass-through cleanup | Pekka Enberg | 1 | -2/+6
2008-02-07 | SLUB: Support for performance statistics | Christoph Lameter | 1 | -0/+23
2008-02-04 | Explain kmem_cache_cpu fields | Christoph Lameter | 1 | -5/+5
2008-02-04 | SLUB: rename defrag to remote_node_defrag_ratio | Christoph Lameter | 1 | -1/+4
2008-01-02 | Unify /proc/slabinfo configuration | Linus Torvalds | 1 | -2/+0
2008-01-01 | slub: provide /proc/slabinfo | Pekka J Enberg | 1 | -0/+2
2007-10-17 | Slab API: remove useless ctor parameter and reorder parameters | Christoph Lameter | 1 | -1/+1
2007-10-16 | SLUB: Optimize cacheline use for zeroing | Christoph Lameter | 1 | -0/+1
2007-10-16 | SLUB: Place kmem_cache_cpu structures in a NUMA aware way | Christoph Lameter | 1 | -3/+6
2007-10-16 | SLUB: Move page->offset to kmem_cache_cpu->offset | Christoph Lameter | 1 | -0/+1
2007-10-16 | SLUB: Avoid page struct cacheline bouncing due to remote frees to cpu slab | Christoph Lameter | 1 | -1/+8
2007-10-16 | SLUB: direct pass through of page size or higher kmalloc requests | Christoph Lameter | 1 | -33/+24
2007-08-31 | SLUB: Force inlining for functions in slub_def.h | Christoph Lameter | 1 | -4/+4
2007-07-20 | fix gfp_t annotations for slub | Al Viro | 1 | -1/+1
2007-07-17 | Slab allocators: Cleanup zeroing allocations | Christoph Lameter | 1 | -13/+0
2007-07-17 | SLUB: add some more inlines and #ifdef CONFIG_SLUB_DEBUG | Christoph Lameter | 1 | -0/+4
2007-07-17 | Slab allocators: consistent ZERO_SIZE_PTR support and NULL result semantics | Christoph Lameter | 1 | -12/+0
2007-07-16 | slob: initial NUMA support | Paul Mundt | 1 | -1/+5
2007-06-16 | SLUB: minimum alignment fixes | Christoph Lameter | 1 | -2/+11
2007-06-08 | SLUB: return ZERO_SIZE_PTR for kmalloc(0) | Christoph Lameter | 1 | -8/+17
2007-05-17 | Slab allocators: define common size limitations | Christoph Lameter | 1 | -17/+2
2007-05-17 | slub: fix handling of oversized slabs | Andrew Morton | 1 | -1/+6
2007-05-17 | Slab allocators: Drop support for destructors | Christoph Lameter | 1 | -1/+0
2007-05-16 | SLUB: It is legit to allocate a slab of the maximum permitted size | Christoph Lameter | 1 | -1/+1
2007-05-15 | SLUB: CONFIG_LARGE_ALLOCS must consider MAX_ORDER limit | Christoph Lameter | 1 | -1/+5
2007-05-07 | slub: enable tracking of full slabs | Christoph Lameter | 1 | -0/+1
2007-05-07 | SLUB: allocate smallest object size if the user asks for 0 bytes | Christoph Lameter | 1 | -2/+6
2007-05-07 | SLUB core | Christoph Lameter | 1 | -0/+201