path: root/mm/slub.c
Age / Commit message / Author / Files / Lines
2011-07-30Merge branch 'slub/lockless' of git://git.kernel.org/pub/scm/linux/kernel/git/penberg/slab-2.6Linus Torvalds1-252/+512
* 'slub/lockless' of git://git.kernel.org/pub/scm/linux/kernel/git/penberg/slab-2.6: (21 commits)
  slub: When allocating a new slab also prep the first object
  slub: disable interrupts in cmpxchg_double_slab when falling back to pagelock
  Avoid duplicate _count variables in page_struct
  Revert "SLUB: Fix build breakage in linux/mm_types.h"
  SLUB: Fix build breakage in linux/mm_types.h
  slub: slabinfo update for cmpxchg handling
  slub: Not necessary to check for empty slab on load_freelist
  slub: fast release on full slab
  slub: Add statistics for the case that the current slab does not match the node
  slub: Get rid of the another_slab label
  slub: Avoid disabling interrupts in free slowpath
  slub: Disable interrupts in free_debug processing
  slub: Invert locking and avoid slab lock
  slub: Rework allocator fastpaths
  slub: Pass kmem_cache struct to lock and freeze slab
  slub: explicit list_lock taking
  slub: Add cmpxchg_double_slab()
  mm: Rearrange struct page
  slub: Move page->frozen handling near where the page->freelist handling occurs
  slub: Do not use frozen page flag but a bit in the page counters
  ...
2011-07-25Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/trivialLinus Torvalds1-1/+1
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/trivial: (43 commits)
  fs: Merge split strings
  treewide: fix potentially dangerous trailing ';' in #defined values/expressions
  uwb: Fix misspelling of neighbourhood in comment
  net, netfilter: Remove redundant goto in ebt_ulog_packet
  trivial: don't touch files that are removed in the staging tree
  lib/vsprintf: replace link to Draft by final RFC number
  doc: Kconfig: `to be' -> `be'
  doc: Kconfig: Typo: square -> squared
  doc: Konfig: Documentation/power/{pm => apm-acpi}.txt
  drivers/net: static should be at beginning of declaration
  drivers/media: static should be at beginning of declaration
  drivers/i2c: static should be at beginning of declaration
  XTENSA: static should be at beginning of declaration
  SH: static should be at beginning of declaration
  MIPS: static should be at beginning of declaration
  ARM: static should be at beginning of declaration
  rcu: treewide: Do not use rcu_read_lock_held when calling rcu_dereference_check
  Update my e-mail address
  PCIe ASPM: forcedly -> forcibly
  gma500: push through device driver tree
  ...

Fix up trivial conflicts:
  - arch/arm/mach-ep93xx/dma-m2p.c (deleted)
  - drivers/gpio/gpio-ep93xx.c (renamed and context nearby)
  - drivers/net/r8169.c (just context changes)
2011-07-25slub: When allocating a new slab also prep the first objectChristoph Lameter1-0/+3
We need to branch to the debug code for the first object if we allocate a new slab; otherwise the first object will be wrongly marked as inactive. Tested-by: Rabin Vincent <rabin@rab.in> Signed-off-by: Christoph Lameter <cl@linux.com> Signed-off-by: Pekka Enberg <penberg@kernel.org>
2011-07-22Merge branch 'slab-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/penberg/slab-2.6Linus Torvalds1-2/+103
* 'slab-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/penberg/slab-2.6:
  slab: fix DEBUG_SLAB warning
  slab: shrink sizeof(struct kmem_cache)
  slab: fix DEBUG_SLAB build
  SLUB: Fix missing <linux/stacktrace.h> include
  slub: reduce overhead of slub_debug
  slub: Add method to verify memory is not freed
  slub: Enable backtrace for create/delete points
  slab allocators: Provide generic description of alignment defines
  slab, slub, slob: Unify alignment definition
  slob/lockdep: Fix gfp flags passed to lockdep
2011-07-21treewide: fix potentially dangerous trailing ';' in #defined values/expressionsPhil Carmody1-1/+1
All these are instances of

  #define NAME value;

or

  #define NAME(params_opt) value;

These of course fail to build when used in contexts like

  if(foo $OP NAME)
  while(bar $OP NAME)

and may silently generate the wrong code in contexts such as

  foo = NAME + 1;    /* foo = value; + 1; */
  bar = NAME - 1;    /* bar = value; - 1; */
  baz = NAME & quux; /* baz = value; & quux; */

Reported on comp.lang.c, Message-ID: <ab0d55fe-25e5-482b-811e-c475aa6065c3@c29g2000yqd.googlegroups.com>
Initial analysis of the dangers provided by Keith Thompson in that thread.

There are many more instances of more complicated macros having unnecessary trailing semicolons, but this pile seems to be all of the cases of simple values suffering from the problem. (Thus things that are likely to be found in one of the contexts above, more complicated ones aren't.)

Signed-off-by: Phil Carmody <ext-phil.2.carmody@nokia.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
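A minimal standalone illustration of the failure modes described above (the macro name and values are hypothetical, not from the patch):

  #include <stdio.h>

  #define BUF_SIZE 64;    /* hypothetical macro with the dangerous trailing ';' */

  int main(void)
  {
          /* "BUF_SIZE - 1" expands to "64; - 1", so the initializer ends
           * at the semicolon and "- 1;" becomes a separate, side-effect-free
           * statement: n is silently 64, not 63. */
          int n = BUF_SIZE - 1;

          printf("%d\n", n);    /* prints 64 */

          /* A use inside a condition does not even compile:
           *     if (n < BUF_SIZE) ...   expands to   if (n < 64;) ...
           */
          return 0;
  }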
2011-07-18slub: disable interrupts in cmpxchg_double_slab when falling back to pagelockChristoph Lameter1-4/+45
Split cmpxchg_double_slab into two functions. One for the case where we know that interrupts are disabled (and therefore the fallback does not need to disable interrupts) and one for the other cases where fallback will also disable interrupts. This fixes the issue that __slab_free called cmpxchg_double_slab in some scenarios without disabling interrupts. Tested-by: Hugh Dickins <hughd@google.com> Signed-off-by: Christoph Lameter <cl@linux.com> Signed-off-by: Pekka Enberg <penberg@kernel.org>
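A simplified sketch of the resulting split. In the real code both variants first attempt a lockless cmpxchg_double when the arch supports it, and the irq-off window covers only the page-lock fallback; that detail is flattened here for brevity:

  /* Variant for callers that already run with interrupts disabled:
   * the slab-lock fallback can be taken directly. */
  static inline bool __cmpxchg_double_slab(struct kmem_cache *s, struct page *page,
                  void *freelist_old, unsigned long counters_old,
                  void *freelist_new, unsigned long counters_new)
  {
          bool ok = false;

          slab_lock(page);
          if (page->freelist == freelist_old && page->counters == counters_old) {
                  page->freelist = freelist_new;
                  page->counters = counters_new;
                  ok = true;
          }
          slab_unlock(page);
          return ok;
  }

  /* Variant for callers that may have interrupts enabled: the
   * fallback must fence itself against interrupts. */
  static inline bool cmpxchg_double_slab(struct kmem_cache *s, struct page *page,
                  void *freelist_old, unsigned long counters_old,
                  void *freelist_new, unsigned long counters_new)
  {
          unsigned long flags;
          bool ok;

          local_irq_save(flags);
          ok = __cmpxchg_double_slab(s, page, freelist_old, counters_old,
                                     freelist_new, counters_new);
          local_irq_restore(flags);
          return ok;
  }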
2011-07-07SLUB: Fix missing <linux/stacktrace.h> includePekka Enberg1-0/+1
This fixes the following build breakage, introduced by commit d6543e3 ("slub: Enable backtrace for create/delete points"):

  CC      mm/slub.o
  mm/slub.c: In function ‘set_track’:
  mm/slub.c:428: error: storage size of ‘trace’ isn’t known
  mm/slub.c:435: error: implicit declaration of function ‘save_stack_trace’
  mm/slub.c:428: warning: unused variable ‘trace’
  make[1]: *** [mm/slub.o] Error 1
  make: *** [mm/slub.o] Error 2

Signed-off-by: Pekka Enberg <penberg@kernel.org>
2011-07-07slub: reduce overhead of slub_debugMarcin Slusarz1-2/+34
slub checks for poison one byte at a time, which is highly inefficient and frequently shows up as one of the top CPU eaters in perf top. Joining the reads gives a nice speedup (compiling some project with different options):

                                     make -j12    make clean
  slub_debug disabled:               1m 27s       1.2 s
  slub_debug enabled:                1m 46s       7.6 s
  slub_debug enabled + this patch:   1m 33s       3.2 s

check_bytes still shows up high, but not always at the top.

Signed-off-by: Marcin Slusarz <marcin.slusarz@gmail.com>
Cc: Christoph Lameter <cl@linux-foundation.org>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: Matt Mackall <mpm@selenic.com>
Cc: linux-mm@kvack.org
Signed-off-by: Pekka Enberg <penberg@kernel.org>
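A hedged sketch of the idea, simplified from the patch: the real check_bytes also walks any unaligned prefix byte-wise before switching to word-sized reads.

  /* Return a pointer to the first byte in [start, start+bytes) that
   * differs from 'value', or NULL if the whole range matches.
   * Compares a 64-bit word at a time instead of byte-by-byte. */
  static u8 *check_bytes(u8 *start, u8 value, unsigned int bytes)
  {
          u64 value64 = value;

          value64 |= value64 << 8;
          value64 |= value64 << 16;
          value64 |= value64 << 32;

          /* Simplification: assume 'start' is 8-byte aligned here. */
          while (bytes >= 8) {
                  if (*(u64 *)start != value64)
                          break;          /* mismatch inside this word */
                  start += 8;
                  bytes -= 8;
          }

          /* Finish the tail, or pinpoint the first bad byte of the
           * mismatching word found above. */
          while (bytes) {
                  if (*start != value)
                          return start;
                  start++;
                  bytes--;
          }
          return NULL;                    /* whole range matches 'value' */
  }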
2011-07-07slub: Add method to verify memory is not freedBen Greear1-0/+36
This is for tracking down suspect memory usage. Acked-by: Christoph Lameter <cl@linux.com> Signed-off-by: Ben Greear <greearb@candelatech.com> Signed-off-by: Pekka Enberg <penberg@kernel.org>
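Callers use it along these lines (a hedged sketch: the pointer name is made up, and this assumes the helper returns true when the object is still live, which matches the trace shown in the next entry below):

  /* Only meaningful with CONFIG_SLUB_DEBUG; without it the check
   * effectively always succeeds. */
  if (!verify_mem_not_deleted(suspect_obj))      /* hypothetical pointer */
          return -EINVAL;         /* object already sits on a free list */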
2011-07-07slub: Enable backtrace for create/delete pointsBen Greear1-0/+32
This patch attempts to grab a backtrace for the creation and deletion points of the slub object. When a fault is detected, we can then get a better idea of where the item was deleted.

Example output from debugging some funky nfs/rpc behaviour:

=============================================================================
BUG kmalloc-64: Object is on free-list
-----------------------------------------------------------------------------
INFO: Allocated in rpcb_getport_async+0x39c/0x5a5 [sunrpc] age=381 cpu=3 pid=3750
  __slab_alloc+0x348/0x3ba
  kmem_cache_alloc_trace+0x67/0xe7
  rpcb_getport_async+0x39c/0x5a5 [sunrpc]
  call_bind+0x70/0x75 [sunrpc]
  __rpc_execute+0x78/0x24b [sunrpc]
  rpc_execute+0x3d/0x42 [sunrpc]
  rpc_run_task+0x79/0x81 [sunrpc]
  rpc_call_sync+0x3f/0x60 [sunrpc]
  rpc_ping+0x42/0x58 [sunrpc]
  rpc_create+0x4aa/0x527 [sunrpc]
  nfs_create_rpc_client+0xb1/0xf6 [nfs]
  nfs_init_client+0x3b/0x7d [nfs]
  nfs_get_client+0x453/0x5ab [nfs]
  nfs_create_server+0x10b/0x437 [nfs]
  nfs_fs_mount+0x4ca/0x708 [nfs]
  mount_fs+0x6b/0x152
INFO: Freed in rpcb_map_release+0x3f/0x44 [sunrpc] age=30 cpu=2 pid=29049
  __slab_free+0x57/0x150
  kfree+0x107/0x13a
  rpcb_map_release+0x3f/0x44 [sunrpc]
  rpc_release_calldata+0x12/0x14 [sunrpc]
  rpc_free_task+0x59/0x61 [sunrpc]
  rpc_final_put_task+0x82/0x8a [sunrpc]
  __rpc_execute+0x23c/0x24b [sunrpc]
  rpc_async_schedule+0x10/0x12 [sunrpc]
  process_one_work+0x230/0x41d
  worker_thread+0x133/0x217
  kthread+0x7d/0x85
  kernel_thread_helper+0x4/0x10
INFO: Slab 0xffffea00029aa470 objects=20 used=9 fp=0xffff8800be7830d8 flags=0x20000000004081
INFO: Object 0xffff8800be7830d8 @offset=4312 fp=0xffff8800be7827a8

Bytes b4 0xffff8800be7830c8: 87 a8 96 00 01 00 00 00 5a 5a 5a 5a 5a 5a 5a 5a  ........ZZZZZZZZ
Object 0xffff8800be7830d8:   6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
Object 0xffff8800be7830e8:   6b 6b 6b 6b 01 08 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkk..kkkkkkkkkk
Object 0xffff8800be7830f8:   6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
Object 0xffff8800be783108:   6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b a5  kkkkkkkkkkkkkkk.
Redzone 0xffff8800be783118:  bb bb bb bb bb bb bb bb                          ........
Padding 0xffff8800be783258:  5a 5a 5a 5a 5a 5a 5a 5a                          ZZZZZZZZ

Pid: 29049, comm: kworker/2:2 Not tainted 3.0.0-rc4+ #8
Call Trace:
 [<ffffffff811055c3>] print_trailer+0x131/0x13a
 [<ffffffff81105601>] object_err+0x35/0x3e
 [<ffffffff8110746f>] verify_mem_not_deleted+0x7a/0xb7
 [<ffffffffa02851b5>] rpcb_getport_done+0x23/0x126 [sunrpc]
 [<ffffffffa027d0ba>] rpc_exit_task+0x3f/0x6d [sunrpc]
 [<ffffffffa027d4ab>] __rpc_execute+0x78/0x24b [sunrpc]
 [<ffffffffa027d6c0>] ? rpc_execute+0x42/0x42 [sunrpc]
 [<ffffffffa027d6d0>] rpc_async_schedule+0x10/0x12 [sunrpc]
 [<ffffffff810611b7>] process_one_work+0x230/0x41d
 [<ffffffff81061102>] ? process_one_work+0x17b/0x41d
 [<ffffffff81063613>] worker_thread+0x133/0x217
 [<ffffffff810634e0>] ? manage_workers+0x191/0x191
 [<ffffffff81066e10>] kthread+0x7d/0x85
 [<ffffffff81485924>] kernel_thread_helper+0x4/0x10
 [<ffffffff8147eb18>] ? retint_restore_args+0x13/0x13
 [<ffffffff81066d93>] ? __init_kthread_worker+0x56/0x56
 [<ffffffff81485920>] ? gs_change+0x13/0x13

Acked-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Ben Greear <greearb@candelatech.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
2011-07-02slub: Not necessary to check for empty slab on load_freelistChristoph Lameter1-3/+2
load_freelist is now branched to only if there are objects available, so there is no need to check the object variable for NULL. Signed-off-by: Pekka Enberg <penberg@kernel.org>
2011-07-02slub: fast release on full slabChristoph Lameter1-2/+19
Make deactivation occur implicitly while checking out the current freelist. This avoids one cmpxchg operation on a slab that is now fully in use. Signed-off-by: Christoph Lameter <cl@linux.com> Signed-off-by: Pekka Enberg <penberg@kernel.org>
2011-07-02slub: Add statistics for the case that the current slab does not match the nodeChristoph Lameter1-0/+3
Slub reloads the per cpu slab if the page does not satisfy the NUMA condition. Track those reloads since doing so has a performance impact. Signed-off-by: Christoph Lameter <cl@linux.com> Signed-off-by: Pekka Enberg <penberg@kernel.org>
2011-07-02slub: Get rid of the another_slab labelChristoph Lameter1-6/+5
We can avoid deactivating the slab in special cases if we do the deactivation of slabs in each code flow that leads to new_slab. Signed-off-by: Christoph Lameter <cl@linux.com> Signed-off-by: Pekka Enberg <penberg@kernel.org>
2011-07-02slub: Avoid disabling interrupts in free slowpathChristoph Lameter1-11/+5
Disabling interrupts can be avoided now. However, list operations still require disabling interrupts, since allocations can occur from interrupt contexts and there is no way to perform atomic list operations. The acquisition of the list_lock therefore has to disable interrupts as well. Dropping interrupt handling significantly simplifies the slowpath. Signed-off-by: Christoph Lameter <cl@linux.com> Signed-off-by: Pekka Enberg <penberg@kernel.org>
2011-07-02slub: Disable interrupts in free_debug processingChristoph Lameter1-4/+10
We will be calling free_debug_processing with interrupts disabled in some cases once the later patches are applied. Some of the functions called by free_debug_processing expect interrupts to be off. Signed-off-by: Christoph Lameter <cl@linux.com> Signed-off-by: Pekka Enberg <penberg@kernel.org>
2011-07-02slub: Invert locking and avoid slab lockChristoph Lameter1-77/+52
Locking slabs is no longer necessary if the arch supports cmpxchg operations and no debugging features are used on a slab. If the arch does not support cmpxchg, then we fall back to using the slab lock to perform a cmpxchg-like operation. The patch also changes the lock order: slab locks are now subsumed under the node lock. With that approach slab trylocking is no longer necessary. Signed-off-by: Christoph Lameter <cl@linux.com> Signed-off-by: Pekka Enberg <penberg@kernel.org>
2011-07-02slub: Rework allocator fastpathsChristoph Lameter1-129/+280
Rework the allocation paths so that updates of the page freelist, frozen state and number of objects use cmpxchg_double_slab(). Signed-off-by: Christoph Lameter <cl@linux.com> Signed-off-by: Pekka Enberg <penberg@kernel.org>
2011-07-02slub: Pass kmem_cache struct to lock and freeze slabChristoph Lameter1-7/+8
We need more information about the slab for the cmpxchg implementation. Signed-off-by: Christoph Lameter <cl@linux.com> Acked-by: David Rientjes <rientjes@google.com> Signed-off-by: Pekka Enberg <penberg@kernel.org>
2011-07-02slub: explicit list_lock takingChristoph Lameter1-40/+49
The allocator fastpath rework changes the usage of the list_lock. Remove the list_lock handling from the functions that hide it and move it into the critical sections themselves. This in turn simplifies the support functions (no __ variant needed anymore) and simplifies the lock handling on bootstrap. Inline add_partial since it becomes pretty simple. Signed-off-by: Christoph Lameter <cl@linux.com> Signed-off-by: Pekka Enberg <penberg@kernel.org>
2011-07-02slub: Add cmpxchg_double_slab()Christoph Lameter1-5/+60
Add a function that operates on the second doubleword in the page struct and manipulates the object counters, the freelist and the frozen attribute. Signed-off-by: Christoph Lameter <cl@linux.com> Signed-off-by: Pekka Enberg <penberg@kernel.org>
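At its core the helper is a double-word compare-and-exchange spanning the adjacent page->freelist and page->counters words (a hedged sketch with the argument handling abbreviated; the page-lock fallback for arches without a usable cmpxchg_double is sketched under the 2011-07-18 entry above):

  /* Swap the freelist head and the packed counters
   * (inuse/objects/frozen) of a slab page in one atomic shot. */
  if (cmpxchg_double(&page->freelist, &page->counters,
                     freelist_old, counters_old,
                     freelist_new, counters_new))
          return 1;       /* both words updated together */

  /* Otherwise fall back to a slab_lock()ed compare + store. */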
2011-07-02slub: Move page->frozen handling near where the page->freelist handling occursChristoph Lameter1-2/+6
This is necessary because the frozen bit has to be handled in the same cmpxchg_double with the freelist and the counters. Signed-off-by: Christoph Lameter <cl@linux.com> Acked-by: David Rientjes <rientjes@google.com> Signed-off-by: Pekka Enberg <penberg@kernel.org>
2011-07-02slub: Do not use frozen page flag but a bit in the page countersChristoph Lameter1-6/+6
Do not use a page flag for the frozen bit. It needs to be part of the state that is handled with cmpxchg_double(). So use a bit in the counter struct in the page struct for that purpose. Signed-off-by: Christoph Lameter <cl@linux.com> Signed-off-by: Pekka Enberg <penberg@kernel.org>
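After this change the slab state lives roughly in the following part of struct page (a hedged sketch of the layout; exact field widths per the struct-page rearrangement merged alongside this series):

  struct {                        /* second double-word of struct page */
          void *freelist;         /* first free object in this slab */
          union {
                  unsigned long counters; /* the whole word, so it can go
                                           * through cmpxchg_double()
                                           * together with freelist */
                  struct {
                          unsigned inuse:16;      /* allocated objects */
                          unsigned objects:15;    /* total objects */
                          unsigned frozen:1;      /* slab is per-cpu */
                  };
          };
  };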
2011-07-02slub: Push irq disable into allocate_slab()Christoph Lameter1-10/+13
Do the irq handling in allocate_slab() instead of __slab_alloc(). __slab_alloc() is already cluttered and allocate_slab() is already fiddling around with gfp flags. v6->v7: Only increment ORDER_FALLBACK if we get a page during fallback Signed-off-by: Christoph Lameter <cl@linux.com> Signed-off-by: Pekka Enberg <penberg@kernel.org>
2011-06-03slub: always align cpu_slab to honor cmpxchg_double requirementChris Metcalf1-8/+4
On an architecture without CMPXCHG_LOCAL but with DEBUG_VM enabled, the VM_BUG_ON() in __pcpu_double_call_return_bool() will cause an early panic during boot unless we always align cpu_slab properly. In principle we could remove the alignment-testing VM_BUG_ON() for architectures that don't have CMPXCHG_LOCAL, but leaving it in means that new code will tend not to break x86 even if it is introduced on another platform, and it's low cost to require alignment. Acked-by: David Rientjes <rientjes@google.com> Acked-by: Christoph Lameter <cl@linux.com> Signed-off-by: Chris Metcalf <cmetcalf@tilera.com> Signed-off-by: Pekka Enberg <penberg@kernel.org>
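The fix amounts to always passing the double-word alignment when allocating the per-cpu structure (a hedged sketch of the allocation call):

  /* Request 2*sizeof(void *) alignment unconditionally so the
   * freelist/tid pair meets the cmpxchg_double alignment requirement
   * on every arch, including those that use the emulation. */
  s->cpu_slab = __alloc_percpu(sizeof(struct kmem_cache_cpu),
                               2 * sizeof(void *));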
2011-05-25slub: remove no-longer used 'unlock_out' labelLinus Torvalds1-1/+0
Commit a71ae47a2cbf ("slub: Fix double bit unlock in debug mode") removed the only goto to this label, resulting in

  mm/slub.c: In function '__slab_alloc':
  mm/slub.c:1834: warning: label 'unlock_out' defined but not used

fixed trivially by the removal of the label itself too.

Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Christoph Lameter <cl@linux.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2011-05-25slub: Fix double bit unlock in debug modeChristoph Lameter1-1/+2
Commit 442b06bcea23 ("slub: Remove node check in slab_free") added a call to deactivate_slab() in the debug case in __slab_alloc(), which unlocks the current slab used for allocation. Going to the label 'unlock_out' then does it again. Also, in the debug case we do not need all the other processing that the 'unlock_out' path does. We always fall back to the slow path in the debug case. So the tid update is useless. Similarly, ALLOC_SLOWPATH would just be incremented for all allocations. Also a pretty useless thing. So simply restore irq flags and return the object. Signed-off-by: Christoph Lameter <cl@linux.com> Reported-and-bisected-by: James Morris <jmorris@namei.org> Reported-by: Ingo Molnar <mingo@elte.hu> Reported-by: Jens Axboe <jaxboe@fusionio.com> Cc: Pekka Enberg <penberg@kernel.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2011-05-23Merge branch 'slab/next' into for-linusPekka Enberg1-100/+65
Conflicts: mm/slub.c
2011-05-21slub: Remove node check in slab_freeChristoph Lameter1-1/+3
We can set the page pointer in the percpu structure to NULL to have the same effect as setting c->node to NUMA_NO_NODE. This gets rid of one check in slab_free() that was only used to force slab_free to the slowpath for debugging. We still need to set c->node to NUMA_NO_NODE to force the slab_alloc() fastpath to the slowpath in case of debugging. Signed-off-by: Christoph Lameter <cl@linux.com> Signed-off-by: Pekka Enberg <penberg@kernel.org>
2011-05-17slub: avoid label inside conditionalDavid Rientjes1-3/+3
Jumping to a label inside a conditional is considered poor style, especially considering the current organization of __slab_alloc(). This removes the 'load_from_page' label and just duplicates the three lines of code that it uses:

  c->node = page_to_nid(page);
  c->page = page;
  goto load_freelist;

since it's probably not worth making this a separate helper function.

Acked-by: Christoph Lameter <cl@linux.com>
Signed-off-by: David Rientjes <rientjes@google.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
2011-05-17slub: Make CONFIG_DEBUG_PAGE_ALLOC work with new fastpathChristoph Lameter1-1/+13
Fastpath can do a speculative access to a page that CONFIG_DEBUG_PAGE_ALLOC may have marked as invalid to retrieve the pointer to the next free object. Use probe_kernel_read in that case in order not to cause a page fault. Cc: <stable@kernel.org> # 38.x Reported-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: Christoph Lameter <cl@linux.com> Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: Pekka Enberg <penberg@kernel.org>
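The helper added for this looks roughly like the following (hedged sketch of the patch's approach; probe_kernel_read copies the pointer without risking an unhandled fault):

  /* Fetch the next-free pointer stored inside 'object' without risking
   * a fault: under CONFIG_DEBUG_PAGEALLOC the page backing a
   * speculatively-read object may already be unmapped. */
  static inline void *get_freepointer_safe(struct kmem_cache *s, void *object)
  {
          void *p;

  #ifdef CONFIG_DEBUG_PAGEALLOC
          probe_kernel_read(&p, (void **)(object + s->offset), sizeof(p));
  #else
          p = get_freepointer(s, object);
  #endif
          return p;
  }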
2011-05-17slub: Avoid warning for !CONFIG_SLUB_DEBUGChristoph Lameter1-1/+1
Move the #ifdef so that get_map is only defined if CONFIG_SLUB_DEBUG is defined. Reported-by: David Rientjes <rientjes@google.com> Signed-off-by: Christoph Lameter <cl@linux.com> Signed-off-by: Pekka Enberg <penberg@kernel.org>
2011-05-07slub: Remove CONFIG_CMPXCHG_LOCAL ifdefferyChristoph Lameter1-56/+0
Remove the #ifdefs. This means that the irqsafe_cpu_cmpxchg_double() is used everywhere. There may be performance implications since:

  A. We now have to manage a transaction ID for all arches.
  B. The interrupt holdoff for arches not supporting CONFIG_CMPXCHG_LOCAL is reduced to a very short irqoff section.

There are no multiple irqoff/irqon sequences as a result of this change. Even in the fallback case we only have to do one disable and enable, like before.

Signed-off-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
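For context, the lockless allocation fastpath that this makes unconditional looks roughly like the following fragment of slab_alloc() (a hedged sketch; helper names as used in slub.c of this era):

  redo:
          tid = c->tid;                   /* per-cpu transaction id */
          object = c->freelist;

          if (unlikely(!object || !node_match(c, node))) {
                  /* slowpath: refill from a partial or new slab */
                  object = __slab_alloc(s, gfpflags, node, addr, c);
          } else if (!irqsafe_cpu_cmpxchg_double(
                          s->cpu_slab->freelist, s->cpu_slab->tid,
                          object, tid,
                          get_freepointer(s, object), next_tid(tid))) {
                  /* tid changed: an irq or migration raced us; retry */
                  goto redo;
          }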
2011-05-04slub: Fix the lockless code on 32-bit platforms with no 64-bit cmpxchgThomas Gleixner1-2/+2
The SLUB allocator use of the cmpxchg_double logic was wrong: it actually needs the irq-safe one. That happens automatically when we use the native unlocked 'cmpxchg8b' instruction, but when compiling the kernel for older x86 CPUs that do not support that instruction, we fall back to the generic emulation code.

And if you don't specify that you want the irq-safe version, the generic code ends up just open-coding the cmpxchg8b equivalent without any protection against interrupts or preemption. Which definitely doesn't work for SLUB.

This was reported by Werner Landgraf <w.landgraf@ru.ru>, who saw instability with his distro-kernel that was compiled to support pretty much everything under the sun. Most big Linux distributions tend to compile for PPro and later, and would never have noticed this problem.

This also fixes the prototypes for the irqsafe cmpxchg_double functions to use 'bool' like they should.

[ Btw, that whole "generic code defaults to no protection" design just sounds stupid - if the code needs no protection, there is no reason to use "cmpxchg_double" to begin with. So we should probably just remove the unprotected version entirely as pointless. - Linus ]

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reported-and-tested-by: werner <w.landgraf@ru.ru>
Acked-and-tested-by: Ingo Molnar <mingo@elte.hu>
Acked-by: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>
Link: http://lkml.kernel.org/r/alpine.LFD.2.02.1105041539050.3005@ionos
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2011-04-17slub: Move debug handling in __slab_freeChristoph Lameter1-9/+2
It's easier to read if it's next to the check for the debugging flags. Signed-off-by: Christoph Lameter <cl@linux.com> Signed-off-by: Pekka Enberg <penberg@kernel.org>
2011-04-17slub: Move node determination out of hotpathChristoph Lameter1-4/+5
If the node does not change then there is no need to recalculate the node from the page struct. So move the node determination into the places where we acquire a new slab page. Signed-off-by: Christoph Lameter <cl@linux.com> Signed-off-by: Pekka Enberg <penberg@kernel.org>
2011-04-17slub: Eliminate repeated use of c->page through a new page variableChristoph Lameter1-19/+22
__slab_alloc is full of "c->page" repeats. Let's just use one local variable named "page" for this. It also avoids the need to have another variable called "new". Signed-off-by: Christoph Lameter <cl@linux.com> Signed-off-by: Pekka Enberg <penberg@kernel.org>
2011-04-17slub: get_map() function to establish map of free objects in a slabChristoph Lameter1-12/+22
The bit map of free objects in a slab page is determined in various functions if debugging is enabled. Provide a common function for that purpose. Signed-off-by: Christoph Lameter <cl@linux.com> Signed-off-by: Pekka Enberg <penberg@kernel.org>
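The common helper is essentially the following (a hedged sketch matching the patch description):

  /* Build a bitmap of which objects in 'page' are on its freelist.
   * Bits left clear therefore mark objects that are in use. */
  static void get_map(struct kmem_cache *s, struct page *page, unsigned long *map)
  {
          void *p;
          void *addr = page_address(page);

          for (p = page->freelist; p; p = get_freepointer(s, p))
                  set_bit(slab_index(p, s, addr), map);
  }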
2011-04-17slub: Use NUMA_NO_NODE in get_partialChristoph Lameter1-1/+1
A -1 was left over from the conversion. Signed-off-by: Christoph Lameter <cl@linux.com> Signed-off-by: Pekka Enberg <penberg@kernel.org>
2011-04-12slub: Fix a typo in config nameLi Zefan1-1/+1
There's no config named SLAB_DEBUG; it is a typo of SLUB_DEBUG. Acked-by: Christoph Lameter <cl@linux.com> Signed-off-by: Li Zefan <lizf@cn.fujitsu.com> Signed-off-by: Pekka Enberg <penberg@kernel.org>
2011-03-31Fix common misspellingsLucas De Marchi1-4/+4
Fixes generated by 'codespell' and manually reviewed. Signed-off-by: Lucas De Marchi <lucas.demarchi@profusion.mobi>
2011-03-24SLUB: Write to per cpu data when allocating itChristoph Lameter1-1/+1
It turns out that the cmpxchg16b emulation has to access vmalloced percpu memory with interrupts disabled. If the memory has never been touched before, the fault necessary to establish the mapping will not occur and the kernel will fail on boot. Fix that by reusing the CONFIG_PREEMPT code that writes the cpu number into a field on every cpu. Writing to the per-cpu area beforehand causes the mapping to be established before we get to the cmpxchg16b emulation. Tested-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Christoph Lameter <cl@linux.com> Signed-off-by: Pekka Enberg <penberg@kernel.org>
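A hedged sketch of the reused initialization: writing each CPU's tid field once, with interrupts still enabled, faults in the vmalloc'ed per-cpu mapping before the emulation can touch it with interrupts off.

  static void init_kmem_cache_cpus(struct kmem_cache *s)
  {
          int cpu;

          /* The write establishes the vmalloc'ed per-cpu mapping now;
           * it doubles as the initial transaction id for the lockless
           * fastpath. */
          for_each_possible_cpu(cpu)
                  per_cpu_ptr(s->cpu_slab, cpu)->tid = init_tid(cpu);
  }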
2011-03-24slub: Fix debugobjects with lockless fastpathThomas Gleixner1-2/+2
On Thu, 24 Mar 2011, Ingo Molnar wrote:

> RIP: 0010:[<ffffffff810570a9>] [<ffffffff810570a9>] get_next_timer_interrupt+0x119/0x260

That's a typical timer crash, but you were unable to debug it with debugobjects because commit d3f661d6 broke those.

Cc: Christoph Lameter <cl@linux.com>
Tested-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
2011-03-22slub: Add statistics for this_cmpxchg_double failuresChristoph Lameter1-1/+2
Add some statistics for debugging. Signed-off-by: Christoph Lameter <cl@linux.com> Signed-off-by: Pekka Enberg <penberg@kernel.org>
2011-03-22slub: Add missing irq restore for the OOM pathChristoph Lameter1-0/+3
The OOM path is missing the irq restore in the CONFIG_CMPXCHG_LOCAL case. Signed-off-by: Christoph Lameter <cl@linux.com> Signed-off-by: Pekka Enberg <penberg@kernel.org>
2011-03-20Merge branch 'slub/lockless' into for-linusPekka Enberg1-13/+221
Conflicts: include/linux/slub_def.h
2011-03-20slub: Dont define useless label in the !CONFIG_CMPXCHG_LOCAL caseChristoph Lameter1-1/+3
The redo label needs #ifdeffery. Fixes the following problem introduced by commit 8a5ec0ba42c4 ("Lockless (and preemptless) fastpaths for slub"):

  mm/slub.c: In function 'slab_free':
  mm/slub.c:2124: warning: label 'redo' defined but not used

Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
2011-03-11Merge branch 'slab/rcu' into slab/nextPekka Enberg1-22/+55
Conflicts: mm/slub.c
2011-03-11slub,rcu: don't assume the size of struct rcu_headLai Jiangshan1-5/+25
The size of struct rcu_head may change. When it becomes larger, it will pollute the page array. We reserve some bytes for struct rcu_head when a slab is allocated in this situation. Changed from V1: use VM_BUG_ON instead of BUG_ON. Acked-by: Christoph Lameter <cl@linux.com> Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com> Signed-off-by: Pekka Enberg <penberg@kernel.org>
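The check boils down to the following (a hedged sketch; 'reserved' is the kmem_cache field introduced by the next entry below):

  /* struct rcu_head is normally stashed in the otherwise-unused
   * page->lru space of an RCU-freed slab; if it ever outgrows that,
   * reserve room for it at the end of the slab instead. */
  #define need_reserve_slab_rcu                                         \
          (sizeof(((struct page *)NULL)->lru) < sizeof(struct rcu_head))

          if (need_reserve_slab_rcu && (s->flags & SLAB_DESTROY_BY_RCU))
                  s->reserved = sizeof(struct rcu_head);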
2011-03-11slub: automatically reserve bytes at the end of slabLai Jiangshan1-17/+30
There is no "struct" for slub's slab, it shares with struct page. But struct page is very small, it is insufficient when we need to add some metadata for slab. So we add a field "reserved" to struct kmem_cache, when a slab is allocated, kmem_cache->reserved bytes are automatically reserved at the end of the slab for slab's metadata. Changed from v1: Export the reserved field via sysfs Acked-by: Christoph Lameter <cl@linux.com> Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com> Signed-off-by: Pekka Enberg <penberg@kernel.org>