path: root/mm
2010-03-03  Merge branch 'x86-bootmem-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip  (Linus Torvalds; 5 files, +508/-24)

  * 'x86-bootmem-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (30 commits)
      early_res: Need to save the allocation name in drop_range_partial()
      sparsemem: Fix compilation on PowerPC
      early_res: Add free_early_partial()
      x86: Fix non-bootmem compilation on PowerPC
      core: Move early_res from arch/x86 to kernel/
      x86: Add find_fw_memmap_area
      Move round_up/down to kernel.h
      x86: Make 32bit support NO_BOOTMEM
      early_res: Enhance check_and_double_early_res
      x86: Move back find_e820_area to e820.c
      x86: Add find_early_area_size
      x86: Separate early_res related code from e820.c
      x86: Move bios page reserve early to head32/64.c
      sparsemem: Put mem map for one node together.
      sparsemem: Put usemap for one node together
      x86: Make 64 bit use early_res instead of bootmem before slab
      x86: Only call dma32_reserve_bootmem 64bit !CONFIG_NUMA
      x86: Make early_node_mem get mem > 4 GB if possible
      x86: Dynamically increase early_res array size
      x86: Introduce max_early_res and early_res_count
      ...

2010-03-03Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpuLinus Torvalds3-155/+98
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu: percpu: add __percpu sparse annotations to what's left percpu: add __percpu sparse annotations to fs percpu: add __percpu sparse annotations to core kernel subsystems local_t: Remove leftover local.h this_cpu: Remove pageset_notifier this_cpu: Page allocator conversion percpu, x86: Generic inc / dec percpu instructions local_t: Move local.h include to ringbuffer.c and ring_buffer_benchmark.c module: Use this_cpu_xx to dynamically allocate counters local_t: Remove cpu_local_xx macros percpu: refactor the code in pcpu_[de]populate_chunk() percpu: remove compile warnings caused by __verify_pcpu_ptr() percpu: make accessors check for percpu pointer in sparse percpu: add __percpu for sparse. percpu: make access macros universal percpu: remove per_cpu__ prefix.
2010-03-02  Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next-2.6  (Linus Torvalds; 1 file, +3/-0)

  * git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next-2.6: (1341 commits)
      virtio_net: remove forgotten assignment
      be2net: fix tx completion polling
      sis190: fix cable detect via link status poll
      net: fix protocol sk_buff field
      bridge: Fix build error when IGMP_SNOOPING is not enabled
      bnx2x: Tx barriers and locks
      scm: Only support SCM_RIGHTS on unix domain sockets.
      vhost-net: restart tx poll on sk_sndbuf full
      vhost: fix get_user_pages_fast error handling
      vhost: initialize log eventfd context pointer
      vhost: logging thinko fix
      wireless: convert to use netdev_for_each_mc_addr
      ethtool: do not set some flags, if others failed
      ipoib: returned back addrlen check for mc addresses
      netlink: Adding inode field to /proc/net/netlink
      axnet_cs: add new id
      bridge: Make IGMP snooping depend upon BRIDGE.
      bridge: Add multicast count/interval sysfs entries
      bridge: Add hash elasticity/max sysfs entries
      bridge: Add multicast_snooping sysfs toggle
      ...

  Trivial conflicts in Documentation/feature-removal-schedule.txt

2010-03-01  sparsemem: Fix compilation on PowerPC  (Yinghai Lu; 1 file, +7/-4)

  Stephen reported that a build (powerpc ppc64_defconfig) produced these
  warnings:

      mm/sparse.c: In function 'sparse_init':
      mm/sparse.c:488: warning: unused variable 'map_count'
      mm/sparse.c:484: warning: unused variable 'size2'
      mm/sparse.c:481: warning: unused variable 'map_map'
      mm/sparse.c: At top level:
      mm/sparse.c:442: warning: 'sparse_early_mem_maps_alloc_node' defined but not used

  Introduced by commit 9bdac914240759457175ac0d6529a37d2820bc4d
  ("sparsemem: Put mem map for one node together").

  Conditionalize the bits appropriately based on the setting of
  CONFIG_SPARSEMEM_ALLOC_MEM_MAP_TOGETHER.

  Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
  Tested-by: Stephen Rothwell <sfr@canb.auug.org.au>
  Signed-off-by: Yinghai Lu <yinghai@kernel.org>
  LKML-Reference: <4B895682.1080706@kernel.org>
  Signed-off-by: H. Peter Anvin <hpa@zytor.com>

2010-03-01  Merge branch 'for-linus' of master.kernel.org:/home/rmk/linux-2.6-arm  (Linus Torvalds; 3 files, +10/-10)

  * 'for-linus' of master.kernel.org:/home/rmk/linux-2.6-arm: (100 commits)
      ARM: Eliminate decompressor -Dstatic= PIC hack
      ARM: 5958/1: ARM: U300: fix inverted clk round rate
      ARM: 5956/1: misplaced parentheses
      ARM: 5955/1: ep93xx: move timer defines into core.c and document
      ARM: 5954/1: ep93xx: move gpio interrupt support to gpio.c
      ARM: 5953/1: ep93xx: fix broken build of clock.c
      ARM: 5952/1: ARM: MM: Add ARM_L1_CACHE_SHIFT_6 for handle inside each ARCH Kconfig
      ARM: 5949/1: NUC900 add gpio virtual memory map
      ARM: 5948/1: Enable timer0 to time4 clock support for nuc910
      ARM: 5940/2: ARM: MMCI: remove custom DBG macro and printk
      ARM: make_coherent(): fix problems with highpte, part 2
      MM: Pass a PTE pointer to update_mmu_cache() rather than the PTE itself
      ARM: 5945/1: ep93xx: include correct irq.h in core.c
      ARM: 5933/1: amba-pl011: support hardware flow control
      ARM: 5930/1: Add PKMAP area description to memory.txt.
      ARM: 5929/1: Add checks to detect overlap of memory regions.
      ARM: 5928/1: Change type of VMALLOC_END to unsigned long.
      ARM: 5927/1: Make delimiters of DMA area globally visibly.
      ARM: 5926/1: Add "Virtual kernel memory..." printout.
      ARM: 5920/1: OMAP4: Enable L2 Cache
      ...

  Fix up trivial conflict in arch/arm/mach-mx25/clock.c

2010-02-28  Merge branch 'master' of /home/davem/src/GIT/linux-2.6/  (David S. Miller; 3 files, +18/-22)

  Conflicts:
      drivers/firmware/iscsi_ibft.c

2010-02-26  Merge git://git.kernel.org/pub/scm/linux/kernel/git/lethal/sh-2.6  (Linus Torvalds; 1 file, +1/-1)

  * git://git.kernel.org/pub/scm/linux/kernel/git/lethal/sh-2.6: (187 commits)
      sh: remove dead LED code for migo-r and ms7724se
      sh: ecovec build fix for CONFIG_I2C=n
      sh: ecovec r-standby support
      sh: ms7724se r-standby support
      sh: SH-Mobile R-standby register save/restore
      clocksource: Fix up a registration/IRQ race in the sh drivers.
      sh: ms7724: modify scan_timing for KEYSC
      sh: ms7724: Add sh_sir support
      sh: mach-ecovec24: Add sh_sir support
      sh: wire up SET/GET_UNALIGN_CTL.
      sh: allow alignment fault mode to be configured at kernel boot.
      sh: sh7724: Update FSI/SPU2 clock
      sh: always enable sh7724 vpu_clk and set to 166MHz on Ecovec
      sh: add sh7724 kick callback to clk_div4_table
      sh: introduce struct clk_div4_table
      sh: clock-cpg div4 set_rate() shift fix
      sh: Turn on speculative return for SH7785 and SH7786
      sh: Merge legacy and dynamic PMB modes.
      sh: Use uncached I/O helpers in PMB setup.
      sh: Provide uncached I/O helpers.
      ...

2010-02-26  early_res: Add free_early_partial()  (Yinghai Lu; 1 file, +0/-3)

  To free partial areas in pcpu_setup...

  Reported-by: Peter Zijlstra <peterz@infradead.org>
  Signed-off-by: Yinghai Lu <yinghai@kernel.org>
  Cc: Tejun Heo <tj@kernel.org>
  Cc: Christoph Lameter <cl@linux-foundation.org>
  Cc: Stephen Rothwell <sfr@canb.auug.org.au>
  Cc: Linus Torvalds <torvalds@linux-foundation.org>
  Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
  Cc: Pekka Enberg <penberg@cs.helsinki.fi>
  LKML-Reference: <4B85E245.5030001@kernel.org>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>

2010-02-25  Merge branches 'clks' and 'pnx' into devel  (Russell King; 1 file, +1/-3)

2010-02-22  memcg: fix oom killing a child process in an other cgroup  (KAMEZAWA Hiroyuki; 1 file, +2/-0)

  Presently the oom-killer is memcg-aware: it finds the worst process
  among the processes under the memcg(s) in oom, and then kills that
  victim's child first. But it may kill a child in another cgroup, which
  is no help for recovery and breaks the assumption users have. This
  patch fixes it.

  Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
  Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
  Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
  Reviewed-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
  Acked-by: David Rientjes <rientjes@google.com>
  Cc: <stable@kernel.org>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2010-02-22  x86: Fix non-bootmem compilation on PowerPC  (Yinghai Lu; 1 file, +2/-0)

  These build errors appear on some non-x86 platforms (PowerPC for
  example):

      mm/page_alloc.c: In function '__alloc_memory_core_early':
      mm/page_alloc.c:3468: error: implicit declaration of function 'find_early_area'
      mm/page_alloc.c:3483: error: implicit declaration of function 'reserve_early_without_check'

  The functions are only needed with CONFIG_NO_BOOTMEM.

  Signed-off-by: Yinghai Lu <yinghai@kernel.org>
  Cc: Andrew Morton <akpm@linux-foundation.org>
  Cc: Johannes Weiner <hannes@saeurebad.de>
  Cc: Mel Gorman <mel@csn.ul.ie>
  LKML-Reference: <4B747239.4070907@kernel.org>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>

2010-02-21  mm: Make copy_from_user() in migrate.c statically predictable  (H. Peter Anvin; 1 file, +15/-21)

  x86-32 has had a static test for copy_from_user() overflow for a while.
  This test currently fails in mm/migrate.c, resulting in an
  allyesconfig/allmodconfig build failure on x86-32:

      In function ‘copy_from_user’,
          inlined from ‘do_pages_stat’ at /home/hpa/kernel/git/mm/migrate.c:1012:
      /home/hpa/kernel/git/arch/x86/include/asm/uaccess_32.h:212: error: call to ‘copy_from_user_overflow’ declared

  Make the logic more explicit and therefore easier for gcc to
  understand.

  v2: rewrite the loop entirely using a more normal structure for a
      chunked-data loop (Linus Torvalds)

  Reported-by: Len Brown <lenb@kernel.org>
  Signed-off-by: H. Peter Anvin <hpa@zytor.com>
  Reviewed-and-Tested-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
  Cc: Arjan van de Ven <arjan@linux.kernel.org>
  Cc: Andrew Morton <akpm@linux-foundation.org>
  Cc: Christoph Lameter <cl@linux-foundation.org>
  Cc: Hugh Dickins <hugh.dickins@tiscali.co.uk>
  Cc: Rik van Riel <riel@redhat.com>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

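  A sketch of that chunked-data structure (illustrative only; helper and
  constant names such as do_pages_stat_array() and DO_PAGES_STAT_CHUNK_NR
  are assumptions, not necessarily the committed code):

      /* Copy the user array in fixed-size chunks so the copy_from_user()
       * length is a compile-time constant the static check can verify. */
      #define DO_PAGES_STAT_CHUNK_NR 16

      static int do_pages_stat(struct mm_struct *mm, unsigned long nr_pages,
                               const void __user * __user *pages,
                               int __user *status)
      {
              const void __user *chunk_pages[DO_PAGES_STAT_CHUNK_NR];
              int chunk_status[DO_PAGES_STAT_CHUNK_NR];

              while (nr_pages) {
                      unsigned long chunk_nr = min(nr_pages,
                                      (unsigned long)DO_PAGES_STAT_CHUNK_NR);

                      if (copy_from_user(chunk_pages, pages,
                                         chunk_nr * sizeof(*chunk_pages)))
                              return -EFAULT;

                      /* assumed helper that fills chunk_status[] */
                      do_pages_stat_array(mm, chunk_nr, chunk_pages,
                                          chunk_status);

                      if (copy_to_user(status, chunk_status,
                                       chunk_nr * sizeof(*status)))
                              return -EFAULT;

                      pages += chunk_nr;
                      status += chunk_nr;
                      nr_pages -= chunk_nr;
              }
              return 0;
      }
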
2010-02-20  MM: Pass a PTE pointer to update_mmu_cache() rather than the PTE itself  (Russell King; 3 files, +10/-10)

  On VIVT ARM, when we have multiple shared mappings of the same file in
  the same MM, we need to ensure that we have coherency across all
  copies. We do this via make_coherent() by making the pages uncacheable.

  This used to work fine, until we allowed highmem with highpte - we now
  have a page table which is mapped as required, and is not available for
  modification via update_mmu_cache().

  Ralf Baechle suggested getting rid of the PTE value passed to
  update_mmu_cache():

      On MIPS update_mmu_cache() calls __update_tlb() which walks
      pagetables to construct a pointer to the pte again. Passing a
      pte_t * is much more elegant. Maybe we might even replace the pte
      argument with the pte_t?

  Ben Herrenschmidt would also like the pte pointer for PowerPC:

      Passing the ptep in there is exactly what I want. I want that
      -instead- of the PTE value, because I have issue on some ppc cases,
      for I$/D$ coherency, where set_pte_at() may decide to mask out the
      _PAGE_EXEC.

  So, pass in the mapped page table pointer into update_mmu_cache(), and
  remove the PTE value, updating all implementations and call sites to
  suit.

  Includes a fix from Stephen Rothwell:

      sparc: fix fallout from update_mmu_cache API change

      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>

  Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

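  The API change itself amounts to one argument type; a sketch of the
  before/after prototypes:

      /* Before: implementations received the PTE value (a snapshot). */
      void update_mmu_cache(struct vm_area_struct *vma,
                            unsigned long addr, pte_t pte);

      /* After: implementations receive a pointer to the mapped
       * page-table entry, which they can re-read or modify in place. */
      void update_mmu_cache(struct vm_area_struct *vma,
                            unsigned long addr, pte_t *ptep);
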
2010-02-16  Merge branch 'master' of master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6  (David S. Miller; 5 files, +166/-62)

2010-02-17  percpu: add __percpu sparse annotations to core kernel subsystems  (Tejun Heo; 1 file, +10/-8)

  Add __percpu sparse annotations to core subsystems. These annotations
  make sparse consider percpu variables to be in a different address
  space and warn if they are accessed without going through percpu
  accessors. This patch doesn't affect normal builds.

  Signed-off-by: Tejun Heo <tj@kernel.org>
  Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
  Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Cc: Jens Axboe <axboe@kernel.dk>
  Cc: linux-mm@kvack.org
  Cc: Rusty Russell <rusty@rustcorp.com.au>
  Cc: Dipankar Sarma <dipankar@in.ibm.com>
  Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
  Cc: Andrew Morton <akpm@linux-foundation.org>
  Cc: Eric Biederman <ebiederm@xmission.com>

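  What the annotation buys in practice (a minimal sketch; the variable
  names are illustrative):

      #include <linux/percpu.h>

      static int __percpu *counter;   /* lives in sparse's percpu space */

      static void counter_example(void)
      {
              counter = alloc_percpu(int);
              this_cpu_inc(*counter);  /* accessor: clean under sparse  */
              /* *counter = 0;            direct deref: sparse warns    */
              free_percpu(counter);
      }
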
2010-02-12  sparsemem: Put mem map for one node together.  (Yinghai Lu; 3 files, +187/-2)

  Add vmemmap_alloc_block_buf for mem map only. It will fall back to the
  old way if it cannot get a block that big.

  Before this patch, when a node has 128G RAM installed, its memmap is
  split into two or more parts:

      [    0.000000]  [ffffea0000000000-ffffea003fffffff] PMD -> [ffff880100600000-ffff88013e9fffff] on node 1
      [    0.000000]  [ffffea0040000000-ffffea006fffffff] PMD -> [ffff88013ec00000-ffff88016ebfffff] on node 1
      [    0.000000]  [ffffea0070000000-ffffea007fffffff] PMD -> [ffff882000600000-ffff8820105fffff] on node 0
      [    0.000000]  [ffffea0080000000-ffffea00bfffffff] PMD -> [ffff882010800000-ffff8820507fffff] on node 0
      [    0.000000]  [ffffea00c0000000-ffffea00dfffffff] PMD -> [ffff882050a00000-ffff8820709fffff] on node 0
      [    0.000000]  [ffffea00e0000000-ffffea00ffffffff] PMD -> [ffff884000600000-ffff8840205fffff] on node 2
      [    0.000000]  [ffffea0100000000-ffffea013fffffff] PMD -> [ffff884020800000-ffff8840607fffff] on node 2
      [    0.000000]  [ffffea0140000000-ffffea014fffffff] PMD -> [ffff884060a00000-ffff8840709fffff] on node 2
      [    0.000000]  [ffffea0150000000-ffffea017fffffff] PMD -> [ffff886000600000-ffff8860305fffff] on node 3
      [    0.000000]  [ffffea0180000000-ffffea01bfffffff] PMD -> [ffff886030800000-ffff8860707fffff] on node 3
      [    0.000000]  [ffffea01c0000000-ffffea01ffffffff] PMD -> [ffff888000600000-ffff8880405fffff] on node 4
      [    0.000000]  [ffffea0200000000-ffffea022fffffff] PMD -> [ffff888040800000-ffff8880707fffff] on node 4
      [    0.000000]  [ffffea0230000000-ffffea023fffffff] PMD -> [ffff88a000600000-ffff88a0105fffff] on node 5
      [    0.000000]  [ffffea0240000000-ffffea027fffffff] PMD -> [ffff88a010800000-ffff88a0507fffff] on node 5
      [    0.000000]  [ffffea0280000000-ffffea029fffffff] PMD -> [ffff88a050a00000-ffff88a0709fffff] on node 5
      [    0.000000]  [ffffea02a0000000-ffffea02bfffffff] PMD -> [ffff88c000600000-ffff88c0205fffff] on node 6
      [    0.000000]  [ffffea02c0000000-ffffea02ffffffff] PMD -> [ffff88c020800000-ffff88c0607fffff] on node 6
      [    0.000000]  [ffffea0300000000-ffffea030fffffff] PMD -> [ffff88c060a00000-ffff88c0709fffff] on node 6
      [    0.000000]  [ffffea0310000000-ffffea033fffffff] PMD -> [ffff88e000600000-ffff88e0305fffff] on node 7
      [    0.000000]  [ffffea0340000000-ffffea037fffffff] PMD -> [ffff88e030800000-ffff88e0707fffff] on node 7

  After the patch we get:

      [    0.000000]  [ffffea0000000000-ffffea006fffffff] PMD -> [ffff880100200000-ffff88016e5fffff] on node 0
      [    0.000000]  [ffffea0070000000-ffffea00dfffffff] PMD -> [ffff882000200000-ffff8820701fffff] on node 1
      [    0.000000]  [ffffea00e0000000-ffffea014fffffff] PMD -> [ffff884000200000-ffff8840701fffff] on node 2
      [    0.000000]  [ffffea0150000000-ffffea01bfffffff] PMD -> [ffff886000200000-ffff8860701fffff] on node 3
      [    0.000000]  [ffffea01c0000000-ffffea022fffffff] PMD -> [ffff888000200000-ffff8880701fffff] on node 4
      [    0.000000]  [ffffea0230000000-ffffea029fffffff] PMD -> [ffff88a000200000-ffff88a0701fffff] on node 5
      [    0.000000]  [ffffea02a0000000-ffffea030fffffff] PMD -> [ffff88c000200000-ffff88c0701fffff] on node 6
      [    0.000000]  [ffffea0310000000-ffffea037fffffff] PMD -> [ffff88e000200000-ffff88e0701fffff] on node 7

  -v2: change buf to vmemmap_buf instead according to Ingo;
       also add CONFIG_SPARSEMEM_ALLOC_MEM_MAP_TOGETHER according to Ingo
  -v3: according to Andrew, use sizeof(name) instead of hard coded 15

  Signed-off-by: Yinghai Lu <yinghai@kernel.org>
  LKML-Reference: <1265793639-15071-19-git-send-email-yinghai@kernel.org>
  Cc: Christoph Lameter <cl@linux-foundation.org>
  Acked-by: Christoph Lameter <cl@linux-foundation.org>
  Signed-off-by: H. Peter Anvin <hpa@zytor.com>

2010-02-12  sparsemem: Put usemap for one node together  (Yinghai Lu; 1 file, +66/-18)

  This could save some buffer space, compared with allocating the usemaps
  one by one, and it helps systems that are going to use early_res
  instead of bootmem: fewer entries in early_res make searching faster on
  systems with more memory.

  Signed-off-by: Yinghai Lu <yinghai@kernel.org>
  LKML-Reference: <1265793639-15071-18-git-send-email-yinghai@kernel.org>
  Signed-off-by: H. Peter Anvin <hpa@zytor.com>

2010-02-12  x86: Make 64 bit use early_res instead of bootmem before slab  (Yinghai Lu; 4 files, +254/-5)

  Finally we can use early_res to replace bootmem for x86_64; whether it
  is enabled is still selected by CONFIG_NO_BOOTMEM.

  -v2: fix 32bit compiling about MAX_DMA32_PFN
  -v3: folded bug fix from LKML message below

  Signed-off-by: Yinghai Lu <yinghai@kernel.org>
  LKML-Reference: <4B747239.4070907@kernel.org>
  Signed-off-by: H. Peter Anvin <hpa@zytor.com>

2010-02-08  Merge branch 'sh/dmaengine'  (Paul Mundt; 1 file, +3/-0)

  Conflicts:
      arch/sh/drivers/dma/dma-sh.c

2010-02-06  Fix potential crash with sys_move_pages  (Linus Torvalds; 1 file, +3/-0)

  We incorrectly depended on the 'node_state/node_isset()' functions
  testing the node range, rather than checking it explicitly. That's not
  reliable, even if it might often happen to work. So do the proper
  explicit test.

  Reported-by: Marcus Meissner <meissner@suse.de>
  Acked-and-tested-by: Brice Goglin <Brice.Goglin@inria.fr>
  Acked-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
  Cc: stable@kernel.org
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

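  A sketch of the explicit test (the error label is illustrative):

      /* Range-check the node number ourselves; node_state()/node_isset()
       * must not be trusted to reject out-of-range values. */
      if (node < 0 || node >= MAX_NUMNODES)
              goto out;               /* e.g. return -ENODEV */

      if (!node_state(node, N_HIGH_MEMORY))
              goto out;
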
2010-02-05  Merge branch 'sh/stable-updates'  (Paul Mundt; 3 files, +92/-28)

2010-02-02  hugetlb: fix section mismatches  (Jeff Mahoney; 1 file, +3/-4)

  hugetlb_sysfs_add_hstate is called by hugetlb_register_node directly
  during init and also indirectly via sysfs after init.

  This patch removes the __init tag from hugetlb_sysfs_add_hstate.

  Signed-off-by: Jeff Mahoney <jeffm@suse.com>
  Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2010-02-02  mm: flush dcache before writing into page to avoid alias  (anfei zhou; 1 file, +3/-0)

  A cache alias problem will occur if changes to a user shared mapping
  are not flushed before copying: the user and kernel mappings may then
  land on two different cache lines, and it is impossible to guarantee
  coherence after iov_iter_copy_from_user_atomic. So the right sequence
  should be:

      flush_dcache_page(page);
      kmap_atomic(page);
      write to page;
      kunmap_atomic(page);
      flush_dcache_page(page);

  More precisely, we might create two new APIs, flush_dcache_user_page
  and flush_dcache_kern_page, to replace the two flush_dcache_page calls
  accordingly.

  Here is a snippet tested on omap2430 with a VIPT cache, and I think it
  is not ARM-specific:

      int val = 0x11111111;
      fd = open("abc", O_RDWR);
      addr = mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0);
      *(addr+0) = 0x44444444;
      tmp = *(addr+0);
      *(addr+1) = 0x77777777;
      write(fd, &val, sizeof(int));
      close(fd);

  The results are not always 0x11111111 0x77777777 at the beginning as
  expected. Sometimes we see 0x44444444 0x77777777.

  Signed-off-by: Anfei <anfei.zhou@gmail.com>
  Cc: Russell King <rmk@arm.linux.org.uk>
  Cc: Miklos Szeredi <miklos@szeredi.hu>
  Cc: Nick Piggin <nickpiggin@yahoo.com.au>
  Cc: <linux-arch@vger.kernel.org>
  Cc: <stable@kernel.org>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2010-02-02  mm: purge fragmented percpu vmap blocks  (Nick Piggin; 1 file, +81/-11)

  Improve handling of fragmented per-CPU vmaps. We previously didn't free
  up per-CPU maps until all of their addresses had been used and freed,
  so fragmented blocks could fill up vmalloc space even if they actually
  had no active vmap regions within them.

  Add some logic to allow all CPUs to have these blocks purged in the
  case of failure to allocate a new vm area, and also some logic to trim
  such blocks of the current CPU if we hit them in the allocation path
  (so as to avoid a large build-up of them).

  Christoph reported some vmap allocation failures when using the per-CPU
  vmap APIs in XFS, which cannot be reproduced after this patch and the
  previous bug fix.

  Cc: linux-mm@kvack.org
  Cc: stable@kernel.org
  Tested-by: Christoph Hellwig <hch@infradead.org>
  Signed-off-by: Nick Piggin <npiggin@suse.de>
  --
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2010-02-02  mm: percpu-vmap fix RCU list walking  (Nick Piggin; 1 file, +6/-14)

  RCU list walking of the per-cpu vmap cache was broken. It did not use
  RCU primitives, and also the union of free_list and rcu_head is
  obviously wrong (because free_list is indeed the list we are RCU
  walking).

  While we are there, remove a couple of unused fields from an earlier
  iteration.

  These APIs aren't actually used anywhere, because of problems with the
  XFS conversion. Christoph has now verified that the problems are solved
  with these patches. Also it is an exported interface, so I think it
  will be good to be merged now (and Christoph wants to get the XFS
  changes into their local tree).

  Cc: stable@kernel.org
  Cc: linux-mm@kvack.org
  Tested-by: Christoph Hellwig <hch@infradead.org>
  Signed-off-by: Nick Piggin <npiggin@suse.de>
  --
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2010-02-02  Merge branch 'master' into percpu  (Tejun Heo; 11 files, +192/-103)

2010-02-02  Merge branch 'sh/stable-updates'  (Paul Mundt; 3 files, +72/-37)

2010-01-29  mm: fix migratetype bug which slowed swapping  (Hugh Dickins; 1 file, +3/-2)

  After memory pressure has forced it to dip into the reserves, 2.6.32's
  5f8dcc21211a3d4e3a7a5ca366b469fb88117f61 "page-allocator: split per-cpu
  list into one-list-per-migrate-type" has been returning MIGRATE_RESERVE
  pages to the MIGRATE_MOVABLE free_list: in some sense depleting
  reserves.

  Fix that in the most straightforward way (which, considering the
  overheads of alternative approaches, is Mel's preference): the right
  migratetype is already in page_private(page), but free_pcppages_bulk()
  wasn't using it.

  How did this bug show up? As a 20% slowdown in my tmpfs loop kbuild
  swapping tests, on PowerMac G5 with SLUB allocator. Bisecting to that
  commit was easy, but explaining the magnitude of the slowdown not easy.

  The same effect appears, but much less markedly, with SLAB, and even
  less markedly on other machines (the PowerMac divides into fewer zones
  than x86, I think that may be a factor). We guess that lumpy reclaim of
  short-lived high-order pages is implicated in some way, and probably
  this bug has been tickling a poor decision somewhere in page reclaim.

  But instrumentation hasn't told me much, I've run out of time and
  imagination to determine exactly what's going on, and shouldn't hold up
  the fix any longer: it's valid, and might even fix other misbehaviours.

  Signed-off-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
  Acked-by: Mel Gorman <mel@csn.ul.ie>
  Cc: stable@kernel.org
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

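  The shape of the fix in free_pcppages_bulk(), per the description above
  (a sketch, assuming the surrounding bulk-free loop):

      /* Free the page to the list matching the migratetype it was
       * queued under, stashed in page_private(page), rather than the
       * migratetype of the pcp list currently being drained. */
      __free_one_page(page, zone, 0, page_private(page));
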
2010-01-27  mm: add new 'read_cache_page_gfp()' helper function  (Linus Torvalds; 1 file, +68/-32)

  It's a simplified 'read_cache_page()' which takes a page allocation
  flag, so that different paths can control how aggressive the memory
  allocations are that populate an address space.

  In particular, the intel GPU object mapping code wants to be able to do
  a certain amount of its own internal memory management by automatically
  shrinking the address space when memory starts getting tight. This
  allows it to dynamically use different memory allocation policies on a
  per-allocation basis, rather than depend on the (static) address space
  gfp policy.

  The actual new function is a one-liner, but re-organizing the helper
  functions to the point where you can do this with a single line of code
  is what most of the patch is all about.

  Tested-by: Chris Wilson <chris@chris-wilson.co.uk>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

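  Illustrative usage in the spirit of the GPU case above (mapping and
  index are assumed to be in scope; the gfp choice is an example):

      struct page *page;

      /* Fail fast under pressure so the caller can shrink its own
       * caches and retry, instead of pushing toward the OOM killer. */
      page = read_cache_page_gfp(mapping, index,
                                 GFP_KERNEL | __GFP_NORETRY);
      if (IS_ERR(page))
              return PTR_ERR(page);
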
2010-01-23  Merge branch 'master' of master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6  (David S. Miller; 14 files, +205/-120)

2010-01-21  vmalloc: remove BUG_ON due to racy counting of VM_LAZY_FREE  (Yongseok Koh; 1 file, +1/-3)

  In free_unmap_area_noflush(), va->flags is marked as VM_LAZY_FREE
  first, and then vmap_lazy_nr is increased atomically. But in
  __purge_vmap_area_lazy(), while traversing vmap_area_list, nr is
  counted by checking whether VM_LAZY_FREE is set in va->flags. After
  counting nr, the kernel reads vmap_lazy_nr atomically and checks a
  BUG_ON condition that nr must not be greater than vmap_lazy_nr, to
  prevent vmap_lazy_nr from going negative.

  The problem is that, if interrupted right after marking VM_LAZY_FREE,
  the increment of vmap_lazy_nr can be delayed. Consequently, the BUG_ON
  condition can be met because nr is counted higher than vmap_lazy_nr.
  This is highly probable when vmalloc/vfree are called frequently, and
  has been verified by adding a delay between marking VM_LAZY_FREE and
  increasing vmap_lazy_nr in free_unmap_area_noflush().

  Even though vmap_lazy_nr is for checking a high watermark, it was never
  a strict watermark. The BUG_ON condition is meant to keep vmap_lazy_nr
  from going negative, but vmap_lazy_nr is a signed variable, so it can
  legitimately dip negative temporarily. Consequently, removing the
  BUG_ON condition is proper.

  A possible BUG_ON message is like the below.

      kernel BUG at mm/vmalloc.c:517!
      invalid opcode: 0000 [#1] SMP
      EIP: 0060:[<c04824a4>] EFLAGS: 00010297 CPU: 3
      EIP is at __purge_vmap_area_lazy+0x144/0x150
      EAX: ee8a8818 EBX: c08e77d4 ECX: e7c7ae40 EDX: c08e77ec
      ESI: 000081fe EDI: e7c7ae60 EBP: e7c7ae64 ESP: e7c7ae3c
      DS: 007b ES: 007b FS: 00d8 GS: 0033 SS: 0068
      Call Trace:
      [<c0482ad9>] free_unmap_vmap_area_noflush+0x69/0x70
      [<c0482b02>] remove_vm_area+0x22/0x70
      [<c0482c15>] __vunmap+0x45/0xe0
      [<c04831ec>] vmalloc+0x2c/0x30
      Code: 8d 59 e0 eb 04 66 90 89 cb 89 d0 e8 87 fe ff ff 8b 43 20 89 da 8d 48 e0 8d 43 20 3b 04 24 75 e7 fe 05 a8 a5 a3 c0 e9 78 ff ff ff <0f> 0b eb fe 90 8d b4 26 00 00 00 00 56 89 c6 b8 ac a5 a3 c0 31
      EIP: [<c04824a4>] __purge_vmap_area_lazy+0x144/0x150 SS:ESP 0068:e7c7ae3c

  [ See also http://marc.info/?l=linux-kernel&m=126335856228090&w=2 ]

  Signed-off-by: Yongseok Koh <yongseok.koh@samsung.com>
  Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
  Cc: Nick Piggin <npiggin@suse.de>
  Cc: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2010-01-18  Merge branch 'sh/stable-updates'  (Paul Mundt; 6 files, +95/-57)

2010-01-16  page allocator: update NR_FREE_PAGES only when necessary  (KOSAKI Motohiro; 1 file, +1/-1)

  Commit f2260e6b (page allocator: update NR_FREE_PAGES only as
  necessary) introduced one minor regression: if __rmqueue() failed, the
  NR_FREE_PAGES stat went wrong. This patch fixes it.

  Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
  Cc: Mel Gorman <mel@csn.ul.ie>
  Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
  Reported-by: Huang Shijie <shijie8@gmail.com>
  Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
  Cc: <stable@kernel.org>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

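  The ordering issue, sketched (not the full allocation path; the failed
  label is illustrative):

      page = __rmqueue(zone, order, migratetype);
      if (!page)
              goto failed;    /* nothing allocated: leave the stat alone */

      /* only account the pages once __rmqueue() has succeeded */
      __mod_zone_page_state(zone, NR_FREE_PAGES, -(1 << order));
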
2010-01-16  nommu: fix shared mmap after truncate shrinkage problems  (David Howells; 1 file, +62/-0)

  Fix a problem in NOMMU mmap with ramfs whereby a shared mmap can happen
  over the end of a truncation. The problem is that
  ramfs_nommu_check_mappings() checks the reduced file size against the
  VMA tree, but not the vm_region tree.

  The following sequence of events can cause the problem:

      fd = open("/tmp/x", O_RDWR|O_TRUNC|O_CREAT, 0600);
      ftruncate(fd, 32 * 1024);
      a = mmap(NULL, 32 * 1024, PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0);
      b = mmap(NULL, 16 * 1024, PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0);
      munmap(a, 32 * 1024);
      ftruncate(fd, 16 * 1024);
      c = mmap(NULL, 32 * 1024, PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0);

  Mapping 'a' creates a vm_region covering 32KB of the file. Mapping 'b'
  sees that the vm_region from 'a' is covering the region it wants and so
  shares it, pinning it in memory.

  Mapping 'a' then goes away and the file is truncated to the end of VMA
  'b'. However, the region allocated by 'a' is still in effect, and has
  _not_ been reduced.

  Mapping 'c' is then created, and because there's a vm_region covering
  the desired region, get_unmapped_area() is _not_ called to repeat the
  check, and the mapping is granted, even though the pages from the
  latter half of the mapping have been discarded.

  However:

      d = mmap(NULL, 16 * 1024, PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0);

  Mapping 'd' should work, and should end up sharing the region allocated
  by 'a'.

  To deal with this, we shrink the vm_region struct during the
  truncation, lest do_mmap_pgoff() take it as licence to share the full
  region automatically without calling the get_unmapped_area() file op
  again.

  Signed-off-by: David Howells <dhowells@redhat.com>
  Acked-by: Al Viro <viro@zeniv.linux.org.uk>
  Cc: Greg Ungerer <gerg@snapgear.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2010-01-16  nommu: don't need get_unmapped_area() for NOMMU  (David Howells; 2 files, +1/-22)

  get_unmapped_area() is unnecessary for NOMMU as no-one calls it.

  Signed-off-by: David Howells <dhowells@redhat.com>
  Acked-by: Al Viro <viro@zeniv.linux.org.uk>
  Cc: Greg Ungerer <gerg@snapgear.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2010-01-16  nommu: remove a superfluous check of vm_region::vm_usage  (David Howells; 1 file, +3/-4)

  In split_vma(), there's no need to check if the VMA being split has a
  region that's in use by more than one VMA because:

    (1) The preceding test prohibits splitting of non-anonymous VMAs and
        regions (eg: file or chardev backed VMAs).

    (2) Anonymous regions can't be mapped multiple times because there's
        no handle by which to refer to the already existing region.

    (3) If a VMA has previously been split, then the region backing it
        has also been split into two regions, each of usage 1.

  Signed-off-by: David Howells <dhowells@redhat.com>
  Acked-by: Al Viro <viro@zeniv.linux.org.uk>
  Cc: Greg Ungerer <gerg@snapgear.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2010-01-16  nommu: struct vm_region's vm_usage count need not be atomic  (David Howells; 1 file, +7/-7)

  The vm_usage count field in struct vm_region does not need to be
  atomic, as it's only ever modified whilst nommu_region_sem is write
  locked.

  Signed-off-by: David Howells <dhowells@redhat.com>
  Acked-by: Al Viro <viro@zeniv.linux.org.uk>
  Cc: Greg Ungerer <gerg@snapgear.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2010-01-16  memcg: ensure list is empty at rmdir  (Daisuke Nishimura; 1 file, +4/-7)

  Current mem_cgroup_force_empty() only ensures mem->res.usage == 0 on
  success. But this doesn't guarantee memcg's LRU is really empty,
  because there are some cases in which !PageCgroupUsed pages exist on
  memcg's LRU. For example:

    - Pages can be uncharged by their owner process while they are on
      LRU.
    - There is a race between mem_cgroup_add_lru_list() and
      __mem_cgroup_uncharge_common().

  So there can be a case in which the usage is zero but some of the LRUs
  are not empty. OTOH, mem_cgroup_del_lru_list(), which can be called
  asynchronously with rmdir, accesses the mem_cgroup, so this access can
  cause a problem if it races with rmdir, because the mem_cgroup might
  have been freed by rmdir.

  Actually, I saw a bug which seems to be caused by this race.

      [1530745.949906] BUG: unable to handle kernel NULL pointer dereference at 0000000000000230
      [1530745.950651] IP: [<ffffffff810fbc11>] mem_cgroup_del_lru_list+0x30/0x80
      [1530745.950651] PGD 3863de067 PUD 3862c7067 PMD 0
      [1530745.950651] Oops: 0002 [#1] SMP
      [1530745.950651] last sysfs file: /sys/devices/system/cpu/cpu7/cache/index1/shared_cpu_map
      [1530745.950651] CPU 3
      [1530745.950651] Modules linked in: configs ipt_REJECT xt_tcpudp iptable_filter ip_tables x_tables bridge stp nfsd nfs_acl auth_rpcgss exportfs autofs4 hidp rfcomm l2cap crc16 bluetooth lockd sunrpc ib_iser rdma_cm ib_cm iw_cm ib_sa ib_mad ib_core ib_addr iscsi_tcp bnx2i cnic uio ipv6 cxgb3i cxgb3 mdio libiscsi_tcp libiscsi scsi_transport_iscsi dm_mirror dm_multipath scsi_dh video output sbs sbshc battery ac lp kvm_intel kvm sg ide_cd_mod cdrom serio_raw tpm_tis tpm tpm_bios acpi_memhotplug button parport_pc parport rtc_cmos rtc_core rtc_lib e1000 i2c_i801 i2c_core pcspkr dm_region_hash dm_log dm_mod ata_piix libata shpchp megaraid_mbox sd_mod scsi_mod megaraid_mm ext3 jbd uhci_hcd ohci_hcd ehci_hcd [last unloaded: freq_table]
      [1530745.950651] Pid: 19653, comm: shmem_test_02 Tainted: G M 2.6.32-mm1-00701-g2b04386 #3 Express5800/140Rd-4 [N8100-1065]
      [1530745.950651] RIP: 0010:[<ffffffff810fbc11>] [<ffffffff810fbc11>] mem_cgroup_del_lru_list+0x30/0x80
      [1530745.950651] RSP: 0018:ffff8803863ddcb8 EFLAGS: 00010002
      [1530745.950651] RAX: 00000000000001e0 RBX: ffff8803abc02238 RCX: 00000000000001e0
      [1530745.950651] RDX: 0000000000000000 RSI: ffff88038611a000 RDI: ffff8803abc02238
      [1530745.950651] RBP: ffff8803863ddcc8 R08: 0000000000000002 R09: ffff8803a04c8643
      [1530745.950651] R10: 0000000000000000 R11: ffffffff810c7333 R12: 0000000000000000
      [1530745.950651] R13: ffff880000017f00 R14: 0000000000000092 R15: ffff8800179d0310
      [1530745.950651] FS: 0000000000000000(0000) GS:ffff880017800000(0000) knlGS:0000000000000000
      [1530745.950651] CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
      [1530745.950651] CR2: 0000000000000230 CR3: 0000000379d87000 CR4: 00000000000006e0
      [1530745.950651] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
      [1530745.950651] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
      [1530745.950651] Process shmem_test_02 (pid: 19653, threadinfo ffff8803863dc000, task ffff88038612a8a0)
      [1530745.950651] Stack:
      [1530745.950651]  ffffea00040c2fe8 0000000000000000 ffff8803863ddd98 ffffffff810c739a
      [1530745.950651] <0> 00000000863ddd18 000000000000000c 0000000000000000 0000000000000000
      [1530745.950651] <0> 0000000000000002 0000000000000000 ffff8803863ddd68 0000000000000046
      [1530745.950651] Call Trace:
      [1530745.950651]  [<ffffffff810c739a>] release_pages+0x142/0x1e7
      [1530745.950651]  [<ffffffff810c778f>] ? pagevec_move_tail+0x6e/0x112
      [1530745.950651]  [<ffffffff810c781e>] pagevec_move_tail+0xfd/0x112
      [1530745.950651]  [<ffffffff810c78a9>] lru_add_drain+0x76/0x94
      [1530745.950651]  [<ffffffff810dba0c>] exit_mmap+0x6e/0x145
      [1530745.950651]  [<ffffffff8103f52d>] mmput+0x5e/0xcf
      [1530745.950651]  [<ffffffff81043ea8>] exit_mm+0x11c/0x129
      [1530745.950651]  [<ffffffff8108fb29>] ? audit_free+0x196/0x1c9
      [1530745.950651]  [<ffffffff81045353>] do_exit+0x1f5/0x6b7
      [1530745.950651]  [<ffffffff8106133f>] ? up_read+0x2b/0x2f
      [1530745.950651]  [<ffffffff8137d187>] ? lockdep_sys_exit_thunk+0x35/0x67
      [1530745.950651]  [<ffffffff81045898>] do_group_exit+0x83/0xb0
      [1530745.950651]  [<ffffffff810458dc>] sys_exit_group+0x17/0x1b
      [1530745.950651]  [<ffffffff81002c1b>] system_call_fastpath+0x16/0x1b
      [1530745.950651] Code: 54 53 0f 1f 44 00 00 83 3d cc 29 7c 00 00 41 89 f4 75 63 eb 4e 48 83 7b 08 00 75 04 0f 0b eb fe 48 89 df e8 18 f3 ff ff 44 89 e2 <48> ff 4c d0 50 48 8b 05 2b 2d 7c 00 48 39 43 08 74 39 48 8b 4b
      [1530745.950651] RIP [<ffffffff810fbc11>] mem_cgroup_del_lru_list+0x30/0x80
      [1530745.950651] RSP <ffff8803863ddcb8>
      [1530745.950651] CR2: 0000000000000230
      [1530745.950651] ---[ end trace c3419c1bb8acc34f ]---
      [1530745.950651] Fixing recursive fault but reboot is needed!

  The problem here is that pages on the LRU may contain a pointer to a
  stale memcg. To make res->usage 0, all pages on the memcg must be
  uncharged or moved to another (parent) memcg. Moved page_cgroups have
  already been removed from the original LRU, but uncharged page_cgroups
  contain a pointer to the memcg without the PCG_USED bit. (This
  asynchronous LRU work is for improving performance.) If the PCG_USED
  bit is not set, a page_cgroup will never be added to the memcg's LRU.
  So, pages not on the LRU never access the stale pointer. What we have
  to take care of is page_cgroups _on_ the LRU list.

  This patch fixes this problem by making mem_cgroup_force_empty() visit
  all LRUs before exiting its loop and guarantee there are no pages on
  its LRU.

  Signed-off-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
  Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
  Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
  Cc: <stable@kernel.org>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2010-01-16  vmscan: kswapd: don't retry balance_pgdat() if all zones are unreclaimable  (KOSAKI Motohiro; 1 file, +3/-0)

  Commit f50de2d3 (vmscan: have kswapd sleep for a short interval and
  double check it should be asleep) can cause kswapd to enter an infinite
  loop if running on a single-CPU system. If all zones are unreclaimable,
  sleeping_prematurely() returns 1 and kswapd calls balance_pgdat()
  again. But that is totally meaningless: balance_pgdat() does nothing
  against unreclaimable zones!

  Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
  Cc: Mel Gorman <mel@csn.ul.ie>
  Reported-by: Will Newton <will.newton@gmail.com>
  Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
  Reviewed-by: Rik van Riel <riel@redhat.com>
  Tested-by: Will Newton <will.newton@gmail.com>
  Reviewed-by: Wu Fengguang <fengguang.wu@intel.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2010-01-16  mm/page_alloc: fix the range check for backward merging  (Kazuhisa Ichikawa; 1 file, +1/-1)

  The current check for 'backward merging' within add_active_range() does
  not seem correct. start_pfn must be compared against
  early_node_map[i].start_pfn (and NOT against .end_pfn) to find out
  whether the new region is backward-mergeable with the existing range.

  Signed-off-by: Kazuhisa Ichikawa <ki@epsilou.com>
  Acked-by: David Rientjes <rientjes@google.com>
  Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
  Cc: Mel Gorman <mel@csn.ul.ie>
  Cc: Christoph Lameter <cl@linux-foundation.org>
  Cc: Johannes Weiner <hannes@cmpxchg.org>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

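  A sketch of the corrected condition (field names follow the description
  above; the merge body is abbreviated and not necessarily the committed
  code):

      /* Backward merge: the new range must start before the existing
       * range's start and still reach it - so compare start_pfn with
       * .start_pfn, not .end_pfn. */
      if (start_pfn < early_node_map[i].start_pfn &&
          end_pfn >= early_node_map[i].start_pfn) {
              early_node_map[i].start_pfn = start_pfn;
              return;
      }
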
2010-01-15  mm: export use_mm/unuse_mm to modules  (Michael S. Tsirkin; 1 file, +3/-0)

  The vhost net module wants to do copy to/from user from a kernel
  thread, which needs use_mm. Export it to modules.

  Acked-by: Andrea Arcangeli <aarcange@redhat.com>
  Acked-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>

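  Sketch of the intended kernel-thread usage (vhost-style; mm, buf, uaddr
  and len are assumed to be in scope):

      use_mm(mm);                     /* adopt the target process's mm */
      ret = copy_from_user(buf, uaddr, len);
      unuse_mm(mm);                   /* drop it again when done       */
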
2010-01-13  vfs: Fix vmtruncate() regression  (OGAWA Hirofumi; 1 file, +14/-16)

  If __block_prepare_write() fails in block_write_begin(), the allocated
  blocks can be outside of ->i_size. But the new truncate_pagecache() in
  vmtruncate() does nothing if new < old, which means the above usage is
  not working anymore.

  So, this patch fixes it by removing the "new < old" check. It would
  need more cleanup/change, but the -rc cycle and the truncate rework are
  in progress, so this fixes it with a minimal change.

  Acked-by: Nick Piggin <npiggin@suse.de>
  Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

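  A sketch of truncate_pagecache() with the check gone (close to, but not
  guaranteed to be, the committed code):

      void truncate_pagecache(struct inode *inode, loff_t old, loff_t new)
      {
              struct address_space *mapping = inode->i_mapping;

              /* No "if (new < old)" guard: even when the file is not
               * shrinking, blocks instantiated past i_size by a failed
               * write must still be unmapped and truncated away. */
              unmap_mapping_range(mapping, new + PAGE_SIZE - 1, 0, 1);
              truncate_inode_pages(mapping, new);
              unmap_mapping_range(mapping, new + PAGE_SIZE - 1, 0, 1);
      }
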
2010-01-13  Merge branches 'sh/xstate', 'sh/hw-breakpoints' and 'sh/stable-updates'  (Paul Mundt; 7 files, +92/-55)

2010-01-11  mm: hugetlb: fix clear_huge_page()  (Andrea Arcangeli; 1 file, +1/-1)

  sz is in bytes, MAX_ORDER_NR_PAGES is in pages.

  Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
  Acked-by: David Gibson <dwg@au1.ibm.com>
  Cc: Mel Gorman <mel@csn.ul.ie>
  Cc: David Rientjes <rientjes@google.com>
  Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2010-01-11  percpu: avoid calling __pcpu_ptr_to_addr(NULL)  (Andrew Morton; 1 file, +3/-1)

  __pcpu_ptr_to_addr() can be overridden by the architecture and might
  not behave well if passed a NULL pointer. So avoid calling it until we
  have verified that its arg is not NULL.

  Cc: Rusty Russell <rusty@rustcorp.com.au>
  Cc: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>
  Acked-by: Tejun Heo <tj@kernel.org>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

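  The fix is an ordering change in free_percpu(); a sketch:

      void free_percpu(void __percpu *ptr)
      {
              void *addr;

              if (!ptr)
                      return;         /* bail out before any translation */

              addr = __pcpu_ptr_to_addr(ptr);  /* now safe: non-NULL arg */
              /* ... look up the chunk from addr and free the region ... */
      }
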
2010-01-07  maccess,probe_kernel: Allow arch specific override probe_kernel_(read|write)  (Jason Wessel; 1 file, +9/-2)

  Some archs, such as blackfin, would like to have arch-specific
  probe_kernel_read() and probe_kernel_write() implementations which can
  fall back to the generic implementation if no special operations are
  needed.

  CC: Thomas Gleixner <tglx@linutronix.de>
  CC: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
  Signed-off-by: Mike Frysinger <vapier@gentoo.org>

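  One plausible shape for the hook is a weak default (a sketch; whether
  the generic fallback is really named __probe_kernel_read() is an
  assumption):

      /* Generic version, overridable by a strong arch definition that
       * handles its special cases and then falls back to the generic. */
      long __weak probe_kernel_read(void *dst, const void *src, size_t size)
      {
              return __probe_kernel_read(dst, src, size);
      }
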
2010-01-06  NOMMU: Use copy_*_user_page() in access_process_vm()  (Jie Zhang; 1 file, +4/-2)

  The MMU code uses the copy_*_user_page() variants in
  access_process_vm() rather than copy_*_user(), as the former include an
  icache flush. This is important when doing things like setting software
  breakpoints with gdb. So switch the NOMMU code over to do the same.

  This patch makes the reasonable assumption that copy_from_user_page()
  won't fail - which is probably fine, as we've checked the VMA from
  which we're copying is usable, and the copy is not allowed to cross
  VMAs. The one case where it might go wrong is if the VMA is a device
  rather than RAM, and that device returns an error - in which case
  rubbish will be returned rather than EIO.

  Signed-off-by: Jie Zhang <jie.zhang@analog.com>
  Signed-off-by: Mike Frysinger <vapier@gentoo.org>
  Signed-off-by: David Howells <dhowells@redhat.com>
  Acked-by: David McCullough <david_mccullough@mcafee.com>
  Acked-by: Paul Mundt <lethal@linux-sh.org>
  Acked-by: Greg Ungerer <gerg@uclinux.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

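  Sketch of the switch in the NOMMU access_process_vm() copy step (close
  to, but not guaranteed to be, the committed code):

      /* The *_user_page() variants take the vma and flush the icache
       * after a write - needed when gdb patches in a breakpoint. */
      if (write && vma->vm_flags & VM_MAYWRITE)
              copy_to_user_page(vma, NULL, addr, (void *)addr, buf, len);
      else if (!write && vma->vm_flags & VM_MAYREAD)
              copy_from_user_page(vma, NULL, addr, buf, (void *)addr, len);
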
2010-01-06  NOMMU: Avoiding duplicate icache flushes of shared maps  (Mike Frysinger; 1 file, +8/-3)

  When working with FDPIC, there are many shared mappings of read-only
  code regions between applications (the C library, applet packages like
  busybox, etc.), but the current do_mmap_pgoff() function will issue an
  icache flush whenever a VMA is added to an MM instead of only doing it
  when the map is initially created.

  The flush can instead be done when a region is first mmapped PROT_EXEC.
  Note that we may not rely on the first mapping of a region being
  executable - it's possible for it to be PROT_READ only, so we have to
  remember whether we've flushed the region or not, and then flush the
  entire region when a bit of it is made executable.

  However, this also affects the brk area. That will no longer be
  executable. We can mprotect() it to PROT_EXEC on MPU-mode kernels, but
  for NOMMU mode kernels, when it increases the brk allocation, making
  sys_brk() flush the extra from the icache should suffice. The brk area
  probably isn't used by NOMMU programs since the brk area can only use
  up the leavings from the stack allocation, where the stack allocation
  is larger than requested.

  Signed-off-by: David Howells <dhowells@redhat.com>
  Signed-off-by: Mike Frysinger <vapier@gentoo.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2010-01-05  this_cpu: Remove pageset_notifier  (Christoph Lameter; 1 file, +1/-0)

  Remove the pageset notifier since it only marks that a processor exists
  on a specific node. Move that code into the vmstat notifier.

  Signed-off-by: Christoph Lameter <cl@linux-foundation.org>
  Signed-off-by: Tejun Heo <tj@kernel.org>

2010-01-05  this_cpu: Page allocator conversion  (Christoph Lameter; 2 files, +79/-137)

  Use the per-cpu allocator functionality to avoid per-cpu arrays in
  struct zone. This drastically reduces the size of struct zone for
  systems with large numbers of processors and allows placement of
  critical variables of struct zone in one cacheline even on very large
  systems.

  Another effect is that the pagesets of one processor are placed near
  one another. If multiple pagesets from different zones fit into one
  cacheline, then additional cacheline fetches can be avoided on the hot
  paths when allocating memory from multiple zones.

  Bootstrap becomes simpler if we use the same scheme for UP, SMP and
  NUMA. #ifdefs are reduced and we can drop the zone_pcp macro.

  Hotplug handling is also simplified since cpu alloc can bring up and
  shut down cpu areas for a specific cpu as a whole. So there is no need
  to allocate or free individual pagesets.

  V7-V8: - Explain chicken-egg dilemma with percpu allocator.
  V4-V5: - Fix up cases where per_cpu_ptr is called before irq disable
         - Integrate the bootstrap logic that was separate before.

  tj: Build failure in pageset_cpuup_callback() due to missing ret
      variable fixed.

  Reviewed-by: Mel Gorman <mel@csn.ul.ie>
  Signed-off-by: Christoph Lameter <cl@linux-foundation.org>
  Signed-off-by: Tejun Heo <tj@kernel.org>

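  The access-pattern change, sketched (the zone->pageset field name is an
  assumption):

      /* Before: struct zone carried an NR_CPUS-sized pageset array,
       * reached through the zone_pcp() macro. */
      struct per_cpu_pageset *pset_old = zone_pcp(zone, smp_processor_id());

      /* After: struct zone holds one percpu allocation; each CPU reaches
       * its own pageset through the generic percpu accessors. */
      struct per_cpu_pageset *pset_new = this_cpu_ptr(zone->pageset);
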