path: root/arch/sparc64/mm
Age         Commit message  (Author; files changed, lines -/+)
2008-08-30  sparc64: setup_valid_addr_bitmap_from_pavail() should be __init  (David S. Miller; 1 file, -1/+1)
Signed-off-by: David S. Miller <davem@davemloft.net>
2008-08-14  sparc64: Fix cmdline_memory_size handling bugs.  (David S. Miller; 1 file, -8/+19)
First, lmb_enforce_memory_limit() interprets its argument (mostly, heh) as a size limit, not an address limit. So pass the raw cmdline_memory_size value into it. And we don't need to check it against zero; lmb_enforce_memory_limit() does that for us.

Next, free_initmem() needs special handling when the kernel command line trims the available memory. The problem case is if the trimmed-out memory is where the kernel image itself resides. When that memory is trimmed out, we don't add those physical RAM areas to the sparsemem active ranges, amongst other things. Which means that this free_initmem() code will free up invalid page structs, resulting in either crashes or hangs.

Just quick-fix this by not freeing initmem at all if "mem=" was given on the boot command line.

Signed-off-by: David S. Miller <davem@davemloft.net>
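A minimal C sketch of the two behaviors described above. The control flow and helper name (example_trim_memory) are illustrative assumptions, not the literal patch; only lmb_enforce_memory_limit(), free_initmem(), and cmdline_memory_size come from the commit text.

    #include <linux/init.h>
    #include <linux/lmb.h>

    static unsigned long cmdline_memory_size;      /* parsed from "mem=" */

    static void __init example_trim_memory(void)
    {
            /* The argument is a size cap, not an end-address cap, and the
             * helper already ignores a zero value, so pass the raw number. */
            lmb_enforce_memory_limit(cmdline_memory_size);
    }

    void free_initmem(void)
    {
            /* If "mem=" trimmed the available memory, the kernel image may
             * sit in a region that was never added to the sparsemem active
             * ranges; freeing init pages there would touch invalid page
             * structs, so skip the freeing entirely. */
            if (cmdline_memory_size)
                    return;

            /* ... normal freeing of the init sections ... */
    }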
2008-08-14  sparc64: Fix overshoot in nid_range().  (David S. Miller; 1 file, -0/+3)
If 'start' does not begin on a page boundary, we can overshoot past 'end'.

Signed-off-by: David S. Miller <davem@davemloft.net>
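A minimal illustration of the guard implied above, under the assumption that the overshoot comes from rounding 'start' up; the helper name is made up, and the real nid_range() in arch/sparc64/mm/init.c carries more context.

    #include <linux/mm.h>

    static unsigned long align_start_clamped(unsigned long start, unsigned long end)
    {
            start = PAGE_ALIGN(start);      /* may step past 'end' ... */
            if (start > end)                /* ... so clamp it back    */
                    start = end;
            return start;
    }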
2008-08-12  sparc64: Implement IRQ stacks.  (David S. Miller; 1 file, -0/+11)
Signed-off-by: David S. Miller <davem@davemloft.net>
2008-08-09  sparc64: Don't MAGIC_SYSRQ ifdef smp_fetch_global_regs and support code.  (David S. Miller; 1 file, -2/+0)
Based upon a report and initial patch by Friedrich Oslage.

The intention is to provide this facility for __trigger_all_cpu_backtrace even if MAGIC_SYSRQ is not set. The only part that should have MAGIC_SYSRQ ifdef protection is the sparc_globalreg_op sysrq registration and immediate code.

Signed-off-by: David S. Miller <davem@davemloft.net>
2008-08-04  sparc64: Need to disable preemption around smp_tsb_sync().  (David S. Miller; 1 file, -1/+4)
Based upon a bug report by Mariusz Kozlowski.

It uses smp_call_function_masked() now, which has a preemption-disabled requirement.

Signed-off-by: David S. Miller <davem@davemloft.net>
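A sketch of the calling pattern implied by the fix, assuming the caller simply brackets the cross-call with preempt_disable()/preempt_enable(); the wrapper name is invented and the real patch may place the protection differently. Only smp_tsb_sync() itself is taken from the commit.

    #include <linux/preempt.h>
    #include <linux/sched.h>

    extern void smp_tsb_sync(struct mm_struct *mm);

    static void tsb_sync_preempt_safe(struct mm_struct *mm)
    {
            preempt_disable();              /* smp_call_function_masked() requirement */
            smp_tsb_sync(mm);
            preempt_enable();
    }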
2008-07-31  sparc64: Kill smp_report_regs().  (David S. Miller; 1 file, -35/+0)
All the call sites are #if 0'd out and we have a much more useful global cpu dumping facility these days. smp_report_regs() is way too verbose to be usable.

Signed-off-by: David S. Miller <davem@davemloft.net>
2008-07-30  sparc64: Make global reg dumping even more useful.  (David S. Miller; 1 file, -0/+7)
Record one more level of stack frame program counter. Particularly when lockdep and all sorts of spinlock debugging are enabled, figuring out the caller of spin_lock() is difficult when the cpu is stuck on the lock.

Signed-off-by: David S. Miller <davem@davemloft.net>
2008-07-26  sparc64: use generic show_mem()  (Johannes Weiner; 1 file, -45/+0)
Remove arch-specific show_mem() in favor of the generic version.

This also removes the following redundant information display:

- free swap pages, printed by show_swap_cache_info()
- pages in swapcache, printed by show_swap_cache_info()
- dirty pages, writeback pages, mapped pages, slab pages, pagetables pages, printed by show_free_areas()

where show_mem() calls show_free_areas(), which calls show_swap_cache_info().

Signed-off-by: Johannes Weiner <hannes@saeurebad.de>
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-07-24  hugetlb: introduce pud_huge  (Andi Kleen; 1 file, -0/+5)
Straightforward extensions for huge pages located in the PUD instead of PMDs.

Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Nick Piggin <npiggin@suse.de>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
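Since sparc64 keeps its huge pages at the PMD level, the new PUD hook on this architecture presumably reduces to a stub; a hedged sketch of that shape, not necessarily the exact hunk:

    #include <linux/hugetlb.h>
    #include <asm/pgtable.h>

    int pud_huge(pud_t pud)
    {
            return 0;       /* sparc64 has no PUD-sized huge pages */
    }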
2008-07-24  hugetlb: modular state for hugetlb page size  (Andi Kleen; 1 file, -2/+3)
The goal of this patchset is to support multiple hugetlb page sizes. This is achieved by introducing a new struct hstate structure, which encapsulates the important hugetlb state and constants (e.g. huge page size, number of huge pages currently allocated, etc.).

The hstate structure is then passed around to the code which requires these fields; callers do the right thing regardless of the exact hstate they are operating on.

This patch adds the hstate structure, with a single global instance of it (default_hstate), and does the basic work of converting hugetlb to use the hstate.

Future patches will add more hstate structures to allow different hugetlbfs mounts to have different page sizes.

[akpm@linux-foundation.org: coding-style fixes]
Acked-by: Adam Litke <agl@us.ibm.com>
Acked-by: Nishanth Aravamudan <nacc@us.ibm.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
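A hedged sketch of the idea rather than the exact upstream layout: one structure carries the per-size constants and counters, a single global default instance exists, and helpers derive sizes from the stored order. All field and helper names below are illustrative.

    #include <asm/page.h>

    struct hstate_sketch {
            unsigned int  order;              /* huge page allocation order */
            unsigned long mask;               /* address mask for this size */
            unsigned long nr_huge_pages;      /* currently allocated        */
            unsigned long free_huge_pages;
    };

    static struct hstate_sketch default_hstate_sketch;

    static inline unsigned long huge_page_size_sketch(const struct hstate_sketch *h)
    {
            return PAGE_SIZE << h->order;     /* size follows from the order */
    }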
2008-07-24  mm: move bootmem descriptors definition to a single place  (Johannes Weiner; 1 file, -2/+1)
There are a lot of places that define either a single bootmem descriptor or an array of them. Use only one central array with MAX_NUMNODES items instead.

Signed-off-by: Johannes Weiner <hannes@saeurebad.de>
Acked-by: Ralf Baechle <ralf@linux-mips.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Hirokazu Takata <takata@linux-m32r.org>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Kyle McMartin <kyle@parisc-linux.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: David S. Miller <davem@davemloft.net>
Cc: Yinghai Lu <yhlu.kernel@gmail.com>
Cc: Christoph Lameter <cl@linux-foundation.org>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Andy Whitcroft <apw@shadowen.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
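The single central definition described above presumably amounts to something like the following; this is a sketch, and the exact placement, section attribute, and array name may differ upstream.

    #include <linux/bootmem.h>
    #include <linux/init.h>
    #include <linux/numa.h>

    /* one bootmem descriptor per possible node, shared by all arches */
    bootmem_data_t bootmem_node_data[MAX_NUMNODES] __initdata;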
2008-07-17  sparc64: Remove 4MB and 512K base page size options.  (David S. Miller; 1 file, -6/+0)
Adrian Bunk reported that enabling 4MB page size breaks the build. The problem is that MAX_ORDER combined with the page shift exceeds the SECTION_SIZE_BITS we use in asm-sparc64/sparsemem.h

There are several ways I suppose we could work around this. For one we could define a CONFIG_FORCE_MAX_ZONEORDER to decrease MAX_ORDER in these higher page size cases.

But I also know that these page size cases are broken wrt. TLB miss handling especially on pre-hypervisor systems, and there isn't an easy way to fix that.

These options were meant to be fun experimental hacks anyways, and only 8K and 64K make any sense to support.

So remove 512K and 4M base page size support. Of course, we still support these page sizes for huge pages.

Signed-off-by: David S. Miller <davem@davemloft.net>
2008-07-17  sparc64: Convert to generic helpers for IPI function calls.  (David S. Miller; 1 file, -0/+5)
Signed-off-by: David S. Miller <davem@davemloft.net>
2008-07-17  sparc: Use new '%pS' infrastructure to print symbols.  (David S. Miller; 1 file, -3/+2)
Signed-off-by: David S. Miller <davem@davemloft.net>
2008-06-26  on_each_cpu(): kill unused 'retry' parameter  (Jens Axboe; 1 file, -1/+1)
It's not even passed on to smp_call_function() anymore, since that was removed. So kill it.

Acked-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
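After this change a caller just passes the function, its argument, and the wait flag. A small usage sketch; the callback and wrapper names are placeholders.

    #include <linux/smp.h>

    static void poke_cpu(void *info)
    {
            /* per-cpu work goes here */
    }

    static void poke_all_cpus(void)
    {
            on_each_cpu(poke_cpu, NULL, 1);   /* 1 == wait for all cpus to finish */
    }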
2008-05-20  sparc64: Add global register dumping facility.  (David S. Miller; 1 file, -1/+28)
When a cpu really is stuck in the kernel, it can often be impossible to figure out which cpu is stuck where. The worst case is when the stuck cpu has interrupts disabled.

Therefore, implement a global cpu state capture that uses SMP message interrupts, which are not disabled by the normal IRQ enable/disable APIs of the kernel. As long as we can get a sysrq 'y' to the kernel, we can get a dump. Even if the console interrupt cpu is wedged, we can trigger it from userspace using /proc/sysrq-trigger.

The output is made compact so that this facility is more useful on high cpu count systems, which is where this facility will likely find itself the most useful :)

Signed-off-by: David S. Miller <davem@davemloft.net>
2008-05-20  sparc64: remove CVS keywords  (Adrian Bunk; 5 files, -5/+4)
This patch removes the CVS keywords that weren't updated for a long time from comments.

Signed-off-by: Adrian Bunk <bunk@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2008-05-16  sparc64: Fix lmb_reserve() args in find_ramdisk().  (David S. Miller; 1 file, -1/+1)
This fixes the missing-RAM regression reported by Mikael Pettersson <mikpe@it.uu.se>; many thanks for all of his help in diagnosing this.

The second argument to lmb_reserve() is a size, not an end-address bound.

Tested-by: Mikael Pettersson <mikpe@it.uu.se>
Signed-off-by: David S. Miller <davem@davemloft.net>
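A small sketch of the distinction being fixed, with placeholder variable and function names; the point is only that the second argument is a length, not an end address.

    #include <linux/init.h>
    #include <linux/lmb.h>

    static void __init reserve_ramdisk_sketch(u64 ramdisk_pa, u64 ramdisk_size)
    {
            lmb_reserve(ramdisk_pa, ramdisk_size);                 /* (base, size): correct */
            /* lmb_reserve(ramdisk_pa, ramdisk_pa + ramdisk_size);    would over-reserve    */
    }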
2008-05-11  sparc64: Work around memory probing bug in openfirmware.  (David S. Miller; 1 file, -5/+11)
Read all of the OF memory and translation tables, then read the physical available memory list twice.

When making these requests, OF can allocate more memory to do its job, which can remove pages from the available memory list. So fetch in all of the tables at once, and fetch the available list last, to make sure we read a stable value.

Signed-off-by: David S. Miller <davem@davemloft.net>
2008-05-06  sparc64: Fix initrd regression.  (David S. Miller; 1 file, -0/+3)
We die because we forget to convert initrd_start and initrd_end to virtual addresses.

Reported by Mikael Pettersson.

Signed-off-by: David S. Miller <davem@davemloft.net>
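A hedged sketch of the kind of conversion the fix adds, assuming the bootloader hands over physical addresses and that PAGE_OFFSET arithmetic is the right translation here; the helper name is invented and the real patch may do this elsewhere in the ramdisk setup path.

    #include <linux/init.h>
    #include <linux/initrd.h>
    #include <asm/page.h>

    static void __init initrd_to_virtual(void)
    {
            if (initrd_start) {
                    initrd_start += PAGE_OFFSET;    /* physical -> kernel virtual */
                    initrd_end   += PAGE_OFFSET;
            }
    }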
2008-05-05  sparc64: remove online_page()  (Adrian Bunk; 1 file, -13/+0)
The identical online_page() implementations from all architectures got moved to mm/memory_hotplug.c - except for the sparc64 one, which was even dead code because MEMORY_HOTPLUG is not available there.

Signed-off-by: Adrian Bunk <bunk@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2008-04-30  Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-2.6  (Linus Torvalds; 1 file, -0/+27)
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-2.6:
  sparc64: remove duplicated include
  sparc: Add kgdb support.
  kgdbts: Sparc needs sstep emulation.
  sparc32: Kill smp_message_pass() and related code.
  sparc64: Kill PIL_RESERVED, unused.
  sparc64: Split entry.S up into separate files.
2008-04-29  sparc: Add kgdb support.  (David S. Miller; 1 file, -0/+27)
Current limitations:

1) On SMP single stepping has some fundamental issues, shared with other sw single-step architectures such as mips and arm.

2) On 32-bit sparc we don't support SMP kgdb yet. That requires some reworking of the IPI mechanisms and infrastructure on that platform.

Signed-off-by: David S. Miller <davem@davemloft.net>
2008-04-24  [SPARC64]: %l6 trap return handling no longer necessary.  (David S. Miller; 1 file, -3/+1)
Now that we indicate the "restart system call" in the trap type field of pt_regs->magic, we don't need to set the %l6 boolean in all of the trap return paths. And we therefore don't need to pass it to do_notify_resume().

Signed-off-by: David S. Miller <davem@davemloft.net>
2008-04-23  [SPARC64]: Add NUMA support.  (David S. Miller; 1 file, -100/+696)
Currently there is only code to parse NUMA attributes on sun4v/niagara systems, but later on we will add such parsing for older systems.

Signed-off-by: David S. Miller <davem@davemloft.net>
2008-04-23  [SPARC64]: Allocate TSB node-local.  (David S. Miller; 1 file, -1/+2)
Signed-off-by: David S. Miller <davem@davemloft.net>
2008-04-23  [SPARC64]: Initialize MDESC earlier and use lmb_alloc()  (David S. Miller; 1 file, -3/+3)
Signed-off-by: David S. Miller <davem@davemloft.net>
2008-04-23  [SPARC64]: Use lmb_alloc() for PROM device tree.  (David S. Miller; 1 file, -2/+2)
Signed-off-by: David S. Miller <davem@davemloft.net>
2008-04-23  [SPARC64]: Call real_setup_per_cpu_areas() earlier and use lmb_alloc().  (David S. Miller; 1 file, -2/+6)
We have to do it like this before we can move the PROM and MDESC device tree code over to using lmb_alloc().

Signed-off-by: David S. Miller <davem@davemloft.net>
2008-04-23  [SPARC64]: Fully use LMB information in bootmem_init().  (David S. Miller; 1 file, -82/+18)
Signed-off-by: David S. Miller <davem@davemloft.net>
2008-04-23  [SPARC64]: Start using LMB information in bootmem_init().  (David S. Miller; 1 file, -126/+6)
This allows us to kill the incredibly complicated and stupid function trim_pavail().

Signed-off-by: David S. Miller <davem@davemloft.net>
2008-04-23  [SPARC64]: Initialize LMB tables.  (David S. Miller; 1 file, -1/+13)
Call lmb_add() on available regions, and call lmb_reserve() on the main kernel image and the ramdisk (if any).

Signed-off-by: David S. Miller <davem@davemloft.net>
2008-04-23  [SPARC64]: Move ramdisk discovery code out to separate function.  (David S. Miller; 1 file, -24/+33)
And add some comments explaining all of the quirks involved in the way the bootloader provides this information.

Signed-off-by: David S. Miller <davem@davemloft.net>
2008-04-29  sparc: Export symbols for ZERO_PAGE usage in modules.  (Aneesh Kumar K.V; 1 file, -0/+1)
ext4 uses ZERO_PAGE(0) to zero out blocks. We need to export different symbols in different arches for the usage of ZERO_PAGE in modules.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
2008-04-28  pageflags: get rid of FLAGS_RESERVED  (Christoph Lameter; 1 file, -2/+14)
NR_PAGEFLAGS specifies the number of page flags we are using. From that we can calculate the number of bits leftover that can be used for zone, node (and maybe the sections id). There is no need anymore for FLAGS_RESERVED if we use NR_PAGEFLAGS.

Use the new methods to make NR_PAGEFLAGS available via the preprocessor. NR_PAGEFLAGS is used to calculate field boundaries in the page flags fields. These field widths have to be available to the preprocessor.

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Cc: David Miller <davem@davemloft.net>
Cc: Andy Whitcroft <apw@shadowen.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-03-28  [SPARC64]: Don't open-code {get,put}_cpu_var() in flush_tlb_pending().  (David S. Miller; 1 file, -5/+2)
Noticed by Andrew Morton.

Signed-off-by: David S. Miller <davem@davemloft.net>
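A sketch of the {get,put}_cpu_var() idiom the commit switches to; the per-cpu variable and its type are placeholders, not the actual sparc64 TLB batching structure.

    #include <linux/percpu.h>

    struct tlb_batch_sketch {
            unsigned long nr;
    };
    static DEFINE_PER_CPU(struct tlb_batch_sketch, tlb_batch_sketch);

    static void flush_tlb_pending_sketch(void)
    {
            struct tlb_batch_sketch *tb = &get_cpu_var(tlb_batch_sketch);  /* disables preemption */

            if (tb->nr) {
                    /* ... issue the pending flushes ... */
                    tb->nr = 0;
            }

            put_cpu_var(tlb_batch_sketch);                                 /* re-enables preemption */
    }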
2008-03-26  [SPARC64]: Fix __get_cpu_var in preemption-enabled area.  (David S. Miller; 1 file, -1/+2)
Reported by Mariusz Kozlowski.

Signed-off-by: David S. Miller <davem@davemloft.net>
2008-03-26  [SPARC64]: Fix sparse errors in arch/sparc64/kernel/traps.c  (David S. Miller; 1 file, -4/+0)
Add 'UL' markers to DCU_* macros.

Declare C functions called from assembler in entry.h

Declare C functions called from within the sparc64 arch code in include/asm-sparc64/*.h headers as appropriate.

Remove unused routines in traps.c

Signed-off-by: David S. Miller <davem@davemloft.net>
2008-03-25  [SPARC64]: Fix sparse warnings in arch/sparc64/kernel/{cpu,setup}.c  (David S. Miller; 1 file, -0/+1)
We create a local header file, entry.h, under arch/sparc64/kernel/, that we can use to declare routines either defined in assembler or only invoked from assembler, as well as other data objects which are private to the inner sparc64 kernel arch code.

Signed-off-by: David S. Miller <davem@davemloft.net>
2008-03-21  [SPARC64]: Remove most limitations to kernel image size.  (David S. Miller; 1 file, -24/+14)
Currently kernel images are limited to 8MB in size, and this causes problems especially when enabling features that take up a lot of kernel image space, such as lockdep.

The code now will align the kernel image size up to 4MB and map that many locked TLB entries. So the only practical limitation is the number of available locked TLB entries, which is 16 on Cheetah and 64 on pre-Cheetah sparc64 cpus. Niagara cpus don't actually have hw locked TLB entry support. Rather, the hypervisor transparently provides support for "locked" TLB entries since it runs with physical addressing and does the initial TLB miss processing.

Fully utilizing this change requires some help from SILO, a patch for which will be submitted to the maintainer. Essentially, SILO will currently only map up to 8MB for the kernel image and that needs to be increased.

Note that neither this patch nor the SILO bits will help with network booting. The openfirmware code will only map up to a certain amount of kernel image during a network boot and there isn't much we can do about that other than to implement a layered network booting facility. Solaris has this, and calls it "wanboot", and we may implement something similar at some point.

Signed-off-by: David S. Miller <davem@davemloft.net>
2008-02-28  [SPARC64]: Adjust kernel PC validation test in fault handler.  (David S. Miller; 1 file, -1/+1)
Because of the new futex validation init handler, we have to accept faults in init section text as well as the normal kernel text.

Thanks to Tom Callaway for the bug report.

Signed-off-by: David S. Miller <davem@davemloft.net>
2008-02-26  [SPARC64]: Loosen checks in exception table handling.  (David S. Miller; 1 file, -10/+2)
Some parts of the kernel now perform *_user() accesses under set_fs(KERNEL_DS) that fault on purpose. See, for example, the code added by changeset a0c1e9073ef7428a14309cba010633a6cd6719ea ("futex: runtime enable pi and robust functionality").

That trips up the ASI sanity checking we make in do_kernel_fault(). Just remove it for now. Maybe we can add it back later with an added conditional which looks at the current get_fs() value.

Signed-off-by: David S. Miller <davem@davemloft.net>
2008-02-24  [SPARC64]: Fix section mismatch from kernel_map_range  (Sam Ravnborg; 1 file, -1/+2)
Fix the following warnings:

WARNING: vmlinux.o(.text+0x4f980): Section mismatch in reference from the function kernel_map_range() to the function .init.text:__alloc_bootmem()
WARNING: vmlinux.o(.text+0x4f9cc): Section mismatch in reference from the function kernel_map_range() to the function .init.text:__alloc_bootmem()

alloc_bootmem() is only used during early init, and for any subsequent call to kernel_map_range() the program logic avoids the call. So annotate kernel_map_range() with __ref to tell modpost to ignore the reference to a __init function.

Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
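A sketch of what the __ref annotation buys here, under the assumption that the early-boot branch is the only path that reaches the __init allocator; the function names and bodies are illustrative, not the real kernel_map_range().

    #include <linux/bootmem.h>
    #include <linux/init.h>
    #include <linux/slab.h>

    static void * __init early_page_alloc(void)
    {
            return __alloc_bootmem(PAGE_SIZE, PAGE_SIZE, 0UL);
    }

    /* __ref tells modpost not to warn about the reference into .init.text;
     * the bootmem branch is only taken before init memory is freed. */
    static void * __ref map_range_page_alloc(void)
    {
            if (slab_is_available())
                    return kzalloc(PAGE_SIZE, GFP_KERNEL);
            return early_page_alloc();
    }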
2008-02-17  [SPARC64]: Always register a PROM based early console.  (David S. Miller; 1 file, -3/+3)
Signed-off-by: David S. Miller <davem@davemloft.net>
2008-02-13  [SPARC64]: Remove DEBUG_BOOTMEM.  (David S. Miller; 1 file, -42/+1)
We'll replace it in the future with better logging facilities that can be enabled at run time.

Signed-off-by: David S. Miller <davem@davemloft.net>
2008-02-07  Introduce flags for reserve_bootmem()  (Bernhard Walle; 1 file, -4/+4)
This patchset adds a flags variable to reserve_bootmem() and uses the BOOTMEM_EXCLUSIVE flag in the crashkernel reservation code to detect collisions between the crashkernel area and already used memory.

This patch: Change the reserve_bootmem() function to accept a new flag, BOOTMEM_EXCLUSIVE. If that flag is set, the function returns -EBUSY if the memory has already been reserved in the past. This is to avoid conflicts.

Because that code runs before SMP initialisation, there's no race condition inside reserve_bootmem_core().

[akpm@linux-foundation.org: coding-style fixes]
[akpm@linux-foundation.org: fix powerpc build]
Signed-off-by: Bernhard Walle <bwalle@suse.de>
Cc: <linux-arch@vger.kernel.org>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Vivek Goyal <vgoyal@in.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
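A usage sketch of the new flag from a caller's side, with a placeholder wrapper name, base/size arguments, and warning text; it only illustrates the -EBUSY collision detection described above.

    #include <linux/bootmem.h>
    #include <linux/init.h>
    #include <linux/kernel.h>

    static int __init reserve_crash_area_sketch(unsigned long base, unsigned long size)
    {
            int ret;

            ret = reserve_bootmem(base, size, BOOTMEM_EXCLUSIVE);
            if (ret < 0)
                    printk(KERN_WARNING "crashkernel: 0x%lx-0x%lx already reserved\n",
                           base, base + size - 1);
            return ret;
    }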
2008-01-30  SPARC64: use generic percpu  (travis@sgi.com; 1 file, -0/+5)
Sparc64 has a way of providing the base address for the per cpu area of the currently executing processor in a global register. Sparc64 also provides a way to calculate the address of a per cpu area from a base address instead of performing an array lookup.

Cc: David Miller <davem@davemloft.net>
Signed-off-by: Mike Travis <travis@sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2007-12-13  [SPARC64]: Fix two kernel linear mapping setup bugs.  (David S. Miller; 1 file, -9/+20)
This was caught and identified by Greg Onufer.

Since we set up the 256M/4M bitmap table after taking over the trap table, it's possible for some 4M mappings to get loaded into the TLB beforehand that will later be 256M mappings. This can cause illegal TLB multiple-match conditions. Fix this by setting up the bitmap before we take over the trap table.

Next, __flush_tlb_all() was not doing anything on hypervisor platforms. Fix by adding sun4v_mmu_demap_all() and calling it.

Signed-off-by: David S. Miller <davem@davemloft.net>
2007-10-31  [SPARC64]: Fix build failure when CONFIG_BUG is disabled.  (David S. Miller; 1 file, -1/+3)
When CONFIG_BUG is turned off, the standard trick of:

    switch (x) {
    case X: ...
    case Y: ...
    default:
            BUG();
    };

to mark impossible cases does not work, because BUG() evaluates to nothing and thus GCC just sees a fall-through code path. Add an explicit KERN_ERR log message and a do_exit() to trap this case.

Signed-off-by: David S. Miller <davem@davemloft.net>
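An illustration of the fallback described above: with CONFIG_BUG disabled the "impossible" default case needs its own diagnostic and do_exit(). The function name, switch values, and message below are made-up examples, not the actual fault-handler code.

    #include <linux/kernel.h>
    #include <linux/sched.h>

    static void handle_asi_sketch(int asi)
    {
            switch (asi) {
            case 0x80:              /* example "known good" value */
                    break;
            case 0x88:              /* another example value      */
                    break;
            default:
                    /* BUG() would vanish with CONFIG_BUG=n, so log and die explicitly */
                    printk(KERN_ERR "Unexpected ASI 0x%02x in fault handler\n", asi);
                    do_exit(SIGSEGV);
            }
    }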