path: arch/powerpc/mm/mmu_decl.h
2010-05-17  powerpc/fsl-booke: Move loadcam_entry back to asm code to fix SMP ftrace  (Kumar Gala, 1 file changed, -1/+9)
When we build with ftrace enabled, it's possible that loadcam_entry would use the stack pointer (even though the code doesn't need it). We call loadcam_entry in __secondary_start before the stack is set up. To ensure that loadcam_entry doesn't use the stack pointer, the easiest solution is to just keep it in asm code.
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2010-05-05  powerpc/47x: Base ppc476 support  (Dave Kleikamp, 1 file changed, -6/+1)
This patch adds the base support for the 476 processor. The code was primarily written by Ben Herrenschmidt and Torez Smith, but I've been maintaining it for a while. The goal is to have a single binary that will run on 44x and 47x, but we still have some details to work out. The biggest is that the L1 cache line size differs on the two platforms, but it's currently a compile-time option.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Torez Smith <lnxtorez@linux.vnet.ibm.com>
Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
2009-12-16  Merge branch 'next' of git://git.secretlab.ca/git/linux-2.6  (Linus Torvalds, 1 file changed, -4/+13)
* 'next' of git://git.secretlab.ca/git/linux-2.6: (23 commits)
  powerpc: fix up for mmu_mapin_ram api change
  powerpc: wii: allow ioremap within the memory hole
  powerpc: allow ioremap within reserved memory regions
  wii: use both mem1 and mem2 as ram
  wii: bootwrapper: add fixup to calc useable mem2
  powerpc: gamecube/wii: early debugging using usbgecko
  powerpc: reserve fixmap entries for early debug
  powerpc: wii: default config
  powerpc: wii: platform support
  powerpc: wii: hollywood interrupt controller support
  powerpc: broadway processor support
  powerpc: wii: bootwrapper bits
  powerpc: wii: device tree
  powerpc: gamecube: default config
  powerpc: gamecube: platform support
  powerpc: gamecube/wii: flipper interrupt controller support
  powerpc: gamecube/wii: udbg support for usbgecko
  powerpc: gamecube/wii: do not include PCI support
  powerpc: gamecube/wii: declare as non-coherent platforms
  powerpc: gamecube/wii: introduce GAMECUBE_COMMON
  ...
Fix up conflicts in arch/powerpc/mm/fsl_booke_mmu.c. Hopefully even close to correctly.
2009-12-14  powerpc: fix up for mmu_mapin_ram api change  (Stephen Rothwell, 1 file changed, -3/+3)
Today's linux-next build (powerpc ppc44x_defconfig) failed like this:
  arch/powerpc/mm/pgtable_32.c: In function 'mapin_ram':
  arch/powerpc/mm/pgtable_32.c:318: error: too many arguments to function 'mmu_mapin_ram'
Caused by commit de32400dd26e743c5d500aa42d8d6818b79edb73 ("wii: use both mem1 and mem2 as ram").
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
2009-12-12  powerpc: allow ioremap within reserved memory regions  (Albert Herranz, 1 file changed, -0/+1)
Add a flag to let a platform ioremap memory regions marked as reserved. This flag will be used later by the Nintendo Wii support code to allow ioremapping the I/O region sitting between MEM1 and MEM2 and marked as reserved RAM in the patch "wii: use both mem1 and mem2 as ram". This will no longer be needed when proper discontig memory support for 32-bit PowerPC is added to the kernel.
Signed-off-by: Albert Herranz <albert_herranz@yahoo.es>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
2009-12-12  wii: use both mem1 and mem2 as ram  (Albert Herranz, 1 file changed, -1/+9)
The Nintendo Wii video game console has two discontiguous RAM regions:
- MEM1: 24MB @ 0x00000000
- MEM2: 64MB @ 0x10000000
Unfortunately, the kernel currently does not support discontiguous RAM memory regions on 32-bit PowerPC platforms. This patch adds a series of workarounds to allow the use of the second memory region (MEM2) as RAM by the kernel. Basically, a single range of memory from the beginning of MEM1 to the end of MEM2 is reported to the kernel, and a memory reservation is created for the hole between MEM1 and MEM2. With this patch the system is able to use all the available RAM and not just ~27% of it. This will no longer be needed when proper discontig memory support for 32-bit PowerPC is added to the kernel.
Signed-off-by: Albert Herranz <albert_herranz@yahoo.es>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
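As a rough sketch of the approach described above (not the actual Wii platform code; the region addresses come from the commit message, while the constant names and the early-boot hook are hypothetical, and lmb_add()/lmb_reserve() are the memory-registration helpers of that era):

    #include <linux/init.h>
    #include <linux/lmb.h>

    /* Hypothetical constants describing the two Wii RAM regions */
    #define WII_MEM1_BASE   0x00000000UL
    #define WII_MEM1_SIZE   (24UL << 20)        /* 24MB */
    #define WII_MEM2_BASE   0x10000000UL
    #define WII_MEM2_SIZE   (64UL << 20)        /* 64MB */

    static void __init wii_register_ram(void)
    {
            /* Report one contiguous range covering MEM1 through the end of MEM2 */
            lmb_add(WII_MEM1_BASE,
                    (WII_MEM2_BASE + WII_MEM2_SIZE) - WII_MEM1_BASE);

            /* Reserve the hole between MEM1 and MEM2 so the kernel never
             * hands it out as normal RAM */
            lmb_reserve(WII_MEM1_BASE + WII_MEM1_SIZE,
                        WII_MEM2_BASE - (WII_MEM1_BASE + WII_MEM1_SIZE));
    }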
2009-11-20  powerpc/fsl-booke: Rework TLB CAM code  (Kumar Gala, 1 file changed, -11/+0)
Re-write the code so it's more standalone and fix some issues:
* Bump the number of CAM entries to 64 to support e500mc
* Make the code handle MAS7 properly
* Use pr_cont instead of creating a string as we go
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2009-08-20  powerpc/mm: Add support for SPARSEMEM_VMEMMAP on 64-bit Book3E  (Benjamin Herrenschmidt, 1 file changed, -1/+6)
The base TLB support didn't include support for SPARSEMEM_VMEMMAP; though we did carve out some virtual space for it, the necessary support code wasn't there. This implements it by using 16M pages for now, though the page size could easily be changed at runtime if necessary.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-08-20  powerpc: Add TLB management code for 64-bit Book3E  (Benjamin Herrenschmidt, 1 file changed, -1/+13)
This adds the TLB miss handler assembly and the low level TLB flush routines, along with the necessary hook for dealing with our virtual page tables or indirect TLB entries that need to be flushed when PTE pages are freed. There is currently no support for hugetlbfs.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-08-20  powerpc/mm: Make low level TLB flush ops on BookE take additional args  (Benjamin Herrenschmidt, 1 file changed, -3/+13)
We need to pass down whether the page is direct or indirect, and we'll need to pass the page size to _tlbil_va and _tlbivax_bcast. We also add a new low level _tlbil_pid_noind() which does a TLB flush by PID but avoids flushing indirect entries if possible. This implements those new prototypes but defines them with inlines or macros so that no additional arguments are actually passed on current processors.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
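As a rough illustration of the wrapper pattern described above (a sketch only; the name of the underlying two-argument assembly routine, __tlbil_va, is an assumption and the real mmu_decl.h may differ):

    /* Low level invalidate-by-virtual-address routine (name assumed) */
    extern void __tlbil_va(unsigned long address, unsigned int pid);

    /* Extended prototype: tsize/ind are accepted, but on current
     * processors the extra arguments are simply dropped */
    static inline void _tlbil_va(unsigned long address, unsigned int pid,
                                 unsigned int tsize, unsigned int ind)
    {
            __tlbil_va(address, pid);
    }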
2009-01-13  Merge commit 'kumar/kumar-next' into next  (Benjamin Herrenschmidt, 1 file changed, -2/+9)
2009-01-08  powerpc: Fix missing semicolons in mmu_decl.h  (Benjamin Herrenschmidt, 1 file changed, -3/+3)
This is a brown-paper-bag fix for one of my earlier patches that broke the build on 40x and 8xx. And yes, I've now added 40x and 8xx to my list of test configs :-)
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-01-07  powerpc/fsl-booke: Remove num_tlbcam_entries  (Trent Piepho, 1 file changed, -2/+0)
This is a global variable defined in fsl_booke_mmu.c with a value that gets initialized in assembly code in head_fsl_booke.S. It's never used. If some code ever does want to know the number of entries in TLB1, then "numcams = mfspr(SPRN_TLB1CFG) & 0xfff" is a whole lot simpler than a global initialized during kernel boot from assembly.
Signed-off-by: Trent Piepho <tpiepho@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
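A minimal sketch of the suggested alternative, reading the TLB1 entry count straight from the TLB1 configuration SPR at runtime (the mask expression is taken from the commit message; the helper name is hypothetical):

    /* Number of entries in TLB1, read from TLB1CFG only when needed */
    static inline unsigned int fsl_tlb1_entries(void)
    {
            return mfspr(SPRN_TLB1CFG) & 0xfff;
    }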
2009-01-07  powerpc/fsl-booke: Don't hard-code size of struct tlbcam  (Trent Piepho, 1 file changed, -0/+9)
Some assembly code in head_fsl_booke.S hard-coded the size of struct tlbcam to 20 when it indexed the TLBCAM table. Anyone changing the size of struct tlbcam would not know to expect that. The kernel already has a system to get the size of C structures into assembly language files, asm-offsets, so let's use it. The definition of the struct gets moved to a header, so that asm-offsets.c can include it.
Signed-off-by: Trent Piepho <tpiepho@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
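The asm-offsets mechanism mentioned above works roughly like this (a sketch; the TLBCAM_SIZE symbol name is an assumption, not necessarily what the patch defines):

    /* In arch/powerpc/kernel/asm-offsets.c (inside its main()):
     * emit sizeof(struct tlbcam) as a constant visible to assembly
     * through the generated asm-offsets.h */
    DEFINE(TLBCAM_SIZE, sizeof(struct tlbcam));

    /* head_fsl_booke.S can then scale its TLBCAM table index by
     * TLBCAM_SIZE instead of the hard-coded 20 */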
2008-12-21  powerpc/mm: Split low level tlb invalidate for nohash processors  (Benjamin Herrenschmidt, 1 file changed, -0/+48)
Currently, the various forms of low level TLB invalidations are all implemented in misc_32.S for 32-bit processors, in a fairly scary mess of #ifdefs and with interesting duplication, such as a whole bunch of code for FSL _tlbie and _tlbia which is no longer used.
This moves things around so that _tlbie is now defined in hash_low_32.S and is only used by the 32-bit hash code, and all nohash CPUs use the various _tlbil_* forms that are now moved to a new file, tlb_nohash_low.S.
I moved all the definitions for that stuff out of include/asm/tlbflush.h and into mm/mmu_decl.h, as they are really internal mm stuff.
The code should have no functional changes. I kept some variants inline for trivial forms on things like 40x and 8xx.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-12-16  powerpc/mm: Remove flush_HPTE()  (Benjamin Herrenschmidt, 1 file changed, -17/+0)
The function flush_HPTE() is used in only one place, the implementation of DEBUG_PAGEALLOC on ppc32. It's actually a duplicate of flush_tlb_page(), though it's -slightly- more efficient on hash based processors. We remove it and replace it with a direct call to the hash flush code on those processors and to flush_tlb_page() for everybody else.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-09  powerpc: Fix problems with 32bit PPC's running with >= 4GB of RAM  (Stefan Roese, 1 file changed, -2/+2)
This patch enables 32-bit PPCs (with 36-bit physical address space, e.g. IBM/AMCC PPC44x) to run with >= 4GB of RAM. Mostly it's just replacing types (unsigned long -> phys_addr_t). Tested on an AMCC Katmai with 4GB of DDR2.
Signed-off-by: Stefan Roese <sr@denx.de>
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
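The type substitution mentioned above looks roughly like this (illustrative only; the variable name is hypothetical):

    /* before (for reference): unsigned long phys;
     * a 32-bit unsigned long cannot represent addresses at or above 4GB */

    /* after: phys_addr_t widens to 64 bits when the platform enables
     * 36-bit physical addressing, so such addresses are preserved */
    phys_addr_t phys;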
2008-06-30  powerpc: Change BAT code to use phys_addr_t  (Becky Bruce, 1 file changed, -1/+1)
Currently, the physical address is an unsigned long, but it should be phys_addr_t in set_bat, [v/p]_mapped_by_bat. Also, create a macro that can convert a large physical address into the correct format for programming the BAT registers.
Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-04-17  [POWERPC] Rename __initial_memory_limit to __initial_memory_limit_addr  (Kumar Gala, 1 file changed, -1/+1)
We always use __initial_memory_limit as an address, so rename it to make that clear.
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-04-17  [POWERPC] Introduce lowmem_end_addr to distinguish from total_lowmem  (Kumar Gala, 1 file changed, -0/+1)
total_lowmem represents the amount of low memory, not the physical address that low memory ends at. If the start of memory is at 0, it happens that total_lowmem can be used as both the size and the address that lowmem ends at (or, more specifically, one byte beyond the end). To make the code a bit clearer and to handle the case when the start of memory isn't at physical 0, we introduce lowmem_end_addr, which represents one byte beyond the last physical address in the lowmem region.
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
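Put another way (a sketch of the relationship only, not code quoted from the patch; memstart_addr is the variable introduced by the companion commit below):

    /* one byte beyond the last physical address in the lowmem region,
     * typically computed during MMU initialization */
    lowmem_end_addr = memstart_addr + total_lowmem;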
2008-04-17  [POWERPC] Remove and replace uses of PPC_MEMSTART with memstart_addr  (Kumar Gala, 1 file changed, -0/+1)
A number of users of PPC_MEMSTART (40x, ppc_mmu_32) can just always use 0, as we don't support booting these kernels at non-zero physical addresses since their exception vectors must be at 0 (or 0xfffx_xxxx). For the sub-arches that support relocatable interrupt vectors (book-e), it's reasonable to have memory start at a non-zero physical address. For those cases, use the variable memstart_addr instead of the #define PPC_MEMSTART, since the only uses of PPC_MEMSTART are for initialization, and in the future we can set memstart_addr at runtime to have a relocatable kernel.
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-11-20  [POWERPC] Fix 8xx build breakage due to _tlbie changes  (Benjamin Herrenschmidt, 1 file changed, -1/+1)
My changes to _tlbie to fix 4xx unfortunately broke the 8xx build in a couple of places. This fixes it. Spotted by Olof Johansson.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Vitaly Bordug <vitb@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-11-01  [POWERPC] 4xx: Fix 4xx flush_tlb_page()  (Benjamin Herrenschmidt, 1 file changed, -2/+2)
On 4xx CPUs, the current implementation of flush_tlb_page() uses a low level _tlbie() assembly function that only works for the current PID. Thus, invalidations caused by, for example, a COW fault triggered by get_user_pages() from a different context will not work properly, causing, among other things, gdb breakpoints to fail.
This patch adds a "pid" argument to _tlbie() on 4xx processors, and uses it to flush entries in the right context. FSL BookE also gets the argument but it seems they don't need it (their tlbivax form ignores the PID when invalidating according to the document I have).
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
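The interface change amounts to something like the following prototype (a sketch inferred from the commit message; the exact types used in the patch may differ):

    /* before (for reference): void _tlbie(unsigned long address);
     * could only invalidate entries tagged with the current PID */

    /* after: the caller names the context whose entry should be flushed */
    void _tlbie(unsigned long address, unsigned int pid);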
2007-06-14  [POWERPC] Kill typedef-ed structs for hash PTEs and BATs  (David Gibson, 1 file changed, -2/+2)
Using typedefs to rename structure types is frowned upon by CodingStyle. However, we do so for the hash PTE structure on both ppc32 (where it's called "PTE") and ppc64 (where it's called "hpte_t"). On ppc32 we also have such a typedef for the BATs ("BAT"). This removes this unhelpful use of typedefs, in the process bringing ppc32 and ppc64 closer together by using the name "struct hash_pte" in both cases.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-06-14  [POWERPC] Remove the dregs of APUS support from arch/powerpc  (David Gibson, 1 file changed, -1/+0)
APUS (the Amiga Power-Up System) is not supported under arch/powerpc and it's unlikely it ever will be. Therefore, this patch removes the fragments of APUS support code from arch/powerpc which have been copied from arch/ppc. A few APUS references are left in asm-powerpc in .h files which are still used from arch/ppc.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-06-14  [POWERPC] Rewrite IO allocation & mapping on powerpc64  (Benjamin Herrenschmidt, 1 file changed, -12/+0)
This rewrites pretty much from scratch the handling of MMIO and PIO space allocations on powerpc64. The main goals are:
- Get rid of imalloc and use more common code where possible
- Simplify the current mess so that PIO space is allocated and mapped in a single place for PCI bridges
- Handle allocation constraints of PIO for all bridges, including hot plugged ones, within the 2GB space reserved for IO ports, so that devices on hotplugged busses will now work with drivers that assume IO ports fit in an int
- Clean up and separate tracking of the ISA space in the reserved low 64K of IO space. No ISA -> nothing mapped there.
I booted a cell blade with IDE on PIO and MMIO and a dual G5 so far, that's it :-)
With this patch, all allocations are done using the code in mm/vmalloc.c, though we use the low level __get_vm_area with explicit start/stop constraints in order to manage separate areas for vmalloc/vmap, ioremap, and PCI IOs. This greatly simplifies a lot of things, as you can see in the diffstat of that patch :-)
A new pair of functions, pcibios_map/unmap_io_space(), now replaces all of the previous code that used to manipulate PCI IO space. The allocation is done at mapping time, which is now called from scan_phb's, just before the devices are probed (instead of after, which is by itself a bug fix). The only other caller is the PCI hotplug code for hot adding PCI-PCI bridges (slots).
imalloc is gone, as is the "sub-allocation" thing, but I do believe that hotplug should still work in the sense that the space allocation is always done by the PHB, but if you unmap a child bus of this PHB (which seems to be possible), then the code should properly tear down all the HPTE mappings for that area of the PHB-allocated IO space.
I now always reserve the first 64K of IO space for the bridge with the ISA bus on it. I have moved the code for tracking ISA into a separate file, which should also make it smarter if we are ever capable of hot unplugging or re-plugging an ISA bridge.
This should have a side effect on platforms like powermac where VGA IOs will no longer work. This is done on purpose, though, as they would have worked semi-randomly before. The idea at this point is to isolate drivers that might need to access those and fix them by providing a proper function to obtain an offset to the legacy IOs of a given bus.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-05-02  [POWERPC] Revise PPC44x MMU code for arch/powerpc  (David Gibson, 1 file changed, -1/+2)
This patch takes the definitions for the PPC44x MMU (a software loaded TLB) from asm-ppc/mmu.h, cleans out the things no longer necessary in arch/powerpc, and puts them in a new asm-powerpc/mmu_44x.h file. It also substantially simplifies arch/powerpc/mm/44x_mmu.c and makes a couple of small fixes necessary for the 44x MMU code to build and work properly in arch/powerpc.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-04-24  [POWERPC] Cleanup and fix breakage in tlbflush.h  (David Gibson, 1 file changed, -0/+1)
BenH's commit a741e67969577163a4cfc78d7fd2753219087ef1 in powerpc.git, although (AFAICT) only intended to affect ppc64, also has side-effects which break 44x. I think 40x, 8xx and Freescale Book E are also affected, though I haven't tested them.
The problem lies in unconditionally removing flush_tlb_pending() from the versions of flush_tlb_mm(), flush_tlb_range() and flush_tlb_kernel_range() used on ppc64 - which are also used on the embedded platforms mentioned above.
The patch below cleans up the convoluted #ifdef logic in tlbflush.h, in the process restoring the necessary flushes for the software TLB platforms. There are three sets of definitions for the flushing hooks: the software TLB versions (revised to avoid using names which appear to relate to TLB batching), the 32-bit hash based versions (external functions) and the 64-bit hash based versions (which implement batching).
It also moves the declaration of update_mmu_cache() to always be in tlbflush.h (previously it was in tlbflush.h except for PPC64, where it was in pgtable.h).
Booted on Ebony (440GP) and compiled for 64-bit and 32-bit multiplatform.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-04-13  [POWERPC] Fix 32-bit mm operations when not using BATs  (Benjamin Herrenschmidt, 1 file changed, -0/+4)
On hash table based 32-bit powerpcs, the hash management code runs with a big spinlock. It's thus important that it never causes itself a hash fault. That code is generally safe (it does memory accesses in real mode among other things) with the exception of the actual access to the code itself. That is, the kernel text needs to be accessible without taking a hash miss exception.
This is currently guaranteed by having a BAT register permanently mapping part of the linear mapping, which includes the kernel text. But this is not true if using the "nobats" kernel command line option (which can be useful for debugging), and will not be true when using DEBUG_PAGEALLOC, implemented in a subsequent patch.
This patch fixes this by pre-faulting in the hash table pages that hit the kernel text, and making sure we never evict such a page under hash pressure.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
 arch/powerpc/mm/hash_low_32.S | 22 ++++++++++++++++++++--
 arch/powerpc/mm/mem.c         |  3 ---
 arch/powerpc/mm/mmu_decl.h    |  4 ++++
 arch/powerpc/mm/pgtable_32.c  | 11 +++++++----
 4 files changed, 31 insertions(+), 9 deletions(-)
Signed-off-by: Paul Mackerras <paulus@samba.org>
2005-11-19  [PATCH] powerpc: Remove imalloc.h  (David Gibson, 1 file changed, -1/+13)
asm-ppc64/imalloc.h is only included from files in arch/powerpc/mm. We already have a header for mm-local definitions, arch/powerpc/mm/mmu_decl.h. Thus, this patch moves the contents of imalloc.h into mmu_decl.h. The only exceptions are the definitions of PHBS_IO_BASE, IMALLOC_BASE and IMALLOC_END; those are moved into pgtable.h, next to the similar definitions of VMALLOC_START and VMALLOC_SIZE. Built for multiplatform 32-bit and 64-bit (ARCH=powerpc).
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2005-10-10  powerpc: Merge arch/ppc64/mm to arch/powerpc/mm  (Paul Mackerras, 1 file changed, -9/+9)
This moves the remaining files in arch/ppc64/mm to arch/powerpc/mm, and arranges that we use them when compiling with ARCH=ppc64.
Signed-off-by: Paul Mackerras <paulus@samba.org>
2005-10-06  powerpc: Merge lmb.c and make MM initialization use it.  (Paul Mackerras, 1 file changed, -0/+2)
This also creates merged versions of do_init_bootmem, paging_init and mem_init and moves them to arch/powerpc/mm/mem.c. It gets rid of the mem_pieces stuff.
I made memory_limit a parameter to lmb_enforce_memory_limit rather than a global referenced by that function. This will require some small changes to ppc64 if we want to continue building ARCH=ppc64 using the merged lmb.c.
Signed-off-by: Paul Mackerras <paulus@samba.org>
2005-09-26  powerpc: Merge enough to start building in arch/powerpc.  (Paul Mackerras, 1 file changed, -0/+85)
This creates the directory structure under arch/powerpc and a bunch of Kconfig files. It does a first-cut merge of arch/powerpc/mm, arch/powerpc/lib and arch/powerpc/platforms/powermac. This is enough to build a 32-bit powermac kernel with ARCH=powerpc.
For now we are getting some unmerged files from arch/ppc/kernel and arch/ppc/syslib, or arch/ppc64/kernel. This makes some minor changes to files in those directories and files outside arch/powerpc.
The boot directory is still not merged. That's going to be interesting.
Signed-off-by: Paul Mackerras <paulus@samba.org>