path: root/arch/sparc64/mm/init.c
Age | Commit message | Author | Files | Lines (-/+)
2006-03-20 | [SPARC64]: Create a separate kernel TSB for 4MB/256MB mappings. | David S. Miller | 1 | -5/+19
It can map all of the linear kernel mappings with zero TSB hash conflicts for systems with 16GB or less RAM. In such cases, on SUN4V, once we load up this TSB the first time with all the mappings, we never take a linear kernel mapping TLB miss again; the hypervisor handles them all.
Signed-off-by: David S. Miller <davem@davemloft.net>
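A quick back-of-the-envelope check of the 16GB figure (a sketch; the 16-byte entry size and direct-mapped layout are the usual sparc64 TSB assumptions, not spelled out in the message):

    /* With 4MB linear-mapping PTEs, 16GB of RAM needs
     * 16GB / 4MB = 4096 translations.  A direct-mapped TSB with
     * 4096 entries (16 bytes each, i.e. a 64KB table) can hold
     * every one of them with zero hash conflicts. */
    #define LINEAR_PTE_BYTES    (4UL << 20)     /* 4MB  */
    #define ZERO_CONFLICT_RAM   (16UL << 30)    /* 16GB */
    #define KTSB_4M_NENTRIES    (ZERO_CONFLICT_RAM / LINEAR_PTE_BYTES) /* 4096 */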
2006-03-20 | [SPARC64]: Make use of Niagara 256MB PTEs for kernel mappings. | David S. Miller | 1 | -13/+69
We use a bitmap, one bit for every 256MB of memory. If the bit is set we can use a 256MB PTE for linear mappings; otherwise we have to use a 4MB PTE. SUN4V support is there, and we can very easily add support for Panther CPU 256MB PTEs in the future.
Signed-off-by: David S. Miller <davem@davemloft.net>
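A minimal sketch of that decision; the bitmap name, the helper, and the 2TB sizing are illustrative assumptions, not the actual symbols:

    #include <linux/bitops.h>

    #define KPTE_CHUNK_SHIFT    28      /* 256MB chunks          */
    #define KPTE_BITMAP_BITS    8192    /* covers 2TB of memory  */

    static unsigned long kpte_linear_bitmap[KPTE_BITMAP_BITS / BITS_PER_LONG];

    /* Pick the linear-mapping PTE size for a physical address: a set
     * bit means the whole 256MB chunk may use a single 256MB PTE. */
    static unsigned long linear_pte_bytes(unsigned long paddr)
    {
            if (test_bit(paddr >> KPTE_CHUNK_SHIFT, kpte_linear_bitmap))
                    return 256UL << 20;     /* one 256MB PTE covers it */
            return 4UL << 20;               /* fall back to 4MB PTEs   */
    }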
2006-03-20 | [SPARC64]: Export a PAGE_SHARED symbol. | David S. Miller | 1 | -0/+5
For drivers/media/*, noticed by Fabbione.
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 | [SPARC64]: More TLB/TSB handling fixes. | David S. Miller | 1 | -1/+3
The SUN4V convention with non-shared TSBs is that the context bit of the TAG is clear. So we have to choose an "invalid" bit and initialize new TSBs appropriately; otherwise a zero TAG looks "valid". Make sure, for the window fixup cases, that we use the right global registers, that we don't trample on the live global registers used by etrap/rtrap handling (%g2 and %g6), and that we put the missing virtual address properly in %g5.
Signed-off-by: David S. Miller <davem@davemloft.net>
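A sketch of the initialization this implies; the struct layout matches the usual 16-byte TSB entry, and the choice of bit 46 as the invalid tag bit is illustrative (the point is only that it must lie outside any tag a real translation can produce):

    struct tsb {
            unsigned long tag;
            unsigned long pte;
    };

    /* A zeroed TSB would look full of "valid" tag-0 entries under the
     * sun4v convention, so stamp an impossible tag bit into each one. */
    #define TSB_TAG_INVALID_BIT 46

    static void tsb_init(struct tsb *tsb, unsigned long nentries)
    {
            unsigned long i;

            for (i = 0; i < nentries; i++) {
                    tsb[i].tag = 1UL << TSB_TAG_INVALID_BIT;
                    tsb[i].pte = 0UL;
            }
    }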
2006-03-20 | [SPARC64]: Check for errors in hypervisor_tlb_lock(). | David S. Miller | 1 | -0/+5
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 | [SPARC64]: Set associativity of kernel TSB descriptor correctly. | David S. Miller | 1 | -1/+1
It should be 1, not 0.
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 | [SPARC64]: Use phys tsb address in tsb_insert() in SUN4V. | David S. Miller | 1 | -1/+1
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 | [SPARC64]: Use inline patching for critical PTE operations. | David S. Miller | 1 | -208/+3
This handles the SUN4U vs. SUN4V PTE layout differences with near-zero performance cost.
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 | [SPARC64]: Move PTE field definitions back into asm/pgtable.h | David S. Miller | 1 | -84/+0
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 | [SPARC64]: Deal with PTE layout differences in SUN4V. | David S. Miller | 1 | -94/+609
Yes, you heard it right, they changed the PTE layout for SUN4V. Ho hum... This is the simple and inefficient way to support this. It'll get optimized, don't worry.
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 | [SPARC64]: Register kernel TSB with hypervisor. | David S. Miller | 1 | -1/+69
We do this right after we take over the trap table from OBP.
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 | [SPARC64]: Use ASI_SCRATCHPAD address 0x0 properly. | David S. Miller | 1 | -21/+1
This is where the virtual address of the fault status area belongs. To set it up we don't make a hypervisor call; instead we call OBP's SUNW,set-trap-table with the real address of the fault status area as the second argument. And right before that call we write the virtual address into ASI_SCRATCHPAD vaddr 0x0.
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 | [SPARC64]: Fix hypervisor call arg passing. | David S. Miller | 1 | -10/+10
The function number goes in %o5, and the args go in %o0 --> %o4.
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 | [SPARC64]: Detect sun4v early in boot process. | David S. Miller | 1 | -12/+46
We look for "SUNW,sun4v" in the 'compatible' property of the root OBP device tree node. Protect every %ver register access to make sure it is not touched on sun4v, as %ver is hyperprivileged there. Lock kernel TLB entries using hypervisor calls instead of calls into OBP.
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 | [SPARC64]: Patch up mmu context register writes for sun4v. | David S. Miller | 1 | -9/+0
sun4v uses ASI_MMU instead of ASI_DMMU.
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 | [SPARC64]: Register per-cpu fault status area with sun4v hypervisor. | David S. Miller | 1 | -4/+25
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 | [SPARC64]: Initial sun4v TLB miss handling infrastructure. | David S. Miller | 1 | -1/+23
Things are a little tricky because, unlike sun4u, we have to:
1) do a hypervisor trap to do the TLB load, and
2) do the TSB lookup calculations by hand.
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 | [SPARC64]: Add some hypervisor tlb_type checks. | David S. Miller | 1 | -2/+4
And more consistently check cheetah{,_plus} instead of assuming anything not spitfire is cheetah{,_plus}.
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 | [SPARC64]: Turn off TSB growing for now. | David S. Miller | 1 | -5/+0
There are several tricky races involved with growing the TSB. So just use base-size TSBs for user contexts and we can revisit enabling this later.

One part of the SMP problems is that tsb_context_switch() can see partially updated TSB configuration state if tsb_grow() is running in parallel. That's easily solved with a seqlock taken as a writer by tsb_grow() and taken as a reader by tsb_context_switch() to capture all the TSB config state.

Then there is flush_tsb_user() running in parallel with a tsb_grow(). In theory we could take the seqlock as a reader there too, and just resample the TSB pointer and reflush, but that looks really ugly.

Lastly, I believe there is a case with threads that results in a TSB entry lock bit being set spuriously, which will cause the next access to that TSB entry to wedge the cpu (since the TSB entry lock bit will never clear). It's either copy_tsb() or some bug elsewhere in the TSB assembly.
Signed-off-by: David S. Miller <davem@davemloft.net>
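For reference, the seqlock arrangement proposed (and never merged) above could look like this sketch; the config struct, the stand-in for mm->context, and the function bodies are all illustrative:

    #include <linux/seqlock.h>

    struct tsb_config {                     /* illustrative */
            unsigned long tsb_base;
            unsigned long tsb_nentries;
    };

    /* Stand-in for the TSB part of mm->context (illustrative). */
    struct mm_tsb_state {
            struct tsb_config cfg;
    };

    static seqlock_t tsb_cfg_lock = SEQLOCK_UNLOCKED;

    /* Writer side: tsb_grow() publishes the whole new configuration
     * under the seqlock so readers never see it half-updated. */
    static void tsb_grow(struct mm_tsb_state *ctx, unsigned long rss)
    {
            write_seqlock(&tsb_cfg_lock);
            /* ... allocate a larger TSB, rehash entries, update cfg ... */
            write_sequnlock(&tsb_cfg_lock);
    }

    /* Reader side: tsb_context_switch() retries the snapshot until no
     * concurrent tsb_grow() was rewriting it. */
    static void tsb_context_switch(struct mm_tsb_state *ctx)
    {
            struct tsb_config snap;
            unsigned int seq;

            do {
                    seq = read_seqbegin(&tsb_cfg_lock);
                    snap = ctx->cfg;
            } while (read_seqretry(&tsb_cfg_lock, seq));

            /* ... program the MMU/TSB registers from "snap" ... */
    }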
2006-03-20 | [SPARC64]: Access TSB with physical addresses when possible. | David S. Miller | 1 | -0/+32
This way we don't need to lock the TSB into the TLB. The trick is that every TSB load/store is registered into a special instruction patch section. The default uses virtual addresses, and the patch instructions use physical address load/stores. We can't do this on all chips because only cheetah+ and later have the physical variant of the atomic quad load.
Signed-off-by: David S. Miller <davem@davemloft.net>
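A sketch of the patch-section mechanism this describes; the entry layout and section symbols are modeled on the usual sparc64 boot-time patching style and are assumptions here:

    /* Every TSB load/store site records its own address together with
     * a replacement instruction that uses a physical-address ASI. */
    struct tsb_phys_patch_entry {
            unsigned int    addr;   /* virtual-address insn to rewrite */
            unsigned int    insn;   /* physical-address replacement    */
    };

    extern struct tsb_phys_patch_entry __tsb_phys_patch, __tsb_phys_patch_end;

    /* Run once at boot, but only on cheetah+ and later, which have the
     * physical variant of the atomic quad load. */
    static void __init tsb_phys_patch(void)
    {
            struct tsb_phys_patch_entry *p = &__tsb_phys_patch;

            while (p < &__tsb_phys_patch_end) {
                    unsigned int *insn = (unsigned int *)(unsigned long)p->addr;

                    *insn = p->insn;
                    __asm__ __volatile__("flush %0" : : "r" (insn));
                    p++;
            }
    }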
2006-03-20 | [SPARC64]: Kill swapper_pgd_zero, totally unused. | David S. Miller | 1 | -3/+0
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 | [SPARC64]: Kill sole argument passed to setup_tba(). | David S. Miller | 1 | -9/+2
No longer used; also move the extern declaration to a header file.
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 | [SPARC64]: Kill PROM locked TLB entry preservation code. | David S. Miller | 1 | -285/+10
It is totally unnecessary complexity. After we take over the trap table, we handle all PROM TLB misses fully.
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 | [SPARC64]: Preload TSB entries from update_mmu_cache(). | David S. Miller | 1 | -0/+10
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 | [SPARC64]: Dynamically grow TSB in response to RSS growth. | David S. Miller | 1 | -0/+7
As the RSS grows, grow the TSB in order to reduce the likelihood of hash collisions and thus poor hit rates in the TSB. This definitely needs some serious tuning.
Signed-off-by: David S. Miller <davem@davemloft.net>
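The trigger itself can be small; a sketch (the rss_limit field and the grow helper are illustrative, and per the message the threshold still needs tuning):

    /* Called from the fault path: once the RSS approaches the number
     * of TSB entries, hash conflicts climb quickly, so switch to a
     * bigger TSB. */
    static void maybe_grow_tsb(struct mm_tsb_state *ctx, unsigned long rss)
    {
            if (rss >= ctx->rss_limit)      /* illustrative field */
                    tsb_grow(ctx, rss);
    }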
2006-03-20 | [SPARC64]: Kill pgtable quicklists and use SLAB. | David S. Miller | 1 | -17/+15
Taking a nod from the powerpc port. With the per-cpu caching of both the page allocator and SLAB, the pgtable quicklist scheme becomes relatively silly and primitive.
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 | [SPARC64]: No need to D-cache color page tables any longer. | David S. Miller | 1 | -65/+6
Unlike the virtual page tables, the new TSB scheme does not require this ugly hack.
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 | [SPARC64]: Move away from virtual page tables, part 1. | David S. Miller | 1 | -86/+5
We now use the TSB hardware assist features of the UltraSPARC MMUs.

SMP is currently knowingly broken; we need to find another place to store the per-cpu base pointers. We hid them away in the TSB base register, and that obviously will not work any more :-) Another known broken case is non-8KB base page size.

Also noticed that flush_tlb_all() is not referenced anywhere, only the internal __flush_tlb_all() (local cpu only) is used by the sparc64 port, so we can get rid of flush_tlb_all().

The kernel gets its own 8KB TSB (swapper_tsb) and each address space gets its own private 8KB TSB. Later we can add code to dynamically increase the size of the per-process TSB as the RSS grows. An 8KB TSB is good enough for up to about a 4MB RSS, after which the TSB starts to incur many capacity and conflict misses.

We even accumulate OBP translations into the kernel TSB.

Another area for refinement is large page size support. We could use a secondary address space TSB to handle those.
Signed-off-by: David S. Miller <davem@davemloft.net>
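The ~4MB figure follows directly from the entry size (assuming the standard 16-byte tag+PTE TSB entry):

    /* 8KB TSB / 16 bytes per entry = 512 entries.  At one entry per
     * 8KB base page, a conflict-free TSB covers 512 * 8KB = 4MB of
     * resident memory -- hence "good enough for about a 4MB RSS". */
    #define TSB_BYTES           8192
    #define TSB_ENTRY_BYTES     16      /* 8-byte tag + 8-byte PTE */
    #define TSB_NENTRIES        (TSB_BYTES / TSB_ENTRY_BYTES)   /* 512 */
    #define TSB_RSS_COVERAGE    (TSB_NENTRIES * 8192UL)         /* 4MB */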
2005-10-12 | [SPARC64]: Fix boot failures on SunBlade-150 | David S. Miller | 1 | -113/+74
The sequence to move over to the Linux trap tables from the firmware ones needs to be more air tight. It turns out that to be 100% safe we do need to be able to translate OBP mappings in our TLB miss handlers early.

In order not to eat up a lot of kernel image memory with static page tables, just use the translations array in the OBP TLB miss handlers. That solves the bulk of the problem. Furthermore, to make sure the OBP TLB miss path will work even before the fixed MMU globals are loaded, explicitly load %g1 to TLB_SFSR at the beginning of the i-TLB and d-TLB miss handlers.

To ease the OBP TLB miss walking of the prom_trans[] array, we sort it and then delete all of the non-OBP entries in there (for example, there are entries for the kernel image itself which we're not interested in at all). We also save about 32K of kernel image size with this change. Not a bad side effect :-)

There are still some reasons why trampoline.S can't use setup_trap_table() yet. The most noteworthy are:

1) OBP boots secondary processors with a non-biased stack for some reason. This is easily fixed by using a small bootup stack in the kernel image explicitly for this purpose.

2) Doing a firmware call via the normal C call prom_set_trap_table() goes through the whole OBP enter/exit sequence that saves and restores OBP and Linux kernel state in the MMUs. This path unfortunately does a "flush %g6" while loading up the OBP locked TLB entries for the firmware call. If we set up %g6 in the trampoline.S code properly, that is in the PAGE_OFFSET linear mapping, but we're not on the kernel trap table yet so those addresses won't translate properly.

One idea is to do a by-hand firmware call like we do in the early bootup code and elsewhere here in trampoline.S. But this fails as well, as apparently the secondary processors are not booted with OBP's special locked TLB entries loaded. These are necessary for the firmware to process TLB misses correctly up until the point where we take over the trap table.

This does need to be resolved at some point.
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-10-05 | [SPARC64]: Fix initrd when net booting. | David S. Miller | 1 | -100/+56
By allocating early memory for the firmware page tables, we can write over the beginning of the initrd image. So what we do now is:
1) Read in the firmware translations table while still on the firmware's trap table.
2) Switch to the Linux trap table.
3) Init bootmem.
4) Build the firmware page tables using __alloc_bootmem().
And this keeps the initrd from being clobbered; the sequencing is sketched below.
Signed-off-by: David S. Miller <davem@davemloft.net>
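Sketched as the ordering might appear in paging_init() (the helper names approximate the sparc64 functions; treat this as a sketch of the sequencing, not the literal code):

    void __init paging_init(void)
    {
            /* 1) Capture the firmware's translations while its trap
             *    table still services our TLB misses. */
            read_obp_translations();

            /* 2) Take over with the Linux trap table. */
            setup_trap_table();

            /* 3) Bring up the bootmem allocator, which knows where the
             *    initrd lives and reserves it. */
            bootmem_init();

            /* 4) Only now build the firmware page tables, so that
             *    __alloc_bootmem() can never hand back memory already
             *    occupied by the initrd image. */
            inherit_prom_mappings();
    }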
2005-10-04 | [SPARC64]: Replace cheetah+ code patching with variables. | David S. Miller | 1 | -6/+20
Instead of code patching to handle the page size fields in the context registers, just use variables from which we get the proper values.
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-29 | [SPARC64]: Rewrite convoluted physical memory probing. | David S. Miller | 1 | -191/+111
Delete all of the code working with sp_banks[] and replace with clean acquisition and sorting of physical memory parameters from the firmware.
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-28 | [SPARC64]: Kill all external references to sp_banks[] | David S. Miller | 1 | -1/+22
Thus, we can mark sp_banks[] static in arch/sparc64/mm/init.c.
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-28 | [SPARC64]: Move phys_base, kern_{base,size}, and sp_banks[] init to paging_init | David S. Miller | 1 | -1/+61
Also, move prom_probe_memory() into arch/sparc64/mm/init.c.
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-26 | [SPARC64]: Do not do TLB pre-filling any more. | David S. Miller | 1 | -6/+0
In order to do it correctly on UltraSPARC-III+ and later we'd need to add some complicated code to set the TAG access extension register before loading the TLB. Since this optimization gives questionable gains, it's best to just remove it for now instead of adding the fix for Ultra-III+.
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-25 | [SPARC64]: Add CONFIG_DEBUG_PAGEALLOC support. | David S. Miller | 1 | -3/+106
The trick is that we do the kernel linear mapping TLB miss starting with an instruction sequence like this:

	ba,pt	%xcc, kvmap_load
	 xor	%g2, %g4, %g5

succeeded by an instruction sequence which performs a full page table walk starting at swapper_pg_dir.

We first take over the trap table from the firmware. Then, using this constant PTE generation for the linear mapping area above, we build the kernel page tables for the linear mapping.

After this is set up, we patch that branch above into a "nop", which will cause TLB misses to fall through to the full page table walk. With this, the page unmapping for CONFIG_DEBUG_PAGEALLOC is trivial.
Signed-off-by: David S. Miller <davem@davemloft.net>
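The final patching step can be sketched like this (the helper name is illustrative; 0x01000000 is the standard SPARC nop encoding, i.e. sethi 0, %g0):

    /* Once the kernel page tables for the linear area exist, rewrite
     * the "ba,pt %xcc, kvmap_load" branch to a nop so kernel TLB
     * misses fall through into the full page-table walk. */
    static void __init patch_kvmap_branch(unsigned int *branch_insn)
    {
            *branch_insn = 0x01000000;  /* nop == sethi 0, %g0 */
            __asm__ __volatile__("flush %0" : : "r" (branch_insn));
    }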
2005-09-23 | [SPARC64]: Mark functions called by paging_init() as __init. | David S. Miller | 1 | -6/+6
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-22 | [SPARC64]: Rewrite bootup sequence. | David S. Miller | 1 | -93/+5
Instead of all of this cpu-specific code to remap the kernel to the correct location, use portable firmware calls to do this instead. What we do now is the following in position independent assembler:

	chosen_node = prom_finddevice("/chosen");
	prom_mmu_ihandle_cache = prom_getint(chosen_node, "mmu");
	vaddr = 4MB_ALIGN(current_text_addr());
	prom_translate(vaddr, &paddr_high, &paddr_low, &mode);
	prom_boot_mapping_mode = mode;
	prom_boot_mapping_phys_high = paddr_high;
	prom_boot_mapping_phys_low = paddr_low;
	prom_map(-1, 8 * 1024 * 1024, KERNBASE, paddr_low);

and that replaces the massive amount of by-hand TLB probing and programming we used to do here.

The new code should also properly handle the case where the kernel is mapped at the correct address already (think: future kexec support). Consequently, the bulk of remap_kernel() dies, as does the entirety of arch/sparc64/prom/map.S.

We try to share some strings in the PROM library with the ones used at bootup, and while we're here mark input strings to oplib.h routines with "const" when appropriate.

There are many more simplifications now possible. For one thing, we can consolidate the two copies we now have of a lot of cpu setup code sitting in head.S and trampoline.S.

This is a significant step towards CONFIG_DEBUG_PAGEALLOC support.
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-22 | [SPARC64]: Kill readjust_prom_translations() | David S. Miller | 1 | -35/+0
Testing shows that the prom_unmap() calls do absolutely nothing.
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-22 | [SPARC64]: Remove unnecessary paging_init() cruft. | David S. Miller | 1 | -99/+15
Because we don't access the PAGE_OFFSET linear mappings any longer before we take over the trap table from the firmware, we don't need to load dummy mappings there into the TLB and we don't need the bootmap_base hack any longer either.

While we are here, check for a larger than 8MB kernel and halt the boot with an error message. We know that doesn't work, so instead of failing mysteriously we should let the user know exactly what's wrong.
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-22 | [SPARC64]: Do not allocate OBP page tables using bootmem | David S. Miller | 1 | -47/+100
Just allocate them physically, starting from the end of the kernel image. This enormously simplifies our MM bootstrap in that we don't need any mappings in the linear PAGE_OFFSET area working in order to bootstrap ourselves and take over the trap table from the firmware. Many further simplifications are possible now, and this also sets the stage for CONFIG_DEBUG_PAGEALLOC support.
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-22 | [SPARC64]: Break up inherit_prom_mappings() into its constituent parts. | David S. Miller | 1 | -141/+160
This thing was just a huge monolithic mess, so chop it up.
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-21 | [SPARC64]: Do not allocate prom translations using bootmem. | David S. Miller | 1 | -28/+26
Use __initdata instead.
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-21 | [SPARC64]: Remove ktlb.S instruction patching. | David S. Miller | 1 | -20/+12
This was kind of ugly, and actually buggy. The bug was that we didn't handle a machine with memory starting above 4GB. If the 'prompmd' was allocated in physical memory above 4GB we'd croak, because the obp_iaddr_patch and obp_daddr_patch things only supported a 32-bit physical address.

So fix this by just loading the appropriate values from two variables in the kernel image, which is locked into the TLB and thus accesses to them can't cause a recursive TLB miss.
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-07 | [PATCH] Kprobes: prevent possible race conditions sparc64 changes | Prasanna S Panchamukhi | 1 | -1/+2
This patch contains the sparc64 architecture specific changes to prevent the possible race conditions.
Signed-off-by: Prasanna S Panchamukhi <prasanna@in.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-07-27 | [SPARC64]: Fix ugly dependency on NR_CPUS being a power-of-2. | David S. Miller | 1 | -6/+17
The page->flags D-cache dirty state tracking depended upon NR_CPUS being a power of 2 via its "NR_CPUS - 1" masking. Fix that to use a fixed (256 - 1) mask, as that is the limit imposed by thread_info->cpu, which is a "u8". Finally, add a compile time check that NR_CPUS is not greater than 256.
Signed-off-by: David S. Miller <davem@davemloft.net>
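A sketch of the fixed scheme (the macro names and the bit position of the cpu field within page->flags are illustrative):

    /* thread_info->cpu is a u8, so 256 cpus is the hard ceiling.  The
     * constant (256 - 1) is a valid mask regardless of NR_CPUS, unlike
     * (NR_CPUS - 1), which only masks correctly for power-of-2 values. */
    #define DCACHE_CPU_SHIFT    24          /* illustrative position */
    #define DCACHE_CPU_MASK     (256UL - 1UL)

    #if NR_CPUS > 256
    #error D-cache dirty tracking in page->flags supports at most 256 cpus
    #endif

    static inline unsigned long dcache_dirty_cpu(struct page *page)
    {
            return (page->flags >> DCACHE_CPU_SHIFT) & DCACHE_CPU_MASK;
    }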
2005-06-27 | [SPARC64]: Avoid membar instructions in delay slots. | David S. Miller | 1 | -2/+4
In particular, avoid membar instructions in the delay slot of a jmpl instruction. UltraSPARC-I, II, IIi, and IIe have a bug, documented in the UltraSPARC-IIi User's Manual, Appendix K, Erratum 51.

The long and short of it is that if the IMU unit misses on a branch or jmpl, and there is a store buffer synchronizing membar in the delay slot, the chip can stop fetching instructions. If interrupts are enabled or some other trap is enabled, the chip will unwedge itself, but performance will suffer.

We already had a workaround for this bug in a few spots, but it's better to have the entire tree sanitized for this rule.
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-05-05 | [SPARC64]: Kill useless __pte_alloc_one_kernel indirection | Christoph Hellwig | 1 | -1/+1
Warning: untested, but there's not too much chance for screwups.
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-04-17 | [PATCH] sparc64: Do not flush dcache for ZERO_PAGE. | David S. Miller | 1 | -4/+15
This case actually can get exercised a lot during an ELF coredump of a process which contains a lot of non-COW'd anonymous pages. GDB has a test case which in particular creates a near-terabyte process full of ZERO_PAGEs. It takes forever to just walk through the page tables because of all of these spurious cache flushes on sparc64. With this change it takes only a second or so.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-04-16 | Linux-2.6.12-rc2 | Linus Torvalds | 1 | -0/+1769
Initial git repository build. I'm not bothering with the full history, even though we have it. We can create a separate "historical" git archive of that later if we want to, and in the meantime it's about 3.2GB when imported into git - space that would just make the early git days unnecessarily complicated, when we don't have a lot of good infrastructure for it. Let it rip!