path: root/arch/powerpc/mm/tlb_nohash_low.S
2019-05-03  powerpc/mm: Move nohash specifics in subdirectory mm/nohash  (Christophe Leroy, 1 file, -491/+0)
Many files in arch/powerpc/mm are only for nohash. This patch creates a subdirectory for them.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
[mpe: Shorten new filenames]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-07-30  powerpc: clean inclusions of asm/feature-fixups.h  (Christophe Leroy, 1 file, -0/+1)
Files not using feature fixups don't need asm/feature-fixups.h; files using feature fixups do need it.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-07-30  powerpc: move ASM_CONST and stringify_in_c() into asm-const.h  (Christophe Leroy, 1 file, -0/+1)
This patch moves ASM_CONST() and stringify_in_c() into a dedicated asm-const.h, then cleans up all related inclusions.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
[mpe: asm-compat.h should include asm-const.h]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
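For context, a hedged paraphrase of what such a header provides, not a verbatim copy of asm-const.h: the same constant or instruction template has to work from both assembly and C, so each side gets a different expansion.

    /* Hedged paraphrase of asm-const.h, not a verbatim copy. */
    #ifdef __ASSEMBLY__
    #define ASM_CONST(x)            x       /* assembler: no UL suffix */
    #define stringify_in_c(...)     __VA_ARGS__
    #else
    #define __ASM_CONST(x)          x##UL   /* C: give the constant a size */
    #define ASM_CONST(x)            __ASM_CONST(x)
    /* Turn an instruction template into a C string usable in inline asm;
     * the variadic form survives commas inside the template. */
    #define __stringify_in_c(...)   #__VA_ARGS__
    #define stringify_in_c(...)     __stringify_in_c(__VA_ARGS__) " "
    #endif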
2017-08-10  powerpc/8xx: Getting rid of remaining use of CONFIG_8xx  (Christophe Leroy, 1 file, -1/+1)
Two config options exist to define powerpc MPC8xx:

* CONFIG_PPC_8xx
* CONFIG_8xx

arch/powerpc/platforms/Kconfig.cputype has contained the following comment about the CONFIG_8xx item for some years: "# this is temp to handle compat with arch=ppc"

arch/powerpc is now the only place with remaining uses of CONFIG_8xx: get rid of them.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-03-01  powerpc: Fix misspellings in comments.  (Adam Buchbinder, 1 file, -2/+2)
Signed-off-by: Adam Buchbinder <adam.buchbinder@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2015-10-22  powerpc/85xx: Load all early TLB entries at once  (Scott Wood, 1 file, -0/+63)
Use an AS=1 trampoline TLB entry to allow all normal TLB1 entries to be loaded at once. This avoids the need to keep the translation that code is executing from in the same TLB entry in the final TLB configuration as during early boot, which in turn is helpful for relocatable kernels (e.g. kdump) where the kernel is not running from what would be the first TLB entry.

On e6500, we limit map_mem_in_cams() to the primary hwthread of a core (the boot cpu is always considered primary, as a kdump kernel can be entered on any cpu). Each TLB only needs to be set up once, and when we do, we don't want another thread to be running when we create a temporary trampoline TLB1 entry.

Signed-off-by: Scott Wood <scottwood@freescale.com>
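For orientation, a rough C-style outline of the sequence this enables. Every helper name below is invented for the sketch; the real implementation is assembly in tlb_nohash_low.S, so treat this as a description of the flow rather than the kernel's code.

    /* Illustrative outline only; all helper names here are hypothetical. */
    extern void write_trampoline_tlb1_entry_as1(void);
    extern void invalidate_boot_tlb1_entries(void);
    extern void load_final_tlb1_entries(void);
    extern void switch_to_as1(void);
    extern void restore_to_as0(void);

    static void load_all_tlb1_entries(void)
    {
            write_trampoline_tlb1_entry_as1(); /* temporary AS=1 entry */
            switch_to_as1();                   /* now executing via the trampoline */
            invalidate_boot_tlb1_entries();    /* early-boot entries can all go */
            load_final_tlb1_entries();         /* bolt the final set in one pass */
            restore_to_as0();                  /* resume in AS=0 under the final map */
    }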
2014-01-09  powerpc/fsl_booke: smp support for booting a relocatable kernel above 64M  (Kevin Hao, 1 file, -1/+3)
When booting above 64M for a secondary cpu, we face the same issue as the boot cpu: PAGE_OFFSET maps to two different physical addresses for the init TLB and the final map. So we have to use switch_to_as1/restore_to_as0 between the conversion of these two maps. When restoring to AS=0 for a secondary cpu, we only need to return to the caller, so add a new parameter to restore_to_as0 for this purpose.

Use LOAD_REG_ADDR_PIC to get the address of variables which may be used before we set the final map in the CAMs for the secondary cpu. Move the setting of the CAMs a bit earlier in order to avoid unnecessary use of LOAD_REG_ADDR_PIC.

Signed-off-by: Kevin Hao <haokexin@gmail.com>
Signed-off-by: Scott Wood <scottwood@freescale.com>
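LOAD_REG_ADDR_PIC loads a symbol's address relative to the current program counter, so it gives the right answer even before the final map is in place. A hedged paraphrase of the ppc_asm.h-style definition (note that it clobbers the link register); verify against the tree before relying on the details:

    /* Hedged paraphrase of a PIC address load; clobbers lr. */
    #define LOAD_REG_ADDR_PIC(reg, name)            \
            bl      0f;                             \
    0:      mflr    reg;                            \
            addis   reg,reg,(name - 0b)@ha;         \
            addi    reg,reg,(name - 0b)@l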
2012-11-15  powerpc/47x: Use the new ppc-opcode infrastructure  (Tony Breeds, 1 file, -11/+4)
Don't use 47x-only #defines for TLBIVAX or ICBT; supply and use helpers in ppc-opcode.h instead. This fixes a compile breakage.

Signed-off-by: Tony Breeds <tony@bakeyournoodle.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-07-10  powerpc: Enforce usage of RA 0-R31 where possible  (Michael Neuling, 1 file, -8/+8)
Some macros use RA where, when RA=R0, the value is 0 (the literal zero, not the contents of r0), so make this the enforced mnemonic in the macro. Idea suggested by Andreas Schwab.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
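A hedged sketch of how ppc-opcode.h-style encoding helpers can enforce that distinction at build time; this paraphrases the scheme rather than quoting the header:

    /* Hedged paraphrase of ppc-opcode.h-style encoders. */
    #define ___PPC_RA(a)    (((a) & 0x1f) << 16)    /* raw RA field */
    /* Accepts only symbolic registers R0..R31. */
    #define __PPC_RA(a)     ___PPC_RA(__REG_##a)
    /* Accepts 0 or R1..R31; no __REGA0_R0 mapping exists, so a caller
     * who means "zero, not r0" is forced to write a literal 0. */
    #define __PPC_RA0(a)    ___PPC_RA(__REGA0_##a)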
2012-07-10  powerpc: Fixes for instructions not using correct register naming  (Michael Neuling, 1 file, -5/+5)
These macros are using integers where they could be using logical names, since they take registers. We are going to enforce this soon, so fix these up now.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-07-10  powerpc: Fix usage of register macros getting ready for %r0 change  (Michael Neuling, 1 file, -3/+3)
Anything that uses a constructed instruction (i.e. from ppc-opcode.h) needs to use the new R0 macro, as %r0 is not going to work.

Also convert usages of macros where we are just determining an offset (usually for a load/store), like:

    std r14,STK_REG(r14)(r1)

Can't use STK_REG(r14) here, as %r14 doesn't work in the STK_REG macro since it's just calculating an offset.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
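The distinction, in a hedged paraphrase of the ppc_asm.h-style definitions (the 112-byte frame base reflects the 64-bit stack layout of the era; verify against the header):

    /* Symbolic register names expand to bare numbers, so they work both
     * inside constructed opcodes and in ordinary offset arithmetic. */
    #define R1      1
    #define R14     14
    /* STK_REG() only computes a stack-frame offset; %r14 is an assembler
     * register name, not a number, so it can't take part in this math. */
    #define STK_REG(i)      (112 + ((i) - 14) * 8)

The store in the example then becomes std r14,STK_REG(R14)(r1): the register operand keeps its usual name while the offset uses the numeric macro.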
2011-02-02  powerpc/476: Workaround for PLB6 hang  (Dave Kleikamp, 1 file, -0/+35)
The 476FP core may hang if an instruction fetch happens during an msync following a tlbsync. This workaround makes sure that enough instruction cache lines are pre-fetched before executing the msync. (sync and msync are the same to the compiler.)

Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
2010-10-14  powerpc/fsl-booke64: Use TLB CAMs to cover linear mapping on FSL 64-bit chips  (Kumar Gala, 1 file, -1/+1)
Freescale parts typically have a TLB array for large mappings that we can bolt the linear mapping into. We utilize the code that already exists on PPC32 on the 64-bit side to set up the linear mapping so it is covered by bolted TLB entries. We utilize a quarter of the variable size TLB array for this purpose.

Additionally, we limit the amount of memory to what we can cover via bolted entries so we don't get secondary faults in the TLB miss handlers. We should fix this limitation in the future.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2010-08-23  powerpc/47x: Add an isync before the tlbivax instruction  (Dave Kleikamp, 1 file, -0/+1)
Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
2010-05-17  powerpc/fsl-booke: Move loadcam_entry back to asm code to fix SMP ftrace  (Kumar Gala, 1 file, -0/+28)
When we build with ftrace enabled, it's possible that loadcam_entry would have used the stack pointer (even though the code doesn't need it). We call loadcam_entry in __secondary_start before the stack is set up. To ensure that loadcam_entry doesn't use the stack pointer, the easiest solution is to just have it in asm code.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2010-05-05  powerpc/47x: Base ppc476 support  (Dave Kleikamp, 1 file, -12/+106)
This patch adds the base support for the 476 processor. The code was primarily written by Ben Herrenschmidt and Torez Smith, but I've been maintaining it for a while.

The goal is to have a single binary that will run on 44x and 47x, but we still have some details to work out. The biggest is that the L1 cache line size differs on the two platforms, but it's currently a compile-time option.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Torez Smith <lnxtorez@linux.vnet.ibm.com>
Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
2009-08-24  powerpc/booke: Move MMUCSR definition into mmu-book3e.h  (Kumar Gala, 1 file, -2/+0)
The MMUCSR is now defined as part of the Book-3E architecture, so we can move it into mmu-book3e.h and add some of the additional bits defined by the architecture specs.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
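For reference, the flash-invalidate bits end up looking roughly like this. The values below are a hedged paraphrase from memory, not a quote of mmu-book3e.h, so verify against the header:

    /* Hedged paraphrase of MMUCSR0 flash-invalidate bits; verify values. */
    #define MMUCSR0_TLB1FI  0x00000002      /* TLB1 flash invalidate */
    #define MMUCSR0_TLB0FI  0x00000004      /* TLB0 flash invalidate */
    #define MMUCSR0_TLB3FI  0x00000020      /* TLB3 flash invalidate */
    #define MMUCSR0_TLB2FI  0x00000040      /* TLB2 flash invalidate */
    #define MMUCSR0_TLBFI   (MMUCSR0_TLB0FI | MMUCSR0_TLB1FI | \
                             MMUCSR0_TLB2FI | MMUCSR0_TLB3FI)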
2009-08-20  powerpc: Add TLB management code for 64-bit Book3E  (Benjamin Herrenschmidt, 1 file, -0/+79)
This adds the TLB miss handler assembly, the low level TLB flush routines, and the necessary hooks for dealing with our virtual page tables or indirect TLB entries that need to be flushed when PTE pages are freed. There is currently no support for hugetlbfs.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-08-20  powerpc/mm: Make low level TLB flush ops on BookE take additional args  (Benjamin Herrenschmidt, 1 file, -3/+3)
We need to pass down whether the page is direct or indirect, and we'll need to pass the page size to _tlbil_va and _tlbivax_bcast.

We also add a new low level _tlbil_pid_noind() which does a TLB flush by PID but avoids flushing indirect entries if possible.

This implements those new prototypes but defines them with inlines or macros so that no additional arguments are actually passed on current processors.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
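A hedged sketch of the shape this takes in mm/mmu_decl.h: the public prototypes grow the new arguments, while a thin inline keeps them from reaching processors that don't use the information yet. Names and signatures are paraphrased, not verbatim:

    /* New-style prototypes carrying page size and direct/indirect info. */
    extern void _tlbil_pid_noind(unsigned int pid);
    extern void _tlbivax_bcast(unsigned long address, unsigned int pid,
                               unsigned int tsize, unsigned int ind);

    /* On processors that ignore the extra information, a thin inline
     * drops the new arguments before reaching the old asm helper. */
    extern void __tlbil_va(unsigned long address, unsigned int pid);
    static inline void _tlbil_va(unsigned long address, unsigned int pid,
                                 unsigned int tsize, unsigned int ind)
    {
            __tlbil_va(address, pid);
    }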
2009-04-23  powerpc: fix for long standing bug noticed by gcc 4.4.0  (Stephen Rothwell, 1 file, -1/+1)
Previous gcc versions didn't notice this because one of the preceding #ifs always evaluated to true. gcc 4.4.0 produced this error:

    arch/powerpc/mm/tlb_nohash_low.S:206:6: error: #elif with no expression

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Acked-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
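The failure mode, reduced to a minimal hypothetical example (CONFIG_FOO and CONFIG_BAR are placeholders, not real options):

    #if defined(CONFIG_FOO)
            /* ... */
    #elif defined(CONFIG_BAR)
            /* ... */
    #elif   /* gcc 4.4.0: error: #elif with no expression */
            /* ... */
    #endif

The fix is to spell the catch-all branch as #else. Older gccs let the bare #elif slide because an earlier branch was always taken, so the broken directive was never evaluated.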
2009-04-23  Revert "powerpc: Add support for early tlbilx opcode"  (Kumar Gala, 1 file, -13/+1)
This reverts commit e9965577406a2148ade97b5e0ce7c448b4ba4ef6. Our HW guys were able to fix this so it never sees the light of day.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2009-04-07  powerpc: Add support for early tlbilx opcode  (Kumar Gala, 1 file, -1/+13)
During the ISA 2.06 development the opcode for tlbilx changed, and some early implementations used the old opcode. Add support for an MMU_FTR fixup to deal with this.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2009-03-09  powerpc/fsl-booke: Add support for tlbilx instructions  (Kumar Gala, 1 file, -7/+37)
The e500mc core supports the new tlbilx instructions that do core-local invalidates and also provide us the ability to take down all TLB entries matching a given PID.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
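The instructions get stringify_in_c()-style wrappers so callers can emit them as raw opcodes even with assemblers that don't know tlbilx. A hedged sketch following ppc-opcode.h's later macro spellings; the encodings below are from memory, so verify before use:

    /* Hedged paraphrase: tlbilx is major opcode 31, extended opcode 18;
     * the T field selects the scope of the invalidation. */
    #define PPC_INST_TLBILX         0x7c000024
    #define __PPC_T_TLB(t)          (((t) & 0x3) << 21)
    #define PPC_TLBILX(t, a, b)     stringify_in_c(.long PPC_INST_TLBILX | \
                                            __PPC_T_TLB(t) | __PPC_RA0(a) | __PPC_RB(b))
    #define PPC_TLBILX_ALL(a, b)    PPC_TLBILX(0, a, b)     /* core-local flush all */
    #define PPC_TLBILX_PID(a, b)    PPC_TLBILX(1, a, b)     /* flush by PID */
    #define PPC_TLBILX_VA(a, b)     PPC_TLBILX(3, a, b)     /* flush one VA */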
2008-12-21  powerpc/44x: No need to mask MSR:CE, ME or DE in _tlbil_va on 440  (Benjamin Herrenschmidt, 1 file, -9/+10)
The handlers for Critical, Machine Check or Debug interrupts will save and restore MMUCR nowadays, thus we only need to disable normal interrupts when invalidating TLB entries.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Acked-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-12-21  powerpc/mm: Split low level tlb invalidate for nohash processors  (Benjamin Herrenschmidt, 1 file, -0/+165)
Currently, the various forms of low level TLB invalidations are all implemented in misc_32.S for 32-bit processors, in a fairly scary mess of #ifdefs and with interesting duplication such as a whole bunch of code for FSL _tlbie and _tlbia which are no longer used.

This moves things around such that _tlbie is now defined in hash_low_32.S and is only used by the 32-bit hash code, and all nohash CPUs use the various _tlbil_* forms that are now moved to a new file, tlb_nohash_low.S.

I moved all the definitions for that stuff out of include/asm/tlbflush.h and into mm/mmu_decl.h, as they are really internal mm stuff.

The code should have no functional changes. I kept some variants inline for trivial forms on things like 40x and 8xx.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
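The resulting split, in a hedged paraphrase of mm/mmu_decl.h as of this change: the trivial forms stay inline for 40x and 8xx, everything else becomes out-of-line assembly in tlb_nohash_low.S. Exact inline bodies are approximated here, not quoted:

    #if defined(CONFIG_40x) || defined(CONFIG_8xx)
    /* On these cores a full flash invalidate is cheap, so stay inline. */
    #define _tlbil_all()    asm volatile ("sync; tlbia; isync" : : : "memory")
    #define _tlbil_pid(pid) asm volatile ("sync; tlbia; isync" : : : "memory")
    static inline void _tlbil_va(unsigned long address, unsigned int pid)
    {
            asm volatile ("tlbie %0; sync" : : "r" (address) : "memory");
    }
    #else
    /* Everything else gets out-of-line assembly in tlb_nohash_low.S. */
    extern void _tlbil_all(void);
    extern void _tlbil_pid(unsigned int pid);
    extern void _tlbil_va(unsigned long address, unsigned int pid);
    #endif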