path: root/arch/powerpc/kernel/misc_64.S
Age | Commit message | Author | Files | Lines
2017-02-06 | powerpc/64: Fix naming of cache block vs. cache line | Benjamin Herrenschmidt | 1 | -14/+14
In a number of places we called "cache line size" what is actually the cache block size, which in the powerpc architecture, means the effective size to use with cache management instructions (it can be different from the actual cache line size). We fix the naming across the board and properly retrieve both pieces of information when available in the device-tree. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
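As background for how the two sizes can differ, here is a minimal C sketch of picking the block size up from the device tree, preferring the block-size property and falling back to the line size; the helper name and the 128-byte default are illustrative assumptions, not code from this commit:

  #include <linux/of.h>

  /* Illustrative only: prefer the cache *block* size (the granule that
   * dcbst/icbi/dcbz operate on) and fall back to the line size when the
   * device tree does not provide a separate block-size property. */
  static u32 __init dcache_block_size_from_dt(struct device_node *cpu)
  {
      u32 size;

      if (!of_property_read_u32(cpu, "d-cache-block-size", &size))
          return size;
      if (!of_property_read_u32(cpu, "d-cache-line-size", &size))
          return size;
      return 128;    /* assumed conservative default for 64-bit Book3S */
  }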
2016-11-30 | powerpc: Change places using CONFIG_KEXEC to use CONFIG_KEXEC_CORE instead. | Thiago Jung Bauermann | 1 | -3/+3
Commit 2965faa5e03d ("kexec: split kexec_load syscall from kexec core code") introduced CONFIG_KEXEC_CORE so that CONFIG_KEXEC means whether the kexec_load system call should be compiled-in and CONFIG_KEXEC_FILE means whether the kexec_file_load system call should be compiled-in. These options can be set independently from each other. Since until now powerpc only supported kexec_load, CONFIG_KEXEC and CONFIG_KEXEC_CORE were synonyms. That is not the case anymore, so we need to make a distinction. Almost all places where CONFIG_KEXEC was being used should be using CONFIG_KEXEC_CORE instead, since kexec_file_load also needs that code compiled in. Signed-off-by: Thiago Jung Bauermann <bauerman@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
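A minimal illustration of the resulting guard change; only the CONFIG symbols are real, the declared helper is hypothetical:

  /* Before: only built when the kexec_load syscall itself was enabled. */
  #ifdef CONFIG_KEXEC
  extern void setup_kexec_machine_state(void);    /* hypothetical helper */
  #endif

  /* After: built whenever any kexec core support is enabled, so
   * kexec_file_load-only configurations also get this code. */
  #ifdef CONFIG_KEXEC_CORE
  extern void setup_kexec_machine_state(void);    /* hypothetical helper */
  #endif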
2016-10-14 | Merge branch 'kbuild' of git://git.kernel.org/pub/scm/linux/kernel/git/mmarek/kbuild | Linus Torvalds | 1 | -0/+4
Pull kbuild updates from Michal Marek:

 - EXPORT_SYMBOL for asm source by Al Viro. This does bring a regression, because genksyms no longer generates checksums for these symbols (CONFIG_MODVERSIONS). Nick Piggin is working on a patch to fix this. Plus, we are talking about functions like strcpy(), which rarely change prototypes.

 - Fixes for PPC fallout of the above by Stephen Rothwell and Nick Piggin

 - fixdep speedup by Alexey Dobriyan.

 - preparatory work by Nick Piggin to allow architectures to build with -ffunction-sections, -fdata-sections and --gc-sections

 - CONFIG_THIN_ARCHIVES support by Stephen Rothwell

 - fix for filenames with colons in the initramfs source by me.

* 'kbuild' of git://git.kernel.org/pub/scm/linux/kernel/git/mmarek/kbuild: (22 commits)
  initramfs: Escape colons in depfile
  ppc: there is no clear_pages to export
  powerpc/64: whitelist unresolved modversions CRCs
  kbuild: -ffunction-sections fix for archs with conflicting sections
  kbuild: add arch specific post-link Makefile
  kbuild: allow archs to select link dead code/data elimination
  kbuild: allow architectures to use thin archives instead of ld -r
  kbuild: Regenerate genksyms lexer
  kbuild: genksyms fix for typeof handling
  fixdep: faster CONFIG_ search
  ia64: move exports to definitions
  sparc32: debride memcpy.S a bit
  [sparc] unify 32bit and 64bit string.h
  sparc: move exports to definitions
  ppc: move exports to definitions
  arm: move exports to definitions
  s390: move exports to definitions
  m68k: move exports to definitions
  alpha: move exports to actual definitions
  x86: move exports to actual definitions
  ...
2016-09-23 | powerpc/64/kexec: Copy image with MMU off when possible | Benjamin Herrenschmidt | 1 | -4/+14
Currently we turn the MMU off after copying the image, and we make sure there is no overlap between the hash table and the target pages in that case. That doesn't work for Radix however. In that case, the page tables are scattered and we can't really enforce that the target of the image isn't overlapping one of them. So instead, let's turn the MMU off before copying the image in radix mode. Thankfully, in radix mode, even under a hypervisor, we know we don't have the same kind of RMA limitations that hash mode has. While at it, also turn the MMU off early when using hash in non-LPAR mode, that way we can get rid of the collision check completely. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Acked-by: Balbir Singh <bsingharora@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-09-23 | powerpc/64/kexec: NULL check "clear_all" in kexec_sequence | Benjamin Herrenschmidt | 1 | -3/+4
With Radix, it can be NULL even on !BOOKE these days so replace the ifdef with a NULL check which is cleaner anyway. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Acked-by: Balbir Singh <bsingharora@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-09-19 | powerpc/kernel: Use kprobe blacklist for asm functions | Nicholas Piggin | 1 | -2/+3
Rather than forcing the whole function into the ".kprobes.text" section, just add the symbol's address to the kprobe blacklist. This also lets us drop the three versions of the _KPROBE macro, in exchange for just one version of _ASM_NOKPROBE_SYMBOL - which is a good cleanup. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
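For comparison, the long-standing C-side counterpart of the new asm macro works like this (a sketch; the function name is made up for illustration):

  #include <linux/kprobes.h>

  /* Record the symbol's address in the kprobe blacklist instead of moving
   * the whole function into a special section. */
  static int low_level_helper(unsigned long addr)
  {
      /* ... code that must never trigger a kprobe trap ... */
      return 0;
  }
  NOKPROBE_SYMBOL(low_level_helper);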
2016-08-07 | ppc: move exports to definitions | Al Viro | 1 | -0/+4
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2016-07-21 | powerpc/mm: Move hash table ops to a separate structure | Benjamin Herrenschmidt | 1 | -1/+1
Moving probe_machine() to after mmu init will cause the ppc_md fields related to hash table management to be overwritten. Since we have essentially disconnected the machine type from the hash backend ops, finish the job by moving them to a different structure. The only callback that didn't quite fit is update_partition_table, since this is not specific to hash, so I moved it to a standalone variable for now. We can revisit later if needed. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> [mpe: Fix ppc64e build failure in kexec] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
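Conceptually, the result looks something like the sketch below; the structure and field names are illustrative, modelled on the old ppc_md hash callbacks rather than copied from the patch:

  /* Illustrative sketch, not the exact structure from the patch. */
  struct hash_mmu_ops {
      long (*hpte_insert)(unsigned long hpte_group, unsigned long vpn,
                          unsigned long pa, unsigned long rflags,
                          unsigned long vflags, int psize, int apsize,
                          int ssize);
      void (*hpte_clear_all)(void);
      /* ... remaining hash management callbacks ... */
  };

  /* A single global instance replaces the old ppc_md.* hash fields, so it
   * survives probe_machine() rewriting ppc_md. */
  extern struct hash_mmu_ops hash_mmu_ops;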
2016-06-14 | powerpc: Define and use PPC64_ELF_ABI_v2/v1 | Michael Ellerman | 1 | -1/+1
We're approaching 20 locations where we need to check for ELF ABI v2. That's fine, except the logic is a bit awkward, because we have to check that _CALL_ELF is defined and then what its value is. So check it once in asm/types.h and define PPC64_ELF_ABI_v2 when ELF ABI v2 is detected. We also have a few places where what we're really trying to check is that we are using the 64-bit v1 ABI, ie. function descriptors. So also add a #define for that, which simplifies several checks. Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
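A sketch of the pattern described above (close to, but not necessarily identical with, the actual header change):

  /* Decide the ABI once, in one place, instead of testing _CALL_ELF at
   * every call site. */
  #ifdef __powerpc64__
  #if defined(_CALL_ELF) && _CALL_ELF == 2
  #define PPC64_ELF_ABI_v2
  #else
  #define PPC64_ELF_ABI_v1    /* 64-bit ABI with function descriptors */
  #endif
  #endif /* __powerpc64__ */

  /* Call sites then reduce to a single readable test: */
  #ifdef PPC64_ELF_ABI_v2
  /* ... ELFv2: no function descriptors, entry point expected in r12 ... */
  #endif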
2016-01-21 | powerpc: Simplify module TOC handling | Alan Modra | 1 | -28/+0
PowerPC64 uses the symbol .TOC. much as other targets use _GLOBAL_OFFSET_TABLE_. It identifies the value of the GOT pointer (or in powerpc parlance, the TOC pointer). Global offset tables are generally local to an executable or shared library, or in the kernel, module. Thus it does not make sense for a module to resolve a relocation against .TOC. to the kernel's .TOC. value. A module has its own .TOC., and indeed the powerpc64 module relocation processing ignores the kernel value of .TOC. and instead calculates a module-local value. This patch removes code involved in exporting the kernel .TOC., tweaks modpost to ignore an undefined .TOC., and the module loader to twiddle the section symbol so that .TOC. isn't seen as undefined. Note that if the kernel was compiled with -msingle-pic-base then ELFv2 would not have function global entry code setting up r2. In that case the module call stubs would need to be modified to set up r2 using the kernel .TOC. value, requiring some of this code to be reinstated. mpe: Furthermore a change in binutils master (not yet released) causes the current way we handle the TOC to no longer work when building with MODVERSIONS=y and RELOCATABLE=n. The symptom is that modules can not be loaded due to there being no version found for TOC. Cc: stable@vger.kernel.org # 3.16+ Signed-off-by: Alan Modra <amodra@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2015-10-27 | powerpc/book3e-64: Enable kexec | Tiejun Chen | 1 | -0/+6
Allow KEXEC for book3e, and bypass or convert non-book3e stuff in kexec code. Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com> [scottwood@freescale.com: move code to minimize diff, and cleanup] Signed-off-by: Scott Wood <scottwood@freescale.com>
2015-10-27 | powerpc/book3e-64/kexec: Set "r4 = 0" when entering spinloop | Scott Wood | 1 | -0/+2
book3e_secondary_core_init will only create a TLB entry if r4 = 0, so do so. Signed-off-by: Scott Wood <scottwood@freescale.com>
2015-10-27 | powerpc/book3e-64/kexec: create an identity TLB mapping | Tiejun Chen | 1 | -1/+51
book3e has no real MMU mode so we have to create an identity TLB mapping to make sure we can access the real physical address. Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com> [scottwood: cleanup, and split off some changes] Signed-off-by: Scott Wood <scottwood@freescale.com>
2015-08-20 | powerpc/kexec: Reset secondary cpu endianness before kexec | Samuel Mendoza-Jonas | 1 | -2/+11
If the target kernel does not include the FIXUP_ENDIAN check, coming from a different-endian kernel will cause the target kernel to panic. All ppc64 kernels can handle starting in big-endian mode, so return to big-endian before branching into the target kernel. This mainly affects pseries as secondaries on powernv are returned to OPAL. Signed-off-by: Samuel Mendoza-Jonas <sam.mj@au1.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2014-04-23 | powerpc: module: handle MODVERSION for .TOC. | Rusty Russell | 1 | -0/+9
For the ELFv2 ABI, powerpc introduces a magic symbol ".TOC.". If we don't create a CRC for it (minus the leading ".", since we strip that) we get a modpost warning about missing CRC and the CRC array seems to be displaced by 1 so other CRCs mismatch too. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2014-04-23 | powerpc: EXPORT_SYMBOL(.TOC.) | Rusty Russell | 1 | -0/+19
For the ELFv2 ABI, powerpc introduces a magic symbol ".TOC.". depmod then complains that this doesn't resolve (so does modpost, but we could easily fix that). To export this, we need to use asm. modpost and depmod both strip "." from symbols for the old PPC64 ELFv1 ABI, so we actually export a "TOC.". Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2014-04-23 | powerpc: ABIv2 function calls must place target address in r12 | Anton Blanchard | 1 | -2/+6
To establish addressability quickly, ABIv2 requires the target address of the function being called to be in r12. Fix a number of places in assembly code where we do indirect function calls. We need to avoid function descriptors on ABIv2 too. Signed-off-by: Anton Blanchard <anton@samba.org>
2014-04-23 | powerpc: No need to use dot symbols when branching to a function | Anton Blanchard | 1 | -5/+5
binutils is smart enough to know that a branch to a function descriptor is actually a branch to the function's text address. Alan tells me that binutils has been doing this for 9 years. Signed-off-by: Anton Blanchard <anton@samba.org>
2013-12-30 | Merge branch 'merge' into next | Benjamin Herrenschmidt | 1 | -1/+4
Merge a pile of fixes that went into the "merge" branch (3.13-rc's) such as Anton's Little Endian fixes. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-12-10 | powerpc: Fix build break with PPC_EARLY_DEBUG_BOOTX=y | Michael Ellerman | 1 | -1/+4
A kernel configured with PPC_EARLY_DEBUG_BOOTX=y but PPC_PMAC=n and PPC_MAPLE=n will fail to link:

  btext.c:(.text+0x2d0fc): undefined reference to `.rmci_off'
  btext.c:(.text+0x2d214): undefined reference to `.rmci_on'

Fix it by making the build of rmci_on/off() depend on PPC_EARLY_DEBUG_BOOTX, which also enables the only code that uses them.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-12-02 | powerpc: purge all the prefetched instructions for the coherent icache flush | Kevin Hao | 1 | -0/+6
As Benjamin Herrenschmidt has indicated, we still need a dummy icbi to purge all the prefetched instructions from the ifetch buffers for the snooping icache. We also need a sync before the icbi to order the actual stores to memory that might have modified instructions with the icbi. Signed-off-by: Kevin Hao <haokexin@gmail.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
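The required sequence for one cache block, as a hedged inline-assembly sketch in C (simplified; the real code in misc_64.S loops over the whole range):

  /* sync orders the stores that modified the instructions, the (dummy)
   * icbi flushes anything already sitting in the instruction fetch
   * buffers, and isync prevents stale prefetched instructions from
   * executing past this point. */
  static inline void flush_one_icache_block(const void *addr)
  {
      asm volatile("sync; icbi 0,%0; isync" : : "r" (addr) : "memory");
  }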
2013-09-25 | powerpc/irq: Run softirqs off the top of the irq stack | Benjamin Herrenschmidt | 1 | -6/+4
Nowadays, irq_exit() calls __do_softirq() pretty much directly instead of calling do_softirq(), which switches to the dedicated softirq stack. This has led to observed stack overflows on powerpc since we call irq_enter() and irq_exit() outside of the scope that switches to the irq stack. This fixes it by moving the stack switching up a level, making irq_enter() and irq_exit() run off the irq stack. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-08-14 | powerpc: remove the unused function disable_kernel_fp() | Kevin Hao | 1 | -13/+0
The only user of disable_kernel_fp() was already dropped in commit 5daf9071 ("powerpc: merge align.c"). Signed-off-by: Kevin Hao <haokexin@gmail.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-08-14 | powerpc/pmac: Early debug output on screen on 64-bit macs | Benjamin Herrenschmidt | 1 | -0/+31
We have a bunch of CONFIG_PPC_EARLY_DEBUG_* options that are intended for bringup/debug only. They hard wire a machine specific udbg backend very early on (before we even probe the platform), and use whatever tricks are available on each machine/cpu to be able to get some kind of output out there early on.

So far, on powermac with no serial ports, we have CONFIG_PPC_EARLY_DEBUG_BOOTX to use the low-level btext engine on the screen, but it doesn't do much, at least on 64-bit. It only really gets enabled after the platform has been probed and the MMU enabled.

This adds a way to enable it much earlier. From prom_init.c (while still running with Open Firmware), we grab the screen details and set things up using the physical address of the frame buffer. Then btext itself uses the "rm_ci" feature of the 970 processor (Real Mode Cache Inhibited) to access it while in real mode.

We need to do a little bit of reorg of the btext code to inline things better, in order to limit how much we touch memory while in this mode as the consequences might be ... interesting.

This successfully allowed me to debug problems early on with the G5 (related to gold being broken vs. ppc64 kernels).

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-08-14 | powerpc: Remove the symbol __flush_icache_range | Kevin Hao | 1 | -1/+1
Now the function flush_icache_range() is just a wrapper which only invokes __flush_icache_range() directly, so we don't have any reason to keep it anymore. Signed-off-by: Kevin Hao <haokexin@gmail.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-08-14 | powerpc: Move the testing of CPU_FTR_COHERENT_ICACHE into __flush_icache_range | Kevin Hao | 1 | -1/+3
In flush_icache_range(), we use cpu_has_feature() to test the CPU_FTR_COHERENT_ICACHE feature bit. This is not optimal for two reasons:

 a) For ppc32, the function __flush_icache_range() already does this check with the END_FTR_SECTION_IFSET macro.

 b) Compared with cpu_has_feature(), the END_FTR_SECTION_IFSET approach does not introduce any runtime overhead.

[And while at it, add the missing required isync] -- BenH

Signed-off-by: Kevin Hao <haokexin@gmail.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-05-14 | powerpc: Provide __bswapdi2 | David Woodhouse | 1 | -0/+11
Some versions of GCC apparently expect this to be provided by libgcc. Updates from Mikey to fix 32 bit version and adding "r" to registers. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com> Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
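For reference, this is what __bswapdi2 has to compute, written as a plain C sketch; the kernel's actual implementation is hand-written assembly:

  /* Byte-reverse a 64-bit value: swap adjacent bytes, then 16-bit halves,
   * then the two 32-bit words. */
  static unsigned long long bswapdi2_ref(unsigned long long x)
  {
      x = ((x & 0x00ff00ff00ff00ffULL) << 8)  | ((x >> 8)  & 0x00ff00ff00ff00ffULL);
      x = ((x & 0x0000ffff0000ffffULL) << 16) | ((x >> 16) & 0x0000ffff0000ffffULL);
      return (x << 32) | (x >> 32);
  }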
2012-09-30 | powerpc: split ret_from_fork | Al Viro | 1 | -34/+0
... and get rid of in-kernel syscalls in kernel_thread() Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2012-07-10 | powerpc: Fixes for instructions not using correct register naming | Michael Neuling | 1 | -2/+2
These macros are using integers where they could be using logical names since they take registers. We are going to enforce this soon, so fix these up now. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-07-10 | powerpc/pasemi: Move lbz/stbciz to ppc-opcode.h | Michael Neuling | 1 | -5/+0
move lbz/stbciz to ppc-opcode.h. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-07-10 | powerpc: Fix usage of register macros getting ready for %r0 change | Michael Neuling | 1 | -2/+2
Anything that uses a constructed instruction (i.e. from ppc-opcode.h) needs to use the new R0 macro, as %r0 is not going to work. Also convert usages of macros where we are just determining an offset (usually for a load/store), like:

  std r14,STK_REG(r14)(r1)

We can't use STK_REG(r14) there because %r14 doesn't work in the STK_REG macro, since it's just calculating an offset.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2011-06-29 | powerpc/maple: Enable scom access functions on Maple | Dmitry Eremin-Solenikov | 1 | -2/+2
Enable functions used to access SCOM if PPC_MAPLE is defined: they are used by cpufreq driver to control hardware. Signed-off-by: Dmitry Eremin-Solenikov <dbaryshkov@gmail.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2011-05-19 | powerpc/kexec: Fix memory corruption from unallocated slaves | Milton Miller | 1 | -5/+8
Commit 1fc711f7ffb01089efc58042cfdbac8573d1b59a (powerpc/kexec: Fix race in kexec shutdown) moved the write that signals the cpu has exited the kernel from before the transition to real mode in kexec_smp_wait to kexec_wait. Unfortunately it missed that kexec_wait is used both by cpus leaving the kernel and by secondary slave cpus that were not allocated a paca for whatever reason -- they could be beyond nr_cpus or not described in the current device tree (for example, kexec-load was not refreshed after a cpu hotplug operation). Cpus coming through that path will write to paca[NR_CPUS], which is beyond the space allocated for the paca data, and overwrite memory that is not allocated to pacas but is very likely still real-mode accessible. Move the write back to kexec_smp_wait, which is used only by cpus that found their paca, but after the transition to real mode. Signed-off-by: Milton Miller <miltonm@bga.com> Cc: <stable@kernel.org> # (1fc711f was backported to 2.6.32) Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2010-11-29 | powerpc: Remove second definition of STACK_FRAME_OVERHEAD | Stephen Rothwell | 1 | -0/+1
Since STACK_FRAME_OVERHEAD is defined in asm/ptrace.h and that header is assembler safe, we can just include it instead of going via asm-offsets.h. Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2010-06-15 | powerpc: Unconditionally enabled irq stacks | Christoph Hellwig | 1 | -2/+0
Irq stacks provide essential protection from stack overflows through external interrupts, at the cost of two additional stacks per CPU. Enable them unconditionally to simplify the kernel build and prevent people from accidentally disabling them. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2010-05-21 | powerpc/kexec: Fix race in kexec shutdown | Michael Neuling | 1 | -3/+5
In kexec_prepare_cpus, the primary CPU IPIs the secondary CPUs to kexec_smp_down(). kexec_smp_down() calls kexec_smp_wait(), which sets the hw_cpu_id() to -1. The primary does this while leaving IRQs on, which means the primary can take a timer interrupt that can lead to it IPIing one of the secondary CPUs (say, for a scheduler re-balance), but since the secondary CPU now has a hw_cpu_id = -1, we IPI CPU -1... Kaboom! We are hitting this case regularly on POWER7 machines.

There is also a second race, where the primary will tear down the MMU mappings before knowing the secondaries have entered real mode. Also, the secondaries are clearing out any pending IPIs before guaranteeing that no more will be received.

This changes kexec_prepare_cpus() so that we turn off IRQs in the primary CPU much earlier. It adds a paca flag to say that the secondaries have entered the kexec_smp_down() IPI and turned off IRQs, rather than overloading hw_cpu_id with -1. This new paca flag is also used to indicate when the secondaries have entered real mode. It also ensures that all CPUs have their IRQs off before we clear out any pending IPI requests (in kexec_cpu_down()) to ensure there are no trailing IPIs left unacknowledged.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-06-09 | powerpc: Move VMX and VSX asm code to vector.S | Benjamin Herrenschmidt | 1 | -92/+0
Currently, load_up_altivec and give_up_altivec are duplicated in 32-bit and 64-bit. This creates a common implementation that is moved away from head_32.S, head_64.S and misc_64.S and into vector.S, using the same macros we already use for our common implementation of load_up_fpu. I also moved the VSX code over to vector.S though in that case I didn't make it build on 32-bit (yet). Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-04-07 | powerpc: Disable VSX or current process in giveup_fpu/altivec | Michael Neuling | 1 | -0/+8
When we call giveup_fpu, we need to turn off VSX for the current process. If we don't, on return to userspace it may execute a VSX instruction before the next FP instruction and not have its register state refreshed correctly from the thread_struct. Ditto for altivec.

This caused a bug where an unaligned lfs or stfs results in fix_alignment calling giveup_fpu so it can use the FPRs (in order to do a single <-> double conversion), and then returning to userspace with FP off but VSX on. Then if a VSX instruction is executed, before another FP instruction, it will proceed without another exception and hence have the incorrect register state for VSX registers 0-31.

  lfs unaligned   <- alignment exception turns FP off but leaves VSX on
  VSX instruction <- no exception since VSX is on, hence we get the wrong
                     VSX register values for VSX registers 0-31, which
                     overlap the FPRs

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
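A simplified C rendering of the idea (the real change is in the assembly giveup paths; the MSR bit names are the real ones, the helper is hypothetical):

  #include <asm/ptrace.h>
  #include <asm/reg.h>    /* MSR_FP, MSR_VSX */

  /* When FP state is taken away from the current task, the saved MSR must
   * lose MSR_VSX as well as MSR_FP, so the next userspace VSX instruction
   * faults and reloads a consistent image from the thread_struct. */
  static inline void mark_fp_state_unavailable(struct pt_regs *regs)
  {
      regs->msr &= ~(MSR_FP | MSR_VSX);
  }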
2008-10-31 | powerpc: Kexec exit should not use magic numbers | Milton Miller | 1 | -6/+3
Commit 54622f10a6aabb8bb2bdacf3dd070046f03dc246 ("powerpc: Support for relocatable kdump kernel") added a magic flag value in a register to tell purgatory that it should be a panic kernel. This part is wrong and is reverted by this commit. The kernel gets a list of memory blocks and an entry point from user space. Its job is to copy the blocks into place and then branch to the designated entry point (after turning "off" the mmu). The user space tool inserts a trampoline, called purgatory, that runs before the user supplied code. Its job is to establish the entry environment for the new kernel or other application based on the contents of memory. The purgatory code is compiled and embedded in the tool, where it is later patched via its ELF symbol table. Since the tool knows it is creating a purgatory that will run after a kernel crash, it should just patch purgatory (or the kernel directly) if something needs to happen. Signed-off-by: Milton Miller <miltonm@bga.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-10-22 | powerpc: Support for relocatable kdump kernel | Mohan Kumar M | 1 | -3/+6
This adds relocatable kernel support for kdump. With this one can use the same regular kernel to capture the kdump. A signature (0xfeed1234) is passed in r6 from panic code to the next kernel through kexec_sequence and purgatory code. The signature is used to differentiate between kdump kernel and non-kdump kernels. The purgatory code compares the signature and sets the __kdump_flag in head_64.S. During the boot up, kernel code checks __kdump_flag and if it is set, the kernel will behave as relocatable kdump kernel. This kernel will boot at the address where it was loaded by kexec-tools ie. at the address reserved through crashkernel boot parameter. CONFIG_CRASH_DUMP depends on CONFIG_RELOCATABLE option to build kdump kernel as relocatable. So the same kernel can be used as production and kdump kernel. This patch incorporates the changes suggested by Paul Mackerras to avoid GOT use and to avoid two copies of the code. Signed-off-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Mohan Kumar M <mohan@in.ibm.com> Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-10-10 | powerpc: Fix error path in kernel_thread function | Josh Poimboeuf | 1 | -3/+5
The powerpc 32-bit and 64-bit kernel_thread functions don't properly propagate errors being returned by the clone syscall. (In the case of error, the syscall exit code returns a positive errno in r3 and sets the CR0[SO] bit.) This patch fixes that by negating r3 if CR0[SO] is set after the syscall. Signed-off-by: Josh Poimboeuf <jpoimboe@us.ibm.com> Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com> Acked-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
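The convention the fix implements, as an illustrative C sketch (the helper and macro names are hypothetical; CR0[SO] is the 0x10000000 bit of the 32-bit CR image):

  #define CR0_SO 0x10000000UL    /* "summary overflow" bit of CR field 0 */

  /* The syscall exit path returns a positive errno in r3 and sets CR0[SO]
   * on failure, so an in-kernel caller must negate r3 in that case. */
  static inline long syscall_retval(unsigned long r3, unsigned long ccr)
  {
      return (ccr & CR0_SO) ? -(long)r3 : (long)r3;
  }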
2008-07-15 | powerpc: fix giveup_vsx to save registers correctly | Michael Neuling | 1 | -4/+4
giveup_vsx didn't save the FPU and VMX registers. Change it to be like giveup_fpu/altivec, which save these registers. Also update call sites where FPU and VMX are already saved to use the original giveup_vsx (renamed to __giveup_vsx). Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-01 | powerpc: Add VSX context save/restore, ptrace and signal support | Michael Neuling | 1 | -0/+33
This patch extends the floating point save and restore code to use the VSX load/stores when VSX is available. This will make FP context save/restore marginally slower on FP only code, when VSX is available, as it has to load/store 128bits rather than just 64bits. Mixing FP, VMX and VSX code will get constant architected state. The signals interface is extended to enable access to VSR 0-31 doubleword 1 after discussions with tool chain maintainers. Backward compatibility is maintained. The ptrace interface is also extended to allow access to VSR 0-31 full registers. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-04-24 | [POWERPC] Clean up misc_64.S | Kumar Gala | 1 | -16/+4
 * Removed get_msr(), get_srr0(), and get_srr1() - not used anywhere
 * Use STACK_FRAME_OVERHEAD instead of magic number

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-12-11 | [POWERPC] kernel_execve is identical in 32 and 64 bit | Stephen Rothwell | 1 | -7/+0
so consolidate it into misc.S. Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-06-25 | [POWERPC] kexec: Send slaves to new kernel earlier | Milton Miller | 1 | -13/+13
With this, when kexec-ing, we copy the code and start the slaves on their journey to the next kernel's spin loop as soon as we copy the kexec image into place. The kernel doesn't know exactly which slaves are spinning in kexec_wait. This allows us to pass more than max-cpus to the next kernel. But it also means that we might leave some behind. Moving the code here means they have the time it takes us to clear the hash table to wake up and move on. Moving the code any earlier would require walking the image description to search for the code, which could span multiple pages. Signed-off-by: Milton Miller <miltonm@bga.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-02-07 | [POWERPC] pasemi: UART udbg support | Olof Johansson | 1 | -0/+40
Early debug output for PA Semi UART. Uses the 2.05 CI real mode ops. Signed-off-by: Olof Johansson <olof@lixom.net> Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-10-25 | [POWERPC] Consolidate feature fixup code | Benjamin Herrenschmidt | 1 | -124/+0
There are currently two versions of the functions for applying the feature fixups, one for CPU features and one for firmware features. In addition, they are both in assembly and with separate implementations for 32 and 64 bits. identify_cpu() is also implemented in assembly and separately for 32 and 64 bits. This patch replaces them with a pair of C functions. The call sites are slightly moved on ppc64 as well to be called from C instead of from assembly, though it's a very small change, and thus shouldn't cause any problem. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Acked-by: Olof Johansson <olof@lixom.net> Signed-off-by: Paul Mackerras <paulus@samba.org>
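Conceptually, the C replacement walks a table of feature-dependent code ranges and patches out the ones the running CPU does not need; a heavily simplified sketch with illustrative names, not the actual feature-fixups code:

  struct feature_fixup {            /* illustrative layout */
      unsigned long mask;           /* feature bits this entry depends on */
      unsigned long value;          /* required value of those bits */
      unsigned int  *start, *end;   /* instruction range to patch */
  };

  static void apply_feature_fixups_sketch(unsigned long features,
                                          struct feature_fixup *f,
                                          struct feature_fixup *end)
  {
      unsigned int *insn;

      for (; f < end; f++) {
          if ((features & f->mask) == f->value)
              continue;                 /* feature present: keep the code */
          for (insn = f->start; insn < f->end; insn++)
              *insn = 0x60000000;       /* "ori 0,0,0", i.e. a nop */
      }
  }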
2006-10-05 | IRQ: Maintain regs pointer globally rather than passing to IRQ handlers | David Howells | 1 | -3/+3
Maintain a per-CPU global "struct pt_regs *" variable which can be used instead of passing regs around manually through all ~1800 interrupt handlers in the Linux kernel.

The regs pointer is used in few places, but it potentially costs both stack space and code to pass it around. On the FRV arch, removing the regs parameter from all the genirq functions results in a 20% speed up of the IRQ exit path (ie: from leaving timer_interrupt() to leaving do_IRQ()).

Where appropriate, an arch may override the generic storage facility and do something different with the variable. On FRV, for instance, the address is maintained in GR28 at all times inside the kernel as part of general exception handling.

Having looked over the code, it appears that the parameter may be handed down through up to twenty or so layers of functions. Consider a USB character device attached to a USB hub, attached to a USB controller that posts its interrupts through a cascaded auxiliary interrupt controller. A character device driver may want to pass regs to the sysrq handler through the input layer, which adds another few layers of parameter passing.

I've built this code with allyesconfig for x86_64 and i386. I've runtested the main part of the code on FRV and i386, though I can't test most of the drivers. I've also done partial conversion for powerpc and MIPS - these at least compile with minimal configurations.

This will affect all archs. Mostly the changes should be relatively easy. Take do_IRQ(), store the regs pointer at the beginning, saving the old one:

  struct pt_regs *old_regs = set_irq_regs(regs);

And put the old one back at the end:

  set_irq_regs(old_regs);

Don't pass regs through to generic_handle_irq() or __do_IRQ().

In timer_interrupt(), this sort of change will be necessary:

  -	update_process_times(user_mode(regs));
  -	profile_tick(CPU_PROFILING, regs);
  +	update_process_times(user_mode(get_irq_regs()));
  +	profile_tick(CPU_PROFILING);

I'd like to move update_process_times()'s use of get_irq_regs() into itself, except that i386, alone of the archs, uses something other than user_mode().

Some notes on the interrupt handling in the drivers:

 (*) input_dev() is now gone entirely. The regs pointer is no longer stored in the input_dev struct.

 (*) finish_unlinks() in drivers/usb/host/ohci-q.c needs checking. It does something different depending on whether it's been supplied with a regs pointer or not.

 (*) Various IRQ handler function pointers have been moved to type irq_handler_t.

Signed-Off-By: David Howells <dhowells@redhat.com>
(cherry picked from 1b16e7ac850969f38b375e511e3fa2f474a33867 commit)
2006-10-04 | Merge branch 'master' of git://oak/home/sfr/kernels/iseries/work | Paul Mackerras | 1 | -0/+46