path: root/arch/powerpc/kernel/setup_64.c
Age  Commit message  (Author, files changed, lines -/+)
2007-11-08  [POWERPC] Fix cache line vs. block size confusion  (Benjamin Herrenschmidt, 1 file, -12/+7)
We had an historical confusion in the kernel between cache line and cache block size. The former is an implementation detail of the L1 cache which can be useful for performance optimisations; the latter is the actual size on which the cache control instructions operate, which can be different. For some reason, we had a weird hack reading the right property on powermac and the wrong one on any other 64-bit platform (32-bit is unaffected as it only uses the cputable for cache block size info at this stage). This fixes the booting-without-of.txt documentation to mention the right properties, and fixes the 64-bit initialization code to look for the block size first, with a fallback to the line size if the property is missing. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
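A minimal sketch of the block-size-first lookup described above, assuming the usual device-tree property names ("d-cache-block-size" with a fallback to "d-cache-line-size"); the helper and its surroundings are illustrative, not the file's exact code.

	#include <linux/types.h>
	#include <asm/prom.h>	/* of_get_property(), struct device_node */

	static u32 __init demo_dcache_block_size(struct device_node *cpu, u32 dflt)
	{
		const u32 *prop;

		/* Prefer the block size: it is what the cache control instructions use. */
		prop = of_get_property(cpu, "d-cache-block-size", NULL);
		if (!prop)
			/* Fall back to the (possibly different) L1 line size. */
			prop = of_get_property(cpu, "d-cache-line-size", NULL);

		return prop ? *prop : dflt;
	}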
2007-10-17  [POWERPC] Quieten cache information at boot  (Anton Blanchard, 1 file, -5/+8)
After 6 years the ppc64 kernel still thinks it's important to tell me my cache line size is 0x80 bytes. I think most people who care know that by now. The rest probably can't even understand the hex output. Since we might have misconfigured firmware or cpus that have a line size that isn't 128 bytes, I still print it out for those cases. If people would prefer to remove it completely, let's do it. Also for lpar, remove the htab_address printout since it's not used. Anton, ppc64 boot log usability expert Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-10-16  Convert cpu_sibling_map to be a per cpu variable  (Mike Travis, 1 file, -0/+3)
Convert cpu_sibling_map from a static array sized by NR_CPUS to a per_cpu variable. This saves sizeof(cpumask_t) * NR unused cpus. Access is mostly from startup and CPU HOTPLUG functions. Signed-off-by: Mike Travis <travis@sgi.com> Cc: Andi Kleen <ak@suse.de> Cc: Christoph Lameter <clameter@sgi.com> Cc: "Siddha, Suresh B" <suresh.b.siddha@intel.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Paul Mackerras <paulus@samba.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: "Luck, Tony" <tony.luck@intel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
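A sketch of the conversion pattern this commit applies, using the era's per-cpu and cpumask helpers; the surrounding function is illustrative, not the patch itself.

	#include <linux/percpu.h>
	#include <linux/cpumask.h>

	/* Before: static cpumask_t cpu_sibling_map[NR_CPUS];
	 * After: one cpumask per possible cpu, managed by the per-cpu machinery. */
	DEFINE_PER_CPU(cpumask_t, cpu_sibling_map);

	static void demo_mark_siblings(int cpu, int sibling)
	{
		/* Accesses go through per_cpu() instead of array indexing. */
		cpu_set(sibling, per_cpu(cpu_sibling_map, cpu));
		cpu_set(cpu, per_cpu(cpu_sibling_map, sibling));
	}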
2007-10-11  [POWERPC] Only call ppc_md.setup_arch() if it is provided  (Grant Likely, 1 file, -1/+2)
This allows platforms which don't have anything to do at setup_arch time (like a bunch of the 4xx platforms) to eliminate an empty setup_arch hook. Signed-off-by: Grant Likely <grant.likely@secretlab.ca> Signed-off-by: Paul Mackerras <paulus@samba.org>
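The change amounts to treating the machine-description hook as optional; a minimal sketch of the guarded call (the wrapper function is illustrative):

	#include <asm/machdep.h>	/* struct machdep_calls, ppc_md */

	static void __init demo_call_setup_arch(void)
	{
		/* Platforms with nothing to do can simply leave the hook NULL. */
		if (ppc_md.setup_arch)
			ppc_md.setup_arch();
	}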
2007-09-14  [POWERPC] setup_64.c and prom.c comment cleanup  (Linas Vepstas, 1 file, -3/+3)
Grammatical corrections to comments. Signed-off-by: Linas Vepstas <linas@austin.ibm.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-07-18  Revert "[POWERPC] Do firmware feature fixups after features are initialised"  (Tony Breeds, 1 file, -8/+4)
This reverts commit 5a26f6bbb767d7ad23311a1e81cfdd2bebefb855. The original patch causes boot failures when built with ppc64_defconfig. The quickest fix is to revert it while alternatives are investigated. Signed-off-by: Tony Breeds <tony@bakeyournoodle.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-06-25  [POWERPC] Do firmware feature fixups after features are initialised  (Michael Neuling, 1 file, -4/+8)
On pSeries the firmware features are not set up until ppc_md.init_early, so we can't do the firmware feature section fixups till after this. Currently firmware feature sections are only used on iSeries, which inits the firmware features much earlier. This is a bug waiting to happen on pSeries. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-05-02  [PATCH] x86: Allow percpu variables to be page-aligned  (Jeremy Fitzhardinge, 1 file, -2/+2)
Let's allow page-alignment in general for per-cpu data (wanted by Xen, and Ingo suggested KVM as well). Because larger alignments can use more room, we increase the max per-cpu memory to 64k rather than 32k: it's getting a little tight. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com> Signed-off-by: Andi Kleen <ak@suse.de> Acked-by: Ingo Molnar <mingo@elte.hu> Cc: Andi Kleen <ak@suse.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2007-04-13  [POWERPC] Rename get_property to of_get_property: arch/powerpc  (Stephen Rothwell, 1 file, -5/+5)
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au> Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-04-13  [POWERPC] Remove unused inclusion of linux/ide.h  (Olaf Hering, 1 file, -1/+0)
Remove the unneeded inclusion of linux/ide.h. It does not compile with CONFIG_BLOCK=n. Remove asm/ide.h from the ksyms file, it gets included earlier via linux/ide.h. Compile tested with all defconfig files. Signed-off-by: Olaf Hering <olaf@aepfle.de> Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-12-04  [POWERPC] Distinguish POWER6 partition modes and tell userspace  (Paul Mackerras, 1 file, -1/+1)
This adds code to look at the properties firmware puts in the device tree to determine what compatibility mode the partition is in on POWER6 machines, and set the ELF aux vector AT_HWCAP and AT_PLATFORM entries appropriately. Specifically, we look at the cpu-version property in the cpu node(s). If that contains a "logical" PVR value (of the form 0x0f00000x), we call identify_cpu again with this PVR value. A value of 0x0f000001 indicates the partition is in POWER5+ compatibility mode, and a value of 0x0f000002 indicates "POWER6 architected" mode, with various extensions disabled. We also look for various other properties: ibm,dfp, ibm,purr and ibm,spurr. Signed-off-by: Paul Mackerras <paulus@samba.org>
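A sketch of the logical-PVR check described above; the property name and the 0x0f00000x values come from the message, while the helper itself is illustrative.

	#include <linux/types.h>
	#include <asm/prom.h>	/* of_get_property() */

	/* Return the logical PVR if firmware reports one, else 0.
	 * 0x0f000001 = POWER5+ compatibility mode,
	 * 0x0f000002 = POWER6 "architected" mode. */
	static unsigned int __init demo_logical_pvr(struct device_node *cpu)
	{
		const u32 *pvr = of_get_property(cpu, "cpu-version", NULL);

		if (pvr && (*pvr & 0xff000000) == 0x0f000000)
			return *pvr;	/* caller re-runs cpu identification with this */

		return 0;
	}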
2006-12-04  [POWERPC] Allow hooking of PCI MMIO & PIO accessors on 64 bits  (Benjamin Herrenschmidt, 1 file, -0/+7)
This patch reworks the way iSeries hooks on PCI IO operations (both MMIO and PIO) and provides a generic way for other platforms to do so (we need to do that for various other platforms). While reworking the IO ops, I ended up doing some spring cleaning in io.h and eeh.h which I might want to split into 2 or 3 patches (among others, eeh.h had a lot of useless stuff in it). A side effect is that EEH for PIO should work now (it used to pass IO ports down to the eeh address check functions, which is bogus). Also new are MMIO "repeat" ops, which other archs like ARM already had, and that we have too now: readsb, readsw, readsl, writesb, writesw, writesl. In the long run, I might also make EEH use the hooks instead of wrapping at the toplevel, which would make things even cleaner and relegate EEH completely to platforms/iseries, but we have to measure the performance impact there (though it's really only on MMIO reads). Since I also need to hook on ioremap, I shuffled the functions a bit there. I introduced ioremap_flags() for use by drivers that want to pass explicit flags to ioremap (and it can be hooked). The old __ioremap() is still there as a low-level interface and cannot be hooked, thus drivers who use it should migrate unless they know they want the low-level version. The patch "arch provides generic iomap missing accessors" (should be number 4 in this series) is a pre-requisite to provide full iomap API support with this patch. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
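The "repeat" MMIO ops mentioned above read or write the same register repeatedly against a buffer; a minimal sketch of the byte-read variant (the function name is hypothetical, not the kernel's implementation):

	#include <linux/types.h>
	#include <asm/io.h>	/* readb() */

	/* Illustrative: what readsb() provides -- count reads of one register. */
	static void demo_readsb(const volatile void __iomem *addr, void *buf, long count)
	{
		u8 *p = buf;

		while (count-- > 0)
			*p++ = readb(addr);
	}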
2006-12-04  [POWERPC] Refactor 64 bits DMA operations  (Benjamin Herrenschmidt, 1 file, -0/+1)
This patch completely refactors DMA operations for 64-bit powerpc. 32-bit is untouched for now. We use the new dev_archdata structure to add the dma operations pointer and associated data to struct device. While at it, we also add the OF node pointer and numa node. In the future, we might want to look into merging that with pci_dn as well. The old vio, pci-iommu and pci-direct DMA ops are gone. They are now replaced by a set of generic iommu and direct DMA ops (non-PCI-specific) that can be used by bus types. The toplevel implementation is now inline. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-12-04  Merge branch 'linux-2.6' into for-linus  (Paul Mackerras, 1 file, -0/+11)
2006-11-13  [PATCH] Check for null init_early routine  (Geoff Levand, 1 file, -1/+2)
Add a check for a null ppc_md.init_early to allow platforms that don't require an init_early routine to just set this member to null. Signed-off-by: Geoff Levand <geoffrey.levand@am.sony.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-11-13  [PATCH] Clean up usage of boot_dev  (Sascha Hauer, 1 file, -1/+0)
dev_t boot_dev is declared in arch/powerpc/kernel/setup_32.c and in arch/powerpc/kernel/setup_64.c but not used in these files. It is only used in arch/powerpc/platforms/powermac/setup.c, so make it static in this file. Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de> Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-10-25  [POWERPC] Support feature fixups in vdso's  (Benjamin Herrenschmidt, 1 file, -2/+2)
This patch reworks the feature fixup mechanism so vdso's can be fixed up. The main issue was that the construct: .long label (or .llong on 64 bits) will not work in the case of a shared library like the vdso. It will generate an empty placeholder in the fixup table along with a reloc, which is not something we can deal with in the vdso. The idea here (thanks Alan Modra!) is to instead use something like: 1: .long label - 1b That is, the feature fixup tables no longer contain addresses of bits of code to patch, but offsets of such code from the fixup table entry itself. That is properly resolved by ld when building the .so's. I've modified the fixup mechanism generically to use that method for the rest of the kernel as well. Another trick is that the 32-bit vDSO included in the 64-bit kernel needs to have a table in the 64-bit format. However, gas does not support 32-bit code with a statement of the form: .llong label - 1b (or even just .llong label). That is, it cannot emit the right fixup/relocation for the linker to use to assign a 32-bit address to an .llong field. Thus, in the specific case of the 32-bit vdso built as part of the 64-bit kernel, we are using a modified macro that generates: .long 0xffffffff .llong label - 1b Note that this assumes the value is negative, which is enforced by the .lds (those offsets are always negative as the .text is always before the fixup table and gas doesn't support emitting the reloc the other way around). Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-10-25  [POWERPC] Consolidate feature fixup code  (Benjamin Herrenschmidt, 1 file, -0/+11)
There are currently two versions of the functions for applying the feature fixups, one for CPU features and one for firmware features. In addition, they are both in assembly and with separate implementations for 32 and 64 bits. identify_cpu() is also implemented in assembly and separately for 32 and 64 bits. This patch replaces them with a pair of C functions. The call sites are slightly moved on ppc64 as well to be called from C instead of from assembly, though it's a very small change, and thus shouldn't cause any problem. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Acked-by: Olof Johansson <olof@lixom.net> Signed-off-by: Paul Mackerras <paulus@samba.org>
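A sketch of what the consolidated C fixup pass looks like, walking the offset-based table entries introduced for the vdso above; the entry layout and nop patching are assumptions for illustration, not the kernel's exact code.

	#include <linux/types.h>

	/* Assumed entry layout: offsets are relative to the entry itself so the
	 * table also resolves correctly inside a shared object like the vDSO. */
	struct demo_fixup_entry {
		unsigned long	mask;		/* feature bits this section depends on */
		unsigned long	value;		/* required value for the code to stay */
		long		start_off;	/* guarded code start, as a self-relative offset */
		long		end_off;	/* guarded code end, likewise */
	};

	static void demo_do_feature_fixups(unsigned long features,
					   struct demo_fixup_entry *start,
					   struct demo_fixup_entry *end)
	{
		struct demo_fixup_entry *fcur;
		unsigned int *p, *pstart, *pend;

		for (fcur = start; fcur < end; fcur++) {
			if ((features & fcur->mask) == fcur->value)
				continue;	/* feature present: leave the code alone */

			/* Resolve the self-relative offsets back into addresses. */
			pstart = (void *)&fcur->start_off + fcur->start_off;
			pend   = (void *)&fcur->end_off + fcur->end_off;

			/* Feature absent: overwrite the guarded code with nops. */
			for (p = pstart; p < pend; p++)
				*p = 0x60000000;	/* PowerPC nop (ori 0,0,0) */
		}
	}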
2006-10-16  [POWERPC] Lazy interrupt disabling for 64-bit machines  (Paul Mackerras, 1 file, -2/+2)
This implements a lazy strategy for disabling interrupts. This means that local_irq_disable() et al. just clear the 'interrupts are enabled' flag in the paca. If an interrupt comes along, the interrupt entry code notices that interrupts are supposed to be disabled, and clears the EE bit in SRR1, clears the 'interrupts are hard-enabled' flag in the paca, and returns. This means that interrupts only actually get disabled in the processor when an interrupt comes along. When interrupts are enabled by local_irq_enable() et al., the code sets the interrupts-enabled flag in the paca, and then checks whether interrupts got hard-disabled. If so, it also sets the EE bit in the MSR to hard-enable the interrupts. This has the potential to improve performance, and also makes it easier to make a kernel that can boot on iSeries and on other 64-bit machines, since this lazy-disable strategy is very similar to the soft-disable strategy that iSeries already uses. This version renames paca->proc_enabled to paca->soft_enabled, and changes a couple of soft-disables in the kexec code to hard-disables, which should fix the crash that Michael Ellerman saw. This doesn't yet use a reserved CR field for the soft_enabled and hard_enabled flags. This applies on top of Stephen Rothwell's patches to make it possible to build a combined iSeries/other kernel. Signed-off-by: Paul Mackerras <paulus@samba.org>
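A much-simplified sketch of the lazy-disable idea, using the soft_enabled/hard_enabled paca flags named in the message; the hard-enable step is a stub here because the real code manipulates the MSR in assembly, so treat this purely as an illustration of the bookkeeping.

	#include <asm/paca.h>	/* get_paca() */

	static inline void demo_irq_soft_disable(void)
	{
		get_paca()->soft_enabled = 0;
		/* MSR[EE] stays set; if an interrupt arrives, the entry code sees
		 * soft_enabled == 0, clears hard_enabled, and returns immediately. */
	}

	static inline void demo_irq_soft_enable(void)
	{
		get_paca()->soft_enabled = 1;
		if (!get_paca()->hard_enabled) {
			/* An interrupt came in while we were soft-disabled: the real
			 * code re-enables MSR[EE] here and lets it be taken. */
			get_paca()->hard_enabled = 1;
		}
	}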
2006-10-04  [POWERPC] Fix xmon=off and cleanup xmon initialisation  (Michael Ellerman, 1 file, -8/+4)
My patch to make the early xmon logic work with earlier early param parsing (480f6f35a149802a94ad5c1a2673ed6ec8d2c158) breaks xmon=off. No one does this obviously as xmon rocks, but it should really work as documented. While fixing that it struck me that we could move the xmon param handling into xmon.c, and also consolidate the xmon_init()/do_early_xmon logic into xmon_setup(). This means xmon=early drops into xmon a little earlier on 32-bit, but it seems to work just fine. Tested on PSERIES and CLASSIC32. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-10-02  [PATCH] namespaces: utsname: use init_utsname when appropriate  (Serge E. Hallyn, 1 file, -1/+1)
In some places, particularly drivers and __init code, the init utsns is the appropriate one to use. This patch replaces those with the init_utsname helper. Changes: Removed several uses of init_utsname(). Hope I picked all the right ones in net/ipv4/ipconfig.c. These are now changed to utsname() (the per-process namespace utsname) in the previous patch (2/7). [akpm@osdl.org: CIFS fix] Signed-off-by: Serge E. Hallyn <serue@us.ibm.com> Cc: Kirill Korotaev <dev@openvz.org> Cc: "Eric W. Biederman" <ebiederm@xmission.com> Cc: Herbert Poetzl <herbert@13thfloor.at> Cc: Andrey Savochkin <saw@sw.ru> Cc: Serge Hallyn <serue@us.ibm.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-10-01  [PATCH] remove SYSRQ_KEY and related defines from ppc/sh/h8300  (Olaf Hering, 1 file, -5/+0)
Remove unused global SYSRQ_KEY from ppc and powerpc Remove unused define SYSRQ_KEY from sh/sh64 and h8300 Remove unused pckbd_sysrq_xlate and kbd_sysrq_xlate usage Signed-off-by: Olaf Hering <olaf@aepfle.de> Cc: Paul Mundt <lethal@linux-sh.org> Cc: Kazumoto Kojima <kkojima@rr.iij4u.or.jp> Cc: Paul Mackerras <paulus@samba.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-09-13  [POWERPC] powerpc: Reduce default cacheline size to 64 bytes  (Olof Johansson, 1 file, -4/+4)
Reduce default cacheline size on 64-bit powerpc from 128 bytes to 64. This is the architected minimum. In most cases we'll still end up using cache line information from the device tree, but defaults are used during early boot and doing a few dcbst/icbi's too many there won't do any harm. Signed-off-by: Olof Johansson <olof@lixom.net> Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-07-31  [POWERPC] Constify & voidify get_property()  (Jeremy Kerr, 1 file, -7/+7)
Now that get_property() returns a void *, there's no need to cast its return value. Also, treat the return value as const, so we can constify get_property later. powerpc core changes. Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-07-13  [POWERPC] iseries: Move ItLpNaca into platforms/iseries  (Michael Ellerman, 1 file, -1/+0)
Move ItLpNaca into platforms/iseries now that it's not used elsewhere. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
2006-07-03  [POWERPC] Add new interrupt mapping core and change platforms to use it  (Benjamin Herrenschmidt, 1 file, -11/+6)
This adds the new irq remapper core and removes the old one. Because there are some fundamental conflicts with the old code, like the value of NO_IRQ which I'm now setting to 0 (as per discussions with Linus), etc..., this commit also changes the relevant platform and driver code over to use the new remapper (so as not to cause difficulties later in bisecting). This patch removes the old pre-parsing of the open firmware interrupt tree along with all the bogus assumptions it made to try to renumber interrupts according to the platform. This is all to be handled by the new code now. For the pSeries XICS interrupt controller, a single remapper host is created for the whole machine regardless of how many interrupt presentation and source controllers are found, and it's set to match any device node that isn't a 8259. That works fine on pSeries and avoids having to deal with some of the complexities of split source controllers vs. presentation controllers in the pSeries device trees. The powerpc i8259 PIC driver now always requests the legacy interrupt range. It also has the feature of being able to match any device node (including NULL) if passed no device node as an input. That will help porting over platforms with broken device-trees like Pegasos who don't have a proper interrupt tree. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-06-30  Remove obsolete #include <linux/config.h>  (Jörn Engel, 1 file, -1/+0)
Signed-off-by: Jörn Engel <joern@wohnheim.fh-wedel.de> Signed-off-by: Adrian Bunk <bunk@stusta.de>
2006-06-29  [POWERPC] Make sure smp_processor_id works very early in boot  (Michael Ellerman, 1 file, -0/+3)
There's a small period early in boot where we don't know which cpu we're running on. That's ok, except that it means we have no paca, or more correctly that our paca pointer points somewhere random. So that we can safely call things like smp_processor_id(), we need a paca, so just assume we're on cpu 0. No code should _write_ to the paca before we've set the correct one up. We setup the proper paca after we've scanned the flat device tree in early_setup(), so there's no need to do it again in start_here_common. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-06-28  [POWERPC] Setup the boot cpu's paca pointer in C rather than asm  (Michael Ellerman, 1 file, -1/+8)
There's no need to set the boot cpu paca in asm, so do it in C so us mere mortals can understand it. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
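Together with the previous entry, the point is simply to aim the paca pointer at a known paca from C before the real cpu number is known; a sketch assuming the r13-backed local_paca pointer and paca[] array from asm/paca.h (details illustrative).

	#include <asm/paca.h>	/* struct paca_struct, paca[], local_paca */

	/* Illustrative: assume cpu 0 until the flat device tree tells us the real
	 * boot cpu, then this can be called again with the correct number. */
	static void __init demo_setup_paca(int cpu)
	{
		local_paca = &paca[cpu];
	}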
2006-06-28  [POWERPC] Make kexec_setup() a regular initcall  (Michael Ellerman, 1 file, -4/+0)
There's no reason kexec_setup() needs to be called explicitly from setup_system(), it can just be a regular initcall. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-06-28  [POWERPC] powerpc: Initialise ppc_md htab pointers earlier  (Michael Ellerman, 1 file, -5/+1)
Initialise the ppc_md htab callbacks earlier, in the probe routines. This allows us to call htab_finish_init() from htab_initialize(), and makes it private to hash_utils_64.c. Move htab_finish_init() and make_bl() above htab_initialize() to avoid forward declarations. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-05-19  [PATCH] powerpc: Kdump header cleanup  (Michael Ellerman, 1 file, -3/+1)
We need to know the base address of the kdump kernel even when we're not a kdump kernel, so add a #define for it. Move the logic that sets the kdump kernelbase into kdump.h instead of page.h. Rename kdump_setup() to setup_kdump_trampoline() to make it clearer what it's doing, and add an empty definition for the !CRASH_DUMP case to avoid a Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-05-19  [PATCH] powerpc: Unify mem= handling  (Michael Ellerman, 1 file, -3/+0)
We currently do mem= handling in three separate places. And as benh pointed out, I wrote two of them. Now that we parse command line parameters earlier we can clean this mess up. Moving the parsing out of prom_init means the device tree might be allocated above the memory limit. If that happens we'd have to move it. As it happens we already have logic to do that for kdump, so just genericise it. This also means we might have reserved regions above the memory limit; if we do, the bootmem allocator will blow up, so we have to modify lmb_enforce_memory_limit() to truncate the reserves as well. Tested on P5 LPAR, iSeries, F50, 44p. Tested moving device tree on P5 and 44p and F50. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-05-19  [PATCH] powerpc: Parse early parameters earlier  (Michael Ellerman, 1 file, -5/+0)
Currently we call parse_early_param() early-ish, but not really very early. In particular, it's not early enough to do things like mem=x or crashkernel=blah, which is annoying. So do it earlier. I've checked all the early param handlers, and none of them look like they should have any trouble with this. I haven't tested the booke_wdt ones though. On 32-bit we were doing the CONFIG_CMDLINE logic twice, so don't. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-05-19  [PATCH] powerpc: Make early xmon logic immune to location of early parsing  (Michael Ellerman, 1 file, -0/+3)
Currently early_xmon() calls directly into debugger() if xmon=early is passed. This ties the invocation of early xmon to the location of parse_early_param(), which might change. Tested on P5 LPAR and F50. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-05-05  powerpc: provide ppc_md.panic() for both ppc32 & ppc64  (Kumar Gala, 1 file, -17/+1)
Allow boards to provide a panic callback on ppc32. Moved the code that sets this up into setup-common.c so it's shared between ppc32 & ppc64. Also moved the do_init_bootmem prototype into setup.h. Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
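A sketch of how a ppc_md.panic hook is typically wired into the generic panic notifier list; the helpers here are illustrative, not a copy of setup-common.c.

	#include <linux/notifier.h>	/* atomic_notifier_chain_register(), NOTIFY_DONE */
	#include <linux/kernel.h>	/* panic_notifier_list */
	#include <asm/machdep.h>	/* ppc_md */

	static int demo_panic_event(struct notifier_block *nb,
				    unsigned long event, void *ptr)
	{
		ppc_md.panic(ptr);	/* ptr is the panic message string */
		return NOTIFY_DONE;
	}

	static struct notifier_block demo_panic_block = {
		.notifier_call = demo_panic_event,
	};

	static void __init demo_setup_panic(void)
	{
		if (ppc_md.panic)	/* only hook up boards that provide a callback */
			atomic_notifier_chain_register(&panic_notifier_list,
						       &demo_panic_block);
	}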
2006-04-28  [PATCH] powerpc: Use check_legacy_ioport() on ppc32 too.  (David Woodhouse, 1 file, -8/+0)
Some people report that we die on some Macs when we are expecting to catch machine checks after poking at some random I/O address. I'd seen it happen on my dual G4 with serial ports until we fixed those to use OF, but now other users are reporting it with i8042. This expands the use of check_legacy_ioport() to avoid that situation even on 32-bit kernels. Signed-off-by: David Woodhouse <dwmw2@infradead.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-04-02  [PATCH] powerpc: iSeries needs slb_initialize to be called  (Stephen Rothwell, 1 file, -6/+4)
Since the powerpc 64k pages patch went in, systems that have SLBs (like Power4 iSeries) needed to have slb_initialize called to set up some variables for the SLB miss handler. This was not being called on the boot processor on iSeries, so on single cpu iSeries machines, we would get apparent memory corruption as soon as we entered user mode. This patch fixes that by calling slb_initialize on the boot cpu if the processor has an SLB. Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
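The fix amounts to a feature-gated call during boot-cpu setup; a minimal sketch (the wrapper function is illustrative, and slb_initialize() is declared here rather than pulled from its real header):

	#include <asm/cputable.h>	/* cpu_has_feature(), CPU_FTR_SLB */

	extern void slb_initialize(void);	/* hash MMU SLB setup */

	static void __init demo_boot_cpu_mmu_init(void)
	{
		/* Only processors that actually have an SLB need this. */
		if (cpu_has_feature(CPU_FTR_SLB))
			slb_initialize();
	}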
2006-03-29  [PATCH] for_each_possible_cpu: powerpc  (KAMEZAWA Hiroyuki, 1 file, -3/+3)
for_each_cpu() actually iterates across all possible CPUs. We've had mistakes in the past where people were using for_each_cpu() where they should have been iterating across only online or present CPUs. This is inefficient and possibly buggy. We're renaming for_each_cpu() to for_each_possible_cpu() to avoid this in the future. This patch replaces for_each_cpu with for_each_possible_cpu. Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
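A small example of the renamed iterator in use; the counting function is purely illustrative.

	#include <linux/cpumask.h>	/* for_each_possible_cpu() */

	static int demo_count_possible_cpus(void)
	{
		int cpu, n = 0;

		/* Was for_each_cpu(cpu); the new name states the intent explicitly. */
		for_each_possible_cpu(cpu)
			n++;

		return n;
	}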
2006-03-29  Merge ../linux-2.6  (Paul Mackerras, 1 file, -1/+2)
2006-03-28  [PATCH] powerpc: Kill _machine and hard-coded platform numbers  (Benjamin Herrenschmidt, 1 file, -52/+4)
This removes statically assigned platform numbers and reworks the powerpc platform probe code to use a better mechanism. With this, board support files can simply declare a new machine type with a macro, and implement a probe() function that uses the flattened device-tree to detect if they apply for a given machine. We now have a machine_is() macro that replaces the comparisons of _machine with the various PLATFORM_* constants. This commit also changes various drivers to use the new macro instead of looking at _machine. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
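What a board port looks like under the new scheme: declare a machine with a probe() that inspects the flattened device tree, and test with machine_is() instead of comparing _machine. The board name and compatible string below are made up for illustration.

	#include <asm/machdep.h>	/* define_machine, struct machdep_calls */
	#include <asm/prom.h>		/* of_get_flat_dt_root(), of_flat_dt_is_compatible() */

	static int __init demoboard_probe(void)
	{
		unsigned long root = of_get_flat_dt_root();

		return of_flat_dt_is_compatible(root, "demo,demoboard");
	}

	static void __init demoboard_setup_arch(void)
	{
		/* board-specific setup would go here */
	}

	define_machine(demoboard) {
		.name		= "DemoBoard",
		.probe		= demoboard_probe,
		.setup_arch	= demoboard_setup_arch,
	};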
2006-03-27  [PATCH] Notifier chain update: API changes  (Alan Stern, 1 file, -1/+2)
The kernel's implementation of notifier chains is unsafe. There is no protection against entries being added to or removed from a chain while the chain is in use. The issues were discussed in this thread: http://marc.theaimsgroup.com/?l=linux-kernel&m=113018709002036&w=2 We noticed that notifier chains in the kernel fall into two basic usage classes: "Blocking" chains are always called from a process context and the callout routines are allowed to sleep; "Atomic" chains can be called from an atomic context and the callout routines are not allowed to sleep. We decided to codify this distinction and make it part of the API. Therefore this set of patches introduces three new, parallel APIs: one for blocking notifiers, one for atomic notifiers, and one for "raw" notifiers (which is really just the old API under a new name). New kinds of data structures are used for the heads of the chains, and new routines are defined for registration, unregistration, and calling a chain. The three APIs are explained in include/linux/notifier.h and their implementation is in kernel/sys.c. With atomic and blocking chains, the implementation guarantees that the chain links will not be corrupted and that chain callers will not get messed up by entries being added or removed. For raw chains the implementation provides no guarantees at all; users of this API must provide their own protections. (The idea was that situations may come up where the assumptions of the atomic and blocking APIs are not appropriate, so it should be possible for users to handle these things in their own way.) There are some limitations, which should not be too hard to live with. For atomic/blocking chains, registration and unregistration must always be done in a process context since the chain is protected by a mutex/rwsem. Also, a callout routine for a non-raw chain must not try to register or unregister entries on its own chain. (This did happen in a couple of places and the code had to be changed to avoid it.) Since atomic chains may be called from within an NMI handler, they cannot use spinlocks for synchronization. Instead we use RCU. The overhead falls almost entirely in the unregister routine, which is okay since unregistration is much less frequent than calling a chain. Here is the list of chains that we adjusted and their classifications. None of them use the raw API, so for the moment it is only a placeholder.

ATOMIC CHAINS
-------------
arch/i386/kernel/traps.c: i386die_chain
arch/ia64/kernel/traps.c: ia64die_chain
arch/powerpc/kernel/traps.c: powerpc_die_chain
arch/sparc64/kernel/traps.c: sparc64die_chain
arch/x86_64/kernel/traps.c: die_chain
drivers/char/ipmi/ipmi_si_intf.c: xaction_notifier_list
kernel/panic.c: panic_notifier_list
kernel/profile.c: task_free_notifier
net/bluetooth/hci_core.c: hci_notifier
net/ipv4/netfilter/ip_conntrack_core.c: ip_conntrack_chain
net/ipv4/netfilter/ip_conntrack_core.c: ip_conntrack_expect_chain
net/ipv6/addrconf.c: inet6addr_chain
net/netfilter/nf_conntrack_core.c: nf_conntrack_chain
net/netfilter/nf_conntrack_core.c: nf_conntrack_expect_chain
net/netlink/af_netlink.c: netlink_chain

BLOCKING CHAINS
---------------
arch/powerpc/platforms/pseries/reconfig.c: pSeries_reconfig_chain
arch/s390/kernel/process.c: idle_chain
arch/x86_64/kernel/process.c: idle_notifier
drivers/base/memory.c: memory_chain
drivers/cpufreq/cpufreq.c: cpufreq_policy_notifier_list
drivers/cpufreq/cpufreq.c: cpufreq_transition_notifier_list
drivers/macintosh/adb.c: adb_client_list
drivers/macintosh/via-pmu.c: sleep_notifier_list
drivers/macintosh/via-pmu68k.c: sleep_notifier_list
drivers/macintosh/windfarm_core.c: wf_client_list
drivers/usb/core/notify.c: usb_notifier_list
drivers/video/fbmem.c: fb_notifier_list
kernel/cpu.c: cpu_chain
kernel/module.c: module_notify_list
kernel/profile.c: munmap_notifier
kernel/profile.c: task_exit_notifier
kernel/sys.c: reboot_notifier_list
net/core/dev.c: netdev_chain
net/decnet/dn_dev.c: dnaddr_chain
net/ipv4/devinet.c: inetaddr_chain

It's possible that some of these classifications are wrong. If they are, please let us know or submit a patch to fix them. Note that any chain that gets called very frequently should be atomic, because the rwsem read-locking used for blocking chains is very likely to incur cache misses on SMP systems. (However, if the chain's callout routines may sleep then the chain cannot be atomic.) The patch set was written by Alan Stern and Chandra Seetharaman, incorporating material written by Keith Owens and suggestions from Paul McKenney and Andrew Morton. [jes@sgi.com: restructure the notifier chain initialization macros] Signed-off-by: Alan Stern <stern@rowland.harvard.edu> Signed-off-by: Chandra Seetharaman <sekharan@us.ibm.com> Signed-off-by: Jes Sorensen <jes@sgi.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
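A small usage sketch of the atomic variant of the new API; the chain and callback are hypothetical, purely to show the register/call pattern.

	#include <linux/notifier.h>

	static ATOMIC_NOTIFIER_HEAD(demo_chain);

	static int demo_event(struct notifier_block *nb, unsigned long action, void *data)
	{
		/* react to the event; atomic chain callouts must not sleep */
		return NOTIFY_OK;
	}

	static struct notifier_block demo_nb = {
		.notifier_call = demo_event,
	};

	static void demo_use_chain(void)
	{
		atomic_notifier_chain_register(&demo_chain, &demo_nb);
		atomic_notifier_call_chain(&demo_chain, 0, NULL);
		atomic_notifier_chain_unregister(&demo_chain, &demo_nb);
	}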
2006-03-27  powerpc: Unify the 32 and 64 bit idle loops  (Paul Mackerras, 1 file, -6/+0)
This unifies the 32-bit (ARCH=ppc and ARCH=powerpc) and 64-bit idle loops. It brings over the concept of having a ppc_md.power_save function from 32-bit to ARCH=powerpc, which lets us get rid of native_idle(). With this we will also be able to simplify the idle handling for pSeries and cell. Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-03-27  [PATCH] powerpc: Allow non zero boot cpuids  (Anton Blanchard, 1 file, -3/+9)
We currently have a hack to flip the boot cpu and its secondary thread to logical cpuid 0 and 1. This means the logical-to-physical mapping will differ depending on which cpu is the boot cpu. This is most apparent on kexec, where we might kexec on any cpu and therefore change the mapping from boot to boot. The patch below does a first pass early on to work out the logical cpuid of the boot thread. We then fix up some paca structures to match. I've also removed the boot_cpuid_phys variable for ppc64; to be consistent we use get_hard_smp_processor_id(boot_cpuid) everywhere. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-03-22  [PATCH] powerpc: Remove calculation of io hole  (Michael Ellerman, 1 file, -2/+0)
In mm_init_ppc64() we calculate the location of the "IO hole", but then no one ever looks at the value. So don't bother. That's actually all mm_init_ppc64() does, so get rid of it too. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-02-20  [PATCH] powerpc: Don't start secondary CPUs in a UP && KEXEC kernel  (Michael Ellerman, 1 file, -2/+2)
Because smp_release_cpus() is built for SMP || KEXEC, it's not safe to unconditionally call it from setup_system(). On a UP && KEXEC kernel we'll start up the secondary CPUs, which will then go berserk and we die. The simple fix is to conditionally call smp_release_cpus() in setup_system(). With that in place we don't need the dummy definition of smp_release_cpus(), because all call sites are #ifdef'ed either SMP or KEXEC. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-02-07  [PATCH] powerpc: Don't overwrite flat device tree with kdump kernel  (Michael Ellerman, 1 file, -0/+3)
It's possible for prom_init to allocate the flat device tree inside the kdump crash kernel region. If this happens, when we load the kdump kernel we overwrite the flattened device tree, which is bad. We could make prom_init try and avoid allocating inside the crash kernel region, but then we run into issues if the crash kernel region uses all the space inside the RMO. The easiest solution is to move the flat device tree once we're running in the kernel. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-01-11  [PATCH] powerpc/64: per cpu data optimisations  (Anton Blanchard, 1 file, -0/+26)
The current ppc64 per cpu data implementation is quite slow. eg:

lhz 11,18(13)			/* smp_processor_id() */
ld 9,.LC63-.LCTOC1(30)		/* per_cpu__variable_name */
ld 8,.LC61-.LCTOC1(30)		/* __per_cpu_offset */
sldi 11,11,3			/* form index into __per_cpu_offset */
mr 10,9
ldx 9,11,8			/* __per_cpu_offset[smp_processor_id()] */
ldx 0,10,9			/* load per cpu data */

5 loads for something that is supposed to be fast, pretty awful. One reason for the large number of loads is that we have to synthesize 2 64bit constants (per_cpu__variable_name and __per_cpu_offset). By putting __per_cpu_offset into the paca we can avoid the 2 loads associated with it:

ld 11,56(13)			/* paca->data_offset */
ld 9,.LC59-.LCTOC1(30)		/* per_cpu__variable_name */
ldx 0,9,11			/* load per cpu data */

Longer term we should be able to do even better than 3 loads. If per_cpu__variable_name wasn't a 64bit constant and paca->data_offset was in a register we could cut it down to one load. A suggestion from Rusty is to use gcc's __thread extension here. In order to do this we would need to free up r13 (the __thread register and where the paca currently is). So far I've had a few unsuccessful attempts at doing that :) The patch also allocates per cpu memory node local on NUMA machines. This patch from Rusty has been sitting in my queue _forever_ but stalled when I hit the compiler bug. Sorry about that. Finally I also only allocate per cpu data for possible cpus, which comes straight out of the x86-64 port. On a pseries kernel (with NR_CPUS == 128) and 4 possible cpus we see some nice gains:

         total     used      free  shared  buffers  cached
Mem:   4012228   212860   3799368       0        0  162424

         total     used      free  shared  buffers  cached
Mem:   4016200   212984   3803216       0        0  162424

A saving of 3.75MB. Quite nice for smaller machines. Note: we now have to be careful of per cpu users that touch data for !possible cpus. At this stage it might be worth making the NUMA and possible cpu optimisations generic, but per cpu init is done so early we have to be careful that all architectures have their possible map setup correctly. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
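A sketch of the allocation side of this: size the per-cpu image once, allocate a node-local copy for each possible cpu, and stash the offset in that cpu's paca so the fast path is one load off r13. The bootmem call and the wrapper function are assumptions for illustration.

	#include <linux/percpu.h>
	#include <linux/cpumask.h>
	#include <linux/string.h>	/* memcpy() */
	#include <linux/bootmem.h>	/* alloc_bootmem_pages_node() (assumed era API) */
	#include <linux/mmzone.h>	/* NODE_DATA() */
	#include <linux/topology.h>	/* cpu_to_node() */
	#include <asm/paca.h>
	#include <asm/sections.h>	/* __per_cpu_start, __per_cpu_end */

	static void __init demo_setup_per_cpu_areas(void)
	{
		unsigned long size = __per_cpu_end - __per_cpu_start;
		int cpu;

		for_each_possible_cpu(cpu) {	/* only possible cpus, not NR_CPUS */
			void *p = alloc_bootmem_pages_node(NODE_DATA(cpu_to_node(cpu)),
							   size);

			memcpy(p, __per_cpu_start, size);	/* copy initial values */
			paca[cpu].data_offset = (char *)p - __per_cpu_start;
		}
	}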
2006-01-11  [PATCH] powerpc: Make early debugging configurable via Kconfig  (Michael Ellerman, 1 file, -36/+2)
This patch adds Kconfig entries to control the early debugging options currently in setup_64.c. Doing this via Kconfig rather than #defines means you can have one source tree which is buildable for multiple platforms - and you can enable the correct early debug option for each platform via .config. I made udbg_early_init() a static inline because otherwise GCC is too daft to optimise it away when debugging is off. Now that we have udbg_init_rtas() we can make call_rtas_display_status* static. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-01-11  [PATCH] powerpc: Early debugging support for iSeries  (Michael Ellerman, 1 file, -5/+9)
Connect iSeries up to the standard early debugging infrastructure. To actually use this you need to enable the iSeries early debugging in setup_64.c. Then after the messages are logged hit Ctrl-x Ctrl-x on your console to dump the Hypervisor console buffer. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Paul Mackerras <paulus@samba.org>