path: root/arch/powerpc/platforms/cell/spu_base.c
2022-05-08  powerpc: Remove asm/prom.h from all files that don't need it  (Christophe Leroy, 1 file, -1/+0)

Several files include asm/prom.h for no reason. Clean it up.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
[mpe: Drop change to prom_parse.c as reported by lkp@intel.com]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/7c9b8fda63dcf63e1b28f43e7ebdb95182cbc286.1646767214.git.christophe.leroy@csgroup.eu

2022-03-08  powerpc: declare unmodified attribute_group usages const  (Rohan McLure, 1 file, -2/+2)

Inspired by commit bd75b4ef4977 ("Constify static attribute_group structs"), accepted by linux-next, reported at:
https://patchwork.ozlabs.org/project/linuxppc-dev/patch/20220210202805.7750-4-rikard.falkeborn@gmail.com/

Nearly all singletons of type struct attribute_group are never modified, and so are candidates for being const. Declare them as const.

Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220307231414.86560-1-rmclure@linux.ibm.com

2021-12-23  powerpc/cell: Add __init attribute to eligible functions  (Nick Child, 1 file, -3/+3)

Some functions defined in 'arch/powerpc/platforms/cell' are deserving of an `__init` macro attribute. These functions are only called by other initialization functions and therefore should inherit the attribute.

Signed-off-by: Nick Child <nick.child@ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20211216220035.605465-8-nick.child@ibm.com

2019-05-30  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 153  (Thomas Gleixner, 1 file, -14/+1)

Based on 1 normalized pattern(s):

    this program is free software you can redistribute it and or modify
    it under the terms of the gnu general public license as published by
    the free software foundation either version 2 or at your option any
    later version this program is distributed in the hope that it will
    be useful but without any warranty without even the implied warranty
    of merchantability or fitness for a particular purpose see the gnu
    general public license for more details you should have received a
    copy of the gnu general public license along with this program if
    not write to the free software foundation inc 675 mass ave cambridge
    ma 02139 usa

extracted by the scancode license scanner the SPDX license identifier

    GPL-2.0-or-later

has been chosen to replace the boilerplate/reference in 77 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Allison Randal <allison@lohutok.net>
Reviewed-by: Armijn Hemel <armijn@tjaldur.nl>
Reviewed-by: Richard Fontana <rfontana@redhat.com>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190527070032.837555891@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

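The practical effect on this file is that the block of license boilerplate at the top is replaced by a single tag of this form (the identifier is the one named above, in the usual comment style for C sources):

    // SPDX-License-Identifier: GPL-2.0-or-later
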
2019-04-21  powerpc/mm/hash: Rename KERNEL_REGION_ID to LINEAR_MAP_REGION_ID  (Aneesh Kumar K.V, 1 file, -1/+1)

The region actually points to the linear map. Rename the #define to clarify that.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

2019-04-21  powerpc/mm/hash64: Map all the kernel regions in the same 0xc range  (Aneesh Kumar K.V, 1 file, -2/+2)

This patch maps the vmalloc, IO and vmemmap regions in the 0xc address range instead of the current 0xd and 0xf ranges. This brings the mapping closer to radix translation mode. With a hash 64K page size each of these regions is 512TB, whereas with a 4K config we are limited by the max page table range of 64TB and hence these regions are 16TB in size.

The kernel mapping is now:

On 4K hash:
    kernel_region_map_size = 16TB
    kernel vmalloc start   = 0xc000100000000000
    kernel IO start        = 0xc000200000000000
    kernel vmemmap start   = 0xc000300000000000

64K hash, 64K radix and 4K radix:
    kernel_region_map_size = 512TB
    kernel vmalloc start   = 0xc008000000000000
    kernel IO start        = 0xc00a000000000000
    kernel vmemmap start   = 0xc00c000000000000

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

2018-09-21  signal/powerpc: Use force_sig_fault where appropriate  (Eric W. Biederman, 1 file, -2/+2)

Reviewed-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>

2017-05-25  powerpc/spufs: Fix hash faults for kernel regions  (Jeremy Kerr, 1 file, -1/+3)

Commit ac29c64089b7 ("powerpc/mm: Replace _PAGE_USER with _PAGE_PRIVILEGED") swapped _PAGE_USER for _PAGE_PRIVILEGED, and introduced check_pte_access() which denied kernel access to non-_PAGE_PRIVILEGED pages. However, it didn't add _PAGE_PRIVILEGED to the hash fault handler for spufs' kernel accesses, so the DMAs required to establish SPE memory no longer work.

This change adds _PAGE_PRIVILEGED to the hash fault handler for kernel accesses.

Fixes: ac29c64089b7 ("powerpc/mm: Replace _PAGE_USER with _PAGE_PRIVILEGED")
Cc: stable@vger.kernel.org # v4.7+
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Reported-by: Sombat Tragolgosol <sombat3960@gmail.com>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

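The shape of the fix in the spufs hash-fault path, sketched for illustration (the access flags shown are reconstructed from the description above, not copied from the patch):

    /* kernel access on behalf of the SPE: must pass _PAGE_PRIVILEGED */
    ret = hash_page(ea,
                    _PAGE_PRESENT | _PAGE_READ | _PAGE_PRIVILEGED,
                    0x300, dsisr);
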
2016-11-30  powerpc: Change places using CONFIG_KEXEC to use CONFIG_KEXEC_CORE instead.  (Thiago Jung Bauermann, 1 file, -1/+1)

Commit 2965faa5e03d ("kexec: split kexec_load syscall from kexec core code") introduced CONFIG_KEXEC_CORE so that CONFIG_KEXEC means whether the kexec_load system call should be compiled-in and CONFIG_KEXEC_FILE means whether the kexec_file_load system call should be compiled-in. These options can be set independently from each other.

Since until now powerpc only supported kexec_load, CONFIG_KEXEC and CONFIG_KEXEC_CORE were synonyms. That is not the case anymore, so we need to make a distinction. Almost all places where CONFIG_KEXEC was being used should be using CONFIG_KEXEC_CORE instead, since kexec_file_load also needs that code compiled in.

Signed-off-by: Thiago Jung Bauermann <bauerman@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

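In this file the conversion is a one-line guard rename, of this form (a sketch, not the exact hunk):

    -#ifdef CONFIG_KEXEC
    +#ifdef CONFIG_KEXEC_CORE
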
2016-09-20  powerpc: Remove all usages of NO_IRQ  (Michael Ellerman, 1 file, -8/+8)

NO_IRQ has been == 0 on powerpc for just over ten years (since commit 0ebfff1491ef ("[POWERPC] Add new interrupt mapping core and change platforms to use it")). It's also 0 on most other arches.

Although it's fairly harmless, every now and then it causes confusion when a driver is built on powerpc and another arch which doesn't define NO_IRQ. There's at least 6 definitions of NO_IRQ in drivers/, at least some of which are to work around that problem. So we'd like to remove it. This is fairly trivial in the arch code, we just convert:

    if (irq == NO_IRQ)    to    if (!irq)
    if (irq != NO_IRQ)    to    if (irq)
    irq = NO_IRQ;         to    irq = 0;
    return NO_IRQ;        to    return 0;

And a few other odd cases as well.

At least for now we keep the #define NO_IRQ, because there is driver code that uses NO_IRQ and the fixes to remove those will go via other trees. Note we also change some occurrences in PPC sound drivers, drivers/ps3, and drivers/macintosh.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

2016-06-14  powerpc: Various typo fixes  (Michael Ellerman, 1 file, -2/+2)

Signed-off-by: Andrea Gelmini <andrea.gelmini@gelma.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

2016-05-01  powerpc/mm: Use _PAGE_READ to indicate Read access  (Aneesh Kumar K.V, 1 file, -1/+1)

This splits the _PAGE_RW bit into _PAGE_READ and _PAGE_WRITE. It also removes the dependency on _PAGE_USER for implying read only. A few things to note here: we have read implied with write and execute permission, hence we should always find _PAGE_READ set on hash pte fault.

We still can't switch PROT_NONE to !(_PAGE_RWX). Auto numa depends on marking a prot none pte _PAGE_WRITE. (For more details look at b191f9b106ea "mm: numa: preserve PTE write permissions across a NUMA hinting fault")

Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Jeremy Kerr <jk@ozlabs.org>
Cc: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
Acked-by: Ian Munsie <imunsie@au1.ibm.com>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

2016-04-11  powerpc/cell: Make spu_base.c explicitly non-modular  (Paul Gortmaker, 1 file, -5/+2)

The Kconfig currently controlling compilation of this code is:

    arch/powerpc/platforms/cell/Kconfig:config SPU_BASE
    arch/powerpc/platforms/cell/Kconfig:    bool

...meaning that it currently is not being built as a module by anyone. Let's remove the modular code that is essentially orphaned, so that when reading the driver there is no doubt it is builtin-only.

Since module_init translates to device_initcall in the non-modular case, the init ordering remains unchanged with this commit. We also delete the MODULE_LICENSE tag etc. since all that information is already contained at the top of the file in the comments.

Cc: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

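The core of such a conversion looks like this (a sketch; init_spu_base is this file's init routine, but the exact hunk may differ):

    -module_init(init_spu_base);
    +device_initcall(init_spu_base);
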
2014-12-05  powerpc/mm: don't do tlbie for updatepp request with NO HPTE fault  (Aneesh Kumar K.V, 1 file, -2/+3)

updatepp can get called for a nohpte fault when we find from the linux page table that the translation was hashed before. In that case we are sure that there is no existing translation, hence we could avoid doing tlbie.

We could possibly race with a parallel fault filling the TLB. But that should be ok because updatepp is only ever relaxing permissions. We also look at linux pte permission bits when filling hash pte permission bits. We also hold the linux pte busy bits while inserting/updating a hashpte entry, hence a parallel update of the linux pte is not possible. On the other hand mprotect involves ptep_modify_prot_start which causes a hpte invalidate and not updatepp.

Performance numbers (random_access_bench written by Anton; kernel with THP disabled and a smaller hash page table size):

Without the fix:

    86.60%  random_access_b  [kernel.kallsyms]    [k] .native_hpte_updatepp
     2.10%  random_access_b  random_access_bench  [.] doit
     1.99%  random_access_b  [kernel.kallsyms]    [k] .do_raw_spin_lock
     1.85%  random_access_b  [kernel.kallsyms]    [k] .native_hpte_insert
     1.26%  random_access_b  [kernel.kallsyms]    [k] .native_flush_hash_range
     1.18%  random_access_b  [kernel.kallsyms]    [k] .__delay
     0.69%  random_access_b  [kernel.kallsyms]    [k] .native_hpte_remove
     0.37%  random_access_b  [kernel.kallsyms]    [k] .clear_user_page
     0.34%  random_access_b  [kernel.kallsyms]    [k] .__hash_page_64K
     0.32%  random_access_b  [kernel.kallsyms]    [k] fast_exception_return
     0.30%  random_access_b  [kernel.kallsyms]    [k] .hash_page_mm

With the fix:

    27.54%  random_access_b  random_access_bench  [.] doit
    22.90%  random_access_b  [kernel.kallsyms]    [k] .native_hpte_insert
     5.76%  random_access_b  [kernel.kallsyms]    [k] .native_hpte_remove
     5.20%  random_access_b  [kernel.kallsyms]    [k] fast_exception_return
     5.12%  random_access_b  [kernel.kallsyms]    [k] .__hash_page_64K
     4.80%  random_access_b  [kernel.kallsyms]    [k] .hash_page_mm
     3.31%  random_access_b  [kernel.kallsyms]    [k] data_access_common
     1.84%  random_access_b  [kernel.kallsyms]    [k] .trace_hardirqs_on_caller

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

2014-10-08  powerpc/cell: Move data segment faulting code out of cell platform  (Ian Munsie, 1 file, -46/+9)

__spu_trap_data_seg() currently contains code to determine the VSID and ESID required for a particular EA and mm struct. This code is generically useful for other co-processors, so this moves it out of the cell platform so it can be used by other powerpc code. It also adds 1TB segment handling, which Cell didn't support. The new function is called copro_calculate_slb().

This also moves the internal struct spu_slb to a generic struct copro_slb, which is now used in the Cell and copro code. We use this new struct instead of passing around esid and vsid parameters.

Signed-off-by: Ian Munsie <imunsie@au1.ibm.com>
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

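The generic interface described above amounts to the following (reconstructed from the description; see arch/powerpc/include/asm/copro.h for the authoritative version):

    struct copro_slb {
        u64 esid;
        u64 vsid;
    };

    int copro_calculate_slb(struct mm_struct *mm, u64 ea, struct copro_slb *slb);
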
2014-07-23  powerpc: cell: Use ktime_get_ns()  (Thomas Gleixner, 1 file, -8/+3)

Replace the ever recurring:

    ktime_get_ts(&ts);
    ns = timespec_to_ns(&ts);

with

    ns = ktime_get_ns();

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: John Stultz <john.stultz@linaro.org>

2013-05-06  powerpc/cell/spufs: Fix status attribute permission  (Benjamin Herrenschmidt, 1 file, -1/+1)

We are registering the attribute with permission 0644 but it doesn't have a store callback, which causes WARN_ON's during boot. Fix the permission.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

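A read-only attribute with no store callback should be registered 0444; the fix is of this shape (attribute name illustrative, reconstructed from the description):

    -static DEVICE_ATTR(stat, 0644, spu_stat_show, NULL);
    +static DEVICE_ATTR(stat, 0444, spu_stat_show, NULL);
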
2012-01-06  Merge branch 'driver-core-next' into Linux 3.2  (Greg Kroah-Hartman, 1 file, -30/+31)

This resolves the conflict in the arch/arm/mach-s3c64xx/s3c6400.c file, and it fixes the build error in the arch/x86/kernel/microcode_core.c file, that the merge did not catch. The microcode_core.c patch was provided by Stephen Rothwell <sfr@canb.auug.org.au> who was invaluable in the merge issues involved with the large sysdev removal process in the driver-core tree.

Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>

2011-12-21  cpu: convert 'cpu' and 'machinecheck' sysdev_class to a regular subsystem  (Kay Sievers, 1 file, -30/+31)

This moves the 'cpu sysdev_class' over to a regular 'cpu' subsystem and converts the devices to regular devices. The sysdev drivers are implemented as subsystem interfaces now.

After all sysdev classes are ported to regular driver core entities, the sysdev implementation will be entirely removed from the kernel. Userspace relies on events and generic sysfs subsystem infrastructure from sysdev devices, which are made available with this conversion.

Cc: Haavard Skinnemoen <hskinnemoen@gmail.com>
Cc: Hans-Christian Egtvedt <egtvedt@samfundet.no>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Borislav Petkov <bp@amd64.org>
Cc: Tigran Aivazian <tigran@aivazian.fsnet.co.uk>
Cc: Len Brown <lenb@kernel.org>
Cc: Zhang Rui <rui.zhang@intel.com>
Cc: Dave Jones <davej@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
Cc: "Srivatsa S. Bhat" <srivatsa.bhat@linux.vnet.ibm.com>
Signed-off-by: Kay Sievers <kay.sievers@vrfy.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>

2011-11-08  powerpc/irq: Remove IRQF_DISABLED  (Yong Zhang, 1 file, -6/+3)

Since commit e58aa3d2 ("genirq: Run irq handlers with interrupts disabled"), we run all interrupt handlers with interrupts disabled, and we even check and yell when an interrupt handler returns with interrupts enabled (see commit b738a50a "genirq: Warn when handler enables interrupts"). So now this flag is a NOOP and can be removed.

Signed-off-by: Yong Zhang <yong.zhang0@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Geoff Levand <geoff@infradead.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

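For the SPU class interrupts the conversion is mechanical, roughly of this shape (a sketch, not the exact hunk; spu->irqs[] and spu->irq_c0 are the fields this file uses for its IRQ lines and names):

    -ret = request_irq(spu->irqs[0], spu_irq_class_0, IRQF_DISABLED, spu->irq_c0, spu);
    +ret = request_irq(spu->irqs[0], spu_irq_class_0, 0, spu->irq_c0, spu);
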
2011-05-11  PM / PowerPC: Use struct syscore_ops instead of sysdevs for PM  (Rafael J. Wysocki, 1 file, -10/+18)

Make some PowerPC architecture's code use struct syscore_ops objects for power management instead of sysdev classes and sysdevs. This simplifies the code and reduces the kernel's memory footprint. It also is necessary for removing sysdevs from the kernel entirely in the future.

Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Acked-by: Greg Kroah-Hartman <gregkh@suse.de>

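The conversion follows the standard syscore pattern (a minimal sketch; the callback name here is hypothetical, standing in for whatever PM work the old sysdev class did):

    static void spu_pm_resume(void)
    {
        /* redo whatever the old sysdev resume hook handled */
    }

    static struct syscore_ops spu_syscore_ops = {
        .resume = spu_pm_resume,
    };

    register_syscore_ops(&spu_syscore_ops);
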
2011-01-21  powerpc/kdump: Move crash_kexec_stop_spus to kdump crash handler  (Anton Blanchard, 1 file, -0/+70)

Use the crash handler hooks to run the SPU stop code, just like we do for ehea and cell RAS code. While I'm here I noticed "CPUSs reliabally" so fix the spelling MISTAKESs reliabally.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

2009-06-16  fbdev: move logo externs to header file  (Geert Uytterhoeven, 1 file, -10/+1)

Now we have __initconst, we can finally move the external declarations for the various Linux logo structures to <linux/linux_logo.h>. James' ack dates back to the previous submission (way too long ago), when the logos were still __initdata, which caused failures on some platforms with some toolchain versions.

Signed-off-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Acked-by: James Simmons <jsimmons@infradead.org>
Cc: Krzysztof Helt <krzysztof.h1@poczta.fm>
Cc: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2009-03-24  cpumask: Use mm_cpumask() wrapper instead of cpu_vm_mask  (Rusty Russell, 1 file, -1/+1)

Makes code futureproof against the impending change to mm->cpu_vm_mask. It's also a chance to use the new cpumask_ ops which take a pointer (the older ones are deprecated, but there's no hurry for arch code).

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

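The conversion is of this general shape (illustrative one-liner, not the exact hunk from this file):

    -if (cpu_isset(cpu, mm->cpu_vm_mask))
    +if (cpumask_test_cpu(cpu, mm_cpumask(mm)))
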
2009-01-13  powerpc: Change u64/s64 to a long long integer type  (Ingo Molnar, 1 file, -2/+2)

Convert arch/powerpc/ over to long long based u64:

    -#ifdef __powerpc64__
    -# include <asm-generic/int-l64.h>
    -#else
    -# include <asm-generic/int-ll64.h>
    -#endif
    +#include <asm-generic/int-ll64.h>

This will avoid recurring spurious warnings in core kernel code that come up when people test on their own hardware (i.e. x86 in ~98% of the cases). This is what x86 uses and it generally helps keep 64-bit code 32-bit clean too.

[Adjusted to not impact user mode (from paulus) - sfr]

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

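The class of warning this removes, as an illustrative example (not from the patch): with int-l64.h a u64 is unsigned long on 64-bit, so printing it with the portable %llu format warns on ppc64 but not on x86; with int-ll64.h it is unsigned long long everywhere and the defensive cast becomes unnecessary:

    u64 val = 42;
    printk(KERN_INFO "val=%llu\n", (unsigned long long)val);
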
2008-07-21  sysdev: Pass the attribute to the low level sysdev show/store function  (Andi Kleen, 1 file, -1/+2)

This allows one to dynamically generate attributes and share show/store functions between attributes. Right now most attributes are generated by special macros and lots of duplicated code. With the attribute passed, it's instead possible to attach some data to the attribute and then use that in shared low level functions to do different things.

I need this for the dynamically generated bank attributes in the x86 machine check code, but it'll allow some further cleanups. I converted all users in tree to the new show/store prototype. It's a single huge patch to avoid unbisectable sections.

Runtime tested: x86-32, x86-64
Compiled only: ia64, powerpc
Not compile tested/only grep converted: sh, arm, avr32

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>

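The prototype change, for reference (sysdev API of that era, sketched from the description):

    /* before */
    ssize_t (*show)(struct sys_device *, char *);
    /* after: the attribute itself is now passed down */
    ssize_t (*show)(struct sys_device *, struct sysdev_attribute *, char *);
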
2008-06-16  powerpc/spufs: synchronize interaction between spu exception handling and time slicing  (Luke Browning, 1 file, -16/+26)

Time slicing can occur at the same time as spu exception handling, resulting in the wakeup of the wrong thread. This change uses the spu's register_lock to enforce synchronization between bind/unbind and spu exception handling so that they are mutually exclusive.

Signed-off-by: Luke Browning <lukebrowning@us.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>

2008-06-16powerpc/spufs: remove class_0_dsisr from spu exception handlingLuke Browning1-2/+0
According to the CBEA, the SPU dsisr is not updated for class 0 exceptions. spu_stopped() is testing the dsisr that was passed to it from the class 0 exception handler, so we return a false positive here. This patch cleans up the interrupt handler and erroneous tests in spu_stopped. It also removes the fields from the csa since it is not needed to process class 0 events. Signed-off-by: Luke Browning <lukebrowning@us.ibm.com> Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-05-05  [POWERPC] spufs: handle faults while the context switch pending flag is set  (Luke Browning, 1 file, -0/+4)

Currently, page fault handlers don't issue an mfc restart if the context switch pending flag is set, which can leave us with a hanging DMA after a context restore.

This patch introduces a fault pending flag that is set by the fault handler and read by the context switch code, so that the latter can add the restart bit at the right spot, after it has successfully saved the state of the mfc control register.

Signed-off-by: Luke Browning <lukebr@linux.vnet.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>

2008-05-05  [POWERPC] spufs: fix concurrent delivery of class 0 & 1 exceptions  (Luke Browning, 1 file, -9/+18)

SPU class 0 & 1 exceptions may occur in parallel, so we may end up overwriting csa.dsisr. This change adds dedicated fields for each class to the spu and the spu context so that fault data is not overwritten.

Signed-off-by: Luke Browning <lukebr@linux.vnet.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>

2008-04-01  [POWERPC] Replace remaining __FUNCTION__ occurrences  (Harvey Harrison, 1 file, -4/+4)

__FUNCTION__ is gcc-specific, use __func__.

Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>

2008-02-29  [POWERPC] spufs: serialize SLB invalidation against SLB loading  (Arnd Bergmann, 1 file, -3/+9)

There is a potential race between flushes of the entire SLB in the MFC and the point where new entries are being established. The problem is that we might put an ESID entry into the MFC SLB when the VSID entry has just been cleared by the global flush.

This can be circumvented by holding the register_lock throughout both the flushing and the creation of SLB entries.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>

2008-02-29  [POWERPC] spufs: invalidate SLB translation before adding a new entry  (Arnd Bergmann, 1 file, -0/+4)

When we replace an SLB entry in the MFC after using up all the available entries, there is a short window in which an incorrect entry is marked as valid. The problem is that the 'valid' bit is stored in the ESID, which is always written after the VSID. Overwriting the VSID first will make the original ESID entry point to the new VSID, which means that any concurrent DMA accessing the old ESID ends up being redirected to the new virtual address. A few cycles later, we write the new ESID and everything is fine again.

That race can be closed by writing a zero entry to the ESID first, which makes sure that the VSID is not accessed until we write the new ESID. Note that we don't actually need to invalidate the SLB entry using the invalidation register, which would also flush any ERAT entries for that segment, because the segment translation does not become invalid but is only removed from the SLB cache.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>

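A sketch of the safe write ordering, modeled on the MFC SLB load path (the priv2 register names are from struct spu_priv2; details reconstructed for illustration):

    /* select the SLB slot being replaced */
    out_be64(&priv2->slb_index_W, slbe);
    eieio();
    /* zero the ESID first: the slot stays invalid while the VSID changes */
    out_be64(&priv2->slb_esid_RW, 0);
    out_be64(&priv2->slb_vsid_RW, vsid_data);
    eieio();
    /* writing the new ESID (with its valid bit) re-enables the slot */
    out_be64(&priv2->slb_esid_RW, esid_data);
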
2008-02-20  [POWERPC] cell: fix spurious false return from spu_trap_data_{map,seg}  (Andre Detsch, 1 file, -12/+0)

At present, the __spu_trap_data_map and __spu_trap_data_seg functions exit if spu->flags has SPU_CONTEXT_SWITCH_ACTIVE set. This was resulting in spurious returns from these functions, as they may be legitimately called when we have this bit set. We only use it in these two sanity checks, so this change removes the flag completely. This fixes hangs in the page-fault path of SPE apps.

Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>

2008-01-31  Merge branch 'linux-2.6'  (Paul Mackerras, 1 file, -1/+1)

2008-01-24  Driver core: change sysdev classes to use dynamic kobject names  (Kay Sievers, 1 file, -1/+1)

All kobjects require a dynamically allocated name now. We no longer need to keep track of whether the name is statically assigned; we can just unconditionally free() all kobject names on cleanup.

Signed-off-by: Kay Sievers <kay.sievers@vrfy.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>

2007-12-21  [POWERPC] spufs: rework class 0 and 1 interrupt handling  (Jeremy Kerr, 1 file, -50/+7)

Based on original patches from Arnd Bergmann <arnd.bergman@de.ibm.com> and Luke Browning <lukebr@linux.vnet.ibm.com>.

Currently, spu contexts need to be loaded to the SPU in order to take class 0 and class 1 exceptions. This change makes the actual interrupt handlers much simpler (ie, set the exception information in the context save area), and defers the handling code to the spufs_handle_class[01] functions, called from spufs_run_spu. This should improve the concurrency of the spu scheduling, leading to greater SPU utilization when SPUs are overcommitted.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>

2007-12-21  [POWERPC] spufs: use #defines for SPU class [012] exception status  (Jeremy Kerr, 1 file, -21/+21)

Add a few #defines for the class 0, 1 and 2 interrupt status bits, and use them instead of magic numbers when we're setting or checking for these interrupts. Also, add a #define for the class 2 mailbox threshold interrupt mask.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>

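The flavour of the change (names patterned on the ones this patch adds; the bit values here are illustrative only, the authoritative definitions live in asm/spu.h):

    #define CLASS0_DMA_ALIGNMENT_INTR  0x1
    #define CLASS1_SEGMENT_FAULT_INTR  0x1
    #define CLASS2_MAILBOX_INTR        0x1

    /* instead of: if (stat & 1) ... */
    if (stat & CLASS1_SEGMENT_FAULT_INTR)
        __spu_trap_data_seg(spu, ea);
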
2007-12-19  [POWERPC] cell: catch errors from sysfs_create_group()  (Jeremy Kerr, 1 file, -3/+17)

We're currently getting a warning from not checking the result of sysfs_create_group, which is declared as __must_check. This change introduces appropriate error-handling for spu_add_sysdev_attr_group().

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>

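The shape of that error handling, sketched against the sysdev API of the time (the unwind loop is a reconstruction, not the exact patch):

    int spu_add_sysdev_attr_group(struct attribute_group *attrs)
    {
        struct spu *spu;
        int rc = 0;

        mutex_lock(&spu_full_list_mutex);
        list_for_each_entry(spu, &spu_full_list, full_list) {
            rc = sysfs_create_group(&spu->sysdev.kobj, attrs);
            if (rc)
                break;
        }
        if (rc) {
            /* unwind the groups added before the failure */
            list_for_each_entry_continue_reverse(spu, &spu_full_list,
                                                 full_list)
                sysfs_remove_group(&spu->sysdev.kobj, attrs);
        }
        mutex_unlock(&spu_full_list_mutex);

        return rc;
    }
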
2007-12-19  [POWERPC] cell: handle SPE kernel mappings that cross segment boundaries  (Jeremy Kerr, 1 file, -7/+43)

Currently, there is a possibility that the SLBs set up during context switch don't cover the entirety of the necessary lscsa and code regions, if these regions cross a segment boundary.

This change checks the start and end of each region, and inserts an SLB entry for each, if unique. We also remove the assumption that spu_save_code and spu_restore_code reside in the same segment, by using the specific code array for save and restore.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>

2007-12-19  [POWERPC] cell: add spu_64k_pages_available() check  (Jeremy Kerr, 1 file, -0/+6)

Add a function spu_64k_pages_available(), so that we can abstract the explicit use of mmu_psize_defs in lscsa_alloc.c.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>

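The helper is small; its likely shape, reconstructed from the description (64K pages are available when the MMU reports a valid 64K page-size definition):

    int spu_64k_pages_available(void)
    {
        return mmu_psize_defs[MMU_PAGE_64K].shift != 0;
    }
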
2007-12-19  [POWERPC] cell: use spu_load_slb for SLB setup  (Jeremy Kerr, 1 file, -13/+10)

Now that we have a helper function to set up a SPU SLB, use it for __spu_trap_data_seg.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>

2007-12-19  [POWERPC] cell: handle kernel SLB setup in spu_base.c  (Jeremy Kerr, 1 file, -0/+49)

Currently, the SPU context switch code (spufs/switch.c) sets up the SPU's SLBs directly, which requires some low-level mm stuff. This change moves the kernel SLB setup to spu_base.c, by exposing a function spu_setup_kernel_slbs() to do this setup. This allows us to remove the low-level mm code from switch.c, making it possible to later move switch.c to the spufs module.

Also, add a struct spu_slb for the cases where we need to deal with SLB entries.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>

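The pieces described above, sketched for reference (the exact argument list of spu_setup_kernel_slbs() may differ; it covers the lscsa and context-switch code regions):

    struct spu_slb {
        u64 esid, vsid;
    };

    void spu_setup_kernel_slbs(struct spu *spu, struct spu_lscsa *lscsa,
                               void *code);
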
2007-12-19  [POWERPC] cell: export force_sig_info()  (Jeremy Kerr, 1 file, -0/+7)

Export force_sig_info to allow signals to be sent from a modular spufs.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>

2007-10-12  [POWERPC] Use 1TB segments  (Paul Mackerras, 1 file, -3/+3)

This makes the kernel use 1TB segments for all kernel mappings and for user addresses of 1TB and above, on machines which support them (currently POWER5+, POWER6 and PA6T). We detect that the machine supports 1TB segments by looking at the ibm,processor-segment-sizes property in the device tree.

We don't currently use 1TB segments for user addresses < 1T, since that would effectively prevent 32-bit processes from using huge pages unless we also had a way to revert to using 256MB segments. That would be possible but would involve extra complications (such as keeping track of which segment size was used when HPTEs were inserted) and is not addressed here.

Parts of this patch were originally written by Ben Herrenschmidt.

Signed-off-by: Paul Mackerras <paulus@samba.org>

2007-09-19  [POWERPC] spufs: Make file-internal functions & variables static  (Sebastian Siewior, 1 file, -1/+1)

There are a few symbols used only in one file within spufs; this change makes them static where suitable.

Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>

2007-09-11  [POWERPC] cell/PS3: Fix a bug that causes the PS3 to hang on the SPU Class 0 interrupt.  (Masato Noguchi, 1 file, -9/+15)

The Cell BE Architecture spec states that the SPU MFC Class 0 interrupt is edge-triggered. The current spu interrupt handler assumes this behavior and does not clear the interrupt status.

The PS3 hypervisor virtualizes all SPU interrupts as level, and on return from the interrupt handler the hypervisor will deliver a new virtual interrupt for any unmasked interrupts for which the status has not been cleared. This fix clears the interrupt status in the interrupt handler.

Signed-off-by: Masato Noguchi <Masato.Noguchi@jp.sony.com>
Signed-off-by: Geoff Levand <geoffrey.levand@am.sony.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Acked-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>

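A sketch of the fix's shape in the class 0 handler, using the spu_priv1 accessors (reconstructed for illustration, not the exact hunk):

    static irqreturn_t spu_irq_class_0(int irq, void *data)
    {
        struct spu *spu = data;
        u64 stat;

        spin_lock(&spu->register_lock);
        stat = spu_int_stat_get(spu, 0);
        /* ack the status so a level-style (virtualized) source is quiesced */
        spu_int_stat_clear(spu, 0, stat);
        spu->class_0_pending |= stat;
        spin_unlock(&spu->register_lock);

        spu->stop_callback(spu);

        return IRQ_HANDLED;
    }
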
2007-08-10  [POWERPC] cell: Move SPU affinity init to spu_management_of_ops  (Andre Detsch, 1 file, -140/+1)

This patch moves affinity initialization code from spu_base.c to a new spu_management_of_ops function (init_affinity), which is empty in the case of PS3. This fixes a linking problem that was happening when compiling for PS3. Also, some small code style changes were made.

Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Geoff Levand <geoffrey.levand@am.sony.com>
Acked-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>

2007-07-20  [CELL] spufs: rework list management and associated locking  (Christoph Hellwig, 1 file, -65/+7)

This sorts out the various lists and related locks in the spu code. In detail:

 - the per-node free_spus and active_list are gone. Instead struct spu gained an alloc_state member telling whether the spu is free or not
 - the per-node spus array is now locked by a per-node mutex, which takes over from the global spu_lock and the per-node active_mutex
 - the spu_alloc* and spu_free functions are gone, as the state change is now done inline in the spufs code. This allows some more sharing of code for the affinity vs normal case and more efficient locking
 - some little refactoring in the affinity code for this locking scheme

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>

2007-07-20  [CELL] spu_base: locking cleanup  (Christoph Hellwig, 1 file, -33/+51)

Sort out the locking mess in spu_base and document the current rules. As an added benefit, spu_alloc* and spu_free don't block anymore.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>