path: root/arch
Age | Commit message | Author | Files | Lines
2007-04-23 | [POWERPC] cell: enable RTAS-based PTCAL for Cell XDR memory | Jeremy Kerr | 1 | -0/+160
Enable Periodic Recalibration (PTCAL) support for Cell XDR memory, using the new ibm,cbe-start-ptcal and ibm,cbe-stop-ptcal RTAS calls. Tested on QS20 and QS21 (by Thomas Huth). It seems that SLOF has problems disabling PTCAL, at least on QS20; this patch should only be used once these problems have been addressed. Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
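As a rough illustration of how an RTAS service like this is typically invoked from platform code (the token names come from the message above; the function name, argument layout and node/address parameters are assumptions made for this sketch, not the actual interface):

```c
#include <linux/errno.h>
#include <linux/types.h>
#include <asm/rtas.h>

/* Hypothetical sketch: start PTCAL for one node. The argument layout
 * (node id, address split into two 32-bit halves) is an assumption. */
static int cbe_start_ptcal(unsigned int node, u64 addr)
{
	int token = rtas_token("ibm,cbe-start-ptcal");

	if (token == RTAS_UNKNOWN_SERVICE)
		return -ENODEV;

	/* three input arguments, one return status, no extra outputs */
	return rtas_call(token, 3, 1, NULL, node,
			 (u32)(addr >> 32), (u32)(addr & 0xffffffff));
}
```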
2007-04-23 | [POWERPC] cell: add support for proper device-tree | Christian Krafft | 1 | -30/+88
This patch adds support for a proper device tree. A proper device tree on Cell contains "be" nodes for each CBE, each containing nodes for its SPEs and all the other special devices on it. Of course, the old-school device tree is still supported. Signed-off-by: Christian Krafft <krafft@de.ibm.com> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 | [POWERPC] add of_iomap function | Christian Krafft | 1 | -17/+2
The of_iomap function maps memory for a given device_node and returns a pointer to that memory. This is used in several places, so it makes sense to make it a separate function. Signed-off-by: Christian Krafft <krafft@de.ibm.com> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
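A minimal usage sketch of such a helper, assuming the common of_iomap(node, index) form; the node type, register offset and error handling below are made up for illustration:

```c
#include <linux/errno.h>
#include <linux/kernel.h>
#include <asm/prom.h>	/* device_node helpers; of_iomap() lived here on powerpc */
#include <asm/io.h>

static int __init example_map_device(void)
{
	struct device_node *np;
	void __iomem *regs;

	np = of_find_node_by_type(NULL, "example-device");	/* made-up type */
	if (!np)
		return -ENODEV;

	regs = of_iomap(np, 0);		/* map resource index 0 of the node */
	of_node_put(np);
	if (!regs)
		return -ENOMEM;

	printk(KERN_DEBUG "status: 0x%x\n", in_be32(regs + 0x10)); /* offset is illustrative */
	iounmap(regs);
	return 0;
}
```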
2007-04-23 | [POWERPC] pmi probe device by device-type | Christian Krafft | 1 | -0/+1
At the moment the pmi device driver is probing for devices with a given type and a given name. As there may be devices of the same type but with a different name, probing should also be done by device type only. Signed-off-by: Christian Krafft <krafft@de.ibm.com> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 | [POWERPC] add check for initialized driver data to pmi driver | Christian Krafft | 1 | -2/+7
This patch adds a check that the private driver data has been initialized. The bug showed up because a caller looked up a pmi device by its type, whereas the pmi driver probes for both the type and the name. Since the name was not what the driver expected, it did not initialize. More relaxed probing will be supplied by a separate patch. Signed-off-by: Christian Krafft <krafft@de.ibm.com> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 | [POWERPC] cell: use pmi in cpufreq driver | Christian Krafft | 1 | -1/+80
The new PMI driver was added in order to support cpufreq on blades that require the frequency to be controlled by the service processor, so use it on those. Signed-off-by: Christian Krafft <krafft@de.ibm.com> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 | [POWERPC] cbe_thermal: add throttling attributes to cpu and spu nodes | Christian Krafft | 1 | -1/+154
This patch adds some attributes to the cpu and spu nodes:
/sys/devices/system/[c|s]pu/[c|s]pu*/thermal/throttle_begin
/sys/devices/system/[c|s]pu/[c|s]pu*/thermal/throttle_end
/sys/devices/system/[c|s]pu/[c|s]pu*/thermal/throttle_full_stop
Signed-off-by: Christian Krafft <krafft@de.ibm.com> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 | [POWERPC] cbe_thermal: clean up computation of temperature | Christian Krafft | 1 | -18/+10
This patch introduces a little function for transforming register values into temperature. Signed-off-by: Christian Krafft <krafft@de.ibm.com> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
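A hypothetical sketch of what such a helper might look like; the bit layout (6-bit field, 2 degrees per step) and the 65-degree offset are assumptions made for illustration, not the actual CBE register encoding:

```c
#include <linux/types.h>

/* Hypothetical register-to-temperature helper; field width, step size
 * and offset are assumed values, not the real CBE encoding. */
static u8 reg_to_temp(u8 reg_value)
{
	return ((reg_value & 0x3f) << 1) + 65;	/* 2 degrees per step, 65 C base */
}
```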
2007-04-23 | [POWERPC] cell: add cbe_node_to_cpu function | Christian Krafft | 3 | -14/+45
This patch adds code to deal with the conversion of a logical cpu to a cbe node. It removes code that assumed there were two logical CPUs per CBE. Signed-off-by: Christian Krafft <krafft@de.ibm.com> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 | [POWERPC] spu_base: fix initialisation on systems with no SPEs | Jeremy Kerr | 2 | -3/+10
This change fixes the case where spu_base and spufs are initialised on a system with no SPEs - unconditionally create the spu_lists so spu_alloc doesn't explode, and check for spu_management ops before starting spufs. Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
arch/powerpc/platforms/cell/spu_base.c    | 7 ++++---
arch/powerpc/platforms/cell/spufs/inode.c | 5 +++++
2 files changed, 9 insertions(+), 3 deletions(-)
2007-04-23 | [POWERPC] spu_base: remove cleanup_spu_base | Christoph Hellwig | 1 | -37/+10
spu_base.c is always built into the kernel image, so there is no need for a cleanup function. And some of the things it does are in the way for my following patches, so I'd rather get rid of it ASAP. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 | [POWERPC] spufs: various run.c cleanups | Christoph Hellwig | 1 | -20/+31
- remove the spu_acquire_runnable call from spu_run_init; I need to open-code it in spufs_run_spu in the next patch
- remove various inline attributes; we don't really want to inline long functions with multiple callsites
- clean up return values and runcntl_write calls in spu_run_init
- use normal kernel coding style in spu_reacquire_runnable
Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 | [POWERPC] spufs: enable SPU coredump for kernel-builtin spufs | Akinobu Mita | 1 | -18/+16
spu_coredump_calls.owner is NULL in case of a builtin spufs, so the checks in here break. Check for the availability of the spu_coredump_calls variable instead. Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 | [POWERPC] spufs: fix memory leak on coredump | Arnd Bergmann | 1 | -9/+10
The dynamically allocated read/write buffer in spufs_arch_write_note() is never freed. Convert it to get_free_page at the same time. Cc: Akinobu Mita <mita@fixstars.com> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 | [POWERPC] spufs: Minor cleanup of spu_wait | Jeremy Kerr | 1 | -7/+6
Change the loop in spu_wait to be a little more straightforward. Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 | [POWERPC] spufs: add mode= mount option | Jeremy Kerr | 1 | -4/+10
Add a 'mode=' option to spufs mount arguments. This allows more control over access to the top-level spufs directory. Tested on Cell. Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 | [POWERPC] spufs: use memcpy_fromio() to copy from local store | Akinobu Mita | 1 | -6/+6
GCC may generate an inline copy loop for memcpy() instead of using the kernel-defined memcpy(). This inlined version causes an alignment interrupt when copying from local store. This patch uses memcpy_fromio() and memcpy_toio() to copy to and from local store, preventing memcpy() from being inlined. Signed-off-by: Akinobu Mita <mita@fixstars.com> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
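A minimal sketch of the pattern being described; the wrapper names and the idea of taking an __iomem pointer to local store are assumptions for illustration:

```c
#include <linux/types.h>
#include <asm/io.h>

/* Copying through the __iomem accessors keeps the compiler from
 * expanding the copy inline with potentially unaligned accesses.
 * These wrappers and their names are made up for illustration. */
static void ls_read(void *dst, const void __iomem *ls, size_t len)
{
	memcpy_fromio(dst, ls, len);	/* local store -> kernel buffer */
}

static void ls_write(void __iomem *ls, const void *src, size_t len)
{
	memcpy_toio(ls, src, len);	/* kernel buffer -> local store */
}
```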
2007-04-23 | [POWERPC] spufs: avoid spurious memory barriers | Christoph Hellwig | 1 | -14/+0
We now have proper locking around assignments of the mapping pointers, and the spin_unlock implies enough of a barrier to get rid of the explicit one. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 | [POWERPC] spufs: fix memory leak on spufs reloading | Akinobu Mita | 1 | -0/+6
When SPU isolation mode is enabled, isolated_loader is allocated by spufs_init_isolated_loader() at module_init(), but nobody frees it. This patch introduces spufs_exit_isolated_loader(), the counterpart of spufs_init_isolated_loader(), called at module_exit(). Cc: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Akinobu Mita <mita@fixstars.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 | [POWERPC] spufs: fix missing error handling in module_init() | Akinobu Mita | 1 | -6/+10
The spufs module_init forgot to call a few cleanup functions on the error path. This patch also includes cosmetic changes in spu_sched_init() (an indentation fix and returning an error code). [modified by hch to apply on top of the latest schedule changes] Cc: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Akinobu Mita <mita@fixstars.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 | [POWERPC] spufs: check spu_acquire_runnable() return value | Akinobu Mita | 1 | -1/+4
This patch checks the return value of spu_acquire_runnable() in spufs_mfc_write(). Signed-off-by: Akinobu Mita <mita@fixstars.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 | [POWERPC] spufs: turn run_sema into run_mutex | Christoph Hellwig | 3 | -4/+4
There is no reason for run_sema to be a struct semaphore. Change it to a mutex and rename it accordingly. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
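A sketch of the semaphore-to-mutex conversion pattern this describes; the structure and field names below are placeholders, not the actual spufs code:

```c
#include <linux/mutex.h>

struct spu_ctx_example {
	struct mutex run_mutex;		/* was: struct semaphore run_sema */
};

static void ctx_init(struct spu_ctx_example *ctx)
{
	mutex_init(&ctx->run_mutex);	/* was: init_MUTEX(&ctx->run_sema) */
}

static void ctx_run(struct spu_ctx_example *ctx)
{
	mutex_lock(&ctx->run_mutex);	/* was: down(&ctx->run_sema) */
	/* ... run the SPU context ... */
	mutex_unlock(&ctx->run_mutex);	/* was: up(&ctx->run_sema) */
}
```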
2007-04-23 | [POWERPC] spufs: provide siginfo for SPE faults | Jeremy Kerr | 1 | -7/+25
This change populates a siginfo struct for SPE application exceptions (i.e., invalid DMAs and illegal instructions). Tested on an IBM Cell Blade. Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
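A hedged sketch of how such a siginfo might be filled in and delivered for an invalid DMA; the signal number, si_code choice and helper name are assumptions made for illustration, not the actual spufs fault path:

```c
#include <linux/sched.h>
#include <linux/signal.h>
#include <linux/string.h>

/* Hypothetical helper: report an invalid SPE DMA to the owning task. */
static void spe_deliver_fault(unsigned long fault_addr)
{
	siginfo_t info;

	memset(&info, 0, sizeof(info));
	info.si_signo = SIGSEGV;
	info.si_code = SEGV_ACCERR;		/* assumed: invalid access rights */
	info.si_addr = (void __user *)fault_addr;
	force_sig_info(info.si_signo, &info, current);
}
```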
2007-04-23 | [POWERPC] spufs: make spu page faults not block scheduling | Arnd Bergmann | 8 | -130/+223
Until now, we have always entered the spu page fault handler with a mutex for the spu context held. This has multiple bad side-effects:
- it becomes impossible to suspend the context during page faults
- if an spu program attempts to access its own mmio areas through DMA, we get an immediate livelock when the nopage function tries to acquire the same mutex
This patch makes the page fault logic operate on a struct spu_context instead of a struct spu, and moves it from spu_base.c to a new file fault.c inside of spufs. We now also need to copy the dar and dsisr contents of the last fault into the saved context to have it accessible in case we schedule out the context before activating the page fault handler. Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 | [POWERPC] spu_base: move spu_init_channels out of spu_mutex | Christoph Hellwig | 1 | -1/+2
There is no reason to execute spu_init_channels under spu_mutex; after the spu has been taken off the freelist, it's ours. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 | [POWERPC] spu sched: make addition to stop_wq and runqueue atomic vs wakeup | Luke Browning | 1 | -22/+16
Addition to stop_wq needs to happen before adding to the runqueue, and under the same lock, so that we don't have a race window for a lost wakeup in the spu scheduler. Signed-off-by: Luke Browning <lukebrowning@us.ibm.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 | [POWERPC] spufs: streamline locking for isolated spu setup | Christoph Hellwig | 3 | -55/+16
For quite a while now, spu state has been protected by a simple mutex instead of the old rw_semaphore, and this means we can simplify the locking around spu_setup_isolated a lot. Instead of doing an spu_release before entering spu_setup_isolated and then calling the complicated spu_acquire_exclusive, we can now simply enter the function locked and in a guaranteed runnable state, so that the only bit of spu_acquire_exclusive that's left is the call to spu_unmap_mappings. Similarly, there's no more need to unlock and reacquire the state_mutex when spu_setup_isolated is done; we can always return with the lock held and only drop it in spu_run_init in the failure case. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 | [POWERPC] spufs: remove woken threads from the runqueue early | Christoph Hellwig | 2 | -27/+19
A single context should only be woken once, and we should not have more wakeups for a given priority than the number of contexts on that runqueue position. Also add some asserts to trap future problems in this area more easily. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 | [POWERPC] spufs: add memory barriers after set_bit | Arnd Bergmann | 1 | -0/+3
set_bit does not guarantee ordering on powerpc, so using it for communication between threads requires explicit mb() calls. Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
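A small sketch of the pattern this refers to (all names below are made up): set_bit() does not order memory accesses on powerpc, so a flag used to signal another thread needs an explicit barrier after it:

```c
#include <linux/bitops.h>
#include <asm/system.h>		/* mb() on 2007-era powerpc */

static unsigned long sched_flags;	/* illustrative shared state */
static int waiter_present;

static void wake_waiter(void)
{
	/* ... wake up the other thread ... */
}

static void notify(void)
{
	set_bit(0, &sched_flags);
	mb();	/* make the flag visible before testing the waiter's state */
	if (waiter_present)
		wake_waiter();
}
```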
2007-04-23 | [POWERPC] spu sched: ensure preempted threads are put back on the runqueue, part2 | Christoph Hellwig | 1 | -0/+6
To avoid losing an spu thread, we need to make sure it always gets put back on the runqueue, in find_victim as well as in the scheduler tick (as done in the previous patch). Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 | [POWERPC] spu sched: ensure preempted threads are put back on the runqueue | Christoph Hellwig | 1 | -3/+10
To avoid losing an spu thread, we need to make sure it always gets put back on the runqueue. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 | [POWERPC] spufs: clear mapping pointers after last close | Christoph Hellwig | 4 | -13/+148
Make sure the pointers to the various mappings are cleared once the last user has stopped using them. This avoids accessing freed memory when tearing down the gang directory, and allows pte invalidations to be optimized away if no one uses these mappings. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 | [POWERPC] spufs: use cancel_rearming_delayed_workqueue when stopping spu contexts | Christoph Hellwig | 2 | -4/+23
The scheduler workqueue may rearm itself and deadlock when we try to stop it. Put a flag in place to skip the work if we're tearing down the context. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-13 | [POWERPC] Fix detection of loader-supplied initrd on OF platforms | Paul Mackerras | 1 | -2/+4
Commit 79c8541924a220964f9f2cbed31eaa9fdb042eab introduced code to move the initrd if it was in a place where it would get overwritten by the kernel image. Unfortunately this exposed the fact that the code that checks whether the values passed in r3 and r4 are intended to indicate the start address and size of an initrd image was not as thorough as the kernel's checks. The symptom is that on OF-based platforms, the bootwrapper can cause an exception which causes the system to drop back into OF. Previously it didn't matter so much if the code incorrectly thought that there was an initrd, since the values for start and size were just passed through to the kernel. Now the bootwrapper needs to apply the same checks as the kernel since it is now using the initrd data itself (in the process of copying it if necessary). This adds the code to do that. Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-04-12 | [POWERPC] Miscellaneous arch/powerpc Kconfig and platform/Kconfig cleanup | Kumar Gala | 2 | -45/+30
* Cleaned up some whitespace in arch/powerpc/Kconfig
* Moved sourcing of platforms/embedded6xx/Kconfig into platform/Kconfig
* Moved sourcing of platforms/4xx/Kconfig into platform/Kconfig and disabled it
* Removed EMBEDDEDBOOT since it's not supported in arch/powerpc
* Removed PC_KEYBOARD since it's not used anywhere
* Moved a few CONFIG options around in platform/Kconfig
* Moved interrupt controllers into platform/Kconfig, out of the bus section
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2007-04-12 | [POWERPC] Convert 85xx platform to unified platform Kconfig | Kumar Gala | 3 | -27/+6
Moved 85xx platform Kconfig over to being sourced by the unified arch/powerpc/platforms/Kconfig. Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2007-04-12 | [POWERPC] Convert 8xx platform to unified platform Kconfig | Kumar Gala | 3 | -38/+34
Moved 8xx platform Kconfig over to being sourced by the unified arch/powerpc/platforms/Kconfig. Also, cleaned up whitespace issues in 8xx Kconfig. Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2007-04-12 | [POWERPC] Convert 82xx platform to unified platform Kconfig | Kumar Gala | 3 | -37/+26
Moved 82xx platform Kconfig over to being sourced by the unified arch/powerpc/platforms/Kconfig. Also, cleaned up whitespace issues in 82xx Kconfig. Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2007-04-12 | [POWERPC] Convert 83xx platform to unified platform Kconfig | Kumar Gala | 3 | -7/+3
Moved 83xx platform Kconfig over to being sourced by the unified arch/powerpc/platforms/Kconfig. Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2007-04-12 | [POWERPC] Convert 86xx platform to unified platform Kconfig | Kumar Gala | 3 | -17/+10
Moved 86xx platform Kconfig over to being sourced by the unified arch/powerpc/platforms/Kconfig. Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2007-04-12 | [POWERPC] Ensure platform CONFIG options have correct dependencies | Kumar Gala | 1 | -1/+6
We currently support TAU and CPU frequency scaling only on discrete (non-SOC) processors. Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2007-04-13 | [POWERPC] ibmebus: change probe/remove interface from using loc-code to DT path | Joachim Fenkes | 1 | -55/+60
In some cases, multiple OFDT nodes might share the same location code, so the location code is not a unique identifier for an OFDT node. Changed the ibmebus probe/remove interface to use the DT path of the device node instead of the location code. The DT path must be written into probe/remove exactly as it would appear in the "devspec" attribute of the ebus device: relative to the DT root, with a leading slash and without a trailing slash. One trailing newline will not hurt; multiple newlines will (like perl's chomp()).
Example: add a device "/proc/device-tree/foo@12345678" to ibmebus like this:
echo /foo@12345678 > /sys/bus/ibmebus/probe
Remove the device like this:
echo /foo@12345678 > /sys/bus/ibmebus/remove
Signed-off-by: Joachim Fenkes <fenkes@de.ibm.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-04-13 | [POWERPC] DEBUG_PAGEALLOC for 64-bit | Benjamin Herrenschmidt | 2 | -4/+82
Here's an implementation of DEBUG_PAGEALLOC for 64-bit powerpc. It applies on top of the 32-bit patch. Unlike Anton's previous attempt, I'm not using updatepp. I'm removing the hash entries from the bolted mapping (using a map in RAM of all the slots). Expensive, but it doesn't really matter, does it? :-) Hot-added memory doesn't benefit from this unless it's added at an address that is below end_of_DRAM() as calculated at boot time. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
arch/powerpc/Kconfig.debug      |  2
arch/powerpc/mm/hash_utils_64.c | 84 ++++++++++++++++++++++++++++++++++++++--
2 files changed, 82 insertions(+), 4 deletions(-)
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-04-13 | [POWERPC] DEBUG_PAGEALLOC for 32-bit | Benjamin Herrenschmidt | 4 | -1/+68
Here's an implementation of DEBUG_PAGEALLOC for ppc32. It disables BAT mapping and has only been tested with hash-table-based processors, though it shouldn't be too hard to adapt it to others. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
arch/powerpc/Kconfig.debug       |  9 ++++++
arch/powerpc/mm/init_32.c        |  4 +++
arch/powerpc/mm/pgtable_32.c     | 52 +++++++++++++++++++++++++++++++++++++++
arch/powerpc/mm/ppc_mmu_32.c     |  4 ++-
include/asm-powerpc/cacheflush.h |  6 ++++
5 files changed, 74 insertions(+), 1 deletion(-)
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-04-13 | [POWERPC] Fix 32-bit mm operations when not using BATs | Benjamin Herrenschmidt | 4 | -9/+31
On hash-table-based 32-bit powerpc, the hash management code runs with a big spinlock. It's thus important that it never causes a hash fault itself. That code is generally safe (it does memory accesses in real mode, among other things), with the exception of the access to the code itself: the kernel text needs to be accessible without taking hash miss exceptions. This is currently guaranteed by having a BAT register permanently mapping part of the linear mapping, which includes the kernel text. But this is not true when using the "nobats" kernel command line option (which can be useful for debugging), and will not be true when using DEBUG_PAGEALLOC, implemented in a subsequent patch. This patch fixes this by pre-faulting in the hash table pages that hit the kernel text, and making sure we never evict such a page under hash pressure. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
arch/powerpc/mm/hash_low_32.S | 22 ++++++++++++++++++++--
arch/powerpc/mm/mem.c         |  3 ---
arch/powerpc/mm/mmu_decl.h    |  4 ++++
arch/powerpc/mm/pgtable_32.c  | 11 +++++++----
4 files changed, 31 insertions(+), 9 deletions(-)
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-04-13 | [POWERPC] Cleanup 32-bit map_page | Benjamin Herrenschmidt | 1 | -3/+6
The 32-bit map_page() function is used internally by the mm code for early mmu mappings and for ioremap. It should never be called for an address that already has a valid PTE or hash entry, so we add a BUG_ON for that and remove the useless flush_HPTE call. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
arch/powerpc/mm/pgtable_32.c | 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-04-13 | [POWERPC] Make tlb flush batch use lazy MMU mode | Benjamin Herrenschmidt | 4 | -43/+49
The current tlb flush code on 64-bit powerpc has a subtle race: since we no longer hold the page table lock, new PTEs can be faulted in after a previous one has been removed but before the corresponding hash entry has been evicted, which can lead to all sorts of fatal problems. This patch reworks the batch code completely. It doesn't use the mmu_gather stuff anymore. Instead, we use the lazy mmu hooks that were added by the paravirt code. They have the nice property that the enter/leave lazy mmu mode pair is always fully contained by the PTE lock for a given range of PTEs. Thus we can guarantee that all batches are flushed on a given CPU before it drops that lock. We also generalize batching for any PTE update that requires a flush. Batching is now enabled on a CPU by arch_enter_lazy_mmu_mode() and disabled by arch_leave_lazy_mmu_mode(). The code expects that this is always contained within a PTE lock section, so no preemption can happen and no PTE insertion in that range can come from another CPU. When batching is enabled on a CPU, every PTE update that needs a hash flush will use the batch for that flush. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
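A sketch of the batching contract described above, with a made-up PTE-clearing loop; the function, its arguments and the locking shown are assumptions, and only the enter/leave pairing inside the PTE-lock section is the point:

```c
#include <linux/mm.h>
#include <linux/spinlock.h>
#include <asm/pgtable.h>

/* Illustrative only: batch the hash flushes for a run of PTE clears.
 * The enter/leave pair stays inside the lock, so the batch is flushed
 * on this CPU before the lock is dropped. */
static void clear_pte_range(struct mm_struct *mm, pte_t *ptep,
			    unsigned long addr, int nr, spinlock_t *ptl)
{
	int i;

	spin_lock(ptl);
	arch_enter_lazy_mmu_mode();	/* start batching hash flushes */
	for (i = 0; i < nr; i++, ptep++, addr += PAGE_SIZE)
		pte_clear(mm, addr, ptep);
	arch_leave_lazy_mmu_mode();	/* flush the batch */
	spin_unlock(ptl);
}
```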
2007-04-13 | [POWERPC] Alignment exception uses __get/put_user_inatomic | Benjamin Herrenschmidt | 1 | -25/+31
Make the alignment exception handler use the new _inatomic variants of __get/put_user. This fixes erroneous warnings in the very rare cases where we manage to have copy_tofrom_user_inatomic() trigger an alignment exception. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
arch/powerpc/kernel/align.c | 56 ++++++++++++++++++++++++--------------------
1 file changed, 31 insertions(+), 25 deletions(-)
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-04-13 | [POWERPC] Clean up unused ROUND_UP, NAME_OFFSET macros in arch/powerpc | Milind Arun Choudhary | 1 | -4/+0
Unused ROUND_UP, NAME_OFFSET macro cleanup Signed-off-by: Milind Arun Choudhary <milindchoudhary@gmail.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-04-13 | [POWERPC] Add correct interrupt property for pegasos ide | Olaf Hering | 1 | -22/+33
The firmware assigns irq 20/21 to the VIA IDE device on Pegasos. But the required interrupt is 14/15. Maybe someone confused decimal vs. hexadecimal values. Signed-off-by: Olaf Hering <olaf@aepfle.de> Signed-off-by: Paul Mackerras <paulus@samba.org>