path: root/arch/sh/mm
Age  Commit message  Author  (Files, Lines changed)
2006-12-07  [PATCH] slab: remove kmem_cache_t  Christoph Lameter  (1 file, -3/+3)
Replace all uses of kmem_cache_t with struct kmem_cache. The patch was generated using the following script:

	#!/bin/sh
	#
	# Replace one string by another in all the kernel sources.
	#
	set -e
	for file in `find * -name "*.c" -o -name "*.h"|xargs grep -l $1`; do
		quilt add $file
		sed -e "1,\$s/$1/$2/g" $file >/tmp/$$
		mv /tmp/$$ $file
		quilt refresh
	done

The script was run like this:

	sh replace kmem_cache_t "struct kmem_cache"

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-12-07  [PATCH] shared page table for hugetlb page  Chen, Kenneth W  (1 file, -0/+5)
Following up on the shared page table work done by Dave McCracken, this patch set targets shared page tables for hugetlb memory only. Shared page tables are particularly useful when a large number of independent processes share large shared-memory segments. In the normal page case, the amount of memory saved from the processes' page tables is quite significant. For hugetlb, the saving on page table memory is not the primary objective (hugetlb itself already cuts page table overhead significantly); instead, the purpose of using shared page tables for hugetlb is to allow faster TLB refills and less cache pollution on TLB misses.

With PT sharing, pte entries are shared among hundreds of processes, so the cache footprint of all the page tables is smaller, and in return the application gets a much higher cache hit ratio. The hit ratio for the hardware page walker finding ptes in cache is also higher, which helps to reduce TLB miss latency. These two effects combine to yield higher application performance.

Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
Acked-by: Hugh Dickins <hugh@veritas.com>
Cc: Dave McCracken <dmccr@us.ibm.com>
Cc: William Lee Irwin III <wli@holomorphy.com>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: David Gibson <david@gibson.dropbear.id.au>
Cc: Adam Litke <agl@us.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: "David S. Miller" <davem@davemloft.net>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
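[Editor's note: a rough conceptual sketch of the sharing idea, not the patch itself; find_shared_pte() and the omitted locking are hypothetical simplifications. The point is that a second mapper of the same segment reuses an existing page-table page rather than allocating its own:]

	/* Illustrative sketch only: reuse an already-populated page-table
	 * page for a shared hugetlb segment instead of allocating a
	 * private copy. find_shared_pte() is a hypothetical helper. */
	pte_t *huge_pte_share(struct mm_struct *mm, struct vm_area_struct *vma,
			      unsigned long addr)
	{
		pte_t *shared = find_shared_pte(vma, addr);	/* hypothetical */

		if (shared) {
			/* take a reference: this pte page is now shared */
			get_page(virt_to_page(shared));
			return shared;
		}
		/* nobody to share with yet: allocate privately as before */
		return huge_pte_alloc(mm, addr);
	}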
2006-12-06  sh: sh775x/titan fixes for irq header changes.  Jamie Lenehan  (1 file, -0/+5)
The following moves the creation of IPR interrupts into setup-7750.c and updates a few other things to make it all work after the "Drop CPU subtype IRQ headers" commit. It boots and runs fine on my titan board.

- adds an ipr_idx to the ipr_data and uses a function in the subtype code to calculate the address of the IPR registers

- adds a function to enable individual interrupt mode for externals in the subtype code, and calls that from the titan board code instead of doing it directly

- changes the shift in the ipr_data to be the actual number of bits to shift, instead of the number / 4, making it easier to match against the manual

Signed-off-by: Jamie Lenehan <lenehan@twibble.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
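[Editor's note: a sketch of what a reworked table entry looks like after this change; the exact field names and layout are an assumption based on the description above, not a quote of the patch:]

	/* Sketch only (field names approximate): ipr_idx selects the IPR
	 * register through a subtype-provided callback, and shift is now
	 * the real bit offset within that register, not offset / 4. */
	struct ipr_data {
		unsigned int irq;	/* IRQ number */
		int ipr_idx;		/* index handed to the subtype's
					 * IPR-address callback */
		int shift;		/* bit position within the register */
		int priority;		/* priority value to program */
	};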
2006-12-06  sh: stacktrace/lockdep/irqflags tracing support.  Paul Mundt  (1 file, -0/+3)
Wire up all of the essentials for lockdep.. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2006-12-06  sh: Get the PGD right in oops case with 64-bit PTEs.  Paul Mundt  (1 file, -4/+2)
Previously this was using a static pgd shift in the reporting code; simply flip this to PGDIR_SHIFT, which does the right thing depending on the varying PTE magnitudes on the SH-X2 MMU. While we're at it, and since it's been recently added, use get_TTB() for fetching the TTB, rather than the open coded instructions. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
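[Editor's note: a minimal sketch of the resulting pattern in the fault-reporting path, an assumed shape rather than the literal diff:]

	/* Sketch: walk from the TTB-resident pgd using PGDIR_SHIFT, which
	 * adapts to the PTE width, instead of a hardcoded shift value. */
	pgd_t *pgd = (pgd_t *)get_TTB();
	pgd_t entry = pgd[address >> PGDIR_SHIFT];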
2006-12-06  sh: Fixup various PAGE_SIZE == 4096 assumptions.  Paul Mundt  (5 files, -22/+19)
There were a number of places that made evil PAGE_SIZE == 4k assumptions that ended up breaking when trying to play with 8k and 64k page sizes; this fixes those up.

The most significant change is the way we load THREAD_SIZE. Previously this was done via:

	mov	#(THREAD_SIZE >> 8), reg
	shll8	reg

to avoid a memory access and allow the immediate load. With a 64k PAGE_SIZE, we're out of range for the immediate load size without resorting to special instructions available in later ISAs (movi20s and so on). The "workaround" for this is to bump up the shift to 10 and insert a shll2, which gives a bit more flexibility while still being much cheaper than a memory access.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
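[Editor's note: the resulting sequence, derived directly from the description above (shift of 10, with shll8 + shll2 scaling back up by 10 bits):]

	mov	#(THREAD_SIZE >> 10), reg
	shll8	reg
	shll2	reg

This keeps the immediate in range even for 64k pages, at the cost of a single extra shift, still far cheaper than a memory load.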
2006-12-06  sh: TLB miss fast-path optimizations.  Stuart Menefy  (2 files, -86/+1)
Handle simple TLB miss faults which can be resolved completely from the page table in assembler. Signed-off-by: Stuart Menefy <stuart.menefy@st.com> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2006-12-06  sh: pmd rework.  Stuart Menefy  (2 files, -17/+49)
Remove extra bits from the pmd structure and store a kernel logical address rather than a physical address. This allows it to be directly dereferenced. Another piece of weirdness inherited from x86. Signed-off-by: Stuart Menefy <stuart.menefy@st.com> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
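[Editor's note: the practical difference, sketched; not the literal diff:]

	/* Before: the pmd held a physical address, needing a conversion
	 * on every dereference. */
	pte_t *pte = (pte_t *)phys_to_virt(pmd_val(pmd) & PAGE_MASK);

	/* After: the pmd holds a kernel logical address, directly usable. */
	pte_t *pte = (pte_t *)(pmd_val(pmd) & PAGE_MASK);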
2006-12-06  sh: Use MMU.TTB register as pointer to current pgd.  Stuart Menefy  (1 file, -10/+8)
Add TTB accessor functions and give it a sensible default value. We will use this later for optimizing the fault path. Signed-off-by: Stuart Menefy <stuart.menefy@st.com> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
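[Editor's note: a sketch of what such accessors look like. TTB is a memory-mapped MMU register on SH-4; the register address shown is an assumption here, as are the exact macro forms:]

	#define MMU_TTB		0xff000008	/* assumed SH-4 TTB address */

	/* Sketch of the accessor pair: a plain control-register
	 * read/write is all that's needed. */
	#define get_TTB()	ctrl_inl(MMU_TTB)
	#define set_TTB(pgd)	ctrl_outl((unsigned long)(pgd), MMU_TTB)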
2006-12-06  sh: Set up correct siginfo structures for page faults.  Stuart Menefy  (1 file, -9/+17)
Remove the previous saving of fault codes into the thread_struct, as they were never used and appear to have been inherited from x86. Signed-off-by: Stuart Menefy <stuart.menefy@st.com> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
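[Editor's note: the standard kernel idiom of that era for delivering a properly filled siginfo from the fault handler, sketched; not the literal diff:]

	struct siginfo info;

	info.si_signo = SIGSEGV;
	info.si_errno = 0;
	/* SEGV_MAPERR: no mapping at all; SEGV_ACCERR: permission fault */
	info.si_code  = si_code;
	info.si_addr  = (void __user *)address;
	force_sig_info(SIGSEGV, &info, tsk);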
2006-12-06  sh: p3map_sem sem2mutex conversion.  Paul Mundt  (2 files, -26/+11)
Simple sem2mutex conversion for the p3map semaphores. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
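[Editor's note: the conversion follows the usual sem2mutex pattern, roughly as below; NR_P3_MAPS is a placeholder for whatever bound the array actually uses:]

	/* Before: counting semaphore used purely as a lock. */
	static struct semaphore p3map_sem[NR_P3_MAPS];
	down(&p3map_sem[i]);
	/* ... critical section ... */
	up(&p3map_sem[i]);

	/* After: a proper mutex, with mutex_init() at setup time. */
	static struct mutex p3map_mutex[NR_P3_MAPS];
	mutex_lock(&p3map_mutex[i]);
	/* ... critical section ... */
	mutex_unlock(&p3map_mutex[i]);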
2006-12-06  sh: Preliminary support for SH-X2 MMU.  Paul Mundt  (4 files, -17/+52)
This adds some preliminary support for the SH-X2 MMU, used by newer SH-4A parts (particularly SH7785). This MMU implements a 'compat' mode with SH-X MMUs and an 'extended' mode for SH-X2 extended features. Extended features include additional page sizes (8kB, 4MB, 64MB), as well as the addition of page execute permissions. The extended mode attributes are placed in a second data array, which requires us to switch to 64-bit PTEs when in X2 mode. With the addition of the exec perms, we also overhaul the mmap prots somewhat, now that it's possible to handle them more intelligently. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2006-12-06  sh: Hook SH7785 in to the build system.  Paul Mundt  (1 file, -0/+5)
Simple 7785 placeholders to start hooking up other bits of code. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2006-12-06  sh: Add support for SH7206 and SH7619 CPU subtypes.  Yoshinori Sato  (2 files, -33/+53)
This implements initial support for the SH7206 (SH-2A) and SH7619 (SH-2) MMU-less CPUs. Signed-off-by: Yoshinori Sato <ysato@users.sourceforge.jp> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2006-10-10  sh: Zero-out coherent buffer in consistent_alloc().  Paul Mundt  (1 file, -0/+1)
Be sure to zero out the buffer, this was causing occasional problems under heavier PCI tests. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2006-10-03  sh: build fixes for defconfigs.  Paul Mundt  (1 file, -1/+1)
Get all of the defconfigs building again. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2006-09-29  [PATCH] pidspace: is_init()  Sukadev Bhattiprolu  (1 file, -1/+1)
This is an updated version of Eric Biederman's is_init() patch (http://lkml.org/lkml/2006/2/6/280). It applies cleanly to 2.6.18-rc3 and replaces a few more instances of ->pid == 1 with is_init(). Further, is_init() checks the pid directly and thus removes the dependency on Eric's other patches for now.

Eric's original description:

There are a lot of places in the kernel where we test for init because we give it special properties. Most significantly, init must not die. This results in code all over the kernel that tests ->pid == 1. Introduce is_init() to capture this case. With multiple pid spaces, in all of the affected cases we are looking for only the first process on the system, not some other process that happens to have pid == 1.

Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: Sukadev Bhattiprolu <sukadev@us.ibm.com>
Cc: Dave Hansen <haveblue@us.ibm.com>
Cc: Serge Hallyn <serue@us.ibm.com>
Cc: Cedric Le Goater <clg@fr.ibm.com>
Cc: <lxc-devel@lists.sourceforge.net>
Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
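[Editor's note: per the description, the helper itself is tiny and checks the pid directly, along the lines of:]

	/* sched.h: true only for the first process on the system. */
	static inline int is_init(struct task_struct *tsk)
	{
		return tsk->pid == 1;
	}

Typical call-site conversion: "if (current->pid == 1)" becomes "if (is_init(current))".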
2006-09-29  [PATCH] make PROT_WRITE imply PROT_READ  Jason Baron  (1 file, -1/+1)
Make PROT_WRITE imply PROT_READ for a number of architectures which don't support write-only mappings in hardware. While looking at this, I noticed that some architectures which do not support write-only mappings already take the exact same approach. For example, in arch/alpha/mm/fault.c:

	if (cause < 0) {
		if (!(vma->vm_flags & VM_EXEC))
			goto bad_area;
	} else if (!cause) {
		/* Allow reads even for write-only mappings */
		if (!(vma->vm_flags & (VM_READ | VM_WRITE)))
			goto bad_area;
	} else {
		if (!(vma->vm_flags & VM_WRITE))
			goto bad_area;
	}

Thus, this patch brings the other architectures which do not support write-only mappings in line and consistent with the rest. I've verified the patch on ia64, x86_64 and x86.

Additional discussion: several architectures, including x86, cannot support write-only mappings. The pte for x86 reserves a single bit for protection, and its two states are read-only or read/write; thus, write-only is not supported in hardware. Currently, if I mmap a page write-only, the first read attempt on that page creates a page fault and will SEGV. That check is enforced in arch/blah/mm/fault.c. However, if I first write to that page, it will fault in and the pte will be set to read/write, so any subsequent reads to the page will succeed. It is this inconsistency in behavior that this patch is attempting to address. Furthermore, if the page is swapped out and then brought back, the first read will also cause a SEGV. Thus, any arbitrary read on a page can potentially result in a SEGV. According to the SUSv3 spec, "if the application requests only PROT_WRITE, the implementation may also allow read access." Also, as mentioned, some architectures, such as alpha (shown above), already take the approach that I am suggesting.

The counter-argument raised by Arjan is that the kernel is enforcing the write-only mapping as best it can given the hardware limitations. This is true; however, Alan Cox and I would argue that the inconsistency in behavior, namely that applications sometimes work and sometimes fail, is highly undesirable. If you read through the thread, I think people came to an agreement on the last patch I posted, as nobody has objected to it...

Signed-off-by: Jason Baron <jbaron@redhat.com>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Hugh Dickins <hugh@veritas.com>
Cc: Roman Zippel <zippel@linux-m68k.org>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Andi Kleen <ak@muc.de>
Acked-by: Alan Cox <alan@lxorguk.ukuu.org.uk>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Acked-by: Paul Mundt <lethal@linux-sh.org>
Cc: Kazumoto Kojima <kkojima@rr.iij4u.or.jp>
Cc: Ian Molton <spyro@f2s.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-09-27  sh: Fix occasional flush_cache_4096() stack corruption.  Paul Mundt  (1 file, -11/+9)
Disable IRQs in flush_cache_4096() during the cache purge. Under certain workloads we would get an IRQ in the middle of a purge operation, and the cachelines would remain in an inconsistent state, leading to occasional stack corruption. Signed-off-by: Takeo Takahashi <takahashi.takeo@renesas.com> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
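[Editor's note: the shape of the fix, sketched; the purge loop internals are elided:]

	unsigned long flags;

	local_irq_save(flags);
	/* walk and purge the 4kB worth of cachelines atomically, so an
	 * IRQ can neither observe nor disturb a half-purged set of lines */
	/* ... purge loop ... */
	local_irq_restore(flags);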
2006-09-27  sh: Initial vsyscall page support.  Paul Mundt  (3 files, -8/+26)
This implements initial support for the vsyscall page on SH. At the moment we leave it configurable, as we have to support nommu from the same code base. We hook it up for the signal trampoline return at present, with more to be added later, once uClibc catches up. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2006-09-27  sh: Selective flush_cache_mm() flushing.  Paul Mundt  (1 file, -90/+130)
flush_cache_mm() wraps in to flush_cache_all(), which is rather excessive given that the number of PTEs within the specified context is generally quite low. Optimize to walk the mm's VMA list and selectively flush the VMA ranges from the dcache. Invalidate the icache only if a VMA sets VM_EXEC. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
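[Editor's note: roughly the new structure, sketched with the era's VMA linked list; the flush helper names here are assumptions, not the patch's actual functions:]

	struct vm_area_struct *vma;

	for (vma = mm->mmap; vma; vma = vma->vm_next) {
		/* dcache: only the range this VMA actually covers */
		__flush_dcache_range(vma->vm_start, vma->vm_end); /* assumed name */

		/* icache: only if something executable lives here */
		if (vma->vm_flags & VM_EXEC)
			flush_icache_all();	/* assumed granularity */
	}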
2006-09-27  sh: Enable /proc/kcore support.  Paul Mundt  (1 file, -2/+9)
This was previously unimplemented.. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2006-09-27  sh: Add support for cacheline poking through debugfs.  Paul Mundt  (2 files, -0/+151)
A simple debugging aid for easier visibility of the respective cachelines. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
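[Editor's note: the usual debugfs plumbing for this kind of aid looks roughly as follows; the file name and the seq_file callback contents are assumptions, not quotes of the patch:]

	#include <linux/debugfs.h>
	#include <linux/seq_file.h>

	static int cache_debugfs_show(struct seq_file *m, void *v)
	{
		/* dump per-way/per-set tag and dirty state here */
		return 0;
	}

	static int cache_debugfs_open(struct inode *inode, struct file *file)
	{
		return single_open(file, cache_debugfs_show, NULL);
	}

	static const struct file_operations cache_debugfs_fops = {
		.open		= cache_debugfs_open,
		.read		= seq_read,
		.llseek		= seq_lseek,
		.release	= single_release,
	};

	/* registration, e.g. from an initcall */
	debugfs_create_file("cache", S_IRUSR, NULL, NULL, &cache_debugfs_fops);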
2006-09-27  sh: Add support for SH7706/SH7710/SH7343 CPUs.  Paul Mundt  (1 file, -4/+26)
This adds support for the aforementioned CPU subtypes, and cleans up some build issues encountered as a result. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2006-09-27  sh: __addr_ok() and other misc nommu fixups.  Yoshinori Sato  (2 files, -3/+3)
A few more outstanding nommu fixups.. Signed-off-by: Yoshinori Sato <ysato@users.sourceforge.jp> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2006-09-27  sh: Various nommu fixes.  Yoshinori Sato  (2 files, -8/+14)
This fixes up some of the various outstanding nommu bugs on SH. Signed-off-by: Yoshinori Sato <ysato@users.sourceforge.jp> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2006-09-27  sh: Make PAGE_OFFSET configurable.  Paul Mundt  (1 file, -0/+31)
nommu needs to be able to shift PAGE_OFFSET, so we switch it to a non-user-visible CONFIG_PAGE_OFFSET and use that in the few places where it matters. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
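[Editor's note: the user-visible effect is just a level of indirection in the header, along these lines; the exact form is an assumption:]

	/* asm/page.h sketch: the value now comes from Kconfig, so nommu
	 * configurations can pick 0 while MMU parts keep 0x80000000. */
	#define PAGE_OFFSET	((unsigned long)CONFIG_PAGE_OFFSET)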
2006-09-27  sh: More cosmetic cleanups and trivial fixes.  Paul Mundt  (3 files, -22/+14)
Nothing exciting here, just trivial fixes.. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2006-09-27  sh: Inhibit mapping PCI apertures through page tables.  Paul Mundt  (1 file, -1/+16)
Inhibit mapping through page tables in __ioremap() for PCI memory apertures on SH7751 and SH7780-style PCI controllers, as translation is not possible for these areas. For other users that map a small window in P1/P2 space, ioremap() traps that already, and it should never make it to __ioremap(). Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2006-09-27  sh: SE73180 updates for IRQ changes.  Paul Mundt  (1 file, -1/+1)
SE73180 can use the generic support, we just need to wire up the IRQ demuxing. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2006-09-27  sh: Fix split ptlock for user mappings in __do_page_fault().  Paul Mundt  (1 file, -3/+4)
There was a bug that got introduced when the split ptlock changes went in, where mm could be uninitialized for user mappings; this fixes it up.. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2006-09-27  sh: ioremap() overhaul.  Paul Mundt  (2 files, -8/+144)
ioremap() overhaul. Add support for transparent PMB mapping, get rid of p3_ioremap(), etc. Also drop ioremap() and iounmap() routines from the machvec, as everyone can use the generic ioremap() API instead. For PCI memory apertures and other special cases, use the pci_iomap() API, as boards are already required to get the mapping right there. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2006-09-27  sh: page table alloc cleanups and page fault optimizations.  Paul Mundt  (5 files, -164/+187)
Cleanup of page table allocators, using generic folded PMD and PUD helpers. TLB flushing operations are moved to a more sensible spot. The page fault handler is also optimized slightly, we no longer waste cycles on IRQ disabling for flushing of the page from the ITLB, since we're already under CLI protection by the initial exception handler. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
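[Editor's note: with the folded helpers, a full walk is written four-level style even though SH collapses the middle levels, e.g.:]

	/* Sketch of the generic folded walk: pud_offset()/pmd_offset()
	 * compile away on two-level SH, but the code stays portable. */
	pgd_t *pgd = pgd_offset(mm, addr);
	pud_t *pud = pud_offset(pgd, addr);
	pmd_t *pmd = pmd_offset(pud, addr);
	pte_t *pte = pte_offset_kernel(pmd, addr);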
2006-09-27  sh: SH-4A Privileged Space Mapping Buffer (PMB) support.  Paul Mundt  (2 files, -1/+271)
Add support for 32-bit physical addressing through the SH-4A Privileged Space Mapping Buffer (PMB). Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2006-09-27  sh: Add control register barriers.  Paul Mundt  (2 files, -2/+8)
Currently when making changes to control registers, we typically need some time for changes to take effect (8 nops, generally). However, for sh4a we simply need to do an icbi.. This is a simple patch for implementing a general purpose ctrl_barrier() which functions as a control register write barrier. There's some additional documentation in the patch itself, but it's pretty self explanatory. There were also some places where we were not doing the barrier, which didn't seem to have any adverse effects on legacy parts, but certainly did on sh4a. It's safer to have the barrier in place for legacy parts as well in these cases, though this does make flush_tlb_all() more expensive (by an order of 8 nops). We can ifdef around the flush_tlb_all() case for now if it's clear that all legacy parts won't have a problem with this. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
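[Editor's note: a sketch of the barrier definition as described above, icbi on sh4a and eight nops elsewhere; the exact form, including __icbi(), is an assumption:]

	#ifdef CONFIG_CPU_SH4A
	/* a single icbi suffices to serialize against the control write */
	#define ctrl_barrier()	__icbi()
	#else
	/* legacy parts: give the write time to take effect */
	#define ctrl_barrier()	__asm__ __volatile__ ("nop;nop;nop;nop;nop;nop;nop;nop")
	#endif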
2006-09-27  sh: Add flag for MMU PTEA capability.  Paul Mundt  (1 file, -8/+4)
Add CPU_HAS_PTEA, refactor some of the cpu flag settings. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2006-09-27  sh: Fix fatal oops in copy_user_page() on sh4a (SH7780).  Paul Mundt  (1 file, -10/+14)
We had a pretty interesting oops happening, where copy_user_page() was down()'ing p3map_sem[] with a bogus offset (particularly, an offset that hadn't been initialized with sema_init(), due to the mismatch between cpu_data->dcache.n_aliases and what was assumed based off of the old CACHE_ALIAS value). Luckily, spinlock debugging caught this for us, and so we drop the old hardcoded CACHE_ALIAS for sh4 completely and rely on the run-time probed cpu_data->dcache.alias_mask. This in turn gets the p3map_sem[] index right, and everything works again. While we're at it, also convert to 4-level page tables.. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2006-09-27  sh: Support for SH7770/SH7780 CPU subtypes.  Paul Mundt  (1 file, -0/+4)
Merge support for SH7770 and SH7780 SH-4A subtypes. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2006-09-27  sh: Optimized cache handling for SH-4/SH-4A caches.  Richard Curnow  (2 files, -185/+431)
This reworks some of the SH-4 cache handling code to more easily accommodate newer-style caches (particularly for the larger-than-direct-mapped case), as well as optimizing some of the old code. Signed-off-by: Richard Curnow <richard.curnow@st.com> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2006-09-27  sh: Support for SH-4A memory barriers.  Paul Mundt  (1 file, -0/+5)
SH-4A supports 'synco' as a barrier, sprinkle it around the cache ops as necessary.. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
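[Editor's note: on SH-4A this amounts to definitions along the following lines; the exact macro set is an assumption:]

	/* synco orders all preceding memory accesses before continuing */
	#define mb()	__asm__ __volatile__ ("synco" : : : "memory")
	#define rmb()	mb()
	#define wmb()	mb()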
2006-09-27  sh: hugetlb updates.  Paul Mundt  (1 file, -36/+16)
For some of the larger sizes we permitted spanning pages across several PTEs, but this turned out to not be generally useful. This reverts the sh hugetlbpage interface to something more sensible using huge pages at single PTE granularity. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2006-09-27  sh: flush_cache_range() cleanup and optimizations.  Paul Mundt  (1 file, -26/+46)
flush_cache_range() wasn't page-aligning the end of the range; we can't assume that it will always be page aligned, and we ended up getting unaligned faults in some rare call paths. Additionally, we add a small optimization to just purge the dcache entirely if the range is large enough that the page table walking would take longer. We use an arbitrary value of 64 pages for the large range size, as per sh64. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
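[Editor's note: the two changes amount to something like the sketch below; the 64-page threshold is the arbitrary value mentioned above, and flush_dcache_all() is an assumed helper name:]

	start &= PAGE_MASK;
	end = PAGE_ALIGN(end);		/* the previously missing alignment */

	/* For big ranges, a full dcache purge beats walking page tables. */
	if (((end - start) >> PAGE_SHIFT) >= 64) {
		flush_dcache_all();	/* assumed helper name */
		return;
	}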
2006-09-26  [PATCH] Standardize pxx_page macros  Dave McCracken  (1 file, -1/+1)
One of the changes necessary for shared page tables is to standardize the pxx_page macros. pte_page and pmd_page have always returned the struct page associated with their entry, while pte_page_kernel and pmd_page_kernel have returned the kernel virtual address. pud_page and pgd_page, on the other hand, return the kernel virtual address. Shared page tables needs pud_page and pgd_page to return the actual page structures. There are very few actual users of these functions, so it is simple to standardize their usage. Since this is basic cleanup, I am submitting these changes as a standalone patch. Per Hugh Dickins' comments about it, I am also changing the pxx_page_kernel macros to pxx_page_vaddr to clarify their meaning. Signed-off-by: Dave McCracken <dmccr@us.ibm.com> Cc: Hugh Dickins <hugh@veritas.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
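[Editor's note: the rename in practice; the macro body shown is illustrative, not a quote of the patch:]

	/* before: named confusingly like the struct-page accessors */
	#define pmd_page_kernel(pmd) \
		((unsigned long)__va(pmd_val(pmd) & PAGE_MASK))

	/* after: same body, a name that says "kernel virtual address" */
	#define pmd_page_vaddr(pmd) \
		((unsigned long)__va(pmd_val(pmd) & PAGE_MASK))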
2006-06-30  Remove obsolete #include <linux/config.h>  Jörn Engel  (6 files, -6/+0)
Signed-off-by: Jörn Engel <joern@wohnheim.fh-wedel.de> Signed-off-by: Adrian Bunk <bunk@stusta.de>
2006-03-22  [PATCH] hugepage: is_aligned_hugepage_range() cleanup  David Gibson  (1 file, -12/+0)
Quite a long time back, prepare_hugepage_range() replaced is_aligned_hugepage_range() as the callback from mm/mmap.c to arch code to verify if an address range is suitable for a hugepage mapping. is_aligned_hugepage_range() stuck around, but only to implement prepare_hugepage_range() on archs which didn't implement their own. Most archs (everything except ia64 and powerpc) used the same implementation of is_aligned_hugepage_range(). On powerpc, which implements its own prepare_hugepage_range(), the custom version was never used. In addition, "is_aligned_hugepage_range()" was a bad name, because it suggests it returns true iff the given range is a good hugepage range, whereas in fact it returns 0-or-error (so the sense is reversed). This patch cleans up by abolishing is_aligned_hugepage_range(). Instead prepare_hugepage_range() is defined directly. Most archs use the default version, which simply checks the given region is aligned to the size of a hugepage. ia64 and powerpc define custom versions. The ia64 one simply checks that the range is in the correct address space region in addition to being suitably aligned. The powerpc version (just as previously) checks for suitable addresses, and if necessary performs low-level MMU frobbing to set up new areas for use by hugepages. No libhugetlbfs testsuite regressions on ppc64 (POWER5 LPAR). Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Zhang Yanmin <yanmin.zhang@intel.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Cc: William Lee Irwin III <wli@holomorphy.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
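[Editor's note: the default version described above is just an alignment check, roughly:]

	/* Generic fallback: reject anything not hugepage-aligned.
	 * Note the 0-or-error return convention discussed above. */
	int prepare_hugepage_range(unsigned long addr, unsigned long len)
	{
		if (len & ~HPAGE_MASK)
			return -EINVAL;
		if (addr & ~HPAGE_MASK)
			return -EINVAL;
		return 0;
	}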
2006-03-22  [PATCH] remove set_page_count() outside mm/  Nick Piggin  (1 file, -2/+2)
set_page_count usage outside mm/ is limited to setting the refcount to 1. Remove set_page_count from outside mm/, and replace those users with init_page_count() and set_page_refcounted(). This allows more debug checking, and tighter control on how code is allowed to play around with page->_count. Signed-off-by: Nick Piggin <npiggin@suse.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
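[Editor's note: the call-site conversion is mechanical:]

	/* before: raw refcount manipulation */
	set_page_count(page, 1);

	/* after: intent-revealing helper, same effect */
	init_page_count(page);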
2006-03-22  [PATCH] mm: split highorder pages  Nick Piggin  (1 file, -2/+1)
Have an explicit mm call to split higher order pages into individual pages. Should help to avoid bugs and be more explicit about the code's intention. Signed-off-by: Nick Piggin <npiggin@suse.de> Cc: Russell King <rmk@arm.linux.org.uk> Cc: David Howells <dhowells@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mundt <lethal@linux-sh.org> Cc: "David S. Miller" <davem@davemloft.net> Cc: Chris Zankel <chris@zankel.net> Signed-off-by: Yoichi Yuasa <yoichi_yuasa@tripeaks.co.jp> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
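[Editor's note: typical use, sketched; after a high-order allocation whose pieces will be freed individually, split it explicitly instead of poking refcounts by hand:]

	struct page *page = alloc_pages(GFP_KERNEL, order);

	if (page) {
		/* each of the 2^order constituent pages now has its own
		 * refcount of 1 and may be freed independently */
		split_page(page, order);
	}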
2006-01-16  [PATCH] sh: Move CPU subtype configuration to its own Kconfig  Paul Mundt  (1 file, -0/+233)
Currently the CPU subtype options are cluttering up arch/sh/Kconfig somewhat. Given that, this moves all of that in to its own arch/sh/mm/Kconfig. Things like cache configuration are also moved to this new location. This also adds support for strict CPU tuning on newer cores, which requires the addition of as-option. Signed-off-by: Paul Mundt <lethal@linux-sh.org> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-01-16  [PATCH] sh: I/O routine cleanups and ioremap() overhaul  Paul Mundt  (1 file, -18/+81)
This introduces a few changes in the way that the I/O routines are defined on SH, specifically so that things like the iomap API properly wrap through the machvec for board-specific quirks. In addition to this, the old p3_ioremap() work is converted to a more generic __ioremap() that will map through the PMB if it's available, or fall back on page tables for everything else. An alpha-like IO_CONCAT is also added so we can start to clean up the board-specific io.h mess, which will be handled in board update patches.. Signed-off-by: Paul Mundt <lethal@linux-sh.org> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-11-07  [PATCH] sh: Use pfn_valid() for lazy dcache write-back on SH7705  Paul Mundt  (1 file, -7/+12)
SH7705 in extended cache mode has some left-over VALID_PAGE() cruft that it checks when doing lazy dcache write-back. This has been gone for some time (the last bits were in the discontig code, which should now also be gone -- this also fixes up a build error in the non-discontig case). pfn_valid() gives the desired behaviour, so we switch to that. Signed-off-by: Paul Mundt <lethal@linux-sh.org> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>