path: root/include/asm-generic
Age  Commit message  (Author, files changed, lines -removed/+added)
2006-07-14  [PATCH] hdrinstall: remove asm/atomic.h from user visibility  (David Woodhouse, 1 file, -1/+1)
This isn't suitable for userspace to see -- the kernel headers are not a random library of stuff for userspace; they're only there to define the kernel<->user ABI for system libraries and tools. Anything which _was_ abusing asm/atomic.h from userspace was probably broken anyway -- as it often didn't even give atomic operation. Signed-off-by: David Woodhouse <dwmw2@infradead.org> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-07-14  [PATCH] hdrinstall: remove asm/irq.h from user visibility  (David Woodhouse, 1 file, -1/+1)
Remove asm/irq.h from the exported headers -- there was never any good reason for it to have been listed. Signed-off-by: David Woodhouse <dwmw2@infradead.org> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-07-04  Merge master.kernel.org:/pub/scm/linux/kernel/git/davej/cpufreq  (Linus Torvalds, 1 file, -0/+2)
* master.kernel.org:/pub/scm/linux/kernel/git/davej/cpufreq:
  Move workqueue exports to where the functions are defined.
  [CPUFREQ] Misc cleanups in ondemand.
  [CPUFREQ] Make ondemand sampling per CPU and remove the mutex usage in sampling path.
  [CPUFREQ] Add queue_delayed_work_on() interface for workqueues.
  [CPUFREQ] Remove slowdown from ondemand sampling path.
2006-07-04  Merge git://git.infradead.org/hdrinstall-2.6  (Linus Torvalds, 2 files, -0/+14)
* git://git.infradead.org/hdrinstall-2.6:
  Remove export of include/linux/isdn/tpam.h
  Remove <linux/i2c-id.h> and <linux/i2c-algo-ite.h> from userspace export
  Restrict headers exported to userspace for SPARC and SPARC64
  Add empty Kbuild files for 'make headers_install' in remaining arches.
  Add Kbuild file for Alpha 'make headers_install'
  Add Kbuild file for SPARC 'make headers_install'
  Add Kbuild file for IA64 'make headers_install'
  Add Kbuild file for S390 'make headers_install'
  Add Kbuild file for i386 'make headers_install'
  Add Kbuild file for x86_64 'make headers_install'
  Add Kbuild file for PowerPC 'make headers_install'
  Add generic Kbuild files for 'make headers_install'
  Basic implementation of 'make headers_check'
  Basic implementation of 'make headers_install'
2006-07-03  [PATCH] lockdep: better lock debugging  (Ingo Molnar, 1 file, -10/+5)
Generic lock debugging:
- generalized lock debugging framework. For example, a bug in one lock subsystem turns off debugging in all lock subsystems.
- got rid of the caller address passing (__IP__/__IP_DECL__/etc.) from the mutex/rtmutex debugging code: it caused way too much prototype hackery, and lockdep will give the same information anyway.
- ability to do silent tests
- check lock freeing in vfree too.
- more finegrained debugging options, to allow distributions to turn off more expensive debugging features.
There's no separate 'held mutexes' list anymore - but there's a 'held locks' stack within lockdep, which unifies deadlock detection across all lock classes. (this is independent of the lockdep validation stuff - lockdep first checks whether we are holding a lock already)
Here are the current debugging options:
CONFIG_DEBUG_MUTEXES=y
CONFIG_DEBUG_LOCK_ALLOC=y
which do:
 config DEBUG_MUTEXES
         bool "Mutex debugging, basic checks"
 config DEBUG_LOCK_ALLOC
         bool "Detect incorrect freeing of live mutexes"
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-07-03  [PATCH] lockdep: add per_cpu_offset()  (Ingo Molnar, 1 file, -0/+2)
Add the per_cpu_offset() generic method. (used by the lock validator) Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Arjan van de Ven <arjan@linux.intel.com> Cc: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-07-01  Merge branch 'audit.b22' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/audit-current  (Linus Torvalds, 2 files, -0/+32)
* 'audit.b22' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/audit-current:
  [PATCH] audit syscall classes
  [PATCH] audit: support for object context filters
  [PATCH] audit: rename AUDIT_SE_* constants
  [PATCH] add rule filterkey
2006-07-01  [PATCH] add __[start|end]_rodata sections to asm-generic/sections.h  (Heiko Carstens, 1 file, -0/+1)
Add __start_rodata and __end_rodata to sections.h to avoid extern declarations. Needed by s390 code (see following patch). [akpm@osdl.org: update architectures] Cc: Arjan van de Ven <arjan@infradead.org> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Andi Kleen <ak@muc.de> Acked-by: Kyle McMartin <kyle@mcmartin.ca> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-07-01  [PATCH] audit syscall classes  (Al Viro, 2 files, -0/+32)
Allow to tie upper bits of syscall bitmap in audit rules to kernel-defined sets of syscalls. Infrastructure, a couple of classes (with 32bit counterparts for biarch targets) and actual tie-in on i386, amd64 and ia64. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2006-06-30  [CPUFREQ] Remove slowdown from ondemand sampling path.  (Venkatesh Pallipadi, 1 file, -0/+2)
Remove slowdown from ondemand sampling path. This reduces the code path length in dbs_check_cpu() by half. slowdown was not used by ondemand by default. If there are any user level tools that were using this tunable, they may report error now. Signed-off-by: Alexey Starikovskiy <alexey.y.starikovskiy@intel.com> Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com> Signed-off-by: Dave Jones <davej@redhat.com>
2006-06-28  [PATCH] Add EXPORT_UNUSED_SYMBOL and EXPORT_UNUSED_SYMBOL_GPL  (Arjan van de Ven, 1 file, -0/+28)
Temporarily add EXPORT_UNUSED_SYMBOL and EXPORT_UNUSED_SYMBOL_GPL. These will be used as a transition measure for symbols that aren't used in the kernel and are on the way out. When a module uses such a symbol, a warning is printk'd at modprobe time. The main reason for removing unused exports is size: each export takes roughly between 100 and 150 bytes of kernel space in the binary. This patch gives users the option to immediately get this size gain via a config option. Signed-off-by: Arjan van de Ven <arjan@linux.intel.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-06-27  [PATCH] pi-futex: introduce WARN_ON_SMP  (Ingo Molnar, 1 file, -0/+6)
Introduce a new WARN_ON variant: WARN_ON_SMP(cond). Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-06-25  [PATCH] introduce WARN_ON_ONCE(cond)  (Ingo Molnar, 1 file, -0/+13)
Add WARN_ON_ONCE(cond) to print once-per-bootup messages. [rostedt@goodmis.org: improve code generation] Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Arjan van de Ven <arjan@linux.intel.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
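The once-per-boot behaviour comes down to a static flag that is tested before printing and set afterwards. Below is a minimal user-space analogue of that pattern; the WARN_ONCE_EXAMPLE name is invented here, and the real kernel macro additionally reports file/line through the WARN machinery.

```c
#include <stdio.h>

/* Illustrative analogue of a warn-once macro: the static flag ensures the
 * message is printed at most once, no matter how often the condition fires. */
#define WARN_ONCE_EXAMPLE(cond) ({                                          \
        static int __warned;                                                \
        int __ret = !!(cond);                                               \
        if (__ret && !__warned) {                                           \
                __warned = 1;                                               \
                fprintf(stderr, "warning at %s:%d\n", __FILE__, __LINE__);  \
        }                                                                   \
        __ret;                                                              \
})

int main(void)
{
        int i;

        for (i = 0; i < 3; i++)
                (void)WARN_ONCE_EXAMPLE(i >= 0); /* condition fires 3 times, prints once */
        return 0;
}
```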
2006-06-25  [PATCH] Define __raw_get_cpu_var and use it  (Paul Mackerras, 1 file, -0/+2)
There are several instances of per_cpu(foo, raw_smp_processor_id()), which is semantically equivalent to __get_cpu_var(foo) but without the warning that smp_processor_id() can give if CONFIG_DEBUG_PREEMPT is enabled. For those architectures with optimized per-cpu implementations, namely ia64, powerpc, s390, sparc64 and x86_64, per_cpu() turns into more and slower code than __get_cpu_var(), so it would be preferable to use __get_cpu_var on those platforms. This defines a __raw_get_cpu_var(x) macro which turns into per_cpu(x, raw_smp_processor_id()) on architectures that use the generic per-cpu implementation, and turns into __get_cpu_var(x) on the architectures that have an optimized per-cpu implementation. Signed-off-by: Paul Mackerras <paulus@samba.org> Acked-by: David S. Miller <davem@davemloft.net> Acked-by: Ingo Molnar <mingo@elte.hu> Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-06-24  Add some basic resume trace facilities  (Linus Torvalds, 1 file, -3/+4)
Considering that there isn't a lot of hw we can depend on during resume, this is about as good as it gets. This is x86-only for now, although the basic concept (and most of the code) will certainly work on almost any platform. Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-06-23  [PATCH] squash duplicate page_to_pfn and pfn_to_page  (Andy Whitcroft, 1 file, -12/+15)
We have architectures where page_to_pfn and pfn_to_page contribute enough to overall image size that they wish to push them out of line. However, in the process we have grown a second copy of the implementation of each of these routines for each memory model. Share the implementation, exposing it either inline or out-of-line as required. Signed-off-by: Andy Whitcroft <apw@shadowen.org> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-06-20  Merge git://git.infradead.org/hdrcleanup-2.6  (Linus Torvalds, 6 files, -5/+7)
* git://git.infradead.org/hdrcleanup-2.6: (63 commits)
  [S390] __FD_foo definitions.
  Switch to __s32 types in joystick.h instead of C99 types for consistency.
  Add <sys/types.h> to headers included for userspace in <linux/input.h>
  Move inclusion of <linux/compat.h> out of user scope in asm-x86_64/mtrr.h
  Remove struct fddi_statistics from user view in <linux/if_fddi.h>
  Move user-visible parts of drivers/s390/crypto/z90crypt.h to include/asm-s390
  Revert include/media changes: Mauro says those ioctls are only used in-kernel(!)
  Include <linux/types.h> and use __uXX types in <linux/cramfs_fs.h>
  Use __uXX types in <linux/i2o_dev.h>, include <linux/ioctl.h> too
  Remove private struct dx_hash_info from public view in <linux/ext3_fs.h>
  Include <linux/types.h> and use __uXX types in <linux/affs_hardblocks.h>
  Use __uXX types in <linux/divert.h> for struct divert_blk et al.
  Use __u32 for elf_addr_t in <asm-powerpc/elf.h>, not u32. It's user-visible.
  Remove PPP_FCS from user view in <linux/ppp_defs.h>, remove __P mess entirely
  Use __uXX types in user-visible structures in <linux/nbd.h>
  Don't use 'u32' in user-visible struct ip_conntrack_old_tuple.
  Use __uXX types for S390 DASD volume label definitions which are user-visible
  S390 BIODASDREADCMB ioctl should use __u64 not u64 type.
  Remove unneeded inclusion of <linux/time.h> from <linux/ufs_fs.h>
  Fix private integer types used in V4L2 ioctls.
  ...
Manually resolve conflict in include/linux/mtd/physmap.h
2006-06-18  Add generic Kbuild files for 'make headers_install'  (David Woodhouse, 2 files, -0/+14)
This adds the Kbuild files listing the files which are to be installed by the 'headers_install' make target, in generic directories. Signed-off-by: David Woodhouse <dwmw2@infradead.org>
2006-06-01  [SPARC64]: Fix D-cache corruption in mremap  (David S. Miller, 1 file, -10/+1)
If we move a mapping from one virtual address to another, and this changes the virtual color of the mapping to those pages, we can see corrupt data due to D-cache aliasing. Check for and deal with this by overriding the move_pte() macro. Set things up so that other platforms can cleanly override the move_pte() macro too. Signed-off-by: David S. Miller <davem@davemloft.net>
2006-04-27  Add standard include guard to asm-generic/signal and use compiler.h  (David Woodhouse, 1 file, -0/+7)
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
2006-04-26  Don't include linux/config.h from anywhere else in include/  (David Woodhouse, 5 files, -5/+0)
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
2006-03-31  [PATCH] mutex: some cleanups  (Nicolas Pitre, 2 files, -30/+33)
Turn some macros into inline functions, adding proper type checking and making them more readable. Also a minor comment adjustment. Signed-off-by: Nicolas Pitre <nico@cam.org> Acked-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-03-31  [PATCH] make local_t signed  (Andrew Morton, 1 file, -3/+10)
local_t's were defined to be unsigned. This increases confusion because atomic_t's are signed. The patch goes through and changes all implementations to use signed longs throughout. Also, x86-64 was using 32-bit quantities for the value passed into local_add() and local_sub(). Fixed. All (actually, both) existing users have been audited. (Also s/__inline__/inline/ in x86_64/local.h) Cc: Andi Kleen <ak@muc.de> Cc: Benjamin LaHaise <bcrl@kvack.org> Cc: Kyle McMartin <kyle@parisc-linux.org> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-03-28  [PATCH] for_each_possible_cpu: fixes for generic part  (KAMEZAWA Hiroyuki, 1 file, -1/+1)
replaces for_each_cpu with for_each_possible_cpu(). Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-03-28  [PATCH] Decrapify asm-generic/local.h  (Kyle McMartin, 1 file, -71/+9)
Now that Christoph Lameter's atomic_long_t support is merged in mainline, might as well convert asm-generic/local.h to use it, so the same code can be used for both sizes of 32 and 64-bit unsigned longs. akpm sayeth: Q: Is there any particular reason why these routines weren't simply implemented with local_save/restore_flags, if they are only meant to guarantee atomicity to the local cpu? I'm sure on most platforms this would be more efficient than using an atomic... A: The whole _point_ of local_t is to avoid local_irq_disable(). It's designed to exploit the fact that many CPUs can do incs and decs in a way which is atomic wrt local interrupts, but not atomic wrt SMP. But this patch makes sense, because asm-generic/local.h is just a fallback implementation for architectures which either cannot perform these local-irq-atomic operations, or its maintainers haven't yet got around to implementing them. We need more work done on local_t in the 2.6.17 timeframe - they're defined as unsigned long, but some architectures implement them as signed long. Signed-off-by: Kyle McMartin <kyle@parisc-linux.org> Cc: Benjamin LaHaise <bcrl@kvack.org> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-03-27  [PATCH] lightweight robust futexes updates  (Ingo Molnar, 1 file, -1/+1)
- fix: initialize the robust list(s) to NULL in copy_process. - doc update - cleanup: rename _inuser to _inatomic - __user cleanups and other small cleanups Signed-off-by: Ingo Molnar <mingo@elte.hu> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Arjan van de Ven <arjan@infradead.org> Cc: Ulrich Drepper <drepper@redhat.com> Cc: Andi Kleen <ak@muc.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-03-27  [PATCH] lightweight robust futexes: arch defaults  (Ingo Molnar, 1 file, -0/+6)
This patchset provides a new (written from scratch) implementation of robust futexes, called "lightweight robust futexes". We believe this new implementation is faster and simpler than the vma-based robust futex solutions presented before, and we'd like this patchset to be adopted in the upstream kernel. This is version 1 of the patchset. Background ---------- What are robust futexes? To answer that, we first need to understand what futexes are: normal futexes are special types of locks that in the noncontended case can be acquired/released from userspace without having to enter the kernel. A futex is in essence a user-space address, e.g. a 32-bit lock variable field. If userspace notices contention (the lock is already owned and someone else wants to grab it too) then the lock is marked with a value that says "there's a waiter pending", and the sys_futex(FUTEX_WAIT) syscall is used to wait for the other guy to release it. The kernel creates a 'futex queue' internally, so that it can later on match up the waiter with the waker - without them having to know about each other. When the owner thread releases the futex, it notices (via the variable value) that there were waiter(s) pending, and does the sys_futex(FUTEX_WAKE) syscall to wake them up. Once all waiters have taken and released the lock, the futex is again back to 'uncontended' state, and there's no in-kernel state associated with it. The kernel completely forgets that there ever was a futex at that address. This method makes futexes very lightweight and scalable. "Robustness" is about dealing with crashes while holding a lock: if a process exits prematurely while holding a pthread_mutex_t lock that is also shared with some other process (e.g. yum segfaults while holding a pthread_mutex_t, or yum is kill -9-ed), then waiters for that lock need to be notified that the last owner of the lock exited in some irregular way. To solve such types of problems, "robust mutex" userspace APIs were created: pthread_mutex_lock() returns an error value if the owner exits prematurely - and the new owner can decide whether the data protected by the lock can be recovered safely. There is a big conceptual problem with futex based mutexes though: it is the kernel that destroys the owner task (e.g. due to a SEGFAULT), but the kernel cannot help with the cleanup: if there is no 'futex queue' (and in most cases there is none, futexes being fast lightweight locks) then the kernel has no information to clean up after the held lock! Userspace has no chance to clean up after the lock either - userspace is the one that crashes, so it has no opportunity to clean up. Catch-22. In practice, when e.g. yum is kill -9-ed (or segfaults), a system reboot is needed to release that futex based lock. This is one of the leading bugreports against yum. To solve this problem, 'Robust Futex' patches were created and presented on lkml: the one written by Todd Kneisel and David Singleton is the most advanced at the moment. These patches all tried to extend the futex abstraction by registering futex-based locks in the kernel - and thus give the kernel a chance to clean up. E.g. in David Singleton's robust-futex-6.patch, there are 3 new syscall variants to sys_futex(): FUTEX_REGISTER, FUTEX_DEREGISTER and FUTEX_RECOVER. The kernel attaches such robust futexes to vmas (via vma->vm_file->f_mapping->robust_head), and at do_exit() time, all vmas are searched to see whether they have a robust_head set. 
Lots of work went into the vma-based robust-futex patch, and recently it has improved significantly, but unfortunately it still has two fundamental problems left: - they have quite complex locking and race scenarios. The vma-based patches had been pending for years, but they are still not completely reliable. - they have to scan _every_ vma at sys_exit() time, per thread! The second disadvantage is a real killer: pthread_exit() takes around 1 microsecond on Linux, but with thousands (or tens of thousands) of vmas every pthread_exit() takes a millisecond or more, also totally destroying the CPU's L1 and L2 caches! This is very much noticeable even for normal process sys_exit_group() calls: the kernel has to do the vma scanning unconditionally! (this is because the kernel has no knowledge about how many robust futexes there are to be cleaned up, because a robust futex might have been registered in another task, and the futex variable might have been simply mmap()-ed into this process's address space). This huge overhead forced the creation of CONFIG_FUTEX_ROBUST, but worse than that: the overhead makes robust futexes impractical for any type of generic Linux distribution. So it became clear to us, something had to be done. Last week, when Thomas Gleixner tried to fix up the vma-based robust futex patch in the -rt tree, he found a handful of new races and we were talking about it and were analyzing the situation. At that point a fundamentally different solution occured to me. This patchset (written in the past couple of days) implements that new solution. Be warned though - the patchset does things we normally dont do in Linux, so some might find the approach disturbing. Parental advice recommended ;-) New approach to robust futexes ------------------------------ At the heart of this new approach there is a per-thread private list of robust locks that userspace is holding (maintained by glibc) - which userspace list is registered with the kernel via a new syscall [this registration happens at most once per thread lifetime]. At do_exit() time, the kernel checks this user-space list: are there any robust futex locks to be cleaned up? In the common case, at do_exit() time, there is no list registered, so the cost of robust futexes is just a simple current->robust_list != NULL comparison. If the thread has registered a list, then normally the list is empty. If the thread/process crashed or terminated in some incorrect way then the list might be non-empty: in this case the kernel carefully walks the list [not trusting it], and marks all locks that are owned by this thread with the FUTEX_OWNER_DEAD bit, and wakes up one waiter (if any). The list is guaranteed to be private and per-thread, so it's lockless. There is one race possible though: since adding to and removing from the list is done after the futex is acquired by glibc, there is a few instructions window for the thread (or process) to die there, leaving the futex hung. To protect against this possibility, userspace (glibc) also maintains a simple per-thread 'list_op_pending' field, to allow the kernel to clean up if the thread dies after acquiring the lock, but just before it could have added itself to the list. Glibc sets this list_op_pending field before it tries to acquire the futex, and clears it after the list-add (or list-remove) has finished. That's all that is needed - all the rest of robust-futex cleanup is done in userspace [just like with the previous patches]. 
Ulrich Drepper has implemented the necessary glibc support for this new mechanism, which fully enables robust mutexes. (Ulrich plans to commit these changes to glibc-HEAD later today.) Key differences of this userspace-list based approach, compared to the vma based method: - it's much, much faster: at thread exit time, there's no need to loop over every vma (!), which the VM-based method has to do. Only a very simple 'is the list empty' op is done. - no VM changes are needed - 'struct address_space' is left alone. - no registration of individual locks is needed: robust mutexes dont need any extra per-lock syscalls. Robust mutexes thus become a very lightweight primitive - so they dont force the application designer to do a hard choice between performance and robustness - robust mutexes are just as fast. - no per-lock kernel allocation happens. - no resource limits are needed. - no kernel-space recovery call (FUTEX_RECOVER) is needed. - the implementation and the locking is "obvious", and there are no interactions with the VM. Performance ----------- I have benchmarked the time needed for the kernel to process a list of 1 million (!) held locks, using the new method [on a 2GHz CPU]: - with FUTEX_WAIT set [contended mutex]: 130 msecs - without FUTEX_WAIT set [uncontended mutex]: 30 msecs I have also measured an approach where glibc does the lock notification [which it currently does for !pshared robust mutexes], and that took 256 msecs - clearly slower, due to the 1 million FUTEX_WAKE syscalls userspace had to do. (1 million held locks are unheard of - we expect at most a handful of locks to be held at a time. Nevertheless it's nice to know that this approach scales nicely.) Implementation details ---------------------- The patch adds two new syscalls: one to register the userspace list, and one to query the registered list pointer: asmlinkage long sys_set_robust_list(struct robust_list_head __user *head, size_t len); asmlinkage long sys_get_robust_list(int pid, struct robust_list_head __user **head_ptr, size_t __user *len_ptr); List registration is very fast: the pointer is simply stored in current->robust_list. [Note that in the future, if robust futexes become widespread, we could extend sys_clone() to register a robust-list head for new threads, without the need of another syscall.] So there is virtually zero overhead for tasks not using robust futexes, and even for robust futex users, there is only one extra syscall per thread lifetime, and the cleanup operation, if it happens, is fast and straightforward. The kernel doesnt have any internal distinction between robust and normal futexes. If a futex is found to be held at exit time, the kernel sets the highest bit of the futex word: #define FUTEX_OWNER_DIED 0x40000000 and wakes up the next futex waiter (if any). User-space does the rest of the cleanup. Otherwise, robust futexes are acquired by glibc by putting the TID into the futex field atomically. Waiters set the FUTEX_WAITERS bit: #define FUTEX_WAITERS 0x80000000 and the remaining bits are for the TID. Testing, architecture support ----------------------------- I've tested the new syscalls on x86 and x86_64, and have made sure the parsing of the userspace list is robust [ ;-) ] even if the list is deliberately corrupted. i386 and x86_64 syscalls are wired up at the moment, and Ulrich has tested the new glibc code (on x86_64 and i386), and it works for his robust-mutex testcases. All other architectures should build just fine too - but they wont have the new syscalls yet. 
Architectures need to implement the new futex_atomic_cmpxchg_inuser() inline function before writing up the syscalls (that function returns -ENOSYS right now). This patch: Add placeholder futex_atomic_cmpxchg_inuser() implementations to every architecture that supports futexes. It returns -ENOSYS. Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Arjan van de Ven <arjan@infradead.org> Acked-by: Ulrich Drepper <drepper@redhat.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
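The syscall pair described above can be exercised from user space. The sketch below registers an empty robust list and reads it back; it assumes a Linux libc that exposes SYS_set_robust_list/SYS_get_robust_list and struct robust_list_head via <linux/futex.h>. glibc normally performs this registration itself per thread, so this is purely illustrative.

```c
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/futex.h>        /* struct robust_list_head */

int main(void)
{
        static struct robust_list_head head;
        struct robust_list_head *registered;
        size_t len;

        /* An empty list points back at itself; nothing is pending. */
        head.list.next = &head.list;
        head.futex_offset = 0;
        head.list_op_pending = NULL;

        /* Replaces the list glibc registered for this thread -- acceptable
         * only for a short-lived demo, not for a real program. */
        if (syscall(SYS_set_robust_list, &head, sizeof(head)) != 0) {
                perror("set_robust_list");
                return 1;
        }

        if (syscall(SYS_get_robust_list, 0, &registered, &len) == 0)
                printf("robust list head registered at %p (len %zu)\n",
                       (void *)registered, len);
        return 0;
}
```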
2006-03-27  [PATCH] remove zone_mem_map  (KAMEZAWA Hiroyuki, 1 file, -5/+5)
This patch removes zone_mem_map. pfn_to_page uses pgdat, page_to_pfn uses zone. page_to_pfn can use pgdat instead of zone, and it is the only user of zone_mem_map; by modifying it, we can remove zone_mem_map. Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Cc: Dave Hansen <haveblue@us.ibm.com> Cc: Christoph Lameter <christoph@lameter.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-03-27  [PATCH] unify pfn_to_page: generic functions  (KAMEZAWA Hiroyuki, 1 file, -0/+77)
There are 3 memory models, FLATMEM, DISCONTIGMEM, SPARSEMEM. Each arch has its own page_to_pfn(), pfn_to_page() for each models. But most of them can use the same arithmetic. This patch adds asm-generic/memory_model.h, which includes generic page_to_pfn(), pfn_to_page() definitions for each memory model. When CONFIG_OUT_OF_LINE_PFN_TO_PAGE=y, out-of-line functions are used instead of macro. This is enabled by some archs and reduces text size. Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Cc: Hugh Dickins <hugh@veritas.com> Cc: Andi Kleen <ak@muc.de> Cc: Paul Mackerras <paulus@samba.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Richard Henderson <rth@twiddle.net> Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru> Cc: Russell King <rmk@arm.linux.org.uk> Cc: Ian Molton <spyro@f2s.com> Cc: Mikael Starvik <starvik@axis.com> Cc: David Howells <dhowells@redhat.com> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Cc: Hirokazu Takata <takata.hirokazu@renesas.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Kyle McMartin <kyle@mcmartin.ca> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Paul Mundt <lethal@linux-sh.org> Cc: Kazumoto Kojima <kkojima@rr.iij4u.or.jp> Cc: Richard Curnow <rc@rc0.org.uk> Cc: William Lee Irwin III <wli@holomorphy.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Jeff Dike <jdike@addtoit.com> Cc: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it> Cc: Miles Bader <uclinux-v850@lsi.nec.co.jp> Cc: Chris Zankel <chris@zankel.net> Cc: "Luck, Tony" <tony.luck@intel.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
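For orientation, the arithmetic the generic header shares in the FLATMEM case boils down to indexing a flat mem_map[] array by (pfn - ARCH_PFN_OFFSET). A self-contained sketch with stand-in definitions (these are illustrative names, not the kernel's macros):

```c
/* Fake, self-contained FLATMEM-style pfn<->page arithmetic. */
#include <stdio.h>

struct page { unsigned long flags; };

#define ARCH_PFN_OFFSET  0x100UL        /* first valid pfn on this fake arch */
static struct page mem_map[16];         /* one struct page per page frame */

#define pfn_to_page(pfn)  (&mem_map[(pfn) - ARCH_PFN_OFFSET])
#define page_to_pfn(pg)   ((unsigned long)((pg) - mem_map) + ARCH_PFN_OFFSET)

int main(void)
{
        struct page *p = pfn_to_page(0x105UL);

        printf("page %p maps back to pfn %#lx\n", (void *)p, page_to_pfn(p));
        return 0;
}
```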
2006-03-26  [PATCH] bitops: update include/asm-generic/bitops.h  (Akinobu Mita, 1 file, -63/+13)
Currently include/asm-generic/bitops.h is not referenced from anywhere, but it will benefit those who are trying to port Linux to another architecture. So update it in the same manner. Signed-off-by: Akinobu Mita <mita@miraclelinux.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-03-26  [PATCH] bitops: generic minix_{test,set,test_and_clear,test,find_first_zero}_bit()  (Akinobu Mita, 2 files, -0/+32)
This patch introduces the C-language equivalents of the functions below: int minix_test_and_set_bit(int nr, volatile unsigned long *addr); int minix_set_bit(int nr, volatile unsigned long *addr); int minix_test_and_clear_bit(int nr, volatile unsigned long *addr); int minix_test_bit(int nr, const volatile unsigned long *addr); unsigned long minix_find_first_zero_bit(const unsigned long *addr, unsigned long size); In include/asm-generic/bitops/minix.h and include/asm-generic/bitops/minix-le.h This code largely copied from: include/asm-sparc/bitops.h Signed-off-by: Akinobu Mita <mita@miraclelinux.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-03-26  [PATCH] bitops: generic ext2_{set,clear}_bit_atomic()  (Akinobu Mita, 1 file, -0/+22)
This patch introduces the C-language equivalents of the functions below: int ext2_set_bit_atomic(int nr, volatile unsigned long *addr); int ext2_clear_bit_atomic(int nr, volatile unsigned long *addr); In include/asm-generic/bitops/ext2-atomic.h This code largely copied from: include/asm-sparc/bitops.h Signed-off-by: Akinobu Mita <mita@miraclelinux.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-03-26  [PATCH] bitops: generic ext2_{set,clear,test,find_first_zero,find_next_zero}_bit()  (Akinobu Mita, 2 files, -0/+71)
This patch introduces the C-language equivalents of the functions below: int ext2_set_bit(int nr, volatile unsigned long *addr); int ext2_clear_bit(int nr, volatile unsigned long *addr); int ext2_test_bit(int nr, const volatile unsigned long *addr); unsigned long ext2_find_first_zero_bit(const unsigned long *addr, unsigned long size); unsigned long ext2_find_next_zero_bit(const unsigned long *addr, unsigned long size); In include/asm-generic/bitops/ext2-non-atomic.h This code largely copied from: include/asm-powerpc/bitops.h include/asm-parisc/bitops.h Signed-off-by: Akinobu Mita <mita@miraclelinux.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-03-26  [PATCH] fix error: __u32 undeclared  (Akinobu Mita, 2 files, -0/+4)
Build fix for s390: declare __u32 and __u64. Signed-off-by: Akinobu Mita <mita@miraclelinux.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-03-26  [PATCH] bitops: generic hweight{64,32,16,8}()  (Akinobu Mita, 1 file, -0/+9)
This patch introduces the C-language equivalents of the functions below: unsigned int hweight32(unsigned int w); unsigned int hweight16(unsigned int w); unsigned int hweight8(unsigned int w); unsigned long hweight64(__u64 w); In include/asm-generic/bitops/hweight.h This code largely copied from: include/linux/bitops.h Signed-off-by: Akinobu Mita <mita@miraclelinux.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
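The generic population count is plain shift-and-mask arithmetic. An equivalent-in-spirit, self-contained sketch (the *_example names are placeholders, not the kernel's identifiers):

```c
/* Count set bits by folding pairs, nibbles, bytes and halfwords together. */
static unsigned int hweight32_example(unsigned int w)
{
        unsigned int res = w - ((w >> 1) & 0x55555555);

        res = (res & 0x33333333) + ((res >> 2) & 0x33333333);
        res = (res + (res >> 4)) & 0x0f0f0f0f;
        res = res + (res >> 8);
        return (res + (res >> 16)) & 0xff;
}

/* 64-bit weight as the sum of the two 32-bit halves. */
static unsigned int hweight64_example(unsigned long long w)
{
        return hweight32_example((unsigned int)w) +
               hweight32_example((unsigned int)(w >> 32));
}
```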
2006-03-26  [PATCH] bitops: generic ffs()  (Akinobu Mita, 1 file, -0/+41)
This patch introduces the C-language equivalent of the function: int ffs(int x); In include/asm-generic/bitops/ffs.h This code largely copied from: include/linux/bitops.h Signed-off-by: Akinobu Mita <mita@miraclelinux.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
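ffs() returns the 1-based index of the least significant set bit, with ffs(0) == 0. A self-contained sketch of the binary-search style such a generic implementation typically uses (placeholder name, not a verbatim copy of the header):

```c
/* 1-based index of the least significant set bit; 0 if no bit is set. */
static int ffs_example(int x)
{
        int r = 1;

        if (!x)
                return 0;
        if (!(x & 0xffff)) { x >>= 16; r += 16; }
        if (!(x & 0xff))   { x >>= 8;  r += 8; }
        if (!(x & 0xf))    { x >>= 4;  r += 4; }
        if (!(x & 3))      { x >>= 2;  r += 2; }
        if (!(x & 1))      { r += 1; }
        return r;
}
```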
2006-03-26  [PATCH] bitops: generic sched_find_first_bit()  (Akinobu Mita, 1 file, -0/+36)
This patch introduces the C-language equivalent of the function: int sched_find_first_bit(const unsigned long *b); In include/asm-generic/bitops/sched.h This code largely copied from: include/asm-powerpc/bitops.h Signed-off-by: Akinobu Mita <mita@miraclelinux.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-03-26  [PATCH] bitops: generic find_{next,first}{,_zero}_bit()  (Akinobu Mita, 1 file, -0/+13)
This patch introduces the C-language equivalents of the functions below: unsigned long find_next_bit(const unsigned long *addr, unsigned long size, unsigned long offset); unsigned long find_next_zero_bit(const unsigned long *addr, unsigned long size, unsigned long offset); unsigned long find_first_zero_bit(const unsigned long *addr, unsigned long size); unsigned long find_first_bit(const unsigned long *addr, unsigned long size); In include/asm-generic/bitops/find.h This code largely copied from: arch/powerpc/lib/bitops.c Signed-off-by: Akinobu Mita <mita@miraclelinux.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
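A sketch of how a generic find_next_bit() over an unsigned long bitmap can be written, in the same spirit as the commit (placeholder names; __builtin_ctzl stands in for __ffs()):

```c
#include <limits.h>                             /* CHAR_BIT */

#define BITS_PER_LONG_EX (CHAR_BIT * sizeof(unsigned long))

/* Returns the index of the next set bit at or after `offset`, or `size`
 * when none is found.  The bitmap is assumed to be padded to whole words. */
static unsigned long find_next_bit_example(const unsigned long *addr,
                                           unsigned long size,
                                           unsigned long offset)
{
        unsigned long word;

        if (offset >= size)
                return size;

        /* Mask off bits below `offset` in the first word examined. */
        word = addr[offset / BITS_PER_LONG_EX] &
               (~0UL << (offset % BITS_PER_LONG_EX));
        offset -= offset % BITS_PER_LONG_EX;

        while (!word) {
                offset += BITS_PER_LONG_EX;
                if (offset >= size)
                        return size;
                word = addr[offset / BITS_PER_LONG_EX];
        }

        offset += (unsigned long)__builtin_ctzl(word); /* lowest set bit */
        return offset < size ? offset : size;
}
```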
2006-03-26  [PATCH] bitops: generic fls64()  (Akinobu Mita, 1 file, -0/+12)
This patch introduces the C-language equivalent of the function: int fls64(__u64 x); In include/asm-generic/bitops/fls64.h This code largely copied from: include/linux/bitops.h Signed-off-by: Akinobu Mita <mita@miraclelinux.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-03-26  [PATCH] bitops: generic fls()  (Akinobu Mita, 1 file, -0/+41)
This patch introduces the C-language equivalent of the function: int fls(int x); In include/asm-generic/bitops/fls.h This code largely copied from: include/linux/bitops.h Signed-off-by: Akinobu Mita <mita@miraclelinux.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
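fls() is the mirror image of ffs(): the 1-based index of the most significant set bit, 0 for a zero argument. A self-contained sketch of the shift-and-test approach (placeholder name):

```c
/* 1-based index of the most significant set bit; 0 if no bit is set. */
static int fls_example(int x)
{
        unsigned int v = (unsigned int)x;
        int r = 32;

        if (!v)
                return 0;
        if (!(v & 0xffff0000u)) { v <<= 16; r -= 16; }
        if (!(v & 0xff000000u)) { v <<= 8;  r -= 8; }
        if (!(v & 0xf0000000u)) { v <<= 4;  r -= 4; }
        if (!(v & 0xc0000000u)) { v <<= 2;  r -= 2; }
        if (!(v & 0x80000000u)) { r -= 1; }
        return r;
}
```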
2006-03-26  [PATCH] bitops: generic ffz()  (Akinobu Mita, 1 file, -0/+12)
This patch introduces the C-language equivalent of the function: unsigned long ffz(unsigned long word); In include/asm-generic/bitops/ffz.h This code largely copied from: include/asm-parisc/bitops.h Signed-off-by: Akinobu Mita <mita@miraclelinux.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-03-26  [PATCH] bitops: generic __ffs()  (Akinobu Mita, 1 file, -0/+43)
This patch introduces the C-language equivalent of the function: unsigned long __ffs(unsigned long word); In include/asm-generic/bitops/__ffs.h This code largely copied from: include/asm-sparc/bitops.h Signed-off-by: Akinobu Mita <mita@miraclelinux.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
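__ffs() returns the 0-based index of the first set bit and, as in the kernel API, is undefined for a zero argument. A self-contained sketch that handles both 32-bit and 64-bit unsigned long (placeholder name); note that ffz(word) is simply this function applied to ~word:

```c
#include <limits.h>                             /* ULONG_MAX */

/* 0-based index of the least significant set bit; undefined for word == 0. */
static unsigned long __ffs_example(unsigned long word)
{
        unsigned long num = 0;

#if ULONG_MAX > 0xffffffffUL                    /* 64-bit unsigned long */
        if ((word & 0xffffffffUL) == 0) { num += 32; word >>= 32; }
#endif
        if ((word & 0xffffUL) == 0) { num += 16; word >>= 16; }
        if ((word & 0xffUL)   == 0) { num += 8;  word >>= 8; }
        if ((word & 0xfUL)    == 0) { num += 4;  word >>= 4; }
        if ((word & 0x3UL)    == 0) { num += 2;  word >>= 2; }
        if ((word & 0x1UL)    == 0) { num += 1; }
        return num;
}
```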
2006-03-26  [PATCH] bitops: generic __{,test_and_}{set,clear,change}_bit() and test_bit()  (Akinobu Mita, 1 file, -0/+111)
This patch introduces the C-language equivalents of the functions below: void __set_bit(int nr, volatile unsigned long *addr); void __clear_bit(int nr, volatile unsigned long *addr); void __change_bit(int nr, volatile unsigned long *addr); int __test_and_set_bit(int nr, volatile unsigned long *addr); int __test_and_clear_bit(int nr, volatile unsigned long *addr); int __test_and_change_bit(int nr, volatile unsigned long *addr); int test_bit(int nr, const volatile unsigned long *addr); In include/asm-generic/bitops/non-atomic.h This code largely copied from: asm-powerpc/bitops.h Signed-off-by: Akinobu Mita <mita@miraclelinux.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
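The non-atomic variants are plain read-modify-write operations on the word containing the bit, with no locking or barriers; the caller must provide exclusion. A sketch in that spirit (the *_example and *_EX names are placeholders, not the header's code):

```c
#include <limits.h>

#define BITS_PER_LONG_EX  (CHAR_BIT * sizeof(unsigned long))
#define BIT_MASK_EX(nr)   (1UL << ((nr) % BITS_PER_LONG_EX))
#define BIT_WORD_EX(nr)   ((nr) / BITS_PER_LONG_EX)

static void __set_bit_example(unsigned int nr, volatile unsigned long *addr)
{
        addr[BIT_WORD_EX(nr)] |= BIT_MASK_EX(nr);
}

static void __clear_bit_example(unsigned int nr, volatile unsigned long *addr)
{
        addr[BIT_WORD_EX(nr)] &= ~BIT_MASK_EX(nr);
}

static int __test_and_set_bit_example(unsigned int nr, volatile unsigned long *addr)
{
        unsigned long mask = BIT_MASK_EX(nr);
        volatile unsigned long *p = addr + BIT_WORD_EX(nr);
        unsigned long old = *p;

        *p = old | mask;                /* non-atomic read-modify-write */
        return (old & mask) != 0;
}

static int test_bit_example(unsigned int nr, const volatile unsigned long *addr)
{
        return (addr[BIT_WORD_EX(nr)] >> (nr % BITS_PER_LONG_EX)) & 1UL;
}
```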
2006-03-26  [PATCH] bitops: generic {,test_and_}{set,clear,change}_bit()  (Akinobu Mita, 1 file, -0/+191)
This patch introduces the C-language equivalents of the functions below: void set_bit(int nr, volatile unsigned long *addr); void clear_bit(int nr, volatile unsigned long *addr); void change_bit(int nr, volatile unsigned long *addr); int test_and_set_bit(int nr, volatile unsigned long *addr); int test_and_clear_bit(int nr, volatile unsigned long *addr); int test_and_change_bit(int nr, volatile unsigned long *addr); In include/asm-generic/bitops/atomic.h This code largely copied from: include/asm-powerpc/bitops.h include/asm-parisc/bitops.h include/asm-parisc/atomic.h Signed-off-by: Akinobu Mita <mita@miraclelinux.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-03-23  [PATCH] more for_each_cpu() conversions  (Andrew Morton, 1 file, -4/+3)
When we stop allocating percpu memory for not-possible CPUs we must not touch the percpu data for not-possible CPUs at all. The correct way of doing this is to test cpu_possible() or to use for_each_cpu(). This patch is a kernel-wide sweep of all instances of NR_CPUS. I found very few instances of this bug, if any. But the patch converts lots of open-coded test to use the preferred helper macros. Cc: Mikael Starvik <starvik@axis.com> Cc: David Howells <dhowells@redhat.com> Acked-by: Kyle McMartin <kyle@parisc-linux.org> Cc: Anton Blanchard <anton@samba.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Paul Mundt <lethal@linux-sh.org> Cc: "David S. Miller" <davem@davemloft.net> Cc: William Lee Irwin III <wli@holomorphy.com> Cc: Andi Kleen <ak@muc.de> Cc: Christian Zankel <chris@zankel.net> Cc: Philippe Elie <phil.el@wanadoo.fr> Cc: Nathan Scott <nathans@sgi.com> Cc: Jens Axboe <axboe@suse.de> Cc: Eric Dumazet <dada1@cosmosbay.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-03-23  [PATCH] make bug messages more consistent  (Ingo Molnar, 1 file, -2/+2)
Consolidate all kernel bug printouts to begin with the "BUG: " string. Makes it easier to find them in large bootup logs. Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-03-20  [PATCH] add EXPORT_SYMBOL_GPL_FUTURE()  (Greg Kroah-Hartman, 1 file, -0/+14)
This patch adds the ability to mark symbols that will be changed in the future, so that kernel modules that don't include MODULE_LICENSE("GPL") and use the symbols, will be flagged and printed out to the system log. Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2006-03-08  [PATCH] __get_unaligned() gcc-4 fix  (Atsushi Nemoto, 1 file, -2/+2)
If 'ptr' is a pointer to const, this code causes an "assignment of read-only variable" error on gcc 4.x. Use __u64 instead of __typeof__(*(ptr)) for the temporary variable to get rid of errors on gcc 4.x. Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp> Cc: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-02-15  [PATCH] add asm-generic/mman.h  (Michael S. Tsirkin, 1 file, -0/+42)
Make new MADV_REMOVE, MADV_DONTFORK, MADV_DOFORK consistent across all arches. The idea is to make it possible to use them portably even before distros include them in libc headers. Move common flags to asm-generic/mman.h Signed-off-by: Michael S. Tsirkin <mst@mellanox.co.il> Cc: Roland Dreier <rolandd@cisco.com> Cc: Badari Pulavarty <pbadari@us.ibm.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
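For context, this is how the now-uniform flags are consumed from user space; the sketch below marks an anonymous mapping as MADV_DONTFORK so children created by fork() do not inherit it. It assumes a Linux libc that already exposes the constant in <sys/mman.h>:

```c
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
        size_t len = 4096;
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (p == MAP_FAILED) {
                perror("mmap");
                return 1;
        }

        /* Children created by fork() will not inherit this mapping. */
        if (madvise(p, len, MADV_DONTFORK) != 0)
                perror("madvise(MADV_DONTFORK)");

        munmap(p, len);
        return 0;
}
```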
2006-01-15  [PATCH] Use atomic64_set for 64-bit case of atomic_long_set  (Kyle McMartin, 1 file, -1/+1)
For some reason, the BITS_PER_LONG == 64 case of atomic_long_set was using atomic_set instead of atomic64_set. This does not jive with architectures which use an inline instead of a #define to implement their atomic_set() primitives. Signed-off-by: Kyle McMartin <kyle@parisc-linux.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
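A conceptual sketch of why the 64-bit branch must dispatch to atomic64_set(): with inline-function implementations, handing an atomic64_t to atomic_set() is a type error rather than a silent truncation. The stub types and *_stub names below are invented for illustration and are not kernel code:

```c
#include <stdio.h>

typedef struct { int counter; }  atomic_t;      /* stub, not kernel type */
typedef struct { long counter; } atomic64_t;    /* stub, not kernel type */

static inline void atomic_set(atomic_t *v, int i)      { v->counter = i; }
static inline void atomic64_set(atomic64_t *v, long i) { v->counter = i; }

#if defined(__LP64__) || defined(_WIN64)
typedef atomic64_t atomic_long_stub_t;
static inline void atomic_long_set_stub(atomic_long_stub_t *l, long i)
{
        atomic64_set(l, i);     /* atomic_set(l, i) would not even compile */
}
#else
typedef atomic_t atomic_long_stub_t;
static inline void atomic_long_set_stub(atomic_long_stub_t *l, long i)
{
        atomic_set(l, (int)i);
}
#endif

int main(void)
{
        atomic_long_stub_t v;

        atomic_long_set_stub(&v, 42L);
        printf("%ld\n", (long)v.counter);
        return 0;
}
```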