path: root/arch/powerpc/kernel/traps.c
Age  Commit message  (Author; files changed, lines -removed/+added)
2011-01-21  powerpc: Remove duplicate debugger hook in machine_check_exception  (Anton Blanchard; 1 file, -2/+0)
We are calling debugger_fault_handler twice in machine_check_exception. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2011-01-21  powerpc: Never halt RTAS error logging after receiving an unrecoverable machine check  (Anton Blanchard; 1 file, -1/+1)
Newer versions of the System p firmware send a partial RTAS error log in the machine check handler, with a more detailed response appearing sometime later via check event. This means that at machine check time we do not have enough information to ascertain exactly what went on. Furthermore, I have found the RTAS error logs in the machine check handler contain no useful information, so halting on them makes little sense. If we want to halt, it would make more sense to do it on the error log received sometime later via check event. In light of this, never halt on the error log in the pseries machine check handler. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2011-01-21  powerpc: Don't force MSR_RI in machine_check_exception  (Anton Blanchard; 1 file, -4/+1)
We should never force MSR_RI on. If we take a machine check with MSR_RI off then we have no chance of recovering safely. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2011-01-21  powerpc: Print 32 bits of DSISR in show_regs  (Anton Blanchard; 1 file, -1/+1)
We were printing 64 bits of DSISR in show_regs even though it is a 32-bit register. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2011-01-21  powerpc/kdump: Disable ftrace during kexec  (Anton Blanchard; 1 file, -0/+7)
We should disable ftrace during kexec; some of the tracers are very invasive, and we do not want them going off while doing the low-level work of swapping one kernel out for another. This mirrors what we do on x86. Even though we cannot return from a kexec on powerpc (since we do not implement CONFIG_KEXEC_JUMP), add the restore code in case we do one day. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
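For readers unfamiliar with the save/restore pattern mentioned above, here is a minimal sketch of the idea. It assumes the generic __ftrace_enabled_save()/__ftrace_enabled_restore() helpers; the function name is illustrative, and this is not the literal powerpc patch.

  #include <linux/ftrace.h>
  #include <linux/kexec.h>

  void example_machine_kexec(struct kimage *image)
  {
          int save_ftrace_enabled;

          /* keep invasive tracers quiet while we swap kernels */
          save_ftrace_enabled = __ftrace_enabled_save();

          /* ... low-level relocation and jump into the new kernel ... */

          /* only reached if kexec ever returns (e.g. CONFIG_KEXEC_JUMP) */
          __ftrace_enabled_restore(save_ftrace_enabled);
  }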
2011-01-21  powerpc/kdump: Move crash_kexec_stop_spus to kdump crash handler  (Anton Blanchard; 3 files, -78/+72)
Use the crash handler hooks to run the SPU stop code, just like we do for ehea and cell RAS code. While I'm here, fix the "CPUSs reliabally" spelling mistakes I noticed. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2011-01-21  powerpc/kexec: Remove empty ppc_md.machine_kexec_prepare  (Anton Blanchard; 2 files, -22/+0)
We check for a valid handler before calling ppc_md.machine_kexec_prepare so we can just remove these empty handlers. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2011-01-21  powerpc/kexec: Don't initialise kexec hooks to default handlers  (Anton Blanchard; 2 files, -13/+0)
There's no need to initialise ppc_md.machine_kexec and ppc_md.machine_kexec_prepare to the default handlers. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2011-01-21  powerpc/kdump: Remove ppc_md.machine_crash_shutdown  (Anton Blanchard; 4 files, -12/+1)
No one uses ppc_md.machine_crash_shutdown, so remove it. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2011-01-21  powerpc/kexec: Remove ppc_md.machine_kexec  (Anton Blanchard; 2 files, -10/+1)
No one uses ppc_md.machine_kexec, so remove it. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2011-01-21  powerpc/kexec: Remove ppc_md.machine_kexec_cleanup  (Anton Blanchard; 2 files, -5/+0)
No one uses ppc_md.machine_kexec_cleanup, so remove it. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2011-01-21  powerpc/kexec: Move all ppc_md kexec function pointers together  (Anton Blanchard; 1 file, -3/+2)
Move all the kexec handlers together. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2011-01-21  powerpc/cell: Use system_wq in cpufreq_spudemand  (Tejun Heo; 2 files, -22/+23)
With cmwq, there's no reason to use a separate workqueue in cpufreq_spudemand. Use system_wq instead. The work items are already sync canceled on stop, so it's already guaranteed that no work is running when spu_gov_exit() is entered. Signed-off-by: Tejun Heo <tj@kernel.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: linuxppc-dev@lists.ozlabs.org Cc: Dave Jones <davej@redhat.com> Cc: cpufreq@vger.kernel.org Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2011-01-21  powerpc/macintosh: Fix wrong test in fan_{read,write}_reg()  (Roel Kluin; 1 file, -2/+2)
Fix error test in fan_{read,write}_reg() Signed-off-by: Roel Kluin <roel.kluin@gmail.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2011-01-21  powerpc/rtas_flash: Use simple_read_from_buffer  (Akinobu Mita; 1 file, -47/+6)
Simplify read file operation for /proc/powerpc/rtas/* interface by using simple_read_from_buffer. Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
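As background, simple_read_from_buffer() copies from a kernel buffer to userspace while handling the offset and short-read bookkeeping for you. A minimal sketch of how a read handler can use it; the handler name and buffer are illustrative, not the actual rtas_flash code.

  #include <linux/fs.h>
  #include <linux/string.h>
  #include <linux/uaccess.h>

  static ssize_t example_read(struct file *file, char __user *buf,
                              size_t count, loff_t *ppos)
  {
          const char *msg = "status: ok\n";  /* hypothetical kernel buffer */

          /* copies at most count bytes starting at *ppos and advances *ppos */
          return simple_read_from_buffer(buf, count, ppos, msg, strlen(msg));
  }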
2011-01-21  powerpc/spufs: Use simple_write_to_buffer  (Akinobu Mita; 1 file, -20/+7)
Simplify several write file operations for spufs by using simple_write_to_buffer(). Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
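The write-side helper is symmetrical: simple_write_to_buffer() copies from userspace into a bounded kernel buffer, honouring the file offset. A minimal sketch with illustrative names, not the actual spufs handlers.

  #include <linux/fs.h>

  static char example_buf[64];               /* hypothetical in-kernel buffer */

  static ssize_t example_write(struct file *file, const char __user *buf,
                               size_t count, loff_t *ppos)
  {
          /* never writes past sizeof(example_buf); returns bytes copied */
          return simple_write_to_buffer(example_buf, sizeof(example_buf),
                                        ppos, buf, count);
  }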
2011-01-21  powerpc/ppc32/tracing: Add stack frame to calls of trace_hardirqs_on/off  (Steven Rostedt; 1 file, -0/+11)
32-bit variant of the previous patch for 64-bit: << When an interrupt occurs in userspace, we can call trace_hardirqs_on/off() with a one-level stack. But if we have irqsoff tracing enabled, it checks both CALLER_ADDR0 and CALLER_ADDR1. The second call goes two stack frames up. If this is from user space, then there may not exist a second stack.... >> Signed-off-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2011-01-21  powerpc/ppc64/tracing: Add stack frame to calls of trace_hardirqs_on/off  (Steven Rostedt; 1 file, -10/+30)
When an interrupt occurs in userspace, we can call trace_hardirqs_on/off() with a one-level stack. But if we have irqsoff tracing enabled, it checks both CALLER_ADDR0 and CALLER_ADDR1. The second call goes two stack frames up. If this is from user space, then there may not exist a second stack. Add a second stack frame when calling trace_hardirqs_on/off(); otherwise the following oops might occur:

  Oops: Kernel access of bad area, sig: 11 [#1]
  PREEMPT SMP NR_CPUS=2 PA Semi PWRficient
  last sysfs file: /sys/block/sda/size
  Modules linked in: ohci_hcd ehci_hcd usbcore
  NIP: c0000000000e1c00 LR: c0000000000034d4 CTR: 000000011012c440
  REGS: c00000003e2f3af0 TRAP: 0300   Not tainted  (2.6.37-rc6+)
  MSR: 9000000000001032 <ME,IR,DR>  CR: 48044444  XER: 20000000
  DAR: 00000001ffb9db50, DSISR: 0000000040000000
  TASK = c00000003e1a00a0[2088] 'emacs' THREAD: c00000003e2f0000 CPU: 1
  GPR00: 0000000000000001 c00000003e2f3d70 c00000000084e0d0 c0000000008816e8
  GPR04: 000000001034c678 000000001032e8f9 0000000010336540 0000000040020000
  GPR08: 0000000040020000 00000001ffb9db40 c00000003e2f3e30 0000000060000000
  GPR12: 100000000000f032 c00000000fff0280 000000001032e8c9 0000000000000008
  GPR16: 00000000105be9c0 00000000105be950 00000000105be9b0 00000000105be950
  GPR20: 00000000ffb9dc50 00000000ffb9dbf0 00000000102f0000 00000000102f0000
  GPR24: 00000000102e0000 00000000102f0000 0000000010336540 c0000000009ded38
  GPR28: 00000000102e0000 c0000000000034d4 c0000000007ccb10 c00000003e2f3d70
  NIP [c0000000000e1c00] .trace_hardirqs_off+0xb0/0x1d0
  LR [c0000000000034d4] decrementer_common+0xd4/0x100
  Call Trace:
  [c00000003e2f3d70] [c00000003e2f3e30] 0xc00000003e2f3e30 (unreliable)
  [c00000003e2f3e30] [c0000000000034d4] decrementer_common+0xd4/0x100
  Instruction dump:
  81690000 7f8b0000 419e0018 f84a0028 60000000 60000000 60000000 e95f0000
  80030000 e92a0000 eb6301f8 2f800000 <eb890010> 41fe00dc a06d000a eb1e8050
  ---[ end trace 4ec7fd2be9240928 ]---

Reported-by: Joerg Sommer <joerg@alea.gnuu.de> Signed-off-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2011-01-21  powerpc: Ensure the else case of feature sections will fit  (Michael Ellerman; 2 files, -12/+34)
When we create an alternative feature section, the else case must be the same size or smaller than the body. This is because when we patch the else case in we just overwrite the body, so there must be room. Up to now we just did this by inspection, but it's quite easy to enforce it in the assembler, so we should. The only change is to add the ifgt block, but that affects the alignment of the tabs and so the whole macro is modified. Also add a test, but #if 0 it because we don't want to break the build. Anyone who's modifying the feature macros should enable the test. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2011-01-20  ACPI / PM: Call suspend_nvs_free() earlier during resume  (Rafael J. Wysocki; 1 file, -1/+1)
It turns out that some device drivers map pages from the ACPI NVS region during resume using ioremap(), which conflicts with ioremap_cache() used for mapping those pages by the NVS save/restore code in nvs.c. Make the NVS pages mapped by the code in nvs.c be unmapped before device drivers' resume routines run. Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2011-01-20  ACPI: Introduce acpi_os_ioremap()  (Rafael J. Wysocki; 4 files, -11/+27)
Commit ca9b600be38c ("ACPI / PM: Make suspend_nvs_save() use acpi_os_map_memory()") attempted to prevent the code in osl.c and nvs.c from using different ioremap() variants by making the latter use acpi_os_map_memory() for mapping the NVS pages. However, that also requires acpi_os_unmap_memory() to be used for unmapping them, which causes synchronize_rcu() to be executed many times in a row unnecessarily and introduces substantial delays during resume on some systems. Instead of using acpi_os_map_memory() for mapping the NVS pages in nvs.c introduce acpi_os_ioremap() calling ioremap_cache() and make the code in both osl.c and nvs.c use it. Reported-by: Jeff Chua <jeff.chua.linux@gmail.com> Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
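The wrapper described above is small; a hedged sketch of what such a helper can look like (the actual definition in the tree may differ in placement and detail):

  #include <linux/io.h>
  #include <acpi/acpi.h>          /* acpi_physical_address, acpi_size */

  static inline void __iomem *acpi_os_ioremap(acpi_physical_address phys,
                                              acpi_size size)
  {
          /* one cached-mapping primitive shared by osl.c and nvs.c */
          return ioremap_cache(phys, size);
  }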
2011-01-21  cifs: fix up CIFSSMBEcho for unaligned access  (Jeff Layton; 1 file, -3/+3)
Make sure that CIFSSMBEcho can handle unaligned fields. Also fix a minor bug that causes this warning:

  fs/cifs/cifssmb.c: In function 'CIFSSMBEcho':
  fs/cifs/cifssmb.c:740: warning: large integer implicitly truncated to unsigned type

...WordCount is u8, not __le16, so no need to convert it. This patch should apply cleanly on top of the rest of the patchset to clean up unaligned access. Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Steve French <sfrench@us.ibm.com>
2011-01-20  kernel/smp.c: consolidate writes in smp_call_function_interrupt()  (Milton Miller; 1 file, -10/+19)
We have to test the cpu mask in the interrupt handler before checking the refs, otherwise we can start to follow an entry before it's deleted and find it partially initialized for the next trip. Presently we also clear the cpumask bit before executing the called function, which implies getting write access to the line. After the function is called we then decrement refs, and if they go to zero we then unlock the structure. However, this implies getting write access to the call function data both before and after the function is called. If we can assert that no smp_call_function execution function is allowed to enable interrupts, then we can move both writes to after the function is called, hopefully allowing both writes with one cache line bounce.

On a 256 thread system with a kernel compiled for 1024 threads, the time to execute the testcase in the "smp_call_function_many race" changelog was reduced by about 30-40ms out of about 545 ms.

I decided to keep this as WARN because it's now a buggy function, even though the stack trace is of no value -- a simple printk would give us the information needed.

Raw data:

Without patch:
  ipi_test startup took 1219366ns complete 539819014ns total 541038380ns
  ipi_test startup took 1695754ns complete 543439872ns total 545135626ns
  ipi_test startup took 7513568ns complete 539606362ns total 547119930ns
  ipi_test startup took 13304064ns complete 533898562ns total 547202626ns
  ipi_test startup took 8668192ns complete 544264074ns total 552932266ns
  ipi_test startup took 4977626ns complete 548862684ns total 553840310ns
  ipi_test startup took 2144486ns complete 541292318ns total 543436804ns
  ipi_test startup took 21245824ns complete 530280180ns total 551526004ns

With patch:
  ipi_test startup took 5961748ns complete 500859628ns total 506821376ns
  ipi_test startup took 8975996ns complete 495098924ns total 504074920ns
  ipi_test startup took 19797750ns complete 492204740ns total 512002490ns
  ipi_test startup took 14824796ns complete 487495878ns total 502320674ns
  ipi_test startup took 11514882ns complete 494439372ns total 505954254ns
  ipi_test startup took 8288084ns complete 502570774ns total 510858858ns
  ipi_test startup took 6789954ns complete 493388112ns total 500178066ns

  #include <linux/module.h>
  #include <linux/init.h>
  #include <linux/sched.h> /* sched clock */

  #define ITERATIONS 100

  static void do_nothing_ipi(void *dummy)
  {
  }

  static void do_ipis(struct work_struct *dummy)
  {
          int i;

          for (i = 0; i < ITERATIONS; i++)
                  smp_call_function(do_nothing_ipi, NULL, 1);

          printk(KERN_DEBUG "cpu %d finished\n", smp_processor_id());
  }

  static struct work_struct work[NR_CPUS];

  static int __init testcase_init(void)
  {
          int cpu;
          u64 start, started, done;

          start = local_clock();
          for_each_online_cpu(cpu) {
                  INIT_WORK(&work[cpu], do_ipis);
                  schedule_work_on(cpu, &work[cpu]);
          }
          started = local_clock();
          for_each_online_cpu(cpu)
                  flush_work(&work[cpu]);
          done = local_clock();
          pr_info("ipi_test startup took %lldns complete %lldns total %lldns\n",
                  started-start, done-started, done-start);

          return 0;
  }

  static void __exit testcase_exit(void)
  {
  }

  module_init(testcase_init)
  module_exit(testcase_exit)
  MODULE_LICENSE("GPL");
  MODULE_AUTHOR("Anton Blanchard");

Signed-off-by: Milton Miller <miltonm@bga.com> Cc: Anton Blanchard <anton@samba.org> Cc: Ingo Molnar <mingo@elte.hu> Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2011-01-20  kernel/smp.c: fix smp_call_function_many() SMP race  (Anton Blanchard; 1 file, -0/+30)
I noticed a failure where we hit the following WARN_ON in generic_smp_call_function_interrupt:

  if (!cpumask_test_and_clear_cpu(cpu, data->cpumask))
          continue;

  data->csd.func(data->csd.info);

  refs = atomic_dec_return(&data->refs);
  WARN_ON(refs < 0);        <-------------------------

We atomically tested and cleared our bit in the cpumask, and yet the number of cpus left (ie refs) was 0. How can this be?

It turns out commit 54fdade1c3332391948ec43530c02c4794a38172 ("generic-ipi: make struct call_function_data lockless") is at fault. It removes locking from smp_call_function_many and in doing so creates a rather complicated race.

The problem comes about because:
 - The smp_call_function_many interrupt handler walks call_function.queue without any locking.
 - We reuse a percpu data structure in smp_call_function_many.
 - We do not wait for any RCU grace period before starting the next smp_call_function_many.

Imagine a scenario where CPU A does two smp_call_functions back to back, and CPU B does an smp_call_function in between. We concentrate on how CPU C handles the calls:

  CPU A                 CPU B                 CPU C                 CPU D
  smp_call_function
                                              smp_call_function_interrupt
                                                walks call_function.queue
                                                sees data from CPU A on list
                        smp_call_function
                                              smp_call_function_interrupt
                                                walks call_function.queue
                                                sees (stale) CPU A on list
                                                                    smp_call_function int
                                                                    clears last ref on A
                                                                    list_del_rcu, unlock
  smp_call_function
  reuses percpu *data A
                                              data->cpumask sees and
                                              clears bit in cpumask
                                                might be using old or new fn!
                                              decrements refs below 0
  set data->refs (too late!)

The important thing to note is that since the interrupt handler walks a potentially stale call_function.queue without any locking, another cpu can view the percpu *data structure at any time, even when the owner is in the process of initialising it.

The following test case hits the WARN_ON 100% of the time on my PowerPC box (having 128 threads does help :)

  #include <linux/module.h>
  #include <linux/init.h>

  #define ITERATIONS 100

  static void do_nothing_ipi(void *dummy)
  {
  }

  static void do_ipis(struct work_struct *dummy)
  {
          int i;

          for (i = 0; i < ITERATIONS; i++)
                  smp_call_function(do_nothing_ipi, NULL, 1);

          printk(KERN_DEBUG "cpu %d finished\n", smp_processor_id());
  }

  static struct work_struct work[NR_CPUS];

  static int __init testcase_init(void)
  {
          int cpu;

          for_each_online_cpu(cpu) {
                  INIT_WORK(&work[cpu], do_ipis);
                  schedule_work_on(cpu, &work[cpu]);
          }

          return 0;
  }

  static void __exit testcase_exit(void)
  {
  }

  module_init(testcase_init)
  module_exit(testcase_exit)
  MODULE_LICENSE("GPL");
  MODULE_AUTHOR("Anton Blanchard");

I tried to fix it by ordering the read and the write of ->cpumask and ->refs. In doing so I missed a critical case but Paul McKenney was able to spot my bug, thankfully :) To ensure we aren't viewing previous iterations the interrupt handler needs to read ->refs then ->cpumask then ->refs _again_.

Thanks to Milton Miller and Paul McKenney for helping to debug this issue.

[miltonm@bga.com: add WARN_ON and BUG_ON, remove extra read of refs before initial read of mask that doesn't help (also noted by Peter Zijlstra), adjust comments, hopefully clarify scenario]
[miltonm@bga.com: remove excess tests]

Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Milton Miller <miltonm@bga.com> Cc: Ingo Molnar <mingo@elte.hu> Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> Cc: <stable@kernel.org> [2.6.32+] Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2011-01-20  memcg: correctly order reading PCG_USED and pc->mem_cgroup  (Johannes Weiner; 1 file, -18/+9)
The placement of the read-side barrier is confused: the writer first sets pc->mem_cgroup, then PCG_USED. The read-side barrier has to be between testing PCG_USED and reading pc->mem_cgroup. Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Acked-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp> Cc: Balbir Singh <balbir@linux.vnet.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
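For illustration, here is a generic sketch of the ordering rule the fix enforces (stand-in names, not the memcg code): the reader's barrier must sit between the flag test and the data read, pairing with the writer's barrier between the data write and the flag set.

  #include <linux/bitops.h>
  #include <linux/smp.h>

  struct example {
          void *data;              /* stands in for pc->mem_cgroup */
          unsigned long flags;     /* bit 0 stands in for PCG_USED */
  };

  static void writer(struct example *e, void *val)
  {
          e->data = val;
          smp_wmb();               /* publish data before setting the flag */
          set_bit(0, &e->flags);
  }

  static void *reader(struct example *e)
  {
          if (!test_bit(0, &e->flags))
                  return NULL;
          smp_rmb();               /* pairs with the writer's smp_wmb() */
          return e->data;          /* now guaranteed to see the published value */
  }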
2011-01-20  backlight: fix 88pm860x_bl macro collision  (Randy Dunlap; 1 file, -2/+2)
Fix collision with kernel-supplied #define:

  drivers/video/backlight/88pm860x_bl.c:24:1: warning: "CURRENT_MASK" redefined
  arch/x86/include/asm/page_64_types.h:6:1: warning: this is the location of the previous definition

Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com> Cc: Haojian Zhuang <haojian.zhuang@marvell.com> Cc: Richard Purdie <rpurdie@rpsys.net> Cc: <stable@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2011-01-20  drivers/leds/ledtrig-gpio.c: make output match input, tighten input checking  (Janusz Krzysztofik; 1 file, -7/+8)
Replicate changes made to drivers/leds/ledtrig-backlight.c. Cc: Paul Mundt <lethal@linux-sh.org> Cc: Richard Purdie <richard.purdie@linuxfoundation.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2011-01-20  MAINTAINERS: update Atmel AT91 entry  (Nicolas Ferre; 1 file, -2/+6)
Add two co-maintainers and update the entry with new information. Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com> Acked-by: Andrew Victor <linux@maxim.org.za> Acked-by: Jean-Christophe PLAGNIOL-VILLARD <plagnioj@jcrosoft.com> Cc: Russell King <rmk@arm.linux.org.uk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2011-01-20  mm: fix truncate_setsize() comment  (Jan Kara; 1 file, -6/+5)
Contrary to what the comment says, truncate_setsize() should be called *before* the filesystem truncates blocks. Signed-off-by: Jan Kara <jack@suse.cz> Cc: Christoph Hellwig <hch@infradead.org> Cc: Al Viro <viro@ZenIV.linux.org.uk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2011-01-20  memcg: fix rmdir, force_empty with THP  (KAMEZAWA Hiroyuki; 1 file, -11/+26)
Now, when THP is enabled, memcg's rmdir() function is broken because move_account() for a THP page is not supported. This can cause an account leak or an -EBUSY failure at rmdir(). This patch fixes the issue by supporting move_account() for THP pages. Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp> Cc: Balbir Singh <balbir@linux.vnet.ibm.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2011-01-20  memcg: fix LRU accounting with THP  (KAMEZAWA Hiroyuki; 1 file, -4/+18)
The memory cgroup's LRU stat should take the size of pages into account, because Transparent Hugepage inserts a hugepage into the LRU. If this value is wrong, memory reclaim will not work well. Note: only the head page of a THP hugepage is linked into the LRU. Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp> Cc: Balbir Singh <balbir@linux.vnet.ibm.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2011-01-20  memcg: fix USED bit handling at uncharge in THP  (KAMEZAWA Hiroyuki; 3 files, -40/+62)
Now, under THP:

 at charge:
  - the PageCgroupUsed bit is set on every page_cgroup of a hugepage, i.e. on 512 pages.
 at uncharge:
  - the PageCgroupUsed bit is unset only on the head page.

So, some pages will remain with the "Used" bit set. This patch fixes that by setting the Used bit only on the head page; Used bits for tail pages will be set at splitting, if necessary. This patch adds this lock order: compound_lock() -> page_cgroup_move_lock(). [akpm@linux-foundation.org: fix warning] Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp> Cc: Balbir Singh <balbir@linux.vnet.ibm.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2011-01-20  memcg: modify accounting function for supporting THP better  (KAMEZAWA Hiroyuki; 1 file, -13/+12)
mem_cgroup_charge_statistics() was designed for charging a single page, but now we have transparent hugepages. To fix problems (in a following patch) the function needs to take the number of pages as an argument. The new function takes the following as arguments: the type of page rather than 'pc', and the size of the page being accounted. Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp> Cc: Balbir Singh <balbir@linux.vnet.ibm.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
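A hypothetical illustration of the interface change (the real function is mem_cgroup_charge_statistics() in mm/memcontrol.c and differs in detail): statistics now move by a page count rather than assuming a single page.

  #include <linux/types.h>

  /* stand-in for the per-memcg counters; illustration only */
  struct example_memcg_stat {
          long cache_pages;
          long rss_pages;
  };

  static void example_charge_statistics(struct example_memcg_stat *stat,
                                        bool file, int nr_pages)
  {
          /* nr_pages is 1 for a normal page, 512 (HPAGE_PMD_NR on x86) for a
           * THP, and negative on uncharge, so counters scale correctly */
          if (file)
                  stat->cache_pages += nr_pages;
          else
                  stat->rss_pages += nr_pages;
  }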
2011-01-20  fs/direct-io.c: don't try to allocate more than BIO_MAX_PAGES in a bio  (David Dillow; 1 file, -3/+7)
When using devices that support max_segments > BIO_MAX_PAGES (256), direct IO tries to allocate a bio with more pages than allowed, which leads to an oops in dio_bio_alloc(). Clamp the request to the supported maximum, and change dio_bio_alloc() to reflect that bio_alloc() will always return a bio when called with __GFP_WAIT and a valid number of vectors. [akpm@linux-foundation.org: remove redundant BUG_ON()] Signed-off-by: David Dillow <dillowda@ornl.gov> Reviewed-by: Jeff Moyer <jmoyer@redhat.com> Cc: <stable@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
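A hedged sketch of the clamp described above (illustrative, not the literal fs/direct-io.c change):

  #include <linux/bio.h>

  static struct bio *example_alloc_bio(int nr_vecs)
  {
          /* a device may advertise max_segments > BIO_MAX_PAGES, but a single
           * bio can never hold more than BIO_MAX_PAGES vectors */
          nr_vecs = min(nr_vecs, BIO_MAX_PAGES);

          /* with a __GFP_WAIT mask and a valid count, bio_alloc() won't fail */
          return bio_alloc(GFP_KERNEL, nr_vecs);
  }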
2011-01-20  mm: compaction: prevent division-by-zero during user-requested compaction  (Johannes Weiner; 1 file, -0/+11)
Up until 3e7d344 ("mm: vmscan: reclaim order-0 and use compaction instead of lumpy reclaim"), compaction skipped calculating the fragmentation index of a zone when compaction was explicitly requested through the procfs knob. However, when compaction_suitable was introduced, it did not come with an extra check for order == -1, which is set on explicit compaction requests, and passed this order on to the fragmentation index calculation, where it overshifts the number of requested pages, leading to a division by zero. This patch makes sure that order == -1 is recognized as the flag it is rather than being passed along as a valid order parameter. [akpm@linux-foundation.org: add comment, per Mel] Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> Reviewed-by: Mel Gorman <mel@csn.ul.ie> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
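A hedged sketch of the guard described above; the real check lives in compaction_suitable() in mm/compaction.c, and the names here are simplified.

  #include <linux/compaction.h>
  #include <linux/mmzone.h>

  static int example_compaction_suitable(struct zone *zone, int order)
  {
          /*
           * order == -1 is the sentinel used for explicit, user-requested
           * compaction; treat it as "go ahead" instead of feeding it to the
           * fragmentation-index math, where it would overshift and divide
           * by zero.
           */
          if (order == -1)
                  return COMPACT_CONTINUE;

          /* ... watermark and fragmentation-index checks for real orders ... */
          return COMPACT_CONTINUE;
  }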
2011-01-20  mm/vmscan.c: remove duplicate include of compaction.h  (Jesper Juhl; 1 file, -1/+0)
Signed-off-by: Jesper Juhl <jj@chaosbits.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2011-01-20  memblock: fix memblock_is_region_memory()  (Tomi Valkeinen; 1 file, -4/+4)
memblock_is_region_memory() uses reserved memblocks to search for the given region, while it should use the memory memblocks. I encountered the problem with OMAP's framebuffer ram allocation. Normally the ram is allocated dynamically, and this function is not called. However, if we want to pass the framebuffer from the bootloader to the kernel (to retain the boot image), this function is used to check the validity of the kernel parameters for the framebuffer ram area. Signed-off-by: Tomi Valkeinen <tomi.valkeinen@nokia.com> Acked-by: Yinghai Lu <yinghai@kernel.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@elte.hu> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2011-01-20  thp: keep highpte mapped until it is no longer needed  (Johannes Weiner; 1 file, -1/+2)
Two users reported THP-related crashes on 32-bit x86 machines. Their oops reports indicated an invalid pte, and subsequent code inspection showed that the highpte is actually used after unmap. The fix is to unmap the pte only after all operations against it are finished. Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> Reported-by: Ilya Dryomov <idryomov@gmail.com> Reported-by: werner <w.landgraf@ru.ru> Cc: Andrea Arcangeli <aarcange@redhat.com> Tested-by: Ilya Dryomov <idryomov@gmail.com> Tested-by: Steven Rostedt <rostedt@goodmis.org Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
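The general rule behind the fix, sketched for illustration: on 32-bit HIGHPTE configurations pte_offset_map() hands back a temporary kmap, so the mapping must only be torn down after the last access through it.

  #include <linux/mm.h>
  #include <asm/pgtable.h>

  static void example_touch_pte(pmd_t *pmd, unsigned long address)
  {
          pte_t *pte = pte_offset_map(pmd, address);

          /* ... every read or update through 'pte' happens here ... */

          pte_unmap(pte);          /* unmap only once the pte is no longer needed */
  }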
2011-01-20  kconfig: rename CONFIG_EMBEDDED to CONFIG_EXPERT  (David Rientjes; 298 files, -423/+431)
The meaning of CONFIG_EMBEDDED has long since been obsoleted; the option is used to configure any non-standard kernel with a much larger scope than only small devices. This patch renames the option to CONFIG_EXPERT in init/Kconfig and fixes references to the option throughout the kernel. A new CONFIG_EMBEDDED option is added that automatically selects CONFIG_EXPERT when enabled and can be used in the future to isolate options that should only be considered for embedded systems (RISC architectures, SLOB, etc). Calling the option "EXPERT" more accurately represents its intention: only expert users who understand the impact of the configuration changes they are making should enable it. Reviewed-by: Ingo Molnar <mingo@elte.hu> Acked-by: David Woodhouse <david.woodhouse@intel.com> Signed-off-by: David Rientjes <rientjes@google.com> Cc: Greg KH <gregkh@suse.de> Cc: "David S. Miller" <davem@davemloft.net> Cc: Jens Axboe <axboe@kernel.dk> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Robin Holt <holt@sgi.com> Cc: <linux-arch@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2011-01-20  Fix broken "pipe: use event aware wakeups" optimization  (Linus Torvalds; 1 file, -5/+5)
Commit e462c448fdc8 ("pipe: use event aware wakeups") optimized the pipe event wakeup calls to avoid wakeups if the events do not match the requested set. However, the optimization was buggy, in that it didn't actually use the correct sets for the events: when we make room for more data to be written, the pipe poll() routine will return both the POLLOUT _and_ POLLWRNORM bits. Similarly for read. And most critically, when a pipe is released, that will potentially result in POLLHUP|POLLERR (depending on whether it was the last reader or writer), not just the regular POLLIN|POLLOUT. This bug showed itself as a hung gnome-screensaver-dialog process, stuck forever (or at least until it was poked by a signal or by being traced) in a poll() system call. Cc: Davide Libenzi <davidel@xmailserver.org> Cc: David S. Miller <davem@davemloft.net> Cc: Eric Dumazet <eric.dumazet@gmail.com> Cc: Jens Axboe <axboe@kernel.dk> Cc: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
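A hedged sketch of the corrected wakeup keys described above; the real code is in fs/pipe.c and keys on the pipe's own wait queue, so the helper names here are illustrative.

  #include <linux/poll.h>
  #include <linux/wait.h>

  static void example_wake_readers(wait_queue_head_t *wq)
  {
          /* readers poll for POLLIN and POLLRDNORM, so the key must carry both */
          wake_up_interruptible_sync_poll(wq, POLLIN | POLLRDNORM);
  }

  static void example_wake_on_release(wait_queue_head_t *wq)
  {
          /* the last reader or writer going away must also reach POLLHUP/POLLERR waiters */
          wake_up_interruptible_sync_poll(wq, POLLIN | POLLOUT | POLLRDNORM |
                                              POLLWRNORM | POLLERR | POLLHUP);
  }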
2011-01-20  i915: Fix i915 suspend delay  (Linus Torvalds; 2 files, -3/+3)
During system suspend, the "wait for ring buffer to empty" loop would always time out after three seconds, because the faster cached ring buffer head read would always return zero. Force the slow-and-careful PIO read on all but the first iterations of the loop to fix it. This also removes the unused (and useless) 'actual_head' variable that tried to approximate doing this, but did it incorrectly. Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Rafael J. Wysocki <rjw@sisk.pl> Cc: Jesse Barnes <jbarnes@virtuousgeek.org> Cc: Dave Airlie <airlied@linux.ie> Cc: DRI mailing list <dri-devel@lists.freedesktop.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2011-01-21  firewire: net: is not experimental anymore  (Stefan Richter; 1 file, -4/+2)
thanks to Clemens' and Maxim's fixes to firewire-ohci and -net in the last two kernel releases. Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
2011-01-21  firewire: net: invalidate ARP entries of removed nodes  (Maxim Levitsky; 1 file, -1/+8)
This makes it possible to resume communication with a node that dropped off the bus for a brief period. Otherwise communication will only be possible after ARP cache entry timeouts. Signed-off-by: Maxim Levitsky <maximlevitsky@gmail.com> Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de> (rebased)
2011-01-21  firewire: core: fix unstable I/O with Canon camcorder  (Stefan Richter; 1 file, -2/+9)
Regression since commit 10389536742c, "firewire: core: check for 1394a compliant IRM, fix inaccessibility of Sony camcorder": The camcorder Canon MV5i generates lots of bus resets when asynchronous requests are sent to it (e.g. Config ROM read requests or FCP Command write requests) if the camcorder is not root node. This causes drop-outs in videos or makes the camcorder entirely inaccessible. https://bugzilla.redhat.com/show_bug.cgi?id=633260 Fix this by allowing any Canon device, even if it is a pre-1394a IRM like the MV5i, to remain root node (if it is at least Cycle Master capable). With the FireWire controller cards that I tested, the MV5i always becomes root node when plugged in and left to its own devices. Reported-by: Ralf Lange Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de> Cc: <stable@kernel.org> # 2.6.32.y and newer
2011-01-20  cifs: fix unaligned accesses in cifsConvertToUCS  (Jeff Layton; 2 files, -71/+76)
Move cifsConvertToUCS to cifs_unicode.c where all of the other unicode related functions live. Have it store mapped characters in 'temp' and then use put_unaligned_le16 to copy it to the target buffer. Also fix the comments to match kernel coding style. Signed-off-by: Jeff Layton <jlayton@redhat.com> Acked-by: Pavel Shilovsky <piastryyy@gmail.com> Reviewed-by: Shirish Pargaonkar <shirishpargaonkar@gmail.com> Signed-off-by: Steve French <sfrench@us.ibm.com>
2011-01-20  cifs: clean up unaligned accesses in cifs_unicode.c  (Jeff Layton; 1 file, -23/+28)
Make sure we use get/put_unaligned routines when accessing wide character strings. Signed-off-by: Jeff Layton <jlayton@redhat.com> Acked-by: Pavel Shilovsky <piastryyy@gmail.com> Reviewed-by: Shirish Pargaonkar <shirishpargaonkar@gmail.com> Signed-off-by: Steve French <sfrench@us.ibm.com>
2011-01-20  cifs: fix unaligned access in check2ndT2 and coalesce_t2  (Jeff Layton; 1 file, -19/+14)
Signed-off-by: Jeff Layton <jlayton@redhat.com> Acked-by: Pavel Shilovsky <piastryyy@gmail.com> Reviewed-by: Shirish Pargaonkar <shirishpargaonkar@gmail.com> Signed-off-by: Steve French <sfrench@us.ibm.com>
2011-01-20  cifs: clean up unaligned accesses in validate_t2  (Jeff Layton; 1 file, -21/+23)
...and clean up function to reduce indentation. Signed-off-by: Jeff Layton <jlayton@redhat.com> Acked-by: Pavel Shilovsky <piastryyy@gmail.com> Reviewed-by: Shirish Pargaonkar <shirishpargaonkar@gmail.com> Signed-off-by: Steve French <sfrench@us.ibm.com>
2011-01-20  cifs: use get/put_unaligned functions to access ByteCount  (Jeff Layton; 6 files, -32/+65)
It's possible that when we access the ByteCount that the alignment will be off. Most CPUs deal with that transparently, but there's usually some performance impact. Some CPUs raise an exception on unaligned accesses. Fix this by accessing the byte count using the get_unaligned and put_unaligned inlined functions. While we're at it, fix the types of some of the variables that end up getting returns from these functions. Acked-by: Pavel Shilovsky <piastryyy@gmail.com> Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Steve French <sfrench@us.ibm.com>
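For reference, a minimal sketch of the access pattern being adopted; the struct is a stand-in, not the real SMB header layout.

  #include <linux/types.h>
  #include <asm/unaligned.h>

  struct example_smb_tail {
          u8 word_count;
          /* the 16-bit ByteCount follows a variable-length word area,
           * so it is frequently not 2-byte aligned */
          u8 payload[];
  } __attribute__((packed));

  static u16 example_get_bcc(const void *bcc_ptr)
  {
          return get_unaligned_le16(bcc_ptr);     /* safe on any alignment */
  }

  static void example_put_bcc(void *bcc_ptr, u16 count)
  {
          put_unaligned_le16(count, bcc_ptr);
  }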
2011-01-20  cifs: move time field in cifsInodeInfo  (Jeff Layton; 1 file, -5/+5)
...and remove length qualifiers from bools.

Before:
  /* size: 1176, cachelines: 19, members: 13 */
  /* sum members: 1165, holes: 2, sum holes: 11 */
  /* bit holes: 1, sum bit holes: 4 bits */
  /* last cacheline: 24 bytes */

After:
  /* size: 1168, cachelines: 19, members: 13 */
  /* last cacheline: 16 bytes */

...savings of 8 bytes per inode. Signed-off-by: Jeff Layton <jlayton@redhat.com> Reviewed-by: Pavel Shilovsky <piastryyy@gmail.com> Signed-off-by: Steve French <sfrench@us.ibm.com>