path: root/arch/arm/mach-w90x900
2014-03-12  ARM: 8003/1: w90x900: remove deprecated IRQF_DISABLED  (Michael Opdenacker; 1 file, -1/+1)
This patch removes the use of the IRQF_DISABLED flag from arch/arm/mach-w90x900/time.c. It's a NOOP since 2.6.35 and it will be removed one day. Signed-off-by: Michael Opdenacker <michael.opdenacker@free-electrons.com> Acked-by: Wan zongshun <mcuos.com@gmail.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
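These IRQF_DISABLED cleanups are all one-line flag changes. A minimal sketch of the pattern, with hypothetical action and handler names rather than the exact w90x900 code:

    /* Before: IRQF_DISABLED OR'd into the flags (a no-op since 2.6.35) */
    static struct irqaction nuc900_timer_irq = {
            .name    = "nuc900-timer",
            .flags   = IRQF_DISABLED | IRQF_TIMER | IRQF_IRQPOLL,
            .handler = nuc900_timer_interrupt,
    };

    /* After: the dead flag is simply dropped */
            .flags   = IRQF_TIMER | IRQF_IRQPOLL,

The same diff shape applies to every IRQF_DISABLED removal in this series.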
2014-03-12  ARM: 8002/1: spear: remove deprecated IRQF_DISABLED  (Michael Opdenacker; 1 file, -1/+1)
This patch removes the use of the IRQF_DISABLED flag from arch/arm/mach-spear/time.c. It's a NOOP since 2.6.35 and it will be removed one day. Signed-off-by: Michael Opdenacker <michael.opdenacker@free-electrons.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-03-12  ARM: 8001/1: mmp: remove deprecated IRQF_DISABLED  (Michael Opdenacker; 1 file, -1/+1)
This patch removes the use of the IRQF_DISABLED flag from arch/arm/mach-mmp/time.c. It's a NOOP since 2.6.35 and it will be removed one day. Signed-off-by: Michael Opdenacker <michael.opdenacker@free-electrons.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-03-12  ARM: 8000/1: misc: remove deprecated IRQF_DISABLED  (Michael Opdenacker; 9 files, -9/+8)
This patch removes the use of the IRQF_DISABLED flag from miscellaneous code in mach-xxx and plat-xxx. This flag is a NOOP since 2.6.35 and it will be removed one day. Signed-off-by: Michael Opdenacker <michael.opdenacker@free-electrons.com> Acked-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Greg Ungerer <gerg@uclinux.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-03-12  ARM: 7999/1: arch/arm/mach-lpc32xx-remove-irqf-disabled  (Michael Opdenacker; 1 file, -1/+1)
This patch removes the use of the IRQF_DISABLED flag from arch/arm/mach-lpc32xx/timer.c. It's a NOOP since 2.6.35 and it will be removed one day. Signed-off-by: Michael Opdenacker <michael.opdenacker@free-electrons.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-03-12  ARM: 7998/1: IXP4xx: remove deprecated IRQF_DISABLED  (Michael Opdenacker; 5 files, -13/+7)
This patch removes the use of the IRQF_DISABLED flag from code in arch/arm/mach-ixp4xx. It's a NOOP since 2.6.35 and it will be removed one day. Signed-off-by: Michael Opdenacker <michael.opdenacker@free-electrons.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-03-12  ARM: 7997/1: cns3xxx: remove deprecated IRQF_DISABLED  (Michael Opdenacker; 1 file, -1/+1)
This patch removes the use of the IRQF_DISABLED flag from arch/arm/mach-cns3xxx/core.c. It's a NOOP since 2.6.35 and it will be removed one day. Signed-off-by: Michael Opdenacker <michael.opdenacker@free-electrons.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-03-12  ARM: 7996/1: floppy.h: remove deprecated IRQF_DISABLED  (Michael Opdenacker; 1 file, -1/+1)
This patch removes the use of the IRQF_DISABLED flag in arch/arm/include/asm/floppy.h. It's a NOOP since 2.6.35 and it will be removed one day. Signed-off-by: Michael Opdenacker <michael.opdenacker@free-electrons.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-03-12  ARM: 7995/1: footbridge: remove obsolete IRQF_DISABLED  (Michael Opdenacker; 3 files, -7/+7)
This patch removes the IRQF_DISABLED flag from footbridge code. It's a NOOP since 2.6.35. Signed-off-by: Michael Opdenacker <michael.opdenacker@free-electrons.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-03-12  ARM: 7993/1: mm/memblock: add memblock_get_current_limit  (Laura Abbott; 2 files, -0/+7)
Apart from setting the limit of memblock, it's also useful to be able to get the limit to avoid recalculating it every time. Add the function to do so. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Acked-by: Andrew Morton <akpm@linux-foundation.org> Acked-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Laura Abbott <lauraa@codeaurora.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
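The accessor is a one-liner; a sketch of what the commit adds, mirroring the existing memblock_set_current_limit():

    /* mm/memblock.c: report the current allocation limit */
    phys_addr_t __init_memblock memblock_get_current_limit(void)
    {
            return memblock.current_limit;
    }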
2014-02-25  ARM: 7987/1: ARM : unwinder : Prevent data abort due to stack overflow  (Anurag Aggarwal; 1 file, -37/+100)
While unwinding a backtrace, a stack overflow is possible. This stack overflow can sometimes lead to a data abort if the area after the stack is not mapped to physical memory. To prevent this problem from happening, execute the instructions that can cause a data abort in separate helper functions, where a check for feasibility is made before reading each word from the stack. Signed-off-by: Anurag Aggarwal <a.anurag@samsung.com> Reviewed-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
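A hedged sketch of the idea (names are illustrative, not necessarily the exact helpers added to arch/arm/kernel/unwind.c): every pop from the stack first verifies that the virtual stack pointer still lies inside the known-good stack range:

    /* Illustrative only: check feasibility before dereferencing the stack */
    static int unwind_pop_word(struct unwind_ctrl_block *ctrl, unsigned long *val)
    {
            if (ctrl->check_each_pop && ctrl->vrs[SP] >= ctrl->sp_high)
                    return -URC_FAILURE;    /* would read past the mapped stack */
            *val = *(unsigned long *)ctrl->vrs[SP];
            ctrl->vrs[SP] += 4;
            return URC_OK;
    }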
2014-02-25  ARM: 7982/1: introduce HWCAP2 feature bits for ARMv8 Crypto Extensions  (Ard Biesheuvel; 2 files, -0/+10)
This allocates feature bits 0-4 in HWCAP2 for the crypto and CRC extensions introduced in ARMv8. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
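The resulting bit assignments in arch/arm/include/uapi/asm/hwcap.h:

    /* HWCAP2 feature bits for the ARMv8 crypto/CRC extensions */
    #define HWCAP2_AES      (1 << 0)
    #define HWCAP2_PMULL    (1 << 1)
    #define HWCAP2_SHA1     (1 << 2)
    #define HWCAP2_SHA2     (1 << 3)
    #define HWCAP2_CRC32    (1 << 4)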
2014-02-25  ARM: 7981/1: add support for AT_HWCAP2 ELF auxv entry  (Ard Biesheuvel; 3 files, -1/+17)
This enables AT_HWCAP2 for ARM. The generic support for this new ELF auxv entry was added in commit 2171364d1a9 (powerpc: Add HWCAP2 aux entry) Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
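From userspace the new entry is read the same way as AT_HWCAP; a small illustrative program (requires a libc exposing getauxval, glibc >= 2.18):

    #include <stdio.h>
    #include <sys/auxv.h>

    int main(void)
    {
            /* returns 0 on kernels without AT_HWCAP2 support */
            unsigned long hwcap2 = getauxval(AT_HWCAP2);
            printf("HWCAP2: %#lx\n", hwcap2);
            return 0;
    }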
2014-02-25  ARM: 7986/1: bios32: use pci_enable_resource to enable PCI resources  (Will Deacon; 1 file, -34/+3)
This patch moves bios32 over to using the generic code for enabling PCI resources. Since the core code takes care of bridge resources too, we can also drop the explicit IO and MEMORY enabling for them in the arch code. A side-effect of this change is that we no longer explicitly enable devices when running in PCI_PROBE_ONLY mode. This stays closer to the meaning of the option and prevents us from trying to enable devices without any assigned resources (the core code refuses to enable resources without parents). Tested-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com> Tested-by: Jingoo Han <jg1.han@samsung.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-02-25  ARM: 7983/1: atomics: implement a better __atomic_add_unless for v6+  (Will Deacon; 1 file, -4/+31)
Looking at perf profiles of multi-threaded hackbench runs, a significant performance hit appears to come from the cmpxchg loop used to implement the 32-bit atomic_add_unless function. This can be mitigated by writing a direct implementation of __atomic_add_unless which doesn't require iteration outside of the atomic operation. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
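A hedged sketch of the shape of the direct v6+ implementation (simplified; the real commit also handles barrier placement and the 64-bit variant):

    /* Add 'a' to *v unless *v == u, in a single ldrex/strex loop */
    static inline int __atomic_add_unless(atomic_t *v, int a, int u)
    {
            int oldval, newval;
            unsigned long tmp;

            smp_mb();
            __asm__ __volatile__ ("@ atomic_add_unless\n"
    "1:     ldrex   %0, [%4]\n"
    "       teq     %0, %5\n"
    "       beq     2f\n"
    "       add     %1, %0, %6\n"
    "       strex   %2, %1, [%4]\n"
    "       teq     %2, #0\n"
    "       bne     1b\n"
    "2:"
                    : "=&r" (oldval), "=&r" (newval), "=&r" (tmp), "+Qo" (v->counter)
                    : "r" (&v->counter), "r" (u), "r" (a)
                    : "cc");

            if (oldval != u)
                    smp_mb();
            return oldval;
    }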
2014-02-25  ARM: 7990/1: asm: rename logical shift macros push pull into lspush lspull  (Victor Kamensky; 7 files, -196/+196)
Renames logical shift macros, 'push' and 'pull', defined in arch/arm/include/asm/assembler.h, into 'lspush' and 'lspull'. That eliminates name conflict between 'push' logical shift macro and 'push' instruction mnemonic. That allows assembler.h to be included in .S files that use 'push' instruction. Suggested-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org> Acked-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-02-25  ARM: 7989/1: Delete asm/system.h  (David Howells; 6 files, -12/+1)
Delete ARM's asm/system.h. It's the last holdout and should be got rid of. This builds for defconfig, lpc32xx_defconfig, exynos_defconfig + XEN, the previous config changed to a Gemini system, and an omap3 config with TI_DAVINCI_EMAC. Signed-off-by: David Howells <dhowells@redhat.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Cc: linux-arm-kernel@lists.infradead.org Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-02-25  ARM: 7985/1: mm: implement pte_accessible for faulting mappings  (Will Deacon; 1 file, -2/+5)
The pte_accessible macro can be used to identify page table entries capable of being cached by a TLB. In principle, this differs from pte_present, since PROT_NONE mappings are mapped using invalid entries identified as present and ptes designated as `old' can use either invalid entries or those with the access flag cleared (guaranteed not to be in the TLB). However, there is a race to take care of, as described in 20841405940e ("mm: fix TLB flush race between migration, and change_protection_range"), between a page being migrated and mprotected at the same time. In this case, we can check whether a TLB invalidation is pending for the mm and if so, temporarily consider PROT_NONE mappings as valid. This patch implements a quick pte_accessible macro for ARM by simply checking if the pte is valid/present depending on the mm. For classic MMU, these checks are identical and will generate some false positives for PROT_NONE mappings, but this is better than the current asm-generic definition of ((void)(pte),1). Finally, pte_present_user is moved to use pte_valid (and renamed appropriately) since we don't care about cache flushing for faulting mappings. Acked-by: Steve Capper <steve.capper@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
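The resulting ARM definition is essentially:

    /* arch/arm/include/asm/pgtable.h, after this commit */
    #define pte_accessible(mm, pte) \
            (mm_tlb_flush_pending(mm) ? pte_present(pte) : pte_valid(pte))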
2014-02-25  ARM: 7984/1: prefetch: add prefetchw invocations for barriered atomics  (Will Deacon; 4 files, -0/+23)
After a bunch of benchmarking on the interaction between dmb and pldw, it turns out that issuing the pldw *after* the dmb instruction can give modest performance gains (~3% atomic_add_return improvement on a dual A15). This patch adds prefetchw invocations to our barriered atomic operations including cmpxchg, test_and_xxx and futexes. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
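The pattern, sketched on one representative op (assumption: same shape as the kernel's ARM atomic_add_return; the real patch touches the cmpxchg, test_and_xxx and futex paths too):

    static inline int atomic_add_return(int i, atomic_t *v)
    {
            unsigned long tmp;
            int result;

            smp_mb();
            prefetchw(&v->counter);         /* pldw issued *after* the dmb */

            __asm__ __volatile__("@ atomic_add_return\n"
    "1:     ldrex   %0, [%3]\n"
    "       add     %0, %0, %4\n"
    "       strex   %1, %0, [%3]\n"
    "       teq     %1, #0\n"
    "       bne     1b"
                    : "=&r" (result), "=&r" (tmp), "+Qo" (v->counter)
                    : "r" (&v->counter), "Ir" (i)
                    : "cc");

            smp_mb();
            return result;
    }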
2014-02-18  ARM: 7979/1: mm: Remove hugetlb warning from Coherent DMA allocator  (Steve Capper; 1 file, -3/+0)
The Coherent DMA allocator allocates pages of high order then splits them up into smaller pages. This splitting logic would run into problems if the allocator were given compound pages. Thus the Coherent DMA allocator was originally incompatible with compound pages existing and, by extension, with huge pages. A compile #error was put in place whenever huge pages were enabled. Compatibility with compound pages has since been introduced by the following commit (which merely excludes GFP_COMP pages from being requested by the coherent DMA allocator):

    ea2e705 ARM: 7172/1: dma: Drop GFP_COMP for DMA memory allocations

When huge page support was introduced to ARM, the compile #error in dma-mapping.c was replaced by a #warning when it should have been removed instead. This patch removes the compile #warning in dma-mapping.c when huge pages are enabled. Signed-off-by: Steve Capper <steve.capper@linaro.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-02-18  ARM: 7962/2: Make all mcpm functions notrace  (Dave Martin; 2 files, -0/+4)
The functions in mcpm_entry.c are mostly intended for use during scary cache and coherency disabling sequences, or do other things which confuse trace ... like powering a CPU down and not returning. Similarly for the backend code. For simplicity, this patch just makes whole files notrace. There should be more than enough traceable points on the paths to these functions, but we can be more fine-grained later if there is a need for it. Jon Medhurst: Also added spc.o to the list of files as it contains functions used by MCPM code which have comments like: "might be used in code paths where normal cacheable locks are not working" Signed-off-by: Dave Martin <dave.martin@linaro.org> Signed-off-by: Jon Medhurst <tixy@linaro.org> Acked-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
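Making a whole file notrace is a kbuild-level trick rather than a per-function annotation; a hedged sketch of the likely mechanism (object names from the description above, exact Makefiles assumed):

    # arch/arm/common/Makefile: keep ftrace's -pg instrumentation out
    CFLAGS_REMOVE_mcpm_entry.o = -pg

    # arch/arm/mach-vexpress/Makefile: ditto for the SPC backend
    CFLAGS_REMOVE_spc.o = -pg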
2014-02-10  ARM: 7954/1: mm: remove remaining domain support from ARMv6  (Will Deacon; 6 files, -28/+18)
CPU_32v6 currently selects CPU_USE_DOMAINS if CPU_V6 and MMU. This is because ARM 1136 r0pX CPUs lack the v6k extensions, and therefore do not have hardware thread registers. The lack of these registers requires the kernel to update the vectors page at each context switch in order to write a new TLS pointer. This write must be done via the userspace mapping, since aliasing caches can lead to expensive flushing when using kmap. Finally, this requires the vectors page to be mapped r/w for kernel and r/o for user, which has implications for things like put_user which must trigger CoW appropriately when targeting user pages. The upshot of all this is that a v6/v7 kernel makes use of domains to segregate kernel and user memory accesses. This has the nasty side-effect of making device mappings executable, which has been observed to cause subtle bugs on recent cores (e.g. Cortex-A15 performing a speculative instruction fetch from the GIC and acking an interrupt in the process). This patch solves this problem by removing the remaining domain support from ARMv6. A new memory type is added specifically for the vectors page which allows that page (and only that page) to be mapped as user r/o, kernel r/w. All other user r/o pages are also mapped as kernel r/o. Patch co-developed with Russell King. Cc: <stable@vger.kernel.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-02-10  ARM: 7951/1: uaccess: use CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS  (Nicolas Pitre; 1 file, -1/+1)
Now that we select HAVE_EFFICIENT_UNALIGNED_ACCESS for ARMv6+ CPUs, replace the __LINUX_ARM_ARCH__ check in uaccess.h with the new symbol. Signed-off-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-02-10  ARM: 7949/1: feroceon: Log a FW_BUG if the L2 cache is turned on at boot  (Jason Gunthorpe; 1 file, -1/+3)
Booting on Feroceon CPUs requires the L2 cache to be turned off. With some kernel configurations (notably CONFIG_ARM_PATCH_PHYS_VIRT disabled) the kernel will boot even if the L2 is turned on. However there may be subtle breakage, and when PATCH_PHYS_VIRT is enabled it is very likely that booting with the L2 on will crash at early boot, before any kernel diagnostic output. The diagnostic message is intended to discourage people from shipping bootloaders that leave the L2 turned on. The issue on Feroceon is that the L2 is bypassed when the L1 caches are disabled. So the decompressor will place parts of the kernel image into the L2, and the early cache-off boot code in head.S will write to parts of the kernel image, bypassing the L2 and creating inconsistency. Tested on ARM Kirkwood. Signed-off-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com> Acked-by: Jason Cooper <jason@lakedaemon.net> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
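The diagnostic presumably uses the kernel's FW_BUG message prefix; a hedged sketch (the condition variable and message text are illustrative, not lifted from the patch):

    /* Illustrative: complain loudly about bootloaders that leave L2 on */
    if (l2_enabled_at_boot)
            pr_err(FW_BUG "Feroceon L2: bootloader left the L2 cache on!\n");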
2014-02-10  ARM: 7948/1: hw_breakpoint: Add ARMv8 support  (Christopher Covington; 2 files, -1/+3)
Add the trivial support necessary to get hardware breakpoints working for GDB on ARMv8 simulators running in AArch32 mode. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Christopher Covington <cov@codeaurora.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-02-10  ARM: 7945/1: footbridge: Switch to sched_clock_register()  (Stephen Boyd; 1 file, -2/+2)
The 32 bit sched_clock interface supports 64 bits since 3.13-rc1. Upgrade to the 64 bit function to allow us to remove the 32 bit registration interface. Signed-off-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
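The conversion is mechanical: the read callback's return type widens to u64 and the registration call changes. Roughly (callback name and bit width are illustrative, not the exact footbridge values):

    /* Before: 32-bit interface */
    setup_sched_clock(footbridge_read_sched_clock, 16, rate);

    /* After: 64-bit interface; the callback now returns u64 */
    sched_clock_register(footbridge_read_sched_clock, 16, rate);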
2014-02-10  ARM: 7940/1: add support for the Cortex-A12 processor  (Jonathan Austin; 2 files, -0/+12)
The A12 behaves as the A7/A15 does with respect to setting the SMP bit, and doesn't require TLB ops broadcasting to be explicitly enabled like the A9 does. Note that as the ACTLR cannot (usually) be written from non-secure, it is the responsibility of the bootloader/firmware to set this bit per core; it is done here in Linux as a last resort in case of bad firmware. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Jonathan Austin <jonathan.austin@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
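A hedged sketch of the last-resort fixup (illustrative inline asm, not the proc-v7.S code; ACTLR is typically writable only from the secure side, hence the firmware caveat):

    /* Set ACTLR.SMP (bit 6 on A7/A12/A15) if firmware forgot to */
    unsigned int actlr;

    asm volatile("mrc p15, 0, %0, c1, c0, 1" : "=r" (actlr));
    actlr |= 1 << 6;                        /* SMP bit */
    asm volatile("mcr p15, 0, %0, c1, c0, 1" : : "r" (actlr));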
2014-02-09  Linux 3.14-rc2  (Linus Torvalds; 1 file, -1/+1)
2014-02-09  fix a kmap leak in virtio_console  (Al Viro; 1 file, -6/+3)
While we are at it, don't do kmap() under kmap_atomic(), *especially* for a page we'd allocated with GFP_KERNEL. It's spelled "page_address", and had it been anything more than that, we'd have real trouble - kmap_high() can block, and doing that while holding kmap_atomic() is a Bad Idea(tm). Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2014-02-09  fix O_SYNC|O_APPEND syncing the wrong range on write()  (Al Viro; 7 files, -25/+14)
It actually goes back to 2004 ([PATCH] Concurrent O_SYNC write support) when sync_page_range() had been introduced; generic_file_write{,v}() correctly synced

    pos_after_write - written .. pos_after_write - 1

but generic_file_aio_write() synced

    pos_before_write .. pos_before_write + written - 1

instead. Which is not the same thing with O_APPEND, obviously. A couple of years later the correct variant was killed off when everything switched to use of generic_file_aio_write(). All users of generic_file_aio_write() are affected, and the same bug has been copied into other instances of ->aio_write(). The fix is trivial; the only subtle point is that generic_write_sync() ought to be inlined to avoid calculations useless for the majority of calls. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
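A hedged sketch of the fixed helper, close to (but not guaranteed identical to) the resulting fs.h code; callers pass the position the data actually landed at:

    /* Sync exactly the range that was just written: [pos, pos + count - 1] */
    static inline int generic_write_sync(struct file *file, loff_t pos, loff_t count)
    {
            if (!(file->f_flags & O_DSYNC) && !IS_SYNC(file->f_mapping->host))
                    return 0;
            return vfs_fsync_range(file, pos, pos + count - 1,
                                   (file->f_flags & __O_SYNC) ? 0 : 1);
    }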
2014-02-08  Btrfs: fix data corruption when reading/updating compressed extents  (Filipe David Borba Manana; 1 file, -0/+2)
When using a mix of compressed file extents and prealloc extents, it is possible to fill a page of a file with random, garbage data from some unrelated previous use of the page, instead of a sequence of zeroes. A simple sequence of steps to get into such a case, taken from the test case I made for xfstests, is:

    _scratch_mkfs
    _scratch_mount "-o compress-force=lzo"
    $XFS_IO_PROG -f -c "pwrite -S 0x06 -b 18670 266978 18670" $SCRATCH_MNT/foobar
    $XFS_IO_PROG -c "falloc 26450 665194" $SCRATCH_MNT/foobar
    $XFS_IO_PROG -c "truncate 542872" $SCRATCH_MNT/foobar
    $XFS_IO_PROG -c "fsync" $SCRATCH_MNT/foobar

This results in the following file items in the fs tree:

    item 4 key (257 INODE_ITEM 0) itemoff 15879 itemsize 160
        inode generation 6 transid 6 size 542872 block group 0 mode 100600
    item 5 key (257 INODE_REF 256) itemoff 15863 itemsize 16
        inode ref index 2 namelen 6 name: foobar
    item 6 key (257 EXTENT_DATA 0) itemoff 15810 itemsize 53
        extent data disk byte 0 nr 0 gen 6
        extent data offset 0 nr 24576 ram 266240
        extent compression 0
    item 7 key (257 EXTENT_DATA 24576) itemoff 15757 itemsize 53
        prealloc data disk byte 12849152 nr 241664 gen 6
        prealloc data offset 0 nr 241664
    item 8 key (257 EXTENT_DATA 266240) itemoff 15704 itemsize 53
        extent data disk byte 12845056 nr 4096 gen 6
        extent data offset 0 nr 20480 ram 20480
        extent compression 2
    item 9 key (257 EXTENT_DATA 286720) itemoff 15651 itemsize 53
        prealloc data disk byte 13090816 nr 405504 gen 6
        prealloc data offset 0 nr 258048

The on-disk extent at offset 266240 (which corresponds to 1 single disk block) contains 5 compressed chunks of file data. Each of the first 4 compresses 4096 bytes of file data, while the last one only compresses 3024 bytes of file data. Therefore a read into the file region [285648 ; 286720[ (length = 4096 - 3024 = 1072 bytes) should always return zeroes (our next extent is a prealloc one). The solution here is for the compression code path to zero the remaining (untouched) bytes of the last page it uncompressed data into, as the information about how much space the file data consumes in the last page is not known in the upper layer fs/btrfs/extent_io.c:__do_readpage(). In __do_readpage we were correctly zeroing the remainder of the page but only if it corresponds to the last page of the inode and if the inode's size is not a multiple of the page size. This would cause not only returning random data on reads, but also permanently storing random data when updating parts of the region that should be zeroed. For the example above, it means updating a single byte in the region [285648 ; 286720[ would store that byte correctly but also store random data on disk. A test case for xfstests follows soon. Signed-off-by: Filipe David Borba Manana <fdmanana@gmail.com> Signed-off-by: Chris Mason <clm@fb.com>
2014-02-08  Btrfs: don't loop forever if we can't run because of the tree mod log  (Josef Bacik; 1 file, -0/+1)
A user reported a 100% cpu hang with my new delayed ref code. Turns out I forgot to increase the count check when we can't run a delayed ref because of the tree mod log. If we can't run any delayed refs during this, there is no point in continuing to look, and we need to break out. Thanks, Signed-off-by: Josef Bacik <jbacik@fb.com> Signed-off-by: Chris Mason <clm@fb.com>
2014-02-08  btrfs: reserve no transaction units in btrfs_ioctl_set_features  (David Sterba; 1 file, -1/+1)
The modifications to the superblock made in the patch "btrfs: add ioctls to query/change feature bits online" don't need to reserve metadata blocks when starting a transaction. Signed-off-by: David Sterba <dsterba@suse.cz> Signed-off-by: Chris Mason <clm@fb.com>
2014-02-08  btrfs: commit transaction after setting label and features  (Jeff Mahoney; 1 file, -2/+2)
The set_fslabel ioctl uses btrfs_end_transaction, which means it's possible that the change will be lost if the system crashes, same for the newly set features. Let's use btrfs_commit_transaction instead. Signed-off-by: Jeff Mahoney <jeffm@suse.com> Signed-off-by: David Sterba <dsterba@suse.cz> Signed-off-by: Chris Mason <clm@fb.com>
2014-02-08  Btrfs: fix assert screwup for the pending move stuff  (Josef Bacik; 1 file, -5/+3)
Wang noticed that he was failing btrfs/030 even though Filipe and I couldn't reproduce it. Turns out this is because Wang didn't have CONFIG_BTRFS_ASSERT set, which meant that a key part of Filipe's original patch was not being built in. This appears to be a mix-up when merging Filipe's patch, as it does not exist in his original patch. Fix this by changing how we make sure del_waiting_dir_move asserts that it did not error, and take the function out of the ifdef check. This makes btrfs/030 pass with the assert on or off. Thanks, Signed-off-by: Josef Bacik <jbacik@fb.com> Reviewed-by: Filipe Manana <fdmanana@gmail.com> Signed-off-by: Chris Mason <clm@fb.com>
2014-02-08  jfs: fix generic posix ACL regression  (Dave Kleikamp; 1 file, -7/+7)
I missed a couple of errors in reviewing the patches converting jfs to use the generic POSIX ACL functions. Setting ACLs currently fails with -EOPNOTSUPP. Signed-off-by: Dave Kleikamp <dave.kleikamp@oracle.com> Reported-by: Michael L. Semon <mlsemon35@gmail.com> Reviewed-by: Christoph Hellwig <hch@lst.de>
2014-02-08  watchdog: dw_wdt: Add dependency on HAS_IOMEM  (Richard Weinberger; 1 file, -0/+1)
On archs like S390 or um this driver can neither build nor work. Make it depend on HAS_IOMEM to bypass build failures such as:

    drivers/built-in.o: In function `dw_wdt_drv_probe':
    drivers/watchdog/dw_wdt.c:302: undefined reference to `devm_ioremap_resource'

Signed-off-by: Richard Weinberger <richard@nod.at> Signed-off-by: Wim Van Sebroeck <wim@iguana.be>
2014-02-07  libceph: do not dereference a NULL bio pointer  (Ilya Dryomov; 1 file, -2/+6)
Commit f38a5181d9f3 ("ceph: Convert to immutable biovecs") introduced a NULL pointer dereference, which broke rbd in -rc1. Fix it. Cc: Kent Overstreet <kmo@daterainc.com> Signed-off-by: Ilya Dryomov <ilya.dryomov@inktank.com> Reviewed-by: Sage Weil <sage@inktank.com>
2014-02-07  libceph: take map_sem for read in handle_reply()  (Ilya Dryomov; 1 file, -6/+11)
Handling redirect replies requires both map_sem and request_mutex. Taking map_sem unconditionally near the top of handle_reply() avoids possible race conditions that arise from releasing request_mutex to be able to acquire map_sem in redirect reply case. (Lock ordering is: map_sem, request_mutex, crush_mutex.) Signed-off-by: Ilya Dryomov <ilya.dryomov@inktank.com> Reviewed-by: Sage Weil <sage@inktank.com>
2014-02-07  libceph: factor out logic from ceph_osdc_start_request()  (Ilya Dryomov; 1 file, -23/+39)
Factor out logic from ceph_osdc_start_request() into a new helper, __ceph_osdc_start_request(). ceph_osdc_start_request() now amounts to taking locks and calling __ceph_osdc_start_request(). Signed-off-by: Ilya Dryomov <ilya.dryomov@inktank.com> Reviewed-by: Sage Weil <sage@inktank.com>
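Shape of the refactor, sketched with abbreviated signatures (locking order per the previous commit: map_sem, then request_mutex):

    /* Callers take the locks; the helper does the submission work */
    int ceph_osdc_start_request(struct ceph_osd_client *osdc,
                                struct ceph_osd_request *req, bool nofail)
    {
            int rc;

            down_read(&osdc->map_sem);
            mutex_lock(&osdc->request_mutex);
            rc = __ceph_osdc_start_request(osdc, req, nofail);
            mutex_unlock(&osdc->request_mutex);
            up_read(&osdc->map_sem);
            return rc;
    }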
2014-02-07  arm64: defconfig: Expand default enabled features  (Mark Rutland; 2 files, -4/+15)
FPGA implementations of the Cortex-A57 and Cortex-A53 are now available in the form of the SMM-A57 and SMM-A53 Soft Macrocell Models (SMMs) for Versatile Express. As these attach to a Motherboard Express V2M-P1 it would be useful to have support for some V2M-P1 peripherals enabled by default. Additionally, a couple of features have been introduced since the last defconfig update (CMA, jump labels) that would be good to have enabled by default to ensure they are built and boot tested. This patch updates the arm64 defconfig to enable support for these devices and features. The arm64 Kconfig is modified to select HAVE_PATA_PLATFORM, which is required to enable support for the CompactFlash controller on the V2M-P1. A few options which don't need to appear in defconfig are trimmed:

  * BLK_DEV - selected by default
  * EXPERIMENTAL - otherwise gone from the kernel
  * MII - selected by drivers which require it
  * USB_SUPPORT - selected by default

Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-02-07  arm64: asm: remove redundant "cc" clobbers  (Will Deacon; 4 files, -25/+21)
cbnz/tbnz don't update the condition flags, so remove the "cc" clobbers from inline asm blocks that only use these instructions to implement conditional branches. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
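For example (illustrative helper, not lifted verbatim from the patch):

    /* cbnz tests the register directly and never touches NZCV,
     * so the asm block needs no "cc" clobber */
    static inline void wait_until_zero(unsigned long *p)
    {
            unsigned long tmp;

            asm volatile(
            "1:     ldr     %0, %1\n"
            "       cbnz    %0, 1b\n"
                    : "=&r" (tmp)
                    : "Q" (*p));            /* note: no "cc" */
    }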
2014-02-07  arm64: atomics: fix use of acquire + release for full barrier semantics  (Will Deacon; 5 files, -18/+35)
Linux requires a number of atomic operations to provide full barrier semantics, that is no memory accesses after the operation can be observed before any accesses up to and including the operation in program order. On arm64, these operations have been incorrectly implemented as follows:

    // A, B, C are independent memory locations

    <Access [A]>

    // atomic_op (B)
    1:  ldaxr   x0, [B]        // Exclusive load with acquire
        <op(B)>
        stlxr   w1, x0, [B]    // Exclusive store with release
        cbnz    w1, 1b

    <Access [C]>

The assumption here being that two half barriers are equivalent to a full barrier, so the only permitted ordering would be A -> B -> C (where B is the atomic operation involving both a load and a store). Unfortunately, this is not the case by the letter of the architecture and, in fact, the accesses to A and C are permitted to pass their nearest half barrier resulting in orderings such as Bl -> A -> C -> Bs or Bl -> C -> A -> Bs (where Bl is the load-acquire on B and Bs is the store-release on B). This is a clear violation of the full barrier requirement. The simple way to fix this is to implement the same algorithm as ARMv7 using explicit barriers:

    <Access [A]>

    // atomic_op (B)
        dmb     ish            // Full barrier
    1:  ldxr    x0, [B]        // Exclusive load
        <op(B)>
        stxr    w1, x0, [B]    // Exclusive store
        cbnz    w1, 1b
        dmb     ish            // Full barrier

    <Access [C]>

but this has the undesirable effect of introducing *two* full barrier instructions. A better approach is actually the following, non-intuitive sequence:

    <Access [A]>

    // atomic_op (B)
    1:  ldxr    x0, [B]        // Exclusive load
        <op(B)>
        stlxr   w1, x0, [B]    // Exclusive store with release
        cbnz    w1, 1b
        dmb     ish            // Full barrier

    <Access [C]>

The simple observations here are:

  - The dmb ensures that no subsequent accesses (e.g. the access to C) can enter or pass the atomic sequence.
  - The dmb also ensures that no prior accesses (e.g. the access to A) can pass the atomic sequence.
  - Therefore, no prior access can pass a subsequent access, or vice-versa (i.e. A is strictly ordered before C).
  - The stlxr ensures that no prior access can pass the store component of the atomic operation.

The only tricky part remaining is the ordering between the ldxr and the access to A, since the absence of the first dmb means that we're now permitting re-ordering between the ldxr and any prior accesses. From an (arbitrary) observer's point of view, there are two scenarios:

  1. We have observed the ldxr. This means that if we perform a store to [B], the ldxr will still return older data. If we can observe the ldxr, then we can potentially observe the permitted re-ordering with the access to A, which is clearly an issue when compared to the dmb variant of the code. Thankfully, the exclusive monitor will save us here since it will be cleared as a result of the store and the ldxr will retry. Notice that any use of a later memory observation to imply observation of the ldxr will also imply observation of the access to A, since the stlxr/dmb ensure strict ordering.

  2. We have not observed the ldxr. This means we can perform a store and influence the later ldxr. However, that doesn't actually tell us anything about the access to [A], so we've not lost anything here either when compared to the dmb variant.

This patch implements this solution for our barriered atomic operations, ensuring that we satisfy the full barrier requirements where they are needed. Cc: <stable@vger.kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-02-06  hwmon: (da9055) Remove use of regmap_irq_get_virq()  (Adam Thomson; 1 file, -4/+0)
Remove use of regmap_irq_get_virq() in driver probe which was conflicting with use of platform_get_irq_byname(). platform_get_irq_byname() already returns the VIRQ number due to MFD core translation so using regmap_irq_get_virq() on that returned value results in an incorrect IRQ being requested. The driver probes then fail because of this. Signed-off-by: Adam Thomson <Adam.Thomson.Opensource@diasemi.com> Signed-off-by: Guenter Roeck <linux@roeck-us.net>
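The resulting probe pattern, sketched (IRQ and handler names are illustrative):

    /* platform_get_irq_byname() already returns a virq, translated by the
     * MFD core; do NOT feed it through regmap_irq_get_virq() again. */
    int irq = platform_get_irq_byname(pdev, "HWMON");
    if (irq < 0)
            return irq;

    ret = devm_request_threaded_irq(&pdev->dev, irq, NULL, da9055_irq_handler,
                                    IRQF_TRIGGER_HIGH | IRQF_ONESHOT,
                                    "da9055-hwmon", hwmon);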
2014-02-06  mm: __set_page_dirty uses spin_lock_irqsave instead of spin_lock_irq  (KOSAKI Motohiro; 1 file, -2/+4)
To use spin_{un}lock_irq is dangerous if the caller has disabled interrupts. During aio buffer migration, we have a possibility to see the following call stack:

    aio_migratepage          [disables interrupts]
      migrate_page_copy
        clear_page_dirty_for_io
          set_page_dirty
            __set_page_dirty_buffers
              __set_page_dirty
                spin_lock_irq

This means the current aio migration path is deadlockable. spin_lock_irqsave is a safer alternative and we should use it. Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> Reported-by: David Rientjes <rientjes@google.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
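The fix follows the standard pattern for helpers that may run with interrupts already disabled:

    /* Preserve the caller's interrupt state instead of blindly re-enabling */
    unsigned long flags;

    spin_lock_irqsave(&mapping->tree_lock, flags);
    /* ... mark the page and its radix-tree entry dirty ... */
    spin_unlock_irqrestore(&mapping->tree_lock, flags);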
2014-02-06  arch/x86/mm/numa.c: fix array index overflow when synchronizing nid to memblock.reserved  (Tang Chen; 1 file, -8/+11)
The following path will cause an array-out-of-bounds access. memblock_add_region() will always set nid in memblock.reserved to MAX_NUMNODES. In numa_register_memblks(), after we set all nid to correct values in memblock.reserved, we called setup_node_data(), and used memblock_alloc_nid() to allocate memory, with nid set to MAX_NUMNODES. The nodemask_t type can be seen as a bit array, and the index is 0 ~ MAX_NUMNODES-1. After that, when we call node_set() in numa_clear_kernel_node_hotplug(), the nodemask_t got an index of value MAX_NUMNODES, which is out of [0 ~ MAX_NUMNODES-1]. See below:

    numa_init()
     |---> numa_register_memblks()
     |      |---> memblock_set_node(memory)    set correct nid in memblock.memory
     |      |---> memblock_set_node(reserved)  set correct nid in memblock.reserved
     |      |......
     |      |---> setup_node_data()
     |             |---> memblock_alloc_nid()  here, nid is set to MAX_NUMNODES (1024)
     |......
     |---> numa_clear_kernel_node_hotplug()
            |---> node_set()                   here, we have an index 1024, and overflowed

This patch moves the nid setting to numa_clear_kernel_node_hotplug() to fix this problem. Reported-by: Dave Jones <davej@redhat.com> Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com> Tested-by: Gu Zheng <guz.fnst@cn.fujitsu.com> Tested-by: Dave Jones <davej@redhat.com> Cc: David Rientjes <rientjes@google.com> Cc: Ingo Molnar <mingo@elte.hu> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-02-06  arch/x86/mm/numa.c: initialize numa_kernel_nodes in numa_clear_kernel_node_hotplug()  (Tang Chen; 1 file, -1/+1)
On-stack variable numa_kernel_nodes in numa_clear_kernel_node_hotplug() was not initialized. So we need to initialize it. [akpm@linux-foundation.org: use NODE_MASK_NONE, per David] Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com> Tested-by: Gu Zheng <guz.fnst@cn.fujitsu.com> Reported-by: Dave Jones <davej@redhat.com> Reported-by: David Rientjes <rientjes@google.com> Tested-by: Dave Jones <davej@redhat.com> Cc: Ingo Molnar <mingo@elte.hu> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-02-06  mm: __set_page_dirty_nobuffers() uses spin_lock_irqsave() instead of spin_lock_irq()  (KOSAKI Motohiro; 1 file, -2/+3)
During an aio stress test, we observed the following lockdep warning. This means AIO+numa_balancing is currently deadlockable. The problem is that aio_migratepage disables interrupts, but __set_page_dirty_nobuffers unintentionally enables them again. Generally, all helper functions should use spin_lock_irqsave() instead of spin_lock_irq() because they don't know their caller at all.

    other info that might help us debug this:
     Possible unsafe locking scenario:
           CPU0
           ----
      lock(&(&ctx->completion_lock)->rlock);
      <Interrupt>
        lock(&(&ctx->completion_lock)->rlock);

     *** DEADLOCK ***

      dump_stack+0x19/0x1b
      print_usage_bug+0x1f7/0x208
      mark_lock+0x21d/0x2a0
      mark_held_locks+0xb9/0x140
      trace_hardirqs_on_caller+0x105/0x1d0
      trace_hardirqs_on+0xd/0x10
      _raw_spin_unlock_irq+0x2c/0x50
      __set_page_dirty_nobuffers+0x8c/0xf0
      migrate_page_copy+0x434/0x540
      aio_migratepage+0xb1/0x140
      move_to_new_page+0x7d/0x230
      migrate_pages+0x5e5/0x700
      migrate_misplaced_page+0xbc/0xf0
      do_numa_page+0x102/0x190
      handle_pte_fault+0x241/0x970
      handle_mm_fault+0x265/0x370
      __do_page_fault+0x172/0x5a0
      do_page_fault+0x1a/0x70
      page_fault+0x28/0x30

Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> Cc: Larry Woodman <lwoodman@redhat.com> Cc: Rik van Riel <riel@redhat.com> Cc: Johannes Weiner <jweiner@redhat.com> Acked-by: David Rientjes <rientjes@google.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-02-06  mm/swap: fix race on swap_info reuse between swapoff and swapon  (Weijie Yang; 1 file, -1/+10)
swapoff clears swap_info's SWP_USED flag prematurely and frees its resources after that. A concurrent swapon will reuse this swap_info while its previous resources are not cleared completely. These late-freed resources are:

  - p->percpu_cluster
  - swap_cgroup_ctrl[type]
  - block_device setting
  - inode->i_flags &= ~S_SWAPFILE

This patch clears the SWP_USED flag after all its resources are freed, so that swapon can reuse this swap_info by alloc_swap_info() safely. [akpm@linux-foundation.org: tidy up code comment] Signed-off-by: Weijie Yang <weijie.yang@samsung.com> Acked-by: Hugh Dickins <hughd@google.com> Cc: Krzysztof Kozlowski <k.kozlowski@samsung.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-02-06  swap: add a simple detector for inappropriate swapin readahead  (Shaohua Li; 2 files, -5/+62)
This is a patch to improve the swap readahead algorithm. It's from Hugh and I slightly changed it. Hugh's original changelog:

swapin readahead does a blind readahead, whether or not the swapin is sequential. This may be ok on harddisk, because large reads have relatively small costs, and if the readahead pages are unneeded they can be reclaimed easily - though, what if their allocation forced reclaim of useful pages? But on SSD devices large reads are more expensive than small ones: if the readahead pages are unneeded, reading them in caused significant overhead. This patch adds very simplistic random read detection. Stealing the PageReadahead technique from Konstantin Khlebnikov's patch, avoiding the vma/anon_vma sophistications of Shaohua Li's patch, swapin_nr_pages() simply looks at readahead's current success rate, and narrows or widens its readahead window accordingly. There is little science to its heuristic: it's about as stupid as can be whilst remaining effective. The table below shows elapsed times (in centiseconds) when running a single repetitive swapping load across a 1000MB mapping in 900MB ram with 1GB swap (the harddisk tests had taken painfully too long when I used mem=500M, but SSD shows similar results for that). Vanilla is the 3.6-rc7 kernel on which I started; Shaohua denotes his Sep 3 patch in mmotm and linux-next; HughOld denotes my Oct 1 patch which Shaohua showed to be defective; HughNew this Nov 14 patch, with page_cluster as usual at default of 3 (8-page reads); HughPC4 this same patch with page_cluster 4 (16-page reads); HughPC0 with page_cluster 0 (1-page reads: no readahead). HDD for swapping to harddisk, SSD for swapping to VertexII SSD. Seq for sequential access to the mapping, cycling five times around; Rand for the same number of random touches. Anon for a MAP_PRIVATE anon mapping; Shmem for a MAP_SHARED anon mapping, equivalent to tmpfs. One weakness of Shaohua's vma/anon_vma approach was that it did not optimize Shmem: seen below. Konstantin's approach was perhaps mistuned, 50% slower on Seq: did not compete and is not shown below.

    HDD          Vanilla  Shaohua  HughOld  HughNew  HughPC4  HughPC0
    Seq Anon       73921    76210    75611    76904    78191   121542
    Seq Shmem      73601    73176    73855    72947    74543   118322
    Rand Anon     895392   831243   871569   845197   846496   841680
    Rand Shmem   1058375  1053486   827935   764955   764376   756489

    SSD          Vanilla  Shaohua  HughOld  HughNew  HughPC4  HughPC0
    Seq Anon       24634    24198    24673    25107    21614    70018
    Seq Shmem      24959    24932    25052    25703    22030    69678
    Rand Anon      43014    26146    28075    25989    26935    25901
    Rand Shmem     45349    45215    28249    24268    24138    24332

These tests are, of course, two extremes of a very simple case: under heavier mixed loads I've not yet observed any consistent improvement or degradation, and wider testing would be welcome.

Shaohua Li: Tests show Vanilla is slightly better in the sequential workload than Hugh's patch. I observed that with Hugh's patch the readahead size is sometimes shrunk too fast (from 8 to 1 immediately) in the sequential workload if there is no hit, and in such a case continuing to do readahead is actually good. I didn't prepare a sophisticated algorithm for the sequential workload because so far we can't guarantee that sequentially accessed pages are swapped out sequentially. So I slightly changed Hugh's heuristic - don't shrink the readahead size too fast. Here is my test result (unit: seconds, average of 3 runs):

             Vanilla  Hugh   New
    Seq          356   370   360
    Random      4525  2447  2444

Attached graph is the swapin/swapout throughput I collected with 'vmstat 2'.
The first part is running a random workload (until around 1200 on the x-axis) and the second part is running a sequential workload. swapin and swapout throughput are almost identical in steady state in both workloads, which is the expected behavior; in Vanilla, by contrast, swapin is much bigger than swapout, especially in the random workload (because of wrong readahead). Original patches by: Shaohua Li and Konstantin Khlebnikov. [fengguang.wu@intel.com: swapin_nr_pages() can be static] Signed-off-by: Hugh Dickins <hughd@google.com> Signed-off-by: Shaohua Li <shli@fusionio.com> Signed-off-by: Fengguang Wu <fengguang.wu@intel.com> Cc: Rik van Riel <riel@redhat.com> Cc: Wu Fengguang <fengguang.wu@intel.com> Cc: Minchan Kim <minchan@kernel.org> Cc: Konstantin Khlebnikov <khlebnikov@openvz.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
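A hedged sketch of the heuristic's core, simplified from the mm/swap_state.c code this commit adds (details may differ slightly; swapin_readahead_hits is an external counter bumped whenever a readahead page actually gets used):

    /* Narrow or widen the readahead window based on recent hit rate */
    static unsigned long swapin_nr_pages(unsigned long offset)
    {
            static unsigned long prev_offset;
            static atomic_t last_readahead_pages;
            unsigned int pages, max_pages, last_ra;

            max_pages = 1 << ACCESS_ONCE(page_cluster);
            if (max_pages <= 1)
                    return 1;

            /* Count readahead hits since the last swapin fault */
            pages = atomic_xchg(&swapin_readahead_hits, 0) + 2;
            if (pages == 2) {
                    /* No hits: only read ahead if access looks sequential */
                    if (offset != prev_offset + 1 && offset != prev_offset - 1)
                            pages = 1;
                    prev_offset = offset;
            } else {
                    /* Round the window up to a power of two */
                    unsigned int roundup = 4;
                    while (roundup < pages)
                            roundup <<= 1;
                    pages = roundup;
            }
            if (pages > max_pages)
                    pages = max_pages;

            /* Shaohua's tweak: don't shrink the window too fast */
            last_ra = atomic_read(&last_readahead_pages) / 2;
            if (pages < last_ra)
                    pages = last_ra;
            atomic_set(&last_readahead_pages, pages);

            return pages;
    }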