author      Joonsoo Kim <iamjoonsoo.kim@lge.com>    2014-10-13 15:51:01 -0700
committer   Linus Torvalds <torvalds@linux-foundation.org>    2014-10-14 02:18:12 +0200
commit      85c9f4b04a08f6bc770b77530c22d04103468b8f (patch)
tree        06d2bc1feb7560c2973795a93ee37c68a9c88119 /mm
parent      Merge branch 'x86-cpu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip (diff)
download    linux-dev-85c9f4b04a08f6bc770b77530c22d04103468b8f.tar.xz
            linux-dev-85c9f4b04a08f6bc770b77530c22d04103468b8f.zip
mm/slab: fix unaligned access on sparc64
Commit bf0dea23a9c0 ("mm/slab: use percpu allocator for cpu cache") changed the allocation of the cpu cache array from the slab allocator to the percpu allocator. The percpu allocator must be told the required alignment, but that commit mistakenly passed an alignment of 0, so the percpu allocator can return an unaligned address. This causes no problem on x86, which permits unaligned access, but it breaks sparc64, which requires strict alignment.

The following bug report came from David Miller:

  I'm getting tons of the following on sparc64:

  [603965.383447] Kernel unaligned access at TPC[546b58] free_block+0x98/0x1a0
  [603965.396987] Kernel unaligned access at TPC[546b60] free_block+0xa0/0x1a0
  ...
  [603970.554394] log_unaligned: 333 callbacks suppressed
  ...

This patch passes a proper alignment when allocating the cpu cache, fixing the unaligned memory accesses on sparc64.

Reported-by: David Miller <davem@davemloft.net>
Tested-by: David Miller <davem@davemloft.net>
Tested-by: Meelis Roos <mroos@linux.ee>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
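For illustration only, and not part of the patch itself: a minimal sketch of the allocation pattern the fix restores. __alloc_percpu() is the real kernel percpu API, struct array_cache is the local definition in mm/slab.c, and the helper name example_alloc_cpu_cache() is hypothetical.

  #include <linux/percpu.h>

  /*
   * Sketch: size the per-cpu cpu cache the way alloc_kmem_cache_cpus() does
   * and ask the percpu allocator for pointer alignment.  Passing 0 as the
   * second argument waives the alignment guarantee, which is what broke
   * sparc64; sizeof(void *) suffices because the array_cache header is
   * followed by an array of void * entries.
   */
  static struct array_cache __percpu *example_alloc_cpu_cache(int entries)
  {
          size_t size = sizeof(void *) * entries + sizeof(struct array_cache);

          return __alloc_percpu(size, sizeof(void *));  /* was: __alloc_percpu(size, 0) */
  }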
Diffstat (limited to 'mm')
-rw-r--r--  mm/slab.c  2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/slab.c b/mm/slab.c
index 154aac8411c5..eb2b2ea30130 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1992,7 +1992,7 @@ static struct array_cache __percpu *alloc_kmem_cache_cpus(
 	struct array_cache __percpu *cpu_cache;
 
 	size = sizeof(void *) * entries + sizeof(struct array_cache);
-	cpu_cache = __alloc_percpu(size, 0);
+	cpu_cache = __alloc_percpu(size, sizeof(void *));
 
 	if (!cpu_cache)
 		return NULL;
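As a hedged aside that is not part of the commit: a misalignment like this could also be caught at allocation time rather than at the later free_block() fault. The check below uses the kernel's existing IS_ALIGNED(), WARN_ON_ONCE() and per_cpu_ptr() helpers; placing it right after the allocation in alloc_kmem_cache_cpus() is purely illustrative.

  /*
   * Hypothetical sanity check: warn once if the per-cpu cpu cache is not
   * pointer-aligned, instead of faulting later on strict-alignment
   * architectures such as sparc64.  All per-cpu copies share the same
   * offset, so checking CPU 0 is representative.
   */
  WARN_ON_ONCE(!IS_ALIGNED((unsigned long)per_cpu_ptr(cpu_cache, 0),
                           sizeof(void *)));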