path: root/tools/perf/bench/mem-memcpy-x86-64-asm.S
author     Jan Beulich <JBeulich@suse.com>             2012-01-18 13:28:13 +0000
committer  Arnaldo Carvalho de Melo <acme@redhat.com>  2012-01-24 19:50:19 -0200
commit     9ea811973d49a1df0be04ff6e4df449e4fca4fb5 (patch)
tree       db7d041f0a50ed424d131745513620d348d1d8f1 /tools/perf/bench/mem-memcpy-x86-64-asm.S
parent     perf tools: Introduce per user view (diff)
download   linux-dev-9ea811973d49a1df0be04ff6e4df449e4fca4fb5.tar.xz
           linux-dev-9ea811973d49a1df0be04ff6e4df449e4fca4fb5.zip
perf bench: Make "default" memcpy() selection actually use glibc's implementation
Since arch/x86/lib/memcpy_64.S implements not only __memcpy but also memcpy, without further precautions this function will get chosen by the static linker for resolving all memcpy() references, and hence the "default" measurement didn't really measure anything other than the "x86-64-unrolled" one. Fix this by renaming (through the pre-processor) the conflicting symbol.

On my Westmere system, the glibc variant turns out to require about 4% fewer instructions, but 15% more cycles, for the default 1MB block size measured.

Cc: Ingo Molnar <mingo@elte.hu>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/4F16D6FD020000780006D72F@nat28.tlf.novell.com
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Diffstat (limited to 'tools/perf/bench/mem-memcpy-x86-64-asm.S')
-rw-r--r--  tools/perf/bench/mem-memcpy-x86-64-asm.S | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/perf/bench/mem-memcpy-x86-64-asm.S b/tools/perf/bench/mem-memcpy-x86-64-asm.S
index a57b66e853c2..384b60788ab9 100644
--- a/tools/perf/bench/mem-memcpy-x86-64-asm.S
+++ b/tools/perf/bench/mem-memcpy-x86-64-asm.S
@@ -1,2 +1,2 @@
-
+#define memcpy MEMCPY /* don't hide glibc's memcpy() */
 #include "../../../arch/x86/lib/memcpy_64.S"
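
The effect of the rename can be pictured with a small, hypothetical C sketch of the benchmark's routine table (illustrative names only, not the exact perf sources). Because the assembled memcpy_64.S used to export a global memcpy symbol, the "default" entry's function pointer resolved to it rather than to glibc's implementation; with the routine renamed to MEMCPY via the #define above, memcpy is left to glibc again. The MEMCPY prototype below is an assumption standing in for whatever symbol the assembled object now exports, and the sketch would still need to be linked against that object.

/* Hypothetical sketch -- not the actual perf sources.  Assumes it is linked
 * against the object assembled from mem-memcpy-x86-64-asm.S, which exports
 * MEMCPY after the rename above. */
#include <string.h>	/* glibc memcpy() declaration, size_t */

void *MEMCPY(void *dst, const void *src, size_t len);	/* renamed asm routine */

struct routine {
	const char *name;
	const char *desc;
	void *(*fn)(void *dst, const void *src, size_t len);
};

static const struct routine routines[] = {
	/* Resolves to glibc's memcpy only because the assembled object no
	 * longer defines a competing global "memcpy" symbol of its own. */
	{ "default",         "Default memcpy() provided by glibc",            memcpy },
	{ "x86-64-unrolled", "unrolled memcpy() in arch/x86/lib/memcpy_64.S", MEMCPY },
	{ NULL, NULL, NULL },
};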