author:    H. Peter Anvin <hpa@zytor.com>  2008-11-11 13:51:52 -0800
committer: H. Peter Anvin <hpa@zytor.com>  2008-11-11 13:51:52 -0800
commit:    939b787130bf22887a09d8fd2641a094dcef8c22 (patch)
tree:      6bdd272bb742bf2916d35c04cb8a6dd24e2dd135 /arch/x86/kernel/entry_32.S
parent:    x86: 32 bits: shrink and align IRQ stubs (diff)
x86: 64 bits: shrink and align IRQ stubs
Move the IRQ stub generation to assembly to simplify it and for consistency with 32 bits. Doing it in a C file with asm() statements doesn't help clarity, and it prevents some optimizations.

Shrink the IRQ stubs down to just over four bytes each (we fit seven into a 32-byte chunk). This brings the total icache consumption of the IRQ stubs down to an even kilobyte, if all of them are in active use.

The downside is that we end up with a double jump, which could have a negative effect on some pipelines; however, the double jump always stays within a single cacheline on any modern chip. To get the most effect, cache-align the IRQ stubs.

This makes the 64-bit code match changes already done to the 32-bit code, and should open up irqinit*.c for unification.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
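To illustrate the layout the message describes, here is a minimal GNU as sketch of cache-aligned IRQ stubs packed seven to a 32-byte block. It is not the literal hunk from this commit: the irq_stubs_start label is made up, FIRST_EXTERNAL_VECTOR and NR_VECTORS are assumed to be defined by the usual kernel headers, and the real code additionally records each stub's address in a table for irq setup and carries CFI unwind annotations, all omitted here.

	.p2align 5				/* cache-align the whole block of stubs */
irq_stubs_start:				/* hypothetical label, not the kernel's own */
vector = FIRST_EXTERNAL_VECTOR			/* assumed defined elsewhere (0x20 in the kernel) */
.rept (NR_VECTORS - FIRST_EXTERNAL_VECTOR + 6) / 7
	.balign 32				/* each group of seven stubs starts a fresh 32-byte chunk */
  .rept 7
    .if vector < NR_VECTORS
	pushq $(~vector + 0x80)			/* encoded so the immediate stays in signed-byte range: 2 bytes */
      .if ((vector - FIRST_EXTERNAL_VECTOR) % 7) <> 6
	jmp 2f					/* 2-byte short jump to the end of this chunk */
      .endif
vector = vector + 1
    .endif
  .endr
2:	jmp common_interrupt			/* shared second hop of the double jump */
.endr

Six of the seven stubs are a 2-byte push plus a 2-byte short jump (4 bytes each); the seventh falls through its push into the 5-byte near jump to common_interrupt, so a full group occupies 31 of its 32 bytes, which is where the "just over four bytes each" figure comes from. With the usual 224 external vectors, that is 32 chunks of 32 bytes, i.e. the even kilobyte of icache mentioned above.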
Diffstat (limited to 'arch/x86/kernel/entry_32.S')
0 files changed, 0 insertions, 0 deletions