path: arch/x86_64/kernel/vmlinux.lds.S
2007-05-19  all-archs: consolidate .data section definition in asm-generic  (Sam Ravnborg; 1 file, -1/+1)
With this consolidation we can now modify the .data section definition in one spot for all archs.

Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
2007-05-19  all-archs: consolidate .text section definition in asm-generic  (Sam Ravnborg; 1 file, -1/+1)
Move definition of .text section to asm-generic.

Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
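For context, the shape of these two consolidations, as a simplified sketch rather than the exact hunks (TEXT_TEXT and DATA_DATA are the asm-generic macro names; the real expansions carry a few more input sections):

    /* include/asm-generic/vmlinux.lds.h -- one definition for all archs */
    #define TEXT_TEXT  *(.text)
    #define DATA_DATA  *(.data)

    /* arch/x86_64/kernel/vmlinux.lds.S -- the arch script just expands them */
    .text : AT(ADDR(.text) - LOAD_OFFSET) { TEXT_TEXT } :text
    .data : AT(ADDR(.data) - LOAD_OFFSET) { DATA_DATA } :data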
2007-05-02  [PATCH] x86-64: Remove CONFIG_REORDER  (Andi Kleen; 1 file, -3/+0)
The option never worked well and functionlist wasn't well maintained. Also it made the build very slow on many binutils versions. So just remove it.

Cc: arjan@linux.intel.com
Signed-off-by: Andi Kleen <ak@suse.de>
2007-05-02  [PATCH] x86-64: move __vgetcpu_mode & __jiffies to the vsyscall_2 zone  (Eric Dumazet; 1 file, -4/+6)
We apparently hit the 1024-byte limit of the vsyscall_0 zone when some debugging options are set, or if __vsyscall_gtod_data is 64 bytes larger. In order to save 128 bytes from the vsyscall_0 zone, we move __vgetcpu_mode & __jiffies to the vsyscall_2 zone where they really belong, since they are used only from vgetcpu() (which is in this vsyscall_2 area).

After the patch is applied, the new layout is:

ffffffffff600000 T vgettimeofday
ffffffffff60004e t vsysc2
ffffffffff600140 t vread_hpet
ffffffffff600150 t vread_tsc
ffffffffff600180 D __vsyscall_gtod_data
ffffffffff600400 T vtime
ffffffffff600413 t vsysc1
ffffffffff600800 T vgetcpu
ffffffffff600870 D __vgetcpu_mode
ffffffffff600880 D __jiffies
ffffffffff600c00 T venosys_1

Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: Andi Kleen <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
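A minimal sketch of how the linker script pins these objects at the fixed, user-visible addresses shown above (section names from the commit; the AT() load-address plumbing is elided and the offsets are illustrative):

    . = VSYSCALL_ADDR;                      /* 0xffffffffff600000 */
    .vsyscall_0 : { *(.vsyscall_0) } :user  /* crowded zone, <= 1024 bytes */

    .vsyscall_1 ADDR(.vsyscall_0) + 1024 : { *(.vsyscall_1) }  /* vtime */
    .vsyscall_2 ADDR(.vsyscall_0) + 2048 : { *(.vsyscall_2) }  /* vgetcpu */
    .vgetcpu_mode : { *(.vgetcpu_mode) }    /* now placed next to vgetcpu()... */
    .jiffies : { *(.jiffies) }              /* ...instead of in vsyscall_0 */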
2007-05-02  [PATCH] x86: Allow percpu variables to be page-aligned  (Jeremy Fitzhardinge; 1 file, -1/+1)
Let's allow page-alignment in general for per-cpu data (wanted by Xen, and Ingo suggested KVM as well). Because larger alignments can use more room, we increase the max per-cpu memory to 64k rather than 32k: it's getting a little tight.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Acked-by: Ingo Molnar <mingo@elte.hu>
Cc: Andi Kleen <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
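Sketch of the section around the changed line, assuming the usual x86-64 layout of the era; the one-line change is the alignment bump at the top:

    . = ALIGN(PAGE_SIZE);    /* was a smaller, cacheline-sized alignment */
    __per_cpu_start = .;
    .data.percpu : AT(ADDR(.data.percpu) - LOAD_OFFSET) {
        *(.data.percpu)      /* per-cpu template, copied once per possible CPU */
    }
    __per_cpu_end = .;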
2007-05-02  [PATCH] x86: tighten kernel image page access rights  (Jan Beulich; 1 file, -2/+3)
On x86-64, kernel memory freed after init can be entirely unmapped instead of just getting 'poisoned' by overwriting with a debug pattern. On i386 and x86-64 (under CONFIG_DEBUG_RODATA), kernel text and the bug table can also be write-protected.

Compared to the first version, this one prevents re-creating previously deleted mappings in the kernel image range on x86-64. This, together with the original changes, prevents temporarily having inconsistent mappings when cacheability attributes are being changed on such pages (e.g. from AGP code). While such duplicate mappings don't exist on i386, the same change is done there, too, both for consistency and because checking pte_present() before using various other pte_XXX functions is a requirement anyway. At once, the i386 code gets adjusted to use pte_huge() instead of open coding this.

AK: split out cpa() changes

Signed-off-by: Jan Beulich <jbeulich@novell.com>
Signed-off-by: Andi Kleen <ak@suse.de>
2007-04-16  [PATCH] x86: Fix gcc 4.2 _proxy_pda workaround  (Andi Kleen; 1 file, -1/+1)
Due to an over-aggressive optimizer, gcc 4.2 cannot optimize away _proxy_pda in all cases (counter-intuitive, but true). This breaks loading of some modules. The earlier workaround of just exporting a dummy symbol didn't work, unfortunately, because the module code ignores exports with value 0. Make it 1 instead.

Signed-off-by: Andi Kleen <ak@suse.de>
2007-02-16  [PATCH] time: x86_64: re-enable vsyscall support for x86_64  (john stultz; 1 file, -17/+11)
Cleanup and re-enable vsyscall gettimeofday using the generic clocksource infrastructure.

[akpm@osdl.org: cleanup]
Signed-off-by: John Stultz <johnstul@us.ibm.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Andi Kleen <ak@muc.de>
Cc: Roman Zippel <zippel@linux-m68k.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-02-11  [PATCH] disable init/initramfs.c: architectures  (Jean-Paul Saman; 1 file, -0/+4)
Update all arch/*/kernel/vmlinux.lds.S to not include space for initramfs when CONFIG_BLK_DEV_INITRAMFS is not selected. This saves another 4 kbytes on most platforms (some reserve PAGE_SIZE for initramfs).

Signed-off-by: Jean-Paul Saman <jean-paul.saman@nxp.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2006-12-15  Remove stack unwinder for now  (Linus Torvalds; 1 file, -2/+0)
It has caused more problems than it ever really solved, and is apparently not getting cleaned up and fixed. We can put it back when it's stable and isn't likely to make warning or bug events worse. In the meantime, enable frame pointers for more readable stack traces.

Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-12-09  [PATCH] x86: Work around gcc 4.2 over aggressive optimizer  (Andi Kleen; 1 file, -0/+1)
The new PDA code uses a dummy _proxy_pda variable to describe memory references to the PDA. It is never referenced in inline assembly, but exists as input/output arguments. gcc 4.2 can in some cases CSE references to it, which causes unresolved symbols. Define it to zero to avoid this.

Signed-off-by: Andi Kleen <ak@suse.de>
2006-12-08  [PATCH] Generic BUG for x86-64  (Jeremy Fitzhardinge; 1 file, -0/+2)
This makes x86-64 use the generic BUG machinery. The main advantage is that the inlined overhead of BUG is just the ud2a instruction; the file+line information is no longer inlined into the instruction stream. This reduces cache pollution.

Signed-off-by: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: Andi Kleen <ak@muc.de>
Cc: Hugh Dickens <hugh@veritas.com>
Cc: Michael Ellerman <michael@ellerman.id.au>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
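The linker-script side of this is a table collecting the per-BUG records that the ud2a-based BUG() emits out of line; roughly (a sketch following the generic-BUG symbol names, not the exact hunk):

    __bug_table : AT(ADDR(__bug_table) - LOAD_OFFSET) {
        __start___bug_table = .;
        *(__bug_table)       /* one bug_entry (ip, file, line) per BUG() site */
        __stop___bug_table = .;
    }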
2006-12-07  [PATCH] unwinder: move .eh_frame to RODATA  (Jan Beulich; 1 file, -9/+0)
The .eh_frame section contents are never written to, so they can just as well benefit from CONFIG_DEBUG_RODATA. Diff-ed against the firstfloor tree.

Signed-off-by: Jan Beulich <jbeulich@novell.com>
Signed-off-by: Andi Kleen <ak@suse.de>
2006-11-21  [PATCH] x86_64: Align data segment to PAGE_SIZE boundary  (Vivek Goyal; 1 file, -0/+1)
Explicitly align the data segment to a PAGE_SIZE boundary. Otherwise, depending on config options and toolchain, it might be placed on a non-page-aligned boundary, and vmlinux loaders like kexec fail when they encounter a PT_LOAD segment that is not aligned to PAGE_SIZE.

Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com>
Signed-off-by: Andi Kleen <ak@suse.de>
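The fix is essentially one alignment directive ahead of the first section of the data segment; a sketch under the usual x86-64 layout assumptions:

    . = ALIGN(PAGE_SIZE);    /* keep the data PT_LOAD segment page-aligned for kexec */
    .data : AT(ADDR(.data) - LOAD_OFFSET) {
        *(.data)
        CONSTRUCTORS
    } :data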
2006-10-27  [PATCH] vmlinux.lds: consolidate initcall sections  (Andrew Morton; 1 file, -7/+1)
Add a vmlinux.lds.h helper macro for defining the eight-level initcall table, and teach all the architectures to use it. This is a prerequisite for a patch which performs initcall synchronisation for multithreaded probing.

Cc: Greg KH <greg@kroah.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
[ Added AVR32 as well ]
Signed-off-by: Haavard Skinnemoen <hskinnemoen@atmel.com>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
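The helper looks roughly like this (abbreviated sketch; the real macro spells out every initcall level):

    /* include/asm-generic/vmlinux.lds.h */
    #define INITCALLS          \
        *(.initcall1.init)     \
        *(.initcall2.init)     \
        /* ...one line per remaining level... */ \
        *(.initcall7.init)

    /* arch script */
    .initcall.init : AT(ADDR(.initcall.init) - LOAD_OFFSET) {
        __initcall_start = .;
        INITCALLS
        __initcall_end = .;
    }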
2006-10-21  [PATCH] x86-64: Overlapping program headers in physical addr space fix  (Vivek Goyal; 1 file, -1/+2)
A recent change to the vmlinux.lds.S file broke kexec: the resulting vmlinux program headers now overlap in physical address space.

All the vsyscall-related sections are now placed after data, and after that mostly init data sections are placed. To avoid physical overlap among phdrs, there are three possible solutions:

 - Place the vsyscall sections in the data phdr instead of user.
 - Move the vsyscall sections after the init data, into bss.
 - Create another phdr, say data.init, and move all the sections after vsyscall into this new phdr.

This patch implements the third solution, as sketched below.

Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: Magnus Damm <magnus@valinux.co.jp>
Cc: Andi Kleen <ak@suse.de>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
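The third option boils down to one more entry in PHDRS plus retargeting the first post-vsyscall section; an illustrative sketch (p_flags bits: 4=R, 2=W, 1=X):

    PHDRS {
        text PT_LOAD FLAGS(5);        /* R_E */
        data PT_LOAD FLAGS(7);        /* RWE */
        user PT_LOAD FLAGS(7);        /* vsyscall sections */
        data.init PT_LOAD FLAGS(7);   /* new: init sections after the vsyscall zone */
    }

    /* first section after the vsyscall zone switches to the new segment */
    .init.text : AT(ADDR(.init.text) - LOAD_OFFSET) {
        *(.init.text)
    } :data.init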
2006-10-01  [PATCH] kill wall_jiffies  (Atsushi Nemoto; 1 file, -3/+0)
With 2.6.18-rc4-mm2, wall_jiffies will now always be the same as jiffies, so we can kill wall_jiffies completely. This is just a cleanup and logically should not change any real behavior, except for one thing: the RTC updating code in (old) ppc and xtensa uses the condition "jiffies - wall_jiffies == 1". This condition is never met, so I suppose it is just a bug. I removed only that condition instead of killing the whole "if" block.

[heiko.carstens@de.ibm.com: s390 build fix and cleanup]
Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Cc: Andi Kleen <ak@muc.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Ian Molton <spyro@f2s.com>
Cc: Mikael Starvik <starvik@axis.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: Hirokazu Takata <takata.hirokazu@renesas.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Kyle McMartin <kyle@mcmartin.ca>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Kazumoto Kojima <kkojima@rr.iij4u.or.jp>
Cc: Richard Curnow <rc@rc0.org.uk>
Cc: William Lee Irwin III <wli@holomorphy.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Jeff Dike <jdike@addtoit.com>
Cc: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>
Cc: Miles Bader <uclinux-v850@lsi.nec.co.jp>
Cc: Chris Zankel <chris@zankel.net>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Roman Zippel <zippel@linux-m68k.org>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-09-30  [PATCH] Re-positioning the bss segment  (Vivek Goyal; 1 file, -7/+7)
[AK: This apparently broke some systems, but we need it to fix a compile problem with old binutils, and in theory the patch is correct. So let's try re-enabling it again.]

o Currently the bss segment is placed somewhere in the middle (after .data), and lots of init sections and data sections come after bss. Is that intentional?

o One side effect of placing bss in the middle is that objcopy keeps the bss in the raw binary image (vmlinux.bin), hence unnecessarily increasing the size of the raw binary image (in my case by ~600K). It also increases the size of the generated bzImage, though the increase is very small (896 bytes), probably thanks to a very high compression ratio for a stream of zeros.

o This patch moves the bss to the end, reducing the size of bzImage by 896 bytes and the size of vmlinux.bin by 600K.

o This change benefits the relocatable kernel patches. If the kernel bss is not part of the compressed data (vmlinux.bin), then it does not have to be decompressed, and this area can be used by the decompressor for its execution, keeping the memory requirements bounded; the decompressor code then does not stomp over any other data loaded beyond the kernel image (as might be the case with bootloaders like kexec).

Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com>
Signed-off-by: Andi Kleen <ak@suse.de>
2006-09-26  [PATCH] Mark per cpu data initialization __initdata again  (Andi Kleen; 1 file, -5/+3)
Before 2.6.16 this was changed to work around code that accessed CPUs not in the possible map. But that code should be all fixed now, so mark it __initdata again.

Signed-off-by: Andi Kleen <ak@suse.de>
2006-09-26  [PATCH] Put .note.* sections into a PT_NOTE segment  (Ian Campbell; 1 file, -4/+10)
This patch updates the x86_64 linker script to pack any .note.* sections into a PT_NOTE segment in the output file.

To do this, we tell ld that we need a PT_NOTE segment. This requires us to start explicitly mapping sections to segments, so we also need to explicitly create PT_LOAD segments for text and data, and map the sections to them appropriately. Fortunately, each section defaults to its previous section's segment, so this doesn't take many changes to vmlinux.lds.S.

The corresponding change is already made for i386 in -mm, and I'd like this patch to join it. The section-to-segment mappings do change, as do the segment flags, so some time in -mm would be good for that reason as well, just in case. In particular, .data and .bss move from the text segment to the data segment, and .data.cacheline_aligned and .data.read_mostly are put in the data segment instead of a separate one.

I think it would be possible to exactly match the existing section-to-segment mapping and flags, but it would be a more intrusive change, and I'm not sure there is a reason for the existing layout other than it is what you get by default if you don't explicitly specify something else. If there is a reason for the existing layout then I will of course make the more intrusive change. If there is no reason, we could probably drop the executable or writable flags from some segments, but I don't know how much attention is paid to them anyway, so it might not be worth the effort.

The vsyscall-related sections need to go in a different segment from the normal data segment, so I invented a "user" segment to contain them. I believe this should appear to be another data segment as far as the kernel is concerned, so the flags are set up accordingly.

The notes will be used in the Xen paravirt_ops backend to provide additional information to the domain builder. I am in the process of converting the xen-unstable kernels and tools over to this scheme to support this in the future.

It has been suggested to me that the notes segment should have flags 0 (i.e. not readable) since it is only used by the loader and not at runtime. For now I went with a readable segment, since that is what the i386 patch uses.

AK: dropped NOTES addition right now because the needed infrastructure for that is not merged yet

Signed-off-by: Ian Campbell <ian.campbell@xensource.com>
Signed-off-by: Andi Kleen <ak@suse.de>
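A sketch of the resulting segment setup as described above (FLAGS values illustrative; per the AK note the generic NOTES macro was deferred, so the exact section line may differ):

    PHDRS {
        text PT_LOAD FLAGS(5);   /* R_E: .text */
        data PT_LOAD FLAGS(7);   /* RWE: .data, .bss, ... */
        user PT_LOAD FLAGS(7);   /* RWE: user-visible vsyscall sections */
        note PT_NOTE FLAGS(4);   /* R: consumed by loaders, e.g. the Xen domain builder */
    }
    SECTIONS {
        .text : { *(.text) } :text    /* explicit mapping; later sections default
                                         to the previous section's segment */
        .notes : { *(.note.*) } :note
    }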
2006-09-26  [PATCH] Add the vgetcpu vsyscall  (Vojtech Pavlik; 1 file, -0/+3)
This patch adds a vgetcpu vsyscall, which, depending on the CPU's RDTSCP capability, uses either RDTSCP or CPUID to obtain CPU and node numbers and pass them to the program.

AK: Lots of changes over Vojtech's original code:
 - Better prototype for vgetcpu(). It's better to pass the cpu / node numbers as separate arguments to avoid mistakes when going from SMP to NUMA.
 - Also add a fast time-stamp-based cache using a user-supplied argument to speed things up more.
 - Use the fast method from Chuck Ebbert to retrieve node/cpu from the GDT limit instead of CPUID.
 - Made sure RDTSCP init is always executed after the node is known.
 - Drop printk.

Signed-off-by: Vojtech Pavlik <vojtech@suse.cz>
Signed-off-by: Andi Kleen <ak@suse.de>
2006-06-30  Remove obsolete #include <linux/config.h>  (Jörn Engel; 1 file, -1/+0)
Signed-off-by: Jörn Engel <joern@wohnheim.fh-wedel.de>
Signed-off-by: Adrian Bunk <bunk@stusta.de>
2006-06-26  [PATCH] x86_64: reliable stack trace support (x86-64)  (Jan Beulich; 1 file, -0/+9)
These are the x86_64-specific pieces to enable reliable stack traces. The only restriction is that it currently cannot unwind across the interrupt->normal stack boundary, as that transition lacks proper annotation.

Signed-off-by: Jan Beulich <jbeulich@novell.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
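The handful of added lines keep the DWARF unwind data loadable so the in-kernel unwinder can walk it at runtime; approximately (symbol names illustrative):

    #ifdef CONFIG_STACK_UNWIND
      . = ALIGN(8);
      .eh_frame : AT(ADDR(.eh_frame) - LOAD_OFFSET) {
        unwind_start = .;
        *(.eh_frame)         /* DWARF CFI records, consumed by the unwinder */
        unwind_end = .;
      }
    #endif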
2006-06-26  [PATCH] x86_64: x86_64 version of the smp alternative patch.  (Gerd Hoffmann; 1 file, -0/+20)
Changes are largely identical to the i386 version:

 * alternative #defines are moved to the new alternative.h file.
 * one new ELF section with pointers to the lock prefixes, which can be nop'ed out for non-SMP (sketched below).
 * two new ELF sections similar to the "classic" alternatives, to replace SMP code with simpler UP code.
 * fixup headers to use alternative.h instead of defining their own LOCK / LOCK_PREFIX macros.

The patch reuses the i386 version of the alternatives code to avoid code duplication. The code in alternatives.c was shuffled around a bit to reduce the number of #ifdefs needed. It also got some tweaks needed for x86_64 (vsyscall page handling) and new features (the noreplacement option, which was x86_64-only up to now). Debug printk's are changed from compile-time to runtime.

Loosely based on an early version from Bastian Blank <waldi@debian.org>

Signed-off-by: Gerd Hoffmann <kraxel@suse.de>
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
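Of the new ELF sections, the lock-prefix list is the simplest to sketch (a rough rendering following the i386 alternatives code of that era, not the exact hunk):

    . = ALIGN(8);
    __smp_locks = .;
    .smp_locks : AT(ADDR(.smp_locks) - LOAD_OFFSET) {
        *(.smp_locks)        /* addresses of LOCK prefixes; patched to NOPs on UP */
    }
    __smp_locks_end = .;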
2006-04-09  [PATCH] x86_64: Fixup read_mostly section on internode cache line size for vSMP  (Ravikiran G Thirumalai; 1 file, -1/+1)
Fix up the read_mostly section to start at an internode cacheline boundary.

Signed-off-by: Ravikiran Thirumalai <kiran@scalex86.org>
Signed-off-by: Shai Fultheim <shai@scalex86.org>
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-03-25  [PATCH] x86_64: Basic reorder infrastructure  (Arjan van de Ven; 1 file, -0/+5)
This patch puts the infrastructure in place to allow reordering of functions inside the vmlinux. The general idea is that it is possible to put all "common" functions into the first 2Mb of the code, so that they are covered by one TLB entry. This is as opposed to the current situation where a typical vmlinux covers about 3.5Mb (on x86-64) and thus 2 TLB entries.

This is done by enabling the -ffunction-sections flag in gcc, which puts each function in its own ELF section, so that the linker can then order them in a way defined by the linker script.

As per previous discussions, Linus said he wanted a "static" list for this, e.g. a list provided by the kernel tarball, so that most people have the same ordering at least. A script is provided to create this list based on readprofile(1) output. The included list is provisional, and entirely biased by my own testbox and me running a few kernel compiles and some other things.

I think that to get to a better list we need to invite people to submit their own profiles, and somehow add those all up and base the final list on that. I'm willing to do that effort if this ends up being the preferred approach. Such an effort probably needs to be repeated like once a year or so to adapt to the changing nature of the kernel.

Made it a CONFIG with default n because it increases link times dramatically.

Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
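With -ffunction-sections every function lands in its own .text.<name> input section, so the script can pull the hot ones to the front; an illustrative sketch (the function names here are made up, standing in for the readprofile-derived functionlist):

    .text : AT(ADDR(.text) - LOAD_OFFSET) {
        /* hot functions first, so they share the first 2Mb TLB entry */
        *(.text.schedule)
        *(.text.copy_user_generic)
        *(.text)             /* everything else, in default link order */
    } :text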
2006-03-25  [PATCH] x86_64: Patch to make the head.S-must-be-first-in-vmlinux order explicit  (Arjan van de Ven; 1 file, -0/+1)
This patch puts the code from head.S in a special .bootstrap.text section. I'm working on a patch to reorder the functions in the kernel (I'll post that later), but for x86-64 at least the kernel bootstrap requires that the head.S functions are on the very first page/pages of the kernel text. This is understandable since the bootstrap is complex enough already, and not a problem at all; it just means they aren't allowed to be reordered. This patch puts these special functions into a separate section to document this, and to guarantee it in the light of possibly reordering the rest later. (So this patch doesn't fix a bug per se, but makes things more robust by making the order of these functions explicit.)

Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
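The corresponding one-line script change just makes head.S's section the first input to .text; a sketch:

    .text : AT(ADDR(.text) - LOAD_OFFSET) {
        *(.bootstrap.text)   /* head.S entry code: must stay on the first page(s) */
        *(.text)             /* everything else may be reordered later */
    } :text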
2006-02-04  [PATCH] x86_64: Let impossible CPUs point to reference per cpu data  (Andi Kleen; 1 file, -3/+5)
Hack for 2.6.16. In 2.6.17 all code that uses NR_CPUS should be audited and changed to only touch possible CPUs.

Don't mark the reference per cpu data init data (so it stays around after boot) and point all impossible CPUs at it. This way they reference some valid, although shared, memory. Usually this is only initialization like INIT_LIST_HEADs, and there won't be races because these CPUs never run. Still somewhat hackish.

Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-02-04  [PATCH] x86_64: align per-cpu section to configured cache bytes  (Zach Brown; 1 file, -1/+1)
Align the start of the per-cpu section to the configured number of bytes in a cache line. This stops a BUG_ON() from triggering in load_module() when DEFINE_PER_CPU() is used in a module and the section isn't cacheline-aligned. Rusty also found this and sent a patch in a while ago (http://lkml.org/lkml/2004/10/19/17), I don't know what came of that.

Signed-off-by: Zach Brown <zach.brown@oracle.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-01-11  [PATCH] x86_64: Allow compilation on a 32bit biarch toolchain  (Andi Kleen; 1 file, -0/+2)
This might help on distributions that use a 32bit biarch compiler. First, pass -m64 by default. Secondly, add some more .code32s, because at least the Ubuntu biarch 32bit as called by gcc doesn't seem to handle -m64 -m32 as generated by the Makefile without such assistance. And finally, make sure the linker script can be preprocessed with a 32bit cpp.

Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-01-11  [PATCH] x86_64: Separate CONFIG_UNWIND_INFO from CONFIG_DEBUG_INFO  (Jan Beulich; 1 file, -1/+1)
As a follow-up to the introduction of CONFIG_UNWIND_INFO, this separates the generation of frame unwind information for x86-64 from that of full debug information.

Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-11-14  [PATCH] x86_64: Only use asm/sections.h to declare section symbols  (Andi Kleen; 1 file, -1/+1)
Add __initdata_* to asm-generic/sections.h and replace a lot of open-coded externs in arch/x86_64/*. I had to change __bss_end to __bss_stop to match the other architectures.

Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-09-10  [PATCH] x86_64 linker script cleanups for debug sections  (Paolo 'Blaisorblade' Giarrusso; 1 file, -16/+3)
Use the new macros for x86_64 too. Note that the current script includes different definitions; more exactly, it only contains part of the DWARF2 sections and the .comment one from Stabs. Shouldn't be a problem, anyway.

Cc: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
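The macros in question come from include/asm-generic/vmlinux.lds.h and emit the non-allocated debug sections at address 0; roughly (expansion abbreviated):

    /* in the arch script, the open-coded lines shrink to: */
    STABS_DEBUG
    DWARF_DEBUG

    /* which expand along the lines of */
    .stab 0 : { *(.stab) }
    .debug_info 0 : { *(.debug_info) }
    .debug_line 0 : { *(.debug_line) }
    /* ...and the remaining stabs/DWARF2 sections... */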
2005-09-07  [PATCH] kprobes: prevent possible race conditions x86_64 changes  (Prasanna S Panchamukhi; 1 file, -0/+1)
This patch contains the x86_64 architecture specific changes to prevent the possible race conditions.

Signed-off-by: Prasanna S Panchamukhi <prasanna@in.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-07-12  [PATCH] x86_64: section alignment fix  (Andrew Morton; 1 file, -2/+2)
This is the second time this has happened: inserting a new section requires that we adjust the arithmetic which is used to calculate the vsyscall page's offset.

Cc: Christoph Lameter <christoph@lameter.com>
Cc: Andi Kleen <ak@muc.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-07-07  [PATCH] mostly_read data section  (Christoph Lameter; 1 file, -0/+4)
Add a new section called ".data.read_mostly" for data items that are read frequently and rarely written to, like cpumaps etc.

If these items are placed in the .data section, then these frequently read items may end up in cachelines with data that is frequently updated. In that case all processors in an SMP system must needlessly reload the cachelines containing elements of those frequently used variables again and again. The ability to share these cachelines allows each cpu in an SMP system to keep a local copy of those shared cachelines, thereby optimizing performance.

Signed-off-by: Alok N Kataria <alokk@calsoftinc.com>
Signed-off-by: Shobhit Dayal <shobhit@calsoftinc.com>
Signed-off-by: Christoph Lameter <christoph@scalex86.org>
Signed-off-by: Shai Fultheim <shai@scalex86.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
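Variables get there via a section attribute, and the script keeps the section away from write-hot data; a sketch (the alignment was later retuned for vSMP, see the 2006-04-09 entry above):

    /* C side: #define __read_mostly __attribute__((__section__(".data.read_mostly"))) */

    . = ALIGN(CONFIG_X86_L1_CACHE_BYTES);   /* start on a fresh cacheline */
    .data.read_mostly : AT(ADDR(.data.read_mostly) - LOAD_OFFSET) {
        *(.data.read_mostly)                /* rarely-written data, shareable cachelines */
    }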
2005-06-25  [PATCH] kexec: x86_64: vmlinux: fix physical addresses  (Eric W. Biederman; 1 file, -42/+86)
The vmlinux on x86_64 does not report the correct physical address of the kernel. Instead, in the physical address field it currently reports the virtual address of the kernel. This patch is a bug fix that corrects vmlinux to report the proper physical addresses.

This is potentially a help for crash dump analysis tools. It definitely allows bootloaders to load vmlinux as a standard ELF executable. Bootloaders directly loading vmlinux become of practical importance when we consider the kexec-on-panic case.

Signed-off-by: Eric Biederman <ebiederm@xmission.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
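The core of the fix: give every output section an explicit load (physical) address via AT(), computed from the kernel's virtual-to-physical mapping offset, so the ELF program headers carry real physical addresses. A sketch:

    #define LOAD_OFFSET __START_KERNEL_map   /* virt = phys + LOAD_OFFSET */

    .text : AT(ADDR(.text) - LOAD_OFFSET) {  /* p_paddr is now truly physical */
        *(.text)
    }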
2005-04-16  Linux-2.6.12-rc2  (Linus Torvalds; 1 file, -0/+164)
Initial git repository build. I'm not bothering with the full history, even though we have it. We can create a separate "historical" git archive of that later if we want to, and in the meantime it's about 3.2GB when imported into git - space that would just make the early git days unnecessarily complicated, when we don't have a lot of good infrastructure for it. Let it rip!