path: root/arch/x86/kernel
Age | Commit message | Author | Files | Lines
2009-03-27Merge branch 'core/percpu' into percpu-cpumask-x86-for-linus-2Ingo Molnar108-4621/+5699
Conflicts:
	arch/parisc/kernel/irq.c
	arch/x86/include/asm/fixmap_64.h
	arch/x86/include/asm/setup.h
	kernel/irq/handle.c

Semantic merge:
	arch/x86/include/asm/fixmap.h

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-03-26Merge branch 'timers-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tipLinus Torvalds2-17/+66
* 'timers-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (26 commits)
  posix timers: fix RLIMIT_CPU && fork()
  time: ntp: fix bug in ntp_update_offset() & do_adjtimex(), fix
  time: ntp: clean up second_overflow()
  time: ntp: simplify ntp_tick_adj calculations
  time: ntp: make 64-bit constants more robust
  time: ntp: refactor do_adjtimex() some more
  time: ntp: refactor do_adjtimex()
  time: ntp: fix bug in ntp_update_offset() & do_adjtimex()
  time: ntp: micro-optimize ntp_update_offset()
  time: ntp: simplify ntp_update_offset_fll()
  time: ntp: refactor and clean up ntp_update_offset()
  time: ntp: refactor up ntp_update_frequency()
  time: ntp: clean up ntp_update_frequency()
  time: ntp: simplify the MAX_TICKADJ_SCALED definition
  time: ntp: simplify the second_overflow() code flow
  time: ntp: clean up kernel/time/ntp.c
  x86: hpet: stop HPET_COUNTER when programming periodic mode
  x86: hpet: provide separate functions to stop and start the counter
  x86: hpet: print HPET registers during setup (if hpet=verbose is used)
  time: apply NTP frequency/tick changes immediately
  ...
2009-03-26Merge branch 'sched-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tipLinus Torvalds2-5/+12
* 'sched-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (46 commits)
  sched: Add comments to find_busiest_group() function
  sched: Refactor the power savings balance code
  sched: Optimize the !power_savings_balance during fbg()
  sched: Create a helper function to calculate imbalance
  sched: Create helper to calculate small_imbalance in fbg()
  sched: Create a helper function to calculate sched_domain stats for fbg()
  sched: Define structure to store the sched_domain statistics for fbg()
  sched: Create a helper function to calculate sched_group stats for fbg()
  sched: Define structure to store the sched_group statistics for fbg()
  sched: Fix indentations in find_busiest_group() using gotos
  sched: Simple helper functions for find_busiest_group()
  sched: remove unused fields from struct rq
  sched: jiffies not printed per CPU
  sched: small optimisation of can_migrate_task()
  sched: fix typos in documentation
  sched: add avg_overlap decay
  x86, sched_clock(): mark variables read-mostly
  sched: optimize ttwu vs group scheduling
  sched: TIF_NEED_RESCHED -> need_resched() cleanup
  sched: don't rebalance if attached on NULL domain
  ...
2009-03-26Merge branch 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/davej/cpufreqLinus Torvalds21-717/+958
* 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/davej/cpufreq: (35 commits)
  [CPUFREQ] Prevent p4-clockmod from auto-binding to the ondemand governor.
  [CPUFREQ] Make cpufreq-nforce2 less obnoxious
  [CPUFREQ] p4-clockmod reports wrong frequency.
  [CPUFREQ] powernow-k8: Use a common exit path.
  [CPUFREQ] Change link order of x86 cpufreq modules
  [CPUFREQ] conservative: remove 10x from def_sampling_rate
  [CPUFREQ] conservative: fixup governor to function more like ondemand logic
  [CPUFREQ] conservative: fix dbs_cpufreq_notifier so freq is not locked
  [CPUFREQ] conservative: amend author's email address
  [CPUFREQ] Use swap() in longhaul.c
  [CPUFREQ] checkpatch cleanups for acpi-cpufreq
  [CPUFREQ] powernow-k8: Only print error message once, not per core.
  [CPUFREQ] ondemand/conservative: sanitize sampling_rate restrictions
  [CPUFREQ] ondemand/conservative: deprecate sampling_rate{min,max}
  [CPUFREQ] powernow-k8: Always compile powernow-k8 driver with ACPI support
  [CPUFREQ] Introduce /sys/devices/system/cpu/cpu*/cpufreq/cpuinfo_transition_latency
  [CPUFREQ] checkpatch cleanups for powernow-k8
  [CPUFREQ] checkpatch cleanups for ondemand governor.
  [CPUFREQ] checkpatch cleanups for powernow-k7
  [CPUFREQ] checkpatch cleanups for speedstep related drivers.
  ...
2009-03-26Merge branch 'timers/hpet' into timers/coreIngo Molnar1-16/+64
2009-03-26Merge commit 'v2.6.29' into timers/coreIngo Molnar34-167/+290
2009-03-18Merge branches 'sched/cleanups' and 'linus' into sched/coreIngo Molnar2-45/+68
2009-03-17prevent boosting kprobes on exception addressMasami Hiramatsu1-0/+3
Don't boost at addresses which are listed in the exception tables,
because a major page fault will occur at those addresses. In that
case, kprobes cannot determine when the instruction buffer can be
freed, since some processes will sleep on the buffer.

kprobes-ia64 already has the same check.

Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
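A minimal sketch of the check being described; search_exception_tables()
is the real lookup, while the helper name and its placement in the
kprobes boost path are assumptions:

    #include <linux/module.h>	/* search_exception_tables(), in this era */
    #include <linux/kprobes.h>

    /* Refuse to boost a probe at an address listed in the exception
     * tables: a major page fault may occur there, after which the
     * lifetime of the instruction buffer can no longer be tracked. */
    static int address_safe_to_boost(unsigned long addr)
    {
            return search_exception_tables(addr) == NULL;
    }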
2009-03-17Fast TSC calibration: calculate proper frequency error boundsLinus Torvalds1-45/+56
In order for ntpd to correctly synchronize the clocks, the frequency of
the system clock must not be off by more than 500 ppm (or, put another
way, 1:2000), or ntpd will end up giving up on trying to synchronize
properly, and ends up resetting the clock in jumps instead.

The fast TSC PIT calibration sometimes failed this test - it was
assuming that the PIT reads always took about one microsecond each (2us
for the two reads to get a 16-bit timer), and that calibrating TSC to
the PIT over 15ms should thus be sufficient to get much closer than
500ppm (max 2us error on both sides giving 4us over 15ms: a 270 ppm
error value).

However, that assumption does not always hold: apparently some hardware
is either very much slower at reading the PIT registers, or there was
other noise causing at least one machine to get 700+ ppm errors.

So instead of using a fixed 15ms timing loop, this changes the fast PIT
calibration to read the TSC delta over the individual PIT timer reads,
and use the result to calculate the error bars on the PIT read timing
properly. We then successfully calibrate the TSC only if the maximum
error bars fall below 500ppm.

In the process, we also relax the timing to allow up to 25ms for the
calibration, although it can happen much faster depending on hardware.

Reported-and-tested-by: Jesper Krogh <jesper@krogh.cc>
Cc: john stultz <johnstul@us.ibm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
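To make the error-bar arithmetic concrete, a small standalone sketch
(userspace C, using the numbers quoted above; not the kernel code):

    #include <stdio.h>

    /* Worst-case frequency error when calibrating the TSC against the
     * PIT: each end of a 'window_ms' millisecond window can be off by
     * up to 'read_us' microseconds of PIT read latency. */
    static double ppm_error(double read_us, double window_ms)
    {
            return (2.0 * read_us) / (window_ms * 1000.0) * 1e6;
    }

    int main(void)
    {
            /* 2us per read over 15ms: ~267 ppm, within the 500 ppm budget */
            printf("%.0f ppm\n", ppm_error(2.0, 15.0));
            /* a slow PIT (~6us reads) blows the budget: 800 ppm */
            printf("%.0f ppm\n", ppm_error(6.0, 15.0));
            return 0;
    }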
2009-03-17Fix potential fast PIT TSC calibration startup glitchLinus Torvalds1-0/+9
During bootup, when we reprogram the PIT (programmable interval timer)
to start counting down from 0xffff in order to use it for the fast TSC
calibration, we should also make sure to delay a bit afterwards to
allow the PIT hardware to actually start counting with the new value.

That will happen at the next CLK pulse (1.193182 MHz), so the easiest
way to do that is to just wait at least one microsecond after
programming the new PIT counter value. We do that by just reading the
counter value back once - which will take about 2us on PC hardware.

Reported-and-tested-by: john stultz <johnstul@us.ibm.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
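A sketch of the sequence being described, assuming the fast-calibration
path drives PIT channel 2 via ports 0x43/0x42 as arch/x86/kernel/tsc.c
does; the single read-back supplies the delay:

    #include <asm/io.h>

    static void pit_start_for_calibration(void)
    {
            outb(0xb0, 0x43);	/* channel 2, mode 0, lobyte/hibyte */
            outb(0xff, 0x42);	/* counter low byte */
            outb(0xff, 0x42);	/* counter high byte */

            /*
             * The PIT only latches the new count on the next CLK pulse
             * (1.193182 MHz), so wait at least 1us before relying on it.
             * One dummy read back takes about 2us on PC hardware, which
             * is plenty.
             */
            inb(0x42);
    }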
2009-03-13Merge branches 'sched/clock', 'sched/urgent' and 'linus' into sched/coreIngo Molnar2-5/+5
2009-03-10x86, sched_clock(): mark variables read-mostlyIngo Molnar1-4/+5
Impact: micro-optimization

There are a number of variables in the sched_clock() path that live in
.data/.bss but are not marked __read_mostly. This creates the danger of
accidental false cacheline sharing with some other, write-often
variable. So mark them __read_mostly.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
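For illustration, the annotation in question (the variable name here is
hypothetical, not one of the actual tsc.c variables):

    #include <linux/cache.h>

    /* Written once during boot, read on every sched_clock() call:
     * __read_mostly places it in .data..read_mostly so it cannot end
     * up sharing a cacheline with a write-often variable by accident. */
    static int tsc_calibrated __read_mostly;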
2009-03-10Merge branches 'sched/cleanups' and 'linus' into sched/coreIngo Molnar6-24/+22
2009-03-10percpu: generalize embedding first chunk setup helperTejun Heo1-48/+6
Impact: code reorganization Separate out embedding first chunk setup helper from x86 embedding first chunk allocator and put it in mm/percpu.c. This will be used by the default percpu first chunk allocator and possibly by other archs. Signed-off-by: Tejun Heo <tj@kernel.org>
2009-03-10percpu: more flexibility for @dyn_size of pcpu_setup_first_chunk()Tejun Heo1-7/+6
Impact: cleanup, more flexibility for first chunk init Non-negative @dyn_size used to be allowed iff @unit_size wasn't auto. This restriction stemmed from implementation detail and made things a bit less intuitive. This patch allows @dyn_size to be specified regardless of @unit_size and swaps the positions of @dyn_size and @unit_size so that the parameter order makes more sense (static, reserved and dyn sizes followed by enclosing unit_size). While at it, add @unit_size >= PCPU_MIN_UNIT_SIZE sanity check. Signed-off-by: Tejun Heo <tj@kernel.org>
2009-03-09Merge branch 'fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/davej/cpufreqLinus Torvalds1-1/+0
* 'fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/davej/cpufreq:
  [CPUFREQ] Add p4-clockmod sysfs-ui removal to feature-removal schedule.
  Revert "[CPUFREQ] Disable sysfs ui for p4-clockmod."
2009-03-09Revert "[CPUFREQ] Disable sysfs ui for p4-clockmod."Dave Jones1-1/+0
This reverts commit e088e4c9cdb618675874becb91b2fd581ee707e6.

Removing the sysfs interface for p4-clockmod was flagged as a
regression in bug 12826.

Course of action:

- Find out the remaining causes of overheating, and fix them if
  possible. ACPI should be doing the right thing automatically. If it
  isn't, we need to fix that.

- mark p4-clockmod ui as deprecated

- try again with the removal in six months.

It's not really feasible to printk about the deprecation, because it
needs to happen at all the sysfs entry points, which means adding a lot
of strcmp("p4-clockmod".. calls to the core, which.. bleuch.

Signed-off-by: Dave Jones <davej@redhat.com>
2009-03-08x86: UV: remove uv_flush_tlb_others() WARN_ONCliff Wickman1-2/+0
In uv_flush_tlb_others() (arch/x86/kernel/tlb_uv.c), the
"WARN_ON(!in_atomic())" fails if CONFIG_PREEMPT is not enabled, and
CONFIG_PREEMPT is not enabled by default in the distribution that most
UV owners will use.

We could #ifdef CONFIG_PREEMPT the warning, but that is not good form.
And there seems to be no suitable fix to in_atomic() when
CONFIG_PREEMPT is not on. As Ingo commented:

> and we have no proper primitive to test for atomicity. (mainly
> because we dont know about atomicity on a non-preempt kernel)

So we drop the WARN_ON.

Signed-off-by: Cliff Wickman <cpw@sgi.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
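The limitation being worked around, sketched; in_atomic() is
essentially a preempt_count() test, which is what makes it blind on
non-preemptible kernels:

    #include <linux/kernel.h>
    #include <linux/hardirq.h>
    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(demo_lock);

    static void demo(void)
    {
            spin_lock(&demo_lock);
            /*
             * CONFIG_PREEMPT=y: spin_lock() bumps preempt_count(), so
             * in_atomic() is true here.  CONFIG_PREEMPT=n:
             * preempt_count() stays 0, so in_atomic() is false even
             * though sleeping is forbidden - the WARN_ON below fires
             * spuriously.
             */
            WARN_ON(!in_atomic());
            spin_unlock(&demo_lock);
    }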
2009-03-06x86, pebs: correct qualifier passed to ds_write_config() from ds_request_pebs()Markus Metzger1-1/+1
ds_write_config() can write the BTS as well as the PEBS part of the DS config. ds_request_pebs() passes the wrong qualifier, which results in the wrong configuration to be written. Reported-by: Stephane Eranian <eranian@googlemail.com> Signed-off-by: Markus Metzger <markus.t.metzger@intel.com> LKML-Reference: <20090305085721.A22550@sedona.ch.intel.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-03-06x86, bts: remove bad warningMarkus Metzger1-1/+0
In case a ptraced task is reaped (while the tracer is still attached), ds_exit_thread() is called before ptrace_exit(). The latter will release the bts_tracer and remove the thread's ds_ctx. The former will WARN() if the context is not NULL. Oleg Nesterov submitted patches that move ptrace_exit() before exit_thread() and thus reverse the order of the above calls. Remove the bad warning. I will add it again when Oleg's changes are in. Signed-off-by: Markus Metzger <markus.t.metzger@intel.com> LKML-Reference: <20090305084954.A22000@sedona.ch.intel.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-03-06x86, percpu: setup reserved percpu area for x86_64Tejun Heo1-9/+28
Impact: fix relocation overflow during module load

x86_64 uses 32-bit relocations for symbol access, and static percpu
symbols, whether in core or in modules, must be within 2GB of the
percpu segment base, which the dynamic percpu allocator doesn't
guarantee. This patch makes x86_64 reserve PERCPU_MODULE_RESERVE bytes
in the first chunk so that module percpu areas are always allocated
from the first chunk, which is always inside the relocatable range.

This problem exists for any percpu allocator but is easily triggered
when using the embedding allocator because the second chunk is located
beyond 2GB on it.

This patch also changes the meaning of PERCPU_DYNAMIC_RESERVE such that
it only indicates the size of the area to reserve for dynamic
allocation, as the static and dynamic areas can be separate.
PERCPU_DYNAMIC_RESERVE is increased by 4k for both 32 and 64 bits, as
the reserved area separation eats away some allocatable space and
having slightly more headroom (currently between 4 and 8k after a
minimal boot sans module area) makes sense for common case performance.

x86_32 relocations can reach anywhere from anywhere, so it doesn't need
the reservation.

Mike Galbraith first reported the problem and bisected it to the
embedding percpu allocator commit.

Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: Mike Galbraith <efault@gmx.de>
Reported-by: Jaswinder Singh Rajput <jaswinder@kernel.org>
2009-03-06percpu, module: implement reserved allocation and use it for module percpu variablesTejun Heo1-4/+4
Impact: add reserved allocation functionality and use it for module
percpu variables

This patch implements reserved allocation from the first chunk. When
setting up the first chunk, the arch can ask to set aside a certain
number of bytes right after the core static area, which is then
available only through a separate reserved allocator. This will be used
primarily for module static percpu variables on architectures with
limited relocation range, to ensure that the module percpu symbols are
inside the relocatable range.

If a reserved area is requested, the first chunk becomes reserved and
isn't available for regular allocation. If the first chunk also
includes a piggy-back dynamic allocation area, a separate chunk mapping
the same region is created to serve dynamic allocation. The first one
is called the static first chunk and the second the dynamic first
chunk. Although they share the page map, their different area map
initializations guarantee that they serve disjoint areas according to
their purposes.

If the arch doesn't set up a reserved area, reserved allocation is
handled like any other allocation.

Signed-off-by: Tejun Heo <tj@kernel.org>
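A sketch of what an arch-side call might look like under this scheme.
The argument order follows the descriptions in this log (static,
reserved, then dynamic sizes) and the exact signature of this era's
pcpu_setup_first_chunk() is assumed, not quoted; pcpu_my_get_page() is
a stand-in for the arch's page getter:

    #include <linux/percpu.h>

    /* Reserve PERCPU_MODULE_RESERVE bytes right after the static area
     * so module percpu variables always land in the first chunk; -1
     * means "size automatically" per the negative-for-auto change
     * below. */
    static size_t __init my_setup_first_chunk(size_t static_size,
                                              void *base_addr)
    {
            return pcpu_setup_first_chunk(pcpu_my_get_page, static_size,
                                          PERCPU_MODULE_RESERVE, /* reserved */
                                          -1,  /* dyn_size: auto */
                                          -1,  /* unit_size: auto */
                                          base_addr, NULL);
    }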
2009-03-06x86: make embedding percpu allocator return excessive free spaceTejun Heo1-16/+28
Impact: reduce unnecessary memory usage on certain configurations

The embedding percpu allocator allocates unit_size *
smp_num_possible_cpus() bytes consecutively and uses it for the first
chunk. However, if the static area is small, this can result in
excessive preallocated free space in the first chunk due to the
PCPU_MIN_UNIT_SIZE restriction.

This patch makes the embedding percpu allocator preallocate only what's
necessary, as described by PERCPU_DYNAMIC_RESERVE, and return the
leftover to the bootmem allocator.

Signed-off-by: Tejun Heo <tj@kernel.org>
2009-03-06percpu: use negative for auto for pcpu_setup_first_chunk() argumentsTejun Heo1-1/+1
Impact: argument semantic cleanup

In pcpu_setup_first_chunk(), zero @unit_size and @dyn_size meant
auto-sizing. That's okay for @unit_size, as 0 doesn't make sense there,
but a dynamic reserve size of 0 is valid. Also, if an arch's @dyn_size
is calculated from other parameters, it might end up passing in 0
@dyn_size and malfunction when the size is automatically adjusted.

This patch makes both @unit_size and @dyn_size ssize_t and uses -1 for
auto sizing.

Signed-off-by: Tejun Heo <tj@kernel.org>
2009-03-05Merge commit 'v2.6.29-rc7' into sched/coreIngo Molnar1-1/+1
2009-03-05[CPUFREQ] Prevent p4-clockmod from auto-binding to the ondemand governor.Dave Jones1-1/+4
The latency of p4-clockmod sucks so hard that scaling on a regular basis with ondemand is a really bad idea. Signed-off-by: Matthew Garrett <mjg59@srcf.ucam.org> Signed-off-by: Dave Jones <davej@redhat.com>
2009-03-04x86: add Dell XPS710 reboot quirkLeann Ogasawara1-0/+8
Dell XPS710 will hang on reboot. This is resolved by adding a quirk to set bios reboot. Signed-off-by: Leann Ogasawara <leann.ogasawara@canonical.com> Signed-off-by: Tim Gardner <tim.gardner@canonical.com> Cc: "manoj.iyer" <manoj.iyer@canonical.com> Cc: <stable@kernel.org> LKML-Reference: <1236196380.3231.89.camel@emiko> Signed-off-by: Ingo Molnar <mingo@elte.hu>
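The quirk pattern looks like this - a sketch of a reboot DMI table
entry, where the exact match strings are assumptions and
set_bios_reboot is the existing callback in reboot.c:

    #include <linux/init.h>
    #include <linux/dmi.h>

    static struct dmi_system_id __initdata reboot_dmi_quirk[] = {
            {       /* Dell XPS710 hangs on reboot; force reboot=bios */
                    .callback = set_bios_reboot,
                    .ident = "Dell XPS710",
                    .matches = {
                            DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
                            DMI_MATCH(DMI_PRODUCT_NAME, "Dell XPS710"),
                    },
            },
            { }     /* terminating entry */
    };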
2009-03-04x86, math-emu: fix init_fpu for task != currentDaniel Glöckner1-1/+1
Impact: fix math-emu related crash while using GDB/ptrace init_fpu() calls finit to initialize a task's xstate, while finit always works on the current task. If we use PTRACE_GETFPREGS on another process and both processes did not already use floating point, we get a null pointer exception in finit. This patch creates a new function finit_task that takes a task_struct parameter. finit becomes a wrapper that simply calls finit_task with current. On the plus side this avoids many calls to get_current which would each resolve to an inline assembler mov instruction. An empty finit_task has been added to i387.h to avoid linker errors in case the compiler still emits the call in init_fpu when CONFIG_MATH_EMULATION is not defined. The declaration of finit in i387.h has been removed as the remaining code using this function gets its prototype from fpu_proto.h. Signed-off-by: Daniel Glöckner <dg@emlix.com> Cc: Suresh Siddha <suresh.b.siddha@intel.com> Cc: "Pallipadi Venkatesh" <venkatesh.pallipadi@intel.com> Cc: Arjan van de Ven <arjan@infradead.org> Cc: Bill Metzenthen <billm@melbpc.org.au> LKML-Reference: <E1Lew31-0004il-Fg@mailer.emlix.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
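The described split, sketched; the body and exact return type of the
real math-emu helper are elided:

    #include <linux/sched.h>

    /* Task-aware initializer: resets @tsk's emulated FPU state. */
    void finit_task(struct task_struct *tsk)
    {
            /* ... initialize tsk's soft-FPU/xstate here ... */
    }

    /* The old entry point becomes a trivial wrapper, so existing
     * callers keep working while the ptrace path can now initialize
     * a task other than current. */
    void finit(void)
    {
            finit_task(current);
    }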
2009-03-04x86: EFI: Back efi_ioremap with init_memory_mapping instead of FIX_MAPHuang Ying2-19/+9
Impact: fix boot failure on EFI systems with a large runtime memory range

Brian Maly reported that some EFI systems with a large runtime memory
range can not boot, because the FIX_MAP area used to map the runtime
memory range is smaller than that range. This patch fixes the issue by
re-implementing efi_ioremap() with init_memory_mapping().

Reported-and-tested-by: Brian Maly <bmaly@redhat.com>
Signed-off-by: Huang Ying <ying.huang@intel.com>
Cc: Brian Maly <bmaly@redhat.com>
Cc: Yinghai Lu <yinghai@kernel.org>
LKML-Reference: <1236135513.6204.306.camel@yhuang-dev.sh.intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-03-04x86: fix DMI on EFIBrian Maly1-2/+3
Impact: reactivate DMI quirks on EFI hardware

DMI tables are loaded by EFI, so the dmi calls must happen after
efi_init() and not before.

Currently Apple hardware uses DMI to determine the framebuffer mappings
for efifb. Without DMI working you also have no video on MacBook Pro.
This patch resolves the DMI issue for EFI hardware (DMI is now properly
detected at boot), and additionally efifb now loads on Apple hardware
(i.e. video works).

Signed-off-by: Brian Maly <bmaly@redhat>
Acked-by: Yinghai Lu <yinghai@kernel.org>
Cc: ying.huang@intel.com
LKML-Reference: <49ADEDA3.1030406@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-03-04Merge branch 'x86/core' into core/percpuIngo Molnar24-793/+472
2009-03-04Merge branches 'x86/apic', 'x86/cpu', 'x86/fixmap', 'x86/mm', 'x86/sched', 'x86/setup-lzma', 'x86/signal' and 'x86/urgent' into x86/coreIngo Molnar8-518/+247
2009-03-03x86, signals: fix xine & firefox bustageHiroshi Shimamoto1-5/+4
Impact: fix bad frame in rt_sigreturn on 64-bit

After commit 97286a2b64725aac2d584ddd1f94871f9991d5a1 some applications
fail to return from their signal handler:

[ 145.150133] firefox[3250] bad frame in rt_sigreturn frame:00007f902b44eb28 ip:352e80b307 sp:7f902b44ef70 orax:ffffffffffffffff in libpthread-2.9.so[352e800000+17000]
[ 665.519017] firefox[5420] bad frame in rt_sigreturn frame:00007faa8deaeb28 ip:352e80b307 sp:7faa8deaef70 orax:ffffffffffffffff in libpthread-2.9.so[352e800000+17000]

The root cause is that the 64-byte-aligned value of fpstate was not
kept for the next stack pointer calculation.

Reported-by: Jaswinder Singh Rajput <jaswinder@kernel.org>
Reported-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com>
LKML-Reference: <49AC85C1.7060600@ct.jp.nec.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
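The rule at issue, as a sketch; the helper name is hypothetical, the
surrounding frame-setup code is elided, and round_down() is the
linux/kernel.h macro:

    #include <linux/kernel.h>

    /* Carve the xsave area out of the user stack.  The *aligned*
     * value of fpstate - not the raw subtraction - must feed the
     * subsequent stack pointer calculation, or the frame ends up
     * misaligned and rt_sigreturn rejects it. */
    static void __user *alloc_fpstate(unsigned long *spp, size_t xstate_size)
    {
            unsigned long sp = *spp;

            sp -= xstate_size;
            sp = round_down(sp, 64);  /* 64-byte boundary for xsave */
            *spp = sp;                /* later math uses the aligned value */
            return (void __user *)sp;
    }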
2009-03-02x86-64: syscall-audit: fix 32/64 syscall holeRoland McGrath1-1/+1
On x86-64, a 32-bit process (TIF_IA32) can switch to 64-bit mode with
ljmp, and then use the "syscall" instruction to make a 64-bit system
call. A 64-bit process makes a 32-bit system call with int $0x80.

In both these cases, audit_syscall_entry() will use the wrong system
call number table and the wrong system call argument registers. This
could be used to circumvent a syscall audit configuration that filters
based on the syscall numbers or argument details.

Signed-off-by: Roland McGrath <roland@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2009-03-02x86: unify chunks of kernel/process*.cJeremy Fitzhardinge3-361/+190
With x86-32 and -64 using the same mechanism for managing the tss io
permissions bitmap, large chunks of process*.c are trivially unifyable,
including:

 - exit_thread
 - flush_thread
 - __switch_to_xtra (along with tsc enable/disable)

and as bonus pickups:

 - sys_fork
 - sys_vfork

(Note: asmlinkage expands to empty on x86-64)

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-03-02x86-32: use non-lazy io bitmap context switchingJeremy Fitzhardinge3-84/+9
Impact: remove 32-bit optimization to prepare unification

x86-32 and -64 differ in the way they context-switch tasks with io
permission bitmaps. x86-64 simply copies the next task's io bitmap into
place (if any) on context switch. x86-32 invalidates the bitmap on
context switch, so that the next IO instruction will fault; at that
point it installs the appropriate IO bitmap.

This makes context switching IO-bitmap-using tasks a bit less
expensive, at the cost of making the next IO instruction slower due to
the extra fault. This tradeoff only makes sense if IO-bitmap-using
processes are relatively common, but they don't actually use IO
instructions very often.

However, in a typical desktop system, the only process likely to be
using IO bitmaps is the X server, and nothing at all on a server.
Therefore the lazy context switch doesn't really win all that much, and
it's just a gratuitous difference from the 64-bit code.

This patch removes the lazy context switch, with a view to unifying
this code in a later change.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
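A sketch of the non-lazy (x86-64 style) approach that survives the
unification; the field names follow this era's processor.h and are
assumptions here:

    #include <linux/string.h>
    #include <asm/processor.h>

    /* On every context switch, copy the incoming task's io permission
     * bitmap straight into the TSS, or mark the bitmap invalid by
     * pointing the base past the TSS limit. */
    static void switch_io_bitmap(struct tss_struct *tss,
                                 struct thread_struct *next)
    {
            if (next->io_bitmap_ptr)
                    memcpy(tss->io_bitmap, next->io_bitmap_ptr,
                           next->io_bitmap_max);
            else
                    tss->x86_tss.io_bitmap_base = INVALID_IO_BITMAP_OFFSET;
    }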
2009-03-02x86_32: apic/numaq_32, fix section mismatchJiri Slaby1-1/+1
Remove the __cpuinitdata section placement for the translation_table
structure, since it is referenced from functions within .text.

Signed-off-by: Jiri Slaby <jirislaby@gmail.com>
Cc: Jiri Slaby <jirislaby@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Cc: H. Peter Anvin <hpa@zytor.com>
2009-03-02x86_32: apic/summit_32, fix section mismatchJiri Slaby1-9/+9
Remove __init section placement for some functions/data, so that we
don't get section mismatch warnings. Also make setup_summit an inline
function instead of an empty macro.

[v2] One of them was not caught by the DEBUG_SECTION_MISMATCH=y magic.
Fix it.

Signed-off-by: Jiri Slaby <jirislaby@gmail.com>
Cc: Jiri Slaby <jirislaby@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Cc: H. Peter Anvin <hpa@zytor.com>
2009-03-02x86_32: apic/es7000_32, fix section mismatchJiri Slaby1-20/+17
Remove __init section placement for some functions, so that we don't get section mismatch warnings. [v2]: 2 of them were not caught by DEBUG_SECTION_MISMATCH=y magic. Fix it. Signed-off-by: Jiri Slaby <jirislaby@gmail.com> Cc: Jiri Slaby <jirislaby@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Cc: H. Peter Anvin <hpa@zytor.com>
2009-03-02x86_32: apic/summit_32, fix cpu_mask_to_apicidJiri Slaby1-21/+9
Perform same-cluster checking even for masks with all (nr_cpu_ids) bits set and report correct apicid on success instead. While at it, convert it to for_each_cpu and newer cpumask api. Signed-off-by: Jiri Slaby <jirislaby@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
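The shape of the converted loop, sketched; BAD_APICID, the cluster
macro's layout, and the apicid lookup helper are assumptions standing
in for the summit code's own definitions:

    #include <linux/cpumask.h>

    #define BAD_APICID            0xFFu
    #define APIC_CLUSTER(apicid)  ((apicid) & 0xF0)  /* assumed layout */

    /* hypothetical stand-in for the arch's logical-apicid table */
    extern unsigned int read_cpu_apicid(int cpu);

    static unsigned int cluster_mask_to_apicid(const struct cpumask *cpumask)
    {
            unsigned int round = 0, apicid = 0;
            int cpu;

            /* Walk every cpu in the mask - even a fully-set one - and
             * insist they all live in one APIC cluster, OR-ing the ids
             * together to form the destination. */
            for_each_cpu(cpu, cpumask) {
                    unsigned int new = read_cpu_apicid(cpu);

                    if (round && APIC_CLUSTER(apicid) != APIC_CLUSTER(new))
                            return BAD_APICID;  /* mask spans clusters */
                    apicid |= new;
                    round++;
            }
            return apicid;
    }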
2009-03-02x86_32: apic/es7000_32, fix cpu_mask_to_apicidJiri Slaby1-20/+10
Perform same-cluster checking even for masks with all (nr_cpu_ids) bits set and report BAD_APICID on failure. While at it, convert it to for_each_cpu. Signed-off-by: Jiri Slaby <jirislaby@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-03-02x86_32: apic/es7000_32, cpu_mask_to_apicid cleanupJiri Slaby1-42/+4
Remove es7000_cpu_mask_to_apicid_cluster completely, because it's
almost the same as es7000_cpu_mask_to_apicid except for two code paths.
One of them is about to be removed soon, and the other should return
BAD_APICID (it's a failure path).

The _cluster variant was never invoked via apic->cpu_mask_to_apicid_and
anyway, since there was no _cluster_and variant.

Also use the newer cpumask functions.

Signed-off-by: Jiri Slaby <jirislaby@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-03-02x86_32: apic/bigsmp_32, de-inline functionsJiri Slaby1-23/+17
The ones which go only into struct apic are de-inlined by compiler anyway, so remove the inline specifier from them. Afterwards, remove bigsmp_setup_portio_remap completely as it is unused. Signed-off-by: Jiri Slaby <jirislaby@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-02-28x86: remove double copy of show_cpuinfo_core for 32 and 64 bitJaswinder Singh Rajput1-18/+2
Impact: unification show_cpuinfo_core is identical for 32 and 64 bit and can be unified, and CONFIG_X86_HT inherently depends on CONFIG_X86_SMP. Signed-off-by: Jaswinder Singh Rajput <jaswinderrajput@gmail.com> Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2009-02-28x86: signal: introduce helper align_sigframe()Hiroshi Shimamoto1-12/+15
Impact: cleanup

Introduce helper align_sigframe() to align the stack pointer for the
signal frame.

Signed-off-by: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
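A sketch of such a helper; the i386 ABI wants ((sp + 4) & 15) == 0 at
function entry, while the 64-bit rt frame is 16-byte aligned minus 8
for the return-address slot:

    #include <linux/kernel.h>

    static unsigned long align_sigframe(unsigned long sp)
    {
    #ifdef CONFIG_X86_32
            /* i386 ABI: on function entry, ((sp + 4) & 15) == 0. */
            sp = ((sp + 4) & -16ul) - 4;
    #else
            sp = round_down(sp, 16) - 8;
    #endif
            return sp;
    }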
2009-02-28x86: signal: unify get_sigframe()Hiroshi Shimamoto1-56/+41
Impact: cleanup Unify get_sigframe(). Signed-off-by: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-02-28x86: signal: use 16 bytes boundary for rt_sigframeHiroshi Shimamoto1-4/+2
Impact: cleanup

Supporting xsave/xrestore introduces a 64-byte boundary for
save_i387_xstate(). A 16-byte boundary is OK for rt_sigframe.

Signed-off-by: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-02-28x86: signal: introduce get_sigframe() and replace get_sigstack()Hiroshi Shimamoto1-13/+19
Impact: cleanup Introduce get_sigframe() like 32-bit to replace get_sigstack(). Move the i387 stuff into get_sigframe(). Signed-off-by: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-02-28x86: signal: add __user annotationHiroshi Shimamoto1-2/+2
Impact: cleanup

Add the missing __user annotation to the parameter of get_sigframe().
Also change the cast type of *fpstate to void __user *.

Signed-off-by: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-02-26x86: set X86_FEATURE_TSC_RELIABLEIngo Molnar1-1/+7
If the TSC is constant and non-stop, also set it reliable.

(We will turn this off in DMI quirks for multi-chassis systems)

The performance number on a 16-way Nehalem system running 32 tasks
that context-switch between each other is significant:

   sched_clock_stable=0        sched_clock_stable=1
   ....................        ....................
   22.456925 million/sec       24.306972 million/sec   [+8.2%]

lmbench's "lat_ctx -s 0 2" goes from 0.63 microseconds to 0.59
microseconds - a 6.7% increase in context-switching performance.

Perfstat of 1 million pipe context switches between two tasks:

 Performance counter stats for './pipe-test-1m':

        [before]           [after]
    ............      ............
    37621.421089      36436.848378     task clock ticks   (msecs)
               0                 0     CPU migrations     (events)
         2000274           2000189     context switches   (events)
             194               193     pagefaults         (events)
      8433799643        8171016416     CPU cycles         (events)  -3.21%
      8370133368        8180999694     instructions       (events)  -2.31%
         4158565           3895941     cache references   (events)  -6.74%
           44312             46264     cache misses       (events)
     2349.287976       2279.362465     wall-time          (msecs)   -3.06%

The speedup comes straight from the reduction in the instruction
count. sched_clock_cpu() got simpler and the whole workload thus
executes faster.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
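A sketch of the described change; the function name and its placement
in the CPU setup path are assumptions, while the feature bits and
cpu_has()/set_cpu_cap() helpers are the real ones:

    #include <asm/processor.h>
    #include <asm/cpufeature.h>

    /* Constant + non-stop TSC => treat it as reliable, letting
     * sched_clock run in the fast, stable mode measured above. */
    static void detect_reliable_tsc(struct cpuinfo_x86 *c)
    {
            if (cpu_has(c, X86_FEATURE_CONSTANT_TSC) &&
                cpu_has(c, X86_FEATURE_NONSTOP_TSC))
                    set_cpu_cap(c, X86_FEATURE_TSC_RELIABLE);
    }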