2015-06-20  x86/hpet: Check for irq==0 when allocating hpet MSI interrupts  (Jiang Liu, 1 file, -1/+1)
irq == 0 is not a valid irq for an irqdomain MSI allocation, but the hpet code checks only for negative return values. Reported-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com> Cc: Borislav Petkov <bp@alien8.de> Link: http://lkml.kernel.org/r/558447AF.30703@linux.intel.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
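As a hedged sketch (not the verbatim patch), the described check in the hpet MSI setup path would look like:

  /* Sketch: irq_domain_alloc_irqs() can return 0 here, which is not
   * a valid irq, so treat anything <= 0 as failure (was: irq < 0). */
  irq = irq_domain_alloc_irqs(domain, 1, NUMA_NO_NODE, &info);
  if (irq <= 0)
          return -EINVAL;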
2015-06-16  genirq: Prevent crash in irq_move_irq()  (Jiang Liu, 1 file, -1/+8)
The functions irq_move_irq() and irq_move_masked_irq() expect that the caller passes the top-level irq_data to them when hierarchical irqdomains are enabled. But that's not true when called from apic_ack_edge(), which results in a NULL pointer dereference in idata->chip->irq_mask(idata). Instead of fixing the callers to pass the top-level irq_data, change irq_move_irq()/irq_move_masked_irq() to accept any irq_data. Fixes: 52f518a3a7c 'x86/MSI: Use hierarchical irqdomains to manage MSI interrupts' Reported-by: Huang Ying <ying.huang@intel.com> Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Cc: Tony Luck <tony.luck@intel.com> Cc: Bjorn Helgaas <bhelgaas@google.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Randy Dunlap <rdunlap@infradead.org> Cc: Yinghai Lu <yinghai@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Link: http://lkml.kernel.org/r/1433145945-789-3-git-send-email-jiang.liu@linux.intel.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
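A sketch of the described change (helper names follow the irq_data_to_desc() patch below; this is illustrative, not the verbatim diff):

  void irq_move_irq(struct irq_data *idata)
  {
          /* Sketch: with hierarchical irqdomains, idata may be any
           * level's irq_data; resolve the top-level entry here instead
           * of requiring callers like apic_ack_edge() to pass it. */
          idata = irq_desc_get_irq_data(irq_data_to_desc(idata));

          if (likely(!irqd_is_setaffinity_pending(idata)))
                  return;
          /* ... existing move logic then operates on idata ... */
  }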
2015-06-16  genirq: Enhance irq_data_to_desc() to support hierarchy irqdomain  (Jiang Liu, 2 files, -2/+9)
For irqs associated with hierarchical irqdomains, there will be multiple irq_datas for one irq_desc, so enhance irq_data_to_desc() to support hierarchical irqdomains. Also export irq_data_to_desc() as an inline function for later reuse. Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Cc: Tony Luck <tony.luck@intel.com> Cc: Bjorn Helgaas <bhelgaas@google.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Randy Dunlap <rdunlap@infradead.org> Cc: Yinghai Lu <yinghai@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Marc Zyngier <marc.zyngier@arm.com> Link: http://lkml.kernel.org/r/1433145945-789-2-git-send-email-jiang.liu@linux.intel.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
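For illustration, a hierarchy-aware version can be sketched like this:

  static inline struct irq_desc *irq_data_to_desc(struct irq_data *data)
  {
  #ifdef CONFIG_IRQ_DOMAIN_HIERARCHY
          /* Several irq_datas can map to one desc; go via the irq number. */
          return irq_to_desc(data->irq);
  #else
          /* Exactly one irq_data, embedded in the descriptor itself. */
          return container_of(data, struct irq_desc, irq_data);
  #endif
  }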
2015-06-12  iommu, x86: Properly handle posted interrupts for IOMMU hotplug  (Feng Wu, 1 file, -0/+3)
Return an error when inserting a new IOMMU which doesn't support posted interrupts if posted interrupts are already enabled. Signed-off-by: Feng Wu <feng.wu@intel.com> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Joerg Roedel <joro@8bytes.org> Cc: jiang.liu@linux.intel.com Cc: iommu@lists.linux-foundation.org Cc: dwmw2@infradead.org Link: http://lkml.kernel.org/r/1433827237-3382-11-git-send-email-feng.wu@intel.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2015-06-12  iommu, x86: Provide irq_remapping_cap() interface  (Feng Wu, 2 files, -0/+11)
Add a new interface, irq_remapping_cap(), to detect whether irq remapping supports new features such as VT-d Posted-Interrupts. Export the function so that KVM code can check for this capability and use the mechanism properly. Signed-off-by: Feng Wu <feng.wu@intel.com> Reviewed-by: Jiang Liu <jiang.liu@linux.intel.com> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Joerg Roedel <joro@8bytes.org> Cc: iommu@lists.linux-foundation.org Cc: dwmw2@infradead.org Link: http://lkml.kernel.org/r/1433827237-3382-10-git-send-email-feng.wu@intel.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
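In sketch form (the 'capability' bitmask member is added by the "Add new member capability to struct irq_remap_ops" patch below; names assumed from this series):

  bool irq_remapping_cap(enum irq_remap_cap cap)
  {
          /* Sketch: no active remapping driver means no capabilities. */
          if (!remap_ops)
                  return false;
          return !!(remap_ops->capability & (1 << cap));
  }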
2015-06-12  iommu, x86: Setup Posted-Interrupts capability for Intel iommu  (Feng Wu, 3 files, -0/+35)
Set Posted-Interrupts capability for Intel iommu when Interrupt Remapping is enabled, clear it when disabled. Signed-off-by: Feng Wu <feng.wu@intel.com> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Joerg Roedel <joro@8bytes.org> Cc: jiang.liu@linux.intel.com Cc: iommu@lists.linux-foundation.org Cc: dwmw2@infradead.org Link: http://lkml.kernel.org/r/1433827237-3382-9-git-send-email-feng.wu@intel.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2015-06-12  iommu, x86: Add cap_pi_support() to detect VT-d PI capability  (Feng Wu, 1 file, -0/+1)
Add a helper function to detect the VT-d Posted-Interrupts capability. Signed-off-by: Feng Wu <feng.wu@intel.com> Reviewed-by: Jiang Liu <jiang.liu@linux.intel.com> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: David Woodhouse <David.Woodhouse@intel.com> Acked-by: Joerg Roedel <joro@8bytes.org> Cc: iommu@lists.linux-foundation.org Cc: dwmw2@infradead.org Link: http://lkml.kernel.org/r/1433827237-3382-8-git-send-email-feng.wu@intel.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
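Such a helper is a one-line test of a bit in the VT-d capability register; as a sketch (the exact bit position is an assumption for illustration):

  /* Sketch: Posted-Interrupts support bit in the VT-d capability
   * register; bit 59 here is an assumption, check the VT-d spec. */
  #define cap_pi_support(c)       (((c) >> 59) & 1)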
2015-06-12  iommu, x86: Avoid migrating VT-d posted interrupts  (Feng Wu, 1 file, -1/+4)
When the interrupt is configured in posted mode, the destination of the interrupt is set in the Posted-Interrupts Descriptor and the migration of these interrupts happens during vCPU scheduling. We still update the cached irte, which will be used when changing back to remapping mode, but we avoid writing the table entry as this would overwrite the posted mode entry. Signed-off-by: Feng Wu <feng.wu@intel.com> Reviewed-by: Jiang Liu <jiang.liu@linux.intel.com> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: David Woodhouse <David.Woodhouse@intel.com> Acked-by: Joerg Roedel <joro@8bytes.org> Cc: iommu@lists.linux-foundation.org Cc: dwmw2@infradead.org Link: http://lkml.kernel.org/r/1433827237-3382-7-git-send-email-feng.wu@intel.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2015-06-12  iommu, x86: Save the mode (posted or remapped) of an IRTE  (Feng Wu, 1 file, -0/+10)
Add a new field to struct irq_2_iommu, which captures whether the associated IRTE is in posted mode or remapped mode. We update this field when the IRTE is written into the table. Suggested-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Feng Wu <feng.wu@intel.com> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Joerg Roedel <joro@8bytes.org> Cc: jiang.liu@linux.intel.com Cc: iommu@lists.linux-foundation.org Cc: dwmw2@infradead.org Link: http://lkml.kernel.org/r/1433827237-3382-6-git-send-email-feng.wu@intel.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2015-06-12  iommu, x86: Implement irq_set_vcpu_affinity for intel_ir_chip  (Feng Wu, 2 files, -0/+42)
Interrupt chip callback to set the VCPU affinity for posted interrupts. [ tglx: Use the helper function to copy from the remap irte instead of open coding it. Massage the comment as well ] Signed-off-by: Feng Wu <feng.wu@intel.com> Reviewed-by: Jiang Liu <jiang.liu@linux.intel.com> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: David Woodhouse <David.Woodhouse@intel.com> Cc: iommu@lists.linux-foundation.org Cc: joro@8bytes.org Cc: dwmw2@infradead.org Link: http://lkml.kernel.org/r/1433827237-3382-5-git-send-email-feng.wu@intel.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2015-06-12  iommu: dmar: Provide helper to copy shared irte fields  (Thomas Gleixner, 1 file, -0/+12)
Instead of open coding, provide a helper function to copy the shared irte fields. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: jiang.liu@linux.intel.com Cc: iommu@lists.linux-foundation.org Cc: joro@8bytes.org Cc: dwmw2@infradead.org Link: http://lkml.kernel.org/r/1433827237-3382-4-git-send-email-feng.wu@intel.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
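A sketch of what such a helper can look like (helper and field names are assumed for illustration, not verbatim):

  /* Sketch: copy only the fields that mean the same thing in
   * remapped and posted mode (names assumed, see struct irte below). */
  static inline void irte_copy_shared(struct irte *dst, struct irte *src)
  {
          dst->present = src->present;
          dst->fpd     = src->fpd;
          dst->avail   = src->avail;
  }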
2015-06-12  iommu: dmar: Extend struct irte for VT-d Posted-Interrupts  (Thomas Gleixner, 1 file, -15/+55)
The IRTE (Interrupt Remapping Table Entry) is either an entry for remapped or for posted interrupts. The hardware distinguishes between remapped and posted entries by bit 15 in the low 64 bits of the IRTE. If cleared, the entry is remapped; if set, it's posted. The entries have common fields and, depending on the posted bit, fields with different meanings. Extend struct irte to handle the differences between remap and posted mode by having three structs in the unions:

 - Shared
 - Remapped
 - Posted

Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Feng Wu <feng.wu@intel.com> Acked-by: Joerg Roedel <joro@8bytes.org> Cc: jiang.liu@linux.intel.com Cc: iommu@lists.linux-foundation.org Cc: dwmw2@infradead.org Link: http://lkml.kernel.org/r/1433827237-3382-3-git-send-email-feng.wu@intel.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
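A condensed sketch of that layout (most bitfields elided; the widths shown are illustrative, not authoritative):

  struct irte {
          union {
                  /* Shared fields, valid in both modes; bit 15 ('pst')
                   * selects posted (1) vs. remapped (0). */
                  struct {
                          __u64 present  :  1,
                                fpd      :  1,
                                __res0   : 13,
                                pst      :  1,
                                vector   :  8,
                                __res1   : 40;
                  };
                  /* Remapped-mode and posted-mode views of the same
                   * 64 bits (destination id vs. PI descriptor address,
                   * etc.) would be two more structs in this union. */
                  __u64 low;
          };
          __u64 high;
  };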
2015-06-12  iommu: Add new member capability to struct irq_remap_ops  (Feng Wu, 2 files, -0/+7)
Add a new member 'capability' to struct irq_remap_ops for storing information about available capabilities such as VT-d Posted-Interrupts. Signed-off-by: Feng Wu <feng.wu@intel.com> Reviewed-by: Jiang Liu <jiang.liu@linux.intel.com> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Joerg Roedel <joro@8bytes.org> Cc: iommu@lists.linux-foundation.org Cc: dwmw2@infradead.org Link: http://lkml.kernel.org/r/1433827237-3382-2-git-send-email-feng.wu@intel.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2015-05-19  x86/irq: Show statistics information for posted-interrupts  (Feng Wu, 1 file, -0/+12)
Show statistics for the posted-interrupt notification and wakeup events in /proc/interrupts. [ tglx: Named the short identifiers PIN and PIW to match the long identifiers ] Signed-off-by: Feng Wu <feng.wu@intel.com> Cc: jiang.liu@linux.intel.com Link: http://lkml.kernel.org/r/1432026437-16560-5-git-send-email-feng.wu@intel.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2015-05-19  x86/irq: Define a global vector for VT-d Posted-Interrupts  (Feng Wu, 8 files, -0/+40)
Currently, we use a global vector as the Posted-Interrupts Notification Event for all the vCPUs in the system. We need to introduce another global vector for VT-d Posted-Interrupts, which will be used to wake up a sleeping vCPU when an external interrupt from a directly-assigned device arrives for that vCPU. [ tglx: Removed a gazillion of extra newlines ] Signed-off-by: Feng Wu <feng.wu@intel.com> Cc: jiang.liu@linux.intel.com Link: http://lkml.kernel.org/r/1432026437-16560-4-git-send-email-feng.wu@intel.com Suggested-by: Yang Zhang <yang.z.zhang@intel.com> Acked-by: H. Peter Anvin <hpa@linux.intel.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2015-05-19  x86/irq/msi: Implement irq_set_vcpu_affinity for remapped MSI irqs  (Feng Wu, 1 file, -0/+1)
Implement irq_set_vcpu_affinity for pci_msi_ir_controller. Signed-off-by: Feng Wu <feng.wu@intel.com> Reviewed-by: Jiang Liu <jiang.liu@linux.intel.com> Link: http://lkml.kernel.org/r/1432026437-16560-3-git-send-email-feng.wu@intel.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2015-05-19  genirq: Introduce irq_set_vcpu_affinity() to target an interrupt to a VCPU  (Jiang Liu, 3 files, -0/+51)
With Posted-Interrupts support in the Intel CPU and IOMMU, an external interrupt from an assigned device can be delivered directly to a virtual CPU in a virtual machine. Instead of hacking the KVM and Intel IOMMU drivers, we propose a platform-independent interface to target an interrupt at a specific virtual CPU in a virtual machine, or to set the virtual CPU affinity for an interrupt. By adopting this new interface and hierarchical irqdomains, we can easily support posted interrupts on Intel platforms, and also provide interfaces flexible enough for other platforms to support similar features. Here is the usage scenario for this interface:

  Guest updates the MSI/MSI-X interrupt configuration
    --> QEMU and KVM handle this
    --> KVM calls this interface (passing the posted-interrupts descriptor and the guest vector)
    --> the irq core transfers control to the IOMMU
    --> the IOMMU does the real work of updating the IRTE (the IRTE has a new format for VT-d Posted-Interrupts)

Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com> Signed-off-by: Feng Wu <feng.wu@intel.com> Link: http://lkml.kernel.org/r/1432026437-16560-2-git-send-email-feng.wu@intel.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
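As a sketch, the core entry point can be expected to look roughly like this (locking helpers simplified; vcpu_info is the opaque, architecture-specific payload):

  int irq_set_vcpu_affinity(unsigned int irq, void *vcpu_info)
  {
          unsigned long flags;
          struct irq_desc *desc = irq_get_desc_lock(irq, &flags, 0);
          struct irq_data *data;
          struct irq_chip *chip;
          int ret = -ENOSYS;

          if (!desc)
                  return -EINVAL;

          /* Hand the opaque vcpu_info to the irq chip, which forwards
           * it to the IOMMU driver (sketch of the described flow). */
          data = irq_desc_get_irq_data(desc);
          chip = irq_data_get_irq_chip(data);
          if (chip && chip->irq_set_vcpu_affinity)
                  ret = chip->irq_set_vcpu_affinity(data, vcpu_info);
          irq_put_desc_unlock(desc, flags);
          return ret;
  }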
2015-05-19  x86: Rename eisa_set_level_irq to elcr_set_level_irq  (Paul Gortmaker, 3 files, -9/+9)
This routine has been around for over a decade, but with EISA being dead and abandoned for about twice that long, the name can be confusing. The function operates on the PIC Edge/Level Configuration Registers (ELCR), so rename it as such and mentally decouple it from the long-since-dead EISA bus. Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com> Reviewed-by: Maciej W. Rozycki <macro@linux-mips.org> Acked-by: Pavel Machek <pavel@ucw.cz> Cc: Rafael J. Wysocki <rjw@rjwysocki.net> Cc: Len Brown <len.brown@intel.com> Cc: Bjorn Helgaas <bhelgaas@google.com> Cc: x86@kernel.org Link: http://lkml.kernel.org/r/1431217657-934-1-git-send-email-paul.gortmaker@windriver.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2015-05-18  x86/smp/boot: Fix legacy SMP bootup slow-boot bug  (Ingo Molnar, 1 file, -1/+1)
So while testing kernels using tools/kvm/ (kvmtool) I noticed that it booted super slow:

  [    0.142991] Performance Events: no PMU driver, software events only.
  [    0.149265] x86: Booting SMP configuration:
  [    0.149765] .... node  #0, CPUs:      #1
  [    0.148304] kvm-clock: cpu 1, msr 2:1bfe9041, secondary cpu clock
  [   10.158813] KVM setup async PF for cpu 1
  [   10.159000]  #2
  [   10.159000] kvm-stealtime: cpu 1, msr 211a4d400
  [   10.158829] kvm-clock: cpu 2, msr 2:1bfe9081, secondary cpu clock
  [   20.167805] KVM setup async PF for cpu 2
  [   20.168000]  #3
  [   20.168000] kvm-stealtime: cpu 2, msr 211a8d400
  [   20.167818] kvm-clock: cpu 3, msr 2:1bfe90c1, secondary cpu clock
  [   30.176902] KVM setup async PF for cpu 3
  [   30.177000]  #4
  [   30.177000] kvm-stealtime: cpu 3, msr 211acd400

One CPU booted up per 10 seconds. With 120 CPUs that takes a while.

Bisection pinpointed this commit:

  853b160aaafb ("Revert f5d6a52f5111 ("x86/smpboot: Skip delays during SMP initialization similar to Xen")")

But that commit just restores previous behavior, so it cannot cause the problem. After some head scratching it turns out that these two commits:

  1a744cb356c5 ("x86/smp/boot: Remove 10ms delay from cpu_up() on modern processors")
  d68921f9bd14 ("x86/smp/boot: Add cmdline "cpu_init_udelay=N" to specify cpu_up() delay")

added the following code to smpboot.c:

  -               mdelay(10);
  +               mdelay(init_udelay);

Note the mismatch in the units: the delay is called 'udelay' and is set in microseconds - while the function used here is actually 'mdelay', which counts in milliseconds.

So the delay for legacy systems is off by a factor of 1,000: instead of 10 msecs we waited for 10 seconds.

The reason bisection pointed to 853b160aaafb was that 853b160aaafb removed a (broken) boot-time speedup patch, which masked the factor-1,000 bug.

Fix it by using udelay(). This fixes my bootup problems.

Cc: Len Brown <len.brown@intel.com> Cc: Alan Cox <alan@linux.intel.com> Cc: Arjan van de Ven <arjan@linux.intel.com> Cc: Borislav Petkov <bp@alien8.de> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Jan H. Schönherr <jschoenh@amazon.de> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
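In sketch form, the one-line fix described above (init_udelay holds microseconds, so the call must be udelay(), not mdelay()):

  -               mdelay(init_udelay);
  +               udelay(init_udelay);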
2015-05-17  x86/asm/entry/64: Use shorter MOVs from segment registers  (Denys Vlasenko, 1 file, -4/+4)
The "movw %ds,%cx" instruction needs a 0x66 prefix, while "movl %ds,%ecx" does not. The difference is that latter form (on 64-bit CPUs) overwrites the entire %ecx, not only its lower half. But subsequent code doesn't depend on the value of upper half of %ecx, so we can safely use the shorter instruction. The new code is also faster than the old one - now we don't depend on the old value of %ecx, but this code fragment is not performance-critical so it does not matter much. Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com> Cc: Alexei Starovoitov <ast@plumgrid.com> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Kees Cook <keescook@chromium.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Drewry <wad@chromium.org> Link: http://lkml.kernel.org/r/1431722346-26585-1-git-send-email-dvlasenk@redhat.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-05-17  x86/asm/head*.S: Change global labels to local  (Borislav Petkov, 2 files, -4/+4)
Make the disassembly look less confusing:

  -- head_64.o.before.asm
  ++ head_64.o.after.asm

  0000000000000120 <early_idt_handler>:
   120:   fc                      cld
   121:   83 3c 24 02             cmpl   $0x2,(%rsp)
  - 125:  0f 84 9d 00 00 00       je     1c8 <is_nmi>
  + 125:  0f 84 9d 00 00 00       je     1c8 <early_idt_handler+0xa8>
   12b:   83 3d 00 00 00 00 02    cmpl   $0x2,0x0(%rip)   # 132 <early_idt_handler+0x12>
   132:   74 7e                   je     1b2 <early_idt_handler+0x92>
   134:   ff 05 00 00 00 00       incl   0x0(%rip)        # 13a <early_idt_handler+0x1a>

  @@ -1198,9 +1198,7 @@ Disassembly of section .init.text:
   1bf:   5a                      pop    %rdx
   1c0:   59                      pop    %rcx
   1c1:   58                      pop    %rax
  - 1c2:  ff 0d 00 00 00 00       decl   0x0(%rip)        # 1c8 <is_nmi>
  -
  -00000000000001c8 <is_nmi>:
  + 1c2:  ff 0d 00 00 00 00       decl   0x0(%rip)        # 1c8 <early_idt_handler+0xa8>
   1c8:   48 83 c4 10             add    $0x10,%rsp
   1cc:   48 cf                   iretq

  -- head_32.o.before.asm
  ++ head_32.o.after.asm

  0000016c <early_idt_handler>:
   16c:   fc                      cld
   16d:   83 3c 24 02             cmpl   $0x2,(%esp)
  - 171:  74 73                   je     1e6 <is_nmi>
  + 171:  74 73                   je     1e6 <ex_entry+0xc>
   173:   36 83 3d 00 00 00 00    cmpl   $0x2,%ss:0x0
   17a:   02
   17b:   74 5a                   je     1d7 <hlt_loop>

  @@ -483,8 +483,6 @@ Disassembly of section .init.text:
   1dd:   59                      pop    %ecx
   1de:   58                      pop    %eax
   1df:   36 ff 0d 00 00 00 00    decl   %ss:0x0
  -
  -000001e6 <is_nmi>:
   1e6:   83 c4 08                add    $0x8,%esp
   1e9:   cf                      iret
   1ea:   66 90                   xchg   %ax,%ax

No functionality change.

Signed-off-by: Borislav Petkov <bp@suse.de> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@linux.intel.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/1431793079-11153-1-git-send-email-bp@alien8.de Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-05-17  x86: Pack loops tightly as well  (Ingo Molnar, 1 file, -0/+3)
Packing loops tightly (-falign-loops=1) is beneficial to code size:

     text      data     bss      dec       filename
     12566391  1617840  1089536  15273767  vmlinux.align.16-byte
     12224951  1617840  1089536  14932327  vmlinux.align.1-byte
     11976567  1617840  1089536  14683943  vmlinux.align.1-byte.funcs-1-byte
     11903735  1617840  1089536  14611111  vmlinux.align.1-byte.funcs-1-byte.loops-1-byte

This reduces the size of the kernel by another 0.6%, so the total combined size reduction of the alignment-packing patches is ~5.5%. The x86 decoder bandwidth and caching arguments laid out in:

  be6cb02779ca ("x86: Align jump targets to 1-byte boundaries")

apply to loop alignment as well. Furthermore, modern CPU uarchs have a loop cache/buffer that is an L0 cache before even the uop cache, covering a few dozen of the most recently executed instructions. This loop cache generally does not have the 16-byte alignment restrictions of the uop cache.

Now loop alignment can still be beneficial if:

 - a loop is cache-hot and its surroundings are not, or
 - the loop is so cache-hot that the instruction flow becomes x86 decoder bandwidth limited.

But loop alignment is harmful if:

 - a loop is cache-cold,
 - a loop's surroundings are cache-hot as well,
 - two cache-hot loops are close to each other,
 - the loop fits into the loop cache, or
 - the code flow is not decoder bandwidth limited,

and I'd argue that the latter five scenarios are much more common in the kernel, as our hottest loops are typically:

 - pointer chasing: this should fit into the loop cache in most cases and is typically data cache and address generation limited,
 - generic memory ops (memset, memcpy, etc.): these generally fit into the loop cache as well, and are likewise data cache limited.

So this patch packs loop addresses tightly as well.

Acked-by: Denys Vlasenko <dvlasenk@redhat.com> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Aswin Chandramouleeswaran <aswin@hp.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Davidlohr Bueso <dave@stgolabs.net> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Jason Low <jason.low2@hp.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tim Chen <tim.c.chen@linux.intel.com> Link: http://lkml.kernel.org/r/20150410123017.GB19918@gmail.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
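In build-system terms this amounts to one more compiler flag in the x86 Makefile; a sketch (the placement and the cc-option guard are assumptions, not the verbatim diff):

  # Sketch: pack jump targets (previous patch) and loops (this patch)
  # tightly instead of GCC's default 16-byte alignment; guarded so
  # compilers without these options still build.
  KBUILD_CFLAGS += $(call cc-option,-falign-jumps=1)
  KBUILD_CFLAGS += $(call cc-option,-falign-loops=1)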
2015-05-15  MIPS: tlb-r4k: Fix PG_ELPA comment  (James Hogan, 1 file, -1/+1)
The ELPA bit in PageGrain is all about large *physical* addresses, so correct the reference to "large virtual address" in the comment above where it is set for MIPS64. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: David Daney <ddaney@caviumnetworks.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/10038/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-05-15  MIPS: Fix up obsolete cpu_set usage  (Ezequiel Garcia, 1 file, -1/+1)
cpu_set was removed (along with a bunch of cpumask helpers) by commit 2f0f267ea072 ("cpumask: remove deprecated functions."). Fix this by replacing cpu_set with cpumask_set_cpu. Without this fix the following error is triggered when CONFIG_MIPS_MT_FPAFF=y:

  arch/mips/kernel/smp-cps.c: In function 'cps_smp_setup':
  arch/mips/kernel/smp-cps.c:95:3: error: implicit declaration of function 'cpu_set' [-Werror=implicit-function-declaration]

Fixes: 90db024f140d ("MIPS: smp-cps: cpu_set FPU mask if FPU present") Signed-off-by: Ezequiel Garcia <ezequiel.garcia@imgtec.com> Acked-by: Niklas Cassel <niklass@axis.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/9912/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
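The conversion is mechanical; a sketch (the mask name is illustrative, not the exact one in smp-cps.c):

  /* Before, with the removed helper: */
  cpu_set(cpu, some_cpumask);
  /* After, using the cpumask API, which takes a pointer: */
  cpumask_set_cpu(cpu, &some_cpumask);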
2015-05-15  x86: Consolidate irq entering inlines  (Thomas Gleixner, 3 files, -21/+14)
smp.c and irq_work.c implement the same inline helper. Move it to apic.h and use it everywhere. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Peter Zijlstra <peterz@infradead.org>
2015-05-15  x86: Use entering[_ack]_irq() instead of open coding it  (Thomas Gleixner, 3 files, -20/+8)
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2015-05-15  MAINTAINERS: Add dts entries for some of the Marvell SoCs  (Gregory CLEMENT, 1 file, -1/+9)
For many releases now, modifications of the mvebu and berlin device tree files have been merged through the mvebu subsystem. This patch makes that official in order to help contributors using get_maintainer.pl to find the right people. At the same time, update the mvebu description, which now includes the kirkwood SoCs and the new Armada SoCs. Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Acked-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com> Acked-by: Jason Cooper <jason@lakedaemon.net> Acked-by: Andrew Lunn <andrew@lunn.ch>
2015-05-15  x86: Align jump targets to 1-byte boundaries  (Ingo Molnar, 1 file, -0/+3)
The following NOP in a hot function caught my attention:

  >  5a: 66 0f 1f 44 00 00    nopw   0x0(%rax,%rax,1)

That's a dead NOP that bloats the function a bit, added for the default 16-byte alignment that GCC applies for jump targets. I realize that x86 CPU manufacturers recommend 16-byte jump target alignments (it's in the Intel optimization manual), to help their relatively narrow decoder prefetch alignment and uop cache constraints, but the cost of that is very significant:

     text      data     bss      dec       filename
     12566391  1617840  1089536  15273767  vmlinux.align.16-byte
     12224951  1617840  1089536  14932327  vmlinux.align.1-byte

By using 1-byte jump target alignment (i.e. no alignment at all) we get an almost 3% reduction in kernel size (!) - and a probably similar reduction in I$ footprint.

Now, the usual justification for jump target alignment is the following:

 - modern decoders tend to have 16-byte (effective) decoder prefetch windows (AMD documents it higher, but measurements suggest the effective prefetch window on current uarchs is still around 16 bytes),
 - on Intel there's also the uop-cache with cachelines that have 16-byte granularity and limited associativity,
 - older x86 uarchs had a penalty for decoder fetches that crossed 16-byte boundaries; these limits are mostly gone from recent uarchs.

So if a forward jump target is aligned to a cacheline boundary then prefetches will start from a new prefetch-cacheline and there's a higher chance of decoding in fewer steps and packing tightly.

But I think that argument is flawed for typical optimized kernel code flows: forward jumps often go to 'cold' (uncommon) pieces of code, and aligning cold code to cache lines does not bring a lot of advantages (they are uncommon), while it causes collateral damage:

 - their alignment 'spreads out' the cache footprint: it shifts followup hot code further out,
 - plus it slows down even 'cold' code that immediately follows 'hot' code (like in the above case), which could have benefited from the partial cacheline that comes off the end of hot code.

But even in the cache-hot case the 16-byte alignment brings disadvantages:

 - it spreads out the cache footprint, possibly making the code fall out of the L1 I$,
 - on Intel CPUs, recent microarchitectures have plenty of uop cache (typically doubling every 3 years) - while the size of the L1 cache grows much less aggressively, so workloads are rarely uop cache limited.

The only situation where alignment might matter are tight loops that could fit into a single 16-byte chunk - but those are pretty rare in the kernel: if they exist, they tend to be pointer chasing or generic memory ops, which both tend to be cache miss (or cache allocation) intensive and are not decoder bandwidth limited.

So the balance of arguments strongly favors packing kernel instructions tightly versus maximizing for decoder bandwidth: this patch changes the jump target alignment from 16 bytes to 1 byte (tightly packed, unaligned).

Acked-by: Denys Vlasenko <dvlasenk@redhat.com> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Aswin Chandramouleeswaran <aswin@hp.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Davidlohr Bueso <dave@stgolabs.net> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Jason Low <jason.low2@hp.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tim Chen <tim.c.chen@linux.intel.com> Link: http://lkml.kernel.org/r/20150410120846.GA17101@gmail.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-05-15  ext4: fix an ext3 collapse range regression in xfstests  (Theodore Ts'o, 1 file, -0/+8)
The xfstests test suite assumes that an attempt to collapse range on the range (0, 1) will return EOPNOTSUPP if the file system does not support collapse range. Commit 280227a75b56: "ext4: move check under lock scope to close a race" broke this, which caused xfstests failures when testing file systems that did not have the extents feature enabled. Reported-by: Eric Whitney <enwlinux@gmail.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2015-05-14  mm, numa: really disable NUMA balancing by default on single node machines  (Mel Gorman, 1 file, -1/+1)
NUMA balancing is meant to be disabled by default on UMA machines but the check is using nr_node_ids (highest node) instead of num_online_nodes (online nodes). The consequences are that a UMA machine with a node ID of 1 or higher will enable NUMA balancing. This will incur useless overhead due to minor faults, with the impact depending on the workload. These are the impact on the stats when running a kernel build on a single node machine whose node ID happened to be 1:

                              vanilla   patched
  NUMA base PTE updates       5113158         0
  NUMA huge PMD updates           643         0
  NUMA page range updates     5442374         0
  NUMA hint faults            2109622         0
  NUMA hint local faults      2109622         0
  NUMA hint local percent         100       100
  NUMA pages migrated               0         0

Signed-off-by: Mel Gorman <mgorman@suse.de> Reviewed-by: Rik van Riel <riel@redhat.com> Cc: <stable@vger.kernel.org> [3.8+] Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
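In sketch form, the described check in the NUMA balancing init path (surrounding context assumed):

  /* Sketch: nr_node_ids is (highest possible node id + 1) and can
   * exceed 1 even on a single-node machine; count online nodes. */
  if (num_online_nodes() > 1)
          set_numabalancing_state(true);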
2015-05-14  MAINTAINERS: update Jingoo Han's email address  (Jingoo Han, 1 file, -5/+5)
Change my private email address. Signed-off-by: Jingoo Han <jingoohan1@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2015-05-14  CMA: page_isolation: check buddy before accessing it  (Hui Zhu, 1 file, -1/+2)
I had an issue:

  Unable to handle kernel NULL pointer dereference at virtual address 0000082a
  pgd = cc970000
  [0000082a] *pgd=00000000
  Internal error: Oops: 5 [#1] PREEMPT SMP ARM
  PC is at get_pageblock_flags_group+0x5c/0xb0
  LR is at unset_migratetype_isolate+0x148/0x1b0
  pc : [<c00cc9a0>]    lr : [<c0109874>]    psr: 80000093
  sp : c7029d00  ip : 00000105  fp : c7029d1c
  r10: 00000001  r9 : 0000000a  r8 : 00000004
  r7 : 60000013  r6 : 000000a4  r5 : c0a357e4  r4 : 00000000
  r3 : 00000826  r2 : 00000002  r1 : 00000000  r0 : 0000003f
  Flags: Nzcv  IRQs off  FIQs on  Mode SVC_32  ISA ARM  Segment user
  Control: 10c5387d  Table: 2cb7006a  DAC: 00000015
  Backtrace:
    get_pageblock_flags_group+0x0/0xb0
    unset_migratetype_isolate+0x0/0x1b0
    undo_isolate_page_range+0x0/0xdc
    __alloc_contig_range+0x0/0x34c
    alloc_contig_range+0x0/0x18

This issue arises because, when calling unset_migratetype_isolate() to unset a part of CMA memory, it tries to access the buddy page to get its status:

  if (order >= pageblock_order) {
          page_idx = page_to_pfn(page) & ((1 << MAX_ORDER) - 1);
          buddy_idx = __find_buddy_index(page_idx, order);
          buddy = page + (buddy_idx - page_idx);

          if (!is_migrate_isolate_page(buddy)) {

But the start address of this part of CMA memory is very close to a part of memory that is reserved at boot time (not in the buddy system). So add a check before accessing it.

[akpm@linux-foundation.org: use conventional code layout] Signed-off-by: Hui Zhu <zhuhui@xiaomi.com> Suggested-by: Laura Abbott <labbott@redhat.com> Suggested-by: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
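A sketch of the described guard, assuming the usual pfn_valid_within() helper for holes inside a MAX_ORDER block:

  /* Sketch: only inspect the buddy if its pfn is actually backed;
   * boot-time reserved memory bordering CMA is not in the buddy
   * system, so dereferencing its page struct state can oops. */
  if (pfn_valid_within(page_to_pfn(buddy)) &&
      !is_migrate_isolate_page(buddy)) {
          /* ... unisolate as before ... */
  }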
2015-05-14  uidgid: make uid_valid and gid_valid work with !CONFIG_MULTIUSER  (Josh Triplett, 1 file, -2/+2)
{u,g}id_valid call {u,g}id_eq, which calls __k{u,g}id_val on both arguments and compares. With !CONFIG_MULTIUSER, __k{u,g}id_val return a constant 0, which makes {u,g}id_valid always return false. Change {u,g}id_valid to compare their argument against -1 instead. That produces identical results in the normal CONFIG_MULTIUSER=y case, but with !CONFIG_MULTIUSER will make {u,g}id_valid constant-fold into "return true;" rather than "return false;". This fixes uses of devpts without CONFIG_MULTIUSER. Signed-off-by: Josh Triplett <josh@joshtriplett.org> Reported-by: Fengguang Wu <fengguang.wu@intel.com> Cc: Peter Hurley <peter@hurleysoftware.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
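The described change, in sketch form for the uid variant:

  /* Before: routed through uid_eq()/__kuid_val(), which is constant 0
   * with !CONFIG_MULTIUSER, so this was always false: */
  static inline bool uid_valid(kuid_t uid)
  {
          return !uid_eq(uid, INVALID_UID);
  }

  /* After: compare against -1 directly; identical result with
   * CONFIG_MULTIUSER=y, constant-folds to true without it: */
  static inline bool uid_valid(kuid_t uid)
  {
          return __kuid_val(uid) != (uid_t) -1;
  }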
2015-05-14  kernfs: do not account ino_ida allocations to memcg  (Vladimir Davydov, 1 file, -1/+8)
root->ino_ida is used for kernfs inode number allocations. Since IDA has a layered structure, different IDs can reside on the same layer, which is currently accounted to some memory cgroup. The problem is that each kmem cache of a memory cgroup has its own directory on sysfs (under /sys/fs/kernel/<cache-name>/cgroup). If the inode number of such a directory or any file in it gets allocated from a layer accounted to the cgroup which the cache is created for, the cgroup will get pinned for good, because one has to free all kmem allocations accounted to a cgroup in order to release it and destroy all its kmem caches. That said we must not account layers of ino_ida to any memory cgroup. Since per net init operations may create new sysfs entries directly (e.g. lo device) or indirectly (nf_conntrack creates a new kmem cache per each namespace, which, in turn, creates new sysfs entries), an easy way to reproduce this issue is by creating network namespace(s) from inside a kmem-active memory cgroup. Signed-off-by: Vladimir Davydov <vdavydov@parallels.com> Acked-by: Tejun Heo <tj@kernel.org> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Michal Hocko <mhocko@suse.cz> Cc: Christoph Lameter <cl@linux.com> Cc: Pekka Enberg <penberg@kernel.org> Cc: David Rientjes <rientjes@google.com> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Greg Thelen <gthelen@google.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: <stable@vger.kernel.org> [4.0.x] Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2015-05-14  gfp: add __GFP_NOACCOUNT  (Vladimir Davydov, 3 files, -1/+8)
Not all kmem allocations should be accounted to memcg. The following patch gives an example when accounting of a certain type of allocations to memcg can effectively result in a memory leak. This patch adds the __GFP_NOACCOUNT flag which if passed to kmalloc and friends will force the allocation to go through the root cgroup. It will be used by the next patch. Note, since in case of kmemleak enabled each kmalloc implies yet another allocation from the kmemleak_object cache, we add __GFP_NOACCOUNT to gfp_kmemleak_mask. Alternatively, we could introduce a per kmem cache flag disabling accounting for all allocations of a particular kind, but (a) we would not be able to bypass accounting for kmalloc then and (b) a kmem cache with this flag set could not be merged with a kmem cache without this flag, which would increase the number of global caches and therefore fragmentation even if the memory cgroup controller is not used. Despite its generic name, currently __GFP_NOACCOUNT disables accounting only for kmem allocations while user page allocations are always charged. To catch abusing of this flag, a warning is issued on an attempt of passing it to mem_cgroup_try_charge. Signed-off-by: Vladimir Davydov <vdavydov@parallels.com> Cc: Tejun Heo <tj@kernel.org> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Michal Hocko <mhocko@suse.cz> Cc: Christoph Lameter <cl@linux.com> Cc: Pekka Enberg <penberg@kernel.org> Cc: David Rientjes <rientjes@google.com> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Greg Thelen <gthelen@google.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: <stable@vger.kernel.org> [4.0.x] Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
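Usage is a plain flag on the allocation site; a sketch:

  /* Sketch: route this allocation past memcg accounting so it can
   * never pin a child cgroup. */
  obj = kmalloc(size, GFP_KERNEL | __GFP_NOACCOUNT);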
2015-05-14  tools/vm: fix page-flags build  (Andi Kleen, 1 file, -1/+1)
libabikfs.a doesn't exist anymore, so we now need to link with libapi.a. Signed-off-by: Andi Kleen <ak@linux.intel.com> Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2015-05-14  drivers/rtc/rtc-armada38x.c: remove unused local `flags'  (Andrew Morton, 1 file, -1/+1)
Reported-by: Fengguang Wu <fengguang.wu@gmail.com> Cc: Gregory CLEMENT <gregory.clement@free-electrons.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2015-05-14  jbd2: fix r_count overflows leading to buffer overflow in journal recovery  (Darrick J. Wong, 2 files, -9/+19)
The journal revoke block recovery code does not check r_count for sanity, which means that an evil value of r_count could result in the kernel reading off the end of the revoke table and into whatever garbage lies beyond. This could crash the kernel, so fix that. However, in testing this fix, I discovered that the code to write out the revoke tables also was not correctly checking to see if the block was full -- the current offset check is fine so long as the revoke table space size is a multiple of the record size, but this is not true when either journal_csum_v[23] are set. Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu> Reviewed-by: Jan Kara <jack@suse.cz> Cc: stable@vger.kernel.org
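A sketch of the kind of sanity check described, at the point where a revoke block is scanned (bounds assumed from the description, not the verbatim patch):

  /* Sketch: clamp r_count before walking the revoke records so an
   * evil value cannot send us past the end of the block. */
  rcount = be32_to_cpu(header->r_count);
  if (rcount < sizeof(jbd2_journal_revoke_header_t) ||
      rcount > journal->j_blocksize)
          return -EINVAL;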
2015-05-14  ext4: check for zero length extent explicitly  (Eryu Guan, 1 file, -1/+1)
The following commit introduced a bug when checking for a zero-length extent:

  5946d08 ext4: check for overlapping extents in ext4_valid_extent_entries()

A zero-length extent could pass the check if lblock is zero. Add the explicit check for zero length back. Signed-off-by: Eryu Guan <guaneryu@gmail.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu> Cc: stable@vger.kernel.org
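In sketch form, the restored check inside the extent validation loop (context simplified):

  /* Sketch: reject a zero-length extent explicitly; the overlap
   * check alone lets it slip through when lblock is zero. */
  if (ext4_ext_get_actual_len(ext) == 0)
          return 0;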
2015-05-14  ext4: fix NULL pointer dereference when journal restart fails  (Lukas Czerner, 2 files, -9/+22)
Currently when journal restart fails, we'll have the h_transaction of the handle set to NULL to indicate that the handle has been effectively aborted. We handle this situation quietly in jbd2_journal_stop() and just free the handle and exit, because everything else has been done before we attempted (and failed) to restart the journal. Unfortunately there are a number of problems with that approach, introduced with commit 41a5b913197c "jbd2: invalidate handle if jbd2_journal_restart() fails" First of all, in ext4 jbd2_journal_stop() will be called through __ext4_journal_stop(), where we would try to get hold of the superblock by dereferencing h_transaction, which in this case would lead to a NULL pointer dereference and crash. In addition we're going to free the handle regardless of the refcount, which is bad as well, because others up the call chain will still reference the handle, so we might potentially reference already freed memory. Moreover it's expected that we'll get an aborted handle as well as a detached handle in some of the journalling functions as the error propagates up the stack, so it's unnecessary to call WARN_ON every time we get a detached handle. And finally we might leak some memory by forgetting to free the reserved handle in jbd2_journal_stop() in the case where the handle was detached from the transaction (h_transaction is NULL). Fix the NULL pointer dereference in __ext4_journal_stop() by just calling jbd2_journal_stop() quietly, as suggested by Jan Kara. Also fix the potential memory leak in jbd2_journal_stop() and use proper handle refcounting before we attempt to free it, to avoid use-after-free issues. And finally remove all WARN_ON(!transaction) from the code so that we do not get random traces when something goes wrong, because when journal restart fails we will get to some of those functions. Cc: stable@vger.kernel.org Signed-off-by: Lukas Czerner <lczerner@redhat.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu> Reviewed-by: Jan Kara <jack@suse.cz>
2015-05-14  ext4: remove unused function prototype from ext4.h  (Theodore Ts'o, 1 file, -1/+0)
The ext4_extent_tree_init() function was removed from the ext4 code a long time ago; all that remained was an unused function prototype in ext4.h. Google-Bug-Id: 4530137 Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2015-05-14  ext4: don't save the error information if the block device is read-only  (Theodore Ts'o, 1 file, -0/+2)
Google-Bug-Id: 20939131 Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2015-05-14  ext4: fix lazytime optimization  (Theodore Ts'o, 1 file, -1/+1)
We had a fencepost error in the lazytime optimization which meant that a timestamp would get written to the wrong inode. Cc: stable@vger.kernel.org Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2015-05-14  MAINTAINERS: ARM: EXYNOS: Add Krzysztof Kozlowski as co-maintainer  (Krzysztof Kozlowski, 1 file, -0/+1)
Add Krzysztof Kozlowski as a co-maintainer of Samsung Exynos ARM architecture to review the patches. Patches will go as usual - picked up by Kukjin Kim. Cc: Russell King <linux@arm.linux.org.uk> Cc: Kukjin Kim <kgene@kernel.org> Cc: Kevin Hilman <khilman@kernel.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Olof Johansson <olof@lixom.net> Cc: linux-samsung-soc@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com> Acked-by: Javier Martinez Canillas <javier.martinez@collabora.co.uk> Acked-by: Tobias Jakobi <liquid.acid@gmx.net> Acked-by: Olof Johansson <olof@lixom.net> Acked-by: Kevin Hilman <khilman@linaro.org> Signed-off-by: Kevin Hilman <khilman@linaro.org>
2015-05-14  drm/radeon: don't do mst probing if MST isn't enabled.  (Dave Airlie, 1 file, -0/+3)
This causes an oops as we haven't initialised the mst layer. Reported-by: Dave Jones <davej@codemonkey.org.uk> Signed-off-by: Dave Airlie <airlied@redhat.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2015-05-14  firmware: dmi_scan: Fix ordering of product_uuid  (Jean Delvare, 1 file, -3/+4)
In function dmi_present(), dmi_walk_early() calls dmi_table(), which calls dmi_decode(), which ultimately calls dmi_save_uuid(). This last function makes a decision based on the value of global variable dmi_ver. The problem is that this variable is set right _after_ dmi_walk_early() returns. So dmi_save_uuid() always sees dmi_ver == 0 regardless of the actual version implemented. This causes /sys/class/dmi/id/product_uuid to always use the old ordering even on systems implementing DMI/SMBIOS 2.6 or later, which should use the new ordering. This is broken since kernel v3.8 for legacy DMI implementations and since kernel v3.10 for SMBIOS 2 implementations. SMBIOS 3 implementations with the 64-bit entry point are not affected. The first breakage does not matter much as in practice legacy DMI implementations are always for versions older than 2.6, which is when the UUID ordering changed. The second breakage is more problematic as it affects the vast majority of x86 systems manufactured since 2009. Signed-off-by: Jean Delvare <jdelvare@suse.de> Fixes: 9f9c9cbb6057 ("drivers/firmware/dmi_scan.c: fetch dmi version from SMBIOS if it exists") Fixes: 79bae42d51a5 ("dmi_scan: refactor dmi_scan_machine(), {smbios,dmi}_present()") Acked-by: Zhenzhong Duan <zhenzhong.duan@oracle.com> Cc: Ben Hutchings <ben@decadent.org.uk> Cc: Artem Savkov <artem.savkov@gmail.com> Cc: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> Cc: Matt Fleming <matt.fleming@intel.com> Cc: stable@vger.kernel.org [v3.10+]
2015-05-14  firmware: dmi_scan: Simplified displayed version  (Jean Delvare, 1 file, -3/+2)
The trailing .x adds no information for the reader, and if anyone tries to parse that line, this is more work as they have 3 different formats to handle instead of 2. Plus, this makes backporting fixes harder. Signed-off-by: Jean Delvare <jdelvare@suse.de> Fixes: 95be58df74a5 ("firmware: dmi_scan: Use full dmi version for SMBIOS3") Cc: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
2015-05-14  x86/asm/uaccess: Get rid of copy_user_nocache_64.S  (Borislav Petkov, 3 files, -108/+93)
Move __copy_user_nocache() to arch/x86/lib/copy_user_64.S and kill the containing file. No functionality change. Signed-off-by: Borislav Petkov <bp@suse.de> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/1431538944-27724-4-git-send-email-bp@alien8.de Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-05-14  x86/asm/uaccess: Unify the ALIGN_DESTINATION macro  (Borislav Petkov, 3 files, -48/+25)
Pull it up into the header and kill duplicate versions. Separately, both macros are identical:

  35948b2bd3431aee7149e85cfe4becbc  /tmp/a
  35948b2bd3431aee7149e85cfe4becbc  /tmp/b

Signed-off-by: Borislav Petkov <bp@suse.de> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/1431538944-27724-3-git-send-email-bp@alien8.de Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-05-14  x86/asm/uaccess: Remove FIX_ALIGNMENT define from copy_user_nocache_64.S  (Borislav Petkov, 1 file, -5/+0)
No code changed:

  # arch/x86/lib/copy_user_nocache_64.o:

     text    data     bss     dec     hex  filename
      390       0       0     390     186  copy_user_nocache_64.o.before
      390       0       0     390     186  copy_user_nocache_64.o.after

  md5:
     7fa0577b28700af89d3a67a8b590426e  copy_user_nocache_64.o.before.asm
     7fa0577b28700af89d3a67a8b590426e  copy_user_nocache_64.o.after.asm

Signed-off-by: Borislav Petkov <bp@suse.de> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/1431538944-27724-2-git-send-email-bp@alien8.de Signed-off-by: Ingo Molnar <mingo@kernel.org>