path: root/tools/perf/scripts/python/export-to-postgresql.py
2016-08-01  KVM: PPC: Introduce KVM_CAP_PPC_HTM  (Sam Bobroff, 2 files, -0/+5)
Introduce a new KVM capability, KVM_CAP_PPC_HTM, that can be queried to determine if a PowerPC KVM guest should use HTM (Hardware Transactional Memory). This will be used by QEMU to populate the pa-features bits in the guest's device tree. Signed-off-by: Sam Bobroff <sam.bobroff@au1.ibm.com> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
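For context, a minimal user-space sketch of how a VMM such as QEMU could query the new capability; it assumes KVM_CAP_PPC_HTM is visible through <linux/kvm.h> and is not QEMU's actual code.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    int main(void)
    {
        int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
        if (kvm < 0)
            return 1;

        /* KVM_CHECK_EXTENSION returns > 0 when the capability is supported */
        int htm = ioctl(kvm, KVM_CHECK_EXTENSION, KVM_CAP_PPC_HTM);
        printf("guest HTM %savailable\n", htm > 0 ? "" : "not ");
        close(kvm);
        return 0;
    }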
2016-08-01  MIPS: Select HAVE_KVM for MIPS64_R{2,6}  (James Hogan, 1 file, -0/+2)
We are now able to support KVM T&E with MIPS32 guests on some MIPS64r2 and MIPS64r6 hosts, so select HAVE_KVM so it can be enabled. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-08-01  MIPS: KVM: Reset CP0_PageMask during host TLB flush  (James Hogan, 1 file, -0/+2)
KVM sometimes flushes host TLB entries, reading each one to check if it corresponds to a guest KSeg0 address. In the absence of EntryHi.EHInv bits to invalidate the whole entry, the entries will be set to unique virtual addresses in KSeg0 (which is not TLB mapped), spaced 2*PAGE_SIZE apart. The TLB read however will clobber the CP0_PageMask register with whatever page size that TLB entry had, and that same page size will be written back into the TLB entry along with the unique address. This would cause breakage when transparent huge pages are enabled on 64-bit host kernels, since huge page entries will overlap other nearby entries when separated by only 2*PAGE_SIZE, causing a machine check exception. Fix this by restoring the old CP0_PageMask value (which should be set to the normal page size) after reading the TLB entry if we're going to go ahead and invalidate it. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-08-01  MIPS: KVM: Fix ptr->int cast via KVM_GUEST_KSEGX()  (James Hogan, 1 file, -1/+1)
kvm_mips_trans_replace() passes a pointer to KVM_GUEST_KSEGX(). This breaks on 64-bit builds due to the cast of that 64-bit pointer to a different sized 32-bit int. Cast the pointer argument to an unsigned long to work around the warning. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-08-01  MIPS: KVM: Sign extend MFC0/RDHWR results  (James Hogan, 1 file, -3/+4)
When emulating MFC0 instructions to load 32-bit values from guest COP0 registers and the RDHWR instruction to read the CC (Count) register, sign extend the result to comply with the MIPS64 architecture. The result must be in canonical 32-bit form or the guest may malfunction. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-08-01  MIPS: KVM: Fix 64-bit big endian dynamic translation  (James Hogan, 1 file, -0/+8)
The MFC0 and MTC0 instructions in the guest which cause traps can be replaced with 32-bit loads and stores to the commpage, however on big endian 64-bit builds the offset needs to have 4 added so as to load/store the least significant half of the long instead of the most significant half. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-08-01  MIPS: KVM: Fail if ebase doesn't fit in CP0_EBase  (James Hogan, 1 file, -0/+12)
Fail if the address of the allocated exception base doesn't fit into the CP0_EBase register. This can happen on MIPS64 if CP0_EBase.WG isn't implemented but RAM is available outside of the range of KSeg0. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-08-01  MIPS: KVM: Use 64-bit CP0_EBase when appropriate  (James Hogan, 1 file, -3/+22)
Update the KVM entry point to write CP0_EBase as a 64-bit register when it is 64-bits wide, and to set the WG (write gate) bit if it exists in order to write bits 63:30 (or 31:30 on MIPS32). Prior to MIPS64r6 it was UNDEFINED to perform a 64-bit read or write of a 32-bit COP0 register. Since this is dynamically generated code, generate the right type of access depending on whether the kernel is 64-bit and cpu_has_ebase_wg. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-08-01  MIPS: KVM: Set CP0_Status.KX on MIPS64  (James Hogan, 1 file, -2/+8)
Update the KVM entry code to set the CP0_Status.KX bit on 64-bit kernels. This is important to allow the entry code, running in kernel mode, to access the full 64-bit address space right up to the point of entering the guest, and immediately after exiting the guest, so it can safely restore & save the guest context from 64-bit segments. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-08-01  MIPS: KVM: Make entry code MIPS64 friendly  (James Hogan, 1 file, -24/+24)
The MIPS KVM entry code (originally kvm_locore.S, later locore.S, and now entry.c) has never quite been right when built for 64-bit, using 32-bit instructions when 64-bit instructions were needed for handling 64-bit registers and pointers. Fix several cases of this now. The changes roughly fall into the following categories. - COP0 scratch registers contain guest register values and the VCPU pointer, and are themselves full width. Similarly CP0_EPC and CP0_BadVAddr registers are full width (even though technically we don't support 64-bit guest address spaces with trap & emulate KVM). Use MFC0/MTC0 for accessing them. - Handling of stack pointers and the VCPU pointer must match the pointer size of the kernel ABI (always o32 or n64), so use ADDIU. - The CPU number in thread_info, and the guest_{user,kernel}_asid arrays in kvm_vcpu_arch are all 32 bit integers, so use lw (instead of LW) to load them. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-08-01  MIPS: KVM: Use kmap instead of CKSEG0ADDR()  (James Hogan, 2 files, -7/+17)
There are several unportable uses of CKSEG0ADDR() in MIPS KVM, which implicitly assume that a host physical address will be in the low 512MB of the physical address space (accessible in KSeg0). These assumptions don't hold for highmem or on 64-bit kernels. When interpreting the guest physical address when reading or overwriting a trapping instruction, use kmap_atomic() to get a usable virtual address to access guest memory, which is portable to 64-bit and highmem kernels. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-08-01  MIPS: KVM: Use virt_to_phys() to get commpage PFN  (James Hogan, 1 file, -1/+1)
Calculate the PFN of the commpage using virt_to_phys() instead of CPHYSADDR(). This is more portable as kzalloc() may allocate from XKPhys instead of KSeg0 on 64-bit kernels, which CPHYSADDR() doesn't handle. This is sufficient for highmem kernels too since kzalloc() will allocate from lowmem in KSeg0. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-08-01  MIPS: Fix definition of KSEGX() for 64-bit  (James Hogan, 1 file, -1/+1)
The KSEGX() macro is defined to 32-bit sign extend the address argument and logically AND the result with 0xe0000000, with the final result usually compared against one of the CKSEG macros. However the literal 0xe0000000 is unsigned as the high bit is set, and is therefore zero-extended on 64-bit kernels, resulting in the sign extension bits of the argument being masked to zero. This results in the odd situation where KSEGX(CKSEG0) != CKSEG0, i.e. (0xffffffff80000000 & 0x00000000e0000000) != 0xffffffff80000000. Fix this by 32-bit sign extending the 0xe0000000 literal using _ACAST32_. This will help some MIPS KVM code handling 32-bit guest addresses to work on 64-bit host kernels, but will also affect KSEGX in dec_kn01_be_backend() on a 64-bit DECstation kernel, and the SiByte DMA page ops KSEGX check in clear_page() and copy_page() on 64-bit SB1 kernels, neither of which appear to be designed with 64-bit segments in mind anyway. Signed-off-by: James Hogan <james.hogan@imgtec.com> Acked-by: Ralf Baechle <ralf@linux-mips.org> Cc: Maciej W. Rozycki <macro@linux-mips.org> Cc: linux-mips@linux-mips.org Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
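The sign-extension mismatch can be reproduced in a few lines of user space; this is an illustrative stand-alone demo, not the kernel header (ACAST32 stands in for the kernel's _ACAST32_ cast).

    #include <stdio.h>
    #include <stdint.h>

    /* 32-bit sign-extending cast, standing in for the kernel's _ACAST32_ */
    #define ACAST32(x)       ((int64_t)(int32_t)(x))

    #define KSEGX_BROKEN(a)  (ACAST32(a) & 0xe0000000)           /* mask zero-extends */
    #define KSEGX_FIXED(a)   (ACAST32(a) & ACAST32(0xe0000000))  /* mask sign-extends */

    int main(void)
    {
        uint64_t ckseg0 = 0xffffffff80000000ULL;

        /* prints 0x80000000: the sign-extension bits of the address are masked away */
        printf("broken: %#llx\n", (unsigned long long)KSEGX_BROKEN(ckseg0));
        /* prints 0xffffffff80000000: KSEGX(CKSEG0) == CKSEG0 again */
        printf("fixed:  %#llx\n", (unsigned long long)KSEGX_FIXED(ckseg0));
        return 0;
    }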
2016-08-01  KVM: VMX: Add VMCS to CPU's loaded VMCSs before VMPTRLD  (Jim Mattson, 1 file, -11/+15)
Kexec needs to know the addresses of all VMCSs that are active on each CPU, so that it can flush them from the VMCS caches. It is safe to record superfluous addresses that are not associated with an active VMCS, but it is not safe to omit an address associated with an active VMCS. After a call to vmcs_load, the VMCS that was loaded is active on the CPU. The VMCS should be added to the CPU's list of active VMCSs before it is loaded. Signed-off-by: Jim Mattson <jmattson@google.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
2016-08-01  kvm: x86: nVMX: maintain internal copy of current VMCS  (David Matlack, 1 file, -3/+28)
KVM maintains L1's current VMCS in guest memory, at the guest physical page identified by the argument to VMPTRLD. This makes hairy time-of-check to time-of-use bugs possible, as VCPUs can be writing the VMCS page in memory while KVM is emulating VMLAUNCH and VMRESUME. The spec documents that writing to the VMCS page while it is loaded is "undefined". Therefore it is reasonable to load the entire VMCS into an internal cache during VMPTRLD and ignore writes to the VMCS page -- the guest should be using VMREAD and VMWRITE to access the current VMCS. To adhere to the spec, KVM should flush the current VMCS during VMPTRLD, and the target VMCS during VMCLEAR (as given by the operand to VMCLEAR). Since this implementation of VMCS caching only maintains the current VMCS, VMCLEAR will only do a flush if the operand to VMCLEAR is the current VMCS pointer. KVM will also flush during VMXOFF, which is not mandated by the spec, but also not in conflict with the spec. Signed-off-by: David Matlack <dmatlack@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-07-31  hwmon: (adt7411) set sane values for CFG1 and CFG3  (Michael Walle, 1 file, -4/+44)
According to the datasheet we have to set some bits as 0 and others as 1. Make sure we do this for CFG1 and CFG3. Signed-off-by: Michael Walle <michael@walle.cc> Signed-off-by: Guenter Roeck <linux@roeck-us.net>
2016-07-31  hwmon: (iio_hwmon) fix memory leak in name attribute  (Quentin Schulz, 1 file, -12/+12)
The "name" variable's memory is now freed when the device is destroyed, thanks to the devm allocation function. Signed-off-by: Quentin Schulz <quentin.schulz@free-electrons.com> Reported-by: Guenter Roeck <linux@roeck-us.net> Fixes: e0f8a24e0edfd ("staging:iio::hwmon interface client driver.") Fixes: 61bb53bcbdd86 ("hwmon: (iio_hwmon) Add support for humidity sensors") Signed-off-by: Guenter Roeck <linux@roeck-us.net>
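A hedged sketch of the devm-managed allocation pattern this refers to (names are illustrative, not the driver's exact code): memory obtained through a devm_* helper is released automatically when the device goes away, so no explicit kfree() is needed in error or remove paths.

    #include <linux/device.h>
    #include <linux/slab.h>
    #include <linux/string.h>

    static int example_set_name(struct device *dev, const char **out_name)
    {
        char *name = devm_kstrdup(dev, "iio_hwmon", GFP_KERNEL);

        if (!name)
            return -ENOMEM;
        *out_name = name;   /* freed by the device core on teardown */
        return 0;
    }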
2016-07-31  hwmon: (ftsteutates) Fix potential memory access error  (Guenter Roeck, 1 file, -1/+1)
Using set_bit() to set a bit in an integer is not a good idea, since the function expects an unsigned long as argument, which can be 64 bit wide. Coverity reports this problem as >>> CID 1364488: Memory - illegal accesses (INCOMPATIBLE_CAST) >>> Pointer "&ret" points to an object whose effective type is "int" >>> (32 bits, signed) but is dereferenced as a wider "unsigned long" (64 bits, unsigned). This may lead to memory corruption. 245 set_bit(1, (unsigned long *)&ret); Just use BIT instead. Cc: Thilo Cestonaro <thilo@cestona.ro> Fixes: 08426eda58e0 ("hwmon: Add driver for FTS BMC chip "Teutates"") Signed-off-by: Guenter Roeck <linux@roeck-us.net>
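A hedged sketch of the difference (driver identifiers omitted): set_bit() treats its target as an unsigned long, which is 64 bits wide on 64-bit hosts, so pointing it at a 32-bit int can write past the variable; plain integer arithmetic with BIT() cannot.

    #include <linux/bitops.h>

    static int example_flag(void)
    {
        int ret = 0;

        /* buggy pattern from the report: set_bit(1, (unsigned long *)&ret);
         * performs a read-modify-write of 64 bits on a 32-bit object */
        ret |= BIT(1);      /* fix: ordinary 32-bit OR, no overrun */
        return ret;
    }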
2016-07-31  hwmon: (tmp102) Improve error handling  (Guenter Roeck, 1 file, -1/+3)
Use devm_add_action_or_reset() instead of devm_add_action(), and check its return code. Signed-off-by: Guenter Roeck <linux@roeck-us.net>
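A hedged sketch of the pattern used by this commit and the lm75/lm90 commits below (callback and data are illustrative): devm_add_action_or_reset() runs the cleanup immediately if it cannot register it, and its return value must be propagated.

    #include <linux/device.h>

    static void example_restore_config(void *data)
    {
        /* undo earlier hardware setup, e.g. restore a saved config register */
    }

    static int example_register_cleanup(struct device *dev, void *data)
    {
        int err;

        err = devm_add_action_or_reset(dev, example_restore_config, data);
        if (err)
            return err;     /* the cleanup already ran; just report the error */
        return 0;
    }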
2016-07-31  hwmon: (lm75) Improve error handling  (Guenter Roeck, 1 file, -2/+4)
Use devm_add_action_or_reset() instead of devm_add_action(), and check its return value. Signed-off-by: Guenter Roeck <linux@roeck-us.net>
2016-07-31  hwmon: (lm90) Improve error handling  (Guenter Roeck, 1 file, -5/+7)
Replace devm_add_action() with devm_add_action_or_reset(), and check its return value. Reviewed-by: Jean Delvare <jdelvare@suse.de> Signed-off-by: Guenter Roeck <linux@roeck-us.net>
2016-07-31  hwmon: (lm90) Add missing assignment  (Guenter Roeck, 1 file, -1/+1)
Coverity reports the following error. >>> CID 1364474: Error handling issues (CHECKED_RETURN) >>> Calling "lm90_read_reg" without checking return value (as is done >>> elsewhere 28 out of 29 times). 532 lm90_read_reg(client, LM90_REG_R_REMOTE_LOWH); 533 if (val < 0) 534 return val; Fixes: 10bfef47bd259 ("hwmon: (lm90) Read limit registers only once") Reviewed-by: Jean Delvare <jdelvare@suse.de> Signed-off-by: Guenter Roeck <linux@roeck-us.net>
2016-07-31  hwmon: (sht3x) set initial jiffies to last_update  (Matt Ranostay, 1 file, -1/+1)
Handling the wraparound requires the data->last_update to be set to an initial jiffies value. Otherwise on 32-bit systems you will not be able to request a reading till the 5 minute jiffies rollover happens. Cc: Guenter Roeck <linux@roeck-us.net> Cc: David Frey <david.frey@sensirion.com> Signed-off-by: Matt Ranostay <mranostay@gmail.com> Reviewed-by: Jean Delvare <jdelvare@suse.de> Fixes: 7c84f7f80d6fc ("hwmon: add support for Sensirion SHT3x sensors") Signed-off-by: Guenter Roeck <linux@roeck-us.net>
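A hedged sketch of the jiffies arithmetic behind this; identifiers are illustrative, not the driver's exact code. On 32-bit, jiffies is initialised roughly five minutes before its wrap point, so a last_update left at zero makes time_after() report "too soon" until the counter wraps; seeding last_update from jiffies at probe time avoids that.

    #include <linux/jiffies.h>
    #include <linux/types.h>

    struct sht3x_like_data {            /* illustrative structure */
        unsigned long last_update;      /* jiffies of the last reading */
        unsigned long interval;         /* minimum gap between readings */
    };

    static bool reading_due(struct sht3x_like_data *data)
    {
        /* false until jiffies passes last_update + interval; with
         * last_update == 0 that only happens after the jiffies wrap */
        return time_after(jiffies, data->last_update + data->interval);
    }

    /* at probe time: data->last_update = jiffies - data->interval;  (illustrative) */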
2016-07-31  s390/ftrace/jprobes: Fix conflict between jprobes and function graph tracing  (Jiri Olsa, 1 file, -0/+12)
This fixes the same issue Steven already fixed for x86 in following commit: 237d28db036e ftrace/jprobes/x86: Fix conflict between jprobes and function graph tracing It fixes the crash, that happens when function graph tracing and jprobes are used simultaneously. Please refer to above commit for details. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Acked-by: Steven Rostedt <rostedt@goodmis.org>
2016-07-31  s390: Define AT_VECTOR_SIZE_ARCH for ARCH_DLINFO  (James Hogan, 2 files, -0/+3)
AT_VECTOR_SIZE_ARCH should be defined with the maximum number of NEW_AUX_ENT entries that ARCH_DLINFO can contain, but it wasn't defined for s390 at all even though ARCH_DLINFO can contain one NEW_AUX_ENT when VDSO is enabled. This shouldn't be a problem as AT_VECTOR_SIZE_BASE includes space for AT_BASE_PLATFORM which s390 doesn't use, but let's define it now and add the comment above ARCH_DLINFO as found in several other architectures to remind future modifiers of ARCH_DLINFO to keep AT_VECTOR_SIZE_ARCH up to date. Fixes: b020632e40c3 ("[S390] introduce vdso on s390") Signed-off-by: James Hogan <james.hogan@imgtec.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: linux-s390@vger.kernel.org
2016-07-31  s390/zcrypt: fix possible memory leak in ap_module_init()  (Wei Yongjun, 1 file, -1/+3)
ap_configuration is malloced in ap_module_init() and should be freed before leaving from the error handling cases, otherwise it may cause memory leak. Signed-off-by: Wei Yongjun <weiyj.lk@gmail.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
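A hedged sketch of the error-path pattern described; the structure, helper, and label names are illustrative stand-ins, not the driver's exact code.

    #include <linux/slab.h>

    /* illustrative stand-ins for the real ap_configuration and later setup */
    struct ap_config_example { unsigned int flags; };
    static struct ap_config_example *ap_configuration;
    static int register_rest(void) { return 0; }

    static int example_module_init(void)
    {
        int rc;

        ap_configuration = kzalloc(sizeof(*ap_configuration), GFP_KERNEL);
        if (!ap_configuration)
            return -ENOMEM;

        rc = register_rest();
        if (rc)
            goto out_free;
        return 0;

    out_free:
        kfree(ap_configuration);        /* previously leaked on this path */
        ap_configuration = NULL;
        return rc;
    }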
2016-07-31  s390/numa: only set possible nodes within node_possible_map  (Heiko Carstens, 2 files, -1/+11)
Make sure that only those nodes appear in the node_possible_map that may actually be used. Usually that means that the node online and possible maps are identical. For mode "plain" we only have one node, for mode "emu" we have "emu_nodes" nodes. Before this the possible map included (with default config) 16 nodes while usually only one was used. That made a couple of loops that iterated over all possible nodes do more work than necessary. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Acked-by: Michael Holzheu <holzheu@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2016-07-31  s390/als: fix compile with gcov enabled  (Heiko Carstens, 1 file, -0/+1)
Fix this one when gcov is enabled: arch/s390/kernel/als.o:(.data+0x118): undefined reference to `__gcov_merge_add' arch/s390/kernel/als.o: In function `_GLOBAL__sub_I_65535_0_verify_facilities': (.text.startup+0x8): undefined reference to `__gcov_init' Please merge with "s390/als: convert architecture level set code to C". Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2016-07-31  s390/facilities: do not generate DWORDS define anymore  (Heiko Carstens, 1 file, -1/+0)
The architecture level set code has been converted to C and doesn't need a define to figure out array sizes. Since the old code was the only user of the DWORDS define, we can get rid of it again. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Reviewed-by: Sascha Silbe <silbe@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2016-07-31  s390/als: print missing facilities on facility mismatch  (Heiko Carstens, 1 file, -0/+48)
If the kernel needs more facilities to run than the machine it is running on provides, print the facility bit numbers which are missing. This makes it easy to tell what went wrong and whether the machine simply does not provide a required facility or whether the kernel or the hypervisor may have a bug. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Reviewed-by: Sascha Silbe <silbe@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2016-07-31  s390/als: print machine type on facility mismatch  (Heiko Carstens, 1 file, -4/+34)
If we have a facility mismatch, the kernel only emits a warning that the processor is not recent enough and stops operating. This doesn't give much of an idea of what actually went wrong. As a first step, print the machine type in addition. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Reviewed-by: Sascha Silbe <silbe@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2016-07-31  s390/als: convert architecture level set code to C  (Heiko Carstens, 5 files, -43/+60)
There is no reason to have this code in assembly language. Therefore convert it to C. Note that this code needs special treatment: it is called very early and one of the side effects is that e.g. the bss section is not cleared. Therefore the preferred way for static variables is to put them on the stack which has a size of 16KB. There is no functional change with this patch. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Reviewed-by: Sascha Silbe <silbe@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2016-07-31  s390/sclp: move uninitialized data to data section  (Heiko Carstens, 1 file, -2/+3)
The early sclp code may be called before the bss section is cleared. Therefore move all variables to the data section. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2016-07-31  s390/zcrypt: Fix zcrypt suspend/resume behavior  (Ingo Tuchscherer, 5 files, -6/+47)
The device suspend call triggers all ap devices to fetch potentially available response messages from the queues. Therefore the corresponding zcrypt device, that is allocated asynchronously after ap device probing, needs to be fully prepared. This race condition could lead to uninitialized response buffers while trying to read from the queues. Introduce a new callback within the ap layer to get noticed when a zcrypt device is fully prepared. Additional checks prevent reading from devices that are not fully prepared. Signed-off-by: Ingo Tuchscherer <ingo.tuchscherer@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2016-07-31  s390/cio: fix premature wakeup during chp configure  (Sebastian Ott, 1 file, -14/+32)
We store requests for channel path configure operations in an array but maintain an additional cfg_busy variable (indicating if we have requests stored in said array). When 2 tasks request a channel path configure operation cfg_busy could be set to 0 even if we still have unprocessed requests. This would lead to the second task being woken up although its request was not processed yet. Fix that by getting rid of cfg_busy and use the chp_cfg_task array in the wake up condition. Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com> Reviewed-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2016-07-31  s390/cio: convert cfg_lock mutex to spinlock  (Sebastian Ott, 1 file, -9/+9)
cfg_lock is never held long and we don't want to sleep while the lock is being held. Thus it can be converted to a simple spinlock. In addition we can now use the lock during the evaluation of a wake_up condition. Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com> Reviewed-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2016-07-31  s390/mm: clean up pte/pmd encoding  (Gerald Schaefer, 2 files, -24/+48)
The hugetlbfs pte<->pmd conversion functions currently assume that the pmd bit layout is consistent with the pte layout, which is not really true. The SW read and write bits are encoded as the sequence "wr" in a pte, but in a pmd it is "rw". The hugetlbfs conversion assumes that the sequence is identical in both cases, which results in swapped read and write bits in the pmd. In practice this is not a problem, because those pmd bits are only relevant for THP pmds and not for hugetlbfs pmds. The hugetlbfs code works on (fake) ptes, and the converted pte bits are correct. There is another variation in pte/pmd encoding which affects dirty prot-none ptes/pmds. In this case, a pmd has both its HW read-only and invalid bit set, while it is only the invalid bit for a pte. This also has no effect in practice, but it should better be consistent. This patch fixes both inconsistencies by changing the SW read/write bit layout for pmds as well as the PAGE_NONE encoding for ptes. It also makes the hugetlbfs conversion functions more robust by introducing a move_set_bit() macro that uses the pte/pmd bit #defines instead of constant shifts. Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
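A hedged sketch of the move_set_bit() idea described above; the names, the bit #defines in the usage line, and the exact kernel definition are illustrative and may differ from the real code.

    /* if the bit selected by 'from' is set in 'x', return the 'to' mask, so a
     * bit can be translated between the pte and pmd layouts by naming the bits
     * instead of hard-coding shift distances */
    #define move_set_bit(x, from, to)   (((x) & (from)) ? (to) : 0UL)

    /* illustrative use with hypothetical bit #defines for the two layouts:
     * pmd_bits |= move_set_bit(pte_bits, _PAGE_WRITE, _SEGMENT_ENTRY_WRITE); */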
2016-07-30  random: Fix crashes with sparse node ids  (Michael Ellerman, 1 file, -3/+2)
On a system with sparse node ids, eg. a powerpc system with 4 nodes numbered like so: node 0: [mem 0x0000000000000000-0x00000007ffffffff] node 1: [mem 0x0000000800000000-0x0000000fffffffff] node 16: [mem 0x0000001000000000-0x00000017ffffffff] node 17: [mem 0x0000001800000000-0x0000001fffffffff] The code in rand_initialize() will allocate 4 pointers for the pool array, and initialise them correctly. However when we go to use the pool, in eg. extract_crng(), we use numa_node_id() to index into the array. For the higher numbered node ids this leads to random memory corruption, depending on what was kmalloc'ed adjacent to the pool array. Fix it by using nr_node_ids to size the pool array. Fixes: 1e7f583af67b ("random: make /dev/urandom scalable for silly userspace programs") Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
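A hedged sketch of the sizing fix described (struct and helper names are illustrative): allocating by nr_node_ids rather than the number of online nodes keeps pool[numa_node_id()] in bounds even when node ids are sparse.

    #include <linux/slab.h>
    #include <linux/nodemask.h>
    #include <linux/topology.h>

    struct crng_state;                  /* opaque here; illustrative */

    static struct crng_state **alloc_node_pools(void)
    {
        /* nr_node_ids is one more than the highest possible node id, so
         * indexing by numa_node_id() stays in bounds for sparse ids */
        return kcalloc(nr_node_ids, sizeof(struct crng_state *), GFP_KERNEL);
    }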
2016-07-30  drm/nouveau/gr/nv3x: fix instobj write offsets in gr setup  (Ilia Mirkin, 2 files, -4/+4)
This should fix some unaligned access warnings. This is also likely to fix non-descript issues on nv30/nv34 as a result of incorrect channel setup. Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=96836 Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu> Cc: stable@vger.kernel.org Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
2016-07-30  drm/nouveau/acpi: fix lockup with PCIe runtime PM  (Peter Wu, 1 file, -4/+31)
Since "PCI: Add runtime PM support for PCIe ports", the parent PCIe port can be runtime-suspended which disables power resources via ACPI. This is incompatible with DSM, resulting in a GPU device which is still in D3 and locks up the kernel on resume (on a Clevo P651RA, GTX965M). Mirror the behavior of Windows 8 and newer[1] (as observed via an AMLi debugger trace) and stop using the DSM functions for D3cold when power resources are available on the parent PCIe port. pci_d3cold_disable() is not used because on some machines, the old DSM method is broken. On a Lenovo T440p (GT 730M) memory and disk corruption would occur, but that is fixed with this patch[2]. [1]: https://msdn.microsoft.com/windows/hardware/drivers/bringup/firmware-requirements-for-d3cold [2]: https://github.com/Bumblebee-Project/bbswitch/issues/78#issuecomment-223549072 v2: simply check directly for _PR3. Added affected machines. v3: fixed block comment coding style. Reviewed-by: Mika Westerberg <mika.westerberg@linux.intel.com> Signed-off-by: Peter Wu <peter@lekensteyn.nl> Acked-by: Dave Airlie <airlied@redhat.com> Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
2016-07-30  drm/nouveau/acpi: check for function 0x1B before using it  (Peter Wu, 1 file, -5/+13)
Do not invoke function 0x1B without checking for its availability; doing so leads to an infinite loop on some firmware. Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=104791 Fixes: 5addcf0a5f0fad ("nouveau: add runtime PM support (v0.9)") Reviewed-by: Hans de Goede <hdegoede@redhat.com> Signed-off-by: Peter Wu <peter@lekensteyn.nl> Acked-by: Dave Airlie <airlied@redhat.com> Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
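A hedged sketch of the guard described, not nouveau's exact code: per the ACPI _DSM convention, function 0 returns a bitmask in which bit N means "function N is implemented", so that mask should be consulted before function 0x1B is ever invoked.

    #include <stdbool.h>
    #include <stdint.h>

    static bool dsm_func_supported(uint64_t supported_mask, unsigned int func)
    {
        /* bit N of the function-0 result advertises function N */
        return (supported_mask >> func) & 1;
    }

    /* usage: if (dsm_func_supported(mask, 0x1B)) { ...call function 0x1B... } */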
2016-07-30  drm/nouveau/acpi: return supported DSM functions  (Peter Wu, 1 file, -7/+9)
Return the set of supported functions to the caller. No functional changes. Reviewed-by: Hans de Goede <hdegoede@redhat.com> Signed-off-by: Peter Wu <peter@lekensteyn.nl> Acked-by: Dave Airlie <airlied@redhat.com> Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
2016-07-30  drm/nouveau/acpi: ensure matching ACPI handle and supported functions  (Peter Wu, 1 file, -32/+26)
Ensure that the returned set of supported DSM functions (MUX, Optimus) matches the ACPI handle that is set in nouveau_dsm_pci_probe. As there are no machines with a MUX function on just one PCI device and an Optimus on another, there should not be a functional impact. This change however makes this implicit assumption more obvious. Convert int to bool and rename has_dsm to has_mux while at it. Let the caller set nouveau_dsm_priv.dhandle as needed. v2: pass dhandle to the caller. Reviewed-by: Hans de Goede <hdegoede@redhat.com> Signed-off-by: Peter Wu <peter@lekensteyn.nl> Acked-by: Dave Airlie <airlied@redhat.com> Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
2016-07-30  drm/nouveau/fbcon: fix font width not divisible by 8  (Mikulas Patocka, 3 files, -4/+4)
The patch f045f459d925 ("drm/nouveau/fbcon: fix out-of-bounds memory accesses") tries to fix some out-of-bounds memory accesses. Unfortunately, the patch breaks the display when using fonts with a width that is not divisible by 8. The monochrome bitmap for each character is stored in memory by lines from top to bottom. Each line is padded to a full byte. For example, for a 22x11 font, each line is padded to 16 bits, so each character consumes 44 bytes total, that is 11 32-bit words. The patch f045f459d925 changed the logic to "dsize = ALIGN(image->width * image->height, 32) >> 5", which is just 8 words - this is incorrect and causes display corruption. This patch adds the necessary padding of lines to full bytes. This patch should be backported to stable kernels where f045f459d925 was backported. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Fixes: f045f459d925 ("drm/nouveau/fbcon: fix out-of-bounds memory accesses") Cc: stable@vger.kernel.org Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
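A hedged sketch of the corrected size calculation (helper name illustrative), following the arithmetic above: pad each glyph line to a whole byte before rounding the total up to 32-bit words, so a 22x11 font yields 11 words rather than 8.

    #include <linux/kernel.h>   /* ALIGN() */
    #include <linux/types.h>

    static u32 glyph_dwords(u32 width, u32 height)
    {
        /* each scanline is stored padded to a byte boundary; e.g. width 11
         * pads to 16 bits, and 16 * 22 = 352 bits = 11 32-bit words */
        return ALIGN(ALIGN(width, 8) * height, 32) >> 5;
    }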
2016-07-29  documentation: da9052: Update regulator bindings names to match DA9052/53 DTS expectations  (Steve Twiss, 1 file, -11/+11)
Buck and LDO binding name changes. The binding names for the regulators have been changed to match the current expectation from existing device tree source files. This fix rectifies the disparity between what currently exists in some .dts[i] board files and what is listed in this binding document. This change re-aligns those differences and also brings the binding document in-line with the expectations of the product datasheet from Dialog Semiconductor. Bucks and LDOs now follow the expected notation: { buck1, buck2, buck3, buck4 } { ldo1, ldo2, ldo3, ldo4, ldo5, ldo6, ldo7, ldo8, ldo9, ldo10 } Signed-off-by: Steve Twiss <stwiss.opensource@diasemi.com> Signed-off-by: Rob Herring <robh@kernel.org>
2016-07-29  Revert "vfs: add lookup_hash() helper"  (Linus Torvalds, 2 files, -30/+5)
This reverts commit 3c9fe8cdff1b889a059a30d22f130372f2b3885f. As Miklos points out in commit c1b2cc1a765a, the "lookup_hash()" helper is now unused, and in fact, with the hash salting changes, since the hash of a dentry name now depends on the directory dentry it is in, the helper function isn't even really likely to be useful. So rather than keep it around in case somebody else might end up finding a use for it, let's just remove the helper and not trick people into thinking it might be a useful thing. For example, I had obviously completely missed how the helper didn't follow the normal dentry hashing patterns, and how the hash salting patch broke overlayfs. Things would quietly build and look sane, but not work. Suggested-by: Miklos Szeredi <mszeredi@redhat.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-07-29  drm/amd/powerplay: remove enable_clock_power_gatings_tasks from initialize and resume events  (Tom St Denis, 1 file, -2/+0)
Setting the PG state this early would cause lockups in the IP block initialization functions. Signed-off-by: Tom St Denis <tom.stdenis@amd.com> Reviewed-by: Rex Zhu <Rex.Zhu@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2016-07-29  drm/amd/powerplay: move clockgating to after ungating power in pp for uvd/vce  (Tom St Denis, 1 file, -7/+7)
Cannot set clockgating state before ungating power. Signed-off-by: Tom St Denis <tom.stdenis@amd.com> Reviewed-by: Rex Zhu <Rex.Zhu@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2016-07-29  drm/amdgpu: add query device id and revision id into system info entry at CGS  (Huang Rui, 2 files, -1/+9)
This patch adds the device id and revision id to the system info entry at CGS, so callers can get the PCI device id and revision id from amdgpu; more info may be added in the future. The PCI device id will also be used by the powerplay code for now. Suggested-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Huang Rui <ray.huang@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2016-07-29  drm/amdgpu: add new definition in bif header  (Huang Rui, 1 file, -0/+1)
This patch adds a new definition to the bif header, which will be used by the Iceland hardware powertune code. Signed-off-by: Huang Rui <ray.huang@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>