path: root/tools/perf/scripts/python/export-to-postgresql.py
2017-11-03  xfs: Implement xfs_filemap_pfn_mkwrite() using __xfs_filemap_fault()  (Jan Kara, 2 files, +4/-27)
xfs_filemap_pfn_mkwrite() duplicates a lot of __xfs_filemap_fault(). It will also need to handle flushing for synchronous page faults. So just make that function use __xfs_filemap_fault(). Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2017-11-03  ext4: Support for synchronous DAX faults  (Jan Kara, 4 files, +47/-1)
We return IOMAP_F_DIRTY flag from ext4_iomap_begin() when asked to prepare blocks for writing and the inode has some uncommitted metadata changes. In the fault handler ext4_dax_fault() we then detect this case (through VM_FAULT_NEEDDSYNC return value) and call helper dax_finish_sync_fault() to flush metadata changes and insert page table entry. Note that this will also dirty corresponding radix tree entry which is what we want - fsync(2) will still provide data integrity guarantees for applications not using userspace flushing. And applications using userspace flushing can avoid calling fsync(2) and thus avoid the performance overhead. Reviewed-by: Ross Zwisler <ross.zwisler@linux.intel.com> Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
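To illustrate the userspace-flushing model these patches target, here is a minimal, hypothetical sketch (not part of the patch series): it maps a file on a DAX filesystem with MAP_SHARED_VALIDATE | MAP_SYNC, stores data, and persists it with CLWB/SFENCE instead of fsync(2). The file path, the flag fallback values, and the use of x86 CLWB are assumptions about the runtime environment.

/*
 * Hypothetical sketch of userspace flushing on a MAP_SYNC mapping.
 * Assumes a DAX-capable filesystem, a kernel with MAP_SYNC support and an
 * x86 CPU with CLWB; compile with: gcc -O2 -mclwb sync_write.c
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>
#include <immintrin.h>

#ifndef MAP_SHARED_VALIDATE
#define MAP_SHARED_VALIDATE 0x03	/* value chosen by this series */
#endif
#ifndef MAP_SYNC
#define MAP_SYNC 0x80000		/* assumed value; check uapi headers */
#endif

int main(void)
{
	int fd = open("/mnt/pmem/data", O_RDWR);	/* hypothetical DAX file */
	if (fd < 0) {
		perror("open");
		return 1;
	}

	char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
		       MAP_SHARED_VALIDATE | MAP_SYNC, fd, 0);
	if (p == MAP_FAILED) {
		perror("mmap");		/* e.g. EOPNOTSUPP on a non-DAX file */
		close(fd);
		return 1;
	}

	strcpy(p, "hello persistent world");

	/* Flush the written cache line and order the flush; no fsync(2). */
	_mm_clwb(p);
	_mm_sfence();

	munmap(p, 4096);
	close(fd);
	return 0;
}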
2017-11-03  ext4: Simplify error handling in ext4_dax_huge_fault()  (Jan Kara, 1 file, +7/-6)
If transaction starting fails, just bail out of the function immediately instead of checking for that condition throughout the function. Reviewed-by: Ross Zwisler <ross.zwisler@linux.intel.com> Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2017-11-03  dax: Implement dax_finish_sync_fault()  (Jan Kara, 3 files, +87/-0)
Implement a function that filesystems can call to finish handling of synchronous page faults. It takes care of syncing the appropriate file range and inserting the page table entry. Reviewed-by: Ross Zwisler <ross.zwisler@linux.intel.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2017-11-03  dax, iomap: Add support for synchronous faults  (Jan Kara, 3 files, +47/-3)
Add a flag to the iomap interface informing the caller that the inode needs fdatasync(2) for the returned extent to become persistent, and use it in the DAX fault code so that we don't map such extents into page tables immediately. Instead we propagate the information that fdatasync(2) is necessary from dax_iomap_fault() with a new VM_FAULT_NEEDDSYNC flag. The filesystem fault handler is then responsible for calling fdatasync(2) and inserting the pfn into the page tables. Reviewed-by: Ross Zwisler <ross.zwisler@linux.intel.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2017-11-03  mm: Define MAP_SYNC and VM_SYNC flags  (Jan Kara, 4 files, +9/-2)
Define new MAP_SYNC flag and corresponding VMA VM_SYNC flag. As the MAP_SYNC flag is not part of LEGACY_MAP_MASK, currently it will be refused by all MAP_SHARED_VALIDATE map attempts and silently ignored for everything else. Reviewed-by: Ross Zwisler <ross.zwisler@linux.intel.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2017-11-03  dax: Allow tuning whether dax_insert_mapping_entry() dirties entry  (Jan Kara, 1 file, +7/-7)
Currently we dirty radix tree entry whenever dax_insert_mapping_entry() gets called for a write fault. With synchronous page faults we would like to insert clean radix tree entry and dirty it only once we call fdatasync() and update page tables to save some unnecessary cache flushing. Add 'dirty' argument to dax_insert_mapping_entry() for that. Reviewed-by: Ross Zwisler <ross.zwisler@linux.intel.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2017-11-03  dax: Allow dax_iomap_fault() to return pfn  (Jan Kara, 5 files, +12/-11)
For synchronous page fault dax_iomap_fault() will need to return PFN which will then need to be inserted into page tables after fsync() completes. Add necessary parameter to dax_iomap_fault(). Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Ross Zwisler <ross.zwisler@linux.intel.com> Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2017-11-03  dax: Fix comment describing dax_iomap_fault()  (Jan Kara, 1 file, +2/-1)
Add missing argument description. Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Ross Zwisler <ross.zwisler@linux.intel.com> Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2017-11-03  dax: Inline dax_pmd_insert_mapping() into the callsite  (Jan Kara, 2 files, +19/-29)
dax_pmd_insert_mapping() has only one callsite and we will need to further fine tune what it does for synchronous faults. Just inline it into the callsite so that we don't have to pass awkward bools around. Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Ross Zwisler <ross.zwisler@linux.intel.com> Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2017-11-03  dax: Inline dax_insert_mapping() into the callsite  (Jan Kara, 1 file, +19/-27)
dax_insert_mapping() has only one callsite and we will need to further fine tune what it does for synchronous faults. Just inline it into the callsite so that we don't have to pass awkward bools around. Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Ross Zwisler <ross.zwisler@linux.intel.com> Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2017-11-03  dax: Create local variable for vmf->flags & FAULT_FLAG_WRITE test  (Jan Kara, 1 file, +3/-2)
There are already two users and more are coming. Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Ross Zwisler <ross.zwisler@linux.intel.com> Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2017-11-03  dax: Create local variable for VMA in dax_iomap_pte_fault()  (Jan Kara, 1 file, +3/-2)
There are already two users and more are coming. Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Ross Zwisler <ross.zwisler@linux.intel.com> Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2017-11-03  dax: Factor out getting of pfn out of iomap  (Jan Kara, 1 file, +43/-40)
Factor out code to get pfn out of iomap that is shared between PTE and PMD fault path. Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Ross Zwisler <ross.zwisler@linux.intel.com> Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2017-11-03  dax: Simplify arguments of dax_insert_mapping()  (Jan Kara, 1 file, +16/-16)
dax_insert_mapping() has lots of arguments, and many of them are actually duplicated by the vm_fault structure that is passed as well. Change the function to take the same arguments as dax_pmd_insert_mapping(). Reviewed-by: Ross Zwisler <ross.zwisler@linux.intel.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2017-11-03  mm: Remove VM_FAULT_HWPOISON_LARGE_MASK  (Jan Kara, 1 file, +0/-2)
It is unused. Reviewed-by: Ross Zwisler <ross.zwisler@linux.intel.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2017-11-03  mm: introduce MAP_SHARED_VALIDATE, a mechanism to safely define new mmap flags  (Dan Williams, 9 files, +61/-0)
The mmap(2) syscall suffers from the ABI anti-pattern of not validating unknown flags. However, proposals like MAP_SYNC need a mechanism to define new behavior that is known to fail on older kernels without the support. Define a new MAP_SHARED_VALIDATE flag pattern that is guaranteed to fail on all legacy mmap implementations. It is worth noting that the original proposal was for a standalone MAP_VALIDATE flag. However, when that could not be supported by all archs Linus observed: I see why you *think* you want a bitmap. You think you want a bitmap because you want to make MAP_VALIDATE be part of MAP_SYNC etc, so that people can do ret = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED | MAP_SYNC, fd, 0); and "know" that MAP_SYNC actually takes. And I'm saying that whole wish is bogus. You're fundamentally depending on special semantics, just make it explicit. It's already not portable, so don't try to make it so. Rename that MAP_VALIDATE as MAP_SHARED_VALIDATE, make it have a value of 0x3, and make people do ret = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED_VALIDATE | MAP_SYNC, fd, 0); and then the kernel side is easier too (none of that random garbage playing games with looking at the "MAP_VALIDATE bit", but just another case statement in that map type thing. Boom. Done. Similar to ->fallocate() we also want the ability to validate the support for new flags on a per ->mmap() 'struct file_operations' instance basis. Towards that end arrange for flags to be generically validated against a mmap_supported_flags exported by 'struct file_operations'. By default all existing flags are implicitly supported, but new flags require MAP_SHARED_VALIDATE and per-instance-opt-in. Cc: Jan Kara <jack@suse.cz> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Andy Lutomirski <luto@kernel.org> Cc: Andrew Morton <akpm@linux-foundation.org> Suggested-by: Christoph Hellwig <hch@lst.de> Suggested-by: Linus Torvalds <torvalds@linux-foundation.org> Reviewed-by: Ross Zwisler <ross.zwisler@linux.intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
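As a hypothetical usage sketch (not from the patch itself), the validate-or-fall-back pattern this enables looks roughly like the following; the MAP_SYNC value and the exact errno returned by older kernels are assumptions:

#include <errno.h>
#include <stddef.h>
#include <sys/mman.h>

#ifndef MAP_SHARED_VALIDATE
#define MAP_SHARED_VALIDATE 0x03	/* value chosen in this patch */
#endif
#ifndef MAP_SYNC
#define MAP_SYNC 0x80000		/* assumed value; check uapi headers */
#endif

/*
 * Try a MAP_SYNC mapping; because MAP_SHARED_VALIDATE is guaranteed to fail
 * on legacy kernels instead of silently ignoring unknown flags, a failure
 * tells the caller to fall back to MAP_SHARED plus explicit fsync(2).
 */
void *map_maybe_sync(int fd, size_t len, int *have_sync)
{
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_SHARED_VALIDATE | MAP_SYNC, fd, 0);
	if (p != MAP_FAILED) {
		*have_sync = 1;		/* MAP_SYNC really took effect */
		return p;
	}
	if (errno != EOPNOTSUPP && errno != EINVAL)
		return MAP_FAILED;	/* some other error; report it */

	*have_sync = 0;			/* old kernel or unsupported file */
	return mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
}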
2017-11-03  mm: Handle 0 flags in _calc_vm_trans() macro  (Jan Kara, 1 file, +2/-1)
_calc_vm_trans() does not handle the situation when some of the passed flags are 0 (which can happen if these VM flags do not make sense for the architecture). Improve the _calc_vm_trans() macro to return 0 in such a situation. Since all passed flags are constant, this does not add any runtime overhead. Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
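A small standalone demo of the intent; the macro body below is a reconstruction of the include/linux/mman.h helper and may differ in detail from the merged version, and the flag values are made up for illustration:

#include <stdio.h>

/* Reconstructed _calc_vm_trans(): translate flag bit1 in x to flag bit2,
 * returning 0 when either flag is 0 (i.e. undefined on this architecture). */
#define _calc_vm_trans(x, bit1, bit2) \
	((!(bit1) || !(bit2)) ? 0 : \
	 ((bit1) <= (bit2) ? ((x) & (bit1)) * ((bit2) / (bit1)) \
			   : ((x) & (bit1)) / ((bit1) / (bit2))))

#define DEMO_MAP_SYNC	0x80000		/* hypothetical mmap flag */
#define DEMO_VM_SYNC	0x00800000	/* hypothetical VMA flag */
#define DEMO_MAP_NONE	0		/* flag not defined on this arch */

int main(void)
{
	unsigned long flags = DEMO_MAP_SYNC;

	/* Normal translation: 0x80000 -> 0x800000 */
	printf("%#lx\n", (unsigned long)_calc_vm_trans(flags, DEMO_MAP_SYNC, DEMO_VM_SYNC));
	/* Zero flag: translation collapses to 0 instead of mis-translating */
	printf("%#lx\n", (unsigned long)_calc_vm_trans(flags, DEMO_MAP_NONE, DEMO_VM_SYNC));
	return 0;
}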
2017-10-08  Linux 4.14-rc4  (Linus Torvalds, 1 file, +1/-1)
2017-10-06  ARC: [plat-hsdk]: Add reset controller node to manage ethernet reset  (Eugeniy Paltsev, 2 files, +10/-0)
The DW ethernet controller on HSDK sometimes hangs after a SW reset, so add a reset controller node to make it possible to reset the DW ethernet controller HW. Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com> Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2017-10-06  arm64: Ensure fpsimd support is ready before userspace is active  (Suzuki K Poulose, 1 file, +1/-1)
We register the pm/hotplug callbacks for FPSIMD as late_initcall, which happens after the userspace is active (from initramfs via populate_rootfs, a rootfs_initcall). Make sure we are ready even before the userspace could potentially use it, by promoting to a core_initcall. Cc: Will Deacon <will.deacon@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Dave Martin <dave.martin@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-10-06  arm64: Ensure the instruction emulation is ready for userspace  (Suzuki K Poulose, 2 files, +2/-2)
We trap and emulate some instructions (e.g. mrs, deprecated instructions) for userspace. However the handlers for these are registered as late_initcalls, and userspace could already be up and running from the initramfs by that time (with populate_rootfs, which is a rootfs_initcall()). This could cause early applications to fail like: [ 11.152061] modprobe[93]: undefined instruction: pc=0000ffff8ca48ff4 This patch promotes the specific calls to core_initcalls, which are guaranteed to complete before we hit userspace. Cc: stable@vger.kernel.org Cc: Dave Martin <dave.martin@arm.com> Cc: Matthias Brugger <mbrugger@suse.com> Cc: James Morse <james.morse@arm.com> Reported-by: Matwey V. Kornilov <matwey.kornilov@gmail.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
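Illustrative fragment of the kind of change this describes (the actual symbol names in the patch may differ); core_initcalls complete before any userspace can execute, while late_initcalls can race with an initramfs-based init:

/* Before: handler registered too late, initramfs userspace may already run */
late_initcall(armv8_deprecated_init);

/* After: guaranteed to complete before userspace starts */
core_initcall(armv8_deprecated_init);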
2017-10-06  powerpc/powernv: Increase memory block size to 1GB on radix  (Anton Blanchard, 1 file, +9/-1)
Memory hot unplug on PowerNV radix hosts is broken. Our memory block size is 256MB but since we map the linear region with very large pages, each pte we tear down maps 1GB. A hot unplug of one 256MB memory block results in 768MB of memory getting unintentionally unmapped. At this point we are likely to oops. Fix this by increasing our memory block size to 1GB on PowerNV radix hosts. Fixes: 4b5d62ca17a1 ("powerpc/mm: add radix__remove_section_mapping()") Cc: stable@vger.kernel.org # v4.11+ Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-10-05  dm raid: fix incorrect status output at the end of a "recover" process  (Jonathan Brassow, 2 files, +7/-5)
There are three important fields that indicate the overall health and status of an array: dev_health, sync_ratio, and sync_action. They tell us the condition of the devices in the array, and the degree to which the array is synchronized. This commit fixes a condition that is reported incorrectly. When a member of the array is being rebuilt or a new device is added, the "recover" process is used to synchronize it with the rest of the array. When the process is complete, but the sync thread hasn't yet been reaped, it is possible for the state of MD to be: mddev->recovery = [ MD_RECOVERY_RUNNING MD_RECOVERY_RECOVER MD_RECOVERY_DONE ] curr_resync_completed = <max dev size> (but not MaxSector) and all rdevs to be In_sync. This causes the 'array_in_sync' output parameter that is passed to rs_get_progress() to be computed incorrectly and reported as 'false' -- or not in-sync. This in turn causes the dev_health status characters to be reported as all 'a', rather than the proper 'A'. This can cause erroneous output for several seconds at a time, precisely when tools are likely to be checking the condition in response to events raised at the end of a sync process. Fix this by properly calculating the 'array_in_sync' return parameter in rs_get_progress(). Also, remove an unnecessary intermediate 'recovery_cp' variable in rs_get_progress(). Signed-off-by: Jonathan Brassow <jbrassow@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2017-10-05  KVM: add X86_LOCAL_APIC dependency  (Arnd Bergmann, 1 file, +1/-0)
The rework of the posted interrupt handling broke building without support for the local APIC: ERROR: "boot_cpu_physical_apicid" [arch/x86/kvm/kvm-intel.ko] undefined! That configuration is probably not particularly useful anyway, so we can avoid the randconfig failures by adding a Kconfig dependency. Fixes: 8b306e2f3c41 ("KVM: VMX: avoid double list add with VT-d posted interrupts") Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
2017-10-05  ovl: fix regression caused by exclusive upper/work dir protection  (Amir Goldstein, 3 files, +26/-9)
Enforcing exclusive ownership on upper/work dirs caused a docker regression: https://github.com/moby/moby/issues/34672. Euan spotted the regression and pointed to the offending commit. Vivek has brought the regression to my attention and provided this reproducer:
Terminal 1:
  mount -t overlay -o workdir=work,lowerdir=lower,upperdir=upper none merged/
Terminal 2:
  unshare -m
Terminal 1:
  umount merged
  mount -t overlay -o workdir=work,lowerdir=lower,upperdir=upper none merged/
  mount: /root/overlay-testing/merged: none already mounted or mount point busy
To fix the regression, I replaced the error with an alarming warning. With the index feature enabled, mount does fail, but logs a suggestion to override the exclusive dir protection by disabling index. Note that an index=off mount does take the inuse locks, so a concurrent index=off mount will issue the warning and a concurrent index=on mount will fail. Documentation was updated to reflect this change. Fixes: 2cac0c00a6cd ("ovl: get exclusive ownership on upper/work dirs") Cc: <stable@vger.kernel.org> # v4.13 Reported-by: Euan Kemp <euank@euank.com> Reported-by: Vivek Goyal <vgoyal@redhat.com> Signed-off-by: Amir Goldstein <amir73il@gmail.com> Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
2017-10-05  ovl: fix missing unlock_rename() in ovl_do_copy_up()  (Amir Goldstein, 4 files, +22/-24)
Use the ovl_lock_rename_workdir() helper which requires unlock_rename() only on lock success. Fixes: fd210b7d67ee ("ovl: move copy up lock out") Cc: <stable@vger.kernel.org> # v4.13 Signed-off-by: Amir Goldstein <amir73il@gmail.com> Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
2017-10-05  ovl: fix dentry leak in ovl_indexdir_cleanup()  (Amir Goldstein, 1 file, +4/-2)
The index dentry was not released when breaking out of the loop due to an index verification error. Fixes: 415543d5c64f ("ovl: cleanup bad and stale index entries on mount") Cc: <stable@vger.kernel.org> # v4.13 Signed-off-by: Amir Goldstein <amir73il@gmail.com> Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
2017-10-05  ovl: fix dput() of ERR_PTR in ovl_cleanup_index()  (Amir Goldstein, 1 file, +4/-1)
Fixes: caf70cb2ba5d ("ovl: cleanup orphan index entries") Cc: <stable@vger.kernel.org> # v4.13 Signed-off-by: Amir Goldstein <amir73il@gmail.com> Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
2017-10-05  ovl: fix error value printed in ovl_lookup_index()  (Amir Goldstein, 1 file, +1/-0)
Fixes: 359f392ca53e ("ovl: lookup index entry for copy up origin") Cc: <stable@vger.kernel.org> # v4.13 Signed-off-by: Amir Goldstein <amir73il@gmail.com> Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
2017-10-05  ovl: fix may_write_real() for overlayfs directories  (Amir Goldstein, 1 file, +3/-1)
Overlayfs directory file_inode() is the overlay inode whether the real inode is upper or lower. This fixes a regression in xfstest generic/158. Fixes: 7c6893e3c9ab ("ovl: don't allow writing ioctl on lower layer") Signed-off-by: Amir Goldstein <amir73il@gmail.com> Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
2017-10-05  x86/kvm: Move kvm_fastop_exception to .fixup section  (Josh Poimboeuf, 1 file, +4/-2)
When compiling the kernel with the '-frecord-gcc-switches' flag, objtool complains: arch/x86/kvm/emulate.o: warning: objtool: .GCC.command.line+0x0: special: can't find new instruction And also the kernel fails to link. The problem is that the 'kvm_fastop_exception' code gets placed into the throwaway '.GCC.command.line' section instead of '.text'. Exception fixup code is conventionally placed in the '.fixup' section, so put it there where it belongs. Reported-and-tested-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
2017-10-05  i2c: i2c-stm32f7: make structure stm32f7_setup static const  (Colin Ian King, 1 file, +1/-1)
The structure stm32f7_setup is local to the source and does not need to be in global scope, so make it static const. Cleans up sparse warning: symbol 'stm32f7_setup' was not declared. Should it be static? Signed-off-by: Colin Ian King <colin.king@canonical.com> Acked-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com> Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
2017-10-05  i2c: ensure termination of *_device_id tables  (Thomas Meyer, 1 file, +1/-0)
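For reference, the termination idiom looks like this (hypothetical driver names, not the table touched by the patch); the match code walks the table until it reaches an all-zero sentinel entry, so a missing terminator makes it read past the end of the array:

/* Hypothetical example of a properly terminated OF match table. */
static const struct of_device_id foo_of_match[] = {
	{ .compatible = "acme,foo" },
	{ .compatible = "acme,foo-v2" },
	{ /* sentinel */ }		/* required: matching stops here */
};
MODULE_DEVICE_TABLE(of, foo_of_match);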
Make sure (of/i2c/platform)_device_id tables are NULL terminated. Found by coccinelle spatch "misc/of_table.cocci" Signed-off-by: Thomas Meyer <thomas@m3y3r.de> Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
2017-10-05  i2c: i801: Add support for Intel Cedar Fork  (Jarkko Nikula, 3 files, +6/-0)
Add PCI ID for Intel Cedar Fork PCH. Signed-off-by: Jarkko Nikula <jarkko.nikula@linux.intel.com> Reviewed-by: Jean Delvare <jdelvare@suse.de> Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
2017-10-05  i2c: stm32f7: fix setup structure  (Pierre-Yves MORDRET, 1 file, +6/-9)
The I2C driver setup structure is not properly allocated. Make it static instead of a pointer to store the driver data. Fixes: aeb068c5721485 ("i2c: i2c-stm32f7: add driver") Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com> Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
2017-10-04  net: 8021q: skip packets if the vlan is down  (Vishakha Narvekar, 1 file, +6/-0)
If the vlan is down, free the packet instead of proceeding with other processing, or counting it as received. If vlan interfaces are used as slaves for bonding, with arp monitoring for connectivity, if the rx counter is seen to be incrementing, then the bond device will not observe that the interface is down. CC: David S. Miller <davem@davemloft.net> Signed-off-by: Vishakha Narvekar <Vishakha.Narvekar@dell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-10-04  Update James Hogan's email address  (James Hogan, 5 files, +8/-6)
Update my imgtec.com and personal email address to my kernel.org one in a few places as MIPS will soon no longer be part of Imagination Technologies, and add mappings in .mailmap so get_maintainer.pl reports the right address. Signed-off-by: James Hogan <jhogan@kernel.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2017-10-04  drm/i915/glk: Fix DMC/DC state idleness calculation  (Imre Deak, 1 file, +1/-1)
According to BSpec GLK like BXT needs to ignore the idle state of cores before starting the DMC firmware's DC state handler. Fixes: dbb28b5c3d3c ("drm/i915/DMC/GLK: Load DMC on GLK") Cc: Anusha Srivatsa <anusha.srivatsa@intel.com> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> Signed-off-by: Imre Deak <imre.deak@intel.com> Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20171003095159.711-2-imre.deak@intel.com (cherry picked from commit b7208a3f3e52791571df064fb96025ad48edd1bf) Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
2017-10-04  drm/i915/cnl: Reprogram DMC firmware after S3/S4 resume  (Imre Deak, 1 file, +3/-0)
The DMC firmware program memory is lost after S3/S4 system suspend, so we need to reprogram it during resume. Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=103070 Fixes: cebfcead63de ("drm/i915/DMC/CNL: Load DMC on CNL") Cc: Anusha Srivatsa <anusha.srivatsa@intel.com> Cc: Animesh Manna <animesh.manna@intel.com> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> Signed-off-by: Imre Deak <imre.deak@intel.com> Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20171003095159.711-1-imre.deak@intel.com (cherry picked from commit 57522c4c87de20d8f7ad4e142a3a4334066d55ff) Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
2017-10-04  i40iw: Fix port number for query QP  (Mustafa Ismail, 1 file, +2/-0)
Port number 0 is an invalid port number. Set it to 1 as there is one port per i40iw device. Fixes: d37498417947 ("i40iw: add files for iwarp interface") Signed-off-by: Mustafa Ismail <mustafa.ismail@intel.com> Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
2017-10-04  i40iw: Add missing memory barriers  (Mustafa Ismail, 3 files, +7/-8)
Remove a duplicate set_64bit_val() call to offset 24. Replace some instances of set_64bit_val() with i40iw_insert_wqe_hdr(), as the valid bit needs a write barrier and should be the last write operation for the WQE. Fixes: 786c6adb3a94 ("i40iw: add puda code") Signed-off-by: Mustafa Ismail <mustafa.ismail@intel.com> Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
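The point of the helper, as described above, is to make the valid bit the last thing the device can observe; a simplified sketch of the pattern (details may differ from the driver source):

/* Simplified sketch: fill in the WQE fields first, then publish the header
 * (which carries the valid bit) at offset 24, after a write barrier. */
static inline void i40iw_insert_wqe_hdr(u64 *wqe, u64 header)
{
	wmb();				/* make WQE fields visible before the valid bit */
	set_64bit_val(wqe, 24, header);
}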
2017-10-04  RDMA/qedr: Parse vlan priority as sl  (Ram Amrani, 1 file, +2/-0)
Parse the vlan priority from the vlan tag and configure it to the WC's sl field. Fixes: abd49676c707 ("qed: Add RoCE ll2 & GSI support") Signed-off-by: Ram Amrani <Ram.Amrani@cavium.com> Signed-off-by: Michal Kalderon <Michal.Kalderon@cavium.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
2017-10-04  RDMA/qedr: Parse VLAN ID correctly and ignore the value of zero  (Ram Amrani, 2 files, +8/-4)
Rename the vlan_id field to vlan, as it contains more than the VLAN ID. Mask out the non-VLAN-ID fields from the VLAN tag of the QED LL2 RX GSI output, as it is expected to hold the VLAN ID only. Ignore a vlan_id with a value of zero. Fixes: abd49676c707 ("qed: Add RoCE ll2 & GSI support") Signed-off-by: Ram Amrani <Ram.Amrani@cavium.com> Signed-off-by: Michal Kalderon <Michal.Kalderon@cavium.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
2017-10-04  IB/mlx5: Fix label order in error path handling  (Parav Pandit, 1 file, +2/-2)
When UAR get_page fails, we still need to clean up the debugfs entries for the congestion control parameters, but the error path labels were incorrectly ordered. This patch does the correct cleanup on debugfs init failure and on UAR get_page failure. Fixes: 4a2da0b8c078 ("IB/mlx5: Add debug control parameters for congestion control") Signed-off-by: Parav Pandit <parav@mellanox.com> Reviewed-by: Mark Bloch <markb@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Doug Ledford <dledford@redhat.com>
2017-10-04  arm64: Use larger stacks when KASAN is selected  (Mark Rutland, 1 file, +6/-3)
AddressSanitizer instrumentation can significantly bloat the stack, and with GCC 7 this can result in stack overflows at boot time in some configurations. We can avoid this by doubling our stack size when KASAN is in use, as is already done on x86 (and has been since KASAN was introduced). Regardless of other patches to decrease KASAN's stack utilization, kernels built with KASAN will always require more stack space than those built without, and we should take this into account. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
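The doubling is typically expressed by bumping the stack-size shift when KASAN is configured, along these lines (a reconstruction; the arm64 constants and symbol names may differ from the patch):

/* Reconstruction of the idea: add one to the stack-size shift under KASAN,
 * which doubles THREAD_SIZE (e.g. 16K -> 32K). */
#ifdef CONFIG_KASAN
#define KASAN_THREAD_SHIFT	1
#else
#define KASAN_THREAD_SHIFT	0
#endif

#define THREAD_SHIFT		(14 + KASAN_THREAD_SHIFT)
#define THREAD_SIZE		(1UL << THREAD_SHIFT)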
2017-10-04  ACPI/IORT: Fix PCI ACS enablement  (Lorenzo Pieralisi, 1 file, +35/-0)
commit f6810c15cf97 ("iommu/arm-smmu: Clean up early-probing workarounds") removed kernel code that allowed the SMMU devices to be initialized and probed early in the boot process (i.e. earlier than PCI devices, through linker script callback entries), because it is no longer needed now that the SMMU devices/drivers support deferred probing. Since the SMMU probe routines are also in charge of requesting global PCI ACS kernel enablement, commit f6810c15cf97 ("iommu/arm-smmu: Clean up early-probing workarounds") also postponed PCI ACS enablement to SMMU device probe time, which is too late given that PCI devices need to detect whether PCI ACS is enabled in order to init the respective capability through the following call path: pci_device_add() -> pci_init_capabilities() -> pci_enable_acs() Add code in the ACPI IORT SMMU platform devices initialization path (that is called before ACPI PCI enumeration) to detect whether there exist firmware mappings that map root complex ids to SMMU ids and, if so, enable ACS for the system. Fixes: f6810c15cf97 ("iommu/arm-smmu: Clean up early-probing workarounds") Reviewed-by: Robin Murphy <robin.murphy@arm.com> Tested-by: Nate Watterson <nwatters@codeaurora.org> Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Hanjun Guo <hanjun.guo@linaro.org> Cc: Sudeep Holla <sudeep.holla@arm.com> Cc: Zhou Wang <wangzhou1@hisilicon.com> Cc: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-10-04  kvm/x86: Avoid async PF preempting the kernel incorrectly  (Boqun Feng, 3 files, +13/-7)
Currently, in a PREEMPT_COUNT=n kernel, kvm_async_pf_task_wait() could call schedule() to reschedule in some cases. This could result in accidentally ending the current RCU read-side critical section early, causing random memory corruption in the guest, or otherwise preempting the currently running task between preempt_disable() and preempt_enable(). The difficulty in handling this well is that we don't know whether the async PF was delivered in a preemptible section or in an RCU read-side critical section for PREEMPT_COUNT=n, since preempt_disable()/enable() and rcu_read_lock()/unlock() are both no-ops in that case. To cure this, we treat any async PF interrupting a kernel context as one that cannot be preempted, preventing kvm_async_pf_task_wait() from choosing the schedule() path in that case. To do so, a second parameter for kvm_async_pf_task_wait() is introduced, so that we know whether it's called from a context interrupting the kernel, and the parameter is set properly in all the callsites. Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Wanpeng Li <wanpeng.li@hotmail.com> Cc: stable@vger.kernel.org Signed-off-by: Boqun Feng <boqun.feng@gmail.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
2017-10-04  clk: samsung: exynos4: Enable VPLL and EPLL clocks for suspend/resume cycle  (Marek Szyprowski, 1 file, +15/-0)
Commit 6edfa11cb396 ("clk: samsung: Add enable/disable operation for PLL36XX clocks") added enable/disable operations to PLL clocks. Prior to that, the VPLL and EPLL clocks were always enabled because the enable bit was never touched. Those clocks have to be enabled during the suspend/resume cycle, because otherwise the board fails to enter sleep mode. This patch enables them unconditionally before entering the system suspend state. The system restore function will set them back to the previous state, which is saved in the register cache before that unconditional enable. Fixes: 6edfa11cb396 ("clk: samsung: Add enable/disable operation for PLL36XX clocks") CC: stable@vger.kernel.org # v4.13 Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Reviewed-by: Chanwoo Choi <cw00.choi@samsung.com> Reviewed-by: Krzysztof Kozlowski <krzk@kernel.org> Acked-by: Sylwester Nawrocki <s.nawrocki@samsung.com> Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
2017-10-05  dm crypt: reject sector_size feature if device length is not aligned to it  (Milan Broz, 1 file, +4/-0)
If a crypt mapping uses the optional sector_size feature, additional restrictions on the mapped device segment size must be applied in the constructor, otherwise the device activation will fail later. Fixes: 8f0009a225 ("dm crypt: optionally support larger encryption sector size") Cc: stable@vger.kernel.org # 4.12+ Signed-off-by: Milan Broz <gmazyland@gmail.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>