path: root/tools/perf/scripts/python/exported-sql-viewer.py
2022-01-07  s390/pci: simplify __pciwb_mio() inline asm  (Niklas Schnelle, 1 file changed, -4/+1)
The PCI Write Barrier instruction ignores the registers encoded in it. There is thus no need to explicitly set the register to zero or to associate it with a variable at all. In the resulting binary this removes an unnecessary lghi and it makes the code simpler. Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2021-12-29  s390: remove unused TASK_SIZE_OF  (Guo Ren, 1 file changed, -2/+1)
This macro is no longer used by the Linux scheduler. Delete it in include/linux/sched.h and in the architectures' include/asm headers. Signed-off-by: Guo Ren <guoren@linux.alibaba.com> Signed-off-by: Guo Ren <guoren@kernel.org> Reviewed-by: Arnd Bergmann <arnd@arndb.de> Link: https://lore.kernel.org/r/20211228064730.2882351-6-guoren@kernel.org Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2021-12-20  s390/crash_dump: fix virtual vs physical address handling  (Heiko Carstens, 2 files changed, -12/+8)
Signal processor STORE STATUS requires a physical address where register contents are supposed to be written to; however, the kernel must read the data via the corresponding virtual address. Also the allocated save_area, where register contents are copied to, resides in virtual address space. Fix this by using a proper __pa() conversion or a correct memblock_alloc() invocation, as appropriate. Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
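A minimal sketch of the pattern described, with illustrative helper and field names (not necessarily the exact ones the patch touches): the hardware interface takes a physical address, while the kernel keeps working with the virtual one.

	void *buf = memblock_alloc(PAGE_SIZE, PAGE_SIZE);	/* virtual address */
	struct save_area *sa = save_area_alloc(false);		/* also virtual */

	/* SIGP STORE STATUS stores through a physical address ... */
	__pcpu_sigp_relax(cpu_addr, SIGP_STORE_STATUS_AT_ADDRESS, __pa(buf));
	/* ... while the kernel reads the stored registers back via the virtual mapping */
	memcpy(sa->gprs, buf, sizeof(sa->gprs));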
2021-12-20  s390/crypto: fix compile error for ChaCha20 module  (Heiko Carstens, 1 file changed, -10/+10)
The clgfi instruction used within the ChaCha20 assembly is only available for z9-109 and newer machines, and therefore this will generate a compile error if compiled e.g. with MARCH_Z900. Given that the assembler code will only be executed on machines with vector instructions, which became much later available than z9-109, use insn notation to generate the clgfi instruction, and avoid compile errors due to unknown instructions. Fixes: b087dfab4d39 ("s390/crypto: add SIMD implementation for ChaCha20") Reported-by: kernel test robot <lkp@intel.com> Reviewed-by: Harald Freudenberger <freude@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2021-12-16  s390/mm: check 2KB-fragment page on release  (Alexander Gordeev, 1 file changed, -11/+30)
When CONFIG_DEBUG_VM is defined, check that the pending-remove and tracking nibbles (bits 31-24 of the page refcount) are cleared. Should the earlier stages of the page lifespan have a race or logical error, such a check could help expose the issue. Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com> Reviewed-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2021-12-16  s390/mm: better annotate 2KB pagetable fragments handling  (Alexander Gordeev, 1 file changed, -20/+107)
Explicitly encode immediate value of pending remove nibble (bits 31-28) and tracking nibble (bits 27-24) of the page refcount whenever these nibbles are tested or changed, for better readability. Also, add some comments describing how the fragments are handled. Reviewed-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com> Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2021-12-16  s390/mm: fix 2KB pgtable release race  (Alexander Gordeev, 1 file changed, -1/+3)
There is a race on concurrent 2KB-pgtables release paths when both upper and lower halves of the containing parent page are freed, one via page_table_free_rcu() + __tlb_remove_table(), and the other via page_table_free(). The race might lead to a corruption when the list item is removed in page_table_free() concurrently with the __free_page() in __tlb_remove_table(). Let's assume first the lower and next the upper 2KB-pgtables are freed from a page. Since both halves of the page are allocated, the tracking byte (bits 24-31 of the page _refcount) initially has the value 0x03:

	CPU0                                            CPU1
	----                                            ----
	page_table_free_rcu()           // lower half
	{
	    // _refcount[31..24] == 0x03
	    ...
	    atomic_xor_bits(&page->_refcount,
	                    0x11U << (0 + 24));
	    // _refcount[31..24] <= 0x12
	    ...
	    table = table | (1U << 0);
	    tlb_remove_table(tlb, table);
	}
	...
	__tlb_remove_table()
	{
	    // _refcount[31..24] == 0x12
	    mask = _table & 3;
	    // mask <= 0x01
	    ...
	                                                page_table_free()   // upper half
	                                                {
	                                                    // _refcount[31..24] == 0x12
	                                                    ...
	                                                    atomic_xor_bits(&page->_refcount,
	                                                                    1U << (1 + 24));
	                                                    // _refcount[31..24] <= 0x10
	                                                    // mask <= 0x10
	    ...
	    atomic_xor_bits(&page->_refcount,
	                    mask << (4 + 24));
	    // _refcount[31..24] <= 0x00
	    // mask <= 0x00
	    ...
	    if (mask != 0)      // == false
	        break;
	    fallthrough;
	    ...
	    if (mask & 3)       // == false
	    ...
	    else
	        __free_page(page);
	                                                    list_del(&page->lru);
	                                 ^^^^^^^^^^^^^^^^^^ RACE! ^^^^^^^^^^^^^^^^^^^^^
	                                                }
	    ...
	}

The problem is that page_table_free() releases the page as a result of the lower nibble being unset, and __tlb_remove_table() observing zero too early. With this update page_table_free() will use similar logic to page_table_free_rcu() + __tlb_remove_table(), and mark the fragment as pending removal in the upper nibble until after the list_del(). In other words, the parent page is considered unreferenced and safe to release only when the lower nibble is already cleared and unsetting a bit in the upper nibble turns that nibble to zero. Cc: stable@vger.kernel.org Suggested-by: Vlastimil Babka <vbabka@suse.com> Reviewed-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com> Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
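A rough sketch of the release logic this results in, following the nibble scheme from the description above (the actual patch differs in detail):

	/* 'bit' selects the lower (0) or upper (1) 2KB half */
	spin_lock_bh(&mm->context.lock);
	mask = atomic_xor_bits(&page->_refcount, 0x11U << (bit + 24));
	mask >>= 24;
	if (mask & 0x03U)			/* other half still allocated */
		list_add(&page->lru, &mm->context.pgtable_list);
	else
		list_del(&page->lru);		/* protected by the pending-remove bit set above */
	spin_unlock_bh(&mm->context.lock);
	mask = atomic_xor_bits(&page->_refcount, 0x10U << (bit + 24));
	mask >>= 24;
	if (mask != 0)				/* other half allocated or pending removal */
		return;
	pgtable_pte_page_dtor(page);
	__free_page(page);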
2021-12-16  s390/sclp: release SCLP early buffer after kernel initialization  (Alexander Egorenkov, 1 file changed, -0/+3)
The SCLP early buffer is used only during kernel initialization and can be freed afterwards. The only way to ensure that it is not released while being in use, is to release it in free_initmem(). Acked-by: Heiko Carstens <hca@linux.ibm.com> Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Alexander Egorenkov <egorenar@linux.ibm.com> [agordeev@linux.ibm.com: added debug output] Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2021-12-16  s390/nmi: disable interrupts on extended save area update  (Alexander Gordeev, 4 files changed, -33/+29)
Updating of the pointer to machine check extended save area on the IPL CPU needs the lowcore protection to be disabled. Disable interrupts while the protection is off to avoid unnoticed writes to the lowcore. Suggested-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2021-12-16  s390/zcrypt: CCA control CPRB sending  (Juergen Christ, 1 file changed, -4/+3)
When sending a CCA CPRB to a control domain, the CPRB has to be sent via a usage domain. Previous code used the default domain to route this message. If the default domain is not online and ready to send the CPRB, the ioctl will fail even if other usage domains could be used to send the CPRB. To improve this, instead of using the default domain, switch to auto-select of the domain. Signed-off-by: Juergen Christ <jchrist@linux.ibm.com> Reviewed-by: Harald Freudenberger <freude@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2021-12-16  s390/disassembler: update opcode table  (Heiko Carstens, 2 files changed, -1/+3)
Sync with binutils: update opcode table to reflect the instruction format update of the lpswey instruction, and add the qpaci instruction. Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2021-12-16  s390/uv: fix memblock virtual vs physical address confusion  (Heiko Carstens, 1 file changed, -5/+5)
memblock_alloc_try_nid() returns a virtual address; however, in the error case the allocated memory is incorrectly freed with memblock_phys_free(). Properly use memblock_free() instead, and pass a physical address to uv_init() to fix this. Note: this doesn't fix a bug currently, since virtual and physical addresses are identical. Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
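For reference, the two memblock API families pair up like this (size and error handling are placeholders):

	#include <linux/memblock.h>

	void *buf = memblock_alloc(size, PAGE_SIZE);		/* returns a virtual address */
	if (init_failed)
		memblock_free(buf, size);			/* expects the virtual address back */

	phys_addr_t pa = memblock_phys_alloc(size, PAGE_SIZE);	/* returns a physical address */
	if (init_failed)
		memblock_phys_free(pa, size);			/* expects the physical address back */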
2021-12-16  s390/smp: fix memblock_phys_free() vs memblock_free() confusion  (Heiko Carstens, 1 file changed, -1/+1)
memblock_phys_free() is used on a virtual address. Fix this by using memblock_free(). Note: this doesn't fix a bug currently, since virtual and physical addresses are identical. Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2021-12-16  s390/sclp: fix memblock_phys_free() vs memblock_free() confusion  (Heiko Carstens, 1 file changed, -1/+1)
memblock_phys_free() is used on a virtual address. Fix this by using memblock_free(). Note: this doesn't fix a bug currently, since virtual and physical addresses are identical. Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2021-12-16  s390/exit: remove dead reference to do_exit from copy_thread  (Eric W. Biederman, 1 file changed, -1/+0)
My s390 assembly is not particularly good, so I have read the history of the reference to do_exit in copy_thread and have been able to verify that do_exit is not used. The general argument is that s390 has been changed to use the generic kernel_thread and kernel_execve and the generic versions do not call do_exit. So it is strange to see a do_exit reference sitting there. The history of the do_exit reference in s390's version of copy_thread seems conclusive that the do_exit reference is something that lingers and should have been removed several years ago. Up through 8d19f15a60be ("[PATCH] s390 update (1/27): arch.") the s390 code made a call to the exit(2) system call when a kernel thread finished. Then kernel_thread_starter was added which branched directly to the value in register 11 when the kernel thread finished. The value in register 11 was set in kernel_thread to "regs.gprs[11] = (unsigned long) do_exit". In commit 37fe5d41f640 ("s390: fold kernel_thread_helper() into ret_from_fork()") kernel_thread_starter was moved into entry.S and entry64.S unchanged (except for the syntax differences between inline assembly and the assembly file). In commit f9a7e025dfc2 ("s390: switch to generic kernel_thread()") the assignment to "gprs[11]" was moved into copy_thread from the old kernel_thread. The helper kernel_thread_starter was still being used and was still branching to "%r11" at the end. In commit 30dcb0996e40 ("s390: switch to saner kernel_execve() semantics") kernel_thread_starter was changed to unconditionally branch to sysc_tracenogo instead of to %r11, which held the value of do_exit. Unfortunately copy_thread was not updated to stop passing do_exit in "gprs[11]". In commit 56e62a737028 ("s390: convert to generic entry") kernel_thread_starter was replaced by __ret_from_fork. And the code still continued to pass do_exit in "gprs[11]" despite __ret_from_fork not caring in the slightest. Remove this dead reference to do_exit to make it clear that s390 is not doing anything with do_exit in copy_thread. History Tree: https://git.kernel.org/pub/scm/linux/kernel/git/tglx/history.git Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Christian Borntraeger <borntraeger@de.ibm.com> Cc: Alexander Gordeev <agordeev@linux.ibm.com> Cc: Al Viro <viro@zeniv.linux.org.uk> Fixes: 30dcb0996e40 ("s390: switch to saner kernel_execve() semantics") Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com> Link: https://lore.kernel.org/r/20211208202532.16409-1-ebiederm@xmission.com Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2021-12-10  s390/ap: add missing virt_to_phys address conversion  (Heiko Carstens, 1 file changed, -1/+3)
The address of the notification-indicator byte is an absolute address. Therefore convert it from a virtual to a physical address before it is used with PQAP(AQIC). Note: this currently doesn't fix a real bug, since virtual addresses are identical to physical ones. Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com> Reviewed-by: Harald Freudenberger <freude@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2021-12-10  s390/pgalloc: use pointers instead of unsigned long values  (Heiko Carstens, 1 file changed, -32/+32)
After adding the missing __va()/__pa() calls to the base asce functions, there are even more casts in the code than before. Make the code more readable by passing and using pointers to page tables, instead of using unsigned values for the same purpose. This allows getting rid of nearly all casts within the code. Suggested-by: Alexander Gordeev <agordeev@linux.ibm.com> Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2021-12-10  s390/pgalloc: add virt/phys address handling to base asce functions  (Heiko Carstens, 1 file changed, -13/+13)
The base asce functions create/free page tables open-coded to make sure that the returned asce and page tables do not make use of any enhanced DAT features like e.g. large pages. This is required for some I/O functions that use an asce, like e.g. some service call requests. Handling of virtual vs physical addresses is missing; therefore add that now. Note: this currently doesn't fix a real bug, since virtual addresses are identical to physical ones. Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
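A sketch of the convention, with constant names as found in the s390 headers (treat the details as illustrative): the table is allocated and walked through its virtual address, while the asce itself carries the physical address of the top-level table.

	unsigned long *table = base_crst_alloc(_SEGMENT_ENTRY_EMPTY);	/* virtual pointer */
	unsigned long asce;

	asce = __pa(table) | _ASCE_TYPE_SEGMENT | _ASCE_TABLE_LENGTH;	/* physical address handed to HW users */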
2021-12-10  s390/cmm: add missing virt_to_phys() conversion  (Heiko Carstens, 1 file changed, -1/+1)
diag10_range() expects a pfn; however, the current cmm code shifts a virtual address, instead of a physical address, by PAGE_SHIFT bits, which would give a wrong result if V != R. Use virt_to_pfn() to fix this. Note: this currently doesn't fix a real bug, since virtual addresses are identical to physical ones. Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
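The difference, illustrated (addr stands for the virtual address the balloon code works with):

	diag10_range(addr >> PAGE_SHIFT, 1);	/* wrong: only happens to work while V == R */
	diag10_range(virt_to_pfn(addr), 1);	/* correct: translate to physical, then shift */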
2021-12-10  s390/diag: use pfn_to_phys() instead of open coding  (Heiko Carstens, 1 file changed, -2/+2)
Use pfn_to_phys() instead of open coding to make it clear what the code is doing. Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2021-12-10  s390/mm: add missing phys_to_virt translation to page table dumper  (Heiko Carstens, 1 file changed, -4/+4)
The page table dumper walks page tables without using the standard page table primitives in order to also dump broken entries. However it currently does not translate physical to virtual addresses before dereferencing them. Therefore add this missing translation. Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
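The walk step being fixed looks roughly like this (masking simplified; the dumper distinguishes the various region/segment entry formats):

	unsigned long entry = READ_ONCE(*table);

	/* the entry holds a physical table origin, so translate before dereferencing */
	unsigned long *next = (unsigned long *)phys_to_virt(entry & PAGE_MASK);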
2021-12-06  s390/vfio-ap: add status attribute to AP queue device's sysfs dir  (Tony Krowiak, 1 file changed, -3/+76)
This patch adds a sysfs 'status' attribute to a queue device when it is bound to the vfio_ap device driver. The field displays a string indicating the status of the queue device:

	Status String:   Indicates:
	-------------    ---------
	"assigned"       the queue is assigned to an mdev, but is not in use by a KVM guest.
	"in use"         the queue is assigned to an mdev and is in use by a KVM guest.
	"unassigned"     the queue is not assigned to an mdev.

The status string will be displayed by the 'lszcrypt' command if the queue device is bound to the vfio_ap device driver. Signed-off-by: Harald Freudenberger <freude@linux.ibm.com> [akrowiak@linux.ibm.com: added check for queue in use by guest] Signed-off-by: Tony Krowiak <akrowiak@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
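A minimal sketch of a read-only sysfs attribute of this shape; how the driver actually distinguishes the three states is more involved, and the fields used here are assumptions:

	static ssize_t status_show(struct device *dev,
				   struct device_attribute *attr, char *buf)
	{
		struct vfio_ap_queue *q = dev_get_drvdata(dev);

		if (!q->matrix_mdev)			/* not assigned to an mdev (field name assumed) */
			return sysfs_emit(buf, "unassigned\n");
		if (q->matrix_mdev->kvm)		/* mdev handed to a KVM guest (simplified check) */
			return sysfs_emit(buf, "in use\n");
		return sysfs_emit(buf, "assigned\n");
	}
	static DEVICE_ATTR_RO(status);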
2021-12-06  s390/nmi: add missing __pa/__va address conversion of extended save area  (Heiko Carstens, 3 files changed, -7/+7)
Add missing __pa/__va address conversion of the machine check extended save area designation, which is an absolute address. Note: this currently doesn't fix a real bug, since virtual addresses are identical to physical ones. Reported-by: Vineeth Vijayan <vneethv@linux.ibm.com> Tested-by: Vineeth Vijayan <vneethv@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2021-12-06  s390/qdio: clarify logical vs absolute in QIB's kerneldoc  (Julian Wiedmann, 1 file changed, -2/+2)
qib.isliba and qib.osliba are actually logical addresses, and this is also how the relevant code sets up these fields. Fix up the documentation. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Reviewed-by: Alexandra Winter <wintera@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2021-12-06  s390/qdio: remove unneeded sanity check in qdio_do_sqbs()  (Julian Wiedmann, 1 file changed, -2/+0)
All callers of set_buf_states() are already making sure that 'count' is not 0. So don't check it an additional time. Note that our own code also doesn't _require_ the count to be sane (ie. we can't overrun an array or similar). So worst case HW would simply reject the SQBS operation and report an error. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Reviewed-by: Benjamin Block <bblock@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2021-12-06  s390/pci: use physical addresses in DMA tables  (Niklas Schnelle, 4 files changed, -32/+35)
The entries in the DMA translation tables for our IOMMU must specify physical addresses of either the next level table or the final page to be mapped for DMA. Currently, however, the code simply passes the virtual addresses of both. On the other hand we still need to walk the tables via their virtual addresses, so we need to do a phys_to_virt() when setting the entries and a virt_to_phys() when getting them. Similarly when passing the I/O translation anchor to the hardware we must also specify its physical address. As the DMA and IOMMU APIs we are implementing already use the correct phys_addr_t type for the address to be mapped, let's also thread this through instead of treating it as just an unsigned long. Note: this currently doesn't fix a real bug, since virtual addresses are identical to physical ones. Reviewed-by: Pierre Morel <pmorel@linux.ibm.com> Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
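In code, the convention described here (entries hold physical addresses, walks happen on virtual ones) boils down to something like this sketch, with illustrative helper and flag names:

	static void dma_entry_set(unsigned long *entry, void *next_table)
	{
		*entry = virt_to_phys(next_table) | DMA_ENTRY_VALID;	/* store the physical address */
	}

	static unsigned long *dma_entry_walk(unsigned long *entry)
	{
		return phys_to_virt(*entry & DMA_ENTRY_ADDR_MASK);	/* follow it via the virtual address */
	}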
2021-12-06  s390/pci: use phys_to_virt() for AIBVs/DIBVs  (Niklas Schnelle, 1 file changed, -3/+3)
The adapter and directed interrupt bit vectors need to be referenced in the FIB with their physical, not their virtual, address. So use virt_to_phys() as appropriate. Note: this currently doesn't fix a real bug, since virtual addresses are identical to physical ones. Reviewed-by: Pierre Morel <pmorel@linux.ibm.com> Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2021-12-06  s390/vmcp: use page_to_virt instead of page_to_phys  (Heiko Carstens, 1 file changed, -2/+2)
Fix wrong usage of page_to_phys/phys_to_page. Note: this currently doesn't fix a real bug, since virtual addresses are identical to physical ones. Acked-by: Thomas Richter <tmricht@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2021-12-06  s390/qdio: split do_QDIO()  (Julian Wiedmann, 4 files changed, -34/+63)
The callers know what type of queue they want to work with. Introduce type-specific variants to add buffers on an {Input,Output} queue, so that we can avoid some function parameters and the de-muxing into type-specific hot paths. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Reviewed-by: Benjamin Block <bblock@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
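From a caller's point of view the split looks roughly like this (assuming a ccw_device *cdev and a buffer index/count at hand; the new function name reflects my reading of the change and may differ from the patch):

	/* before: one muxed entry point, direction selected by a flag */
	rc = do_QDIO(cdev, QDIO_FLAG_SYNC_INPUT, 0, bufnr, count, NULL);

	/* after: a type-specific variant, no direction flag and no aob on the input path */
	rc = qdio_add_bufs_to_input_queue(cdev, 0, bufnr, count);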
2021-12-06  s390/qdio: split qdio_inspect_queue()  (Julian Wiedmann, 4 files changed, -47/+64)
The callers know what type of queue they want to inspect. Introduce type-specific variants to inspect an {Input,Output} queue, so that we can avoid one function parameter and some conditional branches in the hot paths. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Reviewed-by: Benjamin Block <bblock@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2021-12-06  s390/qdio: clarify handler logic for qdio_handle_activate_check()  (Julian Wiedmann, 4 files changed, -15/+14)
qdio_handle_activate_check() tries to re-use one of the queue-specific handlers to report that the ACTIVATE ccw has been terminated. But the logic to select that handler is overly complex - in practice both qdio drivers have at least one Input Queue, so we never take the other paths. Make things more obvious by removing this unused code, and clearly spelling out that we re-use the Input Handler for generic error reporting. This also paves the way for a world without queue-specific error handlers. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Reviewed-by: Benjamin Block <bblock@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2021-12-06  s390/qdio: clean up access to queue in qdio_handle_activate_check()  (Julian Wiedmann, 1 file changed, -2/+2)
qdio_handle_activate_check() re-uses a queue-specific handler to report that the ACTIVATE ccw has been terminated. It uses either the first input or output queue, so we can hard-code q->nr as 0. Also don't access the q->irq_ptr parent pointer, we already have a pointer to the qdio_irq. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Reviewed-by: Benjamin Block <bblock@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2021-12-06  s390/qdio: avoid allocating the qdio_irq with GFP_DMA  (Julian Wiedmann, 3 files changed, -14/+23)
The qdio_irq contains only two fields that are directly exposed to the HW (ccw and qib). And only the ccw needs to reside in 31-bit memory. So allocate it separately, and remove the GFP_DMA constraint from the qdio_irq allocation. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Reviewed-by: Benjamin Block <bblock@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
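A simplified sketch of the allocation split (reduced to the relevant calls; the real structure layout differs):

	irq_ptr = (struct qdio_irq *)get_zeroed_page(GFP_KERNEL);		/* GFP_DMA no longer needed */
	irq_ptr->ccw = kzalloc(sizeof(*irq_ptr->ccw), GFP_KERNEL | GFP_DMA);	/* the ccw must stay in 31-bit memory */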
2021-12-06  s390/qdio: improve handling of CIWs  (Julian Wiedmann, 3 files changed, -26/+20)
Fetch the individual CIWs when we actually need them, rather than fetching both of them in qdio_setup_irq() and then needing to cache them inside the qdio_irq. Also deal with the error when a CIW is not available, instead of silently dropping this error condition in qdio_setup_irq()'s caller. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Reviewed-by: Benjamin Block <bblock@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2021-12-06  s390/qdio: remove QDIO_SBAL_SIZE macro  (Julian Wiedmann, 1 file changed, -1/+0)
It's unused, and duplicates sizeof(struct qdio_buffer). Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Reviewed-by: Benjamin Block <bblock@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2021-12-06  s390/cio: remove uevent suppress from cio driver  (Vineeth Vijayan, 5 files changed, -51/+3)
commit fa1a8c23eb7d ("s390: cio: Delay uevents for subchannels") introduced suppression of uevents for a subchannel until after it is clear that the subchannel would not be unregistered again immediately. This was done to avoid uevents being generated for I/O subchannels with no valid device, which can happen on LPAR. However, this also has some drawbacks: All subchannel drivers need to manually remove the uevent suppression and generate an ADD uevent as soon as they are sure that the subchannel will stay around. This misses out on all uevents that are not the initial ADD uevent that would be generated while uevents are suppressed; for example, all subchannels were missing the BIND uevent. As uevents being generated even for I/O subchannels without an operational device turned out to be not as bad as missing uevents and complicating the code flow, let's remove uevent suppression for subchannels. Signed-off-by: Vineeth Vijayan <vneethv@linux.ibm.com> [cohuck@redhat.com: modified changelog] Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Peter Oberparleiter <oberpar@linux.ibm.com> Link: https://lore.kernel.org/r/20211122103756.352463-2-vneethv@linux.ibm.com Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2021-12-06  s390/crypto: add SIMD implementation for ChaCha20  (Patrick Steuer, 8 files changed, -0/+1154)
Add an implementation of the ChaCha20 stream cipher (see e.g. RFC 7539) that makes use of z13's vector instruction set extension. The original implementation is by Andy Polyakov, adapted here for kernel use. Four to six blocks are processed in parallel, resulting in a performance gain for inputs >= 256 bytes.

	chacha20-generic:  1 operation in  622 cycles (256 bytes)
	                   1 operation in 2346 cycles (1024 bytes)
	chacha20-s390:     1 operation in  218 cycles (256 bytes)
	                   1 operation in  647 cycles (1024 bytes)

Cc: Andy Polyakov <appro@openssl.org> Reviewed-by: Harald Freudenberger <freude@de.ibm.com> Signed-off-by: Patrick Steuer <patrick.steuer@de.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2021-12-05  Linux 5.16-rc4  (Linus Torvalds, 1 file changed, -1/+1)
2021-12-05  KVM: SVM: Do not terminate SEV-ES guests on GHCB validation failure  (Tom Lendacky, 2 files changed, -46/+71)
Currently, an SEV-ES guest is terminated if the validation of the VMGEXIT exit code or exit parameters fails. The VMGEXIT instruction can be issued from userspace, even though userspace (likely) can't update the GHCB. To prevent userspace from being able to kill the guest, return an error through the GHCB when validation fails rather than terminating the guest. For cases where the GHCB can't be updated (e.g. the GHCB can't be mapped, etc.), just return back to the guest. The new error codes are documented in the latest update to the GHCB specification. Fixes: 291bd20d5d88 ("KVM: SVM: Add initial support for a VMGEXIT VMEXIT") Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Message-Id: <b57280b5562893e2616257ac9c2d4525a9aeeb42.1638471124.git.thomas.lendacky@amd.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-12-05  KVM: SEV: Fall back to vmalloc for SEV-ES scratch area if necessary  (Sean Christopherson, 1 file changed, -4/+4)
Use kvzalloc() to allocate KVM's buffer for SEV-ES's GHCB scratch area so that KVM falls back to __vmalloc() if physically contiguous memory isn't available. The buffer is purely a KVM software construct, i.e. there's no need for it to be physically contiguous. Cc: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20211109222350.2266045-3-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
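The pattern in question, sketched (len and the surrounding error handling are placeholders): kvzalloc() tries kmalloc() first and transparently falls back to vmalloc() when contiguous memory is unavailable, and kvfree() releases either kind.

	#include <linux/slab.h>

	void *scratch_va = kvzalloc(len, GFP_KERNEL_ACCOUNT);
	if (!scratch_va)
		return -ENOMEM;
	/* the buffer is virtually, but not necessarily physically, contiguous */
	kvfree(scratch_va);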
2021-12-05  KVM: SEV: Return appropriate error codes if SEV-ES scratch setup fails  (Sean Christopherson, 1 file changed, -13/+17)
Return appropriate error codes if setting up the GHCB scratch area for an SEV-ES guest fails. In particular, returning -EINVAL instead of -ENOMEM when allocating the kernel buffer could be confusing as userspace would likely suspect a guest issue. Fixes: 8f423a80d299 ("KVM: SVM: Support MMIO for an SEV-ES guest") Cc: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20211109222350.2266045-2-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-12-04  parisc: Mark cr16 CPU clocksource unstable on all SMP machines  (Helge Deller, 1 file changed, -22/+8)
In commit c8c3735997a3 ("parisc: Enhance detection of synchronous cr16 clocksources") I assumed that CPUs on the same physical core are synchronous. While booting up the kernel on two different C8000 machines, one with a dual-core PA8800 and one with a dual-core PA8900 CPU, this turned out to be wrong. The symptom was that I saw a jump in the internal clocks printed to the syslog and strange overall behaviour. On machines which have 4 cores (2 dual-cores) the problem isn't visible, because the current logic already marked the cr16 clocksource unstable in this case. This patch now marks the cr16 interval timers unstable if we have more than one CPU in the system, and it fixes this issue. Fixes: c8c3735997a3 ("parisc: Enhance detection of synchronous cr16 clocksources") Signed-off-by: Helge Deller <deller@gmx.de> Cc: <stable@vger.kernel.org> # v5.15+
2021-12-04  parisc: Fix "make install" on newer debian releases  (Helge Deller, 1 file changed, -0/+1)
On newer debian releases the debian-provided "installkernel" script is installed in /usr/sbin. Fix the kernel install.sh script to look for the script in this directory as well. Signed-off-by: Helge Deller <deller@gmx.de> Cc: <stable@vger.kernel.org> # v3.13+
2021-12-04  sched/uclamp: Fix rq->uclamp_max not set on first enqueue  (Qais Yousef, 1 file changed, -1/+1)
Commit d81ae8aac85c ("sched/uclamp: Fix initialization of struct uclamp_rq") introduced a bug where uclamp_max of the rq is not reset to match the woken up task's uclamp_max when the rq is idle. The code was relying on rq->uclamp_max being initialized to zero, so on first enqueue

	static inline void uclamp_rq_inc_id(struct rq *rq, struct task_struct *p,
					    enum uclamp_id clamp_id)
	{
		...
		if (uc_se->value > READ_ONCE(uc_rq->value))
			WRITE_ONCE(uc_rq->value, uc_se->value);
	}

was actually resetting it. But since commit d81ae8aac85c changed the default to 1024, this no longer works. And since rq->uclamp_flags is also initialized to 0, neither the above code path nor uclamp_idle_reset() updates the rq->uclamp_max on first wake up from idle. This is only visible from the first wake up(s) until the first dequeue to idle after enabling the static key. And it only matters if the uclamp_max of this task is < 1024, since only then will its uclamp_max be effectively ignored. Fix it by properly initializing rq->uclamp_flags = UCLAMP_FLAG_IDLE to ensure uclamp_idle_reset() is called, which then will update the rq uclamp_max value as expected. Fixes: d81ae8aac85c ("sched/uclamp: Fix initialization of struct uclamp_rq") Signed-off-by: Qais Yousef <qais.yousef@arm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Valentin Schneider <Valentin.Schneider@arm.com> Tested-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Link: https://lkml.kernel.org/r/20211202112033.1705279-1-qais.yousef@arm.com
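The fix itself boils down to a one-line initialization, sketched here out of context:

	rq->uclamp_flags = UCLAMP_FLAG_IDLE;	/* start the rq flagged idle so the first enqueue runs uclamp_idle_reset() */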
2021-12-04  preempt/dynamic: Fix setup_preempt_mode() return value  (Andrew Halaney, 1 file changed, -2/+2)
__setup() callbacks expect 1 for success and 0 for failure. Correct the usage here to reflect that. Fixes: 826bfeb37bb4 ("preempt/dynamic: Support dynamic preempt with preempt= boot option") Reported-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Andrew Halaney <ahalaney@redhat.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20211203233203.133581-1-ahalaney@redhat.com
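For reference, the contract looks like this; the parsing helpers are hypothetical stand-ins, only the return-value convention matters:

	static int __init setup_preempt_mode(char *str)
	{
		if (!str || !preempt_mode_is_valid(str))	/* hypothetical check */
			return 0;	/* 0 = failure: the option is reported as unknown/malformed */

		preempt_mode_apply(str);			/* hypothetical */
		return 1;		/* 1 = success for __setup() handlers */
	}
	__setup("preempt=", setup_preempt_mode);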
2021-12-03  cifs: avoid use of dstaddr as key for fscache client cookie  (Shyam Prasad N, 1 file changed, -37/+1)
server->dstaddr can change when the DNS mapping for the server hostname changes. But conn_id is a u64 counter that is incremented each time a new TCP connection is set up. So use only that as a key. Signed-off-by: Shyam Prasad N <sprasad@microsoft.com> Reviewed-by: Paulo Alcantara (SUSE) <pc@cjr.nz> Signed-off-by: Steve French <stfrench@microsoft.com>
2021-12-03  cifs: add server conn_id to fscache client cookie  (Shyam Prasad N, 2 files changed, -0/+14)
The fscache client cookie uses the server address (and port) as the cookie key. This is a problem when nosharesock is used. Two different connections will use duplicate cookies. Avoid this by adding server->conn_id to the key, so that it's guaranteed that the cookie will not be duplicated. Also, for secondary channels of a session, copy the fscache pointer from the primary channel. The primary channel is guaranteed not to go away as long as secondary channels are in use. Also addresses a minor problem found by the kernel test robot. Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Shyam Prasad N <sprasad@microsoft.com> Reviewed-by: Paulo Alcantara (SUSE) <pc@cjr.nz> Signed-off-by: Steve French <stfrench@microsoft.com>
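In spirit, the cookie key becomes a pair like the following (an illustrative shape, not the exact cifs definition):

	struct cifs_fscache_server_key {
		struct sockaddr_storage	dstaddr;	/* can be shared, e.g. with -o nosharesock */
		__u64			conn_id;	/* monotonically increasing per TCP connection */
	};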
2021-12-03  cifs: wait for tcon resource_id before getting fscache super  (Shyam Prasad N, 3 files changed, -7/+8)
The logic for initializing tcon->resource_id is done inside cifs_root_iget. fscache super cookie relies on this for aux data. So we need to push the fscache initialization to this later point during mount. Signed-off-by: Shyam Prasad N <sprasad@microsoft.com> Reviewed-by: Paulo Alcantara (SUSE) <pc@cjr.nz> Signed-off-by: Steve French <stfrench@microsoft.com>
2021-12-03  cifs: fix missed refcounting of ipc tcon  (Paulo Alcantara, 1 file changed, -0/+1)
Fix missed refcounting of IPC tcon used for getting domain-based DFS root referrals. We want to keep it alive as long as mount is active and can be refreshed. For standalone DFS root referrals it wouldn't be a problem as the client ends up having an IPC tcon for both mount and cache. Fixes: c88f7dcd6d64 ("cifs: support nested dfs links over reconnect") Signed-off-by: Paulo Alcantara (SUSE) <pc@cjr.nz> Reviewed-by: Enzo Matsumiya <ematsumiya@suse.de> Signed-off-by: Steve French <stfrench@microsoft.com>
2021-12-03  x86/xen: Add xenpv_restore_regs_and_return_to_usermode()  (Lai Jiangshan, 2 files changed, -0/+24)
In the native case, PER_CPU_VAR(cpu_tss_rw + TSS_sp0) is the trampoline stack. But XEN pv doesn't use trampoline stack, so PER_CPU_VAR(cpu_tss_rw + TSS_sp0) is also the kernel stack. In that case, source and destination stacks are identical, which means that reusing swapgs_restore_regs_and_return_to_usermode() in XEN pv would cause %rsp to move up to the top of the kernel stack and leave the IRET frame below %rsp. This is dangerous as it can be corrupted if #NMI / #MC hit as either of these events occurring in the middle of the stack pushing would clobber data on the (original) stack. And, with XEN pv, swapgs_restore_regs_and_return_to_usermode() pushing the IRET frame on to the original address is useless and error-prone when there is any future attempt to modify the code. [ bp: Massage commit message. ] Fixes: 7f2590a110b8 ("x86/entry/64: Use a per-CPU trampoline stack for IDT entries") Signed-off-by: Lai Jiangshan <laijs@linux.alibaba.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Link: https://lkml.kernel.org/r/20211126101209.8613-4-jiangshanlai@gmail.com