path: root/tools/perf/scripts/python/export-to-postgresql.py
Age | Commit message | Author | Files | Lines
2021-02-19 | KVM: x86: Further clarify the logic and comments for toggling log dirty | Sean Christopherson | 1 | -4/+11

Add a sanity check in kvm_mmu_slot_apply_flags() to assert that the LOG_DIRTY_PAGES flag is indeed being toggled, and explicitly rely on that holding true when zapping collapsible SPTEs. Manipulating the CPU dirty log (PML) and write-protection also relies on this assertion, but that's not obvious in the current code.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210213005015.1651772-11-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

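A minimal sketch of the described assertion (the surrounding function shape and the enable/zap callees are assumptions; WARN_ON_ONCE() is the real kernel macro). The XOR of the old and new memslot flags is non-zero only when the dirty-logging bit actually changes:

    static void kvm_mmu_slot_apply_flags(struct kvm *kvm,
                                         struct kvm_memory_slot *old,
                                         struct kvm_memory_slot *new)
    {
            bool log_dirty_pages = new->flags & KVM_MEM_LOG_DIRTY_PAGES;

            /* Sanity check: dirty logging must actually be toggling. */
            WARN_ON_ONCE(!((old->flags ^ new->flags) & KVM_MEM_LOG_DIRTY_PAGES));

            if (log_dirty_pages)
                    enable_dirty_logging(kvm, new);           /* hypothetical */
            else
                    kvm_mmu_zap_collapsible_sptes(kvm, new);  /* rebuild huge pages */
    }
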
2021-02-19 | KVM: x86: Move MMU's PML logic to common code | Sean Christopherson | 5 | -100/+24

Drop the facade of KVM's PML logic being vendor-specific and move the bits that aren't truly VMX-specific into common x86 code. The MMU logic for dealing with PML is tightly coupled to the feature and to VMX's implementation; bouncing through kvm_x86_ops obfuscates the code without providing any meaningful separation of concerns or encapsulation.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210213005015.1651772-10-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2021-02-19 | KVM: x86/mmu: Make dirty log size hook (PML) a value, not a function | Sean Christopherson | 4 | -13/+4

Store the vendor-specific dirty log size in a variable; there's no need to wrap it in a function, since the value is constant after hardware_setup() runs.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210213005015.1651772-9-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

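A sketch of the shape of the change (struct layouts abbreviated; the field name follows the subject line, the initialization site is an assumption): the per-vendor function pointer becomes a plain integer that vendor code fills in once during hardware_setup():

    /* Before: a hook queried on every use (sketch). */
    struct kvm_x86_ops {
            int (*cpu_dirty_log_size)(void);
    };

    /* After: a value, constant once hardware_setup() has run. */
    struct kvm_x86_ops {
            int cpu_dirty_log_size;   /* 0 when PML is unsupported */
    };

    /* VMX sets it once, e.g. to the PML buffer capacity. */
    vmx_x86_ops.cpu_dirty_log_size = enable_pml ? PML_ENTITY_NUM : 0;
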
2021-02-19 | KVM: x86/mmu: Expand on the comment in kvm_vcpu_ad_need_write_protect() | Sean Christopherson | 1 | -1/+4

Expand the comment about the need to use write-protection for nested EPT when PML is enabled, to clarify that the tagging is a nop when PML is _not_ enabled. Without the clarification, omitting the PML check looks wrong at first^Wfifth glance.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210213005015.1651772-8-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2021-02-19 | KVM: nVMX: Disable PML in hardware when running L2 | Sean Christopherson | 2 | -16/+25

Unconditionally disable PML in vmcs02; KVM emulates PML purely in the MMU, e.g. vmx_flush_pml_buffer() doesn't even try to copy the L2 GPAs from vmcs02's buffer to vmcs12. At best, enabling PML is a nop. At worst, it will cause vmx_flush_pml_buffer() to record bogus GFNs in the dirty logs.

Initialize vmcs02.GUEST_PML_INDEX such that PML writes would trigger a VM-Exit if PML were somehow enabled, skip flushing the buffer for guest mode since the index is bogus, and freak out if a PML-full exit occurs when L2 is active.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210213005015.1651772-7-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

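In VMX terms the change is roughly the following (sketch; SECONDARY_EXEC_ENABLE_PML and GUEST_PML_INDEX are real VMCS controls/fields, but the placement in vmcs02 setup and the exact index value are assumptions):

    /* Never enable PML for L2; KVM emulates PML purely in the MMU. */
    exec_control &= ~SECONDARY_EXEC_ENABLE_PML;

    /*
     * Use an out-of-range PML index so that, if PML were somehow
     * enabled anyway, the very first PML write would trigger a
     * log-full VM-Exit instead of silently filling the unused
     * vmcs02 buffer.
     */
    vmcs_write16(GUEST_PML_INDEX, -1);
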
2021-02-19 | KVM: x86/mmu: Consult max mapping level when zapping collapsible SPTEs | Sean Christopherson | 3 | -12/+14

When zapping SPTEs in order to rebuild them as huge pages, use the new helper that computes the max mapping level to detect whether or not a SPTE should be zapped. Doing so avoids zapping SPTEs that can't possibly be rebuilt as huge pages, e.g. due to hardware constraints, memslot alignment, etc.

This also avoids zapping SPTEs that are still large, e.g. if migration was canceled before write-protected huge pages were shattered to enable dirty logging. Note that such pages are still write-protected at this time, i.e. a page fault VM-Exit will still occur. This will hopefully be addressed in a future patch.

Sadly, the TDP MMU loses its const on the memslot, but that's a pervasive problem that's been around for quite some time.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210213005015.1651772-6-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

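Sketched against a TDP MMU iterator (the iterator fields and tdp_mmu_set_spte() usage are assumptions), the zap decision uses the helper split out in the "Split out max mapping level calculation to helper" entry below:

    /* Skip SPTEs that can't be rebuilt any larger than they are. */
    max_level = kvm_mmu_max_mapping_level(kvm, slot, iter.gfn, pfn);
    if (max_level <= iter.level)
            continue;

    /* Zap; the next fault re-installs the mapping as a huge page. */
    tdp_mmu_set_spte(kvm, &iter, 0);
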
2021-02-19 | KVM: x86/mmu: Pass the memslot to the rmap callbacks | Sean Christopherson | 1 | -9/+15

Pass the memslot to the rmap callbacks; it will be used when zapping collapsible SPTEs to verify the memslot is compatible with hugepages before zapping its SPTEs.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210213005015.1651772-5-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

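The shape of the plumbing, sketched (the typedef name and exact parameter list are assumptions): every rmap walker callback gains a memslot parameter so a later patch can check hugepage compatibility:

    /* Before (sketch). */
    typedef bool (*rmap_handler_t)(struct kvm *kvm,
                                   struct kvm_rmap_head *rmap_head);

    /* After: callbacks can see which memslot the rmap belongs to. */
    typedef bool (*rmap_handler_t)(struct kvm *kvm,
                                   struct kvm_rmap_head *rmap_head,
                                   struct kvm_memory_slot *slot);
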
2021-02-19 | KVM: x86/mmu: Split out max mapping level calculation to helper | Sean Christopherson | 2 | -15/+24

Factor out the logic for determining the maximum mapping level given a memslot and a gpa. The helper will be used when zapping collapsible SPTEs when disabling dirty logging, e.g. to avoid zapping SPTEs that can't possibly be rebuilt as hugepages.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210213005015.1651772-4-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

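A hedged sketch of such a helper (the two predicates stand in for the real lpage_info and host-backing checks, which are not shown; PG_LEVEL_4K is the real constant):

    static int kvm_mmu_max_mapping_level(struct kvm *kvm,
                                         struct kvm_memory_slot *slot,
                                         gfn_t gfn, kvm_pfn_t pfn,
                                         int max_level)
    {
            int level;

            /* Walk down until both the memslot layout and the host
             * backing store allow a mapping at this level. */
            for (level = max_level; level > PG_LEVEL_4K; level--) {
                    if (memslot_allows_level(slot, gfn, level) &&    /* hypothetical */
                        host_backing_allows_level(kvm, pfn, level))  /* hypothetical */
                            return level;
            }
            return PG_LEVEL_4K;
    }
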
2021-02-19 | KVM: x86/mmu: Expand collapsible SPTE zap for TDP MMU to ZONE_DEVICE and HugeTLB pages | Sean Christopherson | 1 | -1/+2

Zap SPTEs that are backed by ZONE_DEVICE pages when zapping SPTEs to rebuild them as huge pages in the TDP MMU. ZONE_DEVICE huge pages are managed differently than "regular" pages and are not compound pages. Likewise, PageTransCompoundMap() will not detect HugeTLB, so switch to PageCompound(). This matches the similar check in kvm_mmu_zap_collapsible_spte().

Cc: Ben Gardon <bgardon@google.com>
Fixes: 14881998566d ("kvm: x86/mmu: Support disabling dirty logging for the tdp MMU")
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210213005015.1651772-2-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

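Roughly, the check changes shape like this (the surrounding iterator code is a sketch; PageTransCompoundMap(), PageCompound() and kvm_is_zone_device_pfn() are real helpers):

    pfn = spte_to_pfn(iter.old_spte);

    /* Before: only THP-backed mappings were zapped for rebuild. */
    if (!PageTransCompoundMap(pfn_to_page(pfn)))
            continue;

    /* After: PageCompound() also catches HugeTLB, and ZONE_DEVICE
     * pfns are zapped despite not being compound pages. */
    if (!kvm_is_zone_device_pfn(pfn) &&
        !PageCompound(pfn_to_page(pfn)))
            continue;
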
2021-02-18 | KVM: nVMX: no need to undo inject_page_fault change on nested vmexit | Paolo Bonzini | 1 | -3/+0

This is not needed because the tweak was done on the guest_mmu, while nested_ept_uninit_mmu_context() has just changed vcpu->arch.walk_mmu back to the root_mmu.

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2021-02-18 | KVM: nSVM: fix running nested guests when npt=0 | Paolo Bonzini | 1 | -0/+20

In the case of npt=0 on the host, nSVM needs the same .inject_page_fault tweak as VMX has, to make sure that shadow MMU faults are injected as vmexits. It is not clear why this is needed at all, but for now keep the same code as VMX; we'll fix it for both later.

Based on a patch by Maxim Levitsky <mlevitsk@redhat.com>.

Fixes: 7c86663b68ba ("KVM: nSVM: inject exceptions via svm_check_nested_events")
Cc: stable@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

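A hedged sketch of the kind of hook described, modeled on the VMX equivalent the message refers to (the intercept check is a placeholder; the exit_code/exit_info fields and nested_svm_vmexit() are real):

    static void svm_inject_page_fault_nested(struct kvm_vcpu *vcpu,
                                             struct x86_exception *fault)
    {
            struct vcpu_svm *svm = to_svm(vcpu);

            /* If L1 intercepts #PF, reflect the shadow-MMU fault to L1
             * as a vmexit; otherwise inject it into L2 as usual. */
            if (nested_svm_intercepts_pf(svm)) {    /* hypothetical check */
                    svm->vmcb->control.exit_code   = SVM_EXIT_EXCP_BASE + PF_VECTOR;
                    svm->vmcb->control.exit_info_1 = fault->error_code;
                    svm->vmcb->control.exit_info_2 = fault->address;
                    nested_svm_vmexit(svm);
            } else {
                    kvm_inject_page_fault(vcpu, fault);
            }
    }
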
2021-02-18 | KVM: nSVM: move nested vmrun tracepoint to enter_svm_guest_mode | Maxim Levitsky | 1 | -12/+14

This way the trace will capture all nested-mode entries (including entries after migration, and from SMM).

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
Message-Id: <20210217145718.1217358-3-mlevitsk@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2021-02-18 | KVM: VMX: read idt_vectoring_info a bit earlier | Maxim Levitsky | 1 | -1/+3

trace_kvm_exit prints this value (via vmx_get_exit_info), so it makes sense to read it before the tracepoint fires.

Fixes: dcf068da7eb2 ("KVM: VMX: Introduce generic fastpath handler")
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
Message-Id: <20210217145718.1217358-2-mlevitsk@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

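The reordering, sketched (the call site and tracepoint arguments are assumptions; IDT_VECTORING_INFO_FIELD and vmcs_read32() are real):

    /* Read idt_vectoring_info before the tracepoint fires, since
     * trace_kvm_exit() prints it via vmx_get_exit_info(). */
    vmx->idt_vectoring_info = vmcs_read32(IDT_VECTORING_INFO_FIELD);

    trace_kvm_exit(exit_reason, vcpu, KVM_ISA_VMX);
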
2021-02-18 | KVM: VMX: Allow INVPCID in guest without PCID | Sean Christopherson | 1 | -10/+0

Remove the restriction that prevents VMX from exposing INVPCID to the guest without PCID also being exposed to the guest. The justification of the restriction is that INVPCID will #UD if it's disabled in the VMCS. While that is a true statement, it's also true that RDTSCP will #UD if it's disabled in the VMCS. Neither of those things has any dependency whatsoever on the guest being able to set CR4.PCIDE=1, which is what is effectively allowed by exposing PCID to the guest.

Removing the bogus restriction aligns VMX with SVM, and also allows for an interesting configuration. INVPCID is the fastest way to do a global TLB flush, e.g. see native_flush_tlb_global(). Allowing INVPCID without PCID would let a guest use the expedited flush while also limiting the number of ASIDs consumed by the guest.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210212003411.1102677-4-seanjc@google.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2021-02-18 | KVM: x86: Advertise INVPCID by default | Sean Christopherson | 3 | -6/+3

Advertise INVPCID by default (if supported by the host kernel) instead of having both SVM and VMX opt in. INVPCID was opt in when it was a VMX-only feature so that KVM wouldn't prematurely advertise support if/when it showed up in the kernel on AMD hardware.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210212003411.1102677-3-seanjc@google.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

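Mechanically, "advertise by default" means the feature bit lives in the common CPUID capability mask rather than behind vendor opt-in (sketch; kvm_cpu_cap_mask() and the F() wrapper are the real helpers used for CPUID setup, the mask contents are abbreviated):

    /* Common x86 code: advertised whenever the host kernel has it. */
    kvm_cpu_cap_mask(CPUID_7_0_EBX,
            F(FSGSBASE) | F(SMEP) | F(INVPCID) /* | ... */);
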
2021-02-18 | KVM: SVM: Intercept INVPCID when it's disabled to inject #UD | Sean Christopherson | 1 | -4/+4

Intercept INVPCID if it's disabled in the guest, even when using NPT, as KVM needs to inject #UD in this case.

Fixes: 4407a797e941 ("KVM: SVM: Enable INVPCID feature on AMD")
Cc: Babu Moger <babu.moger@amd.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210212003411.1102677-2-seanjc@google.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

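The fix keys the intercept off guest CPUID instead of off NPT (close to the shape described; svm_set_intercept()/svm_clr_intercept() and INTERCEPT_INVPCID are real SVM helpers/defines):

    if (boot_cpu_has(X86_FEATURE_INVPCID)) {
            /* Intercept INVPCID whenever the guest can't use it, so
             * KVM can inject the architecturally required #UD. */
            if (!guest_cpuid_has(vcpu, X86_FEATURE_INVPCID))
                    svm_set_intercept(svm, INTERCEPT_INVPCID);
            else
                    svm_clr_intercept(svm, INTERCEPT_INVPCID);
    }
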
2021-02-15 | selftests: kvm: avoid uninitialized variable warning | Paolo Bonzini | 1 | -1/+2

The variable will in practice never be uninitialized, because the loop always goes through at least one iteration. In case it does not, make vcpu_get_cpuid report an assertion failure.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

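The pattern, sketched in selftest terms (the loop bounds and file descriptors are assumed to be in scope; TEST_ASSERT() is the real selftest macro, KVM_GET_CPUID2 the real vcpu ioctl):

    int rc = -1, i;

    /* The loop always runs at least once, so rc is always written; the
     * assert makes the "impossible" zero-iteration case fail loudly
     * instead of tripping a -Wmaybe-uninitialized warning. */
    for (i = 0; i < nr_attempts; i++) {
            rc = ioctl(vcpu_fd, KVM_GET_CPUID2, cpuid);
            if (rc == 0)
                    break;
    }
    TEST_ASSERT(rc == 0, "KVM_GET_CPUID2 failed: rc %d, errno %d", rc, errno);
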
2021-02-15 | selftests: kvm: add hardware_disable test | Ignacio Alvarado | 3 | -0/+167

This test launches 512 VMs in serial and kills them after a random amount of time.

The test was originally written to exercise KVM user notifiers in the context of commit 1650b4ebc99d:
- KVM: Disable irq while unregistering user notifier
- https://lore.kernel.org/kvm/CACXrx53vkO=HKfwWwk+fVpvxcNjPrYmtDZ10qWxFvVX_PTGp3g@mail.gmail.com/

Recently, this test piqued my interest because it proved useful for AMD SNP in exercising the "in-use" pages described in APM section 15.36.12, "Running SNP-Active Virtual Machines".

Signed-off-by: Ignacio Alvarado <ikalvarado@google.com>
Signed-off-by: Marc Orr <marcorr@google.com>
Message-Id: <20210213001452.1719001-1-marcorr@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

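The core stress pattern, as an illustrative fragment (not the test's actual code; run_vm() and MAX_RUN_USEC are hypothetical):

    for (i = 0; i < 512; i++) {
            pid_t pid = fork();

            if (pid == 0) {        /* child: create a VM and spin vCPUs */
                    run_vm();
                    exit(0);
            }
            usleep(rand() % MAX_RUN_USEC);  /* let it run a random while */
            kill(pid, SIGKILL);             /* then kill it mid-flight */
            waitpid(pid, NULL, 0);
    }
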
2021-02-11 | KVM: x86/xen: Explicitly pad struct compat_vcpu_info to 64 bytes | Sean Christopherson | 1 | -5/+6

Add a 2-byte pad to struct compat_vcpu_info so that the sum size of its fields is actually 64 bytes. The effective size without the padding is also 64 bytes due to the compiler aligning evtchn_pending_sel to a 4-byte boundary, but depending on compiler alignment is subtle and unnecessary.

Opportunistically replace spaces with tabs in the other fields.

Cc: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210210182609.435200-6-seanjc@google.com>
Reviewed-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

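The padded layout described (field order per the message above; the BUILD_BUG_ON is an assumed way to pin the size at compile time, not necessarily part of the patch):

    struct compat_vcpu_info {
            uint8_t  evtchn_upcall_pending;
            uint8_t  evtchn_upcall_mask;
            uint16_t pad0;                  /* explicit 2-byte pad */
            uint32_t evtchn_pending_sel;
            struct compat_arch_vcpu_info arch;
            struct pvclock_vcpu_time_info time;
    }; /* 64 bytes by construction, not by compiler alignment */

    BUILD_BUG_ON(sizeof(struct compat_vcpu_info) != 64);
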
2021-02-11 | KVM: selftests: Don't bother mapping GVA for Xen shinfo test | Sean Christopherson | 1 | -4/+3

Don't bother mapping the Xen shinfo pages into the guest; they don't need to be accessed via GVAs, and passing a define with "GPA" in the name to addr_gva2hpa() is confusing.

Cc: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210210182609.435200-5-seanjc@google.com>
Reviewed-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2021-02-11 | KVM: selftests: Fix hex vs. decimal snafu in Xen test | Sean Christopherson | 1 | -1/+1

The Xen shinfo selftest uses '40' when setting the GPA of the vCPU info struct, but checks for the result at '0x40'. Arbitrarily use the hex version to resolve the bug.

Fixes: 8d4e7e80838f ("KVM: x86: declare Xen HVM shared info capability and add test case")
Cc: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210210182609.435200-4-seanjc@google.com>
Reviewed-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2021-02-11 | KVM: selftests: Fix size of memslots created by Xen tests | Sean Christopherson | 2 | -4/+2

For better or worse, the memslot APIs take the number of pages, not the size in bytes. The Xen tests need 2 pages, not 8192 pages.

Fixes: 8d4e7e80838f ("KVM: x86: declare Xen HVM shared info capability and add test case")
Cc: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210210182609.435200-3-seanjc@google.com>
Reviewed-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

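In selftest terms (vm_userspace_mem_region_add() is the real selftest helper; the GPA/slot constants stand in for the test's own defines):

    /* The fifth argument is a page count, not a byte count: two
     * pages, not 2 * PAGE_SIZE (i.e. not 8192). */
    vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
                                SHINFO_REGION_GPA, SHINFO_REGION_SLOT,
                                2, 0);
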
2021-02-11 | KVM: selftests: Ignore recently added Xen tests' build output | Sean Christopherson | 1 | -0/+2

Add the new Xen test binaries to KVM selftests' .gitignore.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210210182609.435200-2-seanjc@google.com>
Reviewed-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>