| Commit message | Author | Age | Files | Lines |
| |
If the CPU has the new VERW behavior then that is used; otherwise the
proper sequence from Intel's "Deep Dive" doc is used in the
return-to-userspace and enter-VMM-guest paths. The enter-C3-idle
path is not mitigated because it's only a problem when SMT/HT is
enabled: mitigating everything when that's enabled would be a _huge_
set of changes that we see no point in doing.
Update vmm(4) to pass through the MSR bits so that guests can apply
the optimal mitigation.
VMM help and specific feedback from mlarkin@
vendor-portability help from jsg@ and kettenis@
ok kettenis@ mlarkin@ deraadt@ jsg@
| |
Emulate kvm pvclock in vmm(4). Compatible with pvclock(4) in OpenBSD. Linux
does not attach to this (yet).
Fixes by reyk@ and tested extensively by reyk@, tb@ and phessler@
ok mlarkin@ phessler@ reyk@
| |
Add a first cut of an x86 page table walker to vmd(8) and vmm(4). This function is
not used right now but is a building block for future features like HPET, OUTSB
and INSB emulation, nested virtualisation support, etc.
With help from Mike Larkin
ok mlarkin@
| |
control features on AMD. Linux tries to use them and since these are not
fully implemented yet, it results in an OOPS during boot on recent
hardware.
When these are properly passed through, we can restore advertising
support for this feature.
ok deraadt@
| |
the MSRs to support them. Fixes an OOPS during Linux guest VM boot on
Ryzen.
ok deraadt
| |
ok mlarkin@
| |
Allow save/restore of %drX registers during VM exit and entry
discussed with deraadt@
| |
like we already do for MWAIT/MONITOR. Also match Intel here by not
exposing the SVM capability to AMD guests.
Allows Linux guests to boot in vmd(8) on Ryzen CPUs.
ok mlarkin@
| |
(1) Future cpus which don't have the bug, (2) cpus with microcode
containing an L1D flush operation, (3) stuffing the L1D cache with fresh
data and expiring old content. This stuffing loop is complicated and
interesting; no details on the mitigation have been released by Intel, so
Mike and I studied other systems for inspiration. The replacement algorithm
for the L1D is described in the tlbleed paper. We use a 64K PA-linear
region filled with trapsleds (in case there is L1D->L1I data movement).
The TLBs covering the region are loaded first, because TLB loading
apparently flows through the D cache. Before performing vmlaunch or
vmresume, the cachelines covering the guest registers are also flushed.
with mlarkin, additional testing by pd, handy comments from the
kettenis and guenther peanuts
| |
| |
avoiding multiple readregs ioctls back to vmm in case register content
is needed subsequently.
ok phessler
| |
Make the cache neighbor fields match the number of VCPUs present
(currently 1)
ok reyk
| |
for noticing.
| |
These ports are used for Edge/Level control on the legacy PIC and will be
needed for a subsequent commit.
| |
ok guenther
| |
was already disabled, but reporting it as available and then failing it
caused SmartOS to crash during boot.
ok pd@
| |
usermode daemons handle that.
ok pd@
| |
| |
Diff from carlos cardenas, thanks
| |
This restricts receiving vms from hosts with more cpu features.
Tested on
broadwell -> skylake (works)
skylake -> broadwell (doesn't work)
ok mlarkin@
| |
various events into the guest
| |
some feature flags in CPUID being set or cleared.
ok pd
| |
guest VMs can now use MAXDSIZ ram.
ok deraadt@, stefan@, pd@
| |
VM setup.
ok pd
| |
as a struct passed to vmm has changed size.
ok deraadt, pd
| |
ok deraadt
| |
from a few weeks ago that did the same for Intel/VMX.
ok deraadt
| |
| |
| |
Tested on linux and amd64 OpenBSD guests.
Posted to tech by Pratik Vyas.
| |
a larger effort to implement vmctl send/vmctl receive (snapshot and VM
migration).
From Pratik Vyas, Siri Chandana, Harshada Mone and Ashwin Agrawal, a
group of students I am supervising.
ok kettenis
| |
vcpu setup
| |
that it can be used in SVM and VMX.
no functional change
| |
tested by reyk, dcoppa, and a few others.
ok kettenis@ on the fpu bits
ok deraadt@ on the vmm bits
| |
penalizes i386 guests that previously had memory allocated by vmd after
0xF0FFFFFF (the previous range end) but makes memory range calculation
in vmd/mc146818 much much easier. This diff needs to be combined with
the previous vmd diffs or you won't be able to create a vm with memory
size larger than ~3855MB.
| |
| |
merging)
| |
| |
next SVM diff
| |
msr bitmap, ioio bitmap, and host state save area)
| |
SVM/RVI: VMCB structure definitions for amd64/i386
| |
longer needed.
| |
| |
initializing the unused bits, leading to VMABORTs during vmentry. Found the
hard way on i386 vmm, but the problem could occur on amd64 as well.
| |
caused IRQ9 to be shared between the second disk device and the vio(4)s,
which led to poor network performance.
ok reyk, stefan
| |
broadwell/skylake bug.
| |
| |
| |
Makes reset code a little simpler. ok mlarkin@
| |
ok mlarkin@