path: root/virt/kvm/kvm_main.c
Age | Commit message | Author | Files changed, lines (-/+)
2010-04-21 | KVM: Add missing srcu_read_lock() for kvm_mmu_notifier_release() | Lai Jiangshan | 1 file, -0/+4
I got this dmesg because srcu_read_lock() is missing in kvm_mmu_notifier_release():

===================================================
[ INFO: suspicious rcu_dereference_check() usage. ]
---------------------------------------------------
arch/x86/kvm/x86.h:72 invoked rcu_dereference_check() without protection!

other info that might help us debug this:

rcu_scheduler_active = 1, debug_locks = 0
2 locks held by qemu-system-x86/3100:
 #0:  (rcu_read_lock){.+.+..}, at: [<ffffffff810d73dc>] __mmu_notifier_release+0x38/0xdf
 #1:  (&(&kvm->mmu_lock)->rlock){+.+...}, at: [<ffffffffa0130a6a>] kvm_mmu_zap_all+0x21/0x5e [kvm]

stack backtrace:
Pid: 3100, comm: qemu-system-x86 Not tainted 2.6.34-rc3-22949-gbc8a97a-dirty #2
Call Trace:
 [<ffffffff8106afd9>] lockdep_rcu_dereference+0xaa/0xb3
 [<ffffffffa0123a89>] unalias_gfn+0x56/0xab [kvm]
 [<ffffffffa0119600>] gfn_to_memslot+0x16/0x25 [kvm]
 [<ffffffffa012ffca>] gfn_to_rmap+0x17/0x6e [kvm]
 [<ffffffffa01300c1>] rmap_remove+0xa0/0x19d [kvm]
 [<ffffffffa0130649>] kvm_mmu_zap_page+0x109/0x34d [kvm]
 [<ffffffffa0130a7e>] kvm_mmu_zap_all+0x35/0x5e [kvm]
 [<ffffffffa0122870>] kvm_arch_flush_shadow+0x16/0x22 [kvm]
 [<ffffffffa01189e0>] kvm_mmu_notifier_release+0x15/0x17 [kvm]
 [<ffffffff810d742c>] __mmu_notifier_release+0x88/0xdf
 [<ffffffff810d73dc>] ? __mmu_notifier_release+0x38/0xdf
 [<ffffffff81040848>] ? exit_mm+0xe0/0x115
 [<ffffffff810c2cb0>] exit_mmap+0x2c/0x17e
 [<ffffffff8103c472>] mmput+0x2d/0xd4
 [<ffffffff81040870>] exit_mm+0x108/0x115
 [...]

Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
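A minimal sketch of the shape of the fix, assuming the mmu_notifier_to_kvm() helper and the kvm->srcu member as they exist in kvm_main.c of this era: take an SRCU read-side critical section around the shadow flush so gfn_to_memslot() and friends run under protection.

    static void kvm_mmu_notifier_release(struct mmu_notifier *mn,
                                         struct mm_struct *mm)
    {
            struct kvm *kvm = mmu_notifier_to_kvm(mn);
            int idx;

            idx = srcu_read_lock(&kvm->srcu);       /* protect memslot readers */
            kvm_arch_flush_shadow(kvm);
            srcu_read_unlock(&kvm->srcu, idx);
    }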
2010-04-20 | KVM: fix the handling of dirty bitmaps to avoid overflows | Takuya Yoshikawa | 1 file, -5/+8
Int is not long enough to store the size of a dirty bitmap. This patch fixes this problem with the introduction of a wrapper function to calculate the sizes of dirty bitmaps. Note: in mark_page_dirty(), we have to consider the fact that __set_bit() takes the offset as int, not long. Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
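A sketch of the kind of wrapper the patch describes, doing the size arithmetic in unsigned long rather than int (name and placement are illustrative):

    static inline unsigned long kvm_dirty_bitmap_bytes(struct kvm_memory_slot *memslot)
    {
            /* one bit per page, rounded up to whole longs, expressed in bytes */
            return ALIGN(memslot->npages, BITS_PER_LONG) / 8;
    }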
2010-03-30 | include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h | Tejun Heo | 1 file, -1/+1
percpu.h is included by sched.h and module.h and thus ends up being included when building most .c files. percpu.h includes slab.h, which in turn includes gfp.h, making everything defined by the two files universally available and complicating inclusion dependencies.

The percpu.h -> slab.h dependency is about to be removed. Prepare for this change by updating users of gfp and slab facilities to include those headers directly instead of assuming their availability. As this conversion needs to touch a large number of source files, the following script was used as the basis of the conversion.

  http://userweb.kernel.org/~tj/misc/slabh-sweep.py

The script does the following:

* Scan files for gfp and slab usages and update includes such that only the necessary includes are there, i.e. if only gfp is used, gfp.h; if slab is used, slab.h.

* When the script inserts a new include, it looks at the include blocks and tries to put the new include such that its order conforms to its surroundings. It's put in the include block which contains core kernel includes, in the same order that the rest are ordered - alphabetical, Christmas tree, rev-Xmas-tree, or at the end if there doesn't seem to be any matching order.

* If the script can't find a place to put a new include (mostly because the file doesn't have a fitting include block), it prints out an error message indicating which .h file needs to be added to the file.

The conversion was done in the following steps.

1. The initial automatic conversion of all .c files updated slightly over 4000 files, deleting around 700 includes and adding ~480 gfp.h and ~3000 slab.h inclusions. The script emitted errors for ~400 files.

2. Each error was manually checked. Some didn't need the inclusion, some needed manual addition, while adding it to the implementation .h or embedding .c file was more appropriate for others. This step added inclusions to around 150 files.

3. The script was run again and the output was compared to the edits from #2 to make sure no file was left behind.

4. Several build tests were done and a couple of problems were fixed, e.g. lib/decompress_*.c used malloc/free() wrappers around slab APIs requiring slab.h to be added manually.

5. The script was run on all .h files but without automatically editing them, as sprinkling gfp.h and slab.h inclusions around .h files could easily lead to inclusion dependency hell. Most gfp.h inclusion directives were ignored as stuff from gfp.h was usually widely available and often used in preprocessor macros. Each slab.h inclusion directive was examined and added manually as necessary.

6. percpu.h was updated not to include slab.h.

7. Build tests were done on the following configurations and failures were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my distributed build env didn't work with gcov compiles) and a few more options had to be turned off depending on archs to make things build (like ipr on powerpc/64, which failed due to missing writeq).

   * x86 and x86_64 UP and SMP allmodconfig and a custom test config
   * powerpc and powerpc64 SMP allmodconfig
   * sparc and sparc64 SMP allmodconfig
   * ia64 SMP allmodconfig
   * s390 SMP allmodconfig
   * alpha SMP allmodconfig
   * um on x86_64 SMP allmodconfig

8. percpu.h modifications were reverted so that it could be applied as a separate patch and serve as a bisection point.

Given the fact that I had only a couple of failures from the tests in step 6, I'm fairly confident about the coverage of this conversion patch. If there is a breakage, it's likely to be something in one of the arch headers, which should be easily discoverable on most builds of the specific arch.

Signed-off-by: Tejun Heo <tj@kernel.org>
Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
2010-03-01 | KVM: Convert kvm->requests_lock to raw_spinlock_t | Avi Kivity | 1 file, -3/+3
The code relies on kvm->requests_lock inhibiting preemption. Noted by Jan Kiszka. Signed-off-by: Avi Kivity <avi@redhat.com>
2010-03-01 | KVM: Introduce kvm_host_page_size | Joerg Roedel | 1 file, -0/+25
This patch introduces a generic function to find out the host page size for a given gfn. This function is needed by the kvm iommu code. This patch also simplifies the x86 host_mapping_level function. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com> Signed-off-by: Avi Kivity <avi@redhat.com>
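A hedged sketch of such a generic lookup: translate the gfn to a host virtual address and ask the backing VMA for its page size (mmap_sem naming as in kernels of this era; details may differ from the merged code):

    static unsigned long host_page_size_sketch(struct kvm *kvm, gfn_t gfn)
    {
            struct vm_area_struct *vma;
            unsigned long addr, size = PAGE_SIZE;

            addr = gfn_to_hva(kvm, gfn);
            if (kvm_is_error_hva(addr))
                    return PAGE_SIZE;

            down_read(&current->mm->mmap_sem);
            vma = find_vma(current->mm, addr);
            if (vma)
                    size = vma_kernel_pagesize(vma);  /* honours hugetlbfs VMAs */
            up_read(&current->mm->mmap_sem);

            return size;
    }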
2010-03-01 | KVM: Fix kvm_coalesced_mmio_ring duplicate allocation | Sheng Yang | 1 file, -17/+0
Commit 0953ca73 ("KVM: Simplify coalesced mmio initialization") allocates kvm_coalesced_mmio_ring in kvm_coalesced_mmio_init(), but didn't discard the original allocation... Signed-off-by: Sheng Yang <sheng@linux.intel.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
2010-03-01 | KVM: fix cleanup_srcu_struct on vm destruction | Marcelo Tosatti | 1 file, -1/+0
cleanup_srcu_struct on VM destruction remains broken:

BUG: unable to handle kernel paging request at ffffffffffffffff
IP: [<ffffffff802533d2>] srcu_read_lock+0x16/0x21
RIP: 0010:[<ffffffff802533d2>]  [<ffffffff802533d2>] srcu_read_lock+0x16/0x21
Call Trace:
 [<ffffffffa05354c4>] kvm_arch_vcpu_uninit+0x1b/0x48 [kvm]
 [<ffffffffa05339c6>] kvm_vcpu_uninit+0x9/0x15 [kvm]
 [<ffffffffa0569f7d>] vmx_free_vcpu+0x7f/0x8f [kvm_intel]
 [<ffffffffa05357b5>] kvm_arch_destroy_vm+0x78/0x111 [kvm]
 [<ffffffffa053315b>] kvm_put_kvm+0xd4/0xfe [kvm]

Move it to kvm_arch_destroy_vm.

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Reported-by: Jan Kiszka <jan.kiszka@siemens.com>
2010-03-01 | KVM: convert slots_lock to a mutex | Marcelo Tosatti | 1 file, -5/+5
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
2010-03-01 | KVM: convert io_bus to SRCU | Marcelo Tosatti | 1 file, -43/+63
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
2010-03-01 | KVM: x86: switch kvm_set_memory_alias to SRCU update | Marcelo Tosatti | 1 file, -2/+2
Using a similar two-step procedure as for memslots. Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
2010-03-01 | KVM: introduce kvm->srcu and convert kvm_set_memory_region to SRCU update | Marcelo Tosatti | 1 file, -36/+105
Use two steps for memslot deletion: mark the slot invalid (which stops instantiation of new shadow pages for that slot, but allows destruction), then instantiate the new empty slot. Also simplifies kvm_handle_hva locking. Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
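The update side of such an SRCU scheme follows the usual publish-and-wait pattern; a hedged sketch (field names as in struct kvm after the memslots-layout change further down this log):

    static void install_memslots_sketch(struct kvm *kvm, struct kvm_memslots *new)
    {
            struct kvm_memslots *old = kvm->memslots;

            rcu_assign_pointer(kvm->memslots, new);
            synchronize_srcu(&kvm->srcu);   /* wait until no reader can still see 'old' */
            kfree(old);
    }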
2010-03-01 | KVM: use gfn_to_pfn_memslot in kvm_iommu_map_pages | Marcelo Tosatti | 1 file, -1/+1
So it's possible to iommu map a memslot before making it visible to kvm. Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
2010-03-01 | KVM: introduce gfn_to_pfn_memslot | Marcelo Tosatti | 1 file, -8/+25
Which takes a memslot pointer instead of using kvm->memslots. To be used by the SRCU conversion later. Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
2010-03-01 | KVM: split kvm_arch_set_memory_region into prepare and commit | Marcelo Tosatti | 1 file, -7/+5
Required for the SRCU conversion later. Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
2010-03-01 | KVM: modify memslots layout in struct kvm | Marcelo Tosatti | 1 file, -13/+23
Have a pointer to an allocated region inside struct kvm. [alex: fix ppc book 3s] Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
2010-03-01 | KVM: Simplify coalesced mmio initialization | Avi Kivity | 1 file, -6/+1
- add destructor function
- move related allocation into constructor
- add stubs for !CONFIG_KVM_MMIO

Signed-off-by: Avi Kivity <avi@redhat.com>
2010-03-01 | KVM: Remove ifdefs from mmu notifier initialization | Avi Kivity | 1 file, -5/+15
Signed-off-by: Avi Kivity <avi@redhat.com>
2010-03-01 | KVM: Disentangle mmu notifiers and coalesced_mmio registration | Avi Kivity | 1 file, -11/+7
They aren't related. Signed-off-by: Avi Kivity <avi@redhat.com>
2009-12-27 | KVM: get rid of kvm_create_vm() unused label warning on s390 | Heiko Carstens | 1 file, -0/+3
arch/s390/kvm/../../../virt/kvm/kvm_main.c: In function 'kvm_create_vm':
arch/s390/kvm/../../../virt/kvm/kvm_main.c:409: warning: label 'out_err' defined but not used

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
2009-12-27 | KVM: Fix possible circular locking in kvm_vm_ioctl_assign_device() | Sheng Yang | 1 file, -1/+1
One possible order is: the KVM_CREATE_IRQCHIP ioctl (which takes kvm->lock) -> kvm_io_bus_register_dev() -> down_write(kvm->slots_lock). The other one is in kvm_vm_ioctl_assign_device(), which takes kvm->slots_lock first, then kvm->lock. Update the comment on lock order as well. Observed via kernel lock debugging warnings. Cc: stable@kernel.org Signed-off-by: Sheng Yang <sheng@linux.intel.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2009-12-22 | anonfd: Allow making anon files read-only | Roland Dreier | 1 file, -2/+2
It seems a couple places such as arch/ia64/kernel/perfmon.c and drivers/infiniband/core/uverbs_main.c could use anon_inode_getfile() instead of a private pseudo-fs + alloc_file(), if only there were a way to get a read-only file. So provide this by having anon_inode_getfile() create a read-only file if we pass O_RDONLY in flags. Signed-off-by: Roland Dreier <rolandd@cisco.com> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
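A hedged sketch of a caller using the new capability; my_fops, the "[my-ro]" name and create_readonly_fd() are placeholders, not code from the patch:

    static const struct file_operations my_fops;    /* read/poll handlers defined elsewhere */

    static int create_readonly_fd(void *priv)
    {
            struct file *file;
            int fd = get_unused_fd();

            if (fd < 0)
                    return fd;

            /* with this patch, O_RDONLY in flags yields a file without FMODE_WRITE */
            file = anon_inode_getfile("[my-ro]", &my_fops, priv, O_RDONLY);
            if (IS_ERR(file)) {
                    put_unused_fd(fd);
                    return PTR_ERR(file);
            }

            fd_install(fd, file);
            return fd;
    }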
2009-12-09 | Merge commit 'origin/master' into next | Benjamin Herrenschmidt | 1 file, -814/+147
Conflicts: include/linux/kvm.h
2009-12-03 | KVM: Allow internal errors reported to userspace to carry extra data | Avi Kivity | 1 file, -0/+1
Usually userspace will freeze the guest so we can inspect it, but some internal state is not available. Add extra data to internal error reporting so we can expose it to the debugger. Extra data is specific to the suberror. Signed-off-by: Avi Kivity <avi@redhat.com>
2009-12-03 | KVM: Enable 32bit dirty log pointers on 64bit host | Arnd Bergmann | 1 file, -1/+50
With big endian userspace, we can't quite figure out if a pointer is 32 bit (shifted >> 32) or 64 bit when we read a 64 bit pointer. This is what happens with dirty logging. To get the pointer interpreted correctly, we thus need Arnd's patch to implement a compat layer for the ioctl: A better way to do this is to add a separate compat_ioctl() method that converts this for you. Based on initial patch from Arnd Bergmann. Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
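A hedged sketch of the compat path: a 32-bit layout of the dirty-log argument whose pointer member is widened with compat_ptr() before the native handler runs (struct and function names follow the style of the patch but may not match it exactly):

    struct compat_kvm_dirty_log {
            __u32 slot;
            __u32 padding1;
            union {
                    compat_uptr_t dirty_bitmap;     /* 32-bit userspace pointer */
                    __u64 padding2;
            };
    };

    static int compat_get_dirty_log_sketch(struct kvm *kvm,
                                           struct compat_kvm_dirty_log *compat_log)
    {
            struct kvm_dirty_log log;

            log.slot = compat_log->slot;
            log.dirty_bitmap = compat_ptr(compat_log->dirty_bitmap);

            return kvm_vm_ioctl_get_dirty_log(kvm, &log);
    }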
2009-12-03 | KVM: introduce kvm_vcpu_on_spin | Zhai, Edwin | 1 file, -0/+15
Introduce kvm_vcpu_on_spin, to be used by VMX/SVM to yield processing once the cpu detects pause-based looping. Signed-off-by: "Zhai, Edwin" <edwin.zhai@intel.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
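A hedged sketch of one way such a yield can look, assuming the vcpu wait queue vcpu->wq: sleep briefly in the hope that the lock holder gets scheduled (not necessarily the exact code merged):

    void kvm_vcpu_on_spin_sketch(struct kvm_vcpu *vcpu)
    {
            ktime_t expires;
            DEFINE_WAIT(wait);

            prepare_to_wait(&vcpu->wq, &wait, TASK_INTERRUPTIBLE);

            /* sleep ~100us and hope the lock holder runs in the meantime */
            expires = ktime_add_ns(ktime_get(), 100000UL);
            schedule_hrtimeout(&expires, HRTIMER_MODE_ABS);

            finish_wait(&vcpu->wq, &wait);
    }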
2009-12-03 | KVM: Activate Virtualization On Demand | Alexander Graf | 1 file, -12/+78
X86 CPUs need to have some magic happening to enable the virtualization extensions on them. This magic can have unpleasant consequences for users, like blocking other VMMs from working (vmx) or using invalid TLB entries (svm). Currently KVM activates virtualization when the respective kernel module is loaded. This blocks us from autoloading KVM modules without breaking other VMMs. To circumvent this problem at least a bit, this patch introduces on-demand activation of virtualization. This means that, instead, virtualization is enabled on creation of the first virtual machine and disabled on destruction of the last one. With this, KVM can be easily autoloaded while keeping other hypervisors usable. Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
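A hedged sketch of the counting scheme: a global usage count bumped on VM creation and dropped on destruction, with the per-cpu hardware_enable()/hardware_disable() callbacks and kvm_lock assumed to exist as in kvm_main.c. The real code additionally has to cope with per-cpu enable failures and roll back.

    static int kvm_usage_count;

    static void hardware_enable_all_sketch(void)
    {
            spin_lock(&kvm_lock);
            if (++kvm_usage_count == 1)
                    on_each_cpu(hardware_enable, NULL, 1);   /* first VM: turn VMX/SVM on */
            spin_unlock(&kvm_lock);
    }

    static void hardware_disable_all_sketch(void)
    {
            spin_lock(&kvm_lock);
            if (--kvm_usage_count == 0)
                    on_each_cpu(hardware_disable, NULL, 1);  /* last VM gone: turn it off */
            spin_unlock(&kvm_lock);
    }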
2009-12-03 | KVM: Move assigned device code to own file | Avi Kivity | 1 file, -796/+2
Signed-off-by: Avi Kivity <avi@redhat.com>
2009-12-03 | KVM: Drop kvm->irq_lock lock from irq injection path | Gleb Natapov | 1 file, -2/+0
The only thing it protects now is interrupt injection into lapic and this can work lockless. Even now with kvm->irq_lock in place access to lapic is not entirely serialized since vcpu access doesn't take kvm->irq_lock. Signed-off-by: Gleb Natapov <gleb@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2009-12-03 | KVM: Move irq ack notifier list to arch independent code | Gleb Natapov | 1 file, -0/+1
Mask irq notifier list is already there. Signed-off-by: Gleb Natapov <gleb@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2009-12-03 | KVM: Change irq routing table to use gsi indexed array | Gleb Natapov | 1 file, -1/+0
Use gsi indexed array instead of scanning all entries on each interrupt injection. Signed-off-by: Gleb Natapov <gleb@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2009-12-03 | KVM: Don't wrap schedule() with vcpu_put()/vcpu_load() | Avi Kivity | 1 file, -2/+0
Preemption notifiers will do that for us automatically. Signed-off-by: Avi Kivity <avi@redhat.com>
2009-11-05 | Use Little Endian for Dirty Bitmap | Alexander Graf | 1 file, -2/+3
We currently use host endian long types to store information in the dirty bitmap. This works reasonably well on Little Endian targets, because the u32 after the first contains the next 32 bits. On Big Endian this breaks completely though, forcing us to be inventive here. So Ben suggested to always use Little Endian, which looks reasonable. We only have dirty bitmap implemented in Little Endian targets so far and since PowerPC would be the first Big Endian platform, we can just as well switch to Little Endian always with little effort without breaking existing targets. Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
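For illustration, setting bit nr in a bitmap with a fixed little-endian byte layout is independent of host word size and endianness; a hedged, non-atomic sketch:

    static inline void set_bit_le_sketch(unsigned int nr, void *addr)
    {
            unsigned char *bytes = addr;

            bytes[nr / 8] |= 1u << (nr % 8);        /* bit 0 is the LSB of byte 0 */
    }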
2009-10-16 | KVM: Prevent kvm_init from corrupting debugfs structures | Darrick J. Wong | 1 file, -4/+3
I'm seeing an oops condition when kvm-intel and kvm-amd are modprobe'd during boot (say on an Intel system) and then rmmod'd:

  # modprobe kvm-intel
    kvm_init()
    kvm_init_debug()
    kvm_arch_init()   <-- stores debugfs dentries internally
    (success, etc)

  # modprobe kvm-amd
    kvm_init()
    kvm_init_debug()  <-- second initialization clobbers kvm's internal pointers to dentries
    kvm_arch_init()
    kvm_exit_debug()  <-- and frees them

  # rmmod kvm-intel
    kvm_exit()
    kvm_exit_debug()  <-- double free of debugfs files!

  *BOOM*

If execution gets to the end of kvm_init(), then the calling module has been established as the kvm provider. Move the debugfs initialization to the end of the function, and remove the now-unnecessary call to kvm_exit_debug() from the error path. That way we avoid trampling on the debugfs entries and freeing them twice.

Cc: stable@kernel.org
Signed-off-by: Darrick J. Wong <djwong@us.ibm.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
2009-10-04 | KVM: add support for change_pte mmu notifiers | Izik Eidus | 1 file, -0/+14
This is needed for kvm if it wants ksm to directly map pages into its shadow page tables. [marcelo: cast pfn assignment to u64] Signed-off-by: Izik Eidus <ieidus@redhat.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
2009-10-01 | const: constify remaining file_operations | Alexey Dobriyan | 1 file, -1/+1
[akpm@linux-foundation.org: fix KVM] Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> Acked-by: Mike Frysinger <vapier@gentoo.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2009-09-27 | const: mark struct vm_struct_operations | Alexey Dobriyan | 1 file, -2/+2
* mark struct vm_area_struct::vm_ops as const
* mark vm_ops in AGP code

But leave TTM code alone, something is fishy there with global vm_ops being used.

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2009-09-24 | cpumask: use zalloc_cpumask_var() where possible | Li Zefan | 1 file, -2/+1
Remove open-coded zalloc_cpumask_var() and zalloc_cpumask_var_node(). Signed-off-by: Li Zefan <lizf@cn.fujitsu.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2009-09-10 | KVM: fix compile warnings on s390 | Heiko Carstens | 1 file, -3/+5
  CC      arch/s390/kvm/../../../virt/kvm/kvm_main.o
arch/s390/kvm/../../../virt/kvm/kvm_main.c: In function '__kvm_set_memory_region':
arch/s390/kvm/../../../virt/kvm/kvm_main.c:485: warning: unused variable 'j'
arch/s390/kvm/../../../virt/kvm/kvm_main.c:484: warning: unused variable 'lpages'
arch/s390/kvm/../../../virt/kvm/kvm_main.c:483: warning: unused variable 'ugfn'

Cc: Carsten Otte <cotte@de.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
2009-09-10 | KVM: Move #endif KVM_CAP_IRQ_ROUTING to correct place | Avi Kivity | 1 file, -1/+1
The symbol only controls irq routing, not MSI-X. Signed-off-by: Avi Kivity <avi@redhat.com>
2009-09-10 | KVM: fix kvm_init() error handling | Xiao Guangrong | 1 file, -1/+1
Remove the debugfs file if kvm_arch_init() returns an error. Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2009-09-10 | KVM: Drop obsolete cpu_get/put in make_all_cpus_request | Jan Kiszka | 1 file, -2/+1
spin_lock disables preemption, so we can simply read the current cpu. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
2009-09-10 | KVM: Reduce runnability interface with arch support code | Gleb Natapov | 1 file, -3/+1
Remove kvm_cpu_has_interrupt() and kvm_arch_interrupt_allowed() from interface between general code and arch code. kvm_arch_vcpu_runnable() checks for interrupts instead. Signed-off-by: Gleb Natapov <gleb@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2009-09-10 | KVM: add ioeventfd support | Gregory Haskins | 1 file, -1/+10
ioeventfd is a mechanism to register PIO/MMIO regions to trigger an eventfd signal when written to by a guest. Host userspace can register any arbitrary IO address with a corresponding eventfd and then pass the eventfd to a specific end-point of interest for handling.

Normal IO requires a blocking round-trip since the operation may cause side-effects in the emulated model or may return data to the caller. Therefore, an IO in KVM traps from the guest to the host, causes a VMX/SVM "heavy-weight" exit back to userspace, and is ultimately serviced by qemu's device model synchronously before returning control back to the vcpu.

However, there is a subclass of IO which acts purely as a trigger for other IO (such as to kick off an out-of-band DMA request, etc). For these patterns, the synchronous call is particularly expensive since we really only want to get our notification transmitted asynchronously and return as quickly as possible. All the synchronous infrastructure to ensure proper data dependencies are met in the normal IO case is just unnecessary overhead for signalling. This adds additional computational load on the system, as well as latency to the signalling path.

Therefore, we provide a mechanism for registration of an in-kernel trigger point that allows the VCPU to only require a very brief, lightweight exit just long enough to signal an eventfd. This also means that any clients compatible with the eventfd interface (which includes userspace and kernelspace equally well) can now register to be notified. The end result should be a more flexible and higher performance notification API for the backend KVM hypervisor and peripheral components.

To test this theory, we built a test-harness called "doorbell". This module has a function called "doorbell_ring()" which simply increments a counter each time the doorbell is signaled. It supports signalling from either an eventfd or an ioctl(). We then wired up two paths to the doorbell: one via QEMU through a registered io region and the doorbell ioctl(), and the other direct via ioeventfd. You can download this test harness here:

  ftp://ftp.novell.com/dev/ghaskins/doorbell.tar.bz2

The measured results are as follows:

  qemu-mmio:       110000 iops, 9.09us rtt
  ioeventfd-mmio:  200100 iops, 5.00us rtt
  ioeventfd-pio:   367300 iops, 2.72us rtt

I didn't measure qemu-pio, because I have to figure out how to register a PIO region with qemu's device model, and I got lazy. However, for now we can extrapolate based on the data from the NULLIO runs of +2.56us for MMIO, and -350ns for HC, and we get:

  qemu-pio:      153139 iops, 6.53us rtt
  ioeventfd-hc:  412585 iops, 2.37us rtt

These are just for fun, for now, until I can gather more data. Here is a graph for your convenience:

  http://developer.novell.com/wiki/images/7/76/Iofd-chart.png

The conclusion to draw is that we save about 4us by skipping the userspace hop.

Signed-off-by: Gregory Haskins <ghaskins@novell.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
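A hedged userspace sketch of registering such a doorbell, assuming the KVM_IOEVENTFD ioctl and struct kvm_ioeventfd as they appear in later kernels' <linux/kvm.h>; the MMIO address is illustrative:

    #include <string.h>
    #include <sys/eventfd.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* Returns an eventfd that is signalled on 4-byte guest writes to mmio_addr. */
    int register_doorbell(int vm_fd, __u64 mmio_addr)
    {
            struct kvm_ioeventfd args;
            int efd = eventfd(0, 0);

            if (efd < 0)
                    return -1;

            memset(&args, 0, sizeof(args));
            args.addr = mmio_addr;    /* guest-physical MMIO address to trap */
            args.len  = 4;            /* width of the write that triggers it */
            args.fd   = efd;          /* no DATAMATCH flag: any value matches */

            if (ioctl(vm_fd, KVM_IOEVENTFD, &args) < 0)
                    return -1;

            return efd;               /* read()/poll() this fd to observe guest kicks */
    }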
2009-09-10 | KVM: make io_bus interface more robust | Gregory Haskins | 1 file, -5/+34
Today kvm_io_bus_register_dev() returns void and will internally BUG_ON if it fails. We want to create dynamic MMIO/PIO entries driven from userspace later in the series, so we need to enhance the code to be more robust, with the following changes:

1) Add a return value to the registration function.
2) Fix up all the callsites to check the return code, handle any failures, and percolate the error up to the caller.
3) Add an unregister function that collapses holes in the array.

Signed-off-by: Gregory Haskins <ghaskins@novell.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
2009-09-10 | KVM: document lock nesting rule | Michael S. Tsirkin | 1 file, -1/+1
Document kvm->lock nesting within kvm->slots_lock Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2009-09-10 | KVM: remove in_range from io devices | Michael S. Tsirkin | 1 file, -10/+16
This changes bus accesses to use high-level kvm_io_bus_read/kvm_io_bus_write functions. in_range now becomes unused so it is removed from device ops in favor of read/write callbacks performing range checks internally. This allows aliasing (mostly for in-kernel virtio), as well as better error handling by making it possible to pass errors up to userspace. Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
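A hedged sketch of what a bus device looks like after this change: the write callback does its own range check and reports "not mine" with an error so the bus can try the next device (structure and field names are assumed, not quoted from the patch):

    struct doorbell_dev {
            struct kvm_io_device dev;
            gpa_t base;
            int len;
            atomic_t count;
    };

    static int doorbell_write_sketch(struct kvm_io_device *this, gpa_t addr,
                                     int len, const void *val)
    {
            struct doorbell_dev *db = container_of(this, struct doorbell_dev, dev);

            if (addr < db->base || addr + len > db->base + db->len)
                    return -EOPNOTSUPP;     /* not handled here, keep scanning the bus */

            atomic_inc(&db->count);         /* "ring" the doorbell */
            return 0;
    }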
2009-09-10 | KVM: convert bus to slots_lock | Michael S. Tsirkin | 1 file, -1/+11
Use slots_lock to protect device list on the bus. slots_lock is already taken for read everywhere, so we only need to take it for write when registering devices. This is in preparation to removing in_range and kvm->lock around it. Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2009-09-10 | KVM: remove old KVMTRACE support code | Marcelo Tosatti | 1 file, -2/+1
Return EOPNOTSUPP for KVM_TRACE_ENABLE/PAUSE/DISABLE ioctls. Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2009-09-10 | KVM: x86: missing locking in PIT/IRQCHIP/SET_BSP_CPU ioctl paths | Marcelo Tosatti | 1 file, -0/+2
Correct missing locking in a few places in x86's vm_ioctl handling path. Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2009-09-10 | KVM: Prepare memslot data structures for multiple hugepage sizes | Joerg Roedel | 1 file, -17/+39
[avi: fix build on non-x86] Signed-off-by: Joerg Roedel <joerg.roedel@amd.com> Signed-off-by: Avi Kivity <avi@redhat.com>