path: root/drivers/virtio
Age  Commit message  Author  Files  Lines
2019-10-28  virtio_ring: fix stalls for packed rings  Marvin Liu  1  -4/+3
When VIRTIO_F_RING_EVENT_IDX is negotiated, virtio devices can use virtqueue_enable_cb_delayed_packed to reduce the number of device interrupts. At the moment, this is the case for virtio-net when the napi_tx module parameter is set to false. In this case, the virtio driver selects an event offset and expects that the device will send a notification when rolling over the event offset in the ring. However, if this roll-over happens before the event suppression structure update, the notification won't be sent. To address this race condition the driver needs to check whether the device rolled over the offset after updating the event suppression structure. With VIRTIO_F_RING_PACKED, the virtio driver did this by reading the flags field of the descriptor at the specified offset. Unfortunately, checking at the event offset isn't reliable: if descriptors are chained (e.g. when INDIRECT is off) not all descriptors are overwritten by the device, so it's possible that the device skipped the specific descriptor the driver is checking when writing out used descriptors. If this happens, the driver won't detect the race condition and will incorrectly expect the device to send a notification. For virtio-net, the result will be a TX queue stall, with the transmission getting blocked forever. With the packed ring, it isn't easy to find a location which is guaranteed to change upon the roll-over, except the next device descriptor, as described in the spec: Writes of device and driver descriptors can generally be reordered, but each side (driver and device) are only required to poll (or test) a single location in memory: the next device descriptor after the one they processed previously, in circular order. While this might be sub-optimal, let's do exactly this for now. Cc: stable@vger.kernel.org Cc: Jason Wang <jasowang@redhat.com> Fixes: f51f982682e2a ("virtio_ring: leverage event idx in packed ring") Signed-off-by: Marvin Liu <yong.liu@intel.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
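A minimal sketch of the check described above, assuming the virtio 1.1 packed-ring descriptor layout; the struct and helper names are illustrative, not the driver's exact code:

    #include <linux/types.h>

    #define VRING_PACKED_DESC_F_AVAIL  7
    #define VRING_PACKED_DESC_F_USED   15

    struct packed_desc {                    /* illustrative packed descriptor */
            __le64 addr;
            __le32 len;
            __le16 id;
            __le16 flags;
    };

    /* A device descriptor counts as "used" when its AVAIL and USED flag
     * bits both match the driver's current used wrap counter. */
    static bool desc_is_used(const struct packed_desc *desc, bool wrap_counter)
    {
            u16 flags = le16_to_cpu(desc->flags);
            bool avail = !!(flags & (1 << VRING_PACKED_DESC_F_AVAIL));
            bool used  = !!(flags & (1 << VRING_PACKED_DESC_F_USED));

            return avail == used && used == wrap_counter;
    }

    /* After updating the event suppression structure, re-check the *next*
     * device descriptor (last_used_idx), not the descriptor at the event
     * offset, since only that location is guaranteed to change on roll-over. */
    static bool more_used_after_update(const struct packed_desc *ring,
                                       u16 last_used_idx, bool used_wrap_counter)
    {
            return desc_is_used(&ring[last_used_idx], used_wrap_counter);
    }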
2019-09-09  virtio_ring: fix unmap of indirect descriptors  Matthias Lange  1  -2/+6
The function virtqueue_add_split() DMA-maps the scatterlist buffers. In case a mapping error occurs, the already mapped buffers must be unmapped. This happens by jumping to the 'unmap_release' label. In case of indirect descriptors the release is wrong and may leak kernel memory. Because the implementation assumes that the head descriptor is already mapped, it starts iterating over the descriptor list from the head descriptor. However, for indirect descriptors the head descriptor is never mapped in case of an error. The fix is to initialize the start index with zero in case of indirect descriptors and use the 'desc' pointer directly for iterating over the descriptor chain. Signed-off-by: Matthias Lange <matthias.lange@kernkonzept.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
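A simplified sketch of the corrected 'unmap_release' path described above; variable names follow the commit message, and the in-tree code carries more state:

    unmap_release:
            err_idx = i;                 /* first descriptor that failed to map */
            i = indirect ? 0 : head;     /* indirect: head was never mapped, table starts at 0 */

            for (n = 0; n < total_sg; n++) {
                    if (i == err_idx)
                            break;
                    /* 'desc' points at the indirect table when indirect
                     * descriptors are in use, otherwise at the ring itself,
                     * so it can be walked directly in both cases. */
                    vring_unmap_one_split(vq, &desc[i]);
                    i = virtio16_to_cpu(vq->vq.vdev, desc[i].next);
            }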
2019-07-19  Merge branch 'work.mount0' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs  Linus Torvalds  1  -9/+4
Pull vfs mount updates from Al Viro: "The first part of mount updates. Convert filesystems to use the new mount API" * 'work.mount0' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: (63 commits) mnt_init(): call shmem_init() unconditionally constify ksys_mount() string arguments don't bother with registering rootfs init_rootfs(): don't bother with init_ramfs_fs() vfs: Convert smackfs to use the new mount API vfs: Convert selinuxfs to use the new mount API vfs: Convert securityfs to use the new mount API vfs: Convert apparmorfs to use the new mount API vfs: Convert openpromfs to use the new mount API vfs: Convert xenfs to use the new mount API vfs: Convert gadgetfs to use the new mount API vfs: Convert oprofilefs to use the new mount API vfs: Convert ibmasmfs to use the new mount API vfs: Convert qib_fs/ipathfs to use the new mount API vfs: Convert efivarfs to use the new mount API vfs: Convert configfs to use the new mount API vfs: Convert binfmt_misc to use the new mount API convenience helper: get_tree_single() convenience helper get_tree_nodev() vfs: Kill sget_userns() ...
2019-07-18  Merge tag 'libnvdimm-for-5.3' of git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm  Linus Torvalds  1  -0/+11
Pull libnvdimm updates from Dan Williams: "Primarily just the virtio_pmem driver: - virtio_pmem The new virtio_pmem facility introduces a paravirtualized persistent memory device that allows a guest VM to use DAX mechanisms to access a host-file with host-page-cache. It arranges for MAP_SYNC to be disabled and instead triggers a host fsync() when a 'write-cache flush' command is sent to the virtual disk device. - Miscellaneous small fixups" * tag 'libnvdimm-for-5.3' of git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm: virtio_pmem: fix sparse warning xfs: disable map_sync for async flush ext4: disable map_sync for async flush dax: check synchronous mapping is supported dm: enable synchronous dax libnvdimm: add dax_dev sync flag virtio-pmem: Add virtio pmem driver libnvdimm: nd_region flush callback support libnvdimm, namespace: Drop uuid_t implementation detail
2019-07-11  virtio-mmio: add error check for platform_get_irq  Ihor Matushchak  1  -1/+6
In vm_find_vqs(), irq has the wrong type, so in case no IRQ resource is defined, a wrong parameter will be passed to request_irq(). Signed-off-by: Ihor Matushchak <ihor.matushchak@foobox.net> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Ivan T. Ivanov <iivanov.xz@gmail.com>
2019-07-05  virtio-pmem: Add virtio pmem driver  Pankaj Gupta  1  -0/+11
This patch adds a virtio-pmem driver for KVM guests. The guest reads the persistent memory range information from Qemu over VIRTIO and registers it on the nvdimm_bus. It also creates a nd_region object with the persistent memory range information so that the existing 'nvdimm/pmem' driver can reserve this region in the system memory map. This way the 'virtio-pmem' driver uses the existing functionality of the pmem driver to register persistent memory compatible with DAX-capable filesystems. It also provides a function to perform a guest flush over VIRTIO from the 'pmem' driver when userspace performs a flush on a DAX memory range. Signed-off-by: Pankaj Gupta <pagupta@redhat.com> Reviewed-by: Yuval Shaia <yuval.shaia@oracle.com> Acked-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: Jakub Staron <jstaron@google.com> Tested-by: Jakub Staron <jstaron@google.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2019-05-27  virtio: Fix indentation of VIRTIO_MMIO  Fabrizio Castro  1  -4/+4
VIRTIO_MMIO config option block starts with a space, fix that. Signed-off-by: Fabrizio Castro <fabrizio.castro@bp.renesas.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2019-05-25  vfs: Convert virtio_balloon to use the new mount API  David Howells  1  -4/+4
Convert the virtio_balloon filesystem to the new internal mount API as the old one will be obsoleted and removed. This allows greater flexibility in communication of mount parameters between userspace, the VFS and the filesystem. See Documentation/filesystems/mount_api.txt for more information. Signed-off-by: David Howells <dhowells@redhat.com> cc: "Michael S. Tsirkin" <mst@redhat.com> cc: Jason Wang <jasowang@redhat.com> cc: virtualization@lists.linux-foundation.org Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2019-05-25  mount_pseudo(): drop 'name' argument, switch to d_make_root()  Al Viro  1  -2/+1
Once upon a time we used to set ->d_name of e.g. pipefs root so that d_path() on pipes would work. These days it's completely pointless - dentries of pipes are not even connected to pipefs root. However, mount_pseudo() had set the root dentry name (passed as the second argument) and callers kept inventing names to pass to it. Including those that didn't *have* any non-root dentries to start with... All of that had been pointless for about 8 years now; it's time to get rid of that cargo-culting... Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2019-05-24  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 102  Thomas Gleixner  2  -28/+2
Based on 1 normalized pattern(s): this program is free software you can redistribute it and or modify it under the terms of the gnu general public license as published by the free software foundation either version 2 of the license or at your option any later version this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details you should have received a copy of the gnu general public license along with this program if not write to the free software foundation inc 51 franklin st fifth floor boston ma 02110 1301 usa extracted by the scancode license scanner the SPDX license identifier GPL-2.0-or-later has been chosen to replace the boilerplate/reference in 50 file(s). Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org> Reviewed-by: Allison Randal <allison@lohutok.net> Reviewed-by: Richard Fontana <rfontana@redhat.com> Cc: linux-spdx@vger.kernel.org Link: https://lkml.kernel.org/r/20190523091649.499889647@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-05-24  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 78  Thomas Gleixner  5  -21/+5
Based on 1 normalized pattern(s): this work is licensed under the terms of the gnu gpl version 2 or later see the copying file in the top level directory extracted by the scancode license scanner the SPDX license identifier GPL-2.0-or-later has been chosen to replace the boilerplate/reference in 6 file(s). Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Allison Randal <allison@lohutok.net> Reviewed-by: Richard Fontana <rfontana@redhat.com> Cc: linux-spdx@vger.kernel.org Link: https://lkml.kernel.org/r/20190520075210.858783702@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-05-21  treewide: Add SPDX license identifier - Makefile/Kconfig  Thomas Gleixner  1  -0/+1
Add SPDX license identifiers to all Make/Kconfig files which: - Have no license information of any form These files fall under the project license, GPL v2 only. The resulting SPDX license identifier is: GPL-2.0-only Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-05-21  treewide: Add SPDX license identifier for more missed files  Thomas Gleixner  2  -0/+2
Add SPDX license identifiers to all files which: - Have no license information of any form - Have MODULE_LICENCE("GPL*") inside which was used in the initial scan/conversion to ignore the file These files fall under the project license, GPL v2 only. The resulting SPDX license identifier is: GPL-2.0-only Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-05-20  balloon: don't bother with dentry_operations  Al Viro  1  -5/+1
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2019-05-12  virtio/virtio_ring: do some comment fixes  Jiang Biao  1  -13/+14
There are lots of mismatches between the comments and the code; this patch fixes those comments. Signed-off-by: Jiang Biao <benbjiang@tencent.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2019-05-12  virtio_ring: Fix potential mem leak in virtqueue_add_indirect_packed  YueHaibing  1  -0/+1
'desc' should be freed before leaving the error handling path. Fixes: 1ce9e6055fa0 ("virtio_ring: introduce packed ring support") Signed-off-by: YueHaibing <yuehaibing@huawei.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com> Cc: stable@vger.kernel.org
2019-04-08  virtio: Honour 'may_reduce_num' in vring_create_virtqueue  Cornelia Huck  1  -0/+2
vring_create_virtqueue() allows the caller to specify via the may_reduce_num parameter whether the vring code is allowed to allocate a smaller ring than specified. However, the split ring allocation code tries to allocate a smaller ring on allocation failure regardless of what the caller specified. This may cause trouble for e.g. virtio-pci in legacy mode, which does not support ring resizing. (The packed ring code does not resize in any case.) Let's fix this by bailing out immediately in the split ring code if the requested size cannot be allocated and may_reduce_num has not been specified. While at it, fix a typo in the usage instructions. Fixes: 2a2d1382fe9d ("virtio: Add improved queue allocation API") Cc: stable@vger.kernel.org # v4.6+ Signed-off-by: Cornelia Huck <cohuck@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Halil Pasic <pasic@linux.ibm.com> Reviewed-by: Jens Freimann <jfreimann@redhat.com>
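A sketch of the split-ring allocation loop with the bail-out described above (simplified from the vring creation path):

    for (; num && vring_size(num, vring_align) > PAGE_SIZE; num /= 2) {
            queue = vring_alloc_queue(vdev, vring_size(num, vring_align),
                                      &dma_addr,
                                      GFP_KERNEL | __GFP_NOWARN | __GFP_ZERO);
            if (queue)
                    break;
            if (!may_reduce_num)
                    return NULL;    /* caller (e.g. legacy virtio-pci) cannot cope with a smaller ring */
    }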
2019-04-08  virtio_pci: fix a NULL pointer reference in vp_del_vqs  Longpeng  1  -3/+5
If allocation of msix_affinity_masks failed, we'll still try to free some resources in vp_free_vectors() that access it directly. We met the following stack in our production: [ 29.296767] BUG: unable to handle kernel NULL pointer dereference at (null) [ 29.311151] IP: [<ffffffffc04fe35a>] vp_free_vectors+0x6a/0x150 [virtio_pci] [ 29.324787] PGD 0 [ 29.333224] Oops: 0000 [#1] SMP [...] [ 29.425175] RIP: 0010:[<ffffffffc04fe35a>] [<ffffffffc04fe35a>] vp_free_vectors+0x6a/0x150 [virtio_pci] [ 29.441405] RSP: 0018:ffff9a55c2dcfa10 EFLAGS: 00010206 [ 29.453491] RAX: 0000000000000000 RBX: ffff9a55c322c400 RCX: 0000000000000000 [ 29.467488] RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffff9a55c322c400 [ 29.481461] RBP: ffff9a55c2dcfa20 R08: 0000000000000000 R09: ffffc1b6806ff020 [ 29.495427] R10: 0000000000000e95 R11: 0000000000aaaaaa R12: 0000000000000000 [ 29.509414] R13: 0000000000010000 R14: ffff9a55bd2d9e98 R15: ffff9a55c322c400 [ 29.523407] FS: 00007fdcba69f8c0(0000) GS:ffff9a55c2840000(0000) knlGS:0000000000000000 [ 29.538472] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 29.551621] CR2: 0000000000000000 CR3: 000000003ce52000 CR4: 00000000003607a0 [ 29.565886] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 [ 29.580055] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 [ 29.594122] Call Trace: [ 29.603446] [<ffffffffc04fe8a2>] vp_request_msix_vectors+0xe2/0x260 [virtio_pci] [ 29.618017] [<ffffffffc04fedc5>] vp_try_to_find_vqs+0x95/0x3b0 [virtio_pci] [ 29.632152] [<ffffffffc04ff117>] vp_find_vqs+0x37/0xb0 [virtio_pci] [ 29.645582] [<ffffffffc057bf63>] init_vq+0x153/0x260 [virtio_blk] [ 29.658831] [<ffffffffc057c1e8>] virtblk_probe+0xe8/0x87f [virtio_blk] [...] Cc: Gonglei <arei.gonglei@huawei.com> Signed-off-by: Longpeng <longpeng2@huawei.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Gonglei <arei.gonglei@huawei.com>
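The shape of the fix, as a sketch: guard the per-vector cleanup so it is skipped when the msix_affinity_masks array itself was never allocated (the exact context in vp_del_vqs may differ):

    if (vp_dev->msix_affinity_masks) {
            for (i = 0; i < vp_dev->msix_vectors; i++)
                    if (vp_dev->msix_affinity_masks[i])
                            free_cpumask_var(vp_dev->msix_affinity_masks[i]);
    }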
2019-03-06  virtio: hint if callbacks surprisingly might sleep  Cornelia Huck  1  -0/+2
A virtio transport is free to implement some of the callbacks in virtio_config_ops in a manner such that they cannot be called from atomic context (e.g. virtio-ccw, which maps a lot of the callbacks to channel I/O, which is an inherently asynchronous mechanism). This can be very surprising for developers using the much more common virtio-pci transport, just to find out that things break when used on s390. The documentation for virtio_config_ops now contains a comment explaining this, but it makes sense to add a might_sleep() annotation to various wrapper functions in the virtio core to avoid surprises later. Note that annotations are NOT added to two classes of calls: - direct calls from device drivers (all current callers should be fine, however) - calls which clearly won't be made from atomic context (such as those ultimately coming in via the driver core) Signed-off-by: Cornelia Huck <cohuck@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
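An illustrative wrapper showing the kind of annotation described above; the function name is a stand-in, but the ->get() callback shape matches virtio_config_ops:

    #include <linux/kernel.h>
    #include <linux/virtio.h>
    #include <linux/virtio_config.h>

    static inline u8 example_cread8(struct virtio_device *vdev, unsigned int offset)
    {
            u8 val;

            /* Document that config space access may sleep: some transports
             * (e.g. virtio-ccw) back ->get() with asynchronous channel I/O. */
            might_sleep();
            vdev->config->get(vdev, offset, &val, sizeof(val));
            return val;
    }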
2019-03-06  virtio_balloon: remove the unnecessary 0-initialization  Wei Wang  1  -1/+0
We've changed to kzalloc the vb struct, so no need to 0-initialize this field one more time. Signed-off-by: Wei Wang <wei.w.wang@intel.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com>
2019-03-06  virtio-balloon: improve update_balloon_size_func  Wei Wang  1  -1/+4
There is no need to update the balloon actual register when there is no ballooning request. This patch avoids update_balloon_size when diff is 0. Signed-off-by: Wei Wang <wei.w.wang@intel.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Halil Pasic <pasic@linux.ibm.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2019-03-06  virtio: Introduce virtio_max_dma_size()  Joerg Roedel  1  -0/+11
This function returns the maximum segment size for a single dma transaction of a virtio device. The possible limit comes from the SWIOTLB implementation in the Linux kernel, which has an upper limit of (currently) 256 kB of contiguous memory it can map. Other DMA-API implementations might also have limits. Use the new dma_max_mapping_size() function to determine the maximum mapping size when the DMA API is in use for virtio. Cc: stable@vger.kernel.org Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
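A minimal sketch of the idea, assuming 'dma_dev' is the struct device the transport maps buffers through and 'uses_dma_api' reflects whether the DMA API is in use (the in-tree helper derives both from the virtio_device):

    #include <linux/dma-mapping.h>
    #include <linux/kernel.h>

    static size_t example_max_dma_size(struct device *dma_dev, bool uses_dma_api)
    {
            size_t max_segment_size = SIZE_MAX;

            /* dma_max_mapping_size() reports limits such as the size of a
             * contiguous SWIOTLB bounce-buffer mapping. */
            if (uses_dma_api)
                    max_segment_size = dma_max_mapping_size(dma_dev);

            return max_segment_size;
    }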
2019-02-05  virtio: drop internal struct from UAPI  Michael S. Tsirkin  1  -1/+6
There's no reason to expose struct vring_packed in UAPI - if we do we won't be able to change or drop it, and it's not part of any interface. Let's move it to virtio_ring.c Cc: Tiwei Bie <tiwei.bie@intel.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2019-01-24  virtio: support VIRTIO_F_ORDER_PLATFORM  Tiwei Bie  1  -0/+8
This patch introduces the support for VIRTIO_F_ORDER_PLATFORM. If this feature is negotiated, the driver must use the barriers suitable for hardware devices. Otherwise, the device and driver are assumed to be implemented in software, that is they can be assumed to run on identical CPUs in an SMP configuration. Thus a weaker form of memory barriers is sufficient to yield better performance. It is recommended that an add-in card based PCI device offers this feature for portability. The device will fail to operate further or will operate in a slower emulation mode if this feature is offered but not accepted. Signed-off-by: Tiwei Bie <tiwei.bie@intel.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
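A sketch of the feature's effect on the ring code, assuming the ring keeps a weak_barriers-style flag as in the existing vring implementation:

    #include <linux/virtio.h>
    #include <linux/virtio_config.h>

    static bool example_use_weak_barriers(struct virtio_device *vdev)
    {
            /* Software devices share the SMP machine with the driver, so SMP
             * barriers suffice; a real hardware device needs DMA barriers. */
            return !virtio_has_feature(vdev, VIRTIO_F_ORDER_PLATFORM);
    }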
2019-01-14  virtio-balloon: tweak config_changed implementation  Wei Wang  1  -33/+65
virtio-ccw has deadlock issues with reading the config space inside the interrupt context, so we tweak the virtballoon_changed implementation by moving the config read operations into the related workqueue contexts. The config_read_bitmap is used as a flag to the workqueue callbacks about the related config fields that need to be read. The cmd_id_received is also renamed to cmd_id_received_cache, and the value should be obtained via virtio_balloon_cmd_id_received. Reported-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Wei Wang <wei.w.wang@intel.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Halil Pasic <pasic@linux.ibm.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Cc: stable@vger.kernel.org Fixes: 86a559787e6f ("virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_HINT") Tested-by: Christian Borntraeger <borntraeger@de.ibm.com>
2019-01-14  virtio: don't allocate vqs when names[i] = NULL  Wei Wang  1  -2/+7
Some vqs may not need to be allocated when their related feature bits are disabled, so callers may pass in such vqs with "names = NULL". Then we skip such vq allocations. Signed-off-by: Wei Wang <wei.w.wang@intel.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Cc: stable@vger.kernel.org Fixes: 86a559787e6f ("virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_HINT")
2019-01-14  virtio_pci: use queue idx instead of array idx to set up the vq  Wei Wang  1  -4/+4
When calling find_vqs, there will be no vq[i] allocation if its corresponding names[i] is NULL. For example, the caller may pass in names[i] (i=4) with names[2] being NULL because the related feature bit is turned off, so technically there are 3 queues on the device, and names[4] should correspond to the 3rd queue on the device. So we use queue_idx as the queue index, which is increased only when the queue exists. Signed-off-by: Wei Wang <wei.w.wang@intel.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
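A simplified sketch of the indexing scheme described above: 'i' walks the caller's arrays while 'queue_idx' only advances for queues that actually exist on the device (helper names follow the virtio-pci code, details trimmed):

    for (i = 0, queue_idx = 0; i < nvqs; i++) {
            if (!names[i]) {
                    vqs[i] = NULL;          /* feature disabled: no vq for this slot */
                    continue;
            }

            vqs[i] = vp_setup_vq(vdev, queue_idx++, callbacks[i], names[i],
                                 ctx ? ctx[i] : false, msix_vec);
            if (IS_ERR(vqs[i]))
                    goto error;
    }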
2019-01-02  Merge tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost  Linus Torvalds  1  -2/+4
Pull virtio/vhost updates from Michael Tsirkin: "Features, fixes, cleanups: - discard in virtio blk - misc fixes and cleanups" * tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost: vhost: correct the related warning message vhost: split structs into a separate header file virtio: remove deprecated VIRTIO_PCI_CONFIG() vhost/vsock: switch to a mutex for vhost_vsock_hash virtio_blk: add discard and write zeroes support
2018-12-19  virtio: remove deprecated VIRTIO_PCI_CONFIG()  Dongli Zhang  1  -2/+4
VIRTIO_PCI_CONFIG() is deprecated. Use VIRTIO_PCI_CONFIG_OFF() instead. Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2018-11-26  virtio_ring: advertize packed ring layout  Tiwei Bie  1  -0/+2
Advertize the packed ring layout support. Signed-off-by: Tiwei Bie <tiwei.bie@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-26  virtio_ring: leverage event idx in packed ring  Tiwei Bie  1  -6/+71
Leverage the EVENT_IDX feature in packed ring to suppress events when it's available. Signed-off-by: Tiwei Bie <tiwei.bie@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-26  virtio_ring: introduce packed ring support  Tiwei Bie  1  -30/+870
Introduce the packed ring support. A packed ring can only be created by vring_create_virtqueue() and each chunk of a packed ring will be allocated individually. A packed ring cannot currently be created on preallocated memory by vring_new_virtqueue() or the like. Signed-off-by: Tiwei Bie <tiwei.bie@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-26  virtio_ring: cache whether we will use DMA API  Tiwei Bie  1  -4/+8
Cache whether we will use DMA API, instead of doing the check every time. We are going to check whether DMA API is used more often in packed ring. Signed-off-by: Tiwei Bie <tiwei.bie@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-26  virtio_ring: extract split ring handling from ring creation  Tiwei Bie  1  -99/+121
Introduce a specific function to create the split ring. And also move the DMA allocation and size information to the .split sub-structure. Signed-off-by: Tiwei Bie <tiwei.bie@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-26  virtio_ring: allocate desc state for split ring separately  Tiwei Bie  1  -18/+27
Put the split ring's desc state into the .split sub-structure, and allocate the desc state for the split ring separately. This makes the code more readable and more consistent with what we will do for the packed ring. Signed-off-by: Tiwei Bie <tiwei.bie@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-26  virtio_ring: introduce helper for indirect feature  Tiwei Bie  1  -3/+13
Introduce a helper to check whether we will use indirect feature. It will be used by packed ring too. Signed-off-by: Tiwei Bie <tiwei.bie@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-26  virtio_ring: introduce debug helpers  Tiwei Bie  1  -22/+27
Introduce debug helpers for last_add_time update, check and invalidation. They will be used by the packed ring too. Signed-off-by: Tiwei Bie <tiwei.bie@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-26  virtio_ring: put split ring fields in a sub struct  Tiwei Bie  1  -65/+91
Put the split ring specific fields in a sub-struct named as "split" to avoid misuse after introducing packed ring. There is no functional change. Signed-off-by: Tiwei Bie <tiwei.bie@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-26  virtio_ring: put split ring functions together  Tiwei Bie  1  -254/+271
Put the xxx_split() functions together to make the code more readable and avoid misuse after introducing the packed ring. There is no functional change. Signed-off-by: Tiwei Bie <tiwei.bie@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-26  virtio_ring: add _split suffix for split ring functions  Tiwei Bie  1  -96/+155
Add _split suffix for split ring specific functions. This is a preparation for introducing the packed ring support. There is no functional change. Signed-off-by: Tiwei Bie <tiwei.bie@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-24  virtio-balloon: VIRTIO_BALLOON_F_PAGE_POISON  Wei Wang  1  -0/+10
The VIRTIO_BALLOON_F_PAGE_POISON feature bit is used to indicate if the guest is using page poisoning. Guest writes to the poison_val config field to tell host about the page poisoning value that is in use. Suggested-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Wei Wang <wei.w.wang@intel.com> Cc: Michael S. Tsirkin <mst@redhat.com> Cc: Michal Hocko <mhocko@suse.com> Cc: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2018-10-24  virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_HINT  Wei Wang  1  -33/+331
Negotiation of the VIRTIO_BALLOON_F_FREE_PAGE_HINT feature indicates the support of reporting hints of guest free pages to the host via virtio-balloon. Currently, only free page blocks of MAX_ORDER - 1 are reported. They are obtained one by one from the mm free list via the regular allocation function. The host requests the guest to report free page hints by sending a new cmd id to the guest via the free_page_report_cmd_id configuration register. When the guest starts to report, it first sends a start cmd to the host via the free page vq, which acks to the host the cmd id received. When the guest finishes reporting free pages, a stop cmd is sent to the host via the vq. The host may also send a stop cmd id to the guest to stop the reporting. VIRTIO_BALLOON_CMD_ID_STOP: Host sends this cmd to stop the guest reporting. VIRTIO_BALLOON_CMD_ID_DONE: Host sends this cmd to tell the guest that the reported pages are ready to be freed. Why does the guest free the reported pages only when the host tells it they are ready to be freed? This is because freeing pages appears to be expensive for live migration. free_pages() dirties memory very quickly and makes the live migration not converge in some cases. So it is better to delay the free_page operation until the migration is done, and the host sends a command to the guest about that. Why do we need the new VIRTIO_BALLOON_CMD_ID_DONE, instead of reusing VIRTIO_BALLOON_CMD_ID_STOP? This is because live migration is usually done in several rounds. At the end of each round, the host needs to send a VIRTIO_BALLOON_CMD_ID_STOP cmd to the guest to stop (or say pause) the reporting. The guest resumes the reporting when it receives a new command id at the beginning of the next round. So we need a new cmd id to distinguish between "stop reporting" and "ready to free the reported pages". TODO: - Add a batch page allocation API to amortize the allocation overhead. Signed-off-by: Wei Wang <wei.w.wang@intel.com> Signed-off-by: Liang Li <liang.z.li@intel.com> Cc: Michael S. Tsirkin <mst@redhat.com> Cc: Michal Hocko <mhocko@kernel.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
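Illustrative only: how the guest side might distinguish the two host commands described above. The struct and helper names are hypothetical stand-ins for the driver's real work-queue callbacks, and the macro values are shown purely for illustration:

    #include <linux/types.h>

    #define VIRTIO_BALLOON_CMD_ID_STOP   0   /* pause reporting for this round */
    #define VIRTIO_BALLOON_CMD_ID_DONE   1   /* reported pages may now be freed */

    struct guest_balloon;                                /* hypothetical driver state */
    void pause_reporting(struct guest_balloon *gb);      /* hypothetical helpers */
    void free_reported_pages(struct guest_balloon *gb);
    void start_reporting(struct guest_balloon *gb, u32 cmd_id);

    static void handle_free_page_cmd(struct guest_balloon *gb, u32 cmd_id)
    {
            switch (cmd_id) {
            case VIRTIO_BALLOON_CMD_ID_STOP:
                    /* End of a migration round: stop sending hints, keep pages. */
                    pause_reporting(gb);
                    break;
            case VIRTIO_BALLOON_CMD_ID_DONE:
                    /* Migration done: freeing the reported pages is cheap now. */
                    free_reported_pages(gb);
                    break;
            default:
                    /* A fresh cmd id: ack it with a start cmd on the free page
                     * vq, then report MAX_ORDER - 1 blocks until told to stop. */
                    start_reporting(gb, cmd_id);
                    break;
            }
    }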
2018-08-24  Merge tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost  Linus Torvalds  3  -62/+97
Pull virtio updates from Michael Tsirkin: "virtio, vhost: fixes, tweaks No new features but a bunch of tweaks such as switching balloon from oom notifier to shrinker" * tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost: vhost/scsi: increase VHOST_SCSI_PREALLOC_PROT_SGLS to 2048 vhost: allow vhost-scsi driver to be built-in virtio: pci-legacy: Validate queue pfn virtio: mmio-v1: Validate queue PFN virtio_balloon: replace oom notifier with shrinker virtio-balloon: kzalloc the vb struct virtio-balloon: remove BUG() in init_vqs
2018-08-22  virtio: pci-legacy: Validate queue pfn  Suzuki K Poulose  1  -2/+12
Legacy PCI over virtio uses a 32bit PFN for the queue. If the queue pfn is too large to fit in 32bits, which we could hit on arm64 systems with 52bit physical addresses (even with 64K page size), we simply miss out a proper link to the other side of the queue. Add a check to validate the PFN, rather than silently breaking the devices. Cc: "Michael S. Tsirkin" <mst@redhat.com> Cc: Jason Wang <jasowang@redhat.com> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Christoffer Dall <cdall@kernel.org> Cc: Peter Maydel <peter.maydell@linaro.org> Cc: Jean-Philippe Brucker <jean-philippe.brucker@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2018-08-22  virtio: mmio-v1: Validate queue PFN  Suzuki K Poulose  1  -2/+18
virtio-mmio with virtio-v1 uses a 32bit PFN for the queue. If the queue pfn is too large to fit in 32bits, which we could hit on arm64 systems with 52bit physical addresses (even with 64K page size), we simply miss out a proper link to the other side of the queue. Add a check to validate the PFN, rather than silently breaking the devices. Cc: "Michael S. Tsirkin" <mst@redhat.com> Cc: Jason Wang <jasowang@redhat.com> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Christoffer Dall <cdall@kernel.org> Cc: Peter Maydel <peter.maydell@linaro.org> Cc: Jean-Philippe Brucker <jean-philippe.brucker@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
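A sketch of the check both of these patches add, shown in the mmio-v1 flavour; the error label and surrounding context are illustrative, the core test is simply whether the queue PFN fits in 32 bits:

    u64 q_pfn = virtqueue_get_desc_addr(vq) >> PAGE_SHIFT;

    if (q_pfn >> 32) {
            dev_err(&vdev->dev,
                    "virtqueue PFN does not fit in the 32-bit queue register\n");
            err = -E2BIG;
            goto error_del_vq;      /* tear down the queue we just created */
    }

    writel((u32)q_pfn, base + VIRTIO_MMIO_QUEUE_PFN);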
2018-08-22  virtio_balloon: replace oom notifier with shrinker  Wei Wang  1  -51/+59
The OOM notifier is getting deprecated for the following reasons: - As a callout from the oom context, it is too subtle and easy to generate bugs and corner cases which are hard to track; - It is called too late (after the reclaiming has been performed). Drivers with a large amount of reclaimable memory are expected to release it at an early stage of memory pressure; - The notifier callback isn't aware of OOM constraints; Link: https://lkml.org/lkml/2018/7/12/314 This patch replaces the virtio-balloon oom notifier with a shrinker to release balloon pages on memory pressure. The balloon pages are given back to the mm adaptively by returning the number of pages that the reclaimer is asking for (i.e. sc->nr_to_scan). Currently the max possible value of sc->nr_to_scan passed to the balloon shrinker is SHRINK_BATCH, which is 128. This is smaller than the limitation that only VIRTIO_BALLOON_ARRAY_PFNS_MAX (256) pages can be returned via one invocation of leak_balloon. But this patch still considers the case that SHRINK_BATCH or shrinker->batch could be changed to a value larger than VIRTIO_BALLOON_ARRAY_PFNS_MAX, which will need multiple invocations of leak_balloon. Historically, the feature VIRTIO_BALLOON_F_DEFLATE_ON_OOM has been used to release balloon pages on OOM. We continue to use this feature bit for the shrinker, so the shrinker is only registered when this feature bit has been negotiated with the host. Signed-off-by: Wei Wang <wei.w.wang@intel.com> Cc: Michael S. Tsirkin <mst@redhat.com> Cc: Michal Hocko <mhocko@kernel.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
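A sketch of the shrinker wiring described above; balloon_pages_held() and leak_balloon_pages() are hypothetical stand-ins for the driver's accounting and deflate helpers:

    #include <linux/shrinker.h>

    unsigned long balloon_pages_held(void);              /* hypothetical */
    unsigned long leak_balloon_pages(unsigned long n);   /* hypothetical */

    static unsigned long balloon_shrink_count(struct shrinker *s,
                                              struct shrink_control *sc)
    {
            return balloon_pages_held();
    }

    static unsigned long balloon_shrink_scan(struct shrinker *s,
                                             struct shrink_control *sc)
    {
            /* Give back what the reclaimer asks for (sc->nr_to_scan, at most
             * SHRINK_BATCH per call), possibly via several deflate rounds. */
            return leak_balloon_pages(sc->nr_to_scan);
    }

    static struct shrinker balloon_shrinker = {
            .count_objects = balloon_shrink_count,
            .scan_objects  = balloon_shrink_scan,
            .seeks         = DEFAULT_SEEKS,
    };

    /* Registered with register_shrinker(&balloon_shrinker) only when
     * VIRTIO_BALLOON_F_DEFLATE_ON_OOM has been negotiated. */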
2018-08-22  virtio-balloon: kzalloc the vb struct  Wei Wang  1  -4/+1
Zero all the vb fields at allocation, so that we don't need to zero-initialize each field one by one later. Signed-off-by: Wei Wang <wei.w.wang@intel.com> Cc: Michael S. Tsirkin <mst@redhat.com> Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2018-08-22  virtio-balloon: remove BUG() in init_vqs  Wei Wang  1  -3/+7
It's a bit overkill to use BUG when failing to add an entry to the stats_vq in init_vqs. So remove it and just return the error to the caller to bail out nicely. Signed-off-by: Wei Wang <wei.w.wang@intel.com> Cc: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2018-08-11  virtio: Make vp_set_vq_affinity() take a mask.  Caleb Raitto  2  -5/+4
Make vp_set_vq_affinity() take a cpumask instead of taking a single CPU. If there are fewer queues than cores, queue affinity should be able to map to multiple cores. Link: https://patchwork.ozlabs.org/patch/948149/ Suggested-by: Willem de Bruijn <willemb@google.com> Signed-off-by: Caleb Raitto <caraitto@google.com> Acked-by: Gonglei <arei.gonglei@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-07-30  virtio_balloon: fix another race between migration and ballooning  Jiang Biao  1  -0/+2
Kernel panic under high memory pressure; the calltrace looks like: PID: 21439 TASK: ffff881be3afedd0 CPU: 16 COMMAND: "java" #0 [ffff881ec7ed7630] machine_kexec at ffffffff81059beb #1 [ffff881ec7ed7690] __crash_kexec at ffffffff81105942 #2 [ffff881ec7ed7760] crash_kexec at ffffffff81105a30 #3 [ffff881ec7ed7778] oops_end at ffffffff816902c8 #4 [ffff881ec7ed77a0] no_context at ffffffff8167ff46 #5 [ffff881ec7ed77f0] __bad_area_nosemaphore at ffffffff8167ffdc #6 [ffff881ec7ed7838] __node_set at ffffffff81680300 #7 [ffff881ec7ed7860] __do_page_fault at ffffffff8169320f #8 [ffff881ec7ed78c0] do_page_fault at ffffffff816932b5 #9 [ffff881ec7ed78f0] page_fault at ffffffff8168f4c8 [exception RIP: _raw_spin_lock_irqsave+47] RIP: ffffffff8168edef RSP: ffff881ec7ed79a8 RFLAGS: 00010046 RAX: 0000000000000246 RBX: ffffea0019740d00 RCX: ffff881ec7ed7fd8 RDX: 0000000000020000 RSI: 0000000000000016 RDI: 0000000000000008 RBP: ffff881ec7ed79a8 R8: 0000000000000246 R9: 000000000001a098 R10: ffff88107ffda000 R11: 0000000000000000 R12: 0000000000000000 R13: 0000000000000008 R14: ffff881ec7ed7a80 R15: ffff881be3afedd0 ORIG_RAX: ffffffffffffffff CS: 0010 SS: 0018 It happens in the pagefault handler and results in a double pagefault while compacting pages when memory allocation fails. Analysing the vmcore shows that the page leading to the second pagefault is corrupted with _mapcount=-256, but private=0. It's caused by the race between migration and ballooning, and a missing lock in virtballoon_migratepage() of the virtio_balloon driver. This patch fixes the bug. Fixes: e22504296d4f64f ("virtio_balloon: introduce migration primitives to balloon pages") Cc: stable@vger.kernel.org Signed-off-by: Jiang Biao <jiang.biao2@zte.com.cn> Signed-off-by: Huang Chong <huang.chong@zte.com.cn> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>