path: root/drivers/s390/cio/vfio_ccw_cp.c
2022-08-01  vfio/ccw: Add length to DMA_UNMAP checks  (Eric Farman; 1 file, -5/+11)
As pointed out with the simplification of the VFIO_IOMMU_NOTIFY_DMA_UNMAP notifier [1], the length parameter was never used to check against the pinned pages. Let's correct that, and see if a page is within the affected range instead of simply the first page of the range. [1] https://lore.kernel.org/kvm/20220720170457.39cda0d0.alex.williamson@redhat.com/ Signed-off-by: Eric Farman <farman@linux.ibm.com> Reviewed-by: Matthew Rosato <mjrosato@linux.ibm.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Link: https://lore.kernel.org/r/20220728204914.2420989-2-farman@linux.ibm.com Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
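A minimal sketch of the kind of range check the patch describes, assuming a driver-side list of pinned guest iovas; the helper name and parameters below are illustrative, not the upstream code:

    #include <linux/types.h>
    #include <linux/mm.h>   /* PAGE_SHIFT */

    /* Return true if any pinned iova falls inside [iova, iova + length). */
    static bool any_page_pinned_in_range(const u64 *pinned_iova, int nr_pinned,
                                         u64 iova, u64 length)
    {
        u64 start = iova >> PAGE_SHIFT;
        u64 npages = length >> PAGE_SHIFT;
        int i;

        for (i = 0; i < nr_pinned; i++) {
            u64 pfn = pinned_iova[i] >> PAGE_SHIFT;

            /* Previously only the first page of the range was compared. */
            if (pfn >= start && pfn < start + npages)
                return true;
        }
        return false;
    }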
2022-07-25  vfio: Replace phys_pfn with pages for vfio_pin_pages()  (Nicolin Chen; 1 file, -10/+9)
Most of the callers of vfio_pin_pages() want "struct page *", and the low-level mm code that pins the pages returns a list of "struct page *" too, so there is no gain in converting "struct page *" to PFN in between. Replace the output parameter "phys_pfn" list with a "pages" list to simplify callers. This also allows us to replace the vfio_iommu_type1 implementation with a more efficient one. Also drop the pfn_valid check in the gvt code, as there is no need for such a check on a page-backed struct page pointer. For now, update vfio_iommu_type1 to fit this new parameter too. Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Kirti Wankhede <kwankhede@nvidia.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Acked-by: Eric Farman <farman@linux.ibm.com> Tested-by: Terrence Xu <terrence.xu@intel.com> Tested-by: Eric Farman <farman@linux.ibm.com> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com> Link: https://lore.kernel.org/r/20220723020256.30081-11-nicolinc@nvidia.com Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
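A hedged sketch of what a caller looks like after this rework; the prototype shape (device, iova, npage, prot, pages) follows the description above but may differ in detail from the tree you are reading:

    #include <linux/vfio.h>
    #include <linux/iommu.h>    /* IOMMU_READ / IOMMU_WRITE */
    #include <linux/mm_types.h>

    /* Pin one guest page and receive its struct page back directly. */
    static int pin_one_guest_page(struct vfio_device *vdev, dma_addr_t iova,
                                  struct page **page)
    {
        int ret = vfio_pin_pages(vdev, iova, 1, IOMMU_READ | IOMMU_WRITE, page);

        if (ret < 0)
            return ret;
        return ret == 1 ? 0 : -EFAULT;  /* expect exactly one page pinned */
    }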
2022-07-25  vfio/ccw: Add kmap_local_page() for memcpy  (Nicolin Chen; 1 file, -3/+6)
A PFN is not enough to promise that the memory is not IO, and a direct access via memcpy(), which only handles CPU memory, will crash on S390 if the PFN is an IO PFN, since we would have to use memcpy_to/fromio(), which uses the special S390 IO access instructions. On the other hand, a "struct page *" is always a CPU-coherent thing that fits memcpy(). Also, casting a PFN to "void *" for memcpy() is not proper practice; kmap_local_page() is the correct API to call here, even though S390 doesn't use highmem, which means kmap_local_page() is a NOP. A following patch changes the vfio_pin_pages() API to return a list of "struct page *" instead of PFNs. It will block any IO memory from ever getting into this call path, for exactly this reason. In this patch, add kmap_local_page() to prepare for that. Suggested-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Acked-by: Eric Farman <farman@linux.ibm.com> Tested-by: Eric Farman <farman@linux.ibm.com> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com> Link: https://lore.kernel.org/r/20220723020256.30081-10-nicolinc@nvidia.com Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
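The copy pattern being introduced, as a simplified sketch (function name and parameters are illustrative); on s390 there is no highmem, so kmap_local_page() costs essentially nothing:

    #include <linux/highmem.h>
    #include <linux/string.h>

    /* Copy @len bytes out of a pinned page through a local kernel mapping. */
    static void copy_from_pinned_page(void *to, struct page *page,
                                      unsigned int offset, unsigned int len)
    {
        void *from = kmap_local_page(page); /* a CPU mapping, never IO */

        memcpy(to, from + offset, len);
        kunmap_local(from);
    }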
2022-07-25  vfio/ccw: Change pa_pfn list to pa_iova list  (Nicolin Chen; 1 file, -71/+64)
The vfio_ccw_cp code maintains both iova and its PFN list because the vfio_pin/unpin_pages API wanted pfn list. Since vfio_pin/unpin_pages() now accept "iova", change to maintain only pa_iova list and rename all "pfn_array" strings to "page_array", so as to simplify the code. Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Eric Farman <farman@linux.ibm.com> Tested-by: Eric Farman <farman@linux.ibm.com> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com> Link: https://lore.kernel.org/r/20220723020256.30081-8-nicolinc@nvidia.com Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
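For orientation, the renamed bookkeeping structure presumably ends up with roughly this shape after this and the related patches above; the field names are an assumption based on the commit text, not a copy of the driver source:

    #include <linux/types.h>
    #include <linux/mm_types.h>

    /* Assumed shape of the renamed "page_array" structure. */
    struct page_array {
        u64          *pa_iova;   /* guest I/O addresses, one per page */
        struct page  **pa_page;  /* backing pages once pinned */
        int          pa_nr;      /* number of entries in the arrays above */
    };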
2022-07-25  vfio: Pass in starting IOVA to vfio_pin/unpin_pages API  (Nicolin Chen; 1 file, -2/+2)
The vfio_pin/unpin_pages() so far accepted arrays of PFNs of user IOVA. Among all three callers, there was only one caller possibly passing in a non-contiguous PFN list, which is now ensured to have contiguous PFN inputs too. Pass in the starting address with "iova" alone to simplify things, so callers no longer need to maintain a PFN list or to pin/unpin one page at a time. This also allows VFIO to use more efficient implementations of pin/unpin_pages. For now, also update vfio_iommu_type1 to fit this new parameter too, while keeping its input intact (being user_iova) since we don't want to spend too much effort swapping its parameters and local variables at that level. Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Kirti Wankhede <kwankhede@nvidia.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Reviewed-by: Tony Krowiak <akrowiak@linux.ibm.com> Acked-by: Eric Farman <farman@linux.ibm.com> Tested-by: Terrence Xu <terrence.xu@intel.com> Tested-by: Eric Farman <farman@linux.ibm.com> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com> Link: https://lore.kernel.org/r/20220723020256.30081-6-nicolinc@nvidia.com Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
2022-07-23  vfio/ccw: Only pass in contiguous pages  (Nicolin Chen; 1 file, -14/+56)
This driver is the only caller of vfio_pin/unpin_pages that might pass in a non-contiguous PFN list, but in many cases it has a contiguous PFN list to process. So letting VFIO API handle a non-contiguous PFN list is actually counterproductive. Add a pair of simple loops to pass in contiguous PFNs only, to have an efficient implementation in VFIO. Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Eric Farman <farman@linux.ibm.com> Tested-by: Eric Farman <farman@linux.ibm.com> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com> Link: https://lore.kernel.org/r/20220723020256.30081-5-nicolinc@nvidia.com Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
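To illustrate the idea, a sketch of a pinning loop that walks the iova list and hands only contiguous runs to VFIO; it is written against the iova-based vfio_pin_pages() from the later patches in this series, and all names are illustrative rather than the upstream code:

    #include <linux/vfio.h>
    #include <linux/iommu.h>
    #include <linux/mm.h>

    static int pin_contiguous_runs(struct vfio_device *vdev, u64 *pa_iova,
                                   struct page **pa_page, int pa_nr)
    {
        int pinned = 0;

        while (pinned < pa_nr) {
            int npage = 1, ret;

            /* Grow the run while the next guest address is adjacent. */
            while (pinned + npage < pa_nr &&
                   pa_iova[pinned + npage] ==
                            pa_iova[pinned] + npage * PAGE_SIZE)
                npage++;

            ret = vfio_pin_pages(vdev, pa_iova[pinned], npage,
                                 IOMMU_READ | IOMMU_WRITE,
                                 &pa_page[pinned]);
            if (ret <= 0)
                return ret ? ret : -EINVAL;
            pinned += ret;  /* a short pin simply restarts the outer loop */
        }
        return pinned;
    }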
2022-05-11  vfio/mdev: Pass in a struct vfio_device * to vfio_pin/unpin_pages()  (Jason Gunthorpe; 1 file, -3/+3)
Every caller has a readily available vfio_device pointer, use that instead of passing in a generic struct device. The struct vfio_device already contains the group we need so this avoids complexity, extra refcountings, and a confusing lifecycle model. Reviewed-by: Christoph Hellwig <hch@lst.de> Acked-by: Eric Farman <farman@linux.ibm.com> Reviewed-by: Jason J. Herne <jjherne@linux.ibm.com> Reviewed-by: Tony Krowiak <akrowiak@linux.ibm.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Link: https://lore.kernel.org/r/3-v4-8045e76bf00b+13d-vfio_mdev_no_group_jgg@nvidia.com Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
2022-05-11  vfio/ccw: Remove mdev from struct channel_program  (Jason Gunthorpe; 1 file, -19/+28)
The next patch wants the vfio_device instead. There is no reason to store a pointer here since we can container_of back to the vfio_device. Reviewed-by: Eric Farman <farman@linux.ibm.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Link: https://lore.kernel.org/r/2-v4-8045e76bf00b+13d-vfio_mdev_no_group_jgg@nvidia.com Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
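A sketch of the container_of() pattern the commit message refers to; the private-structure layout shown is simplified and assumed, not quoted from the driver:

    #include <linux/container_of.h>
    #include <linux/vfio.h>

    struct channel_program {            /* simplified, assumed layout */
        bool initialized;
        /* ... */
    };

    struct vfio_ccw_private {           /* simplified, assumed layout */
        struct vfio_device     vdev;
        struct channel_program cp;
        /* ... */
    };

    /* Recover the vfio_device from an embedded channel_program. */
    static struct vfio_device *cp_to_vdev(struct channel_program *cp)
    {
        struct vfio_ccw_private *private =
            container_of(cp, struct vfio_ccw_private, cp);

        return &private->vdev;
    }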
2021-05-12  vfio-ccw: Check initialized flag in cp_init()  (Eric Farman; 1 file, -0/+4)
We have a really nice flag in the channel_program struct that indicates if it had been initialized by cp_init(), and use it as a guard in the other cp accessor routines, but not for a duplicate call into cp_init(). The possibility of this occurring is low, because that flow is protected by the private->io_mutex and FSM CP_PROCESSING state. But then why bother checking it in (for example) cp_prefetch() then? Let's just be consistent and check for that in cp_init() too. Fixes: 71189f263f8a3 ("vfio-ccw: make it safe to access channel programs") Signed-off-by: Eric Farman <farman@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Acked-by: Matthew Rosato <mjrosato@linux.ibm.com> Message-Id: <20210511195631.3995081-2-farman@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
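A sketch of the guard being added; the function signature is simplified here and the exact return value upstream may differ:

    #include <linux/errno.h>

    struct channel_program {
        bool initialized;
        /* ... */
    };

    /* Refuse to re-initialize an already-initialized channel program. */
    static int cp_init(struct channel_program *cp)
    {
        if (cp->initialized)
            return -EBUSY;      /* a bug in the caller, as noted above */

        /* ... existing parsing/allocation elided ... */

        cp->initialized = true;
        return 0;
    }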
2020-06-02  vfio-ccw: Enable transparent CCW IPL from DASD  (Jared Rossi; 1 file, -7/+12)
Remove the explicit prefetch check when using vfio-ccw devices. This check does not trigger in practice as all Linux channel programs are intended to use prefetch. It is expected that all ORBs issued by Linux will request prefetch. Although non-prefetching ORBs are not rejected, they will prefetch nonetheless. A warning is issued up to once per 5 seconds when a forced prefetch occurs. A non-prefetch ORB does not necessarily result in an error, however frequent encounters with non-prefetch ORBs indicate that channel programs are being executed in a way that is inconsistent with what the guest is requesting. While there is currently no known case of an error caused by forced prefetch, it is possible in theory that forced prefetch could result in an error if applied to a channel program that is dependent on non-prefetch. Signed-off-by: Jared Rossi <jrossi@linux.ibm.com> Reviewed-by: Eric Farman <farman@linux.ibm.com> Message-Id: <20200506212440.31323-2-jrossi@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-07-15  vfio-ccw: Set pa_nr to 0 if memory allocation fails for pa_iova_pfn  (Farhan Ali; 1 file, -1/+3)
So we don't try to call vfio_unpin_pages() incorrectly. Fixes: 0a19e61e6d4c ("vfio: ccw: introduce channel program interfaces") Signed-off-by: Farhan Ali <alifm@linux.ibm.com> Reviewed-by: Eric Farman <farman@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Message-Id: <33a89467ad6369196ae6edf820cbcb1e2d8d050c.1562854091.git.alifm@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-07-15  vfio-ccw: Fix memory leak and don't call cp_free in cp_init  (Farhan Ali; 1 file, -4/+7)
We don't set cp->initialized to true so calling cp_free will just return and not do anything. Also fix a memory leak where we fail to free a ccwchain on an error. Fixes: 812271b910 ("s390/cio: Squash cp_free() and cp_unpin_free()") Signed-off-by: Farhan Ali <alifm@linux.ibm.com> Message-Id: <3173c4216f4555d9765eb6e4922534982bc820e4.1562854091.git.alifm@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Eric Farman <farman@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-07-15  vfio-ccw: Fix misleading comment when setting orb.cmd.c64  (Farhan Ali; 1 file, -6/+7)
The comment is misleading because it tells us that we should set orb.cmd.c64 before calling ccwchain_calc_length, otherwise the function ccwchain_calc_length would return an error. This is not completely accurate. We want to allow an orb without cmd.c64, and this is fine as long as the channel program does not use IDALs. But we do want to reject any channel program that uses IDALs and does not set the flag, which is what we do in ccwchain_calc_length. After we have done the ccw processing, we need to set cmd.c64, as we use IDALs for all translated channel programs. Also for better code readability let's move the setting of cmd.c64 within the non error path. Fixes: fb9e7880af35 ("vfio: ccw: push down unsupported IDA check") Signed-off-by: Farhan Ali <alifm@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Message-Id: <f68636106aef0faeb6ce9712584d102d1b315ff8.1562854091.git.alifm@linux.ibm.com> Reviewed-by: Eric Farman <farman@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-07-05  vfio-ccw: Fix the conversion of Format-0 CCWs to Format-1  (Eric Farman; 1 file, -1/+1)
When processing Format-0 CCWs, we use the "len" variable as the number of CCWs to convert to Format-1. But that variable contains zero here, and is not a meaningful CCW count until ccwchain_calc_length() returns. Since that routine requires and expects Format-1 CCWs to identify the chaining behavior, the format conversion must be done first. Convert the 2KB we copied even if it's more than we need. Fixes: 7f8e89a8f2fd ("vfio-ccw: Factor out the ccw0-to-ccw1 transition") Reported-by: Farhan Ali <alifm@linux.ibm.com> Signed-off-by: Eric Farman <farman@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Message-Id: <20190702180928.18113-1-farman@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-06-24  vfio-ccw: make convert_ccw0_to_ccw1 static  (Cornelia Huck; 1 file, -1/+1)
Reported by sparse. Fixes: 7f8e89a8f2fd ("vfio-ccw: Factor out the ccw0-to-ccw1 transition") Signed-off-by: Cornelia Huck <cohuck@redhat.com> Message-Id: <20190624090721.16241-1-cohuck@redhat.com> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2019-06-21  vfio-ccw: Remove copy_ccw_from_iova()  (Eric Farman; 1 file, -12/+2)
Just to keep things tidy. Signed-off-by: Eric Farman <farman@linux.ibm.com> Message-Id: <20190618202352.39702-6-farman@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Farhan Ali <alifm@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-06-21  vfio-ccw: Factor out the ccw0-to-ccw1 transition  (Eric Farman; 1 file, -23/+25)
This is a really useful function, but it's buried in the copy_ccw_from_iova() routine so that ccwchain_calc_length() can just work with Format-1 CCWs while doing its counting. But it means we're translating a full 2K of "CCWs" to Format-1, when in reality there's probably far fewer in that space. Let's factor it out, so maybe we can do something with it later. Signed-off-by: Eric Farman <farman@linux.ibm.com> Message-Id: <20190618202352.39702-5-farman@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Farhan Ali <alifm@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-06-21  vfio-ccw: Copy CCW data outside length calculation  (Eric Farman; 1 file, -12/+7)
It doesn't make much sense to "hide" the copy to the channel_program struct inside a routine that calculates the length of the chain. Let's move it to the calling routine, which will later copy from channel_program to the memory it allocated itself. Signed-off-by: Eric Farman <farman@linux.ibm.com> Message-Id: <20190618202352.39702-4-farman@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Farhan Ali <alifm@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-06-21  vfio-ccw: Skip second copy of guest cp to host  (Eric Farman; 1 file, -7/+3)
We already pinned/copied/unpinned 2K (256 CCWs) of guest memory to the host space anchored off vfio_ccw_private. There's no need to do that again once we have the length calculated, when we could just copy the section we need to the "permanent" space for the I/O. Signed-off-by: Eric Farman <farman@linux.ibm.com> Message-Id: <20190618202352.39702-3-farman@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Farhan Ali <alifm@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-06-21  vfio-ccw: Move guest_cp storage into common struct  (Eric Farman; 1 file, -19/+4)
Rather than allocating/freeing a piece of memory every time we try to figure out how long a CCW chain is, let's use a piece of memory allocated for each device. The io_mutex added with commit 4f76617378ee9 ("vfio-ccw: protect the I/O region") is held for the duration of the VFIO_CCW_EVENT_IO_REQ event that accesses/uses this space, so there should be no race concerns with another CPU attempting an (unexpected) SSCH for the same device. Suggested-by: Cornelia Huck <cohuck@redhat.com> Signed-off-by: Eric Farman <farman@linux.ibm.com> Message-Id: <20190618202352.39702-2-farman@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Farhan Ali <alifm@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-06-17  s390/cio: Combine direct and indirect CCW paths  (Eric Farman; 1 file, -76/+39)
With both the direct-addressed and indirect-addressed CCW paths simplified to this point, the amount of shared code between them is (hopefully) more easily visible. Move the processing of IDA-specific bits into the direct-addressed path, and add some useful commentary of what the individual pieces are doing. This allows us to remove the entire ccwchain_fetch_idal() routine and maintain a single function for any non-TIC CCW. Signed-off-by: Eric Farman <farman@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Message-Id: <20190606202831.44135-10-farman@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-06-17  vfio-ccw: Rearrange IDAL allocation in direct CCW  (Eric Farman; 1 file, -10/+15)
This is purely deck furniture, to help understand the merge of the direct and indirect handlers. Signed-off-by: Eric Farman <farman@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Message-Id: <20190606202831.44135-9-farman@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-06-17  vfio-ccw: Remove pfn_array_table  (Eric Farman; 1 file, -85/+33)
Now that both CCW codepaths build this nested array:

  ccwchain->pfn_array_table[1]->pfn_array[#idaws/#pages]

We can collapse this into simply:

  ccwchain->pfn_array[#idaws/#pages]

Let's do that, so that we don't have to continually navigate two nested arrays when the first array always has a count of one. Signed-off-by: Eric Farman <farman@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Message-Id: <20190606202831.44135-8-farman@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-06-17  vfio-ccw: Adjust the first IDAW outside of the nested loops  (Eric Farman; 1 file, -2/+3)
Now that pfn_array_table[] is always an array of 1, it seems silly to check for the very first entry in an array in the middle of two nested loops, since we know it'll only ever happen once. Let's move this outside the loops to simplify things, even though the "k" variable is still necessary. Signed-off-by: Eric Farman <farman@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Message-Id: <20190606202831.44135-7-farman@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-06-17  vfio-ccw: Rearrange pfn_array and pfn_array_table arrays  (Eric Farman; 1 file, -15/+11)
While processing a channel program, we currently have two nested arrays that carry a slightly different structure. The direct CCW path creates this:

  ccwchain->pfn_array_table[1]->pfn_array[#pages]

while an IDA CCW creates:

  ccwchain->pfn_array_table[#idaws]->pfn_array[1]

The distinction appears to state that each pfn_array_table entry points to an array of contiguous pages, represented by a pfn_array, um, array. Since the direct-addressed scenario can ONLY represent contiguous pages, it makes the intermediate array necessary but difficult to recognize. Meanwhile, since an IDAL can contain non-contiguous pages and there is no logic in vfio-ccw to detect adjacent IDAWs, it is the second array that is necessary but appears to be superfluous.

I am not aware of any documentation that states the pfn_array[] needs to be of contiguous pages; it is just what the code does today. I don't see any reason for this either, so let's just flip the IDA codepath around so that it generates:

  ccwchain->pfn_array_table[1]->pfn_array[#idaws]

This will bring it in line with the direct-addressed codepath, so that we can understand the behavior of this memory regardless of what type of CCW is being processed. And it means the casual observer does not need to know/care whether the pfn_array[] represents contiguous pages or not.

NB: The existing vfio-ccw code only supports 4K-block Format-2 IDAs, so that "#pages" == "#idaws" in this area. This means that we will have difficulty with this overlap in terminology if support for Format-1 or 2K-block Format-2 IDAs is ever added. I don't think that this patch changes our ability to make that distinction.

Signed-off-by: Eric Farman <farman@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Message-Id: <20190606202831.44135-6-farman@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-06-17  s390/cio: Use generalized CCW handler in cp_init()  (Eric Farman; 1 file, -23/+4)
It is now pretty apparent that ccwchain_handle_ccw() (nee ccwchain_handle_tic()) does everything that cp_init() wants to do. Let's remove that duplicated code from cp_init() and let ccwchain_handle_ccw() handle it itself. Signed-off-by: Eric Farman <farman@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Message-Id: <20190606202831.44135-5-farman@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-06-17  s390/cio: Generalize the TIC handler  (Eric Farman; 1 file, -5/+6)
Refactor ccwchain_handle_tic() into a routine that handles a channel program address (which itself is a CCW pointer), rather than a CCW pointer that is only a TIC CCW. This will make it easier to reuse this code for other CCW commands. Signed-off-by: Eric Farman <farman@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Message-Id: <20190606202831.44135-4-farman@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-06-17  s390/cio: Refactor the routine that handles TIC CCWs  (Eric Farman; 1 file, -4/+4)
Extract the "does the target of this TIC already exist?" check from ccwchain_handle_tic(), so that it's easier to refactor that function into one that cp_init() is able to use. Signed-off-by: Eric Farman <farman@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Message-Id: <20190606202831.44135-3-farman@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-06-17  s390/cio: Squash cp_free() and cp_unpin_free()  (Eric Farman; 1 file, -20/+16)
The routine cp_free() does nothing but call cp_unpin_free(), and while most places call cp_free() there is one caller of cp_unpin_free() used when the cp is guaranteed to have not been marked initialized. This seems like a dubious way to make a distinction, so let's combine these routines and make cp_free() do all the work. Signed-off-by: Eric Farman <farman@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Message-Id: <20190606202831.44135-2-farman@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-06-03  s390/cio: Remove vfio-ccw checks of command codes  (Eric Farman; 1 file, -6/+5)
If the CCW being processed is a No-Operation, then by definition no data is being transferred. Let's fold those checks into the normal CCW processors, rather than skipping out early.

Likewise, if the CCW being processed is a "test" (a category defined here as an opcode that contains zero in the lowest four bits) then no special processing is necessary as far as vfio-ccw is concerned. These command codes have not been valid since the S/370 days, meaning they are invalid in the same way as one that ends in an eight [1] or an otherwise valid command code that is undefined for the device type in question. Considering that, let's just process "test" CCWs like any other CCW, and send everything to the hardware.

[1] POPS states that a x08 is a TIC CCW, and that having any high-order bits enabled is invalid for format-1 CCWs. For format-0 CCWs, the high-order bits are ignored.

Signed-off-by: Eric Farman <farman@linux.ibm.com> Message-Id: <20190516161403.79053-4-farman@linux.ibm.com> Acked-by: Farhan Ali <alifm@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
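The opcode tests described above boil down to a couple of small predicates; a sketch, with the values taken from the text (x03 no-op, low nibble zero for "test", x08 TIC) and illustrative macro names:

    #include <asm/cio.h>    /* struct ccw1 (s390) */
    #include <linux/types.h>

    #define CCW_CMD_NOOP  0x03
    #define CCW_CMD_TIC   0x08

    static inline bool ccw_is_noop(const struct ccw1 *ccw)
    {
        return ccw->cmd_code == CCW_CMD_NOOP;
    }

    /* "Test" CCWs: command codes whose lowest four bits are zero. */
    static inline bool ccw_is_test(const struct ccw1 *ccw)
    {
        return (ccw->cmd_code & 0x0f) == 0;
    }

    static inline bool ccw_is_tic(const struct ccw1 *ccw)
    {
        return ccw->cmd_code == CCW_CMD_TIC;
    }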
2019-06-03  s390/cio: Allow zero-length CCWs in vfio-ccw  (Eric Farman; 1 file, -18/+12)
It is possible that a guest might issue a CCW with a length of zero, and will expect a particular response. Consider this chain:

     Address   Format-1 CCW
     --------  -----------------
  0  33110EC0  346022CC 33177468
  1  33110EC8  CF200000 3318300C

CCW[0] moves a little more than two pages, but also has the Suppress Length Indication (SLI) bit set to handle the expectation that considerably less data will be moved. CCW[1] also has the SLI bit set, and has a length of zero. Once vfio-ccw does its magic, the kernel issues a start subchannel on behalf of the guest with this:

     Address   Format-1 CCW
     --------  -----------------
  0  021EDED0  346422CC 021F0000
  1  021EDED8  CF240000 3318300C

Both CCWs were converted to an IDAL and have the corresponding flags set (which is by design), but only the address of the first data address is converted to something the host is aware of. The second CCW still has the address used by the guest, which happens to be (A) (probably) an invalid address for the host, and (B) an invalid IDAW address (doubleword boundary, etc.). While the I/O fails, it doesn't fail correctly. In this example, we would receive a program check for an invalid IDAW address, instead of a unit check for an invalid command.

To fix this, revert commit 4cebc5d6a6ff ("vfio: ccw: validate the count field of a ccw before pinning") and allow the individual fetch routines to process them like anything else. We'll make a slight adjustment to our allocation of the pfn_array (for direct CCWs) or IDAL (for IDAL CCWs) memory, so that we have room for at least one address even though no guest memory will be pinned and thus the IDAW will not be populated with a host address.

Signed-off-by: Eric Farman <farman@linux.ibm.com> Message-Id: <20190516161403.79053-3-farman@linux.ibm.com> Acked-by: Farhan Ali <alifm@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-06-03  s390/cio: Don't pin vfio pages for empty transfers  (Eric Farman; 1 file, -5/+50)
The skip flag of a CCW offers the possibility of data not being transferred, but is only meaningful for certain commands. Specifically, it is only applicable for a read, read backward, sense, or sense ID CCW and will be ignored for any other command code (SA22-7832-11 page 15-64, and figure 15-30 on page 15-75). (A sense ID is xE4, while a sense is x04 with possible modifiers in the upper four bits. So we will cover the whole "family" of sense CCWs.)

For those scenarios, since there is no requirement for the target address to be valid, we should skip the call to vfio_pin_pages() and rely on the IDAL address we have allocated/built for the channel program. The fact that the individual IDAWs within the IDAL are invalid is fine, since they aren't actually checked in these cases.

Set pa_nr to zero when skipping the pfn_array_pin() call, since it is defined as the number of pages pinned and is used to determine whether to call vfio_unpin_pages() upon cleanup.

The pfn_array_pin() routine returns the number of pages that were pinned, but now might be skipped for some CCWs. Thus we need to calculate the expected number of pages ourselves such that we are guaranteed to allocate a reasonable number of IDAWs, which will provide a valid address in CCW.CDA regardless of whether the IDAWs are filled in with pinned/translated addresses or not.

Signed-off-by: Eric Farman <farman@linux.ibm.com> Message-Id: <20190516161403.79053-2-farman@linux.ibm.com> Acked-by: Farhan Ali <alifm@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
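Putting those rules together, the "should we pin anything?" decision looks roughly like this sketch; the helper names are illustrative, and the command-code families follow the ranges named above:

    #include <asm/cio.h>    /* struct ccw1, CCW_FLAG_SKIP (s390) */
    #include <linux/types.h>

    static inline bool ccw_is_read(const struct ccw1 *ccw)
    {
        return (ccw->cmd_code & 0x03) == 0x02;
    }

    static inline bool ccw_is_read_backward(const struct ccw1 *ccw)
    {
        return (ccw->cmd_code & 0x0f) == 0x0c;
    }

    /* Covers basic sense (x04 plus modifiers) and sense ID (xE4). */
    static inline bool ccw_is_sense(const struct ccw1 *ccw)
    {
        return (ccw->cmd_code & 0x0f) == 0x04;
    }

    /* Only pin guest memory when the CCW will actually move data. */
    static bool ccw_does_data_transfer(const struct ccw1 *ccw)
    {
        if (!ccw->count)
            return false;                   /* zero length: nothing moves */

        if (!(ccw->flags & CCW_FLAG_SKIP))
            return true;                    /* skip not set: data moves */

        /* Skip flag set: honored only for read/read-backward/sense(-ID). */
        return !(ccw_is_read(ccw) || ccw_is_read_backward(ccw) ||
                 ccw_is_sense(ccw));
    }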
2019-06-03  s390/cio: Initialize the host addresses in pfn_array  (Eric Farman; 1 file, -1/+4)
Let's initialize the host address to something that is invalid, rather than letting it default to zero. This just makes it easier to notice when a pin operation has failed or been skipped. Signed-off-by: Eric Farman <farman@linux.ibm.com> Message-Id: <20190514234248.36203-5-farman@linux.ibm.com> Reviewed-by: Farhan Ali <alifm@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-06-03  s390/cio: Split pfn_array_alloc_pin into pieces  (Eric Farman; 1 file, -18/+46)
The pfn_array_alloc_pin routine is doing too much. Today, it does the alloc of the pfn_array struct and its member arrays, builds the iova address lists out of a contiguous piece of guest memory, and asks vfio to pin the resulting pages. Let's effectively revert a significant portion of commit 5c1cfb1c3948 ("vfio: ccw: refactor and improve pfn_array_alloc_pin()") such that we break pfn_array_alloc_pin() into its component pieces, and have one routine that allocates/populates the pfn_array structs, and another that actually pins the memory. In the future, we will be able to handle scenarios where pinning memory isn't actually appropriate. Signed-off-by: Eric Farman <farman@linux.ibm.com> Message-Id: <20190514234248.36203-4-farman@linux.ibm.com> Reviewed-by: Farhan Ali <alifm@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-06-03  s390/cio: Update SCSW if it points to the end of the chain  (Eric Farman; 1 file, -1/+5)
Per the POPs [1], when processing an interrupt the SCSW.CPA field of an IRB generally points to 8 bytes after the last CCW that was executed (there are exceptions, but this is the most common behavior). In the case of an error, this points us to the first un-executed CCW in the chain. But in the case of normal I/O, the address points beyond the end of the chain. While the guest generally only cares about this when possibly restarting a channel program after error recovery, we should convert the address even in the good scenario so that we provide a consistent, valid, response upon I/O completion. [1] Figure 16-6 in SA22-7832-11. The footnotes in that table also state that this is true even if the resulting address is invalid or protected, but moving to the end of the guest chain should not be a surprise. Signed-off-by: Eric Farman <farman@linux.ibm.com> Message-Id: <20190514234248.36203-2-farman@linux.ibm.com> Reviewed-by: Farhan Ali <alifm@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-04-24  vfio-ccw: make it safe to access channel programs  (Cornelia Huck; 1 file, -1/+20)
When we get a solicited interrupt, the start function may have been cleared by a csch, but we still have a channel program structure allocated. Make it safe to call the cp accessors in any case, so we can call them unconditionally. While at it, also make sure that functions called from other parts of the code return gracefully if the channel program structure has not been initialized (even though that is a bug in the caller). Reviewed-by: Eric Farman <farman@linux.ibm.com> Reviewed-by: Farhan Ali <alifm@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-02-26  s390/cio: Use cpa range elsewhere within vfio-ccw  (Eric Farman; 1 file, -12/+6)
Since we have a little function to see whether a channel program address falls within a range of CCWs, let's use it in the other places of code that make these checks. (Why isn't ccw_head fully removed? Well, because this way some longs lines don't have to be reflowed.) Signed-off-by: Eric Farman <farman@linux.ibm.com> Message-Id: <20190222183941.29596-3-farman@linux.ibm.com> Reviewed-by: Farhan Ali <alifm@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
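The "little function" being reused is essentially a bounds check over a run of CCWs; a sketch of its likely shape (the upstream name and signature may differ slightly):

    #include <asm/cio.h>    /* struct ccw1 (s390) */
    #include <linux/types.h>

    /* Does channel-program address @cpa land on one of @len CCWs at @head? */
    static inline bool is_cpa_within_range(u32 cpa, u32 head, int len)
    {
        u32 tail = head + (len - 1) * sizeof(struct ccw1);

        return head <= cpa && cpa <= tail;
    }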
2019-02-26  s390/cio: Fix vfio-ccw handling of recursive TICs  (Eric Farman; 1 file, -1/+36)
The routine ccwchain_calc_length() is tasked with looking at a channel program, seeing how many CCWs are chained together by the presence of the Chain-Command flag, and returning a count to the caller.

Previously, it also considered a Transfer-in-Channel CCW as being an appropriate mechanism for chaining. The problem at the time was that the TIC CCW will almost certainly not go to the next CCW in memory (because the CC flag would be sufficient), and so advancing to the next 8 bytes will cause us to read potentially invalid memory. So that comparison was removed, and the target of the TIC is processed as a new chain.

This is fine when a TIC goes to a new chain (consider a NOP+TIC to a channel program that is being redriven), but there is another scenario where this falls apart. A TIC can be used to "rewind" a channel program, for example to find a particular record on a disk with various orientation CCWs. In this case, we DO want to consider the memory after the TIC since the TIC will be skipped once the requested criteria is met. This is due to the Status Modifier presented by the device, though software doesn't need to operate on it beyond understanding the behavior change of how the channel program is executed.

So to handle this, we will re-introduce the check for a TIC CCW but limit it by examining the target of the TIC. If the TIC doesn't go back into the current chain, then current behavior applies; we should stop counting CCWs and let the target of the TIC be handled as a new chain. But, if the TIC DOES go back into the current chain, then we need to keep looking at the memory after the TIC for when the channel breaks out of the TIC loop.

We can't use tic_target_chain_exists() because the chain in question hasn't been built yet, so we will redefine that comparison with some small functions to make it more readable and to permit refactoring later.

Fixes: 405d566f98ae ("vfio-ccw: Don't assume there are more ccws after a TIC") Signed-off-by: Eric Farman <farman@linux.ibm.com> Message-Id: <20190222183941.29596-2-farman@linux.ibm.com> Reviewed-by: Halil Pasic <pasic@linux.ibm.com> Reviewed-by: Farhan Ali <alifm@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
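In other words, the counting loop keeps going either on the command-chaining flag or on a TIC whose target lands back inside what has already been counted; a sketch of that loop, with the range test inlined and all names illustrative:

    #include <asm/cio.h>    /* struct ccw1, CCW_FLAG_CC (s390) */
    #include <linux/types.h>

    static inline bool ccw_is_chain(const struct ccw1 *ccw)
    {
        return ccw->flags & CCW_FLAG_CC;
    }

    /* Count CCWs starting at @ccw (guest address @head), up to @max. */
    static int count_chain_length(const struct ccw1 *ccw, u32 head, int max)
    {
        int cnt = 0;

        do {
            u32 tail;
            bool tic_rewinds;

            cnt++;
            tail = head + (cnt - 1) * sizeof(struct ccw1);

            /* A TIC (x08) that jumps back into the CCWs counted so far. */
            tic_rewinds = ccw->cmd_code == 0x08 &&
                          ccw->cda >= head && ccw->cda <= tail;

            if (!ccw_is_chain(ccw) && !tic_rewinds)
                break;  /* end of chain, or a TIC to a brand new chain */

            ccw++;
        } while (cnt < max);

        return cnt;
    }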
2019-02-04  vfio-ccw: Don't assume there are more ccws after a TIC  (Farhan Ali; 1 file, -1/+1)
When trying to calculate the length of a ccw chain, we assume there are ccws after a TIC. This can lead to overcounting and copying garbage data from guest memory. Signed-off-by: Farhan Ali <alifm@linux.ibm.com> Message-Id: <d63748c1f1b03147bcbf401596638627a5e35ef7.1548082107.git.alifm@linux.ibm.com> Reviewed-by: Halil Pasic <pasic@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2018-11-13  s390/cio: Fix cleanup when unsupported IDA format is used  (Eric Farman; 1 file, -1/+3)
Direct returns from within a loop are rude, but it doesn't mean it gets to avoid releasing the memory acquired beforehand. Signed-off-by: Eric Farman <farman@linux.ibm.com> Message-Id: <20181109023937.96105-3-farman@linux.ibm.com> Reviewed-by: Farhan Ali <alifm@linux.ibm.com> Reviewed-by: Pierre Morel <pmorel@linux.ibm.com> Acked-by: Halil Pasic <pasic@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2018-11-13  s390/cio: Fix cleanup of pfn_array alloc failure  (Eric Farman; 1 file, -1/+1)
If pfn_array_alloc fails somehow, we need to release the pfn_array_table that was malloc'd earlier. Signed-off-by: Eric Farman <farman@linux.ibm.com> Message-Id: <20181109023937.96105-2-farman@linux.ibm.com> Acked-by: Halil Pasic <pasic@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2018-10-02  s390/cio: Fix how vfio-ccw checks pinned pages  (Eric Farman; 1 file, -1/+1)
We have two nested loops to check the entries within the pfn_array_table arrays. But we mistakenly use the outer array as an index in our check, and completely ignore the indexing performed by the inner loop. Cc: stable@vger.kernel.org Signed-off-by: Eric Farman <farman@linux.ibm.com> Message-Id: <20181002010235.42483-1-farman@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2018-05-29  vfio: ccw: set ccw->cda to NULL defensively  (Dong Jia Shi; 1 file, -11/+20)
Let's avoid freeing a ccw->cda that points to a guest address or to an already-freed memory area, by setting it to NULL if the memory allocation didn't happen or failed. Signed-off-by: Dong Jia Shi <bjsdjshi@linux.ibm.com> Message-Id: <20180523025645.8978-4-bjsdjshi@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2018-05-29  vfio: ccw: refactor and improve pfn_array_alloc_pin()  (Dong Jia Shi; 1 file, -46/+36)
This refactors pfn_array_alloc_pin() and also improves it by adding defensive code in error handling, so that calling pfn_array_unpin_free() after an error return won't lead to problems. This mainly does:

1. Merge pfn_array_pin() into pfn_array_alloc_pin(), since there is no other user of pfn_array_pin(). As a result, also remove the kernel-doc for pfn_array_pin() and add/update the kernel-doc for pfn_array_alloc_pin() and struct pfn_array.
2. For a vfio_pin_pages() failure, set pa->pa_nr to zero to indicate that zero pages were pinned.
3. Set pa->pa_iova_pfn to NULL right after it was freed.

Suggested-by: Pierre Morel <pmorel@linux.ibm.com> Signed-off-by: Dong Jia Shi <bjsdjshi@linux.ibm.com> Message-Id: <20180523025645.8978-3-bjsdjshi@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2018-05-29  vfio: ccw: shorten kernel doc description for pfn_array_pin()  (Dong Jia Shi; 1 file, -8/+6)
The kernel doc description for usage of the struct pfn_array in pfn_array_pin() is unnecessary long. Let's shorten it by describing the contents of the struct pfn_array fields at the struct's definition instead. Suggested-by: Cornelia Huck <cohuck@redhat.com> Signed-off-by: Dong Jia Shi <bjsdjshi@linux.ibm.com> Message-Id: <20180523025645.8978-2-bjsdjshi@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2018-05-29  vfio: ccw: push down unsupported IDA check  (Halil Pasic; 1 file, -3/+16)
There is at least one relevant guest OS that doesn't set the IDA flags in the ORB as we would like them, but never uses any IDA. So instead of returning -EOPNOTSUPP as soon as we observe an ORB that could describe an unsupported channel program, let us return -EOPNOTSUPP only if the channel program actually is an unsupported one. Of course, the real solution would be doing proper translation for all IDA. This is possible, but not straightforward given the current code. Signed-off-by: Halil Pasic <pasic@linux.ibm.com> Tested-by: Jason J. Herne <jjherne@linux.ibm.com> Message-Id: <20180516173342.15174-1-pasic@linux.ibm.com> Reviewed-by: Dong Jia Shi <bjsdjshi@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2018-04-27  vfio: ccw: fix cleanup if cp_prefetch fails  (Halil Pasic; 1 file, -1/+12)
If the translation of a channel program fails, we may end up attempting to clean up (free, unpin) stuff that never got translated (and allocated, pinned) in the first place. By adjusting the lengths of the chains accordingly (so the element that failed, and all subsequent elements are excluded) cleanup activities based on false assumptions can be avoided. Let's make sure cp_free works properly after cp_prefetch returns with an error by setting ch_len of a ccw chain to the number of the translated CCWs on that chain. Cc: stable@vger.kernel.org #v4.12+ Acked-by: Pierre Morel <pmorel@linux.vnet.ibm.com> Reviewed-by: Dong Jia Shi <bjsdjshi@linux.vnet.ibm.com> Signed-off-by: Halil Pasic <pasic@linux.vnet.ibm.com> Signed-off-by: Dong Jia Shi <bjsdjshi@linux.vnet.ibm.com> Message-Id: <20180423110113.59385-2-bjsdjshi@linux.vnet.ibm.com> [CH: fixed typos] Signed-off-by: Cornelia Huck <cohuck@redhat.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2018-02-05  s390/cio: fix kernel-doc usage  (Sebastian Ott; 1 file, -1/+1)
Fix the kernel-doc usage in cio to get rid of (W=1) build warnings like: drivers/s390/cio/cio.c:1068: warning: No description found for parameter 'sch' Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2017-11-13  Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux  (Linus Torvalds; 1 file, -5/+19)
Pull s390 updates from Heiko Carstens:
 "Since Martin is on vacation you get the s390 pull request for the v4.15 merge window this time from me. Besides a lot of cleanups and bug fixes these are the most important changes:

  - a new regset for runtime instrumentation registers
  - hardware accelerated AES-GCM support for the aes_s390 module
  - support for the new CEX6S crypto cards
  - support for FORTIFY_SOURCE
  - addition of missing z13 and new z14 instructions to the in-kernel disassembler
  - generate opcode tables for the in-kernel disassembler out of a simple text file instead of having to manually maintain those tables
  - fast memset16, memset32 and memset64 implementations
  - removal of named saved segment support
  - hardware counter support for z14
  - queued spinlocks and queued rwlocks implementations for s390
  - use the stack_depth tracking feature for s390 BPF JIT
  - a new s390_sthyi system call which emulates the sthyi (store hypervisor information) instruction
  - removal of the old KVM virtio transport
  - an s390 specific CPU alternatives implementation which is used in the new spinlock code"

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux: (88 commits)
  MAINTAINERS: add virtio-ccw.h to virtio/s390 section
  s390/noexec: execute kexec datamover without DAT
  s390: fix transactional execution control register handling
  s390/bpf: take advantage of stack_depth tracking
  s390: simplify transactional execution elf hwcap handling
  s390/zcrypt: Rework struct ap_qact_ap_info.
  s390/virtio: remove unused header file kvm_virtio.h
  s390: avoid undefined behaviour
  s390/disassembler: generate opcode tables from text file
  s390/disassembler: remove insn_to_mnemonic()
  s390/dasd: avoid calling do_gettimeofday()
  s390: vfio-ccw: Do not attempt to free no-op, test and tic cda.
  s390: remove named saved segment support
  s390/archrandom: Reconsider s390 arch random implementation
  s390/pci: do not require AIS facility
  s390/qdio: sanitize put_indicator
  s390/qdio: use atomic_cmpxchg
  s390/nmi: avoid using long-displacement facility
  s390: pass endianness info to sparse
  s390/decompressor: remove informational messages
  ...
2017-11-08  s390: vfio-ccw: Do not attempt to free no-op, test and tic cda.  (Jason J. Herne; 1 file, -0/+2)
Because we do not make use of the cda (channel data address) for test, no-op ccws no address translation takes place. This means cda could contain a guest address which we do not want to attempt to free. Let's check the command type and skip cda free when it is not needed. For a TIC ccw, ccw->cda points to either a ccw in an existing chain or it points to a whole new allocated chain. In either case the data will be freed when the owning chain is freed. Signed-off-by: Jason J. Herne <jjherne@linux.vnet.ibm.com> Reviewed-by: Dong Jia Shi <bjsdjshi@linux.vnet.ibm.com> Reviewed-by: Pierre Morel <pmorel@linux.vnet.ibm.com> Message-Id: <1510068152-21988-1-git-send-email-jjherne@linux.vnet.ibm.com> Reviewed-by: Halil Pasic <pasic@linux.vnet.ibm.com> Acked-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
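A sketch of the guard described above, reusing the command-code predicates from the 2019-06-03 sketch; the free path is simplified and assumes (as the driver did at the time) that a translated cda holds a kernel virtual address from a low-memory allocation:

    #include <asm/cio.h>    /* struct ccw1 (s390) */
    #include <linux/slab.h>

    /* No-op (x03), "test" (low nibble zero), and TIC (x08) command codes. */
    static inline bool ccw_cda_needs_free(const struct ccw1 *ccw)
    {
        return !(ccw->cmd_code == 0x03 ||
                 (ccw->cmd_code & 0x0f) == 0 ||
                 ccw->cmd_code == 0x08);
    }

    static void ccw_free_cda(struct ccw1 *ccw)
    {
        /*
         * No-op/test CCWs never had their cda translated, and a TIC's cda
         * points into a chain that its owner frees, so leave those alone.
         */
        if (!ccw_cda_needs_free(ccw))
            return;

        kfree((void *)(unsigned long)ccw->cda);  /* assumed allocation scheme */
        ccw->cda = 0;
    }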