2019-07-02  s390/dasd: Fix a precision vs width bug in dasd_feature_list()  (Dan Carpenter; 1 file changed, -1/+1)
The "len" variable is the length of the option, up to the next option or to the end of the string, whichever comes first. We want to print the invalid option, so we want the precision format "%.*s", but the code uses the width format "%*s", which prints up to the end of the string. Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Tested-by: Stefan Haberland <sth@linux.ibm.com> Signed-off-by: Stefan Haberland <sth@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
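For illustration only (a plain userspace sketch, not code from the patch): a width never truncates the string, while a precision limits how many characters are printed.

  #include <stdio.h>

  int main(void)
  {
          const char *options = "ro,bogus_option,failfast";
          int len = 2;    /* length of the option to be reported */

          printf("%*s\n", len, options);   /* width: prints the whole string */
          printf("%.*s\n", len, options);  /* precision: prints only "ro"    */
          return 0;
  }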
2019-07-02  s390/cio: introduce driver_override on the css bus  (Cornelia Huck; 3 files changed, -0/+77)
Sometimes, we want to control which of the matching drivers binds to a subchannel device (e.g. for subchannels we want to handle via vfio-ccw). For pci devices, a mechanism to do so has been introduced in 782a985d7af2 ("PCI: Introduce new device binding path using pci_dev.driver_override"). It makes sense to introduce the driver_override attribute for subchannel devices as well, so that we can easily extend the 'driverctl' tool (which makes use of the driver_override attribute for pci). Note that unlike pci we still require a driver override to match the subchannel type; matching more than one subchannel type is probably not useful anyway. Signed-off-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Halil Pasic <pasic@linux.ibm.com> Reviewed-by: Sebastian Ott <sebott@linux.ibm.com> Signed-off-by: Sebastian Ott <sebott@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
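To illustrate how such an attribute is typically consumed (the subchannel bus id and sysfs path below are made up for the example; normally a tool like driverctl would do this):

  #include <fcntl.h>
  #include <string.h>
  #include <unistd.h>

  /* Restrict subchannel 0.0.0013 to the vfio_ccw driver for future (re)binds. */
  static int set_driver_override(void)
  {
          const char *path = "/sys/bus/css/devices/0.0.0013/driver_override";
          int fd = open(path, O_WRONLY);

          if (fd < 0)
                  return -1;
          if (write(fd, "vfio_ccw", strlen("vfio_ccw")) < 0) {
                  close(fd);
                  return -1;
          }
          return close(fd);
  }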
2019-06-24  vfio-ccw: make convert_ccw0_to_ccw1 static  (Cornelia Huck; 1 file changed, -1/+1)
Reported by sparse. Fixes: 7f8e89a8f2fd ("vfio-ccw: Factor out the ccw0-to-ccw1 transition") Signed-off-by: Cornelia Huck <cohuck@redhat.com> Message-Id: <20190624090721.16241-1-cohuck@redhat.com> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2019-06-21  vfio-ccw: Remove copy_ccw_from_iova()  (Eric Farman; 1 file changed, -12/+2)
Just to keep things tidy. Signed-off-by: Eric Farman <farman@linux.ibm.com> Message-Id: <20190618202352.39702-6-farman@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Farhan Ali <alifm@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-06-21  vfio-ccw: Factor out the ccw0-to-ccw1 transition  (Eric Farman; 1 file changed, -23/+25)
This is a really useful function, but it's buried in the copy_ccw_from_iova() routine so that ccwchain_calc_length() can just work with Format-1 CCWs while doing its counting. But it means we're translating a full 2K of "CCWs" to Format-1, when in reality there's probably far fewer in that space. Let's factor it out, so maybe we can do something with it later. Signed-off-by: Eric Farman <farman@linux.ibm.com> Message-Id: <20190618202352.39702-5-farman@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Farhan Ali <alifm@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-06-21  vfio-ccw: Copy CCW data outside length calculation  (Eric Farman; 1 file changed, -12/+7)
It doesn't make much sense to "hide" the copy to the channel_program struct inside a routine that calculates the length of the chain. Let's move it to the calling routine, which will later copy from channel_program to the memory it allocated itself. Signed-off-by: Eric Farman <farman@linux.ibm.com> Message-Id: <20190618202352.39702-4-farman@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Farhan Ali <alifm@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-06-21  vfio-ccw: Skip second copy of guest cp to host  (Eric Farman; 1 file changed, -7/+3)
We already pinned/copied/unpinned 2K (256 CCWs) of guest memory to the host space anchored off vfio_ccw_private. There's no need to do that again once we have the length calculated, when we could just copy the section we need to the "permanent" space for the I/O. Signed-off-by: Eric Farman <farman@linux.ibm.com> Message-Id: <20190618202352.39702-3-farman@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Farhan Ali <alifm@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-06-21  vfio-ccw: Move guest_cp storage into common struct  (Eric Farman; 3 files changed, -19/+18)
Rather than allocating/freeing a piece of memory every time we try to figure out how long a CCW chain is, let's use a piece of memory allocated for each device. The io_mutex added with commit 4f76617378ee9 ("vfio-ccw: protect the I/O region") is held for the duration of the VFIO_CCW_EVENT_IO_REQ event that accesses/uses this space, so there should be no race concerns with another CPU attempting an (unexpected) SSCH for the same device. Suggested-by: Cornelia Huck <cohuck@redhat.com> Signed-off-by: Eric Farman <farman@linux.ibm.com> Message-Id: <20190618202352.39702-2-farman@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Farhan Ali <alifm@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-06-19  s390/cio: move struct node_descriptor to cio.h  (Julian Wiedmann; 2 files changed, -30/+30)
This allows device drivers (eg. qeth) to use the struct when processing information retrieved via RCD. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Acked-by: Sebastian Ott <sebott@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2019-06-19  s390: fix stfle zero padding  (Heiko Carstens; 1 file changed, -7/+14)
The stfle inline assembly returns the number of double words written (condition code 0) or the number of double words it would have written (condition code 3), if the memory array it got as parameter had been large enough. The current stfle implementation assumes that the array is always large enough and clears those parts of the array that have not been written to with a subsequent memset call. If however the array is not large enough, memset will get a negative length parameter, which means that memset clears memory until it gets an exception and the kernel crashes. To fix this simply limit the maximum length. Also move the inline assembly to an extra function to avoid clobbering of register 0, which might happen because of the added min_t invocation together with code instrumentation. The bug was introduced with commit 14375bc4eb8d ("[S390] cleanup facility list handling") but was rather harmless, since it would only write to a rather large array. It became a potential problem with commit 3ab121ab1866 ("[S390] kernel: Add z/VM LGR detection"). Since then it writes to an array with only four double words, while some machines already deliver three double words. As soon as machines have a facility bit within the fifth double word, a crash on IPL would happen. Fixes: 14375bc4eb8d ("[S390] cleanup facility list handling") Cc: <stable@vger.kernel.org> # v2.6.37+ Reviewed-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
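The essence of the fix, as a hedged sketch; stfle_query() is a hypothetical stand-in for the inline assembly:

  #include <string.h>

  /* Hypothetical stand-in for the stfle query; it reports how many bytes
   * it would have needed, which can exceed the buffer it was given. */
  extern unsigned long stfle_query(unsigned long *list, int dwords);

  static void fill_facility_list(unsigned long *list, int dwords)
  {
          unsigned long nr = stfle_query(list, dwords);

          /* Clamp nr to the buffer size so that the memset() length
           * below can never become negative. */
          if (nr > (unsigned long)dwords * 8)
                  nr = dwords * 8;
          memset((char *)list + nr, 0, dwords * 8 - nr);
  }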
2019-06-19  s390/sclp: remove call home support  (Heiko Carstens; 4 files changed, -227/+0)
This feature has never been used, so remove it. Acked-by: Vasily Gorbik <gor@linux.ibm.com> Acked-by: Hendrik Brueckner <brueckner@linux.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2019-06-19  s390: replace defconfig with performance_defconfig  (Heiko Carstens; 2 files changed, -768/+514)
Replace defconfig with performance_defconfig. defconfig had some more or less random debug options enabled, and nobody knows why anymore. Just remove the old defconfig and replace it with performance_defconfig, which reduces the number of configs to maintain. A config with debugging options enabled is debug_defconfig, which is supposed to be rather close to performance_defconfig except that it has debug options enabled. Acked-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2019-06-17  s390/cio: Combine direct and indirect CCW paths  (Eric Farman; 1 file changed, -76/+39)
With both the direct-addressed and indirect-addressed CCW paths simplified to this point, the amount of shared code between them is (hopefully) more easily visible. Move the processing of IDA-specific bits into the direct-addressed path, and add some useful commentary of what the individual pieces are doing. This allows us to remove the entire ccwchain_fetch_idal() routine and maintain a single function for any non-TIC CCW. Signed-off-by: Eric Farman <farman@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Message-Id: <20190606202831.44135-10-farman@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-06-17  vfio-ccw: Rearrange IDAL allocation in direct CCW  (Eric Farman; 1 file changed, -10/+15)
This is purely deck furniture, to help understand the merge of the direct and indirect handlers. Signed-off-by: Eric Farman <farman@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Message-Id: <20190606202831.44135-9-farman@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-06-17  vfio-ccw: Remove pfn_array_table  (Eric Farman; 1 file changed, -85/+33)
Now that both CCW codepaths build this nested array:

  ccwchain->pfn_array_table[1]->pfn_array[#idaws/#pages]

we can collapse this into simply:

  ccwchain->pfn_array[#idaws/#pages]

Let's do that, so that we don't have to continually navigate two nested arrays when the first array always has a count of one. Signed-off-by: Eric Farman <farman@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Message-Id: <20190606202831.44135-8-farman@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-06-17  vfio-ccw: Adjust the first IDAW outside of the nested loops  (Eric Farman; 1 file changed, -2/+3)
Now that pfn_array_table[] is always an array of 1, it seems silly to check for the very first entry in an array in the middle of two nested loops, since we know it'll only ever happen once. Let's move this outside the loops to simplify things, even though the "k" variable is still necessary. Signed-off-by: Eric Farman <farman@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Message-Id: <20190606202831.44135-7-farman@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-06-17  vfio-ccw: Rearrange pfn_array and pfn_array_table arrays  (Eric Farman; 1 file changed, -15/+11)
While processing a channel program, we currently have two nested arrays that carry a slightly different structure. The direct CCW path creates this:

  ccwchain->pfn_array_table[1]->pfn_array[#pages]

while an IDA CCW creates:

  ccwchain->pfn_array_table[#idaws]->pfn_array[1]

The distinction appears to state that each pfn_array_table entry points to an array of contiguous pages, represented by a pfn_array, um, array. Since the direct-addressed scenario can ONLY represent contiguous pages, it makes the intermediate array necessary but difficult to recognize. Meanwhile, since an IDAL can contain non-contiguous pages and there is no logic in vfio-ccw to detect adjacent IDAWs, it is the second array that is necessary but appearing to be superfluous. I am not aware of any documentation that states the pfn_array[] needs to be of contiguous pages; it is just what the code does today. I don't see any reason for this either, let's just flip the IDA codepath around so that it generates:

  ch_pat->pfn_array_table[1]->pfn_array[#idaws]

This will bring it in line with the direct-addressed codepath, so that we can understand the behavior of this memory regardless of what type of CCW is being processed. And it means the casual observer does not need to know/care whether the pfn_array[] represents contiguous pages or not. NB: The existing vfio-ccw code only supports 4K-block Format-2 IDAs, so that "#pages" == "#idaws" in this area. This means that we will have difficulty with this overlap in terminology if support for Format-1 or 2K-block Format-2 IDAs is ever added. I don't think that this patch changes our ability to make that distinction. Signed-off-by: Eric Farman <farman@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Message-Id: <20190606202831.44135-6-farman@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-06-17  s390/cio: Use generalized CCW handler in cp_init()  (Eric Farman; 1 file changed, -23/+4)
It is now pretty apparent that ccwchain_handle_ccw() (nee ccwchain_handle_tic()) does everything that cp_init() wants to do. Let's remove that duplicated code from cp_init() and let ccwchain_handle_ccw() handle it itself. Signed-off-by: Eric Farman <farman@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Message-Id: <20190606202831.44135-5-farman@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-06-17  s390/cio: Generalize the TIC handler  (Eric Farman; 1 file changed, -5/+6)
Refactor ccwchain_handle_tic() into a routine that handles a channel program address (which itself is a CCW pointer), rather than a CCW pointer that is only a TIC CCW. This will make it easier to reuse this code for other CCW commands. Signed-off-by: Eric Farman <farman@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Message-Id: <20190606202831.44135-4-farman@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-06-17  s390/cio: Refactor the routine that handles TIC CCWs  (Eric Farman; 1 file changed, -4/+4)
Extract the "does the target of this TIC already exist?" check from ccwchain_handle_tic(), so that it's easier to refactor that function into one that cp_init() is able to use. Signed-off-by: Eric Farman <farman@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Message-Id: <20190606202831.44135-3-farman@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-06-17  s390/cio: Squash cp_free() and cp_unpin_free()  (Eric Farman; 1 file changed, -20/+16)
The routine cp_free() does nothing but call cp_unpin_free(), and while most places call cp_free() there is one caller of cp_unpin_free() used when the cp is guaranteed to have not been marked initialized. This seems like a dubious way to make a distinction, so let's combine these routines and make cp_free() do all the work. Signed-off-by: Eric Farman <farman@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Message-Id: <20190606202831.44135-2-farman@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-06-15  Update default configuration  (Martin Schwidefsky; 3 files changed, -3/+8)
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
2019-06-15  processor: get rid of cpu_relax_yield  (Heiko Carstens; 5 files changed, -13/+9)
stop_machine is the only user left of cpu_relax_yield. Given that it now has special semantics which are tied to stop_machine, introduce a weak stop_machine_yield function which architectures can override, and get rid of the generic cpu_relax_yield implementation. Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
2019-06-15  s390: improve wait logic of stop_machine  (Martin Schwidefsky; 5 files changed, -13/+25)
The stop_machine loop to advance the state machine and to wait for all affected CPUs to check-in calls cpu_relax_yield in a tight loop until the last missing CPUs acknowledged the state transition. On a virtual system where not all logical CPUs are backed by real CPUs all the time it can take a while for all CPUs to check-in. With the current definition of cpu_relax_yield a diagnose 0x44 is done which tells the hypervisor to schedule *some* other CPU. That can be any CPU and not necessarily one of the CPUs that need to run in order to advance the state machine. This can lead to a pretty bad diagnose 0x44 storm until the last missing CPU finally checked-in. Replace the undirected cpu_relax_yield based on diagnose 0x44 with a directed yield. Each CPU in the wait loop will pick up the next CPU in the cpumask of stop_machine. The diagnose 0x9c is used to tell the hypervisor to run this next CPU instead of the current one. If there is only a limited number of real CPUs backing the virtual CPUs we end up with the real CPUs passed around in a round-robin fashion. [heiko.carstens@de.ibm.com]: Use cpumask_next_wrap as suggested by Peter Zijlstra. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
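A hedged sketch of the resulting split (simplified; the real s390 override also rate-limits how often it yields, and smp_yield_cpu() here stands for the s390 helper that issues the directed diagnose):

  /* kernel/stop_machine.c: generic weak fallback */
  void __weak stop_machine_yield(const struct cpumask *cpumask)
  {
          cpu_relax();
  }

  /* arch/s390 override (sketched, lives in a separate file): yield to the
   * next CPU that still has to check in, instead of asking the hypervisor
   * to run "some" CPU. */
  void stop_machine_yield(const struct cpumask *cpumask)
  {
          int this_cpu = smp_processor_id();
          int cpu = cpumask_next_wrap(this_cpu, cpumask, this_cpu, false);

          if (cpu < nr_cpu_ids)
                  smp_yield_cpu(cpu);     /* diagnose 0x9c */
  }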
2019-06-15  processor: remove spin_cpu_yield  (Heiko Carstens; 2 files changed, -11/+0)
spin_cpu_yield is unused, therefore remove it. Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
2019-06-15  s390/traps: simplify data exception handler  (Vasily Gorbik; 1 file changed, -8/+2)
Simplify conditions and remove unnecessary variable in data exception handler. Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com> Reviewed-by: Hendrik Brueckner <brueckner@linux.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
2019-06-15  virtio/s390: make airq summary indicators DMA  (Halil Pasic; 1 file changed, -8/+24)
The hypervisor needs to interact with the summary indicators, so these need to be DMA memory as well (at least for protected virtualization guests). Signed-off-by: Halil Pasic <pasic@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Michael Mueller <mimu@linux.ibm.com> Tested-by: Michael Mueller <mimu@linux.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
2019-06-15  virtio/s390: use DMA memory for ccw I/O and classic notifiers  (Halil Pasic; 1 file changed, -80/+89)
Until now, virtio-ccw could get away with not using the DMA API for the pieces of memory it does ccw I/O with. With protected virtualization this has to change, since the hypervisor needs to read and sometimes also write these pieces of memory. The hypervisor is supposed to poke the classic notifiers, if these are used, out of band with regard to ccw I/O. So these need to be allocated as DMA memory (which is shared memory for protected virtualization guests). Let us factor out everything from struct virtio_ccw_device that needs to be DMA memory into a satellite that is allocated as such. Note: The control blocks of I/O instructions do not need to be shared. These are marshalled by the ultravisor. Signed-off-by: Halil Pasic <pasic@linux.ibm.com> Reviewed-by: Pierre Morel <pmorel@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Michael Mueller <mimu@linux.ibm.com> Tested-by: Michael Mueller <mimu@linux.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
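A rough sketch of the idea; the field names and the allocation helper are illustrative and not necessarily the exact driver layout:

  /* Collect everything the hypervisor must read or write into one
   * per-device structure that is allocated as shared/DMA memory: */
  struct vcdev_dma_area {
          struct ccw1 ccw;
          __u64 indicators;
          __u64 indicators2;
  };

  /* Hypothetical probe-time allocation, using a cio DMA helper from the
   * same patch series: */
  static int alloc_dma_area(struct virtio_ccw_device *vcdev)
  {
          vcdev->dma_area = ccw_device_dma_zalloc(vcdev->cdev,
                                                  sizeof(*vcdev->dma_area));
          return vcdev->dma_area ? 0 : -ENOMEM;
  }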
2019-06-15  virtio/s390: add indirection to indicators access  (Halil Pasic; 1 file changed, -15/+25)
This will come in handy soon when we pull out the indicators from virtio_ccw_device to a memory area that is shared with the hypervisor (in particular for protected virtualization guests). Signed-off-by: Halil Pasic <pasic@linux.ibm.com> Reviewed-by: Pierre Morel <pmorel@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Michael Mueller <mimu@linux.ibm.com> Tested-by: Michael Mueller <mimu@linux.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
2019-06-15  virtio/s390: use cacheline aligned airq bit vectors  (Halil Pasic; 1 file changed, -1/+2)
The flag AIRQ_IV_CACHELINE was recently added to airq_iv_create(). Let us use it! We actually wanted the vector to span a cacheline all along. Signed-off-by: Halil Pasic <pasic@linux.ibm.com> Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Michael Mueller <mimu@linux.ibm.com> Tested-by: Michael Mueller <mimu@linux.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
2019-06-15  s390/airq: use DMA memory for adapter interrupts  (Halil Pasic; 4 files changed, -14/+28)
Protected virtualization guests have to use shared pages for airq notifier bit vectors, because the hypervisor needs to write these bits. Let us make sure we allocate DMA memory for the notifier bit vectors by replacing the kmem_cache with a dma_cache and kalloc() with cio_dma_zalloc(). Signed-off-by: Halil Pasic <pasic@linux.ibm.com> Reviewed-by: Sebastian Ott <sebott@linux.ibm.com> Reviewed-by: Michael Mueller <mimu@linux.ibm.com> Tested-by: Michael Mueller <mimu@linux.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
2019-06-15  s390/cio: add basic protected virtualization support  (Halil Pasic; 10 files changed, -83/+164)
As virtio-ccw devices are channel devices, we need to use the dma area within the common I/O layer for any communication with the hypervisor. Note that we do not need to use that area for control blocks directly referenced by instructions, e.g. the orb. It handles neither QDIO in the common code, nor any device type specific stuff (like channel programs constructed by the DASD driver). An interesting side effect is that virtio structures are now going to get allocated in 31 bit addressable storage. Signed-off-by: Halil Pasic <pasic@linux.ibm.com> Reviewed-by: Sebastian Ott <sebott@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Michael Mueller <mimu@linux.ibm.com> Tested-by: Michael Mueller <mimu@linux.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
2019-06-15  s390/cio: introduce DMA pools to cio  (Halil Pasic; 3 files changed, -4/+141)
To support protected virtualization cio will need to make sure the memory used for communication with the hypervisor is DMA memory. Let us introduce one global pool for cio. Our DMA pools are implemented as a gen_pool backed with DMA pages. The idea is to avoid each allocation effectively wasting a page, as we typically allocate much less than PAGE_SIZE. Signed-off-by: Halil Pasic <pasic@linux.ibm.com> Reviewed-by: Sebastian Ott <sebott@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Michael Mueller <mimu@linux.ibm.com> Tested-by: Michael Mueller <mimu@linux.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
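A simplified sketch of such a pool (error paths and cleanup trimmed; the device pointer, pool name, and sizes are placeholders):

  #include <linux/dma-mapping.h>
  #include <linux/genalloc.h>

  static struct gen_pool *cio_dma_pool;

  /* Back a gen_pool with a whole DMA page, then hand out small chunks
   * so that each allocation does not burn a page of its own. */
  static int cio_dma_pool_init(struct device *dma_dev)
  {
          dma_addr_t dma_addr;
          void *page;

          cio_dma_pool = gen_pool_create(3 /* 8-byte granularity */, -1);
          if (!cio_dma_pool)
                  return -ENOMEM;
          page = dma_alloc_coherent(dma_dev, PAGE_SIZE, &dma_addr, GFP_KERNEL);
          if (!page)
                  return -ENOMEM;
          return gen_pool_add_virt(cio_dma_pool, (unsigned long)page,
                                   dma_addr, PAGE_SIZE, -1);
  }

  /* Callers then allocate far less than PAGE_SIZE at a time, e.g.:
   *     void *buf = gen_pool_dma_zalloc(cio_dma_pool, 64, NULL);
   */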
2019-06-15  s390/mm: force swiotlb for protected virtualization  (Halil Pasic; 3 files changed, -0/+68)
On s390, protected virtualization guests have to use bounced I/O buffers. That requires some plumbing. Let us make sure that any device correctly using the DMA API with direct ops is spared the problems that a hypervisor attempting I/O to a non-shared page would bring. Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Reviewed-by: Michael Mueller <mimu@linux.ibm.com> Tested-by: Michael Mueller <mimu@linux.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
2019-06-15  s390/crypto: sha: Use -ENODEV instead of -EOPNOTSUPP  (David Hildenbrand; 3 files changed, -3/+3)
Let's use the error value that is typically used if HW support is not available when trying to load a module - this is also what systemd's systemd-modules-load.service expects. Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Harald Freudenberger <freude@linux.ibm.com> Signed-off-by: David Hildenbrand <david@redhat.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
2019-06-15  s390/crypto: prng: Use -ENODEV instead of -EOPNOTSUPP  (David Hildenbrand; 1 file changed, -2/+2)
Let's use the error value that is typically used if HW support is not available when trying to load a module - this is also what systemd's systemd-modules-load.service expects. Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Harald Freudenberger <freude@linux.ibm.com> Signed-off-by: David Hildenbrand <david@redhat.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
2019-06-15  s390/crypto: ghash: Use -ENODEV instead of -EOPNOTSUPP  (David Hildenbrand; 1 file changed, -1/+1)
Let's use the error value that is typically used if HW support is not available when trying to load a module - this is also what systemd's systemd-modules-load.service expects. Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Harald Freudenberger <freude@linux.ibm.com> Signed-off-by: David Hildenbrand <david@redhat.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
2019-06-15  s390/pkey: Use -ENODEV instead of -EOPNOTSUPP  (David Hildenbrand; 1 file changed, -4/+4)
systemd-modules-load.service automatically tries to load the pkey module on systems that have MSA. Pkey also requires the MSA3 facility and a bunch of subfunctions. Failing with -EOPNOTSUPP makes "systemd-modules-load.service" fail on any system that does not have all needed subfunctions. For example, when running under QEMU TCG (but also on systems where protected keys are disabled via the HMC). Let's use -ENODEV, so systemd-modules-load.service properly ignores failing to load the pkey module because of missing HW functionality. While at it, also convert the -EOPNOTSUPP in pkey_clr2protkey() to -ENODEV. Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Harald Freudenberger <freude@linux.ibm.com> Signed-off-by: David Hildenbrand <david@redhat.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
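A minimal sketch of the resulting pattern; the facility check below is a hypothetical stand-in for the real MSA/subfunction tests:

  #include <linux/module.h>

  static int __init pkey_like_init(void)
  {
          /* Without the needed hardware, return -ENODEV: module loaders
           * such as systemd-modules-load treat that as "hardware not
           * present" rather than as an error, unlike -EOPNOTSUPP. */
          if (!crypto_facilities_available())     /* hypothetical check */
                  return -ENODEV;
          return 0;
  }
  module_init(pkey_like_init);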
2019-06-11  s390/kdump: get rid of compile warning  (Heiko Carstens; 1 file changed, -1/+2)
Move the CONFIG_CRASH_DUMP ifdef to get rid of this: arch/s390/kernel/machine_kexec.c:146:22: warning: 'do_start_kdump' defined but not used [-Wunused-function] Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
2019-06-11  RAID/s390: remove invalid 'r' inline asm operand modifier  (Vasily Gorbik; 1 file changed, -1/+1)
gcc silently ignores unsupported inline asm operand modifiers, effectively turning '%r0' into '%0', but upcoming clang 9 complains about them:

  lib/raid6/s390vx8.c:63:16: error: invalid operand in inline asm: 'VLM $2,$3,0,${1:r}'
          asm volatile ("VLM %2,%3,0,%r1"
                        ^

Clean up what looks like a typo in the 'r' inline asm operand modifier usage. Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
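A generic illustration of the difference; the instruction and constraints are chosen for the example and not taken from the RAID6 code:

  static unsigned long copy_reg(unsigned long in)
  {
          unsigned long out;

          /* "%r1" is not a defined operand modifier here: gcc quietly
           * emits it as "%1", clang 9 rejects it, so plain "%1" is what
           * should be written. */
          asm volatile("lgr %0,%1" : "=d" (out) : "d" (in));
          return out;
  }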
2019-06-11  s390: include/asm/debug.h add kerneldoc markups  (Mauro Carvalho Chehab; 2 files changed, -671/+232)
Instead of keeping the documentation inside s390dbf.rst, move it to arch/s390/include/asm/debug.h, using standard kernel-doc markups. Keeping the documentation close to the code helps to keep it updated. It also makes it easier to document other stuff inside debug.h, as all it needs is kernel-doc markups inside it, since the file will already be included in the produced documentation. The comments were converted to kerneldoc using this script, specially designed to parse this file, and manually edited:

<script>
use strict;

my $mode = "";
my $parameter = "";
my $ret = "";
my $descr = "";

sub add_var($)
{
	my $ln = shift;

	$ln =~ s/^\s+//;
	$ln =~ s/\s+$//;
	return if ($ln eq "");
	$ln =~ s/^(\S+)\s+/$1\t/;
	print " * \@$ln\n";
}

sub add_return($)
{
	my $ln = shift;

	print " *\n * Return:\n" if ($mode ne "Return Value:");
	$ln =~ s/^\s+//;
	$ln =~ s/\s+$//;
	return if ($ln eq "");
	print " * - $ln\n";
}

sub add_description($)
{
	my $ln = shift;

	print " *\n * \n" if ($mode ne "Description:");
	$ln =~ s/^\s+//;
	$ln =~ s/\s+$//;
	return if ($ln eq "");
	print " * $ln\n";
}

sub flush_results()
{
	print " */\n\n";
}

while (<>) {
	if (m/^[\-]+$/) {
		flush_results();
		$mode = "";
		$parameter = "";
		$ret = "";
		$descr = "";
		next;
	}
	if (m/(Parameter:)(.*)/) {
		print " *\n" if ($mode eq "func");
		add_var($2);
		$mode = $1;
		next;
	}
	if (m/(Return Value:)(.*)/) {
		add_return($2);
		$mode = $1;
		next;
	}
	if (m/(Description:)(.*)/) {
		add_description($2);
		$mode = $1;
		next;
	}
	if ($mode eq "Parameter:") {
		add_var($_);
		next;
	}
	if ($mode eq "Return Value:") {
		add_return($_);
		next;
	}
	if ($mode eq "Description:") {
		add_description($_);
		next;
	}
	next if (m/^\s*$/);
	if (m/^\S+.*\s\*?(\S+)\s*\(/) {
		if ($mode eq "") {
			print "/**\n * $1()\n";
		} else {
			print " * $1()\n";
		}
		$mode="func";
	}
}
flush_results();
</script>

Signed-off-by: Mauro Carvalho Chehab <mchehab+samsung@kernel.org> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
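For reference, the kernel-doc comment style being converted to looks roughly like this (simplified, not the exact text that ended up in debug.h):

  /**
   * debug_register() - register a debug log
   * @name:     name of the debug log
   * @pages:    number of pages per debug area
   * @nr_areas: number of debug areas
   * @buf_size: size of the data area in each debug entry
   *
   * Return: handle for the debug area on success, %NULL on failure
   */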
2019-06-11  docs: s390: convert docs to ReST and rename to *.rst  (Mauro Carvalho Chehab; 21 files changed, -2262/+3118)
Convert all text files with s390 documentation to ReST format. Tried to preserve as much as possible of the original document format. Still, some of the files required some work so that they are readable both as plain text and after being converted to html. The conversion is actually:

  - add blank lines and indentation in order to identify paragraphs;
  - fix table markups;
  - add some list markups;
  - mark literal blocks;
  - adjust title markups.

At its new index.rst, let's add a :orphan: while this is not linked to the main index.rst file, in order to avoid build warnings. Signed-off-by: Mauro Carvalho Chehab <mchehab+samsung@kernel.org> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
2019-06-11  docs: Debugging390.txt: convert table to ascii artwork  (Mauro Carvalho Chehab; 1 file changed, -90/+120)
The first bit/value table inside the document is very hard to read and won't fit the ReST format. Also, some columns aren't properly aligned. Convert it to a nice ascii artwork table, which makes it easier to read as plain text and is compatible with the ReST format parser in Sphinx. Signed-off-by: Mauro Carvalho Chehab <mchehab+samsung@kernel.org> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
2019-06-07  s390/qdio: handle PENDING state for QEBSM devices  (Julian Wiedmann; 1 file changed, -0/+1)
When a CQ-enabled device uses QEBSM for SBAL state inspection, get_buf_states() can return the PENDING state for an Output Queue. get_outbound_buffer_frontier() isn't prepared for this, and any PENDING buffer will permanently stall all further completion processing on this Queue. This isn't a concern for non-QEBSM devices, as get_buf_states() for such devices will manually turn PENDING buffers into EMPTY ones. Fixes: 104ea556ee7f ("qdio: support asynchronous delivery of storage blocks") Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
2019-06-07  s390/jump_label: remove unused structure definition  (Heiko Carstens; 1 file changed, -5/+0)
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
2019-06-07  s390/boot: disable address-of-packed-member warning  (Heiko Carstens; 1 file changed, -0/+1)
Get rid of gcc9 warnings like this:

  arch/s390/boot/ipl_report.c: In function 'find_bootdata_space':
  arch/s390/boot/ipl_report.c:42:26: warning: taking address of packed member of 'struct ipl_rb_components' may result in an unaligned pointer value [-Waddress-of-packed-member]
     42 |         for_each_rb_entry(comp, comps)
        |                           ^~~~~

This is effectively the s390 variant of commit 20c6c1890455 ("x86/boot: Disable the address-of-packed-member compiler warning"). Reviewed-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
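A minimal standalone example of what triggers this warning (unrelated to the ipl_report code itself):

  struct packed_box {
          char tag;
          int value;
  } __attribute__((packed));

  int *value_ptr(struct packed_box *b)
  {
          /* -Waddress-of-packed-member: the returned pointer may be
           * unaligned, because 'value' has no padding before it. */
          return &b->value;
  }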
2019-06-07  s390/cio: fix kdoc for tiqdio_thinint_handler  (Sebastian Ott; 1 file changed, -0/+1)
Add missing parameter description to fix the following warning:

  drivers/s390/cio/qdio_thinint.c:183: warning: Function parameter or member 'floating' not described in 'tiqdio_thinint_handler'

Signed-off-by: Sebastian Ott <sebott@linux.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
2019-06-07  s390/zcrypt: support special flagged EP11 cprbs  (Harald Freudenberger; 1 file changed, -0/+4)
Within an EP11 cprb there exists a byte field flags. Bit 0x20 of this field indicates a special cprb. A special cprb triggers special handling in the firmware below the OS layer. However, a special cprb also needs to have the S bit in GPR0 set when NQAP is called. This was not the case for EP11 cprbs and this patch now introduces the code to support this. Signed-off-by: Harald Freudenberger <freude@linux.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
2019-06-07  s390: fix unrecognized __aligned() in uapi header  (Masahiro Yamada; 1 file changed, -1/+1)
__aligned() is a shorthand that is only available in the kernel space because it is defined in include/linux/compiler_attributes.h, which is not exported to the user space. Detected by compile-testing exported headers:

  ./usr/include/asm/runtime_instr.h:60:37: error: expected declaration specifiers or ‘...’ before numeric constant
   } __attribute__((packed)) __aligned(8);
                                       ^

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
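In an exported header the attribute has to be spelled out; roughly like this (a sketch, not the exact runtime_instr.h hunk):

  #include <linux/types.h>

  /* __aligned(8) only exists inside the kernel; a uapi header must use
   * the raw GCC attribute instead: */
  struct example_cb {
          __u64 starting_addr;
          /* ... */
  } __attribute__((packed)) __attribute__((aligned(8)));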
2019-06-07  s390/configs: remove useless UEVENT_HELPER_PATH  (Krzysztof Kozlowski; 2 files changed, -2/+0)
Remove the CONFIG_UEVENT_HELPER_PATH because:

1. It is disabled since commit 1be01d4a5714 ("driver: base: Disable CONFIG_UEVENT_HELPER by default") as its dependency (UEVENT_HELPER) was made default to 'n',

2. It is not recommended (help message: "This should not be used today [...] creates a high system load") and was kept only for ancient userland,

3. Certain userland specifically requests it to be disabled (systemd README: "Legacy hotplug slows down the system and confuses udev").

Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org> Acked-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>