|
Get rid of do_fault_error() and move its contents to do_exception().
With do_fault_error() removed it is also possible to get rid of the
handle_fault_error_nolock() wrapper; instead rename do_no_context() to
handle_fault_error_nolock().
As a result the whole fault handling looks much more like on other
architectures.
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
|
|
Remove the last two private vm_fault reasons: VM_FAULT_BADMAP and
VM_FAULT_BADACCESS.
In order to achieve this add an si_code parameter to do_no_context()
and its wrappers, and directly call the wrappers instead of relying on
do_fault_error() handling.
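A hedged sketch of what the call sites become (whether the locked or
the _nolock wrapper is used depends on the individual call site):
        if (!vma) {
                /* formerly: return VM_FAULT_BADMAP */
                handle_fault_error(regs, SEGV_MAPERR);
                return;
        }
        if (unlikely(!(vma->vm_flags & access))) {
                /* formerly: return VM_FAULT_BADACCESS */
                handle_fault_error(regs, SEGV_ACCERR);
                return;
        }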
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
|
|
Remove VM_FAULT_SIGNAL and open-code it at the only two locations
where it is used.
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
|
|
Remove VM_FAULT_BADCONTEXT and instead call do_no_context() via
wrappers. This adds two new wrappers similar to what x86 has:
handle_fault_error() and handle_fault_error_nolock(). Both of them
simply call do_no_context(), while handle_fault_error() also unlocks
the mmap lock, which avoids adding lots of mmap_read_unlock() calls with
this and subsequent patches.
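The two wrappers boil down to the following (minimal sketch; the real
functions in arch/s390/mm/fault.c may take additional parameters):
static void handle_fault_error_nolock(struct pt_regs *regs)
{
        do_no_context(regs);
}

static void handle_fault_error(struct pt_regs *regs)
{
        struct mm_struct *mm = current->mm;

        mmap_read_unlock(mm);
        do_no_context(regs);
}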
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
|
|
do_no_context() can be simplified by removing its fault parameter,
which is only used to decide if kfence_handle_page_fault() should be
called.
If the fault happened within kernel space it is fine to always check
whether it happened on a page which was unmapped by the kfence
feature. Limiting the check to the VM_FAULT_BADCONTEXT case doesn't
add any value.
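A simplified sketch of the idea, not the actual do_no_context() body
(fault_is_write() is a hypothetical stand-in for however the write
indication is derived):
static void do_no_context(struct pt_regs *regs)
{
        unsigned long address = get_fault_address(regs);

        if (fixup_exception(regs))
                return;
        /* always give KFENCE a chance to claim kernel-space faults */
        if (kfence_handle_page_fault(address, fault_is_write(regs), regs))
                return;
        die(regs, "Unable to handle kernel access");
}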
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
|
|
Remove duplicated fault error handling and handle it only once within
do_exception().
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
|
|
There is only one caller of do_low_address(). Given that this code is
quite special just get rid of do_low_address() and move its code into
do_protection_exception() in order to make the code a bit more
readable.
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
|
|
Handling of VM_FAULT_PFAULT and VM_FAULT_BADCONTEXT is nearly identical;
the only difference is within do_no_context(), where the fault_type
(KERNEL_FAULT vs GMAP_FAULT) makes sure that both types are handled
differently.
Therefore it is possible to get rid of VM_FAULT_PFAULT and use
VM_FAULT_BADCONTEXT instead.
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
|
|
The page table dumper uses get_kernel_nofault() to test if dereferencing
page table entries is possible. Use the result, which is the required page
table entry, instead of throwing it away and dereferencing a second time
without any safeguard.
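The resulting pattern looks roughly like this (dump_pte() and
note_page() are illustrative stand-ins for the dumper's walk callbacks):
static void dump_pte(struct ptdump_state *st, unsigned long addr, pte_t *ptep)
{
        pte_t pte;

        if (get_kernel_nofault(pte, ptep))
                return;         /* not accessible, skip it */
        /* reuse the safely read entry instead of dereferencing ptep again */
        note_page(st, addr, 4, pte_val(pte));
}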
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
|
|
Get rid of some magic numbers, and use the teid union and also some
ptrace PSW defines to improve readability.
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
|
|
Move translation-exception identification structure to new fault.h
header file, change it to a union, and change existing kvm code
accordingly. The new union will be used by subsequent patches.
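The union has roughly the following shape (fields abbreviated here; see
arch/s390/include/asm/fault.h for the authoritative layout):
union teid {
        unsigned long val;
        struct {
                unsigned long addr : 52;        /* translation-exception address */
                unsigned long fsi  :  2;        /* fault suppression information */
                unsigned long      :  8;        /* further flag bits, omitted here */
                unsigned long as   :  2;        /* address-space identifier */
        };
};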
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
|
|
Generate slightly better code by using a static key to implement store
indication. This allows getting rid of a memory access on the hot path.
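The usual shape of such a conversion (key name and facility test shown
only for illustration):
static DEFINE_STATIC_KEY_FALSE(have_store_indication);

static int __init fault_init(void)
{
        /* flip the key once at boot if the facility is installed */
        if (test_facility(75))
                static_branch_enable(&have_store_indication);
        return 0;
}
early_initcall(fault_init);

/*
 * Hot path: static_branch_likely() is patched at runtime, so no flag
 * has to be loaded from memory anymore:
 *
 *      if (static_branch_likely(&have_store_indication))
 *              ...
 */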
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
|
|
Use the get_fault_address() helper function instead of open-coding it
at many locations.
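The helper simply decodes the address from the translation-exception
identification, along these lines (sketch; the real definition lives in
asm/fault.h):
static inline unsigned long get_fault_address(struct pt_regs *regs)
{
        union teid teid = { .val = regs->int_parm_long };

        return teid.addr * PAGE_SIZE;
}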
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
|
|
do_secure_storage_access() contains a switch statement which handles
all possible return values from get_fault_type(). Therefore remove the
pointless default case error handling and replace it with unreachable().
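I.e. the switch ends up along these lines (case bodies omitted; fault
type names as used elsewhere in fault.c):
switch (get_fault_type(regs)) {
case GMAP_FAULT:
case USER_FAULT:
        /* ... */
        break;
case KERNEL_FAULT:
        /* ... */
        break;
default:
        /* get_fault_type() returns no other values */
        unreachable();
}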
Reviewed-by: Janosch Frank <frankja@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
|
|
Remove the noinline attribute from all functions and leave the
inlining decisions up to the compiler.
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
|
|
checkpatch reports:
CHECK: Alignment should match open parenthesis
+ if (IS_ENABLED(CONFIG_PGSTE) && gmap &&
+ (flags & FAULT_FLAG_RETRY_NOWAIT)) {
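Fixed by aligning the continuation line with the open parenthesis:
        if (IS_ENABLED(CONFIG_PGSTE) && gmap &&
            (flags & FAULT_FLAG_RETRY_NOWAIT)) {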
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
|
|
Include linux/mmu_context.h instead of asm/mmu_context.h.
checkpatch reports:
CHECK: Consider using #include <linux/mmu_context.h> instead of <asm/mmu_context.h>
+#include <asm/mmu_context.h>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
|
|
Remove unnecessary braces and blanks after casts.
Add braces where they are missing so that braces are balanced.
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
|
|
Use pr_warn() and friends instead of open-coding with printk().
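For example (message text is illustrative, not one of the strings
touched here):
/* before */
printk(KERN_WARNING "%s: failing address: %016lx\n", __func__, address);

/* after: shorter, and picks up a pr_fmt() prefix automatically */
pr_warn("%s: failing address: %016lx\n", __func__, address);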
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
|
|
Use pr_warn_ratelimited() instead of printk_ratelimited().
checkpatch reports:
WARNING: Prefer ... pr_warn_ratelimited(... to printk_ratelimited(KERN_WARNING ...
+ printk_ratelimited(KERN_WARNING
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
|
|
Just like other architectures, make use of __ratelimit() instead of
printk_ratelimit().
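The resulting pattern uses a ratelimit state private to the call site
instead of the global printk_ratelimit() state (names below are
illustrative):
static DEFINE_RATELIMIT_STATE(fault_rs, DEFAULT_RATELIMIT_INTERVAL,
                              DEFAULT_RATELIMIT_BURST);

static void report_fault(struct pt_regs *regs)
{
        if (__ratelimit(&fault_rs))
                dump_fault_info(regs);
}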
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
|
|
Use reverse x-mas tree coding style for variable declarations everywhere.
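I.e. local variable declarations are ordered by descending line length,
for example:
        struct vm_area_struct *vma;
        unsigned long address;
        struct mm_struct *mm;
        vm_fault_t fault;
        int si_code;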
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
|
|
Remove wrong, outdated, and pointless comments. Adjust wording for
some comments, and adjust whitespace at some places.
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
|
|
Struct paicrypt_map is statically defined for each possible CPU.
Rework this and replace it by dynamically allocated data structures
created when a perf_event_open() system call is invoked.
It is replaced by an array with one pointer per possible CPU plus
reference counting. The array of pointers is allocated when the first
event is created. For each online CPU an event is installed on, a struct
paicrypt_map is allocated and a pointer to it is stored in the array:
CPU 0 1 2 3 ... N
+---+---+---+---+---+---+
paicrypt_root::mapptr--> | * | | | |...| |
+-|-+---+---+---+---+---+
|
|
\|/
+--------------+
| paicrypt_map |
+--------------+
With this approach the large data structure is only allocated when
an event is actually installed and used.
Also implement proper reference counting for allocation and removal.
PAI crypto counter events cannot be created while a CPU hot plug
add is being processed. This means a CPU hot plug add does not get
the necessary PAI event to record PAI cryptography counter increments
on the newly added CPU. There is no way to notify user space of a new
CPU or to create the necessary event infrastructure associated with
the file descriptor returned by the perf_event_open() system call.
However, a perf_event_open() system call issued after the CPU hot plug
add can use the newly added CPU.
Kernel CPU hot plug remove deletes the CPU and stops the PAI counters on
that CPU. When the process closes the file descriptor associated
with that event, the event's destroy() function removes any
allocated data structures and adjusts the reference counts.
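A condensed sketch of that scheme (names follow the text above where
possible, otherwise they are illustrative; locking and error paths of
the real driver in arch/s390/kernel/perf_pai_crypto.c are omitted):
struct paicrypt_root {
        refcount_t refcnt;              /* events currently using the array */
        struct paicrypt_map **mapptr;   /* one slot per possible CPU */
};

static struct paicrypt_root paicrypt_root;

/* called from the event initialization path of perf_event_open() */
static int paicrypt_root_alloc(void)
{
        if (refcount_inc_not_zero(&paicrypt_root.refcnt))
                return 0;               /* array already exists, took a reference */
        paicrypt_root.mapptr = kcalloc(num_possible_cpus(),
                                       sizeof(*paicrypt_root.mapptr), GFP_KERNEL);
        if (!paicrypt_root.mapptr)
                return -ENOMEM;
        refcount_set(&paicrypt_root.refcnt, 1);
        return 0;
}

/* called from the event's destroy() callback */
static void paicrypt_root_free(void)
{
        if (refcount_dec_and_test(&paicrypt_root.refcnt)) {
                kfree(paicrypt_root.mapptr);
                paicrypt_root.mapptr = NULL;
        }
}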
Signed-off-by: Thomas Richter <tmricht@linux.ibm.com>
Acked-by: Sumanth Korikkar <sumanthk@linux.ibm.com>
Acked-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
|
|
Get rid of this W=1 compile warning:
arch/s390/mm/vmem.c:502:6: warning: no previous prototype for ‘vmemmap_free’ [-Wmissing-prototypes]
502 | void vmemmap_free(unsigned long start, unsigned long end,
| ^~~~~~~~~~~~
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
|
|
Remove unnecessary __GFP_HIGHMEM masking, which was introduced with
commit 6326c26c1514 ("s390: convert various pgalloc functions to use
ptdescs"). Also remove a whitespace change which was introduced with
the same commit.
Link: https://lore.kernel.org/all/CAOzc2px-SFSnmjcPriiB3cm1fNj3+YC8S0VSp4t1QvDR0f4E2A@mail.gmail.com
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
|
|
Fix the following warning reported by sparse:
arch/s390/boot/vmem.c:170:15: warning: unused variable ‘entry’ [-Wunused-variable]
170 | pte_t entry;
| ^~~~~
Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
|
|
Implement load_unaligned_zeropad() and enable DCACHE_WORD_ACCESS to
speed up string operations in fs/dcache.c and fs/namei.c.
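The primitive reads a full word even if that crosses into an unmapped
page; the exception fixup returns the accessible bytes zero-padded
instead of oopsing. A minimal illustration of the kind of use made in
the dcache/namei fast paths (first_word() is just an example):
static inline unsigned long first_word(const char *name)
{
        /*
         * May read past the end of the string, but never faults: bytes
         * that would come from an unmapped page are returned as zero.
         */
        return load_unaligned_zeropad(name);
}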
Reviewed-by: Sven Schnelle <svens@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
|