path: root/sys/uvm
Commit message | Author | Age | Files | Lines
* Introduce UAO_USES_SWHASH() and use tabs instead of spaces in #defines.  [mpi, 2021-03-31, 1 file, -25/+26]
  No functional change, reduce the difference with NetBSD. ok jmatthew@
* Remove parentheses around return values to reduce the diff with NetBSD.  [mpi, 2021-03-26, 13 files, -176/+176]
  No functional change. ok mlarkin@
* Sync some comments in order to reduce the difference with NetBSD.  [mpi, 2021-03-20, 9 files, -292/+463]
  No functional change. ok kettenis@
* spelling  [jsg, 2021-03-12, 11 files, -27/+27]
  ok mpi@
* ansi  [jsg, 2021-03-05, 1 file, -9/+5]
* Modify `uvmexp.swpgonly' atomically, required for uvm_fault() w/o KERNEL_LOCK().  [mpi, 2021-03-04, 6 files, -17/+24]
  ok kettenis@
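The commit above replaces plain increments of a shared counter with atomic ones, since the fault handler may soon run without the big lock. A minimal userspace sketch of the same idea, using C11 `stdatomic` as a stand-in for the kernel's atomic operations (the function names here are illustrative, not UVM's):

```c
#include <stdatomic.h>

/* Userspace analogue of the `uvmexp.swpgonly' counter: once updates can
 * happen outside the KERNEL_LOCK(), plain ++/-- would race, so every
 * modification goes through an atomic read-modify-write instead. */
static atomic_int swpgonly;

void swap_page_added(void)   { atomic_fetch_add(&swpgonly, 1); }
void swap_page_removed(void) { atomic_fetch_sub(&swpgonly, 1); }
int  swap_pages_only(void)   { return atomic_load(&swpgonly); }
```

The atomic operations make each update indivisible, which is the minimum needed for a statistics counter touched by concurrent faulting threads.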
* Bring back previous fix for the UVM vnode deadlock.  [mpi, 2021-03-04, 3 files, -49/+95]
  tb@ reports that refaulting when there's contention on the vnode makes
  firefox start very slowly on his machine. To revisit when the fault
  handler is unlocked. ok anton@

  Original commit message:

  Fix a deadlock between uvn_io() and uvn_flush(). While faulting on a
  page backed by a vnode, uvn_io() will end up being called in order to
  populate newly allocated pages using I/O on the backing vnode.

  Before performing the I/O, newly allocated pages are flagged as busy by
  uvn_get(), that is, before uvn_io() tries to lock the vnode. Such pages
  could then end up being flushed by uvn_flush(), which has already
  acquired the vnode lock. Since such pages are flagged as busy,
  uvn_flush() will wait for them to be flagged as not busy. This will
  never happen, as uvn_io() cannot make progress until the vnode lock is
  released.

  Instead, grab the vnode lock before allocating and flagging pages as
  busy in uvn_get(). This extends the scope in uvn_get() in which the
  vnode is locked, but resolves the deadlock. ok mpi@

  Reported-by: syzbot+e63407b35dff08dbee02@syzkaller.appspotmail.com
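The fix described above is a lock-ordering change: take the vnode lock first, then mark pages busy, so a flusher already holding the vnode lock can never block forever on a busy page. A simplified model of that ordering, with illustrative structures (not the real UVM types):

```c
/* Sketch of the reordered uvn_get() path. Before the fix, pages were
 * flagged busy first and the vnode lock taken second; a concurrent
 * uvn_flush() holding the vnode lock would then sleep forever waiting
 * for the busy flag to clear. With the lock taken first, the flusher
 * either runs entirely before us or entirely after us. */
struct vnode_sketch { int locked; };
struct page_sketch  { int busy; };

void
uvn_get_sketch(struct vnode_sketch *vp, struct page_sketch *pg)
{
	vp->locked = 1;		/* step 1: vnode lock first */
	pg->busy = 1;		/* step 2: only then mark the page busy */
	/* ... I/O on the backing vnode populates the page here ... */
	pg->busy = 0;
	vp->locked = 0;
}
```

The busy flag is thus only ever set while the vnode lock is held, which removes the window the deadlock depended on.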
* Fix the deadlock between uvn_io() and uvn_flush() by restarting the fault.  [mpi, 2021-03-02, 2 files, -5/+10]
  Do not allow a faulting thread to sleep on a contended vnode lock, to
  prevent lock-ordering issues with the upcoming per-uobj lock. ok anton@
  Reported-by: syzbot+e63407b35dff08dbee02@syzkaller.appspotmail.com
* Revert the fix for the deadlock between uvn_io() and uvn_flush().  [mpi, 2021-03-02, 2 files, -95/+44]
  This fix (ab)used the vnode lock to serialize access to some fields of
  the pages associated with the UVM vnode object, which would create new
  deadlocks with the introduction of a per-uobj lock. ok anton@
* If an anon is associated with a page, acquire its lock before any modification.  [mpi, 2021-03-01, 1 file, -3/+34]
  This change should have been part of the previous anon-locking diff and
  is necessary to run the top part of uvm_fault() unlocked. ok jmatthew@
* Move the top part of uvm_fault_lower(), the lookup, into its own function.  [mpi, 2021-03-01, 1 file, -76/+98]
  The name and logic come from NetBSD in order to reduce the difference
  between the two code bases. No functional change intended. ok tb@
* Remove unused uvm_mapent_bias().  [jsg, 2021-02-23, 1 file, -35/+1]
  ok mpi@
* Move the `pgo_fault' handler outside of uvm_fault_lower().  [mpi, 2021-02-23, 1 file, -25/+32]
  Reduce differences with NetBSD and prepare for `uobj' locking. No
  functional change. ok chris@, kettenis@
* Comments & style cleanup, no functional change intended.  [mpi, 2021-02-16, 1 file, -224/+284]
  - Sync comments with NetBSD, including locking details.
  - Remove superfluous parentheses and spaces.
  - Add brackets, even if questionable, to reduce the diff with NetBSD.
  - Use for (;;) instead of while (1).
  - Rename a variable from 'result' to 'error'.
  - Move uvm_fault() and uvm_fault_upper_lookup().
  - Add a locking assert in uvm_fault_upper_lookup().
  ok tb@, mlarkin@
* Fix double unlock in uvmfault_anonget().  [mpi, 2021-02-15, 1 file, -3/+3]
  Reported by and ok jsg@
* Revert the conversion of the per-process thread list into a SMR_TAILQ.  [mpi, 2021-02-08, 1 file, -2/+2]
  We did not reach a consensus about using SMR to unlock
  single_thread_set(), so there's no point in keeping this change.
* (re)Introduce locking for amaps & anons.  [mpi, 2021-01-19, 8 files, -98/+291]
  A rwlock is attached to every amap and is shared with all its anons.
  The same lock will be used by multiple amaps if they have anons in
  common. This should be enough to get the upper part of the fault
  handler out of the KERNEL_LOCK(), which seems to bring up to 20%
  improvement in builds.

  This is based/copied/adapted from the most recent work done in NetBSD,
  which is an evolution of the previous simple_lock scheme. Tested by
  many, thanks! ok kettenis@, mvs@
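The scheme above hangs one lock off each amap and makes every anon it contains point at that same lock object. A small sketch of the sharing structure, with a plain reference count standing in for the kernel's rwlock and the real amap/anon layouts (all names here are illustrative):

```c
#include <stdlib.h>

/* Sketch of the shared-lock scheme: the amap owns a lock object and
 * each anon added to it aliases the same lock, so taking the amap's
 * lock also covers all its anons. A bare refcount models lifetime. */
struct vmlock { int refs; };
struct anon   { struct vmlock *an_lock; };
struct amap   { struct vmlock *am_lock; };

struct amap *
amap_alloc_sketch(void)
{
	struct amap *am = malloc(sizeof(*am));
	am->am_lock = malloc(sizeof(*am->am_lock));
	am->am_lock->refs = 1;		/* the amap's own reference */
	return am;
}

/* Adding an anon shares the amap's lock and takes a reference on it. */
void
amap_add_anon_sketch(struct amap *am, struct anon *an)
{
	an->an_lock = am->am_lock;
	am->am_lock->refs++;
}
```

Because anons in two amaps can share one lock object, locking any one owner serializes all of them, which is exactly what the fault handler needs before it drops the KERNEL_LOCK().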
* Move `access_type' to the fault context.  [mpi, 2021-01-16, 1 file, -20/+20]
  Fix a regression, introduced in the previous refactoring, where the
  value wasn't correctly overwritten for wired mappings. ok mvs@
* Assert that the KERNEL_LOCK() is held in uao_set_swslot().  [mpi, 2021-01-11, 1 file, -1/+3]
  ok kettenis@
* Enforce range with sysctl_int_bounded() in swap_encrypt_ctl().  [gnezdo, 2021-01-09, 1 file, -2/+3]
  OK millert@
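The change above routes the sysctl write through a bounds-checking helper instead of storing whatever value userland supplies. A minimal analogue of the range-enforcement idea (this is not the real `sysctl_int_bounded()` signature, just the pattern it implements):

```c
#include <errno.h>

/* Sketch of a bounded integer setter: values outside [lo, hi] are
 * rejected with EINVAL and the variable is left untouched, so a sysctl
 * knob can never be driven to a nonsensical value. */
int
set_int_bounded(int *var, int newval, int lo, int hi)
{
	if (newval < lo || newval > hi)
		return EINVAL;
	*var = newval;
	return 0;
}
```

Centralizing the check means every integer sysctl gets the same rejection behavior instead of ad-hoc (and easily forgotten) validation at each call site.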
* uvm: uvm_fault_lower(): don't sleep on lbolt.  [cheloha, 2021-01-02, 1 file, -2/+3]
  We can simulate the current behavior without lbolt by sleeping for 1
  second on the &nowake channel. ok mpi@
* Use per-CPU counters for fault and stats counters reached in uvm_fault().  [mpi, 2020-12-28, 5 files, -56/+147]
  ok kettenis@, dlg@
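Per-CPU counters trade read cost for write scalability: each CPU bumps its own slot with no contention, and readers sum the slots on demand. A compact sketch of the pattern (the fixed array and function names are illustrative; the kernel uses its own percpu counters API):

```c
/* Sketch of per-CPU statistics counters: writes touch only the local
 * CPU's slot, so the hot fault path never bounces a shared cache line;
 * reads pay the cost of summing all slots. */
#define NCPU_SKETCH 4
static unsigned long fault_counts[NCPU_SKETCH];

void
count_fault(int cpu)
{
	fault_counts[cpu]++;	/* contention-free on the fast path */
}

unsigned long
fault_total(void)
{
	unsigned long sum = 0;
	for (int i = 0; i < NCPU_SKETCH; i++)
		sum += fault_counts[i];
	return sum;
}
```

This fits statistics well because writes are frequent and must be cheap, while an occasionally slightly-stale total is acceptable to readers.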
* Remove the assertion in uvm_km_pgremove().  [mpi, 2020-12-15, 1 file, -2/+1]
  At least some initialization code on i386 calls it without the
  KERNEL_LOCK(). Found the hard way by jungle Boogie and Hrvoje Popovski.
* Grab the KERNEL_LOCK() or ensure it's held when poking at swap data structures.  [mpi, 2020-12-14, 3 files, -8/+18]
  This will allow uvm_fault_upper() to enter swap-related functions
  without holding the KERNEL_LOCK(). ok jmatthew@
* Use a while loop instead of goto in uvm_fault().  [mpi, 2020-12-08, 1 file, -34/+23]
  ok jmatthew@, tb@
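The refactor above replaces a backward `goto ReFault` with a loop, making the retry structure of the fault handler explicit. A toy before/after illustration (the `try_fault()` helper is hypothetical, standing in for one iteration of the fault logic):

```c
/* One simulated fault attempt: returns nonzero to request a retry.
 * This toy version succeeds on the third call. */
static int
try_fault(int *attempts)
{
	return ++(*attempts) < 3;
}

/* Before: retry via a backward goto, the style being removed. */
int
fault_with_goto(int *attempts)
{
ReFault:
	if (try_fault(attempts))
		goto ReFault;
	return 0;
}

/* After: the same control flow as an explicit while loop. */
int
fault_with_while(int *attempts)
{
	while (try_fault(attempts))
		continue;
	return 0;
}
```

Both forms behave identically; the loop simply states the "retry until done" intent directly instead of encoding it in a jump.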
* Convert the per-process thread list into a SMR_TAILQ.  [mpi, 2020-12-07, 1 file, -2/+2]
  Currently all iterations are done under the KERNEL_LOCK() and therefore
  use the *_LOCKED() variant. From and ok claudio@
* Document that the page queue must only be locked if the page is managed.  [mpi, 2020-12-02, 1 file, -5/+7]
  ok kettenis@
* Turn uvm_pagealloc() mp-safe by checking the uvmexp global with the pageqlock held.  [mpi, 2020-12-01, 4 files, -59/+62]
  Use a new flag, UVM_PLA_USERESERVE, to tell uvm_pmr_getpages() that
  using kernel reserved pages is allowed. Merge duplicated checks waking
  the pagedaemon into uvm_pmr_getpages().

  Add two more pages to the amount reserved for the kernel to compensate
  for the fact that the pagedaemon may now consume an additional page.

  Document locking of some uvmexp fields. ok kettenis@
* Set the correct IPL for `pageqlock' now that it is grabbed from interrupt context.  [mpi, 2020-11-27, 1 file, -2/+2]
  Reported by AIsha Tammy. ok kettenis@
* Grab the `pageqlock' before calling uvm_pageclean() as intended.  [mpi, 2020-11-24, 5 files, -11/+35]
  Document which global data structures require this lock and add some
  asserts where the lock should be held. Some code paths are still
  incorrect and should be revisited. ok jmatthew@
* Move logic handling lower faults, case 2, to its own function.  [mpi, 2020-11-19, 1 file, -63/+77]
  No functional change. ok kettenis@, jmatthew@, tb@
* Remove Case2 goto, use a simple if () instead.  [mpi, 2020-11-16, 1 file, -23/+17]
  ok tb@, jmatthew@
* Use a helper to look for an existing mapping & return if there's an anon.  [mpi, 2020-11-13, 1 file, -56/+81]
  Separate fault-handling code for types 1 and 2 and reduce differences
  with NetBSD. ok tb@, jmatthew@, kettenis@
* Move the logic dealing with faults 1A & 1B to its own function.  [mpi, 2020-11-13, 1 file, -151/+173]
  Some minor documentation improvements and style nits, but this should
  not contain any functional change. ok tb@
* Introduce amap_adjref_anons(), a helper to reference-count amaps.  [mpi, 2020-11-13, 2 files, -51/+61]
  Reduce code duplication, reduce differences with NetBSD, and simplify
  the upcoming locking diff. ok jmatthew@
* Remove unused `anon' argument from uvmfault_unlockall().  [mpi, 2020-11-06, 3 files, -25/+23]
  It won't be used when amap and anon locking are introduced. This
  "fixes" passing an unrelated/uninitialized pointer in an error path in
  case of memory shortage. ok kettenis@
* Fix a deadlock between uvn_io() and uvn_flush().  [anton, 2020-10-26, 2 files, -44/+95]
  While faulting on a page backed by a vnode, uvn_io() will end up being
  called in order to populate newly allocated pages using I/O on the
  backing vnode.

  Before performing the I/O, newly allocated pages are flagged as busy by
  uvn_get(), that is, before uvn_io() tries to lock the vnode. Such pages
  could then end up being flushed by uvn_flush(), which has already
  acquired the vnode lock. Since such pages are flagged as busy,
  uvn_flush() will wait for them to be flagged as not busy. This will
  never happen, as uvn_io() cannot make progress until the vnode lock is
  released.

  Instead, grab the vnode lock before allocating and flagging pages as
  busy in uvn_get(). This extends the scope in uvn_get() in which the
  vnode is locked, but resolves the deadlock. ok mpi@

  Reported-by: syzbot+e63407b35dff08dbee02@syzkaller.appspotmail.com
* We will soon have DRM on powerpc64.  [kettenis, 2020-10-24, 1 file, -2/+3]
* Move the backwards-stack vm_minsaddr check from hppa trap.c to uvm_grow(),  [deraadt, 2020-10-21, 1 file, -1/+5]
  within the correct #ifdef of course. ok kettenis
* Constify and use C99 initializers for "struct uvm_pagerops".  [mpi, 2020-10-21, 7 files, -41/+53]
  While here, put some KERNEL_ASSERT_LOCKED() in the functions called
  from the page fault handler. The removal of locking of `uobj' will need
  to be revisited; these are good indicators that something is missing
  and that many comments are lying. ok kettenis
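The constify-plus-C99-initializer style applied above names each field explicitly and makes the function-pointer table read-only. A sketch of what that looks like for a pager-ops-style table (the struct layout and handlers here are illustrative, not the real `struct uvm_pagerops`):

```c
/* Sketch of a constified operations table. Designated initializers
 * name the slot being filled, so reordering struct fields can't
 * silently wire up the wrong handler, and `const' keeps the table
 * out of writable memory. */
struct pagerops_sketch {
	void (*pgo_init)(void);
	int  (*pgo_get)(int);
};

static void demo_init(void) { }
static int  demo_get(int n) { return n; }

static const struct pagerops_sketch demo_pager = {
	.pgo_init = demo_init,
	.pgo_get  = demo_get,
};

int
pager_call_get(int n)
{
	return demo_pager.pgo_get(n);
}
```

Compared to positional initializers, this style is both self-documenting and robust against struct layout changes, which is why it is the preferred form for kernel operation vectors.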
* Move the top part of uvm_fault() (lookups, checks, etc) into its own function.  [mpi, 2020-10-21, 1 file, -113/+170]
  The name, uvm_fault_check(), and logic come from NetBSD, as reducing
  the diff with their tree is useful to learn from their experience and
  backport fixes. No functional change intended. ok kettenis@
* Remove guard; uao_init() is called only once and no other function uses one.  [mpi, 2020-10-20, 1 file, -7/+1]
  ok kettenis@
* Clear vmspace pointer in struct process before calling uvmspace_free(9).  [kettenis, 2020-10-19, 1 file, -2/+4]
  ok patrick@, mpi@
* Serialize accesses to "struct vmspace" and document its refcounting.  [mpi, 2020-10-19, 3 files, -12/+31]
  The underlying vm_space lock is used as a substitute for the
  KERNEL_LOCK() in uvm_grow() to make sure `vm_ssize' is not corrupted.
  ok anton@, kettenis@
* typo in comment  [mpi, 2020-10-13, 1 file, -2/+2]
* Use KASSERT() instead of if (x) panic() for NULL dereference checks.  [mpi, 2020-10-12, 1 file, -20/+17]
  Improves readability and reduces the difference with NetBSD without
  compromising debuggability on RAMDISK. While here, also use local
  variables to help with future locking and reference counting.
  ok semarie@
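The pattern replaced above is an explicit `if (x == NULL) panic(...)` check; an assert macro states the invariant positively and can compile away on stripped-down kernels. A userspace sketch of the idea (the macro name is illustrative; `NDEBUG` here models a build without assertion checks, analogous to a RAMDISK kernel):

```c
#include <stdio.h>
#include <stdlib.h>

/* Sketch of the KASSERT() pattern: the condition names the invariant
 * that must hold, and when assertions are compiled out the check
 * vanishes entirely instead of leaving a dead panic branch. */
#ifndef NDEBUG
#define KASSERT_SKETCH(e)						\
	do {								\
		if (!(e)) {						\
			fprintf(stderr, "assertion failed: %s\n", #e);	\
			abort();					\
		}							\
	} while (0)
#else
#define KASSERT_SKETCH(e) ((void)0)
#endif

int
deref_checked(const int *p)
{
	KASSERT_SKETCH(p != NULL);	/* replaces: if (p == NULL) panic() */
	return *p;
}
```

The assert form reads as documentation of a precondition, while the old `if`/`panic` form reads as a runtime branch, even though both trap the same bug.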
* Remove unnecessary includes.  [mpi, 2020-10-09, 1 file, -8/+1]
  ok deraadt@
* Do not release the KERNEL_LOCK() when mmap(2)ing files.  [mpi, 2020-10-07, 1 file, -6/+11]
  A previous attempt to unlock amap & anon exposed a race in vnode
  reference counting. So be conservative with the code paths that we're
  not fully moving out of the KERNEL_LOCK(), to allow us to concentrate
  on one area at a time.

  The panic reported was:

    panic: vref used where vget required
    db_enter() at db_enter+0x5
    panic() at panic+0x129
    vref(ffffff03b20d29e8) at vref+0x5d
    uvn_attach(1010000,ffffff03a5879dc0) at uvn_attach+0x11d
    uvm_mmapfile(7,ffffff03a5879dc0,2,1,13,100000012) at uvm_mmapfile+0x12c
    sys_mmap(c50,ffff8000225f82a0,1) at sys_mmap+0x604
    syscall() at syscall+0x279

  Note that this change has no effect as long as mmap(2) is still
  executed with ze big lock. ok kettenis@
* Recent changes for PROT_NONE pages to not count against resource limits  [deraadt, 2020-10-04, 1 file, -2/+2]
  failed to note this also guarded against heavy amap allocations in the
  MAP_SHARED case. Bring back the checks for MAP_SHARED. From semarie,
  ok kettenis
  https://syzkaller.appspot.com/bug?extid=d80de26a8db6c009d060
* Introduce a helper to check if all available swap is in use.  [mpi, 2020-09-29, 4 files, -18/+27]
  This reduces code duplication, reduces the diff with NetBSD, and will
  help to introduce locks around global variables. ok cheloha@
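Folding the "is swap full?" comparison into one helper, as this commit does, gives every call site the same semantics and a single place to add locking later. A sketch of what such a helper might look like (field names loosely mirror uvmexp, but both the struct and the exact condition are illustrative, not the real kernel code):

```c
/* Sketch of an "all available swap is in use" predicate: with no swap
 * devices configured, swap is trivially full; otherwise it is full
 * once the pages living only on swap reach the total swap pages. */
struct swap_stats {
	int nswapdev;	/* number of configured swap devices */
	int swpages;	/* total swap pages available */
	int swpgonly;	/* pages whose only copy is on swap */
};

int
swap_is_full(const struct swap_stats *s)
{
	if (s->nswapdev == 0)
		return 1;
	return s->swpgonly >= s->swpages;
}
```

Call sites then test `swap_is_full(...)` instead of repeating the comparison, so a future lock around the counters needs to be taken in exactly one function.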