| Commit message | Author | Age | Files | Lines |
| |
No functional change, reduce the difference with NetBSD.
ok jmatthew@
| |
No functional change.
ok mlarkin@
| |
No functional change.
ok kettenis@
| |
ok mpi@
| |
ok kettenis@
| |
tb@ reports that refaulting when there's contention on the vnode makes
firefox start very slowly on his machine. To be revisited once the
fault handler is unlocked.
ok anton@
Original commit message:
Fix a deadlock between uvn_io() and uvn_flush(). While faulting on a
page backed by a vnode, uvn_io() will end up being called in order to
populate newly allocated pages using I/O on the backing vnode. Before
performing the I/O, newly allocated pages are flagged as busy by
uvn_get(), that is before uvn_io() tries to lock the vnode. Such pages
could then end up being flushed by uvn_flush() which already has
acquired the vnode lock. Since such pages are flagged as busy,
uvn_flush() will wait for them to be flagged as not busy. This will
never happen, as uvn_io() cannot make progress until the vnode lock is
released.
Instead, grab the vnode lock before allocating and flagging pages as
busy in uvn_get(). This extends the region of uvn_get() in which the
vnode is locked, but resolves the deadlock.
ok mpi@
Reported-by: syzbot+e63407b35dff08dbee02@syzkaller.appspotmail.com
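To make the ordering problem easier to see, here is a small user-space
sketch of the scenario described above (a pthread mutex stands in for
the vnode lock and a boolean for the page's busy flag; names and
structure are illustrative only, this is not the kernel code):

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t vnode_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t page_mtx = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t page_cv = PTHREAD_COND_INITIALIZER;
static bool page_busy;

/* Flush side: holds the "vnode lock" and waits for busy pages. */
void
flush_side(void)
{
	pthread_mutex_lock(&vnode_lock);
	pthread_mutex_lock(&page_mtx);
	while (page_busy)		/* never cleared in the broken ordering */
		pthread_cond_wait(&page_cv, &page_mtx);
	pthread_mutex_unlock(&page_mtx);
	/* ... write the pages back ... */
	pthread_mutex_unlock(&vnode_lock);
}

/* Broken ordering: mark the page busy first, take the vnode lock second.
 * If flush_side() already holds vnode_lock, neither thread can proceed. */
void
fault_side_broken(void)
{
	pthread_mutex_lock(&page_mtx);
	page_busy = true;		/* "uvn_get() flags the page busy" */
	pthread_mutex_unlock(&page_mtx);

	pthread_mutex_lock(&vnode_lock);	/* "uvn_io() locks the vnode": blocks */
	/* ... perform the I/O, then clear the busy flag ... */
	pthread_mutex_unlock(&vnode_lock);
}

/* Fixed ordering: take the vnode lock before marking the page busy, so
 * whoever holds the vnode lock can always make progress. */
void
fault_side_fixed(void)
{
	pthread_mutex_lock(&vnode_lock);
	pthread_mutex_lock(&page_mtx);
	page_busy = true;
	pthread_mutex_unlock(&page_mtx);
	/* ... perform the I/O ... */
	pthread_mutex_lock(&page_mtx);
	page_busy = false;
	pthread_cond_broadcast(&page_cv);
	pthread_mutex_unlock(&page_mtx);
	pthread_mutex_unlock(&vnode_lock);
}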
| |
Do not allow a faulting thread to sleep on a contended vnode lock, to
prevent lock ordering issues with the upcoming per-uobj lock.
ok anton@
Reported-by: syzbot+e63407b35dff08dbee02@syzkaller.appspotmail.com
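The usual shape of such a change is a non-sleeping lock attempt that
asks the caller to drop what it holds and refault; here is a
hypothetical user-space sketch of that pattern (names invented, not the
committed diff):

#include <pthread.h>

#define FAULT_OK	0
#define FAULT_REFAULT	1	/* caller must unwind and retry the fault */

/*
 * Try to take the backing object's lock without sleeping.  On
 * contention, return FAULT_REFAULT so the fault handler releases
 * everything it holds and restarts, instead of sleeping with other
 * locks held (which is where the lock ordering problem would appear).
 */
int
fault_lock_backing_object(pthread_mutex_t *object_lock)
{
	if (pthread_mutex_trylock(object_lock) != 0)
		return FAULT_REFAULT;
	/* ... bring the pages in, then ... */
	pthread_mutex_unlock(object_lock);
	return FAULT_OK;
}

The revert noted above ("refaulting when there's contention on the
vnode makes firefox start very slowly") is the cost of this pattern:
every contended attempt restarts the whole fault.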
| |
This fix (ab)uses the vnode lock to serialize access to some fields of
the pages associated with a UVM vnode object, and this will create new
deadlocks with the introduction of a per-uobj lock.
ok anton@
| |
This change should have been part of the previous anon-locking diff and is
necessary to run the top part of uvm_fault() unlocked.
ok jmatthew@
| |
The name and logic come from NetBSD in order to reduce the difference
between the two code bases.
No functional change intended.
ok tb@
| |
ok mpi@
| |
Reduce differences with NetBSD and prepare for `uobj' locking.
No functional change. ok chris@, kettenis@
| |
- Sync comments with NetBSD, including locking details.
- Remove superfluous parentheses and spaces.
- Add brackets, even if questionable, to reduce the diff with NetBSD.
- Use for (;;) instead of while (1).
- Rename a variable from 'result' to 'error'.
- Move uvm_fault() and uvm_fault_upper_lookup().
- Add a locking assert in uvm_fault_upper_lookup().
ok tb@, mlarkin@
| |
Reported by and ok jsg@
| |
We did not reach a consensus about using SMR to unlock single_thread_set()
so there's no point in keeping this change.
| |
An rwlock is attached to every amap and is shared with all its anons.
The same lock will be used by multiple amaps if they have anons in
common.
This should be enough to get the upper part of the fault handler out of
the KERNEL_LOCK(), which seems to bring up to 20% improvement in builds.
This is based/copied/adapted from the most recent work done in NetBSD,
which is an evolution of the preceding simple_lock scheme.
Tested by many, thanks!
ok kettenis@, mvs@
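A simplified sketch of the arrangement (struct and field names here are
illustrative, not the exact OpenBSD definitions):

#include <sys/rwlock.h>

struct example_anon {
	struct rwlock		 *an_lock;	/* shared with the owning amap */
	/* ... reference count, resident page, swap slot ... */
};

struct example_amap {
	struct rwlock		 *am_lock;	/* allocated with the amap */
	int			  am_nslot;	/* number of slots */
	struct example_anon	**am_anon;	/* anons protected by am_lock */
};

/*
 * An anon added to an amap inherits the amap's lock, so taking am_lock
 * serializes the whole upper layer for that mapping; amaps that end up
 * sharing anons must then also share the same rwlock.
 */
void
example_amap_add(struct example_amap *amap, int slot,
    struct example_anon *anon)
{
	rw_assert_wrlock(amap->am_lock);
	anon->an_lock = amap->am_lock;
	amap->am_anon[slot] = anon;
}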
| |
Fix a regression where the value wasn't correctly overwritten for wired
mappings, introduced in the previous refactoring.
ok mvs@
| |
ok kettenis@
| |
OK millert@
| |
We can simulate the current behavior without lbolt by sleeping for 1
second on the &nowake channel.
ok mpi@
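In kernel code the replacement idiom looks roughly like this (a hedged
sketch of the idea, not a specific committed hunk; it assumes the
global `nowake' channel and tsleep_nsec(9) are visible here):

#include <sys/param.h>
#include <sys/systm.h>

/*
 * Nothing ever calls wakeup(&nowake), so this sleep always ends by
 * timeout, giving the same "wait about a second" behaviour that the
 * once-per-second wakeup on lbolt used to provide.
 */
void
example_wait_a_second(void)
{
	tsleep_nsec(&nowake, PPAUSE, "nowake", SEC_TO_NSEC(1));
}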
| |
ok kettenis@, dlg@
| |
At least some initialization code on i386 calls it w/o KERNEL_LOCK().
Found the hard way by jungle Boogie and Hrvoje Popovski.
| |
This will allow uvm_fault_upper() to enter swap-related functions without
holding the KERNEL_LOCK().
ok jmatthew@
| |
ok jmatthew@, tb@
| |
Currently all iterations are done under KERNEL_LOCK() and therefore use
the *_LOCKED() variant.
From and ok claudio@
| |
ok kettenis@
| |
Use a new flag, UVM_PLA_USERESERVE, to tell uvm_pmr_getpages() that using
kernel reserved pages is allowed.
Merge the duplicated checks that wake the pagedaemon into
uvm_pmr_getpages().
Add two more pages to the amount reserved for the kernel to compensate
for the fact that the pagedaemon may now consume an additional page.
Document locking of some uvmexp fields.
ok kettenis@
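The gist of the reserve check can be modelled in a few lines (a
user-space model; only the flag name comes from the commit, everything
else, including the flag's value, is made up for illustration):

#include <stddef.h>

#define UVM_PLA_USERESERVE	0x0040	/* illustrative value only */

struct fake_counters {
	size_t	free;			/* pages currently free */
	size_t	reserve_kernel;		/* pages held back for the kernel */
};

/*
 * Ordinary requests must leave the kernel reserve untouched; requests
 * carrying UVM_PLA_USERESERVE may dip into it.  Returns 0 when the
 * allocation can proceed.
 */
int
model_can_allocate(const struct fake_counters *c, size_t npages, int flags)
{
	size_t keep = (flags & UVM_PLA_USERESERVE) ? 0 : c->reserve_kernel;

	if (c->free < npages + keep)
		return -1;	/* refuse (and wake the pagedaemon) */
	return 0;
}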
| |
Reported by Aisha Tammy.
ok kettenis@
| |
Document which global data structures require this lock and add some
asserts where the lock should be held.
Some code paths are still incorrect and should be revisited.
ok jmatthew@
| |
No functional change.
ok kettenis@, jmatthew@, tb@
| |
ok tb@, jmatthew@
| |
Separate fault handling code for types 1 and 2 and reduce differences
with NetBSD.
ok tb@, jmatthew@, kettenis@
| |
Some minor documentation improvements and style nits, but this should
not contain any functional change.
ok tb@
| |
Reduce code duplication, reduce differences with NetBSD and simplify
upcoming locking diff.
ok jmatthew@
| |
It won't be used once amap and anon locking is introduced.
This "fixes" passing an unrelated/uninitialized pointer in an error path
in case of memory shortage.
ok kettenis@
| |
Fix a deadlock between uvn_io() and uvn_flush(). While faulting on a
page backed by a vnode, uvn_io() will end up being called in order to
populate newly allocated pages using I/O on the backing vnode. Before
performing the I/O, newly allocated pages are flagged as busy by
uvn_get(), that is before uvn_io() tries to lock the vnode. Such pages
could then end up being flushed by uvn_flush() which already has
acquired the vnode lock. Since such pages are flagged as busy,
uvn_flush() will wait for them to be flagged as not busy. This will
never happen, as uvn_io() cannot make progress until the vnode lock is
released.
Instead, grab the vnode lock before allocating and flagging pages as
busy in uvn_get(). This extends the region of uvn_get() in which the
vnode is locked, but resolves the deadlock.
ok mpi@
Reported-by: syzbot+e63407b35dff08dbee02@syzkaller.appspotmail.com
| |
within the correct #ifdef of course.
ok kettenis
| |
While here, put some KERNEL_ASSERT_LOCKED() in the functions called from
the page fault handler. The removal of locking of `uobj' will need to be
revisited, and these are good indicators that something is missing and
that many comments are lying.
ok kettenis
| |
The name, uvm_fault_check(), and logic come from NetBSD, as reducing the
diff with their tree is useful to learn from their experience and to
backport fixes.
No functional change intended.
ok kettenis@
| |
ok kettenis@
| |
ok patrick@, mpi@
| |
The underlying vm_space lock is used as a substitute for the KERNEL_LOCK()
in uvm_grow() to make sure `vm_ssize' is not corrupted.
ok anton@, kettenis@
| |
Improves readability and reduces the difference with NetBSD without
compromising debuggability on RAMDISK.
While here, also use local variables to help with future locking and
reference counting.
ok semarie@
| |
ok deraadt@
| |
The previous attempt to unlock amap & anon exposed a race in vnode
reference counting. So be conservative with the code paths that we're
not fully moving out of the KERNEL_LOCK(), to allow us to concentrate
on one area at a time.
The panic reported was:
....panic: vref used where vget required
....db_enter() at db_enter+0x5
....panic() at panic+0x129
....vref(ffffff03b20d29e8) at vref+0x5d
....uvn_attach(1010000,ffffff03a5879dc0) at uvn_attach+0x11d
....uvm_mmapfile(7,ffffff03a5879dc0,2,1,13,100000012) at uvm_mmapfile+0x12c
....sys_mmap(c50,ffff8000225f82a0,1) at sys_mmap+0x604
....syscall() at syscall+0x279
Note that this change has no effect as long as mmap(2) is still executed with
ze big lock.
ok kettenis@
| |
failed to note this also guarded against heavy amap allocations in the
MAP_SHARED case. Bring back the checks for MAP_SHARED
from semarie, ok kettenis
https://syzkaller.appspot.com/bug?extid=d80de26a8db6c009d060
| |
This reduces code duplication, reduces the diff with NetBSD and will help
to introduce locks around global variables.
ok cheloha@