| Commit message | Author | Age | Files | Lines |
| |
No functional change.
ok mlarkin@
| |
ok mpi@
| |
tb@ reports that refaulting when there's contention on the vnode makes
firefox start very slowly on his machine. To be revisited once the fault
handler is unlocked.
ok anton@
Original commit message:
Fix a deadlock between uvn_io() and uvn_flush(). While faulting on a
page backed by a vnode, uvn_io() will end up being called in order to
populate newly allocated pages using I/O on the backing vnode. Before
performing the I/O, newly allocated pages are flagged as busy by
uvn_get(), that is before uvn_io() tries to lock the vnode. Such pages
could then end up being flushed by uvn_flush() which already has
acquired the vnode lock. Since such pages are flagged as busy,
uvn_flush() will wait for them to be flagged as not busy. This will
never happen, as uvn_io() cannot make progress until the vnode lock is
released.
Instead, grab the vnode lock before allocating and flagging pages as
busy in uvn_get(). This extends the scope in uvn_get() over which the
vnode is locked, but resolves the deadlock.
ok mpi@
Reported-by: syzbot+e63407b35dff08dbee02@syzkaller.appspotmail.com
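A sketch of the ordering described above (illustrative pseudo-C, heavily
simplified; the real paths live in uvm_vnode.c):

    /* Thread A, fault path, before the fix: */
    uvn_get(...);           /* allocates pages and flags them PG_BUSY */
    uvn_io(...);            /* only now tries to lock the vnode */

    /* Thread B, already holding the vnode lock: */
    uvn_flush(...);         /* sees the PG_BUSY pages and sleeps until they
                             * are un-busied, which thread A cannot do
                             * without the vnode lock: deadlock */

    /* After the fix, uvn_get() locks the vnode before busying pages, so
     * both paths acquire vnode lock -> busy pages in the same order. */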
| |
Do not allow a faulting thread to sleep on a contended vnode lock, to
prevent lock ordering issues with the upcoming per-uobj lock.
ok anton@
Reported-by: syzbot+e63407b35dff08dbee02@syzkaller.appspotmail.com
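A hedged sketch of the approach (the lock flags are real, but whether the
fault path signals a refault this way is an assumption for illustration):

    /* Try the vnode lock without sleeping; on contention, back out and
     * let the fault be retried instead of blocking the faulting thread. */
    if (vn_lock(vp, LK_EXCLUSIVE | LK_NOWAIT) != 0)
            return (VM_PAGER_REFAULT);  /* assumed return value */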
| |
This fix (ab)uses the vnode lock to serialize access to some fields of
the pages associated with a UVM vnode object, and this will create new
deadlocks with the introduction of a per-uobj lock.
ok anton@
| |
Fix a deadlock between uvn_io() and uvn_flush(). While faulting on a
page backed by a vnode, uvn_io() will end up being called in order to
populate newly allocated pages using I/O on the backing vnode. Before
performing the I/O, newly allocated pages are flagged as busy by
uvn_get(), that is before uvn_io() tries to lock the vnode. Such pages
could then end up being flushed by uvn_flush() which already has
acquired the vnode lock. Since such pages are flagged as busy,
uvn_flush() will wait for them to be flagged as not busy. This will
never happen, as uvn_io() cannot make progress until the vnode lock is
released.
Instead, grab the vnode lock before allocating and flagging pages as
busy in uvn_get(). This extends the scope in uvn_get() over which the
vnode is locked, but resolves the deadlock.
ok mpi@
Reported-by: syzbot+e63407b35dff08dbee02@syzkaller.appspotmail.com
| |
While here, put some KERNEL_ASSERT_LOCKED() in the functions called from
the page fault handler. The removal of locking of `uobj' will need to be
revisited; these asserts are a good indicator that something is missing
and that many comments are lying.
ok kettenis
| |
contiguous pages.
ok beck@
| |
ok visa@, jca@
| |
UVM_WAIT() doesn't provide much of a useful abstraction. All callers
tsleep forever and no callers set PCATCH, so only 2 of 4 parameters are
actually used. Might as well just use tsleep_nsec(9) directly and make
the uvm code a bit less specialized.
Suggested by mpi@.
ok mpi@ visa@ millert@
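For illustration, the shape of a converted call site (a sketch; the actual
wait channel, priority, and message vary per caller):

    /* Before: UVM_WAIT(pg, FALSE, "wmsg", 0);  After: */
    tsleep_nsec(pg, PVM, "wmsg", INFSLP);   /* sleep forever, no PCATCH */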
| |
kernel calls to ensure that the UVM cache for memory-mapped files is
up to date.
ok mpi@
| |
unnecessary because curproc always does the locking.
OK mpi@
| |
curproc that does the locking or unlocking, so the proc parameter
is pointless and can be dropped.
OK mpi@, deraadt@
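For illustration, the shape of such a change, using VOP_UNLOCK() as an
assumed example (the entry's first line is truncated, so the exact
functions affected are not named here):

    /* Before (sketch): VOP_UNLOCK(vp, p);  After: */
    VOP_UNLOCK(vp);     /* the lock owner is always curproc */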
| |
issues with the upcoming NFSnode locks.
ok visa@
| |
revoked while syncing disk, so the processes lose their executable
pages. Instead of killing them with a SIGBUS after a page fault,
just sleep. This should prevent init from dying without pages,
followed by a kernel panic.
initial diff from tedu@; OK deraadt@ tedu@
| |
Tested by Hrvoje Popovski.
| |
Recursions are still marked as XXXSMP.
ok deraadt@, bluhm@
| |
between mount locks and inode locks, which may have been recorded in either order
ok visa@
| |
For the moment the NET_LOCK() is always taken by threads running under
KERNEL_LOCK(). That means it doesn't buy us anything except a possible
deadlock that we did not spot. So make sure this doesn't happen; we'll
have plenty of time in the next release cycle to stress test it.
ok visa@
| |
Recursions are currently known and marked as XXXSMP.
Please report any assert to bugs@
| |
vm_page structs go into three trees: uvm_objtree, uvm_pmr_addr, and
uvm_pmr_size. All of these have been moved to the RBT code.
This should give us a decent chunk of code space back.
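A sketch of the RBT interface these trees now use (the macros are from
<sys/tree.h>; the field and comparison names below are illustrative):

    RBT_HEAD(uvm_objtree, vm_page);         /* tree head type */
    struct vm_page {
            RBT_ENTRY(vm_page) objt;        /* per-node linkage */
            voff_t offset;                  /* lookup key */
    };
    RBT_PROTOTYPE(uvm_objtree, vm_page, objt, vm_page_cmp);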
| |
torture tested on amd64, i386 and macppc
ok beck mpi stefan
"the change looks right" deraadt
| |
ok miod@
| |
have any direct symbols used. Tested for indirect use by compiling
amd64/i386/sparc64 kernels.
ok tedu@ deraadt@
| |
era. Fix UVM-including C files to include lock.h or atomic.h as necessary.
ok deraadt
| |
Objective: vnode.h doesn't include uvm_extern.h anymore.
Followup changes: include uvm_extern.h or lock.h where necessary.
ok and help from deraadt
| |
PROT_NONE, PROT_READ, PROT_WRITE, and PROT_EXEC from mman.h.
PROT_MASK is introduced as the one true way of extracting those bits.
Remove UVM_ADV_* wrapper, using the standard names.
ok doug guenther kettenis
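A sketch of the intended use (PROT_MASK comes from <sys/mman.h>; the
extraction line itself is illustrative):

    prot = flags & PROT_MASK;   /* PROT_READ | PROT_WRITE | PROT_EXEC */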
| |
on the 2nd of February 2011 in NetBSD.
http://marc.info/?l=netbsd-source-changes&m=129658899212732&w=2
http://marc.info/?l=netbsd-source-changes&m=129659095515558&w=2
http://marc.info/?l=netbsd-source-changes&m=129659157916514&w=2
http://marc.info/?l=netbsd-source-changes&m=129665962324372&w=2
http://marc.info/?l=netbsd-source-changes&m=129666033625342&w=2
http://marc.info/?l=netbsd-source-changes&m=129666052825545&w=2
http://marc.info/?l=netbsd-source-changes&m=129666922906480&w=2
http://marc.info/?l=netbsd-source-changes&m=129667725518082&w=2
| |
ok guenther
| |
an offset/size/address by shifting by PAGE_SHIFT. Make uvm_objwire/unwire
use voff_t instead of off_t. The former is the right type here even if it is
equivalent to the latter.
Inspired by somewhat similar changes in Bitrig.
ok deraadt@, guenther@
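A sketch of the class of bug such casts avoid (illustrative, not the
actual diff):

    int npages = ...;                            /* page count */
    voff_t bad  = npages << PAGE_SHIFT;          /* shift can overflow as int */
    voff_t good = (voff_t)npages << PAGE_SHIFT;  /* widen first, then shift */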
| |
emphatic ok usual suspects, grudging ok miod
| |
on the written buffers. Use the flag for writes from the page daemon to
ensure that we free buffers written out by the page daemon rather than
caching them.
ok kettenis@
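A hedged sketch of the mechanism (the flag name B_DAEMON is an assumption
based on the description, as the entry's first line is truncated):

    bp->b_flags |= B_DAEMON;    /* assumed flag name: page daemon write */
    bwrite(bp);
    /* ...later, in the buffer completion path: */
    if (bp->b_flags & B_DAEMON)
            brelse(bp);         /* release rather than cache the buffer */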
| |
Now we can free vnodes again.
ok gcc@, jetpack@, beck@, art@.
(the results of this were hilarious)
| |
The vm hackers don't use it, don't maintain it, and have to look at it all the
time. About time these 800 lines of code hit /dev/null.
"never liked it" tedu@. ariane@ was very happy when I told her I wrote
this diff.
| |
prompted by tedu@
| |
deactivate pages after syncing.
While here, don't check flags for PQ_INACTIVE (this is the only place
outside uvm_page.c where this is done) because pagedeactivate does this
already.
First part from Christian Ehrhart, second from me.
Both ok ariane@.
I meant to commit this about a week ago, but accidentally committed to my
local cvs mirror and then forgot about it.
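A sketch of the redundancy removed (simplified):

    /* Callers no longer need the guard ... */
    if ((pg->pg_flags & PQ_INACTIVE) == 0)
            uvm_pagedeactivate(pg);
    /* ... because uvm_pagedeactivate() performs that check itself. */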
| |
gets rid of #include <sys/dkio.h> in sys/ioctl.h and adds #include
<sys/dkio.h> to the places that actually want and use the disk
ioctls.
This became an issue when krw@'s X build failed when he was testing
a change to dkio.h.
tested by krw@
help from and ok miod@
| |
places in the tree need to be touched to update the object
initialisation with respect to that.
So, make a function (uvm_initobj) that takes the refcount, object and
pager ops and does this initialisation for us. This should save on
maintenance in the future.
looked good to fgs@. Tedu complained about the British spelling but OKed
it anyway.
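A sketch of the helper as the message describes it (field names are
illustrative of the era, not a verbatim copy of the committed code):

    void
    uvm_initobj(struct uvm_object *uobj, struct uvm_pagerops *pgops, int refs)
    {
            uobj->pgops = pgops;            /* pager operations */
            TAILQ_INIT(&uobj->memq);        /* pages owned by the object */
            uobj->uo_npages = 0;
            uobj->uo_refs = refs;           /* caller-supplied refcount */
    }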
| |
Now instead of the global object hashtable, we have a per-object tree.
Testing shows no performance difference and a slight code shrink. OTOH, when
locking becomes more fine-grained, this should be faster by avoiding lock
contention on uvm.hashlock.
ok thib@, art@.
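A sketch of the lookup change (names are illustrative; the "before"
helper is hypothetical):

    /* Before: one global hash over (object, offset), guarded by
     * uvm.hashlock: */
    pg = uvm_pagehash_lookup(uobj, off);        /* hypothetical helper */

    /* After: each object owns a tree keyed by offset: */
    struct vm_page key;
    key.offset = off;
    pg = RB_FIND(uvm_objtree, &uobj->memt, &key);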
| |
This has been tested very thoroughly on all archs we have,
except 88k and 68k. Please see cvs log for the individual commit
messages.
ok beck@, thib@
| |
which is exactly what the macro does.
Macros that are nothing more than:
#define FUNCTION(arg) function(arg)
are almost always pointless and should go away.
OK blambert@
Agreed by many.
| |
More backouts in line with previous ones; this appears to bring us back to a
stable condition.
A machine forced to 64MB of RAM cycled 10GB through swap with this diff
and is still running as I type this. Other tests by ariane@ and thib@
also seem to show that it's alright.
ok deraadt@, thib@, ariane@
| |
This is for the same reason as the earlier backouts, to avoid the bug
either added or exposed sometime around c2k9. This *should* be the last
one.
prompted by deraadt@
ok ariane@
| |
allocator).
"i can't see any obvious problems" oga