| Commit message (Collapse) | Author | Age | Files | Lines |
| |
under lock
ok guenther@
|
ok guenther@
|
allocations will recover some memory from the dma_constraint range.
The allocation still fails; the intent is to ensure that the
pagedaemon will free some memory to possibly allow a subsequent
allocation to succeed.
This also adds a UVM_PLA_NOWAKE flag to allow special cases in the
buffer cache to not wake up the pagedaemon until they want to.
ok kettenis@
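A minimal, self-contained sketch of the behaviour described above: a failed allocation in a constrained range pokes the page daemon unless the caller passed UVM_PLA_NOWAKE. Only the flag names come from uvm; the flag values, helpers, and the always-failing allocator are illustrative.

```c
#include <stdio.h>

#define UVM_PLA_NOWAIT	0x0001	/* illustrative value: fail instead of sleeping */
#define UVM_PLA_NOWAKE	0x0002	/* illustrative value: don't wake the page daemon */

static int pagedaemon_wakeups;

static void
wakeup_pagedaemon(void)
{
	pagedaemon_wakeups++;
	printf("page daemon woken (%d)\n", pagedaemon_wakeups);
}

/* Stand-in for a constrained allocation that finds no pages in range. */
static int
pglistalloc_constrained(int flags)
{
	if ((flags & UVM_PLA_NOWAKE) == 0)
		wakeup_pagedaemon();	/* ask for memory to be recovered */
	return -1;			/* the allocation itself still fails */
}

int
main(void)
{
	pglistalloc_constrained(UVM_PLA_NOWAIT);			/* wakes */
	pglistalloc_constrained(UVM_PLA_NOWAIT | UVM_PLA_NOWAKE);	/* stays quiet */
	return 0;
}
```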
|
no other process which could free it. Better to panic in malloc(9)
or pool_get(9) than to sleep forever.
tested by visa@ patrick@ Jan Klemkow
suggested by kettenis@; OK deraadt@
|
The distinction between preempt() and yield() stays, as it is useful
to know whether a thread decided to yield by itself or whether the kernel
told it to go away.
ok tedu@, guenther@
|
mtx_enter() and mtx_leave() operations. Not 100% sure this won't blow up, but
there is only one way to find out, and we need this to make progress on
further unlocking uvm.
prodded by deraadt@
|
the page loaning code is already in the Attic.
ok kettenis@, beck@
|
era. Fix the C files that include uvm to include lock.h or atomic.h as necessary.
ok deraadt
|
PROT_NONE, PROT_READ, PROT_WRITE, and PROT_EXEC from mman.h.
PROT_MASK is introduced as the one true way of extracting those bits.
Remove UVM_ADV_* wrapper, using the standard names.
ok doug guenther kettenis
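A small userland example of what that means in practice: mask a flags word with PROT_MASK to recover just the protection bits. The fallback #define is only there in case a <sys/mman.h> does not expose PROT_MASK; the value shown matches its conventional definition.

```c
#include <sys/mman.h>
#include <stdio.h>

#ifndef PROT_MASK
#define PROT_MASK (PROT_READ | PROT_WRITE | PROT_EXEC)
#endif

int
main(void)
{
	/* Protection bits packed next to unrelated flag bits. */
	int word = PROT_READ | PROT_WRITE | 0x100;
	int prot = word & PROT_MASK;	/* the one true way of extracting them */

	printf("prot = %#x (read=%d write=%d exec=%d)\n", prot,
	    (prot & PROT_READ) != 0, (prot & PROT_WRITE) != 0,
	    (prot & PROT_EXEC) != 0);
	return 0;
}
```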
|
ok mpi@ kspillner@
|
yield() if the cpu is marked SHOULDYIELD.
ok miod@ tedu@ phessler@
|
it when we hibernate.
ok mlarkin@, miod@, deraadt@
|
on the 2nd of February 2011 in NetBSD.
http://marc.info/?l=netbsd-source-changes&m=129658899212732&w=2
http://marc.info/?l=netbsd-source-changes&m=129659095515558&w=2
http://marc.info/?l=netbsd-source-changes&m=129659157916514&w=2
http://marc.info/?l=netbsd-source-changes&m=129665962324372&w=2
http://marc.info/?l=netbsd-source-changes&m=129666033625342&w=2
http://marc.info/?l=netbsd-source-changes&m=129666052825545&w=2
http://marc.info/?l=netbsd-source-changes&m=129666922906480&w=2
http://marc.info/?l=netbsd-source-changes&m=129667725518082&w=2
|
emphatic ok usual suspects, grudging ok miod
|
after analysis and testing. When flushing a large mmapped file, we can
eat up all the reserve bufs, but there's a good chance there will be more
clean ones available.
ok beck kettenis
|
to do work, just as is done when waking it up.
tested by me, phessler@, espie@, landry@
ok kettenis@
|
a few problems noticed by phessler@ and beck@ where certain allocations
would repeatedly wake the page daemon even though the page daemon's targets
were already met, so it didn't do any work.
the buffer cache has pages to throw away by always doing so any time
the page daemon is woken, rather than only when we are under the free
page target.
ok phessler@ deraadt@
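A rough model of the change (names and numbers are made up, not the kernel code): on every wakeup the page daemon first lets the buffer cache shed what it is willing to give up, and only then decides whether any real scanning is needed, instead of backing the cache off only when below the free page target.

```c
#include <stdio.h>

#define FREE_TARGET	2000L
static long free_pages = 2500;		/* already above the target */
static long bufcache_spare = 300;	/* clean pages the cache could drop */

/* Stand-in for bufbackoff(): hand back whatever the cache can spare. */
static long
bufbackoff_model(void)
{
	long give = bufcache_spare;

	bufcache_spare = 0;
	free_pages += give;
	return give;
}

static void
pagedaemon_wakeup_model(void)
{
	/*
	 * Old behaviour: back the cache off only when below the free target,
	 * so a wakeup like this one did nothing.  New behaviour: always let
	 * the cache shed pages first, then see if real work remains.
	 */
	long shed = bufbackoff_model();

	if (free_pages >= FREE_TARGET) {
		printf("targets met (free=%ld, cache shed %ld), nothing to scan\n",
		    free_pages, shed);
		return;
	}
	printf("would scan the page queues for more memory\n");
}

int
main(void)
{
	pagedaemon_wakeup_model();
	return 0;
}
```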
|
A long time ago (in Vienna) the reserves for the cleaner and syncer were
removed. softdep and many things have not performed the same ever since.
Follow-on generations of buffer cache hackers assumed the existing code
was the reference and have been in a frustrating state of coprophagia ever
since.
This commit
0) Brings back a (small) reserve allotment of buffer pages, and the kva to
map them, to allow the cleaner and syncer to run even when under intense
memory or kva pressure.
1) Fixes a lot of comments and variables to represent reality.
2) Simplifies and corrects how the buffer cache backs off down to the lowest
level.
3) Corrects how the page daemon asks the buffer cache to back off, ensuring
that uvmpd_scan is done to recover inactive pages in low memory situations.
4) Adds a high water mark to the pool used to allocate struct buf's.
5) Corrects the cleaner and the sleep/wakeup cases in both low memory and low
kva situations (including accounting for the cleaner/syncer reserve).
Tested by many, with much helpful input from deraadt, miod, tobiasu,
kettenis and others.
ok kettenis@ deraadt@ jj@
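A self-contained model of item 0, the cleaner/syncer reserve: ordinary buffer allocations stop short of a small allotment of pages, while the cleaner and syncer may dip into it and keep pushing dirty buffers out. The names and numbers are illustrative, not the actual kernel code.

```c
#include <stdbool.h>
#include <stdio.h>

#define BUF_PAGES_TOTAL   1000
#define BUF_RESERVE_PAGES 16		/* small allotment for cleaner/syncer */

static int buf_pages_used;

static bool
buf_page_get(bool is_cleaner_or_syncer)
{
	int limit = is_cleaner_or_syncer ?
	    BUF_PAGES_TOTAL :				/* may dip into the reserve */
	    BUF_PAGES_TOTAL - BUF_RESERVE_PAGES;	/* ordinary callers may not */

	if (buf_pages_used >= limit)
		return false;		/* caller must back off (or sleep) */
	buf_pages_used++;
	return true;
}

int
main(void)
{
	/* Exhaust everything an ordinary caller is allowed to take... */
	while (buf_page_get(false))
		;
	/* ...the cleaner can still make progress out of the reserve. */
	printf("ordinary caller: %s, cleaner: %s\n",
	    buf_page_get(false) ? "ok" : "denied",
	    buf_page_get(true) ? "ok" : "denied");
	return 0;
}
```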
|
1) Make the pagedaemon aware of the memory ranges and size of allocations
where memory is being requested, and pass this information on to
bufbackoff(), which will later (not yet) be used to ensure that the
buffer cache gets out of the way in the right area of memory.
Note that this commit does not yet make it *do* that - as currently
the buffer cache is all in dma-able memory and it will simply back
off.
2) Add uvm_pagerealloc_multi - to be used by the buffer cache code
for reallocating pages to particular regions.
much of this work by ariane, with smatterings of me, art, and oga
ok oga@, thib@, ariane@, deraadt@
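A sketch of the idea in item 1: the page daemon hands the backoff code the physical range it is short on, so the cache can shed pages that actually lie in that region. The struct and the bufbackoff_model() signature are illustrative stand-ins, not the real bufbackoff() interface.

```c
#include <stdint.h>
#include <stdio.h>

struct constraint_range {
	uint64_t lo;		/* lowest usable physical address */
	uint64_t hi;		/* highest usable physical address */
};

struct cached_page {
	uint64_t pa;		/* physical address of the cached page */
	int	 in_use;
};

static struct cached_page cache[4] = {
	{ 0x00100000, 1 }, { 0x20000000, 1 }, { 0x00200000, 1 }, { 0x80000000, 1 },
};

/* Release up to "pages" cached pages that fall inside the requested range. */
static int
bufbackoff_model(const struct constraint_range *range, int pages)
{
	int freed = 0;

	for (size_t i = 0; i < sizeof(cache) / sizeof(cache[0]) && freed < pages; i++) {
		if (cache[i].in_use &&
		    cache[i].pa >= range->lo && cache[i].pa <= range->hi) {
			cache[i].in_use = 0;	/* hand the page back to uvm */
			freed++;
		}
	}
	return freed;
}

int
main(void)
{
	/* Say the shortage is in DMA-able memory below 16MB. */
	struct constraint_range dma = { 0, 0x00ffffff };

	printf("freed %d page(s) in the constrained range\n",
	    bufbackoff_model(&dma, 8));
	return 0;
}
```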
|
The vm hackers don't use it, don't maintain it and have to look at it all the
time. About time these 800 lines of code hit /dev/null.
``never liked it'' tedu@. ariane@ was very happy when i told her i wrote
this diff.
|
ok miod@, oga@, tedu@
|
more correctly reflect the new state of the world - that is - how many pages
can be cheaply reclaimed - which now includes clean buffer cache pages.
This change fixes situations where people would be running with a large
bufcachepercent, and still notice swapping without the buffer cache backing off.
ok oga@, testing by many on tech@ and others. Thanks.
|
where we are below the inactive page target. This fixes a problem with a large
buffer cache on low memory machines where the page daemon would be woken up,
but the buffer cache would never be backed off because we were below the
inactive page target, which could result in constant paging and basically
a livelock condition.
ok oga@ art@
|
after c2k9
allows buffer cache to be extended and grow/shrink dynamically
tested by many, ok oga@, "why not just commit it" deraadt@
|
This has been tested very very thoroughly on all archs we have
excepting 88k and 68k. Please see cvs log for the individual commit
messages.
ok beck@, thib@
|
specifically, if we free a RELEASED anon, then we will first of all
remove the page from the anon, free the anon, then get the next page
relative to the anon page, then call uvm_pagefree().
The problem is that while we zero out anon->an_page, we do not zero out
pg->uanon. Now, if pg->uanon is not NULL, uvm_pagefree() zeroes out some
variables in the struct for us. One of the backed-out commits added more
zeroing there which would have exacerbated this use after free under
heavy paging (which was where we saw bugs). Fix this by zeroing out
pg->uanon.
I have looked for other similar cases, but have not found any as of yet.
been in snaps a while, "please do commit that" deraadt@
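A reduced model of the bug and the fix: the page keeps a back-pointer (pg->uanon) to its anon, and clearing that pointer before the anon is freed is what keeps the later free of the page from touching dead memory. The structures and helpers below are stand-ins, not the real uvm code.

```c
#include <stdio.h>
#include <stdlib.h>

struct vm_anon;

struct vm_page {
	struct vm_anon *uanon;		/* back-pointer to the owning anon */
};

struct vm_anon {
	struct vm_page *an_page;
};

/* Stand-in for uvm_pagefree(): it dereferences pg->uanon when set. */
static void
pagefree_model(struct vm_page *pg)
{
	if (pg->uanon != NULL)
		pg->uanon->an_page = NULL;	/* use-after-free if the anon is gone */
	/* ...page goes back to the free list... */
}

/* Freeing a RELEASED anon: drop both links before the page is freed. */
static void
anon_release_model(struct vm_anon *anon)
{
	struct vm_page *pg = anon->an_page;

	anon->an_page = NULL;
	pg->uanon = NULL;		/* the fix: clear the back-pointer */
	free(anon);
	pagefree_model(pg);		/* now safe to free the page */
}

int
main(void)
{
	struct vm_anon *anon = malloc(sizeof(*anon));
	static struct vm_page pg;

	if (anon == NULL)
		return 1;
	anon->an_page = &pg;
	pg.uanon = anon;
	anon_release_model(anon);
	printf("page freed without touching the freed anon\n");
	return 0;
}
```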
|
More backouts in line with previous ones; this appears to bring us back to a
stable condition.
A machine forced to 64mb of ram cycled 10GB through swap with this diff
and is still running as I type this. Other tests by ariane@ and thib@
also seem to show that it's alright.
ok deraadt@, thib@, ariane@
|
separately).
a change at or just before the hackathon has either exposed or added a
very very nasty memory corruption bug that is giving us hell right now.
So in the interest of kernel stability these diffs are being backed out
until such a time as that corruption bug has been found and squashed,
then the ones that are proven good may slowly return.
a quick hitlist of the main commits this backs out:
mine:
uvm_objwire
the lock change in uvm_swap.c
using trees for uvm objects instead of the hash
removing the pgo_releasepg callback.
art@'s:
putting pmap_page_protect(VM_PROT_NONE) in uvm_pagedeactivate() since
all callers called that just prior anyway.
ok beck@, ariane@.
prompted by deraadt@.
|
commits:
1) The sysctl allowing bufcachepercent to be changed at boot time.
2) The change moving the buffer cache hash chains to a red-black tree
3) The dynamic buffer cache (which depended on the earlier two).
ok on the backout from marco and todd
|
This commit won't change the default behaviour of the system unless the
buffer cache size is increased with sysctl kern.bufcachepercent. By default
our buffer cache is 10% of memory, which with this commit is now treated
as a low water mark. If the buffer cache size is increased, the new size
is treated as a high water mark and the buffer cache is permitted to grow
to that percentage of memory.
If the page daemon is invoked, the page daemon will ask the buffer cache
to relinquish pages. If the buffer cache has more than the low water mark, it
will relinquish pages, allowing them to be consumed by uvm. After a short
period the buffer cache will attempt to re-grow back to the high water mark.
This permits the use of a large buffer cache without penalizing the available
memory for other purposes.
Above the low water mark the buffer cache remains entirely subservient to
the page daemon, so if uvm requires pages, the buffer cache will abandon
them.
ok art@ thib@ oga@
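A small, self-contained model of that policy; the numbers and helper names are illustrative, not the actual buffer cache code. The default size acts as the low water mark, a raised kern.bufcachepercent becomes the high water mark, the page daemon can push the cache back down to the low water mark, and the cache later grows back.

```c
#include <stdio.h>

#define PHYSMEM_PAGES	100000
static long bufcache_lowater = PHYSMEM_PAGES / 10;	/* default 10% of memory */
static long bufcache_hiwater = PHYSMEM_PAGES / 10;	/* raised via kern.bufcachepercent */
static long bufcache_pages;

/* Page daemon pressure: give pages back, but never shrink below the low water mark. */
static long
bufcache_release(long wanted)
{
	long spare = bufcache_pages - bufcache_lowater;
	long give = spare > 0 ? (wanted < spare ? wanted : spare) : 0;

	bufcache_pages -= give;
	return give;
}

/* Quiet periods: grow back toward the high water mark. */
static void
bufcache_grow(long pages)
{
	bufcache_pages += pages;
	if (bufcache_pages > bufcache_hiwater)
		bufcache_pages = bufcache_hiwater;
}

int
main(void)
{
	bufcache_hiwater = PHYSMEM_PAGES * 30 / 100;	/* e.g. bufcachepercent=30 */
	bufcache_grow(PHYSMEM_PAGES);			/* fills up to the high water mark */
	printf("cache: %ld pages\n", bufcache_pages);
	printf("released: %ld pages\n", bufcache_release(25000));
	printf("cache after backoff: %ld pages (low water %ld)\n",
	    bufcache_pages, bufcache_lowater);
	return 0;
}
```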
|
pgo_releasepg() hook and just free the page the "normal" way in the one
place we'll ever see PG_RELEASED and should care (uvm_page_unbusy,
called in aiodoned).
ok art@, beck@, thib@
|
Makes trace in ddb useful.
ok oga
|
sleep on them (and otherwise ignore them) sleep on the pointer to the
{aiodoned,pagedaemon}_proc members, and nuke the two extra words.
"no objections" art@, ok beck@.
|
needed.
"of course" art@.
|
of uvmexp.free.
"yeah, go for it" art@
|
fraction of the wakeups and sleeps involved here actually grab that
lock. The remainder, on the other hand, always have the fpageq_lock
locked.
So, make this locking correct by switching the other users over to
fpageq_lock, too.
This would probably be better off being a semaphore, but for now at
least it's correct.
"ok, unless you want to implement semaphores" art@
|
Fix up the one case of lock recursion (which blatantly ignored the
comment right above it saying that we don't need to lock). The rest of
the lock usage has been checked and appears to be correct.
ok ariane@.
|
the simple lock with a real lock - an IPL_BIO mutex. While i'm here, make
the sleeping condition one hell of a lot simpler in the aio daemon.
some ideas from and ok art@.
|
K&R function declarations, so switch them all over to ANSI style, in
accordance with the prophecy.
"go for it" art@
|
1. When checking if the pagedaemon should be awakened and to see how
much work it should do, consider the buffer cache deficit
(how many pages the buffer cache can eat at most vs. how many it has
now) as pages that are not free. They are actually still usable by
the allocator, but the pressure on the pagedaemon is increased when
we start to chew into the memory that the buffer cache wants to
use.
2. Remove the stupid 512kB limit of how much memory should be our
free target. That maybe made sense on 68k, but on modern systems
512k is just a joke. Keep it at 3% of physical memory just like
it was meant to be.
3. When doing allocations for the pagedaemon, always let it use the
reserve. The whole UVM_OBJ_IS_KERN_OBJECT is silly and doesn't
work in most cases anyway. We still don't have a reserve for
the pagedaemon in the km_page allocator, but this seems to help
enough. (yes, there are still bad cases in that code and the comment
is only half-true, the whole section needs a massage, but that will
happen later, this diff only touches pagedaemon parts)
Testing by many, prodded by theo.
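A sketch of points 1 and 2 (names, machine size, and page counts are made up): the buffer cache's unfilled appetite is treated as memory that is not really free when deciding whether to wake the page daemon, and the free target is simply 3% of physical memory with no 512kB cap.

```c
#include <stdio.h>

#define PHYSMEM_PAGES	1048576UL	/* e.g. 4GB of 4kB pages */

static unsigned long freetarg = PHYSMEM_PAGES * 3 / 100;	/* 3%, no 512kB cap */

static int
should_wake_pagedaemon(unsigned long free_pages,
    unsigned long bufcache_max, unsigned long bufcache_now)
{
	/* Pages the buffer cache still wants to eat are not counted as free. */
	unsigned long deficit = bufcache_max - bufcache_now;
	unsigned long effective_free = free_pages > deficit ?
	    free_pages - deficit : 0;

	return effective_free < freetarg;
}

int
main(void)
{
	printf("free target: %lu pages\n", freetarg);
	/* Plenty free and the cache is already full: no wakeup. */
	printf("%d\n", should_wake_pagedaemon(60000, 100000, 100000));
	/* Same free count, but the cache still wants 40000 more: wake up. */
	printf("%d\n", should_wake_pagedaemon(60000, 100000, 60000));
	return 0;
}
```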
|
macros that just expand into the mutex functions
to keep the abstraction, and do assorted cleanup.
ok miod@, art@
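An illustration of the pattern, with pthread mutexes standing in for the kernel's mtx_enter()/mtx_leave(): the old lock/unlock names survive as macros that simply expand into the new mutex calls, so callers stay unchanged. The uvm_lock_fpageq() name is used only as an example of the style; the macro bodies are not the actual kernel ones.

```c
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t uvm_fpageqlock = PTHREAD_MUTEX_INITIALIZER;

/* The abstraction survives; only the expansion changed. */
#define uvm_lock_fpageq()	pthread_mutex_lock(&uvm_fpageqlock)
#define uvm_unlock_fpageq()	pthread_mutex_unlock(&uvm_fpageqlock)

static int uvmexp_free;

int
main(void)
{
	uvm_lock_fpageq();	/* callers keep the familiar macro names */
	uvmexp_free++;
	uvm_unlock_fpageq();
	printf("free pages: %d\n", uvmexp_free);
	return 0;
}
```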
|
ckuethe@ for a while. Okay beck@, "it is good timing" deraadt@.
|