path: root/sys/uvm/uvm_pdaemon.c
Commit message (author, date; files changed, lines -deleted/+added)
* simplify logic after wakeup since this variable is only manipulated
  under lock. ok guenther@
  (beck, 2019-05-10; 1 file, -9/+4)
* Check for nowait failed *after* the wakeup point, not before.
  ok guenther@
  (beck, 2019-05-10; 1 file, -6/+6)
* Ensure that pagedaemon wakeups as a result of failed UVM_PLA_NOWAIT
  allocations will recover some memory from the dma_constraint range.
  The allocation still fails; the intent is to ensure that the pagedaemon
  will free some memory to possibly allow a subsequent allocation to
  succeed. This also adds a UVM_PLA_NOWAKE flag to allow special cases
  in the buffer cache to not wake up the pagedaemon until they want to.
  ok kettenis@
  (beck, 2019-05-09; 1 file, -4/+26)
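  A minimal sketch of the failure path described above; the allocator
  entry point try_pmemrange_alloc() is an illustrative name, not the
  committed identifier, and the uvm_nowait_failed signal variable is an
  assumption about how the wakeup reason is recorded:

      /* sketch: a failed UVM_PLA_NOWAIT allocation still pokes the
       * page daemon, unless the caller passed UVM_PLA_NOWAKE */
      int
      uvm_pglistalloc_sketch(psize_t size, struct pglist *rlist, int flags)
      {
              if (try_pmemrange_alloc(size, rlist) == 0)
                      return (0);             /* success */

              if ((flags & UVM_PLA_NOWAIT) == 0)
                      return (ENOMEM);        /* UVM_PLA_WAITOK callers
                                               * would sleep and retry */
              if ((flags & UVM_PLA_NOWAKE) == 0) {
                      /* ask the daemon to recover memory from the
                       * dma_constraint range; this call still fails */
                      uvm_nowait_failed = 1;
                      wakeup(&uvm.pagedaemon);
              }
              return (ENOMEM);
      }
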
* While booting it does not make sense to wait for memory; there is
  no other process which could free it. Better to panic in malloc(9) or
  pool_get(9) instead of sleeping forever.
  tested by visa@ patrick@ Jan Klemkow; suggested by kettenis@; OK deraadt@
  (bluhm, 2018-01-18; 1 file, -1/+7)
* Convert most of the manual checks for CPU hogging to sched_pause().
  The distinction between preempt() and yield() stays, as it is useful
  to know if a thread decided to yield by itself or if the kernel told
  it to go away. ok tedu@, guenther@
  (mpi, 2017-02-14; 1 file, -3/+3)
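  The conversion is mechanical; roughly the following, with the exact
  definition of sched_pause() at the time not quoted here:

      /* before: open-coded CPU-hogging check in each daemon's loop */
      if (curproc->p_cpu->ci_schedstate.spc_schedflags & SPCF_SHOULDYIELD)
              yield();

      /* after: a single helper hides the test */
      sched_pause();
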
* Lock the page queues by turning uvm_lock_pageq() and uvm_unlock_pageq()
  into mtx_enter() and mtx_leave() operations. Not 100% sure this won't
  blow up, but there is only one way to find out, and we need this to
  make progress on further unlocking uvm. prodded by deraadt@
  (kettenis, 2015-10-08; 1 file, -1/+3)
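  In effect the wrappers keep their names while the implementation
  becomes a mutex; a sketch, assuming IPL_VM (in-tree the lock is a
  member of the global uvm structure rather than a file-scope variable):

      struct mutex uvm_pageqlock = MUTEX_INITIALIZER(IPL_VM);

      #define uvm_lock_pageq()      mtx_enter(&uvm_pageqlock)
      #define uvm_unlock_pageq()    mtx_leave(&uvm_pageqlock)
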
* Remove the unused loan_count field and the related uvm logic. Most of
  the page loaning code is already in the Attic. ok kettenis@, beck@
  (visa, 2015-08-21; 1 file, -39/+7)
* remove lock.h from uvm_extern.h. another holdover from the simpleton
  lock era. fix uvm-including c files to include lock.h or atomic.h as
  necessary. ok deraadt
  (tedu, 2014-12-17; 1 file, -2/+2)
* Replace a plethora of historical protection options with just
  PROT_NONE, PROT_READ, PROT_WRITE, and PROT_EXEC from mman.h.
  PROT_MASK is introduced as the one true way of extracting those bits.
  Remove the UVM_ADV_* wrapper, using the standard names.
  ok doug guenther kettenis
  (deraadt, 2014-11-16; 1 file, -6/+6)
* remove unneeded proc.h includes
  ok mpi@ kspillner@
  (jsg, 2014-09-14; 1 file, -2/+1)
* Make the cleaner, syncer, pagedaemon, and aiodone daemons all
  yield() if the cpu is marked SHOULDYIELD. ok miod@ tedu@ phessler@
  (blambert, 2014-09-09; 1 file, -1/+5)
* Add a function to drop all clean pages on the page daemon queues and
  call it when we hibernate. ok mlarkin@, miod@, deraadt@
  (kettenis, 2014-07-12; 1 file, -1/+59)
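  The shape of such a function, as a purely hypothetical sketch:
  drop_clean_pages is not the committed name, and the inactive queue
  and page flag spellings vary across versions of the tree:

      /* hypothetical: free every clean, unbusy page on the inactive
       * queue so hibernate has fewer pages to save */
      void
      drop_clean_pages(void)
      {
              struct vm_page *pg, *nextpg;

              uvm_lock_pageq();
              for (pg = TAILQ_FIRST(&uvm.page_inactive); pg != NULL;
                  pg = nextpg) {
                      nextpg = TAILQ_NEXT(pg, pageq);
                      if ((pg->pg_flags & PG_BUSY) == 0 &&
                          (pg->pg_flags & PG_CLEAN) != 0)
                              uvm_pagefree(pg);
              }
              uvm_unlock_pageq();
      }
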
* Chuck Cranor rescinded clauses in his license
  on the 2nd of February 2011 in NetBSD.
  http://marc.info/?l=netbsd-source-changes&m=129658899212732&w=2
  http://marc.info/?l=netbsd-source-changes&m=129659095515558&w=2
  http://marc.info/?l=netbsd-source-changes&m=129659157916514&w=2
  http://marc.info/?l=netbsd-source-changes&m=129665962324372&w=2
  http://marc.info/?l=netbsd-source-changes&m=129666033625342&w=2
  http://marc.info/?l=netbsd-source-changes&m=129666052825545&w=2
  http://marc.info/?l=netbsd-source-changes&m=129666922906480&w=2
  http://marc.info/?l=netbsd-source-changes&m=129667725518082&w=2
  (jsg, 2014-07-11; 1 file, -7/+2)
* subtle rearrangement of includes
  (deraadt, 2014-07-08; 1 file, -2/+2)
* bye bye UBC; ok beck dlg
  (deraadt, 2014-07-08; 1 file, -11/+1)
* compress code by turning four-line comments into one-line comments.
  emphatic ok usual suspects, grudging ok miod
  (tedu, 2014-04-13; 1 file, -77/+13)
* add some more bufbackoff calls. uvm_wait optimistically (?),
  uvm_wait_pla after analysis and testing. when flushing a large mmapped
  file, we can eat up all the reserve bufs, but there's a good chance
  there will be more clean ones available. ok beck kettenis
  (tedu, 2014-02-06; 1 file, -1/+4)
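  The pattern being added, sketched; bufbackoff()'s return convention
  is assumed here (zero when it managed to shed pages):

      /* sketch: ask the buffer cache for clean pages before sleeping */
      if (bufbackoff(NULL, npages) != 0)      /* NULL: no range constraint */
              uvm_wait("pglistalloc");        /* nothing to shed, wait */
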
* parentheses to make the math right. ok beck kettenis
  (tedu, 2014-02-06; 1 file, -3/+3)
* remove lots of comments about locking, per beck's request
  (tedu, 2013-05-30; 1 file, -22/+6)
* remove simple_locks from uvm code. ok beck deraadt
  (tedu, 2013-05-30; 1 file, -66/+1)
* make sure the page daemon considers BUFPAGES_INACT when deciding
  to do work, just as is done when waking it up.
  tested by me, phessler@, espie@, landry@; ok kettenis@
  (beck, 2013-02-07; 1 file, -2/+2)
* Always back the buffer cache off on any page daemon wakeup. This avoids
  a few problems noticed by phessler@ and beck@ where certain allocations
  would repeatedly wake the page daemon even though the page daemon's
  targets were already met, so it didn't do any work. We can avoid this
  problem when the buffer cache has pages to throw away by always doing
  so any time the page daemon is woken, rather than only when we are
  under the free page target. ok phessler@ deraadt@
  (beck, 2012-12-10; 1 file, -22/+14)
* Fix the buffer cache.
  A long time ago (in vienna) the reserves for the cleaner and syncer
  were removed. softdep and many things have not performed the same ever
  since. Follow-on generations of buffer cache hackers assumed the
  existing code was the reference and have been in a frustrating state
  of coprophagia ever since.
  This commit:
  0) Brings back a (small) reserve allotment of buffer pages, and the
     kva to map them, to allow the cleaner and syncer to run even when
     under intense memory or kva pressure.
  1) Fixes a lot of comments and variables to represent reality.
  2) Simplifies and corrects how the buffer cache backs off down to the
     lowest level.
  3) Corrects how the page daemon asks the buffer cache to back off,
     ensuring that uvmpd_scan is done to recover inactive pages in low
     memory situations.
  4) Adds a high water mark to the pool used to allocate struct buf's.
  5) Corrects the cleaner and the sleep/wakeup cases in both low memory
     and low kva situations (including accounting for the cleaner/syncer
     reserve).
  Tested by many, with very much helpful input from deraadt, miod,
  tobiasu, kettenis and others. ok kettenis@ deraadt@ jj@
  (beck, 2012-11-07; 1 file, -8/+23)
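  Point 4 corresponds to a one-line pool knob; a sketch, with the
  actual limit value not reproduced from the commit:

      /* sketch of point 4: give the struct buf pool a high water mark
       * so excess idle bufs drain back to the system (limit assumed) */
      pool_sethiwat(&bufpool, nbuf);
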
* uvm changes for buffer cache improvements.
  1) Make the pagedaemon aware of the memory ranges and size of
     allocations where memory is being requested, and pass this
     information on to bufbackoff(), which will later (not yet) be used
     to ensure that the buffer cache gets out of the way in the right
     area of memory. Note that this commit does not yet make it *do*
     that; currently the buffer cache is all in dma-able memory and it
     will simply back off.
  2) Add uvm_pagerealloc_multi, to be used by the buffer cache code for
     reallocating pages to particular regions.
  much of this work by ariane, with smatterings of me, art, and oga
  ok oga@, thib@, ariane@, deraadt@
  (beck, 2011-07-06; 1 file, -7/+40)
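  The new interface, roughly as described; the prototype below is
  sketched from the commit text, and the in-tree argument order and
  types may differ:

      /* sketched prototype: reallocate the pages backing obj's range
       * [off, off+size) so they fall within the given constraint */
      void    uvm_pagerealloc_multi(struct uvm_object *obj, voff_t off,
                  vsize_t size, int flags,
                  struct uvm_constraint_range *where);
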
* Rip out and burn support for UVM_HIST.
  The vm hackers don't use it, don't maintain it and have to look at it
  all the time. About time these 800 lines of code hit /dev/null.
  ``never liked it'' tedu@. ariane@ was very happy when i told her i
  wrote this diff.
  (oga, 2011-07-03; 1 file, -24/+1)
* Typo in comment.
  (krw, 2011-04-01; 1 file, -2/+2)
* remove static so things show up in ddb. ok miod@, oga@, tedu@
  (thib, 2010-09-26; 1 file, -6/+6)
* Fix buffer cache backoff in the page daemon: deal with inactive pages
  to more correctly reflect the new state of the world, that is, how
  many pages can be cheaply reclaimed, which now includes clean buffer
  cache pages. This change fixes situations where people would be
  running with a large bufcachepercent and still notice swapping without
  the buffer cache backing off.
  ok oga@, testing by many on tech@ and others. Thanks.
  (beck, 2009-10-14; 1 file, -11/+4)
* fix the page daemon to back off the buffer cache correctly even in the
  case where we are below the inactive page target. This fixes a problem
  with a large buffer cache on low-memory machines where the page daemon
  would be woken up but the buffer cache would never be backed off,
  because we were below the inactive page target; this could result in
  constant paging and basically a livelock condition. ok oga@ art@
  (beck, 2009-08-08; 1 file, -5/+11)
* Dynamic buffer cache support: a re-commit of what was backed out
  after c2k9. Allows the buffer cache to be extended and grow/shrink
  dynamically. tested by many, ok oga@, "why not just commit it" deraadt@
  (beck, 2009-08-02; 1 file, -4/+6)
* Put the PG_RELEASED changes diff back in.
  This has been tested very very thoroughly on all archs we have except
  88k and 68k. Please see the cvs log for the individual commit messages.
  ok beck@, thib@
  (oga, 2009-07-22; 1 file, -33/+18)
* Fix a use after free in the pagedaemon.
  Specifically, if we free a RELEASED anon, then we will first remove
  the page from the anon, free the anon, then get the next page relative
  to the anon page, then call uvm_pagefree(). The problem is that while
  we zero out anon->an_page, we do not zero out pg->uanon. Now, if
  pg->uanon is not NULL, uvm_pagefree() zeroes out some variables in the
  struct for us. One of the backed-out commits added more zeroing there,
  which would have exacerbated this use after free under heavy paging
  (which was where we saw bugs). Fix this by zeroing out pg->uanon.
  I have looked for other similar cases, but have not found any as of yet.
  been in snaps a while, "please do commit that" deraadt@
  (oga, 2009-06-26; 1 file, -2/+7)
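  The essence of the fix, sketched from the description above:

      /* sketch: break both back-pointers before the anon goes away,
       * so a later uvm_pagefree(pg) cannot follow stale memory */
      struct vm_page *pg = anon->an_page;

      anon->an_page = NULL;
      pg->uanon = NULL;       /* the zeroing this commit adds */
      uvm_anfree(anon);
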
* date-based reversion of uvm to the 4th of May.
  More backouts in line with previous ones; this appears to bring us
  back to a stable condition. A machine forced to 64MB of ram cycled
  10GB through swap with this diff and is still running as I type this.
  Other tests by ariane@ and thib@ also seem to show that it's alright.
  ok deraadt@, thib@, ariane@
  (oga, 2009-06-17; 1 file, -11/+11)
* Backout all changes to uvm after pmemrange (which will be backed out
  separately).
  A change at or just before the hackathon has either exposed or added a
  very very nasty memory corruption bug that is giving us hell right now.
  So in the interest of kernel stability these diffs are being backed
  out until such a time as that corruption bug has been found and
  squashed, then the ones that are proven good may slowly return.
  A quick hitlist of the main commits this backs out:
  mine:
    uvm_objwire
    the lock change in uvm_swap.c
    using trees for uvm objects instead of the hash
    removing the pgo_releasepg callback
  art@'s:
    putting pmap_page_protect(VM_PROT_NONE) in uvm_pagedeactivate(),
    since all callers called that just prior anyway.
  ok beck@, ariane@. prompted by deraadt@.
  (oga, 2009-06-16; 1 file, -13/+29)
* Back out all the buffer cache changes I committed during c2k9. This
  reverts three commits:
  1) The sysctl allowing bufcachepercent to be changed at boot time.
  2) The change moving the buffer cache hash chains to a red-black tree.
  3) The dynamic buffer cache (which depended on the earlier two).
  ok on the backout from marco and todd
  (beck, 2009-06-15; 1 file, -8/+6)
* Somehow I missed committing this.
  (art, 2009-06-06; 1 file, -2/+1)
* Dynamic buffer cache sizing.
  This commit won't change the default behaviour of the system unless
  the buffer cache size is increased with sysctl kern.bufcachepercent.
  By default our buffer cache is 10% of memory, which with this commit
  is now treated as a low water mark. If the buffer cache size is
  increased, the new size is treated as a high water mark and the buffer
  cache is permitted to grow to that percentage of memory.
  If the page daemon is invoked, it will ask the buffer cache to
  relinquish pages. If the buffer cache has more than the low water mark
  it will relinquish pages, allowing them to be consumed by uvm. After a
  short period the buffer cache will attempt to re-grow back to the high
  water mark. This permits the use of a large buffer cache without
  penalizing the available memory for other purposes.
  Above the low water mark the buffer cache remains entirely subservient
  to the page daemon, so if uvm requires pages, the buffer cache will
  abandon them. ok art@ thib@ oga@
  (beck, 2009-06-05; 1 file, -6/+8)
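  A sketch of the resulting backoff policy; the variable names
  (bufpages, buflowpages) and the bufadjust() resize hook are assumed
  spellings of the scheme described above:

      /* sketch: shed pages only while above the low water mark */
      int
      bufbackoff_sketch(int wanted)
      {
              if (bufpages <= buflowpages)
                      return (-1);            /* at the floor: refuse */
              if (bufpages - wanted < buflowpages)
                      wanted = bufpages - buflowpages;
              bufadjust(bufpages - wanted);   /* shrink toward the mark */
              return (0);
      }
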
* Since we've now cleared up a lot of the PG_RELEASED setting, remove
  the pgo_releasepg() hook and just free the page the "normal" way in
  the one place we'll ever see PG_RELEASED and should care
  (uvm_page_unbusy, called in aiodoned). ok art@, beck@, thib@
  (oga, 2009-06-01; 1 file, -28/+13)
* Remove the static qualifier from functions that are not inline.
  Makes trace in ddb useful. ok oga
  (ariane, 2009-05-08; 1 file, -6/+6)
* Instead of keeping two ints in the uvm structure specifically just to
  sleep on them (and otherwise ignore them), sleep on the pointer to the
  {aiodoned,pagedaemon}_proc members, and nuke the two extra words.
  "no objections" art@, ok beck@.
  (oga, 2009-05-04; 1 file, -6/+6)
* Another case of locking just to read uvmexp.free. Kill the locking,
  not needed. "of course" art@.
  (oga, 2009-04-17; 1 file, -3/+1)
* We don't need to grab the fpageqlock to do nothing but look at the
  value of uvmexp.free. "yeah, go for it" art@
  (oga, 2009-04-15; 1 file, -4/+1)
* The use of uvm.pagedaemon_lock is incredibly inconsistent: only a
  fraction of the wakeups and sleeps involved here actually grab that
  lock. The remainder, on the other hand, always have the fpageq_lock
  locked. So, make this locking correct by switching the other users
  over to fpageq_lock, too.
  This would probably be better off being a semaphore, but for now at
  least it's correct. "ok, unless you want to implement semaphores" art@
  (oga, 2009-04-14; 1 file, -22/+13)
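  The corrected pairing, sketched; both sides now agree on the free
  page queue mutex (field spellings assumed):

      /* waker side: wakeup under the same lock the sleeper holds */
      mtx_enter(&uvm.fpageqlock);
      wakeup(&uvm.pagedaemon);
      mtx_leave(&uvm.fpageqlock);

      /* page daemon side: atomically release the lock and sleep */
      mtx_enter(&uvm.fpageqlock);
      msleep(&uvm.pagedaemon, &uvm.fpageqlock, PVM, "pgdaemon", 0);
      mtx_leave(&uvm.fpageqlock);
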
* Convert the page queue lock to a mutex instead of a simplelock.
  Fix up the one case of lock recursion (which blatantly ignored the
  comment right above it saying that we don't need to lock). The rest
  of the lock usage has been checked and appears to be correct.
  ok ariane@.
  (oga, 2009-04-13; 1 file, -12/+4)
* Instead of doing splbio(); simple_lock(&uvm.aiodoned_lock); just
  replace the simple lock with a real lock, an IPL_BIO mutex. While I'm
  here, make the sleeping condition one hell of a lot simpler in the aio
  daemon. some ideas from and ok art@.
  (oga, 2009-04-06; 1 file, -28/+10)
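  The replacement in miniature (the simplified sleep condition itself
  is not reproduced here):

      /* before: interrupt protection plus a separate simple_lock */
      s = splbio();
      simple_lock(&uvm.aiodoned_lock);
      /* ... */
      simple_unlock(&uvm.aiodoned_lock);
      splx(s);

      /* after: one mutex that raises to IPL_BIO when entered */
      struct mutex aiodoned_mtx = MUTEX_INITIALIZER(IPL_BIO);

      mtx_enter(&aiodoned_mtx);
      /* ... */
      mtx_leave(&aiodoned_mtx);
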
* While working on some stuff in uvm I've gotten REALLY sick of reading
  K&R function declarations, so switch them all over to ansi-style, in
  accordance with the prophecy. "go for it" art@
  (oga, 2009-03-20; 1 file, -5/+3)
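  For anyone who has not had the displeasure, the two styles side by
  side (example() is a hypothetical function):

      /* K&R style, as removed: */
      static void
      example(pg, flags)
              struct vm_page *pg;
              int flags;
      {
      }

      /* ANSI style, as converted to: */
      static void
      example(struct vm_page *pg, int flags)
      {
      }
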
* Register aiodoned_proc, although it is not used anywhere yet; PR #6034
  (miod, 2009-01-12; 1 file, -1/+3)
* Make the pagedaemon a bit happier.
  1. When checking if the pagedaemon should be awakened and to see how
     much work it should do, consider the buffer cache deficit (how many
     pages the buffer cache can eat max vs. how much it has now) as
     pages that are not free. They are actually still usable by the
     allocator, but the pressure on the pagedaemon is increased when we
     start to chew into the memory that the buffer cache wants to use.
  2. Remove the stupid 512kB limit on how much memory should be our free
     target. That maybe made sense on 68k, but on modern systems 512k is
     just a joke. Keep it at 3% of physical memory, just like it was
     meant to be.
  3. When doing allocations for the pagedaemon, always let it use the
     reserve. The whole UVM_OBJ_IS_KERN_OBJECT is silly and doesn't work
     in most cases anyway. We still don't have a reserve for the
     pagedaemon in the km_page allocator, but this seems to help enough.
     (Yes, there are still bad cases in that code and the comment is
     only half-true; the whole section needs a massage, but that will
     happen later, this diff only touches pagedaemon parts.)
  Testing by many, prodded by theo.
  (art, 2008-07-02; 1 file, -17/+10)
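  Points 1 and 2 combine into a wakeup check along these lines (a
  sketch; BUFPAGES_DEFICIT is an assumed spelling for the buffer cache
  deficit of point 1):

      /* sketch: free target is 3% of physical memory, and the buffer
       * cache deficit counts against the free page count */
      uvmexp.freetarg = physmem / 33;         /* ~3%, no 512kB cap */

      if (uvmexp.free - BUFPAGES_DEFICIT < uvmexp.freetarg)
              wakeup(&uvm.pagedaemon);        /* get the daemon working */
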
* Turn the uvm_{lock/unlock}_fpageq() inlines into macros that just
  expand into the mutex functions, to keep the abstraction; do assorted
  cleanup. ok miod@, art@
  (thib, 2007-12-18; 1 file, -9/+9)
* Bring back Mickey's UVM anon change. Testing by thib@, beck@ and
  ckuethe@ for a while. Okay beck@, "it is good timing" deraadt@.
  (pedro, 2007-06-18; 1 file, -3/+3)