path: root/sys/uvm/uvm_pmemrange.c
Commit message | Author | Date | Files | Lines
* spelling | jsg | 2021-03-12 | 1 | -5/+5
    ok mpi@
* Turn uvm_pagealloc() mp-safe by checking uvmexp global with pageqlock held. | mpi | 2020-12-01 | 1 | -1/+36
    Use a new flag, UVM_PLA_USERESERVE, to tell uvm_pmr_getpages() that using
    kernel reserved pages is allowed.

    Merge duplicated checks waking the pagedaemon into uvm_pmr_getpages().

    Add two more pages to the amount reserved for the kernel to compensate for
    the fact that the pagedaemon may now consume an additional page.

    Document locking of some uvmexp fields.

    ok kettenis@
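    For illustration only, a minimal sketch of the kind of reserve check this
    flag controls; the helper name example_reserve_ok() and the exact use of
    the uvmexp fields are assumptions, not the committed uvm_pmr_getpages()
    code:

        #include <sys/param.h>
        #include <uvm/uvm.h>

        /*
         * Sketch: callers that do not pass UVM_PLA_USERESERVE may not dip
         * into the pages reserved for the kernel (uvmexp.reserve_kernel).
         */
        int
        example_reserve_ok(int flags, psize_t count)
        {
                if ((flags & UVM_PLA_USERESERVE) == 0 &&
                    uvmexp.free - (int)count < uvmexp.reserve_kernel)
                        return 0;       /* would eat into the kernel reserve */
                return 1;
        }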
* Cleanup <sys/kthread.h> and <sys/proc.h> includes. | mpi | 2020-02-18 | 1 | -2/+2
    Do not include <sys/kthread.h> where it is not needed and stop including
    <sys/proc.h> in it.

    ok visa@, anton@
* Add uvm_pmr_remove_1strange_reverse to efficiently free pages | beck | 2020-01-01 | 1 | -6/+135
    in reverse order from uvm. Use it in uvm_pmr_freepageq when the pages
    appear to be in reverse order. This greatly improves cases of massive
    page freeing as noticed by mlarkin@ in his ongoing efforts to have the
    most gigantish buffer cache on the planet.

    Most of this work done by me with help and polish from kettenis@ at e2k19.

    Follow on commits to this will make use of this for more efficient
    freeing of amaps and a few other things.

    ok kettenis@ deraadt@
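    As a rough illustration of the "pages appear to be in reverse order" test
    mentioned above (the real uvm_pmr_freepageq() heuristic may differ), one
    can peek at the first two pages of the list and compare their physical
    addresses:

        #include <sys/param.h>
        #include <sys/queue.h>
        #include <uvm/uvm.h>

        /* Sketch: descending physical addresses suggest a reversed list. */
        static int
        example_pglist_reversed(struct pglist *pgl)
        {
                struct vm_page *first = TAILQ_FIRST(pgl);
                struct vm_page *next;

                if (first == NULL ||
                    (next = TAILQ_NEXT(first, pageq)) == NULL)
                        return 0;
                return VM_PAGE_TO_PHYS(next) < VM_PAGE_TO_PHYS(first);
        }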
* convert infinite msleep(9) to msleep_nsec(9) | jsg | 2019-12-30 | 1 | -3/+3
    ok mpi@
* Convert infinite sleeps to {m,t}sleep_nsec(9). | mpi | 2019-12-08 | 1 | -2/+2
    ok visa@, jca@
* Add tsleep_nsec(9), msleep_nsec(9), and rwsleep_nsec(9). | cheloha | 2019-07-03 | 1 | -2/+3
    Equivalent to their unsuffixed counterparts except that (a) they take a
    timeout in terms of nanoseconds, and (b) INFSLP, aka UINT64_MAX (not zero)
    indicates that a timeout should not be set.

    For now, zero nanoseconds is not a strictly valid invocation: we log a
    warning on DIAGNOSTIC kernels if we see such a call. We still sleep until
    the next tick in such a case, however. In the future this could become
    some sort of poll... TBD.

    To facilitate conversions to these interfaces: add inline conversion
    functions to sys/time.h for turning your timeout into nanoseconds. Also
    do a few easy conversions for warmup and to demonstrate how further
    conversions should be done.

    Lots of input from mpi@ and ratchov@. Additional input from tedu@,
    deraadt@, mortimer@, millert@, and claudio@.

    Partly inspired by FreeBSD r247787.

    positive feedback from deraadt@, ok mpi@
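    A small usage sketch of the new interfaces; the wait channel and wait
    message below are made up for the example:

        #include <sys/param.h>
        #include <sys/systm.h>
        #include <sys/time.h>

        static int example_ident;       /* made-up wait channel */

        void
        example_sleeps(void)
        {
                /* Before: a timeout of 0 ticks meant "sleep forever". */
                tsleep(&example_ident, PWAIT, "exmpl", 0);

                /* After: INFSLP (UINT64_MAX) spells out the infinite sleep. */
                tsleep_nsec(&example_ident, PWAIT, "exmpl", INFSLP);

                /* A one-second timeout via the sys/time.h conversion helper. */
                tsleep_nsec(&example_ident, PWAIT, "exmpl", SEC_TO_NSEC(1));
        }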
* Ensure that pagedaemon wakeups as a result of failed UVM_PLA_NOWAIT | beck | 2019-05-09 | 1 | -3/+14
    allocations will recover some memory from the dma_constraint range.
    The allocation still fails; the intent is to ensure that the pagedaemon
    will free some memory to possibly allow a subsequent allocation to
    succeed.

    This also adds a UVM_PLA_NOWAKE flag to allow special cases in the buffer
    cache to not wake up the pagedaemon until they want to.

    ok kettenis@
* fix some DEBUG code so it's using the right rb tree code | dlg | 2016-09-16 | 1 | -11/+11
* move uvm_pmemrange_addr from RB macros to RBT functions | dlg | 2016-09-16 | 1 | -11/+13
* move the vm_page struct from being stored in RB macro trees to RBT functions | dlg | 2016-09-16 | 1 | -41/+39
    vm_page structs go into three trees: uvm_objtree, uvm_pmr_addr, and
    uvm_pmr_size. all these have been moved to RBT code.

    this should give us a decent chunk of code space back.
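    The shape of the RB -> RBT conversion, with an invented struct and tree
    name (the real trees are the ones listed above):

        #include <sys/tree.h>

        struct item {
                RBT_ENTRY(item) entry;
                int             key;
        };

        static inline int
        item_cmp(const struct item *a, const struct item *b)
        {
                return a->key < b->key ? -1 : a->key > b->key;
        }

        RBT_HEAD(item_tree, item);
        RBT_PROTOTYPE(item_tree, item, entry, item_cmp);
        RBT_GENERATE(item_tree, item, entry, item_cmp);

        void
        example_insert(struct item_tree *t, struct item *i)
        {
                /*
                 * Was RB_INSERT(); RBT_INSERT() calls functions emitted once
                 * by RBT_GENERATE(), which is where the code-size win the
                 * commit mentions comes from.
                 */
                RBT_INSERT(item_tree, t, i);
        }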
* Therefor -> Therefore (where appropriate) | tb | 2016-01-29 | 1 | -2/+2
    from ray@, ok jmc@
* Since the page zeroing thread runs without the kernel lock, | blambert | 2015-12-06 | 1 | -1/+3
    it relies upon the fpageq lock for data consistency and sleep/wakeup
    interlocking. Therefore, code which modifies page zeroing thread data or
    performs a wakeup of the thread must also hold the fpageq lock. Fix an
    instance where this was not the case.

    ok kettenis@
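    The locking rule, sketched with made-up names: example_zero_lock stands
    in for the fpageq lock and example_zero_work for the data the zeroing
    thread reads and sleeps on.

        #include <sys/param.h>
        #include <sys/systm.h>
        #include <sys/mutex.h>

        struct mutex example_zero_lock = MUTEX_INITIALIZER(IPL_VM);
        int example_zero_work;

        void
        example_kick_zero_thread(void)
        {
                mtx_enter(&example_zero_lock);
                example_zero_work = 1;          /* modify data under the lock */
                wakeup(&example_zero_work);     /* wake while still holding it */
                mtx_leave(&example_zero_lock);
        }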
* Remove the unused loan_count field and the related uvm logic. Most of | visa | 2015-08-21 | 1 | -3/+1
    the page loaning code is already in the Attic.

    ok kettenis@, beck@
* Make uvm_pmr_isfree() work correctly when RB_NFIND() returns | visa | 2015-08-19 | 1 | -2/+2
    an exact match.

    ok kettenis@
* uvm_pmr_get1page() should return psize_t, not int; dhill@ | miod | 2015-06-27 | 1 | -3/+3
* Fix a bug that causes uvm_pmr_get1page() to fail for allocations that | kettenis | 2015-06-20 | 1 | -4/+28
    specify an address constraint even when free pages that meet the
    constraint are still available. This happens because the old code was
    using the root of the size tree as a starting point for a search down the
    address tree. This meant only part of the address tree was searched, and
    that part could very well not contain any of the pages that met the
    constraint. Instead, always walk the address tree from its root if the
    list of single pages is empty and the root of the size tree doesn't meet
    our constraints.

    From Visa Hankala.

    ok deraadt@
* bzero -> memset | tedu | 2014-11-13 | 1 | -2/+2
* Initialize uvm_pagezero_thread()'s page list variable. | guenther | 2014-10-03 | 1 | -1/+2
    ok krw@ sthen@
* Introduce a thread for zeroing pages without holding the kernel lock. This | kettenis | 2014-10-03 | 1 | -4/+50
    way we can do some useful work in parallel with other things and create a
    reservoir of zeroed pages ready for use elsewhere. This should reduce
    latency. The thread runs at the absolute lowest priority such that we
    don't keep other kernel threads or userland from doing useful work.

    Can be easily disabled by disabling the kthread_create(9) call in main().
    Which perhaps we should do for non-MP kernels.

    ok deraadt@, tedu@
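    Roughly what such a thread looks like; the names here are illustrative
    and the real uvm_pagezero_thread() differs in detail:

        #include <sys/param.h>
        #include <sys/systm.h>
        #include <sys/kthread.h>

        static int example_zero_work;           /* made-up wait channel */

        void
        example_zero_thread(void *arg)
        {
                for (;;) {
                        /*
                         * Take un-zeroed pages off the free list, zero them,
                         * flag them as pre-zeroed, then sleep until more work
                         * shows up (a 0-tick timeout means sleep forever).
                         */
                        tsleep(&example_zero_work, PWAIT, "zerop", 0);
                }
        }

        void
        example_start_zero_thread(void)
        {
                struct proc *p;

                /* As the commit notes, removing this call disables it. */
                if (kthread_create(example_zero_thread, NULL, &p, "zerothread"))
                        panic("example_start_zero_thread");
        }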
* remove unneeded proc.h includes | jsg | 2014-09-14 | 1 | -2/+1
    ok mpi@ kspillner@
* compress code by turning four line comments into one line comments. | tedu | 2014-04-13 | 1 | -31/+10
    emphatic ok usual suspects, grudging ok miod
* Fix logic error and prevent theoretical infinite loop in the worst case scenario | miod | 2014-04-05 | 1 | -3/+6
    in uvm_pmr_rootupdate(). Issue spotted and fix provided by Kieran Devlin.
* Allow for two more pmap-specific bits in vm_page pg_flags. Define | miod | 2014-03-21 | 1 | -5/+3
    PG_PMAPMASK as all the possible pmap-specific bits (similar to the other
    PG_fooMASK) to make sure MI code does not need to be updated the next
    time more bits are allocated to greedy pmaps.

    No functional change, soon to be used by the (greedy) mips64 pmap.
* add some more bufbackoff calls. uvm_wait optimistically (?), uvm_wait_pla | tedu | 2014-02-06 | 1 | -1/+16
    after analysis and testing. when flushing a large mmapped file, we can
    eat up all the reserve bufs, but there's a good chance there will be more
    clean ones available.

    ok beck kettenis
* 7 &&'ed elements in a single KASSERT involving complex tests is just painful | beck | 2013-01-29 | 1 | -8/+9
    when you hit it. Separate out these tests.

    ok millert@ kettenis@, phessler@, with miod@ bikeshedding.
* Stop hiding when this is failing - make this as obvious as it is | beck | 2013-01-21 | 1 | -4/+10
    when uvm_wait gets hit from the pagedaemon.
    - code copied from uvm_wait.

    ok guenther@, kettenis@
* Prevent integer wrap-around in pmemrange. | ariane | 2012-01-05 | 1 | -6/+6
    Found by and original fix from Geoff Steckel.

    While here, switch the assert that prevents this from happening from
    DEBUG to DIAGNOSTIC.

    ok thib@, miod@
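    The general shape of such a guard (names are illustrative; the committed
    check lives inside the pmemrange code itself):

        #include <sys/types.h>

        /* A (start, count) pair of page numbers must not wrap past zero. */
        int
        example_range_ok(paddr_t start, psize_t count)
        {
                return start + count >= start;
        }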
* Be sure not to access the vm_page array out of bounds in uvm_pmr_freepages(). | miod | 2011-12-03 | 1 | -5/+5
    Among other things, this fixes early panics on hppa systems whose memory
    size is exactly 128MB.

    Found the hard way and reported by fries@, not reported by beck@
* Move uvm_pmr_alloc_pig to kern/subr_hibernate.c | ariane | 2011-07-08 | 1 | -47/+1
    No callers, no functional change.
* Move uvm_pmr_zero_everything() to subr_hibernate. | ariane | 2011-07-08 | 1 | -39/+1
    This function will probably die before ever being called from the in-tree
    code, since hibernate will move to RLE encoding.

    No functional change, function had no callers.
* Expose pmemrange internal functions via pmemrange.h. | ariane | 2011-07-08 | 1 | -20/+1
    This is so I can move the pig allocator to subr_hibernate.

    No functional change.
* some machines don't boot with the previous uvm reserve enforcement diff. | tedu | 2011-07-08 | 1 | -37/+1
    back it out.
* Move the uvm reserve enforcement from uvm_pagealloc to pmemrange. | oga | 2011-07-07 | 1 | -1/+37
    More and more things are allocating outside of uvm_pagealloc these days,
    making it easy for something like the buffer cache to eat your last page
    with no repercussions (other than a hung machine, of course).

    ok ariane@

    also ok ariane@ again after I spotted and fixed a possible underflow
    problem in the calculation.
* uvm changes for buffer cache improvements. | beck | 2011-07-06 | 1 | -8/+99
    1) Make the pagedaemon aware of the memory ranges and size of allocations
       where memory is being requested, and pass this information on to
       bufbackoff(), which will later (not yet) be used to ensure that the
       buffer cache gets out of the way in the right area of memory.

       Note that this commit does not yet make it *do* that - as currently
       the buffer cache is all in dma-able memory and it will simply back
       off.

    2) Add uvm_pagerealloc_multi - to be used by the buffer cache code for
       reallocating pages to particular regions.

    much of this work by ariane, with smatterings of me, art, and oga

    ok oga@, thib@, ariane@, deraadt@
* Don't dereference the item past the end of the array to figure out if | ariane | 2011-07-05 | 1 | -6/+4
    the extraction loop should stop.

    No more 298 pages in 42 segments when asking for only 32 pages in 1
    segment.

    ok oga@
* Validate pmemrange result, enabling early catching of bugs in the code. | ariane | 2011-06-22 | 1 | -1/+39
    ok beck@
* for (some; stuff; here) | oga | 2011-05-30 | 1 | -2/+3
        ;
    instead of
        for (some; stuff; here);
    reads easier.

    ok ariane@
* s/hart/heart/ to make more sense (another dutchism). | oga | 2011-05-30 | 1 | -2/+2
    ok ariane@
* fix uvm_pmr_alloc_pig to return the proper pig range size | mlarkin | 2011-04-06 | 1 | -1/+3
    ok ariane
* Test iterated variable instead of a temporary variable from the previous | ariane | 2011-04-05 | 1 | -3/+3
    code block (not 'high_next' but 'low'). While here, change the KASSERT
    to a KDASSERT.

    Pointed out by Amit Kulkarni.

    ok thib@, miod@
* Remove debug code. | ariane | 2011-04-04 | 1 | -11/+1
    Pointed out and ok mlarkin@
* Helper functions for suspend. | ariane | 2011-04-03 | 1 | -1/+93
    Allow reclaiming pages from all pools.
    Allow zeroing all pages.
    Allocate the more equal pig.

    mlarkin@ needs this. Not called yet.

    ok mlarkin@, theo@
* Fix an uninitialized value leading to bogus KASSERT in uvm_pmr_use_inc(). | miod | 2010-08-28 | 1 | -1/+2
* We don't do CamelCase: fix style(9) violations in goto labels. | oga | 2010-07-01 | 1 | -16/+16
    no binary change.
* skip empty ranges in uvm_pmr_assertvalid; | thib | 2010-06-29 | 1 | -1/+5
    ok oga@
* uvm constraints. Add two mandatory MD symbols, uvm_md_constraints | thib | 2010-06-27 | 1 | -24/+37
    which contains the constraints for DMA/memory allocation for each
    architecture, and dma_constraints which contains the range of addresses
    that are dma accessible by the system.

    This is based on ariane@'s physcontig diff, with lots of bugfixes and the
    following additions by myself:

    Introduce a new function pool_set_constraints() which sets the address
    range for which we allocate pages for the pool from; this is now used for
    the mbuf/mbuf cluster pools to keep them dma accessible.

    The !direct archs no longer stuff pages into the kernel object in
    uvm_km_getpage_pla but rather do a pmap_extract() in uvm_km_putpages.

    Tested heavily by myself on i386, amd64 and sparc64. Some tests on alpha
    and SGI.

    "commit it" beck, art, oga, deraadt
    "i like the diff" deraadt
* Fix a bug in uvm_pmr_get1page() which could cause us to bounce | thib | 2010-06-23 | 1 | -8/+11
    between an allocating process failing and waking up the pagedaemon, and
    the pagedaemon (since everything was dandy).

    Rework the do ... while () logic searching for pages of a certain memtype
    in a pmr into a while () loop where we check if we've found enough pages
    and break out of the pmr, and check the memtype inside the loop. This
    prevents us from doing an early return without enough pages for the
    caller even though more pages exist.

    comments and help from oga, style nit from miod.

    OK miod@, oga@
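    The control-flow change, sketched with invented helpers
    (example_next_candidate() and example_memtype() stand in for the real
    walk over a pmr):

        #include <sys/param.h>

        struct vm_page;
        struct vm_page *example_next_candidate(void);   /* assumed helper */
        int example_memtype(struct vm_page *);          /* assumed helper */

        int
        example_collect(int want, int memtype)
        {
                int found = 0;

                /*
                 * while () instead of do ... while (): both the memtype and
                 * the "enough pages yet?" condition are tested inside the
                 * loop, so we never return early with too few pages.
                 */
                while (found < want) {
                        struct vm_page *pg = example_next_candidate();

                        if (pg == NULL)
                                break;          /* nothing left to scan */
                        if (example_memtype(pg) != memtype)
                                continue;       /* wrong memtype, keep going */
                        found++;
                }
                return found;
        }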
* fix typos in comments: lineair -> linear. | thib | 2010-06-10 | 1 | -2/+2
* the pagedaemon sleeps on uvm.pagedaemon not | thib | 2010-06-10 | 1 | -2/+2
    uvm.pagedaemon_proc; do the wakeup on the right ident.

    this had been fixed, but the fix got backed out during The Big Backout.

    ok oga@