| Commit message | Author | Age | Files | Lines |
|
|
|
| |
ok mpi@
| |
Use a new flag, UVM_PLA_USERESERVE, to tell uvm_pmr_getpages() that using
kernel reserved pages is allowed.
Merge duplicated checks waking the pagedaemon to uvm_pmr_getpages().
Add two more pages to the amount reserved for the kernel to compensate the
fact that the pagedaemon may now consume an additional page.
Document locking of some uvmexp fields.
ok kettenis@
|
| |
Do not include <sys/kthread.h> where it is not needed and stop including
<sys/proc.h> in it.
ok visa@, anton@
|
| |
in reverse order from uvm. Use it in uvm_pmr_freepageq when the
pages appear to be in reverse order.
This greatly improves cases of massive page freeing as noticed by
mlarkin@ in his ongoing efforts to have the most gigantish buffer
cache on the planet.
Most of this work done by me with help and polish from kettenis@
at e2k19. Follow on commits to this will make use of this for
more efficient freeing of amaps and a few other things.
ok kettenis@ deraadt@
|
|
|
|
| |
ok mpi@
|
|
|
|
| |
ok visa@, jca@
|
| |
Equivalent to their unsuffixed counterparts except that (a) they take
a timeout in terms of nanoseconds, and (b) INFSLP, aka UINT64_MAX (not
zero) indicates that a timeout should not be set.
For now, zero nanoseconds is not a strictly valid invocation: we log a
warning on DIAGNOSTIC kernels if we see such a call. We still sleep
until the next tick in such a case, however. In the future this could
become some sort of poll... TBD.
To facilitate conversions to these interfaces: add inline conversion
functions to sys/time.h for turning your timeout into nanoseconds.
Also do a few easy conversions for warmup and to demonstrate how
further conversions should be done.
Lots of input from mpi@ and ratchov@. Additional input from tedu@,
deraadt@, mortimer@, millert@, and claudio@.
Partly inspired by FreeBSD r247787.
positive feedback from deraadt@, ok mpi@
|
| |
allocations will recover some memory from the dma_constraint range.
The allocation still fails; the intent is to ensure that the
pagedaemon will free some memory to possibly allow a subsequent
allocation to succeed.
This also adds a UVM_PLA_NOWAKE flag to allow special cases in the
buffer cache to not wake up the pagedaemon until they want to.
ok kettenis@
|
| |
|
| |
| |
vm_page structs go into three trees, uvm_objtree, uvm_pmr_addr, and
uvm_pmr_size. all these have been moved to RBT code.
this should give us a decent chunk of code space back.
|
|
|
|
| |
from ray@, ok jmc@
|
| |
it relies upon the fpageq lock for data consistency and
sleep/wakeup interlocking.
Therefore, code which modifies page zeroing thread data
or performs a wakeup of the thread must also hold the
fpageq lock.
Fix an instance where this was not the case.
ok kettenis@
|
| |
the page loaning code is already in the Attic.
ok kettenis@, beck@
|
| |
an exact match.
ok kettenis@
|
| |
| |
specify an address constraint even when free pages that meet the constraint
are still available. This happens because the old code was using the root
of the size tree as a starting point for a search down the address tree.
This meant only part of the address tree was searched, and that part could
very well not contain any of the pages that met the constraint. Instead,
always walk the address tree from its root if the list of single pages is
empty and the root of the size tree doesn't meet our constraints.
From Visa Hankala.
ok deraadt@
|
| |
| |
ok krw@ sthen@
| |
way we can do some useful kernel lock in parallel with other things and create
a reservoir of zeroed pages ready for use elsewhere. This should reduce
latency. The thread runs at the absolute lowest priority such that we don't
keep other kernel threads or userland from doing useful work.
Can be easily disabled by disabling the kthread_create(9) call in main().
Which perhaps we should do for non-MP kernels.
ok deraadt@, tedu@
|
|
|
|
| |
ok mpi@ kspillner@
|
|
|
|
| |
emphatic ok usual suspects, grudging ok miod
|
|
|
|
| |
in uvm_pmr_rootupdate(). Issue spotted and fix provided by Kieran Devlin.
|
| |
PG_PMAPMASK as all the possible pmap-specific bits (similar to the other
PG_fooMASK) to make sure MI code does not need to be updated, the next time
more bits are allocated to greedy pmaps.
No functional change, soon to be used by the (greedy) mips64 pmap.
|
| |
after analysis and testing. when flushing a large mmapped file, we can
eat up all the reserve bufs, but there's a good chance there will be more
clean ones available.
ok beck kettenis
|
| |
when you hit it. Separate out these tests.
ok millert@ kettenis@, phessler@, with miod@ bikeshedding.
|
| |
when uvm_wait gets hit from the pagedaemon. - code copied from uvm_wait.
ok guenther@, kettenis@
|
| |
Found by and original fix from Geoff Steckel.
While here, switch the assert that prevents this from happening from DEBUG to DIAGNOSTIC.
ok thib@, miod@
|
| |
Among other things, this fixes early panics on hppa systems whose memory
size is exactly 128MB.
Found the hard way and reported by fries@, not reported by beck@
|
|
|
|
| |
No callers, no functional change.
|
| |
This function will probably die before ever being called
from the in-tree code, since hibernate will move to RLE encoding.
No functional change, function had no callers.
|
| |
This is so I can move the pig allocator to subr_hibernate.
No functional change.
|
|
|
|
| |
back it out.
|
| |
More and more things are allocating outside of uvm_pagealloc these days making
it easy for something like the buffer cache to eat your last page with no
repercussions (other than a hung machine, of course).
ok ariane@ also ok ariane@ again after I spotted and fixed a possible underflow
problem in the calculation.
|
| |
1) Make the pagedaemon aware of the memory ranges and size of allocations
where memory is being requested, and pass this information on to
bufbackoff(), which will later (not yet) be used to ensure that the
buffer cache gets out of the way in the right area of memory.
Note that this commit does not yet make it *do* that - as currently
the buffer cache is all in dma-able memory and it will simply back
off.
2) Add uvm_pagerealloc_multi - to be used by the buffer cache code
for reallocating pages to particular regions.
much of this work by ariane, with smatterings of me, art, and oga
ok oga@, thib@, ariane@, deraadt@
|
| |
the extraction loop should stop.
No more 298 pages in 42 segments when asking for only 32 pages in 1 segment.
ok oga@
|
|
|
|
| |
ok beck@
|
| |
;
instead of
for (some; stuff; here);
reads easier.
ok ariane@
|
|
|
|
| |
ok ariane@
|
|
|
|
| |
ok ariane
|
| |
code block (not 'high_next' but 'low').
While here, change the KASSERT to a KDASSERT.
Pointed out by Amit Kulkarni.
ok thib@, miod@
|
|
|
|
| |
Pointed out and ok mlarkin@
|
| |
Allow reclaiming pages from all pools.
Allow zeroing all pages.
Allocate the more equal pig.
mlarkin@ needs this.
Not called yet.
ok mlarkin@, theo@
|
| |
| |
no binary change.
|
|
|
|
| |
ok oga@
|
| |
which contains the constraints for DMA/memory allocation for each
architecture, and dma_constraints which contains the range of addresses
that are dma accessible by the system.
This is based on ariane@'s physcontig diff, with lots of bugfixes and
the following additions by myself:
Introduce a new function pool_set_constraints() which sets the address
range for which we allocate pages for the pool from, this is now used
for the mbuf/mbuf cluster pools to keep them dma accessible.
The !direct archs no longer stuff pages into the kernel object in
uvm_km_getpage_pla but rather do a pmap_extract() in uvm_km_putpages.
Tested heavily by myself on i386, amd64 and sparc64. Some tests on
alpha and SGI.
"commit it" beck, art, oga, deraadt
"i like the diff" deraadt
|
| |
between an allocating process failing and waking up the pagedaemon
and the pagedaemon (since everything was dandy).
Rework the do ... while () logic searching for pages of a certain
memtype in a pmr into a while () loop: check whether we've found
enough pages and break out of the pmr, and check the memtype inside
the loop. This prevents us from doing an early return without enough
pages for the caller even though more pages exist.
comments and help from oga, style nit from miod.
OK miod@, oga@
|
| |
| |
uvm.pagedaemon_proc, do the wakeup on the
right ident.
this had been fixed, but the fix got backed
out during The Big Backout.
ok oga@
|