path: root/sys/uvm/uvm_km.c

Commit log (author, date, files changed, lines -/+):
...
* Make gcc stop whining. pointed out by ariane@.  (art, 2011-04-04, 1 file, -2/+2)
* Some minor fixes:  (art, 2011-04-04, 1 file, -2/+7)
    - Clarify a comment.
    - Change all the flags to chars from ints to make the structs smaller.
* New unified allocator of kernel memory.  (art, 2011-04-04, 1 file, -1/+245)
    We've reached the point where we have a dozen allocators that all do
    more or less the same thing, but slightly different, with slightly
    different semantics, slightly different default behaviors, and default
    behaviors that most callers don't expect or want. A random sample at
    the last general hackathon showed that no one could explain what all
    the different allocators did. And every time someone needed to do
    something slightly different, a new allocator was written.

    Unify everything. One single function to allocate multiples of
    PAGE_SIZE of kernel memory. Four arguments: size, how the va is
    allocated, how the pa is allocated, and misc parameters. The same
    parameters are passed to the free function, so that we don't need to
    guess what's going on.

    The functions are currently unused; we'll do one thing at a time to
    avoid a giant commit.

    looked at by lots of people, deraadt@ and beck@ are yelling at me to
    commit.
* make the comment explaining the kernel submaps a bit better.  (thib, 2010-08-26, 1 file, -9/+10)
    ok art@, oga@
* Remove the VM_KMPAGESFREE sysctl.  (thib, 2010-07-22, 1 file, -3/+1)
    After the pmemrange changes it was returning a constant 0; changing it
    to cope with those changes makes less sense than just removing it, as
    it provides the user with no useful information.

    sthen@ grepped the ports tree for me and found no hits, thanks!

    OK deraadt@, matthew@
* uvm_km_putpage is calling into tangly uvm guts again on non-pmap-direct archs.  (tedu, 2010-07-15, 1 file, -21/+57)
    Go back to something more like the previous design, and have the
    thread do the heavy lifting. Solves vmmaplk panics.

    ok deraadt oga thib
    [and even simple diffs are hard to get perfect. help from mdempsky
    and deraadt]
* no need to call uvm_km_free_wakeup for the kernel map; uvm_km_free is enough.  (thib, 2010-07-02, 1 file, -3/+3)
    ok tedu@, art@
* add an align argument to uvm_km_kmemalloc_pla.  (art, 2010-07-02, 1 file, -4/+4)
    Use uvm_km_kmemalloc_pla with the dma constraint to allocate kernel
    stacks. Yes, that means DMA is possible to kernel stacks, but only
    until we've fixed all the scary drivers.

    deraadt@ ok
* Drop the uvm_km_pages.mtx mutex in uvm_km_putpage before we free va's.  (thib, 2010-07-02, 1 file, -4/+8)
    Calls to uvm_km_free_wakeup can end up in uvm_mapent_alloc, which
    tries to grab this mutex.

    ok tedu@
* Add a no_constraint uvm_constraint_range; use it in the pool code.  (thib, 2010-06-29, 1 file, -1/+4)
    ok tedu@, beck@, oga@
* Move the uvm_km_pages struct declaration and watermark bounds to uvm_km.h.  (miod, 2010-06-28, 1 file, -18/+1)
    This lets md code peek at it. Also update the m68k !__HAVE_PMAP_DIRECT
    setup code to the recent uvm_km changes.

    ok thib@
* doh! Use pmap_kenter/pmap_kremove in the backend page allocator.  (thib, 2010-06-27, 1 file, -4/+4)
    Prevents recursion in pmap_enter, as seen on zaurus. ok art@

    Also, release the uvm_km_page.mtx before calling uvm_km_kmemalloc, as
    we can sleep there. ok oga@
* uvm constraints.  (thib, 2010-06-27, 1 file, -127/+178)
    Add two mandatory MD symbols: uvm_md_constraints, which contains the
    constraints for DMA/memory allocation for each architecture, and
    dma_constraints, which contains the range of addresses that are dma
    accessible by the system.

    This is based on ariane@'s physcontig diff, with lots of bugfixes, plus
    the following additions by myself:

    Introduce a new function, pool_set_constraints(), which sets the
    address range from which we allocate pages for a pool; this is now
    used for the mbuf/mbuf cluster pools to keep them dma accessible.

    The !direct archs no longer stuff pages into the kernel object in
    uvm_km_getpage_pla, but rather do a pmap_extract() in uvm_km_putpages.

    Tested heavily by myself on i386, amd64 and sparc64. Some tests on
    alpha and SGI.

    "commit it" beck, art, oga, deraadt
    "i like the diff" deraadt
* introduce a uvm_km_valloc_try function that won't grab a lower-level lock.  (tedu, 2010-02-12, 1 file, -4/+10)
    For use by the uvm pseg code. This is the path of least resistance
    until we sort out how many of these functions we really need.

    problem found by mikeb
    ok kettenis oga
* Add an extra argument to uvm_unmap_remove().  (miod, 2009-07-25, 1 file, -2/+2)
    It lets the caller tell whether removing holes or parts of them is
    allowed or not. Only allow hole removal in uvmspace_free(), when
    tearing the vmspace down.

    ok art@
* Put the PG_RELEASED changes diff back in.  (oga, 2009-07-22, 1 file, -18/+7)
    This has been tested very, very thoroughly on all archs we have,
    except 88k and 68k. Please see the cvs log for the individual commit
    messages.

    ok beck@, thib@
* date based reversion of uvm to the 4th of May.  (oga, 2009-06-17, 1 file, -1/+1)
    More backouts in line with previous ones; this appears to bring us
    back to a stable condition. A machine forced to 64MB of ram cycled
    10GB through swap with this diff and is still running as I type this.
    Other tests by ariane@ and thib@ also seem to show that it's alright.

    ok deraadt@, thib@, ariane@
* Backout all the PG_RELEASED changes.  (oga, 2009-06-16, 1 file, -7/+18)
    This is for the same reason as the earlier backouts: to avoid the bug
    either added or exposed sometime around c2k9. This *should* be the
    last one.

    prompted by deraadt@
    ok ariane@
* Second step of the PG_RELEASED cleanup.  (oga, 2009-05-05, 1 file, -18/+7)
    uvm_km deals with kernel memory, which is either part of one of the
    kernel maps or the main kernel object (a uao). If km_pgremove hits a
    busy page, just sleep on it; a busy page means some async io is in
    progress (and that is unlikely).

    We can remove the check in uvm_km_alloc1() for a released page, since
    now we will never end up with a removed but released page in the
    kernel map (due to the other chunk and the last diff).

    ok ariane@. Diff survived several make builds on amd64 and sparc64;
    also forced paging with ariane's evil program.
* On machines with less than 16MB of physical memory, reduce the lower bound of uvm_km_pages.  (miod, 2009-02-22, 1 file, -3/+5)
    ok deraadt@ tedu@
* Remove uvm_km_alloc_poolpage1.  (mikeb, 2009-02-11, 1 file, -84/+17)
    It serves no particular purpose now and is valid for
    __HAVE_PMAP_DIRECT archs only, though it implements both code paths.
    Put its code directly into uvm_km_getpage for PMAP_DIRECT archs.
    No functional change.

    ok tedu, art
* a better fix for the "uvm_km thread runs out of memory" problem.  (tedu, 2008-10-23, 1 file, -17/+11)
    Add a new arg to the backend so it can tell pool to slow down. When
    we get this flag, yield *after* putting the page in the pool's free
    list. Whatever we do, don't let the thread sleep.

    This makes things better by still letting the thread run when a huge
    pf request comes in, but without artificially increasing pressure on
    the backend by eating pages without feeding them forward.

    ok deraadt
* If we have one syscall that consumes large amounts of memory...  (art, 2008-06-14, 1 file, -3/+16)
    If we have one syscall that consumes large amounts of memory (for
    example an ioctl that loads bazillions of entries into a pf table),
    it would exhaust the pool of free pages and not let uvm_km_thread
    catch up until the pool was actually empty. This could be bad for
    non-sleeping allocators, since they can't wait for the memory while
    the big hog can.

    Instead of letting the syscall exhaust the pool, detect when we fall
    below the low watermark, wake the thread, sleep once, and let the
    thread catch up. This paces the huge consumer so that the more
    critical consumers never find an exhausted pool of pages.

    "seems reasonable" kettenis@
* export kernel uvm_km_pages_free as vm.kmpagesfree.  (deraadt, 2007-12-15, 1 file, -2/+3)
    ok tedu, tested jsg
* use a mutex for protection of the uvm_km list.  (tedu, 2007-12-11, 1 file, -11/+11)
    ok art
* Don't let pagedaemon wait for pages here.  (art, 2007-08-03, 1 file, -4/+14)
    We could trigger this easily when we hit swap before actually fully
    populating the buffer cache, which would lead to deadlocks.

    From pedro, tested by many, deraadt@ ok
* Change the loop test in uvm_km_kmemalloc from '<' to '!='.  (art, 2007-04-29, 1 file, -2/+2)
    Everything is aligned just fine, and in case we allocate the last
    piece of the address space we don't want wrap-around to cause us to
    fail.

    pointed out by and ok miod@
* Use the right size when we're backing out the allocation in uvm_km_kmemalloc.  (art, 2007-04-27, 1 file, -4/+3)
    "should probably go in" millert@, "I think it should too" deraadt@
* One more voff_t in the right place.  (art, 2007-04-15, 1 file, -2/+2)
    miod@ ok
* Use the right types for calculating the object offset.  (art, 2007-04-15, 1 file, -3/+4)
    miod@ ok
* Clean up prototypes; change vm_map_t to struct vm_map *.  (art, 2007-04-15, 1 file, -42/+16)
    miod@ ok
* While splitting flags and pqflags might have been a good idea in theory...  (art, 2007-04-13, 1 file, -5/+5)
    While splitting flags and pqflags might have been a good idea in
    theory to separate locking, on most modern machines this is not
    enough, since operations on short types touch other short types that
    share the same word in memory.

    Merge pg_flags and pqflags again and now use atomic operations to
    change the flags. Also bump wire_count to an int; pg_version might go
    int as well, just for alignment.

    tested by many, many. ok miod@
* Instead of managing pages for intrsafe maps in special objects...  (art, 2007-04-11, 1 file, -155/+24)
    Instead of managing pages for intrsafe maps in special objects (aka
    kmem_object) just so that we can remove them, just use pmap_extract
    to get the pages to free, and simplify a lot of code to not deal with
    the list of intrsafe maps, intrsafe objects, etc.

    miod@ ok
* Mechanically rename the "flags" and "version" fields in struct vm_page to "pg_flags" and "pg_version".  (art, 2007-04-04, 1 file, -14/+14)
    This makes them a bit easier to work with. Whoever uses generic names
    like this for a popular struct obviously doesn't read much code.

    Most architectures compile and there are no functionality changes.

    deraadt@ ok ("if something fails to compile, we fix that by hand")
* remove KERN_SUCCESS and use 0 instead.  (art, 2007-03-25, 1 file, -13/+9)
    eyeballed by miod@ and pedro@
* We haven't used mb_map for a long time now. Remove it.  (miod, 2006-11-29, 1 file, -2/+1)
* Add an alignment parameter to uvm_km_alloc1(), and change all callers to pass zero.  (miod, 2006-11-29, 1 file, -6/+3)
    This will be used shortly. From art@
* fix uvmhist #2: args are always u_long, so fix missing %d and %x, and no %ll.  (mickey, 2006-07-31, 1 file, -5/+5)
    No change for normal code.
* fix fmts for UVMHIST_LOG() entries, making it more useful on 64-bit archs.  (mickey, 2006-07-26, 1 file, -16/+16)
    miod@ ok
* limit pool backend preallocation to 2048 pages max (which only affects >2GB physmem).  (mickey, 2006-04-25, 1 file, -1/+3)
    miod@ toby@ ok
* deal with uvm_km_alloc() returning NULL.  (mickey, 2006-03-06, 1 file, -10/+14)
    tedu@ ok
* typo  (brad, 2005-10-06, 1 file, -2/+2)
* typos  (pedro, 2005-09-09, 1 file, -3/+3)
* add a new field to vm_space and use it to track the number of anon pages a process uses.  (tedu, 2005-05-24, 1 file, -2/+2)
    This is now the userland "data size" value.

    ok art deraadt tdeval. thanks testers.
* Import M_CANFAIL support from NetBSD.  (niklas, 2004-12-30, 1 file, -3/+5)
    Removes a nasty panic during low-mem scenarios, instead generating an
    ENOMEM backfeed.

    ok tedu@, prodded by many
* change the physmem divisor to 256.  (tedu, 2004-08-24, 1 file, -2/+2)
    Dividing by the page size was wrong. This does what I intended all
    along, without the contrived arithmetic screw-up.

    from discussions with mickey and deraadt
* adapt uvm_km_pages_lowat to physmem.  (tedu, 2004-08-24, 1 file, -2/+12)
    thanks testers. ok deraadt@
* #define __HAVE_PMAP_DIRECT and use it.  (tedu, 2004-07-13, 1 file, -14/+8)
    requested by art
* rename POOLPAGE macros to pmap_map_direct.  (tedu, 2004-06-09, 1 file, -41/+39)
    Break out the uvm_km_page bits for this case; no thread here.

    lots of testing tech@, deraadt@, naddy@, mickey@, ...
* explanatory comments for the uvm_km_page functions.  (tedu, 2004-05-31, 1 file, -3/+30)