path: root/sys/uvm/uvm_init.c
Commit log, most recent first. Each entry: commit message [author, date, files changed, lines deleted/added].
* Sync some comments in order to reduce the difference with NetBSD. [mpi, 2021-03-20, 1 file, -21/+21]
  No functional change. ok kettenis@
* Use per-CPU counters for fault and stats counters reached in uvm_fault(). [mpi, 2020-12-28, 1 file, -1/+11]
  ok kettenis@, dlg@
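  For reference, a minimal sketch of the per-CPU counter pattern from
  <sys/percpu.h>; the counter index and variable names below are made
  up for illustration, and the real fault counters live elsewhere in
  the uvm code:

      #include <sys/percpu.h>

      enum { CNT_FAULTS, CNT_MAX };     /* hypothetical counter indices */

      struct cpumem *uvm_counters;

      void
      counters_setup(void)
      {
              /* one uint64_t slot per counter, replicated per CPU */
              uvm_counters = counters_alloc(CNT_MAX);
      }

      void
      count_fault(void)
      {
              /* lockless increment of this CPU's copy */
              counters_inc(uvm_counters, CNT_FAULTS);
      }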
* reorder uvm init to avoid use before initialisation. [dlg, 2017-05-11, 1 file, -7/+12]
  The particular use-before-init was in uvm_init step 6, which calls
  kmeminit to set up malloc(9), which calls uvm_km_zalloc, which calls
  pmap_enter, which calls pool_get, which tries to allocate a page
  using km_alloc, which isn't initialised until step 9 in uvm_init.

  uvm_km_page_init calls kthread_create though, which uses malloc
  internally, so it can't be reordered before malloc init. To cope with
  this, uvm_km_page_init is split up: it sets up the subsystem, and is
  called before kmeminit. The thread init is moved to
  uvm_km_page_lateinit, which is called after kmeminit in uvm_init.
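  A sketch of the shape of that split; the two function names come from
  the commit message, everything else (bodies, thread entry point) is
  illustrative:

      /* Runs before kmeminit(): must not allocate with malloc(9). */
      void
      uvm_km_page_init(void)
      {
              /* set up the km_page free list and locks; no allocations */
      }

      extern void uvm_km_thread(void *);      /* illustrative name */

      /* Runs after kmeminit(): kthread_create() uses malloc internally. */
      void
      uvm_km_page_lateinit(void)
      {
              if (kthread_create(uvm_km_thread, NULL, NULL, "kmthread"))
                      panic("uvm_km_page_lateinit: no thread");
      }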
* Remove some includes that include-what-you-use claims don't have any direct symbols used. [jsg, 2015-03-14, 1 file, -2/+1]
  Tested for indirect use by compiling amd64/i386/sparc64 kernels.
  ok tedu@ deraadt@
* Introduce VM_KERNEL_SPACE_SIZE as a replacement for (VM_MAX_KERNEL_ADDRESS - VM_MIN_KERNEL_ADDRESS). [miod, 2015-02-07, 1 file, -4/+9]
  This will allow these to no longer be constants in the future.
  ok guenther@
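  The commit message spells out the expression being replaced, so the
  default presumably reduces to something like (where exactly this
  lands is machine-dependent):

      #define VM_KERNEL_SPACE_SIZE \
              (VM_MAX_KERNEL_ADDRESS - VM_MIN_KERNEL_ADDRESS)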
* Prefer MADV_* over POSIX_MADV_* in kernel for consistency: the latter doesn't have all the values and therefore can't be used everywhere. [guenther, 2014-12-17, 1 file, -3/+3]
  ok deraadt@ kettenis@
* Use MAP_INHERIT_* for the 'inh' argument to the UVM_MAPFLAG() macro, eliminating the must-be-kept-in-sync UVM_INH_* macros. [guenther, 2014-12-15, 1 file, -3/+3]
  ok deraadt@ tedu@
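  A hypothetical call site, assuming the usual UVM_MAPFLAG(prot,
  maxprot, inh, advice, flags) argument order and the flag names from
  the neighbouring commits; the point is that 'inh' now takes a
  MAP_INHERIT_* value from <sys/mman.h> directly:

      int mapflags = UVM_MAPFLAG(PROT_READ | PROT_WRITE, PROT_MASK,
          MAP_INHERIT_COPY, MADV_NORMAL, 0);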
* Replace a plethora of historical protection options with just PROT_NONE, PROT_READ, PROT_WRITE, and PROT_EXEC from mman.h. [deraadt, 2014-11-16, 1 file, -5/+5]
  PROT_MASK is introduced as the one true way of extracting those bits.
  Remove the UVM_ADV_* wrapper, using the standard names.
  ok doug guenther kettenis
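  With PROT_MASK as the commit describes it, flag extraction is a plain
  mask; a trivial example (the function name is illustrative):

      #include <sys/mman.h>

      int
      prot_of(int mapflags)
      {
              /* PROT_MASK covers PROT_READ | PROT_WRITE | PROT_EXEC */
              return (mapflags & PROT_MASK);
      }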
* remove unneeded proc.h includes [jsg, 2014-09-14, 1 file, -2/+1]
  ok mpi@ kspillner@
* Chuck Cranor rescinded clauses in his license on the 2nd of February 2011 in NetBSD. [jsg, 2014-07-11, 1 file, -8/+1]
  http://marc.info/?l=netbsd-source-changes&m=129658899212732&w=2
  http://marc.info/?l=netbsd-source-changes&m=129659095515558&w=2
  http://marc.info/?l=netbsd-source-changes&m=129659157916514&w=2
  http://marc.info/?l=netbsd-source-changes&m=129665962324372&w=2
  http://marc.info/?l=netbsd-source-changes&m=129666033625342&w=2
  http://marc.info/?l=netbsd-source-changes&m=129666052825545&w=2
  http://marc.info/?l=netbsd-source-changes&m=129666922906480&w=2
  http://marc.info/?l=netbsd-source-changes&m=129667725518082&w=2
* compress code by turning four-line comments into one-line comments. [tedu, 2014-04-13, 1 file, -18/+4]
  emphatic ok usual suspects, grudging ok miod
* uvm_fault() will try to fault neighbouring pages for the MADV_NORMAL case, which is the default, unless the fault call is explicitly used to wire a given page. [miod, 2014-04-03, 1 file, -1/+7]
  The number of pages being faulted in was borrowed from the FreeBSD VM
  code about 15 years ago, at a time when FreeBSD was only reliably
  running on 4KB page size systems. It is questionable whether faulting
  the same number of pages is a good idea on platforms where the page
  size is larger, as it may cause too much I/O.

  Add an uvmfault_init() routine, which will compute the proper number
  of pages at runtime, depending upon the actual page size, attempting
  to fault in the same overall size the previous code would have done
  with 4KB pages. ok tedu@
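  A sketch of the computation (the byte target and the advice-array
  layout are assumptions, not the committed code): keep the faulted-in
  byte count constant rather than the page count.

      struct uvm_advice { int nback, nforw; };        /* assumed shape */
      extern struct uvm_advice uvmadvice[];           /* indexed by MADV_* */

      void
      uvmfault_init(void)
      {
              int npages;

              /* cover the same bytes the old 4-page/4KB default did */
              npages = 16384 / PAGE_SIZE;
              uvmadvice[MADV_NORMAL].nforw = npages;
              uvmadvice[MADV_NORMAL].nback = npages - 1;
      }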
* Reduce installmedia pressure from new vmmap. [ariane, 2012-03-15, 1 file, -1/+3]
  Has fewer special allocators on install media (where they aren't
  required anyway). Bonus: makes the vmmap initialization code easier
  to read.
* New vmmap implementation. [ariane, 2012-03-09, 1 file, -1/+13]
  No oks (it is really a pain to review properly); extensively tested,
  I'm confident it'll be stable. "Now is the time" from several icb
  inhabitants. The diff provides:
  - ability to specify different allocators for different regions/maps
  - a simpler implementation of the current allocator
  - currently in compatibility mode: it will generate similar addresses
    as the old allocator
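  A hypothetical shape for the pluggable-allocator idea: each address
  selector exports a function table and a map region points at one.
  These names are illustrative, not the actual interface.

      struct uvm_addr_functions {
              int     (*uaddr_select)(struct vm_map *, vsize_t,
                          vaddr_t *);
              void    (*uaddr_free)(struct vm_map *, vaddr_t, vsize_t);
      };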
* No "\n" needed at the end of panic() strings. [krw, 2010-08-07, 1 file, -3/+3]
  Bogus chunks pointed out by matthew@ and miod@. No cookies for marco@
  and jasper@. ok deraadt@ miod@ matthew@ jasper@ marco@
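  That is, since panic(9) supplies its own line termination:

      panic("uvm_init: bogus chunk");   /* not "uvm_init: bogus chunk\n" */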
* need pool.h to initialize the dma allocator [deraadt, 2010-07-13, 1 file, -1/+2]
* dma_alloc() and dma_free(). [deraadt, 2010-07-13, 1 file, -1/+6]
  This is a thin shim on top of a bag of pools, sized by powers of 2,
  which are constrained to dma memory. ok matthew tedu thib
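  A sketch of the shim (the pool array, sizes and bounds below are made
  up): round the request up to the next power of two and take it from
  the matching DMA-constrained pool.

      extern struct pool dmapools[];  /* hypothetical: 16, 32, ... bytes */
      #define DMA_MINSHIFT    4       /* smallest pool: 16 bytes */
      #define DMA_NPOOLS      9       /* largest pool: PAGE_SIZE-ish */

      void *
      dma_alloc(size_t size, int flags)
      {
              int i;

              for (i = 0; i < DMA_NPOOLS; i++)
                      if (size <= (1UL << (i + DMA_MINSHIFT)))
                              return pool_get(&dmapools[i], flags);
              return NULL;            /* larger than the biggest pool */
      }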
* reintroduce the uvm_tree commit. [oga, 2009-08-06, 1 file, -5/+3]
  Now instead of the global object hashtable, we have a per-object
  tree. Testing shows no performance difference and a slight code
  shrink. OTOH when locking is more fine-grained this should be faster
  because it avoids lock contention on uvm.hashlock. ok thib@, art@.
* date-based reversion of uvm to the 4th of May. [oga, 2009-06-16, 1 file, -1/+1]
  We still have no idea why this stops the crashes, but it does. A
  machine forced to 64MB of RAM cycled 10GB through swap with this diff
  and is still running as I type this. Other tests by ariane@ and thib@
  also seem to show that it's alright. ok deraadt@, thib@, ariane@
* Backout all changes to uvm after pmemrange (which will be backed out separately). [oga, 2009-06-16, 1 file, -1/+2]
  A change at or just before the hackathon has either exposed or added
  a very, very nasty memory corruption bug that is giving us hell right
  now. So in the interest of kernel stability these diffs are being
  backed out until such a time as that corruption bug has been found
  and squashed; then the ones that are proven good may slowly return.

  A quick hitlist of the main commits this backs out, mine:
  - uvm_objwire
  - the lock change in uvm_swap.c
  - using trees for uvm objects instead of the hash
  - removing the pgo_releasepg callback
  art@'s:
  - putting pmap_page_protect(VM_PROT_NONE) in uvm_pagedeactivate(),
    since all callers called that just prior anyway
  ok beck@, ariane@. prompted by deraadt@.
* Instead of the global hash table with the terrible hash function and a global lock, switch the uvm object pages to being kept in a per-object RB_TREE. [oga, 2009-06-02, 1 file, -2/+1]
  Right now this is approximately the same speed, but cleaner. When
  biglock usage is reduced this will improve concurrency by removing
  lock contention. ok beck@ art@. Thanks to jasper for the speed
  testing.
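  A sketch of such a per-object tree with <sys/tree.h>; the struct
  layout, field and function names are illustrative, not the actual
  uvm definitions:

      #include <sys/tree.h>

      struct vm_page {
              RB_ENTRY(vm_page)       objt;   /* tree linkage */
              voff_t                  offset; /* key: offset in object */
              /* ... */
      };

      static int
      uvm_pagecmp(struct vm_page *a, struct vm_page *b)
      {
              return (a->offset < b->offset ? -1 : a->offset > b->offset);
      }

      RB_HEAD(uvm_objtree, vm_page);
      RB_GENERATE_STATIC(uvm_objtree, vm_page, objt, uvm_pagecmp);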
* a few more memset changes. [oga, 2009-05-02, 1 file, -4/+2]
  Two cases of pool_get() + memset(0) -> pool_get(..., PR_ZERO); 1.5
  cases of global variables that are already zeroed, so don't zero
  them. ok ariane@, comments on stuff I'd missed from blambert@ and
  cnst@.
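  The pattern, with a made-up pool name:

      /* before: allocate, then zero by hand */
      p = pool_get(&somepool, PR_WAITOK);
      memset(p, 0, sizeof(*p));

      /* after: let the pool layer zero the item */
      p = pool_get(&somepool, PR_WAITOK | PR_ZERO);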
* While working on some stuff in uvm I've gotten REALLY sick of reading K&R function declarations, so switch them all over to ANSI style, in accordance with the prophecy. [oga, 2009-03-20, 1 file, -2/+2]
  "go for it" art@
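  The conversion looks like this (foo is a placeholder):

      /* K&R (old style) */
      int
      foo(bar, baz)
              int bar;
              char *baz;
      {
              return (bar);
      }

      /* ANSI C (new style) */
      int
      foo(int bar, char *baz)
      {
              return (bar);
      }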
* init uvm_km_page memory a bit earlier to reduce pressure on pmap bootstrap pages. [kurt, 2008-11-24, 1 file, -3/+6]
  "looks good/no problems with it" tedu@ miod@ art@
* Revert the change to use pools for <= PAGE_SIZE allocations. [kettenis, 2008-10-18, 1 file, -2/+3]
  It changes the pressure on the uvm system, uncovering several bugs.
  Some of those bugs result in provable deadlocks. We'll have to
  reconsider integrating this diff again after fixing those bugs.
  ok art@
* Use pools to do allocations for all sizes <= PAGE_SIZE. [art, 2008-09-29, 1 file, -3/+2]
  This will allow us to escape the limitations of kmem_map. At this
  moment, the per-type limits are still enforced for all sizes, but we
  might loosen that limit in the future after some thinking. Original
  diff from Mickey in kernel/5761; I massaged it a little to obey the
  per-type limits. miod@ ok
* Bring back Mickey's UVM anon change. [pedro, 2007-06-18, 1 file, -6/+4]
  Testing by thib@, beck@ and ckuethe@ for a while. Okay beck@, "it is
  good timing" deraadt@.
* Truncate the addresses for the deadbeef values so that they don't need to be page aligned and can contain more "noise". [art, 2007-05-09, 1 file, -3/+3]
  From mickey. art@ ok
* Allow machine-dependent overrides for the ``deadbeef'' sentinel values, and make sure that nothing can ever be mapped at these addresses. [miod, 2007-04-12, 1 file, -1/+18]
  Only i386 overrides the default for now. From mickey@, ok art@ miod@
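  A sketch of how such an override with a default might look; the macro
  name, mechanism and value here are all illustrative:

      /* a port's <machine/vmparam.h> may define its own sentinel */
      #ifndef DEADBEEF0
      #define DEADBEEF0       0xdeadbeef      /* illustrative default */
      #endif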
* Back out the anon change. [deraadt, 2006-07-13, 1 file, -4/+6]
  Apparently it was tested by a few, but most of us did not see it or
  get a chance to test it before it was committed. It broke cvs, in the
  ami driver, making it not succeed at seeing its devices.
* from netbsd: make anons dynamically allocated from a pool. [mickey, 2006-06-21, 1 file, -6/+4]
  This results in less KVA waste from static preallocation of those for
  every phys page and also every swap page. tested by beck krw miod
* introduce a new km_page allocator that gets pages from kernel_map using an interrupt-safe thread. [tedu, 2004-04-19, 1 file, -1/+3]
  Use this as the new backend for mbpool and mclpool, eliminating the
  mb_map. Introduce a sysctl kern.maxclusters which controls the limit
  of clusters allocated. Testing by many people, works everywhere but
  m68k. ok deraadt@

  This essentially deprecates the NMBCLUSTERS option, don't use it.
  This should reduce pressure on the kmem_map and the uvm reserve of
  static map entries.
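  A hypothetical sketch of the thread's job: keep a small list of
  pre-mapped pages topped up from kernel_map so that interrupt context
  never has to take the map's locks. All helper names here are made up.

      extern int km_nfree, km_hiwat;
      extern void *km_take_from_kernel_map(void);     /* may sleep */
      extern void km_put_on_freelist(void *);

      void
      km_thread(void *arg)
      {
              for (;;) {
                      while (km_nfree < km_hiwat)
                              km_put_on_freelist(km_take_from_kernel_map());
                      /* sleep until a consumer drains the list */
                      tsleep(&km_nfree, PVM, "kmpage", 0);
              }
      }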
* Move the last content from vm/ to uvm/ [art, 2001-11-06, 1 file, -5/+2]
  The only things left in vm/ are just dumb wrappers:
  - vm/vm.h includes uvm/uvm_extern.h
  - vm/pmap.h includes uvm/uvm_pmap.h
  - vm/vm_page.h includes uvm/uvm_page.h
* Minor sync to NetBSD. [art, 2001-11-05, 1 file, -3/+2]
* merge vm/vm_kern.h into uvm/uvm_extern.h; art@ ok [mickey, 2001-09-19, 1 file, -2/+1]
* Various random fixes from NetBSD. [art, 2001-08-11, 1 file, -3/+2]
  Including support for zeroing pages in the idle loop (not enabled
  yet).
* $OpenBSD$ [niklas, 2001-01-29, 1 file, -0/+1]
* Convert bzero to memset(X, 0, ...) and bcopy to memcpy. [art, 2000-09-07, 1 file, -1/+1]
  This is to match (make diffs smaller) the code in NetBSD. New gcc
  inlines those functions, so this could also be a performance win.
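  The conversion pattern, minding the argument-order difference for
  copies:

      memset(buf, 0, len);            /* was: bzero(buf, len) */
      memcpy(dst, src, len);          /* was: bcopy(src, dst, len) */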
* Fix the NetBSD id strings. [art, 2000-03-15, 1 file, -1/+1]
* sync with NetBSD from 1999.05.24 (there is a reason for this date) [art, 1999-08-23, 1 file, -5/+0]
  Mostly cleanups, but also a few improvements to the pagedaemon for
  better handling of low memory and/or low swap conditions.
* add OpenBSD tags [art, 1999-02-26, 1 file, -0/+1]
* Import of uvm from NetBSD. Some local changes, some code disabled [art, 1999-02-26, 1 file, -0/+167]