path: root/sys/uvm/uvm_km.c

Commit log, most recent first. Each entry shows the commit message, author, date, and the files/lines changed (-deleted/+added).
* Remove the assertion in uvm_km_pgremove(). (mpi, 2020-12-15; 1 file, -2/+1)
  At least some initialization code on i386 calls it without KERNEL_LOCK(). Found the hard way by jungle Boogie and Hrvoje Popovski.
* Grab the KERNEL_LOCK() or ensure it's held when poking at swap data structures. (mpi, 2020-12-14; 1 file, -1/+2)
  This will allow uvm_fault_upper() to enter swap-related functions without holding the KERNEL_LOCK(). ok jmatthew@
* Prevent km_alloc() from returning garbage if pagelist is empty. (jan, 2020-05-23; 1 file, -2/+2)
  ok bluhm@, visa@
* Cleanup <sys/kthread.h> and <sys/proc.h> includes. (mpi, 2020-02-18; 1 file, -1/+2)
  Do not include <sys/kthread.h> where it is not needed and stop including <sys/proc.h> in it. ok visa@, anton@
* Convert infinite msleep(9) to msleep_nsec(9). (jsg, 2019-12-30; 1 file, -5/+5)
  ok mpi@
* Set vm_map's pmap in uvm_map_setup(). (visa, 2019-12-18; 1 file, -5/+3)
  OK guenther@, kettenis@, mpi@
* Convert infinite sleeps to {m,t}sleep_nsec(9). (mpi, 2019-12-08; 1 file, -3/+3)
  ok visa@, jca@
* R.I.P. UVM_WAIT(). Use tsleep_nsec(9) directly. (cheloha, 2019-07-18; 1 file, -2/+2)
  UVM_WAIT() doesn't provide much of a useful abstraction. All callers tsleep forever and no callers set PCATCH, so only 2 of 4 parameters are actually used. Might as well just use tsleep_nsec(9) directly and make the uvm code a bit less specialized. Suggested by mpi@. ok mpi@ visa@ millert@
* At some point the uvm_km_thread learned to free memory, but the comment was never updated. (tedu, 2019-02-22; 1 file, -4/+2)
  From Amit Kulkarni.
* Unbreak PMAP_DIRECT archs. (dlg, 2017-05-11; 1 file, -2/+2)
  Found by jmc@.
* Reorder uvm init to avoid use before initialisation. (dlg, 2017-05-11; 1 file, -1/+11)
  The particular use-before-init was in uvm_init step 6, which calls kmeminit to set up malloc(9), which calls uvm_km_zalloc, which calls pmap_enter, which calls pool_get, which tries to allocate a page using km_alloc, which isn't initialised until step 9 in uvm_init. uvm_km_page_init calls kthread_create though, which uses malloc internally, so it can't be reordered before malloc init. To cope with this, uvm_km_page_init is split up: it sets up the subsystem, and is called before kmeminit. The thread init is moved to uvm_km_page_lateinit, which is called after kmeminit in uvm_init.
* Protect the list of free map entries with a mutex. This should fix the crashes seen by sthen@ on i386. (kettenis, 2015-09-26; 1 file, -1/+3)
  ok visa@, guenther@, tedu@
* Back out rev. 1.125. This bit was left behind (intentionally?) when the remainder of that commit was backed out. (kettenis, 2015-09-17; 1 file, -2/+1)
  However, clearing the PQ_AOBJ bit here is definitely wrong. Our pagedaemon uses two separate lists to keep track of inactive pages. It uses PQ_SWAPBACKED, which really is both PQ_ANON and PQ_AOBJ, to keep track of which inactive queue a page is sitting on. So if you twiddle PQ_AOBJ (or PQ_ANON) for an inactive page, a subsequent uvm_pagefree(9) will remove the page from the wrong queue! This usually goes unnoticed, but if the page happens to be the last one on the queue, the queues get corrupted. The damage quickly spreads to the free page queues and almost certainly results in the uvm_pmr_size_RB_REMOVE_COLOR() faults that people have seen sporadically since the spring of this year. ok visa@, beck@, krw@, guenther@
* Introduce VM_KERNEL_SPACE_SIZE as a replacement for (VM_MAX_KERNEL_ADDRESS - VM_MIN_KERNEL_ADDRESS). (miod, 2015-02-07; 1 file, -17/+12)
  This will allow these to no longer be constants in the future. ok guenther@
* Clear PQ_AOBJ before calling uvm_pagefree(), clearing up one false XXX comment (one is fixed, one is deleted). (deraadt, 2015-02-06; 1 file, -1/+2)
  ok kettenis beck
* Make km_alloc(9) use the direct map for all "phys contig" mappings requested by the caller on architectures that implement them. (kettenis, 2015-01-23; 1 file, -37/+29)
  Make sure that we physically align memory such that we meet any demands on virtual alignment in this case. This should reduce the overhead of mapping large pool pages for pools that request dma'able memory. ok deraadt@, dlg@
* Prefer MADV_* over POSIX_MADV_* in kernel for consistency: the latter doesn't have all the values and therefore can't be used everywhere. (guenther, 2014-12-17; 1 file, -19/+15)
  ok deraadt@ kettenis@
* Use MAP_INHERIT_* for the 'inh' argument to the UVM_MAPFLAG() macro, eliminating the must-be-kept-in-sync UVM_INH_* macros. (guenther, 2014-12-15; 1 file, -11/+11)
  ok deraadt@ tedu@
* The sti(4) driver copies its ROM into kernel memory and executes the code in there. (kettenis, 2014-11-27; 1 file, -2/+3)
  It explicitly changes the mapping of that memory to RX, but this only works if the maximum protection of the mapping includes PROT_EXEC. ok miod@, deraadt@
* Kill kv_executable flag. We no longer allow requests for PROT_EXEC mappings via this interface (nothing uses it, in any case). (deraadt, 2014-11-21; 1 file, -7/+3)
  ok uebayasi tedu
* More cases of kernel map entries being created as EXEC by default; not just the base permission but the maxprot as well. (deraadt, 2014-11-17; 1 file, -7/+12)
  ok tedu
* There is no reason for uvm_km_alloc1() to allocate kernel memory that is executable. (deraadt, 2014-11-17; 1 file, -3/+2)
  ok tedu kettenis guenther
* Replace a plethora of historical protection options with just PROT_NONE, PROT_READ, PROT_WRITE, and PROT_EXEC from mman.h. (deraadt, 2014-11-16; 1 file, -26/+32)
  PROT_MASK is introduced as the one true way of extracting those bits. Remove UVM_ADV_* wrapper, using the standard names. ok doug guenther kettenis
* bzero -> memset. (tedu, 2014-11-13; 1 file, -2/+2)
* Remove unneeded proc.h includes. (jsg, 2014-09-14; 1 file, -2/+1)
  ok mpi@ kspillner@
* Chuck Cranor rescinded clauses in his license on the 2nd of February 2011 in NetBSD. (jsg, 2014-07-11; 1 file, -7/+2)
  http://marc.info/?l=netbsd-source-changes&m=129658899212732&w=2
  http://marc.info/?l=netbsd-source-changes&m=129659095515558&w=2
  http://marc.info/?l=netbsd-source-changes&m=129659157916514&w=2
  http://marc.info/?l=netbsd-source-changes&m=129665962324372&w=2
  http://marc.info/?l=netbsd-source-changes&m=129666033625342&w=2
  http://marc.info/?l=netbsd-source-changes&m=129666052825545&w=2
  http://marc.info/?l=netbsd-source-changes&m=129666922906480&w=2
  http://marc.info/?l=netbsd-source-changes&m=129667725518082&w=2
* Make sure kmthread never loops without making progress: if the freelist was empty then the first page allocation should sleep until it can get one. (guenther, 2014-06-21; 1 file, -5/+17)
  ok tedu@
* Compress code by turning four-line comments into one-line comments. (tedu, 2014-04-13; 1 file, -70/+15)
  emphatic ok usual suspects, grudging ok miod
* In the brave new world of void *, we don't need caddr_t casts. (tedu, 2013-05-30; 1 file, -2/+2)
* UVM_UNLOCK_AND_WAIT no longer unlocks, so rename it to UVM_WAIT. (tedu, 2013-05-30; 1 file, -3/+2)
* Remove simple_locks from uvm code. (tedu, 2013-05-30; 1 file, -6/+1)
  ok beck deraadt
* Number of swap pages in use must be smaller than the total number of swap pages, so fix nonsensical comparison introduced in rev 1.77. (kettenis, 2012-11-10; 1 file, -2/+3)
  ok miod@, krw@, beck@
* New vmmap implementation. (ariane, 2012-03-09; 1 file, -23/+55)
  no oks (it is really a pain to review properly); extensively tested, I'm confident it'll be stable; 'now is the time' from several icb inhabitants.
  Diff provides:
  - ability to specify different allocators for different regions/maps
  - a simpler implementation of the current allocator
  - currently in compatibility mode: it will generate similar addresses as the old allocator
* Rip out and burn support for UVM_HIST. (oga, 2011-07-03; 1 file, -23/+1)
  The vm hackers don't use it, don't maintain it and have to look at it all the time. About time these 800 lines of code hit /dev/null. ``never liked it'' tedu@. ariane@ was very happy when I told her I wrote this diff.
* Don't bother checking for an empty queue before calling uvm_pglistfree. (oga, 2011-06-23; 1 file, -5/+3)
  It will handle an empty list just fine (there's a small optimisation possible here to avoid grabbing the fpageqlock if no pages need freeing, but that is definitely another diff). ok ariane@
* Make mbufs and dma_alloc be contig allocations. (ariane, 2011-06-23; 1 file, -2/+12)
  Requested by dlg@. ok oga@
* Backout vmmap in order to repair virtual address selection algorithms outside the tree. (ariane, 2011-06-06; 1 file, -12/+6)
* Reimplement uvm/uvm_map. (ariane, 2011-05-24; 1 file, -6/+12)
  vmmap is designed to perform address space randomized allocations, without letting fragmentation of the address space go through the roof.
  Some highlights:
  - kernel address space randomization
  - proper implementation of guardpages
  - roughly 10% system time reduction during kernel build
  Tested by a lot of people on tech@ and developers. Theo's machines are still happy.
* Don't leak swapslots when doing a uvm_km_pgremove and a page is in swap only. (oga, 2011-05-10; 1 file, -15/+12)
  Before, we were only calling uao_dropswap() if there was a page, meaning that if the buffer was swapped out then we would leak the slot. Quite rare because only pipebuffers should swap from the kernel object, but I've seen panics that implied this had happened (alpha.p for example). ok thib@ after a lot of discussion and checking the semantics.
* Fix management of the list of free uvm_km_pages. (kettenis, 2011-04-23; 1 file, -1/+2)
  Seems art@ lost a line when he copied this code from uvm_km_putpage() into km_free(). Found independently by ariane@; ok deraadt@
* Add missing call to pmap_update() in km_alloc(). (matthew, 2011-04-19; 1 file, -1/+2)
  ok deraadt@, miod@
* Free the correct pages when we failed to allocate va. (art, 2011-04-19; 1 file, -2/+5)
* Put back the change of pool and malloc into the new km_alloc(9) api. (art, 2011-04-18; 1 file, -97/+17)
  The problems during the hackathon were not caused by this (most likely). Prodded by deraadt@ and beck@.
* Unused variable on !PMAP_DIRECT. (deraadt, 2011-04-15; 1 file, -2/+1)
* Move uvm_pageratop from uvm_pager.c local to a general uvm function (uvm_atopg) and use it in uvm_km_doputpage to replace some handrolled code. (oga, 2011-04-15; 1 file, -6/+2)
  Shrinks the kernel a trivial amount. ok beck@ and miod@ (who suggested I name it uvm_atopg not uvm_atop).
* Do not use NULL in integer comparisons. No functional change. (miod, 2011-04-07; 1 file, -4/+4)
  ok matthew@ tedu@, also eyeballed by at least krw@ oga@ kettenis@ jsg@
* Backout the uvm_km_getpage -> km_alloc conversion. (art, 2011-04-06; 1 file, -1/+82)
  Weird things are happening and we aren't sure what's causing them. Shouted oks by many before I even built a kernel with the diff.
* Change pool constraints to use kmem_pa_mode instead of uvm_constraint_range. (art, 2011-04-05; 1 file, -82/+1)
  - Use km_alloc for all backend allocations in pools.
  - Use km_alloc for the emergency kentry allocations in uvm_mapent_alloc.
  - Garbage collect uvm_km_getpage, uvm_km_getpage_pla and uvm_km_putpage.
  ariane@ ok
* Few minor ninja fixes while this isn't being used anywhere in -current. (art, 2011-04-04; 1 file, -25/+20)
  - Change a few KASSERT(0) into proper panics.
  - Match the new behavior of single page freeing.
  - kremove pages and then free them, it's safer.
  thib@ ok
* Better. (art, 2011-04-04; 1 file, -2/+4)