path: root/sys/uvm/uvm_page.h
* 2020-09-22 mpi (1 file, -3/+3)
  Spell inline correctly.
  Reduce differences with NetBSD. ok mvs@, kettenis@

* 2019-11-29 kettenis (1 file, -1/+2)
  Split out the code that removes a page from uvm objects and clears the
  flags into a separate uvm_pageclean() function and call it from
  uvm_pagefree(). ok mpi@, guenther@, beck@

* 2016-11-07 guenther (1 file, -2/+2)
  Split PID from TID, giving processes a PID unrelated to the TID of
  their initial thread. ok jsing@ kettenis@

* 2016-09-16 dlg (1 file, -2/+2)
  move the vm_page struct from being stored in RB macro trees to RBT
  functions. vm_page structs go into three trees: uvm_objtree,
  uvm_pmr_addr, and uvm_pmr_size. all these have been moved to RBT code.
  this should give us a decent chunk of code space back.

* 2016-03-09 deraadt (1 file, -2/+2)
  remove vaxisms

* 2015-10-08 kettenis (1 file, -3/+3)
  Lock the page queues by turning uvm_lock_pageq() and uvm_unlock_pageq()
  into mtx_enter() and mtx_leave() operations. Not 100% sure this won't
  blow up, but there is only one way to find out, and we need this to
  make progress on further unlocking uvm. prodded by deraadt@

* 2015-08-21 visa (1 file, -4/+1)
  Remove the unused loan_count field and the related uvm logic. Most of
  the page loaning code is already in the Attic. ok kettenis@, beck@

* 2015-04-22 dlg (1 file, -3/+3)
  having macros provide semicolons is dangerous.
* 2015-02-07 kettenis (1 file, -2/+1)
  Tedu the old idle page zeroing code. ok tedu@, guenther@, miod@

* 2015-02-05 mpi (1 file, -3/+1)
  Remove some unneeded <uvm/uvm_extern.h> inclusions. ok deraadt@, miod@

* 2014-10-03 kettenis (1 file, -2/+2)
  Introduce a thread for zeroing pages without holding the kernel lock.
  This way we can do some useful work in parallel with other things and
  create a reservoir of zeroed pages ready for use elsewhere. This
  should reduce latency. The thread runs at the absolute lowest priority
  such that we don't keep other kernel threads or userland from doing
  useful work. Can be easily disabled by disabling the kthread_create(9)
  call in main(), which perhaps we should do for non-MP kernels.
  ok deraadt@, tedu@

* 2014-07-11 jsg (1 file, -7/+2)
  Chuck Cranor rescinded clauses in his license on the 2nd of February
  2011 in NetBSD.
  http://marc.info/?l=netbsd-source-changes&m=129658899212732&w=2
  http://marc.info/?l=netbsd-source-changes&m=129659095515558&w=2
  http://marc.info/?l=netbsd-source-changes&m=129659157916514&w=2
  http://marc.info/?l=netbsd-source-changes&m=129665962324372&w=2
  http://marc.info/?l=netbsd-source-changes&m=129666033625342&w=2
  http://marc.info/?l=netbsd-source-changes&m=129666052825545&w=2
  http://marc.info/?l=netbsd-source-changes&m=129666922906480&w=2
  http://marc.info/?l=netbsd-source-changes&m=129667725518082&w=2

* 2014-03-21 miod (1 file, -1/+4)
  Allow for two more pmap-specific bits in vm_page pg_flags. Define
  PG_PMAPMASK as all the possible pmap-specific bits (similar to the
  other PG_fooMASK) to make sure MI code does not need to be updated the
  next time more bits are allocated to greedy pmaps. No functional
  change, soon to be used by the (greedy) mips64 pmap.
* 2014-01-23 miod (1 file, -3/+2)
  unifdef -D__HAVE_VM_PAGE_MD - no functional change.

* 2014-01-01 miod (1 file, -4/+1)
  Remove __HAVE_PMAP_PHYSSEG support, nothing uses it anymore.

* 2013-05-30 tedu (1 file, -13/+11)
  remove lots of comments about locking per beck's request

* 2013-05-30 tedu (1 file, -3/+3)
  remove simple_locks from uvm code. ok beck deraadt

* 2011-05-30 oga (1 file, -2/+1)
  Remove the freelist member from vm_physseg. The new world order of
  pmemrange makes this data completely redundant (being dealt with by
  the pmemrange constraints instead). Remove all code that messes with
  the freelist.
  While touching every caller of uvm_page_physload() anyway, add the
  flags argument to all callers (all but one is 0, and that one already
  used PHYSLOAD_DEVICE) and remove the macro magic that allowed callers
  to continue without it. Should shrink the code a bit, as well.
  matthew@ pointed out some mistakes i'd made.
  "freelist death, I like. Ok." ariane@
  "I agree with the general direction, go ahead and i'll fix any fallout
  shortly" miod@ (i could not check whether 68k, 88k and vax would build)

* 2011-05-10 oga (1 file, -3/+1)
  Kill vm_page_lookup_freelist. It belongs to a world order that isn't
  here anymore. More importantly, it has been unused for a fair while
  now. ok thib@

* 2011-05-07 oga (1 file, -2/+3)
  So long, uvm_pglist.h. This header defined three things, two of which
  are unused throughout the tree; the final one was the definition of
  the pageq head type. Move that to uvm_page.h and nuke the header.
  ok thib@. Thanks to krw@ for testing the hppa build for me.
* 2011-04-02 ariane (1 file, -1/+4)
  Count the number of physical pages within a memory range. Bob needs
  this. ok art@ bob@ thib@

* 2010-06-29 thib (1 file, -1/+4)
  Add PADDR_IS_DMA_REACHABLE macro so art stops whining

* 2010-06-27 thib (1 file, -1/+3)
  uvm constraints. Add two mandatory MD symbols: uvm_md_constraints,
  which contains the constraints for DMA/memory allocation for each
  architecture, and dma_constraints, which contains the range of
  addresses that are DMA accessible by the system.
  This is based on ariane@'s physcontig diff, with lots of bugfixes,
  and the following additions by myself:
  Introduce a new function pool_set_constraints() which sets the address
  range from which we allocate pages for the pool; this is now used for
  the mbuf/mbuf cluster pools to keep them DMA accessible.
  The !direct archs no longer stuff pages into the kernel object in
  uvm_km_getpage_pla but rather do a pmap_extract() in uvm_km_putpages.
  Tested heavily by myself on i386, amd64 and sparc64. Some tests on
  alpha and SGI.
  "commit it" beck, art, oga, deraadt; "i like the diff" deraadt

* 2010-04-22 oga (1 file, -1/+2)
  Committing on behalf of ariane@.
  recommit pmemrange:
  physmem allocator: change the view of free memory from single free
  pages to free ranges. Classify memory based on region with associated
  use-counter (which is used to construct a priority list of where to
  allocate memory).
  Based on code from tedu@, help from many.
  Useable now that bugs have been found and fixed in most
  architectures' pmap.c.
  ok by everyone who has done a pmap or uvm commit in the last year.

* 2010-03-24 oga (1 file, -1/+2)
  Bring back PHYSLOAD_DEVICE for uvm_page_physload.
  ok kettenis@ beck@ (tentatively) and ariane@. deraadt asked for it to
  be committed now.
  original commit message:
  extend uvm_page_physload to have the ability to add "device" pages to
  the system. This is needed in the case where you need managed pages so
  you can handle faulting and pmap_page_protect() on said pages when you
  manage memory in such regions (i'm looking at you, graphics cards).
  these pages are flagged PG_DEV and shall never be on the freelists;
  assert this. behaviour remains unchanged in the non-device case:
  specifically, for all archs currently in the tree we panic if called
  after bootstrap.
  ok art@ kettenis@, beck@

* 2009-08-06 oga (1 file, -17/+4)
  reintroduce the uvm_tree commit.
  Now instead of the global object hashtable, we have a per-object tree.
  Testing shows no performance difference and a slight code shrink.
  OTOH, when locking is more fine grained this should be faster, since
  lock contention on uvm.hashlock goes away.
  ok thib@, art@.
* 2009-06-17 oga (1 file, -1/+1)
  date based reversion of uvm to the 4th May.
  More backouts in line with previous ones; this appears to bring us
  back to a stable condition. A machine forced to 64mb of ram cycled
  10GB through swap with this diff and is still running as I type this.
  Other tests by ariane@ and thib@ also seem to show that it's alright.
  ok deraadt@, thib@, ariane@

* 2009-06-16 ariane (1 file, -14/+3)
  Backout pmemrange (which to most people is better known as the physmem
  allocator).
  "i can't see any obvious problems" oga

* 2009-06-16 oga (1 file, -2/+4)
  Backout all changes to uvm after pmemrange (which will be backed out
  separately).
  A change at or just before the hackathon has either exposed or added a
  very very nasty memory corruption bug that is giving us hell right
  now. So in the interest of kernel stability these diffs are being
  backed out until such a time as that corruption bug has been found and
  squashed; then the ones that are proven good may slowly return.
  a quick hitlist of the main commits this backs out:
  mine:
    uvm_objwire
    the lock change in uvm_swap.c
    using trees for uvm objects instead of the hash
    removing the pgo_releasepg callback.
  art@'s:
    putting pmap_page_protect(VM_PROT_NONE) in uvm_pagedeactivate(),
    since all callers called that just prior anyway.
  ok beck@, ariane@. prompted by deraadt@.

* 2009-06-14 deraadt (1 file, -2/+1)
  backout:
  > extend uvm_page_physload to have the ability to add "device" pages
  > to the system.
  since it was overlayed over a system that we warned would go "in to be
  tested, but may be pulled out". oga, you just made me spend 20 minutes
  of time I should not have had to spend doing this.

* 2009-06-07 oga (1 file, -1/+2)
  extend uvm_page_physload to have the ability to add "device" pages to
  the system. This is needed in the case where you need managed pages so
  you can handle faulting and pmap_page_protect() on said pages when you
  manage memory in such regions (i'm looking at you, graphics cards).
  these pages are flagged PG_DEV and shall never be on the freelists;
  assert this. behaviour remains unchanged in the non-device case:
  specifically, for all archs currently in the tree we panic if called
  after bootstrap.
  ok art@, kettenis@, ariane@, beck@.
* 2009-06-02 oga (1 file, -4/+2)
  Instead of the global hash table with the terrible hash function and a
  global lock, switch the uvm object pages to being kept in a per-object
  RB_TREE. Right now this is approximately the same speed, but cleaner.
  When biglock usage is reduced this will improve concurrency thanks to
  reduced lock contention.
  ok beck@ art@. Thanks to jasper for the speed testing.

* 2009-06-01 ariane (1 file, -3/+14)
  physmem allocator: change the view of free memory from single free
  pages to free ranges. Classify memory based on region with associated
  use-counter (which is used to construct a priority list of where to
  allocate memory).
  Based on code from tedu@, help from many.
  Ok art@

* 2009-04-28 miod (1 file, -3/+3)
  Revert pageqlock back from a mutex to a simple_lock, as it needs to be
  recursive in some cases (mostly involving swapping). A proper fix is
  in the works, but this will unbreak kernels for now.

* 2009-04-13 oga (1 file, -3/+3)
  Convert the page queue lock to a mutex instead of a simplelock.
  Fix up the one case of lock recursion (which blatantly ignored the
  comment right above it saying that we don't need to lock). The rest of
  the lock usage has been checked and appears to be correct.
  ok ariane@.

* 2009-04-06 oga (1 file, -1/+38)
  In the case where VM_PHYSSEG_MAX == 1, make vm_physseg_find and
  PHYS_TO_VM_PAGE inline again. This should stop function call overhead
  killing the vax and other slow archs while keeping the benefit for the
  faster platforms.
  suggested by miod. ok miod@, toby@.
* 2009-03-25 oga (1 file, -35/+22)
  Move all of the pseudo-inline functions in uvm into C files.
  By pseudo-inline, I mean that if a certain macro was defined, they
  would be inlined. However, no architecture defines that, and none has
  for a very very long time. Therefore mainly this just makes the code a
  damned sight easier to read. Some k&r -> ansi declarations while I'm
  in there.
  "just commit it" art@. ok weingart@.

* 2009-03-24 oga (1 file, -99/+3)
  vm_physseg_find and VM_PAGE_TO_PHYS are both called many times in your
  average arch port. They are also inline. This does not help;
  de-inline them.
  shaves about 1k on i386 and amd64 bsd.mp. Probably similar amounts on
  most architectures.
  "no issue" beck@ "Nuke nuke nuke... make them functions" weingart@
  "this is good" art@

* 2009-01-20 ariane (1 file, -21/+1)
  Variables were never used, never implemented.
  Ok miod, toby

* 2007-12-18 thib (1 file, -1/+3)
  Turn the uvm_{lock/unlock}_fpageq() inlines into macros that just
  expand into the mutex functions to keep the abstraction; do assorted
  cleanup.
  ok miod@,art@

* 2007-04-18 art (1 file, -2/+7)
  Reserve a few pg_flags for pmaps that might want to use them.
  i386 will use them soon and miod wants to work on other pmaps in
  parallel.
  miod@ ok

* 2007-04-13 art (1 file, -36/+28)
  While splitting flags and pqflags might have been a good idea in
  theory to separate locking, on most modern machines this is not
  enough, since operations on short types touch other short types that
  share the same word in memory.
  Merge pg_flags and pqflags again and now use atomic operations to
  change the flags. Also bump wire_count to an int; pg_version might go
  int as well, just for alignment.
  tested by many, many. ok miod@
* 2007-04-04 art (1 file, -3/+3)
  Mechanically rename the "flags" and "version" fields in struct vm_page
  to "pg_flags" and "pg_version", so that they are a bit easier to work
  with. Whoever uses generic names like this for a popular struct
  obviously doesn't read much code.
  Most architectures compile and there are no functionality changes.
  deraadt@ ok ("if something fails to compile, we fix that by hand")

* 2006-06-16 miod (1 file, -8/+1)
  IS_VM_PHYSADDR is no longer used.

* 2003-11-08 jmc (1 file, -2/+2)
  typos from Jonathon Gray;

* 2002-07-20 art (1 file, -1/+3)
  Only add a pmap_physseg if MD code defines __HAVE_PMAP_PHYSSEG.

* 2002-06-11 art (1 file, -1/+5)
  Allow MD code to define __HAVE_VM_PAGE_MD to add its own members into
  struct vm_page.
  From NetBSD.

* 2002-03-14 millert (1 file, -27/+27)
  First round of __P removal in sys
* 2001-12-19 art (1 file, -40/+25)
  UBC was a disaster. It worked very well when it worked, but on some
  machines, in some configurations, or in some phase of the moon (we
  actually don't know when or why) files disappeared. Since we've not
  been able to track down the problem in two weeks of intense debugging,
  and we need -current to be stable, back out everything to the state it
  had before UBC.
  We apologise for the inconvenience.

* 2001-12-04 art (1 file, -3/+3)
  Yet another sync to NetBSD uvm.
  Today we add a pmap argument to pmap_update() and allocate map entries
  for kernel_map from kmem_map instead of using the static entries. This
  should get rid of MAX_KMAPENT panics. Also some uvm_loan problems are
  fixed.