path: root/sys/uvm/uvm_fault.c
* Comments & style cleanup, no functional change intended.  [mpi, 2021-02-16, 1 file, -224/+284]
  - Sync comments with NetBSD, including locking details.
  - Remove superfluous parentheses and spaces.
  - Add brackets, even if questionable, to reduce the diff with NetBSD.
  - Use for (;;) instead of while (1).
  - Rename a variable from 'result' to 'error'.
  - Move uvm_fault() and uvm_fault_upper_lookup().
  - Add a locking assert in uvm_fault_upper_lookup().
  ok tb@, mlarkin@
* Fix double unlock in uvmfault_anonget().  [mpi, 2021-02-15, 1 file, -3/+3]
  Reported by and ok jsg@
* (re)Introduce locking for amaps & anons.  [mpi, 2021-01-19, 1 file, -10/+52]
  A rwlock is attached to every amap and is shared with all of its anons.
  The same lock will be used by multiple amaps if they have anons in
  common. This should be enough to get the upper part of the fault
  handler out of the KERNEL_LOCK(), which seems to bring up to 20%
  improvement in builds. This is based/copied/adapted from the most
  recent work done in NetBSD, which is an evolution of the previous
  simple_lock scheme.
  Tested by many, thanks!
  ok kettenis@, mvs@
* Move `access_type' to the fault context.  [mpi, 2021-01-16, 1 file, -20/+20]
  Fix a regression where the value wasn't correctly overwritten for wired
  mappings, introduced in a previous refactoring.
  ok mvs@
* uvm: uvm_fault_lower(): don't sleep on lbolt.  [cheloha, 2021-01-02, 1 file, -2/+3]
  We can simulate the current behavior without lbolt by sleeping for
  1 second on the &nowake channel.
  ok mpi@
* Use per-CPU counters for fault and stats counters reached in uvm_fault().  [mpi, 2020-12-28, 1 file, -25/+26]
  ok kettenis@, dlg@
* Use a while loop instead of goto in uvm_fault().  [mpi, 2020-12-08, 1 file, -34/+23]
  ok jmatthew@, tb@
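The shape of this restructure — a backwards `goto ReFault;` becoming a loop condition — can be sketched in plain C. The `handle_fault`/`fault_once` names and the retry condition are illustrative, not the actual uvm_fault() code:

```c
#include <stdbool.h>

/* Toy "fault" that fails on the first attempt and succeeds on the second. */
static int attempts;

static bool
fault_once(void)
{
	return ++attempts > 1;
}

/*
 * Before: a ReFault: label at the top and "goto ReFault;" on failure.
 * After: the same control flow as a loop, so it reads top-down and the
 * retry condition is explicit in one place.
 */
static int
handle_fault(void)
{
	int error = 1;		/* "needs refault" until proven otherwise */

	while (error != 0) {	/* was: ReFault: ... goto ReFault; */
		if (fault_once())
			error = 0;
	}
	return error;
}
```

The behavior is identical; the win is that every path back to the top of the handler is now visible in the loop header.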
* Move logic handling lower faults, case 2, to its own function.  [mpi, 2020-11-19, 1 file, -63/+77]
  No functional change.
  ok kettenis@, jmatthew@, tb@
* Remove Case2 goto, use a simple if () instead.  [mpi, 2020-11-16, 1 file, -23/+17]
  ok tb@, jmatthew@
* Use a helper to look for an existing mapping & return if there's an anon.  [mpi, 2020-11-13, 1 file, -56/+81]
  Separate fault-handling code for types 1 and 2 and reduce differences
  with NetBSD.
  ok tb@, jmatthew@, kettenis@
* Move the logic dealing with faults 1A & 1B to its own function.  [mpi, 2020-11-13, 1 file, -151/+173]
  Some minor documentation improvements and style nits, but this should
  not contain any functional change.
  ok tb@
* Remove unused `anon' argument from uvmfault_unlockall().  [mpi, 2020-11-06, 1 file, -19/+17]
  It won't be used when amap and anon locking is introduced. This
  "fixes" passing an unrelated/uninitialized pointer in an error path in
  case of memory shortage.
  ok kettenis@
* Move the top part of uvm_fault() (lookups, checks, etc.) into its own function.  [mpi, 2020-10-21, 1 file, -113/+170]
  The name, uvm_fault_check(), and logic come from NetBSD, as reducing
  the diff with their tree is useful to learn from their experience and
  backport fixes. No functional change intended.
  ok kettenis@
* Introduce a helper to check if all available swap is in use.  [mpi, 2020-09-29, 1 file, -11/+6]
  This reduces code duplication, reduces the diff with NetBSD, and will
  help to introduce locks around global variables.
  ok cheloha@
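Such a helper can be sketched in userland C. The struct and field names below are hypothetical stand-ins for the kernel's global swap accounting, not the actual implementation:

```c
/* Hypothetical stand-in for the kernel's global swap counters. */
struct swap_usage {
	int swpages;	/* total swap pages configured */
	int swpginuse;	/* swap pages currently in use */
};

/*
 * One helper instead of open-coding the comparison at every call site;
 * once the globals grow a lock, only this function needs to take it.
 */
static int
swap_is_full(const struct swap_usage *s)
{
	return s->swpginuse >= s->swpages;
}
```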
* Remove trailing whitespace.  [mpi, 2020-09-24, 1 file, -39/+39]
* Spell inline correctly.  [mpi, 2020-09-22, 1 file, -3/+3]
  Reduce differences with NetBSD.
  ok mvs@, kettenis@
* Kill outdated comment, pmap_enter(9) doesn't sleep.  [mpi, 2020-09-22, 1 file, -8/+1]
  ok kettenis@
* Add tracepoints in the page fault handler and when entries are added to maps.  [mpi, 2020-09-12, 1 file, -1/+3]
  ok kettenis@
* Convert infinite sleeps to {m,t}sleep_nsec(9).  [mpi, 2019-12-08, 1 file, -2/+2]
  ok visa@, jca@
* R.I.P. UVM_WAIT(). Use tsleep_nsec(9) directly.  [cheloha, 2019-07-18, 1 file, -2/+2]
  UVM_WAIT() doesn't provide much of a useful abstraction. All callers
  tsleep forever and no callers set PCATCH, so only 2 of 4 parameters are
  actually used. Might as well just use tsleep_nsec(9) directly and make
  the uvm code a bit less specialized.
  Suggested by mpi@.
  ok mpi@ visa@ millert@
* Always refault if relocking maps fails after IO.  [visa, 2019-02-03, 1 file, -1/+3]
  This fixes a regression introduced with __MAP_NOFAULT. The regression
  let uvm_fault() run without proper locking and rechecking of state
  after a map version change if page zero-fill was chosen.
  OK kettenis@ deraadt@
  Reported-by: syzbot+9972088c1026668c6c5c@syzkaller.appspotmail.com
* Add support to uvm to establish write-combining mappings.  [kettenis, 2018-10-31, 1 file, -10/+11]
  Use this in the inteldrm driver to add support for the I915_MMAP_WC
  flag.
  ok deraadt@, jsg@
* Implement MAP_STACK option for mmap().  [deraadt, 2018-04-12, 1 file, -2/+3]
  Synchronous faults (pagefault and syscall) confirm the stack register
  points at MAP_STACK memory, otherwise SIGSEGV is delivered.
  sigaltstack() and pthread_attr_setstack() are modified to create a
  MAP_STACK sub-region which satisfies alignment requirements. Observe
  that MAP_STACK can only be set/cleared by mmap(), which zeroes the
  contents of the region -- there is no mprotect() equivalent operation,
  so there is no MAP_STACK-adding gadget.
  This opportunistic software emulation of a stack protection bit makes
  stack-pivot operations during ROP chains fragile (kind of like removing
  a tool from the toolbox).
  original discussion with tedu, uvm work by stefan, testing by mortimer
  ok kettenis
* Accessing a mmap(2)ed file beyond its end should result in a SIGBUS according to POSIX.  [bluhm, 2017-07-20, 1 file, -2/+2]
  Bring regression test and kernel in line for amd64 and i386. Other
  architectures have to follow.
  OK deraadt@ kettenis@
* Move the uvm_map_addr RB tree from RB macros to the RBT functions.  [dlg, 2016-09-16, 1 file, -2/+2]
  This tree is interesting because it uses all the red-black tree
  features, specifically the augment callback that's called on tree
  topology changes, and it poisons and checks entries as they're removed
  from and inserted back into the tree, respectively.
  ok stefan@
* Wait for RAM in uvm_fault when allocating uvm structures fails.  [stefan, 2016-05-08, 1 file, -18/+44]
  Only fail hard when running out of swap space also, as suggested by
  kettenis@. While there, let amap_add() return a success status and
  handle amap_add() errors in uvm_fault() similar to other out-of-RAM
  situations. These bits are needed for further amap reorganization
  diffs.
  lots of feedback and ok kettenis@
* Remove dead assignments and now unused variables.  [chl, 2016-03-29, 1 file, -4/+1]
  Found by LLVM/Clang Static Analyzer.
  ok mpi@ stefan@
* Sync no-argument function declaration and definition by adding (void).  [naddy, 2016-03-07, 1 file, -2/+2]
  ok mpi@ millert@
* UVM change needed for vmm.  [mlarkin, 2015-11-10, 1 file, -1/+11]
  discussed with miod, deraadt, and guenther.
* All our pmap implementations provide pmap_resident_count(), so remove #ifndef pmap_resident_count code paths.  [miod, 2015-09-09, 1 file, -15/+1]
* Remove the unused loan_count field and the related uvm logic.  [visa, 2015-08-21, 1 file, -175/+15]
  Most of the page loaning code is already in the Attic.
  ok kettenis@, beck@
* Remove some includes that include-what-you-use claims have no directly used symbols.  [jsg, 2015-03-14, 1 file, -2/+1]
  Tested for indirect use by compiling amd64/i386/sparc64 kernels.
  ok tedu@ deraadt@
* Something is subtly wrong with this.  [deraadt, 2015-02-08, 1 file, -2/+1]
  On ramdisks, processes run out of mappable memory (direct or via
  execve), perhaps because of the address allocator behind maps and the
  way wiring counts work?
* Clear PQ_AOBJ before calling uvm_pagefree(), clearing up one false XXX comment.  [deraadt, 2015-02-06, 1 file, -1/+2]
  (one is fixed, one is deleted)
  ok kettenis beck
* Prefer MADV_* over POSIX_MADV_* in kernel for consistency.  [guenther, 2014-12-17, 1 file, -5/+5]
  The latter doesn't have all the values and therefore can't be used
  everywhere.
  ok deraadt@ kettenis@
* Use MAP_INHERIT_* for the 'inh' argument to the UVM_MAPFLAG() macro.  [guenther, 2014-12-15, 1 file, -2/+2]
  This eliminates the must-be-kept-in-sync UVM_INH_* macros.
  ok deraadt@ tedu@
* Replace a plethora of historical protection options with just PROT_NONE, PROT_READ, PROT_WRITE, and PROT_EXEC from mman.h.  [deraadt, 2014-11-16, 1 file, -24/+23]
  PROT_MASK is introduced as the one true way of extracting those bits.
  Remove the UVM_ADV_* wrapper, using the standard names.
  ok doug guenther kettenis
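The "one true way" idea can be illustrated in userland C. PROT_MASK is OpenBSD-specific, so a fallback definition is provided here for portability; the `prot_bits` helper and the `0x1000` non-protection bit are illustrative, not kernel code:

```c
#include <sys/mman.h>

/* OpenBSD's <sys/mman.h> defines PROT_MASK; fall back for other systems. */
#ifndef PROT_MASK
#define PROT_MASK	(PROT_READ | PROT_WRITE | PROT_EXEC)
#endif

/*
 * Extract only the protection bits from a flags word that may carry
 * other, unrelated bits -- instead of open-coding the OR of the three
 * PROT_* values at every call site.
 */
static int
prot_bits(int flags)
{
	return flags & PROT_MASK;
}
```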
* Introduce __MAP_NOFAULT, a mmap(2) flag that makes sure a mapping will not cause a SIGSEGV or SIGBUS when a mapped file gets truncated.  [kettenis, 2014-10-03, 1 file, -3/+7]
  Access to pages that are not backed by a file on such a mapping will be
  replaced by zero-filled anonymous pages. Makes passing file descriptors
  of mapped files usable without having to play tricks with signal
  handlers.
  "steal your mmap flag" deraadt@
* Typo in comment.  [guenther, 2014-09-07, 1 file, -2/+2]
* Chuck Cranor rescinded clauses in his license on the 2nd of February 2011 in NetBSD.  [jsg, 2014-07-11, 1 file, -8/+1]
  http://marc.info/?l=netbsd-source-changes&m=129658899212732&w=2
  http://marc.info/?l=netbsd-source-changes&m=129659095515558&w=2
  http://marc.info/?l=netbsd-source-changes&m=129659157916514&w=2
  http://marc.info/?l=netbsd-source-changes&m=129665962324372&w=2
  http://marc.info/?l=netbsd-source-changes&m=129666033625342&w=2
  http://marc.info/?l=netbsd-source-changes&m=129666052825545&w=2
  http://marc.info/?l=netbsd-source-changes&m=129666922906480&w=2
  http://marc.info/?l=netbsd-source-changes&m=129667725518082&w=2
* Bye bye UBC.  [deraadt, 2014-07-08, 1 file, -5/+1]
  ok beck dlg
* It is important that we don't release the kernel lock between issuing a wakeup and clearing the PG_BUSY and PG_WANTED flags.  [kettenis, 2014-07-03, 1 file, -10/+8]
  So try to keep those bits as close together and definitely avoid
  calling random code in between.
  ok guenther@, tedu@
* Fix some potential integer overflows caused by converting a page number into an offset/size/address by shifting by PAGE_SHIFT.  [kettenis, 2014-05-08, 1 file, -4/+4]
  Make uvm_objwire/unwire use voff_t instead of off_t. The former is the
  right type here even if it is equivalent to the latter.
  Inspired by somewhat similar changes in Bitrig.
  ok deraadt@, guenther@
* Compress code by turning four-line comments into one-line comments.  [tedu, 2014-04-13, 1 file, -264/+48]
  emphatic ok usual suspects, grudging ok miod
* uvm_fault() will try to fault in neighbouring pages for the MADV_NORMAL case.  [miod, 2014-04-03, 1 file, -12/+27]
  This is the default, unless the fault call is explicitly used to wire
  a given page. The number of pages being faulted in was borrowed from
  the FreeBSD VM code about 15 years ago, at a time when FreeBSD was only
  reliably running on 4KB page size systems. It is questionable whether
  faulting the same number of pages on platforms where the page size is
  larger is a good idea, as it may cause too much I/O.
  Add an uvmfault_init() routine, which will compute the proper number of
  pages at runtime, depending upon the actual page size, attempting to
  fault in the same overall size the previous code would have done with
  4KB pages.
  ok tedu@
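The scaling this entry describes can be sketched as simple arithmetic: hold the total fault-ahead size in bytes constant and derive the page count from the actual page size. The constants, names, and 4 KB baseline below are illustrative stand-ins, not the actual uvmfault_init() tuning:

```c
/* Illustrative historic tuning: N pages on a 4 KB page size system. */
#define HISTORIC_PAGE_SIZE	4096
#define HISTORIC_NPAGES		16

/*
 * Fault ahead the same number of bytes regardless of page size, so a
 * 16 KB page size platform faults in 1/4 as many pages rather than 4x
 * as many bytes of I/O.
 */
static int
fault_ahead_pages(int page_size)
{
	int npages = (HISTORIC_NPAGES * HISTORIC_PAGE_SIZE) / page_size;

	return npages > 0 ? npages : 1;	/* always at least one page */
}
```

A runtime init routine computing this once at boot avoids hard-coding a count that is only right for one page size.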
* In uvm_fault(), when attempting to map backpages and forwpages, defer the pmap_update() to the end of the loop.  [miod, 2014-03-31, 1 file, -2/+3]
  Rather than after each loop iteration - which might not even end up
  invoking pmap_enter()!
  Quiet blessing from guenther@ deraadt@
* In the brave new world of void *, we don't need caddr_t casts.  [tedu, 2013-05-30, 1 file, -2/+2]
* UVM_UNLOCK_AND_WAIT no longer unlocks, so rename it to UVM_WAIT.  [tedu, 2013-05-30, 1 file, -6/+3]
* Remove lots of comments about locking, per beck's request.  [tedu, 2013-05-30, 1 file, -92/+20]
* Remove simple_locks from uvm code.  [tedu, 2013-05-30, 1 file, -27/+1]
  ok beck deraadt