| Commit message (Collapse) | Author | Age | Files | Lines |
|
| |
No functional change.
ok kettenis@
|
| |
ok kettenis@, dlg@
|
| |
the particular use before init was in uvm_init step 6, which calls
kmeminit to set up malloc(9), which calls uvm_km_zalloc, which calls
pmap_enter, which calls pool_get, which tries to allocate a page
using km_alloc, which isn't initialised until step 9 in uvm_init.
uvm_km_page_init calls kthread_create though, which uses malloc
internally, so it can't be reordered before malloc init.
to cope with this, uvm_km_page_init is split up. it sets up the
subsystem, and is called before kmeminit. the thread init is moved
to uvm_km_page_lateinit, which is called after kmeminit in uvm_init.
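As an illustration, a minimal sketch of the two-phase init described above;
the bodies are placeholders, and kthread_create_deferred()/uvm_km_createthread
are assumptions about the surrounding code, not quotes from the commit.

    void
    uvm_km_page_init(void)
    {
        /* phase 1: runs before kmeminit(), so malloc(9) and
         * kthread_create() are off limits; only set up the page
         * lists that pool_get()/pmap_enter() will allocate from */
    }

    void
    uvm_km_page_lateinit(void)
    {
        /* phase 2: runs after kmeminit() in uvm_init(), so it is
         * now safe to create the maintenance thread */
        kthread_create_deferred(uvm_km_createthread, NULL);
    }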
|
| |
have any direct symbols used. Tested for indirect use by compiling
amd64/i386/sparc64 kernels.
ok tedu@ deraadt@
|
| |
(VM_MAX_KERNEL_ADDRESS - VM_MIN_KERNEL_ADDRESS). This will allow these to no
longer be constants in the future.
ok guenther@
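A one-line illustration of the pattern; the nkpages consumer is hypothetical,
but VM_MIN/VM_MAX_KERNEL_ADDRESS and atop() are the standard names.

    /* compute the size of the kernel VA range instead of baking in a
     * constant, so the bounds can become variables later */
    vsize_t kva_size = VM_MAX_KERNEL_ADDRESS - VM_MIN_KERNEL_ADDRESS;
    int nkpages = atop(kva_size);    /* atop(): bytes -> pages */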
|
| |
doesn't have all the values and therefore can't be used everywhere.
ok deraadt@ kettenis@
|
| |
eliminating the must-be-kept-in-sync UVM_INH_* macros
ok deraadt@ tedu@
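For context, MAP_INHERIT_* are the names userland already uses with
minherit(2); a small standalone example of the surviving spelling
(hypothetical usage, not taken from the commit):

    #include <sys/types.h>
    #include <sys/mman.h>

    int
    main(void)
    {
        size_t len = 4096;
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
            MAP_ANON | MAP_PRIVATE, -1, 0);

        if (p == MAP_FAILED)
            return 1;
        /* children forked after this will not inherit the mapping */
        if (minherit(p, len, MAP_INHERIT_NONE) == -1)
            return 1;
        return 0;
    }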
|
| |
PROT_NONE, PROT_READ, PROT_WRITE, and PROT_EXEC from mman.h.
PROT_MASK is introduced as the one true way of extracting those bits.
Remove UVM_ADV_* wrapper, using the standard names.
ok doug guenther kettenis
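A minimal sketch of the new idiom, assuming PROT_MASK is visible where this
is compiled (it may be kernel-only); the helper name is invented.

    #include <sys/mman.h>

    /* hypothetical helper: keep only the protection bits of a flags word */
    static inline int
    prot_of(int flags)
    {
        return (flags & PROT_MASK);   /* PROT_READ | PROT_WRITE | PROT_EXEC */
    }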
|
| |
ok mpi@ kspillner@
|
| |
on the 2nd of February 2011 in NetBSD.
http://marc.info/?l=netbsd-source-changes&m=129658899212732&w=2
http://marc.info/?l=netbsd-source-changes&m=129659095515558&w=2
http://marc.info/?l=netbsd-source-changes&m=129659157916514&w=2
http://marc.info/?l=netbsd-source-changes&m=129665962324372&w=2
http://marc.info/?l=netbsd-source-changes&m=129666033625342&w=2
http://marc.info/?l=netbsd-source-changes&m=129666052825545&w=2
http://marc.info/?l=netbsd-source-changes&m=129666922906480&w=2
http://marc.info/?l=netbsd-source-changes&m=129667725518082&w=2
|
| |
emphatic ok usual suspects, grudging ok miod
|
| |
which is the default, unless the fault call is explicitly used to wire a given
page.
The number of pages being faulted in was borrowed from the FreeBSD VM code
about 15 years ago, at a time when FreeBSD was only reliably running on
systems with a 4KB page size.
It is questionable whether faulting the same number of pages is a good idea
on platforms where the page size is larger, as it may cause too much I/O.
Add an uvmfault_init() routine, which computes the proper number of pages
at runtime, depending upon the actual page size, attempting to fault in
the same overall size the previous code would have done with 4KB pages.
ok tedu@
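The arithmetic, as a standalone illustration: keep the faulted-in span
constant in bytes rather than in pages. The 8-page 4KB-era window is an
assumption for the demo, not the historical constant.

    #include <stdio.h>
    #include <stddef.h>

    #define OLD_PAGE_SIZE 4096
    #define OLD_NPAGES    8    /* assumed 4KB-era window */

    int
    main(void)
    {
        long page_sizes[] = { 4096, 8192, 16384 };
        size_t i;

        for (i = 0; i < sizeof(page_sizes) / sizeof(page_sizes[0]); i++) {
            long n = (OLD_NPAGES * OLD_PAGE_SIZE) / page_sizes[i];

            if (n < 1)
                n = 1;
            printf("page size %5ld -> fault %ld pages (%ld bytes)\n",
                page_sizes[i], n, n * page_sizes[i]);
        }
        return 0;
    }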
|
| |
Has fewer special allocators on install media (where they aren't required
anyway).
Bonus: makes the vmmap initialization code easier to read.
|
| |
no oks (it is really a pain to review properly)
extensively tested, I'm confident it'll be stable
'now is the time' from several icb inhabitants
Diff provides:
- ability to specify different allocators for different regions/maps
  (see the sketch after this list)
- a simpler implementation of the current allocator
- currently in compatibility mode: it will generate addresses similar to
  those of the old allocator
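A generic sketch of the idea only, not the actual interface: each map
carries a pointer to the strategy it allocates addresses with. All names
here are invented for illustration.

    #include <stddef.h>

    struct vm_map;

    struct uvm_addr_ops {
        const char *name;
        /* pick an address for a new mapping of sz bytes in map */
        int (*select)(struct vm_map *map, size_t sz,
            unsigned long *addr_out);
    };

    struct vm_map_sketch {
        struct uvm_addr_ops *uaddr_exe;    /* region for text */
        struct uvm_addr_ops *uaddr_any;    /* everything else */
    };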
|
| |
Bogus chunks pointed out by matthew@ and miod@. No cookies for
marco@ and jasper@.
ok deraadt@ miod@ matthew@ jasper@ marco@
|
| |
pools, sized by powers of 2, which are constrained to dma memory.
ok matthew tedu thib
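A standalone demo of the size-to-pool mapping; pools from 16 to 4096 bytes
are an assumption for the example, and the real allocator lives in the
kernel, not here.

    #include <stdio.h>
    #include <stddef.h>

    #define MINBUCKET 4   /* smallest pool: 1 << 4 == 16 bytes (assumption) */
    #define NPOOLS    9   /* largest pool: 1 << 12 == 4096 bytes (assumption) */

    static int
    pool_index(size_t size)
    {
        int pi;

        for (pi = 0; pi < NPOOLS; pi++)
            if (size <= (1UL << (pi + MINBUCKET)))
                return pi;
        return -1;    /* too large for the power-of-2 pools */
    }

    int
    main(void)
    {
        printf("16 -> %d, 100 -> %d, 4096 -> %d, 8192 -> %d\n",
            pool_index(16), pool_index(100),
            pool_index(4096), pool_index(8192));
        return 0;
    }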
|
| |
Now instead of the global object hashtable, we have a per-object tree.
Testing shows no performance difference and a slight code shrink. OTOH when
locking is more fine-grained this should be faster, since it removes lock
contention on uvm.hashlock.
ok thib@, art@.
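A minimal sketch of the per-object tree shape using <sys/tree.h>, keyed by
the page's offset within the object; type and field names here are
illustrative, not the committed ones.

    #include <sys/tree.h>

    struct vm_page_s {
        RB_ENTRY(vm_page_s) objt;
        unsigned long       offset;   /* offset within the object */
    };

    static int
    pagecmp(struct vm_page_s *a, struct vm_page_s *b)
    {
        return (a->offset < b->offset ? -1 : a->offset > b->offset);
    }

    RB_HEAD(page_tree, vm_page_s);
    RB_GENERATE(page_tree, vm_page_s, objt, pagecmp)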
|
| |
We still have no idea why this stops the crashes, but it does.
a machine forced to 64mb of ram cycled 10GB through swap with this diff
and is still running as I type this. Other tests by ariane@ and thib@
also seem to show that it's alright.
ok deraadt@, thib@, ariane@
|
| |
separately).
a change at or just before the hackathon has either exposed or added a
very very nasty memory corruption bug that is giving us hell right now.
So in the interest of kernel stability these diffs are being backed out
until such a time as that corruption bug has been found and squashed,
then the ones that are proven good may slowly return.
a quick hitlist of the main commits this backs out:
mine:
uvm_objwire
the lock change in uvm_swap.c
using trees for uvm objects instead of the hash
removing the pgo_releasepg callback.
art@'s:
putting pmap_page_protect(VM_PROT_NONE) in uvm_pagedeactivate() since
all callers called that just prior anyway.
ok beck@, ariane@.
prompted by deraadt@.
|
| |
global lock, switch the uvm object pages to being kept in a per-object
RB_TREE. Right now this is approximately the same speed, but cleaner.
When biglock usage is reduced this will improve concurrency due to reduced
lock contention.
ok beck@ art@. Thanks to jasper for the speed testing.
|
| |
two cases of pool_get() + memset(0) -> pool_get(,,,PR_ZERO)
1.5 cases of global variables are already zeroed, so don't zero them.
ok ariane@, comments on stuff i'd missed from blambert@ and cnst@.
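The shape of the conversion, with mypool and p as placeholders; PR_WAITOK
and PR_ZERO are the standard pool(9) flags.

    /* before: allocate, then clear by hand */
    p = pool_get(&mypool, PR_WAITOK);
    memset(p, 0, sizeof(*p));

    /* after: pool(9) returns zeroed memory directly */
    p = pool_get(&mypool, PR_WAITOK | PR_ZERO);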
|
| |
K&R function declarations, so switch them all over to ansi-style, in
accordance with the prophecy.
"go for it" art@
|
| |
pages.
"looks good/no problems with it" tedu@ miod@ art@
|
| |
changes the pressure on the uvm system, uncovering several bugs. Some
of those bugs result in provable deadlocks. We'll have to reconsider
integrating this diff again after fixing those bugs.
ok art@
|
| |
This will allow us to escape the limitations of kmem_map.
At this moment, the per-type limits are still enforced for all sizes,
but we might loosen that limit in the future after some thinking.
Original diff from Mickey in kernel/5761; I massaged it a little to
obey the per-type limits.
miod@ ok
|
| |
ckuethe@ for a while. Okay beck@, "it is good timing" deraadt@.
|
| |
to be page aligned and can contain more "noise".
From mickey art@ ok
|
| |
and make sure that nothing can ever be mapped at these addresses.
Only i386 overrides the default for now.
From mickey@, ok art@ miod@
|
| |
us did not see it or get a chance to test it before it was committed. It
broke cvs, in the ami driver, making it fail to see its devices.
|
| |
this results in less kva waste due to static preallocation of those
for every phys page and also every swap page.
tested by beck krw miod
|
| |
an interrupt-safe thread.
use this as the new backend for mbpool and mclpool, eliminating the mb_map.
introduce a sysctl kern.maxclusters which controls the limit of clusters
allocated.
testing by many people, works everywhere but m68k. ok deraadt@
this essentially deprecates the NMBCLUSTERS option; don't use it.
this should reduce pressure on the kmem_map and the uvm reserve of static
map entries.
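A small userland sketch of reading the new knob via sysctl(3), assuming the
KERN_MAXCLUSTERS MIB name matches the kern.maxclusters sysctl:

    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <stdio.h>

    int
    main(void)
    {
        int mib[2] = { CTL_KERN, KERN_MAXCLUSTERS };
        int maxclusters;
        size_t len = sizeof(maxclusters);

        if (sysctl(mib, 2, &maxclusters, &len, NULL, 0) == -1) {
            perror("sysctl");
            return 1;
        }
        printf("kern.maxclusters = %d\n", maxclusters);
        return 0;
    }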
|
| |
The only thing left in vm/ are just dumb wrappers.
vm/vm.h includes uvm/uvm_extern.h
vm/pmap.h includes uvm/uvm_pmap.h
vm/vm_page.h includes uvm/uvm_page.h
|
| |
Including support for zeroing pages in the idle loop (not enabled yet).
|
| |
This is to match (make diffs smaller) the code in NetBSD.
The new gcc inlines those functions, so this could also be a performance win.
|
| |
Mostly cleanups, but also a few improvements to pagedaemon for better
handling of low memory and/or low swap conditions.
|