path: root/sys/arch/alpha/dev

Commit log, most recent first. Each entry gives the commit subject, author,
date, number of files touched, and line diffstat (-deleted/+added).

* free(9) sizes.  (mpi, 2019-05-13; 1 file, -2/+4)
  From miod@

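For context on the several free(9) size conversions in this log, a minimal
sketch of what such a conversion looks like; the example_ function and its
arguments are placeholders, not code from these commits:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/malloc.h>

    void
    example_sized_free(void *buf, size_t len)
    {
    	/*
    	 * Interim form after the 2014-07-12 commit below: a size of 0
    	 * means "unknown" and the allocator has to work it out itself.
    	 */
    	/* free(buf, M_DEVBUF, 0); */

    	/* Converted form: pass the size that was given to malloc(9). */
    	free(buf, M_DEVBUF, len);
    }
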
* Add size for free.  (visa, 2018-01-11; 1 file, -2/+5)
  OK mpi@

* Rename Debugger() into db_enter().  (mpi, 2017-04-30; 1 file, -4/+4)
  Using a name with the 'db_' prefix makes it invisible from the dynamic
  profiler.
  ok deraadt@, kettenis@, visa@

* sizes for free()  (deraadt, 2015-09-02; 1 file, -2/+2)

* Move acquisition of the kernel lock deeper in the interrupt path, and make  (miod, 2015-05-19; 1 file, -1/+10)
  sure clock interrupts do not attempt to acquire it. This will also
  eventually allow for IPL_MPSAFE interrupts on alpha.
  Tested by dlg@ and I.

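A loose sketch of the dispatch shape this describes, assuming the usual
KERNEL_LOCK()/KERNEL_UNLOCK() macros; the handler structure, its fields and
the IPL_CLOCK test are illustrative, not the actual alpha interrupt code:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <machine/intr.h>

    struct example_intrhand {
    	int	(*ih_fn)(void *);
    	void	 *ih_arg;
    	int	  ih_level;
    };

    int
    example_intr_dispatch(struct example_intrhand *ih)
    {
    	int rv;

    	if (ih->ih_level >= IPL_CLOCK) {
    		/* Clock interrupts run without the kernel lock. */
    		rv = (*ih->ih_fn)(ih->ih_arg);
    	} else {
    		/* Other handlers take the lock only around the call. */
    		KERNEL_LOCK();
    		rv = (*ih->ih_fn)(ih->ih_arg);
    		KERNEL_UNLOCK();
    	}
    	return (rv);
    }
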
* Replace some malloc(n*size,...) calls with mallocarray().  (doug, 2014-12-09; 2 files, -4/+4)
  ok tedu@ deraadt@

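A minimal sketch of the mallocarray() conversion pattern; the example_ struct
and function are placeholders:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/malloc.h>

    struct example_seg {
    	u_long	es_addr;
    	u_long	es_len;
    };

    void *
    example_alloc_segs(size_t nsegs)
    {
    	/* Before: malloc(nsegs * sizeof(...)) could overflow silently. */
    	/* After: mallocarray() returns NULL if the product would overflow. */
    	return (mallocarray(nsegs, sizeof(struct example_seg), M_DEVBUF,
    	    M_NOWAIT | M_ZERO));
    }
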
* Replace a plethora of historical protection options with just  (deraadt, 2014-11-16; 1 file, -3/+3)
  PROT_NONE, PROT_READ, PROT_WRITE, and PROT_EXEC from mman.h. PROT_MASK is
  introduced as the one true way of extracting those bits. Remove UVM_ADV_*
  wrapper, using the standard names.
  ok doug guenther kettenis

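A small sketch of what the cleanup means for callers, using only the mman.h
names the commit describes; the function and its flags argument are
illustrative:

    #include <sys/mman.h>

    int
    example_prot_bits(int flags)
    {
    	/* PROT_MASK extracts the standard protection bits. */
    	int prot = flags & PROT_MASK;

    	if (prot == PROT_NONE)
    		return (0);
    	return (prot & (PROT_READ | PROT_WRITE));
    }
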
* Replace all queue *_END macro calls except CIRCLEQ_END with NULL.  (doug, 2014-09-13; 1 file, -2/+2)
  CIRCLEQ_* is deprecated and not called in the tree. The other queue types
  have *_END macros which were added for symmetry with CIRCLEQ_END. They are
  defined as NULL. There's no reason to keep the other *_END macro calls.
  ok millert@

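For context, a minimal sketch of the change in a traversal loop; the list
head, element and field names are placeholders:

    #include <sys/queue.h>

    struct example_item {
    	TAILQ_ENTRY(example_item) ei_list;
    };
    TAILQ_HEAD(example_head, example_item);

    int
    example_count(struct example_head *head)
    {
    	struct example_item *ei;
    	int n = 0;

    	/* Before: the loop compared against TAILQ_END(head), which is
    	 * simply defined as NULL; now NULL is written directly. */
    	for (ei = TAILQ_FIRST(head); ei != NULL;
    	    ei = TAILQ_NEXT(ei, ei_list))
    		n++;
    	return (n);
    }
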
* add a size argument to free. will be used soon, but for now default to 0.  (tedu, 2014-07-12; 3 files, -6/+6)
  after discussions with beck deraadt kettenis.

* sgmap loading didnt respect the dmamaps max number of segments.  (dlg, 2014-07-11; 1 file, -1/+4)
  this let it wanter off writing segment descriptors off in memory it didnt
  own, which led to some pretty awesome memory corruption. if you had a
  network card with a small number of tx descriptors per packet, a lot of
  memory, and a heavily fragmented packet (ie, ssh) you were basically
  guaranteed a confusing panic.
  ok miod@

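The fix described amounts to a bounds check on the segment index before
another descriptor is written; a hedged sketch of the idea, with placeholder
names rather than the actual sgmap code:

    #include <sys/errno.h>

    int
    example_check_seg(int seg, int maxsegs)
    {
    	/* Refuse to start a segment past the map's limit: fail the load
    	 * with EFBIG instead of writing descriptors into memory the map
    	 * does not own. */
    	if (seg >= maxsegs)
    		return (EFBIG);
    	return (0);
    }
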
* Convert bus_dmamem_map(9) to km_alloc(9) in order to make it fail and  (mpi, 2014-07-11; 1 file, -7/+6)
  not sleep if the allocator cannot obtain a lock when BUS_DMA_NOWAIT is
  specified.
  idea and inputs from kettenis@, ok miod@

* Preallocate sgmap extent regions for tsp, cia and mcpcia dma maps, which fall  (jmatthew, 2014-06-14; 2 files, -6/+24)
  back to sgmap if the direct mapping fails.
  ok miod@

* Use extent_alloc_with_descr(9) and add a mutex to protect the extent.  (kettenis, 2014-03-31; 3 files, -12/+23)
  This should make bus_dmamap_load(9) and bus_dmamap_unload(9) "mpsafe". As a
  bonus this gets rid of a potential memory allocation in the IO path.
  ok miod@

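A rough sketch of the locking shape this gives the sgmap extent, using the
generic mutex(9) and extent(9) interfaces; the struct, field names and the
extent_alloc() call are illustrative and assume the mutex was initialized
with mtx_init() at an interrupt-safe IPL:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/mutex.h>
    #include <sys/extent.h>

    struct example_sgmap {
    	struct extent	*es_ex;
    	struct mutex	 es_mtx;	/* e.g. mtx_init(&es_mtx, IPL_VM) */
    };

    int
    example_sgmap_alloc(struct example_sgmap *sg, u_long size, u_long align,
        u_long *result)
    {
    	int error;

    	/* The mutex makes concurrent bus_dmamap_load()/unload() safe, and
    	 * EX_NOWAIT keeps the allocation out of the sleep path. */
    	mtx_enter(&sg->es_mtx);
    	error = extent_alloc(sg->es_ex, size, align, 0, 0, EX_NOWAIT,
    	    result);
    	mtx_leave(&sg->es_mtx);
    	return (error);
    }
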
* Fix the error path in bus_dmamem_map.  (ariane, 2011-06-23; 1 file, -6/+2)
  As discussed on icb: remove the comment, remove pmap_remove (uvm_km_free
  does that for us).
  ok oga@, deraadt@

* More than a decade ago, interrupt handlers on sparc started returning 0  (deraadt, 2011-04-15; 1 file, -3/+7)
  (interrupt was not for me), 1 (positive interrupt was for me), or -1 (i am
  not sure...). We have continued with this practice in as many drivers as
  possible, throughout the tree.

  This makes some of the architectures use that information in their
  interrupt handler calling code -- if 1 is returned (and we know this
  specific machine does not have edge-shared interrupts), we finish servicing
  other possible handlers on the same pin. If the interrupt pin remains
  asserted (from a different device), we will end up back in the interrupt
  servicing code of course... but this is cheaper than calling all the
  chained interrupts on a pin. This does of course count on shared level
  interrupts being properly sorted by IPL.

  There have been some concerns about starvation of drivers which incorrectly
  return 1. Those drivers should be hunted down so that they return -1.
  (other architectures will follow)
  ok kettenis drahn dlg miod

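A skeleton of what the 0/1/-1 convention looks like from a driver's point of
view; the softc, status register and bit are made up for illustration:

    #include <sys/types.h>

    #define EXAMPLE_INTR_PENDING	0x01	/* made-up status bit */

    struct example_softc {
    	volatile uint32_t *sc_status;	/* made-up status register */
    };

    int
    example_intr(void *arg)
    {
    	struct example_softc *sc = arg;
    	uint32_t status = *sc->sc_status;

    	if (status == 0)
    		return (0);		/* interrupt was not for me */
    	if (status & EXAMPLE_INTR_PENDING) {
    		*sc->sc_status = status;	/* ack (illustrative) */
    		return (1);		/* positively for me */
    	}
    	return (-1);			/* handled, but not sure */
    }
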
* Kill pmap_phys_address(), and force every driver's mmap() routine to return  (miod, 2010-12-26; 1 file, -2/+2)
  a physical address [more precisely, something suitable to pass to
  pmap_enter()'s physical address argument].
  This allows MI drivers to implement mmap() routines without having to know
  about the pmap_phys_address() implementation and #ifdef obfuscation.

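A hedged sketch of a driver mmap() entry point after this change: it returns
the physical address directly, with no pmap_phys_address() cookie conversion.
The device base, window size and function name are made up:

    #include <sys/param.h>
    #include <sys/types.h>

    paddr_t
    example_mmap(dev_t dev, off_t off, int prot)
    {
    	paddr_t base = 0x1000000;	/* made-up device physical base */
    	off_t size = 0x10000;		/* made-up mappable window */

    	if (off < 0 || off >= size)
    		return (-1);
    	/* Something suitable for pmap_enter()'s physical address argument. */
    	return (base + off);
    }
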
* This is a first step towards getting rid of avail_start and avail_end in the  (miod, 2010-11-20; 1 file, -4/+2)
  kernel, currently limited to low-hanging fruit: these variables were used
  by bus_dma to specify the range in which to allocate memory, back when
  uvm_pglistalloc() was stupid and would not walk the vm_physseg[].
  Nowadays, except on some platforms for early initialization, these
  variables are not used, or do not need to be global variables. Therefore:
  - remove `extern' declarations of avail_start and avail_end (or close
    cousins, such as arm physical_start and physical_end) from files which
    no longer need to use them.
  - make them local variables whenever possible.
  - remove them when they are assigned to but no longer used.

* Get rid of evcount's support for arranging counters in a tree  (matthew, 2010-09-20; 1 file, -3/+2)
  hierarchy. Everything attached to a single root node anyway, so at best we
  had a bush.
  "i think it is good" deraadt@

* pmap_extract() does the equivalent of vtophys if pmap_kernel(), so instead of  (oga, 2010-04-10; 2 files, -11/+15)
  doing if (p != NULL) pmap_extract() else vtophys() in a loop, just do
  pmap_extract unconditionally.
  ok miod@ (he found a typo, all hail miod!)

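A minimal sketch of the simplification: pmap_extract() on pmap_kernel()
already does what vtophys() did, so the branch disappears. The function and
variable names are placeholders, not the alpha bus_dma code itself:

    #include <sys/param.h>
    #include <sys/proc.h>
    #include <sys/errno.h>
    #include <uvm/uvm_extern.h>

    int
    example_va_to_pa(struct proc *p, vaddr_t va, paddr_t *pap)
    {
    	pmap_t pmap = (p != NULL) ?
    	    p->p_vmspace->vm_map.pmap : pmap_kernel();

    	/* Before: if (p != NULL) pmap_extract(...); else *pap = vtophys(va);
    	 * After: pmap_extract() handles the kernel pmap as well. */
    	if (!pmap_extract(pmap, va, pap))
    		return (EFAULT);
    	return (0);
    }
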
* PMAP_CANFAIL for bus_dmamem_map on all other architectures (and some  (oga, 2010-03-29; 1 file, -6/+18)
  whitespace tweaks on i386 so that it matches).
  ok kettenis@

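For context, a sketch of the pmap_enter() call shape PMAP_CANFAIL enables in
bus_dmamem_map(): a failed enter can be unwound and ENOMEM returned instead
of panicking. The function is illustrative and uses the present-day PROT_*
names rather than the spelling in effect at the time of the commit:

    #include <sys/param.h>
    #include <sys/errno.h>
    #include <sys/mman.h>
    #include <uvm/uvm_extern.h>

    int
    example_map_page(vaddr_t va, paddr_t pa)
    {
    	int error;

    	/* PMAP_CANFAIL asks the pmap to return an error rather than
    	 * panic when it cannot allocate resources (e.g. BUS_DMA_NOWAIT). */
    	error = pmap_enter(pmap_kernel(), va, pa,
    	    PROT_READ | PROT_WRITE,
    	    PROT_READ | PROT_WRITE | PMAP_CANFAIL);
    	if (error) {
    		pmap_remove(pmap_kernel(), va, va + PAGE_SIZE);
    		return (ENOMEM);
    	}
    	return (0);
    }
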
* Remove unused last argument of alpha_shared_intr_disestablish().  (miod, 2009-09-30; 1 file, -3/+2)

* Add a BUS_DMA_ZERO flag for bus_dmamem_alloc() to return zeroed memory.  (oga, 2009-04-20; 1 file, -1/+3)
  Saves every damned driver calling bzero(), and continues the M_ZERO,
  PR_ZERO symmetry.

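A minimal sketch of a driver using the new flag; the function name, segment
count and alignment are illustrative:

    #include <sys/param.h>
    #include <machine/bus.h>

    int
    example_alloc_ring(bus_dma_tag_t dmat, bus_size_t size,
        bus_dma_segment_t *seg, int *nsegs)
    {
    	/* BUS_DMA_ZERO hands back already-zeroed pages, so the driver no
    	 * longer has to bzero() the mapped memory itself. */
    	return (bus_dmamem_alloc(dmat, size, PAGE_SIZE, 0, seg, 1, nsegs,
    	    BUS_DMA_NOWAIT | BUS_DMA_ZERO));
    }
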
* Convert the waitok field of uvm_pglistalloc to "flags", more will be added soon.  (oga, 2009-04-14; 1 file, -3/+5)
  For the possibility of sleeping, the first two flags are UVM_PLA_WAITOK and
  UVM_PLA_NOWAIT. It is an error not to show intention, so assert that one of
  the two is provided. Switch over every caller in the tree to using the
  appropriate flag.
  ok art@, ariane@

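A sketch of the call shape after the conversion, assuming the
uvm_pglistalloc() prototype with a trailing flags argument; the address
range, segment count and function name are placeholders:

    #include <sys/param.h>
    #include <uvm/uvm_extern.h>

    int
    example_grab_pages(psize_t size, struct pglist *mlist, int nowait)
    {
    	/* Exactly one of UVM_PLA_WAITOK / UVM_PLA_NOWAIT must be given;
    	 * uvm asserts that the caller states its sleeping intention. */
    	return (uvm_pglistalloc(size, 0, (paddr_t)-1, PAGE_SIZE, 0,
    	    mlist, 1, nowait ? UVM_PLA_NOWAIT : UVM_PLA_WAITOK));
    }
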
* When allocating memory in bus_dmamem_alloc() with uvm_pglistalloc(), do not  (miod, 2009-03-07; 1 file, -2/+2)
  try to be smart for the address range, uvm_pglistalloc() is smart enough
  nowadays.

* First pass at removing clauses 3 and 4 from NetBSD licenses.  (ray, 2008-06-26; 5 files, -40/+5)
  Not sure what's more surprising: how long it took for NetBSD to catch up to
  the rest of the BSDs (including UCB), or the amount of code that NetBSD has
  claimed for itself without attributing to the actual authors.
  OK deraadt@

* Apply (with slight variants) this elimination of bzero() with M_ZERO:  (krw, 2007-10-02; 1 file, -4/+3)
  - if ((mapstore = malloc(mapsize, M_DEVBUF,
  -     (flags & BUS_DMA_NOWAIT) ? M_NOWAIT : M_WAITOK)) == NULL)
  + if ((mapstore = malloc(mapsize, M_DEVBUF, (flags & BUS_DMA_NOWAIT) ?
  +     (M_NOWAIT | M_ZERO) : (M_WAITOK | M_ZERO))) == NULL)
            return (ENOMEM);
  - bzero(mapstore, mapsize);

* Rework the interrupt code, shaving some cycles off in the process.  (brad, 2006-06-15; 1 file, -1/+9)
  Rather than an "iointr" routine that decomposes a vector into an IRQ, we
  maintain a vector table directly, hooking up each "iointr" routine at the
  correct vector. This also allows us to hook device interrupts up to
  specific vectors.
  From thorpej NetBSD
  Tested by myself and a number of end-users.

* Check for stale flags in the DMA map.  (brad, 2006-05-21; 1 file, -1/+5)
  From thorpej NetBSD

* - _bus_dmamap_load_buffer_direct_common -> _bus_dmamap_load_buffer_direct  (brad, 2006-05-21; 1 file, -9/+10)
  - fix _bus_dmamap_load_(uio/mbuf)_direct panic messages.
  - s/vm_page_alloc_memory/uvm_pglistalloc/ in panic message.
  From NetBSD

* Fix a couple of comments.  (brad, 2006-05-21; 1 file, -4/+4)
  From NetBSD

* Pay attention to BUS_DMA_READ; don't need to allocate a spill  (brad, 2006-05-21; 1 file, -13/+45)
  page if it is set.
  From NetBSD

* Implement dmamap_load_uio for SGMAPs.  (brad, 2006-05-21; 1 file, -2/+73)
  From NetBSD

* Keep track of which DMA window was actually used to map the  (brad, 2006-05-12; 2 files, -2/+10)
  request (not always the passed in DMA tag if we try direct-map and then
  fall back to sgmap-mapped). Use the actual window when performing
  dmamap_sync and dmamap_unload operations.
  From NetBSD
  ok martin@

* Use PAGE_SIZE rather than NBPG.  (brad, 2006-04-13; 3 files, -13/+13)
  From NetBSD
  ok martin@ miod@

* clean up after Theo's "support mbuf handling in alpha sgmap dma maps" commit.  (brad, 2006-04-04; 2 files, -113/+5)
  ok martin@

* rev 1.30  (brad, 2006-03-27; 1 file, -13/+12)
  Don't increase the segment index if we skipped a zero-length mbuf.
  rev 1.22
  Since the SGMAP buffer load subroutine doesn't need to modify the segment
  index, don't pass it by reference.
  From NetBSD
  ok miod@

* factorize SGMAP-mapped DMA map creation and destroy code  (martin, 2006-03-20; 2 files, -2/+51)
  ok miod@, additional testing jsg@
  from NetBSD

* In _bus_dmamem_alloc_range(), do not ignore the caller's ``high'' parameter.  (miod, 2006-03-18; 1 file, -3/+1)
  Makes isadma much happier.
  From NetBSD

* Protect sgmap extents with splvm(); from NetBSD.  (miod, 2006-03-13; 1 file, -5/+10)

* Add a alpha_shared_intr_reset_strays() function that resets the stray  (martin, 2006-01-29; 1 file, -1/+14)
  interrupt counter for a given shared interrupt descriptor.
  When an interrupt is successfully handled, reset the strays counter, thus
  preventing a "slow leak" from eventually shutting off the interrupt vector.
  from NetBSD via KUDO Takashi

* no more Mach-macros  (martin, 2005-10-28; 1 file, -2/+2)

* Use list and queue macros where applicable to make the code easier to read;  (miod, 2004-12-25; 2 files, -11/+8)
  no functional change.

* Do not map empty mbufs (m_len == 0) in bus_dmamap_load_mbuf() as these mappings  (claudio, 2004-11-09; 2 files, -3/+8)
  may disturb the dma as seen in ipw(4). Emtpy mbufs are at the beginning of
  the mbuf chain and are as example a "side-effect" of a previous m_adj()
  call.
  OK miod@ mickey@ jason@ markus@

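The fix is essentially a skip in the mbuf-chain walk; a hedged sketch of the
pattern, where the per-segment load step is only indicated by a comment and
the function name is a placeholder:

    #include <sys/param.h>
    #include <sys/mbuf.h>

    int
    example_load_mbuf_chain(struct mbuf *m0)
    {
    	struct mbuf *m;

    	for (m = m0; m != NULL; m = m->m_next) {
    		/* Empty mbufs (e.g. left at the head of the chain by a
    		 * previous m_adj()) must not be mapped for DMA. */
    		if (m->m_len == 0)
    			continue;
    		/* ... load m->m_data / m->m_len into the next segment ... */
    	}
    	return (0);
    }
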
* Use new event counter API for interrupt counting on alpha. By me, with some  (aaron, 2004-06-28; 1 file, -2/+7)
  edits by Theo. deraadt@ ok

* support mbuf handling in alpha sgmap dma maps; from netbsd  (deraadt, 2004-01-13; 1 file, -183/+190)

* typos from Jared Yanovich;  (jmc, 2003-10-18; 2 files, -5/+5)

* this removes the functionality of adding allocated  (mickey, 2002-10-07; 1 file, -1/+2)
  pages into the queue already containing allocated pages.
  breaks i386:setup_buffers() because of this.

* No more need to initialize the result list before uvm_pglistalloc.  (art, 2002-10-06; 1 file, -2/+1)

* No \n at the end of a panic() message... I thought all occurences had been  (miod, 2002-06-25; 1 file, -2/+2)
  squashed already.

* Since the sgmap is used in interrupts protect the extent with splvm.  (art, 2002-03-20; 1 file, -1/+7)
  nate@ ok. Should fix a bunch of random memory corruption problems on many
  machines. How we could live so long without it is beyond me.
  Now my traktor is happy.

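For reference, a minimal sketch of the splvm() bracket this adds around the
sgmap extent operations; the extent pointer, alignment and function name are
placeholders, and on a modern tree this region is guarded by a mutex instead,
per the 2014-03-31 entry above:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/extent.h>

    int
    example_sgmap_extent_alloc(struct extent *ex, u_long size, u_long *result)
    {
    	int error, s;

    	/* The sgmap extent is also touched from interrupt context, so
    	 * block those interrupts while the extent is being modified. */
    	s = splvm();
    	error = extent_alloc(ex, size, PAGE_SIZE, 0, 0, EX_NOWAIT, result);
    	splx(s);
    	return (error);
    }
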