path: root/lib/librthread
Commit log, most recent first (each entry: subject, author, date; files changed, lines -removed/+added)
* make fixed-sized fixed-value mib[] arrays be const  (deraadt, 2020-10-12; 1 file, -4/+2)
  ok guenther tb millert
* Update my email address.  (pirofti, 2020-04-06; 4 files, -8/+8)
* Instead of opting in to futexes on archs with atomics opt out on archs  (jsg, 2020-02-06; 1 file, -10/+7)
  without atomics, a smaller list. ok mpi@ visa@
* Remove duplicated header.  (mpi, 2019-11-01; 1 file, -2/+1)
* Backout previous synch.h commit (r1.5, "Use process-private futexes to avoid  (sthen, 2019-10-24; 1 file, -5/+12)
  the uvm_map lookup overhead"). This causes hangs with Python, seen easily
  by trying to build ports/graphics/py-Pillow.
* Use process-private futexes to avoid the uvm_map lookup overhead.  (mpi, 2019-10-21; 1 file, -12/+5)
  While here kill unused _wait() function. ok visa@
* Wake all waiters when unlocking an rwlock. This fixes a hang  (visa, 2019-03-03; 1 file, -2/+2)
  that could happen if there was more than one writer waiting for a
  read-locked rwlock. Problem found by semarie@. OK semarie@ tedu@
* New futex(2) based rwlock implementation based on the mutex code.  (mpi, 2019-02-13; 3 files, -154/+168)
  This implementation reduces contention because threads no longer need to
  spin calling sched_yield(2) before going to sleep. Tested by many, thanks!
  ok visa@, pirofti@
* Import the existing rwlock implementation for architectures that cannot  (mpi, 2019-02-13; 1 file, -0/+260)
  use the futex(2)-based one due to missing atomic primitives.
* add a pthread_get_name_np to match pthread_set_name_np.  (tedu, 2019-02-04; 4 files, -3/+11)
  could be useful in ports. initial diff by David Carlier some time ago. ok jca
* Rename 1-letter variables to be coherent with other futex(2) based  (mpi, 2019-01-29; 1 file, -29/+25)
  implementations. ok pirofti@
* Move sigwait(3) from libpthread to libc  (jca, 2019-01-12; 3 files, -78/+1)
  POSIX wants it in libc, that's where the function can be found on other
  systems. Reported by naddy@, input from naddy@ and guenther@.
  "looks ok" guenther@, ok deraadt@
  Note: riding the libc/libpthread major cranks earlier today.
* mincore() is a relic from the past, exposing physical machine information  (deraadt, 2019-01-11; 1 file, -2/+2)
  about shared resources which no program should see. only a few pieces of
  software use it, generally poorly thought out. they are being fixed, so
  mincore() can be deleted. ok guenther tedu jca sthen, others
* Switch alpha to futex(2) based condvars, mutexes and semaphores.  (visa, 2018-10-21; 1 file, -5/+6)
  From Brad, tested by Miod, OK kettenis@
* Switch powerpc to futex(2) based condvars, mutexes and semaphores.  (visa, 2018-10-15; 1 file, -2/+2)
  From Brad, OK mpi@ kettenis@
* enable futex(2) based mutexes on armv7 and use futex based semaphores in  (jsg, 2018-09-24; 1 file, -3/+3)
  librthread on armv7 as well from brad ok visa@ kettenis@ mpi@
* Return EINVAL if pthread_barrier_init is called with count=0.  (pirofti, 2018-07-06; 1 file, -1/+4)
  OK kettenis@, guenther@
* New semaphore implementation making sem_post async-safe.  (pirofti, 2018-06-08; 4 files, -61/+560)
  POSIX dictates that sem_post() needs to be async-safe here[0] and is thus
  included in the list of safe functions to call from within a signal
  handler here[1].

  The old semaphore implementation used spinlocks and __thrsleep to
  synchronize between threads. Let's say there are two threads, T0 and T1,
  and the semaphore has V=0. T1 calls sem_wait() and will now sleep
  (spinlock) until someone else sem_post()'s. Let's say T0 sends a signal
  to T1 and exits. The signal handler calls sem_post(), which is meant to
  unblock T1 by incrementing V. With the old semaphore implementation we
  are now in a deadlock, as sem_post spinlocks on the same lock.

  The new implementation does not suffer from this defect because it uses
  futexes to resolve locking, so sem_post does not need to spin.

  Besides fixing this defect and making us POSIX compliant, this should
  also improve performance, as there should be less context switching and
  thus less time spent in the kernel.

  For architectures that do not provide futexes and atomic operations, the
  old implementation will be used; it is now renamed to rthread_sem_compat
  as discussed with mpi@.

  [0] -- http://pubs.opengroup.org/onlinepubs/9699919799/functions/sem_post.html
  [1] -- http://pubs.opengroup.org/onlinepubs/9699919799/functions/V2_chap02.html

  OK visa@, mpi@, guenther@
* syslog_r() expects a priority, not a facility. Use LOG_ERR for the  (bluhm, 2018-05-02; 1 file, -4/+4)
  pthread_attr_setstack() error message. OK deraadt@
* pthread_join() must not return EINTR  (guenther, 2018-04-27; 1 file, -9/+13)
  Simplify sem_trywait()
  ok pirofti@ mpi@
* Validate timespec and return ECANCELED when interrupted with SA_RESTART.  (pirofti, 2018-04-24; 2 files, -12/+13)
  Discussing with mpi@ and guenther@, we decided to first fix the existing
  semaphore implementation with regards to SA_RESTART and POSIX compliant
  returns in the case where we deal with restartable signals.

  Currently we return EINTR everywhere, which is mostly incorrect as the
  user can not know if she needs to recall the syscall or not. Return
  ECANCELED to signal that SA_RESTART was set and EINTR otherwise.

  Regression tests pass and so does the posixsuite. Timespec validation
  bits are needed to pass the latter.

  OK mpi@, guenther@
* (file missed from previous commit)  (deraadt, 2018-04-12; 1 file, -4/+48)
  Implement MAP_STACK option for mmap(). Synchronous faults (pagefault and
  syscall) confirm the stack register points at MAP_STACK memory, otherwise
  SIGSEGV is delivered. sigaltstack() and pthread_attr_setstack() are
  modified to create a MAP_STACK sub-region which satisfies alignment
  requirements.

  Observe that MAP_STACK can only be set/cleared by mmap(), which zeroes
  the contents of the region -- there is no mprotect() equivalent
  operation, so there is no MAP_STACK-adding gadget.

  This opportunistic software-emulation of a stack protection bit makes
  stack-pivot operations during ROPchain fragile (kind of like removing a
  tool from the toolbox).

  original discussion with tedu, uvm work by stefan, testing by mortimer
* Start mapping thread stacks with MAP_STACK. mmap() currently ignores  (deraadt, 2018-02-11; 1 file, -2/+2)
  the flag, but some problem identification can begin.
* Shift top-of-stack down so that the random==0 case doesn't leave stack  (deraadt, 2018-02-10; 1 file, -3/+3)
  pointer beyond the space. ok stefan, tedu
* Revert recent changes to unbreak ports/net/samba  (jca, 2017-11-04; 4 files, -3/+37)
  While it is not clear (to me) why that port ends up with corrupted
  shared libs, reverting those changes fixes the issue and should allow us
  to close p2k17 more smoothly.
  Discussed with a bunch, ok ajacoutot@ guenther@
* Prefer <elf.h> to the non portable <sys/exec_elf.h>.  (mpi, 2017-10-29; 2 files, -4/+4)
  ok jca@, deraadt@
* Change pthread_cleanup_{push,pop} to macros that store the cleanup info  (guenther, 2017-10-28; 4 files, -37/+3)
  on the stack instead of mallocing the list and move the APIs from
  libpthread to libc so that they can be used inside libc.

  Note: the standard was explicitly written to permit/support this "macro
  with unmatched brace" style and it's what basically everyone else
  already does. We xor the info with random cookies with a random magic to
  detect/trip-up overwrites.

  Major bump to both libc and libpthread due to the API move.
  ok mpi@
* Move the thread-related .h files to /usr/src/include/, since the  (guenther, 2017-10-15; 1 file, -2/+1)
  implementation is now spread between libc and librthread.
  No changes to the content
  ok mpi@
* Move mutex, condvar, and thread-specific data routines, pthread_once, and  (guenther, 2017-09-05; 34 files, -2994/+93)
  pthread_exit from libpthread to libc, along with low-level bits to
  support them. Major bump to both libc and libpthread.
  Requested by libressl team. Ports testing by naddy@
  ok kettenis@
* Use "volatile unsigned int" instead of _atomic_lock_t. The _atomic_lock_t  (kettenis, 2017-08-01; 1 file, -3/+3)
  isn't the same size on all our architectures and should only be used for
  spin locks. ok visa@, mpi@
* disable post fork checks for now, too much turbulence in the air  (tedu, 2017-07-30; 1 file, -2/+2)
* not all the world is an i386. Back out breakage.  (deraadt, 2017-07-29; 4 files, -13/+4)
* Use memory barriers to prevent pointer use before initialization.  (pirofti, 2017-07-29; 4 files, -4/+13)
  This work was sparked by the topic posted on hn by wuch. I am still not
  sure that this fixes the defect he claims to have observed because I was
  not able to create a proper regress test for it to manifest. To that
  end, a proof of concept is more than welcomed!
  Thank you for the report!
  Discussed with and OK kettenis@, tedu@.
* bad things can (and will) happen if a threaded program calls fork() and  (tedu, 2017-07-27; 2 files, -2/+10)
  then strays off the path to exec(). one common manifestation of this
  problem occurs in pthread_join(), so we can add a little check there.
  first person to hit this in real life gets to change the error message.
* Enable the use of futex(2) in librthread on mips64.  (visa, 2017-07-04; 1 file, -2/+3)
  OK mpi@, deraadt@
* Re-enabled futex based condvar & mutexes, they are not the cause of  (mpi, 2017-06-01; 1 file, -2/+9)
  vmd(8)'s regression.
* New condvar introduced a regression with vmd(8), revert until it is found.  (mpi, 2017-06-01; 1 file, -9/+2)
  Reported by Gregor Best.
* Enable futex-based mutex and condvar.  (mpi, 2017-05-29; 1 file, -2/+9)
  ok everybody
* SPINLOCK_SPIN_HOOK is no more, define our own set of macros.  (mpi, 2017-05-29; 2 files, -7/+11)
  Prodded by kettenis@ and tedu@
* Use membar_enter_after_atomic() and membar_exit_before_atomic().  (mpi, 2017-05-28; 1 file, -5/+5)
* New mutex and condvar implementations based on futex(2).  (mpi, 2017-05-27; 5 files, -8/+602)
  Not enabled yet, it needs some SPINLOCK_SPIN_HOOK love and some bumps.
  Tested by many including sthen@ in a bulk.
  ok visa@, sthen@, kettenis@, tedu@
* RELRO means the __{got,plt}_{start,end} symbols are superfluous  (guenther, 2017-02-27; 1 file, -6/+0)
  ok kettenis@
* Add support for AArch64.  (patrick, 2017-01-11; 1 file, -0/+49)
* Now that all non-ARMv7 platforms are gone, tedu the legacy atomic  (patrick, 2017-01-05; 1 file, -10/+3)
  locking code. ok kettenis@
* Get rid of ticket support, replace "struct _spinlock" with "_atomic_lock_t".  (akfaew, 2016-09-04; 11 files, -82/+64)
  ok tedu@
* Remove _USING_TICKETS, it's defined as 0. No functional change.  (akfaew, 2016-09-03; 6 files, -25/+19)
  ok tedu@ mpi@
* delete wrong cvs $ tags  (deraadt, 2016-09-01; 3 files, -6/+3)
* bump  (otto, 2016-09-01; 1 file, -1/+1)
* Less lock contention by using more pools for multi-threaded programs.  (otto, 2016-09-01; 4 files, -24/+45)
  tested by many (thanks!) ok tedu, guenther@
* retire sparc  (tedu, 2016-09-01; 1 file, -41/+0)