path: root/libexec/ld.so
Commit message / Author / Date / Files / Lines
* On i386 don't attempt to map shared libraries in low memory when (kurt, 2021-03-16; 3 files, -5/+21)
    a large executable's .text section crosses the 512MB exec line.
    Executables that have MAXTSIZ > 64MB can map above the default 512MB
    exec line. When this happens, shared libs that attempt to map into
    low memory will find their .data section cannot be mapped. ld.so
    will attempt to remap the shared lib at higher addresses until it
    can be mapped. For very large executables like chrome this process
    is very time consuming. This change detects how much of the
    executable's .text section exceeds 512MB and uses that as the
    initial hint for shared libs to map into, which avoids attempting
    to map into blocked memory. okay deraadt@
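    A minimal sketch of the hint computation described above; the macro
    and function names are illustrative, not ld.so's:

        #define EXEC_LINE	0x20000000UL	/* i386 512MB exec line */

        /* Initial mmap hint for shared libraries: skip however much of
         * the executable's .text spills past the exec line, so library
         * .data mappings are not attempted in the blocked low range. */
        static unsigned long
        initial_lib_hint(unsigned long text_end)
        {
                if (text_end <= EXEC_LINE)
                        return 0;		/* default placement */
                return text_end - EXEC_LINE;	/* size of the overflow */
        }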
* Fix a nasty mem leak in ld.so's own malloc. This was hard to diagnose, since (otto, 2020-12-26; 1 file, -4/+1)
    malloc dumping and gdb do not help at all when studying ld.so. In
    the end it turns out to be a simple merge error causing extra mmap
    calls. ok millert@ tb@
* Add retguard to macppc kernel locore.S, ofwreal.S, setjmp.S (gkoehler, 2020-11-28; 1 file, -3/+3)
    This changes RETGUARD_SETUP(ffs) to RETGUARD_SETUP(ffs, %r11, %r12)
    and RETGUARD_CHECK(ffs) to RETGUARD_CHECK(ffs, %r11, %r12) to show
    that r11 and r12 are in use between setup and check, and to pick
    registers other than r11 and r12 in some kernel functions.
    ok mortimer@ deraadt@
* Retguard asm macros for powerpc libc, ld.so (gkoehler, 2020-10-26; 1 file, -2/+5)
    Add retguard to some, but not all, asm functions in libc. Edit
    SYS.h in libc to remove the PREFIX macros and add SYSENTRY (more
    like aarch64 and powerpc64), so we can insert RETGUARD_SETUP after
    SYSENTRY. Some .S files in this commit don't get retguard, but do
    stop using the old prefix macros.
    Tested by deraadt@, who put this diff in a macppc snap.
* Use the retguard macros from asm.h to protect the system call stubs. (deraadt, 2020-10-16; 1 file, -2/+5)
    ok mortimer kettenis
* make three mib[] arrays const, as was done in libc (deraadt, 2020-10-15; 2 files, -12/+10)
* clang 10 now emits calls to __multi3 from libcompiler_rt (jca, 2020-08-11; 1 file, -1/+9)
    Hints from kettenis@, ok kettenis@ deraadt@
* Use the same names as the 64-bit PowerPC ELF ABI for the relocations. (kettenis, 2020-07-18; 2 files, -13/+15)
* Rewrite loop to match what is written down in the ABI document. (kettenis, 2020-07-16; 1 file, -6/+5)
    ok drahn@
* Make lazy binding work. (kettenis, 2020-07-16; 2 files, -14/+37)
    Committing on behalf of drahn@ who is a bit busy.
* Disable powerpc64 lazy binding, code was not for 64-bit ABI (drahn, 2020-06-28; 1 file, -25/+2)
    DT_PPC_GOT is not used on powerpc64, delete.
* Powerpc64 ld.so asm code needs to conform to the powerpc64 ABI, not the 32-bit one. (drahn, 2020-06-28; 1 file, -27/+28)
    ok kettenis@
* PowerPC64 ld.so code. (drahn, 2020-06-25; 7 files, -0/+749)
    Mostly ported; code runs far enough to start the first symbol
    string lookup. Build with -gdwarf-4 to remove asm warnings. Do not
    bother supporting 32-bit non-PIC relocations in shared libraries
    (however, leave the code there for now).
* ld.so(1) also ignores LD_LIBRARY_PATH and friends for set-group-ID executables (jca, 2020-05-08; 1 file, -6/+4)
    While here, use consistent casing and don't use .Ev for
    set-user-ID/set-group-ID.
    from Miod
* LD_DEBUG is ignored for set-user-ID and set-group-ID executables (jca, 2020-05-08; 1 file, -2/+3)
    from Miod
* Add missing space in stack smash handler error message. (matthieu, 2020-03-27; 1 file, -2/+2)
    ok kettenis@, deraadt@
* Anthony Steinhauser reports that 32-bit arm cpus have the same speculation (deraadt, 2020-03-13; 2 files, -5/+5)
    problems as 64-bit models. To resolve the syscall speculation, as a
    first step "nop; nop" was added after all occurrences of the
    syscall ("swi 0") instruction. Then the kernel was changed to jump
    over the 2 extra instructions. In this final step, those pairs of
    nops are converted into the speculation-blocking sequence
    ("dsb nsh; isb"). Don't try to build through these multiple steps;
    use a snapshot instead. Packages matching the new ABI will be out
    in a while... ok kettenis
* Anthony Steinhauser reports that 32-bit arm cpus have the same speculation (deraadt, 2020-03-13; 1 file, -2/+2)
    problems as 64-bit models. For the syscall instruction issue, add
    "nop; nop" after "swi 0", in preparation for jumping over a
    speculation barrier here later. (A lonely "swi 0" was hiding in
    __asm in this file.)
* Anthony Steinhauser reports that 32-bit arm cpus have the same speculation (deraadt, 2020-03-11; 1 file, -2/+4)
    problems as 64-bit models. For the syscall instruction issue, add
    "nop; nop" after "swi 0", in preparation for jumping over a
    speculation barrier here later. ok kettenis
* Now that the kernel skips the two instructions immediately following (kettenis, 2020-02-18; 2 files, -5/+5)
    a syscall, replace the double nop with a "dsb nsh; isb" sequence
    which stops the CPU from speculating any further. This fix was
    suggested by Anthony Steinhauser. ok deraadt@
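    A sketch of the resulting shape, as a standalone C fragment; the
    real stubs live in .S files, and this shows only the barrier pair
    that replaces the two nops after "svc #0"/"swi 0":

        /* The kernel returns to userland two instructions past the
         * syscall, so this pair only ever executes on a speculative
         * (mispredicted) path, which "dsb nsh; isb" terminates. */
        static inline void
        post_syscall_barrier(void)
        {
                __asm volatile("dsb nsh\n\tisb" ::: "memory");
        }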
* Insert two nop instructions after each svc #0 instruction in userland. (kettenis, 2020-01-26; 2 files, -6/+8)
    These will be replaced by a speculation barrier as soon as we teach
    the kernel to skip over these two instructions when returning from
    a system call. ok patrick@, deraadt@
* Eliminate failure returns from _dl_split_path(): if malloc fails just _dl_oom() (guenther, 2019-12-17; 2 files, -8/+10)
    Prompted by Qualys's leveraging malloc failure in _dl_split_path()
    to get stuff past. ok deraadt@ millert@
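    A standalone sketch of the policy, with hypothetical names (ld.so's
    real code uses its own allocator and error path): allocation
    failure while splitting a ':'-separated search path is fatal
    instead of a NULL return for callers to mishandle.

        #include <err.h>
        #include <stdlib.h>
        #include <string.h>

        static char **
        split_path(const char *searchpath)
        {
                size_t i, n = 1;
                const char *p;
                char **v, *copy, *s;

                for (p = searchpath; *p != '\0'; p++)
                        if (*p == ':')
                                n++;
                if ((v = calloc(n + 1, sizeof(*v))) == NULL ||
                    (copy = strdup(searchpath)) == NULL)
                        errx(1, "out of memory");  /* stands in for _dl_oom() */
                for (i = 0, s = copy; i < n; i++)
                        v[i] = strsep(&s, ":");    /* v[n] stays NULL */
                return v;
        }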
* Don't look up env variables until we know we'll trust them. Otherwise, (guenther, 2019-12-17; 1 file, -32/+21)
    just delete them without looking. ok millert@
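    A minimal sketch of that ordering, with hypothetical names: in an
    untrusted (set-user-ID/set-group-ID) process the LD_* variables are
    scrubbed from the environment without their values ever being read.

        #include <string.h>

        static void
        scrub_ld_env(char **envp, int trusted)
        {
                char **src, **dst;

                if (trusted)
                        return;		/* trusted: LD_* may be consulted */
                for (src = dst = envp; *src != NULL; src++)
                        if (strncmp(*src, "LD_", 3) != 0)
                                *dst++ = *src;	/* keep non-LD_ entries */
                *dst = NULL;
        }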
* ld.so may fail to remove the LD_LIBRARY_PATH environment variable for (millert, 2019-12-11; 1 file, -5/+7)
    set-user-ID and set-group-ID executables in low memory conditions.
    Reported by Qualys
* When loading a library, mmap(2) may fail. Then everything gets (bluhm, 2019-12-09; 1 file, -9/+10)
    unmapped and ld.so tries again with a different random address
    layout. In this case, use the new libc executable address for
    msyscall(2), not the one from the first try. Fixes sporadic bogus
    syscall on i386. OK deraadt@
* print addresses upon msyscall failure, for now (deraadt, 2019-12-09; 2 files, -4/+6)
* Disable ltrace for objects linked with -znow, as at least on amd64, linking (guenther, 2019-12-07; 12 files, -47/+24)
    that way deletes the lazy relocation trampoline which ltrace
    currently depends on.
    problem reported by tb@
    directional feedback kettenis@
    ok mpi@
* It is not always clear what ld.so was backed up to ld.so.backup, and (deraadt, 2019-12-02; 1 file, -2/+1)
    better that folk doing development in here use their own cp tooling.
* Sigh, fix i386 msyscall() case to permission the correct address range. (deraadt, 2019-11-30; 1 file, -6/+8)
* As additional paranoia, make a copy of system ld.so into obj/ld.so.backup (deraadt, 2019-11-29; 1 file, -1/+2)
    We don't want to CLEANFILES this one. On occasion this comes in useful.
* Repurpose the "syscalls must be on a writeable page" mechanism to (deraadt, 2019-11-29; 14 files, -17/+56)
    enforce a new policy: system calls must be in pre-registered
    regions. We have discussed more strict checks than this, but none
    satisfy the cost/benefit based upon our understanding of attack
    methods; anyways, let's see what the next iteration looks like.

    This is intended to harden (translation: attackers must put extra
    effort into attacking) against a mixture of W^X failures and JIT
    bugs which allow syscall misinterpretation, especially in
    environments with polymorphic-instruction/variable-sized
    instructions. It fits in a bit with libc/libcrypto/ld.so random
    relink on boot and no-restart-at-crash behaviour, particularly for
    remote problems. Less effective once on-host, since the libraries
    can be read.

    For static executables, the kernel registers the main program's
    PIE-mapped exec section as valid, as well as the randomly-placed
    sigtramp page. For dynamic executables, ELF ld.so's exec segment is
    also labelled valid; ld.so then has enough information to register
    libc's exec section as valid via a call-once msyscall(2).

    For dynamic binaries, we continue to permit the main program exec
    segment because "go" (and potentially a few other applications)
    have embedded system calls in the main program. Hopefully at least
    go gets fixed soon. We declare the concept of embedded syscalls a
    bad idea for numerous reasons, as we notice the ecosystem has many
    static-syscall-in-base binaries which are dynamically linked
    against libraries which in turn use libc, which contains another
    set of syscall stubs. We've been concerned about adding even one
    additional syscall entry point... but go's approach tends to double
    the entry-point attack surface.

    This was started at a nano-hackathon in Bob Beck's basement 2 weeks
    ago during a long discussion with mortimer trying to hide from the
    SSL scream-conversations, and finished in more comfortable
    circumstances next to a wood-stove at Elk Lakes cabin with UVM
    scream-conversations. ok guenther kettenis mortimer, lots of
    feedback from others; conversations about go with jsing tb sthen
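    A sketch of the registration step described above, with
    hypothetical variable names; msyscall(2) was the OpenBSD system
    call of this era, invoked exactly once by ld.so for libc's mapped
    executable region.

        #include <stddef.h>
        #include <unistd.h>

        int msyscall(void *addr, size_t len);	/* OpenBSD, call-once */

        static void
        register_libc_text(void *text_start, size_t text_len)
        {
                /* After this, syscall instructions outside registered
                 * regions are fatal to the process. */
                if (msyscall(text_start, text_len) == -1)
                        _exit(1);	/* no usable syscall region */
        }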
* Unrevert: this change was unrelated (guenther, 2019-11-28; 1 file, -16/+1)
* Revert yesterday's _dl_md_reloc() and _dl_md_reloc_got() changes: (guenther, 2019-11-28; 5 files, -386/+731)
    something's broken on at least i386.
* Delete now obsolete comments (guenther, 2019-11-27; 2 files, -6/+2)
* unifdef: hppa does HAVE_JMPREL and does not have DT_PROCNUM (guenther, 2019-11-27; 1 file, -16/+1)
* armv7 and aarch64 specify GLOB_DAT as having an addend, so treat it (guenther, 2019-11-27; 2 files, -10/+4)
    exactly like the ABS{32,64} relocation there.
    noted by and ok kettenis@
* Clean up _dl_md_reloc(): instead of having tables and piles of conditionals (guenther, 2019-11-26; 4 files, -589/+202)
    that handle a dozen relocation types for each, just have a nice
    little switch for the four specific relocations that actually
    occur. Besides being smaller and easier to understand, this fixes
    the COPY relocation handling to only do one symbol lookup, instead
    of looking up the symbol and then immediately looking it up again
    (with the correct flags to find the instance it needs).
    ok kettenis@
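    A sketch of the reshaped dispatch, using glibc <elf.h> constant
    names and amd64 relocations purely for illustration; the exact case
    list is per-architecture, and symbol lookup is stubbed out behind a
    callback.

        #include <elf.h>

        typedef Elf64_Addr (*resolve_fn)(const Elf64_Rela *);

        static void
        reloc_one(Elf64_Addr loff, const Elf64_Rela *rela, resolve_fn resolve)
        {
                Elf64_Addr *where = (Elf64_Addr *)(loff + rela->r_offset);

                switch (ELF64_R_TYPE(rela->r_info)) {
                case R_X86_64_NONE:
                        break;
                case R_X86_64_RELATIVE:		/* load offset + addend */
                        *where = loff + rela->r_addend;
                        break;
                case R_X86_64_64:		/* symbol value + addend */
                        *where = resolve(rela) + rela->r_addend;
                        break;
                case R_X86_64_GLOB_DAT:		/* symbol value */
                        *where = resolve(rela);
                        break;
                }
        }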
* Make aarch64, amd64, arm, and i386 more like sparc64: move non-lazy (guenther, 2019-11-26; 4 files, -135/+202)
    relocation from _dl_md_reloc() to _dl_md_reloc_all_plt() which has
    the minimal code to do it. Also, avoid division on PLTRELSZ; just
    use it to offset to the end.
    ok kettenis@
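    The "no division" detail, sketched with 64-bit RELA types and
    illustrative names: DT_PLTRELSZ is a byte count, so walk from
    DT_JMPREL to its byte end instead of dividing it into an entry
    count.

        #include <elf.h>
        #include <stddef.h>

        static void
        reloc_all_plt(Elf64_Addr loff, const Elf64_Rela *jmprel,
            size_t pltrelsz, Elf64_Addr (*resolve)(const Elf64_Rela *))
        {
                const Elf64_Rela *rela = jmprel;
                const Elf64_Rela *end =
                    (const Elf64_Rela *)((const char *)jmprel + pltrelsz);

                for (; rela < end; rela++) {
                        Elf64_Addr *where =
                            (Elf64_Addr *)(loff + rela->r_offset);

                        *where = resolve(rela);	/* bind the slot now */
                }
        }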
* Simplify the handling of the explicit relocations based on ld.so only (guenther, 2019-11-10; 1 file, -13/+8)
    having NONE and REL32_64 relocations w/o symbol.
    ok visa@
* unifdef HAVE_JMPREL, delete dt_pltrelsz handling (which was only used (guenther, 2019-11-10; 1 file, -34/+2)
    in the HAVE_JMPREL case anyway), and reduce #includes to match boot.c
    ok visa@
* Recommit CHECK_LDSO bits for mips64, verified on both loongson and octeon. (guenther, 2019-11-10; 1 file, -1/+9)
    ok visa@
* Delete unused support for relocations that don't require alignment. (guenther, 2019-10-24; 4 files, -97/+12)
    ok mpi@ kettenis@
* Prefer the size-independent ELF identifiers over the size-specific ones. (guenther, 2019-10-23; 20 files, -252/+252)
    Strip superfluous parens from return statements while here.
    Done programmatically with two perl invocations.
    idea ok kettenis@ drahn@
    ok visa@
* Whoops: backout mips64+hppa CHECK_LDSO bits: they weren't done and weren't (guenther, 2019-10-21; 2 files, -25/+2)
    part of the review. My fail for forgetting to diff my tree against
    what was reviewed.
    problem noted by deraadt@
* For more archs, ld.so itself only needs/uses the arch's "just add load offset" (guenther, 2019-10-20; 15 files, -97/+462)
    'relative' relocation. Take advantage of that to simplify ld.so's
    self-reloc code:
     * give the exceptional archs (hppa and mips64) copies of the
       current boot.c as boot_md.c
     * teach the Makefile to use boot_md.c when present
     * reduce boot.c down to the minimum necessary to handle just the
       relative reloc
     * teach the Makefile to fail if the built ld.so has other types
       of relocs
    ok visa@ kettenis@
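    A sketch of the minimal self-relocation loop such a slimmed-down
    boot.c needs, shown with 64-bit RELA types and an amd64 constant
    for illustration: apply only "load offset + addend" relocations and
    ignore everything else.

        #include <elf.h>
        #include <stddef.h>

        static void
        self_reloc(Elf64_Addr loff, const Elf64_Rela *rela, size_t relasz)
        {
                const Elf64_Rela *end =
                    (const Elf64_Rela *)((const char *)rela + relasz);

                for (; rela < end; rela++)
                        if (ELF64_R_TYPE(rela->r_info) == R_X86_64_RELATIVE)
                                *(Elf64_Addr *)(loff + rela->r_offset) =
                                    loff + rela->r_addend;
        }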
* Tighten handling of pure relative DIR32 relocations and those referencing (guenther, 2019-10-05; 1 file, -11/+12)
    sections; despite being a RELA arch, ld.so was making assumptions
    about the initialization of the targeted location. Add the relative
    relocation optimization, handling relocations covered by the
    DT_RELACOUNT value in a tight loop.
    ok mpi@ deraadt@
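    The DT_RELACOUNT fast path, sketched with 64-bit types for brevity
    (hppa itself is 32-bit): the first relacount RELA entries are
    guaranteed to be relative, so no type dispatch or symbol lookup is
    needed for them.

        #include <elf.h>
        #include <stddef.h>

        static const Elf64_Rela *
        apply_relative(Elf64_Addr loff, const Elf64_Rela *rela,
            size_t relacount)
        {
                size_t i;

                for (i = 0; i < relacount; i++, rela++)
                        *(Elf64_Addr *)(loff + rela->r_offset) =
                            loff + rela->r_addend;
                return rela;	/* the rest take the general path */
        }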
* Delete some obsolete debugging #ifdef blocks (guenther, 2019-10-05; 9 files, -79/+9)
    ok mlarkin@, mpi@, krw@, deraadt@
* Convert the child_list member from a linked list to a vector. (guenther, 2019-10-04; 7 files, -43/+66)
    ok mpi@
* Use a better algorithm for calculating the grpsym library order. (guenther, 2019-10-03; 5 files, -59/+83)
    The existing code did a full recursive walk for O(horrible).
    Instead, keep a single list of nodes plus the index of the first
    node whose children haven't been scanned; loop until that index
    catches the end, appending the unscanned children of the node at
    the index. This also makes the grpsym list order match that
    calculated by FreeBSD and glibc in dependency trees with
    inconsistent ordering of dependent libs.

    To make this easier and more cache friendly, convert grpsym_list
    to a vector: the size is bounded by the number of objects
    currently loaded.

    Other, related fixes:
     * increment the grpsym generation number _after_ pushing the
       loading object onto its grpsym list, to avoid double counting it
     * increment the grpsym generation number when building the grpsym
       list for an already loaded object that's being dlopen()ed, to
       avoid incomplete grpsym lists
     * use a more accurate test of whether an object already has a
       grpsym list

    Prompted by a diff from Nathanael Rensen (nathanael (at)
    list.polymorpheus.com) that pointed to _dl_cache_grpsym_list() as a
    performance bottleneck. Much prodding from robert@, sthen@, aja@,
    jca@. No problem reports after being in snaps. ok mpi@
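    A sketch of that algorithm with hypothetical stand-in structures
    (not ld.so's elf_object): one vector doubles as the work queue and
    the result, and a scan index chases the append index, giving a
    breadth-first traversal with no recursion.

        #include <stddef.h>

        struct obj {
                struct obj **children;	/* NULL-terminated array */
                int seen;		/* generation mark, reset by caller */
        };

        static size_t
        grpsym_order(struct obj *root, struct obj **vec, size_t max)
        {
                size_t scan = 0, len = 0;

                vec[len++] = root;
                root->seen = 1;
                while (scan < len) {	/* until the index catches the end */
                        struct obj **c = vec[scan++]->children;

                        for (; c != NULL && *c != NULL; c++)
                                if (!(*c)->seen && len < max) {
                                        (*c)->seen = 1;
                                        vec[len++] = *c;  /* append child */
                                }
                }
                return len;
        }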
* Oops: the call to ofree() in orealloc() was misconverted into a call to (guenther, 2019-09-30; 1 file, -2/+2)
    _dl_free(), which would trigger a "recursive call" assertion... if
    we had ever realloced in ld.so.
    ok deraadt@