The existing code did a full recursive walk for O(horrible). Instead,
keep a single list of nodes plus the index of the first node whose
children haven't been scanned; loop until that index catches up with
the end, appending the unscanned children of the node at the index. This
also makes the grpsym list order match that calculated by FreeBSD and
glibc in dependency trees with inconsistent ordering of dependent libs.
To make this easier and more cache friendly, convert grpsym_list
to a vector: the size is bounded by the number of objects currently
loaded.
Other, related fixes:
* increment the grpsym generation number _after_ pushing the loading
object onto its grpsym list, to avoid double counting it
* increment the grpsym generation number when building the grpsym list
for an already loaded object that's being dlopen()ed, to avoid
incomplete grpsym lists
* use a more accurate test of whether an object already has a grpsym list
Prompted by a diff from Nathanael Rensen (nathanael (at) list.polymorpheus.com)
that pointed to _dl_cache_grpsym_list() as a performance bottleneck.
Much prodding from robert@, sthen@, aja@, jca@
no problem reports after being in snaps
ok mpi@
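The rewrite above amounts to a breadth-first walk driven by one vector plus a scan index, instead of a recursive walk. A minimal sketch in C; the names (`node`, `build_grpsym`, `seen`) are illustrative, not ld.so's actual identifiers:

```c
#include <assert.h>
#include <stddef.h>

#define MAXNODES 64

struct node {
	struct node	*children[4];
	int		 nchildren;
	int		 seen;		/* already on the vector? */
};

/*
 * Build the group-symbol list of 'root' without recursion: keep a
 * single vector plus the index of the first node whose children
 * haven't been scanned, and loop until that index catches up with
 * the end, appending unscanned children as we go.
 */
static size_t
build_grpsym(struct node *root, struct node *vec[MAXNODES])
{
	size_t len = 0, scan = 0;

	vec[len++] = root;
	root->seen = 1;
	while (scan < len) {		/* scan catches up with the end */
		struct node *n = vec[scan++];
		for (int i = 0; i < n->nchildren; i++) {
			struct node *c = n->children[i];
			if (!c->seen) {	/* append unscanned children */
				c->seen = 1;
				vec[len++] = c;
			}
		}
	}
	return len;
}
```

The resulting order is plain breadth-first, which is what makes it match the FreeBSD/glibc ordering even when dependent libraries list their dependencies inconsistently.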
- the symbol it found, returned via the second argument
- the base offset of the object it was found in, via the return value
- optionally: the object it was found in, returned via the last argument
Instead, return a struct with the symbol and object pointers and let the
caller get the base offset from the object's obj_base member. On at least
aarch64, amd64, mips64, powerpc, and sparc64, a two word struct like this
is passed in registers.
ok mpi@, kettenis@
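The two-word struct return can be sketched as follows; the struct and field names here are illustrative stand-ins, not ld.so's actual definitions:

```c
#include <assert.h>

/* Illustrative stand-ins for ld.so's types (hypothetical names). */
struct elf_object {
	unsigned long		 obj_base;	/* caller reads the base offset here */
};

struct sym_res {
	const void		*sr_sym;	/* the symbol found, or NULL */
	const struct elf_object	*sr_obj;	/* the object it was found in */
};

/*
 * Two pointer-sized words: on aarch64, amd64, mips64, powerpc, and
 * sparc64 this struct comes back in a register pair, so it is no
 * slower than the old out-parameter scheme and much less error-prone.
 */
static struct sym_res
find_symbol(const struct elf_object *obj, const void *sym)
{
	struct sym_res res = { sym, obj };
	return res;
}
```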
In 2013, I implemented the single-entry LRU cache that gets the maximal
symbol reuse from combreloc. Since then, the ld.so generic relocation
symcache has been a waste of CPU and memory with 0% hit-rate, so kill it.
ok mpi@
the change in __getcwd(2)'s return value. Fix it by switching to the
__realpath(2) syscall, eliminating the ld.so copy of realpath().
problem caught by regress and noted by bluhm@
ok deraadt@
anywhere and can use Elf_Word instead.
ok guenther
previously 'implemented' by having the Elf_Word typedef in <sys/exec_elf.h>
vary, but that doesn't match the spec and breaks libelf so it's gone away.
Implement the variation here by defining our own type locally for this.
ok deraadt@
from Matt Dillon's implementation in DragonFlyBSD commit 7629c631.
One difference is that as long as DT_HASH is still present, ld.so
will use that to get the total number of symbols rather than walking
the GNU hash chains. Note that the GPLv2 binutils we have doesn't
support DT_GNU_HASH, so this only helps archs where lld is used.
ok kettenis@ mpi@
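For reference, the hash function fixed by the de-facto DT_GNU_HASH format is the djb-style h = h*33 + c, starting from 5381:

```c
#include <assert.h>
#include <stdint.h>

/* The GNU hash function used by DT_GNU_HASH sections. */
static uint32_t
gnu_hash(const char *name)
{
	uint32_t h = 5381;

	while (*name)
		h = h * 33 + (unsigned char)*name++;
	return h;
}
```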
__got_{start,end} to find a region to mark read-only. It was only used
for binaries that didn't have a GNU_RELRO segment, but all archs have
been using that for over a year. Since support for insecure-PLT layouts
on powerpc and alpha has been removed, all archs handle GNU_RELRO the
same way and the support can be moved from the MD code to the MI code.
ok mpi@
we're looking up?" logic from _dl_find_symbol_obj() into matched_symbol(), so
that the former is just the "iterate across the hash" logic.
matched_symbol() returns zero on "not found", one on "found strong
symbol", and negative one on "found weak symbol". The last of those lets
the caller give up on this object after finding a weak symbol, as there's
no point in continuing to search for a strong symbol in the same object.
ok mpi@
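The 0 / 1 / -1 convention can be sketched with toy types; the real code iterates an ELF hash chain, and all names below are made up for illustration:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

enum { NOT_FOUND = 0, FOUND_STRONG = 1, FOUND_WEAK = -1 };

struct sym {
	const char	*name;
	int		 weak;
};

/* "is this the symbol we're looking up?" logic, factored out. */
static int
matched_symbol(const struct sym *s, const char *name, const struct sym **out)
{
	if (strcmp(s->name, name) != 0)
		return NOT_FOUND;
	*out = s;
	return s->weak ? FOUND_WEAK : FOUND_STRONG;
}

/*
 * The "iterate across the hash" logic: any non-zero result ends the
 * scan of this object -- after a weak match there's no point searching
 * the same object for a strong one.
 */
static const struct sym *
find_symbol_obj(const struct sym *tab, int n, const char *name)
{
	const struct sym *found = NULL;

	for (int i = 0; i < n; i++) {
		if (matched_symbol(&tab[i], name, &found) != NOT_FOUND)
			break;
	}
	return found;
}
```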
the return pointers into a structure and pass that to _dl_find_symbol_obj().
Set sl->sl_obj_out in _dl_find_symbol_obj() so that the callers don't
need to each record the object.
ok mpi@
ok millert@
ok patrick@, millert@
simply exiting, via helper functions _dl_die(), _dl_diedie(), and
_dl_oom().
prompted by a complaint from jsing@
ok jsing@ deraadt@
problem reported by semarie@
|
ok kettenis@
ok kettenis@
Don't skip DT_INIT and DT_FINI for the main executable. This matches what
Linux and Solaris do.
ok guenther@
range instead of the [__got_start, __got_end) range.
On many archs this will cover _DYNAMIC too, so move up the DT_DEBUG handling
to before relocations and the mprotect are done.
ok kettenis@
for our development process.
ok kettenis@ deraadt@
portion like crt0 does. This is prep for eliminating _dl_fixup_user_env()
Mark almost everything in resolve.h as hidden, to improve code generation.
ok kettenis@ mpi@ "good time" deraadt@
load time only now. Rename _dl_searchnum and lastlookup to _dl_grpsym_gen
and grpsym_gen as they are generation numbers. Merge _dl_newsymsearch()
into _dl_cache_grpsym_list_setup().
ok millert@
needs to lock down the entire load group, not just the specific object.
problem report and ok sthen@
been in snaps for a week
a new MI routine _dl_protect_segment(), and use that for protecting the
GOT and--on some archs--the PLT.
Amazing testing turnaround by miod@, who apparently violated relativity
to get back results on some archs as fast as he did
fix _dl_strdup to return NULL instead of crash; ok deraadt@
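The shape of the fix can be sketched as below; `malloc`/`strlen`/`memcpy` stand in for ld.so's internal `_dl_*` versions, and `xstrdup` is an illustrative name:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Return NULL on allocation failure instead of crashing. */
static char *
xstrdup(const char *s)
{
	size_t len = strlen(s) + 1;
	char *d = malloc(len);

	if (d == NULL)
		return NULL;	/* previously: wrote through NULL and crashed */
	memcpy(d, s, len);
	return d;
}
```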
ok okan kettenis
ok guenther
each plt call, allowing a binary linked against shared libraries to be
traced at the public-function-call level.
To do so, ltrace(1) sets up some environment variables to enable plt tracing
in ld.so, and invokes ktrace(2) for utrace events. ld.so will force lazy
binding and will send a utrace record in the plt resolver, without updating
the plt.
Minimal filtering capabilities are provided, inspired by Solaris' truss -u,
to limit tracing to libraries and/or symbol names. Non-traced libraries and
symbols will have the regular resolver processing, with the expected plt
update.
"Get it in" deraadt
Much assistance and testing by miod
ok miod@
Improvements and okay matthew@, millert@, guenther@
pointers to prepare for adding rpath ORIGIN support.
okay matthew@ millert@
ok kurt
ok matthew@
DF_1_NODELETE and DF_1_INITFIRST, as well as DF_1_NOW and DF_1_GLOBAL.
Committing for kurt@ who worked out the final version; ok guenther@ drahn@
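The DF_1_* bit values come from the ELF gABI; the flag-parsing sketch and the `obj`/`z_*` names below are illustrative:

```c
#include <assert.h>

/* DF_1_* values per the ELF gABI dynamic-section specification. */
#define DF_1_NOW	0x00000001	/* resolve all symbols at load time */
#define DF_1_GLOBAL	0x00000002	/* symbols visible to all objects */
#define DF_1_NODELETE	0x00000008	/* object may never be unloaded */
#define DF_1_INITFIRST	0x00000020	/* run this object's init first */

struct obj {
	int z_now, z_global, z_nodelete, z_initfirst;
};

/* Record the DT_FLAGS_1 bits on the object (illustrative). */
static void
parse_flags1(struct obj *o, unsigned long d_val)
{
	if (d_val & DF_1_NOW)		o->z_now = 1;
	if (d_val & DF_1_GLOBAL)	o->z_global = 1;
	if (d_val & DF_1_NODELETE)	o->z_nodelete = 1;
	if (d_val & DF_1_INITFIRST)	o->z_initfirst = 1;
}
```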
has some issues. Discussed with various, ok drahn@
get it in tree now deraadt@, ok by several ports folks. Thanks for the testing.
Pointed out by patrick keshish.
already generated list. Speeds up startup on deeply nested dlopen binaries.
ok guenther@, tested by ckuethe@ and ajacoutot@
for all objects which simplifies phdr usage in a few places.
"go for it" drahn@
- rename private values in struct elf_object to better
describe their meaning:
s/load_offs/obj_base/ "object's address '0' base"
s/load_addr/load_base/ "The base address of the loadable
segments"
- gdb needs the obj_base value so swap positions with load_base in
struct elf_object
- fix a few occurrences of where load_base was used instead of
obj_base.
With help and okay drahn@
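The relationship between the two bases can be made concrete with a small worked example (addresses made up; `object_base` is an illustrative helper, not ld.so code). For an object whose first PT_LOAD has virtual address `first_vaddr` and which got mapped at `map_addr`, `load_base` is `map_addr` itself, while `obj_base` is `map_addr - first_vaddr`; a symbol then lives at `obj_base + st_value`, which is why using one where the other was meant was a bug:

```c
#include <assert.h>

/* obj_base: where the object's address '0' falls in memory. */
static unsigned long
object_base(unsigned long map_addr, unsigned long first_vaddr)
{
	return map_addr - first_vaddr;
}
```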
Prelink fixes the addresses of libraries, making 'return to libc' attacks
trivial; prebind uses a different method to achieve most of the same gains,
but without adding any security concerns.
Still under development, now in-tree.
ok drahn@
simpler; however, it broke ldd's refcount output. Use _dl_link_child to
increment refcounts and adjust _dl_notify_unload_shlib to match.
work by drahn@ and myself. ok drahn@