Rich Felker [Wed, 20 May 2015 04:17:35 +0000 (00:17 -0400)]
fix inconsistency in a_and and a_or argument types on x86[_64]
conceptually, and on other archs, these functions take a pointer to
int, but in the i386, x86_64, and x32 versions of atomic.h, they took
a pointer to void instead.
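a minimal sketch of the corrected interface, in musl's inline-asm
style; the asm body here is illustrative rather than copied from
atomic.h:

    static inline void a_and(volatile int *p, int v)
    {
        __asm__ __volatile__ ("lock ; andl %1, %0"
            : "+m"(*p) : "r"(v) : "memory");
    }

a_or is analogous with orl; previously the first parameter was
declared volatile void * on these archs.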
Bobby Bingham [Sun, 17 May 2015 18:46:38 +0000 (13:46 -0500)]
inline llsc atomics when building for sh4a
If we're building for sh4a, the compiler is already free to use
instructions only available on sh4a, so we can do the same and inline the
llsc atomics. If we're building for an older processor, we still do the
same runtime atomics selection as before.
Rich Felker [Mon, 18 May 2015 20:51:54 +0000 (16:51 -0400)]
reprocess libc/ldso RELA relocations in stage 3 of dynamic linking
this fixes a regression on powerpc that was introduced in commit f3ddd173806fd5c60b3f034528ca24542aecc5b9. global data accesses on
powerpc seem to be using a translation-unit-local GOT filled via
R_PPC_ADDR32 relocations rather than R_PPC_GLOB_DAT. being a non-GOT
relocation type, these were not reprocessed after adding the main
application and its libraries to the chain, causing libc code not to
see copy relocations in the main program, and therefore to use the
pre-copy-relocation addresses for global data objects (like environ).
the motivation for the dynamic linker only reprocessing GOT/PLT
relocation types in stage 3 is that these types always have a zero
addend, making them safe to process again even if the storage for the
addend has been clobbered. other relocation types which can be used
for address constants in initialized data objects may have non-zero
addends which will be clobbered during the first pass of relocation
processing if they're stored inline (REL form) rather than out-of-line
(RELA form).
powerpc generally uses only RELA, so this patch is sufficient to fix
the regression in practice, but is not fully general, and would not
suffice if an alternate toolchain generated REL for powerpc.
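the addend-clobbering hazard can be seen from the two relocation
record layouts (mirroring Elf32_Rel/Elf32_Rela from elf.h):

    #include <stdint.h>

    struct rel  { uint32_t r_offset, r_info; };            /* addend inline */
    struct rela { uint32_t r_offset, r_info; int32_t r_addend; };

    /* REL form, first pass (S = resolved address; the relocated word
     * initially holds the addend A):
     *     *(uint32_t *)(base + r_offset) += S;
     * A is overwritten, so a second pass would add S again. the RELA
     * form recomputes S + r_addend from pristine data, so reprocessing
     * it is idempotent. */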
Rich Felker [Mon, 18 May 2015 16:11:25 +0000 (12:11 -0400)]
fix null pointer dereference in dcngettext under specific conditions
if setlocale has not been called, the current locale's messages_name
may be a null pointer. the code path where it's assumed to be non-null
was only reachable if bindtextdomain had already been called, which is
normally not done in programs which do not call setlocale, so the
omitted check went unnoticed.
patch from Void Linux, with description rewritten.
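a hedged sketch of the added guard, using an illustrative locale
struct (the real field is messages_name, per the description above):

    struct locale_sketch { const char *messages_name; };

    const char *dcngettext_sketch(struct locale_sketch *loc, const char *msgid)
    {
        const char *lname = loc->messages_name;
        if (!lname) return msgid;  /* setlocale never called: no translation */
        /* ... otherwise consult the catalog bound for lname ... */
        return msgid;
    }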
Rich Felker [Sat, 16 May 2015 05:53:54 +0000 (01:53 -0400)]
eliminate costly tricks to avoid TLS access for current locale state
the code being removed used atomics to track whether any threads might
be using a locale other than the current global locale, and whether
any threads might have abstract 8-bit (non-UTF-8) LC_CTYPE active, a
feature which was never committed (still pending). the motivations
were to support early execution prior to setup of the thread pointer,
to partially support systems (ancient kernels) where thread pointer
setup is not possible, and to avoid high performance cost on archs
where accessing the thread pointer may be very slow.
since commit 19a1fe670acb3ab9ead0fe31859ca7d4fe40dd54, the thread
pointer is always available, so these hacks are no longer needed.
removing them greatly simplifies the affected code.
Rich Felker [Sat, 16 May 2015 05:15:40 +0000 (01:15 -0400)]
in i386 __set_thread_area, don't assume %gs register is initially zero
commit f630df09b1fd954eda16e2f779da0b5ecc9d80d3 added logic to handle
the case where __set_thread_area is called more than once by reusing
the GDT slot already in the %gs register, and only setting up a new
GDT slot when %gs is zero. this created a hidden assumption that %gs
is zero when a new process image starts, which is true in practice on
Linux, but does not seem to be documented ABI, and fails to hold under
qemu app-level emulation.
while it would in theory be possible to zero %gs in the entry point
code, this code is shared between static and dynamic binaries, and
dynamic binaries must not clobber the value of %gs already setup by
the dynamic linker.
the alternative solution implemented in this commit simply uses global
data to store the GDT index that's selected. __set_thread_area should
only be called in the initial thread anyway (subsequent threads get
their thread pointer setup by __clone), but even if it were called by
another thread, it would simply read and write back the same GDT index
that was already assigned to the initial thread, and thus (in the x86
memory model) there is no data race.
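a hedged C sketch of the approach (musl's real code does this in asm;
the kernel writing the allocated slot back into entry_number is
standard set_thread_area behavior):

    #include <sys/syscall.h>
    #include <unistd.h>
    #include <asm/ldt.h>            /* struct user_desc */

    static int gdt_index = -1;      /* global data; -1 means no slot yet */

    static int set_thread_area_sketch(void *tp)
    {
        struct user_desc desc = {
            .entry_number = gdt_index, /* -1 asks the kernel for a free slot */
            .base_addr = (unsigned long)tp,
            .limit = 0xfffff,
            .seg_32bit = 1,
            .limit_in_pages = 1,
            .useable = 1,
        };
        if (syscall(SYS_set_thread_area, &desc) < 0) return -1;
        gdt_index = desc.entry_number;  /* kernel wrote back the slot chosen */
        /* the caller then loads %gs with (gdt_index << 3) | 3 */
        return 0;
    }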
Rich Felker [Thu, 14 May 2015 22:51:27 +0000 (18:51 -0400)]
make arm reloc.h CRTJMP macro compatible with thumb
compilers targeting armv7 may be configured to produce thumb2 code
instead of arm code by default, and in the future we may wish to
support targets where only the thumb instruction set is available.
the instructions this patch omits in thumb mode are needed only for
non-thumb versions of armv4 or earlier, which are not supported by any
current compilers/toolchains, and are thus rather pointless to have. at
some point these compatibility return sequences may be removed from
all asm source files, and in that case it would make sense to remove
them here too and remove the ifdef.
Rich Felker [Thu, 14 May 2015 22:26:16 +0000 (18:26 -0400)]
make arm crt_arch.h compatible with thumb code generation
compilers targeting armv7 may be configured to produce thumb2 code
instead of arm code by default, and in the future we may wish to
support targets where only the thumb instruction set is available.
the changes made here avoid operating directly on the sp register,
which is not possible in thumb code, and address an issue with the way
the address of _DYNAMIC is computed.
previously, the relative address of _DYNAMIC was stored with an
additional offset of -8 versus the pc-relative add instruction, since
on arm the pc register evaluates to ".+8". in thumb code, it instead
evaluates to ".+4". both are two (normal-size) instructions beyond "."
in the current execution mode, so the numbered label 2 used in the
relative address expression is simply moved two instructions ahead to
be compatible with both instruction sets.
Rich Felker [Wed, 6 May 2015 22:37:19 +0000 (18:37 -0400)]
fix stack protector crashes on x32 & powerpc due to misplaced TLS canary
i386, x86_64, x32, and powerpc all use TLS for stack protector canary
values in the default stack protector ABI, but the location only
matched the ABI on i386 and x86_64. on x32, the expected location for
the canary contained the tid, thus producing spurious mismatches
(resulting in process termination) upon fork. on powerpc, the expected
location contained the stdio_locks list head, so returning from a
function after calling flockfile produced spurious mismatches. in both
cases, the random canary was not present, and a predictable value was
used instead, making the stack protector hardening much less effective
than it should be.
in the current fix, the thread structure has been expanded to have
canary fields at all three possible locations, and archs that use a
non-default location must define a macro in pthread_arch.h to choose
which location is used. for most archs (which lack TLS canary ABI) the
choice does not matter.
Rich Felker [Sat, 2 May 2015 15:57:20 +0000 (11:57 -0400)]
fix crash in x32 sigsetjmp
the 64-bit push reads not only the 32-bit return address but also the
first 32 signal mask bits. if any were nonzero, the return address
obtained will be invalid.
at some point storage of the return address should probably be moved
to follow the saved mask so that there's plenty of room and the same code
can be used on x32 and regular x86_64, but for now I want a fix that
does not risk breaking x86_64, and this simple re-zeroing works.
Rich Felker [Sat, 2 May 2015 01:22:27 +0000 (21:22 -0400)]
fix dangling pointers in x32 syscall timespec fixup code
the lifetime of compound literals is the block in which they appear.
the temporary struct __timespec_kernel objects created as compound
literals no longer existed at the time their addresses were passed to
the kernel.
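a self-contained illustration of the bug pattern and the fix; the
layout of __timespec_kernel here is illustrative:

    struct __timespec_kernel { long long tv_sec; long long tv_nsec; };

    void broken(long long s, long long ns, void (*use)(void *))
    {
        void *arg;
        {
            arg = &(struct __timespec_kernel){ .tv_sec = s, .tv_nsec = ns };
        }               /* the literal's lifetime ends with this block */
        use(arg);       /* dangling: the object no longer exists */
    }

    void fixed(long long s, long long ns, void (*use)(void *))
    {
        struct __timespec_kernel ts = { .tv_sec = s, .tv_nsec = ns };
        use(&ts);       /* ordinary object: lives to end of function */
    }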
Rich Felker [Fri, 1 May 2015 16:25:01 +0000 (12:25 -0400)]
fix mishandling of ENOMEM return case in internal getgrent_a function
due to an incorrect return statement in this error case, the
previously blocked cancellation state was not restored and no result
was stored. this could lead to invalid (read) accesses in the caller
resulting in crashes or nonsensical result data in the event of memory
exhaustion.
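a hedged sketch of the corrected error path (names are illustrative;
the real function is getgrent_a, which blocks cancellation around its
allocation):

    #include <errno.h>
    #include <stdlib.h>
    #include <pthread.h>
    #include <grp.h>

    static int getgr_sketch(struct group *gr, struct group **res)
    {
        int rv = 0, cs;
        pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, &cs);
        char *buf = malloc(1024);
        if (!buf) {
            rv = ENOMEM;    /* the bug was a bare "return ENOMEM" here, */
            *res = 0;       /* skipping both stores and the restore below */
            goto done;
        }
        /* ... parse into gr; on success set *res = gr ... */
        free(buf);
        *res = gr;
    done:
        pthread_setcancelstate(cs, 0);
        return rv;
    }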
while the sh port is still experimental and subject to ABI
instability, this is not actually an application/libc boundary ABI
change. it only affects third-party APIs where jmp_buf is used in a
shared structure at the ABI boundary, because nothing anywhere near
the end of the jmp_buf object (which includes the oversized sigset_t)
is accessed by libc.
both glibc and uclibc have 15-slot jmp_buf for sh. presumably the
smaller version was used in musl because the slots for fpu status
register and thread pointer register (gbr) were incorrect and must not
be restored by longjmp, but the size should have been preserved, as
it's generally treated as a libc-agnostic ABI property for the arch,
and having extra slots free in case we ever need them for something is
useful anyway.
previously it was using the same name as the default ABI with hard
float (floating point args and return value in registers).
the test __SH_FPU_ANY__ || __SH4__ matches what's used in the
configure script already, and seems correct under casual review
against gcc's config/sh.h, but may need tweaks. the logic for
predefined macros for sh, and what they all mean, is very complex.
eventually this should be documented in comments here.
configure already rejects "half-hard" configurations on sh where
double=float since these do not conform to Annex F and are not
suitable for musl, so these do not need to be considered here.
fix build regression in sh-nofpu subarch due to missing symbol
commit 646cb9a4a04e5ed78e2dd928bf9dc6e79202f609 switched sigsetjmp to
use the new hidden ___setjmp symbol for setjmp, but the nofpu variant
of setjmp.s was not updated to match.
fix misalignment of dtv in static-linked programs with odd-sized TLS
both static and dynamic linked versions of the __copy_tls function
have a hidden assumption that the alignment of the beginning or end of
the memory passed is suitable for storing an array of pointers for the
dtv. pthread_create satisfies this requirement except when
libc.tls_size is misaligned, which cannot happen with dynamic linking
due to the way update_tls_size computes the total size, but could happen
with static linking and odd-sized TLS.
commit dab441aea240f3b7c18a26d2ef51979ea36c301c, which made thread
pointer init mandatory for all programs, rendered this store obsolete
by removing the early-return path for static programs with no TLS.
make __init_tp function static when static linking
this slightly reduces the code size cost of TLS/thread-pointer for
static linking since __init_tp can be inlined into its only caller and
removed. this is analogous to the handling of __init_libc in
__libc_start_main, where the function only has external linkage when
it needs to be called from the dynamic linker.
fix regression in x86_64 math asm with old binutils
the implicit-operand form of fucomip is rejected by binutils 2.19 and
perhaps other versions still in use. writing both operands explicitly
fixes the issue. there is no change to the resulting output.
use CAS instead of swap since it's lighter for most archs, and keep
EBUSY in the lock value so that the old value obtained by CAS can be
used directly as the return value for pthread_spin_trylock.
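a sketch of the resulting trylock, with a stand-in for musl's internal
a_cas (which returns the value previously stored at *p):

    #include <errno.h>

    static int a_cas(volatile int *p, int t, int s)   /* stand-in */
    {
        __atomic_compare_exchange_n(p, &t, s, 0,
            __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
        return t;
    }

    int pthread_spin_trylock_sketch(volatile int *s)
    {
        /* the old value is 0 on success and EBUSY when already locked,
         * so it doubles directly as the return value */
        return a_cas(s, 0, EBUSY);
    }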
in visibility preinclude, remove overrides for stdin/stdout/stderr
the motivation for this change is that the extra declaration (with or
without visibility) using "struct _IO_FILE" instead of "FILE" seems to
trigger a bug in gcc 3.x where it considers the types mismatched.
however, this change also results in slightly better code and it is
valid because (1) these three objects are constant, and (2) applying
the & operator to any of them is invalid C, since they are not even
specified to be objects. thus it does not matter if the application
and libc see different addresses for them, as long as the (initial,
unchanging) value is seen the same by both.
remove cruft for libc struct accessor function and broken visibility
these were hacks to work around toolchains that could not properly
optimize PIC accesses based on visibility and would generate GOT
lookups even for hidden data, which broke the old dynamic linker.
since commit f3ddd173806fd5c60b3f034528ca24542aecc5b9 it no longer
matters; the dynamic linker does not assume accessibility of this data
until stage 3.
make configure check for visibility preinclude compatible with pcc
pcc does not search for -include relative to the working directory
unless -I. is used. rather than adding -I., which could be problematic
if there's extra junk in the top-level directory, switch back to the
old method (reverting commit 60ed988fd6c67b489d7cc186ecaa9db4e5c25b8c)
of using -include vis.h and relying on -I./src/internal being present
on the command line (which the Makefile guarantees). to fix the
breakage that was present in trycppif checks with the old method,
$CFLAGS_AUTO is removed from the command line passed to trycppif; this
is valid since $CFLAGS_AUTO should not contain options that alter
compiler semantics or ABI, only optimizations, warnings, etc.
fix duplocale clobbering of new locale struct with memcpy of old
when the non-stub duplocale code was added as part of the locale
framework in commit 0bc03091bb674ebb9fa6fe69e4aec1da3ac484f2, the old
code to memcpy the old locale object to the new one was left behind.
the conditional for the memcpy no longer makes sense, because the
conditions are now always-true when it's reached, and the memcpy is
wrong because it clobbers the new->messages_name pointer setup just
above.
since the messages_name and ctype_utf8 members have already been
copied, all that remains is the cat[] array. these pointers are
volatile, so using memcpy to copy them is formally wrong; use a for
loop instead.
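a sketch of the corrected copy, with an illustrative struct mirroring
the members named above:

    struct locale_sketch {
        const void *volatile cat[6];  /* per-category data; volatile pointers */
        int ctype_utf8;
        char *messages_name;
    };

    static void copy_cats(struct locale_sketch *new, const struct locale_sketch *old)
    {
        /* messages_name and ctype_utf8 were copied before this point;
         * copy cat[] slot by slot so each access honors volatile */
        for (int i = 0; i < 6; i++)
            new->cat[i] = old->cat[i];
    }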
Andre McCurdy [Tue, 21 Apr 2015 17:34:05 +0000 (10:34 -0700)]
configure: check for -march and -mtune passed via CC
Some build environments pass -march and -mtune as part of CC, therefore
update configure to check both CC and CFLAGS before making the decision
to fall back to generic -march and -mtune options for x86.
Signed-off-by: Andre McCurdy <armccurdy@gmail.com>
the first switch already returns in the F_SETLKW code path so it need
not be handled in the second switch. moreover the code in the second
switch is wrong for the F_SETLKW command: it's not cancellable.
fix mmap leak in sem_open failure path for link call
the leak was found by static analysis (reported by Alexander Monakov),
not tested/observed, but seems to have occurred both when failing due
to O_EXCL, and in a race condition with O_CREAT but not O_EXCL where a
semaphore by the same name was created concurrently.
remove always-true conditional in dynamic linker TLSDESC processing
the allocating path which can fail is for dynamic TLS, which can only
occur at runtime, and the check for runtime was already made in the
outer conditional.
commit 637dd2d383cc1f63bf02a732f03786857b22c7bd introduced the checks
for RTLD_DEFAULT and RTLD_NEXT here, claiming they fixed a regression,
but the above conditional block clearly already covered these cases,
and removing the checks produces no difference in the generated code.
fix breakage in x32 dynamic linker due to mismatching register size
the jmp instruction requires a 64-bit register, so cast the desired PC
address up to uint64_t, going through uintptr_t to ensure that it's
zero-extended rather than possibly sign-extended.
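the double cast in isolation, as a hedged illustration:

    #include <stdint.h>

    uint64_t jump_target(void *pc)
    {
        /* on x32, pointers are 32-bit; going through uintptr_t makes
         * the widening to 64 bits a zero-extension, whereas a direct
         * integer conversion could sign-extend */
        return (uint64_t)(uintptr_t)pc;
    }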
fix regression in configure script with new visibility option
commit de2b67f8d41e08caa56bf6540277f6561edb647f introduced a
regression by adding a -include option to CFLAGS_AUTO which did not
work without additional -I options. this broke subsequent trycppif
tests and caused x86_64 to be misdetected as x32, among other issues.
simply using the full relative pathname to vis.h rather than -I is the
cleanest way to fix the problem.
this is implemented via the build system and does not affect source
files. the idea is to use protected or hidden visibility to prevent
the compiler from pessimizing function calls within a shared (or
position-independent static) libc in the form of overhead setting up
for a call through the PLT. the ld-time symbol binding via the
-Bsymbolic-functions option already optimized out the PLT itself, but
not the code in the caller needed to support a call through the PLT.
on some archs this overhead can be substantial; on others it's
trivial.
these are perfectly fine with ld-time symbol binding, but otherwise
result in textrels. they cannot be replaced with @PLT jump targets
because the PLT thunks require a GOT register to be setup, so use a
hidden alias instead.
these are perfectly fine with ld-time symbol binding, but if the calls
go through a PLT thunk, they are invalid because the caller does not
setup a GOT register. use a hidden alias to bypass the issue.
remove the last of possible-textrels from i386 asm
none of these are actual textrels because of ld-time binding performed
by -Bsymbolic-functions, but I'm changing them with the goal of making
ld-time binding purely an optimization rather than relying on it for
semantic purposes.
in the case of memmove's call to memcpy, making it explicit that the
memmove asm is assuming the forward-copying behavior of the memcpy asm
is desirable anyway; in case memcpy is ever changed, the semantic
mismatch would be apparent while editing memcpy.s.
make dlerror state and message thread-local and dynamically-allocated
this fixes truncation of error messages containing long pathnames or
symbol names.
the dlerror state was previously required by POSIX to be global. the
resolution of bug 97 relaxed the requirements to allow thread-safe
implementations of dlerror with thread-local state and message buffer.
Szabolcs Nagy [Sat, 11 Apr 2015 00:35:07 +0000 (00:35 +0000)]
math: fix pow(+-0,-inf) not to raise divbyzero flag
this reverts the commit f29fea00b5bc72d4b8abccba2bb1e312684d1fce
which was based on a bug in C99 and POSIX and did not match IEEE-754
http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1515.pdf
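a quick check of the IEEE-754 behavior: pow(+-0, -inf) is +inf with no
exception; divbyzero is reserved for negative finite exponents such as
pow(+-0, -1):

    #include <fenv.h>
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        feclearexcept(FE_ALL_EXCEPT);
        double r = pow(0.0, -INFINITY);
        printf("%g divbyzero=%d\n", r, !!fetestexcept(FE_DIVBYZERO));
        /* expected on a conforming implementation: inf divbyzero=0 */
        return 0;
    }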
apply hidden visibility to tlsdesc accessor functions
these functions are never called directly; only their addresses are
used, so PLT indirections should never happen unless a broken
application tries to redefine them, but it's still best to make them
hidden.
the casts of the argument to unsigned int suppressed diagnosis of
errors like passing a pointer instead of a character. putting the
actual function call in an unreachable branch restores any diagnostics
that would be present if the macros didn't exist and functions were
used.
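the pattern looks like this (matching musl's ctype.h style; shown for
isdigit):

    #include <ctype.h>
    #undef isdigit
    #define isdigit(c) (0 ? isdigit(c) : ((unsigned)(c) - '0') < 10)

passing a pointer now draws the compiler's usual diagnostic from the
never-executed function call, while the arithmetic branch still does
the real work.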
the braf instruction's destination register holds an offset from the
address of the braf instruction plus 4 (or equivalently, the address
of the next instruction after the delay slot). the code for dlsym was
incorrectly computing the offset to pass using the address of the
delay slot itself. in other places, a label was placed after the delay
slot, but I find this confusing. putting the label on the branch
instruction itself, and manually adding 4, makes it more clear which
branch the offset in the constant pool goes with.
redesign sigsetjmp so that signal mask is restored after longjmp
the conventional way to implement sigsetjmp is to save the signal mask
then tail-call to setjmp; siglongjmp then restores the signal mask and
calls longjmp. the problem with this approach is that a signal already
pending, or arriving between unmasking of signals and restoration of
the saved stack pointer, will have its signal handler run on the stack
that was active before siglongjmp was called. this can lead to
unbounded stack usage when siglongjmp is used to leave a signal
handler.
in the new design, sigsetjmp saves its own return address inside the
extended part of the sigjmp_buf (outside the __jmp_buf part used by
setjmp) then calls setjmp to save a jmp_buf inside its own execution.
it then tail-calls to __sigsetjmp_tail, which uses the return value of
setjmp to determine whether to save the current signal mask or restore
a previously-saved mask.
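a hedged sketch of the tail function using the public sigprocmask
(musl's real __sigsetjmp_tail makes the syscall directly and takes the
sigjmp_buf itself):

    #include <signal.h>

    int sigsetjmp_tail_sketch(sigset_t *saved, int ret)
    {
        if (ret) sigprocmask(SIG_SETMASK, saved, 0); /* longjmp return: restore */
        else     sigprocmask(SIG_SETMASK, 0, saved); /* first return: save */
        return ret;
    }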
as an added bonus, this design makes it so that siglongjmp and longjmp
are identical. this is useful because the __longjmp_chk function we
need to add for ABI-compatibility assumes siglongjmp and longjmp are
the same, but for different reasons -- it was designed assuming either
can access a flag just past the __jmp_buf indicating whether the
signal mask was saved, and act on that flag. however, early versions
of musl did not have space past the __jmp_buf for the non-sigjmp_buf
version of jmp_buf, so our setjmp cannot store such a flag without
risking clobbering memory on (very) old binaries.
use hidden __tls_get_new for tls/tlsdesc lookup fallback cases
previously, the dynamic tlsdesc lookup functions and the i386
special-ABI ___tls_get_addr (3 underscores) function called
__tls_get_addr when the slot they wanted was not already setup;
__tls_get_addr would then in turn also see that it's not setup and
call __tls_get_new.
calling __tls_get_new directly is both more efficient and avoids the
issue of calling a non-hidden (public API/ABI) function from asm.
for the special i386 function, a weak reference to __tls_get_new is
used since this function is not defined when static linking (the code
path that needs it is unreachable in static-linked programs).
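a sketch of the dispatch, with illustrative stand-ins for the thread
structure and dtv (the real fast path lives in __tls_get_addr):

    #include <stddef.h>

    struct tcb_sketch { void **dtv; };      /* dtv[0] holds the generation */
    struct tcb_sketch *self_sketch(void);
    void *__tls_get_new_sketch(size_t *);

    void *tls_get_addr_sketch(size_t *v)    /* v[0]=module, v[1]=offset */
    {
        struct tcb_sketch *self = self_sketch();
        if (v[0] <= (size_t)self->dtv[0])   /* slot already installed */
            return (char *)self->dtv[v[0]] + v[1];
        return __tls_get_new_sketch(v);     /* slow path: allocate */
    }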
cleanup use of visibility attributes in pthread_cancel.c
applying the attribute to a weak_alias macro was a hack. instead use a
separate declaration to apply the visibility, and consolidate
declarations together to avoid having visibility mess all over the
file.
consistently use hidden visibility for cancellable syscall internals
in a few places, non-hidden symbols were referenced from asm in ways
that assumed ld-time binding. while there is no semantic reason these
symbols need to be hidden, fixing the references without making them
hidden was going to be ugly, and hidden reduces some bloat anyway.
in the asm files, .global/.hidden directives have been moved to the
top to unclutter the actual code.
fix inconsistent visibility for internal __tls_get_new function
at the point of call it was declared hidden, but the definition was
not hidden. for some toolchains this inconsistency produced textrels
without ld-time binding.
remove initializers for decoded aux/dyn arrays in dynamic linker
the zero initialization is redundant since decode_vec does its own
clearing, and it increases the risk that buggy compilers will generate
calls to memset. as long as symbols are bound at ld time, such a call
will not break anything, but it may be desirable to turn off ld-time
binding in the future.
allow libc itself to be built with stack protector enabled
this was already essentially possible as a result of the previous
commits changing the dynamic linker/thread pointer bootstrap process.
this commit mainly adds build system infrastructure:
configure no longer attempts to disable stack protector. instead it
simply determines how, so that the makefile can disable stack protector for
a few translation units used during early startup.
stack protector is also disabled for memcpy and memset since compilers
(incorrectly) generate calls to them on some archs to implement
struct initialization and assignment, and such calls may creep into
early initialization.
no explicit attempt to enable stack protector is made by configure at
this time; any stack protector option supported by the compiler can be
passed to configure in CFLAGS, and if the compiler uses stack
protector by default, this default is respected.
remove remnants of support for running in no-thread-pointer mode
since 1.1.0, musl has nominally required a thread pointer to be setup.
most of the remaining code that was checking for its availability was
doing so for the sake of being usable by the dynamic linker. as of
commit 71f099cb7db821c51d8f39dfac622c61e54d794c, this is no longer
necessary; the thread pointer is now valid before any libc code
(outside of dynamic linker bootstrap functions) runs.
this commit essentially concludes "phase 3" of the "transition path
for removing lazy init of thread pointer" project that began during
the 1.1.0 release cycle.
move thread pointer setup to beginning of dynamic linker stage 3
this allows the dynamic linker itself to run with a valid thread
pointer, which is a prerequisite for stack protector on archs where
the ssp canary is stored in TLS. it will also allow us to remove some
remaining runtime checks for whether the thread pointer is valid.
as long as the application and its libraries do not require additional
size or alignment, this early thread pointer will be kept and reused
at runtime. otherwise, a new static TLS block is allocated after
library loading has finished and the thread pointer is switched over.
previously, the layout of the static TLS block was perturbed by the
size of the dtv; dtv size increasing from 0 to 1 perturbed both TLS
arch types, and the TLS-above-TP type's layout was perturbed by the
specific number of dtv slots (libraries with TLS). this behavior made
it virtually impossible to setup a tentative thread pointer address
before loading libraries and keep it unchanged as long as the
libraries' TLS size/alignment requirements fit.
the new code fixes the location of the dtv and pthread structure at
opposite ends of the static TLS block so that they will not move
unless size or alignment changes.
allow i386 __set_thread_area to be called more than once
previously a new GDT slot was requested, even if one had already been
obtained by a previous call. instead extract the old slot number from
GS and reuse it if it was already set. the formula (GS-3)/8 for the
slot number automatically yields -1 (request for new slot) if GS is
zero (unset).
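the formula in isolation: a user-mode GDT selector is (index << 3) | 3
(RPL 3, TI 0), so the index is recovered as shown below; right-shifting
a negative value is implementation-defined in C, but the asm uses an
arithmetic shift:

    static int slot_from_gs(int sel)
    {
        /* sel == 3 -> index 0; sel == 11 -> index 1; ...
         * sel == 0 (unset) -> (0-3)>>3 == -1, the new-slot request */
        return (sel - 3) >> 3;
    }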
this overhaul further reduces the amount of arch-specific code needed
by the dynamic linker and removes a number of assumptions, including:
- that symbolic function references inside libc are bound at link time
via the linker option -Bsymbolic-functions.
- that libc functions used by the dynamic linker do not require
access to data symbols.
- that static/internal function calls and data accesses can be made
without performing any relocations, or that arch-specific startup
code handled any such relocations needed.
removing these assumptions paves the way for allowing libc.so itself
to be built with stack protector (among other things), and is achieved
by a three-stage bootstrap process:
1. relative relocations are processed with a flat function.
2. symbolic relocations are processed with no external calls/data.
3. main program and dependency libs are processed with a
fully-functional libc/ldso.
reduction in arch-specific code is achieved through the following:
- crt_arch.h, used for generating crt1.o, now provides the entry point
for the dynamic linker too.
- asm is no longer responsible for skipping the beginning of argv[]
when ldso is invoked as a command.
- the functionality previously provided by __reloc_self for heavily
GOT-dependent RISC archs is now the arch-agnostic stage-1.
- arch-specific relocation type codes are mapped directly as macros
rather than via an inline translation function/switch statement.
this global lock allows certain unlock-type primitives to exclude
mmap/munmap operations which could change the identity of virtual
addresses while references to them still exist.
the original design mistakenly assumed mmap/munmap would conversely
need to exclude the same operations which exclude mmap/munmap, so the
vmlock was implemented as a sort of 'symmetric recursive rwlock'. this
turned out to be unnecessary.
commit 25d12fc0fc51f1fae0f85b4649a6463eb805aa8f already shortened the
interval during which mmap/munmap held their side of the lock, but
left the inappropriate lock design and some inefficiency.
the new design uses a separate function, __vm_wait, which does not
hold any lock itself and only waits for lock users which were already
present when it was called to release the lock. this is sufficient
because of the way operations that need to be excluded are sequenced:
the "unlock-type" operations using the vmlock need only block
mmap/munmap operations that are precipitated by (and thus sequenced
after) the atomic-unlock they perform while holding the vmlock.
this allows for a spectacular lack of synchronization in the __vm_wait
function itself.
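a sketch of the three operations; a_inc, a_fetch_add, __wait, and
__wake are musl internals, declared here only for illustration:

    void a_inc(volatile int *);
    int a_fetch_add(volatile int *, int);
    void __wake(volatile void *, int, int);
    void __wait(volatile int *, volatile int *, int, int);

    static volatile int vmlock[2];  /* [0]: user count, [1]: waiter count */

    void vm_lock_sketch(void)  { a_inc(vmlock); }

    void vm_unlock_sketch(void)
    {
        if (a_fetch_add(vmlock, -1) == 1 && vmlock[1])
            __wake(vmlock, -1, 1);
    }

    void vm_wait_sketch(void)  /* holds nothing; waits only for users
                                  already present when it was called */
    {
        int tmp;
        while ((tmp = vmlock[0]))
            __wait(vmlock, vmlock + 1, tmp, 1);
    }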
optimize out setting up robust list with kernel when not needed
as a result of commit 12e1e324683a1d381b7f15dd36c99b37dd44d940, kernel
processing of the robust list is only needed for process-shared
mutexes. previously the first attempt to lock any owner-tracked mutex
resulted in robust list initialization and a set_robust_list syscall.
this is no longer necessary, and since the kernel's record of the
robust list must now be cleared at thread exit time for detached
threads, optimizing it out is more worthwhile than before too.
process robust list in pthread_exit to fix detached thread use-after-unmap
the robust list head lies in the thread structure, which is unmapped
before exit for detached threads. this leaves the kernel unable to
process the exiting thread's robust list, and with a dangling pointer
which may happen to point to new unrelated data at the time the kernel
processes it.
userspace processing of the robust list was already needed for
non-pshared robust mutexes in order to perform private futex wakes
rather than the shared ones the kernel would do, but it was
conditional on linking pthread_mutexattr_setrobust and did not bother
processing the pshared mutexes in the list, which requires additional
logic for the robust list pending slot in case pthread_exit is
interrupted by asynchronous process termination.
the new robust list processing code is linked unconditionally (inlined
in pthread_exit), handles both private and shared mutexes, and also
removes the kernel's reference to the robust list before unmapping and
exit if the exiting thread is detached.
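a simplified, self-contained sketch of the exit-time walk; the list
layout and owner-died marking are illustrative, and a real
implementation would futex-wake a waiter after each unlock:

    struct rmutex { volatile int lock; struct rmutex *next; };

    struct robust_sketch {
        struct rmutex *head;
        struct rmutex *volatile pending;  /* survives async process death */
    };

    static void process_robust_sketch(struct robust_sketch *r, int tid)
    {
        struct rmutex *m;
        while ((m = r->head)) {
            r->pending = m;      /* record the node in the pending slot */
            r->head = m->next;   /* unlink before unlocking */
            m->lock = 0x40000000 | tid;  /* FUTEX_OWNER_DIED-style mark */
            r->pending = 0;
            /* wake one waiter here, privately for non-pshared mutexes */
        }
    }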
fix possible clobbering of syscall return values on mips
depending on the compiler's interpretation of __asm__ register names
for register class objects, it may be possible for the return value in
r2 to be clobbered by the function call to __stat_fix. I have not
observed any such breakage in normal builds and suspect it only
happens with -O0 or other unusual build options, but since there's an
ambiguity as to the semantics of this feature, it's best to use an
explicit temporary to avoid the issue.
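a mips-flavored sketch of the fix (compiles only for mips targets; the
empty asm body is a placeholder for the actual syscall instruction):

    void __stat_fix_sketch(void *);

    long syscall_ret_sketch(void *buf)
    {
        register long r2 __asm__("$2");
        __asm__ __volatile__ ("" : "=r"(r2)); /* stands in for the syscall */
        long ret = r2;             /* explicit temporary, taken immediately */
        __stat_fix_sketch(buf);    /* a call that might clobber $2 */
        return ret;
    }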
when dlopen fails, all partially-loaded libraries need to be unmapped
and freed. any of these libraries using an rpath with $ORIGIN
expansion may have an allocated string for the expanded rpath;
previously, this string was not freed when freeing the library data
structures.
halt dynamic linker library search on errors resolving $ORIGIN in rpath
this change hardens the dynamic linker against the possibility of
loading the wrong library due to inability to expand $ORIGIN in rpath.
hard failures such as excessively long paths or absence of /proc (when
resolving /proc/self/exe for the main executable's origin) do not stop
the path search, but memory allocation failures and any other
potentially transient failures do.
to implement this change, the meaning of the return value of
the fixup_rpath function is changed. returning zero no longer indicates
that the dso's rpath string pointer is non-null; instead, the caller
needs to check. a return value of -1 indicates a failure that should
stop further path search.
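a sketch of the new contract at a call site, with illustrative names:

    struct dso_sketch { char *rpath; };

    /* returns 0 on success or nothing-to-do, -1 on a failure (such as
     * ENOMEM during $ORIGIN expansion) that must halt the search */
    int fixup_rpath_sketch(struct dso_sketch *p);

    static int search_sketch(struct dso_sketch *p, const char *name)
    {
        if (fixup_rpath_sketch(p) < 0)
            return -1;       /* stop: do not fall back to default paths */
        if (p->rpath) {
            /* ... try each rpath entry ... */
        }
        /* ... continue with LD_LIBRARY_PATH and default dirs ... */
        return 0;
    }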