[ASan] Print exact source location of global variables in error reports.
See https://code.google.com/p/address-sanitizer/issues/detail?id=299 for the
original feature request.
Introduce llvm.asan.globals metadata, which Clang (or any other frontend)
may use to pass extra information about global variables to the ASan
instrumentation pass in the backend. This metadata replaces
llvm.asan.dynamically_initialized_globals, which was used to detect init-order
bugs. llvm.asan.globals contains the following data for each global:
1) source location (file/line/column info);
2) whether it is dynamically initialized;
3) whether it is blacklisted (shouldn't be instrumented).
Source location data is then emitted into the binary and can be picked up
by the ASan runtime when it needs to print an error report involving a global.
For example:
0x... is located 4 bytes to the right of global variable 'C::array' defined in '/path/to/file:17:8' (0x...) of size 40
These source locations are printed even if the binary doesn't have any
debug info.
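For illustration, a minimal reproducer sketch (hypothetical code, written to
match the shape of the sample report above); building it with
-fsanitize=address and running it produces such a report:
  struct C {
    static int array[10];  // 10 * 4 bytes = 40 bytes, as in the report
  };
  int C::array[10];
  int main(int argc, char **) {
    // Reads past the end of the global; the report cites the
    // file:line:column where C::array is defined, even without -g.
    return C::array[argc + 10];
  }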
This is an ABI-breaking change. ASan initialization is renamed to
__asan_init_v4(). Pre-built libraries compiled with older Clang will not work
with the fresh runtime.
Chad Rosier [Wed, 2 Jul 2014 16:46:08 +0000 (16:46 +0000)]
Revert "Revert "MachineScheduler: better book-keeping for asserts.""
This reverts commit r212109, which reverted r212088.
However, disable the assert as it's not necessary for correctness. There are
several corner cases that the assert needed to handle better for in-order
scheduling, but none of them are incorrect scheduler behavior. The assert is
mainly there to collect good unit tests like this and ensure that the
target-independent scheduler is working as expected with the various machine
models.
AVX-512: dec/inc instructions are slow on KNL
Following Alexey Volkov's earlier change, I'm adding the same property for
KNL, which prefers ADD/SUB over INC/DEC.
Added a test.
[cleanup] Hoist an if-else chain on ISD opcodes (really designed for
switches) into a switch, and sink them into a dispatch function that can
return the result rather than awkward variable setting with breaks.
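A schematic sketch of the shape of this cleanup (hypothetical names, not the
actual code):
  int lowerA() { return 1; }        // stand-in opcode handlers
  int lowerB() { return 2; }
  int lowerGeneric() { return 0; }
  int dispatch(unsigned Opcode) {
    switch (Opcode) {               // one switch instead of an if-else chain
    case 0: return lowerA();        // result returned directly; no local
    case 1: return lowerB();        // variable and break per case
    default: return lowerGeneric();
    }
  }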
aarch64: support target-specific .req assembler directive
Based on the support for .req on ARM. The AArch64 variant has to keep track
of whether the alias register was a vector register (v0-31) or a general
purpose or VFP/Advanced SIMD ([bhsdq]0-31) register.
[cleanup] Hoist the promotion dispatch logic into the promote function
so that we can use return to express it more cleanly and avoid so many
nested switch statements.
Remove the recommendation against using std::function
Clang-cl supports MSVC-style RTTI now, and we can even compile
typeid(...) with /GR-. Just don't instantiate std::function with a
polymorphic type, or bad things will happen.
[DAG] Pass the argument list to the CallLoweringInfo via move semantics. NFCI.
The argument list vector is never used after it has been passed to the
CallLoweringInfo, and moving it into the CallLoweringInfo is cleaner and
pretty much as cheap as keeping a pointer to it.
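A schematic sketch of the idiom (simplified stand-in types, not the actual
SelectionDAG API):
  #include <utility>
  #include <vector>
  struct Arg { int Val; };
  struct CallLoweringInfo {
    std::vector<Arg> Args;
    CallLoweringInfo &setArgs(std::vector<Arg> ArgList) {
      Args = std::move(ArgList);  // take ownership of the storage; no copy
      return *this;
    }
  };
  void lowerCall() {
    std::vector<Arg> Args = {{1}, {2}};
    CallLoweringInfo CLI;
    CLI.setArgs(std::move(Args));  // Args is never used again afterwards
  }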
Tim Northover [Tue, 1 Jul 2014 21:44:59 +0000 (21:44 +0000)]
X86: delegate expanding atomic libcalls to generic code.
On targets without cmpxchg16b or cmpxchg8b, the borderline atomic
operations were slipping through the gaps.
X86AtomicExpand.cpp was delegating to ISelLowering. Generic
ISelLowering was delegating to X86ISelLowering and X86ISelLowering was
asserting. The correct behaviour is to expand to a libcall, preferably
in generic ISelLowering.
This can be achieved by X86ISelLowering deciding it doesn't want the
faff after all.
This patch reduces the stack memory consumption of the InstCombine
function isOnlyCopiedFromConstantGlobal(), which under certain conditions
could overflow the stack because of excessive recursion.
The offending kind of input crashed LLVM when instcombine was applied on a
desktop machine; on embedded devices the crash could happen with a much lower
recursion limit. Some instructions (getelementptr and bitcasts) make the
function recursively call itself on their uses, which is what makes such
inputs consume so much stack (the traversal becomes a recursive
depth-first tree visit with a very large depth).
The patch changes the algorithm to be iterative instead of recursive while
remaining semantically equivalent, and changes the visiting order from
depth-first to breadth-first (all the instructions of the current level are
visited before those of the next one).
Now, if a lot of memory is required, a heap allocation is done instead of
a stack allocation, avoiding the possible crash.
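A schematic sketch of the transformation (simplified stand-in types, not the
actual InstCombine code):
  #include <deque>
  #include <vector>
  struct Value { std::vector<Value *> Users; };
  bool isAcceptableUser(Value *) { return true; }  // stand-in for the checks
  bool visitAllUsers(Value *Root) {
    // Explicit FIFO worklist on the heap: a breadth-first visit with no
    // risk of overflowing the call stack, however deep the use chains are.
    std::deque<Value *> Worklist(Root->Users.begin(), Root->Users.end());
    while (!Worklist.empty()) {
      Value *V = Worklist.front();
      Worklist.pop_front();
      if (!isAcceptableUser(V))
        return false;
      // Pointer-forwarding instructions (GEPs, bitcasts) enqueue their own
      // users instead of recursing into them.
      Worklist.insert(Worklist.end(), V->Users.begin(), V->Users.end());
    }
    return true;
  }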
Patch by Marcello Maggioni! We don't generally commit large stress tests
that look for out-of-memory conditions, so I didn't request that one be
added to the patch.
David Blaikie [Tue, 1 Jul 2014 21:13:37 +0000 (21:13 +0000)]
DebugInfo: Keep track of subprograms whose arguments have been promoted.
Matching behavior with DeadArgumentElimination (and leveraging some
now-common infrastructure), keep track of the function from debug info
metadata if arguments are promoted.
This may produce interesting debug info, since the arguments may be
missing or of different types... but at least backtraces, inlining, etc.,
will be correct.
Tim Northover [Tue, 1 Jul 2014 18:53:31 +0000 (18:53 +0000)]
X86: expand atomics in IR instead of as MachineInstrs.
The logic for expanding atomics that aren't natively supported in
terms of cmpxchg loops is much simpler to express at the IR level. It
also allows the normal optimisations and CodeGen improvements to help
out with atomics, instead of using a limited set of possible
instructions.
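As an illustration, a source-level sketch of what expanding an atomic RMW in
terms of a cmpxchg loop looks like (illustrative only, not the actual
lowering code):
  #include <atomic>
  long fetchAddViaCmpxchg(std::atomic<long> &A, long Delta) {
    long Old = A.load(std::memory_order_relaxed);
    // compare_exchange_weak refreshes Old on failure, so the loop retries
    // until no other thread raced between the load and the exchange.
    while (!A.compare_exchange_weak(Old, Old + Delta,
                                    std::memory_order_seq_cst,
                                    std::memory_order_relaxed)) {
    }
    return Old;
  }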
Kevin Enderby [Tue, 1 Jul 2014 17:19:10 +0000 (17:19 +0000)]
Add -arch flag support to llvm-size, like what was done for llvm-nm,
to select a slice out of a Mach-O universal file. This also includes
support for '-arch all', selecting the host architecture by default from
a universal file, and checking that when -arch is used with a standard
Mach-O file it matches that file's architecture.
This is a small targeted fix for pr20119. The code needs quite a bit of
refactoring and I added some FIXMEs about it, but I want to get the testcase
passing first.
[PeepholeOptimizer] Advanced rewriting of copies to avoid cross register banks
copies.
This patch extends the peephole optimization introduced in r190713 to produce
register-coalescer friendly copies when possible.
This extension taught the existing cross-bank copy optimization how to deal
with the instructions that generate cross-bank copies, i.e., insert_subreg,
extract_subreg, reg_sequence, and subreg_to_reg.
E.g.
b = insert_subreg e, A, sub0 <-- cross-bank copy
...
C = copy b.sub0 <-- cross-bank copy
Would produce the following code:
b = insert_subreg e, A, sub0 <-- cross-bank copy
...
C = copy A <-- same-bank copy
This patch also introduces a new helper class for that: ValueTracker.
This class implements the logic to look through copy-related instructions
and get the related source.
For now, the advanced rewriting is disabled by default, as we lack the
semantics of target-specific instructions needed to catch the motivating
examples.
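A schematic sketch of the ValueTracker idea (hypothetical stand-in types;
the real class also has to track subregister indices):
  struct Instr {
    Instr *CopySource = nullptr;  // defining instr of the copied value, if any
    bool isCopyLike() const { return CopySource != nullptr; }
  };
  // Walk through chains of copy-related instructions (copy, insert_subreg,
  // extract_subreg, reg_sequence, subreg_to_reg) to find the original source.
  Instr *findRelatedSource(Instr *Def) {
    while (Def && Def->isCopyLike())
      Def = Def->CopySource;
    return Def;
  }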
[RegAllocGreedy] Provide a flag to disable the local reassignment heuristic.
By default, no functionality change.
Before evicting a local variable, this heuristic tries to find another (set of)
local(s) that can be reassigned to a free color.
In some extreme cases (large basic blocks with tons of local variables), the
compilation time is dominated by the local interference checks that this
heuristic must perform, with no code gen gain.
E.g., the motivating example takes 4 minutes to compile with this heuristic, 12
seconds without.
Improving the situation will likely require drastic changes to the
register allocator and/or the interference check framework.
For now, provide this flag to better understand the impact of that heuristic.
Alp Toker [Tue, 1 Jul 2014 03:18:49 +0000 (03:18 +0000)]
ExecutionEngine::create(): fix interpreter fallback when JIT is unavailable
ForceInterpreter=false shouldn't disable the interpreter completely because it
can still be necessary to interpret if the target doesn't support JIT.
No obvious way to test this in LLVM, but this matches what
LLVMCreateExecutionEngineForModule() does and fixes the clang-interpreter
example in the clang source tree, which uses the ExecutionEngine.
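The intended selection logic, as a schematic sketch (simplified; not the
actual ExecutionEngine::create code):
  enum class EngineKind { JIT, Interpreter };
  EngineKind chooseEngine(bool ForceInterpreter, bool TargetHasJIT) {
    if (ForceInterpreter)
      return EngineKind::Interpreter;
    if (TargetHasJIT)
      return EngineKind::JIT;
    // ForceInterpreter=false means "prefer the JIT", not "never interpret":
    // fall back to the interpreter instead of failing.
    return EngineKind::Interpreter;
  }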
David Blaikie [Tue, 1 Jul 2014 03:11:59 +0000 (03:11 +0000)]
DebugInfo: Ensure that all debug location scope chains from instructions within a function lead to the function itself.
Originally committed in r211723, reverted in r211724 due to failure
cases found and fixed (ArgumentPromotion: r211872, Inlining: r212065),
and I now believe the invariant actually holds for some reasonable
amount of code (but I'll keep an eye on the buildbots and see what
happens... ).
Original commit message:
PR20038: DebugInfo: Inlined call sites where the caller has debug info
but the call itself has no debug location.
This situation does bad things when inlined, so I've fixed Clang not to
produce inlinable call sites without locations when the caller has debug
info (in the one case where I could find that this occurred). This
updates the PR20038 test case to be what clang now produces, and readds
the assertion that had to be removed due to this bug.
I've also beefed up the debug info verifier to help diagnose these
issues in the future, and I hope to add checks to the inliner to just
assert-fail if it encounters this situation. If, in the future, we
decide we have to cope with this situation, the right thing to do is
probably to just remove all the DebugLocs from the inlined instructions.
Inlining functions with block addresses can cause many problems and requires
rich infrastructure to support, including escape analysis. At this point the
safest approach to address these problems is to block inlining from
happening.
Background:
There have been reports of Ruby segmentation faults triggered by inlining
functions with block addresses, like the sketch below.
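A hypothetical sketch of the kind of code involved (GNU label-address
extension; the actual reports involved Ruby's vm_exec_core):
  static void *global_label_addr;
  int vm_exec_core_like(int init) {
    if (init) {
      global_label_addr = &&target;  // publish a local label's address
      return 0;
    }
  target:
    return 1;
  }
  void caller(void) {
    vm_exec_core_like(1);
    goto *global_label_addr;  // jumping across function boundaries: illegal
  }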
This kind of scenario can also happen when LLVM picks a subset of blocks for
inlining, which is the case with the actual code in the Ruby environment.
LLVM suppresses inlining for such functions when there is an indirect branch.
The attached patch does so even when there is no indirect branch. Note that
user code like above would not make much sense: using the global for jumping
across function boundaries would be illegal.
Why was there a segfault:
In the snippet above, the block with the label is recognized as dead, so it
is eliminated. Instead of a block address, the cloner stores a constant (sic!)
into the global, resulting in the segfault (when the global is used in a goto).
Why had it worked in the past then:
By luck. In older versions vm_exec_core was also inlined, but the label
address used was the block label address in vm_exec_core. So the global jump
ended up in the original function rather than in the caller, which
accidentally happened to work.
Test case ./tools/clang/test/CodeGen/indirect-goto.c will fail as a result
of this commit.
In r212073 I missed a call to `use_begin()` that assumed the wrong
semantics. It's not clear to me at all what this code does without the
fix, so I'm not sure how to write a testcase.
AArch64AddressTypePromotion was doing nothing because it was using the
old semantics of `Use` and `uses()`, when it really wanted to get at the
`users()`.
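The distinction, as a schematic sketch (simplified stand-in types, not the
actual llvm::Value API):
  #include <vector>
  struct User {};
  struct Use {
    User *Usr;
    User *getUser() const { return Usr; }
  };
  struct Value {
    std::vector<Use> UseList;
    // uses() iterates over the Use edges themselves...
    const std::vector<Use> &uses() const { return UseList; }
    // ...while users() yields the using instructions behind those edges.
    std::vector<User *> users() const {
      std::vector<User *> Result;
      for (const Use &U : UseList)
        Result.push_back(U.getUser());
      return Result;
    }
  };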
David Blaikie [Mon, 30 Jun 2014 20:30:39 +0000 (20:30 +0000)]
DebugInfo: Preserve debug location information when transforming a call into an invoke during inlining.
This not only improves basic debug info quality, but also fixes a larger
hole: whenever we inline a call/invoke without a location, debug info for
the entire inlining is lost, along with other badness that the debug info
emission code is currently working around but shouldn't have to.
Kevin Enderby [Mon, 30 Jun 2014 18:45:23 +0000 (18:45 +0000)]
Add -arch flag support to llvm-nm to select a slice out of a Mach-O
universal file. This also includes support for '-arch all', selecting the host
architecture by default from a universal file, and checking that when -arch is
used with a standard Mach-O file it matches that file's architecture.
Adrian Prantl [Mon, 30 Jun 2014 17:17:35 +0000 (17:17 +0000)]
Debug info: split out complex DIVariable address expressions into a
separate MDNode so they can be uniqued via folding set magic. To conserve
space, DIVariable nodes are still variable-length, with the last two
fields being optional.
No functional change.
http://reviews.llvm.org/D3526
Andrea Di Biagio [Mon, 30 Jun 2014 17:14:21 +0000 (17:14 +0000)]
[X86] Add support for builtin to read performance monitoring counters.
This patch adds support for a new builtin function called
__builtin_ia32_rdpmc.
Builtin '__builtin_ia32_rdpmc' is defined as a 'GCC builtin'; on X86, it can
be used to read performance monitoring counters. It takes as input the index
of the performance counter to read, and returns the value of the specified
performance counter as a 64-bit number.
Calls to this new builtin will map to instruction RDPMC.
The index passed to the builtin call is moved into register %ECX. The result
of the builtin call is the value of the specified performance counter (RDPMC
returns that quantity in registers RDX:RAX).
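A usage sketch (x86 only; the counter index is machine-specific, and reading
a counter from user mode also requires it to be enabled, e.g. CR4.PCE):
  #include <cstdio>
  int main() {
    // Reads performance counter 0; the builtin returns the 64-bit value
    // assembled from the RDX:RAX pair produced by RDPMC.
    unsigned long long Count = __builtin_ia32_rdpmc(0);
    std::printf("counter 0 = %llu\n", Count);
    return 0;
  }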
This patch:
- Adds builtin int_x86_rdpmc as a GCCBuiltin;
- Adds a new x86 DAG node called 'RDPMC_DAG';
- Teaches the backend how to lower this new builtin;
- Adds an ISel pattern to select instruction RDPMC;
- Fixes the definition of instruction RDPMC, adding %RAX and %RDX as
implicit definitions and %ECX as an implicit use;
- Adds an LLVM test to verify that the new builtin is correctly selected.
This exception format is not specific to Windows x64. A similar approach is
taken on nearly all architectures. Generalise the name to reflect reality.
This will eventually be used for Windows on ARM data emission as well.
Rename the routines to reflect the reality that they are more related to call
frame information than to Win64 EH. Although EH is implemented in an
intertwined manner by augmenting the unwind information with an exception
handler and an associated parameter, the majority of these routines emit
information required to unwind the frames. This also helps identify that these
routines are generic for most Windows platforms (they apply equally to nearly
all architectures except x86), although the encoding of the information is
architecture dependent.
Unwinding data is emitted via EmitWinCFI* and exception handling information via
EmitWinEH*.