[llvm-cov] Disable name-compression in a test binary
This should fix the lld bot:
The Buildbot has detected a new failure on builder llvm-clang-lld-x86_64-scei-ps4-ubuntu-fast while building cfe.
Full details are available at:
http://lab.llvm.org:8011/builders/llvm-clang-lld-x86_64-scei-ps4-ubuntu-fast/builds/16993
A coverage segment contains a starting line and column, an execution
count, and some other metadata. Clients of the coverage library use
segments to prepare line-oriented reports.
Users of the coverage library depend on segments being unique and sorted
in source order. Currently this is not guaranteed (this is why the clang
change which introduced deferred regions was reverted).
This commit documents the "unique and sorted" condition and asserts that
it holds. It also fixes the SegmentBuilder so that it produces correct
output in some edge cases.
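A minimal sketch of the invariant clients can now rely on (hypothetical names, not the in-tree llvm-cov types):
#include <cassert>
#include <cstdint>
#include <utility>
#include <vector>

struct Segment {
  unsigned Line, Col; // starting source location
  uint64_t Count;     // execution count (other metadata elided)
};

// "Unique and sorted": every segment must start strictly after the
// previous one in source order.
void assertSegmentsSorted(const std::vector<Segment> &Segments) {
  for (size_t I = 1; I < Segments.size(); ++I)
    assert(std::make_pair(Segments[I - 1].Line, Segments[I - 1].Col) <
               std::make_pair(Segments[I].Line, Segments[I].Col) &&
           "segments must be unique and sorted in source order");
}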
Testing: I've added unit tests for some edge cases. I've also checked
that the new SegmentBuilder implementation is fully covered. Apart from
running check-profile and the llvm-cov tests, I've successfully used a
stage1 llvm-cov to prepare a coverage report for an instrumented clang
binary.
[Coverage] Report errors when reading malformed source regions
Each source region has a start and an end location. Report an error when
the end location precedes the start location.
The old lineExecutionCounts.covmapping test actually had a buggy source
region in it. This commit introduces a regenerated copy of the coverage
and moves the old copy to malformedRegions.covmapping, for a test.
[llvm-cov] Unify region marker placement between text/html modes
Make sure that the text and html emitters always emit the same set of
region markers, and avoid emitting redundant markers for line segments
which don't end on the line they start on.
Wei Mi [Fri, 8 Sep 2017 16:44:52 +0000 (16:44 +0000)]
Fix a bug for rL312641.
rL312641 allowed llvm.memcpy/memset/memmove to be tail calls when the
parent function returns the intrinsic's first argument. However, on the
arm-none-eabi platform, llvm.memcpy is expanded to __aeabi_memcpy, which
has no return value. The fix is to check that the libcall name after
expansion matches "memcpy/memset/memmove" before allowing those
intrinsics to be tail calls.
The SLP vectorizer supports horizontal reductions for Add/FAdd binary
operations. This patch adds support for horizontal min/max reductions.
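For illustration (a hypothetical example, not taken from the patch), a straight-line max idiom like the following can now be recognized as a horizontal smax reduction:
int max4(const int *A) {
  int M = A[0];
  M = A[1] > M ? A[1] : M; // select(icmp sgt, ...) max idiom
  M = A[2] > M ? A[2] : M;
  M = A[3] > M ? A[3] : M; // the whole chain becomes one vector smax reduction
  return M;
}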
The getReductionCost() function is split into getArithmeticReductionCost()
for binary operation reductions and getMinMaxReductionCost() for min/max
reductions.
Patch fixes PR26956.
This patch adds prologue verification, which is already present in
Apple's dwarfdump. It checks for invalid directory indices and warns
about duplicate file paths.
- // `ICI` can either be a comparison against IV or a comparison of IV.next.
- // Depending on the interpretation, we calculate the start value differently.
+ // `ICI` is interpreted as taking the backedge if the *next* value of the
+ // induction variable satisfies some constraint.
- // Pair {IndVarBase; IsIndVarNext} semantically designates whether the latch
- // comparisons happens against the IV before or after its value is
- // incremented. Two valid combinations for them are:
- //
- // 1) { phi [ iv.start, preheader ], [ iv.next, latch ]; false },
- // 2) { iv.next; true }.
- //
- // The latch comparison happens against IndVarBase which can be either current
- // or next value of the induction variable.
const SCEVAddRecExpr *IndVarBase = cast<SCEVAddRecExpr>(LeftSCEV);
bool IsIncreasing = false;
bool IsSignedPredicate = true;
- bool IsIndVarNext = false;
ConstantInt *StepCI;
if (!IsInductionVar(IndVarBase, IsIncreasing, StepCI)) {
FailureReason = "LHS in icmp not induction variable";
return None;
}
- const SCEV *IndVarStart = nullptr;
- // TODO: Currently we only handle comparison against IV, but we can extend
- // this analysis to be able to deal with comparison against sext(iv) and such.
- if (isa<PHINode>(LeftValue) &&
- cast<PHINode>(LeftValue)->getParent() == Header)
- // The comparison is made against current IV value.
- IndVarStart = IndVarBase->getStart();
- else {
- // Assume that the comparison is made against next IV value.
- const SCEV *StartNext = IndVarBase->getStart();
- const SCEV *Addend = SE.getNegativeSCEV(IndVarBase->getStepRecurrence(SE));
- IndVarStart = SE.getAddExpr(StartNext, Addend);
- IsIndVarNext = true;
- }
+ const SCEV *StartNext = IndVarBase->getStart();
+ const SCEV *Addend = SE.getNegativeSCEV(IndVarBase->getStepRecurrence(SE));
+ const SCEV *IndVarStart = SE.getAddExpr(StartNext, Addend);
const SCEV *Step = SE.getSCEV(StepCI);
[XRay][CodeGen][PowerPC] Fix tail exit codegen for XRay in PPC
Summary:
This fixes code generation for XRay on PPC. The regression wasn't caught
because there were no codegen tests; we add them in this change.
What happened was the following:
- For tail exits, we used to unconditionally prepend the returns/exits
with a pseudo-instruction that gets lowered to the instrumentation
sled (and leave the actual return/exit instruction as-is).
- Changes to the XRay instrumentation pass caused the tail exits to
suddenly also emit the tail exit pseudo-instruction, since the check
for whether a return instruction was also a call instruction meant it
was a tail exit instruction.
- None of the tests caught the regression, either because the tests were
non-existent or because they had been disabled/removed due to continuous
breakage.
This change re-introduces some of the basic tests and verifies that
we're back to a state that allows the back-end to generate appropriate
XRay instrumented binaries for PPC in the presence of tail exits.
[x86] Flesh out the custom ISel for RMW arithmetic ops with used flags to
cover the bitwise operators.
Nothing really exciting here, this just stamps out the rest of the core
operations that can RMW memory and set flags.
Still not implemented here: ADC, SBB. Those will require more
interesting logic to channel the flags *in*, and I'm not currently
planning to try to tackle that. It might be interesting for someone who
wants to improve our code generation for bignum implementations.
[x86] Extend the manual ISel of `add` and `sub` with both RMW memory
operands and used flags to support matching immediate operands.
This is a bit trickier than register operands, and we still want to fall
back on register operands even for things that appear to be
"immediates" when they won't actually select into the operation's
immediate operand. This also requires us to handle things like selecting
`sub` vs. `add` to minimize the number of bits needed to represent the
immediate, and picking the shortest immediate encoding. In order to do
that, we in turn need to scan to make sure that CF isn't used, as it
will get inverted.
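Illustrative C (hypothetical, not from the patch) for what now selects to a single RMW instruction with an immediate operand, e.g. on x86-64:
void add42(int *p) { *p += 42; } /* addl $42, (%rdi), imm8 encoding */

/* 128 does not fit in a sign-extended 8-bit immediate but -128 does,
   so the shorter encoding is subl $-128, (%rdi) -- only legal when
   the carry flag, which ends up inverted, is not consumed. */
void add128(int *p) { *p += 128; }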
The end result seems very nice though, and we're now generating
optimal instruction sequences for these patterns IMO.
A follow-up patch will further expand this to other operations with RMW
memory operands. But handling `add` and `sub` is a useful starting point
for fleshing out the machinery and making sure interesting and complex
cases can be handled.
Thanks to Craig Topper who provided a few fixes and improvements to this
patch in addition to the review!
Petr Hosek [Thu, 7 Sep 2017 23:02:50 +0000 (23:02 +0000)]
[llvm-objcopy] Add support for special section indexes in symbol table greater than SHN_LORESERVE
As is, indexes above SHN_LORESERVE are not handled correctly because
they're treated as section indexes rather than special values that
should just be copied through. This change adds support for copying
them.
Lang Hames [Thu, 7 Sep 2017 21:04:00 +0000 (21:04 +0000)]
[ORC] Add ErrorSuccess and void specializations to AsyncHandlerTraits.
This will allow async handlers to be added that return void or Error::success().
Such handlers are expected to be common, since one of the primary uses of
addAsyncHandler is to run the body of the handler in a detached thread, in which
case the main handler returns immediately and does not need to provide an Error
value.
Petr Hosek [Thu, 7 Sep 2017 20:44:16 +0000 (20:44 +0000)]
[yaml2obj][ELF] Add support for symbol indexes greater than SHN_LORESERVE
Right now symbols must be either undefined or defined in a specific
section. However, some symbols have special section indexes like SHN_ABS. This
change adds support for outputting symbols that have such section
indexes.
COFF: PDB: Allow multiple modules with the same name.
It is possible for two modules to have the same name if they are
archive members with the same name, or if we are doing LTO (in which
case all modules will have the name "lto.tmp").
Keith Wyss [Thu, 7 Sep 2017 18:07:48 +0000 (18:07 +0000)]
[XRay][tools] Function call stack based analysis tooling for XRay traces
Second try after fixing a sanitizer problem with iterator reference types.
This change introduces a subcommand to the llvm-xray tool called
"stacks" which allows for analysing XRay traces provided as inputs and
accounting time to stacks instead of just individual functions. This
gives us a more precise view of where in a program the latency is
actually attributed.
The tool uses a trie data structure to keep track of the caller-callee
relationships as we process the XRay traces. In particular, we keep
track of the function call stack as we enter functions. While we're
doing this we're adding nodes in a trie and indicating a "calls"
relationship between the caller (the current top of the stack) and the callee
(the new top of the stack). When we push function ids onto the stack, we
keep track of the timestamp (TSC) for the enter event.
When exiting functions, we are able to account the duration by getting
the difference between the timestamp of the exit event and the
corresponding entry event in the stack. This works even if we somehow
miss the exit events for intermediary functions (i.e. if the exit event
is not cleanly associated with the enter event at the top of the stack).
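A simplified sketch of that bookkeeping (hypothetical names and types; the tool's real data structures differ):
#include <cstdint>
#include <map>
#include <vector>

struct TrieNode {
  int32_t FuncId;
  uint64_t CumulativeTime = 0;
  std::map<int32_t, TrieNode *> Callees; // the "calls" relationship
  explicit TrieNode(int32_t Id) : FuncId(Id) {}
};

struct StackEntry {
  TrieNode *Node;
  uint64_t EnterTSC; // timestamp recorded at the enter event
};

void onEnter(std::vector<StackEntry> &Stack, TrieNode &Root, int32_t FuncId,
             uint64_t TSC) {
  TrieNode *Parent = Stack.empty() ? &Root : Stack.back().Node;
  TrieNode *&Child = Parent->Callees[FuncId];
  if (!Child)
    Child = new TrieNode(FuncId); // new caller-callee edge in the trie
  Stack.push_back({Child, TSC});
}

void onExit(std::vector<StackEntry> &Stack, int32_t FuncId, uint64_t TSC) {
  // Unwind until the matching enter; this tolerates missed exit events
  // for intermediary functions.
  while (!Stack.empty()) {
    StackEntry E = Stack.back();
    Stack.pop_back();
    E.Node->CumulativeTime += TSC - E.EnterTSC;
    if (E.Node->FuncId == FuncId)
      break;
  }
}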
The output of the tool currently provides just the top N leaf functions
that contribute the most latency, and the top N stacks that have the
most frequency. In the future we can provide more sophisticated query
mechanisms and potentially an export to database feature to make offline
analysis of the stack traces possible with existing tools.
Fixes some combine issues for AMDGPU where we weren't getting the many
extract_vector_elt combines that a future patch expects.
This should really be checking isOperationLegalOrCustom on
the extract. That improves a number of x86 lit tests, but
a few get stuck in an infinite loop from one place
where a similar-looking extract is created. I have a
different workaround in the backend for that which
keeps many of those improvements, but also adds a few
regressions.
Benjamin Kramer [Thu, 7 Sep 2017 14:52:26 +0000 (14:52 +0000)]
[ARM] Remove redundant vcvt patterns.
These don't add any value as they're just compositions of existing
patterns. However, they can confuse the cost logic in ISel, leading to
duplicated vcvt instructions like in PR33199.
[X86][LLVM] Expand lowerInterleavedLoad() support in X86InterleavedAccess (VF={8|16|32}, stride 3).
This patch expands lowerInterleavedLoad() support to vectors of {8|16|32} i8 elements with stride 3.
LLVM creates suboptimal shuffle code-gen for AVX2. Overall, this patch is a specific fix for the pattern (stride=3, VF={8|16|32}), and we plan to include the store (interleaving) side as well.
The patch's goal is to optimize loads of the following sequence:
a0 b0 c0 a1 b1 c1 a2 b2
c2 a3 b3 c3 a4 b4 c4 a5
b5 c5 a6 b6 c6 a7 b7 c7
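That is, three logical streams a, b and c interleaved with stride 3. A scalar equivalent of the access pattern being vectorized (illustrative only):
void deinterleave3(const char *Src, char *A, char *B, char *C, int VF) {
  for (int i = 0; i < VF; ++i) {
    A[i] = Src[3 * i + 0]; // stream a
    B[i] = Src[3 * i + 1]; // stream b
    C[i] = Src[3 * i + 2]; // stream c
  }
}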
Simon Atanasyan [Thu, 7 Sep 2017 12:54:26 +0000 (12:54 +0000)]
[mips] Use RegisterMCAsmBackend to register all MIPS asm backends. NFC
This change converts the `MipsAsmBackend` constructor to the "standard"
form. That makes it possible to use `RegisterMCAsmBackend` for backend
registration. Now we pass a `Triple` instance to the `MipsAsmBackend`
ctor and deduce all required options like endianness and bitness from
the triple. We still need to implement explicit ABI checking for
providing correct options to backends.
[MachineCombiner] Update instruction depths incrementally for large BBs.
Summary:
For large basic blocks with lots of combinable instructions, the
MachineTraceMetrics computations in MachineCombiner can dominate the compile
time, as computing the trace information is quadratic in the number of
instructions in a BB and its relevant successors/predecessors.
In most cases, knowing the instruction depth should be enough to make
combination decisions. As we already iterate over all instructions in a basic
block, the instruction depth can be computed incrementally. This reduces the
cost of machine-combine drastically in cases where lots of instructions
are combined. The major drawback is that AFAIK, computing the critical path
length cannot be done incrementally. Therefore we only compute
instruction depths incrementally, for basic blocks with more
instructions than inc_threshold. The -machine-combiner-inc-threshold
option can be used to set the threshold and allows for easier
experimenting and checking if using incremental updates for all basic
blocks has any impact on the performance.
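The incremental computation, as a simplified sketch (assumed types; the pass really works on MachineTraceMetrics data):
#include <algorithm>
#include <vector>

struct Instr {
  std::vector<const Instr *> Defs; // instructions defining our operands
  unsigned Latency = 1;
  unsigned Depth = 0;
};

// An instruction's depth is the maximum over its operands of the
// defining instruction's depth plus that instruction's latency, so one
// in-order walk of the block fills in all depths without recomputing
// a full trace.
void updateDepths(std::vector<Instr *> &Block) {
  for (Instr *MI : Block) {
    unsigned D = 0;
    for (const Instr *Def : MI->Defs)
      D = std::max(D, Def->Depth + Def->Latency);
    MI->Depth = D;
  }
}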
Alex Bradbury [Thu, 7 Sep 2017 11:30:55 +0000 (11:30 +0000)]
[Sparc][NFC] Clean up SelectCC lowering
The ARM, BPF, MSP430, Sparc and Mips backends all use a similar code sequence
for lowering SelectCC. As pointed out by @reames in D29937, this code isn't
particularly clear and in most of these backends doesn't actually match the
comments. This patch makes the code sequence clearer for the Sparc backend
through better variable naming and more accurate comments (e.g. we are
inserting triangle control flow, _not_ diamond). There is no functional
change.
ModuleSummaryAnalysis: Correctly handle all function operand references.
The current code that handles personality functions when creating a
module summary does not correctly handle the case where a function's
personality function operand refers to the function indirectly
(e.g. via a bitcast). This patch handles such cases by treating
personality function references like any other reference, i.e. by
adding them to the function's reference list. This has the minor side
benefit of allowing personality functions to participate in early
dead stripping.
We do this by calling findRefEdges on the function itself. This way
we also end up handling other function operands (specifically prefix
data and prologue data) for free.
ARM: track globals promoted to coalesced const pool entries
Globals that are promoted to an ARM constant pool may alias with another
existing constant pool entry. We need to keep a reference to all globals
that were promoted to each constant pool value so that we can emit a
distinct label for each promoted global. These labels are necessary so
that debug info can refer to the promoted global without an undefined
reference during linking.
Util: Improve update_llc_test_checks to scrub macosx-style assembly annotations
Summary:
In D37523 Sanjay pointed out that the tool does not scrub macosx-style 'End of Function' annotations,
where the comments begin with a double-#.
I tested this patch by verifying all existing occurrences of 'End function' are scrubbed:
find ./test/CodeGen/X86 -name '*.ll' | xargs grep -l "End function" | xargs utils/update_llc_test_checks.py --llc-binary build/bin/llc
Sam Clegg [Wed, 6 Sep 2017 22:05:41 +0000 (22:05 +0000)]
[WebAssembly] Only treat imports/exports as symbols when reading relocatable object files
This change only treats imported and exported functions and globals
as symbol table entries when the object has a "linking" section (i.e. it
is a relocatable object file).
In this case all globals must be of type I32 and initialized with
i32.const. This was previously assumed but not checked for, and it
caused a failure on big-endian machines due to using the wrong member
of the union.
Insert IMPLICIT_DEFS for undef uses in tail merging
Tail merging can convert an undef use into a normal one when creating a
common tail. Doing so can make the register live out from a block which
previously contained the undef use. To keep the liveness up-to-date,
insert IMPLICIT_DEFs in such blocks when necessary.
To enable this patch, the computeLiveIns() function, which used to
compute live-ins for a block and set them immediately, is split into new
functions:
- computeLiveIns() just computes the live-ins in a LivePhysRegs set.
- addLiveIns() applies the live-ins to a block live-in list.
- computeAndAddLiveIns() is a convenience function combining the other
two functions and behaving like computeLiveIns() before this patch.
Based on a patch by Krzysztof Parzyszek <kparzysz@codeaurora.org>
Consider this type of loop:
for (...) {
  ...
  if (...) continue;
  ...
}
Normally, the "continue" would branch to the loop control code that
checks whether the loop should continue iterating and which contains
the (often) unique loop latch branch. In certain cases jump threading
can "thread" the inner branch directly to the loop header, creating
a second loop latch. Loop canonicalization would then transform this
loop into a loop nest. The problem with this is that in such a loop
nest neither loop is countable even if the original loop was. This
may inhibit subsequent loop optimizations and be detrimental to
performance.
[IfConversion] Remove kill flags from common instructions as well
When if-converting a diamond, two separate blocks will be placed back
to back to form straight-line code. To ensure correctness of the
liveness information, any registers that are live in the second block
should not be killed in the first block, even if they were in the
original code.
Additionally, when the two blocks share common instructions at the
beginning, these instructions will not be duplicated, but only placed
once, before both of the blocks. Since the function "isIdenticalTo"
(as used here) ignores kill flags, the common initial code in one
block may have a kill flag for a register that is live in the other
block.
Because the code that removes kill flags only runs for the non-common
parts of the predicated blocks, a kill flag mismatch in the common
code could still lead to a live register being killed prematurely.
Fix PR33878: BasicAA incorrectly assumes different address spaces don't alias
Remove code that assumed that a null pointer in an address space != 0
couldn't alias with a non-null pointer. This is incorrect, since nothing
can be concluded about a null pointer in an address space != 0.
This code was written before address spaces were introduced.
Wei Mi [Wed, 6 Sep 2017 16:05:17 +0000 (16:05 +0000)]
[TailCall] Allow llvm.memcpy/memset/memmove to be tail calls when the parent
function returns the intrinsic's first argument.
llvm.memcpy/memset/memmove return void, but they return their first
argument once they are expanded as libcalls. Currently, if the parent
function has any return value, llvm.memcpy cannot be turned into a tail
call after expansion.
This patch handles that case in SelectionDAGBuilder: when the caller
function returns the same value as the first argument of llvm.memcpy,
the tail call is allowed.
AMDGPU: Make worst-case assumption about the wait states in inline assembly
Summary:
Mesa still uses a hack where empty inline assembly is used as a kind of
optimization barrier. This exposed a problem where not enough wait states
were inserted, because the hazard recognizer implicitly assumed that each
inline assembly "instruction" has at least one wait state.
Simon Pilgrim [Wed, 6 Sep 2017 10:23:12 +0000 (10:23 +0000)]
[X86][X87] Ensure x87 instructions are tagged as altering the FPSW reg
As noted in PR34080, a lot of x87 instructions alter the FPSW status register (or leave it in an undefined state) but aren't tagged as such in the tablegen.
This patch tags the control word, stack, wait and math instructions as altering FPSW, which matches what the AMD APMs suggest happens.
[DAGCombiner] When combining EXTRACT_SUBVECTOR of a BUILD_VECTOR, make sure we don't create a BUILD_VECTOR with an illegal type after type legalization.
[X86] Add more FMA3 patterns to cover a load in all 3 possible positions.
This matches what we already do for AVX512. The peephole pass makes up for this in most if not all cases. But this makes isel behavior for these consistent with every other instruction.
[X86] Allow cross-lane permutations for sub targets supporting AVX2.
Summary:
Most instructions in AVX work "in-lane", that is, each source element is applied only to other
elements of the same lane; thus a cross-lane permutation is costly and needs more than one instruction.
AVX2 includes instructions to perform any-to-any permutation of words over a 256-bit register
and vectorized table lookup.
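For example (illustrative intrinsics code, not from the patch), AVX2's vpermd moves any dword to any position across lanes in a single instruction:
#include <immintrin.h>

// Reverse the eight 32-bit elements of a 256-bit vector: a cross-lane
// permutation that AVX2 handles with one vpermd.
__m256i reverse_dwords(__m256i V) {
  const __m256i Idx = _mm256_setr_epi32(7, 6, 5, 4, 3, 2, 1, 0);
  return _mm256_permutevar8x32_epi32(V, Idx);
}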
Use the section name if a STT_SECTION symbol has an empty name.
Without this we would have multiple relocations pointing to symbols
with the same name: the empty string. There was no way for yaml2obj to
handle that.
A more general solution would be to unique symbol names in a similar
way to how we unique section names. In practice I think this covers
all common cases and is a bit more user friendly than using names like
sym1, sym2, sym3, etc.
[ValueTracking, InstCombine] canonicalize fcmp ord/uno with non-NaN ops to null constants
This is a preliminary step towards solving the remaining part of PR27145 - IR for isfinite():
https://bugs.llvm.org/show_bug.cgi?id=27145
In order to solve that one more generally, we need to add matching for and/or of fcmp ord/uno
with a constant operand.
But while looking at those patterns, I realized we were missing a canonicalization for nonzero
constants (e.g. `fcmp ord %x, 1.0` can become `fcmp ord %x, 0.0`, since a non-NaN constant
operand adds no information). Rather than limiting this to just folds for constants, we're
adding a general value tracking method for it based on an existing DAG helper.
By transforming everything to 0.0, we can simplify the existing code in foldLogicOfFCmps()
and pick up missing vector folds.