Zachary Turner [Tue, 13 Dec 2016 17:03:49 +0000 (17:03 +0000)]
[ADT] Add llvm::StringLiteral.
StringLiteral is a wrapper around a string literal useful for
replacing global tables of char arrays with global tables of
StringRefs that can be initialized in a constexpr context, avoiding
the invocation of a global constructor.
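A minimal sketch of the intended use; the table contents and function name are illustrative, not from the patch:

#include "llvm/ADT/StringRef.h"

// A constexpr table of StringLiteral needs no runtime initialization,
// unlike a table of StringRefs built from char arrays at startup.
static constexpr llvm::StringLiteral Keywords[] = {"if", "else", "while"};

llvm::StringRef firstKeyword() {
  return Keywords[0]; // StringLiteral converts implicitly to StringRef
}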
David Callahan [Tue, 13 Dec 2016 16:42:18 +0000 (16:42 +0000)]
[ADCE] Add code to remove dead branches
Summary:
This is the last in a series of patches to evolve ADCE.cpp to support
removing unnecessary control flow.
This patch adds the code to update the control and data flow graphs
to remove the dead control flow.
Also update unit tests to cover removing dead, may-be-infinite loops,
which is enabled by the switch -adce-remove-loops.
Previous patches:
D23824 [ADCE] Add handling of PHI nodes when removing control flow
D23559 [ADCE] Add control dependence computation
D23225 [ADCE] Modify data structures to support removing control flow
D23065 [ADCE] Refactor anticipating new functionality (NFC)
D23102 [ADCE] Refactoring for new functionality (NFC)
Artur Pilipenko [Tue, 13 Dec 2016 14:21:14 +0000 (14:21 +0000)]
[DAGCombiner] Match load by bytes idiom and fold it into a single load
Match a pattern where a wide type scalar value is loaded by several narrow loads and combined by shifts and ors. Fold it into a single load, or a load and a bswap if the target supports it.
Assuming little endian target:
i8 *a = ...
i32 val = a[0] | (a[1] << 8) | (a[2] << 16) | (a[3] << 24)
=>
i32 val = *((i32*)a)
This optimization was discussed on llvm-dev some time ago in the "Load combine pass" thread. We came to the conclusion that we want to do this transformation late in the pipeline because, in the presence of atomic loads, load widening is an irreversible transformation and it might hinder other optimizations.
Eventually we'd like to support folding patterns like this where the offset has a variable and a constant part:
i32 val = a[i] | (a[i + 1] << 8) | (a[i + 2] << 16) | (a[i + 3] << 24)
Matching the pattern above is easier at SelectionDAG level since address reassociation has already happened and the fact that the loads are adjacent is clear. Understanding that these loads are adjacent at IR level would have involved looking through geps/zexts/adds while looking at the addresses.
The general scheme is to match OR expressions by recursively calculating the origin of the individual bits which constitute the resulting OR value. If all the OR bits come from memory, verify that they are adjacent and match the little- or big-endian encoding of a wider value. If so, and the target allows the load of the wider type (plus the bswap if needed), generate that load and bswap.
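A simplified, target-independent sketch of that final adjacency check, with hypothetical types (the real code works on SDNodes in the DAGCombiner and assumes a little-endian target as in the example above):

#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

struct ByteOrigin {
  const void *Base; // base pointer of the narrow load feeding this byte
  int64_t Offset;   // byte offset of that load from Base
};

// Bytes[i] describes where bits [8*i, 8*i+8) of the OR result come from.
// Assuming a little-endian target: returns 0 for no match, 1 for a plain
// wide load (ascending byte order), 2 for a wide load plus bswap (reversed).
int matchWideLoad(const std::vector<ByteOrigin> &Bytes) {
  if (Bytes.empty())
    return 0;
  int64_t Min = Bytes[0].Offset;
  for (const ByteOrigin &B : Bytes)
    Min = std::min(Min, B.Offset);
  bool Ascending = true, Descending = true;
  const size_t N = Bytes.size();
  for (size_t i = 0; i != N; ++i) {
    if (Bytes[i].Base != Bytes[0].Base)
      return 0; // all bytes must come from the same base object
    int64_t Delta = Bytes[i].Offset - Min;
    Ascending &= Delta == int64_t(i);
    Descending &= Delta == int64_t(N - 1 - i);
  }
  return Ascending ? 1 : (Descending ? 2 : 0);
}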
Simon Dardis [Tue, 13 Dec 2016 11:07:51 +0000 (11:07 +0000)]
[mips] Fix compact branch hazard detection
In certain cases, transient instructions such as %reg = IMPLICIT_DEF can
be the only instruction in a basic block when it reaches the
MipsHazardSchedule pass. This patch teaches MipsHazardSchedule to
properly look through such cases.
Dylan McKay [Tue, 13 Dec 2016 05:53:14 +0000 (05:53 +0000)]
[AVR] Add a 'relax memory operation' pass
Summary:
This pass will be used to relax instructions with out-of-bounds memory
accesses into equivalent operations that can work with the addresses.
The pass currently implements relaxation for the STDWPtrQRr instruction.
Without this pass, an assertion error would be hit in the pseudo expansion pass.
In the future, we will need to add more instructions to this pass. We can do
that on a case-by-case basis.
Philip Reames [Tue, 13 Dec 2016 01:38:41 +0000 (01:38 +0000)]
[peephole] Enhance folding logic to work for STATEPOINTs
The general idea here is to get enough of the existing restrictions out of the way that the already existing folding logic in foldMemoryOperand can kick in for STATEPOINTs and fold references to immutable stack slots. The key changes are:
- Support for folding multiple operands at once which reference the same load
- Support for folding multiple loads into a single instruction
- Walk all the operands of the instruction for variadic instructions (this is a bug fix!)
Once this lands, I'll post another patch which refactors the TII interface here. There's nothing actually x86 specific about the x86 code used here.
Philip Reames [Tue, 13 Dec 2016 01:21:15 +0000 (01:21 +0000)]
[Statepoints] Reuse stack slots more than once within a basic block
The stack slot reuse code had a really amusing bug. We ended up only reusing a stack slot exactly once (initial use + reuse) within a basic block. If we had a third statepoint to process, we ended up allocating a new set of stack slots. If we crossed a basic block boundary, the set got cleared. As a result, code which is invoke heavy doesn't see the problem, but multiple calls within a basic block do. Net result: as we optimize invokes into calls, lowering gets worse.
The root error here is that the bitmap used by the custom allocator wasn't kept in sync. The result was that we ended up resizing the bitmap on the next statepoint (to handle the cross block case), reset the bit once, but then never reset it again.
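A hedged sketch of the invariant involved, using a hypothetical slot-pool structure rather than the actual statepoint lowering code: the in-use bitmap must be resized and cleared at every statepoint in the block, otherwise every existing slot looks permanently taken.

#include "llvm/ADT/SmallBitVector.h"
#include "llvm/ADT/SmallVector.h"

struct SlotPoolSketch {
  llvm::SmallVector<int, 16> Slots; // frame indexes allocated so far
  llvm::SmallBitVector InUse;       // "used by the current statepoint"

  void startStatepoint() {
    // Keep the bitmap in sync with Slots and mark every slot free again
    // for the next statepoint in this basic block.
    InUse.resize(Slots.size());
    InUse.reset();
  }

  int allocateSlot(int NextFrameIndex) {
    for (unsigned i = 0, e = Slots.size(); i != e; ++i)
      if (!InUse.test(i)) { // reuse the first free slot
        InUse.set(i);
        return Slots[i];
      }
    Slots.push_back(NextFrameIndex); // otherwise grow the pool
    InUse.resize(Slots.size(), true);
    return Slots.back();
  }
};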
[libFuzzer] don't require extra flags with -minimize_crash=1 (default to -max_total_time=600). Also respect exact_artifact_path when outputting the end result
Chris Bieneman [Tue, 13 Dec 2016 00:29:56 +0000 (00:29 +0000)]
[LIT] Fix system-windows
Turns out that if you were on Windows and your default target wasn't Windows, the system-windows feature wasn't getting enabled.
This fixes that and updates the coff-dwarf test to rely on the new "target-windows" feature. That test was the reason why system-windows was changed to not always be enabled on Windows hosts.
Tim Northover [Mon, 12 Dec 2016 23:29:07 +0000 (23:29 +0000)]
Stop lying about pointers' required alignments.
These extra specializations were added in the depths of history (r67984 from
2009) and are clearly problematic now. The pointers actually are aligned to the
default (8 bytes), since otherwise UBsan would be complaining loudly.
I *think* it originally made sense because there was no "alignof" to infer the
correct value so the generic case went with what malloc returned (8-byte
aligned objects), and on 32-bit machines this specialization was correct. It
became wrong when we started compiling for 64-bit, and caused a UBSan failure
when we tried to put a ValueHandle into a DenseMap.
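A hedged illustration of the modern approach (not the removed specializations themselves): alignof can now derive how many low pointer bits are really spare, so hard-coding a smaller alignment than the type actually has is both unnecessary and UB-prone.

#include <cstdint>

template <typename T> struct PointerTraitsSketch {
  // Number of guaranteed-zero low bits, derived from the real alignment.
  static constexpr int NumLowBitsAvailable =
      alignof(T) >= 8 ? 3 : alignof(T) >= 4 ? 2 : alignof(T) >= 2 ? 1 : 0;
};

struct alignas(8) Node { void *Ptr; std::uint64_t Data; };

static_assert(PointerTraitsSketch<Node>::NumLowBitsAvailable == 3,
              "8-byte alignment leaves three tag bits free");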
Marcos Pividori [Mon, 12 Dec 2016 23:25:11 +0000 (23:25 +0000)]
[libFuzzer] Implement Timers for Windows.
Implemented timeouts for Windows using TimerQueueTimers.
Timers are used to supervise the time of execution of the
callback function that is being fuzzed.
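A minimal sketch of the mechanism, assuming a one-shot timer and illustrative timeout handling (libFuzzer's actual callback and exit behavior differ):

#include <windows.h>
#include <cstdio>

// Runs on a thread-pool thread once the timeout expires.
static VOID CALLBACK AlarmCallback(PVOID /*Param*/, BOOLEAN /*TimerFired*/) {
  fprintf(stderr, "ERROR: timeout while running the fuzz target\n");
  ExitProcess(1);
}

int main() {
  HANDLE Timer = nullptr;
  // Fire once after 1000 ms; Period == 0 means no periodic repetition.
  if (!CreateTimerQueueTimer(&Timer, /*TimerQueue=*/nullptr, AlarmCallback,
                             /*Parameter=*/nullptr, /*DueTime=*/1000,
                             /*Period=*/0, WT_EXECUTEONLYONCE)) {
    fprintf(stderr, "CreateTimerQueueTimer failed\n");
    return 1;
  }
  // ... run the callback being fuzzed here ...
  DeleteTimerQueueTimer(/*TimerQueue=*/nullptr, Timer,
                        /*CompletionEvent=*/INVALID_HANDLE_VALUE);
  return 0;
}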
Petr Hosek [Mon, 12 Dec 2016 23:15:10 +0000 (23:15 +0000)]
[CMake] Multi-target builtins build
This change enables building builtins for multiple different targets
using the LLVM runtimes directory.
To specify the builtin targets to be built, use the LLVM_BUILTIN_TARGETS
variable, where the value is the list of targets. To pass a per target
variable to the builtin build, you can set BUILTINS_<target>_<variable>
where <variable> will be passed to the builtin build for <target>.
Sanjoy Das [Mon, 12 Dec 2016 23:00:12 +0000 (23:00 +0000)]
Revert "[SCEVExpander] Use llvm data structures; NFC"
This reverts r289215 (git SHA1 cb7b86a1). It breaks the ubsan build
because a DenseMap that keys off of `AssertingVH<T>` will hit UB when it
tries to cast the empty and tombstone keys to `T *` (due to insufficient
alignment).
This is the relevant stack trace (thanks to Mike Aizatsky):
#0 0x25cf100 in llvm::AssertingVH<llvm::PHINode>::getValPtr() const llvm/include/llvm/IR/ValueHandle.h:212:39
#1 0x25cea20 in llvm::AssertingVH<llvm::PHINode>::operator=(llvm::AssertingVH<llvm::PHINode> const&) llvm/include/llvm/IR/ValueHandle.h:234:19
#2 0x25d0092 in llvm::DenseMapBase<llvm::DenseMap<llvm::AssertingVH<llvm::PHINode>, llvm::detail::DenseSetEmpty, llvm::DenseMapInfo<llvm::AssertingVH<llvm::PHINode> >, llvm::detail::DenseSetPair<llvm::AssertingVH<llvm::PHINode> > >, llvm::AssertingVH<llvm::PHINode>, llvm::detail::DenseSetEmpty, llvm::DenseMapInfo<llvm::AssertingVH<llvm::PHINode> >, llvm::detail::DenseSetPair<llvm::AssertingVH<llvm::PHINode> > >::clear() llvm/include/llvm/ADT/DenseMap.h:113:23
Guozhi Wei [Mon, 12 Dec 2016 22:09:02 +0000 (22:09 +0000)]
[PPC] Prefer direct move on power8 if load 1 or 2 bytes to VSR
Power8 has MTVSRWZ but no LXSIBZX/LXSIHZX, so moving 1 or 2 bytes to a VSR through MTVSRWZ is much faster than storing the extended value to the stack and loading it with LXSIWZX.
This patch fixes pr31144.
Tim Shen [Mon, 12 Dec 2016 21:59:30 +0000 (21:59 +0000)]
[APFloat] Implement PPCDoubleDouble add and subtract.
Summary:
I looked at libgcc's implementation (which is based on the paper "Software
for Doubled-Precision Floating-Point Computations" by Seppo Linnainmaa,
ACM TOMS vol. 7, no. 3, September 1981, pages 272-283) and made it generic
to arbitrary IEEE floats.
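For reference, a sketch of the classic error-free two-sum step that doubled-precision addition builds on (textbook double version only; the APFloat code generalizes the idea to arbitrary IEEE formats):

#include <utility>

// Returns {s, e} with s = fl(a + b) and a + b == s + e exactly.
static std::pair<double, double> twoSum(double a, double b) {
  double s = a + b;
  double bb = s - a;                    // the part of b absorbed into s
  double e = (a - (s - bb)) + (b - bb); // what rounding discarded
  return {s, e};
}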
Matthew Simpson [Mon, 12 Dec 2016 21:11:04 +0000 (21:11 +0000)]
[SLP] Fix sign-extends for type-shrinking
This patch ensures the correct minimum bit width during type-shrinking.
Previously when type-shrinking, we always sign-extended values back to their
original width. However, if we are going to sign-extend, and the sign bit is
unknown, we have to increase the minimum bit width by one bit so the
sign-extend will fill the upper bits correctly. If the sign bit is known to be
zero, we can perform a zero-extend instead. This should fix PR31243.
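A hedged sketch of the rule, with illustrative names (not the SLP vectorizer's actual data structures):

struct ShrinkDecision {
  unsigned MinBW; // minimum bit width to keep for the shrunken value
  bool UseZExt;   // extend back with zext instead of sext
};

static ShrinkDecision decideExtension(unsigned MinBW, bool SignBitKnownZero) {
  if (SignBitKnownZero)
    return {MinBW, /*UseZExt=*/true}; // upper bits are zero anyway
  // Sign bit unknown: add one bit so the sign-extend reproduces the
  // original upper bits correctly.
  return {MinBW + 1, /*UseZExt=*/false};
}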
Paul Robinson [Mon, 12 Dec 2016 20:49:11 +0000 (20:49 +0000)]
Recommit r288212: Emit 'no line' information for interesting 'orphan' instructions.
DWARF specifies that "line 0" really means "no appropriate source
location" in the line table. By default, use this for branch targets
and some other cases that have no specified source location, to
prevent inheriting unfortunate line numbers from physically preceding
instructions (which might be from completely unrelated source).
Updated patch allows enabling or suppressing this behavior for all
unspecified source locations.
Mehdi Amini [Mon, 12 Dec 2016 19:34:26 +0000 (19:34 +0000)]
Refactor BitcodeReader: move Metadata and ValueId handling in their own class/file
Summary:
I'm planning on changing the way we load metadata to enable laziness.
I'm getting lost in this gigantic file, and the gigantic class that is the bitcode
reader. This is a first step toward splitting it into a few coarse components that
are more easily understandable.
Dimitry Andric [Mon, 12 Dec 2016 19:05:52 +0000 (19:05 +0000)]
Fix compile with GCC 5 or later
Summary:
Compiling with GCC 5 or later can fail with a bogus error "constructor
required before non-static data member for
llvm::ValueEnumerator::MDRange::First has been parsed".
This was originally fixed upstream in GCC PR 70528, but later this fix
was reverted, and released versions of GCC still show the bogus error.
To work around this, replace MDRange's declaration of a default
constructor with a definition.
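A hedged sketch of the shape of the workaround (illustrative stand-in type; the exact original spelling isn't shown here):

struct MDRangeSketch {
  unsigned First = 0;
  unsigned Last = 0;

  // A merely declared/defaulted default constructor trips the GCC bug;
  // providing a definition with an empty body works around it.
  MDRangeSketch() {}
  MDRangeSketch(unsigned F, unsigned L) : First(F), Last(L) {}
};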
Teresa Johnson [Mon, 12 Dec 2016 16:09:30 +0000 (16:09 +0000)]
[ThinLTO] Import only necessary DICompileUnit fields
Summary:
As discussed on mailing list, for ThinLTO importing we don't need
to import all the fields of the DICompileUnit. Don't import enums,
macros, or retained types lists. Also, only import locally scoped imported
entities. Since we don't currently import any global variables,
we also don't need to import the list of global variables (added an
assert to verify none are being imported).
This is being done by pre-populating the value map entries to map
the unneeded metadata to nullptr. For the imported entities, we can
simply replace the source module's list with a new list containing
only those needed imported entities. This is done in the IRLinker
constructor so that value mapping automatically does the desired
mapping.
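A hedged sketch of the pre-population trick with a hypothetical map type (the real code seeds the ValueMapper's metadata map inside the IRLinker constructor):

#include "llvm/ADT/DenseMap.h"
#include "llvm/IR/DebugInfoMetadata.h"

using MetadataMapSketch =
    llvm::DenseMap<const llvm::Metadata *, llvm::Metadata *>;

// Seed the map so these operands count as "already mapped" to null and are
// therefore never imported when the compile unit is linked in.
static void dropUnneededCUOperands(const llvm::DICompileUnit *CU,
                                   MetadataMapSketch &Map) {
  Map[CU->getRawEnumTypes()] = nullptr;
  Map[CU->getRawMacros()] = nullptr;
  Map[CU->getRawRetainedTypes()] = nullptr;
  Map[CU->getRawGlobalVariables()] = nullptr;
}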
Simon Pilgrim [Mon, 12 Dec 2016 10:49:15 +0000 (10:49 +0000)]
[X86][SSE] Lower suitably sign-extended mul vXi64 using PMULDQ
PMULDQ returns the 64-bit result of the signed multiplication of the lower 32-bits of vXi64 vector inputs, we can lower with this if the sign bits stretch that far.
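A hedged sketch of the profitability check (simplified; the actual lowering lives in the X86 backend):

#include "llvm/CodeGen/SelectionDAG.h"

// PMULDQ multiplies the low 32 bits of each 64-bit lane as signed values,
// so it is only a valid replacement when both operands are effectively
// sign-extended from 32 bits, i.e. have at least 33 known sign bits.
static bool canUsePMULDQ(llvm::SelectionDAG &DAG, llvm::SDValue LHS,
                         llvm::SDValue RHS) {
  return DAG.ComputeNumSignBits(LHS) > 32 && DAG.ComputeNumSignBits(RHS) > 32;
}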
Craig Topper [Mon, 12 Dec 2016 07:57:21 +0000 (07:57 +0000)]
[X86] Change CMPSS/CMPSD intrinsic instructions to use sse_load_f32/f64 as its memory pattern instead of full vector load.
These intrinsics only load a single element. We should use sse_load_f32/f64 to give more options of what loads it can match.
Currently these instructions are often only getting their load folded thanks to the load folding in the peephole pass. I plan to add more types of loads to sse_load_f32/64 so we can match without the peephole.
Craig Topper [Mon, 12 Dec 2016 05:07:17 +0000 (05:07 +0000)]
[X86] Remove some intrinsic instructions from hasPartialRegUpdate
Summary:
These intrinsic instructions are all selected from intrinsics that have well defined behavior for where the upper bits come from. It's not the same place as the lower bits.
As you can see we were suppressing load folding for these instructions in some cases. In none of the cases was the separate load helping avoid a partial dependency on the destination register. So we should just go ahead and allow the load to be folded.
Only foldMemoryOperand was suppressing folding for these. They all have patterns for folding sse_load_f32/f64 that aren't gated with OptForSize, but sse_load_f32/f64 doesn't allow 128-bit vector loads. It only allows scalar_to_vector and vzmovl of scalar loads to match. There's no reason we can't allow a 128-bit vector load to be narrowed so I would like to fix sse_load_f32/f64 to allow that. And if I do that it changes some of these same test cases to fold the load too.
Sebastian Pop [Mon, 12 Dec 2016 02:52:51 +0000 (02:52 +0000)]
[SCEVExpand] do not hoist divisions by zero (PR30935)
SCEVExpand computes the insertion point for the components of a SCEV to be code
generated. When it comes to generating code for a division, SCEVExpand would
not be able to check (at compilation time) all the conditions necessary to avoid
a division by zero. The patch disables hoisting of expressions containing
divisions by anything other than non-zero constants in order to avoid hoisting
these expressions past conditions that should hold before doing the division.
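An illustration of the hazard in plain C++ terms (the names are made up):

// The source only divides after checking the divisor. Hoisting the division
// above the guard (e.g. into a preheader) introduces a divide-by-zero the
// original program never executes.
long safeQuotient(long n, long d) {
  long q = 0;
  if (d != 0)    // the condition that must stay ahead of the division
    q = n / d;
  return q;
}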
Craig Topper [Sun, 11 Dec 2016 22:32:38 +0000 (22:32 +0000)]
[InstCombine][XOP] The instructions for the scalar frcz intrinsics are defined to put 0 in the upper bits, not pass bits through like other intrinsics. So we should return a zero vector instead.
Ayman Musa [Sun, 11 Dec 2016 20:11:17 +0000 (20:11 +0000)]
[X86][AVX512] Add missing patterns for broadcast fallback in case load node has multiple uses (for v4i64 and v4f64).
When the load node which the broadcast instruction broadcasts has multiple uses, it cannot be folded.
A fallback pattern is added to catch these cases and provide another solution.
Sanjoy Das [Sun, 11 Dec 2016 20:07:15 +0000 (20:07 +0000)]
[Verifier] Add verification for TBAA metadata
Summary:
This change adds some verification in the IR verifier around struct path
TBAA metadata.
Other than some basic sanity checks (e.g. we get constant integers where
we expect constant integers), this checks (see the sketch after the list):
- That by the time a struct access tuple `(base-type, offset)` is
"reduced" to a scalar base type, the offset is `0`. For instance, in
C++ you can't start from, say `("struct-a", 16)`, and end up with
`("int", 4)` -- by the time the base type is `"int"`, the offset
better be zero. In particular, a variant of this invariant is needed
for `llvm::getMostGenericTBAA` to be correct.
- That there are no cycles in a struct path.
- That struct type nodes have their offsets listed in an ascending
order.
- That when generating the struct access path, you eventually reach the
access type listed in the tbaa tag node.
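A hedged sketch of the struct path walk using plain C++ data structures (the verifier operates on MDNodes; the names here are illustrative):

#include <cstdint>
#include <utility>
#include <vector>

struct TypeNodeSketch {
  bool IsScalar;
  // For struct nodes: (field type, offset) pairs with offsets ascending.
  std::vector<std::pair<const TypeNodeSketch *, uint64_t>> Fields;
};

// Walk from (Ty, Offset) toward the access type; at each struct node descend
// into the field containing Offset. By the time a scalar is reached, the
// remaining offset must be zero and the node must be the tag's access type.
static bool pathReaches(const TypeNodeSketch *Ty, uint64_t Offset,
                        const TypeNodeSketch *AccessTy) {
  while (!Ty->IsScalar) {
    const TypeNodeSketch *Next = nullptr;
    uint64_t FieldOff = 0;
    for (const auto &F : Ty->Fields)
      if (F.second <= Offset) {
        Next = F.first;
        FieldOff = F.second;
      }
    if (!Next)
      return false;
    Offset -= FieldOff;
    Ty = Next;
  }
  return Offset == 0 && Ty == AccessTy;
}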
Sebastian Pop [Sun, 11 Dec 2016 19:39:32 +0000 (19:39 +0000)]
instr-combiner: sum up all latencies of the transformed instructions
We have found that -- when the selected subarchitecture has a scheduling model
and we are not optimizing for size -- the machine-instruction combiner uses a
too-simple algorithm to compute the cost of one of the two alternatives [before
and after running a combining pass on a section of code], and therefore it throws
away the combination results too often.
This fix has the potential to help any ISA that can combine instructions
and for which at least one subarchitecture has a scheduling model.
As of now, this is only known to definitely affect AArch64 subarchitectures with
a scheduling model.
Regression tested on AMD64/GNU-Linux, new test case tested to fail on an
unpatched compiler and pass on a patched compiler.
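A hedged sketch of the cost comparison with simplified signatures (the MachineCombiner's actual bookkeeping differs):

#include "llvm/ADT/SmallVector.h"
#include "llvm/CodeGen/MachineInstr.h"
#include "llvm/CodeGen/TargetSchedule.h"

// Sum the scheduling-model latency of every instruction in each sequence
// instead of comparing a single instruction from each side.
static bool combinedSequenceIsCheaper(
    const llvm::TargetSchedModel &SchedModel,
    const llvm::SmallVectorImpl<llvm::MachineInstr *> &OldSeq,
    const llvm::SmallVectorImpl<llvm::MachineInstr *> &NewSeq) {
  unsigned OldLatency = 0, NewLatency = 0;
  for (llvm::MachineInstr *MI : OldSeq)
    OldLatency += SchedModel.computeInstrLatency(MI);
  for (llvm::MachineInstr *MI : NewSeq)
    NewLatency += SchedModel.computeInstrLatency(MI);
  return NewLatency <= OldLatency;
}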
Sanjoy Das [Sun, 11 Dec 2016 19:02:21 +0000 (19:02 +0000)]
[SCEVExpander] Explicitly expand AddRec starts into loop preheader
This is NFC today, but won't be once D27216 (or an equivalent patch) is
in.
This change fixes a design problem in SCEVExpander -- it relied on a
hoisting optimization to generate correct code for add recurrences.
This meant changing the hoisting optimization to not kick in under
certain circumstances (to avoid speculating faulting instructions, say)
would break correctness.
The fix is to make the correctness requirements explicit, and have it
not rely on the hoisting optimization for correctness.
Oren Ben Simhon [Sun, 11 Dec 2016 14:10:52 +0000 (14:10 +0000)]
[X86] Regcall - Adding support for mask types
The regcall calling convention passes mask-type arguments in x86 GPR registers.
The review includes the changes required in order to support v32i1, v16i1 and v8i1.
Chandler Carruth [Sun, 11 Dec 2016 12:49:05 +0000 (12:49 +0000)]
[FileCheck] Re-implement the logic to find each check prefix in the
check file to not be unreasonably slow in the face of multiple check
prefixes.
The previous logic would repeatedly scan potentially large portions of
the check file looking for alternative prefixes. In the worst case this
would scan most of the file looking for a rare prefix between every
single occurrence of a common prefix. Even if we bounded the scan, this
would do bad things if the order of the prefixes was "unlucky" and the
distant prefix was scanned for first.
None of this is necessary. It is straightforward to build a state
machine that recognizes the first, longest of the set of alternative
prefixes. That is in fact exactly what a regular expression does.
This patch builds a regular expression once for the set of prefixes and
then uses it to search incrementally for the next prefix. This requires
some threading of state but actually makes the code dramatically
simpler. I've also added a big comment describing the algorithm as it
was not at all obvious to me when I started.
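A hedged sketch of the idea, not FileCheck's exact code (the real implementation also handles prefix word boundaries and threads more state through the scan):

#include "llvm/ADT/SmallVector.h"
#include "llvm/ADT/StringRef.h"
#include "llvm/Support/Regex.h"
#include <algorithm>
#include <string>
#include <vector>

// Build one regex whose alternatives are the prefixes, longest first, so a
// single scan recognizes the longest prefix at the earliest offset.
static llvm::Regex buildPrefixRegex(std::vector<std::string> Prefixes) {
  std::sort(Prefixes.begin(), Prefixes.end(),
            [](const std::string &A, const std::string &B) {
              return A.size() > B.size();
            });
  std::string Pattern;
  for (const std::string &P : Prefixes) {
    if (!Pattern.empty())
      Pattern += "|";
    Pattern += P; // assumes the prefixes contain no regex metacharacters
  }
  return llvm::Regex(Pattern);
}

// Incrementally find the next prefix in the unsearched tail of the buffer.
static size_t findNextPrefix(llvm::Regex &RE, llvm::StringRef Buffer) {
  llvm::SmallVector<llvm::StringRef, 1> Matches;
  if (!RE.match(Buffer, &Matches))
    return llvm::StringRef::npos;
  return Matches[0].data() - Buffer.data();
}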
With this patch, several previously pathological test cases in
test/CodeGen/X86 are 5x and more faster. Overall, running all tests
under test/CodeGen/X86 uses 10% less CPU after this, and because all the
slowest tests were hitting this, finishes in 40% less wall time on my
system (going from just over 5.38s to just over 3.23s) on a release
build! This patch substantially improves the time of all 7 X86 tests
that were in the top 20 reported by --time-tests, 5 of them are
completely off the list and the remaining 2 are much lower. (Sadly, the
new tests on the list include 2 new X86 ones that are slow for unrelated
reasons, so the count stays at 4 of the top 20.)
It isn't clear how much this helps debug builds in aggregate in part
because of the noise, but it again makes many of the slowest x86 tests
significantly faster (10% or more improvement).
Chandler Carruth [Sun, 11 Dec 2016 09:50:05 +0000 (09:50 +0000)]
Refactor FileCheck some to reduce memory allocation and copying. Also
make some readability improvements.
Both the check file and input file have to be fully buffered to
normalize their whitespace. But previously this would be done in a stack
SmallString and then copied into a heap allocated MemoryBuffer. That
seems pretty wasteful, especially for something like FileCheck where
there are only ever two such entities.
This just rearranges the code so that we can keep the canonicalized
buffers on the stack of the main function, use reasonably large stack
buffers to reduce allocation. A rough estimate seems to show that about
80% of LLVM's .ll and .s files will fit into a 4k buffer, so this should
completely avoid heap allocation for the buffer in those cases. My
system's malloc is fast enough that the allocations don't directly show
up in timings. However, on some very slow test cases, this saves 1% - 2%
by avoiding the copy into the heap allocated buffer.
This also splits out the code which checks the input into a helper much
like the code to build the checks as that made the code much more
readable to me. Nit picks and suggestions welcome here. It has really
exposed a *bunch* of stuff that could be cleaned up though, so I'm
probably going to go and spring clean all of this code as I have more
changes coming to speed things up.
Craig Topper [Sun, 11 Dec 2016 08:54:52 +0000 (08:54 +0000)]
[X86][InstCombine] Add support for scalar FMA intrinsics to SimplifyDemandedVectorElts.
This teaches SimplifyDemandedElts that the FMA can be removed if the lower element isn't used. It also teaches it that if upper elements of the first operand aren't used then we can simplify them.
Chandler Carruth [Sun, 11 Dec 2016 07:46:21 +0000 (07:46 +0000)]
Tweak the core loop in StringRef::find to avoid calling memcmp on every
iteration.
Instead, load the byte at the needle length, compare it directly, and
save it to use in the lookup table of lengths we can skip forward.
I also added an annotation to expect that the comparison fails so that
the loop gets laid out contiguously without the call to memcmp (and the
substantial register shuffling that the ABI requires of that call).
Finally, because this behaves especially badly with a needle length of
one (by calling memcmp with a zero length) special case that to directly
call memchr, which is what we should have been doing anyways.
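A Horspool-style sketch of the same idea in isolation (this is not LLVM's exact code; StringRef::find differs in its details):

#include <cstddef>
#include <cstring>

// Returns the offset of Needle in Haystack, or npos if absent.
static size_t findSubstr(const unsigned char *Haystack, size_t HaystackLen,
                         const unsigned char *Needle, size_t NeedleLen) {
  const size_t npos = static_cast<size_t>(-1);
  if (NeedleLen == 0)
    return 0;
  if (NeedleLen == 1) { // degenerate case: just defer to memchr
    const void *P = std::memchr(Haystack, Needle[0], HaystackLen);
    return P ? static_cast<const unsigned char *>(P) - Haystack : npos;
  }
  if (NeedleLen > HaystackLen)
    return npos;

  // Skip table: how far the window can move when the byte under the last
  // needle position does not occur earlier in the needle.
  size_t Skip[256];
  for (size_t i = 0; i != 256; ++i)
    Skip[i] = NeedleLen;
  for (size_t i = 0; i != NeedleLen - 1; ++i)
    Skip[Needle[i]] = NeedleLen - 1 - i;

  for (size_t Pos = 0; Pos <= HaystackLen - NeedleLen;) {
    unsigned char Last = Haystack[Pos + NeedleLen - 1];
    // Check the cheap single byte first; only fall back to memcmp when it
    // matches (the expected common case is a mismatch).
    if (Last == Needle[NeedleLen - 1] &&
        std::memcmp(Haystack + Pos, Needle, NeedleLen - 1) == 0)
      return Pos;
    Pos += Skip[Last];
  }
  return npos;
}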
This was motivated by the fact that there are a large number of test
cases in 'check-llvm' where FileCheck's performance is dominated by
calls to StringRef::find (in a release, no-asserts build). I'm working
on patches to generally improve matters there, but this alone was worth
a 12.5% improvement in one test case where FileCheck spent 92% of its
time in this routine.
I experimented a bunch with different minor variations on this theme,
for example setting the pointer *at* the last byte and indexing
backwards for the call to memcmp. That didn't improve anything on this
version and seemed more complex. I also tried other things to make the
loop flow more nicely and none worked. =/ It is a bit unfortunate, the
generated code here remains pretty gross, but I don't see any obvious
ways to improve it. At this point, most of my ideas would be really
elaborate:
1) While the remainder of the string is long enough, we could load
a 16-byte or 32-byte vector at the address of the last byte and use
palignr to rotate that and check the first 15- or 31-bytes at the
front of the next segment, essentially pre-loading the first several
bytes of the next iteration so we could quickly detect a mismatch in
those bytes without an additional memory access. Down side would be
the code complexity, having a fallback loop, and likely misaligned
vector load. Plus it would make the common case of the last byte not
matching somewhat slower (need some extraction from a vector).
2) While we have space, we could do an aligned load of a 16- or 32-byte
vector that *contains* the end byte, and use any preceding bytes to
have a more precise "no" test, and any subsequent bytes could be
saved for the next iteration. This removes any unaligned load penalty,
but still requires us to pay the overhead of vector extraction for
the cases where we didn't need to do anything other than load and
compare the last byte.
3) Try to walk from the last byte in a way that is more friendly to
cache and/or memory pre-fetcher considering we have to poke the last
byte anyways.
No idea if any of these are really worth pursuing though. They all seem
somewhat unlikely to yield big wins in practice and to be a lot of work
and complexity. So I settled here, which at least seems like a strict
improvement over the previous version.
Craig Topper [Sun, 11 Dec 2016 07:42:04 +0000 (07:42 +0000)]
[AVX-512][InstCombine] Teach InstCombineCalls how to simplify demanded for scalar cmp intrinsics with masking and rounding.
These intrinsics don't read the upper elements of their first and second input. These are slightly different from the SSE version, which does use the upper bits of its first element as passthru bits since the result goes to an XMM register. For AVX-512 the result goes to a mask register instead.
Craig Topper [Sun, 11 Dec 2016 07:42:01 +0000 (07:42 +0000)]
[AVX-512][InstCombine] Teach InstCombineCalls how to simplify demanded elements for scalar add,div,mul,sub,max,min intrinsics with masking and rounding.
These intrinsics don't read the upper bits of their second input. And the third input is the passthru for masking and that only uses the lower element as well.