Chris Bieneman [Mon, 9 Nov 2015 21:54:55 +0000 (21:54 +0000)]
Deprecate Autoconf
As per the very positive feedback from llvm-dev (http://lists.llvm.org/pipermail/llvm-dev/2015-November/092150.html), this commit officially deprecates the LLVM autoconf-based build system.
Sanjay Patel [Mon, 9 Nov 2015 21:16:49 +0000 (21:16 +0000)]
[x86] try harder to match bitwise 'or' into an LEA
The motivation for this patch starts with the epic fail example in PR18007:
https://llvm.org/bugs/show_bug.cgi?id=18007
...unfortunately, this patch makes no difference for that case, but it solves some
simpler cases. We'll get there some day. :)
The current 'or' matching code was using computeKnownBits() via
isBaseWithConstantOffset() -> MaskedValueIsZero(), but that's an unnecessarily limited use.
We can do more by copying the logic in ValueTracking's haveNoCommonBitsSet(), so we can
treat the 'or' as if it was an 'add'.
There's a TODO comment here because we should lift the bit-checking logic into a helper
function, so it's not duplicated in DAGCombiner.
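A minimal standalone sketch of the idea being copied from haveNoCommonBitsSet()
(this is not the actual DAGCombiner/X86 code, which works on SDValues and the
known-bits analysis; it only models why the 'or' can be treated as an 'add'):

  #include <cassert>
  #include <cstdint>

  // KnownZeroA/KnownZeroB are masks of bits proven zero in each operand.
  // If every bit is known zero in at least one operand, no carries can
  // occur, so (a | b) == (a + b) and the 'or' qualifies for LEA matching.
  bool noCommonBitsSet(uint64_t KnownZeroA, uint64_t KnownZeroB) {
    return (~KnownZeroA & ~KnownZeroB) == 0;
  }

  int main() {
    // Typical addressing pattern: a 16-byte-aligned base or'd with a small index.
    uint64_t KnownZeroBase  = 0xF;            // low 4 bits of the base are zero
    uint64_t KnownZeroIndex = ~uint64_t(0xF); // index fits in the low 4 bits
    assert(noCommonBitsSet(KnownZeroBase, KnownZeroIndex));

    uint64_t Base = 0x1000, Index = 0x5;
    assert((Base | Index) == (Base + Index)); // the 'or' behaves like an 'add'
    return 0;
  }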
Reid Kleckner [Mon, 9 Nov 2015 21:04:00 +0000 (21:04 +0000)]
[WinEH] Tweak funclet prologue/epilogue insertion to pass verifier
For some reason we'd never run MachineVerifier on WinEH code, and you
explicitly have to ask for it with llc. I added it to a few test cases
to get some coverage.
[sanitizer] Use same shadow offset for ASAN on aarch64
This patch makes ASAN for aarch64 use the same shadow offset for all
currently supported VMAs (39 and 42 bits). The shadow offset is the
same as the one already used for the 39-bit VMA (1 << 36). Similar to the
ppc64 port, the aarch64 transformation also requires using an add instead
of an 'or' for the 42-bit VMA.
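A small arithmetic sketch of why the 42-bit VMA needs an add. The
Shadow = (Addr >> 3) + Offset formula and the scale of 3 are the standard
ASan shadow mapping (an assumption here, not spelled out above); the
1 << 36 offset is the value referenced in the message:

  #include <cassert>
  #include <cstdint>

  constexpr uint64_t kScale  = 3;
  constexpr uint64_t kOffset = 1ULL << 36;

  uint64_t shadowAdd(uint64_t Addr) { return (Addr >> kScale) + kOffset; }
  uint64_t shadowOr(uint64_t Addr)  { return (Addr >> kScale) | kOffset; }

  int main() {
    // 39-bit VMA: Addr >> 3 never reaches bit 36, so 'or' and 'add' agree.
    uint64_t A39 = (1ULL << 39) - 16;
    assert(shadowAdd(A39) == shadowOr(A39));
    // 42-bit VMA: Addr >> 3 can already have bit 36 set, so 'or' loses the
    // carry and yields the wrong shadow address -- hence the add.
    uint64_t A42 = (1ULL << 42) - 16;
    assert(shadowAdd(A42) != shadowOr(A42));
    return 0;
  }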
Dehao Chen [Mon, 9 Nov 2015 17:30:38 +0000 (17:30 +0000)]
Add discriminators for call instructions that are from the same line and same basic block.
Summary: Call instructions that are from the same line and same basic block need to have separate discriminators to distinguish between different callsites.
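A hypothetical source-level example (the names caller/callee are invented for
illustration): both calls below come from the same line and the same basic
block, so without distinct discriminators a sample profile could not attribute
counts to the individual callsites.

  int callee(int V) { return V * 2; }

  int caller(int X) {
    return callee(X) + callee(X + 1);  // two callsites, one line, one basic block
  }

  int main() { return caller(1) == 6 ? 0 : 1; }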
Oliver Stannard [Mon, 9 Nov 2015 16:47:16 +0000 (16:47 +0000)]
GlobalOpt should maintain externally_initialized when splitting aggregates
When GlobalOpt splits an internal, global variable with an aggregate type, it
should propagate the externally_initialized flag to the newly created globals.
This makes the pass safe for our downstream use of this flag, while still
allowing some useful optimisations (such as removing dead parts of the split
aggregate) to be performed.
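A minimal sketch of the propagation step, assuming the LLVM C++ API; the
helper makeFieldGlobal is hypothetical and not the GlobalOpt code itself, it
only shows the flag being carried over to a split-out field global:

  #include "llvm/ADT/Twine.h"
  #include "llvm/IR/GlobalVariable.h"
  #include "llvm/IR/Module.h"
  using namespace llvm;

  // Create the global for one field of the split aggregate and carry the
  // externally_initialized flag across from the original variable.
  static GlobalVariable *makeFieldGlobal(GlobalVariable &Orig, Type *FieldTy,
                                         Constant *FieldInit, const Twine &Name) {
    auto *NewGV = new GlobalVariable(*Orig.getParent(), FieldTy,
                                     Orig.isConstant(), Orig.getLinkage(),
                                     FieldInit, Name);
    NewGV->setExternallyInitialized(Orig.isExternallyInitialized());
    return NewGV;
  }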
James Molloy [Mon, 9 Nov 2015 14:32:05 +0000 (14:32 +0000)]
[LoopVectorize] Address post-commit feedback on r250032
Implemented as many of Michael's suggestions as were possible:
* clang-format the added code while it is still fresh.
* Tried to change Value* to Instruction* in many places in computeMinimumValueSizes; unfortunately, there are several places where Constants need to be handled, so this wasn't possible everywhere.
* Reduce the pass list on loop-vectorization-factors.ll.
* Fix a bug where we were querying MinBWs for I->getOperand(0) but using MinBWs[I].
Silviu Baranga [Mon, 9 Nov 2015 13:26:09 +0000 (13:26 +0000)]
Allow LLE/LD and the loop versioning infrastructure to use SCEV predicates
Summary:
LAA currently generates a set of SCEV predicates that must be checked by users.
In the case of Loop Distribute/Loop Load Elimination, no such predicates could have
been emitted, since we don't allow stride versioning. However, in the future there
could be SCEV predicates that will need to be checked.
This change adds support for SCEV predicate versioning in Loop Distribute, Loop
Load Elimination, and the loop versioning infrastructure.
Charlie Turner [Mon, 9 Nov 2015 12:45:11 +0000 (12:45 +0000)]
[AArch64] Handle extract_subvector(..., 0) in ISel.
Summary:
Lowering this pattern early to an `EXTRACT_SUBREG` was making it impossible to match larger patterns in tblgen that use `extract_subvector(..., 0)` as part of their input pattern.
There is probably a better way of specifying this pattern over all relevant register value types, but I didn't manage to find it.
Renato Golin [Mon, 9 Nov 2015 12:40:30 +0000 (12:40 +0000)]
[EABI] Add LLVM support for -meabi flag
"GCC requires the freestanding environment provide memcpy, memmove, memset
and memcmp": https://gcc.gnu.org/onlinedocs/gcc-5.2.0/gcc/Standards.html
Hence, for GNUEABI targets, LLVM should not convert 'memops' to their
'__aeabi_memops' equivalents; doing so violates the GCC contract.
The -meabi flag controls whether or not LLVM will modify 'memops' in GNUEABI
targets.
Without -meabi: use the triple default EABI.
With -meabi=default: use the triple default EABI.
With -meabi=gnu: use 'memops'.
With -meabi=4 or -meabi=5: use '__aeabi_memops'.
With -meabi set to an unknown value: same as -meabi=default.
Oliver Stannard [Mon, 9 Nov 2015 11:03:18 +0000 (11:03 +0000)]
[CodeGen] Always promote f16 if not legal
We don't currently have any runtime library functions for operations on
f16 values (other than conversions to and from f32 and f64), so we
should always promote it to f32, even if that is not a legal type. In
that case, the f32 values would be softened to f32 library calls.
SoftenFloatRes_FP_EXTEND now needs to check the promoted operand's type,
as it may be a no-op or require a different library call.
getCopyFromParts and getCopyToParts now need to cope with a
floating-point value stored in a larger integer part, as is the case for
any target that needs to store an f16 value in a 32-bit integer
register.
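A minimal sketch, assuming a Clang target that accepts the storage-only __fp16
extension: the half-precision add is performed by extending both operands to
float and truncating the result back (on a target where f32 itself is not
legal, those float operations would in turn be softened into library calls).

  // Roughly lowered as: (__fp16)((float)A + (float)B)
  __fp16 add_halves(__fp16 A, __fp16 B) { return A + B; }

  int main() {
    // 1.5 and 2.25 (and their sum 3.75) are exactly representable in f16.
    return (float)add_halves((__fp16)1.5f, (__fp16)2.25f) == 3.75f ? 0 : 1;
  }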
Colin LeMahieu [Mon, 9 Nov 2015 00:15:45 +0000 (00:15 +0000)]
[AsmParser] Provide target direct access to mnemonic token. Allow assignment parsing to be hooked by target. Allow target to specify if identifier is a label.
NAKAMURA Takumi [Sun, 8 Nov 2015 09:45:06 +0000 (09:45 +0000)]
Appease hosts without HAVE_BACKTRACE nor ENABLE_BACKTRACES.
llvm/lib/Support/Signals.cpp:66:13: warning: unused function 'printSymbolizedStackTrace' [-Wunused-function]
llvm/lib/Support/Signals.cpp:52:13: warning: function 'findModulesAndOffsets' has internal linkage but is not defined [-Wundefined-internal]
Hal Finkel [Sun, 8 Nov 2015 08:04:40 +0000 (08:04 +0000)]
[PowerPC] Fix LoopPreIncPrep not to depend on SCEV constant simplifications
Under most circumstances, if SCEV can simplify X-Y to a constant, then it can
also simplify Y-X to a constant. However, there is no guarantee that this is
always true, and the consensus is not to consider that a correctness bug in SCEV
(although it is undesirable).
PPCLoopPreIncPrep gathers pointers used to access memory (via loads, stores and
prefetches) into buckets, where in each bucket the relative pointer offsets are
constant. We used to keep each bucket as a multimap, where SCEV's subtraction
operation was used to define the ordering predicate. Instead, use a fixed SCEV
base expression for each bucket, record the constant offsets from that base
expression, and adjust it later, if desirable, once all pointers have been
collected.
Doing it this way should be more compile-time efficient than the previous
scheme (in addition to making the implementation less sensitive to SCEV
simplification quirks).
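A minimal standalone model of the new bucket scheme (the types SCEVExpr and
MemInsn are stand-ins, not the actual PPCLoopPreIncPrep data structures): a
fixed base expression plus constant deltas, instead of a multimap whose
ordering predicate had to call SCEV subtraction for every comparison.

  #include <cstdint>
  #include <utility>
  #include <vector>

  struct SCEVExpr {};   // stand-in for the SCEV base expression
  struct MemInsn  {};   // stand-in for the load/store/prefetch instruction

  struct Bucket {
    const SCEVExpr *Base = nullptr;                       // chosen once, up front
    std::vector<std::pair<int64_t, MemInsn *>> Elements;  // (offset from Base, access)
  };

  int main() {
    SCEVExpr Base;
    MemInsn Load0, Load1;
    Bucket B;
    B.Base = &Base;
    B.Elements = {{0, &Load0}, {16, &Load1}};  // accesses at Base+0 and Base+16
    // If a different base later turns out to be preferable, swap Base and
    // shift every recorded offset by a constant -- no per-element SCEV
    // subtraction needed.
    return 0;
  }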
David Majnemer [Sun, 8 Nov 2015 02:36:00 +0000 (02:36 +0000)]
[WinEH] Update PHIs of CATCHRET successors
The TailDuplication machine pass ran across a malformed CFG: a PHI node
referred to its predecessor's predecessor instead of its predecessor.
This occurred because we split the edge in X86ISelLowering when we
processed the CATCHRET but forgot to do something about the PHI nodes.
Sanjoy Das [Sat, 7 Nov 2015 01:55:53 +0000 (01:55 +0000)]
[FunctionAttrs] Fix an iterator wraparound bug
Summary:
This change fixes an iterator wraparound bug in
`determinePointerReadAttrs`.
Ideally, ++'ing off the `end()` of an iplist should result in a failed
assert, but currently iplist seems to silently wrap to the head of the
list on `end()++`. This is why the bad behavior is difficult to
demonstrate.
Disallow implicit conversions between ilist iterators and element
pointers. Explicit conversions still work, of course.
This is the first step toward removing the undefined behaviour in
`ilist` and `iplist`:
http://lists.llvm.org/pipermail/llvm-dev/2015-October/091115.html
The motivation for removing the implicit iterators is that I came across
real bugs (that were *really* getting lucky). More details and some
brief discussion later in that thread:
http://lists.llvm.org/pipermail/llvm-dev/2015-October/091617.html
Note: if you have out-of-tree code, it should be fairly easy to revert
this patch downstream while you update your out-of-tree call sites.
Note that these conversions are occasionally latent bugs (that may
happen to "work" now, but only because of getting lucky with UB;
follow-ups will change your luck). When they are valid, I suggest using
`->getIterator()` to go from pointer to iterator, and `&*` to go from
iterator to pointer.
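A short sketch of the suggested migration using the LLVM C++ API (the helper
names toIterator/toPointer are hypothetical, added only for illustration):

  #include "llvm/IR/BasicBlock.h"
  #include "llvm/IR/Instruction.h"
  using namespace llvm;

  // pointer -> iterator: explicit, via getIterator()
  static BasicBlock::iterator toIterator(Instruction *I) {
    return I->getIterator();
  }

  // iterator -> pointer: explicit, via &*
  static Instruction *toPointer(BasicBlock::iterator It) {
    return &*It;
  }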
David Majnemer [Sat, 7 Nov 2015 00:52:53 +0000 (00:52 +0000)]
[InstCombine] Teach FoldPHIArgZextsIntoPHI about EHPads
FoldPHIArgZextsIntoPHI cannot insert an instruction after the PHI if
there is an EHPad in the BB. Doing so would result in an instruction
inserted after a terminator.
This reverts commit r252372. Apparently I missed clang-tools-extra.
http://lab.llvm.org:8011/builders/llvm-clang-lld-x86_64-scei-ps4-ubuntu-fast/builds/2534/steps/build/logs/stdio
Akira Hatanaka [Fri, 6 Nov 2015 23:55:38 +0000 (23:55 +0000)]
Add 'notail' marker for call instructions.
This marker prevents optimization passes from adding 'tail' or
'musttail' markers to a call. It is used to prevent tail call
optimization from being performed on the call.
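A hedged sketch of setting the new marker through the C++ API (the textual IR
form is a 'notail call'); markNoTail is a hypothetical helper:

  #include "llvm/IR/Instructions.h"
  using namespace llvm;

  // Mark a call so optimization passes will not turn it into a tail call.
  static void markNoTail(CallInst *CI) {
    CI->setTailCallKind(CallInst::TCK_NoTail);
  }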
Pawel Bylica [Fri, 6 Nov 2015 23:21:49 +0000 (23:21 +0000)]
[Support] Use GetTempDir to get the temporary dir path on Windows.
Summary:
In general, GetTempDir follows the same logic as the replaced code: it checks the env variables TMP, TEMP, and USERPROFILE in order. However, it also performs other checks, such as making the separators native (\) and making the path absolute.
This change fixes the FileSystemTest.CreateDir unit test, which had been failing when run from a Unix-like shell on Windows (where the Unix-like path separator (/) is used in the env variables).
Ahmed Bougacha [Fri, 6 Nov 2015 23:16:53 +0000 (23:16 +0000)]
[AArch64][FastISel] Don't even try to select vector icmps.
We used to try to constant-fold them to i32 immediates.
Given that fast-isel doesn't otherwise support vNi1, when selecting
the result users, we'd fall back to SDAG anyway.
However, if the users were in another block, we'd insert broken
cross-class copies (GPR32 to FPR64).
Give up, let SDAG agree with itself on a vNi1 legalization strategy.
Ahmed Bougacha [Fri, 6 Nov 2015 23:16:48 +0000 (23:16 +0000)]
[X86] Fold (trunc (i32 (zextload i16))) into vbroadcast.
When matching non-LSB-extracting truncating broadcasts, we now insert
the necessary SRL. If the scalar resulted from a load, the SRL will be
folded into it, creating a narrower, offset load.
However, i16 loads aren't Desirable, so we get i16->i32 zextloads.
We already catch i16 aextloads; catch these as well.
Ahmed Bougacha [Fri, 6 Nov 2015 23:16:43 +0000 (23:16 +0000)]
[X86] SRL non-LSB extracts when folding to truncating broadcasts.
Now that we recognize this, we can support it instead of bailing out.
That is, we can fold:
  (v8i16 (shufflevector
    (v8i16 (bitcast (v4i32 (build_vector X, Y, ...)))),
    <1,1,...,1>))
into:
  (v8i16 (vbroadcast (i16 (trunc (srl Y, 16)))))
Ahmed Bougacha [Fri, 6 Nov 2015 23:16:38 +0000 (23:16 +0000)]
[X86] Don't fold non-LSB extracts into truncating broadcasts.
We used to incorrectly assume that the offset we're extracting from
was a multiple of the element size. So, we'd fold:
  (v8i16 (shufflevector
    (v8i16 (bitcast (v4i32 (build_vector X, Y, ...)))),
    <1,1,...,1>))
into:
  (v8i16 (vbroadcast (i16 (trunc Y))))
whereas we should have extracted the higher bits from X.
Mehdi Amini [Fri, 6 Nov 2015 20:17:51 +0000 (20:17 +0000)]
Fix SLPVectorizer commutativity reordering
The SLPVectorizer had a very crude way of trying to benefit
from commutativity: it tried to optimize for splat/broadcast
or to have the same operator on the same side.
This is beneficial to the cost model and allows more vectorization
to occur.
This patch improves the logic and makes the detection optimal (locally:
we don't look at the full tree, only at the immediate children).
Should fix https://llvm.org/bugs/show_bug.cgi?id=25247
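A hypothetical source-level illustration of the reordering: in the second lane
the commutative '+' has its operands swapped relative to the first, and
putting the same kind of operand on the same side makes the two lanes
isomorphic for the SLP cost model.

  void f(int *A, int *B, int *R) {
    R[0] = A[0] + (B[0] * 2);
    R[1] = (B[1] * 2) + A[1];  // commutative: can be reordered to match lane 0
  }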
Sanjoy Das [Fri, 6 Nov 2015 19:01:08 +0000 (19:01 +0000)]
[ValueTracking] Add parameters to isImpliedCondition; NFC
Summary:
This change makes the `isImpliedCondition` interface similar to the rest
of the functions in ValueTracking (in that it takes a DataLayout,
AssumptionCache etc.). This is an NFC, intended to make a later diff
less noisy.
Sanjoy Das [Fri, 6 Nov 2015 19:01:03 +0000 (19:01 +0000)]
[ValueTracking] De-pessimize isImpliedCondition around unsigned compares
Summary:
Currently `isImpliedCondition` will optimize "I +_nuw C < L ==> I < L"
only if C is positive. This is an unnecessary restriction -- the
implication holds even if `C` is negative, since `nuw` guarantees
`I <= I +_nuw C` in the unsigned order regardless of the sign of `C`.
Sanjoy Das [Fri, 6 Nov 2015 19:00:57 +0000 (19:00 +0000)]
[ValueTracking] Add a framework for encoding implication rules
Summary:
This change adds a framework for adding more smarts to
`isImpliedCondition` around inequalities. Informally,
`isImpliedCondition` will now try to prove "A < B ==> C < D" by proving
"C <= A && B <= D", since then it follows "C <= A < B <= D".
While this change is in principle NFC, I could not think of a way to not
handle cases like "i +_nsw 1 < L ==> i < L +_nsw 1" (that ValueTracking
did not handle before) while keeping the change understandable. I've
added tests for these cases.
Matt Arsenault [Fri, 6 Nov 2015 18:01:57 +0000 (18:01 +0000)]
AMDGPU: Add pass to detect used kernel features
Use kernel attributes to mark kernels that use certain features
which require user SGPRs. We need to know
before instruction selection begins because it impacts
the kernel calling convention lowering.
For now this only detects the workitem intrinsics.
Teresa Johnson [Fri, 6 Nov 2015 17:50:53 +0000 (17:50 +0000)]
Restore "Move metadata linking after lazy global materialization/linking."
Summary:
This reverts commit r251965.
Restore "Move metadata linking after lazy global materialization/linking."
This restores commit r251926, with fixes for the LTO bootstrapping bot
failure.
The bot failure was caused by references from debug metadata to
otherwise unreferenced globals. Previously, this caused the lazy linking
to link in their defs, which is unnecessary. With this patch, because
lazy linking is complete when we encounter the metadata reference, the
materializer would instead create a declaration. For definitions such as aliases and
comdats, it is illegal to have a declaration. Furthermore, metadata
linking should not change code generation. Therefore, when linking of
global value bodies is complete, the materializer will simply return
nullptr as the new reference for the linked metadata.
This change required fixing a different test to ensure there was a
real reference to a linkonce global that was only being referenced from
metadata.
Note that the new changes to the only-needed-named-metadata.ll test
illustrate an issue with llvm-link -only-needed handling of comdat
groups, whereby it may result in an incomplete comdat group. I note this
in the test comments, but the issue is orthogonal to this patch (it can
be reproduced without any metadata at head).
Reid Kleckner [Fri, 6 Nov 2015 17:06:38 +0000 (17:06 +0000)]
[WinEH] Mark funclet entries and exits as clobbering all registers
Summary:
In this implementation, LiveIntervalAnalysis invents a few register
masks on basic block boundaries that preserve no registers. The nice
thing about this is that it prevents the prologue inserter from thinking
it needs to spill all XMM CSRs, because it doesn't see any explicit
physreg defs in the MI.
Jun Bum Lim [Fri, 6 Nov 2015 16:27:47 +0000 (16:27 +0000)]
[AArch64] Enable the narrow ld promotion only on profitable microarchitectures
The benefit from converting narrow loads into a wider load (r251438) could be
micro-architecturally dependent, as it assumes that a single load with two bitfield
extracts is cheaper than two narrow loads. Currently, this conversion is
enabled only on Cortex-A57, where the performance benefits were verified.