Evandro Menezes [Mon, 28 Aug 2017 22:51:52 +0000 (22:51 +0000)]
[AArch64] Adjust the cost model for Exynos M1 and M2
Add a new predicate to more accurately model the scheduling around branches
and function calls, and of load and store pairs and integer
multiplications.
Craig Topper [Mon, 28 Aug 2017 22:00:27 +0000 (22:00 +0000)]
[InstCombine] Teach select01 helper of foldSelectIntoOp to handle vector splats
We were handling some vectors in foldSelectIntoOp, but not if the operand of the bin op was any kind of vector constant. This patch fixes it to treat vector splats the same as scalars.
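A minimal sketch of the splat handling (assumed shape, not the committed code), using Constant::getSplatValue() to treat a constant vector splat like the scalar it repeats:

  #include "llvm/IR/Constants.h"

  // If V is a scalar constant, return it; if it is a constant vector
  // splat, return the splatted scalar; otherwise return nullptr.
  llvm::Constant *getScalarOrSplat(llvm::Value *V) {
    auto *C = llvm::dyn_cast<llvm::Constant>(V);
    if (!C)
      return nullptr;
    if (!C->getType()->isVectorTy())
      return C;                  // plain scalar constant
    return C->getSplatValue();   // nullptr when not a splat
  }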
Summary:
STRQro* instructions are slower than the alternative ADD/STRQui expanded
instructions on Falkor, so avoid generating them unless we're optimizing
for code size.
Davide Italiano [Mon, 28 Aug 2017 20:29:33 +0000 (20:29 +0000)]
[LoopUnroll] Properly update loop structure in case of successful peeling.
When peeling kicks in, it updates the loop preheader.
Later, a successful full unroll of the loop needs to update a PHI
whose i-th argument comes from the loop preheader, so it had better look
at the correct block. Fixes PR33437.
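A sketch of the invariant the fix restores (assumed shape, not the committed code): PHI rewriting must query the loop's current preheader, since peeling may have replaced it.

  #include "llvm/Analysis/LoopInfo.h"
  #include "llvm/IR/Instructions.h"

  // Look up the PHI operand flowing in from the loop preheader,
  // re-querying the preheader so a peel-updated block is respected.
  llvm::Value *incomingFromPreheader(llvm::Loop *L, llvm::PHINode *PN) {
    llvm::BasicBlock *Preheader = L->getLoopPreheader();
    return PN->getIncomingValueForBlock(Preheader);
  }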
ARMv4 doesn't support the "BX" instruction, which was introduced
with ARMv4t. Adjust the call lowering and tail call implementation
accordingly.
Further changes are necessary to ensure that the presence of the v4t feature
is correctly set. Most importantly, the "generic" CPU for thumb-*
triples should include ARMv4t, since thumb mode without thumb support
would naturally be pointless.
Add a couple of asserts to ensure thumb instructions are not emitted
without CPU support.
Matthias Braun [Mon, 28 Aug 2017 19:48:42 +0000 (19:48 +0000)]
TableGen: Fix subreg composition/concatenation
This fixes 2 problems in subregister hierarchies with multiple levels
and tuples:
1) For bigger tuples, computing secondary subregisters would miss
second-order effects. In the test case, given a register like
`S10_S11_S12_S13_S14` with D5 = S10_S11 and D6 = S12_S13, we would
correctly compute sub0 = D5 and sub1 = D6, but would miss the fact that
we could now form ssub0_ssub1_ssub2_ssub3 (aka sub0_sub1) = D5_D6. This
is fixed by changing computeSecondarySubRegs() to compute a fixpoint.
2) Fixing 1) exposed a problem where TableGen would create multiple
names for effectively the same subregister index. In the test case
the subregister index sub0 is composed from ssub0 and ssub1, and sub1 is
composed from ssub2 and ssub3. TableGen should not create both sub0_sub1
and ssub0_ssub1_ssub2_ssub3 as inferred subregister indexes. This changes
the code to build a transitive closure of the subregister components
before forming new concatenated subregister indexes.
This fix was developed for an out-of-tree target. For the in-tree
targets the only change is in the register information computed for ARM.
There is a slight chance this fixed/improved some register coalescing
around the QQQQ/QQ register classes there but I couldn't see/provoke any
code generation differences.
Matthias Braun [Mon, 28 Aug 2017 19:48:40 +0000 (19:48 +0000)]
TableGen: Add -gen-register-info-debug-dump
Adds a new --gen-register-info-debug-dump mode to TableGen that dumps various register-related information:
- List of register classes with super and subclasses
- List of subregister indexes with lanemasks
- List of registers with subregisters
I will use this in an upcoming commit to create a test.
It may also be useful for target developers wanting to get an overview
of all the register-related information, especially the things inferred
by TableGen and not directly visible in the .td file.
Geoff Berry [Mon, 28 Aug 2017 19:03:45 +0000 (19:03 +0000)]
[ARM] Fix bug in ARMLoadStoreOptimizer when kill flags are missing.
Summary:
ARMLoadStoreOpt::FixInvalidRegPairOp() only checked whether one of the
load destination registers to be split overlapped with the base register
when the base register was marked as killed. Since kill flags may not
always be present, this can lead to incorrect code.
This bug was exposed by my MachineCopyPropagation change D30751 breaking
the sanitizer-x86_64-linux-android buildbot.
Also clean up some dead code and add an assert that a register offset is
never encountered by this code, since it does not handle them correctly.
Taewook Oh [Mon, 28 Aug 2017 18:57:00 +0000 (18:57 +0000)]
Create PHI node for the return value only when the return value has uses.
Summary:
Currently, a phi node is created in the normal destination to unify the return values from promoted calls and the original indirect call. This patch makes the phi node be created only when the return value has uses.
This patch is necessary to generate valid code, as the compiler crashes with the attached test case without it. Without this patch, an illegal phi node that has no incoming value from `entry`/`catch` is created in the `cleanup` block.
I think the existing implementation is good as long as there is at least one use of the original indirect call. `insertCallRetPHI` creates a new phi node in the normal destination block only when the original indirect call dominates its use and the normal destination block. Otherwise, `fixupPHINodeForNormalDest` will handle the unification of return values naturally without creating a new phi node. However, if there's no use, `insertCallRetPHI` still creates a new phi node even when the original indirect call does not dominate the normal destination block, because `getCallRetPHINode` returns false.
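A hypothetical sketch of the new guard (names illustrative, not the committed helpers): only build the unifying phi when the call's result is actually used.

  #include "llvm/IR/Instructions.h"

  // Skip phi creation for void calls and for calls whose result is dead.
  bool needsCallRetPHI(llvm::Instruction *OrigIndirectCall) {
    return !OrigIndirectCall->getType()->isVoidTy() &&
           !OrigIndirectCall->use_empty();
  }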
Zachary Turner [Mon, 28 Aug 2017 18:49:04 +0000 (18:49 +0000)]
[CodeView] Don't output S_UDT symbols for forward decls.
S_UDT symbols are the debugger's "index" for all the structs,
typedefs, classes, and enums in a program. If any of those
structs/classes don't have a complete declaration, or if there
is a typedef to something that doesn't have a complete definition,
then emitting the S_UDT is unhelpful because it doesn't give
the debugger enough information to do anything useful. On the
other hand, it results in a huge size blow-up in the resulting
PDB, which is exacerbated by an order of magnitude when linking
with /DEBUG:FASTLINK.
With this patch, we drop S_UDT records for types that refer either
directly or indirectly (e.g. through a typedef, pointer, etc) to
a class/struct/union/enum without a complete definition. This
brings us about 50% of the way towards parity with /DEBUG:FASTLINK
PDBs generated from cl-compiled object files.
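A toy model of the rule (illustrative only, not the CodeView implementation): follow typedef/pointer referents and drop the S_UDT if the chain ends at a forward declaration.

  // Minimal stand-in for a type graph node; the real check walks
  // CodeView type records instead.
  struct TypeNode {
    bool IsFwdDecl = false;             // record with no complete definition
    const TypeNode *Referent = nullptr; // typedef/pointer/etc. target
  };

  bool referencesIncompleteUDT(const TypeNode *T) {
    for (; T; T = T->Referent)
      if (T->IsFwdDecl)
        return true;
    return false;
  }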
[AMDGPU] Fix regression in AMDGPULibCalls allowing native for doubles
Under -cl-fast-relaxed-math we could use native_sqrt, but f64 was
allowed because it used to produce HSAIL's nsqrt instruction. HSAIL is
no longer a target, so we ended up emitting calls to the non-existent
native_sqrt(double). Add a check for f64 to not return native
functions, and also remove the handling of the f64 case in fold_sqrt.
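A sketch of the added guard (assumed shape, not the committed code):

  #include "llvm/IR/Type.h"

  // Only f32 may be rewritten to a native_* variant under
  // -cl-fast-relaxed-math; native_sqrt(double) does not exist.
  bool mayUseNativeVariant(llvm::Type *ArgTy, bool FastRelaxedMath) {
    return FastRelaxedMath && ArgTy->isFloatTy();
  }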
Craig Topper [Mon, 28 Aug 2017 15:32:50 +0000 (15:32 +0000)]
[X86] Make 128/256-bit extract_subvector Legal instead of Custom. Move combining with BUILD_VECTOR from Legalization to DAG combine
EXTRACT_SUBVECTOR was marked Custom solely so we could combine it with BUILD_VECTOR operations to create smaller BUILD_VECTORS during Legalization. But that sort of combining should really be done by the DAG combiner.
This patch adds the last piece of DAG combine support needed to handle this. Once that's done, we can make the EXTRACT_SUBVECTOR operations Legal.
Ilya Biryukov [Mon, 28 Aug 2017 15:12:24 +0000 (15:12 +0000)]
Changed Dockerfiles to install LLVM into /usr/local
Summary:
Previously, the installation path was simply '/'.
Using '/usr/local' would ensure that LLVM installation does not
conflict with software installed via package managers.
Add an abstract virtual method setDefault() to class Option and implement it in its inheritors, in order to be able to set all the options to their default values in user code without actually knowing all these options. For instance:
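Here is a sketch of that kind of use (assumed usage, not the committed example), iterating cl::getRegisteredOptions() and resetting each option via the setDefault() interface this patch adds:

  #include "llvm/Support/CommandLine.h"

  // Reset every registered command-line option to its default value
  // without naming any option explicitly.
  void resetAllOptionsToDefault() {
    for (auto &KV : llvm::cl::getRegisteredOptions())
      KV.getValue()->setDefault();
  }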
The current version of the LLVM X86 disassembler incorrectly interprets some possible sets of x86 prefixes. This patch is the first step towards closing PR7709 and PR17697. There will be further patches to close related PRs.
Differential Revision: https://reviews.llvm.org/D36788
M lib/Target/X86/Disassembler/X86DisassemblerDecoder.cpp
M lib/Target/X86/Disassembler/X86DisassemblerDecoder.h
A test/MC/Disassembler/X86/prefixes-i386.s
A test/MC/Disassembler/X86/prefixes-x86_64.s
M test/MC/Disassembler/X86/prefixes.txt
Gadi Haber [Mon, 28 Aug 2017 10:04:16 +0000 (10:04 +0000)]
[X86][Haswell] Updating HSW instruction scheduling information
This patch completely replaces the instruction scheduling information for the Haswell architecture target by modifying the file X86SchedHaswell.td located under the X86 Target.
We used the scheduling information retrieved from the Haswell architects in order to replace and modify the existing scheduling.
The patch continues the scheduling replacement effort started with the SNB target in r307529 and r310792.
Information includes latency, number of micro-Ops and used ports by each HSW instruction.
Please expect some performance fluctuations due to code alignment effects.
Lang Hames [Mon, 28 Aug 2017 03:36:46 +0000 (03:36 +0000)]
[Error] Add a handleExpected utility.
handleExpected is similar to handleErrors, but takes an Expected<T> as its first
input value and a fallback functor as its second, followed by an arbitrary list
of error handlers (equivalent to the handler list of handleErrors). If the first
input value is a success value then it is returned from handleExpected
unmodified. Otherwise the contained error(s) are passed to handleErrors, along
with the handlers. If handleErrors returns success (indicating that all errors
have been handled) then handleExpected runs the fallback functor and returns its
result. If handleErrors returns a failure value then the failure value is
returned and the fallback functor is never run.
This simplifies the process of re-trying operations that return Expected values.
Without this utility such retry logic is cumbersome, as the internal Error must
be explicitly extracted from the Expected value, inspected to see if it's
handleable, and then consumed:
  Expected<Foo> Result;
  (void)!!Result; // "Check" Result so that it can be safely overwritten.
  if (auto ValOrErr = tryFoo(Aggressive))
    Result = std::move(ValOrErr);
  else {
    auto Err = ValOrErr.takeError();
    if (Err.isA<HandleableError>()) {
      consumeError(std::move(Err));
      Result = tryFoo(Conservative);
    } else
      return std::move(Err);
  }
With handleExpected, this can be re-written as:
  auto Result =
      handleExpected(
          tryFoo(Aggressive),
          []() { return tryFoo(Conservative); },
          [](HandleableError&) { /* discard to handle */ });
Petar Jovanovic [Sun, 27 Aug 2017 21:07:24 +0000 (21:07 +0000)]
[mips] Generate NMADD and NMSUB instructions when fneg node is present
This patch enables generation of NMADD and NMSUB instructions when an fneg
node is present. These instructions are currently only generated if an fsub
node is present.
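For illustration (assuming the lowering is allowed to fuse, e.g. under fast-math), an expression shape that can now select NMADD through the fneg node:

  // fneg(fadd(fmul(a, b), c)) can become a single NMADD; the analogous
  // shape built around fsub maps to NMSUB.
  float nmaddCandidate(float a, float b, float c) {
    return -(a * b + c);
  }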
Craig Topper [Sun, 27 Aug 2017 19:03:36 +0000 (19:03 +0000)]
[AVX512] Add more patterns for using masked moves for subvector extracts of the lowest subvector. This time with bitcasts between the vselect and the extract.
Ayal Zaks [Sun, 27 Aug 2017 12:55:46 +0000 (12:55 +0000)]
[LV] Fix PR34248 - recommit D32871 after revert r311304
Original commit r311077 of D32871 was reverted in r311304 due to failures
reported in PR34248.
This recommit fixes PR34248 by packing predicated scalars into vectors
only when vectorizing, and avoiding doing so when unrolling without
vectorizing. Added a test derived from the reproducer of PR34248.
Craig Topper [Sat, 26 Aug 2017 22:24:57 +0000 (22:24 +0000)]
[AVX512] Add patterns to match masked extract_subvector with bitcasts between the vselect and the extract_subvector. Remove the late DAG combine.
We used to do a late DAG combine to move the bitcasts out of the way, but I'm starting to think that it's better to canonicalize extract_subvector's type to match the type of its input. I've seen some cases where we've formed two different extract_subvector from the same node where one had a bitcast and the other didn't.
Add some more test cases to ensure we've also got most of the zero masking covered too.
Jatin Bhateja [Sat, 26 Aug 2017 19:02:36 +0000 (19:02 +0000)]
[DAGCombiner] Extending pattern detection for vector shuffle.
Summary:
If all the operands of a BUILD_VECTOR extract elements from the same vector,
then split that vector efficiently based on the maximum vector access index.
Petr Hosek [Sat, 26 Aug 2017 01:32:20 +0000 (01:32 +0000)]
[llvm-objcopy] New layout algorithm that lays out segments first
The current file layout algorithm in llvm-objcopy is simple but
difficult to reason about. It also makes it very complicated to support
nested segments and segments that have offsets that come before a point
after the program headers. To support these cases and simplify one of
the most critical parts of llvm-objcopy, I rewrote the layout algorithm.
Laying out segments first solves most of the issues encountered by the
previous algorithm.
Hiroshi Yamauchi [Sat, 26 Aug 2017 00:31:00 +0000 (00:31 +0000)]
Add options to dump block frequency/branch probability info in text.
Summary:
Add options -print-bfi/-print-bpi that dump block frequency and branch
probability info like -view-block-freq-propagation-dags and
-view-machine-block-freq-propagation-dags do but in text.
This is useful when the graph is very large and complex (the dot command
crashes, lines/edges too close to tell apart, hard to navigate without textual
search) or simply when text is preferred.
Craig Topper [Fri, 25 Aug 2017 23:34:55 +0000 (23:34 +0000)]
[X86] Add patterns to show more failures to use TBM instructions when we're trying to check flags.
We can probably add patterns to fix some of them. But the ones that use 'and' as their root node emit an X86ISD::CMP node in front of the 'and' and then pattern match that to a 'test' instruction. We can't use a tablegen pattern to fix that because we can't remap the cmp result to the flag output of a TBM instruction.
Chandler Carruth [Fri, 25 Aug 2017 22:50:52 +0000 (22:50 +0000)]
[x86] Teach the backend to fold more read-modify-write memory operands
to instructions.
These can't be reasonably matched in tablegen due to the handling of
flags, so we have to do this in C++ code. We only did it for `inc` and
`dec` historically; this starts fleshing that out to more interesting
instructions. Notably, this handles transferring operands to `add` and
`sub`.
Currently this forces them into a register. The next patch will add
support for keeping immediate operands as immediates. Then I'll extend
this beyond just `add` and `sub`.
I'm not super thrilled by the repeated switches in the code but
everything else I tried was really ugly or problematic.
Many thanks to Craig Topper for the suggestions about where to even
begin here and how to make this stuff work.
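For illustration (not a committed test), the source-level shape this fold targets, where the operation can be performed directly against memory:

  // With the fold, each of these can become a single RMW instruction,
  // e.g. `addl %esi, (%rdi)` / `subl %esi, (%rdi)`, instead of a
  // load/op/store sequence through a register.
  void rmwAdd(int *P, int X) { *P += X; }
  void rmwSub(int *P, int X) { *P -= X; }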
Craig Topper [Fri, 25 Aug 2017 18:39:40 +0000 (18:39 +0000)]
[InstCombine] Don't fall back to only calling computeKnownBits if the upper bit of Add/Sub is demanded.
Just create an all 1s demanded mask and continue recursing like normal. The recursive calls should be able to handle an all 1s mask and do the right thing.
The only time we should care about knowing whether the upper bit was demanded is when we need to know if we should clear the NSW/NUW flags.
Now that we have a consistent path through the code for all cases, use KnownBits::computeForAddSub to compute the known bits at the end since we already have the LHS and RHS.
My larger goal here is to move the code that turns add into xor if only 1 bit is demanded and no bits below it are non-zero from InstCombiner::OptAndOp to here. This will allow it to be more general instead of just looking for 'add' and 'and' with constant RHS.
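A quick exhaustive self-check of the identity behind that planned fold: when only one bit is demanded and the constant has no set bits below it, no carry reaches the demanded bit, so add and xor agree there.

  #include <cassert>
  #include <cstdint>

  int main() {
    // Only bit 3 is demanded; the constant 8 has no bits below bit 3,
    // so no carry can propagate into bit 3 and add behaves like xor.
    for (uint32_t X = 0; X <= 0xFFFFu; ++X)
      assert(((X + 8) & 8) == ((X ^ 8) & 8));
    return 0;
  }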
Florian Hahn [Fri, 25 Aug 2017 16:52:29 +0000 (16:52 +0000)]
[LoopInterchange] Skip zext instructions when looking for induction var.
Summary:
SimplifyIndVar may introduce zext instructions to widen arguments of the
loop exit check. They should not prevent us from splitting the loop at
the induction variable, but maybe the check should be more conservative,
e.g. making sure it only extends arguments used by a comparison?
David Blaikie [Fri, 25 Aug 2017 16:46:07 +0000 (16:46 +0000)]
Fix unused-lambda-capture warning by using default capture-by-ref
Since the lambda isn't escaped (via a std::function or similar) it's
fine/better to use default capture-by-ref to provide semantics similar
to language-level nested scopes (if/for/while/etc).
Amjad Aboud [Fri, 25 Aug 2017 11:07:54 +0000 (11:07 +0000)]
[InstCombine] Consider more cases where SimplifyDemandedUseBits does not convert AShr to LShr.
There are cases where AShr has a better chance of being optimized than LShr, especially when the demanded bits are not known to be zero, yet are known to be similar to the sign bit.
Chandler Carruth [Fri, 25 Aug 2017 02:32:48 +0000 (02:32 +0000)]
Teach the llc check updater to recognize the end-of-function comment
used on Windows and sometimes Darwin. Cleans up generated patterns for
me quite a bit.
Gor Nishanov [Fri, 25 Aug 2017 02:25:10 +0000 (02:25 +0000)]
[coroutines] Add support for symmetric control transfer (musttail on coro.resumes followed by a suspend)
Summary:
Add musttail to any resume instruction that is immediately followed by a
suspend (i.e. a ret). We do this even at -O0 to support guaranteed tail calls
for symmetric coroutine control transfer (C++ Coroutines TS extension).
This transformation is done only in the resume part of the coroutine that has
identical signature and calling convention as the coro.resume call.
Chandler Carruth [Fri, 25 Aug 2017 02:06:36 +0000 (02:06 +0000)]
[x86] NFC: More refactoring to pave the way to extending this ISel logic
to handle other x86 pseudos that carry flags and thus can't be matched
by our ISel patterns with fused memory accesses.
Chandler Carruth [Fri, 25 Aug 2017 02:04:03 +0000 (02:04 +0000)]
[x86] NFC - Refactor the custom lowering of `(load; op; store)` RMW sequences.
This extracts the code out of a giant switch in preparation for expanding it to
handle operations other than `inc` and `dec`. Add a FIXME indicating what's
coming here.
Matt Arsenault [Fri, 25 Aug 2017 01:26:13 +0000 (01:26 +0000)]
DAG: Fix naming crime
Because isOperationCustom was only checking for custom
lowering on illegal types, it behaved inconsistently
with the other isOperation* functions, so that
isOperationLegalOrCustom != (isOperationLegal || isOperationCustom).
Luckily, this is only used in one place, which already checks the
type legality on its own.
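A sketch of the consistent form (assumed shape of the fix, written as a member of the lowering class): check the action alone, with no type-legality test.

  // With no isTypeLegal() test, the identity
  // isOperationLegalOrCustom == (isOperationLegal || isOperationCustom)
  // holds by construction.
  bool isOperationCustom(unsigned Op, EVT VT) const {
    return getOperationAction(Op, VT) == Custom;
  }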
[unittests] Remove reverse iteration tests which use pointer-like keys
Summary: The expected order of pointer-like keys is hash-function-dependent which in turn depends on the platform/environment. Need to come up with a better way to test reverse iteration of containers with pointer-like keys.
Chandler Carruth [Fri, 25 Aug 2017 00:56:05 +0000 (00:56 +0000)]
[x86] Back out one aspect of r311318: don't generically set
FeatureSlowUAMem32.
The idea was to mark things that are slow on widely available processors
as slow in the generic CPU so that the code generated for that CPU would
be fast across those processors. However, for this feature that doesn't
work out very well at all.
The problem here is that you can very easily enable AVX or AVX2 on top
of this generic CPU. For example, this can happen just by using AVX2
intrinsics from Clang within a region of code guarded by a dynamic CPU
feature test. When you do that, the generated code with SlowUAMem32 set
is ... amazingly slower. The problem is that there really aren't very
good alternatives to the unaligned loads, and so our vector codegen
regresses significantly.
The other issue is that there are plenty of AMD CPUs with AVX1 that
don't set FeatureSlowUAMem32 and so we shouldn't just check for AVX2
instead of this special feature. =/
It would be nice to have the target attribute logic be able to
enable/disable more than just one feature at a time and control this in
a more fine-grained and useful way, but that doesn't seem easy. Given
that it is only Sandybridge and Ivybridge that set this feature, for now
I'm just backing it out of the generic CPU. That has the additional
advantage of going back to the previous state that people seemed vaguely
happy with.
Stephen Hines [Fri, 25 Aug 2017 00:48:21 +0000 (00:48 +0000)]
Fix two (three) more issues with unchecked Error.
Summary:
If assertions are disabled, but LLVM_ABI_BREAKING_CHANGES is enabled,
this will cause an issue with an unchecked Success. Switching to
consumeError() is the correct way to bypass the check. This patch also
includes disabling 2 tests that can't work without assertions enabled,
since llvm_unreachable() with NDEBUG won't crash.
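A minimal sketch of the pattern the fix applies: with ABI-breaking checks enabled, even a success value must be marked handled, and consumeError() does that without asserting.

  #include "llvm/Support/Error.h"

  // Deliberately discard an Error (success or failure), satisfying the
  // unchecked-Error detection that ABI-breaking checks enable.
  void discardResult(llvm::Error Err) {
    llvm::consumeError(std::move(Err));
  }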
Chandler Carruth [Fri, 25 Aug 2017 00:34:07 +0000 (00:34 +0000)]
[x86] Fix an amazing goof in the handling of sub, or, and xor lowering.
The comment for this code indicated that it should work similarly to our
handling of add lowering above: if we see uses of an instruction other
than flag usage and store usage, it tries to avoid the specialized
X86ISD::* nodes that are designed for flag+op modeling and emits an
explicit test.
Problem is, only the add case actually did this. In all the other cases,
the logic was incomplete and inverted. Any time the value was used by
a store, we bailed on the specialized X86ISD node. All of this appears
to have been historical where we had different logic here. =/
Turns out, we have quite a few patterns designed around these nodes. We
should actually form them. I fixed the code to match what we do for add,
and it has quite a positive effect just within some of our test cases.
The only thing close to a regression I see is using:
  notl %r
  testl %r, %r
instead of:
  xorl $-1, %r
But we can add a pattern or something to fold that back out. The
improvements seem more than worth this.
I've also worked with Craig to update the comments to no longer be
actively contradicted by the code. =[ Some of this still remains
a mystery to both Craig and myself, but this seems like a large step in
the direction of consistency and slightly more accurate comments.
Many thanks to Craig for help figuring out this nasty stuff.
Sanjay Patel [Thu, 24 Aug 2017 23:24:43 +0000 (23:24 +0000)]
[DAG] convert vector select-of-constants to logic/math
This goes back to a discussion about IR canonicalization. We'd like to preserve and convert
more IR to 'select' than we currently do because that's likely the best choice in IR:
http://lists.llvm.org/pipermail/llvm-dev/2016-September/105335.html
...but that's often not true for codegen, so we need to account for this pattern coming
into the backend and transform it to better DAG ops.
Steps in this patch:
1. Add an EVT param to the existing convertSelectOfConstantsToMath() TLI hook to more finely
enable this transform. Other targets will probably want that anyway to distinguish scalars
from vectors. We're using that here to exclude AVX512 targets, but it may not be necessary.
2. Convert a vselect to ext+add. This eliminates a constant load/materialization, and the
vector ext is often free.
Implementing a more general fold using xor+and can be a follow-up for targets that don't have
a legal vselect. It's also possible that we can remove the TLI hook for the special case fold
implemented here because we're eliminating a constant, but it needs to be tested on other
targets.
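For illustration (not a committed test), a loop whose vectorized form contains a select between two constants; with this change the backend can lower it as ext+add, e.g. zext(cond) + 1, instead of blending two constant vectors:

  void selectOfConstants(int *R, const int *A, int N) {
    for (int I = 0; I < N; ++I)
      R[I] = A[I] > 0 ? 2 : 1;   // select(cond, 2, 1) == zext(cond) + 1
  }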
Sanjay Patel [Thu, 24 Aug 2017 22:54:01 +0000 (22:54 +0000)]
[InstCombine] fix and enhance udiv/urem narrowing
There are 3 small independent changes here:
1. Account for multiple uses in the pattern matching: avoid the transform if it increases the instruction count.
2. Add a missing fold for the case where the numerator is the constant: http://rise4fun.com/Alive/E2p
3. Enable all folds for vector types.
There's still one more potential change - use "shouldChangeType()" to keep from transforming to an illegal integer type.
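For illustration (not a committed test), the shape that fold #2 narrows: a wide udiv whose numerator is a small constant and whose denominator is a zero-extended narrow value can be computed in the narrow type.

  #include <cstdint>

  // 224 fits in 8 bits, so the 32-bit division can be narrowed to an
  // 8-bit one: (uint8_t)224 / X, zero-extended afterwards.
  uint32_t narrowableUDiv(uint8_t X) {
    return 224u / static_cast<uint32_t>(X);
  }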