Cullen Rhodes [Tue, 27 Aug 2019 12:57:09 +0000 (12:57 +0000)]
[IntrinsicEmitter] Support scalable vectors in intrinsics
Summary:
This patch adds support for scalable vectors in intrinsics, enabling
intrinsics such as the following to be defined:
declare <vscale x 4 x i32> @llvm.something.nxv4i32(<vscale x 4 x i32>)
Support for this is implemented by defining a new type descriptor for
scalable vectors and adding mangling support for scalable vector types
in the name mangling scheme used by 'any' types in intrinsic signatures.
Tests have been added for IRBuilder to check that scalable vectors work as
expected when using intrinsics through this interface. This required
implementing an intrinsic that is explicitly defined with scalable
vectors, e.g. LLVMType<nxv4i32>; an SVE floating-point convert
intrinsic was used for this. The behaviour of the overloaded type
LLVMScalarOrSameVectorWidth with scalable vectors is tested using the
existing masked load intrinsic. Also added an .ll test to check that the
Verifier catches a bad intrinsic argument when a fixed-width
predicate (mask) is passed to the masked.load intrinsic where a scalable
one is expected.
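For illustration, a minimal C++ sketch of exercising such an intrinsic through IRBuilder; the helper name is hypothetical, and it assumes the VectorType::get overload that takes a scalable flag:
```
#include "llvm/IR/Constants.h"
#include "llvm/IR/IRBuilder.h"
using namespace llvm;

// Hypothetical helper: build a masked load whose mask is an all-true
// <vscale x 4 x i1> predicate; the intrinsic name then gets mangled
// with an nxv4i32 suffix as described above.
static Value *buildScalableMaskedLoad(IRBuilder<> &B, Value *Ptr) {
  Type *MaskTy = VectorType::get(B.getInt1Ty(), 4, /*Scalable=*/true);
  Value *Mask = Constant::getAllOnesValue(MaskTy);
  return B.CreateMaskedLoad(Ptr, /*Align=*/4, Mask);
}
```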
Pavel Labath [Tue, 27 Aug 2019 11:24:08 +0000 (11:24 +0000)]
Add error handling to the DataExtractor class
Summary:
This is motivated by D63591, where we realized that there isn't a really
good way of telling whether a DataExtractor is reading actual data, or
just returning default values because it reached the end of the
buffer.
This patch resolves that by providing a new "Cursor" class. A Cursor
object encapsulates two things:
- the current position/offset in the DataExtractor
- an error object
Storing the error object inside the Cursor enables one to use the same
pattern as the std::{io}stream API, where one can blindly perform a
sequence of reads and only check for errors once at the end of the
operation. As with the stream API, as soon as we encounter one
error, all of the subsequent operations are skipped (they return default
values) too, even if they would succeed with a clear error state. Unlike the
std::stream API (but in line with other LLVM APIs), we force the error
state to be checked through usage of llvm::Error.
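A minimal sketch of the resulting usage pattern (the buffer layout is made up for illustration):
```
#include "llvm/Support/DataExtractor.h"
#include "llvm/Support/Error.h"
using namespace llvm;

void parse(StringRef Bytes) {
  DataExtractor Data(Bytes, /*IsLittleEndian=*/true, /*AddressSize=*/8);
  DataExtractor::Cursor C(0);
  uint32_t A = Data.getU32(C); // If this read runs past the end...
  uint16_t B = Data.getU16(C); // ...this one is skipped and returns 0.
  // The error state must be checked (and consumed) exactly once.
  if (Error E = C.takeError()) {
    consumeError(std::move(E));
    return;
  }
  (void)A; (void)B; // use the values...
}
```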
Tim Northover [Tue, 27 Aug 2019 10:21:11 +0000 (10:21 +0000)]
AArch64: avoid creating cycle in DAG for post-increment NEON ops.
Inserting a value into Visited has the effect of terminating a search for
predecessors if that node is seen. This is legitimate for the base address, and
acts as a slight performance optimization, but the vector-building node can be
part of a legitimate cycle, so we shouldn't stop searching there.
George Rimar [Tue, 27 Aug 2019 09:58:39 +0000 (09:58 +0000)]
[yaml2obj] - Don't allow setting StOther and Other/Visibility at the same time.
This is a follow-up discussed in the comments of D66583.
Currently, if we have, for example, both StOther and Other set in the YAML document for a symbol,
then yaml2obj reports an "unknown key 'Other'" error.
This happens because 'mapOptional()' is never called for 'Other/Visibility' in this case,
leaving those keys unhandled.
That message does not describe the real reason for the error well. This patch fixes it.
Craig Topper [Tue, 27 Aug 2019 06:39:50 +0000 (06:39 +0000)]
[SelectionDAGBuilder] Hide existence of ConstantDataVector vector from visitGetElementPtr.
ConstantDataVector is a specialized version of ConstantVector
that stores data in a packed array of bits instead of as
individual pointers to other Constants. But we really shouldn't
expose that if we can avoid it. And we should handle a regular
ConstantVector equally well.
This removes a dyn_cast to ConstantDataVector and just calls
getSplatValue directly on a Constant* if the type is a vector.
Summary:
During the fixpoint iteration, including the manifest stage, we should
not delete stuff, as other abstract attributes might have a reference to
the value. Through the API this can now be done safely at the very end.
Pengfei Wang [Tue, 27 Aug 2019 01:53:24 +0000 (01:53 +0000)]
[WinEH] Allocate space in funclets stack to save XMM CSRs
Summary:
This is an alternate approach to D63396
Currently funclets reuse the same stack slots that are used in the
parent function for saving callee-saved xmm registers. If the parent
function modifies a callee-saved xmm register before an exception is
thrown, the catch handler will overwrite the original saved value.
This patch allocates space in the funclet's stack for saving callee-saved xmm
registers and uses RSP instead of RBP to access that memory.
Craig Topper [Tue, 27 Aug 2019 01:07:37 +0000 (01:07 +0000)]
[Analysis] In EmitGEPOffset, use Constant::getUniqueInteger to handle struct indices in vector GEPs.
We previously called getSplatValue if the index had a vector type,
but getSplatValue returns null for non-splats. This would cause
a nullptr dereference if it wasn't a splat.
Using getUniqueInteger gives us an assert if it's a vector type
but the value isn't a splat. This is what is used in
SelectionDAGBuilder's code that expands GEPs as well.
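A hedged sketch of the difference (not the actual EmitGEPOffset code; the helper is hypothetical):
```
#include "llvm/IR/Constants.h"
using namespace llvm;

// In a vector GEP, a struct index must be a splat constant.
// getSplatValue returns null for a non-splat, so the old code could
// dereference nullptr; getUniqueInteger instead asserts on non-splat
// vectors and handles scalar and splat-vector constants uniformly.
static uint64_t structFieldIndex(Constant *Idx) {
  return Idx->getUniqueInteger().getZExtValue();
}
```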
Matt Arsenault [Tue, 27 Aug 2019 00:18:09 +0000 (00:18 +0000)]
AMDGPU: Combine directly on mul24 intrinsics
The problem these are supposed to work around can occur before the
intrinsics are lowered into the nodes. Try to directly simplify them
so they are matched before the bit assert operations can be optimized
out.
Matt Arsenault [Tue, 27 Aug 2019 00:08:31 +0000 (00:08 +0000)]
AMDGPU: Run AMDGPUCodeGenPrepare after scalar opts
The mul24 matching could interfere with SLSR and the other addressing
mode related passes. This probably is not the optimal placement, but
is an intermediate step. This should probably be moved after all the
generic IR passes, particularly LSR. Moving this after LSR seems to
help in some cases, and hurts others.
As-is in this patch, in idiv-licm, it saves 1-2 instructions inside
some of the loop bodies, but increases the number in others. Moving
this later helps these loops. In the new lsr tests in
mul24-pass-ordering, the intrinsic prevents introducing more
instructions in the loop preheader, so moving this later ends up
hurting them. This shouldn't be any worse than before the intrinsics
were introduced in r366094, and LSR should probably be smarter. I
think it's because it doesn't know the `and` inside the loop will be
folded away.
Heejin Ahn [Mon, 26 Aug 2019 21:51:35 +0000 (21:51 +0000)]
[WebAssembly] Fix SSA rebuilding in SjLj transformation
Summary:
Previously we skipped uses within the same BB as a def when rebuilding
SSA after SjLj transformation. For example, before the transformation:
```
for.cond:
  %0 = phi i32 [ %var, %for.inc ] ...
  %var = ...
  br label %for.inc
```
In this BB, %var should be defined on all paths from %for.inc to make %0
valid. In the input this was true; %for.inc's only predecessor was
%for.cond. But after the SjLj transformation, it is possible that %for.inc
has other predecessors that are reachable without going through %for.cond:
```
entry.split:
  ...
  br i1 %a, label %bb.1, label %for.inc
```
In this case, we can't use %var in the `phi` instruction in %for.cond,
because %var is not defined on all paths through %for.inc (if the
control flow is %entry -> %entry.split -> %for.inc -> %for.cond, %var
has not been defined by the time we reach the `phi`). But the previous
code excluded users within the same BB as the def, so those instructions
were not rewritten properly. Instructions within the same BB should also
be candidates for rewriting if they appear _before_ the original
definition.
Evgeniy Stepanov [Mon, 26 Aug 2019 21:44:55 +0000 (21:44 +0000)]
[hwasan] Fix test failure in r369721.
Try harder to emulate "old runtime" in the test.
To get the old behavior with the new runtime library, we need to both
disable personality function wrapping and enable landing pad
instrumentation.
Lang Hames [Mon, 26 Aug 2019 21:42:51 +0000 (21:42 +0000)]
[ORC] Make sure that queries on emitted-but-not-ready symbols fail correctly.
In r369808 the failure scheme for ORC symbols was changed to make
MaterializationResponsibility objects responsible for failing the symbols
they represented. This simplifies error logic in the case where symbols are
still covered by a MaterializationResponsibility, but left a gap in error
handling: Symbols that have been emitted but are not yet ready (due to a
dependence on some unemitted symbol) are not covered by a
MaterializationResponsibility object. Under the scheme introduced in r369808
such symbols would be moved to the error state, but queries on those symbols
were never notified. This led to deadlocks when such symbols were failed.
This commit updates error logic to immediately fail queries on any symbol that
has already been emitted if one of its dependencies fails.
Lang Hames [Mon, 26 Aug 2019 21:42:47 +0000 (21:42 +0000)]
[ORC] Fix an overly aggressive assert.
Symbols that have not been queried will not have MaterializingInfo entries,
so remove the assert that all failed symbols should have these entries.
Also updates the loop to only remove entries that were found earlier.
Heejin Ahn [Mon, 26 Aug 2019 21:41:17 +0000 (21:41 +0000)]
[WebAssembly] Combine emscripten SjLj tests
Summary:
Combine a test in lower-em-sjlj-longjmp-only.ll into lower-em-sjlj.ll,
because the test command is the same and I don't see any reason it
should be a separate file. Also converted tabs into spaces and fixed
indentation in lower-em-sjlj-sret.ll. (lower-em-sjlj-sret.ll uses a
different test command (llc), so it couldn't be combined.)
This teaches the importer to handle INSERT_SUBREG instructions.
We were missing patterns involving INSERT_SUBREG in AArch64. It appears in
AArch64InstrInfo.td 107 times, and 14 times in AArch64InstrFormats.td.
To meaningfully import it, the GlobalISelEmitter needs to know how to infer a
super register class for a given register class.
This patch introduces the following:
- `getSuperRegForSubReg`, a function which finds the largest register class
which supports a value type and subregister index
- `inferSuperRegisterClass`, a function which finds the appropriate super
register class for an INSERT_SUBREG
- `inferRegClassFromPattern`, a function which allows for some trivial
lookthrough into instructions
- `getRegClassFromLeaf`, a helper function which returns the register class for
a leaf `TreePatternNode`
- Support for subregister index operands in `importExplicitUseRenderer`
It also
- Updates tests in each backend which are impacted by the change
- Adds GlobalISelEmitterSubreg.td to test that we import and skip the expected
patterns
As a result of this patch, INSERT_SUBREG patterns in X86 may use the
LOW32_ADDR_ACCESS_RBP register class instead of GR32. This is correct, since the
register class contains the same registers as GR32 (except with the addition of
RBP). So, this also teaches X86 to handle that register class. This is in line
with X86ISelLowering, which treats this as a GR class.
[Attributor] Adjust and test the iteration bound of tests
Summary:
Try to verify how many iterations we need for a fixpoint in our tests.
This patch adjusts the way we count to make it easier to follow. It also
adjusts the bounds to actually account for a fixpoint and not only the
minimum number of iterations needed to pass all checks.
FileManager: Use llvm::Expected in new getFileRef API
`FileManager::getFileRef` is a modern API which we expect to convert to
over time. We should modernize the error handling as well, using
`llvm::Expected` instead of `llvm::ErrorOr`, to help clients that care
about errors to ensure nothing is missed.
However, not all clients care. I've also added another path for those
that don't:
- `FileEntryRef` is now copy- and move-assignable (using a pointer
instead of a reference).
- `FileManager::getOptionalFileRef` returns an `llvm::Optional` instead
of `llvm::Expected`.
- Added an `llvm::expectedToOptional` utility in case this is useful
elsewhere.
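A sketch of the two lookup flavors, assuming a FileManager instance `FM`; the file name and error handling are illustrative:
```
#include "clang/Basic/FileManager.h"
#include "llvm/Support/Error.h"
using namespace clang;

void lookup(FileManager &FM) {
  // Error-aware path: the llvm::Expected must be checked.
  if (llvm::Expected<FileEntryRef> Ref = FM.getFileRef("foo.h"))
    (void)Ref->getName();
  else
    llvm::consumeError(Ref.takeError());

  // Convenience path for clients that don't care why a lookup failed.
  if (llvm::Optional<FileEntryRef> Ref = FM.getOptionalFileRef("foo.h"))
    (void)Ref->getName();
}
```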
Craig Topper [Mon, 26 Aug 2019 18:23:26 +0000 (18:23 +0000)]
[X86] Add a hack to combinePMULDQ to manually turn SIGN_EXTEND_VECTOR_INREG/ZERO_EXTEND_VECTOR_INREG inputs into an ANY_EXTEND_VECTOR_INREG style shuffle
ANY_EXTEND_VECTOR_INREG isn't currently marked Legal, which prevents SimplifyDemandedBits from turning SIGN/ZERO_EXTEND_VECTOR_INREG into it after op legalization. And even if we did make it Legal, combineExtInVec doesn't do shuffle combining on the VECTOR_INREG nodes until AVX1.
This patch adds a quick hack to combinePMULDQ to directly emit a vector shuffle corresponding to an ANY_EXTEND_VECTOR_INREG operation. This avoids both of those issues without creating any other regressions on our tests. The xop-ifma.ll change here also showed up when I tried to resurrect D56306 and seemed to be the only improvement that patch creates now. This is a more direct way to get the benefit.
Craig Topper [Mon, 26 Aug 2019 17:59:11 +0000 (17:59 +0000)]
[DAGCombiner][X86] Teach SimplifyVBinOp to fold VBinOp (concat X, undef/constant), (concat Y, undef/constant) -> concat (VBinOp X, Y), VecC
This improves the combine I included in D66504 to handle constants in the upper operands of the concat. If we can constant fold them away we can pull the concat after the bin op. This helps with chains of madd reductions on X86 from loop unrolling. The loop madd reduction pattern creates pmaddwd with half the width of the add that follows it using zeroes to fill the upper bits. If we have two of these added together we can pull the zeroes through the accumulating add and then shrink it.
By default, the Attributor tracks potential dependences between abstract
attributes based on the issued Attributor::getAAFor queries. This
simplifies the development of new abstract attributes but it can also
lead to spurious dependences that might increase compile time and make
internalization harder (D63312). With this patch, abstract attributes
can opt-out of implicit dependence tracking and instead register
dependences explicitly. It is up to the implementation to make sure all
existing dependences are registered.
Amaury Sechet [Mon, 26 Aug 2019 17:02:12 +0000 (17:02 +0000)]
[DAGCombiner] Remove a bunch of redundant AddToWorklist calls.
Summary:
This comes as a first step toward processing the DAG nodes in topological order. Doing so ensures that the arguments of a node are combined before the node itself is combined, which exposes more opportunities for optimization and/or reduces the number of patterns a node has to match.
DAGCombiner adding nodes to the worklist in various places causes the nodes to be in a different order from what is expected. In addition, this is redundant because these nodes end up being added to the worklist anyway due to the machinery at line 1621.
Wei Mi [Mon, 26 Aug 2019 15:54:16 +0000 (15:54 +0000)]
[SampleFDO] Extract the code calling each section reader to readOneSection.
This is a follow-up of https://reviews.llvm.org/D66513. The code calling each
section reader should be put into a separate function (readOneSection), so
SampleProfileExtBinaryReader can override it. Otherwise, the base class
SampleProfileExtBinaryBaseReader will need to be aware of all different kinds
of section readers. That is not right.
Bjorn Pettersson [Mon, 26 Aug 2019 09:29:53 +0000 (09:29 +0000)]
[LoopUnroll] Handle certain PHIs in full unrolling properly
Summary:
When reconstructing the CFG of the loop after unrolling,
LoopUnroll could in some cases remove the phi operands of
loop-carried values instead of preserving them, resulting
in undef phi values after loop unrolling.
When doing this reconstruction, avoid removing incoming
phi values for phis in the successor blocks if the successor
is the block we are jumping to anyway.
Craig Topper [Sun, 25 Aug 2019 17:59:49 +0000 (17:59 +0000)]
[X86][DAGCombiner] Teach narrowShuffle to use concat_vectors instead of inserting into undef
Summary:
Concat_vectors is more canonical during early DAG combine. For example, it's what's used by SelectionDAGBuilder when converting IR shuffles into SelectionDAG shuffles when element counts between inputs and mask don't match. We also have combines in DAGCombiner that can pull concat_vectors through a shuffle. See partitionShuffleOfConcats. So it seems like concat_vectors is a better operation to use here. I had to teach DAGCombiner's SimplifyVBinOp to also handle concat_vectors with undef. I haven't checked yet whether we can remove the INSERT_SUBVECTOR version in there or not.
I didn't want to mess with the other caller of getShuffleHalfVectors that's used during shuffle lowering where insert_subvector probably is what we want to produce so I've enabled this via a boolean passed to the function.
Xing Xue [Sun, 25 Aug 2019 15:17:25 +0000 (15:17 +0000)]
[PowerPC][AIX] Adds support for writing the .data section in assembly files
Summary:
Adds support for generating the .data section in assembly files for global variables with a non-zero initialization. The support for writing the .data section in XCOFF object files will be added in a follow-on patch. Relocations are not included in this patch.
Bjorn Pettersson [Sun, 25 Aug 2019 10:54:44 +0000 (10:54 +0000)]
Fixup in test/DebugInfo/X86/live-debug-vars-discard-invalid.mir
The test case used invalid source operands as input
to BTS64rr instructions (feeding register operands with
immediates). This patch changes those instructions to use
BTS64ri8 instead, which seems to better match the
operand types.
Fixes problems seen in https://reviews.llvm.org/D63973.
Nikita Popov [Sun, 25 Aug 2019 08:04:22 +0000 (08:04 +0000)]
[SDAG] Fold umul_lohi with 0 or 1 multiplicand
These can turn up during multiplication legalization. In principle
these should also apply to smul_lohi, but I wasn't able to figure
out how to produce those with the necessary operands.
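A rough sketch of what such a fold looks like at the DAG level (illustrative, not the actual combine code):
```
#include "llvm/CodeGen/SelectionDAG.h"
using namespace llvm;

// umul_lohi(x, 0) -> {0, 0}; umul_lohi(x, 1) -> {x, 0}.
static SDValue foldUMulLoHi(SDNode *N, SelectionDAG &DAG) {
  SDLoc DL(N);
  EVT VT = N->getValueType(0); // Both results share this type.
  SDValue Zero = DAG.getConstant(0, DL, VT);
  if (isNullConstant(N->getOperand(1)))
    return DAG.getMergeValues({Zero, Zero}, DL);
  if (isOneConstant(N->getOperand(1)))
    return DAG.getMergeValues({N->getOperand(0), Zero}, DL);
  return SDValue();
}
```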
Matt Arsenault [Sat, 24 Aug 2019 22:14:37 +0000 (22:14 +0000)]
AMDGPU: Generate check lines
Checking all the instructions will help catch LICM changes when passes
are reordered. Also switch to using gfx9 since global stores make the
relevant instructions more obvious.
Benjamin Kramer [Sat, 24 Aug 2019 16:19:32 +0000 (16:19 +0000)]
Hack around a GCC ICE that was fixed in GCC 6.2
lib/Target/X86/AsmParser/X86AsmParser.cpp: In member function ‘void {anonymous}::X86AsmParser::SwitchMode(unsigned int)’:
lib/Target/X86/AsmParser/X86AsmParser.cpp:927:76: in constexpr expansion of ‘AllModes.llvm::FeatureBitset::FeatureBitset(std::initializer_list<unsigned int>{((const unsigned int*)(& ._157)), 3u})’
include/llvm/MC/SubtargetFeature.h:56:12: in constexpr expansion of ‘llvm::FeatureBitset::set(I)’
lib/Target/X86/AsmParser/X86AsmParser.cpp:927:76: internal compiler error: in fold_binary_loc, at fold-const.c:9921
FeatureBitset AllModes({X86::Mode64Bit, X86::Mode32Bit, X86::Mode16Bit});
^
Benjamin Kramer [Sat, 24 Aug 2019 15:46:49 +0000 (15:46 +0000)]
Try to make MSVC 2017 happy.
AArch64BaseInfo.h(316): error C3615: constexpr function 'llvm::SysAlias::SysAlias' cannot result in a constant expression
AArch64BaseInfo.h(316): note: failure was caused by call of undefined function or one not declared 'constexpr'
AArch64BaseInfo.h(316): note: see usage of 'llvm::FeatureBitset::FeatureBitset'
Benjamin Kramer [Sat, 24 Aug 2019 15:02:44 +0000 (15:02 +0000)]
Use a bit of relaxed constexpr to make FeatureBitset constant initializable
This requires std::initializer_list to be a literal type, which it is
starting with C++14. The downside is that std::bitset is still not
constexpr-friendly so this change contains a re-implementation of most
of it.
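A simplified sketch of the idea (not the actual FeatureBitset code): with C++14 relaxed constexpr, a bitset-like class can be built and queried at compile time.
```
#include <cstdint>
#include <initializer_list>

class ConstexprBitset {
  uint64_t Bits[4] = {}; // 256 bits of bitset-style storage.
public:
  constexpr ConstexprBitset() = default;
  constexpr ConstexprBitset(std::initializer_list<unsigned> Init) {
    // Loops and member mutation in constexpr require C++14.
    for (unsigned I : Init)
      Bits[I / 64] |= uint64_t(1) << (I % 64);
  }
  constexpr bool test(unsigned I) const {
    return (Bits[I / 64] >> (I % 64)) & 1;
  }
};

// Usable as a constant initializer; no dynamic initialization needed.
constexpr ConstexprBitset Modes({1, 2, 3});
static_assert(Modes.test(2), "bit set at compile time");
```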
Roman Lebedev [Sat, 24 Aug 2019 06:49:51 +0000 (06:49 +0000)]
[Constant] Add 'isElementWiseEqual()' method
Promoting it from InstCombine's tryToReuseConstantFromSelectInComparison().
Return true if this constant and a constant 'Y' are element-wise equal.
This is identical to just comparing the pointers, with the exception that
for vectors, if only one of the constants has an `undef` element in some
lane, the constants still match.
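A small sketch of those semantics (the wrapper is hypothetical):
```
#include "llvm/IR/Constants.h"
using namespace llvm;

// True when X and Y match element-wise; for vectors, a lane where only
// one of the two sides is undef still counts as a match.
static bool sameModuloUndef(Constant *X, Constant *Y) {
  return X->isElementWiseEqual(Y);
}
```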
Summary:
`matchThreeWayIntCompare()` looks for
```
select i1 (a == b),
i32 Equal,
i32 (select i1 (a < b), i32 Less, i32 Greater)
```
but both of these selects/compares can be in commuted form,
so out of 8 variants, only the two most basic ones are handled.
This fixes a regression introduced in D66232.
Roman Lebedev [Sat, 24 Aug 2019 06:49:25 +0000 (06:49 +0000)]
[InstCombine] Try to reuse constant from select in leading comparison
Summary:
If we have e.g.:
```
%t = icmp ult i32 %x, 65536
%r = select i1 %t, i32 %y, i32 65535
```
the constants `65535` and `65536` are suspiciously close.
We could perform a transformation to deduplicate them:
```
Name: ult
%t = icmp ult i32 %x, 65536
%r = select i1 %t, i32 %y, i32 65535
=>
%t.inv = icmp ugt i32 %x, 65535
%r = select i1 %t.inv, i32 65535, i32 %y
```
https://rise4fun.com/Alive/avb
While this may seem esoteric, this should certainly be good for vectors
(less constant pool usage) and for opt-for-size, where we need only one constant.
But the real fun part here is that it allows further transformation,
in particular it finishes cleaning up the `clamp` folding,
see e.g. `canonicalize-clamp-with-select-of-constant-threshold-pattern.ll`.
We start with e.g.
```
%dont_need_to_clamp_positive = icmp sle i32 %X, 32767
%dont_need_to_clamp_negative = icmp sge i32 %X, -32768
%clamp_limit = select i1 %dont_need_to_clamp_positive, i32 -32768, i32 32767
%dont_need_to_clamp = and i1 %dont_need_to_clamp_positive, %dont_need_to_clamp_negative
%R = select i1 %dont_need_to_clamp, i32 %X, i32 %clamp_limit
```
without this patch we currently produce
```
%1 = icmp slt i32 %X, 32768
%2 = icmp sgt i32 %X, -32768
%3 = select i1 %2, i32 %X, i32 -32768
%R = select i1 %1, i32 %3, i32 32767
```
which isn't really a `clamp` - both comparisons are performed on the original value.
This patch changes it into:
```
%1.inv = icmp sgt i32 %X, 32767
%2 = icmp sgt i32 %X, -32768
%3 = select i1 %2, i32 %X, i32 -32768
%R = select i1 %1.inv, i32 32767, i32 %3
```
and then the magic happens! Some further transform finishes polishing it and we finally get:
```
%t1 = icmp sgt i32 %X, -32768
%t2 = select i1 %t1, i32 %X, i32 -32768
%t3 = icmp slt i32 %t2, 32767
%R = select i1 %t3, i32 %t2, i32 32767
```
which is beautiful and just what we want.
Proofs for `getFlippedStrictnessPredicateAndConstant()` for de-canonicalization:
https://rise4fun.com/Alive/THl
Proofs for the fold itself: https://rise4fun.com/Alive/THl