Igor Kudrin [Tue, 10 Sep 2019 09:03:24 +0000 (09:03 +0000)]
[DWARF] Add a unit test for DWARFUnit::getLength().
This is a follow-up of rL369529, where the return value of
DWARFUnit::getLength() was changed from uint32_t to uint64_t.
The test checks that a unit header with Length > 4G can be successfully
parsed and the value of the Length field is not truncated.
[Alignment] Use Align for TargetLowering::MinStackArgumentAlignment
Summary:
This patch is part of a series to introduce an Alignment type.
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
See this patch for the introduction of the type: https://reviews.llvm.org/D64790
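A minimal sketch of what the type change means (simplified names, not the actual TargetLowering members; an illustration of the Align type rather than the real API):
```
#include "llvm/Support/Alignment.h"
using namespace llvm;

// Alignment is now carried as Align - a guaranteed non-zero power of two -
// instead of a raw unsigned byte count.
struct ExampleLoweringInfo {
  Align MinStackArgAlign = Align(4); // e.g. a 4-byte minimum stack argument alignment
  void setMinStackArgAlign(Align A) { MinStackArgAlign = A; }
};
// Callers now pass Align(8) rather than a bare 8.
```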
[LegalizeTypes] Teach SoftenFloatOp_SELECT_CC to handle operand 2 or 3 being softened.
This can only happen on X86 when fp128 is a legal type, but we
go through softening to generate libcalls. This causes fp128 to
be softened to fp128 instead of an integer type. This can be
removed if D67128 lands.
Petr Hosek [Tue, 10 Sep 2019 03:11:39 +0000 (03:11 +0000)]
clang-misexpect: Profile Guided Validation of Performance Annotations in LLVM
This patch contains the basic functionality for reporting potentially
incorrect usage of __builtin_expect() by comparing the developer's
annotation against a collected PGO profile. A more detailed proposal and
discussion appears on the CFE-dev mailing list
(http://lists.llvm.org/pipermail/cfe-dev/2019-July/062971.html) and a
prototype of the initial frontend changes appears in D65300.
We revised the work in D65300 by moving the misexpect check into the
LLVM backend, and adding support for IR and sampling based profiles, in
addition to frontend instrumentation.
We add new misexpect metadata tags to those instructions directly
influenced by the llvm.expect intrinsic (branch, switch, and select)
when lowering the intrinsics. The misexpect metadata contains
information about the expected target of the intrinsic so that we can
check against the correct PGO counter when emitting diagnostics, and the
compiler's values for the LikelyBranchWeight and UnlikelyBranchWeight.
We use these branch weight values to determine when to emit the
diagnostic to the user.
A future patch should address the comment at the top of
LowerExpectIntrinsic.cpp to hoist the LikelyBranchWeight and
UnlikelyBranchWeight values into a shared space that can be accessed
outside of the LowerExpectIntrinsic pass. Once that is done, the
misexpect metadata can be updated to be smaller.
In the long term, it is possible to reconstruct portions of the
misexpect metadata from the existing profile data. However, we have
avoided this to keep the code simple, and because some kind of metadata
tag will be required to identify which branch/switch/select instructions
are influenced by the use of llvm.expect.
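As a hedged illustration (not taken from the patch or its tests), this is the kind of annotation the check targets: the developer marks a branch as unlikely, but the collected profile says it is usually taken.
```
#include <unistd.h>

// Illustrative only: if PGO data shows the n < 0 path is hot, the misexpect
// check compares the profile counters against the branch weights implied by
// __builtin_expect (LikelyBranchWeight/UnlikelyBranchWeight) and emits a
// diagnostic saying the annotation contradicts the measured behaviour.
long read_some(int fd, char *buf, unsigned long len) {
  long n = read(fd, buf, len);
  if (__builtin_expect(n < 0, 0)) // annotated as unlikely
    return -1;                    // ...but frequently taken in the profile
  return n;
}
```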
[Windows] Replace TrapUnreachable with an int3 insertion pass
This is an alternative to D66980, which was reverted. Instead of
inserting a pseudo instruction that optionally expands to nothing, add a
pass that inserts int3 when appropriate after basic block layout.
LangRef: mention MSan's problem with speculative conditional branches.
Summary:
This short blurb aims to disallow optimizations like the ones we had to
revert (under MSan) in
https://reviews.llvm.org/D21165
https://bugs.llvm.org/show_bug.cgi?id=28054
https://reviews.llvm.org/D67205
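A hedged illustration of the hazard (my example, not LangRef text): as written, 'b' is only read on paths where it was initialized; an optimization that speculates the second condition into an unconditional read of 'b' would make MSan report a use of an uninitialized value.
```
int speculation_hazard(bool a) {
  bool b;            // deliberately left uninitialized
  if (a)
    b = true;
  if (a && b)        // safe: short-circuit only reads 'b' when 'a' is true
    return 1;
  return 0;
}
```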
Philip Reames [Mon, 9 Sep 2019 20:54:13 +0000 (20:54 +0000)]
[LoopVectorize] Leverage speculation safety to avoid masked.loads
If we're vectorizing a load in a predicated block, check to see if the load can be speculated rather than predicated. This allows us to generate a normal vector load instead of a masked.load.
To do so, we must prove that all bytes accessed on any iteration of the original loop are dereferenceable, and that all loads (across all iterations) are properly aligned. This is equivalent to proving that hoisting the load into the loop header in the original scalar loop is safe.
Note: There are a couple of code motion todos in the code. My intention is to wait about a day - to be sure this sticks - and then perform the NFC motion without further review.
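A hedged sketch of the kind of loop this applies to (my example, not one of the patch's tests; whether the transform actually fires depends on the vectorizer being able to prove dereferenceability and alignment, e.g. from argument attributes or a known trip count):
```
// The a[i] load sits in a predicated block, but every a[i] with i < n is
// dereferenceable regardless of cond[i], so it can be speculated: the
// vectorizer may emit plain vector loads and keep only the store masked.
void add_selected(int *__restrict out, const int *a,
                  const unsigned char *cond, int n) {
  for (int i = 0; i < n; ++i)
    if (cond[i])
      out[i] += a[i];
}
```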
Philip Reames [Mon, 9 Sep 2019 20:26:52 +0000 (20:26 +0000)]
[Tests] Add anyextend tests for unordered atomics
Motivated by work on changing our representation of unordered atomics in SelectionDAG, but as an aside, all our lowerings for O3 are terrible. Even the ones which ignore the atomicity.
Philip Reames [Mon, 9 Sep 2019 19:23:22 +0000 (19:23 +0000)]
Introduce infrastructure for an incremental port of SelectionDAG atomic load/store handling
This is the first patch in a large sequence. The eventual goal is to have unordered atomic loads and stores - and possibly ordered atomics as well - handled through the normal ISEL codepaths for loads and stores. Today, they're handled with instances of AtomicSDNode, and as a result all transforms need to be duplicated to work for unordered atomics. The benefit of the current design is that it's harder to introduce a silent miscompile by adding a transform which forgets about atomicity. See the thread on llvm-dev titled "FYI: proposed changes to atomic load/store in SelectionDAG" for further context.
Note that this patch is NFC unless the experimental flag is set.
The basic strategy I plan on taking is:
1) Introduce infrastructure and a flag for testing (this patch).
2) Audit uses of isVolatile, and apply isAtomic conservatively*.
3) Piecemeal, conservatively* update generic code and x86 backend code in individual reviews, with tests for cases which didn't check volatile but can be found with inspection.
4) Flip the flag at the end (with minimal diffs).
5) Work through the todo list identified in (2) and (3), exposing performance opportunities.
(*) The "conservative" bit here is aimed at minimizing the number of diffs involved in (4). Ideally, there'd be none. In practice, getting it down to something reviewable by a human is the actual goal. Note that there are (currently) no paths which produce LoadSDNode or StoreSDNode with atomic MMOs, so we don't need to worry about preserving any behaviour there.
We've taken a very similar strategy twice before with success - once at IR level, and once at the MI level (post ISEL).
Matt Arsenault [Mon, 9 Sep 2019 18:57:51 +0000 (18:57 +0000)]
AMDGPU/GlobalISel: Legalize G_BUILD_VECTOR v2s16
Handle it the same way as G_BUILD_VECTOR_TRUNC. Arguably only
G_BUILD_VECTOR_TRUNC should be legal for this, but G_BUILD_VECTOR will
probably be more convenient in most cases.
Eli Friedman [Mon, 9 Sep 2019 18:29:27 +0000 (18:29 +0000)]
[IfConversion] Correctly handle cases where analyzeBranch fails.
If analyzeBranch fails, on some targets, the out parameters point to
some blocks in the function. But we can't use that information, so make
sure to clear it out. (In some places in IfConversion, we assume that
any block with a TrueBB is analyzable.)
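A hedged sketch of the defensive pattern described above (not the exact IfConversion diff):
```
#include "llvm/ADT/SmallVector.h"
#include "llvm/CodeGen/MachineBasicBlock.h"
#include "llvm/CodeGen/TargetInstrInfo.h"
using namespace llvm;

// analyzeBranch returns true on failure, but some targets may still have
// written partial results into the out parameters; reset them so later code
// never treats the block as analyzable.
static bool analyzeBranchSafely(const TargetInstrInfo &TII,
                                MachineBasicBlock &MBB,
                                MachineBasicBlock *&TBB,
                                MachineBasicBlock *&FBB,
                                SmallVectorImpl<MachineOperand> &Cond) {
  TBB = FBB = nullptr;
  Cond.clear();
  if (TII.analyzeBranch(MBB, TBB, FBB, Cond)) {
    TBB = FBB = nullptr; // failed: discard anything the target wrote
    Cond.clear();
    return false;
  }
  return true;
}
```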
The change to the testcase makes it trigger a bug on builds without this
fix: IfConvertDiamond tries to perform a followup "merge" operation,
which isn't legal, and we somehow end up with a branch to a deleted MBB.
I'm not sure how this doesn't crash the compiler.
Matt Arsenault [Mon, 9 Sep 2019 18:10:31 +0000 (18:10 +0000)]
AMDGPU: Use PatFrags to allow selecting custom nodes or intrinsics
This enables GlobalISel to handle various intrinsics. The custom node
pattern will be ignored, and the intrinsic will work. This will also
allow SelectionDAG to directly select the intrinsics, but as they are
all custom lowered to the nodes, this ends up leaving dead code in the
table.
Eventually either GlobalISel should add an equivalent of custom nodes, or
intrinsics should be used directly. These each have different tradeoffs.
There are a few more to handle, but these are the easy ones. Some others
fail for other reasons.
[X86] Allow _MM_FROUND_CUR_DIRECTION and _MM_FROUND_NO_EXC to be used together on instructions that only support SAE and not embedded rounding.
Currently, for SAE instructions we only allow _MM_FROUND_CUR_DIRECTION (bit 2) or _MM_FROUND_NO_EXC (bit 3) to be used as the immediate passed to the intrinsics. But these instructions don't perform rounding, so _MM_FROUND_CUR_DIRECTION is just a default placeholder for when you don't want to suppress exceptions. Using _MM_FROUND_NO_EXC by itself is really bit-equivalent to (_MM_FROUND_NO_EXC | _MM_FROUND_TO_NEAREST_INT), since _MM_FROUND_TO_NEAREST_INT is 0. Since we aren't rounding on these instructions, we should also accept (_MM_FROUND_CUR_DIRECTION | _MM_FROUND_NO_EXC) as equivalent to (_MM_FROUND_NO_EXC). icc allows this, but gcc does not.
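For example (a hedged illustration using one SAE-only AVX-512F intrinsic, not taken from the patch's tests), both spellings below are now accepted:
```
#include <immintrin.h>

// VMAXPS never rounds, so the rounding-mode bits in the immediate are only a
// placeholder; the change is that combining _MM_FROUND_CUR_DIRECTION with
// _MM_FROUND_NO_EXC is no longer rejected for SAE-only intrinsics.
__m512 max_suppress_exceptions(__m512 a, __m512 b) {
  __m512 r1 = _mm512_max_round_ps(a, b, _MM_FROUND_NO_EXC);
  __m512 r2 = _mm512_max_round_ps(
      a, b, _MM_FROUND_CUR_DIRECTION | _MM_FROUND_NO_EXC); // now also allowed
  return _mm512_add_ps(r1, r2);
}
```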
The bitstream remark serializer landed in r367372.
This adds a bitstream remark parser that parses bitstream remark files
into llvm::remarks::Remark objects through the RemarkParser interface.
A few interesting things to point out:
* There are parsing helpers to parse the different types of blocks
* The main parsing helper allows us to parse remark metadata and open an
external file containing the encoded remarks
* This adds a dependency from the Remarks library to the BitstreamReader
library
* The testing strategy is to create a remark entry through YAML, parse
it, serialize it to bitstream, parse that back and compare the objects.
* There are close to no tests for malformed bitstream remarks, due to
the lack of textual format for the bitstream format.
* This adds a new C API for parsing bitstream remarks:
LLVMRemarkParserCreateBitstream (a usage sketch follows this list).
* This bumps the REMARKS_API_VERSION to 1.
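A hedged sketch of how the new C API might be driven (it reuses the existing accessors from llvm-c/Remarks.h; treat the exact calls as an approximation rather than authoritative documentation):
```
#include "llvm-c/Remarks.h"
#include <stdint.h>
#include <stdio.h>

// Iterate over the remarks in an in-memory bitstream buffer.
static void dumpRemarks(const void *Buf, uint64_t Size) {
  LLVMRemarkParserRef Parser = LLVMRemarkParserCreateBitstream(Buf, Size);
  while (LLVMRemarkEntryRef Remark = LLVMRemarkParserGetNext(Parser)) {
    LLVMRemarkStringRef Pass = LLVMRemarkEntryGetPassName(Remark);
    printf("remark from pass: %.*s\n", (int)LLVMRemarkStringGetLen(Pass),
           LLVMRemarkStringGetData(Pass));
    LLVMRemarkEntryDispose(Remark);
  }
  if (LLVMRemarkParserHasError(Parser))
    fprintf(stderr, "%s\n", LLVMRemarkParserGetErrorMessage(Parser));
  LLVMRemarkParserDispose(Parser);
}
```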
Simon Atanasyan [Mon, 9 Sep 2019 17:28:45 +0000 (17:28 +0000)]
[mips] Fix decoding of microMIPS JALX instruction
The microMIPS jump and link exchange instruction stores its target in a
26-bit field. Unlike other microMIPS JAL instructions, these bits are the
target address shifted right by 2 bits [1]. The patch fixes the JALX
instruction decoding to use a 2-bit shift.
[1] MIPS Architecture for Programmers Volume II-B: The microMIPS32 Instruction Set
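A hedged sketch of the decode rule (an illustrative helper, not the actual MipsDisassembler code):
```
#include <cstdint>

// Unlike the other microMIPS JAL encodings, JALX stores the target address
// shifted right by 2 in its 26-bit field, so decoding must shift left by 2.
static uint32_t decodeJalxTarget(uint32_t Insn) {
  uint32_t Field = Insn & 0x03FFFFFFu; // low 26 bits
  return Field << 2;
}
```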
Matt Arsenault [Mon, 9 Sep 2019 17:25:35 +0000 (17:25 +0000)]
AMDGPU: Move MnemonicAlias out of instruction def hierarchy
Unfortunately MnemonicAlias defines a "Predicates" field just like an
instruction or pattern, with a somewhat different interpretation.
This ends up overriding the intended Predicates set by
PredicateControl on the pseudoinstruction definitions with an empty
list. This allowed incorrectly selecting instructions that should have
been rejected due to the SubtargetPredicate from patterns on the
instruction definition.
This does remove the divergent predicate from the 64-bit shift
patterns, which were already not used for the 32-bit shift, so I'm not
sure what the point was. This also removes a second, redundant copy of
the 64-bit divergent patterns.
[GlobalISel][AArch64] Handle tail calls with non-void return types
Just return once you emit the call, which is exactly what SelectionDAG does in
this situation.
Update call-translator-tail-call.ll.
Also update dllimport.ll to show that we tail call here in GISel again. Add
-verify-machineinstrs to the GISel line too, to defend against verifier
failures.
Fangrui Song [Mon, 9 Sep 2019 16:45:17 +0000 (16:45 +0000)]
[yaml2obj] Simplify p_filesz/p_memsz computing
This fixes a bug as well. When "FileSize:" (p_filesz) is specified and
different from the actual value, the following code probably should not
use PHeader.p_filesz:
if (SHeader->sh_offset == PHeader.p_offset + PHeader.p_filesz)
  PHeader.p_memsz += SHeader->sh_size;
David Green [Mon, 9 Sep 2019 16:35:49 +0000 (16:35 +0000)]
[ARM] Fix loads and stores for predicate vectors
These predicate vectors can usually be loaded and stored with a single
instruction, a VSTR_P0. However, this instruction will store the entire P0
predicate, 16 bits, zero-extended to 32 bits, with each lane of the
v4i1/v8i1/v16i1 representing 4/2/1 bits.
As far as I understand, when llvm says "store this v4i1", it really does need
to store 4 bits (or 8, that being the size of a byte, with the bottom 4 as the
interesting bits). For example, a bitcast from a v8i1 to an i8 is defined as a
store followed by a load, which is how the code is expanded.
So this instead lowers the v4i1/v8i1 load/store through some shuffles to get
the bits into the correct positions. This, as you might imagine, is not as
efficient as a single instruction. But I believe it is needed for correctness.
A v16i1 equally should not load/store 32 bits, only the 16 bits of data.
Stack loads/stores are still using the VSTR_P0 (as can be seen by the test not
changing). This is fine as they are self-consistent, it is only "externally
observable loads/stores" (from our point of view) that need to be corrected.
Simon Tatham [Mon, 9 Sep 2019 15:17:26 +0000 (15:17 +0000)]
[ARM] Remove some spurious MVE reduction instructions.
The family of 'dual-accumulating' vector multiply-add instructions
(VMLADAV, VMLALDAV and VRMLALDAVH) can all operate on both signed and
unsigned integer types, and they all have an 'exchange' variant (with
an X in the name) that modifies which pairs of vector lanes in the two
inputs are multiplied together. But there's a clause in the spec that
says that the X variants don't operate on unsigned integer types,
only signed. You can have X, or unsigned, or neither, but not both.
We didn't notice that clause when we implemented the MC support for
these instructions, so LLVM believes that things like VMLADAVX.U8 do
exist, contradicting the spec. Here I fix that by conditioning them
out in Tablegen.
In order to do that, I've reversed the nesting order of the Tablegen
multiclasses for those instructions. Previously, the innermost
multiclass generated the X and not-X variants, and the one outside
that generated the A and not-A variants. Now X is done by the outer
multiclass, which allows me to bypass that one when I only want the
two not-X variants.
Changing the multiclass nesting order also changes the names of the
instruction ids unless I make a special effort not to. I decided that
while I was changing them anyway I'd make them look nicer; so now the
instructions have names like MVE_VMLADAVs32 or MVE_VMLADAVaxs32,
instead of cumbersome _noacc_noexch suffixes.
The corresponding multiply-subtract instructions are unaffected. Those
don't accept unsigned types at all, either in the spec or in LLVM.
James Molloy [Mon, 9 Sep 2019 13:17:55 +0000 (13:17 +0000)]
[DFAPacketizer] Reapply: Track resources for packetized instructions
Reapply with fix to reduce resources required by the compiler - use
unsigned[2] instead of std::pair. This causes clang and gcc to compile
the generated file multiple times faster, and hopefully will reduce
the resource requirements on Visual Studio also. This fix is a little
ugly but it's clearly the same issue the previous author of
DFAPacketizer faced (the previous tables use unsigned[2] rather uglily
too).
This patch allows the DFAPacketizer to be queried after a packet is formed to work out which
resources were allocated to the packetized instructions.
This is particularly important for targets that do their own bundle packing - it's not
sufficient to know simply that instructions can share a packet; which slots are used is
also required for encoding.
This extends the emitter to emit a side-table containing resource usage diffs for each
state transition. The packetizer maintains a set of all possible resource states in its
current state. After packetization is complete, all remaining resource states are
possible packetization strategies.
The sidetable is only ~500K for Hexagon, but the extra tracking is disabled by default
(most uses of the packetizer like MachinePipeliner don't care and don't need the extra
maintained state).
Summary:
This tests inlining size thresholds, but relies on the output of running
the full O2 pipeline, making it brittle against changes in unrelated
passes.
Only run the inlining pass and set thresholds on the test RUN line
instead.
Simon Pilgrim [Mon, 9 Sep 2019 12:33:22 +0000 (12:33 +0000)]
Revert rL371198 from llvm/trunk: [DFAPacketizer] Track resources for packetized instructions
This patch allows the DFAPacketizer to be queried after a packet is formed to work out which
resources were allocated to the packetized instructions.
This is particularly important for targets that do their own bundle packing - it's not
sufficient to know simply that instructions can share a packet; which slots are used is
also required for encoding.
This extends the emitter to emit a side-table containing resource usage diffs for each
state transition. The packetizer maintains a set of all possible resource states in its
current state. After packetization is complete, all remaining resource states are
possible packetization strategies.
The sidetable is only ~500K for Hexagon, but the extra tracking is disabled by default
(most uses of the packetizer like MachinePipeliner don't care and don't need the extra
maintained state).
Differential Revision: https://reviews.llvm.org/D66936
........
Reverted as this is causing "compiler out of heap space" errors on MSVC 2017/19 NDEBUG builds
David Green [Mon, 9 Sep 2019 10:46:25 +0000 (10:46 +0000)]
[ARM] Prevent generating NEON stack accesses under MVE.
We should not be generating Neon stack loads/stores even for these large
registers.
No test here because my understanding is we will only generate these QQPR regs
for intrinsics and VLDn's. The tests will follow once those are available.
Tim Northover [Mon, 9 Sep 2019 10:04:23 +0000 (10:04 +0000)]
GlobalISel: add combiner to form indexed loads.
Loosely based on DAGCombiner version, but this part is slightly simpler in
GlobalIsel because all address calculation is performed by G_GEP. That makes
the inc/dec distinction moot so there's just pre/post to think about.
No targets can handle it yet so testing is via a special flag that overrides
target hooks.
George Rimar [Mon, 9 Sep 2019 09:43:03 +0000 (09:43 +0000)]
[lib/ObjectYAML] - Improve and cleanup error reporting in ELFState<ELFT> class.
The aim of this patch is to refactor how we handle and report errors.
I suggest using the same approach we use in LLD: delayed error reporting.
For that I introduced a 'HasError' flag which is set when we report an error.
Now we do not exit instantly on any error. The benefits are:
1) There are no more 'exit(1)' calls in the library code.
2) Code was simplified significantly in a few places.
3) It is now possible to print multiple errors instead of only one.
Also, I changed the messages to be lower case and removed a full stop.
Sam Parker [Mon, 9 Sep 2019 08:39:14 +0000 (08:39 +0000)]
[ARM][ParallelDSP] Fix for sext input
The incoming accumulator value can be discovered through a sext, in
which case there will be a mismatch between the input and the result.
So sign extend the accumulator input if we're performing a 64-bit mac.
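A hedged sketch of the shape of the fix (my helper and names, not the actual ARMParallelDSP code):
```
#include "llvm/IR/IRBuilder.h"
using namespace llvm;

// If the accumulator feeding a 64-bit MAC was found through a sext and is
// narrower than i64, sign extend it so the input matches the 64-bit result.
static Value *widenAccIfNeeded(IRBuilder<> &Builder, Value *Acc) {
  Type *I64Ty = Builder.getInt64Ty();
  return Acc->getType() == I64Ty ? Acc : Builder.CreateSExt(Acc, I64Ty);
}
```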
Kai Luo [Mon, 9 Sep 2019 02:32:42 +0000 (02:32 +0000)]
[MachineCopyPropagation] Remove redundant copies after TailDup via machine-cp
Summary:
After tailduplication, we have redundant copies. We can remove these
copies in machine-cp if it's safe to, i.e.
```
$reg0 = OP ...
... <<< No read or clobber of $reg0 and $reg1
$reg1 = COPY $reg0 <<< $reg0 is killed
...
<RET>
```
will be transformed to
```
$reg1 = OP ...
...
<RET>
```
This patch decodes target and faux shuffles with getTargetShuffleInputs - a reduced version of resolveTargetShuffleInputs that doesn't resolve SM_SentinelZero cases, so we can correctly remove zero vectors if they aren't demanded.
[X86] Remove call to getZeroVector from materializeVectorConstant. Add isel patterns for zero vectors with all types.
The change to avx512-vec-cmp.ll is a regression, but should be
easy to fix. It occurs because the getZeroVector call was
canonicalizing both sides to the same node, then SimplifySelect
was able to simplify it. But since we only called getZeroVector
on some VTs this isn't a robust way to combine this.
The change to vector-shuffle-combining-ssse3.ll is more
instructions, but removes a constant pool load, so it's unclear
if it's a regression or not.
Roman Lebedev [Sun, 8 Sep 2019 20:14:15 +0000 (20:14 +0000)]
[InstSimplify] simplifyUnsignedRangeCheck(): if we know that X != 0, handle more cases (PR43246)
Summary:
This is motivated by D67122 sanitizer check enhancement.
That patch seemingly worsens `-fsanitize=pointer-overflow`
overhead from 25% to 50%, which strongly implies missing folds.
In this particular case, given
```
char* test(char& base, unsigned long offset) {
return &base + offset;
}
```
it will end up producing something like
https://godbolt.org/z/LK5-iH
which after optimizations reduces down to roughly
```
define i1 @t0(i8* nonnull %base, i64 %offset) {
%base_int = ptrtoint i8* %base to i64
%adjusted = add i64 %base_int, %offset
%non_null_after_adjustment = icmp ne i64 %adjusted, 0
%no_overflow_during_adjustment = icmp uge i64 %adjusted, %base_int
%res = and i1 %non_null_after_adjustment, %no_overflow_during_adjustment
ret i1 %res
}
```
Without D67122 there was no `%non_null_after_adjustment`,
and in this particular case we can get rid of the overhead:
Here we add some offset to a non-null pointer,
and check that the result does not overflow and is not a null pointer.
But since the base pointer is already non-null, and we check for overflow,
that overflow check will already catch the null pointer,
so the separate null check is redundant and can be dropped.
Alive proofs:
https://rise4fun.com/Alive/WRzq
There are more "unsigned-add-with-overflow" patterns; they are not handled
here, but this is the main pattern that we currently consider canonical,
so it makes sense to handle it.
[InstCombine] fold extract+insert into identity shuffle
This is similar to the existing fold for splats added with:
rL365379
If we can adjust the shuffle mask to include another element
in an identity mask (if it changes vector length, that's an
extract/insert subvector operation in the backend), then that
can eliminate extractelement/insertelement pairs in IR.
All targets are expected to lower shuffles with identity masks
efficiently.