Teresa Johnson [Mon, 18 Dec 2017 18:00:32 +0000 (18:00 +0000)]
[ThinLTO] Make distributed indexes test more robust
Modify test so that it passes in the reverse-iteration bot.
We use a DenseMap rather than a std::map for the summaries to emit into
distributed index files, so the iteration order is not defined; it is
deterministic, however, which is good enough.
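For background only, here is a minimal sketch (with hypothetical keys, not the actual test contents) of the property the reverse-iteration bot exercises: DenseMap iteration order is unspecified, though deterministic for a fixed build, so tests must not depend on a particular order.
#include "llvm/ADT/DenseMap.h"
#include "llvm/ADT/StringRef.h"
#include "llvm/Support/raw_ostream.h"
int main() {
  // Hypothetical module names; the real map holds per-module summaries.
  llvm::DenseMap<llvm::StringRef, int> Summaries;
  Summaries.try_emplace("a.o", 1);
  Summaries.try_emplace("b.o", 2);
  // The visit order is not specified (the reverse-iteration bot flips it
  // deliberately), so the test must accept the entries in any order.
  for (const auto &KV : Summaries)
    llvm::errs() << KV.first << "\n";
  return 0;
}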
Sander de Smalen [Mon, 18 Dec 2017 16:48:53 +0000 (16:48 +0000)]
[AArch64][SVE] Asm: Improve diagnostics further when +sve is not specified
Summary: Patch [4/4] in a series to add parsing of predicates and properly parse SVE ZIP1/ZIP2 instructions. This patch further improves diagnostic messages for when the SVE feature is not specified.
Sander de Smalen [Mon, 18 Dec 2017 14:34:24 +0000 (14:34 +0000)]
[TableGen][AsmMatcherEmitter] Only choose specific diagnostic for enabled instruction
Summary:
When emitting a diagnostic for an invalid operand, a specific diagnostic
should only be reported when the instruction being matched is actually
enabled by the feature flags.
Patch [3/4] in a series to add parsing of predicates and properly parse SVE
ZIP1/ZIP2 instructions. This patch fixes bogus diagnostic messages for when
the SVE feature is not specified.
Diana Picus [Mon, 18 Dec 2017 13:22:28 +0000 (13:22 +0000)]
[ARM GlobalISel] Fix G_(UN)MERGE_VALUES handling after r319524
r319524 has made more G_MERGE_VALUES/G_UNMERGE_VALUES pairs legal than
are supported by the rest of the pipeline. Restrict that to only the
cases that we can currently handle: packing 32-bit values into 64-bit
ones, when we have hardware FP.
Max Kazantsev [Mon, 18 Dec 2017 13:01:32 +0000 (13:01 +0000)]
[ConstantRange] Support for ashr in ConstantRange computation
Extend the ConstantRange implementation to compute the range of possible values resulting from an arithmetic right shift operation.
There will be a follow up patch to leverage this constant range infrastructure in LazyValueInfo.
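As a rough sketch of the idea (a hypothetical helper, not the in-tree implementation), the bounds of the ashr result can be taken from the corners of the two input ranges, because ashr is monotonic in the shifted value for a fixed shift amount and monotonic in the shift amount for a fixed sign; empty, wrapped, and overflow edge cases are ignored here.
#include "llvm/ADT/APInt.h"
#include "llvm/IR/ConstantRange.h"
using namespace llvm;
// Illustrative only: assumes non-empty, non-wrapped inputs.
static ConstantRange ashrRangeSketch(const ConstantRange &LHS,
                                     const ConstantRange &RHS) {
  APInt Corners[4] = {LHS.getSignedMin().ashr(RHS.getUnsignedMin()),
                      LHS.getSignedMin().ashr(RHS.getUnsignedMax()),
                      LHS.getSignedMax().ashr(RHS.getUnsignedMin()),
                      LHS.getSignedMax().ashr(RHS.getUnsignedMax())};
  APInt Min = Corners[0], Max = Corners[0];
  for (const APInt &C : Corners) {
    if (C.slt(Min)) Min = C;
    if (C.sgt(Max)) Max = C;
  }
  // ConstantRange's upper bound is exclusive.
  return ConstantRange(Min, Max + 1);
}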
Tim Northover [Mon, 18 Dec 2017 10:36:00 +0000 (10:36 +0000)]
AArch64: work around how Cyclone handles "movi.2d vD, #0".
For Cyclone, the instruction "movi.2d vD, #0" is executed incorrectly in some rare
circumstances. Work around the issue conservatively by avoiding the instruction entirely.
This patch changes CodeGen so that problematic instructions are never
generated, and the AsmParser so that an equivalent instruction is used (with a
warning).
Sam Parker [Mon, 18 Dec 2017 10:04:27 +0000 (10:04 +0000)]
[DAGCombine] Move AND nodes to multiple load leaves
Search from AND nodes to find whether they can be propagated back to
loads, so that the AND and load can be combined into a narrow load.
We search through OR, XOR and other AND nodes, and all but one of the
leaves are required to be loads or constants. The exception node then
needs to be masked off, meaning that the 'and' isn't removed, but the
load(s) are still narrowed.
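A hypothetical source-level example (not from the patch or its tests) of the kind of DAG this targets; assuming a little-endian target, the AND can be propagated back so both 32-bit loads become narrow extending loads.
#include <cstdint>
// DAG shape: and (or (load a), (load b)), 0xffff
// After the combine, both loads can be narrowed to 16-bit zero-extending loads.
uint32_t low_halves_or(const uint32_t *a, const uint32_t *b) {
  return (*a | *b) & 0xFFFFu;
}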
Hiroshi Inoue [Mon, 18 Dec 2017 06:47:37 +0000 (06:47 +0000)]
[SROA] Disable non-whole-alloca splits by default
This patch introduces a switch to control splitting of non-whole-alloca slices; it defaults to off.
The switch will default to on again once the issue reported in PR35657 is fixed.
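The snippet below is only a sketch of how such an off-by-default switch is typically declared with cl::opt; the option name and description here are hypothetical, and the actual flag in SROA.cpp may be spelled differently.
#include "llvm/Support/CommandLine.h"
using namespace llvm;
// Hypothetical flag name, for illustration only.
static cl::opt<bool> SROASplitNonWholeAllocaSlices(
    "sroa-split-nonwhole-alloca-slices", cl::Hidden, cl::init(false),
    cl::desc("Allow splitting of non-whole-alloca slices "
             "(off by default until PR35657 is resolved)"));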
Craig Topper [Mon, 18 Dec 2017 04:50:05 +0000 (04:50 +0000)]
[X86] Fix mistake that I made when splitting up the setOperationAction calls recently.
The block into which I moved things that need BWI and either 512-bit vectors or VLX was incorrectly qualified with just hasBWI || hasVLX. Here I've qualified it with hasBWI && (hasAVX512 || hasVLX), where the hasAVX512 check will be replaced by a check allowing 512-bit vectors in an upcoming patch; the shape of the guard is sketched below.
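A stand-alone sketch of the corrected predicate described above; SubtargetSketch is a hypothetical stand-in for X86Subtarget, not the real type.
// Hypothetical stand-in for X86Subtarget, just to show the guard's shape.
struct SubtargetSketch {
  bool BWI = true, AVX512 = true, VLX = false;
  bool hasBWI() const { return BWI; }
  bool hasAVX512() const { return AVX512; }
  bool hasVLX() const { return VLX; }
};
bool useBWIWith512OrVLX(const SubtargetSketch &ST) {
  // hasAVX512() stands in for "allow 512-bit vectors" until the follow-up patch.
  return ST.hasBWI() && (ST.hasAVX512() || ST.hasVLX());
}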
Serguei Katkov [Mon, 18 Dec 2017 04:25:07 +0000 (04:25 +0000)]
[CGP] Fix the handling of select instructions in complex addressing mode
When we put the value in the select placeholder we must pass it
through the simplification tracker, because the value might already
have been simplified and erased.
Craig Topper [Sun, 17 Dec 2017 18:23:45 +0000 (18:23 +0000)]
[X86] Make the code that creates fmaddsub from build_vector of extracts and inserts functional and add tests.
Summary:
We had no tests for this and we couldn't do the optimization because of a bad use count check. We need to know how many non-undef pieces of the build vector were filled in and ensure our use count is equal to that. But on the shuffle combine version we need the use count to be 2.
The missing coverage was noticed during the review of D40335.
Simon Pilgrim [Sat, 16 Dec 2017 23:32:18 +0000 (23:32 +0000)]
[X86][AVX] lowerVectorShuffleAsBroadcast - aggressively peek through BITCASTs
Peek through BITCASTs when looking for the broadcast source, as long as we can safely adjust the broadcast index for the new type and keep it suitably aligned.
Sean Fertile [Sat, 16 Dec 2017 22:41:39 +0000 (22:41 +0000)]
[Memcpy Loop Lowering] Only calculate residual size/bytes copied when needed.
If the loop operand type is int8 then there will be no residual loop for the
unknown-size expansion. Don't create the residual-size and bytes-copied values
when they are not needed.
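To spell out the arithmetic with a hypothetical helper (not the lowering code itself): with an int8 loop operand each iteration copies one byte, so the residual byte count is provably zero and neither the residual loop nor its feeder values are needed.
#include <cstdint>
// LoopOpSize is the store size, in bytes, of the loop operand type.
uint64_t residualBytes(uint64_t CopyLen, uint64_t LoopOpSize) {
  return CopyLen % LoopOpSize; // always 0 when LoopOpSize == 1
}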
Craig Topper [Sat, 16 Dec 2017 21:12:24 +0000 (21:12 +0000)]
[X86] Don't pass a zero input to the passthru operand of getVectorMaskingNode/getScalarMaskingNode when it's going to emit an ISD::OR/ISD::AND. NFCI
In those cases, the pass thru operand of the methods isn't used. The calls to the scalar version were passing a MVT::i1 zero, which is an illegal type at the stage this code runs.
Craig Topper [Sat, 16 Dec 2017 19:31:36 +0000 (19:31 +0000)]
[X86] When using vpopcntdq for ctpop of v8i16 vectors, only promote to v8i32.
Previously we promoted to v8i64, but we don't need to go all the way to 512-bits. If we have VLX we can use the 256-bit instruction. And even if we don't have VLX we can widen v8i32 to v16i32 and drop the upper half.
Craig Topper [Sat, 16 Dec 2017 18:35:31 +0000 (18:35 +0000)]
[X86] Combine some more scheduler model entries using regular expressions.
We had a lot of separate 32-bit and 64-bit instructions that had the same scheduling data. This merges them into the same regular expression, which is consistent with how a lot of other instructions are already handled.
We want to do this for 2 reasons:
1. Value tracking does not recognize the ashr variant, so it would fail to match for cases like D39766.
2. DAGCombiner does better at producing optimal codegen when we have the cmp+sel pattern.
More detail about what happens in the backend:
1. DAGCombiner has a generic transform for all targets to convert the scalar cmp+sel variant of abs
into the shift variant. That is the opposite of this IR canonicalization.
2. DAGCombiner has a generic transform for all targets to convert the vector cmp+sel variant of abs
into either an ABS node or the shift variant. That is again the opposite of this IR canonicalization.
3. DAGCombiner has a generic transform for all targets to convert the exact shift variants produced by #1 or #2
into an ISD::ABS node. Note: It would be an efficiency improvement if we had #1 go directly to an ABS node
when that's legal/custom.
4. The pattern matching above is incomplete, so it is possible to escape the intended/optimal codegen in a
variety of ways.
a. For #2, the vector path is missing the case for setlt with a '1' constant.
b. For #3, we are missing a match for commuted versions of the shift variants.
5. Therefore, this IR canonicalization can only help get us to the optimal codegen. The version of cmp+sel
produced by this patch will be recognized in the DAG and converted to an ABS node when possible or the
shift sequence when not.
6. In the following examples with this patch applied, we may get conditional moves rather than the shift
produced by the generic DAGCombiner transforms. The conditional move is created using a target-specific
decision for any given target. Whether it is optimal or not for a particular subtarget may be up for debate.
define <4 x i32> @abs_shifty_vec(<4 x i32> %x) {
%signbit = ashr <4 x i32> %x, <i32 31, i32 31, i32 31, i32 31>
%add = add <4 x i32> %signbit, %x
%abs = xor <4 x i32> %signbit, %add
ret <4 x i32> %abs
}
define <4 x i32> @abs_cmpsubsel_vec(<4 x i32> %x) {
%cmp = icmp slt <4 x i32> %x, zeroinitializer
%sub = sub <4 x i32> zeroinitializer, %x
%abs = select <4 x i1> %cmp, <4 x i32> %sub, <4 x i32> %x
ret <4 x i32> %abs
}
Hal Finkel [Sat, 16 Dec 2017 02:55:24 +0000 (02:55 +0000)]
[LV] Extend InstWidening with CM_Widen_Reverse
Changes to the original scalar loop during LV code gen cause the return value
of Legal->isConsecutivePtr() to be inconsistent with the return value during
legal/cost phases (further analysis and information of the bug is in D39346).
This patch is an alternative fix to PR34965 following the CM_Widen approach
proposed by Ayal and Gil in D39346. It extends the InstWidening enum with
CM_Widen_Reverse to properly record the widening decision for consecutive
reverse memory accesses and, consequently, get rid of the
Legal->isConsecutivePtr() call in LV code gen. I think this is a simpler/cleaner
solution to PR34965 than the one in D39346.
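A rough sketch of the resulting enum, assuming the pre-existing enumerator names; only CM_Widen_Reverse is new here, and the exact set in LoopVectorize.cpp may differ.
// Per-instruction widening decisions recorded by the cost model.
enum InstWidening {
  CM_Unknown,
  CM_Widen,         // consecutive forward memory access
  CM_Widen_Reverse, // consecutive reverse memory access (added by this patch)
  CM_Interleave,
  CM_GatherScatter,
  CM_Scalarize
};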
Vitaly Buka [Sat, 16 Dec 2017 02:10:00 +0000 (02:10 +0000)]
[LTO] Make processing of combined module more consistent
Summary:
1. Use stream 0 only for the combined module. Previously, if the combined module
was not processed, ThinLTO used that stream for its own output. However, small
changes in the input could trigger processing of the combined module and shuffle
the outputs, making life harder for llvm::LTO users.
2. Always process the combined module and write its output to stream 0. Processing
an empty combined module is cheap and allows llvm::LTO users to avoid implementing
processing which is already done in llvm::LTO.
Hal Finkel [Sat, 16 Dec 2017 01:12:50 +0000 (01:12 +0000)]
[LV] NFC patch for moving VP*Recipe class definitions from LoopVectorize.cpp to VPlan.h
This is a small step forward to move VPlan stuff to where it should belong (i.e., VPlan.*):
1. VP*Recipe classes in LoopVectorize.cpp are moved to VPlan.h.
2. Many of the VP*Recipe::print() and execute() definitions are still left in
LoopVectorize.cpp since they refer to things declared in LoopVectorize.cpp. They
will be moved to VPlan.cpp at a later time.
3. The InterleaveGroup class is moved from the anonymous namespace to the llvm
namespace; referencing it in the anonymous namespace from VPlan.h resulted in a warning.
Teresa Johnson [Sat, 16 Dec 2017 00:18:12 +0000 (00:18 +0000)]
[ThinLTO] Enable importing of aliases as copy of aliasee
Summary:
This implements a missing feature to allow importing of aliases, which
was previously disabled because an alias cannot be available_externally.
We instead import an alias as a copy of its aliasee.
Some additional work was required in the IndexBitcodeWriter for the
distributed build case, to ensure that the aliasee has a value id
in the distributed index file (i.e. even when it is not being
imported directly).
This is a performance win in codes that have many aliases, e.g. C++
applications that have many constructor and destructor aliases.
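As an illustrative example (not taken from the patch) of where such aliases commonly arise: for a class like the one below, Clang often emits one destructor variant as an alias of another, and with this change that alias can be imported as a copy of its aliasee.
// Widget.cpp -- hypothetical translation unit.
struct Widget {
  ~Widget();
};
// With no virtual bases, Clang commonly emits the complete-object destructor
// (_ZN6WidgetD1Ev) as an alias of the base-object destructor (_ZN6WidgetD2Ev).
Widget::~Widget() {}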
Quentin Colombet [Fri, 15 Dec 2017 23:07:42 +0000 (23:07 +0000)]
[TableGen][GlobalISel] Have the predicates directly know which data they are dealing with
Prior to this patch, a predicate wouldn't make sense outside of its
rule. Indeed, it was only during emitting a rule that a predicate would
be made aware of the IDs of the data it is checking. Because of that,
predicates could not be moved around or compared between each other.
Matthias Braun [Fri, 15 Dec 2017 22:22:42 +0000 (22:22 +0000)]
MachineModuleInfo: Remove unused function; NFC
Remove the unused setModule() function; it would be dangerous if someone
actually used it as it wouldn't reset/recompute various other module
related data.
[Hexagon] Remove recursion in visitUsesOf, replace with use queue
This is primarily to reduce stack usage, but ordering the use queue
according to the position in the code (earlier instructions visited
before later ones) reduces the number of unnecessary bottoms due to
visiting instructions out of order, e.g.
%reg1 = copy %reg0
%reg2 = copy %reg0
%reg3 = and %reg1, %reg2
Here, reg3 should be known to be the same as reg0-2, but if reg3 is
evaluated after reg1 has been updated, yet before reg2 has been updated, the two
inputs to the and will appear different, causing reg3 to become
bottom.
Craig Topper [Fri, 15 Dec 2017 21:18:06 +0000 (21:18 +0000)]
[X86] Use AND32ri8 instead of AND64ri8 in Asan code in EmitCallAsanReport for 32-bit mode.
This seemed to work due to a quirk in the X86 MC encoder that didn't emit the REX byte that AND64ri8 implies when in 32-bit mode, which made the encoding the same as AND32ri8. An assert I tried to add to catch the dropped REX prefix is what caught this.
Craig Topper [Fri, 15 Dec 2017 21:18:05 +0000 (21:18 +0000)]
[X86] In LowerVectorCTPOP use ISD::ZERO_EXTEND/ISD::TRUNCATE instead of the target specific nodes.
The target-independent nodes will get legalized to the target-specific nodes by their own legalization process. Someday I'd like to stop using a target-specific node for zero extends and truncates of legal types, so the fewer places that reference the target-specific opcodes, the better.
Craig Topper [Fri, 15 Dec 2017 20:57:18 +0000 (20:57 +0000)]
[X86] Remove unnecessary TODO.
When I wrote it I thought we were missing a potential optimization for KNL. But investigating further shows that for KNL we still do the optimal thing by widening to v4f32 and then using special isel patterns to widen again to a zmm register.
Jun Bum Lim [Fri, 15 Dec 2017 20:33:24 +0000 (20:33 +0000)]
Re-commit : [LICM] Allow sinking when foldable in loop
This recommits r320823, which was reverted due to a test failure in sink-foldable.ll and
an unused variable. Added "REQUIRES: aarch64-registered-target" to the test
and removed the unused variable.
Original commit message:
Continue trying to sink an instruction if its users in the loop are foldable.
This will allow the instruction to be folded in the loop by decoupling it from
the user outside of the loop.