[X86] Don't mutate shuffle arguments after early-out for AVX512
The match* functions have the annoying behavior of modifying their
inputs. Save and restore the inputs, just in case the early out for
AVX512 is hit. This is still not great, and it's only a matter of time
before this kind of bug happens again, but I couldn't come up with a
better pattern without rewriting significant chunks of this code.
Fixes PR35977.
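A minimal sketch of the save/restore pattern, with a hypothetical
matcher name standing in for the real match* helpers:

    // The matcher, like the real match* helpers, may rewrite V1/V2 in
    // place even when the caller later decides to bail out.
    SDValue SavedV1 = V1;
    SDValue SavedV2 = V2;
    if (matchShuffleVariant(V1, V2, Mask) && !Subtarget.hasAVX512()) {
      // Early out: restore the operands the matcher may have clobbered.
      V1 = SavedV1;
      V2 = SavedV2;
      return SDValue();
    }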
------------------------------------------------------------------------
[X86] When legalizing (v64i1 select i8, v64i1, v64i1) make sure not to introduce bitcasts to i64 in 32-bit mode
We legalize selects of masks with scalar conditions using a bitcast to an integer type. But if we are in 32-bit mode we can't convert v64i1 to i64. So instead split the v64i1 into two v32i1 halves and concatenate them back together. Each half will then be legalized by bitcasting to i32, which is fine.
The test case is a little indirect. If we have the v64i1 select in IR, it will get legalized by legalize vector ops, which has a run of type legalization after it. That type legalization run is able to fix this i64 bitcast. So in order to avoid that, we need a build_vector of a splat, which legalize vector ops will ignore. Legalize DAG will then turn that into a select via LowerBUILD_VECTORvXi1, and the select will get legalized. In this case there is no type legalizer run to clean up the bitcast.
This fixes PR35972.
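A minimal sketch of the split, assuming SelectionDAG variable names
(Cond, Op1, Op2) that are not from the actual patch:

    // Split each v64i1 operand into two v32i1 halves ...
    SDValue Lo1 = DAG.getNode(ISD::EXTRACT_SUBVECTOR, dl, MVT::v32i1, Op1,
                              DAG.getIntPtrConstant(0, dl));
    SDValue Hi1 = DAG.getNode(ISD::EXTRACT_SUBVECTOR, dl, MVT::v32i1, Op1,
                              DAG.getIntPtrConstant(32, dl));
    SDValue Lo2 = DAG.getNode(ISD::EXTRACT_SUBVECTOR, dl, MVT::v32i1, Op2,
                              DAG.getIntPtrConstant(0, dl));
    SDValue Hi2 = DAG.getNode(ISD::EXTRACT_SUBVECTOR, dl, MVT::v32i1, Op2,
                              DAG.getIntPtrConstant(32, dl));
    // ... select each half (each v32i1 legalizes via a bitcast to i32) ...
    SDValue Lo = DAG.getSelect(dl, MVT::v32i1, Cond, Lo1, Lo2);
    SDValue Hi = DAG.getSelect(dl, MVT::v32i1, Cond, Hi1, Hi2);
    // ... and concatenate the two halves back into a v64i1.
    SDValue Res = DAG.getNode(ISD::CONCAT_VECTORS, dl, MVT::v64i1, Lo, Hi);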
------------------------------------------------------------------------
[DAG] Teach BaseIndexOffset to correctly handle indexed operations
BaseIndexOffset address analysis incorrectly ignores offsets folded
into indexed memory operations, causing potential errors in alias
analysis of pre-indexed operations.
------------------------------------------------------------------------
The work order was changed in r228186 from SCC order
to RPO with an arbitrary sorting function. The sorting
function attempted to move inner loop nodes earlier. This
was apparently relying on an assumption that every block
in a given loop / the same loop depth would be seen before
visiting another loop. In the broken testcase, a block
outside of the loop was encountered before moving on to
another block in the same loop. The testcase would then
structurize such that one block's unconditional successor
could never be reached.
Revert to plain RPO for the analysis phase. This fixes
edges being detected as backedges when they really aren't.
The processing phase does use another visited set, and
I'm unclear on whether the order there is as important.
An arbitrary order doesn't work, and triggers some infinite
loops. The reversed RPO list seems to work and is closer
to the order that was used before, minus the arbitrary
custom sorting.
A few of the changed tests now produce smaller code,
and a few are slightly worse looking.
------------------------------------------------------------------------
StructurizeCFG: xfail one of the testcases from r321751
It fails with -verify-region-info. This seems to be an issue
with RegionInfo itself that existed before.
------------------------------------------------------------------------
RegionInfo: Use report_fatal_error instead of llvm_unreachable
Otherwise when using -verify-region-info in a release build the
error won't be emitted.
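The distinction, sketched with an illustrative check (the condition and
message here are made up, not the actual code):

    // llvm_unreachable compiles down to __builtin_unreachable() when
    // assertions are off, so execution silently runs into undefined
    // behavior without printing anything. report_fatal_error diagnoses
    // and aborts in every build mode.
    if (!regionInfoIsConsistent(R))                  // hypothetical check
      report_fatal_error("incorrect region info");   // always emitted
      // llvm_unreachable("incorrect region info");  // lost in release builds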
------------------------------------------------------------------------
PeepholeOptimizer: Do not form PHI with subreg arguments
When replacing a PHI the PeepholeOptimizer currently takes the register
class of the register at the first operand. This however is not correct
if this argument has a subregister index.
As there is currently no API to query the register class resulting from
applying a subregister index to all registers in a class, we can only
abort in these cases and not perform the transformation.
This changes findNextSource() to require the end of all copy chains to
not use a subregister if there is any PHI in the chain. I had to rewrite
the overly complicated inner loop there to have a good place to insert
the new check.
This fixes https://llvm.org/PR33071 (aka rdar://32262041)
------------------------------------------------------------------------
TargetLoweringBase: The ios simulator has no bzero function.
Make sure I really get back to the behavior from before my rewrite in
r321035, which turned out not to be completely NFC, as I changed the
behavior for the ios simulator environment.
------------------------------------------------------------------------
[SLP] Fix PR35777: Incorrect handling of aggregate values.
Summary:
Fixes the bug with incorrect handling of InsertValue|InsertElement
instructions in the SLP vectorizer. Currently, we may use incorrect
ExtractElement instructions as the operands of the original
InsertValue|InsertElement instructions.
------------------------------------------------------------------------
[LV] Don't call recordVectorLoopValueForInductionCast for newly-created IV from a trunc.
Summary:
This method is supposed to be called for IVs that have casts in their use-def
chains that are completely ignored after vectorization under PSE. However, for
truncates of such IVs the same InductionDescriptor is used during
creation/widening of both the original IV based on the PHINode and the
new IV based on the TruncInst.
This leads to an unintended second call to
recordVectorLoopValueForInductionCast with VectorLoopVal set to the
newly created IV for the trunc, and causes an assert due to an attempt
to store new information for an already existing entry in the map. This
is wrong and should not be done.
------------------------------------------------------------------------
Fix crash when linking metadata with ODR type uniquing
Summary:
With DebugTypeODRUniquing enabled, during IR linking debug metadata
in the destination module may be reached from the source module.
This means that ConstantAsMetadata nodes (e.g. on DITemplateValueParameter)
may contain a value from the destination module. When trying to map such
metadata nodes, we will attempt to map a GV already in the dest module.
linkGlobalValueProto will end up with a source GV that is the same as
the dest GV as well as the new GV. Trying to access the TypeMap for the
source GV type, which is actually a dest GV type, hits an assertion
since it appears that we have mapped into the source module (because the
type appears as a value, not a key, in the map).
Detect that we don't need to access the TypeMap in this case, since
there is no need to create a bitcast from the new GV to the source GV
type, as the GVs are the same.
Alex Bradbury [Wed, 3 Jan 2018 13:46:21 +0000 (13:46 +0000)]
[ARM][NFC] Avoid recreating MCSubtargetInfo in ARMAsmBackend
After D41349, we can now directly access MCSubtargetInfo from
createARM*AsmBackend. This patch makes use of this, avoiding the need to
create a fresh MCSubtargetInfo (which was previously always done with a blank
CPU and feature string). Given that the total size of the change remains pretty
tiny and we're removing the old explicit destructor, I changed the STI field
to a reference rather than a pointer.
Hal Finkel [Wed, 3 Jan 2018 11:35:09 +0000 (11:35 +0000)]
[TableGen] Add support of Intrinsics with multiple returns
This change deals with intrinsics with multiple outputs, for example a
load intrinsic with an address update.
DAG selection for intrinsics can be done either through source code or
TableGen. Handling all intrinsics in source code would introduce a huge
chunk of repetitive code if we have a large number of intrinsics that
return multiple values (see NVPTX as an example). While the intrinsic
class in TableGen supports multiple outputs, TableGen only supports
intrinsics with zero or one output in TreePattern. This appears to be a
simple bug in TableGen that is fixed by this change.
Alex Bradbury [Wed, 3 Jan 2018 09:14:02 +0000 (09:14 +0000)]
Fix incorrect documentation comment left after r321692
TargetRegistryInfo::createMCAsmBackend no longer takes a TheTriple parameter.
The majority of the TargetRegistryInfo::create* functions have no or very
limited per-parameter doc comments, and adding a comment for the
MCSubtargetInfo, MCRegisterInfo and MCTargetOptions parameters seems like it
would add no real value beyond reading the function signature. As such, I've
just deleted the doc comment for TheTriple.
Alex Bradbury [Wed, 3 Jan 2018 08:53:05 +0000 (08:53 +0000)]
Thread MCSubtargetInfo through Target::createMCAsmBackend
Currently it's not possible to access MCSubtargetInfo from a TgtMCAsmBackend.
D20830 threaded an MCSubtargetInfo reference through
MCAsmBackend::relaxInstruction, but this isn't the only function that would
benefit from access. This patch removes the Triple and CPUString arguments
from createMCAsmBackend and replaces them with MCSubtargetInfo.
This patch just changes the interface without making any intentional
functional changes. Once in, several cleanups are possible:
* Get rid of the awkward MCSubtargetInfo handling in ARMAsmBackend
* Support 16-bit instructions when valid in MipsAsmBackend::writeNopData
* Get rid of the CPU string parsing in X86AsmBackend and just use a SubtargetFeature for HasNopl
* Emit 16-bit nops in RISCVAsmBackend::writeNopData if the compressed instruction set extension is enabled (see D41221)
This change initially exposed PR35686, which has since been resolved in r321026.
Sanjay Patel [Tue, 2 Jan 2018 20:56:45 +0000 (20:56 +0000)]
[ValueTracking] recognize min/max of min/max patterns
This is part of solving PR35717:
https://bugs.llvm.org/show_bug.cgi?id=35717
The larger IR optimization is proposed in D41603, but we can show
the improvement in ValueTracking using codegen tests because
SelectionDAG creates min/max nodes based on ValueTracking.
Any target with min/max ops should show wins here. I chose AArch64
vector ops because they're clean and uniform.
Some Alive proofs for the tests (can't put more than 2 tests in 1
page currently because the web app says it's too long):
https://rise4fun.com/Alive/WRN
https://rise4fun.com/Alive/iPm
https://rise4fun.com/Alive/HmY
https://rise4fun.com/Alive/CNm
https://rise4fun.com/Alive/LYf
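A hedged illustration in C++ of one shape in this family; once
ValueTracking recognizes such nested selects, SelectionDAG can form
min/max nodes for them:

    // max(max(a, b), max(c, d)): the maximum of all four inputs.
    int max4(int a, int b, int c, int d) {
      int ab = a > b ? a : b;   // max(a, b)
      int cd = c > d ? c : d;   // max(c, d)
      return ab > cd ? ab : cd; // max of the two partial maxima
    }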
Amara Emerson [Tue, 2 Jan 2018 18:56:39 +0000 (18:56 +0000)]
[AArch64][GlobalISel] Fix assert fail with unknown intrinsic.
A call may have an intrinsic name but not have a valid intrinsic ID,
for example with llvm.invariant.group.barrier. If so, treat it as a
normal call like FastISel does.
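A minimal sketch of the fallback, with a hypothetical translateCall
helper standing in for the ordinary call-lowering path:

    const Function *F = CI.getCalledFunction();
    Intrinsic::ID ID = F ? F->getIntrinsicID() : Intrinsic::not_intrinsic;
    if (ID == Intrinsic::not_intrinsic)
      return translateCall(CI); // hypothetical: lower as a plain call,
                                // matching what FastISel does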
Sanjay Patel [Tue, 2 Jan 2018 16:38:29 +0000 (16:38 +0000)]
[x86] allow pairs of PCMPEQ for vector-sized integer equality comparisons (PR33325)
This is an extension of D31156 with the goal that we'll allow memcmp() == 0 expansion
for x86 to use 2 pairs of loads per block.
The memcmp expansion pass (formerly part of CGP) will generate this kind of pattern
with oversized integer compares, so we want to transform these into x86-specific vector
nodes before legalization splits things into scalar chunks.
See PR33325 for more details:
https://bugs.llvm.org/show_bug.cgi?id=33325
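The kind of source pattern the expansion targets (illustrative; a
32-byte equality test can now use two 16-byte loads per side with
paired PCMPEQ compares):

    #include <cstring>

    bool equal32(const void *a, const void *b) {
      // memcmp() == 0 expansion turns this into oversized integer
      // compares, later matched as vector loads + PCMPEQ on x86.
      return std::memcmp(a, b, 32) == 0;
    }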
Anna Thomas [Tue, 2 Jan 2018 16:25:50 +0000 (16:25 +0000)]
[BasicBlockUtils] Check for unreachable preds before updating LI in UpdateAnalysisInformation
Summary:
We are incorrectly updating the LI when loop-simplify generates
dedicated exit blocks for a loop. The issue is that there's an implicit
assumption that the Preds passed into UpdateAnalysisInformation are
reachable. However, this is not true and breaks LI by incorrectly
updating the header of a loop.
One such case is when we generate dedicated exits when the exit block is
a landing pad (through SplitLandingPadPredecessors). There may be other
cases as well, since we do not guarantee that Preds passed in are
reachable basic blocks.
The added test case shows how loop-simplify breaks LI for the outer loop (and DT in turn)
after we try to generate the LoopSimplifyForm.
Craig Topper [Mon, 1 Jan 2018 19:21:35 +0000 (19:21 +0000)]
[SelectionDAG][X86][AArch64] Require targets to specify the promotion type when using setOperationAction Promote for INT_TO_FP and FP_TO_INT
Currently the promotion for these ignores the normal getTypeToPromoteTo and instead just tries to double the element width. This is because the default behavior of getTypeToPromoteTo just adds 1 to the SimpleVT, which has the effect of increasing the element count while keeping the scalar size the same.
If multiple steps are required to get to a legal operation type, int_to_fp will be promoted multiple times. And fp_to_int will keep trying wider types in a loop until it finds one that works.
getTypeToPromoteTo does have the ability to query a promotion map to get the type and not do the increasing behavior. It seems better to just let the target specify the promotion type in the map explicitly instead of letting the legalizer iterate via widening.
FWIW, I think for any other vector operations that need to be promoted, we'd have to specify the type explicitly, because the default behavior of getTypeToPromoteTo isn't useful for vectors. The other types of promotion already require that either the element count or the total vector width stays constant, but neither invariant holds when simply incrementing the SimpleVT enum.
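A hedged sketch of what a target's TargetLowering constructor might now
state explicitly (the concrete types here are assumed for illustration):

    // Promote the operation and name the promotion type directly,
    // instead of letting the legalizer iterate via widening.
    setOperationAction(ISD::SINT_TO_FP, MVT::v8i16, Promote);
    AddPromotedToType(ISD::SINT_TO_FP, MVT::v8i16, MVT::v8i32);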
Craig Topper [Mon, 1 Jan 2018 01:11:29 +0000 (01:11 +0000)]
[X86] Add patterns for using zmm registers for v8i32/v8f32 vselect with the false input being zero.
We can use zmm move with zero masking for this. We already had patterns for using a masked move, but we didn't check for the zero masking case separately.
Craig Topper [Sun, 31 Dec 2017 19:17:52 +0000 (19:17 +0000)]
[X86] Use CONCAT_VECTORS instead of INSERT_SUBVECTOR for padding v4i1/v2i1 vector to v8i1 pre-legalize.
The CONCAT_VECTORS will be lowered to INSERT_SUBVECTOR later. In the modified cases this seems to be enough to trick a later DAG combine into running in a different order, which allows the ANDs to be removed.
I'll admit this is a bit of a hack that happens to work, but using CONCAT_VECTORS is more consistent with other legalization code anyway.
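A minimal sketch of the padding, assuming variable names and an undef
upper half (the actual fill value may differ):

    // Pad v4i1 up to v8i1 with a CONCAT_VECTORS; lowering turns this
    // into an INSERT_SUBVECTOR later, but the pre-legalize combine
    // order differs.
    SDValue Padded = DAG.getNode(ISD::CONCAT_VECTORS, dl, MVT::v8i1,
                                 Vec, DAG.getUNDEF(MVT::v4i1));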
Simon Pilgrim [Sun, 31 Dec 2017 17:07:47 +0000 (17:07 +0000)]
[X86][SSE] Don't vectorize splat buildvector of binops (PR30780)
Don't combine buildvector(binop(),binop(),binop(),binop()) -> binop(buildvector(), buildvector()) if it's a splat - keep the binop scalar and just splat the result to avoid large vector constants.
Craig Topper [Sun, 31 Dec 2017 07:38:41 +0000 (07:38 +0000)]
[X86] Prevent combining (v8i1 (bitconvert (i8 load)))->(v8i1 load) if we don't have DQI.
We end up using an i8 load via an isel pattern from v8i1 anyway. This just makes it more explicit. This seems to improve codegen in some cases, and I'd like to kill off some of the load patterns.
Philip Reames [Sun, 31 Dec 2017 03:34:36 +0000 (03:34 +0000)]
2nd attempt at "fixing" amdgpu tests after r321575
The test needs to be changed; it was exercising UB, and that likely wasn't the intent of the test author. I simply removed the checks because I have absolutely no idea what this test was trying to accomplish. With multiple check patterns, no explanation, and no familiarity on my part with the ISA, a true fix is going to have to come from someone familiar with the target.
Serge Pavlov [Sat, 30 Dec 2017 15:37:46 +0000 (15:37 +0000)]
Added support for reading configuration files
A configuration file is read as a response file in which file names in
the nested constructs `@file` are resolved relative to the directory
where the including file resides. Lines in which the first non-whitespace
character is '#' are considered as comments and are skipped. Trailing
backslashes are used to concatenate lines in the same way as they
are used in shell scripts.
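An illustrative configuration file under these rules (the option names
and the included file are made up):

    # Comment lines start with '#' and are skipped.
    # A trailing backslash joins this line with the next one:
    -fexceptions \
    -frtti
    # '@extra.cfg' is resolved relative to this file's directory:
    @extra.cfg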
Serge Pavlov [Sat, 30 Dec 2017 08:15:15 +0000 (08:15 +0000)]
Added support for reading configuration files
A configuration file is read as a response file in which file names in
the nested constructs `@file` are resolved relative to the directory
where the including file resides. Lines in which the first non-whitespace
character is '#' are considered as comments and are skipped. Trailing
backslashes are used to concatenate lines in the same way as they
are used in shell scripts.
Hiroshi Inoue [Sat, 30 Dec 2017 08:09:04 +0000 (08:09 +0000)]
[PowerPC] fix a bug in TCO eligibility check
If the callee and caller use different calling conventions, we cannot apply TCO if the callee requires arguments on the stack; e.g. the C calling convention and Fast CC use the same registers for parameter passing, but the stack offset is not necessarily the same.
This patch also recommits r319218 "[PowerPC] Allow tail calls of fastcc functions from C CallingConv functions." by @sfertile, since the problem reported in r320106 should be fixed.
Philip Reames [Sat, 30 Dec 2017 05:54:22 +0000 (05:54 +0000)]
[instsimplify] consistently handle undef and out of bound indices for insertelement and extractelement
In one case, we were handling out-of-bounds but not undef indices. In the other, we were handling undef (with the comment making the analogy to out of bounds), but not out of bounds. Be consistent and treat both undef and constant out-of-bounds indices as producing undefined results.
As a side effect, this also protects instcombine from having to handle large constant indices as we always simplify first.
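A hedged sketch of the unified rule for the extractelement case
(variable names assumed; not the verbatim patch):

    // Both an undef index and a constant out-of-bounds index now fold
    // the extractelement to undef.
    if (isa<UndefValue>(Idx))
      return UndefValue::get(VecTy->getElementType());
    if (auto *CIdx = dyn_cast<ConstantInt>(Idx))
      if (CIdx->uge(VecTy->getNumElements()))
        return UndefValue::get(VecTy->getElementType());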