Kai Luo [Mon, 9 Sep 2019 02:32:42 +0000 (02:32 +0000)]
[MachineCopyPropagation] Remove redundant copies after TailDup via machine-cp
Summary:
After tail duplication, we have redundant copies. We can remove these
copies in machine-cp when it is safe to do so, i.e.
```
$reg0 = OP ...
... <<< No read or clobber of $reg0 and $reg1
$reg1 = COPY $reg0 <<< $reg0 is killed
...
<RET>
```
will be transformed to
```
$reg1 = OP ...
...
<RET>
```
This patch decodes target and faux shuffles with getTargetShuffleInputs - a reduced version of resolveTargetShuffleInputs that doesn't resolve SM_SentinelZero cases, so we can correctly remove zero vectors if they aren't demanded.
[X86] Remove call to getZeroVector from materializeVectorConstant. Add isel patterns for zero vectors with all types.
The change to avx512-vec-cmp.ll is a regression, but should be
easy to fix. It occurs because the getZeroVector call was
canonicalizing both sides to the same node, which then allowed
SimplifySelect to simplify it. But since getZeroVector was only
called for some VTs, this wasn't a robust way to get this combine.
The change to vector-shuffle-combining-ssse3.ll adds more
instructions, but removes a constant pool load, so it's unclear
whether it's a regression or not.
Roman Lebedev [Sun, 8 Sep 2019 20:14:15 +0000 (20:14 +0000)]
[InstSimplify] simplifyUnsignedRangeCheck(): if we know that X != 0, handle more cases (PR43246)
Summary:
This is motivated by D67122 sanitizer check enhancement.
That patch seemingly worsens `-fsanitize=pointer-overflow`
overhead from 25% to 50%, which strongly implies missing folds.
In this particular case, given
```
char* test(char& base, unsigned long offset) {
return &base + offset;
}
```
it will end up producing something like
https://godbolt.org/z/LK5-iH
which after optimizations reduces down to roughly
```
define i1 @t0(i8* nonnull %base, i64 %offset) {
%base_int = ptrtoint i8* %base to i64
%adjusted = add i64 %base_int, %offset
%non_null_after_adjustment = icmp ne i64 %adjusted, 0
%no_overflow_during_adjustment = icmp uge i64 %adjusted, %base_int
%res = and i1 %non_null_after_adjustment, %no_overflow_during_adjustment
ret i1 %res
}
```
Without D67122 there was no `%non_null_after_adjustment`,
and in this particular case we can get rid of the overhead:
Here we add some offset to a non-null pointer,
and check that the result does not overflow and is not a null pointer.
But since the base pointer is already non-null, and we check for overflow,
that overflow check will already catch the null pointer,
so the separate null check is redundant and can be dropped.
Alive proofs:
https://rise4fun.com/Alive/WRzq
There are more "unsigned-add-with-overflow" patterns; they are not handled here,
but this is the main pattern, the one we currently consider canonical,
so it makes sense to handle it.
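To illustrate (a sketch, not actual test output), applying the fold to the IR example above should reduce the function to roughly:
```
define i1 @t0(i8* nonnull %base, i64 %offset) {
  %base_int = ptrtoint i8* %base to i64
  %adjusted = add i64 %base_int, %offset
  ; %non_null_after_adjustment is gone: the unsigned overflow check
  ; against the non-null base already implies the result is non-null
  %no_overflow_during_adjustment = icmp uge i64 %adjusted, %base_int
  ret i1 %no_overflow_during_adjustment
}
```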
[InstCombine] fold extract+insert into identity shuffle
This is similar to the existing fold for splats added with:
rL365379
If we can adjust the shuffle mask to include another element
in an identity mask (if it changes vector length, that's an
extract/insert subvector operation in the backend), then that
can eliminate extractelement/insertelement pairs in IR.
All targets are expected to lower shuffles with identity masks
efficiently.
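As a sketch of the kind of pattern this enables (hypothetical IR, not taken from the patch's tests), using a length-changing identity mask:
```
; before
%v = shufflevector <8 x float> %x, <8 x float> undef, <4 x i32> <i32 0, i32 1, i32 undef, i32 3>
%e = extractelement <8 x float> %x, i32 2
%r = insertelement <4 x float> %v, float %e, i32 2

; after: one length-changing identity shuffle (an extract-subvector in the backend)
%r = shufflevector <8 x float> %x, <8 x float> undef, <4 x i32> <i32 0, i32 1, i32 2, i32 3>
```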
Roman Lebedev [Sun, 8 Sep 2019 17:50:40 +0000 (17:50 +0000)]
[NFC][InstSimplify] Some tests for dropping null check after uadd.with.overflow of non-null (PR43246)
https://rise4fun.com/Alive/WRzq
Name: C <= Y && Y != 0 --> C <= Y iff C != 0
Pre: C != 0
%y_is_nonnull = icmp ne i64 %y, 0
%no_overflow = icmp ule i64 C, %y
%r = and i1 %y_is_nonnull, %no_overflow
=>
%r = %no_overflow
Name: C <= Y || Y != 0 --> Y != 0 iff C != 0
Pre: C != 0
%y_is_nonnull = icmp ne i64 %y, 0
%no_overflow = icmp ule i64 C, %y
%r = or i1 %y_is_nonnull, %no_overflow
=>
%r = %y_is_nonnull
Name: C > Y || Y == 0 --> C > Y iff C != 0
Pre: C != 0
%y_is_null = icmp eq i64 %y, 0
%overflow = icmp ugt i64 C, %y
%r = or i1 %y_is_null, %overflow
=>
%r = %overflow
Name: C > Y && Y == 0 --> Y == 0 iff C != 0
Pre: C != 0
%y_is_null = icmp eq i64 %y, 0
%overflow = icmp ugt i64 C, %y
%r = and i1 %y_is_null, %overflow
=>
%r = %y_is_null
David Stenberg [Sun, 8 Sep 2019 14:22:06 +0000 (14:22 +0000)]
[DebugInfo][X86] Describe call site values for zero-valued imms
Summary:
Add zero-materializing XORs to X86's describeLoadedValue() hook in order
to produce call site values.
I have had to change the defs logic in collectCallSiteParameters() a bit
to be able to describe the XORs. The XORs implicitly define $eflags,
which would cause them to never be considered, due to a guard condition
that I->getNumDefs() is one. I have changed that condition so that we
now only consider instructions where a forwarded register overlaps with
the instruction's single explicit define. We still need to collect the implicit
defines of other forwarded registers to remove them from the work list.
I'm not sure how to move towards supporting instructions with multiple
explicit defines, cases where forwarded register are implicitly defined,
and/or cases where an instruction produces values for multiple forwarded
registers. Perhaps the describeLoadedValue() hook should take a register
argument, and we then leave it up to the hook to describe the loaded
value in that register? I have not yet encountered a situation where
that would be necessary though.
David Stenberg [Sun, 8 Sep 2019 14:05:10 +0000 (14:05 +0000)]
[NFC] Make the describeLoadedValue() hook return machine operand objects
Summary:
This changes the ParamLoadedValue pair which the describeLoadedValue()
hook returns so that MachineOperand objects are returned instead of
pointers.
When describing call site values we may need to describe operands which
are not part of the instruction. One such example is zero-materializing
XORs on x86, which I have implemented support for in a child revision.
Instead of having to return a pointer to an operand stored somewhere
outside the instruction, start returning objects directly instead, as
that simplifies the code.
The MachineOperand class only holds POD members, and on x86-64 it is 32
bytes in size. That, combined with copy elision, means that the overhead of
returning a machine operand object from the hook does not become very
large.
I benchmarked this on an 8-thread i7-8650U machine with 32 GB RAM. The
benchmark consisted of building a clang 8.0 binary configured with:
Simon Pilgrim [Sun, 8 Sep 2019 11:46:21 +0000 (11:46 +0000)]
[X86][SSE] Add support for <64 x i1> bool reduction
This generalizes the existing <32 x i1> pre-AVX2 split code to support reductions from <64 x i1> as well; we can probably generalize to any larger pow2 case in the future if the (unlikely) need ever arises.
We still need to tweak combineBitcastvxi1 to improve AVX512F codegen, as it assumes vXi1 types should be handled in mask registers even when they aren't legal.
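For reference, a typical all-of bool reduction of this kind goes through a vXi1 bitcast (hypothetical IR, not from the patch's tests):
```
%c  = icmp eq <64 x i8> %a, %b
%bc = bitcast <64 x i1> %c to i64
%r  = icmp eq i64 %bc, -1        ; true iff all 64 lanes compared equal
```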
[X86] Make getZeroVector return floating point vectors in their native type on SSE2 and later.
isel used to require zero vectors to be canonicalized to a single
type to minimize the number of patterns needed to match. This is
no longer required.
I plan to do this to integers too, but floating point was simpler
to start with. Integer has a complication where v32i16/v64i8 aren't
legal when the other 512-bit integer types are.
Summary:
Similar to the previous prefer-256-bit flag. We might want to
enable this by default for some CPUs. This just starts the initial
work to implement it and prove that it affects TTI's vector width.
Simon Pilgrim [Sat, 7 Sep 2019 16:13:57 +0000 (16:13 +0000)]
[X86] Avoid uses of getZExtValue(). NFCI.
Use getAPIntValue() directly - this is mainly a best-practice style issue to help prevent fuzz tests from blowing up when an i12345 (or whatever) is generated.
Use getConstantOperandVal/getConstantOperandAPInt wrappers where possible.
[CodeGen] Handle SMULFIXSAT with scale zero in TargetLowering::expandFixedPointMul
Summary:
Normally TargetLowering::expandFixedPointMul would handle
SMULFIXSAT with scale zero by using an SMULO to compute the
product and determine if saturation is needed (if overflow
happened). But if SMULO isn't custom/legal, it falls through
and uses the same technique as for non-zero scales, using
MULHS/SMUL_LOHI.
The problem was that, when checking for overflow (handling saturation)
without using MULO, we did not expect to find a zero scale. So
we ended up hitting an assertion when doing
APInt::getLowBitsSet(VTSize, Scale - 1), since Scale - 1 wraps around
when Scale is zero.
This patch fixes the problem by adding a new special case for
how saturation is computed when scale is zero.
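For reference, a scale of zero corresponds to a plain saturating multiply; a sketch using the llvm.smul.fix.sat intrinsic:
```
; scale 0: the product is not shifted, only saturated on overflow
%r = call i32 @llvm.smul.fix.sat.i32(i32 %a, i32 %b, i32 0)
```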
Summary:
Add an intrinsic that takes two unsigned integers, with their
scale provided as the third argument, and performs fixed-point
multiplication on them. The result is saturated, clamped between
the largest and smallest values representable by the first two
operands.
This is a part of implementing fixed point arithmetic
in clang where some of the more complex operations
will be implemented as intrinsics.
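Assuming this refers to the llvm.umul.fix.sat intrinsic, a call might look like the following sketch (the 16-bit scale is illustrative):
```
; multiply two unsigned 16.16 fixed-point values, saturating the result
%r = call i32 @llvm.umul.fix.sat.i32(i32 %a, i32 %b, i32 16)
```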
[X86] Fix pshuflw formation from repeated shuffle mask (PR43230)
Fix for https://bugs.llvm.org/show_bug.cgi?id=43230.
When creating PSHUFLW from a repeated shuffle mask, we have to apply
the checks to the repeated mask, not the original one. For the test
case from PR43230 the inspected part of the original mask is all undef.
This addresses the issue mentioned on D19867. When we simplify
with.overflow instructions in CVP, we leave behind extractvalue
of insertvalue sequences that LVI no longer understands. This
means that we cannot simplify any instructions based on the
with.overflow anymore (until some other pass like InstCombine
cleans them up).
This patch extends LVI extractvalue handling by calling
SimplifyExtractValueInst (which doesn't do anything more than
constant folding + looking through insertvalue) and using the block
value of the simplification.
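As a sketch (hypothetical IR) of the sequences involved, the residue left behind after CVP rewrites a with.overflow call looks roughly like:
```
; residue left after CVP replaces @llvm.uadd.with.overflow.i64
%sum  = add i64 %x, %y
%agg0 = insertvalue { i64, i1 } undef, i64 %sum, 0
%agg1 = insertvalue { i64, i1 } %agg0, i1 false, 1
%val  = extractvalue { i64, i1 } %agg1, 0   ; simplifies to %sum
%ovf  = extractvalue { i64, i1 } %agg1, 1   ; simplifies to false
```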
A possible alternative would be to do something similar to
SimplifyIndVars, where we instead directly try to replace
extractvalue users of the with.overflow. This would need some
additional structural changes to CVP, as it's currently not legal
to remove anything but the current instruction -- we'd have to
introduce a worklist with instructions scheduled for deletion or similar.
[DwarfExpression] Disallow some rewrites to avoid undefined behavior
Summary:
The value operand of DW_OP_plus_uconst/DW_OP_constu can be
large (it is represented as a uint64_t internally in LLVM).
This means that the uint64_t to int conversions previously done
by DwarfExpression::addMachineRegExpression could lose information.
Also, the negation done in "-Offset" was undefined behavior in case
Offset was exactly INT_MIN.
To avoid the above problems, we now avoid transformations like
[Reg, DW_OP_plus_uconst, Offset] --> [DW_OP_breg, Offset]
and
[Reg, DW_OP_constu, Offset, DW_OP_plus] --> [DW_OP_breg, Offset]
when Offset > INT_MAX.
And we avoid transforming
[Reg, DW_OP_constu, Offset, DW_OP_minus] --> [DW_OP_breg,-Offset]
when Offset > INT_MAX+1.
The patch also adjusts DwarfCompileUnit::constructVariableDIEImpl
to make sure that "DW_OP_constu, Offset, DW_OP_minus" is used
instead of "DW_OP_plus_uconst, Offset" when creating DIExpressions
with negative frame index offsets.
Notice that this might just be the tip of the iceberg. There
is a lot of fishy handling related to these constants. I think both
DIExpression::appendOffset and DIExpression::extractIfOffset may
trigger undefined behavior for certain values.
Replicate the change "[Alignment][NFC] Use Align with TargetLowering::setMinFunctionAlignment"
on AVR to avoid a breakage.
See r371200 / https://reviews.llvm.org/D67229
Summary:
This patch introduces an initial `AAValueSimplify`, which simplifies a value in a given context.
For example:
- (for function return values) If all the return values are the same constant, then we can replace the value returned at the call site with that constant (see the sketch after this list).
- If an internal function is passed the same value (a constant) as an argument at every call site, then we can replace the argument with that constant.
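A minimal sketch of the first case (hypothetical IR):
```
define internal i32 @callee(i1 %c) {
entry:
  br i1 %c, label %a, label %b
a:
  ret i32 7
b:
  ret i32 7
}

define i32 @caller(i1 %c) {
  ; all of @callee's returns are the constant 7, so %r can be
  ; simplified to 7 at the call site
  %r = call i32 @callee(i1 %c)
  ret i32 %r
}
```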
Teresa Johnson [Sat, 7 Sep 2019 03:09:36 +0000 (03:09 +0000)]
Change TargetLibraryInfo analysis passes to always require Function
Summary:
This is the first change to enable the TLI to be built per-function so
that -fno-builtin* handling can be migrated to use function attributes.
See discussion on D61634 for background. This is an enabler for fixing
handling of these options for LTO, for example.
This change should not affect behavior, as the provided function is not
yet used to build a specifically per-function TLI, but rather enables
that migration.
Most of the changes were very mechanical, e.g. passing a Function to the
legacy analysis pass's getTLI interface, or in Module level cases,
adding a callback. This is similar to the way the per-function TTI
analysis works.
There was one place where we were looking for builtins but not in the
context of a specific function. See FindCXAAtExit in
lib/Transforms/IPO/GlobalOpt.cpp. I'm somewhat concerned my workaround
could provide the wrong behavior in some corner cases. Suggestions
welcome.
[AArch64][GlobalISel] Enable the localizer for optimized builds.
Despite the fact that the localizer's original motivation was to fix horrendous
constant spilling at -O0, shortening live ranges still has net benefits even
with optimizations enabled.
On an -Os build of CTMark, doing this improves code size by 0.5% geomean.
There are a few regressions, with bullet increasing in size by 0.5%. One example from
bullet where code size increased slightly was due to GlobalISel actually now
generating the same code as SelectionDAG. So we actually have an opportunity
in future to implement better heuristics for localization and therefore be
*better* than SDAG in some cases. In relation to other optimizations though that
one is relatively minor.
[InstCombine] Refactor substitution of instruction in the parent BB (NFC)
Add the new method `LibCallSimplifier::substituteInParent()` that calls
`LibCallSimplifier::replaceAllUsesWith()` and
`LibCallSimplifier::eraseFromParent()` back to back, simplifying the
resulting code.
[IR] CallBrInst: scan+update arg list when indirect dest list changes
Summary:
There's an unspoken invariant of callbr that the list of BlockAddress
Constants in the "function args" list matches the BasicBlocks in the
"other labels" list. (This invariant is being added to the LangRef in
https://reviews.llvm.org/D67196).
When modifying any of the indirect destinations of a callbr
instruction (possible jump targets), we need to update the function
arguments if the argument is a BlockAddress whose BasicBlock refers to
the indirect destination BasicBlock being replaced. Otherwise, many
transforms that modify successors will end up violating that invariant.
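A minimal sketch of the invariant (hypothetical IR; @f and the labels are made up):
```
callbr void asm sideeffect "jmp ${1:l}", "r,X"(i32 %x, i8* blockaddress(@f, %indirect))
    to label %fallthrough [label %indirect]
; if %indirect is replaced by another block, the blockaddress argument
; in the argument list must be updated to refer to the new block
```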
A recent change to the arm64 Linux kernel exposed this bug, which
prevents the kernel from booting.
I considered maintaining a mapping from indirect destination BasicBlock
to argument operand BlockAddress, but this ends up being a one to
potentially many (though usually one) mapping. Also, the list of
arguments to a function (or more typically inline assembly) ends up
being less than 10. The implementation is significantly simpler to just
rescan the full list of arguments. Because of the one to potentially
many relationship, the full arg list must be scanned (we can't stop at
the first instance).
Thanks to the following folks that reported the issue and helped debug
it:
* Nathan Chancellor
* Will Deacon
* Andrew Murray
* Craig Topper
Lang Hames [Fri, 6 Sep 2019 19:21:59 +0000 (19:21 +0000)]
[ORC] Make sure RPC channel-send is called in blocking calls and responses.
ORC-RPC batches calls by default, and the channel's send method must be called
to transfer any buffered calls to the remote. The call to send was missing on
responses and blocking calls in the SingleThreadedRPCEndpoint. This patch adds
the necessary calls and modifies the RPC unit test to check for them.
Lang Hames [Fri, 6 Sep 2019 19:21:55 +0000 (19:21 +0000)]
[llvm-jitlink] Add optional slab allocator for testing locality optimizations.
The llvm-jitlink utility now accepts a '-slab-allocate <size>' option. If given,
llvm-jitlink will use a slab-based memory manager rather than the default
InProcessMemoryManager. Using a slab allocator will allow reliable testing of
future locality based optimizations (e.g. PLT and GOT elimination) in JITLink.
The <size> argument is a number, optionally followed by a units specifier (Kb,
Mb, or Gb). If the units are not given then the number is assumed to be in Kb.
[X86] Use MOVSX by default instead of CBW to extend i8 to AX for i8 sdivrem.
We can use a MOVSX16 here and then rely on FixupBWInsts to change it to
MOVSX32 if the upper bits are dead, with a special case to
not promote if it could be turned into CBW.
Then we can rely on X86MCInstLower to turn the MOVSX into CBW
very late if register allocation worked out.
Using MOVSX gives an opportunity to use the MOVSX as both a
copy and a sign extend, since the input and output registers aren't
tied together.
[X86] Use MOVZX16rr8/MOVZX16rm8 when extending input for i8 udivrem.
We can rely on X86FixupBWInsts to turn these into MOVZX32. This
simplifies a follow up commit to use MOVSX for i8 sdivrem with
a late optimization to use CBW when register allocation works out.
[X86] Teach FixupBWInsts to turn MOVSX16rr8/MOVZX16rr8/MOVSX16rm8/MOVZX16rm8 into their 32-bit dest equivalents when the upper part of the register is dead.
Kevin P. Neal [Fri, 6 Sep 2019 18:04:34 +0000 (18:04 +0000)]
[FPEnv] Teach the IRBuilder about constrained FPToSI and FPToUI.
The IRBuilder doesn't know that the two floating point to integer instructions
have constrained equivalents. This patch adds that support by building on
the strict FP mode now present in the IRBuilder.
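With strict FP mode active, the builder should emit the constrained intrinsic instead of the plain instruction; roughly (a sketch):
```
; instead of:  %i = fptosi double %x to i32
%i = call i32 @llvm.experimental.constrained.fptosi.i32.f64(double %x, metadata !"fpexcept.strict")
```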
Reviewed by: John McCall
Approved by: John McCall
Differential Revision: https://reviews.llvm.org/D67291
[AArch64][GlobalISel] Always fall back on tail calls with -tailcallopt
-tailcallopt requires that we perform different stack adjustments than with
sibling calls. For example, the `@caller_to0_from8` function in
test/CodeGen/AArch64/tail-call.ll requires that we adjust SP. Without
-tailcallopt, this adjustment does not happen. With it, however, it is expected.
So, to ensure that adding sibling call support doesn't break -tailcallopt,
make CallLowering always fall back on possible tail calls when -tailcallopt
is passed in.
Update test/CodeGen/AArch64/tail-call.ll with a GlobalISel line to make sure
that we don't differ from the SDAG implementation at any point.
Summary:
This isn't an important optimization at all... We're already doing:
pow(x, 0.0) -> 1.0
My patch merely teaches instcombine that -0.0 does the same.
However, doing this fixes an AMAZING bug! Compile this program:
And clang will crash with a signal. Wow, fast math is so fast it ICEs the
compiler! Arguably, the generated math is infinitely fast.
What's actually happening is that we recurse infinitely in getPow. In debug we
hit its assertion:
assert(Exp != 0 && "Incorrect exponent 0 not handled");
We avoid this entire mess if we instead recognize that an exponent of positive
or negative zero yields 1.0.
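In IR terms, the added fold is simply (a sketch):
```
; instcombine now folds this to 1.0, matching the existing pow(x, 0.0) fold
%r = call double @llvm.pow.f64(double %x, double -0.000000e+00)
```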
A separate commit, r371221, fixed the same problem. This only contains the added
tests.
[Alignment][NFC] Use Align with TargetLowering::setPrefFunctionAlignment
Summary:
This patch is part of a series to introduce an Alignment type.
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
See this patch for the introduction of the type: https://reviews.llvm.org/D64790
[Alignment][NFC] Use Align with TargetLowering::setPrefLoopAlignment
Summary:
This patch is part of a series to introduce an Alignment type.
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
See this patch for the introduction of the type: https://reviews.llvm.org/D64790
Summary:
This was discovered while introducing the llvm::Align type.
The original setMinFunctionAlignment used to take the alignment as log2; looking at the comment, it seems like instructions are meant to be 2-byte aligned and not 4-byte aligned.
George Rimar [Fri, 6 Sep 2019 13:05:34 +0000 (13:05 +0000)]
[llvm-readelf] - Print unknown st_other value if present in GNU output.
This is a fix for https://bugs.llvm.org/show_bug.cgi?id=40785.
llvm-readelf does not print the st_other value of the symbol when
st_other has any non-visibility bits set.
This patch:
* Aligns the "Ndx" row for the default and the new cases.
(it was 1 space character off for the case when "PROTECTED" visibility was printed)
* Prints "[<other>: 0x??]" for symbols which have additional st_other bits set.
Compared with GNU, this logic is a bit simpler and seems to be more consistent.
For MIPS, GNU can print named flags, though it can't print a mix of them:
0: 00000000 0 NOTYPE LOCAL DEFAULT UND
1: 00000000 0 NOTYPE GLOBAL DEFAULT [OPTIONAL] UND a1
2: 00000000 0 NOTYPE GLOBAL DEFAULT [MIPS PLT] UND a2
3: 00000000 0 NOTYPE GLOBAL DEFAULT [MIPS PIC] UND a3
4: 00000000 0 NOTYPE GLOBAL DEFAULT [MICROMIPS] UND a4
5: 00000000 0 NOTYPE GLOBAL DEFAULT [MIPS16] UND a5
6: 00000000 0 NOTYPE GLOBAL DEFAULT [<other>: c] UND b1
7: 00000000 0 NOTYPE GLOBAL DEFAULT [<other>: 28] UND b2
On PPC64 it can print a localentry value that is encoded in the high bits of st_other
63: 0000000000000850 208 FUNC GLOBAL DEFAULT [<localentry>: 8] 12
We chose to print the raw st_other field, prefixed with '0x'.
[Alignment][NFC] Use Align with TargetLowering::setMinFunctionAlignment
Summary:
This patch is part of a series to introduce an Alignment type.
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
See this patch for the introduction of the type: https://reviews.llvm.org/D64790
James Molloy [Fri, 6 Sep 2019 12:20:08 +0000 (12:20 +0000)]
[DFAPacketizer] Track resources for packetized instructions
This patch allows the DFAPacketizer to be queried after a packet is formed to work out which
resources were allocated to the packetized instructions.
This is particularly important for targets that do their own bundle packing - it's not
sufficient to know simply that instructions can share a packet; which slots are used is
also required for encoding.
This extends the emitter to emit a side-table containing resource usage diffs for each
state transition. The packetizer maintains a set of all possible resource states in its
current state. After packetization is complete, all remaining resource states are
possible packetization strategies.
The sidetable is only ~500K for Hexagon, but the extra tracking is disabled by default
(most uses of the packetizer like MachinePipeliner don't care and don't need the extra
maintained state).