We can't handle the 'uge' case because we can never encounter it:
the compare would need an extra use to survive canonicalization,
but because of that extra use we can't handle it.
Qiu Chaofan [Tue, 13 Aug 2019 07:53:29 +0000 (07:53 +0000)]
[PowerPC] Fix ICE when truncating some vectors
The legalizer would hit an assertion on the PowerPC platform when truncating
a vector whose size is not a power of 2. This patch adds a check to
prevent vectors with such odd-sized elements from being custom lowered.
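A minimal sketch of the kind of guard described, assuming an EVT-based helper (names illustrative, not the actual PowerPC change):
```
// Names illustrative; sketch of the guard's shape only.
#include "llvm/CodeGen/ValueTypes.h"
#include "llvm/Support/MathExtras.h"
using namespace llvm;

static bool safeToCustomLowerTrunc(EVT VT) {
  // Skip custom lowering for vectors whose size is not a power of 2;
  // those would otherwise hit the legalizer assertion.
  return VT.isVector() && isPowerOf2_32(VT.getSizeInBits());
}
```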
Amara Emerson [Tue, 13 Aug 2019 06:26:59 +0000 (06:26 +0000)]
[GlobalISel] Make the InstructionSelector instance non-const, allowing state to be maintained.
Currently we can't keep any state in the selector object that we get from
the subtarget. As a result we have to plumb all our variables through
multiple functions. This change makes the selector non-const and adds a virtual init()
method to allow further state to be captured for each target.
AArch64 makes use of this in this patch to cache a call to hasFnAttribute(),
which is expensive to call and is needed on each selection of G_BRCOND.
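A hedged sketch of the pattern this enables (simplified signatures; the real InstructionSelector API has more parameters): state that used to be recomputed per instruction can now be cached once per function.
```
// Simplified sketch, not the exact LLVM API.
class InstructionSelector {
public:
  virtual ~InstructionSelector() = default;
  // New virtual hook: called before selection starts for a function.
  virtual void init(/*MachineFunction &MF*/) {}
};

class AArch64InstructionSelector : public InstructionSelector {
  bool CachedFnAttr = false; // cached per-function attribute query
public:
  void init(/*MachineFunction &MF*/) override {
    // Hypothetical: query hasFnAttribute() once here instead of on every
    // G_BRCOND selection.
    // CachedFnAttr = MF.getFunction().hasFnAttribute(...);
  }
};
```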
Akira Hatanaka [Tue, 13 Aug 2019 01:23:06 +0000 (01:23 +0000)]
Do not call replaceAllUsesWith to upgrade calls to ARC runtime functions
to intrinsic calls
This fixes a bug in r368311.
It turns out that the ARC runtime functions in the IR can have pointer
parameter types that are not i8* or i8**. Instead of RAUWing normal
functions with intrinsics, manually bitcast the arguments before passing
them to the intrinsic functions and bitcast the return value back to the
type of the original call instruction.
This recommits r368634, which was reverted in r368637. The loop in the
patch was iterating over uses of a function and deleting function calls
inside it, which caused bots to crash.
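A minimal sketch, with assumed names, of the iterator-invalidation fix behind the recommit: snapshot a function's users before erasing calls, instead of deleting while walking the use list.
```
// Assumed names throughout; a sketch of the fix's shape, not the actual
// upgrade code. Erasing a call while iterating the function's use list
// invalidates the iterator, which is what crashed the bots.
#include "llvm/ADT/SmallVector.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/Instructions.h"
using namespace llvm;

static void upgradeARCCalls(Function &OldFn) {
  SmallVector<User *, 8> Users(OldFn.user_begin(), OldFn.user_end());
  for (User *U : Users) {
    if (auto *CI = dyn_cast<CallInst>(U)) {
      // ... bitcast arguments to the intrinsic's types, emit the intrinsic
      // call, bitcast the result back to CI's type, replace uses of CI ...
      CI->eraseFromParent(); // safe: we are not iterating the use list
    }
  }
}
```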
Daniel Sanders [Tue, 13 Aug 2019 00:55:24 +0000 (00:55 +0000)]
Eliminate implicit Register->unsigned conversions in VirtRegMap. NFC
Summary:
This was mostly an experiment to assess the feasibility of completely
eliminating a problematic implicit conversion case in D61321 in advance of
landing that*, but it also happens to align with the goal of propagating the
use of Register/MCRegister instead of unsigned, so I believe it makes sense
to commit it.
The overall process for eliminating the implicit conversions from
Register/MCRegister -> unsigned was (see the sketch below):
1. Add an explicit conversion to support genuinely required conversions to
unsigned, for example using them as an index for IndexedMap. Sadly it's
not possible to have both an explicit and an implicit conversion to the same
type and deprecate only the implicit one, so I called the explicit
conversion get().
2. Temporarily annotate the implicit conversion to unsigned with
LLVM_ATTRIBUTE_DEPRECATED to make the remaining uses visible.
3. Eliminate the implicit conversions by propagating Register/MCRegister/
explicit-conversions appropriately.
4. Remove the deprecation added in 2.
* My conclusion is that it isn't feasible as there's too much code to
update in one go.
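A simplified sketch of steps 1 and 2, assuming a Register-like wrapper (not the exact llvm::Register definition; [[deprecated]] stands in for LLVM_ATTRIBUTE_DEPRECATED):
```
// Simplified sketch, not the real llvm::Register. Step 1 adds get() for the
// genuinely required conversions; step 2 deprecates the implicit conversion
// so every remaining use shows up as a compiler warning.
class Register {
  unsigned Reg = 0;
public:
  constexpr Register(unsigned R) : Reg(R) {}
  // Step 1: explicit conversion for required cases, e.g. IndexedMap indices.
  constexpr unsigned get() const { return Reg; }
  // Step 2: temporarily deprecated; removed again in step 4 once all
  // implicit uses have been eliminated (step 3).
  [[deprecated("use Register or get()")]]
  constexpr operator unsigned() const { return Reg; }
};
```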
Akira Hatanaka [Mon, 12 Aug 2019 23:53:23 +0000 (23:53 +0000)]
Do not call replaceAllUsesWith to upgrade calls to ARC runtime functions
to intrinsic calls
This fixes a bug in r368311.
It turns out that the ARC runtime functions in the IR can have pointer
parameter types that are not i8* or i8**. Instead of RAUWing normal
functions with intrinsics, manually bitcast the arguments before passing
them to the intrinsic functions and bitcast the return value back to the
type of the original call instruction.
r367088 made it so that funclets store XMM registers into their local
frame instead of storing them to the parent frame. However, that change
forgot to update the parent frame pointer offset for catch blocks. This
change does that.
Fixes crashes when an exception is rethrown in a catch block that saves
XMMs, as described in https://crbug.com/992860.
Juergen Ributzka [Mon, 12 Aug 2019 23:01:07 +0000 (23:01 +0000)]
[TextAPI] Fix & Add tests for tbd files version 3.
- There was a simple typo in the TextStub code that prevented version 3 files from being read.
- Included a version 3 unit test to handle the differences in the format.
- Also fixed a typo in the comments in Error.h.
https://reviews.llvm.org/D66041
This patch is from Cyndy Ishida <cyndy_ishida@apple.com>.
Daniel Sanders [Mon, 12 Aug 2019 22:41:02 +0000 (22:41 +0000)]
[risc-v] Apply llvm-prefer-register-over-unsigned from clang-tidy to LLVM
Summary:
This clang-tidy check looks for unsigned integer variables whose initializer
starts with an implicit cast from llvm::Register, and changes the type of the
variable to llvm::Register (dropping the llvm:: where possible).
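An illustrative before/after for the fixit, assuming MI is a MachineInstr in scope (getReg() returns an llvm::Register):
```
// Before the fixit: the initializer implicitly converts Register -> unsigned.
unsigned DstReg = MI.getOperand(0).getReg();
// After the fixit: the variable keeps the llvm::Register type.
Register DstReg = MI.getOperand(0).getReg();
```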
Daniel Sanders [Mon, 12 Aug 2019 22:40:53 +0000 (22:40 +0000)]
[aarch64] Apply llvm-prefer-register-over-unsigned from clang-tidy to LLVM
Summary:
This clang-tidy check looks for unsigned integer variables whose initializer
starts with an implicit cast from llvm::Register, and changes the type of the
variable to llvm::Register (dropping the llvm:: where possible).
Manual fixups in:
AArch64InstrInfo.cpp - genFusedMultiply() now takes a Register* instead of unsigned*
AArch64LoadStoreOptimizer.cpp - Ternary operator was ambiguous between Register/MCRegister. Settled on Register
Daniel Sanders [Mon, 12 Aug 2019 22:40:45 +0000 (22:40 +0000)]
[webassembly] Apply llvm-prefer-register-over-unsigned from clang-tidy to LLVM
Summary:
This clang-tidy check looks for unsigned integer variables whose initializer
starts with an implicit cast from llvm::Register, and changes the type of the
variable to llvm::Register (dropping the llvm:: where possible).
[AMDGPU] Use PredicateControl in MIMGBaseOpcode. NFC.
This is infrastructural and will be needed for future work.
For some reason it was only used in MIMG_NoSampler, while it is
needed everywhere we use MIMGBaseOpcode if we want to use
predicates.
Craig Topper [Mon, 12 Aug 2019 22:18:23 +0000 (22:18 +0000)]
[X86] Allow combineTruncateWithSat to use pack instructions for i16->i8 without AVX512BW.
We need AVX512BW to be able to truncate an i16 vector. If we don't
have that, we have to extend i16->i32, then truncate i32->i8. But then
we won't be able to remove the min/max, at least not
without more special handling.
Craig Topper [Mon, 12 Aug 2019 19:26:37 +0000 (19:26 +0000)]
[X86] Add a paranoid type check to the code that detects AVG patterns from truncating stores.
If we're after type legalization, we should make sure we won't create
a store with an illegal type when we separate the AVG pattern
from the truncating store.
I don't know of a way to make this fail today; I just noticed it while
I was in the vicinity.
Wenlei He [Mon, 12 Aug 2019 17:45:14 +0000 (17:45 +0000)]
[ThinLTO][AutoFDO] Fix memory corruption due to race condition from thin backends
Summary:
This commit fixes a race condition in the multi-threaded ThinLTO backends that caused non-deterministic memory corruption in a data structure used only by AutoFDO with the compact binary profile format.
GUIDToFuncNameMap, a static DenseMap data member of FunctionSamples, is used as a per-module mapping from function-name MD5 to name string when the input AutoFDO profile is in compact binary format. With ThinLTO, however, parallel backends can modify and access this class-static map concurrently. The fix is to make GUIDToFuncNameMap a member of SampleProfileLoader instead of file-static data.
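A schematic sketch of the fix with simplified member types (the real code carries more machinery):
```
// Schematic only. Before: a static member of FunctionSamples, mutated
// concurrently by parallel ThinLTO backends. After: each backend's loader
// owns a private map, so there is no cross-thread sharing and no race.
#include "llvm/ADT/DenseMap.h"
#include <cstdint>
#include <string>

struct SampleProfileLoader {
  // Per-module MD5 -> function-name mapping; one per loader instance.
  llvm::DenseMap<uint64_t, std::string> GUIDToFuncNameMap;
};
```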
This pass is a port of the corresponding pass from the HSAIL compiler.
It parses printf calls and sets up the runtime printf buffer.
After that it copies the printf arguments to the buffer and fills in
module metadata for the runtime.
Sean Fertile [Mon, 12 Aug 2019 15:27:40 +0000 (15:27 +0000)]
[XCOFF] Use a single symbolic constant for the size of an embedded name. [NFC]
Convert SymbolNameSize and SectionNameSize into just `NameSize`. The length of
a name embedded in a symbol table entry or section header table entry is 8
for Sections, Symbols and Files, so there is no need for a distinct constant
for each one. Also removes the Size argument to 'generateStringRef', as the
size is always 'XCOFF::NameSize'.
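A sketch of the consolidated constant (header placement assumed):
```
// Sketch; actual header placement assumed. One constant replaces the
// separate SymbolNameSize and SectionNameSize, since the embedded name
// field is 8 bytes in symbol table and section header table entries alike.
#include <cstddef>

namespace XCOFF {
constexpr std::size_t NameSize = 8;
} // namespace XCOFF
```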
See https://crbug.com/992871#c3 for how to reproduce.
> Patch https://reviews.llvm.org/D43256 introduced more aggressive loop layout optimization which depends on profile information. If profile information is not available, the statically estimated profile information (generated by BranchProbabilityInfo.cpp) is used. If the user program doesn't behave as BranchProbabilityInfo.cpp expects, the layout may be worse.
>
> To be conservative this patch restores the original layout algorithm in plain mode. But user can still try the aggressive layout optimization with -force-precise-rotation-cost=true.
>
> Differential Revision: https://reviews.llvm.org/D65673
Owen Reynolds [Mon, 12 Aug 2019 14:00:28 +0000 (14:00 +0000)]
[llvm-ar] Accept file paths with windows format slashes
The internal representation of llvm-ar archives uses Linux-style slashes
for paths, no matter the OS. On Windows this meant that input file paths
intended to match existing members would only match if Linux-style slashes
were used. This change allows the user to input either slash direction.
This change also includes removing an unnecessary call to normalisePath and
moving the call of another.
Sam Elliott [Mon, 12 Aug 2019 13:51:00 +0000 (13:51 +0000)]
[RISCV] Fix ICE in isDesirableToCommuteWithShift
Summary:
Ana Pazos reported a bug where we were not checking that an APInt would
fit into 64 bits before calling `getSExtValue()`. This caused asserts when
compiling large constants, such as i128s, which happens when compiling compiler-rt.
This patch adds a testcase and makes the callback less error-prone.
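A minimal sketch of the guard, assuming the callback receives the shift amount as an APInt (names illustrative):
```
// Names illustrative; a sketch of the fix's shape only. getSExtValue()
// asserts unless the value fits in 64 bits, e.g. for the i128 constants
// that show up when building compiler-rt.
#include "llvm/ADT/APInt.h"
using namespace llvm;

static bool isDesirableShiftAmount(const APInt &C) {
  if (C.getMinSignedBits() > 64)
    return false; // would assert in getSExtValue(); bail out conservatively
  int64_t Amount = C.getSExtValue();
  // ... the real profitability logic would use Amount here ...
  return Amount >= 0;
}
```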
Kang Zhang [Mon, 12 Aug 2019 13:15:31 +0000 (13:15 +0000)]
[CodeGen] Do the Simple Early Return in block-placement pass to optimize the blocks
Summary:
In the `block-placement` pass, it will create some patterns of unconditional branches for which we can do a simple early return.
But the `early-ret` pass runs before `block-placement`, and we don't want to run it again.
This patch does the simple early return to optimize the blocks at the end of `block-placement`.
Owen Reynolds [Mon, 12 Aug 2019 13:04:02 +0000 (13:04 +0000)]
[llvm-ar][test] Correct tests marked as expected fails
In diff D64802 I marked three tests as expected failures for Darwin, but
James Nagurne saw these fail on his downstream embedded ARM cross
compiler.
I believe XFAIL: system-darwin should be used instead of XFAIL:
darwin, because the problem is related to the Darwin host and not the
target.
Hans Wennborg [Mon, 12 Aug 2019 12:43:51 +0000 (12:43 +0000)]
Revert r368509 "[CodeGen] Do the Simple Early Return in block-placement pass to optimize the blocks"
> In the `block-placement` pass, it will create some patterns of unconditional branches for which we can do a simple early return.
> But the `early-ret` pass runs before `block-placement`, and we don't want to run it again.
> This patch does the simple early return to optimize the blocks at the end of `block-placement`.
>
> Reviewed By: efriedma
>
> Differential Revision: https://reviews.llvm.org/D63972
This also reverts the follow-ups r368514 and r368532.
James Henderson [Mon, 12 Aug 2019 11:36:11 +0000 (11:36 +0000)]
[llvm-strings] Improve testing of llvm-strings
This patch tidies up the llvm-strings testing by:
1. Adding comments to every test.
2. Getting rid of canned input files, and having the tests generate
them on the fly (this makes the tests self-contained).
3. Adding missing test coverage.
4. Renaming some tests that weren't clear as to their purpose.
5. Adding extra checking of various cases, formatting etc.
6. Removing a test that didn't seem to have any useful purpose for
testing llvm-strings.
Craig Topper [Mon, 12 Aug 2019 06:55:58 +0000 (06:55 +0000)]
[X86] Add some reduction add test cases that show sub-optimal code on avx2 and later.
For v4i8 and v8i8, when the reduction starts with a load, we end up
shifting the data in the scalar domain and copying to the vector
domain a second time using a broadcast.
We already copied it to the vector domain once; it's better to
just shuffle it there.
Bjorn Pettersson [Sun, 11 Aug 2019 19:27:06 +0000 (19:27 +0000)]
[SelectionDAG] Widen vector results of SMULFIX/UMULFIX/SMULFIXSAT
Summary:
After the commits that changed the x86 backend to widen vectors
instead of using promotion, some of our downstream tests
started to fail. It was noticed that WidenVectorResult was
missing support for SMULFIX/UMULFIX/SMULFIXSAT. This
patch adds the missing functionality.
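An illustrative shape of the fix (the helper name is assumed; the scale operand of the fixed-point multiply is a scalar and is left untouched). Inside the type legalizer's WidenVectorResult dispatch:
```
// Helper name assumed; sketch only.
switch (N->getOpcode()) {
// ...
case ISD::SMULFIX:
case ISD::SMULFIXSAT:
case ISD::UMULFIX:
  Res = WidenVecRes_FixedPointMul(N); // widen the vector operands only
  break;
// ...
}
```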
David Green [Sun, 11 Aug 2019 08:53:18 +0000 (08:53 +0000)]
[MVE] Don't try to unroll vectorised MVE loops
Due to the nature of the beat system in the MVE architecture, along with tail
predication and low-overhead loops, unrolling has less benefit compared to
normal loops. You can not, for example, hide the latency of a load with other
instructions as you can for scalar code. Preventing unrolling also makes the
code easier to read and reason about.
So if a loop contains vector code, don't enable runtime unrolling, at least
for the time being.
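A hedged sketch of the heuristic (names illustrative; assumed to sit inside a TTI getUnrollingPreferences-style hook with the loop available):
```
// Names illustrative; sketch of the check only. If the loop contains any
// vector values, leave runtime unrolling disabled.
#include "llvm/ADT/STLExtras.h"
#include "llvm/Analysis/LoopInfo.h"
#include "llvm/IR/Instructions.h"
using namespace llvm;

static bool containsVectorCode(const Loop *L) {
  for (BasicBlock *BB : L->blocks())
    for (Instruction &I : *BB)
      if (I.getType()->isVectorTy() ||
          any_of(I.operands(),
                 [](const Use &U) { return U->getType()->isVectorTy(); }))
        return true;
  return false;
}
```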
David Green [Sun, 11 Aug 2019 08:42:57 +0000 (08:42 +0000)]
[ARM] Permit auto-vectorization using MVE
With enough codegen complete, we can now correctly report the number and size
of vector registers for MVE, allowing auto vectorisation. This also allows FP
auto-vectorization for MVE without -Ofast/-ffast-math, due to support for IEEE
FP arithmetic and parity between scalar and vector FP behaviour.
Heejin Ahn [Sun, 11 Aug 2019 06:24:07 +0000 (06:24 +0000)]
Fix __clang_call_terminate's argument for foreign exceptions
Summary:
When exceptions are repeatedly thrown in the middle of handling another
exception, we call `__clang_call_terminate` with the exception pointer
(i32) as an argument. But in the case of foreign exceptions, we don't have
the pointer, so we call the function with 0. (This requires that
`__clang_call_terminate` can deal with a 0 argument, which will be done
later.)
But previously the 0 argument was added by mistake not as an `i32.const 0`
but as an immediate, causing the `call` instruction to take not an i32
but rather an exnref, because an `exnref` is left on top of the value
stack if `br_on_exn` is not taken.
```
block i32
br_on_exn 0, __cpp_exception
;; exnref is on top of stack now
i32.const 0 ;; This was missing!
call __clang_call_terminate
unreachable
end
call __clang_call_terminate ;; This takes i32 extracted by br_on_exn
```
Wenlei He [Sun, 11 Aug 2019 06:05:35 +0000 (06:05 +0000)]
[LICM] Make Loop ICM profile aware
Summary:
Hoisting/sinking an instruction out of a loop isn't always beneficial. Hoisting an instruction from a cold block inside a loop body out of the loop could hurt performance. This change makes Loop ICM profile aware - it now checks block frequency to make sure hoisting/sinking only moves instructions to colder blocks.
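A minimal sketch of the profile check, assuming a BlockFrequencyInfo is available (names illustrative):
```
// Names illustrative; minimal sketch. Moving an instruction from a cold
// block to a hotter one would execute it more often, so only hoist/sink
// when the destination is no hotter than the source.
#include "llvm/Analysis/BlockFrequencyInfo.h"
using namespace llvm;

static bool profitableToMove(const BlockFrequencyInfo &BFI,
                             const BasicBlock *From, const BasicBlock *To) {
  return BFI.getBlockFreq(To) <= BFI.getBlockFreq(From);
}
```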
Craig Topper [Sun, 11 Aug 2019 02:08:38 +0000 (02:08 +0000)]
[X86] Remove some code from combineShuffle that seems largely unnecessary with widening legalization.
The test case that changed is probably better served through
allowing combineTruncatedArithmetic to create narrow vectors. It
also appears InstCombine would have simplified this test case
to remove the zext and trunc anyway.
Roman Lebedev [Sat, 10 Aug 2019 19:28:54 +0000 (19:28 +0000)]
[InstCombine] Shift amount reassociation in bittest: relax one-use check when shifting constant
If one of the values being shifted is a constant then, since the new shift
amount is known-constant, the new shift will end up being constant-folded,
so we don't need the one-use restriction in that case.
Roman Lebedev [Sat, 10 Aug 2019 19:28:44 +0000 (19:28 +0000)]
[InstCombine] Shift amount reassociation in bittest: drop pointless one-use restriction
That one-use restriction is not needed for correctness - we have already
ensured that one of the shifts will go away, so we know we won't increase
the instruction count.
Simon Pilgrim [Sat, 10 Aug 2019 16:46:07 +0000 (16:46 +0000)]
[X86][SSE] Lower shuffle as ANY_EXTEND_VECTOR_INREG
On SSE41+ targets we always lower vector shuffles to ZERO_EXTEND_VECTOR_INREG, even if we don't need the extended bits.
This patch relaxes this so that we lower to ANY_EXTEND_VECTOR_INREG if we can, meaning that shuffle combines have a better idea of what elements need to be kept zero. This helps the multiple reduction code as we can now combine away a lot more of the pack+extend codes.
Sanjay Patel [Sat, 10 Aug 2019 13:17:54 +0000 (13:17 +0000)]
[Reassociate] try harder to convert negative FP constants to positive
This is an extension of a transform that tries to produce positive floating-point
constants to improve canonicalization (and hopefully lead to more reassociation
and CSE).
The original patches were:
D4904
D5363 (rL221721)
But as the test diffs show, these were limited to basic patterns by walking from
an instruction to its single user rather than recursively moving up the def-use
sequence. No fast-math is required here because we're only rearranging implicit
FP negations in intermediate ops.
A motivating bug is:
https://bugs.llvm.org/show_bug.cgi?id=32939
Kang Zhang [Sat, 10 Aug 2019 09:58:52 +0000 (09:58 +0000)]
[CodeGen] Do the Simple Early Return in block-placement pass to optimize the blocks
Summary:
In the `block-placement` pass, it will create some patterns of unconditional branches for which we can do a simple early return.
But the `early-ret` pass runs before `block-placement`, and we don't want to run it again.
This patch does the simple early return to optimize the blocks at the end of `block-placement`.
Craig Topper [Sat, 10 Aug 2019 07:51:13 +0000 (07:51 +0000)]
[X86] Match the IR pattern form of movmsk on SSE1-only targets where v4i32 isn't legal
Summary:
This patch adds a special DAG combine for SSE1 to recognize the IR pattern InstCombine gives us for movmsk. It only does the recognition for a few cases where it's obvious the input won't be scalarized, resulting in building a vector just due to the movmsk. I've kept it separate from our existing matching for movmsk since that's called in multiple places and I didn't spend time to see if the other callers would make sense here; plus the restrictions and additional checks would complicate that.
This fixes the case from PR42870, but it's probably still broken in the presence of logic ops feeding the movmsk pattern, which would further hide the v4f32 type.
Luo, Yuanke [Sat, 10 Aug 2019 02:49:02 +0000 (02:49 +0000)]
[X86] Fix stack probe issue on windows32.
Summary:
On Windows, if the frame size exceeds 4096 bytes, the compiler needs to
generate a call to _alloca_probe. The X86CallFrameOptimization pass
changes the reserved stack size, which causes the stack probe function
not to be inserted. This patch fixes the issue by detecting the call
frame size; if the size exceeds 4096 bytes, X86CallFrameOptimization is dropped.
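A schematic sketch of the guard (names assumed; the real pass inspects its own frame information):
```
// Names assumed; schematic only. Frames over one 4096-byte page need an
// _alloca_probe call on Windows, so the call frame optimization must not
// alter the reserved stack size in that case.
#include <cstdint>

static constexpr uint64_t ProbePageSize = 4096;

static bool shouldSkipCallFrameOpt(uint64_t FrameSizeBytes, bool IsWin32) {
  return IsWin32 && FrameSizeBytes >= ProbePageSize;
}
```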
Fedor Sergeev [Sat, 10 Aug 2019 01:23:38 +0000 (01:23 +0000)]
[MemDep] allow to select block-scan-limit when constructing MemoryDependenceAnalysis
Introduce a non-global control for the default block-scan-limit in the MemDep analysis.
This is useful when there are many compilations per initialized LLVM instance (e.g. a JIT).
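A hedged sketch of the resulting API shape (parameter and default names assumed):
```
// Names assumed; sketch of the API shape only. Embedders that run many
// compilations in one LLVM instance (e.g. a JIT) can now pick a limit per
// constructed analysis instead of relying on the global cl::opt default.
static constexpr unsigned DefaultBlockScanLimit = 100; // assumed default

class MemoryDependenceAnalysis {
  unsigned BlockScanLimit;
public:
  explicit MemoryDependenceAnalysis(unsigned Limit = DefaultBlockScanLimit)
      : BlockScanLimit(Limit) {}
};
```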
cfi-icall: Allow the jump table to be optionally made non-canonical.
The default behavior of Clang's indirect function call checker will replace
the address of each CFI-checked function in the output file's symbol table
with the address of a jump table entry which will pass CFI checks. We refer
to this as making the jump table `canonical`. This property allows code that
was not compiled with ``-fsanitize=cfi-icall`` to take a CFI-valid address
of a function, but it comes with a couple of caveats that are especially
relevant for users of cross-DSO CFI:
- There is a performance and code size overhead associated with each
exported function, because each such function must have an associated
jump table entry, which must be emitted even in the common case where the
function is never address-taken anywhere in the program, and must be used
even for direct calls between DSOs, in addition to the PLT overhead.
- There is no good way to take a CFI-valid address of a function written in
assembly or a language not supported by Clang. The reason is that the code
generator would need to insert a jump table in order to form a CFI-valid
address for assembly functions, but there is no way in general for the
code generator to determine the language of the function. This may be
possible with LTO in the intra-DSO case, but in the cross-DSO case the only
information available is the function declaration. One possible solution
is to add a C wrapper for each assembly function, but these wrappers can
present a significant maintenance burden for heavy users of assembly in
addition to adding runtime overhead.
For these reasons, we provide the option of making the jump table non-canonical
with the flag ``-fno-sanitize-cfi-canonical-jump-tables``. When the jump
table is made non-canonical, symbol table entries point directly to the
function body. Any instances of a function's address being taken in C will
be replaced with a jump table address.
This scheme does have its own caveats, however. It does end up breaking
function address equality more aggressively than the default behavior,
especially in cross-DSO mode which normally preserves function address
equality entirely.
Furthermore, it is occasionally necessary for code not compiled with
``-fsanitize=cfi-icall`` to take a function address that is valid
for CFI. For example, this is necessary when a function's address
is taken by assembly code and then called by CFI-checking C code. The
``__attribute__((cfi_jump_table_canonical))`` attribute may be used to make
the jump table entry of a specific function canonical so that the external
code will end up taking an address for the function that will pass CFI checks.
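For illustration, a declaration using the attribute described above (the function name is hypothetical):
```
// Hypothetical function name. The attribute makes this one function's jump
// table entry canonical, so the address assembly code takes for it remains
// CFI-valid even with -fno-sanitize-cfi-canonical-jump-tables.
__attribute__((cfi_jump_table_canonical)) void invoked_from_asm(void);
```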
Sanjay Patel [Fri, 9 Aug 2019 21:37:32 +0000 (21:37 +0000)]
[DAGCombiner] exclude x*2.0 from normal negation profitability rules
This is the codegen part of fixing:
https://bugs.llvm.org/show_bug.cgi?id=32939
Even with the optimal/canonical IR that is ideally created by D65954,
we would reverse that transform in DAGCombiner and end up with the same
asm on AArch64 or x86.
I see 2 options for trying to correct this:
1. Limit isNegatibleForFree() by special-casing the fmul pattern (this patch).
2. Avoid creating (fmul X, 2.0) in the 1st place by adding a special-case
transform to SelectionDAG::getNode() and/or SelectionDAGBuilder::visitFMul()
that matches the transform done by DAGCombiner.
This seems like the less intrusive patch, but if there's some other reason to
prefer one option over the other, we can change to the other option.
Daniel Sanders [Fri, 9 Aug 2019 21:11:20 +0000 (21:11 +0000)]
[globalisel] Add G_SEXT_INREG
Summary:
Targets often have instructions that can sign-extend certain cases faster
than the equivalent shift-left/arithmetic-shift-right. Such cases can be
identified by matching a shift-left/shift-right pair but there are some
issues with this in the context of combines. For example, suppose you can
sign-extend 8-bit up to 32-bit with a target extend instruction.
%1:_(s32) = G_SHL %0:_(s32), i32 24 # (I've inlined the G_CONSTANT for brevity)
%2:_(s32) = G_ASHR %1:_(s32), i32 24
%3:_(s32) = G_ASHR %2:_(s32), i32 1
would reasonably combine to:
%1:_(s32) = G_SHL %0:_(s32), i32 24
%2:_(s32) = G_ASHR %1:_(s32), i32 25
which no longer matches the special case. If your shifts and extend are
equal cost, this would break even as a pair of shifts but if your shift is
more expensive than the extend then it's cheaper as:
%2:_(s32) = G_SEXT_INREG %0:_(s32), i32 8
%3:_(s32) = G_ASHR %2:_(s32), i32 1
It's possible to match the shift-pair in ISel and emit an extend and ashr.
However, this is far from the only way to break this shift pair and make
it hard to match the extends. Another example is that with the right
known-zeros, this:
%1:_(s32) = G_SHL %0:_(s32), i32 24
%2:_(s32) = G_ASHR %1:_(s32), i32 24
%3:_(s32) = G_MUL %2:_(s32), i32 2
can become:
%1:_(s32) = G_SHL %0:_(s32), i32 24
%2:_(s32) = G_ASHR %1:_(s32), i32 23
All upstream targets have been configured to lower it to the current
G_SHL,G_ASHR pair but will likely want to make it legal in some cases to
handle their faster cases.
To follow up: provide a way to legalize based on the constant. At the
moment, I'm thinking that the best way to achieve this is to provide the
MI in LegalityQuery, but that opens the door to breaking core principles
of the legalizer (legality is not context sensitive). That said, it's
worth noting that looking at other instructions and acting on that
information doesn't violate this principle in itself. It's only a
violation if, at the end of legalization, a pass that checks legality
without being able to see the context would say an instruction might not be
legal. That's a fairly subtle distinction, so to give a concrete example,
saying %2 in:
%1 = G_CONSTANT 16
%2 = G_SEXT_INREG %0, %1
is legal is in violation of that principle if the legality of %2 depends
on %1 being constant and/or being 16. However, legalizing to either:
%2 = G_SEXT_INREG %0, 16
or:
%1 = G_CONSTANT 16
%2:_(s32) = G_SHL %0, %1
%3:_(s32) = G_ASHR %2, %1
depending on whether %1 is constant and 16 does not violate that principle
since both outputs are genuinely legal.
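For reference, a hedged sketch of how a target might opt in once it has a fast sext-in-register instruction (LegalizeRuleSet-style API; exact rule spellings may differ across LLVM versions):
```
// Sketch only. Inside a target's LegalizerInfo constructor, assuming
// LLT s32 = LLT::scalar(32) and s64 = LLT::scalar(64) are defined:
getActionDefinitionsBuilder(TargetOpcode::G_SEXT_INREG)
    .legalFor({s32, s64}) // the target's fast sext-in-register cases
    .lower();             // everything else becomes the G_SHL/G_ASHR pair
```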