Some conditional branch instructions generated by this pass are checking
the wrong condition code. The instructions TBZ and TBNZ are transformed
into B.GE and B.LT instead of B.PL and B.MI respectively. They should
only be checking the Negative bit.
John Brawn [Wed, 28 Jun 2017 14:11:15 +0000 (14:11 +0000)]
[ARM] Improve if-conversion for M-class CPUs without branch predictors
The current heuristic in isProfitableToIfCvt assumes we have a branch predictor,
and so gives the wrong answer in some cases when we don't. This patch adds a
subtarget feature to indicate that a subtarget has no branch predictor, and
changes the heuristic in isProfitableToIfCvt when it's present. This gives a
slight overall improvement in a set of embedded benchmarks on Cortex-M4 and
Cortex-M33.
Teresa Johnson [Wed, 28 Jun 2017 13:07:37 +0000 (13:07 +0000)]
Add zero-length check to memcpy/memset load store loop expansion
Summary:
I was testing this expansion logic in cases other than NVPTX and found
some runtime failures due to the lack of a check for a zero-length
memcpy/memset before the loop. There is already
such a check in the memmove expansion code though.
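As an illustration, a minimal hand-written IR sketch of a guarded
expansion loop (names and block layout are made up for this example, not
the exact code the expansion emits):
  define void @copy_sketch(i8* %dst, i8* %src, i64 %len) {
  entry:
    ; bail out before the loop when the length is zero
    %is.zero = icmp eq i64 %len, 0
    br i1 %is.zero, label %done, label %loop
  loop:
    %i = phi i64 [ 0, %entry ], [ %i.next, %loop ]
    %s = getelementptr inbounds i8, i8* %src, i64 %i
    %d = getelementptr inbounds i8, i8* %dst, i64 %i
    %byte = load i8, i8* %s
    store i8 %byte, i8* %d
    %i.next = add i64 %i, 1
    %again = icmp ult i64 %i.next, %len
    br i1 %again, label %loop, label %done
  done:
    ret void
  }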
Petar Jovanovic [Wed, 28 Jun 2017 10:21:17 +0000 (10:21 +0000)]
[X86] Correct dwarf unwind information in function epilogue
CFI instructions that set the appropriate cfa offset and cfa register are
now inserted in emitEpilogue() in X86FrameLowering.
The majority of the changes in this patch:
1. Ensure that CFI instructions do not affect code generation.
2. Enable maintaining correct information about the cfa offset and cfa
register in a function when basic blocks are reordered, merged, split, or
duplicated.
These changes are target independent and described below.
Changed CFI instructions so that they:
1. are duplicable
2. are not counted as instructions when tail duplicating or tail merging
3. can be compared as equal
Add information to each MachineBasicBlock about the cfa offset and cfa
register that are valid at its entry and exit (incoming and outgoing CFI
info). Add support for updating this information when basic blocks are
merged, split, duplicated, or created. Add a verification pass
(CFIInfoVerifier) that checks that the outgoing cfa offset and register of
predecessor blocks match the incoming values of their successors.
Incoming and outgoing CFI information is used by a late pass
(CFIInstrInserter) that corrects the CFA calculation rule for a basic
block if needed. That means additional CFI instructions get inserted at
the beginning of a basic block to correct the rule for calculating the
CFA. Having CFI instructions in the function epilogue can cause an
incorrect CFA calculation rule for some basic blocks. This can happen if,
due to basic block reordering or the existence of multiple epilogue
blocks, some of the blocks have wrong cfa offset and register values set
by the epilogue block above them.
Nikolai Bozhenov [Wed, 28 Jun 2017 10:08:08 +0000 (10:08 +0000)]
[ValueTracking] Enabling existing ValueTracking patch by default.
The original patch was an improvement to IR ValueTracking on non-negative
integers. It was checked in to trunk (D18777, r284022) but disabled by
default due to performance regressions. The performance impact has since
improved, so the patch is now enabled by default.
Nikolai Bozhenov [Wed, 28 Jun 2017 09:26:20 +0000 (09:26 +0000)]
[InstCombine] Canonicalize clamp of float types to minmax in fast mode.
Summary:
This commit allows matchSelectPattern to recognize a clamp of float
arguments in the presence of FMF, the same way as is already done for
integers.
This case is a little different though. With integers, once the min/max
pattern is recognized, DAGBuilder starts selecting MIN/MAX "automatically".
That is not the case for floats, because for them only the full
FMINNAN/FMINNUM/FMAXNAN/FMAXNUM ISD nodes exist and they do care about
NaNs. On the other hand, some backends (e.g. X86) have only FMIN/FMAX
nodes that do not care about NaNs, and the former NAN/NUM nodes are
illegal, so selection does not happen. So I decided to do this kind of
transformation in IR (InstCombiner) instead of complicating the logic in
the backend.
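For illustration, a hand-written clamp built from two fcmp/select pairs
with fast-math flags; this is the general shape being recognized, not a
test taken from the patch:
  define float @clamp_sketch(float %x, float %lo, float %hi) {
    %lt.lo = fcmp fast olt float %x, %lo
    %max = select i1 %lt.lo, float %lo, float %x      ; max(x, lo)
    %gt.hi = fcmp fast ogt float %max, %hi
    %clamp = select i1 %gt.hi, float %hi, float %max  ; min(max(x, lo), hi)
    ret float %clamp
  }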
Nikolai Bozhenov [Wed, 28 Jun 2017 09:22:58 +0000 (09:22 +0000)]
Add tests to document current InstCombine behavior for clamp pattern.
Summary:
This commit adds tests for the clamp pattern as a prerequisite for D33186,
to make the impact of that fix clearer and to document the current
behavior.
Reviewers: spatel, jmolloy
Reviewed By: spatel
Subscribers: n.bozhenov, llvm-commits
Patch by Andrei Elovikov <andrei.elovikov@intel.com>
George Rimar [Wed, 28 Jun 2017 08:21:19 +0000 (08:21 +0000)]
Recommit "[ELF] - Add ability for DWARFContextInMemory to exit early when any error happen."
With a fix for the character case in the include path:
#include "llvm/Codegen/AsmPrinter.h" -> #include "llvm/CodeGen/AsmPrinter.h"
Original commit message:
This change introduces an error reporting policy for DWARFContextInMemory.
A new callback provided by the client is able to handle an error on its
side and return Halt or Continue.
That allows either keeping the current behavior, where the parser prints
all errors but continues parsing the object, or implementing something
very different, like stopping the parse at the first error and reporting
it in a client-specific style.
Kristof Beyls [Wed, 28 Jun 2017 07:07:03 +0000 (07:07 +0000)]
[ARM] Make -mcpu=generic schedule for an in-order core (Cortex-A8).
The benchmarking summarized in
http://lists.llvm.org/pipermail/llvm-dev/2017-May/113525.html showed
this is beneficial for a wide range of cores.
As is to be expected, quite a few small adaptations are needed to the
regression tests, as the difference in scheduling results in:
- Quite a few small instruction schedule differences.
- A few changes in register allocation decisions caused by different
instruction schedules.
- A few changes in IfConversion decisions, due to a difference in
instruction schedule and/or the estimated cost of a branch mispredict.
George Rimar [Wed, 28 Jun 2017 06:57:20 +0000 (06:57 +0000)]
[ELF] - Add ability for DWARFContextInMemory to exit early when any error happen.
This change introduces an error reporting policy for DWARFContextInMemory.
A new callback provided by the client is able to handle an error on its
side and return Halt or Continue.
That allows either keeping the current behavior, where the parser prints
all errors but continues parsing the object, or implementing something
very different, like stopping the parse at the first error and reporting
it in a client-specific style.
Craig Topper [Wed, 28 Jun 2017 06:45:36 +0000 (06:45 +0000)]
[InstCombine] Add test case demonstrating that we don't handle icmp eq (trunc (lshr(X, cst1)), cst->icmp (and X, mask), cst when the shift type is larger than 64-bits. NFC
Craig Topper [Wed, 28 Jun 2017 06:43:58 +0000 (06:43 +0000)]
Revert r306508 "[InstCombine] Add test case demonstrating that we don't handle icmp eq (trunc (lshr(X, cst1)), cst->icmp (and X, mask), cst when the shift type is larger than 64-bits. NFC"
Craig Topper [Wed, 28 Jun 2017 06:42:48 +0000 (06:42 +0000)]
[InstCombine] Add test case demonstrating that we don't handle icmp eq (trunc (lshr(X, cst1)), cst->icmp (and X, mask), cst when the shift type is larger than 64-bits. NFC
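A hand-written example of the unhandled shape, using an i128 shift type
and arbitrary constants; the intended fold would compare (and %x, mask)
against a correspondingly shifted constant instead:
  define i1 @trunc_lshr_cmp(i128 %x) {
    %sh = lshr i128 %x, 64
    %tr = trunc i128 %sh to i32
    %cmp = icmp eq i32 %tr, 5
    ret i1 %cmp
  }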
Hiroshi Inoue [Wed, 28 Jun 2017 06:14:30 +0000 (06:14 +0000)]
Add missing library dependency to fix build break in llvm-lto2
error message:
CMakeFiles/llvm-lto2.dir/llvm-lto2.cpp.o: In function `dumpSymtab(int, char**)':
llvm-lto2.cpp:(.text._ZL10dumpSymtabiPPc+0x238): undefined reference to `llvm::getBitcodeFileContents(llvm::MemoryBufferRef)'
collect2: error: ld returned 1 exit status
Max Kazantsev [Wed, 28 Jun 2017 04:57:45 +0000 (04:57 +0000)]
[IRCE][NFC] Better get SCEV for 1 in calculateSubRanges
A slightly more efficient way to get the constant: we avoid resolving it
in getSCEV and excessive invocations, and we don't create a ConstantInt if
the 'true' branch is taken.
Allow truncating a left shift with a non-constant shift amount
It is pretty common for clang to produce code like
(shl %x, (and %amt, 31)). In this situation we can still perform the
trunc (shl) into shl (trunc) conversion, given the known value range of
the shift amount.
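A hand-written before/after sketch (value names are arbitrary), in the
same style as the other before/after examples in this log:
  %m = and i64 %amt, 31
  %s = shl i64 %x, %m
  %t = trunc i64 %s to i32
  =>
  %m = and i64 %amt, 31
  %x.tr = trunc i64 %x to i32
  %m.tr = trunc i64 %m to i32
  %t = shl i32 %x.tr, %m.tr    ; valid because %m is known to be < 32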
When simplifying an instruction that has been re-mapped, it should never
simplify to an instruction in the original function. In the edge case
where we are inlining a function into itself, the existing code led to
incorrect behavior. Replace the incorrect code with an assert verifying
that we never expect simplification to produce an instruction in the old
function, unless the functions are the same.
[COFF, ARM64] Add support for Windows ARM64 COFF format
Summary:
This is the llvm part of the initial implementation to support the
Windows ARM64 COFF format.
I will gradually add more functionality in subsequent patches.
Florian Hahn [Tue, 27 Jun 2017 22:27:32 +0000 (22:27 +0000)]
[AArch64] Inline callee if its target-features are a subset of the caller
Summary:
Similar to X86, it should be safe to inline callees if their
target-features are a subset of the caller's. This change matches GCC's
inlining behavior with respect to attributes [1].
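For illustration, a hand-written IR example of the allowed case, where
the callee's target-features are a subset of the caller's; the feature
strings below are chosen only for this example:
  define void @callee() #0 {
    ret void
  }

  define void @caller() #1 {
    ; may now be inlined: the #0 feature set is a subset of #1
    call void @callee()
    ret void
  }

  attributes #0 = { "target-features"="+neon" }
  attributes #1 = { "target-features"="+crc,+neon" }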
Sanjay Patel [Tue, 27 Jun 2017 21:46:34 +0000 (21:46 +0000)]
[CGP] eliminate a sub instruction in memcmp expansion
As noted in D34071, there are some IR optimization opportunities that could be
handled by normal IR passes if this expansion wasn't happening so late in CGP.
Regardless of that, it seems wasteful to knowingly produce suboptimal IR here,
so I'm proposing this change:
%s = sub i32 %x, %y
%r = icmp ne %s, 0
=>
%r = icmp ne %x, %y
Changing the predicate to 'eq' mimics what InstCombine would do, so that's just
an efficiency improvement if we decide this expansion should happen sooner.
The fact that the PowerPC backend doesn't eliminate the 'subf.' might be
something for PPC folks to investigate separately.
Tim Northover [Tue, 27 Jun 2017 21:41:40 +0000 (21:41 +0000)]
GlobalISel: verify that a COPY is trivial when created.
Without this check, COPY instructions can actually be one of the generic casts
in disguise. That's confusing and bad.
At some point during ISel this restriction has to be relaxed since the fully
selected instructions will usually use COPY for those purposes. Right now I
think it's possible that relaxation occurs during RegBankSelect (hence the
change there). I'm not convinced that's where it belongs long-term though.
Joel Jones [Tue, 27 Jun 2017 20:44:55 +0000 (20:44 +0000)]
[AArch64] Performance enhancements for Cavium ThunderX2 T99
This patch enables significant performance enhancements to the
Cavium ThunderX2T99 LLVM backend, as observed by running SPEC2K6,
by adding more detailed scheduling information.
Related Bugzilla bug: http://bugs.llvm.org/show_bug.cgi?id=32562
Craig Topper [Tue, 27 Jun 2017 19:57:53 +0000 (19:57 +0000)]
[InstCombine] Propagate nsw flag when turning mul by pow2 into shift when the constant is a vector splat or the scalar bit width is larger than 64-bits
The check to see if we can propagate the nsw flag used
m_ConstantInt(uint64_t*&), which doesn't work with splat vectors and has
the restriction that the bitwidth of the ConstantInt must be 64 bits or
less.
This patch changes it to use m_APInt to remove both of these issues.
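For example (hand-written), the flag can now be kept for a splat-vector
power-of-two multiply:
  %r = mul nsw <2 x i64> %x, <i64 8, i64 8>
  =>
  %r = shl nsw <2 x i64> %x, <i64 3, i64 3>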
[AMDGPU] Simplify setcc (sext from i1 b), -1|0, cc
Depending on the compare code, that can be either the argument of the
sext or its negation. This helps to avoid a v_cndmask_b64 instruction for
the sext. A reversed value can be further simplified and folded into its
parent comparison if possible.
Matt Arsenault [Tue, 27 Jun 2017 18:28:10 +0000 (18:28 +0000)]
RenameIndependentSubregs: Fix infinite loop
Apparently this replacement can end up substituting the same register as
the original. Avoid restarting the loop when there has been no change in
the register uses.
[AMDGPU] Combine and x, (sext cc from i1) => select cc, x, 0
Also factored out a function that checks whether a boolean is an already
deserialized value that does not require a v_cndmask_b32 to be loaded, and
added binary logical operators to its check.
Jakub Kuderski [Tue, 27 Jun 2017 18:08:53 +0000 (18:08 +0000)]
[Dominators] Use Semi-NCA instead of SLT to calculate dominators
Summary:
This patch makes GenericDomTreeConstruction use the Semi-NCA algorithm instead of Simple Lengauer-Tarjan.
As described in `RFC: Dynamic dominators`, Semi-NCA offers slightly better performance than SLT. More importantly, it can be extended to perform incremental updates on already constructed dominator trees.
The patch passes check-all and the llvm test suite, and is able to bootstrap clang. I also wasn't able to observe any compilation time regressions.
Matthias Braun [Tue, 27 Jun 2017 18:05:26 +0000 (18:05 +0000)]
LiveRangeCalc: Slightly improve map usage; NFC
- DenseMap should be faster than std::map
- Use the `InsertRes = insert() if (!InsertRes.inserted)` pattern rather
than the `if (!X.contains(...)) { X.insert(...); }` to save one map
lookup.
This canonicalization was suggested in D33172 as a way to make InstCombine behavior more uniform.
We have this transform for icmp+br, so unless there's some reason that icmp+select should be
treated differently, we should do the same thing here.
The benefit comes from increasing the chances of creating identical instructions. This is shown in
the tests in logical-select.ll (PR32791). InstCombine doesn't fold those directly, but EarlyCSE
can simplify the identical cmps, and then InstCombine can fold the selects together.
The possible regression for the tests in select.ll raises questions about poison/undef:
http://lists.llvm.org/pipermail/llvm-dev/2017-May/113261.html
...but that transform is just as likely to be triggered by this canonicalization as it is to be
missed, so we're just pointing out a commutation deficiency in the pattern matching:
https://reviews.llvm.org/rL228409
Craig Topper [Tue, 27 Jun 2017 17:16:03 +0000 (17:16 +0000)]
[InstCombine] Add test case demonstrating that we don't propagate nsw flag when converting mul by pow2 to shl when the type is larger than 64-bits. NFC
Gadi Haber [Tue, 27 Jun 2017 15:05:13 +0000 (15:05 +0000)]
Updated and extended the information about each instruction in HSW and SNB to include the following data:
- static latency
- number of uOps of which the instruction consists
- all ports used by the instruction
Sam Kolton [Tue, 27 Jun 2017 15:02:23 +0000 (15:02 +0000)]
[AMDGPU] SDWA: several fixes for V_CVT and VOPC instructions
Summary:
1. The instruction V_CVT_U32_F32 allows an omod operand (see SIInstrInfo.td:1435). In fact this operand shouldn't be allowed here. This fix checks whether the SDWA pseudo instruction has an OMod operand and only then copies it.
2. There were several problems with the support of VOPC instructions in the SDWA peephole pass.
Matthew Simpson [Tue, 27 Jun 2017 15:00:22 +0000 (15:00 +0000)]
[AArch64] Update successor probabilities after ccmp-conversion
This patch modifies the conditional compares pass so that it keeps successor
probabilities up-to-date after the conversion. Previously, successor
probabilities were being normalized to a uniform distribution, even though they
may have been heavily biased prior to the conversion (e.g., if one of the edges
was the back edge of a loop). This loss of information affected passes later in
the pipeline.
Anna Thomas [Tue, 27 Jun 2017 14:14:35 +0000 (14:14 +0000)]
[LoopUnrollRuntime] Use SCEV exit count for calculating trip count. NFCI
Instead of getBackEdgeTakenCount, use getExitCount on the latch exiting block
(which is proven to be the only exiting block in the loop to be unrolled).
Hiroshi Inoue [Tue, 27 Jun 2017 12:43:08 +0000 (12:43 +0000)]
[SelectionDAG] set dereferenceable flag in MergeConsecutiveStores to fix assertion failure
When SelectionDAG merges consecutive stores and loads in MergeConsecutiveStores, it does not set the dereferenceable flag for the newly created load instruction. This results in an assertion failure if SelectionDAG commonizes this load instruction with other load instructions, and it may also miss optimization opportunities.
This patch sets the dereferenceable flag for the newly created load instruction if all the load instructions to be merged are dereferenceable.
AVX512 compare instructions return v*i1 types.
In cases where the number of elements in the returned value is less than 8, clang adds zeroes to get a mask of v8i1 type.
Later on this is replaced with CONCAT_VECTORS, which is then lowered to many DAG nodes including insert/extract element and shift right/left nodes.
The fact that AVX512 compare instructions put the result in a k register and zero all its upper bits allows us to remove the extra nodes simply by copying the result to the required register class.
When lowering, identify these cases and transform them into an INSERT_SUBVECTOR node (marked legal), then catch this pattern in the instruction selection phase and transform it into one avx512 cmp instruction.
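A hand-written sketch of the kind of IR involved for a 4-element compare
padded to a v8i1 mask (an illustration only, not necessarily the exact IR
clang emits):
  define i8 @mask_sketch(<4 x float> %a, <4 x float> %b) {
    %cmp = fcmp oeq <4 x float> %a, %b
    %pad = shufflevector <4 x i1> %cmp, <4 x i1> zeroinitializer,
           <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7>
    %mask = bitcast <8 x i1> %pad to i8
    ret i8 %mask
  }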
Simon Dardis [Tue, 27 Jun 2017 10:11:11 +0000 (10:11 +0000)]
[mips] Refine the condition for when to use CALL16 vs a GOT displacement.
Borrow from the logic for 'jal' in MipsAsmParser::processInstruction
and add the extra condition of bypassing CALL16 if the destination symbol
is an ELF symbol with STB_LOCAL binding.
Diana Picus [Tue, 27 Jun 2017 09:19:51 +0000 (09:19 +0000)]
[ARM] GlobalISel: Support G_SELECT for i32
* Mark as legal for (s32, i1, s32, s32)
* Map everything into GPRs
* Select to two instructions: a CMP of the condition against 0, to set
the flags, and a MOVCCr to select between the two inputs based on the
flags that we've just set
Ayal Zaks [Tue, 27 Jun 2017 08:41:19 +0000 (08:41 +0000)]
Recommitting 306331.
Undoing revert 306338 after fixing the bug: add the metadata to the load
instead of to the reverse shuffle added to it, retaining the original
ValueMap implementation.