Summary:
The ConstantRange class currently has a method getSetSize, which is mostly used to
compare the set sizes of two constant ranges (there is only one spot where it's used
in a slightly different scenario). This patch introduces a setSizeSmallerThanOf
method, which does such comparisons in a more efficient way. In the original
method we have to extend our types to (BitWidth+1), which can result in using the
slow case of APInt, extra memory allocations, etc.
The change is not supposed to change any functionality, but it slightly improves
compile time. Here are the compile time improvements I observed on CTMark:
* tramp3d-v4 -2.02%
* pairlocalalign -1.82%
* lencod -1.67%
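For illustration, a minimal standalone C++ sketch of the idea (a toy 8-bit range, not the actual ConstantRange API): the sizes of two wrapped ranges can be compared without widening to BitWidth+1 bits by special-casing the full set and otherwise comparing (Upper - Lower) modulo 2^BitWidth.

    #include <cassert>
    #include <cstdint>

    // Toy model of an 8-bit wrapped constant range [Lower, Upper).
    // For simplicity, Lower == Upper is treated as the full set here.
    struct ToyRange {
      uint8_t Lower, Upper;
      bool isFullSet() const { return Lower == Upper; }
    };

    // True if the cardinality of A is strictly smaller than that of B,
    // without ever materializing the 9-bit value 2^8 for the full set.
    static bool setSizeSmallerThan(const ToyRange &A, const ToyRange &B) {
      if (A.isFullSet())
        return false;                 // nothing is bigger than the full set
      if (B.isFullSet())
        return true;                  // any non-full set is smaller
      // Both sizes fit in 8 bits: wrap-around subtraction is the cardinality.
      return uint8_t(A.Upper - A.Lower) < uint8_t(B.Upper - B.Lower);
    }

    int main() {
      ToyRange A{10, 20};   // 10 elements
      ToyRange B{250, 30};  // wraps around: 36 elements
      assert(setSizeSmallerThan(A, B));
      assert(!setSizeSmallerThan(B, A));
    }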
Xin Tong [Mon, 20 Mar 2017 00:30:19 +0000 (00:30 +0000)]
Remove unnecessary IDom check
Summary: This IDom check seems unnecessary. A node in the dominator tree should always be the IDom of its immediate children in this case, so comparing against the IDom explicitly is redundant.
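For reference, a hedged sketch of the dominator-tree invariant being relied on (the wrapper function is illustrative; DominatorTree, DomTreeNode and getIDom are the standard LLVM APIs):

    #include "llvm/IR/Dominators.h"
    #include "llvm/IR/Function.h"
    #include <cassert>
    using namespace llvm;

    // Every immediate child of a dominator-tree node has that node as its
    // IDom, so an explicit IDom comparison on the children is redundant.
    static void checkIDomInvariant(Function &F) {
      DominatorTree DT(F);
      for (BasicBlock &BB : F) {
        DomTreeNode *N = DT.getNode(&BB);
        if (!N)
          continue; // unreachable block, not in the tree
        for (DomTreeNode *Child : *N)
          assert(Child->getIDom() == N && "child's IDom must be its parent");
      }
    }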
Craig Topper [Sun, 19 Mar 2017 17:11:09 +0000 (17:11 +0000)]
[AVX-512] Handle kor/kand/kandn/kxor/kxnor/knot intrinsics at lowering time instead of isel
Summary:
Currently we handle these intrinsics at isel with special patterns. But as they just map to normal logic operations, we should just handle them at lowering. This will expose them to DAG combine optimizations. Right now the kor-sequence test generates a bunch of regclass copies between GR16 and VK16 that the peephole optimizer and/or register coalescing are removing to keep everything in the mask domain. By handling the logic op intrinsics earlier, these copies become bitcasts in the DAG and get removed by DAG combine which seems more robust.
This should help enable my plan to stop copying between K registers and GR8/GR16. The peephole optimizer can't remove a chain of copies between K and GR32 with insert_subreg/extract_subreg present in the chain, so the kor-sequence test breaks. But this patch should dodge the problem entirely.
Simon Pilgrim [Sun, 19 Mar 2017 16:50:25 +0000 (16:50 +0000)]
Fix constant folding of fp2int to large integers
We make the assumption in most of our constant folding code that an fp2int will target an integer of 128 bits or less, calling APFloat::convertToInteger with only a uint64_t[2] of raw bits for the result.
Fuzz testing (PR24662) showed that we don't handle other cases at all, resulting in stack overflows and all sorts of crashes.
This patch uses the APSInt version of APFloat::convertToInteger instead to better handle such cases.
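A minimal sketch of the pattern, assuming the APSInt overload of APFloat::convertToInteger (the wrapper function and its parameters are illustrative):

    #include "llvm/ADT/APFloat.h"
    #include "llvm/ADT/APSInt.h"
    using namespace llvm;

    // Convert a floating-point constant to an integer of arbitrary width.
    // APSInt is sized to whatever the caller requests, so targets wider than
    // 128 bits no longer overflow a fixed uint64_t[2] buffer.
    static bool foldFPToInt(const APFloat &Val, unsigned DestBitWidth,
                            bool DestIsSigned, APSInt &Result) {
      Result = APSInt(DestBitWidth, /*isUnsigned=*/!DestIsSigned);
      bool IsExact = false;
      APFloat::opStatus Status =
          Val.convertToInteger(Result, APFloat::rmTowardZero, &IsExact);
      // An out-of-range conversion is an invalid operation; report it so the
      // caller can decline to fold instead of crashing.
      return Status != APFloat::opInvalidOp;
    }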
Ahmed Bougacha [Sun, 19 Mar 2017 16:13:00 +0000 (16:13 +0000)]
[GlobalISel] Don't select trivially dead instructions.
Folding instructions when selecting can cause them to become dead.
Don't select these dead instructions (if they don't have other side
effects, and don't define physical registers).
Preserve existing tests by adding COPYs.
In some tests, the G_CONSTANT vregs never get constrained to a class:
the only use of the vreg was folded into another instruction, so the
G_CONSTANT, now dead, never gets selected.
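A hedged sketch of the kind of check described above (the helper name and exact conditions are illustrative; the in-tree selector may use a shared utility instead):

    #include "llvm/CodeGen/MachineInstr.h"
    #include "llvm/CodeGen/MachineRegisterInfo.h"
    #include "llvm/Target/TargetRegisterInfo.h"
    using namespace llvm;

    // An instruction may be skipped by the selector if it has no side
    // effects, defines no physical registers, and none of its virtual
    // register defs have remaining users.
    static bool isTriviallyDeadSketch(const MachineInstr &MI,
                                      const MachineRegisterInfo &MRI) {
      if (MI.mayStore() || MI.isCall() || MI.hasUnmodeledSideEffects())
        return false;
      for (const MachineOperand &MO : MI.operands()) {
        if (!MO.isReg() || !MO.isDef())
          continue;
        unsigned Reg = MO.getReg();
        if (TargetRegisterInfo::isPhysicalRegister(Reg))
          return false;               // keep physreg defs
        if (!MRI.use_nodbg_empty(Reg))
          return false;               // a def still has users
      }
      return true;
    }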
Ahmed Bougacha [Sun, 19 Mar 2017 16:12:51 +0000 (16:12 +0000)]
[GlobalISel][AArch64] Split out cast select tests. NFC.
And remove some redundant bitcast tests.
Also split the test functions themselves: it makes it obvious to see
what's tested where and what isn't, it makes the tests much easier to
read and manually update, and, most importantly, it makes them almost
trivial to update using tooling. Yes, it's obnoxiously verbose, but
said tooling helps upgrade to better MIR syntax whenever available.
Xin Tong [Sun, 19 Mar 2017 15:30:53 +0000 (15:30 +0000)]
[JumpThreading] Perform phi-translation in SimplifyPartiallyRedundantLoad.
Summary:
When SimplifyPartiallyRedundantLoad encounters a load from a phi pointer,
try to phi-translate it into the incoming values in the predecessors before
we search for available loads.
Xin Tong [Sun, 19 Mar 2017 15:27:52 +0000 (15:27 +0000)]
Extract FindAvailablePtrLoadStore out of FindAvailableLoadedValue. NFCI
Summary:
Extract FindAvailablePtrLoadStore out of FindAvailableLoadedValue.
This prepares for an upcoming change that will do phi translation for loads
from phi pointers in jump threading's SimplifyPartiallyRedundantLoad.
This is in preparation for https://reviews.llvm.org/D30543
Teresa Johnson [Sun, 19 Mar 2017 13:54:57 +0000 (13:54 +0000)]
Enable stripping of multiple DILocation on !llvm.loop metadata
Summary:
I found that stripDebugInfo was still leaving significant amounts of
debug info due to !llvm.loop that contained DILocation after stripping.
The support for stripping debug info on !llvm.loop added in r293377 only
removes a single DILocation. Enhance that to remove all DILocation from
!llvm.loop.
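A minimal sketch of the approach (the helper below is illustrative and uses the generic metadata APIs, not necessarily the exact code in DebugInfo.cpp):

    #include "llvm/ADT/SmallVector.h"
    #include "llvm/IR/DebugInfoMetadata.h"
    #include "llvm/IR/Metadata.h"
    using namespace llvm;

    // Rebuild an !llvm.loop node, dropping *all* DILocation operands rather
    // than only the first one.
    static MDNode *stripLoopLocations(MDNode *LoopID) {
      SmallVector<Metadata *, 4> Ops;
      Ops.push_back(nullptr); // placeholder for the self-reference operand
      for (unsigned I = 1, E = LoopID->getNumOperands(); I != E; ++I) {
        Metadata *Op = LoopID->getOperand(I);
        if (Op && isa<DILocation>(Op))
          continue; // drop every debug location, not just the first
        Ops.push_back(Op);
      }
      MDNode *NewID = MDNode::getDistinct(LoopID->getContext(), Ops);
      NewID->replaceOperandWith(0, NewID); // restore the self-reference
      return NewID;
    }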
Oren Ben Simhon [Sun, 19 Mar 2017 08:14:18 +0000 (08:14 +0000)]
[MIR] Support Custom Register Masks and CSRs
The MIR printer dumps a string that describes the register mask of a function.
A static predefined list of register masks matches a static list of strings.
However, when the register mask is not from the static predefined list, there is no descriptor string and the printer fails.
This patch adds support for printing and dumping custom register masks.
Also, the list of callee-saved registers (the registers that must be preserved for the caller) might be dynamic.
As such, this data needs to be dumped and parsed back into the Machine Register Info.
Craig Topper [Sat, 18 Mar 2017 18:24:41 +0000 (18:24 +0000)]
[GVN] Fix accidental double storage of the function BasicBlock list in iterateOnFunction
Summary:
iterateOnFunction creates a ReversePostOrderTraversal object which does a post order traversal in its constructor and stores the results in an internal vector. Iteration over it just reads from the internal vector in reverse order.
The GVN code seems to be unaware of this and iterates over ReversePostOrderTraversal object and makes a copy of the vector into a local vector. (I think at one point in time we used a DFS here instead which would have required the local vector).
The net effect of this is that we have two vectors containing the basic block list. As I didn't want to expose the implementation detail of ReversePostOrderTraversal's constructor to GVN, I've changed the code to do an explicit post order traversal storing into the local vector and then reverse iterate over that.
I've also removed the reserve(256) since the ReversePostOrderTraversal wasn't doing that. I can add it back if we think it's important. Though it seemed weird that it wasn't based on the size of the function.
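A hedged sketch of the revised shape of the traversal (the wrapper function is illustrative; post_order and reverse are the standard ADT helpers):

    #include "llvm/ADT/PostOrderIterator.h"
    #include "llvm/ADT/STLExtras.h"
    #include "llvm/ADT/SmallVector.h"
    #include "llvm/IR/Function.h"
    using namespace llvm;

    // Store the post-order walk once, then iterate it in reverse, instead of
    // building a ReversePostOrderTraversal (which keeps its own internal
    // vector) and copying it into a second local vector.
    static void visitInRPO(Function &F) {
      SmallVector<BasicBlock *, 32> PostOrder;
      for (BasicBlock *BB : post_order(&F))
        PostOrder.push_back(BB);
      for (BasicBlock *BB : reverse(PostOrder)) {
        (void)BB; // process the block here
      }
    }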
Craig Topper [Sat, 18 Mar 2017 18:21:46 +0000 (18:21 +0000)]
[ValueTracking] Remove deadish code from computeKnownBitsAddSub.
The code assigned to KnownZero, but later code unconditionally assigned over it. I'm pretty sure the later code can handle the same cases and more equally well.
Daniel Berlin [Sat, 18 Mar 2017 15:41:36 +0000 (15:41 +0000)]
NewGVN: Fix PHI evaluation bug exposed by new verifier. We were checking whether the incoming block was reachable instead of whether the specific edge was reachable
Matthias Braun [Sat, 18 Mar 2017 05:08:58 +0000 (05:08 +0000)]
ExecutionDepsFix: Let targets specialize the pass; NFC
Let targets specialize the pass with the register class so we can get a
parameterless default constructor and can put the pass into the pass
registry to enable testing with -run-pass=.
Go back to behavior pre-r231309 and reduce the timeout from 8 to ~1.5
min now that we have (a) PCMCache mechanism (r298165) and (b) timeout
that doesn't cause a failure, but actually builds the module (r298175).
Craig Topper [Fri, 17 Mar 2017 23:48:02 +0000 (23:48 +0000)]
[BuildLibCalls] emitPutChar should infer function attributes for putchar
When InstCombine calls into SimplifyLibCalls and it creates putchar calls, we don't infer the attributes. And since SimplifyLibCalls doesn't use InstCombine's IRBuilder, the calls don't end up in the worklist on this iteration of InstCombine. So they get picked up on the next iteration where they cause an IR change. This of course causes InstCombine to run another iteration.
So this patch just gets the attributes right the first time. We already did this for puts and some other libcalls.
Jessica Paquette [Fri, 17 Mar 2017 22:26:55 +0000 (22:26 +0000)]
[Outliner] Add outliner for AArch64
This commit adds the necessary target hooks for outlining in AArch64. It also
refactors the switch statement used in `getMemOpBaseRegImmOfsWidth` into a
more general function, `getMemOpInfo`. This allows the outliner to share that
code without copying and pasting it.
The AArch64 outliner can be run using -mllvm -enable-machine-outliner, as with
the X86-64 outliner.
The test for this pass verifies that the outliner does, in fact, outline
functions, fixes up the stack accesses properly, and can correctly generate a
tail call. In the future, this test should be replaced with a MIR test, so that
we can properly test immediate offset overflows in fixed-up instructions.
Evgeniy Stepanov [Fri, 17 Mar 2017 22:17:29 +0000 (22:17 +0000)]
[asan] Fix dead stripping of globals on Linux.
Use a combination of !associated, comdat, @llvm.compiler.used and
custom sections to allow dead stripping of globals and their asan
metadata. Sometimes.
Currently this works on LLD, which supports SHF_LINK_ORDER with
sh_link pointing to the associated section.
This also works on BFD, which seems to treat comdats as
all-or-nothing with respect to linker GC. There is a weird quirk
where the "first" global in each link is never GC-ed because of the
section symbols.
At this moment it does not work on Gold (as in the globals are never
stripped).
Evgeniy Stepanov [Fri, 17 Mar 2017 22:17:24 +0000 (22:17 +0000)]
Add !associated metadata.
This is an ELF-specific thing that adds SHF_LINK_ORDER to the global's section
pointing to the metadata argument's section. The effect of that is a reverse dependency
between sections for the linker GC.
!associated does not change the behavior of global-dce. The global
may also need to be added to llvm.compiler.used.
Since SHF_LINK_ORDER is per-section, !associated effectively enables
fdata-sections for the affected globals, the same as comdats do.
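A hedged sketch of how a pass might attach the metadata (the metadata kind name comes from this change; the helper function and the use of appendToCompilerUsed are illustrative):

    #include "llvm/IR/GlobalVariable.h"
    #include "llvm/IR/Metadata.h"
    #include "llvm/IR/Module.h"
    #include "llvm/Transforms/Utils/ModuleUtils.h"
    using namespace llvm;

    // Tie MetadataGV's section to KeyGV's section via !associated (emitted
    // as SHF_LINK_ORDER on ELF), and keep global-dce from deleting the
    // metadata global by putting it on llvm.compiler.used.
    static void associateGlobals(Module &M, GlobalVariable *MetadataGV,
                                 GlobalVariable *KeyGV) {
      LLVMContext &Ctx = M.getContext();
      Metadata *Ops[] = {ValueAsMetadata::get(KeyGV)};
      MetadataGV->setMetadata("associated", MDNode::get(Ctx, Ops));
      appendToCompilerUsed(M, {MetadataGV});
    }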
Eli Friedman [Fri, 17 Mar 2017 22:15:50 +0000 (22:15 +0000)]
[SelectionDAG] Remove redundant stores more aggressively.
Handle TokenFactors more aggressively in
SDValue::reachesChainWithoutSideEffects. This isn't really a
very effective change anymore because of other changes to
chain handling, but it's a cheap check, and the expanded
comments are still useful.
It might be possible to loosen the hasOneUse() requirement with a
deeper analysis, but a naive implementation of that check would be
expensive.
Matt Arsenault [Fri, 17 Mar 2017 20:52:21 +0000 (20:52 +0000)]
AMDGPU: Fix handling of constant phi input loop conditions
If the loop condition was an i1 phi with a constantexpr input, this
would add a loop intrinsic fed by a phi dependent on a call to
if.break in the same block. Insert the call in the loop header.
Matt Arsenault [Fri, 17 Mar 2017 20:41:45 +0000 (20:41 +0000)]
AMDGPU: Cleanup control flow intrinsics
Move backend internal intrinsics along with the rest of the
normal intrinsics, and use the Intrinsic::getDeclaration
API instead of manually constructing the type list.
It's surprising this was working before. fdiv.fast had
the wrong number of parameters. The control flow intrinsic
declaration attributes were not being applied, and
their types were inconsistent. The actual IR use types
did not match the declaration, and were closer to the
types used for the patterns. The brcond lowering
was changing the types, so introduce new nodes for those.
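For reference, a small sketch of the Intrinsic::getDeclaration pattern being moved to, shown with a generic overloaded intrinsic rather than the AMDGPU control-flow intrinsics themselves:

    #include "llvm/IR/IRBuilder.h"
    #include "llvm/IR/Intrinsics.h"
    #include "llvm/IR/Module.h"
    using namespace llvm;

    // Let the intrinsic tables produce the declaration (name mangling,
    // parameter types, attributes) instead of building the FunctionType
    // and attribute list by hand.
    static Value *emitFabs(IRBuilder<> &B, Value *X) {
      Module *M = B.GetInsertBlock()->getModule();
      Function *Decl =
          Intrinsic::getDeclaration(M, Intrinsic::fabs, X->getType());
      return B.CreateCall(Decl, X);
    }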
Reid Kleckner [Fri, 17 Mar 2017 20:25:49 +0000 (20:25 +0000)]
[X86] Emit fewer instructions to allocate >16GB stack frames
Summary:
Use this code pattern when RAX is live, instead of emitting up to 2
billion adjustments:
    pushq %rax
    movabsq +-$Offset+-8, %rax
    addq %rsp, %rax
    xchg %rax, (%rsp)
    movq (%rsp), %rsp
Try to clean this code up a bit while I'm here. In particular, hoist the
logic that handles the entire adjustment with `movabsq $imm, %rax` out
of the loop.
This negates the offset in the prologue and uses ADD because X86 only
has a two operand subtract which always subtracts from the destination
register, which can no longer be RSP.
Jun Bum Lim [Fri, 17 Mar 2017 19:05:21 +0000 (19:05 +0000)]
[CodeGenPrep]Restructure promoting Ext to form ExtLoad
Summary:
Instead of just looking for a load which is mergeable with Ext to form ExtLoad, try to promote Exts as long as the cost is acceptable. This change is not an NFC as it continues promoting Exts even after finding a load during promotion; the change in arm64-codegen-prepare-extload.ll described in 2.b might show the case.
This change was motivated by D26524. Based on this change, I will move the transformation performed in aarch64-type-promotion into CGP.
Reid Kleckner [Fri, 17 Mar 2017 17:16:39 +0000 (17:16 +0000)]
Store Arguments in a flat array instead of an iplist
This saves two pointers from Argument and eliminates some extra
allocations.
Arguments cannot be inserted or removed from a Function because that
would require changing its Type, which LLVM does not allow. Instead,
passes that change prototypes, like DeadArgElim, create a new Function
and copy over argument names and attributes. The primary benefit of
iplist is O(1) random insertion and removal. We just don't need that for
arguments, so don't use it.
Loop unswitching can be extremely harmful for a SIMT target: if the
hoisted condition is not uniform, a SIMT machine will execute both
clones of the loop sequentially. Therefore LoopUnswitch checks that the
condition is non-divergent.
Since DivergenceAnalysis adds an expensive PostDominatorTree analysis
that is not needed for non-SIMT targets, a new option is added to avoid
unneeded analysis initialization. The method getAnalysisUsage is called when
TargetTransformInfo is not yet available, so we cannot use it there.
For that reason a new field, DivergentTarget, is added to PassManagerBuilder
to control the behavior, and targets set this field as appropriate.
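A hedged usage sketch, assuming the DivergentTarget field described above (the rest is generic legacy pass manager setup):

    #include "llvm/IR/LegacyPassManager.h"
    #include "llvm/Transforms/IPO/PassManagerBuilder.h"
    using namespace llvm;

    // A SIMT target opts in, so LoopUnswitch gets DivergenceAnalysis (and
    // the PostDominatorTree it requires); other targets skip that cost.
    static void buildPipeline(legacy::PassManager &PM, bool TargetIsSIMT) {
      PassManagerBuilder PMB;
      PMB.OptLevel = 2;
      PMB.DivergentTarget = TargetIsSIMT; // field added by this change
      PMB.populateModulePassManager(PM);
    }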
Andre Vieira [Fri, 17 Mar 2017 09:37:10 +0000 (09:37 +0000)]
[ARM] Fix triple format in branch disassemble test
Fixing the triple format in the tests added for the branch label fix for Thumb
targets. Also recommitting the previously approved patch, see
https://reviews.llvm.org/D30943.
Craig Topper [Fri, 17 Mar 2017 07:37:31 +0000 (07:37 +0000)]
[AVX-512] Make VEX encoded FMA instructions available when AVX512 is enabled regardless of whether +fma was added on the command line.
We weren't able to handle isel of the 128/256-bit FMA instructions when AVX512F was enabled but VLX and FMA weren't.
I didn't make FeatureAVX512 imply FeatureFMA as I wasn't sure I wanted disabling FMA to also disable AVX512. Instead we just can't prevent FMA instructions if AVX512 is enabled.
Another option would be to promote 128/256-bit to 512-bit, do the operation and extract it. But that requires a lot of extra isel patterns. Since no CPUs exist that support AVX512 but not FMA, just using the VEX instructions seems better.
Jonas Paulsson [Fri, 17 Mar 2017 06:47:08 +0000 (06:47 +0000)]
[SystemZ] Add use of super-reg in splitMove()
If one of the subregs of the 128 bit reg is undefined when splitMove() splits
a store into two instructions, a use of an undefined physical register
results.
To remedy this, an implicit use of the super register is added onto both new
instructions, along with propagated kill and undef flags.
This was discovered with llvm-stress, and that test case is attached as
test/CodeGen/SystemZ/splitMove_undefReg_mverifier.ll
Thanks to Matthias Braun for helping with a nice explanation.
Craig Topper [Fri, 17 Mar 2017 06:00:01 +0000 (06:00 +0000)]
[X86] Use update_llc_test_checks.py to regenerate a test and add command lines to demonstrate that we don't pick EVEX encoded instruction when AVX512 and FMA3 are both enabled.
This bug only exists on the scalar llvm.fma intrinsics. It looks like we don't test the llvm.fma intrinsics very thoroughly. In fact I don't see any tests for the vector versions.
Craig Topper [Fri, 17 Mar 2017 05:59:54 +0000 (05:59 +0000)]
[X86] Cleanup the AddedComplexity values on move immediate instructions. NFC
This makes the values a little more consistent between similar instructions and reduces some of the values. This results in better grouping in the isel table, saving a few bytes.
Sanjoy Das [Fri, 17 Mar 2017 00:55:53 +0000 (00:55 +0000)]
[RSForGC] Handle vector GEPs
We were not handling getelementptr instructions of vector type before.
Since getelementptr instructions for vector types follow the same rule as
getelementptr instructions for non-vector types, we can just handle them
in the same way.
- This fixes a bug where subregisters incompatible with the vreg's register
  class were used.
- Implement the case where multiple copies are necessary to cover a
given lanemask.
Matthias Braun [Fri, 17 Mar 2017 00:41:33 +0000 (00:41 +0000)]
VirtRegMap: Correctly deal with bundles when deleting identity copies.
This fixes two problems when VirtRegMap encounters bundles:
- When substituting a vreg subregister def with an actual register the
internal read flag must be cleared.
- Removing an identity COPY from a bundle needs to use
removeFromBundle() and a newly introduced function to update
SlotIndexes.
No testcase here, because none of the in-tree targets trigger this,
however an upcoming commit of mine will need this and the testcase there
will trigger this.
Eric Christopher [Fri, 17 Mar 2017 00:38:03 +0000 (00:38 +0000)]
Remove LessPreciseFPMADOption from TargetOptions along with all of the
associated command line options and functions - it's currently unused
in all of llvm and clang other than being set and reset.
LTO: Fix a potential race condition in the caching API.
After the call to sys::fs::exists succeeds, indicating a cache hit, we call
AddFile and the client will open the file using the supplied path. If the
client is using cache pruning, there is a potential race between the pruner
and the client. To avoid this, change the caching API so that it provides
a MemoryBuffer to the client, and have clients use that MemoryBuffer where
possible.
This scheme won't work with the gold plugin because the plugin API expects a
file path. So we have the gold plugin use the buffer identifier as a path and
live with the race for now. (Note that the gold plugin isn't actually affected
by the problem at the moment because it doesn't support cache pruning.)
This effectively reverts r279883 modulo the change to use the existing path
in the gold plugin.
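A hedged sketch of the general shape of the new contract (the callback type and helper are illustrative, not the exact LTO cache API): on a cache hit the cache hands the client an already-opened MemoryBuffer instead of a path, so a concurrent pruner unlinking the file cannot race the client's open.

    #include "llvm/Support/ErrorOr.h"
    #include "llvm/Support/MemoryBuffer.h"
    #include <functional>
    #include <memory>
    #include <string>
    using namespace llvm;

    // Illustrative callback: the client receives buffer contents, not a path.
    using AddBufferFn = std::function<void(std::unique_ptr<MemoryBuffer>)>;

    // On a hit, open the entry and hand over the buffer immediately; if the
    // pruner already removed it, fall back to a miss instead of failing later.
    static bool tryCacheHit(const std::string &EntryPath, AddBufferFn AddBuffer) {
      ErrorOr<std::unique_ptr<MemoryBuffer>> BufOrErr =
          MemoryBuffer::getFile(EntryPath);
      if (!BufOrErr)
        return false; // treat as a cache miss
      AddBuffer(std::move(*BufOrErr));
      return true;
    }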
Zachary Turner [Fri, 17 Mar 2017 00:15:55 +0000 (00:15 +0000)]
[pdb] Fix an uninitialized read, and add a test for it.
This was originally reported in pr32249, uncovered by PVS-Studio.
There was no code coverage for this path because it was
difficult to construct odd-case PDB files that were not generated
by cl.
Now that we can construct minimal PDB files from YAML,
it's easy to construct fragments that generate whatever we want.
In this patch I add a test that creates 2 type records. One
with a unique name, and one without. I verify that we can go
from PDB to YAML with no errors. In a future patch I'd like
to add something like llvm-pdbdump raw -lookup-type that will
just dump one record and nothing else, which should make it
a bit cleaner to find this kind of thing.