This patch refactors the PHIsToFix loop as follows:
- The loop itself now resides in its own method.
- The new method iterates over the scalar loop's header; the PHIsToFix map, formerly
propagated as an output parameter and filled during phi widening, is removed.
- The code handling reductions is moved into its own method, similar to the
existing fixFirstOrderRecurrence().
Oliver Stannard [Tue, 14 Mar 2017 13:50:10 +0000 (13:50 +0000)]
[ARM] Diagnose ARM MOVT without :lower16: or :upper16: expression
This instruction was missing from the list of opcodes that we check, so we were
hitting an llvm_unreachable in ARMMCCodeEmitter.cpp for the ARM MOVT
instruction, rather than the diagnostic that is emitted for the other MOVW/MOVT
instructions.
Refactor the Cost Model's selectVectorizationFactor() so that it handles only the
selection of the best VF from a pre-computed range of candidate VFs, extracting the
early-exit criteria and the computation of a MaxVF upper bound into other methods,
all driven by a newly introduced LoopVectorizationPlanner.
Daniel Berlin [Tue, 14 Mar 2017 11:25:45 +0000 (11:25 +0000)]
Make PredIteratorCache size() logically const. Do not require copying predecessors to get size.
Summary:
Every single benchmark I can run, on large and small CFGs, fully
connected, etc., across three different platforms (x86, ARM, and PPC), says
that the current pred iterator cache is a losing proposition.
I can't find a case where it's faster than just walking the preds, and in some cases it's 5-10% slower.
This is due to copying the preds.
It also degrades into copying the entire CFG.
The one operation that is occasionally faster is the cached size.
This makes that operation faster by not relying on having the copies available.
I'm not even sure that it is enough faster to be worth it. Again, I have
trouble finding cases where this takes long enough in a pass to be
worth caching, compared to a million other things passes could cache or
improve.
My suggestion:
We next remove the get() interface.
We do stronger benchmarking of size().
We probably end up killing this entire cache.
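For reference, a minimal sketch of the alternative being argued for here, using LLVM's own CFG helpers: just walk the preds, and compute the size without materializing any copy. The helper name is hypothetical, not from the patch.

    #include "llvm/IR/BasicBlock.h"
    #include "llvm/IR/CFG.h"
    #include <iterator>
    using namespace llvm;

    // Hypothetical helper: count predecessors by walking them directly
    // instead of copying them into a cache.
    static unsigned countPreds(const BasicBlock *BB) {
      return (unsigned)std::distance(pred_begin(BB), pred_end(BB));
    }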
Oliver Stannard [Tue, 14 Mar 2017 10:13:17 +0000 (10:13 +0000)]
[ValueTracking] Out of range shifts might be undef
If it is possible for the RHS of a shift operation to be greater than or equal
to the bit-width, then the result might be undef, and we can't report any known
bits.
In some cases, this was allowing a transformation in instcombine which widened
an undef value from i1 to i32, increasing the range of values that a function
could return.
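As a standalone sketch of the rule (simplified, not the actual ValueTracking code; the struct below is a stand-in for LLVM's known-bits representation):

    #include <cstdint>

    struct Known { uint64_t Zero = 0, One = 0; }; // bits known to be 0 / 1

    // Known bits of `x << ShiftAmt` on a BitWidth-wide value. If the shift
    // amount may be >= the bit width, the result may be undef, so we must
    // not report any known bits. (Bits above BitWidth are ignored here.)
    Known knownBitsOfShl(Known X, unsigned ShiftAmt, unsigned BitWidth) {
      Known R;
      if (ShiftAmt >= BitWidth)
        return R; // possibly undef: claim nothing
      R.Zero = (X.Zero << ShiftAmt) | ((1ULL << ShiftAmt) - 1); // low bits zero-fill
      R.One  = X.One << ShiftAmt;
      return R;
    }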
Sam Parker [Tue, 14 Mar 2017 09:13:22 +0000 (09:13 +0000)]
[ARM] Move SMULW[B|T] isel to DAG Combine
Create nodes for smulwb and smulwt and move their selection from
DAGToDAG to DAG combine. smlawb and smlawt can then be selected
using tablegen. Added some helper functions to detect shift patterns
as well as a wrapper around SimplifyDemandedBits. Added a couple of
extra tests.
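For context, SMULWB/SMULWT perform a signed 32x16 multiply and keep the top 32 bits of the 48-bit product. A C-level illustration of the shapes involved (illustrative only, not the DAG combine itself):

    // SMULWB: multiply by the bottom halfword of the second operand.
    int smulwb(int a, short b) {
      return (int)(((long long)a * b) >> 16);
    }

    // SMULWT: multiply by the top halfword of the second operand.
    int smulwt(int a, int b) {
      return (int)(((long long)a * (short)(b >> 16)) >> 16);
    }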
Oren Ben Simhon [Tue, 14 Mar 2017 09:09:26 +0000 (09:09 +0000)]
Disable Callee Saved Registers
Each calling convention (CC) defines a static list of registers that should be preserved by a callee function. All other registers should be saved by the caller.
Some CCs use an additional condition: if a register is used for passing or returning arguments, the caller needs to save it, even if it is part of the Callee Saved Registers (CSR) list.
The current LLVM implementation doesn't support this: it saves a register if it is part of the static CSR list and does not account for whether the register is used to pass or return arguments.
The solution is to dynamically allocate the CSR lists (only for these CCs). The lists will be updated with the actual registers that should be saved by the callee.
Since the allocated lists need to live as long as the function exists, they reside inside the Machine Register Info (MRI), which is a property of the Machine Function, is managed by it, and has the same life span.
The lists should be saved in the MRI and populated upon LowerCall and LowerFormalArguments.
The patch will also help implement the future no_caller_saved_registers attribute intended for the interrupt handler CC.
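A simplified standalone model of the approach (all names here are hypothetical, not the actual MachineRegisterInfo interface):

    #include <algorithm>
    #include <vector>

    // Hypothetical model: the dynamic CSR list lives with the function and
    // is rebuilt from the static list minus the argument registers.
    struct PerFunctionRegInfo {
      std::vector<unsigned> DynamicCSRs;

      // Called from LowerCall/LowerFormalArguments-style code: drop any
      // register that carries arguments or return values, since the caller
      // must save those itself.
      void updateCSRs(const std::vector<unsigned> &StaticCSRs,
                      const std::vector<unsigned> &ArgRegs) {
        DynamicCSRs = StaticCSRs;
        DynamicCSRs.erase(
            std::remove_if(DynamicCSRs.begin(), DynamicCSRs.end(),
                           [&](unsigned R) {
                             return std::find(ArgRegs.begin(), ArgRegs.end(),
                                              R) != ArgRegs.end();
                           }),
            DynamicCSRs.end());
      }
    };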
getIntrinsicInstrCost() used to compute the scalarization cost based only on types.
This patch improves this so that the actual arguments are checked when they are
available, in order to handle only unique non-constant operands.
The improvement in getOperandsScalarizationOverhead() to differentiate on
constants made it necessary to update the interleaved_cost.ll tests even
though they do not relate to intrinsics.
Review: Hal Finkel
https://reviews.llvm.org/D29540
Nirav Dave [Tue, 14 Mar 2017 01:42:23 +0000 (01:42 +0000)]
Recommitting Craig Topper's patch now that r296476 has been recommitted.
When checking if chain node is foldable, make sure the intermediate nodes have a single use across all results not just the result that was used to reach the chain node.
This recovers a test case that was severely broken by r296476, by making sure we don't create ADD/ADC that loads and stores when there is also a flag dependency.
Nirav Dave [Tue, 14 Mar 2017 00:34:14 +0000 (00:34 +0000)]
In visitSTORE, always use FindBetterChain, rather than only when UseAA is enabled.
Recommitting with compile-time improvements.
Recommitting after fixup of 32-bit aliasing sign offset bug in DAGCombiner.
* Simplify Consecutive Merge Store Candidate Search
Now that address aliasing is much less conservative, push through a
simplified store-merging search and chain alias analysis which only
checks for parallel stores through the chain subgraph. This is cleaner,
as it separates the handling of non-interfering loads/stores from the
store-merging logic.
When merging stores, search up the chain through a single load, and
find all possible stores by looking down through a load and a
TokenFactor to all the stores visited.
This improves the quality of the output SelectionDAG and the output
codegen (save perhaps for some ARM cases where we correctly construct
wider loads, but then promote them to float operations, which require
more expensive constant generation).
Some minor peephole optimizations to deal with improved SubDAG shapes (listed below)
Additional Minor Changes:
1. Finishes removing unused AliasLoad code
2. Unifies the chain aggregation in the merged stores across code
paths
3. Re-add the Store node to the worklist after calling
SimplifyDemandedBits.
4. Increase GatherAllAliasesMaxDepth from 6 to 18. That number is
arbitrary, but seems sufficient to not cause regressions in
tests.
5. Remove Chain dependencies of Memory operations on CopyFromReg
nodes, as these are captured by the data dependence.
6. Forward load-store values through TokenFactors containing
{CopyToReg,CopyFromReg} values.
7. Peephole to convert buildvector of extract_vector_elt to
extract_subvector if possible (see
CodeGen/AArch64/store-merge.ll)
8. Store merging for the ARM target is restricted to 32-bit, as in
some contexts invalid 64-bit operations are being generated. This
can be removed once appropriate checks are added.
This finishes the change Matt Arsenault started in r246307 and
jyknight's original patch.
Many tests required some changes as memory operations are now
reorderable, improving load-store forwarding. One test in
particular is worth noting:
CodeGen/PowerPC/ppc64-align-long-double.ll - Improved load-store
forwarding converts a load-store pair into a parallel store and
a memory-realized bitcast of the same value. However, because we
lose the sharing of the explicit and implicit store values, we
must create another local store. A similar transformation
happens before SelectionDAG as well.
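As a concrete illustration (not taken from the patch), this is the kind of source whose adjacent, non-aliasing stores the less conservative chain analysis can now recognize as parallel and merge into wider stores:

    // Four adjacent 32-bit stores; with parallel-store detection these are
    // candidates for merging into one or two wider stores.
    void init(int *p) {
      p[0] = 1;
      p[1] = 2;
      p[2] = 3;
      p[3] = 4;
    }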
Zachary Turner [Mon, 13 Mar 2017 23:28:25 +0000 (23:28 +0000)]
Add the beginning of PDB diffing support.
For now this only diffs the stream directory and the MSF
Superblock. Future patches will drill down into individual
streams to find out where the differences lie.
Adrian Prantl [Mon, 13 Mar 2017 22:56:14 +0000 (22:56 +0000)]
Revert "Debug Info: Add basic support for external types references."
This reverts commit r242302. External type refs of this form were
never used by any LLVM frontend so this is effectively dead code.
(They were introduced to support clang module debug info, but in the
end we came up with a better design that doesn't use this feature at
all.)
Rui Ueyama [Mon, 13 Mar 2017 22:19:05 +0000 (22:19 +0000)]
Make FileOutputBuffer fail early if you pass a directory.
Previously, it created a temporary file and then failed when
FileOutputBuffer tried to rename that file to the destination
(which is actually a directory name).
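A sketch of the early check, assuming LLVM's sys::fs::is_directory; the real patch's error plumbing may differ:

    #include "llvm/ADT/Twine.h"
    #include "llvm/Support/FileSystem.h"
    #include <system_error>
    using namespace llvm;

    // Refuse directories up front instead of failing later when renaming
    // the temporary file over the destination path.
    static std::error_code checkNotDirectory(const Twine &Path) {
      if (sys::fs::is_directory(Path))
        return std::make_error_code(std::errc::is_a_directory);
      return std::error_code();
    }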
[IPRA] Change algorithm for RegUsageInfoCollector.
The previous algorithm for RegUsageInfoCollector had pretty bad
performance on architectures with a lot of registers that alias
one another heavily, because we potentially iterate, for every
register, over all the aliasing registers. This costs even more if
the function is small and doesn't define a lot of registers.
This patch changes the algorithm so that, while iterating over all
the registers, we iterate over the aliasing registers only if the
register itself is defined.
This should be faster, based on the assumption that only a subset
of the full LLVM register set is actually defined in the function.
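In simplified standalone form (hypothetical names, not the pass's actual code), the change amounts to gating the alias walk on the register being defined:

    #include <vector>

    // Mark a register and all of its aliases as used, but only pay for the
    // alias walk when the register itself is defined in the function.
    void collectUsedRegs(const std::vector<bool> &Defined,
                         const std::vector<std::vector<unsigned>> &Aliases,
                         std::vector<bool> &Used) {
      for (size_t Reg = 0, E = Defined.size(); Reg != E; ++Reg) {
        if (!Defined[Reg])
          continue; // the old algorithm walked Aliases[Reg] here regardless
        Used[Reg] = true;
        for (unsigned A : Aliases[Reg])
          Used[A] = true;
      }
    }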
Juergen Ributzka [Mon, 13 Mar 2017 21:34:07 +0000 (21:34 +0000)]
[Support] Test directory iterators and recursive directory iterators with broken symlinks.
This commit adds a unit test to the file system tests to verify the behavior of
the directory iterator and recursive directory iterator with broken symlinks.
Simon Pilgrim [Mon, 13 Mar 2017 21:23:29 +0000 (21:23 +0000)]
[X86][MMX] Fix folding of shift value loads to cover whole 64-bits
rL230225 made the assumption that only the lower 32 bits of an MMX register load are used as a shift value, when in fact the whole 64 bits are reloaded and treated as an i64 to determine the shift value.
This patch reverts rL230225 to ensure that the whole 64 bits of memory are folded, and ensures that the upper 32 bits are zeroed for cases where the shift value has come from a scalar source.
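For reference, the MMX shifts take their count as a full 64-bit operand, which is why folding only 32 bits of the loaded value was wrong; illustrative intrinsics usage below.

    #include <mmintrin.h>

    // The count operand is interpreted as the whole 64-bit value, so a
    // folded load must cover all 64 bits of memory, not just the low 32.
    __m64 shiftWords(__m64 v, const __m64 *count) {
      return _mm_sll_pi16(v, *count); // PSLLW with a 64-bit count
    }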
Andrew Kaylor [Mon, 13 Mar 2017 20:35:10 +0000 (20:35 +0000)]
Revert r295004 (Add MXCSR) due to errors reported by MachineVerifier
I am leaving the code in clang which filters mxcsr from the clobber list because that is still technically correct and will be useful again when the MXCSR register is reintroduced.
Jessica Paquette [Mon, 13 Mar 2017 18:39:33 +0000 (18:39 +0000)]
[Outliner] Add tail call support
This commit adds tail call support to the MachineOutliner pass. This allows
the outliner to insert jumps rather than calls in areas where tail calling is
possible. Outlined tail calls include the return or terminator of the basic
block being outlined from.
Tail call support allows the outliner to take returns and terminators into
consideration while finding candidates to outline. It also allows the outliner
to save more instructions. For example, in the X86-64 outliner, a tail called
outlined function saves one instruction since no return has to be inserted.
Craig Topper [Mon, 13 Mar 2017 18:34:46 +0000 (18:34 +0000)]
[X86] Lower AVX2 gather intrinsics similar to AVX-512. Apply the same input source optimizations to break execution dependencies.
For AVX-512 we force the input to zero if the input is undef or the mask is all ones to break an execution dependency. This patch brings the same behavior to AVX2.
Craig Topper [Mon, 13 Mar 2017 18:17:46 +0000 (18:17 +0000)]
[AVX-512] If gather mask is all ones, force the input to a zero vector.
We were already forcing undef inputs to become a zero vector; this now catches an all-ones mask too.
Ideally we'd use undef and let execution dep fix handle picking the best register/clearance for the undef, but I don't think it can handle the early clobber today.
Craig Topper [Mon, 13 Mar 2017 17:37:14 +0000 (17:37 +0000)]
[SelectionDAG] Enhance SDTCisSameNumEltsAs to work with scalar types and use it on extend/trunc/round operations.
Currently we don't enforce that ISD::ANY_EXTEND, ZERO_EXTEND, SIGN_EXTEND, TRUNCATE, FP_ROUND, and FP_EXTEND have the same number of elements (including scalar) between their input and output, though we have them documented as such. Up until a few months ago x86 created nodes that violated this rule. That's all been fixed now, and we should enforce the rule going forward.
In order to do this we need to allow SDTCisSameNumEltsAs to support scalar types and not enforce being a vector. If one type is scalar we will force the other type to also be scalar.
Zachary Turner [Mon, 13 Mar 2017 16:24:10 +0000 (16:24 +0000)]
[ADT] Improve the genericity of llvm::enumerate().
There were some issues in the implementation of enumerate()
preventing it from being used in various contexts. These were
all related to the fact that it did not support llvm's
iterator_facade_base class. So this patch adds support for that
and additionally exposes a new helper, to_vector(), that
will evaluate an entire range and store the results in a
vector.
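Typical usage after this change might look like the following sketch; the member names follow the enumerate() result API as introduced here, and the to_vector() size parameter is an assumption about its signature:

    #include "llvm/ADT/STLExtras.h"
    #include "llvm/ADT/SmallVector.h"
    #include "llvm/Support/raw_ostream.h"
    #include <vector>
    using namespace llvm;

    void demo() {
      std::vector<char> Chars = {'a', 'b', 'c'};
      // enumerate() pairs each element with its index.
      for (auto Item : enumerate(Chars))
        outs() << Item.index() << ": " << Item.value() << "\n";
      // to_vector() eagerly evaluates a range into a SmallVector.
      SmallVector<char, 4> Copy = to_vector<4>(Chars);
      (void)Copy; // silence unused-variable warnings in this sketch
    }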
Zachary Turner [Mon, 13 Mar 2017 14:57:45 +0000 (14:57 +0000)]
[llvm-pdbdump] Add support for dumping symbols from Yaml -> PDB.
Previously we could round-trip type records from PDB -> Yaml ->
PDB, but for symbols we could only go from PDB -> Yaml. This
completes the round-tripping for symbols as well.
Rafael Espindola [Mon, 13 Mar 2017 14:45:06 +0000 (14:45 +0000)]
Fix crash when multiple raw_fd_ostreams to stdout are created.
If raw_fd_ostream is constructed with the path of "-", it claims
ownership of the stdout file descriptor. This means that it closes
stdout when it is destroyed. If there are multiple users of
raw_fd_ostream wrapped around stdout, then a crash can occur because
of operations on a closed stream.
An example of this would be running something like "clang -S -o - -MD
-MF - test.cpp". Alternatively, using outs() (which creates a local
raw_fd_ostream wrapped around stdout) anywhere, combined with such a
stream usage, would cause the crash.
The fix duplicates the stdout file descriptor when used within
raw_fd_ostream, so that only that particular descriptor is closed when
the stream is destroyed.
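The essence of the fix, as a sketch (POSIX-only here; the real change lives inside raw_fd_ostream itself):

    #include "llvm/Support/raw_ostream.h"
    #include <unistd.h>
    using namespace llvm;

    // Write through a private duplicate of stdout's descriptor, so that
    // destroying the stream closes only the duplicate, never fd 1 itself.
    void writeToStdout() {
      int FD = ::dup(STDOUT_FILENO);
      raw_fd_ostream OS(FD, /*shouldClose=*/true);
      OS << "hello\n";
    } // only the duplicated descriptor is closed here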
Gil Rapaport [Mon, 13 Mar 2017 10:23:46 +0000 (10:23 +0000)]
[LV] Set memcheck metadata also for VF==1
This commit is a follow-up on r297580. It fixes the FIXME added temporarily
by that commit to keep the removal of Unroller's specialized version of
scalarizeInstruction() an NFC. See https://reviews.llvm.org/D30715 for details.
Craig Topper [Mon, 13 Mar 2017 05:34:03 +0000 (05:34 +0000)]
Revert "[AVX-512] EVEX2VEX, don't reject intrinsic instructions when both have a memory operand. We should just continue to check other operands instead."
This reverts r297596.
There were other issues that were making this not work that have been fixed now. Reverting this results in a more accurate table.
Craig Topper [Mon, 13 Mar 2017 00:36:49 +0000 (00:36 +0000)]
[AVX-512] EVEX2VEX, don't reject intrinsic instructions when both have a memory operand. We should just continue to check other operands instead.
This exposed that we have several intrinsic instructions that have identical TSFlags to other instructions. We should merge their patterns and kill off the duplicates. I'll fix that in a follow-up patch.
Craig Topper [Sun, 12 Mar 2017 22:29:12 +0000 (22:29 +0000)]
[AVX-512] Fix the valid immediates for the scatter/gather prefetch intrinsics.
The immediate should be 1 or 2, not 0 or 1. This was found while adding bounds checking to clang. In fact the existing clang builtin test failed if we ran it all the way to assembly.
Sanjay Patel [Sun, 12 Mar 2017 18:28:48 +0000 (18:28 +0000)]
[x86] don't blindly transform SETB into SBB
I noticed unnecessary 'sbb' instructions in D30472 and while looking at 'ptest' codegen recently.
This happens because we were transforming any 'setb' - even when we only wanted a single-bit result.
This patch moves those transforms under visitAdd/visitSub, so we're only creating sbb/adc when it
is a win. I don't know why we need a SETCC_CARRY node type, but I'm not proposing to change that
existing behavior in this patch.
Also, I'm skeptical that sbb/adc are a win for all micro-arches, so I added comments to the test files
where this transform still fires.
The test changes here are all cases where we no longer produce sbb/adc. Avoiding partial register
stalls (generating an xor to clear a register) is not handled in some cases, but that's a separate
issue.
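Two illustrative source patterns (not from the patch) showing the distinction:

    // Single-bit result: 'setb' alone suffices, and widening it into an
    // sbb sequence wastes an instruction.
    bool below(unsigned a, unsigned b) { return a < b; }

    // Full-width 0 or -1 mask from the carry: the case where sbb pays off.
    int borrowMask(unsigned a, unsigned b) { return a < b ? -1 : 0; }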
Gil Rapaport [Sun, 12 Mar 2017 12:31:38 +0000 (12:31 +0000)]
[LV] A unified scalarizeInstruction() for Vectorizer and Unroller; NFC
Unroller's specialized scalarizeInstruction() is mostly duplicating Vectorizer's
variant. OTOH Vectorizer's scalarizeInstruction() already supports the special
case of VF==1 except for avoiding mask-bit extraction in that case. This patch
removes Unroller's specialized version in favor of a unified method.
The only functional difference between the two variants seems to be setting
memcheck metadata for loads and stores only in Vectorizer's variant, which is a
bug in Unroller. To keep this patch an NFC the unified method doesn't set
memcheck metadata for VF==1.
Craig Topper [Sun, 12 Mar 2017 03:37:37 +0000 (03:37 +0000)]
[AVX-512] Fix a bad use of a high GR8 register after copying from a mask register during fast isel. This ends up extracting from bits 15:8 instead of the lower bits of the mask.
I'm pretty sure there are more problems lurking here. But I think this fixes PR32241.
I've added the test case from that bug and added asserts that will fail if we ever try to copy between high registers and mask registers again.
Simon Pilgrim [Sat, 11 Mar 2017 20:42:31 +0000 (20:42 +0000)]
[X86][SSE] Improve extraction of elements from v16i8 (pre-SSE41)
Without SSE41 (pextrb) we currently extract byte elements from a vector by spilling to stack and reloading the byte.
This patch is an initial attempt at using MOVD/PEXTRW to extract the relevant DWORD/WORD from the vector and then shift+truncate to collect the correct byte.
Extraction of multiple bytes this way would result in code bloat, but as explained in the patch we could probably afford to be more aggressive with the supported extractions before again falling back on spilling, possibly by counting the number of extracts and which DWORD/WORD they originate from.
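A rough intrinsics-level sketch of the idea (illustrative; the patch works on SelectionDAG, not on intrinsics):

    #include <emmintrin.h>

    // Extract byte 5 of a v16i8 without SSE4.1's PEXTRB: pull out word 2
    // (bytes 4..5) with PEXTRW, then shift+truncate to the wanted byte.
    unsigned char extractByte5(__m128i v) {
      int w = _mm_extract_epi16(v, 2); // bits [47:32] of the vector
      return (unsigned char)(w >> 8);  // byte 5 is the high byte of word 2
    }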