Nirav Dave [Mon, 7 Aug 2017 14:07:49 +0000 (14:07 +0000)]
[DAG] Extend visitSCALAR_TO_VECTOR optimization to truncated vector.
Relanding after adding a case to insert explicit truncation as necessary.
Allow SCALAR_TO_VECTOR of EXTRACT_VECTOR_ELT to reduce to
EXTRACT_SUBVECTOR of vector shuffle when output is smaller. Marginally
improves vector shuffle computations.
Alexey Bataev [Mon, 7 Aug 2017 14:03:17 +0000 (14:03 +0000)]
[SLP] General improvements of SLP vectorization process.
Summary:
The patch tries to improve the two-pass vectorization analysis in the SLP vectorizer. What it does:
1. Defines key nodes, which are the vectorization roots. Previously, vectorization started only when a StoreInst or ReturnInst was found. Now vectorization starts for all instructions with no users and a void type (terminators, StoreInst) plus CallInsts.
2. CmpInsts, InsertElementInsts, and InsertValueInsts are stored in an array. This array is processed only after vectorization of the key node that follows these instructions is finished. Vectorization proceeds in reverse order to try to vectorize as much code as possible (see the sketch below).
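A rough sketch of that scan, showing only the shape of the algorithm (illustrative C++ against the LLVM IR API, not the actual SLPVectorizer code; the TryVectorize callback is hypothetical):

  #include "llvm/IR/BasicBlock.h"
  #include "llvm/IR/Instructions.h"
  #include <functional>
  #include <vector>

  // Walk one block: treat user-less, void-typed instructions and calls as
  // vectorization roots ("key nodes"); buffer cmp/insertelement/insertvalue
  // instructions and revisit them, in reverse, after each key node.
  static void scanBlockForRoots(
      llvm::BasicBlock &BB,
      const std::function<void(llvm::Instruction &)> &TryVectorize) {
    using namespace llvm;
    std::vector<Instruction *> Postponed;
    for (Instruction &I : BB) {
      if (isa<CmpInst>(I) || isa<InsertElementInst>(I) ||
          isa<InsertValueInst>(I)) {
        Postponed.push_back(&I);
        continue;
      }
      bool IsKeyNode =
          (I.user_empty() && I.getType()->isVoidTy()) || isa<CallInst>(I);
      if (!IsKeyNode)
        continue;
      TryVectorize(I);
      // Reverse order, to vectorize as much of the preceding code as possible.
      for (auto It = Postponed.rbegin(); It != Postponed.rend(); ++It)
        TryVectorize(**It);
      Postponed.clear();
    }
  }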
Nirav Dave [Mon, 7 Aug 2017 13:55:27 +0000 (13:55 +0000)]
[TableGen] AsmMatcher: fix OpIdx computation when HasOptionalOperands is true
Relanding after fixing UB issue with DefaultOffsets.
Consider the following instruction: "inst.eq $dst, $src" where ".eq"
is an optional flag operand. The $src and $dst operands are
registers. If we parse the instruction "inst r0, r1", the flag is not
present and it will be marked in the "OptionalOperandsMask" variable.
After the matching is complete we call the "convertToMCInst" method.
The current implementation works only if the optional operands are at
the end of the array. The "Operands" array looks like [token:"inst",
reg:r0, reg:r1]. The first operand that must be added to the MCInst
is the destination, the r0 register. The "OpIdx" (in the Operands
array) for this register is 2. However, since the flag is not present
in the Operands, the actual index for r0 should be 1. The flag is not
present since we rely on the default value.
This patch removes the "NumDefaults" variable and replaces it with an
array (DefaultsOffset). This array contains an index for each operand
(excluding the mnemonic). At each index, the array contains the
number of optional operands that should be subtracted. For the
previous example, this array looks like this: [0, 1, 1]. When we need
to access the r0 register, we compute its index as 2 -
DefaultsOffset[1] = 1.
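A self-contained sketch of that index adjustment, using the hypothetical operand layout from the example above (this is not the generated AsmMatcher code):

  #include <bitset>
  #include <cstdio>
  #include <vector>

  int main() {
    // Asm operands for "inst.eq $dst, $src": [mnemonic, flag, $dst, $src].
    // The optional ".eq" flag was not parsed, so it is marked in the mask.
    const unsigned NumAsmOperands = 4;
    std::bitset<16> OptionalOperandsMask;
    OptionalOperandsMask[1] = true; // the flag operand was defaulted

    // DefaultsOffset[i] = number of defaulted optional operands that appear
    // before asm operand i+1 (the mnemonic at index 0 is excluded).
    std::vector<unsigned> DefaultsOffset(NumAsmOperands - 1, 0);
    unsigned NumDefaults = 0;
    for (unsigned I = 1; I != NumAsmOperands; ++I) {
      DefaultsOffset[I - 1] = NumDefaults;
      NumDefaults += OptionalOperandsMask[I];
    }
    // DefaultsOffset is now [0, 1, 1], matching the example above.

    // $dst nominally sits at OpIdx 2, but with the flag missing from the
    // parsed operand list its real position is 2 - DefaultsOffset[1] = 1.
    unsigned OpIdx = 2;
    unsigned RealIdx = OpIdx - DefaultsOffset[OpIdx - 1];
    std::printf("real index of $dst = %u\n", RealIdx); // prints 1
    return 0;
  }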
[X86][LLVM] Expand support for lowerInterleavedStore() in X86InterleavedAccess (VF16 stride 4).
This patch expands the support of lowerInterleavedStore to the 16x8i, stride-4 case.
LLVM creates suboptimal shuffle code-gen for AVX2. Overall, this patch is a specific fix for the pattern (stride=4, VF=16), and we plan to include more patterns in the future.
The goal of the patch is to optimize the following sequence:
At the end of the computation, we have ymm2, ymm0, ymm12 and ymm3 each holding
16 chars:
Guy Blank [Mon, 7 Aug 2017 05:51:14 +0000 (05:51 +0000)]
[SelectionDAG] reset NewNodesMustHaveLegalTypes flag between basic blocks
The NewNodesMustHaveLegalTypes flag is set to false at the beginning of CodeGenAndEmitDAG, and set to true after legalizing types.
But before calling CodeGenAndEmitDAG we build the DAG for the basic block.
So for the first basic block NewNodesMustHaveLegalTypes would be 'false' during the SDAG building, and for all other basic blocks it would be 'true'.
This patch sets the flag to false before SDAG building each basic block.
Sanjay Patel [Sun, 6 Aug 2017 16:27:07 +0000 (16:27 +0000)]
[x86] use more shift or LEA for select-of-constants
We can convert any select-of-constants to math ops:
http://rise4fun.com/Alive/d7d
For this patch, I'm enhancing an existing x86 transform that uses fake multiplies
(they always become shl/lea) to avoid cmov or branching. The current code misses
cases where we have a negative constant and a positive constant, so this is just
trying to plug that hole.
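For reference, one arithmetic identity behind such rewrites (a hedged sketch of the idea, not the exact DAG combine) is that a select of two constants becomes a scaled add of the zero-extended condition; the scale then lowers to shl or LEA on x86:

  // select cond, C1, C2  ==  C2 + (C1 - C2) * zext(cond)
  // (assuming C1 - C2 does not overflow the chosen integer type)
  int selectOfConstants(bool Cond, int C1, int C2) {
    return C2 + (C1 - C2) * static_cast<int>(Cond);
  }
  // Example: select cond, -1, 1 becomes 1 + (-2) * cond, with no cmov or branch.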
The DAGCombiner diff prevents us from hitting a terrible inefficiency: we can start
with a select in IR, create a select DAG node, convert it into a sext, convert it
back into a select, and then lower it to sext machine code.
Some notes about the test diffs:
1. 2010-08-04-MaskedSignedCompare.ll - We were creating control flow that didn't exist in the IR.
2. memcmp.ll - Choosing -1 or 1 is the case that got me looking at this again. I
think we could avoid the push/pop in some cases if we used 'movzbl %al' instead of an xor on
a different reg? That's a post-DAG problem though.
3. mul-constant-result.ll - The trade-off between sbb+not vs. setne+neg could be addressed if
that's a regression, but I think those would always be nearly equivalent.
4. pr22338.ll and sext-i1.ll - These tests have undef operands, so I don't think we actually care about these diffs.
5. sbb.ll - This shows a win for what I think is a common case: choose -1 or 0.
6. select.ll - There's another borderline case here: cmp+sbb+or vs. test+set+lea? Also, sbb+not vs. setae+neg shows up again.
7. select_const.ll - These are motivating cases for the enhancement; replace cmov with cheaper ops.
Assembly differences between movzbl and xor to avoid a partial reg stall are caused later by the X86 Fixup SetCC pass.
Meador Inge [Sun, 6 Aug 2017 12:02:17 +0000 (12:02 +0000)]
[AVR] Compute code model if one is not provided
The patch from r310028 fixed things to work with the new
`LLVMTargetMachine` constructor that came in on r309911.
However, the fix was partial since an object of type
`CodeModel::Model` must be passed to `LLVMTargetMachine`
(not one of `Optional<CodeModel::Model>`).
This patch fixes the problem in the same fashion that r309911
did for other machines: by checking if the passed optional
code model has a value and using `CodeModel::Small` if not.
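A minimal sketch of that fallback, with illustrative names rather than the actual AVR backend code:

  #include <optional>

  enum class Model { Small, Kernel, Medium, Large }; // stand-in for CodeModel::Model

  // Use the requested code model if one was supplied, otherwise default to
  // Small, mirroring what r309911 did for other targets.
  Model getEffectiveCodeModel(std::optional<Model> CM) {
    return CM ? *CM : Model::Small;
  }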
Craig Topper [Sat, 5 Aug 2017 23:34:44 +0000 (23:34 +0000)]
[X86] Enable isel to use the PAUSE instruction even when SSE2 is disabled
Summary:
On older processors this instruction encoding is treated as a NOP.
MSVC doesn't disable intrinsics based on features the way clang/gcc does. Because the PAUSE instruction encoding doesn't crash older processors, some software out there uses these intrinsics without checking for SSE2.
This change also seems to be consistent with gcc behavior.
[ADT] Add a much simpler loop to DenseMap::clear when the types are
POD-like and we can just splat the empty key across memory.
Sadly we can't optimize the normal loop well enough because we can't
turn the conditional store into an unconditional store according to the
memory model.
This loop actually showed up in a profile of code that was calling clear
as a serious source of time. =[
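A generic illustration of the difference (not the actual DenseMap implementation; the bucket layout here is hypothetical):

  #include <cstddef>

  struct Bucket { int Key; int Value; };

  void clearBuckets(Bucket *Buckets, std::size_t NumBuckets, int EmptyKey) {
    // Conditional form: only touch occupied buckets. The compiler cannot
    // legally turn the guarded store into an unconditional one, so this
    // stays a scalar loop with a branch per bucket.
    //   for (std::size_t I = 0; I != NumBuckets; ++I)
    //     if (Buckets[I].Key != EmptyKey)
    //       Buckets[I].Key = EmptyKey;

    // POD-like splat: unconditionally write the empty key everywhere. This
    // is trivially vectorizable and is the "much simpler loop" mentioned above.
    for (std::size_t I = 0; I != NumBuckets; ++I)
      Buckets[I].Key = EmptyKey;
  }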
Craig Topper [Sat, 5 Aug 2017 20:00:41 +0000 (20:00 +0000)]
[InstCombine] Support vector splats in foldSelectICmpAnd.
Unfortunately, it looks like there are some other missed optimizations in the generated code for some of these cases. I'll try to look at some of those next.
Florian Hahn [Sat, 5 Aug 2017 12:13:13 +0000 (12:13 +0000)]
[ARM] Add registers to debuginfo MIR test cases.
Summary:
MIRParserImpl::computeFunctionProperties uses MRI.getNumVirtRegs() to
set the NoVReg property. By adding a bunch of registers to the MIR test
cases, the NoVReg property is not set when importing the MIR. Otherwise,
NoVReg is set even though the machine instructions (coming from after
instruction selection) still contain virtual registers, causing expensive checks to fail.
[LCG] Completely remove the parent set and leaf tracking for RefSCCs.
After the previous series of patches, this is now trivial and deletes
a pretty astonishing amount of complexity. This has been a long time
coming, as the move toward a PO sequence of RefSCCs started eroding the
underlying use cases for this half of the data structure.
Among the biggest advantages here is that now there aren't two
independent data structures that need to stay in sync.
Some of my profiling has also indicated that updating the parent sets
was among the most expensive parts of the lazy call graph. Eliminating
it whole sale is likely to be a nice win in terms of compile time.
Last but not least, I had discussed with some folks previously keeping
it around for asserts and other correctness checking, but once the
fundamentals of the parent and child checking were implemented without
the parent sets, their value in correctness checking was tiny and nowhere
near worth the cost of the complexity required to keep everything
up-to-date.
[LCG] Re-implement the basic isParentOf, isAncestorOf, isChildOf, and
isDescendantOf methods on RefSCCs in terms of the forward edges rather
than the parent sets.
This is technically slower, but probably not interestingly slower, and
all of these routines were already so expensive that they're guarded
behind both !NDEBUG and EXPENSIVE_CHECKS.
This removes another non-critical usage of parent sets.
I've also added some comments to try and help clarify to any potential
users the costs of these routines. They're mostly useful for debugging,
asserts, or other queries.
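The shape of such a query over forward edges only, as a hedged sketch with a generic node type (the real RefSCC code differs):

  #include <unordered_set>
  #include <vector>

  struct Node { std::vector<Node *> Children; };

  // True if B is reachable from A through one or more forward (child) edges.
  // This is a full graph walk, which is why such queries stay behind
  // !NDEBUG and EXPENSIVE_CHECKS.
  bool isAncestorOf(const Node *A, const Node *B) {
    if (A == B)
      return false;
    std::unordered_set<const Node *> Visited{A};
    std::vector<const Node *> Worklist{A};
    while (!Worklist.empty()) {
      const Node *N = Worklist.back();
      Worklist.pop_back();
      for (const Node *Child : N->Children) {
        if (Child == B)
          return true;
        if (Visited.insert(Child).second)
          Worklist.push_back(Child);
      }
    }
    return false;
  }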
[LCG] Add the concept of a "dead" node and use it to avoid a complex
walk over the parent set.
When removing a single function from the call graph, we previously would
walk the entire RefSCC's parent set and then walk every outgoing edge
just to find the ones to remove. In addition to this being quite high
complexity in theory, it is also the last fundamental use of the parent
sets.
With this change, when we remove a function we transform the node
containing it to be recognizably "dead" and then teach the edge
iterators to recognize edges to such nodes and skip them the same way
they skip null edges.
We can't move fully to using "dead" nodes -- when disconnecting two live
nodes we need to null out the edge. But the complexity this adds to the
edge sequence isn't too bad and the simplification of lazily handling
this seems like a significant win.
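A sketch of that skipping behaviour with hypothetical types (the real edge iterator lives in LazyCallGraph and looks different):

  #include <cstddef>
  #include <vector>

  struct GraphNode { bool Dead = false; };
  struct Edge { GraphNode *Target = nullptr; }; // null == removed edge

  // Advance I past edges that clients should never see: null edges (from
  // disconnecting two live nodes) and edges whose target was marked dead.
  std::size_t skipDeadOrNull(const std::vector<Edge> &Edges, std::size_t I) {
    while (I < Edges.size() &&
           (Edges[I].Target == nullptr || Edges[I].Target->Dead))
      ++I;
    return I;
  }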
[LCG] Replace an implicit bool operator with a named function. (NFC)
The definition of 'false' here was already pretty vague and debatable,
and I'm about to add another potential 'false' that would actually make
much more sense in a bool operator. Especially given how rarely this is
used, a nicely named method seems better.
[LCG] When removing a dead function and clearing out the data
structures, actually null out the graph pointers as well. We won't ever
update these, and we certainly shouldn't be calling any methods on them,
so it seems good to defensively nuke them.
[LCG] Remove the use of the parent sets to compute connectivity when
merging RefSCCs.
The logic to directly use the reference edges is simpler and not
substantially slower (despite the comments to the contrary) because this
is not actually an especially hot part of LCG in practice.
Craig Topper [Sat, 5 Aug 2017 01:45:17 +0000 (01:45 +0000)]
[InstCombine] In foldSelectICmpAnd, if we need to truncate from the 'and' type to the 'select' type, do it after shifting right instead of just bailing.
Previously we were always trying to emit the zext or truncate before any shift. This meant if the 'and' mask was larger than the size of the truncate we would skip the transformation.
Now we shift the result of the and right first leaving the bit within the range of the truncate.
This matches what we are doing in foldSelectICmpAndOr for the same problem.
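An illustration in plain integer code, with a made-up mask, of why shifting first helps when the tested bit lies above the narrow type's width:

  #include <cstdint>

  // Before: select on a masked bit, producing a narrow value.
  uint8_t before(uint32_t X) {
    return ((X & 0x10000u) == 0) ? 0 : 1;   // tested bit 16 > i8 width
  }

  // After: shift the masked value right first so the bit fits in the narrow
  // type, then truncate; no select is needed and no bailing on the wide mask.
  uint8_t after(uint32_t X) {
    return static_cast<uint8_t>((X & 0x10000u) >> 16);
  }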
Petr Hosek [Fri, 4 Aug 2017 23:18:18 +0000 (23:18 +0000)]
[llvm][llvm-objcopy] When outputting to binary don't output segments that cover no sections
Sometimes LLD will produce a PT_LOAD segment that only covers the
headers (and covers no sections). GNU objcopy does not output the
segment contents for such segments. In particular this is an issue in
building magenta because the final link step for the kernel would
produce just such a PT_LOAD segment. This change supports that case
and matches what GNU objcopy does.
Adrian McCarthy [Fri, 4 Aug 2017 22:37:58 +0000 (22:37 +0000)]
Enable llvm-pdbutil to list enumerations using native PDB reader
This extends the native reader to enable llvm-pdbutil to list the enums in a
PDB and it includes a simple test. It does not yet list the values in the
enumerations, which requires an actual implementation of
NativeEnumSymbol::FindChildren.
Name: narrow_mul
%mul = mul i32 %x, C1
%r = trunc i32 %mul to i8
=>
%xn = trunc i32 %x to i8
%narrowC = trunc i32 C1 to i8
%r = mul i8 %xn, %narrowC
http://rise4fun.com/Alive/QpS
This doesn't solve PR34046 (failure to recognize rotate):
https://bugs.llvm.org/show_bug.cgi?id=34046
...but it reduces an extra complication in the description examples
to a form that we can more easily match.
Reid Kleckner [Fri, 4 Aug 2017 21:52:00 +0000 (21:52 +0000)]
[Support] Use FILE_SHARE_DELETE to fix RemoveFileOnSignal on Windows
Summary:
Tools like clang that use RemoveFileOnSignal on their output files
weren't actually able to clean up their outputs before this change. Now
the call to llvm::sys::fs::remove succeeds and the temporary file is
deleted. This is a stop-gap to fix clang before implementing the
solution outlined in PR34070.
Petr Hosek [Fri, 4 Aug 2017 21:09:26 +0000 (21:09 +0000)]
Reland "[llvm][llvm-objcopy] Added support for outputting to binary in llvm-objcopy"
This change adds the "-O binary" flag, which directs llvm-objcopy to
output the object file in the same format as GNU objcopy does when given
the flag "-O binary". This was done by splitting the Object class into
two subclasses, ObjectELF and ObjectBinary, which each output a different
format but rely on the same code to read in the Object.
Amara Emerson [Fri, 4 Aug 2017 20:19:46 +0000 (20:19 +0000)]
[SCEV] Preserve NSW information for sext(subtract).
Pushes the sext onto the operands of a Sub if NSW is present.
Also adds support for propagating the nowrap flags of the
llvm.ssub.with.overflow intrinsic during analysis.
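The identity being preserved, as a small hedged example (plain C++, not SCEV code):

  #include <cassert>
  #include <cstdint>

  // sext(sub nsw A, B) == sub(sext A, sext B) whenever the narrow
  // subtraction does not wrap in the signed sense.
  int64_t sextAfterSub(int32_t A, int32_t B) { return int64_t(A - B); }
  int64_t subAfterSext(int32_t A, int32_t B) {
    return int64_t(A) - int64_t(B);
  }

  int main() {
    assert(sextAfterSub(7, -5) == subAfterSext(7, -5)); // both 12
    return 0;
  }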
Its sole purpose was to avoid spreading around ifdefs related to
building global-isel. Since r309990, GlobalISel is not optional anymore,
so we can get rid of this mechanism altogether.
Zachary Turner [Fri, 4 Aug 2017 20:02:38 +0000 (20:02 +0000)]
[llvm-pdbutil] Dump image section headers.
Image section headers are stored in the DBI stream, but we
had no way to dump them. This patch adds dumping support,
along with some tests that LLD actually dumps them correctly.
Ulrich Weigand [Fri, 4 Aug 2017 18:57:58 +0000 (18:57 +0000)]
[SystemZ] Add support for 128-bit atomic load/store/cmpxchg
This adds support for the main 128-bit atomic operations,
using the SystemZ instructions LPQ, STPQ, and CDSG.
Generating these instructions is a bit more complex than usual
since the i128 type is not legal for the back-end. Therefore,
we have to hook the LowerOperationWrapper and ReplaceNodeResults
TargetLowering callbacks.
We currently emit a serialization operation (bcr 14, 0) before every
atomic load and after every atomic store. This is overly conservative.
The SystemZ architecture actually does not require any serialization
for atomic loads, and a serialization after an atomic store is needed only
if we have to enforce sequential consistency. This is what other compilers
for the platform implement as well.
Summary:
This intrinsic lets us set inactive lanes to an identity value when
implementing wavefront reductions. In combination with Whole Wavefront
Mode, it lets inactive lanes be skipped over as required by GLSL/Vulkan.
Lowering the intrinsic needs to happen post-RA so that RA knows that the
destination isn't completely overwritten due to the EXEC shenanigans, so
we need another pseudo-instruction to represent the un-lowered
intrinsic.
Connor Abbott [Fri, 4 Aug 2017 18:36:52 +0000 (18:36 +0000)]
[AMDGPU] Add support for Whole Wavefront Mode
Summary:
Whole Wavefront Mode (WWM) is similar to WQM, except that all of the
lanes are always enabled, regardless of control flow. This is required
for implementing wavefront reductions in non-uniform control flow, where
we need to use the inactive lanes to propagate intermediate results, so
they need to be enabled. We need to propagate WWM to uses (unless
they're explicitly marked as exact) so that they also propagate
intermediate results correctly. We do the analysis and exec mask munging
during the WQM pass, since there are interactions with WQM for things
that require both WQM and WWM. For simplicity, WWM is entirely
block-local -- a block is never in WWM on entry or exit, and WWM
is not propagated to the block level. This means that computations
involving WWM cannot involve control flow, but we only ever plan to use
WWM for a few limited purposes (none of which involve control flow)
anyways.
Shaders can ask for WWM using the @llvm.amdgcn.wwm intrinsic. There
isn't yet a way to turn WWM off -- that will be added in a future
change.
Finally, it turns out that turning on inactive lanes causes a number of
problems with register allocation. While the best long-term solution
seems like teaching LLVM's register allocator about predication, for now
we need to add some hacks to prevent ourselves from getting into trouble
due to constraints that aren't currently expressed in LLVM. For the gory
details, see the comments at the top of SIFixWWMLiveness.cpp.
Connor Abbott [Fri, 4 Aug 2017 18:36:50 +0000 (18:36 +0000)]
[AMDGPU] refactor WQM pass in preparation for WWM (NFCI)
Summary:
Right now, the WQM pass conflates two different things when tracking the
Needs of an instruction:
1. Needs can be StateWQM, which is propagated to other instructions, and
means that this instruction (and everything it depends on) must be
calculated in WQM.
2. Needs can be StateExact, which is not propagated to other
instructions, and means that this instruction must not be calculated in
WQM and WQM-ness must not be propagated past this instruction.
This works now because there are only two different states, but in the
future we want to be able to express things like "calculate this in WQM,
but please disable WWM and don't propagate it" (to implement
@llvm.amdgcn.set.inactive). In order to do this, we need to split the
per-instruction Needs field in two: a new Needs field, which can only
contain StateWQM (and in the future, StateWWM) and is propagated to
sources, and a Disables field, which can also contain just StateWQM or
nothing for now.
We keep the per-block tracking the same for now, by translating
Needs/Disables to the old representation with only StateWQM or
StateExact. The other place that needs special handling is when we
emit the state transitions. We could just translate back to the old
representation there as well, which we almost do, but instead of 0 as a
placeholder value for "any state," we explicitly or together all the
states an instruction is allowed to be in. This lets us refactor the
code in preparation for WWM, where we'll need to be able to handle
things like "this instruction must be in Exact or WQM, but not WWM."
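A hypothetical illustration of that split (illustrative names and bit values only; the actual pass uses its own state encoding):

  enum : unsigned char {
    StateWQM   = 1 << 0,
    StateWWM   = 1 << 1,  // the future addition mentioned above
    StateExact = 1 << 2,
    StateAll   = StateWQM | StateWWM | StateExact,
  };

  struct InstrInfo {
    unsigned char Needs = 0;    // propagated to sources (WQM, later WWM)
    unsigned char Disables = 0; // states this instruction must not be in
  };

  // Rather than using 0 to mean "any state", explicitly or together every
  // state the instruction is still allowed to be in.
  unsigned char allowedStates(const InstrInfo &II) {
    return static_cast<unsigned char>(StateAll & ~II.Disables);
  }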
Connor Abbott [Fri, 4 Aug 2017 18:36:49 +0000 (18:36 +0000)]
[AMDGPU] Add an llvm.amdgcn.wqm intrinsic for WQM
Summary:
Previously, we assumed that certain types of instructions needed WQM in
pixel shaders, particularly DS instructions and image sampling
instructions. This was ok because with OpenGL, the assumption was
correct. But we want to start using DPP instructions for derivatives as
well as other things, so the assumption that we can infer whether to use
WQM based on the instruction won't continue to hold. This intrinsic lets
frontends like Mesa indicate what things need WQM based on their
knowledge of the API, rather than second-guessing them in the backend.
We need to keep around the old method of enabling WQM, but eventually we
should remove it once Mesa catches up. For now, this will let us use DPP
instructions for computing derivatives correctly.
Port libFuzzer tests to LIT. Do not require two-stage build for check-fuzzer.
This revision ports all libFuzzer tests apart from the unittest to LIT.
The advantages of doing so include:
- Tests being self-contained
- Much easier debugging of a single test
- No need for using a two-stage compilation
The unit-test is still compiled using CMake, but it does not need a
freshly built compiler.
NOTE: The previous two-stage bot configuration will NOT work, as in the
second stage build LLVM_USE_SANITIZER is set, which disables ASAN from
being built.
Thus bots will be reconfigured in the next few commits.
Javed Absar [Fri, 4 Aug 2017 17:10:11 +0000 (17:10 +0000)]
[ARM] Use searchable-table for banked registers
This is a continuation of https://reviews.llvm.org/D36219
This patch uses reverse mapping (encoding->name) in
ARMInstPrinter::printBankedRegOperand to get rid of
hard-coded values (as pointed out by @olista01).
Reid Kleckner [Fri, 4 Aug 2017 17:09:11 +0000 (17:09 +0000)]
[ArgPromotion] Preserve alignment of byval argument in new alloca
The frontend may have requested a higher alignment for any reason, and
downstream optimizations may already have taken advantage of it. We
should keep the same alignment when moving the allocation from the
parameter area to the local variable area.
Dehao Chen [Fri, 4 Aug 2017 16:20:54 +0000 (16:20 +0000)]
Adjust the hotness threshold from 99.9% to 99%.
Summary: We originally set the hotness threshold to 99.9% to be consistent with GCC's FDO. But the inlining heuristics differ between the two compilers: LLVM uses a bottom-up algorithm while GCC uses a priority-based one. The LLVM algorithm tends to inline too much too early, which prevents hot callsites from being inlined further into their callers. Because of this restriction, we think it is reasonable to lower the hotness threshold to give priority to callsites that are really hot. Our experiments show that this change improves performance on large applications. Note that the inlining heuristics still have great room for further tuning; once they are refined, we could adjust this threshold to allow inlining for less hot callsites.
Craig Topper [Fri, 4 Aug 2017 16:07:20 +0000 (16:07 +0000)]
[InstCombine] Remove the (not (sext)) case from foldBoolSextMaskToSelect and inline the remaining code to match visitOr
Summary:
The (not (sext)) case is really (xor (sext), -1) which should have been simplified to (sext (xor, 1)) before we got here. So we shouldn't need to handle it.
With that taken care of, we only need two cases, so we don't need the swap anymore. This puts us in sync with the equivalent code in visitOr, so inline this to match.
Adds function attributes to the index: ReadNone, ReadOnly, NoRecurse, NoAlias. These attributes will be used for future ThinLTO optimizations that will propagate function attributes across modules.