For vector GEPs, CastGEPIndices can end up in an infinite recursion, because
we compare the vector type to the scalar pointer type, find them different,
and then try to cast a type to itself.
Added a template for building target-specific memory nodes in the DAG.
I added an API for creating target-specific memory nodes in the DAG. Today, all memory nodes are common to all targets and their constructors are located in SelectionDAG.cpp.
There are some cases in X86 where we need to create a special node, such as a truncation-with-saturation store or a float-to-half store.
In the current patch I added truncation-with-saturation nodes and I'm using them for intrinsics. In the future I plan to implement DAG lowering for the truncation-with-saturation pattern.
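As a rough illustration of what such a node looks like (the class and opcode names below are illustrative, not necessarily what this patch adds): a target-specific memory node derives from MemSDNode, so it still carries a MachineMemOperand like the generic load/store nodes.
'''
#include "llvm/CodeGen/SelectionDAGNodes.h"
using namespace llvm;

// Sketch of a target-private truncating/saturating store node. Deriving from
// MemSDNode keeps the MachineMemOperand plumbing of ordinary load/store nodes
// while the opcode remains target-specific.
class TruncSatStoreSDNode : public MemSDNode {
public:
  TruncSatStoreSDNode(unsigned Opc, unsigned Order, const DebugLoc &DL,
                      SDVTList VTs, EVT MemVT, MachineMemOperand *MMO)
      : MemSDNode(Opc, Order, DL, VTs, MemVT, MMO) {}
};
'''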
Oren Ben Simhon [Wed, 21 Dec 2016 08:31:45 +0000 (08:31 +0000)]
[X86] Vectorcall Calling Convention - Adding CodeGen Complete Support
The vectorcall calling convention specifies that arguments to functions are to be passed in registers, when possible.
vectorcall uses more registers for arguments than fastcall or the default x64 calling convention do.
The vectorcall calling convention is only supported in native code on x86 and x64 processors that include Streaming SIMD Extensions 2 (SSE2) and above.
The current implementation does not handle Homogeneous Vector Aggregates (HVAs) correctly, and this review attempts to fix that.
This submission also includes additional lit tests to better cover HVA corner cases.
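For reference, a minimal HVA example under vectorcall (my own illustration for MSVC/clang-cl, not one of the lit tests added here): a struct of up to four identical vector types is a Homogeneous Vector Aggregate and, when enough registers are free, its elements are passed in SSE registers.
'''
#include <immintrin.h>

// Four identical vector members make this an HVA; under __vectorcall each
// element can be passed in an XMM register when registers are available.
struct HVA4 {
  __m128 v0, v1, v2, v3;
};

__m128 __vectorcall SumHVA(HVA4 A) {
  return _mm_add_ps(_mm_add_ps(A.v0, A.v1), _mm_add_ps(A.v2, A.v3));
}
'''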
Adam Nemet [Wed, 21 Dec 2016 04:07:40 +0000 (04:07 +0000)]
[LDist] Match behavior between invoking via optimization pipeline or opt -loop-distribute
In r267672, where the loop distribution pragma was introduced, I tried
hard to keep the old behavior for opt: when opt is invoked
with -loop-distribute, it should distribute the loop (it's off by
default when run via the optimization pipeline).
As MichaelZ has discovered, this has the unintended consequence of
breaking a very common developer workflow for reproducing compilations
using opt: first you print clang's pass pipeline
with -debug-pass=Arguments, then you invoke opt with the returned
arguments.
clang -debug-pass will include -loop-distribute, but the pass is invoked
with default=off, so nothing happens unless the loop carries the pragma,
whereas through opt (default=on) we would try to distribute all loops.
This changes opt's default to off as well to match clang. The tests are
modified to explicitly enable the transformation.
Sebastian Pop [Wed, 21 Dec 2016 01:41:12 +0000 (01:41 +0000)]
machine combiner: fix pretty printer
We used to print UNKNOWN instructions when the instruction to be printed was not
yet inserted in any BB: in that case the pretty printer could not
compute a TII, as the instruction did not yet belong to any BB or function.
This patch explicitly passes the TII to the pretty printer.
Antonio Maiorano [Wed, 21 Dec 2016 01:05:29 +0000 (01:05 +0000)]
Improve natvis for llvm::SmallString so that it correctly displays only the valid portion of the string
The usual method, and the one employed before my change, of displaying strings in natvis is to make use of the "<variable>,s" format specifier; however, this method only works for null-terminated strings. My fix here is to use the "<pointer>,[size]" format specifier to display a bounded array, and then cast it to "const char*", which in the MSVC debugger has the desired effect of rendering the character array as a string.
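A small illustration of why the null-terminated ",s" view goes wrong (my own example, not code from the patch):
'''
#include "llvm/ADT/SmallString.h"

void demo() {
  llvm::SmallString<32> S;
  S += "hello";
  // Only the first S.size() characters of the inline buffer are meaningful and
  // the buffer is not null-terminated, so a ",s" display may read past the
  // valid data, while the bounded "<pointer>,[size]" view shows exactly
  // S.size() characters.
}
'''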
We're currently doing nearly the same thing for @llvm.objectsize in
three different places: two of them are missing checks for overflow,
and one of them could subtly break if InstCombine gets much smarter
about removing alloc sites. Seems like a good idea to not do that.
Rui Ueyama [Tue, 20 Dec 2016 23:09:09 +0000 (23:09 +0000)]
Move GlobPattern class from LLD to llvm/Support.
GlobPattern is a class to handle glob pattern matching. Currently
only LLD uses it, but technically the feature is not specific
to linkers, so in this patch I move that file to LLVM.
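Typical usage after the move looks roughly like this (a sketch; the header path and exact API are as I recall them, so treat the details as assumptions):
'''
#include "llvm/ADT/StringRef.h"
#include "llvm/Support/Error.h"
#include "llvm/Support/GlobPattern.h"
using namespace llvm;

bool matchesObjectFile(StringRef Name) {
  Expected<GlobPattern> Pat = GlobPattern::create("*.o");
  if (!Pat) {
    consumeError(Pat.takeError()); // invalid pattern, e.g. a bad [...] range
    return false;
  }
  return Pat->match(Name); // true for "foo.o"
}
'''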
[SCEV] Be less conservative when extending bitwidths for computing ranges.
Summary:
In getRangeForAffineAR we compute ranges for affine exprs E = A + B*C,
where ranges for A, B, and C are known. To avoid overflow, we need to
operate on a bigger bitwidth, and originally we chose 2*x+1 for this
(x being the original bitwidth). However, it is safe to use just 2*x: the
product B*C of two x-bit values fits in 2*x bits, and adding the x-bit value A
cannot overflow that width either.
Unnecessarily extending the bitwidth results in noticeable slowdowns: range
computations perform their arithmetic using APInt, which is much slower once
the bitwidth exceeds 64.
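The arithmetic fact behind this, as a standalone sketch (my own illustration with APInt, unsigned case): for x-bit values, the largest A + B*C is (2^x - 1) + (2^x - 1)^2 = 2^x * (2^x - 1), which still fits in 2*x bits.
'''
#include "llvm/ADT/APInt.h"
using llvm::APInt;

// Evaluate A + B*C exactly for x-bit unsigned A, B, C: widening to 2*x bits
// is already enough to rule out overflow.
APInt evalAffine(const APInt &A, const APInt &B, const APInt &C) {
  unsigned X = A.getBitWidth();
  APInt WA = A.zext(2 * X);
  APInt WB = B.zext(2 * X);
  APInt WC = C.zext(2 * X);
  return WA + WB * WC; // cannot wrap at 2*x bits
}
'''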
Eli Friedman [Tue, 20 Dec 2016 20:05:07 +0000 (20:05 +0000)]
[ARM] Implement isExtractSubvectorCheap.
See https://reviews.llvm.org/D6678 for the history of
isExtractSubvectorCheap. Essentially the same considerations apply
to ARM.
This temporarily breaks the formation of vpadd/vpaddl in certain cases;
AddCombineToVPADDL essentially assumes that we won't form VUZP shuffles.
See https://reviews.llvm.org/D27779 for the follow-up fix.
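For context, the predicate being overridden has roughly this shape (a sketch of the general idea; the exact condition in the ARM override may differ):
'''
#include "llvm/CodeGen/ValueTypes.h"

// An extract whose result starts at lane 0 or at the start of the upper half
// maps to a subregister copy and is therefore essentially free.
static bool isExtractSubvectorCheapSketch(llvm::EVT ResVT, unsigned Index) {
  return Index == 0 || Index == ResVT.getVectorNumElements();
}
'''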
Matt Arsenault [Tue, 20 Dec 2016 18:55:06 +0000 (18:55 +0000)]
AMDGPU: Don't add same instruction multiple times to worklist
When the instruction is processed the first time, it may be
deleted, resulting in crashes. While the new test adds the same
user to the worklist twice, this particular case doesn't crash,
but I'm not sure why.
Zachary Turner [Tue, 20 Dec 2016 17:57:56 +0000 (17:57 +0000)]
Re-add the assert to StringRef's const char *, length constructor.
By putting the assert behind a conditional in the initializer list
we can ensure that it will still work in a constexpr context as
the else branch of the ternary operator won't be examined unless
the condition fails.
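A minimal sketch of the technique (not the exact StringRef code): the assert sits in the branch of a conditional expression that is only reached when the invariant is violated, so valid constant-evaluated uses never touch the non-constexpr call.
'''
#include <cassert>
#include <cstddef>

struct BoundedStr {
  const char *Data;
  size_t Length;

  // The assert only runs (and only blocks constant evaluation) when the
  // invariant is violated; valid constexpr uses never reach that branch.
  constexpr BoundedStr(const char *data, size_t length)
      : Data(data),
        Length(data != nullptr || length == 0
                   ? length
                   : (assert(!"null pointer with nonzero length"), length)) {}
};

constexpr BoundedStr Hello("hello", 5); // evaluated at compile time
'''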
Tom Stellard [Tue, 20 Dec 2016 15:52:17 +0000 (15:52 +0000)]
AMDGPU/SI: Add a MachineMemOperand to MIMG instructions
Summary:
Without a MachineMemOperand, the scheduler was assuming MIMG instructions
were ordered memory references, so no loads or stores could be reordered
across them.
Serge Pavlov [Tue, 20 Dec 2016 08:48:51 +0000 (08:48 +0000)]
Fix build with expensive checks enabled
The include of llvm/IR/Verifier.h was removed from HexagonCommonGEP.cpp in r289604
as unused. In fact it is required when expensive checks are enabled, because
it declares the function `verifyFunction`, which is called in a conditionally
compiled part of the file.
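Roughly the pattern involved (a sketch, not the Hexagon code verbatim; the expensive-checks macro name is the one I believe the build defines):
'''
#include "llvm/IR/Function.h"
#include "llvm/IR/Verifier.h" // declares llvm::verifyFunction
#include <cassert>

void checkAfterTransform(llvm::Function &F) {
  (void)F;
#ifdef EXPENSIVE_CHECKS
  // Compiled only when expensive checks are enabled; without the Verifier.h
  // include above, only this configuration fails to build.
  assert(!llvm::verifyFunction(F) && "broken function after transformation");
#endif
}
'''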
Chandler Carruth [Tue, 20 Dec 2016 03:32:17 +0000 (03:32 +0000)]
[PM] Rework a loop in the CGSCC update logic to be more conservative and
clear. The current RefSCC can occur in exactly one position so we should
just enforce that and leverage the property rather than checking for it
anywhere.
This addresses review comments made on another patch.
Chandler Carruth [Tue, 20 Dec 2016 03:15:32 +0000 (03:15 +0000)]
[PM] Provide an initial, minimal port of the inliner to the new pass manager.
This doesn't implement *every* feature of the existing inliner, but
tries to implement the most important ones for building a functional
optimization pipeline and beginning to sort out bugs, regressions, and
other problems.
Notable, but intentional omissions:
- No alloca merging support. Why? Because it isn't clear we want to do
this at all. Active discussion and investigation is going on to remove
it, so for simplicity I omitted it.
- No support for trying to iterate on "internally" devirtualized calls.
Why? Because it adds what I suspect is inappropriate coupling for
little or no benefit. We will have an outer iteration system that
tracks devirtualization including that from function passes and
iterates already. We should improve that rather than approximate it
here.
- Optimization remarks. Why? Purely to make the patch smaller, no other
reason at all.
The last one I'll probably work on almost immediately. But I wanted to
skip it in the initial patch to try to focus the change as much as
possible as there is already a lot of code moving around and both of
these *could* be skipped without really disrupting the core logic.
A summary of the different things happening here:
1) Adding the usual new PM class and rigging (a rough skeleton of such a pass follows this list).
2) Fixing minor underlying assumptions in the inline cost analysis or
inline logic that don't generally hold in the new PM world.
3) Adding the core pass logic which is in essence a loop over the calls
in the nodes in the call graph. This is a bit duplicated from the old
inliner, but only a handful of lines could realistically be shared.
(I tried at first, and it really didn't help anything.) All told,
this is only about 100 lines of code, and most of that is the
mechanics of wiring up analyses from the new PM world.
4) Updating the LazyCallGraph (in the new PM) based on the *newly
inlined* calls and references. This is very minimal because we cannot
form cycles.
5) When inlining removes the last use of a function, eagerly nuking the
body of the function so that any "one use remaining" inline cost
heuristics are immediately refined, and queuing these functions to be
completely deleted once inlining is complete and the call graph
updated to reflect that they have become dead.
6) After all the inlining for a particular function, updating the
LazyCallGraph and the CGSCC pass manager to reflect the
function-local simplifications that are done immediately and
internally by the inline utilities. These are the exact same
fundamental set of CG updates done by arbitrary function passes.
7) Adding a bunch of test cases to specifically target CGSCC and other
subtle aspects in the new PM world.
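As a rough skeleton of item (1) above, a CGSCC pass in the new pass manager looks something like this (an illustration of the pass interface only, not the real inliner):
'''
#include "llvm/Analysis/CGSCCPassManager.h"
#include "llvm/Analysis/LazyCallGraph.h"
#include "llvm/IR/PassManager.h"
using namespace llvm;

class SketchInlinerPass : public PassInfoMixin<SketchInlinerPass> {
public:
  // The CGSCC run signature: the pass may mutate the call graph and reports
  // those mutations back through the CGSCCUpdateResult.
  PreservedAnalyses run(LazyCallGraph::SCC &C, CGSCCAnalysisManager &AM,
                        LazyCallGraph &CG, CGSCCUpdateResult &UR) {
    // ...visit the calls in the SCC's nodes, inline, update CG and UR...
    return PreservedAnalyses::none();
  }
};
'''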
Many thanks to the careful review from Easwaran and Sanjoy and others!
Adrian Prantl [Tue, 20 Dec 2016 02:09:43 +0000 (02:09 +0000)]
[IR] Remove the DIExpression field from DIGlobalVariable.
This patch implements PR31013 by introducing a
DIGlobalVariableExpression that holds a pair of DIGlobalVariable and
DIExpression.
Currently, DIGlobalVariable holds a DIExpression. This is not the
best way to model this:
(1) The DIGlobalVariable should describe the source-level variable,
not how to get to its location.
(2) It makes it unsafe/hard to update the expressions when we call
replaceExpression on the DIGlobalVariable.
(3) It makes it impossible to represent a global variable that is in
more than one location (e.g., a variable with multiple
DW_OP_LLVM_fragment-s). We also moved away from attaching the
DIExpression to DILocalVariable for the same reasons.
This reapplies r289902 with additional testcase upgrades and a change
to the Bitcode record for DIGlobalVariable, which makes upgrading the
old format unambiguous even for variables without DIExpressions.
Chris Bieneman [Tue, 20 Dec 2016 00:42:06 +0000 (00:42 +0000)]
Revert "[ObjectYAML] Support for DWARF debug_info section"
This reverts commit r290147.
This commit is breaking a bot (http://lab.llvm.org:8011/builders/clang-atom-d525-fedora-rel/builds/621). I don't have time to investigate at the moment, so I'll revert for now.
Michael LeMay [Mon, 19 Dec 2016 21:02:41 +0000 (21:02 +0000)]
[TargetInstrInfo] replace redundant expression in getMemOpBaseRegImmOfs
Summary:
The expression for computing the return value of getMemOpBaseRegImmOfs has only
one possible value. The other value would result in a return earlier in the
function. This patch replaces the expression with its only possible value.
Greg Clayton [Mon, 19 Dec 2016 20:36:41 +0000 (20:36 +0000)]
Make a function to correctly extract the DW_AT_high_pc given the low pc value.
DWARF 4 and later support encoding the high PC as an address or as an offset from the low PC. Clients using DWARFDie should be insulated from how to extract the high PC value. This function takes care of extracting the form value and handling either encoding.
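The extraction rule itself is simple; here it is as an illustrative helper (the name and signature are my own, not the API added by this commit):
'''
#include <cstdint>

// DWARF 4+: a DW_AT_high_pc of address class is the absolute end address,
// while a constant-class value is a byte offset from DW_AT_low_pc.
uint64_t resolveHighPC(uint64_t LowPC, uint64_t HighPCValue, bool IsAddressForm) {
  return IsAddressForm ? HighPCValue : LowPC + HighPCValue;
}
'''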
Diana Picus [Mon, 19 Dec 2016 14:08:02 +0000 (14:08 +0000)]
[ARM] GlobalISel: Lower i8 and i16 register args
This allows lowering i8 and i16 arguments if they can fit in the registers. Note
that the lowering is incomplete - ABI extensions are handled in a subsequent
patch.
(Last part of)
Differential Revision: https://reviews.llvm.org/D27704
Diana Picus [Mon, 19 Dec 2016 11:55:41 +0000 (11:55 +0000)]
[ARM] GlobalISel: Lower more than 4 arguments
This adds support for lowering more than 4 arguments (although still i32 only).
It uses the handleAssignments / ValueHandler infrastructure extracted from
the AArch64 backend in r288658.
Sam Kolton [Mon, 19 Dec 2016 11:43:15 +0000 (11:43 +0000)]
[AMDGPU] Assembler: add .hsa_code_object_metadata directive for runtime metadata V2.0
Summary:
Added a pair of directives: .hsa_code_object_metadata/.end_hsa_code_object_metadata.
Between them the user can put a YAML string that is emitted directly into the generated note. E.g.:
'''
.hsa_code_object_metadata
{
amd.MDVersion: [ 2, 0 ]
}
.end_hsa_code_object_metadata
'''
Based on D25046
Diana Picus [Mon, 19 Dec 2016 11:26:31 +0000 (11:26 +0000)]
[ARM] GlobalISel: Support loading from the stack
Add support for selecting simple G_LOAD and G_FRAME_INDEX instructions (32-bit
scalars only). This will be useful for functions that need to pass arguments on
the stack.
Bjorn Pettersson [Mon, 19 Dec 2016 11:20:57 +0000 (11:20 +0000)]
[CodeGen] Make MachineInstr::isIdenticalTo() symmetric.
Summary:
MachineInstr::isIdenticalTo() is for some reason not
symmetric when comparing bundles, which gives us the
property:
I1->isIdenticalTo(*I2) != I2->isIdenticalTo(*I1)
when comparing bundles where one bundle is longer than
the other.
This patch makes sure that bundles of different lengths are
always considered not identical. Thus, the result of the
comparison will be the same regardless of which instruction
happens to be on the left.
The original version of the code in XRayInstrumentation.cpp assumed that
functions would not have empty machine basic blocks (or at least that the
first one would not be empty). This change addresses that by special-casing
that specific situation.
We provide two .mir test-cases to make sure we're handling this
appropriately.
Craig Topper [Mon, 19 Dec 2016 08:35:56 +0000 (08:35 +0000)]
[X86] When recognizing vector loads or VZEXT_LOAD in selectScalarSSELoad make sure we pass the load's user rather than load itself to the second operand of IsLegalToFold.
Craig Topper [Mon, 19 Dec 2016 00:42:28 +0000 (00:42 +0000)]
[X86] Remove all of the patterns that use X86ISD::FAND/FXOR/FOR/FANDN except for the ones needed for SSE1. Anything SSE2 or above uses the integer ISD opcode.
This removes 11721 bytes (2.2%) from the DAG isel table.
David Majnemer [Sun, 18 Dec 2016 20:10:50 +0000 (20:10 +0000)]
[PDB] Don't use the long type
long is not the same size across a number of the platforms we support.
Use unsigned int here instead; it is more appropriate because
overflow/wrap-around is possible and, in this case, expected.
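The portability point, as a quick reminder (my own illustration): long is 64 bits under LP64 (Linux, macOS x86-64) but 32 bits under LLP64 (64-bit Windows), while unsigned int is 32 bits on all of them and has well-defined wrap-around.
'''
// The quantities involved here are 32-bit; unsigned int models that on every
// supported platform, whereas sizeof(long) varies (4 on LLP64, 8 on LP64).
static_assert(sizeof(unsigned int) == 4, "expected a 32-bit unsigned int");

unsigned int advance(unsigned int Offset, unsigned int Delta) {
  return Offset + Delta; // unsigned wrap-around is defined behavior
}
'''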
Sanjay Patel [Sun, 18 Dec 2016 18:49:48 +0000 (18:49 +0000)]
[InstCombine] use commutative matchers for patterns with commutative operators
Background/motivation - I was circling back around to:
https://llvm.org/bugs/show_bug.cgi?id=28296
I made a simple patch for that and noticed some regressions, so I added test cases for
those in rL281055; this is hopefully the minimal fix for just those cases.
But as you can see from the surrounding untouched folds, we are missing commuted patterns
all over the place, and of course there are no regression tests to cover any of those cases.
We could sprinkle "m_c_" dust all over this file and catch most of the missing folds, but
then we still wouldn't have test coverage, and we'd still miss some fraction of commuted
patterns because they require adjustments to the match order.
I'm aware of the concern about the potential compile-time performance impact of adding
matches like this (currently being discussed on llvm-dev), but I don't think there's any
evidence yet to suggest that handling commutative pattern matching more thoroughly is not
a worthwhile goal of InstCombine.
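For concreteness, this is the kind of pattern the commutative matchers cover (my own minimal example, not a fold from this patch):
'''
#include "llvm/IR/PatternMatch.h"
#include "llvm/IR/Value.h"
using namespace llvm;
using namespace llvm::PatternMatch;

// m_And matches only (X & ~Y) as written; m_c_And also matches the commuted
// form (~Y & X), so one pattern covers both operand orders.
bool matchAndNot(Value *V, Value *&X, Value *&Y) {
  return match(V, m_c_And(m_Value(X), m_Not(m_Value(Y))));
}
'''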
Daniel Jasper [Sun, 18 Dec 2016 14:36:38 +0000 (14:36 +0000)]
Revert r289955 and r289962. This is causing lots of ASAN failures for us.
Not sure whether it causes an ASAN false positive, whether it
actually leads to incorrect code, or whether it even exposes bad code.
Hans, I'll get you instructions to reproduce this.
[X86] [AVX512] Minor fix in encoding of scalar EVEX instructions. NFC.
Commit on behalf of Gadi Haber
Removed the EVEX_V512 prefix from scalar EVEX instructions since HW ignores the L'L bits anyway (LIG). Four instructions are modified.
The changed encodings are validated with XED.
Reviewers: delena, igorb
Tom de Vries [Sun, 18 Dec 2016 09:41:20 +0000 (09:41 +0000)]
[FileCheck] Fix comment in ReadCheckFile
The comment in ReadCheckFile claims that both leading and trailing whitespace
are removed, but the associated statement only removes leading whitespace.
Craig Topper [Sun, 18 Dec 2016 07:54:23 +0000 (07:54 +0000)]
[X86][SSE][AVX-512] Convert FAND/FOR/FXOR/FANDN nodes to integer operations if they are available. This will allow a bunch of patterns to be removed.
These nodes are only emitted for lowering FABS/FNEG/FNABS/FCOPYSIGN. Ideally we just wouldn't create these nodes if SSE2 or higher is available, but it was simple to just convert them in DAG combine.
For SSE2, AVX, and AVX512 with DQI this is no functional change as the execution domain fixing pass ensures the right domain is selected regardless of the ISD opcode.
For AVX-512 without DQI we end up using integer instructions since the floating point versions aren't available. But we were already doing that for any logical operations in code that didn't come from FABS/FNEG/FNABS/FCOPYSIGN so this seems no worse. And we get the benefit of being able to fold broadcasts now.
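The combine is conceptually just a pair of bitcasts around an integer logic node, roughly like this (a sketch with a fixed v4f32/v4i32 type for illustration, not the actual X86 combine):
'''
#include "llvm/CodeGen/SelectionDAG.h"
using namespace llvm;

// (fand x, y) -> (bitcast (and (bitcast x), (bitcast y)))
SDValue combineFPLogicToInt(SDNode *N, SelectionDAG &DAG, const SDLoc &DL) {
  SDValue LHS = DAG.getBitcast(MVT::v4i32, N->getOperand(0));
  SDValue RHS = DAG.getBitcast(MVT::v4i32, N->getOperand(1));
  SDValue Logic = DAG.getNode(ISD::AND, DL, MVT::v4i32, LHS, RHS);
  return DAG.getBitcast(N->getValueType(0), Logic); // back to v4f32
}
'''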
Daniel Jasper [Sat, 17 Dec 2016 12:27:49 +0000 (12:27 +0000)]
Revert "[libFuzzer] add an experimental flag -experimental_len_control=1 that sets max_len to 1M and tries to increases the actual max sizes of mutations very gradually. Also remove a bit of dead code"
George Rimar [Sat, 17 Dec 2016 09:10:32 +0000 (09:10 +0000)]
[DWARF] - Introduce DWARFDebugPubTable class for dumping pub* sections.
This patch implements a parser for the pubnames/pubtypes tables, replacing the
static function used before. It should now be possible to reuse it
in LLD or other projects and clean up the duplicated code.