DAGTypeLegalizer::CanSkipSoftenFloatOperand should allow
SELECT op code for x86_64 fp128 type for MME targets,
so SoftenFloatOperand does not abort on SELECT op code.
David Majnemer [Mon, 18 Jul 2016 17:03:09 +0000 (17:03 +0000)]
[MathExtras] Fix UB in minIntN
We negated a value with a signed type, which invited problems when that
value was the most negative signed number. Use an unsigned type
for the value instead. It will compute the same two's complement
result without the UB.
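A minimal sketch of the shape of the fix (not necessarily the exact
upstream code): do the shift and negation in uint64_t so nothing overflows
a signed value, then let the conversion back to int64_t reproduce the bits.

  #include <cassert>
  #include <cstdint>

  // Sketch: minimum N-bit signed value, computed without signed overflow.
  inline int64_t minIntN(int64_t N) {
    assert(N > 0 && N <= 64 && "integer width out of range");
    // The arithmetic happens in uint64_t; converting the resulting bit
    // pattern back to int64_t gives the same two's complement value.
    return -(UINT64_C(1) << (N - 1));
  }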
Adam Nemet [Mon, 18 Jul 2016 16:29:27 +0000 (16:29 +0000)]
[LoopDist] Port to new PM
Summary:
The direct motivation for the port is to ensure that the OptRemarkEmitter
tests work with the new PM.
This remains a function pass because we not only create multiple loops
but could also version the original loop.
In the test I need to invoke opt
with -passes='require<aa>,loop-distribute'. LoopDistribute does not
directly depend on AA; however, LAA does. LAA uses getCachedResult, so
I *think* we need to manually pull in 'aa'.
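For reference, a rough sketch of the new-PM shape such a port usually takes
(the header location and the runImpl factoring below are assumptions, not
the exact patch):

  #include "llvm/IR/PassManager.h"

  namespace llvm {
  class LoopDistributePass : public PassInfoMixin<LoopDistributePass> {
  public:
    PreservedAnalyses run(Function &F, FunctionAnalysisManager &AM) {
      // Fetch the analyses the legacy pass obtained via getAnalysis<>() from
      // AM and hand them to the shared implementation (omitted here).
      bool Changed = false; // ... runImpl(F, ...) ...
      return Changed ? PreservedAnalyses::none() : PreservedAnalyses::all();
    }
  };
  } // end namespace llvm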
Adam Nemet [Mon, 18 Jul 2016 16:29:21 +0000 (16:29 +0000)]
[OptRemarkEmitter] Port to new PM
Summary:
The main goal is to be able to start using the new OptRemarkEmitter
analysis from the LoopVectorizer. Since the vectorizer was recently
converted to the new PM, it makes sense to convert this analysis as
well.
This pass is currently tested through the LoopDistribution pass, so I am
also porting LoopDistribution to get coverage for this analysis with the
new PM.
[Hexagon] Enable .cur formation in MISched for Hexagon V60
Schedule a load and its use in the same packet in MISched. Previously,
isResourceAvailable was returning false for dependences in the same
packet, which prevented MISched from packetizing a load and its use in
the same packet for v60.
[PowerPC] Remove redundant direct moves when extracting integers and converting to FP
This patch corresponds to review:
https://reviews.llvm.org/D21354
We use direct moves for extracting integer elements from vectors. We also use
direct moves when converting integers to FP. When these operations are chained,
we get a direct move out of a VSR followed by a direct move back into a VSR.
These are redundant - all we need to do is line up the element and convert.
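A hypothetical source-level pattern that chains the two operations (purely
illustrative; whether a given compile exercises the new path depends on the
subtarget):

  // Extract an integer element from a vector and convert it to FP. Before
  // the patch this could round-trip through the GPRs via a pair of direct
  // moves; now the element can be lined up in the VSR and converted there.
  typedef long long v2i64 __attribute__((vector_size(16)));

  double extract_then_convert(v2i64 v) {
    return static_cast<double>(v[0]);
  }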
Nirav Dave [Mon, 18 Jul 2016 15:24:03 +0000 (15:24 +0000)]
[MC] Cleanup Error Handling in AsmParser
Add parseToken and compatriot functions to stitch error checks into
straight linear code. As part of this, fix some erroneous handling of
directives where the EndOfStatement token was either not checked or not
lexed on termination.
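A hedged illustration of the linear style this enables in a directive
parser; the directive and its fields are invented, and the helper
signatures may differ slightly from the patch:

  #include "llvm/MC/MCParser/MCAsmLexer.h"
  #include "llvm/MC/MCParser/MCAsmParser.h"
  using namespace llvm;

  // Hypothetical ".widget name, size" directive handler.
  static bool parseDirectiveWidget(MCAsmParser &P) {
    StringRef Name;
    int64_t Size;
    if (P.parseIdentifier(Name) ||
        P.parseToken(AsmToken::Comma, "expected comma after name") ||
        P.parseAbsoluteExpression(Size) ||
        P.parseToken(AsmToken::EndOfStatement,
                     "unexpected token in '.widget' directive"))
      return true; // the failing helper has already reported the error
    // ... act on Name and Size ...
    return false;
  }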
[Hexagon] HexagonMachineScheduler should account for resources
The machine scheduler needs to account for available resources
more accurately in order to avoid scheduling an instruction that
forces a new packet to be created.
This occurs in two ways: First, an instruction without an available
resource may have a large priority due to other metrics and be
scheduled when there are other instructions with available resources.
Second, an instruction with a non-zero latency may become available
prematurely. In both these cases, we attempt to change the priority
in order to allow a better instruction to be scheduled.
[Hexagon] Fix zero latency instructions with multiple predecessors
An instruction may have multiple predecessors that are candidates
for using .cur. However, only one of them can use .cur in the
packet. When this case occurs, we need to make sure that only
one of the dependences gets a 0 latency value.
Simon Dardis [Mon, 18 Jul 2016 13:17:31 +0000 (13:17 +0000)]
[inlineasm] Propagate operand constraints to the backend
When SelectionDAGISel transforms a node representing an inline asm
block, memory constraint information is not preserved. This can cause
constraints to be broken when a memory offset is of the form:
offset + frame index
when the frame is resolved.
By propagating the constraints all the way to the backend, targets can
enforce memory operands of inline assembly to conform to their constraints.
For MIPSR6, some instructions, such as ll/sc, had their offsets reduced
from 16 bits to 9 bits. This becomes problematic when using inline assembly
to perform atomic operations, as an offset can be generated that is too big
to encode in the instruction.
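A hypothetical example of the failure mode (the constraint and offset are
chosen only to illustrate): the "m" operand resolves to a frame-index
offset that may not fit the 9-bit ll offset field on MIPSR6 unless the
backend is allowed to enforce the constraint.

  int load_linked_from_stack() {
    volatile int buf[1024];
    int v;
    // The frame-index offset for buf[1000] is far larger than a 9-bit
    // immediate can encode.
    asm volatile("ll %0, %1" : "=r"(v) : "m"(buf[1000]));
    return v;
  }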
AMDGPU: Disable AMDGPUPromoteAlloca pass for shader calling conventions.
Summary:
The work item intrinsics are not available for the shader
calling conventions. And even if we did hook them up, most
shader stages have some extra restrictions on the amount
of available LDS.
[ARM] Skip inline asm memory operands in DAGToDAGISel
The current logic for handling inline asm operands in DAGToDAGISel interprets
the operands by looking for constants, which should represent the flags
describing the kind of operand we're dealing with (immediate, memory, register
def etc). The operands representing actual data are skipped only if they are
non-const, with the exception of immediate operands which are skipped explicitly
when a flag describing an immediate is found.
The oversight is that memory operands may be const too (e.g. for device drivers
reading a fixed address), so we should explicitly skip the operand following a
flag describing a memory operand. If we don't, we risk interpreting that
constant as a flag, which is definitely not intended.
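A hedged sketch of the intended operand walk (simplified; this is not the
actual ARM code, and register-list handling is reduced to the minimum):

  #include "llvm/CodeGen/SelectionDAGNodes.h"
  #include "llvm/IR/InlineAsm.h"
  using namespace llvm;

  static void walkInlineAsmOperands(const SDNode *N) {
    unsigned i = InlineAsm::Op_FirstOperand, e = N->getNumOperands();
    while (i < e) {
      if (N->getOperand(i).getValueType() == MVT::Glue)
        break; // trailing glue, no more flag/value groups
      unsigned Flag = cast<ConstantSDNode>(N->getOperand(i))->getZExtValue();
      unsigned Kind = InlineAsm::getKind(Flag);
      unsigned NumVals = InlineAsm::getNumOperandRegisters(Flag);
      if (Kind == InlineAsm::Kind_Imm || Kind == InlineAsm::Kind_Mem) {
        // The data operand following these flags may itself be a constant
        // (e.g. a fixed device address), so skip it explicitly rather than
        // reinterpreting it as another flag word.
        i += 2;
        continue;
      }
      i += NumVals + 1; // the flag word plus its register operands
    }
  }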
[ARM] Honour ABI for rem under -O0 for EABI, GNUEABI, Android and Musl
At higher optimization levels, we generate the libcall for DIVREM_Ix, which is
fine: aeabi_{u|i}divmod. At -O0 we generate the one for REM_Ix, which is the
default {u}mod{q|h|s|d}i3.
This commit makes sure that we don't generate REM_Ix calls for ABIs that
don't support them (i.e. where we need to use DIVREM_Ix instead). This is
achieved by bailing out of FastISel, which can't handle non-double multi-reg
returns, and letting the legalization infrastructure expand the REM_Ix calls.
It also updates the divmod-eabi.ll test to run under -O0 and adds some
Windows checks to it to make sure we don't break things there.
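A tiny illustration of the affected pattern (the libcall names here are
the usual ones, not taken from the patch):

  // At -O0 on an EABI/GNUEABI target this should now lower to a combined
  // __aeabi_idivmod-style call instead of a plain __modsi3-style REM call.
  int remainder32(int a, int b) { return a % b; }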
David Majnemer [Mon, 18 Jul 2016 06:11:37 +0000 (06:11 +0000)]
[GVNHoist] Change the key for VNtoInsns to a pair
While debugging GVNHoist, I found it confusing that the entries in
VNtoInsns were not always value numbers. They _usually_ were, except for
StoreInst, in which case they were a hash of two different value numbers.
This leads to three observations:
- It is more difficult to debug things when the semantic contents of
VNtoInsns change over time.
- Using a single value number is not much cheaper; the value of
VNtoInsns is a SmallVector.
- It is not immediately clear what the algorithm would do if there were
hash collisions in the StoreInst case.
Using a DenseMap of std::pair sidesteps all of this.
N.B. The changes in the test were due to their sensitivity to the
iteration order of VNtoInsns, which has changed.
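A hedged sketch of the new key shape (the typedef names in the pass itself
may differ):

  #include "llvm/ADT/DenseMap.h"
  #include "llvm/ADT/SmallVector.h"
  #include "llvm/IR/Instruction.h"
  #include <utility>
  using namespace llvm;

  // The key is an explicit pair of value numbers (the second slot is only
  // meaningful for stores) rather than a single number that was sometimes
  // really a hash of two.
  using VNType = std::pair<unsigned, unsigned>;
  using VNtoInsnsMap = DenseMap<VNType, SmallVector<Instruction *, 4>>;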
[llvm-cov] Attempt to fix a test failure on Windows
Don't make the test/tools/llvm-cov/demangle.test depend on the order in
which symbols are seen, or on the exact formatting llvm-cov emits after
a symbol is printed. This is an attempt to fix a Windows bot failure:
I don't know what the root cause of the failure is, or why the
showTemplateInstantiations test doesn't fail in the same way on the
Windows bots. However, this measure can't hurt, and it'll at least get
me off the blamelists again.
Add assertions checking SignExtend{32,64}'s bit width.
Summary:
The bit width must be greater than zero, otherwise we shift by the
integer's width, which is UB. Also (more obviously) the width must be
less than or equal to the integer's width, otherwise we shift by a
negative number, which is also UB.
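A sketch consistent with the description (the upstream helper may differ
cosmetically):

  #include <cassert>
  #include <cstdint>

  // Sign-extend the bottom B bits of X to a 32-bit result.
  inline int32_t SignExtend32(uint32_t X, unsigned B) {
    assert(B > 0 && "Bit width can't be 0.");
    assert(B <= 32 && "Bit width out of range.");
    return int32_t(X << (32 - B)) >> (32 - B);
  }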
Summary:
To enable profile-guided indirect call promotion in ThinLTO mode, we
simply add call graph edges for each profitable target from the profile
to the summaries, then the summary-guided importing will consider the
callee for importing as usual.
Also we need to enable the indirect call promotion pass creation in the
PassManagerBuilder when PerformThinLTO=true (we are in the ThinLTO
backend), so that the newly imported functions are considered for
promotion in the backends.
The IC promotion profiles refer to callees by GUID, which required
adding GUIDs to the per-module VST in bitcode (and assigning them
valueIds similar to how they are assigned valueIds in the combined
index).
Summary:
Refactored the profitability analysis out of the IC promotion pass and
into lib/Analysis so that it can be accessed by the summary index
builder in a follow-on patch to enable IC promotion in ThinLTO (D21932).
[InstCombine] reassociate logic ops with constants separated by a zext
This is a partial implementation of a general fold for associative+commutative operators:
(op (cast (op X, C2)), C1) --> (cast (op X, op (C1, C2)))
(op (cast (op X, C2)), C1) --> (op (cast X), op (C1, C2))
There are 7 associative operators and 13 cast types, so this could potentially go a lot further.
Hal Finkel [Sat, 16 Jul 2016 07:21:28 +0000 (07:21 +0000)]
Revert "Revert r275027 - Let FuncAttrs infer the 'returned' argument attribute"
This reverts commit r275042; the initial commit triggered self-hosting failures
on ARM/AArch64. James Molloy identified the problematic backend code, which has
been disabled in r275677. Trying again...
Original commit message:
Let FuncAttrs infer the 'returned' argument attribute
A function can have one argument with the 'returned' attribute, indicating that
the associated argument is always the return value of the function. Add
FuncAttrs inference logic.
Hal Finkel [Sat, 16 Jul 2016 07:07:29 +0000 (07:07 +0000)]
Disable this-return argument forwarding on ARM/AArch64
r275042 reverted function-attribute inference for the 'returned' attribute
because the feature triggered self-hosting failures on ARM and AArch64. James
Molloy determined that the this-return argument forwarding feature, which
directly ties the returned input argument to the returned value, was the cause.
It seems likely that this forwarding code contains, or triggers, a subtle bug.
Disabling for now until we can track that down.
This does not schedule any passes besides the ones necessary to
construct and print the machine function. This is useful to test .mir
file reading and printing.
Initializing them in LLVMInitializeARMTarget() makes them visible early
enough for "llc -run-pass" usage.
This required the pass to be renamed from "arm-load-store-opt" to
"arm-ldst-opt", because there already exists an arm-load-store-opt
cl::opt switch which would now clash with the pass name getting added as
a switch in opt. On the bright side, the pass name now matches the
DEBUG_TYPE name. Renamed "arm-prera-load-store-opt" to
"arm-prera-ldst-opt" as well for consistency.
It's using a version of clang which can't (or won't) deduce an implicit
conversion from a SmallString to a StringRef. Write the conversion out
explicitly.
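Something along these lines, assuming the usual SmallString::str()
accessor (an illustration, not the original hunk):

  #include "llvm/ADT/SmallString.h"
  #include "llvm/ADT/StringRef.h"
  using namespace llvm;

  static StringRef asStringRef(const SmallString<128> &Buf) {
    return Buf.str(); // spelled out, instead of relying on `return Buf;`
  }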
Sebastian Pop [Fri, 15 Jul 2016 23:15:06 +0000 (23:15 +0000)]
bugpoint: add flag -verbose-errors
The default behavior of bugpoint is to print "<crash>" when it finds a reduced
test that crashes compilation. With this flag we now can see the output of the
crashing program. This is useful to make sure it is the same error being
tracked down and not a different error that happens to crash the compiler as
well.
This reverts commit r275562, effectively reapplying r275141. Doug
Gilmore reported that there was an error when bisecting the Mips
buildbot failure, and that r275141 was not to blame after all. Here is
the green build:
https://dmz-portal.mips.com/bb/builders/LLVM%20with%20integrated%20assembler%20and%20fPIC%20and%20-O0/builds/803
ExpandPostRAPseudos should transfer implicit uses, not only implicit defs
Previously, we would expand:
%BL<def> = COPY %DL<kill>, %EBX<imp-use,kill>, %EBX<imp-def>
Into:
%BL<def> = MOV8rr %DL<kill>, %EBX<imp-def>
Dropping the imp-use on the floor.
That confused CriticalAntiDepBreaker, which (correctly) assumes that if an
instruction defs but doesn't use a register, that register is dead immediately
before the instruction - while in this case, the high lanes of EBX can be very
much alive.
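A hedged sketch of the shape of the fix: when the COPY is lowered into a
target move, carry over all of the original instruction's implicit
operands, not only the implicit defs (the helper name is illustrative).

  #include "llvm/CodeGen/MachineBasicBlock.h"
  #include "llvm/CodeGen/MachineInstr.h"
  using namespace llvm;

  // MI is the original COPY; the lowered move was just inserted before it.
  static void transferImplicitOperands(MachineInstr *MI) {
    MachineBasicBlock::iterator CopyMI = MI->getIterator();
    --CopyMI;
    for (const MachineOperand &MO : MI->implicit_operands())
      CopyMI->addOperand(MO);
  }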
[pdb] Teach MsfBuilder and other classes about the Free Page Map.
Blocks 1 and 2 of an MSF file are bit vectors that represent the
list of blocks allocated and free in the file. We had been using
these blocks to write stream data and other data, so we now mark them
as the Free Page Map. We don't yet serialize these pages to
the disk, but at least we make a note of what it is, and avoid
writing random data to them.
Doing this also necessitated cleaning up some of the tests to be
more general and hardcode fewer values, which is nice.
Previously we would read a PDB, then write some of it back out,
but write the directory, super block, and other pertinent metadata
back out unchanged. This generated incorrect PDBs, since the amount
of data written was not always the same as the amount of data read.
This patch changes things to use the newly introduced `MsfBuilder`
class to write out a correct and accurate set of MSF metadata for
the data *actually* written, which opens the door to adding and
removing type records, symbol records, and other kinds of data in
an existing PDB.