IR: Function summary extensions for whole-program devirtualization pass.
The summary information includes all uses of llvm.type.test and
llvm.type.checked.load intrinsics that can be used to devirtualize calls,
including any constant arguments for virtual constant propagation.
Benjamin Kramer [Fri, 10 Feb 2017 22:26:35 +0000 (22:26 +0000)]
[InstCombine] Move class into anonymous namespace. NFC.
This is necessary to avoid warnings from GCC.
InstCombineLoadStoreAlloca.cpp:238:7: error: 'PointerReplacer' declared
with greater visibility than the type of its field 'PointerReplacer::IC'
Davide Italiano [Fri, 10 Feb 2017 22:16:17 +0000 (22:16 +0000)]
[lib/LTO] Rework optimization remarks setup.
This makes this code much more similar to what ThinLTO is
using (also API wise), so now we can probably use a single
code path instead of copying stuff around.
Yaxun Liu [Fri, 10 Feb 2017 21:46:07 +0000 (21:46 +0000)]
Fix invalid addrspacecast due to combining alloca with global var
For function-scope variables with a large initialization list, the FE usually
generates a global variable to hold the initializer, then generates a
memcpy intrinsic to initialize the alloca. InstCombiner::visitAllocaInst
identifies such allocas, which are accessed only by reading, and replaces
them with the global variable. This is done by casting the global variable
to the type of the alloca and replacing all references.
However, when the global variable is in a different address space which
is disjoint with addr space 0 (e.g. for IR generated from OpenCL,
global variable cannot be in private addr space i.e. addr space 0), casting
the global variable to addr space 0 results in invalid IR for certain
targets (e.g. amdgpu).
To fix this issue, when the global variable is not in addr space 0,
instead of casting it to addr space 0, this patch chases down the uses of
the alloca until it reaches the load instructions, then replaces each load
from the alloca with a load from the global variable. If bitcasts and GEPs
are encountered during the chase, new bitcasts and GEPs based on the
global variable are generated and used in the load instructions.
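Below is a much-simplified sketch of the final load-rewriting step; the function name and structure are invented for illustration, and the actual patch also rebuilds the intermediate bitcasts and GEPs in the global's address space before it reaches the loads.
  #include "llvm/ADT/SmallVector.h"
  #include "llvm/IR/GlobalVariable.h"
  #include "llvm/IR/IRBuilder.h"
  #include "llvm/IR/Instructions.h"
  using namespace llvm;
  // Hypothetical helper: replace loads that read directly from the alloca
  // with loads from the equivalent global, avoiding any addrspacecast.
  static void replaceDirectLoads(AllocaInst *AI, GlobalVariable *GV) {
    SmallVector<LoadInst *, 8> Loads;
    for (User *U : AI->users())
      if (auto *LI = dyn_cast<LoadInst>(U))
        Loads.push_back(LI);
    for (LoadInst *LI : Loads) {
      IRBuilder<> B(LI);
      Value *NewLoad = B.CreateLoad(LI->getType(), GV, LI->getName());
      LI->replaceAllUsesWith(NewLoad);
      LI->eraseFromParent();
    }
  }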
Dehao Chen [Fri, 10 Feb 2017 21:09:07 +0000 (21:09 +0000)]
Encode duplication factor from loop vectorization and loop unrolling to discriminator.
Summary:
This patch starts the implementation as discussed in the following RFC: http://lists.llvm.org/pipermail/llvm-dev/2016-October/106532.html
When an optimization duplicates code in a way that scales down the execution count of a basic block, we record the duplication factor as part of the discriminator so that the offline profile-processing tool can find the duplication factor and compute the accurate execution frequency of the corresponding source code. Two important optimizations that fall into this category are loop vectorization and loop unrolling. This patch records the duplication factor for these 2 optimizations.
The recording will be guarded by a flag encode-duplication-in-discriminators, which is off by default.
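As a hedged illustration of the idea only (the real encoding lives in the debug-info discriminator support and its exact bit layout differs; the helper and layout below are assumptions):
  #include <cstdint>
  // Pack the duplication factor next to the base discriminator so an offline
  // profile tool can divide sampled counts by it. Assumed layout: low 8 bits
  // hold the base discriminator, the next 8 bits the duplication factor.
  static uint32_t encodeDiscriminator(uint32_t BaseDiscriminator,
                                      uint32_t DuplicationFactor) {
    return (BaseDiscriminator & 0xffu) | ((DuplicationFactor & 0xffu) << 8);
  }
With such an encoding, a profile tool can divide the sampled count of a duplicated copy by the recorded factor to recover the original block frequency.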
Tim Shen [Fri, 10 Feb 2017 21:03:24 +0000 (21:03 +0000)]
[XRay] Implement powerpc64le xray.
Summary:
powerpc64 big-endian is not supported, but I believe that most logic can
be shared, except for xray_powerpc64.cc.
Also add a function InvalidateInstructionCache to xray_util.h, which is
copied from llvm/Support/Memory.cpp. I'm not sure if I need to add a unittest,
and I don't know how.
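A rough, hedged equivalent of that helper (the copy in the XRay runtime may differ; on GCC/Clang the builtin below expands to the appropriate icache-flush sequence on powerpc64le):
  #include <cstddef>
  // Flush the instruction cache for [Addr, Addr + Len) after patching code.
  inline void InvalidateInstructionCache(void *Addr, std::size_t Len) {
    char *Begin = static_cast<char *>(Addr);
    __builtin___clear_cache(Begin, Begin + Len);
  }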
Ahmed Bougacha [Fri, 10 Feb 2017 19:51:47 +0000 (19:51 +0000)]
[X86] Bitcast subvector before broadcasting it.
Since r274013, we've been looking through bitcasts on broadcast inputs.
In the scalar-folding case (from a load, build_vector, or sc2vec),
the input type didn't matter, as we'd simply bitcast the resulting
scalar back.
However, when broadcasting a 128-bit-lane-aligned element, we create an
EXTRACT_SUBVECTOR. Use proper types, by creating an extract_subvector
of the original input type.
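A sketch of the idea only, with invented names and parameters rather than the actual X86 lowering code: extract the 128-bit lane in the input's original type first, and bitcast afterwards.
  #include "llvm/CodeGen/SelectionDAG.h"
  using namespace llvm;
  // Keep the DAG well-typed: EXTRACT_SUBVECTOR uses the source's own vector
  // type, and the bitcast to the type the broadcast expects happens last.
  static SDValue extractLaneInOriginalType(SelectionDAG &DAG, const SDLoc &DL,
                                           SDValue Src, unsigned LaneIdx,
                                           EVT LaneVT, EVT BcastSrcVT) {
    SDValue Lane = DAG.getNode(ISD::EXTRACT_SUBVECTOR, DL, LaneVT, Src,
                               DAG.getIntPtrConstant(LaneIdx, DL));
    return DAG.getBitcast(BcastSrcVT, Lane);
  }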
John Brawn [Fri, 10 Feb 2017 17:41:08 +0000 (17:41 +0000)]
[ARM] Fix incorrect mask bits in MSR encoding for write_register intrinsic
In the encoding of system registers in the M-class MSR instruction, the mask bits
should be 2 for registers that don't take a _<bits> qualifier (the instruction
is unpredictable otherwise), and should also be 2 if the register takes a
_<bits> qualifier but it is not present, since omitting _<bits> is an alias for _nzcvq.
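As an illustration of that rule only (this is not the actual ARM encoder code; the helper and its parameters are made up):
  // Mask bits for an M-class MSR system-register encoding.
  static unsigned getMClassMSRMask(bool RegTakesBitsQualifier,
                                   bool QualifierPresent,
                                   unsigned RequestedMask) {
    if (!RegTakesBitsQualifier)
      return 2;               // register takes no _<bits> qualifier
    if (!QualifierPresent)
      return 2;               // omitted qualifier is an alias for _nzcvq
    return RequestedMask;     // otherwise honour the qualifier that was written
  }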
Simon Pilgrim [Fri, 10 Feb 2017 14:37:25 +0000 (14:37 +0000)]
[DAGCombine] Allow vector constant folding of any value type before type legalization
The patch comes in 2 parts:
1 - it makes use of the SelectionDAG::NewNodesMustHaveLegalTypes flag to tell when it can safely constant fold illegal types.
2 - it correctly resets SelectionDAG::NewNodesMustHaveLegalTypes at the start of each call to SelectionDAGISel::CodeGenAndEmitDAG so all the pre-legalization stages can make use of it - not just the first basic block that gets handled.
Simon Pilgrim [Fri, 10 Feb 2017 14:04:11 +0000 (14:04 +0000)]
[X86][SSE] Add support for extracting target constants from BUILD_VECTOR
In some cases we call getTargetConstantBitsFromNode for nodes that haven't been lowered from BUILD_VECTOR yet.
Note: We're getting very close to being able to move most of the constant extraction code from getTargetShuffleMaskIndices into getTargetConstantBitsFromNode
Chandler Carruth [Fri, 10 Feb 2017 08:26:58 +0000 (08:26 +0000)]
[PM] Fix a bug in the new loop PM when handling functions with no loops.
Without any loops, we don't even bother to build the standard analyses
used by loop passes. Without these, we can't run loop analyses or
invalidate them properly. Unfortunately, we did these things in the
wrong order: a loop analysis manager proxy could be built without the
standard analyses behind it, and when we then went to do the invalidation
in the proxy, things would fall apart. In the test case provided, it
would actually crash.
The fix is to carefully check for loops first, and to in fact build the
standard analyses before building the proxy. This allows it to
correctly trigger invalidation for those standard analyses.
An alternative might seem to be to look at whether there are any loops
when doing invalidation, but this doesn't work when during the loop
pipeline run we delete the last loop. I've even included that as a test
case. It is both simpler and more robust to defer building the proxy
until there are definitely the standard set of analyses and indeed
loops.
This bug was uncovered by enabling GlobalsAA in the pipeline.
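The ordering idea, sketched under assumptions (this is not the exact FunctionToLoopPassAdaptor code; the analysis names are the standard new-PM ones, the function itself is hypothetical):
  #include "llvm/Analysis/LoopAnalysisManager.h"
  #include "llvm/Analysis/LoopInfo.h"
  #include "llvm/Analysis/ScalarEvolution.h"
  #include "llvm/IR/Dominators.h"
  #include "llvm/IR/PassManager.h"
  using namespace llvm;
  static PreservedAnalyses runLoopPipeline(Function &F,
                                           FunctionAnalysisManager &AM) {
    // 1) Check for loops first; with none, build nothing at all.
    LoopInfo &LI = AM.getResult<LoopAnalysis>(F);
    if (LI.empty())
      return PreservedAnalyses::all();
    // 2) Build the standard analyses loop passes rely on...
    (void)AM.getResult<DominatorTreeAnalysis>(F);
    (void)AM.getResult<ScalarEvolutionAnalysis>(F);
    // 3) ...and only then the loop analysis manager proxy, so invalidation
    //    through the proxy always finds those analyses in place.
    LoopAnalysisManager &LAM =
        AM.getResult<LoopAnalysisManagerFunctionProxy>(F).getManager();
    (void)LAM;
    // ... run the loop pass pipeline over the loops in LI ...
    return PreservedAnalyses::none();
  }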
Summary:
In preparation for graph comparison and filtering, this is a library for
representing graphs in LLVM. This will enable easier encapsulation and reuse
of graphs in llvm-xray.
Philip Reames [Fri, 10 Feb 2017 06:12:06 +0000 (06:12 +0000)]
[LoopUnswitch] Remove BFI usage (dead code)
Chandler mentioned at the last social that the need for BFI in the new pass manager was causing a slight hiccup for this pass. Given this code has been checked in but turned off for over a year, it makes sense to just remove it for now.
Note that there's nothing wrong with the general idea - it's actually quite a good one - and once we have the infrastructure in place to implement this without the full recomputation on every loop, we absolutely should.
Craig Topper [Fri, 10 Feb 2017 05:05:57 +0000 (05:05 +0000)]
[SelectionDAG] Dump the DAG after legalizing vector ops and after the second type legalization
Summary:
With -debug, we aren't dumping the DAG after legalizing vector ops. In particular, on X86 with AVX1 only, we don't dump the DAG after we split 256-bit integer ops into pairs of 128-bit ADDs since this occurs during vector legalization.
I'm only dumping if legalizing vector ops changed something, since that phase doesn't print anything itself; the dump shows up right after the first type-legalization dump, and if nothing changed this second dump would be redundant.
Having said that, I think we should probably fix legalize vector ops to log what it's doing.
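A hedged fragment showing the shape of the change; this would sit inside SelectionDAGISel::CodeGenAndEmitDAG, with the surrounding code elided:
  // Only dump when the vector-op legalizer actually changed the DAG; the
  // legalizer itself prints nothing, so an unconditional dump would just
  // repeat the preceding type-legalization dump.
  bool Changed = CurDAG->LegalizeVectorOps();
  if (Changed) {
    DEBUG(dbgs() << "Vector-legalized selection DAG:\n"; CurDAG->dump());
  }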
Adam Nemet [Fri, 10 Feb 2017 04:50:18 +0000 (04:50 +0000)]
opt-viewer: fix HtmlFormatter encoding
Summary: Small fix to HtmlFormatter, which defaults to ascii encoding, so utf-8 output may get `UnicodeEncodeError: 'ascii' codec can't encode character ... ordinal not in range(128)` during write.
Eric Christopher [Fri, 10 Feb 2017 04:35:32 +0000 (04:35 +0000)]
Temporarily revert "For X86-64 linux and PPC64 linux align int128 to 16 bytes."
until we can get a better TargetMachine::isCompatibleDataLayout to compare with - otherwise
we can't code-generate existing bitcode without requiring string equality of the data layout.
Matthias Braun [Fri, 10 Feb 2017 03:48:50 +0000 (03:48 +0000)]
SubtargetFeature: Increase MAX_SUBTARGET_FEATURES
The ARM target is getting really close to the current limit of 128
subtarget features and is already breaking out-of-tree enhancements. Increase
the size once more, to 196.
I filed http://llvm.org/PR31926 to request a proper solution.
Eric Christopher [Fri, 10 Feb 2017 03:32:21 +0000 (03:32 +0000)]
For X86-64 linux and PPC64 linux align int128 to 16 bytes.
For other platforms we should find out what they need and likely
make the same change; however, a smaller additional change is easier
for platforms where we know the ABI specifies this. As part of this,
rewrite some of the data layout handling in the backends and update
a bunch of testcases.
Quentin Colombet [Fri, 10 Feb 2017 02:43:09 +0000 (02:43 +0000)]
[TableGen][AsmWriterEmitter] Use a deterministic order to sort InstrAliases
Inside an alias group, when ordering instruction aliases, we rely
on the priority field to sort them.
When the priority is not set, or more generally when there is a tie between
two aliases, we used to rely on the lexicographic order. However, this
order can change for anonymous records when more instructions, intrinsics,
etc. are inserted.
For instance, given two anonymous records r1 and r2 with respective names
A_999 and A_1000, their lexicographic order will be r2 then r1. Now, if
an instruction is added before them, their names will become respectively
A_1000 and A_1001, thus the lexicographic order will be r1 then r2, i.e.,
it changed.
If that happens in an alias group, the assembly output would prefer a
different alias for no apparent good reason.
A way to fix that is to use proper priority for all aliases, but we
can also make the tie breaker comparison smarter and use a deterministic
ordering. This is what this patch does.
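A sketch of the tie-breaker idea under assumptions; the field and accessor names (Priority, DefID) are illustrative rather than the exact AsmWriterEmitter code:
  #include <algorithm>
  #include <vector>
  struct AliasEntry {
    int Priority;    // explicit priority, 0 when unset
    unsigned DefID;  // e.g. the record's creation/parse order, which is stable
  };
  // Sort by priority, and break ties with a key that does not depend on the
  // (unstable) anonymous record names.
  static void sortAliases(std::vector<AliasEntry> &Aliases) {
    std::stable_sort(Aliases.begin(), Aliases.end(),
                     [](const AliasEntry &A, const AliasEntry &B) {
                       if (A.Priority != B.Priority)
                         return A.Priority > B.Priority; // higher priority first
                       return A.DefID < B.DefID;         // deterministic tie-break
                     });
  }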
This change returns an empty PSet list for the M0 register. Otherwise its
PSet, as defined by tablegen, is SReg_32, which results in an incorrect
register pressure calculation every time an instruction uses M0: such
uses count toward the SReg_32 PSet and spuriously increase pressure
on SGPRs.
Eric Fiselier [Fri, 10 Feb 2017 01:59:20 +0000 (01:59 +0000)]
[CMake] Fix pthread handling for out-of-tree builds
LLVM defines `PTHREAD_LIB` which is used by AddLLVM.cmake and various projects
to correctly link the threading library when needed. Unfortunately
`PTHREAD_LIB` is defined by LLVM's `config-ix.cmake` file which isn't installed
and therefore can't be used when configuring out-of-tree builds. This causes
such builds to fail since `pthread` isn't being correctly linked.
This patch attempts to fix that problem by renaming and exporting
`LLVM_PTHREAD_LIB` as part of `LLVMConfig.cmake`. I renamed `PTHREAD_LIB`
because it seemed likely to cause collisions with downstream users of
`LLVMConfig.cmake`.
Marcos Pividori [Fri, 10 Feb 2017 01:40:28 +0000 (01:40 +0000)]
[libFuzzer] Export external functions on tests.
We need to export external functions so they are found when calling
GetProcAddress() on Windows. But we can't use `__declspec(dllexport)` because
we want the targets to be completely independent of the fuzz engines and not
depend on other header files. Also, we don't want to include platform-specific
code managed with conditional macros.
So, the solution is to add the exported symbols with linker flags in cmake.
Marcos Pividori [Fri, 10 Feb 2017 01:35:46 +0000 (01:35 +0000)]
[libFuzzer] Use dynamic loading for External Functions on Windows.
Replace weak aliases with dynamic loading.
Weak aliases were generating some problems when linking for MT on Windows. For
MT, compiler-rt's libraries are statically linked to the main executable, the
same as libFuzzer, so if we use weak aliases we are providing two different
default implementations for the same weak function and the linker fails.
In this diff I reimplement ExternalFunctions() using dynamic loading, so it
works in both cases (MD and MT). Also, dynamic loading is simpler, since we are
not defining any auxiliary external function, and we don't need to deal with
weak aliases.
This is equivalent to the implementation using dlsym(RTLD_DEFAULT, FnName) for
Posix.
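A minimal sketch of the Windows-side lookup, assuming the symbols live in the main executable image (the actual libFuzzer code differs in structure):
  #include <windows.h>
  // Windows counterpart of dlsym(RTLD_DEFAULT, FnName): resolve an optional
  // external function from the main executable at runtime.
  static void *GetFnPtr(const char *FnName) {
    HMODULE Exe = GetModuleHandleA(nullptr);  // handle to the main executable
    if (!Exe)
      return nullptr;
    return reinterpret_cast<void *>(GetProcAddress(Exe, FnName));
  }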
Dan Gohman [Fri, 10 Feb 2017 00:02:58 +0000 (00:02 +0000)]
[Support] Extend SLEB128 encoding support.
Add support for padded SLEB128 values, and support for writing SLEB128
values to buffers rather than to ostreams, similar to the existing
ULEB128 support.
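A hedged sketch of SLEB128 encoding into a byte buffer with optional padding to a fixed width; the real helpers live in llvm/Support/LEB128.h and differ in interface details:
  #include <cstdint>
  #include <vector>
  // Emit Value as SLEB128 into Out, padding to at least PadTo bytes so
  // fixed-width (patchable) fields still decode to the same value.
  static void encodeSLEB128(int64_t Value, std::vector<uint8_t> &Out,
                            unsigned PadTo = 0) {
    unsigned Count = 0;
    bool More;
    do {
      uint8_t Byte = Value & 0x7f;
      Value >>= 7;  // arithmetic shift preserves the sign
      More = !((Value == 0 && (Byte & 0x40) == 0) ||
               (Value == -1 && (Byte & 0x40) != 0));
      if (More || Count + 1 < PadTo)
        Byte |= 0x80;  // continuation bit
      Out.push_back(Byte);
      ++Count;
    } while (More);
    // Pad with sign-extension bytes; only the final byte clears the
    // continuation bit.
    uint8_t Pad = Value < 0 ? 0x7f : 0x00;
    for (; Count < PadTo; ++Count)
      Out.push_back(Count + 1 < PadTo ? uint8_t(Pad | 0x80) : Pad);
  }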
The pygments syntax highlighting package used by sphinx fails to parse
newer LLVM constructs or valid (at least to me) gas constructs like
`.secrel32 _function_name + 0`.
Disable this particular warning so the build doesn't abort as fixing
pygments doesn't seem a workable option here.
This needs explicit requires of the optimization remark emission before
loop pass pipelines containing LICM as we no longer get it from the
inliner -- Argument Promotion may invalidate it. Technically the inliner
could also have broken this, but it never came up in testing.
[PM] Port ArgumentPromotion to the new pass manager.
Now that the call graph supports efficient replacement of a function and
spurious reference edges, we can port ArgumentPromotion to the new pass
manager very easily.
The old PM-specific bits are sunk into callbacks that the new PM simply
doesn't use. Unlike the old PM, the new PM simply does argument
promotion and afterward does the update to LCG reflecting the promoted
function.
[PM/LCG] Teach LCG to support spurious reference edges.
Somewhat amazingly, this only requires teaching it to clean them up when
deleting a dead function from the graph. And we already have exactly the
necessary data structures to do that in the parent RefSCCs.
This allows ArgPromote to work in a much simpler way by merely letting
reference edges linger in the graph after the causing IR is deleted. We
will clean up these edges when we run any function pass over the IR, but
don't remove them eagerly.
This avoids all of the quadratic update issues both in the current pass
manager and in my previous attempt with the new pass manager.
[PM/LCG] Teach the LazyCallGraph how to replace a function without
disturbing the graph or having to update edges.
This is motivated by porting argument promotion to the new pass manager.
Because of how LLVM IR Function objects work, in order to change their
signature a new object needs to be created. This is efficient and
straightforward in the IR but previously was very hard to implement in
LCG. We could easily replace the function a node in the graph
represents. The challenging part is how to handle updating the edges in
the graph.
LCG previously used an edge to a raw function to represent a node that
had not yet been scanned for calls and references. This was the core
of its laziness. However, that model causes this kind of update to be
very hard:
1) The keys to lookup an edge need to be `Function*`s that would all
need to be updated when we update the node.
2) There will be some unknown number of edges that haven't transitioned
from `Function*` edges to `Node*` edges.
All of this complexity isn't necessary. Instead, we can always build
a node around any function, always pointing edges at it and always using
it as the key to lookup an edge. To maintain the laziness, we need to
sink the *edges* of a node into a secondary object and explicitly model
transitioning a node from empty to populated by scanning the function.
This design seems much cleaner in a number of ways, but importantly
there is now exactly *one* place where the `Function*` has to be
updated!
Some other cleanups that fall out of this include having something to
model the *entry* edges more accurately. Rather than hand rolling parts
of the node in the graph itself, we have an explicit `EdgeSequence`
object that gives us exactly the functionality needed. We also have
a consistent place to define the edge iterators and can use them for
both the entry edges and the internal edges of the graph.
The API used to model the separation between a node and its edges is
intentionally very thin as most clients are expected to deal with nodes
that have populated edges. We model this exactly as an optional does
with an additional method to populate the edges when that is
a reasonable thing for a client to do. This is based on API design
suggestions from Richard Smith and David Blaikie, credit goes to them
for helping pick how to model this without it being either too explicit
or too implicit.
The patch is somewhat noisy due to shifting around iterator types and
new syntax for walking the edges of a node, but most of the
functionality change is in the `Edge`, `EdgeSequence`, and `Node` types.
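A rough sketch of the shape described above, using invented names rather than the real LazyCallGraph API:
  #include <memory>
  class Function;  // stand-in for llvm::Function
  // Holds a node's call and reference edges once the function has been
  // scanned; an unpopulated node has no EdgeSequence at all.
  struct EdgeSequence {
    // ... edges and iterators over them ...
  };
  // Scans a function body for calls/references (assumed helper).
  std::unique_ptr<EdgeSequence> scanFunctionForEdges(Function &F);
  class Node {
    Function *F;                          // the one place the Function* lives
    std::unique_ptr<EdgeSequence> Edges;  // null until populated
  public:
    explicit Node(Function &Fn) : F(&Fn) {}
    bool isPopulated() const { return Edges != nullptr; }
    EdgeSequence &populate() {            // lazily scan on first use
      if (!Edges)
        Edges = scanFunctionForEdges(*F);
      return *Edges;
    }
    // Replacing the function is a single pointer update; edge keys and
    // lookup tables are unaffected because they are keyed on the Node.
    void replaceFunction(Function &NewF) { F = &NewF; }
  };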
X86: Teach X86InstrInfo::analyzeCompare to recognize compares of symbols.
This requires that we communicate to X86InstrInfo::optimizeCompareInstr
that the second operand is neither a register nor an immediate. The way we
do that is by setting CmpMask to zero.
Note that there were already instructions where the second operand was not a
register nor an immediate, namely X86::SUB*rm, so also set CmpMask to zero
for those instructions. This seems like a latent bug, but I was unable to
trigger it.
Adrian McCarthy [Thu, 9 Feb 2017 21:51:19 +0000 (21:51 +0000)]
Introduce NativeRawSymbol for PDB reading.
This is a stub for a new concrete implementation of IPDBRawSymbol.
Nothing uses this implementation yet. My plan is to
locally switch llvm-pdbdump from the DIA reader to the Native one
and flesh out the implementations of these method stubs in the order
they're needed.
Daniel Berlin [Thu, 9 Feb 2017 20:37:24 +0000 (20:37 +0000)]
GraphTraits: Add range versions of graph traits functions (graph_nodes, graph_children, inverse_graph_nodes, inverse_graph_children).
Summary:
Convert all obvious node_begin/node_end and child_begin/child_end
pairs to range based for.
Sending for review in case someone has a good idea how to make
graph_children able to be inferred. It looks like it would require
changing GraphTraits to be two argument or something. I presume
inference does not happen because it would have to check every
GraphTraits in the world to see if the noderef types matched.
Note: This change was 3-staged with clang as well, which uses
Dominators/etc from LLVM.
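Roughly what one of these helpers looks like (sketch only; the exact names and signatures in GraphTraits.h may differ):
  #include "llvm/ADT/GraphTraits.h"
  #include "llvm/ADT/iterator_range.h"
  // Wrap the child_begin/child_end pair so callers can write
  //   for (auto &Child : graph_children<GraphT>(N)) ...
  template <class GraphT>
  llvm::iterator_range<typename llvm::GraphTraits<GraphT>::ChildIteratorType>
  graph_children(typename llvm::GraphTraits<GraphT>::NodeRef N) {
    return llvm::make_range(llvm::GraphTraits<GraphT>::child_begin(N),
                            llvm::GraphTraits<GraphT>::child_end(N));
  }
Because NodeRef is a dependent type, GraphT has to be spelled out explicitly at the call site, which is the inference limitation mentioned above.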
Frederic Riss [Thu, 9 Feb 2017 19:41:55 +0000 (19:41 +0000)]
[dsymutil] Fix handling of empty CUs in LTO links.
r288399 introduced the DIEUnit class, and in the process broke
the corner case where dsymutil generates an empty CU during an
LTO link. This restores the logic and adds a test for the corner
case.