Simon Pilgrim [Sun, 19 Jul 2015 10:50:53 +0000 (10:50 +0000)]
Remove TargetInstrInfo::canFoldMemoryOperand
canFoldMemoryOperand is not actually used anywhere in the codebase: all
existing users instead call foldMemoryOperand directly when they wish to
fold, and can correctly deduce what they need from its return value.
This patch removes the canFoldMemoryOperand base function and the target implementations; only x86 had a real (bit-rotted) implementation, although AMDGPU had a preparatory stub that had never needed to be completed.
AVX-512: Floating point conversions for SKX - DAG Lowering.
SKX supports conversions for all FP types. The integer types include
doublewords and quadwords.
I added "Legal" status for these nodes and a bunch of tests.
I added "NoVLX" to the AVX DAG selection to force selection of VLX
instructions when VLX is supported.
[PM/AA] Remove the addEscapingUse update API that won't be easy to
directly model in the new PM.
This also was an incredibly brittle and expensive update API that was
never fully utilized by all the passes that claimed to preserve AA, nor
could it reasonably have been extended to all of them. Any number of
places add uses of values. If we ever wanted to reliably instrument
this, we would want a callback hook much like we have with ValueHandles,
but doing this for every use addition seems *extremely* expensive in
terms of compile time.
The only user of this update mechanism is GlobalsModRef. The idea of
using this to keep it up to date doesn't really work anyway, as its
analysis requires a symmetric analysis of two different memory
locations. It would be very hard to make updates be sufficiently
rigorous to *guarantee* symmetric analysis in this way, and it pretty
certainly isn't true today.
However, folks have been using GMR with this update for a long time and
seem to not be hitting the issues. The reported issue that the update
hook fixes isn't even a problem any more as other changes to
GetUnderlyingObject worked around it, and that issue stemmed from *many*
years ago. As a consequence, a prior patch provided a flag to control
the unsafe behavior of GMR, and this patch removes the update mechanism
that has questionable compile-time tradeoffs and is causing problems
with moving to the new pass manager. Note the lack of test updates --
not one test in tree actually requires this update, even for a contrived
case.
All of this was extensively discussed on the dev list, this patch will
just enact what that discussion decides on. I'm sending it for review in
part to show what I'm planning, and in part to show the *amazing* amount
of work this avoids. Every call to the AA here is something like three
to six indirect function calls, which in the non-LTO pipeline never do
any work! =[
ARM: Enable MachineScheduler and disable PostRAScheduler for swift.
Reapply r242500 now that the swift schedmodel includes LDRLIT.
This is mostly done to disable the PostRAScheduler, which optimizes for
instruction latencies and isn't a good fit for out-of-order
architectures. This also allows us to leave out the itinerary table in
swift in favor of the SchedModel one.
This change leads to performance improvements/regressions of as much as
10% in some benchmarks; in fact, we lose 0.4% performance over the
llvm-testsuite for reasons that appear to be unknown or out of the
compiler's control. rdar://20803802 documents the investigation of
these effects.
While it is probably a good idea to perform the same switch for the
other ARM out-of-order CPUs, I limited this change to swift as I cannot
perform the benchmark verification on the other CPUs.
ARM: Add scheduling information for LDRLIT instructions to swift scheduling model
These pseudo instructions are only lowered after register allocation and
are therefore still present when the machine scheduler runs.
Add a run: line to a testcase that uses the uncommon flags necessary to
actually produce a LDRLIT instruction on swift.
[RAGreedy] Add an experimental deferred spilling feature.
The idea of deferred spilling is to delay the insertion of spill code until
the very end of allocation. A variable chosen as a spill "candidate" may turn
out not to require spilling because of other evictions that happen after that
decision was taken. The spirit is similar to the optimistic coloring strategy
implemented in Preston and Briggs' graph coloring algorithm.
For now, this feature is highly experimental. Although correct, it would
require much more work to properly model the effect of spilling.
Anyway, this early patch helps with prototyping the feature.
Note: The test case cannot unfortunately be reduced and is probably fragile.
Alex Lorenz [Fri, 17 Jul 2015 22:48:04 +0000 (22:48 +0000)]
MIR Parser: Allow dollar characters in all of the identifier tokens.
This commit modifies the machine instruction lexer so that it now accepts
'$' characters in identifier tokens.
This change makes the syntax for unquoted global value tokens consistent with
the syntax for global identifier tokens in LLVM's assembly language.
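For example (the name is illustrative), a global whose name contains a dollar
sign, such as foo$bar, can now be written unquoted in MIR as @foo$bar, just as
LLVM IR already allows '$' in its unquoted identifiers.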
Peter Zotov [Fri, 17 Jul 2015 17:33:23 +0000 (17:33 +0000)]
[OCaml] Do not use -warn-error in tests.
This -warn-error flag invariably gets into release tarballs
and breaks builds on distributions that run tests as a part
of release process. The OCaml binding tests are especially
critical, since they often expose lingering toolchain bugs,
and so it is replaced with -w +A (equivalent to -Wall).
Daniel Sanders [Fri, 17 Jul 2015 10:40:40 +0000 (10:40 +0000)]
test-release.sh: Add ability to do a test build using the trunk or branches.
Summary:
Adds '--svn-path BRANCH' that causes the script to export the specified path
from each project. Otherwise the tag specified by -release, -rc, etc. will be
used. The version portion of the package name will be 'test-$path' (any forward
slashes in the branch name are replaced with underscores), for example:
-svn-path trunk => clang+llvm-test-trunk-mips-linux-gnu.tar.xz
-svn-path branches/release_35 => clang+llvm-test-branches_release_35-mips-linux-gnu.tar.xz
This is primarily useful for bringing new release packages up to standard
without needing to create and maintain a tag for the purpose.
[PM/AA] Disable the core unsafe aspect of GlobalsModRef in the face of
basic changes to the IR such as folding pointers through PHIs, Selects,
integer casts, store/load pairs, or outlining.
This leaves the feature available behind a flag. This flag's default
could be flipped if necessary, but the real-world performance impact of
this particular feature of GMR may not be sufficiently significant for
many folks to want to run the risk.
Currently, the risk here is somewhat mitigated by half-hearted attempts
to update GlobalsModRef when the rest of the optimizer changes
something. However, I am currently trying to remove that update
mechanism as it makes migrating the AA infrastructure to a form that can
be readily shared between new and old pass managers very challenging.
Without this update mechanism, it is possible that this (still unlikely)
failure mode will start to trip people up, and so I wanted to try to
proactively avoid that.
There is a lengthy discussion on the mailing list about why the core
approach here is flawed, and likely would need to look totally different
to be both reasonably effective and resilient to basic IR changes
occurring. This patch is essentially the first of two which will enact
the result of that discussion. The next patch will remove the current
update mechanism.
Thanks to lots of folks that helped look at this from different angles.
Special thanks to Michael Zolotukhin for doing some very preliminary
benchmarking of LTO without GlobalsModRef to get a rough idea of the
impact we could be facing here. So far, it looks very small, but there
are some concerns lingering from other benchmarking. The default here
may get flipped if performance results end up pointing at this as a more
significant issue.
Kuba Brecka [Fri, 17 Jul 2015 06:29:57 +0000 (06:29 +0000)]
[asan] Fix invalid debug info for promotable allocas
Since r230724 ("Skip promotable allocas to improve performance at -O0"),
there is a regression in the generated debug info for those non-instrumented
variables. When inspecting such a variable's value in LLDB, you often get
garbage instead of the actual value. ASan instrumentation is inserted before
the creation of the non-instrumented alloca. The only allocas that are
considered standard stack variables are the ones declared in the first
basic-block, but the initial instrumentation setup in the function breaks
that invariant.
This patch makes sure uninstrumented allocas stay in the first BB.
ARM: Enable MachineScheduler and disable PostRAScheduler for swift.
This is mostly done to disable the PostRAScheduler, which optimizes for
instruction latencies and isn't a good fit for out-of-order
architectures. This also allows us to leave out the itinerary table in
swift in favor of the SchedModel one.
This change leads to performance improvements/regressions of as much as
10% in some benchmarks; in fact, we lose 0.4% performance over the
llvm-testsuite for reasons that appear to be unknown or out of the
compiler's control. rdar://20803802 documents the investigation of
these effects.
While it is probably a good idea to perform the same switch for the
other ARM out-of-order CPUs, I limited this change to swift as I cannot
perform the benchmark verification on the other CPUs.
Add new constructors for LoopInfo/DominatorTree/BFI/BPI
These new constructors make it more natural to construct an analysis object
for a single function. For example, previously, building a LoopInfo for a
function took four statements, as sketched below.
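A minimal before/after sketch, assuming the usual recalculate/analyze entry
points (the statements are illustrative, not quoted from the patch):

    #include "llvm/Analysis/LoopInfo.h"
    #include "llvm/IR/Dominators.h"
    using namespace llvm;

    void buildLoopInfo(Function &F) {
      // Before: four statements to get a LoopInfo for F.
      DominatorTree DT;
      DT.recalculate(F);
      LoopInfo LI;
      LI.analyze(DT);

      // After: the new constructors collapse this into one statement.
      LoopInfo LI2(DominatorTree{F}); // braces avoid the most vexing parse
    }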
Arm: Don't define a label twice with two setjmps in a function.
Constructing a name based on the function name didn't give us a unique
symbol if we had more than one setjmp in a function. Using
MCContext::createTempSymbol() always gives us a unique name.
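A minimal sketch of the pattern, with the helper name being illustrative:

    #include "llvm/MC/MCContext.h"
    #include "llvm/MC/MCSymbol.h"
    using namespace llvm;

    // A label derived from the function name collides when the function
    // contains two setjmps; createTempSymbol() hands out a fresh, unique
    // symbol on every call.
    MCSymbol *makeSetjmpLabel(MCContext &Ctx) {
      return Ctx.createTempSymbol();
    }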
Fix __builtin_setjmp in combination with sjlj exception handling.
llvm.eh.sjlj.setjmp was used as part of the SjLj exception handling
style, but is also used in clang to implement __builtin_setjmp. The ARM
backend needs to output additional dispatch tables for the SjLj
exception handling style; these tables, however, can't be emitted if
llvm.eh.sjlj.setjmp is simply used for __builtin_setjmp and no actual
landing pad blocks exist.
To solve this issue, a new llvm.eh.sjlj.setup_dispatch intrinsic is
introduced and used instead of llvm.eh.sjlj.setjmp in the SjLj
exception handling lowering, so we can differentiate between the case
where we actually need to set up a dispatch table and the case where we
just need the __builtin_setjmp semantics.
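For reference, a small illustrative use of the builtin that must keep working
without any landing pads or dispatch tables:

    #include <cstdio>

    // __builtin_setjmp uses a five-word buffer and pairs with
    // __builtin_longjmp, whose second argument must be the constant 1.
    static void *Buf[5];

    static void jumper() {
      __builtin_longjmp(Buf, 1);
    }

    int main() {
      if (__builtin_setjmp(Buf) == 0) {
        std::puts("direct path");
        jumper();
      } else {
        std::puts("resumed via __builtin_longjmp");
      }
      return 0;
    }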
Tim Northover [Thu, 16 Jul 2015 21:30:21 +0000 (21:30 +0000)]
AArch64: make inexact signalling on round Darwin-specific
C11 leaves it implementation-defined whether round-to-integer operations set
the inexact flag. Darwin does expect it to be set, but this seems to be
against the intent of the IEEE document and slower to implement anyway. So
it should be opt-in.
Bill Schmidt [Thu, 16 Jul 2015 21:14:07 +0000 (21:14 +0000)]
[PowerPC] v4i32 is a VSRCRegClass
I was looking at some vector code generation and kept seeing
unnecessary vector copies into the Altivec half of the VSX registers.
I discovered that we overlooked v4i32 when adding the register classes
for VSX; we only added v4f32 and v2f64. This means that anything that
canonicalizes into v4i32 (which is a LOT of stuff) ends up being
forced into VRRC on its way to VSRC.
The fix is one line. The rest of the patch is fixing up some test
cases whose code generation has changed as a result.
This seems like it would be a good candidate for backport to 3.7.
Summary:
SpeculativeExecution enables a series of straight-line optimizations (such
as SLSR and NaryReassociate) on conditional code. For example, given

    if (...)
      ... b * s ...
    if (...)
      ... (b + 1) * s ...

speculative execution can hoist b * s and (b + 1) * s out of the then-blocks,
so that we have

    ... b * s ...
    if (...)
      ...
    ... (b + 1) * s ...
    if (...)
      ...

Then, SLSR can rewrite (b + 1) * s to (b * s + s) because, after
speculative execution, b * s dominates (b + 1) * s.
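In source-level terms, the combined effect looks roughly like this (an
illustration, not the passes' actual output):

    // After SpeculativeExecution hoists both multiplies, b * s dominates
    // (b + 1) * s, so SLSR can replace the second multiply with an add.
    int f(int b, int s, bool p, bool q) {
      int t = b * s;  // hoisted out of the first then-block
      int u = t + s;  // was (b + 1) * s; rewritten as b * s + s
      int r = 0;
      if (p)
        r += t;
      if (q)
        r += u;
      return r;
    }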
The performance impact of this change is significant. It speeds up the
benchmarks running EigenFloatContractionKernelInternal16x16
(https://bitbucket.org/eigen/eigen/src/ba68f42fa69e4f43417fe1e52669d4dd5d2b3bee/unsupported/Eigen/CXX11/src/Tensor/TensorContractionCuda.h?at=default#cl-526)
by roughly 2%. Some internal benchmarks that have the above code pattern
are improved by up to 40%. No significant slowdowns are observed on
Eigen CUDA microbenchmarks.
This is a new iteration of the reverted r238793 /
http://reviews.llvm.org/D8232, which wrongly assumed that any and/or
tree can be represented by conditional compare sequences; however, there
are some restrictions on that. This version fixes the issue and adds
comments that explain exactly what kinds of and/or trees can actually be
implemented as conditional compare sequences.
Related to http://llvm.org/PR20927, rdar://18326194
LiveInterval: Document and enforce rules about empty subranges.
Empty subranges are not allowed in a LiveInterval and must be removed
instead: check this in the verifiers, put a reminder about it in the
comment of the single-lane shrinkToUses variant, and make the removal
automatic in the shrinkToUses variant that takes a LiveInterval.
Internalize: internalize comdat members as a group, and drop comdat on such members.
Internalizing an individual comdat group member without also internalizing
the other members of the comdat can break comdat semantics. For example,
if a module contains a reference to an internalized comdat member, and the
linker chooses a comdat group from a different object file, this will break
the reference to the internalized member.
This change causes the internalizer to only internalize comdat members if all
other members of the comdat are not externally visible. Once a comdat group
has been fully internalized, there is no need to apply comdat rules to its
members; later optimization passes (e.g. globaldce) can legally drop individual
members of the comdat. So we drop the comdat attribute from all comdat members.
Mehdi Amini [Thu, 16 Jul 2015 16:34:23 +0000 (16:34 +0000)]
Make ExecutionEngine own a DataLayout
Summary:
This change is part of a series of commits dedicated to having a single
DataLayout during compilation, by always using the one owned by the
module.
The ExecutionEngine will act as an exception and will be unsafe to
reuse across contexts. We don't enforce this rule, but undefined
behavior can occur if the user tries to do it.
James Molloy [Thu, 16 Jul 2015 15:22:46 +0000 (15:22 +0000)]
[Codegen] Add intrinsics 'absdiff' and corresponding SDNodes for absolute difference operation
This adds new intrinsics "*absdiff" for absolute difference ops to facilitate
efficient code generation for the "sum of absolute differences" operation.
The patch also introduces the corresponding SDNodes and basic legalization
support. Sanity of the generated code is tested on X86.
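As a point of reference, a scalar model of the intended semantics (the new
intrinsics generalize this elementwise over vectors; the helper name is
illustrative):

    #include <cstdint>

    // Unsigned absolute difference: the building block of a
    // "sum of absolute differences" kernel.
    uint32_t absdiff(uint32_t a, uint32_t b) {
      return a > b ? a - b : b - a;
    }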
[X86] Reapply r240257 : "Allow more call sequences to use push instructions for argument passing"
This allows more call sequences to use pushes instead of movs when optimizing for size.
In particular, calling conventions that pass some parameters in registers (e.g. thiscall) are now supported.
This should no longer cause miscompiles, now that a bug in emitPrologue was fixed in r242395.
[X86] Fix emitPrologue() to make less assumptions about pushes
When X86FrameLowering::emitPrologue() looks for where to insert the %esp subtraction
to allocate stack space for local allocations, it assumes that any sequence of push
instructions that starts at function entry consists purely of spills of callee-save
registers.
This may be false, since from some point onward the pushes may be pushing
arguments to a subsequent function call.
This caused a miscompile that was exposed by r240257, and is not easily testable
since r240257 was reverted. A test will be committed separately after r240257 is
reapplied.
Mehdi Amini [Thu, 16 Jul 2015 06:17:14 +0000 (06:17 +0000)]
Make ExecutionEngine own a DataLayout
Summary:
This change is part of a series of commits dedicated to having a single
DataLayout during compilation, by always using the one owned by the
module.
The ExecutionEngine will act as an exception and will be unsafe to
reuse across contexts. We don't enforce this rule, but undefined
behavior can occur if the user tries to do it.
Mehdi Amini [Thu, 16 Jul 2015 06:11:10 +0000 (06:11 +0000)]
Move most users of TargetMachine::getDataLayout to the Module one
Summary:
This change is part of a series of commits dedicated to having a single
DataLayout during compilation, by always using the one owned by the
module.
This patch is quite boring overall, except for some ugliness in
AsmPrinter, which has a getDataLayout function but has some clients
that use it without a Module (llvm-dsymutil, llvm-dwarfdump), so
some methods now take a DataLayout as a parameter.
Mehdi Amini [Thu, 16 Jul 2015 06:04:17 +0000 (06:04 +0000)]
Remove DataLayout from TargetLoweringObjectFile, redirect to Module
Summary:
This change is part of a series of commits dedicated to having a single
DataLayout during compilation, by always using the one owned by the
module.
Adam Nemet [Thu, 16 Jul 2015 02:48:05 +0000 (02:48 +0000)]
[LAA] Split out a helper to check the pointer partitions, NFC
This is made a static public member function to allow the transition of
this logic from LAA to LoopDistribution. (Technically, it could be an
implementation-local static function but then it would not be accessible
from LoopDistribution.)
Revert the changes to the C API LLVMBuildLandingPad that were part of
the personality function move. We now set the personality on the parent
function when the C API attempts to construct a landingpad with a
personality.
[ARM] Define a subtarget feature that is used to avoid using movt/movw
pairs for 32-bit immediates.
This change is needed to avoid emitting movt/movw pairs when doing LTO,
and to do so on a per-function basis.
Out-of-tree projects currently using cl::opt option -arm-use-movt=0 or
false to avoid emitting movt/movw pairs should make changes to add
subtarget feature "+no-movt" (see the changes made to clang in r242368).
Pete Cooper [Thu, 16 Jul 2015 00:09:18 +0000 (00:09 +0000)]
Clear kill flags in ARMLoadStoreOptimizer.
The pass here was clearing kill flags on instructions which had
their sources killed in the instruction being combined. But
given that the new instruction is inserted after the existing ones,
any earlier instruction that still carries a kill flag for one of those
sources will lead to the verifier complaining that we are reading an
undefined physreg.
For example, what we had prior to this optimization is

    t2STRi12 %R1, %SP, 12
    t2STRi12 %R1<kill>, %SP, 16
    t2STRi12 %R0<kill>, %SP, 8

and prior to this fix that would generate

    t2STRi12 %R1<kill>, %SP, 16
    t2STRDi8 %R0<kill>, %R1, %SP, 8

This is clearly incorrect: the kill flag on R1 in the instruction with
offset 16 was not cleared, because there was no kill flag on the
instruction with offset 12.
After this change, we clear the kill flag on the offset-16 instruction
because we know R1 will be used afterwards in the new instruction.
I haven't provided a test case. I have a small test, but even it is
very sensitive to register allocation order which isn't ideal.
Alex Lorenz [Wed, 15 Jul 2015 23:31:07 +0000 (23:31 +0000)]
MIR Serialization: Serialize the jump table info.
The jump table info is serialized using a YAML mapping that contains its kind
and a YAML sequence of jump table entries. A jump table entry is a YAML mapping
that has an ID and an inline YAML sequence of machine basic block references.
The testcase 'CodeGen/MIR/X86/jump-table-info.mir' doesn't have any
instructions yet, because one of them would contain a jump table index
operand. The jump table index operands will be serialized in a follow-up
patch, and the appropriate instructions will be added to this testcase.