[SROA] Teach SROA how to much more intelligently handle split loads and
stores.
When there are accesses to an entire alloca with an integer
load or store as well as accesses to small pieces of the alloca, SROA
splits up the large integer accesses. In order to do that, it uses bit
math to merge the small accesses into large integers. While this is
effective, it produces insane IR that can cause significant problems in
the rest of the optimizer:
- It can cause load and store mismatches with GVN on the non-alloca side
where we end up loading an i64 (or some such) rather than loading
specific elements that are stored.
- We can't always get rid of the integer bit math, which is why we can't
always fix the loads and stores to work well with GVN.
- This is especially bad when we have operations that mix poorly with
integer bit math such as floating point operations.
- It will block things like the vectorizer which might be able to handle
  the scalar stores that underlie the aggregate.
At the same time, we can't just directly split up these loads and stores
in all cases. If there is actual integer arithmetic involved on the
values, then using integer bit math is actually the perfect lowering
because we can often combine it heavily with the surrounding math.
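For instance (a hypothetical source-level analogy, not code from the test
suite), when the pieces of a value really are consumed as one wide integer,
the merged form folds straight into the surrounding arithmetic:
#include <cstdint>
// The two 32-bit halves are ultimately used as a single 64-bit integer, so
// keeping the shift/or "bit math" lets it combine with the 64-bit add.
uint64_t as_word(uint32_t lo, uint32_t hi, uint64_t addend) {
  uint64_t packed = ((uint64_t)hi << 32) | lo;
  return packed + addend;
}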
The solution this patch provides is to find places where SROA is
partitioning aggregates into small elements, and look for splittable
loads and stores that it can split all the way to some other adjacent
load and store. These are uniformly the cases I have seen where failing to
split the loads and stores hurts the optimizer, and I've looked extensively
at the code produced by both more and less aggressive approaches to this
problem.
However, it is quite tricky to actually do this in SROA. We may have
loads and stores to the same alloca, or other complex patterns that are
hard to handle. This complexity leads to the somewhat subtle algorithm
implemented here. We have to do this entire process as a separate pass
over the partitioning of the alloca, and split up all of the loads prior
to splitting the stores so that we can safely handle the cases of
overlapping, including partially overlapping, loads and stores to the
same alloca. We also have to reconstitute the post-split slice
configuration so we can avoid iterating again over all the alloca uses
(the slow part of SROA). But we also have to ensure that when we split
up loads and stores to *other* allocas, we *do* re-iterate over them in
SROA to adapt to the more refined partitioning now required.
With this, I actually think we can fix a long-standing TODO in SROA
where I avoided splitting as many loads and stores as probably should be
splittable. This limitation historically mitigated the fallout of all
the bad things mentioned above. Now that we have more intelligent
handling, I plan to remove the FIXME and more aggressively mark integer
loads and stores as splittable. I'll do that in a follow-up patch to
help with bisecting any fallout.
The net result of this change should be more fine-grained and accurate
scalars being formed out of aggregates. At the very least, Clang now
generates perfect code for this high-level test case using
std::complex<float>:
#include <complex>
void g1(std::complex<float> &x, float a, float b) {
x += std::complex<float>(a, b);
}
void g2(std::complex<float> &x, float a, float b) {
x -= std::complex<float>(a, b);
}
This code isn't just hypothetical either. It was reduced out of the hot
inner loops of essentially every part of the Eigen math library when
using std::complex<float>. Those loops would consistently and
pervasively hop between the floating point unit and the integer unit due
to bit math extraction and insertion of floating point values that were
"stored" in a 64-bit integer register around the loop backedge.
So far, this change has passed a bootstrap and some other testing with no
issues. That doesn't mean there won't be any, so I'll be prepared to help
with any fallout. If you see performance swings in particular, please let
me know. I'm very curious what the overall impact of this change will be.
Stay tuned for the follow-up to also split more integer loads and stores.
This is the second installment of improvements to instruction selection for "bit
permutation" instruction sequences. r224318 added logic for instruction
selection for 32-bit bit permutation sequences, and this adds lowering for
64-bit sequences. The 64-bit sequences are more complicated than the 32-bit
ones because:
a) the 64-bit versions of the 32-bit rotate-and-mask instructions
work by replicating the lower 32 bits of the value-to-be-rotated into the
upper 32 bits -- and integrating this into the cost modeling for the various
bit group operations is non-trivial;
b) unlike the 32-bit instructions in 32-bit mode, the 64-bit rotate-and-mask
instructions cannot, in one instruction, specify the
mask starting index, the mask ending index, and the rotation factor. Also,
forming arbitrary 64-bit constants is more complicated than in 32-bit mode
because the number of instructions necessary is value dependent.
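As a rough, hypothetical illustration of the kind of source-level pattern
that lowers to such rotate-and-mask sequences (not a test case from this
patch):
#include <cstdint>
// Each term selects a contiguous run of bits and moves it to a new position;
// the selection logic tries to cover groups like these with rotate-and-mask
// (rldicl/rldimi-style) instructions.
uint64_t permute(uint64_t x) {
  return ((x & 0xFFull) << 40) | ((x >> 16) & 0xFFFF00ull) |
         (x & 0xF000000000000000ull);
}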
Plus, support for 'late masking' was added: it is sometimes more efficient to
treat the overall value as if it had no mandatory zero bits when planning the
bit-group insertions, and then mask them in at the very end. Unfortunately, as
the structure of the bit groups is different in the two cases, the more
feasible implementation technique was to generate both instruction sequences,
and then pick the shorter one.
And finally, we now generate reasonable code for i64 bswap:
Sanjay Patel [Wed, 31 Dec 2014 22:14:05 +0000 (22:14 +0000)]
InstCombine: fsub nsz 0, X ==> fsub nsz -0.0, X
Some day the backend may handle instruction-level fast math flags and make
this transform unnecessary, but it's still better practice to use the canonical
representation of fneg when possible (use a -0.0).
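For illustration (a hypothetical source-level example, not a test from this
patch), a negation written against positive zero reaches InstCombine as
'fsub nsz float 0.0, %x' when compiled with fast-math (which implies nsz)
and is rewritten to the -0.0 form:
float negate(float x) {
  // With nsz, the two forms can differ only in the sign of a zero result,
  // so we canonicalize to -0.0, the idiomatic spelling of fneg.
  return 0.0f - x;
}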
This is a partial fix for PR20870 ( http://llvm.org/bugs/show_bug.cgi?id=20870 ).
See also http://reviews.llvm.org/D6723.
Rafael Espindola [Wed, 31 Dec 2014 17:19:34 +0000 (17:19 +0000)]
Add r224985 back with a fix.
The issue was that AArch64 has additional restrictions on when local
relocations can be used. We have to take those into consideration when
deciding whether to put an L symbol in the symbol table or not.
Original message:
Remove doesSectionRequireSymbols.
In an assembly expression like
bar:
.long L0 + 1
the intended semantics is that bar will contain a pointer one byte past L0.
In sections that are merged by content (strings, 4 byte constants, etc), a
single position in the section doesn't give the linker enough information.
For example, it would not be able to tell that a relocation must point to
the end of a string, since that would look just like the start of the next.
The solution used in ELF is to use a relocation with a symbol if there is
a non-zero addend.
In MachO before this patch we would just keep all symbols in some sections.
This would miss some cases (only cstrings on x86_64 were implemented) and was
inefficient since most relocations have an addend of 0 and can be represented
without the symbol.
This patch implements the non-zero addend logic for MachO too.
Colin LeMahieu [Wed, 31 Dec 2014 15:57:38 +0000 (15:57 +0000)]
[Hexagon] Changing an llvm_unreachable to an assertion and returning 0. Relocations aren't implemented yet but we don't need to abort for this in release builds.
Craig Topper [Wed, 31 Dec 2014 07:07:31 +0000 (07:07 +0000)]
[X86] Fix disassembly of absolute moves to work correctly in 16 and 32-bit modes with all 4 combinations of OpSize and AdSize prefixes being present or not.
David Blaikie [Tue, 30 Dec 2014 23:33:55 +0000 (23:33 +0000)]
Fix a test case to not depend on asm comment syntax, so as to be portable
Too many different comment characters - rather than trying to account for
them all, disable the comments and just check for end-of-line instead.
David Blaikie [Tue, 30 Dec 2014 22:47:13 +0000 (22:47 +0000)]
DebugInfo: Omit is_stmt from line table entries on the same line.
GCC does this for non-zero discriminators and since GCC doesn't produce
column info, that was the only place it comes up there. For LLVM, since
we can emit discriminators and/or column info, it makes more sense to
invert the condition and just test for changes in line number.
This should resolve at least some of the GDB 7.5 test suite failures
created by recent Clang changes that increase the location fidelity
(which, since Clang defaults to including column info on Linux, created
a bunch of cases that confused GDB).
In theory we could do this better/differently by grouping actual source
statements together in a similar manner to the way lexical scopes are
handled, but given that GDB isn't really in a position to consume that (and
users are probably somewhat used to different lines being different
'statements'), this seems the safest and cheapest change. (I'm concerned
that doing this 'right' would bloat the debugloc data even further -
something Duncan's working hard to address.)
x86_64: Fix calls to __morestack under the large code model.
Under the large code model, we cannot assume that __morestack lives within
2^31 bytes of the call site, so we cannot use pc-relative addressing. We
cannot perform the call via a temporary register, as the rax register may
be used to store the static chain, and all other suitable registers may be
either callee-save or used for parameter passing. We cannot use the stack
at this point either because __morestack manipulates the stack directly.
To avoid these issues, perform an indirect call via a read-only memory
location containing the address.
This solution is not perfect, as it assumes that the .rodata section
is laid out within 2^31 bytes of each function body, but this seems to
be sufficient for JIT.
[COFF] Don't try to add quotes to already quoted linker directives
If a linker directive is already quoted, don't try to quote it again, otherwise it creates a mess.
This pops up in places like:
#pragma comment(linker,"\"/foo bar'\"")
Rafael Espindola [Tue, 30 Dec 2014 13:13:27 +0000 (13:13 +0000)]
Remove doesSectionRequireSymbols.
In an assembly expression like
bar:
.long L0 + 1
the intended semantics is that bar will contain a pointer one byte past L0.
In sections that are merged by content (strings, 4 byte constants, etc), a
single position in the section doesn't give the linker enough information.
For example, it would not be able to tell that a relocation must point to
the end of a string, since that would look just like the start of the next.
The solution used in ELF is to use a relocation with a symbol if there is
a non-zero addend.
In MachO before this patch we would just keep all symbols in some sections.
This would miss some cases (only cstrings on x86_64 were implemented) and was
inefficient since most relocations have an addend of 0 and can be represented
without the symbol.
This patch implements the non-zero addend logic for MachO too.
Rafael Espindola [Tue, 30 Dec 2014 05:09:17 +0000 (05:09 +0000)]
Simplify test a bit.
It looks like the original intent was to check which symbols were created.
With macho-dump the sections were being checked just to match which symbol
was in which section.
Philip Reames [Mon, 29 Dec 2014 23:55:33 +0000 (23:55 +0000)]
Semantic tests for memory invalidation at statepoints
These are simply a collection of tests intended to show that information about the contents of gc references in the heap is lost at a statepoint. I've tried to write them so that they don't disallow correct transformations, while still being fairly easy to understand.
Philip Reames [Mon, 29 Dec 2014 23:27:30 +0000 (23:27 +0000)]
Carry facts about nullness and undef across GC relocation
This change implements four basic optimizations:
If a relocated value isn't used, it doesn't need to be relocated.
If the value being relocated is null, relocation doesn't change that. (Technically, this might be collector specific. I don't know of one which it doesn't work for though.)
If the value being relocated is undef, the relocation is meaningless.
If the value being relocated was known nonnull, the relocated pointer also isn't null. (Since it points to the same source language object.)
Philip Reames [Mon, 29 Dec 2014 23:00:57 +0000 (23:00 +0000)]
Refine the notion of MayThrow in LICM to include a header specific version
In LICM, we have a check for an instruction which is guaranteed to execute and thus can't introduce any new faults if moved to the preheader. To handle a function which might unconditionally throw when first called, we check for any potentially throwing call in the loop and give up.
This is unfortunate when the potentially throwing condition is down a rare path. It prevents essentially all LICM of potentially faulting instructions where the faulting condition is checked outside the loop. It also greatly diminishes the utility of loop unswitching since control dependent instructions - which are now likely in the loop's header block - will not be lifted by subsequent LICM runs.
The current patch really only helps with non-memory instructions (i.e. divs, etc..) since the maythrow call down the rare path will be considered to alias an otherwise hoistable load. The one exception is that it does kick in for loads which are known to be invariant without regard to other possible stores, i.e. those marked with either !invariant.load metadata or TBAA 'is constant memory' metadata.
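As a hypothetical C++ sketch of the kind of loop this helps (the function
names are illustrative):
int threshold();        // may throw when first called
void report_overflow(); // may throw
void scale(int *a, int n, int x, int d) {
  for (int i = 0; i < n; ++i) {
    // The loop-invariant division executes on every iteration before the
    // potentially throwing calls, so with this change LICM can hoist it;
    // previously the mere presence of a call that might throw anywhere in
    // the loop blocked hoisting.
    int q = x / d;
    if (q > threshold())
      report_overflow();
    a[i] += q;
  }
}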
Chandler Carruth [Mon, 29 Dec 2014 22:50:30 +0000 (22:50 +0000)]
[go] Teach the go cmake build functions to funnel the include directories down into the cgo-setup variables of llvm-go.
Summary:
This in turn allows us to use #includes with cgo that rely on CMake
provided include directories which is particularly useful for handling
generated headers that aren't reasonable to put in an "installable"
location.
Philip Reames [Mon, 29 Dec 2014 22:46:21 +0000 (22:46 +0000)]
Loading from null is valid outside of addrspace 0
This patch fixes a miscompile where we were assuming that loading from null is undefined and thus we could assume it doesn't happen. This transform is perfectly legal in address space 0, but is not necessarily legal in other address spaces.
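A hedged sketch of the pattern using Clang's address-space attribute
(hypothetical, not taken from the original patch):
// In a non-zero address space, the null address may be perfectly valid to
// dereference, so this load must not be folded away as undefined behavior.
typedef int __attribute__((address_space(1))) as1_int;
int read_first(as1_int *p) { // p may legitimately be null here
  return *p;
}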
We really should introduce a hook to control this property on a per-target, per-address-space basis. We may be losing valuable optimizations in some address spaces by being too conservative.
Original patch by Thomas P Raoux (submitted to llvm-commits), tests and formatting fixes by me.
Chandler Carruth [Mon, 29 Dec 2014 19:36:05 +0000 (19:36 +0000)]
[py3] Teach the CMake build to reject Python versions older than 2.7.
Continue to require Python 2 however as recent experiments suggest
LLDB's build requires it.
Craig Topper [Mon, 29 Dec 2014 16:25:23 +0000 (16:25 +0000)]
[X86] Add the 0x82 instructions to the disassembler. They are identical in functionality to the 0x80 opcode instructions, but are not valid in 64-bit mode.
Chandler Carruth [Mon, 29 Dec 2014 11:58:17 +0000 (11:58 +0000)]
[multilib] Add support to the autoconf build to substitute
a CLANG_LIBDIR_SUFFIX variable. This is necessary before I can add
support for using that variable to CMake and the C++ code in Clang, and
the autoconf build system does all substitutions in the LLVM tree.
As mentioned before, I'm not planning to add actual multilib support to
the autoconf build, just enough stubs for it to keep playing nicely with
the CMake build once that one has support.
Chandler Carruth [Mon, 29 Dec 2014 11:16:25 +0000 (11:16 +0000)]
[cmake] Teach the llvm-config program to respect LLVM_LIBDIR_SUFFIX.
For this to work, we have to encode it in the build variables and use it
from llvm-config.cpp. I've tried to do this reasonably cleanly, but the
code for llvm-config.cpp is pretty strange. However, with this,
llvm-config stops giving the wrong answer when using LLVM_LIBDIR_SUFFIX.
Note that the configure+make build just sets this to an empty string as
that build system has zero support for multilib of any form. I'm not
planning to add support there either, but this should leave a path for
anyone that wanted to.
Chandler Carruth [Mon, 29 Dec 2014 11:16:23 +0000 (11:16 +0000)]
[cmake] Push LLVM_LIBDIR_SUFFIX through to the LLVMConfig.cmake file
that is used by other projects to build against LLVM. This will allow
subsequent patches to them to use LLVM_LIBDIR_SUFFIX, both when built as
part of the larger LLVM build and as part of a standalone build against
an installed set of LLVM libraries.
Chandler Carruth [Mon, 29 Dec 2014 11:16:19 +0000 (11:16 +0000)]
[cmake] Start making LLVM_LIBDIR_SUFFIX effective by adding it to
*numerous* places where it was missing in the CMake build. The primary
change here is that the suffix is now actually used for all of the lib
directories in the LLVM project's CMake. The various subprojects still
need similar treatment.
This is the first of a series of commits to try to make LLVM's cmake
effective in a multilib Linux installation. I don't think many people
are seriously using this variable so I'm hoping the fallout will be
minimal. A somewhat unfortunate consequence of the nature of these
commits is that until I land all of them, they will in part make the
brokenness of our multilib support more apparent. At the end, things
should actually work.
Keno Fischer [Sun, 28 Dec 2014 15:20:57 +0000 (15:20 +0000)]
[X86][ISel] Fix a regression I introduced in r224884
In the else case, ResultReg was not checked for validity.
To my surprise, this case was not hit in any of the
existing test cases. This commit includes a new test case
that tests this path.
Also drop the `target triple` declaration from the
original test as suggested by H.J. Lu, because
apparently with it the test won't be run on Linux.
Andrea Di Biagio [Sun, 28 Dec 2014 11:07:35 +0000 (11:07 +0000)]
[CodeGenPrepare] Teach when it is profitable to speculate calls to @llvm.cttz/ctlz.
If the control flow is modelling an if-statement where the only instruction in
the 'then' basic block (excluding the terminator) is a call to cttz/ctlz,
CodeGenPrepare can try to speculate the cttz/ctlz call and simplify the control
flow graph.
Consider, for example, control flow where basic block %then.bb is taken if
value %val is not zero, and where the phi node in %end.bb would propagate
the size in bits of %val only if %val is equal to zero (a source-level
sketch of this pattern follows).
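A hypothetical C++ equivalent of that control flow (the builtin call is the
only instruction in the 'then' block, and the join point supplies the bit
width when the value is zero):
unsigned trailing_zeros(unsigned val) {
  // The guarded block contains only the cttz; when val is zero the result
  // is 32, the size of val in bits.
  return val != 0 ? __builtin_ctz(val) : 32;
}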
With this patch, CodeGenPrepare will try to hoist the call to cttz from %then.bb
into basic block %entry only if cttz is cheap to speculate for the target.
Added two new hooks in TargetLowering.h to let targets customize the behavior
(i.e. decide whether it is cheap or not to speculate calls to cttz/ctlz). The
two new methods are 'isCheapToSpeculateCtlz' and 'isCheapToSpeculateCttz'.
By default, both methods return 'false'.
On X86, method 'isCheapToSpeculateCtlz' returns true only if the target has
LZCNT. Method 'isCheapToSpeculateCttz' only returns true if the target has BMI.
Masked vector intrinsics are a part of common LLVM IR, but they are really supported only on AVX2 and AVX-512 targets. I added code that translates masked intrinsics for all other targets. The masked vector intrinsic is converted to a chain of scalar operations inside conditional basic blocks.
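Conceptually (a hypothetical sketch, not the actual lowering code), a masked
load of VF elements is expanded along these lines:
// Per-element expansion of a masked load: each element is loaded only inside
// a conditional block guarded by its mask bit; unmasked lanes keep the
// pass-through value.
void masked_load_scalarized(const float *ptr, const bool *mask,
                            const float *passthru, float *result, int VF) {
  for (int i = 0; i < VF; ++i) {
    if (mask[i])
      result[i] = ptr[i];
    else
      result[i] = passthru[i];
  }
}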
Craig Topper [Sat, 27 Dec 2014 20:08:45 +0000 (20:08 +0000)]
[x86] Prevent instruction selection of AVX512 cmp.ps/pd/ss/sd intrinsics with illegal immediates. Correctly this time. I did the wrong patterns the first time.
David Majnemer [Sat, 27 Dec 2014 19:45:38 +0000 (19:45 +0000)]
PowerPC: CTR shouldn't fire if a TLS call is in the loop
Determining the address of a TLS variable results in a function call in
certain TLS models. This means that a simple ICmpInst might actually
result in invalidating the CTR register.
In such cases, do not attempt to rely on the CTR register for loop
optimization purposes.
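A hypothetical illustration of the problematic pattern (names are
illustrative):
thread_local int limit; // under some TLS models, taking this variable's
                        // address requires a call to __tls_get_addr
int count_below(const int *a, int n) {
  int c = 0;
  for (int i = 0; i < n; ++i)
    if (a[i] < limit) // the comparison hides a function call that clobbers
      ++c;            // the CTR register, so a CTR-based loop would be invalid
  return c;
}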
Craig Topper [Sat, 27 Dec 2014 18:11:00 +0000 (18:11 +0000)]
[x86] Assert on invalid immediates in the instruction printer for cmp.ps/pd/ss/sd instead of truncating the immediate. The assembly parser and instruction selection shouldn't generate invalid immediates.
Craig Topper [Sat, 27 Dec 2014 18:10:56 +0000 (18:10 +0000)]
[x86] Prevent llvm.x86.cmp.ps/pd/ss/sd from being selected with bad immediates. The frontend now checks this when the builtin is used. This will allow the instruction printer to not have to deal with invalid immediates on these instructions.
Bools (that are the result of direct truncs) are lowered as whatever
the argument to the trunc was, followed by an 'and 1', causing the part of
the MBB responsible for this argument to look something like this:
but remember to (at the end of isel) replace vreg7 by vreg15. Now for
the bug. In fast isel lowering, we mistakenly mark vreg8 as the result
of the load instead of the trunc. This adds a fixup to have
vreg8 replaced by whatever the result of the load is as well, so
we end up with