James Molloy [Wed, 28 Oct 2015 10:41:29 +0000 (10:41 +0000)]
[GlobalsAA] An indirect global that is initialized is not fair game
When checking if an indirect global (a global with pointer type) is only assigned by allocation functions, first check if the global is itself initialized. If it is, it's not only assigned by allocation functions.
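A minimal sketch of the kind of guard described, using the LLVM GlobalVariable API; this is an illustration, not the literal code from the patch:

  #include "llvm/IR/GlobalVariable.h"
  #include "llvm/IR/Constant.h"

  // Bail out of the "indirect global is only assigned by allocation
  // functions" analysis when the global carries a (non-null) initializer:
  // it already points somewhere other than allocated memory.
  static bool mayBeAllocOnlyIndirectGlobal(const llvm::GlobalVariable &GV) {
    if (GV.hasInitializer() && !GV.getInitializer()->isNullValue())
      return false;
    return true; // further checks on the stores to GV would follow here
  }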
This fixes PR25309. Thanks to David Majnemer for reducing the test case!
Chen Li [Wed, 28 Oct 2015 04:45:47 +0000 (04:45 +0000)]
[IndVarSimplify] Rewrite loop exit values with their initial values from loop preheader
Summary:
This patch adds support to check if a loop has loop-invariant conditions which lead to loop exits. If so, we know that if the exit path is taken, it is taken on the first loop iteration. If an induction variable used on that exit path has not been updated, it will keep its initial value passed in from the loop preheader. We can therefore rewrite the exit value with its initial value. This will help remove PHIs created by LCSSA and enable other optimizations like loop unswitch.
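A small hypothetical C-style example of the situation described (not the patch's test case):

  // If 'cond' is loop-invariant and guards an exit, that exit can only be
  // taken on the first iteration, so 'i' still holds its initial value there
  // and the LCSSA exit value of 'i' on that path can be rewritten to 'start'.
  int firstIterationExit(int *a, int n, int start, int cond) {
    int i = start;
    for (; i < n; ++i) {
      if (cond)   // loop-invariant exit condition
        break;    // on this path i == start
      a[i] = 0;
    }
    return i;
  }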
Change InstrProfReaderIndex from a typedef into a wrapper
class with helper methods. This makes the indexed profile
reader code more readable. It also hides the implementation
details of the index format and makes it flexible enough to
support different (or more than one) formats in the future.
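A rough sketch of the typedef-to-wrapper-class pattern being described; all names and members here are hypothetical, not the real InstrProf types:

  #include <cstdint>
  #include <memory>
  #include <string>
  #include <vector>

  struct OnDiskHashTable;               // stand-in for the format-specific index

  class ProfileIndex {                  // wrapper hiding the on-disk format
    std::unique_ptr<OnDiskHashTable> Table;
  public:
    explicit ProfileIndex(std::unique_ptr<OnDiskHashTable> T)
        : Table(std::move(T)) {}
    // Helper methods keep callers independent of the index layout, so a
    // different (or an additional) format can be supported later.
    bool getRecords(const std::string &FuncName,
                    std::vector<uint64_t> &Counts) const;
  };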
Hal Finkel [Wed, 28 Oct 2015 03:26:45 +0000 (03:26 +0000)]
[PowerPC] Replace cntlz[.] with cntlzw[.]
cntlz is the old POWER mnemonic. cntlzw is the PowerPC mnemonic.
This change fixes an issue when using -no-integrated-as: the opcode cntlz is
not recognized by gas.
Alias the POWER mnemonic cntlz[.] to the PowerPC mnemonic cntlzw[.].
This is done because the POWER cntlz mnemonic has been used by LLVM for
a very long time, and we need to make sure that assembly programs
that use cntlz[.] do not break with this change.
Change PowerPC tests to reflect the insn change from cntlz to cntlzw.
Add assembly test to verify cntlz[.] is encoded correctly.
Sanjoy Das [Wed, 28 Oct 2015 03:20:10 +0000 (03:20 +0000)]
[SelectionDAG] Don't inspect !range metadata for extended loads
Summary:
Don't call `computeKnownBitsFromRangeMetadata` for extended loads --
this can cause a mismatch between the width of the !range metadata and
the width of the APInt's accumulating `KnownZero` (and `KnownOne` in the
future). This isn't a problem now, but will be after a future change.
Note: this can be made more aggressive in the future.
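A small illustration (not the patch itself) of the width mismatch being avoided, assuming an i8 load zero-extended to i32:

  #include "llvm/ADT/APInt.h"

  void widthMismatchExample() {
    llvm::APInt KnownZero(32, 0);   // sized to the extended result type (i32)
    llvm::APInt FromRange(8, 0xF0); // derived from !range on the i8 memory type
    // KnownZero &= FromRange;      // would assert: APInt bit widths differ
  }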
James Y Knight [Tue, 27 Oct 2015 23:09:03 +0000 (23:09 +0000)]
Make the SelectionDAG graph printer use SDNode::PersistentId labels.
r248010 changed the -debug output to use short ids, but did not
similarly modify the graph printer. Change to be consistent, for ease of
cross-reference.
Vedant Kumar [Tue, 27 Oct 2015 22:10:17 +0000 (22:10 +0000)]
[Bitcode] Fix accidental syntax errors in compatibility tests
We used automated tools to update our IR to its current syntax in commit 21f77df7 (r247378). While it correctly updated the CHECK lines in our
compatibility tests, the IR should have remained untouched. This commit
fixes the syntax errors.
Vedant Kumar [Tue, 27 Oct 2015 21:17:06 +0000 (21:17 +0000)]
[IR] Limit bits used for CallingConv::ID, update tests
Use 10 bits to represent calling convention IDs instead of 13, and
update the bitcode compatibility tests accordingly. We now error out in
the bitcode reader when we see bad calling convention IDs.
Hal Finkel [Tue, 27 Oct 2015 20:37:04 +0000 (20:37 +0000)]
[AliasSetTracker] Use mod/ref information for UnknownInstr
AliasSetTracker does not need to convert the access mode to ModRefAccess if the
newly visited UnknownInst has only 'Ref' mod/ref info with respect to the existing
pointers in the sets.
David Majnemer [Tue, 27 Oct 2015 19:48:28 +0000 (19:48 +0000)]
[ScalarEvolutionExpander] PHI on a catchpad can be used on both edges
A PHI on a catchpad might be used by both edges out of the catchpad,
feeding back into a loop. In this case, just use the insertion point.
Anything more clever would require new basic blocks or PHI placement.
Jun Bum Lim [Tue, 27 Oct 2015 19:16:03 +0000 (19:16 +0000)]
[AArch64]Merge halfword loads into a 32-bit load
This recommits r250719, which was reverted because it caused a failure in
SPEC2000.gcc due to an incorrect insert point for the new wider load.
Convert two halfword loads into a single 32-bit word load with bitfield extract
instructions. For example:
ldrh w0, [x2]
ldrh w1, [x2, #2]
becomes
ldr w0, [x2]
ubfx w1, w0, #16, #16
and w0, w0, #ffff
Cong Hou [Tue, 27 Oct 2015 17:59:36 +0000 (17:59 +0000)]
Create a new interface addSuccessorWithoutWeight(MBB*) in MBB to add successors when optimization is disabled.
When optimization is disabled, the edge weights stored in MBB won't be used, so we don't have to store them. Currently this is done by adding successors with a default weight of 0, and if all successors have default weights, the weight list will be empty. But an empty weight list doesn't necessarily mean optimization is disabled (as is stated several times in MachineBasicBlock.cpp): it may also mean that all successors simply have default weights.
We should discourage using default weights when adding successors, because it is very easy for users to forget to set the correct edge weights and instead rely on the defaults (one exception is when the MBB has only one successor). In order to detect such usages, it is better to differentiate the use of default weights from the case where optimization is disabled.
In this patch, a new interface addSuccessorWithoutWeight(MBB*) is created for use when optimization is disabled. In this case, MBB will try to maintain an empty weight list, but it cannot guarantee this, as many uses of addSuccessor() do not check whether optimization is disabled. It can guarantee, however, that if optimization is enabled, the weight list always has the same size as the successor list.
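A brief usage sketch of the split being described; the helper and its flag are hypothetical, only the two MachineBasicBlock calls come from the patch:

  #include <cstdint>
  #include "llvm/CodeGen/MachineBasicBlock.h"

  // Record an explicit weight only when optimizing; otherwise keep the
  // weight list empty by using the new weightless interface.
  static void addEdge(llvm::MachineBasicBlock *MBB,
                      llvm::MachineBasicBlock *Succ,
                      uint32_t Weight, bool OptimizationEnabled) {
    if (!OptimizationEnabled)
      MBB->addSuccessorWithoutWeight(Succ);
    else
      MBB->addSuccessor(Succ, Weight);
  }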
Charlie Turner [Tue, 27 Oct 2015 17:59:03 +0000 (17:59 +0000)]
[SLP] Be more aggressive about reduction width selection.
Summary:
This change could be way off-piste; I'm looking for any feedback on whether it's an acceptable approach.
It never seems to be a problem to gobble up as many reduction values as can be found and then attempt to reduce the resulting tree. Some of the workloads I'm looking at have been aggressively unrolled by hand, and by selecting reduction widths that are not constrained by a vector register size, it becomes possible to vectorize them profitably. My test case shows such an unrolling, which SLP did not vectorize (on either ARM or X86) before this patch, but does with it.
I measure no significant compile time impact of this change when combined with D13949 and D14063. There are also no significant performance regressions on ARM/AArch64 in SPEC or LNT.
The more principled approach I thought of was to generate several candidate trees and use the cost model to pick the cheapest one. That seemed like quite a big design change (the algorithms seem very much one-shot), and would likely be costly in compile time. This approach seemed to do the job at very little cost, but I'm worried I've misunderstood something!
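For illustration, a hypothetical hand-unrolled reduction of the kind described, whose natural width exceeds a 4 x i32 vector register:

  // With the width selection relaxed, SLP can gobble up all eight values,
  // build the reduction tree, and vectorize it profitably.
  int sum8(const int *a) {
    return a[0] + a[1] + a[2] + a[3] + a[4] + a[5] + a[6] + a[7];
  }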
Charlie Turner [Tue, 27 Oct 2015 17:54:16 +0000 (17:54 +0000)]
[SLP] Try a bit harder to find reduction PHIs
Summary:
Currently, when the SLP vectorizer considers whether a PHI is part of a reduction, it dismisses PHIs whose incoming blocks are not the same as the block containing the PHI. For the patterns I'm looking at, extending this rule to also allow PHIs whose incoming block is the containing loop's latch lets me vectorize certain workloads.
There is no significant compile-time impact, and combined with D13949, no performance improvement measured in ARM/AArch64 in any of SPEC2000, SPEC2006 or LNT.
Charlie Turner [Tue, 27 Oct 2015 17:49:11 +0000 (17:49 +0000)]
[SLP] Treat SelectInsts as reduction values.
Summary:
Certain workloads, in particular sum-of-absdiff loops, can be vectorized by SLP if it is allowed to treat select instructions as reduction values.
The test case is a bit awkward. The AArch64 cost model needs some tuning to not be so pessimistic about selects. I've had to tweak the SLP threshold here.
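A typical sum-of-absdiff kernel, shown only to illustrate the pattern (this is not the patch's test case):

  // The select on each |a[i] - b[i]| is the reduction value; treating
  // selects as reduction values lets SLP vectorize loops like this one.
  int sad16(const unsigned char *a, const unsigned char *b) {
    int sum = 0;
    for (int i = 0; i < 16; ++i) {
      int d = (int)a[i] - (int)b[i];
      sum += d > 0 ? d : -d;   // select feeding the reduction
    }
    return sum;
  }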
Diego Novillo [Tue, 27 Oct 2015 17:37:00 +0000 (17:37 +0000)]
Fix SamplePGO segfault when debug info is missing.
When emitting a remark for a conditional branch annotation, the remark
uses the line location information of the conditional branch in the
message. In some cases, that information is unavailable and the
optimization would segfault. I'm still not sure whether this is a bug or
WAI, but the optimizer should not die because of this.
Ed Schouten [Tue, 27 Oct 2015 16:37:49 +0000 (16:37 +0000)]
Prefer ranlib mode over ar mode.
For CloudABI's toolchain I have a symlink that goes from <target>-ar and
<target>-ranlib to LLVM's ar binary, to mimic GNU Binutils' naming
scheme. The problem is that if we're targeting ARM64, the name of the
ranlib executable is aarch64-unknown-cloudabi-ranlib. This already
contains the string "ar".
Let's move the "ranlib" test above the "ar" test. It's not that likely
that we're going to see operating systems or hardware architectures that
are called "ranlib".
Reviewed by: rafael
Differential Revision: http://reviews.llvm.org/D14123
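A self-contained sketch of the ordering point; the function and enum names are hypothetical:

  #include <string>

  enum class ToolMode { Ar, Ranlib };

  // Test for "ranlib" before "ar": a name like
  // "aarch64-unknown-cloudabi-ranlib" already contains the substring "ar".
  ToolMode detectMode(const std::string &ToolName) {
    if (ToolName.find("ranlib") != std::string::npos)
      return ToolMode::Ranlib;
    return ToolMode::Ar;
  }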
Chris Bieneman [Tue, 27 Oct 2015 16:02:04 +0000 (16:02 +0000)]
[CMake] Get rid of LLVM_DYLIB_EXPORT_ALL, and make it the default, add libLLVM-C on darwin to cover the C API needs.
Summary:
We've had a lot of discussion in the past about the meaningful and useful default behaviors for the llvm-shlib tool. The original implementation was heavily geared toward Apple's use, and I think that was wrong. This patch seeks to correct that.
I've removed the LLVM_DYLIB_EXPORT_ALL variable and made libLLVM export everything by default.
I've also added a new target that is only built on Darwin for libLLVM-C as a library that re-exports the LLVM-C API. This library is not built on Linux because ELF doesn't support re-exporting libraries in the same way MachO does.
Charlie Turner [Tue, 27 Oct 2015 10:25:20 +0000 (10:25 +0000)]
[ARM] Expand ROTL and ROTR of vector value types
Summary: After D13851 landed, we saw backend crashes when compiling the reduced test case included in this patch. The right fix seems to be to allow these vector types for expansion in instruction selection.
We want to insert no-op casts as close as possible to the def. This is
tricky when the cast is of a PHI node and the BasicBlocks between the
def and the use cannot hold any instructions. Iteratively walk EH pads
until we hit a non-EH pad.
Craig Topper [Tue, 27 Oct 2015 04:14:24 +0000 (04:14 +0000)]
Convert cost table lookup functions to return a pointer to the entry or nullptr instead of the index.
This avoids mentioning the table name an extra time and allows the lookup to be done directly in the ifs by relying on the bool conversion of the pointer.
While there, make use of ArrayRef and std::find_if.
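A simplified sketch of the new lookup shape, with a toy entry type rather than the actual LLVM cost tables:

  #include <algorithm>
  #include <cstddef>
  #include <iterator>

  struct CostTblEntry { int ISD; int Type; unsigned Cost; }; // simplified

  // Return a pointer to the matching entry, or nullptr, instead of an index.
  template <std::size_t N>
  const CostTblEntry *CostTableLookup(const CostTblEntry (&Tbl)[N],
                                      int ISD, int Ty) {
    const CostTblEntry *E =
        std::find_if(std::begin(Tbl), std::end(Tbl),
                     [=](const CostTblEntry &Entry) {
                       return Entry.ISD == ISD && Entry.Type == Ty;
                     });
    return E != std::end(Tbl) ? E : nullptr;
  }

  // The pointer's bool conversion lets the lookup sit directly in an 'if':
  //   if (const CostTblEntry *Entry = CostTableLookup(Tbl, Op, VT))
  //     return Entry->Cost;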
Chandler Carruth [Tue, 27 Oct 2015 01:41:43 +0000 (01:41 +0000)]
[function-attrs] Refactor code to handle shorter code with early exits.
No functionality changed here, but the indentation is substantially
reduced and IMO the code is much easier to read. I've also added some
helpful comments.
This is just a clean-up I wrote while studying the code, and that has
been in my backlog for a while.
If we have an operand to a bitwise logic op that's already in
an XMM register and the result is going to be sent to an XMM
register, then use an SSE logic op to avoid moves between the
integer and vector register files.
Related commits:
http://reviews.llvm.org/rL248395
http://reviews.llvm.org/rL248399
http://reviews.llvm.org/rL248404
http://reviews.llvm.org/rL248409
http://reviews.llvm.org/rL248415
This should solve PR22428:
https://llvm.org/bugs/show_bug.cgi?id=22428
Steve King [Tue, 27 Oct 2015 00:14:06 +0000 (00:14 +0000)]
Fix llc crash processing S/UREM for -Oz builds caused by rL250825.
When taking the remainder of a value divided by a constant, visitREM()
attempts to convert the REM to a longer but faster sequence of instructions.
This conversion calls combine() on a speculative DIV instruction. Commit
rL250825 may cause this combine() to return a DIVREM, corrupting nearby nodes.
Flow eventually hits unreachable().
This patch adds a test case and a check to prevent visitREM() from trying
to convert the REM instruction in cases where a DIVREM is possible.
See http://reviews.llvm.org/D14035
Chandler Carruth [Mon, 26 Oct 2015 22:54:53 +0000 (22:54 +0000)]
[x86] Make the vselect-minmax test 2x to 3x faster by deleting all the
instructions that aren't relevant for instruction selection of vector
min and max.
Alexey Samsonov [Mon, 26 Oct 2015 22:34:56 +0000 (22:34 +0000)]
[LLVMSymbolize] Don't use LLVMSymbolizer::Options in ModuleInfo. NFC.
LLVMSymbolizer::Options is mostly used in LLVMSymbolizer class anyway.
Let's keep its usage restricted to that class, especially given that
it's worth moving ModuleInfo to a different header, independent of
the symbolizer class.
Rui Ueyama [Mon, 26 Oct 2015 19:58:29 +0000 (19:58 +0000)]
Optimize StringTableBuilder.
This is a patch to improve StringTableBuilder's performance. That class'
finalize function is very hot particularly in LLD because the function
tail-merges strings in string tables or SHF_MERGE sections.
A generic std::sort-style sort is not efficient for sorting strings.
The function implemented in this patch seems to be more efficient.
Here's a benchmark of LLD to link Clang with or without this patch.
The numbers are medians of 50 runs.
-O0
real 0m0.455s
real 0m0.430s (5.5% faster)
-O3
real 0m0.487s
real 0m0.452s (7.2% faster)
Since that is a benchmark of the whole linker, the speedup of
StringTableBuilder itself is much more than that.
Igor Laevsky [Mon, 26 Oct 2015 19:06:01 +0000 (19:06 +0000)]
[RS4GC] Strip noalias attribute after statepoint rewrite
We should remove the noalias attribute along with the dereferenceable and dereferenceable_or_null
attributes because a statepoint could potentially touch the entire heap, including noalias objects.
Diego Novillo [Mon, 26 Oct 2015 18:52:53 +0000 (18:52 +0000)]
SamplePGO - Add optimization reports.
This adds a couple of optimization remarks to the SamplePGO
transformation: one when it decides to inline a hot function (to mimic the
inline stack and repeat useful inline decisions in the original build),
and another reporting branch destinations. For instance, given the code
fragment:
6 if (i < 1000)
7 sum -= i;
8 else
9 sum += -i * rand();
If the 'else' branch is taken most of the time, building this code with
-Rpass=sample-profile will produce:
a.cc:9:14: remark: most popular destination for conditional branches at small.cc:6:9 [-Rpass=sample-profile]
sum += -i * rand();
^
Mehdi Amini [Mon, 26 Oct 2015 18:37:00 +0000 (18:37 +0000)]
Add an (optional) identification block in the bitcode
Processing bitcode from a different LLVM version can lead to
unexpected behavior. The LLVM project guarantees autoupdating
bitcode from a previous minor revision for the same major, but
can't make any promises when reading bitcode generated from
either a non-released LLVM, a vendor toolchain, or a "future"
LLVM release. This patch aims at being more user-friendly and
allows a bitcode producer to emit an optional block at the
beginning of the bitcode that contains an opaque string
intended to describe the bitcode producer information. The
bitcode reader will dump this information alongside any error it
reports.
The optional block also includes an "epoch" number, monotonically
increasing when incompatible changes are made to the bitcode. The
reader will reject bitcode whose epoch is different from the one
expected.
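A rough sketch of the reader-side behavior being described; the constant, function, and error handling are hypothetical:

  #include <cstdint>
  #include <stdexcept>
  #include <string>

  const uint32_t ExpectedEpoch = 0; // bumped on incompatible bitcode changes

  // Reject bitcode whose epoch differs from the expected one, and surface
  // the producer string alongside the error.
  void checkIdentificationBlock(uint32_t Epoch, const std::string &Producer) {
    if (Epoch != ExpectedEpoch)
      throw std::runtime_error("incompatible bitcode epoch; produced by: " + Producer);
  }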
Evgeniy Stepanov [Mon, 26 Oct 2015 18:28:25 +0000 (18:28 +0000)]
[safestack] Fast access to the unsafe stack pointer on AArch64/Android.
Android libc provides a fixed TLS slot for the unsafe stack pointer,
and this change implements direct access to that slot on AArch64 via
__builtin_thread_pointer() + offset.
This change also moves more code into TargetLowering and its
target-specific subclasses to get rid of target-specific codegen
in SafeStackPass.
This change does not touch the ARM backend because ARM lowers
__builtin_thread_pointer as aeabi_read_tp, which is not available
on Android.
The previous iteration of this change was reverted in r250461. This
version leaves the generic, compiler-rt based implementation in
SafeStack.cpp instead of moving it to TargetLoweringBase in order to
allow testing without a TargetMachine.
We were previously overflowing a 32-bit multiply operation when emitting large
(>512MB) bitcode files, resulting in corrupted bitcode. Fix by extending
one of the operands to 64 bits.
There are a few other 32-bit integer types in this code that seem like they
also ought to be extended to 64 bits; this will be done separately.
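A minimal sketch of the overflow pattern and the fix; the helper name is hypothetical:

  #include <cstdint>

  // For a >512MB bitcode file the 32-bit word index exceeds 2^27, so a pure
  // 32-bit multiply by 32 (bits per word) wraps. Widen one operand first.
  uint64_t wordBitOffset(uint32_t WordIndex) {
    return static_cast<uint64_t>(WordIndex) * 32;
  }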
ARM/ELF: Better codegen for global variable addresses.
In PIC mode we were previously computing global variable addresses (or GOT
entry addresses) by adding the PC, the PC-relative GOT displacement and
the GOT-relative symbol/GOT entry displacement. Because the latter two
displacements are fixed, we ended up performing one more addition than
necessary.
This change causes us to compute addresses using a single PC-relative
displacement, resulting in a shorter code sequence. This reduces code size
by about 4% in a recent build of Chromium for Android.
As a result of this change we no longer need to compute the GOT base address
in the ARM backend, which allows us to remove the Global Base Reg pass and
SDAG lowering for the GOT.
We also now no longer use the GOT when addressing a symbol which is known
to be defined in the same linkage unit. Specifically, the symbol must have
either hidden visibility or a strong definition in the current module in
order to not use the GOT.
This is a change from the previous behaviour where we would use the GOT to
address externally visible symbols defined in the same module. I think the
only cases where this could matter are cases involving symbol interposition,
but we don't really support that well anyway.
Cong Hou [Mon, 26 Oct 2015 18:00:17 +0000 (18:00 +0000)]
Check the case where the numerator and denominator are both zero when getting edge probabilities in BPI, and return 100% in this case.
This issue is triggered in PGO mode when bootstrapping LLVM. It seems that edge weights read from profile data are not guaranteed to be greater than zero.
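A minimal sketch of the guard; this is a simplification, not the BranchProbabilityInfo code itself:

  #include <cstdint>

  // When both counts read from the profile are zero, return 100% instead of
  // performing a zero-by-zero division.
  double edgeProbability(uint64_t Numerator, uint64_t Denominator) {
    if (Numerator == 0 && Denominator == 0)
      return 1.0; // 100%
    return static_cast<double>(Numerator) / static_cast<double>(Denominator);
  }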
James Molloy [Mon, 26 Oct 2015 14:10:46 +0000 (14:10 +0000)]
[ValueTracking] Extend r251146 to catch a fairly common case
Even though we may not know the value of the shifter operand, it's possible we know the shifter operand is non-zero. This can allow us to infer more known bits - for example:
%1 = load %p !range {1, 5}
%2 = shl %q, %1
We don't know %1, but we do know that it is nonzero so %2[0] is known zero, and importantly %2 is known non-zero.
Calling isKnownNonZero is nontrivially expensive so use an Optional to run it lazily and cache its result.
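A standalone sketch of the lazy caching pattern mentioned; the names are hypothetical and std::optional stands in for LLVM's Optional:

  #include <optional>

  bool expensiveIsKnownNonZero(const void *ShiftAmt); // stand-in for the real query

  void useKnownBits(const void *ShiftAmt) {
    std::optional<bool> Known; // empty until the query is first needed
    auto shifterNonZero = [&]() {
      if (!Known)
        Known = expensiveIsKnownNonZero(ShiftAmt); // runs at most once
      return *Known;
    };
    if (shifterNonZero()) { /* low bit of the shift result is known zero */ }
    if (shifterNonZero()) { /* cached result reused, no second query */ }
  }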