Simon Pilgrim [Sun, 23 Oct 2016 16:49:04 +0000 (16:49 +0000)]
[X86][SSE] Add SSE41/AVX1 costs for vector shifts.
We were defaulting to SSE2 costs which weren't taking into account the availability of PBLENDW/PBLENDVB to improve merging of per-element shift results.
Brian Gesiak [Sat, 22 Oct 2016 17:27:31 +0000 (17:27 +0000)]
[lit] Add more testing instructions to README
Summary:
r283710 introduced two regressions, one to llvm-lit, and the other to
lit executables that were installed via setuptools. Add instructions on
how to test for these regressions in the future.
Craig Topper [Sat, 22 Oct 2016 06:51:52 +0000 (06:51 +0000)]
[X86] Add support for lowering v4i64 and v8i64 shuffles directly to PALIGNR. I think shuffle combine can figure it out later, but we should try to get it right up front.
Craig Topper [Sat, 22 Oct 2016 06:51:44 +0000 (06:51 +0000)]
[X86] Remove 128-bit lane handling from the main loop of matchVectorShuffleAsByteRotate. Instead, check for is128BitLaneRepeatedShuffleMask before the loop and just loop over the repeated mask.
I plan to use the loop to support VALIGND/Q shuffles so this makes it easier to reuse.
Gerolf Hoflehner [Sat, 22 Oct 2016 02:41:39 +0000 (02:41 +0000)]
[BasicAA] Fix - missed alias in GEP expressions
In BasicAA, GEP operand values get adjusted ("wrapped around") based on the
pointer size. Otherwise, in non-64-bit modes, AA could report false negatives.
However, a wrap-around is valid only for a fully evaluated expression.
It had been introduced to fix an alias problem in
http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20160118/326163.html.
This commit restricts the wrap-around to constant GEP operands only, where the
value is known at compile time.
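To illustrate the operation being restricted, here is a minimal C++ sketch (a hypothetical helper, not BasicAA's actual code) of wrapping an offset to the pointer width; this is only sound once the offset's value is fully known:

    #include <cstdint>

    // Wrap a fully evaluated, compile-time-known offset to the pointer
    // width: shift the high bits out, then arithmetic-shift back to
    // sign-extend, emulating a narrower address space's wrap-around.
    int64_t wrapToPointerWidth(int64_t Offset, unsigned PointerBits) {
      if (PointerBits >= 64)
        return Offset;
      unsigned Shift = 64 - PointerBits;
      return static_cast<int64_t>(static_cast<uint64_t>(Offset) << Shift) >>
             Shift;
    }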
David L. Jones [Fri, 21 Oct 2016 23:30:39 +0000 (23:30 +0000)]
Fix map insertion that is elided in release build.
The assert() macro doesn't evaluate its argument in release builds, so using
it to check cache invariants requires that the insertion happen outside of the
assert() statement. This change does that, and also makes sure to return the
actual map contents.
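The underlying C++ pitfall, in a hedged sketch (names hypothetical, not the patched code): with NDEBUG defined, assert() compiles its argument away entirely, taking the insertion with it.

    #include <cassert>
    #include <map>
    #include <string>

    std::map<std::string, int> Cache;

    int cacheBuggy(const std::string &Key, int Value) {
      // Wrong: in release builds (NDEBUG) this whole line vanishes, so
      // nothing is ever inserted into the map.
      assert(Cache.insert({Key, Value}).second && "already cached");
      return Cache[Key]; // release builds read a default-constructed 0
    }

    int cacheFixed(const std::string &Key, int Value) {
      // Right: insert unconditionally, assert on the result, and return
      // the actual map contents.
      auto Result = Cache.insert({Key, Value});
      assert(Result.second && "already cached");
      return Result.first->second;
    }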
Justin Lebar [Fri, 21 Oct 2016 22:10:23 +0000 (22:10 +0000)]
[ADT] Don't rely on string literals not being convertible to non-const char* in CachedHashString.
The build was breaking on some platforms because we assumed that
CachedHashString("foo") would match the CachedHashString(StringRef)
constructor rather than the CachedHashString(char*) constructor.
To fix this, provide a CachedHashString(const char*) constructor, and
add a dummy argument to the old CachedHashString(char*) constructor.
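The disambiguation trick in a plain C++ sketch (the class shape and tag name are illustrative, not the real CachedHashString definition): the extra tag parameter means no string literal or stray pointer can select the char* overload by accident.

    struct Sketch {
      const char *Str;

      // String literals ("foo" is a const char[4]) and ordinary C
      // strings select this constructor.
      Sketch(const char *S) : Str(S) {}

      // Internal-use constructor: the dummy tag argument keeps any
      // accidental char* from landing here, whatever a platform's
      // literal-conversion quirks are.
      struct RawPointerTag {};
      Sketch(char *P, RawPointerTag) : Str(P) {}
    };

    int main() {
      Sketch A("foo");                        // matches const char*
      char Buf[] = "bar";
      Sketch B(Buf, Sketch::RawPointerTag{}); // explicit opt-in
    }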
Justin Lebar [Fri, 21 Oct 2016 21:45:01 +0000 (21:45 +0000)]
Switch SmallSetVector to use DenseSet when it overflows its inline space.
Summary:
SetVector already used DenseSet, but SmallSetVector used std::set. This
leads to surprising performance differences. Moreover, it means that
the sets of key types accepted by SetVector and SmallSetVector are
quite different!
In order to make this change, we had to convert some callsites that used
SmallSetVector<std::string, N> to use SmallSetVector<CachedHashString, N>
instead.
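A hedged sketch of what such a callsite conversion looks like (assuming the ADT headers of this era): the overflow set is now a DenseSet, so the element type needs DenseMapInfo, which CachedHashString provides and std::string does not.

    #include "llvm/ADT/CachedHashString.h"
    #include "llvm/ADT/SetVector.h"
    using namespace llvm;

    void collectNames() {
      // Before: SmallSetVector<std::string, 4>, overflowing into
      // std::set. After: overflow goes to DenseSet, so use a
      // DenseMapInfo-compatible element type.
      SmallSetVector<CachedHashString, 4> Names;
      Names.insert(CachedHashString("foo"));
      Names.insert(CachedHashString("foo")); // duplicate: not inserted
      Names.insert(CachedHashString("bar"));
      // Insertion order is preserved: {"foo", "bar"}.
    }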
Li Huang [Fri, 21 Oct 2016 20:05:21 +0000 (20:05 +0000)]
[SCEV] Memoize visitMulExpr results in SCEVRewriteVisitor.
Summary:
When SCEVRewriteVisitor traverses the SCEV DAG, it may visit the same SCEV
multiple times if this SCEV is referenced by multiple other SCEVs. This has
exponential time complexity in the worst case. Memoizing the results will
avoid re-visiting the same SCEV. Add a map to save the results, and override
the visit function of SCEVVisitor. Now SCEVRewriteVisitor visits each
SCEV only once and thus returns the same result for the same input SCEV.
This patch fixes PR18606, PR18607.
Reviewers: Sanjoy Das, Mehdi Amini, Michael Zolotukhin
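The memoization pattern, in a generic C++ sketch (not the real SCEVRewriteVisitor interface): with a result cache, each shared node of the DAG is rewritten once, turning the worst-case exponential traversal into a linear one.

    #include <map>
    #include <vector>

    struct Node {
      std::vector<const Node *> Operands; // shared sub-DAGs appear here
    };

    struct MemoizedRewriter {
      std::map<const Node *, const Node *> Cache;

      const Node *visit(const Node *N) {
        auto It = Cache.find(N);
        if (It != Cache.end())
          return It->second; // seen before: reuse the rewritten result
        const Node *Result = rewrite(N);
        Cache.emplace(N, Result);
        return Result;
      }

      // Placeholder rewrite rule; a real visitor would rebuild N from
      // the visit()'d operands.
      const Node *rewrite(const Node *N) {
        for (const Node *Op : N->Operands)
          visit(Op);
        return N;
      }
    };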
Kevin Enderby [Fri, 21 Oct 2016 20:03:14 +0000 (20:03 +0000)]
Fix a bug in the code of llvm-cxxdump in dumpArchive(): when
iterating over an archive with object and non-object members, it
would cause an Abort because it was not calling consumeError()
when the code wanted to ignore a non-object file.
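The pattern behind the fix, sketched with the LLVM Error API (simplified, not the actual dumpArchive() code): a failed Expected must still be consumed, or the unhandled Error aborts the program on destruction.

    #include "llvm/Support/Error.h"
    using namespace llvm;

    void processMember(Expected<int> ObjOrErr) {
      if (!ObjOrErr) {
        // We mean to skip non-object members, but the failure must
        // still be consumed or the Error's destructor calls abort().
        consumeError(ObjOrErr.takeError());
        return;
      }
      // ... dump *ObjOrErr ...
    }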
X86: Improve BT instruction selection for 64-bit values.
If a 64-bit value is tested against a bit which is known to be in the range
[0..31) (modulo 64), we can use the 32-bit BT instruction, which has a slightly
shorter encoding.
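In C++ terms, the kind of test this helps (illustrative only): when the tested bit provably lies in the low 32 bits, the value's upper half is irrelevant and the shorter 32-bit form can be used.

    #include <cstdint>

    bool testBit5(uint64_t V) {
      // Bit 5 is within the low 32 bits, so a 32-bit bt against the
      // low half of V suffices.
      return (V >> 5) & 1;
    }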
Anna Thomas [Fri, 21 Oct 2016 18:43:16 +0000 (18:43 +0000)]
[StripGCRelocates] New pass to remove gc.relocates added by RS4GC
Summary:
Utility pass to remove the gc.relocates created by RewriteStatepointsForGC.
With respect to safepoint verification, the generated IR would be incorrect and
cannot be run as such.
This is intended as a single transformation on the final optimized IR.
The benefit of the pass is easier analysis when the IR is 'polluted' by too
many gc.relocates.
Added tests.
Test run: all RS4GC tests with the -verify option, plus local downstream tests
on large IR files. This also works when the pointer being gc.relocated is
itself another gc.relocate.
Sanjay Patel [Fri, 21 Oct 2016 17:24:26 +0000 (17:24 +0000)]
[DAG] fold negation of sign-bit
0 - X --> 0, if the sub is NUW
0 - X --> 0, if X is 0 or the minimum signed value and the sub is NSW
0 - X --> X, if X is 0 or the minimum signed value
This is the DAG equivalent of:
https://reviews.llvm.org/rL284649
plus the fold for the NUW case which already existed in InstSimplify.
Note that we miss a vector fold because of a deficiency in the DAG version of
computeKnownBits().
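A quick arithmetic check of the third fold, written with unsigned 32-bit wrap-around so it stays well defined in C++: 0 and the minimum signed value (bit pattern 0x80000000) are exactly the values that are their own two's-complement negation.

    #include <cstdint>

    static_assert(uint32_t(0) - uint32_t(0) == uint32_t(0),
                  "0 - X == X when X is 0");
    static_assert(uint32_t(0) - uint32_t(0x80000000) == uint32_t(0x80000000),
                  "0 - X == X when X is the minimum signed value");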
[Hexagon] Handle spills of partially defined double vector registers
After register allocation it is possible to have a spill of a register
that is only partially defined. That in itself is fine, but it creates a
problem for double vector registers. Stores of such registers are pseudo
instructions that are expanded into pairs of individual vector stores,
and in case of a partially defined source, one of the stores may use
an entirely undefined register. To avoid this, track the defined parts
and only generate actual stores for those.
Derek Schuff [Fri, 21 Oct 2016 16:38:07 +0000 (16:38 +0000)]
[WebAssembly] Fix for 0xc call_indirect changes
Summary:
Need to reorder the operands to have the callee as the last argument.
Adds a pseudo-instruction, and a pass to lower it into a real
call_indirect.
This is the first of two options for how to fix the problem.
Sanjay Patel [Fri, 21 Oct 2016 14:58:30 +0000 (14:58 +0000)]
fix variable names; NFCI
Because we're just 'or-ing' these two variables later in the code, I
don't think there's a logical bug here, but of course the string with
"no size" is the one that should have the size suffix stripped off.
Sanjay Patel [Fri, 21 Oct 2016 14:36:58 +0000 (14:36 +0000)]
[DAG] use SDNode flags 'nsz' to enable fadd/fsub with zero folds
As discussed in D24815, let's start the process of killing off the broken fast-math global
state housed in TargetOptions and eliminate the need for function-level fast-math attributes.
Here we enable two similar folds that are possible when we don't care about signed-zero:
fadd nsz x, 0 --> x
fsub nsz 0, x --> -x
Note that although the test cases include a 'sin' function call, I'm side-stepping the
FMF-on-calls question (and lack of support in the DAG) for now. It's not needed for these
tests - isNegatibleForFree/GetNegatedExpression just look through an ISD::FSIN node.
Also, when we create an FNEG node and propagate the Flags of the FSUB to it, this doesn't
actually do anything today because Flags are silently dropped for any node that is not a
binary operator.
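Why both folds need nsz, in a few lines of C++: each one can flip the sign of a zero, which is observable.

    #include <cassert>
    #include <cmath>

    int main() {
      // fadd x, 0 --> x is wrong for x == -0.0:
      double NegZero = -0.0;
      assert(std::signbit(NegZero));
      assert(!std::signbit(NegZero + 0.0)); // -0.0 + 0.0 == +0.0, not x

      // fsub 0, x --> -x is wrong for x == +0.0:
      double PosZero = 0.0;
      assert(!std::signbit(0.0 - PosZero)); // 0.0 - 0.0 == +0.0 ...
      assert(std::signbit(-PosZero));       // ... but -x == -0.0
      return 0;
    }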
John Brawn [Fri, 21 Oct 2016 11:08:48 +0000 (11:08 +0000)]
[LoopUnroll] Keep the loop test only on the first iteration of max-or-zero loops
When we have a loop with a known upper bound on the number of iterations, and
furthermore know that the number of iterations will be either exactly that
upper bound or zero, then we can fully unroll up to that upper bound, keeping
only the first loop test to check for the zero-iteration case.
Most of the work here is in plumbing this 'max-or-zero' information from the
part of scalar evolution where it's detected through to loop unrolling. I've
also gone for the safe default of 'false' everywhere but howManyLessThans,
which could probably be improved.
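The shape of a max-or-zero loop, as a hypothetical C++ example: the trip count is either exactly the bound or zero, so a full unroll needs only the one initial guard instead of an exit test between every unrolled iteration.

    // Hypothetical example: N is either exactly 8 or 0, never in between.
    void scale8(float *X, float A, bool Run) {
      int N = Run ? 8 : 0;
      // Fully unrollable to eight copies of the body behind a single
      // 'N == 0?' test at the top.
      for (int I = 0; I < N; ++I)
        X[I] *= A;
    }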
Bjorn Pettersson [Fri, 21 Oct 2016 09:53:42 +0000 (09:53 +0000)]
[AArch64] Corrected spill size for DDD register class. NFCI
Summary:
The spill size was incorrectly set to 196 bits,
which isn't a multiple of 8. The problem was detected when
experimenting with an assert that the spill size must be a
multiple of the byte size.
The corrected spill size is 192 bits.
Note that tablegen (RegisterInfoEmitter) will divide the
size set in the RegisterClass definition by 8. So this
change should not have any impact on the tablegen output
(trunc(192/8) == trunc(196/8) == 24 bytes).
Benjamin Kramer [Fri, 21 Oct 2016 09:15:57 +0000 (09:15 +0000)]
[Support] Fix AlignOf test on i386-linux.
On i386 alignof(double) = 8 is not the same as alignof(struct { double
}) = 4. This used to be not an issue because the old implementation
always measured alignment inside of structs. Wrap a dummy struct around
the test to avoid this issue.
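The wrinkle in miniature (a sketch, not the actual unit test): the i386 SysV ABI packs doubles in structs on 4-byte boundaries, so in-struct alignment can be lower than scalar alignment, and both sides of a comparison must be measured the same way.

    struct WrappedDouble { double D; };

    // Holds everywhere, including i386-linux, where alignof(double) == 8
    // but alignof(WrappedDouble) == 4.
    static_assert(alignof(WrappedDouble) <= alignof(double),
                  "in-struct alignment may be lower than scalar alignment");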
Davide Italiano [Fri, 21 Oct 2016 01:37:02 +0000 (01:37 +0000)]
Revert "[GVN/PRE] Hoist global values outside of loops."
There's no agreement about this patch. I personally find the
PRE machinery of the current GVN hard enough to reason about
that I'm not sure I'll try to land this again, instead of working
on the rewrite.
Keno Fischer [Thu, 20 Oct 2016 22:15:56 +0000 (22:15 +0000)]
Fix cross-endianness RuntimeDyld relocation for ARM
rL284780 fixed the PREL31 relocation and added a test for it. Being
the first such test for ARM relocations, it exposed incorrect endianness
assumptions (causing buildbot failures on big-endian hosts). Fix that by
using the same helpers used for the x86 case.
Daniel Berlin [Thu, 20 Oct 2016 20:13:45 +0000 (20:13 +0000)]
[MSSA] Avoid unnecessary use walks when calling getClobberingMemoryAccess
Summary:
This allows us to mark when uses have been optimized.
This lets us avoid rewalking (i.e., when people call getClobberingMemoryAccess on everything), and also
enables us to later relax the requirement of use optimization during updates with less cost.
Kevin Enderby [Thu, 20 Oct 2016 20:10:30 +0000 (20:10 +0000)]
Another additional error check for invalid Mach-O files, for the
load commands that use the MachO::twolevel_hints_command type,
which includes only the LC_TWOLEVEL_HINTS load command.
This load command is not used in llvm libObject code or in llvm tool
code, but it does appear in one of the binary test files. While the
load command is obsolete, it is easier to add code for it in libObject
than to edit or change the binary test case.
Zachary Turner [Thu, 20 Oct 2016 18:31:19 +0000 (18:31 +0000)]
[CodeView] Refactor serialization to use StreamInterface.
This was all using ArrayRef<>s before which presents a problem
when you want to serialize to or deserialize from an actual
PDB stream. An ArrayRef<> is really just a special case of
what can be handled with StreamInterface though (e.g. by using
a ByteStream), so changing this to use StreamInterface allows
us to plug in a PDB stream and get all the record serialization
and deserialization for free on a MappedBlockStream.
Subsequent patches will try to remove TypeTableBuilder and
TypeRecordBuilder in favor of classes that operate on
streams as well, which should allow us to completely merge
the reading and writing codepaths for both types and symbols.
Dehao Chen [Thu, 20 Oct 2016 18:06:52 +0000 (18:06 +0000)]
Using branch probability to guide critical edge splitting.
Summary:
The original heuristic for breaking critical edges during machine sinking is relatively conservative: when there is only one instruction sinkable to the critical edge, the machine sink pass will likely not break the critical edge. This leads to many speculative instructions executed at runtime. However, with profile info, we can model the splitting benefits: if the critical edge has a 50% taken rate, it is always beneficial to split it to avoid the speculatively executed runtime instructions. This patch uses profile information to guide critical edge splitting in the machine sink pass.
The performance impact on SPEC CPU2006 on Intel Sandy Bridge machines:
Summary:
While promoting *_EXTEND_VECTOR_INREG nodes whose inputs are already
promoted, perform the appropriate sign extension for the promoted node
before doing the *_EXTEND_VECTOR_INREG operation. If not, the undefined
high-order bits of the promoted operand may (a) be garbage (in case of
zext) or (b) contribute the wrong sign bit (in case of sext).
Updated the promote-vec3.ll test after this change. The diff shows
explicit zeroing in case of zext and intermediate sign extension in case
of sext.
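The lane-level requirement, expressed in scalar C++ (illustrative, not the legalizer code): a promoted 8-bit lane sitting in a 16-bit register must get well-defined high bits before the lane is widened further.

    #include <cstdint>

    // An i8 lane promoted into a 16-bit register may carry garbage in
    // its high 8 bits, so before *_EXTEND_VECTOR_INREG widens the lane:
    uint16_t zextLane(uint16_t PromotedLane) {
      return PromotedLane & 0xFF; // explicit zeroing for zext
    }

    int16_t sextLane(uint16_t PromotedLane) {
      // Intermediate sign extension from the original 8-bit value.
      return static_cast<int16_t>(static_cast<int8_t>(PromotedLane & 0xFF));
    }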
This is a follow-up to https://reviews.llvm.org/D24816, where we changed reciprocal estimates to be function attributes
rather than TargetOptions.
This patch is intended to be a structural, but not functional change. By moving all of the
TargetRecip functionality into TargetLowering, we can remove all of the reciprocal estimate
state, shield the callers from the string format implementation, and simplify/localize the
logic needed for a target to enable this.
If a function has a "reciprocal-estimates" attribute, those settings may override the target's
default reciprocal preferences for whatever operation and data type we're trying to optimize.
If there's no attribute string or specific setting for the op/type pair, just use the target
default settings.
As noted earlier, a better solution would be to move the reciprocal estimate settings to IR
instructions and SDNodes rather than function attributes, but that's a multi-step job that
requires infrastructure improvements. I intend to work on that, but it's not clear how long
it will take to get all the pieces in place.
Benjamin Kramer [Thu, 20 Oct 2016 15:36:38 +0000 (15:36 +0000)]
[Support] Remove llvm::alignOf now that all uses are gone.
Also clean up the legacy hacks for AlignedCharArray. I'm keeping
LLVM_ALIGNAS alive for a bit longer because GCC 4.8.0 (which we still
support apparently) shipped a buggy alignas(). All other supported
compilers have a working alignas.