Keno Fischer [Mon, 8 Jun 2015 20:09:58 +0000 (20:09 +0000)]
[InstrInfo] Refactor foldOperandImpl to thread through InsertPt. NFC
Summary:
This was a longstanding FIXME and is a necessary precursor to cases
where foldOperandImpl may have to create more than one instruction
(e.g. to constrain a register class). These are the NFC changes split out
from D6262.
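To make the shape of the refactor concrete, here is a standalone sketch (container and element types are illustrative, not LLVM's): once the insertion point is threaded through, the fold hook can emit several instructions itself instead of returning a single one for the caller to insert.

  #include <list>

  using Instr = int;              // stand-in for MachineInstr
  using Block = std::list<Instr>; // stand-in for a basic block

  // With InsertPt threaded through, the hook is free to emit more than
  // one instruction, e.g. a copy to constrain a register class plus the
  // folded operation itself, and return the first of them.
  Block::iterator foldImpl(Block &B, Block::iterator InsertPt) {
    Block::iterator First = B.insert(InsertPt, 1); // the extra copy
    B.insert(InsertPt, 2);                         // the folded operation
    return First;
  }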
Akira Hatanaka [Mon, 8 Jun 2015 18:50:43 +0000 (18:50 +0000)]
[ARM] Pass a callback to FunctionPass constructors to enable skipping execution
on a per-function basis.
Previously, some of the passes were conditionally added to ARM's pass pipeline
based on the target machine's subtarget. This patch adds those passes
unconditionally and executes them conditionally, based on the predicate
functor passed to the pass constructors. This enables running different sets of
passes for different functions in the module.
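A standalone sketch of the pattern this enables (names are illustrative, not the actual LLVM classes): the pass is always in the pipeline, but its run method bails out when the predicate rejects the current function.

  #include <functional>
  #include <string>
  #include <utility>

  struct Function { std::string Name; }; // stand-in for llvm::Function

  class ExamplePass {
    std::function<bool(const Function &)> PredicateFtor;
  public:
    explicit ExamplePass(std::function<bool(const Function &)> Ftor)
        : PredicateFtor(std::move(Ftor)) {}
    bool runOnFunction(Function &F) {
      // Skip this function entirely if the target-supplied predicate
      // rejects it; other functions in the module still get the pass.
      if (PredicateFtor && !PredicateFtor(F))
        return false;
      // ... the actual transformation would go here ...
      return true;
    }
  };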
Pete Cooper [Mon, 8 Jun 2015 17:17:16 +0000 (17:17 +0000)]
Move the COFF Type into the MCSymbolCOFF class.
The flags field in MCSymbol only needs to be 16 bits on ELF and MachO.
This moves the 16-bit Type out of the flags field so that the field can be
shrunk in a future commit.
Matthias Braun [Mon, 8 Jun 2015 16:56:23 +0000 (16:56 +0000)]
X86: Reject register operands with obvious type mismatches.
While we have some code to transform a specification like {ax} into
{eax}/{rax} when the operand type isn't 16-bit, we should reject cases
where there is no sane way to do this, such as the i128 type in the
example.
Oliver Stannard [Mon, 8 Jun 2015 16:55:31 +0000 (16:55 +0000)]
Fix assertion failure in global-merge with unused ConstantExpr
The global-merge pass was crashing because it assumes that all ConstantExprs
(reached via the global variables that they use) have at least one user.
I haven't worked out a way to test this, as an unused ConstantExpr cannot be
represented in serialised IR, and global-merge can only be run in llc, which
does not run any passes that can make a ConstantExpr dead.
This (reduced to the point of silliness) C code triggers this bug when compiled
for arm-none-eabi at -O1:
static a = 7;
static volatile b[10] = {&a};
c;
main() {
  c = 0;
  for (; c < 10;)
    printf(b[c]);
}
Colin LeMahieu [Mon, 8 Jun 2015 16:34:47 +0000 (16:34 +0000)]
[Hexagon] Adding functionality for searching for compound instruction pairs. Compound instructions reduce slot resource requirements, freeing those packet slots for more instructions.
Javed Absar [Mon, 8 Jun 2015 15:01:11 +0000 (15:01 +0000)]
[ARM] Add support for MMFR4_EL1 in assembler
This patch adds support for system register MMFR4_EL1 (memory model feature register) in the assembler.
This register provides information about the implemented memory model and memory management support.
Igor Breger [Mon, 8 Jun 2015 14:03:17 +0000 (14:03 +0000)]
AVX-512: Implemented 256/128bit VALIGND/Q instructions for SKX and KNL
Implemented DAG lowering for all these forms.
Added tests for DAG lowering and encoding.
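As a semantic reference, here is a standalone C++ model of the dword align operation (my reading of the VALIGND definition - concatenate the first source above the second, shift right by the immediate counted in doubleword elements, keep the low N elements; treat it as a sketch, not the lowering code):

  #include <array>
  #include <cstddef>
  #include <cstdint>

  template <std::size_t N>
  std::array<uint32_t, N> valignd(const std::array<uint32_t, N> &Src1,
                                  const std::array<uint32_t, N> &Src2,
                                  unsigned Imm) {
    std::array<uint32_t, N> Dst{};
    std::size_t S = Imm % N; // the hardware masks the shift count
    for (std::size_t i = 0; i < N; ++i) {
      std::size_t J = i + S; // index into the 2N-element concatenation
      Dst[i] = J < N ? Src2[J] : Src1[J - N];
    }
    return Dst;
  }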
Artur Pilipenko [Mon, 8 Jun 2015 11:58:13 +0000 (11:58 +0000)]
Minor refactoring of GEP handling in isDereferenceablePointer
For GEP instructions, isDereferenceablePointer checks that all indices are constant and within bounds. Replace this index calculation logic with a call to accumulateConstantOffset. Separated from http://reviews.llvm.org/D9791.
Silviu Baranga [Mon, 8 Jun 2015 10:27:06 +0000 (10:27 +0000)]
[LAA] Fix estimation of number of memchecks
Summary:
We need to add a runtime memcheck for each pair of accesses (x, y) where at
least one of x and y is a write.
Assuming we have w writes and r reads, this number is currently estimated as
w * (w + r - 1). This estimate counts (write, write) pairs twice and therefore
overestimates the number of checks required.
This change adds a getNumberOfChecks method to RuntimePointerCheck, which
counts the number of runtime checks needed (similar in implementation to
needsAnyChecking), and uses it to produce the correct number of runtime checks.
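Concretely, the counting argument works out as in the sketch below: with w writes and r reads there are w*(w-1)/2 write/write pairs plus w*r write/read pairs, whereas the old estimate w*(w+r-1) counts every write/write pair twice.

  #include <cstdint>

  // Number of runtime checks: one per unordered pair (x, y) of accesses
  // where at least one of x and y is a write.
  uint64_t numRuntimeChecks(uint64_t w, uint64_t r) {
    return w * (w - 1) / 2 + w * r;
  }

For example, with w = 3 and r = 2 this gives 3 + 6 = 9 checks, while the old formula gives 3 * 4 = 12.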
Test Plan:
llvm test suite
spec2k
spec2k6
Performance results: no changes observed (not surprising, since the formula for one writer is basically the same, which covers most cases - at least with the current check limit).
Hao Liu [Mon, 8 Jun 2015 06:39:56 +0000 (06:39 +0000)]
[LoopVectorize] Teach the Loop Vectorizer about interleaved memory accesses.
Interleaved memory accesses are grouped and vectorized into vector load/store and shufflevector.
E.g.
  for (i = 0; i < N; i += 2) {
    a = A[i];     // load of even element
    b = A[i+1];   // load of odd element
    ...           // operations on a, b, c, d
    A[i] = c;     // store of even element
    A[i+1] = d;   // store of odd element
  }
The loads of even and odd elements are identified as an interleave load group, which will be transformed into vectorized IR like:
%wide.vec = load <8 x i32>, <8 x i32>* %ptr
%vec.even = shufflevector <8 x i32> %wide.vec, <8 x i32> undef, <4 x i32> <i32 0, i32 2, i32 4, i32 6>
%vec.odd = shufflevector <8 x i32> %wide.vec, <8 x i32> undef, <4 x i32> <i32 1, i32 3, i32 5, i32 7>
The stores of even and odd elements are identified as an interleave store group, which will be transformed into vectorized IR like:
%interleaved.vec = shufflevector <4 x i32> %vec.even, <4 x i32> %vec.odd, <8 x i32> <i32 0, i32 4, i32 1, i32 5, i32 2, i32 6, i32 3, i32 7>
store <8 x i32> %interleaved.vec, <8 x i32>* %ptr
This optimization is currently disabled by default. To try it, pass '-enable-interleaved-mem-accesses=true'.
Remove SCEVCache and FindConstantPointers from complete loop unrolling heuristic.
Summary:
Using some SCEV functionality helped to entirely remove the SCEVCache class
and the FindConstantPointers SCEV visitor.
Also, this makes the code more universal - I'll take advantage of it in
upcoming patches, where I start handling additional types of instructions.
Test Plan: Tests will be submitted in subsequent patches.
Colin LeMahieu [Sun, 7 Jun 2015 01:46:24 +0000 (01:46 +0000)]
Teaching llvm-mc how to understand the defsym command line option. This allows integer-constant symbols to be defined on the command line and used during assembly.
David Majnemer [Sat, 6 Jun 2015 22:40:21 +0000 (22:40 +0000)]
[InstCombine, InstSimplify] Move xforms from Combine to Simplify
There were several SelectInst combines that always returned an existing
instruction instead of modifying an old one or creating a new one.
These are prime candidates for moving to InstSimplify.
Colin LeMahieu [Sat, 6 Jun 2015 20:12:40 +0000 (20:12 +0000)]
[MC] Common symbols weren't being checked for redeclaration, which allowed an assembly file to trigger the !isCommon() assertion in setCommon(). This change allows redeclaration as long as the size and alignment match exactly; otherwise a fatal error is reported.
Sanjoy Das [Sat, 6 Jun 2015 05:24:10 +0000 (05:24 +0000)]
[LoopUnroll] Fix truncation bug in canUnrollCompletely.
Summary:
canUnrollCompletely takes `unsigned` values for `UnrolledCost` and
`RolledDynamicCost` but is passed `uint64_t`s that are silently
truncated. Because of this, when `UnrolledSize` is a large integer
whose low 32 bits form a small value, LLVM tries to completely
unroll loops with high trip counts.
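A standalone demonstration of the hazard (names illustrative): the uint64_t silently narrows to `unsigned` at the call, so an enormous cost can masquerade as a tiny one.

  #include <cstdint>
  #include <cstdio>

  static bool belowThreshold(unsigned UnrolledCost, unsigned Threshold) {
    return UnrolledCost < Threshold;
  }

  int main() {
    uint64_t UnrolledSize = (uint64_t(1) << 32) + 5; // enormous real cost
    // The implicit conversion keeps only the low 32 bits, i.e. 5, so the
    // "cheap" loop would be unrolled completely. Prints 1.
    std::printf("%d\n", belowThreshold(UnrolledSize, 100));
  }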
David Majnemer [Sat, 6 Jun 2015 04:56:51 +0000 (04:56 +0000)]
[CVP] Don't assume Constants of type i1 can be known to be true or false
CVP wants to analyze the condition operand of a select along an edge.
It succeeds in getting back a Constant, but not a ConstantInt. Instead,
it gets a ConstantExpr (for instance, an icmp between the addresses of two
globals, which folds to neither true nor false). It then assumes that the
Constant must be equal to false because it isn't equal to true.
David Majnemer [Sat, 6 Jun 2015 02:30:43 +0000 (02:30 +0000)]
[InstCombine] Don't miscompile select to poison
If we have (select a, b, c), it is sometimes valid to simplify this to a
single select operand. However, doing so is only valid if that
operand doesn't inject poison into the computation.
It might be helpful to consider the following example:
(select (icmp ne %i, INT_MAX), (add nsw %i, 1), INT_MIN)
The select is equivalent to (add %i, 1) but not (add nsw %i, 1).
Self-hosting on x86_64 revealed that this occurs very, very rarely, so
bailing out is hopefully pretty reasonable.
Frederic Riss [Fri, 5 Jun 2015 23:06:11 +0000 (23:06 +0000)]
[dsymutil] Add support for linking the debug_frame section.
Linking the debug frame section is actually very easy as we just have to
patch the start address in the FDE header and then copy the rest of the
FDE without even looking at it. The only small complexity comes from the
handling of the CIEs, which we should unique across object files. This is
also really easy by using a StringMap keyed on the raw contents of the
CIE.
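A sketch of the uniquing idea, with std::unordered_map standing in for the StringMap mentioned above: key on the CIE's raw bytes and reuse the output offset of the first emitted copy.

  #include <cstdint>
  #include <string>
  #include <unordered_map>

  uint64_t getOrEmitCIE(
      std::unordered_map<std::string, uint64_t> &EmittedCIEs,
      const std::string &RawCIEBytes, uint64_t &NextOffset) {
    auto It = EmittedCIEs.find(RawCIEBytes);
    if (It != EmittedCIEs.end())
      return It->second;            // identical CIE already emitted: reuse
    uint64_t Offset = NextOffset;   // emit the new CIE at the current offset
    NextOffset += RawCIEBytes.size();
    EmittedCIEs.emplace(RawCIEBytes, Offset);
    return Offset;
  }

Each FDE then just records the offset returned for its CIE.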
Akira Hatanaka [Fri, 5 Jun 2015 21:58:14 +0000 (21:58 +0000)]
Move the code in TargetPassConfig::addPass that inserts the machine printer
pass to the overloaded version of addPass which takes a Pass*.
This change enables inserting the machine printer pass when the overloaded
version of addPass that takes a Pass* is called to add a pass, instead of the
one which takes an AnalysisID. I need this to prevent make-check tests from
failing when I commit another patch later.
Frederic Riss [Fri, 5 Jun 2015 21:12:07 +0000 (21:12 +0000)]
[dsymutil] Have the YAML deserialization rewrite the object address of symbols.
The main use of the YAML debug map format is for testing inside LLVM. If we use
IR files in the tests to generate the object files, then we obviously don't know
the addresses of the symbols inside the object files beforehand.
This change lets the YAML import look up the addresses in the object files and
rewrite them. This allows tests that really don't need any binary input.
Renato Golin [Fri, 5 Jun 2015 18:24:12 +0000 (18:24 +0000)]
Revert "[InstCombine] Rephrase fix to SimplifyWithOpReplaced"
This reverts commit r239141. That commit was an attempt to reintroduce
a previous patch that broke many self-hosting bots with clang timeouts,
but it still has slowdown issues, at least on ARM, increasing the
compilation time (of stage-2 clang) by 5x.
Sanjoy Das [Fri, 5 Jun 2015 18:04:46 +0000 (18:04 +0000)]
[InstCombine][NFC] Add a ``break;`` statement.
This change is NFC because both the ``break;`` and the fall through end
up returning immediately. However, this helps clarify intent and also
ensures correctness in case more ``case`` blocks are added later.
Revert r238473, "Thumb2: Modify codegen for memcpy intrinsic to prefer LDM/STM."
as it caused miscompilations and assertion failures (PR23768,
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20150601/280380.html).
[Unroll] Rework the naming and structure of the new unroll heuristics.
The new naming is (to me) much easier to understand. Here is a summary
of the new state of the world:
- '*Threshold' is the threshold for full unrolling. It is measured
against the estimated unrolled cost as computed by getUserCost in TTI
(or CodeMetrics, etc). We will exceed this threshold when unrolling
loops where unrolling exposes a significant degree of simplification
of the logic within the loop.
- '*PercentDynamicCostSavedThreshold' is the percentage of the loop's
estimated dynamic execution cost which needs to be saved by unrolling
to apply a discount to the estimated unrolled cost.
- '*DynamicCostSavingsDiscount' is the discount applied to the estimated
unrolling cost when the dynamic savings are expected to be high.
When actually analyzing the loop, we now produce both an estimated
unrolled cost, and an estimated rolled cost. The rolled cost is notably
a dynamic estimate based on our analysis of the expected execution of
each iteration.
While we're still working to build up the infrastructure for making
these estimates, to me it is much more clear *how* to make them better
when they have reasonably descriptive names. For example, we may want to
apply estimated (from heuristics or profiles) dynamic execution weights
to the *dynamic* cost estimates. If we start doing that, we would also
need to track the static unrolled cost and the dynamic unrolled cost, as
only the latter could reasonably be weighted by profile information.
This patch is sadly not without functionality change for the new unroll
analysis logic. Buried in the heuristic management were several things
that surprised me. For example, we never subtracted the optimized
instruction count when comparing against the unroll heuristics!
I don't know if this just got lost somewhere along the way or what, but
with the new accounting of things, this is much easier to keep track of:
we use the post-simplification cost estimate to compare against the
thresholds, and use the dynamic cost reduction ratio to select whether
we can exceed the baseline threshold.
The old values of these flags also don't necessarily make sense. My
impression is that none of these thresholds or discounts have been tuned
yet, and so they're just arbitrary placeholder numbers. As such, I've not
bothered to adjust for the fact that this is now a discount and not
a two-tier threshold model. We need to tune all these values once the
logic is ready to be enabled.
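A worked sketch of the resulting decision rule (parameter names follow the flags above, minus the '*' prefix; the pass's exact bookkeeping may differ): the base threshold can only be exceeded when the dynamic savings percentage clears its own threshold, and then only by the fixed discount.

  #include <cstdint>

  bool canUnrollSketch(uint64_t UnrolledCost, uint64_t RolledDynamicCost,
                       unsigned Threshold,
                       unsigned PercentDynamicCostSavedThreshold,
                       unsigned DynamicCostSavingsDiscount) {
    if (UnrolledCost <= Threshold)
      return true;  // cheap enough after simplification: always unroll
    if (RolledDynamicCost == 0 || UnrolledCost >= RolledDynamicCost)
      return false; // unrolling saves nothing dynamically
    uint64_t PercentSaved =
        100 * (RolledDynamicCost - UnrolledCost) / RolledDynamicCost;
    return PercentSaved >= PercentDynamicCostSavedThreshold &&
           UnrolledCost <= uint64_t(Threshold) + DynamicCostSavingsDiscount;
  }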
John Brawn [Fri, 5 Jun 2015 13:31:19 +0000 (13:31 +0000)]
[ARM] Add support for -sp- FPUs and FPU none to TargetParser
These are added mainly for the benefit of clang, but this also means that they
are now allowed in .fpu directives and we emit the correct .fpu directive when
single-precision-only is used.
John Brawn [Fri, 5 Jun 2015 13:29:24 +0000 (13:29 +0000)]
[ARM] Add knowledge of FPU subtarget features to TargetParser
Add getFPUFeatures to TargetParser, which gets the list of subtarget features
that are enabled/disabled for each FPU, and use it when handling the .fpu
directive.
No functional change in this commit, though clang will start behaving
differently once it starts using this.