Dylan Noblesmith [Mon, 25 Aug 2014 00:28:27 +0000 (00:28 +0000)]
IR: remove dead code
This was added in r134994 to fix a memory leak;
three days later, r135248 switched
ContainedTys from being new-allocated to being allocated
via BumpPtrAllocator, and the earlier fix was never
reverted.
The destructor doesn't seem to ever actually be called
on Types anyway, so it's harmless, but if it were,
this'd be an invalid pointer.
Dylan Noblesmith [Sun, 24 Aug 2014 19:10:53 +0000 (19:10 +0000)]
TableGen: delete no-op code
This does nothing but remove the Record from the map, and
then re-add it, without actually changing it in between.
The Record's Name used to be changed before re-adding it
when the code was first committed in r137232, but the
name-changing lines were removed in r142510, and since
then this code seems to do nothing.
This was also the only caller of removeClass or removeDef,
so now RecordKeeper owns its Records unconditionally,
and could be unique_ptr-ified.
Aaron Ballman [Sun, 24 Aug 2014 13:25:16 +0000 (13:25 +0000)]
This code is from r216285, which did not go out to the mailing list for some reason.
The switch statement would never fire due to the preceding break statement. Also, the switch statement has a default label with no case labels. Simplified the code and allowed it to execute.
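For illustration, the dead-code shape being described is roughly this (hypothetical code, not the actual source from r216285):
void handle(int Kind);
void handleDefault(int Kind);

void processAll(const int *Kinds, int N) {
  for (int I = 0; I < N; ++I) {
    handle(Kinds[I]);
    break;               // unconditional break: the loop body ends here...
    switch (Kinds[I]) {  // ...so this default-only switch can never execute
    default:
      handleDefault(Kinds[I]);
      break;
    }
  }
}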
David Majnemer [Sun, 24 Aug 2014 09:10:57 +0000 (09:10 +0000)]
InstCombine: Properly optimize or'ing bittests together
CFE, with -O3, would turn:
bool f(unsigned x) {
bool a = x & 1;
bool b = x & 2;
return a | b;
}
into:
%1 = lshr i32 %x, 1
%2 = or i32 %1, %x
%3 = and i32 %2, 1
%4 = icmp ne i32 %3, 0
This sort of thing exposes a nasty pathology in GCC, ICC and LLVM.
Instead, we would rather want:
%1 = and i32 %x, 3
%2 = icmp ne i32 %1, 0
Things get a bit more interesting in the following case:
%1 = lshr i32 %x, %y
%2 = or i32 %1, %x
%3 = and i32 %2, 1
%4 = icmp ne i32 %3, 0
Replacing it with the following sequence is better:
%1 = shl nuw i32 1, %y
%2 = or i32 %1, 1
%3 = and i32 %2, %x
%4 = icmp ne i32 %3, 0
This sequence is preferable because %1 doesn't involve %x and could
potentially be hoisted out of loops if it is invariant; only perform
this transform in the non-constant case if we know we won't increase
register pressure.
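At the source level, the second pattern corresponds roughly to code like this (illustrative, mirroring the C example above; not taken from the test suite):
bool g(unsigned x, unsigned y) {
  bool a = (x >> y) & 1;  // test bit y of x
  bool b = x & 1;         // test bit 0 of x
  return a | b;           // becomes (((1u << y) | 1) & x) != 0
}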
Dylan Noblesmith [Sat, 23 Aug 2014 23:07:14 +0000 (23:07 +0000)]
Support: add llvm::unique_lock
Based on the STL class of the same name, it guards a mutex
while also allowing it to be unlocked conditionally before
destruction.
This eliminates the last naked usages of mutexes in LLVM and
clang.
It also uncovered and fixed a bug in callExternalFunction()
when compiled without USE_LIBFFI, where the mutex would never
be unlocked if the end of the function was reached.
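For reference, the usage pattern mirrors std::unique_lock; a minimal sketch with the standard class (the mutex and branching here are illustrative, not the actual callExternalFunction() code):
#include <mutex>

static std::mutex FunctionsMutex;

void callSomethingExternal(bool FastPath) {
  std::unique_lock<std::mutex> Guard(FunctionsMutex);
  // ... look up state under the lock ...
  if (FastPath) {
    Guard.unlock();  // conditionally release before the potentially slow call
    // ... make the external call without holding the lock ...
    return;
  }
  // On every other path the destructor releases the mutex, which is exactly
  // the "forgot to unlock before returning" bug this guards against.
}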
Dylan Noblesmith [Sat, 23 Aug 2014 21:10:58 +0000 (21:10 +0000)]
cmake: actually test -Wcomment
This test was testing nothing, as only -Werror was ever
being added to the compiler flags.
You can see the final nitty-gritty compiler invocation in
CMakeFiles/CMakeOutput.log (for successful tests) and
CMakeFiles/CMakeError.log (for failed tests).
Dylan Noblesmith [Sat, 23 Aug 2014 21:10:56 +0000 (21:10 +0000)]
cmake: disable -Wnon-virtual-dtor when it gives false positives
clang has only been smart enough not to trigger -Wnon-virtual-dtor
warnings on final classes since r208449 (in clang 3.5). Building
with older versions is extremely noisy, so disable the warning
on those compilers.
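The false-positive shape in question looks roughly like this (illustrative):
struct Handler final {
  virtual void run();  // has virtual functions...
  ~Handler();          // ...but a non-virtual destructor
};
// Pre-3.5 clang warns on Handler under -Wnon-virtual-dtor even though
// 'final' guarantees nothing can derive from it, so there is no derived
// object that could be deleted through a Handler pointer.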
Chandler Carruth [Sat, 23 Aug 2014 10:25:15 +0000 (10:25 +0000)]
[x86] Start fixing a really subtle and terrible form of miscompile in
these DAG combines.
The DAG auto-CSE thing is truly terrible. Due to it, when RAUW-ing
a node with its operand, you can cause its uses to CSE to itself; their
uses then become your uses, which in turn get picked up by the RAUW. For
nodes that are determined to be "no-ops", this is "fine". But if the RAUW
is one of several steps to enact a transformation, this causes the DAG to
really silently eat and discard
nodes that you would never expect. It took days for me to actually
pinpoint a test case triggering this and a really frustrating amount of
time to even comprehend the bug because I never even thought about the
ability of RAUW to iteratively consume nodes due to CSE-ing them into
itself.
To fix this, we have to build up a brand-new chain of operations any
time we are combining across (potentially) intervening nodes. But once
the logic is added to do this, another issue surfaces: CombineTo eagerly
deletes the one node combined, *but no others*. This is... really
frustrating. If deleting it makes its operands become dead, those
operand nodes often won't go onto the worklist in the
order you would want -- they're already on it and not near the top. That
means things higher on the worklist will get combined prior to these
dead nodes being GCed out of the worklist, and if the chain is long, the
immediate users won't be enough to re-detect the root of the chain that
became single-use again after deleting the dead nodes. The
better way to do this is to never immediately delete nodes, and instead
to just enqueue them so we can recursively delete them. The
combined-from node is typically not on the worklist anyways by virtue of
having been popped off.... But that in turn breaks other tests that
*require* CombineTo to delete unused nodes. :: sigh ::
Fortunately, there is a better way. This whole routine should have been
returning the replacement rather than using CombineTo which is quite
hacky. Switch to that, and all the pieces fall together.
I suspect the same kind of miscompile is possible in the half-shuffle
folding code, and potentially the recursive folding code. I'll be
switching those over to a pattern more like this one for safety's sake
even though I don't immediately have any test cases for them. Note that
the only way I got a test case for this instance was with *heavily* DAG
combined 256-bit shuffle sequences generated by my fuzzer. ;]
Rafael Espindola [Fri, 22 Aug 2014 23:26:10 +0000 (23:26 +0000)]
Add support for comdats to the gold plugin.
There are two parts to this. First, the plugin needs to tell gold which comdat
each symbol is in, which it does by setting comdat_key.
What makes things a bit more complicated is that gold only sees
symbols. In particular, if A is an alias to B, it only sees the symbols
A and B. It can then ask us to keep symbol A but drop symbol B. What
we have to do instead is to create an internal version of B and make A
an alias to that.
At some point some of this logic should be moved to lib/Linker so that
we don't map a Constant to an internal version just to have lib/Linker
map that again to the destination module.
The reason for implementing this in tools/gold for now is simplicity.
With it in place it should be possible to update clang to use comdats
for constructors and destructors on ELF without breaking the LTO
bootstrap. Once that is done I intend to come back and improve the
interface lib/Linker exposes.
Alex Lorenz [Fri, 22 Aug 2014 22:56:03 +0000 (22:56 +0000)]
llvm-cov: add code coverage tool that's based on coverage mapping format and clang's pgo.
This commit expands llvm-cov's functionality by adding support for a new code coverage
tool that uses LLVM's coverage mapping format and clang's instrumentation-based profiling.
The gcov-compatible tool can still be invoked by supplying the 'gcov' command as the first
argument, or by renaming the tool so that its name ends with 'gcov'.
Jingyue Wu [Fri, 22 Aug 2014 22:45:57 +0000 (22:45 +0000)]
[SROA] Fold a PHI node if all its incoming values are the same
Summary:
Fixes PR20425.
During slice building, if all of the incoming values of a PHI node are the same, replace the PHI node with the common value. This simplification makes allocas used by PHI nodes easier to promote.
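As a rough source-level illustration of how such a PHI can arise (hypothetical example, not one of the new tests):
int f(bool c) {
  int x = 0;
  int *p;
  if (c)
    p = &x;
  else
    p = &x;   // both predecessors feed the same address into the PHI for p
  *p = 42;    // folding the PHI to &x leaves a plain alloca that can be promoted
  return *p;
}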
Test Plan: Added three more tests in phi-and-select.ll
Reid Kleckner [Fri, 22 Aug 2014 21:59:26 +0000 (21:59 +0000)]
ARM / x86_64 varargs: Don't save regparms in prologue without va_start
There's no need to do this if the user doesn't call va_start. In the
future, we're going to have thunks that forward these register
parameters with musttail calls, and they won't need these spills for
handling va_start.
Most of the test suite changes are adding va_start calls to existing
tests to keep things working.
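As a source-level sketch of the two cases (hypothetical functions, not from the changed tests):
#include <stdarg.h>

int no_vastart(int a, int b, ...) {
  // Never calls va_start, so the incoming register arguments for '...'
  // do not need to be spilled in the prologue.
  return a + b;
}

int uses_vastart(int a, int b, ...) {
  va_list ap;
  va_start(ap, b);          // this is what forces the register save area
  int c = va_arg(ap, int);
  va_end(ap);
  return a + b + c;
}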
Reid Kleckner [Fri, 22 Aug 2014 19:29:17 +0000 (19:29 +0000)]
Fix PR17239 by changing the semantics of the RemainingArgsClass Option kind
This patch contains the LLVM side of the fix of PR17239.
This bug happens because the /link option (a clang-cl.exe argument) is
marked as "consume all remaining arguments". However, when inside a
response file, /link should only consume all remaining arguments inside
the response file where it is located, not the entire command line after
expansion.
My patch will change the semantics of the RemainingArgsClass kind to
always consume only until the end of the response file when the option
originally came from a response file. There are only two options in this
class: dash dash (--) and /link.
Tom Stellard [Fri, 22 Aug 2014 18:49:31 +0000 (18:49 +0000)]
R600/SI: Wrap local memory pointer in AssertZExt on SI
These pointers are really just offsets, and they will always fit in
16 bits. Using AssertZExt allows us to use computeKnownBits
to prove that these values are positive. We will use this information
in a later commit.
Quentin Colombet [Fri, 22 Aug 2014 18:05:22 +0000 (18:05 +0000)]
[ARM] Move the implementation of the target hooks related to copy-related
instructions from ARMInstrInfo to ARMBaseInstrInfo.
That way, Thumb mode can also benefit from the advanced copy optimization.
Regardless of whether or not we demand the sign bit of %add, we cannot
replace -16777216 with 2130706432 without also removing 'nuw' from the
instruction.
Regardless of whether or not we demand the sign bit of %add, we cannot
replace -16777216 with 2130706432 without also removing 'nsw' from the
instruction.
Erik Eckstein [Fri, 22 Aug 2014 01:18:39 +0000 (01:18 +0000)]
fix: SLPVectorizer crashes for unreachable blocks containing unschedulable instructions.
In unreachable blocks it's legal to have instructions like "%x = op %x".
Such instructions are not schedulable. Therefore the SLPVectorizer has to check for
unreachable blocks and ignore them.
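A minimal sketch of the kind of reachability guard described, using the dominator tree (names and placement are illustrative; the actual SLPVectorizer check may differ):
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/Dominators.h"
using namespace llvm;

static bool isVectorizableBlock(const BasicBlock *BB, const DominatorTree &DT) {
  // Unreachable blocks may contain self-referential instructions such as
  // "%x = op %x", which cannot be scheduled, so skip them entirely.
  return DT.isReachableFromEntry(BB);
}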
Reid Kleckner [Fri, 22 Aug 2014 00:09:56 +0000 (00:09 +0000)]
SROA: Handle a case of store size being smaller than allocation size
In this case, we are creating an x86_fp80 slice for a union from C where
the padding bytes may contain real data. An x86_fp80 alloca is 16 bytes,
and that's just fine. We can't, however, use regular loads and stores to
access the slice, because the store size is only 10 bytes / 80 bits.
Instead, use memcpy and memset.
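The C-level shape of the problem is roughly this (illustrative, not the exact testcase):
union U {
  long double LD;   // x86_fp80: only 10 of the 16 allocated bytes hold the value
  char Bytes[16];   // the remaining "padding" bytes can carry real data
};
// SROA may form an x86_fp80 slice for an alloca of U; loading or storing that
// slice as x86_fp80 would only touch 80 bits, so memcpy and memset are used
// to move all 16 bytes instead.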
Revert "X86: Align the stack on word boundaries in LowerFormalArguments()"
This (mostly) reverts commit r216119.
Somewhere during the review, Reid committed r214980, which fixed this
in another way, and I neglected to check that the testcase still failed
before committing.
I've left test/CodeGen/X86/aligned-variadic.ll around in case it adds
extra coverage.
David Blaikie [Thu, 21 Aug 2014 22:45:21 +0000 (22:45 +0000)]
Use DILexicalBlockFile, rather than DILexicalBlock, to track discriminator changes to ensure discriminator changes don't introduce new DWARF DW_TAG_lexical_blocks.
This went somewhat unnoticed in the original implementation of
discriminators: using DILexicalBlock to track discriminator changes
could cause instructions to end up in new, small
DW_TAG_lexical_blocks.
Instead, use DILexicalBlockFile which we already use to track file
changes without introducing new scopes, so it works well to track
discriminator changes in the same way.
Sanjay Patel [Thu, 21 Aug 2014 22:31:48 +0000 (22:31 +0000)]
name change: isPow2DivCheap -> isPow2SDivCheap
The name isPow2DivCheap doesn't specify signed or unsigned.
Lazy as I am, I eventually read the function and variable comments. It turns out that this is strictly about signed div. But I discovered that the comments are wrong:
srl/add/sra
is not the general sequence for signed integer division by power-of-2. We need one more 'sra':
sra/srl/add/sra
That's the sequence produced in DAGCombiner. The first 'sra' may be removed when dividing by exactly '2', but that's a special case.
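For reference, here is the corrected four-instruction sequence written out in C for a divide by 4 (a worked illustration, not code from the patch):
int sdiv_by_4(int x) {
  // Assumes arithmetic right shift of signed int, as on the targets in question.
  int Sign = x >> 31;               // sra: 0 if x >= 0, -1 if x < 0
  int Bias = (unsigned)Sign >> 30;  // srl: 2^k - 1 (here 3) only for negative x
  int Adjusted = x + Bias;          // add: bias so the divide rounds toward zero
  return Adjusted >> 2;             // sra: the actual divide by 2^k
}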
This patch corrects the comments, changes the name of the flag bit, and changes the name of the accessor methods.
Quentin Colombet [Thu, 21 Aug 2014 22:23:52 +0000 (22:23 +0000)]
[PeepholeOptimizer] Enable the advanced copy optimization by default.
The advanced copy optimization does not yield any difference on the whole llvm
test-suite + SPECs, either in compile time or runtime (binaries are identical),
but it has big potential when data moves back and forth between register files,
as demonstrated by test/CodeGen/ARM/adv-copy-opt.ll.
Note: This was measured for both Os and O3 for armv7s, arm64, and x86_64.
Robin Morisset [Thu, 21 Aug 2014 21:50:01 +0000 (21:50 +0000)]
Rename AtomicExpandLoadLinked into AtomicExpand
AtomicExpandLoadLinked is currently rather ARM-specific. This patch is the first of
a group that aims at making it more target-independent. See
http://lists.cs.uiuc.edu/pipermail/llvmdev/2014-August/075873.html
for details.
Juergen Ributzka [Thu, 21 Aug 2014 20:57:57 +0000 (20:57 +0000)]
[FastISel][AArch64] Use the correct register class to make the MI verifier happy.
This is mostly achieved by providing the correct register class manually,
because getRegClassFor always returns the GPR*AllRegClass for MVT::i32 and
MVT::i64.
Also clean up the code to use the FastEmitInst_* methods whenever possible. This
makes sure that the operands' register class is properly constrained. For all
the remaining cases this adds the missing constrainOperandRegClass calls for
each operand.
Tom Stellard [Thu, 21 Aug 2014 20:40:54 +0000 (20:40 +0000)]
R600/SI: Use eliminateFrameIndex() to expand SGPR spill pseudos
This will simplify the SGPR spilling and also allow us to use
MachineFrameInfo for calculating offsets, which should be more
reliable than our custom code.
This fixes a crash in some cases where a register would be spilled
in a branch such that the VGPR defined for spilling did not dominate
all the uses when restoring.
This fixes a crash in an OpenCL conformance test. The test requires
register spilling and is too big to include.
Rafael Espindola [Thu, 21 Aug 2014 20:28:55 +0000 (20:28 +0000)]
Rewrite the gold plugin to fix pr19901.
There is a fundamental difference between how the gold API and lib/LTO view
the LTO process.
The gold API talks about a particular symbol in a particular file. The lib/LTO
API talks about a symbol in the merged module.
The merged module is then defined in terms of the IR semantics. In particular,
a linkonce_odr GV is only copied if it is used, since it is valid to drop
unused linkonce_odr GVs.
In the testcase in pr19901 both properties collide. What happens is that gold
asks us to keep a particular linkonce_odr symbol, but the IR linker doesn't
copy it to the merged module and we never have a chance to ask lib/LTO to keep
it.
This patch fixes it by having a more direct implementation of the gold API. If
it asks us to keep a symbol, we change the linkage so it is not linkonce. If it
says we can drop a symbol, we do so. All of this before we even send the module
to lib/Linker.
Since now we don't have to produce LTO_SYMBOL_SCOPE_DEFAULT_CAN_BE_HIDDEN,
during symbol resolution we can use a temporary LLVMContext and do lazy
module loading. This allows us to keep the minimum possible amount of
allocated memory around. This should also allow as much parallelism as
we want, since there is no shared context.
Adam Nemet [Thu, 21 Aug 2014 19:50:07 +0000 (19:50 +0000)]
[AVX512] Add class to group common template arguments related to vector type
We discussed the issue of generality vs. readability of the AVX512 classes
recently. I proposed this approach to try to hide and centralize the mappings
we commonly perform based on the vector type. A new class X86VectorVTInfo
captures these.
The idea is to pass an instance of this class to classes/multiclasses instead
of the corresponding ValueType. Then the class/multiclass can use its field
for things that derive from the type rather than passing all those as separate
arguments.
I modified avx512_valign to demonstrate this new approach. As you can see,
instead of 7 related template parameters we now have one. The downside is
that we have to refer to fields for the derived values. I named the argument
'_' in order to make this as invisible as possible. Please let me know if you
absolutely hate this. (Also once we allow local initializations in
multiclasses we can recover the original version by assigning the fields to
local variables.)
Another possible use-case for this class is to directly map things, e.g.:
Alex Lorenz [Thu, 21 Aug 2014 19:23:25 +0000 (19:23 +0000)]
Coverage Mapping: add function's hash to coverage function records.
The profile data format was recently updated and the new indexing api
requires the code coverage tool to know the function's hash as well
as the function's name to get the execution counts for a function.
Quentin Colombet [Thu, 21 Aug 2014 18:10:07 +0000 (18:10 +0000)]
[AArch64] Run a peephole pass right after AdvSIMD pass.
The AdvSIMD pass may produce copies that are not coalescer-friendly. The
peephole optimizer knows how to fix that as demonstrated in the test case.
Moritz Roth [Thu, 21 Aug 2014 17:11:03 +0000 (17:11 +0000)]
Thumb1 load/store optimizer: Improve code to materialize new base register.
There are two add-immediate instructions in Thumb1: tADDi8 and tADDi3. Only
the latter supports using different source and destination registers, so
whenever we materialize a new base register (at a certain offset) we'd do
so by moving the base register value to the new register and then adding in
place. This patch changes the code to use a single tADDi3 if the offset is
small enough to fit in 3 bits.
The annoying thing is that because the default placement-new operator has a
nothrow specification, the compiler will insert a null check of Mem before
calling the DeclRefExpr constructor. This null check is redundant for us,
because we expect the allocation functions to never return null.
By annotating the allocator functions with returns_nonnull, we can optimize
away these checks. Compiling clang with a recent version of Clang and measuring
with:
$ perf stat -r20 bin/clang.patch -fsyntax-only -w gcc.c && perf stat -r20 bin/clang.orig -fsyntax-only -w gcc.c
Shows a 2.4% speed-up (+- 0.8%).
The pattern occurs in LLVM too. Measuring with -O3 (and now using bzip2.c
instead, because it's smaller):
$ perf stat -r20 bin/clang.patch -O3 -w bzip2.c && perf stat -r20 bin/clang.orig -O3 -w bzip2.c
Shows a 4.4% speed-up (+- 1%).
If anyone knows of a similar attribute we can use for MSVC, or some other
technique to get rid of the null check there, please let me know.
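A sketch of the annotation described above (GCC/Clang attribute syntax; the allocator name here is hypothetical, not the actual LLVM/clang declaration):
#include <cstddef>
#include <new>

__attribute__((returns_nonnull)) void *myAllocate(std::size_t Size);

struct Node {
  int Value;
  explicit Node(int V) : Value(V) {}
};

Node *makeNode(int V) {
  void *Mem = myAllocate(sizeof(Node));
  // Placement new is declared nothrow, so without returns_nonnull the
  // compiler would emit "if (Mem != nullptr)" before running the
  // constructor; with the attribute that check folds away.
  return new (Mem) Node(V);
}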
Josh Klontz [Thu, 21 Aug 2014 12:55:27 +0000 (12:55 +0000)]
X86AsmPrinter MCJIT MSVC bug fix.
Summary:
This bug was introduced in r213006, which makes the assumption that the MCSection is COFF for Windows MSVC. This assumption is broken for MCJIT users, where ELF is used instead [1]. The fix is to change the MCSection cast to a dyn_cast.
Oliver Stannard [Thu, 21 Aug 2014 12:50:31 +0000 (12:50 +0000)]
[ARM] Enable DP copy, load and store instructions for FPv4-SP
The FPv4-SP floating-point unit is generally referred to as
single-precision only, but it does have double-precision registers and
load, store and GPR<->DPR move instructions which operate on them.
This patch enables the use of these registers, the main advantage of
which is that we now comply with the AAPCS-VFP calling convention.
This partially reverts r209650, which added some AAPCS-VFP support,
but did not handle return values or alignment of double arguments in
registers.
This patch also adds tests for Thumb2 code generation for
floating-point instructions and intrinsics, which previously only
existed for ARM.