The existing code would unnecessarily break LDRD/STRD apart with
non-adjacent registers; on thumb2 this is not necessary.
Ideally on thumb2 we shouldn't match for ldrd/strd pre-regalloc anymore,
as there is no reason to set register hints anymore; changing that is
something for a future patch, however.
Matthias Braun [Mon, 1 Jun 2015 22:31:17 +0000 (22:31 +0000)]
AArch64: Use CMP;CCMP sequences for and/or/setcc trees.
Previously CCMP/FCCMP instructions were only used by the
AArch64ConditionalCompares pass for control flow. This patch uses them
for SELECT-like instructions as well by matching patterns in ISelLowering.
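As an illustrative example (mine, not from the patch), a select whose
condition is a conjunction of comparisons is the kind of source pattern
that can now lower to a CMP;CCMP;CSEL sequence instead of materializing
both boolean conditions separately:

  // Hypothetical C++ input; with this patch the '&&' of the two
  // comparisons feeding the select can be matched to a conditional
  // compare sequence rather than two independent compares plus an AND.
  int pick(int a, int b, int x, int y) {
    return (a > 0 && b < 5) ? x : y;
  }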
Owen Anderson [Mon, 1 Jun 2015 22:24:01 +0000 (22:24 +0000)]
Move the name pointer out of Value into a map that lives on the
LLVMContext. Production builds of clang do not set names on most
Values, so this is wasted space on almost all subclasses of Value.
This reduces the size of all Value subclasses by 8 bytes on 64-bit
hosts.
The one tricky part of this change is averting compile time regression
by keeping Value::hasName() fast. This required stealing bits out of
NumOperands.
With this change, peak memory usage on verify-uselistorder-nodbg.lto.bc
is decreased by approximately 2.3% (~3MB absolute on my machine).
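A minimal sketch of the general shape, with illustrative names and
layout (the real LLVM types differ): the name moves into a
context-owned side table, and a bit stolen from the operand count
records whether a lookup is needed, so hasName() stays fast:

  #include <map>
  #include <string>

  struct Node;  // stand-in for Value

  struct Context {  // stand-in for LLVMContext
    std::map<const Node *, std::string> Names;  // side table
  };

  struct Node {
    unsigned NumOperands : 31;  // bits "stolen" to make room below
    unsigned HasName : 1;       // set iff Names contains this node

    bool hasName() const { return HasName; }  // fast: no map lookup

    const std::string &getName(const Context &C) const {
      static const std::string Empty;
      return HasName ? C.Names.at(this) : Empty;
    }
  };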
Matthias Braun [Mon, 1 Jun 2015 21:26:26 +0000 (21:26 +0000)]
LiveRangeEdit: Fix liveranges not shrinking on subrange kill.
If an instruction is dead, we may have a last-use not only in the main
live range but also in a subregister range if subregisters are tracked,
and we need to partially rebuild live ranges in both cases.
The testcase only broke when subregister liveness was enabled. I
committed it in its current form because there is currently no flag to
enable/disable subregister liveness.
Frederic Riss [Mon, 1 Jun 2015 21:12:45 +0000 (21:12 +0000)]
[dsymutil] Use YAMLIO to dump debug map.
Doing so will allow us to also accept a YAML debug map as input, since
using YAMLIO gives us the parsing for free. Being able to have textual
debug maps will in turn allow much more control over the tests, because
1/ there is no need to check in a binary containing the debug map and
2/ it allows using the same objects/IR files with made-up debug maps to
test different scenarios.
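For readers unfamiliar with YAMLIO: one traits specialization drives
both yaml::Output and yaml::Input, which is why dumping through it
makes a textual debug map parseable back for free. A generic sketch
(this is not dsymutil's actual schema):

  #include "llvm/Support/YAMLTraits.h"
  #include "llvm/Support/raw_ostream.h"
  #include <cstdint>
  #include <string>

  struct Entry {
    std::string Name;
    uint64_t Address;
  };

  namespace llvm {
  namespace yaml {
  template <> struct MappingTraits<Entry> {
    static void mapping(IO &Io, Entry &E) {
      Io.mapRequired("name", E.Name);
      Io.mapRequired("address", E.Address);
    }
  };
  } // namespace yaml
  } // namespace llvm

  void dump(Entry &E) {
    llvm::yaml::Output Out(llvm::outs());
    Out << E;  // the same traits serve llvm::yaml::Input for parsing
  }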
Make the C++ LTO API easier to use from C++ clients.
Start using C++ types such as StringRef and MemoryBuffer in the C++ LTO
API. In doing so, clarify the ownership of the native object file: the caller
now owns it, not the LTOCodeGenerator. The C libLTO library has been modified
to use a derived class of LTOCodeGenerator that owns the object file.
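A sketch of the resulting ownership shape (names here are illustrative,
not the exact LTOCodeGenerator interface): the compile step hands back
an owning pointer, and the C wrapper keeps the old borrowed-pointer
contract by stashing the object in a derived class:

  #include <memory>

  struct NativeObject { /* compiled output */ };

  class CodeGen {  // stand-in for LTOCodeGenerator
  public:
    virtual ~CodeGen() = default;
    // The caller now owns the native object file.
    std::unique_ptr<NativeObject> compile() {
      return std::make_unique<NativeObject>();
    }
  };

  // C-API side (libLTO): derive and keep ownership internally so the
  // C interface can continue returning a non-owning pointer.
  class CAPICodeGen : public CodeGen {
    std::unique_ptr<NativeObject> Owned;
  public:
    const NativeObject *compileAndKeep() {
      Owned = compile();
      return Owned.get();
    }
  };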
lit: Allow configurations to restrict the set of tests to run
By setting limit_to_features to a non-empty list of features, a
configuration can restrict the set of tests to run to only those tests
that require a feature in this list.
Owen Anderson [Mon, 1 Jun 2015 17:26:30 +0000 (17:26 +0000)]
Disable MachineSink on convergent operations, similar to how IR Sink is
restricted. No test because no in-tree target currently has convergent
MachineInstrs.
[mips][FastISel] Clobber HI0/LO0 registers in MUL instructions.
Summary:
The contents of the HI/LO registers are unpredictable after the execution of
the MUL instruction. In addition to implicitly defining these registers in the
MUL instruction definition, we have to mark those registers as dead too.
Without this, the fast register allocator runs out of registers when
the MUL instruction is followed by another one that tries to allocate
the AC0 register.
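For context, marking an implicit def as dead when building a
MachineInstr generally looks like the sketch below (using the RegState
flags from MachineInstrBuilder.h; the surrounding FastISel variables
are assumed to be in scope, and this is not the literal code from the
patch):

  #include "llvm/CodeGen/MachineInstrBuilder.h"

  // Assumed in scope: MBB, InsertPt, DL, TII, ResultReg, LHS, RHS.
  // HI0/LO0 are clobbered but never read, so their defs are dead.
  BuildMI(*MBB, InsertPt, DL, TII.get(Mips::MUL), ResultReg)
      .addReg(LHS)
      .addReg(RHS)
      .addReg(Mips::HI0, RegState::ImplicitDefine | RegState::Dead)
      .addReg(Mips::LO0, RegState::ImplicitDefine | RegState::Dead);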
Hans Wennborg [Mon, 1 Jun 2015 15:37:58 +0000 (15:37 +0000)]
Drop remaining Dragonegg support in release scripts
r236077 and r236081 dropped Dragonegg support from the release scripts
but left some pieces. The most notable change is that Dragonegg won't
be tagged any more.
Artur Pilipenko [Mon, 1 Jun 2015 14:53:55 +0000 (14:53 +0000)]
Add isConstant argument to MDBuilder::createTBAAStructTagNode
According to the TBAA description, a struct-path tag node can have an optional IsConstant field. Add a corresponding argument to MDBuilder::createTBAAStructTagNode.
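A small usage sketch, assuming the post-patch signature simply takes
the flag as a trailing bool:

  #include "llvm/IR/MDBuilder.h"
  using namespace llvm;

  MDNode *makeConstAccessTag(MDBuilder &MDB, MDNode *BaseTy,
                             MDNode *AccessTy, uint64_t Offset) {
    // The final argument marks the access as constant memory.
    return MDB.createTBAAStructTagNode(BaseTy, AccessTy, Offset,
                                       /*IsConstant=*/true);
  }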
Greg Bedwell [Mon, 1 Jun 2015 12:41:55 +0000 (12:41 +0000)]
In MSVC builds embed a VERSIONINFO resource in our exe and DLL files.
This embeds Windows version information into our executables and DLLs.
The most visible place to view this data is in the Details tab of the
file properties window in Windows Explorer.
AVX-512: Implemented vector shuffle lowering for v8i64 and v8f64 types.
I removed vector-shuffle-512-v8.ll; it is an auto-generated test and is no longer valid.
David Blaikie [Mon, 1 Jun 2015 03:09:34 +0000 (03:09 +0000)]
[opaque pointer type] Explicitly store the pointee type of the result of a GEP
Alternatively, this type could be derived on-demand whenever
getResultElementType is called - if someone thinks that's the better
choice (simple time/space tradeoff), I'm happy to give it a go.
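For reference, the on-demand derivation mentioned above would amount to
recomputing the pointee type from the GEP's own result type, roughly as
below (a sketch against the IR API of this era, where pointer types
still carried their element type):

  #include "llvm/IR/Instructions.h"
  using namespace llvm;

  Type *resultElementType(GetElementPtrInst &GEP) {
    // getScalarType() also handles GEPs returning vectors of pointers.
    return cast<PointerType>(GEP.getType()->getScalarType())
        ->getElementType();
  }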
David Majnemer [Mon, 1 Jun 2015 00:15:08 +0000 (00:15 +0000)]
[PHITransAddr] Don't translate unreachable values
Unreachable values may use themselves in strange ways due to their
dominance property. Attempting to translate through them can lead to
infinite recursion, crashing LLVM. Instead, claim that we weren't able
to translate the value.
Keno Fischer [Sun, 31 May 2015 23:37:04 +0000 (23:37 +0000)]
[DWARF] Fix a bug in line info handling
This fixes a bug in the line info handling in the dwarf code, based on
a problem I encountered when implementing RelocVisitor support for MachO.
Since addr+size will give the first address past the end of the function,
we need to back up one line table entry. Fix this by looking up the
end_addr-1, which is the last address in the range. Note that this also
removes a duplicate output from the llvm-rtdyld line table dump. The
relevant line is the end_sequence one in the line table; it has an
offset of the first address past the end of the range and hence should
not be included.
Also factor out the common functionality into a separate function.
This comes up on MachO much more than on ELF, since MachO doesn't
store the symbol size separately, which makes said situation always
occur there.
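The arithmetic behind the fix, as a trivial sketch (not the literal
code):

  #include <cstdint>

  // addr + size is the first address *past* the function, so the line
  // table must be queried at the last address still inside the range.
  // E.g. Addr=0x1000, Size=0x20: 0x1020 already belongs to whatever
  // follows; 0x101f is the last address this function covers.
  uint64_t lastAddressInRange(uint64_t FuncAddr, uint64_t FuncSize) {
    return FuncAddr + FuncSize - 1;
  }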
Tim Northover [Sun, 31 May 2015 19:22:07 +0000 (19:22 +0000)]
ARM: recommit r237590: allow jump tables to be placed as constant islands.
The original version didn't properly account for the base register
being modified before the final jump, so caused miscompilations in
Chromium and LLVM. I've fixed this and tested with an LLVM self-host
(I don't have the means to build & test Chromium).
The general idea remains the same: in pathological cases jump tables
can be too far away from the instructions referencing them (like other
constants) so they need to be movable.
Keno Fischer [Sat, 30 May 2015 19:44:53 +0000 (19:44 +0000)]
Add RelocVisitor support for MachO
This commit adds partial support for MachO relocations to RelocVisitor.
A simple test case is added to show that relocations are indeed being
applied and that using llvm-dwarfdump on MachO files no longer errors.
Correctness is not yet tested, due to an unrelated bug in DebugInfo,
which will be fixed with appropriate testcase in a followup commit.
Chandler Carruth [Sat, 30 May 2015 10:35:03 +0000 (10:35 +0000)]
[x86] Unify the horizontal adding used for popcount lowering, taking
the best approach of each.
For vNi16, we use the SHL + ADD + SRL pattern, which seems easily the best.
For vNi32, we use the PUNPCK + PSADBW + PACKUSWB pattern. In some cases
there is a huge improvement with this in IACA's estimated throughput --
over 2x higher throughput!!!! -- but the measurements are too good to be
true. In one narrow case, the SHL + ADD + SHL + ADD + SRL pattern looks
slightly faster, but I'm not sure I believe any of the measurements at
this point. Both are the exact same uops though. Hard to be confident of
anything past that.
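As a per-lane scalar model of the vNi16 pattern above (illustrative
only): each 16-bit lane starts out holding two byte popcounts, and
SHL + ADD + SRL folds them into a single count:

  #include <cstdint>

  // Lane layout on entry: [hi_count : lo_count], each count <= 8.
  uint16_t sum_byte_counts(uint16_t lane) {
    uint16_t shl = (uint16_t)(lane << 8);   // SHL: move lo_count up
    uint16_t add = (uint16_t)(shl + lane);  // ADD: hi+lo in high byte
    return add >> 8;                        // SRL: bring the sum down
  }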
If anyone wants to collect very detailed (Agner-level) timings with the
result of this patch, or with the i32 case replaced with SHL + ADD + SHL
+ ADD + SRL, I'd be very interested. Note that you'll need to test it on
both Ivybridge and Haswell, with SSE3, SSSE3, and AVX selected, as
I saw unique behavior in each of these buckets with IACA, all of which
should be checked against measured performance.
But this patch is still a useful improvement by dropping duplicate work
and getting the much nicer PSADBW lowering for v2i64.
I'd still like to rephrase this in terms of generic horizontal sum. It's
a bit lame to have a special case of that just for popcount.
Renato Golin [Sat, 30 May 2015 10:30:02 +0000 (10:30 +0000)]
[ARMTargetParser] Move IAS arch ext parser. NFC
The plan was to move the whole table into the already existing ArchExtNames
but some fields depend on a table-generated file, and we don't yet have this
feature in the generic lib/Support side.
Until the minimum target-specific table-generated files are available
in a generic fashion to these libraries, we'll have to keep it in the
ASM parser.
Craig Topper [Sat, 30 May 2015 07:36:01 +0000 (07:36 +0000)]
[TableGen] Merge RecTy::typeIsConvertibleTo and RecTy::baseClassOf. NFC
typeIsConvertibleTo was just calling baseClassOf(this) on the argument passed to it, but there weren't different signatures for baseClassOf, so passing 'this' didn't really do anything interesting. typeIsConvertibleTo could have just been a non-virtual method in RecTy, but since that would be kind of a silly method, I instead re-distributed the logic from baseClassOf into typeIsConvertibleTo.
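Schematically, the shape being removed was the trivial virtual
round-trip below; afterwards each type answers directly (a sketch of
the pattern, not the literal TableGen code):

  // Before (sketch): typeIsConvertibleTo bounced through baseClassOf.
  bool RecTy::typeIsConvertibleTo(const RecTy *RHS) const {
    return RHS->baseClassOf(this);
  }

  // After (sketch): the subclass logic lives in typeIsConvertibleTo
  // itself, e.g. a bits<n> type checks the target kind directly.
  bool BitsRecTy::typeIsConvertibleTo(const RecTy *RHS) const {
    return RHS->getRecTyKind() == BitsRecTyKind;  // plus size checks
  }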
Craig Topper [Sat, 30 May 2015 07:34:51 +0000 (07:34 +0000)]
[TableGen] Remove all the variations of RecTy::convertValue and just handle the conversions in convertInitializerTo directly. This saves a bunch of vtable entries. NFC
David Majnemer [Sat, 30 May 2015 04:56:02 +0000 (04:56 +0000)]
[WinCOFF] Add support for the .safeseh directive
.safeseh adds an entry to the .sxdata section to register all the
appropriate functions which may handle an exception. This entry is not
a relocation to the symbol but instead the symbol table index of the
function.
Chandler Carruth [Sat, 30 May 2015 04:23:13 +0000 (04:23 +0000)]
[x86] Replace the long spelling of getting a bitcast with the *much*
shorter one. NFC.
In addition to being much shorter to type and requiring fewer arguments,
this change saves over 30 lines from this one file, all wasted on total
boilerplate...
Chandler Carruth [Sat, 30 May 2015 04:05:11 +0000 (04:05 +0000)]
[x86] Restore the bitcasts I removed when refactoring this to avoid
shifting vectors of bytes as x86 doesn't have direct support for that.
This removes a bunch of redundant masking in the generated code for SSE2
and SSE3.
In order to avoid the really significant code size growth this would
have triggered, I also factored the completely repetitive logic for
shifting and masking into two lambdas, which in turn makes all of this
much easier to read IMO.
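The underlying trick, as a generic intrinsics sketch (not this patch's
DAG code): x86 has no byte-element shift, so shift the 16-bit lanes and
mask off the bits that leaked across byte boundaries:

  #include <emmintrin.h>  // SSE2

  // Logical left shift of every byte by 4: PSLLW shifts across byte
  // boundaries, so mask each byte back to its own high nibble.
  __m128i shl4_epi8(__m128i v) {
    __m128i shifted = _mm_slli_epi16(v, 4);
    __m128i mask = _mm_set1_epi8((char)0xF0);
    return _mm_and_si128(shifted, mask);
  }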
Chandler Carruth [Sat, 30 May 2015 03:20:59 +0000 (03:20 +0000)]
[x86] Implement a faster vector population count based on the PSHUFB
in-register LUT technique.
Summary:
A description of this technique can be found here:
http://wm.ite.pl/articles/sse-popcount.html
The core of the idea is to use an in-register lookup table and the
PSHUFB instruction to compute the population count for the low and high
nibbles of each byte, and then to use horizontal sums to aggregate these
into vector population counts with wider element types.
On x86 there is an instruction that will directly compute the horizontal
sum for the low 8 and high 8 bytes, giving vNi64 popcount very easily.
Various tricks are used to get vNi32 and vNi16 from the vNi8 that the
LUT computes.
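The nibble-LUT step from the linked article looks roughly like this as
intrinsics (a sketch of the technique itself, not the DAG lowering this
patch adds):

  #include <tmmintrin.h>  // SSSE3

  // Per-byte popcount: PSHUFB uses each nibble as an index into a
  // 16-entry table holding the popcounts of 0..15.
  __m128i popcnt_epi8(__m128i v) {
    const __m128i lut = _mm_setr_epi8(0, 1, 1, 2, 1, 2, 2, 3,
                                      1, 2, 2, 3, 2, 3, 3, 4);
    const __m128i low4 = _mm_set1_epi8(0x0F);
    __m128i lo = _mm_and_si128(v, low4);
    __m128i hi = _mm_and_si128(_mm_srli_epi16(v, 4), low4);
    return _mm_add_epi8(_mm_shuffle_epi8(lut, lo),
                        _mm_shuffle_epi8(lut, hi));
  }

  // vNi64 case: PSADBW against zero horizontally sums the byte counts
  // of each 8-byte half into the corresponding 64-bit lane.
  __m128i popcnt_epi64(__m128i v) {
    return _mm_sad_epu8(popcnt_epi8(v), _mm_setzero_si128());
  }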
The base implementation of this, and most of the work, was done by Bruno
in a follow up to D6531. See Bruno's detailed post there for lots of
timing information about these changes.
I have extended Bruno's patch in the following ways:
0) I committed the new tests with baseline sequences so this shows
a diff, and regenerated the tests using the update scripts.
1) Bruno had noticed and mentioned in IRC a redundant mask that
I removed.
2) I introduced a particular optimization for the i32 vector cases where
we use PSHL + PSADBW to compute the low i32 popcounts, and PSHUFD
+ PSADBW to compute doubled high i32 popcounts. This takes advantage
of the fact that to line up the high i32 popcounts we have to shift
them anyways, and we can shift them by one fewer bit to effectively
divide the count by two. While the PSHUFD based horizontal add is no
faster, it doesn't require registers or load traffic the way a mask
would, and provides more ILP as it happens on different ports with
high throughput.
3) I did some code cleanups throughout to simplify the implementation
logic.
4) I refactored it to continue to use the parallel bitmath lowering when
SSSE3 is not available to preserve the performance of that version on
SSE2 targets where it is still much better than scalarizing as we'll
still do a bitmath implementation of popcount even in scalar code
there.
With #1 and #2 above, I analyzed the result in IACA for sandybridge,
ivybridge, and haswell. In every case I measured, the throughput is the
same or better using the LUT lowering, even v2i64 and v4i64, and even
compared with using the native popcnt instruction! The latency of the
LUT lowering is often higher than the latency of the scalarized popcnt
instruction sequence, but I think those latency measurements are deeply
misleading. Keeping the operation fully in the vector unit and having
many chances for increased throughput seems much more likely to win.
With this, we can lower every integer vector popcount implementation
using the LUT strategy if we have SSSE3 or better (and thus have
PSHUFB). I've updated the operation lowering to reflect this. This also
fixes an issue where we were scalarizing horribly some AVX lowerings.
Finally, there are some remaining cleanups. There is duplication between
the two techniques in how they perform the horizontal sum once the byte
population count is computed. I'm going to factor and merge those two in
a separate follow-up commit.