Previously, subtarget features were a bitfield with the underlying type being uint64_t.
Since several targets (X86 and ARM, in particular) have hit or come very close to hitting this bound, switch the features to use a bitset.
No functional change.
The first several times this was committed (e.g. r229831, r233055), it caused several buildbot failures.
Apparently the reason for most failures was both clang and gcc's inability to deal with large numbers (> 10K) of bitset constructor calls in tablegen-generated initializers of instruction info tables.
This should now be fixed.
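As a hedged illustration of the shape of this change (the names and the feature count below are assumptions, not the actual LLVM definitions), the move is essentially from a fixed 64-bit mask to a std::bitset-backed container:

```cpp
#include <bitset>
#include <cstdint>

// Illustrative sketch only -- not the real LLVM code. A uint64_t mask caps a
// target at 64 subtarget features; a std::bitset-based type lifts that limit.
constexpr unsigned MAX_SUBTARGET_FEATURES = 128;   // assumed size for the sketch

using OldFeatureBits = uint64_t;                            // at most 64 features
using FeatureBitset  = std::bitset<MAX_SUBTARGET_FEATURES>; // room to grow

inline bool hasFeature(const FeatureBitset &Bits, unsigned Feature) {
  return Bits.test(Feature);   // replaces (Bits & (1ULL << Feature)) != 0
}
```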
Daniel Sanders [Tue, 26 May 2015 10:19:18 +0000 (10:19 +0000)]
[mips] Make TTypeEncoding indirect to allow .eh_frame to be read-only.
Summary:
Following on from r209907 which made personality encodings indirect, do the
same for TType encodings. This fixes the case where a try/catch block needs
to generate references to, for example, std::exception in the
.gcc_except_table.
This commit uses DW_EH_PE_sdata8 for N64 as far as is possible at the moment.
However, it is possible to end up with DW_EH_PE_sdata4 when a TargetMachine is
not available. There's no risk of inconsistency here since the tables are
self-describing, but it does mean there is a small chance of the
PC-relative offset being out of range for particularly large programs.
Craig Topper [Tue, 26 May 2015 08:07:56 +0000 (08:07 +0000)]
[TableGen] Fix line wrapping logic for the autogenerated header to use math that makes more sense (at least to me).
The old code had a bug if the description was between 75 and 85 characters or so, as it subtracted PSLen from Desc.size() instead of from MAX_LINE_LEN in the compare. It also calculated odd values for PosE on the last split and just let StringRef::slice take care of it being larger than the description string.
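For reference, a minimal sketch of the intended wrapping math (illustrative only; not the actual TableGen code -- the names Desc, PSLen, and MAX_LINE_LEN are borrowed from the description above):

```cpp
#include <algorithm>
#include <iostream>
#include <string>

// Hedged sketch: wrap Desc so that each emitted line, including the already
// printed prefix of length PSLen, stays within MAX_LINE_LEN. The per-line
// budget is derived from MAX_LINE_LEN, not from Desc.size(). Splits at
// character granularity for brevity.
static const std::size_t MAX_LINE_LEN = 80;

void emitWrapped(std::ostream &OS, const std::string &Prefix,
                 const std::string &Desc) {
  std::size_t PSLen = Prefix.size();
  std::size_t Budget = MAX_LINE_LEN > PSLen ? MAX_LINE_LEN - PSLen : 1;
  for (std::size_t PosB = 0; PosB < Desc.size();) {
    std::size_t PosE = std::min(PosB + Budget, Desc.size());
    OS << Prefix << Desc.substr(PosB, PosE - PosB) << '\n';
    PosB = PosE;
  }
}
```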
Craig Topper [Tue, 26 May 2015 08:07:49 +0000 (08:07 +0000)]
[TableGen] Rewrite an assert to not do a bunch of unsigned math and then try to ensure the result is a positive number.
I think the fact that it was explicitly excluding 0 kept this from being a tautology. The exclusion of 0 for the old math was also a bug that's easily hit if the description gets split into multiple lines.
Bjorn Steinbrink [Mon, 25 May 2015 19:46:38 +0000 (19:46 +0000)]
Remove conflicting attributes before adding deduced readonly/readnone
Summary:
In case of functions that have a pointer argument and only pass it to
each other, the function attributes pass deduces that the pointer should
get the readnone attribute, but fails to remove a readonly attribute
that may already have been present.
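Conceptually, the fix amounts to dropping any conflicting attribute before the deduced one is added. A minimal sketch, using plain standard-library types rather than the LLVM attribute API:

```cpp
#include <set>
#include <string>

// Illustrative only: readnone and readonly must not coexist, so strip the
// conflicting one before inserting the newly deduced attribute.
using AttrSet = std::set<std::string>;

void addDeducedAttr(AttrSet &Attrs, const std::string &NewAttr) {
  if (NewAttr == "readnone")
    Attrs.erase("readonly");
  else if (NewAttr == "readonly")
    Attrs.erase("readnone");
  Attrs.insert(NewAttr);
}
```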
Tom Stellard [Mon, 25 May 2015 16:15:50 +0000 (16:15 +0000)]
R600/SI: Use NAME rather than opName as the key to the MCOpcode tables
This lets us drop the opName parameter from the VINTRP
multiclass and makes it possible to create multiple VINTRP defs
with the same asm mnemonic.
Kit Barton [Mon, 25 May 2015 15:49:26 +0000 (15:49 +0000)]
This patch adds support for the vector quadword add/sub instructions introduced
in POWER8:
vadduqm
vaddeuqm
vaddcuq
vaddecuq
vsubuqm
vsubeuqm
vsubcuq
vsubecuq
In addition to adding the instructions themselves, it also adds support for the
v1i128 type for intrinsics (Intrinsics.td, Function.cpp, and
IntrinsicEmitter.cpp).
[X86] When pattern-matching scalar FMA3 intrinsics, don't re-arrange the first and second operands.
The semantics of the scalar FMA intrinsics are that the high vector elements are copied from the first source.
The existing pattern switches src1 and src2 around, to match the "213" order, which ends up tying the original src2 to the dest. Since the actual scalar fma3 instructions copy the high elements from the dest register, the wrong values are copied.
This modifies the pattern to leave src1 and src2 in their original order.
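The semantics in question can be seen directly from the C/C++ intrinsic: the low element is the fused multiply-add, and the upper elements are taken from the first source operand. A small standalone check (build with FMA enabled, e.g. -mfma):

```cpp
#include <immintrin.h>
#include <cstdio>

int main() {
  __m128 a = _mm_setr_ps(1.0f, 10.0f, 20.0f, 30.0f);
  __m128 b = _mm_setr_ps(2.0f, 99.0f, 99.0f, 99.0f);
  __m128 c = _mm_setr_ps(3.0f, 88.0f, 88.0f, 88.0f);
  __m128 r = _mm_fmadd_ss(a, b, c);   // low lane: 1*2 + 3 = 5
  float out[4];
  _mm_storeu_ps(out, r);
  // Expected: 5 10 20 30 -- the high lanes come from a (the first source),
  // so a pattern that ties them to another operand copies the wrong values.
  std::printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);
  return 0;
}
```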
Chandler Carruth [Mon, 25 May 2015 01:00:46 +0000 (01:00 +0000)]
[Unroll] Switch from an eagerly populated SCEV cache to one that is
lazily built.
Also, make it a much more generic SCEV cache, which today exposes only
a reduced GEP model description but could be extended in the future to
do other profitable caching of SCEV information.
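The general pattern is the usual compute-on-first-lookup cache. A minimal sketch (illustrative; not the actual loop-unroll analysis code):

```cpp
#include <unordered_map>

// Entries are computed only when first requested, instead of eagerly
// populating the whole cache up front.
template <typename Key, typename Value, typename ComputeFn>
class LazyCache {
  std::unordered_map<Key, Value> Cache;
  ComputeFn Compute;

public:
  explicit LazyCache(ComputeFn C) : Compute(C) {}

  const Value &get(const Key &K) {
    auto It = Cache.find(K);
    if (It == Cache.end())
      It = Cache.emplace(K, Compute(K)).first;   // compute on demand
    return It->second;
  }
};
```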
AsmPrinter: Avoid creating symbols in DwarfStringPool
Stop creating symbols we don't need in `DwarfStringPool`. The consumers
only call `DwarfStringPoolEntryRef::getSymbol()` when DWARF is
relocatable, so this just stops creating the unused symbols when it's
not. This drops memory usage from 851 MB to 845 MB, around 0.7%.
(I'm looking at `llc` memory usage on `verify-uselistorder.lto.opt.bc`;
see r236629 for details.)
AsmPrinter: Avoid EmitLabelDifference() in DwarfAccelTable
Mint a new function, `AsmPrinter::emitDwarfStringOffset()`, which takes
a `DwarfStringPoolEntryRef`. When DWARF is relocatable across sections,
this defers to `emitSectionOffset()` and emits the `MCSymbol`;
otherwise, just emit the offset directly, without using any intermediate
symbols.
`EmitLabelDifference()` is already optimized to emit absolute label
differences cheaply when possible, so there aren't any major memory
savings here (853 MB down to 851 MB, or 0.2%). However, it prepares for
making the `MCSymbol`s in the `DwarfStringPool` optional.
(I'm looking at `llc` memory usage on `verify-uselistorder.lto.opt.bc`;
see r236629 for details.)
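The shape of the helper is roughly a two-way choice: emit a symbol reference when DWARF needs relocations, otherwise emit the raw pool offset. A hedged sketch with illustrative types (not the real AsmPrinter or MC classes):

```cpp
#include <cstdint>
#include <ostream>
#include <string>

struct StringPoolEntry {
  std::string Symbol;   // stands in for the MCSymbol name
  uint64_t Offset;      // byte offset of the string within .debug_str
};

// When string offsets must be relocatable across sections, reference the
// symbol; otherwise emit the integer offset directly and never create or
// touch a symbol at all.
void emitStringOffset(std::ostream &OS, const StringPoolEntry &E,
                      bool DwarfUsesRelocations) {
  if (DwarfUsesRelocations)
    OS << "\t.long\t" << E.Symbol << '\n';
  else
    OS << "\t.long\t" << E.Offset << '\n';
}
```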
Expose the `DwarfStringPool` entry in a header, and store a pointer to
it directly in `DIEString`. Instead of choosing at creation time how to
emit it, use the `dwarf::Form` to determine that at emission time.
Besides avoiding the other `DIEValue`, this shaves two pointers off of
`DIEString`; the data is now a single pointer. This is a nice cleanup
on its own -- and drops memory usage from 861 MB down to 853 MB, around
0.9% -- but it's also preparation for passing `DIEValue`s by value.
(I'm looking at `llc` memory usage on `verify-uselistorder.lto.opt.bc`;
see r236629 for details.)
AsmPrinter: Extract DwarfStringPoolEntry from DwarfStringPool, NFC
Extract out `DwarfStringPoolEntry` and `DwarfStringPoolEntryRef` from
`DwarfStringPool` so that downstream users can start using
`DwarfStringPool::getEntry()` directly. This will allow users to delay
the decision between emitting a symbol or an offset until later.
AsmPrinter: Emit the DwarfStringPool offset directly when possible
Change `DwarfStringPool` to calculate byte offsets on-the-fly, and
update `DwarfUnit::getLocalString()` to use a `DIEInteger` instead of a
`DIEDelta` when Dwarf doesn't use relocations (i.e., Mach-O). This
eliminates another call to `EmitLabelDifference()`, and drops memory
usage from 865 MB down to 861 MB, around 0.5%.
(I'm looking at `llc` memory usage on `verify-uselistorder.lto.opt.bc`;
see r236629 for details.)
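A minimal sketch of computing pool offsets as strings are added (illustrative; not the real DwarfStringPool): each new entry's offset is simply the running size of the pool, so no label differences are needed at emission time.

```cpp
#include <cstdint>
#include <map>
#include <string>

class StringPool {
  std::map<std::string, uint64_t> Entries;
  uint64_t NumBytes = 0;

public:
  // Returns the byte offset of S within the pool, adding it if necessary.
  uint64_t getOffset(const std::string &S) {
    auto It = Entries.find(S);
    if (It != Entries.end())
      return It->second;
    uint64_t Offset = NumBytes;
    Entries.emplace(S, Offset);
    NumBytes += S.size() + 1;   // account for the NUL terminator
    return Offset;
  }
};
```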
This test was relying on the numbering of preceding .set directives, but
an upcoming commit is going to remove some of them. Make the CHECKs
more nuanced.
Vince Harron [Sun, 24 May 2015 13:24:31 +0000 (13:24 +0000)]
Remove log2 dependency when building against Android API-9 SDK
Android's API-9 SDK is missing log2 builtins. A previous commit added
support for building against this API revision but this requires log2l
to be present. (And it doesn't seem to be defined, despite being in
the headers.)
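One common way to cope with a libm that lacks log2/log2l is to derive it from the natural logarithm; a hedged sketch of that fallback (not necessarily the exact workaround used for the Android API-9 build):

```cpp
#include <cmath>

// Fallbacks in terms of log(); usable when a dedicated log2 is unavailable,
// at a small cost in precision and speed.
static inline double portable_log2(double x) {
  return std::log(x) / std::log(2.0);
}

static inline long double portable_log2l(long double x) {
  return std::log(x) / std::log(2.0L);
}
```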
Hal Finkel [Sat, 23 May 2015 12:18:10 +0000 (12:18 +0000)]
[PowerPC] Fix fast-isel when compare is split from branch
When the compare feeding a branch was in a different BB from the branch, we'd
try to "regenerate" the compare in the block with the branch, possibly trying
to make use of values not available there. Copy a page from AArch64's playbook
here to fix the problem (at least in terms of correctness).
Remove all virtual functions from `DIEValue`, dropping the vtable
pointer from its layout. Instead, create "impl" functions on the
subclasses, and use the `DIEValue::Type` to implement the dynamic
dispatch.
This is necessary -- obviously not sufficient -- for passing `DIEValue`s
around by value. However, this change stands on its own: we make tons
of these. I measured a drop in memory usage from 888 MB down to 860 MB,
or around 3.2%.
(I'm looking at `llc` memory usage on `verify-uselistorder.lto.opt.bc`;
see r236629 for details.)
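A hedged sketch of the tagged-dispatch pattern described here (the names are illustrative, not the real DIEValue hierarchy): the vtable pointer is replaced by a small type tag, and the former virtual call becomes a switch that forwards to non-virtual impl methods.

```cpp
#include <cstdint>
#include <cstdio>

struct IntegerImpl {
  uint64_t Value;
  unsigned sizeOfImpl() const { return 8; }
};
struct StringImpl {
  const char *Str;
  unsigned sizeOfImpl() const { return 4; }   // e.g. a 4-byte string offset
};

class Value {
public:
  enum Type { isInteger, isString };

private:
  Type Ty;
  union {                 // no vtable pointer; the tag drives dispatch
    IntegerImpl Int;
    StringImpl Str;
  };

public:
  explicit Value(IntegerImpl V) : Ty(isInteger), Int(V) {}
  explicit Value(StringImpl V) : Ty(isString), Str(V) {}

  unsigned sizeOf() const {
    switch (Ty) {
    case isInteger: return Int.sizeOfImpl();
    case isString:  return Str.sizeOfImpl();
    }
    return 0;
  }
};

int main() {
  Value A{IntegerImpl{42}}, B{StringImpl{"str"}};
  std::printf("%u %u\n", A.sizeOf(), B.sizeOf());   // dispatch without a vtable
  return 0;
}
```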
Akira Hatanaka [Sat, 23 May 2015 01:14:08 +0000 (01:14 +0000)]
Stop resetting NoFramePointerElim in TargetMachine::resetTargetOptions.
This is part of the work to remove TargetMachine::resetTargetOptions.
In this patch, instead of updating global variable NoFramePointerElim in
resetTargetOptions, its use in DisableFramePointerElim is replaced with a call
to TargetFrameLowering::noFramePointerElim. This function determines on a
per-function basis if frame pointer elimination should be disabled.
There is no change in functionality except that the cl::opt option "disable-fp-elim"
can now override function attribute "no-frame-pointer-elim".
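A hedged sketch of what a per-function check like this looks like (not the exact TargetFrameLowering implementation): consult the function's "no-frame-pointer-elim" string attribute rather than a global flag.

```cpp
#include "llvm/IR/Attributes.h"
#include "llvm/IR/Function.h"

static bool disableFPElimFor(const llvm::Function &F) {
  // The attribute carries a string value; treat anything other than an
  // explicit "true" as not disabling frame-pointer elimination.
  llvm::Attribute A = F.getFnAttribute("no-frame-pointer-elim");
  return A.isStringAttribute() && A.getValueAsString() == "true";
}
```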
[lib/Fuzzer] Start getting rid of std::cerr. Sadly, these parts of the C++ library used in libFuzzer badly interact with the same code used in the target function and also with dfsan. It's easier to just not use std::cerr than to defeat these issues.
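One way to do that is a thin printf-style wrapper over stderr, so the fuzzer's diagnostics share as little C++ iostream machinery as possible with the code under test. A hedged sketch (the Printf name here is illustrative):

```cpp
#include <cstdarg>
#include <cstdio>

// Route all fuzzer output through stdio rather than std::cerr.
static void Printf(const char *Fmt, ...) {
  va_list Args;
  va_start(Args, Fmt);
  std::vfprintf(stderr, Fmt, Args);
  va_end(Args);
  std::fflush(stderr);
}
```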
Philip Reames [Fri, 22 May 2015 23:53:24 +0000 (23:53 +0000)]
Extend EarlyCSE to handle basic cases from JumpThreading and CVP
This patch extends EarlyCSE to take advantage of the information that a controlling branch gives us about the value of a Value within this and dominated basic blocks. If the current block has a single predecessor with a controlling branch, we can infer what the branch condition must have been to execute this block. The actual change to support this is downright simple because EarlyCSE's existing scoped hash table logic deals with most of the complexity around merging.
The patch actually implements two optimizations.
1) The first is analogous to JumpThreading in that it enables EarlyCSE's CSE handling to convert branches that are exactly redundant with a previous branch into branches on constants. (It doesn't actually replace the branch or change the CFG.) This is pretty clearly a win since it enables substantial CFG simplification before we start trying to inline.
2) The second is analogous to CVP in that it exploits the knowledge gained to replace dominated *uses* of the original value. EarlyCSE does not otherwise reason about specific uses, so this is the more arguable one. It does enable further simplification and constant folding within the rest of the visit by EarlyCSE.
In both cases, the added code only handles the easy dominance based case of each optimization. The general case is deferred to the existing passes.
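A hedged sketch of the single-predecessor inference described above (not the actual EarlyCSE code): if this block's only predecessor ends in a conditional branch and exactly one of its edges reaches us, the branch condition has a known value throughout the block, and dominated uses of it can be treated as that constant.

```cpp
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/Instructions.h"

using namespace llvm;

// Returns the controlling condition and its known value in BB, or nullptr
// if no such inference is possible.
static Value *getKnownCondition(BasicBlock *BB, bool &KnownValue) {
  BasicBlock *Pred = BB->getSinglePredecessor();
  if (!Pred)
    return nullptr;
  auto *BI = dyn_cast<BranchInst>(Pred->getTerminator());
  if (!BI || !BI->isConditional())
    return nullptr;
  // Only usable if we can tell which edge was taken to get here.
  if (BI->getSuccessor(0) == BB && BI->getSuccessor(1) != BB) {
    KnownValue = true;                 // condition was true on this edge
    return BI->getCondition();
  }
  if (BI->getSuccessor(1) == BB && BI->getSuccessor(0) != BB) {
    KnownValue = false;                // condition was false on this edge
    return BI->getCondition();
  }
  return nullptr;
}
```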
David Majnemer [Fri, 22 May 2015 23:02:11 +0000 (23:02 +0000)]
[InstCombine] Don't eagerly propagate nsw for A*B+A*C => A*(B+C)
InstCombine transforms A *nsw B +nsw A *nsw C to A *nsw (B + C).
This is incorrect -- e.g. if A = -1, B = 1, C = INT_SMAX. Then
nothing in the LHS overflows, but the multiplication in RHS overflows.
We need to first make sure that we won't multiply by INT_SMAX + 1.
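A concrete 8-bit analogue of the example (127 plays the role of INT_SMAX); the computation is done in plain int so the out-of-range intermediate values stay visible:

```cpp
#include <cstdint>
#include <cstdio>

int main() {
  int A = -1, B = 1, C = 127;

  // Original form: every intermediate result fits in [-128, 127].
  int LHS = A * B + A * C;                    // -1 + -127 = -128

  // Folded form A * (B + C): the sum 128 is already out of int8_t range,
  // and if the add wraps to -128, the product -1 * -128 = 128 is out of
  // range too. Either way, nsw cannot simply be copied onto the new ops.
  int Sum = B + C;                            // 128
  int WrappedSum = static_cast<int8_t>(Sum);  // -128 on two's complement
  int Product = A * WrappedSum;               // 128

  std::printf("LHS=%d Sum=%d WrappedSum=%d A*(B+C)=%d\n",
              LHS, Sum, WrappedSum, Product);
  return 0;
}
```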
Ahmed Bougacha [Fri, 22 May 2015 21:37:17 +0000 (21:37 +0000)]
[AArch64][CGP] Sink zext feeding stxr/stlxr into the same block.
The usual CodeGenPrepare trickery, on a target-specific intrinsic.
Without this, the expansion of atomics will usually have the zext
be hoisted out of the loop, defeating the various patterns we have
to catch this precise case.