PM: Run the whole-program-devirt pass during LTO at --lto-O0.
The whole-program-devirt pass needs to run at -O0 because only it
knows about the llvm.type.checked.load intrinsic: it needs to both
lower the intrinsic itself and handle it in the summary.
David Blaikie [Fri, 26 May 2017 17:05:15 +0000 (17:05 +0000)]
DebugInfo: Don't include locations for debug-having code inlined into nodebug functions
This produced 'strange' DWARF anyway - the CU would have no ranges (or
at least not a range including the inlined code) nor any subprogram or
inlined_subroutine - yet the line table would have entries for these
instructions.
(this actually becomes more relevant with changes coming after this,
where a CU without any contents will be omitted entirely - so there
would be no line table to put this on anyway)
George Rimar [Fri, 26 May 2017 16:26:18 +0000 (16:26 +0000)]
[DWARF] - Make collectAddressRanges() return section index in addition to Low/High PC
This change is intended to be used for LLD in D33183.
The problem we have in LLD when building .gdb_index is that we need to know which section an address range belongs to.
Previously this was solved on the LLD side by providing fake section addresses through the llvm::LoadedObjectInfo
interface: we assigned file offsets as addresses. Then, after obtaining the range lists, for each range we had to find the section ID.
That was not only slow, but also complicated the implementation and caused incorrect behavior when
sections share the same offsets, as D33176 shows.
This patch makes the DWARF parsers return the section index as well. That solves the problem mentioned above.
Matthias Braun [Fri, 26 May 2017 16:23:08 +0000 (16:23 +0000)]
LivePhysRegs: Fix addLiveOutsNoPristines() for return blocks past PEI
Re-commit r303938 and r303954 with a fix for addLiveIns(): the internal
addPristines() function must be called on an empty set or it may
accidentally reset saved registers.
- addLiveOutsNoPristines() needs to add callee saved registers that are
actually saved and restored somewhere to the set (they are not
pristine).
- Cleanup/rewrite the code for addLiveOuts()/addLiveOutsNoPristines().
Sanjay Patel [Fri, 26 May 2017 15:33:18 +0000 (15:33 +0000)]
[DAGCombiner] use narrow vector ops to eliminate concat/extract (PR32790)
In the best case:
extract (binop (concat X1, X2), (concat Y1, Y2)), N --> binop XN, YN
...we kill all of the extract/concat and just have narrow binops remaining.
If only one of the binop operands is amenable, this transform is still
worthwhile because we kill some of the extract/concat.
Optional bitcasting makes the code more complicated, but there doesn't
seem to be a way to avoid that.
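Roughly, the fold has this shape (a simplified sketch in DAGCombiner style, not the exact code; it assumes the extract index lines up with one concat operand and it omits the bitcast handling):

  static SDValue narrowExtractedBinop(SDNode *Extract, SelectionDAG &DAG) {
    SDValue BinOp = Extract->getOperand(0);
    unsigned Opc = BinOp.getOpcode();
    // Bitwise logic only for now; see the TODO below about other binops.
    if (Opc != ISD::AND && Opc != ISD::OR && Opc != ISD::XOR)
      return SDValue();
    SDValue LHS = BinOp.getOperand(0), RHS = BinOp.getOperand(1);
    if (LHS.getOpcode() != ISD::CONCAT_VECTORS ||
        RHS.getOpcode() != ISD::CONCAT_VECTORS)
      return SDValue();
    EVT NarrowVT = Extract->getValueType(0);
    uint64_t ExtIdx = Extract->getConstantOperandVal(1);
    unsigned NumElts = NarrowVT.getVectorNumElements();
    if (ExtIdx % NumElts != 0)
      return SDValue(); // extract doesn't line up with a concat operand
    unsigned OpNo = ExtIdx / NumElts;
    if (OpNo >= LHS.getNumOperands() || OpNo >= RHS.getNumOperands())
      return SDValue();
    SDValue XN = LHS.getOperand(OpNo), YN = RHS.getOperand(OpNo);
    if (XN.getValueType() != NarrowVT || YN.getValueType() != NarrowVT)
      return SDValue();
    // extract (binop (concat X1, X2), (concat Y1, Y2)), N --> binop XN, YN
    return DAG.getNode(Opc, SDLoc(Extract), NarrowVT, XN, YN);
  }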
The TODO about extending to more than bitwise logic is there because we really
will regress several x86 tests including madd, psad, and even a plain
integer-multiply-by-2 or shift-left-by-1. I don't think there's anything
fundamentally wrong with this patch that would cause those regressions; those
folds are just missing or brittle.
If we extend to more binops, I found that this patch will fire on at least one
non-x86 regression test. There's an ARM NEON test in
test/CodeGen/ARM/coalesce-subregs.ll with a pattern like:
There was no functional change in the codegen from this transform as far as I
could see, though.
For the x86 test changes:
1. PR32790() is the closest call. We don't reduce the AVX1 instruction count in that case,
but we improve throughput. Also, on a core like Jaguar that double-pumps 256-bit ops,
there's an unseen win because two 128-bit ops have the same cost as the wider 256-bit op.
SSE/AVX2/AVX512 are not affected, which is expected because only AVX1 has the extract/concat
ops to match the pattern.
2. do_not_use_256bit_op() is the best case. Everyone wins by avoiding the concat/extract.
Related bug for IR filed as: https://bugs.llvm.org/show_bug.cgi?id=33026
3. The SSE diffs in vector-trunc-math.ll are just scheduling/RA, so nothing real AFAICT.
4. The AVX1 diffs in vector-tzcnt-256.ll are all the same pattern: we reduced the instruction
count by one in each case by eliminating two insert/extract while adding one narrower logic op.
John Brawn [Fri, 26 May 2017 13:59:12 +0000 (13:59 +0000)]
[ARM] Fix lowering of misaligned memcpy/memset
Currently getOptimalMemOpType returns i32 for large enough sizes without
checking for alignment, leading to poor code generation when misaligned accesses
aren't permitted, as we generate a word store and then later split it up into byte
stores. This means we inadvertently exceed the MaxStoresPerMemcpy limit, and for
memset we splat the memset value into a word and then immediately split it up
again.
Fix this by leaving it up to FindOptimalMemOpLowering to figure out which type
to use, but also fix a bug there where it wasn't correctly checking if
misaligned memory accesses are allowed.
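The shape of the missing check is roughly this (a hedged sketch; the variable names and the byte fallback are illustrative, not the actual FindOptimalMemOpLowering code):

  // Before committing to a wide type, ask the target whether a misaligned
  // access of that type is both legal and fast.
  bool Fast = false;
  if (Align < VT.getStoreSize() &&
      !(TLI.allowsMisalignedMemoryAccesses(VT, /*AddrSpace=*/0, Align, &Fast) &&
        Fast))
    VT = MVT::i8; // fall back to byte accesses rather than splitting later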
George Rimar [Fri, 26 May 2017 13:13:50 +0000 (13:13 +0000)]
Recommit r303978 "[DWARF] - Make collectAddressRanges() return section index in addition to Low/High PC"
With fix of test compilation.
Initial commit message:
This change is intended to be used for LLD in D33183.
The problem we have in LLD when building .gdb_index is that we need to know
which section an address range belongs to.
Previously this was solved on the LLD side by providing fake section addresses
through the llvm::LoadedObjectInfo interface: we assigned file offsets as addresses.
Then, after obtaining the range lists, for each range we had to find the section ID.
That was not only slow, but also complicated the implementation and caused
incorrect behavior when sections share the same offsets, as D33176 shows.
This patch makes the DWARF parsers return the section index as well.
That solves the problem mentioned above.
Export the required symbol from DynamicLibraryTests
Running unittests/Support/DynamicLibrary/DynamicLibraryTests fails when LLVM is
configured with LLVM_EXPORT_SYMBOLS_FOR_PLUGINS=ON, because the test's version
script only contains symbols extracted from the static libraries that the test
links with, but not those from the main object/executable itself. The patch
explicitly exports the one symbol needed by the test.
This change fixes https://bugs.llvm.org/show_bug.cgi?id=32893
George Rimar [Fri, 26 May 2017 12:46:41 +0000 (12:46 +0000)]
[DWARF] - Make collectAddressRanges() return section index in addition to Low/High PC
This change is intended to be used for LLD in D33183.
The problem we have in LLD when building .gdb_index is that we need to know
which section an address range belongs to.
Previously this was solved on the LLD side by providing fake section addresses
through the llvm::LoadedObjectInfo interface: we assigned file offsets as addresses.
Then, after obtaining the range lists, for each range we had to find the section ID.
That was not only slow, but also complicated the implementation and caused
incorrect behavior when sections share the same offsets, as D33176 shows.
This patch makes the DWARF parsers return the section index as well.
That solves the problem mentioned above.
Max Kazantsev [Fri, 26 May 2017 06:47:04 +0000 (06:47 +0000)]
Re-enable "[SCEV] Do not fold dominated SCEVUnknown into AddRecExpr start"
The patch rL303730 was reverted because the test lsr-expand-quadratic.ll failed on
many non-X86 configs with this patch. The reason for this is that the patch
makes a correctness fix that changes the optimizer's behavior for this test.
Without the change, LSR was making an overconfident simplification based on a
wrong SCEV. Apparently it did not need the IV analysis to do this. With the
change, it chose a different way to simplify (that wasn't so confident), and
this way required the IV analysis. Now, following the right execution path,
LSR tries to make a transformation relying on the IV Users analysis. This analysis
is target-dependent due to this code:
// LSR is not APInt clean, do not touch integers bigger than 64-bits.
// Also avoid creating IVs of non-native types. For example, we don't want a
// 64-bit IV in 32-bit code just because the loop has one 64-bit cast.
uint64_t Width = SE->getTypeSizeInBits(I->getType());
if (Width > 64 || !DL.isLegalInteger(Width))
return false;
To make a proper transformation in this test case, the type i32 needs to be
legal for the specified data layout. When the test runs on some non-X86
configuration (e.g. pure ARM 64), opt gets confused by the specified target
and does not use it, rejecting the specified data layout as well. Instead,
it uses some default layout that does not treat i32 as a legal type
(currently the layout that is used when it is not specified does not have
legal types at all). As a result, the transformation we expect to happen does
not happen for this test.
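For illustration, integer legality comes from the data layout's "n" (native widths) spec; a layout string without one (the example strings below are illustrative) makes every width illegal:

  #include "llvm/IR/DataLayout.h"
  using namespace llvm;

  void example() {
    DataLayout WithNative("e-m:e-i64:64-n32:64"); // declares i32/i64 native
    DataLayout NoNative("e-m:e-i64:64");          // no "n" spec at all
    bool A = WithNative.isLegalInteger(32);       // true
    bool B = NoNative.isLegalInteger(32);         // false: no legal widths
    (void)A; (void)B;
  }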
This re-enabling patch does not have any source code changes compared to the
original patch rL303730. The only difference is that the failing test has been
moved to the X86 directory and now requires running on x86 only, to comply
with the specified target triple and data layout.
Chandler Carruth [Fri, 26 May 2017 03:10:00 +0000 (03:10 +0000)]
[IR] Add an iterator and range accessor for the PHI nodes of a basic
block.
This allows writing much more natural and readable range-based for loops
directly over the PHI nodes. It also takes advantage of the same tricks
for terminating the sequence as the hand-coded versions.
I've replaced one example of this mostly to showcase the difference and
I've added a unit test to make sure the facilities really work the way
they're intended. I want to use this inside of SimpleLoopUnswitch but it
seems generally nice.
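A usage sketch (the loop body is illustrative):

  #include "llvm/IR/BasicBlock.h"
  #include "llvm/IR/Instructions.h"
  using namespace llvm;

  static unsigned countPHIsUsingPred(BasicBlock &BB, BasicBlock *Pred) {
    unsigned N = 0;
    // Visits only the PHI nodes, stopping at the first non-PHI instruction.
    for (PHINode &PN : BB.phis())
      if (PN.getBasicBlockIndex(Pred) >= 0)
        ++N;
    return N;
  }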
Zachary Turner [Fri, 26 May 2017 00:15:15 +0000 (00:15 +0000)]
[llvm-pdbdump] Don't crash when displaying padding.
We have a lot of complicated logic to determine where padding
is in a record, and the debug info doesn't always provide enough
information to figure it out with laser precision. In this case
we were putting the padding in the wrong place, causing an
out-of-bounds access on a BitVector.
Right now we decide that any trailing padding of a child type
will be truncated during record layout, but this is only true
insofar as the class is still sized properly to end on an
alignment boundary, which the algorithm doesn't yet know about.
For now, just don't crash, even though we display padding twice
in this case.
Dimitry Andric [Thu, 25 May 2017 23:56:44 +0000 (23:56 +0000)]
Return a lit.Test.Result object from TestRunner's executeShTest()
Summary:
For various clang analyzer tests, which were unsupported, I got lit
exceptions, similar to the following:
Exception during script execution:
Traceback (most recent call last):
File "utils/lit/lit/run.py", line 190, in execute_test
result = test.config.test_format.execute(test, lit_config)
File "tools/clang/test/Analysis/analyzer_test.py", line 11, in execute
if result.code == lit.Test.FAIL:
AttributeError: 'tuple' object has no attribute 'code'
This is because executeShTest() in utils/lit/lit/TestRunner.py is
supposed to return a lit.Test.Result object, but in case of unsupported
tests, it returns a plain tuple.
Fix this by returning a properly initialized lit.Test.Result object
instead.
Matthias Braun [Thu, 25 May 2017 23:39:40 +0000 (23:39 +0000)]
LivePhysRegs: Fix addLiveOutsNoPristines() for return blocks past PEI
- addLiveOutsNoPristines() needs to add callee saved registers that are
actually saved and restored somewhere to the set (they are not
pristine).
- Cleanup/rewrite the code for addLiveOuts()/addLiveOutsNoPristines().
Zachary Turner [Thu, 25 May 2017 23:36:16 +0000 (23:36 +0000)]
[CV Type Merging] Find nested type indices faster.
Merging two type streams is one of the most time-consuming
parts of generating a PDB, and as such it needs to be as
fast as possible. The visitor abstractions used for interoperating
nicely with many different types of inputs and outputs have
been used widely and help greatly for testability and implementing
tools, but the abstractions build up and get in the way of
performance.
This patch removes all of the visitation stuff from the type
stream merger, essentially re-inventing the leaf / member switch
and loop, but at a very low level. This enables many other
optimizations, such as not actually deserializing *any* records
(even member records, which don't describe their own length), as
the operation of "figure out how long this record is" is somewhat
faster than "figure out how long this record is *and* get all its
fields out". Furthermore, whereas before we had to deserialize,
re-write type indices, then re-serialize, now we don't have to
do any of those 3 steps. We just find out where the type indices
are and pull them directly out of the byte stream and re-write
them.
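The framing that makes this possible is simple. A minimal standalone sketch of skipping a CodeView-style length-prefixed record without deserializing it (types are illustrative):

  #include <cstdint>
  #include <cstring>

  // Each record starts with a 16-bit length counting everything after the
  // length field itself, so "how long is this record" is a single read.
  static const uint8_t *skipRecord(const uint8_t *P) {
    uint16_t Len;
    std::memcpy(&Len, P, sizeof(Len));
    return P + sizeof(Len) + Len;
  }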
This is worth a 50-60% performance increase. On top of all other
optimizations that have been applied this week, I now get the
following numbers when linking lld.exe and lld.pdb:
MSVC: 25.67s
Before This Patch: 18.59s
After This Patch: 8.92s
David Blaikie [Thu, 25 May 2017 23:11:28 +0000 (23:11 +0000)]
DebugInfo: Simplify scopes+subprogram handling since the subprogram<>cu link inversion
Previously this code was defensive against the situation in which the debug
info scopes would lead to a different subprogram from the subprogram in
the CU's subprogram list (this could've happened with linkonce
functions, etc., as per the comment being removed). Since the CU<>SP link
reversal this is no longer possible.
Craig Topper [Thu, 25 May 2017 21:51:12 +0000 (21:51 +0000)]
[InstCombine] Add an InstCombine specific wrapper around isKnownToBeAPowerOfTwo to shorten code. NFC
We have wrappers for several other ValueTracking methods that take care of passing all of the analysis and assumption cache parameters. This extends that to isKnownToBeAPowerOfTwo.
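The wrapper is essentially just forwarding the cached analyses; a sketch (member names assumed to match the existing wrappers):

  // InstCombiner member: call sites no longer have to thread DL, AC, CxtI
  // and DT through every query by hand.
  bool isKnownToBeAPowerOfTwo(const Value *V, bool OrZero = false,
                              unsigned Depth = 0,
                              const Instruction *CxtI = nullptr) {
    return llvm::isKnownToBeAPowerOfTwo(V, DL, OrZero, Depth, &AC, CxtI, &DT);
  }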
Wei Mi [Thu, 25 May 2017 21:49:02 +0000 (21:49 +0000)]
[GVN] Add phi-translate support in scalarpre.
Right now scalarpre doesn't have phi-translate support, so it will miss some
simple PRE opportunities. In the following test case, current scalarpre cannot
recognize that the last "a * b" is fully redundant, because the a and b used by
the last "a * b" expression are both defined by phis.
long a[100], b[100], g1, g2, g3;
__attribute__((pure)) long goo();
void foo(long a, long b, long c, long d) {
g1 = a * b;
if (__builtin_expect(g2 > 3, 0)) {
a = c;
b = d;
g2 = a * b;
}
g3 = a * b; // fully redundant.
}
The patch adds phi-translate support in scalarpre. This is only a temporary
solution before the newpre based on newgvn is available.
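For the example above, the effect of the phi-translated PRE is roughly this (hand-written for illustration, not compiler output):

  void foo(long a, long b, long c, long d) {
    long t = a * b;        /* product computed on every path */
    g1 = t;
    if (__builtin_expect(g2 > 3, 0)) {
      a = c;
      b = d;
      t = a * b;           /* recomputed with the translated operands */
      g2 = t;
    }
    g3 = t;                /* reuses the phi of the two products */
  }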
Matthias Braun [Thu, 25 May 2017 21:26:32 +0000 (21:26 +0000)]
CodeGen: Rename DEBUG_TYPE to match passnames
Rename the DEBUG_TYPE to match the names of corresponding passes where
it makes sense. Also establish the pattern of simply referencing
DEBUG_TYPE instead of repeating the passname where possible.
Zachary Turner [Thu, 25 May 2017 21:16:03 +0000 (21:16 +0000)]
[lld] Fix a bug where we continually re-follow type servers.
Originally this was intended to be set up so that when linking
a PDB which refers to a type server, it would only visit the
PDB once, and on subsequent visitations it would just skip it
since all the records had already been added.
Due to some C++ scoping issues, this was not occurring and it
was revisiting the type server every time, which caused every
record to end up being thrown away on all subsequent visitations.
This doesn't affect the performance of linking clang-cl generated
object files because we don't use type servers, but when linking
object files and libraries generated with /Zi via MSVC, this means
only 1 object file has to be linked instead of N object files, so
the speedup is quite large.
Zachary Turner [Thu, 25 May 2017 21:15:37 +0000 (21:15 +0000)]
[CodeView Type Merging] Don't keep re-allocating temp serializer.
Previously, every time we wanted to serialize a field list record, we
would create a new copy of FieldListRecordBuilder, which would in turn
create a temporary instance of TypeSerializer, which itself had a
std::vector<> that was about 128K in size. So this 128K allocation was
happening every time. We can re-use the same instance over and over; we
just have to clear its internal hash table and seen-records list between
each run. This saves us from the constant re-allocations.
This is worth an ~18.5% speed increase (3.75s -> 3.05s) in my tests.
Zachary Turner [Thu, 25 May 2017 21:12:27 +0000 (21:12 +0000)]
Make BinaryStreamReader::readCString a bit faster.
Previously it would do a character-by-character search for a null
terminator, to account for the fact that an arbitrary stream need not
store its data contiguously, so you couldn't just do a memchr. However, the
stream API has a function which will return the longest contiguous chunk
without doing a copy, and by using this function we can do a memchr on the
individual chunks. For certain types of streams like data from object
files etc, this is guaranteed to find the null terminator with only a
single memchr, but even with discontiguous streams such as
MappedBlockStream, it's rare that any given string will cross a block
boundary, so even those will almost always be satisfied with a single
memchr.
This optimization is worth a 10-12% reduction in link time (4.2 seconds ->
3.75 seconds).
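A standalone sketch of the idea, modeling the stream as a list of contiguous chunks (the real code goes through the stream API, of course):

  #include <cstring>
  #include <string>
  #include <vector>

  // One memchr per contiguous chunk instead of a byte-at-a-time loop.
  std::string readCString(const std::vector<std::vector<char>> &Chunks) {
    std::string Result;
    for (const auto &Chunk : Chunks) {
      const char *Nul = static_cast<const char *>(
          std::memchr(Chunk.data(), '\0', Chunk.size()));
      if (Nul) {
        Result.append(Chunk.data(), Nul); // terminator is in this chunk
        return Result;
      }
      // Rare case: the string crosses a chunk boundary; keep scanning.
      Result.append(Chunk.data(), Chunk.size());
    }
    return Result; // unterminated; the real code reports an error
  }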
Bob Haarman [Thu, 25 May 2017 21:12:15 +0000 (21:12 +0000)]
[pdb] pad source file name buffer at the end instead of the beginning
Summary:
DbiStreamBuilder calculated the offset of the source file names inside
the file info substream as the size of the file info substream minus
the size of the file names. Since the file info substream is padded to
a multiple of 4 bytes, this caused the first file name to be aligned
on a 4-byte boundary. By contrast, DbiModuleList would read the file
names immediately after the file name offset table, without skipping
to the next 4-byte boundary. This change makes it so that the file
names are written to the location where DbiModuleList expects them,
and puts any necessary padding for the file info substream after the
file names instead of before it.
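In other words, writer and reader now agree on this layout math (a sketch with illustrative names):

  #include <cstdint>

  static uint32_t alignTo4(uint32_t N) { return (N + 3) & ~3u; }

  // Names start immediately after the file name offset table, which is
  // where DbiModuleList reads them; only the substream total is padded.
  uint32_t namesOffset(uint32_t FixedPartSize, uint32_t OffsetTableSize) {
    return FixedPartSize + OffsetTableSize;
  }
  uint32_t substreamSize(uint32_t NamesOff, uint32_t NamesSize) {
    return alignTo4(NamesOff + NamesSize); // padding goes after the names
  }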
Zachary Turner [Thu, 25 May 2017 21:12:00 +0000 (21:12 +0000)]
Fix a bug in MappedBlockStream.
It was using the number of blocks of the entire PDB file as the number
of blocks of each stream that was created. This was only an issue in
the readLongestContiguousChunk function, which had never been called before.
This bug surfaced when I updated an algorithm to use this function and
the algorithm broke.
Zachary Turner [Thu, 25 May 2017 21:06:28 +0000 (21:06 +0000)]
[CodeView Type Merging] Avoid record deserialization when possible.
A profile shows the majority of time doing type merging is spent
deserializing records from sequences of bytes into friendly C++ structures
that we can easily access members of in order to find the type indices to
re-write.
Records are prefixed with their length, however, and most records have
type indices that appear at fixed offsets in the record. For these
records, we can save some cycles by just looking at the right place in the
byte sequence and re-writing the value, then skipping the record in the
type stream. This saves us from the costly deserialization of examining
every field, including potentially null-terminated strings, which are the
slowest, even though doing so was unnecessary to begin with.
In addition, we apply another optimization. Previously, after
deserializing a record and re-writing its type indices, we would
unconditionally re-serialize it in order to compute the hash of the
re-written record. This would result in an alloc and memcpy for every
record. If no type indices were re-written, however, this was an
unnecessary allocation. In this patch, re-writing is made two-phase. The
first phase discovers the indices that need to be rewritten and their new
values. This information is passed through to the de-duplication code,
which only copies and re-writes type indices in the serialized byte
sequence if at least one type index is different.
Some records have type indices which only appear after variable-length
strings, or which have lists of type indices, or various other situations
that can make it tricky to apply this optimization. While I'm not giving up
on optimizing these cases as well, for now we can get the easy cases out
of the way and lay the groundwork for more complicated cases later.
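A sketch of the two-phase shape (types and names are illustrative, not the actual merger API):

  #include <cstddef>
  #include <cstdint>
  #include <cstring>
  #include <vector>

  struct IndexPatch {
    uint32_t Offset;   // byte offset of a type index inside the record
    uint32_t NewIndex; // remapped value discovered in phase one
  };

  // Phase two runs only when phase one found something to change; otherwise
  // the original bytes are hashed and de-duplicated with no copy at all.
  std::vector<uint8_t> applyPatches(const uint8_t *Record, size_t Len,
                                    const std::vector<IndexPatch> &Patches) {
    std::vector<uint8_t> Copy(Record, Record + Len);
    for (const IndexPatch &P : Patches)
      std::memcpy(Copy.data() + P.Offset, &P.NewIndex, sizeof(P.NewIndex));
    return Copy;
  }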
This patch yields another 50% speedup on top of the already large speedups
submitted over the past 2 days. In two tests I have run, I went from 9
seconds to 3 seconds, and from 16 seconds to 8 seconds.
Aaron Ballman [Thu, 25 May 2017 21:01:30 +0000 (21:01 +0000)]
Update the documentation and CMake file for Visual Studio generators.
By default, CMake uses a 32-bit toolchain, even when on a 64-bit platform targeting a 64-bit build. However, due to the size of the binaries involved, this can cause linker instabilities (such as the linker running out of memory). Guide people to the correct solution to get CMake to use the native toolchain.
Kyle Butt [Thu, 25 May 2017 19:37:41 +0000 (19:37 +0000)]
PPC: Correct Size for GETtlsADDR
PPC::GETtlsADDR is lowered to a branch and a nop, by the assembly
printer. Its size was incorrectly marked as 4, correct it to 8. The
incorrect size can cause incorrect branch relaxation in
PPCBranchSelector under the right conditions.
David Blaikie [Thu, 25 May 2017 18:50:28 +0000 (18:50 +0000)]
DebugInfo: Produce debug_{gnu_}pub{names,types} entries when explicitly requested, even in -gmlt or when empty
Turns out gold doesn't use the DW_AT_GNU_pubnames to decide whether to
parse the rest of the DIEs when building gdb-index. This causes gold to
trip over LLVM's output when there are DW_FORM_ref_addr present.
Gold does use the presence of a debug_gnu_pub{names,types} entry for the
CU to skip parsing the debug_info portion, so make sure that's included
even when empty (technically, when empty there couldn't be any ref_addr
anyway - it only came up when gmlt didn't produce any (even non-empty)
pubnames - but given what that reveals about gold's implementation, this
seems like a good thing to do for consistency).
Bob Haarman [Thu, 25 May 2017 18:04:17 +0000 (18:04 +0000)]
[llvm-pdbdump] [yaml2pdb] always include object file name in module info
Summary:
Previously, the yaml2pdb subcommand of llvm-pdbdump only
included object file names in module info if a module info stream was
present. This change makes it so that we include the object file name
even if there is no module info stream for the module. As a result,
running

  llvm-pdbdump pdb2yaml -dbi-module-info original.pdb > original.yaml &&
  llvm-pdbdump yaml2pdb -pdb=new.pdb original.yaml &&
  llvm-pdbdump pdb2yaml -dbi-module-info new.pdb > new.yaml

now produces identical original.yaml and new.yaml files.
Sanjay Patel [Thu, 25 May 2017 14:13:57 +0000 (14:13 +0000)]
[InstCombine] make icmp-mul fold more efficient
There's probably a lot more like this (see also comments in D33338 about responsibility),
but I suspect we don't usually get a visible manifestation.
Given the recent interest in improving InstCombine efficiency, another potential micro-opt
that could be repeated several times in this function: morph the existing icmp pred/operands
instead of creating a new instruction.
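That micro-opt would look something like this (a sketch; returning the mutated instruction is how an InstCombine visitor signals a change):

  #include "llvm/IR/Instructions.h"
  using namespace llvm;

  // Morph the existing icmp in place instead of allocating a replacement.
  static Instruction *morphCmp(ICmpInst &Cmp, ICmpInst::Predicate NewPred,
                               Value *NewLHS, Value *NewRHS) {
    Cmp.setPredicate(NewPred);
    Cmp.setOperand(0, NewLHS);
    Cmp.setOperand(1, NewRHS);
    return &Cmp; // no new instruction, no replaceAllUsesWith needed
  }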
Oren Ben Simhon [Thu, 25 May 2017 13:45:23 +0000 (13:45 +0000)]
[X86] Adding vpopcntd and vpopcntq instructions
AVX512_VPOPCNTDQ is a new feature set that was published by Intel.
The patch represents the LLVM side of the addition of two new intrinsic-based instructions (vpopcntd and vpopcntq).
James Molloy [Thu, 25 May 2017 12:51:11 +0000 (12:51 +0000)]
[GVNSink] GVNSink pass
This patch provides an initial prototype for a pass that sinks instructions based on GVN information, similar to GVNHoist. It is not yet ready for committing, but I've uploaded it to gather some initial thoughts.
This pass attempts to sink instructions into successors, reducing static
instruction count and enabling if-conversion.
We use a variant of global value numbering to decide what can be sunk.
Consider:
GVN would number %a1 and %c1 differently because they compute different
results - the VN of an instruction is a function of its opcode and the
transitive closure of its operands. This is the key property for hoisting
and CSE.
What we want when sinking however is for a numbering that is a function of
the *uses* of an instruction, which allows us to answer the question "if I
replace %a1 with %c1, will it contribute in an equivalent way to all
successive instructions?". The (new) PostValueTable class in GVN provides this
mapping.
This pass has already shown some really impressive codesize improvements on internal benchmarks, so I have high hopes it can replace all the sinking logic in SimplifyCFG.
Chandler Carruth [Thu, 25 May 2017 07:15:09 +0000 (07:15 +0000)]
[PM] Teach the PGO instrumentation passes to run GlobalDCE before
instrumenting code.
This is important in the new pass manager. The old pass manager's
inliner has a small DCE routine embedded within it. The new pass manager
relies on the actual GlobalDCE pass for this.
Without this patch, instrumentation profiling with the new PM results in
massive code bloat in the object files because the instrumentation
itself ends up preventing DCE from working to remove the code.
We should probably change the instrumentation (and/or DCE) so that we
can eliminate dead code even if instrumented, but we shouldn't even
spend the time generating instrumentation for that code so this still
seems like a good patch.
Chandler Carruth [Thu, 25 May 2017 06:33:36 +0000 (06:33 +0000)]
[PM/Unswitch] Fix a bug in the domtree update logic for the new unswitch
pass.
The original logic only considered direct successors of the hoisted
domtree nodes, but that isn't really enough. If there are other basic
blocks that are completely within the subtree, their successors could
just as easily be impacted by the hoisting.
The more I think about it, the more I think the correct update here is
to hoist every block on the dominance frontier which has an idom in the
chain we hoist across. However, this is subtle enough that I'd
definitely appreciate some more eyes on it.
Sadly, if this is the correct algorithm, it requires computing a (highly
localized) dominance frontier. I've done this in the simplest (IE, least
code) way I could come up with, but that may be too naive. Suggestions
welcome here, dominance update algorithms are not an area I've studied
much, so I don't have strong opinions.
In good news, with this patch, turning on simple unswitch passes the
LLVM test suite for me with asserts enabled.
Craig Topper [Thu, 25 May 2017 05:38:40 +0000 (05:38 +0000)]
[SelectionDAG] Fix off by one in a compare in getOperationAction.
If Op is equal to array_lengthof, the lookup would be out of bounds, but we were only checking for greater than. I suspect nothing ever passes in the equal value because it's a sentinel to mark the end of the builtin opcodes and not a real opcode.
So really this fix is just so that the code looks right and makes sense.
Chandler Carruth [Thu, 25 May 2017 03:01:31 +0000 (03:01 +0000)]
[LegacyPM] Make the 'addLoop' method accept a loop to add rather than
having it internally allocate the loop.
This is a much more flexible API and necessary in the new loop unswitch
to reasonably support both new and old PMs in common code. It also just
seems like a cleaner separation of concerns.
Vitaly Buka [Thu, 25 May 2017 01:43:13 +0000 (01:43 +0000)]
[libFuzzer] Don't replace custom signal handlers.
Summary:
This allows keeping the handlers installed by sanitizers.
In other cases third-party code can replace handlers after libFuzzer
initialization anyway.
George Karpenkov [Thu, 25 May 2017 01:41:46 +0000 (01:41 +0000)]
Fix coverage check for full post-dominator basic blocks.
Coverage instrumentation which does not instrument full post-dominators
and full-dominators may skip valid paths, as the reasoning for skipping
blocks may become circular.
This patch fixes that by only skipping
full post-dominators with multiple predecessors, since such blocks by
definition cannot be full dominators.
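A sketch of the tightened rule (helper names are illustrative):

  #include "llvm/Analysis/PostDominators.h"
  #include "llvm/IR/BasicBlock.h"
  #include "llvm/IR/CFG.h"
  using namespace llvm;

  // A "full" post-dominator post-dominates every one of its predecessors.
  static bool isFullPostDominator(const BasicBlock *BB,
                                  const PostDominatorTree *PDT) {
    for (const BasicBlock *Pred : predecessors(BB))
      if (!PDT->dominates(BB, Pred))
        return false;
    return true;
  }

  // Only skip full post-dominators with multiple predecessors; a block with
  // a single predecessor could also be a full dominator, which is exactly
  // the circular case this patch rules out.
  static bool mayBeSkipped(const BasicBlock *BB, const PostDominatorTree *PDT) {
    return isFullPostDominator(BB, PDT) && !BB->getSinglePredecessor();
  }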
Such instructions may require spilling into the coroutine frame, but the coro.frame address is only available after coro.begin, so they need to be moved after coro.begin.
The only instructions that should not be moved are the arguments of coro.begin and all of their operands.
Tony Jiang [Wed, 24 May 2017 23:48:29 +0000 (23:48 +0000)]
[PowerPC] Fix a performance bug for PPC::XXSLDWI.
There are some VectorShuffle nodes in SDAG which can be selected to the XXSLDWI
instruction; this patch recognizes them and does the selection to improve
PPC performance.
Sanjay Patel [Wed, 24 May 2017 22:58:17 +0000 (22:58 +0000)]
[InstCombine] use m_APInt to allow icmp-mul-mul vector fold
The swapped operands in the first test are a manifestation of an
inefficiency for vectors that doesn't exist for scalars because
the IRBuilder checks for an all-ones mask for scalars, but not
vectors.
Craig Topper [Wed, 24 May 2017 18:40:25 +0000 (18:40 +0000)]
[InstCombine] Merge together the SimplifyDemandedUseBits implementations for ZExt and Trunc. NFC
While there, avoid resizing the DemandedMask twice: make a copy into a separate variable instead. This potentially removes an allocation on large bit widths.
With the use of the zextOrTrunc methods on APInt and KnownBits, these can be made almost source-identical. The only difference is the zeroing of the upper bits for ZExt. This is similar to how it's done in computeKnownBits in ValueTracking.
Craig Topper [Wed, 24 May 2017 17:33:30 +0000 (17:33 +0000)]
[InstCombine] Use less bitwise operations to handle Instruction::SExt in SimplifyDemandedUseBits. Other improvements.
The current code created a NewBits mask and used it as a mask several times, one of them just before a call to trunc, making it unnecessary; a call to getActiveBits can get us the same information in that case. We also ORed with this mask later when we should have just sign-extended the known bits.
We also called trunc on the guaranteed-to-be-zero KnownZeros/Ones masks entering this code. Creating appropriately sized temporary APInts is probably better.