Roman Lebedev [Fri, 18 Oct 2019 13:20:16 +0000 (13:20 +0000)]
[NFC][CVP] Count all the no-wraps we proved
Summary:
It looks like this is the only missing statistic in the CVP pass.
Since we prove NSW and NUW separately, I'd think we should count them separately too.
Graham Hunter [Fri, 18 Oct 2019 11:48:35 +0000 (11:48 +0000)]
[AArch64][SVE] Add SPLAT_VECTOR ISD Node
Adds a new ISD node to replicate a scalar value across all elements of
a vector. This is needed for scalable vectors, since BUILD_VECTOR cannot
be used.
Fixes up default type legalization for scalable vectors after the
new MVT type ranges were introduced.
At present I only use this node for scalable vectors. A DAGCombine has
been added to transform a BUILD_VECTOR into a SPLAT_VECTOR if all
elements are the same, but only if the default operation action of
Expand has been overridden by the target.
I've only added result promotion legalization for scalable vector
i8/i16/i32/i64 types in AArch64 for now.
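As a rough illustration of the combine described above (a hedged sketch, not the in-tree code; the helper name is hypothetical), a uniform BUILD_VECTOR can be rewritten to SPLAT_VECTOR along these lines:
```
#include "llvm/CodeGen/ISDOpcodes.h"
#include "llvm/CodeGen/SelectionDAG.h"
using namespace llvm;

// Sketch: turn a BUILD_VECTOR whose operands are all identical into a
// SPLAT_VECTOR of that operand. The real combine additionally checks that
// the target overrode the default Expand action for SPLAT_VECTOR.
static SDValue buildVectorToSplat(SDNode *N, SelectionDAG &DAG) {
  SDValue Splat = N->getOperand(0);
  for (unsigned i = 1, e = N->getNumOperands(); i != e; ++i)
    if (N->getOperand(i) != Splat)
      return SDValue(); // elements differ: leave the BUILD_VECTOR alone
  return DAG.getNode(ISD::SPLAT_VECTOR, SDLoc(N), N->getValueType(0), Splat);
}
```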
David Green [Fri, 18 Oct 2019 10:35:46 +0000 (10:35 +0000)]
[AArch64] Don't combine callee-save and local stack adjustment when optimizing for size
For arm64, D18619 introduced the ability to combine, into a single upfront bump, the stack pointer adjustments needed for both the callee-save area and the local stack area.
That diff already remarks that "This change can cause an increase in
instructions", but argues that even when that happens, it should still be a
performance benefit because the number of micro-ops is reduced.
We have observed that this code-size increase can be significant in practice.
This diff disables combining stack bumping for methods that are marked as
optimize-for-size.
Example of a prologue with the behavior before this diff (combining stack bumping when possible):
sub sp, sp, #0x40
stp d9, d8, [sp, #0x10]
stp x20, x19, [sp, #0x20]
stp x29, x30, [sp, #0x30]
add x29, sp, #0x30
[... compute x8 somehow ...]
stp x0, x8, [sp]
And after this diff, if the method is marked as optimize-for-size:
stp d9, d8, [sp, #-0x30]!
stp x20, x19, [sp, #0x10]
stp x29, x30, [sp, #0x20]
add x29, sp, #0x20
[... compute x8 somehow ...]
stp x0, x8, [sp, #-0x10]!
Note that without combining the stack bump there are two auto-decrements,
nicely folded into the stp instructions, whereas with combining there is a
single sub sp, ... instruction, but it is not folded.
David Green [Fri, 18 Oct 2019 09:47:48 +0000 (09:47 +0000)]
[Codegen] Alter the default promotion for saturating adds and subs
The default promotion for the add_sat/sub_sat nodes currently does:
ANY_EXTEND iN to iM
SHL by M-N
[US][ADD|SUB]SAT
L/ASHR by M-N
If the promoted add_sat or sub_sat node is not legal, this can produce code
that effectively does a lot of shifting (and requires large constants to be
materialised) just to use the overflow flag. It is simpler to just do the
saturation manually, using the higher bitwidth addition and a min/max against
the saturating bounds. That is what this patch attempts to do.
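For intuition, here is a minimal C++ sketch (not the SelectionDAG code) of the manual saturation: perform the add in a wider type, then clamp against the signed i8 bounds.
```
#include <algorithm>
#include <cstdint>

// Saturating i8 add done "manually": widen, add, clamp to [INT8_MIN, INT8_MAX].
static int8_t sadd_sat_i8(int8_t a, int8_t b) {
  int16_t wide = int16_t(a) + int16_t(b); // higher-bitwidth addition
  wide = std::min<int16_t>(wide, INT8_MAX);
  wide = std::max<int16_t>(wide, INT8_MIN);
  return int8_t(wide);
}
```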
Kerry McLaughlin [Fri, 18 Oct 2019 09:40:16 +0000 (09:40 +0000)]
[AArch64][SVE] Implement unpack intrinsics
Summary:
Implements the following intrinsics:
- int_aarch64_sve_sunpkhi
- int_aarch64_sve_sunpklo
- int_aarch64_sve_uunpkhi
- int_aarch64_sve_uunpklo
This patch also adds AArch64ISD nodes for UNPK instead of implementing
the intrinsics directly, as they are required for a future patch which
implements the sign/zero extension of legal vectors.
This patch includes tests for the Subdivide2Argument type added by D67549
David Blaikie [Thu, 17 Oct 2019 23:02:19 +0000 (23:02 +0000)]
DebugInfo: Move loclist base address from DwarfFile to DebugLocStream
There's no need to have more than one of these (there can be two
DwarfFiles - one for the .o, one for the .dwo - but only one loc/loclist
section (either in the .o or the .dwo) & certainly one per
DebugLocStream, which is currently singular in DwarfDebug)
Jordan Rupprecht [Thu, 17 Oct 2019 21:55:43 +0000 (21:55 +0000)]
Reland [llvm-objdump] Use a counter for llvm-objdump -h instead of the section index.
This relands r374931 (reverted in r375088). It fixes 32-bit builds by using the right format string specifier for uint64_t (PRIu64) instead of `%d`.
Original description:
When listing the index in `llvm-objdump -h`, use a zero-based counter instead of the actual section index (e.g. shdr->sh_index for ELF).
While this is effectively a noop for now (except one unit test for XCOFF), the index values will change in a future patch that filters certain sections out (e.g. symbol tables). See D68669 for more context. Note: the test case in `test/tools/llvm-objdump/X86/section-index.s` already covers the case of incrementing the section index counter when sections are skipped.
Don Hinton [Thu, 17 Oct 2019 21:54:15 +0000 (21:54 +0000)]
[Error] Make llvm::cantFail include the original error messages
Summary:
The current implementation discards the original errors and just outputs
the message parameter passed to llvm::cantFail. This change appends
the original error message(s), so the user can see exactly why
cantFail failed. The new logic is conditional on NDEBUG.
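A hedged usage sketch (the helper below is illustrative, not from the patch): when the Expected carries an error, cantFail aborts, and with this change the report includes the original error text as well as the caller's message.
```
#include "llvm/Support/Error.h"
using namespace llvm;

// Illustrative only: if E holds an error, cantFail() aborts; the report now
// also includes the underlying error message, not just the string below.
static int parseOrDie(Expected<int> E) {
  return cantFail(std::move(E), "unexpected failure while parsing");
}
```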
Jordan Rupprecht [Thu, 17 Oct 2019 20:51:00 +0000 (20:51 +0000)]
[llvm-objcopy] Add support for shell wildcards
Summary: GNU objcopy accepts the --wildcard flag to allow wildcard matching on symbol-related flags. (Note: it's implicitly true for section flags).
The basic syntax allows *, ?, \, and [], which work similarly to how they work in a shell. Additionally, starting a wildcard with ! negates it, so that names matching the pattern are excluded.
Use an updated GlobPattern in libSupport to handle these patterns. It does not fully match the `fnmatch` used by GNU objcopy since named character classes (e.g. `[[:digit:]]`) are not supported, but this should support most existing use cases (mostly just `*` is what's used anyway).
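A rough sketch of how such a pattern might be matched through libSupport's GlobPattern (hedged; the helper is illustrative, see llvm/Support/GlobPattern.h for the actual interface):
```
#include "llvm/ADT/StringRef.h"
#include "llvm/Support/Error.h"
#include "llvm/Support/GlobPattern.h"
using namespace llvm;

// Compile a glob such as "foo_*" or "[!a-z]*" and test a symbol name against
// it. Invalid patterns are treated here as "no match".
static bool nameMatches(StringRef Pattern, StringRef SymbolName) {
  Expected<GlobPattern> G = GlobPattern::create(Pattern);
  if (!G) {
    consumeError(G.takeError());
    return false;
  }
  return G->match(SymbolName);
}
```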
Julian Lettner [Thu, 17 Oct 2019 20:22:32 +0000 (20:22 +0000)]
Reland "[lit] Synthesize artificial deadline"
We always want to use a deadline when calling `result.await`. Let's
synthesize an artificial deadline (now plus one year) to simplify code
and do less busy waiting.
Thanks to Reid Kleckner for diagnosing that a deadline of "positive
infinity" does not work with Python 3 anymore. See commit: 4ff1e34b606d9a9fcfd8b8b5449a558315af94e5
Shoaib Meenai [Thu, 17 Oct 2019 19:24:58 +0000 (19:24 +0000)]
[cmake] Pass external project source directories to sub-configures
We're passing LLVM_EXTERNAL_PROJECTS to cross-compilation configures, so
we also need to pass the source directories of those projects, otherwise
configuration can fail from not finding them.
Header64.offset/Header64.size are uint64_t, thus we should not
truncate them to uint32_t. Moreover, there are a number of places
where we sum the offset and the size (e.g. in various checks in MachOUniversal.cpp);
the truncation causes issues since the offset/size can perfectly fit into uint32_t
while the sum overflows.
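Illustrative only (made-up values): both fields fit in uint32_t individually, yet a check that sums them in 32 bits wraps and can spuriously pass.
```
#include <cstdint>
#include <cstdio>

int main() {
  uint64_t Offset = 0xFFFFF000u, Size = 0x2000u; // each fits in uint32_t
  printf("64-bit sum: 0x%llx\n", (unsigned long long)(Offset + Size)); // 0x100001000
  printf("32-bit sum: 0x%x\n", uint32_t(Offset) + uint32_t(Size));     // wraps to 0x1000
  return 0;
}
```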
Nemanja Ivanovic [Thu, 17 Oct 2019 18:24:28 +0000 (18:24 +0000)]
[PowerPC] Turn on CR-Logical reducer pass
Quite a while ago, we implemented a pass that will reduce the number of
CR-logical operations we emit. It does so by converting a CR-logical operation
into a branch. We have kept this off by default because it seemed to cause a
significant regression with one benchmark.
However, that regression turned out to be due to a completely unrelated
reason: AADB introducing a self-copy that is a priority-setting nop, which was
just exacerbated by this pass.
Now that we understand the reason for the only degradation, and have long
since fixed its cause, we can turn this pass on by default.
Jordan Rupprecht [Thu, 17 Oct 2019 18:09:05 +0000 (18:09 +0000)]
Reapply r375051: [support] GlobPattern: add support for `\` and `[!...]`, and allow `]` in more places
Reland r375051 (reverted in r375052) after fixing lld tests on Windows in r375126 and r375131.
Original description: Update GlobPattern in libSupport to handle a few more cases. It does not fully match the `fnmatch` used by GNU objcopy since named character classes (e.g. `[[:digit:]]`) are not supported, but this should support most existing use cases (mostly just `*` is what's used anyway).
This will be used to implement the `--wildcard` flag in llvm-objcopy to be more compatible with GNU objcopy.
This is split off of D66613 to land the libSupport changes separately. The llvm-objcopy part will land soon.
Philip Reames [Thu, 17 Oct 2019 17:29:07 +0000 (17:29 +0000)]
[IndVars] Split loop predication out of optimizeLoopExits [NFC]
In the process of writing D69009, I realized we have two distinct sets of invariants within this single function, and basically no shared logic. The optimize loop exit transforms (including the new one in D69009) only care about *analyzeable* exits. Loop predication, on the other hand, has to reason about *all* exits. At the moment, we have the property (due to the requirement for an exact btc) that all exits are analyzeable, but that will likely change in the future as we add widenable condition support.
Reid Kleckner [Thu, 17 Oct 2019 17:28:31 +0000 (17:28 +0000)]
[codeview] Workaround for PR43479, don't re-emit instr labels
Summary:
In the long run we should come up with another mechanism for marking
call instructions as heap allocation sites, and remove this workaround.
For now, we've had two bug reports about this, so let's apply this
workaround. SLH (the other client of instruction labels) probably has
the same bug, but the solution there is more likely to be to mark the
call instruction as not duplicatable, which doesn't work for debug info.
Julian Lettner [Thu, 17 Oct 2019 16:01:18 +0000 (16:01 +0000)]
[lit] Synthesize artificial deadline
We always want to use a deadline when calling `result.await`. Let's
synthesize an artificial deadline (positive infinity) to simplify code
and do less busy waiting.
Joel E. Denny [Thu, 17 Oct 2019 14:03:06 +0000 (14:03 +0000)]
[lit] Extend internal diff to support `-` argument
When using lit's internal shell, RUN lines like the following
accidentally execute an external `diff` instead of lit's internal
`diff`:
```
# RUN: program | diff file -
```
Such cases exist now, in `clang/test/Analysis` for example. We are
preparing patches to ensure lit's internal `diff` is called in such
cases, which will then fail because lit's internal `diff` doesn't
recognize `-` as a command-line option. This patch adds support for
`-` to mean stdin.
Joel E. Denny [Thu, 17 Oct 2019 14:02:42 +0000 (14:02 +0000)]
[lit] Make internal diff work in pipelines
When using lit's internal shell, RUN lines like the following
accidentally execute an external `diff` instead of lit's internal
`diff`:
```
# RUN: program | diff file -
# RUN: not diff file1 file2 | FileCheck %s
```
Such cases exist now, in `clang/test/Analysis` for example. We are
preparing patches to ensure lit's internal `diff` is called in such
cases, which will then fail because lit's internal `diff` cannot
currently be used in pipelines and doesn't recognize `-` as a
command-line option.
To enable pipelines, this patch moves lit's `diff` implementation into
an out-of-process script, similar to lit's `cat` implementation. A
follow-up patch will implement `-` to mean stdin.
Sam Parker [Thu, 17 Oct 2019 12:11:18 +0000 (12:11 +0000)]
[ARM][MVE] Enable truncating masked stores
Allow us to generate truncating masked stores, which take v4i32 and
v8i16 vectors and can store to v4i8, v4i16 and v8i8 memory.
Removed support for unaligned masked stores.
Roman Lebedev [Thu, 17 Oct 2019 11:01:29 +0000 (11:01 +0000)]
[LoopIdiom] BCmp: check, not assert that loop exits exit out of the loop (PR43687)
We can't normally stumble into that assertion, because it requires a
tautological *conditional* `br` in the loop body, one that always branches
to the loop latch; such a branch should always have been folded into an
unconditional branch before we get here.
But that is not guaranteed if the pass is run standalone.
So let's just promote the assertion into a proper check.
Oliver Stannard [Thu, 17 Oct 2019 09:58:57 +0000 (09:58 +0000)]
Reland: Dead Virtual Function Elimination
Remove dead virtual functions from vtables with
replaceNonMetadataUsesWith, so that CGProfile metadata gets cleaned up
correctly.
Original commit message:
Currently, it is hard for the compiler to remove unused C++ virtual
functions, because they are all referenced from vtables, which are referenced
by constructors. This means that if the constructor is called from any live
code, then we keep every virtual function in the final link, even if there
are no call sites which can use it.
This patch allows unused virtual functions to be removed during LTO (and
regular compilation in limited circumstances) by using type metadata to match
virtual function call sites to the vtable slots they might load from. This
information can then be used in the global dead code elimination pass instead
of the references from vtables to virtual functions, to more accurately
determine which functions are reachable.
To make this transformation safe, I have changed clang's code-generation to
always load virtual function pointers using the llvm.type.checked.load
intrinsic, instead of regular load instructions. I originally tried writing
this using clang's existing code-generation, which uses the llvm.type.test
and llvm.assume intrinsics after doing a normal load. However, it is possible
for optimisations to obscure the relationship between the GEP, load and
llvm.type.test, causing GlobalDCE to fail to find virtual function call
sites.
The existing linkage and visibility types don't accurately describe the scope
in which a virtual call could be made which uses a given vtable. This is
wider than the visibility of the type itself, because a virtual function call
could be made using a more-visible base class. I've added a new
!vcall_visibility metadata type to represent this, described in
TypeMetadata.rst. The internalization pass and libLTO have been updated to
change this metadata when linking is performed.
This doesn't currently work with ThinLTO, because it needs to see every call
to llvm.type.checked.load in the linkage unit. It might be possible to
extend this optimisation to be able to use the ThinLTO summary, as was done
for devirtualization, but until then that combination is rejected in the
clang driver.
To test this, I've written a fuzzer which generates random C++ programs with
complex class inheritance graphs, and virtual functions called through object
and function pointers of different types. The programs are spread across
multiple translation units and DSOs to test the different visibility
restrictions.
I've also tried doing bootstrap builds of LLVM to test this. This isn't
ideal, because only classes in anonymous namespaces can be optimised with
-fvisibility=default, and some parts of LLVM (plugins and bugpoint) do not
work correctly with -fvisibility=hidden. However, there are only 12 test
failures when building with -fvisibility=hidden (and an unmodified compiler),
and this change does not cause any new failures for either value of
-fvisibility.
On the 7 C++ sub-benchmarks of SPEC2006, this gives a geomean code-size
reduction of ~6%, over a baseline compiled with "-O2 -flto
-fvisibility=hidden -fwhole-program-vtables". The best cases are reductions
of ~14% in 450.soplex and 483.xalancbmk, and there are no code size
increases.
I've also run this on a set of 8 mbed-os examples compiled for Armv7M, which
show a geomean size reduction of ~3%, again with no size increases.
I had hoped that this would have no effect on performance, which would allow
it to always be enabled (when using -fwhole-program-vtables). However, the
changes in clang to use the llvm.type.checked.load intrinsic are causing ~1%
performance regression in the C++ parts of SPEC2006. It should be possible to
recover some of this perf loss by teaching optimisations about the
llvm.type.checked.load intrinsic, which would make it worth turning this on
by default (though it's still dependent on -fwhole-program-vtables).
Mikhail Maltsev [Thu, 17 Oct 2019 08:59:06 +0000 (08:59 +0000)]
[Analysis] Don't assume that unsigned overflow can't happen in EmitGEPOffset (PR42699)
Summary:
Currently when computing a GEP offset using the function EmitGEPOffset
for the following instruction
getelementptr inbounds i32, i32* %p, i64 %offs
we get
mul nuw i64 %offs, 4
Unfortunately we cannot assume that unsigned wrapping won't happen
here because %offs is allowed to be negative.
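As a small stand-alone illustration (hypothetical values, not from the patch), a negative element offset makes the unsigned product wrap:
```
#include <cstdint>
#include <cstdio>

int main() {
  int64_t offs = -1;                  // %offs may be negative
  uint64_t prod = uint64_t(offs) * 4; // interpreted as unsigned, this wraps
  printf("signed byte offset: %lld\n", (long long)(offs * 4));       // -4
  printf("unsigned product:   0x%llx\n", (unsigned long long)prod);  // 0xfffffffffffffffc
  return 0;
}
```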
Making such assumptions can lead to miscompilations: see the new test
test24_neg_offs in InstCombine/icmp.ll. Without the patch InstCombine
would generate the following comparison:
Hans Wennborg [Thu, 17 Oct 2019 08:52:29 +0000 (08:52 +0000)]
Revert r374931 "[llvm-objdump] Use a counter for llvm-objdump -h instead of the section index."
This broke llvm-objdump in 32-bit builds, see e.g.
http://lab.llvm.org:8011/builders/clang-cmake-armv7-quick/builds/10925
> Summary:
> When listing the index in `llvm-objdump -h`, use a zero-based counter instead of the actual section index (e.g. shdr->sh_index for ELF).
>
> While this is effectively a noop for now (except one unit test for XCOFF), the index values will change in a future patch that filters certain sections out (e.g. symbol tables). See D68669 for more context. Note: the test case in `test/tools/llvm-objdump/X86/section-index.s` already covers the case of incrementing the section index counter when sections are skipped.
>
> Reviewers: grimar, jhenderson, espindola
>
> Reviewed By: grimar
>
> Subscribers: emaste, sbc100, arichardson, aheejin, arphaman, seiya, llvm-commits, MaskRay
>
> Tags: #llvm
>
> Differential Revision: https://reviews.llvm.org/D68848
James Molloy [Thu, 17 Oct 2019 08:34:29 +0000 (08:34 +0000)]
[DFAPacketizer] Use DFAEmitter. NFC.
Summary:
This is a NFC change that removes the NFA->DFA construction and emission logic from DFAPacketizerEmitter and instead uses the generic DFAEmitter logic. This allows DFAPacketizer to use the Automaton class from Support and remove a bunch of logic there too.
After this patch, DFAPacketizer is mostly logic for grepping Itineraries and collecting functional units, with no state machine logic. This will allow us to modernize by removing the 16-functional-unit limit and supporting non-itinerary functional units. This is all for followup patches.
[Alignment][NFC] Use Align for TargetFrameLowering/Subtarget
Summary:
This is patch is part of a series to introduce an Alignment type.
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
See this patch for the introduction of the type: https://reviews.llvm.org/D64790
Daniel Sanders [Thu, 17 Oct 2019 00:37:04 +0000 (00:37 +0000)]
[gicombiner] Add the run-time rule disable option
Summary:
Each generated helper can be configured to generate an option that disables
rules in that helper. This can be used to bisect rulesets.
The disable bits are stored in a SparseVector as this is very cheap for the
common case where nothing is disabled. It gets more expensive the more rules
are disabled but you're generally doing that for debug purposes where
performance is less of a concern.
Daniel Sanders [Wed, 16 Oct 2019 23:53:35 +0000 (23:53 +0000)]
[gicombiner] Hoist pure C++ combine into the tablegen definition
Summary:
This is just moving the existing C++ code around and will be NFC w.r.t.
AArch64. Renamed 'CombineBr' to something more descriptive
('ElideBrByInvertingCond') at the same time.
The remaining combines in AArch64PreLegalizeCombiner require features that
aren't implemented at this point and will be hoisted as they are added.
The patch does not work on Windows due to `\` in filenames being interpreted as escape characters rather than literal path separators when used by lld linker scripts.
Jordan Rupprecht [Wed, 16 Oct 2019 22:31:16 +0000 (22:31 +0000)]
[support] GlobPattern: add support for `\` and `[!...]`, and allow `]` in more places
Summary: Update GlobPattern in libSupport to handle a few more cases. It does not fully match the `fnmatch` used by GNU objcopy since named character classes (e.g. `[[:digit:]]`) are not supported, but this should support most existing use cases (mostly just `*` is what's used anyway).
This will be used to implement the `--wildcard` flag in llvm-objcopy to be more compatible with GNU objcopy.
This is split off of D66613 to land the libSupport changes separately. The llvm-objcopy part will land soon.
Alina Sbirlea [Wed, 16 Oct 2019 22:23:20 +0000 (22:23 +0000)]
[Utils] Cleanup similar cases to MergeBlockIntoPredecessor.
Summary:
There are two cases where a block is merged into its predecessor and the
MergeBlockIntoPredecessor API is not used. Update the API so it can be
reused in the other cases, in order to avoid code duplication.
Julian Lettner [Wed, 16 Oct 2019 21:53:20 +0000 (21:53 +0000)]
[lit] Small refactoring and cleanups in main.py
* Remove outdated precautions for Python versions < 2.7
* Remove dead code related to `maxIndividualTestTime` option
* Move printing of test and result summary out of main into its own
function
[dsymutil] Print warning/error for unknown/missing arguments.
After changing dsymutil to use libOption, we lost error reporting for
missing required arguments (input files). Additionally, we stopped
complaining about unknown arguments. This patch fixes both and adds a
test.
Shoaib Meenai [Wed, 16 Oct 2019 21:41:05 +0000 (21:41 +0000)]
[AArch64] Fix offset calculation
r374772 changed Offset to be an int64_t but left NewOffset as an int.
Scale is unsigned, so in the calculation `Offset - NewOffset * Scale`,
`NewOffset * Scale` was promoted to unsigned and was then zero-extended
to 64 bits, leading to an incorrect computation which manifested as an
out-of-memory when building the Swift standard library for Android
aarch64. Promote NewOffset to int64_t to fix this, and promote
EmittableOffset as well, since its one user passes it to a function
which takes an int64_t anyway.
Test case based on a suggestion by Sander de Smalen!
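A minimal stand-alone illustration of the promotion bug (the types mirror the description above; this is not the in-tree code):
```
#include <cstdint>
#include <cstdio>

int main() {
  int64_t Offset = 0;
  int NewOffset = -1;
  unsigned Scale = 8;
  int64_t Bad  = Offset - NewOffset * Scale;          // product computed as unsigned, then zero-extended
  int64_t Good = Offset - int64_t(NewOffset) * Scale; // signed 64-bit math as intended
  printf("bad = %lld, good = %lld\n", (long long)Bad, (long long)Good); // bad = -4294967288, good = 8
  return 0;
}
```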
Philip Reames [Wed, 16 Oct 2019 19:58:26 +0000 (19:58 +0000)]
[IndVars] Fix a miscompile in off-by-default loop predication implementation
The problem is that we can have two loop exits, 'a' and 'b', where 'a' and 'b' would exit at the same iteration, 'a' precedes 'b' along some path, and 'b' is predicated while 'a' is not. In this case (see the previously submitted test case), we end up causing the loop to exit through 'b' whereas it should have exited through 'a'.
This only applies to loop exits where the exit counts are not provably unequal, but that isn't as much of a restriction as it appears. If we could order the exit counts, we'd have already removed one of the two exits. In theory, we might be able to prove inequality without ordering, but I didn't really explore that piece. Instead, I went for the obvious restriction and ensured we didn't predicate exits following non-predicateable exits.
Credit goes to Evgeny Brevnov for figuring out the problematic case. Fuzzing probably also found it (failures seen), but due to some silly infrastructure problems I hadn't gotten to the results before Evgeny hand reduced it from a benchmark (he manually enabled the transform). Once this is fixed, I'll try to filter through the fuzzer failures to see if there's anything additional lurking.
Jordan Rupprecht [Wed, 16 Oct 2019 18:39:52 +0000 (18:39 +0000)]
[llvm-ar] Implement the V modifier as an alias for --version
Summary: Also update the help modifier (h) so that it works as a modifier and not just as a standalone `h`. For example, `llvm-ar h` prints the help message, but `llvm-ar xh` currently prints `unknown option h`.
Sanjay Patel [Wed, 16 Oct 2019 18:06:24 +0000 (18:06 +0000)]
[SLP] avoid reduction transform on patterns that the backend can load-combine (2nd try)
The 1st attempt at this modified the cost model in a bad way to avoid the vectorization,
but that caused problems for other users (the loop vectorizer) of the cost model.
I don't see an ideal solution to these 2 related, potentially large, perf regressions:
https://bugs.llvm.org/show_bug.cgi?id=42708
https://bugs.llvm.org/show_bug.cgi?id=43146
We decided that load combining was unsuitable for IR because it could obscure other
optimizations in IR. So we removed the LoadCombiner pass and deferred to the backend.
Therefore, preventing SLP from destroying load combine opportunities requires that it
recognize patterns that could be combined later, but not do the optimization itself
(it's not a vector combine anyway, so it's probably out of scope for SLP).
Here, we add a cost-independent bailout with a conservative pattern match for a
multi-instruction sequence that can probably be reduced later.
In the x86 tests shown (and discussed in more detail in the bug reports), SDAG combining
will produce a single instruction on these tests like:
movbe rax, qword ptr [rdi]
or:
mov rax, qword ptr [rdi]
Not some (half) vector monstrosity as we currently do using SLP:
Jason Liu [Wed, 16 Oct 2019 17:36:31 +0000 (17:36 +0000)]
[NFC][XCOFF][AIX] Rename ControlSections to CsectGroup
The name ControlSections is not expressive enough to convey what they really are.
CsectGroup better communicates the concept of grouping csects together since they have similar properties.