Reid Kleckner [Sat, 19 Oct 2019 00:48:11 +0000 (00:48 +0000)]
Move endian constant from Host.h to SwapByteOrder.h, prune include
Works on this dependency chain:
  ArrayRef.h ->
    Hashing.h -> --CUT--
    Host.h ->
      StringMap.h / StringRef.h
ArrayRef is very popular, but Host.h is rarely needed. Move the
IsBigEndianHost constant to SwapByteOrder.h. Clients of that header are
more likely to need it.
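For illustration, a minimal sketch of the kind of client this move targets, assuming the constant keeps the llvm::sys::IsBigEndianHost spelling after the move:
```
#include "llvm/Support/SwapByteOrder.h"

#include <cstdint>

// Code that already includes SwapByteOrder.h for byte swapping and also needs
// the host endianness no longer has to pull in Host.h for that one constant.
inline uint32_t toLittleEndian(uint32_t V) {
  return llvm::sys::IsBigEndianHost ? llvm::sys::getSwappedBytes(V) : V;
}
```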
Matt Arsenault [Fri, 18 Oct 2019 23:24:25 +0000 (23:24 +0000)]
LiveIntervals: Fix handleMoveUp with subreg def moving across a def
If a subregister def was moved across another subregister def and
another use, the main range was not correctly updated. The end point
of the moved interval ended too early and missed the use from the
other lanes in the subreg def.
Wei Mi [Fri, 18 Oct 2019 22:35:20 +0000 (22:35 +0000)]
[SampleFDO] Add profile remapping support for profile on-demand loading used
by ExtBinary format profile
Profile on-demand loading was added for the ExtBinary format profile in rL374233,
but currently profile on-demand loading doesn't work well with profile
remapping. This patch adds that support.
Suppose a function in the current module has an outline instance in the profile.
The function name in the module is different from the name of the outline
instance, but the remapper knows the two names are equal. When loading the
profile on demand, the outline instance has to be loaded with the remapper's help.
At the same time SampleProfileReaderItaniumRemapper is changed from a proxy
of SampleProfileReader to a helper member in SampleProfileReader.
Vedant Kumar [Fri, 18 Oct 2019 21:05:30 +0000 (21:05 +0000)]
Disable exit-on-SIGPIPE in lldb
Occasionally, during test teardown, LLDB writes to a closed pipe.
Sometimes the communication is inherently unreliable, so LLDB tries to
avoid being killed due to SIGPIPE (it calls `signal(SIGPIPE, SIG_IGN)`).
However, LLVM's default SIGPIPE behavior overrides LLDB's, causing it to
exit with IO_ERR.
Opt LLDB out of the default SIGPIPE behavior. I expect that this will
resolve some LLDB test suite flakiness (tests randomly failing with
IO_ERR) that we've seen since r344372.
Reid Kleckner [Fri, 18 Oct 2019 21:01:41 +0000 (21:01 +0000)]
[X86] Fix register parsing in .seh_* in Intel syntax
Previously, the parser checked for a '%' prefix to indicate a register.
In Intel syntax mode, LLVM does not print a '%' prefix on registers, so
LLVM could not parse its own assembly output. Instead, require that
register numbers be integer literals, or at least start with an integer
literal, which is consistent with .cfi_* directive register parsing.
Thomas Lively [Fri, 18 Oct 2019 20:27:30 +0000 (20:27 +0000)]
[WebAssembly] Allow multivalue signatures in object files
Summary:
Also changes the wasm YAML format to reflect the possibility of having
multiple return types and to put the returns after the params for
consistency with the binary encoding.
Quentin Colombet [Fri, 18 Oct 2019 20:13:42 +0000 (20:13 +0000)]
[GISel][CallLowering] Make isIncomingArgumentHandler a pure virtual method
The default implementation of isIncomingArgumentHandler could lead
to generating incorrect code.
Make it a pure virtual method, so that targets know they have to
override it to produce correct code.
Roman Lebedev [Fri, 18 Oct 2019 19:32:47 +0000 (19:32 +0000)]
[CVP] After proving that @llvm.with.overflow()/@llvm.sat() don't overflow, also try to prove other no-wrap
Summary:
CVP, unlike InstCombine, does not run until exhaustion.
It only does a single pass.
When dealing with those special binops, if we prove that they can
safely be demoted into their usual binop form,
we set the no-wrap flags we deduced. But when dealing with the usual binops,
we try to deduce both no-wrap flags.
So if we convert e.g. @llvm.uadd.with.overflow() to `add nuw`,
we won't attempt to check whether it can be `add nuw nsw`.
This patch proposes to call `processBinOp()` on the newly-created binop,
which is identical to what we already do for div/rem.
Lang Hames [Fri, 18 Oct 2019 18:25:15 +0000 (18:25 +0000)]
[examples] Add an example of how to use JITLink and small-code-model with LLJIT.
JITLink is LLVM's newer jit-linker. It is an alternative to (and hopefully
eventually a replacement for) LLVM's older jit-linker, RuntimeDyld. Unlike
RuntimeDyld, which requires JIT'd code to be compiled with the large code
model, JITLink can link code compiled with the small code model, which is
the native code model for a number of targets (including all supported MachO
targets).
This example shows how to:
-- Create a JITLink InProcessMemoryManager
-- Set the code model to small
-- Use a JITLink backed ObjectLinkingLayer as the linking layer for LLJIT
(rather than the default RTDyldObjectLinkingLayer).
Note: This example will only work on platforms supported by JITLink. As of
this commit that's MachO/x86-64 and MachO/arm64.
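A rough sketch of the configuration the example demonstrates (not the example's own code; the LLJIT/JITLink builder callbacks have changed shape across LLVM versions, so treat the exact signatures below as approximate):
```
#include "llvm/ExecutionEngine/JITLink/JITLinkMemoryManager.h"
#include "llvm/ExecutionEngine/Orc/JITTargetMachineBuilder.h"
#include "llvm/ExecutionEngine/Orc/LLJIT.h"
#include "llvm/ExecutionEngine/Orc/ObjectLinkingLayer.h"

using namespace llvm;
using namespace llvm::orc;

Expected<std::unique_ptr<LLJIT>> createJITLinkLLJIT() {
  auto JTMB = JITTargetMachineBuilder::detectHost();
  if (!JTMB)
    return JTMB.takeError();
  // Use the small code model, which is what JITLink links.
  JTMB->setCodeModel(CodeModel::Small);

  return LLJITBuilder()
      .setJITTargetMachineBuilder(std::move(*JTMB))
      // Swap the default RTDyldObjectLinkingLayer for a JITLink-backed
      // ObjectLinkingLayer with an in-process memory manager.
      .setObjectLinkingLayerCreator([](ExecutionSession &ES, const Triple &) {
        return std::make_unique<ObjectLinkingLayer>(
            ES, std::make_unique<jitlink::InProcessMemoryManager>());
      })
      .create();
}
```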
Julian Lettner [Fri, 18 Oct 2019 17:59:46 +0000 (17:59 +0000)]
[lit] Reduce value of synthesized timeouts
Large timeout values (one year, positive infinity) trip up Python on
Windows with "OverflowError: timeout value is too large". One week
seems to work and is still large enough in practice.
Thanks to Simon Pilgrim for helping me test this.
https://reviews.llvm.org/rL375171
Jay Foad [Fri, 18 Oct 2019 16:16:36 +0000 (16:16 +0000)]
[IR] Reimplement FPMathOperator::classof as a whitelist.
Summary:
This makes it much easier to verify that the implementation matches the
documentation. It uncovered a bug in the unit tests where we were
accidentally setting fast math flags on a load instruction.
James Molloy [Fri, 18 Oct 2019 14:48:35 +0000 (14:48 +0000)]
[DFAPacketizer] Fix large compile-time regression for VLIW targets
D68992 / rL375086 refactored the packetizer and removed a bunch of logic. Unfortunately it creates an Automaton object whenever a DFAPacketizer is required. These objects have no longevity, and in particular on a debug build the population of the Automaton's transition map from the underlying table is very slow (because it is called ~10 times per MachineFunction, in the testcase I'm looking at).
This patch changes Automaton to wrap its underlying constant data in std::shared_ptr, which allows trivial copy construction. The DFAPacketizer creation function now creates a static archetypical Automaton and copies that whenever a new DFAPacketizer is required.
This takes a testcase down from ~20s to ~0.5s in debug mode.
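A minimal sketch of the sharing pattern described above, using hypothetical types rather than the real Automaton class:
```
#include <memory>
#include <vector>

// The expensive-to-populate transition data is built once and then only read.
struct TransitionTable {
  std::vector<unsigned> Entries;
};

class Automaton {
  std::shared_ptr<const TransitionTable> Table; // shared, immutable data
  unsigned State = 0;                           // cheap per-copy state

public:
  explicit Automaton(TransitionTable T)
      : Table(std::make_shared<const TransitionTable>(std::move(T))) {}
  // The implicit copy constructor just bumps a refcount instead of
  // repopulating the table, so handing out per-DFAPacketizer copies is cheap.
};

// Build one archetypical automaton up front; copy it whenever needed.
const Automaton &getArchetype() {
  static const Automaton A{TransitionTable{{/* populated from a table */}}};
  return A;
}
Automaton makePacketizerAutomaton() { return getArchetype(); }
```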
Roman Lebedev [Fri, 18 Oct 2019 13:20:16 +0000 (13:20 +0000)]
[NFC][CVP] Count all the no-wraps we proved
Summary:
It looks like this is the only missing statistic in the CVP pass.
Since we prove NSW and NUW separately, I'd think we should count them separately too.
Graham Hunter [Fri, 18 Oct 2019 11:48:35 +0000 (11:48 +0000)]
[AArch64][SVE] Add SPLAT_VECTOR ISD Node
Adds a new ISD node to replicate a scalar value across all elements of
a vector. This is needed for scalable vectors, since BUILD_VECTOR cannot
be used.
Fixes up default type legalization for scalable vectors after the
new MVT type ranges were introduced.
At present I only use this node for scalable vectors. A DAGCombine has
been added to transform a BUILD_VECTOR into a SPLAT_VECTOR if all
elements are the same, but only if the default operation action of
Expand has been overridden by the target.
I've only added result promotion legalization for scalable vector
i8/i16/i32/i64 types in AArch64 for now.
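For illustration, a hypothetical lowering helper using the new node (the helper name is made up; ISD::SPLAT_VECTOR and SelectionDAG::getNode are the real pieces):
```
#include "llvm/CodeGen/SelectionDAG.h"

using namespace llvm;

// Build a vector whose elements are all copies of Scalar. For scalable vector
// types BUILD_VECTOR cannot enumerate the elements, so the splat is expressed
// with the new SPLAT_VECTOR node instead.
static SDValue buildSplat(SelectionDAG &DAG, const SDLoc &DL, EVT VT,
                          SDValue Scalar) {
  return DAG.getNode(ISD::SPLAT_VECTOR, DL, VT, Scalar);
}
```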
David Green [Fri, 18 Oct 2019 10:35:46 +0000 (10:35 +0000)]
[AArch64] Don't combine callee-save and local stack adjustment when optimizing for size
For arm64, D18619 introduced the ability to combine bumping the stack pointer
upfront in case it needs to be bumped for both the callee-save area as well as
the local stack area.
That diff already remarks that "This change can cause an increase in
instructions", but argues that even when that happens, it should be still be a
performance benefit because the number of micro-ops is reduced.
We have observed that this code-size increase can be significant in practice.
This diff disables combining stack bumping for methods that are marked as
optimize-for-size.
Example of a prologue with the behavior before this diff (combining stack bumping when possible):
sub sp, sp, #0x40
stp d9, d8, [sp, #0x10]
stp x20, x19, [sp, #0x20]
stp x29, x30, [sp, #0x30]
add x29, sp, #0x30
[... compute x8 somehow ...]
stp x0, x8, [sp]
And after this diff, if the method is marked as optimize-for-size:
stp d9, d8, [sp, #-0x30]!
stp x20, x19, [sp, #0x10]
stp x29, x30, [sp, #0x20]
add x29, sp, #0x20
[... compute x8 somehow ...]
stp x0, x8, [sp, #-0x10]!
Note that without combining the stack bump there are two auto-decrements,
nicely folded into the stp instructions, whereas otherwise there is a single
sub sp, ... instruction, but not folded.
David Green [Fri, 18 Oct 2019 09:47:48 +0000 (09:47 +0000)]
[Codegen] Alter the default promotion for saturating adds and subs
The default promotion for the add_sat/sub_sat nodes currently does:
ANY_EXTEND iN to iM
SHL by M-N
[US][ADD|SUB]SAT
L/ASHR by M-N
If the promoted add_sat or sub_sat node is not legal, this can produce code
that effectively does a lot of shifting (and requiring large constants to be
materialised) just to use the overflow flag. It is simpler to just do the
saturation manually, using the higher bitwidth addition and a min/max against
the saturating bounds. That is what this patch attempts to do.
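A plain C++ illustration of the two strategies for an i8 unsigned saturating add promoted to 32 bits (the function names are only for the example):
```
#include <algorithm>
#include <cstdint>

// Old-style promotion: shift the operands to the top of the wide type so the
// wide saturating operation clamps at the right bit position, then shift back.
uint8_t uadd_sat_via_shifts(uint8_t a, uint8_t b) {
  uint32_t wa = uint32_t(a) << 24, wb = uint32_t(b) << 24;
  uint32_t sum = wa > UINT32_MAX - wb ? UINT32_MAX : wa + wb; // wide uadd.sat
  return uint8_t(sum >> 24);
}

// New default: do the addition at the wider width and clamp against the
// narrow type's saturating bound with a min.
uint8_t uadd_sat_via_minmax(uint8_t a, uint8_t b) {
  uint32_t sum = uint32_t(a) + uint32_t(b); // cannot overflow 32 bits
  return uint8_t(std::min<uint32_t>(sum, 0xFFu));
}
```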
Kerry McLaughlin [Fri, 18 Oct 2019 09:40:16 +0000 (09:40 +0000)]
[AArch64][SVE] Implement unpack intrinsics
Summary:
Implements the following intrinsics:
- int_aarch64_sve_sunpkhi
- int_aarch64_sve_sunpklo
- int_aarch64_sve_uunpkhi
- int_aarch64_sve_uunpklo
This patch also adds AArch64ISD nodes for UNPK instead of implementing
the intrinsics directly, as they are required for a future patch which
implements the sign/zero extension of legal vectors.
This patch includes tests for the Subdivide2Argument type added by D67549
David Blaikie [Thu, 17 Oct 2019 23:02:19 +0000 (23:02 +0000)]
DebugInfo: Move loclist base address from DwarfFile to DebugLocStream
There's no need to have more than one of these (there can be two
DwarfFiles - one for the .o, one for the .dwo - but only one loc/loclist
section (either in the .o or the .dwo) & certainly one per
DebugLocStream, which is currently singular in DwarfDebug)
Jordan Rupprecht [Thu, 17 Oct 2019 21:55:43 +0000 (21:55 +0000)]
Reland [llvm-objdump] Use a counter for llvm-objdump -h instead of the section index.
This relands r374931 (reverted in r375088). It fixes 32-bit builds by using the right format string specifier for uint64_t (PRIu64) instead of `%d`.
Original description:
When listing the index in `llvm-objdump -h`, use a zero-based counter instead of the actual section index (e.g. shdr->sh_index for ELF).
While this is effectively a noop for now (except one unit test for XCOFF), the index values will change in a future patch that filters certain sections out (e.g. symbol tables). See D68669 for more context. Note: the test case in `test/tools/llvm-objdump/X86/section-index.s` already covers the case of incrementing the section index counter when sections are skipped.
Don Hinton [Thu, 17 Oct 2019 21:54:15 +0000 (21:54 +0000)]
[Error] Make llvm::cantFail include the original error messages
Summary:
The current implementation discards the original errors and just outputs
the message parameter passed to llvm::cantFail. This change appends
the original error message(s), so the user can see exactly why
cantFail failed. The new logic is conditional on NDEBUG.
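An illustrative use of cantFail's optional message (the function and strings are made up); where the new logic is enabled, the failure report now carries the underlying error text as well:
```
#include "llvm/ADT/StringRef.h"
#include "llvm/Support/Error.h"

using namespace llvm;

static Expected<int> parsePort(StringRef S) {
  int Port;
  if (S.getAsInteger(/*Radix=*/10, Port))
    return createStringError(inconvertibleErrorCode(), "invalid port '%s'",
                             S.str().c_str());
  return Port;
}

int main() {
  // If parsePort fails, cantFail aborts; with this change the report includes
  // both "expected a valid port" and the original "invalid port ..." message.
  int Port = cantFail(parsePort("not-a-number"), "expected a valid port");
  return Port;
}
```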
Jordan Rupprecht [Thu, 17 Oct 2019 20:51:00 +0000 (20:51 +0000)]
[llvm-objcopy] Add support for shell wildcards
Summary: GNU objcopy accepts the --wildcard flag to allow wildcard matching on symbol-related flags. (Note: it's implicitly true for section flags).
The basic syntax allows *, ?, \, and [], which work similarly to how they work in a shell. Additionally, starting a wildcard with ! negates it, preventing names that match from being selected.
Use an updated GlobPattern in libSupport to handle these patterns. It does not fully match the `fnmatch` used by GNU objcopy since named character classes (e.g. `[[:digit:]]`) are not supported, but this should support most existing use cases (mostly just `*` is what's used anyway).
Julian Lettner [Thu, 17 Oct 2019 20:22:32 +0000 (20:22 +0000)]
Reland "[lit] Synthesize artificial deadline"
We always want to use a deadline when calling `result.await`. Let's
synthesize an artificial deadline (now plus one year) to simplify code
and do less busy waiting.
Thanks to Reid Kleckner for diagnosing that a deadline of "positive
infinity" does not work with Python 3 anymore. See commit: 4ff1e34b606d9a9fcfd8b8b5449a558315af94e5
Shoaib Meenai [Thu, 17 Oct 2019 19:24:58 +0000 (19:24 +0000)]
[cmake] Pass external project source directories to sub-configures
We're passing LLVM_EXTERNAL_PROJECTS to cross-compilation configures, so
we also need to pass the source directories of those projects, otherwise
configuration can fail from not finding them.
Header64.offset/Header64.size are uint64_t, thus we should not
truncate them to uint32_t. Moreover, there are a number of places
where we sum the offset and the size (e.g. in various checks in MachOUniversal.cpp);
the truncation causes issues since the offset/size can perfectly fit into uint32_t
while the sum overflows.
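A small standalone illustration of the hazard: both values fit in uint32_t, but their sum only fits in uint64_t, so truncating before the checks hides the overflow.
```
#include <cstdint>
#include <cstdio>

int main() {
  uint64_t Offset = 0xF0000000; // fits in uint32_t
  uint64_t Size = 0x20000000;   // fits in uint32_t
  uint32_t Truncated = uint32_t(Offset) + uint32_t(Size); // wraps to 0x10000000
  uint64_t Correct = Offset + Size;                        // 0x110000000
  std::printf("truncated sum: %#x, correct sum: %#llx\n", (unsigned)Truncated,
              (unsigned long long)Correct);
  return 0;
}
```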
Nemanja Ivanovic [Thu, 17 Oct 2019 18:24:28 +0000 (18:24 +0000)]
[PowerPC] Turn on CR-Logical reducer pass
Quite a while ago, we implemented a pass that will reduce the number of
CR-logical operations we emit. It does so by converting a CR-logical operation
into a branch. We have kept this off by default because it seemed to cause a
significant regression with one benchmark.
However, that regression turned out to be due to a completely unrelated
reason - AADB introducing a self-copy that is a priority-setting nop and it was
just exacerbated by this pass.
Now that we understand the reason for the only degradation, we can turn this
pass on by default. We have long since fixed the cause for the degradation.
Jordan Rupprecht [Thu, 17 Oct 2019 18:09:05 +0000 (18:09 +0000)]
Reapply r375051: [support] GlobPattern: add support for `\` and `[!...]`, and allow `]` in more places
Reland r375051 (reverted in r375052) after fixing lld tests on Windows in r375126 and r375131.
Original description: Update GlobPattern in libSupport to handle a few more cases. It does not fully match the `fnmatch` used by GNU objcopy since named character classes (e.g. `[[:digit:]]`) are not supported, but this should support most existing use cases (mostly just `*` is what's used anyway).
This will be used to implement the `--wildcard` flag in llvm-objcopy to be more compatible with GNU objcopy.
This is split off of D66613 to land the libSupport changes separately. The llvm-objcopy part will land soon.
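A small sketch of exercising the new syntax through GlobPattern's existing create/match API (the pattern and function are just examples):
```
#include "llvm/ADT/StringRef.h"
#include "llvm/Support/Error.h"
#include "llvm/Support/GlobPattern.h"

using namespace llvm;

// "\." matches a literal dot, and "[!0-9]" is a negated character class that
// matches any single non-digit character.
static bool endsInNonDigit(StringRef Name) {
  GlobPattern P = cantFail(GlobPattern::create("foo\\.*[!0-9]"));
  return P.match(Name);
}
// endsInNonDigit("foo.bar")  -> true
// endsInNonDigit("foo.bar1") -> false
```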
Philip Reames [Thu, 17 Oct 2019 17:29:07 +0000 (17:29 +0000)]
[IndVars] Split loop predication out of optimizeLoopExits [NFC]
In the process of writing D69009, I realized we have two distinct sets of invariants within this single function, and basically no shared logic. The optimize loop exit transforms (including the new one in D69009) only care about *analyzeable* exits. Loop predication, on the other hand, has to reason about *all* exits. At the moment, we have the property (due to the requirement for an exact btc) that all exits are analyzeable, but that will likely change in the future as we add widenable condition support.
Reid Kleckner [Thu, 17 Oct 2019 17:28:31 +0000 (17:28 +0000)]
[codeview] Workaround for PR43479, don't re-emit instr labels
Summary:
In the long run we should come up with another mechanism for marking
call instructions as heap allocation sites, and remove this workaround.
For now, we've had two bug reports about this, so let's apply this
workaround. SLH (the other client of instruction labels) probably has
the same bug, but the solution there is more likely to be to mark the
call instruction as not duplicatable, which doesn't work for debug info.
Julian Lettner [Thu, 17 Oct 2019 16:01:18 +0000 (16:01 +0000)]
[lit] Synthesize artificial deadline
We always want to use a deadline when calling `result.await`. Let's
synthesize an artificial deadline (positive infinity) to simplify code
and do less busy waiting.
Joel E. Denny [Thu, 17 Oct 2019 14:03:06 +0000 (14:03 +0000)]
[lit] Extend internal diff to support `-` argument
When using lit's internal shell, RUN lines like the following
accidentally execute an external `diff` instead of lit's internal
`diff`:
```
# RUN: program | diff file -
```
Such cases exist now, in `clang/test/Analysis` for example. We are
preparing patches to ensure lit's internal `diff` is called in such
cases, which will then fail because lit's internal `diff` doesn't
recognize `-` as a command-line option. This patch adds support for
`-` to mean stdin.
Joel E. Denny [Thu, 17 Oct 2019 14:02:42 +0000 (14:02 +0000)]
[lit] Make internal diff work in pipelines
When using lit's internal shell, RUN lines like the following
accidentally execute an external `diff` instead of lit's internal
`diff`:
```
# RUN: program | diff file -
# RUN: not diff file1 file2 | FileCheck %s
```
Such cases exist now, in `clang/test/Analysis` for example. We are
preparing patches to ensure lit's internal `diff` is called in such
cases, which will then fail because lit's internal `diff` cannot
currently be used in pipelines and doesn't recognize `-` as a
command-line option.
To enable pipelines, this patch moves lit's `diff` implementation into
an out-of-process script, similar to lit's `cat` implementation. A
follow-up patch will implement `-` to mean stdin.
Sam Parker [Thu, 17 Oct 2019 12:11:18 +0000 (12:11 +0000)]
[ARM][MVE] Enable truncating masked stores
Allow us to generate truncating masked stores, which take v4i32 and
v8i16 vectors and can store to v4i8, v4i16 and v8i8 memory.
Removed support for unaligned masked stores.
Roman Lebedev [Thu, 17 Oct 2019 11:01:29 +0000 (11:01 +0000)]
[LoopIdiom] BCmp: check, not assert that loop exits exit out of the loop (PR43687)
We can't normally stumble into that assertion because a tautological
*conditional* `br` in the loop body is required, one that always
branches to the loop latch. But that should always have been folded
to an unconditional branch before we get here.
That is not guaranteed, however, if the pass is run standalone.
So let's just promote the assertion into a proper check.
Oliver Stannard [Thu, 17 Oct 2019 09:58:57 +0000 (09:58 +0000)]
Reland: Dead Virtual Function Elimination
Remove dead virtual functions from vtables with
replaceNonMetadataUsesWith, so that CGProfile metadata gets cleaned up
correctly.
Original commit message:
Currently, it is hard for the compiler to remove unused C++ virtual
functions, because they are all referenced from vtables, which are referenced
by constructors. This means that if the constructor is called from any live
code, then we keep every virtual function in the final link, even if there
are no call sites which can use it.
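A contrived, hypothetical example of that situation:
```
struct Base {
  virtual void used() {}
  virtual void unused() {} // never called anywhere in the program
  virtual ~Base() = default;
};

void callOnly(Base *B) { B->used(); } // the only virtual call site

int main() {
  Base Obj;       // the constructor references Base's vtable, which in turn
  callOnly(&Obj); // references unused() and keeps it alive in the final link
}
```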
This patch allows unused virtual functions to be removed during LTO (and
regular compilation in limited circumstances) by using type metadata to match
virtual function call sites to the vtable slots they might load from. This
information can then be used in the global dead code elimination pass instead
of the references from vtables to virtual functions, to more accurately
determine which functions are reachable.
To make this transformation safe, I have changed clang's code-generation to
always load virtual function pointers using the llvm.type.checked.load
intrinsic, instead of regular load instructions. I originally tried writing
this using clang's existing code-generation, which uses the llvm.type.test
and llvm.assume intrinsics after doing a normal load. However, it is possible
for optimisations to obscure the relationship between the GEP, load and
llvm.type.test, causing GlobalDCE to fail to find virtual function call
sites.
The existing linkage and visibility types don't accurately describe the scope
in which a virtual call could be made which uses a given vtable. This is
wider than the visibility of the type itself, because a virtual function call
could be made using a more-visible base class. I've added a new
!vcall_visibility metadata type to represent this, described in
TypeMetadata.rst. The internalization pass and libLTO have been updated to
change this metadata when linking is performed.
This doesn't currently work with ThinLTO, because it needs to see every call
to llvm.type.checked.load in the linkage unit. It might be possible to
extend this optimisation to be able to use the ThinLTO summary, as was done
for devirtualization, but until then that combination is rejected in the
clang driver.
To test this, I've written a fuzzer which generates random C++ programs with
complex class inheritance graphs, and virtual functions called through object
and function pointers of different types. The programs are spread across
multiple translation units and DSOs to test the different visibility
restrictions.
I've also tried doing bootstrap builds of LLVM to test this. This isn't
ideal, because only classes in anonymous namespaces can be optimised with
-fvisibility=default, and some parts of LLVM (plugins and bugpoint) do not
work correctly with -fvisibility=hidden. However, there are only 12 test
failures when building with -fvisibility=hidden (and an unmodified compiler),
and this change does not cause any new failures for either value of
-fvisibility.
On the 7 C++ sub-benchmarks of SPEC2006, this gives a geomean code-size
reduction of ~6%, over a baseline compiled with "-O2 -flto
-fvisibility=hidden -fwhole-program-vtables". The best cases are reductions
of ~14% in 450.soplex and 483.xalancbmk, and there are no code size
increases.
I've also run this on a set of 8 mbed-os examples compiled for Armv7M, which
show a geomean size reduction of ~3%, again with no size increases.
I had hoped that this would have no effect on performance, which would allow
it to always be enabled (when using -fwhole-program-vtables). However, the
changes in clang to use the llvm.type.checked.load intrinsic are causing ~1%
performance regression in the C++ parts of SPEC2006. It should be possible to
recover some of this perf loss by teaching optimisations about the
llvm.type.checked.load intrinsic, which would make it worth turning this on
by default (though it's still dependent on -fwhole-program-vtables).
Mikhail Maltsev [Thu, 17 Oct 2019 08:59:06 +0000 (08:59 +0000)]
[Analysis] Don't assume that unsigned overflow can't happen in EmitGEPOffset (PR42699)
Summary:
Currently when computing a GEP offset using the function EmitGEPOffset
for the following instruction
getelementptr inbounds i32, i32* %p, i64 %offs
we get
mul nuw i64 %offs, 4
Unfortunately we cannot assume that unsigned wrapping won't happen
here because %offs is allowed to be negative.
Making such assumptions can lead to miscompilations: see the new test
test24_neg_offs in InstCombine/icmp.ll, which shows the incorrect comparison
InstCombine would generate without the patch.
Hans Wennborg [Thu, 17 Oct 2019 08:52:29 +0000 (08:52 +0000)]
Revert r374931 "[llvm-objdump] Use a counter for llvm-objdump -h instead of the section index."
This broke llvm-objdump in 32-bit builds, see e.g.
http://lab.llvm.org:8011/builders/clang-cmake-armv7-quick/builds/10925
> Summary:
> When listing the index in `llvm-objdump -h`, use a zero-based counter instead of the actual section index (e.g. shdr->sh_index for ELF).
>
> While this is effectively a noop for now (except one unit test for XCOFF), the index values will change in a future patch that filters certain sections out (e.g. symbol tables). See D68669 for more context. Note: the test case in `test/tools/llvm-objdump/X86/section-index.s` already covers the case of incrementing the section index counter when sections are skipped.
>
> Reviewers: grimar, jhenderson, espindola
>
> Reviewed By: grimar
>
> Subscribers: emaste, sbc100, arichardson, aheejin, arphaman, seiya, llvm-commits, MaskRay
>
> Tags: #llvm
>
> Differential Revision: https://reviews.llvm.org/D68848
James Molloy [Thu, 17 Oct 2019 08:34:29 +0000 (08:34 +0000)]
[DFAPacketizer] Use DFAEmitter. NFC.
Summary:
This is a NFC change that removes the NFA->DFA construction and emission logic from DFAPacketizerEmitter and instead uses the generic DFAEmitter logic. This allows DFAPacketizer to use the Automaton class from Support and remove a bunch of logic there too.
After this patch, DFAPacketizer is mostly logic for grepping Itineraries and collecting functional units, with no state machine logic. This will allow us to modernize by removing the 16-functional-unit limit and supporting non-itinerary functional units. This is all for followup patches.