Summary:
Enabling MemorySSA in the old pass manager leads to MemorySSA being run
twice, because LCSSA and LoopSimplify do not preserve it.
This is the first step to address that: target LCSSA.
LCSSA does not make any changes that invalidate MemorySSA, so it
preserves it by design. It must preserve AA as well, for this to hold.
After this patch, MemorySSA is still run twice in the old pass manager.
Step two follows: target LoopSimplify.
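For illustration, an abridged sketch of what the preservation amounts to in the
legacy LCSSAWrapperPass::getAnalysisUsage() (only the preservation-related lines
are shown; the real pass declares several other required analyses as well):

#include "llvm/Analysis/AliasAnalysis.h"
#include "llvm/Analysis/MemorySSA.h"
using namespace llvm;

void LCSSAWrapperPass::getAnalysisUsage(AnalysisUsage &AU) const {
  AU.setPreservesCFG();
  // LCSSA only inserts phi nodes for values live across loop exits; it never
  // adds, removes, or reorders memory operations, so MemorySSA stays valid.
  AU.addPreserved<AAResultsWrapperPass>();
  AU.addPreserved<MemorySSAWrapperPass>();
}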
Apparently FileCheck wasn't actually matching the fallback check lines in
arm64-vfloatintrinsics.ll properly. So, there were selection fallbacks for
G_INTRINSIC_TRUNC there.
Actually hook it up into AArch64InstructionSelector.cpp and write a proper
selection test.
I guess I'll figure out the FileCheck magic to make the fallback checks work
properly in arm64-vfloatintrinsics.ll.
; After optimization
%1 = tail call i8* @foo1()
%2 = tail call i8* @llvm.objc.retainAutoreleasedReturnValue(i8* %1)
store i8* %2, i8** @g0, align 8 ; %1 is replaced with %2
Before replacing the argument use, DominatorTree::dominates is called to
determine whether the user instruction is dominated by the ObjC runtime
function call instruction. The call to DominatorTree::dominates can be
expensive if the two instructions belong to the same basic block and the
basic block is large. This patch checks the basic block size and simply
bails out if it exceeds the limit set by the command line option
"arc-contract-max-bb-size".
David Blaikie [Tue, 23 Apr 2019 19:00:45 +0000 (19:00 +0000)]
Reapply: "DebugInfo: Emit only one kind of accelerated access/name table""
Originally committed in r358931
Reverted in r358997
It seems this change made Apple accelerator tables miss names (because
name emission started respecting the CU's NameTableKind of GNU and
assumed that shouldn't produce accelerated names either), which is never
correct (Apple accelerator tables don't have separators or CU lists - if
present, they must describe all names in all CUs).
Original Description:
Currently to opt in to debug_names in DWARFv5, the IR must contain
'nameTableKind: Default' which also enables debug_pubnames.
Instead, only allow one of {debug_names, apple_names, debug_pubnames,
debug_gnu_pubnames}.
nameTableKind: Default gives debug_names in DWARFv5 and greater,
debug_pubnames in v4 and earlier - and apple_names when tuning for lldb
on MachO.
nameTableKind: GNU always gives gnu_pubnames
Teresa Johnson [Tue, 23 Apr 2019 18:56:19 +0000 (18:56 +0000)]
[ThinLTO] Pass down opt level to LTO backend and handle -O0 LTO in new PM
Summary:
The opt level was not being passed down to the ThinLTO backend when
invoked via clang (for distributed ThinLTO).
This exposed an issue where the new PM was asserting if the Thin or
regular LTO backend pipelines were invoked with -O0 (not a new issue;
it could be provoked by invoking the in-process *LTO backends via the
linker using the new PM and -O0). Fix this similarly to the old PM,
where -O0 only does the necessary lowering of type metadata (the WPD
and LowerTypeTests passes) and then quits, rather than asserting.
llvm-cvtres: Split addChild(ID) into two functions
Before, there was an IsData parameter. Now, there are two different
functions for data nodes and ID nodes. No behavior change, needed for a
follow-up change to make two data nodes (but not two ID nodes) with the
same ID an error.
For consistency, rename another addChild() overload to addNameChild().
Add urem support to ConstantRange, so we can handle it in LVI. This
is an approximate implementation that tries to capture the most useful
conditions: If the LHS is always strictly smaller than the RHS, then
the urem is a no-op and the result is the same as the LHS range.
Otherwise the lower bound is zero and the upper bound is
min(LHSMax, RHSMax - 1).
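A sketch of that rule in terms of the existing ConstantRange/APInt helpers
(the actual implementation may differ, e.g. in how it handles the empty and
always-zero-divisor cases):

ConstantRange ConstantRange::urem(const ConstantRange &RHS) const {
  // Degenerate inputs: nothing useful to say, or the divisor is always zero.
  if (isEmptySet() || RHS.isEmptySet() || RHS.getUnsignedMax().isNullValue())
    return ConstantRange(getBitWidth(), /*isFullSet=*/false);

  // If the LHS is always strictly smaller than the RHS, the urem is a no-op
  // and the result is just the LHS range.
  if (getUnsignedMax().ult(RHS.getUnsignedMin()))
    return *this;

  // Otherwise the result lies in [0, min(LHSMax, RHSMax - 1)].
  APInt Upper = APIntOps::umin(getUnsignedMax(), RHS.getUnsignedMax() - 1);
  return ConstantRange(APInt::getNullValue(getBitWidth()), Upper + 1);
}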
Joel E. Denny [Tue, 23 Apr 2019 17:04:15 +0000 (17:04 +0000)]
[APSInt][OpenMP] Fix isNegative, etc. for unsigned types
Without this patch, APSInt inherits APInt::isNegative, which merely
checks the sign bit without regard to whether the type is actually
signed. isNonNegative and isStrictlyPositive call isNegative and so
are also affected.
This patch adjusts APSInt to override isNegative, isNonNegative, and
isStrictlyPositive with implementations that consider whether the type
is signed.
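Roughly, the overrides look like the following (a sketch consistent with the
description; the exact code in the patch may differ). These are members of
APSInt, which already tracks signedness via isSigned():

// The sign bit only means "negative" for signed types.
bool isNegative() const { return isSigned() && APInt::isNegative(); }
bool isNonNegative() const { return !isNegative(); }
// Strictly positive: non-negative and non-zero.
bool isStrictlyPositive() const { return isNonNegative() && !isNullValue(); }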
A large set of Clang OpenMP tests are affected. Without this patch,
these tests assume that `true` is not a valid argument for clauses
like `collapse`. Indeed, `true` fails APInt::isStrictlyPositive but
not APSInt::isStrictlyPositive. This patch adjusts those tests to
assume `true` should be accepted.
This patch also adds tests revealing various other similar fixes due
to APSInt::isNegative calls in Clang's ExprConstant.cpp and
SemaExpr.cpp: `++` and `--` overflow in `constexpr`, evaluated object
size based on `alloc_size`, `<<` and `>>` shift count validation, and
OpenMP array section validation.
Philip Reames [Tue, 23 Apr 2019 15:25:14 +0000 (15:25 +0000)]
[InstCombine] Convert a masked.load of a dereferenceable address to an unconditional load
If we have a masked.load from a location we know to be dereferenceable, we can simply issue a speculative unconditional load against that address. The key advantage is that it produces IR which is well understood by the optimizer. The select (cnd, load, passthrough) form produced should be pattern matchable back to hardware predication if profitable.
[x86] use psubus for more vsetcc lowering (PR39859)
Circling back to a leftover bit from PR39859:
https://bugs.llvm.org/show_bug.cgi?id=39859#c1
...we have this counter-intuitive (based on the test diffs) opportunity to use 'psubus'.
This appears to be the better perf option for both Haswell and Jaguar based on llvm-mca.
We already do this transform for the SETULT predicate, so this makes the code more
symmetrical too. If we have pminub/pminuw, we prefer those, so this should not affect
anything but pre-SSE4.1 subtargets.
Tim Northover [Tue, 23 Apr 2019 13:50:13 +0000 (13:50 +0000)]
ARM: disallow add/sub to sp unless Rn is also sp.
The manual says that Thumb2 add/sub instructions are only allowed to modify sp
if the first source is also sp. This is slightly different from the usual rGPR
restriction since it's context-sensitive, so implement it in C++.
If we only match build vectors, we can miss some patterns
that use shuffles as seen in the affected tests.
Note that the underlying calls within getSplatSourceVector()
have the potential for compile-time explosion because of
exponential recursion looking through binop opcodes, but
currently the list of supported opcodes is very limited.
Both of those problems should be addressed in follow-up
patches.
Summary:
When an LCSSA phi survives through instruction selection, the pass
ends up removing that phi entirely because it is dominated by the
logic that does the lanemask merging.
This then used to trigger an assertion when processing a dependent
phi instruction.
David Green [Tue, 23 Apr 2019 12:11:26 +0000 (12:11 +0000)]
[ARM] Update check for CBZ in Ifcvt
The check for creating CBZ in the constant island pass recently obtained the
ability to search backwards to find a Cmp instruction. The code in IfCvt should
mirror this to allow more conversions to the smaller form. The common code has
been pulled out into a separate function to be shared between the two places.
David Green [Tue, 23 Apr 2019 11:46:58 +0000 (11:46 +0000)]
[ARM] Don't replicate instructions in Ifcvt at minsize
Ifcvt can replicate instructions as it converts them to be predicated. This
stops that from happening on Thumb2 targets at minsize, where an extra IT
instruction is likely needed.
[DAGCombiner] Combine OR as ADD when no common bits are set
Summary:
The DAGCombiner is rewriting (canonicalizing) an ISD::ADD
with no common bits set in the operands as an ISD::OR node.
This could sometimes result in "missing out" on some
combines that normally are performed for ADD. To be more
specific this could happen if we already have rewritten an
ADD into OR, and later (after legalizations or combines)
we expose patterns that could have been optimized if we
had seen the OR as an ADD (e.g. reassociations based on ADD).
To make the DAG combiner less sensitive to whether ADD or OR is
used for these "no common bits set" ADD/OR operations we
now apply most of the ADD combines also to an OR operation,
when value tracking indicates that the operands have no
common bits set.
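A tiny standalone illustration of the identity this relies on: when the
operands share no set bits, OR and ADD compute the same value (the values here
are just for demonstration):

#include <cassert>
#include <cstdint>

int main() {
  uint32_t A = 0xF0F0, B = 0x0101; // no common bits set: (A & B) == 0
  assert((A & B) == 0u);
  // With no carries possible, the OR form and the ADD form are equivalent,
  // so folds written for ADD are equally valid for this OR.
  assert((A | B) == A + B);
  return 0;
}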
This patch provides intrinsics support for Memory Tagging Extension (MTE),
which was introduced with the Armv8.5-a architecture.
The intrinsics are described in detail in the latest
ACLE Q1 2019 documentation: https://developer.arm.com/docs/101028/latest
Reviewed by: David Spickett
Differential Revision: https://reviews.llvm.org/D60486
George Rimar [Tue, 23 Apr 2019 09:16:53 +0000 (09:16 +0000)]
[llvm-mc] - Properly set the address align field of compressed sections.
Regarding compressed sections, the spec
(https://docs.oracle.com/cd/E37838_01/html/E36783/section_compression.html)
says that the sh_addralign field of the section header for a compressed
section reflects the requirements of the compressed section.
Currently, llvm-mc always puts the uncompressed section alignment into
sh_addralign. That is not correct: a zlib-style section contains an
Elfxx_Chdr header, so we should use either 4 or 8 depending on the target
(the uncompressed section alignment is stored in the ch_addralign field of
the compression header).
GNU assembler version 2.31.1 also has this issue, but it was already fixed
in 2.32.51. That is how the problem was found, while debugging
https://bugs.llvm.org/show_bug.cgi?id=40482.
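For reference, the compression headers that lead a compressed section's
contents (as laid out by the gABI and mirrored in llvm/BinaryFormat/ELF.h);
their natural alignment, 4 bytes for ELF32 and 8 bytes for ELF64, is what
sh_addralign should reflect:

#include <cstdint>

// The uncompressed section's own alignment lives in ch_addralign, not in
// sh_addralign, so sh_addralign only needs to describe the header itself.
struct Elf32_Chdr {
  uint32_t ch_type;      // compression algorithm, e.g. ELFCOMPRESS_ZLIB
  uint32_t ch_size;      // uncompressed data size
  uint32_t ch_addralign; // uncompressed data alignment
};                       // natural alignment: 4

struct Elf64_Chdr {
  uint32_t ch_type;
  uint32_t ch_reserved;
  uint64_t ch_size;      // uncompressed data size
  uint64_t ch_addralign; // uncompressed data alignment
};                       // natural alignment: 8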
David Green [Tue, 23 Apr 2019 08:52:21 +0000 (08:52 +0000)]
[LSR] Limit the recursion for setup cost
In some circumstances we can end up with setup costs that are very complex to
compute, even though the SCEVs are not very complex to create. This can also
lead to setup costs that are calculated to be exactly -1, which LSR treats as
an invalid cost. This patch puts a limit on the recursion depth of the setup
cost computation to prevent it from taking too long.
While this patch *seems* trivial and safe and correct, it is not. The
copies are actually load bearing copies. You can observe this with MSan
or other ways of checking for use-after-destroy, but otherwise this may
result in ... difficult to debug inexplicable behavior.
I suspect the issue is that the debug location is used after the
original reference to it is removed. The metadata backing it gets
destroyed as its last references goes away, and then we reference it
later through these const references.
Petr Hosek [Mon, 22 Apr 2019 23:31:39 +0000 (23:31 +0000)]
[CMake] Replace the sanitizer support in runtimes build with multilib
This is a more generic solution; while the sanitizer support can be used
only for sanitizer instrumented builds, the multilib support can be used
to build other variants, such as a noexcept variant, which is what we would
like to use in Fuchsia.
The CMake target name uses the target name, the same as for the regular
runtimes build, and the name of the multilib, concatenated with '+'. The
libraries are installed in a subdirectory named after the multilib.
David Blaikie [Mon, 22 Apr 2019 22:45:11 +0000 (22:45 +0000)]
DebugInfo: Emit only one kind of accelerated access/name table
Currently to opt in to debug_names in DWARFv5, the IR must contain
'nameTableKind: Default' which also enables debug_pubnames.
Instead, only allow one of {debug_names, apple_names, debug_pubnames,
debug_gnu_pubnames}.
nameTableKind: Default gives debug_names in DWARFv5 and greater,
debug_pubnames in v4 and earlier - and apple_names when tuning for lldb
on MachO.
nameTableKind: GNU always gives gnu_pubnames
Michael Liao [Mon, 22 Apr 2019 22:05:49 +0000 (22:05 +0000)]
[AMDGPU] Fix an issue in `op_sel_hi` skipping.
Summary:
- Only apply packed literal `op_sel_hi` skipping on operands requiring
packed literals. Even if an instruction is `packed`, it may have an operand
requiring a non-packed literal, such as `v_dot2_f32_f16`.
Adrian Prantl [Mon, 22 Apr 2019 21:33:22 +0000 (21:33 +0000)]
[dsymutil] Collect parseable Swift interfaces in the .dSYM bundle.
When a Swift module built with debug info imports a library without
debug info from a textual interface, the textual interface is
necessary to reconstruct types defined in the library's interface. By
recording the Swift interface files in DWARF, dsymutil can collect them
and LLDB can find them.
This patch teaches dsymutil to look for DW_TAG_imported_module entries,
record all references to parseable Swift interface files, and copy
them into the .dSYM bundle.
Philip Reames [Mon, 22 Apr 2019 20:28:19 +0000 (20:28 +0000)]
[InstCombine] Eliminate stores to constant memory
If we have a store to a piece of memory which is known constant, then we know the store must be storing back the same value. As a result, the store (or memset, or memmove) must either be down a dead path, or a noop. In either case, it is valid to simply remove the store.
The motivating case for this involves a memmove to a buffer which is constant down a path which is dynamically dead.
Note that I'm choosing to implement the less aggressive of two possible semantics here. We could simply say that the store *is undefined* and prune the path. Consensus in the review was that the more aggressive form might be a good follow-on change at a later date.
Bob Haarman [Mon, 22 Apr 2019 19:46:25 +0000 (19:46 +0000)]
[Support] unflake TempFileCollisions test
Summary:
This test was added to verify that createUniqueEntity() does
not enter an infinite loop when all possible names are taken. However,
it also checked that all possible names are generated, which is flaky
(because the names are generated randomly). This change increases the
number of attempts we make so that flakes become exceedingly
unlikely (3.88e-62).
Matt Arsenault [Mon, 22 Apr 2019 19:14:26 +0000 (19:14 +0000)]
AMDGPU: Skip debug instructions in assert
These are inserted after branch relaxation, and for some reason it's
decided to put them in the long branch expansion block. It's probably
not great to rely on the source block address, so this should probably
be switched to being PC-relative instead of relying on the block
address.
Back in August, r340525 introduced a dependency on the assumption
cache tracker in the ipsccp pass, but that commit missed a call to
INITIALIZE_PASS_DEPENDENCY, which leaves the assumption cache
improperly registered if SCCP is the only thing that pulls it in.
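The missing registration is the kind of thing sketched below (the description
strings and surrounding boilerplate are abridged and illustrative):

// In the legacy-PM initialization block for IPSCCP: declaring the dependency
// is what gets AssumptionCacheTracker properly registered even when IPSCCP is
// the only pass that pulls it in.
INITIALIZE_PASS_BEGIN(IPSCCPLegacyPass, "ipsccp",
                      "Interprocedural Sparse Conditional Constant Propagation",
                      false, false)
INITIALIZE_PASS_DEPENDENCY(AssumptionCacheTracker)
INITIALIZE_PASS_END(IPSCCPLegacyPass, "ipsccp",
                    "Interprocedural Sparse Conditional Constant Propagation",
                    false, false)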
Currently, we do not expose BPI to loop passes at all. In the old pass manager, we appear to have been ignoring the fact that LCSSA and/or LoopSimplify didn't preserve BPI, and making it available to the following loop passes anyway. In the new one, BPI is invalidated before running any loop pass if either LCSSA or LoopSimplify actually makes changes; if they don't make changes, then BPI is valid and available. So, we go ahead and teach LCSSA and LoopSimplify how to preserve BPI, for consistency between the old and new pass managers.
This patch avoids an invalidation between the two requires in the following trivial pass pipeline:
opt -passes="requires<branch-prob>,loop(no-op-loop),requires<branch-prob>"
(when the input file is one which requires either LCSSA or LoopSimplify to canonicalize the loops)
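In new-pass-manager terms, preserving BPI amounts to adding it to the preserved
set a pass reports when it makes changes; a minimal sketch follows (the pass and
helper names are placeholders, not the actual LCSSA/LoopSimplify code):

#include "llvm/Analysis/BranchProbabilityInfo.h"
#include "llvm/IR/PassManager.h"
using namespace llvm;

static bool canonicalizeLoops(Function &) { return false; } // placeholder

struct MyLoopCanonPass : PassInfoMixin<MyLoopCanonPass> {
  PreservedAnalyses run(Function &F, FunctionAnalysisManager &AM) {
    if (!canonicalizeLoops(F))
      return PreservedAnalyses::all();
    PreservedAnalyses PA;
    // The canonicalization does not change branch probabilities, so BPI's
    // answers remain valid even though the IR was modified.
    PA.preserve<BranchProbabilityAnalysis>();
    return PA;
  }
};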
Wei Mi [Mon, 22 Apr 2019 17:04:51 +0000 (17:04 +0000)]
[PGO/SamplePGO][NFC] Move the function updateProfWeight from Instruction
to CallInst.
The issue was raised here: https://reviews.llvm.org/D60903#1472783
The function Instruction::updateProfWeight is only used for CallInst in
profile updates. From the current interface, it is very easy to think that
the function can also be used for branch instructions. However, branch
instructions don't need the scaling the function provides for
branch_weights and VP (value profile); in addition, scaling may introduce
inaccuracy for branch probability.
The patch moves the function updateProfWeight from Instruction class to
CallInst to remove the confusion. The patch also changes the scaling of
branch_weights from a loop to a block because we know that ProfileData
for branch_weights of CallInst will only have two operands at most.
This patch adds support for BigBitWidth -> SmallBitWidth bitcasts, splitting the DemandedBits/Elts accordingly.
The AMDGPU backend needed an extra (srl (and x, c1 << c2), c2) -> (and (srl x, c2), c1) combine to encourage BFE creation. I investigated putting this in DAGCombine, but it caused a lot of noise on other targets - some improvements, some regressions.
Following D60632 makeGuaranteedNoWrapRegion() always returns an
exact nowrap region. Rename the function accordingly. This is in
line with the naming of makeExactICmpRegion().
Lang Hames [Sun, 21 Apr 2019 20:34:19 +0000 (20:34 +0000)]
[JITLink] Add an option to dump relocated section content.
The -dump-relocated-section-content option will dump the contents of each
section after relocations are applied, and before any checks are run or
code executed.
Since the symlinks list for llvm-symbolizer is now never empty,
the :symlinks target no longer needs an explicit dep on :llvm-symbolizer
-- there will be at least one dep on a symlink, and each symlink depends
on :llvm-symbolizer already.
Since llvm-symbolizer:symlinks now produces symlinks that check-llvm
uses, make llvm/test depend on the symlink target.