Pavel Labath [Fri, 20 Jul 2018 12:59:05 +0000 (12:59 +0000)]
[DebugInfo] Generate .debug_names section when it makes sense
Summary:
This patch makes us generate the debug_names section in response to some
user-facing commands (previously it was only generated if explicitly
selected via the -accel-tables option).
My goal was to make this work for DWARF>=5 (as it's an official part of
that standard), and also, as an extension, for DWARF<5 if one is
explicitly tuning for lldb as a debugger (because it brings a large
performance improvement there).
This is slightly complicated by the fact that the debug_names tables are
incompatible with the DWARF v4 type units (they assume that the type
units are in the debug_info section), and unfortunately, right now we
generate DWARF v4-style type units even for -gdwarf-5. For this reason,
I disable all accelerator tables if the user requested type unit
generation. I do this even for Apple tables, as they have the same
problem (in fact, generating type units for Apple targets makes us crash
even before we get around to emitting the accelerator tables).
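As a rough sketch, the decision described above could look like the following (a hypothetical helper; AccelTableKind and DebuggerKind are LLVM's enums, but the actual wiring in DwarfDebug is elided):
```
// Hypothetical sketch of the selection logic described above.
static AccelTableKind computeAccelTableKind(unsigned DwarfVersion,
                                            bool GenerateTypeUnits,
                                            DebuggerKind Tuning) {
  // Accelerator tables are incompatible with DWARF v4 type units.
  if (GenerateTypeUnits)
    return AccelTableKind::None;
  // .debug_names is an official part of DWARF v5.
  if (DwarfVersion >= 5)
    return AccelTableKind::Dwarf;
  // As an extension, emit it for older DWARF when tuning for LLDB.
  if (Tuning == DebuggerKind::LLDB)
    return AccelTableKind::Dwarf;
  return AccelTableKind::Default;
}
```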
[UBSan] Also use blacklist for 'Address; Undefined' setting
It looks like the UBSan blacklist is currently applied only when "Undefined" alone is selected.
This patch updates the CMake file to apply it whenever Undefined is among the
selected sanitizers (e.g. 'Address;Undefined'). This allows us to use the workaround added in
rL335525 when using AddressSanitizer and UBSan together.
Jonas Paulsson [Fri, 20 Jul 2018 09:40:43 +0000 (09:40 +0000)]
[SystemZ] Reimplement SchedModel IssueWidth and WriteRes/ReadAdvance mappings.
As a consequence of recent discussions
(http://lists.llvm.org/pipermail/llvm-dev/2018-May/123164.html), this patch
changes the SystemZ SchedModels so that the IssueWidth is 6, which is the
decoder capacity, and NumMicroOps becomes the number of decoder slots needed
per instruction.
In addition, the SchedWrite latencies now match the MachineInstructions
def-operand indexes, and ReadAdvances have been added on instructions with
one register operand and one memory operand.
The failing bots that had `SmallVector` in the backtrace recovered after
the unrelated commit r337508. The backtraces looked bogus anyway, with
`SmallVector::size()` calling (e.g.) `ConstantArray::get()`.
Here's the original commit message:
ADT: Shrink size of SmallVector by 8B on 64-bit platforms
Represent size and capacity directly as unsigned and calculate
`end()` using `begin() + size()`.
This limits the maximum size/capacity of a vector to UINT32_MAX.
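A minimal sketch of the resulting layout (field names assumed, not the exact LLVM declaration):
```
// Hypothetical sketch: two 32-bit counters replace the End/CapacityEnd
// pointers, shrinking the base from 24 to 16 bytes on 64-bit targets.
struct SmallVectorBase {
  void *BeginX;                // pointer to the first element
  unsigned Size = 0, Capacity; // 32-bit size/capacity, capped at UINT32_MAX
  // end() is computed as begin() + size() instead of being stored.
};
```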
Summary:
lifetime2.C violates DR1696, which prevents reference members from being
initialized from temporaries whose lifetime would end at the end of the constructor.
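For reference, a minimal example of the construct that DR1696 makes ill-formed:
```
struct S {
  const int &R;
  // Ill-formed under DR1696: R would bind to a temporary whose lifetime
  // ends when the constructor returns, leaving R dangling.
  S() : R(42) {}
};
```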
[x86/SLH] Clean up helper naming for return instruction handling and
remove dead declaration of a call instruction handling helper.
This moves to the 'harden' terminology that I've been trying to settle
on for returns. It also adds a really detailed comment explaining what
all we're trying to accomplish with return instructions and why.
Hopefully this makes it much more clear what exactly is being
"hardened".
Eli Friedman [Thu, 19 Jul 2018 23:02:07 +0000 (23:02 +0000)]
[SCCP] Don't use markForcedConstant on branch conditions.
It's more aggressive than we need to be, and leads to strange
workarounds in other places like call return value inference. Instead,
just directly mark an edge viable.
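In sketch form (helper names as in SCCP's solver; the surrounding logic is elided):
```
// Instead of forcing the branch condition to a constant via
// markForcedConstant(Cond), mark the chosen edge feasible directly.
markEdgeExecutable(BB, BI->getSuccessor(0));
```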
Teresa Johnson [Thu, 19 Jul 2018 22:25:56 +0000 (22:25 +0000)]
[ThinLTO] Only emit referenced type id records in index files
Summary:
Currently all type ids are emitted into the index file when it is
written. For distributed ThinLTO, that meant that all type ids were
being duplicated into every single distributed index file, regardless of
whether they were referenced, leading to huge amounts of unnecessary
duplication and size bloat.
Keep track of the type id GUIDs actually referenced by the GV summary
records being emitted, and only emit those type IDs.
Add a new test, and fix test/Assembler/thinlto-summary.ll so that all
type ids are referenced to prevent deletion in that test.
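A hypothetical sketch of the bookkeeping (emitTypeIdRecord and the per-summary iteration are placeholders, not the actual bitcode-writer API):
```
DenseSet<GlobalValue::GUID> ReferencedTypeIds;

// While emitting each GV summary record, note every type id it uses.
ReferencedTypeIds.insert(GlobalValue::getGUID(TypeIdName));

// When writing the type-id block, emit only the referenced entries.
for (const auto &TypeId : Index.typeIds())
  if (ReferencedTypeIds.count(GlobalValue::getGUID(TypeId.first)))
    emitTypeIdRecord(TypeId); // hypothetical emitter
```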
Simon Pilgrim [Thu, 19 Jul 2018 21:52:06 +0000 (21:52 +0000)]
[X86][AVX] Use extract_subvector to reduce vector op widths (PR36761)
We have a number of cases where we fail to reduce vector op widths, performing the op in a larger vector and then extracting a subvector. This is often because doing so by default would create illegal types.
This peephole patch attempts to handle a few common cases detailed in PR36761, which typically involved extension+conversion to vX2f64 types.
Matt Davis [Thu, 19 Jul 2018 20:33:59 +0000 (20:33 +0000)]
[llvm-mca][docs] Add Timeline and How MCA works.
For the most part, these changes were from the RFC. I made a few minor
word/structure changes, but nothing significant. I also regenerated the
example output, and adjusted the text accordingly.
This particular version of GCC seems to break bitfields when a method
appears between two bitfield members.
Personally, I think it's nice to keep bitfields close together so that
it's easy to check how things are packed, so I moved the method after
SubClassData.
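The shape of the workaround, on hypothetical fields:
```
struct Example {
  unsigned Kind : 8;
  unsigned SubClassData : 24; // bit-fields kept adjacent...
  unsigned getKind() const { return Kind; } // ...method moved after them
};
```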
It fires on things like SmallVector<std::pair<int, int>>, where we
intentionally use memcpy instead of calling the assignment operator.
This warning fires in practically every LLVM TU, so we have to do
something about it, even if we aren't interested in being 100% warning
clean with GCC.
[X86] Fix some 'return SDValue()' after DCI.CombineTo; instead return the output of CombineTo
Returning SDValue() means nothing was changed. Returning the result of CombineTo returns the first argument of CombineTo. This is specially detected by DAGCombiner as meaning that something changed, but worklist management was already taken care of.
I think the only real effect of this change is that we now properly update the Statistic that counts the number of combines performed. That's the only thing between the check for null and the check for N in the DAGCombiner.
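The pattern being fixed, sketched with placeholder values:
```
// Before: DAGCombiner treats SDValue() as "nothing changed", so the
// statistic (and related bookkeeping) is skipped.
DCI.CombineTo(N, NewValue);
return SDValue();

// After: returning CombineTo's result (its first argument) signals that
// a combine happened and that worklist management is already done.
return DCI.CombineTo(N, NewValue);
```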
Roman Tereshin [Thu, 19 Jul 2018 19:42:43 +0000 (19:42 +0000)]
[LSV] Refactoring + supporting bitcasts to a type of different size
This is mostly preparation work for adding limited support for
select instructions. It proved to be difficult due to the size and
irregularity of Vectorizer::isConsecutiveAccess; I believe this is fixed
here.
It also turned out that these changes make it simpler to finish one of
the TODOs and fix a number of other small issues, namely:
1. Looking through bitcasts to a type of a different size (requires
careful tracking of the original load/store size and some math
converting sizes in bytes to expected differences in indices of GEPs).
2. Reusing partial analysis of pointers done by first attempt in proving
them consecutive instead of starting from scratch. This added limited
support for nested GEPs co-existing with difficult sext/zext
instructions. This also required a careful handling of negative
differences between constant parts of offsets.
3. Handling a case where the first pointer index is not an add, but
something else (a function parameter, for instance).
I observe an increased number of successful vectorizations on a large
set of shader programs. Only a few shaders are affected, but those that
are sport >5% fewer loads and stores than before the patch.
As we already return true from needsAggressiveScheduling() for the most recent
hardware, it would be cleaner to just return true for all PowerPC hardware.
[APInt] Keep the original bit width in quotient and remainder
Some trivial cases in udivrem were handled by directly assigning 0 or 1
to APInt objects. This would set the bit width to 1 instead of the bit
width of the inputs. A potentially undesirable side effect is that with
a bit width of 1, 1 equals -1.
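A small self-contained check of the intended behavior, using LLVM's APInt:
```
#include "llvm/ADT/APInt.h"
#include <cassert>
using namespace llvm;

int main() {
  APInt A(32, 5), B(32, 7); // 5 udiv 7 == 0, remainder 5, in 32 bits
  APInt Q(32, 0), R(32, 0);
  APInt::udivrem(A, B, Q, R);
  // The trivial case (dividend < divisor) must keep the input width
  // rather than shrinking to 1 bit, where 1 would equal -1.
  assert(Q.getBitWidth() == 32 && R.getBitWidth() == 32);
  assert(Q == 0 && R == 5);
}
```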
[LoadStoreVectorizer] Use getMinusScev() to compute the distance between two pointers.
Summary: Currently, isConsecutiveAccess() detects two pointers (PtrA and PtrB) as
consecutive by comparing PtrB with BaseDelta+PtrA. This works when both pointers
are factorized or both of them are not factorized, but isConsecutiveAccess()
fails if one of the pointers is factorized and the other is not.
Here is an example:
PtrA = 4 * (A + B)
PtrB = 4 + 4A + 4B
This patch uses getMinusSCEV() to compute the distance between two pointers.
getMinusSCEV() allows combining the expressions and computing the simplified distance.
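In sketch form (SE, PtrA, PtrB, and the access size Size are assumed to be in scope):
```
const SCEV *PtrSCEVA = SE.getSCEV(PtrA);
const SCEV *PtrSCEVB = SE.getSCEV(PtrB);
// getMinusSCEV() canonicalizes both expressions, so 4*(A+B) and
// 4+4A+4B simplify to the same form and the distance folds to 4.
const SCEV *Dist = SE.getMinusSCEV(PtrSCEVB, PtrSCEVA);
// SCEVs are uniqued, so pointer equality suffices here.
bool Consecutive = Dist == SE.getConstant(APInt(64, Size));
```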
Andrea Di Biagio [Thu, 19 Jul 2018 16:42:15 +0000 (16:42 +0000)]
[X86][BtVer2] correctly model the latency/throughput of LEA instructions.
This patch fixes the latency/throughput of LEA instructions in the BtVer2
scheduling model.
On Jaguar, a 3-operand LEA has a latency of 2cy and a reciprocal throughput of
1. That is because it uses one cycle of SAGU followed by 1cy of ALU1. An LEA
with a "Scale" operand is also slow, and it has the same latency profile as the
3-operand LEA. An LEA16r has a latency of 3cy and a throughput of 0.5 (i.e. an
RThroughput of 2.0).
This patch adds a new TIIPredicate named IsThreeOperandsLEAFn to X86Schedule.td.
The tablegen backend (for instruction-info) expands that definition into this
(file X86GenInstrInfo.inc):
```
static bool isThreeOperandsLEA(const MachineInstr &MI) {
  return (
      (MI.getOpcode() == X86::LEA32r
       || MI.getOpcode() == X86::LEA64r
       || MI.getOpcode() == X86::LEA64_32r
       || MI.getOpcode() == X86::LEA16r)
      && MI.getOperand(1).isReg()
      && MI.getOperand(1).getReg() != 0
      && MI.getOperand(3).isReg()
      && MI.getOperand(3).getReg() != 0
      && ((MI.getOperand(4).isImm()
           && MI.getOperand(4).getImm() != 0)
          || (MI.getOperand(4).isGlobal())));
}
```
A similar method is generated in the X86_MC namespace, and included into
X86MCTargetDesc.cpp (the declaration lives in X86MCTargetDesc.h).
Back to the BtVer2 scheduling model:
A new scheduling predicate named JSlowLEAPredicate now checks if either the
instruction is a three-operand LEA, or it is an LEA with a Scale value
different from 1.
A variant scheduling class uses that new predicate to correctly select the
appropriate latency profile.
Teresa Johnson [Thu, 19 Jul 2018 14:51:32 +0000 (14:51 +0000)]
[ThinLTO] Enable ThinLTO WholeProgramDevirt and LowerTypeTests in new PM
Summary:
Enable these passes for CFI and WPD in ThinLTO and LTO with the new pass
manager. Add a couple of tests for both PMs based on the clang tests
tools/clang/test/CodeGen/thinlto-distributed-cfi*.ll, but just test
through llvm-lto2 and not with distributed ThinLTO.
[x86/SLH] Major refactoring of the SLH implementation. There are two big
changes that are intertwined here:
1) Extracting the tracing of predicate state through the CFG to its own
function.
2) Creating a struct to manage the predicate state used throughout the
pass.
Doing #1 necessitates and motivates the particular approach for #2, as the
predicate management is now spread across different functions
focused on different aspects of it. A number of simplifications then
fell out as a direct consequence.
I went with an Optional to make it more natural to construct the
MachineSSAUpdater object.
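In sketch form (using LLVM's Optional and MachineSSAUpdater):
```
Optional<MachineSSAUpdater> PredStateSSA;
// ...construct it only once the MachineFunction is in hand:
PredStateSSA.emplace(MF);
```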
This is probably the single largest outstanding refactoring step I have.
Things get a bit more surgical from here. My current goal, beyond
generally making this maintainable long-term, is to implement several
improvements to how we do interprocedural tracking of predicate state.
But I don't want to do that until the predicate state management and
tracing is in a reasonably clear state.
Max Kazantsev [Thu, 19 Jul 2018 01:46:21 +0000 (01:46 +0000)]
[SCEV] Fix buggy behavior in getAddExpr with truncs
SCEV tries to constant-fold arguments of trunc operands in SCEVAddExpr, and when it does
that, it passes the wrong flags into the recursion. It is only valid to pass flags that are proved
for the narrow type into a computation in the wider type if we can prove that the trunc
instruction doesn't actually change the value. If it did lose some meaningful bits, we may end
up proving wrong no-wrap flags for the sum of the trunc arguments.
In the provided test we end up with `nuw` where it shouldn't be because of this bug.
The solution is to conservatively pass `SCEV::FlagAnyWrap` which is always a valid thing to do.
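In sketch form (names as in SCEV's getAddExpr; LargeOps holds the operands re-expressed in the wider type):
```
// Conservatively drop the no-wrap flags proved for the narrow type;
// they are not known to hold for the widened computation.
const SCEV *WideSum = getAddExpr(LargeOps, SCEV::FlagAnyWrap, Depth + 1);
```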
This prevents gold from printing a warning when trying to export
these symbols via the asan dynamic list after ThinLTO promotes them
from private symbols to external symbols with hidden visibility.
David Blaikie [Wed, 18 Jul 2018 18:04:42 +0000 (18:04 +0000)]
[DebugInfo] Dwarfv5: Avoid unnecessary base_address specifiers in rnglists
Since DWARFv5 rnglists are self-descriptive and have distinct encodings
for base-relative (offset_pair) and absolute (start_length) entries,
there's no need to use a base address specifier when describing a lone
address range in a section.
Use that, and improve the test coverage a bit here to include cases like
this and others.
Nirav Dave [Wed, 18 Jul 2018 18:01:03 +0000 (18:01 +0000)]
[ScheduleDAG] Fix unfolding of SUnits to already existent nodes.
Summary:
If unfolding an SUnit results in a load or an operation using it that
already exists in the DAG, abort the unfold if they are already scheduled.
If not, make sure we don't add duplicate dependencies.
Teresa Johnson [Wed, 18 Jul 2018 17:10:17 +0000 (17:10 +0000)]
[docs] Update GoldPlugin documentation
Summary:
Updated and reorganized. Made the following additions:
1) How to see if ld.gold is installed, and whether it is the current
default.
2) How to install ld.gold as the default or alternatively use
-fuse-ld=gold.
3) Move the part about installing the newly built ld-new as the default
to the prior section, and explain how to use --enable-gold=default to do it
automatically on install.
4) Add a note about ld.bfd supporting plugins but indicate that it is
not tested by the LLVM project and gold is the recommended linker for
use with the gold plugin.
[x86/SLH] Add the design document for Speculative Load Hardening,
a Spectre v1 mitigation.
This was initially posted with the patch implementing this and got some basic
review there. Also, it is generated from the Google doc that I shared
as part of the Speculative Load Hardening RFC, which has seen pretty
widespread review at this point.
However, as the patches are landing in LLVM, I wanted to land the docs
as well. But it seemed like a bad idea to have them in the same commit
in case of reverts or other things. So the docs are split out here.
Thanks for all the review so far, and further review and improvements to
the documentation here welcome. Please feel free to keep hammering on
the code review or Google document.
Note that this is a Markdown document, which Sphinx doesn't yet process.
But we can add support for that afterwards, and this document should then
get picked up (I'm preparing patches for that). Also, this gets the document
itself into a nice shared place where we can iterate on it.
Tim Northover [Wed, 18 Jul 2018 12:36:25 +0000 (12:36 +0000)]
ARM: deduplicate hard-float detection code. NFC.
ARMSubtarget had a copy/pasted block to determine whether the target was
hard-float, but it just delegated to triple features anyway, so it's better
handled at the TargetMachine level.
[AArch64][SVE] Asm: Support for unpredicated FP operations.
This patch adds support for the following unpredicated
floating-point instructions:
FADD Floating point add
FSUB Floating point subtract
FMUL Floating point multiplication
FTSMUL Floating point trigonometric starting value
FRECPS Floating point reciprocal step
FRSQRTS Floating point reciprocal square root step
The instructions have the following assembly format:
fadd z0.h, z1.h, z2.h
and have variants for 16, 32 and 64-bit FP elements.
As discussed in https://reviews.llvm.org/D49179#1158957 and later,
the IR for 'check for [no] signed truncation' pattern can be improved:
https://rise4fun.com/Alive/gBf
^ that pattern will be produced by the Implicit Integer Truncation sanitizer
(https://reviews.llvm.org/D48958, https://bugs.llvm.org/show_bug.cgi?id=21530)
in the signed case, therefore it is probably a good idea to improve it.
The DAGCombine will reverse this transform, see
https://reviews.llvm.org/D49266
This transform is surprisingly frustrating.
This does not deal with non-splat shift amounts, or with undef shift amounts.
I've outlined what I think the solution should be:
```
// Potential handling of non-splats: for each element:
// * if both are undef, replace with constant 0.
// Because (1<<0) is OK and is 1, and ((1<<0)>>1) is also OK and is 0.
// * if both are not undef, and are different, bailout.
// * else, only one is undef, then pick the non-undef one.
```
This is a re-commit: the original patch, committed in rL337190,
was reverted in rL337344 as it broke the Chromium build:
https://bugs.llvm.org/show_bug.cgi?id=38204 and
https://crbug.com/864832
Proofs that the fixed folds are ok: https://rise4fun.com/Alive/VYM
Simon Pilgrim [Wed, 18 Jul 2018 10:54:13 +0000 (10:54 +0000)]
[X86][SSE] Add extra scalar fop + blend tests for commuted inputs
While working on PR38197, I noticed that we don't make use of FADD/FMUL being able to commute the inputs to support the addps+movss -> addss style combine.
[AArch64][SVE] Asm: Support for UDOT/SDOT instructions.
The signed/unsigned DOT instructions perform a dot-product on
quad-tuplets from two source vectors and accumulate the result in
the destination register. The instructions come in two forms:
Vector form, e.g.
sdot z0.s, z1.b, z2.b - signed dot product on four 8-bit quad-tuplets,
accumulating results in 32-bit elements.
udot z0.d, z1.h, z2.h - unsigned dot product on four 16-bit quad-tuplets,
accumulating results in 64-bit elements.
Indexed form, e.g.
sdot z0.s, z1.b, z2.b[3] - signed dot product on four 8-bit quad-tuplets
with the specified quad-tuplet from the second
source vector, accumulating results in 32-bit
elements.
udot z0.d, z1.h, z2.h[1] - unsigned dot product on four 16-bit quad-tuplets
with the specified quad-tuplet from the second
source vector, accumulating results in 64-bit
elements.
George Rimar [Wed, 18 Jul 2018 09:25:36 +0000 (09:25 +0000)]
[llvm-objdump] - An attempt to fix BB after r337361.
It seems r337361 is the reason for the following ARM buildbot failures:
http://lab.llvm.org:8011/builders/clang-cmake-armv8-quick
http://lab.llvm.org:8011/builders/clang-cmake-armv8-full/builds/4633
The reason is unclear to me; other bots are OK.
If this does not help, I'll revert r337361.
This patch adds the following predicated instructions:
UDIV Unsigned divide active elements
UDIVR Unsigned divide active elements, reverse form.
SDIV Signed divide active elements
SDIVR Signed divide active elements, reverse form.
e.g.
udiv z0.s, p0/m, z0.s, z1.s
(unsigned divide active elements in z0 by z1, store result in z0)
sdivr z0.s, p0/m, z0.s, z1.s
(signed divide active elements in z1 by z0, store result in z0)
Philip Pfaffe [Wed, 18 Jul 2018 08:53:31 +0000 (08:53 +0000)]
[CMake] Export the LLVM_LINK_LLVM_DYLIB setting
Summary:
When building out-of-tree tools, there are several macros available to
automate linking against LLVM. An example is `add_llvm_executable`, or
the clang variant of this.
These macros use the LLVM_LINK_LLVM_DYLIB option to decide whether to
link against libraries defined by setting LLVM_LINK_COMPONENTS or to
link against libLLVM instead. Currently this is problematic in
out-of-tree targets, because they cannot identify whether this option is
required or even available. If the option was enabled in LLVM's own
build, the clang libraries are built against libLLVM, so a client
linking against those must link against it too. On the other hand the
client can't just always link against it, because it might not be
available.
This is related to D44391, but that change assumed the client knew
whether they wanted the dylib or not.
Imagine we have a file with a few sections, and one of them is .foo
with index N != 0.
The problem is that when llvm-objdump is given a -section=.foo parameter,
it lists .foo as a section at index 0. That makes it impossible to write
test cases which need to find the index of a particular section
while ignoring the dumping of others.
The ELF specification says that e_shnum and/or e_shstrndx may have special values if
"the number of sections is greater than or equal to SHN_LORESERVE" or
"the section name string table section index is greater than or equal to SHN_LORESERVE (0xff00)".
Previously llvm-readobj was unable to dump such files; this patch changes that.
I had to add a precompiled test case because it does not seem possible to
prepare a test using yaml2obj or llvm-mc (it is not clear how to make .shstrtab
have an index >= SHN_LORESERVE).
[AArch64][SVE] Asm: Support for integer MUL instructions.
This patch adds the following instructions:
MUL - multiply vectors, e.g.
mul z0.h, p0/m, z0.h, z1.h
- multiply with immediate, e.g.
mul z0.h, z0.h, #127
SMULH - signed multiply returning high half, e.g.
smulh z0.h, p0/m, z0.h, z1.h
UMULH - unsigned multiply returning high half, e.g.
umulh z0.h, p0/m, z0.h, z1.h
* Delete a no-longer-used override, and mark the other
getRegisterTypeForCallingConv() as override.
* SPE only supports i32, not i64, as the internal type, so simply remove
the type check, so that DestReg and Opc are provably always set.
[X86] Remove patterns that mix X86ISD::MOVLHPS/MOVHLPS with v2i64/v2f64 types.
The X86ISD::MOVLHPS/MOVHLPS nodes should now be emitted only on SSE1-only targets. This means the v2i64/v2f64 types would be illegal, thus we don't need these patterns.
[X86] Generate v2f64 X86ISD::UNPCKL/UNPCKH instead of X86ISD::MOVLHPS/MOVHLPS for unary v2f64 {0,0} and {1,1} shuffles with SSE2.
I'm trying to restrict the MOVLHPS/MOVHLPS ISD nodes to SSE1 only. With SSE2 we can use unpcks. I believe this will allow some patterns to be cleaned up to require fewer bitcasts.
I've put in an odd isel hack to still select the MOVHLPS instruction from the unpckh node, to avoid changing tests and because movhlps is a shorter encoding. Ideally we'd do execution domain switching on this, but the operands are in the wrong order and are tied. We might be able to try a commute in the domain switching using custom code.
We already support domain switching for UNPCKLPD and MOVLHPS.