James Henderson [Wed, 25 Sep 2019 13:09:12 +0000 (13:09 +0000)]
[docs][llvm-strip] Update llvm-strip doc to better match llvm-objcopy's
The main changes are mostly to the wording of some options, but this change
also fixes a switch reference so that a link is created, and moves
--strip-sections into the ELF-specific area, since it is currently only
supported for ELF.
George Rimar [Wed, 25 Sep 2019 12:09:30 +0000 (12:09 +0000)]
[yaml2elf] - Support describing .stack_sizes sections using unique suffixes.
Currently we can't use unique suffixes in section names to describe
stack sizes sections, e.g. '.stack_sizes [1]' will be treated as a regular section.
This happens because we recognize stack sizes sections by name and
do not yet drop the suffix before the check.
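The fix amounts to normalising the name before the check. A minimal standalone sketch of that normalisation (the helper name is hypothetical; " [N]" is the unique-suffix syntax shown above):
```cpp
#include <cassert>
#include <string>

// Drop a trailing " [N]" unique suffix before the name check, so that
// ".stack_sizes [1]" is still recognised as a stack sizes section.
std::string dropUniqueSuffix(const std::string &Name) {
  size_t Pos = Name.find(" [");
  return Pos == std::string::npos ? Name : Name.substr(0, Pos);
}

int main() {
  assert(dropUniqueSuffix(".stack_sizes [1]") == ".stack_sizes");
  assert(dropUniqueSuffix(".stack_sizes") == ".stack_sizes");
}
```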
George Rimar [Wed, 25 Sep 2019 11:40:11 +0000 (11:40 +0000)]
[yaml2obj] - Add a Size field for StackSizesSection.
It is a follow-up requested in a review comment
on D67757. It allows using Content + Size, or just Size,
when describing .stack_sizes sections in a YAML document.
David Green [Wed, 25 Sep 2019 10:16:48 +0000 (10:16 +0000)]
[ARM] Ensure we do not attempt to create lsll #0
During legalisation we can end up with some pretty strange nodes, like shifts
of 0. We need to make sure we don't try to make long shifts of these, ending up
with invalid assembly instructions. A long shift with a zero immediate actually
encodes a shift by 32.
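A standalone model of the encoding hazard, under the assumption stated above that the immediate field stores 32 as 0 (an illustrative sketch, not the backend code):
```cpp
#include <cassert>
#include <cstdint>

// The long shift encodes amounts 1..32, storing 32 as 0 in the immediate
// field; an assembly-level "lsll #0" is therefore invalid and would be
// executed as a shift by 32.
uint64_t longShiftLeft(uint64_t V, unsigned Imm) {
  assert(Imm >= 1 && Imm <= 32 && "lsll #0 is not encodable");
  unsigned Field = (Imm == 32) ? 0 : Imm;       // what the encoder stores
  unsigned Decoded = (Field == 0) ? 32 : Field; // what the hardware performs
  return V << Decoded;
}

int main() { assert(longShiftLeft(1, 32) == (1ULL << 32)); }
```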
George Rimar [Wed, 25 Sep 2019 10:14:50 +0000 (10:14 +0000)]
[llvm-readobj] - Don't crash when dumping .stack_sizes and unable to find a relocation resolver.
The crash might happen when we have either a broken or unsupported object
and try to resolve relocations when dumping the .stack_sizes section.
For the test case I used a 32-bit ELF header and a 64-bit relocation.
In this case a null pointer is returned by the code instead of the relocation
resolver function, and then we crash.
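A standalone analog of the fixed control flow (illustrative only, not llvm-readobj's actual resolver lookup): the lookup can fail for a mismatched object, and the caller must check for null instead of calling through it.
```cpp
#include <cstdint>
#include <cstdio>

using RelocResolver = uint64_t (*)(uint64_t Offset, uint64_t Addend);

// Returns null when the object and relocation widths do not match,
// mirroring the broken/unsupported-object case described above.
RelocResolver findResolver(bool Is64BitObject, bool Is64BitReloc) {
  if (Is64BitObject != Is64BitReloc)
    return nullptr;
  return [](uint64_t Offset, uint64_t Addend) { return Offset + Addend; };
}

int main() {
  // A 32-bit ELF header paired with a 64-bit relocation, as in the test case.
  if (RelocResolver R = findResolver(/*Is64BitObject=*/false, /*Is64BitReloc=*/true))
    std::printf("%llu\n", (unsigned long long)R(8, 4));
  else
    std::fputs("error: no relocation resolver for this object\n", stderr);
}
```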
Thomas Lively [Wed, 25 Sep 2019 00:15:59 +0000 (00:15 +0000)]
[WebAssembly][NFC] Remove duplicate SIMD instructions and predicates
Summary:
Instead of having different v128.load and v128.store instructions for
each MVT, just have one of each that is reused in all the
patterns. Also removes the HasSIMD128 predicate where accompanied by
HasUnimplementedSIMD128, since the latter implies the former.
Yonghong Song [Tue, 24 Sep 2019 22:38:43 +0000 (22:38 +0000)]
[BPF] Generate array dimension size properly for zero-size elements
Currently, if an array element type size is 0, the number of
array elements will be set to 0, regardless of what the user
specified. This behavior dates from the early implementation, when
BTF was mostly used to calculate member offsets.
For example,
  struct s {};
  struct s1 {
    int b;
    struct s a[2];
  };
  struct s1 s1;
The BTF will have struct "s1" member "a" with element count 0.
Now BTF types are used for compile-once and run-everywhere
relocations, and we need a more precise type representation
for type comparison. Andrii reported the issue, as there
are differences between the original structure and the BTF-generated
structure.
This patch makes the change to correctly assign "2"
as the number of elements of member "a" (see the sketch below).
Some dead code related to the ElemSize computation is also removed.
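The gist of the fix as a standalone sketch (the type and helper names are illustrative, not the BTF emitter itself):
```cpp
#include <cassert>
#include <cstdint>

// Stand-in for the debug-info array type: the declared element count is
// recorded on the type, independently of the element size.
struct ArrayInfo {
  uint64_t ElemSizeInBits;
  uint32_t DeclaredCount;
};

// Old derivation: divide by the element size, which yields 0 for
// zero-size elements such as an empty struct.
uint32_t elemCountOld(const ArrayInfo &A, uint64_t TotalSizeInBits) {
  return A.ElemSizeInBits ? uint32_t(TotalSizeInBits / A.ElemSizeInBits) : 0;
}

// New derivation: use the count the user declared.
uint32_t elemCountNew(const ArrayInfo &A) { return A.DeclaredCount; }

int main() {
  ArrayInfo A{/*ElemSizeInBits=*/0, /*DeclaredCount=*/2}; // struct s a[2]
  assert(elemCountOld(A, 0) == 0); // what BTF used to record
  assert(elemCountNew(A) == 2);    // what the patch records
}
```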
Adding support for overriding LLVM_ENABLE_RUNTIMES for runtimes builds.
Second attempt: Now with ';' -> '|' replacement.
On some platforms, certain runtimes are not supported. For runtimes builds on
those platforms it would be nice if we could disable certain runtimes (e.g.
libunwind on Windows).
Philip Reames [Tue, 24 Sep 2019 17:24:16 +0000 (17:24 +0000)]
[GCRelocate] Add a peephole to canonicalize base pointer relocation
If we generate the gc.relocate, and then later prove two arguments to the statepoint are equivalent, we should canonicalize the gc.relocate to the form we would have produced if this had been known before rewriting.
Roman Lebedev [Tue, 24 Sep 2019 16:10:50 +0000 (16:10 +0000)]
[InstCombine] (a+b) < a && (a+b) != 0 -> (0-b) < a iff a/b != 0 (PR43259)
Summary:
This is again motivated by D67122 sanitizer check enhancement.
That patch seemingly worsens `-fsanitize=pointer-overflow`
overhead from 25% to 50%, which strongly implies missing folds.
For
```
#include <cassert>
char* test(char& base, signed long offset) {
  __builtin_assume(offset < 0);
  return &base + offset;
}
```
We produce
https://godbolt.org/z/r40U47
and again those two icmp's can be merged:
```
Name: 0
Pre: C != 0
%adjusted = add i8 %base, C
%not_null = icmp ne i8 %adjusted, 0
%no_underflow = icmp ult i8 %adjusted, %base
%r = and i1 %not_null, %no_underflow
=>
%neg_offset = sub i8 0, C
%r = icmp ugt i8 %base, %neg_offset
```
https://rise4fun.com/Alive/ALap
https://rise4fun.com/Alive/slnN
There are 3 other variants of this pattern;
I believe they will all go into InstSimplify.
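The fold is easy to sanity-check exhaustively at a narrow bit width. A standalone sketch on i8 (mirroring the Alive proof above, not the InstCombine code itself):
```cpp
#include <cassert>
#include <cstdint>

// For C != 0: (base+C != 0) && (base+C u< base)  <=>  base u> (0-C)
int main() {
  for (unsigned B = 0; B < 256; ++B) {
    for (unsigned CV = 1; CV < 256; ++CV) { // Pre: C != 0
      uint8_t Base = B, C = CV;
      uint8_t Adjusted = Base + C;                     // %adjusted
      bool LHS = (Adjusted != 0) && (Adjusted < Base); // %not_null & %no_underflow
      uint8_t NegOffset = -C;                          // %neg_offset = sub 0, C
      bool RHS = Base > NegOffset;                     // icmp ugt %base, %neg_offset
      assert(LHS == RHS);
    }
  }
}
```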
Roman Lebedev [Tue, 24 Sep 2019 16:10:38 +0000 (16:10 +0000)]
[InstCombine] (a+b) <= a && (a+b) != 0 -> (0-b) < a (PR43259)
Summary:
This is again motivated by D67122 sanitizer check enhancement.
That patch seemingly worsens `-fsanitize=pointer-overflow`
overhead from 25% to 50%, which strongly implies missing folds.
This pattern isn't exactly what we get there
(strict vs. non-strict predicate), but this pattern does not
require known-bits analysis, so it is best to handle it first.
Regex: Make "match" and "sub" const member functions
Summary:
The Regex "match" and "sub" member functions were previously not "const"
because they wrote to the "error" member variable. This commit removes
those assignments, and instead assumes that the validity of the regex
is already known after the initial compilation of the regular
expression. As a result, it was possible to make these member functions
"const". This makes it easier to do things like pre-compile Regexes
up-front, and makes "match" and "sub" thread-safe. The error status is
now returned as an optional output, which also makes the API of "match"
and "sub" more consistent with each other.
Also, some uses of Regex that could be refactored to be const were made const.
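A usage sketch against the post-change API (hedged: the error status is the optional trailing out-parameter described above):
```cpp
#include "llvm/ADT/SmallVector.h"
#include "llvm/ADT/StringRef.h"
#include "llvm/Support/Regex.h"
#include <string>

bool hasVersionSuffix(llvm::StringRef Name) {
  // Compile once up-front; match() is now const, so the Regex can be
  // shared (e.g. as a static or a member) and used from multiple threads.
  static const llvm::Regex R("\\.[0-9]+$");
  llvm::SmallVector<llvm::StringRef, 1> Matches;
  std::string Error; // optional output instead of a mutated member
  return R.match(Name, &Matches, &Error);
}
```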
David Bolvansky [Tue, 24 Sep 2019 14:01:14 +0000 (14:01 +0000)]
[Compiler] Fix LLVM_NODISCARD for GCC
Summary:
This branch is currently dead since we don't use C++17.
#if __cplusplus > 201402L && LLVM_HAS_CPP_ATTRIBUTE(nodiscard)
#define LLVM_NODISCARD [[nodiscard]]
This branch is Clang-only.
#elif LLVM_HAS_CPP_ATTRIBUTE(clang::warn_unused_result)
#define LLVM_NODISCARD [[clang::warn_unused_result]]
While we could use the gnu variant [[gnu::warn_unused_result]], it is not ideal because it works only on functions:
/home/xbolva00/LLVM/llvm/include/llvm/ADT/ArrayRef.h:41:24: warning: ‘warn_unused_result’ attribute only applies to function types [-Wattributes]
GCC (checked 5, 6, 7, 8) seems to enable [[nodiscard]] even in C++14 mode and does not warn that nodiscard is a C++17 feature. Clang does warn, but we never reach that branch due to the code above. So the new branch affects only GCC and does what we want.
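A sketch of the shape of the resulting cascade (the exact guards in Compiler.h may differ; the point is the GCC-specific branch after the Clang-only one):
```cpp
// Self-contained fallback, mirroring the LLVM_HAS_CPP_ATTRIBUTE helper.
#ifndef LLVM_HAS_CPP_ATTRIBUTE
#if defined(__has_cpp_attribute)
#define LLVM_HAS_CPP_ATTRIBUTE(x) __has_cpp_attribute(x)
#else
#define LLVM_HAS_CPP_ATTRIBUTE(x) 0
#endif
#endif

#if __cplusplus > 201402L && LLVM_HAS_CPP_ATTRIBUTE(nodiscard)
#define LLVM_NODISCARD [[nodiscard]]                 // real C++17
#elif LLVM_HAS_CPP_ATTRIBUTE(clang::warn_unused_result)
#define LLVM_NODISCARD [[clang::warn_unused_result]] // Clang-only
#elif defined(__GNUC__) && LLVM_HAS_CPP_ATTRIBUTE(nodiscard)
#define LLVM_NODISCARD [[nodiscard]]                 // GCC honors this in C++14 mode
#else
#define LLVM_NODISCARD
#endif
```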
The static analyzer is warning about a potential null dereference, but we should be able to use cast<Instruction> directly, and if not, the assert will fire for us.
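For context, the distinction this series of static-analyzer commits relies on (standard llvm::cast semantics; the function is an illustrative example, not the patched code):
```cpp
#include "llvm/IR/Instruction.h"
#include "llvm/IR/Value.h"
#include "llvm/Support/Casting.h"

unsigned opcodeOf(llvm::Value *V) {
  // dyn_cast<> returns null on a type mismatch, which is what the static
  // analyzer flags as a potential null dereference.
  if (auto *I = llvm::dyn_cast<llvm::Instruction>(V))
    return I->getOpcode();
  // cast<> asserts on a mismatch and never returns null, so dereferencing
  // its result directly is safe whenever the type is guaranteed.
  return llvm::cast<llvm::Instruction>(V)->getOpcode();
}
```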
Revert r372333: [DAG][X86] Convert isNegatibleForFree/GetNegatedExpression to a target hook (PR42863)
Reason: this caused severe compile time regressions in JAX.
See email thread of original revision on llvm-commits for details:
http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20190923/697042.html
The static analyzer is warning about a potential null dereference, but we should be able to use cast<CmpInst> directly, and if not, the assert will fire for us.
The static analyzer is warning about a potential null dereference, but we should be able to use cast<LandingPadInst> directly, and if not, the assert will fire for us.
The static analyzer is warning about a potential null dereference, but we should be able to use cast<Instruction> directly, and if not, the assert will fire for us.
David Green [Tue, 24 Sep 2019 10:53:09 +0000 (10:53 +0000)]
[ARM] Split large widening MVE loads
Similar to rL372717, we can force the splitting of extends of vector loads in
MVE, in order to use the better widening loads as opposed to going through
expensive extends. This adds a combine to detect extends of loads early on and
split the load in two, from where normal legalisation will kick in and we get a
series of widening loads.
The static analyzer is warning about a potential null dereference, but we should be able to use cast<CallInst> directly, and if not, the assert will fire for us.
David Green [Tue, 24 Sep 2019 10:10:41 +0000 (10:10 +0000)]
[ARM] Split large truncating MVE stores
MVE does not have a simple sign extend instruction that can move elements
across lanes. We currently often end up moving each lane into and out of a GPR,
in order to get elements into the correct places. When we have a store of a
trunc (or an extend of a load), we can instead just split the store/load in two,
using the narrowing/widening load/store instructions from each half of the
vector.
This does that for stores. It happens very early in a store combine, so as to
easily detect the truncates. (It would be possible to do this later, but that
would involve looking through a buildvector of extract elements. Not impossible
but this way seemed simpler).
By enabling store combines we also get a vmovdrr combine for free, helping some
other tests.
Pavel Labath [Tue, 24 Sep 2019 09:31:02 +0000 (09:31 +0000)]
MCRegisterInfo: Merge getLLVMRegNum and getLLVMRegNumFromEH
Summary:
The functions differed in two ways:
- getLLVMRegNum could return both "eh" and "other" dwarf register
numbers, while getLLVMRegNumFromEH only returned the "eh" number.
- getLLVMRegNum asserted if the register was not found, while the second
function returned -1.
The second distinction was pretty important, but it was very hard to
infer that from the function name. Additionally, for the use case of
dumping dwarf expressions, we needed a function which can work with both
kinds of numbers, but does not assert.
This patch solves both of these issues by merging the two functions into
one, returning an Optional<unsigned> value. While the same thing could
be achieved by adding an "IsEH" argument to the (renamed)
getLLVMRegNumFromEH function, it seemed better to avoid the confusion of
two functions and put the choice of asserting into the hands of the
caller -- if he checks the Optional value, he can safely process
"untrusted" input, and if he blindly dereferences the Optional, he gets
the assertion.
I've updated all call sites to the new API, choosing between the two
options according to the function they were calling originally, except
that I've updated the usage in DWARFExpression.cpp to use the "safe"
method instead, and added a test case which would have previously
triggered an assertion failure when processing (incorrect?) dwarf
expressions.
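A standalone analog of the resulting design (std::optional standing in for llvm::Optional, a map standing in for the generated tables), showing how the caller now chooses between checking and asserting:
```cpp
#include <cassert>
#include <map>
#include <optional>

// Return std::nullopt instead of asserting; unknown register numbers are
// now the caller's decision to handle.
std::optional<unsigned> getLLVMRegNum(const std::map<unsigned, unsigned> &DwarfToLLVM,
                                      unsigned DwarfReg) {
  auto It = DwarfToLLVM.find(DwarfReg);
  if (It == DwarfToLLVM.end())
    return std::nullopt;
  return It->second;
}

int main() {
  std::map<unsigned, unsigned> M{{0, 100}};
  // "Safe" caller: checks the Optional and can process untrusted input.
  if (auto R = getLLVMRegNum(M, 7))
    (void)*R; // not reached: 7 is unknown, and we handled it gracefully
  // "Trusting" caller: a blind dereference recreates the old assertion.
  assert(*getLLVMRegNum(M, 0) == 100);
}
```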
[LV] Forced vectorization with runtime checks and OptForSize
When vectorisation is forced with a pragma while we optimise for minimum size
and need to emit runtime memory checks, allow this code growth and don't run
into an assert like we currently do.
This is the result of D65197 and D66803, and was a use case not really
considered before. If this now happens, we emit an optimisation remark warning
about the code-size expansion, which can be avoided by not forcing
vectorisation or possibly by source-code modifications.
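The kind of input that hits this case, sketched (a hypothetical function; build at -Oz or similar so the loop is OptForSize):
```cpp
// The pragma forces vectorisation; because `a` and `b` may alias, the
// vectoriser must emit runtime memory checks, which used to trip an assert
// under OptForSize and now yields a code-size optimisation remark instead.
void saxpy(float *a, const float *b, float k, int n) {
#pragma clang loop vectorize(enable)
  for (int i = 0; i < n; ++i)
    a[i] += k * b[i];
}
```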
[InstCombine] Fold a shifty implementation of clamp-to-allones.
Summary:
Fold
or(ashr(subNSW(Y, X), ScalarSizeInBits(Y)-1), X)
into
X s> Y ? -1 : X
https://rise4fun.com/Alive/d8Ab
clamp255 is a common operator in image processing; it can be implemented
in a shifty way as "(255 - X) >> 31 | X & 255". Folding the shift into a select
enables more optimization, e.g., vmin generation for the ARM target.
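Both forms written out in C++ for a 32-bit integer (a sketch mirroring the fold above; `>>` of a negative value is the arithmetic shift the pattern relies on, and `y - x` must not overflow, matching the subNSW requirement):
```cpp
#include <cassert>
#include <cstdint>

// Shifty form: or(ashr(sub(y, x), 31), x)
int32_t clampShifty(int32_t x, int32_t y) { return ((y - x) >> 31) | x; }

// Folded form: x s> y ? -1 : x
int32_t clampFolded(int32_t x, int32_t y) { return x > y ? -1 : x; }

int main() {
  for (int32_t x = -300; x <= 300; ++x)
    assert(clampShifty(x, 255) == clampFolded(x, 255)); // clamp255-style use
}
```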
[GlobalISel][IRTranslator] Fix switch table lowering to use signed LE not unsigned.
We were emitting switch value comparisons with the wrong signedness, a
miscompile which shows up when we have things like switch case values with i1
types, which end up being legalized incorrectly.
Summary:
MemoryPhis may be needed following a Def insertion in the IDF of all the
new accesses added (phis + potentially a def). Ensure this also occurs when
only the new MemoryPhis are the defining accesses.
Note: The need for computing IDF here is because of new Phis added with
edges incoming from unreachable code, Phis that had previously been
simplified. The preferred solution is to not reintroduce such Phis.
This patch is the needed fix while working on the preferred solution.
HotColdSplitting: invalidate the AssumptionCache on split
When a cold path is outlined, the value tracking in the assumption cache may be
invalidated due to the code motion. We would previously trip an assertion in
subsequent passes (though this required the passes to happen in a single run, as the
assumption cache is shared across the passes). Invalidating the cache ensures
that we get the correct information when needed with the legacy pass manager as
well.
Wei Mi [Mon, 23 Sep 2019 22:11:35 +0000 (22:11 +0000)]
[SampleFDO] Treat names in profile as not cold only when profile symbol list
is available
In rL372232, we treated names showing up in the profile as not cold when
profile-sample-accurate is enabled. This caused a 70k size regression in
Chrome/Android. The patch puts in a guard and only enables the change when
the profile symbol list is available, i.e., it keeps the old behavior when
the profile symbol list is not available.
Thomas Lively [Mon, 23 Sep 2019 20:42:12 +0000 (20:42 +0000)]
[WebAssembly] vNxM.load_splat instructions
Summary:
Adds the new load_splat instructions as specified at
https://github.com/WebAssembly/simd/blob/master/proposals/simd/SIMD.md#load-and-splat.
DAGISel does not allow matching multiple copies of the same load in a
single pattern, so we use a new node in WebAssemblyISD to wrap loads
that should be splatted.
David Bolvansky [Mon, 23 Sep 2019 19:55:45 +0000 (19:55 +0000)]
[InstCombine] Annotate strndup calls with dereferenceable_or_null
"Implementations are free to malloc() a buffer containing either (size + 1) bytes or (strnlen(s, size) + 1) bytes. Applications should not assume that strndup() will allocate (size + 1) bytes when strlen(s) is smaller than size."
[X86] Use TargetConstant for condition code on X86ISD::SETCC/CMOV/BRCOND nodes.
This removes the need for ConvertToTarget opcodes in the isel table.
It's also consistent with the recent changes to use TargetConstant
for intrinsic nodes that always take immediates.
[TableGen] Emit OperandType enums for RegisterOperands/RegisterClasses
https://reviews.llvm.org/D66773
The OpTypes::OperandType was creating an enum for all records that
inherit from Operand, but in reality there are operands for instructions
that inherit from other types too. In particular, RegisterOperand and
RegisterClass. This commit adds those types to the list of operand types
that are tracked by the OperandType enum.
Roman Lebedev [Mon, 23 Sep 2019 17:04:28 +0000 (17:04 +0000)]
[InstCombine] dropRedundantMaskingOfLeftShiftInput(): pat. c/d/e with mask (PR42563)
Summary:
If we have a pattern `(x & (-1 >> maskNbits)) << shiftNbits`,
we already know (have a fold) that will drop the `& (-1 >> maskNbits)`
mask iff `(shiftNbits-maskNbits) s>= 0` (i.e. `shiftNbits u>= maskNbits`).
So even if `(shiftNbits-maskNbits) s< 0`, we can still
fold, we will just need to apply a **constant** mask afterwards:
```
Name: c, normal+mask
%t0 = lshr i32 -1, C1
%t1 = and i32 %t0, %x
%r = shl i32 %t1, C2
=>
%n0 = shl i32 %x, C2
%n1 = i32 ((-(C2-C1))+32)
%n2 = zext i32 %n1 to i64
%n3 = lshr i64 -1, %n2
%n4 = trunc i64 %n3 to i32
%r = and i32 %n0, %n4
```
https://rise4fun.com/Alive/gslRa
Naturally, the old `%masked` will have to be one-use.
This is not valid for pattern f - where "masking" is done via `ashr`.
Roman Lebedev [Mon, 23 Sep 2019 17:04:14 +0000 (17:04 +0000)]
[InstCombine] dropRedundantMaskingOfLeftShiftInput(): pat. a/b with mask (PR42563)
Summary:
And this is **finally** the interesting part of that fold!
If we have a pattern `(x & (~(-1 << maskNbits))) << shiftNbits`,
we already know (have a fold) that will drop the `& (~(-1 << maskNbits))`
mask iff `(maskNbits+shiftNbits) u>= bitwidth(x)`.
But that is actually too conservative; there's a more general fold here:
In this pattern, `(maskNbits+shiftNbits)` actually correlates
with the number of low bits that will remain in the final value.
So even if `(maskNbits+shiftNbits) u< bitwidth(x)`, we can still
fold, we will just need to apply a **constant** mask afterwards:
```
Name: a, normal+mask
%onebit = shl i32 -1, C1
%mask = xor i32 %onebit, -1
%masked = and i32 %mask, %x
%r = shl i32 %masked, C2
=>
%n0 = shl i32 %x, C2
%n1 = add i32 C1, C2
%n2 = zext i32 %n1 to i64
%n3 = shl i64 -1, %n2
%n4 = xor i64 %n3, -1
%n5 = trunc i64 %n4 to i32
%r = and i32 %n0, %n5
```
https://rise4fun.com/Alive/F5R
Naturally, the old `%masked` will have to be one-use.
A similar fold exists for patterns c, d, e; will post a patch later.
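Pattern a can be checked exhaustively at a small bit width. A standalone sketch on i8 (so `C1+C2` may reach and exceed the bit width, exercising the widened mask computation from the transform above):
```cpp
#include <cassert>
#include <cstdint>

int main() {
  for (unsigned C1 = 0; C1 < 8; ++C1)
    for (unsigned C2 = 0; C2 < 8; ++C2)
      for (unsigned XV = 0; XV < 256; ++XV) {
        uint8_t X = XV;
        uint8_t Mask = ~(uint8_t(-1) << C1);   // ~(-1 << C1): keep low C1 bits
        uint8_t R = uint8_t((Mask & X) << C2); // original: mask, then shift
        // Replacement: shift first, then apply a constant mask computed in a
        // wider type so that C1+C2 >= 8 does not over-shift.
        uint16_t WideMask = ~(uint16_t(-1) << (C1 + C2));
        uint8_t N = uint8_t(X << C2) & uint8_t(WideMask);
        assert(R == N);
      }
}
```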
[BreakFalseDeps] ignore function with minsize attribute
This came up in the x86-specific:
https://bugs.llvm.org/show_bug.cgi?id=43239
...but it is a general problem for the BreakFalseDeps pass.
Dependencies may be broken by adding some other instruction,
so that should be avoided if the overall goal is to minimize size.
[SLP] Fix for PR31847: Assertion failed: (isLoopInvariant(Operands[i], L) && "SCEVAddRecExpr operand is not loop-invariant!")
Summary:
Initially the SLP vectorizer replaced all going-to-be-vectorized
instructions with Undef values. This may break ScalarEvolution and may
cause a crash.
Reworked the SLP vectorizer so that it does not replace vectorized
instructions with UndefValue anymore. Instead, vectorized instructions are
marked for deletion inside the BoUpSLP class and deleted upon class
destruction.
Summary:
This patch is part of a series to introduce an Alignment type.
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
See this patch for the introduction of the type: https://reviews.llvm.org/D64790
The matchSelectPattern const wrapper is never explicitly called with the optional Instruction::CastOps argument, and it turns out that it wasn't being forwarded to matchSelectPattern anyway!
Noticed while investigating clang static analyzer warnings.
llvm-undname: Add support for demangling typeinfo names
typeinfo names aren't symbols but string constant contents
stored in compiler-generated typeinfo objects, but llvm-cxxfilt
can demangle these for Itanium names.
In the MSVC ABI, these are just a '.' followed by a mangled
type -- this means they don't start with '?' like all MS-mangled
symbols do.