Philip Reames [Fri, 15 Mar 2019 17:50:30 +0000 (17:50 +0000)]
[X86][GlobalISEL] Support lowering aligned unordered atomics
The existing lowering code is accidentally correct for unordered atomics as far as I can tell. An unordered atomic has no memory ordering, and simply requires the actual load or store to be done as a single well-aligned instruction. As such, relax the restriction while adding tests to ensure the lowering remains correct in the future.
Yonghong Song [Fri, 15 Mar 2019 17:39:10 +0000 (17:39 +0000)]
[BPF] handle external global properly
Previous commit 6bc58e6d3dbd ("[BPF] do not generate unused local/global types")
tried to exclude global variable from type generation. The condition is:
if (Global.hasExternalLinkage())
continue;
This is not right. It also excluded initialized globals.
The correct condition (from AssemblyWriter::printGlobal()) is:
if (!GV->hasInitializer() && GV->hasExternalLinkage())
Out << "external ";
Let us do the same in BTF type generation. Also added a test for it.
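A minimal sketch of the resulting check in the BTF type-generation loop (the loop shape and helper name are illustrative, not the verbatim BTFDebug code):
for (const GlobalVariable &Global : M->globals()) {
  // Skip only truly external globals, i.e. declarations without an
  // initializer; initialized globals with external linkage still get types.
  if (!Global.hasInitializer() && Global.hasExternalLinkage())
    continue;
  processGlobalVariable(Global); // hypothetical helper
}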
Signed-off-by: Yonghong Song <yhs@fb.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@356279 91177308-0d34-0410-b5e6-96231b3b80d8
Nikita Popov [Fri, 15 Mar 2019 17:29:05 +0000 (17:29 +0000)]
[ConstantRange] Add overflow check helpers
Add functions to ConstantRange that determine whether the
unsigned/signed addition/subtraction of two ConstantRanges
may/always/never overflows. This will allow checking overflow
conditions based on known constant ranges in addition to known bits.
I'm implementing these methods on ConstantRange to allow them to be
unit tested independently of any ValueTracking machinery. The tests
include exhaustive testing on 4-bit ranges, to make sure the result
is both conservatively correct and maximally precise.
The OverflowResult enum is redeclared on ConstantRange, because
I wanted to avoid a dependency in either direction between
ValueTracking.h and ConstantRange.h.
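A rough usage sketch (the method and enum spellings follow this description and should be treated as approximate, not verbatim):
ConstantRange A(APInt(4, 6), APInt(4, 10)); // [6, 10) on 4 bits
ConstantRange B(APInt(4, 4), APInt(4, 8));  // [4, 8) on 4 bits
// The maximum sum 9 + 7 = 16 wraps on 4 bits while the minimum
// 6 + 4 = 10 does not, so unsigned addition *may* overflow here.
auto OR = A.unsignedAddMayOverflow(B);      // expected: MayOverflow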
Pavel Labath [Fri, 15 Mar 2019 15:34:10 +0000 (15:34 +0000)]
YAMLIO: Improve endian type support
Summary:
Now that endian types support enumerations (D59141), the existing yaml
support for them is somewhat insufficient. The current solution was to
define the ScalarTraits class for these types, which always forwards to
the ScalarTraits of the underlying type. However, the enum types will
usually have ScalarEnumerationTraits or ScalarBitsetTraits instead.
In this patch I add the two extra Traits types to the endian types. In
order to properly SFINAE-ize them, I've also added an extra "Enable"
template argument to the Traits template classes.
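The mechanism, in simplified form (a sketch of the pattern, not the verbatim YAMLTraits.h declarations; EndianType stands in for support::detail::packed_endian_specific_integral):
// Each Traits class gains a defaulted Enable parameter...
template <typename T, typename Enable = void>
struct ScalarEnumerationTraits;

// ...so a partial specialization can be constrained via SFINAE, e.g.
// to endian types whose underlying type has enumeration traits:
template <typename T>
struct ScalarEnumerationTraits<
    EndianType<T>,
    typename std::enable_if<has_ScalarEnumerationTraits<T>::value>::type> {
  static void enumeration(IO &io, EndianType<T> &E); // forwards to T's traits
};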
Teresa Johnson [Fri, 15 Mar 2019 15:11:38 +0000 (15:11 +0000)]
[ThinLTO] Restructure AliasSummary to contain ValueInfo of Aliasee
Summary:
The AliasSummary previously contained the AliaseeGUID, which was only
populated when reading the summary from bitcode. This patch changes it
to instead hold the ValueInfo of the aliasee, and always populates it.
This enables more efficient access to the ValueInfo (specifically in the
recent patch r352438 which needed to perform an index hash table lookup
using the aliasee GUID).
As noted in the comments in AliasSummary, we no longer technically need
to keep a pointer to the corresponding aliasee summary, since it could
be obtained by walking the list of summaries on the ValueInfo looking
for the summary in the same module. However, I am concerned that this
would be inefficient when walking through the index during the thin
link for various analyses. That can be reevaluated in the future.
By always populating this new field, we can remove the guard and special
handling for a 0 aliasee GUID when dumping the dot graph of the summary.
An additional improvement in this patch is when reading the summaries
from LLVM assembly we now set the AliaseeSummary field to the aliasee
summary in that same module, which makes it consistent with the behavior
when reading the summary from bitcode.
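In outline, the summary now carries (a sketch of the shape, not the exact ModuleSummaryIndex.h declarations):
class AliasSummary : public GlobalValueSummary {
  ValueInfo AliaseeValueInfo;          // always populated; replaces AliaseeGUID
  GlobalValueSummary *AliaseeSummary;  // kept for fast access during the thin link
public:
  ValueInfo getAliaseeVI() const { return AliaseeValueInfo; }
  const GlobalValueSummary &getAliasee() const { return *AliaseeSummary; }
};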
Mikael Holmen [Fri, 15 Mar 2019 13:51:05 +0000 (13:51 +0000)]
[CodeGenPrepare] avoid crashing from replacing a phi twice
Summary:
This is a fix to bug 41052:
https://bugs.llvm.org/show_bug.cgi?id=41052
While trying to optimize a memory instruction in a dead basic block, we end up registering the same phi for replacement twice. This patch avoids registering more than the first replacement candidate for a phi.
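The guard amounts to something like this (illustrative names, not the exact CodeGenPrepare code):
SmallPtrSet<PHINode *, 8> RegisteredPhis;
...
// Register only the first replacement candidate for each phi; a second
// registration of the same phi is what previously led to the crash.
if (RegisteredPhis.insert(Phi).second)
  registerReplacement(Phi, Candidate); // hypothetical helper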
Michael Liao [Fri, 15 Mar 2019 12:42:21 +0000 (12:42 +0000)]
[AMDGPU] Fix SGPR fixing through SCC chaining
Summary:
- During the fixing of SGPR copying from VGPR, ensure that users of SCC are
properly propagated, i.e.
* only propagate through live def of SCC,
* skip the SCC-def inst itself, and
* stop the propagation on the other SCC-def inst after checking its
SCC-use first.
Florian Hahn [Fri, 15 Mar 2019 12:17:36 +0000 (12:17 +0000)]
[LSR] Check for signed overflow in NarrowSearchSpaceByDetectingSupersets.
We are adding a sign extended IR value to an int64_t, which can cause
signed overflows, as in the attached test case, where we have a formula
with BaseOffset = -1 and a constant with numeric_limits<int64_t>::min().
If the addition would overflow, skip the simplification for this
formula. Note that the target triple is required to trigger the failure.
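Conceptually, the guard looks like this (a sketch; the actual patch may use LLVM's checked-arithmetic helpers rather than the compiler builtin):
int64_t BaseOffset = ...; // e.g. -1
int64_t Imm = ...;        // e.g. std::numeric_limits<int64_t>::min()
int64_t Sum;
// __builtin_add_overflow returns true when the signed addition wraps,
// in which case we skip the simplification for this formula.
if (__builtin_add_overflow(BaseOffset, Imm, &Sum))
  continue;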
James Henderson [Fri, 15 Mar 2019 10:35:27 +0000 (10:35 +0000)]
[yaml2obj]Allow explicit setting of p_filesz, p_memsz, and p_offset
yaml2obj currently derives the p_filesz, p_memsz, and p_offset values of
program headers from their sections. This makes writing tests for
certain formats more complex, and sometimes impossible. This patch
allows setting these fields explicitly, overriding the default value,
when relevant.
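In ELFYAML terms the change boils down to extra optional keys on the program header mapping, roughly as below (key and member names here are assumptions based on the description, not verified against ELFYAML.cpp):
static void mapping(IO &IO, ELFYAML::ProgramHeader &Phdr) {
  IO.mapRequired("Type", Phdr.Type);
  ...
  // When absent these stay unset and the values are derived from the
  // sections as before; when present they override the derived values.
  IO.mapOptional("FileSize", Phdr.FileSize);
  IO.mapOptional("MemSize", Phdr.MemSize);
  IO.mapOptional("Offset", Phdr.Offset);
}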
Yonghong Song [Fri, 15 Mar 2019 05:51:25 +0000 (05:51 +0000)]
[BPF] do not generate unused local/global types
The kernel currently has a 64KB limit on the number of types and a
64KB limit on the size of the string subsection. A simple bcc tool
runqlat.py generates:
. the size of ~33KB type section, roughly ~10K types
. the size of ~17KB string section
The majority of the types come from those referenced by local
variables in the bpf program. For example, the kernel "task_struct"
itself recursively brings in ~900 other types.
This patch did the following optimization to avoid generating
unused types:
. do not generate types for local variables unless they are
function arguments.
. do not generate types for external globals.
If an external global is not used in the program, llvm
already removes it from IR, so the saving from global variables is
typically small. For runqlat.py, only one variable "llvm.used"
is the external global.
The types for locals and external globals can be added back
once there is a usage for them.
After the above optimization, the runqlat.py generates:
. the size of ~1.5KB type section, roughly 500 types
. the size of ~0.7KB string section
UPDATE:
resubmitted the patch after the previous revert with
the following fix:
use Global.hasExternalLinkage() to test "external"
linkage instead of using Global.getInitializer(),
which will assert on external variables.
Signed-off-by: Yonghong Song <yhs@fb.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@356234 91177308-0d34-0410-b5e6-96231b3b80d8
Yonghong Song [Fri, 15 Mar 2019 04:42:01 +0000 (04:42 +0000)]
[BPF] do not generate unused local/global types
The kernel currently has a 64KB limit on the number of types and a
64KB limit on the size of the string subsection. A simple bcc tool
runqlat.py generates:
. the size of ~33KB type section, roughly ~10K types
. the size of ~17KB string section
The majority of the types come from those referenced by local
variables in the bpf program. For example, the kernel "task_struct"
itself recursively brings in ~900 other types.
This patch did the following optimization to avoid generating
unused types:
. do not generate types for local variables unless they are
function arguments.
. do not generate types for external globals.
If an external global is not used in the program, llvm
already removes it from IR, so the saving from global variables is
typically small. For runqlat.py, only one variable "llvm.used"
is the external global.
The types for locals and external globals can be added back
once there is a usage for them.
After the above optimization, the runqlat.py generates:
. the size of ~1.5KB type section, roughly 500 types
. the size of ~0.7KB string section
Signed-off-by: Yonghong Song <yhs@fb.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@356232 91177308-0d34-0410-b5e6-96231b3b80d8
Matt Arsenault [Thu, 14 Mar 2019 23:45:09 +0000 (23:45 +0000)]
AMDGPU: Remove intrinsic operand assert
Before r355981, this was under LLVM_DEBUG. I don't think the assert is
quite right, but this really should be a verifier check. InstCombine
should not be asserting on this sort of thing.
Sanjay Patel [Thu, 14 Mar 2019 23:14:31 +0000 (23:14 +0000)]
[CGP] add another bailout for degenerate code (PR41064)
This is almost the same as:
rL355345
...and should prevent any potential crashing from examples like:
https://bugs.llvm.org/show_bug.cgi?id=41064
...although the bug was masked by:
rL355823
...and I'm not sure how to repro the problem after that change.
Paul Robinson [Thu, 14 Mar 2019 23:09:17 +0000 (23:09 +0000)]
Tighten up tests that use -debugify as a shortcut. NFC
These now verify that a given instruction has a specific source
location, rather than any old location. We want to make sure we
propagate the correct locations from one instruction to another.
Eli Friedman [Thu, 14 Mar 2019 23:08:19 +0000 (23:08 +0000)]
[MC] Sort FDEs by the associated CIE before emitting them.
This isn't necessary according to the DWARF standard, but it matches the
.eh_frame sections emitted by other tools in practice, and the Android
libunwindstack rejects .eh_frame sections where an FDE refers to a CIE
other than the closest previous CIE. So match the other tools and also
sort accordingly.
I consider this a bug in libunwindstack, but it's easy enough to emit
a compatible .eh_frame section for compatibility with installed
operating systems.
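The ordering change is essentially a stable sort keyed on the owning CIE (a sketch with illustrative types, not the verbatim MCDwarf code):
// Group the FDEs by their CIE so every FDE refers to the closest
// previous CIE; stable_sort preserves the original order within a group.
std::stable_sort(FrameInfos.begin(), FrameInfos.end(),
                 [](const FrameInfo &A, const FrameInfo &B) {
                   return A.CIEKey < B.CIEKey;
                 });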
Matt Arsenault [Thu, 14 Mar 2019 22:54:43 +0000 (22:54 +0000)]
MIR: Allow targets to serialize MachineFunctionInfo
This has been a very painful missing feature that has made producing
reduced testcases difficult. In particular the various registers
determined for stack access during function lowering were necessary to
avoid undefined register errors in a large percentage of
cases. Implement a subset of the important fields that need to be
preserved for AMDGPU.
Most of the changes are to support targets parsing register fields and
properly reporting errors. The biggest remaining sort-of bug is that
fields which can be initialized from the IR will be overwritten by a
default-initialized machineFunctionInfo section. Another remaining bug
is that the machineFunctionInfo section is still printed even if it is
empty.
Pete Couperus [Thu, 14 Mar 2019 20:50:54 +0000 (20:50 +0000)]
[ARC] Add more load/store variants.
In the ARC ISA, the general format of a load instruction is this:
LD<zz><.x><.aa><.di> a, [b,c]
And the general format of a store is this:
ST<zz><.aa><.di> c, [b,s9]
Where:
<zz> is data size field and can be one of
<empty> (bits 00) - Word (32-bit), default behavior
B (bits 01) - Byte
H (bits 10) - Half-word (16-bit)
<.x> is data extend mode:
<empty> (bit 0) - If size is not Word (32-bit), then data is zero extended
X (bit 1) - If size is not Word (32-bit), then data is sign extended
<.aa> is address write-back mode:
<empty> (bits 00) - no write-back
.AW (bits 01) - Preincrement, base register updated pre memory transaction
.AB (bits 10) - Postincrement, base register updated post memory transaction
Sanjay Patel [Thu, 14 Mar 2019 19:22:08 +0000 (19:22 +0000)]
[InstCombine] canonicalize funnel shift constant shift amount to be modulo bitwidth
The shift argument is defined to be modulo the bitwidth, so if that argument
is a constant, we can always reduce the constant to its minimal form to allow
better CSE and other follow-on transforms.
We need to be careful to ignore constant expressions here, or we will
likely loop forever. I'm adding a general vector constant query for that case.
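The reduction itself is just an unsigned remainder on the constant shift amount (a sketch, not the verbatim InstCombine code):
// fshl/fshr shift amounts are defined modulo the bitwidth, so e.g. a
// shift amount of 33 on an i32 funnel shift canonicalizes to 1.
unsigned BitWidth = Ty->getScalarSizeInBits();
APInt Reduced = ShAmtC.urem(APInt(BitWidth, BitWidth));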
Pete Couperus [Thu, 14 Mar 2019 17:50:46 +0000 (17:50 +0000)]
[ARC] Better classify add/sub immediate instructions in frame lowering.
Summary:
Some operations have multiple ARC instructions that are applicable.
For instance, "add r0, r0, 123" can be encoded as a "LImm" instruction
with a 32-bit immediate (8 bytes), or as a signed 12-bit immediate instruction
for the case where the source and destination register are the same (4 bytes).
The ARC assembler will choose the shortest encoding, but we should track
the correct instruction in the compiler.
This patch fixes the instruction used in some cases from ARCFrameLowering.
Max Moroz [Thu, 14 Mar 2019 17:49:27 +0000 (17:49 +0000)]
Speeding up llvm-cov export with multithreaded renderFiles implementation.
Summary:
CoverageExporterJson::renderFiles accounts for most of the execution time given a large profdata file with multiple binaries.
The proposed solution is to generate the JSON for each file in parallel and sort at the end to preserve deterministic output. Flags were also added to skip generating parts of the output, trimming the output size.
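Schematically (a sketch using llvm::ThreadPool; the renderer names are illustrative):
ThreadPool Pool;
std::vector<std::string> RenderedJson(SourceFiles.size());
for (unsigned I = 0, E = SourceFiles.size(); I != E; ++I)
  Pool.async([&, I] {
    // Each file's JSON is rendered independently on a worker thread.
    RenderedJson[I] = renderFile(SourceFiles[I]); // hypothetical helper
  });
Pool.wait();
// Emitting in index (or sorted) order keeps the output deterministic
// regardless of which thread finished first.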
Building on the work done in D57601, now that we can distinguish between atomic and volatile memory accesses, go ahead and allow code motion of unordered atomics. As seen in the diffs, this allows much better folding of memory operations into using instructions. (Mostly done by the PeepholeOpt pass.)
Note: I have not reviewed all callers of hasOrderedMemoryRef since one of them - isSafeToMove - is very widely used. I'm relying on the documented semantics of each method to judge correctness.
Craig Topper [Thu, 14 Mar 2019 16:53:24 +0000 (16:53 +0000)]
[X86] Fix the pattern changes from r356121 so that the ROR*r1/ROR*m1 pattern use the rotr opcode.
These instructions used to use rotl with a bitwidth-1 immediate. I changed the immediate to 1,
but failed to change the opcode.
Thankfully this seems not to have caused a functional issue, because we then
had two rotl-by-1 patterns, but the correct ones came earlier and took
priority; we just missed some optimizations.
Sanjay Patel [Thu, 14 Mar 2019 15:32:34 +0000 (15:32 +0000)]
[x86] prevent infinite looping from vselect commutation (PR41066)
This is an immediate fix for:
https://bugs.llvm.org/show_bug.cgi?id=41066
...but as noted there and in the code comments, we should do better
by stubbing this out sooner.
Pavel Labath [Thu, 14 Mar 2019 15:23:40 +0000 (15:23 +0000)]
YAMLIO: Improve template arg deduction for mapOptional
Summary:
The way C++ template argument deduction works, both arguments are used
to deduce the template type in the three-argument overload of
mapOptional. This is a problem if the types are slightly different, even
if they are implicitly convertible. This is fairly easy to trigger with
integral types, as the default type of most integral constants is int,
which then requires casting the constant to the type of the other
argument.
This patch fixes that by using a separate template type for the default
value, which is then cast to the type of the first argument. To avoid
this conversion triggering conversions marked as explicit, we use
static_assert to check that the types are implicitly convertible.
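The fix, in miniature (simplified from YAMLIO; the real overload also threads a context through):
template <typename T, typename DefaultT>
void mapOptional(const char *Key, T &Val, const DefaultT &Default) {
  static_assert(std::is_convertible<DefaultT, T>::value,
                "Default type must be implicitly convertible to field type!");
  // T is now deduced from Val alone, so e.g. an int literal default no
  // longer fights the deduction for a uint64_t field.
  mapOptionalWithDefault(Key, Val, static_cast<T>(Default)); // illustrative helper
}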
Matt Arsenault [Thu, 14 Mar 2019 14:18:56 +0000 (14:18 +0000)]
GlobalISel: Use multiple returns for intrinsic structs
This is consistent with what SelectionDAG does and is much easier to
work with than the extract sequence with an artificial wide register.
For the AMDGPU control flow intrinsics, this was producing an s128 for
the i64, i1 tuple return. Any legalization that should apply to a real
s128 value would badly obscure the direct values that need to be seen.
Than McIntosh [Thu, 14 Mar 2019 13:56:49 +0000 (13:56 +0000)]
[SampleFDO] add suffix elision control for fcn names
Summary:
Add hooks for determining the policy used to decide whether/how
to chop off symbol 'suffixes' when locating a given function
in a sample profile.
Prior to this change, any function symbols of the form "X.Y" were
elided/truncated into just "X" when looking up things in a sample
profile data file.
With this change, the policy on suffixes can be changed by adding a
new attribute "sample-profile-suffix-elision-policy" to the function:
this attribute can have the value "all" (the default), "selected", or
"none". A value of "all" preserves the previous behavior (chop off
everything after the first "." character, then treat that as the
symbol name). A value of "selected" chops off only the rightmost
".llvm.XXXX" suffix (where "XXX" is any string not containing a "."
char). A value of "none" indicates that names should be left as is.
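In pseudocode form, the three policies behave roughly like this (a sketch, not the actual SampleProf implementation):
StringRef getCanonicalFnName(StringRef Name, StringRef Policy) {
  if (Policy == "none")
    return Name;                        // leave the name untouched
  if (Policy == "selected") {
    // Drop only a trailing ".llvm.XXXX" suffix, if one is present.
    size_t Pos = Name.rfind(".llvm.");
    return Pos == StringRef::npos ? Name : Name.substr(0, Pos);
  }
  // "all" (the default): keep everything before the first '.'.
  return Name.split('.').first;
}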
Matt Arsenault [Thu, 14 Mar 2019 13:46:14 +0000 (13:46 +0000)]
ARM: Add ImmArg to intrinsics
I found these by asserting in clang for any GCCBuiltin that doesn't
require mangling and requires a constant for the builtin. This means
that intrinsics are missing which don't use GCCBuiltin, don't have
builtins defined in clang, or were missing the constant annotation in
the builtin definition.
James Henderson [Thu, 14 Mar 2019 11:47:41 +0000 (11:47 +0000)]
[llvm-objcopy]Don't implicitly strip sections in segments
This patch changes llvm-objcopy's behaviour to not strip sections that
are in segments, if they otherwise would be due to a stripping operation
(--strip-all, --strip-sections, --strip-non-alloc). This preserves the
segment contents. It does not change the behaviour of --strip-all-gnu
(although we could choose to do so), because GNU objcopy's behaviour in
this case seems to be to strip the section, nor does it prevent the removal
of sections in segments with --remove-section (if a user REALLY wants to
remove a section, we should probably let them, although I could be
persuaded that a warning might be appropriate). Tests have been added to
show this latter behaviour.
This fixes https://bugs.llvm.org/show_bug.cgi?id=41006.
This is a reland of r356129, attempting to fix greendragon failures
due to a suspected compatibility issue with od on the greendragon bots
versus other versions.
Sam Parker [Thu, 14 Mar 2019 11:14:13 +0000 (11:14 +0000)]
[ARM][ParallelDSP] Enable multiple uses of loads
When choosing whether a pair of loads can be combined into a single
wide load, we check that the load only has a sext user and that sext
also only has one user. But this can prevent the transformation in
cases where parallel MACs use the same loaded data multiple times.
To enable this, we need to fix up any other uses after creating the
wide load: generating a trunc and a shift + trunc pair to recreate
the narrow values. We also need to keep a record of which loads have
already been widened.
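The fix-up for the remaining narrow users looks roughly like this (an IRBuilder sketch with illustrative variable names):
// Recreate the two narrow values from the widened load so any other
// users of the original loads keep seeing values of the old type.
IRBuilder<> Builder(WideLoad->getNextNode());
Value *Bottom = Builder.CreateTrunc(WideLoad, NarrowTy);
Value *Shifted = Builder.CreateLShr(WideLoad, NarrowTy->getBitWidth());
Value *Top = Builder.CreateTrunc(Shifted, NarrowTy);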
James Henderson [Thu, 14 Mar 2019 10:20:27 +0000 (10:20 +0000)]
[llvm-objcopy]Don't implicitly strip sections in segments
This patch changes llvm-objcopy's behaviour to not strip sections that
are in segments, if they otherwise would be due to a stripping operation
(--strip-all, --strip-sections, --strip-non-alloc). This preserves the
segment contents. It does not change the behaviour of --strip-all-gnu
(although we could choose to do so), because GNU objcopy's behaviour in
this case seems to be to strip the section, nor does it prevent the removal
of sections in segments with --remove-section (if a user REALLY wants to
remove a section, we should probably let them, although I could be
persuaded that a warning might be appropriate). Tests have been added to
show this latter behaviour.
This fixes https://bugs.llvm.org/show_bug.cgi?id=41006.
Alex Bradbury [Thu, 14 Mar 2019 08:28:48 +0000 (08:28 +0000)]
[RISCV][NFC] Rename callee saved regs 'CSR' to CSR_ILP32_LP64 and minor RISCVRegisterInfo refactoring
The CSR renaming further prepares the way for an upcoming patch adding support for more
RISC-V ABIs.
Modify RISCVRegisterInfo::getCalleeSavedRegs and
RISCVRegisterInfo::getReservedRegs to do MF->getSubtarget<RISCVSubtarget>()
once rather than multiple times.
Craig Topper [Thu, 14 Mar 2019 07:07:26 +0000 (07:07 +0000)]
[X86] Add patterns for rotr by immediate to fix PR41057.
Prior to the introduction of funnel shift intrinsics we could count on rotate
by immediates preferring to use rotl since that's what MatchRotate would check
first. The or+shift pattern doesn't have a direction so one must be chosen
arbitrarily.
With funnel shift, there is a direction, and fshr will try to use rotr
first, while fshl will try to use rotl first.
This patch adds the isel patterns for rotr to complement the rotl patterns. I've
put the rotr by 1 patterns in the instruction patterns, and moved the rotl by
bitwidth-1 patterns to separate Pat patterns.
Quentin Colombet [Thu, 14 Mar 2019 01:37:13 +0000 (01:37 +0000)]
[GlobalISel][Utils] Add a getConstantVRegVal variant that looks through instrs
getConstantVRegVal used to look only for G_CONSTANT when unboxing the
value of a vreg. However, constants are sometimes not
directly used and are hidden behind trunc, s|zext or copy chain of
computation.
In particular, such patterns may be introduced by the legalization
process, which doesn't want to simplify them because doing so can lead
to an infinite loop when legalizing a constant.
To circumvent that problem, add a new variant of getConstantVRegVal,
named getConstantVRegValWithLookThrough, which allows looking through
such extensions.
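Typical use (the signature follows the description above and is a sketch; the result pairs the constant with the vreg that ultimately defined it):
// Walks through copies, truncs and s|zext until it finds a G_CONSTANT,
// instead of giving up at the first non-G_CONSTANT definition.
if (auto ValAndVReg = getConstantVRegValWithLookThrough(Reg, MRI)) {
  int64_t Imm = ValAndVReg->Value;
  // ...
}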
Craig Topper [Thu, 14 Mar 2019 01:13:15 +0000 (01:13 +0000)]
[ResetMachineFunctionPass] Add visited functions statistics info
Adding a "NumFunctionsVisited" for collecting the visited function number.
It can be used to collect function pass rate in some tests,
the pass rate = (NumberVisited - NumberReset)/NumberVisited.
e.g. it can be used for caculating GlobalISel pass rate in Test-Suite.
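The counter follows the standard LLVM statistics pattern, roughly (a sketch of the pattern, not the verbatim pass):
#define DEBUG_TYPE "reset-machine-function"
STATISTIC(NumFunctionsVisited, "Number of functions visited");

bool runOnMachineFunction(MachineFunction &MF) override {
  ++NumFunctionsVisited; // bumped for every function the pass sees
  // ... existing reset logic ...
}
With -stats enabled, the pass rate can then be computed as
(NumFunctionsVisited - NumFunctionsReset) / NumFunctionsVisited.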