Sanjay Patel [Mon, 4 Mar 2019 22:47:13 +0000 (22:47 +0000)]
[CodeGenPrepare] avoid crashing on non-canonical/degenerate code
The test is reduced from an example in the post-commit thread for:
rL354746
http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20190304/632396.html
While we must avoid dying here, the real question should be:
Why is non-canonical and/or degenerate code making it to CGP when
using the new pass manager?
[GlobalISel][AArch64] Add selection support for G_EXTRACT_VECTOR_ELT
This adds instruction selection support for G_EXTRACT_VECTOR_ELT for cases
where the index is defined by a G_CONSTANT.
It also factors out the lane copy opcode selection into its own function,
`getLaneCopyOpcode` (sketched below). This is used by both `selectUnmergeValues`
and `selectExtractElt`.
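A rough sketch of what such a helper might look like; the opcode names here are illustrative assumptions, not verified against the actual patch:
```
// Sketch only: pick the scalar lane-copy opcode from the vector element size.
static unsigned getLaneCopyOpcode(unsigned EltSizeInBits) {
  switch (EltSizeInBits) {
  case 8:  return AArch64::CPYi8;
  case 16: return AArch64::CPYi16;
  case 32: return AArch64::CPYi32;
  case 64: return AArch64::CPYi64;
  default: llvm_unreachable("unexpected element size for lane copy");
  }
}
```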
Shoaib Meenai [Mon, 4 Mar 2019 21:19:53 +0000 (21:19 +0000)]
[build] Rename clang-headers to clang-resource-headers
Summary:
The current install-clang-headers target installs clang's resource
directory headers. This is different from the install-llvm-headers
target, which installs LLVM's API headers. We want to introduce the
corresponding target to clang, and the natural name for that new target
would be install-clang-headers. Rename the existing target to
install-clang-resource-headers to free up the install-clang-headers name
for the new target, following the discussion on cfe-dev [1].
I didn't find any bots on zorg referencing install-clang-headers. I'll
send out another PSA to cfe-dev to accompany this rename.
Sanjay Patel [Mon, 4 Mar 2019 20:57:14 +0000 (20:57 +0000)]
[ConstantHoisting] avoid hang/crash from unreachable blocks (PR40930)
I'm not too familiar with this pass, so there might be a better
solution, but this appears to fix the degenerate cases reported in:
PR40930
PR40931
PR40932
PR40934
...without affecting any real-world code.
As we've seen in several other passes, when we have unreachable blocks,
they can contain semi-bogus IR and/or cause unexpected conditions. We
would not typically expect these patterns to make it this far, but we
have to guard against them anyway.
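A minimal sketch of the usual guard, assuming the pass has a DominatorTree available (names are illustrative, not the exact code from the patch):
```
// Skip blocks that are not reachable from the entry block; their IR can be
// semi-bogus (e.g. self-referential instructions) and is never executed.
for (BasicBlock &BB : Fn) {
  if (!DT.isReachableFromEntry(&BB))
    continue;  // don't collect or hoist constants from dead code
  collectCandidatesIn(BB);  // hypothetical helper for illustration
}
```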
Craig Topper [Mon, 4 Mar 2019 19:23:37 +0000 (19:23 +0000)]
[Subtarget] Follow up to r355167, add another set of curly braces to FeatureBitArray initialization to satisfy older versions of clang.
Apparently older versions of clang like 3.6 require an extra set of curly braces around std::array initializations. I'm told the C++ language was changed regarding this by CWG 1270.
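For illustration, a reduced example of the shape of the fix (the member and values here are made up):
```
#include <array>
#include <cstdint>

// A struct wrapping a std::array, roughly like FeatureBitArray.
struct FeatureBitArray {
  std::array<uint64_t, 3> Bits;
};

// Some older compilers reject the brace-elided form, so the nested array
// member gets its own explicit set of braces.
// FeatureBitArray F = {0x1, 0x2, 0x4};    // relies on brace elision
FeatureBitArray F = {{{0x1, 0x2, 0x4}}};   // braces for struct, array, elements
```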
Amara Emerson [Mon, 4 Mar 2019 19:16:00 +0000 (19:16 +0000)]
Re-commit r355104: "[AArch64][GlobalISel] Add support for 64 bit vector shuffle using TBL1."
The code to materialize a mask from a constant pool load tried to use a 128 bit
LDR to load a 64 bit constant pool entry, which was 8 byte aligned. This resulted
in a link failure in the NEON tests in the test suite since the LDR address was
unaligned. This change fixes that to instead emit a 64 bit LDR if the entry is
64 bit, before converting back to a 128 bit register for the TBL.
Craig Topper [Mon, 4 Mar 2019 19:12:16 +0000 (19:12 +0000)]
[DAGCombiner][X86][SystemZ][AArch64] Combine some cases of (bitcast (build_vector constants)) between legalize types and legalize dag.
This patch enables combining integer bitcasts of integer build vectors when the new scalar type is legal. I've avoided floating point because the implementation bitcasts float to int along the way and we would need to check the intermediate types for legality.
[MCA] Highlight kernel bottlenecks in the summary view.
This patch adds a new flag named -bottleneck-analysis to print out information
about throughput bottlenecks.
MCA knows how to identify and classify dynamic dispatch stalls. However, it
doesn't know how to analyze and highlight kernel bottlenecks. The goal of this
patch is to teach MCA how to correlate increases in backend pressure to backend
stalls (and therefore, the loss of throughput).
From a Scheduler point of view, backend pressure is a function of the scheduler
buffer usage (i.e. how the number of uOps in the scheduler buffers changes over
time). Backend pressure increases (or decreases) when there is a mismatch
between the number of opcodes dispatched, and the number of opcodes issued in
the same cycle. Since buffer resources are limited, continuous increases in
backend pressure would eventually lead to dispatch stalls. So, there is a
strong correlation between dispatch stalls and how backpressure changes over
time.
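As a minimal sketch of that idea (all names here are illustrative, not MCA's actual interfaces), backend pressure can be modelled as the per-cycle difference between dispatched and issued micro-opcodes:
```
// Illustrative only: a positive delta means the scheduler buffers are filling
// up this cycle; a sustained positive delta eventually exhausts the buffer
// resources and shows up as dispatch stalls.
struct CycleInfo {
  unsigned DispatchedOpcodes = 0;
  unsigned IssuedOpcodes = 0;
};

int backendPressureDelta(const CycleInfo &C) {
  return static_cast<int>(C.DispatchedOpcodes) -
         static_cast<int>(C.IssuedOpcodes);
}
```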
This patch teaches MCA how to identify situations where backend pressure increases
due to:
- unavailable pipeline resources.
- data dependencies.
Data dependencies may delay execution of instructions and therefore increase the
time that uOps have to spend in the scheduler buffers. That often translates to
an increase in backend pressure which may eventually lead to a bottleneck.
Contention on pipeline resources may also delay execution of instructions, and
lead to a temporary increase in backend pressure.
Internally, the Scheduler classifies instructions based on whether register /
memory operands are available or not.
An instruction is marked as "ready to execute" only if data dependencies are
fully resolved.
Every cycle, the Scheduler attempts to execute all instructions that are ready
to execute. If an instruction cannot execute because of unavailable pipeline
resources, then the Scheduler internally updates a BusyResourceUnits mask with
the ID of each unavailable resource.
ExecuteStage is responsible for tracking changes in backend pressure. If backend
pressure increases during a cycle because of contention on pipeline resources,
then ExecuteStage sends a "backend pressure" event to the listeners.
That event would contain information about instructions delayed by resource
pressure, as well as the BusyResourceUnits mask.
Note that ExecuteStage also knows how to identify situations where backpressure
increased because of delays introduced by data dependencies.
The SummaryView observes "backend pressure" events and prints out a "bottleneck
report".
A bottleneck report is printed out only if increases in backend pressure
eventually caused backend stalls.
About the time complexity:
Time complexity is linear in the number of instructions in the
Scheduler::PendingSet.
The average slowdown tends to be in the range of ~5-6%.
For memory-intensive kernels, the slowdown can be significant if flag
-noalias=false is specified; in the worst-case scenario I have observed a
slowdown of ~30%.
We can definitely recover part of that slowdown if we optimize class LSUnit (by
doing extra bookkeeping to speed up queries). For now, this new analysis is
disabled by default, and it can be enabled via flag -bottleneck-analysis. Users
of MCA as a library can enable the generation of pressure events through the
constructor of ExecuteStage.
This patch partially addresses https://bugs.llvm.org/show_bug.cgi?id=37494
Jeremy Morse [Mon, 4 Mar 2019 10:56:02 +0000 (10:56 +0000)]
[X86] Avoid codegen changes when DBG_VALUE appears between lowered selects
X86TargetLowering::EmitLoweredSelect presently detects sequences of CMOV pseudo
instructions without accounting for debug intrinsics. This leads to different
codegen with and without option -g, if a DBG_VALUE instruction lands in the
middle of several lowered selects.
Work around this by skipping over debug instructions when looking for CMOV
sequences, and sinking those debug instructions into the EmitLoweredSelect sunk block.
This might slightly shift where variables appear in the instruction sequence,
but won't re-order assignments.
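A hedged sketch of the skipping part (not the exact code from the patch; `NextMIIt` and `ThisMBB` are assumed names):
```
// When scanning forward for a run of CMOV pseudos, step over debug
// instructions instead of letting a DBG_VALUE terminate the sequence.
MachineBasicBlock::iterator NextMIIt =
    std::next(MachineBasicBlock::iterator(&MI));
while (NextMIIt != ThisMBB->end() && NextMIIt->isDebugInstr())
  ++NextMIIt;  // the skipped debug instructions are later sunk into the sink block
```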
Oliver Stannard [Mon, 4 Mar 2019 09:17:38 +0000 (09:17 +0000)]
[ARM] Fix selection of VLDR.16 instruction with imm offset
The isScaledConstantInRange function takes upper and lower bounds which are
checked after dividing by the scale, so the bounds checks for half, single and
double precision should all be the same. Previously, we had the wrong bounds
checks for half precision, so we could select an immediate that the instruction
can't actually represent.
Jonas Hahnfeld [Mon, 4 Mar 2019 08:51:32 +0000 (08:51 +0000)]
[AArch64/ARM] Fix two compiler warnings in InstructionSelector, NFCI
1) GCC complains that KnownValid is set but not used.
2) In ARMInstructionSelector::selectGlobal() the code is mixing "enumeral
and non-enumeral type in conditional expression". Solve this by casting
to unsigned, which is the final type anyway (see the sketch below).
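For illustration, a made-up reduction of the pattern (not the actual code):
```
enum MyFlags : unsigned { MO_NO_FLAG = 0, MO_SOME_FLAG = 1 };

unsigned pickFlags(bool UseFlag, unsigned Extra) {
  // GCC warns: "enumeral and non-enumeral type in conditional expression".
  // unsigned F = UseFlag ? MO_SOME_FLAG : Extra;
  // Casting the enumerator to the final type silences the warning:
  unsigned F = UseFlag ? static_cast<unsigned>(MO_SOME_FLAG) : Extra;
  return F;
}
```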
Heejin Ahn [Sun, 3 Mar 2019 22:35:56 +0000 (22:35 +0000)]
[WebAssembly] Delete ThrowUnwindDest map from WasmEHFuncInfo
Summary:
Previously, under the first EH proposal, a 'catch <tag>' instruction might not
catch an exception, so there could be multiple EH pads an exception could
unwind to. That meant a BB could have multiple EH pad successors.
Now that we have switched to the new proposal, every 'catch' instruction
catches an exception, and there is only one catchpad per catchswitch, so we
have at most one EH pad successor, making the `ThrowUnwindDest` map in
`WasmEHFuncInfo` unnecessary.
Keeping the `ThrowUnwindDest` map in `WasmEHFuncInfo` has its own problems:
other optimization passes can split a BB that contains possibly throwing calls
(previously invokes), and we would have to update the map every time that
happens, which is not easy for common CodeGen passes.
This also correctly updates successor info in LateEHPrepare when we add
a rethrow instruction.
Sanjay Patel [Sun, 3 Mar 2019 18:59:33 +0000 (18:59 +0000)]
[ValueTracking] do not try to peek through bitcasts in computeKnownBitsFromAssume()
There are no tests for this case, and I'm not sure how it could ever work,
so I'm just removing this option from the matcher. This should fix PR40940:
https://bugs.llvm.org/show_bug.cgi?id=40940
Michal Gorny [Sun, 3 Mar 2019 10:06:40 +0000 (10:06 +0000)]
[llvm] [Support] Reimplement getMainExecutable() using sysctl on NetBSD
Use sysctl() to implement getMainExecutable() on NetBSD, rather than
trying to guess the correct path from argv[0]. This is one
of the fixes for the recent clang-check-mac-libcxx-fixed-compilation-db.cpp
test failure on NetBSD.
This has been historically done on both FreeBSD and NetBSD in r303015,
and reverted in r303285 due to buggy implementation on FreeBSD.
However, as far as I know, the NetBSD implementation does not suffer from the
same bugs and is more reliable than guessing from argv[0].
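A rough sketch of the approach, assuming NetBSD's KERN_PROC_PATHNAME sysctl; the exact MIB and error handling in the real patch may differ:
```
#include <sys/param.h>
#include <sys/sysctl.h>
#include <string>

// Ask the kernel for the path of the current executable instead of
// guessing from argv[0].
std::string getMainExecutableNetBSD() {
  int Mib[4] = {CTL_KERN, KERN_PROC_ARGS, -1, KERN_PROC_PATHNAME};
  char ExePath[MAXPATHLEN];
  size_t Len = sizeof(ExePath);
  if (sysctl(Mib, 4, ExePath, &Len, nullptr, 0) == 0)
    return std::string(ExePath);
  return std::string();  // fall back to other strategies on failure
}
```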
Craig Topper [Sun, 3 Mar 2019 00:18:07 +0000 (00:18 +0000)]
[X86] Prefer VPBLENDD for v2i64/v4i64 blends with AVX2.
We were using VPBLENDW for v2i64 and VBLENDPD for v4i64. VPBLENDD has better throughput than VPBLENDW on some CPUs so it makes sense to use it when possible. VBLENDPD will probably become VBLENDD during execution domain fixing, but we might as well use integer in isel while we can.
This should work around some issues with the domain fixing pass preferring PBLENDW when we start with PBLENDW. There may still be some v8i16 cases that could use PBLENDD.
Nico Weber [Sat, 2 Mar 2019 18:29:56 +0000 (18:29 +0000)]
gn build: Add a cfi/sources target.
This build target is currently unused, but after r355144 the sync script
started complaining about cfi.cpp not being listed, and this makes the
script happy again.
Sanjay Patel [Sat, 2 Mar 2019 16:45:10 +0000 (16:45 +0000)]
[InstCombine] move add after smin/smax
Follow-up to rL355221.
This isn't specifically called for within PR14613,
but we'll get there eventually if it's not already
requested in some other bug report.
Xing GUO [Sat, 2 Mar 2019 04:20:28 +0000 (04:20 +0000)]
[llvm-objdump] Should print unknown d_tag in hex format
Summary:
Currently, `llvm-objdump` prints "unknown" instead of printing the d_tag value in hex format,
because getDynamicTagAsString returns "unknown" rather than an empty string.
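A sketch of the intended behaviour (names assumed, not the actual diff): let the string lookup return empty for unknown tags and fall back to hex in the printer.
```
// If the tag has no textual name, print its raw value in hex, e.g. an
// OS- or processor-specific dynamic tag.
std::string Str = Obj->getDynamicTagAsString(Dyn.d_tag);
if (Str.empty())
  Str = "0x" + llvm::utohexstr(Dyn.d_tag);
outs() << Str;
```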
Thomas Lively [Sat, 2 Mar 2019 03:32:25 +0000 (03:32 +0000)]
[WebAssembly] Expand operations not supported by SIMD
Summary:
This prevents crashes in instruction selection when these operations
are used. The tests check that the scalar version of the instruction
is used where applicable, although some expansions do not use the
scalar version.
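A hedged sketch of the usual mechanism in a target's TargetLowering constructor; the opcode and type lists below are illustrative, not the exact set touched by this patch.
```
// Mark operations the SIMD ISA does not provide as Expand so the legalizer
// unrolls them to scalar code instead of crashing in instruction selection.
for (auto Op : {ISD::SDIV, ISD::UDIV, ISD::SREM, ISD::UREM, ISD::ROTL, ISD::ROTR})
  for (auto VT : {MVT::v16i8, MVT::v8i16, MVT::v4i32, MVT::v2i64})
    setOperationAction(Op, VT, Expand);
```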
Florian Hahn [Sat, 2 Mar 2019 02:31:44 +0000 (02:31 +0000)]
[SCEV] Handle case where MaxBECount is less precise than ExactBECount for OR.
In some cases, MaxBECount can be less precise than ExactBECount for AND
and OR (the AND case was PR26207). In the OR test case, both ExactBECounts are
undef, but the MaxBECounts are different, so we hit the assertion below. This
patch uses the same solution the AND case already uses.
Assertion failed:
((isa<SCEVCouldNotCompute>(ExactNotTaken) || !isa<SCEVCouldNotCompute>(MaxNotTaken))
&& "Exact is not allowed to be less precise than Max"), function ExitLimit
This patch also consolidates test cases for both AND and OR in a single
test case.
Daniel Sanders [Sat, 2 Mar 2019 00:12:57 +0000 (00:12 +0000)]
[tblgen] Track CodeInit origins when possible
Summary:
Add an SMLoc to CodeInit that records the source line it originated from.
This allows tablegen to point precisely at portions of code when reporting
errors within the CodeInit. For example, in the upcoming GlobalISel
combiner, it can report undefined expansions and point at the instance of
the expansion. This is achieved using something like:
SMLoc::getFromPointer(SMLoc::getPointer() +
(StringRef - CodeInit::getValue()))
The location is lost when producing a CodeInit by string concatenation, so
a fallback SMLoc is required (e.g. Record::getLoc()), but that's pretty
rare for CodeInits.
There's a reasonable case for extending tracking to a couple of other Init
objects; for example, StringInits are often parsed, and it would be good to
point inside the string when reporting errors about them. However, location
tracking also harms de-duplication. This is fine for CodeInit where there's
only a few hundred of them (~160 for X86) and it may be worth it for
StringInit (~86k up to ~1.9M for roughly 15MB increase for X86).
However, origin tracking would be a _terrible_ idea for IntInit, BitInit,
and UnsetInit. I haven't measured any of those three, but BitInit would most
likely grow from the current 2 BitInit values to something on the order of
billions.
Caroline Tice [Fri, 1 Mar 2019 23:51:54 +0000 (23:51 +0000)]
llvm-dwarfdump: Add new variable, parameter and inlining statistics; also function source location statistics.
Add statistics for abstract origins, function, variable and parameter
locations; break the 'variable' counts down into variables and
parameters. Also update call site counting to check for
DW_AT_call_{file,line} in addition to DW_TAG_call_site.
Craig Topper [Fri, 1 Mar 2019 21:02:40 +0000 (21:02 +0000)]
[X86] Remove IntrArgMemOnly from target specific gather/scatter intrinsics
IntrArgMemOnly implies that only memory pointed to by pointer typed arguments will be accessed. But these intrinsics allow you to pass null to the pointer argument and put the full address into the index argument. Other passes won't be able to understand this.
A colleague found that ISPC was creating gathers like this and then dead store elimination removed some stores because it didn't understand what the gather was doing since the pointer argument was null.
Paul Robinson [Fri, 1 Mar 2019 20:58:04 +0000 (20:58 +0000)]
[DWARF] Make -g with empty assembler source work better.
This was sometimes causing clang or llvm-mc to crash, and in other
cases could emit a bogus DWARF line-table header. I did an interim
patch in r352541; this patch should be a cleaner and more complete
fix, and retains the test.
Craig Topper [Fri, 1 Mar 2019 20:18:38 +0000 (20:18 +0000)]
[TableGen][SelectionDAG][X86] Add specific isel matchers for immAllZerosV/immAllOnesV. Remove bitcasts from X86 patterns that are no longer necessary.
Previously we had build_vector PatFrags that called ISD::isBuildVectorAllZeros/Ones. Internally, ISD::isBuildVectorAllZeros/Ones look through bitcasts, but we aren't able to take advantage of that in isel. Instead, we have to canonicalize the types of the all-zeros/ones build_vectors and insert bitcasts. Then we have to pattern match those exact bitcasts.
By emitting specific matchers for these 2 nodes, we can make isel look through any bitcasts without needing to explicitly match them. We should also be able to remove the canonicalization to vXi32 from lowering, but I've left that for a follow up.
This removes something like 40,000 bytes from the X86 isel table.
Nikita Popov [Fri, 1 Mar 2019 20:07:04 +0000 (20:07 +0000)]
[ValueTracking] Known bits support for unsigned saturating add/sub
We have two sources of known bits:
1. For adds leading ones of either operand are preserved. For sub
leading zeros of LHS and leading ones of RHS become leading zeros in
the result.
2. The saturating math is a select between add/sub and an all-ones/
zero value. As such, we can carry out the add/sub known bits
calculation and only preserve the known one/zero bits, respectively
(see the sketch below).
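A minimal sketch of that second observation, with `computeAddSubKnownBits` as an assumed placeholder for the ordinary add/sub known-bits computation:
```
// uadd.sat selects between (a + b) and all-ones, so only known-one bits of
// the add survive; usub.sat selects between (a - b) and zero, so only
// known-zero bits of the sub survive.
KnownBits Tmp = computeAddSubKnownBits(IsAdd, LHSKnown, RHSKnown); // assumed helper
if (IsAdd)
  Tmp.Zero.clearAllBits();  // saturating to 111...1 invalidates known zeros
else
  Tmp.One.clearAllBits();   // saturating to 000...0 invalidates known ones
Known = Tmp;
```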
Sanjay Patel [Fri, 1 Mar 2019 19:42:40 +0000 (19:42 +0000)]
[InstCombine] move add after umin/umax
In the motivating cases from PR14613:
https://bugs.llvm.org/show_bug.cgi?id=14613
...moving the add enables us to narrow the
min/max, which eliminates zext/trunc, which
enables significantly better vectorization.
But that bug is still not completely fixed.
Philip Reames [Fri, 1 Mar 2019 18:45:05 +0000 (18:45 +0000)]
[LICM] Infer proper alignment from loads during scalar promotion
This patch fixes an issue where we would compute an unnecessarily small alignment during scalar promotion when no store is guaranteed to execute, but we've proven load speculation safety. Since speculating a load requires proving the existing alignment is valid at the new location (see Loads.cpp), we can use the alignment fact from the load.
For non-atomics, this is a performance problem. For atomics, this is a correctness issue, though an *incredibly* rare one to see in practice. For atomics, we might not be able to lower an improperly aligned load or store (e.g. an i32 with align 1). If such an instruction makes it all the way to codegen, we *may* fail to codegen the operation, or we may simply generate a slow call to a library function. The part that makes this super hard to see in practice is that the memory location actually *is* well aligned, and instcombine knows that. So, to see a failure, you have to a) hit the bug in LICM, b) somehow hit a depth limit in InstCombine/ValueTracking to avoid fixing the alignment, and c) then have generated an instruction which fails codegen rather than simply emitting a slow libcall. All around, pretty hard to hit.
Philip Reames [Fri, 1 Mar 2019 18:00:07 +0000 (18:00 +0000)]
[InstCombine] Extend "idempotent" atomicrmw optimizations to floating point
An idempotent atomicrmw is one that does not change memory in the process of execution. We have already added handling for the various integer operations; this patch extends the same handling to floating point operations which were recently added to IR.
Note: At the moment, we canonicalize idempotent fsub to fadd when ordering requirements prevent us from using a load. As discussed in the review, I will be replacing this with canonicalizing both floating point ops to integer ops in the near future.
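A hedged sketch of what "idempotent" means for the FP operations; the surrounding code is illustrative, only the constants matter:
```
// An atomicrmw is idempotent if executing it leaves memory unchanged:
//   fadd with -0.0:  x + -0.0 == x   (for every x, including -0.0)
//   fsub with +0.0:  x - +0.0 == x
bool IsIdempotent = false;
if (auto *CF = dyn_cast<ConstantFP>(RMWI->getValOperand())) {
  switch (RMWI->getOperation()) {
  case AtomicRMWInst::FAdd:
    IsIdempotent = CF->isZero() && CF->isNegative();
    break;
  case AtomicRMWInst::FSub:
    IsIdempotent = CF->isZero() && !CF->isNegative();
    break;
  default:
    break;
  }
}
```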
Matt Davis [Fri, 1 Mar 2019 17:31:32 +0000 (17:31 +0000)]
[llvm-readobj] Display section names for STT_SECTION symbols.
Summary:
This patch obtains the section name for symbols that refer to a section. Prior to this patch, the Name field for STT_SECTION symbols was blank; now it is populated.
Before:
```
Symbol table '.symtab' contains 6 entries:
Num: Value Size Type Bind Vis Ndx Name
0: 0000000000000000 0 NOTYPE LOCAL DEFAULT UND
1: 0000000000000000 0 SECTION LOCAL DEFAULT 1
2: 0000000000000000 0 SECTION LOCAL DEFAULT 3
3: 0000000000000000 0 SECTION LOCAL DEFAULT 4
4: 0000000000000000 0 NOTYPE GLOBAL DEFAULT UND _GLOBAL_OFFSET_TABLE_
5: 0000000000000000 0 TLS GLOBAL DEFAULT UND sym
```
With this patch:
```
Symbol table '.symtab' contains 6 entries:
Num: Value Size Type Bind Vis Ndx Name
0: 0000000000000000 0 NOTYPE LOCAL DEFAULT UND
1: 0000000000000000 0 SECTION LOCAL DEFAULT 1 .text
2: 0000000000000000 0 SECTION LOCAL DEFAULT 3 .data
3: 0000000000000000 0 SECTION LOCAL DEFAULT 4 .bss
4: 0000000000000000 0 NOTYPE GLOBAL DEFAULT UND _GLOBAL_OFFSET_TABLE_
5: 0000000000000000 0 TLS GLOBAL DEFAULT UND sym
```
Jonas Hahnfeld [Fri, 1 Mar 2019 17:15:21 +0000 (17:15 +0000)]
Hide two unused debugging methods, NFCI.
GCC correctly moans that PlainCFGBuilder::isExternalDef(llvm::Value*) and
StackSafetyDataFlowAnalysis::verifyFixedPoint() are defined but not used
in Release builds. Hide them behind 'ifndef NDEBUG'.
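The shape of the fix, sketched (the function body is elided here):
```
#ifndef NDEBUG
// Debug-only helper: defining it only when assertions are enabled avoids
// GCC's "defined but not used" warning in Release builds.
bool PlainCFGBuilder::isExternalDef(Value *Val) {
  // ... original body unchanged; placeholder return only to keep the
  // sketch well-formed ...
  return false;
}
#endif // NDEBUG
```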
Oliver Stannard [Fri, 1 Mar 2019 14:20:28 +0000 (14:20 +0000)]
[ARM] Fix FP16 stack loads/stores for Thumb2 with frame pointer
The new addressing mode added for the v8.2A FP16 instructions uses bit 8 of the
immediate to encode the sign of the offset, like the other FP loads/stores, so
it needs to be treated the same way.
Oliver Stannard [Fri, 1 Mar 2019 13:58:25 +0000 (13:58 +0000)]
[ARM] Consider undefined-on-NaN conditions in checkVSELConstraints
This function was not checking for the condition code variants which are
undefined if either input is NaN, so we were missing selection of the VSEL
instruction in some cases when using -fno-honor-nans or -ffast-math.
George Rimar [Fri, 1 Mar 2019 10:18:16 +0000 (10:18 +0000)]
[yaml2obj] - Allow setting custom sh_info for RawContentSection sections.
This is for tweaking SHT_SYMTAB sections.
Their sh_info field usually contains the number of symbols + 1.
But for creating invalid inputs for test cases, it is convenient
to allow explicitly overriding this field from YAML.
Diana Picus [Fri, 1 Mar 2019 10:01:22 +0000 (10:01 +0000)]
[ARM GlobalISel] Check target flags in test. NFCI
There was a time when we couldn't dump target-specific flags such as
arm-sbrel etc, so the tests didn't check for them. We can now be more
specific in our tests.
Igor Kudrin [Fri, 1 Mar 2019 09:22:42 +0000 (09:22 +0000)]
[CommandLine] Allow grouping options which can have values.
This patch allows all forms of values for options to be used at the end
of a group. With the fix, it is possible to more closely follow the way GNU
binutils tools handle grouping options. For example, the -j option can be
used with objdump in any of the following ways:
Igor Kudrin [Fri, 1 Mar 2019 09:20:56 +0000 (09:20 +0000)]
[CommandLine] Do not crash if an option has both ValueRequired and Grouping.
If an option that requires a value has a `cl::Grouping` formatting
modifier, it works well as long as it is used at the end of a group,
or as a separate argument. However, if the option accidentally appears
in the middle of a group, the program just crashes. This patch prints
an error message instead.
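For illustration, a declaration of the kind of option involved; this exact option is made up, modelled loosely on objdump's -j:
```
#include "llvm/Support/CommandLine.h"
using namespace llvm;

// An option that both requires a value and participates in single-dash
// grouping. Before this fix, misuse in the middle of a group could crash;
// now it reports an error.
static cl::opt<std::string>
    Section("j", cl::desc("Operate on the named section"),
            cl::value_desc("section"), cl::ValueRequired, cl::Grouping);
```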