Matt Arsenault [Thu, 7 Feb 2019 19:37:44 +0000 (19:37 +0000)]
GlobalISel: Implement narrowScalar for shift main type
This is pretty much directly ported from SelectionDAG. Doesn't include
the shift by non-constant but known bits version, since there isn't a
globalisel version of computeKnownBits yet.
This shows a disadvantage of targets not specifying which type
should be used for the shift amount. If type 0 is legalized before
type 1, the operations on the shift amount type use the wider type
(which are also less likely to legalize). This can be avoided by
targets specifying legalization actions on type 1 earlier than for
type 0.
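Below is a minimal sketch of the kind of rule ordering suggested above; the target, opcode list, and type sizes are illustrative and not taken from any in-tree target.
```
// Hypothetical target legalizer; sizes and opcodes are illustrative only.
#include "llvm/CodeGen/GlobalISel/LegalizerInfo.h"
#include "llvm/CodeGen/TargetOpcodes.h"
using namespace llvm;

struct MyTargetLegalizerInfo : public LegalizerInfo {
  MyTargetLegalizerInfo() {
    const LLT S32 = LLT::scalar(32);
    const LLT S64 = LLT::scalar(64);
    getActionDefinitionsBuilder(
        {TargetOpcode::G_SHL, TargetOpcode::G_LSHR, TargetOpcode::G_ASHR})
        // Pin the shift amount (type index 1) first so that later narrowing
        // of the shifted value (type index 0) does not widen the amount type.
        .clampScalar(1, S32, S32)
        .clampScalar(0, S32, S64);
    computeTables();
  }
};
```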
Teresa Johnson [Thu, 7 Feb 2019 17:50:35 +0000 (17:50 +0000)]
[HotColdSplit] With PGO add profile entry metadata to split cold function
Summary:
When compiling with profile data, ensure the split cold function gets
cold function_entry_count metadata (just use 0 since it should be cold).
Otherwise with function sections it will not be placed in the unlikely
text section with other cold code.
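A minimal sketch of the idea (not the actual HotColdSplit code); the helper name and the profile-detection flag are assumptions.
```
#include "llvm/IR/Function.h"

// Illustrative only: give the extracted cold function a zero entry count so
// that, with function sections and PGO, it lands in the unlikely text section.
void markSplitFunctionCold(llvm::Function &ColdF, bool HasProfileData) {
  if (HasProfileData)       // stands in for however the pass detects PGO
    ColdF.setEntryCount(0); // zero count => treated as cold
}
```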
Sanjay Patel [Thu, 7 Feb 2019 17:43:34 +0000 (17:43 +0000)]
[DAGCombiner] fold add/sub with bool operand based on target's boolean contents
I noticed that we are missing this canonicalization in IR:
rL352515
...and then realized that we don't get this right in SDAG either,
so this has to be fixed first regardless of what we choose to do in IR.
The existing fold was limited to scalars and used the wrong predicate
to guard the transform. We have a boolean contents TLI query that can
be used to decide which direction to fold.
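A rough illustration of the TLI query mentioned above (a hypothetical helper, not the exact DAGCombiner code):
```
#include "llvm/CodeGen/TargetLowering.h"

// Illustrative: pick the fold direction based on how the target represents
// a "true" boolean value, rather than assuming 0/1 booleans.
static bool boolIsZeroOrOne(const llvm::TargetLowering &TLI, llvm::EVT VT) {
  return TLI.getBooleanContents(VT) ==
         llvm::TargetLowering::ZeroOrOneBooleanContent;
}
```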
This may eventually lead back to the problems/question in:
https://bugs.llvm.org/show_bug.cgi?id=40486
...but it makes no difference to that yet.
Sanjay Patel [Thu, 7 Feb 2019 17:10:49 +0000 (17:10 +0000)]
[x86] split more 256/512-bit shuffles in lowering
This is intentionally a small step because it's hard to know exactly
where we might introduce a conflicting transform with the code that
tries to form wider shuffles. But I think this is safe - if we have
a wide shuffle with 2 operands, then we should do better with an
extract + narrow shuffle.
[llvm-ar][libObject] Fix relative paths when nesting thin archives.
Summary:
When adding one thin archive to another, we currently chop off the relative path to the flattened members. For instance, when adding `foo/child.a` (which contains `x.txt`) to `parent.a` and flattening it, we should add the member as `foo/x.txt` (which exists) instead of `x.txt` (which does not exist).
As a note, this also undoes the `IsNew` parameter of handling relative paths in r288280. The unit test there still passes.
This was reported as part of testing the kernel build with llvm-ar: https://patchwork.kernel.org/patch/10767545/ (see the second point).
Alexandre Ganea [Thu, 7 Feb 2019 15:24:18 +0000 (15:24 +0000)]
[CodeView] Fix cycles in debug info when merging Types with global hashes
When type streams with forward references were merged using GHashes, cycles
were introduced in the debug info. This was caused by
GlobalTypeTableBuilder::insertRecordAs() not inserting the record on the second
pass, thus yielding an empty ArrayRef at that record slot. Later on, upon PDB
emission, TpiStreamBuilder::commit() would skip that empty record, thus
offsetting all indices that came after in the stream.
This solution comes in two steps:
1. Fix the hash calculation, by doing a multiple-step resolution, iff there are
forward references in the input stream.
2. Fix merge by resolving with multiple passes, therefore moving records with
forward references to the end of the stream.
This patch also adds support for llvm-readobj --codeview-ghash.
Finally, fix dumpCodeViewMergedTypes() which previously could reference deleted
memory.
Sam Parker [Thu, 7 Feb 2019 13:32:54 +0000 (13:32 +0000)]
[LSR] Generate cross iteration indexes
Modify GenerateConstantOffsetsImpl to create offsets that can be used
by indexed addressing modes. If formulae can be generated which
result in the constant offset being the same size as the recurrence,
we can generate a pre-indexed access. This allows the pointer to be
updated via the single pre-indexed access so that (hopefully) no
add/subs are required to update it for the next iteration. For small
cores, this can significantly improve the performance of DSP-like loops.
Jiong Wang [Thu, 7 Feb 2019 10:43:09 +0000 (10:43 +0000)]
[BPF] add code-gen support for JMP32 instructions
JMP32 instructions have been added to the eBPF ISA. They are 32-bit variants of
existing BPF conditional jump instructions, but the comparison happens on
low 32-bit sub-register only, therefore some unnecessary extensions could
be saved.
JMP32 instructions will only be available for -mcpu=v3. Host probe hook has
been updated accordingly.
JMP32 instructions will only be enabled in code-gen when -mattr=+alu32 is
enabled, meaning the program is compiled in sub-register mode.
For JMP32 encoding, it is a new instruction class, and is using the
reserved eBPF class number 0x6.
This patch has been tested by compiling and running kernel bpf selftests
with JMP32 enabled.
Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Craig Topper [Thu, 7 Feb 2019 06:21:28 +0000 (06:21 +0000)]
[BranchFolding] Remove dead code for handling EHPad blocks
Summary: This code tries to handle the case where IBB is an EHPad, but there's an earlier check that uses PBB->hasEHPadSuccessor(), where PBB is a predecessor of IBB. The hasEHPadSuccessor function would have visited IBB, seen that it was an EHPad, and returned false. This would prevent us from reaching this code with IBB as an EHPad.
Looks like this code was originally added in rL37427 (ancient) and made dead in rL143001.
Shoaib Meenai [Thu, 7 Feb 2019 01:12:56 +0000 (01:12 +0000)]
[cmake] Drop clang-tools-extra from LLVM_ALL_PROJECTS
We iterate over the list and only enable projects from that list that
are present in LLVM_ENABLE_PROJECTS and disable all other projects. Most
users will only specify clang in LLVM_ENABLE_PROJECTS and expect
clang-tools-extra to be implicitly enabled, so remove clang-tools-extra
from LLVM_ALL_PROJECTS so that it doesn't get disabled instead.
Shoaib Meenai [Wed, 6 Feb 2019 21:49:47 +0000 (21:49 +0000)]
[cmake] Add all subprojects to LLVM_ALL_PROJECTS
Make LLVM_ALL_PROJECTS reflect all top-level directories in the monorepo
rather than an arbitrary subset. clang-tools-extra is technically
unnecessary since it gets enabled by clang, but having it there for
consistency shouldn't hurt either.
Alina Sbirlea [Wed, 6 Feb 2019 20:25:17 +0000 (20:25 +0000)]
[LICM/MSSA] Add promotion to scalars by building an AliasSetTracker with MemorySSA.
Summary:
Experimentally we found that promotion to scalars carries less benefit
than sinking and hoisting in LICM. When using MemorySSA, we build an
AliasSetTracker on demand in order to reuse the current infrastructure.
We only build it if fewer than AccessCapForMSSAPromotion accesses exist in the
loop, a cap that is by default set to 250. This value ensures there are
no runtime regressions, and there are small compile time gains for
pathological cases. A much lower value (20) was found to yield a single
regression in the llvm-test-suite and much higher benefits for compile
times. Conservatively we set the current cap to a high value, but we will
explore lowering it when MemorySSA is enabled by default.
Alina Sbirlea [Wed, 6 Feb 2019 19:55:12 +0000 (19:55 +0000)]
[AliasSetTracker] Pass MustAlias to addPointer more often.
Summary:
Pass the alias info to addPointer when available. Will save an alias()
call for must sets when adding a known Must or May alias.
[Part of a series of cleanup patches]
Nirav Dave [Wed, 6 Feb 2019 19:45:47 +0000 (19:45 +0000)]
[X86][DAG] Avoid creating dangling bitcast.
combineExtractWithShuffle may leave a dangling bitcast which may
prevent further optimization in later passes. Avoid constructing it
unless it is used.
Jonas Paulsson [Wed, 6 Feb 2019 19:23:31 +0000 (19:23 +0000)]
[SystemZ] Improved handling of the @llvm.ctlz intrinsic.
Since SystemZ supports counting of leading zeros with the FLOGR instruction,
isCheapToSpeculateCtlz() should return true, which it now does.
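A sketch of what that hook change looks like (the signature follows the TargetLowering interface of the time; treat it as illustrative rather than a copy of the patch):
```
// Illustrative: with FLOGR available, counting leading zeros is cheap enough
// to speculate, so the hook can simply return true.
bool SystemZTargetLowering::isCheapToSpeculateCtlz() const {
  return true;
}
```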
ISD::CTLZ_ZERO_UNDEF i32 is now handled the same way as ISD::CTLZ is, which
is needed since promotion to i64 is required and CTLZ_ZERO_UNDEF is only
expanded to CTLZ if it is Legal or Custom.
As far as I can tell, malloc.h is only being used here to provide
a definition of mallinfo (malloc itself is declared in stdlib.h via
cstdlib). We already have a macro for whether mallinfo is available,
so switch to using that instead.
Jonas Paulsson [Wed, 6 Feb 2019 18:59:19 +0000 (18:59 +0000)]
[SystemZ] Wait with VGBM selection until after DAGCombine2.
Don't lower BUILD_VECTORs to BYTE_MASK, but instead expose the BUILD_VECTORs
to the DAGCombiner and select them to VGBM in Select(). This allows the
DAGCombiner to understand the constant vector values.
For floating point, only all-zeros vectors are now generated with VGBM, as it
turned out to be somewhat complicated to handle any arbitrary constants,
while in practice this is very rare and hardly needed.
The SystemZ ISD opcodes z_byte_mask, z_vzero and z_vones have been removed.
Florian Hahn [Wed, 6 Feb 2019 18:43:37 +0000 (18:43 +0000)]
[opt-viewer] Add --filter option to select remarks for displaying.
This allows limiting the displayed remarks to the ones with names
matching the filter (regular) expression.
Generating HTML pages for a larger project with optimization remarks can
result in huge HTML documents, and using --filter makes it possible to
focus on a set of interesting remarks.
[GlobalISel][NFC] Gardening: Factor out code for simple unary intrinsics
There was a lot of repeated code wrt unary math intrinsics in
translateKnownIntrinsic. This factors out the repeated MIRBuilder code into
two functions: translateSimpleUnaryIntrinsic and getSimpleUnaryIntrinsicOpcode.
This simplifies adding simple unary intrinsics, since after this, all you have
to do is add the mapping to SimpleUnaryIntrinsicOpcodes.
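A hedged sketch of the shape of that factoring; the intrinsic-to-opcode entries below are illustrative, not a copy of SimpleUnaryIntrinsicOpcodes.
```
#include "llvm/CodeGen/TargetOpcodes.h"
#include "llvm/IR/Intrinsics.h"

// Illustrative mapping from a unary math intrinsic to its generic opcode;
// the real table covers many more intrinsics.
static unsigned getSimpleUnaryIntrinsicOpcodeSketch(llvm::Intrinsic::ID ID) {
  switch (ID) {
  case llvm::Intrinsic::fabs: return llvm::TargetOpcode::G_FABS;
  case llvm::Intrinsic::sqrt: return llvm::TargetOpcode::G_FSQRT;
  default:                    return 0; // sentinel: not a "simple" unary intrinsic
  }
}
```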
James Henderson [Wed, 6 Feb 2019 17:16:33 +0000 (17:16 +0000)]
[yaml2obj]Allow number for ELF symbol type
yaml2obj previously only recognised standard STT_* names, and didn't
allow arbitrary numbers. This change allows the user to specify a number
for the type instead. It also adds a test to verify the existing
behaviour of obj2yaml for unknown symbol types.
Sanjay Patel [Wed, 6 Feb 2019 16:43:54 +0000 (16:43 +0000)]
[InstCombine] X | C == C --> (X & ~C) == 0
The two forms are equivalent: X | C == C holds exactly when X has no
bits set outside of C, i.e. when (X & ~C) == 0. We should canonicalize
to one of these forms, and compare-with-zero could be more conducive
to follow-on transforms. This also leads to generally better codegen,
as shown in PR40611:
https://bugs.llvm.org/show_bug.cgi?id=40611
Tim Northover [Wed, 6 Feb 2019 15:26:35 +0000 (15:26 +0000)]
AArch64: enforce even/odd register pairs for CASP instructions.
ARMv8.1a CASP instructions need the first of the pair to be an even register
(otherwise the encoding is unallocated). We enforced this during assembly, but
not during CodeGen until now.
Ulrich Weigand [Wed, 6 Feb 2019 15:10:13 +0000 (15:10 +0000)]
[SystemZ] Do not return INT_MIN from strcmp/memcmp
The IPM sequence currently generated to compute the strcmp/memcmp
result will return INT_MIN for the "less than zero" case. While
this is in compliance with the standard, strictly speaking, it
turns out that common applications cannot handle this, e.g. because
they negate a comparison result in order to implement reverse
compares.
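A small illustration (not from the patch) of why INT_MIN breaks the common reverse-compare idiom:
```
#include <cstring>

// If strcmp/memcmp could return INT_MIN for "less than", negating the result
// overflows (undefined behaviour) and the sign of the reversed comparison is
// lost. With a small negative value such as -2, this idiom works as expected.
int reverseCompare(const char *A, const char *B) {
  return -std::strcmp(A, B);
}
```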
This patch changes the code to use a different sequence that will result
in -2 for the "less than zero" case (same as GCC). However, this
requires that the two source operands of the compare instructions
are inverted, which breaks the optimization in removeIPMBasedCompare.
Therefore, I've removed this (and all of optimizeCompareInstr), and
replaced it with a mostly equivalent optimization in combineCCMask
at the DAGCombine level.
Tim Northover [Wed, 6 Feb 2019 15:07:59 +0000 (15:07 +0000)]
AArch64: annotate atomics with dropped acquire semantics when printing.
A quirk of the v8.1a spec is that when the writeback register for an atomic
read-modify-write instruction is wzr/xzr, the instruction no longer enforces
acquire ordering. However, it's still written with the misleading 'a' mnemonic.
So this adds an annotation when disassembling such instructions, mentioning the
change.
Sanjay Patel [Wed, 6 Feb 2019 14:59:39 +0000 (14:59 +0000)]
[x86] vectorize cast ops in lowering to avoid register file transfers
The proposal in D56796 may cross the line because we're trying to avoid vectorization
transforms in generic DAG combining. So this is an alternate, later, x86-specific
translation of that patch.
There are several potential follow-ups to enhance this:
1. Allow extraction from non-zero element index.
2. Peek through extends of smaller width integers.
3. Support x86-specific conversion opcodes like X86ISD::CVTSI2P
When a resource unit R is released, the ResourceManager notifies groups that
contain R. Before this patch, the logic in method ResourceManager::release()
implemented a potentially slow iterative search of dependent groups on the
entire set of processor resources.
This patch replaces that logic with a simpler (and often faster) lookup on array
`Resource2Groups`. This patch gives an average speedup of ~3-4% (observed on a
release build when testing for target btver2).
No functional change intended.
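A generic sketch of that lookup (the name Resource2Groups mirrors the commit, but the data layout and the notification hook are assumptions, not the actual llvm-mca code):
```
#include <cstdint>
#include <vector>

// Illustrative: Resource2Groups[RIdx] holds a bitmask of the groups that
// contain resource RIdx, so releasing a unit only visits those groups
// instead of scanning every processor resource.
struct ResourceManagerSketch {
  std::vector<uint64_t> Resource2Groups; // precomputed once at construction

  void onResourceReleased(unsigned RIdx) {
    uint64_t Groups = Resource2Groups[RIdx];
    while (Groups) {
      uint64_t GroupBit = Groups & -Groups; // extract one dependent group
      Groups ^= GroupBit;
      notifyGroup(GroupBit);                // hypothetical notification hook
    }
  }

  void notifyGroup(uint64_t GroupMask);     // declaration only in this sketch
};
```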
James Henderson [Wed, 6 Feb 2019 10:13:14 +0000 (10:13 +0000)]
[DebugInfo][llvm-symbolizer]Add some tests for edge cases when symbolizing
This patch adds half a dozen new tests that test various edge cases in
the behaviour of the symbolizer and DWARF data parsing. All of them test
the current behaviour.
Summary:
llvm-exegesis uses this functionality to read its benchmark dumps.
This reading of `.yaml`s takes ~60% of runtime for 14656 benchmark points (i.e. one sweep over all x86 instructions),
but only 30% of the time for 3x as many benchmark points.
In particular, this `BinaryRef` appears to be an obvious pain point.
Without patch:
```
$ perf stat -r 25 ./bin/llvm-exegesis -mode=analysis -analysis-epsilon=1.0 -benchmarks-file=/tmp/benchmarks-inverse_throughput-onefull.yaml -analysis-clusters-output-file="" -analysis-inconsistencies-output-file=/tmp/clusters-orig.html
no exegesis target for x86_64-unknown-linux-gnu, using default
Parsed 14656 benchmark points
Printing sched class consistency analysis results to file '/tmp/clusters-orig.html'
...
no exegesis target for x86_64-unknown-linux-gnu, using default
Parsed 14656 benchmark points
Printing sched class consistency analysis results to file '/tmp/clusters-orig.html'
0.97850 +- 0.00647 seconds time elapsed ( +- 0.66% )
```
With the patch:
```
$ perf stat -r 25 ./bin/llvm-exegesis -mode=analysis -analysis-epsilon=1.0 -benchmarks-file=/tmp/benchmarks-inverse_throughput-onefull.yaml -analysis-clusters-output-file="" -analysis-inconsistencies-output-file=/tmp/clusters-new.html
no exegesis target for x86_64-unknown-linux-gnu, using default
Parsed 14656 benchmark points
Printing sched class consistency analysis results to file '/tmp/clusters-new.html'
...
no exegesis target for x86_64-unknown-linux-gnu, using default
Parsed 14656 benchmark points
Printing sched class consistency analysis results to file '/tmp/clusters-new.html'
0.90593 +- 0.00105 seconds time elapsed ( +- 0.12% )
```
So this decreases the overall run time of llvm-exegesis analysis mode (on one sweep) by roughly 7%.
To be noted, the `BinaryRef::writeAsBinary()` change is the reason for the perf improvement;
the usage of `llvm::isHexDigit()` instead of `isxdigit()` does not appear to have any perf impact,
and I have only changed it "for symmetry".
The `writeAsBinary()` change is correct: it produces an identical de-hex-ified buffer, and the final output is thus identical:
```
$ sha512sum /tmp/clusters-*
db4bbd904fe8840853b589b032c5041bc060b91bcd9c27b914b56581fbc473550eea74b852238c79963b5adf2419f379e9f5db76784048b48e3937f9f3e732bf /tmp/clusters-new.html
db4bbd904fe8840853b589b032c5041bc060b91bcd9c27b914b56581fbc473550eea74b852238c79963b5adf2419f379e9f5db76784048b48e3937f9f3e732bf /tmp/clusters-orig.html
```
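As a rough illustration of the kind of `writeAsBinary()` change described above (a sketch, not the actual BinaryRef implementation): decode each pair of hex digits directly instead of going through temporary substrings.
```
#include <cassert>
#include <cstddef>
#include <cstdint>
#include "llvm/ADT/StringExtras.h"
#include "llvm/ADT/StringRef.h"
#include "llvm/Support/raw_ostream.h"

// Illustrative hex-to-binary writer: one byte per pair of hex digits,
// no temporary strings.
void writeHexPairsAsBinary(llvm::StringRef Hex, llvm::raw_ostream &OS) {
  assert(Hex.size() % 2 == 0 && "expected an even number of hex digits");
  for (size_t I = 0, E = Hex.size(); I != E; I += 2) {
    uint8_t Byte = uint8_t((llvm::hexDigitValue(Hex[I]) << 4) |
                           llvm::hexDigitValue(Hex[I + 1]));
    OS << char(Byte);
  }
}
```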
Teresa Johnson [Wed, 6 Feb 2019 04:29:39 +0000 (04:29 +0000)]
[HotColdSplit] Move splitting after instrumented PGO use
Summary:
Follow-up to D57082, which moved splitting earlier in the pipeline in
order to perform it before inlining. However, it was moved too early,
before the IR is annotated with instrumented PGO data. This caused the
splitting to incorrectly determine cold functions.
Move it to just after PGO annotation (still before inlining), in both
pass managers.
Petr Hosek [Wed, 6 Feb 2019 03:51:00 +0000 (03:51 +0000)]
[CMake] Unify scripts for generating VCS headers
Previously, there were two different scripts for generating VCS headers:
one used by LLVM and one used by Clang and lldb. They were both similar,
but different. They were both broken in their own ways, for example the
one used by Clang didn't properly handle the monorepo, resulting in
incorrect version information being reported by Clang.
This change unifies the two scripts by introducing a new script that's
used by LLVM, Clang and lldb, ensures that the new script
supports both monorepo and standalone SVN and Git setups, and removes
the old scripts.
Richard Trieu [Wed, 6 Feb 2019 02:52:52 +0000 (02:52 +0000)]
Move DomTreeUpdater from IR to Analysis
DomTreeUpdater depends on headers from Analysis, but is in IR. This is a
layering violation since Analysis depends on IR. Relocate this code from IR
to Analysis to fix the layering violation.
Alina Sbirlea [Tue, 5 Feb 2019 23:52:08 +0000 (23:52 +0000)]
[BasicAA] Cache nonEscapingLocalObjects for alias() calls.
Summary:
Use a small cache for Values tested by nonEscapingLocalObject().
Since the calls to PointerMayBeCaptured are fairly expensive, this saves
a good amount of compile time for anything relying heavily on
BasicAA.alias() calls.
This uses the same approach as the AliasCache, i.e. the cache is reset
after each alias() call. The cache is not used or updated by modRefInfo
calls since it's harder to know when to reset the cache.
Testcases that show improvements with this patch are too large to
include. Example compile time improvement: 7s to 6s.
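A minimal sketch of the caching idea (the names and data structure are assumptions; the real patch hooks into BasicAA's existing per-query state):
```
#include "llvm/ADT/DenseMap.h"
#include "llvm/IR/Value.h"

// Illustrative per-query cache: memoize the expensive escape check and clear
// the map after each alias() query, mirroring how the AliasCache is handled.
struct EscapeCacheSketch {
  llvm::SmallDenseMap<const llvm::Value *, bool, 8> IsNonEscapingCache;

  bool isNonEscapingLocalObject(const llvm::Value *V) {
    auto It = IsNonEscapingCache.find(V);
    if (It != IsNonEscapingCache.end())
      return It->second;
    bool Result = computeNonEscaping(V); // stands in for PointerMayBeCaptured
    IsNonEscapingCache[V] = Result;
    return Result;
  }

  void clearAfterAliasQuery() { IsNonEscapingCache.clear(); }
  bool computeNonEscaping(const llvm::Value *V); // declaration only
};
```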
Nico Weber [Tue, 5 Feb 2019 23:48:13 +0000 (23:48 +0000)]
gn build: Fix clang-tidy build
Not depending on //clang/lib/StaticAnalyzer/Core and
//clang/lib/StaticAnalyzer/Frontend causes a linker error even if
ClangSACheckers are not supported.
Undefined symbols for architecture x86_64:
"clang::ento::CreateAnalysisConsumer(clang::CompilerInstance&)", referenced from:
clang::tidy::ClangTidyASTConsumerFactory::CreateASTConsumer(
clang::CompilerInstance&, llvm::StringRef)
in libclangTidy.a(libclangTidy.ClangTidy.o)
David Blaikie [Tue, 5 Feb 2019 23:38:55 +0000 (23:38 +0000)]
Orc: Simplify RPC naming system by using function-local statics
The existing scheme of class template static members for Name and
NameMutex is a bit verbose, involves global ctors (even if they're cheap
for string and mutex, still not entirely free), and (importantly/my
immediate motivation here) trips over a bug in LLVM's modules
implementation that's a bit involved (hmm, sounds like Mr. Smith has a
fix for the modules thing - but I'm still inclined to commit this patch
as general goodness).
Lang Hames [Tue, 5 Feb 2019 23:17:11 +0000 (23:17 +0000)]
[ADT] Add a fallible_iterator wrapper.
A fallible iterator is one whose increment or decrement operations may fail.
This would usually be supported by replacing the ++ and -- operators with
methods that return Error.
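A minimal sketch of that style (class and member names are illustrative):
```
#include "llvm/Support/Error.h"

// Illustrative only: increment/decrement report failure via llvm::Error
// instead of the usual operators.
class MyFallibleIterator {
public:
  // ...
  llvm::Error inc(); // advance, may fail
  llvm::Error dec(); // retreat, may fail
  // ...
};
```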
The downside of this style is that it no longer conforms to the C++ iterator
concept, and cannot make use of standard algorithms and features such as
range-based for loops.
The fallible_iterator wrapper takes an iterator written in the style above
and adapts it to (mostly) conform with the C++ iterator concept. It does this
by providing standard ++ and -- operator implementations, returning any errors
generated via a side channel (an Error reference passed into the wrapper at
construction time), and immediately jumping the iterator to a known 'end'
value upon error. It also marks the Error as checked any time an iterator is
compared with a known end value and found to be unequal, allowing early exit
from loops without redundant error checking*.
Usage looks like:
  MyFallibleIterator I = ..., E = ...;
  Error Err = Error::success();
  for (auto &Elem : make_fallible_range(I, E, Err)) {
    // Loop body is only entered when safe.
    // Early exits from loop body permitted without checking Err.
    if (SomeCondition)
      return;
  }
  if (Err)
    // Handle error.
* Since failure causes a fallible iterator to jump to end, testing that a
fallible iterator is not an end value implicitly verifies that the error is a
success value, and so is equivalent to an error check.
Sanjay Patel [Tue, 5 Feb 2019 22:58:45 +0000 (22:58 +0000)]
[InstCombine] limit extracting shuffle transform based on uses
As discussed in D53037, this can lead to worse codegen, and we
don't generally expect the backend to be able to optimize
arbitrary shuffles. If there's only one use of the 1st shuffle,
that means it's getting removed, so that should always be
safe.
Heejin Ahn [Tue, 5 Feb 2019 22:47:29 +0000 (22:47 +0000)]
[WebAssembly] Disable a v128.const test line temporarily
r353131 caused failures in v128.const test for clang-ppc64be-linux-lnt
and clang-s390x-linux bots. This temporarily disables that line until
it is fixed.
Petar Jovanovic [Tue, 5 Feb 2019 22:23:46 +0000 (22:23 +0000)]
[elfabi] Fix the type of the variable formatted for error output
Change the format type of Dyn.SONameOffset to PRIx64 since it is a uint64_t.
The problem was detected on mips builds, where it was printing junk values
and causing test failure.
[NFC][GlobalISel]: Add a convenience method to MachineInstrBuilder to simplify getOperand(i).getReg()
https://reviews.llvm.org/D57608
It's a common pattern in GISel to have a MachineInstrBuilder from which we get various regs
(commonly MIB->getOperand(0).getReg()). This adds a helper method so the above can be
replaced with MIB.getReg(0).
Reid Kleckner [Tue, 5 Feb 2019 21:14:09 +0000 (21:14 +0000)]
[MC] Don't error on numberless .file directives on MachO
Summary:
Before r349976, MC ignored such directives when producing an object file
and asserted when re-producing textual assembly output. I turned this
assertion into a hard error in both cases in r349976, but this makes it
unnecessarily difficult to write a single assembly file that supports
both MachO and other object formats that support .file. A user reported
this as PR40578, and we decided to go back to ignoring the directive.