Greg Clayton [Wed, 26 Jun 2019 14:09:09 +0000 (14:09 +0000)]
Add GSYM utility files along with unit tests.
The full GSYM patch started with: https://reviews.llvm.org/D53379
In that patch we decided to split the work of getting GSYM into the LLVM code base into pieces so that we are not committing too much code at once.
This is the first in a series of patches in which I add only the foundation classes, along with complete unit tests. They provide the foundation for encoding and decoding a GSYM file.
File entries are defined in llvm::gsym::FileEntry. This class splits the file path into a directory and filename, represented by uniqued string table offsets. This allows all files that are referred to in a GSYM file to be encoded as 1-based indexes into a global file table in the GSYM file.
Function information is stored in llvm::gsym::FunctionInfo. This object represents a contiguous address range that has a name, with an optional line table and inline call stack information.
Line table entries are defined in llvm::gsym::LineEntry. They store only address, file, and line information, which keeps the line tables simple and allows the information to be efficiently encoded in a subsequent patch.
Inline information is defined in llvm::gsym::InlineInfo. These structs store the name of the inline function, along with one or more address ranges, and the file and line that called this function. They also contain any child inline information.
There are also utility classes for address ranges in llvm::gsym::AddressRange, and string table support in llvm::gsym::StringTable which are simple classes.
The unit tests test all the APIs on these simple classes so they will be ready for the next patches where we will create GSYM files and parse GSYM files.
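A rough sketch of the shape of these core classes (field names here are illustrative; the real definitions live in the llvm/DebugInfo/GSYM headers):

    // Illustrative only -- not the exact LLVM declarations.
    #include <cstdint>
    namespace llvm {
    namespace gsym {
    struct FileEntry {
      uint32_t Dir = 0;  // Uniqued string table offset of the directory.
      uint32_t Base = 0; // Uniqued string table offset of the filename.
    };
    struct LineEntry {
      uint64_t Addr = 0; // Start address of this line entry.
      uint32_t File = 0; // 1-based index into the global file table.
      uint32_t Line = 0; // Source line number.
    };
    } // namespace gsym
    } // namespace llvm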
Fedor Sergeev [Wed, 26 Jun 2019 13:24:24 +0000 (13:24 +0000)]
[InlineCost] cleanup calculations of Cost and Threshold
Summary:
Doing better separation of Cost and Threshold.
Cost counts the abstract complexity of live instructions, while Threshold is an upper bound on the complexity that inlining is considered worth paying.
There are two parts:
- the huge 15K last-call-to-static bonus is no longer subtracted from Cost
but rather is now added to Threshold.
That makes much more sense, as the cost of inlining (Cost) is not changed by the fact
that an internal function is called once. That fact only changes the likelihood of this
inlining being profitable (Threshold).
- the bonus for calls proved to be inlinable into the callee is no longer subtracted from Cost
but added to Threshold instead.
While the calculations are somewhat different, the overall InlineResult should stay the same, since the Cost >= Threshold comparison yields the same result.
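A worked sketch of why the verdict is unchanged (illustrative numbers; this is the algebra, not the InlineCost code):

    // With Cost = 100, Threshold = 50, Bonus = 15000:
    //   old: (Cost - Bonus) >= Threshold  ->  -14900 >= 50   -> false -> inline
    //   new:  Cost >= (Threshold + Bonus) ->  100 >= 15050   -> false -> inline
    // Moving the bonus across the comparison never changes the verdict:
    //   Cost - Bonus >= Threshold  <=>  Cost >= Threshold + Bonus
    bool sameVerdict(long Cost, long Threshold, long Bonus) {
      return ((Cost - Bonus) >= Threshold) == (Cost >= (Threshold + Bonus));
    }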
Summary:
The one thing of note here is that the 'bitwidth' constant (32/64) was previously pessimistic.
Given `x & (-1 >> (C - z))`, we were taking `C` to be `bitwidth(x)`, but in reality
we want the `(-1 >> (C - z))` pattern to mean "the low z bits must be all-ones".
And for that, `C` should be `bitwidth(-1 >> (C - z))`, i.e. the bitwidth of the shift operation itself.
The last pattern, D, does not seem to exhibit any of these truncation issues,
although it has the opposite problem: if we extract low bits (no shift) from i64
and then truncate to i32, we fail to shrink this 64-bit extraction into a 32-bit extraction.
The problem is quite simple:
If we have pattern `(x >> start) & (1 << nbits) - 1`,
and then truncate the result, that truncation will be propagated upwards,
into the `and`. And that isn't currently handled.
I'm only fixing pattern `a` here;
the same fix will be needed for patterns `b`/`c` too.
I *think* this isn't missing any extra legality checks,
since we only look past truncations. Similarly, I don't think
we can get any truncation there other than i64->i32.
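As a C-level sketch of the pattern shapes in question (the helper functions are hypothetical, for illustration only):

    #include <cstdint>
    // The pattern a/b/c shape: extract `nbits` low bits after a right shift.
    uint64_t extractBits(uint64_t x, unsigned start, unsigned nbits) {
      return (x >> start) & ((1ULL << nbits) - 1);
    }
    // Truncating the result propagates the truncation upwards into the
    // `and`, which the existing matching code did not look through:
    uint32_t extractBits32(uint64_t x, unsigned start, unsigned nbits) {
      return (uint32_t)extractBits(x, start, nbits); // i64 -> i32 truncation
    }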
As detailed in https://bugs.llvm.org/show_bug.cgi?id=42253, there were a
number of issues in the llvm-symbolizer documentation. This patch fixes
them by:
1. Adding [addresses...] to the synopsis, and matching the formatting
of other tools.
2. Rewriting the description to fix grammar issues and mention other
usage options.
3. Rewriting the examples to be easier to read.
4. Re-ordering the options into alphabetical order.
5. Improving the text of some of the option descriptions, and adding
some examples to individual options.
6. Splitting the Mach-O options into a separate section of the
document.
7. Standardizing on double dashes for long options throughout the file.
8. Adding a reference to the llvm-addr2line document.
Simon Pilgrim [Wed, 26 Jun 2019 11:21:09 +0000 (11:21 +0000)]
[X86][AVX] combineExtractSubvector - 'little to big' extract_subvector(bitcast()) support
Ideally this needs to be a generic combine in DAGCombiner::visitEXTRACT_SUBVECTOR, but there are some nasty regressions on AArch64 due to NEON shuffles not handling bitcasts at all.
[IR/DIVar] Add the flag for params that have unmodified value
Introduce the debug info flag that indicates that a parameter has unchanged
value throughout a function. This info will be used to emit the expressions
with DW_OP_entry_value.
([4/13] Introduce the debug entry values.)
Co-authored-by: Ananth Sowda <asowda@cisco.com>
Co-authored-by: Nikola Prica <nikola.prica@rt-rk.com>
Co-authored-by: Ivan Baev <ibaev@cisco.com>
Differential Revision: https://reviews.llvm.org/D58034
Mikhail Maltsev [Wed, 26 Jun 2019 10:48:40 +0000 (10:48 +0000)]
[ARM] Handle fixup_arm_pcrel_9 correctly on big-endian targets
Summary:
The getFixupKindContainerSizeBytes function returns the size of the
instruction containing a given fixup. Currently fixup_arm_pcrel_9 is
not handled in this function, which causes an assertion failure in
debug builds and incorrect codegen in release builds.
Lewis Revill [Wed, 26 Jun 2019 10:35:58 +0000 (10:35 +0000)]
[RISCV] Add pseudo instruction for calls with explicit register
This patch adds the PseudoCALLReg instruction which allows using an
explicit register operand as the destination for the return address.
GCC can successfully parse this form of the call instruction, which
would be used for calls to functions which do not use ra as the return
address register, such as the __riscv_save libcalls. This patch forms
the first part of an implementation of -msave-restore for RISC-V.
truncateVectorWithPACK is often used in conjunction with ComputeNumSignBits which struggles when peeking through bitcasts.
This fix tries to avoid bitcast(shuffle(bitcast())) patterns in the 256-bit 64-bit sublane shuffles, so we can still see through them at least until lowering, when the shuffles will need to be bitcast to widen the shuffle type.
Florian Hahn [Wed, 26 Jun 2019 09:16:57 +0000 (09:16 +0000)]
[LoopUnroll] Add support for loops with exiting headers and uncond latches.
This patch generalizes the UnrollLoop utility to support loops that exit
from the header instead of the latch. Usually LoopRotate would take care
of most of those cases, but in some cases (e.g. -Oz), LoopRotate does
not kick in.
Codesize impact looks relatively neutral on ARM64 with -Oz + LTO.
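For reference, a C-level sketch of the loop shape this now handles: the exit test lives in the header and the latch is an unconditional back edge, which is the non-rotated form a plain while loop keeps when LoopRotate does not run:

    int sumFirstN(const int *a, int n) {
      int s = 0, i = 0;
      while (i < n) { // header is the exiting block
        s += a[i];
        ++i;          // latch: unconditional branch back to the header
      }
      return s;
    }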
[Metadata] Add GNU extensions for call site DWARF symbols
As discussed on RFC
(http://lists.llvm.org/pipermail/llvm-dev/2019-February/130094.html), this
is a set of patches that introduces debug information about call sites and
call site parameters. Since LLVM already has a portion of this support (dumping
DWARF 5 symbols for calls), we generate the GNU extensions as well. All of this
will be restricted behind an option.
([1/13] Introduce the debug entry values.)
Co-authored-by: Ananth Sowda <asowda@cisco.com>
Co-authored-by: Nikola Prica <nikola.prica@rt-rk.com>
Co-authored-by: Ivan Baev <ibaev@cisco.com>
Differential Revision: https://reviews.llvm.org/D60712
QingShan Zhang [Wed, 26 Jun 2019 05:12:53 +0000 (05:12 +0000)]
Teach the DAGCombine to fold this pattern (where c1 and c2 are constants).
// fold (sext (select cond, c1, c2)) -> (select cond, sext c1, sext c2)
// fold (zext (select cond, c1, c2)) -> (select cond, zext c1, zext c2)
// fold (aext (select cond, c1, c2)) -> (select cond, sext c1, sext c2)
If the extension is an any_extend, sign extend the operands to keep their signedness, so that the other combine rules can still apply; any_extend of a constant is handled as zero extend, hence the sext in the aext fold above.
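As a scalar illustration of the fold (a sketch of the algebra, not the DAGCombiner code):

    #include <cassert>
    #include <cstdint>
    // Extending a select of constants equals selecting pre-extended constants:
    //   sext(select cond, c1, c2) == select cond, sext(c1), sext(c2)
    int64_t foldExample(bool cond) {
      int32_t c1 = -5, c2 = 7;
      int64_t extendedSelect = (int64_t)(cond ? c1 : c2);
      int64_t selectExtended = cond ? (int64_t)c1 : (int64_t)c2;
      assert(extendedSelect == selectExtended);
      return selectExtended;
    }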
This change causes some llvm-objcopy tests to fail with valgrind.
Following is the output for basic-keep.test
Command Output (stderr):
--
==107406== Conditional jump or move depends on uninitialised value(s)
==107406== at 0x1A30DD: executeObjcopy(llvm::objcopy::CopyConfig const&) (llvm-objcopy.cpp:235)
==107406== by 0x1A3935: main (llvm-objcopy.cpp:294)
Nemanja Ivanovic [Wed, 26 Jun 2019 02:46:03 +0000 (02:46 +0000)]
[NFC] Fix buildbot breaks due to r364375
For some reason, the update_llc_test_checks.py script produces checks for
empty lines, which cause failures. Corrected that to check for actual
text produced by llc.
Nemanja Ivanovic [Wed, 26 Jun 2019 02:01:11 +0000 (02:01 +0000)]
[PowerPC][NFC] Add a TOC save test case prior to posting a related patch
An upcoming patch will modify the behaviour with respect to saving the TOC
in functions with indirect calls.
Adding a test case so the patch will show the difference in codegen.
Nemanja Ivanovic [Wed, 26 Jun 2019 01:48:57 +0000 (01:48 +0000)]
[PowerPC] Mark FCOPYSIGN legal for FP vectors
This was just an omission in the back end. We have had the instructions for both
single and double precision for a few HW generations, but never got around to
legalizing these.
The weak alias should have the characteristics set to
`IMAGE_WEAK_EXTERN_SEARCH_ALIAS` to indicate that the weak external here
is a symbol alias and that the symbol is aliased to a locally defined
symbol. We were previously setting the characteristics to
`IMAGE_WEAK_EXTERN_SEARCH_LIBRARY`, which indicates that the symbol
should be looked for in the libraries.
Keno Fischer [Wed, 26 Jun 2019 00:52:42 +0000 (00:52 +0000)]
[WebAssembly] Fix list of relocations with addends in lld
Summary:
The list of relocations with addends in lld was missing `R_WASM_MEMORY_ADDR_REL_SLEB`,
causing `wasm-ld` to generate corrupted output. This fixes that problem and,
while we're at it, pulls the list of such relocations into the Wasm.h header
to avoid duplicating it in multiple places.
Erich Keane [Wed, 26 Jun 2019 00:08:22 +0000 (00:08 +0000)]
Teach TableGen Intrin Emitter to handle LLVMPointerType<llvm_any_ty>
r363233 rewrote a bunch of the Intrin Emitter code; however, the new
function to update the arg codes did not properly consider a pointer to
an 'any' type. This patch adds that logic.
Heejin Ahn [Tue, 25 Jun 2019 23:04:12 +0000 (23:04 +0000)]
[WebAssembly] Remove catch_all from AsmParser
Summary:
`catch_all` is from the first version of the EH proposal and has now been
removed. There were no tests covering this, and thus no tests to remove
or fix.
When we calculate the MII, we use two loops: one advances the iterator with
R++ to check whether we can reserve the resource, and the other uses --R to
move the iterator back to do the reservation.
This is risky, as R++ followed by --R may not end up pointing at the same
element at all, which can cause a wrong MII.
Diego Novillo [Tue, 25 Jun 2019 18:55:16 +0000 (18:55 +0000)]
Update phis in AMDGPUUnifyDivergentExitNodes
Original patch https://reviews.llvm.org/D63659 from
Steven Perron <stevenperron@google.com>
The pass AMDGPUUnifyDivergentExitNodes does not update the phi nodes in
the successors of blocks that it splits. This is fixed by calling
BasicBlock::splitBasicBlock to split the block instead of doing it
manually. This does extra work because a new conditional branch is
created in BB which is immediately replaced, but I think the simplicity
is worth it. It also helps make the code more future proof in case other
things need to be updated.
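A minimal sketch of the API being leaned on here (illustrative names; the pass's actual code differs):

    #include "llvm/IR/BasicBlock.h"
    #include "llvm/IR/Instruction.h"
    using namespace llvm;

    // splitBasicBlock moves SplitPt and everything after it into a new block,
    // terminates BB with an unconditional branch to that block, and keeps the
    // phi nodes in the successors consistent -- the bookkeeping the manual
    // splitting missed.
    BasicBlock *splitAt(BasicBlock *BB, Instruction *SplitPt) {
      return BB->splitBasicBlock(SplitPt, BB->getName() + ".split");
    }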
Craig Topper [Tue, 25 Jun 2019 17:31:52 +0000 (17:31 +0000)]
[X86] Remove isel patterns that look for (vzext_movl (scalar_to_vector (load)))
I believe these all get canonicalized to vzext_movl. The only case where that wasn't true was when the load was loadi32 and was an extload aligned to 32 bits. But that was fixed in r364207.
Philip Reames [Tue, 25 Jun 2019 17:29:18 +0000 (17:29 +0000)]
[Peephole] Allow folding loads into instructions w/multiple uses (such as test64rr)
Peephole opt has a one-use limitation which appears to be accidental. The function being used was incorrectly documented as returning whether the def had one *user*, but instead returned true only when there was one *use*; for example, TEST64rr %0, %0 has a single user instruction but two uses of %0. Add a corresponding hasOneNonDbgUser helper, and adjust peephole-opt to use the appropriate one.
All of the actual folding code handles multiple uses within a single instruction. That codepath is well exercised through instruction selection.
Craig Topper [Tue, 25 Jun 2019 17:08:26 +0000 (17:08 +0000)]
[X86] Add a DAG combine to turn vzmovl+load into vzload if the load isn't volatile. Remove isel patterns for vzmovl+load
We currently have some isel patterns for treating vzmovl+load the same as vzload, but that shrinks the load which we shouldn't do if the load is volatile.
Rather than adding isel checks for volatile, this patch removes the patterns and teaches DAG combine to merge them into vzload when it is legal to do so.
Simon Tatham [Tue, 25 Jun 2019 16:49:32 +0000 (16:49 +0000)]
[ARM] Support inline assembler constraints for MVE.
"To" selects an odd-numbered GPR, and "Te" an even one. There are some
8.1-M instructions that have one too few bits in their register fields
and require registers of particular parity, without necessarily using
a consecutive even/odd pair.
Also, the constraint letter "t" should select an MVE q-register, when
MVE is present. This didn't need any source changes, but some extra
tests have been added.
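A hedged illustration of how such constraints are used from source (GNU-style inline asm; the asm body and function are invented for the example, not taken from the patch):

    // Sketch only: "Te" forces an even-numbered GPR and "To" an odd-numbered
    // one, for instructions whose register fields require a particular parity.
    int parityDemo(int lo, int hi) {
      int out;
      __asm__("add %0, %1, %2" : "=r"(out) : "Te"(lo), "To"(hi));
      return out;
    }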
Simon Tatham [Tue, 25 Jun 2019 16:48:46 +0000 (16:48 +0000)]
[ARM] Code-generation infrastructure for MVE.
This provides the low-level support to start using MVE vector types in
LLVM IR, loading and storing them, passing them to __asm__ statements
containing hand-written MVE vector instructions, and *if* you have the
hard-float ABI turned on, using them as function parameters.
(In the soft-float ABI, vector types are passed in integer registers,
and combining all those 32-bit integers into a q-reg requires support
for selection DAG nodes like insert_vector_elt and build_vector which
aren't implemented yet for MVE. In fact I've also had to add
`arm_aapcs_vfpcc` to a couple of existing tests to avoid that
problem.)
Specifically, this commit adds support for:
* spills, reloads and register moves for MVE vector registers
* ditto for the VPT predication mask that lives in VPR.P0
* make all the MVE vector types legal in ISel, and provide selection
DAG patterns for BITCAST, LOAD and STORE
* make loads and stores of scalar FP types conditional on
`hasFPRegs()` rather than `hasVFP2Base()`. As a result a few
existing tests needed their llc command lines updating to use
`-mattr=-fpregs` as their method of turning off all hardware FP
support.
Fangrui Song [Tue, 25 Jun 2019 15:56:32 +0000 (15:56 +0000)]
[PPC32] Support PLT calls for -msecure-plt -fpic
Summary:
In Secure PLT ABI, -fpic is similar to -fPIC. The differences are that:
* -fpic stores the address of _GLOBAL_OFFSET_TABLE_ in r30, while -fPIC stores .got2+0x8000.
* -fpic uses an addend of 0 for R_PPC_PLTREL24, while -fPIC uses 0x8000.
Sam Parker [Tue, 25 Jun 2019 15:11:17 +0000 (15:11 +0000)]
[ARM] Fix for DLS/LE CodeGen
The expensive buildbots highlighted that the MIR tests were broken; these
have now been updated and --verify-machineinstrs added to them. This also
uncovered a couple of bugs in the backend pass, so those have also
been fixed.
Xing Xue [Tue, 25 Jun 2019 15:08:28 +0000 (15:08 +0000)]
Improve zero-size allocation with safe_malloc, etc.
Summary:
The current implementations of the memory allocation functions mistake a nullptr returned from std::malloc, std::calloc, or std::realloc as a failure. The behaviour for each of std::malloc, std::calloc, and std::realloc when the size is 0 is implementation defined (ISO/IEC 9899:2018 7.22.3), and may return a nullptr.
This patch checks whether the requested size was zero when a nullptr is returned, and retries with a non-zero size if so.
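A hedged sketch of that retry logic (simplified; the error handler below is a placeholder, not the function LLVM actually calls):

    #include <cstdio>
    #include <cstdlib>

    // Placeholder for LLVM's real error reporting (hypothetical here).
    [[noreturn]] static void reportAllocationFailure() {
      std::fputs("allocation failed\n", stderr);
      std::abort();
    }

    void *safe_malloc_sketch(std::size_t Sz) {
      void *Result = std::malloc(Sz);
      if (Result == nullptr) {
        // malloc(0) may legally return nullptr (ISO/IEC 9899:2018 7.22.3);
        // retry with a non-zero size to tell that apart from a real failure.
        if (Sz == 0)
          return safe_malloc_sketch(1);
        reportAllocationFailure();
      }
      return Result;
    }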
Sanjay Patel [Tue, 25 Jun 2019 14:46:52 +0000 (14:46 +0000)]
[SDAG] expand ctpop != 1
Change the generic ctpop expansion to more efficiently handle a
check for not-a-power-of-two value:
(ctpop x) != 1 --> (x == 0) || ((x & x-1) != 0)
This is the inverted predicate sibling pattern that was added with:
D63004
This should have been done before I changed IR canonicalization to
favor this form with:
rL364246
...so if this requires reverting/changing, the earlier commit may also
need to be modified.
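A scalar model of the expanded check (illustrative C++, not the SDAG expansion code):

    #include <cstdint>
    // popcount(x) != 1  <=>  x is not a power of two, i.e. x is zero or
    // clearing its lowest set bit still leaves some bit set.
    bool notExactlyOneBitSet(uint32_t X) {
      return X == 0 || (X & (X - 1)) != 0;
    }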
There were a number of issues with the llvm-readobj documentation. The
following points were raised in https://bugs.llvm.org/show_bug.cgi?id=42255,
and have been fixed in this patch:
1. The description section claimed "The tool and its output is
primarily designed for use in FileCheck-based tests" which is not
really the case any more.
2. The documentation used single-dash long options for option names,
but references in the help text to other options exclusively used
double-dashes. Fixed by standardising on double-dashes for all
long-form options.
3. The majority of options available and in the help text were not
present in the documentation. This patch adds them.
4. Several aliases, both long and short, were missing, e.g. --relocs.
Additionally, this patch improves the documentation by:
1. Splitting the options into categories based on the file format they
are specific to.
2. Updating the Exit Status section to correctly mention that errors
lead to a non-zero exit code.
3. Adding a See Also section referencing other similar LLVM tools.
4. Improving/correcting some of the descriptions of options that did
not quite match up with what llvm-readobj does.
Simon Tatham [Tue, 25 Jun 2019 13:10:29 +0000 (13:10 +0000)]
[ARM] Re-enable misspelled RUN: lines in fullfp16.s.
rL364293 committed a couple of lines that just said "// RUN llvm-mc ..."
without the all-important ':' after RUN, so those test lines weren't
actually running anything.
Sanjay Patel [Tue, 25 Jun 2019 12:49:35 +0000 (12:49 +0000)]
[SDAG] improve expansion of ctpop+setcc
This should not cause any visible change in output, but it's
more efficient because we were producing non-canonical 'sub x, 1'
and 'setcc ugt x, 0'. As mentioned in the TODO, we should also
be handling the inverse predicate.
Sjoerd Meijer [Tue, 25 Jun 2019 12:04:31 +0000 (12:04 +0000)]
[ARM] MVE VPT Blocks
A minor iteration on the MVE VPT Block pass to enable more efficient VPT Block
code generation: consecutive VPT predicated statements, predicated on the same
condition, will be placed within the same VPT Block. This is essentially also
an exercise to write some more tests for the next step, which should be more
generic, also merging instructions when they are not consecutive.
Nicolai Haehnle [Tue, 25 Jun 2019 11:52:30 +0000 (11:52 +0000)]
AMDGPU: Write LDS objects out as global symbols in code generation
Summary:
The symbols use the processor-specific SHN_AMDGPU_LDS section index
introduced with a previous change. The linker is then expected to resolve
relocations, which are also emitted.
Initially disabled for HSA and PAL environments until they have caught up
in terms of linker and runtime loader.
Some notes:
- The llvm.amdgcn.groupstaticsize intrinsics can no longer be lowered
to a constant at compile time, which means some tests can no longer
be applied.
The current "solution" is a terrible hack, but the intrinsic isn't
used by Mesa, so we can keep it for now.
- We no longer know the full LDS size per kernel at compile time, which
means that we can no longer generate a relevant error message at
compile time. It would be possible to add a check for the size of
individual variables, but ultimately the linker will have to perform
the final check.
Nicolai Haehnle [Tue, 25 Jun 2019 11:51:35 +0000 (11:51 +0000)]
AMDGPU/MC: Add .amdgpu_lds directive
Summary:
The directive defines a symbol as a group/local memory (LDS) symbol.
LDS symbols behave similarly to common symbols for the purposes of ELF,
using the processor-specific SHN_AMDGPU_LDS as section index.
It is the linker and/or runtime loader's job to "instantiate" LDS symbols
and resolve relocations that reference them.
It is not possible to initialize LDS memory (not even zero-initialize
as for .bss).
We want to be able to link together objects -- starting with relocatable
objects, but possibly expanding to shared objects in the future -- that
access LDS memory in a flexible way.
LDS memory is in an address space that is entirely separate from the
address space that contains the program image (code and normal data),
so having program segments for it doesn't really make sense.
Furthermore, we want to be able to compile multiple kernels in a
compilation unit which have disjoint use of LDS memory. In that case,
we may want to place LDS symbols differently for different kernels
to save memory (LDS memory is very limited and physically private to
each kernel invocation), so we can't simply place LDS symbols in a
.lds section.
Hence this solution where LDS symbols always stay undefined.
The *_EXTEND_VECTOR_INREG opcodes were relaxed back around rL346784 to support source vector widths that are smaller than the output - it looks like the legalizers were never updated to account for this.
This patch inserts the smaller source vector into an undef vector of the same width as the result before performing the shuffle+bitcast, to correctly handle this.
Part of the yak shaving to solve the crashes from rL364264 and rL364272
Simon Tatham [Tue, 25 Jun 2019 11:24:50 +0000 (11:24 +0000)]
[ARM] Explicit lowering of half <-> double conversions.
If an FP_EXTEND or FP_ROUND isel dag node converts directly between
f16 and f32 when the target CPU has no instruction to do it in one go,
it has to be done in two steps instead, going via f32.
Previously, this was done implicitly, because all such CPUs had the
storage-only implementation of f16 (i.e. the only thing you can do
with one at all is to convert it to/from f32). So isel would legalize
the f16 into an f32 as soon as it saw it, by inserting an fp16_to_fp
node (or vice versa), and then the fp_extend would already be f32->f64
rather than f16->f64.
But that technique can't support a target CPU which has full f16
support but _not_ f64, such as some variants of Arm v8.1-M. So now we
provide custom lowering for FP_EXTEND and FP_ROUND, which checks
support for f16 and f64 and decides on the best thing to do given the
combination of flags it gets back.
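A C-level analogy of the two-step conversion (assuming a compiler with _Float16 support; the real work happens in the lowering, not in source code):

    // f16 -> f64 has to go via f32 when no single instruction exists.
    double widenHalf(_Float16 h) {
      float f = (float)h;  // FP_EXTEND: f16 -> f32
      return (double)f;    // FP_EXTEND: f32 -> f64 (or a libcall without f64 HW)
    }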
Simon Tatham [Tue, 25 Jun 2019 11:24:42 +0000 (11:24 +0000)]
[ARM] Extra MVE-related testing.
This adds some extra RUN lines to existing test files, to check that
things that worked in previous architecture versions haven't
accidentally stopped working in 8.1-M. Also we add some new tests: a
test of scalar floating point instructions that could be easily
confused with the similar-looking vector ones at assembly time, a test
of basic load/store/move access to the FP registers (which has to work
even in integer-only MVE); and one final check of the really obvious
case where turning off MVE should make sure MVE instructions really
are rejected.
This final batch includes the tail-predicated versions of the
low-overhead loop instructions (LETP); the VPSEL instruction to select
between two vector registers based on the predicate mask without
having to open a VPT block; and VPNOT which complements the predicate
mask in place.
Simon Tatham [Tue, 25 Jun 2019 11:24:18 +0000 (11:24 +0000)]
[ARM] Add MVE vector load/store instructions.
This adds the rest of the vector memory access instructions. It
includes contiguous loads/stores, with an ordinary addressing mode
such as [r0,#offset] (plus writeback variants); gather loads and
scatter stores with a scalar base address register and a vector of
offsets from it (written [r0,q1] or similar); and gather/scatters with
a vector of base addresses (written [q0,#offset], again with
writeback). Additionally, some of the loads can widen each loaded
value into a larger vector lane, and the corresponding stores narrow
them again.
To implement these, we also have to add the addressing modes they
need. Also, in AsmParser, the `isMem` query function now has
subqueries `isGPRMem` and `isMVEMem`, according to which kind of base
register is used by a given memory access operand.
I've also had to add an extra check in `checkTargetMatchPredicate` in
the AsmParser, without which our last-minute check of `rGPR` register
operands against SP and PC was failing an assertion because Tablegen
had inserted an immediate 0 in place of one of a pair of tied register
operands. (This matches the way the corresponding check for `MCK_rGPR`
in `validateTargetOperandClass` is guarded.) Apparently the MVE load
instructions were the first to have ever triggered this assertion, but
I think only because they were the first to have a combination of the
usual Arm pre/post writeback system and the `rGPR` class in particular.
Nemanja Ivanovic [Tue, 25 Jun 2019 10:46:13 +0000 (10:46 +0000)]
[PowerPC] Emit XXSEL for vec_sel and code that has the same pattern
As pointed out in https://bugs.llvm.org/show_bug.cgi?id=41777
we do not emit a vector select even when the code pretty much asks for one.
This patch changes that.
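A hedged example (C++ with AltiVec intrinsics, illustrative only) of source that asks for a vector select and can now map onto xxsel:

    #include <altivec.h>
    // vec_sel(a, b, m) takes bits of b where m is set, bits of a elsewhere.
    __vector float blend(__vector float a, __vector float b,
                         __vector __bool int m) {
      return vec_sel(a, b, m);
    }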