Austin Kerbow [Tue, 15 Oct 2019 19:59:45 +0000 (19:59 +0000)]
AMDGPU: Fix infinite searches in SIFixSGPRCopies
Summary:
Two conditions could lead to infinite loops when processing PHI nodes in
SIFixSGPRCopies.
The first condition involves a REG_SEQUENCE that uses registers defined by both
a PHI and a COPY.
The second condition arises when a physical register is copied to a virtual
register which is then used in a PHI node. If the same virtual register is
copied to the same physical register, the result is an endless loop.
SUMMARY:
In XCOFF, if the number of relocation entries or line number entries
overflows (is greater than or equal to 65535), an overflow section is
emitted for it. An overflow section is interpreted differently from a
generic section header; this patch implements parsing of the overflow section.
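To make the layout concrete, here is a minimal sketch of the idea, using hypothetical field and constant names rather than the real XCOFF structures:
```
#include <cstdint>

// Hedged sketch: field and constant names are illustrative, not the LLVM API.
struct SectionHeader32 {
  uint16_t NumRelocEntries;    // capped at 65535 once it overflows
  uint16_t NumLineNumbers;     // capped at 65535 once it overflows
  uint32_t OverflowRelocCount; // an overflow section holds the real count in a wider field
};

constexpr uint16_t OverflowMarker = 65535;

// Returns the real relocation count, consulting the overflow section if present.
uint32_t relocationCount(const SectionHeader32 &Sec, const SectionHeader32 *Overflow) {
  if (Sec.NumRelocEntries == OverflowMarker && Overflow)
    return Overflow->OverflowRelocCount;
  return Sec.NumRelocEntries;
}
```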
Thomas Lively [Tue, 15 Oct 2019 18:28:22 +0000 (18:28 +0000)]
[WebAssembly] Allow multivalue types in block signature operands
Summary:
Renames `ExprType` to the more apt `BlockType` and adds a variant for
multivalue blocks. Currently non-void blocks are only generated at the
end of functions where the block return type needs to agree with the
function return type, and that remains true for multivalue
blocks. That invariant means that the actual signature does not need
to be stored in the block signature `MachineOperand` because it can be
inferred by `WebAssemblyMCInstLower` from the return type of the
parent function. `WebAssemblyMCInstLower` continues to lower block
signature operands to immediates when possible but lowers multivalue
signatures to function type symbols. The AsmParser and Disassembler
are updated to handle multivalue block types as well.
Jordan Rupprecht [Tue, 15 Oct 2019 18:13:20 +0000 (18:13 +0000)]
[llvm-objdump] Use a counter for llvm-objdump -h instead of the section index.
Summary:
When listing the index in `llvm-objdump -h`, use a zero-based counter instead of the actual section index (e.g. shdr->sh_index for ELF).
While this is effectively a noop for now (except one unit test for XCOFF), the index values will change in a future patch that filters certain sections out (e.g. symbol tables). See D68669 for more context. Note: the test case in `test/tools/llvm-objdump/X86/section-index.s` already covers the case of incrementing the section index counter when sections are skipped.
I removed this test to unblock the ARM bots while looking into failures
(r374915), and am reinstating it now with a fix.
I believe the problem was that the counter ptr address I used,
'\0\0\6\0\1\0\0\1', set the high bits of the pointer, not the low bits
like I wanted. On x86_64 this superficially looks like it tests r370826,
but it doesn't, as it would have been caught before r370826. However, on
ARM (or 32-bit hosts more generally), I suspect the high bits were
cleared, and you get a 'valid' profile.
I verified that setting the *low* bits of the pointer does trigger the
new condition:
-// Note: The CounterPtr here is off-by-one. This should trigger a malformed profile error.
-RUN: printf '\0\0\6\0\1\0\0\1' >> %t.profraw
+// Note: The CounterPtr here is off-by-one.
+//
+// Octal '\11' is 9 in decimal: this should push CounterOffset to 1. As there are two counters,
+// the profile reader should error out.
+RUN: printf '\11\0\6\0\1\0\0\0' >> %t.profraw
Digger Lin [Tue, 15 Oct 2019 17:40:41 +0000 (17:40 +0000)]
[XCOFF] Output object text section header and symbol entry for program code.
This is the remaining part of rG41ca91f2995b: [AIX][XCOFF] Output XCOFF
object text section header and symbol entry for program code.
SUMMARY:
The original form of this patch was provided by Stefan Pintillie.
1. The patch outputs the program code section header, the symbol entry for
program code (PR), and the instructions into the raw text section.
2. The patch handles the alignment and layout of the CSections in the text
section.
3. The patch also reorganizes the code, moving some of it into a function
(XCOFFObjectWriter::writeSymbolTableEntryForControlSection).
Additional: We cannot add a raw text section data test in this patch; outputting
raw text section data first requires a function description patch.
Alina Sbirlea [Tue, 15 Oct 2019 17:25:36 +0000 (17:25 +0000)]
[NewGVN] Check that call has an access.
Check that a call has an attached MemoryAccess before calling
getClobbering on the instruction.
If no access is attached, the instruction does not access memory.
[VirtualFileSystem] Support virtual working directory in the RedirectingFS
Before this patch, changing the working directory of the RedirectingFS
would just forward to its external file system. This prevented us from
having a working directory that only existed in the VFS mapping.
This patch adds support for a virtual working directory in the
RedirectingFileSystem. It now keeps track of its own WD in addition to
updating the WD of the external file system. This ensures that we can
still fall through for relative paths.
This change was originally motivated by the reproducer infrastructure in
LLDB where we want to deal transparently with relative paths.
Digger Lin [Tue, 15 Oct 2019 17:09:54 +0000 (17:09 +0000)]
[AIX][XCOFF] Output XCOFF object text section header and symbol entry for program code.
SUMMARY:
The original form of this patch was provided by Stefan Pintillie.
The patch outputs the program code section header, the symbol entry for program code (PR), and the instructions into the raw text section.
The patch handles the alignment and layout of the CSections in the text section.
The patch also reorganizes the code, moving some of it into a function (XCOFFObjectWriter::writeSymbolTableEntryForControlSection).
Additional: We cannot add a raw text section data test in this patch; outputting raw text section data first requires a function description patch.
This is a small generalization of a fold requested in PR43650:
https://bugs.llvm.org/show_bug.cgi?id=43650
The sign-bit of the condition operand can be used as a mask for the true operand:
https://rise4fun.com/Alive/paT
Note that we already handle some of the patterns (isNegative + scalar) because
there's an over-specialized, yet over-reaching fold for that in foldSelectCCToShiftAnd().
It doesn't use any TLI hooks, so I can't easily rip out that code even though we're
duplicating part of it here. This fold is guarded by TLI.convertSelectOfConstantsToMath(),
so it should not cause problems for targets that prefer select over shift.
Also worth noting: I thought we could generalize this further to include the case where
the true operand of the select is not constant, but Alive says that may allow poison to
pass through where it does not in the original select form of the code.
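As a sanity check of the identity behind the fold (not the DAGCombiner code itself), here is a small standalone program that exhaustively verifies it for i8:
```
// For any x and constant C: (x < 0 ? C : 0) == ((x >> 7) & C), where the
// right shift is arithmetic (the common behavior; guaranteed since C++20).
#include <cassert>
#include <cstdint>

int main() {
  const int8_t C = 42; // an arbitrary constant "true" operand
  for (int V = -128; V <= 127; ++V) {
    int8_t X = static_cast<int8_t>(V);
    int8_t Sel = (X < 0) ? C : 0;                  // select on the sign bit
    int8_t Mask = static_cast<int8_t>(X >> 7);     // sra spreads the sign bit
    assert(Sel == static_cast<int8_t>(Mask & C));  // shift+and form
  }
  return 0;
}
```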
Summary:
This patch is part of a series to introduce an Alignment type.
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
See this patch for the introduction of the type: https://reviews.llvm.org/D64790
Sam Parker [Tue, 15 Oct 2019 13:12:51 +0000 (13:12 +0000)]
[ARM][MVE] validForTailPredication insts
Reverse the logic for valid tail predication instructions and create
a whitelist instead. Added other instruction groups that aren't
obviously safe:
- instructions that 'narrow' their result.
- lane moves.
- byte swapping instructions.
- interleaving loads and stores.
- cross-beat carries.
- top/bottom instructions.
- complex operations.
Hopefully we should be able to add more of these instructions to the
whitelist, once we have a more concrete idea of the transform.
Sanjay Patel [Tue, 15 Oct 2019 13:12:44 +0000 (13:12 +0000)]
[InstCombine] fold a shifted bool zext to a select (2nd try)
The 1st attempt at rL374828 inserted the code
at the wrong position (outside of the constant-shift-amount
block). Trying again with an additional test to verify
const-ness.
For a constant shift amount, add the following fold.
shl (zext (i1 X)), ShAmt --> select (X, 1 << ShAmt, 0)
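For reference, the two sides of that fold are equivalent for every input; a tiny standalone check, purely as illustration:
```
// Verify shl (zext i1 X), ShAmt == select X, 1 << ShAmt, 0 for all inputs.
#include <cassert>
#include <cstdint>

int main() {
  for (int X = 0; X <= 1; ++X) {                     // the i1 operand
    for (unsigned ShAmt = 0; ShAmt < 32; ++ShAmt) {  // constant shift amounts
      uint32_t Shifted = static_cast<uint32_t>(X) << ShAmt; // shl (zext X)
      uint32_t Selected = X ? (1u << ShAmt) : 0u;           // select form
      assert(Shifted == Selected);
    }
  }
  return 0;
}
```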
Summary:
This patch is part of a series to introduce an Alignment type.
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
See this patch for the introduction of the type: https://reviews.llvm.org/D64790
David Stenberg [Tue, 15 Oct 2019 11:31:21 +0000 (11:31 +0000)]
[DebugInfo] Add a DW_OP_LLVM_entry_value operation
Summary:
Internally in LLVM's metadata we use DW_OP_entry_value operations with
the same semantics as DWARF; that is, its operand specifies the number
of bytes that the entry value covers.
At the time of emitting entry values we don't know the emitted size of
the DWARF expression that the entry value will cover. Currently the size
is hardcoded to 1 in DIExpression, and other values cause the verifier
to fail. As the size is 1, that effectively means that we can only have
valid entry values for registers that can be encoded in one byte, which
are the registers with DWARF numbers 0 to 31 (as they can be encoded as
single-byte DW_OP_reg0..DW_OP_reg31 rather than a multi-byte
DW_OP_regx). It is a bit confusing, but it seems like llvm-dwarfdump
will print such an operation "correctly" even when the operation is larger
than the specified byte size, which may make it seem that we emit correct
DWARF for registers with DWARF numbers > 31. If you instead use readelf for
such cases, it will interpret only the specified number of bytes as the
DWARF expression. This seems like a limitation in llvm-dwarfdump.
As suggested in D66746, a way forward would be to add an internal
variant of DW_OP_entry_value, DW_OP_LLVM_entry_value, whose operand
instead specifies the number of operations that the entry value covers,
and we then translate that into the byte size at the time of emission.
In this patch that internal operation is added. This patch keeps the
limitation that an entry value can only be applied to simple register
locations, but it will fix the issue with the size operand being
incorrect for DWARF numbers > 31.
[Alignment][NFC] Remove dependency on GlobalObject::setAlignment(unsigned)
Summary:
This patch is part of a series to introduce an Alignment type.
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
See this patch for the introduction of the type: https://reviews.llvm.org/D64790
David Stenberg [Tue, 15 Oct 2019 11:14:35 +0000 (11:14 +0000)]
[DebugInfo] Add interface for pre-calculating the size of emitted DWARF
Summary:
DWARF's DW_OP_entry_value operation has two operands; the first is a
ULEB128 operand that specifies the size of the second operand, which is
a DWARF block. This means that we need to be able to pre-calculate and
emit the size of DWARF expressions before emitting them. There is
currently no interface for doing this in DwarfExpression, so this patch
introduces that.
When implementing this I initially thought about running through
DwarfExpression's emission two times; first with a temporary buffer to
emit the expression, in order to be able to calculate the size of
that emitted data. However, DwarfExpression is a quite complex state
machine, so I decided against that, as it seemed like the two runs could
get out of sync, resulting in incorrect size operands. Therefore I have
implemented this in a way that we only have to run DwarfExpression once.
The idea is to emit DWARF to a temporary buffer, for which it is
possible to query the size. The data in the temporary buffer can then be
emitted to DwarfExpression's main output.
In the case of DIEDwarfExpression, a temporary DIE is used. The values
are all allocated using the same BumpPtrAllocator as for all other DIEs,
and the values are then transferred to the real value list. In the case
of DebugLocDwarfExpression, the temporary buffer is implemented using a
BufferByteStreamer which emits to a buffer in the DwarfExpression
object.
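The general shape can be sketched in isolation; the names below are hypothetical and not the DwarfExpression API, but the flow (buffer first, then size-prefix) is the same idea:
```
// Hedged sketch of "emit to a temporary buffer, then prefix with its size".
#include <cstdint>
#include <vector>

static void emitULEB128(std::vector<uint8_t> &Out, uint64_t Value) {
  do {
    uint8_t Byte = Value & 0x7f;
    Value >>= 7;
    if (Value != 0)
      Byte |= 0x80; // more bytes follow
    Out.push_back(Byte);
  } while (Value != 0);
}

// Emits a size-prefixed block, as DW_OP_entry_value requires: a ULEB128
// length operand followed by the nested DWARF expression bytes.
std::vector<uint8_t> emitSizePrefixedBlock(const std::vector<uint8_t> &TempBuffer) {
  std::vector<uint8_t> Out;
  emitULEB128(Out, TempBuffer.size()); // the size is known only after buffering
  Out.insert(Out.end(), TempBuffer.begin(), TempBuffer.end());
  return Out;
}
```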
Jeremy Morse [Tue, 15 Oct 2019 10:46:24 +0000 (10:46 +0000)]
[DebugInfo] Remove some users of DBG_VALUEs IsIndirect field
This patch kills off a significant user of the "IsIndirect" field of
DBG_VALUE machine insts. Brought up in PR41675, IsIndirect is
technically redundant as it can be expressed by the DIExpression of a
DBG_VALUE inst, and it isn't helpful to have two ways of expressing
things.
Rather than setting IsIndirect, have DBG_VALUE creators add an extra deref
to the inst's DIExpression. There should now be no appearances of
IsIndirect=True from isel down to LiveDebugVariables / VirtRegRewriter,
which is ensured by an assertion in LDVImpl::handleDebugValue. This means
we also get to delete the IsIndirect handling in LiveDebugVariables. Tests
can be upgraded by, for example, swapping the following IsIndirect=True
DBG_VALUE:
Petar Avramovic [Tue, 15 Oct 2019 09:30:08 +0000 (09:30 +0000)]
[MIPS GlobalISel] Add MSA registers to fprb. Select vector load, store
Add vector MSA register classes to fprb; they are 128 bits wide.
MSA instructions use the same registers for both integer and floating
point operations. Therefore we only need to check for vector element
size during legalization or instruction selection.
Add a helper function in MipsLegalizerInfo and switch to the legalIf
LegalizeRuleSet to keep the legalization rules compact, since they depend
on the MipsSubtarget and the presence of MSA.
fprb is assigned to all vector operands.
Move selectLoadStoreOpCode to MipsInstructionSelector in order to
reduce the number of arguments.
David Stenberg [Tue, 15 Oct 2019 09:21:09 +0000 (09:21 +0000)]
Change Comments SmallVector to std::vector in DebugLocStream [NFC]
This changes the 32-element SmallVector to a std::vector. When building
a RelWithDebInfo clang-8 binary, the average size of the vector was
~10000, so it does not seem very beneficial or practical to use a small
vector for that.
The DWARFBytes SmallVector grows in the same way as Comments, so perhaps
that also should be changed to a purely dynamically allocated structure,
but that requires some more code changes, so I let that remain as a
SmallVector for now.
Check whether the size of the operand LLT matches the sizes of the available
register banks before inspecting the opcode, in order to reduce the number of checks.
Factor commonly used pieces of code into functions.
Martin Storsjo [Tue, 15 Oct 2019 08:29:56 +0000 (08:29 +0000)]
[Demangle] Add a few more options to the microsoft demangler
This corresponds to commonly used options to UnDecorateSymbolName
within llvm.
Add them as hidden options in llvm-undname. MS undname.exe takes
numeric flags, corresponding to the UNDNAME_* constants, but instead
of hardcoding in mappings for those numbers, just add textual
options, as the use of them here is primarily intended
for testing.
Craig Topper [Tue, 15 Oct 2019 06:10:11 +0000 (06:10 +0000)]
[X86] Don't check for VBROADCAST_LOAD being a user of the source of a VBROADCAST when trying to share broadcasts.
The only things VBROADCAST_LOAD uses are an address and a chain
node. It has no vector inputs.
So if it's a user of the source of another broadcast, that could
only mean one of two things: either the other broadcast is broadcasting
the address of the broadcast_load, or the source is a load and
the use we're seeing is the chain result from that load. Neither
of these cases makes sense to combine here.
This issue was reported post-commit r373871. Test case has not
been reduced yet.
Shiva Chen [Tue, 15 Oct 2019 02:04:29 +0000 (02:04 +0000)]
[RISCV] Support fast calling convention
LLVM may annotate a function with fastcc if it has only one caller,
there are no callers outside the module, and the function is neither
naked nor takes variable arguments.
Fastcc functions may pass their arguments in caller-saved registers.
Thomas Lively [Tue, 15 Oct 2019 01:11:51 +0000 (01:11 +0000)]
[WebAssembly] Trapping fptoint builtins and intrinsics
Summary:
The WebAssembly backend lowers fptoint instructions to a code sequence
that checks for overflow to avoid traps because fptoint is supposed to
be speculatable. These new builtins and intrinsics give users a way to
depend on the trapping semantics of the underlying instructions and
avoid the extra code generated normally.
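Roughly speaking, the default lowering behaves like the guarded conversion sketched below, while the new intrinsics map to the raw trapping instruction. This is a conceptual illustration only, not the backend's actual code sequence:
```
#include <cmath>
#include <cstdint>

int32_t fptosi_guarded(float F) {
  // Out-of-range or NaN inputs are poison in LLVM IR, so any value may be
  // returned here; the only requirement is that the conversion does not trap.
  if (std::isnan(F) || F < -2147483648.0f || F >= 2147483648.0f)
    return 0;
  return static_cast<int32_t>(F); // in-range: an ordinary conversion
}
```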
Craig Topper [Mon, 14 Oct 2019 23:48:24 +0000 (23:48 +0000)]
[X86] Teach X86MCCodeEmitter to properly encode zmm16-zmm31 as index register to vgatherpf/vscatterpf.
We need to encode bit 4 into the EVEX.V' bit. We do this right
for regular gather/scatter which use either MRMSrcMem or MRMDestMem
formats. The prefetches use MRM*m formats.
Eric Christopher [Mon, 14 Oct 2019 22:56:07 +0000 (22:56 +0000)]
In the new pass manager, use PTO.LoopUnrolling to determine when and how
we will unroll loops. Also comment a few occasions where we need to
know whether or not we're forcing the unroller.
The default before and after this patch is for LoopUnroll to be enabled,
and for it to use a cost model to determine whether to unroll the loop
(`OnlyWhenForced = false`). Before this patch, disabling loop unroll
would not run the LoopUnroll pass. After this patch, the LoopUnroll pass
is being run, but it restricts unrolling to only the loops marked by a
pragma (`OnlyWhenForced = true`).
In addition, this patch disables the UnrollAndJam pass when disabling unrolling.
Testcase is in clang because it's controlling how the loop optimizer
is being set up and there's no other way to trigger the behavior.
Jian Cai [Mon, 14 Oct 2019 22:22:26 +0000 (22:22 +0000)]
[ARM][AsmParser] handles offset expression in parentheses
Summary:
The integrated assembler does not accept offset expressions surrounded by
parentheses. Handle this case for GAS compatibility.
https://bugs.llvm.org/show_bug.cgi?id=43631
David Blaikie [Mon, 14 Oct 2019 22:12:45 +0000 (22:12 +0000)]
DebugInfo: Remove unnecessary/mistaken inclusion of Bitcode/BitcodeAnalyzer.h
Introduced in r374582, Michael Spencer pointed out this broke the
modules build due to a missing tblgen dependency on
llvm/IR/Attributes.inc.
Michael fixed the dependency in r374827.
So this removes the inclusion and the new dependency (effectively
reverting r374827 and including the alternative fix of removing rather
than supporting the new dependency).
A previous commit made libLLVMDebugInfoDWARF depend on the LLVM_Bitcode module, which depends on the LLVM_intrinsic_gen module, which in turn depends on "llvm/IR/Attributes.inc", a generated header not depended on by libLLVMDebugInfo. Add that dependency.
Joel E. Denny [Mon, 14 Oct 2019 19:59:30 +0000 (19:59 +0000)]
[lit] Extend internal diff to support -U
When using lit's internal shell, RUN lines like the following
accidentally execute an external `diff` instead of lit's internal
`diff`:
```
# RUN: program | diff -U1 file -
```
Such cases exist now, in `clang/test/Analysis` for example. We are
preparing patches to ensure lit's internal `diff` is called in such
cases, which will then fail because lit's internal `diff` doesn't
recognize `-U` as a command-line option. This patch adds `-U`
support.
Philip Reames [Mon, 14 Oct 2019 19:49:40 +0000 (19:49 +0000)]
[Tests] Add a test demonstrating a miscompile in the off-by-default loop-pred transform
Credit goes to Evgeny Brevnov for figuring out the problematic case.
Fuzzing probably also found it (lots of failures), but due to some silly infrastructure problems I hadn't gotten to the results before Evgeny hand reduced it from a benchmark.
Roman Lebedev [Mon, 14 Oct 2019 19:46:34 +0000 (19:46 +0000)]
[LoopIdiom] BCmp: loop exit count must not be wider than size_t that `bcmp` takes
As reported by Joerg Sonnenberger on IRC, for 32-bit systems,
where pointers and size_t are 32-bit, if you use a 64-bit-wide variable
in the loop, you could end up with the loop exit count being of a type
wider than size_t. Now, I'm not sure if we can produce `bcmp`
from that (just truncate?), but we certainly should not assert/miscompile.
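A sketch of the problematic shape (illustrative, not taken from the actual test case):
```
// On a 32-bit target, size_t is 32-bit but this loop's exit count is i64,
// so the idiom recognizer must not blindly hand N to bcmp/memcmp.
#include <cstdint>

bool bytesEqual(const unsigned char *A, const unsigned char *B, uint64_t N) {
  for (uint64_t I = 0; I != N; ++I) // 64-bit trip count
    if (A[I] != B[I])
      return false;
  return true; // candidate for bcmp(A, B, N) -- only if N fits in size_t
}
```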
Jordan Rupprecht [Mon, 14 Oct 2019 17:47:17 +0000 (17:47 +0000)]
[llvm-objdump] Adjust spacing and field width for --section-headers
Summary:
- Expand the "Name" column past 13 characters when any of the section names are longer. Current behavior is a staggard output instead of a nice table if a single name is longer.
- Only print the required number of hex chars for addresses (i.e. 8 characters for 32-bit, 16 characters for 64-bit)
- Fix trailing spaces
Vedant Kumar [Mon, 14 Oct 2019 17:20:22 +0000 (17:20 +0000)]
[llvm-profdata] Weaken "malformed-ptr-to-counter-array.test" to appease arm bots
There are a number of arm bots failing after r374617 landed, and I'm not
sure why. It looks a bit like the error message llvm-profdata is
expected to print to stderr isn't being flushed.
Weaken the test in an attempt to appease the arm bots: if this doesn't
work, that means that llvm-profdata is actually *not failing*, and that
will be a clear indication that some logic error is actually happening.
Simon Pilgrim [Mon, 14 Oct 2019 16:30:17 +0000 (16:30 +0000)]
[CostModel][X86] Add CTLZ scalar costs
Add specific scalar costs for CTLZ instructions; we can't discriminate between CTLZ and CTLZ_ZERO_UNDEF, so we have to assume the worst. Given how BSR is often a microcoded nightmare on some older targets, we might still be underestimating it.
For targets supporting LZCNT (Intel Haswell+ or AMD Fam10+), we provide overrides that assume 1cy costs.
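For context, CTLZ vs CTLZ_ZERO_UNDEF corresponds to whether the source-level operation is defined for a zero input; a hedged illustration (the intrinsic mapping noted in the comments is the usual clang lowering, stated here as an assumption rather than something this patch changes):
```
// std::countl_zero(0) is well defined and typically ends up as
// llvm.ctlz(x, /*is_zero_undef=*/false); __builtin_clz(0) is undefined
// and typically maps to llvm.ctlz(x, /*is_zero_undef=*/true).
#include <bit>      // C++20
#include <cstdint>
#include <cstdio>

int main() {
  uint32_t X = 1;
  std::printf("%d\n", std::countl_zero(X)); // defined even for X == 0
  if (X != 0)                               // guard: the builtin is UB for zero
    std::printf("%d\n", __builtin_clz(X));
  return 0;
}
```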
Add a pass to lower is.constant and objectsize intrinsics
This pass lowers is.constant and objectsize intrinsics not simplified by
earlier constant folding, i.e. if the object given is not constant or if
not using the optimized pass chain. The result is recursively simplified
and constant conditionals are pruned, so that dead blocks are removed
even for -O0. This allows inline asm blocks with operand constraints to
work all the time.
The new pass replaces the existing lowering in the codegen-prepare pass
and the fallbacks in SDAG/GlobalISel and FastISel. The latter now assert
on the intrinsics.
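As background, these intrinsics typically originate from source-level builtins like the ones below; whether the calls survive to the new pass depends on the optimization level and earlier folding, so treat this as a hedged example:
```
// __builtin_object_size typically lowers to llvm.objectsize and
// __builtin_constant_p to llvm.is.constant; at -O0 they may not be folded
// early, which is exactly the case the new IR-level lowering handles.
#include <cstddef>
#include <cstdio>
#include <cstring>

void checkedCopy(char *Dst, const char *Src, std::size_t N) {
  std::size_t Avail = __builtin_object_size(Dst, 0); // (size_t)-1 when unknown
  if (Avail != static_cast<std::size_t>(-1) && N > Avail)
    __builtin_trap();
  std::memcpy(Dst, Src, N);
}

int main() {
  char Buf[8];
  std::printf("%d\n", __builtin_constant_p(42)); // usually folds to a constant
  checkedCopy(Buf, "hi", 3);
  return 0;
}
```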
David Green [Mon, 14 Oct 2019 15:19:33 +0000 (15:19 +0000)]
[ARM] Selection for MVE VMOVN
This adds both VMOVNt and VMOVNb instruction selection from the appropriate
shuffles. We detect shuffle masks of the form:
0, N, 2, N+2, 4, N+4, ...
or
0, N+1, 2, N+3, 4, N+5, ...
ISel will also try the opposite patterns, with inputs reversed. These are
selected to VMOVNt and VMOVNb respectively.
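A standalone sketch of the mask check described above (a hypothetical helper, not the actual ISel code):
```
#include <vector>

// Returns true if Mask matches 0, N+Off, 2, N+2+Off, 4, N+4+Off, ... where
// Off selects between the two patterns listed above.
bool isVMOVNMask(const std::vector<int> &Mask, int N, int Off) {
  if (static_cast<int>(Mask.size()) != N || N % 2 != 0)
    return false;
  for (int I = 0; I < N; I += 2) {
    if (Mask[I] != I)               // even lanes come from the first input
      return false;
    if (Mask[I + 1] != N + I + Off) // odd lanes come from the second input
      return false;
  }
  return true;
}
```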
Makes Object/macho-invalid.test fail everywhere, e.g. here:
http://lab.llvm.org:8011/builders/llvm-hexagon-elf/builds/23669/steps/test-llvm/logs/FAIL%3A%20LLVM%3A%3Amacho-invalid.test
[Alignment][NFC] Move and type functions from MathExtras to Alignment
Summary:
This patch is part of a series to introduce an Alignment type.
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
See this patch for the introduction of the type: https://reviews.llvm.org/D64790
[AMDGPU] Come back patch for the 'Assign register class for cross block values according to the divergence.'
Detailed description:
After https://reviews.llvm.org/D59990 was submitted, several issues were discovered.
Changes in the common code were preserved, but the AMDGPU-specific part was reverted to keep the backend working correctly.
The discovered issues were addressed in the following commits:
Summary:
This patch is part of a series to introduce an Alignment type.
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
See this patch for the introduction of the type: https://reviews.llvm.org/D64790
Craig Topper [Mon, 14 Oct 2019 06:47:56 +0000 (06:47 +0000)]
[X86] Teach EmitTest to handle ISD::SSUBO/USUBO in order to use the Z flag from the subtract directly during isel.
This prevents isel from emitting a TEST instruction that
optimizeCompareInstr will need to remove later.
In some of the modified tests, the SUB gets duplicated due to
the flags being needed in two places and being clobbered in
between. optimizeCompareInstr was able to optimize away the TEST
that was using the result of one of them, but optimizeCompareInstr
doesn't know to turn SUB into CMP after removing the TEST. It
only knows how to turn SUB into CMP if the result was already
dead.
With this change the TEST never exists, so optimizeCompareInstr
doesn't have to remove it. Then it can just turn the SUB into
CMP immediately.
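For reference, a source-level pattern that produces the SSUBO/USUBO nodes this change targets; a hedged example, since the exact code generation depends on target and flags:
```
#include <cstdio>

int main() {
  int R;
  // __builtin_sub_overflow yields llvm.ssub.with.overflow (ISD::SSUBO for
  // signed types); comparing the result against zero is the kind of test
  // that can now reuse the Z flag produced by the SUB itself.
  bool Overflowed = __builtin_sub_overflow(7, 7, &R);
  if (!Overflowed && R == 0)
    std::printf("equal, no overflow\n");

  unsigned U;
  // The unsigned variant maps to llvm.usub.with.overflow (ISD::USUBO).
  if (__builtin_sub_overflow(7u, 9u, &U))
    std::printf("unsigned subtraction wrapped\n");
  return 0;
}
```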