Fangrui Song [Thu, 11 Apr 2019 02:02:44 +0000 (02:02 +0000)]
[DWARF] Set discriminator to 0 for DW_LNS_copy
Summary:
Make DW_LNS_copy set the discriminator register to 0, to conform to
DWARF 4 & 5: "Then it sets the discriminator register to 0, and sets the
basic_block, prologue_end and epilogue_begin registers to false."
Because all of DW_LNE_end_sequence, DW_LNS_copy, and special opcodes reset
discriminator to 0, we can move discriminator=0 to appendRowToMatrix.
Also, make DW_LNS_copy print before appending the row, as it is similar
to an address+=0,line+=0 special opcode, which prints before appending
the row.
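As a rough sketch of that behavior (the types and names below are illustrative, not the actual DWARFDebugLine implementation): because DW_LNE_end_sequence, DW_LNS_copy, and the special opcodes all reset the discriminator, the reset can live in the shared row-append helper.

```
#include <cstdint>
#include <vector>

// Illustrative line-table state machine, not LLVM's real one.
struct Row {
  uint64_t Address = 0;
  uint32_t Line = 1;
  uint32_t Discriminator = 0;
  bool BasicBlock = false, PrologueEnd = false, EpilogueBegin = false;
};

struct StateMachine {
  Row Regs;                // current register values
  std::vector<Row> Matrix; // emitted rows

  // Shared by DW_LNS_copy, special opcodes, and DW_LNE_end_sequence:
  // append the current registers, then perform the resets that
  // DWARF 4/5 require in all three cases.
  void appendRowToMatrix() {
    Matrix.push_back(Regs);
    Regs.Discriminator = 0;
    Regs.BasicBlock = Regs.PrologueEnd = Regs.EpilogueBegin = false;
  }
};
```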
Erik Pilkington [Wed, 10 Apr 2019 23:42:11 +0000 (23:42 +0000)]
Fix a hang when lowering __builtin_dynamic_object_size
If the ObjectSizeOffsetEvaluator fails to fold the object size call, then it may
litter some unused instructions in the function. When done repeatedly in
InstCombine, this results in an infinite loop. Fix this by tracking the set of
instructions that were inserted, then removing them on failure.
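A sketch of the rollback pattern with toy types (illustrative; the real code tracks llvm::Instruction pointers created by the evaluator and removes them with eraseFromParent()):

```
#include <memory>
#include <vector>

// Toy stand-in: in LLVM the tracked objects are llvm::Instruction*
// created through an IRBuilder.
struct Instruction {};

struct Evaluator {
  std::vector<std::unique_ptr<Instruction>> Function; // owning "IR"
  std::vector<Instruction *> InsertedInstructions;    // rollback list

  Instruction *insert() {
    Function.push_back(std::make_unique<Instruction>());
    InsertedInstructions.push_back(Function.back().get());
    return Function.back().get();
  }

  bool tryFold() { insert(); /* ... */ return false; } // fold fails here

  bool evaluate() {
    if (tryFold())
      return true;
    // Folding failed: erase everything we littered, newest first, so a
    // repeated InstCombine run sees no "new" work and cannot loop.
    while (!InsertedInstructions.empty()) {
      Instruction *I = InsertedInstructions.back();
      InsertedInstructions.pop_back();
      for (auto It = Function.begin(); It != Function.end(); ++It)
        if (It->get() == I) { Function.erase(It); break; }
    }
    return false;
  }
};
```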
[X86] Teach foldMaskedShiftToScaledMask to look through an any_extend from i32 to i64 between the and & shl
foldMaskedShiftToScaledMask tries to reorder an and & shl pair so that the shl can fold into an LEA. But if there is an any_extend between them, it doesn't work.
This patch modifies the code to look through any_extend from i32 to i64 when the and mask only uses bits that weren't from the extended part.
This will prevent a regression from D60358 caused by 64-bit SHL being narrowed to 32-bits when their upper bits aren't demanded.
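A quick way to convince yourself the reorder is sound when the mask covers no extended bits (plain C++ arithmetic, not the actual DAG combine; the shift amount and mask are arbitrary example values):

```
#include <cassert>
#include <cstdint>

int main() {
  // Model any_extend as zero-extension: legal here because the mask m
  // has no bits set above bit 31, so the undefined upper half is
  // masked away either way.
  const unsigned c = 3;      // shift amount C1
  const uint64_t m = 0xFFF8; // mask C2, no bits above bit 31
  for (uint32_t x : {0u, 1u, 0x12345678u, 0xDEADBEEFu, 0xFFFFFFFFu}) {
    // and(aext(shl x, c), m) ...
    uint64_t before = (uint64_t)(uint32_t)(x << c) & m;
    // ... equals shl(and(aext x, m >> c), c), which folds into an LEA.
    uint64_t after = ((uint64_t)x & (m >> c)) << c;
    assert(before == after);
  }
  return 0;
}
```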
[X86] Make _Int instructions the preferred instruction for the assembly parser and disassembly parser to remove inconsistencies between VEX and EVEX.
Many of our instructions have both a _Int form used by intrinsics and a form
used by other IR constructs. In the EVEX space the _Int versions usually cover
all the capabilities, including broadcasting and rounding, while the other
version only covers simple register/register or register/load forms. For this
reason, in EVEX, the non-intrinsic form is usually marked isCodeGenOnly=1.
In the VEX encoding space we were less consistent, but usually the _Int version
was the isCodeGenOnly version.
This commit makes the VEX instructions match the EVEX instructions. This was
done by manually studying the AsmMatcher table so it's possible I missed some
cases, but we should be closer now.
I'm thinking about using the isCodeGenOnly bit to simplify the EVEX2VEX
tablegen code that disambiguates the _Int and non _Int versions. Currently it
checks register class sizes and which Record the memory operands come from. I have
some other changes I was looking into for D59266 that may break the memory check.
I had to make a few scheduler hacks to keep the _Int versions from being treated
differently than the non _Int version.
[X86] Add test case for LEA formation regression seen with D60358. NFC
If we have an (add X, (and (aext (shl Y, C1)), C2)), we can pull the shift
through the and+aext to fold it into an LEA, assuming C1 is small enough and
C2 masks off all of the extend bits.
This pattern showed up in D60358. And we need to handle it to prevent a regression.
[X86] Replace some if statements in isel address matching that should never be true with asserts. And move them earlier, before we look through operands that don't change size. NFC
These ifs were ensuring we don't have to handle types larger than 64 bits, probably because we use getZExtValue in several places below them.
None of the callers of this code pass types larger than 64 bits, so we can just assert instead of branching in release code.
I've also moved them earlier since we're just looking through operations that don't affect bit width.
This is prep work for some refactoring I plan to do to the (and (shl)) handling code.
Nick Desaulniers [Wed, 10 Apr 2019 19:01:44 +0000 (19:01 +0000)]
[X86AsmPrinter] refactor to limit use of Modifier. NFC
Summary:
The Modifier for memory operands is used in 2 cases of memory references
(the H & P ExtraCodes). Rather than pass around the likely-nullptr Modifier,
refactor the handling of the Modifier out from printOperand().
The refactorings in this patch:
- Don't forward declare printOperand, move its definition up.
- The diff makes it look like there's a change to printPCRelImm
(narrator: there's not).
- Create printModifiedOperand()
- Move logic for Modifier to there from printOperand
- Use printModifiedOperand in 3 call sites that actually create
Modifiers.
- Remove now unused Modifier parameter from printOperand
- Remove default parameter from printLeaMemReference as it only has 1
call site that explicitly passes a parameter.
- Remove default parameter from printMemReference, make the lone call
site explicitly pass nullptr.
- Drop Modifier parameter from printIntelMemReference, as Intel style
memory references don't support the Modifiers in question.
This will allow future changes to printOperand() to make it a pure virtual
method on the base AsmPrinter class, allowing for more generic handling
of some architecture generic constraints. X86AsmPrinter was the only
derived class of AsmPrinter to have additional parameters on its
printOperand function.
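The resulting call structure, roughly (heavily simplified signatures; the real methods take a MachineInstr, operand numbers, and a raw_ostream):

```
#include <cstdio>

// Illustrative stand-in for an operand.
struct Operand { int Reg; };

// Common path: no Modifier logic left in here at all.
static void printOperand(const Operand &Op) {
  std::printf("%%r%d", Op.Reg);
}

// Only the few call sites that actually create a Modifier (the H & P
// ExtraCodes) route through this wrapper.
static void printModifiedOperand(const Operand &Op, const char *Modifier) {
  if (Modifier) {
    // ... handle the modified memory-reference cases here ...
  }
  printOperand(Op);
}

int main() {
  printOperand({3});
  std::printf("\n");
  printModifiedOperand({3}, "H");
  std::printf("\n");
  return 0;
}
```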
Roman Lebedev [Wed, 10 Apr 2019 18:26:42 +0000 (18:26 +0000)]
[X86] X86ScheduleBdVer2: use !listsplat operator to cleanup loadres calculation
The problem is that one can't concatenate an empty list
(implied all-ones) with a non-empty list here. The result
will be the non-empty list, and it won't match the length
of the ExePorts list.
The problems begin when LoadRes != 1 here,
which is the case in PdWriteResYMMPair,
and more importantly, I think it will be the case for PdWriteResExPair.
Roman Lebedev [Wed, 10 Apr 2019 18:26:36 +0000 (18:26 +0000)]
[TableGen] Introduce !listsplat 'binary' operator
Summary:
```
``!listsplat(a, size)``
A list value that contains the value ``a`` ``size`` times.
Example: ``!listsplat(0, 2)`` results in ``[0, 0]``.
```
I plan to use this in X86ScheduleBdVer2.td for LoadRes handling.
This is a little bit controversial because, unlike every other binary operator,
the types aren't identical.
David Green [Wed, 10 Apr 2019 18:05:57 +0000 (18:05 +0000)]
[ARM] Add an extra constant hoisting test. NFC
This adds a simple extra test for constant hoisting to show its
usefulness with constant addresses like those seen in memory-mapped
registers in embedded systems.
David Green [Wed, 10 Apr 2019 18:00:41 +0000 (18:00 +0000)]
Revert rL357745: [SelectionDAG] Compute known bits of CopyFromReg
Certain optimisations from ConstantHoisting and CGP rely on SelectionDAG not
seeing through to the constant in other blocks. Revert this patch while we come
up with a better way to handle that.
I will try to follow this up with some better tests.
but changed that to
SymbolNode *Symbol = demangleEncodedSymbol(MangledName, QN);
if (Error)
return nullptr;
Symbol->Name = QN;
and one branch somewhere returned a nullptr without setting Error.
Looking at the code changed in r340083 and r340710 that branch looks
like a remnant from an earlier attempt to demangle RTTI descriptors
that has since been rewritten -- so just remove this branch. It
shouldn't change behavior for correctly mangled symbols.
Matt Arsenault [Wed, 10 Apr 2019 17:27:56 +0000 (17:27 +0000)]
GlobalISel: Move computeValueLLTs
Call lowering should use this directly instead of going through the
EVT version, but more work is needed to deal with this (mostly the
passing of the IR type pointer instead of the relevant properties in
ArgInfo).
[AArch64] Teach getTestBitOperand to look through ANY_EXTENDS
This patch teaches getTestBitOperand to look through ANY_EXTENDs when the extended bits aren't used. The test case changed here is based on what D60358 did to test16 in tbz-tbnz.ll, so this patch will avoid that regression.
Nick Desaulniers [Wed, 10 Apr 2019 16:38:43 +0000 (16:38 +0000)]
[AsmPrinter] refactor to remove AsmVariant. NFC
Summary:
The InlineAsm::AsmDialect is only required for X86; no other architecture
makes use of it and as such it gets passed around between arch-specific
and general code while being unused for all architectures but X86.
Since the AsmDialect is queried from a MachineInstr, which we also pass
around, remove the additional AsmDialect parameter and query for it deep
in the X86AsmPrinter only when needed/as late as possible.
This refactor should help later planned refactors to AsmPrinter, as this
difference in the X86AsmPrinter makes it harder to make AsmPrinter more
generic.
ssubo X, C is equivalent to saddo X, -C. Make the transformation in
InstCombine and allow the logic implemented for saddo to fold prior
usages of add nsw or sub nsw with constants.
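The equivalence is easy to sanity-check with the overflow builtins (a self-contained demonstration; the probe values are arbitrary, and C must itself be negatable, i.e. not INT_MIN):

```
#include <cassert>
#include <climits>

int main() {
  // ssubo(X, C) and saddo(X, -C) agree whenever C can be negated
  // without overflow (i.e. C != INT_MIN).
  for (int x : {INT_MIN, -7, 0, 42, INT_MAX}) {
    for (int c : {-100, -1, 5, INT_MAX}) {
      int r1, r2;
      bool o1 = __builtin_sub_overflow(x, c, &r1);  // ssubo
      bool o2 = __builtin_add_overflow(x, -c, &r2); // saddo with -C
      assert(o1 == o2 && r1 == r2); // same overflow bit, same value
    }
  }
  return 0;
}
```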
Improve compile-time performance in computeKnownBitsFromAssume.
This patch changes the order of pattern matching by first testing
a compare instruction's predicate, before doing the pattern
match for the whole expression tree.
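Schematically, the reordering looks like this (toy types; the real code uses the m_* matchers from PatternMatch):

```
// Toy types; the real code matches llvm::ICmpInst with PatternMatch.
enum Predicate { ICMP_EQ, ICMP_NE /* ... */ };
struct CmpInst { Predicate Pred; /* operands ... */ };

// Expensive recursive match over the whole expression tree (stubbed).
static bool matchWholeExprTree(const CmpInst &) { return false; }

static void computeKnownBitsFromOneAssume(const CmpInst &Cmp) {
  // The change: test the cheap predicate first, so most non-matching
  // assumes are rejected before the costly tree walk even starts.
  if (Cmp.Pred != ICMP_EQ)
    return; // cheap early exit
  if (!matchWholeExprTree(Cmp))
    return;
  // ... record known bits from the matched pattern ...
}

int main() { computeKnownBitsFromOneAssume({ICMP_NE}); return 0; }
```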
David Stenberg [Wed, 10 Apr 2019 11:28:28 +0000 (11:28 +0000)]
[DebugInfo] Track multiple registers in DbgEntityHistoryCalculator
Summary:
When calculating the debug value history, DbgEntityHistoryCalculator
would only keep track of register clobbering for the latest debug value
per inlined entity. This meant that preceding register-described debug
value fragments would live on until the next overlapping debug value,
ignoring any potential clobbering. This patch amends
DbgEntityHistoryCalculator so that it keeps track of all registers that
an inlined entity's currently live debug values are described by.
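The shape of the change as a sketch (container choices are illustrative; the real code uses LLVM's ADT types): track every register that currently describes a live value for an entity, not just the most recent one.

```
#include <map>
#include <set>
#include <utility>

// Illustrative stand-ins for LLVM's types.
using InlinedEntity = std::pair<const void *, const void *>; // (var, inlined-at)
using Register = unsigned;

struct RegDescribedVarsMap {
  // Before: only the latest debug value's register was clobber-tracked.
  // After: every register that a currently-live fragment is described
  // by, so clobbering any of them ends the right fragment's range.
  std::map<Register, std::set<InlinedEntity>> RegToVars;

  void addRegDescribedVar(Register R, InlinedEntity Var) {
    RegToVars[R].insert(Var);
  }
  void clobberRegister(Register R) {
    RegToVars.erase(R); // end ranges for every value described by R
  }
};
```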
The DebugInfo/COFF/pieces.ll test case has had to be changed since
previously a register-described fragment would incorrectly outlive its
basic block.
The parent patch D59941 is expected to increase the coverage slightly,
as it makes sure that location list entries are inserted after clobbered
fragments, and this patch is expected to decrease it, as it stops
preceding register-described fragments from living longer than they
should. All in all, this patch and the preceding patch have a negligible
effect on the output from `llvm-dwarfdump -statistics` for a clang-3.4
binary built using the RelWithDebInfo build profile. "Scope bytes
covered" increases by 0.5%, and "variables with location" increases
from 2212083 to 2212088, but it should improve the accuracy quite a bit.
David Stenberg [Wed, 10 Apr 2019 11:28:20 +0000 (11:28 +0000)]
[DebugInfo] Improve handling of clobbered fragments
Summary:
Currently the DbgValueHistorymap only keeps track of clobbered registers
for the last debug value that it has encountered. This could lead to
preceding register-described debug values living on longer in the
location lists than they should. See PR40283 for an example. This
patch does not introduce tracking of multiple registers, but changes
the DbgValueHistoryMap structure to allow for that in a follow-up
patch. This patch is not NFC, as it at least fixes two bugs in
DwarfDebug (both are covered in the new clobbered-fragments.mir test):
* If a debug value was clobbered (its End pointer set), the value would
still be added to OpenRanges, meaning that the succeeding location list
entries could potentially contain stale values.
* If a debug value was clobbered, and there were non-overlapping
fragments that were still live after the clobbering, DwarfDebug would
not create a location list entry starting directly after the
clobbering instruction. This meant that the location list could have
a gap until the next debug value for the variable was encountered.
Before this patch, the history map was represented by <Begin, End>
pairs, where a new pair was created for each new debug value. When
dealing with partially overlapping register-described debug values
(say, two fragments DV1 and DV2 whose registers are clobbered by
instructions insn1 and insn2 respectively), the history map would then
contain the entries `[<DV1, insn1>, <DV2, insn2>]`.
This would leave it up to the users of the map to be aware of
the relative order of the instructions, which e.g. could make
DwarfDebug::buildLocationList() needlessly complex. Instead, this patch
makes the history map structure monotonically increasing by dropping the
End pointer, and replacing that with explicit clobbering entries in the
vector. Each debug value has an "end index", which if set, points to the
entry in the vector that ends the debug value. The ending entry can
either be an overlapping debug value, or an instruction which clobbers
the register that the debug value is described by. The ending entry's
instruction can thus either be excluded or included in the debug value's
range. If the end index is not set, the debug value that the entry
introduces is valid until the end of the function.
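A sketch of the entry shape that replaces the <Begin, End> pairs (field and type names here are illustrative, not the exact LLVM structure):

```
#include <vector>

struct MachineInstr; // opaque here

// Monotonically-growing history map entry, as described above.
struct Entry {
  enum EntryKind { DbgValue, Clobber } Kind;
  const MachineInstr *Instr; // the DBG_VALUE or the clobbering instr
  static constexpr unsigned NoEntry = ~0u;
  unsigned EndIndex = NoEntry; // index of the entry ending this value;
                               // NoEntry => live to end of function
  bool isClosed() const { return EndIndex != NoEntry; }
};

// Per variable: entries are only ever appended; closing a debug value
// just sets its EndIndex to a later entry, which is either an
// overlapping DBG_VALUE or an explicit Clobber entry.
using Entries = std::vector<Entry>;
```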
Changes to test cases:
* DebugInfo/X86/pieces-3.ll: The range of the first DBG_VALUE, which
describes that the fragment (0, 64) is located in RDI, was
incorrectly ended by the clobbering of RAX, which the second
(non-overlapping) DBG_VALUE was described by. With this patch we
get a second entry that only describes RDI after that clobbering.
* DebugInfo/ARM/partial-subreg.ll: This test seems to indicate a bug
in LiveDebugValues that is caused by it not being aware of fragments.
I have added some comments in the test case about that. Also, before
this patch DwarfDebug would incorrectly include a register-described
debug value from a preceding block in a location list entry.
Make it possible to TableGen code for FCONSTS and FCONSTD.
We need to make two changes to the TableGen descriptions of vfp_f32imm
and vfp_f64imm respectively:
* add GISelPredicateCode to check that the immediate fits in 8 bits;
* extract the SDNodeXForms into separate definitions and create a
GISDNodeXFormEquiv and a custom renderer function for each of them.
There's a lot of boilerplate to get the actual value of the immediate,
but it basically just boils down to calling ARM_AM::getFP32Imm or
ARM_AM::getFP64Imm.
Summary:
In an upcoming commit the history map will be changed so that it
contains explicit entries for instructions that clobber preceding debug
values, rather than Begin-End range pairs, so generalize the name to
"Entry".
Also, prefix the iterator variable names in buildLocationList() with
"E". In an upcoming commit the entry will have query functions such as
"isD(e)b(u)gValue", which could at a glance make one confuse it for
iterations over MachineInstrs, so make the iterator names a bit more
distinct to avoid that.
David Stenberg [Wed, 10 Apr 2019 09:07:32 +0000 (09:07 +0000)]
[DebugInfo] Make InstrRange into a class, NFC
Summary:
Replace use of std::pair by creating a class for the debug value
instruction ranges instead. This is a preparatory refactoring for
improving handling of clobbered fragments.
In an upcoming commit the Begin pointer will become a PointerIntPair, so
it will be cleaner to have a getter for that.
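A minimal sketch of what such a class looks like (illustrative, simplified from the actual patch):

```
struct MachineInstr; // opaque here

// A named class instead of std::pair<const MachineInstr *, const
// MachineInstr *>: call sites use getBegin()/getEnd(), so the Begin
// member can later be swapped for a PointerIntPair without touching
// users.
class InstrRange {
  const MachineInstr *Begin;
  const MachineInstr *End;

public:
  InstrRange(const MachineInstr *Begin, const MachineInstr *End = nullptr)
      : Begin(Begin), End(End) {}
  const MachineInstr *getBegin() const { return Begin; }
  const MachineInstr *getEnd() const { return End; }
  bool isClosed() const { return End != nullptr; }
  void endRange(const MachineInstr *MI) { End = MI; }
};
```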
[VPLAN] Minor improvement to testing and debug messages.
1. Use computed VF for stress testing.
2. If the computed VF does not produce vector code (VF smaller than 2), force VF to be 4.
3. Test vectorization of i64 data on AArch64 to make sure we generate VF != 4 (on X86 that was already tested on AVX).
Patch by Francesco Petrogalli <francesco.petrogalli@arm.com>
Fangrui Song [Wed, 10 Apr 2019 07:44:23 +0000 (07:44 +0000)]
[DWARF] Simplify LineTable::findRowInSeq
We want the last row whose address is less than or equal to Address.
This can be computed as upper_bound - 1, which is simpler than
lower_bound followed by skipping equal rows in a loop.
Since FirstRow (LowPC) does not satisfy the predicate (OrderByAddress)
while LastRow-1 (HighPC) does, we can decrease the search range by two.
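A minimal sketch of the upper_bound-based lookup, assuming rows sorted by ascending address (the types and helper below are illustrative, not the actual DWARFDebugLine API):

```
#include <algorithm>
#include <cstdint>
#include <vector>

struct Row { uint64_t Address; };

// Returns the index of the last row with Row.Address <= Address, or
// size_t(-1) if every row is above Address.
size_t findRow(const std::vector<Row> &Rows, uint64_t Address) {
  auto It = std::upper_bound(Rows.begin(), Rows.end(), Address,
                             [](uint64_t LHS, const Row &RHS) {
                               return LHS < RHS.Address;
                             });
  if (It == Rows.begin())
    return static_cast<size_t>(-1); // no row at or below Address
  return static_cast<size_t>(It - Rows.begin()) - 1;
}
```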
Check AlwaysOverflow condition for usubo. The implementation is the
same as the existing handling for uaddo and umulo. Handling for saddo
and ssubo will follow (smulo doesn't have the necessary ValueTracking
support).
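A sketch of that check using plain inclusive ranges (the real implementation works on KnownBits / ConstantRange; these names are illustrative):

```
#include <cstdint>

enum class OverflowResult { NeverOverflows, MayOverflow, AlwaysOverflows };

// [Lo, Hi] inclusive unsigned range for a value, a stand-in for what
// known bits / computeConstantRange provide.
struct URange { uint64_t Lo, Hi; };

OverflowResult computeForUnsignedSub(URange LHS, URange RHS) {
  if (LHS.Hi < RHS.Lo)
    return OverflowResult::AlwaysOverflows; // max(X) < min(Y): must wrap
  if (LHS.Lo >= RHS.Hi)
    return OverflowResult::NeverOverflows;  // min(X) >= max(Y): never wraps
  return OverflowResult::MayOverflow;
}
```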
[ObjC][ARC] Convert the retainRV marker that is passed as a named
metadata into a module flag in the auto-upgrader and make the ARC
contract pass read the marker as a module flag.
This is needed to fix a bug where ARC contract wasn't inserting the
retainRV marker when LTO was enabled, which caused objects returned
from a function to be auto-released.
[X86] Move the 2 byte VEX optimization for MOV instructions back to the X86AsmParser::processInstruction where it used to be. Block when {vex3} prefix is present.
Years ago I moved this to an InstAlias using VR128H/VR128L. But now that we support the {vex3} pseudo prefix, we need to block the optimization when it is set to match gas behavior.
Jim Lin [Wed, 10 Apr 2019 01:56:32 +0000 (01:56 +0000)]
[Sparc] Fix incorrect MI insertion position for spilling f128.
Summary:
Obviously, the newly built MIs (sethi+add or sethi+xor+add) for constructing a large offset
should be inserted before the newly created MI for storing the even register into memory.
So the insertion position should be *StMI instead of II.
Before the fix:
std %f0, [%g1+80]
sethi 4, %g1 <<<
add %g1, %sp, %g1 <<< these two instructions should be put before "std %f0, [%g1+80]".
sethi 4, %g1
add %g1, %sp, %g1
std %f2, [%g1+88]
[X86] Support the EVEX versions vcvt(t)ss2si and vcvt(t)sd2si with the {evex} pseudo prefix in the assembler.
The EVEX versions are ambiguous with the VEX versions based on operands alone so we had explicitly dropped
them from the AsmMatcher table. Unfortunately, when we add them they incorrectly show up in the table before
their VEX counterparts. This is different from how the prioritization normally works.
To fix this we have to explicitly reject the instructions unless the {evex} prefix has been seen.
Robert Widmann [Tue, 9 Apr 2019 22:31:56 +0000 (22:31 +0000)]
[LLVM-C] Correct The Current Debug Location Accessors
Summary: Deprecate the existing accessors for the "current debug location" of an IRBuilder. The setter could not handle being reset to NULL, and the getter would create bogus metadata if the NULL location was returned. Provide direct metadata-based accessors instead.
Robert Widmann [Tue, 9 Apr 2019 22:27:51 +0000 (22:27 +0000)]
[LLVM-C] Add Bindings to Access an Instruction's DebugLoc
Summary: Provide direct accessors to supplement LLVMSetInstDebugLocation. In addition, properly accept and return the NULL location. The old accessors provided no way to do this, so the current debug location cannot currently be cleared.
Robert Widmann [Tue, 9 Apr 2019 21:53:31 +0000 (21:53 +0000)]
[LLVM-C] Add Section and Symbol Iterator Accessors for Object File Binaries
Summary: This brings us to full feature parity with the old API, so I've deprecated it and updated the tests. I'll do a follow-up patch to do some more cleanup and documentation work in this header.
[X86] Fix a dangling StringRef issue introduced in r358029.
I was attempting to convert mnemonics to lower case after processing a pseudo prefix. But the ParseOperands just hold a StringRef for tokens so there is nowhere to allocate the memory.
Add FIXMEs for the lower case issue which also exists in the prefix parsing code.
[AArch64][GlobalISel] Add isel support for vector G_ICMP and G_ASHR & G_SHL
The selection for G_ICMP is unfortunately not currently importable from SDAG
due to the use of custom SDNodes. To support this, this selection method has an
opcode table which has been generated by a script, indexed by various
instruction properties. Ideally, in the future, we will have GISel-native selection
patterns that we can write in tablegen to improve on this.
For selection of some types we also need support for G_ASHR and G_SHL which are
generated as a result of legalization. This patch also adds support for them,
generating the same code as SelectionDAG currently does.
[GlobalISel][AArch64] Allow CallLowering to handle types which are normally
required to be passed as different register types. E.g. <2 x i16> may need to
be passed as a larger <2 x i32> type, so formal argument lowering needs to be able
to truncate it back. Likewise, when dealing with returns of these types, they need
to be widened back in the appropriate way.
The uadd and umul cases are currently handled, the usub, sadd, ssub
and smul cases are not. usub, sadd and ssub already have the
necessary ValueTracking support, smul doesn't.
[X86] Add support for {vex2}, {vex3}, and {evex} to the assembler to match gas. Use {evex} to improve one of our 32-bit AVX512 tests.
These can be used to force the encoding used for instructions.
{vex2} will fail if the instruction is not VEX encoded, but otherwise won't do anything since we prefer vex2 when possible. Might need to skip use of the _REV MOV instructions for this too, but I haven't done that yet.
{vex3} will force the instruction to use the 3 byte VEX encoding or fail if there is no VEX form.
{evex} will force the instruction to use the EVEX version or fail if there is no EVEX version.
[DAGCombiner][X86][SystemZ] Canonicalize SSUBO with immediate RHS to SADDO by negating the immediate.
This lines up with what we do for regular subtract and it matches up better with X86 assumptions in isel patterns that add with immediate is more canonical than sub with immediate.
sdiv-canonicalize.ll fails after this revision. The fold needs to be
moved outside the branch handling constant operands. However when this
is done there are further test changes, so I'm reverting this in the
meantime.
Change the code to always handle the unsigned+signed cases together
with the same basic structure for add/sub/mul. The simple folds are
always handled first and then the ValueTracking overflow checks are
used.
Remove the unit at a time option
Removes the code from opt and the pass manager builder.
The code was unused - even by the C library code that was supposed to set
it and had been removed previously.
[ValueTracking] Use computeConstantRange() for signed sub overflow determination
This is the same change as D60420 but for signed sub rather than
signed add: range information is intersected into the known bits
result, allowing us to detect more no-overflow/always-overflow conditions.
One of the out-of-tree targets has regressed with this patch. Reverting
it for now; liveness will instead be fully reconstructed in the case where
the pass is used after the LIS is created, to resolve the regression.
[ValueTracking] Use computeConstantRange() in signed add overflow determination
This is D59386 for the signed add case. The computeConstantRange()
result is now intersected into the existing known bits information,
allowing us to detect additional no-overflow/always-overflow conditions
(though the latter isn't used yet).
This (finally...) covers the motivating case from D59071.
[InstCombine] prevent possible miscompile with sdiv+negate of vector op
Similar to:
rL358005
Forego folding arbitrary vector constants to fix a possible miscompile bug.
We can enhance the transform if we do want to handle the more complicated
vector case.
[InstCombine] prevent possible miscompile with negate+sdiv of vector op
// 0 - (X sdiv C) -> (X sdiv -C) provided the negation doesn't overflow.
This fold has been around for many years and nobody noticed the potential
vector miscompile from overflow until recently...
So it seems unlikely that there's much demand for a vector sdiv optimization
on arbitrary vector constants, so just limit the matching to splat constants
to avoid the possible bug.
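To see the scalar hazard that motivates the splat restriction (a small standalone demonstration, not LLVM code):

```
#include <climits>
#include <cstdio>

int main() {
  // For a divisor of INT_MIN, the negated divisor -C is not
  // representable, so rewriting 0 - (X sdiv C) as X sdiv -C would be
  // invalid. A vector constant needs only one such element to be
  // unsafe, hence the restriction to (checked) splat constants.
  int X = 42, C = INT_MIN;
  std::printf("0 - (X sdiv C) = %d\n", -(X / C)); // well-defined: 0
  // Evaluating X / -C here would first compute -C, which overflows.
  return 0;
}
```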