Yaxun Liu [Thu, 23 Nov 2017 03:08:51 +0000 (03:08 +0000)]
[NFC] CodeGen: Handle shift amount type in DAGTypeLegalizer::SplitInteger
This patch reverts the change to X86TargetLowering::getScalarShiftAmountTy in
rL318727 and moves the logic to DAGTypeLegalizer::SplitInteger.
The reason is that getScalarShiftAmountTy returns a shift amount type that
is suitable for common use cases in CodeGen. DAGTypeLegalizer::SplitInteger
is a rare situation which requires a shift amount type larger than what
getScalarShiftAmountTy returns. In this case, it is more reasonable to handle
the shift amount type specially in DAGTypeLegalizer::SplitInteger only. If
similar situations arise, the logic may be moved to a separate function.
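A minimal sketch of the idea, with illustrative names and an arbitrary widened type (not the verbatim patch):

  #include "llvm/CodeGen/SelectionDAG.h"
  #include "llvm/CodeGen/TargetLowering.h"
  using namespace llvm;

  // Pick a shift amount type for splitting an integer into two halves of
  // type LoVT: start from the target's preferred type and widen it in the
  // rare case where it cannot hold the half-width shift (e.g. splitting
  // i512 needs a shift amount of 256, which does not fit in i8).
  static EVT getSplitShiftAmountTy(const TargetLowering &TLI,
                                   SelectionDAG &DAG, EVT LoVT) {
    EVT ShAmtVT = TLI.getScalarShiftAmountTy(DAG.getDataLayout(), LoVT);
    unsigned ShAmtBits = ShAmtVT.getSizeInBits();
    if (ShAmtBits < 64 && LoVT.getSizeInBits() >= (1ULL << ShAmtBits))
      ShAmtVT = EVT::getIntegerVT(*DAG.getContext(), 64);
    return ShAmtVT;
  }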
Fedor Sergeev [Wed, 22 Nov 2017 20:59:53 +0000 (20:59 +0000)]
IR printing improvement for loop passes
Summary:
Loop-pass printing is somewhat deficient since it does not provide the
context around the loop (e.g. preheader). This context information becomes
pretty essential when analyzing transformations that move code out of the loop.
Extend printLoop to also cover the preheader and exit blocks (if any).
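A rough sketch of what the extended printing could look like (illustrative, not the exact implementation):

  #include "llvm/ADT/SmallVector.h"
  #include "llvm/Analysis/LoopInfo.h"
  #include "llvm/IR/BasicBlock.h"
  #include "llvm/Support/raw_ostream.h"
  using namespace llvm;

  // Print the loop body plus its surrounding context: the preheader (if
  // any) before the loop blocks, and the exit blocks after them.
  static void printLoopWithContext(Loop &L, raw_ostream &OS) {
    if (BasicBlock *Preheader = L.getLoopPreheader())
      Preheader->print(OS);
    for (BasicBlock *BB : L.blocks())
      BB->print(OS);
    SmallVector<BasicBlock *, 8> ExitBlocks;
    L.getExitBlocks(ExitBlocks);
    for (BasicBlock *BB : ExitBlocks)
      BB->print(OS);
  }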
[Hexagon] Implement buildVector32 and buildVector64 as utility functions
Change LowerBUILD_VECTOR to use those functions. This commit will temporarily
affect constant vector generation (it will generate constant-extended
values instead of non-extended combines), but the code for the general case
should be better. The constant selection part will be fixed later.
Rafael Espindola [Wed, 22 Nov 2017 19:59:05 +0000 (19:59 +0000)]
Allow TempFile::discard to be called twice.
We already allowed keep+discard. It is important to be able to discard
a temporary if a rename fails. It is also convenient as it allows the
use of RAII for discarding.
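A hedged usage sketch (error handling simplified; the function name and file-name model are made up):

  #include "llvm/ADT/Twine.h"
  #include "llvm/Support/Error.h"
  #include "llvm/Support/FileSystem.h"
  using namespace llvm;

  // Write through a temporary and rename it into place; discard() on the
  // failure path, and a later discard() from RAII-style cleanup is now a
  // harmless no-op.
  static Error writeAtomically(StringRef Final) {
    Expected<sys::fs::TempFile> Temp =
        sys::fs::TempFile::create(Twine(Final) + ".tmp%%%%%%");
    if (!Temp)
      return Temp.takeError();
    // ... write the contents through Temp->FD ...
    if (Error E = Temp->keep(Final)) {
      // The rename failed; discard the temporary file.
      consumeError(Temp->discard());
      return E;
    }
    return Error::success();
  }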
CachePruning: Allow limiting the number of files in the cache directory.
The default limit is 1000000 but it can be configured with a cache
policy. The motivation is that some filesystems (notably ext4) have
a limit on the number of files that can be contained in a directory
(separate from the inode limit).
Paul Robinson [Wed, 22 Nov 2017 15:33:17 +0000 (15:33 +0000)]
[DWARFv5] Support DW_FORM_strp in the .debug_line.dwo header.
As a side effect, the .debug_line section will be dumped in physical
order, rather than in the order that compile units refer to their
associated portions of the .debug_line section. These are probably
always the same order anyway, and no tests noticed the difference.
Nicolai Haehnle [Wed, 22 Nov 2017 12:25:21 +0000 (12:25 +0000)]
AMDGPU: Consider memory dependencies with moved instructions in SILoadStoreOptimizer
Summary:
This bug seems to have gone unnoticed because critical cases with LDS
instructions are eliminated by the peephole optimizer.
However, equivalent situations arise with buffer loads and stores
as well, so this fixes regressions since r317751 ("AMDGPU: Merge
S_BUFFER_LOAD_DWORD_IMM into x2, x4").
Fixes at least:
KHR-GL45.shader_storage_buffer_object.basic-operations-case1-cs
KHR-GL45.cull_distance.functional
piglit tes-input-gl_ClipDistance.shader_test
... and probably more
George Rimar [Wed, 22 Nov 2017 07:53:48 +0000 (07:53 +0000)]
[llvm-tblgen] - Stop using std::string in RecordKeeper.
RecordKeeper::getDef() is a hot spot: it shows up in profiling
and it creates a std::string instance for each search in RecordMap,
even though RecordKeeper::RecordMap could use StringRef as a key
instead to avoid that. This patch makes that change.
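A hedged illustration of the technique (not the verbatim patch): key the map on StringRef so a lookup never builds a temporary std::string, with the keys pointing into storage the records themselves keep alive.

  #include "llvm/ADT/StringRef.h"
  #include <map>
  #include <memory>
  #include <string>

  // Stand-in for llvm::Record; it owns its name, so a StringRef key into
  // that storage stays valid for the record's lifetime.
  struct Record {
    std::string Name;
    llvm::StringRef getName() const { return Name; }
  };

  struct RecordKeeperSketch {
    std::map<llvm::StringRef, std::unique_ptr<Record>> RecordMap;

    Record *getDef(llvm::StringRef Name) const {
      auto It = RecordMap.find(Name); // no std::string constructed
      return It == RecordMap.end() ? nullptr : It->second.get();
    }

    void addDef(std::unique_ptr<Record> R) {
      llvm::StringRef Key = R->getName(); // points into R's own storage
      RecordMap.emplace(Key, std::move(R));
    }
  };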
Craig Topper [Wed, 22 Nov 2017 07:11:03 +0000 (07:11 +0000)]
[X86] Lower all ISD::MGATHER nodes to X86ISD:MGATHER.
Now we consistently represent the mask result without relying on isel ignoring it.
We now have a more general SDNode and type constraints to represent these nodes in isel patterns. This allows us to represent both vXi1 and XMM/YMM mask types with a single set of constraints.
Max Kazantsev [Wed, 22 Nov 2017 06:21:39 +0000 (06:21 +0000)]
[SCEV] Strengthen variance condition in calculateLoopDisposition
Given loops `L1` and `L2` with AddRecs `AR1` and `AR2` varying in them respectively.
When identifying loop disposition of `AR2` w.r.t. `L1`, we only say that it is varying if
`L1` contains `L2`. But there is also a possible situation where `L1` and `L2` are
consecutive sibling loops within the parent loop. In this case, `AR2` is also varying
w.r.t. `L1`, but we don't correctly identify it.
It can lead, for example, to an attempt at incorrect folding. Consider:
AR1 = {a,+,b}<L1>
AR2 = {c,+,d}<L2>
EXAR2 = sext(AR2)
MUL = mul AR1, EXAR2
If we incorrectly assume that `EXAR2` is invariant w.r.t. `L1`, we can end up trying to
construct something like: `{a * {c,+,d}<L2>,+,b * {c,+,d}<L2>}<L1>`, which is incorrect
because `AR2` is not available on entrance of `L1`.
Both situations "`L1` contains `L2`" and "`L1` precedes sibling loop `L2`" can be handled
with one check: "header of `L1` dominates header of `L2`". This patch replaces the old
insufficient check with this one.
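A minimal sketch of the strengthened check (illustrative; the surrounding disposition logic is omitted):

  #include "llvm/Analysis/LoopInfo.h"
  #include "llvm/IR/Dominators.h"
  using namespace llvm;

  // An AddRec varying in L2 must also be treated as varying w.r.t. L1
  // whenever L1's header dominates L2's header; this covers both
  // "L1 contains L2" and "L1 precedes its sibling L2".
  static bool headerDominatesHeader(const DominatorTree &DT, const Loop *L1,
                                    const Loop *L2) {
    return DT.dominates(L1->getHeader(), L2->getHeader());
  }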
Davide Italiano [Wed, 22 Nov 2017 03:04:55 +0000 (03:04 +0000)]
[SCCP] Pick the right lattice value for constants.
After the dataflow algorithm proves that an argument is constant,
it replaces its value with the integer constant and drops the lattice
value associated with the DEF.
e.g. in the example we have @f() that's called twice:
call @f(undef, ...)
call @f(2, ...)
`undef` MEET 2 = 2 so we replace the argument and all its uses with
the constant 2.
Shortly after, tryToReplaceWithConstantRange() tries to get the lattice
value for the argument we just replaced, causing an assertion.
This function is a little peculiar as it runs when we're doing replacement
and not as part of the solver but still queries the solver.
The fix is to check whether we already replaced the value and, if so,
get a temporary lattice value for the constant.
Craig Topper [Tue, 21 Nov 2017 23:36:42 +0000 (23:36 +0000)]
[X86] Move the information about the feature bits used by compiler-rt and shared by Host.cpp to a .def file and TargetParser.h so clang can make use of it.
Since we keep Host.cpp and compiler-rt relatively in sync, clang can use this information as a proxy.
Change the representation of COFF comdats so that a COFF linker
is able to accurately resolve comdats between IR and native object
files. Specifically, apply name mangling to comdat names consistently
with native object files, and do not export comdats with an internal
leader because they do not affect symbol resolution.
Chad Rosier [Tue, 21 Nov 2017 18:08:34 +0000 (18:08 +0000)]
[AArch64] Mark mrs of TPIDR_EL0 (thread pointer) as *having* side effects.
This partially reverts r298851. The underlying issue is that we don't
currently model the dependency between mrs (read system register) and
msr (write system register) instructions.
For example, an mrs read of TPIDR_EL0 must never be reordered across a
subsequent msr write to it.
Oliver Stannard [Tue, 21 Nov 2017 16:20:25 +0000 (16:20 +0000)]
[ARM] Remove pre-UAL FLDM/FSTM aliases
These are pre-UAL syntax, and we don't support any other pre-UAL instructions,
with the exception of FLDMX/FSTMX, which don't have a UAL equivalent. Therefore
there's no reason to keep them or their AsmParser hacks around.
With the AsmParser hacks removed, the FLDMX and FSTMX instructions get the same
operand diagnostics as the UAL instructions.
Alina Sbirlea [Tue, 21 Nov 2017 15:45:46 +0000 (15:45 +0000)]
Add MemorySSA as loop dependency, disabled by default [NFC].
Summary:
First step in adding MemorySSA as dependency for loop pass manager.
Adding the dependency under a flag.
New pass manager: MSSA pointer in LoopStandardAnalysisResults can be null.
Legacy and new pass manager: Use cl::opt EnableMSSALoopDependency. Disabled by default.
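A sketch of such a flag; only the variable name comes from this change, the option string and description are assumptions:

  #include "llvm/Support/CommandLine.h"
  using namespace llvm;

  // Gate the MemorySSA dependency of the loop pass managers; off by default.
  static cl::opt<bool> EnableMSSALoopDependency(
      "enable-mssa-loop-dependency", cl::init(false), cl::Hidden,
      cl::desc("Enable MemorySSA dependency for loop pass manager"));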
Oliver Stannard [Tue, 21 Nov 2017 15:16:50 +0000 (15:16 +0000)]
[Asm] Improve "too few operands" errors
- We can still emit this error if the actual instruction has two or more
operands missing compared to the expected one.
- We should only emit this error once per instruction.
Oliver Stannard [Tue, 21 Nov 2017 15:12:05 +0000 (15:12 +0000)]
[Asm] Finish matching once end of formal and actual lists reached (NFC)
This is NFC, as the matcher would continue looping up to the maximum
number of operands with no effect, but this should improve performance a
bit, and makes the debug trace clearer.
Sander de Smalen [Tue, 21 Nov 2017 12:26:06 +0000 (12:26 +0000)]
[TableGen] AsmMatcher: Fix bug with reported diagnostic for operand.
Summary:
The generated diagnostic by the AsmMatcher isn't always applicable to the AsmOperand.
This is because the code only updates the diagnostic if it is more specific than the previous one. However, when the matcher has validated earlier operands and moved on to the next operand (for some instruction/alias for which all previous operands are valid), an InvalidOperand diagnostic for that operand should be reported, rather than the more specific message about a previous operand of some other instruction/alias candidate.
Alex Bradbury [Tue, 21 Nov 2017 12:00:19 +0000 (12:00 +0000)]
[RISCV][NFC] Clean up RISCVDAGToDAGISel::Select
As pointed out in post-commit review of r318738, `return ReplaceNode(..)` when
both ReplaceNode and the current function return void is confusing. This patch
switches to a more obvious early return, and uses a plain if to
catch the one case we currently care about. A future patch that adds further
custom instruction selection can introduce a switch.
properlyDominates() shouldn't be used as a sort key. It causes different output between libstdc++ and libc++.
Instead, this introduces an RPOT (reverse post-order traversal) based ordering. In most cases, it works for CSE.
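A hedged illustration of the general technique (not the actual patch): number the blocks by one reverse post-order traversal and sort on that number, which is a strict weak ordering and therefore behaves the same under libstdc++ and libc++.

  #include "llvm/ADT/DenseMap.h"
  #include "llvm/ADT/PostOrderIterator.h"
  #include "llvm/ADT/SmallVector.h"
  #include "llvm/IR/CFG.h"
  #include "llvm/IR/Function.h"
  #include <algorithm>
  using namespace llvm;

  static void sortBlocksByRPO(Function &F,
                              SmallVectorImpl<BasicBlock *> &Blocks) {
    DenseMap<BasicBlock *, unsigned> RPONumber;
    unsigned N = 0;
    ReversePostOrderTraversal<Function *> RPOT(&F);
    for (BasicBlock *BB : RPOT)
      RPONumber[BB] = N++;
    // properlyDominates() is not a strict weak ordering; the RPO number is.
    std::sort(Blocks.begin(), Blocks.end(),
              [&](BasicBlock *A, BasicBlock *B) {
                return RPONumber.lookup(A) < RPONumber.lookup(B);
              });
  }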
Alex Bradbury [Tue, 21 Nov 2017 08:23:08 +0000 (08:23 +0000)]
[RISCV] Use register X0 (ZERO) for constant 0
The obvious approach of defining a pattern like the one below actually doesn't
work:
`def : Pat<(i32 0), (i32 X0)>;`
As was noted when Lanai made this change (https://reviews.llvm.org/rL288215),
attempting to handle the constant 0 in tablegen leads to assertions due to a
physical register being used where a virtual register is expected.
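A hedged sketch of the alternative, materializing the zero during instruction selection instead (names and plumbing are illustrative):

  #include "llvm/CodeGen/SelectionDAG.h"
  using namespace llvm;

  // During instruction selection, materialize a zero of the native integer
  // type as a copy from the zero register rather than via a pattern.
  static bool selectConstantZero(SelectionDAG &CurDAG, SDNode *Node,
                                 MVT XLenVT, unsigned ZeroReg /* X0 */) {
    auto *C = dyn_cast<ConstantSDNode>(Node);
    if (!C || C->getAPIntValue() != 0 ||
        Node->getSimpleValueType(0) != XLenVT)
      return false;
    SDValue Zero = CurDAG.getCopyFromReg(CurDAG.getEntryNode(), SDLoc(Node),
                                         ZeroReg, XLenVT);
    CurDAG.ReplaceAllUsesWith(SDValue(Node, 0), Zero);
    return true;
  }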
Alex Bradbury [Tue, 21 Nov 2017 08:11:03 +0000 (08:11 +0000)]
[RISCV] Support and tests for a variety of additional LLVM IR constructs
Previous patches primarily ensured that codegen was possible for the standard
RISC-V instructions. However, there are a number of IR inputs that wouldn't be
appropriately lowered. This patch both adds test cases and supports lowering
for a number of these cases:
* Improved sext/zext/trunc support
* Support for setcc variants that don't map directly to RISC-V instructions
* Lowering mul, and hence support for external symbols
* addc, adde, subc, sube
* mulhs, srem, mulhu, urem, udiv, sdiv
* {srl,sra,shl}_parts
* brind
* br_jt
* bswap, ctlz, cttz, ctpop
* rotl, rotr
* BlockAddress operands
Alex Bradbury [Tue, 21 Nov 2017 07:51:32 +0000 (07:51 +0000)]
[RISCV] Implement lowering of ISD::SELECT
Although ISD::SELECT_CC is a more natural match for RISCVISD::SELECT_CC (and
ultimately the integer RISC-V conditional branch instructions), we choose to
expand ISD::SELECT_CC and lower ISD::SELECT. The appropriate compare+branch
will be created in the case where an ISD::SELECT condition value is created by
an ISD::SETCC node, which operates on XLen types. Other datatypes such as
floating point don't have conditional branch instructions, and lowering
ISD::SELECT allows more flexibility for handling these cases.
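The operation-action setup this describes might look roughly like the following fragment of the target's TargetLowering constructor (a sketch; XLenVT stands for the native integer type):

  // Expand SELECT_CC back into SETCC + SELECT, and custom-lower SELECT so
  // it can be matched to the target's compare-and-branch based select.
  setOperationAction(ISD::SELECT_CC, XLenVT, Expand);
  setOperationAction(ISD::SELECT, XLenVT, Custom);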
Summary:
Before this change, the FDR mode implementation relied on at-thread-exit
handling to return buffers back to the (global) buffer queue. This
introduces issues with the initialisation of the thread_local objects
which, even through the use of pthread_setspecific(...) may eventually
call into an allocation function. Similar to previous changes in this
line, we're finding that there is a huge potential for deadlocks when
initialising these thread-locals when the memory allocation
implementation is also xray-instrumented.
In this change, we limit the call to pthread_setspecific(...) to provide
a non-null value to associate to the key created with
pthread_key_create(...). While this doesn't completely eliminate the
potential for the deadlock(s), it does allow us to still clean up at
thread exit when we need to. The change is that we don't need to do more
work when starting and ending a thread's lifetime. We also have a test
to make sure that we actually can safely recycle the buffers in case we
end up re-using the buffer(s) available from the queue on multiple
thread entry/exits.
This change cuts across both LLVM and compiler-rt to allow us to update
both the XRay runtime implementation as well as the library support for
loading these new versions of the FDR mode logging. Version 2 of the FDR
logging implementation makes the following changes:
* Introduction of a new 'BufferExtents' metadata record that's outside
of the buffer's contents but is written before the actual buffer.
This data is associated with the Buffer handed out by the BufferQueue
rather than a record that occupies bytes in the actual buffer.
* Removal of the "end of buffer" records. This is in-line with the
changes we described above, to allow for optimistic logging without
explicit record writing at thread exit.
The optimistic logging model operates under the following assumptions:
* Threads writing to the buffers will potentially race with the thread
attempting to flush the log. To keep this situation from occurring,
we make sure that when we've finalized the logging implementation,
that threads will see this finalization state on the next write, and
either choose to not write records the thread would have written or
write the record(s) in two phases -- first write the record(s), then
update the extents metadata.
* We change the buffer queue implementation so that once it has handed
out a buffer to a thread, that buffer is assumed to be marked "used"
so that partial writes can be captured. None of this will be
safe to handle if threads are racing to write the extents records
and the reader thread is attempting to flush the log. The optimism
comes from the finalization routine being required to complete
before we attempt to flush the log.
This is a fairly significant semantics change for the FDR
implementation. This is why we've decided to update the version number
for FDR mode logs. The tools, however, still need to be able to support
older versions of the log until we finally deprecate those earlier
versions.
Yaxun Liu [Tue, 21 Nov 2017 02:29:54 +0000 (02:29 +0000)]
[AMDGPU] Fix DAGTypeLegalizer::SplitInteger for shift amount type
DAGTypeLegalizer::SplitInteger uses default pointer size as shift amount constant type,
which causes less performant ISA in amdgcn---amdgiz target since the default pointer
type is i64 whereas the desired shift amount type is i32.
This patch fixes that by using TLI.getScalarShiftAmountTy in DAGTypeLegalizer::SplitInteger.
The X86 change is necessary since splitting i512 requires a shift amount of 256, which
cannot be held by i8 (whose maximum value is 255).
Richard Trieu [Tue, 21 Nov 2017 01:45:17 +0000 (01:45 +0000)]
Add default values for member variables.
Initialize IsVis2 and IsVis3 in SparcSubtarget::initializeSubtargetDependencies.
MSan detected uninitialized read of IsVis3 after r318704. Initializing the
variables to false will prevent undefined behavior.
Zachary Turner [Tue, 21 Nov 2017 01:20:28 +0000 (01:20 +0000)]
Re-revert "Refactor debuginfo-tests."
This is still breaking greendragon.
At this point I give up until someone can fix the greendragon
bots, and I will probably abandon this effort in favor of using
a private github repository.
David Blaikie [Tue, 21 Nov 2017 00:23:17 +0000 (00:23 +0000)]
YAML/XRay/std::vector: Fix ODR violation by removing local specialization
There's a generic partial specialization for all std::vector<T> that
does what's desired, so no need for this full specialization that's
causing an ODR violation anyway.
Craig Topper [Mon, 20 Nov 2017 23:08:50 +0000 (23:08 +0000)]
[SelectionDAG] When promoting the result of a VSELECT, make sure we promote the condition to the SetCC type for the final result type not the original type.
Normally this would be cleaned up by promoting the condition operand next. But in the attached case we promoted the result from v2i48 to v2i64 and the condition from v2i1 to v2i48. Then we tried to "promote" the v2i48 condition back to v2i1 because that's what the SetCC result type for v2i64 is on X86 with VLX. But promote is either a NOP or SIGN_EXTEND and this would need a truncation.
With the change here we now get the SetCC type of v2i1 when we're handling the result promotion and the operand no longer needs to be promoted itself.
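A hedged sketch of the key query (illustrative, not the verbatim patch):

  #include "llvm/CodeGen/SelectionDAG.h"
  #include "llvm/CodeGen/TargetLowering.h"
  using namespace llvm;

  // Ask for the condition (mask) type against the promoted result type, so
  // a v2i64 result on X86 with VLX yields a v2i1 mask directly.
  static EVT getPromotedVSelectCondVT(const TargetLowering &TLI,
                                      SelectionDAG &DAG, EVT PromotedResVT) {
    return TLI.getSetCCResultType(DAG.getDataLayout(), *DAG.getContext(),
                                  PromotedResVT);
  }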
Fedor Sergeev [Mon, 20 Nov 2017 22:33:58 +0000 (22:33 +0000)]
[Sparc] efficient pattern for UINT_TO_FP conversion
Summary:
While investigating a performance degradation of the imagick benchmark,
an inefficient pattern was found for UINT_TO_FP conversion.
That pattern causes a RAW hazard in the generated assembly. Specifically,
the uitofp IR operator results in poor assembly:
st %i0, [%fp - 952]
ldd [%fp - 952], %f0
It stores a 32-bit integer register into a memory location and then
loads 64-bit floating-point data from that location, which is exactly
the RAW hazard case. To optimize it, it is possible to use SPISD::ITOF
and SPISD::XTOF for the conversion from integer to floating-point type,
and ISD::BITCAST to copy from an integer register into a floating-point
register.
The fix is to write a custom UINT_TO_FP pattern using SPISD::ITOF,
SPISD::XTOF, and ISD::BITCAST.
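A hedged C++-style sketch of the idea; the actual change may be expressed as a TableGen pattern, and the extra handling needed for unsigned inputs is omitted:

  #include "llvm/CodeGen/SelectionDAG.h"
  using namespace llvm;

  // Move the integer into an FP register with a bitcast and convert there,
  // instead of storing to the stack and reloading (the RAW hazard above).
  static SDValue lowerIntToFPSketch(SDValue Op, SelectionDAG &DAG,
                                    unsigned ItofOpc /* e.g. SPISD::ITOF */) {
    SDLoc DL(Op);
    SDValue Tmp = DAG.getNode(ISD::BITCAST, DL, MVT::f32, Op.getOperand(0));
    return DAG.getNode(ItofOpc, DL, Op.getValueType(), Tmp);
  }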
Zachary Turner [Mon, 20 Nov 2017 21:41:36 +0000 (21:41 +0000)]
Resubmit "Refactor debuginfo-tests" again.
This was reverted due to the tests being run twice on some
build bots. Each run had a slightly different configuration
due to the way in which it was being invoked. This fixes
the problem (albeit in a somewhat hacky way). Hopefully in
the future we can get rid of the workflow of running
debuginfo-tests as part of clang, and then this hack can
go away.
Yonghong Song [Mon, 20 Nov 2017 21:37:58 +0000 (21:37 +0000)]
bpf: add a test case for trunc-op optimization
Commit b5cbc7760ab8 ("[bpf] allow direct and indirect calls")
allowed more than one function in the bpf program, and
commit 114353884415 ("bpf: fix a bug in trunc-op optimization")
fixed a bug in trunc-op optimization which only showed up
with more than one function in the bpf program.
This patch added a test case for trunc-op optimization
for bpf programs with two functions. Reverting commit
"bpf: fix a bug in trunc-op optimization" will cause
failure for this test case.
Signed-off-by: Yonghong Song <yhs@fb.com>
Hiroshi Yamauchi [Mon, 20 Nov 2017 21:03:38 +0000 (21:03 +0000)]
Add heuristics for irreducible loop metadata under PGO
Summary:
Add the following heuristics for irreducible loop metadata:
- When an irreducible loop header is missing the loop header weight metadata,
give it the minimum weight seen among other headers.
- Annotate indirectbr targets with the loop header weight metadata (as they are
likely to become irreducible loop headers after indirectbr tail duplication.)
These greatly improve the accuracy of the block frequency info of the Python
interpreter loop (eg. from ~3-16x off down to ~40-55% off) and the Python
performance (eg. unpack_sequence from ~50% slower to ~8% faster than GCC) due to
better register allocation under PGO.
[SelectionDAG] Make sorting predicate stronger to remove non-deterministic ordering
Summary:
This fixes failures in the following tests uncovered by D39245:
LLVM :: CodeGen/ARM/ifcvt3.ll
LLVM :: CodeGen/ARM/switch-minsize.ll
LLVM :: CodeGen/X86/switch.ll