Serguei Katkov [Fri, 2 Aug 2019 09:32:52 +0000 (09:32 +0000)]
[Loop Peeling] Introduce an option to disable profile-based peeling.
This patch adds the ability to disable profile-based peeling, the kind of
peeling that peels off all iterations and as a result prohibits further
unroll/peeling attempts on the loop.
The motivation is to be able to separate peeling usage in the pipeline: in
the first part we peel only selected iterations if needed, and later in the
pipeline we apply full peeling, which prohibits further peeling.
Peter Smith [Fri, 2 Aug 2019 08:05:14 +0000 (08:05 +0000)]
[AliasAnalysis] Initialize a member variable that may be used by unit test.
The unit tests in BasicAliasAnalysisTest use the alias analysis API
directly and do not call setAAResults to initialize AAR. This gives a
valgrind error "Conditional jump depends on uninitialized variable".
On most buildbots the variable is nullptr, but in some cases it can be
non-null, leading to seemingly random failures.
These tests were disabled in r366986. With the initialization they can be
enabled again.
Rui Ueyama [Fri, 2 Aug 2019 04:48:30 +0000 (04:48 +0000)]
Improve raw_ostream so that you can "write" colors using operator<<
1. raw_ostream supports ANSI colors so that you can write messages to
the terminal with colors. Previously, in order to change and reset the
color, you had to call the `changeColor` and `resetColor` functions,
respectively.
So, to print out "error: " in red, for example, you had to do
something like this:
OS.changeColor(raw_ostream::RED);
OS << "error: ";
OS.resetColor();
With this patch, you can write the same code as follows:
OS << raw_ostream::RED << "error: " << raw_ostream::RESET;
2. Add a boolean flag to raw_ostream so that you can disable colored
output. If you disable colors, changeColor, operator<<(Color),
resetColor and other color-related functions have no effect.
Most LLVM tools automatically print messages in color, and
you can disable that by passing a flag such as `--disable-colors`.
This new flag makes it easy to write code that works that way.
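As a small sketch of how the pieces fit together (the setter name for the
new boolean flag is an assumption here, not confirmed by the message):

  #include "llvm/Support/raw_ostream.h"

  void printError(llvm::raw_ostream &OS, bool UseColor) {
    OS.enable_colors(UseColor); // assumed setter for the new flag
    // With colors disabled, the color manipulators below become no-ops.
    OS << llvm::raw_ostream::RED << "error: " << llvm::raw_ostream::RESET
       << "something went wrong\n";
  }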
Serguei Katkov [Fri, 2 Aug 2019 04:29:23 +0000 (04:29 +0000)]
[Loop Peeling] Do not close further unroll/peel if profile based peeling was not used.
The current peeling cost model can decide to peel off only some iterations,
not all of them, to eliminate conditions on a phi. At the same time, if any
peeling happens, the door for further unroll/peel optimizations on that loop
closes, because part of the code assumes that if peeling happened, it was
profile-based peeling and all iterations were peeled off.
To resolve this inconsistency, the patch introduces a flag which states
whether full peeling based on the profile is enabled or not, and the
peeling cost model is able to modify this field just as it does PeelCount.
In a separate patch I will introduce an option to allow/disallow peeling
based on the profile.
To avoid infinite loop peeling, the patch tracks the total number of peeled
iterations through the llvm.loop.peeled.count loop metadata.
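As a sketch, the accumulated count can be read back from the loop metadata
roughly like this (simplified, not the actual pass code):

  #include "llvm/Analysis/LoopInfo.h"
  #include "llvm/IR/Constants.h"
  #include "llvm/IR/Metadata.h"

  // Return the value of the "llvm.loop.peeled.count" entry if present, else 0.
  static unsigned getPeeledCount(const llvm::Loop *L) {
    llvm::MDNode *LoopID = L->getLoopID();
    if (!LoopID)
      return 0;
    // Operand 0 is the self-reference; metadata entries start at 1.
    for (unsigned I = 1, E = LoopID->getNumOperands(); I < E; ++I) {
      auto *MD = llvm::dyn_cast<llvm::MDNode>(LoopID->getOperand(I));
      if (!MD || MD->getNumOperands() != 2)
        continue;
      auto *Name = llvm::dyn_cast<llvm::MDString>(MD->getOperand(0));
      if (!Name || Name->getString() != "llvm.loop.peeled.count")
        continue;
      return llvm::mdconst::extract<llvm::ConstantInt>(MD->getOperand(1))
          ->getZExtValue();
    }
    return 0;
  }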
Kai Luo [Fri, 2 Aug 2019 03:14:17 +0000 (03:14 +0000)]
[PowerPC][Peephole] Check if `extsw`'s second operand is a virtual register
Summary:
When combining `extsw` and `sldi` in `PPCMIPeephole`, we have to check
if `extsw`'s second operand is a virtual register; otherwise we might
get a miscompile.
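The guard is conceptually a one-liner; a hedged sketch (names assumed, not
the actual PPCMIPeephole code):

  #include "llvm/CodeGen/MachineInstr.h"
  #include "llvm/CodeGen/TargetRegisterInfo.h"

  // Only combine when the source register is virtual: a physical register
  // here has no reliably trackable definition, so folding it can miscompile.
  static bool isSafeExtswInput(const llvm::MachineInstr &ExtSw) {
    unsigned SrcReg = ExtSw.getOperand(1).getReg();
    return llvm::TargetRegisterInfo::isVirtualRegister(SrcReg);
  }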
Kang Zhang [Fri, 2 Aug 2019 03:09:07 +0000 (03:09 +0000)]
[NFC][CodeGen] Modify the type element of TailCalls to simplify the dupRetToEnableTailCallOpts()
Summary:
The old code can be simplified by defining the element type of TailCalls as `BasicBlock` rather than `CallInst`. Also, I use a range-based for loop instead of the plain for loop.
Daniel Sanders [Thu, 1 Aug 2019 23:44:42 +0000 (23:44 +0000)]
Prevent vregs leaking into the MC layer via TargetRegisterClass::contains()
Summary:
The MC layer doesn't expect to deal with vregs, but
TargetRegisterClass::contains() forwards into MCRegisterClass::contains(),
and this can cause vregs to turn up in the MC layer APIs. Add guards
against this to prevent it from becoming a problem as we replace unsigned
with a new MCRegister object for improved type safety.
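One plausible shape for such a guard, sketched as a wrapper (the committed
code may assert or guard in a different spot):

  #include "llvm/CodeGen/TargetRegisterInfo.h"

  // Answer "not contained" for vregs instead of forwarding them into the
  // MC layer's physical-register bitmask query.
  static bool classContains(const llvm::TargetRegisterClass &RC,
                            unsigned Reg) {
    if (llvm::TargetRegisterInfo::isVirtualRegister(Reg))
      return false;
    return RC.contains(Reg);
  }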
[dsymutil] Fix heap-use-after-free related to the LinkOptions.
In r367348, I changed dsymutil to pass the LinkOptions by value instead
of by const reference. However, the options were still captured by
reference in the LinkLambda. This patch fixes that by passing them in by
value.
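The bug class is easy to reproduce in isolation; an illustrative example
with simplified, hypothetical names:

  #include <functional>

  struct LinkOptions { bool Verbose = false; };

  std::function<void()> makeLinkLambda(LinkOptions Options) {
    // Capturing Options by reference ([&Options]) would dangle once this
    // function returns and the lambda runs later: heap-use-after-free.
    // Capturing by value keeps the lambda self-sufficient.
    return [Options] { (void)Options.Verbose; };
  }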
Rong Xu [Thu, 1 Aug 2019 22:36:34 +0000 (22:36 +0000)]
[PGO] Add PGO support at -O0 in the experimental new pass manager
Add PGO support at -O0 in the experimental new pass manager to match the
behavior of the legacy pass manager.
Also change the gcc-flag-compatibility.c test to make it more complete:
(1) change the match string to "profc" and "profd" to ensure the
instrumentation is happening.
(2) add IR format proftext so that PGO use compilation is tested.
The previous change, which fixed a crash in the vectorizer, introduced
performance regressions. The condition to preserve the pointer
address space during the search is too tight; we only need to
match the size.
Daniel Sanders [Thu, 1 Aug 2019 21:18:34 +0000 (21:18 +0000)]
Move register namespacing definitions from TargetRegisterInfo to Register
Summary:
The namespacing in Register is currently slightly wrong, as there is a
(rarely used) stack slot namespace too. The namespacing doesn't use
anything from the Target, so we can move the definition from
TargetRegisterInfo to Register to keep it in one place.
Note: To keep the patch reasonably sized for review, I've left stub
functions in the original TargetRegisterInfo. We should update all the uses
instead.
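For reference, the namespacing being consolidated looks roughly like this
(simplified; the real code splits these across classes and adds asserts):

  // Physical registers are small positive integers, stack slots occupy
  // the [2^30, 2^31) range, and virtual registers have the sign bit set.
  static bool isStackSlot(unsigned Reg) { return int(Reg) >= (1 << 30); }
  static bool isVirtualRegister(unsigned Reg) { return int(Reg) < 0; }
  static bool isPhysicalRegister(unsigned Reg) {
    return int(Reg) > 0 && !isStackSlot(Reg);
  }
  static unsigned virtReg2Index(unsigned Reg) { return Reg & ~(1u << 31); }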
Matt Arsenault [Thu, 1 Aug 2019 19:10:05 +0000 (19:10 +0000)]
GlobalISel: Lower scalarizing unmerge of a vector to shifts
AMDGPU sometimes has legal s16 and <2 x s16> operations, but all
registers are really 32-bit. An unmerge destination really should be
widened to a 32-bit register. If widening the scalar results to the
target size would match the total vector size, bitcast to an integer and
extract the relevant bits with shifts.
I'm not sure if this is the right place for this. This could arguably
be part of widenScalar for the result. I also have a growing feeling
that we're missing a bitcast legalize action.
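A rough sketch of the lowering for a <2 x s16> unmerge, expressed with
MachineIRBuilder (not the actual legalizer code):

  #include "llvm/CodeGen/GlobalISel/MachineIRBuilder.h"

  // Dst0/Dst1 are the two s16 results; Src is the <2 x s16> input.
  static void lowerUnmergeViaShifts(llvm::MachineIRBuilder &B,
                                    llvm::Register Dst0, llvm::Register Dst1,
                                    llvm::Register Src) {
    llvm::LLT S32 = llvm::LLT::scalar(32);
    auto Cast = B.buildBitcast(S32, Src);            // <2 x s16> -> s32
    B.buildTrunc(Dst0, Cast);                        // low 16 bits
    auto Hi = B.buildLShr(S32, Cast, B.buildConstant(S32, 16));
    B.buildTrunc(Dst1, Hi);                          // high 16 bits
  }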
Craig Topper [Thu, 1 Aug 2019 18:49:07 +0000 (18:49 +0000)]
[X86] In decomposeMulByConstant, legalize the VT before querying whether the multiply is legal
If a type is larger than a legal type and needs to be split, we would previously allow the multiply to be decomposed even if the split multiply is legal. Since the shift + add/sub code would also need to be split, it's not any better to decompose it.
This patch figures out what type the mul will eventually be legalized to and then uses that type for the query. I tried just returning false for illegal types and letting them get handled after type legalization, but then we can't recognize an i64 constant splat on 32-bit targets, since it will be destroyed by type legalization. We could special case vectors of i64 to avoid that...
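A sketch of the new query (simplified, not the committed code):

  #include "llvm/CodeGen/TargetLowering.h"

  // Walk VT through type legalization, then ask about the final type.
  static bool mulIsLegalAfterLegalization(const llvm::TargetLowering &TLI,
                                          llvm::LLVMContext &Ctx,
                                          llvm::EVT VT) {
    while (TLI.getTypeAction(Ctx, VT) != llvm::TargetLowering::TypeLegal)
      VT = TLI.getTypeToTransformTo(Ctx, VT);
    return TLI.isOperationLegal(llvm::ISD::MUL, VT);
  }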
Craig Topper [Thu, 1 Aug 2019 18:48:57 +0000 (18:48 +0000)]
[X86] Add some test cases for 512-bit truncate to 128-bits with min-legal-vector-width=0 and prefer-vector-width=256.
We currently split the 512-bit type, truncate each half to 128 bits,
concatenate them, and then truncate again. Probably better to
truncate each half to 64 bits and then concat the results
using vpunpcklqdq.
Matt Arsenault [Thu, 1 Aug 2019 18:41:28 +0000 (18:41 +0000)]
CodeGen: Allow virtual registers in bundles
The note in the documentation suggests this restriction is a compile-time
optimization for architectures that make heavy use of
bundling. Allowing virtual registers in a bundle is useful for some
(non-R600) AMDGPU use cases, and such bundles are infrequent enough that
the cost does not matter.
A more common AMDGPU use case has already been using virtual registers
in bundles since r333691, although never calling finalizeBundle on
them and manually creating the use/def list on the BUNDLE
instruction. This is also relatively infrequent, and only happens for
consecutive sequences of some load/store types.
Alina Sbirlea [Thu, 1 Aug 2019 18:37:34 +0000 (18:37 +0000)]
[SimplifyCFG] Mark missed Changed to true.
Summary:
DominatorTree is invalid after SimplifyCFG because of a missed `Changed = true` when simplifying a branch condition and removing an edge.
Resolves PR42272.
Alina Sbirlea [Thu, 1 Aug 2019 18:28:28 +0000 (18:28 +0000)]
[MemorySSA] Set LoopSimplify to preserve MemorySSA in the NPM, if analysis exists.
Summary:
MemorySSA is preserved by LoopSimplify in the legacy pass manager, but not in the new pass manager.
Update LoopSimplify to preserve MemorySSA conditionally, when the analysis is available (the same behavior as the legacy pass manager).
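In the new pass manager the conditional preservation looks roughly like
this (a sketch; simplifyLoops here is a hypothetical worker):

  #include "llvm/Analysis/MemorySSA.h"
  #include "llvm/IR/PassManager.h"

  bool simplifyLoops(llvm::Function &F, llvm::MemorySSA *MSSA); // hypothetical

  llvm::PreservedAnalyses run(llvm::Function &F,
                              llvm::FunctionAnalysisManager &AM) {
    // Only fetch MemorySSA if an earlier pass has already computed it.
    auto *MSSAA = AM.getCachedResult<llvm::MemorySSAAnalysis>(F);
    llvm::MemorySSA *MSSA = MSSAA ? &MSSAA->getMSSA() : nullptr;
    if (!simplifyLoops(F, MSSA))
      return llvm::PreservedAnalyses::all();
    llvm::PreservedAnalyses PA;
    if (MSSA)
      PA.preserve<llvm::MemorySSAAnalysis>(); // safe: we kept it updated
    return PA;
  }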
Sjoerd Meijer [Thu, 1 Aug 2019 18:21:44 +0000 (18:21 +0000)]
[LV] Tail-Loop Folding
This allows folding of the scalar epilogue loop (the tail) into the main
vectorised loop body when the loop is annotated with a "vector predicate"
metadata hint. To fold the tail, instructions need to be predicated (masked),
enabling/disabling lanes for the remainder iterations.
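From the source side, the hint can be requested per loop, e.g. (assuming
the clang pragma spelling that emits this metadata):

  // Request tail folding: the epilogue is folded into the vector body,
  // with masked (predicated) instructions covering the remainder lanes.
  void saxpy(int n, float a, const float *x, float *y) {
  #pragma clang loop vectorize_predicate(enable)
    for (int i = 0; i < n; i++)
      y[i] = a * x[i] + y[i];
  }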
[WebAssembly] Assembler/InstPrinter: support call_indirect type index.
A TYPE_INDEX operand (as used by call_indirect) used to be represented
by the InstPrinter as a symbol (e.g. .Ltype_index0@TYPE_INDEX), which
was a bit of a mismatch with the WasmObjectWriter, which expects an
unnamed symbol to receive the signature from and then turn into a
reloc.
There was really no good way to round-trip this information. An earlier
version of this patch tried to attach the signature information using
a .functype, but that ran into trouble when the symbol was re-emitted
without a name. Removing the name was also a giant hack.
The current version changes the assembly syntax to have an inline
signature spec for TYPEINDEX operands that is always unnamed, which
is much more elegant both in syntax and in implementation (as the
assembler is now able to follow the same path as the regular backend).
[Attributor][FIX] Indicate a missing update change
Users of AAReturnedValues need to know if HasOverdefinedReturnedCalls
changed from false to true, as it will impact the result of the return
value traversal (calls are not ignored anymore).
Simon Atanasyan [Thu, 1 Aug 2019 16:04:29 +0000 (16:04 +0000)]
[mips] Fix lowering load/store instruction in PIC case
If an operand of the `lw/sw` instructions is a symbol, these instructions
are incorrectly lowered using a non-position-independent chain of commands.
For PIC code we should use `lw/addiu` instructions with the `R_MIPS_GOT16`
and `R_MIPS_LO16` relocations respectively. Instead, LLVM generates
position-dependent code with the `R_MIPS_HI16` and `R_MIPS_LO16`
relocations.
This patch provides a fix for the bug by handling PIC case separately in
the `MipsAsmParser::expandMemInst`. The main idea is to generate a chain
of PIC instructions to load a symbol address into a register and then
load the address content.
The fix is not optimal and does not fix all PIC-related problems. This
is a task for subsequent patches.
Kuba Mracek [Thu, 1 Aug 2019 15:51:14 +0000 (15:51 +0000)]
[llvm-objdump] Fix jumptable detection when disassembling Mach-O binaries
- Add LC_SEGMENT_64 handling in getSectionsAndSymbols to be able to find the base segment address from 64-bit Mach-O binaries.
- Add "data in code" detection into the !symbolTableWorked case, extract it into a separate function.
- Fix uninitialized variable usage on BaseSegmentAddress (initialize to 0).
- Add test.
This adds SimplifyMultipleUseDemandedBitsForTargetNode X86 support and uses it to allow us to peek through vector insertions to avoid dependencies on entire insertion chains.
Sam Elliott [Thu, 1 Aug 2019 12:42:31 +0000 (12:42 +0000)]
[RISCV] Add Custom Parser for Atomic Memory Operands
Summary:
GCC accepts both (reg) and 0(reg) for atomic instruction memory
operands. These instructions do not allow for an offset in their
encoding, so in the latter case, the 0 is silently dropped.
Due to how we have structured the RISCVAsmParser, the easiest way to add
support for parsing this offset is to add a custom AsmOperand and
parser. This parser drops all the parens, and just keeps the register.
This commit also adds a custom printer for these operands, which matches
the GCC canonical printer, printing both `(a0)` and `0(a0)` as `(a0)`.
David Green [Thu, 1 Aug 2019 11:22:03 +0000 (11:22 +0000)]
[ARM] Fix for MVE VREV64
The VREV64 instruction is apparently unpredictable if Qd == Qm, due to the
cross-beat nature of the instruction. This adds an earlyclobber to Qd, which
seems to be the same way we deal with this on other instructions like the
write-back on loads and stores.
[AArch64] Do not allocate unnecessary emergency slot.
Fix an issue where the compiler still allocates an emergency spill slot even
though it already decided to spill an extra callee-save register to use
as a scratch register.
Zi Xuan Wu [Thu, 1 Aug 2019 05:26:02 +0000 (05:26 +0000)]
recommit:[PowerPC] Eliminate loads/swap feeding swap/store for vector type by using big-endian load/store
On PowerPC, there is an instruction to load a vector in big-endian element order on a little-endian target.
So we can combine a vector load + reverse into a big-endian load to eliminate the swap instruction.
Also combine vector reverse + store into a big-endian store.
JF Bastien [Thu, 1 Aug 2019 03:30:45 +0000 (03:30 +0000)]
[NFC] Remove obsolete LLVM_GNUC_PREREQ
The current minimum GCC version is 4.8 (soon to be 5.1), so we don't need to check for older versions. While I'm around Compiler.h, also update some of the doxygen comments.
Matt Arsenault [Thu, 1 Aug 2019 03:25:52 +0000 (03:25 +0000)]
AMDGPU: Start redefining atomic PatFrags
Start migrating to a form that will be compatible with the global isel
emitter. Also should fix some overly lax checks on the memory type,
which allowed mis-selecting some illegal atomics.
Create unique, but identically-named ELF sections for explicitly-sectioned functions and globals when using -function-sections and -data-sections.
This allows functions and globals to be reordered later in the linking phase
(using the -symbol-ordering-file) even though reordering will be limited to
the scope of the explicit section.
[ScalarizeMaskedMemIntrin] Bitcast the mask to the scalar domain and use scalar bit tests for the branches.
X86 at least is able to use movmsk or kmov to move the mask to the scalar
domain. Then we can just use test instructions to test individual bits.
This is more efficient than extracting each mask element
individually.
I special cased v1i1 to use the previous behavior. This avoids
poor type legalization of bitcast of v1i1 to i1.
I've skipped expandload/compressstore as I think we need to
handle constant masks for those better first.
Many tests end up with duplicate test instructions due to tail
duplication in the branch folding pass. But the same thing
happens when constructing similar code in C, so it's not unique
to the scalarization.
Not sure if this lowering code will also be good for other targets,
but we're only testing X86 today.
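As a sketch, the per-lane bit test in the scalar domain described above
looks something like this (IRBuilder, simplified; not the pass code itself):

  #include "llvm/IR/IRBuilder.h"

  // Bitcast a <VF x i1> mask to an iVF integer and test bit I.
  static llvm::Value *emitLaneTest(llvm::IRBuilder<> &B, llvm::Value *Mask,
                                   unsigned VF, unsigned I) {
    llvm::Value *Scalar = B.CreateBitCast(Mask, B.getIntNTy(VF));
    llvm::Value *Bit = B.CreateAnd(
        Scalar, llvm::ConstantInt::get(Scalar->getType(), 1ULL << I));
    return B.CreateICmpNE(Bit,
                          llvm::Constant::getNullValue(Scalar->getType()));
  }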
[X86] Add DAG combine to fold any_extend_vector_inreg+truncstore to an extractelement+store
We have custom code that ignores the normal promoting type legalization on less than 128-bit vector types like v4i8 to emit pavgb, paddusb, psubusb since we don't have the equivalent instruction on a larger element type like v4i32. If this operation appears before a store, we can be left with an any_extend_vector_inreg followed by a truncstore after type legalization. When truncstore isn't legal, this will normally be decomposed into shuffles and a non-truncating store. This will then combine away the any_extend_vector_inreg and shuffle leaving just the store. On avx512, truncstore is legal so we don't decompose it and we had no combines to fix it.
This patch adds a new DAG combine to detect this case and emit either an extract_store for 64-bit stores or an extractelement+store for 32- and 16-bit stores. This makes the avx512 codegen match the avx2 codegen for these situations. I'm restricting this to only when -x86-experimental-vector-widening-legalization is false. When we're widening, we're not likely to create this any_extend_vector_inreg+truncstore combination. This means we should be able to remove this code when we flip the default. I would like to flip the default soon, but I need to investigate some performance regressions it's causing in our branch that I wasn't seeing on trunk.
Michael Berg [Wed, 31 Jul 2019 21:57:28 +0000 (21:57 +0000)]
Migrate some more fadd and fsub cases away from UnsafeFPMath control to utilize NoSignedZerosFPMath options control
Summary: Honoring no signed zeroes is also available as a user control through clang, separately from the fastmath or UnsafeFPMath context; DAG guards should reflect this.
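The guard pattern being migrated to, roughly (a sketch):

  #include "llvm/CodeGen/SelectionDAGNodes.h"
  #include "llvm/Target/TargetOptions.h"

  // Honor either the per-node fast-math flag or the dedicated global
  // option, rather than keying on the blanket UnsafeFPMath setting.
  static bool canIgnoreSignedZero(const llvm::SDNodeFlags &Flags,
                                  const llvm::TargetOptions &Options) {
    return Options.NoSignedZerosFPMath || Flags.hasNoSignedZeros();
  }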
Philip Reames [Wed, 31 Jul 2019 21:15:21 +0000 (21:15 +0000)]
[IndVars, RLEV] Support rewriting exit values in loops without known exits (prep work)
This is a preparatory patch for future work on supporting exit value rewriting in loops with a mixture of computable and non-computable exit counts. The intention is to be "mostly NFC" - i.e. not enable any interesting new transforms - but in practice, there are some small output changes.
The test differences are caused by cases where getSCEVAtScope can simplify a single entry phi without needing any knowledge of the loop.