Teresa Johnson [Mon, 17 Jul 2017 19:25:38 +0000 (19:25 +0000)]
Revert "Restore with fix "[ThinLTO] Ensure we always select the same function copy to import""
This reverts commit r308114 (and follow-on fixes to the test).
There is a linking failure in a ThinLTO bot:
http://green.lab.llvm.org/green/job/clang-stage2-configure-Rthinlto_build/3663/
(an undefined reference). It seems like it must be a second-order
effect of the heuristic change I made, and it may take some time to
reproduce locally and track down. Therefore, reverting for now.
Adam Nemet [Mon, 17 Jul 2017 18:00:41 +0000 (18:00 +0000)]
[opt-viewer] Accept directories that are searched for opt.yaml files
This allows passing the build directory containing all the opt.yaml files,
rather than using find | xargs, which may invoke opt-viewer multiple times
and produce incomplete HTML output.
The patch generalizes the same functionality from opt-diff.
This adds support for the new 128-bit vector float instructions of z14.
Note that these instructions actually only operate on the f128 type,
since each 128-bit vector register can hold only one 128-bit float
value. However, this is still preferable to the legacy 128-bit float
instructions, since those operate on pairs of floating-point registers
(so we can hold at most 8 values in registers), while the new
instructions use single vector registers (so we can hold up to 32
values in registers).
Adding support includes:
- Enabling the instructions for the assembler/disassembler.
- CodeGen for the instructions. This includes allocating the f128
type now to the VR128BitRegClass instead of FP128BitRegClass.
- Scheduler description support for the instructions.
Note that for a small number of operations, we have no new vector
instructions (like integer <-> 128-bit float conversions), and so
we use the legacy instruction and then reformat the operand
(i.e. copy between a pair of floating-point registers and a
vector register).
This adds support for the new 32-bit vector float instructions of z14.
This includes:
- Enabling the instructions for the assembler/disassembler.
- CodeGen for the instructions, including new LLVM intrinsics.
- Scheduler description support for the instructions.
- Update to the vector cost function calculations.
In general, CodeGen support for the new v4f32 instructions closely
matches support for the existing v2f64 instructions.
This patch series adds support for the IBM z14 processor. This part includes:
- Basic support for the new processor and its features.
- Support for new instructions (except vector 32-bit float and 128-bit float).
- CodeGen for new instructions, including new LLVM intrinsics.
- Scheduler description for the new processor.
- Detection of z14 as host processor.
Support for the new 32-bit vector float and 128-bit vector float
instructions is provided by separate patches.
Nirav Dave [Mon, 17 Jul 2017 15:09:47 +0000 (15:09 +0000)]
Avoid store merge to f128 in context of noimplicitfloat NFCI.
Prevent store merge from merging stores into an invalid 128-bit store
(realized as an f128 value in the context of the noimplicitfloat
attribute). Previously, such stores were immediately split back into
valid stores.
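As a rough illustration (not the actual store-merge code), the constraint boils down to checking the noimplicitfloat function attribute before synthesizing wide FP stores; the helper name below is hypothetical:
===
#include "llvm/IR/Function.h"

using namespace llvm;

// Hypothetical helper: under noimplicitfloat the store merger must not
// synthesize implicit f128 stores, so wide FP store merging is disabled.
static bool allowsWideFPStoreMerge(const Function &F) {
  return !F.hasFnAttribute(Attribute::NoImplicitFloat);
}
===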
Sam Kolton [Mon, 17 Jul 2017 14:23:38 +0000 (14:23 +0000)]
[AMDGPU] CodeGen: check dst operand type to determine if omod is supported for VOP3 instructions
Summary:
Previously, CodeGen checked the type of the first src operand to determine whether omod is supported by an instruction. This isn't correct for some instructions: e.g. V_CMP_EQ_F32 has floating-point src operands but doesn't support omod.
Changed the .td files to check the dst operand instead of the src operand.
Alex Bradbury [Mon, 17 Jul 2017 11:41:30 +0000 (11:41 +0000)]
[YAMLTraits] Add filename support to yaml::Input
Summary:
The current yaml::Input constructor takes a StringRef of data as its
first parameter, discarding any filename information that may have been
present when a YAML file was opened. Add an alternate yaml::Input
constructor that takes a MemoryBufferRef, which can have a filename
associated with it. This leads to clearer diagnostic messages.
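A hedged usage sketch of the new constructor (the surrounding helper is hypothetical; only the MemoryBufferRef-taking yaml::Input constructor is what this patch adds):
===
#include "llvm/Support/MemoryBuffer.h"
#include "llvm/Support/YAMLTraits.h"

using namespace llvm;

static void parseYAMLFile(StringRef Path) {
  auto BufOrErr = MemoryBuffer::getFile(Path);
  if (!BufOrErr)
    return;
  // A MemoryBufferRef carries both the contents and the buffer identifier
  // (the filename), so diagnostics can report where the error came from.
  yaml::Input YIn((*BufOrErr)->getMemBufferRef());
  // ... deserialize with YIn >> SomeMappedObject ...
}
===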
[X86] Use MSVC's __cpuidex intrinsic instead of inline assembly in getHostCPUName/getHostCPUFeatures for 32-bit builds too.
We're already using it in 64-bit builds because 64-bit MSVC doesn't support inline assembly.
As far as I know, we were using inline assembly because, at the time the code was added, we had to support MSVC 2008 pre-SP1, while the intrinsic was only added to MSVC in SP1. Now that we no longer have to support that, we should be able to just use the intrinsic.
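For reference, a minimal MSVC-only sketch of the intrinsic in question (illustrative, not the getHostCPUName code itself):
===
#include <intrin.h>
#include <cstdio>
#include <cstring>

int main() {
  int Info[4];            // EAX, EBX, ECX, EDX
  __cpuidex(Info, 0, 0);  // leaf 0, subleaf 0: max leaf and vendor string
  char Vendor[13] = {};
  std::memcpy(Vendor + 0, &Info[1], 4); // EBX
  std::memcpy(Vendor + 4, &Info[3], 4); // EDX
  std::memcpy(Vendor + 8, &Info[2], 4); // ECX
  std::printf("max leaf: %d, vendor: %s\n", Info[0], Vendor);
  return 0;
}
===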
[X86] X86::CMOV to Branch heuristic based optimization.
The LLVM compiler recognizes opportunities to transform a branch into IR select instruction(s), which will later be lowered into an X86::CMOV instruction, assuming no other optimization eliminates the SelectInst.
However, it is not always profitable to emit an X86::CMOV instruction. For example, a branch is preferable to an X86::CMOV instruction when:
1. The branch is well predicted.
2. The condition operand is expensive compared to the True-value and False-value operands.
The CodeGenPrepare pass has a shallow optimization that tries to convert a SelectInst into a branch, but it is not enough.
This commit implements a machine optimization pass that converts X86::CMOV instruction(s) into branches, based on a conservative heuristic.
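An illustrative (hypothetical) source pattern where the heuristic would favor a branch: the condition is expensive and, in this imagined workload, well predicted:
===
#include <cstdint>

// A CMOV makes the result data-dependent on the slow division, so nothing
// downstream can start until the divide retires; a well-predicted branch
// lets the common path proceed speculatively.
int64_t pick(int64_t A, int64_t B, int64_t X, int64_t Y) {
  return (A / B > 100) ? X : Y; // lowered via SelectInst -> X86::CMOV today
}
===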
Some platforms have problems with emitting constructors when class
templates get explicitly instantiated.
This patch fixes the bug reported in D35315 by replacing `= default`
with an empty constructor body.
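A minimal, hypothetical illustration of the workaround (the class and member names below are made up):
===
template <typename T> struct Widget {
  // Before: Widget() = default;  // constructor not emitted on some platforms
  Widget() {}                     // After: empty body, reliably emitted
  T Value{};
};

// The explicit instantiation that exposed the problem.
template struct Widget<int>;

int main() {
  Widget<int> W;
  (void)W;
  return 0;
}
===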
Teresa Johnson [Sun, 16 Jul 2017 00:28:22 +0000 (00:28 +0000)]
Fix bot failures from r308114
Finally figured out that some bots were failing from r308114
with the message:
llvm-lto2: LTO::run failed: No available targets are compatible with this triple.
after adding in some other checking that finally caused this to show up
in the FileCheck output.
Added "REQUIRES: x86-registered-target" which should fix it.
Teresa Johnson [Sat, 15 Jul 2017 22:58:06 +0000 (22:58 +0000)]
Restore with fix "[ThinLTO] Ensure we always select the same function copy to import"
This restores r308078/r308079 with a fix for bot non-determinism (make
sure we run llvm-lto in single-threaded mode so the debug output doesn't
get interleaved).
[IR] Implement Constant::isNegativeZeroValue/isZeroValue/isAllOnesValue/isOneValue/isMinSignedValue for ConstantDataVector without going through getElementAsConstant
Summary:
Currently these methods call ConstantDataVector::getSplatValue, which uses getElementAsConstant to create a Constant object representing the element value. This method incurs a map lookup to see if we have already created such a Constant and, if not, allocates a new Constant object.
This patch changes these methods to use getElementAsAPFloat and getElementAsInteger so we can just examine the data values directly.
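The idea, sketched as a hypothetical free-standing helper (not the actual LLVM implementation): examine the raw element data directly rather than materializing Constant objects:
===
#include "llvm/IR/Constants.h"

using namespace llvm;

static bool allElementsZero(const ConstantDataVector *CDV) {
  for (unsigned I = 0, E = CDV->getNumElements(); I != E; ++I) {
    if (CDV->getElementType()->isFloatingPointTy()) {
      if (!CDV->getElementAsAPFloat(I).isZero())
        return false;
    } else if (CDV->getElementAsInteger(I) != 0) {
      return false;
    }
  }
  return true;
}
===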
[InstCombine] Improve the expansion in SimplifyUsingDistributiveLaws to handle cases where one side doesn't simplify, but the other side resolves to an identity value
Summary:
If one side simplifies to the identity value for the inner opcode, we can replace the value with just the operation that can't be simplified.
I've removed a couple of now-unneeded special cases in visitAnd and visitOr. There are probably other cases I missed.
This fold hit the trifecta:
1. It was untested.
2. It oversteps (multiuse is not checked, so it increases the instruction count).
3. It is incomplete (doesn't work for vectors).
Revert r308078 (and subsequent tweak in r308079) which introduces a test
that appears to exhibit non-determinism and is flaking on the bots
pretty consistently.
r308078: [ThinLTO] Ensure we always select the same function copy to import
r308079: Require asserts in new test that uses debug flag
[PM/LCG] Teach the LazyCallGraph to maintain reference edges from every
function to every defined function known to LLVM as a library function.
LLVM can introduce calls to these functions either by replacing other
library calls or by recognizing patterns (such as memset_pattern or
vector math patterns) and replacing those with calls. When these library
functions are actually defined in the module, we need to have reference
edges to them initially so that we visit them during the CGSCC walk in
the right order and can effectively rebuild the call graph afterward.
This was discovered when building code with Fortify enabled as that is
a common case of both inline definitions of library calls and
simplifications of code into calling them.
In extreme cases, such as LTO-ing with libc, this can introduce *many* more
reference edges. I discussed a bunch of different options with folks, but
all of them are unsatisfying. They either make the graph operations
substantially more complex even when there are *no* defined libfuncs, or
they introduce some other complexity into the callgraph. So this patch
goes with the simplest possible solution of actual synthetic reference
edges. If this proves to be a memory problem, I'm happy to implement one
of the clever techniques to save memory here.
Simon Atanasyan [Sat, 15 Jul 2017 07:14:25 +0000 (07:14 +0000)]
[mips] Handle the `long-calls` feature flags in the MIPS backend
If the `long-calls` feature flag is enabled, disable use of the `jal`
instruction. Instead, call a function by first loading its address into
a register and then calling through the contents of that register.
Matt Arsenault [Sat, 15 Jul 2017 05:52:59 +0000 (05:52 +0000)]
AMDGPU: Return correct type during argument lowering
The type needs to be casted back to the original argument type.
Fixes an assert that, for some reason, is only triggered when
using -debug.
Includes an additional combine to avoid test regressions
from having conversions mixed with multiple Assert[SZ]ext
nodes. On subtargets where i16 is legal, this was producing an i32
register with an i16 AssertZExt, truncated to i16 with another i8
AssertZExt.
Yonghong Song [Sat, 15 Jul 2017 05:41:42 +0000 (05:41 +0000)]
bpf: generate better lowering code for certain select/setcc instructions
Currently, for code like the following:
===
inner_map = bpf_map_lookup_elem(outer_map, &port_key);
if (!inner_map) {
inner_map = &fallback_map;
}
===
the compiler generates (pseudo) code like the following:
===
I1: r1 = bpf_map_lookup_elem(outer_map, &port_key);
I2: r2 = 0
I3: if (r1 == r2)
I4: r6 = &fallback_map
I5: ...
===
During the kernel verification process, after I1, r1 holds the state
map_ptr_or_null. If the I3 condition is not taken
(path [I1, I2, I3, I5]), r1 should supposedly become map_ptr.
Unfortunately, the kernel does not recognize this pattern
and r1 remains map_ptr_or_null at insn I5. This will cause
a verification failure later on.
The kernel, however, is able to recognize the pattern "if (r1 == 0)"
properly and give r1 the map_ptr state in the above case.
LLVM here generates suboptimal code which causes kernel verification
failure. This patch fixes the issue by changing BPF insn pattern
matching and lowering to generate proper code when the right-hand
operand of the above condition is a constant. A test case
is also added.
Signed-off-by: Yonghong Song <yhs@fb.com>
Teresa Johnson [Sat, 15 Jul 2017 04:53:05 +0000 (04:53 +0000)]
[ThinLTO] Ensure we always select the same function copy to import
Summary:
Check if the first eligible callee is under the instruction threshold.
Checking this on the first eligible callee ensures that we don't end
up selecting different callees to import when we invoke this routine
with different thresholds due to reaching the callee via paths that
are shallower or hotter (when there are multiple copies, i.e. with
weak or linkonce linkage). We don't want to leave the decision of which
copy to import up to the backend.
Currently, getUserCost() only checks the src and dst types of an EXT to decide
whether it is free or not. This change first checks the types, then calls
isExtFreeImpl(), and finally checks whether the EXT can form an ExtLoad. At
present, only AArch64 has a customized implementation of isExtFreeImpl() to
check if the EXT can be folded into its use.
Jakub Kuderski [Sat, 15 Jul 2017 01:27:16 +0000 (01:27 +0000)]
[Dominators] Fix reachable visitation and reenable a unit test
This fixes a minor bug in insertion to a reachable node that caused
DominatorTree.InsertDeleteExhaustive flakiness. The patch also adds
a new testcase for this exact failure.
Jakub Kuderski [Fri, 14 Jul 2017 23:49:12 +0000 (23:49 +0000)]
[Dominators] Temporarily disable a flaky unit test
The DominatorTree.InsertDeleteExhaustive test uses an RNG with a
constant seed to generate different sequences of updates. The test
fails on some buildbots, and this patch disables it for now.
[libFuzzer] Allow non-fuzzer args after -ignore_remaining_args=1
With this change, libFuzzer will ignore any arguments after a sigil
argument, but it will preserve these arguments at the end of the
command line when launching subprocesses. Using this, it's possible to
handle positional and single-dash arguments to the program under test
by discarding everything up to -ignore_remaining_args=1 in
LLVMFuzzerInitialize.
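A hedged sketch of a fuzz target that consumes its own arguments after the sigil (the argument handling below is hypothetical; only the entry points are the standard libFuzzer interface):
===
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

static std::vector<std::string> TargetArgs;

extern "C" int LLVMFuzzerInitialize(int *argc, char ***argv) {
  bool AfterSigil = false;
  for (int I = 1; I < *argc; ++I) {
    const char *Arg = (*argv)[I];
    if (AfterSigil)
      TargetArgs.push_back(Arg);      // these are ours; libFuzzer ignores them
    else if (std::strcmp(Arg, "-ignore_remaining_args=1") == 0)
      AfterSigil = true;              // libFuzzer stops parsing here
  }
  return 0;
}

extern "C" int LLVMFuzzerTestOneInput(const uint8_t *Data, size_t Size) {
  (void)Data; (void)Size;             // exercise the target using TargetArgs
  return 0;
}
===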
Jakub Kuderski [Fri, 14 Jul 2017 21:58:53 +0000 (21:58 +0000)]
[Dominators] Implement incremental deletions
Summary:
This patch implements incremental edge deletions.
It also makes DominatorTreeBase store a pointer to the parent function. The parent function is needed to perform full rebuilds during some deletions, but it is also used to verify that inserted and deleted edges come from the same function.
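A hedged usage sketch (the deleteEdge name follows the incremental-update API; the exact signature in this revision may differ):
===
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/Dominators.h"

using namespace llvm;

static void removeEdgeAndUpdate(DominatorTree &DT, BasicBlock *From,
                                BasicBlock *To) {
  // ... first remove the CFG edge From -> To (e.g. rewrite the terminator) ...
  DT.deleteEdge(From, To); // incremental deletion instead of a full recompute
}
===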
[AArch64][Falkor] Avoid HW prefetcher tag collisions (step 1)
Summary:
This patch is the first step in reducing HW prefetcher instruction tag
collisions in inner loops for Falkor. It adds a pass that annotates IR
loads with metadata to indicate that they are known to be strided loads,
and adds a target lowering hook that translates this metadata to a
target-specific MachineMemOperand flag.
A follow-on change will use this MachineMemOperand flag to rewrite
instructions to reduce tag collisions.
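An illustrative sketch of step 1, the IR-level annotation; the metadata kind string used here is an assumption for illustration, not necessarily what the pass emits:
===
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/raw_ostream.h"

using namespace llvm;

int main() {
  LLVMContext Ctx;
  Module M("example", Ctx);
  FunctionType *FTy = FunctionType::get(Type::getInt32Ty(Ctx),
                                        {Type::getInt32PtrTy(Ctx)}, false);
  Function *F =
      Function::Create(FTy, Function::ExternalLinkage, "load_elem", &M);
  IRBuilder<> B(BasicBlock::Create(Ctx, "entry", F));
  // The annotation pass would mark loads it proves to be strided; lowering
  // then translates the metadata into a target MachineMemOperand flag.
  LoadInst *LI = B.CreateLoad(&*F->arg_begin());
  LI->setMetadata("falkor.strided.access", MDNode::get(Ctx, {}));
  B.CreateRet(LI);
  M.print(outs(), nullptr);
  return 0;
}
===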
Summary:
When checking for memory dependencies between calls using MemorySSA,
handle cases where the calls have no MemoryAccess associated with them
because the AA analysis being used has determined that the call does not
read/write memory.
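A hedged sketch of the shape of the fix (the helper is hypothetical; the real dependency logic lives in the pass being fixed):
===
#include "llvm/Analysis/MemorySSA.h"
#include "llvm/IR/Instructions.h"

using namespace llvm;

static bool callsMayDepend(MemorySSA &MSSA, CallInst *A, CallInst *B) {
  MemoryUseOrDef *MA = MSSA.getMemoryAccess(A);
  MemoryUseOrDef *MB = MSSA.getMemoryAccess(B);
  // If AA proved a call neither reads nor writes memory, MemorySSA gives it
  // no MemoryAccess at all, so a null result means "no dependence" here.
  if (!MA || !MB)
    return false;
  // ... otherwise walk MemorySSA to check the actual dependency ...
  return true;
}
===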
The Select in the above pattern will be unfolded and then jump-threaded. The
current implementation does not allow a CMP in the middle of the PHI and the Select.