Florian Hahn [Wed, 23 Aug 2017 11:53:24 +0000 (11:53 +0000)]
[ARM] Check for assembler instructions in test.
This test currently fails on some machines because isel is not registered. Update the test to run all passes and check the emitted assembly instructions for now.
Florian Hahn [Wed, 23 Aug 2017 10:20:59 +0000 (10:20 +0000)]
[ARM] Add missing patterns for insert_subvector.
Summary: In some cases, shufflevector instructions are transformed using insert_subvector nodes. The ARM backend was missing some insert_subvector patterns, which caused a failure during instruction selection. AArch64 has similar patterns.
Davide Italiano [Wed, 23 Aug 2017 09:14:37 +0000 (09:14 +0000)]
[InstCombine] Fold branches with irrelevant conditions to a constant.
InstCombine folds instructions with irrelevant conditions to undef.
This, as Nuno confirmed, is a bug
(see https://bugs.llvm.org/show_bug.cgi?id=33409#c1).
Given that the original motivation for the change was to remove a
use, we now fold to false instead, which reaches the same goal
without the undesired side effects.
Hiroshi Inoue [Wed, 23 Aug 2017 08:55:18 +0000 (08:55 +0000)]
[PowerPC] better instruction selection for OR (XOR) with a 32-bit immediate
- recommitting after fixing a test failure on MacOS
On PPC64, OR (XOR) with a 32-bit immediate can be done with only two instructions, i.e. ori + oris.
But the current LLVM generates three or four instructions for this purpose (and it also clobbers one GPR).
This patch makes PPC backend generate ori + oris (xori + xoris) for OR (XOR) with a 32-bit immediate.
e.g. (x | 0xFFFFFFFF) should be
ori 3, 3, 65535
oris 3, 3, 65535
but without this patch LLVM generates
li 4, 0
oris 4, 4, 65535
ori 4, 4, 65535
or 3, 3, 4
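For illustration, a minimal C++ sketch (names invented, not from the patch) of how the immediate splits across the two instructions:

#include <cstdint>

// ori ORs in the low 16 bits; oris ORs in the high 16 bits shifted
// left by 16, so two instructions cover any 32-bit immediate.
void splitImm(uint32_t Imm, uint16_t &Lo, uint16_t &Hi) {
  Lo = static_cast<uint16_t>(Imm & 0xFFFF); // ori  3, 3, Lo
  Hi = static_cast<uint16_t>(Imm >> 16);    // oris 3, 3, Hi
}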
Sjoerd Meijer [Wed, 23 Aug 2017 08:18:37 +0000 (08:18 +0000)]
[AArch64] ISel legalization debug messages. NFCI.
Debugging AArch64 instruction legalization and custom lowering is a really
unpleasant experience, because the dumps show nodes that appear out of thin air.
In commit r311444, some debug messages were added to SelectionDAG, the
target-independent part, and this patch adds some AArch64-specific messages.
Craig Topper [Wed, 23 Aug 2017 05:46:07 +0000 (05:46 +0000)]
[InstCombine] Remove an unnecessary dyn_cast to Instruction and a switch over two opcodes. Just dyn_cast to the specific instruction classes individually. NFC
Change the helper methods to take the more specific class as well.
Hiroshi Inoue [Wed, 23 Aug 2017 05:15:15 +0000 (05:15 +0000)]
[PowerPC] better instruction selection for OR (XOR) with a 32-bit immediate
On PPC64, OR (XOR) with a 32-bit immediate can be done with only two instructions, i.e. ori + oris.
But the current LLVM generates three or four instructions for this purpose (and it also clobbers one GPR).
This patch makes PPC backend generate ori + oris (xori + xoris) for OR (XOR) with a 32-bit immediate.
e.g. (x | 0xFFFFFFFF) should be
ori 3, 3, 65535
oris 3, 3, 65535
but without this patch LLVM generates
li 4, 0
oris 4, 4, 65535
ori 4, 4, 65535
or 3, 3, 4
[XRay][CodeGen] Use PIC-friendly code in XRay sleds; remove synthetic references in .text
Summary:
This change achieves two things:
- Redefine the Custom Event handling instrumentation points emitted by
the compiler to not require dynamic relocation of references to the
__xray_CustomEvent trampoline.
- Remove the synthetic reference we emit at the end of a function that
we used to keep auxiliary sections alive in favour of SHF_LINK_ORDER
associated with the section where the function is defined.
To achieve the custom event handling change, we've had to introduce the
concept of sled versioning -- this will need to be supported by the
runtime to allow us to understand how to turn on/off the new version of
the custom event handling sleds. That change has to land first before we
change the way we write the sleds.
To remove the synthetic reference, we rely on a relatively new linker
feature that preserves the sections that are associated with each other.
This allows us to limit the effects on the .text section of ELF
binaries.
Because we're still using absolute references that are resolved at
runtime for the instrumentation map (and function index) sections, we mark
these sections writable. In the future we can re-define the entries in
the map to use relative relocations instead, which can be statically
determined by the linker. That change will be a bit more invasive, so we
defer it for later.
Yonghong Song [Wed, 23 Aug 2017 04:25:57 +0000 (04:25 +0000)]
bpf: add variants of -mcpu=# and support for additional jmp insns
-mcpu=# will support:
. generic: the default insn set
. v1: insn set version 1, the same as generic
. v2: insn set version 2, version 1 + additional jmp insns
. probe: the compiler will probe the underlying kernel to
decide the proper version of the insn set.
We did not use -mcpu=native since llc/llvm will interpret -mcpu=native
as the underlying hardware architecture regardless of the -march value.
Currently, only x86_64 supports -mcpu=probe; other architectures will
silently revert to "generic".
Also added -mcpu=help to print available cpu parameters.
llvm will print out the information only if there is at least one
cpu and at least one feature. Add an unused dummy feature to
enable the printout.
Examples for usage:
$ llc -march=bpf -mcpu=v1 -filetype=asm t.ll
$ llc -march=bpf -mcpu=v2 -filetype=asm t.ll
$ llc -march=bpf -mcpu=generic -filetype=asm t.ll
$ llc -march=bpf -mcpu=probe -filetype=asm t.ll
$ llc -march=bpf -mcpu=v3 -filetype=asm t.ll
'v3' is not a recognized processor for this target (ignoring processor)
...
$ llc -march=bpf -mcpu=help -filetype=asm t.ll
Available CPUs for this target:
generic - Select the generic processor.
probe - Select the probe processor.
v1 - Select the v1 processor.
v2 - Select the v2 processor.
Available features for this target:
dummy - unused feature.
Use +feature to enable a feature, or -feature to disable it.
For example, llc -mcpu=mycpu -mattr=+feature1,-feature2
...
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Yonghong Song <yhs@fb.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Matthias Braun [Wed, 23 Aug 2017 03:17:59 +0000 (03:17 +0000)]
Add test case for r311511
This also changes the TailDuplicator to be configured explicitly for
pre/post-regalloc rather than relying on the isSSA() flag. This was
necessary to have `llc -run-pass` work reliably.
Matthias Braun [Tue, 22 Aug 2017 23:56:30 +0000 (23:56 +0000)]
TargetInstrInfo: Change duplicate() to work on bundles.
Adds infrastructure to clone whole instruction bundles rather than just
single instructions. This fixes a bug where tail duplication would
unbundle instructions while cloning.
This should unbreak the "Clang Stage 1: cmake, RA, with expensive checks
enabled" build on greendragon. The bot broke with r311139 hitting this
pre-existing bug.
Craig Topper [Tue, 22 Aug 2017 23:54:13 +0000 (23:54 +0000)]
[SelectionDAG] Make ISD::isConstantSplatVector always return an element sized APInt.
This partially reverts r311429 in favor of making ISD::isConstantSplatVector do something not confusing. Turns out the only other user of it was also having to deal with the weird property of it returning a smaller size.
So rather than continue to deal with this quirk everywhere, just make the interface do something sane.
Craig Topper [Tue, 22 Aug 2017 23:40:15 +0000 (23:40 +0000)]
[InstCombine] Remove check for sext of vector icmp from shouldOptimizeCast
Looks like for 'and' and 'or' we end up performing at least some of the transformations this is blocking in a roundabout way anyway.
For 'and sext(cmp1), sext(cmp2)' we end up later turning it into 'select cmp1, sext(cmp2), 0'. Then we optimize that back to 'sext (and cmp1, cmp2)'. This is the same result we would have gotten if shouldOptimizeCast hadn't blocked it. We do something analogous for 'or'.
With this patch we allow that transformation to happen directly in foldCastedBitwiseLogic, and we now support the same thing for 'xor'. This definitely opens up many other cases, but since we already reached the same result in a roundabout way for some cases, hopefully it's ok.
Jakub Kuderski [Tue, 22 Aug 2017 16:30:21 +0000 (16:30 +0000)]
[ADCE][Dominators] Reapply: Teach ADCE to preserve dominators
Summary:
This patch teaches ADCE to preserve both DominatorTrees and PostDominatorTrees.
This reapplies the original patch r311057 that was reverted in r311381.
The previous version wasn't using the batch update API for updating dominators,
which in very rare cases caused assertion failures.
[Debug info] Add new DbgValues after looping over DAG
I was contacted by Jesper Antonsson from Ericsson, who ran into problems
with r311181 in their test suites for an out-of-tree target.
Because of the latter I don't have a reproducer, but we definitely don't
want to modify the data structure on which we are iterating inside the
loop.
Sanjay Patel [Tue, 22 Aug 2017 16:21:45 +0000 (16:21 +0000)]
[x86] simplify runs and auto-generate full checks
I've replaced the two OS-specific runs with a generic run because
there's no functional difference in the resulting output that
we're checking. Also, the script still doesn't work with a Win
target.
Sam Parker [Tue, 22 Aug 2017 11:08:21 +0000 (11:08 +0000)]
[ARM][AArch64] v8.3-A JavaScript Conversion
Armv8.3-A adds instructions that convert a double-precision floating
point number to a signed 32-bit integer with round towards zero,
designed to improve JavaScript performance.
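For context, a hedged C++ sketch of the JavaScript-style semantics such a conversion implements (truncate toward zero, wrap modulo 2^32; the function name is illustrative):

#include <cmath>
#include <cstdint>

int32_t jsToInt32(double D) {
  if (!std::isfinite(D))
    return 0;                 // NaN and infinities map to 0
  double T = std::trunc(D);   // round towards zero
  // Reduce modulo 2^32, then reinterpret as a signed 32-bit value.
  double M = std::fmod(T, 4294967296.0);
  if (M < 0)
    M += 4294967296.0;
  return static_cast<int32_t>(
      static_cast<uint32_t>(static_cast<uint64_t>(M)));
}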
Sjoerd Meijer [Tue, 22 Aug 2017 10:43:51 +0000 (10:43 +0000)]
[SelectionDAG] Add getNode debug messages
This adds debug messages to various functions that create new SDValue nodes.
This is useful to have during legalization, for example, because otherwise
the dumps print legalization info for nodes that never appeared before.
Alex Bradbury [Tue, 22 Aug 2017 09:11:41 +0000 (09:11 +0000)]
Use report_fatal_error for unsupported calling conventions
The calling convention can be specified by the user in IR. Failing to support
a particular calling convention isn't a programming error, and so relying on
llvm_unreachable to catch and report an unsupported calling convention is not
appropriate.
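A minimal sketch of the distinction (the switch shape here is illustrative, not the patched code):

#include "llvm/IR/CallingConv.h"
#include "llvm/Support/ErrorHandling.h"

// Unsupported-but-valid user input gets report_fatal_error;
// llvm_unreachable is reserved for states a correct compiler can
// never reach.
static void checkCallConv(llvm::CallingConv::ID CC) {
  switch (CC) {
  case llvm::CallingConv::C:
    return; // supported
  default:
    llvm::report_fatal_error("unsupported calling convention");
  }
}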
George Rimar [Tue, 22 Aug 2017 08:50:56 +0000 (08:50 +0000)]
[lib/Analysis] - Mark personality functions as live.
This is PR33245.
The case I am fixing is the following:
Imagine we have 2 BC files: one defines and uses a personality routine,
while the second has only a declaration and also uses it.
Previously, the algorithm computing dead symbols (llvm::computeDeadSymbols) did
not know about personality routines and left them dead even if a function that
had the routine was live.
As a result, the thinLTOInternalizeAndPromoteGUID() method changed the binding for
such a symbol to local. Later, when LLD tried to link these objects, it failed
because one object had an undefined global symbol for the routine and the second
object contained a local definition instead of a global one.
The patch sets the live root flag on the corresponding FunctionSummary
for personality routines when we build the per-module summaries
during the compile step.
Craig Topper [Tue, 22 Aug 2017 05:40:17 +0000 (05:40 +0000)]
[X86] Prevent several calls to ISD::isConstantSplatVector from returning a narrower APInt than the original scalar type
ISD::isConstantSplatVector can shrink to the smallest splat width. But we don't check the size of the resulting APInt at all. This can cause us to misinterpret the results.
This patch just adds a flag to prevent the APInt from changing width.
Justin Bogner [Mon, 21 Aug 2017 22:57:06 +0000 (22:57 +0000)]
Re-apply "Introduce FuzzMutate library"
Same as r311392 with some fixes for library dependencies. Thanks to
Chapuni for helping work those out!
Original commit message:
This introduces the FuzzMutate library, which provides structured
fuzzing for LLVM IR, as described in my EuroLLVM 2017 talk. Most of
the basic mutators to inject and delete IR are provided, with support
for most basic operations.
Quentin Colombet [Mon, 21 Aug 2017 22:56:18 +0000 (22:56 +0000)]
[RegAlloc] Make sure live-ranges reflect the state of the IR when removing them
When removing a live-range we previously did not touch it, making debug
prints harder to read because the IR did not match what the
live-range information was saying.
This only affects debug printing, and it allows us to put stronger asserts in
the code (see r308906 for instance).
Craig Topper [Mon, 21 Aug 2017 22:56:12 +0000 (22:56 +0000)]
[ValueTracking] Add assertions that the starting Depth in isKnownToBeAPowerOfTwo and ComputeNumSignBitsImpl is not above MaxDepth
The function does an equality check later to terminate the recursion, but that won't work if it starts out too high. A similar assert already exists in computeKnownBits.
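The shape of the issue, as a simplified sketch (the constant and names are illustrative):

#include <cassert>

constexpr unsigned MaxDepth = 6;

// The recursion stops only on equality, so a starting Depth greater
// than MaxDepth would sail past the limit; the assertion catches such
// callers at entry.
unsigned computeImpl(unsigned Depth) {
  assert(Depth <= MaxDepth && "Depth must start within the limit");
  if (Depth == MaxDepth) // equality alone misses Depth > MaxDepth
    return 0;
  return 1 + computeImpl(Depth + 1);
}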
Justin Bogner [Mon, 21 Aug 2017 22:25:04 +0000 (22:25 +0000)]
Re-apply "Introduce FuzzMutate library"
Redo r311356 with a fix to avoid std::uniform_int_distribution<bool>.
The bool specialization is undefined according to the standard, even
though libc++ seems to have it.
Original commit message:
This introduces the FuzzMutate library, which provides structured
fuzzing for LLVM IR, as described in my EuroLLVM 2017 talk. Most
of the basic mutators to inject and delete IR are provided, with
support for most basic operations.
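For context, a sketch of portable replacements (the seed and names are illustrative):

#include <random>

int main() {
  std::mt19937 Rng(42);
  // std::uniform_int_distribution<bool> is undefined: bool is not one
  // of the integer types the standard permits. Either of these is
  // well-defined instead.
  bool A = std::uniform_int_distribution<int>(0, 1)(Rng) != 0;
  bool B = std::bernoulli_distribution(0.5)(Rng);
  return A == B;
}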
Tim Northover [Mon, 21 Aug 2017 21:56:11 +0000 (21:56 +0000)]
GlobalISel (AArch64): fix ABI at border between GPRs and SP.
If a struct would end up half in GPRs and half on SP, the ABI says it should
actually go entirely on the stack. We were getting this wrong in GlobalISel
before, causing compatibility issues.
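A hypothetical C++ example of the boundary case (the signature is invented for illustration):

// With seven of the eight integer argument registers already taken,
// this 16-byte struct no longer fits in registers alone; the AAPCS64
// rule is that it then goes entirely on the stack rather than being
// split across x7 and memory.
struct Pair { long A, B; };
void callee(long X0, long X1, long X2, long X3,
            long X4, long X5, long X6, Pair P);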
Steven Wu [Mon, 21 Aug 2017 21:49:13 +0000 (21:49 +0000)]
[IR] AutoUpgrade ModuleFlagBehavior for PIC and PIE level
Summary:
From r303590 on, the ModuleFlagBehavior for the PIC and PIE level changed from
Error to Max. This causes a bitcode compatibility issue when linking
against a bitcode static archive built with an old compiler.
Add an auto-upgrade path to upgrade the ModuleFlagBehavior in the
old bitcode to match the new one so IRLinker can link them.
Craig Topper [Mon, 21 Aug 2017 21:00:45 +0000 (21:00 +0000)]
[InstCombine] Move the checks for pointer types in getMaskedTypeForICmpPair earlier in the function
I don't think there's any reason to have them scattered about and on all 4 operands. We already have an early check that both compares must be the same type, and within a given compare the LHS and RHS must have the same type. Beyond that, I don't think there's any way this function returns anything valid for pointer types, so let's just return early and be done with it.
[Support, Windows] Handle long paths with unix separators
Summary:
The function widenPath() for Windows also normalizes long path names by
iterating over the path's components and calling append(). The
assumption during the iteration that separators are not returned by the
iterator doesn't hold because the iterators do return a separator when
the path has a drive name. Handle this case by ignoring separators
during iteration.
Zachary Turner [Mon, 21 Aug 2017 20:17:19 +0000 (20:17 +0000)]
[PDB] Serialize records into a stack-allocated buffer.
We were using a std::vector<> and resizing it to MaxRecordLength,
which is ~64KB, and doing so many times in a tight loop. This
was causing measurable performance impact when linking PDBs.
Patch by Alex Telishev
Differential Revision: https://reviews.llvm.org/D36940
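A rough sketch of the idea (the record type and writer are hypothetical stand-ins):

#include <array>
#include <cstddef>
#include <cstdint>
#include <vector>

struct Record { std::uint32_t Kind = 0; };
std::size_t serializeInto(const Record &R, std::uint8_t *Out) {
  Out[0] = static_cast<std::uint8_t>(R.Kind); // placeholder writer
  return 1;
}

constexpr std::size_t MaxRecordLength = 0xFF00;

// One stack buffer reused for every record, rather than resizing a
// std::vector to ~64KB on each iteration of a hot loop.
void serializeAll(const std::vector<Record> &Records) {
  std::array<std::uint8_t, MaxRecordLength> Buffer;
  for (const Record &R : Records)
    serializeInto(R, Buffer.data());
}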
Zachary Turner [Mon, 21 Aug 2017 20:08:40 +0000 (20:08 +0000)]
[lld/pdb] Speed up construction of publics & globals addr map.
The computeAddrMap function calls std::stable_sort with a comparison
function that deserializes symbols every time it is called.
As a result, deserializeAs<PublicSym32> is called 20-30 times per
symbol. It's much faster to compute the deserialized form beforehand
and pass a pointer to it to the comparison function.
Patch by Alex Telishev
Differential Revision: https://reviews.llvm.org/D36941
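The decorate-then-sort pattern being described, as a hedged sketch (types simplified, not the actual lld code):

#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <numeric>
#include <tuple>
#include <vector>

struct PublicSym32 { std::uint32_t Segment = 0, Offset = 0; };

// Deserialize each symbol once into Keys up front, then sort indices
// by the cached values; the comparator no longer re-deserializes its
// operands on every call.
std::vector<std::size_t> computeOrder(const std::vector<PublicSym32> &Keys) {
  std::vector<std::size_t> Order(Keys.size());
  std::iota(Order.begin(), Order.end(), std::size_t{0});
  std::stable_sort(Order.begin(), Order.end(),
                   [&](std::size_t L, std::size_t R) {
                     return std::tie(Keys[L].Segment, Keys[L].Offset) <
                            std::tie(Keys[R].Segment, Keys[R].Offset);
                   });
  return Order;
}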
Haicheng Wu [Mon, 21 Aug 2017 20:00:09 +0000 (20:00 +0000)]
[InlineCost] Add cl::opt to allow full inline cost to be computed for debugging purposes.
Currently, the inline cost model will bail once the inline cost exceeds the
inline threshold in order to avoid unnecessary compile-time. However, when
debugging it is useful to compute the full cost, so this command line option
is added to override the default behavior.
I took over this work from Chad Rosier (mcrosier@codeaurora.org).
Zachary Turner [Mon, 21 Aug 2017 19:46:46 +0000 (19:46 +0000)]
[BinaryStream] Defaultify copy and move constructors.
The various BinaryStream classes had explicit copy constructors
which resulted in deleted move constructors. This was causing
the internal std::shared_ptr to get copied rather than moved
very frequently, since these classes are often used as return
values.
Patch by Alex Telishev
Differential Revision: https://reviews.llvm.org/D36942
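For illustration, the C++ rule at play (a simplified stand-in, not the actual class):

#include <memory>

struct Stream {
  std::shared_ptr<char> Data;
  Stream() = default;
  // A user-provided copy constructor suppresses the implicit move
  // operations, so returning a Stream would copy Data and bump its
  // atomic refcount. Defaulting everything keeps moves available.
  Stream(const Stream &) = default;
  Stream(Stream &&) = default;
  Stream &operator=(const Stream &) = default;
  Stream &operator=(Stream &&) = default;
};

Stream makeStream() {
  Stream S;
  return S; // moved (or elided), not copied
}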
Sanjay Patel [Mon, 21 Aug 2017 19:13:14 +0000 (19:13 +0000)]
[LibCallSimplifier] try harder to fold memcmp with constant arguments (2nd try)
The 1st try was reverted because it could inf-loop by creating a dead instruction.
Fixed that to not happen and added a test case to verify.
Original commit message:
Try to fold:
memcmp(X, C, ConstantLength) == 0 --> load X == *C
Without this change, we're unnecessarily checking the alignment of the constant data,
so we miss the transform in the first 2 tests in the patch.
I noted this shortcoming of LibCallSimplifier in one of the recent CGP memcmp expansion
patches. This doesn't help the example in:
https://bugs.llvm.org/show_bug.cgi?id=34032#c13
...directly, but it's worth short-circuiting more of these simple cases since we're
already trying to do that.
The benefit of transforming to load+cmp is that existing IR analysis/transforms may
further simplify that code. For example, if the load of the variable is common to
multiple memcmp calls, CSE can remove the duplicate instructions.
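At the source level, the fold corresponds to something like this (a hedged illustration, not a test from the patch):

#include <cstdint>
#include <cstring>

static const std::uint32_t C = 0x01020304;

bool isC(const void *X) {
  return std::memcmp(X, &C, sizeof(C)) == 0;
  // After the fold, conceptually: load 32 bits from X and compare them
  // with the constant directly, with no libc call and no alignment
  // check on the constant data.
}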
Craig Topper [Mon, 21 Aug 2017 19:02:06 +0000 (19:02 +0000)]
[InstCombine] Teach foldSelectICmpAnd to recognize a (icmp slt X, 0) and (icmp sgt X, -1) as equivalent to an and with the sign bit of the truncated type
This is similar to what was already done in foldSelectICmpAndOr. Ultimately I'd like to see if we can call foldSelectICmpAnd from foldSelectIntoOp if we detect a power of 2 constant. This would allow us to remove foldSelectICmpAndOr entirely.
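The underlying equivalence, shown here for 32 bits (an illustrative sketch):

#include <cstdint>

bool SltZero(std::int32_t X) { return X < 0; }  // icmp slt X, 0
bool SgtM1(std::int32_t X)   { return X > -1; } // icmp sgt X, -1
bool SignBit(std::int32_t X) {
  return (static_cast<std::uint32_t>(X) & 0x80000000u) != 0;
}
// For all X: SltZero(X) == SignBit(X), and SgtM1(X) == !SignBit(X).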
Justin Bogner [Mon, 21 Aug 2017 17:44:36 +0000 (17:44 +0000)]
Introduce FuzzMutate library
This introduces the FuzzMutate library, which provides structured
fuzzing for LLVM IR, as described in my EuroLLVM 2017 talk. Most
of the basic mutators to inject and delete IR are provided, with
support for most basic operations.
I will follow up with the instruction selection fuzzer, which is
implemented in terms of this library.
Craig Topper [Mon, 21 Aug 2017 16:04:04 +0000 (16:04 +0000)]
[X86] When selecting sse_load_f32/f64 pattern, make sure there's only one use of every node all the way back to the root of the match
Summary: With masked operations, it's possible for an operation node such as fadd or fsub to be used by multiple different vselects. Since the pattern matching starts at the vselect, we need to make sure the operation node itself is only used once before we can fold a load. Otherwise we'll end up folding the same load into multiple instructions.
Zachary Turner [Mon, 21 Aug 2017 14:53:25 +0000 (14:53 +0000)]
[llvm-pdbutil] Add support for dumping detailed module stats.
This adds support for dumping a summary of module symbols
and CodeView debug chunks. The option prints, for each module, a
table of all the symbols that occurred in the module, with the
number of occurrences and total byte size of each. Then
at the end it prints the totals for the entire file.
Additionally, this patch adds the -jmc (just my code) option,
which suppresses modules which are from external libraries or
linker imports, so that you can focus only on the object files
and libraries that originate from your own source code.
Sanjay Patel [Mon, 21 Aug 2017 13:55:49 +0000 (13:55 +0000)]
[LibCallSimplifier] try harder to fold memcmp with constant arguments
Try to fold:
memcmp(X, C, ConstantLength) == 0 --> load X == *C
Without this change, we're unnecessarily checking the alignment of the constant data,
so we miss the transform in the first 2 tests in the patch.
I noted this shortcoming of LibCallSimplifier in one of the recent CGP memcmp expansion
patches. This doesn't help the example in:
https://bugs.llvm.org/show_bug.cgi?id=34032#c13
...directly, but it's worth short-circuiting more of these simple cases since we're
already trying to do that.
The benefit of transforming to load+cmp is that existing IR analysis/transforms may
further simplify that code. For example, if the load of the variable is common to
multiple memcmp calls, CSE can remove the duplicate instructions.
Stefan Pintilie [Mon, 21 Aug 2017 13:36:18 +0000 (13:36 +0000)]
[PowerPC] Check if the pre-increment PHI Node already exists
Preparations to use the pre-increment are sometimes done in the
target-independent pass Loop Strength Reduction. We try to detect them in the
PowerPC-specific pass so that the work is not done twice and so that we do not
add PHIs that are not required.
Chandler Carruth [Mon, 21 Aug 2017 08:45:22 +0000 (08:45 +0000)]
[x86] Teach the "generic" x86 CPU to avoid patterns that are slow on
widely used processors.
This occurred to me when I saw that we were generating 'inc' and 'dec',
which we shouldn't for Haswell and newer. Beyond that, there were a few "X is
slow" features that we should probably just set.
I've avoided any of the "X is fast" features because most of those would
be pretty serious regressions on processors where X isn't actually fast.
The slow things are likely to be negligible costs on processors where
these aren't slow and a significant win when they are slow.
In retrospect this seems somewhat obvious. Not sure why we didn't do
this a long time ago.
Chandler Carruth [Mon, 21 Aug 2017 08:45:19 +0000 (08:45 +0000)]
[x86] Handle more cases where we can re-use an atomic operation's flags
rather than doing a separate comparison.
This both saves an explicit comparison and avoids the use of `xadd`
which introduces register constraints and other challenges to the
generated code.
The motivating case is from atomic reference counts where `1` is the
sentinel rather than `0` for whatever reason. This can and should be
lowered efficiently on x86 by just using a different flag, however the
x86 code only handled the `0` case.
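The kind of source pattern in question, as a hedged example:

#include <atomic>

// A reference count whose sentinel is 1 rather than 0: fetch_sub
// returns the previous value, so previous == 2 means the count just
// reached the sentinel. The comparison after the locked decrement can
// reuse the operation's flags instead of issuing a separate cmp.
bool reachedSentinel(std::atomic<long> &RefCount) {
  return RefCount.fetch_sub(1, std::memory_order_acq_rel) == 2;
}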
There remains some further opportunities here that are currently hidden
due to canonicalization. I've included test cases that show these and
FIXMEs. However, I don't at the moment have any production use cases and
they seem substantially harder to address.
Sam Parker [Mon, 21 Aug 2017 08:43:06 +0000 (08:43 +0000)]
[ARM][AArch64] Cortex-A75 and Cortex-A55 support
This patch introduces support for Cortex-A75 and Cortex-A55, Arm's
latest big.LITTLE A-class cores. They implement the ARMv8.2-A
architecture, including the cryptography and RAS extensions, plus
the optional dot product extension. They also implement the RCpc
AArch64 extension from ARMv8.3-A.
George Rimar [Mon, 21 Aug 2017 08:00:54 +0000 (08:00 +0000)]
[Support/Parallel] - Do not use a task group for a very small task.
parallel_for_each_n splits a given task into small tasks and then
passes them to background threads managed by a thread pool to process them
in parallel. TaskGroup then waits for all tasks to be done, which happens in
TaskGroup's destructor.
In the previous code, all tasks were passed to background threads, and the
main thread just waited for them to finish their jobs. This patch changes
the logic so that the main thread processes a task just like other
worker threads instead of just waiting for workers.
This patch improves the performance of parallel_for_each_n for a task that
is so small that we do not split it into multiple tasks. Previously, such a task
was submitted to another thread and the main thread waited for its completion.
That involves multiple inter-thread synchronizations, which are not cheap for
small tasks. Now such a task is processed by the main thread, so no inter-thread
communication is necessary.
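A schematic of the scheduling change (simplified, not the actual llvm/Support code):

#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

// Hand all chunks but the last to worker threads and run the last one
// on the calling thread, so a task too small to split never pays for a
// cross-thread handoff and wake-up.
void runChunks(std::vector<std::function<void()>> &Chunks) {
  std::vector<std::thread> Workers;
  for (std::size_t I = 0; I + 1 < Chunks.size(); ++I)
    Workers.emplace_back(Chunks[I]);
  if (!Chunks.empty())
    Chunks.back()(); // main thread participates instead of waiting
  for (std::thread &T : Workers)
    T.join();
}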
[XRay][tools] Support new kinds of instrumentation map entries
Summary:
When extracting the instrumentation map from a binary, we should be able
to recognize the new kinds of instrumentation sleds we've been emitting
with the compiler using -fxray-instrument. This change adds a test for
all the kinds of sleds we currently support (sans the tail-call sled,
which is a bit harder to force in a simple prebuilt input).
Craig Topper [Sun, 20 Aug 2017 19:47:00 +0000 (19:47 +0000)]
[AVX512] Add a test to check what happens when a load is referenced by two different masked scalar intrinsics with the same op inputs, but different masking node.
We're missing some single use checks in the sse_load_f32/f64 handling that cause us to replicate the load.