[LAA] Allow more run-time alias checks by coercing pointer expressions to AddRecExprs
Summary:
LAA can only emit run-time alias checks for pointers with affine AddRec
SCEV expressions. However, non-AddRecExprs can now be converted to
affine AddRecExprs using SCEV predicates.
This change tries to add the minimal set of SCEV predicates in order
to enable run-time alias checking.
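A minimal sketch of the idea (not the actual LAA change; it assumes the existing PredicatedScalarEvolution::getAsAddRec interface): a pointer whose SCEV is not an AddRec is re-queried through PSE, which records the run-time-checkable predicates needed to rewrite it as an affine AddRec.

  #include "llvm/Analysis/ScalarEvolution.h"
  #include "llvm/Analysis/ScalarEvolutionExpressions.h"

  using namespace llvm;

  // Hedged sketch: let PredicatedScalarEvolution view a pointer as an affine
  // AddRec, collecting whatever SCEV predicates (e.g. no-wrap assumptions)
  // are required; those predicates later become run-time checks.
  static const SCEVAddRecExpr *
  getPtrAsAffineAddRec(Value *Ptr, PredicatedScalarEvolution &PSE) {
    const SCEVAddRecExpr *AR = PSE.getAsAddRec(Ptr);
    return (AR && AR->isAffine()) ? AR : nullptr;
  }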
This is a preparatory step for D34515 and also is being recommitted as its
first version caused PR34045.
This change:
- makes nodes ISD::ADDCARRY and ISD::SUBCARRY legal for i32
- lowering is done by first converting the boolean value into the carry flag
using (_, C) ← (ARMISD::ADDC R, -1) and converting it back to an integer value
using (R, _) ← (ARMISD::ADDE 0, 0, C); an ARMISD::ADDE between the two
operations does the actual addition (see the sketch after this list).
- for subtraction, given that ISD::SUBCARRY's second result is actually a
borrow, we need to invert the value of the second operand and the result
before and after using ARMISD::SUBE. We also need to invert the carry result
of ARMISD::SUBE to preserve the semantics.
- given that the generic combiner may lower ISD::ADDCARRY and
ISD::SUBCARRY into ISD::UADDO and ISD::USUBO, we need to update their lowering
as well, otherwise i64 operations would now require branches. This implies
updating the corresponding test for unsigned.
- add new combiner to remove the redundant conversions from/to carry flags
to/from boolean values (ARMISD::ADDC (ARMISD::ADDE 0, 0, C), -1) → C
- fixes PR34045
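For reference, a tiny standalone illustration (not the ISel code itself) of why the ADDC/ADDE pair above moves a boolean into and out of the carry flag: adding -1 (0xFFFFFFFF) to a boolean produces a carry-out equal to the boolean, and ADDE 0, 0, C materializes the carry back as 0 or 1.

  #include <cassert>
  #include <cstdint>

  // Boolean -> carry: ADDC R, -1 adds 0xFFFFFFFF to R; for R in {0, 1} the
  // carry-out equals R.
  static bool carryFromBool(uint32_t R) {
    uint64_t Wide = uint64_t(R) + 0xFFFFFFFFull;
    return (Wide >> 32) != 0;
  }

  // Carry -> boolean: ADDE 0, 0, C computes 0 + 0 + carry-in.
  static uint32_t boolFromCarry(bool C) { return C ? 1u : 0u; }

  int main() {
    assert(!carryFromBool(0) && carryFromBool(1));
    assert(boolFromCarry(carryFromBool(1)) == 1);
    assert(boolFromCarry(carryFromBool(0)) == 0);
    return 0;
  }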
[X86] Add an extra instruction to TruncAssertSext.ll to prevent the 'or' from being narrowed so that the movl is really required to avoid a miscompile.
If we allow the OR to be narrowed then the upper bits really are zero and we can't tell if the zeroing movl was removed on purpose.
While here, regenerate the test with update_llc_test_checks.py.
Hans Wennborg [Mon, 11 Sep 2017 23:52:02 +0000 (23:52 +0000)]
Revert r312898 "[ARM] Use ADDCARRY / SUBCARRY"
It caused PR34564.
> This is a preparatory step for D34515 and also is being recommitted as its
> first version caused PR34045.
>
> This change:
> - makes nodes ISD::ADDCARRY and ISD::SUBCARRY legal for i32
> - lowering is done by first converting the boolean value into the carry flag
> using (_, C) ← (ARMISD::ADDC R, -1) and converting it back to an integer value
> using (R, _) ← (ARMISD::ADDE 0, 0, C); an ARMISD::ADDE between the two
> operations does the actual addition.
> - for subtraction, given that ISD::SUBCARRY's second result is actually a
> borrow, we need to invert the value of the second operand and the result
> before and after using ARMISD::SUBE. We also need to invert the carry result
> of ARMISD::SUBE to preserve the semantics.
> - given that the generic combiner may lower ISD::ADDCARRY and
> ISD::SUBCARRY into ISD::UADDO and ISD::USUBO, we need to update their lowering
> as well, otherwise i64 operations would now require branches. This implies
> updating the corresponding test for unsigned.
> - add new combiner to remove the redundant conversions from/to carry flags
> to/from boolean values (ARMISD::ADDC (ARMISD::ADDE 0, 0, C), -1) → C
> - fixes PR34045
>
> Differential Revision: https://reviews.llvm.org/D35192
Yonghong Song [Mon, 11 Sep 2017 23:43:35 +0000 (23:43 +0000)]
bpf: add " ll" in the LD_IMM64 asmstring
This partially reverts the previous fix in commit f5858045aa0b
("bpf: proper print imm64 expression in inst printer").
In that commit, the original suffix "ll" is removed from
LD_IMM64 asmstring. In the custom print method, the "ll"
suffix is printed if the rhs is an immediate. For example,
"r2 = 5ll" => "r2 = 5ll", and "r3 = varll" => "r3 = var".
This has an issue for the assembler, though. Since the assembler
relies on the asmstring to do pattern matching, it will not
be able to distinguish between "mov r2, 5" and
"ld_imm64 r2, 5" since both asm strings are "r2 = 5".
In such cases, the assembler uses a 64-bit load for all
"r = <val>" asm insts.
This patch adds back the " ll" suffix for ld_imm64, with one
additional space for the "#reg = #global_var" case.
Adrian Prantl [Mon, 11 Sep 2017 22:59:45 +0000 (22:59 +0000)]
llvm-dwarfdump: Replace -debug-dump=sect option with individual options.
As discussed on llvm-dev in
http://lists.llvm.org/pipermail/llvm-dev/2017-September/117301.html
this changes the command line interface of llvm-dwarfdump to match the
one used by the dwarfdump utility shipping on macOS. In addition to
being shorter to type, this format also has the advantage of allowing
more than one section to be specified at the same time.
Eli Friedman [Mon, 11 Sep 2017 22:56:20 +0000 (22:56 +0000)]
[llvm-cov] Allow hiding instantiation/region coverage from summary tables
Region coverage is difficult to explain without going deep into how
coverage is implemented. Instantiation coverage is easier to explain,
but probably not useful in most cases (templates don't exist in C, and
most C++ code contains relatively few templates).
This patch adds the options "-show-region-summary" and
"-show-instantiation-summary" to allow hiding those columns.
"-show-instantiation-summary" is turned off by default.
WholeProgramDevirt: Add import/export support for targets without absolute symbol constants.
Not all targets support the use of absolute symbols to export
constants. In particular, ARM has a wide variety of constant encodings
that cannot currently be relocated by linkers. So instead of exporting
the constants using symbols, export them directly in the summary.
The values of the constants are left as zeroes on targets that support
symbolic exports.
This may result in more cache misses when targeting those architectures
as a result of arbitrary changes in constant values, but this seems
somewhat unavoidable for now.
[InstSimplify] add tests for possible sdiv/srem simplifications; NFC
As noted in PR34517, the handling of signed div/rem is not on par with
unsigned div/rem. Signed is harder to reason about, but it should be
possible to handle at least some of these using the same technique that
we use for unsigned: use icmp logic to see if there's a relationship
between the quotient and divisor.
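For illustration, a tiny standalone example (not the InstSimplify code) of the unsigned reasoning referred to above: if the dividend is known to be less than the divisor, the quotient is 0 and the remainder is the dividend itself.

  #include <cassert>
  #include <cstdint>

  int main() {
    uint32_t X = 7, Y = 19; // any pair with X < Y (unsigned)
    assert(X < Y);
    assert(X / Y == 0); // udiv folds to 0
    assert(X % Y == X); // urem folds to the dividend
    return 0;
  }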
Unmerge GEPs to reduce register pressure on IndirectBr edges.
Summary:
GEP merging can sometimes increase the number of live values and register
pressure across control edges and cause performance problems, particularly if the
increased register pressure results in spills.
This change implements GEP unmerging around an IndirectBr in certain cases to
mitigate the issue. This is done in the CodeGenPrepare pass (after all the GEP
merging has happened).
With this patch, the Python interpreter loop runs faster by ~5%.
[AMDGPU] Produce madak and madmk from the two-address pass
These two instructions are normally selected, but when the
two-address pass converts mac into mad, we end up with the
mad where we could have one of these.
[ScalarEvolution] Refactor forgetLoop() to improve performance
forgetLoop() has pretty bad performance because it goes over
the same instructions over and over again, in particular when
nested loops are involved.
The refactoring changes the function to be non-recursive and
reuses the allocations for the data structures and the Visited
set.
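A generic sketch of the non-recursive worklist pattern described above (simplified stand-in types, not the real ScalarEvolution code): recursion over nested loops is replaced by an explicit worklist, and the Visited set is allocated once and reused.

  #include <unordered_set>
  #include <vector>

  struct Loop {
    std::vector<Loop *> SubLoops;
    std::vector<int> BlockIDs; // stand-in for the loop's basic blocks
  };

  void forgetLoopIterative(Loop *Root, std::unordered_set<int> &Visited) {
    std::vector<Loop *> Worklist{Root};
    while (!Worklist.empty()) {
      Loop *L = Worklist.back();
      Worklist.pop_back();
      for (int BB : L->BlockIDs) {
        if (Visited.insert(BB).second) {
          // ... drop cached SCEV values for instructions in BB ...
        }
      }
      // Push nested loops instead of recursing into them.
      Worklist.insert(Worklist.end(), L->SubLoops.begin(), L->SubLoops.end());
    }
  }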
Tim Renouf [Mon, 11 Sep 2017 13:55:39 +0000 (13:55 +0000)]
[AMDGPU] exp should not be in WQM mode
An mrt exp with vm=1 must be in exact (non-WQM) mode, as it also exports
the exec mask as the valid mask to determine which pixels to render.
This commit marks any exp as needing to be in exact mode.
Actually, if there are multiple mrt exps, only one needs to have vm=1,
and only that one needs to be in exact mode. But that is an optimization
for another day.
I'm trying to refactor some shared code for integer div/rem,
but I keep having to scroll through fdiv. The FP ops have
nothing in common with the integer ops, so I'm moving FP
below everything else.
While here, improve a couple of comments and fix some formatting.
Gadi Haber [Mon, 11 Sep 2017 11:26:20 +0000 (11:26 +0000)]
[X86][SKX][KNL] Updating several CodeGen tests to use the attr flag instead of mcpu flag
NFC.
Updated 3 Codegen regression tests to use the -mattr flag instead of the -mcpu flags as follows:
Instead of -mcpu=skx use -mattr=+avx512f,+avx512bw,+avx512vl,+avx512dq
Instead of -mcpu=knl use -mattr=+avx512f
This is a preparatory step for D34515 and also is being recommitted as its
first version caused PR34045.
This change:
- makes nodes ISD::ADDCARRY and ISD::SUBCARRY legal for i32
- lowering is done by first converting the boolean value into the carry flag
using (_, C) ← (ARMISD::ADDC R, -1) and converting it back to an integer value
using (R, _) ← (ARMISD::ADDE 0, 0, C); an ARMISD::ADDE between the two
operations does the actual addition.
- for subtraction, given that ISD::SUBCARRY's second result is actually a
borrow, we need to invert the value of the second operand and the result
before and after using ARMISD::SUBE. We also need to invert the carry result
of ARMISD::SUBE to preserve the semantics.
- given that the generic combiner may lower ISD::ADDCARRY and
ISD::SUBCARRY into ISD::UADDO and ISD::USUBO, we need to update their lowering
as well, otherwise i64 operations would now require branches. This implies
updating the corresponding test for unsigned.
- add new combiner to remove the redundant conversions from/to carry flags
to/from boolean values (ARMISD::ADDC (ARMISD::ADDE 0, 0, C), -1) → C
- fixes PR34045
Fixed a bug in splitting Scatter operation in the Type Legalizer.
After the split of the Scatter operation, the order of the new instructions is well defined: Lo goes before Hi. Otherwise the semantics of Scatter (from LSB to MSB) are broken.
I'm chaining 2 nodes to prevent reordering.
[InstSimplify] refactor udiv/urem code and add tests; NFCI
This removes some duplicated code and makes it easier to support signed div/rem
in a similar way if we want to do that. Note that the existing comments were not
accurate - we don't need a constant divisor to simplify; icmp simplification does
more than that. But as the added tests show, it could go even further.
Merge isKnownNonNull into isKnownNonZero
It now knows the tricks of both functions.
Also, fix a bug that considered allocas in a non-zero address space to always be non-null.
MinSeong Kim [Sat, 9 Sep 2017 14:17:52 +0000 (14:17 +0000)]
[CMake] Update GetSVN.cmake to handle repo
Summary:
When repo is used with git, the 'clang --version' option does not display
the correct revision information (i.e., the git hash at TOP), as in the following:
clang version 6.0.0 --->
clang version 6.0.0 (clang version) (llvm version)
This is because repo also creates a .git/svn folder, as git-svn does, and
this makes repo with git use the "git svn info" command, which is only for
git-svn, to retrieve its revision information, yielding nothing for the info.
To correctly distinguish between git-svn and repo with git, the folder
hierarchy to specify for git-svn should be .git/svn/refs as the "git svn
info" command depends on the revision data in .git/svn/refs. This patch
in turn makes repo with git pass through to the third macro,
get_source_info_git, in the get_source_info function, correctly
retrieving the revision information for repo with git using the "git log ..."
command.
This patch is tested with git, svn, git-svn, and repo with git.
[DivRemPairs] add a pass to optimize div/rem pairs (PR31028)
This is intended to be a superset of the functionality from D31037 (EarlyCSE) but implemented
as an independent pass, so there's no stretching of scope and feature creep for an existing pass.
I also proposed a weaker version of this for SimplifyCFG in D30910. And I initially had almost
this same functionality as an addition to CGP in the motivating example of PR31028:
https://bugs.llvm.org/show_bug.cgi?id=31028
The advantage of positioning this ahead of SimplifyCFG in the pass pipeline is that it can allow
more flattening. But it needs to be after passes (InstCombine) that could sink a div/rem and
undo the hoisting that is done here.
Decomposing remainder may allow removing some code from the backend (PPC and possibly others).
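As a small worked example of the remainder decomposition mentioned above (a standalone sketch, not the pass itself): when both the quotient and the remainder of X and Y are needed, the remainder can be computed as X - (X / Y) * Y, reusing the already-computed quotient.

  #include <cassert>
  #include <cstdint>

  int main() {
    int32_t X = -23, Y = 7;
    int32_t Quot = X / Y;       // the one real division
    int32_t Rem = X - Quot * Y; // decomposed remainder, same value as X % Y
    assert(Rem == X % Y);
    return 0;
  }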
[X86] Call removeDeadNode when we're done doing custom isel for mul, div and test
Summary:
Once we've done our custom isel for these nodes, I think we should be calling removeDeadNode to prune them out of the DAG. Table-driven isel ultimately either calls morphNodeTo, which modifies a node and doesn't leave dead nodes, or emits new nodes and then calls removeDeadNode as part of Opc_CompleteMatch.
If you run a simple multiply test case like this through llc with -debug, you'll see a umul_lohi node get printed as part of the dump for 'Instruction Selection ends'.
- Use range based for
- Variable names should start with upper case
- Add `const`
- Change class name to match filename
- Fix doxygen comments
- Use MCPhysReg instead of unsigned
- Use references instead of pointers where things cannot be nullptr
- Misc coding style improvements
PPC: Don't select lxv/stxv for insufficiently aligned stack slots.
The lxv/stxv instructions require an offset that is 0 % 16. Previously we were
selecting lxv/stxv for loads and stores to the stack where the offset from the
slot was a multiple of 16, but the stack slot itself was not aligned to 16 bytes or more.
When the frame gets lowered these transform to r(1|31) + slot + offset.
If slot is not aligned, slot + offset may not be 0 % 16.
Now we require 16-byte or greater alignment to select lxv/stxv for stack slots.
Includes a testcase that shows both sufficiently and insufficiently aligned
stack slots.
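A tiny arithmetic illustration of the problem (standalone, not the PPC selection code): an offset that is 0 % 16 from the slot only yields a 16-byte-aligned address when the slot itself is 16-byte aligned.

  #include <cassert>
  #include <cstdint>

  int main() {
    uint64_t Slot8 = 0x1008;  // slot aligned to 8 bytes, not 16
    uint64_t Slot16 = 0x1010; // slot aligned to 16 bytes
    uint64_t Offset = 32;     // offset is 0 % 16
    assert((Slot8 + Offset) % 16 != 0);  // not usable by lxv/stxv
    assert((Slot16 + Offset) % 16 == 0); // fine for lxv/stxv
    return 0;
  }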
[TargetTransformInfo] Remove the extra "default" in a switch in which all enum values have been covered.
In the function TargetTransformInfo::getInstructionCost, all enum values in the switch statement have been covered, so the default is unnecessary and may cause an error with -Werror,-Wcovered-switch-default; remove it.
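For illustration, a minimal standalone example (not the TTI code) of the diagnostic in question: once every enumerator has a case, an extra default label is flagged by clang's -Wcovered-switch-default (an error under -Werror).

  enum Kind { K_A, K_B };

  int classify(Kind K) {
    switch (K) {
    case K_A: return 0;
    case K_B: return 1;
    // default: return -1; // would be flagged: all enum values are covered
    }
    return -1; // unreachable for valid enum values; silences -Wreturn-type
  }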
Yonghong Song [Fri, 8 Sep 2017 23:32:38 +0000 (23:32 +0000)]
bpf: proper print imm64 expression in inst printer
Fixed an issue in printImm64Operand so that if the value is
an expression, the expression is printed out properly. Currently,
it will print
r1 = <MCOperand Expr:(tx_port)>ll
With the patch, the printout will be
r1 = tx_port
Suggested-by: Jiong Wang <jiong.wang@netronome.com>
Signed-off-by: Yonghong Song <yhs@fb.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
[TargetTransformInfo] Add a new public interface getInstructionCost
The current TargetTransformInfo can support a throughput cost model and a code size model, but sometimes we also need an instruction latency cost model in different optimizations. Hal suggested we need a single public interface to query the different costs of an instruction, so I proposed the following interface:
enum TargetCostKind {
TCK_RecipThroughput, ///< Reciprocal throughput.
TCK_Latency, ///< The latency of instruction.
TCK_CodeSize ///< Instruction code size.
};
int getInstructionCost(const Instruction *I, enum TargetCostKind kind) const;
All clients should mainly use this function to query the cost of an instruction, parameter <kind> specifies the desired cost model.
This patch also provides a simple default implementation of getInstructionLatency.
The default getInstructionLatency provides latency numbers for only a small number of instruction classes; those latency numbers are only reasonable for modern OOO processors. It can be extended in the following ways:
- Add more detail into this function.
- Add a getXXXLatency function and call it from here.
- Implement a target-specific getInstructionLatency function.
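A hypothetical usage sketch of the interface described above (names as given in the message; not taken from a real pass): querying the three cost-model views of the same instruction.

  #include "llvm/Analysis/TargetTransformInfo.h"
  #include "llvm/IR/Instruction.h"

  using namespace llvm;

  static void queryCosts(const TargetTransformInfo &TTI, const Instruction &I) {
    int Throughput =
        TTI.getInstructionCost(&I, TargetTransformInfo::TCK_RecipThroughput);
    int Latency = TTI.getInstructionCost(&I, TargetTransformInfo::TCK_Latency);
    int Size = TTI.getInstructionCost(&I, TargetTransformInfo::TCK_CodeSize);
    (void)Throughput; (void)Latency; (void)Size; // values consumed by the client
  }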
Petr Hosek [Fri, 8 Sep 2017 22:26:50 +0000 (22:26 +0000)]
[CMake][runtimes] Use the same configuration for non-target and "default" target
The default host target for builtins and runtimes has special behavior
on some platforms, e.g. on Linux both i386 and x86_64 targets are being
built. Specifying "default" as a target name should lead to the same
behavior, which wasn't the case in the past. This patch unifies the
configuration between the non-target and "default" target to produce the
same behavior by moving the default configuration into a function that
can be used from both paths.
On a Windows bot, I see a FileCheck error where the source being matched
over no longer exists, i.e. it seems like it's FileCheck'ing some stale
output:
Matt Arsenault [Fri, 8 Sep 2017 19:09:13 +0000 (19:09 +0000)]
AMDGPU: Start using !con operator
We have a lot of operand definition work essentially producing
every valid permutation of operands to workaround builiding
operand lists based on the instruction features. Apparently tablegen
already has a mostly undocumented operator to concat dags which
simplies this.
Convert one simple place to use this. The BUF instruction definitions
have much more complicated logic that can be totally rewritten now.
[llvm-cov] Disable name-compression in a test binary
This should fix the lld bot:
The Buildbot has detected a new failure on builder llvm-clang-lld-x86_64-scei-ps4-ubuntu-fast while building cfe.
Full details are available at:
http://lab.llvm.org:8011/builders/llvm-clang-lld-x86_64-scei-ps4-ubuntu-fast/builds/16993
A coverage segment contains a starting line and column, an execution
count, and some other metadata. Clients of the coverage library use
segments to prepare line-oriented reports.
Users of the coverage library depend on segments being unique and sorted
in source order. Currently this is not guaranteed (this is why the clang
change which introduced deferred regions was reverted).
This commit documents the "unique and sorted" condition and asserts that
it holds. It also fixes the SegmentBuilder so that it produces correct
output in some edge cases.
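A generic sketch of the documented invariant (simplified Segment type assumed, not the real coverage data structures): segments must be sorted by their start location, with no duplicate starts.

  #include <algorithm>
  #include <tuple>
  #include <vector>

  struct Segment { unsigned Line, Col; }; // simplified stand-in

  // True if segments are in source order and unique by start location,
  // which is the property clients of the coverage library rely on.
  bool uniqueAndSorted(const std::vector<Segment> &Segs) {
    auto Less = [](const Segment &A, const Segment &B) {
      return std::tie(A.Line, A.Col) < std::tie(B.Line, B.Col);
    };
    auto Equal = [](const Segment &A, const Segment &B) {
      return A.Line == B.Line && A.Col == B.Col;
    };
    return std::is_sorted(Segs.begin(), Segs.end(), Less) &&
           std::adjacent_find(Segs.begin(), Segs.end(), Equal) == Segs.end();
  }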
Testing: I've added unit tests for some edge cases. I've also checked
that the new SegmentBuilder implementation is fully covered. Apart from
running check-profile and the llvm-cov tests, I've successfully used a
stage1 llvm-cov to prepare a coverage report for an instrumented clang
binary.
[Coverage] Report errors when reading malformed source regions
Each source region has a start and end location. Report an error when
the end location precedes the start location.
The old lineExecutionCounts.covmapping test actually had a buggy source
region in it. This commit introduces a regenerated copy of the coverage
and moves the old copy to malformedRegions.covmapping, for a test.
[llvm-cov] Unify region marker placement between text/html modes
Make sure that the text and html emitters always emit the same set of
region markers, and avoid emitting redundant markers for line segments
which don't end on the line they start on.