Correctly combine alias.scope metadata by taking the union instead of the intersection
Summary:
The alias.scope metadata represents sets of things an instruction might
alias with. When generically combining the metadata from two
instructions the result must be the union of the original sets, because
the new instruction might alias with anything any of the original
instructions aliased with.
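A minimal sketch of the rule, using illustrative C++ standing in for the metadata operand lists (names are made up; this is not the actual LLVM metadata API):

  #include <algorithm>
  #include <iterator>
  #include <vector>

  // Illustrative stand-in for a metadata scope identifier.
  using ScopeID = unsigned;

  // Combine the alias.scope lists of two instructions.  The merged
  // instruction may alias anything either original aliased with, so the
  // correct generic combination is the union of the two scope sets,
  // not their intersection.
  std::vector<ScopeID> combineAliasScopes(std::vector<ScopeID> A,
                                          std::vector<ScopeID> B) {
    std::sort(A.begin(), A.end());
    std::sort(B.begin(), B.end());
    std::vector<ScopeID> Result;
    std::set_union(A.begin(), A.end(), B.begin(), B.end(),
                   std::back_inserter(Result));
    return Result;
  }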
Gather and Scatter are newly introduced intrinsics, coming after the recently implemented masked load and store.
This is the first patch for the Gather and Scatter intrinsics. It includes only the syntax, parsing, and verification.
The Gather and Scatter intrinsics allow performing multiple memory accesses (reads/writes) in one vector instruction.
The intrinsics are not target specific and will have the following syntax:
Gather:
declare <16 x i32> @llvm.masked.gather.v16i32(<16 x i32*> <vector of ptrs>, i32 <alignment>, <16 x i1> <mask>, <16 x i32> <passthru>)
declare <8 x float> @llvm.masked.gather.v8f32(<8 x float*><vector of ptrs>, i32 <alignment>, <8 x i1> <mask>, <8 x float><passthru>)
Scatter:
declare void @llvm.masked.scatter.v8i32(<8 x i32><vector value to be stored> , <8 x i32*><vector of ptrs> , i32 <alignment>, <8 x i1> <mask>)
declare void @llvm.masked.scatter.v16i32(<16 x i32> <vector value to be stored> , <16 x i32*> <vector of ptrs>, i32 <alignment>, <16 x i1><mask> )
Vector of ptrs - the set of source/destination addresses from/to which the values are loaded/stored.
Mask - switches vector lanes on/off to prevent memory access for the switched-off lanes.
The vector of ptrs, the value, and the mask must have the same vector width.
These are code examples where gather/scatter should be used and will allow function vectorization:
;void foo1(int * restrict A, int * restrict B, int * restrict C) {
;  for (int i=0; i<SIZE; i++) {
;    A[i] = B[C[i]];
;  }
;}
;void foo3(int * restrict A, int * restrict B) {
;  for (int i=0; i<SIZE; i++) {
;    A[B[i]] = i+5;
;  }
;}
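As a rough mental model of the semantics above (not the generated code), each lane behaves like a guarded scalar access; an illustrative scalar C++ sketch:

  #include <cstddef>
  #include <vector>

  // Scalar model of llvm.masked.gather: for each enabled lane, load from the
  // lane's pointer; for disabled lanes, take the value from the pass-through
  // vector instead.  All vectors must have the same number of lanes.
  std::vector<int> maskedGather(const std::vector<const int *> &Ptrs,
                                const std::vector<bool> &Mask,
                                const std::vector<int> &PassThru) {
    std::vector<int> Result(Ptrs.size());
    for (std::size_t Lane = 0; Lane < Ptrs.size(); ++Lane)
      Result[Lane] = Mask[Lane] ? *Ptrs[Lane] : PassThru[Lane];
    return Result;
  }

  // Scalar model of llvm.masked.scatter: store each enabled lane's value to
  // the lane's pointer; disabled lanes perform no memory access.
  void maskedScatter(const std::vector<int> &Values,
                     const std::vector<int *> &Ptrs,
                     const std::vector<bool> &Mask) {
    for (std::size_t Lane = 0; Lane < Ptrs.size(); ++Lane)
      if (Mask[Lane])
        *Ptrs[Lane] = Values[Lane];
  }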
Tests will come in the following patches, with CodeGen and Vectorizer.
Tim Northover [Sun, 8 Feb 2015 00:50:47 +0000 (00:50 +0000)]
ARM & AArch64: teach LowerVSETCC that output type size may differ from input.
While various DAG combines try to guarantee that a vector SETCC
operation will have the same output size as input, there's nothing
intrinsic to either creation or LegalizeTypes that actually guarantees
it, so the function needs to be ready to handle a mismatch.
Fortunately this is easy enough, just extend or truncate the naturally
compared result.
I couldn't reproduce the failure in other backends that I know have
SIMD, so it's probably only an issue for these two due to shared
heritage.
Zachary Turner [Sun, 8 Feb 2015 00:29:29 +0000 (00:29 +0000)]
Some cleanup for libpdb.
This patch implements a few of the optional suggestions from the
initial patch committing libpdb. In particular, it implements a
virtual function out of line for each of the concrete classes.
A few other minor cleanups exist as well, such as using override
instead of virtual.
Benjamin Kramer [Sat, 7 Feb 2015 21:37:08 +0000 (21:37 +0000)]
LoopIdiom: Use utility functions.
The only difference between deleteIfDeadInstruction and
RecursivelyDeleteTriviallyDeadInstructions is that the former also
manually invalidates SCEV. That's unnecessary because SCEV automatically
gets informed when an instruction is deleted via a ValueHandle. NFC.
Avoid integer overflows around realloc calls resulting in potential heap
overflows. Problem identified by Guido Vranken. Changes differ from the original
OpenBSD sources by not depending on non-portable reallocarray.
Ahmed Bougacha [Sat, 7 Feb 2015 17:04:29 +0000 (17:04 +0000)]
[BasicAA] Try to disambiguate GEPs through arrays of structs into
different fields.
We can show that two GEPs off of the same (possibly multidimensional)
array of structs, into different fields, can't alias. Quoting:
For two GEPOperators GEP1 and GEP2, if we find that:
- both GEPs begin indexing from the exact same pointer;
- the last indices in both GEPs are constants, indexing into a struct;
- said indices are different, hence the pointed-to fields are different;
- and both GEPs only index through arrays prior to that;
this lets us determine that the struct that GEP1 indexes into and the
struct that GEP2 indexes into must either precisely overlap or be
completely disjoint. Because they cannot partially overlap, indexing
into different non-overlapping fields of the struct will never alias.
The other BasicAA::aliasGEP rules worked in some cases, but not all
(for example, the i32x3 struct in the testcase).
We can add this simple ad-hoc rule to complement them.
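An illustrative C++ analogue of the case this rule covers (hypothetical struct and function names):

  // Two GEPs that start from the same base pointer, walk only through array
  // dimensions, and finish with different constant struct-field indices can
  // never alias: the addressed structs either coincide exactly or are
  // disjoint, and within one struct the fields do not overlap.
  struct S {
    int A;
    int B;
    int C; // an i32x3-style struct, like the one in the testcase
  };

  int readDisjointFields(S *Arr, unsigned I, unsigned J) {
    int *P = &Arr[I].A; // GEP ..., i32 I, i32 0
    int *Q = &Arr[J].B; // GEP ..., i32 J, i32 1
    // Whatever I and J are, P and Q address different fields, so the two
    // accesses below can be reported as NoAlias.
    *P = 1;
    return *Q;
  }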
David Majnemer [Sat, 7 Feb 2015 08:26:40 +0000 (08:26 +0000)]
MC: Emit COFF section flags in the "proper" order
COFF section flags are order-sensitive:
'rd' will make a read-write section because 'd' implies write
'dr' will make a read-only section because 'r' disables write
Hal Finkel [Sat, 7 Feb 2015 07:32:58 +0000 (07:32 +0000)]
[PowerPC] Handle loop predecessor invokes
If a loop predecessor has an invoke as its terminator, and the return value
from that invoke is used to determine the loop iteration space, then we can't
insert a computation based on that value in the loop predecessor prior to the
terminator (oops). If there's such an invoke, or just no predecessor for that
matter, insert a new loop preheader.
Zachary Turner [Sat, 7 Feb 2015 01:47:14 +0000 (01:47 +0000)]
Resubmit unittests for DebugInfoPDB.
These were originally submitted as part of r228428, but this part
caused a build breakage in LLVMConfig. The library portion was
resubmitted independently since it was not causing breakage.
There were two reasons this was causing the build to fail. The
first is that there were no Makefiles added for the PDB tests. And
the second is that the DebugInfoPDB library was only being built by
CMake behind an "if (MSVC)" check. This is wrong since this the
library hides platform specific details, and it was causing
LLVM-Config to not find the library when trying to build unittests.
Lang Hames [Fri, 6 Feb 2015 23:26:33 +0000 (23:26 +0000)]
[Orc] Add a Kaleidoscope/Orc tutorial demonstrating lazy-irgen.
This tutorial builds on the lazy_codegen kaleidoscope/orc tutorial by making
a small set of changes (~75 lines diff) to defer ir-generation for function
definitions until functions are actually referenced.
Ahmed Bougacha [Fri, 6 Feb 2015 23:15:39 +0000 (23:15 +0000)]
[AArch64] Use the source location of the IR branch when creating Bcc
from a conditional branch fed by an add/sub/mul-with-overflow node.
We previously used the SDLoc of the overflow node, for no good reason.
In some cases, this led to the Bcc and B terminators having different
source orders, and DBG_VALUEs being inserted between them.
The real issue is with the code that can't handle DBG_VALUEs between
terminators: the few places affected by this will be fixed soon.
In the meantime, fixing the SDLoc is a positive change no matter what.
No tests, as I have no idea how to get .loc emitted for branches?
Hal Finkel [Fri, 6 Feb 2015 23:07:40 +0000 (23:07 +0000)]
Revert "r227976 - [PowerPC] Yet another approach to __tls_get_addr" and related fixups
Unfortunately, even with the workaround of disabling the linker TLS
optimizations in Clang restored (which has already been done), this still
breaks self-hosting on my P7 machine (-O3 -DNDEBUG -mcpu=native).
Bill is currently working on an alternate implementation to address the TLS
issue in a way that also fully elides the linker bug (which, unfortunately,
this approach did not fully), so I'm reverting this now.
Lang Hames [Fri, 6 Feb 2015 23:04:53 +0000 (23:04 +0000)]
[Orc] Add a Kaleidoscope/Orc tutorial demonstrating lazy-codegen.
This tutorial builds on the initial kaleidoscope/orc tutorial by adding a
LazyEmittingLayer to the custom stack. This extra layer defers compilation
of modules in the JIT until they are statically referenced.
Lang Hames [Fri, 6 Feb 2015 22:52:04 +0000 (22:52 +0000)]
[Orc] Add a Kaleidoscope tutorial for Orc demonstrating eager compilation.
This tutorial demonstrates a very basic custom Orc JIT stack that performs eager
compilation: All modules are CodeGen'd immediately upon being added to the JIT.
Remove unnecessary restriction of 24-bits for line numbers in
`MDLocation`.
The rest of the debug info schema (with the exception of local
variables) uses 32-bits for line numbers. As I introduce the
specialized nodes, it makes sense to canonicalize on one size or the
other.
An atomic store always makes the target location fully initialized (in the
current implementation). It should not store an origin. Initialized memory can't
have a meaningful origin, and, due to origin granularity (4 bytes), there is a
chance that this extra store would overwrite a meaningful origin for an adjacent
location.
Zachary Turner [Fri, 6 Feb 2015 20:30:52 +0000 (20:30 +0000)]
Resubmit "Create lib/DebugInfo/PDB" (r228428)
This change resubmits the patch that broke the build, this time
without unittests. The unittests will be submitted separately
after the problem has been addressed:
--Original Commit Message--
Create lib/DebugInfo/PDB.
This patch creates a platform-independent interface to a PDB reader.
There is currently no implementation of this interface, which will
be provided in future patches. This defines the basic object model
which any implementation must conform to.
Reviewed by: David Blaikie
Differential Revision: http://reviews.llvm.org/D7356
Use estimated number of optimized insns in unroll-threshold computation.
If complete unrolling could help us to optimize away N% of the instructions, we
might want to do it even if the final size would exceed the loop-unroll
threshold. However, we don't want to unroll huge loops, so we add an
AbsoluteThreshold to avoid that - this threshold is never crossed,
even if we expect to optimize away 99% of the instructions afterwards.
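A hedged sketch of that decision, with made-up names for the thresholds (not the actual pass code):

  // Decide whether to completely unroll, given an estimate of how many
  // instructions the unrolled body would contain and how many of those we
  // expect to be optimized away afterwards.  All names here are illustrative.
  bool shouldCompletelyUnroll(unsigned UnrolledSize, unsigned OptimizedAwaySize,
                              unsigned Threshold, unsigned MinPercentOptimized,
                              unsigned AbsoluteThreshold) {
    // Never exceed the absolute cap, no matter how much we expect to optimize.
    if (UnrolledSize > AbsoluteThreshold)
      return false;
    // Within the normal threshold, unroll as before.
    if (UnrolledSize <= Threshold)
      return true;
    // Above the normal threshold, unroll only if enough of the unrolled code
    // is expected to be optimized away.
    return OptimizedAwaySize * 100 >= UnrolledSize * MinPercentOptimized;
  }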
It is a variation of SimplifyBinOp, but it takes into account
FastMathFlags.
It is needed in the inliner and the loop unroller to accurately predict the
transformation's outcome (previously we dropped the flags and were too
conservative in some cases).
Example:
float foo(float *a, float b) {
  float r;
  if (a[1] * b)
    r = /* a lot of expensive computations */;
  else
    r = 1;
  return r;
}
float boo(float *a) {
  return foo(a, 0.0);
}
Without this patch, we don't inline 'foo' into 'boo'.
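A conceptual C++ sketch of why the flags matter in this example: a[1] * 0.0 only folds to 0.0 when no-NaNs and no-signed-zeros are assumed (illustrative code, not the InstructionSimplify API):

  #include <optional>

  struct FastMathFlagsModel {
    bool NoNaNs = false;        // operands and results assumed not to be NaN
    bool NoSignedZeros = false; // -0.0 and +0.0 treated as interchangeable
  };

  // Try to simplify X * 0.0.  Without the flags we must be conservative:
  // NaN * 0.0 is NaN and -X * 0.0 is -0.0, so the fold is only valid when
  // the caller promises those cases do not matter.
  std::optional<double> simplifyMulByZero(const FastMathFlagsModel &FMF) {
    if (FMF.NoNaNs && FMF.NoSignedZeros)
      return 0.0;
    return std::nullopt; // cannot simplify conservatively
  }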
Zachary Turner [Fri, 6 Feb 2015 19:44:09 +0000 (19:44 +0000)]
Create lib/DebugInfo/PDB.
This patch creates a platform-independent interface to a PDB reader.
There is currently no implementation of this interface, which will
be provided in future patches. This defines the basic object model
which any implementation must conform to.
Reviewed by: David Blaikie
Differential Revision: http://reviews.llvm.org/D7356
Lang Hames [Fri, 6 Feb 2015 19:34:04 +0000 (19:34 +0000)]
[Orc] Fix syntax error in LazyEmittingLayer::removeModuleSet.
This was a trivial think-o, but it's in a method of a templated class
and doesn't have any callers yet, so the compiler let it pass. I hope
to add a unit test to cover this soon.
[LiveIntervalAnalysis] Speed up creation of live ranges for physical registers
by using a segment set.
The patch addresses a compile-time performance regression in the LiveIntervals
analysis pass (see http://llvm.org/bugs/show_bug.cgi?id=18580). This regression
is especially critical when compiling long functions. Our analysis showed
that most of the time is spent generating live intervals for physical
registers. Insertions in the middle of the array of live ranges cause quadratic
algorithmic complexity, which is apparently the main reason for the slow-down.
Overview of changes:
- The patch introduces an additional std::set<Segment>* member in LiveRange for
storing segments in the phase of initial creation. The set is used if this
member is not NULL, otherwise everything works the old way.
- The set of operations on LiveRange used during initial creation (i.e. used by
createDeadDefs and extendToUses) has been reimplemented to use the segment
set if it is available.
- After a live range is created the contents of the set are flushed to the
segment vector, because the set is not as efficient as the vector for the
later uses of the live range. After the flushing, the set is deleted and
cannot be used again.
- The set is only used for live ranges computed in
LiveIntervalAnalysis::computeLiveInRegUnits() and getRegUnit(), but not in
computeVirtRegs(), because it did not bring any performance benefits to
computeVirtRegs() and for some examples even caused a slowdown.
Patch by Vaidas Gasiunas <vaidas.gasiunas@sap.com>
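A minimal sketch of the two-phase construction described above, with illustrative types rather than the real LiveRange interface:

  #include <memory>
  #include <set>
  #include <vector>

  struct Segment {
    unsigned Start, End;
    bool operator<(const Segment &Other) const { return Start < Other.Start; }
  };

  class LiveRangeModel {
    std::vector<Segment> Segments;               // final, cache-friendly storage
    std::unique_ptr<std::set<Segment>> BuildSet; // only alive during creation

  public:
    // During initial creation, insert into the set: each insertion is
    // O(log n) instead of shifting the middle of a vector (quadratic overall).
    void startCreation() { BuildSet = std::make_unique<std::set<Segment>>(); }

    void addSegment(Segment S) {
      if (BuildSet)
        BuildSet->insert(S);
      else
        Segments.push_back(S); // old path (real code keeps the vector sorted)
    }

    // Once the range is complete, flush the set into the vector, which is
    // more efficient for later users of the live range, and drop the set.
    void finishCreation() {
      if (!BuildSet)
        return;
      Segments.assign(BuildSet->begin(), BuildSet->end());
      BuildSet.reset();
    }
  };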
Adam Nemet [Fri, 6 Feb 2015 18:31:04 +0000 (18:31 +0000)]
[LV] Move addRuntimeCheck to LoopAccessAnalysis
This will allow it to be shared with the new Loop Distribution pass.
getFirstInst is currently duplicated across LoopVectorize.cpp and
LoopAccessAnalysis.cpp. This is a short-term work-around until we figure out
a better solution.
NFC. (The moved code is adjusted a bit for the name of the Loop member and
for the fact that PtrRtCheck is now a reference rather than a pointer.)
Matthias Braun [Fri, 6 Feb 2015 17:49:36 +0000 (17:49 +0000)]
InstCombine: Combine select sequences into a single select
Normalize
select(C0, select(C1, a, b), b) -> select((C0 & C1), a, b)
select(C0, a, select(C1, a, b)) -> select((C0 | C1), a, b)
This normal form may enable further combines on the And/Or and shortens
paths for the values. Many targets prefer the other but can go back
easily in CodeGen.
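In C terms the normalization corresponds to the rewrites below (a tiny illustration; the real transform operates on IR select instructions):

  // select(C0, select(C1, a, b), b)  ==  select(C0 & C1, a, b)
  int nestedTrueArm(bool C0, bool C1, int a, int b) {
    return C0 ? (C1 ? a : b) : b; // before: two selects
  }
  int combinedTrueArm(bool C0, bool C1, int a, int b) {
    return (C0 && C1) ? a : b;    // after: one select on the and-ed condition
  }

  // select(C0, a, select(C1, a, b))  ==  select(C0 | C1, a, b)
  int nestedFalseArm(bool C0, bool C1, int a, int b) {
    return C0 ? a : (C1 ? a : b); // before
  }
  int combinedFalseArm(bool C0, bool C1, int a, int b) {
    return (C0 || C1) ? a : b;    // after
  }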
llvm-mode was previously confused when variable names contained keywords.
This change ensures that keywords are only highlighted when they stand alone.
Introduce print-memderefs to test isDereferenceablePointer
Since testing the function indirectly is tricky, introduce a direct
print-memderefs pass, in the same spirit as print-memdeps, which prints
dereferenceability information that can be matched by FileCheck.
Matthias Braun [Thu, 5 Feb 2015 23:52:14 +0000 (23:52 +0000)]
AArch64: Make test more robust.
Avoid the creation of select instructions which can result in different
scheduling of the selects.
I also added a bunch of additional volatile stores. Those avoid a
CodeGen problem (bug?) where normalizing and denormalizing the control flow
moves all shift instructions into the first block, where ISel can't match
them together with the cmps.
Daniel Jasper [Thu, 5 Feb 2015 22:39:46 +0000 (22:39 +0000)]
Small cleanup of MachineLICM.cpp
Specifically:
- Calculate the loop pre-header once at the start of HoistOutOfLoop, so:
- We don't DFS-walk the MachineDomTree if we aren't going to do anything
- Don't call getCurPreheader for each Scope
- Don't needlessly use a do-while loop
- Use early exit for Scopes.size() == 0
Ahmed Bougacha [Thu, 5 Feb 2015 21:10:14 +0000 (21:10 +0000)]
[BasicAA] Add datalayouts to make some tests more useful. NFC.
Fixes PR22462: two of the tests have regressed for a while,
but were using CHECK-NOT to match "May:". The actual output
was changed to "MayAlias:" at some point, which made the tests
useless.
Two others return MayAlias only because of a lack of analysis;
BasicAA returns PartialAlias in those cases, when a datalayout
is present.