[objc-arc] Add the predicate CanDecrementRefCount.
This is different from CanAlterRefCount: CanDecrementRefCount attempts
to prove specifically whether an instruction can decrement a reference
count, rather than answering the more general question of whether it
can decrement or increment one.
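As a rough illustration, here is a minimal sketch of how such a decrement-only predicate might sit next to the broader query. Everything except the two predicate names is an assumption, not the real ObjCARC interface:

```cpp
// Illustrative sketch only: the enum and classifyEffect() are assumptions;
// only the two predicate names come from the commit above.
namespace llvm {
class Instruction;
class Value;
} // namespace llvm
using namespace llvm;

enum class RefCountEffect { None, MayIncrement, MayDecrement, MayBoth };

// Hypothetical classification of an instruction's effect on Ptr's count.
RefCountEffect classifyEffect(const Instruction *Inst, const Value *Ptr);

// The general question: can Inst change Ptr's reference count at all?
bool CanAlterRefCount(const Instruction *Inst, const Value *Ptr) {
  return classifyEffect(Inst, Ptr) != RefCountEffect::None;
}

// The narrower question this predicate answers: can Inst decrement it?
bool CanDecrementRefCount(const Instruction *Inst, const Value *Ptr) {
  RefCountEffect E = classifyEffect(Inst, Ptr);
  return E == RefCountEffect::MayDecrement || E == RefCountEffect::MayBoth;
}
```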
When trying to match the current schema with the new debug info
hierarchy, I downgraded `SizeInBits`, `AlignInBits` and `OffsetInBits`
to 32-bits (oops!). Caught this while testing my upgrade script to move
the hierarchy into place. Bump it back up to 64-bits and update tests.
Ahmed Bougacha [Thu, 19 Feb 2015 23:52:41 +0000 (23:52 +0000)]
[ARM] Re-re-apply VLD1/VST1 base-update combine.
This re-applies r223862, r224198, r224203, and r224754, which were
reverted in r228129 because they exposed Clang misalignment problems
when self-hosting.
The combine caused the crashes because we turned ISD::LOAD/STORE nodes
into ARMISD::VLD1/VST1_UPD nodes. When selecting addressing modes, we
were very lax for the former, and only emitted the alignment operand
(as in "[r1:128]") when it was larger than the standard alignment of
the memory type.
However, for ARMISD nodes, we just used the MMO alignment, no matter
what. In our case, we turned ISD nodes into ARMISD nodes, and this
caused the alignment operands to start being emitted.
And that's how we exposed alignment problems that had been ignored
before (but I believe they would have been caught with SCTLR.A==1?).
To fix this, we can just mirror the hack done for ISD nodes: only
take into account the MMO alignment when the access is overaligned.
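A minimal sketch of the hack being mirrored, under the assumption that the decision can be phrased as a small helper (the function name is illustrative): the explicit alignment operand is only emitted when the MMO alignment exceeds the natural alignment of the memory type.

```cpp
// Illustrative only: mirror the ISD::LOAD/STORE behavior for the
// ARMISD::VLD1_UPD/VST1_UPD nodes -- only advertise an alignment operand
// (the ":128" in "[r1:128]") when the access is overaligned.
unsigned chooseAlignmentOperand(unsigned MMOAlignInBytes,
                                unsigned NaturalAlignInBytes) {
  return MMOAlignInBytes > NaturalAlignInBytes ? MMOAlignInBytes : 0;
}
```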
Original commit message:
We used to only combine intrinsics, and turn them into VLD1_UPD/VST1_UPD
when the base pointer is incremented after the load/store.
We can do the same thing for generic load/stores.
Note that we can only combine the first load/store+adds pair in
a sequence (as might be generated for a v16f32 load for instance),
because other combines turn the base pointer addition chain (each
computing the address of the next load, from the address of the last
load) into independent additions (common base pointer + this load's
offset).
Eric Christopher [Thu, 19 Feb 2015 23:52:35 +0000 (23:52 +0000)]
Only use MCInstrInfo if it has already been initialized during
SetupMachineFunction. This is also the single use of MII
and it'll be changing to TargetInstrInfo (which is MachineFunction
based) in the next commit here.
There's no way for `DIBuilder` to create a subprogram or global variable
where `getName()` and `getDisplayName()` give different answers. This
testcase managed to achieve the feat though. This was probably just
left behind in some sort of upgrade along the way.
Ahmed Bougacha [Thu, 19 Feb 2015 23:30:37 +0000 (23:30 +0000)]
[ARM] Minor cleanup to CombineBaseUpdate. NFC.
In preparation for a future patch:
- rename isLoad to isLoadOp: the former is confusing, and can be taken
to refer to the fact that the node is an ISD::LOAD. (it isn't, yet.)
- change formatting here and there.
- add some comments.
- const-ify bools.
Eric Christopher [Thu, 19 Feb 2015 23:29:42 +0000 (23:29 +0000)]
Migrate away a use of the subtarget (and TargetMachine) from
AsmPrinterDwarf since the information is on the MCRegisterInfo
via the MCContext and MMI that we already have on the AsmPrinter.
Add a missing `nullptr` to `MDSubroutineType`'s operands for
`MDCompositeTypeBase::getIdentifier()` (and add tests for all the other
unused fields). This highlights just how crazy it is that
`MDSubroutineType` inherits from `MDCompositeTypeBase`.
Eric Christopher [Thu, 19 Feb 2015 21:24:23 +0000 (21:24 +0000)]
Remove a call to TargetMachine::getSubtarget from the inline
asm support in the asm printer. If we can get a subtarget from
the machine function then we should do so; otherwise we can
go ahead and create a default one since we're at the module
level.
[objc-arc] Convert the bodies of ARCInstKind predicates into covered switches.
This is much better than the previous approach of just chaining
short-circuiting booleans, from:
1. A "naive" efficiency perspective: we do not have to rely on the
compiler to change the short-circuiting boolean operations into a
switch.
2. An understandability perspective: the implicit behavior of negative
predicates becomes explicit.
3. A maintainability perspective: the covered-switch warning makes it
easy to know where to update code when adding new ARCInstKinds (as
sketched below).
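As an illustration of the pattern (not the actual ObjCARC code), a covered switch over a reduced, hypothetical subset of ARCInstKind; with every enumerator listed and no default case, the compiler warns at each predicate that needs updating when a new kind is added:

```cpp
#include "llvm/Support/ErrorHandling.h"

// Reduced, illustrative subset; the real ARCInstKind has many more values.
enum class ARCInstKind { Retain, Release, Autorelease, Call, None };

bool IsRetain(ARCInstKind Kind) {
  // Covered switch: no default, so adding a new ARCInstKind triggers a
  // -Wswitch warning here until this predicate is updated.
  switch (Kind) {
  case ARCInstKind::Retain:
    return true;
  case ARCInstKind::Release:
  case ARCInstKind::Autorelease:
  case ARCInstKind::Call:
  case ARCInstKind::None:
    return false;
  }
  llvm_unreachable("Unhandled ARCInstKind in covered switch.");
}
```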
Chris Bieneman [Thu, 19 Feb 2015 19:50:52 +0000 (19:50 +0000)]
Checking if TARGET_OS_IPHONE is defined isn't good enough for 10.7 and earlier.
Older versions of the TargetConditionals header always defined TARGET_OS_IPHONE to something (0 or 1), so we need to test not only that it is defined but also that it is 1.
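A minimal sketch of the stricter check described above (illustrative, not the exact hunk from this commit):

```cpp
#if defined(__APPLE__)
#include <TargetConditionals.h>
// Older TargetConditionals headers define TARGET_OS_IPHONE to 0 on OS X,
// so testing defined(TARGET_OS_IPHONE) alone is not enough; check that it
// is actually 1.
#if defined(TARGET_OS_IPHONE) && TARGET_OS_IPHONE == 1
// iOS-only code path goes here.
#endif
#endif
```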
Adam Nemet [Thu, 19 Feb 2015 19:15:19 +0000 (19:15 +0000)]
[LoopAccesses] Add -analyze support
The LoopInfo in combination with depth_first is used to enumerate the
loops.
Right now -analyze is not yet complete. It only prints the result of
the analysis, the report, and the run-time checks. Printing the unsafe
dependences will require a bit more reshuffling, which I'd like to do in
a follow-on to this patchset. Unsafe dependences are currently checked
via -debug-only=loop-accesses in the new test.
This is part of the patchset that converts LoopAccessAnalysis into an
actual analysis pass.
Adam Nemet [Thu, 19 Feb 2015 19:15:15 +0000 (19:15 +0000)]
[LoopAccesses] Split out LoopAccessReport from VectorizerReport
The only difference between these two is that VectorizerReport adds a
vectorizer-specific prefix to its messages. When LAA is used in the
vectorizer context the prefix is added when we promote the
LoopAccessReport into a VectorizerReport via one of the constructors.
This is part of the patchset that converts LoopAccessAnalysis into an
actual analysis pass.
Adam Nemet [Thu, 19 Feb 2015 19:15:10 +0000 (19:15 +0000)]
[LoopAccesses] Add canAnalyzeLoop
This allows the analysis to be attempted on any loop. This feature
will be used with -analyze. (LV only requests the analysis on loops
that have already satisfied these tests.)
This is part of the patchset that converts LoopAccessAnalysis into an
actual analysis pass.
Adam Nemet [Thu, 19 Feb 2015 19:15:04 +0000 (19:15 +0000)]
[LoopAccesses] Create the analysis pass
This is a function pass that runs the analysis on demand. The analysis
can be initiated by querying the loop access info via LAA::getInfo. It
either returns the cached info or runs the analysis.
Symbolic stride information continues to reside outside of this analysis
pass. We may move it inside later but it's not a priority for me right
now. The idea is that Loop Distribution won't support run-time stride
checking at least initially.
This means that when querying the analysis, symbolic stride information
can be provided optionally. A change in whether stride information is
used can invalidate the cached entry and rerun the analysis. Note that
if the loop does not have any symbolic strides, the entry should be
preserved across Loop Distribution and LV.
Since currently the only user of the pass is LV, I just check that the
symbolic stride information didn't change when using a cached result.
On the LV side, LoopVectorizationLegality requests the info object
corresponding to the loop from the analysis pass. A large chunk of the
diff is due to LAI becoming a pointer from a reference.
A test will be added as part of the -analyze patch.
Also tested that with AVX, we generate identical assembly output for the
testsuite (including the external testsuite) before and after.
This is part of the patchset that converts LoopAccessAnalysis into an
actual analysis pass.
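A hedged sketch of what querying the pass from a transformation might look like; the exact getInfo() signature, the header path, and the stride-map type are assumptions based on the description above:

```cpp
// Illustrative only: signatures here are assumptions, not the exact API.
#include "llvm/Analysis/LoopAccessAnalysis.h"

using namespace llvm;

bool loopMemoryIsSafe(LoopAccessAnalysis &LAA, Loop *L,
                      const ValueToValueMap &SymbolicStrides) {
  // Returns the cached LoopAccessInfo for L, or runs the analysis on demand
  // (optionally taking symbolic stride information into account).
  const LoopAccessInfo &LAI = LAA.getInfo(L, SymbolicStrides);
  return LAI.canVectorizeMemory();
}
```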
Adam Nemet [Thu, 19 Feb 2015 19:15:00 +0000 (19:15 +0000)]
[LoopAccesses] Cache the result of canVectorizeMemory
LAA will be an on-demand analysis pass, so we need to cache the result
of the analysis. canVectorizeMemory is renamed to analyzeLoop which
computes the result. canVectorizeMemory becomes the query function for
the cached result.
This is part of the patchset that converts LoopAccessAnalysis into an
actual analysis pass.
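A minimal sketch of the compute-vs-query split, assuming a cached flag of roughly this shape (the member name is an assumption):

```cpp
// Illustrative sketch of the caching pattern described above.
class LoopAccessInfoSketch {
  bool CanVecMem = false; // cached verdict; field name is an assumption

public:
  // analyzeLoop() does the actual work once and records the result.
  void analyzeLoop() {
    bool SafeDependences = true; // placeholder for the real analysis
    CanVecMem = SafeDependences;
  }

  // canVectorizeMemory() is now just the query for the cached result.
  bool canVectorizeMemory() const { return CanVecMem; }
};
```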
Adam Nemet [Thu, 19 Feb 2015 19:14:52 +0000 (19:14 +0000)]
[LoopAccesses] Make VectorizerParams global + fix for cyclic dep
As LAA is becoming a pass, we can no longer pass the params to its
constructor. This changes the command line flags to have external
storage. These can now be accessed both from LV and LAA.
VectorizerParams is moved out of LoopAccessInfo in order to shorten the
code to access it.
This commit also has the fix (D7731) to break the dependence cycle
between the analysis and vector libraries.
This is part of the patchset that converts LoopAccessAnalysis into an
actual analysis pass.
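A hedged sketch of a command-line flag with external storage that both LV and LAA can read; the struct, member, and flag name are illustrative:

```cpp
#include "llvm/Support/CommandLine.h"

// Illustrative only: the struct, member, and flag name are assumptions.
struct VectorizerParamsSketch {
  static unsigned RuntimeMemoryCheckThreshold;
};
unsigned VectorizerParamsSketch::RuntimeMemoryCheckThreshold;

// cl::opt with external storage (the `true` template parameter plus
// cl::location): the parsed value lands in the static member above, which
// both the vectorizer and the loop-access analysis can read without the
// analysis pass needing constructor parameters.
static llvm::cl::opt<unsigned, true> RuntimeMemoryCheck(
    "runtime-memory-check-threshold", llvm::cl::Hidden,
    llvm::cl::location(VectorizerParamsSketch::RuntimeMemoryCheckThreshold),
    llvm::cl::init(8));
```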
Adam Nemet [Thu, 19 Feb 2015 19:14:34 +0000 (19:14 +0000)]
Revert "Reformat."
This reverts commit r229651.
I'd like to ultimately revert r229650 but this reformat stands in the
way. I'll reformat the affected files once the loop-access pass is
fully committed.
Ben Langmuir [Thu, 19 Feb 2015 18:22:35 +0000 (18:22 +0000)]
Assume the original file is created before release in LockFileManager
This is true in clang, and lets us remove the problematic code that
waits around for the original file and then times out if it doesn't get
created in short order. That code caused any 'dead' lock file or
legitimate time-out to trigger a cascade of timeouts in any processes
waiting on the same lock (even if they had only just shown up).
Sanjay Patel [Thu, 19 Feb 2015 16:59:11 +0000 (16:59 +0000)]
add X86 load folding tests for unary math ops
X86 load folding is fragile; eg, the tests here
don't work without AVX even though they should. This
is because we have a mix of tablegen patterns that have
been added over time, and we have a load folding table
used by the peephole optimizer that has to be kept in
sync with the ever-changing ISA and tablegen defs.
Chandler Carruth [Thu, 19 Feb 2015 15:21:57 +0000 (15:21 +0000)]
[x86] Delete still more piles of complex code now that we have a good
systematic lowering of v8i16.
This required a slight strategy shift to prefer unpack lowerings in more
places. While this isn't a cut-and-dried win in every case, it is in the
overwhelming majority. There are only a few places where the old
lowering would probably be a touch faster, and then only by a small
margin.
In some cases, this is yet another significant improvement.
Chandler Carruth [Thu, 19 Feb 2015 14:08:24 +0000 (14:08 +0000)]
[x86] Dramatically improve v8i16 shuffle lowering by not using its
terribly complex partial blend logic.
This code path was one of the more complex and bug-prone ones when it
first went in, and it hasn't fared much better since. Ultimately, with
the simpler basis for unpack lowering and support for bit-math blending,
it is completely obsolete. In the worst case, without it we generate
different but equivalent instructions. However, in many cases we
generate much better code. This is especially true when blends or
pshufb are available.
This does expose one (minor) weakness of the unpack lowering that I'll
try to address.
In case you were wondering, this is actually a big part of what I've
been trying to pull off in the recent string of commits.
Chandler Carruth [Thu, 19 Feb 2015 13:56:49 +0000 (13:56 +0000)]
[x86] Remove the final fallback in the v8i16 lowering that isn't really
needed, and significantly improve the SSSE3 path.
This makes the new strategy much more clear. If we can blend, we just go
with that. If we can't blend, we try to permute into an unpack so
that we handle cases where the unpack doing the blend also simplifies
the shuffle. If that fails and we've got SSSE3, we now call into
factored-out pshufb lowering code so that we leverage the fact that
pshufb can set up a blend for us while shuffling. This generates great
code, especially because we *know* we don't have a fast blend at this
point. Finally, we fall back on decomposing into permutes and blends
because we do at least have a bit-math-based blend if we need to use
that.
This pretty significantly improves some of the v8i16 code paths. We
never need to form pshufb for the single-input shuffles because we have
effective target-specific combines to form it there, but we were missing
its effectiveness in the blends.
Chandler Carruth [Thu, 19 Feb 2015 13:15:12 +0000 (13:15 +0000)]
[x86] Simplify the pre-SSSE3 v16i8 lowering significantly by decomposing
them into permutes and a blend with the generic decomposition logic.
This works really well in almost every case and lets the code only
manage the expansion of a single input into two v8i16 vectors to perform
the actual shuffle. The blend-based merging is often much nicer than
the pack-based merging that this replaces. The only place where it
isn't is when we end up blending between two packs when we could do a
single pack. To
handle that case, just teach the v2i64 lowering to handle these blends
by digging out the operands.
With this we're down to only really random permutations that cause an
explosion of instructions.
Chandler Carruth [Thu, 19 Feb 2015 12:10:37 +0000 (12:10 +0000)]
[x86] Remove the insanely over-aggressive unpack lowering strategy for
v16i8 shuffles, and replace it with new facilities.
This uses precise patterns to match exact unpacks, and the new
generalized unpack lowering only when we detect a case where we will
have to shuffle both inputs anyway and they terminate in exactly
a blend.
This fixes all of the blend horrors that I uncovered by always lowering
blends through the vector shuffle lowering. It also removes *sooooo*
much of the crazy instruction sequences required for v16i8 lowering
previously. Much cleaner now.
The only "meh" aspect is that we sometimes use pshufb+pshufb+unpck when
it would be marginally nicer to use pshufb+pshufb+por. However, the
difference there is *tiny*. In many cases it's a win because we re-use
the pshufb mask. In others, we get to avoid the pshufb entirely. I've
left a FIXME, but I'm dubious we can really do better than this. I'm
actually pretty happy with this lowering now.
For SSE2 this exposes some horrors that were really already there. Those
will have to be fixed by changing a different path through the v16i8
lowering.
Chandler Carruth [Thu, 19 Feb 2015 11:43:37 +0000 (11:43 +0000)]
[x86] The SELECT x86 DAG combine also does legalization. It used to rely
on things not being marked as either custom or legal, but we now do
custom lowering of more VSELECT nodes. To cope with this, manually
replicate the legality tests here. These have to stay in sync with the
set of tests used in the custom lowering of VSELECT.
Ideally, we wouldn't do any of this combine-based-legalization when we
have an actual custom legalization step for VSELECT, but I'm not going
to be able to rewrite all of that today.
I don't have a test case for this currently, but it was found when
compiling a number of the test-suite benchmarks. I'll try to reduce
a test case and add it.
This should at least fix the test-suite fallout on build bots.
Chandler Carruth [Thu, 19 Feb 2015 10:46:52 +0000 (10:46 +0000)]
[x86] Add support for bit-wise blending and use it in the v8 and v16
lowering paths. I'm going to be leveraging this to simplify a lot of the
overly complex lowering of v8 and v16 shuffles in pre-SSSE3 modes.
Sadly, this isn't profitable on v4i32 and v2i64. There, the float and
double blending instructions for pre-SSE4.1 are actually pretty good,
and we can't beat them with bit math. And once SSE4.1 comes around we
have direct blending support and this ceases to be relevant.
Also, some of the test cases look odd because the domain fixer
canonicalizes these to floating point domain. That's OK, it'll use the
integer domain when it matters and some day I may be able to update
enough of LLVM to canonicalize the other way.
This restores almost all of the regressions from teaching x86's vselect
lowering to always use vector shuffle lowering for blends. The remaining
problems are because the v16 lowering path is still doing crazy things.
I'll be re-arranging that strategy in more detail in subsequent commits
to finish recovering the performance here.
Chandler Carruth [Thu, 19 Feb 2015 10:36:19 +0000 (10:36 +0000)]
[x86,sdag] Two interrelated changes to the x86 and sdag code.
First, don't combine bit masking into vector shuffles (even ones the
target can handle) once operation legalization has taken place. Custom
legalization of vector shuffles may exist for these patterns (making the
predicate return true) but that custom legalization may in some cases
produce the exact bit math this matches. We only really want to handle
this prior to operation legalization.
However, the x86 backend, in a fit of awesome, relied on this. What it
would do is mark VSELECTs as expand, which would turn them into
arithmetic, which this would then match back into vector shuffles, which
we would then lower properly. Amazing.
Instead, the second change is to teach the x86 backend to directly form
vector shuffles from VSELECT nodes with constant conditions, and to mark
all of the vector types we support lowering blends as shuffles as custom
VSELECT lowering. We still mark the forms which actually support
variable blends as *legal* so that the custom lowering is bypassed, and
the legal lowering can even be used by the vector shuffle legalization
(yes, I know, this is confusing, but that's how the patterns are
written).
This makes the VSELECT lowering much more sensible, and in fact should
fix a bunch of bugs with it. However, as you'll see in the test cases,
right now what it does is point out the *hilarious* deficiency of the
new vector shuffle lowering when it comes to blends. Fortunately, my
very next patch fixes that. I can't submit it yet, because that patch,
somewhat obviously, forms the exact and/or pattern that the DAG combine
is matching here! Without this patch, teaching the vector shuffle
lowering to produce the right code sends the DAG combiner into an
infinite loop. With
this patch alone, we produce terrible code but at least lower through
the right paths. With both patches, all the regressions here should be
fixed, and a bunch of the improvements (like using 2 shufps with no
memory loads instead of 2 andps with memory loads and an orps) will
stay. Win!
There is one other change worth noting here. We had hilariously wrong
vectorization cost estimates for vselect because we fell through to the
code path that assumed all "expand" vector operations are scalarized.
However, the "expand" lowering of VSELECT is vector bit math, most
definitely not scalarized. So now we go back to the correct if horribly
naive cost of "1" for "not scalarized". If anyone wants to add actual
modeling of shuffle costs, that would be cool, but this seems an
improvement on its own. Note the removal of 16 and 32 "costs" for doing
a blend. Even in SSE2 we can blend in fewer than 16 instructions. ;] Of
course, we don't right now because of OMG bad code, but I'm going to fix
that. Next patch. I promise.
Previously, subtarget features were a bitfield with uint64_t as the underlying type.
Since several targets (X86 and ARM, in particular) have hit or come very close to hitting this bound, switch the features to use a bitset.
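A minimal sketch of the switch, assuming a bitset wrapper of roughly this shape (the maximum and class name are illustrative):

```cpp
#include <bitset>
#include <cstdint>
#include <initializer_list>

// Before: the feature mask topped out at 64 bits.
using OldFeatureMask = uint64_t;

// After (illustrative): a bitset sized by a shared maximum, so targets such
// as X86 and ARM can grow past 64 features.
constexpr unsigned MaxSubtargetFeatures = 128; // value is an assumption
class FeatureBitsetSketch : public std::bitset<MaxSubtargetFeatures> {
public:
  FeatureBitsetSketch() = default;
  FeatureBitsetSketch(std::initializer_list<unsigned> Bits) {
    for (unsigned B : Bits)
      set(B);
  }
};
```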
Dmitri Gribenko [Thu, 19 Feb 2015 05:30:16 +0000 (05:30 +0000)]
Provide the same ABI regardless of NDEBUG
For projects depending on LLVM, I find it very useful to combine a
release-no-asserts build of LLVM with a debug+asserts build of the dependent
project. The motivation is that when developing a dependent project, you are
debugging that project itself, not LLVM. In my use case, a significant part of
the runtime is spent in LLVM optimization passes, so I would like to build LLVM
without assertions to get the best performance from this combination.
Currently, `lib/Support/Debug.cpp` changes the set of symbols it provides
depending on NDEBUG, while `include/llvm/Support/Debug.h` requires extra
symbols when NDEBUG is not defined. Thus, it is not possible to enable
assertions in an external project that uses facilities of `Debug.h`.
This patch changes `Debug.cpp` and `Valgrind.cpp` to always define the symbols
that other code may depend on when #including LLVM headers without NDEBUG.
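The gist, as a hedged sketch: the definition in the .cpp file is no longer guarded by NDEBUG, so a release build of the library still exports a symbol that an assertions-enabled client may reference:

```cpp
// Illustrative sketch: previously this definition was wrapped in
// "#ifndef NDEBUG"; defining it unconditionally keeps the set of exported
// symbols identical whether or not the library was built with assertions.
namespace llvm {
bool DebugFlag = false;
} // namespace llvm
```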
Eric Christopher [Thu, 19 Feb 2015 01:10:49 +0000 (01:10 +0000)]
Remove the DisasmEnabled AsmPrinter variable and just look it
up on the subtarget where it's set anyhow, rather than looking it up
2-3 times in the same place.
[objc-arc] Introduce the concept of RCIdentity and rename all relevant functions to use that name. NFC.
The RCIdentity root ("Reference Count Identity Root") of a value V is a
dominating value U for which retaining or releasing U is equivalent to
retaining or releasing V. In other words, ARC operations on V are
equivalent to ARC operations on U.
This is a useful property to ascertain since we can use this in the ARC
optimizer to make it easier to match up ARC operations by always mapping
ARC operations to RCIdentityRoots instead of the pointers themselves.
Then we pair retains and releases that are applied to the same
RCIdentityRoot.
In general, the two ways that we see RCIdentical values in ObjC are via:
1. PointerCasts
2. Forwarding Calls that return their argument verbatim.
As such in ObjC, two RCIdentical pointers must always point to the same
memory location.
Previously this concept was implicit in the code, and the various
methods that dealt with it were given functional names that did not
conform to any name in the "ARC" model. This often resulted in code
that was hard for those not acquainted with ARC to understand,
resulting in unhappiness and confusion.
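A minimal sketch of chasing an RCIdentity root by stripping the two cases listed above (pointer casts and verbatim-forwarding calls); the forwarding-call helper is hypothetical:

```cpp
#include "llvm/IR/Instructions.h"
#include "llvm/IR/Value.h"
#include "llvm/Support/Casting.h"

using namespace llvm;

// Hypothetical helper: does this call return its argument verbatim
// (an ObjC forwarding call)?  Its implementation is assumed here.
bool isForwardingCall(const CallInst *CI);

// Walk through pointer casts and forwarding calls to a dominating value
// whose retains/releases are equivalent to those on V.
const Value *getRCIdentityRootSketch(const Value *V) {
  while (true) {
    // Case 1: pointer casts.
    const Value *Stripped = V->stripPointerCasts();
    if (Stripped != V) {
      V = Stripped;
      continue;
    }
    // Case 2: calls that return their argument verbatim.
    if (const auto *CI = dyn_cast<CallInst>(V))
      if (isForwardingCall(CI)) {
        V = CI->getArgOperand(0);
        continue;
      }
    return V;
  }
}
```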
Follow-up to r229740, which removed `DITemplate*::getContext()` after my
upgrade script revealed that scopes are always `nullptr` for template
parameters. This is the other shoe: drop `scope:` from
`MDTemplateParameter` and its two subclasses. (Note: a bitcode upgrade
would be pointless, since the hierarchy hasn't been moved into place.)
Eric Christopher [Thu, 19 Feb 2015 00:08:27 +0000 (00:08 +0000)]
Remove all use of is64bit off of NVPTXSubtarget and clean up code
accordingly. This changes the constructors of a number of classes
that don't need to know the subtarget's 64-bitness.
Eric Christopher [Thu, 19 Feb 2015 00:08:14 +0000 (00:08 +0000)]
Migrate the NVPTX backend asm printer to a per function subtarget.
This involved moving two non-subtarget dependent features (64-bitness
and the driver interface) to the NVPTX target machine and updating
the uses (or migrating around the subtarget use for ease of review).
Otherwise use the cached subtarget or create a default subtarget
based on the TargetMachine cpu and feature string for the module
level assembler emission.
Put the name before the value in assembly for `MDEnum`. While working
on the testcase upgrade script for the new hierarchy, I noticed that it
"looks nicer" to have the name first, since it lines the names up in the
(somewhat typical) case that they have a common prefix.
Add `replaceElements()`, `replaceVTableHolder()`, and
`replaceTemplateParams()` to `MDCompositeTypeBase`. Included an
assertion in `replaceElements()` to match the one in
`DICompositeType::replaceArrays()`.