Matt Arsenault [Sat, 15 Aug 2015 00:12:30 +0000 (00:12 +0000)]
AMDGPU/SI: Make comments more precise.
True branch instructions do behave as expected with liveness.
Avoid the phrasing "branch decision is based on a value in an SGPR"
because this could be misleading. A VALU compare instruction's
result is still based on an SGPR, even though that condition
may be divergent.
[SCEV] Apply NSW and NUW flags via poison value analysis for sub, mul and shl
Summary:
http://reviews.llvm.org/D11212 made Scalar Evolution able to propagate NSW and NUW flags from instructions to SCEVs for add instructions. This patch expands that to sub, mul and shl instructions.
This change makes LSR able to generate pointer induction variables for loops like these, where the index is 32-bit and the pointer is 64-bit:
  for (int i = 0; i < numIterations; ++i)
    sum += ptr[i - offset];

  for (int i = 0; i < numIterations; ++i)
    sum += ptr[i * stride];

  for (int i = 0; i < numIterations; ++i)
    sum += ptr[3 * (i << 7)];
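A hedged IR sketch of the first loop above (function and value names are hypothetical, and it assumes numIterations > 0): once the nsw flag on the subtraction propagates to the SCEV, the sext can fold into a single 64-bit pointer induction variable:

define i32 @sum_offset(i32* %ptr, i32 %numIterations, i32 %offset) {
entry:
  br label %loop

loop:
  %i = phi i32 [ 0, %entry ], [ %i.next, %loop ]
  %sum = phi i32 [ 0, %entry ], [ %sum.next, %loop ]
  %idx = sub nsw i32 %i, %offset     ; nsw now reaches the SCEV...
  %idx.ext = sext i32 %idx to i64    ; ...so this sext folds into the IV
  %addr = getelementptr inbounds i32, i32* %ptr, i64 %idx.ext
  %v = load i32, i32* %addr, align 4
  %sum.next = add nsw i32 %sum, %v
  %i.next = add nsw i32 %i, 1
  %cmp = icmp slt i32 %i.next, %numIterations
  br i1 %cmp, label %loop, label %exit

exit:
  ret i32 %sum.next
}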
Pat Gavlin [Fri, 14 Aug 2015 22:41:43 +0000 (22:41 +0000)]
Add a target environment for CoreCLR.
Although targeting CoreCLR is similar to targeting MSVC, there are
certain important differences that the backend must be aware of
(e.g. differences in stack probes, EH, and library calls).
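For illustration, a minimal sketch (the exact triple spelling here is an assumption based on the new environment):

  ; hypothetical module header selecting the CoreCLR environment
  target triple = "x86_64-pc-windows-coreclr"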
We canonicalize V64 vectors to V128 through insert_subvector: the other
FMLA/FMLS/FMUL/FMULX patterns match that already, but this one doesn't,
so we'd fail to match fmls and generate fneg+fmla instead.
The vector equivalents are already tested and functional.
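A hedged sketch of the kind of 64-bit-vector computation involved (function name hypothetical); without the canonicalization, a multiply-subtract like this could be selected as fneg+fmla rather than a single fmls:

declare <2 x float> @llvm.fma.v2f32(<2 x float>, <2 x float>, <2 x float>)

define <2 x float> @fmls_v2f32(<2 x float> %acc, <2 x float> %a, <2 x float> %b) {
  ; acc - a*b, expressed as fma(-a, b, acc)
  %neg = fsub <2 x float> <float -0.000000e+00, float -0.000000e+00>, %a
  %r = call <2 x float> @llvm.fma.v2f32(<2 x float> %neg, <2 x float> %b, <2 x float> %acc)
  ret <2 x float> %r
}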
Michael Kruse [Fri, 14 Aug 2015 20:20:00 +0000 (20:20 +0000)]
[RegionInfo] Remove unused and broken function splitBlock
Summary:
It always makes NewBB the entry of the region instead of OldBB. This breaks if there are edges from inside the region to OldBB: OldBB is moved out of the region, so there are exiting edges to both OldBB and the region's exit block, contradicting the single-exit condition for regions.
The only use from Polly is going to be removed, hence I propose to remove the function completely.
Vedant Kumar [Fri, 14 Aug 2015 18:36:47 +0000 (18:36 +0000)]
[ARM] Fix MachO CPU Subtype selection
This patch makes the Darwin ARM backend take advantage of TargetParser. It
also teaches TargetParser about ARMV7K for the first time. This makes target
triple parsing more consistent across LLVM.
This patch fixes the x86 implementation of allowsMisalignedMemoryAccesses() to correctly
return the 'Fast' output parameter for 32-byte accesses. To test that, an existing load
merging optimization is changed to use the TLI hook. This exposes a shortcoming in the
current logic and results in the regression test update. Changing other direct users of
the isUnalignedMem32Slow() x86 CPU attribute would be a follow-on patch.
Without the fix in allowsMisalignedMemoryAccesses(), we will infinite loop when targeting
SandyBridge because LowerINSERT_SUBVECTOR() creates 32-byte loads from two 16-byte loads
while PerformLOADCombine() splits them back into 16-byte loads.
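A hedged sketch of the shape involved (names hypothetical): two adjacent 16-byte loads that the load-merging combine wants to turn into a single 32-byte load, which LowerINSERT_SUBVECTOR() would then split again on CPUs where unaligned 32-byte accesses are slow:

define <8 x float> @concat_loads(<4 x float>* %p) {
  %lo = load <4 x float>, <4 x float>* %p, align 16
  %hi.ptr = getelementptr inbounds <4 x float>, <4 x float>* %p, i64 1
  %hi = load <4 x float>, <4 x float>* %hi.ptr, align 16
  ; merging these into one 32-byte load is only profitable when the
  ; target reports such accesses as fast
  %v = shufflevector <4 x float> %lo, <4 x float> %hi,
                     <8 x i32> <i32 0, i32 1, i32 2, i32 3,
                                i32 4, i32 5, i32 6, i32 7>
  ret <8 x float> %v
}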
This change adds RTTI and exception flags to llvm-config's cxxflags. This solution is a minimal patch to solve the issue, and is recommended for the 3.7 release branch. Tom Stellard's outstanding work is the longer-term solution.
Rafael Espindola [Fri, 14 Aug 2015 13:31:17 +0000 (13:31 +0000)]
Centralize the information about which object format we are using.
Other than some places that were handling unknown as ELF, this should
be no functional change. The test updates are because we were detecting
arm-coff or x86_64-win64-coff as ELF targets before.
It is not clear if the enum should live on the Triple. At least now it lives
in a single location and should be easier to move somewhere else.
James Molloy [Fri, 14 Aug 2015 09:08:50 +0000 (09:08 +0000)]
[AArch64] FMINNAN/FMAXNAN on f16 is not legal.
Spotted by Ahmed - in r244594 I inadvertently marked f16 min/max as legal.
I've reverted it here, and marked min/max on scalar f16s as Promote. I've also added a testcase. The test just checks that the compiler doesn't fall over - it doesn't create fmin nodes for f16 yet.
Lang Hames [Fri, 14 Aug 2015 06:26:42 +0000 (06:26 +0000)]
[RuntimeDyld] Make sure code-sections aren't under-aligned.
Code-section alignment should be at least as high as the minimum
stub alignment. If the section alignment is lower it can cause
padding to be emitted resulting in alignment errors if the section
is mapped to a higher alignment on the target.
E.g. if a text section with 4-byte alignment gets 4 bytes of padding
to guarantee 8-byte alignment for stubs, but is re-mapped to an 8-byte
alignment on the target, the padding will push the stubs to 4-byte
alignment, causing a crash.
No test case: There is currently no way to control host section
alignment in llvm-rtdyld. This could be made testable by adding
a custom memory manager. I'll look at that in a follow-up patch.
David Majnemer [Fri, 14 Aug 2015 05:09:07 +0000 (05:09 +0000)]
[IR] Add token types
This introduces the basic functionality to support "token types".
The motivation stems from the need to perform operations on a Value
whose provenance cannot be obscured.
There are several applications for such a type but my immediate
motivation stems from WinEH. Our personality routine enforces a
single-entry, single-exit regime for cleanups. After several rounds of
optimizations, we may be left with a terminator whose "cleanup-entry
block" is not entirely clear because control flow has merged two
cleanups together. We have experimented with using labels as operands
inside of instructions which are not terminators to indicate where we
came from, but found that LLVM does not expect such exotic uses of
BasicBlocks.
Instead, we can use this new type to clearly associate the "entry point"
and "exit point" of our cleanup. This is done by having the cleanuppad
yield a Token and consuming it at the cleanupret.
The token type makes it impossible to obscure or otherwise hide the
Value, making it trivial to track the relationship between the two
points.
What is the burden to the optimizer? Well, it turns out we have already
paid down this cost: by accepting that there are certain calls we are
not permitted to duplicate, optimizations have to watch out for such
instructions anyway. There are additional places in the optimizer that
we will probably have to update, but early examination has given me the
impression that this will not be heroic.
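A minimal sketch of the intended pairing, using the cleanup instructions as they eventually landed (the syntax was still in flux at the time of this commit, and the function names here are hypothetical):

declare void @may_throw()
declare void @dtor()
declare i32 @__CxxFrameHandler3(...)

define void @f() personality i32 (...)* @__CxxFrameHandler3 {
entry:
  invoke void @may_throw()
          to label %done unwind label %cleanup

cleanup:
  %tok = cleanuppad within none []               ; yields a token
  call void @dtor() [ "funclet"(token %tok) ]
  cleanupret from %tok unwind to caller          ; consumes the same token

done:
  ret void
}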
Chandler Carruth [Fri, 14 Aug 2015 02:50:34 +0000 (02:50 +0000)]
[PM/AA] Hoist the value handle definition for CFLAA into the header to
satisfy libc++'s std::forward_list which requires the value type to be
complete.
Chandler Carruth [Fri, 14 Aug 2015 02:42:20 +0000 (02:42 +0000)]
[PM/AA] Extract a minimal interface for CFLAA to its own header file.
I've used forward declarations and reordered the source code some to make
this reasonably clean and keep as much of the code as possible in the
source file, including all the stratified set details. Just the basic AA
interface and the create function are in the header file, and the header
file is now included into the relevant locations.
Chandler Carruth [Fri, 14 Aug 2015 02:16:12 +0000 (02:16 +0000)]
[PM/AA] Delete two pointlessly overridden methods on the AA interface by
the AA counter pass.
For pointsToConstantMemory, I think this is a "bug fix", as the code
as written would actually infinite-loop if ever reached. For the
getModRefInfo, this is a no-op change but with a significantly simpler
form.
Chandler Carruth [Fri, 14 Aug 2015 02:05:41 +0000 (02:05 +0000)]
[PM/AA] Hoist the AA counter pass into a header to match the analysis
pattern.
Also hoist the creation routine out of the generic header and into the
pass header now that we have one.
I've worked to not make any changes, even formatting ones here. I'll
clean up the formatting and other things in a follow-up patch now that
the code is in the right place.
Jingyue Wu [Fri, 14 Aug 2015 02:02:05 +0000 (02:02 +0000)]
[SeparateConstOffsetFromGEP] sext(a)+sext(b) => sext(a+b) when a+b can't sign-overflow.
Summary:
This patch implements my promised optimization to reunite certain sexts from
operands after we extract the constant offset. See the header comment of
reuniteExts for its motivation.
One key building block that enables this optimization is Bjarke's poison value
analysis (D11212). That helps to prove "a +nsw b" can't overflow.
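A hedged before/after sketch (function names hypothetical):

define i64 @before(i32 %a, i32 %b) {
  %a.ext = sext i32 %a to i64
  %b.ext = sext i32 %b to i64
  %sum = add i64 %a.ext, %b.ext
  ret i64 %sum
}

; when the poison analysis proves "a + b" cannot sign-overflow:
define i64 @after(i32 %a, i32 %b) {
  %sum = add nsw i32 %a, %b
  %sum.ext = sext i32 %sum to i64
  ret i64 %sum.ext
}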
Chandler Carruth [Fri, 14 Aug 2015 00:21:10 +0000 (00:21 +0000)]
[LIR] Re-instate r244880, reverted in r244884, factoring the handling of
AliasAnalysis in LoopIdiomRecognize.
The previous commit to LIR, r244879, exposed some scary bug in the loop
pass pipeline with an assert failure that showed up on several bots.
This patch got reverted as part of getting that revision reverted, but
they're actually independent and unrelated. This patch has no functional
change and should be completely safe. It is also useful for my current
work on the AA infrastructure.
Alex Lorenz [Thu, 13 Aug 2015 23:10:16 +0000 (23:10 +0000)]
MIR Serialization: Change MIR syntax - use custom syntax for MBBs.
This commit modifies the way the machine basic blocks are serialized - now the
machine basic blocks are serialized using a custom syntax instead of relying on
YAML primitives. Instead of using YAML mappings to represent the individual
machine basic blocks in a machine function's body, the new syntax uses a single
YAML block scalar which contains all of the machine basic blocks and
instructions for that function.
This is an example of a function's body that uses the old syntax:
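(The following is an illustrative reconstruction, not the original example; the function and opcodes are hypothetical.)

body:
  - id: 0
    name: entry
    instructions:
      - '%eax = MOV32r0 implicit-def %eflags'
      - 'RETQ %eax'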
This syntax change is motivated by the fact that the bundled machine
instructions didn't map that well to the old syntax, which used a single
YAML sequence to store all of the machine instructions in a block. The bundled
machine instructions internally use flags like BundledPred and BundledSucc to
determine the bundles, and serializing them as MI flags using the old syntax
would have had a negative impact on the readability and the ease of editing
for MIR files. The new syntax allows me to serialize the bundled machine
instructions using a block construct without relying on the internal flags,
for example:
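(Again an illustrative sketch rather than the original example; SOME_BUNDLED_OP is a placeholder opcode, and the braces group the bundle without relying on the internal flags.)

body: |
  bb.0.entry:
    BUNDLE implicit-def %r0, implicit %r1 {
      %r0 = SOME_BUNDLED_OP %r1
    }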
This commit also converts the MIR testcases to the new syntax. I developed
a script that can convert from the old syntax to the new one. I will post the
script on the llvm-commits mailing list in the thread for this commit.
Ahmed Bougacha [Thu, 13 Aug 2015 21:09:13 +0000 (21:09 +0000)]
[AArch64] Provide "too few operands" diags on short-form NEON also.
We used to just say "invalid type suffix for instruction", which is
misleading. This is because we fall back to the long-form matcher if the
short-form matcher failed, losing the error information on the way.
Save it, so that we can provide a little better diagnostics when the
long-form matcher thinks a suffix is the cause of the error.
Davide Italiano [Thu, 13 Aug 2015 20:34:26 +0000 (20:34 +0000)]
[SimplifyLibCalls] Correctly set the is_zero_undef flag for llvm.cttz
If <src> is non-zero we can safely set the flag to true, and this
results in less code generated for, e.g., ffs(x) + 1 on FreeBSD.
Thanks to majnemer for suggesting the fix and reviewing.
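A hedged ffs-like sketch (function name hypothetical) showing where the flag is safe:

define i32 @ffs_like(i32 %x) {
entry:
  %nz = icmp ne i32 %x, 0
  br i1 %nz, label %nonzero, label %zero

nonzero:
  ; %x is known non-zero here, so is_zero_undef may be true
  %tz = call i32 @llvm.cttz.i32(i32 %x, i1 true)
  %r = add nuw nsw i32 %tz, 1
  ret i32 %r

zero:
  ret i32 0
}

declare i32 @llvm.cttz.i32(i32, i1)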
Alex Lorenz [Thu, 13 Aug 2015 20:33:33 +0000 (20:33 +0000)]
MIR Parser: Extract the code that parses the alignment into a new method. NFC.
This commit extracts the code that parses the memory operand's alignment into
a new method named 'parseAlignment' so that it can be reused when parsing the
basic block's alignment attribute.
Alex Lorenz [Thu, 13 Aug 2015 20:30:11 +0000 (20:30 +0000)]
MIR Parser: Rename the method 'diagFromLLVMAssemblyDiag'. NFC.
This commit renames the method 'diagFromLLVMAssemblyDiag' to
'diagFromBlockStringDiag'. This method will be used when converting diagnostics
from other YAML block strings, and not just the LLVM module block string, so
the new name should reflect that.
Jingyue Wu [Thu, 13 Aug 2015 18:48:49 +0000 (18:48 +0000)]
[SeparateConstOffsetFromGEP] strengthen the inbounds attribute
We used to be over-conservative about preserving inbounds. Actually, the second
GEP (which applies the constant offset) can inherit the inbounds attribute of
the original GEP, because the resultant pointer is equivalent to that of the
original GEP. For example,
  x = GEP inbounds a, i+5
    =>
  y = GEP a, i          // inbounds removed
  x = GEP inbounds y, 5 // inbounds preserved
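In IR form (a hedged rendering, names hypothetical):

define float* @split_gep(float* %a, i64 %i) {
  ; before: %x = getelementptr inbounds float, float* %a, i64 %i.plus.5
  %y = getelementptr float, float* %a, i64 %i          ; inbounds removed
  %x = getelementptr inbounds float, float* %y, i64 5  ; inbounds preserved
  ret float* %x
}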
Yaron Keren [Thu, 13 Aug 2015 18:12:56 +0000 (18:12 +0000)]
Remove and forbid raw_svector_ostream::flush() calls.
After r244870 flush() will only compare two null pointers and return,
doing nothing but wasting run time. The call is not required any more
as the stream and its SmallString are always in sync.
Nemanja Ivanovic [Thu, 13 Aug 2015 17:40:44 +0000 (17:40 +0000)]
Scalar to vector conversions using direct moves
This patch corresponds to review:
http://reviews.llvm.org/D11471
It improves the code generated for converting a scalar to a vector value. With
direct moves from GPRs to VSRs, we no longer require expensive stack operations
for this. Subsequent patches will handle the reverse case and more general
operations between vectors and their scalar elements.
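A minimal sketch of the kind of conversion affected (function name hypothetical):

define <2 x i64> @gpr_to_vector(i64 %x) {
  ; previously lowered via a store and reload through the stack; with
  ; direct moves the GPR value can be transferred straight to a VSR
  %v = insertelement <2 x i64> undef, i64 %x, i32 0
  ret <2 x i64> %v
}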
James Molloy [Thu, 13 Aug 2015 17:28:20 +0000 (17:28 +0000)]
[ARM] Allow vmin/vmax of scalars to be emitted without UseNEONForFP.
This overrides the default to more closely resemble the hand-crafted matching logic in ISelLowering. Since there is no VFP equivalent of vmin or vmax, it makes sense to use them when they're available, even if VFP ops should generally be preferred.
James Molloy [Thu, 13 Aug 2015 17:28:16 +0000 (17:28 +0000)]
[ARM] Rejig vmax tests a bit
They rely on global fast-math options, but soon ISel will rely only on fast-math flags on the instructions themselves. Rip the fast checks out into their own file so we can mark their instructions as fast.
James Molloy [Thu, 13 Aug 2015 17:28:10 +0000 (17:28 +0000)]
[AArch64] Small rejig of fmax tests, NFCI.
These tests relied on -enable-no-nans-fp-math, whereas soon they'll take their no-nans hint
from the FCMP instruction itself, so split the no-nans stuff out into its own test.
Also do a slight rejig of instruction order. The old FMIN/MAX backend matching had to deal with looking through casts, which it never did particularly well. Now, instcombine will recognize such patterns and canonicalize the cast outside the select. So modify the test inputs to assume that instcombine has already run.
Erik Eckstein [Thu, 13 Aug 2015 15:36:11 +0000 (15:36 +0000)]
[DeadStoreElimination] remove a redundant store even if the load is in a different block.
DeadStoreElimination eliminates a store if it stores a value which was loaded from the same memory location.
So far this worked only if the store was in the same block as the load.
Now we can also handle stores which are in a different block than the load.
Example:
define i32 @test(i1, i32*) {
entry:
  %l2 = load i32, i32* %1, align 4
  br i1 %0, label %bb1, label %bb2

bb1:
  br label %bb3

bb2:
  ; This store is redundant
  store i32 %l2, i32* %1, align 4
  br label %bb3

bb3:
  ret i32 0
}
Joseph Tremoulet [Thu, 13 Aug 2015 14:30:10 +0000 (14:30 +0000)]
[WinEHPrepare] Update demotion logic
Summary:
Update the demotion logic in WinEHPrepare to avoid creating new cleanups by
walking predecessors as necessary to insert stores for EH-pad PHIs.
Also avoid creating stores for EH-pad PHIs that have no uses.
The store/load placement is still pretty naive. Likely future improvements
(at least for optimized compiles) include:
- Share loads for related uses where possible
- Coalesce non-interfering use/def-related PHIs
- Store at definition point rather than each PHI pred for non-interfering
lifetimes.
Ulrich Weigand [Thu, 13 Aug 2015 13:37:06 +0000 (13:37 +0000)]
[SystemZ] Support large LLVM IR struct return values
Recent mesa/llvmpipe crashes on SystemZ due to a failed assertion when
attempting to compile a routine with a return type of
{ <4 x float>, <4 x float>, <4 x float>, <4 x float> }
on a system without vector instruction support.
This is because after legalizing the vector type, we get a return value
consisting of 16 floats, which cannot all be returned in registers.
Usually, what should happen in this case is that the target's CanLowerReturn
routine rejects the return type, in which case SelectionDAG falls back to
implementing a structure return in memory via implicit reference.
However, the SystemZ target never actually implemented any CanLowerReturn
routine, and thus would accept any struct return type.
This patch fixes the crash by implementing CanLowerReturn. As a side effect,
this also handles fp128 return values, fixing a todo that was noted in
SystemZCallingConv.td.
In this common case the add can be moved to the %if.else basic block, because
adding zero is an identity operation. If we go through the %if.then branch it's
always a win, because the add is not executed; if not, the number of instructions
stays the same.
This pattern also applies to other instructions with an identity operand: sub,
shl, shr and ashr with 0, and mul, sdiv and div with 1.
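A hedged sketch of the shape described (names hypothetical); the add can sink into %if.else because the %if.then path contributes the identity value 0:

define i32 @sink_add(i1 %c, i32 %a, i32 %b) {
entry:
  br i1 %c, label %if.then, label %if.else

if.then:
  br label %if.end

if.else:
  %sub = sub i32 %a, %b
  br label %if.end

if.end:
  %phi = phi i32 [ 0, %if.then ], [ %sub, %if.else ]
  %r = add i32 %phi, %a   ; movable into %if.else; the phi then merges %a with the sum
  ret i32 %r
}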
John Brawn [Thu, 13 Aug 2015 10:48:22 +0000 (10:48 +0000)]
[ARM] Reorganise and simplify thumb-1 load/store selection
Other than PC-relative loads/stores, the patterns that match the various
load/store addressing modes have the same complexity, so the order that they
are matched is the order that they appear in the .td file.
Rearrange the instruction definitions in ARMInstrThumb.td, and make use of
AddedComplexity for PC-relative loads, so that the instruction matching order
is the order that results in the simplest selection logic. This also makes
register-offset loads/stores be selected when they should be, as previously they
were only selected for too-large immediate offsets.
Chandler Carruth [Thu, 13 Aug 2015 10:00:53 +0000 (10:00 +0000)]
[LIR] Handle access to AliasAnalysis the same way as the other analysis
in LoopIdiomRecognize. This is what started me staring at this code. Now
migrating it with the new AA stuff will be trivial.
Chandler Carruth [Thu, 13 Aug 2015 09:56:20 +0000 (09:56 +0000)]
[LIR] Start leveraging the fundamental guarantees of a loop in
simplified form to remove redundant checks and simplify the code for
popcount recognition. We don't actually need to handle all of these
cases.
I've left a FIXME for one in particular until I finish inspecting to
make sure we don't actually *rely* on the predicate in any way.