Wei Mi [Thu, 30 Jul 2015 17:40:39 +0000 (17:40 +0000)]
[SLP vectorizer]: Choose the best consecutive candidate to pair with a store instruction.
The patch changes the SLPVectorizer::vectorizeStores to choose the immediate
succeeding or preceding candidate for a store instruction when it has multiple
consecutive candidates. This gives it a better chance of finding more SLP
vectorization opportunities.
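A minimal C++ illustration (not taken from the patch) of the kind of chain this
heuristic targets; the function name is made up for the example:

  // All four stores are consecutive; pairing a store with its immediate
  // neighbor (rather than a more distant consecutive candidate) keeps the
  // chain intact so SLP may turn it into a single <4 x float> store.
  void store4(float *a, float x) {
    a[0] = x + 1.0f;
    a[1] = x + 2.0f;
    a[2] = x + 3.0f;
    a[3] = x + 4.0f;
  }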
Tom Stellard [Thu, 30 Jul 2015 16:20:42 +0000 (16:20 +0000)]
AMDGPU/SI: Simplify moveSMRDToVALU()
Summary:
Replace the switch on instruction opcode with a switch on register size.
This way we don't need to update the switch statement when we add new
SMRD variants.
[mips] Fix out-of-date debug information in test file.
Update the debug info in the check-lines because the change in r243638
introduced a constant initialization before the prologue's end as part
of a register spill.
Summary:
This hidden option disabled code generation through FastISel by default.
It has been removed from the available options and from the FastISel tests
that required it in order to run.
[mips][FastISel] Apply only zero-extension to constants prior to their materialization.
Summary:
Previously, we would sign-extend non-boolean negative constants and
zero-extend otherwise. This was problematic for PHI instructions with
negative values that had a type with bitwidth less than that of the
register used for materialization.
More specifically, ComputePHILiveOutRegInfo() assumes the constants
present in a PHI node are zero extended in their container and
afterwards deduces the known bits.
For example, previously we would materialize an i16 -4 with the
following instruction:
addiu $r, $zero, -4
The register would end up with the 32-bit 2's complement representation
of -4. However, ComputePHILiveOutRegInfo() would generate a constant
with the upper 16-bits set to zero. The SelectionDAG builder would use
that information to generate an AssertZero node that would remove any
subsequent trunc & zero_extend nodes.
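A worked illustration of the mismatch (plain C++, independent of the FastISel
code): sign-extending an i16 -4 leaves 0xFFFFFFFC in the 32-bit register,
while the analysis expects the zero-extended container value 0x0000FFFC.

  #include <cstdint>
  #include <cstdio>

  int main() {
    int16_t C = -4;
    uint32_t Sext = (uint32_t)(int32_t)C; // what "addiu $r, $zero, -4" leaves: 0xFFFFFFFC
    uint32_t Zext = (uint16_t)C;          // what ComputePHILiveOutRegInfo() assumes: 0x0000FFFC
    std::printf("sext=0x%08X zext=0x%08X\n", Sext, Zext);
    return 0;
  }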
In theory, we should modify ComputePHILiveOutRegInfo() to consult
target-specific hooks about the way they prefer to materialize the
given constants. However, git-blame reports that this specific code
has not been touched since 2011 and it seems to be working well for every
target so far.
Adam Nemet [Thu, 30 Jul 2015 04:21:13 +0000 (04:21 +0000)]
[LoopVer] Add missing std::move
The reason I was passing this vector by value in the constructor was so that
I wouldn't have to copy when initializing the corresponding member, but then
I forgot the std::move.
The use-case is LoopDistribution which filters the checks then
std::moves it to LoopVersioning's constructor. With this interface we
can avoid any copies.
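A minimal sketch of the pass-by-value-then-move idiom being completed here
(the names are illustrative, not the actual LoopVersioning interface):

  #include <utility>
  #include <vector>

  struct Check { /* a runtime pointer check */ };

  class Versioner {
    std::vector<Check> Checks;
  public:
    // Callers passing an rvalue (e.g. a freshly filtered vector) get a move
    // all the way into the member; without std::move this would copy.
    explicit Versioner(std::vector<Check> C) : Checks(std::move(C)) {}
  };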
Adam Nemet [Thu, 30 Jul 2015 03:29:16 +0000 (03:29 +0000)]
[LDist] Filter the checks locally rather than in LAA, NFC
Before, we were passing the pointer partitions to LAA. Now, we get all
the checks from LAA and filter out the checks within partitions in
LoopDistribution.
This effectively concludes the steps to move filtering memchecks from
LAA into its clients. There is still some cleanup left to remove the
unused interfaces in LAA that still take PtrPartition.
(Moving this functionality to LoopDistribution also requires
needsChecking on pointers to be made public.)
This avoids stack overflows when the compiler does not perform tail call
elimination. Apparently this happens for MSVC with the /Ob2 switch which
may be used by external code including this header.
Reported by and based on a patch from Jean-Francois Riendeau.
Nick Lewycky [Wed, 29 Jul 2015 22:32:47 +0000 (22:32 +0000)]
Fix typo "fuction" noticed in comments in AssumptionCache.h, and also all the other files that have the same typo. All comments, no functionality change! (Merely a "fuctionality" change.)
Bonus change to remove emacs major mode marker from SystemZMachineFunctionInfo.cpp because emacs already knows it's C++ from the extension. Also fix typo "appeary" in AMDGPUMCAsmInfo.h.
Prevent all the unrelated LLVM options from appearing in the -help output
by introducing a tool-specific option category. As a drive-by, improve
the wording of the help message.
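A hedged sketch of the option-category approach (the category and option names
below are illustrative, not the tool's actual ones; cl::HideUnrelatedOptions is
the hook provided by llvm/Support/CommandLine.h):

  #include "llvm/Support/CommandLine.h"
  using namespace llvm;

  static cl::OptionCategory ToolCategory("Tool-specific options");
  static cl::opt<bool> Verbose("verbose", cl::desc("Enable verbose mode."),
                               cl::cat(ToolCategory));

  int main(int argc, char **argv) {
    // Hide every option that does not belong to the tool's own category,
    // so -help no longer lists unrelated LLVM options.
    cl::HideUnrelatedOptions(ToolCategory);
    cl::ParseCommandLineOptions(argc, argv, "tool description\n");
    return 0;
  }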
The dsymutil-classic -v option dumps the tool version rather than
putting it in verbose mode. Rename -v to -verbose and update the
tests that use it (in the process removing it from a few tests that
didn't require it anymore since the -dump-debug-map option was
introduced).
A followup commit will reintroduce the -v option that dumps the
version.
Eric Christopher [Wed, 29 Jul 2015 22:09:48 +0000 (22:09 +0000)]
Rename hasCompatibleFunctionAttributes->areInlineCompatible based
on suggestions. Currently the function is only used for inline purposes
and this is more descriptive for the use.
Simon Pilgrim [Wed, 29 Jul 2015 21:44:27 +0000 (21:44 +0000)]
[X86][SSE] Keep 32-bit target i64 vector shifts on SSE unit.
This patch improves the 32-bit target i64 constant matching to detect the shuffle vector splats that are introduced by i64 vector shift vectorization (D8416).
Tim Northover [Wed, 29 Jul 2015 21:34:32 +0000 (21:34 +0000)]
AArch64: use 32-bit MOV rather than UBFX to truncate registers.
It's potentially more efficient on Cyclone, and from the optimization guides &
schedulers looks like it has no effect on Cortex-A53 or A57. In general you'd
expect a MOV to be about the most efficient instruction with its semantics,
even though the official "UXTW" alias is really a UBFX.
Alex Lorenz [Wed, 29 Jul 2015 20:57:11 +0000 (20:57 +0000)]
MIR Parser: Extract the code that parses MBB references into a new method. NFC.
This commit extracts the code that's used by the class 'MIRParserImpl' to parse
the machine basic block references into a new method named 'parseMBBReference'.
Simon Pilgrim [Wed, 29 Jul 2015 20:31:45 +0000 (20:31 +0000)]
[X86][SSE] Vectorize i64 ASHR operations
This patch vectorizes the v2i64/v4i64 ASHR shift operations - the last remaining integer vector shifts that are still being transferred to/from the scalar unit to be completed.
[ASan] Disable dynamic alloca and UAR detection in presence of returns_twice calls.
Summary:
returns_twice (most importantly, setjmp) functions are
optimization-hostile: if a local variable is promoted to a register and is
changed between the setjmp() and longjmp() calls, this update will be
undone. This is the reason why "man setjmp" advises marking all these
locals as "volatile".
This may not be enough for ASan, though: when it replaces a static alloca
with a dynamic one (optionally executed if UAR mode is enabled), it adds a
whole lot of SSA values and computations of local variable addresses that
can involve virtual registers and cause unexpected behavior when these
registers are restored from the buffer saved by setjmp.
To fix this, just disable dynamic alloca and UAR tricks whenever we see
a returns_twice call in the function.
move DAGCombiner's allowableAlignment() helper function into the TLI
Making allowableAlignment() more accessible was suggested as a predecessor patch
for D10662, so I've pulled it into TargetLowering. This lets us remove 4 instances
of duplicate logic in LegalizeDAG.
There's a subtle functional change in the implementation: the existing
allowableAlignment() code was using getPrefTypeAlignment() when checking
alignment with the DataLayout and assumed that was fast. In this implementation,
we use getABITypeAlignment() and assume that is fast. See the TODO comment or the
discussion in the Phab review for future improvements in this implementation
(don't use the data layout at all).
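A hedged sketch of that subtle difference (an illustrative helper, not the
actual TLI code):

  #include "llvm/IR/DataLayout.h"
  #include "llvm/IR/Type.h"
  using namespace llvm;

  static bool allowableAlignmentSketch(const DataLayout &DL, Type *Ty,
                                       unsigned Alignment) {
    // The old DAGCombiner helper compared against DL.getPrefTypeAlignment(Ty);
    // this version compares against the ABI alignment and assumes the access
    // is fast (see the TODO about not using the DataLayout at all).
    return Alignment >= DL.getABITypeAlignment(Ty);
  }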
There are no regression test changes from this difference, and I'm not sure how to
expose it via a test. I think we actually do want to provide the 'Fast' param when
checking this from DAGCombiner::MergeConsecutiveStores(). Ie, we shouldn't merge
stores if the new stores are not going to be fast. But that change will require
fixing allowsMisalignedMemoryAccess() overrides as noted in D10662.
[asan] Remove special case mapping on Android/AArch64.
ASan shadow on Android starts at address 0 for both historic and
performance reasons. This is possible because the platform mandates
-pie, which makes lower memory region always available.
This is not such a good idea on 64-bit platforms because of MAP_32BIT
incompatibility.
This patch changes Android/AArch64 mapping to be the same as that of
Linux/AArch64.
Hans Wennborg [Wed, 29 Jul 2015 16:29:06 +0000 (16:29 +0000)]
test-release.sh: Add option for building the OpenMP run-time
This isn't part of the official release process, but provides a convenient way
to build binaries for those who want to experiment with it. Hopefully the
run-time can be part of the regular build and release process for 3.8.
Bill Schmidt [Wed, 29 Jul 2015 14:31:57 +0000 (14:31 +0000)]
[PPC] Fix PR24216: Don't generate splat for misaligned shuffle mask
Given certain shuffle-vector masks, LLVM emits splat instructions
which splat the wrong bytes from the source register. The issue is
that the function PPC::isSplatShuffleMask() in PPCISelLowering.cpp
does not ensure that the splat pattern found is requesting bytes that
are aligned on an EltSize boundary. This patch detects this situation
as not a valid splat mask, resulting in a permute being generated
instead of a splat.
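A self-contained sketch of the alignment condition being enforced (generic
C++, not the PPCISelLowering code; the helper name is made up):

  #include <vector>

  // True if Mask (one entry per byte) splats a single EltSize-byte element
  // whose first byte lies on an EltSize boundary.
  static bool isAlignedSplatMask(const std::vector<int> &Mask, unsigned EltSize) {
    if (Mask.empty() || Mask[0] < 0)
      return false;
    unsigned Start = Mask[0];
    if (Start % EltSize != 0)          // misaligned: would splat the wrong bytes
      return false;
    for (unsigned i = 0, e = Mask.size(); i != e; ++i)
      if ((unsigned)Mask[i] != Start + (i % EltSize))
        return false;
    return true;
  }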
Patch and test case by Tyler Kenney, cleaned up a bit by me.
This is a simple bug fix that would be good to incorporate into 3.7.
This commit defines subtarget feature strict-align and uses it instead of
cl::opt -aarch64-strict-align to decide whether strict alignment should be
forced.
Sanjoy Das [Tue, 28 Jul 2015 23:50:30 +0000 (23:50 +0000)]
[Statepoints] Let patchable statepoints have a symbolic call target.
Summary:
As added initially, statepoints required their call targets to be a
constant null pointer if ``numPatchBytes`` was non-zero. This turns out
to be a problem ergonomically, since there is no way to mark patchable
statepoints as calling a (readable) symbolic value.
This change removes the restriction of requiring ``null`` call targets
for patchable statepoints, and changes PlaceSafepoints to maintain the
symbolic call target through its transformation.
ignore duplicate divisor uses when transforming into reciprocal multiplies (PR24141)
PR24141: https://llvm.org/bugs/show_bug.cgi?id=24141
contains a test case where we have duplicate entries in a node's uses() list.
After r241826, we use CombineTo() to delete dead nodes when combining the uses into
reciprocal multiplies, but this fails if we encounter the just-deleted node again in
the list.
The solution in this patch is to not add duplicate entries to the list of users that
we will subsequently iterate over. For the test case, this avoids triggering the
combine divisors logic entirely because there really is only one user of the divisor.
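A generic illustration of the idea (not the DAGCombiner code): collect the
users through a set so a node that appears twice in the uses() list is only
recorded once, and a later combine-and-delete step never revisits a dead node.

  #include <unordered_set>
  #include <vector>

  template <typename Node>
  std::vector<Node *> uniqueUsers(const std::vector<Node *> &Uses) {
    std::unordered_set<Node *> Seen;
    std::vector<Node *> Users;
    for (Node *U : Uses)
      if (Seen.insert(U).second)   // skip duplicate entries
        Users.push_back(U);
    return Users;
  }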
fix TLI's combineRepeatedFPDivisors interface to return the minimum user threshold
This fix was suggested as part of D11345 and is part of fixing PR24141.
With this change, we can avoid walking the uses of a divisor node if the target
doesn't want the combineRepeatedFPDivisors transform in the first place.
This commit defines subtarget feature strict-align and uses it instead of
cl::opt -arm-strict-align to decide whether strict alignment should be
forced. Also, remove the logic that was checking the OS and architecture,
as clang is now responsible for setting strict-align based on the command
line options specified and the target architecture and OS.
[PeepholeOptimizer] Look through PHIs to find additional register sources
Reapply 243271 with more fixes; although we are not handling multiple
sources with coalescable copies, we were not properly skipping this
case.
- Teaches the ValueTracker in the PeepholeOptimizer to look through PHI
instructions.
- Add a findNextSourceAndRewritePHI method to look up the multiple sources
returned by the ValueTracker and rewrite PHIs with new sources.
With these changes we can find more register sources and rewrite more
copies to allow coalescing of bitcast instructions. Hence, we eliminate
unnecessary VR64 <-> GR64 copies in x86, but it could be extended to
other archs by marking "isBitcast" on target specific instructions. The
x86 example follows:
[mips][FastISel] Fix call lowering by bailing out on "fastcc" calls.
Summary:
Currently, we support only the MIPS O32 ABI calling convention for call
lowering. With this change we avoid using the O32 calling convention for
lowering calls marked as using the fast calling convention.
[mips][FastISel] Fix generated code for IR's select instruction.
Summary:
Generate correct code for the select instruction by zero-extending
its boolean/condition operand to GPR-width. This is necessary because
the conditional-move instructions operate on the whole register.
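A small C++ model of why the zero-extension matters (not MIPS code; condMove
merely models a MOVN-style "move if the whole register is nonzero" operation):

  #include <cstdint>
  #include <cstdio>

  // Take Src if the whole condition register is nonzero, else keep Dst.
  static uint32_t condMove(uint32_t Dst, uint32_t Src, uint32_t CondReg) {
    return CondReg != 0 ? Src : Dst;
  }

  int main() {
    uint32_t falseWithGarbage = 0x100;  // i1 value 0, but upper bits are stale
    std::printf("%u\n", condMove(1, 2, falseWithGarbage));      // wrong: prints 2
    std::printf("%u\n", condMove(1, 2, falseWithGarbage & 1));  // zero-extended: prints 1
    return 0;
  }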
[SCEV] Apply NSW and NUW flags via poison value analysis
Summary:
Make Scalar Evolution able to propagate NSW and NUW flags from instructions to SCEVs in some cases. This is based on reasoning about when poison from instructions with these flags would trigger undefined behavior. This gives a 13% speed-up on some Eigen3-based Google-internal microbenchmarks for NVPTX.
There does not seem to be clear agreement about when poison should be considered to propagate through instructions. In this analysis, poison propagates only in cases where that should be uncontroversial.
This change makes LSR able to create induction variables for expressions like &ptr[i + offset] for loops like this:
  for (int i = 0; i < limit; ++i) {
    sum += ptr[i + offset];
  }
Here ptr is a 64 bit pointer and offset is a 32 bit integer. For NVPTX, LSR currently creates an induction variable for i + offset instead, which is not as fast. Improving this situation is what brings the 13% speed-up on some Eigen3-based Google-internal microbenchmarks for NVPTX.
There are more details in this discussion on llvmdev.
June: http://lists.cs.uiuc.edu/pipermail/llvmdev/2015-June/thread.html#87234
July: http://lists.cs.uiuc.edu/pipermail/llvmdev/2015-July/thread.html#87392
Alex Lorenz [Tue, 28 Jul 2015 17:52:59 +0000 (17:52 +0000)]
Add a test case for r242191 ([MMX] Use the appropriate instructions for
GR64 <-> VR64 copies).
This commit adds a MIR test case for the commit r242191, which was committed
without one. This test case verifies that the ExpandPostRA pass expands the
GR64 <-> VR64 copies into the appropriate MMX_MOV instructions.
Alex Lorenz [Tue, 28 Jul 2015 17:09:52 +0000 (17:09 +0000)]
MIR Parser: Extract the method 'parseGlobalValue'. NFC.
This commit extracts the code that parses a global value from the method
'parseGlobalAddressOperand' into a new method 'parseGlobalValue', so that this
code can be reused by the method which will parse the block address machine
operands.
Alex Lorenz [Tue, 28 Jul 2015 17:03:40 +0000 (17:03 +0000)]
MIR Parser: Move the function 'lexName'. NFC.
This commit moves the function 'lexName' to the start of the file so it can
be reused by the function which will lex the named LLVM IR block references.
Alex Lorenz [Tue, 28 Jul 2015 16:56:45 +0000 (16:56 +0000)]
MIR Printer: Remove an outdated TODO comment and assertion. NFC.
This commit removes an outdated TODO comment and a corresponding assertion
which asserts that the MIR printer can't print machine basic blocks that
aren't sequentially numbered.
This comment and assertion were correct when I was working on the patch which
serialized the machine basic blocks, but then I decided to add an 'ID'
attribute to the machine basic block's YAML mapping based on the patch review.
This comment and assertion then became invalid as with the 'ID' attribute we
can serialize non-sequential machine basic blocks and their references
without any problems.
Alex Lorenz [Tue, 28 Jul 2015 16:48:37 +0000 (16:48 +0000)]
MIR Parser: Remove redundant parameters. NFC.
This commit removes the redundant parameters from the two methods
'initializeRegisterInfo' and 'initializeFrameInfo'. The removed parameters are
redundant as we are already passing in the 'MachineFunction' to those methods,
and those parameters can be derived from the machine function parameter.
Implement target independent TLS compatible with glibc's emutls.c.
The 'common' section TLS is not implemented.
Current C/C++ TLS variables are not placed in the common section.
DWARF debug info to get the address of TLS variables is not generated yet.
clang and driver changes in http://reviews.llvm.org/D10524
Added -femulated-tls flag to select the emulated TLS model,
which will be used for old targets like Android that do not
support ELF TLS models.
Added TargetLowering::LowerToTLSEmulatedModel as a target-independent
function to convert a SDNode of TLS variable address to a function call
to __emutls_get_address.
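A hedged sketch of the lowering scheme (the real symbols are named
__emutls_v.<var> and __emutls_t.<var>, which are not valid C++ identifiers,
so underscored stand-ins are used here; the control-block layout belongs to
the runtime's emutls.c and is only approximated):

  #include <cstddef>

  extern "C" void *__emutls_get_address(void *Control); // provided by the runtime

  struct EmuTlsControl {       // approximate shape of an __emutls_v.<var> object
    std::size_t Size, Align;
    void *IndexOrAddress;
    void *Template;            // points at __emutls_t.<var> when initialized
  };

  static int EmuTlsTemplateX = 42;                       // stands in for __emutls_t.x
  static EmuTlsControl EmuTlsVX = {sizeof(int), alignof(int), nullptr,
                                   &EmuTlsTemplateX};    // stands in for __emutls_v.x

  // "__thread int x = 42; return x;" is lowered to roughly:
  int getX() { return *static_cast<int *>(__emutls_get_address(&EmuTlsVX)); }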
Added into lib/Target/*/*ISelLowering.cpp to call LowerToTLSEmulatedModel
for TLSModel::Emulated. Although all targets supporting ELF TLS models are
enhanced, the emulated TLS model has been tested only for Android ELF targets.
Modified AsmPrinter.cpp to print the emutls_v.* and emutls_t.* variables for
emulated TLS variables.
Modified DwarfCompileUnit.cpp to skip some DIEs for emulated TLS variables.
TODO: Add proper DIE for emulated TLS variables.
Added new unit tests with emulated TLS.
The official specifications state that the value of IMAGE_FILE_MACHINE_ARM64
is 0xAA64 (as per the Microsoft Portable Executable and Common Object Format
Specification v8.3).
Douglas Katzman [Tue, 28 Jul 2015 14:43:53 +0000 (14:43 +0000)]
Use a specified list of languages in cmake project() command.
This allows asm files and Cxx files to be compiled with different flags
rather than treating them identically. LLVM itself has no asm files
other than tests, but this setting is inherited by the compiler-rt
project (unless compiled standalone), which does have asm files.