[CMake] Unbreak add_llvm_external_project when external projects are specified.
LLVM_EXTERNAL_*_SOURCE_DIR is reset as a PATH cache variable with set(... CACHE PATH ...).
The CACHE PATH variable LLVM_EXTERNAL_*_SOURCE_DIR is then normalized to
${CMAKE_SOURCE_DIR}/${path_var} when ${path_var} is relative.
LegalizeDAG: Fix and improve FCOPYSIGN/FABS legalization
- Factor out code to query and modify the sign bit of a floating-point
value as an integer. This also works if none of the target's integer
types is big enough to hold all bits of the floating-point value.
- Legalize FABS(x) as FCOPYSIGN(x, 0.0) if FCOPYSIGN is available;
otherwise perform bit manipulation on the sign bit. The previous code
used "x >u 0 ? x : -x", which is incorrect when x is -0.0! It also
takes 34 instructions on ARM Cortex-M4. With this patch we only
require 5:
vldr d0, LCPI0_0
vmov r2, r3, d0
lsrs r2, r3, #31
bfi r1, r2, #31, #1
bx lr
(This could be further improved if the compiler would recognize that
r2, r3 is zero).
- Only lower FCOPYSIGN(x, y) = sign(y) ? -FABS(x) : FABS(x) if FABS is
available; otherwise perform bit manipulation on the sign bit.
- Perform the sign(y) test by masking out the sign bit and comparing
with 0 rather than shifting the sign bit to the highest position and
testing for "<s 0". For x86 copysignl (on 80-bit values) this gets us:
testl $32768, %eax
rather than:
shlq $48, %rax
sets %al
testb %al, %al
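For reference, a standalone sketch of the sign-bit-as-integer approach described
above (plain C++, not the legalization code itself; it assumes a 64-bit integer
type is available, whereas the factored-out code also handles the case where no
integer type is wide enough):

  #include <cstdint>
  #include <cstring>

  // Clear the sign bit of the integer representation: fabs without a compare.
  double fabsViaSignBit(double X) {
    uint64_t Bits;
    std::memcpy(&Bits, &X, sizeof(Bits));
    Bits &= ~(UINT64_C(1) << 63);
    std::memcpy(&X, &Bits, sizeof(Bits));
    return X;
  }

  // copysign: magnitude bits from X, sign bit from Y.
  double copysignViaSignBit(double X, double Y) {
    uint64_t XBits, YBits;
    std::memcpy(&XBits, &X, sizeof(XBits));
    std::memcpy(&YBits, &Y, sizeof(YBits));
    XBits = (XBits & ~(UINT64_C(1) << 63)) | (YBits & (UINT64_C(1) << 63));
    std::memcpy(&X, &XBits, sizeof(XBits));
    return X;
  }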
Andrew Wilkins [Tue, 14 Jul 2015 01:23:06 +0000 (01:23 +0000)]
Add capability to get and set the personality function from the C API
Summary:
The capability was lost with D10429, where the personality function was set at the function level rather than the landing pad level. Now there is no way to get/set the personality function from the C API. That is a problem.
Note that the whole thing could be avoided by improving the C API testing, as started by D10725.
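A minimal usage sketch of the new entry points (hedged: it assumes the
LLVMGetPersonalityFn/LLVMSetPersonalityFn functions this change adds, and the
personality routine name is only illustrative):

  #include <llvm-c/Core.h>

  int main() {
    LLVMModuleRef M = LLVMModuleCreateWithName("m");
    LLVMValueRef F = LLVMAddFunction(
        M, "foo", LLVMFunctionType(LLVMVoidType(), nullptr, 0, 0));
    LLVMValueRef Pers = LLVMAddFunction(
        M, "__gxx_personality_v0",
        LLVMFunctionType(LLVMInt32Type(), nullptr, 0, /*IsVarArg=*/1));
    LLVMSetPersonalityFn(F, Pers);              // attach at the function level
    LLVMValueRef Got = LLVMGetPersonalityFn(F); // and read it back via the C API
    (void)Got;
    LLVMDisposeModule(M);
    return 0;
  }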
Chris Bieneman [Tue, 14 Jul 2015 01:17:43 +0000 (01:17 +0000)]
[CMake] We shouldn't be storing values in the cache unless they actually need CMake cache behavior.
add_llvm_external_project puts LLVM_EXTERNAL_${nameUPPER}_SOURCE_DIR into the cache even if it is just the in-tree default path. This causes all sorts of oddness, and makes it so that I can't change the behavior of this variable.
This patch never puts LLVM_EXTERNAL_${nameUPPER}_SOURCE_DIR into the cache. It will only end up in the cache if it is specified on the command line, which is the correct behavior.
There is also a temporary change to remove non-default values from the cache if they are already present. This should have the effect of cleaning out unnecessary values from the caches on the buildbots and in people's local build directories. This part of the change is marked with a TODO and can be removed in a few days.
Update enforceKnownAlignment after the isWeakForLinker semantic change
Previously we would refrain from attempting to increase the linkage of
available_externally globals because they were considered weak for the
linker. Now they are treated more like a declaration instead of a weak
definition.
This was causing SSE alignment faults in Chromium, when some code
assumed it could increase the alignment of a dllimported global that it
didn't control. http://crbug.com/509256
Bill Schmidt [Mon, 13 Jul 2015 22:58:19 +0000 (22:58 +0000)]
[PPC64LE] More improvements to VSX swap optimization
This patch allows VSX swap optimization to succeed more frequently.
Specifically, it is concerned with common code sequences that occur
when copying a scalar floating-point value to a vector register. This
patch currently handles cases where the floating-point value is
already in a register, but does not yet handle loads (such as via an
LXSDX scalar floating-point VSX load). That will be dealt with later.
A typical case is when a scalar value comes in as a floating-point
parameter. The value is copied into a virtual VSFRC register, and
then a sequence of SUBREG_TO_REG and/or COPY operations will convert
it to a full vector register of the class required by the context. If
this vector register is then used as part of a lane-permuted
computation, the original scalar value will be in the wrong lane. We
can fix this by adding a swap operation following any widening
SUBREG_TO_REG operation. Additional COPY operations may be needed
around the swap operation in order to keep register assignment happy,
but these are pro forma operations that will be removed by coalescing.
If a scalar value is otherwise directly referenced in a computation
(such as by one of the many XS* vector-scalar operations), we
currently disable swap optimization. These operations are
lane-sensitive by definition. A MentionsPartialVR flag is added for
use in each swap table entry that mentions a scalar floating-point
register without having special handling defined.
A common idiom for PPC64LE is to convert a double-precision scalar to
a vector by performing a splat operation. This ensures that the value
can be referenced as V[0], as it would be for big endian, whereas just
converting the scalar to a vector with a SUBREG_TO_REG operation
leaves this value only in V[1]. A doubleword splat operation is one
form of an XXPERMDI instruction, which takes one doubleword from a
first operand and another doubleword from a second operand, with a
two-bit selector operand indicating which doublewords are chosen. In
the general case, an XXPERMDI can be permitted in a lane-swapped
region provided that it is properly transformed to select the
corresponding swapped values. This transformation is to reverse the
order of the two input operands, and to reverse and complement the
bits of the selector operand (derivation left as an exercise to the
reader ;).
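As a rough sketch of that rewrite (hedged: illustrative code, not the pass
itself; the selector is the two-bit doubleword-select field described above):

  // Rewrite an XXPERMDI inside a lane-swapped region: swap the two source
  // operands, then reverse the two selector bits and complement them.
  unsigned swappedXXPERMDISelector(unsigned Sel) {
    unsigned Reversed = ((Sel & 1) << 1) | ((Sel >> 1) & 1); // reverse the bits
    return ~Reversed & 3;                                    // complement within 2 bits
  }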
A new test case that exercises the scalar-to-vector and generalized
XXPERMDI transformations is added as CodeGen/PowerPC/swaps-le-5.ll.
The patch also requires a change to CodeGen/PowerPC/swaps-le-3.ll to
use CHECK-DAG instead of CHECK for two independent instructions that
now appear in reverse order.
There are two small unrelated changes that are added with this patch.
First, the XXSLDWI instruction was incorrectly omitted from the list
of lane-sensitive instructions; this is now fixed. Second, I observed
that the same webs were being rejected over and over again for
different reasons. Since it's sufficient to reject a web only once, I
added a check for this to speed up the compilation time slightly.
Pete Cooper [Mon, 13 Jul 2015 21:50:35 +0000 (21:50 +0000)]
Remove unnecessary lines from the test in r242068.
This test case was breaking the hexagon elf bot. The failing lines
were actually unnecessary as checking that the store still reads the
correct value demonstrates that everything is working fine now.
Reduce memory usage of ComputeEditDistance() by (almost) 50%
ComputeEditDistance() currently keeps two rows of the edit distance matrix in
memory. That's unnecessary: one row plus one additional element is sufficient.
With this change, strings up to 64 chars can be processed without going to the
heap, compared to 32 chars previously. (But the main motivation is that the
code gets a bit simpler.)
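For illustration, a self-contained sketch of the one-row formulation (not
LLVM's ComputeEditDistance itself); the single extra scalar carries the
diagonal element of the previous row:

  #include <algorithm>
  #include <string>
  #include <vector>

  unsigned editDistance(const std::string &A, const std::string &B) {
    std::vector<unsigned> Row(B.size() + 1);
    for (unsigned J = 0; J <= B.size(); ++J)
      Row[J] = J;                            // distance from "" to B[0..J)
    for (unsigned I = 1; I <= A.size(); ++I) {
      unsigned Diag = Row[0];                // previous row, column J-1
      Row[0] = I;                            // distance from A[0..I) to ""
      for (unsigned J = 1; J <= B.size(); ++J) {
        unsigned Above = Row[J];             // previous row, column J
        unsigned Subst = Diag + (A[I - 1] == B[J - 1] ? 0 : 1);
        Row[J] = std::min({Above + 1, Row[J - 1] + 1, Subst});
        Diag = Above;
      }
    }
    return Row[B.size()];
  }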
[WinEH] Emit the LSDA even if no lpads remain but outlining occurred
The outlined funclets call intrinsics which reference labels from the
LSDA. This situation can easily arise in small functions with a single
cleanup at -O0, where Clang marks a definition as nounwind, and then
WinEHPrepare "discovers" that the landingpad is dead by accident and
deletes it.
We now need to ask the LLVM IR Function for its personality directly,
rather than going through MachineModuleInfo.
Chris Bieneman [Mon, 13 Jul 2015 20:23:15 +0000 (20:23 +0000)]
[CMake] Cleanup tools/CMakeLists.txt to take advantage of the auto-registration that was already partially working.
Summary:
This change re-lands r241621, with an additional fix that was required to allow tool sources to live outside the llvm checkout. It also no longer renames LLVM_EXTERNAL_*_SOURCE_DIR. This change was reverted in r241663, because it renamed several variables of the format LLVM_EXTERNAL_*_* to LLVM_TOOL_*_*.
Original Summary:
The tools CMakeLists file already had implicit tool registration, but there were a few things off about it that needed to be altered to make it work. This change addresses all that. The changes in this patch are:
* factored out canonicalizing tool names from paths to CMake variables
* removed the LLVM_IMPLICIT_PROJECT_IGNORE mechanism in favor of LLVM_EXTERNAL_${nameUPPER}_BUILD which I renamed to LLVM_TOOL_${nameUPPER}_BUILD because it applies to internal and external tools
* removed ignore_llvm_tool_subdirectory() in favor of just setting LLVM_TOOL_${nameUPPER}_BUILD to Off
* Added create_llvm_tool_options() to resolve a bug in add_llvm_external_project() - the old LLVM_EXTERNAL_${nameUPPER}_BUILD would not work on a clean CMake directory because the option could be created after it was set in code.
* Removed all but the minimum required calls to add_llvm_external_project from tools/CMakeLists.txt
Mark Heffernan [Mon, 13 Jul 2015 18:33:21 +0000 (18:33 +0000)]
Enable partial and runtime loop unrolling for NVPTX.
Enable partial and runtime loop unrolling for NVPTX backend via
TTI::UnrollingPreferences with a small threshold. This partially unrolls
small loops, which are often unrolled by the PTX-to-SASS compiler anyway,
and unrolling them earlier can be beneficial.
Mark Heffernan [Mon, 13 Jul 2015 18:26:27 +0000 (18:26 +0000)]
Enable runtime unrolling with unroll pragma metadata
Enable runtime unrolling for loops with unroll count metadata ("#pragma unroll N")
and a runtime trip count. Also, do not unroll loops with unroll full metadata if the
loop has a runtime trip count. Previously, such loops would be unrolled with a
very large threshold (pragma-unroll-threshold) if runtime unrolling happened to be
enabled, resulting in a very large (and likely unwise) unroll factor.
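For example (a hedged illustration, not a test from this patch), a loop like
the following carries an explicit unroll count but only a runtime trip count,
and can now be runtime unrolled:

  void scale(float *A, int N) {
  #pragma unroll 4
    for (int I = 0; I < N; ++I)
      A[I] *= 2.0f;
  }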
Alex Lorenz [Mon, 13 Jul 2015 18:07:26 +0000 (18:07 +0000)]
MIR Serialization: Serialize the fixed stack objects.
This commit serializes the fixed stack objects, including fixed spill slots.
The fixed stack objects are serialized using a YAML sequence of YAML inline
mappings. Each mapping has the object's ID, type, size, offset, and alignment.
The objects that aren't spill slots also serialize the isImmutable and isAliased
flags.
The fixed stack objects are a part of the machine function's YAML mapping.
James Y Knight [Mon, 13 Jul 2015 16:36:22 +0000 (16:36 +0000)]
Fix handling of the 'n' asm constraint with invalid operands.
It had accidentally accepted a symbol+offset value (and emitted
incorrect code for it, keeping only the offset part) instead of
properly reporting the constraint as invalid.
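A hedged illustration (not a test from the commit): the "n" constraint demands
a compile-time integer constant, so a symbol address (possibly plus an offset)
must be rejected rather than lowered to just the offset:

  extern int GlobalVar;

  void Accepted() { asm volatile("# immediate: %0" ::"n"(42)); }

  // Invalid: &GlobalVar + 4 is symbol+offset, not an integer constant, so the
  // constraint is now reported as invalid instead of keeping only the offset.
  // void Rejected() { asm volatile("# immediate: %0" ::"n"(&GlobalVar + 4)); }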
The 64/128-bit vector types are legal if NEON instructions are
available. However, there were no matching patterns for the @llvm.cttz.*()
intrinsics, resulting in a fatal error.
This commit fixes the problem by lowering cttz to:
a. ctpop((x & -x) - 1)
b. width - ctlz(x & -x) - 1
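A quick standalone check of the two identities (a sketch; the GCC/Clang
builtins stand in for the target intrinsics, and x must be non-zero since the
builtins leave ctlz/cttz of 0 undefined):

  #include <cassert>
  #include <cstdint>

  unsigned cttzViaCtpop(uint32_t X) {  // a. ctpop((x & -x) - 1)
    return __builtin_popcount((X & -X) - 1);
  }

  unsigned cttzViaCtlz(uint32_t X) {   // b. width - ctlz(x & -x) - 1
    return 32 - __builtin_clz(X & -X) - 1;
  }

  int main() {
    for (uint32_t X = 1; X < (1u << 20); ++X)
      assert(cttzViaCtpop(X) == __builtin_ctz(X) &&
             cttzViaCtlz(X) == __builtin_ctz(X));
    return 0;
  }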
Cleanup after r241809 - remove unnecessary call to std::sort
Summary:
The iteration order within a member of DepCands is deterministic
and therefore we don't have to sort the accesses within a member.
We also don't have to copy the indices of the pointers into a
vector, since we can iterate over the members of the class.
AVX-512: Added all AVX-512 forms of Vector Convert for Float/Double/Int/Long types.
This patch contains only the encoding; intrinsics and DAG lowering will be in the next patch.
I temporarily removed the old intrinsics test (just to split this patch).
Half types are not covered here.
Daniel Sanders [Mon, 13 Jul 2015 09:24:21 +0000 (09:24 +0000)]
[mips] Explained the 'w' modifier in the Inline Assembler documentation.
It exists for compatibility with GCC which requires it to print MSA registers
for the 'f' constraint. Although LLVM doesn't need it, the 'w' modifier should
still be used for portability between the two compilers.
[LSR] don't attempt to promote ephemeral values to indvars
Summary:
This at least saves compile time. I also encountered a case where
ephemeral values affect whether other variables are promoted, causing
performance issues. It may be a bug in LSR, but I didn't manage to
reduce it yet. Anyhow, I believe it's in general not worth considering
ephemeral values in LSR.
Register r12 ('ip') is used by GCC for this purpose
and hence is used here. As discussed on the GCC mailing
list, the register choice is an ABI issue and so
choosing the same register as GCC means
__builtin_call_with_static_chain is compatible.
A similar patch has just gone in the AArch64 backend,
so this is just the ARM counterpart, following the same
discussion.
Simon Pilgrim [Sun, 12 Jul 2015 11:15:19 +0000 (11:15 +0000)]
[X86][SSE] Vectorized v4i32 non-uniform shifts.
While the v4i32 shl operation is already vectorized using a cvttps2dq/pmulld pattern, the lshr/ashr operations are still scalarized.
This patch adds vectorization support for non-uniform v4i32 shift operations - it splats constant shift amounts to allow them to use the immediate sse shift instructions, or extracts/zero-extends non-constant shift amounts. The individual results are then blended together.
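To illustrate the non-constant path (a hedged sketch with SSE4.1 intrinsics,
not the DAG lowering itself): each lane's shift amount is isolated and
zero-extended into the low 64 bits, the whole vector is shifted by it, and the
per-lane results are blended back together:

  #include <smmintrin.h> // SSE4.1

  __m128i lshr_v4i32(__m128i V, __m128i Amt) {
    // Isolate each 32-bit shift amount in the low 64 bits, zeros above it.
    __m128i A0 = _mm_and_si128(Amt, _mm_set_epi32(0, 0, 0, -1));
    __m128i A1 = _mm_srli_si128(_mm_and_si128(Amt, _mm_set_epi32(0, 0, -1, 0)), 4);
    __m128i A2 = _mm_srli_si128(_mm_and_si128(Amt, _mm_set_epi32(0, -1, 0, 0)), 8);
    __m128i A3 = _mm_srli_si128(Amt, 12);
    // Shift the whole vector by each amount...
    __m128i R0 = _mm_srl_epi32(V, A0);
    __m128i R1 = _mm_srl_epi32(V, A1);
    __m128i R2 = _mm_srl_epi32(V, A2);
    __m128i R3 = _mm_srl_epi32(V, A3);
    // ...and blend in the lane that each result was computed for.
    __m128i R01 = _mm_blend_epi16(R0, R1, 0x0C); // lane 1 from R1
    __m128i R23 = _mm_blend_epi16(R2, R3, 0xC0); // lane 3 from R3
    return _mm_blend_epi16(R01, R23, 0xF0);      // lanes 2-3 from R23
  }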
David Majnemer [Sun, 12 Jul 2015 03:53:05 +0000 (03:53 +0000)]
[LICM] Don't try to sink values out of loops without any exits
There is no suitable basic block to sink instructions in loops without
exits. The only way an instruction in a loop without exits can be used
is as an incoming value to a PHI. In such cases, the incoming block for
the corresponding value is unreachable.
Hal Finkel [Sun, 12 Jul 2015 02:33:57 +0000 (02:33 +0000)]
[PowerPC] Make use of the TargetRecip system
r238842 added the TargetRecip system for controlling use of reciprocal
estimates for sqrt and division using a set of parameters that can be set by
the frontend. Clang now supports a sophisticated -mrecip option, and this will
allow that option to effectively control the relevant code-generation
functionality of the PPC backend.
Hal Finkel [Sun, 12 Jul 2015 00:37:44 +0000 (00:37 +0000)]
[PowerPC] Support the nest parameter attribute
This adds support for the 'nest' attribute, which allows the static chain
register to be set for function calls under non-Darwin PPC/PPC64 targets. r11
is the chain register (which the PPC64 ELF ABI calls the "environment
pointer"). For indirect calls under PPC64 ELFv1, this would normally be loaded
from the function descriptor, but providing an explicit 'nest' parameter will
override that process and use the value provided.
This allows __builtin_call_with_static_chain to work as expected on PowerPC.
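A hedged usage sketch (the callee and chain value are illustrative): the
builtin performs the call in its first argument while passing its second
argument in the static chain register, r11 on PPC64 ELF as described above:

  extern "C" int Callee(int X);

  int CallWithChain(int X, void *Chain) {
    // Chain travels in the static chain register; the normal arguments are
    // passed as usual.
    return __builtin_call_with_static_chain(Callee(X), Chain);
  }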
r236894 caused PR23626 (Clang miscompiles webkit's base64 decoder), and was
reverted in r237984. This reapplies the patch with an additional test case for
PR23626 and the associated fix (both scales and offsets in the
BasicAliasAnalysis::constantOffsetHeuristic should initially be zero).
Igor Laevsky [Sat, 11 Jul 2015 10:30:36 +0000 (10:30 +0000)]
Add argmemonly attribute.
This change adds a new attribute called "argmemonly". Functions marked with this attribute can only access memory through their pointer arguments. This attribute directly corresponds to the "OnlyAccessesArgumentPointees" ModRef behaviour in alias analysis.
Owen Anderson [Sat, 11 Jul 2015 07:01:27 +0000 (07:01 +0000)]
Define a new intrinsic @llvm.canonicalize.
This is used to canonicalize floating-point values, which is useful for
implementing certain numeric primitives. See the LangRef changes for
the full details of its semantics.
[PM/AA] Completely remove the AliasAnalysis::copyValue interface.
No in-tree alias analysis used this facility, and it was not called in
any particularly rigorous way, so it seems unlikely to be correct.
Note that one of the only stateful AA implementations in-tree,
GlobalsModRef is completely broken currently (and any AA passes like it
are equally broken) because Module AA passes are not effectively
invalidated when a function pass that fails to update the AA stack runs.
Ultimately, it doesn't seem like we know how we want to build stateful
AA, and until then trying to support and maintain correctness for an
untested API is essentially impossible. To that end, I'm planning to rip
out all of the update API. It can return if and when we need it and know
how to build it on top of the new pass manager and as part of *tested*
stateful AA implementations in the tree.
Drop 8 bytes off of `MCDwarfLoc` by restricting the `Isa`, `Column`, and
`Flags` members to appropriate sizes (from `DWARFDebugLine::Row`).
Saves a little over 0.5% off the heap of llc with no real functionality
change.
(I'm looking at `llc` memory usage on `verify-uselistorder.lto.opt.bc`;
see r236629 for details.)
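Roughly, the saving looks like this (a hedged sketch; the field set is only
illustrative of the shape, not the exact MCDwarfLoc layout):

  #include <cstdint>

  struct WideLoc {   // before: Column/Flags/Isa stored as full 32-bit members
    uint32_t FileNum, Line, Column, Flags, Isa, Discriminator;
  };
  struct PackedLoc { // after: the three members narrowed to smaller widths
    uint32_t FileNum, Line;
    uint16_t Column;
    uint8_t Flags, Isa;
    uint32_t Discriminator;
  };
  static_assert(sizeof(WideLoc) - sizeof(PackedLoc) == 8,
                "narrowing the three members saves 8 bytes per location");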
MC: Only allow changing feature bits in MCSubtargetInfo
Disallow all mutation of `MCSubtargetInfo` except the feature bits.
Besides deleting the assignment operators -- which were dead "code" --
this restricts `InitMCProcessorInfo()` to subclass initialization
sequences, and exposes a new more limited function called
`setDefaultFeatures()` for use by the ARMAsmParser `.cpu` directive.
There's a small functional change here: ARMAsmParser used to adjust
`MCSubtargetInfo::CPUSchedModel` as a side effect of calling
`InitMCProcessorInfo()`, but I've removed that suspicious behaviour.
Since the AsmParser shouldn't be doing any scheduling, there shouldn't
be any observable change...
Matt Arsenault [Fri, 10 Jul 2015 22:51:36 +0000 (22:51 +0000)]
AMDGPU: Fix chains for memory ops dependent on argument loads
Most loads and stores are derived from pointers derived from
a kernel argument load inserted during argument lowering.
This was just using the EntryToken chain for the argument loads,
and any users of these loads were also on the EntryToken chain.
Return the chain of the lowered argument load so that dependent loads
end up on the correct chain.
No test since I'm not aware of any case where this actually
broke.
Force all creators of `MCSubtargetInfo` to immediately initialize it,
merging the default constructor and the initializer into an initializing
constructor. Besides cleaning up the code a little, this makes it clear
that the initializer is never called again later.
Out-of-tree backends need a trivial change: instead of calling:
auto *X = new MCSubtargetInfo();
InitXYZMCSubtargetInfo(X, ...);
return X;
Remove all calls to `MCSubtargetInfo::InitCPUSched()` and merge its body
into the only relevant caller, `MCSubtargetInfo::InitMCProcessorInfo()`.
We were only calling the former after explicitly calling the latter with
the same CPU; it's confusing to have both methods exposed.
Besides a minor (surely unmeasurable) speedup in ARM and X86 from
avoiding running the logic twice, no functionality change.
This in turn would sometimes introduce new cleanupblocks that didn't
previously exist. The uses were being introduced by SSA value demotion.
We actually want to *promote* uses of EH pointers and selectors, so I
added some special-casing to avoid demoting such instructions. This is
getting overly complicated, but hopefully we'll come along and delete it
in the new representation.
MC: Remove the copy of MCSchedModel in MCSubtargetInfo
`MCSchedModel` is large. Make `MCSchedModel::GetDefaultSchedModel()`
return by-reference instead of by-value, so we can store a pointer in
`MCSubtargetInfo::CPUSchedModel` instead of a copy.
Note: since `MCSchedModel` is POD, this doesn't create a static
constructor.
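The pattern in isolation (a hedged sketch with stand-in names, not the MC code
itself): return a reference to a constant-initialized POD so callers can hold a
pointer rather than a copy, without introducing a static constructor:

  struct SchedModel {           // stand-in for a large POD like MCSchedModel
    unsigned IssueWidth;
    unsigned LoadLatency;
  };

  const SchedModel &getDefaultSchedModel() {
    // Constant initialization of a POD: no runtime constructor or guard needed.
    static const SchedModel Default = {1, 4};
    return Default;
  }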
Fix AArch64 prologue for empty frame with dynamic allocas.
Fixes PR23804: assertion failure in emitPrologue in the case of a
function with an empty frame and a dynamic alloca that needs stack
realignment. This is a typical case for AddressSanitizer.
Summary:
Following the discussion on r241884, it's more reasonable to assume that a
target has no vector registers by default instead of letting every such
target override getNumberOfRegisters.
Therefore, this patch modifies BasicTTIImpl::getNumberOfRegisters to
return 0 when Vector is true, and partially reverts r241884 which
modifies NVPTXTTIImpl::getNumberOfRegisters.
It also fixes a performance bug in LoopVectorizer. Even if a target has
no vector registers, vectorization may still help ILP. So, we need both
checks to be false before disabling loop vectorization altogether.