Use function attribute "trap-func-name" and remove TargetOptions::TrapFuncName.
This commit changes normal isel and fast isel to read the user-defined trap
function name from function attribute "trap-func-name" attached to llvm.trap or
llvm.debugtrap instead of from TargetOptions::TrapFuncName. This is needed to
use clang's command line option "-ftrap-function" for LTO and enable changing
the trap function name on a per-call-site basis.
Out-of-tree projects currently using TargetOptions::TrapFuncName to specify the
trap function name should attach attribute "trap-func-name" to the call sites
of llvm.trap and llvm.debugtrap instead.
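For out-of-tree code, a minimal C++ sketch of that migration (the function name is illustrative, and the exact call-site attribute API varies across LLVM versions):

  #include "llvm/IR/IRBuilder.h"
  #include "llvm/IR/Intrinsics.h"
  #include "llvm/IR/Module.h"

  using namespace llvm;

  // Instead of setting TargetOptions::TrapFuncName, attach the string
  // attribute to the llvm.trap call site itself; in textual IR the call
  // then carries "trap-func-name"="my_trap_handler".
  static void emitNamedTrap(Module &M, IRBuilder<> &Builder, StringRef Name) {
    Function *TrapFn = Intrinsic::getDeclaration(&M, Intrinsic::trap);
    CallInst *Trap = Builder.CreateCall(TrapFn);
    Trap->addFnAttr(Attribute::get(M.getContext(), "trap-func-name", Name));
  }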
Bill Schmidt [Thu, 2 Jul 2015 19:01:22 +0000 (19:01 +0000)]
[PPC64LE] Remove implicit-subreg restriction from VSX swap removal
In r241285, I removed the SUBREG_TO_REG restriction from VSX swap
removal, determining that this was overly conservative. We have
another form of the same restriction in that we check for the presence
of implicit subregs in vector operations. As with SUBREG_TO_REG for
partial register conversions, an implicit subreg is safe in and of
itself, provided no other operation makes a lane-sensitive assumption
about the result. This patch removes that restriction, by removing
the HasImplicitSubreg flag and all code that relies on it.
I've added a test case that fails to optimize before this patch is
applied, and optimizes properly with the patch. Test based on a
report from Anton Blanchard.
Bill Schmidt [Thu, 2 Jul 2015 17:03:06 +0000 (17:03 +0000)]
[PPC64LE] Teach swap optimization about the doubleword splat idiom
With a previous patch, the VSX swap optimization is able to recognize
the doubleword load-splat idiom that can be implemented using lxvdsx.
However, that does not cover a doubleword splat where the source is a
register. We can implement this using xxspltd (a special form of
xxpermdi). This patch teaches the swap optimization pass about this
idiom.
As a prerequisite, it also permits swap optimization to succeed for
all forms of SUBREG_TO_REG. Previously we were conservative and only
allowed SUBREG_TO_REG when it copied a full register. However, on
reflection any form of SUBREG_TO_REG is safe in and of itself, so long
as an unsafe operation is not performed on its result. In particular,
a widening SUBREG_TO_REG often occurs as an input to a doubleword
splat idiom, particularly in auto-vectorized code.
The doubleword splat idiom is an XXPERMDI operation where both source
registers are identical, and the selection mask is either 0 (splat the
first element) or 3 (splat the second element). To determine whether
the registers are identical, we use the existing mechanism for looking
through "copy-like" operations. That mechanism has a side effect of
marking the XXPERMDI operation as using a physical register, which
would invalidate its presence in a swap-optimized region. This is
correct for the form of XXPERMDI that performs a swap and hence would
be removed, but is not what we want for a doubleword-splat variety of
XXPERMDI. Therefore we reset the physical-register flag on the
XXPERMDI when it represents a splat.
A simple test case is added to verify that we generate the splat and
that we also remove the xxswapd instructions that would otherwise be
associated with the load and store of another operand.
Fix for PR23310: llvm-dis crashes when trying to upgrade an intrinsic.
When trying to upgrade @llvm.x86.sse2.psrl.dq while parsing a module,
BitcodeReader adds the function to its worklist twice, resulting in a
crash when accessing it the second time.
This patch changes clang's linkage against dbghelp.dll from static (resolved
at load time) to on-demand (resolved at the first use of the required
functions). Clang uses dbghelp.dll only in minor use cases, primarily when
handling a crash and when loading a plugin. The dbghelp.dll library can be
absent from a system, in which case clang will fail to load. With lazy
loading of dbghelp.dll, clang can work even if dbghelp.dll is not available.
The code responsible for shl folding in the DAGCombiner was incorrectly assuming that all constants fit in 64 bits. This patch simply changes the way the values are compared.
It was reverted previously because of problems with comparing an APInt with a raw uint64_t. That has been fixed/changed with r241204.
Charlie Turner [Thu, 2 Jul 2015 09:32:07 +0000 (09:32 +0000)]
[GraphWriter] Don't wait on xdg-open when not on Apple.
By default, the GraphWriter code assumes that the generic file open
program (`open` on Apple, `xdg-open` on other systems) can wait on the
forked process to complete. When the fork ends, the code would delete
the temporary dot files created, and return.
On GNU/Linux, the xdg-open program does not have a "wait for your fork
to complete before dying" option. So the behaviour was that xdg-open
would launch a process, quickly die itself, and then the GraphWriter
code would think it's OK to quickly delete all the temporary files.
Once the temporary files were deleted, the dot viewers would get very
upset, and often give you weird errors.
This change only waits on the generic open program on Apple platforms.
Elsewhere, we don't wait on the process, and hence we don't try to
clean up the temporary files.
Sanjoy Das [Thu, 2 Jul 2015 02:03:58 +0000 (02:03 +0000)]
[LazyCallGraph] Port test case from r240039 to LCG.
Summary:
r240039 adds a test case to check that CallGraph does the right thing
with respect to non-leaf intrinsics like statepoint and patchpoint.
This ports the same test case to LazyCallGraph. LazyCallGraph already
does the right thing with respect to escaping function pointers so there
is no need to change any code.
Implement TargetTransformInfo::hasCompatibleFunctionAttributes for X86.
This checks subtarget feature compatibility for inlining by verifying
that the callee's features are a strict subset of the caller's. This
includes the CPU as part of the subtarget, which we can get via the
incoming functions, since the backend treats CPUs as feature sets.
This allows us to inline things like:
int foo() { return baz(); }
int __attribute__((target("sse4.2"))) bar() {
  return foo();
}
so that generic code can be inlined into specialized functions.
Summary:
* Add 64-bit address space feature.
* Rename SIMD feature to SIMD128.
* Handle single-thread model with an IR pass (same way ARM does).
* Rename generic processor to MVP, to follow design's lead.
* Add bleeding-edge processors, with all features included.
* Fix a few DEBUG_TYPEs to match other backends.
Summary:
This patch changes the way APInt is compared with a value of type uint64_t.
Before, the uint64_t value was truncated to the size of the APInt before comparison.
Now the comparison takes the full 64-bit precision into account.
Test Plan: Unit tests added. No regressions. Self-hosted check-all done as well.
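A minimal sketch of the behavioral difference (the value is illustrative):

  #include "llvm/ADT/APInt.h"
  #include <cassert>
  #include <cstdint>

  int main() {
    llvm::APInt Wide(128, 1);
    Wide <<= 100; // a 128-bit value whose only set bit is above bit 63
    // The old, truncating comparison saw only the low 64 bits (all zero) and
    // judged this equal to 0; the full-precision comparison says otherwise.
    assert(!(Wide == UINT64_C(0)));
    return 0;
  }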
[LoopVectorize] Use ReplaceInstWithInst() helper where appropriate.
This is mostly an NFC, which increases code readability (instead of
saving old terminator, generating new one in front of old, and deleting
old, we just call a function). However, it would additionally copy
the debug location from the old instruction to the replacement, which
would help PR23837.
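A sketch of the helper in use (the surrounding function is illustrative):

  #include "llvm/IR/Instructions.h"
  #include "llvm/Transforms/Utils/BasicBlockUtils.h"

  using namespace llvm;

  static void replaceTerminatorWithBranch(BasicBlock *BB, BasicBlock *Target) {
    Instruction *OldTerm = BB->getTerminator();
    BranchInst *NewBr = BranchInst::Create(Target); // not yet inserted anywhere
    // One call instead of insert-new/erase-old; per this commit it also
    // carries the debug location over to the replacement.
    ReplaceInstWithInst(OldTerm, NewBr);
  }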
[NVPTX] expand extload/truncstore for vectors of floats
Summary:
According to PTX ISA:
For convenience, ld, st, and cvt instructions permit source and destination data operands to be wider than the instruction-type size, so that narrow values may be loaded, stored, and converted using regular-width registers. For example, 8-bit or 16-bit values may be held directly in 32-bit or 64-bit registers when being loaded, stored, or converted to other types and sizes. The operand type checking rules are relaxed for bit-size and integer (signed and unsigned) instruction types; floating-point instruction types still require that the operand type-size matches exactly, unless the operand is of bit-size type.
So the ISA does not support extending loads or truncating stores of floating-point values. This is reflected in the code by setting the loadext/truncstore actions to Expand for floating-point types, but vectors of floating-point values were not taken care of.
As a result, loading a vector of floats followed by an fp_extend may be combined by the DAGCombiner into an extload, and the extload may be lowered to NVPTXISD::LoadV2 with extending information. However, NVPTXISD::LoadV2 does not perform extension, and no extending instructions are inserted. The result is PTX instructions with mismatched types, such as
ld.v2.f32 {%fd3, %fd4}, [%rd2]
This patch adds the correct actions for vectors of floats, so the DAGCombiner will not create extending loads, and correct code is generated.
[WinEH] Use llvm.x86.seh.recoverfp in WinEHPrepare
Don't pattern match for frontend outlined finally calls on non-x64
platforms. The 32-bit runtime uses a different funclet prototype. Now,
the frontend is pre-outlining the finally bodies so that it ends up
doing most of the heavy lifting for variable capturing. We're just
outlining the callsite, and adapting the frameaddress(0) call to line up
the frame pointer recovery.
[NVPTX] Move NVPTXPeephole after NVPTXPrologEpilogPass
Summary:
The offset of a frame index is calculated by NVPTXPrologEpilogPass. Before
that pass runs, the correct offsets of stack objects cannot be obtained,
which leads to wrong offsets if there are more than two frame objects.
This patch moves NVPTXPeephole after NVPTXPrologEpilogPass. Because the
frame index has already been replaced by %VRFrame in NVPTXPrologEpilogPass,
we check the VRFrame register instead, and try to remove VRFrame if it has
no uses after the NVPTXPeephole pass.
Patched by Xuetian Weng.
Test Plan:
Strengthened test/CodeGen/NVPTX/local-stack-frame.ll to check the
offset calculation based on SP and SPL.
Bill Schmidt [Wed, 1 Jul 2015 19:40:07 +0000 (19:40 +0000)]
[PPC64LE] Enable missing lxvdsx optimization, and related swap optimization
When adding little-endian vector support for PowerPC last year, I
inadvertently disabled an optimization that recognizes a load-splat
idiom and generates the lxvdsx instruction. This patch moves the
offending logic so lxvdsx is once again generated.
This pattern is frequently generated by the vectorizer for scalar
loads of an effective constant. Previously the lxvdsx instruction was
wrongly listed as lane-sensitive for the VSX swap optimization (since
both doublewords are identical, swaps are safe). This patch fixes
this as well, so that vectorized code using lxvdsx can now have swaps
removed from the computation.
There is an existing test (@test50) in test/CodeGen/PowerPC/vsx.ll
that checks for the missing optimization. However, vsx.ll was only
being tested for POWER7 with big-endian code generation. I've added
a little-endian RUN statement and expected LE code generation for all
the tests in vsx.ll to give us a bit better VSX coverage, including
what's needed for this patch.
add a cl::opt override for TargetLoweringBase's JumpIsExpensive
This patch is not intended to change existing codegen behavior for any target.
It just exposes the JumpIsExpensive setting on the command-line to allow for
easier testing and emergency overrides.
Also, change the existing regression test to use FileCheck, explicitly specify
the jump-is-expensive option, and use more precise checks.
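A sketch of the cl::opt override pattern being described; the flag name and description here are assumptions rather than necessarily what the patch added:

  #include "llvm/Support/CommandLine.h"

  static llvm::cl::opt<bool> JumpIsExpensiveOverride(
      "jump-is-expensive",
      llvm::cl::desc("Do not create extra branches to split comparison logic."),
      llvm::cl::init(false), llvm::cl::Hidden);

TargetLoweringBase would then consult this flag when initializing its JumpIsExpensive setting instead of relying solely on the per-target default.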
[SEH] Don't assert if the parent function lacks a personality
The EH code might have been deleted as unreachable and the personality
pruned while the filter is still present. Currently I'm hitting this at
-O0 due to the clang bug PR24009.
Those instructions are an alternate syntax available to assembly coders,
and are needed in order to support code already compiling with some other
assemblers (gas). They are documented in the "ARMv8 Instruction Set
Overview", in the "Arithmetic (immediate)" section. This makes llvm-mc
a programmer-friendly assembler!
This also fixes PR20978: "Assembly handling of adding negative numbers
not as smart as gas".
* getSection(uint32_t): only return an error if the index is out of bounds.
Index 0 corresponds to a perfectly valid entry.
* getSection(Elf_Sym): return null for symbols that normally don't have
sections and an error for out-of-bounds indexes.
In many places this just moves the report_fatal_error up the stack, but those
can then be fixed in smaller patches.
[DWARF] Fix debug info generation for function static variables, typedefs, and records
Function static variables, typedefs and records (class, struct or union) declared inside
a lexical scope were associated with the function as their parent scope, rather than the
lexical scope they are defined or declared in.
[X86] Avoid over-relaxation of 8-bit immediates in integer arithmetic instructions.
Only consider an instruction a candidate for relaxation if the last operand of the
instruction is an expression. We previously checked whether any operand is an expression,
which is useless, since for all instructions concerned, the only operand that may be
affected by relaxation is the last one.
In addition, this removes the check for having RIP as an argument, since it was
plain wrong - even when one of the arguments is RIP, relaxation may still be needed.
Fix PR23872: Integrated assembler error message when using .type directive with @ in AArch32 assembly.
The AArch32 assembler parses the '@' as a comment symbol, so the error message shouldn't suggest
that '@<type>' is a valid replacement when assembling for an AArch32 target.
David Majnemer [Wed, 1 Jul 2015 05:38:07 +0000 (05:38 +0000)]
[LoopUnroll] Use undef for phis with no value live
We would create a phi node with a zero initialized operand instead of
undef in the case where no value was originally available. This was
problematic for x86_mmx which has no null value.
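A sketch of the fix at the IR-construction level (names are illustrative):

  #include "llvm/IR/Constants.h"
  #include "llvm/IR/Instructions.h"

  using namespace llvm;

  static void addDeadIncomingValue(PHINode *PN, BasicBlock *Pred) {
    // Constant::getNullValue(PN->getType()) is not valid for x86_mmx, which
    // has no null value; undef is always available for any type.
    PN->addIncoming(UndefValue::get(PN->getType()), Pred);
  }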
[NaryReassociate] enhances nsw by leveraging @llvm.assume
Summary:
nsw flags are flaky and can often be removed by optimizations. This patch enhances
nsw by leveraging @llvm.assume in the IR. Specifically, NaryReassociate now
understands that
assume(a + b >= 0) && assume(a >= 0) ==> a +nsw b
As a result, it can split more sext(a + b) into sext(a) + sext(b) for CSE.
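A source-level sketch of the shape of code involved, compiled as C++ with Clang; whether the split actually fires depends on the pass pipeline, so treat it as illustrative only:

  // On an LP64 target the (long) cast is a sext of the 32-bit addition.
  // Clang emits add nsw here, but optimizations can drop the flag; the
  // llvm.assume calls generated by __builtin_assume let NaryReassociate
  // re-establish nsw and split sext(a + b) into sext(a) + sext(b).
  long index(int a, int b) {
    __builtin_assume(a >= 0);
    __builtin_assume(a + b >= 0);
    return static_cast<long>(a + b);
  }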
Reid Kleckner [Tue, 30 Jun 2015 22:46:59 +0000 (22:46 +0000)]
[SEH] Add new intrinsics for recovering and restoring parent frames
The incoming EBP value established by the runtime is actually a pointer
to the end of the EH registration object, and not the true parent
function frame pointer. Clang doesn't need llvm.x86.seh.exceptioninfo
anymore because we know that the exception info pointer is at a fixed
offset from this incoming EBP.
The llvm.x86.seh.recoverfp intrinsic takes an EBP value provided by the
EH runtime and returns a pointer that is usable with llvm.framerecover.
The llvm.x86.seh.restoreframe intrinsic is inserted by the 32-bit
specific preparation pass in blocks targeted by the EH runtime. It
re-establishes any physical registers used by the parent function to
address the stack, such as the frame, base, and stack pointers.
Neither of these intrinsics correctly handles stack realignment prologues
yet, but it's possible to add that later.
Sanjoy Das [Tue, 30 Jun 2015 21:22:32 +0000 (21:22 +0000)]
[FaultMaps] Let the frontend pre-select implicit null check candidates.
Summary:
This change introduces a !make.implicit metadata that allows the
frontend to pre-select the set of explicit null checks that will be
considered for transformation into implicit null checks.
The reason for not using profiling data instead of !make.implicit is
explained in the change to `FaultMaps.rst`.
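A sketch of how a frontend might tag an explicit null-check branch; the metadata kind string comes from this change, while the surrounding code is illustrative:

  #include "llvm/IR/IRBuilder.h"
  #include "llvm/IR/Metadata.h"

  using namespace llvm;

  static void emitCheckedBranch(IRBuilder<> &B, Value *IsNull,
                                BasicBlock *NullBB, BasicBlock *NotNullBB) {
    BranchInst *Br = B.CreateCondBr(IsNull, NullBB, NotNullBB);
    // An empty node is enough: the presence of !make.implicit marks this
    // branch as an implicit null check candidate.
    Br->setMetadata("make.implicit",
                    MDNode::get(B.getContext(), ArrayRef<Metadata *>()));
  }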
Pete Cooper [Tue, 30 Jun 2015 20:54:21 +0000 (20:54 +0000)]
Pack MCSymbol::HasName in to a spare bit in the section/fragment union.
This is part of an effort to pack the average MCSymbol down to 24 bytes.
The HasName bit was pushing the size of the bitfield over to another word,
so this change uses a PointerIntPair to fit it into unused bits of a
PointerUnion.
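A self-contained sketch of the packing idea, with Section and Fragment as dummy stand-ins for MCSection and MCFragment:

  #include "llvm/ADT/PointerIntPair.h"
  #include "llvm/ADT/PointerUnion.h"

  struct Section { void *Placeholder; };
  struct Fragment { void *Placeholder; };

  // The 1-bit HasName flag rides in a spare low bit of the aligned pointer
  // union, so it no longer pushes the symbol's bitfield into another word.
  using SectionOrFragment = llvm::PointerUnion<Section *, Fragment *>;
  using PackedField = llvm::PointerIntPair<SectionOrFragment, 1, bool>;

  static_assert(sizeof(PackedField) == sizeof(void *),
                "the flag fits in the pointer-sized field");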
COFF: Do not assign linker-weak symbols to selectany comdat sections.
It is mandatory to specify a comdat in order to receive comdat semantics
for a symbol. We were previously getting this wrong in -function-sections
mode; linker-weak symbols were being emitted in a selectany comdat. This
change causes such symbols to use a noduplicates comdat instead, fixing
the inconsistency.
Alexey Samsonov [Tue, 30 Jun 2015 19:07:20 +0000 (19:07 +0000)]
[DebugInfo] Let IRBuilder::SetInsertPoint(BB::iterator) update current debug location.
IRBuilder::SetInsertPoint(BB, BB::iterator) is an older version of
IRBuilder::SetInsertPoint(Instruction). However, the latter updates
the current debug location of emitted instructions, while the former
doesn't, which is confusing.
Unify the behavior of these methods: now they both set current debug
location to the debug location of instruction at insertion point.
The callers of IRBuilder::SetInsertPoint(BB, BB::iterator) don't
seem to depend on the old behavior (keeping the original debug info
location). On the contrary, sometimes they (e.g. SCEV) *should* be
updating debug info location, but don't. I'll look at gdb bots after
the commit to check that we don't regress on debug info somewhere.
This change may make the line table more fine-grained, thus increasing
debug info size. I haven't observed significant increase, though:
it varies from negligible to 0.3% on several binaries and self-hosted
Clang.
This is yet another change targeted at resolving PR23837.
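A sketch of the two overloads in question (the helper function is illustrative):

  #include "llvm/IR/IRBuilder.h"

  using namespace llvm;

  static void positionBuilderAt(IRBuilder<> &B, Instruction *I) {
    // Already updated the current debug location from I before this change:
    B.SetInsertPoint(I);
    // After this change, the (block, iterator) form behaves the same way:
    B.SetInsertPoint(I->getParent(), I->getIterator());
  }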
Alex Lorenz [Tue, 30 Jun 2015 18:16:42 +0000 (18:16 +0000)]
MIR Serialization: Serialize MBB successors.
This commit implements serialization of the machine basic block successors. It
uses a YAML flow sequence of strings that contain the MBB references.
The MBB references in those strings use the same syntax as the MBB machine
operands in the machine instruction strings.
Alex Lorenz [Tue, 30 Jun 2015 17:55:00 +0000 (17:55 +0000)]
MIR Parser: refactor error reporting for machine instruction parser errors. NFC.
This commit extracts the code that reports an error that's produced by the
machine instruction parser into a new method that can be reused in other places.
Alex Lorenz [Tue, 30 Jun 2015 17:47:50 +0000 (17:47 +0000)]
MIR Parser: make the machine instruction parsing interface more consistent. NFC.
This commit refactors the interface of the machine instruction parser. The
method 'parse' and the function 'parseMachineInstr' now adopt the pattern
used by the other parsing methods: return a bool and pass the result back
in the first argument.
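A sketch of the convention being adopted; the types and names here are illustrative stand-ins, not the actual parser interface:

  struct MachineInstrStub {};

  struct MIParserSketch {
    // Returns true on error; on success the result comes back through the
    // first argument, matching the other parse* methods.
    bool parse(MachineInstrStub *&MI) {
      static MachineInstrStub Parsed;
      MI = &Parsed;
      return false;
    }
  };

  static bool parseMachineInstrSketch(MIParserSketch &Parser,
                                      MachineInstrStub *&MI) {
    if (Parser.parse(MI))
      return true; // the parser has already reported the error
    return false;
  }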