Wei Mi [Tue, 25 Aug 2015 16:43:47 +0000 (16:43 +0000)]
The patch replaces the overflow check in loop vectorization with a minimum loop iteration check.
The minimum iteration check ensures the loop has a large enough trip count that the generated
vector loop will likely be executed, and it also covers the overflow check.
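A minimal standalone sketch of the guard (illustrative names and threshold; not the vectorizer's actual code):

  #include <cstdint>

  // Guard the vector loop with a single minimum-iteration test: if the scalar
  // trip count is below VF * UF (vectorization factor times unroll factor),
  // branch straight to the scalar loop. This single test also covers the
  // degenerate cases the separate overflow check used to guard against.
  bool shouldEnterVectorLoop(uint64_t TripCount, unsigned VF, unsigned UF) {
    const uint64_t MinIters = uint64_t(VF) * UF;
    return TripCount >= MinIters;
  }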
Sanjay Patel [Tue, 25 Aug 2015 16:29:21 +0000 (16:29 +0000)]
make fast unaligned memory accesses implicit with SSE4.2 or SSE4a
This is a follow-on from the discussion in http://reviews.llvm.org/D12154.
This change allows memset/memcpy to use SSE or AVX memory accesses for any chip that has
generally fast unaligned memory ops.
A motivating use case for this change is a clang invocation that doesn't explicitly set
the CPU, but does target a feature that we know only exists on a CPU that supports fast
unaligned memops. For example:
$ clang -O1 foo.c -mavx
This resolves a difference in lowering noted in PR24449:
https://llvm.org/bugs/show_bug.cgi?id=24449
Before this patch, we used different store types depending on whether the example could be
lowered as a memset or not.
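For illustration, a hypothetical source file that exercises this lowering (compiled with something like the clang invocation above):

  #include <cstring>

  // A small fixed-size memset that the backend can now expand into wide
  // unaligned SSE/AVX stores instead of narrower stores or a library call.
  void zero64(char *P) {
    std::memset(P, 0, 64);
  }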
Reid Kleckner [Tue, 25 Aug 2015 16:07:06 +0000 (16:07 +0000)]
[lit] Speculatively fix PR24554 by manually closing the process handle
My theory is that somehow Python's refcounting and GC strategy isn't
closing the subprocess handle in a timely fashion. This accesses the
private '_handle' field of the Popen object, but I see no other way to
do this. If this doesn't address the problem on the sanitizer-windows
buildbot, we can revert this change. If it does, then let's keep the
hack.
Diego Novillo [Tue, 25 Aug 2015 15:25:11 +0000 (15:25 +0000)]
Convert SampleProfile pass into a Module pass.
Eventually, we will need sample profiles to be incorporated into the
inliner's cost models. To do this, we need the sample profile pass to
be a module pass.
This patch makes no functional changes beyond the mechanical adjustments
needed to run SampleProfile as a module pass.
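A skeleton of what such a conversion looks like under the legacy pass manager (illustrative only; not the actual SampleProfile code):

  #include "llvm/IR/Function.h"
  #include "llvm/IR/Module.h"
  #include "llvm/Pass.h"
  using namespace llvm;

  namespace {
  // A module pass that simply drives the existing per-function logic. Running
  // at module scope is what later allows the profile data to feed module-level
  // consumers such as the inliner's cost model.
  struct SampleProfileSketch : public ModulePass {
    static char ID;
    SampleProfileSketch() : ModulePass(ID) {}

    bool runOnModule(Module &M) override {
      bool Changed = false;
      for (Function &F : M)
        if (!F.isDeclaration())
          Changed |= runOnFunction(F); // reuse the old per-function body
      return Changed;
    }

    bool runOnFunction(Function &F) { return false; } // placeholder
  };
  } // end anonymous namespace

  char SampleProfileSketch::ID = 0;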
Davide Italiano [Tue, 25 Aug 2015 15:02:23 +0000 (15:02 +0000)]
[MachO] Introduce MinVersion API.
While introducing support for MinVersionLoadCommand in llvm-readobj, I noticed there's
no API to conveniently extract the Major/Minor/Update components. Currently consumers
do the bit twiddling on their own, but this will change from now on.
I'll convert llvm-objdump (and llvm-readobj) in a later commit.
Differential Revision: http://reviews.llvm.org/D12282
Reviewed by: rafael
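For reference, the bit twiddling consumers currently do by hand; the Mach-O version field packs X.Y.Z into 32 bits as 16/8/8 (standalone sketch, not the new API):

  #include <cstdint>
  #include <tuple>

  // Decode a version_min value: major in the upper 16 bits, minor and update
  // in 8 bits each.
  std::tuple<unsigned, unsigned, unsigned> decodeVersion(uint32_t V) {
    unsigned Major = V >> 16;
    unsigned Minor = (V >> 8) & 0xff;
    unsigned Update = V & 0xff;
    return std::make_tuple(Major, Minor, Update);
  }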
This fixes two issues in x86 fptoui lowering.
1) Makes conversions from f80 go through the right path on AVX-512.
2) Implements an inline sequence for fptoui i64 instead of a library
call. This improves performance by 6X on SSE3+ and 3X otherwise.
Incidentally, it also removes the use of ftol2 for fptoui, which was
wrong to begin with, as ftol2 converts to a signed i64, producing
wrong results for values >= 2^63.
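A scalar C++ sketch of the inline sequence (the idea, not the emitted machine code): values below 2^63 convert directly with the signed instruction; larger values are rebased by 2^63, converted, and the top bit is added back. This is also why ftol2, a signed conversion, could not be correct here.

  #include <cstdint>

  uint64_t fptoui64(double X) {
    const double TwoPow63 = 9223372036854775808.0; // 2^63
    if (X < TwoPow63)
      return static_cast<uint64_t>(static_cast<int64_t>(X));
    // X >= 2^63: subtract 2^63, convert as signed, then set the top bit.
    return static_cast<uint64_t>(static_cast<int64_t>(X - TwoPow63)) +
           (1ULL << 63);
  }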
Piotr Padlewski [Tue, 25 Aug 2015 01:34:15 +0000 (01:34 +0000)]
Assume intrinsic handling in global opt
It doesn't solve the problem where, for example, we load something and then assume
that it is equal to some constant value, because globalopt will fail on the unknown
load instruction. The proposed solution would be to skip instructions that we can't
evaluate but that are safe to skip (e.g. load, assume, and many others) and check
whether they are required to perform the optimization (e.g. we don't care about
ephemeral instructions that may appear from using @llvm.assume()).
Mehdi Amini [Tue, 25 Aug 2015 01:07:25 +0000 (01:07 +0000)]
Fix LLVM C API for DataLayout
We removed access to the DataLayout on the TargetMachine and
deprecated the C API function LLVMGetTargetMachineData() in r243114.
However, the way I tried to preserve backward compatibility was broken: I
changed the wrapper of the TargetMachine to be a structure that
includes the DataLayout as well. But the TargetMachine is also
wrapped by the ExecutionEngine, in the more classic way, so a client
using the TargetMachine wrapped by the ExecutionEngine and trying
to get the DataLayout would break.
It seems tricky to solve the problem completely in the C API
implementation. This patch addresses the backward compatibility in a
lighter way in the C++ API: the C API is restored to its original state,
and the removed C++ API is reintroduced, but privately. The C API is
friended to the TargetMachine and should be the only consumer of this API.
Vedant Kumar [Tue, 25 Aug 2015 00:09:47 +0000 (00:09 +0000)]
[docs] Improvements to CMake.rst
- Fix some grammatical and typographical errors.
- Try to improve upon some awkward/nonstandard phrasings.
- Expand slightly the treatment of how you specify arguments to cmake.
- Update the list of possible LLVM_BUILD_TESTS and state where to find the
definitive list.
- Correct the name of llvm-tblgen.
- Expand slightly the treatment of several build options, including
LLVM_LIT_TOOLS_DIR, LLVM_ENABLE_FFI, and LLVM_EXTERNAL_project_SOURCE_DIR.
Hal Finkel [Mon, 24 Aug 2015 23:48:28 +0000 (23:48 +0000)]
[PowerPC] PPCVSXFMAMutate should ignore trivial-copy addends
We might end up with a trivial copy as the addend, and if so, we should ignore
the corresponding FMA instruction. The trivial copy can be coalesced away later,
so there's nothing to do here. We should not, however, assert. Fixes PR24544.
Matthias Braun [Mon, 24 Aug 2015 23:30:39 +0000 (23:30 +0000)]
Try to fix buildbots
Apparently std::vector::erase(const_iterator) (as opposed to the overload taking a
non-const iterator) is part of C++11, but it seems this overload is not available
on all the buildbots.
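The incompatibility in a nutshell (a minimal reproducer, not code from the tree):

  #include <vector>

  int main() {
    std::vector<int> V = {1, 2, 3};
    std::vector<int>::const_iterator CI = V.cbegin();
    // C++11 added the erase(const_iterator) overload; standard libraries that
    // only provide erase(iterator) reject this call.
    V.erase(CI);
    return 0;
  }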
This change moves LTOCodeGenerator's ownership of the merged module to a
field of type std::unique_ptr<Module>. This helps simplify parts of the code
and clears the way for the module to be consumed by LLVM CodeGen (see D12132
review comments).
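A minimal sketch of the ownership pattern (placeholder types and names, not the actual LTOCodeGenerator interface):

  #include <memory>
  #include <utility>

  struct Module {}; // stand-in for llvm::Module

  // Holding the merged module by unique_ptr makes handing it off to CodeGen an
  // explicit ownership transfer rather than a raw pointer with implicit rules.
  class MergedModuleHolder {
    std::unique_ptr<Module> MergedModule;

  public:
    explicit MergedModuleHolder(std::unique_ptr<Module> M)
        : MergedModule(std::move(M)) {}

    // The consumer takes ownership; the holder is left empty.
    std::unique_ptr<Module> takeModule() { return std::move(MergedModule); }
  };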
Bill Schmidt [Mon, 24 Aug 2015 19:27:27 +0000 (19:27 +0000)]
[PPC64LE] Fix PR24546 - Swap optimization and debug values
This patch fixes PR24546, which demonstrates a segfault during the VSX
swap removal pass. The problem is that debug value instructions were
not excluded from the list of instructions to be analyzed for webs of
related computation. I've added the test case from the PR as a crash
test in test/CodeGen/PowerPC.
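The exclusion itself is the usual one-line guard when walking machine instructions (a sketch with assumed structure, not the exact pass code):

  #include "llvm/CodeGen/MachineBasicBlock.h"
  #include "llvm/CodeGen/MachineInstr.h"
  using namespace llvm;

  // Skip DBG_VALUE instructions so they never join a web of real computation.
  void gatherWebCandidates(MachineBasicBlock &MBB) {
    for (MachineInstr &MI : MBB) {
      if (MI.isDebugValue())
        continue;
      // ... analyze MI for swap-removal webs ...
    }
  }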
[sanitizers] Add DFSan support for AArch64 42-bit VMA
This patch adds support for dfsan on aarch64-linux with a 42-bit VMA
(the current default config for 64K-pagesize kernels). The support is
enabled by defining SANITIZER_AARCH64_VMA to 42 at build time
for both clang/llvm and compiler-rt. The default VMA is 39 bits.
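The build-time selection follows the usual preprocessor pattern (a sketch; the real sanitizer headers differ):

  // Default to the common 39-bit VMA; a 64K-page kernel build passes
  // -DSANITIZER_AARCH64_VMA=42 when building clang/llvm and compiler-rt.
  #ifndef SANITIZER_AARCH64_VMA
  # define SANITIZER_AARCH64_VMA 39
  #endif

  #if SANITIZER_AARCH64_VMA == 42
  // 42-bit VMA: dfsan shadow/application memory layout constants go here.
  #elif SANITIZER_AARCH64_VMA == 39
  // 39-bit VMA layout (the default).
  #else
  # error "Unsupported AArch64 VMA"
  #endif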
Oliver Stannard [Mon, 24 Aug 2015 09:47:45 +0000 (09:47 +0000)]
Add DAG optimisation for FP16_TO_FP
The FP16_TO_FP node only uses the bottom 16 bits of its input, so the
following pattern can be optimised by removing the AND:
(FP16_TO_FP (AND op, 0xffff)) -> (FP16_TO_FP op)
This is a common pattern for ARM targets when functions have __fp16
arguments, as they are passed as floats (so that they get passed in the
correct registers), but then bitcast and truncated to ignore the top 16
bits.
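A sketch of the combine in DAG-combiner style (function name and placement are illustrative, not the exact implementation):

  #include "llvm/CodeGen/SelectionDAG.h"
  #include "llvm/CodeGen/SelectionDAGNodes.h"
  using namespace llvm;

  // (fp16_to_fp (and op, 0xffff)) --> (fp16_to_fp op)
  // FP16_TO_FP only reads the low 16 bits, so the mask is redundant.
  static SDValue combineFP16ToFP(SDNode *N, SelectionDAG &DAG) {
    SDValue Op = N->getOperand(0);
    if (Op.getOpcode() == ISD::AND)
      if (auto *Mask = dyn_cast<ConstantSDNode>(Op.getOperand(1)))
        if (Mask->getZExtValue() == 0xffff)
          return DAG.getNode(ISD::FP16_TO_FP, SDLoc(N), N->getValueType(0),
                             Op.getOperand(0));
    return SDValue();
  }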
Mehdi Amini [Sun, 23 Aug 2015 22:15:49 +0000 (22:15 +0000)]
Require Dominator Tree For SROA, improve compile-time
TL;DR: SROA is followed by EarlyCSE, which requires the DominatorTree.
There is no reason not to require it up-front for SROA.
Some history is necessary to understand why we ended up here.
r123437 switched the second (Legacy)SROA in the optimizer pipeline to
use SSAUpdater in order to avoid recomputing the costly
DominanceFrontier. The purpose was to speed-up the compile-time.
Later r123609 removed the need for the DominanceFrontier in
(Legacy)SROA.
Right after, some cleanup was made in r123724 to remove any reference
to the DominanceFrontier. SROA existed in two flavors: SROA_SSAUp and
SROA_DT (the latter replacing SROA_DF).
The second argument of `createScalarReplAggregatesPass` was renamed
from `UseDomFrontier` to `UseDomTree`.
I believe this is where a mistake was made. The pipeline was not
updated and the call site was still:
  PM->add(createScalarReplAggregatesPass(-1, false));
At that time, SROA was immediately followed in the pipeline by
EarlyCSE, which already required the DominatorTree. Not requiring
the DominatorTree in SROA didn't save anything, but unfortunately
the requirement was lost at this point.
When the new SROA pass was introduced in r163965, I believe the goal
was to have an exact replacement of the existing SROA, and this bug
slipped through.
You can see currently:

  $ echo "" | clang -x c++ -O3 -c - -mllvm -debug-pass=Structure
  ...
  ...
    FunctionPass Manager
      SROA
      Dominator Tree Construction
      Early CSE

After this patch:

  $ echo "" | clang -x c++ -O3 -c - -mllvm -debug-pass=Structure
  ...
  ...
    FunctionPass Manager
      Dominator Tree Construction
      SROA
      Early CSE
This improves the compile time from 88s to 23s for PR17855.
https://llvm.org/bugs/show_bug.cgi?id=17855
And from 113s to 12s for PR16756
https://llvm.org/bugs/show_bug.cgi?id=16756
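The fix itself is just declaring the dependency in the legacy pass (sketched with standard names; the real SROA pass requires more analyses):

  #include "llvm/IR/Dominators.h"
  #include "llvm/Pass.h"
  using namespace llvm;

  namespace {
  // The dominator tree is built for the following EarlyCSE anyway, so
  // requiring it up front costs nothing and lets SROA use it as well.
  struct SROASketch : public FunctionPass {
    static char ID;
    SROASketch() : FunctionPass(ID) {}

    void getAnalysisUsage(AnalysisUsage &AU) const override {
      AU.addRequired<DominatorTreeWrapperPass>();
    }

    bool runOnFunction(Function &F) override { return false; }
  };
  } // end anonymous namespace

  char SROASketch::ID = 0;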
Frederic Riss [Sun, 23 Aug 2015 04:44:21 +0000 (04:44 +0000)]
[dwarfdump] Do not apply relocations in mach-o files if there is no LoadedObjectInfo.
Not only do we not need to do anything to read correct values from the
object files, but the current logic actually wrongly applies the section
base address twice when there is no LoadedObjectInfo passed to the
DWARFContext creation (as the added test shows).
Simply do not apply any relocations on the mach-o debug info if there is
no load offset to apply.
Joseph Tremoulet [Sun, 23 Aug 2015 00:26:33 +0000 (00:26 +0000)]
[WinEH] Require token linkage in EH pad/ret signatures
Summary:
WinEHPrepare is going to require that cleanuppad and catchpad produce values
of token type which are consumed by any cleanupret or catchret exiting the
pad. This change updates the signatures of those operators to require/enforce
that the type produced by the pads is token type and that the rets have an
appropriate argument.
The catchpad argument of a `CatchReturnInst` must be a `CatchPadInst` (and
similarly for `CleanupReturnInst`/`CleanupPadInst`). To accommodate that
restriction, this change adds a notion of an operator constraint to both
LLParser and BitcodeReader, allowing appropriate sentinels to be constructed
for forward references and appropriate error messages to be emitted for
illegal inputs.
Also add a verifier rule (noted in LangRef) that a catchpad with a catchpad
predecessor must have no other predecessors; this ensures that WinEHPrepare
will see the expected linear relationship between sibling catches on the
same try.
Lastly, remove some superfluous/vestigial casts from instruction operand
setters operating on BasicBlocks.
David Blaikie [Sat, 22 Aug 2015 22:38:44 +0000 (22:38 +0000)]
Update test case so it passes the verifier
Some debug info was drastically out of date, from the days when we used
to emit a list of length one (with a single null entry) rather than an
empty list (or, more recently, no list at all) for list fields that have
no elements.
David Blaikie [Sat, 22 Aug 2015 22:36:40 +0000 (22:36 +0000)]
Verifier: Don't crash on null entries in debug info retained types list
There was already a good error path for this. Added a test for it & made
a minor code change to ensure the error path was actually reached,
rather than crashing before we got that far.
NAKAMURA Takumi [Sat, 22 Aug 2015 04:53:52 +0000 (04:53 +0000)]
[CMake] Make LLVM_EXTERNAL_*_SOURCE_DIR consistent against older buildsites.
If the corresponding in-tree subdirectory exists, just ignore the LLVM_EXTERNAL_* settings.
Otherwise, set LLVM_TOOL_*_BUILD ON/OFF properly according to LLVM_EXTERNAL_*.
This makes it easier to walk among old revisions *without* deleting CMakeCache.txt.
Before r242059, LLVM_EXTERNAL_* was working like this:
  if(EXISTS ${*_SOURCE_DIR}/CMakeLists.txt)
    set(*_BUILD ON CACHE)
    if(*_BUILD is ON)
      add_subdirectory(*_SOURCE_DIR)
    endif()
  endif()
JF Bastien [Fri, 21 Aug 2015 23:27:24 +0000 (23:27 +0000)]
Improve the determinism of MergeFunctions
Summary:
Merge functions previously relied on unsigned comparisons of pointer values to
order functions. This caused observable non-determinism in the compiler for
large bitcode programs. Basically, opt -mergefuncs program.bc | md5sum produces
different hashes when run repeatedly on the same machine. Differing output was
observed on three large bitcodes, but it was less frequent on the smallest file.
It is possible that this only manifests on the large inputs, hence remaining
undetected until now.
This patch fixes this by removing (almost, see below) all places where
comparisons between pointers are used to order functions. Most of these changes
are local, but the comparison of global values requires assigning an identifier
to each global in the order it is visited. This is very similar to the way the
comparison function identifies Value*'s defined within a function. Because the
order of visiting the functions and their subparts is deterministic, the
identifiers assigned to the globals will be as well, and the order of functions
will be deterministic.
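A minimal sketch of the numbering idea (standalone; not the pass's actual data structures):

  #include <cstdint>
  #include <unordered_map>

  // Assign each global an identifier in the deterministic order it is first
  // visited, and compare the identifiers instead of the pointer values.
  class GlobalNumberer {
    std::unordered_map<const void *, uint64_t> Numbers;
    uint64_t NextNumber = 0;

  public:
    uint64_t getNumber(const void *GV) {
      auto Ins = Numbers.insert({GV, NextNumber});
      if (Ins.second)
        ++NextNumber;
      return Ins.first->second;
    }

    // Returns -1/0/1, like the comparator used by the pass.
    int cmpGlobals(const void *L, const void *R) {
      uint64_t LN = getNumber(L), RN = getNumber(R);
      return LN < RN ? -1 : (LN > RN ? 1 : 0);
    }
  };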
With these changes, there is no more observed non-determinism. There are also
only minor slowdowns (negligible to 4%) compared to the baseline, which are
likely a result of the fact that global comparisons involve hash lookups and not
just pointer comparisons.
The one caveat so far is that programs containing BlockAddress constants can
still be non-deterministic. It is not clear what the right solution is here. In
particular, even if the global numbers are used to order by function, we still
need a way to order the BasicBlock*'s. Unfortunately, we cannot just bail out
and fail to order the functions or consider them equal, because we require a
total order over functions. Note that programs with BlockAddress constants are
relatively rare, so the impact of leaving this in is minor as long as this pass
is opt-in.
Adam Nemet [Fri, 21 Aug 2015 23:19:57 +0000 (23:19 +0000)]
[LAA] Hold bounds via ValueHandles during SCEV expansion
SCEV expansion can invalidate previously expanded values. For example
in SCEVExpander::ReuseOrCreateCast, if we already have the requested
cast value but it's not at the desired location, a new cast is inserted
and the old cast will be invalidated.
Therefore, when expanding the bounds for the pointers, a later entry can
invalidate the IR value for an earlier one. The fix is to store a value
handle rather than the value itself.
The newly added test has a more detailed description of how the bug
triggers.
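The shape of the fix, sketched with a value handle and assumed member names (TrackingVH is one of the handle flavors that follows replaceAllUsesWith, so a re-expanded bound does not leave a stale pointer behind):

  #include "llvm/IR/Value.h"
  #include "llvm/IR/ValueHandle.h"
  using namespace llvm;

  // Before, raw Value* members could end up pointing at an invalidated cast
  // when a later expansion replaced it; a value handle tracks the replacement.
  struct ExpandedBounds {
    TrackingVH<Value> Start; // was: Value *Start;
    TrackingVH<Value> End;   // was: Value *End;
  };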
This bug can have a negative but potentially highly variable performance
impact in Loop Distribution. Because one of the bound values was
invalidated and is now an undef expression, InstCombine is free to
transform the array overlap check:

  Start0 <= End1 && Start1 <= End0

into:

  Start0 <= End1

So depending on the runtime location of the arrays, we would detect a
conflict and fall back on the original loop instead of the versioned loop.
Also tested compile time with SPEC2006 LTO bc files.
Tom Stellard [Fri, 21 Aug 2015 22:47:27 +0000 (22:47 +0000)]
AMDGPU/SI: Better handle s_wait insertion
We can wait on either VM, EXP or LGKM.
The waits are independent.
Without this patch, a wait inserted because of one of them
would also wait for all the previous others.
This patch makes s_wait only wait for the ones we need for the next
instruction.
Here's an example of a subtle perf reduction this patch solves:
the s_waitcnt vmcnt(1) is useless. The reason it is added is that the last
buffer_load_format_xyzw needs s[44:47], which was issued
by the first s_load_dwordx4. It waits for all VM operations
before that call to have finished.
Internally, 3 counters (for VM, EXP and LGKM) are updated after every
instruction. For example, buffer_load_format_xyzw will increase the VM
counter, and s_load_dwordx4 the LGKM one.
Without the patch, for every defined register, the current 3 counters are
stored, and they are used to know how long to wait when an instruction
needs the register.
Because of that, the counters stored for s[44:47] imply that, to use the
register, you need to wait for the previous buffer_load_format_xyzw.
Instead, this patch stores only the counters that matter for the register,
and puts zero for the other ones, since we don't need any wait for them.
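A conceptual sketch of the bookkeeping change (invented names; not the pass's real data structures):

  #include <array>

  // Each instruction bumps at most a subset of the three counters. Record, per
  // defined register, only the counter values the defining instruction actually
  // bumps; leave the others at zero so a later use of the register never forces
  // a wait on unrelated counters.
  enum Counter { VM, EXP, LGKM, NumCounters };
  typedef std::array<unsigned, NumCounters> Counters;

  Counters countersForDef(const Counters &IssuedSoFar, bool BumpsVM,
                          bool BumpsEXP, bool BumpsLGKM) {
    Counters AtDef = {{0, 0, 0}};
    if (BumpsVM)
      AtDef[VM] = IssuedSoFar[VM];
    if (BumpsEXP)
      AtDef[EXP] = IssuedSoFar[EXP];
    if (BumpsLGKM)
      AtDef[LGKM] = IssuedSoFar[LGKM];
    return AtDef; // before the patch: all three slots were copied unconditionally
  }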
Hal Finkel [Fri, 21 Aug 2015 21:34:24 +0000 (21:34 +0000)]
[PowerPC] PPCVSXFMAMutate should not segfault on undef input registers
When PPCVSXFMAMutate would look at the input addend register, it would get its
input value number. This would fail, however, if the register was undef,
causing a segfault. Don't segfault (just skip such FMA instructions).
Fixes the test case from PR24542 (although that may have been over-reduced).
Alex Lorenz [Fri, 21 Aug 2015 21:32:39 +0000 (21:32 +0000)]
AsmParser: Save and restore the parsing state for types using SlotMapping.
This commit extends the 'SlotMapping' structure and includes mappings for named
and numbered types in it. The LLParser is extended accordingly to fill out
those mappings at the end of module parsing.
This information is useful when we want to parse standalone constant values
at a later stage using the 'parseConstantValue' method. The constant values
can be constant expressions, which can contain references to types. In order
to parse such constant values, we have to restore the internal named and
numbered mappings for the types in LLParser, otherwise the parser will report
a parsing error. Therefore, this commit also introduces a new method called
'restoreParsingState' to LLParser, which uses the slot mappings to restore
some of its internal parsing state.
This commit is required to serialize constant value pointers in the machine
memory operands for the MIR format.
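The shape of the extension, with assumed field names (the real struct also carries the existing value and metadata mappings):

  #include "llvm/ADT/StringMap.h"
  #include "llvm/IR/Type.h"
  #include <map>

  // The slot mapping now remembers both named and numbered types, so a later
  // standalone parse of a constant expression can resolve %name / %N type
  // references without re-parsing the module.
  struct SlotMappingSketch {
    llvm::StringMap<llvm::Type *> NamedTypes;  // %name = type { ... }
    std::map<unsigned, llvm::Type *> Types;    // %0 = type { ... }
  };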
Alex Lorenz [Fri, 21 Aug 2015 21:12:44 +0000 (21:12 +0000)]
MIR Serialization: Print MCSymbol operands.
This commit allows the MIR printer to print the MCSymbol machine operands.
Unfortunately they can't be parsed at this time. I will create a bug that will
track the fact that the MCSymbol operands can't be parsed yet.
Sanjay Patel [Fri, 21 Aug 2015 20:17:26 +0000 (20:17 +0000)]
[x86] invert logic for attribute 'FeatureFastUAMem'
This is a 'no functional change intended' patch. It removes one FIXME, but adds several more.
Motivation: the FeatureFastUAMem attribute may be too general. It is used to determine whether
misaligned memory accesses of any size under 32 bytes are 'fast'. From the added FIXME comments,
however, you can see that we're not consistent about this. Changing the name of the attribute
makes it clearer to see the logic holes.
Changing this to a 'slow' attribute also means we don't have to add an explicit 'fast' attribute
to new chips; fast unaligned accesses have been standard for several generations of CPUs now.
David Blaikie [Fri, 21 Aug 2015 20:16:51 +0000 (20:16 +0000)]
[opaque pointer type]: Pass explicit pointee type when building a constant GEP.
Gets a bit tricky in the ValueMapper, of course - not sure if we should
just expose a list of explicit types for each Value so that the
ValueMapper can be neutral to these special cases (it's OK for things
like load, where the explicit type is the result type - but when that's
not the case, it means plumbing through another "special" type... )
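For reference, the call-site shape this series moves toward (the overload with an explicit pointee type; illustrative variable names):

  #include "llvm/ADT/ArrayRef.h"
  #include "llvm/IR/Constants.h"
  using namespace llvm;

  // Build a constant GEP with an explicit pointee type rather than deriving it
  // from the (eventually opaque) pointer operand's type.
  Constant *buildConstantGEP(Type *PointeeTy, Constant *Base,
                             ArrayRef<Constant *> Indices) {
    return ConstantExpr::getGetElementPtr(PointeeTy, Base, Indices);
  }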