Owen Anderson [Tue, 10 Mar 2015 05:58:21 +0000 (05:58 +0000)]
Fix an issue in the verifier where we could try to read information out of a malformed statepoint intrinsic.
In this situation we would already have flagged an error on the statepoint intrinsic itself,
but we then carried on to verify other, related GC intrinsics and could end up crashing
when they tried to access data from the malformed statepoint.
Owen Anderson [Tue, 10 Mar 2015 05:13:47 +0000 (05:13 +0000)]
Fix an infinite loop in InstCombine when an instruction with no users but with side effects can be constant folded.
ReplaceInstUsesWith needs to return nullptr when the input has no users,
because in that case it does not mutate the program. Otherwise, we can
get stuck in an infinite loop of repeatedly attempting to constant fold
an instruction with no users.
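As an illustration of the intended contract (a minimal sketch with a hypothetical free
function, not the actual InstCombine code, which also maintains a worklist):

  #include "llvm/IR/Instruction.h"
  #include "llvm/IR/Value.h"

  // Return the mutated instruction, or nullptr if nothing changed.
  static llvm::Instruction *replaceInstUsesWith(llvm::Instruction &I,
                                                llvm::Value *V) {
    if (I.use_empty())
      return nullptr;          // no users: the program is untouched, so don't
                               // claim progress and re-queue I indefinitely
    I.replaceAllUsesWith(V);   // every user now refers to V
    return &I;                 // signal that the program was mutated
  }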
The last commit fixed the handling of hash collisions, but it introduced
unneeded bucket terminators in some places. The generated table was
correct; it can just be a tiny bit smaller. As the previous table was
correct, the test doesn't need updating. If we really wanted to test
this, I could add the section size to the dwarf dump and test for a
precise value there. IMO the correctness test is sufficient.
CFLAA didn't know how to properly handle ConstantExprs; it would silently
ignore them. This was a problem if the ConstantExpr is, say, a GEP of a global,
because CFLAA wouldn't realize that there's a global there. :)
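For illustration, IR of roughly this shape (made-up names, typed-pointer syntax) exhibits
the problem: the pointer operand is a ConstantExpr rather than an instruction, so silently
skipping ConstantExprs means never noticing @g at all.

  @g = global [4 x i32] zeroinitializer

  define i32 @f() {
    ; the load's pointer operand is a ConstantExpr GEP of the global @g
    %v = load i32, i32* getelementptr inbounds ([4 x i32], [4 x i32]* @g, i64 0, i64 2)
    ret i32 %v
  }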
Mehdi Amini [Tue, 10 Mar 2015 02:37:25 +0000 (02:37 +0000)]
DataLayout is mandatory, update the API to reflect it with references.
Summary:
Now that the DataLayout is a mandatory part of the module, let's start
cleaning the codebase. This patch is a first attempt at doing that.
This patch is not exactly NFC; for instance, some places were passing
a nullptr instead of the DataLayout, possibly just because there was a
default value for the DataLayout argument to many functions in the API.
Even though it is not purely NFC, there is no change in the
validation.
I turned as many DataLayout pointers into references as possible; this
helped in figuring out all the places where a nullptr could come up.
I initially had a local version of this patch broken into over 30
independent commits, but some of the later commits were cleaning up the
API and touching parts of the code modified in the previous commits, so
it seemed cleaner to go without the intermediate states.
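The shape of the change, on a made-up signature (not one from the patch):

  // Before: a nullptr DataLayout could sneak in via the default argument.
  unsigned getTypeAlign(Type *Ty, const DataLayout *DL = nullptr);

  // After: the caller must provide a DataLayout.
  unsigned getTypeAlign(Type *Ty, const DataLayout &DL);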
Frederic Riss [Tue, 10 Mar 2015 00:46:31 +0000 (00:46 +0000)]
DwarfAccelTable: Fix handling of hash collisions.
It turns out accelerator tables were totally broken if they contained
entries with colliding hashes. The failure mode is pretty bad: it not
only impacted the colliding entries, but would basically make all the
entries after the first hash collision point to the wrong place.
The testcase uses the symbol names that were found to collide during a
clang build.
From a performance point of view, the patch adds a sort and a linear
walk over each bucket contents. While it has a measurable impact on the
accelerator table emission, it's not showing up significantly in clang
profiles (and I'd argue that correctness is priceless :-)).
Eric Christopher [Tue, 10 Mar 2015 00:33:27 +0000 (00:33 +0000)]
Temporarily revert r231726 and r231724 as they're breaking the build:
Author: Lang Hames <lhames@gmail.com>
Date: Mon Mar 9 23:51:09 2015 +0000
[Orc][MCJIT][RuntimeDyld] Add header that was accidentally left out of r231724.
Author: Lang Hames <lhames@gmail.com>
Date: Mon Mar 9 23:44:13 2015 +0000
[Orc][MCJIT][RuntimeDyld] Add symbol flags to symbols in RuntimeDyld. Thread the
new types through MCJIT and Orc.
In particular, add a 'weak' flag. When plumbed through RTDyldMemoryManager, this
will allow us to distinguish between weak and strong definitions and find the
right ones during symbol resolution.
Lang Hames [Mon, 9 Mar 2015 23:44:13 +0000 (23:44 +0000)]
[Orc][MCJIT][RuntimeDyld] Add symbol flags to symbols in RuntimeDyld. Thread the
new types through MCJIT and Orc.
In particular, add a 'weak' flag. When plumbed through RTDyldMemoryManager, this
will allow us to distinguish between weak and strong definitions and find the
right ones during symbol resolution.
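Roughly the kind of information being threaded through (a hypothetical sketch; the names
are not the ones from the patch):

  #include <cstdint>

  enum class SymbolFlags : uint8_t {
    None = 0,
    Weak = 1u << 0   // weak definition: a strong definition elsewhere wins
  };

  struct SymbolInfo {
    uint64_t Address;   // resolved address of the symbol
    SymbolFlags Flags;  // consulted during symbol resolution
  };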
Ahmed Bougacha [Mon, 9 Mar 2015 22:51:05 +0000 (22:51 +0000)]
[CodeGen] Replace the reused stores' chain for extractelt expansion.
This fixes a subtle issue that was introduced in r205153.
When reusing a store for the extractelement expansion (to load directly
from it, instead of going through the stack), later stores to the
same location might have overwritten the data we were expecting to
extract.
To fix that, we need to explicitly replace the chain going out of the
reused store, so that later stores also have an explicit dependency on
the generated element-extracting loads, and can't clobber them.
Fix the double-deletion of AnalysisResolver when delegating through to
Dwarf EH preparation by creating one from scratch. Hopefully the new
pass manager simplifies this.
Sanjoy Das [Mon, 9 Mar 2015 21:43:43 +0000 (21:43 +0000)]
[SCEV] Unify getUnsignedRange and getSignedRange
Summary:
This removes some duplicated code, and also helps optimization: e.g. in
the test case added, `%idx ULT 128` in `@x` is not currently optimized
to `true` by `-indvars` but will be, after this change.
The only functional change in this commit is that for add recurrences,
ScalarEvolution::getRange will be more aggressive -- computing the
unsigned (resp. signed) range for a SCEVAddRecExpr will now look at the
NSW (resp. NUW) bits and check for signed (resp. unsigned) overflow.
This can be a strict improvement in some cases (such as the attached
test case), and should be no worse in other cases.
Frederic Riss [Mon, 9 Mar 2015 21:09:50 +0000 (21:09 +0000)]
DwarfAccelTable: fix obvious typo.
I have a test for that issue, but I didn't include it in the commit as it's
a 200KB file for a pretty minor issue. (The reason the file is so big is
that it needs > 1024 variables/functions to trigger, and that with debug
information.)
The issue/fix, on the other hand, is totally trivial. If people want the test
committed, I can do that. It just didn't seem worth it to me.
Reid Kleckner [Mon, 9 Mar 2015 20:23:14 +0000 (20:23 +0000)]
TableGen: Use 'enum : uint64_t' for feature flags to fix -Wmicrosoft
clang-cl would warn that this value is not representable in 'int':
enum { FeatureX = 1ULL << 31 };
All MS enums are 'ints' unless otherwise specified, so we have to use an
explicit type. The AMDGPU target just hit 32 features, triggering this
warning.
Now that we have C++11 strong enum types, we can also eliminate the
'const uint64_t' codepath from tablegen and just use 'enum : uint64_t'.
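The generated tables now look roughly like this (the feature names are made up):

  enum : uint64_t {
    FeatureA = 1ULL << 31,   // no longer truncated to 'int' by MSVC
    FeatureB = 1ULL << 32    // representable now that we can exceed 32 bits
  };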
In the case where just the tables are part of the function section, this produces
more readable assembly by avoiding a switch to the EH section and back
to .text.
Switching back and forth would also break with non-unique section names, as trying
to switch to a unique section actually creates a new one.
Owen Anderson [Mon, 9 Mar 2015 07:13:42 +0000 (07:13 +0000)]
Fix a bug in the LLParser where we failed to diagnose landingpads with non-constant clause operands.
Fixing this also exposed a related issue where the landingpad under construction was not
cleaned up when an error was raised, which would cause bad reference errors before the
error could actually be printed.
Kevin Qin [Mon, 9 Mar 2015 06:14:28 +0000 (06:14 +0000)]
[AArch64] Enable partial & runtime unrolling on cortex-a57
The inner loop of a nest is more likely to be hot, and the runtime
check can be promoted out by patch 0001, so the overhead is lower and
we can try a doubled threshold to unroll more loops.
Kevin Qin [Mon, 9 Mar 2015 06:14:18 +0000 (06:14 +0000)]
Introduce runtime unrolling disable metadata and use it to mark the scalar loop produced by vectorization.
Runtime unrolling is an expensive optimization which can bring a benefit
only if the loop is hot and the iteration count is relatively large.
For some loops, we know they are not worth runtime unrolling.
The scalar loop produced by vectorization is one of those cases.
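For illustration, the scalar loop's backedge ends up carrying loop metadata along these
lines (illustrative IR fragment, not the vectorizer's exact output):

  scalar.body:                                   ; the scalar remainder loop
    %i = phi i64 [ 0, %entry ], [ %i.next, %scalar.body ]
    %i.next = add i64 %i, 1
    %done = icmp eq i64 %i.next, %n
    br i1 %done, label %exit, label %scalar.body, !llvm.loop !0

  !0 = distinct !{!0, !1}
  !1 = !{!"llvm.loop.unroll.runtime.disable"}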
Kevin Qin [Mon, 9 Mar 2015 06:14:07 +0000 (06:14 +0000)]
Run LICM pass after loop unrolling pass.
Runtime unrolling will introduce a runtime check in the loop prologue.
If the unrolled loop is an inner loop, then the prologue will be inside
the outer loop. The LICM pass can help promote the runtime check out if
the checked value is loop invariant.
Mehdi Amini [Mon, 9 Mar 2015 03:20:25 +0000 (03:20 +0000)]
InstCombine: fix fold "fcmp x, undef" to account for NaN
Summary:
See the two test cases.
; Can fold fcmp with undef on one side by choosing NaN for the undef
; Can fold fcmp with undef on both sides
; fcmp u_pred undef, undef -> true
; fcmp o_pred undef, undef -> false
; because whatever you choose for the first undef
; you can choose NaN for the other undef
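Concretely, for the one-sided case (illustrative types, not taken from the tests):
; fcmp ueq double %x, undef -> true   (choose NaN for undef: unordered holds)
; fcmp oeq double %x, undef -> false  (NaN makes any ordered predicate fail)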
David Majnemer [Sat, 7 Mar 2015 21:47:46 +0000 (21:47 +0000)]
Fix the autoconf build
lib/ExecutionEngine/Targets has no Makefile, causing the autoconf build
to fail. Solve this by bringing the COFF implementation of RuntimeDyld
in line with the Mach-O and ELF implementations.
Benjamin Kramer [Sat, 7 Mar 2015 17:41:00 +0000 (17:41 +0000)]
Make constant arrays that are passed to functions const.
In theory this allows the compiler to skip materializing the array on
the stack. In practice clang often fails to do that, but that's a
different story. NFC.
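The pattern being changed, on a made-up example (the names are hypothetical):

  void emit(const unsigned *Table, unsigned Size);

  void caller() {
    // 'const' lets the compiler keep the table in read-only storage
    // instead of materializing it on the stack on every call.
    const unsigned Costs[] = {1, 2, 4, 8};
    emit(Costs, 4);
  }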
This patch fixes the logic in the DAGCombiner that folds an AND node according
to the rule: (and (X (load V)), C) -> (X (load V))
An AND between a vector load 'X' and a constant build_vector 'C' can be folded
into the load itself only if we can prove that the AND operation is redundant.
The algorithm implemented by 'visitAND' firstly computes the splat value 'S'
from C, and then checks if S has the lower 'B' bits set (where B is the size in
bits of the vector element type). The algorithm also takes into account the
'undef' bits in the splat mask.
Unfortunately, the algorithm only worked under the assumption that the size of S
is a multiple of the size of the vector element type. With this patch, we conservatively
avoid folding the AND if the splat bits are not compatible with the vector
element type.
Teach the LLVM CMake build how to explicitly use libc++abi when using
libc++. This lets me almost self-host on Linux with libc++ and libc++abi
very simply.
Currently, MCJIT and OrcJIT are failing due to uncaught exceptions, and
the Go binding tests are failing to build due to not linking in the
correct C++ standard library.
[PM] Create a separate library for high-level pass management code.
This will provide the analogous replacements for the PassManagerBuilder
and other code long term. This code is extracted from the opt tool
currently, and I plan to extend it as I build up support for using the
new pass manager in Clang and other places.
Mailing this out for review in part to let folks comment on the terrible names
here. A brief word about why I chose the names I did.
The library is called "Passes" to try and make it clear that it is a high-level
utility and where *all* of the passes come together and are registered in
a common library. I didn't want it to be *limited* to a registry though; the
registry is just one component.
The class is a "PassBuilder" but this name I'm less happy with. It doesn't
build passes in any traditional sense and isn't a Builder-style API at all. The
class is a PassRegisterer or PassAdder, but neither of those really make a lot
of sense. This class is responsible for constructing passes for registry in an
analysis manager or for population of a pass pipeline. If anyone has a better
name, I would love to hear it. The other candidate I looked at was
PassRegistrar, but that doesn't really fit either. There is no register of all
the passes in use, and so I think continuing the "registry" analogy outside of
the registry of pass *names* and *types* is a mistake. The objects themselves
are just objects with the new pass manager.
Frederic Riss [Sat, 7 Mar 2015 01:25:09 +0000 (01:25 +0000)]
[dsymutil] Apply relocations to DIE data before cloning.
Doing this gets functions' low_pc and global variables' locations right
in the output debug info. It could also get right other attributes
that need to be relocated (in linker terms), but I don't know of any
besides the address attributes.
This doesn't fix up low_pc attributes in compile_unit, lexical_block
or inlined subroutine DIEs, nor does it get high_pc attributes
for functions right. That will come in a subsequent commit.
Recommit r231324 with a fix to the ARM execution domain code
to disable lane switching if we don't actually have the instruction
set we want to switch to. This models the earlier check above the
conditional for the pass.
The testcase is one that triggered the assert that's added as part of
the fix; it is used rather than adding a new testcase, as it
highlights the same problem.
Frederic Riss [Fri, 6 Mar 2015 23:22:53 +0000 (23:22 +0000)]
[dsymutil] Support cloning DIE reference attributes.
Reference attributes are mainly handled by just creating DIEEntry
attributes for them. There is a special case for DW_FORM_ref_addr
attributes though, because the DIEEntry code needs a DwarfDebug to
emit them (and we don't have one, as we do no CodeGen).
In that case, just use DIEInteger attributes with the right form.
Frederic Riss [Fri, 6 Mar 2015 23:22:50 +0000 (23:22 +0000)]
[dsymutil] Set linked unit start offset early. NFC.
The start offset of a linked unit is known before starting to clone
its DIEs. Handling DW_FORM_ref_addr attributes requires that this
offset is set while cloning the unit. Split CompileUnit::computeOffsets()
into setStartOffset() and computeNextUnitOffset() and call them
respectively before cloning the DIEs and right after.
[AArch64][LoadStoreOptimizer] Generate LDP + SXTW instead of LD[U]R + LD[U]RSW.
Teach the load store optimizer how to sign extend the result of a load pair when
it helps create more pairs.
The rationale is that loads are more expensive than sign extensions, so if we
can gather some into one instruction, this is better!
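For illustration, on two adjacent 32-bit loads where one result is needed sign-extended
to 64 bits (registers and offsets are made up):

  before:  ldr   w8, [x0]          after:  ldp   w8, w9, [x0]
           ldrsw x9, [x0, #4]              sxtw  x9, w9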