[x86] Tweak my update script to treat test case function names starting
with 'stress' as an indication that the specific output isn't interesting,
and to relax those tests to only check the last instruction (a ret).
I've updated the one test case that really uses this, naming the one
function 'stress_test', which was actually producing output we can
directly check.
With this, the script doesn't introduce noise when run over the v16 test
file.
Support: Re-implement dwarf::TagString() using a .def file, NFC
Also re-implements the `dwarf::Tag` enumeration. I've moved the mock
tags into the enumeration since there's no other way to do this. Really
they shouldn't be used at all (they're just a hack to identify
`MDNode`s, but we have a class hierarchy for that now).
`dwarf::TagString()` shouldn't stringify `DW_TAG_lo_user` or
`DW_TAG_hi_user`. These aren't actual tags; they're markers for the
edge of vendor-specific tag regions.
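For reference, the .def-file (X-macro) pattern this reimplementation relies
on looks roughly like the sketch below. In the real patch the HANDLE_DW_TAG
entries live in a shared .def file that is included both by the enumeration
and by TagString(); they are inlined here for brevity, and the two tag
values shown are only illustrative.

const char *TagString(unsigned Tag) {
  switch (Tag) {
#define HANDLE_DW_TAG(ID, NAME) case ID: return "DW_TAG_" #NAME;
  HANDLE_DW_TAG(0x0011, compile_unit)
  HANDLE_DW_TAG(0x0013, structure_type)
#undef HANDLE_DW_TAG
  default:
    // Unknown tags, and the DW_TAG_lo_user/DW_TAG_hi_user markers,
    // deliberately get no string.
    return nullptr;
  }
}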
Simon Pilgrim [Tue, 3 Feb 2015 20:09:18 +0000 (20:09 +0000)]
[X86][SSE] Added general integer shuffle matching for MOVQ instruction
This patch adds general shuffle pattern matching for the MOVQ zero-extend instruction (copy lower 64 bits, zero upper) for all 128-bit integer vectors. It is added as a fallback test in lowerVectorShuffleAsZeroOrAnyExtend.
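As a rough illustration of the pattern being matched (a simplified sketch,
not LLVM's actual lowering code, which also tracks undef and zeroable lanes
separately):

#include <cstdio>
#include <vector>

// Returns true if a 128-bit integer shuffle mask keeps the low 64 bits of
// the input in place and zeroes the high 64 bits, i.e. the MOVQ pattern.
// A zeroed lane is represented here by the sentinel -1 (a simplification).
static bool isMOVQShuffle(const std::vector<int> &Mask) {
  size_t NumElts = Mask.size();      // 2, 4, 8, or 16 lanes in a 128-bit vector
  for (size_t i = 0; i != NumElts; ++i) {
    if (i < NumElts / 2) {
      if (Mask[i] != (int)i)         // low half: identity copy from the source
        return false;
    } else if (Mask[i] != -1) {      // high half: must be zero
      return false;
    }
  }
  return true;
}

int main() {
  std::printf("%d\n", isMOVQShuffle({0, 1, -1, -1}));  // v4i32 MOVQ pattern: 1
  std::printf("%d\n", isMOVQShuffle({0, 3, -1, -1}));  // not MOVQ: 0
}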
Jingyue Wu [Tue, 3 Feb 2015 19:37:06 +0000 (19:37 +0000)]
Add straight-line strength reduction to LLVM
Summary:
Straight-line strength reduction (SLSR) is implemented in GCC but not yet in
LLVM. It has proven to effectively simplify statements derived from an unrolled
loop, and can potentially benefit many other cases too. For example,
LLVM unrolls
#pragma unroll
for (int i = 0; i < 3; ++i) {
sum += foo((b + i) * s);
}
into
sum += foo(b * s);
sum += foo((b + 1) * s);
sum += foo((b + 2) * s);
However, no optimizations yet reduce the internal redundancy of the three
expressions:
b * s
(b + 1) * s
(b + 2) * s
With SLSR, LLVM can optimize these three expressions into:
t1 = b * s
t2 = t1 + s
t3 = t2 + s
This commit is only an initial step towards implementing a series of such
optimizations. I will implement more (see TODO in the file commentary) in the
near future. This optimization is enabled for the NVPTX backend for now.
However, I am more than happy to push it to the standard optimization pipeline
after more thorough performance tests.
Test Plan: test/StraightLineStrengthReduce/slsr.ll
Sanjay Patel [Tue, 3 Feb 2015 18:54:00 +0000 (18:54 +0000)]
Merge consecutive 16-byte loads into one 32-byte load (PR22329)
This patch detects consecutive vector loads using the existing
EltsFromConsecutiveLoads() logic. This fixes:
http://llvm.org/bugs/show_bug.cgi?id=22329
This patch effectively reverts the tablegen additions of D6492 /
http://reviews.llvm.org/rL224344 ...which in hindsight were a horrible hack.
The test cases that were added with that patch are simply modified to load
from varying offsets of a base pointer. These loads did not match the existing
tablegen patterns.
A happy side effect of doing this optimization earlier is that we can now fold
the load into a math op where possible; this is shown in some of the updated
checks in the test file.
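A hedged illustration of the kind of code this helps (my own example,
assuming AVX is enabled; the patch's actual tests operate on IR loads at
varying offsets of a base pointer): two adjacent 16-byte loads combined into
one 256-bit value can now become a single 32-byte load, possibly folded into
the following math op.

#include <immintrin.h>

// Compile with -mavx. 'p' points at 8 consecutive floats; the two 16-byte
// loads below are candidates for merging into one 32-byte (ymm) load.
__m256 loadAndAdd(const float *p, __m256 acc) {
  __m128 lo = _mm_loadu_ps(p);       // bytes 0..15
  __m128 hi = _mm_loadu_ps(p + 4);   // bytes 16..31
  __m256 v  = _mm256_insertf128_ps(_mm256_castps128_ps256(lo), hi, 1);
  return _mm256_add_ps(acc, v);      // the merged load may fold into vaddps
}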
Manman Ren [Tue, 3 Feb 2015 18:39:15 +0000 (18:39 +0000)]
[LTO API] split lto_codegen_compile to lto_codegen_optimize and
lto_codegen_compile_optimized. Also add lto_api_version.
Before this commit, we could only dump the optimized bitcode after running
lto_codegen_compile, but that bitcode already includes the effects of some
codegen passes (one example is the StackProtector pass). We would then get an
assertion failure when running llc on the optimized bitcode, because
StackProtector is effectively run twice.
After splitting lto_codegen_compile, the linker can choose to dump the bitcode
before running lto_codegen_compile_optimized.
lto_api_version is added so ld64 can check for runtime-availability of the new
API.
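A hedged sketch of how a linker might use the split entry points, using the
C LTO API from llvm-c/lto.h; the version-number threshold and the use of
lto_codegen_write_merged_modules for the dump are my assumptions, not
something spelled out by this commit.

#include <llvm-c/lto.h>
#include <cstdio>

// 'cg' is assumed to be a fully set up code generator with modules added.
static void compileWithDump(lto_code_gen_t cg, const char *BitcodePath) {
  if (lto_api_version() >= 12) {           // threshold assumed for illustration
    lto_codegen_optimize(cg);              // IR-level optimizations only
    // Dump the optimized bitcode before any codegen passes have run.
    lto_codegen_write_merged_modules(cg, BitcodePath);
    size_t Len = 0;
    if (!lto_codegen_compile_optimized(cg, &Len))
      std::fprintf(stderr, "LTO codegen failed\n");
  } else {
    size_t Len = 0;
    lto_codegen_compile(cg, &Len);         // old combined entry point
  }
}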
Hans Wennborg [Tue, 3 Feb 2015 18:31:29 +0000 (18:31 +0000)]
Fix ProgramFiles path for 64-bit Windows installer
If we are building a 64-bit installer on Windows, we have to adjust the
Program Files path; otherwise it uses the wrong 'Program Files (x86)'
directory. Related CMake bug report:
http://public.kitware.com/Bug/view.php?id=14211
Marek Olsak [Tue, 3 Feb 2015 17:38:12 +0000 (17:38 +0000)]
R600/SI: Don't generate non-existent LSHL, LSHR, ASHR B32 variants on VI
This can happen when a REV instruction is commuted.
The trick is not to define the _vi versions of instructions, which has these
consequences:
- code generation will always fail if a pseudo cannot be lowered
(very useful to catch bugs where an unsupported instruction somehow makes
it to the printer)
- ability to query if a pseudo can be lowered, which is done in commuteOpcode
to prevent REV from commuting to non-REV on VI
Tested-by: Michel Dänzer <michel.daenzer@amd.com>
Marek Olsak [Tue, 3 Feb 2015 17:37:57 +0000 (17:37 +0000)]
R600/SI: Determine target-specific encoding of READLANE and WRITELANE early v2
These are VOP2 on SI and VOP3 on VI, and their pseudos are neither, which can
be a problem. In order to make isVOP2 and isVOP3 queries behave as expected,
the encoding must be determined first.
This doesn't fix any known issue, but better safe than sorry.
v2: add and use getMCOpcodeFromPseudo
Tested-by: Michel Dänzer <michel.daenzer@amd.com>
Sanjay Patel [Tue, 3 Feb 2015 17:13:04 +0000 (17:13 +0000)]
Fix program crashes due to alignment exceptions generated for SSE memop instructions (PR22371).
r224330 introduced a bug by misinterpreting the "FeatureVectorUAMem" bit.
The commit log says that change did not affect anything, but that's not correct.
That change allowed SSE instructions to have unaligned mem operands folded into
math ops, and that's not allowed in the default specification for any SSE variant.
The bug is exposed when compiling for an AVX-capable CPU that has this feature
flag set but without enabling AVX codegen. Another mistake in r224330 was not adding
the feature flag to all AVX CPUs; the AMD chips were excluded.
This is part of the fix for PR22371 ( http://llvm.org/bugs/show_bug.cgi?id=22371 ).
This feature bit is SSE-specific, so I've renamed it to "FeatureSSEUnalignedMem".
Changed the existing test case for the feature bit to reflect the new name and
renamed the test file itself to better reflect the feature.
Added runs to fold-vex.ll to check for the failing codegen.
Note that the feature bit is not set by default on any CPU because it may require a
configuration register setting to enable the enhanced unaligned behavior.
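As a hedged illustration of what the default specification requires (my own
example, not one of the patch's test cases): an unaligned SSE load must stay
a separate movups, because folding it into the memory operand of a plain SSE
arithmetic instruction would fault on unaligned addresses.

#include <xmmintrin.h>

// p may be unaligned. Without FeatureSSEUnalignedMem, the compiler must emit
//   movups (%rdi), %xmm1 ; addps %xmm1, %xmm0
// and must NOT fold the load into "addps (%rdi), %xmm0", which requires a
// 16-byte aligned memory operand under the default SSE rules.
__m128 addUnaligned(const float *p, __m128 acc) {
  __m128 v = _mm_loadu_ps(p);
  return _mm_add_ps(acc, v);
}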
Bill Schmidt [Tue, 3 Feb 2015 16:16:01 +0000 (16:16 +0000)]
[PowerPC] Yet another approach to __tls_get_addr
This patch is a third attempt to properly handle the local-dynamic and
global-dynamic TLS models.
In my original implementation, calls to __tls_get_addr were hidden
from view until the asm-printer phase, at which point the underlying
branch-and-link instruction was created with proper relocations. This
mostly worked well, but I used some repellent techniques to ensure
that the TLS_GET_ADDR nodes at the SD and MI levels correctly received
input from GPR3 and produced output into GPR3. This proved to work
badly in the presence of multiple TLS variable accesses, with the
copies to and from GPR3 being scheduled incorrectly and generally
creating havoc.
In r221703, I addressed that problem by representing the calls to
__tls_get_addr as true calls during instruction lowering. This had
the advantage of removing all of the bad hacks and relying on the
existing call machinery to properly glue the copies in place. It
looked like this was going to be the right way to go.
However, as a side effect of the recent discovery of problems with
linker optimizations for TLS, we discovered cases of suboptimal code
generation with this strategy. The problem comes when tls_get_addr is
called for the same address, and there is a resulting CSE
opportunity. It turns out that in such cases MachineCSE will common
the addis/addi instructions that set up the input value to
tls_get_addr, but will not common the calls themselves. MachineCSE
does not have any machinery to common idempotent calls. This is
perfectly sensible, since presumably this would be done at the IR
level, and introducing calls in the back end isn't commonplace. In
any case, we end up with two calls to __tls_get_addr when one would
suffice, and that isn't good.
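As a rough illustration of the kind of source that exposes the redundancy (a
hypothetical example, not one of this commit's test cases): under the
general-dynamic model, each access below needs the variable's address from
__tls_get_addr, and one call per function should suffice.

// Built with -fPIC so the general-dynamic TLS model is used.
extern __thread int tls_counter;
void use(int);

void touchTwice(int flag) {
  use(tls_counter);        // first access: address comes from __tls_get_addr
  if (flag)
    use(tls_counter + 1);  // second access, in another block: MachineCSE
                           // should now common the second call away
}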
I presumed that the original design would have allowed commoning of
the machine-specific nodes that hid the __tls_get_addr calls, so as
suggested by Ulrich Weigand, I went back to that design and cleaned it
up so that the copies were properly held together by glue
nodes. However, it turned out that this didn't work either...the
presence of copies to physical registers kept the machine-specific
nodes from being commoned also.
All of which leads to the design presented here. This is a return to
the original design, except that no attempt is made to introduce
copies to and from GPR3 during instruction lowering. Virtual registers
are used until prior to register allocation. At that point, a special
pass is run that identifies the machine-specific nodes that hide the
tls_get_addr calls and introduces the copies to and from GPR3 around
them. The register allocator then coalesces these copies away. With
this design, MachineCSE succeeds in commoning tls_get_addr calls where
possible, and we get nice optimal code generation (better than GCC at
the moment, which does not common these calls).
One additional problem must be dealt with: After introducing the
mentions of the physical register GPR3, the aggressive anti-dependence
breaker sees opportunities to improve scheduling by selecting a
different register instead. Flags must be used on the instruction
descriptions to tell the anti-dependence breaker to keep its hands in
its pockets.
One thing missing from the original design was recording a definition
of the link register on the GET_TLS_ADDR nodes. Doing this was found
to be insufficient to force a stack frame to be created, which led to
looping behavior because two different LR values were stored at the
same address. This appears to have been an oversight in
PPCFrameLowering::determineFrameLayout(), which is repaired here.
Because MustSaveLR() returns true for calls to builtin_return_address,
this changed the expected behavior of
test/CodeGen/PowerPC/retaddr2.ll, which now stacks a frame but
formerly did not. I've fixed the test case to reflect this.
There are existing TLS tests to catch regressions; the checks in
test/CodeGen/PowerPC/tls-store2.ll proved to be too restrictive in the
face of instruction scheduling with these changes, so I fixed that
up.
I've added a new test case based on the PrettyStackTrace module that
demonstrated the original problem. This checks that we get correct
code generation and that CSE of the calls to __tls_get_addr has taken
place.
Sanjay Patel [Tue, 3 Feb 2015 15:37:18 +0000 (15:37 +0000)]
Improve test to actually check for a folded load.
This test was checking for lack of a "movaps" (an aligned load)
rather than a "movups" (an unaligned load). It also included
a store which complicated the checking.
Add specific CPU runs to prevent subtarget feature flag overrides
from inhibiting this optimization.
Renato Golin [Tue, 3 Feb 2015 11:20:45 +0000 (11:20 +0000)]
Adding AArch64 support to ASan instrumentation
For the time being, it is still hardcoded to support only the 39-bit VA
variant. I plan to work on supporting the 42-bit and 48-bit VA variants, but
I don't have access to such hardware at the moment.
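For context, ASan's 64-bit shadow mapping divides the address by 8 and adds
a platform-specific offset; the offset value below is what I believe the
39-bit VA AArch64 variant uses and should be read as an assumption rather
than a fact stated by this commit.

#include <cstdint>

// Assumed shadow offset for the AArch64 39-bit VA configuration.
constexpr uint64_t kAArch64ShadowOffset39 = 1ULL << 36;

// Standard ASan mapping: one shadow byte covers eight application bytes.
inline uint64_t shadowAddress(uint64_t Addr) {
  return (Addr >> 3) + kAArch64ShadowOffset39;
}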
Craig Topper [Tue, 3 Feb 2015 11:03:43 +0000 (11:03 +0000)]
[X86] Add Requires[In64BitMode] around MOVSX64rr32/MOVSX64rm32. This makes them more strictly mutually exclusive with the ARPL instruction in 32-bit mode. Helps with some disassembler changes I'm experimenting with. Should be NFC.
Lang Hames [Tue, 3 Feb 2015 06:14:06 +0000 (06:14 +0000)]
[PBQP Regalloc] Pre-spill vregs that have no legal physregs.
The PBQP::RegAlloc::MatrixMetadata class assumes that matrices have at least two
rows/columns (for the spill option plus at least one physreg). This patch
ensures that that invariant is met by pre-spilling vregs that have no physreg
options so that no node (and no corresponding edges) need be added to the PBQP
graph.
This fixes a bug in an out-of-tree target that was identified by Jonas Paulsson.
Thanks for tracking this down Jonas!
Justin Bogner [Tue, 3 Feb 2015 00:20:11 +0000 (00:20 +0000)]
InstrProf: Simplify RawCoverageMappingReader's API slightly
This is still kind of a weird API, but dropping the (partial) update
of the passed-in CoverageMappingRecord makes it a little easier to
understand and use.
Remove unused class variables and update all callers/uses in
the HexagonSplitTFRCondSet pass. Use the subtarget off the machine
function at the same time.
Migrate the HexagonSplitConst32AndConst64 pass from the TargetMachine-based
getSubtarget to the subtarget cached on the MachineFunction.
Remove unused class variables and update all callers/uses.
LLVM ToT produces poor MMX code compared to 3.5. However, part of the previous
functionality can be achieved by using -x86-experimental-vector-widening-legalization.
Add tests to be sure we don't regress again.
IR: Allow GenericDebugNode construction from MDString
Allow `GenericDebugNode` construction directly from `MDString`, rather
than requiring `StringRef`s. I've refactored the `StringRef`
constructors to use these. There's no real functionality change here,
except for exposing the lower-level API.
The purpose of this is to simplify construction of string operands when
reading bitcode. It's unnecessarily indirect to parse an `MDString` ID,
look up the `MDString` in the bitcode reader's list, get the `StringRef`
out of that, and then have `GenericDebugNode::getImpl()` use
`MDString::get()` to acquire the original `MDString`. Instead, this
allows the bitcode reader to directly pass in the `MDString`.
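A hedged sketch of the two construction paths; the exact overload signature
below is my reading of the commit message, not verified against the headers.

#include "llvm/IR/DebugInfoMetadata.h"
using namespace llvm;

GenericDebugNode *makeNode(LLVMContext &Ctx, unsigned Tag, MDString *Header,
                           ArrayRef<Metadata *> DwarfOps) {
  // Old path: unwrap to a StringRef, only for getImpl() to re-acquire the
  // same MDString via MDString::get():
  //   return GenericDebugNode::get(Ctx, Tag, Header->getString(), DwarfOps);
  // New path: hand the MDString straight through.
  return GenericDebugNode::get(Ctx, Tag, Header, DwarfOps);
}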
Lang Hames [Mon, 2 Feb 2015 19:51:18 +0000 (19:51 +0000)]
[Orc] Make OrcMCJITReplacement::addObject calls transfer buffer ownership to the
ObjectLinkingLayer.
There are two overloads of addObject, one of which transfers ownership of
the underlying buffer to OrcMCJITReplacement. This commit makes the
ownership-transferring version pass ownership down to the ObjectLinkingLayer
in order to prevent the issue described in r227778.
I think this commit will fix the sanitizer bot failures that necessitated the
removal of the load-object-a.ll regression test in r227785, so I'm reinstating
that test.
Move debug-info-centred `Metadata` subclasses into their own
header/source file. A couple of private template functions are needed
from both `Metadata.cpp` and `DebugInfoMetadata.cpp`, so I've moved them
to `lib/IR/MetadataImpl.h`.
Adrian Prantl [Mon, 2 Feb 2015 18:31:58 +0000 (18:31 +0000)]
Debug Info: Relax assertion in isUnsignedDIType() to allow floats to be
described by integer constants. This is a bit ugly, but if the source
language allows arbitrary type casting, the debug info must follow suit.
For example:
void foo() {
float a;
*(int *)&a = 0;
}
For the curious: SROA replaces the float alloca with an i32 alloca, which
is then optimized away and described via dbg.value(i32 0, ...).
Tom Stellard [Mon, 2 Feb 2015 18:02:28 +0000 (18:02 +0000)]
R600/SI: 64-bit and larger memory access must be at least 4-byte aligned
This is true for SI only. CI+ supports unaligned memory accesses,
but this requires driver support, so for now we disallow unaligned
accesses for all GCN targets.