[PM/AA] Fix *numerous* serious bugs in GlobalsModRef found by
inspection.
While we want to handle calls specially in this code because they should
have been modeled by the call graph analysis that precedes it, we should
*not* be re-implementing the predicates for whether an instruction reads
or writes memory. Those are well defined already. Notably, at least the
following issues seem to be clearly missed before:
- Ordered atomic loads can "write" to memory by causing writes from other
threads to become visible. Similarly for ordered atomic stores.
- AtomicRMW instructions quite obviously both read and write to memory.
- AtomicCmpXchg instructions also read and write to memory.
- Fences read and write to memory.
- Invokes of intrinsics or memory allocation functions were not handled.
I don't have any test cases, and I suspect this has never really come up
in the real world. But there is no reason why it wouldn't, and it makes
the code simpler to do this the right way.
While here, I've tried to make the loops significantly simpler as well
and added helpful comments as to what is going on.
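For illustration, a minimal sketch (not the actual GlobalsModRef code) of
leaning on the generic Instruction predicates, which already cover the atomic
and fence cases listed above:

  // Sketch only: assumes calls and invokes have already been peeled off and
  // handled via the call graph, as the pass does.
  #include "llvm/IR/Instruction.h"
  using namespace llvm;

  static void addInstructionEffects(const Instruction &I, bool &Reads,
                                    bool &Writes) {
    // These are the well-defined predicates referred to above; they return
    // true for ordered atomic loads/stores, atomicrmw, cmpxchg and fence,
    // which ad-hoc isa<LoadInst>/isa<StoreInst> checks would miss.
    Reads |= I.mayReadFromMemory();
    Writes |= I.mayWriteToMemory();
  }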
------------------------------------------------------------------------
The vec_sld interface provides access to the vsldoi instruction.
Unlike most of the vec_* interfaces, we do not attempt to change the
generated code for vec_sld based on the endian mode. It is too
difficult to correctly infer the desired semantics because of
different element types, and the corrected instruction sequence is
expensive, involving loading a permute control vector and performing a
generalized permute.
For GCC, this was handled by simply not touching the vec_sld
implementation. When it came time for the LLVM implementation, I did
the same thing. However, this was hasty and incorrect. In LLVM's
version of altivec.h, vec_sld was previously defined in terms of the
vec_perm interface. Because vec_perm semantics are adjusted for
little endian, this means that leaving vec_sld untouched causes it to
generate something different for LE than for BE. Not good.
This back-end patch accompanies the changes to altivec.h that change
vec_sld's behavior for little endian. Those changes mean that we see
slightly different code in the back end when trying to recognize a
VSLDOI instruction in isVSLDOIShuffleMask. In particular, a
ShuffleKind of 1 (where the two inputs are identical) must now be
treated the same way as a ShuffleKind of 2 (little endian with
different inputs) when little endian mode is in force. This is
because ShuffleKind of 1 is defined using big-endian numbering.
This has a ripple effect on LowerBUILD_VECTOR, where we create our own
internal VSLDOI instructions. Because these are a ShuffleKind of 1,
they will now have their shift amounts subtracted from 16 when
recognizing the shuffle mask. To avoid problems we have to subtract
them from 16 again before creating the VSLDOI instructions.
There are a couple of other uses of BuildVSLDOI, but these do not need
to be modified because the shift amount is 8, which is unchanged when
subtracted from 16.
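A rough sketch of the shift-amount adjustment described above (the helper
name is made up; the real change lives in the PPC lowering code):

  // ShuffleKind 1 masks are numbered big-endian, so on LE targets the
  // recognizer in isVSLDOIShuffleMask subtracts the amount from 16.
  // Pre-adjusting the amount when building our own VSLDOI nodes makes the
  // round trip an identity; an amount of 8 maps to itself.
  static unsigned adjustVSLDOIAmount(unsigned Amt, bool IsLittleEndian) {
    return IsLittleEndian ? (16 - Amt) & 15 : Amt;
  }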
I was looking at some vector code generation and kept seeing
unnecessary vector copies into the Altivec half of the VSX registers.
I discovered that we overlooked v4i32 when adding the register classes
for VSX; we only added v4f32 and v2f64. This means that anything that
canonicalizes into v4i32 (which is a LOT of stuff) ends up being
forced into VRRC on its way to VSRC.
The fix is one line. The rest of the patch is fixing up some test
cases whose code generation has changed as a result.
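The one-line fix is of this general shape (the exact file and surrounding
code are assumed, not taken from the patch):

  // In the PPC target lowering setup, next to the existing VSX registrations:
  if (Subtarget.hasVSX()) {
    addRegisterClass(MVT::v4f32, &PPC::VSRCRegClass);
    addRegisterClass(MVT::v2f64, &PPC::VSRCRegClass);
    addRegisterClass(MVT::v4i32, &PPC::VSRCRegClass); // previously missing
  }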
This seems like it would be a good candidate for backport to 3.7.
test-release.sh: Run both .o files through sed before comparing them
On some systems (e.g. Mac OS X), sed will add a newline to the end of
the output if there wasn't one already. This would cause spurious
cmp failures, since the .o file from Phase 2 was passed through sed and
the one from Phase 3 wasn't. Work around this by passing both through
sed.
------------------------------------------------------------------------
Switch the release script to build with CMake by default (PR21561)
It retains the ability to use the autoconf build via a
command-line option ('-use-autoconf'), and uses that by default on Darwin since
compiler-rt requires it on that platform.
This commit also removes the "Release-64" flavour and related logic. The script
would previously do two builds unless the '-no-64bit' flag was passed, but on
my machine, and for everyone I asked, this always ended up producing two
64-bit builds, causing much confusion.
It also removes the -build-triple option, which caused the --build= flag to
get passed to ./configure. This was presumably intended for cross-compiling,
but none of the release testers use it. If someone does want to pass it,
they can use '-configure-flags --build=foo' instead.
Follow-up to r235483, adding the corresponding support in PPC. We use a regular call
for symbolic targets (because they're much cheaper than indirect calls).
------------------------------------------------------------------------
Hal Finkel [Tue, 14 Jul 2015 22:26:06 +0000 (22:26 +0000)]
[PowerPC] Use the ABI indirect-call protocol for patchpoints
We used to take the address specified as the direct target of the patchpoint
and did no TOC-pointer handling. This, however, was not all that useful,
because MCJIT tends to create a lot of modules, and they have their own TOC
sections. Thus, to call from the generated code to other generated code, you
really need to switch TOC pointers. Make this work as expected, and under
ELFv1, treat the address as the function descriptor address so that the correct
TOC pointer can be loaded.
Rafael Espindola [Tue, 14 Jul 2015 22:18:43 +0000 (22:18 +0000)]
Add support for reading members out of thin archives.
For now the Archive owns the buffers of the thin archive members.
This makes for a simple API, but all the buffers are destructed
only when the archive is destructed. This should be fine since we
close the files after mmap so we should not hit an open file
limit.
Alex Lorenz [Tue, 14 Jul 2015 21:18:25 +0000 (21:18 +0000)]
MIR Printer: move the function 'printReg'. NFC.
This commit moves the function 'printReg' towards the start of the file so that
it can be used by the conversion methods in MIRPrinter and not just the printing
methods in MIPrinter.
Tim Northover [Tue, 14 Jul 2015 21:03:18 +0000 (21:03 +0000)]
GVN: tolerate an instruction being replaced without existing in the leaderboard
Sometimes an incidentally created instruction can duplicate a Value used
elsewhere. It then often doesn't end up in the leader table. If it's later
removed, we attempt to remove it from the leader table and segfault.
Instead we should just ignore the removal request, which won't cause any
problems. The reverse situation, where the original instruction is replaced by
the new one (which you might think could leave the leader table empty) cannot
occur, because the incidental instruction will never be found in the first
place.
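A simplified, map-based illustration of the tolerant-removal idea (GVN's real
leader table is a linked structure, so this is not the actual code):

  #include "llvm/ADT/DenseMap.h"
  #include "llvm/IR/Value.h"
  using namespace llvm;

  static void removeFromLeaderTable(DenseMap<uint32_t, Value *> &Leaders,
                                    uint32_t ValueNumber, Value *V) {
    auto It = Leaders.find(ValueNumber);
    // The incidentally created instruction may never have been inserted;
    // in that case there is nothing to remove, so just ignore the request.
    if (It == Leaders.end() || It->second != V)
      return;
    Leaders.erase(It);
  }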
[MMX] Use the appropriate instructions for GR64 <-> VR64 copies.
MOVSDto64rr and MOV64toSDrr are defined to convert between FR64 (%xmm)
<-> GR64 registers, not VR64 (%mm) <-> GR64, so using them here is wrong.
I found this by inspection and could not find a suitable testcase for it
since (1) we don't handle MMX bitcasts in the Peephole optimizer so as to
generate COPYs that (2) could be expanded back to the appropriate x86
instruction in ExpandPostRA.
Switch to use the appropriate instructions: MMX_MOVD64from64rr and
MMX_MOVD64to64rr here.
Hal Finkel [Tue, 14 Jul 2015 20:02:02 +0000 (20:02 +0000)]
[PowerPC] Fix the PPCInstrInfo::getInstrLatency implementation
PowerPC uses itineraries to describe processor pipelines (and dispatch-group
restrictions for P7/P8 cores). Unfortunately, the target-independent
implementation of TII.getInstrLatency calls ItinData->getStageLatency, and that
looks for the largest cycle count in the pipeline for any given instruction.
This, however, yields the wrong answer for the PPC itineraries, because we
don't encode the full pipeline. Because the functional units are fully
pipelined, we only model the initial stages (there are no relevant hazards in
the later stages to model), and so the technique employed by getStageLatency
does not really work. Instead, we should take the maximum output operand
latency, and that's what PPCInstrInfo::getInstrLatency now does.
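The gist of the new computation, roughly (names and structure paraphrased;
see PPCInstrInfo::getInstrLatency for the real code):

  #include "llvm/CodeGen/MachineInstr.h"
  #include "llvm/MC/MCInstrItineraries.h"
  #include <algorithm>
  using namespace llvm;

  // Take the largest def-operand cycle from the itinerary instead of the
  // full stage latency, which over-counts for partially modeled pipelines.
  static unsigned getMaxDefLatency(const InstrItineraryData *ItinData,
                                   const MachineInstr &MI) {
    unsigned Latency = 1;
    unsigned SchedClass = MI.getDesc().getSchedClass();
    for (unsigned i = 0, e = MI.getNumOperands(); i != e; ++i) {
      const MachineOperand &MO = MI.getOperand(i);
      if (!MO.isReg() || !MO.isDef() || MO.isImplicit())
        continue;
      // getOperandCycle returns -1 when the itinerary has no data.
      int Cycle = ItinData->getOperandCycle(SchedClass, i);
      if (Cycle >= 0)
        Latency = std::max(Latency, (unsigned)Cycle);
    }
    return Latency;
  }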
This caused some test-case churn, including two unfortunate side effects.
First, the new arrangement of copies we get from function parameters now
sometimes blocks VSX FMA mutation (a FIXME has been added to the code and the
test cases), and we have one significant test-suite regression:
In this benchmark we have a loop with a vectorized FP divide, and with the
new scheduling both divides end up in the same dispatch group (which in this
case seems to cause a problem, although why is not exactly clear). The grouping
structure is hard to predict from the bottom of the loop, and there may not be
much we can do to fix this.
Very few other test-suite performance effects were really significant, but
almost all weakly favor this change. However, in light of the issues
highlighted above, I've left the old behavior available via a
command-line flag.
Dan Liew [Tue, 14 Jul 2015 19:46:19 +0000 (19:46 +0000)]
Fix several issues with the test-release.sh script
* Use the default install prefix (/usr/local) and use DESTDIR instead to
set a temporary install location for tarballing. This is the correct
way to package binary releases (otherwise the temporary install path
ends up in files in the binary release).
* Remove ``-disable-clang`` option. It did not work correctly
(tarballing assumed phase 3 was run) and when doing a release
we should always be doing a three-phased build and test.
Note: Technically we should only be using DESTDIR for the third phase
and use --prefix for the first and second phase because we run the built
clang from phase 1 and 2 (and in general an application's behaviour
may depend on the install prefix). However, in the case of clang it does not
seem to care what the install prefix is, so to simplify the script we use
DESTDIR for all three stages.
[CodeGen] Force emission of personality directive if explicitly specified
Summary:
Before this change, personality directives were not emitted
if there was no invoke left in the function (of course until
recently this also meant that we couldn't know what
the personality actually was). This patch forces personality directives
to still be emitted, unless it is known to be a noop in the absence of
invokes, or the user explicitly specified `nounwind` (and not
`uwtable`) on the function.
Matt Arsenault [Tue, 14 Jul 2015 18:20:33 +0000 (18:20 +0000)]
AMDGPU: Avoid using 64-bit shift for i64 (shl x, 32)
This can be done with only moves, which theoretically
will optimize better later.
Although this transform increases the instruction count,
it should be code size / cycle count neutral in the worst
VALU case. It also seems to slightly improve a couple
of testcases due to other DAG combines this exposes.
This is probably slightly worse for the SALU case, so
it might be better to handle this during moveToVALU,
although then you lose some simplifications like
the load width reducing in the simple testcase.
Matt Arsenault [Tue, 14 Jul 2015 17:57:36 +0000 (17:57 +0000)]
AMDGPU/SI: Fix read2 merging into a super register.
If the read2 produced was supposed to be writing into a
super register, it would use the wrong subregister indices.
Fix this by inserting copies, so we only ever write to a vreg_64.
Run the register coalescer again to clean this up, although this
isn't ideal and often does result in an extra move.
Also remove the assert that offset1 > offset0.
There isn't a real reason to not allow this other than a minor
convenience in the compiler, and it doesn't seem worth the effort
of avoiding it.
We have detailed def/use lists for every physical register in
MachineRegisterInfo anyway, so there is little use in maintaining an
additional bitset of which ones are used.
Removing it frees us from extra bookkeeping. This simplifies
VirtRegMap.
RAGreedy: Keep track of allocated PhysRegs internally
Do not use MachineRegisterInfo::setPhysRegUsed()/isPhysRegUsed()
anymore. This bitset changes function-global state and is set by the
VirtRegRewriter anyway.
Simply use a bitvector private to RAGreedy.
Tim Northover [Tue, 14 Jul 2015 17:23:55 +0000 (17:23 +0000)]
ARM: add at least one real test for r242123.
The ones committed were orthogonal to the change and would have passed before
that revision. What it *did* do was prevent an assertion failure when
generating object files.
PrologEpilogInserter: Rewrite API to determine callee save registers.
This changes TargetFrameLowering::processFunctionBeforeCalleeSavedScan():
- Rename the function to determineCalleeSaves()
- Pass a bitset of callee saved registers by reference, thus avoiding
the function-global PhysRegUsed bitset in MachineRegisterInfo.
- Without PhysRegUsed the implementation is fine-tuned to not save
physical registers which are only read but never modified.
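The resulting hook looks roughly like this (signature paraphrased from the
description above; check TargetFrameLowering.h for the exact form):

  virtual void determineCalleeSaves(MachineFunction &MF, BitVector &SavedRegs,
                                    RegScavenger *RS = nullptr) const;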
results:
llvm[2]: Linking Release unit test Support (without symbols)
llvm[2]: ======= Finished Linking Release Unit test Support (without symbols)
make[3]: Entering directory '/build/llvm-svn/src/build/bindings/ocaml/llvm'
make[3]: *** No rule to make target '/build/llvm-svn/src/build/Release/lib/ocaml/libLLVM-3.7.0svn.so', needed by 'build-deplibs'. Stop.
make[3]: *** Waiting for unfinished jobs....
llvm[3]: Compiling llvm_ocaml.c for Release build
make[3]: Leaving directory '/build/llvm-svn/src/build/bindings/ocaml/llvm'
/build/llvm-svn/src/llvm/Makefile.rules:880: recipe for target 'all' failed
/build/llvm-svn/src/llvm/Makefile.rules:965: recipe for target 'all' failed
Caused regressions: compile Release+Asserts failed on clang-native-arm-cortex-a9
Revert "-Added API for retrieving the default FPU of a CPU from TargetParser."
Daniel Sanders [Tue, 14 Jul 2015 12:24:22 +0000 (12:24 +0000)]
[mips] Fix li/la differences between IAS and GAS.
Summary:
- Signed 16-bit should have priority over unsigned.
- For la, unsigned 16-bit must use ori+addu rather than directly use ori.
- Correct tests of 32-bit immediates against 64-bit predicates by
sign-extending the immediate beforehand. For example, isInt<16>(0xffff8000)
should then be true, and the expansion should use addiu (see the sketch below).
Also split li/la testing into separate files due to their size.
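A small illustration of the sign-extension point (using LLVM's MathExtras
helpers; not the actual AsmParser code):

  #include "llvm/Support/MathExtras.h"
  using namespace llvm;

  // 0xffff8000 interpreted as a 32-bit value is -32768; sign-extend it first
  // and the signed 16-bit predicate holds, so the expansion can use a single
  // addiu.
  static bool fitsSigned16() {
    int64_t Imm = SignExtend64<32>(0xffff8000);
    return isInt<16>(Imm); // true
  }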
David Majnemer [Tue, 14 Jul 2015 06:19:58 +0000 (06:19 +0000)]
[SROA] Don't de-atomic volatile loads and stores
Volatile loads and stores are made visible in global state regardless of
what memory is involved. It is not correct to disregard the ordering
and synchronization scope because it is possible to synchronize with
memory operations performed by hardware.
[CMake] Unbreak add_llvm_external_project when external projects are specified.
LLVM_EXTERNAL_*_SOURCE_DIR is reset as PATH with set(CACHE PATH).
Then the CACHE PATH variable, LLVM_EXTERNAL_*_SOURCE_DIR, is normalized as
${CMAKE_SOURCE_DIR}/${path_var} if ${path_var} is relative.
LegalizeDAG: Fix and improve FCOPYSIGN/FABS legalization
- Factor out code to query and modify the sign bit of a floating-point
value as an integer. This also works if none of the target's integer
types is big enough to hold all bits of the floating-point value.
- Legalize FABS(x) as FCOPYSIGN(x, 0.0) if FCOPYSIGN is available,
otherwise perform bit manipulation on the sign bit (see the sketch after
this list). The previous code used "x >u 0 ? x : -x", which is incorrect
for x being -0.0! It also takes 34 instructions on ARM Cortex-M4. With
this patch we only require 5:
vldr d0, LCPI0_0
vmov r2, r3, d0
lsrs r2, r3, #31
bfi r1, r2, #31, #1
bx lr
(This could be further improved if the compiler would recognize that
r2, r3 is zero).
- Only lower FCOPYSIGN(x, y) = sign(x) ? -FABS(x) : FABS(x) if FABS is
available; otherwise perform bit manipulation on the sign bit.
- Perform the sign(x) test by masking out the sign bit and comparing
with 0 rather than shifting the sign bit to the highest position and
testing for "<s 0". For x86 copysignl (on 80bit values) this gets us:
testl $32768, %eax
rather than:
shlq $48, %rax
sets %al
testb %al, %al
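Returning to the FABS point above, a minimal sketch of the FCOPYSIGN-based
lowering (simplified; assumes FCOPYSIGN is legal or custom-handled for the
type):

  #include "llvm/CodeGen/SelectionDAG.h"
  using namespace llvm;

  static SDValue lowerFABSViaCopysign(SDValue X, SelectionDAG &DAG,
                                      const SDLoc &DL, EVT VT) {
    // fabs(x) == copysign(x, +0.0); unlike "x >u 0 ? x : -x" this is also
    // correct for x == -0.0.
    SDValue Zero = DAG.getConstantFP(0.0, DL, VT);
    return DAG.getNode(ISD::FCOPYSIGN, DL, VT, X, Zero);
  }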
Andrew Wilkins [Tue, 14 Jul 2015 01:23:06 +0000 (01:23 +0000)]
Add capability to get and set the personality function from the C API
Summary:
The capability was lost with D10429, where the personality function was moved to the function level rather than the landing pad level. Now there is no way to get/set the personality function from the C API. That is a problem.
Note that the whole thing could be avoided by improving the C API testing, as started by D10725
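Assuming the new entry points follow the usual C API naming (check
llvm-c/Core.h for the exact declarations), usage looks like:

  #include <llvm-c/Core.h>

  // Copy the personality from one function to another via the C API.
  static void copyPersonality(LLVMValueRef From, LLVMValueRef To) {
    LLVMSetPersonalityFn(To, LLVMGetPersonalityFn(From));
  }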
Chris Bieneman [Tue, 14 Jul 2015 01:17:43 +0000 (01:17 +0000)]
[CMake] We shouldn't be storing values in the cache unless they actually need CMake cache behavior.
add_llvm_external_project puts LLVM_EXTERNAL_${nameUPPER}_SOURCE_DIR into the cache even if it is just the in-tree default path. This causes all sorts of oddness, and makes it so that I can't change the behavior of this variable.
This patch never puts LLVM_EXTERNAL_${nameUPPER}_SOURCE_DIR into the cache. It will only end up in the cache if it is specified on the command line, which is the correct behavior.
There is also a temporary change to remove non-default values from the cache if they are already present. This should have the impact of cleaning out unnecessary values from the caches on the buildbots and people's local build directories. This part of the change is marked with a TODO and can be removed in a few days.
Update enforceKnownAlignment after the isWeakForLinker semantic change
Previously we would refrain from attempting to increase the linkage of
available_externally globals because they were considered weak for the
linker. Now they are treated more like a declaration instead of a weak
definition.
This was causing SSE alignment faults in Chromium, when some code
assumed it could increase the alignment of a dllimported global that it
didn't control. http://crbug.com/509256
Bill Schmidt [Mon, 13 Jul 2015 22:58:19 +0000 (22:58 +0000)]
[PPC64LE] More improvements to VSX swap optimization
This patch allows VSX swap optimization to succeed more frequently.
Specifically, it is concerned with common code sequences that occur
when copying a scalar floating-point value to a vector register. This
patch currently handles cases where the floating-point value is
already in a register, but does not yet handle loads (such as via an
LXSDX scalar floating-point VSX load). That will be dealt with later.
A typical case is when a scalar value comes in as a floating-point
parameter. The value is copied into a virtual VSFRC register, and
then a sequence of SUBREG_TO_REG and/or COPY operations will convert
it to a full vector register of the class required by the context. If
this vector register is then used as part of a lane-permuted
computation, the original scalar value will be in the wrong lane. We
can fix this by adding a swap operation following any widening
SUBREG_TO_REG operation. Additional COPY operations may be needed
around the swap operation in order to keep register assignment happy,
but these are pro forma operations that will be removed by coalescing.
If a scalar value is otherwise directly referenced in a computation
(such as by one of the many XS* vector-scalar operations), we
currently disable swap optimization. These operations are
lane-sensitive by definition. A MentionsPartialVR flag is added for
use in each swap table entry that mentions a scalar floating-point
register without having special handling defined.
A common idiom for PPC64LE is to convert a double-precision scalar to
a vector by performing a splat operation. This ensures that the value
can be referenced as V[0], as it would be for big endian, whereas just
converting the scalar to a vector with a SUBREG_TO_REG operation
leaves this value only in V[1]. A doubleword splat operation is one
form of an XXPERMDI instruction, which takes one doubleword from a
first operand and another doubleword from a second operand, with a
two-bit selector operand indicating which doublewords are chosen. In
the general case, an XXPERMDI can be permitted in a lane-swapped
region provided that it is properly transformed to select the
corresponding swapped values. This transformation is to reverse the
order of the two input operands, and to reverse and complement the
bits of the selector operand (derivation left as an exercise to the
reader ;).
A new test case that exercises the scalar-to-vector and generalized
XXPERMDI transformations is added as CodeGen/PowerPC/swaps-le-5.ll.
The patch also requires a change to CodeGen/PowerPC/swaps-le-3.ll to
use CHECK-DAG instead of CHECK for two independent instructions that
now appear in reverse order.
There are two small unrelated changes that are added with this patch.
First, the XXSLDWI instruction was incorrectly omitted from the list
of lane-sensitive instructions; this is now fixed. Second, I observed
that the same webs were being rejected over and over again for
different reasons. Since it's sufficient to reject a web only once, I
added a check for this to speed up the compilation time slightly.
Pete Cooper [Mon, 13 Jul 2015 21:50:35 +0000 (21:50 +0000)]
Remove unnecessary lines from the test in r242068.
This test case was breaking the hexagon elf bot. The failing lines
were actually unnecessary as checking that the store still reads the
correct value demonstrates that everything is working fine now.
Reduce memory usage of ComputeEditDistance() by (almost) 50%
ComputeEditDistance() currently keeps two rows of the edit distance matrix in
memory. That's unnecessary; one row plus one additional element is sufficient.
With this change, strings up to 64 chars can be processed without going to the
heap, compared to 32 chars previously. (But the main motivation is that the
code gets a bit simpler.)
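As an illustration of the single-row layout (a generic Levenshtein sketch with
unit costs, not the actual ComputeEditDistance implementation):

  #include <algorithm>
  #include <string>
  #include <vector>

  static unsigned editDistance(const std::string &A, const std::string &B) {
    // One row plus one extra element (Diag) instead of two full rows.
    std::vector<unsigned> Row(B.size() + 1);
    for (unsigned j = 0; j <= B.size(); ++j)
      Row[j] = j;
    for (unsigned i = 1; i <= A.size(); ++i) {
      unsigned Diag = Row[0]; // value of Row[j-1] from the previous row
      Row[0] = i;
      for (unsigned j = 1; j <= B.size(); ++j) {
        unsigned Prev = Row[j]; // previous row's value, becomes next Diag
        Row[j] = std::min({Row[j] + 1, Row[j - 1] + 1,
                           Diag + (A[i - 1] == B[j - 1] ? 0u : 1u)});
        Diag = Prev;
      }
    }
    return Row[B.size()];
  }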
[WinEH] Emit the LSDA even if no lpads remain but outlining occurred
The outlined funclets call intrinsics which reference labels from the
LSDA. This situation can easily arise in small functions with a single
cleanup at -O0, where Clang marks a definition as nounwind, and then
WinEHPrepare "discovers" that the landingpad is dead by accident and
deletes it.
We now need to ask the LLVM IR Function for its personality directly,
rather than going through MachineModuleInfo.
Chris Bieneman [Mon, 13 Jul 2015 20:23:15 +0000 (20:23 +0000)]
[CMake] Cleanup tools/CMakeLists.txt to take advantage of the auto-registration that was already partially working.
Summary:
This change re-lands r241621, with an additional fix that was required to allow tool sources to live outside the llvm checkout. It also no longer renames LLVM_EXTERNAL_*_SOURCE_DIR. This change was reverted in r241663, because it renamed several variables of the format LLVM_EXTERNAL_*_* to LLVM_TOOL_*_*.
Original Summary:
The tools CMakeLists file already had implicit tool registration, but there were a few things off about it that needed to be altered to make it work. This change addresses all that. The changes in this patch are:
* Factored out canonicalizing tool names from paths to CMake variables.
* Removed the LLVM_IMPLICIT_PROJECT_IGNORE mechanism in favor of LLVM_EXTERNAL_${nameUPPER}_BUILD, which I renamed to LLVM_TOOL_${nameUPPER}_BUILD because it applies to internal and external tools.
* removed ignore_llvm_tool_subdirectory() in favor of just setting LLVM_TOOL_${nameUPPER}_BUILD to Off
* Added create_llvm_tool_options() to resolve a bug in add_llvm_external_project() - the old LLVM_EXTERNAL_${nameUPPER}_BUILD would not work on a clean CMake directory because the option could be created after it was set in code.
* Removed all but the minimum required calls to add_llvm_external_project from tools/CMakeLists.txt