Shoaib Meenai [Fri, 8 Dec 2017 19:42:47 +0000 (19:42 +0000)]
[cmake] Only pass CMAKE_SYSROOT if non-empty
In my build environment (cmake 3.6.1 and gcc 4.8.5 on CentOS 7), having
an empty CMAKE_SYSROOT in the cache results in --sysroot="" being passed
to all compile commands, and then the compiler errors out because of the
empty sysroot. Only set CMAKE_SYSROOT if non-empty to avoid this.
Sanjay Patel [Fri, 8 Dec 2017 18:35:51 +0000 (18:35 +0000)]
[x86] use hasAVX2() rather than hasInt256(); NFC
These are aliases, but the thing we're checking here is that the target has
vpsllv*, not that the data type is 256-bit. Those instructions exist for
128-bit vectors too...but sadly, not for all element sizes.
Michael Trent [Fri, 8 Dec 2017 17:51:04 +0000 (17:51 +0000)]
Updated llvm-objdump to display local relocations in Mach-O binaries
Summary:
llvm-objdump's Mach-O parser was updated in r306037 to display external
relocations for MH_KEXT_BUNDLE file types. This change extends the Mach-O
parser to display local relocations for MH_PRELOAD files. When used with
the -macho option, relocations are displayed in a historical format.
Summary:
If we have the code like this:
```
float a, b;
a = std::max(a, b);
```
it is converted into something like this:
```
%call = call dereferenceable(4) float* @_ZSt3maxIfERKT_S2_S2_(float* nonnull dereferenceable(4) %a.addr, float* nonnull dereferenceable(4) %b.addr)
%1 = bitcast float* %call to i32*
%2 = load i32, i32* %1, align 4
%3 = bitcast float* %a.addr to i32*
store i32 %2, i32* %3, align 4
```
After inlining, this code is converted to the following:
```
%1 = load float, float* %a.addr
%2 = load float, float* %b.addr
%cmp.i = fcmp fast olt float %1, %2
%__b.__a.i = select i1 %cmp.i, float* %a.addr, float* %b.addr
%3 = bitcast float* %__b.__a.i to i32*
%4 = load i32, i32* %3, align 4
%5 = bitcast float* %arrayidx to i32*
store i32 %4, i32* %5, align 4
```
This pattern is not recognized as a minmax pattern.
The patch solves this problem by converting the sequence
```
store (bitcast, (load bitcast (select ((cmp V1, V2), &V1, &V2))))
```
into the sequence
```
store (,load (select((cmp V1, V2), &V1, &V2)))
```
After this, the code is recognized as a minmax pattern.
Max Kazantsev [Fri, 8 Dec 2017 12:19:45 +0000 (12:19 +0000)]
[SCEV] Fix predicate usage in computeExitLimitFromICmp
In this method, we invoke `SimplifyICmpOperands`, which takes the `Cond` predicate
by reference and may change it along with the `LHS` and `RHS` SCEVs. But we then invoke
`computeShiftCompareExitLimit` with the Values from which the SCEVs were derived;
those Values have not been modified, while `Cond` may have been.
One possible outcome of this is that we may falsely prove that an infinite loop terminates
within some finite number of iterations.
In this patch, we save the original `Cond` and pass it along with the original operands.
This logic may be removed in the future once `computeShiftCompareExitLimit` works
with SCEVs instead of Value operands.
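To make the pitfall concrete, here is a minimal, self-contained C++ sketch of the pass-by-reference problem described above; the Pred enum, simplifyCmp helper, and variable names are hypothetical stand-ins, not the actual ScalarEvolution APIs.
```
#include <cassert>
#include <utility>

enum class Pred { LT, GT };

// Stand-in for SimplifyICmpOperands: canonicalize "a GT b" into "b LT a" by
// swapping the operands and flipping the predicate, all through references.
static void simplifyCmp(Pred &P, int &LHS, int &RHS) {
  if (P == Pred::GT) {
    std::swap(LHS, RHS);
    P = Pred::LT; // the predicate is only meaningful with the swapped operands
  }
}

int main() {
  int A = 1, B = 2;          // original operands; the condition is "A > B" (false)
  Pred P = Pred::GT;
  Pred OriginalP = P;        // what the patch does: save the predicate up front
  int LHS = A, RHS = B;      // canonicalized copies, analogous to the SCEVs
  simplifyCmp(P, LHS, RHS);  // P is now LT and LHS/RHS were swapped to match

  // Bug shape: the mutated predicate paired with the *original* operands asks
  // "A < B" (true) instead of "A > B" (false).
  bool buggy = (P == Pred::LT) ? (A < B) : (A > B);
  // Fix shape: the saved predicate paired with the original operands.
  bool fixed = (OriginalP == Pred::LT) ? (A < B) : (A > B);
  assert(buggy && !fixed);
  return 0;
}
```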
Pavel Labath [Fri, 8 Dec 2017 09:59:48 +0000 (09:59 +0000)]
[cmake] Make setting of CMAKE_C(XX)_COMPILER flags overridable for cross-builds
Summary:
r319898 made it possible to override these variables via the
CROSS_TOOLCHAIN_FLAGS setting, but this only worked if one explicitly
specified these variables there. If, instead, one uses
CROSS_TOOLCHAIN_FLAGS to specify a toolchain file (as our internal
builds do, to point cmake to a checked-in toolchain), the
CMAKE_C(XX)_COMPILER flags would still win over the ones specified by
the toolchain file.
The fix is to make our setting of these flags overridable. I do this by
putting them in as the default value for the CROSS_TOOLCHAIN_FLAGS
setting, so they can be overridden at cmake configuration time.
Gadi Haber [Fri, 8 Dec 2017 09:48:44 +0000 (09:48 +0000)]
[X86][Haswell]: Updating the scheduling information for the Haswell subtarget.
Updated the scheduling information for the Haswell subtarget with the following changes:
Regrouped the instructions after adding appropriate load + store latencies.
Added scheduling for missing instructions such as the GATHER instrs.
The changes were made after revisiting the latency impact of all memory uOps.
Craig Topper [Fri, 8 Dec 2017 08:10:58 +0000 (08:10 +0000)]
[X86] Always consider inserting a vXi1 vector into the lsbs of a zero vector to be legal during lowering. Add isel patterns to emit shifts.
Previously we only allowed these through if the subvector came from a compare or test instruction which we would again check for during isel.
With this change we only check for the compare and test instructions during isel and have fallback patterns that emit the shifts if needed.
I noticed that in a lot of cases we don't actually see the compare during lowering and instead rely on an odd legalization of concat_vectors with a zero vector as the second argument. This keeps the concat_vectors around long enough for a later dag combine to expose the compare; then we re-legalize the concat_vectors and catch the compare.
Derek Schuff [Fri, 8 Dec 2017 00:39:54 +0000 (00:39 +0000)]
Revert "[WebAssemby] Support main functions with alternate signatures."
This reverts commit 959e37e669b0c3cfad4cb9f1f7c9261ce9f5e9ae.
That commit doesn't handle the case where main is declared rather than defined,
in particular the even-more special case where main is a prototypeless
declaration (which is of course the one actually used by musl currently).
Craig Topper [Fri, 8 Dec 2017 00:16:09 +0000 (00:16 +0000)]
[X86] Handle all versions of vXi1 insert_vector_elt with a constant index without falling back to shuffles.
We previously only supported inserting into the LSB or MSB, where it was easy to zero the destination bit and perform an OR to insert the new value.
This change effectively extracts the old value and the new value, xors them together, and then xors that single bit into the correct location in the original vector. This cancels out the old value in the first xor, leaving the new value in that position.
The way I've implemented this uses three shifts and two xors and requires an additional register. We can avoid the additional register at the cost of another shift.
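A scalar analogue of the extract/xor/xor idea described above, modeling the mask as a plain integer; the helper name and bit width are illustrative only and do not mirror the generated vector code.
```
#include <cassert>
#include <cstdint>

// Insert NewBit (0 or 1) at position Idx of Mask without clearing the old bit
// first: extract the old bit, xor it with the new bit, then xor that single
// bit back into place so the old value cancels and the new value remains.
static uint16_t insertBit(uint16_t Mask, unsigned Idx, uint16_t NewBit) {
  uint16_t OldBit = (Mask >> Idx) & 1;              // extract the old value
  uint16_t Diff = (OldBit ^ NewBit) & 1;            // old ^ new
  return Mask ^ static_cast<uint16_t>(Diff << Idx); // put the difference back
}

int main() {
  assert(insertBit(0b1010, 0, 1) == 0b1011); // set a cleared bit
  assert(insertBit(0b1010, 1, 0) == 0b1000); // clear a set bit
  assert(insertBit(0b1010, 3, 1) == 0b1010); // same value: no change
  return 0;
}
```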
Bill Seurer [Thu, 7 Dec 2017 22:53:33 +0000 (22:53 +0000)]
[PowerPC][asan] Update asan to handle changed memory layouts in newer kernels
In more recent Linux kernels with 47-bit VMAs, the layout of virtual memory
for powerpc64 changed, causing the address sanitizer to not work properly. This
patch adds support for 47-bit VMA kernels for powerpc64 and fixes up test
cases.
Zachary Turner [Thu, 7 Dec 2017 22:51:16 +0000 (22:51 +0000)]
[DebugInfo] Fix register variables not showing up in pdb.
Previously, when linking against libcmt from the MSVC runtime,
lld-link /verbose would show "Ignoring unknown symbol record
with kind 0x1006". It turns out this was because
TypeIndexDiscovery did not handle S_REGISTER records, so these
records were not getting properly remapped.
Alina Sbirlea [Thu, 7 Dec 2017 22:41:34 +0000 (22:41 +0000)]
[ModRefInfo] Make enum ModRefInfo an enum class [NFC].
Summary:
Make enum ModRefInfo an enum class. Changes to ModRefInfo values should
be done using inline wrappers.
This should prevent future bitwise operations from being added, which can be more error-prone.
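A generic C++ sketch of the enum-class-plus-inline-wrappers pattern this change moves toward; the wrapper names below are illustrative and may not match LLVM's actual helpers.
```
#include <cassert>

enum class ModRefInfo {
  NoModRef = 0,
  Ref = 1,
  Mod = 2,
  ModRef = Ref | Mod,
};

// With a plain enum, callers could write raw bit tests like `MRI & Mod`; with
// an enum class that no longer compiles, so every query and update goes
// through a small inline wrapper and the representation stays an internal
// detail. (Wrapper names here are illustrative.)
inline bool isModSet(ModRefInfo MRI) {
  return static_cast<int>(MRI) & static_cast<int>(ModRefInfo::Mod);
}
inline bool isRefSet(ModRefInfo MRI) {
  return static_cast<int>(MRI) & static_cast<int>(ModRefInfo::Ref);
}
inline ModRefInfo setMod(ModRefInfo MRI) {
  return static_cast<ModRefInfo>(static_cast<int>(MRI) |
                                 static_cast<int>(ModRefInfo::Mod));
}

int main() {
  ModRefInfo MRI = ModRefInfo::Ref;
  assert(isRefSet(MRI) && !isModSet(MRI));
  MRI = setMod(MRI);
  assert(isModSet(MRI) && MRI == ModRefInfo::ModRef);
  return 0;
}
```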
The previous offset overflow check was incorrect. It would always give the
correct result, but it was comparing the SCALED potential fixed-up offset
against an UNSCALED minimum/maximum. As a result, the outliner was missing a
bunch of frame setup/destroy instructions that ought to have been safe to
outline. This fixes that, and adds an instruction to the .mir test that failed
under the old check.
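A hypothetical C++ illustration of the scaled-versus-unscaled mixup; the scale and ranges below are invented for the example and are not the outliner's real values.
```
#include <cassert>
#include <cstdint>

// Invented encoding: the immediate is stored in units of 8 bytes (Scale) with
// an encodable *scaled* range of [0, 4095], while the "unscaled" addressing
// form only covers byte offsets in [-256, 255].
constexpr int64_t Scale = 8;
constexpr int64_t MinScaled = 0, MaxScaled = 4095;
constexpr int64_t MinUnscaled = -256, MaxUnscaled = 255;

// Buggy shape: a scaled offset is compared against the unscaled bounds, so
// perfectly encodable offsets look out of range and get rejected.
static bool fitsBuggy(int64_t OffsetBytes) {
  int64_t Scaled = OffsetBytes / Scale;
  return Scaled >= MinUnscaled && Scaled <= MaxUnscaled;
}

// Fixed shape: the scaled offset is compared against the scaled bounds.
static bool fitsFixed(int64_t OffsetBytes) {
  int64_t Scaled = OffsetBytes / Scale;
  return Scaled >= MinScaled && Scaled <= MaxScaled;
}

int main() {
  // 4096 bytes scales to 512: encodable, yet the buggy check rejects it,
  // which is the "missing safe candidates" symptom described above.
  assert(!fitsBuggy(4096) && fitsFixed(4096));
  assert(fitsBuggy(512) && fitsFixed(512)); // small offsets: both agree
  return 0;
}
```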
Mark Searles [Thu, 7 Dec 2017 21:14:41 +0000 (21:14 +0000)]
[AMDGPU] Revert "[AMDGPU] Add options for waitcnt pass debugging; add instr count in debug output."
Patch caused a buildbot failure; http://lab.llvm.org:8011/builders/lld-x86_64-darwin13/builds/15733/steps/build_Lld/logs/stdio :
lib/Target/AMDGPU/SIInsertWaitcnts.cpp:396:11: error: private field 'InstCnt' is not used [-Werror,-Wunused-private-field]
int32_t InstCnt = 0;
^
1 error generated.
"
This reverts commit 71627f79010aafe74fdcba901bba28dd7caa0869.
Mark Searles [Thu, 7 Dec 2017 20:36:39 +0000 (20:36 +0000)]
[AMDGPU] Add options for waitcnt pass debugging; add instr count in debug output.
-amdgpu-waitcnt-forcezero={1|0} Force all waitcnt instrs to be emitted as s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
-amdgpu-waitcnt-forceexp=<n> Force emit a s_waitcnt expcnt(0) before the first <n> instrs
-amdgpu-waitcnt-forcelgkm=<n> Force emit a s_waitcnt lgkmcnt(0) before the first <n> instrs
-amdgpu-waitcnt-forcevm=<n> Force emit a s_waitcnt vmcnt(0) before the first <n> instrs
Mark Searles [Thu, 7 Dec 2017 20:34:25 +0000 (20:34 +0000)]
[AMDGPU] Add GCNHazardRecognizer::checkInlineAsmHazards() and GCNHazardRecognizer::checkVALUHazardsHelper().
checkInlineAsmHazards() checks an INLINEASM for hazards that we particularly care about (so it is not exhaustive). This patch adds a check for an INLINEASM that defs vregs holding the data to be stored by an immediately preceding store of more than 8 bytes. If the instr were not within an INLINEASM, this scenario would be handled by checkVALUHazards(). Add checkVALUHazardsHelper(), which will be called by both checkVALUHazards() and checkInlineAsmHazards().
Craig Topper [Thu, 7 Dec 2017 20:10:04 +0000 (20:10 +0000)]
[X86] Fix InsertBitToMaskVector to only issue KSHIFTS of native size so that upper bits are properly zeroed.
There's no v2i1 or v4i1 kshift, and v8i1 is only supported with AVX512DQ. Isel has fake patterns to extend these types to native shifts, but makes no guarantees about the value of any bits shifted in when shifting right.
This patch promotes the vector to a type that supports a native shift first and only allows inserting into the MSB of a native-sized shift.
I've constructed this in a way that doesn't do the promotion if we're going to fall back to using an xmm/ymm/zmm shuffle. I think I have a plan to remove the shuffle fallback entirely, in which case this can be simplified, but I wanted to fix the correctness issue first.
Simon Pilgrim [Thu, 7 Dec 2017 15:24:14 +0000 (15:24 +0000)]
[X86] Tag LZCNT/TZCNT instructions scheduler classes
Tagged as IMUL instructions for a reasonable approximation (ALU tends to be a lot faster) - POPCNT is currently tagged as FAdd which I think should be replaced with IMUL as well
Sanjay Patel [Thu, 7 Dec 2017 15:17:58 +0000 (15:17 +0000)]
[DAGCombiner] eliminate shuffle of insert element
I noticed this pattern in D38316 / D38388. We failed to combine a shuffle that either
repeats a scalar insertion at the same position in a vector or translates it to a
different element index.
Like the earlier patch, this could be an instcombine too, but since we opted to make this
a DAG transform earlier, I've made this one a DAG patch too.
We do not need any legality checking because the new insert is identical to the existing
insert except that it may have a different constant insertion operand.
The constant insertion test in test/CodeGen/X86/vector-shuffle-combining.ll was the
motivation for D38756.
Dan Gohman [Thu, 7 Dec 2017 13:49:27 +0000 (13:49 +0000)]
[WebAssemby] Support main functions with alternate signatures.
WebAssembly requires caller and callee signatures to match, so the usual
C runtime trick of calling main and having it just work regardless of
whether main is defined as '()' or '(int argc, char *argv[])' doesn't
work. Extend the FixFunctionBitcasts pass to rewrite main to use the
latter form.
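A conceptual C++ sketch of the mismatch and the rewrite; the original_main name is made up here, and this is not the pass's actual IR-level output.
```
// WebAssembly call signatures must match exactly, so start code that calls
// main(int, char **) cannot directly call a main defined with no parameters.
// "original_main" stands in for such a user-defined zero-argument main.
int original_main() { return 0; }

// The rewrite conceptually gives main the full two-argument signature and
// forwards to the user's body, which simply ignores the extra arguments.
int main(int argc, char **argv) {
  (void)argc;
  (void)argv;
  return original_main();
}
```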
[Nios2] final infrastructure to provide compilation of a return from a function
This patch includes all missing functionality needed to provide a first
compilation of a simple program that just returns from a function.
I've added a test case that checks for the "ret" instruction printed in the
assembly output.
Patch by Andrei Grischenko (andrei.l.grischenko@intel.com)
Differential revision: https://reviews.llvm.org/D39688
[dsymutil] Add -verify option to run DWARF verifier after linking.
This patch adds support for running the DWARF verifier on the linked
debug info files. If the -verify option is specified and verification
fails, dsymutil aborts with a non-zero exit code. This behavior
is *not* enabled by default.
Pavel Labath [Thu, 7 Dec 2017 10:54:23 +0000 (10:54 +0000)]
[Testing/Support] Make matchers work with Expected<T&>
Summary:
This did not work because the ExpectedHolder was trying to hold the
value in an Optional<T*>. Instead of trying to mimic the behavior of
Expected and try to make ExpectedHolder work with references and
non-references, I simply store the reference to the Expected object in
the holder.
I also add a bunch of tests for these matchers, which have helped me
flush out some problems in my initial implementation of this patch, and
uncovered the fact that we are not consistent in quoting our values in
the matcher output (which I also fix).
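A brief usage sketch of what this enables, assuming the EXPECT_THAT_EXPECTED macro and the HasValue/Failed matchers from llvm/Testing/Support/Error.h plus googletest are available; the getValueRef function under test is made up.
```
#include "llvm/Support/Error.h"
#include "llvm/Testing/Support/Error.h"
#include "gtest/gtest.h"

using namespace llvm;

static int TheValue = 42;

// Made-up function under test: returns Expected<int &>, the reference case
// that the ExpectedHolder previously could not hold.
static Expected<int &> getValueRef(bool Succeed) {
  if (!Succeed)
    return make_error<StringError>("no value", inconvertibleErrorCode());
  return TheValue;
}

TEST(ExpectedRefMatchers, HedgedExample) {
  // With this change, the value matcher also works when T is a reference.
  EXPECT_THAT_EXPECTED(getValueRef(true), HasValue(42));
  EXPECT_THAT_EXPECTED(getValueRef(false), Failed());
}
```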
Alex Bradbury [Thu, 7 Dec 2017 10:46:23 +0000 (10:46 +0000)]
[RISCV] MC layer support for the standard RV32D instruction set extension
As the FPR32 and FPR64 registers have the same names, use
validateTargetOperandClass in RISCVAsmParser to coerce a parsed FPR32 to an
FPR64 when necessary. The rest of this patch is very similar to the RV32F
patch.
[CodeGen] Use MachineOperand::print in the MIRPrinter for MO_Register.
Work towards the unification of MIR and debug output by refactoring the
interfaces.
For MachineOperand::print, keep a simple version that can be easily called
from `dump()`, and a more complex one which will be called from both the
MIRPrinter and MachineInstr::print.
Add extra checks inside MachineOperand for detached operands (operands
with getParent() == nullptr).
Alex Bradbury [Thu, 7 Dec 2017 10:26:05 +0000 (10:26 +0000)]
[RISCV] MC layer support for the standard RV32F instruction set extension
The most interesting part of this patch is probably the handling of
rounding mode arguments. Sadly, the RISC-V assembler handles floating point
rounding modes as a special "argument", when it would be more consistent to
handle them like the atomics, as opcode suffixes. This patch supports parsing
this optional parameter, using InstAlias to allow parsing these floating point
instructions when no rounding mode is specified.
Alex Bradbury [Thu, 7 Dec 2017 09:51:55 +0000 (09:51 +0000)]
[TableGen] Give the option of tolerating duplicate register names
A number of architectures re-use the same register names (e.g. for both 32-bit
FPRs and 64-bit FPRs). They are currently unable to use the tablegen'erated
MatchRegisterName and MatchRegisterAltName, as tablegen (when built with
asserts enabled) will fail.
When the AllowDuplicateRegisterNames in AsmParser is set, duplicated register
names will be tolerated. A backend can then coerce registers to the desired
register class by (for instance) implementing validateTargetOperandClass.
At least the in-tree Sparc backend could benefit from this, as does RISC-V
(single and double precision floating point registers).
Gadi Haber [Thu, 7 Dec 2017 09:16:34 +0000 (09:16 +0000)]
[X86][FMA][FMA4]: Adding full coverage of MC encoding for the FMA, FMA4 isa sets.<NFC>
NFC.
Adding MC regression tests to cover the FMA and FMA4 ISA sets.
This patch is part of a larger task to cover MC encoding of all X86 ISA sets, starting with revision https://reviews.llvm.org/D39952.
Gadi Haber [Thu, 7 Dec 2017 09:00:19 +0000 (09:00 +0000)]
[X86][X87]: Adding full coverage of MC encoding for all X87 ISA Sets.<NFC>
NFC.
Currently, not all of the X86 ISA sets are covered by the MC regression tests for X86.
Full coverage needs to be added for each ISA set, for both 32-bit and 64-bit instructions and registers.
This patch includes MC assembly tests for X87 in both 32-bit and 64-bit modes.