Simon Dardis [Sat, 9 Dec 2017 23:25:57 +0000 (23:25 +0000)]
Infer the lowest bits of an integer multiply when the low bits of the operands are known
When the lowest bits of the operands to an integer multiply are known, the low bits of the result are deducible.
Code to deduce known-zero bottom bits already existed, but this change improves on that by also deducing known ones.
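As an illustration of the known-ones case: if both operands are known odd, the product is known odd too. A minimal IR sketch (made-up function, not from the patch):
```
define i8 @low_bit_of_mul(i8 %x, i8 %y) {
  %a = or i8 %x, 1     ; bit 0 of %a is a known one
  %b = or i8 %y, 1     ; bit 0 of %b is a known one
  %m = mul i8 %a, %b   ; odd * odd = odd, so bit 0 of %m is a known one
  %r = and i8 %m, 1    ; now foldable to the constant 1
  ret i8 %r
}
```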
Craig Topper [Sat, 9 Dec 2017 08:19:07 +0000 (08:19 +0000)]
[X86] When inserting into the upper bits of a vXi1 vector, make sure we shift enough bits if we widened the vector.
We may need to widen the vector to make the shifts legal, but if we do that, we need to make sure we shift left/right after accounting for the new size. If not, we can't guarantee we are shifting in zeros.
The test cases affected actually show cases where we should remove the shifts altogether, but that's another problem.
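For reference, a shape of IR that exercises this lowering; the v4i1 mask may have to be widened to a legal mask width on some subtargets, and the shift amounts then have to be computed against the widened width (example is illustrative, not one of the affected tests):
```
define <4 x i1> @insert_upper(<4 x i1> %v, <2 x i1> %sub) {
  ; place %sub into the upper two lanes of %v
  %wide = shufflevector <2 x i1> %sub, <2 x i1> undef, <4 x i32> <i32 0, i32 1, i32 undef, i32 undef>
  %r = shufflevector <4 x i1> %v, <4 x i1> %wide, <4 x i32> <i32 0, i32 1, i32 4, i32 5>
  ret <4 x i1> %r
}
```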
Craig Topper [Sat, 9 Dec 2017 07:02:19 +0000 (07:02 +0000)]
[X86] Improve lowering of concats of mask vectors to better optimize zero vector inputs.
We were previously using kunpck with zero inputs unnecessarily. And we had cases where we would insert into a zero vector and then insert into a larger zero vector, incurring two sets of shifts.
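A sketch of the kind of concat this targets; the zero half should come from a zeroed mask register instead of feeding a kunpck (illustrative example, not from the affected tests):
```
define <8 x i1> @concat_with_zeros(<4 x i32> %a, <4 x i32> %b) {
  %cmp = icmp sgt <4 x i32> %a, %b
  ; concat the compare mask with four known-zero mask bits
  %r = shufflevector <4 x i1> %cmp, <4 x i1> zeroinitializer,
                     <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7>
  ret <8 x i1> %r
}
```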
The outliner previously would never outline calls. Calls are pretty common in
files, so it makes sense to outline them. In fact, in the LLVM test suite,
comparing the instructions the outliner misses when calls are outlined versus
when they are not shows that, on average, around 6% of the instructions
encountered are calls. So, if we outline calls, we can find more candidates
and thus save some more space.
This commit adds that functionality and updates the MIR test to reflect that.
Wolfgang Pieb [Sat, 9 Dec 2017 00:39:53 +0000 (00:39 +0000)]
[NFC] Change the string offsets table tests to generate the object on the fly,
which enables us to remove the test scripts and object files from the repository.
Summary:
This is LLVM instrumentation for the new HWASan tool. It is basically
a stripped down copy of ASan at this point, w/o stack or global
support. Instrumentation adds a global constructor + runtime callbacks
for every load and store.
HWASan comes with its own IR attribute.
A brief design document can be found in
clang/docs/HardwareAssistedAddressSanitizerDesign.rst (submitted earlier).
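A rough sketch of the instrumented form. The attribute is assumed here to mirror ASan's `sanitize_address` spelling, and the callback name follows the `__asan_storeN` naming convention; both are assumptions, not copied from the patch:
```
define void @store42(i32* %p) sanitize_hwaddress {
  %addr = ptrtoint i32* %p to i64
  ; runtime callback checks the address tag before the real access
  call void @__hwasan_store4(i64 %addr)
  store i32 42, i32* %p
  ret void
}
declare void @__hwasan_store4(i64)
```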
Paul Robinson [Sat, 9 Dec 2017 00:17:01 +0000 (00:17 +0000)]
Fix out-of-order stepping behavior in programs with sunk instructions.
MachineSink attempts to place instructions near the basic blocks where
they are needed. Once an instruction has been sunk, its location
relative to other instructions is no longer consistent with the
original source code. In order to ensure correct stepping in the
debugger, the debug location for sunk instructions is either merged
with the insertion point or erased if the target successor block is
empty.
Originally submitted as r318679, revised to fix sanitizer failure and
improve testing.
Craig Topper [Fri, 8 Dec 2017 23:30:03 +0000 (23:30 +0000)]
[X86][Mips] Remove unused method declaration from the X86 and Mips AsmPrinters.
Both had a declaration of EmitXRayTable, but there is no method defined in either with that name. There is an emitXRayTable in the base class with a lowercase 'e', and they both call that.
Revert part of "Cleanup some GraphTraits iteration code"
This reverts part of r300656, which caused a regression in
propagateMassToSuccessors by counting edges n^2 times, where n is the
number of edges from the source basic block to the same successor basic
block. The result was both incorrect and very slow to compute for large
values of n (e.g. switches with multiple cases that go to the same basic
block).
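The degenerate case is easy to construct. In the sketch below (illustrative, not the original reproducer), the switch contributes three parallel edges to %same, and the reverted code visited each of those edges three times, i.e. nine visits instead of three:
```
define void @f(i32 %x) {
entry:
  switch i32 %x, label %exit [ i32 0, label %same
                               i32 1, label %same
                               i32 2, label %same ]
same:
  br label %exit
exit:
  ret void
}
```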
Adrian Prantl [Fri, 8 Dec 2017 21:58:18 +0000 (21:58 +0000)]
Generalize llvm::replaceDbgDeclare and actually support the use-case that
is mentioned in the documentation (inserting a deref before the plus_uconst).
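Sketched as the metadata rewrite it enables (the offset is illustrative):
```
; before: the variable lives 8 bytes past the described pointer
!1 = !DIExpression(DW_OP_plus_uconst, 8)
; after inserting a deref in front of the plus_uconst
!2 = !DIExpression(DW_OP_deref, DW_OP_plus_uconst, 8)
```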
Vedant Kumar [Fri, 8 Dec 2017 21:57:28 +0000 (21:57 +0000)]
[Debugify] Add a pass to test debug info preservation
The Debugify pass synthesizes debug info for IR. It's paired with a
CheckDebugify pass which determines how much of the original debug info
is preserved. These passes make it easier to create targeted tests for
debug info preservation.
Here is the Debugify algorithm:
```
NextLine = 1
for (Instruction &I : M)
  attach DebugLoc(NextLine++) to I

NextVar = 1
for (Instruction &I : M)
  if (canAttachDebugValue(I))
    attach dbg.value(NextVar++) to I
```
The CheckDebugify pass expects contiguous ranges of DILocations and
DILocalVariables. If it fails to find all of the expected debug info, it
prints a specific error to stderr which can be FileChecked.
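A minimal sketch of the synthesized annotations on a two-instruction function (metadata definitions elided, node numbers illustrative):
```
define i32 @square(i32 %x) !dbg !6 {
  %r = mul i32 %x, %x, !dbg !8   ; !8 = DILocation(line: 1, ...)
  call void @llvm.dbg.value(metadata i32 %r, metadata !9, metadata !DIExpression()), !dbg !8
  ret i32 %r, !dbg !10           ; !10 = DILocation(line: 2, ...)
}
```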
This was discussed on llvm-dev in the thread:
"Passes to add/validate synthetic debug info"
Florian Hahn [Fri, 8 Dec 2017 21:49:03 +0000 (21:49 +0000)]
[CodeExtractor] Add debug locations for new call and branch instrs.
Summary:
If a partially inlined function has debug info, we have to add debug
locations to the call instruction calling the outlined function.
We use the debug location of the first instruction in the outlined
function, as the introduced call transfers control to this statement and
there is no other equivalent line in the source code.
We also use the same debug location for the branch instruction added
to jump from the artificial entry block of the outlined function, which just
jumps to the first actual basic block of the outlined function.
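Sketched below as fragments with made-up names: the call in the caller and the branch out of the artificial entry block carry the same location, that of the first outlined instruction:
```
; in the original function
codeRepl:
  call void @bar.cold(i32* %p), !dbg !7
; in the outlined function
newFuncRoot:
  br label %cold.body, !dbg !7
```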
Dan Gohman [Fri, 8 Dec 2017 21:18:21 +0000 (21:18 +0000)]
[WebAssembly] Re-apply r320041: "Support main functions with alternate signatures."
This includes a fix so that it doesn't transform declarations, and it
puts the functionality under the control of a command-line option, which is off
by default to avoid breaking existing setups.
Craig Topper [Fri, 8 Dec 2017 20:10:33 +0000 (20:10 +0000)]
[X86] Teach lowering to only let through (insert_subvector (vXi1 zeros), subvec, 0) for vector sizes that have native KSHIFT support.
For narrow sizes we'll widen the zero vector and widen the insert. Then do an extract_subvector to get back down to the correct size.
This allows us to remove some patterns from the isel table that had to COPY_TO_REGCLASS to an oversized register, do the shift and then COPY_TO_REGCLASS back to the narrow register. Now this is represented explicitly in the DAG.
This seems to have perturbed the register allocation in one of the tests, but the number of instructions didn't change.
Shoaib Meenai [Fri, 8 Dec 2017 19:42:47 +0000 (19:42 +0000)]
[cmake] Only pass CMAKE_SYSROOT if non-empty
In my build environment (cmake 3.6.1 and gcc 4.8.5 on CentOS 7), having
an empty CMAKE_SYSROOT in the cache results in --sysroot="" being passed
to all compile commands, and then the compiler errors out because of the
empty sysroot. Only set CMAKE_SYSROOT if non-empty to avoid this.
Sanjay Patel [Fri, 8 Dec 2017 18:35:51 +0000 (18:35 +0000)]
[x86] use hasAVX2() rather than hasInt256(); NFC
These are aliases, but the thing we're checking here is that the target has
vpsllv*, not that the data type is 256-bit. Those instructions exist for
128-bit vectors too...but sadly, not for all element sizes.
Michael Trent [Fri, 8 Dec 2017 17:51:04 +0000 (17:51 +0000)]
Updated llvm-objdump to display local relocations in Mach-O binaries
Summary:
llvm-objdump's Mach-O parser was updated in r306037 to display external
relocations for MH_KEXT_BUNDLE file types. This change extends the Mach-O
parser to display local relocations for MH_PRELOAD files. When used with
the -macho option, relocations will be displayed in a historical format.
Summary:
If we have code like this:
```
float a, b;
a = std::max(a ,b);
```
it is converted into something like this:
```
%call = call dereferenceable(4) float* @_ZSt3maxIfERKT_S2_S2_(float* nonnull dereferenceable(4) %a.addr, float* nonnull dereferenceable(4) %b.addr)
%1 = bitcast float* %call to i32*
%2 = load i32, i32* %1, align 4
%3 = bitcast float* %a.addr to i32*
store i32 %2, i32* %3, align 4
```
After inlining, this code is converted to the following:
```
%1 = load float, float* %a.addr
%2 = load float, float* %b.addr
%cmp.i = fcmp fast olt float %1, %2
%__b.__a.i = select i1 %cmp.i, float* %a.addr, float* %b.addr
%3 = bitcast float* %__b.__a.i to i32*
%4 = load i32, i32* %3, align 4
%5 = bitcast float* %arrayidx to i32*
store i32 %4, i32* %5, align 4
```
This pattern is not recognized as a minmax pattern.
The patch solves this problem by converting the sequence (writing &dst for the store's destination pointer)
```
store (bitcast &dst), (load (bitcast (select ((cmp V1, V2), &V1, &V2))))
```
to the sequence
```
store &dst, (load (select ((cmp V1, V2), &V1, &V2)))
```
After this, the code is recognized as a minmax pattern.
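Concretely, the inlined sequence from above becomes something like this sketch (the bitcasts are gone, and the load and store operate on float directly):
```
%cmp.i = fcmp fast olt float %1, %2
%__b.__a.i = select i1 %cmp.i, float* %a.addr, float* %b.addr
%3 = load float, float* %__b.__a.i
store float %3, float* %arrayidx
```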
Max Kazantsev [Fri, 8 Dec 2017 12:19:45 +0000 (12:19 +0000)]
[SCEV] Fix predicate usage in computeExitLimitFromICmp
In this method, we invoke `SimplifyICmpOperands`, which takes the `Cond` predicate
by reference and may change it along with the `LHS` and `RHS` SCEVs. But then we invoke
`computeShiftCompareExitLimit` with the Values from which the SCEVs were derived;
these Values have not been modified, while `Cond` may have been.
One possible outcome of this is that we may falsely prove that an infinite loop terminates
within some finite number of iterations.
In this patch, we save the original `Cond` and pass it along with the original operands.
This logic may be removed in the future once `computeShiftCompareExitLimit` works
with SCEVs instead of Value operands.
Pavel Labath [Fri, 8 Dec 2017 09:59:48 +0000 (09:59 +0000)]
[cmake] Make setting of CMAKE_C(XX)_COMPILER flags overridable for cross-builds
Summary:
r319898 made it possible to override these variables via the
CROSS_TOOLCHAIN_FLAGS setting, but this only worked if one explicitly
specifies these variables there. If, instead, one uses
CROSS_TOOLCHAIN_FLAGS to specify a toolchain file (as our internal
builds do, to point cmake to a checked-in toolchain), the
CMAKE_C(XX)_COMPILER flags would still win over the ones specified by
the toolchain file.
The fix is to make the mere presence of these flags overridable. I do
this by putting them in as the default value for the CROSS_TOOLCHAIN_FLAGS
setting, so they can be overridden at cmake configuration time.
Gadi Haber [Fri, 8 Dec 2017 09:48:44 +0000 (09:48 +0000)]
[X86][Haswell]: Updating the scheduling information for the Haswell subtarget.
Updated the scheduling information for the Haswell subtarget with the following changes:
Regrouped the instructions after adding appropriate load + store latencies.
Added scheduling for missing instructions such as the GATHER instructions.
The changes were made after revisiting the latency impact of all memory uOps.
Craig Topper [Fri, 8 Dec 2017 08:10:58 +0000 (08:10 +0000)]
[X86] Always consider inserting a vXi1 vector into the lsbs of a zero vector to be legal during lowering. Add isel patterns to emit shifts.
Previously we only allowed these through if the subvector came from a compare or test instruction, which we would again check for during isel.
With this change we only check for the compare and test instructions during isel and have fallback patterns that emit the shifts if needed.
I noticed that in a lot of cases we don't actually see the compare during lowering and rely on an odd legalization of concat_vectors with a zero vector as the second argument. This keeps the concat_vectors around long enough for a later DAG combine to expose the compare; then we re-legalize the concat_vectors and catch the compare.
Derek Schuff [Fri, 8 Dec 2017 00:39:54 +0000 (00:39 +0000)]
Revert "[WebAssemby] Support main functions with alternate signatures."
This reverts commit 959e37e669b0c3cfad4cb9f1f7c9261ce9f5e9ae.
That commit doesn't handle the case where main is declared rather than defined,
in particular the even more special case where main is a prototypeless
declaration (which is of course the one actually used by musl currently).
Craig Topper [Fri, 8 Dec 2017 00:16:09 +0000 (00:16 +0000)]
[X86] Handle all versions of vXi1 insert_vector_elt with a constant index without falling back to shuffles.
We previously only supported inserting at the LSB or MSB, where it was easy to zero the bit and perform an OR to insert the new value.
This change effectively extracts the old value and the new value, xors them together, and then xors that single bit into the correct location in the original vector. The first xor cancels out the old value, leaving the new value in its position.
The way I've implemented this uses 3 shifts and two xors and uses an additional register. We can avoid the additional register at the cost of another shift.
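A scalar analogue of the trick, sketched on an i8 "mask" rather than the actual vXi1 lowering: xor the old and new bits, shift the difference into place, and xor it back in; when the bits already match, the final xor is a no-op.
```
define i8 @insert_bit(i8 %v, i8 %new, i8 %idx) {
  %old = lshr i8 %v, %idx      ; bring the old bit down to bit 0
  %diff = xor i8 %old, %new    ; old ^ new in bit 0 (upper bits are junk)
  %diff1 = and i8 %diff, 1     ; keep only the difference bit
  %place = shl i8 %diff1, %idx ; move the difference back into position
  %r = xor i8 %v, %place       ; flips the bit only when old != new
  ret i8 %r
}
```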