Fangrui Song [Mon, 29 Apr 2019 05:38:22 +0000 (05:38 +0000)]
[llvm-nm] Simplify and fix a buffer overflow
* char SymbolAddrStr[18] can't hold a 64-bit value formatted with "%" PRIo64, which may need 22 characters plus the terminating NUL (see the sketch after this list).
* Use range-based for
* Delete unnecessary typedef
* format(...).print(Str, sizeof(Str)) + outs() << Str => outs() << format(...)
* Use cascading outs() << .. << ..
* Use iterator_range(Container &&c)
* (A & B) == B => A & B if B is a power of 2
* replace null sentinel in constants with makeArrayRef
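A minimal sketch of the overflow and of the streaming fix, in plain C++ (buffer sizes from the first bullet; llvm-nm's actual code uses outs() and format()):

```cpp
#include <cinttypes>
#include <cstdint>
#include <cstdio>

int main() {
  uint64_t Address = UINT64_MAX;
  // 2^64-1 in octal is 1777777777777777777777: 22 characters, so an
  // 18-byte buffer cannot hold every "%" PRIo64 result plus the NUL.
  char SymbolAddrStr[23]; // large enough for the worst case
  snprintf(SymbolAddrStr, sizeof(SymbolAddrStr), "%" PRIo64, Address);
  printf("%s\n", SymbolAddrStr);
  // The fix sidesteps fixed-size buffers entirely by streaming:
  //   outs() << format("%" PRIo64, Address);
}
```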
I got confused on the terminology, and the change in D60598 was not
correct. I was thinking of "exact" in terms of the result being
non-approximate. However, the relevant distinction here is whether
the result is
* Largest range such that:
Forall Y in Other: Forall X in Result: X BinOp Y does not wrap.
(makeGuaranteedNoWrapRegion)
* Smallest range such that:
Forall Y in Other: Forall X not in Result: X BinOp Y wraps.
(A hypothetical makeAllowedNoWrapRegion)
* Both. (makeExactNoWrapRegion)
I'm adding a separate makeExactNoWrapRegion method accepting a
single APInt (same as makeExactICmpRegion) and using it in the
places where the guarantee is relevant.
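A hedged usage sketch of the two variants (signatures as described here; details may differ):

```cpp
#include "llvm/IR/ConstantRange.h"
#include "llvm/IR/Instruction.h"
#include "llvm/IR/Operator.h"
using namespace llvm;

void noWrapRegions() {
  APInt Y(/*numBits=*/8, /*val=*/100);
  // Exact region: X is in the result iff X + 100 does not wrap in
  // signed 8-bit arithmetic. Only well-defined for a single APInt.
  ConstantRange Exact = ConstantRange::makeExactNoWrapRegion(
      Instruction::Add, Y, OverflowingBinaryOperator::NoSignedWrap);
  // Guaranteed region: for a (possibly multi-element) range of Ys,
  // the largest range whose every X is safe for every Y; some safe
  // values of X may be left out.
  ConstantRange Guaranteed = ConstantRange::makeGuaranteedNoWrapRegion(
      Instruction::Add, ConstantRange(Y),
      OverflowingBinaryOperator::NoSignedWrap);
  (void)Exact;
  (void)Guaranteed;
}
```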
[DAGCombiner] try repeated fdiv divisor transform before building estimate
This was originally part of D61028, but it's an independent diff.
If we try the repeated divisor reciprocal transform before producing an estimate sequence,
then we have an opportunity to use scalar fdiv. On x86, the trade-off is 1 divss vs. 5
vector FP ops in the default estimate sequence. On recent chips (Skylake, Ryzen), the
full-precision division has only 3-cycle throughput, so that is probably the better
default for performance, and it avoids problems from x86's inaccurate estimates.
The last 2 tests show that users still have the option to override the defaults by using
the function attributes for reciprocal estimates, but those patterns are potentially made
faster by converting the vector ops (including ymm ops) to scalar math.
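At the source level, the transform looks like this (a sketch; it applies under fast-math-style reassociation, and the names are illustrative):

```cpp
// Before: two divisions by the same divisor d.
void before(float x, float y, float d, float &a, float &b) {
  a = x / d;
  b = y / d;
}

// After: one full-precision scalar division plus two multiplies,
// rather than the 5-op vector reciprocal-estimate sequence.
void after(float x, float y, float d, float &a, float &b) {
  float t = 1.0f / d;
  a = x * t;
  b = y * t;
}
```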
An xor reduction of a bool vector can be optimized to a parity check of the MOVMSK/BITCAST'd integer: if the population count is odd, return 1; otherwise return 0.
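A sketch of the equivalence with SSE2 intrinsics (assumes a GCC/Clang-style compiler and that each byte lane holds 0x00 or 0xFF, as a vector compare produces):

```cpp
#include <emmintrin.h>

// XOR-reducing a vector of booleans yields 1 iff an odd number of
// lanes are true - i.e. the parity of the MOVMSK'd bitmask.
int xorReduceBool(__m128i BoolVec) {
  int Mask = _mm_movemask_epi8(BoolVec); // one bit per byte lane
  return __builtin_parity(Mask);         // odd popcount -> 1, else 0
}
```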
[X86] Remove (V)MOV64toSDrr/m and (V)MOVDI2SSrr/m. Use 128-bit result MOVD/MOVQ and COPY_TO_REGCLASS instead
Summary:
The register forms of these instructions are CodeGenOnly instructions that cover
GR32->FR32 and GR64->FR64 bitcasts. There is a similar set of instructions for
the opposite bitcast. Due to the patterns using bitcasts these instructions get
marked as "bitcast" machine instructions as well. The peephole pass is able to
look through these as well as other copies to try to avoid register bank copies.
Because FR32/FR64/VR128 are all coalescable to each other we can end up in a
situation where a GR32->FR32->VR128->FR64->GR64 sequence can be reduced to
GR32->GR64 which the copyPhysReg code can't handle.
To prevent this, this patch removes one set of the 'bitcast' instructions. So
now we can only go GR32->VR128->FR32 or GR64->VR128->FR64. The instruction that
converts from GR32/GR64->VR128 has no special significance to the peephole pass
and won't be looked through.
I guess the other option would be to add support to copyPhysReg to just promote
the GR32->GR64 to a GR64->GR64 copy. The upper bits were basically undefined
anyway. But removing the CodeGenOnly instruction in favor of one that won't be
optimized seemed safer.
I deleted the peephole test because it couldn't be made to work with the bitcast
instructions removed.
The load versions of the instructions were unnecessary, as the pattern that selects
them contains a bitcasted load, which should never happen.
Simon Pilgrim [Sat, 27 Apr 2019 17:32:46 +0000 (17:32 +0000)]
[X86][AVX512] Improve vector bool reductions
As predicate masks are legal on AVX512 targets, we can avoid MOVMSK in these cases: just bitcast the bool vector to the integer equivalent directly, avoiding expansion of the reduction to a shuffle pattern.
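For illustration, with AVX512 intrinsics (a sketch, assuming a GCC/Clang-style compiler):

```cpp
#include <immintrin.h>

// On AVX512 a compare already yields a predicate mask - effectively
// the bool vector bitcast to an integer - so no MOVMSK or shuffle
// expansion is needed for the reduction.
int anyLaneEqual(__m512i A, __m512i B) {
  __mmask16 K = _mm512_cmpeq_epi32_mask(A, B); // 16 predicate bits
  return K != 0;                               // "or" reduction
}
```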
Simon Atanasyan [Sat, 27 Apr 2019 09:28:54 +0000 (09:28 +0000)]
[cmake] Disable a GCC optimization when building LLVM for MIPS
When compiling LLVM for MIPS, GCC can introduce a jump to an uninitialized
value when shrink wrapping is enabled. As shrink wrapping is enabled in
GCC at all optimization levels, it must be disabled. This bug exists for
all versions of GCC since 4.9.2.
This partially resolves PR37701 / GCC PR target/86069.
[X86] Use MOVQ for i64 atomic_stores when SSE2 is enabled
Summary: If we have SSE2 we can use a MOVQ to store 64 bits and avoid falling back to a cmpxchg8b loop. If it's a seq_cst store, we need to insert an mfence after the store.
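The affected source pattern might look like this (a sketch; the improvement applies when targeting 32-bit x86 with SSE2, where the store otherwise lowers to a cmpxchg8b loop):

```cpp
#include <atomic>
#include <cstdint>

// With SSE2 this can lower to a single MOVQ store; for seq_cst an
// MFENCE is emitted after the store to preserve ordering.
void store64(std::atomic<uint64_t> &Loc, uint64_t Val) {
  Loc.store(Val, std::memory_order_seq_cst);
}
```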
Lang Hames [Fri, 26 Apr 2019 22:58:39 +0000 (22:58 +0000)]
[ORC] Add a 'plugin' interface to ObjectLinkingLayer for events/configuration.
ObjectLinkingLayer::Plugin provides event notifications when objects are loaded,
emitted, and removed. It also provides a modifyPassConfig callback that allows
plugins to modify the JITLink pass configuration.
This patch moves eh-frame registration into its own plugin, and teaches
llvm-jitlink to only add that plugin when performing execution runs on
non-Windows platforms. This should allow us to re-enable the test case that was
removed in r359198.
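A rough sketch of what a plugin subclass might look like; only modifyPassConfig is named above, so the exact signature here is an assumption and may differ from the actual interface:

```cpp
#include "llvm/ExecutionEngine/Orc/ObjectLinkingLayer.h"
using namespace llvm;
using namespace llvm::orc;

// A plugin that tweaks the JITLink pass pipeline for every object.
class ExamplePlugin : public ObjectLinkingLayer::Plugin {
public:
  void modifyPassConfig(MaterializationResponsibility &MR, const Triple &TT,
                        jitlink::PassConfiguration &Config) override {
    // Append a custom post-layout JITLink pass here, e.g. to record
    // final section addresses.
  }
};
```

A plugin instance would then be attached to the ObjectLinkingLayer; the eh-frame registration plugin in this patch follows the same shape.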
[GlobalISel][AArch64] Use getConstantVRegValWithLookThrough for extracts
getConstantVRegValWithLookThrough does the same thing as the
getConstantValueForReg function, and has more visibility across GISel. Plus, it
supports looking through G_TRUNC, G_SEXT, and G_ZEXT. So, we get better code
reuse and more functionality for free by using it.
Add some test cases to select-extract-vector-elt.mir to show that we can now
look through those instructions.
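A minimal usage sketch, assuming the current GISel utility signature (an Optional<ValueAndVReg> keyed on the vreg):

```cpp
#include "llvm/CodeGen/GlobalISel/Utils.h"
#include "llvm/CodeGen/MachineRegisterInfo.h"
using namespace llvm;

// Resolve a vreg to a constant, looking through copies as well as
// G_TRUNC / G_SEXT / G_ZEXT on the way to the defining G_CONSTANT.
bool getConstantIndex(unsigned VReg, const MachineRegisterInfo &MRI,
                      int64_t &Out) {
  if (auto ValAndVReg = getConstantVRegValWithLookThrough(VReg, MRI)) {
    Out = ValAndVReg->Value;
    return true;
  }
  return false;
}
```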
Nick Desaulniers [Fri, 26 Apr 2019 18:45:04 +0000 (18:45 +0000)]
[AsmPrinter] refactor to support %c w/ GlobalAddress'
Summary:
Targets like ARM, MSP430, PPC, and SystemZ have complex behavior when
printing the address of a MachineOperand::MO_GlobalAddress. Move that
handling into a new overridden method in each target's subclass. A virtual
method was added to the base class for handling the generic case.
Refactors a few subclasses to support the target-independent %a, %c, and
%n.
The patch also contains small cleanups for AVRAsmPrinter and
SystemZAsmPrinter.
It seems that NVPTXTargetLowering may be missing some logic to transform
GlobalAddressSDNodes in TargetLowering::LowerAsmOperandForConstraint so
that it can handle "i" extended inline-assembly constraints.
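For context, %c prints a constant operand without immediate punctuation (no '$' on x86, no '#' on ARM), which is what makes a global address usable in spots like data directives. A generic, illustrative example:

```cpp
extern int SomeGlobal;

void emitAddress() {
  // "i" passes &SomeGlobal as an immediate; %c0 prints the bare
  // symbol, so the assembler sees ".quad SomeGlobal" rather than
  // ".quad $SomeGlobal".
  asm volatile(".quad %c0" ::"i"(&SomeGlobal));
}
```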
Add support for abs() to ConstantRange. This will allow handling of the
SPF_ABS select flavor in LVI and will also come in handy as a
primitive for the srem implementation.
The implementation is slightly tricky, because a) abs of signed min
is signed min and b) sign-wrapped ranges may have an abs() that is
smaller than a full range, so we need to explicitly handle them.
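Concretely, for 8-bit values (a sketch; the wrapping negation relies on two's complement conversion semantics):

```cpp
#include <cstdint>

int8_t abs8(int8_t X) {
  // Edge case (a): for X == -128 the negation overflows and converts
  // back to -128, so abs of signed min is signed min.
  return static_cast<int8_t>(X < 0 ? -X : X);
}

// Edge case (b): the sign-wrapped range [124, -124) contains
// {124..127, -128..-125}. Its abs() is {124..127} plus {-128}, far
// smaller than the full 8-bit range - treating wrapped ranges as
// "full set" would needlessly lose precision.
```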
Roland Froese [Fri, 26 Apr 2019 16:14:17 +0000 (16:14 +0000)]
[PowerPC] Update P9 vector costs for insert/extract element
The PPC vector cost model values for insert/extract element reflect older
processors that lacked vector insert/extract and move-to/move-from VSR
instructions. Update getVectorInstrCost to give appropriate values for when
the newer instructions are present.
Fangrui Song [Fri, 26 Apr 2019 16:01:48 +0000 (16:01 +0000)]
[llvm-nm] Fix handling of symbol types 't' 'd' 'r'
In addition, fix the two tests and convert them to be yaml2obj-based. This
allows us to delete two executables.
X86/weak.test: 'v' was not tested
X86/init-fini.test: symbol types of __bss_start _edata _end were wrong
GNU nm reports __init_array_start as 't', and __preinit_array_start as 'd'.
__init_array_start is 't' just because its section ".init_array" starts with ".init"
'd' makes more sense and allows us to drop the weird SHT_INIT_ARRAY rule.
So, change __init_array_start to 'd' instead.
Simon Pilgrim [Fri, 26 Apr 2019 10:49:13 +0000 (10:49 +0000)]
[X86][SSE] Disable shouldFoldConstantShiftPairToMask for btver1/btver2 targets (PR40758)
As detailed on PR40758, Bobcat/Jaguar can perform vector immediate shifts on the same pipes as vector ANDs with the same latency, so it doesn't make sense to replace a shl+lshr pair with a shift+and, which requires an additional mask constant (with the extra constant-pool load and register-pressure costs).
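The scalar shape of the trade-off, as a sketch (here the shift amounts are equal, so the pair folds to a pure mask):

```cpp
#include <cstdint>

// Both clear the top 8 bits: a shl+lshr pair versus a single AND.
// For vectors the AND's mask must come from the constant pool, which
// is why the shift pair is kept on Bobcat/Jaguar.
uint32_t viaShiftPair(uint32_t X) { return (X << 8) >> 8; }
uint32_t viaMask(uint32_t X) { return X & 0x00FFFFFFu; }
```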
Simon Pilgrim [Fri, 26 Apr 2019 09:56:14 +0000 (09:56 +0000)]
[X86][AVX] Combine shuffles extracted from a common vector
A small step towards combining shuffles across vector sizes - this recognizes when a shuffle's operands are all extracted from the same larger source and tries to combine to a unary shuffle of that source instead. Fixes one of the test cases from PR34380.
[InferAddressSpaces] Add AS parameter to the pass factory
This enables the pass to be used in the absence of
TargetTransformInfo. When the argument isn't passed, the factory
defaults to UninitializedAddressSpace and the flat address space is
obtained from the TargetTransformInfo as before this change. Existing
users won't have to change.
Hans Wennborg [Fri, 26 Apr 2019 08:31:00 +0000 (08:31 +0000)]
Fix alignment in AArch64InstructionSelector::emitConstantPoolEntry()
The code was using the alignment of a pointer to the value, not the
alignment of the constant itself.
Maybe we got away with it so far because the pointer alignment is
fairly high, but we did end up under-aligning <16 x i8> vectors,
which was caught in the Chromium build after lld stopped over-aligning
the .rodata.cst16 section in r356428. (See crbug.com/953815)
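The shape of the bug, as an illustrative sketch (simplified; not the actual selector code):

```cpp
#include "llvm/IR/Constant.h"
#include "llvm/IR/DataLayout.h"
using namespace llvm;

unsigned constantPoolAlignment(const DataLayout &DL, const Constant *CPVal) {
  // Wrong: the alignment of a *pointer to* the constant - typically 8
  // on a 64-bit target - which under-aligns a <16 x i8> that needs 16:
  //   DL.getPrefTypeAlignment(CPVal->getType()->getPointerTo());
  // Right: the alignment of the constant's own type:
  return DL.getPrefTypeAlignment(CPVal->getType());
}
```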
[GlobalISel] Fix inserting copies in the right position for reg definitions
When constrainRegClass is called, if the constraining happens on a use, the COPY
needs to be inserted before the instruction that contains the MachineOperand;
but if we are constraining a definition, it actually needs to be added
after the instruction. In addition, the COPY needs to have its operands
flipped (in the use case we are copying from the old unconstrained register
to the new constrained register, while in the definition case we are copying
from the new constrained register that the instruction defines to the old
unconstrained register).
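A simplified sketch of the placement and operand order (names and structure are illustrative, not the verbatim patch):

```cpp
#include "llvm/CodeGen/MachineInstrBuilder.h"
#include "llvm/CodeGen/TargetInstrInfo.h"
using namespace llvm;

void insertConstrainingCopy(MachineInstr &MI, MachineOperand &MO,
                            const TargetInstrInfo &TII, unsigned Reg,
                            unsigned ConstrainedReg) {
  if (MO.isUse()) {
    // Use: copy old unconstrained -> new constrained, before MI.
    BuildMI(*MI.getParent(), MI, MI.getDebugLoc(),
            TII.get(TargetOpcode::COPY), ConstrainedReg)
        .addReg(Reg);
  } else {
    // Def: copy new constrained -> old unconstrained, after MI, with
    // the operands flipped relative to the use case.
    BuildMI(*MI.getParent(), std::next(MI.getIterator()), MI.getDebugLoc(),
            TII.get(TargetOpcode::COPY), Reg)
        .addReg(ConstrainedReg);
  }
}
```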
Fangrui Song [Fri, 26 Apr 2019 02:10:10 +0000 (02:10 +0000)]
[llvm-objcopy] Accept --long-option but not -long-option
Summary:
llvm-{objcopy,strip} (and many other LLVM binary utilities) accept
cl::opt style -long-option as well as many short options (e.g. -p -S
-x). People who use them as replacement of GNU binutils often use the
grouped option syntax (POSIX Utility Conventions), e.g. -Sx => -S -x,
-Wd => -W -d, -sj.text => -s -j.text
There is ambiguity if a long option starts with the character used by a
short option. Drop the support for -long-option to resolve the ambiguity.
This divergence from other utilities is accepted (other utilities
continue supporting -long-option).
https://lists.llvm.org/pipermail/llvm-dev/2019-April/131786.html
[GlobalOpt] Swap the expensive check for cold calls with the cheap TTI check
isValidCandidateForColdCC is much more expensive than
TTI.useColdCCForColdCall, which by default just returns false. Avoid
doing this work if we're not going to look at the answer anyway.
This change is NFC, but I see significant compile time improvements on
some code with pathologically many functions.
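The reordering, sketched with simplified signatures (isValidCandidateForColdCC takes more parameters in reality):

```cpp
#include "llvm/Analysis/TargetTransformInfo.h"
#include "llvm/IR/Function.h"
using namespace llvm;

bool isValidCandidateForColdCC(Function &F); // expensive scan (simplified)

bool shouldUseColdCC(Function &F, const TargetTransformInfo &TTI) {
  // && short-circuits: the expensive candidate scan now runs only when
  // the target opts in to coldcc (the default implementation returns
  // false, so the scan is skipped entirely).
  return TTI.useColdCCForColdCall(F) && isValidCandidateForColdCC(F);
}
```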
Lang Hames [Thu, 25 Apr 2019 23:31:33 +0000 (23:31 +0000)]
[ORC] Remove symbols from dependency lists when failing materialization.
When failing materialization of a symbol X, remove X from the dependants list
of any of X's dependencies. This ensures that when X's dependencies are
emitted (or fail themselves) they do not try to access the no-longer-existing
MaterializationInfo for X.
These builtins provide access to the new integer and
sub-integer variants of MMA (matrix multiply-accumulate) instructions
provided by CUDA-10.x on sm_75 (AKA Turing) GPUs.
Also added a feature for PTX 6.4. While Clang/LLVM does not generate
any PTX instructions that need it, we still need to pass it through to
ptxas in order to be able to compile code that uses the new 'mma'
instruction as inline assembly (e.g. used by NVIDIA's CUTLASS library
https://github.com/NVIDIA/cutlass/blob/master/cutlass/arch/mma.h#L101)
PTX 6.3 extends the `wmma` instruction to support s8/u8/s4/u4/b1 -> s32.
All of the new instructions are still handled mostly by tablegen. I've slightly
refactored the code to drive intrinsic/instruction generation from a master
list of supported variants, so all irregularities have to be implemented in one place only.
The test generation script wmma.py has been refactored in a similar way.
[NVPTX] generate correct MMA instruction mnemonics with PTX63+.
PTX 6.3 requires using ".aligned" in the MMA instruction names.
In order to generate the correct names, we now pass the current
PTX version to each instruction as an extra constant operand,
and the InstPrinter adjusts its output accordingly.
[NVPTX] Refactor generation of MMA intrinsics and instructions. NFC.
Generalized constructions of 'fragments' of MMA operations to provide
common primitives for construction of the ops. This will make it easier
to add new variants of the instructions that operate on integer types.
Use nested foreach loops, which make it possible to better control
the naming of the intrinsics.
This patch does not affect LLVM's output, so there are no test changes.
[CMake][PowerPC] Recognize LLVM_NATIVE_TARGET="ppc64le" as PowerPC
Summary:
This value is derived from the host triple, which on the machine
I'm currently using is `ppc64le-linux-redhat`. This change makes
LLVM compile.
David Blaikie [Thu, 25 Apr 2019 20:09:00 +0000 (20:09 +0000)]
Assigning to a local object in a return statement prevents copy elision. NFC.
I added a diagnostic along the lines of `-Wpessimizing-move` to detect `return x = y` suppressing copy elision, but I don't know if the diagnostic is really worth it. Anyway, here are the places where my diagnostic reported that copy elision would have been possible if not for the assignment.
P1155R1 in the post-San-Diego WG21 (C++ committee) mailing discusses whether WG21 should fix this pitfall by just changing the core language to permit copy elision in cases like these.
(Kona update: The bulk of P1155 is proceeding to CWG review, but specifically *not* the parts that explored the notion of permitting copy-elision in these specific cases.)
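For example (illustrative names):

```cpp
#include <string>

std::string pessimized(const std::string &Y) {
  std::string X;
  return X = Y; // the assignment yields an lvalue: the return value is
                // copy-constructed; no NRVO, no implicit move
}

std::string preferred(const std::string &Y) {
  std::string X;
  X = Y;
  return X;     // a plain "return x;" is eligible for copy elision
}
```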
[ObjC][ARC] Let ARC optimizer bail out if the number of pointer states
it keeps track of becomes too large
ARC optimizer does a top-down and a bottom-up traversal of the whole
function to pair up retain and release instructions and remove them.
This can be expensive if the number of instructions in the function and
the number of pointer states it tracks are large, since it has to look at each pointer
state and determine whether the instruction being visited can
potentially use the pointer.
This patch adds a command line option that sets a limit to the number of
pointers it tracks.
[GlobalISel][AArch64] Add generic legalization rule for extends
This adds a legalization rule for G_ZEXT, G_ANYEXT, and G_SEXT which allows
extends whenever the types will fit in registers (or the source is an s1).
Update tests. Add GISel checks throughout all of arm64-vabs.ll,
where we now select a good portion of the code. Add GISel checks to
arm64-subvector-extend.ll, which has a good number of vector extends in it.
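The rule has roughly this shape (a sketch; the fits-in-registers bound below is a stand-in for the real predicate):

```cpp
// Inside an AArch64LegalizerInfo-style constructor (sketch):
getActionDefinitionsBuilder({G_ZEXT, G_SEXT, G_ANYEXT})
    .legalIf([](const LegalityQuery &Query) {
      LLT DstTy = Query.Types[0];
      LLT SrcTy = Query.Types[1];
      // Legal when the source is s1, or both types fit in registers
      // (the 128-bit bound here is a stand-in for the real check).
      return SrcTy == LLT::scalar(1) ||
             (DstTy.getSizeInBits() <= 128 && SrcTy.getSizeInBits() <= 128);
    });
```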
[SelectionDAG][X86] Use stack load/store in PromoteIntRes_BITCAST when the input needs to be split and the output type is a vector.
We had special-case handling here, but it used a scalar any_extend for the
promotion and then bitcast to the final type. This won't split up the input data
into multiple promoted elements as we need.
This patch falls back to doing the conversion through memory.
Fixes PR41594 which I believe was reflected in the bitcast-vector-bool.ll
changes. The changes to vector-half-conversions.ll are fixing a previously
unknown miscompile from this issue.
Robert Lougher [Thu, 25 Apr 2019 17:00:01 +0000 (17:00 +0000)]
[Evaluator] Walk initial elements when handling load through bitcast
When evaluating a store through a bitcast, the evaluator tries to move the
bitcast from the pointer onto the stored value. If the cast is invalid, it
tries to "introspect" the type to get a valid cast by obtaining a pointer to
the initial element (if the type is nested, this may require walking several
initial elements).
In some situations it is possible to get a bitcast on a load (e.g. with
unions, where the bitcast may not be the same type as the store). However,
the equivalent logic to introspect the type is missing for the load case. This patch
adds that logic.
Note, when developing the patch I was unhappy with adding similar logic
directly to the load case as it could get out of step. Instead, I have
abstracted the "introspection" into a helper function, with the specifics
being handled by a passed-in lambda function.
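The helper's shape might look something like this (names invented for illustration; the actual patch may differ):

```cpp
#include "llvm/ADT/STLExtras.h"
#include "llvm/IR/Constants.h"
using namespace llvm;

// Walk successive initial elements of a nested aggregate until the
// caller-supplied lambda produces a valid cast, sharing the logic
// between the load and store cases.
static Constant *
introspectType(Constant *C, Type *DestTy,
               function_ref<Constant *(Constant *, Type *)> TryCast) {
  while (C) {
    if (Constant *Res = TryCast(C, DestTy))
      return Res;
    // Descend into the first element; null if C isn't an aggregate.
    C = C->getAggregateElement(0u);
  }
  return nullptr;
}
```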