The variables MinGPR/MinG8R were not updated properly when resetting the
offsets, which in the included testcase led to saving the CR register
in the same location as R30.
Tom Stellard [Fri, 26 May 2017 14:32:08 +0000 (14:32 +0000)]
Merging part of r292188:
------------------------------------------------------------------------
r292188 | ab | 2017-01-16 22:10:02 -0500 (Mon, 16 Jan 2017) | 11 lines
[TLI] Add prototype checking for all remaining LibFuncs.
This is another step towards unifying all LibFunc prototype checks.
This work started in r267758 (D19469); add the remaining checks.
Also add a unittest that checks each libfunc declared with a known-valid
and known-invalid prototype. New libfuncs added in the future are
required to have prototype checking in place; the known-valid test will
fail otherwise.
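A minimal, non-LLVM sketch of the idea (the function name and the expected-signature table below are purely illustrative, not the actual TargetLibraryInfo code): compare a declared prototype against the signature we expect for that libfunc and reject anything else.

  #include <string>
  #include <unordered_map>
  #include <vector>

  // Illustrative stand-in for a function prototype.
  struct FuncType {
    std::string Return;
    std::vector<std::string> Params;
    bool operator==(const FuncType &O) const {
      return Return == O.Return && Params == O.Params;
    }
  };

  // Treat a declaration as the named libfunc only if its prototype matches
  // the one we expect (tiny illustrative table, not the real list).
  bool isValidProtoForLibFunc(const std::string &Name,
                              const FuncType &Declared) {
    static const std::unordered_map<std::string, FuncType> Expected = {
        {"memcpy", {"void*", {"void*", "const void*", "size_t"}}},
        {"strlen", {"size_t", {"const char*"}}},
    };
    auto It = Expected.find(Name);
    return It != Expected.end() && It->second == Declared;
  }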
[ARM] Temporarily disable globals promotion to constant pools to prevent miscompilation
Summary:
A temporary workaround for PR32780 - rematerialized instructions accessing the same promoted global through different constant pool entries.
The patch turns off the globals promotion optimization leaving all its code in place, so that it can be easily turned on once PR32780 is fixed.
Since this is a miscompilation issue causing generation of misbehaving code, and the problem is very subtle, the patch might be valuable enough to get into 4.0.1.
[ARM] Clear the constant pool cache on explicit .ltorg directives
Multiple ldr pseudoinstructions with the same constant value will
reuse the same constant pool entry. However, if the constant pool
is explicitly flushed with a .ltorg directive, we should not try
to reference constants in the previous pool any longer, since they
may be out of range.
This fixes assembling hand-written assembler source which repeatedly
loads the same constant value, across a binary size larger than the
pc-relative fixup range for ldr instructions (4096 bytes). Such
assembler source already uses explicit .ltorg directives to emit
constant pools at regular intervals. However, if we try to reuse
constants emitted in earlier pools, they end up out of range.
This makes the output of the testcase match what binutils gas does
(prior to this patch, it would fail to assemble).
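As a rough illustration of the caching behaviour being fixed (a toy model, not the actual MC constant-pool code): identical ldr constants share one pool entry, and an explicit flush must also forget the cached entries so later loads get fresh, in-range entries.

  #include <cstdint>
  #include <map>
  #include <vector>

  // Toy per-section constant pool: maps a constant value to the index of its
  // entry so identical ldr pseudo-instructions can share a single entry.
  class ConstantPoolCache {
    std::map<uint64_t, unsigned> Cache; // value -> entry index
    std::vector<uint64_t> Entries;      // entries of the *current* pool

  public:
    // Reuse an entry from the current pool when possible.
    unsigned getOrCreateEntry(uint64_t Value) {
      auto It = Cache.find(Value);
      if (It != Cache.end())
        return It->second;
      unsigned Index = Entries.size();
      Entries.push_back(Value);
      Cache.emplace(Value, Index);
      return Index;
    }

    // Explicit ".ltorg": emit the accumulated entries and, crucially, clear
    // the cache as well, so later loads never reference a (possibly
    // out-of-range) entry from an already-emitted pool.
    std::vector<uint64_t> flush() {
      std::vector<uint64_t> Emitted = std::move(Entries);
      Entries.clear();
      Cache.clear(); // the fix: without this, stale entries get reused
      return Emitted;
    }
  };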
Summary:
The add_tablegen macro defines its own install target, and it was also calling
add_llvm_utility, which adds another install target.
Configuring with -DLLVM_TOOLS_INSTALL_DIR set to something other than
'bin' along with -DLLVM_INSTALL_UTILS=ON was causing llvm-tblgen
to be installed to two separate directories.
[CMake] Fix pthread handling for out-of-tree builds
LLVM defines `PTHREAD_LIB` which is used by AddLLVM.cmake and various projects
to correctly link the threading library when needed. Unfortunately
`PTHREAD_LIB` is defined by LLVM's `config-ix.cmake` file which isn't installed
and therefore can't be used when configuring out-of-tree builds. This causes
such builds to fail since `pthread` isn't being correctly linked.
This patch attempts to fix that problem by renaming `PTHREAD_LIB` to
`LLVM_PTHREAD_LIB` and exporting it as part of `LLVMConfig.cmake`. I renamed
`PTHREAD_LIB` because it seemed likely to cause collisions with downstream
users of `LLVMConfig.cmake`.
------------------------------------------------------------------------
[Hexagon] Fix lowering of formal arguments of type i1
On Hexagon, values of type i1 are passed in registers of type i32,
even though i1 is not a legal type for these registers. This is a
special case and needs special handling to maintain consistency of
the lowering information.
This fixes PR32089.
------------------------------------------------------------------------
Adding const overloads of operator* and operator-> for DenseSet iterators
This fixes some problems when building ClangDiagnostics.cpp with Visual Studio 2017 RC. As far as I understand, there was a change in the implementation of the constructor for std::vector with two iterator parameters, which in our case causes an attempt to dereference const Iterator objects. Since there was no overload of these operators for a const Iterator, compilation would fail.
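A minimal illustration of the shape of the fix (a toy iterator, not the DenseSet code itself): a const iterator object must still be dereferenceable, so operator* and operator-> gain const-qualified overloads alongside the existing non-const ones.

  #include <cassert>
  #include <vector>

  // Toy iterator over contiguous storage, with const overloads of
  // operator* / operator-> so that dereferencing a *const* iterator compiles.
  template <typename T> class SimpleIter {
    T *Ptr;

  public:
    explicit SimpleIter(T *P) : Ptr(P) {}

    T &operator*() { return *Ptr; }
    T *operator->() { return Ptr; }

    // The added overloads: without these, code that only holds a
    // `const SimpleIter &` (as the std::vector constructor described above
    // effectively does) fails to compile when it dereferences the iterator.
    const T &operator*() const { return *Ptr; }
    const T *operator->() const { return Ptr; }
  };

  int main() {
    std::vector<int> V{1, 2, 3};
    const SimpleIter<int> CI(V.data());
    assert(*CI == 1); // resolves to the const operator*
    return 0;
  }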
Fix invalid addrspacecast due to combining alloca with global var
For function-scope variables with a large initialization list, the FE usually
generates a global variable to hold the initializer, then generates
memcpy intrinsic to initialize the alloca. InstCombiner::visitAllocaInst
identifies such allocas which are accessed only by reading and replaces
them with the global variable. This is done by casting the global variable
to the type of the alloca and replacing all references.
However, when the global variable is in a different address space that is
disjoint from addr space 0 (e.g. for IR generated from OpenCL, a global
variable cannot be in the private addr space, i.e. addr space 0), casting
the global variable to addr space 0 results in invalid IR for certain
targets (e.g. amdgpu).
To fix this issue, when the global variable is not in addr space 0,
instead of casting it to addr space 0, this patch chases down the uses
of the alloca until reaching the load instructions, then replaces loads
from the alloca with loads from the global variable. If bitcasts and GEPs
are encountered during the chase, new bitcasts and GEPs based on the
global variable are generated and used in the load instructions.
Use correct registers for "A" inline asm constraint
Summary:
In PR32594, inline assembly using the 'A' constraint on x86_64 causes
llvm to crash with a "Cannot select" stack trace. This is because
`X86TargetLowering::getRegForInlineAsmConstraint` hardcodes that 'A'
means the EAX and EDX registers.
However, on x86_64 it means the RAX and RDX registers, and on 16-bit x86
(ia16?) it means the old AX and DX registers.
Add new register classes in `X86RegisterInfo.td` to support these cases,
and amend the logic in `getRegForInlineAsmConstraint` to cope with
different subtargets. Also add a test case, derived from PR32594.
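For reference, this is how the constraint is typically used in source (standard GNU inline-asm usage, not code from the patch); built with -m32, the "A" constraint binds a 64-bit value to the EDX:EAX pair, and PR32594 is about the same constraint selecting RAX/RDX on x86_64 instead of crashing instruction selection.

  #include <cstdint>
  #include <cstdio>

  // 32-bit x86 example: mull produces a 64-bit product in EDX:EAX, which the
  // "A" constraint exposes as a single 64-bit read-write operand.
  static uint64_t mul32x32(uint32_t A, uint32_t B) {
    uint64_t Product = A; // low half lands in EAX via the "A" constraint
    __asm__("mull %1" : "+A"(Product) : "r"(B) : "cc");
    return Product;
  }

  int main() {
    std::printf("%llu\n", (unsigned long long)mul32x32(100000, 100000));
    return 0;
  }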
[mips][mc] Fix a crash when disassembling odd sized sections
Make the MIPS disassembler consistent with the other targets in returning
a Size of zero when the input buffer cannot contain an instruction due
to its size. Previously it reported the minimum instruction size when
it failed because the buffer was not big enough to hold an instruction,
causing llvm-objdump to crash when disassembling all sections.
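A minimal non-LLVM sketch of the convention being adopted: when the remaining bytes cannot hold an instruction, report a Size of zero (rather than the minimum instruction width) so a driver such as llvm-objdump stops cleanly instead of over-reading.

  #include <cstddef>
  #include <cstdint>

  enum class DecodeStatus { Success, Fail };

  // Decode one fixed-width (assumed 4-byte) instruction from a buffer.
  DecodeStatus decodeOne(const uint8_t *Bytes, size_t BytesLeft,
                         uint64_t &Size, uint32_t &Insn) {
    const size_t MinInsnBytes = 4;
    if (BytesLeft < MinInsnBytes) {
      Size = 0; // key point: zero, not MinInsnBytes
      return DecodeStatus::Fail;
    }
    Size = MinInsnBytes;
    Insn = Bytes[0] | (uint32_t(Bytes[1]) << 8) | (uint32_t(Bytes[2]) << 16) |
           (uint32_t(Bytes[3]) << 24);
    return DecodeStatus::Success;
  }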
[SCEV] Decrease the recursion threshold for CompareValueComplexity
Fixes PR32142.
r287232 accidentally increased the recursion threshold for
CompareValueComplexity from 2 to 32. This change reverts that bump by
introducing a separate flag for CompareValueComplexity's threshold.
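A sketch of the kind of flag this introduces (the option name below is illustrative, not necessarily the one in the patch): a dedicated, hidden cl::opt so CompareValueComplexity's recursion depth defaults back to 2 independently of the larger SCEV complexity threshold.

  #include "llvm/Support/CommandLine.h"
  using namespace llvm;

  static cl::opt<unsigned> MaxValueCompareDepth(
      "scev-max-value-compare-depth", cl::Hidden,
      cl::desc("Maximum recursion depth for CompareValueComplexity"),
      cl::init(2));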
------------------------------------------------------------------------
[ExecutionDepsFix] Don't make copies of LiveReg objects when collecting operands for soft instructions
Summary:
While collecting operands we make copies of the LiveReg objects which are stored in the LiveRegs array. If the instruction uses the same register multiple times we end up with multiple copies. Later we iterate through the collected list of LiveReg objects and merge DomainValues. In the process of doing this the merge function can change the contents of the original LiveReg object in the LiveRegs array, but not the copies that have been made. So when we get to the second usage of the register we end up seeing a stale copy of the LiveReg object.
To fix this I've stopped copying and now just store a pointer to the original LiveReg object. Another option might be to avoid adding the same register to the Regs array twice, but this approach seemed simpler.
The included test case exposes this bug due to an AVX-512 masked OR instruction using the same register for the passthru operand and one of the inputs to the OR operation.
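The failure mode generalizes beyond ExecutionDepsFix; a small standalone C++ illustration of copies going stale versus pointers staying current:

  #include <cassert>
  #include <vector>

  struct LiveRegInfo { int DomainValue = 0; };

  int main() {
    std::vector<LiveRegInfo> LiveRegs(4);

    // Buggy pattern: operand collection stores copies of the array elements.
    std::vector<LiveRegInfo> Copies{LiveRegs[2], LiveRegs[2]}; // same reg twice
    LiveRegs[2].DomainValue = 7;        // a later merge updates the original...
    assert(Copies[1].DomainValue == 0); // ...but the second copy is stale

    // Fixed pattern: store pointers to the original entries instead.
    std::vector<LiveRegInfo *> Ptrs{&LiveRegs[2], &LiveRegs[2]};
    LiveRegs[2].DomainValue = 9;
    assert(Ptrs[1]->DomainValue == 9);  // always sees the up-to-date value
    return 0;
  }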
Hans Wennborg [Mon, 27 Feb 2017 17:01:04 +0000 (17:01 +0000)]
Merging r295116:
------------------------------------------------------------------------
r295116 | dim | 2017-02-14 14:49:49 -0800 (Tue, 14 Feb 2017) | 18 lines
Disable wrapping llvm-xray YAML output
Summary:
The YAML output produced by llvm-xray is supposed to be wrapped at the
arbitrary default of 70 columns set by `yaml::Output`. Unfortunately,
the wrapping is rather unpredictable, and can easily go past the set
number of columns, depending on the execution environment.
To make the YAML output environment-independent, disable wrapping
instead.
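A hedged sketch of the change's effect (the Record type is hypothetical, and it assumes the third yaml::Output constructor argument is the wrap column): passing a wrap column of 0 turns wrapping off entirely.

  #include "llvm/Support/YAMLTraits.h"
  #include "llvm/Support/raw_ostream.h"
  #include <cstdint>
  #include <string>

  struct Record {
    uint64_t Id;
    std::string Name;
  };

  namespace llvm {
  namespace yaml {
  template <> struct MappingTraits<Record> {
    static void mapping(IO &IO, Record &R) {
      IO.mapRequired("id", R.Id);
      IO.mapRequired("name", R.Name);
    }
  };
  } // namespace yaml
  } // namespace llvm

  int main() {
    Record R{42, "example"};
    // WrapColumn = 0 disables wrapping, making the output independent of the
    // execution environment (the default would be 70 columns).
    llvm::yaml::Output Out(llvm::outs(), /*Ctxt=*/nullptr, /*WrapColumn=*/0);
    Out << R;
    return 0;
  }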
[SLPVectorizer] Improved support of partial tree vectorization.
Currently the SLP vectorizer tries to vectorize a binary operation and dies
immediately after the first unsuccessful attempt. This patch tries to
improve the situation by trying to vectorize all binary operations of all
children nodes in the binop tree.
[Reassociate] Add negated value of negative constant to the Duplicates list.
In OptimizeAdd, we scan the operand list to see if there are any common factors
between operands that can be factored out to reduce the number of multiplies
(e.g., 'A*A+A*B*C+D' -> 'A*(A+B*C)+D'). For each operand of the operand list, we
only consider unique factors (which is tracked by the Duplicate set). Now if we
find a factor that is a negative constant, we add the negated value as a factor
as well, because we can percolate the negate out. However, we mistakenly don't
add this negated constant to the Duplicates set.
Consider the expression A*2*-2 + B. Obviously, nothing to factor.
Without this change, we overcount 2 as a factor for the added value A*2*-2,
which causes the assert reported in PR30256. The problem is that this code is
assuming that all the multiply operands of the add are already reassociated.
This change avoids the issue by making OptimizeAdd tolerate multiplies which
haven't been completely optimized; this sort of works, but we're doing wasted
work: we'll end up revisiting the add later anyway.
Another possible approach would be to enforce RPO iteration order more strongly.
If we have RedoInsts, we process them immediately in RPO order, rather than
waiting until we've finished processing the whole function. Intuitively, it
seems like the natural approach: reassociation works on expression trees, so
the optimization only works in one direction. That said, I'm not sure how
practical that is given the current Reassociate; the "optimal" form for an
expression depends on its use list (see all the uses of "user_back()"), so
Reassociate is really an iterative optimization of sorts, so any changes here
would probably get messy.
Hans Wennborg [Fri, 24 Feb 2017 18:37:22 +0000 (18:37 +0000)]
Merging r296030:
------------------------------------------------------------------------
r296030 | hans | 2017-02-23 14:29:00 -0800 (Thu, 23 Feb 2017) | 7 lines
Revert r282872 "CVP. Turn marking adds as no wrap on by default"
While not CVP's fault, this caused miscompiles (PR31181). Reverting
until those are resolved.
(This also reverts the follow-ups r288154 and r288161 which removed the
flag.)
------------------------------------------------------------------------
The address of an alias of a global with an offset is incorrectly lowered as the address of the global (i.e., ignoring the offset).
------------------------------------------------------------------------
Hans Wennborg [Thu, 23 Feb 2017 00:14:14 +0000 (00:14 +0000)]
Backport r293433, ARM: support `-mlong-calls` with AEABI TLS on ELF
Support lowering AEABI TLS access (__aeabi_read_tp) with long calls.
This requires adjusting the call sequence to use an indirect call to get
full addressability.
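A tiny illustration (not from the patch; the build flags are an assumption about one configuration that exercises this path): any access to a TLS variable on an ARM/EABI target that reads the thread pointer through the __aeabi_read_tp helper now has to reach that helper via an indirect call when -mlong-calls is in effect.

  // Build sketch:
  //   clang --target=armv7-linux-gnueabi -mtp=soft -mlong-calls -O2 -S tls.cpp
  // Accessing `Counter` lowers to a call to __aeabi_read_tp; with -mlong-calls
  // the backend must materialize the helper's address in a register and call
  // it indirectly rather than with a direct BL.
  __thread int Counter;

  int bump() { return ++Counter; }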
[LICM] When we are recomputing the alias sets for a subloop, we cannot
skip sub-subloops.
The logic to skip subloops dated from when this code was shared with the
cached case. Once it was factored out to only run in the case of
recomputed subloops it became a dangerous bug. If a sub-subloop contained
an interfering instruction, it would be silently omitted from the alias
sets used by LICM.
With the old pass manager this was extremely hard to trigger as it would
require failing to visit these subloops with the LICM pass but then
visiting the outer loop somehow. I've not yet contrived any test case
that actually manages to trigger this.
But with the new pass manager we don't do the cross-loop caching hack
that the old PM does and so we recompute alias set information from
first principles. While this seems much cleaner and simpler it exposed
this bug and would subtly miscompile code due to failing to correctly
model the aliasing constraints of deeply nested loops.
------------------------------------------------------------------------
Hans Wennborg [Tue, 21 Feb 2017 18:53:27 +0000 (18:53 +0000)]
Merging r295486 and r295490:
------------------------------------------------------------------------
r295486 | adrian | 2017-02-17 11:42:32 -0800 (Fri, 17 Feb 2017) | 6 lines
Debug Info: Sort frame index expressions before emitting them.
This fixes PR31381, which caused an assertion and/or invalid debug info.
This affects debug variables that have multiple fragments in the MMI-side
table (i.e., in the stack frame).
rdar://problem/30571676
------------------------------------------------------------------------
------------------------------------------------------------------------
r295490 | adrian | 2017-02-17 12:02:26 -0800 (Fri, 17 Feb 2017) | 1 line
Fix windows bots by locking down the target triple on this testcase.
------------------------------------------------------------------------
[LoopUnroll] Properly update loopinfo for runtime unrolling by 2
Even when we don't create a remainder loop (that is, when we unroll by 2), we
may duplicate nested loops into the remainder. This is complicated by the
fact that the remainder may itself be either inserted into an outer loop, or
at the top
level. In the latter case, we may need to create new top-level loops.
They are register promoted by ISel and so it makes no sense to treat them as
memory.
Inserting calls to the thread sanitizer would also generate invalid IR.
You would hit:
"swifterror value can only be loaded and stored from, or as a swifterror
argument!"
------------------------------------------------------------------------
[DAG] Don't try to create an INSERT_SUBVECTOR with an illegal source
We currently can't legalize those, but we should really not be creating
them in the first place, since legalization would probably look similar to the
way we legalize CONCAT_VECTORS - basically replace the INSERT with a BUILD.
Silence some Sphinx diagnostics in an attempt to get the documentation builder back to green (http://lab.llvm.org:8011/builders/llvm-sphinx-docs/builds/1895).
------------------------------------------------------------------------
[SelectionDAG] In InstrEmitter, handle EXTRACT_SUBREG of a physical register.
Summary:
Without this change, the getVR() call would hit an assert since it was
being passed a physical register.
Update the AArch64/ldst-opt.ll test with a case that triggers this
behavior by adding a run with strict-align, which causes an unaligned
STR XZR instruction to be split into byte stores, creating an
EXTRACT_SUBREG of XZR that triggers the original problem.
[ARM/AArch ISel] SwiftCC: First parameters that are marked swiftself are not 'this returns'
We mark X0 as preserved by a call that passes the returned parameter.
x0 = ...
fun(x0) // no implicit def of x0
This is no longer valid if we pass the parameter in a different register than
the returned value, as is the case with a swiftself parameter (passed in x20).
x20 = ...
fun(x20) // there should be an implicit def of x8
SwiftCC: swifterror register cannot be used as the base register
Functions that have a dynamic alloca require a base register which is defined to
be X19 on AArch64 and r6 on ARM. We have defined the swifterror register to be
the same register. Use a different callee-saved register for swifterror instead:
Hans Wennborg [Wed, 8 Feb 2017 17:02:53 +0000 (17:02 +0000)]
Merging r294349 and r294357:
------------------------------------------------------------------------
r294349 | dexonsmith | 2017-02-07 13:03:50 -0800 (Tue, 07 Feb 2017) | 12 lines
ADT: Add explicit conversions for reverse ilist iterators
Add explicit conversions between forward and reverse ilist iterators.
These follow the conversion conventions of std::reverse_iterator, which
are off-by-one: the newly-constructed "reverse" iterator dereferences to
the previous node of the one sent in. This has the benefit of
converting reverse ranges in place:
- If [I, E) is a valid range,
- then [reverse(E), reverse(I)) gives the same range in reverse order.
ilist_iterator::getReverse() is unchanged: it returns a reverse iterator
to the *same* node.
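The convention can be demonstrated with plain std::reverse_iterator, which follows the same off-by-one rule:

  #include <cassert>
  #include <iterator>
  #include <vector>

  int main() {
    std::vector<int> V{1, 2, 3, 4};

    auto I = V.begin() + 1; // forward range [I, E) covers {2, 3, 4}
    auto E = V.end();

    // reverse(X) dereferences to the element *before* X, so
    // [reverse(E), reverse(I)) is the same range in reverse order.
    std::reverse_iterator<decltype(E)> RB(E); // dereferences to 4
    std::reverse_iterator<decltype(I)> RE(I); // one past 2

    std::vector<int> Reversed(RB, RE);
    assert((Reversed == std::vector<int>{4, 3, 2}));
    return 0;
  }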
------------------------------------------------------------------------
------------------------------------------------------------------------
r294357 | dblaikie | 2017-02-07 13:31:03 -0800 (Tue, 07 Feb 2017) | 1 line
Fix some missing negations in the traits checking from r294349
------------------------------------------------------------------------
Hans Wennborg [Tue, 7 Feb 2017 21:15:12 +0000 (21:15 +0000)]
Merging r294318:
------------------------------------------------------------------------
r294318 | adrian | 2017-02-07 09:35:41 -0800 (Tue, 07 Feb 2017) | 12 lines
Fix the bitcode upgrade for DIGlobalVariable in a DIImportedEntity context.
The bitcode upgrade for DIGlobalVariable unconditionally wrapped
DIGlobalVariables in a DIGlobalVariableExpression. When a
DIGlobalVariable is referenced by a DIImportedEntity, however, this is
wrong. This patch fixes the bitcode upgrade by deferring the creation
of DIGlobalVariableExpressions until we know the context of the
DIGlobalVariable.
Revert r293017 and fix the actual underlying issue.
The patch committed in r293017, as discussed on the list, doesn't really
make sense but was causing an actual issue to go away.
The issue turns out to be that in one place the extra template arguments
were dropped from the OuterAnalysisManagerProxy. This in turn caused the
types used to access the key in one set of places to be completely different
from the types used in another set of places, for both the Loop and CGSCC
cases where there are extra arguments.
I have literally no idea how anything seemed to work with this bug in
place. It blows my mind. But it did, except for mingw64 in a DLL build.
I've added a really handy static assert that helps ensure we don't break
this in the future. It immediately diagnoses the issue with a compile
failure and a very clear error message. Much better than staring at
backtraces on a build bot. =]
------------------------------------------------------------------------
[AArch64] Fix incorrect MachinePointerInfo in splitStoreSplat
When splitting up one store into several in splitStoreSplat we have to
make sure we get the MachinePointerInfo right, otherwise alias
analysis thinks they all store to the same location. This can then
cause invalid scheduling later on.
[InstCombine] move icmp transforms that might be recognized as min/max and inf-loop (PR31751)
This is a minimal patch to avoid the infinite loop in:
https://llvm.org/bugs/show_bug.cgi?id=31751
But the general problem is bigger: we're not canonicalizing all of the min/max forms reported
by value tracking's matchSelectPattern(), and we don't define min/max consistently. Some code
uses matchSelectPattern(), other code uses matchers like m_Umax, and others have their own
inline definitions which may be subtly different from any of the above.
The reason that the test cases in this patch need a cast op to trigger is because we don't
(yet) canonicalize all min/max forms based on matchSelectPattern() in
canonicalizeMinMaxWithConstant(), but we do make min/max+cast transforms based on
matchSelectPattern() in visitSelectInst().
The location of the icmp transforms that trigger the inf-loop seems arbitrary at best, so
I'm moving those behind the min/max fence in visitICmpInst() as the quick fix.
LSR: Don't drop address space when type doesn't match
For targets with different addressing modes in each address space,
if the address space is dropped, querying isLegalAddressingMode later
will give a nonsense result, breaking the isLegalUse assertions.
This is a candidate for the 4.0 release branch.
------------------------------------------------------------------------
[ARM/AArch64] Relocate and update InterleavedAccessPass tests (NFC)
The interleaved access pass is an IR-to-IR transformation that runs before code
generation. It matches interleaved memory operations to target-specific
intrinsics (that are later lowered to load and store multiple instructions on
ARM/AArch64). We place tests for similar passes (e.g., GlobalMergePass) under
test/Transforms. This patch moves the InterleavedAccessPass tests out of
test/CodeGen and into target-specific directories under
test/Transforms/InterleavedAccess.
Although the pass is an IR pass, many of the existing tests were llc tests
rather than opt tests. For example, the tests would check for ldN/stN instructions
generated by llc rather than the intrinsic calls the pass actually inserts.
Thus, this patch updates all tests to be opt tests that check for the inserted
intrinsics. We already have separate CodeGen tests that ensure we lower the
interleaved access intrinsics to their corresponding ldN/stN instructions. In
addition to migrating the tests to opt, this patch also performs some minor
clean-up (to ensure consistent naming, etc.).
[InstCombine] Make sure that LHS and RHS have the same type in
transformToIndexedCompare
If they don't have the same type, the size of the constant
index would need to be adjusted (and this wouldn't always be
possible).
Alternatively we could try the analysis with the initial
RHS value, which would guarantee that the two sides have
the same type. However, it is unlikely that in practice this
would pass our transformation requirements.