[lit] Make lit support config files with .py extension.
Many editors and Python-related diagnostic tools such as
debuggers break or fail in mysterious ways when Python files
don't end in .py. This is especially true on Windows, but it
still happens on other platforms. I don't want to be too
heavy-handed in changing everything across the board, but I do
want to at least *allow* lit configs to have .py extensions. This
patch makes the discovery process first look for a config file
with a .py extension, and if one is not found, then looks for
a config file using the old method. So for existing users, there
should be no functional change.
Dave Lee [Wed, 20 Sep 2017 22:41:34 +0000 (22:41 +0000)]
Remove references to response file argument in CommandLine.rst
Summary:
The documentation refers to a boolean that controls whether response files are
handled, but this is incorrect. Since r165535, response files are always
enabled.
I noticed this inefficiency while investigating PR34603:
https://bugs.llvm.org/show_bug.cgi?id=34603
This fix will likely push another bug (we don't maintain state of 'LateSimplifyCFG')
into hiding, but I'll try to clean that up with a follow-up patch anyway.
[IR] Add llvm.dbg.addr, a control-dependent version of llvm.dbg.declare
Summary:
This implements the design discussed on llvm-dev for better tracking of
variables that live in memory through optimizations:
http://lists.llvm.org/pipermail/llvm-dev/2017-September/117222.html
This is tracked as PR34136
llvm.dbg.addr is intended to be produced and used in almost precisely
the same way as llvm.dbg.declare is today, with the exception that it is
control-dependent. That means that dbg.addr should always have a
position in the instruction stream, and it will allow passes that
optimize memory operations on local variables to insert llvm.dbg.value
calls to reflect deleted stores. See SourceLevelDebugging.rst for more
details.
The main drawback to generating DBG_VALUE machine instrs is that they
usually cause LLVM to emit a location list for DW_AT_location. The next
step will be to teach DwarfDebug.cpp how to recognize more DBG_VALUE
ranges as not needing a location list, and possibly start setting
DW_AT_start_scope for variables whose lifetimes begin mid-scope.
This reverts commit SVN r313668. The original test case attempted to
write a pointer value into 16 bits, although the value may exceed the
range representable in 16 bits. Ensure that the symbol is located in
the address space such that its absolute address is representable in
16 bits. This should fix the assertion failure that was seen on the
Windows hosts.
We already folded (X & C2) > C1 --> (X & C2) != 0 when any bit set in (X & C2) produces a result greater than C1. There is an equivalent inverse condition: (X & C2) <= C1 (which will be canonicalized to < C1+1).
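A standalone brute-force check of the identity behind this fold, with constants chosen for illustration (this is not InstCombine code): when every nonzero value of (X & C2) already exceeds C1, the compare collapses to a zero test, and the inverse <= C1 form collapses to an equality test.

    #include <cassert>

    int main() {
      // Example constants: the lowest set bit of C2 (0x10) is already greater
      // than C1 (0x0F), so any nonzero value of (X & C2) exceeds C1.
      const unsigned C2 = 0x30, C1 = 0x0F;
      for (unsigned X = 0; X <= 0xFF; ++X) {
        unsigned Masked = X & C2;
        assert((Masked > C1) == (Masked != 0));     // (X & C2) > C1 --> (X & C2) != 0
        assert((Masked < C1 + 1) == (Masked == 0)); // inverse form with < C1+1
      }
      return 0;
    }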
The Swift CC is identical to the Win64 CC with the exception of the swift
error value being passed in r12, which is a CSR. However, since this calling
convention is only used in swift -> swift code, it does not impact
interoperability and can be treated entirely as Win64 CC. We would
previously lower the frame setup incorrectly, as we did not treat the
frame as conforming to Win64 specifications.
Matt Arsenault [Wed, 20 Sep 2017 20:53:49 +0000 (20:53 +0000)]
AMDGPU: Add tied operands to v_mad_mix{lo|hi}_f16
These write to the low or high half of the destination
register and leave the other 16 bits unchanged. This is true
for most 16-bit instructions on gfx9, but we don't make use of
that yet.
Introduce the llvm-cfi-verify tool (resubmission of D37937).
Summary: Resubmission of D37937. Fixed i386 target building (conversion from std::size_t& to uint64_t& failed). Fixed documentation warning failure about docs/CFIVerify.rst not being in the tree.
Matt Arsenault [Wed, 20 Sep 2017 20:28:39 +0000 (20:28 +0000)]
AMDGPU: Start selecting v_mad_mixlo_f16
Also add some tests that should be able to use v_mad_mixhi_f16,
but do not yet. This is trickier because we don't really model
the partial update of the register done by 16-bit instructions.
Introduce the llvm-cfi-verify tool (resubmission of D37937).
Summary: Resubmission of D37937. Fixed i386 target building (conversion from std::size_t& to uint64_t& failed). Fixed documentation warning failure about docs/CFIVerify.rst not being in the tree.
CodeGen: support SwiftError SwiftCC on Windows x64
Add support for passing SwiftError through a register on the Windows x64
calling convention. This allows the use of swifterror attributes on
parameters which is used by the swift front end for the `Error`
parameter. This partially enables building the swift standard library
for Windows x86_64.
Marek Sokolowski [Wed, 20 Sep 2017 18:33:35 +0000 (18:33 +0000)]
[llvm-readobj] Teach readobj to dump .res files (WindowsResource).
This enables readobj to dump Windows resource files (.res). This way,
we'll be able to test .res outputs without comparing them byte-by-byte
with "magic binary files" generated by the MS toolchain.
Re-land "[DebugInfo] Insert DW_OP_deref when spilling indirect DBG_VALUEs"
After r313775, it's easier to maintain a parallel BitVector of spilled
locations indexed by location number.
I wasn't able to build a good reduced test case for this iteration of
the bug, but I added a more direct assertion that spilled values must
use frame index locations. If this bug reappears, it will fire not only on
the NEON vector code where we detected it, but also on medium-sized
integer-only programs.
This broke the buildbots, e.g.
http://bb.pgr.jp/builders/test-llvm-i686-linux-RA/builds/391
> Summary:
> This patch tries to vectorize loads of consecutive memory accesses, accessed
> in a non-consecutive or jumbled way. An earlier attempt was made with patch D26905,
> which was reverted due to a basic issue with representing the 'use mask' of
> jumbled accesses.
>
> This patch fixes the mask representation by recording the 'use mask' in the usertree entry.
>
> Subscribers: mzolotukhin
>
> Reviewed By: ayal
>
> Differential Revision: https://reviews.llvm.org/D36130
>
> Review comments updated accordingly
>
> Added a TODO for sortLoadAccesses API
>
> Modified the TODO for sortLoadAccesses API
>
> Review comment update for using OpdNum to insert the mask in respective location
>
> Fixes '-Wsign-compare warning' in LoopAccessAnalysis.cpp and code rebase
[DebugInfo] Use a MapVector to coalesce MachineOperand locations
Summary:
The new code should be linear in the number of DBG_VALUEs, while the old
code was quadratic. NFC intended.
This is also hopefully a more direct expression of the problem, which is
to:
1. Rewrite all virtual register operands to stack slots or physical
registers
2. Uniquely number those machine operands, assigning them location
numbers
3. Rewrite all uses of the old location numbers in the interval map to
use the new location numbers
In r313400, I attempted to track which locations were spilled in a
parallel bitvector indexed by location number. My code was broken
because these location numbers are not stable during rewriting.
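A self-contained sketch of the uniquing step in item 2, using standard containers rather than the real llvm::MapVector or LiveDebugVariables types: insertion order gives each distinct location a stable number, so repeated operands coalesce onto a single slot.

    #include <cassert>
    #include <map>
    #include <string>
    #include <vector>

    // Assigns a stable, insertion-ordered number to each distinct location.
    struct LocationTable {
      std::map<std::string, unsigned> Index; // location -> number
      std::vector<std::string> Locations;    // number -> location

      unsigned getOrCreate(const std::string &Loc) {
        auto It = Index.find(Loc);
        if (It != Index.end())
          return It->second;                 // coalesce duplicates
        unsigned Num = Locations.size();
        Index.emplace(Loc, Num);
        Locations.push_back(Loc);
        return Num;
      }
    };

    int main() {
      LocationTable T;
      unsigned A = T.getOrCreate("fi#2");    // e.g. a stack slot
      unsigned B = T.getOrCreate("$rdi");    // e.g. a physical register
      unsigned C = T.getOrCreate("fi#2");    // same slot seen again
      assert(A == 0 && B == 1 && C == A);    // duplicates share one number
      return 0;
    }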
In these cases, two selects have constant selectable operands for
both the true and false components and have the same conditional
expression.
We then create two arithmetic operations of the same type and feed a
final select operation with them, using the result of the true arithmetic
for the true operand and the result of the false arithmetic for the false
operand, reusing the original conditional expression.
The arithmetic operations fold away naturally as a consequence, leaving
only the newly formed select to replace the old arithmetic operation.
Patch by: Michael Berg <michael_c_berg@apple.com>
Differential Revision: https://reviews.llvm.org/D37019
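For illustration, a hypothetical source-level version of the pattern (constants made up): two selects on the same condition with constant arms feed an add, so the add can be applied per arm, constant-folded, and replaced by a single select.

    #include <cassert>

    // Before: two selects on the same condition, with constant arms, feed an add.
    int before(bool C) {
      int T = C ? 2 : 8;
      int U = C ? 3 : 5;
      return T + U;
    }

    // After: the add is applied per arm and constant-folds, leaving one select.
    int after(bool C) {
      return C ? (2 + 3) : (8 + 5); // i.e. C ? 5 : 13
    }

    int main() {
      assert(before(true) == after(true) && before(false) == after(false));
      return 0;
    }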
Mohammad Shahid [Wed, 20 Sep 2017 17:19:57 +0000 (17:19 +0000)]
[SLP] Vectorize jumbled memory loads.
Summary:
This patch tries to vectorize loads of consecutive memory accesses that are
accessed in a non-consecutive or jumbled way. An earlier attempt was made with
patch D26905, which was reverted due to a basic issue with representing the
'use mask' of jumbled accesses.
This patch fixes the mask representation by recording the 'use mask' in the usertree entry.
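For illustration, a hypothetical C++ function showing what 'jumbled' means here: the loads cover a contiguous range but not in ascending order, so the vectorizer has to record which lane each load feeds (the 'use mask') before emitting a wide load plus shuffle.

    // The four loads cover the contiguous range a[0..3], but in the order
    // 2, 0, 3, 1. Vectorizing this needs one wide load plus a record of
    // which lane feeds which use, i.e. a shuffle.
    int sum_jumbled(const int *a) {
      return a[2] + a[0] + a[3] + a[1];
    }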
Teresa Johnson [Wed, 20 Sep 2017 17:09:47 +0000 (17:09 +0000)]
[ThinLTO] Fix dead stripping analysis for SamplePGO
Summary:
The fix for dead stripping analysis in the case of SamplePGO indirect
calls to local functions (r313151) introduced the possibility of an
infinite loop.
Make sure we check for the value being already live after we update it
for SamplePGO indirect call handling.
Reviewers: danielcdh
Subscribers: mehdi_amini, inglorion, llvm-commits, eraman
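A minimal sketch of the check-before-revisit idea (hypothetical types, not the ThinLTO index code): inserting into the live set and skipping values that are already live keeps the propagation from looping forever on reference cycles.

    #include <set>
    #include <vector>

    using ValueId = unsigned;

    // Hypothetical adjacency: for each value, the values it references.
    void propagateLiveness(const std::vector<std::vector<ValueId>> &Refs,
                           std::vector<ValueId> Roots) {
      std::set<ValueId> Live;
      std::vector<ValueId> Worklist(Roots.begin(), Roots.end());
      while (!Worklist.empty()) {
        ValueId V = Worklist.back();
        Worklist.pop_back();
        if (!Live.insert(V).second)
          continue;                  // already live: do not revisit
        for (ValueId Ref : Refs[V])
          Worklist.push_back(Ref);
      }
    }

    int main() {
      // A reference cycle (0 -> 1 -> 0): without the "already live" check
      // the worklist would never drain.
      std::vector<std::vector<ValueId>> Refs = {{1}, {0}};
      propagateLiveness(Refs, {0});
      return 0;
    }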
[lit] Reverse path list when updating environment vars.
Bug pointed out by EricWF. This would construct a path where
items would be added in the wrong order, potentially leading
to using the wrong tools for testing.
Make libcxx tests work when llvm sources are not present.
Despite a strong CMake warning that this is an unsupported
libcxx build configuration, some bots still rely on being
able to check out lit and libcxx independently with no
LLVM sources, and then run lit against libcxx.
A previous patch broke that workflow, so this is making it work
again. Unfortunately, it breaks generation of the llvm-lit
script for libcxx, but we will just have to live with that until
a solution is found that allows libcxx to make more use of
llvm build pieces. libcxx can still run tests by using the
ninja check target, or by running lit.py directly against the
build tree or source tree.
Recommit [MachineCombiner] Update instruction depths incrementally for large BBs.
This version of the patch fixes an off-by-one error causing PR34596. We
do not need to use std::next(BlockIter) when calling updateDepths, as
BlockIter already points to the next element.
Original commit message:
> For large basic blocks with lots of combinable instructions, the
> MachineTraceMetrics computations in MachineCombiner can dominate the compile
> time, as computing the trace information is quadratic in the number of
> instructions in a BB and its relevant successors/predecessors.
> In most cases, knowing the instruction depth should be enough to make
> combination decisions. As we already iterate over all instructions in a basic
> block, the instruction depth can be computed incrementally. This reduces the
> cost of machine-combine drastically in cases where lots of instructions
> are combined. The major drawback is that AFAIK, computing the critical path
> length cannot be done incrementally. Therefore we only compute
> instruction depths incrementally, for basic blocks with more
> instructions than inc_threshold. The -machine-combiner-inc-threshold
> option can be used to set the threshold and allows for easier
> experimenting and checking if using incremental updates for all basic
> blocks has any impact on the performance.
>
> Reviewers: sanjoy, Gerolf, MatzeB, efriedma, fhahn
>
> Reviewed By: fhahn
>
> Subscribers: kiranchandramohan, javed.absar, efriedma, llvm-commits
>
> Differential Revision: https://reviews.llvm.org/D36619
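A self-contained sketch of the incremental-depth idea with simplified, made-up types (not the MachineCombiner code): each instruction's depth is the maximum over its operands of the operand's depth plus its defining instruction's latency, so one forward pass over the block suffices.

    #include <algorithm>
    #include <cassert>
    #include <vector>

    struct Instr {
      std::vector<unsigned> Ops; // indices of earlier instructions this one uses
      unsigned Latency;          // latency of this instruction's result
    };

    // One forward pass: Depth[I] = max over operands O of (Depth[O] + Latency[O]).
    std::vector<unsigned> computeDepths(const std::vector<Instr> &Block) {
      std::vector<unsigned> Depth(Block.size(), 0);
      for (unsigned I = 0; I != Block.size(); ++I)
        for (unsigned O : Block[I].Ops)
          Depth[I] = std::max(Depth[I], Depth[O] + Block[O].Latency);
      return Depth;
    }

    int main() {
      // i0 and i1 are leaves; i2 uses both; i3 uses i2.
      std::vector<Instr> Block = {{{}, 1}, {{}, 3}, {{0, 1}, 2}, {{2}, 1}};
      std::vector<unsigned> D = computeDepths(Block);
      assert(D[2] == 3 && D[3] == 5); // depths accumulate along the longest chain
      return 0;
    }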
Mohammad Shahid [Wed, 20 Sep 2017 08:18:28 +0000 (08:18 +0000)]
[SLP] Vectorize jumbled memory loads.
Summary:
This patch tries to vectorize loads of consecutive memory accesses that are
accessed in a non-consecutive or jumbled way. An earlier attempt was made with
patch D26905, which was reverted due to a basic issue with representing the
'use mask' of jumbled accesses.
This patch fixes the mask representation by recording the 'use mask' in the usertree entry.
[X86] Remove isel checks for immediate size on floating point compare and xop compare instructions. NFCI
If these checks fail, we end up not selecting an instruction at all, so we are already relying on the immediate being checked upstream of isel. Doing the check in isel just bloats the isel table. Interestingly, we didn't check the AVX512 versions of the instructions anyway.
Sanjoy Das [Wed, 20 Sep 2017 02:31:57 +0000 (02:31 +0000)]
Tighten the invariants around LoopBase::invalidate
Summary:
With this change:
- Methods in LoopBase trip an assert if the receiver has been invalidated
- LoopBase::clear frees up the memory held by the LoopBase instance
This change also shuffles things around as necessary to work with this stricter invariant.
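A minimal sketch of the stricter invariant with an illustrative class (not the real LoopBase): accessors assert once the object has been invalidated, and clear() actually releases the memory the instance holds.

    #include <cassert>
    #include <vector>

    // Illustrative stand-in for a loop-like container with an invalidation flag.
    class Node {
      std::vector<int> Blocks;
      bool Invalidated = false;

    public:
      explicit Node(std::vector<int> B) : Blocks(std::move(B)) {}

      // Accessors trip an assert once the node has been invalidated.
      const std::vector<int> &blocks() const {
        assert(!Invalidated && "node was invalidated");
        return Blocks;
      }

      // clear() frees the memory held by the instance and marks it invalid.
      void clear() {
        std::vector<int>().swap(Blocks); // actually release the allocation
        Invalidated = true;
      }
    };

    int main() {
      Node N({1, 2, 3});
      assert(N.blocks().size() == 3);
      N.clear();
      // N.blocks() would now assert; callers must stop using N after clear().
      return 0;
    }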
Many svn-based buildbots seem to be getting stuck continually
in tree conflicts due to .pyc files being generated. I'm disabling
these as a temporary measure in an attempt to get everything
stable again.
I'll try to remove this code once I understand the problem
better.
[MIRPrinter] Print empty successor lists when they cannot be guessed
This re-applies commit r313685, this time with the proper updates to
the test cases.
Original commit message:
Unreachable blocks in the machine instr representation are these
weird empty blocks with no successors.
The MIR printer used to not print empty lists of successors. However,
the MIR parser now treats a non-printed list of successors as "please
guess it for me". As a result, the parser tries to guess the list of
successors and, given that the block is empty, just assumes it falls through
to the next block (if any).
For instance, the following test case used to fail the verifier.
The MIR printer would print
entry
/ \
true (def) false (no list of successors)
|
split.true (use)
Sanjoy Das [Tue, 19 Sep 2017 23:19:00 +0000 (23:19 +0000)]
[LoopInfo] Make LoopBase and Loop destructors non-public
Summary:
See comment for why I think this is a good idea.
This change also:
- Removes an SCEV test case. The SCEV test was not testing anything useful (most of it was `#if 0`'d out) and it would need to be updated to deal with a private ~Loop::Loop.
- Updates the loop pass manager test case to deal with a private ~Loop::Loop.
- Renames markAsRemoved to markAsErased to contrast with removeLoop, via the usual remove vs. erase idiom we already have for instructions and basic blocks.
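A small sketch of the non-public destructor pattern with illustrative names (not the real Loop/LoopInfo classes): clients cannot delete the object directly; only the owning analysis, declared as a friend, may destroy or erase it.

    #include <algorithm>
    #include <vector>

    class Owner; // the analysis that owns and may destroy loops

    class LoopLike {
      friend class Owner;
      ~LoopLike() = default; // non-public: `delete L;` outside Owner does not compile
    public:
      // ... normal query interface would live here ...
    };

    class Owner {
      std::vector<LoopLike *> Loops;

    public:
      LoopLike *create() {
        Loops.push_back(new LoopLike());
        return Loops.back();
      }
      // Only the owner may destroy a loop; it also drops its bookkeeping entry.
      void markAsErased(LoopLike *L) {
        Loops.erase(std::remove(Loops.begin(), Loops.end(), L), Loops.end());
        delete L;
      }
      ~Owner() {
        for (LoopLike *L : Loops)
          delete L;
      }
    };

    int main() {
      Owner O;
      LoopLike *L = O.create();
      O.markAsErased(L); // fine
      // delete L;       // would not compile: ~LoopLike() is private
      return 0;
    }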
Adam Nemet [Tue, 19 Sep 2017 23:00:55 +0000 (23:00 +0000)]
Allow ORE.emit to take a closure to delay building the remark object
In the lambda, we now return the remark by value, so we need to preserve
its type in the insertion operator. This requires making the insertion
operator generic.
I've also converted a few cases to use the new API. It seems to work pretty
well. See the LoopUnroller for a slightly more interesting case.
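A self-contained sketch of the idea with an illustrative Emitter class (not the actual OptimizationRemarkEmitter API): emit takes a closure and only invokes it when remarks are enabled, so no remark object is built on the common, disabled path, and the closure returns the remark by value.

    #include <iostream>
    #include <string>
    #include <utility>

    // Illustrative stand-in for the remark emitter: emit() takes a callable and
    // only invokes it (i.e. only builds the remark) when remarks are enabled.
    class Emitter {
      bool Enabled;

    public:
      explicit Emitter(bool Enabled) : Enabled(Enabled) {}

      template <typename RemarkBuilder> void emit(RemarkBuilder &&Build) {
        if (!Enabled)
          return;                       // closure never runs: zero build cost
        auto R = std::forward<RemarkBuilder>(Build)(); // remark returned by value
        std::cout << R << "\n";
      }
    };

    int main() {
      Emitter ORE(/*Enabled=*/true);
      int UnrollCount = 4;
      ORE.emit([&]() {
        // Potentially expensive remark construction happens only here.
        return std::string("unrolled loop by a factor of ") +
               std::to_string(UnrollCount);
      });
      return 0;
    }

The design point is the deferral: when remarks are disabled, the closure is never called, so the cost of building the remark disappears entirely.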
Summary: Introduces the llvm-cfi-verify tool to LLVM. Includes the design document (docs/CFIVerify.rst). The current implementation of the tool is simply a disassembler that identifies and prints indirect control flow instructions.
[MIRPrinter] Print empty successor lists when they cannot be guessed
Unreachable blocks in the machine instr representation are these
weird empty blocks with no successors.
The MIR printer used to not print empty lists of successors. However,
the MIR parser now treats a non-printed list of successors as "please
guess it for me". As a result, the parser tries to guess the list of
successors and, given that the block is empty, just assumes it falls through
to the next block (if any).
For instance, the following test case used to fail the verifier.
The MIR printer would print
entry
/ \
true (def) false (no list of successors)
|
split.true (use)
The MIR parser would understand this:
entry
/ \
true (def) false
| / <-- invalid edge
split.true (use)
Because of the invalid edge, we get the "def does not
dominate all uses" error.
The fix is to print empty successor lists, so that the parser
knows what to do for unreachable blocks.
Reland "[llvm-objcopy] Add support for nested and overlapping segments"
I had failed to initialize a pointer to nullptr where I needed to.
This change adds support for nested and even overlapping segments. This means
that PT_PHDR, PT_GNU_RELRO, PT_TLS, and PT_DYNAMIC can be supported properly.
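A small sketch of the containment test that 'nested' implies, with an illustrative struct (not llvm-objcopy's internal types): a child segment is nested in a parent when its file-offset range lies entirely within the parent's.

    #include <cassert>
    #include <cstdint>

    struct Segment {
      uint64_t Offset; // file offset
      uint64_t Size;   // size in the file
    };

    // Child is nested in Parent if its [Offset, Offset+Size) range is contained
    // in Parent's range. Overlap without containment is the harder general case.
    bool isNestedIn(const Segment &Child, const Segment &Parent) {
      return Child.Offset >= Parent.Offset &&
             Child.Offset + Child.Size <= Parent.Offset + Parent.Size;
    }

    int main() {
      Segment Load = {0x1000, 0x2000};  // e.g. a PT_LOAD
      Segment Relro = {0x1800, 0x400};  // e.g. a PT_GNU_RELRO inside it
      assert(isNestedIn(Relro, Load) && !isNestedIn(Load, Relro));
      return 0;
    }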
Jonathan Roelofs [Tue, 19 Sep 2017 21:23:19 +0000 (21:23 +0000)]
[ARM] Relax 'cpsie'/'cpsid' flag parsing.
The ARM docs suggest in examples that the flags can have either case, and there
are applications in the wild (libopencm3, for example) that expect to be
able to use the uppercase spelling.