Summary: Fix a bug: when the promoted call's return type mismatches that of the promoted function, we should not try to inline it; otherwise it may lead to a compiler crash.
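For illustration, a minimal sketch of the kind of guard described (the helper name and exact check are assumptions, not the actual indirect-call-promotion code):

```cpp
#include "llvm/IR/Function.h"
#include "llvm/IR/Instructions.h"
using namespace llvm;

// Hypothetical helper: only attempt inlining of a promoted direct call when
// its result type agrees with the promoted callee's return type; a mismatch
// (e.g. through a bitcast function pointer) could otherwise crash later on.
static bool shouldTryInlinePromotedCall(const CallInst &CI,
                                        const Function &Callee) {
  return CI.getType() == Callee.getReturnType();
}
```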
[llvm-objcopy] Add support for nested and overlapping segments
This change adds support for nested and even overlapping segments. This means
that PT_PHDR, PT_GNU_RELRO, PT_TLS, and PT_DYNAMIC can be supported properly.
Add support for the R_AARCH64_ABS{16,32} relocations in the execution
engine. This is primarily used for DWARF debug information relocations
and needed by the LLVM JIT to support JITing for lldb.
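For illustration, a minimal self-contained sketch of how an absolute data relocation like R_AARCH64_ABS32 is typically resolved (this is not the actual RuntimeDyld code, and the overflow check is an assumption): the symbol value plus addend is truncated to the relocation width and written at the fixup address.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Resolve an R_AARCH64_ABS32-style relocation: store the low 32 bits of
// (symbol value + addend) at the relocation target (little-endian target).
static void resolveAbs32(uint8_t *FixupAddr, uint64_t SymValue, int64_t Addend) {
  uint64_t Result = SymValue + Addend;
  assert(Result <= UINT32_MAX && "value does not fit in 32 bits");
  uint32_t Truncated = static_cast<uint32_t>(Result);
  std::memcpy(FixupAddr, &Truncated, sizeof(Truncated));
}
```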
Improve TableGen performance of -gen-dag-isel (motivated by X86 backend)
The introduction of parameterized register classes in r313271 caused the
matcher generation code in TableGen to run much slower, particularly so
in the unoptimized (debug) build. This patch recovers some of the lost
performance.
Summary of changes:
- Cache the set of legal types in TypeInfer::getLegalTypes. The contents
of this set do not change.
- Add LLVM_ATTRIBUTE_ALWAYS_INLINE to several small functions. Normally
this would not be necessary, but in the debug build TableGen is not
optimized, so this helps a little bit.
- Add an early exit from TypeSetByHwMode::operator== for the case when
one or both arguments are "simple", i.e. only have one mode. This
saves some time in GenerateVariants.
- Finally, replace the underlying storage type in TypeSetByHwMode::SetType
with MachineValueTypeSet, based on std::array instead of std::set.
This significantly reduces the number of memory allocation calls
(a sketch of this kind of fixed-storage set is shown below).
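A minimal, self-contained sketch of the idea (this is not the actual MachineValueTypeSet; the capacity and bit layout are assumptions): back the set with fixed std::array storage so that insertion and comparison never allocate.

```cpp
#include <array>
#include <cstdint>
#include <cstring>

// Fixed-capacity set of small enum values (e.g. simple value types), stored
// as one bit per value inside a std::array. No heap allocation at all.
class FixedValueTypeSet {
  static constexpr unsigned Capacity = 256;   // assumed upper bound
  std::array<uint8_t, Capacity / 8> Bits{};

public:
  void insert(unsigned V) { Bits[V / 8] |= uint8_t(1u << (V % 8)); }
  bool count(unsigned V) const { return (Bits[V / 8] >> (V % 8)) & 1u; }

  // Equality is a flat memcmp over the fixed storage: no tree traversal
  // and no allocations, unlike std::set.
  bool operator==(const FixedValueTypeSet &Other) const {
    return std::memcmp(Bits.data(), Other.Bits.data(), Bits.size()) == 0;
  }
};
```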
I've done a number of experiments with the underlying type of InfoByHwMode.
The type is a map, and for targets that do not use the parameterization,
this map has only one entry. The best (unoptimized) performance, somewhat
surprisingly, came from std::map, followed closely by std::unordered_map.
DenseMap was the slowest by a large margin.
Various hand-crafted solutions (emulating enough of the map interface to
avoid making sweeping changes to the users) did not yield any observable
improvements.
Resubmit "Fix llvm-lit script generation in libcxx."
After speaking with the libcxx owners, we agreed that this is
a bug in the bot that needs to be fixed by the bot owners, and
that the CMake changes are correct.
Re-land r313400 "[DebugInfo] Insert DW_OP_deref when spilling indirect DBG_VALUEs"
I forgot to zero out the BitVector when reusing it between UserValues.
Later uses of the same location number for a different UserValue would
falsely indicate that they were spilled. Usually this would lead to
incorrect debug info, but in some cases it would describe something
nonsensical, such as a memory location based on a vector register (Q8 on
ARM).
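A self-contained sketch of the bug class being fixed (illustrative names, not the actual LiveDebugVariables code): a bit vector reused across user values has to be reset, otherwise location numbers spilled for one value falsely appear spilled for the next.

```cpp
#include <vector>

struct UserValueInfo {
  std::vector<unsigned> SpilledLocNos; // location numbers spilled for this value
};

void markSpilledLocations(const std::vector<UserValueInfo> &Values,
                          unsigned NumLocations) {
  std::vector<bool> Spilled;
  for (const UserValueInfo &UV : Values) {
    Spilled.assign(NumLocations, false); // the reset that was missing
    for (unsigned LocNo : UV.SpilledLocNos)
      Spilled[LocNo] = true;
    // ... emit DW_OP_deref only for locations marked spilled here ...
  }
}
```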
Tony Jiang [Tue, 19 Sep 2017 16:14:37 +0000 (16:14 +0000)]
[PowerPC Peephole] Constants into a join add, use ADDI over LI/ADD.
Two blocks prior to the join each perform an li, and the join block has an
add using the initialized register. Optimize each predecessor block to instead
use addi, and delete the li's and the add.
David Blaikie [Tue, 19 Sep 2017 15:13:55 +0000 (15:13 +0000)]
dwarfdump: Delay parsing abbreviations until they're needed
This speeds up dumping specific DIEs by not parsing abbreviations for
units that are not used.
(this is also handy to have in eventually to speed up llvm-symbolizer
for .dwp files, where parsing most of the DWP file can be avoided by
using the index)
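A minimal sketch of the lazy-parsing pattern this describes (illustrative, not the actual DWARF unit code): the abbreviation table for a unit is parsed only the first time something from that unit is actually dumped.

```cpp
#include <memory>

struct AbbrevTable { /* decoded .debug_abbrev declarations */ };

class UnitSketch {
  std::unique_ptr<AbbrevTable> Abbrevs;        // stays empty until first use

  std::unique_ptr<AbbrevTable> parseAbbrevs(); // reads .debug_abbrev

public:
  // Called only when a DIE from this unit is dumped, so units that are
  // never touched never pay the parsing cost.
  const AbbrevTable &getAbbrevs() {
    if (!Abbrevs)
      Abbrevs = parseAbbrevs();
    return *Abbrevs;
  }
};
```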
Nikolai Bozhenov [Tue, 19 Sep 2017 11:54:29 +0000 (11:54 +0000)]
[Nios2] Subtarget, basic infrastructure for frame, instructions and registers
This is the second minimal patch keeping the Nios2 target buildable.
I'm adding the subtarget here, along with frame lowering, instruction,
and register information methods. I do not add any test cases, as parts
such as the DAG selector and assembly printing are still missing. I plan
to include them in the next patch.
Patch by Andrei Grischenko <andrei.l.grischenko@intel.com>
This change:
- makes nodes ISD::ADDCARRY and ISD::SUBCARRY legal for i32
- lowering is done by first converting the boolean value into the carry flag
using (_, C) ← (ARMISD::ADDC R, -1) and then converting it back to an integer
value using (R, _) ← (ARMISD::ADDE 0, 0, C). An ARMISD::ADDE between the two
operations does the actual addition (a small arithmetic sketch of why these
conversions work follows this list).
- for subtraction, given that ISD::SUBCARRY's second result is actually a
borrow, we need to invert the value of the second operand and the result before
and after using ARMISD::SUBE. We need to invert the carry result of
ARMISD::SUBE to preserve the semantics.
- given that the generic combiner may lower ISD::ADDCARRY and
ISD::SUBCARRY into ISD::UADDO and ISD::USUBO, we need to update their lowering
as well; otherwise i64 operations would now require branches. This implies
updating the corresponding test for unsigned.
- add a new combine to remove the redundant conversions from/to carry flags
to/from boolean values: (ARMISD::ADDC (ARMISD::ADDE 0, 0, C), -1) → C
- fixes PR34045
- fixes PR34564
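A small self-contained check of why the two conversions above work (plain C++ arithmetic, not the actual DAG lowering): adding -1 (0xFFFFFFFF) to a boolean produces a carry out of 32 bits exactly when the boolean is 1, and an add-with-carry of 0 + 0 + carry turns the flag back into a 0/1 integer.

```cpp
#include <cassert>
#include <cstdint>

int main() {
  for (uint32_t Bool : {0u, 1u}) {
    // (_, C) <- ADDC(Bool, -1): the carry out of the 32-bit add equals Bool.
    uint64_t Wide = uint64_t(Bool) + 0xFFFFFFFFull;
    uint32_t Carry = static_cast<uint32_t>(Wide >> 32);
    assert(Carry == Bool);

    // (R, _) <- ADDE(0, 0, C): 0 + 0 + carry recovers the boolean value.
    uint32_t Back = 0u + 0u + Carry;
    assert(Back == Bool);
  }
  return 0;
}
```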
Gadi Haber [Tue, 19 Sep 2017 06:19:27 +0000 (06:19 +0000)]
[X86][Skylake] Adding the scheduling information for the SkylakeClient target
This patch adds the instruction scheduling information for the SkylakeClient (SKL) architecture target by adding the file X86SchedSkylakeClient.td located under the X86 Target.
We used the scheduling information retrieved from the Skylake architects in order to create the file.
The scheduling information includes latency, number of micro-Ops and used ports by each SKL instruction.
The patch continues the scheduling replacement and insertion effort started with the SNB target in r307529 and r310792 and for HSW in r311879.
Please expect some performance fluctuations due to code alignment effects.
There is a bot that checks out libcxx and lit with nothing
else and then runs lit.py against the test tree. Since there's
no LLVM source tree, there's no LLVM CMake. CMake actually
reports a warning saying this is an unsupported libcxx configuration,
but I guess someone is depending on it anyway.
[Coverage] Use gap regions to select better line exec counts
After clang started emitting deferred regions (r312818), llvm-cov has
had a hard time picking reasonable line execution counts. There have
been one or two generic improvements in this area (e.g. r310012), but
line counts can still report coverage for whitespace instead of code
(llvm.org/PR34612).
To fix the problem:
* Introduce a new region kind so that frontends can explicitly label
gap areas.
This is done by changing the encoding of the columnEnd field of
MappingRegion. This doesn't substantially increase binary size, and it
makes it easy to maintain backwards compatibility (a sketch of one
possible encoding follows this list).
* Don't set the line count to a count from a gap area, unless the count
comes from a wrapped segment.
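A minimal sketch of one way such an encoding can work (the exact bit chosen here is an assumption, not a statement of the on-disk format): reserve the high bit of columnEnd as the gap marker and mask it off when reading the column, so non-gap regions keep their existing encoding.

```cpp
#include <cstdint>

constexpr uint32_t GapRegionBit = 1u << 31; // assumed flag position

uint32_t encodeColumnEnd(uint32_t ColumnEnd, bool IsGapRegion) {
  return IsGapRegion ? (ColumnEnd | GapRegionBit) : ColumnEnd;
}

bool isGapRegion(uint32_t EncodedColumnEnd) {
  return (EncodedColumnEnd & GapRegionBit) != 0;
}

uint32_t decodeColumnEnd(uint32_t EncodedColumnEnd) {
  return EncodedColumnEnd & ~GapRegionBit;
}
```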
Since the path a user specifies to the llvm-lit script might differ from
the source tree they built from (they could be behind different
symlinks), we need to use realpath to make sure that path comparisons
work as expected.
Even better would be to use a custom dictionary comparison with
actual file-equivalence semantics, but this is the lowest-friction way
to unbreak things for now.
Hans Wennborg [Mon, 18 Sep 2017 23:08:42 +0000 (23:08 +0000)]
Revert r313400 "[DebugInfo] Insert DW_OP_deref when spilling indirect DBG_VALUEs"
This caused asserts in Chromium. See http://crbug.com/766261
> Summary:
> This comes up in optimized debug info for C++ programs that pass and
> return objects indirectly by address. In these programs,
> llvm.dbg.declare survives optimization, which causes us to emit indirect
> DBG_VALUE instructions. The fast register allocator knows to insert
> DW_OP_deref when spilling indirect DBG_VALUE instructions, but the
> LiveDebugVariables did not until this change.
>
> This fixes part of PR34513. I need to look into why this doesn't work at
> -O0 and I'll send follow up patches to handle that.
>
> Reviewers: aprantl, dblaikie, probinson
>
> Subscribers: qcolombet, hiraditya, llvm-commits
>
> Differential Revision: https://reviews.llvm.org/D37911
If we have an AssertZext of a truncated value that has already been AssertZext'ed,
we can assert on the wider source op to improve the zext-y knowledge:
assert (trunc (assert X, i8) to iN), i1 --> trunc (assert X, i1) to iN
This moves a fold from being Mips-specific to general combining, and x86 shows
improvements.
[DAG, x86] allow store merging before and after legalization (PR34217)
rL310710 allowed store merging to occur after legalization to catch stores that are created late,
but this exposes a logic hole seen in PR34217:
https://bugs.llvm.org/show_bug.cgi?id=34217
We will miss merging stores if the target lowers vector extracts into target-specific operations.
This patch allows store merging to occur both before and after legalization
if the target chooses to, in order to get maximum merging.
I don't think the potential regressions in the other tests are relevant. The tests are for
correctness of weird IR constructs rather than perf tests, and I think those are still correct.
[X86] Make sure we still emit zext for GR32 to GR64 when the source of the zext is AssertZext
The AssertZext we might see in this case is only giving information about the lower 32 bits. It isn't providing information about the upper 32 bits. So we should emit a zext.
[X86] Don't emit COPY_TO_REG to ABCD registers before EXTRACT_SUBREG of sub_8bit
This is similar to D37843, but for sub_8bit. This fixes all of the patterns except for the 2 that emit only an EXTRACT_SUBREG. That causes a verifier error with global isel because global isel doesn't know to issue the ABCD when doing this extract on 32-bit targets.
[X86] Don't emit COPY_TO_REG to ABCD registers before EXTRACT_SUBREG of sub_8bit_hi
I'm pretty sure that InstrEmitter::EmitSubregNode will take care of this itself by calling ConstrainForSubReg which in turn calls TRI->getSubClassWithSubReg.
I think Jakob Stoklund Olesen alluded to this in his commit message for r141207 which added the code to EmitSubregNode.
Simon Pilgrim [Mon, 18 Sep 2017 16:45:05 +0000 (16:45 +0000)]
[SelectionDAG] Add BITCAST handling to ComputeNumSignBits for splatted sign bits.
For cases where we are BITCASTing to vectors of smaller elements, if the entire source was a splatted sign (the source's NumSignBits == SrcBitWidth), we can say that the destination's NumSignBits == DstBitWidth, as we're just splitting those sign bits across multiple elements.
We could generalize this but at the moment the only use case I have is to peek through bitcasts to vector comparison results.
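A tiny self-contained illustration of the reasoning (scalar integers standing in for vector elements; not the SelectionDAG code itself): if the source is entirely sign bits, reinterpreting the same bytes as narrower elements leaves every narrower element entirely sign bits too.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

int main() {
  uint64_t Src = ~0ull;                 // 64 sign bits: NumSignBits == 64
  uint32_t Dst[2];
  std::memcpy(Dst, &Src, sizeof(Src));  // "bitcast" to two 32-bit elements
  for (uint32_t Elt : Dst)
    assert(Elt == ~0u);                 // each element: NumSignBits == 32
  return 0;
}
```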
[X86] Fix two more places to prefer VPERMQ/PD over VPERM2X128 when AVX2 is enabled
The shuffle combining and lowerVectorShuffleAsLanePermuteAndBlend were both still trying to use VPERM2X128 for unary shuffles when AVX2 is enabled. VPERM2X128 takes two inputs, meaning that when we use it for a unary shuffle, one of those inputs is left undefined, creating a false dependency on whatever register gets allocated there.
If we have VPERMQ/PD we should prefer those since they only have a single input.
Sam Parker [Mon, 18 Sep 2017 14:46:14 +0000 (14:46 +0000)]
[AArch64] Add V8_2aOps feature to Cortex-A55 and 75
Add the missing hardware features to the ProcA55 and ProcA75 features.
These are already enabled via the target parser, but I had missed
them in the backend.
Sam Parker [Mon, 18 Sep 2017 14:28:51 +0000 (14:28 +0000)]
[ARM] Implement isTruncateFree
Implement the isTruncateFree hooks, lifted from AArch64, that are
used by TargetTransformInfo. This allows simplifycfg to reduce the
test case into a single basic block.
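A rough sketch of what such hooks typically look like (modeled on the AArch64 versions the text says this was lifted from; the exact ARM conditions are an assumption): truncating a 64-bit integer to 32 bits just uses the low half of the value, so it is reported as free.

```cpp
bool ARMTargetLowering::isTruncateFree(Type *SrcTy, Type *DstTy) const {
  if (!SrcTy->isIntegerTy() || !DstTy->isIntegerTy())
    return false;
  // i64 -> i32 only reads the low register, so the truncate costs nothing.
  return SrcTy->getPrimitiveSizeInBits() == 64 &&
         DstTy->getPrimitiveSizeInBits() == 32;
}

bool ARMTargetLowering::isTruncateFree(EVT SrcVT, EVT DstVT) const {
  if (SrcVT.isVector() || DstVT.isVector() ||
      !SrcVT.isInteger() || !DstVT.isInteger())
    return false;
  return SrcVT.getSizeInBits() == 64 && DstVT.getSizeInBits() == 32;
}
```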
[dwarfdump] Make .eh_frame an alias for .debug_frame
This patch makes the `.eh_frame` extension an alias for `.debug_frame`.
Up till now it was only possible to dump the section using objdump, but
not with dwarfdump. Since the two are essentially interchangeable, we
dump whichever of the two is present.
As a workaround, this patch also adds parsing for 3 currently
unimplemented CFA instructions: `DW_CFA_def_cfa_expression`,
`DW_CFA_expression`, and `DW_CFA_val_expression`. Because I lack the
required knowledge, I just parse the fields without actually creating
the instructions.
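A minimal sketch of what "parsing the fields without actually creating the instructions" can look like for DW_CFA_def_cfa_expression (illustrative, not the actual DWARF frame parser): the operand is a ULEB128 length followed by that many bytes of DWARF expression, which are read past so the rest of the CFI stream still decodes.

```cpp
#include <cstdint>

struct Cursor {
  const uint8_t *Ptr;
  const uint8_t *End;
};

// Decode one unsigned LEB128 value and advance the cursor.
static uint64_t readULEB128(Cursor &C) {
  uint64_t Value = 0;
  unsigned Shift = 0;
  uint8_t Byte;
  do {
    Byte = *C.Ptr++;
    Value |= uint64_t(Byte & 0x7f) << Shift;
    Shift += 7;
  } while (Byte & 0x80);
  return Value;
}

// DW_CFA_def_cfa_expression: BLOCK operand = ULEB128 length + expression
// bytes. Skip the expression so subsequent CFA instructions can be parsed.
static void parseDefCFAExpression(Cursor &C) {
  uint64_t Length = readULEB128(C);
  C.Ptr += Length; // expression bytes consumed but not interpreted
}
```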
Finally, this also fixes the typo in the `.debug_frame` section name
which incorrectly contained a trailing `s`.
Nikolai Bozhenov [Mon, 18 Sep 2017 10:17:59 +0000 (10:17 +0000)]
[X86FixupBWInsts] More precise register liveness if no <imp-use> on MOVs.
Summary:
Subregister liveness tracking is not implemented for the X86 backend, so
sometimes the whole super-register is said to be live when only a
subregister is really live. That might happen if the def and the use
are located in different MBBs; see the added fixup-bw-inst.mir test.
However, using knowledge of the specific instructions handled by the
bw-fixup pass, we can get more precise liveness information, which is
what this change does.
[XRay][tools] Support tail-call exits before we write them in the runtime
Summary:
This change adds support for explicit tail-exit records to be written by
the XRay runtime. This lets us differentiate the tail exit
records/events in the log, and allows us to treat those exit events
specially in the future. For now we allow printing those out in YAML
(and reading them in).
[X86] Strengthen some of the SD type constraints in X86InstrFragmentsSIMD.td
This affects the vector shifts and rotates as well as some of the vector compares.
The changes to the shifts by immediate allow a few hundred bytes to be removed, by eliminating type checks for the size of the immediate containing the shift/rotate amount.