Sanjay Patel [Mon, 10 Jun 2019 14:46:36 +0000 (14:46 +0000)]
[InstCombine] change canonicalization to fabs() to use FMF on fsub
Similar to rL362909:
This isn't the ideal fix (use FMF on the select), but it's still an
improvement until we have better FMF propagation to selects and other
FP math operators.
I don't think there's much risk of regression from this change by
not including the FMF on the fcmp any more. The nsz/nnan FMF
should be the same on the fcmp and the fsub because they have the
same operand.
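As a source-level illustration (hedged; the actual transform operates on the IR select/fsub, not on C++), the pattern being canonicalized looks like a compare-and-select of x and -x:
// With the appropriate nsz/nnan fast-math flags present, instcombine
// canonicalizes this select of x and -x into a call to fabs().
float myAbs(float x) { return x < 0.0f ? -x : x; }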
Simon Tatham [Mon, 10 Jun 2019 14:43:55 +0000 (14:43 +0000)]
[ARM] Disallow PC, and optionally SP, in VMOVRH and VMOVHR.
Arm v8.1-M supports the VMOV instructions that move a half-precision
value to and from a GPR, but not if the GPR is SP or PC.
To fix this, I've changed those instructions to use the rGPR register
class instead of GPR. rGPR always excludes PC, and it excludes SP
except in the presence of the HasV8Ops target feature (i.e. Arm v8-A).
So the effect is that VMOV.F16 to and from PC is now illegal
everywhere, but VMOV.F16 to and from SP is illegal only on non-v8-A
cores (which I believe is all as it should be).
George Rimar [Mon, 10 Jun 2019 12:43:18 +0000 (12:43 +0000)]
[yaml2obj/obj2yaml] - Make RawContentSection::Content and RawContentSection::Size optional
This is a follow-up for D62809.
Content and Size fields should be optional, as was discussed in the comments
of the D62809 thread. With that, we can describe specific string table and
symbol table sections more accurately and also report appropriate errors.
The patch adds lots of test cases where the behavior is described in detail.
George Rimar [Mon, 10 Jun 2019 11:38:06 +0000 (11:38 +0000)]
[yaml2obj] - Do not assert when .dynsym is specified explicitly, but .dynstr is not present.
We have a code in buildSectionIndex() that adds implicit sections:
  // Add special sections after input sections, if necessary.
  for (StringRef Name : implicitSectionNames())
    if (SN2I.addName(Name, SecNo)) {
      // Account for this section, since it wasn't in the Doc
      ++SecNo;
      DotShStrtab.add(Name);
    }
The problem arises when .dynsym is specified explicitly and no
DynamicSymbols are used. In that case, we do not add
.dynstr implicitly and assert later when we try to set the Link
for .dynsym.
Reasonable behavior in this case seems to be to allow the Link field to be
zero, which is what this patch does.
David Green [Mon, 10 Jun 2019 10:22:14 +0000 (10:22 +0000)]
[ARM] Enable Unroll UpperBound
This option allows loops with small max trip counts to be fully unrolled. This
can help with code like the remainder loops from manually unrolled loops, such
as those that appear in the CMSIS DSP library. Previously we would apparently
runtime-unroll them with the default unroll count (4).
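As a hypothetical illustration (not taken from the CMSIS DSP sources), the main loop below is manually unrolled by 4, so the remainder loop has a max trip count of 3 and can now be fully unrolled rather than runtime unrolled:
// Manually unrolled main loop followed by a small remainder loop.
void scale(float *dst, const float *src, int n, float k) {
  int i = 0;
  for (; i + 4 <= n; i += 4) {
    dst[i + 0] = src[i + 0] * k;
    dst[i + 1] = src[i + 1] * k;
    dst[i + 2] = src[i + 2] * k;
    dst[i + 3] = src[i + 3] * k;
  }
  for (; i < n; ++i) // remainder loop: at most 3 iterations
    dst[i] = src[i] * k;
}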
Nikola Prica [Mon, 10 Jun 2019 08:41:06 +0000 (08:41 +0000)]
[DebugInfo] More strict debug range for stack variables
A variable's stack location can stretch longer than it should. If a
variable is placed on the stack in some nested basic block, its range
can be calculated to extend up to the next occurrence of the variable's
DBG_VALUE, or up to the end of the function, thus covering basic
blocks that should not be included in the variable's location range.
This happens because the DbgEntityHistoryCalculator ends register
locations at the end of a basic block only if the variable’s location
register has been changed throughout the function, which is not the
case for the register used to reference stack objects.
This patch also tries to produce a single value location if the location
list builder managed to merge all the locations into one.
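As a hypothetical example (not from the patch's tests) of the shape being fixed:
// 'tmp' lives on the stack only inside the nested block, but because the
// frame register never changes, its location range could previously be
// extended past the block, up to the end of the function.
int f(int a, int b) {
  int r = a;
  if (a < b) {
    int tmp = a * b;
    r = tmp + 1;
  }
  return r;
}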
QingShan Zhang [Mon, 10 Jun 2019 05:40:21 +0000 (05:40 +0000)]
[DAGCombine] Match a pattern where a wide type scalar value is stored by several narrow stores
This opportunity was found in SPEC 2017 557.xz_r, where it is used by the SHA encrypt/decrypt; see sha-2/sha512.c:
static u64 load64(const unsigned char* y)
{
  u64 res = 0;
  for (int i = 0; i != 8; ++i)
    res |= (u64)(y[i]) << ((7 - i) * 8);
  return res;
}
The load64 pattern was already implemented by https://reviews.llvm.org/D26149;
this patch implements the store pattern.
Match a pattern where a wide type scalar value is stored by several narrow
stores, and fold it into a single store, or a BSWAP and a store, if the target
supports it.
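For reference, a hedged sketch of the store-side counterpart (mirroring load64 above and reusing its u64 typedef; the exact sha512.c code may differ):
// Eight single-byte stores writing one wide value in big-endian order; the
// combine folds them into a single 64-bit store, plus a BSWAP on targets
// whose endianness does not match the byte order written here.
static void store64(unsigned char *y, u64 x)
{
  for (int i = 0; i != 8; ++i)
    y[i] = (unsigned char)(x >> ((7 - i) * 8));
}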
Craig Topper [Mon, 10 Jun 2019 04:50:12 +0000 (04:50 +0000)]
[X86] When promoting i16 compare with immediate to i32, try to use sign_extend for eq/ne if the input is truncated from a type with enough sign bits.
Summary:
Our default behavior is to use sign_extend for signed comparisons and zero_extend for everything else. But for equality we have the freedom to use either extension. If we can prove the input has been truncated from something with enough sign bits, we can use sign_extend instead and let DAG combine optimize it out. A similar rule is used by type legalization in LegalizeIntegerTypes.
This gets rid of the movzx in PR42189. The immediate will still take 4 bytes instead of the 2 bytes plus 0x66 prefix that a cmp di, 32767 would get, but it avoids a length-changing prefix.
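A minimal sketch (not the exact PR42189 code) of the general shape this affects:
// An i16 equality compare against an immediate; type legalization promotes it
// to i32, and for eq/ne either extension of the operand is legal, which is
// where the sign_extend preference described above can apply.
bool isMax(short s) { return s == 32767; }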
Craig Topper [Mon, 10 Jun 2019 04:37:16 +0000 (04:37 +0000)]
[X86] Disable f32->f64 extload when sse2 is enabled
Summary:
We can only use the memory form of cvtss2sd under optsize due to a partial register update, so previously we were emitting 2 instructions for extload when optimizing for speed. Also, due to a late optimization in PreprocessISelDAG, we had to handle (fpextend (loadf32)) under optsize.
This patch forces extload to expand so that it will always be in the (fpextend (loadf32)) form during isel. When optimizing for speed we can then let each of those pieces select an instruction independently.
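The simplest source-level shape this affects is an f32 load immediately widened to f64:
// (fpextend (loadf32)) at the DAG level; with extload forced to expand, the
// load and the conversion are selected independently when optimizing for speed.
double widen(const float *p) { return (double)*p; }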
Craig Topper [Mon, 10 Jun 2019 00:41:07 +0000 (00:41 +0000)]
[X86] Convert f32/f64 FANDN/FAND/FOR/FXOR to vector logic ops and scalar_to_vector/extract_vector_elts to reduce isel patterns.
Previously we did the equivalent operation in isel patterns with
COPY_TO_REGCLASS operations to transition. By inserting
scalar_to_vectors and extract_vector_elts before isel we can
allow each piece to be selected individually and accomplish the
same final result.
Ideally we'd use vector operations earlier in lowering/combine,
but that looks to be more difficult.
The scalar-fp-to-i64.ll changes are because we have a pattern for
using movlpd for store+extract_vector_elt, while an f64 store
uses movsd. The encoding sizes are the same.
Roman Lebedev [Sun, 9 Jun 2019 16:30:14 +0000 (16:30 +0000)]
[NFC][InstCombine] Revisit canonicalize-constant-low-bit-mask-and-icmp-s* tests in preparation for PR42198.
The `icmp sgt`/`icmp sle` variants are miscompiles, too:
https://rise4fun.com/Alive/JFNP
https://rise4fun.com/Alive/jHvL
I had forgotten the precondition 'x != 0'.
While ensuring test coverage for the `-1` mask, also add test coverage
for the `0` mask. Mask `0` is allowed for all the folds;
mask `-1` is allowed only for the folds with an unsigned `icmp` predicate.
Constant mask `0` was missed, though.
Sanjay Patel [Sun, 9 Jun 2019 16:22:01 +0000 (16:22 +0000)]
[InstCombine] change canonicalization to fabs() to use FMF on fneg
This isn't the ideal fix (use FMF on the select), but it's still an
improvement until we have better FMF propagation to selects and other
FP math operators.
I don't think there's much risk of regression from this change by
not including the FMF on the fcmp any more. The nsz/nnan FMF
should be the same on the fcmp and the fneg (fsub) because they
have the same operand.
This works around the most glaring FMF logical inconsistency cited
in PR38086:
https://bugs.llvm.org/show_bug.cgi?id=38086
Sanjay Patel [Sun, 9 Jun 2019 13:48:59 +0000 (13:48 +0000)]
[InstSimplify] enhance fcmp fold with never-nan operand
This is another step towards correcting our usage of fast-math-flags when applied on an fcmp.
In this case, we are checking for 'nnan' on the fcmp itself rather than the operand of
the fcmp. But I'm leaving that clause in until we're more confident that we can stop
relying on fcmp's FMF.
By using the more general "isKnownNeverNaN()", we gain a simplification shown on the
tests with 'uitofp' regardless of the FMF on the fcmp (uitofp never produces a NaN).
On the tests with 'fabs', we are now relying on the FMF for the fabs call instruction
in addition to the FMF on the fcmp.
Anton Afanasyev [Sun, 9 Jun 2019 12:15:47 +0000 (12:15 +0000)]
[MIR] Add simple PRE pass to MachineCSE
This is the second part of the commit fixing PR38917 (hoisting
partially redundant machine instructions). Most of the PRE (partial
redundancy elimination) and CSE work is done on LLVM IR, but some of the
redundancy arises during DAG legalization. Machine CSE is not enough
to deal with it. This simple PRE implementation works a little bit
intricately: it runs before CSE, looking for partial redundancy
and transforming it into full redundancy, anticipating that the next
CSE step will eliminate this created redundancy. If CSE doesn't
eliminate it, the created instruction remains dead and is eliminated
later by the Remove Dead Machine Instructions pass.
The third part of the commit is supposed to refactor MachineCSE,
to make it clearer and to merge MachinePRE with MachineCSE,
so one need not rely on the later Remove Dead pass to clear instructions
not eliminated by CSE.
First step: https://reviews.llvm.org/D54839
Fixes llvm.org/PR38917
This is a fixed recommit of r361356 after a PowerPC64 multistage build failure.
[CaptureTracking] Don't let comparisons against null escape inbounds pointers
Pointers that are in bounds (either through dereferenceable_or_null or
through a getelementptr inbounds) cannot be captured with a comparison
against null. There is no way to construct a pointer that is still in
bounds but also NULL.
This helps safe languages that insert null checks before load/store
instructions. Without this patch, almost all pointers would be
considered captured even for simple loads. With this patch, an icmp with
null will not be seen as escaping as long as certain conditions are met.
There was a lot of discussion about this patch. See the Phabricator
thread for details.
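As a hypothetical illustration (not from the patch's tests) of the compiler-inserted null checks this helps:
#include <cstdlib>
// 'p' is only compared against null and then dereferenced; with this patch
// the comparison no longer causes the pointer to be treated as captured
// (assuming the inbounds/dereferenceable conditions above hold).
int loadChecked(int *p) {
  if (p == nullptr)
    std::abort(); // null check inserted by a safe language's front end
  return *p;
}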
Sanjay Patel [Sat, 8 Jun 2019 15:12:33 +0000 (15:12 +0000)]
[InstSimplify] enhance fcmp fold with never-nan operand
This is 1 step towards correcting our usage of fast-math-flags when applied on an fcmp.
In this case, we are checking for 'nnan' on the fcmp itself rather than the operand of
the fcmp. But I'm leaving that clause in until we're more confident that we can stop
relying on fcmp's FMF.
By using the more general "isKnownNeverNaN()", we gain a simplification shown on the
tests with 'uitofp' regardless of the FMF on the fcmp (uitofp never produces a NaN).
On the tests with 'fabs', we are now relying on the FMF for the fabs call instruction
in addition to the FMF on the fcmp.
I'll update the 'ult' case below here as a follow-up assuming no problems here.
David Green [Sat, 8 Jun 2019 10:32:53 +0000 (10:32 +0000)]
[ARM] Adjust isLegalT1AddressImmediate for non-legal types
Types such as float and i64 do not have legal loads in Thumb1, but will still
be loaded with an LDR (or potentially multiple LDRs). As such we can treat the
cost of addressing mode calculations the same as an i32 and get some optimisation
benefits.
David Green [Sat, 8 Jun 2019 10:18:23 +0000 (10:18 +0000)]
[ARM] Add MVE addressing to isLegalT2AddressImmediate
Now with MVE being added, we can add the vector addressing mode costs for it.
These are generally imm7 multiplied by the size of the type being loaded /
stored.
David Green [Sat, 8 Jun 2019 10:09:02 +0000 (10:09 +0000)]
[ARM] Add fp16 addressing to isLegalT2AddressImmediate
The fp16 version of VLDR takes an imm8 multiplied by 2. This updates the costs
to account for that, and adds extra testing. It is dependent upon hasFPRegs16,
as this is what the load/store instructions require.
David Green [Sat, 8 Jun 2019 09:36:49 +0000 (09:36 +0000)]
[ARM] Add HasNEON for all Neon patterns in ARMInstrNEON.td. NFCI
We are starting to add an entirely separate vector architecture to the ARM
backend. To do that we need at least some separation between the existing NEON
and the new MVE code. This patch just goes through the Neon patterns and
ensures that they are predicated on HasNEON, giving MVE a stable place to start
from.
No tests yet as this is largely an NFC, and we don't have the other target that
will treat any of these instructions as legal.
Jonas Paulsson [Sat, 8 Jun 2019 06:19:15 +0000 (06:19 +0000)]
[SystemZ, RegAlloc] Favor 3-address instructions during instruction selection.
This patch aims to reduce spilling and register moves by using the 3-address
versions of instructions by default instead of the 2-address equivalent
ones. It seems that both spilling and register moves generally improve
noticeably.
Regalloc hints are passed to increase conversions to 2-address instructions
which are done in SystemZShortenInst.cpp (after regalloc).
Since the SystemZ reg/mem instructions are 2-address (dst and lhs regs are
the same), foldMemoryOperandImpl() can no longer trivially fold a spilled
source register since the reg/reg instruction is now 3-address. In order to
remedy this, new 3-address pseudo memory instructions are used to perform the
folding only when the dst and lhs virtual registers are known to be allocated
to the same physreg. In order not to let MachineCopyPropagation run and
change registers on these transformed instructions (which are now 3-address),
a new target pass called SystemZPostRewrite.cpp is run just after
VirtRegRewriter; it immediately lowers the pseudo to a target instruction.
If it had been possible to insert a COPY instruction and change a
register operand (convert to 2-address) in foldMemoryOperandImpl() while
trusting that the caller (e.g. InlineSpiller) would update/repair the
involved LiveIntervals, the solution involving pseudo instructions would not
have been needed. This is perhaps a potential improvement (see the Phabricator
post).
Common code changes:
* A new hook TargetPassConfig::addPostRewrite() is utilized to be able to run a
target pass immediately before MachineCopyPropagation.
* VirtRegMap is passed as an argument to foldMemoryOperand().
Seiya Nuta [Sat, 8 Jun 2019 01:22:54 +0000 (01:22 +0000)]
[llvm-objcopy][MachO] Recompute and update offset/size fields in the writer
Summary:
Recompute and update offset/size fields so that we can implement llvm-objcopy options like --only-section.
This patch is the first step and focuses on supporting load commands that are covered by existing tests: executable files and
dynamic libraries are not supported.
Mike Spertus [Sat, 8 Jun 2019 00:23:08 +0000 (00:23 +0000)]
Visualizer for APInt and remove obsolete visualizer
Add a visualizer for the simple case of APInt (unsigned values < 2^64),
as will be required for the Clang ConstantArrayType visualizer.
Also, remove the obsolete VS2013 SmallVectorVisualizer, as VS2013
is no longer supported.
Amara Emerson [Sat, 8 Jun 2019 00:05:17 +0000 (00:05 +0000)]
Factor out SelectionDAG's switch analysis and lowering into a separate component.
In order for GlobalISel to re-use the significant amount of analysis and
optimization code in SDAG's switch lowering, we first have to extract it and
create an interface to be used by both frameworks.
Reid Kleckner [Fri, 7 Jun 2019 22:05:12 +0000 (22:05 +0000)]
[COFF] Fix /export:foo=bar when bar is a weak alias
Summary:
When handling exports from the command line or from .def files, the
linker does a "fuzzy" string lookup to allow finding mangled symbols.
However, when the symbol is re-exported under a new name, the linker has
to transfer the decorations from the exported symbol over to the new
name. This is implemented by taking the mangled symbol that was found in
the object and replacing the original symbol name with the export name.
Before this patch, LLD implemented the fuzzy search by adding an
undefined symbol with the unmangled name, and then during symbol
resolution, checking if similar mangled symbols had been added after the
last round of symbol resolution. If so, LLD makes the original symbol a
weak alias of the mangled symbol. Later, to get the original symbol
name, LLD would look through the weak alias and forward it on to the
import library writer, which copies the symbol decorations. This
approach doesn't work when bar is itself a weak alias, as is the case in
asan. It's especially bad when the aliasee of bar contains the string
"bar"; consider "bar_default". In this case, we would end up exporting
the symbol "foo_default" when we should've exported just "foo".
To fix this, don't look through weak aliases to find the mangled name.
Save the mangled name earlier during fuzzy symbol lookup.
[llvm-objdump] Fix Bugzilla ID 41862 to support checking addresses of disassembled object
Summary:
This fixes Bugzilla ID 41862 by checking the stop address against the
start address, handling the case where this is not a proper object to
check the disassembly against, as GNU objdump currently does.
Adrian McCarthy [Fri, 7 Jun 2019 21:14:33 +0000 (21:14 +0000)]
Fix string literals to avoid deprecation warnings in regexp patterns
In LLDB, where tests run with the debug version of Python, we get a
series of deprecation warnings because escape sequences like `\(` are
being treated as part of the string literal rather than an escape for
the regexp pattern.
Alina Sbirlea [Fri, 7 Jun 2019 20:43:55 +0000 (20:43 +0000)]
[DomTreeUpdater] Add all insert before all delete updates to reduce compile time.
Summary:
The cleanup in D62751 introduced a compile-time regression due to the way DT updates are performed.
Add all insert edges then all delete edges in DTU to match the previous compile time.
Compile time on the test provided by @mstorsjo before and after this patch on my machine:
113.046s vs 35.649s
Repro: clang -target x86_64-w64-mingw32 -c -O3 glew-preproc.c; on https://martin.st/temp/glew-preproc.c.
Lang Hames [Fri, 7 Jun 2019 19:33:51 +0000 (19:33 +0000)]
[ORC] Update symbol lookup to use a single callback with a required symbol state
rather than two callbacks.
The asynchronous lookup API (which the synchronous lookup API wraps for
convenience) used to take two callbacks: OnResolved (called once all requested
symbols had an address assigned) and OnReady (called once all requested
symbols were safe to access). This patch updates the asynchronous lookup API to
take a single 'OnComplete' callback and a required state (SymbolState) to
determine when the callback should be made. This simplifies the common use case
(where the client is interested in a specific state) and will generalize neatly
as new states are introduced to track runtime initialization of symbols.
Clients who were making use of both callbacks in a single query will now need to
issue two queries (one for SymbolState::Resolved and another for
SymbolState::Ready). Synchronous lookup API clients who were explicitly passing
the WaitOnReady argument will now need to pass a SymbolState instead (for
'WaitOnReady == true' use SymbolState::Ready, for 'WaitOnReady == false' use
SymbolState::Resolved). Synchronous lookup API clients who were using default
argument values should see no change.
Revert "[ADT] Enable set_difference() to be used on StringSet"
This reverts commit 0bddef79019a23ab14fcdb27028e55e484674c88; it was
causing ASan failures on the sanitizer bots:
http://lab.llvm.org:8011/builders/sanitizer-x86_64-linux-fast/builds/32800
llvm-objcopy: Implement --extract-partition and --extract-main-partition.
This implements the functionality described in
https://lld.llvm.org/Partitions.html. It works as follows:
- Reads the section headers using the ELF header at file offset 0;
- If extracting a loadable partition:
  - Finds the section containing the required partition ELF header by looking it up in the section table;
  - Reads the ELF and program headers from the section.
- If extracting the main partition:
  - Reads the ELF and program headers from file offset 0.
- Filters the section table according to which sections are in the program headers that it read:
  - If ParentSegment != nullptr or the section is not SHF_ALLOC, then it goes in.
  - Sections containing partition ELF headers or program headers are excluded, as there are no headers for these in ordinary ELF files.
Before this patch we used either a single thread, or the number of
hardware threads available, effectively ignoring the number of threads
specified on the command line.
James Henderson [Fri, 7 Jun 2019 16:43:44 +0000 (16:43 +0000)]
[docs]Move llvm-readobj from "Developer Tools" to "Basic Commands"
On the Command Guide page, there are multiple sections with links to the
different documentation pages available for LLVM tools. The "Basic
Tools" section includes tools like llvm-objdump, llvm-nm and so on. The
"Developer Tools" section contains things like FileCheck and lit. This
change moves llvm-readobj into the former block, from the latter.
Sanjay Patel [Fri, 7 Jun 2019 16:09:54 +0000 (16:09 +0000)]
[Analysis] simplify code for getSplatValue(); NFC
AFAIK, this is only currently called by TTI, but it could be
used from instcombine or CGP to help solve problems like:
https://bugs.llvm.org/show_bug.cgi?id=37428
https://bugs.llvm.org/show_bug.cgi?id=42174
Nico Weber [Fri, 7 Jun 2019 16:06:27 +0000 (16:06 +0000)]
Attempt to fix nm-archive.test after r362798
llvm-lib now needs a `target triple` for bitcode, so add a new file
that's like trivial.ll but has one, and use that in the test.
(trivial.ll had a comment that looked like it wasn't supposed to be used
in tests directly, so I don't want to change that file.)
David Tenty [Fri, 7 Jun 2019 15:45:25 +0000 (15:45 +0000)]
Build with _XOPEN_SOURCE defined on AIX
Summary:
It is useful to build with _XOPEN_SOURCE defined on AIX, enabling X/Open
and POSIX compatibility mode, to work around stray macros and other
bugs in the headers provided by the system and build compiler.
This patch adds the config to cmake to build with _XOPEN_SOURCE defined
on AIX with a few exceptions. Google Test internals require access to
platform specific thread info constructs on AIX so in that case we build
with _ALL_SOURCE defined instead. Libclang also uses a header which needs
_ALL_SOURCE on AIX, so we leave that as is as well.
We also build on AIX with the large file API and do the CMake
header checks with X/OPEN definitions so the results are consistent with
the environment that will be present in the build.
When we call checkResourceLimit in bumpCycle or bumpNode and we
know the resource count has just reached the limit (the equations
are equal), we should return true to mark that we are resource
limited for the next schedule, or else we might continue to schedule
in favor of latency for one more schedule and create a schedule that
actually overbooks the resource.
When we call checkResourceLimit to estimate the resource limit before
scheduling, we don't need to return true even if the equations are
equal, as it shouldn't limit the schedule.
Matt Arsenault [Fri, 7 Jun 2019 13:33:34 +0000 (13:33 +0000)]
TailDuplicator: Remove no-op analyzeBranch call
This could fail, which looked concerning. However, nothing was actually
using the results of this. I assume this was intended to use the
anti-feature of analyzeBranch of removing instructions, but it wasn't
actually calling it with AllowModify = true.