Craig Topper [Mon, 24 Jun 2019 17:28:41 +0000 (17:28 +0000)]
[X86] Don't emit a vzext_movl in LowerBuildVectorv16i8/LowerBuildVectorv8i16 if there are no zeroes in the vector we're building.
In LowerBuildVectorv16i8 we took care to use an any_extend if the first pair is in the lower 16 bits of the vector and no elements are 0. So bits [31:16] will be undefined. But we still emitted a vzext_movl to ensure that bits [127:32] are 0. If we don't need any zeroes we should be consistent and make all of bits [127:16] undefined.
In LowerBuildVectorv8i16 we can just delete the vzext_movl code because we only use the scalar_to_vector when there are no zeroes. So the vzext_movl is always unnecessary.
Found while investigating whether (vzext_movl (scalar_to_vector (loadi32))) patterns are necessary. At least one of the cases where they were necessary was when the loadi32 matched a 32-bit aligned 16-bit extload. It seemed weird that we required vzext_movl for that case.
Craig Topper [Mon, 24 Jun 2019 17:28:26 +0000 (17:28 +0000)]
[X86] Cleanups and safety checks around isFNEG
This patch does a few things to start cleaning up the isFNEG function.
-Remove the Op0/Op1 peekThroughBitcast calls that seem unnecessary. getTargetConstantBitsFromNode has its own peekThroughBitcast inside. And we have a separate peekThroughBitcast on the return value.
-Add a check of the scalar size after the first peekThroughBitcast to ensure we haven't changed the element size and just did something like f32->i32 or f64->i64.
-Remove an unnecessary check that Op1's type is floating point after the peekThroughBitcast. We're just going to look for a bit pattern from a constant. We don't care about its type.
-Add VT checks on several places that consume the return value of isFNEG. Due to the peekThroughBitcasts inside, the type of the return value isn't guaranteed. So it's not safe to use it to build other nodes without ensuring the type matches the type being used to build the node. We might be able to replace these checks with bitcasts instead, but I don't have a test case so a bail-out check seemed better for now.
Ayke van Laethem [Mon, 24 Jun 2019 16:23:17 +0000 (16:23 +0000)]
[bindings/go] Add debug information accessors
Add debug information accessors, as provided in the following patches:
https://reviews.llvm.org/D46627 (DILocation)
https://reviews.llvm.org/D52693 (metadata kind)
https://reviews.llvm.org/D60481 (get/set debug location on a Value)
https://reviews.llvm.org/D60489 (DIScope)
The API as proposed in this patch is similar to the current Value API,
with a single root type and methods that are only valid for certain
subclasses. I have considered just implementing generic Line() calls
(that are valid on all DINodes that have a line) but the implementation
of that got a bit awkward without support from the C API. I've also
considered creating generic getters like a Metadata.DebugLoc() that
returns a DebugLoc, but there is a mismatch between the Go DI nodes in
the LLVM API and the actual DINode class hierarchy, so that's also hard
to get right (without being confusing or breaking the API).
Matt Arsenault [Mon, 24 Jun 2019 15:50:29 +0000 (15:50 +0000)]
CodeGen: Introduce a class for registers
Avoids using a plain unsigned for registers throughout codegen.
Doesn't attempt to change every register use, just something a little
more than the set needed to build after changing the return type of
MachineOperand::getReg().
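As an illustration, the shape of such a wrapper might look like the following (a hedged sketch; the in-tree class is llvm::Register and its details differ):

  // Minimal sketch of a register wrapper type.
  class Register {
    unsigned Reg = 0;

  public:
    Register() = default;
    constexpr Register(unsigned R) : Reg(R) {}

    // Implicit conversion keeps existing call sites that expect
    // `unsigned` building while the migration proceeds.
    operator unsigned() const { return Reg; }

    // In LLVM, virtual register numbers have the high bit set.
    bool isVirtual() const { return Reg & (1u << 31); }
    bool isPhysical() const { return Reg != 0 && !isVirtual(); }
  };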
Summary:
Remove the unused variable AllSGPRSpilledToVGPRs in
SIFrameLowering::processFunctionBeforeFrameFinalized to avoid:
  error: variable 'AllSGPRSpilledToVGPRs' set but not used [-Werror=unused-but-set-variable]
Sanjay Patel [Mon, 24 Jun 2019 15:20:49 +0000 (15:20 +0000)]
[InstCombine] reduce funnel-shift i16 X, X, 8 to bswap X
Prefer the more exact intrinsic to remove a use of the input value
and possibly make further transforms easier (we will still need
to match patterns with funnel-shift of wider types as pieces of
bswap, especially if we want to canonicalize to funnel-shift with
constant shift amount). Discussed in D46760.
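For illustration, the equivalence in C terms (a hedged sketch; the combine itself matches the IR fshl/fshr intrinsics):

  #include <cstdint>

  // fshl(X, X, 8) on i16 is a rotate by 8, which swaps the two bytes of X.
  uint16_t rot8(uint16_t X) { return (uint16_t)((X << 8) | (X >> 8)); }
  // ...which is exactly bswap on a 16-bit value (GCC/Clang builtin):
  uint16_t swap16(uint16_t X) { return __builtin_bswap16(X); }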
Matt Arsenault [Mon, 24 Jun 2019 14:53:56 +0000 (14:53 +0000)]
AMDGPU: Fold frame index into MUBUF
This matters for byval uses outside of the entry block, which appear
as copies.
Previously, the only folding done was during selection, which could
not see the underlying frame index. For any uses outside the entry
block, the frame index was materialized in the entry block relative to
the global scratch wave offset.
This may produce worse code in cases where the offset ends up not
fitting in the MUBUF offset field. A better heuristic would be helpful
for extreme frames.
Simon Pilgrim [Mon, 24 Jun 2019 12:47:17 +0000 (12:47 +0000)]
[DAGCombine] visitMUL - allow shift by zero in MulByConstant.
This can occur under certain circumstances when undefs are created later on in the constant multipliers (e.g. in this case due to SimplifyDemandedVectorElts). It's better to let the shift by zero occur and perform any cleanup afterward.
Bjorn Pettersson [Mon, 24 Jun 2019 12:07:11 +0000 (12:07 +0000)]
[Scalarizer] Add scalarizer support for smul.fix.sat
Summary:
Handle smul.fix.sat in the scalarizer. This is done by
adding smul.fix.sat to the set of "isTriviallyVectorizable"
intrinsics.
The addition of smul.fix.sat in isTriviallyVectorizable and
hasVectorInstrinsicScalarOpd can also be seen as a preparation
to be able to use hasVectorInstrinsicScalarOpd in ConstantFolding.
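The change amounts to adding a case to the intrinsic switch, roughly (a trimmed sketch of the function in llvm/lib/Analysis/VectorUtils.cpp; the real switch lists many more intrinsics):

  bool llvm::isTriviallyVectorizable(Intrinsic::ID ID) {
    switch (ID) {
    case Intrinsic::smul_fix:
    case Intrinsic::smul_fix_sat: // newly handled by this patch
    // ... many other trivially vectorizable intrinsics ...
      return true;
    default:
      return false;
    }
  }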
James Henderson [Mon, 24 Jun 2019 10:50:49 +0000 (10:50 +0000)]
[docs][llvm-nm] Add missing options to documentation
There were several options missing from the documentation. This patch
adds them as well as improving some wording and separating the Mach-O
only options into a separate section.
Simon Tatham [Mon, 24 Jun 2019 10:00:39 +0000 (10:00 +0000)]
[ARM] Add MVE interleaving load/store family.
This adds the family of loads and stores with names like VLD20.8 and
VST42.32, which load and store parts of multiple q-registers in such a
way that executing both VLD20 and VLD21, or all four of VLD40..VLD43,
will distribute 2 or 4 vectors' worth of memory data across the lanes
of the same number of registers but in a transposed order.
In addition to the Tablegen descriptions of the instructions
themselves, this patch also adds encode and decode support for the
QQPR and QQQQPR register classes (representing the range of loaded or
stored vector registers), and tweaks to the parsing system for lists
of vector registers to make it return the right format in this case
(since, unlike NEON, MVE regards q-registers as primitive, and not
just an alias for two d-registers).
Pavel Labath [Mon, 24 Jun 2019 09:11:24 +0000 (09:11 +0000)]
[Support] Fix error handling in DataExtractor::get[US]LEB128
Summary:
These functions are documented as not modifying the offset argument if
the extraction fails (just like other DataExtractor functions). However,
while reviewing D63591 we discovered that this is not the case -- if the
function reaches the end of the data buffer, it will just return the
value parsed until that point and set offset to point to the end of the
buffer.
This fixes the functions to act as advertised, and adds a regression
test.
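A usage sketch of the contract being fixed (hedged; `example` and its buffer are made-up names):

  #include "llvm/Support/DataExtractor.h"
  using namespace llvm;

  void example(StringRef TruncatedLEB) {
    DataExtractor Data(TruncatedLEB, /*IsLittleEndian=*/true,
                       /*AddressSize=*/8);
    uint32_t Offset = 0;
    uint64_t Value = Data.getULEB128(&Offset);
    // With this fix, Offset is still 0 here if the buffer ended
    // mid-LEB128, matching the documented behavior of the other
    // DataExtractor getters.
    (void)Value;
  }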
George Rimar [Mon, 24 Jun 2019 08:29:54 +0000 (08:29 +0000)]
[llvm-readobj/llvm-readelf] - Eliminate the elf-groups.x86_64 precompiled binary from the inputs.
We do not need the elf-groups.x86_64. In one of the tests, it was
used for no solid reason, and for the second test case we can use
YAML input with SHT_GROUP sections.
The patch performs a cleanup of one of the test cases, removes another
one completely (since during the review it was found to duplicate one
of the existing tests) and removes the precompiled binary.
Craig Topper [Sun, 23 Jun 2019 23:51:21 +0000 (23:51 +0000)]
[X86] Turn v16i16->v16i8 truncate+store into an any_extend+truncstore if we have avx512f, but not avx512bw.
Ideally we'd be able to represent this truncate as an any_extend to
v16i32 and a truncate, but SelectionDAG doesn't know how to not
fold those together.
We have isel patterns to use a vpmovzxwd+vpmovdb for the truncate,
but we aren't able to simultaneously fold the load and the store
from the isel pattern. By pulling the truncate into the store we
can successfully hide it from the DAG combiner. Then we can isel
pattern match the truncstore and load+any_extend separately.
Petr Hosek [Sun, 23 Jun 2019 23:12:10 +0000 (23:12 +0000)]
[GN] Generation failure caused by trailing space in file name
When I executed gn.py gen out/gn I got the following error:
ERROR at //compiler-rt/lib/builtins/BUILD.gn:162:7: Only source, header, and object files belong in the sources of a static_library. //compiler-rt/lib/builtins/emutls.c is not one of the valid types.
"emutls.c ",
^----------
See //compiler-rt/lib/BUILD.gn:3:5: which caused the file to be included.
"//compiler-rt/lib/builtins",
^---------------------------
It turns out that the latest gn doesn't accept ill-formatted file names, and the emutls.c entry above has a trailing space.
Removing the trailing space fixes the error.
Philip Reames [Sun, 23 Jun 2019 17:06:57 +0000 (17:06 +0000)]
[IndVars] Remove dead instructions after folding trivial loop exit
In rL364135, I taught IndVars to fold exiting branches in loops with a zero backedge taken count (i.e. loops that only run one iteration). This extends that to eliminate the dead comparison left around.
Sanjay Patel [Sun, 23 Jun 2019 14:22:37 +0000 (14:22 +0000)]
[InstCombine] squash is-power-of-2 that uses ctpop
This is another intermediate IR step towards solving PR42314:
https://bugs.llvm.org/show_bug.cgi?id=42314
We can test if a value is power-of-2-or-0 using ctpop(X) < 2,
so combining that with a non-zero check of the input is the
same as testing if exactly 1 bit is set:
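For illustration, the same equivalence in C terms (a hedged sketch; the actual combine matches the llvm.ctpop intrinsic in IR):

  #include <cstdint>

  bool isPow2_before(uint32_t X) {
    return __builtin_popcount(X) < 2 && X != 0; // two uses of X
  }
  bool isPow2_after(uint32_t X) {
    return __builtin_popcount(X) == 1; // one use, one compare
  }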
Craig Topper [Sun, 23 Jun 2019 07:00:46 +0000 (07:00 +0000)]
[SelectionDAG] Remove the code that attempts to calculate the alignment for the second half of a split masked load/store.
The code divides the alignment by 2 if the original alignment is
equal to the original VT size. But this wouldn't be correct
if the alignment was larger than the VT size.
The memory operand object already takes care of calling MinAlign
on the base alignment and the memory pointer offset. So we don't
need any special code at all.
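For reference, llvm::MinAlign computes the largest alignment guaranteed by both a base alignment and an offset; this matches the definition in MathExtras.h, with illustrative worked values:

  #include <cstdint>

  // Lowest set bit of (A | B): the largest power of two dividing both.
  constexpr uint64_t MinAlign(uint64_t A, uint64_t B) {
    return (A | B) & (1 + ~(A | B));
  }
  // e.g. a 64-byte-aligned base split at offset 32: MinAlign(64, 32) == 32,
  // with no special-casing needed for alignments larger than the VT size.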
Craig Topper [Sun, 23 Jun 2019 06:06:04 +0000 (06:06 +0000)]
[X86][SelectionDAG] Cleanup and simplify masked_load/masked_store in tablegen. Use more precise PatFrags for scalar masked load/store.
Rename masked_load/masked_store to masked_ld/masked_st to discourage
their direct use. We need to check truncating/extending and
compressing/expanding before using them. This revealed that
our scalar masked load/store patterns were misusing these.
With those out of the way, renamed masked_load_unaligned and
masked_store_unaligned to remove the "_unaligned". We didn't
check the alignment anyway so the name was somewhat misleading.
Make the aligned versions inherit from masked_load/store instead of
from a separate identical version. Merge the 3 different alignment
PatFrags into a single version that uses the VT from the SDNode to
determine the size that the alignment needs to match.
Keno Fischer [Sun, 23 Jun 2019 00:29:59 +0000 (00:29 +0000)]
[Support] Fix build under Emscripten
Summary:
Emscripten's libc doesn't define MNT_LOCAL, thus causing a build
failure in the fallback path. However, to the best of my knowledge,
it also doesn't support remote file system mounts, so we may simply
return `true` here (as we do for e.g. Fuchsia). With this fix, the
core LLVM libraries build correctly under emscripten (though some
of the tools and utils do not).
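The fix is a small addition to the fallback in lib/Support/Unix/Path.inc, roughly (a hedged sketch of the shape):

  static bool is_local_impl(struct STATVFS &Vfs) {
  #if defined(__EMSCRIPTEN__)
    // Emscripten doesn't support remote file system mounts.
    return true;
  #else
    return !!(STATVFS_F_FLAG(Vfs) & MNT_LOCAL);
  #endif
  }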
Philip Reames [Sat, 22 Jun 2019 17:54:25 +0000 (17:54 +0000)]
Exploit a zero LoopExit count to eliminate loop exits
This turned out to be surprisingly effective. I was originally doing this just for completeness' sake, but it seems like there are a lot of cases where SCEV's exit count reasoning is stronger than its isKnownPredicate reasoning.
Once this is in, I'm thinking about trying to build on the same infrastructure to eliminate provably untaken checks. There may be something generally interesting here.
Don Hinton [Sat, 22 Jun 2019 17:22:50 +0000 (17:22 +0000)]
[CommandLine] Remove OptionCategory and SubCommand caches from the Option class.
Summary:
This change processes `OptionCategory`s and `SubCommand`s as they
are seen instead of caching them in the Option class and processing
them later. Doing so simplifies the work needed to be done by the Global
parser and significantly reduces the size of the Option class to a mere 64
bytes.
Removing the `OptionCategory` cache saved 24 bytes, and removing
the `SubCommand` cache saved an additional 48 bytes, for a total
reduction of 72 bytes.
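For context, a hedged sketch of typical registration (MyCategory, RunCmd, and Verbose are made-up names); the category and subcommand are now processed as the option is constructed rather than cached inside Option:

  #include "llvm/Support/CommandLine.h"
  using namespace llvm;

  static cl::OptionCategory MyCategory("my-tool options");
  static cl::SubCommand RunCmd("run", "run the tool");
  static cl::opt<bool> Verbose("verbose", cl::desc("Print more output"),
                               cl::cat(MyCategory), cl::sub(RunCmd));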
Hubert Tong [Sat, 22 Jun 2019 16:03:29 +0000 (16:03 +0000)]
[NFC] Fix indentation in PPCAsmPrinter.cpp
After r248261, the indentation switches, inside a namespace definition,
between indenting and not indenting one level for that namespace; the
abomination occurs in the middle of a class definition. Fix that.
AArch64: Add support for reading pc using llvm.read_register.
This is useful for allowing code to efficiently take an address
that can be later mapped onto debug info. Currently the hwasan
pass achieves this by taking the address of the current function:
http://llvm-cs.pcc.me.uk/lib/Transforms/Instrumentation/HWAddressSanitizer.cpp#921
but this costs two instructions (plus a GOT entry in PIC code) per function
with stack variables. This will allow the cost to be reduced to a single
instruction.
Yuanfang Chen [Sat, 22 Jun 2019 00:22:57 +0000 (00:22 +0000)]
[llvm-objdump] Move the --start-address >= --stop-address check out of the -d code.
Summary:
Move it into the `main` function so the check applies to all actions the
user may take with llvm-objdump; notably, -r and -s in addition to the existing -d.
AArch64: Prefer FP-relative debug locations in HWASANified functions.
To help produce better diagnostics for stack use-after-return, we'd like
to be able to determine the addresses of each HWASANified function's local
variables given a small amount of information recorded on entry to the
function. Currently we require all HWASANified functions to use frame pointers
and record (PC, FP) on function entry. This works better than recording SP
because FP cannot change during the function, unlike SP which can change
e.g. due to dynamic alloca.
However, most variables currently end up using SP-relative locations in their
debug info. This prevents us from recomputing the address of most variables
because the distance between SP and FP isn't recorded in the debug info. To
address this, make the AArch64 backend prefer FP-relative debug locations
when producing debug info for HWASANified functions.
Tom Tan [Fri, 21 Jun 2019 23:38:05 +0000 (23:38 +0000)]
[COFF, ARM64] Fix encoding of debugtrap for Windows
On Windows ARM64, the intrinsic __debugbreak is compiled into brk #0xF000,
which is mapped to llvm.debugtrap in Clang. brk #0xF000 is the defined
breakpoint instruction on ARM64, recognized by the Windows debugger and
exception handling code, so llvm.debugtrap should map to it instead of
redirecting to llvm.trap (brk #1) as in the default implementation.
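In source terms (illustrative):

  void stop_here() {
    // MSVC/clang-cl intrinsic; lowers to llvm.debugtrap, which after this
    // fix emits brk #0xF000 on Windows ARM64 instead of llvm.trap's brk #1.
    __debugbreak();
  }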
Julian Lettner [Fri, 21 Jun 2019 21:01:39 +0000 (21:01 +0000)]
[ASan] Use dynamic shadow on 32-bit iOS and simulators
The VM layout on iOS is not stable between releases. On 64-bit iOS and
its derivatives we use a dynamic shadow offset that enables ASan to
search for a valid location for the shadow heap on process launch rather
than hardcoding it.
This commit extends that approach for 32-bit iOS plus derivatives and
their simulators.
Craig Topper [Fri, 21 Jun 2019 19:10:21 +0000 (19:10 +0000)]
[X86] Add DAG combine to turn (vzmovl (insert_subvector undef, X, 0)) into (insert_subvector allzeros, (vzmovl X), 0)
128/256 bit scalar_to_vectors are canonicalized to (insert_subvector undef, (scalar_to_vector), 0). We have isel patterns that try to match this pattern being used by a vzmovl to use a 128-bit instruction and a subreg_to_reg.
This patch detects the insert_subvector undef portion of this and pulls it through the vzmovl, creating a narrower vzmovl and an insert_subvector allzeroes. We can then match the insertsubvector into a subreg_to_reg operation by itself. Then we can fall back on existing (vzmovl (scalar_to_vector)) patterns.
Note, while the scalar_to_vector case is the motivating case I didn't restrict to just that case. I'm also wondering about shrinking any 256/512 vzmovl to an extract_subvector+vzmovl+insert_subvector(allzeros), but I fear that would have bad implications for shuffle combining.
I also think there is more canonicalization we can do with vzmovl with loads or scalar_to_vector with loads to create vzload.
Craig Topper [Fri, 21 Jun 2019 18:49:21 +0000 (18:49 +0000)]
[X86] Add a debug print of the node in the default case for unhandled opcodes in ReplaceNodeResults.
This should be unreachable, but bugs can make it reachable. This
adds a debug print so we can see the bad node in the output when
the llvm_unreachable triggers.
Amara Emerson [Fri, 21 Jun 2019 18:10:38 +0000 (18:10 +0000)]
[GlobalISel][IRTranslator] Change switch table translation to generate jump tables and range checks.
This change makes use of the newly refactored SwitchLoweringUtils code from
SelectionDAG in order to generate jump tables and range checks where appropriate.
Much of this code is ported from SDAG with some modifications. We generate
G_JUMP_TABLE and G_BRJT instructions when JT opportunities are found. This means
that targets which previously relied on the naive one MBB per case stmt
translation will now start falling back until they add support for the new opcodes.
For range checks, we don't generate any previously unused operations. This
just recognizes contiguous ranges of case values and generates a single block per
range. Single case value blocks are just a special case of ranges so we get that
support almost for free.
There are still some optimizations missing that I haven't ported over, and
bit-tests are also unimplemented. This patch series is already complex enough.
Actual arm64 support for selection of jump tables is coming in a later patch.
Simon Pilgrim [Fri, 21 Jun 2019 17:57:01 +0000 (17:57 +0000)]
[SLP] Look-ahead operand reordering heuristic.
This patch introduces a new heuristic for guiding operand reordering. The new "look-ahead" heuristic can look beyond the immediate predecessors. This helps break ties when the immediate predecessors have identical opcodes (see lit test for an example).
Committed on behalf of @vporpo (Vasileios Porpodas)
Craig Topper [Fri, 21 Jun 2019 17:24:21 +0000 (17:24 +0000)]
[X86] Use vmovq for v4i64/v4f64/v8i64/v8f64 vzmovl.
We already use vmovq for v2i64/v2f64 vzmovl. But we were using
blendpd+xorpd for v4i64/v4f64/v8i64/v8f64 when optimizing for speed,
or movsd+xorpd when optimizing for size.
I think the blend with 0 or movss/d is only needed for
vXi32 where we don't have an instruction that can move 32
bits from one xmm to another while zeroing upper bits.
Amara Emerson [Fri, 21 Jun 2019 16:43:50 +0000 (16:43 +0000)]
[AArch64][GlobalISel] Make s8 and s16 G_CONSTANTs legal.
We sometimes get poor code size because constants of types < 32b are legalized
as 32 bit G_CONSTANTs with a truncate to fit. This works but means that the
localizer can no longer sink them (although it's possible to extend it to do so).
On AArch64 however s8 and s16 constants can be selected in the same way as s32
constants, with a mov pseudo into a W register. If we make s8 and s16 constants
legal then we can avoid unnecessary truncates, they can be CSE'd, and the
localizer can sink them as normal.
There is a caveat: if the user of a smaller constant has to widen the sources,
we end up with an anyext of the smaller typed G_CONSTANT. This can cause
regressions because of the additional extend and missed pattern matching. To
remedy this, there's a new artifact combiner to generate the wider G_CONSTANT
if it's legal for the target.
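The rule change in AArch64LegalizerInfo is roughly the following (a hedged fragment; s8/s16/s32/s64/p0 are the usual LLT types defined earlier in that file, and the real rule set has more detail):

  getActionDefinitionsBuilder(G_CONSTANT)
      .legalFor({p0, s8, s16, s32, s64}); // s8 and s16 newly legal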
George Rimar [Fri, 21 Jun 2019 14:15:15 +0000 (14:15 +0000)]
[llvm-objcopy] - Get rid of dynrel.elf precompiled binary from inputs.
We do not have to spread using the precompiled binaries in the tests,
when we can use YAML. This patch removes the dynrel.elf binary and adds
a few comments to the test cases.
Jay Foad [Fri, 21 Jun 2019 14:10:18 +0000 (14:10 +0000)]
[Scalarizer] Propagate IR flags
Summary:
The motivation for this was to propagate fast-math flags like nnan and
ninf on vector floating point operations to the corresponding scalar
operations to take advantage of follow-on optimizations. But I think
the same argument applies to all of our IR flags: if they apply to the
vector operation then they also apply to all the individual scalar
operations, and they might enable follow-on optimizations.
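The mechanism is essentially Instruction::copyIRFlags: each scalar clone copies the flags from the original vector operation (a hedged fragment; Builder, Op0Lane, Op1Lane, and VecInst stand in for the scalarizer's own state):

  Value *Lane = Builder.CreateFAdd(Op0Lane, Op1Lane);
  if (auto *LaneInst = dyn_cast<Instruction>(Lane))
    LaneInst->copyIRFlags(&VecInst); // propagates nnan, ninf, nsw, exact, ...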
Sam Elliott [Fri, 21 Jun 2019 13:36:09 +0000 (13:36 +0000)]
[RISCV] Add RISCV-specific TargetTransformInfo
Summary:
LLVM allows targets to provide information that guides optimisations
made to LLVM IR. This is done with callbacks on a TargetTransformInfo object.
This patch adds a TargetTransformInfo class for RISC-V. This will allow us to
implement RISC-V specific callbacks as they become necessary.
This commit also adds the getIntImmCost callbacks, and tests them with a simple
constant hoisting test. Our immediate costs are on the conservative side, for
the moment, but we prevent hoisting in most circumstances anyway.
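The shape of the new hook, for illustration (a hedged sketch; the real implementation inspects the immediate against RISC-V materialization sequences):

  int RISCVTTIImpl::getIntImmCost(const APInt &Imm, Type *Ty) {
    if (Imm == 0)
      return TTI::TCC_Free; // x0 gives us zero for free
    // Conservative: treat everything else as cheap but not free, which
    // discourages hoisting in most circumstances.
    return TTI::TCC_Basic;
  }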
Andrea Di Biagio [Fri, 21 Jun 2019 13:32:54 +0000 (13:32 +0000)]
[MCA][Bottleneck Analysis] Teach how to compute a critical sequence of instructions based on the simulation.
This patch teaches the bottleneck analysis how to identify and print the most
expensive sequence of instructions according to the simulation. Fixes PR37494.
The goal is to help users identify the sequence of instruction which is most
critical for performance.
A dependency graph is internally used by the bottleneck analysis to describe
data dependencies and processor resource interferences between instructions.
There is one node in the graph for every instruction in the input assembly
sequence. The number of nodes in the graph is independent of the number of
iterations simulated by the tool. This means that a single node of the graph
represents all the possible instances of the same instruction across the
simulated iterations.
Edges are dynamically "discovered" by the bottleneck analysis by observing
instruction state transitions and "backend pressure increase" events generated
by the Execute stage. Information from the events is used to identify critical
dependencies, and materialize edges in the graph. A dependency edge is uniquely
identified by a pair of node identifiers plus an instance of struct
DependencyEdge::Dependency (which provides more details about the actual
dependency kind).
The bottleneck analysis internally ranks dependency edges based on their impact
on the runtime (see field DependencyEdge::Dependency::Cost). To this end, each
edge of the graph has an associated cost. By default, the cost of an edge is a
function of its latency (in cycles). In practice, the cost of an edge is also a
function of the number of cycles where the dependency has been seen as
'contributing to backend pressure increases'. The idea is that the higher the
cost of an edge, the higher is the impact of the dependency on performance. To
put it in another way, the cost of an edge is a measure of criticality for
performance.
Note that the same edge may be found in multiple iterations of the simulated loop.
The logic that adds new edges to the graph checks if an equivalent dependency
already exists (duplicate edges are not allowed). If an equivalent dependency
edge is found, field DependencyEdge::Frequency of that edge is incremented by
one, and the new cost is cumulatively added to the existing edge cost.
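The bookkeeping described above looks roughly like this (a hedged sketch; see llvm/tools/llvm-mca/Views/BottleneckAnalysis.h for the real definitions):

  #include <cstdint>

  struct DependencyEdge {
    enum DependencyType { DT_REGISTER, DT_MEMORY, DT_RESOURCE };
    struct Dependency {
      DependencyType Type;
      uint64_t ResourceOrRegID;
      uint64_t Cost; // cumulative; weighted by pressure-increase cycles
    };
    Dependency Dep;
    unsigned FromIID, ToIID; // node (instruction) identifiers
    unsigned Frequency;      // iterations in which this edge was observed
  };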
At the end of simulation, costs are propagated to nodes through the edges of the
graph. The goal is to identify a critical sequence from a node of the root set
(composed of the nodes of the graph with no predecessors) to a 'sink node' with no
successors. Note that the graph is intentionally kept acyclic to minimize the
complexity of the critical sequence computation algorithm (complexity is
currently linear in the number of nodes in the graph).
The critical path is finally computed as a sequence of dependency edges. For
edges describing processor resource interferences, the view also prints a
so-called "interference probability" value (by dividing field
DependencyEdge::Frequency by the total number of iterations).
Examples of critical sequence computations can be found in tests added/modified
by this patch.
On output streams that support colored output, instructions from the critical
sequence are rendered with a different color.
Strictly speaking, the analysis conducted by the bottleneck analysis view is not
a critical path analysis. The cost of an edge doesn't only depend on the
dependency latency. More importantly, the cost of the same edge may be computed
differently in different iterations.
Dependencies are discovered dynamically, based on the events generated
by the simulator, and their number is not fixed. This is
especially true for edges that model processor resource interferences; an
interference may not occur in every iteration. For that reason, it makes sense
to also print out a "probability of interference".
By construction, the accuracy of this analysis (as always) is strongly dependent
on the simulation (and therefore the quality of the information available in the
scheduling model).
That being said, the critical sequence effectively identifies a performance
criticality. Instructions from that sequence are expected to have a very big
impact on performance. So, users can take advantage of this information to focus
their attention on specific interactions between instructions.
In my experience, it works quite well in practice, and produces useful
output (in a reasonable amount of time).
These instructions let you load half a vector register at once from
two general-purpose registers, or vice versa.
The assembly syntax for these instructions mentions the vector
register name twice. For the move _into_ a vector register, the MC
operand list also has to mention the register name twice (once as the
output, and once as an input to represent where the unchanged half of
the output register comes from). So we can conveniently assign one of
the two asm operands to be the output $Qd, and the other $QdSrc, which
avoids confusing the auto-generated AsmMatcher too much. For the move
_from_ a vector register, there's no way to get round the fact that
both instances of that register name have to be inputs, so we need a
custom AsmMatchConverter to avoid generating two separate output MC
operands. (And even that wouldn't have worked if it hadn't been for
D60695.)
Simon Tatham [Fri, 21 Jun 2019 13:17:08 +0000 (13:17 +0000)]
[ARM] Add MVE vector instructions that take a scalar input.
This adds the `MVE_qDest_rSrc` superclass and all its instances, plus
a few other instructions that also take a scalar input register or two.
I've also belatedly added custom diagnostic messages to the operand
classes for odd- and even-numbered GPRs, which required matching
changes in two of the existing MVE assembly test files.
Paul Robinson [Fri, 21 Jun 2019 13:10:19 +0000 (13:10 +0000)]
Fix a crash with assembler source and -g.
llvm-mc or clang with -g normally produces debug info describing the
assembler source itself; however, if that source already contains some
.file/.loc directives, we should instead emit the debug info described
by those directives. For certain assembler sources seen in the wild
(particularly in the Chrome build) this was causing a crash due to
incorrect assumptions about legal sequences of assembler source text.
Simon Pilgrim [Fri, 21 Jun 2019 12:42:39 +0000 (12:42 +0000)]
[X86] X86ISD::ANDNP is a (non-commutative) binop
The sat add/sub tests still have unnecessary extract_subvector((vandnps ymm, ymm), 0) uses that should be split to (vandnps (extract_subvector(ymm, 0), extract_subvector(ymm, 0))), but it's getting better.
Simon Tatham [Fri, 21 Jun 2019 12:13:59 +0000 (12:13 +0000)]
[ARM] Add a batch of similarly encoded MVE instructions.
Summary:
This adds the `MVE_qDest_qSrc` superclass and all instructions that
inherit from it. It's not the complete class of _everything_ with a
q-register as both destination and source; it's a subset of them that
all have similar encodings (but it would have been hopelessly unwieldy
to call it anything like MVE_111x11100).
This category includes add/sub with carry; long multiplies; halving
multiplies; multiply and accumulate, and some more complex
instructions.