Embedded Trace Extension and Trace Buffer Extension are optional
future architecture extensions.
(cf. https://developer.arm.com/architectures/cpu-architecture/a-profile/exploration-tools)
Their system registers are documented here:
https://developer.arm.com/docs/ddi0601/a
ETE shares register names with ETM. One exception is the ETE
TRCEXTINSELR0 register, which has the same encoding as the ETM
TRCEXTINSELR register (but different semantics). This patch treats
them as aliases: the assembler will accept both names, emitting the
identical encoding, and the disassembler will keep disassembling
to TRCEXTINSELR.
Simon Pilgrim [Fri, 26 Jul 2019 09:13:29 +0000 (09:13 +0000)]
[SelectionDAG] GetDemandedBits - update OR/XOR ops to just call SimplifyMultipleUseDemandedBits.
Eventually all of these will be moved over, but at the moment we create nodes in the GetDemandedBits recursion, which causes regressions when we try to remove them all.
Sam Parker [Fri, 26 Jul 2019 08:15:01 +0000 (08:15 +0000)]
[ARM][LowOverheadLoops] Add CPSR defs
Both WhileLoopStart and LoopEnd may get turned into a cmp and br pair,
so add an implicit CPSR def to these pseudo instructions in case WLS
and LE aren't generated.
Pengfei Wang [Fri, 26 Jul 2019 07:33:15 +0000 (07:33 +0000)]
[WinEH] Allocate space in funclets stack to save XMM CSRs
Summary:
This is an alternate approach to D57970.
Currently funclets reuse the same stack slots that are used in the
parent function for saving callee-saved xmm registers. If the parent
function modifies a callee-saved xmm register before an exception is
thrown, the catch handler will overwrite the original saved value.
This patch allocates space in funclets stack for saving callee-saved xmm
registers and uses RSP instead of RBP to access memory.
Matt Arsenault [Fri, 26 Jul 2019 02:36:05 +0000 (02:36 +0000)]
AMDGPU/GlobalISel: Handle most function return types
handleAssignments gives up pretty easily on structs and, for some reason,
on i8 values. The other case that doesn't work is when an implicit sret
needs to be inserted if the return size exceeds the number of return
registers.
Kang Zhang [Fri, 26 Jul 2019 01:58:53 +0000 (01:58 +0000)]
[PowerPC] Do the Simple Early Return in block-placement pass to optimize the blocks
Summary:
In the `block-placement` pass, some unconditional-branch patterns are created for which we can do a simple early return.
But the `early-ret` pass runs before `block-placement`, and we don't want to run it again.
This patch does the simple early return to optimize the blocks at the end of `block-placement`.
Below is an example:
```
BB: | BB:
XOR 3, 3, 4 | XOR 3, 3, 4
B TBB | B ChainBB
... | ...
ChainBB: | ChainBB:
B TBB | ADD 3, 3, 4
... | BLR
TBB: |
ADD 3, 3, 4 |
BLR |
```
[CodeGen] Don't resolve the stack protector frame accesses until PEI
Currently, stack protector loads and stores are resolved during
LocalStackSlotAllocation (if the pass needs to run). When this is the
case, the base register assigned to the frame access is going to be one
of the vregs created during LocalStackSlotAllocation. This means that we
are keeping a pointer to the stack protector slot, and we're using this
pointer to load and store to it.
In case register pressure goes up, we may end up spilling this pointer
to the stack, which can be a security concern.
Instead, leave it to PEI to resolve the frame accesses. In order to do
that, we make all stack protector accesses go through frame index
operands; PEI will then resolve these using an offset from sp/fp/bp.
Yonghong Song [Thu, 25 Jul 2019 21:47:27 +0000 (21:47 +0000)]
[BPF] fix typedef issue for offset relocation
Currently, the CO-RE offset relocation does not work
if any struct/union member or array element is a typedef.
For example,
  typedef const int arr_t[7];
  struct input {
    arr_t a;
  };
  func(...) {
    struct input *in = ...;
    ... __builtin_preserve_access_index(&in->a[1]) ...
  }
The default offset calculated by the BPF backend is 0, while
4 is the correct answer. Similar issues exist for struct/union
typedefs.
When getting a struct/union member or array element type,
we should trace down to the concrete type by skipping typedefs
and const/volatile qualifiers, as this is what clang does
when generating getelementptr instructions.
(const/volatile member type qualifiers are already
ignored by clang.)
This patch fixes the issue by, for each access index,
skipping typedef and const/volatile/restrict BTF types.
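As an illustration, a minimal sketch of that skipping logic, using hypothetical simplified types (not the actual BPF backend code):
```
#include <cstddef>

// Hypothetical, simplified stand-ins for BTF type entries.
enum class BTFKind { Int, Struct, Array, Typedef, Const, Volatile, Restrict };

struct BTFType {
  BTFKind Kind;
  const BTFType *Base = nullptr; // underlying type for modifier kinds
};

// Follow typedef and const/volatile/restrict entries down to the
// concrete type before computing member/element offsets.
const BTFType *skipTypedefAndQualifiers(const BTFType *Ty) {
  while (Ty && (Ty->Kind == BTFKind::Typedef || Ty->Kind == BTFKind::Const ||
                Ty->Kind == BTFKind::Volatile ||
                Ty->Kind == BTFKind::Restrict))
    Ty = Ty->Base;
  return Ty;
}
```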
Signed-off-by: Yonghong Song <yhs@fb.com>
Differential Revision: https://reviews.llvm.org/D65259
Alex Lorenz [Thu, 25 Jul 2019 21:47:11 +0000 (21:47 +0000)]
[FileCollector] add support for recording empty directories
The file collector class is useful for constructing reproducers by
creating a snapshot of the files that are accessed. Sometimes it might
also be important to construct directories that don't necessarily have files,
but are still accessed by some tool that we want to make a reproducer for.
This is useful for instance for modeling the behavior of Clang's header search,
which scans through a number of directories it doesn't actually access when
looking for framework headers. This commit extends the file collector to allow
it to work with paths that are just directories, by constructing them as the
files are copied over.
Leonard Chan [Thu, 25 Jul 2019 20:53:15 +0000 (20:53 +0000)]
Reland the "[NewPM] Port Sancov" patch from rL365838. No functional
changes were made to the patch since then.
--------
[NewPM] Port Sancov
This patch contains a port of SanitizerCoverage to the new pass manager. This one's a bit hefty.
Changes:
- Split SanitizerCoverageModule into two classes: SanitizerCoverage, which runs
over functions, and ModuleSanitizerCoverage, which runs over modules.
- ModuleSanitizerCoverage exists to add two module-level calls to initialization
functions, but only if there's a function that was instrumented by sancov.
- Added legacy and new PM wrapper classes that own instances of the 2 new classes.
- Update llvm tests and add clang tests.
[PredicateInfo] Replace pointer comparisons with deterministic compares.
Currently there are a few pointer comparisons in ValueDFS_Compare, which
can cause non-deterministic ordering when materializing values. There
are 2 cases this patch fixes:
1. The "order defs before uses" rule used to compare pointers, which guarantees
defs before uses but causes non-deterministic ordering between two
uses or two defs, depending on the allocation order. By converting the
pointers to booleans, we can circumvent that problem.
2. comparePHIRelated was comparing the basic block pointers of edges,
which also results in a non-deterministic order and is also not
really meaningful for ordering. By ordering by their destination DFS
numbers we guarantee a deterministic order.
For the example below, we can end up with 2 different uselist orderings,
when running `opt -mem2reg -ipsccp` hundreds of times. Because the
non-determinism is caused by allocation ordering, we cannot reproduce it
with ipsccp alone.
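To illustrate the first fix, a sketch with hypothetical types (not the actual ValueDFS_Compare code): derive a boolean from each entry and compare that, so the result never depends on allocation addresses.
```
struct Entry {
  unsigned DFSIn;  // DFS number of the containing block
  bool IsDef;      // derived property; deterministic across runs
  const void *V;   // pointer identity: NOT a stable sort key
};

// Defs sort before uses at the same DFS number; V is never compared.
bool deterministicLess(const Entry &A, const Entry &B) {
  if (A.DFSIn != B.DFSIn)
    return A.DFSIn < B.DFSIn;
  return A.IsDef && !B.IsDef;
}
```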
Roman Lebedev [Thu, 25 Jul 2019 20:26:34 +0000 (20:26 +0000)]
[NFC][DivRemPairs] Tests with rem in expanded form (PR42673)
As discussed in https://bugs.llvm.org/show_bug.cgi?id=42673
there is a TTI hook hasDivRemOp() that matters here.
While -div-rem-pairs will decompose 'rem' if that hook returns false,
nothing does the opposite transform.
We can't do this in InstCombine, because it does not currently
have access to TTI, and I'm not sure we should change that.
We can't really do that in DAGCombine since it also currently does not
access TTI.
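For reference, the two equivalent forms in source terms (a sketch): DivRemPairs can decompose the left form into the right one, but nothing currently reassembles the compact form when the target has a combined div+rem instruction.
```
// Compact form: on targets where hasDivRemOp() is true, one instruction
// can produce both X / Y and X % Y.
unsigned remCompact(unsigned X, unsigned Y) { return X % Y; }

// Expanded form: what 'rem' looks like after decomposition.
unsigned remExpanded(unsigned X, unsigned Y) { return X - (X / Y) * Y; }
```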
We'd like to determine the idom of the exit block after peeling one iteration.
Let Exit be the exit block.
Let ExitingSet be the set of predecessors of Exit; these are the exiting blocks.
Let Latch' and ExitingSet' be their copies after peeling.
We'd like to find idom'(Exit), the idom of Exit after peeling.
It is evident that idom'(Exit) will be the nearest common dominator of ExitingSet and ExitingSet'.
idom(Exit) is the nearest common dominator of ExitingSet.
idom(Exit)' is the nearest common dominator of ExitingSet'.
Taking into account that we have a single Latch, Latch' will dominate Header and idom(Exit).
So idom'(Exit) is the nearest common dominator of idom(Exit)' and Latch'.
All these basic blocks are in the same loop, so what we find is
(the nearest common dominator of idom(Exit) and Latch)'.
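Restating that chain compactly (NCD = nearest common dominator; a summary of the above, not a new claim):
```
\begin{aligned}
\mathrm{idom}'(\mathit{Exit})
  &= \mathrm{NCD}(\mathit{ExitingSet} \cup \mathit{ExitingSet}') \\
  &= \mathrm{NCD}\!\left(\mathrm{idom}(\mathit{Exit}),\; \mathrm{idom}(\mathit{Exit})'\right) \\
  &= \mathrm{NCD}\!\left(\mathrm{idom}(\mathit{Exit})',\; \mathit{Latch}'\right)
     \quad \text{since } \mathit{Latch}' \text{ dominates } \mathrm{idom}(\mathit{Exit}) \\
  &= \left(\mathrm{NCD}(\mathrm{idom}(\mathit{Exit}),\; \mathit{Latch})\right)'
\end{aligned}
```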
[DDG] DirectedGraph as a base class for various dependence graphs such
as DDG and PDG.
Summary:
This is an implementation of a directed graph base class with explicit
representation of both nodes and edges. This implementation makes the
edges explicit because we expect to assign various attributes (such as
dependence type, distribution interference weight, etc) to the edges in
the derived classes such as DDG and DIG. The DirectedGraph consists of a
list of DGNodes. Each node consists of a (possibly empty) list of
outgoing edges to other nodes in the graph. A DGEdge contains a
reference to a single target node. Note that nodes do not know about
their incoming edges so the DirectedGraph class provides a function to
find all incoming edges to a given node.
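A hedged sketch of the described shape, with hypothetical simplified types (not the actual llvm::DirectedGraph interface): nodes own their outgoing edges, and incoming edges are recovered by scanning the whole graph.
```
#include <vector>

struct DGNode;

struct DGEdge {
  DGNode *Target; // an edge knows only its target node
};

struct DGNode {
  std::vector<DGEdge *> Edges; // outgoing edges only
};

struct DirectedGraph {
  std::vector<DGNode *> Nodes;

  // Nodes do not track incoming edges, so the graph provides the lookup.
  std::vector<DGEdge *> findIncomingEdgesTo(const DGNode *N) const {
    std::vector<DGEdge *> In;
    for (DGNode *Src : Nodes)
      for (DGEdge *E : Src->Edges)
        if (E->Target == N)
          In.push_back(E);
    return In;
  }
};
```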
This is the first patch in a series of patches that we are planning to
contribute upstream in order to implement Data Dependence Graph and
Program Dependence Graph.
More information about the proposed design can be found here:
https://ibm.ent.box.com/v/directed-graph-and-ddg
Authored By: bmahjour
Reviewer: Meinersbur, myhsu, hfinkel, fhahn, jdoerfert, kbarton
Reviewed By: Meinersbur
Subscribers: mgorny, wuzish, jsji, lebedev.ri, dexonsmith, kristina,
llvm-commits, Whitney, etiotto
Tag: LLVM
Differential Revision: https://reviews.llvm.org/D64088
[SimplifyCFG] avoid crashing after simplifying a switch (PR42737)
Later code in TryToSimplifyUncondBranchFromEmptyBlock() assumes that
we have cleaned up unreachable blocks, but that was not happening
with this switch transform.
As discussed in https://bugs.llvm.org/show_bug.cgi?id=42673
there is a TTI hook hasDivRemOp() that matters here.
While -div-rem-pairs will decompose 'rem' if that hook returns false,
nothing does the opposite transform.
We can't do this in InstCombine, because it does not currently
have access to TTI, and I'm not sure we should change that.
We may be able to teach DivRemPairs to do this, but this really is a
per-target perf optimization, and we seem to do the opposite transform
in the backend if hasDivRemOp() returns false: https://godbolt.org/z/ttt4HZ
I think it makes sense to be consistent.
[LOOPINFO] Introduce the loop guard API.
Summary:
This is the first patch for the loop guard. We introduce
getLoopGuardBranch() and isGuarded().
This currently only works on simplified loops, as it requires a preheader
and a latch to identify the guard.
It will work on loops of the form:
/// GuardBB:
/// br cond1, Preheader, ExitSucc <== GuardBranch
/// Preheader:
/// br Header
/// Header:
/// ...
/// br Latch
/// Latch:
/// br cond2, Header, ExitBlock
/// ExitBlock:
/// br ExitSucc
/// ExitSucc:
Prior discussions leading up to the decision to introduce the loop guard
API: http://lists.llvm.org/pipermail/llvm-dev/2019-May/132607.html
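A minimal usage sketch of the new API (assuming a loop in simplified form obtained from LoopInfo):
```
#include "llvm/Analysis/LoopInfo.h"
#include "llvm/IR/Instructions.h"
using namespace llvm;

void inspectGuard(const Loop *L) {
  if (L->isGuarded()) {
    // The conditional branch in GuardBB that either enters the
    // preheader or skips the loop entirely (ExitSucc above).
    BranchInst *Guard = L->getLoopGuardBranch();
    (void)Guard;
  }
}
```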
Reviewer: reames, kbarton, hfinkel, jdoerfert, Meinersbur, dmgreen
Reviewed By: reames
Subscribers: wuzish, hiraditya, jsji, llvm-commits, bmahjour, etiotto
Tag: LLVM
Differential Revision: https://reviews.llvm.org/D63885
CrashHandler: be careful about crashing while handling
Summary:
Looking at the current Apple-specific code for crash handling, it does a few
silly things that I think we should avoid while handling crashes:
* Try real hard not to allocate.
* Set the global crash reporter string early so that any crash while
generating the stack trace will still report some info.
* Prevent reordering of operations in the current thread.
Yonghong Song [Thu, 25 Jul 2019 16:01:26 +0000 (16:01 +0000)]
[BPF] fix CO-RE incorrect index access string
Currently, we expect the CO-RE offset relocation to record
a string encoding the original getelementptr access index,
so the kernel bpf loader can decode it correctly.
For example,
  struct s { int a; int b; };
  struct t { int c; int d; };
  #define _(x) (__builtin_preserve_access_index(x))
  int get_value(const void *addr1, const void *addr2);
  int test(struct s *arg1, struct t *arg2) {
    return get_value(_(&arg1->b), _(&arg2->d));
  }
We expect two offset relocations:
  reloc 1: type s, access index 0, 1
  reloc 2: type t, access index 0, 1
Two globals are created to retain access indexes for the
above two relocations with global variable names.
The first global has a name "0:1:". Unfortunately,
the second global has the name "0:1:.1" as the llvm
internals automatically add suffix ".1" to a global
with the same name. Later on, the BPF backend peels off the last
character and records "0:1" and "0:1:." in the
relocation table.
This is not desirable. The BPF backend could use its knowledge
of the global variable suffix to generate the correct access string,
but this patch instead takes an approach that does not rely on
that knowledge. It generates "s:0:1:" and "t:0:1:" to
avoid global variable suffixes, and later on generates the
correct index access string "0:1" for both records.
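A sketch of the resulting naming scheme (illustrative helper, not the actual backend code): prefixing with the type name means the two relocations never collide.
```
#include <string>
#include <vector>

std::string accessKey(const std::string &TypeName,
                      const std::vector<unsigned> &Indices) {
  std::string Key = TypeName + ":";
  for (unsigned I : Indices)
    Key += std::to_string(I) + ":";
  return Key; // e.g. accessKey("s", {0, 1}) == "s:0:1:"
}
```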
Signed-off-by: Yonghong Song <yhs@fb.com>
Differential Revision: https://reviews.llvm.org/D65258
Revert "[InstCombine] try to narrow a truncated load"
This reverts commit bc4a63fd3c29c1a8ce22891bf34ee4dccfef578c, this is a
speculative revert to fix a number of sanitizer bots (like
sanitizer-x86_64-linux-bootstrap-ubsan) that have started to see stage2
compiler crashes, presumably due to a miscompile.
[PredicateInfo] Use SmallVector instead of SmallPtrSet.
We do not need the SmallPtrSet to avoid adding duplicates to
OpsToRename, because we already keep a ValueInfo mapping. If we see an
op for the first time, Infos will be empty and we can also add it to
OpsToRename.
We process operands by visiting BBs depth-first and then iterate over
all instructions & users, so the order should be deterministic.
Therefore we can skip one round of sorting, which we purely needed for
guaranteeing a deterministic order when iterating over the SmallPtrSet.
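A sketch of the deduplication idea with hypothetical simplified types (not the actual PredicateInfo code): the per-value info list doubles as a "seen" marker, so no separate set is needed.
```
#include <map>
#include <vector>

struct ValueInfo {
  std::vector<int> Infos; // renaming info collected for this op
};

void trackOp(int Op, std::map<int, ValueInfo> &ValueInfos,
             std::vector<int> &OpsToRename) {
  ValueInfo &VI = ValueInfos[Op];
  if (VI.Infos.empty()) // first time we see this op
    OpsToRename.push_back(Op);
  VI.Infos.push_back(0); // record the (stand-in) info
}
```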
Michael Liao [Thu, 25 Jul 2019 14:50:18 +0000 (14:50 +0000)]
[AMDGPU] Run `unreachable-mbb-elimination` after isel to clean up PHIs.
Summary:
- As LCSSA is turned on just before isel, it may create PHIs of the flow,
which are consumed by pseudo structurized CFG instructions. When those
PHIs are eliminated at O0, a COPY may be placed incorrectly, as these
pseudo structurized CFG instructions are treated as the prologue of the MBB.
- Run an extra `unreachable-mbb-elimination` pass at the end of isel to clean up
PHIs.
[AArch64][SVE] Allow explicit size specifier for predicate operand
... for the vector forms of `{SQ,UQ,}{INC,DEC}P` instructions. Also continue
supporting the existing behaviour of not requiring an explicit size
specifier. The preferred disassembly is *with* the specifier.
This is implemented by redefining instruction forms to require vector predicates
with explicit size and adding aliases, which allow a predicate with no size.
Summary:
It is a good idea to do as much matching inside of `match()` as possible.
If some checking is done afterwards, and we don't fold because of it,
chances are we may have missed some commutative pattern.
Roman Lebedev [Thu, 25 Jul 2019 13:34:14 +0000 (13:34 +0000)]
[IR][PatternMatch] introduce m_Unless() matcher
Summary:
I don't think it already exists? I don't see it at least.
It is important to have it, because otherwise we'll do some checks after `match()`,
and that may result in missed folds in commutative nodes.
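For intuition, a simplified stand-alone sketch of the idea behind such a matcher (not the actual PatternMatch code): it wraps an inner matcher and succeeds exactly when the inner one fails.
```
template <typename InnerTy> struct unless_matcher {
  InnerTy Inner;
  // Succeed exactly when the wrapped matcher does not match.
  template <typename OpTy> bool match(OpTy *V) { return !Inner.match(V); }
};

template <typename InnerTy>
unless_matcher<InnerTy> m_Unless(const InnerTy &Inner) {
  return {Inner};
}
```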
trunc (load X) --> load (bitcast X to narrow type)
We have this transform in DAGCombiner::ReduceLoadWidth(), but the truncated
load pattern can interfere with other instcombine transforms, so I'd like to
allow the fold sooner.
Example:
https://bugs.llvm.org/show_bug.cgi?id=16739
...in that report, we have bitcasts bracketing these ops, so those could get
eliminated too.
We've generally ruled out widening of loads early in IR ( LoadCombine -
http://lists.llvm.org/pipermail/llvm-dev/2016-September/105291.html ), but
that reasoning may not apply to narrowing if we can preserve information
such as the dereferenceable range.
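For intuition, a source-level analogue of the narrowing (a sketch; the real fold operates on IR loads, and the equivalence below assumes a little-endian target):
```
#include <cstdint>
#include <cstring>

// Before: load all 32 bits, then truncate.
uint16_t truncOfLoad(const uint32_t *P) {
  return static_cast<uint16_t>(*P);
}

// After: load only the 16 bits that are kept (offset 0 on little-endian).
uint16_t narrowLoad(const uint32_t *P) {
  uint16_t V;
  std::memcpy(&V, P, sizeof(V)); // well-defined stand-in for the bitcast
  return V;
}
```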
Pablo Barrio [Thu, 25 Jul 2019 10:59:45 +0000 (10:59 +0000)]
[ARM][AArch64] Support for Cortex-A65 & A65AE, Neoverse E1 & N1
Summary:
Add support for Cortex-A65, Cortex-A65AE, Neoverse E1 and Neoverse N1.
Neoverse E1 and Cortex-A65(&AE) only implement the AArch64 state of the
Arm architecture. Neoverse N1 implements both AArch32 and AArch64.
Simon Pilgrim [Thu, 25 Jul 2019 10:20:39 +0000 (10:20 +0000)]
Revert rL366946 : [Remarks] Add support for serializing metadata for every remark streamer
This allows every serializer format to implement metaSerializer() and
return the corresponding meta serializer.
........
Fix windows build bots
http://lab.llvm.org:8011/builders/llvm-clang-x86_64-win-fast
http://lab.llvm.org:8011/builders/llvm-clang-lld-x86_64-scei-ps4-windows10pro-fast
http://lab.llvm.org:8011/builders/llvm-clang-x86_64-expensive-checks-win
George Rimar [Thu, 25 Jul 2019 10:19:23 +0000 (10:19 +0000)]
Recommit "rL366894: [yaml2obj] - Allow custom fields for the SHT_UNDEF sections."
With fix: do not use `stat` tool.
Original commit message:
This is a follow-up refactoring patch for recently
introduced functionality which reduces the code duplication
and also makes it possible to redefine all possible fields of
the first SHT_NULL section (previously it was only possible to set
sh_link and sh_size).
[IPSCCP] Add assertion to surface cases where we zap returns with overdefined users.
We should only zap returns in functions where all live users have a
replaceable value (i.e. are not overdefined). Unused return values should be
undefined.
This should make it easier to detect bugs like in PR42738.
Alternatively we could bail out of zapping the function returns, but I
think it would be better to address those divergences between function
and call-site values where they are actually caused.
This refactors the boolean 'OptForSize' that was passed around in a lot of places.
It controlled folding of the scalar epilogue (the tail loop) into the main loop,
but code size may not be the only reason to do this. Thus, this is a
first step to generalise the concept of tail-loop folding: OptForSize
has been renamed and is now an enum, ScalarEpilogueStatus, that holds the
status of how the epilogue should be lowered.
This will be followed up by D65197, that picks up the predicate loop hint and
performs the tail-loop folding.
Kai Luo [Thu, 25 Jul 2019 07:47:52 +0000 (07:47 +0000)]
[PowerPC][NFC] Added `getDefMIPostRA` method
Summary:
In the post-RA phase, we often have to find out the most recent definition
of a register. This patch adds getDefMIPostRA so that other methods
can use it rather than implementing it repeatedly.
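A hedged sketch of what such a helper might look like (illustrative only, not the actual PPCInstrInfo implementation): walk backwards from an instruction looking for the most recent def of a register in the same block.
```
#include "llvm/CodeGen/MachineBasicBlock.h"
#include "llvm/CodeGen/MachineInstr.h"
using namespace llvm;

MachineInstr *getDefPostRA(unsigned Reg, MachineInstr &MI,
                           const TargetRegisterInfo *TRI) {
  MachineBasicBlock *MBB = MI.getParent();
  MachineBasicBlock::iterator I(&MI);
  while (I != MBB->begin()) {
    --I;
    if (I->definesRegister(Reg, TRI))
      return &*I;
  }
  return nullptr; // no def in this block; a caller might give up here
}
```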
that can be used to indicate to the vectoriser that all (load/store)
instructions should be predicated (masked). This allows, for example, folding
of the remainder loop into the main loop.
This patch will be followed up with D64916 and D65197. The former is a
refactoring in the loopvectorizer and the groundwork to make tail loop folding
a more general concept, and in the latter the actual tail loop folding
transformation will be implemented.
These tests are breaking three independent upstream buildbots (as well as
downstream ones). These breakages have appeared mysteriously,
consistently, and during different revisions. Sadly, none of
{ASAN,TSAN,MSAN,UBSAN} flag anything, so the cause here is nonobvious.
Until we've figured this out, it seems best to disable these tests
entirely, so that the affected bots don't remain silent about any other,
unrelated failures.
[llvm-objdump][NFC] Make the PrettyPrinter::printInst() output buffered
Summary:
Every time PrettyPrinter::printInst is called, stdout is flushed, which makes llvm-objdump slow. This patch adds a string
buffer to prevent stdout from being flushed for every instruction.
Benchmark results (./llvm-objdump-master: without this patch, ./bin/llvm-objcopy: with this patch):
$ hyperfine --warmup 10 './llvm-objdump-master -d ./bin/llvm-objcopy' './bin/llvm-objdump -d ./bin/llvm-objcopy'
Benchmark #1: ./llvm-objdump-master -d ./bin/llvm-objcopy
Time (mean ± σ): 2.230 s ± 0.050 s [User: 1.533 s, System: 0.682 s]
Range (min … max): 2.115 s … 2.278 s 10 runs
Benchmark #2: ./bin/llvm-objdump -d ./bin/llvm-objcopy
Time (mean ± σ): 386.4 ms ± 13.0 ms [User: 376.6 ms, System: 6.1 ms]
Range (min … max): 366.1 ms … 407.0 ms 10 runs
Summary
'./bin/llvm-objdump -d ./bin/llvm-objcopy' ran
5.77 ± 0.23 times faster than './llvm-objdump-master -d ./bin/llvm-objcopy'
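The buffering idea in a nutshell (a sketch, not the actual llvm-objdump change): format into an in-memory string and write to stdout once, instead of flushing per instruction.
```
#include "llvm/Support/raw_ostream.h"
#include <string>
using namespace llvm;

void printAllInsts(unsigned NumInsts) {
  std::string Buffer;
  raw_string_ostream OS(Buffer);
  for (unsigned I = 0; I != NumInsts; ++I)
    OS << "inst " << I << "\n"; // stand-in for PrettyPrinter::printInst output
  outs() << OS.str();           // one buffered write instead of many flushes
}
```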
Joel E. Denny [Thu, 25 Jul 2019 03:14:32 +0000 (03:14 +0000)]
[lit] Protect full test suite from FILECHECK_OPTS
lit's test suite calls lit multiple times for various sample test
suites. `FILECHECK_OPTS` is safe for FileCheck calls in lit's test
suite. It's not safe for FileCheck calls in the sample test suites,
whose output affects the results of lit's test suite.
Without this patch, only one such sample test suite is protected from
`FILECHECK_OPTS`, and I admit I haven't discovered other cases for
which I can produce false failures using `FILECHECK_OPTS`. However,
it's hard to predict the future, especially false passes. Thus, this
patch protects all existing and future sample test suites from
`FILECHECK_OPTS` (and the deprecated
`FILECHECK_DUMP_INPUT_ON_FAILURE`).
[FileCollector] Change coding style from LLDB to LLVM (NFC)
This patch changes the coding style of the FileCollector from the LLDB
to the LLVM coding style. Alex recently lifted it into LLVM and I
volunteered to do the conversion.
Philip Reames [Wed, 24 Jul 2019 23:24:13 +0000 (23:24 +0000)]
Define some basic terminology around loops in our documentation
I've noticed a lot of confusion around this area recently, with key terms being misused in a number of threads. To help rein that in, let's go ahead and document the current terminology and the meaning thereof.
My hope is to grow this over time into a broader discussion of canonical loop forms - yes, there are more than one ... many more than one - but for the moment, simply having the key terminology is a good stopping place.
Note: I am landing this *without* an LGTM. All feedback so far has been positive, and trying to apply all of the suggested changes/extensions would cause the review to never end. Instead, I decided to land it with the obvious fixes made based on reviewer comments, then iterate from there.
[AArch64][GlobalISel] Don't try to use GISel if subtarget doesn't have neon or fp.
Throughout the LegalizerInfo we currently make the assumption that the target
has NEON and FP target features available. Fixing that will require a refactor of
the whole thing, so until then make sure we fall back.
Summary:
This was originally reported in D62818.
https://rise4fun.com/Alive/oPH
InstCombine does the opposite fold, in hope that `C l>>/<< Y` expression
will be hoisted out of a loop if `Y` is invariant and `X` is not.
But as can be seen from the diffs here, if it doesn't get hoisted,
the produced assembly is almost universally worse.
Much like with my recent "hoist add/sub by/from const" patches,
we should get an almost universal win if we hoist the constant:
there is almost always an "and/test by imm" instruction,
but "shift of imm" not so much, so we may avoid having to
materialize the immediate, and thus need one less register.
And since we now shift not by a constant, but by something else,
the live range of that something else may shrink.
Special care needs to be applied not to disturb the x86 `BT` / hexagon `tstbit`
instruction patterns, and to not get into an endless combine loop.
IR: Teach GlobalIndirectSymbol::getBaseObject() to handle more kinds of expressions.
For aliases, any expression that lowers at the MC level to global_object or
global_object+constant is valid at the object file level. getBaseObject()
should return a result if the aliasee ends up being of that form even if
the IR used to produce it is somewhat unconventional.
Note that this is different from what stripInBoundsOffsets() and that family
of functions is doing. Those functions are concerned about semantic properties
of IR, whereas here we only care about the lowering result.
Therefore reimplement getBaseObject() in a way that matches the lowering
result. This fixes a crash when producing a summary for aliases such as
that in the included test case.
[GlobalISel] Support for inlining memcpy, memset and memmove calls.
This introduces a new family of combiner helper routines that re-use the
target specific cost model from SelectionDAG, and generate inline implementations
of the memcpy family of intrinsics.
The combines are only enabled at optimization levels higher than -O0, and give
very substantial performance improvements.
[AArch64][GlobalISel] Fix a crash during s128 G_ICMP legalization due to r366317.
r366317 added a legalization for s128 G_ICMP narrow scalar which tried to hard
code the result type of the new legalized G_SELECT. Change this to instead use
the type of the original G_ICMP result and allow the target to legalize it
later if necessary.
David Bolvansky [Wed, 24 Jul 2019 20:27:32 +0000 (20:27 +0000)]
Let CorrelatedValuePropagation preserve LazyValueInfo
Summary:
This patch makes CorrelatedValuePropagation preserve LazyValueInfo by adding LazyValueInfo::eraseValue & calling it whenever an instruction is erased.
Passes `make check`, test-suite, and SPECrate 2017.
David Green [Wed, 24 Jul 2019 17:36:47 +0000 (17:36 +0000)]
[ARM] Rewrite how VCMP are lowered, using a single node
This removes the VCEQ/VCNE/VCGE/VCEQZ/etc nodes, instead using just two nodes, VCMP and
VCMPZ, with an extra operand for the condition code. I believe this will make
some combines simpler, allowing us to just look at these codes and not the
operands. It also helps fill in a missing VCGTUZ MVE selection without adding
extra nodes for it.
matchBinOpReduction returns the lower extracted subvector in such cases, assuming isExtractSubvectorCheap accepts the extraction.
I've only enabled it for X86 reduction sums so far. I intend to enable it for the bitop/minmax cases in future patches, and eventually I think it's worth turning it on all the time. This is mainly just a case of ensuring calls to matchBinOpReduction don't make assumptions about the vector width based on the original vector extraction.
Fixes the x86 partial reduction sum cases in PR33758 and PR42023.
David Green [Wed, 24 Jul 2019 17:26:26 +0000 (17:26 +0000)]
[ARM] Disable MVE fptosi and friends
This prevents us from trying to convert an i1 predicate vector to a float, or
vice versa. Better patterns are possible, which will follow in a subsequent
commit. For now we just expand them.
David Green [Wed, 24 Jul 2019 16:42:09 +0000 (16:42 +0000)]
[ARM] Better OR's for MVE compares
This adds a De Morgan combine for ORs of compares to turn them into ANDs,
helping prevent the values from going into and out of GPR registers. It also fills in
the VCLE and VCLT nodes that MVE can select, allowing it to invert more
compares.
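The underlying identity, at the source level (a sketch): an OR of two predicates equals the inverted AND of the inverted predicates, which is why being able to invert compares (VCLE/VCLT) makes the AND form reachable.
```
// De Morgan: (A || B) == !(!A && !B); for compares, !(x >= y) == (x < y).
bool orOfCompares(int X, int Y, int W, int Z) { return X >= Y || W >= Z; }
bool andOfInverted(int X, int Y, int W, int Z) { return !(X < Y && W < Z); }
```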