Oliver Stannard [Tue, 24 Oct 2017 14:19:08 +0000 (14:19 +0000)]
[ARM] Error for invalid shift in memory operand
Report a diagnostic when we fail to parse a shift in a memory operand because
the shift type is not an identifier. Without this, we were silently ignoring
the whole instruction.
Zvi Rackover [Tue, 24 Oct 2017 12:13:05 +0000 (12:13 +0000)]
X86CallFrameOptimization: Recognize 'store 0/-1 using and/or' idioms
Summary:
r264440 added or/and patterns for storing -1 or 0 with the intention of decreasing code size. However,
X86CallFrameOptimization does not recognize these memory accesses, so it will not replace them with pushes when profitable.
This patch fixes the problem by teaching X86CallFrameOptimization to recognize these store 0/-1 idioms.
An alternative fix would be to prevent the 'store 0/-1' patterns from firing when the access is to the stack. That would avoid
the need to teach the pass about these idioms. However, because X86CallFrameOptimization does not always fire, we could end up
with cases where neither X86CallFrameOptimization nor the 'store 0/-1' patterns fire.
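For illustration, a minimal standalone sketch (made-up names, not the pass's actual code) of the property being exploited: an 'and' with 0 or an 'or' with -1 always leaves a known constant in memory, so such a read-modify-write is really just a store.
  #include <cstdint>
  #include <optional>

  enum class RMWOp { And, Or };

  // Returns the value the read-modify-write always leaves in memory,
  // or nothing if the result still depends on the old memory contents.
  std::optional<int32_t> storedConstant(RMWOp Op, int32_t Imm) {
    if (Op == RMWOp::And && Imm == 0)
      return 0;                 // x & 0 == 0 for every x
    if (Op == RMWOp::Or && Imm == -1)
      return -1;                // x | -1 == -1 for every x
    return std::nullopt;        // not a store-0/-1 idiom
  }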
Bjorn Pettersson [Tue, 24 Oct 2017 12:08:11 +0000 (12:08 +0000)]
[ConstantFolding] Avoid assert when folding ptrtoint of vectorized GEP
Summary:
Got asserts in llvm::CastInst::getCastOpcode saying:
`DestBits == SrcBits && "Illegal cast to vector (wrong type or size)"' failed.
The problem seemed to be that llvm::ConstantFoldCastInstruction did
not correctly handle a ptrtoint cast of a getelementptr returning a
vector. I assume such situations are quite rare, since the
GEP needs to be considered as a constant value (base pointer
being null).
The solution used here is simply to avoid the constant fold
of ptrtoint when the value is a vector. That fold is not supported,
and by bailing out we do not fail on assertions later on.
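The bail-out has roughly this shape (a sketch against the LLVM C++ API, not the exact ConstantFoldCastInstruction code):
  #include "llvm/IR/Constants.h"
  #include "llvm/IR/Type.h"

  // Refuse to fold ptrtoint when the source (or destination) is a vector;
  // returning nullptr means "no fold", so the constant expression is kept
  // and we never reach the scalar-only code that asserts.
  static llvm::Constant *foldPtrToIntSketch(llvm::Constant *Src,
                                            llvm::Type *DestTy) {
    if (Src->getType()->isVectorTy() || DestTy->isVectorTy())
      return nullptr;
    return llvm::ConstantExpr::getPtrToInt(Src, DestTy);
  }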
Bjorn Pettersson [Tue, 24 Oct 2017 11:59:20 +0000 (11:59 +0000)]
[LangRef] Update description of Constant Expressions
Summary:
When describing trunc/zext/sext/ptrtoint/inttoptr in the chapter
about Constant Expressions, we now simply refer to the Instruction
Reference. As far as I know there is no difference when it comes
to the semantics and the argument constraints. The only difference
is that the syntax of the constant expressions is slightly different,
regarding the use of parentheses.
Referring to the Instruction Reference is the same solution as
already used for several other operations, such as bitcast.
The main goal was to document that vector types are also allowed
in trunc/zext/sext/ptrtoint/inttoptr constant expressions.
That was not explicitly mentioned earlier, and resulted in some
questions in the review of https://reviews.llvm.org/D38546
George Rimar [Tue, 24 Oct 2017 11:44:19 +0000 (11:44 +0000)]
[llvm-dwarfdump] - Cleanup of gnu_call_site.s. NFC.
This change fixes values in the test so that it passes
-verify without errors, and also adds comments.
The test was introduced in D39119 with the intention of checking
that the tool is able to dump a few
DW_*GNU_call_site* tags and attributes, so this
change is an NFC cleanup.
Clement Courbet [Tue, 24 Oct 2017 08:05:07 +0000 (08:05 +0000)]
[CodeGen][ExpandMemcmp][NFC] Allow memcmp to expand to vector loads (1)
Refactor ExpandMemcmp:
- Stop duplicating the logic for computing the sequence of loads to
generate (this was done in three different places); it is now done
only once in MemCmpExpansion::MemCmpExpansion().
- Add a FIXME to expose a bug with the computation of the number of loads
when not all sizes are loadable. For example, on X86-32 + SSE, possible
loads are {16,4,2,1} bytes. The current code considers that all loads
starting at MaxLoadSize are possible. This is not an issue right now as
vector loads are not enabled, so I'm not fixing the issue here to keep
the change as small as possible. I'm going to address this in a
subsequent revision, where I enable vector loads.
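For illustration of the computation being centralised, here is a standalone sketch (made-up names, not MemCmpExpansion itself) of a greedy decomposition of the memcmp size into target-legal load widths. With {16,4,2,1}, a 24-byte compare needs three loads; counting as if every size below MaxLoadSize were legal would presumably give only two (16 + 8), which is the miscount the FIXME refers to.
  #include <cstddef>
  #include <cstdio>
  #include <vector>

  // Greedily cover `Size` bytes using the target's legal load widths,
  // given in descending order (e.g. {16, 4, 2, 1} on X86-32 with SSE).
  std::vector<size_t> computeLoadSequence(size_t Size,
                                          const std::vector<size_t> &LegalSizes) {
    std::vector<size_t> Loads;
    for (size_t Width : LegalSizes)
      while (Size >= Width) {
        Loads.push_back(Width);
        Size -= Width;
      }
    return Loads;
  }

  int main() {
    // A 24-byte memcmp with {16,4,2,1}: prints "16 4 4", i.e. three loads.
    for (size_t Width : computeLoadSequence(24, {16, 4, 2, 1}))
      std::printf("%zu ", Width);
    std::printf("\n");
    return 0;
  }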
Zvi Rackover [Tue, 24 Oct 2017 07:38:29 +0000 (07:38 +0000)]
X86: Fix X86CallFrameOptimization to search for the COPY StackPointer
SelectionDAG inserts a copy of ESP into a virtual register.
X86CallFrameOptimization assumed that the COPY, if present, is always
right after the call-frame setup instruction (ADJCALLSTACKDOWN). This was a
wrong assumption as the COPY can be located anywhere between the call-frame setup
instruction and its first use. If the COPY happened to be located in a different
location than what X86CallFrameOptimization assumed, visiting it while
processing the call chain would lead to a conservative bail-out.
The fix is quite straightforward: scan ahead for the stack-pointer copy and make note
of it so it can be ignored while processing the call chain.
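Conceptually, the scan looks like this standalone sketch (hypothetical types, not the pass's MachineInstr code): look forward from the frame setup to the call and remember the stack-pointer copy wherever it happens to sit.
  #include <vector>

  enum class Kind { FrameSetup, CopyOfSP, Store, Call, Other };
  struct Inst { Kind K; };

  // Scan from the instruction after the frame setup up to the call and
  // record the stack-pointer copy, wherever it happens to be placed.
  const Inst *findStackPointerCopy(const std::vector<Inst> &Seq,
                                   size_t FrameSetupIdx) {
    for (size_t I = FrameSetupIdx + 1; I < Seq.size(); ++I) {
      if (Seq[I].K == Kind::Call)
        break;                       // stop at the call itself
      if (Seq[I].K == Kind::CopyOfSP)
        return &Seq[I];              // note it so later processing can skip it
    }
    return nullptr;                  // no copy in this call sequence
  }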
Besides all the goodness from modularizing a header, this is necessary
to compile ToT with modules using the clang host compiler from Xcode 9 on
macOS 10.13, which our bots don't use yet.
[MC] Adding code padding for performance stability - infrastructure. NFC.
This adds infrastructure for padding code with nop instructions in key places so that a performance improvement can be achieved.
The infrastructure is implemented such that the padding is done in the assembler, after the layout is done and all IPs and alignments are known.
This patch by itself is an NFC. Future patches will make use of this infrastructure to implement the required policies for code padding.
Zvi Rackover [Tue, 24 Oct 2017 05:47:07 +0000 (05:47 +0000)]
X86: Register the X86CallFrameOptimization pass
Summary:
The motivation of this change is to enable .mir testing for this pass.
Added one test case to cover the functionality; this same case will be improved by
a future patch.
Bob Haarman [Tue, 24 Oct 2017 01:26:22 +0000 (01:26 +0000)]
[raw_fd_ostream] report actual error in error messages
Summary:
Previously, we would emit error messages like "IO failure on output
stream". This change causes us to include information about what
actually went wrong, e.g. "No space left on device".
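A standalone sketch of the idea (not the raw_fd_ostream change itself): keep the OS error code from the failed write and put its text into the message, instead of a generic "IO failure on output stream".
  #include <cerrno>
  #include <cstdio>
  #include <string>
  #include <system_error>

  std::string describeWriteFailure(int Err) {
    std::error_code EC(Err, std::generic_category());
    return "IO failure on output stream: " + EC.message();
  }

  int main() {
    // Pretend a write just failed with ENOSPC; the message now says why,
    // e.g. "IO failure on output stream: No space left on device".
    std::puts(describeWriteFailure(ENOSPC).c_str());
    return 0;
  }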
The `BasicBlock::getFirstInsertionPt` call may return `std::end` for the
BB. Dereferencing the end iterator results in an assertion failure
"(!NodePtr->isKnownSentinel()), function operator*". Ensure that the
returned iterator is valid before dereferencing it. If the end is
returned, move one position backward to get a valid insertion point.
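The guard looks roughly like this standalone sketch (a generic container stands in for the BasicBlock; names are illustrative only):
  #include <cassert>
  #include <list>

  // Never dereference the end() sentinel: if the first-insertion-point
  // lookup came back as end(), step back to the last real element.
  std::list<int>::iterator validInsertionPoint(std::list<int> &Block,
                                               std::list<int>::iterator It) {
    if (It == Block.end()) {
      assert(!Block.empty() && "block has no valid insertion point");
      --It;
    }
    return It;   // now safe to dereference
  }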
Reid Kleckner [Mon, 23 Oct 2017 23:43:40 +0000 (23:43 +0000)]
[codeview] Add support for inlinee lists
This adds type index discovery and dumper support for symbol record kind
0x1168, which is a list of inlined function ids. This symbol kind is
undocumented, but S_INLINEES is consistent with the existing
nomenclature.
Jessica Paquette [Mon, 23 Oct 2017 23:36:46 +0000 (23:36 +0000)]
[MachineOutliner] Add optimisation remarks for successful outlining
This commit adds optimisation remarks for outlining which fire when a function
is successfully outlined.
To do this, OutlinedFunctions must now contain references to their Candidates.
Since the Candidates must still be sorted and worked on separately, this is
done by working on everything in terms of shared_ptrs to Candidates. This is
good; it means that we can easily move everything to outlining in terms of
the OutlinedFunctions rather than the individual Candidates. This is far more
intuitive than what's currently there!
(Remarks are output when a function is created for some group of Candidates.
In a later commit, all of the outlining logic should be rewritten so that we
loop over OutlinedFunctions rather than over Candidates.)
Bob Wilson [Mon, 23 Oct 2017 21:51:50 +0000 (21:51 +0000)]
Add a new Simulator entry for the target triple environment.
Apple's iOS, tvOS and watchOS simulator platforms have never been clearly
distinguished in the target triples. Even though they are intended to
behave similarly to the corresponding device platforms, they have separate
SDKs and are really separate platforms from the compiler's perspective.
Clang now defines a macro when building for one of these simulator platforms
(r297866) but that relies on the very indirect mechanism of checking to see
which option was used to specify the minimum deployment target. That is not
so great. Swift would also like to distinguish these simulator platforms in
a similar way, but unlike Clang, Swift does not use a separate option to
specify the minimum deployment target -- it uses a -target option to
specify the target triple directly, including the OS version number.
Using a different target triple for the simulator platforms is a much
more direct and obvious way to specify this. Putting "simulator" in
the environment component of the triple means the OS values can stay the
same, and existing code that looks at the OS field will not be affected.
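For illustration (the OS version numbers below are placeholders, not from this commit), the difference shows up only in the environment component of the triple:
  #include <cstdio>
  #include <string>

  int main() {
    // Device and simulator share the same OS field; only the environment
    // component ("simulator") distinguishes them.
    std::string Device    = "arm64-apple-ios11.0";
    std::string Simulator = "x86_64-apple-ios11.0-simulator";
    std::printf("%s\n%s\n", Device.c_str(), Simulator.c_str());
    return 0;
  }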
Don't crash when we see unallocatable registers in clobbers
This fixes a bug where we'd crash given code like the test-case from
https://bugs.llvm.org/show_bug.cgi?id=30792 . Instead, we let the
offending clobber silently slide through.
This doesn't fully fix said bug, since the assembler will still complain
the moment it sees a crypto/fp/vector op, and we still don't diagnose
calls that require vector regs.
Mitch Phillips [Mon, 23 Oct 2017 20:25:19 +0000 (20:25 +0000)]
Graph builder implementation.
Implement a localised graph builder for indirect control flow
instructions. The main interface is GraphBuilder::buildFlowGraph,
which builds a flow graph around an indirect CF instruction. Various
modifications to FileVerifier are also made to const-expose some members
needed for the machine code analysis done by the graph builder.
[GVNSink] Fix failing GVNSink tests in the reverse iteration bot
Summary:
The elements of ActivePreds, which is defined as a SmallPtrSet, are copied
into Blocks using std::copy. This makes the resulting order of Blocks
non-deterministic. We cannot simply sort Blocks, as they need to match
the corresponding Values, so a better approach is to define ActivePreds
as a SmallSetVector.
This fixes the following failures in
http://lab.llvm.org:8011/builders/reverse-iteration:
LLVM :: Transforms/GVNSink/indirect-call.ll
LLVM :: Transforms/GVNSink/sink-common-code.ll
LLVM :: Transforms/GVNSink/struct.ll
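A standalone analogue of the fix (simplified element type; llvm::SmallSetVector provides the real thing): a set that remembers insertion order, so copying its elements out is always deterministic.
  #include <cstdio>
  #include <unordered_set>
  #include <vector>

  // Deduplicates like a set but iterates in insertion order, which is the
  // property SmallSetVector gives ActivePreds in the actual patch.
  struct OrderedSet {
    std::vector<int> Order;
    std::unordered_set<int> Seen;
    bool insert(int V) {
      if (!Seen.insert(V).second)
        return false;            // already present
      Order.push_back(V);
      return true;
    }
  };

  int main() {
    OrderedSet ActivePreds;
    for (int Pred : {7, 3, 9, 3, 1})
      ActivePreds.insert(Pred);
    for (int Pred : ActivePreds.Order)   // always prints: 7 3 9 1
      std::printf("%d ", Pred);
    std::printf("\n");
    return 0;
  }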
[Hexagon] Return the correct chain edge for i1 function calls
In HexagonISelLowering, there is code to handle the case when
a function returns an i1 type. In this case, we need to generate
extra nodes to copy the result from R0 to a predicate register.
The code was returning the wrong value for the chain edge which
caused an assert "Wrong topological sorting" when converting the
instructions to MIs.
This patch fixes the problem by returning the chain for the final
copy.
Daniel Sanders [Mon, 23 Oct 2017 18:19:24 +0000 (18:19 +0000)]
[globalisel][tablegen] Import stores and allow GISel to automatically substitute zero regs like WZR/XZR/$zero.
This patch enables the import of stores. Unfortunately, doing so by itself,
loses an optimization where storing 0 to memory makes use of WZR/XZR.
To mitigate this, this patch also introduces a new feature that allows register
operands to nominate a zero register. When this is done, GlobalISel will
substitute (G_CONSTANT 0) with the nominated register automatically. This
is currently configured to only apply to the stores.
Applying it to the GPR32/GPR64 register classes in general will be done after
review (see https://reviews.llvm.org/D39150).
Vedant Kumar [Mon, 23 Oct 2017 18:04:34 +0000 (18:04 +0000)]
[wasm] readSection: Avoid reading past eof (fixes oss-fuzz #3219)
A wasm file crafted with a bogus section size can trigger an ASan issue
in the DWARFObjInMemory constructor. Nip the problem in the bud when we
read the wasm section.
Found by OSS-Fuzz:
https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=3219
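The shape of the check is roughly this standalone sketch (hypothetical names, not the actual readSection code): validate the declared section size against the bytes actually left in the buffer before handing the slice to any reader.
  #include <cstdint>
  #include <cstdio>
  #include <vector>

  // Reject a section whose declared size runs past the end of the file
  // buffer, so later consumers (e.g. the DWARF reader) never read past EOF.
  bool sectionIsInBounds(const std::vector<uint8_t> &Buf, size_t Offset,
                         uint64_t DeclaredSize) {
    if (Offset > Buf.size() || DeclaredSize > Buf.size() - Offset) {
      std::fprintf(stderr, "error: section extends past end of file\n");
      return false;
    }
    return true;
  }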
Matt Arsenault [Mon, 23 Oct 2017 17:09:35 +0000 (17:09 +0000)]
AMDGPU: Fix default range in non-kernel functions
The range should be assumed to be the hardware maximum
if a workitem intrinsic is used in a callable function
which does not know the restricted limit of the calling
kernel.
Craig Topper [Mon, 23 Oct 2017 16:11:33 +0000 (16:11 +0000)]
[X86] Change XRSTOR to use PS instead of TB to match XSAVE.
I don't think this changes anything functionally yet, but I plan to fix the disassembler to use this to disable matching certain instructions with 0xf3/0xf2/0x66 prefixes.
Simon Pilgrim [Mon, 23 Oct 2017 15:48:08 +0000 (15:48 +0000)]
[DAGCombine] Permit combining of shuffles of equivalent splat BUILD_VECTORs
combineShuffleOfScalars is very conservative about shuffled BUILD_VECTORs that can be combined together.
This patch adds one additional case - if both BUILD_VECTORs represent splats of the same scalar value but with different UNDEF elements, then we should create a single splat BUILD_VECTOR, sharing only the UNDEF elements defined by the shuffle mask.
Fix for Bug 30718 - Failure to disassemble certain MOV with rex.R. The issue was an illegal segment register index.
Differential Revision: https://reviews.llvm.org/D38786
Sam Parker [Mon, 23 Oct 2017 08:05:14 +0000 (08:05 +0000)]
[ARM] Allow unrolling of multi-block loops.
Before, loop unrolling was only enabled for loops with a single
block. This restriction has been removed and replaced by:
- a maximum of two exiting blocks,
- a limit of four basic blocks for cores with a branch predictor.
Yichao Yu [Sun, 22 Oct 2017 20:28:17 +0000 (20:28 +0000)]
Fix invalid ptrtoint in InstCombine
Summary:
It's unclear if this is the only thing we can do, but at least this is consistent with the check
of address space agreement in `isBitCastable`.
The code is used at least in both instcombine and jumpthreading, though
I could only find a way to trigger the invalid cast in instcombine.
Sanjay Patel [Sun, 22 Oct 2017 19:10:07 +0000 (19:10 +0000)]
[SimplifyCFG] delay switch condition forwarding to -latesimplifycfg
As discussed in D39011:
https://reviews.llvm.org/D39011
...replacing constants with a variable is inverting the transform done
by other IR passes, so we definitely don't want to do this early.
In fact, it's questionable whether this transform belongs in SimplifyCFG
at all. I'll look at moving this to codegen as a follow-up step.
Scenario #1:
- vreg0 is evicted from physreg0 by vreg1.
- Evictee vreg0 is intended for region splitting with split candidate physreg0 (the reg vreg0 was evicted from).
- Region splitting creates a local interval because of interference with the evictor vreg1 (normally region splitting creates two intervals, the "by reg" and "by stack" intervals; a local interval is created when interference occurs).
- One of the split intervals ends up evicting vreg2 from physreg1.
- Evictee vreg2 is intended for region splitting with split candidate physreg1.
- One of the split intervals ends up evicting vreg3 from physreg2, and so on, until someone spills.
Scenario #2:
- vreg0 is evicted from physreg0 by vreg1.
- vreg2 is evicted from physreg2 by vreg3, and so on.
- Evictee vreg0 is intended for region splitting with split candidate physreg1.
- Region splitting creates a local interval because of interference with the evictor vreg1.
- One of the split intervals ends up evicting the original evictor vreg1 back from physreg0 (the reg vreg0 was evicted from).
- Another evictee vreg2 is intended for region splitting with split candidate physreg1.
- One of the split intervals ends up evicting vreg3 from physreg2, and so on, until someone spills.
As compile time was a concern, I've added a flag to control whether we do cost calculations for local intervals we expect to be created (it's on by default for the X86 target, off for the rest).
Momchil Velikov [Sun, 22 Oct 2017 11:56:35 +0000 (11:56 +0000)]
[ARM] Dynamic stack alignment for 16-bit Thumb
This patch implements dynamic stack (re-)alignment for 16-bit Thumb. When
targeting processors that support only the 16-bit Thumb instruction set,
the compiler ignores the alignment attributes of automatic variables and may
silently generate incorrect code.
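As a generic C++ illustration (not taken from the patch) of the kind of automatic variable involved: an alignment larger than what the caller's stack guarantees forces the prologue to realign the stack pointer at run time.
  #include <cstdint>
  #include <cstdio>

  void useBuffer() {
    // 64-byte alignment can exceed the guaranteed incoming stack alignment,
    // so the compiler must realign SP dynamically in the prologue.
    alignas(64) uint8_t Buffer[128];
    std::printf("%p\n", static_cast<void *>(Buffer));
  }

  int main() {
    useBuffer();
    return 0;
  }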
Guy Blank [Sun, 22 Oct 2017 11:43:08 +0000 (11:43 +0000)]
[X86] Add a pass to convert instruction chains between domains.
The pass scans the function to find instruction chains that define
registers in the same domain (closures).
It then calculates the cost of converting the closure to another domain.
If found profitable, the instructions are converted to instructions in
the other domain and the register classes are changed accordingly.
This commit adds the pass infrastructure and a simple conversion from
the GPR domain to the Mask domain.
Craig Topper [Sun, 22 Oct 2017 06:18:26 +0000 (06:18 +0000)]
[X86] Teach the disassembler that some instructions use VEX.W==0 without a corresponding VEX.W==1 instruction and we shouldn't treat them as if VEX.W is ignored.
Craig Topper [Sat, 21 Oct 2017 20:03:20 +0000 (20:03 +0000)]
[X86] Fix disassembling of EVEX instructions to stop accidentally decoding the SIB index register as an XMM/YMM/ZMM register.
This introduces a new operand type to encode whether the index register should be XMM/YMM/ZMM, and new code to fix up the results created by readSIB.
This has the nice effect of removing a bunch of code that hard coded the name of every GATHER and SCATTER instruction to map the index type.