Andrea Di Biagio [Fri, 23 Aug 2019 12:19:45 +0000 (12:19 +0000)]
[X86][BtVer2] Add a read-advance to every implicit register use of CMPXCHG8B/16B.
This is a follow up of r369642.
This patch assigns a ReadAfterLd to every implicit register use of the
CMPXCHG8B and CMPXCHG16B instructions. Perf micro-benchmarks show that the
implicit registers are read 3cy after the start of execution.
Andrea Di Biagio [Fri, 23 Aug 2019 11:34:10 +0000 (11:34 +0000)]
[X86][BtVer2] Fix latency of ALU RMW instructions.
Excluding ADC/SBB and the bit-test instructions (BTR/BTS/BTC), the observed
latency of all other RMW integer arithmetic/logic instructions is 6cy and not
5cy.
The same latency profile applies to SUB/AND/OR/XOR/INC/DEC.
The observed latency of ADC/SBB is 7-8cy. So we need a different write to model
those. Latency of BTS/BTR/BTC is not fixed by this patch (they are much slower
than what the model for btver2 currently reports).
Martin Storsjo [Fri, 23 Aug 2019 11:18:11 +0000 (11:18 +0000)]
[llvm-dlltool] Make sure to strip decorations from ExtName for renamed exports
ExtName should not be decorated, just like Name.
This avoids double decoration on symbols in import libraries
that use = for renaming functions. (Weak aliases, which use ==,
worked fine with respect to decoration.)
George Rimar [Fri, 23 Aug 2019 09:31:07 +0000 (09:31 +0000)]
[yaml2obj] - Allow setting the symbol st_other field to any integer.
The st_other field of a symbol usually contains its visibility.
The other bits are usually 0, though some targets, like
MIPS, can set them using named bit field values.
The problem is that there is currently no way to set an arbitrary value,
though that might be useful for our test cases.
In this patch I introduced a way to set st_other to any numeric
value using the new StOther field.
I added a test and simplified the existing one to show the effect/benefit.
Craig Topper [Fri, 23 Aug 2019 07:38:25 +0000 (07:38 +0000)]
[X86] Add a further unrolled madd reduction test case that shows several deficiencies.
The AVX2 check lines show two issues. An ADD became an OR
because we knew the input was disjoint, but it was really zero,
so we should have just removed the ADD/OR altogether.
Relatedly we use 128-bit VPMADDWD instructions followed by
256-bit VPADDD operations. We should be able to narrow these
VPADDDs.
Craig Topper [Fri, 23 Aug 2019 05:33:27 +0000 (05:33 +0000)]
[X86] Improve lowering of v2i32 SAD handling in combineLoopSADPattern.
For v2i32 we only feed 2 i8 elements into the psadbw instructions
with 0s in the other 14 bytes. The resulting psadbw instruction
will produce zeros in bits [127:16] of the output. We need to take
the result and feed it to a v2i32 add where the first element
includes bits [15:0] of the sad result. The other element should
be zero.
Prior to this patch we were using a truncate to take 0 from
bits 95:64 of the psadbw. This results in a pshufd to move those
bits to 63:32. But since we also have zeroes in bits 63:32 of
the psadbw output, we should just take those bits.
The previous code probably worked better with promoting legalization,
but now we use widening legalization. I've preserved the old
behavior if -x86-experimental-vector-widening-legalization=false
until we get that option removed.
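For context, here is a minimal sketch (hypothetical source, assuming the usual SAD idiom; not code from the patch or its tests) of the kind of loop whose reduction ends up as the psadbw pattern that combineLoopSADPattern handles:
```
// Hypothetical example only: an unsigned-byte sum of absolute differences
// accumulated into a 32-bit sum, the shape of reduction that is lowered to
// psadbw after vectorization.
#include <cstdint>
#include <cstdlib>

uint32_t sad(const uint8_t *a, const uint8_t *b, int n) {
  uint32_t sum = 0;
  for (int i = 0; i < n; ++i)
    sum += std::abs(int(a[i]) - int(b[i]));
  return sum;
}
```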
Philip Reames [Fri, 23 Aug 2019 04:03:23 +0000 (04:03 +0000)]
[IndVars] Fix a bug noticed by inspection
We were computing the loop exit value, but not ensuring the addrec belonged to the loop whose exit value we were computing. I couldn't actually trip this; the test case shows the basic setup which *might* trip this, but none of the variations I've tried actually do.
Fangrui Song [Fri, 23 Aug 2019 02:17:04 +0000 (02:17 +0000)]
[AlignmentFromAssumptions] getNewAlignmentDiff(): use getURemExpr()
The alignment is calculated incorrectly, so sometimes aligned mov instructions are not generated, as shown by the example below:
```
// b.cc
typedef long long index;
extern "C" index g_tid;
extern "C" index g_num;
void add3(float* __restrict__ a, float* __restrict__ b, float* __restrict__ c) {
index n = 64*1024;
index m = 16*1024;
index k = 4*1024;
index tid = g_tid;
index num = g_num;
__builtin_assume_aligned(a, 32);
__builtin_assume_aligned(b, 32);
__builtin_assume_aligned(c, 32);
for (index i0=tid*k; i0<m; i0+=num*k)
for (index i1=0; i1<n*m; i1+=m)
for (index i2=0; i2<k; i2++)
c[i1+i0+i2] = b[i0+i2] + a[i1+i0+i2];
}
```
Compile with `clang b.cc -Ofast -march=skylake -mavx2 -S`
```
vmovaps -224(%rdi,%rbx,4), %ymm0
vmovups -192(%rdi,%rbx,4), %ymm1 # should be movaps
vmovups -160(%rdi,%rbx,4), %ymm2 # should be movaps
vmovups -128(%rdi,%rbx,4), %ymm3 # should be movaps
vaddps -224(%rsi,%rbx,4), %ymm0, %ymm0
vaddps -192(%rsi,%rbx,4), %ymm1, %ymm1
vaddps -160(%rsi,%rbx,4), %ymm2, %ymm2
vaddps -128(%rsi,%rbx,4), %ymm3, %ymm3
vmovaps %ymm0, -224(%rdx,%rbx,4)
vmovups %ymm1, -192(%rdx,%rbx,4) # should be movaps
vmovups %ymm2, -160(%rdx,%rbx,4) # should be movaps
vmovups %ymm3, -128(%rdx,%rbx,4) # should be movaps
```
Differential Revision: https://reviews.llvm.org/D66575
Patch by Dun Liang
hwasan: Untag unwound stack frames by wrapping personality functions.
One problem with untagging memory in landing pads is that it only works
correctly if the function that catches the exception is instrumented.
If the function is uninstrumented, we have no opportunity to untag the
memory.
To address this, replace landing pad instrumentation with personality function
wrapping. Each function with an instrumented stack has its personality function
replaced with a wrapper provided by the runtime. Functions that did not have
a personality function to begin with also get wrappers if they may be unwound
past. As the unwinder calls personality functions during stack unwinding,
the original personality function is called and the function's stack frame is
untagged by the wrapper if the personality function instructs the unwinder
to keep unwinding. If unwinding stops at a landing pad, the function is
still responsible for untagging its stack frame if it resumes unwinding.
The old landing pad mechanism is preserved for compatibility with old runtimes.
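A rough sketch of the wrapping idea described above, with hypothetical helper names (this is not the actual hwasan runtime code): the wrapper forwards to the original personality routine and untags the frame only when the unwinder is told to keep going.
```
// Sketch only; helper names are invented for illustration.
#include <unwind.h>
#include <cstdint>

// Hypothetical: clears the shadow tags covering the frame described by the
// unwind context. The real runtime derives the frame bounds differently.
extern "C" void __hwasan_untag_frame(_Unwind_Context *Context);

// Hypothetical stand-in for the function's original personality routine.
extern "C" _Unwind_Reason_Code __original_personality(
    int Version, _Unwind_Action Actions, uint64_t ExceptionClass,
    _Unwind_Exception *Exc, _Unwind_Context *Context);

extern "C" _Unwind_Reason_Code __hwasan_personality_wrapper(
    int Version, _Unwind_Action Actions, uint64_t ExceptionClass,
    _Unwind_Exception *Exc, _Unwind_Context *Context) {
  _Unwind_Reason_Code RC =
      __original_personality(Version, Actions, ExceptionClass, Exc, Context);
  // Untag only when unwinding continues past this frame; if we stop at a
  // landing pad, the function itself untags if it later resumes unwinding.
  if (RC == _URC_CONTINUE_UNWIND)
    __hwasan_untag_frame(Context);
  return RC;
}
```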
Sam Clegg [Fri, 23 Aug 2019 01:00:55 +0000 (01:00 +0000)]
[MC] Minor cleanup to MCFixup::Kind handling. NFC.
Prefer `MCFixupKind` where possible and add getTargetKind() to
convert to `unsigned` when needed rather than scattering cast
operators around the place.
IR. Change strip* family of functions to not look through aliases.
I noticed another instance of the issue where references to aliases were
being replaced with aliasees, this time in InstCombine. In the instance that
I saw it turned out to be only a QoI issue (a symbol ended up being missing
from the symbol table due to the last reference to the alias being removed,
preventing HWASAN from symbolizing a global reference), but it could easily
have manifested as incorrect behaviour.
Since this is the third such issue encountered (previously: D65118, D65314)
it seems to be time to address this common error/QoI issue once and for all
and make the strip* family of functions not look through aliases.
Includes a test for the specific issue that I saw, but no doubt there are
other similar bugs fixed here.
As with D65118 this has been tested to make sure that the optimization isn't
load bearing. I built Clang, Chromium for Linux, Android and Windows as well
as the test-suite and there were no size regressions.
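As an illustration of the behavioural change (a sketch, not code from the patch): a caller that strips casts off an alias now gets the alias back rather than its aliasee.
```
// Illustration only: after this change, stripPointerCasts() stops at a
// GlobalAlias instead of returning the aliasee, so rewriting users based on
// the stripped value can no longer drop the last reference to the alias.
#include "llvm/IR/GlobalAlias.h"
#include "llvm/IR/Value.h"

const llvm::Value *stripThroughCasts(const llvm::GlobalAlias *GA) {
  // Previously this could return GA->getAliasee(); now it returns GA itself.
  return GA->stripPointerCasts();
}
```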
Summary:
The matchers for section/symbol-related flags (e.g. `--keep-symbol=Name` or `--regex --keep-symbol=foo.*`) are currently just vectors that are matched linearly. However, adding wildcard support would require negative matching too, e.g. a symbol should be removed if it matches a wildcard *but* doesn't match some other wildcard.
To make the next patch simpler, consolidate matching logic to a class defined in CopyConfig that takes care of matching.
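A simplified sketch of the consolidation (names are illustrative, not the actual CopyConfig API): callers hand a name to one matcher object instead of scanning vectors themselves, which leaves room to add wildcard and negative matching later.
```
// Sketch only; the real matcher lives in CopyConfig and will grow
// wildcard/regex and negative-match support.
#include <string>
#include <vector>

class NameMatcher {
  std::vector<std::string> PositiveMatchers; // e.g. --keep-symbol=Name
  std::vector<std::string> NegativeMatchers; // "matches X but not Y"

public:
  void addPositive(std::string Name) { PositiveMatchers.push_back(std::move(Name)); }
  void addNegative(std::string Name) { NegativeMatchers.push_back(std::move(Name)); }

  bool matches(const std::string &Name) const {
    auto Contains = [&Name](const std::vector<std::string> &List) {
      for (const std::string &Pattern : List)
        if (Pattern == Name) // exact match; wildcards would plug in here
          return true;
      return false;
    };
    return Contains(PositiveMatchers) && !Contains(NegativeMatchers);
  }
};
```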
Matt Arsenault [Thu, 22 Aug 2019 17:29:17 +0000 (17:29 +0000)]
GlobalISel: Don't create G_UADDE with constant false carry in
The x86 tests are now broken (in particular add-scalar.ll now hits the
DAG fallback) due to not handling G_UADDO. The DAG x86 backend has a
custom lowering for this, so that will need to be implemented.
[MachO][TLOF] Use hasLocalLinkage to determine if indirect symbol is local
Local symbols in the indirect symbol table contain the value
`INDIRECT_SYMBOL_LOCAL` and the corresponding __pointers entry must
contain the address of the target.
In r349060, I added support for local symbols in the indirect symbol
table, which was checking if the symbol `isDefined` && `!isExternal` to
determine if the symbol is local or not.
It turns out that `isDefined` will return false if the user of the
symbol comes before its definition, and we'll again generate .long 0
which will be the symbol at the address 0x0.
Instead of doing that, use GlobalValue::hasLocalLinkage() to check if
the symbol is local.
Guozhi Wei [Thu, 22 Aug 2019 16:21:32 +0000 (16:21 +0000)]
[MBP] Disable aggressive loop rotate in plain mode
Patch https://reviews.llvm.org/D43256 introduced a more aggressive loop layout optimization which depends on profile information. If profile information is not available, the statically estimated profile information (generated by BranchProbabilityInfo.cpp) is used. If the user program doesn't behave as BranchProbabilityInfo.cpp expects, the layout may be worse.
To be conservative, this patch restores the original layout algorithm in plain mode. Users can still try the aggressive layout optimization with -force-precise-rotation-cost=true.
Amaury Sechet [Thu, 22 Aug 2019 15:35:45 +0000 (15:35 +0000)]
[DAGCombiner] Remove explicit call to AddToWorklist in sqrt and reciprocal computations
Summary: These nodes end up being processed regardless due to DAGCombiner ensuring arguments are processed. This changes the order in which nodes are processed, which fixes an issue on PowerPC.
Excluding the 64-bit variant, which has a latency of 6cy, every other instruction
has a latency of 3cy. However, the number of decoded macro-opcodes (as well as
the resource cycles) depends on the MUL size.
The two-operand MULs have a more predictable profile (see below):
Sean Fertile [Thu, 22 Aug 2019 15:11:28 +0000 (15:11 +0000)]
[PowerPC] Add combined ELF ABI and 32/64 bit queries to the subtarget. [NFC]
A lot of places in the code combine checks for both ABI (SVR4/Darwin/AIX) and
addressing mode (64-bit vs 32-bit). In an attempt to make some of the code more
readable, I've added a couple of functions that check for the ELF ABI and
64-bit/32-bit code at once. As we add more AIX support I intend to add similar
functions for the AIX ABI.
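Roughly what such combined queries look like (a sketch under assumed predicate names, not the actual PPCSubtarget code):
```
// Sketch only: fold the common "which ELF ABI" + "which word size" checks
// into single predicates, assuming existing isSVR4ABI()/isPPC64() queries.
struct SubtargetQueries {
  bool SVR4 = true;  // stand-in for the real ABI flag
  bool PPC64 = true; // stand-in for the real 64-bit flag

  bool isSVR4ABI() const { return SVR4; }
  bool isPPC64() const { return PPC64; }

  // Combined queries, so call sites read as a single check.
  bool is64BitELFABI() const { return isSVR4ABI() && isPPC64(); }
  bool is32BitELFABI() const { return isSVR4ABI() && !isPPC64(); }
};
```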
Sean Fertile [Thu, 22 Aug 2019 15:11:23 +0000 (15:11 +0000)]
[PowerPC][XCOFF][MC] Explicitly set containing csect on symbols. [NFC]
Previously we would get the csect a symbol was contained in through its
fragment. This works only if we are writing an object file, and only for
defined symbols. To fix this we set the containing csect explicitly on the
MCSymbolXCOFF object.
Hideto Ueno [Thu, 22 Aug 2019 14:18:29 +0000 (14:18 +0000)]
[Attributor][NFC] Move DerefState to header and use StateWrapper
Summary: In D65402, I wanted to get DerefState from AADereferenceable but it was not allowed. This patch moves the DerefState definition into Attributor.h and makes AADereferenceable inherit from StateWrapper.
Jinsong Ji [Thu, 22 Aug 2019 13:44:47 +0000 (13:44 +0000)]
[SlotIndexes] Add print-slotindexes to disable printing slotindexes
Summary:
When we print the IR with --print-after/before-*, SlotIndexes will be
printed whenever they are available (i.e. we haven't freed them).
This introduces some noise when we try to compare the IR
among different optimizations.
eg:
-print-before=machine-cp will print SlotIndexes for the 1st machine-cp
pass, but NOT for the 2nd machine-cp;
-print-after=machine-cp will NOT print SlotIndexes for either
machine-cp pass.
So the SlotIndexes printed in the 1st pass introduce noise when diffing these IRs.
George Rimar [Thu, 22 Aug 2019 12:39:56 +0000 (12:39 +0000)]
[yaml2obj] - Lookup relocation symbols in dynamic symbol when .dynsym referenced.
This fixes https://bugs.llvm.org/show_bug.cgi?id=40337.
Previously, it was always assumed that relocations referenced symbols in the static symbol table.
Now, if the Link field references a section called ".dynsym" it will look up these symbols
in the dynamic symbol table.
This patch is heavily based on D59097 by James Henderson
Sylvestre Ledru [Thu, 22 Aug 2019 12:16:08 +0000 (12:16 +0000)]
Fix some regressions caused by r369553 on old versions of Debian and Ubuntu
It was causing some errors like:
Encoding error:
'ascii' codec can't decode byte 0xe2 in position 341: ordinal not in range(128)
The full traceback has been saved in /tmp/sphinx-err-y2fq4dtb.log, if you want to report the issue to the developers.
Andrea Di Biagio [Thu, 22 Aug 2019 11:32:47 +0000 (11:32 +0000)]
[X86][BtVer2] Fix latency and throughput of XCHG and XADD.
On Jaguar, XCHG has a latency of 1cy and decodes to 2 macro-opcodes. Maximum
throughput for XCHG is 1 IPC. The byte exchange has worse latency and decodes to
1 extra uOP; maximum observed throughput is 0.5 IPC.
The reg-mem forms of XCHG are atomic operations with an observed latency of
16cy. The resource usage is similar to the XCHGrr variants. The biggest
difference is obviously the bus-locking, which prevents the LS unit from issuing other
memory uOPs in parallel until the unlocking store uOP is executed.
The exchanged in/out register operand becomes available after 11cy from the
start of execution. Added test xchg.s to verify that we correctly see that
register write committed in 11cy (and not 16cy).
Reg-reg XADD instructions have the same latency/throughput as the byte
exchange (register-register variant).
The non-atomic RM variants have a latency of 11cy, and decode to 4
macro-opcodes. They still consume 2 ALU pipes, and the exchange in/out register
operand becomes available in 3cy (it matches the 'load-to-use latency').
Hans Wennborg [Thu, 22 Aug 2019 09:16:53 +0000 (09:16 +0000)]
Revert r369626 "[ARM] Fix lsrl with a 128/256 bit shift amount or a shift of 32"
It broke the bots, see e.g. http://lab.llvm.org:8011/builders/clang-cuda-build/builds/36275/
> This patch fixes shifts by a 128/256 bit shift amount. It also fixes
> codegen for shifts of 32 by delegating to LLVM's default optimisation
> instead of emitting a long shift.
>
> Tests that used to generate long shifts of 32 are updated to check for the
> more optimised codegen.
>
> Differential revision: https://reviews.llvm.org/D66519
>
> llvm-svn: 369626
We do not need it: std::error_code is used mostly for COFF, and
this patch rewrites the calls to use a different overload.
Having a reportError(std::error_code EC, ...) overload is excessive by itself,
because APIs that use error codes actually need refactoring to
use Error/Expected<> instead.
Craig Topper [Thu, 22 Aug 2019 08:18:45 +0000 (08:18 +0000)]
[X86] Lower the cost of v2i32->v2f64 sint_to_fp under vector widening legalization.
I don't really understand the costs we're using for sint_to_fp,
but prior to widening legalization we used 20 as the cost for this
via the v2i64->v2f64 entry. That number seems better than the 40
we got with widening legalization. So now we need either a
v2i32->v2f64 entry or a v4i32->v2f64 entry depending on whether
AVX is enabled or not, since we skip the first SSE2 table lookup
under AVX.
Pavel Labath [Thu, 22 Aug 2019 08:13:30 +0000 (08:13 +0000)]
[Support] Improve readNativeFile(Slice) interface
Summary:
There was a subtle but pretty important difference between the Slice
and regular versions of this function. The Slice function was
zero-initializing the rest of the buffer when the read syscall returned
fewer bytes than expected, while the regular function did not.
This patch removes the inconsistency by making both functions *not*
zero-initialize the buffer. The zeroing code is moved to the
MemoryBuffer class, which is currently the only user of this code. This
makes the API more consistent, and the code shorter.
While in there, I also refactor the functions to return the number of
bytes through the regular return value (via Expected<size_t>) instead of
a separate by-ref argument.
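For reference, the updated shape of the API as the summary describes it (the signature is approximated from this description, not copied from FileSystem.h):
```
// Sketch of a caller after the change: the byte count comes back through
// Expected<size_t>, and nobody zero-fills the tail of the buffer here.
#include "llvm/ADT/ArrayRef.h"
#include "llvm/Support/Error.h"
#include "llvm/Support/FileSystem.h"
#include <cstdint>

llvm::Expected<size_t> readSlice(llvm::sys::fs::file_t FD,
                                 llvm::MutableArrayRef<char> Buf,
                                 uint64_t Offset) {
  // Callers such as MemoryBuffer now pad the buffer themselves if the read
  // comes back short.
  return llvm::sys::fs::readNativeFileSlice(FD, Buf, Offset);
}
```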
Sam Tebbs [Thu, 22 Aug 2019 08:12:06 +0000 (08:12 +0000)]
[ARM] Fix lsrl with a 128/256 bit shift amount or a shift of 32
This patch fixes shifts by a 128/256 bit shift amount. It also fixes
codegen for shifts of 32 by delegating to LLVM's default optimisation
instead of emitting a long shift.
Tests that used to generate long shifts of 32 are updated to check for the
more optimised codegen.
Shiva Chen [Thu, 22 Aug 2019 04:59:43 +0000 (04:59 +0000)]
[TargetLowering] Remove optional arguments passing to makeLibCall
The patch introduces a MakeLibCallOptions struct, as suggested by @efriedma on D65497.
The struct contains argument flags which will be passed to the makeLibCall function.
The patch should not have any functionality changes.
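A sketch of what a call site looks like with the options struct (the setter and parameter order here are assumptions based on this summary, not copied from TargetLowering.h):
```
// Sketch only: argument flags travel in MakeLibCallOptions instead of a
// tail of optional parameters.
#include "llvm/CodeGen/SelectionDAG.h"
#include "llvm/CodeGen/TargetLowering.h"
#include <utility>
using namespace llvm;

std::pair<SDValue, SDValue> emitLibCall(const TargetLowering &TLI,
                                        SelectionDAG &DAG, RTLIB::Libcall LC,
                                        EVT RetVT, ArrayRef<SDValue> Ops,
                                        const SDLoc &dl) {
  TargetLowering::MakeLibCallOptions CallOptions;
  CallOptions.setSExt(true); // assumed setter: sign-extend the arguments
  return TLI.makeLibCall(DAG, LC, RetVT, Ops, CallOptions, dl);
}
```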
Joel E. Denny [Thu, 22 Aug 2019 03:42:01 +0000 (03:42 +0000)]
[lit] Diagnose insufficient args to internal env
Without this patch, failing to provide a subcommand to lit's internal
`env` results in either a python `IndexError` or an attempt to execute
the final `env` argument, such as `FOO=1`, as a command. This patch
diagnoses those cases with a more helpful message.
Fangrui Song [Thu, 22 Aug 2019 01:48:34 +0000 (01:48 +0000)]
[COFF] Fix section name for constants larger than 64 bits on Windows
APIntToHexString returns the wrong value ("0000000000000000ffffffffffffffff")
for integers larger than 64 bits, and thus
TargetLoweringObjectFileCOFF::getSectionForConstant returns the same section name
for all numbers larger than 64 bits. This patch tries to fix it.
Differential Revision: https://reviews.llvm.org/D66458
Patch by Senran Zhang
Cyndy Ishida [Wed, 21 Aug 2019 23:30:53 +0000 (23:30 +0000)]
[Object] Add tapi files to object
Summary:
The intention for this is to allow reading and printing symbols from
llvm-nm. Tapi files and Tapi universal files follow a format similar to
their respective MachO object formats.
The tests depend on llvm-nm processing tbd files, which is why it's in D66160.
Greg Clayton [Wed, 21 Aug 2019 21:48:11 +0000 (21:48 +0000)]
Add FileWriter to GSYM and encode/decode functions to AddressRange and AddressRanges
The full GSYM patch started with: https://reviews.llvm.org/D53379
This patch adds the ability to encode data using the new llvm::gsym::FileWriter class.
FileWriter is a simplified binary data writer class that doesn't require targets, target definitions, architectures, or any other optional compile-time libraries to be enabled via the build process. This class needs the ability to seek to different spots in the binary data that it produces to fix up offsets and sizes in GSYM data. It currently uses std::ostream over llvm::raw_ostream because llvm::raw_ostream doesn't support seeking, which is required when encoding and decoding GSYM data.
AddressRange objects are encoded and decoded to be relative to a base address. This will be the FunctionInfo's start address if the AddressRange is directly contained in a FunctionInfo, or a base address of the containing parent AddressRange or AddressRanges. This allows address ranges to be efficiently encoded using ULEB128 encodings as we encode the offset and size of each range instead of full addresses. This also makes encoded addresses easy to relocate as we just need to relocate one base address.
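Stated as code, the relative-range encoding described above amounts to writing two ULEB128 values per range (illustrative only; the real logic lives in the AddressRange/FileWriter encode methods):
```
// Illustration of the encoding scheme: the start is stored relative to a
// base address and the size is stored instead of the end, so both values
// stay small and only the base address needs relocating.
#include "llvm/Support/LEB128.h"
#include "llvm/Support/raw_ostream.h"
#include <cstdint>

void encodeRange(llvm::raw_ostream &OS, uint64_t BaseAddr, uint64_t Start,
                 uint64_t End) {
  llvm::encodeULEB128(Start - BaseAddr, OS); // offset from the base address
  llvm::encodeULEB128(End - Start, OS);      // size of the range
}
```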
[Attributor][NFCI] Introduce tight iteration bounds in the tests
Summary:
To be able to track how many iterations we need to manifest all the
information we check for, we now make the maximum iteration count
explicit. The count is set tightly now and should be kept that way.
Cyndy Ishida [Wed, 21 Aug 2019 21:00:16 +0000 (21:00 +0000)]
[BinaryFormat] Teach identify_magic about Tapi files.
Summary:
Tapi files are YAML files that start with the !tapi tag. The only exception is
TBD v1 files, which don't have a tag. In that case we have to scan a little
further and check if the first key "archs" exists.
This is the first patch in a series of patches to add libObject support for
text-based dynamic library (.tbd) files.
This patch is practically exactly the same as D37820, which was never pushed to master,
and is needed for future commits related to reading tbd files for llvm-nm.
Florian Hahn [Wed, 21 Aug 2019 20:06:50 +0000 (20:06 +0000)]
[GVN] Do PHI translations across all edges between the load and the unavailable pred.
Currently we do not properly translate addresses with PHIs if LoadBB !=
LI->getParent(), because PHITranslateAddr expects a direct predecessor as argument:
it considers all instructions outside of the current block as not requiring translation.
The number of cases that trigger this should be very low, as most single-predecessor
blocks should be folded into their predecessor by GVN before
we actually start with value numbering. It is still not guaranteed to
happen, so we should do PHI translation along all edges between the
load's block and the predecessor where we have to place a load.
There are a few test cases showing current limits of the PHI translation, which
could be improved later.
Simon Atanasyan [Wed, 21 Aug 2019 18:54:51 +0000 (18:54 +0000)]
[mips] Replace call `expandLoadAddress` by `loadAndAddSymbolAddress`. NFC
When expanding a `lw/sw $reg, symbol($reg)` instruction for PIC, it's
enough to call the `loadAndAddSymbolAddress` method. The additional work
performed by `expandLoadAddress` is not required here.
Amaury Sechet [Wed, 21 Aug 2019 18:51:08 +0000 (18:51 +0000)]
[DAGCombiner] Remove mostly redundant calls to AddToWorklist
Summary:
These calls change the order in which some nodes are processed and so have an effect on codegen.
The change in fixup-bw-copy.ll is due to (and (load anyext)) getting transformed into (load zext); previously the and was removed by SimplifyDemandedBits, so the (load anyext) remained.
Florian Hahn [Wed, 21 Aug 2019 18:20:11 +0000 (18:20 +0000)]
[BitcodeReader] Check if we can create a null constant for type.
We cannot create null constants for certain types, e.g. VoidTy,
FunctionTy or LabelTy. getNullValue asserts if we pass in an
unsupported type. We should also check for opaque types, but I'm not
sure how.
This fixes https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=14795.
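A minimal sketch of the guard being described (not the exact BitcodeReader code):
```
// Sketch: refuse to build a null constant for types that can't have one;
// Constant::getNullValue would assert on these.
#include "llvm/IR/Constants.h"
#include "llvm/IR/Type.h"

llvm::Constant *tryGetNullValue(llvm::Type *Ty) {
  if (Ty->isVoidTy() || Ty->isFunctionTy() || Ty->isLabelTy())
    return nullptr;
  return llvm::Constant::getNullValue(Ty);
}
```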
Jordan Rupprecht [Wed, 21 Aug 2019 18:00:17 +0000 (18:00 +0000)]
[docs] Convert remaining command guide entries from md to rst.
Summary:
Linking between markdown and rst files is currently not supported very well, e.g. the current llvm-addr2line docs [1] link to "llvm-symbolizer" instead of "llvm-symbolizer.html". This is weirdly broken in different ways depending on which versions of sphinx and recommonmark are being used, so work around the bug by using rst everywhere.
Mitch Phillips [Wed, 21 Aug 2019 17:53:51 +0000 (17:53 +0000)]
[GWP-ASan] Add public-facing documentation [6].
Summary:
Note: Do not submit this documentation until Scudo support is reviewed and submitted (should be #[5]).
See D60593 for further information.
This patch introduces the public-facing documentation for GWP-ASan, as well as updating the definition of one of the options, which wasn't properly merged. The document describes the design and features of GWP-ASan, how to use GWP-ASan from a user's standpoint, and development documentation for supporting allocators.
Alina Sbirlea [Wed, 21 Aug 2019 17:00:57 +0000 (17:00 +0000)]
[LoopPassManager + MemorySSA] Only enable use of MemorySSA for LPMs known to preserve it.
Summary:
Add a flag to the FunctionToLoopAdaptor that allows enabling MemorySSA only for the loop pass managers that are known to preserve it.
If an LPM is known to have only loop transforms that *all* preserve MemorySSA, then use MemorySSA if `EnableMSSALoopDependency` is set.
If an LPM has loop passes that do not preserve MemorySSA, then the flag passed is `false`, regardless of the value of `EnableMSSALoopDependency`.
When using a custom loop pass pipeline via `passes=...`, use the keyword `loop-mssa` instead of `loop` to use MemorySSA in that LPM. If a loop pass that does not preserve MemorySSA is added while using the `loop-mssa` keyword, that's an error.
Add the new `loop-mssa` keyword to a few tests where a difference occurs when enabling MemorySSA.
Matt Arsenault [Wed, 21 Aug 2019 16:59:10 +0000 (16:59 +0000)]
GlobalISel: Implement moreElementsVector for G_UNMERGE_VALUES sources
This is necessary for handling <3 x s16> on AMDGPU, assuming this
should be handled as 2 separate legalization actions. The alternative
would be for fewerElementsVector to handle 3->2.