Thomas Lively [Wed, 9 Oct 2019 21:42:08 +0000 (21:42 +0000)]
[WebAssembly] Make returns variadic
Summary:
This is necessary and sufficient to get simple cases of multiple
return working with multivalue enabled. More complex cases will
require block and loop signatures to be generalized to potentially be
type indices as well.
Wei Mi [Wed, 9 Oct 2019 21:36:03 +0000 (21:36 +0000)]
[SampleFDO] Add indexing for function profiles so they can be loaded on demand
in ExtBinary format
Currently for Text, Binary and ExtBinary format profiles, when we compile a
module with samplefdo, even if no function from the module shows up in the
profile, we have to load all the function profiles from the profile input.
That is a waste of compile time.
The CompactBinary format already supports loading function profiles on
demand. In this patch, we add the same support for the ExtBinary format. It
works whether or not the sections in the ExtBinary profile are compressed.
Experiments show it reduces the time to compile a server benchmark by 30%.
When profile remapping and loading function profiles on demand are both used,
extra work needs to be done so that the loading on demand process will take
the name remapping into consideration. It will be addressed in a follow-up
patch.
Adds links to Getting Started/Tutorials, User Guides, and Reference documentation pages to sidebar. Also adds a new section for LLVM IR on the Reference documentation page.
David Greene [Wed, 9 Oct 2019 19:51:48 +0000 (19:51 +0000)]
[System Model] [TTI] Update cache and prefetch TTI interfaces
Re-apply 9fdfb045ae8b/r365676 with fixes for PPC and Hexagon. This involved
moving defaults from TargetTransformInfoImplBase to MCSubtargetInfo.
Rework the TTI cache and software prefetching APIs to prepare for the
introduction of a general system model. Changes include:
- Marking existing interfaces const and/or override as appropriate
- Adding comments
- Adding BasicTTIImpl interfaces that delegate to a subtarget
implementation
- Moving the default TargetTransformInfoImplBase implementation to a default
MCSubtarget implementation
Only a handful of targets use these interfaces currently: AArch64, Hexagon, PPC
and SystemZ. AArch64 already has a custom subtarget implementation, so its
custom TTI implementation is migrated to use the new facilities in BasicTTIImpl
to invoke its custom subtarget implementation. The custom TTI implementations
continue to exist for the other targets with this change. They are not moved
over to subtarget-based implementations.
The end goal is to have the default subtarget implementation defer to the system
model defined by the target. With this change, the default MCSubtargetInfo
implementation essentially returns the defaults that TargetTransformInfoImplBase
used to return. Existing users of the TTI defaults will now hit the defaults in
MCSubtargetInfo. Targets that define their own custom TTI implementations won't
use the BasicTTIImpl implementations that route to the subtarget.
Once system models are in place for the targets that use these interfaces, their
custom TTI implementations can be removed.
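As a rough sketch of the delegation described above (illustrative class and member names only, not the actual in-tree code), a BasicTTIImpl-style cache query simply forwards to the subtarget, whose MCSubtargetInfo-level default stands in for what TargetTransformInfoImplBase used to return:
```cpp
// Hedged sketch only; the names below are stand-ins, not LLVM's real classes.
class MySubtargetInfo {
public:
  // Default that conceptually moves here from TargetTransformInfoImplBase.
  virtual unsigned getCacheLineSize() const { return 0; }
  virtual ~MySubtargetInfo() = default;
};

class MyTTIImpl {
  const MySubtargetInfo *ST;

public:
  explicit MyTTIImpl(const MySubtargetInfo *ST) : ST(ST) {}
  // The TTI query routes to the subtarget instead of a hard-coded default,
  // so a target can override it in its subtarget implementation.
  unsigned getCacheLineSize() const { return ST->getCacheLineSize(); }
};
```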
David Blaikie [Wed, 9 Oct 2019 18:37:13 +0000 (18:37 +0000)]
DebugInfo: Shot in the dark attempt to fix ubsan error from r374122
(specifying an underlying type for the enum might also be suitable - but
this seems better/as good, since there's a clear expectation this can
contain values other than the actual enumerators of this enum)
Julian Lettner [Wed, 9 Oct 2019 18:23:30 +0000 (18:23 +0000)]
[lit] Refactor ProgressDisplay
Move progress display to separate file. Simplify some code paths.
Decouple from other components via progress callback. Remove unused
`_Display` class.
Thomas Lively [Wed, 9 Oct 2019 17:45:47 +0000 (17:45 +0000)]
[WebAssembly] Add builtin and intrinsic for v8x16.swizzle
Summary:
This clang builtin and corresponding LLVM intrinsic are necessary to
expose the exact semantics of the underlying WebAssembly instruction
to users. LLVM produces a poison value if the dynamic swizzle indices
are greater than the vector size, but the WebAssembly instruction sets
the corresponding output lane to zero. Users who depend on this
behavior can safely use this builtin.
Thomas Lively [Wed, 9 Oct 2019 17:39:19 +0000 (17:39 +0000)]
[WebAssembly] v8x16.swizzle and rewrite BUILD_VECTOR lowering
Summary:
Adds the new v8x16.swizzle SIMD instruction as specified at
https://github.com/WebAssembly/simd/blob/master/proposals/simd/SIMD.md#swizzling-using-variable-indices.
In addition to adding swizzles as a candidate lowering in
LowerBUILD_VECTOR, this patch also rewrites and simplifies the lowering to
minimize the number of replace_lanes necessary rather than trying to
minimize code size. This leads to more uses of v128.const instead of
splats, which is expected to increase performance.
The new code will be easier to tune once V8 implements all the vector
construction operations, and it will also be easier to add new
candidate instructions in the future if necessary.
Kevin P. Neal [Wed, 9 Oct 2019 17:24:56 +0000 (17:24 +0000)]
[FPEnv][NFC] Change test to conform to strictfp attribute rules.
In particular, the function definition is not marked strictfp despite
containing a function call marked strictfp. Also, if any function call is
marked strictfp then all function calls in that function must be marked.
This change, moving the one strictfp call to a new properly marked function,
meets all the new rules.
Sanjay Patel [Wed, 9 Oct 2019 16:32:49 +0000 (16:32 +0000)]
[SLP] respect target register width for GEP vectorization (PR43578)
We failed to account for the target register width (max vector factor)
when vectorizing starting from GEPs. This causes vectorization to
proceed to obviously illegal widths as in:
https://bugs.llvm.org/show_bug.cgi?id=43578
For x86, this also means that SLP can produce rogue AVX or AVX512
code even when the user specifies a narrower vector width.
The AArch64 test in ext-trunc.ll appears to be better with the
narrower width. I'm not exactly sure what getelementptr.ll is trying
to do, but it's testing with "-slp-threshold=-18", so I'm not worried
about those diffs. The x86 test is an over-reduction from SPEC h264;
this patch appears to recover the perf loss caused by SLP when using
-march=haswell.
Momchil Velikov [Wed, 9 Oct 2019 16:31:50 +0000 (16:31 +0000)]
[AArch64] Ensure no tagged memory is left in the unallocated portion of the
stack
This patch makes sure that if we tag some memory, we untag that memory before
the function returns/throws via any exit, reachable from the tag operation. For
that we place the untag operation either at:
a) the lifetime end call for the alloca, if that call post-dominates the
lifetime start call (where the tag operation is placed), or it (the
lifetime end call) dominates all reachable exits, otherwise
b) at the reachable exits
Jason Liu [Wed, 9 Oct 2019 16:19:39 +0000 (16:19 +0000)]
[AIX][XCOFF][NFC] Change the SectionLen field name of CSect Auxiliary entry to SectionOrLength.
Summary:
According to the XCOFF document, the meaning of x_scnlen depends on the
symbol type:
  XTY_SD: x_scnlen contains the csect length.
  XTY_LD: x_scnlen contains the symbol table index of the containing csect.
  XTY_CM: x_scnlen contains the csect length.
  XTY_ER: x_scnlen contains 0.
Changing the SectionLen member name to SectionOrLength is therefore more
reasonable.
Re-land "[dsymutil] Fix handling of common symbols in multiple object files."
The original patch got reverted because it hit a long-standing legacy
issue on Windows that prevents files from being named `com`. Thanks
Kristina & Jeremy for pointing this out.
Alina Sbirlea [Wed, 9 Oct 2019 15:54:24 +0000 (15:54 +0000)]
[MemorySSA] Make the use of moveAllAfterMergeBlocks consistent.
Summary:
The rule for the moveAllAfterMergeBlocks API is that all instructions
from `From` must have been moved to `To`, while keeping the CFG edges (and
block terminators) unchanged.
Update all the callsites of moveAllAfterMergeBlocks to follow this.
Pending follow-up: since the same behavior is needed every time, merge
all callsites into one. The common denominator may be the call to
`MergeBlockIntoPredecessor`.
Simon Atanasyan [Wed, 9 Oct 2019 13:12:21 +0000 (13:12 +0000)]
[mips] Split expandLoadImmReal into multiple methods. NFC
The `expandLoadImmReal` method handles four different and almost non-overlapping
cases: loading a "single" float immediate into a GPR, loading a "single"
float immediate into an FPR, and the same pair of cases for a "double" float
immediate.
It's better to move each `else if` branch into a separate method.
James Molloy [Wed, 9 Oct 2019 09:15:34 +0000 (09:15 +0000)]
[TableGen] Fix crash when using HwModes in CodeEmitterGen
When an instruction has an encoding definition for only a subset of
the available HwModes, ensure we just avoid generating an encoding
rather than crash.
Hans Wennborg [Wed, 9 Oct 2019 09:06:30 +0000 (09:06 +0000)]
Unify the two CRC implementations
David added the JamCRC implementation in r246590. More recently, Eugene
added a CRC-32 implementation in r357901, which falls back to zlib's
crc32 function if present.
These checksums are essentially the same, so having multiple
implementations seems unnecessary. This replaces the CRC-32
implementation with the simpler one from JamCRC, and implements the
JamCRC interface in terms of CRC-32 since this means it can use zlib's
implementation when available, saving a few bytes and potentially making
it faster.
JamCRC took an ArrayRef<char> argument, and CRC-32 took a StringRef.
This patch changes it to ArrayRef<uint8_t>, which I think is the best
choice, and simplifies a few of the callers nicely.
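As a self-contained sketch of the relationship being exploited (illustrative only, not the in-tree code): a standard CRC-32 initializes its state to all-ones and inverts the result at the end, while JamCRC omits the final inversion, so JamCRC can be computed by inverting the value on the way into and out of a plain CRC-32 routine such as zlib's:
```cpp
#include <cstddef>
#include <cstdint>

// Minimal bitwise CRC-32 (reflected polynomial 0xEDB88320) with the usual
// init/final inversion, standing in for zlib's crc32().
uint32_t crc32(uint32_t CRC, const uint8_t *Data, size_t Size) {
  CRC = ~CRC;
  for (size_t I = 0; I < Size; ++I) {
    CRC ^= Data[I];
    for (int B = 0; B < 8; ++B)
      CRC = (CRC & 1) ? (CRC >> 1) ^ 0xEDB88320u : (CRC >> 1);
  }
  return ~CRC;
}

// JamCRC is CRC-32 without the final inversion; inverting on the way in and
// out cancels crc32()'s init/final XOR steps, so zlib can do the real work.
uint32_t jamCRC(uint32_t CRC, const uint8_t *Data, size_t Size) {
  return ~crc32(~CRC, Data, Size);
}
```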
[dsymutil] Fix handling of common symbols in multiple object files.
For common symbols the linker emits only a single symbol entry in the
debug map. This caused dsymutil to not relocate common symbols when
linking DWARF coming from object files that did not have this entry.
This patch fixes that by keeping track of common symbols in the object
files and synthesizing a debug map entry for them using the address from
the main binary.
The verbose output for finding relocations assumed that we'd always dump
the DIE after (which starts with a newline) and therefore didn't include
one itself. However, this isn't always true, leading to garbled output.
This patch adds a newline to the verbose output and adds a line that
says that the DIE is being kept (which isn't obvious otherwise). It also
adds a 0x prefix to the relocations.
Roman Lebedev [Tue, 8 Oct 2019 20:29:48 +0000 (20:29 +0000)]
[CVP] Replace SExt with ZExt if the input is known non-negative
Summary:
zero-extension is far more friendly for further analysis.
While this doesn't directly help with the shift-by-signext problem, this is not unrelated.
TL;DR: we produce 0.11% fewer instructions, 6.84% fewer `sext`, and 10.75% more `zext`.
To be noted, clearly, not all of the new `zext`s are produced by this fold.
(And now I guess it might have been interesting to measure this for D68103 :S)
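A hedged sketch of the shape of this fold (illustrative only; the exact CorrelatedValuePropagation code and LazyValueInfo calls may differ): if range analysis proves the operand's sign bit is clear, the sext can be rewritten as a zext:
```cpp
// Hedged sketch only; not the actual CVP implementation.
#include "llvm/Analysis/LazyValueInfo.h"
#include "llvm/IR/ConstantRange.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Instructions.h"

using namespace llvm;

// Returns true if the sext was replaced by a zext.
static bool processSExtSketch(SExtInst *SE, LazyValueInfo *LVI) {
  Value *Op = SE->getOperand(0);
  // If the whole known range of the operand is non-negative, the sign bit is
  // clear and zero-extension produces the same value as sign-extension.
  ConstantRange CR = LVI->getConstantRange(Op, SE->getParent(), SE);
  if (!CR.getSignedMin().isNonNegative())
    return false;
  IRBuilder<> B(SE);
  Value *ZExt = B.CreateZExt(Op, SE->getType(), SE->getName());
  SE->replaceAllUsesWith(ZExt);
  SE->eraseFromParent();
  return true;
}
```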
Daniel Sanders [Tue, 8 Oct 2019 18:41:32 +0000 (18:41 +0000)]
[tblgen] Add getOperatorAsDef() to Record
Summary:
While working with DagInits, it's often the case that you expect the
operator to be a reference to a def. This patch adds a wrapper for this
common case to reduce the amount of boilerplate callers need to duplicate
repeatedly.
getOperatorAsDef() returns the record if the DagInit has an operator that is
a DefInit. Otherwise, it prints a fatal error.
There are only a few pre-existing examples in LLVM at the moment, and I've
left a few instances of the code this simplifies as they had more specific
error messages than the generic one this produces. I'm going to be using
this a fair bit in my subsequent patches.
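As a rough sketch of the boilerplate this wraps (assumed shape, not necessarily the exact upstream code), callers previously had to unwrap the operator and report the failure themselves; getOperatorAsDef() folds the check, the cast and the fatal error into one call:
```cpp
// Hedged sketch only.
#include "llvm/TableGen/Error.h"
#include "llvm/TableGen/Record.h"

using namespace llvm;

// The pattern callers had to repeat before this helper existed: check that
// the dag operator is a DefInit and fail loudly otherwise.
static Record *getDagOperatorRecord(const DagInit *Dag, ArrayRef<SMLoc> Loc) {
  if (const DefInit *OpDef = dyn_cast<DefInit>(Dag->getOperator()))
    return OpDef->getDef();
  PrintFatalError(Loc, "Expected the operator of the dag to be a record");
}
```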
Yonghong Song [Tue, 8 Oct 2019 18:23:17 +0000 (18:23 +0000)]
[BPF] do compile-once run-everywhere relocation for bitfields
A bpf specific clang intrinsic is introduced:
u32 __builtin_preserve_field_info(member_access, info_kind)
Depending on info_kind, different information will
be returned to the program. A relocation is also
recorded for this builtin so that the bpf loader can
patch the instruction on the target host.
This clang intrinsic is used to get certain information
to facilitate struct/union member relocations.
The offset relocation is extended by 4 bytes to
include relocation kind.
Currently supported relocation kinds for __builtin_preserve_field_info are:
  enum {
    FIELD_BYTE_OFFSET = 0,
    FIELD_BYTE_SIZE,
    FIELD_EXISTENCE,
    FIELD_SIGNEDNESS,
    FIELD_LSHIFT_U64,
    FIELD_RSHIFT_U64,
  };
The old access offset relocation is covered by FIELD_BYTE_OFFSET = 0.
An example:
  struct s {
    int a;
    int b1:9;
    int b2:4;
  };
  enum {
    FIELD_BYTE_OFFSET = 0,
    FIELD_BYTE_SIZE,
    FIELD_EXISTENCE,
    FIELD_SIGNEDNESS,
    FIELD_LSHIFT_U64,
    FIELD_RSHIFT_U64,
  };
  void bpf_probe_read(void *, unsigned, const void *);
  int field_read(struct s *arg) {
    unsigned long long ull = 0;
    unsigned offset = __builtin_preserve_field_info(arg->b2, FIELD_BYTE_OFFSET);
    unsigned size = __builtin_preserve_field_info(arg->b2, FIELD_BYTE_SIZE);
  #ifdef USE_PROBE_READ
    bpf_probe_read(&ull, size, (const void *)arg + offset);
    unsigned lshift = __builtin_preserve_field_info(arg->b2, FIELD_LSHIFT_U64);
  #if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
    lshift = lshift + (size << 3) - 64;
  #endif
  #else
    switch(size) {
    case 1:
      ull = *(unsigned char *)((void *)arg + offset); break;
    case 2:
      ull = *(unsigned short *)((void *)arg + offset); break;
    case 4:
      ull = *(unsigned int *)((void *)arg + offset); break;
    case 8:
      ull = *(unsigned long long *)((void *)arg + offset); break;
    }
    unsigned lshift = __builtin_preserve_field_info(arg->b2, FIELD_LSHIFT_U64);
  #endif
    ull <<= lshift;
    if (__builtin_preserve_field_info(arg->b2, FIELD_SIGNEDNESS))
      return (long long)ull >> __builtin_preserve_field_info(arg->b2, FIELD_RSHIFT_U64);
    return ull >> __builtin_preserve_field_info(arg->b2, FIELD_RSHIFT_U64);
  }
There is a minor overhead for bpf_probe_read() on big endian.
The code and relocations generated for field_read, when bpf_probe_read() is
used to access the argument data, in little endian mode:
r3 = r1
r1 = 0
r1 = 4 <=== relocation (FIELD_BYTE_OFFSET)
r3 += r1
r1 = r10
r1 += -8
r2 = 4 <=== relocation (FIELD_BYTE_SIZE)
call bpf_probe_read
r2 = 51 <=== relocation (FIELD_LSHIFT_U64)
r1 = *(u64 *)(r10 - 8)
r1 <<= r2
r2 = 60 <=== relocation (FIELD_RSHIFT_U64)
r0 = r1
r0 >>= r2
r3 = 1 <=== relocation (FIELD_SIGNEDNESS)
if r3 == 0 goto LBB0_2
r1 s>>= r2
r0 = r1
LBB0_2:
exit
Compared to the above code between the FIELD_LSHIFT_U64 and
FIELD_RSHIFT_U64 relocations, the code in big endian mode has four more
instructions:
r1 = 41 <=== relocation (FIELD_LSHIFT_U64)
r6 += r1
r6 += -64
r6 <<= 32
r6 >>= 32
r1 = *(u64 *)(r10 - 8)
r1 <<= r6
r2 = 60 <=== relocation (FIELD_RSHIFT_U64)
Considering that the verifier is able to do limited constant
propagation following branches, the following is the code actually
traversed:
r2 = 0
r3 = 4 <=== relocation
r4 = 4 <=== relocation
if r4 s> 3 goto LBB0_3
LBB0_3: # %entry
if r4 == 4 goto LBB0_7
LBB0_7: # %sw.bb5
r1 += r3
r2 = *(u32 *)(r1 + 0)
LBB0_9: # %sw.epilog
r1 = 51 <=== relocation
r2 <<= r1
r1 = 60 <=== relocation
r0 = r2
r0 >>= r1
r3 = 1
if r3 == 0 goto LBB0_11
r2 s>>= r1
r0 = r2
LBB0_11: # %sw.epilog
exit
For the native load case, the load size is calculated to be the
same as the width LLVM would otherwise use to load the value from
which the bitfield value is then extracted.
Matt Arsenault [Tue, 8 Oct 2019 17:36:38 +0000 (17:36 +0000)]
AMDGPU: Fix i16 arithmetic pattern redundancy
There were 2 problems here. First, these patterns were duplicated to
handle the inverted shift operands instead of using the commuted
PatFrags.
Second, the zext folding patterns don't apply to the subtargets that do
not zero the high bits. They should be skipped there instead of inserting
the extension; the high-bit zeroing code would be emitted when necessary
anyway. This was also emitting unnecessary zexts in cases where the
high bits were undefined.
Testing: check-llvm, and comparing the AMDGPUDisassembler.cpp.o binary
pre- vs. post-patch.
An alternate approach is to hide CodeExtractorAnalysisCache from clients
of CodeExtractor, and to recompute the analysis from scratch inside of
CodeExtractor::extractCodeRegion(). This eliminates some redundant work
in the shrinkwrapping legality check. However, some clients continue to
exhibit O(n^2) compile time behavior as computing the analysis is O(n).
Tom Stellard [Tue, 8 Oct 2019 17:04:51 +0000 (17:04 +0000)]
AMDGPU: Add offsets to MMO when lowering buffer intrinsics
Summary:
Without offsets on the MachineMemOperands (MMOs),
MachineInstr::mayAlias() will return true for all reads and writes to the
same resource descriptor. This leads to O(N^2) complexity in the MachineScheduler
when analyzing dependencies of buffer loads and stores. It also limits
the SILoadStoreOptimizer from merging more instructions.
This patch reduces the compile time of one pathological compute shader
from 12 seconds to 1 second.
Hideto Ueno [Tue, 8 Oct 2019 17:01:56 +0000 (17:01 +0000)]
[Attributor][Fix] Temporary fix for windows build bot failure
D65402 causes a test failure related to attributor-max-iterations.
This commit removes attributor-max-iterations-verify for now.
I'll investigate the cause, after which this change should be reverted to
restore the flag.
The static analyzer is warning about potential null dereferences, but in these cases we should be able to use cast<> directly, and if not, the assert will fire for us.
Inhibit generation of unused real dpp instructions on gfx10, just
as is done on other subtargets. This does not change anything
because these instructions are illegal anyway and not accepted, but it does
reduce the number of instruction definitions generated.
David Greene [Tue, 8 Oct 2019 16:25:42 +0000 (16:25 +0000)]
[UpdateCCTestChecks] Detect function mangled name on separate line
Sometimes functions with large comment blocks in front of them have their
declarations output on several lines by c-index-test. Hence the one-line
function name/line/mangled pattern will not work to detect them. Break the
pattern up into two patterns and keep state after seeing the name/line
information until we finally see the mangled name.
Roman Lebedev [Tue, 8 Oct 2019 16:21:13 +0000 (16:21 +0000)]
[NFC][CVP] Add tests where we can replace sext with zext
If the sign bit of the value that is being sign-extended is not set,
i.e. the value is non-negative (s>= 0), then zero-extension will suffice,
and is better for analysis: https://rise4fun.com/Alive/a8PD
Heejin Ahn [Tue, 8 Oct 2019 16:15:39 +0000 (16:15 +0000)]
[WebAssembly] Fix a bug in 'try' placement
Summary:
When searching for local expression tree created by stackified
registers, for 'block' placement, we start the search from the previous
instruction of a BB's terminator. But in 'try''s case, we should start
from the previous instruction of a call that can throw, or an EH_LABEL
that precedes the call, because the return values of the call's previous
instructions can be stackified and consumed by the throwing call.
For example,
```
i32.call @foo
call @bar ; may throw
br $label0
```
In this case, if we start the search from the previous instruction of
the terminator (`br` here), we end up stopping at `call @bar` and place
a 'try' between `i32.call @foo` and `call @bar`, because `call @bar`
does not have a return value so it is not a local expression tree of
`br`.
But in this case, unlike when placing 'block's, we should start the
search from `call @bar`, because the return value of `i32.call @foo` is
stackified and used by `call @bar`.
Hideto Ueno [Tue, 8 Oct 2019 15:25:56 +0000 (15:25 +0000)]
[Attributor][MustExec] Deduce dereferenceable and nonnull attribute using MustBeExecutedContextExplorer
Summary:
In D65186 and related patches, MustBeExecutedContextExplorer is introduced. This enables us to traverse instructions guaranteed to execute from function entry. If we can know the argument is used as `dereferenceable` or `nonnull` in these instructions, we can mark `dereferenceable` or `nonnull` in the argument definition:
1. Memory instruction (similar to D64258)
Trace memory instruction pointer operand. Currently, only inbounds GEPs are traced.
```
define i64* @f(i64* %a) {
entry:
%add.ptr = getelementptr inbounds i64, i64* %a, i64 1
; (because of inbounds GEP we can know that %a is at least dereferenceable(16))
store i64 1, i64* %add.ptr, align 8
ret i64* %add.ptr ; dereferenceable 8 (because above instruction stores into it)
}
```
2. Propagation from callsite (similar to D27855)
If `deref` or `nonnull` is known in a call site's parameter attributes, we can also mark the argument with that attribute.
Hideto Ueno [Tue, 8 Oct 2019 15:20:19 +0000 (15:20 +0000)]
[Attributor] Add helper class to compose two structured deductions.
Summary: This patch introduces a generic way to compose two structured deductions. This will be used for composing generic deduction with `MustBeExecutedExplorer` and other existing generic deductions.
Cyndy Ishida [Tue, 8 Oct 2019 15:07:36 +0000 (15:07 +0000)]
[TextAPI] Introduce TBDv4
Summary:
This format introduces new features and platforms.
The motivation for this format is to support more than one platform, since previous
versions only supported additional architectures and a single platform; for example,
ios + ios-simulator and macCatalyst.
Mirko Brkusanin [Tue, 8 Oct 2019 14:32:03 +0000 (14:32 +0000)]
[Mips] Emit proper ABI for _mcount calls
When the -pg option is present, a call to _mcount is inserted into every
function. However, since the proper ABI was not followed, the generated
gmon.out did not give proper results. By inserting the needed instructions
before every _mcount call we can fix this.
Pavel Labath [Tue, 8 Oct 2019 14:15:32 +0000 (14:15 +0000)]
Object/minidump: Add support for the MemoryInfoList stream
Summary:
This patch adds the definitions of the constants and structures
necessary to interpret the MemoryInfoList minidump stream, as well as
the object::MinidumpFile interface to access the stream.
While the code is fairly simple, there is one important deviation from
the other minidump streams, which is worth calling out explicitly.
Unlike other "List" streams, the size of the records inside
MemoryInfoList stream is not known statically. Instead it is described
in the stream header. This makes it impossible to return
ArrayRef<MemoryInfo> from the accessor method, as is done with other
streams. Instead, I create an iterator class which can be parameterized
by the runtime size of the structure, and return
iterator_range<iterator>.
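A minimal sketch of that iterator idea (illustrative names only; the real object::MinidumpFile interface differs): the per-record size comes from the stream header at runtime, so the iterator advances by that stride instead of by sizeof:
```cpp
#include <cstddef>
#include <cstdint>

// Stand-in for the record layout; the real minidump MemoryInfo has more fields.
struct MemoryInfoSketch {
  uint64_t BaseAddress;
  uint64_t RegionSize;
};

// Iterates records whose size is only known at runtime (from the
// MemoryInfoList stream header), advancing by Stride rather than sizeof.
class MemoryInfoIteratorSketch {
  const uint8_t *Pos;
  size_t Stride;

public:
  MemoryInfoIteratorSketch(const uint8_t *Pos, size_t Stride)
      : Pos(Pos), Stride(Stride) {}
  const MemoryInfoSketch &operator*() const {
    return *reinterpret_cast<const MemoryInfoSketch *>(Pos);
  }
  MemoryInfoIteratorSketch &operator++() {
    Pos += Stride;
    return *this;
  }
  bool operator!=(const MemoryInfoIteratorSketch &RHS) const {
    return Pos != RHS.Pos;
  }
};
```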
Kevin P. Neal [Tue, 8 Oct 2019 14:10:26 +0000 (14:10 +0000)]
Nope, I'm wrong. It looks like someone else removed these on purpose and
it just happened to break the bot right when I did my push. So I'm undoing
this morning's incorrect push. I've also kicked off an email to hopefully
get the bot fixed the correct way.
Sebastian Pop [Tue, 8 Oct 2019 13:23:57 +0000 (13:23 +0000)]
fix fmls fp16
Tim Northover remarked that the added patterns for fmls fp16
produce wrong code in case the fsub instruction has a
multiplication as its first operand, i.e., all the patterns FMLSv*_OP1:
> define <8 x half> @test_FMLSv8f16_OP1(<8 x half> %a, <8 x half> %b, <8 x half> %c) {
> ; CHECK-LABEL: test_FMLSv8f16_OP1:
> ; CHECK: fmls {{v[0-9]+}}.8h, {{v[0-9]+}}.8h, {{v[0-9]+}}.8h
> entry:
>
> %mul = fmul fast <8 x half> %c, %b
> %sub = fsub fast <8 x half> %mul, %a
> ret <8 x half> %sub
> }
>
> This doesn't look right to me. The exact instruction produced is "fmls
> v0.8h, v2.8h, v1.8h", which I think calculates "v0 - v2*v1", but the
> IR is calculating "v2*v1-v0". The equivalent <4 x float> code also
> doesn't emit an fmls.
This patch generates an fmla and negates the value of operand 2 of the fsub.
Inspecting the pattern match, I found that there was another mistake in the
opcode to be selected: matching FMULv4*16 should generate FMLSv4*16
and not FMLSv2*32.
Graham Hunter [Tue, 8 Oct 2019 12:53:54 +0000 (12:53 +0000)]
[SVE][IR] Scalable Vector size queries and IR instruction support
* Adds a TypeSize struct to represent the known minimum size of a type
along with a flag to indicate that the runtime size is an integer multiple
of that size
* Converts existing size query functions from Type.h and DataLayout.h to
return a TypeSize result
* Adds convenience methods (including a transparent conversion operator
to uint64_t) so that most existing code 'just works' as if the return
values were still scalars.
* Uses the new size queries along with ElementCount to ensure that all
supported instructions used with scalable vectors can be constructed
in IR.
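A minimal sketch of the TypeSize idea (member and method names here are illustrative, not the exact in-tree API): a known minimum size plus a scalable flag, with a conversion to uint64_t so that existing fixed-size callers keep working unchanged:
```cpp
#include <cassert>
#include <cstdint>

// Illustrative only; the real llvm::TypeSize is richer than this.
class TypeSizeSketch {
  uint64_t MinSize; // known minimum size, e.g. in bits
  bool Scalable;    // true if the runtime size is MinSize * vscale

public:
  TypeSizeSketch(uint64_t MinSize, bool Scalable)
      : MinSize(MinSize), Scalable(Scalable) {}

  uint64_t getKnownMinSize() const { return MinSize; }
  bool isScalable() const { return Scalable; }

  // Transparent conversion so code that expects a plain integer size
  // 'just works' for fixed-size types.
  operator uint64_t() const {
    assert(!Scalable && "runtime size of a scalable type is unknown");
    return MinSize;
  }
};
```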
Nicolai Haehnle [Tue, 8 Oct 2019 12:46:20 +0000 (12:46 +0000)]
MachineSSAUpdater: insert IMPLICIT_DEF at top of basic block
Summary:
When getValueInMiddleOfBlock happens to be called for a basic block
that has no incoming value at all, an IMPLICIT_DEF is inserted in that
block via GetValueAtEndOfBlockInternal. This IMPLICIT_DEF must be at
the top of its basic block or it will likely not reach the use that
the caller intends to insert.
[MCA][LSUnit] Track loads and stores until retirement.
Before this patch, loads and stores were only tracked by their corresponding
queues in the LSUnit from dispatch until execute stage. In practice we should be
more conservative and assume that memory opcodes leave their queues at
retirement stage.
Basically, loads should leave the load queue only when they have completed and
delivered their data. We conservatively assume that a load is completed when it
is retired. Stores should be tracked by the store queue from dispatch until
retirement. In practice, stores can only leave the store queue if their data can
be written to the data cache.
This is mostly a mechanical change. With this patch, the retire stage notifies
the LSUnit when a memory instruction is retired. That triggers the release
of LDQ/STQ entries. The only visible change is in memory tests for the bdver2
model. That is because bdver2 is the only model that defines the load/store
queue size.
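A rough sketch of the notification flow described above (all names are illustrative, not the actual llvm-mca classes): the retire stage tells the load/store unit when an instruction retires, and only then is the corresponding queue entry released:
```cpp
#include <deque>

// Illustrative stand-in for the LSUnit's load/store queues.
struct LoadStoreUnitSketch {
  std::deque<unsigned> LoadQueue;  // IDs of in-flight loads
  std::deque<unsigned> StoreQueue; // IDs of in-flight stores

  // Entries now live until retirement instead of being freed at execute.
  void onInstructionRetired(unsigned ID) {
    if (!LoadQueue.empty() && LoadQueue.front() == ID)
      LoadQueue.pop_front();
    if (!StoreQueue.empty() && StoreQueue.front() == ID)
      StoreQueue.pop_front();
  }
};

// Illustrative stand-in for the retire stage.
struct RetireStageSketch {
  LoadStoreUnitSketch &LSU;
  explicit RetireStageSketch(LoadStoreUnitSketch &LSU) : LSU(LSU) {}

  // Called for each instruction, in program order, as it retires.
  void onInstructionRetired(unsigned ID, bool IsMemOp) {
    if (IsMemOp)
      LSU.onInstructionRetired(ID); // release the LDQ/STQ entry here
  }
};
```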
Support for tracking registers that forward function parameters into the
following function frame. For now we only support cases when a parameter
is forwarded through a single register.
Clement Courbet [Tue, 8 Oct 2019 09:06:48 +0000 (09:06 +0000)]
[llvm-exegesis] Finish plumbing the `Config` field.
Summary:
Right now there are no snippet generators that emit the `Config` field,
but I plan to add one to investigate LEA operands for PR32326.
What was broken was:
- `Config` was not propagated up to the BenchmarkResult::Key.
- Clustering should really consider different configs as measuring
different things, so we should stabilize on (Opcode, Config) instead of
just Opcode.
Kristof Beyls [Tue, 8 Oct 2019 08:25:42 +0000 (08:25 +0000)]
[ARM] Generate vcmp instead of vcmpe
Based on the discussion in
http://lists.llvm.org/pipermail/llvm-dev/2019-October/135574.html, the
conclusion was reached that the ARM backend should produce vcmp instead
of vcmpe instructions by default, i.e. not produce an Invalid
Operation exception when either argument in a floating point compare
is a quiet NaN.
In the future, after constrained floating point intrinsics for floating
point compare have been introduced, vcmpe instructions probably should
be produced for those intrinsics - depending on the exact semantics
they'll be defined to have.
This patch logically consists of the following parts:
- Revert http://llvm.org/viewvc/llvm-project?rev=294945&view=rev and
http://llvm.org/viewvc/llvm-project?rev=294968&view=rev, which
implemented fine-tuning for when to produce vcmpe (i.e. not do it for
equality comparisons). The complexity introduced by those patches
isn't needed anymore if we just always produce vcmp instead. Maybe
these patches need to be reintroduced again once support is needed to
map potential LLVM-IR constrained floating point compare intrinsics to
the ARM instruction set.
- Simply select vcmp, instead of vcmpe, see simple changes in
lib/Target/ARM/ARMInstrVFP.td
- Adapt lots of tests that tested for vcmpe (instead of vcmp). For all
of these tests, the intent of what is tested isn't related to
whether the vcmp should produce an Invalid Operation exception or not.
Kai Nacke [Tue, 8 Oct 2019 08:21:20 +0000 (08:21 +0000)]
[Tools] Mark output of tools as text if it is text
Several LLVM tools write text files/streams without using OF_Text.
This can cause problems on platforms which distinguish between
text and binary output. This PR adds the OF_Text flag for the
following tools:
Bill Wendling [Tue, 8 Oct 2019 04:39:52 +0000 (04:39 +0000)]
[IA] Recognize hexadecimal escape sequences
Summary:
Implement support for hexadecimal escape sequences to match how GNU 'as'
handles them. I.e., read all hexadecimal characters and truncate to the
lower 16 bits.
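A hedged sketch of that behaviour (an illustrative helper, not the actual AsmParser code): consume every hex digit after the escape introducer and keep only the low 16 bits:
```cpp
#include <cctype>
#include <cstdint>
#include <string>

// Parses the digits of a "\x..." escape starting at S[I], advancing I past
// the consumed digits; all hex digits are read, then the value is truncated.
static uint64_t parseHexEscapeSketch(const std::string &S, size_t &I) {
  uint64_t Value = 0;
  while (I < S.size() && std::isxdigit(static_cast<unsigned char>(S[I]))) {
    char C = S[I++];
    unsigned Digit =
        (C >= '0' && C <= '9') ? C - '0' : (std::tolower(C) - 'a') + 10;
    Value = (Value << 4) | Digit;
  }
  return Value & 0xFFFF; // truncate to the lower 16 bits, like GNU 'as'
}
```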