David Blaikie [Fri, 7 Jul 2017 21:02:59 +0000 (21:02 +0000)]
ProfData: Fix some unchecked Errors in unit tests
The 'NoError' function was meant to be used as the input to
ASSERT/EXPECT_TRUE, but it is easy to forget this (it could be annotated
with nodiscard to help with this), so many sites that look like they're checked
are not (& silently discard the failure). Only one site actually has an
Error sneaking out this way and I've replaced that one with a
FIXME+consumeError.
The rest of the code has been modified to use the EXPECT_THAT_ERROR
macros Zach introduced a while back. Between the options available this
seems OK/good/something to standardize on - though it's difficult to
build a matcher that could handle checking for a specific llvm::Error
result, so those remain using the custom ErrorEquals (& the nodiscard
added to ensure it is not misused as it was prior to this patch). It
could still be generalized a bit further (even if not as far as a matcher,
but at least to support multiple kinds of Error, etc.) & added to the
general Error utility header.
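As a rough illustration of the checking style this commit standardizes on (a hedged sketch only; mayFail is a hypothetical stand-in, not a function from the actual tests), the gtest helpers from llvm/Testing/Support/Error.h are used like this:
```
#include "llvm/Support/Error.h"
#include "llvm/Testing/Support/Error.h"
#include "gtest/gtest.h"

using namespace llvm;

// Hypothetical stand-in for an operation under test that returns llvm::Error.
static Error mayFail(bool Fail) {
  if (Fail)
    return make_error<StringError>("failed", inconvertibleErrorCode());
  return Error::success();
}

TEST(ProfDataExample, CheckedErrors) {
  // These assert on the Error rather than silently discarding it.
  EXPECT_THAT_ERROR(mayFail(false), Succeeded());
  EXPECT_THAT_ERROR(mayFail(true), Failed());

  // The FIXME+consumeError pattern mentioned above: deliberately drop an
  // Error that cannot be checked yet so it is not fatal on destruction.
  // FIXME: check this Error properly.
  consumeError(mayFail(true));
}
```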
Increase the import-threshold for critical functions.
Summary: For iterative sample-pgo, if a hot call site is inlined in the profiling binary, we should inline it before profile annotation in the backend. Before that, the compile phase first collects all GUIDs that need to be imported and creates virtual "hot" call edges in the summary. However, "hot" is not good enough to guarantee that the callsites get inlined. This patch introduces a "critical" call edge and assigns a much higher importing threshold for those edges.
Reviewers: tejohnson
Reviewed By: tejohnson
Subscribers: sanjoy, mehdi_amini, llvm-commits, eraman
Add sample PGO support to ThinLTO new pass manager.
Summary:
For SamplePGO + ThinLTO, because profile annotation is done twice, in both the PrepareForThinLTO pipeline and the backend compiler, the following changes are needed in the PrepareForThinLTO phase to ensure the IR is not changed dramatically. Otherwise the profile annotation will be inaccurate in the backend compiler.
This will unblock the new PM testing for sample PGO (tools/clang/test/CodeGen/pgo-sample-thinlto-summary.c), which will be covered in another cfe patch.
[PDB] More changes to bring lld PDBs to parity with MSVC.
1) Don't write a /src/headerblock stream. This appears to be
written conditionally by MSVC, but it's not clear what the
condition is. For now, just remove it since we don't know
what it is anyway and the particular pdb we've checked in
for the test doesn't have one.
2) Write a valid timestamp for the PDB file signature. This
leads to non-reproducible builds, but it matches the default
behavior of link, so it should be our default as well. If
we need reproducibility, we should add a separate command
line option for it that is off by default.
3) Write an empty FPO stream. MSVC seems to always write an
FPO stream. This change makes the stream directory match
up, although we still need to make the contents of the FPO
stream match.
Anna Thomas [Fri, 7 Jul 2017 20:12:32 +0000 (20:12 +0000)]
[LoopUnrollRuntime] Support multiple exit blocks unrolling when prolog remainder generated
With the NFC refactoring in rL307417 (git SHA 987dd01), all the logic
is in place to support multiple exit/exiting blocks when prolog
remainder is generated.
This patch removes the assert that multiple exit blocks unrolling is only
supported when the epilog remainder is generated.
Also added test runs and checks with the PROLOG prefix in the
runtime-loop-multiple-exits.ll test cases.
[PatternMatch] Implement m_One and m_AllOnes using Constant::isOneValue/isAllOnesValue instead of doing our own splat detection and checking the resulting APInt.
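For context, a hedged sketch of how these matchers are typically used (illustrative helper functions, not code from this commit); they now defer splat handling to Constant::isOneValue / isAllOnesValue:
```
#include "llvm/IR/PatternMatch.h"

using namespace llvm;
using namespace llvm::PatternMatch;

// True if V is "X + 1" for some value X (1 may be a scalar or a splat vector).
static bool isAddOfOne(Value *V, Value *&X) {
  return match(V, m_Add(m_Value(X), m_One()));
}

// True if V is "X ^ -1", i.e. a bitwise NOT of X (-1 may also be a splat).
static bool isNotOfValue(Value *V, Value *&X) {
  return match(V, m_Xor(m_Value(X), m_AllOnes()));
}
```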
[RegAllocFast] Don't insert kill flags of super-register for partial kill
When reusing a register for a new definition, the fast register allocator
used to insert a kill flag at the previous last use of that register to
inform later passes that this register is free between the redef and the
last use. However, this may be wrong when subregisters are involved.
Indeed, a partial redef would have triggered a kill of the full super
register, potentially wrongly marking all the other subregisters as
free. Given we don't track which lanes are still live, we cannot set the
kill flag in such cases.
Note: This bug has been latent for about 7 years (r104056).
Some platforms require an explicit specialization of std::hash
for PdbRaw_FeaturesSig. Also a test involving case sensitivity
needed to be fixed. For now that particular check just accepts
any path even if they're completely different. Long term we
should output paths in the correct case to match MSVC.
Use windows path syntax when writing PDB module name.
Without this we would just append whatever the user
wrote on the command line, so if we're in C:\foo
and we run lld-link bar/baz.obj, we would write
C:\foo\bar/baz.obj in various places in the PDB.
The MSVC linker does not do this, so we shouldn't either.
This fixes some differences in the diff test, so we
update the test as well.
Fix some differences between lld and MSVC generated PDBs.
A couple of things were different about our generated PDBs.
1) We were outputting the wrong Version on the PDB Stream.
The version we were setting was newer than what MSVC is setting.
It's not clear what the implications are, but we change LLD
to use PdbImplVC70, as MSVC does.
2) For the optional debug stream indices in the DBI Stream, we
were outputting 0 to mean "the stream is not present". MSVC
outputs uint16_t(-1), which is the "correct" way to specify
that a stream is not present. So we fix that as well.
3) We were setting the PDB Stream signature to 0. This is supposed
to be the result of calling time(nullptr). Although this leads
to non-deterministic builds, a better way to solve that is by
having a command line option explicitly for generating a
reproducible build, and have the default behavior of lld-link
match the default behavior of link.
To test this, I'm making use of the new and improved `pdb diff`
sub command. To make it suitable for writing tests against, I had
to modify the diff subcommand slightly to print less verbose output.
Previously it would always print | <column> | <value1> | <value2> |
which is quite verbose, and the values are fragile. All we really
want to know is "did we produce the same value as link?" So I added
command line options to print a single character representing the
result status (different, identical, equivalent), and another to
hide the value display. Note that just by inspecting the diff output
used to write the test, you can see some things that are obviously
wrong. That is just reflective of the fact that this is the state
of affairs today, not that we're asserting that this is "correct".
We can use this as a starting point to discover differences, fix
them, and update the test.
We're getting to the point that some MS tools (e.g. DIA) can recognize
our PDBs but others (e.g. link.exe) cannot. I think the way forward is
to improve our tooling to help us find differences more easily. For
example, if we can compile the same program with clang-cl and cl and
have a tool tell us all the places where the PDBs differ, this could
tell us what we're doing wrong. It's tricky though, because there are a
lot of "benign" differences in a PDB. For example, if the string table
in one PDB consists of "foo" followed by "bar" and in the other PDB it
consists of "bar" followed by "foo", this is not necessarily a critical
difference, as long as the uses of these strings also refer to the
correct location. On the other hand, if the second PDB doesn't even
contain the string "foo" at all, this is a critical difference.
diff mode has been in llvm-pdbutil for quite a while, but because of the
above challenge along with some others, it's been hard to make it
useful. I think this patch addresses that. It looks for all the same
things, but it now prints the output in tabular format (carefully
formatted and aligned into tables and fields), and it highlights
critical differences in red, non-critical differences in yellow, and
identical fields in green. This makes it easy to spot the places we
differ, and the general concept of outputting arbitrary fields in
tabular format can be extended to provide analysis into many of the
different types of information that show up in a PDB.
Gor Nishanov [Fri, 7 Jul 2017 18:24:20 +0000 (18:24 +0000)]
[cloning] Do not duplicate types when cloning functions
Summary:
This is an add-on to the rl304488 cloning fixes. (Originally rl304226 reverted rl304228 and reapplied rl304488, https://reviews.llvm.org/D33655.)
rl304488 works great when the DILocalVariables that come from the inlined function have a 'unique-ed' type, but
in the case when the variable type is distinct, we will create a second DILocalVariable in the scope of the original function that was inlined.
Consider cloning of the following function:
```
define private void @f() !dbg !5 {
%1 = alloca i32, !dbg !11
call void @llvm.dbg.declare(metadata i32* %1, metadata !14, metadata !12), !dbg !18
ret void, !dbg !18
}
```
Now we have two DILocalVariables for "inlined" within the same scope. This results in an assert in AsmPrinter/DwarfDebug.h:131: void llvm::DbgVariable::addMMIEntry(const llvm::DbgVariable &): Assertion `V.Var == Var && "conflicting variable"' failed.
(Full example: See: https://bugs.llvm.org/show_bug.cgi?id=33492)
In this change we prevent duplication of types so that when the metadata for a DILocalVariable is cloned it will get uniqued to the same metadata node as the original variable.
Anna Thomas [Fri, 7 Jul 2017 18:05:28 +0000 (18:05 +0000)]
[LoopUnrollRuntime] NFC: use the precomputed loop exit in ConnectProlog
Minor refactoring to use the preexisting loop exit that's already
calculated. We do not need to recompute the loop exit in ConnectProlog.
Apart from avoiding redundant computation, this is required for
supporting multiple loop exits when Prolog remainder loops are generated.
Matthew Simpson [Fri, 7 Jul 2017 16:15:05 +0000 (16:15 +0000)]
[ARM] Implement interleaved access bug fix from r306334
r306334 fixed a bug in AArch64 dealing with wide interleaved accesses having
pointer types. The bug also exists in ARM, so this patch copies over the fix.
[x86] add SBB optimization for SETAE (uge) condition code
x86 scalar select-of-constants (Cond ? C1 : C2) combining/lowering is a mess
with missing optimizations. We handle some patterns, but miss logical variants.
To clean that up, we should convert all select-of-constants to logic/math and
enhance the combining for the expected patterns from that. DAGCombiner already
has the foundation to allow the transforms, so we just need to fill in the holes
for x86 math op lowering. Selecting 0 or -1 needs extra attention to produce the
optimal code as shown here.
Attempt to verify that all of these IR forms are logically equivalent:
http://rise4fun.com/Alive/plxs
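A minimal, hypothetical C++ source pattern (not taken from the commit's tests) showing the kind of select-of-constants the SETAE/uge case covers:
```
// Selecting between 0 and -1 on an unsigned 'uge' comparison is the shape
// that the SBB-based lowering described above targets, rather than going
// through SETAE plus a zero-extension.
unsigned select_uge(unsigned x, unsigned y) {
  return x >= y ? 0xFFFFFFFFu : 0u;
}
```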
Chad Rosier [Fri, 7 Jul 2017 13:55:55 +0000 (13:55 +0000)]
[ValueTracking] Fix the identity case (LHS => RHS) when the LHS is false.
Prior to this commit both of the added test cases were passing. However, in the
latter case (test7) we were doing a lot more work to arrive at the same answer
(i.e., we were using isImpliedCondMatchingOperands() to determine the
implication).
Anna Thomas [Fri, 7 Jul 2017 13:02:29 +0000 (13:02 +0000)]
[SafepointIRVerifier] Avoid false positives in GC verifier for compare between pointers
Today the safepoint IR verifier catches some unrelocated uses of base
pointers that are actually valid.
With this change, we narrow down the set of false positives.
Specifically, the verifier knows about compares to null and compares
between 2 unrelocated pointers.
[AArch64] Use 16 bytes as preferred function alignment on Cortex-A57.
Summary:
This change gives a 0.89% improvement in execution time, a 0.94% improvement
in benchmark scores and a 0.62% increase in binary size on a Cortex-A57.
These numbers are the geomean results on a wide range of benchmarks from
the test-suite, SPEC2000, SPEC2006 and a range of proprietary suites.
The software optimization guide for the Cortex-A57 recommends 16 byte
branch alignment.
[AArch64] Use 16 bytes as preferred function alignment on Cortex-A72.
Summary:
This change gives a 0.34% improvement in execution time, a 0.61% improvement
in benchmark scores and a 0.57% increase in binary size on a Cortex-A72.
These numbers are the geomean results on a wide range of benchmarks from
the test-suite, SPEC2000, SPEC2006 and a range of proprietary suites.
The software optimization guide for the Cortex-A72 recommends 16 byte
branch alignment.
Alex Lorenz [Fri, 7 Jul 2017 09:53:47 +0000 (09:53 +0000)]
[Support] sys::getProcessTriple should return a macOS triple using
the system's version of macOS
sys::getProcessTriple returns LLVM_HOST_TRIPLE, whose system version might not
be the actual version of the system on which the compiler is running. This commit
ensures that, for macOS, sys::getProcessTriple returns a triple with the
system's macOS version.
We lower to a sequence consisting of:
- MOVi 0 into a register
- VCMPS to do the actual comparison and set the VFP flags
- FMSTAT to move the flags out of the VFP unit
- MOVCCi to either use the "zero register" that we have previously set
with the MOVi, or move 1 into the result register, based on the values
of the flags
As was the case with soft-float, for some predicates (one, ueq) we
actually need two comparisons instead of just one. When that happens, we
generate two VCMPS-FMSTAT-MOVCCi sequences and chain them by means of
using the result of the first MOVCCi as the "zero register" for the
second one. This is a bit overkill, since one comparison followed by
two non-flag-setting conditional moves should be enough. In any case,
the backend manages to CSE one of the comparisons away so it doesn't
matter much.
Note that unlike SelectionDAG and FastISel, we always use VCMPS, and not
VCMPES. This makes the code a lot simpler, and it also seems correct
since the LLVM Lang Ref defines simple true/false returns if the
operands are QNaNs. For SNaNs, even VCMPS throws an Invalid Operand
exception, so they won't slip through unnoticed.
Implementation-wise, this introduces a template so we can share the same
code that we use for handling integer comparisons, since the only
differences are in the details (exact opcodes to be used etc). Hopefully
this will be easy to extend to s64 G_FCMP.
Based strictly on the name, this seems to have something to do
with edit & continue. The goal of this patch has nothing to do
with supporting edit and continue though. msvc link.exe writes
very basic information into this area even when *not* compiling
with support for E&C, and so the goal here is to bring lld-link
to parity. Since we cannot know what assumptions standard tools
make about the content of PDB files, we need to be as close as
possible.
This ECNames data structure is a standard PDB string hash table.
link.exe puts a single string into this hash table, which is the
full path to the PDB file on disk. It then references this string
from the module descriptor for the compiler generated `* Linker *`
module.
With this patch, lld-link will generate the exact same sequence of
bytes as MSVC link for this subsection for a given object file
input (as reported by `llvm-pdbutil bytes -ec`).
Lang Hames [Fri, 7 Jul 2017 02:59:13 +0000 (02:59 +0000)]
[ORC] Errorize the ORC APIs.
This patch updates the ORC layers and utilities to return and propagate
llvm::Errors where appropriate. This is necessary to allow ORC to safely handle
error cases in cross-process and remote JITing.
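For reference, a minimal sketch of the llvm::Error / llvm::Expected propagation style the layers move to (the function names here are illustrative, not the actual ORC API):
```
#include "llvm/Support/Error.h"

using namespace llvm;

// Hypothetical stand-in for a layer operation that can now fail cleanly.
static Expected<int> resolveSymbolAddress(bool Found) {
  if (!Found)
    return make_error<StringError>("symbol not found",
                                   inconvertibleErrorCode());
  return 42;
}

static Error materialize() {
  auto Addr = resolveSymbolAddress(/*Found=*/true);
  if (!Addr)
    return Addr.takeError(); // propagate instead of asserting or aborting
  // ... use *Addr ...
  return Error::success();
}
```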
Yaxun Liu [Fri, 7 Jul 2017 02:40:13 +0000 (02:40 +0000)]
[InferAddressSpaces] Fix assertion about null pointer
InferAddressSpaces does not check the address space in collectFlatAddressExpressions,
which causes values with a non-flat address space to be put into Postorder and triggers an
assertion in cloneValueWithNewAddressSpace.
This patch fixes the assertion in the OpenCL 2.0 conformance test generic_address_space
subtest for the amdgcn target.
Sam Clegg [Fri, 7 Jul 2017 02:01:29 +0000 (02:01 +0000)]
[WebAssembly] Support weak defined symbols
Model weakly defined symbols as symbols that are both
exported and imported and marked as weak. Local references
to such symbols refer to the import, but the linker can
resolve this to the weak export if no strong symbol
is found at link time.
Sean Fertile [Fri, 7 Jul 2017 02:00:06 +0000 (02:00 +0000)]
Extend memcpy expansion in Transform/Utils to handle wider operand types.
Adds loop expansions for known-size and unknown-sized memcpy calls, allowing the
target to provide the operand types through TTI callbacks. The default values
for the TTI callbacks use int8 operand types and match the existing behaviour
if they aren't overridden by the target.
Copy arguments passed by value into explicit allocas for ASan.
ASan determines the stack layout from alloca instructions. Since
arguments marked as "byval" do not have an explicit alloca instruction, ASan
does not produce red zones for them. This commit produces an explicit alloca
instruction and copies the byval argument into the allocated memory so that red
zones are produced.
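A hedged, hypothetical source-level example of the pattern involved (not taken from the commit): on many targets, a large aggregate passed by value is lowered to a byval pointer argument in the IR, which previously received no ASan red zone.
```
// A large aggregate passed by value typically becomes a "byval" pointer
// argument in LLVM IR. Before this change, ASan saw no alloca for it and
// therefore emitted no red zone around its stack copy.
struct Big { int data[32]; };

int sum(Big b) {
  int total = 0;
  for (int v : b.data)
    total += v;
  return total;
}
```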
Anna Thomas [Fri, 7 Jul 2017 00:40:37 +0000 (00:40 +0000)]
[SafepointIRVerifier] NFC: Refactor code for identifying exclusive base type
Added a new enum to identify whether the base pointer is exclusively null,
exclusively some constant, or not exclusively any constant.
Converted the base pointer identification method from recursive to
iterative form.
Wei Mi [Fri, 7 Jul 2017 00:11:05 +0000 (00:11 +0000)]
[ConstHoisting] Turn on consthoist-with-block-frequency by default.
Using profile information to guide consthoisting is generally helpful for
performance, so the patch turns it on by default. No compile time or perf
regressions were found using spec2000 and spec2006 on x86. Some significant
improvement (>20%) was seen on internal benchmarks.
Wei Mi [Thu, 6 Jul 2017 22:32:27 +0000 (22:32 +0000)]
[ConstHoisting] choose to hoist when frequency is the same.
The patch is to adjust the strategy of frequency based consthoisting:
Previously, when the candidate block had the same frequency as the existing
blocks containing a const, we would not hoist the const to the candidate block.
For that case, we now change the strategy to hoist the const only if the existing
blocks have more than one block member. This is helpful for reducing code size.
The patch adds support for i128 params lowering. The changes are quite trivial to
support i128 as a "special case" of integer type. With this patch, we lower i128
params the same way as aggregates of size 16 bytes: .param .b8 _ [16].
Currently, NVPTX can't deal with the 128 bit integers:
* in some cases because of failed assertions like
ValVTs.size() == OutVals.size() && "Bad return value decomposition"
* in other cases emitting PTX with .i128 or .u128 types (which are not valid [1])
[1] http://docs.nvidia.com/cuda/parallel-thread-execution/index.html#fundamental-types
Martin Storsjo [Thu, 6 Jul 2017 21:08:34 +0000 (21:08 +0000)]
[COFF, AArch64] Set the private label prefix to .L
This fixes calls to external functions starting with a capital L,
fixing errors like this:
fatal error: error in backend: assembler label 'LocalFree' can not be undefined
Regardless of relaxation options such as -cl-fast-relaxed-math
we are producing rather long code for fdiv via the amdgcn_fdiv_fast
intrinsic. This intrinsic is used to replace fdiv with 2.5ulp
metadata, does not handle denormals, and is thus believed to be fast.
An fdiv instruction can also have fast math flag either by itself
or together with fpmath metadata. Clang used with a relaxation flag
always produces both metadata and fast flag:
The current implementation ignores the fast flag and favors the metadata. An
instruction with just the fast flag would be lowered to the fastest rcp +
mul, but that never happens in practice because of the described mutual
clang and BE behavior.
This change allows an "fdiv fast" to be always lowered as rcp + mul.
Davide Italiano [Thu, 6 Jul 2017 19:58:26 +0000 (19:58 +0000)]
[LTO] Fix the interaction between linker redefined symbols and ThinLTO
This is the same as r304719 but for ThinLTO.
The substantial difference is that in this case we don't have
whole-program visibility, just the summary.
In the LTO case, when we got the resolution for the input file we
could just see if the linker told us whether a symbol was linker
redefined (using --wrap or --defsym) and switch the linkage directly
for the GV.
Here, we have the summary. So, we record that the linkage changed
from <whatever it was> to $weakany to prevent IPOs across this symbol
boundaries and actually just switch the linkage at FunctionImport time.
This patch should also fix the lld bits (as all the scaffolding for
communicating if a symbol is linker redefined should be there & should
be the same), but I'll make sure to add some tests there as well.
Allows the MachineIRBuilder APIs to directly create registers (based on
LLT or TargetRegisterClass) as well as accept MachineInstrBuilders,
implicitly converting them to registers (with getOperand(0).getReg()).
E.g. usage:
LLT s32 = LLT::scalar(32);
auto C32 = Builder.buildConstant(s32, 32);
auto Tmp = Builder.buildInstr(TargetOpcode::G_SUB, s32, C32,
OtherReg);
auto Tmp2 = Builder.buildInstr(Opcode, DstReg,
Builder.buildConstant(s32, 31)); ....
David Blaikie [Thu, 6 Jul 2017 19:00:12 +0000 (19:00 +0000)]
Prototype: Reduce llvm-profdata merge memory usage further
The InstrProfWriter already stores the name and hash of the record in
the nested maps it uses for lookup while merging - this data is
duplicated in the value within the maps.
Refactor the InstrProfRecord to use a nested struct for the counters
themselves so that InstrProfWriter can use this nested struct alone
without the name or hash duplicated there.
This work is incomplete, but enough to demonstrate the value (around a
50% decrease in memory usage for a large test case (10GB -> 5GB)).
Though most of that decrease is probably from removing the
SoftInstrProfError as well; I haven't implemented a replacement for
it yet. (It needs to go with the counters, because the operations on the
counters - merging, etc. - are where the failures are, unlike the
name/hash, which are totally unused by those counter-related operations
and thus easy to split out.)
Ongoing discussion about removing SoftInstrProfError as a field of the
InstrProfRecord is happening on the thread that added it - including
the possibility of moving back towards an earlier version of that
proposed patch that passed SoftInstrProfError through the various APIs,
rather than as a member of InstrProfRecord.
Leo Li [Thu, 6 Jul 2017 18:47:05 +0000 (18:47 +0000)]
Modify constraints in `llvm::canReplaceOperandWithVariable`
Summary:
`Instruction::Switch`: only the first operand can be set to a non-constant value.
`Instruction::InsertValue`: both the first and the second operand can be set to a non-constant value.
`Instruction::Alloca`: return true for non-static allocations.
[Constants] If we already have a ConstantInt*, prefer to use isZero/isOne/isMinusOne instead of isNullValue/isOneValue/isAllOnesValue inherited from Constant. NFCI
Going through the Constant methods requires redetermining that the Constant is a ConstantInt and then calling isZero/isOne/isMinusOne.
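An illustrative sketch of the preferred pattern (a hypothetical snippet, not code from this commit):
```
#include "llvm/IR/Constants.h"

using namespace llvm;

// If we already hold a ConstantInt*, query it directly; Constant's
// isNullValue/isOneValue/isAllOnesValue would just re-dispatch back to the
// ConstantInt implementation.
static bool isZeroOrAllOnes(const ConstantInt *CI) {
  return CI->isZero() || CI->isMinusOne();
}
```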
Anna Thomas [Thu, 6 Jul 2017 18:39:26 +0000 (18:39 +0000)]
[LoopUnrollRuntime] Bailout when multiple exiting blocks to the unique latch exit block
Currently, we do not support multiple exiting blocks to the
latch exit block. However, this bailout wasn't triggered when we had a
unique exit block (which is the latch exit), with multiple exiting
blocks to that unique exit.
Moved the bailout so that it's triggered in both cases and added a testcase.