Reid Kleckner [Wed, 5 Oct 2016 22:36:07 +0000 (22:36 +0000)]
[codeview] Truncate records to maximum record size near 64KB
If we don't truncate, LLVM asserts when the label difference doesn't fit
in a 16-bit field. This patch truncates two kinds of data: trailing
null-terminated names in symbol records, and inline line tables. The inline
line table test that I have is too large (many MB), so I'm not checking
it in.
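A minimal standalone sketch of the kind of clamping this describes; MaxRecordSize and truncateNameForRecord are illustrative names, and the exact limits and call sites in the CodeView writer may differ:
#include <cstddef>
#include <string>

// Hypothetical illustration: clamp a null-terminated name so that the bytes
// already used in the record plus the name and its terminating NUL never
// exceed what a 16-bit record-length field can describe.
static const std::size_t MaxRecordSize = 0xFFFF;

std::string truncateNameForRecord(std::string Name, std::size_t BytesUsedSoFar) {
  if (BytesUsedSoFar + 1 >= MaxRecordSize)
    return std::string();                                   // no room left for a name
  std::size_t Budget = MaxRecordSize - BytesUsedSoFar - 1;  // keep room for the NUL
  if (Name.size() > Budget)
    Name.resize(Budget);
  return Name;
}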
Hal Finkel [Wed, 5 Oct 2016 22:25:33 +0000 (22:25 +0000)]
[llvm-opt-report] Distinguish inlined contexts when optimizations differ
How code is optimized sometimes, perhaps often, depends on the context into
which it was inlined. This change allows llvm-opt-report to track the
differences between the optimizations performed, or not, in different contexts,
and when these differ, display those differences.
For example, this code:
$ cat /tmp/q.cpp
void bar();
void foo(int n) {
for (int i = 0; i < n; ++i)
bar();
}
void quack() {
foo(4);
}
void quack2() {
foo(4);
}
will now produce this report:
< /home/hfinkel/src/llvm/test/tools/llvm-opt-report/Inputs/q.cpp
2 | void bar();
3 | void foo(int n) {
[[
> foo(int):
4 | for (int i = 0; i < n; ++i)
> quack(), quack2():
4 U4 | for (int i = 0; i < n; ++i)
]]
5 | bar();
6 | }
7 |
8 | void quack() {
9 I | foo(4);
10 | }
11 |
12 | void quack2() {
13 I | foo(4);
14 | }
15 |
Note that the tool has demangled the function names, and grouped the reports
associated with line 4. This shows that the loop on line 4 was unrolled by a
factor of 4 when inlined into the functions quack() and quack2(), but not in
the function foo(int) itself.
Hal Finkel [Wed, 5 Oct 2016 22:10:35 +0000 (22:10 +0000)]
Add an llvm-opt-report tool to generate basic source-annotated optimization summaries
LLVM now has the ability to record information from optimization remarks in a
machine-consumable YAML file for later analysis. This can be enabled in opt
(see r282539), and D25225 adds a Clang flag to do the same. This patch adds
llvm-opt-report, a tool to generate basic optimization "listing" files
(annotated sources with information about what optimizations were performed)
from one of these YAML inputs.
D19678 proposed to add this capability directly to Clang, but this more-general
YAML-based infrastructure was the direction we decided upon in that review
thread.
For this optimization report, I focused on making the output as succinct as
possible while providing information on inlining and loop transformations. The
goal here is that the source code should still be easily readable in the
report. My primary inspiration here is the reports generated by Cray's tools
(http://docs.cray.com/books/S-2496-4101/html-S-2496-4101/z1112823641oswald.html).
These reports are highly regarded within the HPC community. Intel's compiler,
for example, also has an optimization-report capability
(https://software.intel.com/sites/default/files/managed/55/b1/new-compiler-optimization-reports.pdf).
$ cat /tmp/v.c
void bar();
void foo() { bar(); }
void Test(int *res, int *c, int *d, int *p, int n) {
int i;
#pragma clang loop vectorize(assume_safety)
for (i = 0; i < 1600; i++) {
res[i] = (p[i] == 0) ? res[i] : res[i] + d[i];
}
for (i = 0; i < 16; i++) {
res[i] = (p[i] == 0) ? res[i] : res[i] + d[i];
}
foo();
foo(); bar(); foo();
}
D25225 adds -fsave-optimization-record (and
-fsave-optimization-record=filename), and this would be used as follows:
< /tmp/v.c
2 | void bar();
3 | void foo() { bar(); }
4 |
5 | void Test(int *res, int *c, int *d, int *p, int n) {
6 | int i;
7 |
8 | #pragma clang loop vectorize(assume_safety)
9 V4,2 | for (i = 0; i < 1600; i++) {
10 | res[i] = (p[i] == 0) ? res[i] : res[i] + d[i];
11 | }
12 |
13 U16 | for (i = 0; i < 16; i++) {
14 | res[i] = (p[i] == 0) ? res[i] : res[i] + d[i];
15 | }
16 |
17 I | foo();
18 |
19 | foo(); bar(); foo();
I | ^
I | ^
20 | }
Each source line gets a prefix giving the line number, and a few columns for
important optimizations: inlining, loop unrolling and loop vectorization. An
'I' is printed next to a line where a function was inlined, a 'U' next to an
unrolled loop, and 'V' next to a vectorized loop. These are printed on the
relevant code line when that seems unambiguous, or on subsequent lines when
multiple potential options exist (messages, both positive and negative, from
the same optimization with different column numbers are taken to indicate
potential ambiguity). When on subsequent lines, a '^' is output in the relevant
column.
Annotated source for all relevant input files is put into the listing file
(each starting with '<' and then the file name).
You can disable having the unrolling/vectorization factors appear by using the
-s flag.
David Callahan [Wed, 5 Oct 2016 21:36:16 +0000 (21:36 +0000)]
Modify df_iterator to support post-order actions
Summary: This changes the state used to maintain visited information for the depth-first iterator. We now assume a method "completed(...)" which is called after all children of a node have been visited. In all existing cases this method does nothing, so this patch has no functional changes. It will, however, allow a client to distinguish back edges from cross edges in a DFS tree.
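For illustration, a standalone sketch (not the actual llvm::df_iterator interface) of how a visited-set type with a completed() hook lets a client separate back edges from cross edges during a depth-first walk:
#include <set>
#include <vector>

struct Node { std::vector<Node *> Succs; };

// Hypothetical visited-state class: insert() is called when a node is first
// reached, completed() once all of its children have been visited. An edge to
// a node that is visited but not yet completed is a back edge; an edge to a
// completed node is a cross (or forward) edge.
struct VisitState {
  std::set<const Node *> Visited, Finished;
  bool insert(const Node *N) { return Visited.insert(N).second; }
  void completed(const Node *N) { Finished.insert(N); }
};

void depthFirstVisit(const Node *N, VisitState &S) {
  if (!S.insert(N))
    return;
  for (const Node *Succ : N->Succs)
    depthFirstVisit(Succ, S);
  S.completed(N);
}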
Lang Hames [Wed, 5 Oct 2016 21:20:00 +0000 (21:20 +0000)]
[Object] Fix a crash in Archive::child_iterator's default constructor.
To be default constructible, Archive::child_iterator needs to be able to
construct an Archive::Child with a null parent; however, Archive::Child's
constructor always dereferenced its Parent argument to compute the remaining
archive size. This commit fixes Archive::Child's constructor to only do the
size calculation when the parent is non-null.
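In spirit, the fix guards the size computation on the parent pointer. A simplified stand-in (the real Archive::Child constructor takes different arguments and computes the size from archive data):
#include <cstdint>

struct Archive { std::uint64_t Size = 0; };

// Simplified stand-in for Archive::Child: only compute the remaining archive
// size when a parent is actually present, so a null parent (as used by a
// default-constructed child_iterator) is no longer dereferenced.
struct Child {
  const Archive *Parent;
  std::uint64_t RemainingSize;

  explicit Child(const Archive *Parent)
      : Parent(Parent), RemainingSize(0) {
    if (Parent)                 // previously dereferenced unconditionally
      RemainingSize = Parent->Size;
  }
};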
Martin Storsjo [Wed, 5 Oct 2016 21:08:02 +0000 (21:08 +0000)]
[ARM] Use __rt_div functions for divrem on Windows
This avoids falling back to calling out to the GCC rem functions
(__moddi3, __umoddi3) when targeting Windows.
The __rt_div functions have flipped the two arguments compared
to the __aeabi_divmod functions. To match MSVC, we emit a
check for division by zero before actually calling the library
function (even if the library function itself also might do
the same check).
Calls to the __rt_div functions for a division are not yet always merged with
calls to the same function with the same operands for the corresponding
remainder. This is more wasteful than the previous div + mls sequence, but it
avoids calls to __moddi3.
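Conceptually, the emitted sequence behaves like the following sketch; rtSDiv64 and sdivrem64 are illustrative stand-ins, and the argument order simply follows the flipped convention described above:
#include <cstdlib>

// Hypothetical stand-in for the __rt_sdiv64 runtime helper; per the
// description above, the divisor comes first, the opposite of the
// __aeabi_ldivmod convention.
static long long rtSDiv64(long long Den, long long Num, long long *Rem) {
  *Rem = Num % Den;
  return Num / Den;
}

// Illustrative shape of what is emitted for a 64-bit sdiv/srem pair: an
// explicit division-by-zero check (to match MSVC) followed by the call,
// even though the helper itself may check again.
long long sdivrem64(long long Num, long long Den, long long *Rem) {
  if (Den == 0)
    std::abort();               // stand-in for the emitted zero check
  return rtSDiv64(Den, Num, Rem);
}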
Yunzhong Gao [Wed, 5 Oct 2016 20:26:29 +0000 (20:26 +0000)]
Improve the debug-info test created in r274263.
This patch is related to r274263 or Phabricator/D21818.
This patch aims to improve the test case added in the previous commit to verify
specifically that the stack protector pass is adding the debug line info as
intended. Before, the test only verified that the verifier pass does not crash.
The current approach is to generate the assembly output and then look for the
.loc directive.
Matthew Simpson [Wed, 5 Oct 2016 20:23:46 +0000 (20:23 +0000)]
[LV] Pass profitability analysis in vectorizer constructor (NFC)
The vectorizer already holds a pointer to one cost model artifact in a member
variable (i.e., MinBWs). As we add more, it will be easier to communicate these
artifacts to the vectorizer if we simply pass a pointer to the cost model
instead.
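A bare-bones sketch of the idea with illustrative names (not the actual LoopVectorize classes): hand the vectorizer one pointer to the cost model instead of copying individual artifacts like MinBWs:
#include <map>

// Illustrative stand-ins only.
struct VectorizationCostModel {
  std::map<const void *, unsigned> MinBWs; // minimum bit widths per value
  // ...further cost-model artifacts can be added here without changing the
  // vectorizer's constructor.
};

struct InnerLoopVectorizerSketch {
  const VectorizationCostModel *Cost;      // one pointer instead of copies
  explicit InnerLoopVectorizerSketch(const VectorizationCostModel *Cost)
      : Cost(Cost) {}
};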
Matthew Simpson [Wed, 5 Oct 2016 19:11:54 +0000 (19:11 +0000)]
[LV] Use getScalarizationOverhead in memory instruction costs (NFC)
This patch refactors the cost estimation of scalarized loads and stores to
reuse getScalarizationOverhead for the cost of the extractelement and
insertelement instructions we might create. The existing code accounted for
this cost, but it was functionally equivalent to the helper function.
Reid Kleckner [Wed, 5 Oct 2016 18:36:02 +0000 (18:36 +0000)]
Improve DEBUG_VALUE assembly comments for spilled bitpieces
Previously we would give up and print "[complex expression]" as soon as we saw
a bitpiece DWARF expression, even though bitpiece expressions are actually
handled outside the loop.
Matthew Simpson [Wed, 5 Oct 2016 18:30:36 +0000 (18:30 +0000)]
[LV] Add helper function for predicated block probability (NFC)
The cost model has to estimate the probability of executing predicated blocks.
However, we currently always assume predicated blocks have a 50% chance of
executing (this value is hardcoded in several places throughout the code).
Since we always use the same value, this patch adds a helper function for
getting this uniform probability. The function simplifies some comments and
makes our assumptions more clear. In the future, we may want to extend this
with actual block probability information if it's available.
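A sketch of such a helper under the 50% assumption stated above; the function names are illustrative and may not match the one added in LLVM:
// Hypothetical helper: the assumed probability that a predicated block
// executes. Centralizing the constant makes the 50% assumption explicit and
// easy to replace with real block-probability information later.
static inline double getPredBlockProbability() { return 0.5; }

// Example use in a cost computation: scale the block's scalar cost by the
// probability that it actually runs.
static inline double predicatedBlockCost(double BlockCost) {
  return BlockCost * getPredBlockProbability();
}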
Simon Dardis [Wed, 5 Oct 2016 18:26:19 +0000 (18:26 +0000)]
[mips][ias] fix li macro when values are negated with ~
The integrated assembler evaluates expressions such as ~0x80000000 to
0xffffffff7fffffff early in the parsing process. This patch adds compatibility
with gas so that li loads the expected value (0x7fffffff) in those cases. This
only happens when all of the upper 32 bits are set; existing checks are
maintained by not truncating the result down to 32 bits if any of the upper
bits are not set.
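The effect on the immediate can be sketched as follows (illustrative logic only, not the parser code itself):
#include <cstdint>

// If every one of the upper 32 bits is set, the value is a sign-extended
// 32-bit quantity and can safely be folded back to 32 bits so that 'li'
// loads the value gas would load (e.g. ~0x80000000 -> 0x7fffffff);
// otherwise the value is left alone.
std::uint64_t foldNegatedImmediate(std::uint64_t Imm) {
  if ((Imm >> 32) == 0xffffffffu)
    return Imm & 0xffffffffu;
  return Imm;
}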
Matthew Simpson [Wed, 5 Oct 2016 17:52:34 +0000 (17:52 +0000)]
[LV] Add isScalarWithPredication helper function (NFC)
This patch adds a single helper function for checking if an instruction will be
scalarized with predication. Such instructions include conditional stores and
instructions that may divide by zero. Existing checks have been updated to use
the new function.
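A simplified sketch of what such a predicate might check; the real helper lives in the loop vectorizer and also consults legality and block-predication information:
#include "llvm/IR/Instruction.h"
using namespace llvm;

// Sketch only: an instruction is scalarized with predication if it is a store
// that needs a guard, or if it could divide by zero when executed
// unconditionally.
static bool isScalarWithPredicationSketch(Instruction *I,
                                          bool BlockNeedsPredication) {
  switch (I->getOpcode()) {
  case Instruction::Store:
    return BlockNeedsPredication;   // conditional store
  case Instruction::UDiv:
  case Instruction::SDiv:
  case Instruction::URem:
  case Instruction::SRem:
    return BlockNeedsPredication;   // may trap on division by zero
  default:
    return false;
  }
}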
[DAG] Teach computeKnownBits and ComputeNumSignBits in SelectionDAG to look through EXTRACT_VECTOR_ELT.
Summary: Both computeKnownBits and ComputeNumSignBits can now do a simple
look-through of EXTRACT_VECTOR_ELT. It will compute the result based
on the known bits (or known sign bits) for the vector that the element
is extracted from.
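Conceptually (leaving out all the SelectionDAG plumbing), the look-through amounts to something like this sketch, where only bits known in every lane of the source vector are known for the extracted element:
#include <cstdint>
#include <vector>

struct KnownBitsSketch { std::uint64_t Zero, One; };

// Conceptual sketch, not the SelectionDAG code: bits known to be zero or one
// in every lane of the source vector are also known for whichever element the
// EXTRACT_VECTOR_ELT picks out.
KnownBitsSketch knownBitsOfExtract(const std::vector<KnownBitsSketch> &Lanes) {
  KnownBitsSketch K = {0, 0};       // nothing known
  if (Lanes.empty())
    return K;
  K.Zero = K.One = ~0ull;
  for (const KnownBitsSketch &L : Lanes) {
    K.Zero &= L.Zero;
    K.One &= L.One;
  }
  return K;
}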
James Molloy [Wed, 5 Oct 2016 14:52:13 +0000 (14:52 +0000)]
[Thumb] Don't try and emit LDRH/LDRB from the constant pool
This is not a valid encoding - these instructions cannot do PC-relative addressing.
The underlying problem is a whitelist in ARMISelDAGToDAG that unwraps ARMISD::Wrappers during addressing-mode selection; it didn't anticipate that a TargetConstantPool operand was actually possible, so it didn't handle that case.
This allows users of the library to get a range for iterating through all the
subcommands that are registered with the global parser, so they can define
subcommands in libraries that self-register and have dispatch done at a
different stage (like main). It makes it possible to write code like the
following:
for (auto *S : cl::getRegisteredSubcommands()) {
if (*S) {
// Dispatch on S->getName().
}
}
This change also contains tests that show this usage pattern.
Issue with loop info and block removal revealed by Polly.
I already have a fix for this issue in another patch; I'll re-roll this
together with that fix and a test case.
[libFuzzer] Clear corpus elements if they are evicted (i.e. smaller elements with proper coverage are found). Make sure we never try to mutate an empty element. Print the corpus size in bytes in the status lines.
Kyle Butt [Tue, 4 Oct 2016 23:54:18 +0000 (23:54 +0000)]
Codegen: Tail-duplicate during placement.
The tail duplication pass uses an assumed layout when making duplication
decisions. This is fine, but passes up duplication opportunities that
may arise when blocks are outlined. Because we want the updated CFG to
affect subsequent placement decisions, this change must occur during
placement.
In order to achieve this goal, TailDuplicationPass is split into a
utility class, TailDuplicator, and the pass itself. The pass delegates
nearly everything to the TailDuplicator object, except for looping over
the blocks in a function. This allows the same code to be used for tail
duplication in both places.
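The shape of that split can be sketched roughly as follows; the names and interfaces here are simplified stand-ins, not the real TailDuplicator API:
#include <vector>

// Illustrative stand-ins: the utility class owns the duplication logic, and
// both the standalone pass and block placement can drive it.
struct BlockSketch { std::vector<BlockSketch *> Preds, Succs; };
struct FunctionSketch { std::vector<BlockSketch *> Blocks; };

class TailDuplicatorSketch {
public:
  // Duplicate Block into its predecessors when profitable; return whether the
  // CFG changed so the caller (e.g. block placement) can react.
  bool tailDuplicateAndUpdate(BlockSketch *Block) {
    return Block->Preds.size() > 1 && Block->Succs.empty(); // placeholder test
  }
};

class TailDuplicatePassSketch {
  TailDuplicatorSketch Duplicator;
public:
  // The pass itself just loops over the blocks and delegates the real work.
  bool run(FunctionSketch &Fn) {
    bool Changed = false;
    for (BlockSketch *Block : Fn.Blocks)
      Changed |= Duplicator.tailDuplicateAndUpdate(Block);
    return Changed;
  }
};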
This change, in concert with outlining optional branches, allows
triangle-shaped code to perform much better, especially when the
taken/untaken branches are correlated, as it creates a second spine when the
tests are small enough.
Issue from previous rollback fixed, and a new test was added for that
case as well.
Alina Sbirlea [Tue, 4 Oct 2016 22:39:53 +0000 (22:39 +0000)]
[cpu-detection] Copy simplified version of get_cpuid_max to remove the dependency on clang's implementation
Summary:
Attempting to fix PR30384.
Take the same approach as in compiler_rt and add a simplified version of __get_cpuid_max.
Including cpuid.h is no longer needed.
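For reference, a simplified __get_cpuid_max along these lines might look like the following sketch (x86-64 only, GNU-style inline assembly; the copy actually added to the tree may differ):
// Simplified sketch: return the highest supported CPUID leaf for the given
// range and, if requested, the vendor signature from EBX. Real code also has
// to handle 32-bit targets, where CPUID availability must be probed first.
static unsigned getCpuidMax(unsigned Leaf, unsigned *Signature) {
  unsigned EAX, EBX, ECX, EDX;
  __asm__("cpuid"
          : "=a"(EAX), "=b"(EBX), "=c"(ECX), "=d"(EDX)
          : "a"(Leaf));
  if (Signature)
    *Signature = EBX;
  return EAX;
}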
Sanjay Patel [Tue, 4 Oct 2016 20:46:43 +0000 (20:46 +0000)]
[Target] move reciprocal estimate settings from TargetOptions to TargetLowering
The motivation for the change is that we can't have pseudo-global settings for
codegen living in TargetOptions because that doesn't work with LTO.
Ideally, these reciprocal attributes will be moved to the instruction-level via
FMF, metadata, or something else. But making them function attributes is at least
an improvement over the current state.
The ingredients of this patch are:
- Remove the reciprocal estimate command-line debug option.
- Add TargetRecip to TargetLowering.
- Remove TargetRecip from TargetOptions.
- Clean up the TargetRecip implementation to work with this new scheme.
- Set the default reciprocal settings in TargetLoweringBase (everything is off).
- Update the PowerPC defaults, users, and tests.
- Update the x86 defaults, users, and tests.
Note that if this patch needs to be reverted, the related clang patch checked in
at r283251 should be reverted too.
Kevin Enderby [Tue, 4 Oct 2016 20:37:43 +0000 (20:37 +0000)]
Next set of additional error checks for invalid Mach-O files, covering the
load commands that use the MachO::encryption_info_command and
MachO::encryption_info_command_64 types, which are not used in llvm libObject
code but are used in llvm tool code.
This includes just the LC_ENCRYPTION_INFO and
LC_ENCRYPTION_INFO_64 load commands.
Michal Gorny [Tue, 4 Oct 2016 20:25:37 +0000 (20:25 +0000)]
[cmake] Make LIT_COMMAND configurable and improve fallback support
Make LIT_COMMAND configurable, use source tree only when actually
available and extend the default search to other common executable names
'lit.py' and 'lit', in order to increase uniformity between all LLVM
projects and support using installed lit.
Changing the conditional used to determine whether in-tree or external
lit is being used covers the case when LLVM_MAIN_SRC_DIR is defined but
does not exist (anymore). In this case, the function falls back to
looking for installed lit rather than attempting to use a non-existing
path. The same conditional is used in clang already.
Making LIT_COMMAND a cache variable in case the source tree variant is
used serves two purposes. Firstly, it increases uniformity between
the two branches since find_program() implicitly makes LIT_COMMAND
a cache variable. Secondly, it allows overriding the lit executable used
to run the tests when the LLVM source tree is provided. Gentoo is
planning to use this to use installed (and byte-compiled) lit instead of
re-compiling it in every LLVM project.
Extending default search is meant to increase uniformity between
different LLVM projects. The 'lit.py' name is already used by a few of
them, and 'lit' is the name used by utils/lit/setup.py when installing.
AArch64InstrInfo::shouldScheduleAdjacent() determines whether two
instructions can benefit from macroop fusion on Apple CPUs. The list
turned out to be incomplete:
- the "rr" variants of the instructions were missing
- even the "rs" variants can have shift value == 0 and behave like the
"rr" variants
This also splits the MacropFusion target feature into
ArithmeticBccFusion and ArithmeticCbzFusion.
Hal Finkel [Tue, 4 Oct 2016 18:13:45 +0000 (18:13 +0000)]
Don't filter diagnostics written as YAML to the output file
The purpose of the YAML diagnostic output file is to collect information on
optimizations performed, or not performed, for later processing by tools that
help users (and compiler developers) understand how code was optimized. As
such, the diagnostics that appear in the file should not be coupled to what a
user might want to see summarized for them as the compiler runs, and in fact,
because the user likely does not know what optimization diagnostics their tools
might want to use, the user cannot provide a useful filter regardless.
Therefore, we shouldn't filter the diagnostics going to the output file.