Matthias Braun [Fri, 5 May 2017 21:09:30 +0000 (21:09 +0000)]
MIParser/MIRPrinter: Compute block successors if not explicitly specified
- MIParser: If the successor list is not specified, successors will be
added based on basic block operands in the block and possible
fallthrough.
- MIRPrinter: Add a new `simplify-mir` option; with that option set,
skip printing of block successor lists in cases where the parser is
guaranteed to reconstruct them. This means we still print the list if
some successor cannot be determined (as happens, for example, with
jump tables), if the successor order changes, or if branch
probabilities are unequal.
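A minimal standalone sketch of the reconstruction idea (hypothetical types and names, not the actual MIParser code): gather successors from explicit basic-block operands in the terminators, then append the layout successor when the block may still fall through.
  #include <vector>
  struct Block;
  struct Terminator {
    std::vector<Block *> Targets; // explicit basic-block operands
    bool IsBarrier = false;       // unconditional transfer: nothing falls past it
  };
  struct Block {
    std::vector<Terminator> Terminators;
    std::vector<Block *> Successors;
  };
  // Rebuild the successor list; LayoutNext is the next block in layout
  // order (nullptr for the last block).
  void guessSuccessors(Block &B, Block *LayoutNext) {
    bool MayFallThrough = true;
    for (const Terminator &T : B.Terminators) {
      for (Block *Target : T.Targets)
        B.Successors.push_back(Target);
      if (T.IsBarrier)
        MayFallThrough = false;
    }
    if (MayFallThrough && LayoutNext)
      B.Successors.push_back(LayoutNext);
  }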
Reid Kleckner [Fri, 5 May 2017 18:30:34 +0000 (18:30 +0000)]
Simplify dbg.value handling in SDISel with early returns
No functional change other than improving dbgs logging accuracy on
constant dbg values. Previously we would add things like "i32 42" as
debug values, and then log that we were dropping the debug info, which
is silly.
Delete some dead code that was checking for static allocas. This
remained after r207165, but served no purpose. Currently, static alloca
dbg.values are always sent through the DanglingDebugInfoMap, and are
usually made valid the first time the alloca is used.
Craig Topper [Fri, 5 May 2017 17:36:09 +0000 (17:36 +0000)]
[KnownBits] Add wrapper methods for setting and clearing all bits in the underlying APInts in KnownBits.
This adds routines for resetting KnownBits to unknown, and for making the value all zeros or all ones. It also adds methods for querying whether the value is zero, all ones, or unknown.
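A standalone 64-bit stand-in for the new interface (illustrative only; the real class wraps a pair of APInts):
  #include <cstdint>
  struct KnownBits64 {
    uint64_t Zero = 0; // bits known to be 0
    uint64_t One = 0;  // bits known to be 1
    void resetAll() { Zero = 0; One = 0; }       // everything unknown
    void setAllZero() { Zero = ~0ULL; One = 0; } // value known to be 0
    void setAllOnes() { Zero = 0; One = ~0ULL; } // value known to be all ones
    bool isZero() const { return Zero == ~0ULL; }
    bool isAllOnes() const { return One == ~0ULL; }
    bool isUnknown() const { return Zero == 0 && One == 0; }
  };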
Adrian McCarthy [Fri, 5 May 2017 17:14:00 +0000 (17:14 +0000)]
Allow operator-> to work from a FixedStreamArrayIterator.
This is similar to my recent fix for VarStreamArrayIterator, but the cause
(and thus the fix) is subtly different. The FixedStreamArrayIterator
iterates over a const Array, so the iterator's value type must be const.
Hoisting common code can cause registers that are live-in to the successor
blocks to no longer be live-in. The live-in information needs to be
updated to reflect this; otherwise incorrect code can be generated
later on.
John Brawn [Fri, 5 May 2017 11:31:25 +0000 (11:31 +0000)]
[ARM] Add support for ORR and ORN instruction substitutions
Recently support was added for substituting one instruction for another by
negating or inverting the immediate, but ORR and ORN were missed, so this
patch adds them.
This one is slightly different from the others in that ORN only exists in
Thumb, so we only do the substitution in Thumb.
George Rimar [Fri, 5 May 2017 10:52:39 +0000 (10:52 +0000)]
[llvm-dwarfdump] - Print an error message if section decompression failed.
llvm-dwarfdump currently prints no message if decompression fails
for some reason. I noticed that while working on one of the LLD patches,
where LLD produced broken output. It was a bit confusing to see
no output for the dumped section and no error message at all.
Zachary Turner [Thu, 4 May 2017 23:53:54 +0000 (23:53 +0000)]
[pdb] Don't verify TPI hash values up front.
Verifying the hash values as we currently do
results in iterating over every type record before the user
even tries to access the first one, and the API user
has no control over, or ability to hook into, this
process.
As a result, when the user wants to iterate over types
to print them or index them, this results in a second
iteration over the same list of types. When there are
upwards of 1,000,000 type records, this is obviously
quite undesirable.
This patch moves the verification out of TpiStream,
and llvm-pdbdump hooks a hash verification visitor
into the normal dumping process. So we still verify
the hash records, but we can do so without requiring
a second iteration over the type stream.
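The shape of the change, as a standalone sketch with hypothetical names: verification becomes just another visitor run during the one traversal the dumper already performs.
  #include <cstdint>
  #include <cstdio>
  #include <functional>
  #include <vector>
  struct TypeRecord {
    uint32_t ExpectedHash;
    uint32_t Payload;
  };
  uint32_t computeHash(const TypeRecord &R) { return R.Payload * 2654435761u; }
  // Every visitor sees each record exactly once, in a single pass.
  void visitTypes(const std::vector<TypeRecord> &Stream,
                  const std::vector<std::function<void(const TypeRecord &)>> &Visitors) {
    for (const TypeRecord &R : Stream)
      for (const auto &V : Visitors)
        V(R);
  }
  int main() {
    std::vector<TypeRecord> Stream{{computeHash({0, 7}), 7}};
    auto Verify = [](const TypeRecord &R) {
      if (computeHash(R) != R.ExpectedHash)
        std::fprintf(stderr, "hash mismatch!\n");
    };
    auto Dump = [](const TypeRecord &R) {
      std::printf("type payload: %u\n", R.Payload);
    };
    visitTypes(Stream, {Verify, Dump}); // verify and dump in one iteration
  }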
Zachary Turner [Thu, 4 May 2017 23:53:29 +0000 (23:53 +0000)]
[PDB] Don't build the entire source file list up front.
I tried to run llvm-pdbdump on a very large (~1.5GB) PDB to
try and identify show-stopping performance problems. This
patch addresses the first such problem.
When loading the DBI stream, before anyone has even tried to
access a single record, we build an in memory map of every
source file for every module. In the particular PDB I was
using, this was over 85 million files. Specifically, the
complexity is O(m*n) where m is the number of modules and
n is the average number of source files (including headers)
per module.
The whole reason for doing this was so that we could have
constant time access to any module and any of its source
file lists. However, we can still get O(1) access to the
source file list for a given module with a simple O(m)
precomputation, and access to the list of modules is
already O(1) anyway.
So this patch reduces the O(m*n) up-front precomputation
to an O(m) one, where n is ~6,500 and n*m is about 85 million
in my pathological test case.
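The trick, in a standalone sketch: keep the flattened file-name table as-is, precompute only per-module start offsets with one O(m) prefix sum, and slice on demand.
  #include <cstddef>
  #include <string>
  #include <utility>
  #include <vector>
  struct SourceFileIndex {
    std::vector<std::string> AllFileNames; // flattened names across all modules
    std::vector<size_t> ModuleStart;       // ModuleStart[i] = first file of module i
    // O(m) precomputation: a prefix sum over per-module file counts.
    SourceFileIndex(std::vector<std::string> Names,
                    const std::vector<size_t> &FileCounts)
        : AllFileNames(std::move(Names)) {
      ModuleStart.reserve(FileCounts.size() + 1);
      size_t Offset = 0;
      ModuleStart.push_back(0);
      for (size_t Count : FileCounts)
        ModuleStart.push_back(Offset += Count);
    }
    // O(1) access to module I's file list as a [begin, end) range.
    std::pair<const std::string *, const std::string *>
    filesForModule(size_t I) const {
      return {AllFileNames.data() + ModuleStart[I],
              AllFileNames.data() + ModuleStart[I + 1]};
    }
  };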
Zachary Turner [Thu, 4 May 2017 23:53:01 +0000 (23:53 +0000)]
[llvm-pdbdump] Only build the TypeDatabase if necessary.
Building the type database is expensive, and can take multiple
minutes for large PDBs. But we only need it in certain cases
depending on what command line options are specified. So only
build it when we know we're about to need it.
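The pattern is ordinary lazy initialization; a sketch with made-up names:
  #include <optional>
  struct TypeDatabase {
    TypeDatabase() { /* expensive: visits every type record */ }
  };
  class DumpContext {
    std::optional<TypeDatabase> TypeDB; // absent until first needed
  public:
    TypeDatabase &getTypeDatabase() {
      if (!TypeDB)
        TypeDB.emplace(); // built only when a command actually needs it
      return *TypeDB;
    }
  };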
Craig Topper [Thu, 4 May 2017 21:45:49 +0000 (21:45 +0000)]
[JumpThreading] When processing compares, explicitly check that the result type is not a vector rather than check for it being an integer.
Compares always return a scalar integer or a vector of integers. isIntegerTy returns false for vectors, but that's not completely obvious. So using isVectorTy is less confusing.
Sanjay Patel [Thu, 4 May 2017 19:51:34 +0000 (19:51 +0000)]
[InstSimplify] add folds for or-of-casted-icmps
The sibling folds for 'and' with casts were added with https://reviews.llvm.org/rL273200.
This is a preliminary step for adding the 'or' variants for the folds added with https://reviews.llvm.org/rL301260.
The reason for the strange form with a constant LHS in the first test is that there's another missing fold in that
case for the inverted predicate. That should be fixed when we add the ConstantRange functionality for 'or-of-icmps'
that already exists for 'and-of-icmps'.
I'm hoping to share more code for the and/or cases, so we won't have these differences. This will allow us to remove
code from InstCombine. It's also possible that we can remove some code here in InstSimplify. I think we have some
duplicated folds because patterns are not matched in a general way.
Reid Kleckner [Thu, 4 May 2017 18:19:52 +0000 (18:19 +0000)]
[ms-inline-asm] Use the frontend size only for ambiguous instructions
This avoids problems on code like this:
char buf[16];
__asm {
movups xmm0, [buf]
mov [buf], eax
}
The frontend size in this case (1) is wrong, and the register makes the
instruction matching unambiguous. There are also enough bytes available
that we shouldn't complain to the user that they are potentially using
an incorrectly sized instruction to access the variable.
Craig Topper [Thu, 4 May 2017 17:00:41 +0000 (17:00 +0000)]
[APInt] Reduce number of allocations involved in multiplying. Reduce worst case multiply size
Currently multiply is implemented in operator*=. Operator* makes a copy and uses operator*= to modify the copy.
Operator*= itself allocates a temporary buffer to hold the multiply result as it computes it. Then copies it to the buffer in *this.
Operator*= attempts to bound the size of the result based on the number of active bits in its inputs. It also has a couple of special cases to handle 0 inputs without any memory allocations or multiply operations. The best case is that it calculates a single word regardless of input bit width. The worst case is that it calculates a result twice the input width and drops the upper bits.
Since operator* uses operator*= it incurs two allocations, one for a copy of *this and one for the temporary buffer. Neither of these allocations is kept after the operation is done.
The main usage in the backend appears to be ConstantRange::multiply which uses operator* rather than operator*=.
This patch moves the multiply operation to operator* and implements operator*= using it. This avoids the copy in operator*. operator* now always allocates a result buffer of the same width as its inputs. This buffer is used directly as the buffer for the returned APInt. Finally, we reuse tcMultiply to implement the multiply operation. This function is capable of not calculating additional upper words that will be discarded.
This change does lose the special optimizations for inputs using fewer words than their size implies. But it also removes the getActiveBits calls from all multiplies. If we think those optimizations are important, we could look at providing additional bounds to tcMultiply to limit the computations.
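The refactoring pattern in miniature, on a toy fixed-width type rather than APInt: the binary operator owns the single result allocation, and the compound form is defined in terms of it.
  #include <cstddef>
  #include <cstdint>
  #include <vector>
  struct BigNum {
    std::vector<uint32_t> Words; // little-endian words, fixed width
    friend BigNum operator*(const BigNum &L, const BigNum &R) {
      BigNum Result;
      Result.Words.assign(L.Words.size(), 0); // the one result buffer
      for (size_t I = 0; I < L.Words.size(); ++I) {
        uint64_t Carry = 0;
        for (size_t J = 0; I + J < L.Words.size(); ++J) { // truncate at input width
          uint64_t Cur = Result.Words[I + J] + Carry +
                         (uint64_t)L.Words[I] *
                             (J < R.Words.size() ? R.Words[J] : 0);
          Result.Words[I + J] = (uint32_t)Cur;
          Carry = Cur >> 32;
        }
        // carries past the input width are dropped, as tcMultiply can do
        // when told not to compute the discarded upper words
      }
      return Result;
    }
    BigNum &operator*=(const BigNum &R) {
      *this = *this * R; // reuse the single-allocation operator*
      return *this;
    }
  };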
Jonas Paulsson [Thu, 4 May 2017 13:33:30 +0000 (13:33 +0000)]
[SystemZ] Make copyPhysReg() add impl-use operands of super reg.
When a 128 bit COPY is lowered into two instructions, an impl-use operand of
the super-reg should be added to each new instruction in case one of the
sub-regs is undefined.
In this patch, I introduce a new altmacro string delimiter.
This review is the second in a series of four reviews
(one for each altmacro feature: LOCAL, string delimiter, the '!' string escape sign, and absolute expression as a string, '%').
In alternate macro mode, you can delimit strings with matching angle brackets <..>
when they are used as macro call arguments.
As described at https://sourceware.org/binutils/docs-2.27/as/Altmacro.html:
"<string>
You can delimit strings with matching angle brackets."
Assumptions:
1. If an argument begins with '<' and ends with '>', the argument is considered a string.
2. Apart from the new string mark '<..>', regular macro behavior is expected.
3. Altmacro mode must not affect the regular less-than/greater-than behavior.
4. A comma inside angle brackets is considered a character, not a separator.
Sam Parker [Thu, 4 May 2017 07:31:28 +0000 (07:31 +0000)]
[ARM] ACLE Chapter 9 intrinsics
Added the integer data processing intrinsics from ACLE v2.1 Chapter 9,
but I have left out the saturation_occurred intrinsics for now. For
the instructions that read and write the GE bits, a chain is included,
and the only instruction that reads these flags (sel) is only
selectable via the implemented intrinsic.
Oren Ben Simhon [Thu, 4 May 2017 07:22:49 +0000 (07:22 +0000)]
[X86] Disabling PLT in Regcall CC Functions
According to the psABI, the PLT stub clobbers XMM8-XMM15.
In the Regcall calling convention those registers are used for passing parameters.
Thus we need to prevent lazy binding in Regcall.
This makes it simpler for the runtime to consistently handle the entries
in the function sled index on both 32- and 64-bit platforms where the
XRay runtime works.
Summary:
This change adds a new section to the xray-instrumented binary that
stores an index into ranges of the instrumentation map, where sleds
associated with the same function can be accessed as an array. At
runtime, we can get access to this index by function ID offset, allowing
for selective patching and unpatching by function ID.
Each entry in this new section (xray_fn_idx) will include two pointers
indicating the start and one past the end of the sleds associated with
the same function. These entries will be 16 bytes long on x86 and
aarch64. On arm, we align to 16 bytes anyway so the runtime has to take
that into consideration.
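Schematically, an entry is just a pair of sled pointers; a sketch of the layout described above (not the exact runtime definition):
  // One xray_fn_idx entry: the sled range for a single function ID.
  struct XRayFnIdxEntry {
    const void *Begin; // first sled for this function
    const void *End;   // one past the last sled for this function
  };
  static_assert(sizeof(XRayFnIdxEntry) == 16,
                "two pointers: 16 bytes on LP64 targets like x86-64/aarch64");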
__{start,stop}_xray_fn_idx will be the symbols that the runtime will
look for when we implement the selective patching/unpatching by function
ID APIs. Because XRay now synthesizes the function IDs in a monotonically
increasing manner at runtime, implementations (and users) can use
this table to look up the sleds associated with a specific function.
This is useful in implementations that want to do things like:
- Implement coverage mode for functions by patching everything
pre-main, then as functions are encountered, the installed handler
can unpatch the function that's been encountered after recording
that it's been called.
- Do "learning mode", so that the implementation can figure out some
statistical information about function calls by function id for a
time being, and then determine which functions are worth
uninstrumenting at runtime.
- Do "selective instrumentation" where an implementation can
specifically instrument only certain function id's at runtime
(either based on some external data, or through some other
heuristics) instead of patching all the instrumented functions at
runtime.
IR: Use pointers instead of GUIDs to represent edges in the module summary. NFCI.
When profiling a no-op incremental link of Chromium I found that the functions
computeImportForFunction and computeDeadSymbols were consuming roughly 10% of
the profile. The goal of this change is to improve the performance of those
functions by changing the map lookups that they were previously doing into
pointer dereferences.
This is achieved by changing the ValueInfo data structure to be a pointer to
an element of the global value map owned by ModuleSummaryIndex, and changing
reference lists in the GlobalValueSummary to hold ValueInfos instead of GUIDs.
This means that a ValueInfo will take a client directly to the summary list
for a given GUID.
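In miniature (illustrative types, not the actual ones): an edge used to be a GUID that forced a map lookup per traversal; now it is a pointer to the map entry itself, which std::map keeps stable.
  #include <cstdint>
  #include <map>
  #include <vector>
  using GUID = uint64_t;
  struct GlobalValueSummary; // per-value summary data, opaque here
  using GlobalValueMap = std::map<GUID, std::vector<GlobalValueSummary *>>;
  struct ValueInfo {
    const GlobalValueMap::value_type *Entry = nullptr; // pointer into the map
    GUID getGUID() const { return Entry->first; }
    const std::vector<GlobalValueSummary *> &getSummaryList() const {
      return Entry->second; // a dereference, not a hash/tree lookup
    }
  };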
Summary:
This is an implementation of the loop detection logic that XRay needs to
determine whether a function might take time at runtime. Without this
heuristic, XRay will tend not to instrument short functions that contain
loops whose runtime might depend on inputs or external values.
While this implementation doesn't do any further analysis than just
figuring out whether there is a loop in the MachineFunction being
code-gen'ed, we're paving the way for being able to perform more
sophisticated analysis of the function in the future (for example to
determine whether the trip count for the loop might be constant, and
make a decision on that instead). This enables us to cover more
functions with the default heuristics, and potentially identify ones
that have variable runtime latency just by looking for the presence of
loops.
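Reduced to its core, the question the heuristic answers is "does the CFG contain a cycle?"; a standalone back-edge check over block indices (not the actual MachineFunction-based code):
  #include <vector>
  enum class Color { White, Gray, Black };
  static bool hasBackEdge(const std::vector<std::vector<int>> &Succs, int BB,
                          std::vector<Color> &State) {
    State[BB] = Color::Gray; // on the current DFS path
    for (int S : Succs[BB]) {
      if (State[S] == Color::Gray)
        return true; // back edge: the function has a loop
      if (State[S] == Color::White && hasBackEdge(Succs, S, State))
        return true;
    }
    State[BB] = Color::Black;
    return false;
  }
  bool functionHasLoop(const std::vector<std::vector<int>> &Succs) {
    std::vector<Color> State(Succs.size(), Color::White);
    for (int BB = 0; BB < (int)Succs.size(); ++BB)
      if (State[BB] == Color::White && hasBackEdge(Succs, BB, State))
        return true;
    return false;
  }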
[SCEV] createAddRecFromPHI: Optimize for the most common case.
Summary:
The existing implementation creates a symbolic SCEV expression every
time we analyze a phi node, and then has to remove it when the analysis
is finished. This is very expensive, and in most cases it's also
unnecessary. According to the data I collected, ~60-70% of analyzed phi
nodes (measured on SPEC) have the following form:
PN = phi(Start, OP(Self, Constant))
Handling such cases separately significantly speeds this up.
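For instance, the induction variable of a simple strided loop has exactly this shape; in the IR for the function below, I is phi(0, add(Self, 2)):
  // I is PN = phi(Start = 0, OP(Self, Constant = 2)) in the loop header.
  int sumEveryOther(const int *A, int N) {
    int S = 0;
    for (int I = 0; I < N; I += 2)
      S += A[I];
    return S;
  }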
Craig Topper [Wed, 3 May 2017 23:12:29 +0000 (23:12 +0000)]
[KnownBits] Add methods for determining if KnownBits is a constant value
This patch adds isConstant and getConstant for determining whether KnownBits represents a constant value and for retrieving that value. Use them to simplify code.
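The check itself is tiny; with 64-bit stand-ins for the two APInts (illustrative, not the real implementation):
  #include <cstdint>
  // Zero/One as in KnownBits: bits known to be 0 and bits known to be 1.
  bool isConstant(uint64_t Zero, uint64_t One) {
    return (Zero | One) == ~0ULL; // constant iff no bit is still unknown
  }
  // Valid only when isConstant(): the known-one bits are the value itself.
  uint64_t getConstant(uint64_t One) { return One; }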
Craig Topper [Wed, 3 May 2017 22:25:19 +0000 (22:25 +0000)]
[ValueTracking] Remove handling for BitWidth being 0 in ComputeSignBit and isKnownNonZero.
I don't believe it's possible to have a zero BitWidth here since DataLayout became required. The APInt constructor inside of the KnownBits object will assert if this ever happens.
Sanjay Patel [Wed, 3 May 2017 21:55:34 +0000 (21:55 +0000)]
[TargetLowering] use isSubsetOf in SimplifyDemandedBits; NFCI
This is the DAG equivalent of https://reviews.llvm.org/D32255 ,
which will hopefully be committed again. The functionality
(preferring a 'not' op) is already here in the DAG, so this is
just intended to be a clean-up and performance improvement.
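APInt::isSubsetOf reports whether every set bit of one value is also set in another; with 64-bit stand-ins the check is:
  #include <cstdint>
  // A is a subset of B iff A sets no bit outside B.
  bool isSubsetOf(uint64_t A, uint64_t B) {
    return (A & ~B) == 0; // equivalently, (A & B) == A
  }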
DebugInfo: elide type index entries for synthetic types
Compiler emitted synthetic types may not have an associated DIFile
(translation unit). In such a case, when generating CodeView debug type
information, we would attempt to compute an absolute filepath which
would result in a segfault due to a NULL DIFile*. If there is no source
file associated with the type, elide the type index entry for the type
and record the type information. This actually results in higher
fidelity debug information than clang/C2 as of this writing.