David Bozier [Wed, 15 Feb 2017 12:58:41 +0000 (12:58 +0000)]
Fix incorrect formatting of DataRefImpl members in operator<< function
Changed the format specifiers to use the format macro constant for the pointer type.
Moved the width part of the format specifier to the correct place when formatting members a and b.
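For illustration, a minimal sketch of the specifier layout (the macro and the widths here are assumptions, not necessarily the exact ones used in the patch):
```
#include <cinttypes>
#include <cstdio>

// The width must sit between '%' and the conversion supplied by the
// macro: "%16" PRIxPTR, not "%" PRIxPTR "16".
void dumpDataRef(uintptr_t P, uint32_t A, uint32_t B) {
  std::printf("(%16" PRIxPTR " (0x%08" PRIx32 ", 0x%08" PRIx32 "))\n",
              P, A, B);
}
```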
Craig Topper [Wed, 15 Feb 2017 06:58:47 +0000 (06:58 +0000)]
[X86] Don't create VBROADCAST nodes with 256-bit or 512-bit input types
Summary:
We don't seem to have great rules on what a valid VBROADCAST node looks like. And as a consequence we end up with a lot of patterns to try to catch everything. We have patterns with scalar inputs, 128-bit vector inputs, 256-bit vector inputs, and 512-bit vector inputs.
As you can see from the things improved here, we are currently missing patterns for 128-bit loads being extended to 256 bits before the vbroadcast.
I'd like to propose that VBROADCAST should always take a 128-bit vector type as input. As a first step towards that, this patch adds an EXTRACT_SUBVECTOR in front of VBROADCAST when the input is 256 or 512 bits wide. In the future I would like to add scalar_to_vector around all the scalar operations. And maybe we should consider adding a VBROADCAST+load node to avoid separating loads from the broadcast operation when the load itself isn't foldable.
This requires an additional change in target shuffle combining to look for the EXTRACT_SUBVECTOR and look through it to find the original operand. I'm sure this change isn't perfect, but it was enough to fix a few test failures that were otherwise being caused.
Another interesting thing I noticed is that the changes in masked_gather_scatter.ll show cases where we don't remove a useless insert into element 1 before broadcasting element 0.
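A hedged sketch of the legalization step described above (DAG, DL, VT, and the node N are assumed from the surrounding lowering code; this is not the verbatim patch):
```
SDValue Src = N->getOperand(0);
EVT SrcVT = Src.getValueType();
if (SrcVT.isVector() && SrcVT.getSizeInBits() > 128) {
  // Narrow the input to its low 128 bits before broadcasting.
  EVT NarrowVT = EVT::getVectorVT(*DAG.getContext(),
                                  SrcVT.getVectorElementType(),
                                  128 / SrcVT.getScalarSizeInBits());
  Src = DAG.getNode(ISD::EXTRACT_SUBVECTOR, DL, NarrowVT, Src,
                    DAG.getIntPtrConstant(0, DL));
}
SDValue Broadcast = DAG.getNode(X86ISD::VBROADCAST, DL, VT, Src);
```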
Craig Topper [Wed, 15 Feb 2017 05:57:16 +0000 (05:57 +0000)]
[SelectionDAGBuilder] Simplify creation of shufflevector DAG nodes where inputs are larger than the mask
Summary:
The current code loops over all elements to calculate a used range. Then a second short loop looks at the ranges, determines whether they can be used in an extract, and creates a properly aligned start index for the extract.
This range finding is unnecessary: we can instead calculate a properly aligned extract start index for each input during the first loop. If we don't find the same start index for every index, we can't use an extract.
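A small standalone sketch of the per-element computation this describes (names are illustrative):
```
// The only extract that can cover mask element Idx starts at Idx
// rounded down to a multiple of the mask width. Every element drawn
// from an input must yield the same start index, otherwise no single
// extract can feed the shuffle.
int extractStartIndex(int Idx, int MaskNumElts) {
  return Idx - (Idx % MaskNumElts);
}
```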
Lang Hames [Wed, 15 Feb 2017 05:39:35 +0000 (05:39 +0000)]
[Orc][RPC] Add an AsyncHandlerTraits specialization for non-value-type response
handler args.
The specialization just inherits from the std::decay'd response handler type.
This allows member functions (via MemberFunctionWrapper) to be used as async
handlers.
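The shape of such a specialization, as a hedged sketch (the trait's actual template signature may differ):
```
// Non-value-type (e.g. reference) response-handler arguments defer to
// the traits of the std::decay'd handler type.
template <typename ResponseHandlerT, typename... ArgTs>
class AsyncHandlerTraits<Error(ResponseHandlerT, ArgTs...)>
    : public AsyncHandlerTraits<
          Error(typename std::decay<ResponseHandlerT>::type, ArgTs...)> {};
```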
The comment was somewhat misleading in that it implied that passes were not
responsible for adding new assumptions to the assumption cache. This new
wording now explicitly mentions that they are required to do so.
[AMDGPU] Fix MaxWorkGroupsPerCU for large workgroups
This patch corrects the maximum number of workgroups per CU when
workgroups are big (more than 128). This value contributes to the
occupancy calculation with respect to LDS size.
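Illustrative arithmetic only; the wave size and per-CU wave limit below are assumptions, not the target's real numbers:
```
unsigned maxWorkGroupsPerCU(unsigned WorkGroupSize) {
  const unsigned WaveSize = 64;      // assumed wavefront width
  const unsigned MaxWavesPerCU = 40; // assumed hardware limit
  // A workgroup larger than one wavefront occupies several waves,
  // which caps how many groups fit on a compute unit.
  unsigned WavesPerWG = (WorkGroupSize + WaveSize - 1) / WaveSize;
  return MaxWavesPerCU / WavesPerWG;
}
```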
Dimitry Andric [Tue, 14 Feb 2017 22:49:49 +0000 (22:49 +0000)]
Disable wrapping llvm-xray YAML output
Summary:
The YAML output produced by llvm-xray is supposed to be wrapped at the
arbitrary default of 70 columns set by `yaml::Output`. Unfortunately,
the wrapping is rather unpredictable, and can easily go past the set
number of columns, depending on the execution environment.
To make the YAML output environment-independent, disable wrapping
instead.
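A usage sketch: yaml::Output takes the wrap column as its third constructor argument, and passing 0 disables wrapping (the surrounding llvm-xray code is omitted):
```
#include "llvm/Support/YAMLTraits.h"
#include "llvm/Support/raw_ostream.h"

void emitRecords() {
  llvm::yaml::Output Out(llvm::outs(), /*Ctxt=*/nullptr, /*WrapColumn=*/0);
  // ... stream the records through Out as before ...
}
```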
Easwaran Raman [Tue, 14 Feb 2017 22:49:28 +0000 (22:49 +0000)]
Fix a bug in caller's BFI update code after inlining.
Multiple blocks in the callee can be mapped to a single cloned block
since we prune the callee as we clone it. The existing code
iterates over the value map and clones the block frequency (and
eventually scales the frequencies of the cloned blocks). Value map's
iteration is not deterministic and so the cloned block might get the
frequency of any of the original blocks. The fix is to give the cloned
block the maximum of the original blocks' frequencies: the first block
in the sequence must have this maximum frequency and, in the call
context, subsequent blocks must have its frequency.
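A hedged sketch of the fix (origBlocksMappedTo is a hypothetical helper standing in for the inliner's value-map bookkeeping):
```
uint64_t MaxFreq = 0;
for (const BasicBlock *OrigBB : origBlocksMappedTo(ClonedBB))
  MaxFreq = std::max(MaxFreq,
                     CalleeBFI->getBlockFreq(OrigBB).getFrequency());
CallerBFI->setBlockFreq(ClonedBB, MaxFreq);
```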
WholeProgramDevirt: Change internal vcall data structures to match summary.
Group calls into constant and non-constant arguments up front, and use uint64_t
instead of ConstantInt to represent constant arguments. The goal is to allow
the information from the summary to fit naturally into this data structure in
a future change (specifically, it will be added to CallSiteInfo).
This has two side effects:
- We disallow VCP for constant integer arguments of width >64 bits.
- We remove the restriction that the bitwidth of a vcall's argument and return
types must match those of the vfunc definitions.
I don't expect either of these to matter in practice. The first case is
uncommon, and the second one will lead to UB (so we can do anything we like).
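A rough sketch of the reshaped data structures (the field and type names are guesses at the shape, not the exact code):
```
struct VirtualCallSite {
  CallSite CS; // the devirtualizable call
};

struct CallSiteInfo {
  std::vector<VirtualCallSite> CallSites;
};

// Calls grouped up front by their constant arguments, held as plain
// uint64_t values rather than ConstantInt*.
std::map<std::vector<uint64_t>, CallSiteInfo> ConstCSInfo;
```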
Taewook Oh [Tue, 14 Feb 2017 21:10:40 +0000 (21:10 +0000)]
[BasicBlockUtils] Use getFirstNonPHIOrDbg to set debugloc for instructions created in SplitBlockPredecessors
Summary:
When setting the debugloc for instructions created in SplitBlockPredecessors, the current implementation copies the debugloc from the first non-PHI instruction of the original basic block. However, if the first non-PHI instruction is a call to @llvm.dbg.value, that debugloc may point to a location outside of the block itself. In the original example, the copied debugloc is the definition of the 'ret' variable, which lies outside the scope of the basic block itself, yet the current implementation still picks up this debugloc for the instructions created in SplitBlockPredecessors. This patch addresses the problem by picking up the debugloc from the first non-PHI, non-dbg instruction instead.
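A minimal sketch of the new selection (NewBI stands in for one of the instructions created by SplitBlockPredecessors; getFirstNonPHIOrDbg() is the real BasicBlock API):
```
// Take the debug location from the first instruction that is neither
// a PHI nor a debug intrinsic.
if (Instruction *FirstI = BB->getFirstNonPHIOrDbg())
  NewBI->setDebugLoc(FirstI->getDebugLoc());
```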
Reid Kleckner [Tue, 14 Feb 2017 21:02:24 +0000 (21:02 +0000)]
[BranchFolding] Tail common all identical unreachable blocks
Summary:
Blocks ending in unreachable are typically cold because they end the
program or throw an exception, so merging them with other identical
blocks is usually profitable because it reduces the size of cold code.
MachineBlockPlacement generally does not arrange to fall through to such
blocks, so commoning these blocks will not introduce additional
unconditional branches.
Tim Northover [Tue, 14 Feb 2017 20:56:18 +0000 (20:56 +0000)]
GlobalISel: introduce G_PTR_MASK to simplify alloca handling.
This instruction clears the low bits of a pointer without requiring (possibly
dodgy if pointers aren't ints) conversions to and from an integer. Since (as
far as I'm aware) all masks are statically known, the instruction takes an
immediate operand rather than a register to specify the mask.
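In plain integer arithmetic, the operation is just clearing the N low bits (a standalone illustration, not GlobalISel code):
```
#include <cstdint>

uintptr_t clearLowBits(uintptr_t Ptr, unsigned NumBits) {
  // e.g. NumBits = 4 aligns the pointer down to a 16-byte boundary,
  // as needed when lowering an over-aligned alloca.
  return Ptr & ~((uintptr_t(1) << NumBits) - 1);
}
```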
Vedant Kumar [Tue, 14 Feb 2017 20:03:48 +0000 (20:03 +0000)]
Re-apply "[profiling] Remove dead profile name vars after emitting name data"
This reverts 295092 (re-applies 295084), with a fix for dangling
references from the array of coverage names passed down from frontends.
I missed this in my initial testing because I only checked test/Profile,
and not test/CoverageMapping as well.
Original commit message:
The profile name variables passed to counter increment intrinsics are dead
after we emit the finalized name data in __llvm_prf_nm. However, we neglect to
erase these name variables. This causes huge size increases in the
__TEXT,__const section as well as slowdowns when linker dead stripping is
disabled. Some affected projects are so massive that they fail to link on
Darwin, because only the small code model is supported.
Fix the issue by throwing away the name constants as soon as we're done with
them.
Wolfgang Pieb [Tue, 14 Feb 2017 19:08:45 +0000 (19:08 +0000)]
Reapply r294532, reverted in r294787.
Store instructions can have more than one memory operand as a result
of optimizations that fold different stores into one.
When we identify spill instructions to generate DBG_VALUE instructions
to record the spilling of a variable, we disregard stores with
multiple memory operands for now. We may miss some relevant spills but
the handling is a bit more complex, so we'll do it in a different patch.
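A sketch of the guard described above (MachineInstr's hasOneMemOperand() is real; the surrounding loop is assumed):
```
// Skip candidate spill stores that carry more than one memory operand;
// we may miss a spill, but this stays conservative for now.
if (MI.mayStore() && !MI.hasOneMemOperand())
  continue;
```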
Bob Wilson [Tue, 14 Feb 2017 19:06:43 +0000 (19:06 +0000)]
Allow migrating away from the cmake option LLVM_DISABLE_ABI_BREAKING_CHECKS_ENFORCING
In r288754, Mehdi added a cmake option to disable enforcement of the ABI
breaking checks in the "abi-breaking.h" header. We used that when building
Swift and it works, but I think it will be better to control this with a
preprocessor macro instead of a cmake option. That will let us opt out of
the enforcement more selectively.
This change allows skipping the cmake setting if the existing preprocessor
macro is already defined. My intention here is to make this change and get
Swift to use it, and then after a few weeks, we can remove the cmake option.
I want to stage it like that to be less disruptive. I'm not aware of anyone
else using that cmake option.
Mehdi had some initial concern about the impact of using a preprocessor
macro when building with modules enabled. I don't think that will be a
problem if we set the macro on the command line with a -D option in those
contexts where we need to disable the enforcement of the checks.
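A hedged sketch of how the header can defer to a command-line -D (the exact abi-breaking.h.cmake wiring may differ):
```
#ifndef LLVM_DISABLE_ABI_BREAKING_CHECKS_ENFORCING
// Fall back to the CMake-configured default only when the macro was not
// already supplied, e.g. via -DLLVM_DISABLE_ABI_BREAKING_CHECKS_ENFORCING=1.
#define LLVM_DISABLE_ABI_BREAKING_CHECKS_ENFORCING 0
#endif
```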
Vedant Kumar [Tue, 14 Feb 2017 18:48:48 +0000 (18:48 +0000)]
[profiling] Remove dead profile name vars after emitting name data
The profile name variables passed to counter increment intrinsics are
dead after we emit the finalized name data in __llvm_prf_nm. However, we
neglect to erase these name variables. This causes huge size increases
in the __TEXT,__const section as well as slowdowns when linker dead
stripping is disabled. Some affected projects are so massive that they
fail to link on Darwin, because only the small code model is supported.
Fix the issue by throwing away the name constants as soon as we're done
with them.
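A minimal sketch of the cleanup (DeadNameVars stands in for the collected, now-dead name globals):
```
#include "llvm/IR/GlobalVariable.h"
#include <vector>

void eraseDeadNameVars(std::vector<llvm::GlobalVariable *> &DeadNameVars) {
  for (llvm::GlobalVariable *NameVar : DeadNameVars)
    if (NameVar->use_empty())
      NameVar->eraseFromParent(); // remove it from the module entirely
}
```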
To assist in debugging ISel or to prioritize GlobalISel backend
work, this patch adds two more tables to <Target>GenISelDAGISel.inc -
one containing the patterns that are used during selection and the
other containing the include source location of the patterns.
Enabled through the CMake variable LLVM_ENABLE_DAGISEL_COV.
Adam Nemet [Tue, 14 Feb 2017 18:18:58 +0000 (18:18 +0000)]
[opt-viewer] For single-process, fall back on map instead of Pool.map
This allows for nicer backtraces and easier debugging when -j1 is passed:
$ opt-viewer.py CMakeFiles/LLVMScalarOpts.dir/LoopVersioningLICM.cpp.opt.yaml html
Traceback (most recent call last):
  File "/org/llvm/utils/opt-viewer/opt-viewer.py", line 405, in <module>
    generate_report(pmap, all_remarks, file_remarks, args.source_dir, args.output_dir)
  File "/org/llvm/utils/opt-viewer/opt-viewer.py", line 362, in generate_report
    pmap(_render_file_bound, file_remarks.items())
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/pool.py", line 251, in map
    return self.map_async(func, iterable, chunksize).get()
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/pool.py", line 567, in get
    raise self._value
Exception: blah
$ opt-viewer.py -j 1 CMakeFiles/LLVMScalarOpts.dir/LoopVersioningLICM.cpp.opt.yaml html
Traceback (most recent call last):
  File "/org/llvm/utils/opt-viewer/opt-viewer.py", line 405, in <module>
    generate_report(pmap, all_remarks, file_remarks, args.source_dir, args.output_dir)
  File "/org/llvm/utils/opt-viewer/opt-viewer.py", line 362, in generate_report
    pmap(_render_file_bound, file_remarks.items())
  File "/org/llvm/utils/opt-viewer/opt-viewer.py", line 317, in _render_file
    SourceFileRenderer(source_dir, output_dir, filename).render(remarks)
  File "/org/llvm/utils/opt-viewer/opt-viewer.py", line 168, in __init__
    raise Exception("blah")
Exception: blah
Taewook Oh [Tue, 14 Feb 2017 17:30:05 +0000 (17:30 +0000)]
Do not apply redundant LastCallToStaticBonus
Summary:
As written in the comments above, LastCallToStaticBonus is already applied to
the cost if Caller has only one user, so it is redundant to reapply the bonus
here.
If the only user is not a caller, TotalSecondaryCost will not be adjusted
anyway because callerWillBeRemoved is false. If there's no caller at all, we
don't need to care about TotalSecondaryCost because
inliningPreventsSomeOuterInline is false.
Adam Nemet [Tue, 14 Feb 2017 17:21:09 +0000 (17:21 +0000)]
Add new pass LazyMachineBlockFrequencyInfo
And use it in MachineOptimizationRemarkEmitter. A test will follow on top of
Justin's changes to enable MachineORE in AsmPrinter.
The approach is similar to the IR-level pass. It's a bit simpler because BPI
is immutable at the Machine level so we don't need to make that lazy.
Because of this, a new function mapping is introduced (BPIPassTrait::getBPI).
This function extracts BPI from the pass. In case of the lazy pass, this is
when the calculation of the BFI occurs. For Machine-level, this is the
identity function.
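A hedged sketch of the trait for the IR-level analogue (the real declaration may differ in detail):
```
template <typename PassT> struct BPIPassTrait {
  // Eager passes: the pass is itself the BPI provider.
  static PassT *getBPI(PassT *P) { return P; }
};

template <> struct BPIPassTrait<LazyBranchProbabilityInfoPass> {
  // Lazy pass: asking for BPI is what triggers the computation.
  static BranchProbabilityInfo *getBPI(LazyBranchProbabilityInfoPass *P) {
    return &P->getBPI();
  }
};
```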
Pavel Labath [Tue, 14 Feb 2017 16:35:56 +0000 (16:35 +0000)]
[Support] Add formatv support for StringLiteral
Summary:
This is achieved by generalizing the expression selecting the StringRef
format_provider. Now, anything that can be converted to a StringRef will
use its formatter.
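A small usage example (llvm::formatv and llvm::StringLiteral are the real APIs; the strings are arbitrary):
```
#include "llvm/ADT/StringRef.h"
#include "llvm/Support/FormatVariadic.h"

llvm::StringLiteral Name("formatv");
std::string S = llvm::formatv("hello from {0}", Name).str();
```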
Matthew Simpson [Tue, 14 Feb 2017 16:28:32 +0000 (16:28 +0000)]
Reapply "[LV] Extend trunc optimization to all IVs with constant integer steps"
This reapplies commit r294967 with a fix for the execution time regressions
caught by the clang-cmake-aarch64-quick bot. We now extend the truncate
optimization to non-primary induction variables only if the truncate isn't
already free.
Alexey Bataev [Tue, 14 Feb 2017 15:20:48 +0000 (15:20 +0000)]
[SLP] Fix for PR31879: vectorize repeated scalar ops that don't get put
back into a vector
Previously the cost of the existing ExtractElement/ExtractValue
instructions was considered as a dead cost only if it was detected that
they have only one use. But these instructions may also be considered
dead if their users are themselves going to be vectorized,
like:
```
%x0 = extractelement <2 x float> %x, i32 0
%x1 = extractelement <2 x float> %x, i32 1
%x0x0 = fmul float %x0, %x0
%x1x1 = fmul float %x1, %x1
%add = fadd float %x0x0, %x1x1
```
This can be transformed to
```
%1 = fmul <2 x float> %x, %x
%2 = extractelement <2 x float> %1, i32 0
%3 = extractelement <2 x float> %1, i32 1
%add = fadd float %2, %3
```
because although `%x0` and `%x1` each have two users, those users are
part of the vectorized tree and we can consider these `extractelement`
instructions dead.
Mikael Holmen [Tue, 14 Feb 2017 06:37:42 +0000 (06:37 +0000)]
[LSR] Pointers with different address spaces are considered incompatible.
Summary:
Function isCompatibleIVType is already used as a guard before the call to
SE.getMinusSCEV(OperExpr, PrevExpr);
in LSRInstance::ChainInstruction. getMinusSCEV requires the expressions
to be of the same type, so we now consider two pointers with different
address spaces to be incompatible, since it is possible that the pointers
in fact have different sizes.
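A hedged sketch of the tightened check (the real isCompatibleIVType may be structured differently):
```
static bool isCompatibleIVType(Value *LVal, Value *RVal) {
  PointerType *LPTy = dyn_cast<PointerType>(LVal->getType());
  PointerType *RPTy = dyn_cast<PointerType>(RVal->getType());
  if (LPTy || RPTy)
    // Pointers are only compatible within one address space; pointers
    // in different address spaces may have different sizes.
    return LPTy && RPTy &&
           LPTy->getAddressSpace() == RPTy->getAddressSpace();
  return LVal->getType() == RVal->getType();
}
```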
Lang Hames [Tue, 14 Feb 2017 05:40:01 +0000 (05:40 +0000)]
[Orc][RPC] Remove launch policies in favor of async handlers.
Launch policies provided a mechanism for running RPC handlers on a background
thread (unblocking the main RPC receiver thread). Async handlers generalize
this by passing the responder function (the function that sends the RPC return
value) as an argument to the handler. The handler can optionally do its work on
a background thread (the same way launch policies do), but can also (a)
inspect the call arguments before deciding to run the work on a different
thread, or (b) use the responder in a subsequent RPC call (e.g. in the
handler of a callAsync), allowing the handler to call back to the originator (or
to a 3rd party) without blocking the listener thread, and without launching a
new thread.
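Purely illustrative shape of an async handler (the registration API and exact signature here are assumptions):
```
// The responder is just an argument, so the handler may return
// immediately and send the result later from another thread.
auto AsyncDouble = [](std::function<llvm::Error(int)> SendResult, int X) {
  std::thread([SendResult, X] {
    llvm::cantFail(SendResult(X * 2)); // respond off the listener thread
  }).detach();
  return llvm::Error::success();
};
```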
Philip Reames [Tue, 14 Feb 2017 01:38:31 +0000 (01:38 +0000)]
[LICM] Make store promotion work in the face of unordered atomics
Extend our store promotion code to deal with unordered atomic accesses. Ordered atomics continue to be unhandled.
Most of the change is straightforward; the only complicated bit is in the reasoning around mixing atomic and non-atomic memory accesses. Rather than trying to reason about the complex semantics in these cases, I simply disallowed promotion when both atomic and non-atomic accesses are present. This is conservatively correct.
It seems really tempting to just promote all accesses to atomic, but the original accesses might have been conditional. Since we can't lower an arbitrary atomic type, it might not be safe to promote all accesses to atomic. Consider a loop like the following:
while (b) {
  load i128 ...
  if (can lower i128 atomic)
    store atomic i128 ...
  else
    store i128
}
It could be that there's no race on the location, and thus the code is perfectly well defined even if we can't lower an i128 atomically.
It's not clear we need to be this conservative - arguably the program above is broken since it can't be lowered unless the branch is folded - but I didn't want to have to fix any fallout which might result.
Reid Kleckner [Tue, 14 Feb 2017 01:21:39 +0000 (01:21 +0000)]
Use std::call_once on Windows
Previously we could not use it because std::once_flag's default
constructor was not constexpr. Today, all supported versions of VS
correctly mark it constexpr. I confirmed that MSVC 2015 does not emit
any problematic racy dynamic initialization code, so we should be safe
to use this now.
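For reference, the now-usable pattern is plain standard C++:
```
#include <mutex>

static std::once_flag InitFlag; // constexpr default ctor: no racy dynamic init

static void initialize() { /* one-time setup */ }

void ensureInitialized() {
  std::call_once(InitFlag, initialize);
}
```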
Andrew Kaylor [Mon, 13 Feb 2017 23:38:52 +0000 (23:38 +0000)]
[X86] Add MXCSR register
This adds MXCSR to the set of recognized registers for X86 targets and updates the instructions that read or write it. I do not intend for all of the various floating point instructions that implicitly use the control bits or update the status bits of this register to ever have that usage modeled by default. However, when constrained floating point modes (such as strict FP exception status modeling or dynamic rounding modes) are enabled, implicit use/def information for MXCSR will be added to those instructions.
Until those additional updates are made this should cause (almost?) no functional changes. Theoretically, this will prevent instructions like LDMXCSR and STMXCSR from being moved past one another, but that should be prevented anyway and I haven't found a case where it is happening now.
Sanjoy Das [Mon, 13 Feb 2017 23:19:07 +0000 (23:19 +0000)]
[LangRef] Explicitly allow readnone and readonly functions to unwind
Summary:
This change edits the language reference to explicitly allow the
existence of readnone and readonly functions that can throw. Full
discussion at
http://lists.llvm.org/pipermail/llvm-dev/2017-January/108637.html
Sanjoy Das [Mon, 13 Feb 2017 23:14:03 +0000 (23:14 +0000)]
[LangRef] Update the TBAA section
Summary:
Update the TBAA section to mention the struct path TBAA that LLVM
implements today. This is not a proposal or change in semantics -- it
is intended only to **document** what LLVM already does today.
This is related to https://reviews.llvm.org/D26438 where I've tried to
implement some of the constraints as verifier checks.
Sanjay Patel [Mon, 13 Feb 2017 23:10:51 +0000 (23:10 +0000)]
[FunctionAttrs] try to extend nonnull-ness of arguments from a callsite back to its parent function
As discussed here:
http://lists.llvm.org/pipermail/llvm-dev/2016-December/108182.html
...we should be able to propagate 'nonnull' info from a callsite back to its parent.
The original motivation for this patch is our botched optimization of "dyn_cast" (PR28430),
but this won't solve that problem.
The transform is currently disabled by default while we wait for clang to work around
potential security problems:
http://lists.llvm.org/pipermail/cfe-dev/2017-January/052066.html