Derek Schuff [Tue, 13 Dec 2016 23:01:53 +0000 (23:01 +0000)]
llvm-config: Set LinkMode in addition to LinkDyLib when using --ignore-libllvm
Summary:
LinkDyLib is only used (before arg processing) to set up the default for
LinkMode. So reset LinkMode as well, and do so before processing --link-shared or
--link-static, so that those flags can continue to override it.
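For illustration, a minimal sketch of the intended ordering (the enum and variable names are assumptions, not necessarily llvm-config's actual identifiers): seed the mode from the dylib-derived default first, then let the explicit flags override it while the arguments are processed.
  #include <string>
  #include <vector>

  enum LinkMode { LinkModeAuto, LinkModeShared, LinkModeStatic };

  // Hypothetical helper sketching the order of operations.
  LinkMode pickLinkMode(bool LinkDyLib, const std::vector<std::string> &Args) {
    // Establish the LinkDyLib-derived default *before* looking at the flags...
    LinkMode Mode = LinkDyLib ? LinkModeShared : LinkModeAuto;
    for (const std::string &Arg : Args) {
      if (Arg == "--link-shared")
        Mode = LinkModeShared;   // ...so the explicit flags still win.
      else if (Arg == "--link-static")
        Mode = LinkModeStatic;
    }
    return Mode;
  }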
Sanjoy Das [Tue, 13 Dec 2016 22:04:58 +0000 (22:04 +0000)]
Re-land "[SCEVExpander] Use llvm data structures; NFC"
This change re-lands r289215, by reverting r289482. The underlying
issue that caused it to be reverted has been fixed by Tim Northover in
r289496.
Original commit message for r289215:
[SCEVExpander] Use llvm data structures; NFC
Original commit message for r289482:
Revert "[SCEVExpander] Use llvm data structures; NFC"
This reverts r289215 (git SHA1 cb7b86a1). It breaks the ubsan build
because a DenseMap that keys off of `AssertingVH<T>` will hit UB when it
tries to cast the empty and tombstone keys to `T *` (due to insufficient
alignment).
This is the relevant stack trace (thanks to Mike Aizatsky):
#0 0x25cf100 in llvm::AssertingVH<llvm::PHINode>::getValPtr() const llvm/include/llvm/IR/ValueHandle.h:212:39
#1 0x25cea20 in llvm::AssertingVH<llvm::PHINode>::operator=(llvm::AssertingVH<llvm::PHINode> const&) llvm/include/llvm/IR/ValueHandle.h:234:19
#2 0x25d0092 in llvm::DenseMapBase<llvm::DenseMap<llvm::AssertingVH<llvm::PHINode>, llvm::detail::DenseSetEmpty, llvm::DenseMapInfo<llvm::AssertingVH<llvm::PHINode> >, llvm::detail::DenseSetPair<llvm::AssertingVH<llvm::PHINode> > >, llvm::AssertingVH<llvm::PHINode>, llvm::detail::DenseSetEmpty, llvm::DenseMapInfo<llvm::AssertingVH<llvm::PHINode> >, llvm::detail::DenseSetPair<llvm::AssertingVH<llvm::PHINode> > >::clear() llvm/include/llvm/ADT/DenseMap.h:113:23
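For context, a simplified illustration of why those sentinel keys are alignment-sensitive (this sketches the mechanism, not the exact DenseMapInfo code): the empty and tombstone keys are all-ones bit patterns shifted left by the number of low bits the pointee's alignment is assumed to guarantee, then cast back to a pointer. If that assumed alignment is smaller than T's real alignment requirement, the sentinel is misaligned for T, and the cast back is exactly what UBSan flags in the trace above.
  #include <cstdint>

  // Illustrative only; the real keys come from DenseMapInfo<T*> and
  // PointerLikeTypeTraits, which derive the shift from the assumed alignment.
  template <typename T, unsigned Log2Align> struct PointerSentinelsSketch {
    static T *getEmptyKey() {
      uintptr_t Val = static_cast<uintptr_t>(-1);
      Val <<= Log2Align;                 // assumes alignof(T) == 1 << Log2Align
      return reinterpret_cast<T *>(Val); // misaligned for T if that assumption is too small
    }
    static T *getTombstoneKey() {
      uintptr_t Val = static_cast<uintptr_t>(-2);
      Val <<= Log2Align;
      return reinterpret_cast<T *>(Val);
    }
  };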
Anna Thomas [Tue, 13 Dec 2016 21:05:21 +0000 (21:05 +0000)]
[IRCE] Avoid loop optimizations on pre and post loops
Summary:
This patch adds loop metadata to the pre- and post-loops generated by IRCE.
Currently, we have metadata for disabling optimizations such as vectorization,
unrolling, loop distribution and LICM versioning (and have confirmed that these
optimizations check for the metadata before proceeding with the transformation).
The pre and post loops generated by IRCE need not go through loop opts (since
these are slow paths).
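As an illustration (a sketch under assumed names, not the patch itself), marking a loop so later passes skip it typically means building a distinct, self-referential llvm.loop metadata node carrying the disable hints and attaching it to the loop latch's terminator:
  #include "llvm/Analysis/LoopInfo.h"
  #include "llvm/IR/Constants.h"
  #include "llvm/IR/Metadata.h"

  // PreLoop is assumed to be the IRCE-generated pre-loop.
  static void disableLoopOpts(llvm::Loop *PreLoop) {
    using namespace llvm;
    LLVMContext &Ctx = PreLoop->getHeader()->getContext();
    SmallVector<Metadata *, 4> MDs;
    MDs.push_back(nullptr); // slot 0 is reserved for the self-reference
    MDs.push_back(MDNode::get(
        Ctx, {MDString::get(Ctx, "llvm.loop.vectorize.enable"),
              ConstantAsMetadata::get(ConstantInt::getFalse(Ctx))}));
    MDs.push_back(
        MDNode::get(Ctx, {MDString::get(Ctx, "llvm.loop.unroll.disable")}));
    MDNode *LoopID = MDNode::getDistinct(Ctx, MDs);
    LoopID->replaceOperandWith(0, LoopID); // llvm.loop IDs refer to themselves
    PreLoop->getLoopLatch()->getTerminator()->setMetadata("llvm.loop", LoopID);
  }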
[LV] Don't vectorize when we have a small static bound on trip count
We currently check if the exact trip count is known and is smaller than the
"tiny loop" bound. We should be checking the maximum bound on the trip count
instead.
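A hedged sketch of the intended check (SE, TheLoop and the threshold name are assumed from the vectorizer's context; a result of 0 means ScalarEvolution found no constant upper bound):
  unsigned MaxTC = SE->getSmallConstantMaxTripCount(TheLoop);
  if (MaxTC > 0 && MaxTC < TinyTripCountVectorThreshold)
    return false; // even the upper bound on the trip count is tiny: not worth vectorizing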
Alina Sbirlea [Tue, 13 Dec 2016 19:32:36 +0000 (19:32 +0000)]
Generalize strided store pattern in interleave access pass
Summary:
This patch generalizes the matching of strided store accesses to handle more general masks.
The more general rule is to have consecutive accesses based on the stride:
[x, y, ..., z, x+1, y+1, ..., z+1, x+2, y+2, ..., z+2, ...]
The elements in the mask need not form a contiguous range; there may be gaps.
As before, undefs are allowed and filled in with adjacent element loads.
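A small illustrative check of the property described above (a hypothetical helper, not the pass's actual code): every mask element must exceed the element one group earlier by exactly 1, while negative entries stand in for undef lanes. For example, with three fields (x=0, y=4, z=9) the mask <0, 4, 9, 1, 5, 10, 2, 6, 11> qualifies even though the used indices are not contiguous.
  #include "llvm/ADT/ArrayRef.h"

  // NumFields is the number of interleaved fields (the x, y, ..., z above).
  static bool isGeneralizedStridedMask(llvm::ArrayRef<int> Mask, unsigned NumFields) {
    for (unsigned I = NumFields, E = Mask.size(); I < E; ++I) {
      if (Mask[I] < 0 || Mask[I - NumFields] < 0)
        continue; // undef lanes are allowed and filled from adjacent loads
      if (Mask[I] != Mask[I - NumFields] + 1)
        return false;
    }
    return true;
  }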
Matthias Braun [Tue, 13 Dec 2016 19:08:17 +0000 (19:08 +0000)]
Revert "AArch64CollectLOH: Rewrite as block-local analysis."
This is not always behaving as expected: it turns out block live-in
lists are only correct most of the time. Still waiting for reviews on
https://reviews.llvm.org/D27559 to make them correct all of the time.
[bpf] change llvm-objdump to print dec instead of hex
Since the BPF instruction stream is a multiple of 8 bytes, change llvm-objdump
to print a decimal instruction number instead of a hex address, so that
users don't have to do this math manually to match kernel verifier output.
Tim Northover [Tue, 13 Dec 2016 18:25:38 +0000 (18:25 +0000)]
GlobalISel: fix GOT accesses on AArch64.
We were using the correct pseudo-instruction, but because the operand's flags
weren't set correctly we still ended up emitting incorrect relocations during
MC lowering.
Greg Clayton [Tue, 13 Dec 2016 18:25:19 +0000 (18:25 +0000)]
Make a DWARFDIE class that can help avoid using the wrong DWARFUnit when extracting attributes
Many places pass around a DWARFDebugInfoEntryMinimal and a DWARFUnit. It is easy to get things wrong by using the wrong DWARFUnit with a DWARFDebugInfoEntryMinimal. This patch creates a DWARFDie class that contains the DWARFUnit and DWARFDebugInfoEntryMinimal objects so that they can't get out of sync. All attribute extraction has been moved out of DWARFDebugInfoEntryMinimal and into DWARFDie. DWARFDebugInfoEntryMinimal was also renamed to DWARFDebugInfoEntry.
DWARFDie objects are temporary objects that are used by clients and contain the two pointers that you always need anyway. Keeping them grouped avoids errors and simplifies many of the attribute-extraction APIs by removing the need to pass in a DWARFUnit.
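A rough sketch of the shape of the new class (simplified; the actual members and method signatures may differ), showing the idea that the unit and the DIE always travel together:
  class DWARFUnit;
  class DWARFDebugInfoEntry;

  class DWARFDie {
    DWARFUnit *U = nullptr;
    const DWARFDebugInfoEntry *Die = nullptr;

  public:
    DWARFDie() = default;
    DWARFDie(DWARFUnit *Unit, const DWARFDebugInfoEntry *D) : U(Unit), Die(D) {}

    bool isValid() const { return U && Die; }
    DWARFUnit *getDwarfUnit() const { return U; }

    // Attribute extraction now lives here, so it can never be handed the
    // wrong unit; signatures below are illustrative only:
    //   const char *getName(...) const;
    //   Optional<DWARFFormValue> getAttributeValue(dwarf::Attribute Attr) const;
  };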
Marcos Pividori [Tue, 13 Dec 2016 17:46:40 +0000 (17:46 +0000)]
[libFuzzer] Avoid name collision with Windows API.
Windows uses macros to replace DeleteFile() with DeleteFileA() or
DeleteFileW(), which was causing an error at link time.
DeleteFile was renamed to RemoveFile().
Marcos Pividori [Tue, 13 Dec 2016 17:46:32 +0000 (17:46 +0000)]
[libFuzzer] Implement DirName() for Windows.
Implement DirName from scratch to avoid dependencies on external libraries.
It's based on MSDN documentation for Naming Files, Paths, and Namespaces.
The algorithm can't simply start from the end and look backwards for the
first separator, because we need to preserve the prefix that represents
the root location. We shouldn't remove anything there. On Windows we
have many different options, like:
\\Server\Share\ , \ , C: , C:\ , \\?\C:\ , \\?\UNC\Server\Share\
We remove the last separator in the rest of the path, if it exists.
It was implemented to behave similarly to dirname() on Linux,
removing trailing separators, returning "." when the path doesn't
contain separators, etc.
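A simplified sketch of the described algorithm (not libFuzzer's exact code; PrefixLength is a hypothetical helper returning the length of the root prefix, and trailing-separator stripping is elided for brevity):
  #include <string>

  size_t PrefixLength(const std::string &Path); // hypothetical: "C:\" -> 3, "foo\bar" -> 0

  std::string DirName(const std::string &Path) {
    size_t Root = PrefixLength(Path);           // never remove anything inside the root prefix
    size_t Sep = Path.find_last_of("/\\");
    if (Sep == std::string::npos || Sep < Root)
      return Root ? Path.substr(0, Root) : "."; // no separators outside the root
    // Drop the last component and its separator, unless that separator is the
    // one terminating the root prefix itself.
    return Path.substr(0, Sep == Root ? Root : Sep);
  }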
Marcos Pividori [Tue, 13 Dec 2016 17:46:25 +0000 (17:46 +0000)]
[libFuzzer] Fix bug in detecting timeouts when input string is empty.
I added a new flag, RunningCB, to know whether the Fuzzer's main thread is
running the CB function, instead of using (!CurrentUnitSize).
(!CurrentUnitSize) doesn't work properly. For example, in FuzzerLoop.cpp,
inside the ShuffleAndMinimize() function, we execute the callback with an
empty string (size=0). The previous implementation failed to detect timeouts
in that execution.
Also, I added a regression test for that case.
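For orientation, a hedged sketch of the pattern (member and function names are approximate, not the exact libFuzzer code): the timer path keys off an explicit flag set around the callback invocation, so a unit of size 0 is still covered.
  #include <atomic>
  #include <chrono>
  #include <cstddef>
  #include <cstdint>

  extern "C" int LLVMFuzzerTestOneInput(const uint8_t *Data, size_t Size);

  static std::atomic<bool> RunningCB{false};
  static std::chrono::steady_clock::time_point UnitStartTime;

  void ExecuteCallback(const uint8_t *Data, size_t Size) {
    UnitStartTime = std::chrono::steady_clock::now();
    RunningCB = true;            // set even when Size == 0
    LLVMFuzzerTestOneInput(Data, Size);
    RunningCB = false;
  }

  void AlarmCallback(int UnitTimeoutSec) {
    if (!RunningCB)              // previously: if (!CurrentUnitSize) return;
      return;                    // ...which wrongly skipped empty inputs
    auto Elapsed = std::chrono::steady_clock::now() - UnitStartTime;
    if (std::chrono::duration_cast<std::chrono::seconds>(Elapsed).count() >
        UnitTimeoutSec) {
      /* report the timeout and exit */
    }
  }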
Marcos Pividori [Tue, 13 Dec 2016 17:45:53 +0000 (17:45 +0000)]
[libFuzzer] Properly use unsigned for workers, jobs and NumberOfCpuCores.
std::thread::hardware_concurrency() returns an unsigned, so I modified
NumberOfCpuCores() to return unsigned too.
The number of CPUs is used to define the number of workers, so I decided
to update the workers and jobs flags to be declared as unsigned too.
Marcos Pividori [Tue, 13 Dec 2016 17:45:44 +0000 (17:45 +0000)]
[libFuzzer] Properly use unsigned for Process ID.
Use unsigned for the PID instead of signed int. GetCurrentProcessId() returns
an unsigned (DWORD), so we must be sure we can deal with all possible values.
I use an unsigned long to be sure it can hold a 32-bit unsigned (DWORD).
Marcos Pividori [Tue, 13 Dec 2016 17:45:20 +0000 (17:45 +0000)]
[libFuzzer] Improve Signal Handler interface.
Add new flags to FuzzingOptions to represent the different conditions
on the signal handling. These options are passed when calling
SetSignalHandler().
This change simplifies the implementation of Windows exception
handling. Now we can define a single handler for all the exceptions.
Zachary Turner [Tue, 13 Dec 2016 17:03:49 +0000 (17:03 +0000)]
[ADT] Add llvm::StringLiteral.
StringLiteral is a wrapper around a string literal, useful for
replacing global tables of char arrays with global tables of
StringRefs that can be initialized in a constexpr context, avoiding
the invocation of a global constructor.
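A small usage sketch (the table contents are made up): initializing a global table of StringRef-like entries entirely at compile time, so no global constructor runs at startup.
  #include "llvm/ADT/StringRef.h"

  using llvm::StringLiteral;
  using llvm::StringRef;

  // Built at compile time; no global constructor is emitted.
  static constexpr StringLiteral OpNames[] = {"add", "sub", "mul"};

  StringRef opName(unsigned Idx) {
    return OpNames[Idx]; // already a StringRef, length known without strlen
  }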
David Callahan [Tue, 13 Dec 2016 16:42:18 +0000 (16:42 +0000)]
[ADCE] Add code to remove dead branches
Summary:
This is the last in a series of patches to evolve ADCE.cpp to support
removing unnecessary control flow.
This patch adds the code to update the control and data flow graphs
to remove the dead control flow.
Also update the unit tests to test the capability to remove dead,
may-be-infinite loops, which is enabled by the switch
-adce-remove-loops.
Previous patches:
D23824 [ADCE] Add handling of PHI nodes when removing control flow
D23559 [ADCE] Add control dependence computation
D23225 [ADCE] Modify data structures to support removing control flow
D23065 [ADCE] Refactor anticipating new functionality (NFC)
D23102 [ADCE] Refactoring for new functionality (NFC)
Artur Pilipenko [Tue, 13 Dec 2016 14:21:14 +0000 (14:21 +0000)]
[DAGCombiner] Match load by bytes idiom and fold it into a single load
Match a pattern where a wide scalar value is loaded by several narrow loads and combined by shifts and ors. Fold it into a single load, or a load and a bswap if the target supports it.
Assuming little endian target:
i8 *a = ...
i32 val = a[0] | (a[1] << 8) | (a[2] << 16) | (a[3] << 24)
=>
i32 val = *((i32*)a)
This optimization was discussed on llvm-dev some time ago in the "Load combine pass" thread. We came to the conclusion that we want to do this transformation late in the pipeline, because in the presence of atomic loads, load widening is an irreversible transformation and it might hinder other optimizations.
Eventually we'd like to support folding patterns like this where the offset has a variable and a constant part:
i32 val = a[i] | (a[i + 1] << 8) | (a[i + 2] << 16) | (a[i + 3] << 24)
Matching the pattern above is easier at SelectionDAG level since address reassociation has already happened and the fact that the loads are adjacent is clear. Understanding that these loads are adjacent at IR level would have involved looking through geps/zexts/adds while looking at the addresses.
The general scheme is to match OR expressions by recursively calculating the origin of the individual bits which constitute the resulting OR value. If all the OR'ed bits come from memory, verify that they are adjacent and match the little- or big-endian encoding of a wider value. If so, and if a load of the wider type (plus a bswap if needed) is allowed by the target, generate the load and the bswap.
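For reference, a concrete C++ instance of the idiom (illustrative): on a little-endian target, the four byte loads, shifts and ors below can now be combined into a single 32-bit load.
  #include <cstdint>

  uint32_t load_le32(const uint8_t *A) {
    return uint32_t(A[0]) | (uint32_t(A[1]) << 8) |
           (uint32_t(A[2]) << 16) | (uint32_t(A[3]) << 24);
  }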
Simon Dardis [Tue, 13 Dec 2016 11:07:51 +0000 (11:07 +0000)]
[mips] Fix compact branch hazard detection
In certain cases it is possible for transient instructions, such as
%reg = IMPLICIT_DEF as the single instruction in a basic block, to reach
the MipsHazardSchedule pass. This patch teaches MipsHazardSchedule to
properly look through such cases.
Dylan McKay [Tue, 13 Dec 2016 05:53:14 +0000 (05:53 +0000)]
[AVR] Add a 'relax memory operation' pass
Summary:
This pass will be used to relax instructions which use out-of-bounds
memory accesses into equivalent operations that can work with the
addresses.
The pass currently implements relaxation for the STDWPtrQRr instruction.
Without this pass, an assertion error would be hit in the pseudo expansion pass.
In the future, we will need to add more instructions to this pass. We can do
that on a case-by-case basis.
Philip Reames [Tue, 13 Dec 2016 01:38:41 +0000 (01:38 +0000)]
[peephole] Enhance folding logic to work for STATEPOINTs
The general idea here is to get enough of the existing restrictions out of the way that the already existing folding logic in foldMemoryOperand can kick in for STATEPOINTs and fold references to immutable stack slots. The key changes are:
Support for folding multiple operands at once which reference the same load
Support for folding multiple loads into a single instruction
Walk all the operands of the instruction for variadic instructions (this is a bug fix!)
Once this lands, I'll post another patch which refactors the TII interface here. There's nothing actually x86 specific about the x86 code used here.
Philip Reames [Tue, 13 Dec 2016 01:21:15 +0000 (01:21 +0000)]
[Statepoints] Reuse stack slots more than once within a basic block
The stack slot reuse code had a really amusing bug. We ended up only reusing a stack slot exactly once (initial use + reuse) within a basic block. If we had a third statepoint to process, we ended up allocating a new set of stack slots. If we crossed a basic block boundary, the set got cleared. As a result, code which is invoke-heavy doesn't see the problem, but multiple calls within a basic block do. Net result: as we optimize invokes into calls, lowering gets worse.
The root error here is that the bitmap used by the custom allocator wasn't kept in sync. The result was that we ended up resizing the bitmap on the next statepoint (to handle the cross-block case), reset the bit once, but then never reset it again.
[libFuzzer] don't require extra flags with -minimize_crash=1 (default to -max_total_time=600). Also respect exact_artifact_path when outputting the end result
Chris Bieneman [Tue, 13 Dec 2016 00:29:56 +0000 (00:29 +0000)]
[LIT] Fix system-windows
Turns out that if you were on Windows and your default target wasn't Windows, the system-windows feature wasn't getting enabled.
This fixes that and updates the coff-dwarf test to rely on the new "target-windows" feature. That test was the reason why system-windows was changed to not always be enabled on Windows hosts.
Tim Northover [Mon, 12 Dec 2016 23:29:07 +0000 (23:29 +0000)]
Stop lying about pointers' required alignments.
These extra specializations were added in the depths of history (r67984 from
2009) and are clearly problematic now. The pointers actually are aligned to the
default (8 bytes), since otherwise UBsan would be complaining loudly.
I *think* it originally made sense because there was no "alignof" to infer the
correct value, so the generic case went with what malloc returned (8-byte
aligned objects), and on 32-bit machines this specialization was correct. It
became wrong when we started compiling for 64-bit, and caused a UBSan failure
when we tried to put a ValueHandle into a DenseMap.
Marcos Pividori [Mon, 12 Dec 2016 23:25:11 +0000 (23:25 +0000)]
[libFuzzer] Implement Timers for Windows.
Implemented timeouts for Windows using TimerQueueTimers.
Timers are used to supervise the time of execution of the
callback function that is being fuzzed.
Petr Hosek [Mon, 12 Dec 2016 23:15:10 +0000 (23:15 +0000)]
[CMake] Multi-target builtins build
This change enables building the builtins for multiple different targets
using the LLVM runtimes directory.
To specify the builtin targets to be built, use the LLVM_BUILTIN_TARGETS
variable, where the value is the list of targets. To pass a per target
variable to the builtin build, you can set BUILTINS_<target>_<variable>
where <variable> will be passed to the builtin build for <target>.
Sanjoy Das [Mon, 12 Dec 2016 23:00:12 +0000 (23:00 +0000)]
Revert "[SCEVExpander] Use llvm data structures; NFC"
This reverts r289215 (git SHA1 cb7b86a1). It breaks the ubsan build
because a DenseMap that keys off of `AssertingVH<T>` will hit UB when it
tries to cast the empty and tombstone keys to `T *` (due to insufficient
alignment).
This is the relevant stack trace (thanks to Mike Aizatsky):
#0 0x25cf100 in llvm::AssertingVH<llvm::PHINode>::getValPtr() const llvm/include/llvm/IR/ValueHandle.h:212:39
#1 0x25cea20 in llvm::AssertingVH<llvm::PHINode>::operator=(llvm::AssertingVH<llvm::PHINode> const&) llvm/include/llvm/IR/ValueHandle.h:234:19
#2 0x25d0092 in llvm::DenseMapBase<llvm::DenseMap<llvm::AssertingVH<llvm::PHINode>, llvm::detail::DenseSetEmpty, llvm::DenseMapInfo<llvm::AssertingVH<llvm::PHINode> >, llvm::detail::DenseSetPair<llvm::AssertingVH<llvm::PHINode> > >, llvm::AssertingVH<llvm::PHINode>, llvm::detail::DenseSetEmpty, llvm::DenseMapInfo<llvm::AssertingVH<llvm::PHINode> >, llvm::detail::DenseSetPair<llvm::AssertingVH<llvm::PHINode> > >::clear() llvm/include/llvm/ADT/DenseMap.h:113:23
Guozhi Wei [Mon, 12 Dec 2016 22:09:02 +0000 (22:09 +0000)]
[PPC] Prefer direct move on power8 if load 1 or 2 bytes to VSR
Power8 has MTVSRWZ but no LXSIBZX/LXSIHZX, so moving 1 or 2 bytes to a VSR through MTVSRWZ is much faster than storing the extended value to the stack and loading it with LXSIWZX.
This patch fixes pr31144.
Tim Shen [Mon, 12 Dec 2016 21:59:30 +0000 (21:59 +0000)]
[APFloat] Implement PPCDoubleDouble add and subtract.
Summary:
I looked at libgcc's implementation (which is based on the paper
"Software for Doubled-Precision Floating-Point Computations" by Seppo Linnainmaa,
ACM TOMS vol. 7, no. 3, September 1981, pages 272-283) and made it generic to
arbitrary IEEE floats.
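For orientation only (this is the classic building block such doubled-precision algorithms rest on, not APFloat's code): Knuth's error-free two-sum, which splits an addition into a rounded sum and the exact rounding error.
  #include <utility>

  // Returns {S, E} with S = round(A + B) and A + B == S + E exactly.
  static std::pair<double, double> twoSum(double A, double B) {
    double S = A + B;
    double Bv = S - A;                  // the part of S contributed by B
    double E = (A - (S - Bv)) + (B - Bv);
    return {S, E};
  }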
Matthew Simpson [Mon, 12 Dec 2016 21:11:04 +0000 (21:11 +0000)]
[SLP] Fix sign-extends for type-shrinking
This patch ensures the correct minimum bit width during type-shrinking.
Previously when type-shrinking, we always sign-extended values back to their
original width. However, if we are going to sign-extend, and the sign bit is
unknown, we have to increase the minimum bit width by one bit so the
sign-extend will fill the upper bits correctly. If the sign bit is known to be
zero, we can perform a zero-extend instead. This should fix PR31243.
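A minimal sketch of the rule (a hypothetical helper, not the patch's code): given the narrowest width that holds the demanded bits, reserve one extra bit whenever the value will be sign-extended and its sign bit is not known to be zero.
  static unsigned requiredBitWidth(unsigned DemandedWidth, bool WillSignExtend,
                                   bool SignBitKnownZero) {
    if (WillSignExtend && !SignBitKnownZero)
      return DemandedWidth + 1; // extra bit so sext reproduces the upper bits
    return DemandedWidth;       // zext (or no extension) needs no headroom
  }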
Paul Robinson [Mon, 12 Dec 2016 20:49:11 +0000 (20:49 +0000)]
Recommit r288212: Emit 'no line' information for interesting 'orphan' instructions.
DWARF specifies that "line 0" really means "no appropriate source
location" in the line table. By default, use this for branch targets
and some other cases that have no specified source location, to
prevent inheriting unfortunate line numbers from physically preceding
instructions (which might be from completely unrelated source).
The updated patch allows enabling or suppressing this behavior for all
unspecified source locations.
Mehdi Amini [Mon, 12 Dec 2016 19:34:26 +0000 (19:34 +0000)]
Refactor BitcodeReader: move Metadata and ValueId handling in their own class/file
Summary:
I'm planning on changing the way we load metadata to enable laziness.
I'm getting lost in this gigantic file, and the gigantic class that is the bitcode
reader. This is a first step toward splitting it into a few coarse components that
are more easily understandable.
Dimitry Andric [Mon, 12 Dec 2016 19:05:52 +0000 (19:05 +0000)]
Fix compile with GCC 5 or later
Summary:
Compiling with GCC 5 or later can fail with a bogus error "constructor
required before non-static data member for
llvm::ValueEnumerator::MDRange::First has been parsed".
This was originally fixed upstream in GCC PR 70528, but later this fix
was reverted, and released versions of GCC still show the bogus error.
To work around this, replace MDRange's declaration of a default
constructor with a definition.
Teresa Johnson [Mon, 12 Dec 2016 16:09:30 +0000 (16:09 +0000)]
[ThinLTO] Import only necessary DICompileUnit fields
Summary:
As discussed on the mailing list, for ThinLTO importing we don't need
to import all the fields of the DICompileUnit. Don't import the enums,
macros, or retained types lists. Also, only import locally scoped imported
entities. Since we don't currently import any global variables,
we also don't need to import the list of global variables (added an
assert to verify none are being imported).
This is being done by pre-populating the value map entries to map
the unneeded metadata to nullptr. For the imported entities, we can
simply replace the source module's list with a new list containing
only those needed imported entities. This is done in the IRLinker
constructor so that value mapping automatically does the desired
mapping.
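A hedged sketch of the pre-population trick (the accessor names follow DICompileUnit's raw operand getters; the surrounding code is illustrative, not the exact patch): mapping the unwanted operands to nullptr before linking makes the value mapper drop them instead of importing them.
  // CU is the source module's DICompileUnit; ValueMap is the IRLinker's
  // ValueToValueMapTy.
  for (llvm::Metadata *Unneeded :
       {CU->getRawEnumTypes(), CU->getRawMacros(), CU->getRawRetainedTypes(),
        CU->getRawGlobalVariables()})
    if (Unneeded)
      ValueMap.MD()[Unneeded].reset(nullptr);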
Simon Pilgrim [Mon, 12 Dec 2016 10:49:15 +0000 (10:49 +0000)]
[X86][SSE] Lower suitably sign-extended mul vXi64 using PMULDQ
PMULDQ returns the 64-bit result of the signed multiplication of the lower 32 bits of the vXi64 vector inputs; we can lower with this if the sign bits stretch that far.
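A sketch of the legality test this implies (simplified; N, LHS, RHS and DAG are assumed from the lowering context, and this is not the exact lowering code): PMULDQ multiplies the low 32 bits of each 64-bit lane as signed values, so it is only a valid lowering when both operands have more than 32 sign bits, i.e. are effectively sign-extended 32-bit values.
  SDValue LHS = N->getOperand(0), RHS = N->getOperand(1);
  bool CanUsePMULDQ =
      DAG.ComputeNumSignBits(LHS) > 32 && DAG.ComputeNumSignBits(RHS) > 32;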
Craig Topper [Mon, 12 Dec 2016 07:57:21 +0000 (07:57 +0000)]
[X86] Change CMPSS/CMPSD intrinsic instructions to use sse_load_f32/f64 as its memory pattern instead of full vector load.
These intrinsics only load a single element. We should use sse_load_f32/f64 to give more options for what loads they can match.
Currently these instructions often only get their load folded thanks to the load folding in the peephole pass. I plan to add more types of loads to sse_load_f32/64 so we can match without the peephole.