Sanne Wouda [Tue, 28 Mar 2017 10:02:56 +0000 (10:02 +0000)]
[AArch64] [Assembler] option to disable negative immediate conversions
Summary:
Similar to the ARM target in https://reviews.llvm.org/rL298380, this
patch adds identical infrastructure for disabling negative immediate
conversions, and converts the existing aliases to the new infrastructure.
Igor Breger [Tue, 28 Mar 2017 09:35:06 +0000 (09:35 +0000)]
[GlobalISel][X86] support G_FRAME_INDEX instruction selection.
Summary:
For G_LOAD/G_STORE, add an alternative RegisterBank mapping.
For G_LOAD, both Fast and Greedy modes choose the same RegisterBank mapping (GprRegBank) for the G_LOAD + G_FADD sequence, and can't get rid of the cross-register-bank copy GprRegBank->VecRegBank.
Craig Topper [Tue, 28 Mar 2017 05:32:48 +0000 (05:32 +0000)]
[APInt] Use 'unsigned' instead of 'unsigned int' in the interface to the APInt tc functions. This is more consistent with the rest of the codebase. NFC
Kevin Enderby [Mon, 27 Mar 2017 20:09:23 +0000 (20:09 +0000)]
Add the error handling for Mach-O dyld compact lazy bind, weak bind and
rebase entry errors and test cases for each of the error checks.
Also verified with Nick Kledzik that a BIND_OPCODE_SET_ADDEND_SLEB
opcode is legal in a lazy bind table, so code that had that as an error
check was removed.
With the MachORebaseEntry and MachOBindEntry classes now returning
an llvm::Error in all cases for malformed input, the Malformed variables
and the logic to set and use them are no longer needed and have been
removed from those classes.
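As a rough sketch of the resulting usage pattern (the entry-walking code
here is hypothetical, not the exact MachORebaseEntry/MachOBindEntry API),
callers now check an llvm::Error after the walk instead of consulting a
Malformed flag:

    #include "llvm/Support/Error.h"
    #include "llvm/Support/raw_ostream.h"
    using namespace llvm;

    void walkBindEntries(/* some iterable of bind entries */) {
      Error Err = Error::success();
      // for (const auto &Entry : bindEntries(Err)) { ... use Entry ... }
      // Each step of the iteration may set Err on malformed input.
      if (Err)
        logAllUnhandledErrors(std::move(Err), errs(), "malformed bind table: ");
    }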
Also, in a few places, removed the redundant assignment of Done to true
when also calling moveToEnd(), since moveToEnd() already does that assignment.
This leaves only the dyld compact export entries still needing error
handling to be added for the dyld compact info.
Matthew Simpson [Mon, 27 Mar 2017 20:07:38 +0000 (20:07 +0000)]
[LV] Transform truncations of non-primary induction variables
The vectorizer tries to replace truncations of induction variables with new
induction variables having the smaller type. After r295063, this optimization
was applied to all integer induction variables, including non-primary ones.
When optimizing the truncation of a non-primary induction variable, we still
need to transform the new induction so that it has the correct start value.
This should fix PR32419.
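As a hypothetical illustration (not taken from the patch or its tests), a
loop like the following has a non-primary induction variable j whose
truncated value feeds the stores, and whose non-zero start value the new
induction must preserve:

    // i is the primary (loop-controlling) induction variable; j is a
    // secondary induction variable with its own start value and step.
    void f(int *dst, long long n, long long start) {
      long long j = start;
      for (long long i = 0; i < n; ++i) {
        dst[i] = (int)j;   // truncation of the non-primary induction j
        j += 2;
      }
    }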
[TableGen] Print #nnn as a name of a non-native reg unit with id nnn
When using -debug with -gen-register-info, tablegen will crash when
trying to print a name of a non-native register unit. This patch only
affects the debug information generated while running llvm-tblgen,
and has no impact on the compilable code coming out of it.
[Support] Avoid concurrency hazard in signal handler registration
Several static functions from the signal API can be invoked
simultaneously; RemoveFileOnSignal for instance can be called indirectly
by multiple parallel loadModule() invocations, which might lead to
the assertion:
Assertion failed: (NumRegisteredSignals < array_lengthof(RegisteredSignalInfo) && "Out of space for signal handlers!"),
function RegisterHandler, file /llvm/lib/Support/Unix/Signals.inc, line 105.
RemoveFileOnSignal calls RegisterHandlers(), which isn't currently
mutex protected, leading to the behavior above. This potentially affects
a few other users of RegisterHandlers() too.
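A minimal sketch of the shape of the fix, assuming a plain mutex (the
names below are illustrative, not the actual Signals.inc code):

    #include <mutex>

    static std::mutex SignalRegistrationMutex;   // hypothetical name

    static void RegisterHandlers() {
      // Serialise concurrent callers (e.g. RemoveFileOnSignal reached from
      // parallel loadModule() calls) so registration into the
      // RegisteredSignalInfo table cannot race.
      std::lock_guard<std::mutex> Guard(SignalRegistrationMutex);
      // ... install the signal handlers and record them ...
    }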
Craig Topper [Mon, 27 Mar 2017 18:16:17 +0000 (18:16 +0000)]
[APInt] Move operator&=(uint64_t) inline and use memset to clear the upper words.
This method is pretty new and probably isn't used much in the code base, so this should have a negligible size impact. The OR and XOR operators are already inline.
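Roughly, the inlined member now looks like the sketch below (treat
VAL/pVal/getNumWords as the flavour of APInt's internals rather than the
guaranteed exact names): AND-ing with a 64-bit value can only keep bits in
the lowest word, so every higher word is cleared with memset.

    APInt &APInt::operator&=(uint64_t RHS) {
      if (isSingleWord()) {
        VAL &= RHS;
        return *this;
      }
      pVal[0] &= RHS;   // only the lowest 64 bits can survive the AND
      memset(pVal + 1, 0, (getNumWords() - 1) * sizeof(uint64_t));
      return *this;
    }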
Yaxun Liu [Mon, 27 Mar 2017 14:04:01 +0000 (14:04 +0000)]
[AMDGPU] Get address space mapping by target triple environment
As we introduced the target triple environments amdgiz and amdgizcl, the address
space values are no longer enums. We have to decide the values by target triple.
The basic idea is to use struct AMDGPUAS to represent address space values.
For address space values which do not depend on the target triple, use static
const members, so that they don't occupy extra memory space and are equivalent
to compile-time constants.
Since the struct is lightweight and cheap, it can be created on the fly at
the point of usage. Or it can be added as a member of a pass and created at
the beginning of the run* function.
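A rough sketch of the pattern being described (the members and accessor
below are illustrative assumptions, not the exact upstream definition):

    namespace llvm { class Triple; }

    struct AMDGPUAS {
      // Triple-independent address-space values can be compile-time
      // constants held in static const members (no per-object storage).
      static const unsigned MAX_COMMON_ADDRESS = 5;   // hypothetical value
      // Triple-dependent values are ordinary members, filled in when the
      // struct is built for a particular target triple.
      unsigned FLAT_ADDRESS;
      unsigned GLOBAL_ADDRESS;
      unsigned PRIVATE_ADDRESS;
    };

    // Cheap to construct on the fly at the point of use, or once per pass
    // at the start of its run* function. (Hypothetical accessor name.)
    AMDGPUAS getAMDGPUAS(const llvm::Triple &TT);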
Anna Thomas [Mon, 27 Mar 2017 13:52:51 +0000 (13:52 +0000)]
[InstCombine] Avoid incorrect folding of select into phi nodes when incoming element is a vector type
Summary:
We are incorrectly folding selects into phi nodes when the incoming value of a phi
node is a constant vector. This optimization is done in `FoldOpIntoPhi` when the
select condition is a phi node with constant incoming values.
Without the fix, we are miscompiling (i.e. incorrectly folding the
select into the phi node) when the vector contains non-zero
elements.
This patch fixes the miscompile and we will correctly fold based on the
select vector operand (see added test cases).
Daniel Sanders [Mon, 27 Mar 2017 13:43:24 +0000 (13:43 +0000)]
Correct OptionCategoryCompare() in the command line library.
Summary:
It should return <0, 0, or >0 for less-than, equal, and greater-than like
strcmp() (according to the history, it used to be implemented with
strcmp()), but it actually returned 0 or 1 for not-equal and equal.
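A minimal sketch of the intended behaviour (assuming the categories expose
their names as StringRef via getName(); illustrative, not necessarily the
exact upstream code):

    #include "llvm/Support/CommandLine.h"
    using namespace llvm;

    static int OptionCategoryCompare(cl::OptionCategory *A,
                                     cl::OptionCategory *B) {
      // <0, 0, or >0, exactly like strcmp(), so the result can drive an
      // ordering rather than just an equality test.
      return A->getName().compare(B->getName());
    }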
Gadi Haber [Mon, 27 Mar 2017 12:13:37 +0000 (12:13 +0000)]
[X86][AVX2] Bugzilla bug 21281: performance regression in vector interleave in AVX2
This is a patch for the ongoing Bugzilla bug 21281 on the generated X86 code for a matrix transpose8x8 subroutine which requires vector interleaving. The generated code in AVX2 is currently non-optimal and requires 60 instructions, as opposed to only 40 instructions generated for AVX1.
The patch includes a fix for the AVX2 case, where vector unpack instructions use fewer operations than the vector blend operations available in AVX2.
In this case using vector unpack instructions is more efficient.
Craig Topper [Mon, 27 Mar 2017 02:38:17 +0000 (02:38 +0000)]
[IR] Share implementation of pairs of const and non-const methods in BasicBlock using the const version instead of the non-const version
Summary:
During post-commit review of a previous change I made, it was pointed out that const_cast-ing 'this' is technically a bad practice. This patch re-implements all of the methods in BasicBlock that do this to use the const BasicBlock version and const_cast the return value instead.
I think there are still many other classes that do similar things. I may look at more in the future.
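The resulting pattern looks roughly like the sketch below, using
getTerminator() as an illustrative member (a sketch only, not the exact
BasicBlock code):

    class Instruction;

    class BasicBlock {
    public:
      const Instruction *getTerminator() const;   // does the real work
      Instruction *getTerminator() {
        // Forward to the const overload and const_cast the result rather
        // than const_cast-ing 'this' inside the const version.
        return const_cast<Instruction *>(
            static_cast<const BasicBlock *>(this)->getTerminator());
      }
    };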
Shoaib Meenai [Sun, 26 Mar 2017 17:10:11 +0000 (17:10 +0000)]
[llvm-readobj] Prefer ILT to IAT for reading COFF imports
We're seeing binutils ld produce binaries where the import address
table's NameRVA entry is actually a VA instead (i.e. it's already base
relocated), which llvm-readobj then chokes on. Both dumpbin and the
Windows loader are able to handle these binaries correctly, however, and
we can make llvm-readobj handle them correctly too by iterating the
import lookup table (which doesn't have a relocated NameRVA) rather than
the import address table.
The import lookup table and the import address table are supposed to be
identical on disk, and prior to r277298 the import lookup table would be
used by `llvm-readobj -coff-imports` anyway, so this shouldn't have any
functional change (except in the case of our malformed binaries). The
import lookup table can apparently be missing when using old Borland
linkers, so fall back to the import address table in that case.
Simon Pilgrim [Sun, 26 Mar 2017 12:52:28 +0000 (12:52 +0000)]
[X86][AVX512F] Fix reg class for VMOVSSZrr/VMOVSSZrrk and VMOVSDZrr/VMOVSDZrrk
Fixed -verify-machineinstrs errors in fast-isel-select-sse.ll (one of many in PR27481)
The VMOVSSZrr/VMOVSSZrrk and VMOVSDZrr/VMOVSDZrrk instructions were assuming both source registers were V128X when the second is actually supposed to be FR32X/FR64X
The first variant contains all current transformations except
transforming switches into lookup tables. The second variant
contains all current transformations.
The switch-to-lookup-table conversion results in code that is more
difficult to analyze and optimize by other passes. Most importantly,
it can inhibit Dead Code Elimination. As such it is often beneficial to
only apply this transformation very late. A common example is inlining,
which can often result in range restrictions for the switch expression.
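As a hypothetical illustration of the transformation in question, a dense
switch like the first function below can be lowered into a load from a
constant table, which is compact but leaves later passes with less control
flow to reason about:

    int classify(unsigned x) {
      switch (x) {
      case 0: return 10;
      case 1: return 4;
      case 2: return 7;
      case 3: return 1;
      default: return 0;
      }
    }

    // Roughly what the switch-to-lookup-table conversion produces:
    static const int ClassifyTable[4] = {10, 4, 7, 1};
    int classifyLowered(unsigned x) { return x < 4 ? ClassifyTable[x] : 0; }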
Changes in execution time according to LNT:
SingleSource/Benchmarks/Misc/fp-convert +3.03%
MultiSource/Benchmarks/ASC_Sequoia/CrystalMk/CrystalMk -11.20%
MultiSource/Benchmarks/Olden/perimeter/perimeter -10.43%
and a couple of smaller changes. For perimeter it also results in a 2.6%
smaller binary.
Chandler Carruth [Sun, 26 Mar 2017 02:49:23 +0000 (02:49 +0000)]
[IR] Make SwitchInst::CaseIt almost a normal iterator.
This moves it to the iterator facade utilities giving it full random
access semantics, etc. It can also now be used with standard algorithms
like std::all_of and std::any_of and range adaptors like llvm::reverse.
Also make the semantics of iterating match what every other iterator
uses and forbid decrementing past the begin iterator. This was used as
a hacky way to work around iterator invalidation. However, every
instance trying to do this failed to actually avoid touching invalid
iterators despite the clear documentation that the removed and all
subsequent iterators become invalid including the end iterator. So I've
added a return of the next iterator to removeCase and rewritten the
loops that were doing this to correctly follow the iterator pattern of
either incrementing, or removing and assigning fresh values to the
iterator and the end.
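The rewritten loops follow the usual erase-or-increment shape, roughly as
in this sketch (removing zero-valued cases is just an illustrative stand-in
for whatever condition marks a case as removable):

    #include "llvm/IR/Constants.h"
    #include "llvm/IR/Instructions.h"
    using namespace llvm;

    void dropZeroCases(SwitchInst *SI) {
      for (auto I = SI->case_begin(), E = SI->case_end(); I != E;) {
        if (I->getCaseValue()->isZero()) {
          I = SI->removeCase(I);   // removeCase now returns the next iterator
          E = SI->case_end();      // the old end iterator is invalid too
        } else {
          ++I;
        }
      }
    }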
In one case we were trying to go backwards to make this cleaner but it
doesn't actually work. I've made that code match the code we use
everywhere else to remove cases as we iterate. This changes the order of
cases in one test output and I moved that test to CHECK-DAG so it
wouldn't care -- the order isn't semantically meaningful anyways.
Simon Pilgrim [Sat, 25 Mar 2017 19:50:14 +0000 (19:50 +0000)]
[X86][SSE] Generalised CMP+AND1 combine to ZERO/ALLBITS+MASK
Patch to generalize combinePCMPAnd1 (for handling SETCC + ZEXT cases) to work for any input that has zero/all bits set masked with an 'all low bits' mask.
Replaced the implicit assumption of shift availability with a call to SupportedVectorShiftWithImm.
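A scalar illustration of the identity being exploited (my own example, not
code from the patch): when V is known to be all-zeros or all-ones and M is
a mask of the K low bits, V & M equals a logical right shift of V by
BitWidth - K, so the AND can become a shift-by-immediate when one is
available for the vector type.

    #include <cassert>
    #include <cstdint>

    // Assumes 0 < K < 32 and V is either 0 or ~0u (all bits set).
    uint32_t maskLowBits(uint32_t V, unsigned K) {
      assert(V == 0u || V == ~0u);
      uint32_t ByMask  = V & ((1u << K) - 1);   // zero/all-bits value & low-bit mask
      uint32_t ByShift = V >> (32 - K);         // equivalent logical shift
      assert(ByMask == ByShift);
      return ByShift;
    }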
Sanjay Patel [Sat, 25 Mar 2017 16:05:33 +0000 (16:05 +0000)]
[x86] use PMOVMSK to replace memcmp libcalls for 16-byte equality
This is the payoff for D31156 - if a target has efficient comparison instructions for vector-sized equality,
we can replace memcmp calls with inline code that is both smaller and faster.
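As a hand-written illustration of the kind of expansion this enables (not
the compiler's actual output), a 16-byte equality check can be done with a
byte-wise vector compare plus PMOVMSKB:

    #include <immintrin.h>

    bool equal16(const void *A, const void *B) {
      __m128i VA  = _mm_loadu_si128((const __m128i *)A);
      __m128i VB  = _mm_loadu_si128((const __m128i *)B);
      __m128i Cmp = _mm_cmpeq_epi8(VA, VB);      // 0xFF in each equal byte
      return _mm_movemask_epi8(Cmp) == 0xFFFF;   // all 16 bytes matched
    }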
Craig Topper [Sat, 25 Mar 2017 06:52:52 +0000 (06:52 +0000)]
[InstCombine] Change the interface of SimplifyDemandedBits so that it takes the instruction and operand instead of the Use.
The first thing it did was get the User for the Use to get the instruction back. This requires looking through the Uses for the User using the waymarking walk. That's pretty fast, but it's probably still better to just pass the Instruction we already had.
Evgeniy Stepanov [Sat, 25 Mar 2017 01:01:11 +0000 (01:01 +0000)]
[asan] Put ctor/dtor in comdat.
When possible, put ASan ctor/dtor in comdat.
The only reason not to is global registration, which can be
TU-specific. This is not the case when there are no instrumented
globals. This is also limited to ELF targets, because MachO does
not have comdat, and COFF linkers may GC comdat constructors.
The benefit of this is a lot less __asan_init() calls: one per DSO
instead of one per TU. It's also necessary for the upcoming
gc-sections-for-globals change on Linux, where multiple references to
section start symbols trigger quadratic behaviour in the gold linker.
[libFuzzer] read asan's dedup_token while minimizing a crash and stop minimization if another bug was found during minimization (https://github.com/google/oss-fuzz/issues/452)
Reid Kleckner [Fri, 24 Mar 2017 23:28:42 +0000 (23:28 +0000)]
[codeview] Don't assert when the user violates the ODR
If we have an array of a user-defined aggregates for which there was an
ODR violation, then the array size will not necessarily match the number
of elements times the size of the element.
Jessica Paquette [Fri, 24 Mar 2017 23:00:21 +0000 (23:00 +0000)]
[Outliner] Revert r298734.
When I tested r298734, I thought that red zones were enabled by default like they
are on X86. Since red zones are behind a flag on AArch64, the testing wasn't valid.
Move spill size and alignment info from MC to TargetRegisterInfo
This is another step towards implementing register classes with
parametrized register/spill sizes and value types.
This is an updated version of r298652. The difference is that MCRegisterClass
still contains register size, available as getPhysRegSize(). The
old function getSize was retained as a temporary measure to avoid build
breakage for out-of-tree targets.