Adrian McCarthy [Thu, 22 Jun 2017 18:43:18 +0000 (18:43 +0000)]
Add IDs and clone methods to NativeRawSymbol
All NativeRawSymbols will have a unique symbol ID (retrievable via
getSymIndexId). For now, these are initialized to 0, but soon the
NativeSession will be responsible for creating the raw symbols, and it will
assign unique IDs.
The symbol cache in the NativeSession will also require the ability to clone
raw symbols, so I've provided implementations for that as well.
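As a rough illustration of the shape this describes, here is a minimal C++ sketch; the class name, field name, and use of std::make_unique are assumptions for the example, and only getSymIndexId and clone come from the description above.

  #include <cstdint>
  #include <memory>

  // Minimal sketch (not the actual LLVM class): a raw symbol that carries a
  // unique ID and can be cloned by a caching session.
  class NativeRawSymbolSketch {
    uint32_t SymbolId = 0; // for now 0; later assigned by the NativeSession
  public:
    NativeRawSymbolSketch() = default;
    explicit NativeRawSymbolSketch(uint32_t Id) : SymbolId(Id) {}
    virtual ~NativeRawSymbolSketch() = default;

    uint32_t getSymIndexId() const { return SymbolId; }

    // The session's symbol cache can duplicate raw symbols through this hook.
    virtual std::unique_ptr<NativeRawSymbolSketch> clone() const {
      return std::make_unique<NativeRawSymbolSketch>(*this);
    }
  };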
Adrian McCarthy [Thu, 22 Jun 2017 18:42:23 +0000 (18:42 +0000)]
Make IPDBSession::getGlobalScope a non-const method
There doesn't seem to be a compelling reason why this method should be const
other than that it was possible with the DIA implementation. The native session
is going to act as a symbol factory and cache. This could be achieved with
mutable (and the existing const_cast), but it seems cleaner to accept that
this method affects the state of the session.
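For illustration, the interface change amounts to dropping the const qualifier; the return type below follows the existing IPDBSession interface, and the rest of the class is elided (the sketch class name is not the real one).

  #include <memory>

  class PDBSymbolExe; // forward declaration for the sketch

  // Sketch of the signature change only (rest of the interface elided):
  class IPDBSessionSketch {
  public:
    virtual ~IPDBSessionSketch() = default;
    // was: virtual std::unique_ptr<PDBSymbolExe> getGlobalScope() const = 0;
    virtual std::unique_ptr<PDBSymbolExe> getGlobalScope() = 0;
  };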
Sanjay Patel [Thu, 22 Jun 2017 18:11:19 +0000 (18:11 +0000)]
[x86] add/sub (X==0) --> sbb(neg X)
Our handling of select-of-constants is lumpy in IR (https://reviews.llvm.org/D24480),
lumpy in DAGCombiner, and lumpy in X86ISelLowering. That's why we only had the 'sbb'
codegen in 1 out of the 4 tests. This is a step towards smoothing that out.
First, show that all of these IR forms are equivalent:
http://rise4fun.com/Alive/mx
Second, show that the 'sbb' version is faster/smaller. IACA output for SandyBridge
(later Intel and AMD chips are similar based on Agner's tables):
This is the "obvious" x86 codegen (what gcc appears to produce currently):
|  Num Of |              Ports pressure in cycles               |    |
|   Uops  |  0  - DV  |  1  |  2  -  D  |  3  -  D  |  4  |  5  |    |
----------------------------------------------------------------------
|   1*    |           |     |           |           |     |     |    | xor eax, eax
|   1     | 1.0       |     |           |           |     |     | CP | test edi, edi
|   1     |           |     |           |           |     | 1.0 | CP | setnz al
|   1     |           | 1.0 |           |           |     |     | CP | neg eax
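The second (sbb-based) IACA table is omitted above; as a stand-in, here is a small C++ check, not taken from the patch, of the arithmetic identity behind the trick: `neg x` sets CF exactly when x is non-zero, and `sbb r, r` then materializes the 0/-1 mask -(x != 0), which equals (x == 0) - 1.

  #include <cassert>
  #include <cstdint>

  // Mask that `neg x` + `sbb r, r` would materialize: -(x != 0), i.e. 0 or -1,
  // which is the same as (x == 0) - 1.
  static int32_t zeroTestMask(uint32_t X) { return X != 0 ? -1 : 0; }

  int main() {
    for (uint32_t X : {0u, 1u, 7u, 0xFFFFFFFFu})
      for (int32_t Y : {-5, 0, 42}) {
        int32_t M = zeroTestMask(X);
        assert(Y + (X == 0) == Y + M + 1);   // add of (X==0) via the mask
        assert(Y - (X == 0) == (Y - 1) - M); // sub of (X==0) via the mask
      }
    return 0;
  }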
Kevin Enderby [Thu, 22 Jun 2017 17:41:22 +0000 (17:41 +0000)]
Updated llvm-objdump's symbolic disassembly for x86_64 Mach-O MH_KEXT_BUNDLE
file types so it symbolically disassembles operands using the external
relocation entries.
Craig Topper [Thu, 22 Jun 2017 16:23:30 +0000 (16:23 +0000)]
[InstCombine] Teach foldSelectICmpAndOr to recognize (select (icmp slt (trunc (X)), 0), Y, (or Y, C2))
Summary:
InstCombine sometimes turns (icmp eq (and X, C1), 0) into (icmp slt (trunc (X)), 0). This breaks foldSelectICmpAndOr's ability to recognize (select (icmp eq (and X, C1), 0), Y, (or Y, C2))->(or (shl (and X, C1), C3), Y).
This patch tries to recover that. I had to flip around some of the early-out checks so that I could create a new And instruction during the compare processing without the risk of it never getting used.
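A concrete, hedged illustration of the recovered fold with hypothetical constants C1 = 8 and C2 = 32 (so C3 = 2); this is just a brute-force check of the equivalence, not the InstCombine code.

  #include <cassert>
  #include <cstdint>

  int main() {
    // select((X & C1) == 0, Y, Y | C2)  ==  ((X & C1) << C3) | Y
    // with C1 = 8, C2 = 32, C3 = 2 (so C2 == C1 << C3).
    for (uint32_t X = 0; X < 64; ++X)
      for (uint32_t Y : {0u, 5u, 0xABCDu}) {
        uint32_t SelForm  = ((X & 8) == 0) ? Y : (Y | 32);
        uint32_t FoldForm = ((X & 8) << 2) | Y;
        assert(SelForm == FoldForm);
      }
    return 0;
  }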
There are 2 parts to this patch made simultaneously to avoid a regression.
We're reversing the canonicalization that moves bitwise vector ops before bitcasts.
We're moving bitwise vector ops *after* bitcasts instead. That's the 1st and 3rd hunks
of the patch. The motivation is that there's only one fold that currently depends on
the existing canonicalization (see next), but there are many folds that would
automatically benefit from the new canonicalization.
PR33138 ( https://bugs.llvm.org/show_bug.cgi?id=33138 ) shows why/how we have these
patterns in IR.
There's an or(and,andn) pattern that requires an adjustment in order to continue matching
to 'select' because the bitcast changes position. This match is unfortunately complicated
because it requires 4 logic ops with optional bitcast and sext ops.
Test diffs:
1. The bitcast.ll and bitcast-bigendian.ll changes show the most basic difference -
bitcast comes before logic.
2. There are also tests with no diffs in bitcast.ll that verify that we're still doing
folds that were enabled by the previous canonicalization.
3. icmp-xor-signbit.ll shows the payoff. We don't need to adjust existing icmp patterns
to look through bitcasts.
4. logical-select.ll contains several tests for the or(and,andn) --> select fold to
verify that we are still handling those cases. The lone diff shows the movement of
the bitcast from the new canonicalization rule.
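For reference, the or(and, andn) pattern mentioned above is a per-bit select; the following small check, which is illustrative only and scalar rather than the vector/bitcast form the matcher handles, spells that out.

  #include <cassert>
  #include <cstdint>

  int main() {
    // (A & C) | (B & ~C) picks each bit from A where C is set, else from B.
    for (uint32_t C : {0u, 0xF0F0F0F0u, 0xFFFFFFFFu})
      for (uint32_t A : {0u, 0x12345678u})
        for (uint32_t B : {0u, 0x9ABCDEF0u}) {
          uint32_t BitSelect = (A & C) | (B & ~C);
          uint32_t Reference = 0;
          for (unsigned I = 0; I < 32; ++I) {
            uint32_t Bit = 1u << I;
            Reference |= (C & Bit) ? (A & Bit) : (B & Bit);
          }
          assert(BitSelect == Reference);
        }
    return 0;
  }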
Pavel Labath [Thu, 22 Jun 2017 14:18:55 +0000 (14:18 +0000)]
Revert "[Support] Add RetryAfterSignal helper function" and subsequent fix
The fix in r306003 uncovered a pretty fundamental problem: the libc++
implementation of std::result_of does not handle the prototype of
open(2) correctly (presumably because it contains ...). This makes the
whole function unusable in its current form, so I am also reverting the
original commit (r305892), which introduced the function, at least until
I figure out a way to solve the libc++ issue.
Pavel Labath [Thu, 22 Jun 2017 13:55:54 +0000 (13:55 +0000)]
[Support] Fix return type deduction in RetryAfterSignal
The default value of the ResultT template argument (which was there only
to avoid spelling out the long std::result_of template multiple times)
was being overridden by function-call template argument deduction. This
manifested itself as a compiler error when calling the function as
FILE *X = RetryAfterSignal(nullptr, fopen, ...)
because the function would try to assign the result of fopen to
nullptr_t, but a more insidious side effect was that
RetryAfterSignal(-1, read, ...) would return "int" instead of "ssize_t",
losing precision along the way.
I fix this by having the function take the argument in a way that
prevents argument deduction from kicking in and add a test that makes
sure the return type is correct.
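A minimal sketch of one way to keep the failure value from driving deduction: route that parameter's type through a non-deduced context so ResultT comes only from its default. This illustrates the technique rather than the exact LLVM fix, and it uses C++14 std::result_of_t for brevity.

  #include <cerrno>
  #include <type_traits>

  // Sketch: `Failure` is taken via std::common_type<ResultT>::type, a
  // non-deduced context, so passing -1 or nullptr cannot override the
  // default ResultT (the callee's return type).
  template <typename Fn, typename... Args,
            typename ResultT = std::result_of_t<Fn(Args...)>>
  ResultT retryAfterSignalSketch(typename std::common_type<ResultT>::type Failure,
                                 Fn F, Args... As) {
    ResultT Res;
    do {
      errno = 0;
      Res = F(As...);
    } while (Res == Failure && errno == EINTR);
    return Res;
  }

  // Usage (hypothetical): FILE *Fp = retryAfterSignalSketch(nullptr, fopen, "f", "r");
  // retryAfterSignalSketch(-1, read, fd, buf, n) now yields ssize_t, not int.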
Pavel Labath [Thu, 22 Jun 2017 13:11:50 +0000 (13:11 +0000)]
[Testing/Support] Remove the const_cast in TakeExpected
Summary:
The const_cast in the "const" version of TakeExpected was quite
dangerous, as the function does indeed modify the apparently const
argument.
I assume the reason the const overload was added was to make the
function bind to xvalues (temporaries). That can also be achieved with
rvalue references, so I use that instead.
Using the ASSERT macros on const Expected objects will now become
illegal, but I believe that is correct, as it is not actually possible
to inspect the error stored in an Expected object without modifying it.
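A hedged sketch of the overload pairing described above, with simplified names and the real helper's holder return type elided: the lvalue overload does the mutation openly, and the rvalue overload lets temporaries bind without a const_cast.

  #include "llvm/Support/Error.h"

  // Illustration only: mutate a (non-const) Expected, and accept temporaries
  // through an rvalue-reference overload instead of a const reference.
  template <typename T> llvm::Error takeErrorSketch(llvm::Expected<T> &Exp) {
    return Exp.takeError(); // genuinely modifies Exp, so no const here
  }
  template <typename T> llvm::Error takeErrorSketch(llvm::Expected<T> &&Exp) {
    return takeErrorSketch(Exp); // inside the function, Exp names an lvalue
  }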
Sam Kolton [Thu, 22 Jun 2017 12:42:14 +0000 (12:42 +0000)]
[AMDGPU] SDWA: remove support for VOP2 instructions that have only 64-bit encoding
Summary:
Although these instructions are listed as VOP2, they are treated as VOP3 in the specs. They should not support SDWA.
There are no real instructions for them, but there are pseudo instructions.
Kristof Beyls [Thu, 22 Jun 2017 12:11:38 +0000 (12:11 +0000)]
Don't conditionalize Neon instructions, even in IT blocks.
This has been deprecated since ARMARM v7-AR, release C.b, published back
in 2012.
This also removes test/CodeGen/Thumb2/ifcvt-neon.ll that originally was
introduced to check that conditionalization of Neon instructions did
happen when generating Thumb2. However, the test had evolved and was no
longer testing that. Rather than trying to adapt that test, this commit
introduces test/CodeGen/Thumb2/ifcvt-neon-deprecated.mir, since we can
now use the MIR framework to write nicer/more maintainable tests.
Sagar Thakur [Thu, 22 Jun 2017 11:49:19 +0000 (11:49 +0000)]
[mips] Adds support for R_MIPS_26, HIGHER, HIGHEST relocations in RuntimeDyld
Now that support for the N64 static relocation model has been added to llvm, it also needs to be supported in RuntimeDyld, because lldb uses ExecutionEngine to evaluate expressions.
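For orientation only, the higher/highest pieces these relocations need are usually computed as follows for the N64 ABI (the rounding constants carry up from the lower halves); this is an assumption about the formulas, not code from the patch.

  #include <cstdint>

  // %higher: bits 32-47 of the value, adjusted for carries out of the low 32 bits.
  static uint32_t mipsHigher(uint64_t Value) {
    return ((Value + 0x80008000ULL) >> 32) & 0xffff;
  }
  // %highest: bits 48-63, adjusted for carries out of the low 48 bits.
  static uint32_t mipsHighest(uint64_t Value) {
    return ((Value + 0x800080008000ULL) >> 48) & 0xffff;
  }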
John Brawn [Thu, 22 Jun 2017 10:29:31 +0000 (10:29 +0000)]
[ARM] Clean up choice of narrow instructions in ARMAsmParser, NFC
This patch makes a couple of changes to how we decide whether to use the narrow
or wide encoding of thumb2 instructions:
* Common out the detection of the .w qualifier
* Check for the CPSR operand in a consistent way
Florian Hahn [Thu, 22 Jun 2017 09:39:36 +0000 (09:39 +0000)]
[ARM] Add macro fusion for AES instructions.
Summary:
This patch adds a macro fusion using CodeGen/MacroFusion.cpp to pair AES
instructions back to back and adds FeatureFuseAES to enable the feature.
AVX-512: Lowering Masked Gather intrinsic - fixed a bug
Masked gather for vector length 2 was lowered incorrectly for element type i32.
The type <2 x i32> was automatically extended to <2 x i64>, and we generated VPGATHERQQ instead of VPGATHERQD.
The type <2 x float> is extended to <4 x float>, so there is no bug for this type, but the generated sequence could be more optimal.
In this patch I'm fixing the <2 x i32> bug and optimizing the <2 x float> sequence for GATHERs only. The same fix should be done for Scatters as well.
Sam Kolton [Thu, 22 Jun 2017 06:26:41 +0000 (06:26 +0000)]
[AMDGPU] SDWA: add support for GFX9 in peephole pass
Summary:
Added support based on the merged SDWA pseudo instructions. The peephole pass now allows one scalar operand as well as the omod and clamp modifiers.
Added several subtarget features for GFX9 SDWA.
This diff also contains changes from D34026.
Depends D34026
Craig Topper [Thu, 22 Jun 2017 05:20:39 +0000 (05:20 +0000)]
[InstCombine] Add test cases to demonstrate that and->xnor and or->xnor folding can create more instructions than it removed when there are multiple uses. NFC
Reid Kleckner [Thu, 22 Jun 2017 01:10:29 +0000 (01:10 +0000)]
[llvm-readobj] Dump the COFF image load config
This includes the safe SEH tables and the control flow guard function
table. LLD will emit the guard table soon, and I need a tool that dumps
them for testing.
Bob Haarman [Wed, 21 Jun 2017 22:31:52 +0000 (22:31 +0000)]
[codeview] respect signedness of APSInts when printing to YAML
Summary:
This fixes a bug where we always treat APSInts in Codeview as
signed when writing them to YAML. One symptom of this problem is that
llvm-pdbdump raw would show Enumerator Values that differ between the
original PDB and a PDB that has been round-tripped through YAML.
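The fix boils down to the distinction sketched below; this is an illustration of printing an APSInt with its signedness respected, not the YAML-IO code itself, and it assumes the value fits in 64 bits.

  #include "llvm/ADT/APSInt.h"
  #include "llvm/Support/raw_ostream.h"

  // Print the same bit pattern as signed or unsigned depending on the APSInt,
  // so e.g. an enumerator value of -1 does not round-trip as 4294967295.
  void printAPSIntSketch(const llvm::APSInt &Val, llvm::raw_ostream &OS) {
    if (Val.isSigned())
      OS << Val.getSExtValue();
    else
      OS << Val.getZExtValue();
  }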
NAKAMURA Takumi [Wed, 21 Jun 2017 22:04:07 +0000 (22:04 +0000)]
TableGen.cmake: Use DEPFILE for Ninja Generator with CMake>=3.7.
CMake emits build targets as paths relative to build.ninja, but Ninja doesn't recognize an absolute path (in *.d) as matching a relative path (in build.ninja).
So, make the file names on the command line relative to ${CMAKE_BINARY_DIR}, where build.ninja lives.
Note that tblgen is executed with ${CMAKE_BINARY_DIR} as its working directory.
Dehao Chen [Wed, 21 Jun 2017 22:01:32 +0000 (22:01 +0000)]
Enable vectorizer-maximize-bandwidth by default.
Summary:
vectorizer-maximize-bandwidth is generally useful in terms of performance. I've tested the impact of enabling it by default on speccpu benchmarks on sandybridge machines. The results show a non-negative impact.
The regression on 453.povray seems real, but is due to secondary effects, as all hot functions are bit-identical with and without the flag.
I started this patch to consult upstream opinions on this. It would be greatly appreciated if the community could help test the performance impact of this change on other architectures so that we can decide whether this should be target-dependent.
Craig Topper [Wed, 21 Jun 2017 18:57:00 +0000 (18:57 +0000)]
[InstCombine] Cleanup using commutable matchers. Make a couple helper methods standalone static functions. Put 'if' around variable declaration instead of after. NFC
whitequark [Wed, 21 Jun 2017 18:46:50 +0000 (18:46 +0000)]
Add a "probe-stack" attribute
This attribute is used to ensure the guard page is triggered on stack
overflow. Stack frames larger than the guard page size will generate
a call to __probestack to touch each page so the guard page won't
be skipped.
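Assuming the attribute value names the probe function (as the description above suggests), attaching it from C++ would look roughly like the sketch below; in textual IR that corresponds to "probe-stack"="__probestack" in the function's attribute group.

  #include "llvm/IR/Function.h"

  // Request stack probing for one function; frames larger than the guard page
  // will then call the named probe routine to touch each page.
  void enableStackProbesSketch(llvm::Function &F) {
    F.addFnAttr("probe-stack", "__probestack");
  }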
Michael Kruse [Wed, 21 Jun 2017 18:25:37 +0000 (18:25 +0000)]
[BasicAA] Use MayAlias instead of PartialAlias for fallback.
Using various methods, BasicAA tries to determine whether two
GetElementPtr memory locations alias when their base pointers are known
to be equal. When none of its heuristics are applicable, it falls back
to PartialAlias in order, according to a comment, to keep TBAA from making a
wrong decision in the case of unions and malloc. PartialAlias is not correct,
because a PartialAlias result implies that some, but not all, bytes
overlap, which is not necessarily the case here.
AAResults returns the first analysis result that is not MayAlias.
BasicAA is always the first alias analysis. When it returns
PartialAlias, no other analysis is queried to give a more exact result
(which was the intention of returning PartialAlias instead of MayAlias).
For instance, ScopedAA could return a more accurate result.
The PartialAlias hack was introduced in r131781 (and re-applied in
r132632 after some reverts) to fix llvm.org/PR9971 where TBAA returns a
wrong NoAlias result due to a union. A test case for the malloc case
mentioned in the comment was not provided and I don't think it is
affected since it returns an omnipotent char anyway.
Since r303851 (https://reviews.llvm.org/D33328) clang does not emit specific
TBAA for unions anymore (but "omnipotent char" instead). Hence, the
PartialAlias workaround is no longer required.
This patch passes the test-suite and check-llvm/check-clang of a
self-hosted build on x64.
Reid Kleckner [Wed, 21 Jun 2017 17:25:56 +0000 (17:25 +0000)]
[PDB] Add symbols to the PDB
Summary:
The main complexity in adding symbol records is that we need to
"relocate" all the type indices. Type indices do not have anything like
relocations, that is, an opaque data structure describing where to find
existing type indices for fixups. The linker just has to "know" where the type
references are in the symbol records. I added an overload of
`discoverTypeIndices` that works on symbol records, and it seems to be
able to link the standard library.
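A heavily simplified sketch of the "relocation by knowledge" idea: discover where the type-index fields live inside a symbol record, then rewrite each one through a per-object map. The helper discoverTypeIndexOffsetsSketch and the flat index map are hypothetical; the real code also has to deal with the 0x1000 index base and record alignment.

  #include "llvm/ADT/ArrayRef.h"
  #include "llvm/ADT/SmallVector.h"
  #include <cstdint>
  #include <cstring>

  // Hypothetical helper: fills Offsets with the byte offsets of 32-bit
  // type-index fields inside the given symbol record.
  void discoverTypeIndexOffsetsSketch(llvm::ArrayRef<uint8_t> Record,
                                      llvm::SmallVectorImpl<uint32_t> &Offsets);

  // Rewrite every type index in a symbol record from the object file's index
  // space to the merged PDB type stream.
  void remapSymbolTypeIndicesSketch(llvm::MutableArrayRef<uint8_t> Record,
                                    llvm::ArrayRef<uint32_t> ObjToPdbMap) {
    llvm::SmallVector<uint32_t, 4> Offsets;
    discoverTypeIndexOffsetsSketch(Record, Offsets);
    for (uint32_t Off : Offsets) {
      uint32_t TI;
      std::memcpy(&TI, Record.data() + Off, sizeof(TI)); // unaligned-safe read
      TI = ObjToPdbMap[TI];                               // simplified remap
      std::memcpy(Record.data() + Off, &TI, sizeof(TI));
    }
  }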
Define target hook isReallyTriviallyReMaterializable() to explicitly specify
PowerPC instructions that are trivially rematerializable. This will allow
the MachineLICM pass to accurately identify PPC instructions that should always
be hoisted.
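As an example of the shape such a hook takes, a hedged sketch follows; the opcode list is purely illustrative (constant-materializing instructions with no register inputs), not the set the patch actually whitelists.

  #include "llvm/CodeGen/MachineInstr.h"
  #include "MCTargetDesc/PPCMCTargetDesc.h" // target-internal header for PPC::* opcodes

  // Illustrative predicate: instructions whose value depends only on an
  // immediate can be rematerialized at any point without changing semantics.
  static bool isTriviallyRematSketch(const llvm::MachineInstr &MI) {
    switch (MI.getOpcode()) {
    case llvm::PPC::LI:  // load 16-bit signed immediate
    case llvm::PPC::LIS: // load immediate shifted
      return true;
    default:
      return false;
    }
  }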
Craig Topper [Wed, 21 Jun 2017 16:32:35 +0000 (16:32 +0000)]
[InstCombine] Add range metadata to cttz/ctlz/ctpop intrinsic calls based on known bits
Summary:
I noticed that passing known bits across these intrinsics isn't great at capturing the information we really know. Turning known bits of the input into known bits of a count output isn't able to convey a lot of what we really know.
This patch adds range metadata to these intrinsics based on the known bits.
Currently the patch punts if we already have range metadata present.
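An illustrative sketch (not the InstCombine code itself) of the idea for ctpop: the known-one bits bound the count from below and the known-zero bits bound it from above, and that interval becomes a half-open !range node.

  #include "llvm/IR/IntrinsicInst.h"
  #include "llvm/IR/LLVMContext.h"
  #include "llvm/IR/MDBuilder.h"
  #include "llvm/Support/KnownBits.h"

  // Attach !range [Lo, Hi] to a ctpop call based on known bits of its operand.
  void annotateCtpopRangeSketch(llvm::IntrinsicInst *II,
                                const llvm::KnownBits &Known) {
    unsigned BitWidth = Known.getBitWidth();
    unsigned Lo = Known.One.countPopulation();             // at least this many set bits
    unsigned Hi = BitWidth - Known.Zero.countPopulation(); // at most this many
    llvm::MDBuilder MDB(II->getContext());
    II->setMetadata(llvm::LLVMContext::MD_range,
                    MDB.createRange(llvm::APInt(BitWidth, Lo),
                                    llvm::APInt(BitWidth, Hi + 1))); // half-open
  }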
Nirav Dave [Wed, 21 Jun 2017 15:07:30 +0000 (15:07 +0000)]
[DAG] Remove Node construction from BaseIndexOffset match. NFCI.
Move GlobalAddress offset decomposition from the initial match into the
comparison check, removing the possibility of constructing a new
offset global address when examining addresses.