Eli Friedman [Fri, 22 Mar 2019 18:37:26 +0000 (18:37 +0000)]
[ARM] [NFC] Use tGPR in patterns where appropriate.
This doesn't have any practical effect at the moment, as far as I know,
because high registers aren't allocatable in Thumb1 mode. But it might
matter in the future.
James Y Knight [Fri, 22 Mar 2019 18:27:13 +0000 (18:27 +0000)]
IR: Support parsing numeric block ids, and emit them in textual output.
Just as llvm IR supports explicitly specifying numeric value ids
for instructions, and emits them by default in textual output, now do
the same for blocks.
This is a slightly incompatible change in the textual IR format.
Previously, llvm would parse numeric labels as string names. E.g.
define void @f() {
br label %"55"
55:
ret void
}
defined a label *named* "55", even without needing to be quoted, while
the reference required quoting. Now, if you intend a block label which
looks like a value number to be a name, you must quote it in the
definition too (e.g. `"55":`).
Previously, llvm would print nameless blocks only as a comment, and
would omit the comment entirely if the block had no predecessors. This could cause confusion
for readers of the IR, just as unnamed instructions did prior to the
addition of "%5 = " syntax, back in 2008 (PR2480).
Now, it will always print a label for an unnamed block, with the
exception of the entry block. (IMO it may be better to print it for
the entry-block as well. However, that requires updating many more
tests.)
Thus, the following is supported, and is the canonical printing:
define i32 @f(i32, i32) {
%3 = add i32 %0, %1
br label %4
4:
ret i32 %3
}
New test cases covering this behavior are added, and other tests
updated as required.
Nikita Popov [Fri, 22 Mar 2019 17:51:40 +0000 (17:51 +0000)]
[ValueTracking] Avoid redundant known bits calculation in computeOverflowForSignedAdd()
We're already computing the known bits of the operands here. If the
known bits of the operands can determine the sign bit of the result,
we'll already catch this in signedAddMayOverflow(). The only other way
(as the comment already indicates) that computing known bits on the whole
add gives us new information is if there's an assumption on it.
As such, we change the code to only compute known bits from assumptions,
instead of computing full known bits on the add (which would unnecessarily
recompute the known bits of the operands as well).
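As a rough illustration of why the operands' known bits usually suffice here, the
following standalone C++ sketch (my own simplification, not LLVM's KnownBits or
signedAddMayOverflow code) shows how the known sign bits of the operands, plus the
result sign they imply, already decide the overflow question in the interesting cases:

enum class OverflowResult { NeverOverflows, MayOverflow, AlwaysOverflows };

struct KnownSign {
  bool NonNegative = false; // sign bit known to be 0
  bool Negative = false;    // sign bit known to be 1
};

OverflowResult signedAddOverflowFromSigns(KnownSign L, KnownSign R,
                                          KnownSign Sum) {
  // Operands with opposite known signs can never wrap in a signed add.
  if ((L.NonNegative && R.Negative) || (L.Negative && R.NonNegative))
    return OverflowResult::NeverOverflows;
  // non-negative + non-negative wraps exactly when the truncated sum turns negative.
  if (L.NonNegative && R.NonNegative) {
    if (Sum.Negative)
      return OverflowResult::AlwaysOverflows;
    if (Sum.NonNegative)
      return OverflowResult::NeverOverflows;
  }
  // negative + negative wraps exactly when the truncated sum turns non-negative.
  if (L.Negative && R.Negative) {
    if (Sum.NonNegative)
      return OverflowResult::AlwaysOverflows;
    if (Sum.Negative)
      return OverflowResult::NeverOverflows;
  }
  return OverflowResult::MayOverflow;
}

In the cases above, the result's sign is itself derivable from the operands'
known bits, which is why recomputing known bits on the add adds nothing new
beyond what an assumption on the add could contribute.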
Alina Sbirlea [Fri, 22 Mar 2019 17:22:19 +0000 (17:22 +0000)]
[AliasAnalysis] Second prototype to cache BasicAA / anyAA state.
Summary:
Adding contained caching to AliasAnalysis. BasicAA is currently the only one using it.
AA changes:
- This patch is pulling the caches from BasicAAResults to AAResults, meaning the getModRefInfo call benefits from the IsCapturedCache as well when in "batch mode".
- All AAResultBase implementations add the QueryInfo member to all APIs. AAResults APIs maintain wrapper APIs such that all alias()/getModRefInfo call sites are unchanged.
- AA now provides a BatchAAResults type as a wrapper to AAResults. It keeps the AAResults instance and a QueryInfo instantiated to batch mode. It delegates all work to the AAResults instance with the batched QueryInfo. More API wrappers may be needed in BatchAAResults; only the minimum needed is currently added.
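A hypothetical usage sketch of the batch mode described above (the helper and the
set of locations are invented; it only assumes BatchAAResults is constructed from an
AAResults and forwards alias() queries while keeping its caches alive):

#include "llvm/ADT/ArrayRef.h"
#include "llvm/Analysis/AliasAnalysis.h"
using namespace llvm;

// Run many alias queries against IR that is not being modified, sharing one
// query cache instead of starting from a cold cache on every alias() call.
static unsigned countMustAliasPairs(AAResults &AA,
                                    ArrayRef<MemoryLocation> Locs) {
  BatchAAResults BatchAA(AA); // caches persist across all queries below
  unsigned Count = 0;
  for (size_t I = 0; I < Locs.size(); ++I)
    for (size_t J = I + 1; J < Locs.size(); ++J)
      if (BatchAA.alias(Locs[I], Locs[J]) == MustAlias)
        ++Count;
  return Count;
}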
MemorySSA changes:
- All walkers are now templated on the AA used (AliasAnalysis=AAResults or BatchAAResults).
- At build time, we optimize uses; now we create a local walker (lives only as long as OptimizeUses does) using BatchAAResults.
- All Walkers have an internal AA and only use that now, never the AA in MemorySSA. The Walkers receive the AA they will use when built.
- The walker we use for queries after the build is instantiated on AliasAnalysis and is built after building MemorySSA and setting AA.
- All static methods doing walking are now templated on AliasAnalysisType if they are used both during build and after. If used only during build, the method now only takes a BatchAAResults. If used only after build, the method now takes an AliasAnalysis.
Bixia Zheng [Fri, 22 Mar 2019 16:37:37 +0000 (16:37 +0000)]
[ConstantFolding] Fix GetConstantFoldFPValue to avoid cast overflow.
Summary:
In C++, the behavior of casting a double value that is beyond the range
of a single precision floating-point to a float value is undefined. This
change replaces such a cast with APFloat::convert to convert the value,
which is consistent with how we convert a double value to a half value.
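For reference, a minimal sketch of that conversion pattern (assuming the usual
APFloat API; not the exact code in GetConstantFoldFPValue):

#include "llvm/ADT/APFloat.h"
using namespace llvm;

// Convert a double to single precision without the undefined behaviour of a
// raw cast on out-of-range values: APFloat::convert rounds per IEEE semantics
// (overflow becomes infinity) and reports whether information was lost.
static float safeDoubleToFloat(double V) {
  APFloat F(V);
  bool LosesInfo = false;
  F.convert(APFloat::IEEEsingle(), APFloat::rmNearestTiesToEven, &LosesInfo);
  return F.convertToFloat();
}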
Nico Weber [Fri, 22 Mar 2019 16:34:39 +0000 (16:34 +0000)]
Make clang-move use same file naming convention as other tools
In all the other clang-foo tools, the main library file is called
Foo.cpp and the file in the tool/ folder is called ClangFoo.cpp.
Do this for clang-move too.
Tim Renouf [Fri, 22 Mar 2019 15:53:50 +0000 (15:53 +0000)]
InstCombineSimplifyDemanded: Allow v3 results for AMDGCN buffer and image intrinsics
This helps to avoid the situation where RA spots that only 3 elements of the
v4f32 result of a load are used, and immediately reallocates the 4th
register for something else, requiring a stall waiting for the load.
Xing GUO [Fri, 22 Mar 2019 15:42:13 +0000 (15:42 +0000)]
[llvm-readobj] Separate `Symbol Version` dumpers into `LLVM style` and `GNU style`
Summary:
Currently, llvm-readobj can dump symbol version sections only in LLVM style. In this patch, I would like to separate these dumpers into GNU style and
LLVM style for future implementation.
Tim Renouf [Fri, 22 Mar 2019 15:21:11 +0000 (15:21 +0000)]
[AMDGPU] Use three- and five-dword result type in image ops
Some image ops return three or five dwords. Previously, we modeled that
with a 4 or 8 dword register class. The register allocator could
cleverly spot that some subregs were dead and allocate something else
there, but that caused a de-optimization: waitcnt insertion would
think that the result was used immediately.
This commit allows such an image op to return a three- or five-dword
result, avoiding the above de-optimization.
Tim Renouf [Fri, 22 Mar 2019 14:58:02 +0000 (14:58 +0000)]
[AMDGPU] Implemented dwordx3 variants of buffer/tbuffer load/store intrinsics
Now that we have vec3 MVTs, this commit implements dwordx3 variants of the
buffer intrinsics.
On gfx6, a dwordx3 buffer load intrinsic is implemented as a dwordx4
instruction, and a dwordx3 buffer store intrinsic is not supported.
We need to support the dwordx3 load intrinsic because it is generated by
subtarget-unaware code in InstCombine.
Pavel Labath [Fri, 22 Mar 2019 14:47:26 +0000 (14:47 +0000)]
[ObjectYAML] Add basic minidump generation support
Summary:
This patch adds the ability to read a yaml form of a minidump file and
write it out as binary. Apart from the minidump header and the stream
directory, only three basic stream kinds are supported:
- Text: This kind is used for streams which contain textual data. This
is typically the contents of a /proc file on linux (e.g.
/proc/PID/maps). In this case, we just put the raw stream contents
into the yaml.
- SystemInfo: This stream contains various bits of information about the
host system in binary form. We expose the data in a structured form.
- Raw: This kind is used as a fallback when we don't have any special
knowledge about the stream. In this case, we just print the stream
contents in hex.
For this code to be really useful, more stream kinds will need to be
added (particularly for things like lists of memory regions and loaded
modules). However, these can be added incrementally.
James Henderson [Fri, 22 Mar 2019 12:45:27 +0000 (12:45 +0000)]
[llvm-objcopy]Add coverage for --split-dwo and --output-format
Also fix up a couple of minor issues in the test being updated, where
FileCheck could match on incorrect output, and fix the test case order to
match the struct order.
Alex Bradbury [Fri, 22 Mar 2019 11:21:40 +0000 (11:21 +0000)]
[RISCV] Add basic RV32E definitions and MC layer support
The RISC-V ISA defines RV32E as an alternative "base" instruction set
encoding that differs from RV32I by having only 16 rather than 32 registers.
This patch adds basic definitions for RV32E as well as MC layer support
(assembling, disassembling) and tests. The only supported ABI on RV32E is
ILP32E.
Add a new RISCVFeatures::validate() helper to RISCVUtils which can be called
from codegen or MC layer libraries to validate the combination of TargetTriple
and FeatureBitSet. Other targets have similar checks (e.g. erroring if SPE is
enabled on PPC64 or oddspreg + o32 ABI on Mips), but they either duplicate the
checks (Mips), or fail to check for both codegen and MC codepaths (PPC).
Codegen for the ILP32E ABI support and RV32E codegen are left for a future
patch/patches.
Alex Bradbury [Fri, 22 Mar 2019 10:45:03 +0000 (10:45 +0000)]
[RISCV] Optimize emission of SELECT sequences
This patch optimizes the emission of a sequence of SELECTs with the same
condition, avoiding the insertion of unnecessary control flow. Such a sequence
often occurs when a SELECT of values wider than XLEN is legalized into two
SELECTs with legal types. We have identified several use cases where the
SELECTs could be interleaved with other instructions. Therefore, we extend the
sequence to include non-SELECT instructions if we are able to detect that the
non-SELECT instructions do not impact the optimization.
This patch supersedes https://reviews.llvm.org/D59096, which attempted to
address this issue by introducing a new SelectionDAG node. Hat tip to Eli
Friedman for his feedback on how to best handle this issue.
Differential Revision: https://reviews.llvm.org/D59355
Patch by Luís Marques.
Alex Bradbury [Fri, 22 Mar 2019 10:39:22 +0000 (10:39 +0000)]
[RISCV] Allow conversion of CC logic to bitwise logic
Indicates in the TargetLowering interface that conversions from CC logic to
bitwise logic are allowed. Adds tests that show the benefit when optimization
opportunities are detected. Also adds tests that show that when the optimization
is not applied, correct code is generated (but opportunities for other
optimizations remain).
Differential Revision: https://reviews.llvm.org/D59596
Patch by Luís Marques.
George Rimar [Fri, 22 Mar 2019 10:28:56 +0000 (10:28 +0000)]
[llvm-objcopy] - Fix a st_name of the first symbol table entry.
The spec says that symbol table index 0 both designates the first entry in the table
and serves as the undefined symbol index, and that this entry should be all zeros.
Hence the first symbol table entry has no name and must have st_name == 0.
(http://refspecs.linuxbase.org/elf/gabi4+/ch4.symtab.html)
Currently, we do not emit a zero value for the first symbol table entry.
That happens because we add empty strings to the string builder, which
adds a zero byte for each such case:
(https://github.com/llvm-mirror/llvm/blob/master/lib/MC/StringTableBuilder.cpp#L185)
After the string optimization is performed, it might return non-zero indexes
for the empty strings requested.
The patch fixes this issue for the case above and other sections with no names.
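For illustration, a small standalone sketch of the rule (the field layout mirrors
the ELF64 symbol entry; this is not llvm-objcopy code):

#include <cstdint>

// Index 0 of .symtab is the reserved undefined-symbol entry: every field,
// including st_name, must be zero, so it must not be given an offset into
// .strtab - not even the offset of an empty string that the string-table
// optimization happened to place at a non-zero position.
struct Elf64SymSketch {
  uint32_t st_name;
  uint8_t  st_info;
  uint8_t  st_other;
  uint16_t st_shndx;
  uint64_t st_value;
  uint64_t st_size;
};

constexpr Elf64SymSketch NullSymbol = {0, 0, 0, 0, 0, 0}; // the all-zero entry 0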
George Rimar [Fri, 22 Mar 2019 10:24:37 +0000 (10:24 +0000)]
[llvm-objcopy] - Implement replaceSectionReferences for GroupSection class.
Currently, llvm-objcopy incorrectly handles compression and decompression of
sections from COMDAT groups, because we do not implement
replaceSectionReferences for this type of section.
James Henderson [Fri, 22 Mar 2019 10:21:09 +0000 (10:21 +0000)]
[llvm-objcopy]Add support for *-freebsd output formats
GNU objcopy can support output formats like elf32-i386-freebsd and
elf64-x86-64-freebsd. The only difference from their regular non-freebsd
counterparts that I have observed is that the freebsd versions set the
OS/ABI field to ELFOSABI_FREEBSD. This patch sets the OS/ABI field
based on the format whenever --output-format is specified.
Yonghong Song [Fri, 22 Mar 2019 02:54:47 +0000 (02:54 +0000)]
[BPF] fix flaky btf unit test static-var-derived-type.ll
DataSecEntries is defined as an unordered_map since
order does not really matter.
std::unordered_map<std::string, std::unique_ptr<BTFKindDataSec>>
DataSecEntries;
This seems to make the test static-var-derived-type.ll flaky,
as the two sections ".bss" and ".readonly" have nondeterministic
ordering when iterating over the map, which decides the
output assembly code sequence of the BTF_KIND_DATASEC entries.
Fix the test to have only one data section to remove the
flakiness.
Signed-off-by: Yonghong Song <yhs@fb.com>
Fangrui Song [Fri, 22 Mar 2019 02:43:11 +0000 (02:43 +0000)]
[DWARF] Refactor RelocVisitor and fix computation of SHT_RELA-typed relocation entries
Summary:
getRelocatedValue may compute an incorrect value for SHT_RELA-typed relocation entries.
// DWARFDataExtractor.cpp
uint64_t DWARFDataExtractor::getRelocatedValue(uint32_t Size, uint32_t *Off,
...
// This formula is correct for REL, but may be incorrect for RELA if the value
// stored in the location (getUnsigned(Off, Size)) is not zero.
return getUnsigned(Off, Size) + Rel->Value;
In this patch, we
* refactor these visit* functions to include a new parameter `uint64_t A`.
Since these visit* functions are no longer used as visitors, rename them to resolve*.
+ REL: A is used as the addend. A is the value stored in the location where the
relocation applies: getUnsigned(Off, Size)
+ RELA: The addend encoded in RelocationRef is used, e.g. getELFAddend(R)
* and add another set of supports* functions to check if a given relocation type is handled.
DWARFObjInMemory uses them to fail early.
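A simplified standalone sketch of the addend rule being fixed (not the actual
resolve* signatures):

#include <cstdint>

// For a REL entry the addend A is whatever is already stored at the relocated
// location; for a RELA entry A comes from the relocation record, and the bytes
// stored at the location must not be added in a second time.
uint64_t resolveAbs64(bool IsRela, uint64_t SymbolValue, uint64_t StoredValue,
                      uint64_t RecordAddend) {
  uint64_t A = IsRela ? RecordAddend : StoredValue; // pick the addend source
  return SymbolValue + A;                           // S + A, as for an ABS64 reloc
}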
Yonghong Song [Fri, 22 Mar 2019 01:30:50 +0000 (01:30 +0000)]
[BPF] handle derived type properly for computing type id
Currently, the type id for a derived type is computed incorrectly.
For example,
type #1: int
type #2: ptr to #1
For a global variable "int *a", type #1 will be attributed to variable "a".
This is due to a bug which assigns the type id of the basetype of
that derived type as the derived type's type id. This happens
for the "const", "volatile", "restrict", "typedef" and "pointer" types.
This patch fixes the bug, fixes the existing test cases and adds
a new one focusing on pointers plus other derived types.
Signed-off-by: Yonghong Song <yhs@fb.com>
Amara Emerson [Thu, 21 Mar 2019 22:31:37 +0000 (22:31 +0000)]
[AArch64] Split the neon.addp intrinsic into integer and fp variants.
This is the result of discussions on the list about how to deal with intrinsics
which require codegen to disambiguate them via only the integer/fp overloads.
It causes problems for GlobalISel as some of that information is lost during
translation, while with other operations like IR instructions the information is
encoded into the instruction opcode.
This patch changes clang to emit the new faddp intrinsic if the vector operands
to the builtin have FP element types. LLVM IR AutoUpgrade has been taught to
upgrade existing calls to aarch64.neon.addp with fp vector arguments, and
we remove the workarounds introduced for GlobalISel in r355865.
Steven Wu [Thu, 21 Mar 2019 21:01:31 +0000 (21:01 +0000)]
[Object] Fix reading objects created with -fembed-bitcode-marker
Currently, this fails with many tools, e.g.
$ clang -fembed-bitcode-marker -c -o test.o test.c
$ nm test.o
nm: test.o The file was not recognized as a valid object file
-fembed-bitcode-marker creates an __LLVM,__bitcode section consisting of a single
byte. When reading the object file, IRObjectFile::findBitcodeInObject succeeds,
causing SymbolicFile::createSymbolicFile to try to read the "bitcode" rather
than using the outer Mach-O data - which then fails.
Fix this by making findBitcodeInObject return an error if the section size <= 1.
Matt Arsenault [Thu, 21 Mar 2019 20:56:05 +0000 (20:56 +0000)]
Mips: Don't create copy of nothing
This was creating a copy of the register the pseudo itself was
def'ing, leaving a copy of an undefined register. I'm not sure how
the verifier is not catching this, but this avoids asserting in a
future change to RegAllocFast
Matt Arsenault [Thu, 21 Mar 2019 20:45:36 +0000 (20:45 +0000)]
GlobalISel: Fix RegBankSelect for REG_SEQUENCE
The AArch64 test was broken since the result register already had a
set register class, so this test was a no-op. The mapping verify call
would fail because the result size is not the same as the inputs like
in a copy or phi.
The AMDGPU testcases are half broken and introduce illegal VGPR->SGPR
copies which need much more work to handle correctly (same for phis),
but add them as a baseline.
Akira Hatanaka [Thu, 21 Mar 2019 20:16:09 +0000 (20:16 +0000)]
Don't add a tail keyword to calls to ObjC runtime functions if the calls
are annotated with notail.
r356705 annotated calls to objc_retainAutoreleasedReturnValue with
notail on x86-64. This commit teaches ARC optimizer to check the notail
marker on the call before turning it into a tail call.
Jordan Rupprecht [Thu, 21 Mar 2019 18:45:44 +0000 (18:45 +0000)]
[llvm-objdump] Support arg grouping for -j and -M (e.g. llvm-objdump -sj.foo -dMreg-names-raw)
Summary:
r354375 added support for most objdump groupings, but didn't add support for -j|--sections, because that wasn't possible.
r354870 added --disassembler options, but grouping still wasn't available.
r355185 supported values for grouped options.
This just puts the three of them together. This supports -j in modes like `-s -j .foo`, `-sj .foo`, `-sj=.foo`, or `-sj.foo`, and similar for `-M`.
Reid Kleckner [Thu, 21 Mar 2019 18:02:34 +0000 (18:02 +0000)]
[llvm-pdbutil] Add -type-ref-stats to help find unused type info
Summary:
This considers module symbol streams and the global symbol stream to be
roots. Most types that this considers "unreferenced" are referenced by
LF_UDT_MOD_SRC_LINE id records, which VC seems to always include.
Essentially, they are types that the user can only find in the debugger
if they call them by name; they cannot be found by traversing a symbol.
In practice, around 80% of type information in a PDB is referenced by a
symbol. That seems like a reasonable number.
I don't really plan to do anything with this tool. It mostly just exists
for informational purposes, and to confirm that we probably don't need
to implement type reference tracking in LLD. We can continue to merge
all types as we do today without wasting space.
Craig Topper [Thu, 21 Mar 2019 17:38:58 +0000 (17:38 +0000)]
[X86] Don't avoid folding multiple use sign extended 8-bit immediate into instructions under optsize.
Under optsize we try to avoid folding immediates into instructions. But if the immediate is 16 bits or 32 bits, yet can be encoded as an 8-bit immediate, we don't save enough from disabling the folding unless the immediate has enough uses to make up for the size of the move, which is either 3 bytes or 5 bytes since there are no sign-extended 8-bit moves. We would also save something if the immediate was live out of the basic block and thus a move was unavoidable, but that would require a more advanced heuristic than just counting uses.
Note we only avoid folding multiple use immediates into the patterns that use X86ISD::ADD/SUB/XOR/OR/AND/CMP/ADC/SBB nodes and not the more common ISD::ADD/SUB/XOR/OR/AND nodes.
Craig Topper [Thu, 21 Mar 2019 17:38:52 +0000 (17:38 +0000)]
[ScalarizeMaskedMemIntrin] Add support for scalarizing expandload and compressstore intrinsics.
This adds support for scalarizing these intrinsics as well as the X86TargetTransformInfo support to avoid scalarizing them in the cases X86 can handle.
I've omitted handling special cases for constant masks for this first pass. Though CodeGenPrepare can constant fold the branch conditions and remove some of the control flow anyway.
Fixes PR40994 and covers most of PR3666. Might want to implement constant masks to close that.
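As a rough picture of what the scalarized form computes, here is the semantics of
an expanding load written as plain C++ over arrays (my own sketch, not the pass
output; compressstore is the mirror image, writing active lanes to consecutive memory):

#include <array>
#include <cstddef>

// Consecutive memory elements are distributed to the vector lanes whose mask
// bit is set; inactive lanes keep their pass-through value and the memory
// index does not advance for them.
template <typename T, std::size_t N>
std::array<T, N> expandLoad(const T *Mem, const std::array<bool, N> &Mask,
                            std::array<T, N> PassThru) {
  std::size_t MemIdx = 0;
  for (std::size_t Lane = 0; Lane < N; ++Lane)
    if (Mask[Lane])
      PassThru[Lane] = Mem[MemIdx++];
  return PassThru;
}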
Nikita Popov [Thu, 21 Mar 2019 17:23:51 +0000 (17:23 +0000)]
[ValueTracking] Use ConstantRange based overflow check for signed sub
This is D59450, but for signed sub. This case is not NFC, because
the overflow logic in ConstantRange is more powerful than the existing
check. This resolves the TODO in the function.
I've added two tests to show that this indeed catches more cases than
the previous logic, but the main correctness test coverage here is in
the existing ConstantRange unit tests.
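To make the "more powerful" claim concrete, here is a standalone sketch of the
range-based reasoning (plain C++, not the ConstantRange API): with operand ranges
rather than just sign bits, the extreme values of the difference decide the
question in many more cases.

#include <cstdint>

enum class OverflowResult { NeverOverflows, MayOverflow, AlwaysOverflows };

// LHS is known to lie in [LLo, LHi] and RHS in [RLo, RHi] (signed, inclusive).
OverflowResult signedSubMayOverflowSketch(int32_t LLo, int32_t LHi,
                                          int32_t RLo, int32_t RHi) {
  int64_t Min = int64_t(LLo) - RHi; // smallest possible difference
  int64_t Max = int64_t(LHi) - RLo; // largest possible difference
  if (Min >= INT32_MIN && Max <= INT32_MAX)
    return OverflowResult::NeverOverflows;
  if (Max < INT32_MIN || Min > INT32_MAX)
    return OverflowResult::AlwaysOverflows;
  return OverflowResult::MayOverflow;
}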
Florian Hahn [Thu, 21 Mar 2019 14:32:09 +0000 (14:32 +0000)]
[DAGCombiner] Use getTokenFactor in a few more cases.
SDNodes can only have 64k operands and for some inputs (e.g. large
number of stores), we can reach this limit when creating TokenFactor
nodes. This patch is a follow up to D56740 and updates a few more places
that potentially can create TokenFactors with too many operands.
Sanjay Patel [Thu, 21 Mar 2019 13:57:07 +0000 (13:57 +0000)]
[CodeGenPrepare] limit formation of overflow intrinsics (PR41129)
This is probably a bigger limitation than necessary, but since we don't have any evidence yet
that this transform led to real-world perf improvements rather than regressions, I'm making a
quick, blunt fix.
In the motivating x86 example from:
https://bugs.llvm.org/show_bug.cgi?id=41129
...and shown in the regression test, we want to avoid an extra instruction in the dominating
block because that could be costly.
The x86 LSR test diff reverses the changes from D57789. There's no evidence that one version
is any better than the other yet.
Pavel Labath [Thu, 21 Mar 2019 10:21:55 +0000 (10:21 +0000)]
Fix two more issues with r356652
The first problem was a use-after-free in the tests (detected by asan
bots). The temporary array created for the "create" call is guaranteed
to live only until the end of the statement. The fix there is to store
the test data in a local variable to ensure it has the right lifetime.
The second issue is broken BUILD_SHARED_LIBS build, which I fix by
adding the appropriate BinaryFormat dependency to the Object unit tests.
Pavel Labath [Thu, 21 Mar 2019 09:18:59 +0000 (09:18 +0000)]
[Object] Add basic minidump support
Summary:
This patch adds basic support for reading minidump files. It contains
the definitions of various important minidump data structures (header,
stream directory), and of one minidump stream (SystemInfo). The ability
to read other streams will be added in follow-up patches. However, all
streams can be read even now as raw data, which means lldb's minidump
support (where this code is taken from) can be immediately rebased on
top of this patch as soon as it lands.
As we don't have any support for generating minidump files (yet), this
tests the code via unit tests with some small handcrafted binaries in
the form of C char arrays.
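For orientation, the fixed-size header the reader parses first looks roughly like
this (field names follow Microsoft's documented MINIDUMP_HEADER; an illustrative
sketch, not the declarations added by the patch):

#include <cstdint>

// Everything else in the file is located through the stream directory, an
// array of { StreamType, DataSize, RVA } entries that StreamDirectoryRVA
// points at.
struct MinidumpHeaderSketch {
  uint32_t Signature;          // 'MDMP'
  uint32_t Version;
  uint32_t NumberOfStreams;
  uint32_t StreamDirectoryRVA; // file offset of the stream directory
  uint32_t Checksum;
  uint32_t TimeDateStamp;
  uint64_t Flags;
};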
Alina Sbirlea [Thu, 21 Mar 2019 05:02:05 +0000 (05:02 +0000)]
[BasicAA] Reduce number of map searches [NFCI].
Summary:
This is a refactoring patch.
- Reduce the number of map searches by reusing the iterator.
- Add asserts to check that the entry is in the cache, as this is something BasicAA relies on to avoid infinite recursion.
Code archaeology in D59315 revealed that MSSA should never be moved.
Rather than trying to check dynamically that this hasn't happened in the
verify() functions of Walkers, it's likely best to just delete its move
constructor.
Since all these verify() functions did was check that MSSA hasn't moved,
this allows us to remove these verify functions.
I can re-add the verification checks if someone's super concerned about
us trying to `memcpy` MemorySSA or something somewhere, but I imagine we
have other problems if we're trying anything like that...
Craig Topper [Wed, 20 Mar 2019 23:35:49 +0000 (23:35 +0000)]
[X86] Add CMPXCHG8B feature flag. Set it for all CPUs except i386/i486 including 'generic'. Disable use of CMPXCHG8B when this flag isn't set.
CMPXCHG8B was introduced on i586/pentium generation.
If it's not enabled, limit the atomic width to 32 bits so the AtomicExpandPass will expand to lib calls. It's unclear if we should be using a different limit for other configs. The default is 1024, and experimentation shows that using an i256 atomic will cause a crash in SelectionDAG.
Michael Trent [Wed, 20 Mar 2019 23:21:16 +0000 (23:21 +0000)]
Fix Mach-O bind and rebase validation errors in libObject
Summary:
llvm-objdump (via libObject) validates DYLD_INFO rebase and bind
entries against the basic structure found in the Mach-O file before
evaluating the contents of those entries. Certain malformed Mach-Os can
defeat the validation check and force llvm-objdump (libObject) to crash.
The previous logic verified a rebase or bind started in a valid Mach-O
section, but did not verify that the section wholly contained the
fixup. It also generally allowed rebases or binds to start immediately
after a valid section even if that range was not itself part of a valid
section. Finally, bind and rebase opcodes that indicate more than one
fixup (apply N times...) were not completely validated: only the first
and final fixups were checked.
The previous logic also rejected certain binaries as false positives.
Some bind and rebase opcodes can modify the state machine such that the
next bind or rebase will fail. libObject will reject these opcodes as
invalid in order to be helpful and print an error message associated
with the instruction that caused the problem, even though the binary is
not actually illegal until it consumes the invalid state in the state
machine. In other words, libObject may reject a Mach-O binary that
Apple's dynamic linker may consider legal. The original version of
macho-rebase-add-addr-uleb-too-big is an example of such a binary.
I have replaced the existing checkSegAndOffset and checkCountAndSkip
functions with a single function, checkSegAndOffsets, which validates
all of the fixups realized by a DYLD_INFO opcode. checkSegAndOffsets
verifies that a Mach-O section fully contains each fixup. Every fixup
realized by an opcode is validated, and some (but not all!)
inconsistencies in the state machine are allowed until a fixup is
realized. This means that libObject may fail on an opcode that realizes
a fixup, not on the opcode that introduced the arithmetic error.
Existing test cases have been modified to reflect the changes in error
messages returned by libObject. What's more, the test case for
macho-rebase-add-addr-uleb-too-big has been modified so that it actually
triggers the error condition; the new code in libObject considers the
original test binary "legal".
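The core containment rule the new validation enforces can be sketched as follows
(standalone C++; checkSegAndOffsets' real interface is different):

#include <cstdint>

struct SectionRange {
  uint64_t Addr;
  uint64_t Size;
};

// A fixup is only acceptable if the whole [FixupAddr, FixupAddr + FixupSize)
// range lies inside some section, not merely if its start address does.
bool fixupWhollyContained(const SectionRange *Sections, unsigned NumSections,
                          uint64_t FixupAddr, uint64_t FixupSize) {
  for (unsigned I = 0; I != NumSections; ++I) {
    const SectionRange &S = Sections[I];
    if (FixupAddr >= S.Addr && FixupSize <= S.Size &&
        FixupAddr - S.Addr <= S.Size - FixupSize)
      return true; // entire fixup fits in this section (overflow-safe form)
  }
  return false;
}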
Tim Renouf [Wed, 20 Mar 2019 22:02:09 +0000 (22:02 +0000)]
[AMDGPU] Do not generate spurious PAL metadata
My previous fix rL356591 "[AMDGPU] Added MsgPack format PAL metadata"
accidentally caused a spurious PAL metadata .note record to be emitted
for any AMDGPU output. That caused failures in the lld test
amdgpu-relocs.s. Fixed.
Craig Topper [Wed, 20 Mar 2019 21:30:20 +0000 (21:30 +0000)]
[X86] Call lowerShuffleAsBitMask for 512-bit vectors in lowerShuffleAsBlend.
This patch enables the use of lowerShuffleAsBitMask for 512-bit blends before
falling back to move immediate, GPR to k-register, and masked op.
I had to make some changes to support v8i64 when i64 is not a legal type. And to
support floating point types.
This trades a load for the move immediate and GPR move which is higher latency.
But it's probably better for register pressure not having to hop through other
register classes. The load+and should play better with LICM and
rematerialization I think.
Matt Arsenault [Wed, 20 Mar 2019 20:41:34 +0000 (20:41 +0000)]
AMDGPU: Don't look for constant in insert/extract_vector_elt regbankselect
The constantness shouldn't change the register bank choice. We also
don't need to restrict this to only indexing VGPRs, since it's
possible to index SGPRs (but SelectionDAG made using this
difficult). Allow directly indexing SGPRs when appropriate.
Thomas Lively [Wed, 20 Mar 2019 20:26:45 +0000 (20:26 +0000)]
[WebAssembly] Target features section
Summary:
Implements a new target features section in assembly and object files
that records what features are used, required, and disallowed in
WebAssembly objects. The linker uses this information to ensure that
all objects participating in a link are feature-compatible and records
the set of used features in the output binary for use by optimizers
and other tools later in the toolchain.
The "atomics" feature is always required or disallowed to prevent
linking code with stripped atomics into multithreaded binaries. Other
features are marked used if they are enabled globally or on any
function in a module.
Future CLs will add linker flags for ignoring feature compatibility
checks and for specifying the set of allowed features, implement using
the presence of the "atomics" feature to control the type of memory
and segments in the linked binary, and add front-end flags for
relaxing the linkage policy for atomics.