Matt Arsenault [Mon, 25 Mar 2019 21:10:12 +0000 (21:10 +0000)]
AMDGPU: Set hasSideEffects 0 on _term instructions
These were defaulting to true, but they are just wrappers around bit
operations. This avoids regressions in the exec mask optimization
passes in a future commit.
Ali Tamur [Mon, 25 Mar 2019 20:08:00 +0000 (20:08 +0000)]
[llvm] Prevent duplicate files in debug line header in dwarf 5.
Summary:
Motivation: In previous dwarf versions, file name indexes started from 1, and
the primary source file was not explicit. Dwarf 5 standard (6.2.4) prescribes
the primary source file to be explicitly given an entry with an index number 0.
The current implementation honors the specification by just duplicating the
main source file, once with index number 0, and later maybe with another
index number. While this is compliant with the letter of the standard, the
duplication causes problems for consumers of this information such as lldb.
(Some files are duplicated, and only some of the duplicates have a line table,
although all refer to the same file.)
With this change, dwarf 5 debug line section files always start from 0, and
the zeroth entry is not duplicated whenever possible. This requires different
handling of dwarf 4 and dwarf 5 during generation (e.g. when a function returns
an index zero for a file name, it signals an error in dwarf 4, but not in dwarf 5).
However, I think the minor complication is worth it, because it enables all
consumers (lldb, gdb, dwarfdump, objdump, and so on) to treat all files in the
file name list homogeneously.
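A sketch of the version split in C++ (the helper name is hypothetical, not
the actual MC API):
  // DWARF v4 and earlier: file name indexes start at 1, so a returned
  // index of 0 signals an error. In DWARF v5, index 0 is the primary
  // source file and is valid.
  static bool isErrorFileIndex(unsigned FileIndex, uint16_t DwarfVersion) {
    return DwarfVersion < 5 && FileIndex == 0;
  }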
Simon Pilgrim [Mon, 25 Mar 2019 20:05:27 +0000 (20:05 +0000)]
[SLPVectorizer] Merge reorderAltShuffleOperands into reorderInputsAccordingToOpcode
As discussed on D59738, this generalizes reorderInputsAccordingToOpcode to handle multiple + non-commutative instructions so we can get rid of reorderAltShuffleOperands and make use of the extra canonicalizations that reorderInputsAccordingToOpcode brings.
Simon Pilgrim [Mon, 25 Mar 2019 18:51:57 +0000 (18:51 +0000)]
[SelectionDAG] Add icmp UNDEF handling to SelectionDAG::FoldSetCC
First half of PR40800, this patch adds DAG undef handling to icmp instructions to match the behaviour in llvm::ConstantFoldCompareInstruction and SimplifyICmpInst; this permits constant folding of vector comparisons where some elements had been reduced to UNDEF (by SimplifyDemandedVectorElts etc.).
This involved a lot of tweaking to reduced tests as bugpoint loves to reduce icmp arguments to undef........
Teresa Johnson [Mon, 25 Mar 2019 18:38:48 +0000 (18:38 +0000)]
[CGP] Build the DominatorTree lazily
Summary:
In r355512 CGP was changed to build the DominatorTree only once per
function traversal, to avoid repeatedly building it each time it was
accessed. This solved one compile time issue but introduced another. In
the second case, we now were building the DT unnecessarily many times
when we performed many function traversals (i.e. more than once per
function when running CGP because of changes made each time).
Change to saving the DT in the CodeGenPrepare object, and building it
lazily when needed. It is reset whenever we need to rebuild it.
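A minimal sketch of the lazy pattern (the member and function names here are
illustrative, not the exact CodeGenPrepare code):
  // Built on first use and cached in the pass object:
  DominatorTree &getDT(Function &F) {
    if (!DT)
      DT = llvm::make_unique<DominatorTree>(F);
    return *DT;
  }
  // Whenever the CFG is modified, drop the cached tree so the next
  // getDT() call rebuilds it:
  DT.reset();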
In the case that exposed the issue there are 617 functions, and we walk
them (i.e. execute the "while (MadeChange)" loop in runOnFunction) a
total of 12083 times (so previously we were building the DT 12083
times). With this patch we only build the DT 844 times (average of 1.37
times per function). We dropped the total time to compile this file from
538.11s without this patch to 339.63s with it.
There is still an issue as CGP is taking much longer than all other
passes even with this patch, and before a recent compiler release cut at
r355392 the total time for this compile was only 97 sec, with a huge
reduction in CGP time. I suspect that one of the other recent changes to
CGP led to iterating each function many more times on average, but I
need to do some more investigation.
Matt Arsenault [Mon, 25 Mar 2019 17:15:44 +0000 (17:15 +0000)]
MISched: Don't schedule regions with 0 instructions
I think this is correct, but may not necessarily be the correct fix
for the assertion I'm really trying to solve. If a scheduling region
was found that only has dbg_value instructions, the RegPressure
tracker would end up in an inconsistent state because it would skip
over any debug instructions and point to an instruction outside of the
scheduling region. It may still be possible for this to happen if
there are some real schedulable instructions between dbg_values, but I
haven't managed to break this.
The testcase is extremely sensitive and I'm not sure how to make it
more resistant to future scheduler changes that would avoid stressing
this situation.
James Henderson [Mon, 25 Mar 2019 16:36:26 +0000 (16:36 +0000)]
[llvm-objcopy] Preserve data in segments not covered by sections
llvm-objcopy previously knew nothing about data in segments that wasn't
covered by section headers, meaning that it wrote zeroes instead of what
was there. As it is possible for this data to be useful to the loader,
this patch causes llvm-objcopy to start preserving this data. Data in
sections that are explicitly removed continues to be written as zeroes.
This fixes https://bugs.llvm.org/show_bug.cgi?id=41005.
Remove attempts to commute non-Instructions to the LHS - the codegen changes appear to rely on chance more than anything else, and they also have a tendency to fight the existing instcombine canonicalization that moves constants to the RHS of commutable binary ops.
This is prep work towards:
(a) reusing reorderInputsAccordingToOpcode for alt-shuffles and removing the similar reorderAltShuffleOperands
(b) improving reordering to optimized cases with commutable and non-commutable instructions to still find splat/consecutive ops.
Brock Wyma [Mon, 25 Mar 2019 13:50:26 +0000 (13:50 +0000)]
[DebugInfo] IntelJitEventListener follow up for "add SectionedAddress ..."
Following r354972 the Intel JIT Listener would not report line table
information because the section indices did not match. There was
a similar issue with the PerfJitEventListener. This change performs
the section index lookup when building the object address used to
query the line table information.
George Rimar [Mon, 25 Mar 2019 12:34:25 +0000 (12:34 +0000)]
[llvm-objcopy] - Refactor the code. NFC.
The idea of the patch is to move the code out into new static helper
functions (to reduce the size of 'handleArgs' and to isolate parts of its
logic).
Nico Weber [Mon, 25 Mar 2019 11:33:19 +0000 (11:33 +0000)]
gn build: Clean up README.rst a bit
- Make introduction a bit shorter
- Add a `git clone` step to Quick start
- Put command to run first in each of the Quick start steps
- Use ``code`` instead of `label` throughout; this is .rst not .md
Petar Avramovic [Mon, 25 Mar 2019 11:23:41 +0000 (11:23 +0000)]
[MIPS GlobalISel] Lower float and double arguments in registers
Lower float and double arguments in registers for MIPS32.
When a float/double argument is passed through GPR registers,
select the appropriate move instruction.
Xing GUO [Mon, 25 Mar 2019 11:02:49 +0000 (11:02 +0000)]
[llvm-readobj] Separate `Symbol Version` dumpers into `LLVM style` and `GNU style`
Summary:
Currently, llvm-readobj can dump symbol version sections only in LLVM style. In this patch, I would like to separate these dumpers into GNU style and
LLVM style for future implementation.
Sjoerd Meijer [Mon, 25 Mar 2019 08:54:47 +0000 (08:54 +0000)]
[TTI] Move getIntrinsicCost to allow functions to be overridden. NFC
Moving this to the base class TargetTransformInfoImplCRTPBase allows a
static_cast to a subtarget so that calls to e.g. getMemcpyCost actually go to
the overridden functions.
Diana Picus [Mon, 25 Mar 2019 08:54:29 +0000 (08:54 +0000)]
[ARM GlobalISel] 64-bit memops should be aligned
We currently use only VLDR/VSTR for all 64-bit loads/stores, so the
memory operands must be word-aligned. Mark aligned operations as legal
and narrow non-aligned ones to 32 bits.
While we're here, also mark non-power-of-2 loads/stores as unsupported.
Craig Topper [Mon, 25 Mar 2019 06:53:45 +0000 (06:53 +0000)]
[X86] When selecting (x << C1) op C2 as (x op (C2>>C1)) << C1, use the operation VT for the target constant.
Normally when the nodes we use here (AND32ri8, for example) are selected, their
immediates are just converted from ConstantSDNode to TargetConstantSDNode
without changing the VT from the original operation VT. So we should still be
emitting them with the operation VT.
Theoretically this could expose more accurate opportunities for CSE.
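A concrete instance of the fold, for illustration (plain C++, X is any
uint32_t):
  // (X << C1) & C2  ==>  (X & (C2 >> C1)) << C1, e.g. with C1 = 4, C2 = 0xF0:
  uint32_t Before = (X << 4) & 0xF0u;
  uint32_t After  = (X & 0x0Fu) << 4; // 0x0F == 0xF0 >> 4; same value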
Craig Topper [Mon, 25 Mar 2019 06:53:44 +0000 (06:53 +0000)]
[X86] Remove GetLo8XForm and use GetLo32XForm instead. NFCI
We were using this to create an AND32ri8 node from a 64-bit AND, but that node
normally still uses a 32-bit immediate. So we should just truncate the existing
immediate to i32. We already verified it has the same value in bits 31:7.
Simon Pilgrim [Sun, 24 Mar 2019 19:06:35 +0000 (19:06 +0000)]
[X86][SSE41] Start shuffle combining from ZERO_EXTEND_VECTOR_INREG (PR40685)
Enable SSE41 ZERO_EXTEND_VECTOR_INREG shuffle combines - for the PMOVZX(PSHUFD(V)) -> UNPCKH(V,0) pattern we reduce the shuffles (port5-bottleneck on Intel) at the expense of creating a zero (pxor v,v) and an extra register move - which is a good trade off as these are pretty cheap and in most cases it doesn't increase register pressure.
This also exposed a missed opportunity to use combine to ZERO_EXTEND_VECTOR_INREG with folded loads - even if we're in the float domain.
Heejin Ahn [Sun, 24 Mar 2019 17:34:40 +0000 (17:34 +0000)]
[WebAssembly] Rename a variable in CFGSort (NFC)
Class `RegionInfo` was `SortUnitInfo` before, so the variables were
named `SUI`. Now the class name is `RegionInfo`, so this renames `SUI`
to `RI`, matching the class name.
Craig Topper [Sun, 24 Mar 2019 17:02:14 +0000 (17:02 +0000)]
[LegalizeDAG] Expand i16 bswap directly to a rotate by 8 instead of relying on DAG combine.
An i16 bswap can be implemented with an i16 rotate by 8. We previously emitted
a shift and OR sequence that DAG combine should be able to turn back into a
rotate. But we might as well go there directly. If rotate isn't legal,
LegalizeDAG should further legalize it to either the opposite rotate, or the
shift and OR pattern.
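The equivalence itself, in plain C++ for reference:
  // Swapping the two bytes of an i16 is exactly a rotate by 8:
  uint16_t Bswap16(uint16_t X) {
    return (uint16_t)((X << 8) | (X >> 8)); // rotl by 8 == rotr by 8 for i16
  }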
I don't know of any way to get the existing DAG combine reliance to fail. So
I don't know any way to add new tests for this that wouldn't have worked
previously.
Simon Pilgrim [Sun, 24 Mar 2019 16:30:35 +0000 (16:30 +0000)]
[X86][AVX] Start shuffle combining from ZERO_EXTEND_VECTOR_INREG (PR40685)
Just enable this for AVX for now as SSE41 introduces extra register moves for the PMOVZX(PSHUFD(V)) -> UNPCKH(V,0) pattern (but otherwise helps reduce port5 usage on Intel targets).
Only AVX support is required for PR40685 as the issue is due to 8i8->8i32 zext shuffle leftovers.
George Rimar [Sun, 24 Mar 2019 14:41:45 +0000 (14:41 +0000)]
Recommit r356738 "[llvm-objcopy] - Implement replaceSectionReferences for GroupSection class."
Fix: r356853 + set AddressAlign to 4 in
Inputs/compress-debug-sections.yaml for the new group section introduced.
Original commit message:
Currently, llvm-objcopy incorrectly handles compression and decompression of
sections from COMDAT groups, because we do not implement
replaceSectionReferences for this type of section.
Sanjay Patel [Sun, 24 Mar 2019 13:55:54 +0000 (13:55 +0000)]
[x86] improve the default expansion of uaddsat/usubsat
This is yet another step towards solving PR14613:
https://bugs.llvm.org/show_bug.cgi?id=14613
uaddsat X, Y --> (X >u (X + Y)) ? -1 : X + Y
usubsat X, Y --> (X >u Y) ? X - Y : 0
We can't count on a sane vector ISA, so override the default (umin/umax)
expansion of unsigned add/sub saturate in cases where we do not have umin/umax.
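In scalar C++ terms, the two expansions above amount to (a sketch, not the
actual DAG code):
  uint32_t UAddSat(uint32_t X, uint32_t Y) {
    uint32_t Sum = X + Y;
    return X > Sum ? 0xFFFFFFFFu : Sum; // X >u (X + Y) detects wraparound
  }
  uint32_t USubSat(uint32_t X, uint32_t Y) {
    return X > Y ? X - Y : 0;
  }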
Simon Pilgrim [Sun, 24 Mar 2019 13:36:32 +0000 (13:36 +0000)]
[SLPVectorizer] shouldReorderOperands - just check for reordering. NFCI.
Remove the I.getOperand() calls from inside shouldReorderOperands - reorderInputsAccordingToOpcode should handle the creation of the operand lists and shouldReorderOperands should just check to see whether the i'th element should be commuted.
George Rimar [Sun, 24 Mar 2019 13:31:08 +0000 (13:31 +0000)]
[llvm-objcopy] - Report SHT_GROUP sections with invalid alignment.
This patch fixes the cause of the ubsan failure (UB detected) that happened
after landing D59638 (which I had to revert):
http://lab.llvm.org:8011/builders/sanitizer-x86_64-linux-bootstrap-ubsan/builds/11760/steps/check-llvm%20ubsan/logs/stdio
The problem is the following: our implementation of GroupSection assumes that
its address is 4-byte aligned when writing it:
Nikita Popov [Sun, 24 Mar 2019 09:34:40 +0000 (09:34 +0000)]
[ConstantRange] Add getFull() + getEmpty() named constructors; NFC
This adds ConstantRange::getFull(BitWidth) and
ConstantRange::getEmpty(BitWidth) named constructors as more readable
alternatives to the current ConstantRange(BitWidth, /* full */ false)
and similar. Additionally private getFull() and getEmpty() member
functions are added which return a full/empty range with the same bit
width -- these are commonly needed inside ConstantRange.cpp.
The IsFullSet argument in the ConstantRange(BitWidth, IsFullSet)
constructor is now mandatory for the few usages that still make use of it.
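Before and after, at a call site:
  // Old, hard to read at a glance:
  ConstantRange CR1(/*BitWidth=*/32, /*isFullSet=*/true);
  // New named constructors:
  ConstantRange CR2 = ConstantRange::getFull(32);
  ConstantRange CR3 = ConstantRange::getEmpty(32);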
The https://reviews.llvm.org/D58194 patch changed the symbolizer interface.
In particular, it now requires not only an Address but also a SectionIndex
(note the object::SectionedAddress parameter).
There are callers of the symbolizer which do not know the particular section
index. This patch creates a getModuleSectionIndexForAddress() routine which
detects the section index for the specified address. Thus, if the caller sets
ModuleOffset.SectionIndex to the object::SectionedAddress::UndefSection state,
the symbolizer will detect the section index using the
getModuleSectionIndexForAddress routine.
Fedor Sergeev [Fri, 22 Mar 2019 23:11:08 +0000 (23:11 +0000)]
[Legacy][TimePasses] allow -time-passes reporting into a custom stream
As a followup to newpm -time-passes fix (D59366), now adding a similar
functionality to legacy time-passes.
Enhancing llvm::reportAndResetTimings to accept an optional stream
for reporting output. By default it still reports into the stream created
by CreateInfoOutputFile (-info-output-file).
Also fix it to actually reset the timers after printing, as declared.
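A usage sketch, assuming the optional-stream parameter described above:
  std::string Buf;
  llvm::raw_string_ostream OS(Buf);
  llvm::reportAndResetTimings(&OS); // report here, not to -info-output-file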
Juergen Ributzka [Fri, 22 Mar 2019 22:46:52 +0000 (22:46 +0000)]
[TextAPI] TBD Reader/Writer
Add basic infrastructure for reading and writing TBD files (versions 1-3).
The TextAPI library is not used by anything yet (besides the unit tests). Tool
support will be added in a separate commit.
The TBD format is currently documented in the implementation file (TextStub.cpp).
https://reviews.llvm.org/D53945
Update: This contains changes to fix issues discovered by the bots:
- add parentheses to silence warnings
- rename variables
- use PlatformType from BinaryFormat
- try switching from a vector to an array to see if it appeases the bots
- replace the tuple with a struct to work around an explicit constructor bug
- fix an issue where we were leaking the YAML document if there was a
parsing error
Simon Pilgrim [Fri, 22 Mar 2019 21:27:11 +0000 (21:27 +0000)]
[SLP] Remove redundancy of performing operand reordering twice: once in buildTree() and later in vectorizeTree().
This is a refactoring patch that removes the redundancy of performing operand reordering twice, once in buildTree() and later in vectorizeTree().
To achieve this we need to keep track of the operands within the TreeEntry struct while building the tree, and later in vectorizeTree() we are just accessing them from the TreeEntry in the right order.
This patch is the first in a series of patches that will allow for better operand reordering across chains of instructions (e.g., a chain of ADDs), as presented here: https://www.youtube.com/watch?v=gIEn34LvyNo
The PDB is overall 1.9GB, so the LF_CLASS and LF_STRUCTURE declarations
account for about 10% of the overall file size. I was surprised to find
that on average LF_FIELDLIST records are short. Maybe this is because
there are many more types with short member lists than there are
instantiations with lots of members, like std::vector.
This change was originally committed in r356764, but then partially
reverted in r356777 due to "bad changes". This caused test failures
because the test changes committed along with the original change
were not reverted, so this change reverts the rest of the changes.
Eli Friedman [Fri, 22 Mar 2019 20:49:15 +0000 (20:49 +0000)]
[ARM] Don't form "ands" when it isn't scheduled correctly.
In r322972/r323136, the iteration here was changed to catch cases at the
beginning of a basic block... but we accidentally deleted an important
safety check. Restore that check to the way it was.
Craig Topper [Fri, 22 Mar 2019 20:47:02 +0000 (20:47 +0000)]
[X86] Use xmm registers to implement 64-bit popcnt on 32-bit targets if possible if popcnt instruction is not available
On 32-bit targets without popcnt, we currently expand 64-bit popcnt to sequences of arithmetic and logic ops for each 32-bit half and then add the 32-bit halves together. If we have xmm registers we can use those to implement the operation instead. This results in fewer instructions than doing two separate 32-bit popcnt sequences.
This mitigates some of PR41151 for the i64 on i686 case when we have SSE2.
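For reference, each 32-bit half was previously expanded to the classic
arithmetic/logic sequence, roughly (a C++ sketch, not the exact DAG output):
  uint32_t PopCount32(uint32_t V) {
    V = V - ((V >> 1) & 0x55555555);                // sums of 2-bit fields
    V = (V & 0x33333333) + ((V >> 2) & 0x33333333); // sums of 4-bit fields
    return (((V + (V >> 4)) & 0x0F0F0F0F) * 0x01010101) >> 24;
  }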
Craig Topper [Fri, 22 Mar 2019 20:46:56 +0000 (20:46 +0000)]
[X86] Use movq for i64 atomic load on 32-bit targets when SSE2 is enabled
We used a lock cmpxchg8b to do i64 atomic loads. But if we have SSE2 we can do better and use a plain movq to do the load instead.
I tried to just use an f64 atomic load and add isel patterns to MOVSD(which the domain fixing pass can turn to MOVQ), but the atomic_load SDNode in TargetSelectionDAG.td requires the type to be integer.
So I've emitted VZEXT_LOAD instead, which should be selected by isel to a MOVQ. Hopefully we don't need a specific atomic flavor of this. I kept the memory operand from the original AtomicSDNode. I wasn't sure if I might need to set the MOVolatile flag.
I've left some FIXMEs for improvements we can do without SSE2.
Daniel Sanders [Fri, 22 Mar 2019 20:16:35 +0000 (20:16 +0000)]
Fix non-determinism in Reassociate caused by address coincidences
Summary:
Between building the pair map and querying it there are a few places that
erase and create Values. It's rare but the address of these newly created
Values is occasionally the same as a just-erased Value that we already
have in the pair map. These coincidences should be accounted for to avoid
non-determinism.
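Schematically, the coincidence looks like this (hypothetical names, not the
actual Reassociate code):
  DenseMap<Value *, unsigned> PairMap;
  PairMap[V] = Rank;             // V is recorded while building the map
  V->eraseFromParent();          // V is later erased...
  Value *W = createValue();      // ...and W happens to reuse V's address,
  bool Stale = PairMap.count(W); // so a lookup of W finds V's stale entry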
Eli Friedman [Fri, 22 Mar 2019 18:37:26 +0000 (18:37 +0000)]
[ARM] [NFC] Use tGPR in patterns where appropriate.
This doesn't have any practical effect at the moment, as far as I know,
because high registers aren't allocatable in Thumb1 mode. But it might
matter in the future.
James Y Knight [Fri, 22 Mar 2019 18:27:13 +0000 (18:27 +0000)]
IR: Support parsing numeric block ids, and emit them in textual output.
Just as LLVM IR supports explicitly specifying numeric value ids
for instructions, and emits them by default in textual output, now do
the same for blocks.
This is a slightly incompatible change in the textual IR format.
Previously, llvm would parse numeric labels as string names. E.g.
  define void @f() {
    br label %"55"
  55:
    ret void
  }
defined a label *named* "55", even without needing to be quoted, while
the reference required quoting. Now, if you intend a block label which
looks like a value number to be a name, you must quote it in the
definition too (e.g. `"55":`).
Previously, llvm would print nameless blocks only as a comment, and
would omit it if there was no predecessor. This could cause confusion
for readers of the IR, just as unnamed instructions did prior to the
addition of "%5 = " syntax, back in 2008 (PR2480).
Now, it will always print a label for an unnamed block, with the
exception of the entry block. (IMO it may be better to print it for
the entry-block as well. However, that requires updating many more
tests.)
Thus, the following is supported, and is the canonical printing:
  define i32 @f(i32, i32) {
    %3 = add i32 %0, %1
    br label %4
  4:
    ret i32 %3
  }
New test cases covering this behavior are added, and other tests
updated as required.
Nikita Popov [Fri, 22 Mar 2019 17:51:40 +0000 (17:51 +0000)]
[ValueTracking] Avoid redundant known bits calculation in computeOverflowForSignedAdd()
We're already computing the known bits of the operands here. If the
known bits of the operands can determine the sign bit of the result,
we'll already catch this in signedAddMayOverflow(). The only other way
we'll get new information from computing known bits on the whole add (as
the comment already indicates) is if there's an assumption on it.
As such, we change the code to only compute known bits from assumptions,
instead of computing full known bits on the add (which would unnecessarily
recompute the known bits of the operands as well).
Alina Sbirlea [Fri, 22 Mar 2019 17:22:19 +0000 (17:22 +0000)]
[AliasAnalysis] Second prototype to cache BasicAA / anyAA state.
Summary:
Adding contained caching to AliasAnalysis. BasicAA is currently the only one using it.
AA changes:
- This patch is pulling the caches from BasicAAResults to AAResults, meaning the getModRefInfo call benefits from the IsCapturedCache as well when in "batch mode".
- All AAResultBase implementations add the QueryInfo member to all APIs. AAResults APIs maintain wrapper APIs such that all alias()/getModRefInfo call sites are unchanged.
- AA now provides a BatchAAResults type as a wrapper to AAResults. It keeps the AAResults instance and a QueryInfo instantiated to batch mode. It delegates all work to the AAResults instance with the batched QueryInfo. More API wrappers may be needed in BatchAAResults; only the minimum needed is currently added.
MemorySSA changes:
- All walkers are now templated on the AA used (AliasAnalysis=AAResults or BatchAAResults).
- At build time, we optimize uses; now we create a local walker (lives only as long as OptimizeUses does) using BatchAAResults.
- All Walkers have an internal AA and only use that now, never the AA in MemorySSA. The Walkers receive the AA they will use when built.
- The walker we use for queries after the build is instantiated on AliasAnalysis and is built after building MemorySSA and setting AA.
- All static methods doing walking are now templated on AliasAnalysisType if they are used both during build and after. If used only during build, the method now only takes a BatchAAResults. If used only after build, the method now takes an AliasAnalysis.
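A usage sketch of the BatchAAResults wrapper described above (the memory
locations are placeholders):
  BatchAAResults BatchAA(AA);                 // AA is an AAResults &
  AliasResult R1 = BatchAA.alias(LocA, LocB);
  AliasResult R2 = BatchAA.alias(LocA, LocC); // may reuse the batched caches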
Bixia Zheng [Fri, 22 Mar 2019 16:37:37 +0000 (16:37 +0000)]
[ConstantFolding] Fix GetConstantFoldFPValue to avoid cast overflow.
Summary:
In C++, the behavior of casting a double value that is beyond the range
of a single precision floating-point to a float value is undefined. This
change replaces such a cast with APFloat::convert to convert the value,
which is consistent with how we convert a double value to a half value.
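The pattern being fixed, illustrated:
  double D = 1e308;       // well outside float's range
  // float F = (float)D;  // undefined behavior in C++
  APFloat APF(D);         // the well-defined replacement:
  bool LosesInfo;
  APF.convert(APFloat::IEEEsingle(), APFloat::rmNearestTiesToEven, &LosesInfo);
  float F = APF.convertToFloat();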
Nico Weber [Fri, 22 Mar 2019 16:34:39 +0000 (16:34 +0000)]
Make clang-move use same file naming convention as other tools
In all the other clang-foo tools, the main library file is called
Foo.cpp and the file in the tool/ folder is called ClangFoo.cpp.
Do this for clang-move too.
Tim Renouf [Fri, 22 Mar 2019 15:53:50 +0000 (15:53 +0000)]
InstCombineSimplifyDemanded: Allow v3 results for AMDGCN buffer and image intrinsics
This helps to avoid the situation where RA spots that only 3 elements of the
v4f32 result of a load are used, and immediately reallocates the 4th
register for something else, requiring a stall waiting for the load.
Xing GUO [Fri, 22 Mar 2019 15:42:13 +0000 (15:42 +0000)]
[llvm-readobj] Separate `Symbol Version` dumpers into `LLVM style` and `GNU style`
Summary:
Currently, llvm-readobj can dump symbol version sections only in LLVM style. In this patch, I would like to separate these dumpers into GNU style and
LLVM style for future implementation.