Yaxun Liu [Thu, 20 Apr 2017 18:15:34 +0000 (18:15 +0000)]
CodeGen: Let frame index value type match alloca addr space
An alloca address space was recently added to the data layout. Due to this
change, the pointer returned by alloca may have a different size than a
pointer in address space 0.
However, the value type of a frame index is currently assumed to have the
same size as a pointer in address space 0.
This patch fixes that.
Most targets assume alloca returns a pointer in address space 0, which is
the default alloca address space, so this change is NFC for them.
The AMDGCN target with the amdgiz environment requires this change since it
assumes alloca returns a pointer to address space 5, whose size is 32 bits,
unlike the 64-bit pointers in address space 0.
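A minimal sketch of the idea, assuming a TargetLowering-style context; the helper name is illustrative, not necessarily the exact hook this patch adds:

  // Derive the frame-index value type from the alloca address space recorded
  // in the DataLayout instead of assuming address space 0.
  MVT getFrameIndexValueType(const DataLayout &DL) const {
    return getPointerTy(DL, DL.getAllocaAddrSpace());
  }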
Have the AttributeList overload delegate to the AttrBuilder one.
Simplify the AttrBuilder overload by avoiding getSlotAttributes, which
creates temporary AttributeLists.
Simplify `AttrBuilder::removeAttributes(AttributeList, unsigned)` by
using getAttributes instead of manually iterating over slots.
Resubmit "[BitVector] Add operator<<= and operator>>=."
This was failing because a Mask was being assigned to an unsigned rather
than to a BitWord. Most systems do not have
sizeof(unsigned) == sizeof(unsigned long), so the mask was getting
truncated.
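A small standalone illustration of the failure mode (not the exact BitVector code), assuming a 64-bit unsigned long:

  #include <cassert>

  typedef unsigned long BitWord;         // 64 bits on typical LP64 systems

  int main() {
    BitWord Mask = ~BitWord(0) << 32;    // high 32 bits set
    unsigned Truncated = Mask;           // unsigned is 32 bits: high half lost
    assert(Truncated == 0);              // the intended mask silently vanished
  }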
getSignBit is a static function that creates an APInt with only the sign bit set. getSignMask seems like a better name to convey its functionality. In fact, several places use it and then store the result in an APInt named SignMask.
[APInt] Add isSubsetOf method that can check if one APInt is a subset of another without creating temporary APInts
This question comes up in many places in SimplifyDemandedBits. This makes it easy to ask without allocating additional temporary APInts.
The BitVector class provides similar functionality through its (IMHO badly named) test(const BitVector&) method, though its output polarity is reversed.
I've provided one example use case in this patch. I plan to do more as a follow up.
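A rough sketch of the word-by-word check, with plain arrays standing in for APInt's internal words:

  #include <cstdint>

  // Returns true if every bit set in LHS is also set in RHS, i.e.
  // (LHS & ~RHS) == 0, without materializing a temporary value.
  static bool isSubsetOfSketch(const uint64_t *LHS, const uint64_t *RHS,
                               unsigned NumWords) {
    for (unsigned I = 0; I != NumWords; ++I)
      if (LHS[I] & ~RHS[I])
        return false;
    return true;
  }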
In SimplifyDemandedUseBits, use computeKnownBits directly to handle Constants
Currently we don't explicitly process ConstantDataSequential, ConstantAggregateZero, ConstantVector, or Undef before applying the Depth limit. Instead they occur after the depth check in the non-instruction path.
For the constant types that we do handle, the code is replicated from computeKnownBits.
This patch fixes the missing constant handling and reduces the amount of code by just using computeKnownBits directly for any type of Constant.
Petar Jovanovic [Thu, 20 Apr 2017 13:26:46 +0000 (13:26 +0000)]
[mips][msa] Mask vectors holding shift amounts
Mask vectors that hold shift amounts when creating the following nodes:
ISD::SHL, ISD::SRL or ISD::SRA.
The instructions that use these nodes and whose arguments have been altered
are sll, srl, sra, bneg, bclr and bset.
For said instructions, the shift amount or the bit position that is
specified in the corresponding vector elements will be interpreted as the
shift amount/bit position modulo the size of the element in bits.
The problem arises when compiling with -O2, where the instructions for the
.w and .d formats are not generated but are instead optimized away.
In this case, shift amounts that are either negative or greater than the
element bit size produce incorrect results during constant folding.
We remedy this by masking the operands of the nodes mentioned above before
actually creating them, so that the final result is correct before it is
placed into the constant pool.
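A hedged sketch of the masking step; the helper name and exact placement are illustrative rather than the patch's actual code:

  // Reduce each shift amount / bit position modulo the element size in bits
  // before the SHL/SRL/SRA node is created. The AND works as a modulo
  // because element sizes are powers of two.
  static SDValue maskShiftAmount(SelectionDAG &DAG, const SDLoc &DL, EVT VT,
                                 SDValue Amt) {
    unsigned EltBits = VT.getScalarSizeInBits();
    SDValue Mask = DAG.getConstant(EltBits - 1, DL, VT);  // splat constant
    return DAG.getNode(ISD::AND, DL, VT, Amt, Mask);
  }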
This patch adds a few helper functions to obtain new vector
value types based on existing ones without needing to care
about whether they are scalable or not.
I've confined their use to a few common locations right now,
and targets that don't have scalable vectors should never
need to care about these.
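An illustrative sketch of what such a helper might look like; the name and signature are assumptions, not the patch's actual API:

  // Build a vector type with half as many elements as VT while preserving
  // whether VT is scalable, so callers need not special-case it.
  static EVT getHalfNumElementsVT(LLVMContext &Ctx, EVT VT) {
    return EVT::getVectorVT(Ctx, VT.getVectorElementType(),
                            VT.getVectorMinNumElements() / 2,
                            VT.isScalableVector());
  }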
John Brawn [Thu, 20 Apr 2017 10:18:13 +0000 (10:18 +0000)]
[ARM] Fix handling of mapping symbols when changing sections
ChangeSection incorrectly registers LastEMSInfo as belonging to the previous
section, not the current section. This happens to work when changing sections
using .section, as the previous section is set to the current section before
the call to ChangeSection, but not when using .popsection.
John Brawn [Thu, 20 Apr 2017 10:13:54 +0000 (10:13 +0000)]
[AArch64] Fix handling of zero immediate in fmov instructions
Currently fmov #0 with a vector destination is handled incorrectly and results in
fmov #-1.9375 being emitted, but it should instead give an error. This is due to the
way we cope with fmov #0 with a scalar destination being an alias of fmov zr, so
fix this by actually doing it through an alias.
John Brawn [Thu, 20 Apr 2017 10:10:10 +0000 (10:10 +0000)]
[AArch64] Fix handling of integer fp immediates
When an integer is used as an fp immediate we're failing to check the return
value of getFP64Imm, so invalid values are silently permitted. Fix this by
merging together the integer and real handling.
[ARM] Rename HW div feature to HW div Thumb. NFCI.
The hardware div feature refers only to Thumb, but because of its name
it is tempting to use it to check for hardware division in general,
which may cause problems in ARM mode. See https://reviews.llvm.org/D32005.
This patch adds "Thumb" to its name, to make its scope clear. One
notable place where I haven't made the change is in the feature flag
(used with -mattr), which is still hwdiv. Changing it would also require
changes in a lot of tests, including clang tests, and it doesn't seem
like it's worth the effort.
[APInt] Implement APInt::intersects without creating a temporary APInt in the multiword case
Summary: This is a simple question we should be able to answer without creating a temporary to hold the AND result. We can also get an early out as soon as we find a word that intersects.
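A rough sketch of the early-out, again with plain arrays standing in for APInt's multiword storage:

  #include <cstdint>

  // Returns true as soon as any word pair shares a set bit; no temporary
  // AND result is ever built.
  static bool intersectsSketch(const uint64_t *LHS, const uint64_t *RHS,
                               unsigned NumWords) {
    for (unsigned I = 0; I != NumWords; ++I)
      if (LHS[I] & RHS[I])
        return true;
    return false;
  }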
[APInt] Don't call getActiveBits() in ult/ugt(uint64_t) if it's a single word.
The compiled code already needs to check single/multi word for the countLeadingZeros call inside getActiveBits, but it isn't able to optimize out the countLeadingZeros call in the single-word case, even though that case can't produce a value larger than 64.
This shrank the opt binary by about 5-6k on my local x86-64 build.
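A hedged sketch of the shape of the change; member names are illustrative:

  // Single-word case: compare the raw 64-bit word directly. Only the
  // multi-word case needs the getActiveBits()-style reasoning.
  bool ult(uint64_t RHS) const {
    if (isSingleWord())
      return VAL < RHS;                 // VAL: the lone 64-bit word
    return getActiveBits() <= 64 && getZExtValue() < RHS;
  }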
Kuba Mracek [Wed, 19 Apr 2017 23:34:08 +0000 (23:34 +0000)]
[libFuzzer] Always build libFuzzer
There are two reasons why users might want to build libfuzzer:
- To fuzz LLVM itself
- To get the libFuzzer.a archive file, so that they can attach it to their code
This change always builds libfuzzer, and supports the second use case if the specified flag is set.
The point of this patch is to have something that can potentially be shipped with the compiler, and this also ensures that the version of libFuzzer is correct to use with that compiler.
Philip Reames [Wed, 19 Apr 2017 23:16:13 +0000 (23:16 +0000)]
Refresh the statepoint docs a bit
The documentation had gotten a bit stale. The revised ones are by no means perfect, but I tried to remove the obviously incorrect or misleading statements.
[DAG] add splat vector support for 'or' in SimplifyDemandedBits
I've changed one of the tests to not fold away, but we didn't and still don't do the transform
that the comment claims we do (and I don't know why we'd want to do that).
Follow-up to:
https://reviews.llvm.org/rL300725
https://reviews.llvm.org/rL300763
ARMFrameLowering: Reserve emergency spill slot for large arguments
Re-commit after revert in r300668. Changed getMaxFPOffset() to a
more conservative heuristic instead of trying to be clever and missing
cases for some exotic calling conventions.
We need to reserve an emergency spill slot in cases with large argument
types that could overflow immediate offsets for FP relative address
calculations.
[APInt] Cast calls to add/sub/mul overflow methods to void if only their overflow bool out param is used.
This is preparation for a clang change to improve the [[nodiscard]] warning to not be ignored on methods that return a class marked [[nodiscard]] that are defined in the class itself. See D32207.
We should consider adding wrapper methods to APInt that return the overflow flag directly and discard the APInt result. This would eliminate the void casts and the need to create a bool before the call to pass to the out param.
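An illustration of the call-site pattern being prepared for, assuming LHS and RHS are APInts:

  bool Overflow = false;
  (void)LHS.sadd_ov(RHS, Overflow);  // only the overflow flag is wanted; the
                                     // APInt result is deliberately discarded
  if (Overflow) {
    // handle the overflow case
  }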
Eli Friedman [Wed, 19 Apr 2017 20:19:58 +0000 (20:19 +0000)]
[SCEV] Make SCEV or modeling more aggressive.
Use haveNoCommonBitsSet to figure out whether an "or" instruction
is equivalent to addition. This handles more cases than just
checking for a constant on the RHS.
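A minimal sketch of the idea, not the exact SCEV code; BO stands for the "or" instruction being modeled and DL for the module's DataLayout:

  // If the operands of the "or" share no set bits, the "or" is a disjoint
  // add: (a | b) == (a + b) whenever (a & b) == 0. haveNoCommonBitsSet can
  // prove this for more than just a constant mask on the RHS.
  if (haveNoCommonBitsSet(BO->getOperand(0), BO->getOperand(1), DL))
    return getAddExpr(getSCEV(BO->getOperand(0)),
                      getSCEV(BO->getOperand(1)));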
Using address range map to speedup finding inline stack for address.
Summary:
In the current implementation, finding the inline stack for an address incurs an expensive linear search in two places:
* linear search for the top-level DIE
* recursive linear traversal of the DIE tree to find the path to the leaf DIE
In this patch, a map is built from address to its corresponding leaf DIE. The inline stack is built by traversing from the leaf DIE up to the root DIE. This speeds up batch symbolization by ~10X without noticeable memory overhead.
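A hedged sketch of the lookup side; the container choice and names are illustrative rather than the patch's actual data structures, and range-end checks are omitted for brevity:

  #include "llvm/DebugInfo/DWARF/DWARFDie.h"
  #include <cstdint>
  #include <map>
  #include <vector>
  using namespace llvm;

  // Index each leaf DIE by the start of its address range once, then build an
  // inline stack by walking parent links from the leaf toward the root.
  static std::map<uint64_t, DWARFDie> LeafByRangeStart;

  std::vector<DWARFDie> inlineStackFor(uint64_t Addr) {
    std::vector<DWARFDie> Stack;
    auto It = LeafByRangeStart.upper_bound(Addr);
    if (It == LeafByRangeStart.begin())
      return Stack;                          // no range starts at or below Addr
    --It;
    for (DWARFDie D = It->second; D; D = D.getParent())
      Stack.push_back(D);                    // leaf, inlined callers, ... root
    return Stack;
  }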
Matt Arsenault [Wed, 19 Apr 2017 19:38:10 +0000 (19:38 +0000)]
AMDGPU: Don't emit amd_kernel_code_t for callable functions
This is inserted directly in the text section. The relocation
for the function ends up resolving to the beginning of the
amd_kernel_code_t header rather than the actual function
entry point.
Also skip some of the comments for initialization
that only make sense for kernels.
Matt Arsenault [Wed, 19 Apr 2017 18:29:07 +0000 (18:29 +0000)]
StructurizeCFG: Directly invert cmp instructions
The most common case for a branch condition is
a single use compare. Directly invert the branch
predicate rather than adding a lot of xor i1 true
which the DAG will have to fold later.
This produces structurizer output that is nicer to read.
This produces some random changes in codegen because the DAG swaps
branch conditions itself and then does a poor job of dealing with those
inverts.
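A small sketch of the single-use compare case; the surrounding pass logic is omitted and Condition is assumed to be the branch condition Value:

  // For a branch condition that is a single-use compare, flip the predicate
  // in place instead of emitting "xor i1 %cond, true".
  if (auto *Cmp = dyn_cast<CmpInst>(Condition))
    if (Cmp->hasOneUse()) {
      Cmp->setPredicate(Cmp->getInversePredicate());
      return Condition;                // the compare now computes the inverse
    }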
Sanjoy Das [Wed, 19 Apr 2017 18:21:09 +0000 (18:21 +0000)]
[GVN] Don't coerce non-integral pointers to integers or vice versa
Summary:
See http://llvm.org/docs/LangRef.html#non-integral-pointer-type
The NewGVN test does not fail without these changes (perhaps it does not
try to coerce pointers <-> integers to begin with?), but I added the
test case anyway.
[DAG] add splat vector support for 'and' in SimplifyDemandedBits
The patch itself is simple: stop discriminating against vectors in visitAnd() and again in
SimplifyDemandedBits().
Some notes for reference:
1. We're not consistent about calls to SimplifyDemandedBits in the various visitXXX functions.
Sometimes, we check if the RHS is a constant first. Other times (like here), we just dive in.
2. I'd like to break the vector shackles in steps for the sake of risk minimization, but we could
make similar simultaneous changes in other places if we think that would be better.
3. I don't know what the intent of the changed tests in this patch was supposed to be, but since
they wiggled in a positive way, I'm just going with that. :)
4. In the rotate tests, note that we can see through non-splat constants. This is a result of D24253.
5. My motivation for being here now is to make D31944 look better, so this is step 1 of N towards
improving the vector codegen in that patch without writing any actual new code.
Prefer addAttr(Attribute::AttrKind) over the AttributeList overload
This should simplify the call sites, which typically want to tweak one
attribute at a time. It should also avoid creating ephemeral
AttributeLists that live forever.
[APInt] Move the 'return *this' from the slow cases of assignment operators inline. We should let the compiler see that the fast/slow cases both return *this.
I don't think we chain assignments together very often so this shouldn't matter much.
[InstSimplify] fold identity shuffles (recursing if needed)
This patch simplifies the examples from D31509 and D31927 (PR30630) and catches
the basic identity shuffle tests that Zvi recently added.
I'm not sure if we have something like this in DAGCombiner, but we should?
It's worth noting that "MaxRecurse / RecursionLimit" is only 3 on entry at the moment.
We might want to bump that up if there are longer shuffle chains like this in the wild.
For now, we're ignoring shuffles that have undef mask elements because it's not
clear how those should be handled.
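A rough sketch of the base case only; mask undefs and the recursion through chained shuffles described above are not shown:

  #include "llvm/ADT/ArrayRef.h"
  using namespace llvm;

  // A mask that selects lane i of the first operand for every i is an
  // identity shuffle, so the whole shuffle simplifies to that operand.
  static bool isIdentityMaskSketch(ArrayRef<int> Mask) {
    for (unsigned I = 0, E = Mask.size(); I != E; ++I)
      if (Mask[I] != (int)I)
        return false;
    return true;
  }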
but using this function you cannot produce a word with every
bit set to 1 (i.e. leadingBitMask<uint8_t>(8)) because left
shift is undefined when N is greater than or equal to the
number of bits in the word.
This patch provides an efficient, branch-free implementation
that works for all values of N in [0, CHAR_BIT*sizeof(T)].
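One possible branch-free construction that is defined over that whole range, sketched under the assumption that the function produces a mask of the N highest bits; the actual trick used by the patch may differ:

  #include <climits>
  #include <type_traits>

  template <typename T>
  T leadingBitMaskSketch(unsigned N) {   // valid for 0 <= N <= bit width of T
    static_assert(std::is_unsigned<T>::value, "requires an unsigned type");
    const unsigned Bits = CHAR_BIT * sizeof(T);
    // Split the shift distance in two so that neither individual shift ever
    // equals the full bit width, which would be undefined behavior.
    const unsigned Shift = Bits - N;
    return (T)((T)((T)~T(0) << (Shift / 2)) << (Shift - Shift / 2));
  }
  // e.g. leadingBitMaskSketch<uint8_t>(8) == 0xFF,
  //      leadingBitMaskSketch<uint8_t>(3) == 0xE0,
  //      leadingBitMaskSketch<uint8_t>(0) == 0x00.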
Also, make a few changes to allow using the pass in .mir testcases.
Among other things, change the abbreviation from opt-amode to amode-opt,
because otherwise lit would expand the "opt" part to the full path to
the opt binary.