Evgeniy Stepanov [Mon, 26 Oct 2015 18:28:25 +0000 (18:28 +0000)]
[safestack] Fast access to the unsafe stack pointer on AArch64/Android.
Android libc provides a fixed TLS slot for the unsafe stack pointer,
and this change implements direct access to that slot on AArch64 via
__builtin_thread_pointer() + offset.
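For illustration, accessing such a slot from C++ looks roughly like the sketch
below; the slot offset is a made-up placeholder, and the real offset is defined
by bionic's TLS layout, not by this change:

  #include <cstddef>

  // Hypothetical offset of the unsafe stack slot from the thread pointer; the
  // actual value comes from Android's bionic TLS layout.
  constexpr std::ptrdiff_t kUnsafeStackSlotOffset = 8 * sizeof(void *);

  static inline void *getUnsafeStackPtr() {
    // __builtin_thread_pointer() yields the AArch64 thread pointer (TPIDR_EL0);
    // the unsafe stack pointer lives at a fixed offset from it.
    char *TP = static_cast<char *>(__builtin_thread_pointer());
    return *reinterpret_cast<void **>(TP + kUnsafeStackSlotOffset);
  }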
This change also moves more code into TargetLowering and its
target-specific subclasses to get rid of target-specific codegen
in SafeStackPass.
This change does not touch the ARM backend because ARM lowers
__builtin_thread_pointer as a call to __aeabi_read_tp, which is not available
on Android.
The previous iteration of this change was reverted in r250461. This
version leaves the generic, compiler-rt based implementation in
SafeStack.cpp instead of moving it to TargetLoweringBase in order to
allow testing without a TargetMachine.
We were previously overflowing a 32-bit multiply operation when emitting large
(>512MB) bitcode files, resulting in corrupted bitcode. Fix by extending
one of the operands to 64 bits.
There are a few other 32-bit integer types in this code that seem like they
also ought to be extended to 64 bits; this will be done separately.
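The bug pattern, in generic terms (not the actual bitcode writer code), and the
fix of widening one operand:

  #include <cstdint>

  // If both operands are 32-bit, the multiply wraps before the result is ever
  // widened; promoting one operand to 64 bits makes the whole multiply 64-bit.
  uint64_t byteOffset(uint32_t WordIndex) {
    // Buggy: uint64_t Off = WordIndex * 4u;   // 32-bit multiply, can overflow
    return uint64_t(WordIndex) * 4;            // 64-bit multiply, no overflow
  }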
ARM/ELF: Better codegen for global variable addresses.
In PIC mode we were previously computing global variable addresses (or GOT
entry addresses) by adding the PC, the PC-relative GOT displacement and
the GOT-relative symbol/GOT entry displacement. Because the latter two
displacements are fixed, we ended up performing one more addition than
necessary.
This change causes us to compute addresses using a single PC-relative
displacement, resulting in a shorter code sequence. This reduces code size
by about 4% in a recent build of Chromium for Android.
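In rough terms (illustration only, with the displacements written as abstract
values rather than real relocations):

  #include <cstdint>

  // Before: two fixed displacements added at run time (one extra addition).
  uintptr_t addrBefore(uintptr_t PC, intptr_t GOTDisp, intptr_t SymDisp) {
    return PC + GOTDisp + SymDisp; // PC + (GOT - PC) + (sym - GOT)
  }

  // After: the two fixed terms are folded into a single PC-relative displacement.
  uintptr_t addrAfter(uintptr_t PC, intptr_t Disp) {
    return PC + Disp;              // Disp stands for (sym - PC)
  }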
As a result of this change we no longer need to compute the GOT base address
in the ARM backend, which allows us to remove the Global Base Reg pass and
SDAG lowering for the GOT.
We also no longer use the GOT when addressing a symbol which is known
to be defined in the same linkage unit. Specifically, the symbol must have
either hidden visibility or a strong definition in the current module in
order to not use the GOT.
This is a change from the previous behaviour, where we would use the GOT to
address externally visible symbols defined in the same module. I think the
only cases where this could matter involve symbol interposition, but we
don't really support that well anyway.
Cong Hou [Mon, 26 Oct 2015 18:00:17 +0000 (18:00 +0000)]
Check the case where the numerator and denominator are both zero when getting edge probabilities in BPI, and return 100% in that case.
This issue is triggered in PGO mode when bootstrapping LLVM. It appears that edge weights read from profile data are not guaranteed to be greater than zero.
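A minimal sketch of the guard (not the actual BranchProbabilityInfo code):

  #include <cstdint>

  // Edge probability as EdgeWeight / TotalWeight. If both are zero (which can
  // happen with zero profile counts), return 100% instead of dividing by zero.
  double edgeProbability(uint64_t EdgeWeight, uint64_t TotalWeight) {
    if (EdgeWeight == 0 && TotalWeight == 0)
      return 1.0; // 100%
    return double(EdgeWeight) / double(TotalWeight);
  }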
James Molloy [Mon, 26 Oct 2015 14:10:46 +0000 (14:10 +0000)]
[ValueTracking] Extend r251146 to catch a fairly common case
Even though we may not know the value of the shifter operand, it's possible we know the shifter operand is non-zero. This can allow us to infer more known bits - for example:
%1 = load %p !range {1, 5}
%2 = shl %q, %1
We don't know %1, but we do know that it is nonzero so %2[0] is known zero, and importantly %2 is known non-zero.
Calling isKnownNonZero is nontrivially expensive so use an Optional to run it lazily and cache its result.
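A sketch of that lazy pattern, using std::optional as a stand-in for the
llvm::Optional mentioned above:

  #include <optional>
  #include <utility>

  // The expensive query runs at most once, and only if some code path actually
  // needs the answer; the result is cached for subsequent checks.
  struct LazyNonZero {
    std::optional<bool> Cached;
    template <typename Fn> bool get(Fn &&ExpensiveQuery) {
      if (!Cached)
        Cached = std::forward<Fn>(ExpensiveQuery)(); // e.g. isKnownNonZero(...)
      return *Cached;
    }
  };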
Silviu Baranga [Mon, 26 Oct 2015 13:50:06 +0000 (13:50 +0000)]
[SCEV] Fix issues found during the review of r251283. NFC.
Summary:
Replace (const SCEVAddRecExpr *) with cast<SCEVAddRecExpr>.
Rename SCEVApplyRewriter to SCEVLoopAddRecRewriter (which is a more
appropriate name) since the description is "takes a scalar evolution
expression and applies the Map (Loop -> SCEV) to all AddRecExprs."
Vectorization of a memory instruction (load/store) is possible when the pointer comes from a GEP; the GEP analysis allows us to estimate the profit.
In some cases there is a "bitcast" between the GEP and the memory instruction.
I added code that skips the "bitcast".
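A minimal sketch of the lookup in LLVM's C++ API (simplified, not the exact
patch):

  #include "llvm/IR/Instructions.h"
  using namespace llvm;

  // Find the GEP that feeds a load/store pointer operand, looking through a
  // single bitcast if one sits in between.
  static GetElementPtrInst *getUnderlyingGEP(Value *Ptr) {
    if (auto *BC = dyn_cast<BitCastInst>(Ptr))
      Ptr = BC->getOperand(0); // skip the bitcast
    return dyn_cast<GetElementPtrInst>(Ptr);
  }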
Summary:
This patch adds support for using the "interrupt" attribute on Mips
for interrupt handling functions. At this time only mips32r2+ with the
o32 ABI and the static relocation model is supported. Unsupported
configurations will be rejected.
Patch by Simon Dardis (+ clang-format & some trivial changes to follow the
LLVM coding standards by me).
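For context, a handler at the source level would look roughly like this; the
attribute spelling is the front end's side of the feature and is shown here as
an assumption, not something defined by this backend patch:

  // Hypothetical MIPS interrupt handler; the backend emits the extra context
  // save/restore and the interrupt-style return for functions marked this way.
  extern volatile int InterruptSeen;

  __attribute__((interrupt)) void isr(void) {
    InterruptSeen = 1;
  }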
Silviu Baranga [Mon, 26 Oct 2015 10:25:05 +0000 (10:25 +0000)]
[InstCombine] Teach instcombine not to create extra PHI nodes when folding GEPs
Summary:
InstCombine tries to transform GEP(PHI(GEP1, GEP2, ..)) into GEP(GEP(PHI(...)))
when possible. However, this may leave the old PHI node around. Even if we
do end up folding the GEPs, having an extra PHI node might not be beneficial.
This change makes the transformation more conservative. We now only do this if
the PHI has only one use, and can therefore be removed after the transformation.
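The gating condition amounts to something like the following sketch
(simplified, not the exact InstCombine code):

  #include "llvm/IR/Instructions.h"
  using namespace llvm;

  // Only fold GEP(PHI(GEP1, GEP2, ...)) when the PHI feeding the outer GEP has
  // a single use, so the old PHI is guaranteed to go away after the transform.
  static bool shouldFoldGEPOfPHI(GetElementPtrInst &GEP) {
    auto *PN = dyn_cast<PHINode>(GEP.getPointerOperand());
    return PN && PN->hasOneUse();
  }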
Igor Breger [Mon, 26 Oct 2015 08:37:12 +0000 (08:37 +0000)]
AVX512: Add AVX-512 not materializable instructions.
Otherwise a value can be reused even though it may have changed, which produces incorrect assembly.
Tobias Grosser [Sun, 25 Oct 2015 22:55:59 +0000 (22:55 +0000)]
RegionInfo: Correctly expand regions
Instead of playing around with dominance to verify whether the possible expansion
of a scop region is indeed a single-entry, single-exit region, we now distinguish
two cases. In case we only append a basic block, all edges entering this basic
block need to have come from within the region that is expanded. In case we join
two regions, the source basic blocks of the edges that end at the entry node of
the region that is appended must be part of either the original region or the
region that is appended.
Scalarizer for masked.gather and masked.scatter intrinsics.
When the target does not support these intrinsics, they should be converted to a chain of scalar load or store operations.
If the mask is not constant, the scalarizer will build a chain of conditional basic blocks.
I added isLegalMaskedGather() and isLegalMaskedScatter() APIs.
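A plain C++ analogue of the scalarized gather (illustration only; the pass
emits IR blocks, not a loop):

  #include <cstddef>

  // One conditional load per lane; with a non-constant mask the pass emits one
  // conditional basic block per element, which has the same effect.
  void scalarGather(int *Result, int *const *Ptrs, const bool *Mask,
                    std::size_t N) {
    for (std::size_t I = 0; I != N; ++I)
      if (Mask[I])
        Result[I] = *Ptrs[I]; // masked-off lanes keep their previous value
  }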
[X86] Use correct calling convention for MCU psABI libcalls
When using the MCU psABI, compiler-generated library calls should pass
some parameters in-register. However, since inreg marking for x86 is currently
done by the front end, it will not be applied to backend-generated calls.
This is a workaround for PR3997, which describes a similar issue for -mregparm.
Davide Italiano [Sat, 24 Oct 2015 22:09:54 +0000 (22:09 +0000)]
[CodeGen] Get rid of NDEBUG to ensure structure stability.
I think it's fine to keep these fields around in terms of overhead;
I wasn't able to measure any substantial regression while running the
test suite, but, in case this causes some regression I'm ready to revert
and work on an alternative solution.
This was tested building with clang/gcc both in Debug and Release mode
and passes the test-suite.
Yaron Keren [Sat, 24 Oct 2015 19:27:28 +0000 (19:27 +0000)]
Add libuuid to required system libraries list for mingw.
This list is produced by llvm-config --system-libs to be used
by external programs using the llvm libraries, such as creduce.
In r250501 llvm/Support/Windows/Path.inc started to use the constant
FOLDERID_Profile from libuuid.
Simon Pilgrim [Sat, 24 Oct 2015 13:17:26 +0000 (13:17 +0000)]
[X86][XOP] Add support for lowering vector rotations
This patch adds support for lowering to the XOP VPROT / VPROTI vector bit rotation instructions.
This has required changes to the DAGCombiner rotation pattern matching to support vector types - so far I've only changed it to support splat vectors, but generalising this further is feasible in the future.
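For reference, the scalar form of the pattern being matched is the usual
shift/or rotation idiom (sketch; the vector case is the same per element):

  #include <cstdint>

  // Rotate-left by a possibly variable amount; the DAGCombiner recognizes this
  // shl/lshr/or shape and, with XOP, the vector form can map to VPROT/VPROTI.
  uint32_t rotl32(uint32_t X, uint32_t R) {
    R &= 31;
    return (X << R) | (X >> ((32 - R) & 31));
  }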
Sanjoy Das [Sat, 24 Oct 2015 05:37:35 +0000 (05:37 +0000)]
Extract out getConstantRangeFromMetadata; NFC
The loop idiom creating a ConstantRange is repeated twice in the
codebase; time to give it a name and a home.
The loop is also repeated in `rangeMetadataExcludesValue`, but using
`getConstantRangeFromMetadata` there would not be an NFC -- the range
returned by `getConstantRangeFromMetadata` may contain a value that none
of the subranges did.
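The repeated idiom is roughly the following (a sketch against the IR metadata
API, not necessarily the exact signature of the new helper):

  #include "llvm/IR/ConstantRange.h"
  #include "llvm/IR/Constants.h"
  #include "llvm/IR/Metadata.h"
  using namespace llvm;

  // Union of all [Lo, Hi) pairs stored in a !range metadata node. Note that the
  // union itself may cover values that none of the individual pairs did.
  static ConstantRange rangeFromMD(const MDNode &Ranges, unsigned BitWidth) {
    ConstantRange CR(BitWidth, /*isFullSet=*/false); // start with the empty set
    for (unsigned I = 0, E = Ranges.getNumOperands(); I < E; I += 2) {
      auto *Lo = mdconst::extract<ConstantInt>(Ranges.getOperand(I));
      auto *Hi = mdconst::extract<ConstantInt>(Ranges.getOperand(I + 1));
      CR = CR.unionWith(ConstantRange(Lo->getValue(), Hi->getValue()));
    }
    return CR;
  }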
Rafael Espindola [Fri, 23 Oct 2015 21:48:05 +0000 (21:48 +0000)]
Add a RAW mode to StringTableBuilder.
In this mode it just tries to tail merge the strings without imposing any other
format constraints. It will not, for example, add a null byte between them.
Also add support for keeping a tentative size and offset if we decide to
not optimize after all.
This will be used shortly in lld for merging SHF_STRINGS sections.
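Tail merging in one line: a string that is a suffix of another reuses the
bigger string's bytes (illustration only, not the StringTableBuilder
implementation):

  #include <cassert>
  #include <cstring>
  #include <string>

  int main() {
    // "bar" is a suffix of "foobar", so in a tail-merged table it gets no copy
    // of its own: its offset simply points three bytes before the end. In RAW
    // mode there is also no NUL separator between entries.
    std::string Table = "foobar";
    std::size_t BarOff = Table.size() - 3;
    assert(std::memcmp(Table.data() + BarOff, "bar", 3) == 0);
    return 0;
  }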
Hal Finkel [Fri, 23 Oct 2015 20:37:08 +0000 (20:37 +0000)]
Handle non-constant shifts in computeKnownBits, and use computeKnownBits for constant folding in InstCombine/Simplify
First, the motivation: LLVM currently does not realize that:
((2072 >> (L == 0)) >> 7) & 1 == 0
where L is some arbitrary value. Whether you right-shift 2072 by 7 or by 8, the
lowest-order bit is always zero. There are obviously several ways to go about
fixing this, but the generic solution pursued in this patch is to teach
computeKnownBits something about shifts by a non-constant amount. Previously,
we would give up completely on these. Instead, in cases where we know something
about the low-order bits of the shift-amount operand, we can combine (and
together) the associated restrictions for all shift amounts consistent with
that knowledge. As a further generalization, I refactored all of the logic for
all three kinds of shifts to have this capability. This works well in the above
case, for example, because the dynamic shift amount can only be 0 or 1, and
thus we can say a lot about the known bits of the result.
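A quick sanity check of the motivating expression (not compiler code): 2072 is
0b100000011000, so shifting right by 7 gives 16 and by 8 gives 8; bit 0 is zero
either way.

  #include <cassert>

  int main() {
    // L is arbitrary, so (L == 0) is either 0 or 1 and the total shift is 7 or 8.
    for (int LIsZero = 0; LIsZero <= 1; ++LIsZero)
      assert((((2072 >> LIsZero) >> 7) & 1) == 0);
    return 0;
  }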
This brings us to the second part of this change: Even when we know all of the
bits of a value via computeKnownBits, nothing used to constant-fold the result.
This introduces the necessary code into InstCombine and InstSimplify. I've
added it into both because:
1. InstCombine won't automatically pick up the associated logic in
InstSimplify (InstCombine uses InstSimplify, but not via the API that
passes in the original instruction).
2. Putting the logic in InstCombine allows the resulting simplifications to become
part of the iterative worklist.
3. Putting the logic in InstSimplify allows the resulting simplifications to be
used by everywhere else that calls SimplifyInstruction (inlining, unrolling,
and many others).
And this requires a small change to our definition of an ephemeral value so
that we don't break the test case from r246696 (where the icmp feeding the
@llvm.assume is also feeding a br). Under the old definition, the icmp would
not be considered ephemeral (because it is used by the br), but this causes the
assume to remove itself (in addition to simplifying the branch structure), and
it seems more useful to prevent that from happening.
Tim Northover [Fri, 23 Oct 2015 20:30:02 +0000 (20:30 +0000)]
GVN: don't try to replace instruction with itself.
After some look-ahead PRE was added for GEPs, an instruction could end
up in the table of candidates before it was actually inspected. When
this happened the pass might decide it was the best candidate to
replace itself. This didn't go well.
Sanjoy Das [Fri, 23 Oct 2015 20:09:57 +0000 (20:09 +0000)]
[SCEV] Fix stylistic issue in MatchBinaryAddToConst; NFCI
Instead of checking `(FlagsPresent & ExpectedFlags) != 0`, check
`(FlagsPresent & ExpectedFlags) == ExpectedFlags`. Right now they're
equivalent since `ExpectedFlags` can only be either `FlagNUW` or
`FlagNSW`, but if we ever pass in `ExpectedFlags` as `FlagNUW | FlagNSW`
then checking `(FlagsPresent & ExpectedFlags) != 0` would be wrong.
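A self-contained illustration of the difference, using generic flag values
rather than SCEV's actual enumerators:

  enum : unsigned { FlagNUW = 1u << 0, FlagNSW = 1u << 1 };

  // With ExpectedFlags == (FlagNUW | FlagNSW), "any bit set" would accept a
  // value carrying only one of the two flags; masking and comparing for
  // equality requires that every expected flag is present.
  bool hasAllExpected(unsigned FlagsPresent, unsigned ExpectedFlags) {
    return (FlagsPresent & ExpectedFlags) == ExpectedFlags;
  }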
Sanjoy Das [Fri, 23 Oct 2015 20:09:55 +0000 (20:09 +0000)]
[Inliner] Don't inline through callsites with operand bundles
Summary:
This change teaches the LLVM inliner to not inline through callsites
with unknown operand bundles. Currently all operand bundles are
"unknown" operand bundles but in the near future we will add support for
inlining through some select kinds of operand bundles.
Reid Kleckner [Fri, 23 Oct 2015 19:35:38 +0000 (19:35 +0000)]
[X86] Clean up the tail call eligibility logic
Summary:
The logic here isn't straightforward because of our support for
TargetOptions::GuaranteedTailCallOpt.
Also fix a bug where we were allowing tail calls to cdecl functions from
fastcall and vectorcall functions. We were special casing thiscall and
stdcall callers rather than checking for any convention that requires
clearing stack arguments before returning.
Oleg Ranevskyy [Fri, 23 Oct 2015 17:17:59 +0000 (17:17 +0000)]
[ARM CodeGen] @llvm.debugtrap call may be removed when restoring callee saved registers
Summary:
When ARMFrameLowering::emitPopInst generates a "pop" instruction to restore the callee saved registers, it checks if the LR register is among them. If so, the function may decide to remove the basic block's terminator and replace it with a "pop" to the PC register instead of LR.
This leads to a problem when the block's terminator is preceded by an "llvm.debugtrap" call. In such a case the MI iterator points to the trap, which is also a terminator. If the function decides to restore LR to PC, it erroneously removes the trap.
Joseph Tremoulet [Fri, 23 Oct 2015 15:06:05 +0000 (15:06 +0000)]
[CodeGen] Mark setjmp/catchret MBBs address-taken
Summary:
This ensures that BranchFolding (and similar) won't remove these blocks.
Also allow AsmPrinter::EmitBasicBlockStart to process MBBs which are
address-taken but do not have BBs that are address-taken, since otherwise
its call to getAddrLabelSymbolTableToEmit would fail an assertion on such
blocks. I audited the other callers of getAddrLabelSymbolTableToEmit
(and getAddrLabelSymbol); they all have BBs known to be address-taken
except for the call through getAddrLabelSymbol from
WinException::create32bitRef; that call is actually now unreachable, so
I've removed it and updated the signature of create32bitRef.
James Molloy [Fri, 23 Oct 2015 14:17:03 +0000 (14:17 +0000)]
[BasicAA] Bugfix for r251016
If the loaded type sizes don't match the element type of the sequential type, all bets are off and the addresses may, indeed, overlap.
Surprisingly, this just got caught in one test, on one builder, out of the 30+ builders testing this change. Congratulations go to http://lab.llvm.org:8011/builders/clang-aarch64-lnt/builds/5205.