This new class in a global context contains arch-specific knowledge in order
to provide LLVM libraries, tools and projects with the ability to understand
the architectures. For now, only FPU, ARCH and ARCH extensions on ARM are
supported.
The current behaviour is to parse from free-text to enum values and back, so that
all users can share the same parser and codes. This considerably simplifies both
the ASM/Obj streamers in the back-end (where this came from) and the front-end
parsers for command-line arguments (where this is going to be used next).
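As a rough illustration of that parse/unparse round trip (the enum values, table entries and function names below are invented for the example and are not the actual TargetParser API):

  #include <cstring>

  // Illustrative only: a shared table maps free text to enum values and back,
  // so the back-end streamers and the front-end option parsers agree.
  enum class FPUKind { Invalid, VFPV3, NEON, FPV5_SP_D16 };

  struct FPUEntry {
    const char *Name;
    FPUKind Kind;
  };

  static const FPUEntry FPUTable[] = {
      {"vfpv3", FPUKind::VFPV3},
      {"neon", FPUKind::NEON},
      {"fpv5-sp-d16", FPUKind::FPV5_SP_D16},
  };

  // Free text -> enum.
  FPUKind parseFPU(const char *Name) {
    for (const FPUEntry &E : FPUTable)
      if (std::strcmp(E.Name, Name) == 0)
        return E.Kind;
    return FPUKind::Invalid;
  }

  // Enum -> free text.
  const char *getFPUName(FPUKind Kind) {
    for (const FPUEntry &E : FPUTable)
      if (E.Kind == Kind)
        return E.Name;
    return nullptr;
  }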
The previous implementation, using .def/.h includes, is deprecated because it
cannot be built without backend support and is too cumbersome. As more
architectures join this scheme, and as more features of those architectures
(such as hardware features, type sizes, etc.) are added towards a full-blown
TargetDescription class, having a set of classes is the sanest implementation.
The ultimate goal of this refactor is to merge both LLVM's and Clang's target
description classes into one unified interface, so that we can de-duplicate and
standardise the descriptions, as well as make them available to other
front-ends, tools, etc.
The FPU parsing for command line options in Clang has been converted to use
this new library and a number of aliases were added for compatibility:
* A bogus neon-vfpv3 alias (neon defaults to vfp3)
* armv5/v6
* {fp4/fp5}-{sp/dp}-d16
Next steps:
* Port Clang's ARCH/EXT parsing to use this library.
* Create a TableGen back-end to generate this information.
* Run this TableGen process regardless of which back-ends are built.
* Expose more information and rename it to TargetDescription.
* Continue re-factoring Clang to use as much of it as possible.
Pete Cooper [Fri, 8 May 2015 20:46:54 +0000 (20:46 +0000)]
[Fast-ISel] Clear kill flags on registers replaced by updateValueMap.
When selecting an extract instruction, we don't actually generate code; instead we work out which register we are reading, and rewrite uses of the extract def to the source register. This is done via updateValueMap.
However, it's possible for the source register we are rewriting *to* to also have uses. If those uses come after a kill of the value we are rewriting *from*, then we have uses after a kill and the verifier fails.
This code checks for the case where the to register is also used, and if so it clears all kill flags on the from register. This is conservative, but better than always clearing kills on the from register.
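A minimal sketch of that conservative rule, assuming the usual MachineRegisterInfo helpers (this is not the exact FastISel code):

  // Sketch only: if the register we rewrite *to* already has other uses, drop
  // every kill flag on the register we rewrite *from* so no use appears after
  // a kill.
  #include "llvm/CodeGen/MachineRegisterInfo.h"
  using namespace llvm;

  static void clearKillsIfToRegIsUsed(MachineRegisterInfo &MRI,
                                      unsigned FromReg, unsigned ToReg) {
    if (!MRI.use_empty(ToReg))
      MRI.clearKillFlags(FromReg);
  }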
Sanjoy Das [Fri, 8 May 2015 18:58:55 +0000 (18:58 +0000)]
[BasicAA] Fix zext & sext handling
Summary:
There are several unhandled edge cases in BasicAA's GetLinearExpression
method. This change fixes outstanding issues, including zext / sext of
a constant with the sign bit set, and the refusal to decompose zexts or
sexts of wrapping arithmetic.
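For intuition, a small standalone example (not LLVM code) of the "constant with the sign bit set" case: zero-extension and sign-extension of the same constant produce different wide values, so the decomposed offset cannot simply be sign-extended.

  #include <cstdint>
  #include <cstdio>

  int main() {
    int8_t C = -1;               // 0xFF: the sign bit is set
    uint32_t ZExt = (uint8_t)C;  // zext i8 -1 to i32 gives 255
    int32_t SExt = C;            // sext i8 -1 to i32 gives -1 (0xFFFFFFFF)
    std::printf("zext = %u, sext = %d\n", ZExt, SExt);
    return 0;
  }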
However, the copy here should only have the kill flag on the 32-bit path, not the 64-bit one.
Otherwise, we are killing the source of the truncate which could be used later in the program.
This extension offers the backend the opportunity to insert (somewhat) arbitrary code to manage the transition from GC-aware code to code that is not GC-aware and back.
In order to support the injection of transition code, this extension wraps the STATEPOINT ISD node generated by the usual lowering with two additional nodes: GC_TRANSITION_START and GC_TRANSITION_END. The transition arguments that were passed to the intrinsic (if any) are lowered and provided as operands to these nodes, and may be used by the backend during code generation.
Eventually, the lowering of the GC_TRANSITION_{START,END} nodes should be informed by the GC strategy in use for the function containing the intrinsic call; for now, these nodes are instead replaced with no-ops.
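A rough, simplified sketch of the wrapping (the helper and its parameters are assumed; the real code lives in SelectionDAGBuilder and also threads glue and the statepoint operands through):

  // Sketch only: wrap the statepoint's chain with GC_TRANSITION_START/END and
  // pass the lowered transition arguments as extra operands.
  #include "llvm/ADT/STLExtras.h"
  #include "llvm/CodeGen/SelectionDAG.h"
  using namespace llvm;

  static SDValue wrapWithGCTransition(SelectionDAG &DAG, const SDLoc &DL,
                                      SDValue Chain,
                                      ArrayRef<SDValue> TransitionArgs,
                                      function_ref<SDValue(SDValue)> EmitStatepoint) {
    SmallVector<SDValue, 8> Ops;
    Ops.push_back(Chain);
    Ops.append(TransitionArgs.begin(), TransitionArgs.end());
    Chain = DAG.getNode(ISD::GC_TRANSITION_START, DL, MVT::Other, Ops);

    Chain = EmitStatepoint(Chain);  // the usual STATEPOINT lowering goes here

    Ops.clear();
    Ops.push_back(Chain);
    Ops.append(TransitionArgs.begin(), TransitionArgs.end());
    return DAG.getNode(ISD::GC_TRANSITION_END, DL, MVT::Other, Ops);
  }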
Brendon Cahoon [Fri, 8 May 2015 16:16:29 +0000 (16:16 +0000)]
[Hexagon] Update AnalyzeBranch, etc target hooks
Improved the AnalyzeBranch, InsertBranch, and RemoveBranch
functions in order to handle more of our branch instructions.
This requires changes to analyzeCompare and PredicateInstructions.
Specifically, we've added support for new value compare jumps,
improved handling of endloop, added more compare instructions,
and improved support for predicate instructions.
[X86] Teach 'getTargetShuffleMask' how to look through ISD::WrapperRIP when decoding a PSHUFB mask.
The function 'getTargetShuffleMask' already knows how to deal with PSHUFB nodes
where the mask node is a load from constant pool, and the constant pool node
is wrapped by a X86ISD::Wrapper node. This patch extends that logic by teaching
it how to also look through X86ISD::WrapperRIP.
This helps combineX86ShufflesRecursively combine more shuffle
sequences containing PSHUFB nodes if we are in RIPRel PIC mode.
Before this change, llc (with -relocation-model=pic -march=x86-64) was unable
to decode a pshufb where the mask was loaded from a constant pool. For example,
the no-op shuffle from test 'x86-fold-pshufb.ll' was not folded into its
operand, so instead of generating a single 'movaps' the backend always
generated a sub-optimal 'movdqa + pshufb' sequence.
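In spirit the change is a "look through the wrapper" step before inspecting the mask load; a rough fragment (the surrounding getTargetShuffleMask context and the variable N are assumed):

  // Sketch only: in RIP-relative PIC mode the constant-pool address is wrapped
  // in X86ISD::WrapperRIP rather than X86ISD::Wrapper, so peel either wrapper
  // before checking for the constant-pool load that holds the PSHUFB mask.
  SDValue MaskNode = N->getOperand(1);
  while (MaskNode.getOpcode() == X86ISD::Wrapper ||
         MaskNode.getOpcode() == X86ISD::WrapperRIP)
    MaskNode = MaskNode.getOperand(0);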
James Y Knight [Fri, 8 May 2015 13:47:01 +0000 (13:47 +0000)]
Fix alignment checks in MergeConsecutiveStores.
1) check whether the alignment of the memory is sufficient for the
*merged* store or load to be efficient.
Not doing so can result in some ridiculously poor code generation, if
merging creates a vector operation which must be aligned but isn't.
2) DON'T check that the alignment of each load/store is equal. If
you're merging 2 4-byte stores, the first *might* have 8-byte
alignment, but the second certainly will have 4-byte alignment. We do
want to allow those to be merged.
John Brawn [Fri, 8 May 2015 12:52:02 +0000 (12:52 +0000)]
[ARM] Reject invalid -march values
Restructure Triple::getARMCPUForArch so that invalid values will
return nullptr, while retaining the behaviour that an argument
specifying no particular architecture version will give a default
CPU. This will be used by clang to give an error on invalid -march
values.
Also restructure the extraction of the architecture version from
the MArch string a little to hopefully make what it's doing clearer.
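The shape of the new contract, as a rough sketch (the real logic is Triple::getARMCPUForArch and handles many more architecture names; the CPU names here are illustrative):

  // Sketch only: nullptr now means "invalid -march value", while an empty
  // MArch still produces a default CPU.
  #include "llvm/ADT/StringRef.h"
  using namespace llvm;

  static const char *getARMCPUForArchSketch(StringRef MArch) {
    if (MArch.empty())
      return "arm7tdmi";     // no architecture version given: default CPU
    if (MArch == "armv7-a")
      return "cortex-a8";
    // ... more recognised architecture versions ...
    return nullptr;          // unrecognised: clang can now report an error
  }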
Igor Laevsky [Fri, 8 May 2015 11:59:09 +0000 (11:59 +0000)]
This change is refactoring only. It moves basic block normalization for invokes so that it happens before the replacement of a call with a safepoint in "ReplaceWithStatepoint". Previously it was done partly before the call replacement and partly after it, but before the RAUWs for the gc_relocates, which was confusing.
[mips] Emit the .insn directive for empty basic blocks.
Summary:
In microMIPS, labels need to know whether they are on code or data. This is
indicated with STO_MIPS_MICROMIPS and can be inferred by being followed
by instructions. For empty basic blocks, we can ensure this by emitting the
.insn directive after the label.
Also, this fixes some failures in our out-of-tree microMIPS buildbots, for the
exception handling regression tests under: SingleSource/Regression/C++/EH
Simon Atanasyan [Fri, 8 May 2015 07:05:04 +0000 (07:05 +0000)]
[yaml2elf] Replace error message by assert call in writeSectionContent methods
Now the caller of the ELFState::writeSectionContent() methods is responsible for
checking the section type and selecting the appropriate writeSectionContent
method. An unexpected section type inside a writeSectionContent method therefore
indicates incorrect usage of the method and should be guarded by an assert.
Pete Cooper [Thu, 7 May 2015 21:48:26 +0000 (21:48 +0000)]
Clear kill flags in tail duplication.
If we duplicate an instruction then we must also clear kill flags on any uses we rewrite.
Otherwise we might be killing a register which was used in other BBs.
For example, here the entry BB ended up with these instructions, the ADD having been tail duplicated.
[lib/Fuzzer] change the way we use taint information for fuzzing. Now, we run a single unit and collect suggested mutations based on tracing+taint data, then apply the suggested mutations one by one. The previous scheme was slower and more complex.
Alex Lorenz [Thu, 7 May 2015 18:48:48 +0000 (18:48 +0000)]
Fix r236754: Add the missing yaml-bench dir to the makefile for utils.
This commit adds the missing yaml-bench utility to the
makefile in utils. It was missing before, which caused
the regression tests to fail on some buildbots after I
committed r236754: llvm-lit couldn't find yaml-bench when
llvm was built without cmake.
Add VSX Scalar loads and stores to the PPC back end
This patch corresponds to review:
http://reviews.llvm.org/D9440
It adds a new register class to the PPC back end to contain single precision
values in VSX registers. Additionally, it adds scalar loads and stores for
VSX registers.
Alex Lorenz [Thu, 7 May 2015 18:08:46 +0000 (18:08 +0000)]
YAML: Enable the YAMLParser tests.
This commit enables the tests located in test/YAMLParser directory.
Those tests were never actually enabled, as llvm-lit didn't pick up the
files with the 'data' extension. The commit renames those test files to files
with the 'test' extension so that llvm-lit would find them.
This commit also modifies yaml-bench so that it returns an error status
if an error occurred during parsing. It also adds the '-use-color'
command line option to yaml-bench (to make sure that FileCheck matches
the error messages in the output stream).
This commit modifies some of the renamed tests so that they wouldn't
fail. It gets rid of XFAILs and uses the 'not' command instead for
some of the tests that have to fail during parsing. This commit
also adds some 'FIXME' comments to a couple of tests that are
supposed to fail but currently pass because of various bugs
in the implementation of the yaml parser.
Diego Novillo [Thu, 7 May 2015 17:22:06 +0000 (17:22 +0000)]
Fix information loss in branch probability computation.
Summary:
This addresses PR 22718. When branch weights are too large, they were
being clamped to the range [1, MaxWeightForBB]. But this clamping is
only applied to edges that go outside the range, so it distorts the
relative branch probabilities.
This patch changes the weight calculation to scale every branch so that the
relative probabilities are preserved. The scaling is done differently now:
first, all the branch weights are added up, and if the sum exceeds 32 bits, an
integer scale is computed to bring all the weights within the range.
The patch fixes an existing test that had slightly wrong branch
probabilities due to the previous clamping. It now gets branch weights
scaled accordingly.
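A minimal standalone sketch of the scaling step (not the actual BranchProbabilityInfo code; it assumes each individual weight already fits in 32 bits, as branch_weights metadata does):

  #include <algorithm>
  #include <cstdint>
  #include <vector>

  // Divide every weight by a common integer scale so the sum fits in 32 bits,
  // preserving the relative probabilities instead of clamping outliers.
  void scaleBranchWeights(std::vector<uint64_t> &Weights) {
    uint64_t Sum = 0;
    for (uint64_t W : Weights)
      Sum += W;
    if (Sum <= UINT32_MAX)
      return;                               // already representable
    uint64_t Scale = Sum / UINT32_MAX + 1;  // common divisor for every edge
    for (uint64_t &W : Weights)
      W = std::max<uint64_t>(1, W / Scale); // keep every weight >= 1
  }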
Populate list of vectorizable functions for Accelerate library.
Summary:
This patch adds the majority of the functions supported by the Accelerate
library to the list of vectorizable functions.
The full list of available vector functions can be found here:
https://developer.apple.com/library/mac/documentation/Performance/Conceptual/vecLib/index.html
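As an illustration of what such a mapping looks like (the struct and the entries below are examples, not the exact table added to TargetLibraryInfo):

  // Illustrative only: each entry pairs a scalar libm call with an Accelerate
  // vector routine and the vectorization factor at which the swap is legal.
  struct VecEntry {
    const char *ScalarName;
    const char *VectorName;
    unsigned VectorizationFactor;
  };

  static const VecEntry AccelerateExamples[] = {
      {"ceilf", "vceilf", 4},
      {"expf", "vexpf", 4},
      {"logf", "vlogf", 4},
  };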
Sanjay Patel [Thu, 7 May 2015 16:51:12 +0000 (16:51 +0000)]
Use intrinsic pattern to make a simpler match
This is a follow-on to r236740 where I took Andrea's advice
in D9504 to remove a redundant pattern...except that I removed
the wrong pattern!
AFAICT, there is no change in the final code produced because
subsequent passes would clean up the extra instructions created
by the more complicated pattern.
Steven Wu [Thu, 7 May 2015 16:20:51 +0000 (16:20 +0000)]
Fix another hang caused by ManagedStatic in SignalHandler
Fix two other variables that might cause the same hang fixed in r235914.
The hang is caused by constructing a ManagedStatic in the signal handler. In
this case, if FileToRemove or CallBacksToRun has not been constructed, it means
there is no work to do.
Sanjay Patel [Thu, 7 May 2015 15:48:53 +0000 (15:48 +0000)]
[x86] eliminate unnecessary shuffling/moves with unary scalar math ops (PR21507)
Finish the job that was abandoned in D6958 following the refactoring in
http://reviews.llvm.org/rL230221:
1. Uncomment the intrinsic def for the AVX r_Int instruction.
2. Add missing r_Int entries to the load folding tables; there are already
tests that check these in "test/CodeGen/X86/fold-load-unops.ll", so I
haven't added any more in this patch.
3. Add patterns to solve PR21507 ( https://llvm.org/bugs/show_bug.cgi?id=21507 ).
After r236617, branch probabilities are no longer guaranteed to be >= 1. This
patch makes the switch lowering code handle that correctly, without bumping the
branch weights by 1, which might cause overflow and skew the probabilities.
Covered by @zero_weight_tree in test/CodeGen/X86/switch.ll.
AVX-512: Added all forms of FP compare instructions for KNL and SKX.
Added intrinsics for the instructions. The CC parameter of the intrinsics was changed from i8 to i32 according to the spec.
Philip Reames [Thu, 7 May 2015 00:19:14 +0000 (00:19 +0000)]
[JumpThreading] Simplify comparisons when simplifying branches
If we have recognized that a conditional is constant at a particular location in the code (while trying to decide whether we can simplify a conditional branch), we can eagerly replace that condition with a constant if its definition is post-dominated by the branch in question.
In practice, this ends up being a compile-time saving at most. JumpThreading would have visited each using branch anyway, and CVP would have visited the cmp itself again. Unless LVI gives up early, we shouldn't gain any additional power by doing this transformation early. What we do gain is simplicity and compile time.
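A rough sketch of the idea (the variables LVI, PDT, BB and Condition are assumed from the surrounding JumpThreading context; the real code differs in detail):

  // Sketch only: once LVI proves the condition is a constant at this branch,
  // and the branch post-dominates the condition's definition, every use of the
  // condition observes that constant, so it can be replaced eagerly.
  if (auto *CondInst = dyn_cast<Instruction>(Condition))
    if (Constant *C = LVI->getConstant(CondInst, BB))
      if (PDT.dominates(BB, CondInst->getParent()))
        CondInst->replaceAllUsesWith(C);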
Akira Hatanaka [Wed, 6 May 2015 23:54:14 +0000 (23:54 +0000)]
Let llc and opt override "-target-cpu" and "-target-features" via command line
options.
This commit fixes a bug in llc and opt where "-mcpu" and "-mattr" wouldn't
override function attributes "-target-cpu" and "-target-features" in the IR.
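Roughly what the override amounts to, as a simplified sketch (the helper name and placement are assumed; llc and opt have their own plumbing for this):

  // Sketch only: a non-empty -mcpu/-mattr value wins over the IR-level
  // "target-cpu" / "target-features" function attributes.
  #include "llvm/IR/Function.h"
  #include "llvm/IR/Module.h"
  using namespace llvm;

  static void overrideFunctionAttributes(StringRef CPU, StringRef Features,
                                         Module &M) {
    for (Function &F : M) {
      if (!CPU.empty())
        F.addFnAttr("target-cpu", CPU);
      if (!Features.empty())
        F.addFnAttr("target-features", Features);
    }
  }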
Alex Lorenz [Wed, 6 May 2015 23:21:29 +0000 (23:21 +0000)]
YAML: Fix crash in the skip method of KeyValueNode class.
This commit changes the 'skip' method in the 'KeyValueNode' class
to ensure that it doesn't dereference a null pointer when calling
the 'skip' method of its value child node. It also adds a unittest
that ensures that the crash doesn't occur.
This change is motivated by a patch that implements parsing
of YAML block scalars (http://reviews.llvm.org/D9503), as one
of the unittests in that patch triggered this problem.
Pete Cooper [Wed, 6 May 2015 23:19:56 +0000 (23:19 +0000)]
Change typeIncompatible to return an AttrBuilder instead of new-ing an AttributeSet.
This makes use of the new API which can remove attributes from a set given a builder.
This is much faster than creating a temporary set and reduces llc time by about 0.3%, which was all spent creating temporary attribute sets on the context.
Pete Cooper [Wed, 6 May 2015 23:19:43 +0000 (23:19 +0000)]
Add remove method to operate on AttrBuilder instead of AttributeSet.
Prior to this change we would have to construct a temporary AttributeSet (which isn't temporary at all, given that it's allocated on the context) just to contain the attributes in the builder, then call remove on that.
Now we can just remove any attributes from the (lightweight and really temporary) builder itself.
This will be used in a future commit to remove some temporary attribute sets.
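Taken together with the typeIncompatible change above, the intended usage pattern looks roughly like this fragment (ExistingAttrs, Idx and Ty are assumed from the call site; not the exact code):

  // Sketch only: remove the type-incompatible attributes directly on a
  // lightweight AttrBuilder, with no temporary AttributeSet allocated on the
  // context.
  AttrBuilder B(ExistingAttrs, Idx);
  B.remove(AttributeFuncs::typeIncompatible(Ty));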
Justin Bogner [Wed, 6 May 2015 23:19:35 +0000 (23:19 +0000)]
InstrProf: Give coverage its own errors instead of piggybacking on instrprof
Since the coverage mapping reader and the instrprof reader were
emitting a shared set of error codes, the error messages you'd get
back from llvm-cov were ambiguous about what was actually wrong. Add
another error category to fix this.
I've also improved the wording on a couple of the instrprof errors,
for consistency.
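A minimal sketch of what "its own error category" means in practice (the enum values and strings are illustrative, not the exact llvm/ProfileData definitions):

  #include <string>
  #include <system_error>

  // Illustrative only: a coverage-specific error enum and category, separate
  // from the instrprof reader's errors, so llvm-cov can print unambiguous
  // messages.
  enum class coverage_error { success = 0, malformed, unsupported_version };

  class CoverageErrorCategory : public std::error_category {
    const char *name() const noexcept override { return "llvm.coveragemap"; }
    std::string message(int EV) const override {
      switch (static_cast<coverage_error>(EV)) {
      case coverage_error::success:             return "success";
      case coverage_error::malformed:           return "malformed coverage data";
      case coverage_error::unsupported_version: return "unsupported format version";
      }
      return "unknown coverage error";
    }
  };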
Alex Lorenz [Wed, 6 May 2015 23:00:45 +0000 (23:00 +0000)]
YAML: Extract the code that skips a comment into a separate method, NFC.
This commit extracts the code that skips over a YAML comment from
the 'scanToNextToken' method into a separate 'skipComment' method.
This refactoring is motivated by a patch that implements parsing
of YAML block scalars (http://reviews.llvm.org/D9503), as the
method that parses a block scalar reuses the 'skipComment' method.
Pete Cooper [Wed, 6 May 2015 22:51:04 +0000 (22:51 +0000)]
Handle dead defs in the if converter.
We had code such as this:
  r2 = ...
  t2Bcc
label1:
  ldr ... r2
label2:
  return r2<dead, def>
The if converter was transforming this to
  r2<def> = ...
  return [pred] r2<dead,def>
  ldr <r2, kill>
  return
which fails the machine verifier because the ldr now reads from a dead def.
The fix here detects dead defs in stepForward and passes them back to the caller in the clobbers list. The caller then clears the dead flag from the def if the value is live.
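The caller side of the fix looks roughly like this fragment (the clobber pair type and the liveness query valueIsLiveHere are assumptions, not the exact if-converter code):

  // Sketch only: stepForward now also reports clobbered defs; if one of them
  // is marked dead but the value is actually live on this path, clear the
  // dead flag before the verifier sees it.
  SmallVector<std::pair<unsigned, const MachineOperand *>, 4> Clobbers;
  LiveRegs.stepForward(MI, Clobbers);
  for (const auto &Clobber : Clobbers) {
    const MachineOperand *MO = Clobber.second;
    if (MO && MO->isDead() && valueIsLiveHere(Clobber.first))  // hypothetical query
      const_cast<MachineOperand *>(MO)->setIsDead(false);
  }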