Artem Belevich [Wed, 24 Jan 2018 17:41:02 +0000 (17:41 +0000)]
[CUDA] Disable PGO and coverage instrumentation in NVPTX.
NVPTX does not have runtime support necessary for profiling to work
and even call arc collection is prohibitively expensive. Furthermore,
there's no easy way to collect the samples. NVPTX also does not
support global constructors that clang generates if sample/arc collection
is enabled.
Wei Mi [Tue, 23 Jan 2018 23:27:57 +0000 (23:27 +0000)]
Adjust MaxAtomicInlineWidth for i386/i486 targets.
This is to fix the bug reported in https://bugs.llvm.org/show_bug.cgi?id=34347#c6.
Currently, MaxAtomicInlineWidth is set to 64 for all x86-32 targets. However,
i386 doesn't support any cmpxchg-related instructions, and i486 only supports cmpxchg.
So in this patch MaxAtomicInlineWidth is reset as follows:
For i386, MaxAtomicInlineWidth should be 0 because no cmpxchg is supported.
For i486, MaxAtomicInlineWidth should be 32 because it supports cmpxchg.
For other 32-bit x86 CPUs, MaxAtomicInlineWidth should be 64 because of cmpxchg8b.
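For illustration, a minimal C sketch (not part of the patch) of a 64-bit atomic operation whose lowering depends on MaxAtomicInlineWidth:
```
#include <stdatomic.h>

_Atomic long long counter;

int bump(void) {
  long long expected = 0;
  /* With -m32 -march=i486 this 64-bit exchange can no longer be emitted
     inline (i486 has no cmpxchg8b), so it becomes an __atomic_* library
     call; with -march=i586 or later it can still be inlined. */
  return atomic_compare_exchange_strong(&counter, &expected, 1);
}
```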
We would previously treat `SEL` as a pointer-only type. This is not the
case. It should be treated similarly to `id` and `Class`. Add some
test cases to ensure that it will be properly handled as well.
These symbols are supposed to be preserved even by the linker. Use
`llvm.used` to ensure that the symbols are not removed by DCE in the
linker. This should be a no-op change on MachO since the symbols are
annotated as `no_dead_strip`.
George Karpenkov [Tue, 23 Jan 2018 19:28:52 +0000 (19:28 +0000)]
[analyzer] Show full analyzer invocation for reproducibility in HTML reports
Analyzing problems which appear in scan-build results can be very
difficult, as the exact invocation is not stored anywhere after the run,
which makes it very hard to relaunch the analyzer under a debugger.
With this patch, the exact analyzer invocation appears in the footer,
and can be copied to debug, check reproducibility, etc.
AST: adjust ObjC MS mangling to work with typedefs
Rather than hardcode the pointerness of the `id` and `Class` types,
handle them generically. This allows for the template type
specialization of `remove_pointer<id>`, which would look through the `id`
type and deal with the `objc_object` structure without the pointer.
Artem Belevich [Tue, 23 Jan 2018 19:08:18 +0000 (19:08 +0000)]
[CUDA] CUDA has no device-side library builtins.
We should (almost) never consider a device-side declaration to match a
library builtin function. Otherwise clang may ignore the implementation
provided by the CUDA headers and emit clang's idea of the builtin.
The tests are targeting Windows but do not specify an environment. When
executed on Linux, they would use an ELF output rather than the COFF
output. Explicitly provide an environment.
Fedor Sergeev [Tue, 23 Jan 2018 12:24:01 +0000 (12:24 +0000)]
[Solaris] Make RHEL devtoolsets handling Linux-specific
Summary:
This patch is meant to address the last outstanding review comment on the already approved
(but not yet committed) https://reviews.llvm.org/D35755, namely making the handling of the RHEL
devtoolsets Linux-specific.
I don't know whether it's best integrated into the former or applied subsequently.
Tested on i386-pc-solaris2.11 and x86_64-pc-linux-gnu.
[clang-format] Ignore UnbreakableTailLength sometimes during breaking
Summary:
This patch fixes an issue where the UnbreakableTailLength would be counted towards
the length of a token during breaking, even though we can break after the token.
For example, this proto text with column limit 20
```
# ColumnLimit: 20 V
foo: {
  bar: {
    bazoo: "aaaaaaa"
  }
}
```
was broken:
```
# ColumnLimit: 20 V
foo: {
  bar: {
    bazoo:
        "aaaaaaa"
  }
}
```
because the 2 closing `}` were counted towards the string literal's `UnbreakableTailLength`.
Volodymyr Sapsai [Mon, 22 Jan 2018 22:29:24 +0000 (22:29 +0000)]
Reland "[CodeGen] Fix crash when a function taking transparent union is redeclared."
When a function taking a transparent union is declared as taking one of the
union's members earlier in the translation unit, clang would hit an
"Invalid cast" assertion during EmitFunctionProlog. This case
corresponds to function f1 in test/CodeGen/transparent-union-redecl.c.
We decided to cast i32 to the union because after merging the function
declarations the parameter type becomes int and the
CGFunctionInfo::ArgInfo type matches the ABIArgInfo type, so we decide
it is a trivial case. But these types should also be castable to the
parameter declaration type, which is not the case here.
The fix is to convert from the ABIArgInfo type to the VarDecl type, using
argument demotion when necessary.
Additional tests in Sema/transparent-union.c capture current behavior and make
sure there are no regressions.
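For reference, a minimal C sketch of the redeclaration pattern involved (member names are illustrative, not taken from the actual test file):
```
typedef union {
  int *ip;
  float *fp;
} TU __attribute__((transparent_union));

/* The earlier declaration uses one of the union's member types... */
void f1(int *i);

/* ...while the later definition takes the transparent union itself.
   Emitting the function prolog for this definition hit the assertion. */
void f1(TU tu) {}
```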
Chandler Carruth [Mon, 22 Jan 2018 22:05:25 +0000 (22:05 +0000)]
Introduce the "retpoline" x86 mitigation technique for variant #2 of the speculative execution vulnerabilities disclosed today, specifically identified by CVE-2017-5715, "Branch Target Injection", and is one of the two halves to Spectre..
Summary:
First, we need to explain the core of the vulnerability. Note that this
is a very incomplete description, please see the Project Zero blog post
for details:
https://googleprojectzero.blogspot.com/2018/01/reading-privileged-memory-with-side.html
The basis for branch target injection is to direct speculative execution
of the processor to some "gadget" of executable code by poisoning the
prediction of indirect branches with the address of that gadget. The
gadget in turn contains an operation that provides a side channel for
reading data. Most commonly, this will look like a load of secret data
followed by a branch on the loaded value and then a load of some
predictable cache line. The attacker then uses timing of the processor's
cache to determine which direction the branch took *in the speculative
execution*, and in turn what one bit of the loaded value was. Due to the
nature of these timing side channels and the branch predictor on Intel
processors, this allows an attacker to leak data only accessible to
a privileged domain (like the kernel) back into an unprivileged domain.
The goal is simple: avoid generating code which contains an indirect
branch that could have its prediction poisoned by an attacker. In many
cases, the compiler can simply use directed conditional branches and
a small search tree. LLVM already has support for lowering switches in
this way and the first step of this patch is to disable jump-table
lowering of switches and introduce a pass to rewrite explicit indirectbr
sequences into a switch over integers.
However, there is no fully general alternative to indirect calls. We
introduce a new construct we call a "retpoline" to implement indirect
calls in a non-speculatable way. It can be thought of loosely as
a trampoline for indirect calls which uses the RET instruction on x86.
Further, we arrange for a specific call->ret sequence which ensures the
processor predicts the return to go to a controlled, known location. The
retpoline then "smashes" the return address pushed onto the stack by the
call with the desired target of the original indirect call. The result
is a predicted return to the next instruction after a call (which can be
used to trap speculative execution within an infinite loop) and an
actual indirect branch to an arbitrary address.
On 64-bit x86 ABIs, this is especially easy to do in the compiler by
using a guaranteed scratch register to pass the target into this device.
For 32-bit ABIs there isn't a guaranteed scratch register and so several
different retpoline variants are introduced to use a scratch register if
one is available in the calling convention and to otherwise use direct
stack push/pop sequences to pass the target address.
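As a rough illustration (not taken from the patch), the kind of plain indirect call that this mitigation reroutes through a retpoline thunk looks like:
```
/* A minimal C sketch: the function-pointer call below would normally lower
   to an indirect call through a register. With -mretpoline the compiler
   instead routes it through a retpoline thunk, so its target cannot be
   poisoned via the indirect branch predictor. */
void dispatch(void (*handler)(int), int value) {
  handler(value);
}
```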
This "retpoline" mitigation is fully described in the following blog
post: https://support.google.com/faqs/answer/7625886
We also support a target feature that disables emission of the retpoline
thunk by the compiler to allow for custom thunks if users want them.
These are particularly useful in environments like kernels that
routinely do hot-patching on boot and want to hot-patch their thunk to
different code sequences. They can write this custom thunk and use
`-mretpoline-external-thunk` *in addition* to `-mretpoline`. In this
case, on x86-64 the thunk names must be:
```
__llvm_external_retpoline_r11
```
or on 32-bit:
```
__llvm_external_retpoline_eax
__llvm_external_retpoline_ecx
__llvm_external_retpoline_edx
__llvm_external_retpoline_push
```
The target of the retpoline is passed in the named register, or, in the
case of the `push` suffix, on the top of the stack via a `pushl`
instruction.
There is one other important source of indirect branches in x86 ELF
binaries: the PLT. These patches also include support for LLD to
generate PLT entries that perform a retpoline-style indirection.
The only other indirect branches remaining that we are aware of are from
precompiled runtimes (such as crt0.o and similar). The ones we have
found are not really attackable, and so we have not focused on them
here, but eventually these runtimes should also be replicated for
retpoline-ed configurations for completeness.
For kernels or other freestanding or fully static executables, the
compiler switch `-mretpoline` is sufficient to fully mitigate this
particular attack. For dynamic executables, you must compile *all*
libraries with `-mretpoline` and additionally link the dynamic
executable and all shared libraries with LLD and pass `-z retpolineplt`
(or use similar functionality from some other linker). We strongly
recommend also using `-z now` as non-lazy binding allows the
retpoline-mitigated PLT to be substantially smaller.
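A hypothetical invocation sketch of the flags described above (file names are placeholders):
```
# compile every object (and every library) with retpoline enabled
clang++ -O2 -mretpoline -c app.cpp -o app.o
# link with LLD and a retpoline-aware, non-lazy PLT
clang++ -fuse-ld=lld -Wl,-z,retpolineplt -Wl,-z,now app.o -o app
```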
When manually applying transformations similar to `-mretpoline` to the
Linux kernel we observed very small performance hits to applications
running typical workloads, and relatively minor hits (approximately 2%)
even for extremely syscall-heavy applications. This is largely due to
the small number of indirect branches that occur in performance
sensitive paths of the kernel.
When using these patches on statically linked applications, especially
C++ applications, you should expect to see a much more dramatic
performance hit. For microbenchmarks that are switch, indirect-, or
virtual-call heavy we have seen overheads ranging from 10% to 50%.
However, real-world workloads exhibit substantially lower performance
impact. Notably, techniques such as PGO and ThinLTO dramatically reduce
the impact of hot indirect calls (by speculatively promoting them to
direct calls) and allow optimized search trees to be used to lower
switches. If you need to deploy these techniques in C++ applications, we
*strongly* recommend that you ensure all hot call targets are statically
linked (avoiding PLT indirection) and use both PGO and ThinLTO. Well
tuned servers using all of these techniques saw 5% - 10% overhead from
the use of retpoline.
We will add detailed documentation covering these components in
subsequent patches, but wanted to make the core functionality available
as soon as possible. Happy for more code review, but we'd really like to
get these patches landed and backported ASAP for obvious reasons. We're
planning to backport this to both 6.0 and 5.0 release streams and get
a 5.0 release with just this cherry picked ASAP for distros and vendors.
This patch is the work of a number of people over the past month: Eric, Reid,
Rui, and myself. I'm mailing it out as a single commit due to the time
sensitive nature of landing this and the need to backport it. Huge thanks to
everyone who helped out here, and everyone at Intel who helped out in
discussions about how to craft this. Also, credit goes to Paul Turner (at
Google, but not an LLVM contributor) for much of the underlying retpoline
design.
Ilya Biryukov [Mon, 22 Jan 2018 17:18:28 +0000 (17:18 +0000)]
[CodeComplete] Fix completion in the middle of idents in macro calls
Summary:
This patch removes IdentifierInfo from the completion token after remembering
the identifier in the preprocessor.
Prior to this patch, the completion token had its IdentifierInfo set to null when
completing at the start of an identifier, and to the II of the completion prefix
when in the middle of an identifier.
This patch unifies how the code completion token is handled when it is inserted
before the identifier and in the middle of the identifier.
The actual IdentifierInfo can still be obtained from the Preprocessor.
Raphael Isemann [Mon, 22 Jan 2018 15:27:25 +0000 (15:27 +0000)]
[modules] Correctly overload getModule in the MultiplexExternalSemaSource
Summary:
The MultiplexExternalSemaSource doesn't correctly overload the `getModule` function,
causing the multiplexer to not forward this call as intended.
Devin Coughlin [Sat, 20 Jan 2018 23:11:17 +0000 (23:11 +0000)]
[analyzer] Provide a check name when MallocChecker enables CStringChecker
Fix an assertion failure caused by a missing CheckName. The malloc checker
enables "basic" support in the CStringChecker, which causes some CString
bounds checks to be enabled. In this case, make sure that we have a
valid CheckName for the BugType.
Craig Topper [Sat, 20 Jan 2018 18:36:06 +0000 (18:36 +0000)]
[X86] Put the code that defines __GCC_HAVE_SYNC_COMPARE_AND_SWAP_16 for the preprocessor with the other __GCC_HAVE_SYNC_COMPARE_AND_SWAP_* defines. NFC
Kamil Rytarowski [Sat, 20 Jan 2018 01:03:45 +0000 (01:03 +0000)]
Link sanitized programs on NetBSD with -lkvm
Summary:
kvm - kernel memory interface
The kvm(3) functions like kvm_open(), kvm_getargv() or kvm_getenvv()
are used in programs that can request information about a kernel and
its processes. The LLVM sanitizers will make use of them on NetBSD.
Volodymyr Sapsai [Fri, 19 Jan 2018 23:41:47 +0000 (23:41 +0000)]
[Lex] Fix crash on code completion in comment in included file.
This fixes PR32732 by updating CurLexerKind to reflect available lexers.
We were hitting a null pointer in Preprocessor::Lex because CurLexerKind
was CLK_Lexer but CurLexer was null. We set it to null in
Preprocessor::HandleEndOfFile when exiting a file with a code completion
point.
To reproduce the crash it is important for a comment to be inside a
class specifier. In this case in Parser::ParseClassSpecifier we improve
error recovery by pushing a semicolon token back into the preprocessor
and later on try to lex a token because we haven't reached the end of
file.
Also, clang crashes only on code completion in an included file, i.e. when
IncludeMacroStack is not empty, though we reset CurLexer even if the include
stack is empty. The difference is that while pushing back a semicolon
token, the preprocessor calls EnterCachingLexMode, which decides it is
already in caching mode because the various lexers are null and
IncludeMacroStack is not empty. As a result, CurLexerKind remains
CLK_Lexer instead of being updated to CLK_CachingLexer.
Richard Trieu [Fri, 19 Jan 2018 20:46:19 +0000 (20:46 +0000)]
Allow BlockDecl in CXXRecord scope to have no access specifier.
Using a BlockDecl in a default member initializer causes it to be attached to
CXXMethodDecl without its access specifier being set. This prevents a crash
where getAccess is called on this BlockDecl, since that method expects any
Decl in CXXRecord scope to have an access specifier.
Don Hinton [Fri, 19 Jan 2018 18:31:12 +0000 (18:31 +0000)]
[cmake] Also pass CMAKE_ASM_COMPILER_ID to next stage when bootstrapping
Summary:
When setting CMAKE_ASM_COMPILER=clang, we also need to set
CMAKE_ASM_COMPILER_ID=Clang.
This is needed because cmake won't set CMAKE_ASM_COMPILER_ID if
CMAKE_ASM_COMPILER is already set.
Without CMAKE_ASM_COMPILER_ID, cmake can't set
CMAKE_ASM_COMPILER_OPTIONS_TARGET either, which means
CMAKE_ASM_COMPILER_TARGET is ignored, causing cross compiling to fail,
i.e., `--target=${CMAKE_ASM_COMPILER_TARGET}` isn't passed.
Daniel Neilson [Fri, 19 Jan 2018 17:12:54 +0000 (17:12 +0000)]
Change memcpy/memmove/memset to have dest and source alignment attributes (Step 1).
Summary:
Upstream LLVM is changing the prototypes of the @llvm.memcpy/memmove/memset
intrinsics. This change updates the Clang tests accordingly.
The @llvm.memcpy/memmove/memset intrinsics currently have an explicit argument
which is required to be a constant integer. It represents the alignment of the
dest (and source), and so must be the minimum of the actual alignment of the
two.
This change removes the alignment argument in favour of placing the alignment
attribute on the source and destination pointers of the memory intrinsic call.
For example, code which used to read:
call void @llvm.memcpy.p0i8.p0i8.i32(i8* %dest, i8* %src, i32 100, i32 4, i1 false)
will now read
call void @llvm.memcpy.p0i8.p0i8.i32(i8* align 4 %dest, i8* align 4 %src, i32 100, i1 false)
At this time the source and destination alignments must be the same (Step 1).
Step 2 of the change, to be landed shortly, will relax that constraint and allow
the source and destination to have different alignments.
Sanjay Patel [Fri, 19 Jan 2018 15:14:51 +0000 (15:14 +0000)]
[CodeGenCXX] annotate a GEP to a derived class with 'inbounds' (PR35909)
The standard says:
[expr.static.cast] p11: "If the prvalue of type “pointer to cv1 B” points to a B
that is actually a subobject of an object of type D, the resulting pointer points
to the enclosing object of type D. Otherwise, the behavior is undefined."
Therefore, the GEP must be inbounds.
This should solve the failure to optimize away a null check shown in PR35909:
https://bugs.llvm.org/show_bug.cgi?id=35909
Nico Weber [Thu, 18 Jan 2018 21:40:27 +0000 (21:40 +0000)]
Remove TautologicalInRangeCompare from Extra and TautologicalCompare.
This removes the following (already default-off) warnings from -Wextra:
-Wtautological-type-limit-compare,
-Wtautological-unsigned-zero-compare
-Wtautological-unsigned-enum-zero-compare
On the thread "[cfe-dev] -Wtautological-constant-compare issues", clang
code owners Richard Smith, John McCall, and Reid Kleckner as well as
libc++ code owner Marshall Clow stated that these new warnings are not
yet ready for prime time and shouldn't be part of -Wextra.
Furthermore, Vedant Kumar (Apple), Peter Hosek (Fuchsia), and I (Chromium)
expressed the same concerns (Vedant on that thread, Peter on
https://reviews.llvm.org/D39462, me on https://reviews.llvm.org/D41512).
So remove them from -Wextra, and remove TautologicalInRangeCompare from
TautologicalCompare too until they're usable with real-world code.
Ben Hamilton [Thu, 18 Jan 2018 18:37:16 +0000 (18:37 +0000)]
[ClangFormat] ObjCSpaceBeforeProtocolList should be true in the google style
Summary:
The Google style guide is neutral on whether there should be a
space before the protocol list in an Objective-C @interface or
@implementation.
The majority of Objective-C code in both Apple's public
header files and Google's open-source code uses a space before
the protocol list, so this changes the Google style to
default ObjCSpaceBeforeProtocolList to true.
Test Plan: make -j12 FormatTests && ./tools/clang/unittests/Format/FormatTests
Jonas Hahnfeld [Thu, 18 Jan 2018 15:38:03 +0000 (15:38 +0000)]
[OpenMP] Correct generation of offloading entries
Firstly, each offloading entry must have a unique name or the
linker will complain if there are multiple files with target
regions. Secondly, the compiler must not introduce padding, so
mark the struct with a PackedAttr.
Ilya Biryukov [Thu, 18 Jan 2018 15:16:53 +0000 (15:16 +0000)]
[Frontend] Allow to use PrecompiledPreamble without calling CanReuse
Summary:
The new method 'OverridePreamble' allows overriding the preamble of
any source file without checking whether the preamble bounds or dependencies
have changed.
Richard Trieu [Thu, 18 Jan 2018 04:28:56 +0000 (04:28 +0000)]
Fix Scope::dump()
The dump function for Scope only has 20 out of the 24 flags. Since it looped
until no flags were left, having an unknown flag led to an infinite loop.
That loop has been changed to a single pass for each flag, plus an assert to
alert if new flags are added.
Artem Dergachev [Thu, 18 Jan 2018 01:01:56 +0000 (01:01 +0000)]
[analyzer] NFC: RetainCount: Protect from dumping raw region to path notes.
MemRegion::getString() is a wrapper around MemRegion::dump(), which is not
user-friendly and should never be used for diagnostic messages.
Actual cases where raw dumps were reaching the user were unintentionally fixed
in r315736; these were noticed accidentally and shouldn't be reproducible
anymore. For now RetainCountChecker only tracks pointers through variable
regions, and for those dumps are "fine". However, we should still use a less
dangerous method for producing our path notes.
This patch replaces the dump with printing a variable name, asserting that this
is indeed a variable.
Artem Dergachev [Thu, 18 Jan 2018 00:53:50 +0000 (00:53 +0000)]
[analyzer] operator new: Fix callback order for CXXNewExpr.
PreStmt<CXXNewExpr> was never called.
Additionally, under c++-allocator-inlining=true, PostStmt<CXXNewExpr> was
called twice when the allocator was inlined: once after evaluating the
new-expression itself, once after evaluating the allocator call which, for the
lack of better options, uses the new-expression as the call site.
Artem Dergachev [Thu, 18 Jan 2018 00:50:19 +0000 (00:50 +0000)]
[analyzer] operator new: Add a new ProgramPoint for check::NewAllocator.
Add PostAllocatorCall program point to represent the moment in the analysis
between the operator new() call and the constructor call. Pointer cast from
"void *" to the correct object pointer type has already happened by this point.
The new program point, unlike the previously used PostImplicitCall, contains a
reference to the new-expression, which allows adding path diagnostics over it.
Artem Dergachev [Thu, 18 Jan 2018 00:44:41 +0000 (00:44 +0000)]
[analyzer] Suppress "this" pointer escape during construction.
A pointer escape event notifies checkers that a pointer can no longer be reliably
tracked by the analyzer. For example, if a pointer is passed into a function
that has no body available, or written into a global, MallocChecker would
no longer report memory leaks for such a pointer.
In the case of operator new() under -analyzer-config c++-allocator-inlining=true,
MallocChecker would start tracking the pointer allocated by operator new()
only to immediately meet a pointer escape event notifying the checker that the
pointer has escaped into a constructor (assuming that the body of the
constructor is not available) and immediately stop tracking it. Even though
it is theoretically possible for such a constructor to put "this" into
a global container that would later be freed, we prefer to preserve the old
behavior of MallocChecker, i.e. a memory leak warning, in order to
be able to find any memory leaks in C++ at all. In fact, c++-allocator-inlining
*reduces* the amount of false positives coming from this-pointers escaping in
constructors, because it'd be able to inline constructors in some cases.
With other checkers working similarly, we simply suppress the escape event for
this-value of the constructor, regardless of analyzer options.
Artem Dergachev [Thu, 18 Jan 2018 00:10:21 +0000 (00:10 +0000)]
[analyzer] operator new: Fix path diagnostics around the operator call.
Implements finding appropriate source locations for intermediate diagnostic
pieces in path-sensitive bug reports that need to descend into an inlined
operator new() call that was called via new-expression. The diagnostics have
worked correctly when operator new() was called "directly".
Artem Dergachev [Wed, 17 Jan 2018 23:46:13 +0000 (23:46 +0000)]
[analyzer] operator new: Add a new checker callback, check::NewAllocator.
The callback runs after operator new() and before the construction and allows
the checker to access the casted return value of operator new() (in the
sense of r322780) which is not available in the PostCall callback for the
allocator call.
Update MallocChecker to use the new callback instead of PostStmt<CXXNewExpr>,
which gets called after the constructor.
Artem Dergachev [Wed, 17 Jan 2018 22:58:35 +0000 (22:58 +0000)]
[analyzer] operator new: Fix memory space for the returned region.
Make sure that with c++-allocator-inlining=true we have the return value of
conservatively evaluated operator new() in the correct memory space (heap).
This is a regression/omission that worked well in c++-allocator-inlining=false.
Heap regions are superior to regular symbolic regions because they have
stricter aliasing constraints: heap regions do not alias each other or global
variables.
Douglas Yung [Wed, 17 Jan 2018 22:53:15 +0000 (22:53 +0000)]
[DOXYGEN] Fix doxygen and content issues in xmmintrin.h
- Fix inaccurate instruction listings.
- Fix small issues in _mm_getcsr and _mm_setcsr.
- Fix description of NaN handling in comparison intrinsics.
- Fix inaccurate description of _mm_movemask_pi8.
- Fix inaccurate instruction mappings.
- Fix typos.
- Clarify wording on some descriptions.
- Fix bit ranges in return value.
- Fix typo in the _mm_move_ss intrinsic instruction since it operates on single-precision values, not double.
- This patch was made by Craig Flores
Artem Dergachev [Wed, 17 Jan 2018 22:51:19 +0000 (22:51 +0000)]
[analyzer] operator new: Model the cast of returned pointer into object type.
According to [basic.stc.dynamic.allocation], the return type of any C++
overloaded operator new() is "void *". However, type of the new-expression
"new T()" and the type of "this" during construction of "T" are both "T *".
Hence an implicit cast, which is not present in the AST, needs to be performed
before the construction. This patch adds such cast in the case when the
allocator was indeed inlined. For now, in the case where the allocator was *not*
inlined we still use the same symbolic value (which is a pure SymbolicRegion of
type "T *") because it is consistent with how we represent the casts and causes
less surprise in the checkers after switching to the new behavior.
The better approach would be to represent that value as a cast over a
SymbolicRegion of type "void *", however we have technical difficulties
conjuring such region without any actual expression of type "void *" present in
the AST.
Artem Dergachev [Wed, 17 Jan 2018 22:40:36 +0000 (22:40 +0000)]
[analyzer] NFC: Forbid array elements of void type.
Represent the symbolic value for results of pointer arithmetic on void pointers
in a different way: instead of making void-typed element regions, make
char-typed element regions.
Add an assertion that ensures that no void-typed regions are ever constructed.
This is a refactoring of internals that should not immediately affect
the analyzer's (default) behavior.
Artem Dergachev [Wed, 17 Jan 2018 22:34:23 +0000 (22:34 +0000)]
[analyzer] operator new: Use the correct region for the constructor.
The -analyzer-config c++-allocator-inlining experimental option allows the
analyzer to reason about C++ operator new() similarly to how it reasons about
regular functions. In this mode, operator new() is correctly called before the
construction of an object, with the help of a special CFG element.
However, the subsequent construction of the object was still not performed into
the region of memory returned by operator new(). The patch fixes it.
Passing the value from operator new() to the constructor and then to the
new-expression itself was tricky because operator new() has no call site of its
own in the AST. The new-expression itself is not a good call site because it
has an incorrect type (operator new() returns 'void *', while the new-expression
has the type of a pointer to the allocated object). Additionally, the lifetime of
the new-expression in the environment makes it unsuitable for passing the value.
For that reason, an additional program state trait is introduced to keep track
of the return value.
Finally, this patch relaxes restrictions on the memory region class that are
required for inlining the constructor. This change affects the old mode as well
(c++-allocator-inlining=false) and seems safe because these restrictions were
overkill compared to the actual problems observed.
Ana Pazos [Wed, 17 Jan 2018 22:09:58 +0000 (22:09 +0000)]
[RISCV] Propagate -mabi and -march values to GNU assembler.
When using the -fno-integrated-as flag, the GNU assembler produces code
with some default march/mabi, which later causes a linker failure due
to an incompatible mabi/march.
In this patch we explicitly propagate -mabi and -march flags to the
GNU assembler.
George Karpenkov [Wed, 17 Jan 2018 20:27:29 +0000 (20:27 +0000)]
[analyzer] introduce getSVal(Stmt *) helper on ExplodedNode, make sure the helper is used consistently
In most cases using
`N->getState()->getSVal(E, N->getLocationContext())`
is ugly, verbose, and also opens up more surface area for bugs if an
inconsistent location context is used.
This patch introduces a helper on an exploded node, and ensures
consistent usage of either `ExplodedNode::getSVal` or
`CheckerContext::getSVal` across the codebase.
As a result, a large number of redundant lines are removed.