Teresa Johnson [Fri, 13 Jul 2018 21:35:51 +0000 (21:35 +0000)]
[ThinLTO] Ensure we always select the same function copy to import
In order to always import the same copy of a linkonce function,
even when encountering it with different thresholds (a higher one, then a
lower one), keep track of the summary we decided to import.
This ensures that the backend only gets a single definition to import
for each GUID, so that it doesn't need to choose one.
Move the largest threshold the GUID was considered for import into the
current module out of the ImportMap (which is part of a larger map
maintained across the whole index), and into a new map just maintained
for the current module we are computing imports for. This saves some
memory since we no longer have the thresholds maintained across the
whole index (and throughout the in-process backends when doing a normal
non-distributed ThinLTO build), at the cost of some additional
information being maintained for each invocation of ComputeImportForModule
(the selected summary pointer for each import).
There is an additional map lookup for each callee being considered for
importing; however, this is offset by subsuming a map lookup in the
Worklist iteration that invokes computeImportForFunction. We are also
able to avoid calling selectCallee if we already failed to import at the
same or a higher threshold.
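To illustrate the bookkeeping described above, here is a minimal, self-contained C++ sketch using hypothetical names (GUID, ImportCandidateInfo, ModuleImportState and shouldConsider are stand-ins for illustration, not the actual LLVM types or functions):
```
#include <cstdint>
#include <unordered_map>

// Stand-in for llvm::GlobalValueSummary; used only as an opaque pointee here.
struct GlobalValueSummary;

using GUID = uint64_t;

// Per-module state, kept only for the current ComputeImportForModule
// invocation: the highest threshold at which each GUID was considered and
// the summary copy selected for it, so later visits import the same copy.
struct ImportCandidateInfo {
  unsigned MaxThreshold = 0;
  const GlobalValueSummary *SelectedSummary = nullptr;
};
using ModuleImportState = std::unordered_map<GUID, ImportCandidateInfo>;

// Returns true if the callee should be (re)considered at NewThreshold; a
// previous attempt at the same or a higher threshold short-circuits the
// selectCallee call, mirroring the behavior described in the message.
bool shouldConsider(ModuleImportState &State, GUID Callee,
                    unsigned NewThreshold) {
  auto It = State.find(Callee);
  if (It != State.end() && It->second.MaxThreshold >= NewThreshold)
    return false;
  State[Callee].MaxThreshold = NewThreshold;
  return true;
}
```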
I compared the run time and peak memory for the SPEC2006 471.omnetpp
benchmark (running in-process ThinLTO backends), as well as for a large
internal benchmark with a distributed ThinLTO build (so just looking at
the thin link time/memory). Across a number of runs with and without
this change there was no significant change in the time and memory.
(I tried a few other variations of the change but they also didn't
improve time or peak memory).
As discussed in https://reviews.llvm.org/D49179#1158957 and later,
the IR for the 'check for [no] signed truncation' pattern can be improved:
https://rise4fun.com/Alive/gBf
^ that pattern will be produced by the Implicit Integer Truncation sanitizer
(https://reviews.llvm.org/D48958, https://bugs.llvm.org/show_bug.cgi?id=21530)
in the signed case, therefore it is probably a good idea to improve it.
The DAGCombine will reverse this transform; see
https://reviews.llvm.org/D49266.
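As a concrete, hedged C++ analogue of the pattern (my own illustration, not text from the review): the check "does x fit in 8 signed bits?" that the sanitizer needs can be rewritten from a range test (or the equivalent shift-roundtrip compare) into a single biased add plus unsigned compare, which, as I understand it, is the shape the Alive proof above describes. The fitsInInt8* helpers below are purely illustrative.
```
#include <cassert>
#include <cstdint>
#include <initializer_list>

// "Does x fit in 8 signed bits?" -- the check an implicit-truncation
// sanitizer needs before truncating i32 -> i8.
bool fitsInInt8Range(int32_t x) { return x >= -128 && x <= 127; }

// The improved shape: bias by 128 so the valid range [-128, 127] maps onto
// [0, 255], then use one unsigned comparison.
bool fitsInInt8AddCmp(int32_t x) { return (uint32_t)x + 128u < 256u; }

int main() {
  // Boundary values where the two forms must agree.
  for (int32_t v : {-1000000, -130, -129, -128, -1, 0, 1, 127, 128, 129, 1000000})
    assert(fitsInInt8Range(v) == fitsInInt8AddCmp(v));
  return 0;
}
```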
[LowerTypeTests] Limit when icall jumptable entries are emitted
Summary:
Currently LowerTypeTests emits jumptable entries for all live external
and address-taken functions; however, we could limit the number of
functions that we emit entries for significantly.
For Cross-DSO CFI, we continue to emit jumptable entries for all
exported definitions. In the non-Cross-DSO CFI case, we only need to
emit jumptable entries for live functions that are address-taken in live
functions. This ignores exported functions and functions that are only
address taken in dead functions. This change uses ThinLTO summary data
(now emitted for all modules during ThinLTO builds) to determine
address-taken and liveness info.
The logic for emitting jumptable entries is more conservative in the
regular LTO case because we don't have summary data in the case of
monolithic LTO builds; however, once summaries are emitted for all LTO
builds we can unify the Thin/monolithic LTO logic to only use summaries
to determine the liveness of address-taken functions.
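A minimal sketch of the emission policy just described, with hypothetical field and function names (FunctionCfiInfo and needsJumpTableEntry are illustrations, not the actual LowerTypeTests code):
```
// Summary-derived facts about one function; field names are made up for
// illustration.
struct FunctionCfiInfo {
  bool Exported = false;               // exported definition
  bool Live = false;                   // reachable per the liveness analysis
  bool AddressTakenInLiveCode = false; // address taken from a live function
};

// Per the description above: Cross-DSO CFI keeps entries for all exported
// definitions; otherwise an entry is needed only for live functions whose
// address is taken in live code.
bool needsJumpTableEntry(const FunctionCfiInfo &F, bool CrossDsoCfi) {
  if (CrossDsoCfi)
    return F.Exported;
  return F.Live && F.AddressTakenInLiveCode;
}
```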
This change is a partial fix for PR37474. It reduces the build size for
nacl_helper by ~2-3%; the reduction comes from nacl_helper compiling in
lots of unused code, and from unused functions that are address taken in
dead functions no longer being considered live due to emitted jumptable
references. The reduction for chromium is ~0.1-0.2%.
[dwarfdump] Add pretty printer for accelerator table based on Atom.
For instance, when dumping .apple_types, the second atom represents the
DW_TAG. In addition to printing the raw value, we now also pretty print
the value if the atom tells us how.
[TableGen] Suppress type validation when parsing pattern fragments
Currently, any attempt to define a PatFrag involving any floating-point
only (or vector only) node causes a hard assertion failure in TableGen
if the current target does not have any floating-point (or vector)
types.
This is annoying if you want to provide convenience fragments in common
code (e.g. include/llvm/Target/TargetSelectionDAG.td) that is parsed on
all platforms, including those that lack such types.
But really, there's no reason not to accept this when parsing the fragment
-- of course it would be an error for such a target to actually *use*
such a fragment anywhere, but as long as it doesn't, I think TableGen
shouldn't error out.
The immediate cause of the assertion failure is the test inside the
ValidateOnExit destructor. This patch simply disables that check while
inferring types during parsing of pattern fragments (only).
Matt Arsenault [Fri, 13 Jul 2018 16:40:37 +0000 (16:40 +0000)]
AMDGPU: Properly handle shader inputs with split arguments
This needs to refer to arguments by their original argument
index, not the argument split index which depends on what
the type splitting decides to do.
Also avoid incrementing PSInputNum for each split piece.
Matt Arsenault [Fri, 13 Jul 2018 16:40:25 +0000 (16:40 +0000)]
AMDGPU: Fix handling of alignment padding in DAG argument lowering
This was completely broken if there was ever a struct argument, as
this information is thrown away during the argument analysis.
The offsets as passed in to LowerFormalArguments are not useful,
as they partially depend on the legalized result register type,
and they don't consider the alignment in the first place.
Ignore the Ins array, and instead figure out from the raw IR type
what we need to do. This seems to fix the padding computation
if the DAG lowering is forced (and stops breaking arguments
following padded arguments if the arguments were only partially
lowered in the IR)
[Tablegen] Optimize isSubsetOf() in AsmMatcherEmitter.cpp. NFC
isSubsetOf() could be very slow if the hierarchy of the RegisterClasses
of the target is very complicated.
This is mainly caused by the fact that isSubset() is called
multiple times over the same SuperClass of a register class
when that class ends up being the superclass of a register class
along multiple paths.
Revert "CallGraphSCCPass: iterate over all functions."
This reverts commit r336419: use-after-free on CallGraph::FunctionMap elements
due to the use of a stale iterator in CGPassManager::runOnModule.
The iterator may be invalidated if a pass removes a function, ex.:
llvm::LegacyInlinerBase::inlineCalls
inlineCallsImpl
llvm::CallGraph::removeFunctionFromModule
Roman Lebedev [Fri, 13 Jul 2018 16:14:37 +0000 (16:14 +0000)]
[NFC][X86][AArch64] Negative tests for 'check for [no] signed truncation' pattern
See D49247, D49266
I'm only adding the sane negative tests, and not
adding the one-use tests yet. Also, not adding
negative tests for the second pattern with inverted operands yet,
since its handling will be added in a later differential.
[PowerPC] Materialize more constants with CR-field set in late peephole
Revision r322373 fixed a bug in how we materialize constants when the CR-field
needs to be set.
However the fix is overly conservative. It will only do the transform if
AND-ing the input with the new constant produces the same new constant.
This is of course correct, but not necessarily required.
If there are no further uses of the constant, the constant can be changed.
If there are no uses of the GPR result, the final result of the materialization
isn't important other than it needs to compare to zero correctly (lt, gt, eq).
Joel Galenson [Fri, 13 Jul 2018 15:19:33 +0000 (15:19 +0000)]
[cfi-verify] Support AArch64.
This patch adds support for AArch64 to cfi-verify.
This required three changes to cfi-verify.
First, it generalizes checking if an instruction is a trap by adding a new
isTrap flag to TableGen (and defining it for x86 and AArch64).
Second, the code that ensures that the operand register is not clobbered
between the CFI check and the indirect call needs to allow a single
dereference (in x86 this happens as part of the jump instruction).
Third, we needed to ensure that return instructions are not counted as
indirect branches. Technically, returns are indirect branches and can be
covered by CFI, but LLVM's forward-edge CFI does not protect them, and x86
does not consider them, so we keep that behavior.
In addition, we had to improve AArch64's code to evaluate the branch target of a MCInst to handle calls where the destination is not the first operand (which it often is not).
[TableGen] Support multi-alternative pattern fragments
A TableGen instruction record usually contains a DAG pattern that will
describe the SelectionDAG operation that can be implemented by this
instruction. However, there will be cases where several different DAG
patterns can all be implemented by the same instruction. The way to
represent this today is to write additional patterns in the Pattern
(or usually Pat) class that map those extra DAG patterns to the
instruction. This usually also works fine.
However, I've noticed cases where the current setup seems to require
quite a bit of extra (and duplicated) text in the target .td files.
For example, in the SystemZ back-end, there are quite a number of
instructions that can implement an "add-with-overflow" operation.
The same instructions also need to be used to implement just plain
addition (simply ignoring the extra overflow output). The current
solution requires creating an extra Pat pattern for every instruction,
duplicating the information about which particular add operands
map best to which particular instruction.
This patch enhances TableGen to support a new PatFrags class, which
can be used to encapsulate multiple alternative patterns that may
all match to the same instruction. It operates the same way as the
existing PatFrag class, except that it accepts a list of DAG patterns
to match instead of just a single one. As an example, we can now define
a PatFrags to match either an "add-with-overflow" or a regular add
operation:
def z_sadd : PatFrags<(ops node:$src1, node:$src2),
                      [(z_saddo node:$src1, node:$src2),
                       (add node:$src1, node:$src2)]>;
and use it in instruction definitions such as:
defm AR : BinaryRRAndK<"ar", 0x1A, 0xB9F8, z_sadd, GR32, GR32>;
These SystemZ target changes are implemented here as well.
Note that PatFrag is now defined as a subclass of PatFrags, which
means that some users of internals of PatFrag need to be updated.
(E.g. instead of using PatFrag.Fragment you now need to use
!head(PatFrag.Fragments).)
The implementation is based on the following main ideas:
- InlinePatternFragments may now replace each original pattern
with several result patterns, not just one.
- parseInstructionPattern delays calling InlinePatternFragments
and InferAllTypes. Instead, it extracts a single DAG match
pattern from the main instruction pattern.
- Processing of the DAG match pattern part of the main instruction
pattern now shares most code with processing match patterns from
the Pattern class.
- Direct use of main instruction patterns in InferFromPattern and
EmitResultInstructionAsOperand is removed; everything now operates
solely on DAG match patterns.
[SLH] Introduce a new pass to do Speculative Load Hardening to mitigate
Spectre variant #1 for x86.
There is a lengthy, detailed RFC thread on llvm-dev which discusses the
high level issues. High level discussion is probably best there.
I've split the design document out of this patch and will land it
separately once I update it to reflect the latest edits and updates to
the Google doc used in the RFC thread.
This patch is really just an initial step. It isn't quite ready for
prime time and is only exposed via debugging flags. It has a few major
limitations currently:
1) It only supports x86-64, and only certain ABIs. Many assumptions are
currently hard-coded and need to be factored out of the code here.
2) It doesn't include any options for more fine-grained control, either
of which control flow edges are significant or which loads are
important to be hardened.
3) The code is still quite rough and the testing lighter than I'd like.
However, this is enough for people to begin using it. I have had numerous
requests from people to be able to experiment with this patch to
understand the trade-offs it presents and how to use it. We would also
like to encourage work to similar effect in other toolchains.
The ARM folks are actively developing a system based on this for
AArch64. We hope to merge this with their efforts when both are far
enough along. But we also don't want to block making this available on
that effort.
Many thanks to the *numerous* people who helped along the way here. For
this patch in particular, both Eric and Craig did a ton of review to
even have confidence in it as an early, rough cut at this functionality.
Simon Pilgrim [Fri, 13 Jul 2018 11:09:52 +0000 (11:09 +0000)]
[SLPVectorizer] Add initial alternate opcode support for cast instructions. (REAPPLIED-2)
We currently only support binary instructions in the alternate opcode shuffles.
This patch is an initial attempt at adding cast instructions as well, this raises several issues that we probably want to address as we continue to generalize the alternate mechanism:
1 - Duplication of cost determination - we should probably add scalar/vector costs helper functions and get BoUpSLP::getEntryCost to use them instead of determining costs directly.
2 - Support alternate instructions with the same opcode (e.g. casts with different src types) - alternate vectorization of calls with different IntrinsicIDs will require this.
3 - Allow alternates to be a different instruction type - mixing binary/cast/call etc.
4 - Allow passthrough of unsupported alternate instructions - related to PR30787/D28907 'copyable' elements.
Reapplied with fix to only accept 2 different casts if they come from the same source type (PR38154).
[UpdateTestChecks] Teach the x86 asm parser to skip over the function
begin label emitted for some routines with personality functions and
such.
Without this, we don't even recognize such functions as appearing in the
output and so don't attach any assertions to them. Happy to tweak this
or improve it if folks w/ deeper knowledge of the asm sequences that
show up here want.
[x86] Teach the EFLAGS copy lowering to handle much more complex control
flow patterns including forks, merges, and even cycles.
This tries to cover a reasonably comprehensive set of patterns that
still don't require PHIs or PHI placement. The coverage was inspired by
the amazing variety of patterns produced when copying EFLAGS and restoring
it to implement Speculative Load Hardening. Without this patch, we
simply cannot make such complex and invasive changes to x86 instruction
sequences due to EFLAGS.
I've added "just" one test, but this test covers many different
complexities and corner cases of this approach. It is actually more
comprehensive, as far as I can tell, than anything that I have
encountered in the wild on SLH.
Because the test is so complex, I've tried to give somewhat thorough
comments and an ASCII-art diagram of the control flows to make it a bit
easier to read and maintain long-term.
This patch adds support for the following unpack instructions:
- PUNPKLO, PUNPKHI: unpack elements from the low/high half and place them
  into elements of twice their size, e.g. punpklo p0.h, p0.b
- UUNPKLO, UUNPKHI and SUNPKLO, SUNPKHI: unpack elements from the low/high
  half and place them into elements of twice their size after zero- or
  sign-extending the values.
Simon Pilgrim [Fri, 13 Jul 2018 09:25:32 +0000 (09:25 +0000)]
[AArch64] Updated bigendian buildvector tests
As suggested by @efriedma on D49262, changed the extractelement to a store to prevent SimplifyDemandedVectorElts from simplifying the build vectors; this keeps the immediate generation, which was the point of the tests.
Petar Jovanovic [Fri, 13 Jul 2018 08:24:26 +0000 (08:24 +0000)]
[LiveDebugValues] Tracking copying value between registers
During the execution of long functions, or functions that have a lot of
inlined code, it can happen that a tracked value is transferred from one
register to another. The transfer is recognized only if the destination
register is a callee-saved register and the source register is killed. We
do not salvage caller-saved registers, since there is a great chance that
the killed register would outlive it.
[XRay][compiler-rt] Add PID field to llvm-xray tool and add PID metadata record entry in FDR mode
Summary:
llvm-xray changes:
- account-mode - process-id {...} shows after thread-id
- convert-mode - process {...} shows after thread
- parses FDR and basic mode pid entries
- Checks version number for FDR log parsing.
Basic logging changes:
- Update header version from 2 -> 3
FDR logging changes:
- Update header version from 2 -> 3
- in writeBufferPreamble, there is an additional PID Metadata record (after thread id record and tsc record)
Test cases changes:
- fdr-mode.cc, fdr-single-thread.cc, fdr-thread-order.cc modified to catch process id output in the log.
[X86] Remove isel patterns that turns packed add/sub/mul/div+movss/sd into scalar intrinsic instructions.
This is not an optimization we should be doing in isel. This is more suitable for a DAG combine.
My main concern is a future time when we support more FPENV. Changing a packed op to a scalar op could cause us to miss some exceptions that should have occurred if we had done a packed op. A DAG combine would be better able to manage this.
[DomTreeUpdater] Ignore updates when both DT and PDT are nullptrs
Summary:
Previously, when both DT and PDT are nullptrs and the UpdateStrategy is Lazy, DomTreeUpdater still pends updates inside.
After this patch, DomTreeUpdater will ignore all updates from `applyUpdates()/insertEdge*()/deleteEdge*()` in this case. (Calling `delBB()` still pends BasicBlock deletion until a flush event, according to the doc.)
The previously documented behavior of DomTreeUpdater won't change after this patch.
Joel E. Denny [Fri, 13 Jul 2018 03:08:23 +0000 (03:08 +0000)]
[FileCheck] Implement -v and -vv for tracing matches
-v prints all directive pattern matches.
-vv additionally prints info that might be noise to users but that can
be helpful to FileCheck developers.
To maximize code reuse and to make diagnostics more consistent, this
patch also adjusts and extends some of the existing diagnostics.
CHECK-NOT failures now report variable uses. Many more diagnostics
now report the check prefix and kind of directive.
[InstCombine] return when SimplifyAssociativeOrCommutative makes a change
This bug was created by rL335258 because we used to always call instsimplify
after trying the associative folds. After that change it became possible
for subsequent folds to encounter unsimplified code (and potentially assert
because of it).
Instead of carrying changed state through instcombine, we can just return
immediately. This allows instsimplify to run, so we can continue assuming
that easy folds have already occurred.
CodeGen: Remove pipeline dependencies on StackProtector; NFC
This re-applies r336929 with a fix to accommodate the Mips target
scheduling multiple SelectionDAG instances into the pass pipeline.
PrologEpilogInserter and StackColoring depend on the StackProtector analysis
being alive from the point it is run until PEI, which requires that they are all
scheduled in the same FunctionPassManager. Inserting a (machine) ModulePass
between StackProtector and PEI results in these passes being in separate
FunctionPassManagers and the StackProtector is not available for PEI.
PEI and StackColoring don't use much information from the StackProtector pass,
so transferring the required information to MachineFrameInfo is cleaner than
keeping the StackProtector pass around. This commit moves the SSP layout
information to MFI instead of keeping it in the pass.
This patch set (D37580, D37581, D37582, D37583, D37584, D37585, D37586, D37587)
is a first draft of the pagerando implementation described in
http://lists.llvm.org/pipermail/llvm-dev/2017-June/113794.html.
and because %2 can be replaced with @llvm.strip.invariant.group(%0),
and %2 and %1 will produce the same value (because strip is readnone),
we can replace the compare with true.
Matt Davis [Thu, 12 Jul 2018 22:59:53 +0000 (22:59 +0000)]
[llvm-mca] Add cycleStart/cycleEnd callbacks to mca::Stage.
Summary:
This patch clears up some of the semantics within the Stage class. Now, preExecute
can be called multiple times per simulated cycle. Previously preExecute was
only called once per cycle, and postExecute could have been called multiple
times.
Now, cycleStart/cycleEnd are called only once per simulated cycle.
preExecute/postExecute can be called multiple times per cycle. This
occurs because multiple execution events can occur during a single cycle.
When stages are executed (Pipeline::runCycle), the postExecute hook will
be called only if all Stages return a success from their 'execute' callback.
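To make the per-cycle contract concrete, here is a small hedged sketch with hypothetical types (Stage and runCycle below are illustrations of the ordering described above, not the actual llvm-mca interfaces):
```
#include <memory>
#include <vector>

// Hypothetical stand-in for mca::Stage, showing only the hook ordering.
struct Stage {
  virtual ~Stage() = default;
  virtual void cycleStart() {}            // exactly once per simulated cycle
  virtual void cycleEnd() {}              // exactly once per simulated cycle
  virtual void preExecute() {}            // may run several times per cycle
  virtual void postExecute() {}           // ditto, only after full success
  virtual bool execute() { return true; } // one execution event
};

// One simulated cycle: cycleStart/cycleEnd bracket the cycle once, while
// pre/postExecute wrap each execution event, and postExecute fires only if
// all stages report success from execute().
void runCycle(std::vector<std::unique_ptr<Stage>> &Stages, int Events) {
  for (auto &S : Stages) S->cycleStart();
  for (int E = 0; E < Events; ++E) {
    bool AllOk = true;
    for (auto &S : Stages) { S->preExecute(); AllOk = S->execute() && AllOk; }
    if (AllOk)
      for (auto &S : Stages) S->postExecute();
  }
  for (auto &S : Stages) S->cycleEnd();
}

int main() {
  std::vector<std::unique_ptr<Stage>> Stages;
  Stages.push_back(std::make_unique<Stage>());
  runCycle(Stages, /*Events=*/3);
}
```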
Bill Wendling [Thu, 12 Jul 2018 20:35:58 +0000 (20:35 +0000)]
[gold-plugin] Disable section ordering for relocatable links
Not all programs want section ordering when compiled with LTO.
In particular, the Linux kernel is very sensitive when it comes to linking, and
doesn't boot when each function is placed in its own section.
Matt Morehouse [Thu, 12 Jul 2018 20:24:58 +0000 (20:24 +0000)]
[SanitizerCoverage] Add associated metadata to 8-bit counters.
Summary:
This allows counters associated with unused functions to be
dead-stripped along with their functions. This approach is the same one
we used for PC tables.
Fixes an issue where LLD removes an unused PC table but leaves the 8-bit
counter.
[X86] Add show-mc-encoding to some fast-isel intrinsic tests so we can observe a failure to select EVEX instructions.
The one I noticed is sqrtss/sd, but there could be others.
I had to add a couple new tests that don't have insertelement in there to catch this on the fast-isel path. Otherwise we trigger an abort and use SelectionDAG.
CodeGen: Remove pipeline dependencies on StackProtector; NFC
PrologEpilogInserter and StackColoring depend on the StackProtector analysis
being alive from the point it is run until PEI, which requires that they are all
scheduled in the same FunctionPassManager. Inserting a (machine) ModulePass
between StackProtector and PEI results in these passes being in separate
FunctionPassManagers and the StackProtector is not available for PEI.
PEI and StackColoring don't use much information from the StackProtector pass,
so transferring the required information to MachineFrameInfo is cleaner than
keeping the StackProtector pass around. This commit moves the SSP layout
information to MFI instead of keeping it in the pass.
This patch set (D37580, D37581, D37582, D37583, D37584, D37585, D37586, D37587)
is a first draft of the pagerando implementation described in
http://lists.llvm.org/pipermail/llvm-dev/2017-June/113794.html.
Wolfgang Pieb [Thu, 12 Jul 2018 18:18:21 +0000 (18:18 +0000)]
[DWARF v5] Generate range list tables into the .debug_rnglists section. No support for split DWARF
and no use of DW_FORM_rnglistx with the DW_AT_ranges attribute.
[X86] Connect the flags user from PCMPISTR instructions to the correct node from the instruction.
We were accidentally connecting it to result 0 instead of result 1. This was caught by the machine verifier that noticed the flags were dead, but we were using them somehow. I'm still not clear what actually happened downstream.
Stephen Hines [Thu, 12 Jul 2018 17:42:17 +0000 (17:42 +0000)]
Add --strip-all option back to llvm-strip.
Summary:
This option appears to have been dropped as part of the refactoring in
r331663. Unfortunately, if we want to use llvm-strip as a drop-in
replacement for strip, this option should still be available.
As discussed in https://reviews.llvm.org/D49179#1158957 and later,
the IR can be improved:
https://rise4fun.com/Alive/gBf
^ that pattern will be produced by the Implicit Integer Truncation sanitizer
(https://reviews.llvm.org/D48958,
https://bugs.llvm.org/show_bug.cgi?id=21530)
in the signed case, therefore it is probably a good idea to improve it.
But judging from these tests,
I think we want to revert at least some cases in DAGCombine.
Matt Davis [Thu, 12 Jul 2018 16:56:17 +0000 (16:56 +0000)]
[llvm-mca] Simplify eventing by adding an onEvent templated method.
Summary:
This patch eliminates some redundancy in iterating across Listeners for the
Instruction and Stall HWEvents, by introducing a template onEvent routine.
This change was suggested by @courbet in https://reviews.llvm.org/D48576. I
hope that this patch addresses that suggestion appropriately. I do like this
change better than what we had previously.
Roman Lebedev [Thu, 12 Jul 2018 14:56:17 +0000 (14:56 +0000)]
[InstCombine] icmp-logical.ll: restore the original intention of the test.
While that fold is clearly not happening [anymore],
we do now have separate test cases for these cases,
so we should be ok to slightly adjust these tests
to not potentially lose test coverage.
As suggested by Hiroshi Yamauchi in
https://reviews.llvm.org/D49179#1159345
Andrew Ng [Thu, 12 Jul 2018 14:40:21 +0000 (14:40 +0000)]
[ThinLTO] Escape module paths when printing
We have located a bug in AssemblyWriter::printModuleSummaryIndex(). This
function outputs path strings incorrectly. Backslashes in the strings
are not correctly escaped.
Consequently, if a path name contains a backslash followed by two
hexadecimal characters, the sequence is incorrectly interpreted when the
output is read by another component. This mangles the path and results
in an error.
This patch fixes this issue by calling printEscapedString() to output
the module paths.
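A hedged, self-contained illustration of the failure mode and of the fix (the escape/unescape helpers below are made up for this sketch; they only mimic the backslash-plus-two-hex-digits escaping that printEscapedString() and the assembly reader use):
```
#include <cassert>
#include <cctype>
#include <cstddef>
#include <string>

static bool isHex(char C) {
  return std::isxdigit(static_cast<unsigned char>(C)) != 0;
}

// Reader-side behavior: any "\XY" with two hex digits becomes a single byte.
std::string unescape(const std::string &S) {
  std::string Out;
  for (std::size_t I = 0; I < S.size(); ++I) {
    if (S[I] == '\\' && I + 2 < S.size() && isHex(S[I + 1]) && isHex(S[I + 2])) {
      Out.push_back(static_cast<char>(std::stoi(S.substr(I + 1, 2), nullptr, 16)));
      I += 2;
    } else {
      Out.push_back(S[I]);
    }
  }
  return Out;
}

// Writer-side fix, in the spirit of printEscapedString(): emit a raw
// backslash as "\5C" so the reader reconstructs it exactly.
std::string escape(const std::string &S) {
  std::string Out;
  for (char C : S)
    if (C == '\\')
      Out += "\\5C";
    else
      Out.push_back(C);
  return Out;
}

int main() {
  std::string Path = "C:\\b1\\x64\\module.o"; // "\b1" looks like a hex escape
  assert(unescape(Path) != Path);             // unescaped output gets mangled
  assert(unescape(escape(Path)) == Path);     // escaped output round-trips
  return 0;
}
```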
[DebugInfo][X86] Add start-after flags to MIR tests
These tests would fail with -verify-machineinstrs because the MI
generated from the IR would be merged with the one already in the MIR
files, and we get the following error:
```
*** Bad machine code: Function has NoVRegs property but there are VReg operands ***
- function: f
```
Simon Pilgrim [Thu, 12 Jul 2018 13:29:41 +0000 (13:29 +0000)]
[X86][SSE] Utilize ZeroableElements for canWidenShuffleElements
canWidenShuffleElements can do a better job if given a mask with ZeroableElements info. Apparently, ZeroableElements was only being used to identify AllZero candidates, but possibly we could plug it into more shuffle matchers.
Simon Pilgrim [Thu, 12 Jul 2018 13:03:58 +0000 (13:03 +0000)]
[X86][AVX] Use Zeroable mask to improve shuffle mask widening
Noticed while updating D42044, lowerV2X128VectorShuffle can improve the shuffle mask with the zeroable data to create a target shuffle mask to recognise more 'zero upper 128' patterns.
NOTE: lowerV4X128VectorShuffle could benefit as well but the code needs refactoring first to discriminate between SM_SentinelUndef and SM_SentinelZero for negative shuffle indices.
Sam McCall [Thu, 12 Jul 2018 07:11:28 +0000 (07:11 +0000)]
[Support] Require llvm::Error passed to formatv() to be wrapped in fmt_consume()
Summary:
Someone must be responsible for handling an Error. When formatv takes
ownership of an Error, the formatv_object destructor must take care of this.
Passing an error by value to formatv() is not considered explicit enough to mark
the error as handled (see D49013), so we require callers to use a format adapter
to confirm this intent.
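A small usage sketch of the new requirement (hedged: written from the description above, so details may differ; fmt_consume is the adapter added for this purpose, and the error construction here is just an example):
```
#include "llvm/Support/Error.h"
#include "llvm/Support/FormatAdapters.h"
#include "llvm/Support/FormatVariadic.h"
#include "llvm/Support/raw_ostream.h"

using namespace llvm;

int main() {
  Error E = make_error<StringError>("file not found", inconvertibleErrorCode());

  // Passing E by value no longer counts as explicitly handling it; wrapping
  // it in fmt_consume() hands ownership to the formatv_object, whose
  // destructor takes care of marking the Error handled.
  outs() << formatv("error while loading: {0}\n", fmt_consume(std::move(E)));
  return 0;
}
```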
[Dominators] Add isUpdateLazy() method to the DomTreeUpdater
Summary:
Previously, when people needed to take different actions depending on the DTU's UpdateStrategy, they had to write
```
if (DTU.getUpdateStrategy() == DomTreeUpdater::UpdateStrategy::Lazy) {
...
}
if (DTU.getUpdateStrategy() == DomTreeUpdater::UpdateStrategy::Eager) {
...
}
```
After this patch, they can avoid the code patterns above and instead write
```
if (DTU.isUpdateLazy()){
...
}
if (!DTU.isUpdateLazy()){
...
}
```
[X86] Remove patterns and ISD nodes for the old scalar FMA intrinsic lowering.
We now use llvm.fma.f32/f64 or llvm.x86.fmadd.f32/f64 intrinsics that use scalar types rather than vector types. So we don't need these special ISD nodes that operate on the lowest element of a vector.
[x86] Fix another trivial bug in x86 flags copy lowering that has been
there for a long time.
The boolean tracking whether we saw a kill of the flags was supposed to
be per block being scanned, but instead it was outside that loop and never
cleared. It requires a quite contrived test case to hit this, as you have
to have multiple levels of successors and interleave them with kills.
I've included such a test case here.
This is another bug found testing SLH and extracted to its own focused
patch.