author    Sanjay Patel <spatel@rotateright.com>
          Thu, 28 Mar 2019 14:16:13 +0000 (14:16 +0000)
committer Sanjay Patel <spatel@rotateright.com>
          Thu, 28 Mar 2019 14:16:13 +0000 (14:16 +0000)
commit    629c6da2016cc0c2235d5fc6e5ed85bbd2203bbd
tree      95f59151f1618fe26b8e64d482b6cdf2c4404b55
parent    60639d89eb6809f64f789ee8f69d3f316d99e2f3

[x86] avoid cmov in movmsk reduction

This is probably the least important of our movmsk problems, but I'm starting
at the bottom to reduce distractions.

We were creating a select_cc node, which bypasses the select and bitmask
codegen optimizations that we now have. If we produce a compare + negate
instead, we enable things like the neg/sbb carry-bit hacks, and in all cases
we avoid a cmov. There is no partial-register-update hazard in these sequences
because we always emit the zero-register xor ahead of the 'set' when needed.

There seems to be a missing fold for sext of a bool bit here:

  negl %ecx
  movslq %ecx, %rax

...but that's an independent transform.

Differential Revision: https://reviews.llvm.org/D59818

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@357172 91177308-0d34-0410-b5e6-96231b3b80d8
lib/Target/X86/X86ISelLowering.cpp
test/CodeGen/X86/vector-compare-all_of.ll
test/CodeGen/X86/vector-compare-any_of.ll