base_frame_target is supposed to track the idealized bit
allocation based on error score and not the actual bits
allocated to each frame.
Clamping this value based on the VBR min and max pct values was
causing a bug where, in some cases, the loop that adjusts the
active max quantizer for each GF group ran out of bits at the end
of a KF group. This caused a spike in Q and some ugly artifacts.
A second change makes sure that the calculation of the active
Q range for a group DOES, however, take account of clamping.
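As a rough sketch of the intended split (all struct, field and function
names below are hypothetical stand-ins, not the actual libvpx rate-control
code):

    #include <stdint.h>

    typedef struct {
      int base_frame_target;    /* idealized, error-score-driven allocation */
      int this_frame_target;    /* working target the encoder actually uses */
      int avg_frame_bandwidth;  /* average bits per frame at the target rate */
    } RateControlSketch;

    static int clamp_vbr_target(int target, int avg_frame_bandwidth,
                                int vbr_min_pct, int vbr_max_pct) {
      const int vbr_min = (int)(((int64_t)avg_frame_bandwidth * vbr_min_pct) / 100);
      const int vbr_max = (int)(((int64_t)avg_frame_bandwidth * vbr_max_pct) / 100);
      if (target < vbr_min) target = vbr_min;
      if (target > vbr_max) target = vbr_max;
      return target;
    }

    static void set_frame_target(RateControlSketch *rc, int target,
                                 int vbr_min_pct, int vbr_max_pct) {
      /* Keep the unclamped, error-score-based value so the per-GF-group
         active-max-Q loop does not run out of bits late in a KF group... */
      rc->base_frame_target = target;
      /* ...and apply the VBR min/max pct clamp only to the working target. */
      rc->this_frame_target = clamp_vbr_target(target, rc->avg_frame_bandwidth,
                                               vbr_min_pct, vbr_max_pct);
    }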
vp10: allow MV refs to point outside visible image.
In VP9, the ref MV had to point to a block that itself fully resided
within the visible image, i.e. all borders of the referenced block had
to lie within the visible borders of the coded frame. This is somewhat
illogical, and had obscure side effects: e.g., fairly reasonable motion
vectors such as 0,0 were clamped to negative values if the block overhung
a frame edge (such as the last rows of 1080p content), which makes no
sense whatsoever.
Instead, relax the clamping constraints such that the ref MVs are allowed
to point to blocks exactly outside the visible edges in both the Y and UV
planes, including the 8-tap filter edges (that's why the offset is
8 pixels + block size).
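A minimal sketch of the relaxed clamp, in whole pixels for clarity (the
real code works in 1/8-pel units; all names are illustrative):

    typedef struct { int row, col; } MvSketch;

    static int clamp_int(int v, int lo, int hi) {
      return v < lo ? lo : (v > hi ? hi : v);
    }

    static void clamp_ref_mv_sketch(MvSketch *mv, int block_x, int block_y,
                                    int block_w, int block_h,
                                    int frame_w, int frame_h) {
      /* distance from the block to each visible frame edge */
      const int to_left = -block_x;
      const int to_top = -block_y;
      const int to_right = frame_w - (block_x + block_w);
      const int to_bottom = frame_h - (block_y + block_h);
      /* allow the referenced block to sit exactly outside the visible edge,
         plus the 8-pixel 8-tap filter margin: offset = block size + 8 */
      const int margin_w = block_w + 8;
      const int margin_h = block_h + 8;
      mv->col = clamp_int(mv->col, to_left - margin_w, to_right + margin_w);
      mv->row = clamp_int(mv->row, to_top - margin_h, to_bottom + margin_h);
    }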
This has various benefits:
- it simplifies implementations, because we don't have to switch between
multiple probability tables depending on frame type
- it allows forward subexp updates and backward adaptivity for the
partition/uvmode probabilities in keyframes
Ronald S. Bultje [Tue, 13 Oct 2015 18:06:28 +0000 (14:06 -0400)]
vp10: make segmentation probs use generic probability model.
Locate them (code-wise) in frame_context, and have them be updated
like any other probability, using the subexp forward and adaptive
backward updates.
See issue 1040 point 1.
TODOs:
- real-world default probabilities
- why is counts sometimes NULL in the decoder? Does that mean bw
adaptivity updates only work on some frames? (I haven't looked
very closely yet, maybe this is a red herring.)
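For illustration, a minimal sketch of one backward (count-driven)
adaptation step for a single binary probability; the blend weight is a
fixed illustrative constant, not the real count-saturated update factor:

    #include <stdint.h>

    typedef uint8_t vpx_prob;

    static vpx_prob adapt_prob_sketch(vpx_prob pre_prob,
                                      unsigned int ct0, unsigned int ct1) {
      const unsigned int den = ct0 + ct1;
      unsigned int observed;
      if (den == 0) return pre_prob;  /* no counts on this frame */
      /* probability of the "0" branch as seen this frame, clipped to [1, 255] */
      observed = (255 * ct0 + den / 2) / den;
      if (observed == 0) observed = 1;
      /* blend the previous probability with the observed one */
      return (vpx_prob)((3 * pre_prob + observed + 2) >> 2);
    }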
hui su [Fri, 16 Oct 2015 01:04:50 +0000 (18:04 -0700)]
VP10: some changes to palette mode
Account for rounding in the distortion calculation in k-means;
carry out rounding before removing duplicate base colors;
replace magic numbers with macros;
use prefix increment.
Slight coding gain (<0.1%) on screen_content testset.
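A minimal sketch of the first point, using a hypothetical helper (not an
actual vp10 function): distortion inside k-means is measured against the
rounded centroid value that will actually be signalled:

    static double palette_pixel_dist(double pixel, double centroid) {
      const double rounded = (double)(int)(centroid + 0.5);  /* signalled value */
      const double diff = pixel - rounded;
      return diff * diff;
    }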
Marco [Tue, 29 Sep 2015 22:38:49 +0000 (15:38 -0700)]
Adjustment on limiting cyclic refresh on steady blocks.
Adjust the qp threshold and consec_zeromv threshold for
limiting cyclic refresh. Also increase the refresh period
when the amount of limiting is significant, and do some code cleanup.
Small gain in PSNR/SSIM metrics: ~0.25/0.3 gain on RTC set, speed 7.
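Roughly, the gating looks like this (thresholds and names are illustrative,
not the actual vp9 values):

    static int skip_refresh_for_steady_block(int block_qp, int consec_zero_mv,
                                             int qp_thresh, int zeromv_thresh) {
      /* A block coded with zero motion for many frames at a low QP is treated
         as "steady"; refreshing it would waste bits, so it is left out. */
      return block_qp < qp_thresh && consec_zero_mv > zeromv_thresh;
    }

    static int adjust_refresh_period(int base_period, int num_limited,
                                     int num_blocks) {
      /* If a large fraction of blocks were excluded, stretch the refresh
         period so the remaining blocks are not refreshed too aggressively. */
      return (2 * num_limited > num_blocks) ? 2 * base_period : base_period;
    }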
Marco [Tue, 13 Oct 2015 02:17:44 +0000 (19:17 -0700)]
VP9: Rate control update for re-encode screen-content.
For the re-encoding (at max-qp) on the detected high-content change:
update rate correction factor, reset rate over/under-shoot flags,
and update/reset the rate control for layered coding.
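A minimal sketch of that housekeeping, with hypothetical names standing in
for the vp9 rate-control state:

    typedef struct {
      double rate_correction_factor;  /* maps target bits to a Q choice */
      int overshoot_flag;             /* set when the last frame overshot */
      int undershoot_flag;            /* set when the last frame undershot */
    } ReencodeRcSketch;

    static void prepare_reencode_at_max_q(ReencodeRcSketch *rc,
                                          double corrected_factor) {
      rc->rate_correction_factor = corrected_factor;  /* reflect the max-Q pass */
      rc->overshoot_flag = 0;   /* clear stale over/under-shoot state */
      rc->undershoot_flag = 0;
      /* for layered (temporal/spatial) coding, the per-layer rate-control
         context would be updated or reset here as well */
    }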
Changes to the breakout behavior for partition selection.
The biggest impact is on speed 0 where encode speed in
some cases more than doubles with typically less than 1%
impact on quality.
Speed 0 encode speed impact examples:
Animation test clip: +128%
Park Joy: +59%
Old town Cross: +109%
This change (in a new config experiment: universal_hp) removes the
bitstream parsing dependency of the HP MV bit on the ref MV to be
coded. It also cleans up clearing of the HP bit in near/nearestMV,
since HP is always on if it's set in the frame header.
This admittedly doesn't clean up the crap that could be cleaned up,
but that's mostly because I think this needs some careful review;
not so much for coding style, but more from hardware people and from
the codec team on what we/you want. It would also be nice to get some
actual numbers on the real quality impact of this change. If, for
example, hardware people come up and tell us they don't actually care
anymore, we should probably just keep this code as-is and do nothing
(i.e. discard this patch).
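The parsing change itself is small; a hedged sketch (names and threshold
are illustrative):

    static int read_hp_bit_old(int frame_allow_hp, int ref_mv_row,
                               int ref_mv_col, int companded_thresh) {
      const int ref_small = (ref_mv_row < companded_thresh &&
                             ref_mv_row > -companded_thresh &&
                             ref_mv_col < companded_thresh &&
                             ref_mv_col > -companded_thresh);
      return frame_allow_hp && ref_small;  /* parsing depends on the ref MV */
    }

    static int read_hp_bit_new(int frame_allow_hp) {
      return frame_allow_hp;               /* parsing depends on the header only */
    }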
vp10: remove clamp_mv2() call from vp10_find_best_ref_mvs().
This actually has no effect whatsoever, since the input MVs themselves
are clamped by clamp_mv_ref() already, which is significantly more
restrictive in its bounds.
Geza Lore [Thu, 8 Oct 2015 14:44:49 +0000 (15:44 +0100)]
Optimization of 8bit block error for high bitdepth
If high bit depth configuration is enabled, but encoding in profile 0,
the code now falls back on optimized SSE2 assembler to compute the
block errors, similar to when high bit depth is not enabled.
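A minimal sketch of the dispatch, with an illustrative function-pointer
type rather than the actual rtcd table:

    #include <stdint.h>

    typedef int64_t (*block_error_fn)(const int16_t *coeff,
                                      const int16_t *dqcoeff,
                                      intptr_t count, int64_t *ssz);

    static block_error_fn pick_block_error(int build_has_high_bitdepth,
                                           int stream_bit_depth,
                                           block_error_fn sse2_8bit,
                                           block_error_fn highbd_generic) {
      /* 8-bit (profile 0) streams can use the fast SSE2 path even in a
         high-bit-depth build; only true high-bit-depth streams need the
         generic path. */
      if (!build_has_high_bitdepth || stream_bit_depth == 8)
        return sse2_8bit;
      return highbd_generic;
    }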
jackychen [Wed, 7 Oct 2015 17:44:04 +0000 (10:44 -0700)]
VP9 denoiser bug-fix: artifact caused by false buffer swap.
The artifact occurs periodically when the VP9 denoiser is on and
refresh_golden_frame happens. When refresh_golden_frame happens,
we should copy the frame buffer instead of swapping the pointers.
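A minimal sketch of the fix (structures and names are illustrative, not the
vp9 denoiser code):

    #include <string.h>

    typedef struct {
      unsigned char *data;
      size_t size;
    } FrameBufSketch;

    static void refresh_denoised_golden(FrameBufSketch *golden,
                                        const FrameBufSketch *running) {
      /* The buggy path swapped golden->data and running->data, leaving the
         running buffer pointing at stale data; copying keeps both consistent. */
      memcpy(golden->data, running->data, running->size);
    }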
James Zern [Sat, 26 Sep 2015 03:43:04 +0000 (20:43 -0700)]
vp9/tile_worker_hook: add multiple tile decoding
this reduces the number of synchronizations in decode_tiles_mt() and
improves overall performance when the number of threads is less than the
number of tiles
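A minimal sketch of the idea (not the actual hook; tile_done[] stands in
for real per-tile decoding): each worker handles several tiles, so workers
are launched and synced once per frame rather than once per batch of tiles:

    static void tile_worker_sketch(int worker_id, int num_workers,
                                   int tile_cols, int *tile_done) {
      int t;
      /* this worker takes tiles worker_id, worker_id + num_workers, ... */
      for (t = worker_id; t < tile_cols; t += num_workers) {
        /* decode_tile(t) would run here */
        tile_done[t] = 1;
      }
    }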
The serial decode check is too strict for tile-threaded decoding as
there is no guarantee on the decode order nor which specific error
will take precedence. Currently a tile-level error is not forwarded so
the frame will simply be marked corrupt.
Marco [Thu, 1 Oct 2015 22:46:06 +0000 (15:46 -0700)]
Add first_spatial_layer_to_encode to SVC.
Use the existing VP9_SET_SVC control to set the
first spatial layer to encode.
Since we loop over all spatial layers inside the encoder, the
setting of spatial_layer_id via VP9_SET_SVC has no relevance.
Use it instead to set the first_spatial_layer_to_encode,
which allows an application to skip encoding lower layer(s).
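Inside the encoder's per-frame loop this amounts to skipping the lower
layers; a minimal sketch with illustrative names:

    static int encode_spatial_layers_sketch(int number_spatial_layers,
                                            int first_spatial_layer_to_encode) {
      int sl, encoded = 0;
      for (sl = 0; sl < number_spatial_layers; ++sl) {
        if (sl < first_spatial_layer_to_encode) continue;  /* skip lower layers */
        /* encode_one_spatial_layer(sl) would run here */
        ++encoded;
      }
      return encoded;  /* number of spatial layers actually encoded */
    }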
Julia Robson [Tue, 6 Oct 2015 11:11:14 +0000 (12:11 +0100)]
SSSE3 optimisation for quantize in high bit depth
When configured with high bit depth enabled, the 8bit quantize
function stopped using optimised code. This made 8bit content
encode slowly. This commit re-enables the SSSE3 optimisations.
We have historically added new bits to cat6 whenever we added a new
transform size (or bitdepth, for that matter). However, we have
always coded these new bits regardless of the actual transform size,
which means that for smaller transforms, we code bits that cannot
possibly be set. The coding (quality) impact of this is negligible,
but the bigger issue is that this allows creating bitstreams with
coefficient values that are nonsensical and can cause integer
overflows, which then de facto become part of the bitstream spec.
By not coding these bits, we remove this possibility.
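A minimal sketch of the idea: derive the number of codable cat6 extra bits
from the largest coefficient value the current transform size and bit depth
allow (both inputs are stand-ins for the real bounds/constants):

    static int cat6_bits_needed(int max_abs_coeff, int cat6_min_val) {
      const int range = max_abs_coeff - cat6_min_val;  /* span coded by extra bits */
      int bits = 0;
      while ((1 << bits) <= range) ++bits;
      return bits;  /* only this many extra bits are read/written */
    }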