Paul Wilkins [Thu, 28 Mar 2013 19:46:35 +0000 (19:46 +0000)]
Fixed incorrect use of compute_qdelta()
This function expects real Q values as inputs,
not index values.
The usage here impacts the Q chosen for forced key
frames. Though this is a bug fix, I have not yet verified
whether the q multiplier value used after the fix is
correct.
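A minimal sketch of the corrected call pattern, assuming the encoder's
vp9_convert_qindex_to_q() helper and a VP9_COMP context; the exact call
site in the forced-key-frame path may differ, and q_multiplier is an
illustrative factor:

    /* Sketch only: convert an index to a real Q value before calling
       compute_qdelta(), which wants real Q on both sides. */
    double q = vp9_convert_qindex_to_q(cpi->common.base_qindex);
    int q_delta = compute_qdelta(cpi, q, q * q_multiplier);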
John Koleszar [Wed, 3 Apr 2013 23:12:11 +0000 (16:12 -0700)]
Remove special case vp9_decode_coefs_4x4
This code was only called in the BPRED case, but had no real special
case associated with it. Made BPRED behave like all other modes. No
bitstream change.
Yunqing Wang [Wed, 3 Apr 2013 19:22:50 +0000 (12:22 -0700)]
Modify vp9_setup_interp_filters function
Moved vp9_setup_scale_factors_for_frame() out of
vp9_setup_interp_filters(), so that it is called only once per
frame instead of once per macroblock. Decoder tests showed a 1.5%
performance gain.
General code cleanup in the loopfilter code. Modified setup_frame_size
so that VP9_COMMON is now updated in one place, after all width/height
checks have passed.
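Roughly, the restructuring looks like the following sketch; the loop
shape and the argument lists are illustrative, not the actual code:

    /* Before: scale factors recomputed for every macroblock. */
    for (mb = 0; mb < mb_count; ++mb)
      vp9_setup_interp_filters(xd, filter_type, cm);  /* rescaled internally */

    /* After: computed once per frame, then reused per macroblock. */
    vp9_setup_scale_factors_for_frame(sf, ref_w, ref_h, cm->width, cm->height);
    for (mb = 0; mb < mb_count; ++mb)
      vp9_setup_interp_filters(xd, filter_type, cm);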
Dmitry Kovalev [Fri, 29 Mar 2013 23:31:46 +0000 (16:31 -0700)]
Adding functions with common code for superblock decoding.
Adding decode_sb_8x8 and decode_sb_4x4 with common code for superblock
decoding. Renaming decode_superblock32 to decode_sb32 and
decode_superblock64 to decode_sb64.
Uncommenting Track elements related to BlockAdditional and adding
the new AlphaMode element as specified in the matroska spec here:
http://matroska.org/technical/specs/index.html#AlphaMode
Calculate SSIM over both the reconstruction and the postproc buffer.
We used to calculate SSIM only over the postproc buffer, whereas we
calculate PSNR for both. Compared to postproc-SSIM, this is about 0.3%
higher for derf, 1.4% lower for hd and 0.5% lower for stdhd, although
it is highly variable on a per-clip basis.
Deb Mukherjee [Tue, 26 Mar 2013 22:23:30 +0000 (15:23 -0700)]
Framework changes in nzc to allow more flexibility
The patch adds the flexibility to use standard EOB based coding
on smaller block sizes and nzc based coding on larger block sizes.
The tx-sizes that use nzc based coding and those that use EOB based
coding are controlled by a function get_nzc_used().
By default, this function uses nzc based coding for 16x16 and 32x32
transform blocks, which seems to bridge the performance gap
substantially.
All sets are now lower by 0.5% to 0.7%, as opposed to ~1.8% before.
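As a sketch, the default selector could look like the following; the
actual function may take additional context as arguments:

    /* nzc-based coding only for the larger transform sizes by default. */
    static int get_nzc_used(TX_SIZE tx_size) {
      return tx_size == TX_16X16 || tx_size == TX_32X32;
    }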
Paul Wilkins [Tue, 26 Mar 2013 14:40:24 +0000 (14:40 +0000)]
Adjust mv_ratio_accumulator threshold.
This threshold effectively limits the amount of motion
from one end of a GF/ARF group to the other.
This patch makes the threshold depend on image size.
Ronald S. Bultje [Mon, 25 Mar 2013 19:14:01 +0000 (12:14 -0700)]
Scatter-based scantables.
This gains about 0.2% on derf, 0.1% on hd and 0.4% on stdhd. I can put
this under an experimental flag if wanted, just trying to get my patch
queue in shape.
Deb Mukherjee [Tue, 12 Mar 2013 21:21:08 +0000 (14:21 -0700)]
Implicit weighted prediction experiment
Adds an experiment to use a weighted prediction of two INTER
predictors, where the weight is one of (1/4, 3/4), (3/8, 5/8),
(1/2, 1/2), (5/8, 3/8) or (3/4, 1/4), and is chosen implicitly
based on the consistency of the predictors with the already
reconstructed pixels above and to the left of the current
macroblock or superblock.
Currently the weighting is not applied to SPLITMV modes, which
default to the usual (1/2, 1/2) weighting; however, the code is in
place, controlled by a macro. The same weighting is used for the Y
and UV components, with the weight derived from analyzing the Y
component only.
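A hedged sketch of the implicit selection; border_error() is a
hypothetical helper measuring how well a blended prediction matches the
reconstructed top/left border pixels, and all names are illustrative:

    /* Candidate weights in eighths: (1/4,3/4) ... (3/4,1/4). */
    static const int wt_q3[5][2] = {{2, 6}, {3, 5}, {4, 4}, {5, 3}, {6, 2}};
    int i, best = 2;          /* index 2 is the (1/2, 1/2) default */
    int best_err = INT_MAX;
    for (i = 0; i < 5; ++i) {
      const int err = border_error(pred0, pred1, wt_q3[i],
                                   recon_top, recon_left);
      if (err < best_err) { best_err = err; best = i; }
    }
    /* 'best' now indexes the weight used for both Y and UV. */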
Ronald S. Bultje [Mon, 25 Mar 2013 19:28:24 +0000 (12:28 -0700)]
Redo banding for all transforms.
Now that the first AC coefficient in both directions uses the same DC
as its context, there is no longer any purpose in letting each have
its own band. Merging these two bands allows us to add separate bands
for some of the very high-frequency AC coefficients.
In addition, I'm redoing the banding for the 1D-ADST col/row scans. I
don't think the old banding made any sense at all (it merged the last
coefficient of the first row/col into the same band as the first two of
the second row/col), which was clearly an oversight from the band being
applied in scan order (rather than by actual position). Now,
coefficients at the same position will be in the same band, regardless
of what scan order is used. I think this makes the most sense for the
purpose of banding, which is basically "predict energy for this
coefficient depending on the energy of context coefficients" (i.e. pt).
After full re-training, together with previous patch, derf gains about
1.2-1.3%, and hd/stdhd gain about 0.9-1.0%.
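In sketch form (array names illustrative), the change means the band is
keyed off the coefficient's position rather than its index in the scan:

    const int pos  = scan[c];                 /* position within the block */
    const int band = band_by_position[pos];   /* same band for any scan order */
    /* previously, effectively: band = band_by_scan_index[c]; */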
Ronald S. Bultje [Tue, 26 Mar 2013 23:46:09 +0000 (16:46 -0700)]
Use above/left (instead of previous in scan-order) as token context.
Pearson correlation for above or left is significantly higher than for
previous-in-scan-order (absolute values depend on position in scan, but
in general, we gain about 0.1-0.2 by using either above or left; using
both basically just makes this even better). For eob branch skipping,
we continue to use the previous token in scan order.
This helps about 0.9% on derf after re-training on a limited data set.
Full re-training and results on larger-resolution clips are pending.
Note that this commit breaks trellis, so we can probably get further
gains out of it by fixing trellis at some later point.
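A sketch of the context derivation, assuming a per-block token_cache and
a per-scan neighbor table; the names and the exact combination rule are
illustrative:

    /* Combine the above and left neighbors' token categories; for the
       eob branch, the previous token in scan order is still used. */
    static int get_token_context(const uint8_t *token_cache,
                                 const int16_t *neighbors, int c) {
      return (1 + token_cache[neighbors[2 * c + 0]] +
                  token_cache[neighbors[2 * c + 1]]) >> 1;
    }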
John Koleszar [Wed, 20 Mar 2013 17:22:20 +0000 (10:22 -0700)]
Add an in-loop deringing experiment
Adds a per-frame, strength-adjustable, in-loop deringing filter. Uses
the existing vp9_post_proc_down_and_across 5-tap thresholded blur
code, with a brute-force search for the threshold.
Results almost strictly positive on the YT HD set, either having no
effect or helping PSNR in the range of 1-3% (overall average 0.8%).
Results are more mixed for the CIF set (-0.5 min, 1.4 max, 0.1 avg).
This has an almost strictly negative impact on SSIM, so examining a
different filter or a more balanced search heuristic is in order.
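The brute-force search might look like the following sketch;
dering_frame(), frame_sse(), and MAX_DERING_LEVEL are hypothetical
wrappers around the 5-tap blur and a distortion measure:

    int level, best_level = 0;
    int64_t best_sse = INT64_MAX;
    for (level = 1; level <= MAX_DERING_LEVEL; ++level) {
      /* vp9_post_proc_down_and_across() underneath, thresholded at 'level' */
      dering_frame(&cm->post_proc_buffer, level);
      const int64_t sse = frame_sse(src, &cm->post_proc_buffer);
      if (sse < best_sse) { best_sse = sse; best_level = level; }
    }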
Deb Mukherjee [Wed, 13 Mar 2013 18:03:17 +0000 (11:03 -0700)]
Modeling default coef probs with distribution
Replaces the default tables for single coefficient magnitudes with
those obtained from an appropriate distribution. The EOB node
is left unchanged. The model is represented as a 256-entry codebook
where the index corresponds to the probability of the Zero or the
One node. Two variations are implemented, corresponding to whether
the Zero node or the One node is used as the peg. The main advantage
is that the default prob tables become considerably smaller and more
manageable. Besides, there is substantially less risk of over-fitting
to a training set.
Various distributions are tried and the one that gives the best
results is the family of Generalized Gaussian distributions with
shape parameter 0.75. The results are within about 0.2% of fully
trained tables for the Zero peg variant, and within 0.1% for the
One peg variant.
The forward updates are optionally (controlled by a macro)
model-based, i.e. restricted to only convey probabilities from the
codebook. Backward updates can also optionally (controlled by
another macro) be model-based, but this is turned off by default.
Currently model-based forward updates work about as well as
unconstrained updates, but there is a drop in performance when
backward updates are model-based.
The model based approach also allows the probabilities for the key
frames to be adjusted from the defaults based on the base_qindex of
the frame. Currently the adjustment function is a placeholder that
adjusts the probabilities of the EOB and Zero nodes away from their
nominal values at the higher-quality (lower qindex) and lower-quality
(higher qindex) ends of the range. The rest of the probabilities are
then derived from the model based on the adjusted probability of zero.
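As a sketch of the codebook idea; the table name and node count are
illustrative, not the actual data structures:

    /* 256-entry codebook: the index is the probability of the peg node
       (Zero here); the remaining node probabilities come from the model. */
    extern const vp9_prob coef_model_codebook[256][MODEL_NODES];

    static void get_model_coef_probs(vp9_prob *probs, int zero_node_prob) {
      const vp9_prob *row = coef_model_codebook[zero_node_prob];
      int i;
      for (i = 0; i < MODEL_NODES; ++i)
        probs[i] = row[i];   /* EOB node handled separately, as before */
    }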