Paul Wilkins [Tue, 26 Mar 2013 14:40:24 +0000 (14:40 +0000)]
Adjust mv_ratio_accumulator threshold.
This threshold effectively limits the amount of motion
from one end of a GF/ARF group to the other.
This patch makes the threshold depend on image size.
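As a rough illustration only (the base value, reference size and linear
scaling below are assumptions, not the patch's actual constants), a
size-dependent threshold could look like this:

    /* Hypothetical sketch: scale a fixed motion threshold with image area
     * so larger frames tolerate proportionally more accumulated motion. */
    static double scaled_mv_ratio_threshold(int width, int height) {
      const double base_threshold = 60.0;      /* placeholder base value */
      const double base_area = 352.0 * 288.0;  /* CIF as a reference size */
      return base_threshold * ((double)width * height) / base_area;
    }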
Ronald S. Bultje [Mon, 25 Mar 2013 19:14:01 +0000 (12:14 -0700)]
Scatter-based scantables.
This gains about 0.2% on derf, 0.1% on hd and 0.4% on stdhd. I can put
this under an experimental flag if wanted, just trying to get my patch
queue in shape.
Deb Mukherjee [Tue, 12 Mar 2013 21:21:08 +0000 (14:21 -0700)]
Implicit weighted prediction experiment
Adds an experiment to use a weighted prediction of two INTER
predictors, where the weight is one of (1/4, 3/4), (3/8, 5/8),
(1/2, 1/2), (5/8, 3/8) or (3/4, 1/4), and is chosen implicitly
based on consistency of the predictors to the already
reconstructed pixels to the top and left of the current macroblock
or superblock.
Currently the weighting is not applied to SPLITMV modes, which
default to the usual (1/2, 1/2) weighting. However, the code is in
place, controlled by a macro. The same weighting is used for Y and
UV components, where the weight is derived from analyzing the Y
component only.
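As a sketch of the implicit selection idea (all names, the SAD measure
and the decision thresholds below are assumptions, not the experiment's
actual code):

    #include <stdlib.h>

    /* Measure how well a predictor matches the already reconstructed
     * pixels directly above and to the left of the block. */
    static int boundary_sad(const unsigned char *pred, int stride,
                            const unsigned char *above,
                            const unsigned char *left, int size) {
      int i, sad = 0;
      for (i = 0; i < size; ++i) {
        sad += abs(pred[i] - above[i]);          /* top row vs. above */
        sad += abs(pred[i * stride] - left[i]);  /* left column vs. left */
      }
      return sad;
    }

    /* Return the weight for predictor 0 in eighths: 2, 3, 4, 5 or 6,
     * i.e. 1/4, 3/8, 1/2, 5/8 or 3/4. */
    static int choose_implicit_weight(const unsigned char *pred0,
                                      const unsigned char *pred1, int stride,
                                      const unsigned char *above,
                                      const unsigned char *left, int size) {
      const int err0 = boundary_sad(pred0, stride, above, left, size);
      const int err1 = boundary_sad(pred1, stride, above, left, size);
      /* Give more weight to the predictor that better matches the
       * reconstructed boundary; these thresholds are placeholders. */
      if (err0 * 2 < err1) return 6;
      if (err0 * 4 < err1 * 3) return 5;
      if (err1 * 2 < err0) return 2;
      if (err1 * 4 < err0 * 3) return 3;
      return 4;
    }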
Ronald S. Bultje [Mon, 25 Mar 2013 19:28:24 +0000 (12:28 -0700)]
Redo banding for all transforms.
Now that the first AC coefficients in both directions use the same DC
as their context, there is no longer a purpose in letting both have
their own band. Merging these two bands allows us to split bands for
some of the very high-frequency AC bands.
In addition, I'm redoing the banding for the 1D-ADST col/row scans. I
don't think the old banding made any sense at all (it merged the last
coefficient of the first row/col in the same band as the first two of
the second row/col), which was clearly an oversight from the band being
applied in scan-order (rather than in their actual position). Now,
coefficients at the same position will be in the same band, regardless
of what scan order is used. I think this makes the most sense for the purpose
of banding, which is basically "predict energy for this coefficient
depending on the energy of context coefficients" (i.e. pt).
After full re-training, together with previous patch, derf gains about
1.2-1.3%, and hd/stdhd gain about 0.9-1.0%.
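As a small sketch of position-based banding (the split points are
illustrative, not the retrained tables):

    /* Assign a coefficient band from its (row, col) position in the
     * transform block rather than from its index in the scan order, so
     * the same position lands in the same band for every scan. */
    static int band_from_position(int row, int col) {
      const int d = row + col;  /* rough frequency measure */
      if (d == 0) return 0;     /* DC */
      if (d <= 1) return 1;
      if (d <= 2) return 2;
      if (d <= 4) return 3;
      if (d <= 8) return 4;
      return 5;
    }

    /* A band table for any given scan then just looks up each scanned
     * coefficient's position (stride = transform width). */
    static void build_band_table(const int *scan, int n, int stride,
                                 int *band) {
      for (int i = 0; i < n; ++i)
        band[i] = band_from_position(scan[i] / stride, scan[i] % stride);
    }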
Ronald S. Bultje [Tue, 26 Mar 2013 23:46:09 +0000 (16:46 -0700)]
Use above/left (instead of previous in scan-order) as token context.
Pearson correlation for above or left is significantly higher than for
previous-in-scan-order (absolute values depend on position in scan, but
in general, we gain about 0.1-0.2 by using either above or left; using
both basically just makes this even better). For eob branch skipping,
we continue to use the previous token in scan order.
This helps about 0.9% on derf after re-training on a limited data set.
Full re-training and results on larger-resolution clips are pending.
Note that this commit breaks trellis, so we can probably get further
gains out of it by fixing trellis at some later point.
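A minimal sketch of the neighbor-based context (the token_cache name and
the averaging rule are assumptions for illustration):

    /* Derive the token context for a coefficient from the tokens already
     * coded above and to the left of its position, instead of from the
     * previous coefficient in scan order. */
    static int token_context(const unsigned char *token_cache, int pos,
                             int row, int col, int stride) {
      const int above = row > 0 ? token_cache[pos - stride] : 0;
      const int left = col > 0 ? token_cache[pos - 1] : 0;
      return (above + left + 1) >> 1;  /* rounded average of the neighbors */
    }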
John Koleszar [Wed, 20 Mar 2013 17:22:20 +0000 (10:22 -0700)]
Add an in-loop deringing experiment
Adds a per-frame, strength-adjustable, in-loop deringing filter. Uses
the existing vp9_post_proc_down_and_across 5 tap thresholded blur
code, with a brute force search for the threshold.
Results almost strictly positive on the YT HD set, either having no
effect or helping PSNR in the range of 1-3% (overall average 0.8%).
Results are more mixed for the CIF set (-0.5 min, 1.4 max, 0.1 avg).
This has an almost strictly negative impact on SSIM, so examining a
different filter or a more balanced search heuristic is in order.
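A sketch of the brute-force search (helper names and the search range are
assumptions, not the experiment's code):

    #include <stdint.h>

    typedef void (*dering_fn)(const uint8_t *src, uint8_t *dst, int stride,
                              int w, int h, int threshold);

    /* Try a range of deringing strengths, run the thresholded blur at each,
     * and keep the threshold giving the lowest error against the source. */
    static int pick_dering_threshold(const uint8_t *recon,
                                     const uint8_t *source, uint8_t *scratch,
                                     int stride, int w, int h,
                                     dering_fn filter) {
      int best_t = 0;
      int64_t best_err = INT64_MAX;
      for (int t = 0; t <= 16; ++t) {  /* placeholder search range */
        int64_t err = 0;
        filter(recon, scratch, stride, w, h, t);
        for (int y = 0; y < h; ++y)
          for (int x = 0; x < w; ++x) {
            const int d = scratch[y * stride + x] - source[y * stride + x];
            err += (int64_t)d * d;
          }
        if (err < best_err) {
          best_err = err;
          best_t = t;
        }
      }
      return best_t;
    }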
Deb Mukherjee [Wed, 13 Mar 2013 18:03:17 +0000 (11:03 -0700)]
Modeling default coef probs with distribution
Replaces the default tables for single coefficient magnitudes with
those obtained from an appropriate distribution. The EOB node
is left unchanged. The model is represented as a 256-entry codebook
where the index corresponds to the probability of the Zero or the
One node. Two variations are implemented, corresponding to whether
the Zero node or the One node is used as the peg. The main advantage
is that the default prob tables will become considerably smaller and
more manageable. Besides, there is substantially less risk of
over-fitting to the training set.
Various distributions are tried and the one that gives the best
results is the family of Generalized Gaussian distributions with
shape parameter 0.75. The results are within about 0.2% of fully
trained tables for the Zero peg variant, and within 0.1% for the
One peg variant.
The forward updates are optionally (controlled by a macro)
model-based, i.e. restricted to only convey probabilities from the
codebook. Backward updates can also be optionally (controlled by
another macro) model-based, but this is turned off by default.
Currently, model-based forward updates work about the same as
unconstrained updates, but there is a drop in performance when
backward updates are model-based.
The model based approach also allows the probabilities for the key
frames to be adjusted from the defaults based on the base_qindex of
the frame. Currently the adjustment function is a placeholder that
adjusts the prob of EOB and Zero node from the nominal one at higher
quality (lower qindex) or lower quality (higher qindex) ends of the
range. The rest of the probabilities are then derived based on the
model from the adjusted prob of zero.
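As a rough illustration of the modeling idea only (not the codebook
construction itself; the scale parameter below stands in for the peg and
is an assumption), the Zero and One node probabilities can be obtained by
integrating a Generalized Gaussian with shape 0.75 over unit-wide
magnitude bins:

    #include <math.h>
    #include <stdio.h>

    /* Zero-mean Generalized Gaussian density with scale alpha, shape beta. */
    static double gg_pdf(double x, double alpha, double beta) {
      return beta / (2.0 * alpha * tgamma(1.0 / beta)) *
             exp(-pow(fabs(x) / alpha, beta));
    }

    /* Crude numerical integration of the density over [lo, hi]. */
    static double gg_mass(double lo, double hi, double alpha, double beta) {
      const int steps = 1000;
      const double dx = (hi - lo) / steps;
      double sum = 0.0;
      for (int i = 0; i < steps; ++i)
        sum += gg_pdf(lo + (i + 0.5) * dx, alpha, beta) * dx;
      return sum;
    }

    int main(void) {
      const double beta = 0.75;  /* shape parameter from the commit message */
      const double alpha = 4.0;  /* hypothetical scale, stands in for the peg */
      const double p_zero = gg_mass(-0.5, 0.5, alpha, beta);
      const double p_one = 2.0 * gg_mass(0.5, 1.5, alpha, beta);
      printf("P(Zero) = %.4f  P(One | nonzero) = %.4f\n",
             p_zero, p_one / (1.0 - p_zero));
      return 0;  /* link with -lm */
    }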
Paul Wilkins [Fri, 22 Mar 2013 15:47:17 +0000 (15:47 +0000)]
Disable zero bin mode boost.
As things stand the zero bin mode boost is hurting somewhat.
In part this seems to be because the boost, as applied, interferes
with the rd mode selection loop.
Average gains (derf 0.072%, yt 0.243%, ythd 0.179%, std-hd 0.212%)
Dmitry Kovalev [Thu, 21 Mar 2013 19:51:57 +0000 (12:51 -0700)]
Changing initialization order of mb_to_top_edge & mb_to_bottom_edge
Making the initialization of the mb_to_{top,bottom,left,right}_edge
variables consistent after the set_mb_row & set_mb_col calls. Also
includes a little bit of code cleanup.
Yunqing Wang [Fri, 15 Mar 2013 18:33:10 +0000 (11:33 -0700)]
Optimize 8x8 idct function
Wrote sse2 versions of vp9_short_idct8x8 and vp9_short_idct10_8x8.
Compared to the C version, the sse2 version is 2X faster. The decoder
test didn't show noticeable gain since 8x8 idct doesn't take much
of decoding time (less than 1% in my test).
John Koleszar [Thu, 14 Mar 2013 21:36:08 +0000 (14:36 -0700)]
Replace scaling byte with explicit display size
If the intended display size is different than the size the frame is
coded at, then send that size explicitly in the bitstream. Adds a new
bit to the frame header to indicate whether the extra size fields
are present.
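A sketch of the header signaling (field widths and the bit-writer helpers
are assumptions, declarations only):

    struct bit_writer;  /* hypothetical bitstream writer */
    void write_bit(struct bit_writer *bw, int bit);
    void write_literal(struct bit_writer *bw, int value, int bits);

    /* One flag bit says whether an explicit display size follows the
     * coded size; if set, the display width/height are written. */
    static void write_display_size(struct bit_writer *bw,
                                   int coded_w, int coded_h,
                                   int display_w, int display_h) {
      const int explicit_size =
          display_w != coded_w || display_h != coded_h;
      write_bit(bw, explicit_size);
      if (explicit_size) {
        write_literal(bw, display_w, 16);
        write_literal(bw, display_h, 16);
      }
    }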
Deb Mukherjee [Sat, 16 Mar 2013 16:26:52 +0000 (09:26 -0700)]
Context-pred fix to not use top/left on edges
This fix resolves some of the mismatch issues being seen
recently. While this is the right thing to do when tiling
is used for this experiment, it is not the underlying cause
of the mismatches.
Something else is causing writes outside of the allowable
frame area in the encoder, leading to this mismatch.
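The edge handling itself amounts to something like the following sketch
(hypothetical names, not the actual context-prediction code):

    /* Derive a prediction context from the above/left neighbors, skipping
     * them at the frame top or at a tile's left edge where they do not
     * exist (or belong to another tile). */
    static int edge_aware_context(const int *above_ctx, const int *left_ctx,
                                  int mb_row, int mb_col, int tile_start_col) {
      int ctx = 0;
      if (mb_row > 0) ctx += above_ctx[mb_col];             /* above exists */
      if (mb_col > tile_start_col) ctx += left_ctx[mb_row]; /* left exists  */
      return ctx;
    }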
John Koleszar [Sat, 16 Mar 2013 00:26:24 +0000 (17:26 -0700)]
Fix use of NaN in firstpass
If the second reference is better than the first in the long term,
it was possible to try to raise a negative number to a fractional
power, giving an undefined result (NaN).
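For illustration, pow() with a negative base and a non-integer exponent is
the failure mode; one hedged way to guard against it (the clamp is an
assumption, not necessarily the patch's fix):

    #include <math.h>

    /* pow(-0.3, 0.5) returns NaN, which then poisons any arithmetic it
     * feeds; clamping the base avoids that. */
    static double safe_fractional_pow(double base, double exponent) {
      if (base < 0.0) base = 0.0;
      return pow(base, exponent);
    }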
John Koleszar [Thu, 14 Mar 2013 00:09:05 +0000 (17:09 -0700)]
Fix pulsing issue with scaling
Updates the YV12_BUFFER_CONFIG structure to be crop-aware. The
existing width/height parameters are left unchanged, storing the
width and height aligned to a 16-byte boundary. The cropped
dimensions are added as new fields.
This fixes a nasty visual pulse when switching between scaled and
unscaled frame dimensions due to a mismatch between the scaling
ratio and the 16-byte aligned sizes.
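A sketch of what a crop-aware buffer description looks like (field names
are illustrative, not the exact YV12_BUFFER_CONFIG members):

    /* Keep both the 16-pixel-aligned dimensions used for allocation and
     * prediction and the true cropped dimensions, so scaling ratios can be
     * computed from the real frame size. */
    typedef struct {
      int y_width;        /* width aligned up to a multiple of 16 */
      int y_height;       /* height aligned up to a multiple of 16 */
      int y_crop_width;   /* actual coded width */
      int y_crop_height;  /* actual coded height */
      /* ... plane pointers, strides, etc. ... */
    } crop_aware_buffer;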
John Koleszar [Wed, 13 Mar 2013 19:15:43 +0000 (12:15 -0700)]
Add VP9_GET_REFERENCE control
This is like VP8_COPY_REFERENCE, but returns a pointer to the reference
frame rather than a copy of it. This is useful when the application
doesn't know what the size of the reference is, as is the case when
scaling is in effect.
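A hedged usage sketch (the control payload's struct and field names are
assumptions for illustration):

    #include "vpx/vpx_decoder.h"
    #include "vpx/vp8dx.h"

    /* Fetch a reference frame descriptor by pointer via the decoder's
     * control interface; the image planes point at decoder-owned memory,
     * so no copy of the frame data is made. */
    static int get_reference_image(vpx_codec_ctx_t *decoder, int ref_idx,
                                   vpx_image_t *out) {
      vp9_ref_frame_t ref;  /* assumed payload: an index plus a vpx_image_t */
      ref.idx = ref_idx;
      if (vpx_codec_control(decoder, VP9_GET_REFERENCE, &ref) != VPX_CODEC_OK)
        return -1;
      *out = ref.img;
      return 0;
    }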