Kalman filtering of patches for frame-recursive video denoising

Pablo Arias, Jean-Michel Morel
CMLA, ENS Paris-Saclay



Overview


A frame-recursive video denoising method computes each output frame as a function of only the current noisy frame and the previous denoised output. Frame-recursive methods were among the earliest approaches to video denoising. Over the last fifteen years, however, they have been used almost exclusively in real-time applications, with a denoising performance significantly below the state of the art. In this work we propose Backward Non-Local Kalman (BNLK), a simple frame-recursive method which is fast, has a low memory complexity, and competes with more complex state-of-the-art methods that require processing several input frames to produce each output frame. Furthermore, the proposed approach recovers many details that are missed by most non-recursive methods. As an additional contribution, we also propose an off-line post-processing of the denoised video that boosts denoising quality and temporal consistency.

This webpage contains the video denoising results shown in the paper, as well as links to our code and other related work.

Figure: Overview of the frame denoising method. To denoise each frame, the method takes as input the current noisy frame and the previous denoised frame. The latter is registered to the current noisy frame using the optical flow from t to t-1. We extract groups of similar patches from the current noisy frame, together with their correspondents in the previous denoised frame. From this set of pairs of corresponding patches we learn the parameters of a linear Gaussian dynamical model in the DCT domain, and use them to estimate the noise-free patches. The estimated patches are then aggregated over the output frame. This procedure is iterated twice for each frame.
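The recursive principle described above can be illustrated with a deliberately simplified sketch. The code below is NOT the BNLK pipeline (it omits optical flow, patch grouping and the DCT-domain model); it only shows the frame-recursive idea of a per-pixel linear Gaussian (Kalman-style) update, where each output frame depends only on the current noisy frame and the previous denoised output. The process-variance parameter `q` is an assumption of this toy model.

```python
import numpy as np

def recursive_denoise(noisy_frames, sigma2, q=0.5):
    """Toy per-pixel Kalman-style temporal recursion (simplified
    illustration only, not the actual patch-based BNLK method).

    noisy_frames: list of 2D arrays (frames of a video)
    sigma2: noise variance of the observations
    q: assumed process variance (models scene change between frames)
    """
    est = noisy_frames[0].astype(float)   # first frame: no prior available
    var = np.full_like(est, sigma2)       # posterior variance per pixel
    out = [est]
    for frame in noisy_frames[1:]:
        var_pred = var + q                          # predict: allow drift
        gain = var_pred / (var_pred + sigma2)       # Kalman gain
        est = est + gain * (frame - est)            # blend in new noisy frame
        var = (1.0 - gain) * var_pred               # updated posterior variance
        out.append(est)
    return out
```

On a static scene with `q = 0` this reduces to a running temporal average, so the residual variance after t frames is roughly sigma2/t; the patch-based model in the paper generalizes this recursion to groups of motion-compensated patches.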


Publications and code


paper|code |web P. Arias, J.-M. Morel, Kalman filtering of patches for frame-recursive video denoising, NTIRE CVPRW 2019. (methods BNLK2 and BNLK2+S)


Our previous works on video denoising:

preprint|code A. Davy, T. Ehret, J.-M. Morel, P. Arias, G. Facciolo, Non-local video denoising by CNN. ICIP, 2019. (VNLnet method)

preprint|code T. Ehret, A. Davy, J.-M. Morel, G. Facciolo, P. Arias, Model-blind video denoising via frame-to-frame training. CVPR, 2019.

paper|code|web T. Ehret, P. Arias, J.-M. Morel, NL-Kalman: A recursive video denoising algorithm. ICIP, 2018. (method FNLK)

paper|web P. Arias, G. Facciolo, J.-M. Morel, A comparison of patch-based models in video denoising. IVMSP, 2018. (method BM4D-OF)

paper|code|web P. Arias, J.-M. Morel, Video denoising via empirical Bayesian estimation of space-time patches. JMIV 60(1):70-93, 2017. (method VNLB)

paper T. Ehret, P. Arias, J.-M. Morel, Global patch search boosts video denoising. VISIGRAPP 2016.


Results






We evaluated our method on seven 960x540 grayscale test sequences taken from Derf's video database. The originals are RGB sequences of resolution 1920x1080; we converted them to grayscale by averaging the channels and then downscaled them.
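The preparation of the test sequences can be sketched as follows. The channel averaging matches the text above; the 2x2 block-average downscaling is one plausible choice, as the page does not specify the exact downscaling filter used.

```python
import numpy as np

def rgb_to_gray_half(rgb):
    """Convert an RGB frame to grayscale by averaging the channels,
    then halve the resolution by averaging 2x2 blocks (the exact
    downscaling filter is an assumption here)."""
    gray = rgb.astype(float).mean(axis=2)          # average R, G, B
    h, w = gray.shape
    h2, w2 = h // 2, w // 2                        # crop to even size if needed
    blocks = gray[:h2 * 2, :w2 * 2].reshape(h2, 2, w2, 2)
    return blocks.mean(axis=(1, 3))                # 2x2 block average
```

Applied to a 1920x1080 RGB frame, this produces a 960x540 grayscale frame, the resolution used in the experiments.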

We show results for two versions of our method: an on-line version denoted BNLK2 ("2" because we apply two iterations) and an off-line post-processing called BNLK2+S. We compared them against the following algorithms:

VNLB:
Method of [Arias, Morel, 2017], using the hard-thresholding estimator for the a priori variances (VNLB-H in the paper), with a 3D patch size of 10x10x2. Implementation available here.
VNLnet:
CNN-based method introduced in [Davy et al. 2019]. The network is fed with 15 frames that have been registered to the reference frame by matching large patches. Implementation available here.
SPTWO:
Method of [Buades, Lisani, Miladinović, 2016]. Implementation available here.
BM4D-OF:
Modified version of the BM4D method [Maggioni et al. 2013] using 3D patches of size 10x10x2 and optical flow to guide the patch search. See this article for more details.
V-BM4D-mp:
Official implementation of [Maggioni et al. 2012], setting the parameters according to the modified profile (the best parameter profile available).
V-BM3D-np:
Official implementation of [Dabov, Foi, Egiazarian, 2007], setting the parameters according to the normal profile (the best parameter profile available).
FNLK:
Forward Non-Local Kalman method [Ehret et al. 2018]. Implementation available here. This method is closely related to the proposed BNLK. In fact, BNLK can be seen as an approximate and simplified version of FNLK, with reduced running time and memory footprint.



Average PSNR
Average SSIM
Figure: PSNR and SSIM averaged over 7 grayscale sequences with resolution 960x540 and 100 frames. Recursive methods are shown in purple and non-recursive methods in red. The saturation of the color of the bars indicates the noise standard deviation; from the lightest to the strongest color: 40, 20, 10. The non-recursive methods require processing several input frames to produce each output frame (from V-BM3D, which uses 9 frames, to V-BM4D, VNLnet and VNLB, which use 15 frames). Recursive methods such as FNLK and BNLK2 match the performance of V-BM3D and V-BM4D using only the current noisy frame and the previous denoised frame.
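For reference, the PSNR figures reported above follow the standard definition for 8-bit imagery; a minimal implementation is shown below (the peak value of 255 is the usual convention for 8-bit data, not something stated on this page).

```python
import numpy as np

def psnr(ref, est, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference frame
    and its estimate, assuming a given peak intensity value."""
    mse = np.mean((ref.astype(float) - est.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

In the per-sequence tables of the paper, this quantity is averaged over the 100 frames of each sequence; SSIM is a separate structural measure and is not reproduced here.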




Downloads


The following links point to the results obtained by the above methods for each sequence and noise level. They are split by sequence and noise level, and into non-recursive and recursive methods.

Noise-free sequences: crowd park joy pedestrians station sunflower touchdown tractor

Recursive methods (FNLK, BNLK2, BNLK2+S):
sigma 10: crowd park joy pedestrians station sunflower touchdown tractor
sigma 20: crowd park joy pedestrians station sunflower touchdown tractor
sigma 40: crowd park joy pedestrians station sunflower touchdown tractor

Non-recursive methods (VBM3D, VBM4D, SPTWO, VNLDCT, BM4D-OF, VNLB):
sigma 10: crowd park joy pedestrians station sunflower touchdown tractor
sigma 20: crowd park joy pedestrians station sunflower touchdown tractor
sigma 40: crowd park joy pedestrians station sunflower touchdown tractor

VNLnet sigmas 10, 20 and 40: crowd park joy pedestrians station sunflower touchdown tractor




Institutional acknowledgements


Work partly financed by IDEX Paris-Saclay IDI 2016, ANR-11-IDEX-0003-02, Office of Naval Research grant N00014-17-1-2552, DGA Astrid project «filmer la Terre» no ANR-17-ASTR-0013-01, MENRT.