Deep Combiner for Independent and Correlated Pixel Estimates

Jonghee Back 1, Binh-Son Hua 2,3, Toshiya Hachisuka 4, Bochang Moon 1

1 GIST, 2 VinAI Research, 3 VinUniversity, 4 The University of Tokyo

ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia 2020)

Our framework combines two different types of images, independent pixel estimates (e.g., path-traced images) and correlated pixel estimates (e.g., denoised images), and reduces the remaining errors (residual noise or systematic errors) of existing methods such as Nonlinearly weighted First-Order Regression (NFOR) [Bitterli et al. 2016], Kernel-Predicting Convolutional Networks (KPCN) [Bako et al. 2017], and Gradient-domain Path Tracing with L1 and L2 reconstruction (GPT-L1 and GPT-L2) [Kettunen et al. 2015]. The numbers are the relative mean squared error (relMSE) [Rousselle et al. 2011].

Abstract

Monte Carlo integration is an efficient method for solving high-dimensional integrals in light transport simulation, but it typically produces noisy images due to its stochastic nature. Many existing methods, such as image denoising and gradient-domain reconstruction, mitigate this noise by introducing some form of correlation among pixels. While these methods reduce noise, they are known to still suffer from method-specific residual noise or systematic errors. We propose a unified framework that reduces such remaining errors. Our framework takes a pair of images: one with independent pixel estimates and the other with the corresponding correlated estimates. The correlated pixel estimates are generated by various existing methods, such as denoising and gradient-domain rendering. Our framework then combines the two images via a novel combination kernel. We model our combination kernel as a weighting function with a deep neural network that exploits the correlation among pixel estimates. To improve the robustness of our framework to outliers, we additionally propose an extension that handles multiple image buffers. The results demonstrate that our unified framework can successfully reduce the error of existing methods while treating them as black boxes.
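To make the combination step concrete, the following is a minimal NumPy sketch of a kernel-weighted combination of an independent image y and a correlated image x. It assumes a per-pixel softmax-normalized kernel of the hypothetical form z_c = sum_i w_ci * (y_i + x_c - x_i) over a k-by-k window; the function name, tensor shapes, and this exact kernel form are illustrative assumptions, not the paper's implementation (in the actual method, the weights come from a trained deep network rather than arbitrary logits).

```python
import numpy as np

def combine(y, x, logits):
    """Combine independent estimates y with correlated estimates x.

    y, x: (H, W) grayscale images (independent / correlated estimates).
    logits: (H, W, k*k) unnormalized per-pixel weights over a k x k window
            (in the real method, predicted by a neural network).
    """
    H, W, K = logits.shape
    k = int(np.sqrt(K))
    r = k // 2
    # Softmax over the window so the weights at each pixel sum to 1.
    w = np.exp(logits - logits.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)

    yp = np.pad(y, r, mode='edge')
    xp = np.pad(x, r, mode='edge')
    out = np.zeros_like(y)
    idx = 0
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            yi = yp[r + dy:r + dy + H, r + dx:r + dx + W]
            xi = xp[r + dy:r + dy + H, r + dx:r + dx + W]
            # Neighbor's independent estimate, shifted by the
            # correlated difference x_c - x_i, weighted per pixel.
            out += w[..., idx] * (yi + x - xi)
            idx += 1
    return out
```

One property of this form: when the correlated image already equals the independent one (x = y), the shifted term reduces to y at every pixel, so any choice of weights returns the input unchanged.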

