James-Stein Gradient Combiner for Inverse Monte Carlo Rendering

Jeongmin Gu¹, Bochang Moon¹

¹Gwangju Institute of Science and Technology

ACM SIGGRAPH 2025 Conference Proceedings

Visualization of texture parameters inferred by a gradient-based optimization framework [Jakob et al. 2022], which iteratively updates the parameters from an initial guess (g) using the gradient of a loss function with respect to the parameters. The loss is defined as the discrepancy between an image rendered with the current parameters and a user-provided target image (a). We vary the type of gradient: (c) unbiased [Vicini et al. 2021], (d) biased [Chang et al. 2024], and (e) ours, and show the texture parameters in parameter space as they are updated over the course of optimization with each gradient type (e.g., from 20 to 100 iterations). Eight samples per pixel (spp) are used to estimate the gradients. Optimizing with either unbiased but noisy gradients (c) or smooth but biased gradients (d) yields either noisy (c) or blurred (d) parameters. We instead combine the two gradient types (unbiased and biased) in parameter space using our gradient combiner and pass the combined gradient to the optimizer. This combination enables robust optimization (e), mitigating the noise of the unbiased gradient and the bias of the biased one.
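To make the optimization loop in the caption concrete, the following is a minimal Python sketch. The renderer here is a toy stand-in (the "image" is the parameter map plus per-sample Monte Carlo noise), and all function names are ours rather than part of the framework cited above; the sketch only illustrates the gradient-descent update rule, not an actual differentiable renderer.

```python
import numpy as np

rng = np.random.default_rng(0)

def render(params, spp=8):
    # Toy stand-in for a differentiable renderer: the "image" is the
    # parameter map itself plus Monte Carlo noise whose variance falls
    # as 1/spp. (Hypothetical; not the renderer used in the paper.)
    return params + rng.normal(scale=1.0 / np.sqrt(spp), size=params.shape)

def grad_loss(params, target, spp=8):
    # Gradient of the L2 loss 0.5 * ||render(params) - target||^2.
    # For this toy renderer d(render)/d(params) is the identity, so the
    # gradient estimate is just the noisy residual image.
    return render(params, spp) - target

def optimize(params, target, lr=0.2, iterations=100):
    # Plain gradient descent from the initial guess, as in the caption.
    for _ in range(iterations):
        params = params - lr * grad_loss(params, target)
    return params

target = np.ones((16, 16))       # user-provided target image (toy)
initial = np.zeros_like(target)  # initial guess (g)
recovered = optimize(initial, target)
```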

Abstract

Inferring scene parameters such as BSDFs and volume densities from user-provided target images has been achieved using a gradient-based optimization framework, which iteratively updates the parameters using the gradient of a loss function defined by the differences between rendered and target images. The gradient can be unbiasedly estimated via physics-based rendering, i.e., differentiable Monte Carlo rendering. However, the estimated gradient can become noisy unless a large number of samples is used for gradient estimation, and relying on this noisy gradient often slows optimization convergence. An alternative is to exploit a biased version of the gradient, e.g., a filtered gradient, to achieve faster optimization convergence. Unfortunately, this can result in less noisy but overly blurred scene parameters compared to those obtained using unbiased gradients. This paper proposes a gradient combiner that blends unbiased and biased gradients in parameter space instead of relying solely on one gradient type (i.e., unbiased or biased). We demonstrate that optimization with our combined gradient enables more accurate inference of scene parameters than using unbiased or biased gradients alone.
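The abstract does not spell out the combiner's formula. As a rough sketch only, the classical positive-part James-Stein estimator suggests one way to shrink a noisy unbiased gradient toward a smooth biased one; the shrinkage form and the variance argument below are our assumptions for illustration, not the paper's actual combiner.

```python
import numpy as np

def james_stein_combine(g_unbiased, g_biased, var_unbiased):
    # Positive-part James-Stein shrinkage of a noisy unbiased gradient
    # toward a biased (e.g., filtered) one. Textbook form, assumed here
    # for illustration; operates on flattened gradients of dimension d >= 3.
    g_u = np.asarray(g_unbiased, dtype=float).ravel()
    g_b = np.asarray(g_biased, dtype=float).ravel()
    d = g_u.size
    diff = g_u - g_b
    denom = float(diff @ diff)
    if denom == 0.0:
        return g_u  # the two estimates agree; nothing to shrink
    # High gradient variance or small disagreement between the two
    # estimates pulls the combined gradient toward the biased one.
    shrink = max(0.0, 1.0 - (d - 2) * var_unbiased / denom)
    return g_b + shrink * diff

# Toy usage: noisy unbiased gradient vs. an over-smoothed biased one.
rng = np.random.default_rng(0)
true_g = np.linspace(-1.0, 1.0, 64)
g_u = true_g + rng.normal(scale=0.3, size=64)  # unbiased but noisy
g_b = np.full(64, true_g.mean())               # smooth but biased
g_c = james_stein_combine(g_u, g_b, var_unbiased=0.3**2)
```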

Contents