REF^2-NeRF: Reflection and Refraction aware Neural Radiance Field

Wooseok Kim, Taiki Fukiage, Takeshi Oishi

Paper | BibTeX | Code | Dataset
Comparison renderings: Ground Truth / MS-NeRF / REF^2-NeRF

Abstract

Recently, significant progress has been made in methods for 3D reconstruction from multiple images using implicit neural representations, exemplified by the neural radiance field (NeRF) method. Such methods, which are based on volume rendering, can model a wide range of light-transport phenomena, and numerous extensions have been proposed to accommodate different scenes and situations. However, for scenes containing multiple glass objects, e.g., objects in a glass showcase, accurately modeling the target scene has remained challenging due to the presence of multiple reflection and refraction effects. This paper therefore proposes a NeRF-based modeling method for scenes containing a glass case. In the proposed method, refraction and reflection are modeled using elements that are dependent on and independent of the viewer's perspective. This approach allows us to estimate the surfaces where refraction occurs, i.e., glass surfaces, and enables the separation and modeling of both direct and reflected light components. Compared to existing methods, the proposed method achieves more accurate modeling of both glass refraction and the overall scene.
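The key idea stated above is to model the scene with view-dependent and view-independent elements so that reflected and direct light can be separated. Below is a minimal PyTorch sketch of such a two-branch radiance field; the layer sizes, activations, and the simple additive combination of the two colors are illustrative assumptions, not the paper's exact architecture.

    import torch
    import torch.nn as nn

    class DecomposedRadianceField(nn.Module):
        """Toy radiance field split into a view-independent (direct) branch and a
        view-dependent (reflected) branch. Layer sizes and the plain-MLP design
        are assumptions for illustration, not the paper's architecture."""

        def __init__(self, pos_dim=3, dir_dim=3, hidden=128):
            super().__init__()
            # Shared trunk over sample positions (positional encoding omitted for brevity).
            self.trunk = nn.Sequential(
                nn.Linear(pos_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
            )
            self.sigma_head = nn.Linear(hidden, 1)       # volume density
            self.color_vi = nn.Linear(hidden, 3)         # view-independent (direct) color
            self.color_vd = nn.Sequential(               # view-dependent (reflected) color
                nn.Linear(hidden + dir_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, 3),
            )

        def forward(self, x, d):
            h = self.trunk(x)
            sigma = torch.relu(self.sigma_head(h))
            c_vi = torch.sigmoid(self.color_vi(h))
            c_vd = torch.sigmoid(self.color_vd(torch.cat([h, d], dim=-1)))
            # The rendered sample color would combine both branches, e.g. c_vi + c_vd.
            return sigma, c_vi, c_vd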

Method Pipeline

Overview of the REF^2-NeRF model pipeline:

The glass network MLP models the refraction caused by transparent objects and adjusts each sampled position along the ray accordingly. The scene is then decomposed into view-dependent and view-independent components, which separates reflections from the input images and allows both components to be modeled.
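As a rough illustration of the refraction step described above, the sketch below shows an MLP that predicts a per-sample position offset applied before the radiance field is queried. The inputs, the offset parameterization, and the way the offset is applied are assumptions for illustration only.

    import torch
    import torch.nn as nn

    class GlassNetwork(nn.Module):
        """Sketch of a glass network that bends rays at transparent surfaces by
        predicting a per-sample position offset. The exact inputs and output
        parameterization here are illustrative assumptions."""

        def __init__(self, hidden=64):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(6, hidden), nn.ReLU(),   # sample position + ray direction
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 3),              # predicted positional offset
            )

        def forward(self, x, d):
            offset = self.mlp(torch.cat([x, d], dim=-1))
            return x + offset  # refraction-adjusted sample position

    # Usage sketch (shapes assumed): adjust samples before querying the field.
    # glass = GlassNetwork()
    # x_refracted = glass(x_samples, ray_dirs)              # [N_rays, N_samples, 3]
    # sigma, c_vi, c_vd = radiance_field(x_refracted, ray_dirs)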

Novel View Rendering (Ground Truth & REF^2-NeRF)

Scene Decomposition

Depth (View-independent)
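The depth maps above are derived from the view-independent component. Below is a generic sketch of how an expected depth can be computed from standard volume-rendering weights; this is the usual NeRF formulation and may differ from the paper's exact depth definition.

    import torch

    def expected_depth(sigma, t_vals):
        """Expected ray termination depth from volume-rendering weights.
        sigma: densities at samples [N_rays, N_samples];
        t_vals: sample distances along each ray [N_rays, N_samples].
        Standard NeRF-style computation, shown as an assumption-laden sketch."""
        deltas = t_vals[..., 1:] - t_vals[..., :-1]
        deltas = torch.cat([deltas, torch.full_like(deltas[..., :1], 1e10)], dim=-1)
        alpha = 1.0 - torch.exp(-sigma * deltas)
        trans = torch.cumprod(torch.cat(
            [torch.ones_like(alpha[..., :1]), 1.0 - alpha + 1e-10], dim=-1), dim=-1)[..., :-1]
        weights = alpha * trans
        return (weights * t_vals).sum(dim=-1)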

BibTeX

@misc{kim2023ref2nerf,
  title={REF$^2$-NeRF: Reflection and Refraction aware Neural Radiance Field},
  author={Wooseok Kim and Taiki Fukiage and Takeshi Oishi},
  year={2023},
  eprint={2311.17116},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}

Acknowledgement

The template of this website is borrowed from MS-NeRF.