Category Ranking: 98%
Total Visits: 921
Avg Visit Duration: 2 minutes
Citations: 20

Article Abstract

Neural radiance field (NeRF) has emerged as a versatile scene representation. However, it is still unintuitive to edit a pretrained NeRF because the network parameters and the scene appearance are often not explicitly associated. In this article, we introduce the first framework that enables users to retouch undesired regions in a pretrained NeRF scene without accessing any training data or category-specific data priors. The user first draws a free-form mask to specify a region containing the unwanted objects over an arbitrary rendered view from the pretrained NeRF. Our framework transfers the user-drawn mask to other rendered views and estimates guiding color and depth images within the transferred masked regions. Next, we formulate an optimization problem that jointly inpaints the image content in all masked regions by updating NeRF's parameters. We demonstrate our framework on diverse scenes and show that it obtains visually plausible and structurally consistent results with less manual user effort.
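The abstract outlines a three-step pipeline: mask transfer across views, estimation of guiding RGB-D images, and a joint optimization that updates the NeRF's parameters. Below is a minimal PyTorch-style sketch of what the final step could look like; `render_rgbd`, the per-view dictionaries, and the loss weights are hypothetical placeholders for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def finetune_inpaint(nerf, render_rgbd, views, lr=5e-4, steps=2000, w_depth=0.1):
    # `nerf` is the pretrained model; `render_rgbd(nerf, rays)` is a
    # hypothetical volume renderer returning per-ray color (N, 3) and
    # expected depth (N,). Each view dict holds precomputed ray bundles,
    # the transferred mask, guiding RGB-D images, and original renderings.
    opt = torch.optim.Adam(nerf.parameters(), lr=lr)
    for step in range(steps):
        v = views[step % len(views)]          # cycle over rendered views
        m = v["mask"]                         # bool (N,): rays inside user mask
        rgb, depth = render_rgbd(nerf, v["rays"])
        # Inside the mask, follow the guiding (inpainted) color and depth.
        loss = F.mse_loss(rgb[m], v["guide_rgb"][m])
        loss = loss + w_depth * F.l1_loss(depth[m], v["guide_depth"][m])
        # Outside the mask, stay close to the original pretrained rendering
        # so the rest of the scene is preserved.
        loss = loss + F.mse_loss(rgb[~m], v["orig_rgb"][~m])
        opt.zero_grad()
        loss.backward()
        opt.step()
    return nerf
```

Optimizing all masked views jointly through the shared radiance field, rather than inpainting each image independently, is what keeps the completed region consistent across viewpoints.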

Source: http://dx.doi.org/10.1109/MCG.2023.3336224

Publication Analysis

Top Keywords

pretrained nerf: 16
masked regions: 8
nerf: 5
nerf-in free-form: 4
free-form inpainting: 4
pretrained: 4
inpainting pretrained: 4
nerf rgb-d: 4
rgb-d priors: 4
priors neural: 4

Similar Publications

3D reconstruction is a pivotal technology that recreates three-dimensional structures from two-dimensional representations, facilitating AI's understanding and interaction with the real world. However, existing methods pose challenges from two perspectives, i.e.


We present a new generalizable NeRF method that directly generalizes to new, unseen scenarios and performs novel view synthesis with as few as two source views. The key to our approach lies in explicitly modeled correspondence-matching information, which provides a geometry prior for predicting NeRF color and density in volume rendering. The explicit correspondence matching is quantified by the cosine similarity between image features sampled at the 2D projections of a 3D point in different views, which provides reliable cues about the surface geometry.
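As a rough sketch of the cue described above (hypothetical, not the paper's code), one can project a batch of 3D points into two source views, sample feature maps at those projections, and score each point by cross-view cosine similarity; `project(points, cam)`, returning normalized [-1, 1] image coordinates, is an assumed helper.

```python
import torch
import torch.nn.functional as F

def correspondence_score(feat_a, feat_b, pts, cam_a, cam_b, project):
    # feat_*: (1, C, H, W) image feature maps; pts: (N, 3) sampled 3D points.
    uv_a = project(pts, cam_a).view(1, -1, 1, 2)   # (1, N, 1, 2) in [-1, 1]
    uv_b = project(pts, cam_b).view(1, -1, 1, 2)
    # Bilinearly sample each feature map at the point projections.
    fa = F.grid_sample(feat_a, uv_a, align_corners=True).squeeze(-1)[0].T  # (N, C)
    fb = F.grid_sample(feat_b, uv_b, align_corners=True).squeeze(-1)[0].T  # (N, C)
    # High cosine similarity suggests the point lies on a surface observed
    # consistently by both views; the score can condition color/density prediction.
    return F.cosine_similarity(fa, fb, dim=-1)     # (N,)
```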


3D scene stylization refers to generating stylized images of the scene at arbitrary novel view angles following a given set of style images while ensuring consistency when rendered from different views. Recently, several 3D style transfer methods leveraging the scene reconstruction capabilities of pre-trained neural radiance fields (NeRF) have been proposed. To successfully stylize a scene this way, one must first reconstruct a photo-realistic radiance field from collected images of the scene.
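As a generic illustration of the second stage of such a pipeline (not taken from any of the papers listed here), rendered views of the reconstructed radiance field can be pushed toward a style image with a Gram-matrix loss on pretrained feature maps; the per-layer feature inputs are assumed to come from a network such as VGG.

```python
import torch

def gram(feat):
    # feat: (C, H, W) feature map -> (C, C) normalized Gram matrix.
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return (f @ f.T) / (c * h * w)

def style_loss(render_feats, style_feats):
    # Both arguments are lists of per-layer feature maps, one set for a
    # rendered view and one for the style image. Matching Gram statistics
    # transfers texture, while the shared radiance field keeps renderings
    # from different viewpoints consistent with each other.
    return sum(torch.mean((gram(r) - gram(s)) ** 2)
               for r, s in zip(render_feats, style_feats))
```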


A monocular thoracoscopic 3D scene reconstruction framework based on NeRF.

Med Biol Eng Comput

July 2025

School of Computer Science and Engineering, Northeastern University, No. 169, Baoyuan Street, Shenyang, Liaoning, 110819, China.

With the increasing use of image-based 3D reconstruction in medical procedures, accurate scene reconstruction plays a crucial role in surgical navigation and assisted treatment. However, the monotonous colors, limited image features, and pronounced brightness fluctuations of thoracoscopic scenes make the feature-point matching on which traditional 3D reconstruction methods rely unstable and unreliable, posing a great challenge to accurate 3D reconstruction.


3D neural rendering enables photo-realistic reconstruction of a specific scene by encoding discontinuous inputs into a neural representation. Despite the remarkable rendering results, the storage of network parameters is not transmission-friendly and not extendable to metaverse applications. In this paper, we propose an invertible neural rendering approach that enables generating an interactive 3D model from a single image (i.
