Indexed in:
Abstract:
Significant progress has been made in deep 3D reconstruction from a single frontal view with the aid of generative models; however, the unreliability of generated multi-views continues to challenge this task. In this study, we propose Recon3D, a novel framework for 3D reconstruction. Recon3D exclusively uses a generated back view, which can be obtained more reliably from generative models conditioned on the frontal reference image, as an explicit prior. By combining this prior with guidance from a generative model that is fine-tuned with DreamBooth and then enhanced with ControlNet, we effectively supervise NeRF rendering in latent space. We then convert the NeRF representation into an explicit point cloud and further optimize this explicit representation against the high-quality textured reference views. Extensive experiments demonstrate that our method achieves state-of-the-art performance in rendering novel views with superior geometry and texture quality. © 2024 IEEE.
Keywords:
Corresponding author:
Email:
Source:
ISSN: 2160-7508
年份: 2024
页码: 2802-2811
Language: English
Affiliated department: