NeRF-based 3D Reconstruction (Tutor: Fengyi Zhang, wechat: zfy15665875118)
In this task, you are required to reconstruct a scene based on NeRF:
1. Capture a set of images for a real-world scene with your device.
2. Calibrate the images using SfM tools such as Colmap to get the corresponding
poses.
3. Run a NeRF-based model to reconstruct the scene from the images and poses you
prepared.
4. Convert the reconstruction result to a 3D mesh (in 'ply' or 'obj' format).
5. Describe the above process in detail in your report. The following details
are required:
i. What scene did you choose and how did you prepare your data?
ii. Which NeRF project did you base your work on, and what is your understanding of it?
iii. What hyper-parameters did you adjust to adapt to your own data, and why?
iv. (Optional) What modifications did you make to the codebase beyond
hyper-parameter adjustments to improve the reconstruction quality?
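Step 2 above can be scripted. The sketch below chains COLMAP's feature extraction, exhaustive matching, and mapping stages to obtain a sparse reconstruction with camera poses; it assumes the `colmap` binary is on your PATH, and the `./images` / `./work` paths and the function names are our own illustrative choices, not part of COLMAP:

```python
import subprocess
from pathlib import Path

def colmap_pose_commands(image_dir: str, workspace: str) -> list[list[str]]:
    """Build the sequence of COLMAP CLI calls that recovers camera poses
    from a folder of images: feature extraction -> matching -> mapping."""
    ws = Path(workspace)
    db = str(ws / "database.db")
    sparse = str(ws / "sparse")
    return [
        ["colmap", "feature_extractor",
         "--database_path", db, "--image_path", image_dir],
        ["colmap", "exhaustive_matcher",
         "--database_path", db],
        ["colmap", "mapper",
         "--database_path", db, "--image_path", image_dir,
         "--output_path", sparse],
    ]

def run_colmap(image_dir: str, workspace: str) -> None:
    """Execute the pipeline; requires the `colmap` binary on PATH."""
    Path(workspace, "sparse").mkdir(parents=True, exist_ok=True)
    for cmd in colmap_pose_commands(image_dir, workspace):
        subprocess.run(cmd, check=True)

# run_colmap("./images", "./work")  # then feed ./work/sparse to your NeRF codebase
```

Most NeRF codebases ship a converter (e.g. a `colmap2nerf`-style script in torch-ngp) that turns the resulting sparse model into the pose format they expect.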
Criteria:
1. Minimum level: Geometry reconstruction of an
object-level scene
Since NeRF was originally designed to model object-level scenes, it should be
easy for you to reconstruct the geometry of an object as shown in Fig. 1.
Fig. 1. Geometry mesh without textures of an object-level scene.
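As a rough sketch of how such a geometry mesh is obtained: sample the trained model's density on a regular 3D grid, then run marching cubes at a chosen iso-level. In the sketch below a toy analytic density stands in for a trained NeRF, the names (`sample_density_grid`, `toy_density`) are our own, and the actual surface extraction (e.g. `skimage.measure.marching_cubes`) is left as a comment:

```python
import numpy as np

def sample_density_grid(density_fn, resolution=64, bound=1.0):
    """Evaluate a NeRF-style density sigma(x) on a regular grid over
    [-bound, bound]^3. This grid is what marching cubes consumes."""
    t = np.linspace(-bound, bound, resolution)
    xs, ys, zs = np.meshgrid(t, t, t, indexing="ij")
    pts = np.stack([xs, ys, zs], axis=-1).reshape(-1, 3)
    return density_fn(pts).reshape(resolution, resolution, resolution)

def toy_density(pts):
    """Stand-in for a trained NeRF: a solid sphere of radius 0.5."""
    return np.where(np.linalg.norm(pts, axis=-1) < 0.5, 10.0, 0.0)

grid = sample_density_grid(toy_density, resolution=32)
occupied = grid > 5.0  # density threshold, i.e. the iso-level
# With a trained model, extract the mesh from the same grid, e.g.:
# verts, faces, _, _ = skimage.measure.marching_cubes(grid, level=5.0)
```

In practice the iso-level and the grid bound are the two knobs to tune: too low an iso-level keeps floaters, too high erodes thin structures.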
2. Intermediate level: Geometry and appearance
reconstruction of an object-level scene
However, it remains an open question how to color your geometry mesh as shown in
Fig. 2, because NeRF was originally designed for novel view synthesis rather than
3D reconstruction. Reference 3 provides a simple way. Of course, we encourage
you to design or utilize more advanced methods to obtain more accurate color
meshes.
Fig. 2. Mesh with textures of an object-level scene.
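One simple scheme in the spirit of Reference 3 is to query the trained color branch at each mesh vertex and bake the result as per-vertex colors in the PLY file. The sketch below uses a toy radiance function in place of a trained NeRF, omits faces for brevity, and all names are illustrative:

```python
import numpy as np

def write_colored_ply(path, verts, colors):
    """Write vertices with per-vertex RGB as ASCII PLY.
    `colors` is float RGB in [0, 1]; PLY stores uchar in [0, 255]."""
    rgb = np.clip(colors * 255.0, 0, 255).astype(np.uint8)
    lines = [
        "ply", "format ascii 1.0",
        f"element vertex {len(verts)}",
        "property float x", "property float y", "property float z",
        "property uchar red", "property uchar green", "property uchar blue",
        "end_header",
    ]
    for v, c in zip(verts, rgb):
        lines.append(f"{v[0]} {v[1]} {v[2]} {c[0]} {c[1]} {c[2]}")
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")

def toy_rgb(pts):
    """Stand-in for the trained color branch: map [-1,1] coords to RGB."""
    return (pts + 1.0) / 2.0

verts = np.array([[0.0, 0.0, 0.0], [0.5, -0.5, 0.5]])
write_colored_ply("colored_mesh.ply", verts, toy_rgb(verts))
```

Since NeRF's color is view-dependent, Reference 3's approach must pick viewing directions for the query (e.g. averaging colors over the training views); a view-independent query is the crudest approximation.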
3. Advanced level: Geometry (and appearance)
reconstruction of large or unbounded scenes
Again, since NeRF was originally designed to model object-level scenes,
additional work is required to enable its application in large or unbounded
scenes as shown in Fig. 3. For example, you may need a faster NeRF codebase for
large-scale scenes, and a certain coordinate mapping for unbounded scenes.
Fig. 3. Mesh with textures of a large scene.
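One common coordinate mapping for unbounded scenes is the scene contraction from mip-NeRF 360: points inside the unit ball are left alone, while points outside are squashed into the shell between radius 1 and 2, so arbitrarily distant background still lands inside a bounded sampling volume. A minimal numpy sketch:

```python
import numpy as np

def contract(x, eps=1e-9):
    """mip-NeRF 360 scene contraction: identity inside the unit ball,
    (2 - 1/||x||) * x/||x|| outside, mapping all of R^3 into ||y|| < 2."""
    n = np.linalg.norm(x, axis=-1, keepdims=True)
    safe_n = np.maximum(n, eps)  # avoid division by zero at the origin
    return np.where(n <= 1.0, x, (2.0 - 1.0 / safe_n) * x / safe_n)

far = np.array([[100.0, 0.0, 0.0]])   # distant background point
near = np.array([[0.3, 0.2, 0.0]])    # foreground point, unchanged
```

Codebases apply this (or an NDC-style warp) to sample positions before the hash-grid or MLP query, so a fixed-size grid can cover an unbounded scene.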
References
1. Mildenhall, B., et al. "NeRF: Representing scenes as neural radiance fields
for view synthesis." ECCV. 2020.
2. Colmap: https://github.com/colmap/colmap
3. Color mesh: https://github.com/kwea123/nerf_pl/blob/master/README_mesh.md
4. A faster NeRF codebase: https://github.com/ashawkey/torch-ngp
Created on: Nov. 09, 2023