# NeRF++
Codebase for the arXiv preprint *NeRF++: Analyzing and Improving Neural Radiance Fields*.
- Works with 360° captures of large-scale unbounded scenes.
- Supports multi-GPU training and inference with PyTorch DistributedDataParallel (DDP).
- Optimizes per-image autoexposure (experimental feature).
## Data
- Download our preprocessed data from tanks_and_temples, lf_data.
- Put the data in the sub-folder data/ of this code directory.
- Data format.
- Each scene consists of 3 splits: train/test/validation.
- Intrinsics and poses are stored as flattened 4x4 matrices (row-major).
- Poses are camera-to-world, not world-to-camera transformations.
- The OpenCV camera coordinate system is adopted, i.e., x points right, y points down, and z points into the scene.
- To convert camera poses between the OpenCV and OpenGL conventions, the following snippet can be used for both OpenGL-to-OpenCV and OpenCV-to-OpenGL conversions.
```python
import numpy as np

def convert_pose(C2W):
    flip_yz = np.eye(4)
    flip_yz[1, 1] = -1
    flip_yz[2, 2] = -1
    C2W = np.matmul(C2W, flip_yz)
    return C2W
```
- Scene normalization: move the average camera center to the origin, and scale the scene so that all camera centers lie inside the unit sphere (a minimal sketch of this preprocessing follows this list).
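For concreteness, the sketch below shows one way to load a flattened row-major camera-to-world pose and normalize the camera centers into the unit sphere. The file layout assumed by `load_pose` and the helper names are illustrative assumptions, not the repository's actual loading code.

```python
import numpy as np

def load_pose(path):
    # Assumption: the file holds a single flattened 4x4 row-major
    # camera-to-world matrix (16 numbers).
    return np.loadtxt(path).reshape(4, 4)

def normalize_cameras(c2w_list):
    # Shift the average camera center to the origin, then scale so that
    # every camera center lies inside (or on) the unit sphere.
    centers = np.stack([c2w[:3, 3] for c2w in c2w_list], axis=0)
    translate = -centers.mean(axis=0)
    shifted = centers + translate
    scale = 1.0 / (np.linalg.norm(shifted, axis=1).max() + 1e-8)

    normalized = []
    for c2w in c2w_list:
        c2w = c2w.copy()
        c2w[:3, 3] = (c2w[:3, 3] + translate) * scale
        normalized.append(c2w)
    return normalized, translate, scale
```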
## Create environment

```bash
conda env create --file environment.yml
conda activate nerf-ddp
```
## Training (use all available GPUs by default)

```bash
python ddp_train_nerf.py --config configs/tanks_and_temples/tat_training_truck.txt
```
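The training script uses every visible GPU. To restrict it to a subset, the standard `CUDA_VISIBLE_DEVICES` environment variable can be set (a general CUDA/PyTorch mechanism, not a flag of this repository), e.g.:

```bash
# Train on GPUs 0 and 1 only.
CUDA_VISIBLE_DEVICES=0,1 python ddp_train_nerf.py --config configs/tanks_and_temples/tat_training_truck.txt
```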
## Testing (use all available GPUs by default)

```bash
python ddp_test_nerf.py --config configs/tanks_and_temples/tat_training_truck.txt \
                        --render_splits test,camera_path
```