NeRF++

Codebase for the arXiv preprint "NeRF++: Analyzing and Improving Neural Radiance Fields":

  • Works with 360° captures of large-scale unbounded scenes.
  • Supports multi-GPU training and inference with PyTorch DistributedDataParallel (DDP).
  • Optimizes per-image autoexposure (experimental feature).

Data

  • Download our preprocessed data from tanks_and_temples and lf_data.
  • Put the data in the sub-folder data/ of this code directory.
  • Data format.
    • Each scene consists of 3 splits: train/test/validation.
    • Intrinsics and camera poses are stored as flattened 4x4 matrices (see the loading sketch below).
    • The OpenCV camera coordinate convention is adopted, i.e., x ---> right, y ---> down, z ---> into the scene.
    • Scene normalization: move the average camera center to the origin, and place all camera centers inside the unit sphere.
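
A minimal sketch of reading one of these matrix files, assuming each file stores its 4x4 matrix as 16 whitespace-separated numbers in row-major order (the exact file layout and the paths below are assumptions for illustration; data_loader_split.py is the authoritative parser):

import numpy as np

def load_matrix(path):
    # Assumed format: 16 whitespace-separated floats, row-major 4x4 matrix.
    return np.loadtxt(path).reshape(4, 4)

# Hypothetical example paths; substitute your own scene/split/file names.
K = load_matrix('data/my_scene/train/intrinsics/000000.txt')   # camera intrinsics
c2w = load_matrix('data/my_scene/train/pose/000000.txt')       # camera-to-world pose
cam_center = c2w[:3, 3]  # after scene normalization this lies inside the unit sphere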

Create environment

conda env create --file environment.yml
conda activate nerf-ddp
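
After activating the environment, a quick sanity check that PyTorch can see your GPUs (the training and testing scripts use all visible GPUs by default):

python -c "import torch; print(torch.__version__, torch.cuda.device_count())"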

Training (Use all available GPUs by default)

python ddp_train_nerf.py --config configs/tanks_and_temples/tat_training_truck.txt
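
Because all visible GPUs are used by default, you can restrict training to a subset by limiting GPU visibility. This uses the standard CUDA_VISIBLE_DEVICES environment variable rather than a flag of this script; e.g. to train on GPUs 0 and 1 only:

CUDA_VISIBLE_DEVICES=0,1 python ddp_train_nerf.py --config configs/tanks_and_temples/tat_training_truck.txt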

Testing (Use all available GPUs by default)

python ddp_test_nerf.py --config configs/tanks_and_temples/tat_training_truck.txt \
                        --render_splits test,camera_path
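
--render_splits takes a comma-separated list of splits, so rendering a single split is just a shorter list, e.g. only the held-out test views:

python ddp_test_nerf.py --config configs/tanks_and_temples/tat_training_truck.txt \
                        --render_splits test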