# NeRF++
Codebase for the paper "NeRF++: Analyzing and Improving Neural Radiance Fields":
* Works with 360° captures of large-scale unbounded scenes.
* Supports multi-GPU training and inference.
* Optimizes per-image autoexposure (ongoing work).
## Data
* Download our preprocessed data from [tanks_and_temples](https://drive.google.com/file/d/11KRfN91W1AxAW6lOFs4EeYDbeoQZCi87/view?usp=sharing), [lf_data](https://drive.google.com/file/d/1gsjDjkbTh4GAR9fFqlIDZ__qR9NYTURQ/view?usp=sharing).
* Put the data in the `data/` sub-folder of this code directory.
* Data format:
    * Each scene consists of 3 splits: train/test/validation.
    * Intrinsics and poses are stored as flattened 4x4 matrices.
    * The OpenCV camera coordinate system is adopted, i.e., x points right, y points down, z points into the scene.
    * Scene normalization: the average camera center is moved to the origin, and all camera centers are placed inside the unit sphere.
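The flattened-matrix storage and scene normalization above can be sketched in a few lines of NumPy. This is an illustration only: the function names and the row-major reshape are assumptions, not the repository's actual loader.

```python
import numpy as np

def load_pose(flat16):
    """Reshape one flattened 4x4 pose (16 floats) into a matrix.

    Row-major ordering is assumed here for illustration.
    """
    return np.asarray(flat16, dtype=np.float64).reshape(4, 4)

def normalize_scene(poses):
    """Recenter and rescale camera-to-world poses of shape (N, 4, 4).

    Moves the average camera center to the origin and scales so that
    every camera center lies strictly inside the unit sphere, matching
    the normalization described above.
    """
    poses = poses.copy()
    centers = poses[:, :3, 3]            # translation column = camera center
    centers -= centers.mean(axis=0)      # average center -> origin
    radius = np.linalg.norm(centers, axis=1).max()
    if radius > 0:
        centers /= radius * 1.0001       # shrink slightly: strictly inside the sphere
    return poses
```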
## Create environment
```bash
conda env create --file environment.yml
```
## Training (uses all available GPUs by default)
```bash
python ddp_train_nerf.py --config configs/tanks_and_temples/tat_training_truck.txt
```
## Testing (uses all available GPUs by default)
```bash
python ddp_test_nerf.py --config configs/tanks_and_temples/tat_training_truck.txt \
    --render_splits test,camera_path
```