📝 updating the documentation

parent 0acee99124
commit cc9eb2cb4e
1 changed file with 8 additions and 3 deletions

FORK.md

@@ -21,7 +21,7 @@ pip3 install -r requirements.txt
You will also need [COLMAP](https://colmap.github.io/).
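If COLMAP is not installed yet, a sketch of the quickest route is below; it assumes Ubuntu/Debian with COLMAP available from the system repositories, otherwise follow the build instructions on the COLMAP site linked above.

```
# Assumption: Ubuntu/Debian with COLMAP packaged in the system repositories;
# on other platforms, build from source as described on https://colmap.github.io/
sudo apt update
sudo apt install colmap

# Sanity check: list the available COLMAP commands
colmap help
```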

## How to run

### Dataset

First you need to create or find a dataset: a large set of images (at least 30,
more if you want a 360 degree reconstruction).
In order to maximise the quality of the reconstruction it is recommended to take

@@ -34,6 +34,7 @@ with tools like ImageMagick (mogrify or convert)
mogrify -resize 800 *.jpg
```
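Since `mogrify` resizes the images in place, a non-destructive variant using `convert` (also mentioned above) could look like the following sketch; the `resized` folder name is just an illustration.

```
# Keep the originals and write 800-pixel-wide copies into ./resized
mkdir -p resized
for f in *.jpg; do
    convert "$f" -resize 800 "resized/$f"
done
```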

### Camera pose estimation

Then use the COLMAP wrapper `colmap_runner/run_colmap.py`.

First change the two lines corresponding to input and output at the end of the

@@ -46,8 +47,7 @@ python3 run_colmap.py
```

Then you will need to use the `format_dataset.py` script to transform the
-wrapper COLMAP binary format data into the datastructure requirements by
-NeRF++.
+wrapper COLMAP binary format data into the data structure required by NeRF++.

You again need to change the `input_path` and `output_path`.

@@ -55,6 +55,11 @@ You again need to change the `input_path` and `output_path`.
python3 format_dataset.py
```

Before training you can visualise your camera poses using the
`camera_visualizer/visualize_cameras.py` script.
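As with the other helper scripts, you will most likely have to point `visualize_cameras.py` at your formatted dataset by editing the path inside it first. The invocation below is only a sketch; the `open3d` dependency is an assumption, not something stated in this document.

```
# Assumption: the visualiser relies on open3d for the interactive 3D view
pip3 install open3d

# Run from the repository root once the dataset path inside the script points
# at the output of format_dataset.py
python3 camera_visualizer/visualize_cameras.py
```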

### Training the model

You then need to create the configuration file; copying the example in
`configs` and tweaking the values to your needs is recommended. Refer to the
help inside `ddp_train_nerf.py` if you need to understand a parameter.
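A training run would then look something like the sketch below; `configs/my_scene.txt` is a placeholder for whichever configuration file you created, and the exact flags should be checked against the help in `ddp_train_nerf.py`.

```
# Point --config at the configuration file created from the example in configs/
python3 ddp_train_nerf.py --config configs/my_scene.txt
```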