# Why this fork

This is a fork of the original NeRF++ implementation. The original code has
some issues I needed to fix in order to get the algorithm working and to be
able to reproduce the authors' work.

Hopefully this version is better explained, comes with the necessary scripts,
and is free of those bugs, so that someone can train on a dataset from start
to finish without having to delve into the code if they do not want to.

## How to install

Create a virtual env if you want to:

```
python3 -m venv env
source env/bin/activate
```

Install the needed dependencies:

```
pip3 install -r requirements.txt
```

You will also need [COLMAP](https://colmap.github.io/).

## How to run

First you need to create or find a dataset: a large set of images (at least
30, more if you want a 360 degree reconstruction).
In order to maximise the quality of the reconstruction it is recommended to
take pictures under the same illumination and from different angles around the
same subject (take a step between each picture, do not only rotate).
Remember that higher quality pictures can always be resized later if needed
with tools like ImageMagick (mogrify or convert):

```
mogrify -resize 800 *.jpg
```

Then use the COLMAP wrapper script `colmap_runner/run_colmap.py`.

First change the two lines corresponding to input and output at the end of the
script: `img_dir` is your dataset and `output_dir` is the output. If your
COLMAP binary is not located at `/usr/local/bin/colmap`, also change that.

```
cd colmap_runner
python3 run_colmap.py
```
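
For illustration, the edited lines at the end of `run_colmap.py` might look
like this (the paths are placeholders, not from the original script; only
`img_dir` and `output_dir` are actual names it uses):

```python
# Example values only -- point these at your own folders.
img_dir = '/home/me/datasets/lupo/images'     # folder containing your photos
output_dir = '/home/me/datasets/lupo/colmap'  # where the COLMAP output goes
```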

Then you will need to use the `format_dataset.py` script to transform the
COLMAP binary format data into the data structure required by NeRF++.

You again need to change the `input_path` and `output_path`.

```
python3 format_dataset.py
```
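
As a sanity check after formatting, you can verify that the output folder has
the per-split layout NeRF++ reads. The folder names below (`rgb`, `pose`,
`intrinsics` under each split) are my understanding of the NeRF++ data
structure; verify them against your version of the code:

```python
import os

def check_nerfpp_layout(scene_dir):
    """Return the expected NeRF++ sub-folders that are missing."""
    missing = []
    for split in ('train', 'test'):
        for sub in ('rgb', 'pose', 'intrinsics'):
            path = os.path.join(scene_dir, split, sub)
            if not os.path.isdir(path):
                missing.append(path)
    return missing

# Hypothetical output path -- use your own `output_path` here.
print(check_nerfpp_layout('./data/lupo'))
```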

You then need to create the configuration file. Copying an example from
`configs` and tweaking the values to your needs is recommended. Refer to the
help inside `ddp_train_nerf.py` if you need to understand a parameter.

```
python3 ddp_train_nerf.py --config configs/lupo/training_lupo.txt
```
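
A configuration file is a plain text file of `key = value` lines; a
hypothetical fragment might look like the following (the key names and values
here are illustrative only, check the shipped examples in `configs` and the
help in `ddp_train_nerf.py` for the real ones):

```
# hypothetical fragment -- compare with the examples in configs/
expname = lupo
basedir = ./logs
datadir = ./data
N_rand = 1024
```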

Your training should now be running. If you want to visualise the training in
real time with tensorboard, you can use:

```
tensorboard --logdir logs --host localhost
```

Then open the given socket (`ip:port`) in your browser. It should be
`localhost:6006`.