From 7aee03e7ce6b1984d012a0a2bc2a149fd89bad39 Mon Sep 17 00:00:00 2001
From: Kai Zhang
Date: Sat, 20 Mar 2021 22:04:12 -0400
Subject: [PATCH] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index dd06a57..f49659d 100644
--- a/README.md
+++ b/README.md
@@ -66,7 +66,7 @@ You can use the scripts inside `colmap_runner` to generate camera parameters fro
 * Distortion-free images are inside `out_dir/posed_images/images`.
 * Raw COLMAP intrinsics and poses are stored as a json file `out_dir/posed_images/kai_cameras.json`.
 * Normalized cameras are stored in `out_dir/posed_images/kai_cameras_normalized.json`. See the **Scene normalization method** in the **Data** section.
-* Split distortion-free images and `kai_cameras_normalized.json` according to your need.
+* Split distortion-free images and `kai_cameras_normalized.json` according to your needs. You may find the self-explanatory `data_loader_split.py` helpful when converting the json file to a data format compatible with NeRF++.
 
 ## Visualize cameras in 3D
 Check `camera_visualizer/visualize_cameras.py` for visualizing cameras in 3D. It creates an interactive viewer for you to inspect whether your cameras have been normalized to be compatible with this codebase. Below is a screenshot of the viewer: green cameras are used for training, blue ones are for testing, while yellow ones denote a novel camera path to be synthesized; red sphere is the unit sphere.
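The patched bullet suggests splitting `kai_cameras_normalized.json` into train/test sets before feeding it to NeRF++. A minimal sketch of one way to do that split is below; `split_cameras` and the every-Nth-frame test split are hypothetical choices of mine (not from `data_loader_split.py`), and the code assumes the json file maps image filenames to per-camera parameters — check the actual schema in `out_dir/posed_images/kai_cameras_normalized.json` before relying on it.

```python
import json

def split_cameras(json_path, test_every=8):
    """Split a camera json into train/test dicts.

    Assumption (verify against your file): the json is a flat mapping
    of image filename -> camera parameters, as written by colmap_runner.
    Every `test_every`-th image (in sorted filename order) goes to test.
    """
    with open(json_path) as f:
        cameras = json.load(f)
    names = sorted(cameras)
    train = {n: cameras[n] for i, n in enumerate(names) if i % test_every != 0}
    test = {n: cameras[n] for i, n in enumerate(names) if i % test_every == 0}
    return train, test
```

Each returned dict can then be dumped back out as its own json file (and the corresponding distortion-free images copied into matching train/test folders), which mirrors the split the README asks for.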