bianly20 committed
Commit 78c0046
1 Parent(s): 0649533

add README

Files changed (2)
  1. README.md +19 -0
  2. dataloader.py +43 -0
README.md ADDED
@@ -0,0 +1,19 @@
+ ## Data Format
+
+ Here we explain the `poses_bounds.npy` file format. This file stores a numpy array of size Nx17, where N is the number of input videos. You can load the data with the following code.
+
+ ```python
+ import os
+
+ import numpy as np
+
+ poses_arr = np.load(os.path.join(basedir, 'poses_bounds.npy'))  # (N, 17)
+ poses = poses_arr[:, :-2].reshape([-1, 3, 5]).transpose([1, 2, 0])  # (3, 5, N)
+ bds = poses_arr[:, -2:].transpose([1, 0])  # (2, N)
+ ```
+
+ Each row of length 17 gets reshaped into a 3x5 pose matrix and 2 depth values that bound the closest and farthest scene content from that point of view.
+
+ The pose matrix is a 3x4 camera-to-world affine transform concatenated with a 3x1 column `[image height, image width, focal length]` to represent the intrinsics (we assume the principal point is centered and that the focal length is the same for both x and y).
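+
+ For example, continuing the loading snippet above, the transform and the intrinsics column of camera `i` can be separated as follows (a minimal sketch; `c2w` and `hwf` are illustrative names):
+
+ ```python
+ i = 0
+ c2w = poses[:, :4, i]  # 3x4 camera-to-world affine transform
+ hwf = poses[:, 4, i]   # [image height, image width, focal length]
+ H, W, focal = hwf
+ ```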
+
+ <big>NOTE: In our dataset, the focal lengths of different cameras are different!</big>
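+
+ One 3x3 intrinsics matrix per camera can therefore be assembled from the per-camera focal lengths (a sketch assuming, as stated above, a centered principal point and equal focal length in x and y):
+
+ ```python
+ Ks = []
+ for i in range(poses.shape[-1]):
+     H, W, focal = poses[:, 4, i]
+     Ks.append(np.array([[focal, 0.0, W / 2],
+                         [0.0, focal, H / 2],
+                         [0.0, 0.0, 1.0]]))
+ ```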
+
+ The right-handed coordinate system of the rotation (the first 3x3 block in the camera-to-world transform) is as follows: from the point of view of the camera, the three axes are `[down, right, backwards]`, which some people might consider to be `[-y,x,z]`, where the camera is looking along `-z`. (The more conventional frame `[x,y,z]` is `[right, up, backwards]`. The COLMAP frame is `[right, down, forwards]` or `[x,-y,-z]`.)
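+
+ As a sketch, a rotation from this dataset can be re-expressed in the COLMAP frame by reordering and negating its axis columns (`R_llff` and `R_colmap` are illustrative names):
+
+ ```python
+ R_llff = poses[:, :3, i]                  # columns: [down, right, backwards]
+ R_colmap = np.stack([R_llff[:, 1],        # right
+                      R_llff[:, 0],        # down
+                      -R_llff[:, 2]], -1)  # forwards = -backwards
+ ```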
+
+ We also provide an example dataloader in `dataloader.py`.
dataloader.py ADDED
@@ -0,0 +1,43 @@
+ import os
+
+ import numpy as np
+ from torch.utils.data import Dataset
+
+
+ class Robo360(Dataset):
+     def __init__(self, datadir, downsample=4):
+         self.root_dir = datadir
+         self.downsample = downsample
+
+         self.read_meta()
+
+     def read_meta(self):
+         poses_bounds = np.load(os.path.join(self.root_dir, 'poses_bounds.npy'))  # (N_cams, 17)
+
+         poses = poses_bounds[:, :15].reshape(-1, 3, 5)  # (N_cams, 3, 5)
+         self.near_fars = poses_bounds[:, -2:]  # (N_cams, 2)
+
+         # Step 1: rescale the focal lengths according to the training resolution.
+         # The last column of each pose is [H, W, focal]; the focal length
+         # differs from camera to camera.
+         H, W, _ = poses[0, :, -1]
+         self.focal = poses[:, -1, -1]  # (N_cams,) per-camera focal lengths
+         self.img_wh = np.array([int(W / self.downsample), int(H / self.downsample)])
+         self.focal = self.focal * self.img_wh[0] / W
+
+         # Step 2: correct the poses.
+         # The original rotations are in [down, right, back] form; convert them
+         # to [right, up, back]. See https://github.com/bmild/nerf/issues/34
+         # Dropping the intrinsics column leaves 3x4 camera-to-world transforms.
+         self.poses = np.concatenate([poses[..., 1:2], -poses[..., :1], poses[..., 2:4]], -1)  # (N_cams, 3, 4)
+
+     def __len__(self):
+         # Placeholder: this example only parses camera metadata.
+         return 0
+
+     def __getitem__(self, idx):
+         # Placeholder: image/video loading is left to the user.
+         return None
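+
+
+ # Example usage (a minimal sketch; the dataset path below is a placeholder):
+ #   dataset = Robo360('/path/to/scene', downsample=4)
+ #   dataset.poses      -> (N_cams, 3, 4) corrected camera-to-world transforms
+ #   dataset.near_fars  -> (N_cams, 2) near/far depth bounds
+ #   dataset.img_wh     -> downsampled [width, height]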