# Deblur e-NeRF Synthetic Event Dataset
This repository contains the synthetic event dataset used in Deblur e-NeRF to study the collective effect of camera speed and scene illuminance on the quality of NeRF reconstruction from a moving event camera. It is an extension of the synthetic event dataset used in Robust e-NeRF. The dataset is simulated using an improved version of ESIM with three different camera configurations of increasing difficulty levels (i.e. easy, medium and hard) on seven Realistic Synthetic 360 scenes (adopted in the synthetic experiments of NeRF), resulting in a total of 21 sequence recordings. Please refer to the Deblur e-NeRF paper for more details.
This synthetic event dataset allows for a retrospective comparison between event-based and image-based NeRF reconstruction methods, as the event sequences were simulated under highly similar conditions as the NeRF synthetic dataset. In particular, we adopt the same camera intrinsics and camera distance to the object at the origin. Furthermore, the event camera travels in a hemi-/spherical spiral motion about the object, thereby having a similar camera pose distribution for training. Apart from that, we also use the same test camera poses/views. Nonetheless, this new synthetic event dataset is not only specific to NeRF reconstruction, but also suitable for novel view synthesis, 3D reconstruction, localization and SLAM in general.
If you use this synthetic event dataset for your work, please cite:
@inproceedings{low2024_deblur-e-nerf,
title = {Deblur e-NeRF: NeRF from Motion-Blurred Events under High-speed or Low-light Conditions},
author = {Low, Weng Fei and Lee, Gim Hee},
booktitle = {European Conference on Computer Vision (ECCV)},
year = {2024}
}
## Dataset Structure and Contents
This synthetic event dataset is organized first by scene, then by level of difficulty. Each sequence recording is given in the form of a ROS bag named `esim.bag`, with the following data streams:
| ROS Topic | Data | Publishing Rate (Hz) |
|---|---|---|
| `/cam0/events` | Events | - |
| `/cam0/pose` | Camera pose | 1000 |
| `/imu` | IMU measurements with simulated noise | 1000 |
| `/cam0/image_raw` | RGB image | 250 |
| `/cam0/depthmap` | Depth map | 10 |
| `/cam0/optic_flow` | Optical flow map | 10 |
| `/cam0/camera_info` | Camera intrinsic and lens distortion parameters | 10 |
It is obtained by running the improved ESIM with the associated `esim.conf` configuration file, which references the camera intrinsics configuration files `pinhole_mono_nodistort_f={1111, 1250}.yaml` and the camera trajectory CSV files `{hemisphere, sphere}_spiral-rev=4[...].csv`.
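Each event on the `/cam0/events` stream encodes a per-pixel brightness change as a tuple of pixel coordinates, timestamp and polarity. As a minimal, ROS-free illustration (the toy events and sensor size below are made up, not taken from the dataset), such a stream can be accumulated into a signed polarity frame:

```python
import numpy as np

def accumulate_events(events, height, width):
    """Accumulate (x, y, t, p) events into a signed polarity frame.

    Each event increments (p = +1) or decrements (p = -1) the pixel it
    fired at, yielding a crude visualization of the event stream.
    """
    frame = np.zeros((height, width), dtype=np.int32)
    for x, y, _t, p in events:
        frame[y, x] += 1 if p > 0 else -1
    return frame

# Toy events: (x, y, timestamp in seconds, polarity)
events = [(3, 2, 0.001, 1), (3, 2, 0.002, 1), (5, 4, 0.003, -1)]
frame = accumulate_events(events, height=8, width=8)
```

In practice, the event messages can be read from `esim.bag` with the `rosbag` Python API, e.g. by iterating over `rosbag.Bag('esim.bag').read_messages(topics=['/cam0/events'])`.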
The validation and test views of each scene are given in the `views/` folder, which is structured according to the NeRF synthetic dataset (except for the depth and normal maps). These views are rendered from the scene Blend-files, given in the `scenes/` folder. Specifically, we create a Conda environment with Blender installed as a Python module, according to these instructions, to run the `bpy_render_views.py` Python script for rendering the evaluation views.
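Since the `views/` folder follows the NeRF synthetic dataset layout, the camera parameters of each split can be parsed in the usual NeRF fashion from its `transforms_*.json` file, which records the horizontal field of view (`camera_angle_x`) and per-frame camera-to-world poses. A minimal sketch (the toy dictionary below stands in for an actual file) is:

```python
import math

import numpy as np

def parse_transforms(transforms, image_width):
    """Extract the focal length (in pixels) and the camera-to-world pose
    matrices from a NeRF-synthetic-style transforms dictionary."""
    # Focal length from the horizontal field of view: f = (W / 2) / tan(fov_x / 2)
    focal = 0.5 * image_width / math.tan(0.5 * transforms["camera_angle_x"])
    poses = [np.array(frame["transform_matrix"]) for frame in transforms["frames"]]
    return focal, poses

# Toy stand-in for the contents of a transforms_*.json file
transforms = {
    "camera_angle_x": 0.6911112070083618,  # horizontal FOV used by the NeRF synthetic scenes
    "frames": [
        {"file_path": "./r_0", "transform_matrix": np.eye(4).tolist()},
    ],
}
focal, poses = parse_transforms(transforms, image_width=800)
```

With the standard 800 px image width, this field of view yields a focal length of roughly 1111 px, consistent with the `pinhole_mono_nodistort_f=1111.yaml` intrinsics file referenced above.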
## Setup
- Install Git LFS according to the official instructions.
- Set up Git LFS for your user account with `git lfs install`.
- Clone this dataset repository into the desired destination directory with `git lfs clone https://huggingface.co/datasets/wengflow/deblur-e-nerf`.
- To minimize disk usage, remove the `.git/` folder. Note, however, that this complicates pulling future changes from this upstream dataset repository.