---
dataset_info:
  features:
  - name: sample_key
    dtype: string
  - name: vid0_thumbnail
    dtype: image
  - name: vid1_thumbnail
    dtype: image
  - name: videos
    dtype: string
  - name: action
    dtype: string
  - name: action_name
    dtype: string
  - name: action_description
    dtype: string
  - name: source_dataset
    dtype: string
  - name: sample_hash
    dtype: int64
  - name: retrieval_frames
    dtype: string
  - name: differences_annotated
    dtype: string
  - name: differences_gt
    dtype: string
  - name: split
    dtype: string
  splits:
  - name: test
    num_bytes: 15523770.0
    num_examples: 557
  download_size: 6621934
  dataset_size: 15523770.0
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
---

# Dataset card for VidDiff benchmark

This is the dataset for the preprint "Video Action Differencing". The paper is under review, so we do not link to it and we release this dataset anonymously. If you need to contact us, you can find the author contact info by searching for the paper on arxiv.

Getting the dataset requires a few steps, because the videos come from different sources: first use the Hugging Face Hub to get the video filenames and annotations, then download the videos from their original sources, and finally run an extra script to load the videos into the dataset.

## Getting the data - annotations

Everything except the videos is available from the hub like this:

```
from datasets import load_dataset

repo_name = "viddiff/VidDiffBench"
dataset = load_dataset(repo_name)
```

## Get extra scripts from this repo

To get the data loading scripts, run:

```
GIT_LFS_SKIP_SMUDGE=1 git clone git@hf.co:datasets/viddiff/VidDiffBench data/
```

This puts some `.py` files in the folder `data/` and skips downloading the larger data files.

## Get the data - videos

We get videos from prior works (which should be cited if you use the benchmark - see the last section). The source dataset for each sample is in the dataset column `source_dataset`.

A few datasets let us redistribute their videos, so you can download those from this HF repo like this:

```
python data/download_data.py
```

This covers the source datasets [HuMMan](https://caizhongang.com/projects/HuMMan/) and [JIGSAWS](https://cirl.lcsr.jhu.edu/research/hmm/datasets/jigsaws_release/).

The other datasets you'll need to download from their original sources. Here's how to do that:

*Download EgoExo4D videos*

Request an access key from the [docs](https://docs.ego-exo4d-data.org/getting-started/) (it takes 48 hours). Then follow the instructions to install the CLI download tool `egoexo`. We only need a small number of these videos, so get the list of uids from `data/egoexo4d_uids.json` and use `egoexo` to download just those:

```
uids=$(jq -r '.[]' data/egoexo4d_uids.json | tr '\n' ' ' | sed 's/ $//')
egoexo -o data/src_EgoExo4D --parts downscaled_takes/448 --uids $uids
```

*Download FineDiving videos*

Follow the instructions in [the repo](https://github.com/xujinglin/FineDiving), download the whole dataset, and set up a link to it at `data/src_FineDiving`, e.g. `ln -s <path_to_FineDiving> data/src_FineDiving`.

## Making the final dataset with videos

Install these packages:

```
pip install numpy Pillow datasets decord lmdb tqdm huggingface_hub
```

Now you can load the dataset, and then load the videos. The dataset splits are organized into 'categories', which are 'fitness', 'ballsports', 'diving', 'music', and 'surgery'.
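If you want to check what you are about to load before downloading any videos, you can inspect the annotations already pulled from the hub (the plain `datasets.load_dataset` call above, not the `data.load_dataset` helper). The snippet below is a minimal sketch, assuming only the columns listed in the dataset config at the top of this card and the single `test` split; it counts how the 557 samples break down by the `split` and `source_dataset` columns, so you can see how many samples come from each source dataset.

```
from collections import Counter
from datasets import load_dataset

# Annotations only -- no videos are downloaded here.
dataset = load_dataset("viddiff/VidDiffBench")

# Count samples per value of the `split` and `source_dataset` columns.
print(Counter(dataset["test"]["split"]))
print(Counter(dataset["test"]["source_dataset"]))
```

Once you know which categories you want, use the helper scripts from `data/` to load them together with the videos.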
For example, to get everything in 'ballsports' and 'diving', run:

```
from data.load_dataset import load_dataset, load_all_videos

dataset = load_dataset(splits=['ballsports', 'diving'], subset_mode="0")
videos = load_all_videos(dataset, cache=True)
```

Here, `videos[0]` and `videos[1]` are lists of length `len(dataset)`. Each sample has two videos to compare, so for sample `i`, video A is `videos[0][i]` and video B is `videos[1][i]`. For video A, the video itself is `videos[0][i]['video']`, a numpy array with shape `(nframes, 3, H, W)`; the fps is in `videos[0][i]['fps']`.

By passing `cache=True` to `load_all_videos`, we create a cache directory at `cache/cache_data/` and save copies of the videos using numpy memmap (the total directory size for the whole dataset is 55GB). Loading and caching the videos takes a few minutes per split, and about 25 minutes for the whole dataset; on subsequent runs it should be fast - a few seconds for the whole dataset.

Finally, you can load just a subset: for example, setting `subset_mode="3_per_action"` takes 3 video pairs per action.

## License

The annotations and all other non-video metadata are released under an MIT license. The videos retain the licenses of the original dataset creators, and the source dataset is given in the dataset column `source_dataset`:

- EgoExo4D: the license is online at [this link](https://ego4d-data.org/pdfs/Ego-Exo4D-Model-License.pdf)
- JIGSAWS: release notes are at [this link](https://cirl.lcsr.jhu.edu/research/hmm/datasets/jigsaws_release/)
- HuMMan: uses the "S-Lab License 1.0" at [this link](https://caizhongang.com/projects/HuMMan/license.txt)
- FineDiving: uses [this MIT license](https://github.com/xujinglin/FineDiving/blob/main/LICENSE)

## Citation

This is an anonymous dataset while the paper is under review. If you use it, please look for the bibtex citation by finding the paper on arxiv under "Video Action Differencing".

```
(google the paper "Video Action Differencing" to cite)
```

Please also cite the original source datasets.
This is all of them, as taken from their own websites or google scholar:

```
@inproceedings{cai2022humman,
  title={{HuMMan}: Multi-modal 4d human dataset for versatile sensing and modeling},
  author={Cai, Zhongang and Ren, Daxuan and Zeng, Ailing and Lin, Zhengyu and Yu, Tao and Wang, Wenjia and Fan, Xiangyu and Gao, Yang and Yu, Yifan and Pan, Liang and Hong, Fangzhou and Zhang, Mingyuan and Loy, Chen Change and Yang, Lei and Liu, Ziwei},
  booktitle={17th European Conference on Computer Vision, Tel Aviv, Israel, October 23--27, 2022, Proceedings, Part VII},
  pages={557--577},
  year={2022},
  organization={Springer}
}

@inproceedings{parmar2022domain,
  title={Domain Knowledge-Informed Self-supervised Representations for Workout Form Assessment},
  author={Parmar, Paritosh and Gharat, Amol and Rhodin, Helge},
  booktitle={Computer Vision--ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23--27, 2022, Proceedings, Part XXXVIII},
  pages={105--123},
  year={2022},
  organization={Springer}
}

@inproceedings{grauman2024ego,
  title={Ego-exo4d: Understanding skilled human activity from first-and third-person perspectives},
  author={Grauman, Kristen and Westbury, Andrew and Torresani, Lorenzo and Kitani, Kris and Malik, Jitendra and Afouras, Triantafyllos and Ashutosh, Kumar and Baiyya, Vijay and Bansal, Siddhant and Boote, Bikram and others},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={19383--19400},
  year={2024}
}

@inproceedings{gao2014jhu,
  title={Jhu-isi gesture and skill assessment working set (jigsaws): A surgical activity dataset for human motion modeling},
  author={Gao, Yixin and Vedula, S Swaroop and Reiley, Carol E and Ahmidi, Narges and Varadarajan, Balakrishnan and Lin, Henry C and Tao, Lingling and Zappella, Luca and B{\'e}jar, Benjam{\'\i}n and Yuh, David D and others},
  booktitle={MICCAI workshop: M2cai},
  volume={3},
  number={2014},
  pages={3},
  year={2014}
}
```