task_categories:
- image-classification
- image-segmentation
tags:
- fish
- traits
- processed
- RGB
- biology
- image
- animals
- CV
pretty_name: Fish-Vista
size_categories:
- 10K<n<100K
language:
- en
configs:
- config_name: species_classification
data_files:
- split: train
path: classification_train.csv
- split: test
path: classification_test.csv
- split: val
path: classification_val.csv
- config_name: species_trait_identification
data_files:
- split: train
path: identification_train.csv
- split: test_insp
path: identification_test_insp.csv
- split: test_lvsp
path: identification_test_lvsp.csv
- split: val
path: identification_val.csv
- config_name: trait_segmentation
data_files:
- segmentation_data.csv
- segmentation_masks/images/*.png
Dataset Card for Fish-Visual Trait Analysis (Fish-Vista)
- Note that the 'Use this dataset' option will only load the CSV files. To download the entire dataset, including all processed images and segmentation annotations, refer to Instructions for downloading dataset and images.
- See [Example Code to Use the Segmentation Dataset](https://huggingface.co/datasets/imageomics/fish-vista#example-code-to-use-the-segmentation-dataset)
Instructions for downloading dataset and images
- Install Git LFS
- Clone the fish-vista repository by running the following commands in a terminal:
git clone https://huggingface.co/datasets/imageomics/fish-vista
cd fish-vista
- Run the following commands to move all chunked images to a single directory:
mkdir AllImages
find Images -type f -exec mv -v {} AllImages \;
rm -rf Images
mv AllImages Images
You should now have all the images in the Images directory.
- Install the packages in requirements.txt:
pip install -r requirements.txt
- Run the following command to download and process the copyrighted images:
python download_and_process_nd_images.py --save_dir Images
- This will download and process the CC-BY-ND images that we do not provide in the Images folder.
Dataset Structure
/dataset/
    segmentation_masks/
        annotations/
        images/
    Images/
        chunk_1/
            filename 1
            filename 2
            ...
            filename 10k
        chunk_2/
            filename 1
            filename 2
            ...
            filename 10k
        ...
        chunk_6/
            filename 1
            filename 2
            ...
            filename 10k
    ND_Processing_Files
    download_and_process_nd_images.py
    classification_train.csv
    classification_test.csv
    classification_val.csv
    identification_train.csv
    identification_test_insp.csv
    identification_test_lvsp.csv
    identification_val.csv
    segmentation_data.csv
    segmentation_train.csv
    segmentation_test.csv
    segmentation_val.csv
    metadata/
        figures/  # figures included in README
        data-bib.bib
Data Instances
Species Classification (FV-419):
classification_<split>.csv
- Approximately 48K images of 419 species for species classification tasks.
- There are about 35K training, 7.6K test, and 5K validation images.
Trait Identification (FV-682):
identification_<split>.csv
- Approximately 53K images of 682 species for trait identification based on species-level trait labels (i.e., presence/absence of traits for each species, derived from information provided by Phenoscape and FishBase).
- About 38K training, 8K test_insp (species present in the training set), 1.9K test_lvsp (species not present in training), and 5.2K validation images.
- Train, test, and validation splits are generated based on traits, so there are 628 species in train, 450 species in test_insp, 51 species in test_lvsp, and 451 species in the validation set (3 species appear only in val).
Trait Segmentation (FV-1200):
segmentation_<split>.csv
- Pixel-level annotations of 9 different traits for 2,427 fish images.
- About 1.7K training, 600 test, and 120 validation images for the segmentation task.
- These are also used as a manually annotated test set for Trait Identification.
All Segmentation Data:
segmentation_data.csv
- Essentially a collation of the trait segmentation splits
- Used for evaluating trait identification on the entire FV-1200
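As a quick sanity check, the split CSVs described above can be read directly with pandas; the following is a minimal sketch, assuming it is run from a clone of the repository:
import pandas as pd

# Load the species classification splits and report their sizes.
splits = {name: pd.read_csv(f"classification_{name}.csv") for name in ["train", "test", "val"]}
for name, df in splits.items():
    print(name, len(df), "images,", df["standardized_species"].nunique(), "species")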
Image Information
- Type: JPG
- Size (x pixels by y pixels): Variable
- Background (color or none): Uniform (White)
Data Fields
CSV columns are as follows:
- filename: Unique filename for our processed images.
- source_filename: Filename of the source image. Non-unique, since one source filename can result in multiple crops in our processed dataset.
- original_format: Original format, all jpg/jpeg.
- arkid: ARKID from Fish-AIR for the original images. Non-unique, since one source file can result in multiple crops in our processed dataset.
- family: Taxonomic family.
- source: Source museum collection: GLIN, iDigBio, or Morphbank.
- owner: Owner institution within the source collection.
- standardized_species: Open-Tree-Taxonomy-resolved species name. This is the species name that we provide for Fish-Vista.
- original_url: URL to download the original, unprocessed image.
- file_name: Links to the image inside the repository. Necessary for the HF data viewer. Not to be confused with filename.
- license: License information for the original image.
- adipose_fin: Presence/absence of the adipose fin trait, used for trait identification. NA for the classification (FV-419) dataset. 1 indicates presence and 0 indicates absence.
- pelvic_fin: Presence/absence of the pelvic fin trait, used for trait identification. NA for the classification (FV-419) dataset. 1 indicates presence and 0 indicates absence.
- barbel: Presence/absence of the barbel trait, used for trait identification. NA for the classification (FV-419) dataset. 1 indicates presence and 0 indicates absence.
- multiple_dorsal_fin: Presence/absence of the multiple dorsal fin trait, used for trait identification. NA for the classification (FV-419) dataset. 1 indicates presence, 0 indicates absence, and -1 indicates unknown.
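As an illustration of how the trait columns can be used, the sketch below filters the identification training split by trait presence; it assumes only the column semantics described above (1 = present, 0 = absent, -1 = unknown):
import pandas as pd

# Load the trait identification training split.
df = pd.read_csv("identification_train.csv")

# Trait columns use 1 = present, 0 = absent, -1 = unknown (multiple_dorsal_fin only).
with_adipose = df[df["adipose_fin"] == 1]
without_barbel = df[df["barbel"] == 0]
known_dorsal = df[df["multiple_dorsal_fin"].isin([0, 1])]  # drop the -1 "unknown" rows

print(len(with_adipose), len(without_barbel), len(known_dorsal))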
Data Splits
For each task (or subset), the split is indicated by the CSV name (e.g., classification_<split>.csv). More information is provided in Data Instances, above.
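The configs defined in the YAML header can also be loaded with the Hugging Face datasets library; note that, as mentioned above, this loads only the CSV rows, not the images. A minimal sketch:
from datasets import load_dataset

# Load the species classification config; only the CSV metadata is loaded, not the images.
classification = load_dataset("imageomics/fish-vista", "species_classification")
print(classification)  # DatasetDict with train/test/val splits

# The trait identification config exposes train, test_insp, test_lvsp, and val splits.
identification_val = load_dataset("imageomics/fish-vista", "species_trait_identification", split="val")
print(identification_val[0])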
Example Code to Use the Segmentation Dataset
We provide example code for using the FV-1200 segmentation dataset for the convenience of users. Please install pillow, numpy, pandas, and matplotlib before running the code:
from PIL import Image
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import json
# Set fish_vista_repo_dir to the path of your cloned fish-vista HF repository. This code assumes you are running from within the fish-vista directory
fish_vista_repo_dir = '.'
# segmentation_masks/images contains the annotated segmentation maps for the traits.
# If image filename is <image_filename>.jpg, the corresponding annotation is contained in segmentation_masks/images/<image_filename>.png
seg_mask_path = os.path.join(fish_vista_repo_dir, 'segmentation_masks/images')
# seg_id_trait_map.json maps the annotation id to the corresponding trait name.
# For example, pixels annotated with 1 correspond to the trait: 'Head'
id_trait_map_file = os.path.join(fish_vista_repo_dir, 'segmentation_masks/seg_id_trait_map.json')
with open(id_trait_map_file, 'r') as f:
id_trait_map = json.load(f)
# Read a segmentation csv file
train_path = os.path.join(fish_vista_repo_dir, 'segmentation_train.csv')
train_df = pd.read_csv(train_path)
# Get image and segmentation mask of image at index 'idx'
idx = 0
img_filename = train_df.iloc[idx].filename
img_mask_filename = os.path.splitext(img_filename)[0]+'.png'
# Load and view the mask
img_mask = Image.open(os.path.join(seg_mask_path, img_mask_filename))
plt.imshow(img_mask)
plt.show()
# List the traits that are present in this image
img_mask_arr = np.asarray(img_mask)
print([id_trait_map[str(trait_id)] for trait_id in np.unique(img_mask_arr)])
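Building on the example above, a small illustrative extension (using the same variables) turns the mask into per-trait statistics and a binary mask for a single trait:
# Count annotated pixels per trait in the loaded mask (continues from the code above).
trait_pixel_counts = {
    id_trait_map[str(trait_id)]: int((img_mask_arr == trait_id).sum())
    for trait_id in np.unique(img_mask_arr)
    if str(trait_id) in id_trait_map
}
print(trait_pixel_counts)

# Binary mask for a single trait id, e.g., 1 ('Head' per seg_id_trait_map.json).
head_mask = (img_mask_arr == 1).astype(np.uint8)
plt.imshow(head_mask, cmap="gray")
plt.show()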
Dataset Details
Dataset Description
The Fish-Visual Trait Analysis (Fish-Vista) dataset is a large, annotated collection of 60K fish images spanning 1,900 different species; it supports several challenging and biologically relevant tasks, including species classification, trait identification, and trait segmentation. These images have been curated through a sophisticated data processing pipeline applied to a cumulative set of images obtained from various museum collections. Fish-Vista provides fine-grained labels of various visual traits present in each image. It also offers pixel-level annotations of 9 different traits for 2,427 fish images, facilitating additional trait segmentation and localization tasks.
The Fish-Vista dataset consists of museum fish images from the Great Lakes Invasives Network (GLIN), iDigBio, and Morphbank databases. We acquired these images, along with associated metadata including the scientific species names, the taxonomic family each species belongs to, and licensing information, from the Fish-AIR repository.
Supported Tasks and Leaderboards
Figure 2. Comparison of the fine-grained classification performance of different imbalanced classification methods.
Languages
English
Dataset Creation
Curation Rationale
Fishes are integral to both ecological systems and economic sectors, and studying fish traits is crucial for understanding biodiversity patterns and macro-evolutionary trends. Currently available fish datasets tend to focus on species classification and lack finer-grained trait labels. When segmentation annotations are available in existing datasets, they tend to cover the entire specimen, allowing segmentation of the fish from the background but not segmentation of individual traits. The ultimate goal of Fish-Vista is to provide a clean, carefully curated, high-resolution dataset that can serve as a foundation for accelerating biological discoveries using advances in AI.
Source Data
Images and taxonomic labels were aggregated by Fish-AIR from
- Great Lakes Invasives Network (GLIN)
- iDigBio
- Morphbank
- Illinois Natural History Survey (INHS)
- Minnesota Biodiversity Atlas, Bell Museum
- University of Michigan Museum of Zoology (UMMZ), Division of Fishes
- University of Wisconsin-Madison Zoological Museum - Fish
- Field Museum of Natural History (Zoology, FMNH) Fish Collection
- The Ohio State University Fish Division, Museum of Biological Diversity (OSUM), Occurrence dataset
Phenoscape and FishBase were used to provide the information on traits at the species level.
Open Tree Taxonomy was used to standardize the species names provided by Fish-AIR.
Data Collection and Processing
We carefully curated a set of 60K images sourced from various museum collections through Fish-AIR, including Great Lakes Invasives Network (GLIN), iDigBio, and Morphbank. Our pipeline incorporates rigorous stages such as duplicate removal, metadata-driven filtering, cropping, background removal using the Segment Anything Model (SAM), and a final manual filtering phase. Fish-Vista supports several biologically meaningful tasks such as species classification, trait identification, and trait segmentation.
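For readers curious about the background-removal step, the sketch below shows one way to composite a SAM-predicted foreground onto a white background. It is an illustration only, not the exact Fish-Vista pipeline; the segment_anything checkpoint path, input filename, and the center-point prompt are assumptions.
import numpy as np
from PIL import Image
from segment_anything import SamPredictor, sam_model_registry

# Hypothetical checkpoint path; download a SAM checkpoint separately.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth")
predictor = SamPredictor(sam)

image = np.array(Image.open("fish.jpg").convert("RGB"))  # hypothetical input image
predictor.set_image(image)

# Prompt SAM with a single point at the image center, assuming the fish is roughly centered.
h, w = image.shape[:2]
masks, scores, _ = predictor.predict(
    point_coords=np.array([[w // 2, h // 2]]),
    point_labels=np.array([1]),
    multimask_output=True,
)
best_mask = masks[np.argmax(scores)]

# Composite the predicted foreground onto a uniform white background.
white = np.full_like(image, 255)
result = np.where(best_mask[..., None], image, white)
Image.fromarray(result.astype(np.uint8)).save("fish_processed.jpg")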
Annotations
Annotation process
Phenoscape and FishBase were used to provide the information on species-level traits (the species-trait matrix).
Open Tree Taxonomy was used to standardize the species names provided by Fish-AIR.
Image-level trait segmentations were manually annotated as described below.
The annotation process for the segmentation subset was led by Wasila Dahdul. She provided guidance and oversight to a team of three people from NEON, who used CVAT to label nine external traits in the images. These traits correspond to the following terms for anatomical structures in the UBERON anatomy ontology:
- Eye, UBERON_0000019
- Head, UBERON_0000033
- Barbel, UBERON_2000622
- Dorsal fin, UBERON_0003097
- Adipose fin, UBERON_2000251
- Pectoral fin, UBERON_0000151
- Pelvic fin, UBERON_0000152
- Anal fin, UBERON_4000163
- Caudal fin, UBERON_4000164
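For downstream analyses, it can be useful to relate the segmentation labels to these UBERON identifiers. The sketch below builds that mapping; the trait-name strings are assumed to match those stored in seg_id_trait_map.json.
import json

# Assumed mapping from trait names (as used in seg_id_trait_map.json) to the UBERON IDs listed above.
TRAIT_TO_UBERON = {
    "Eye": "UBERON_0000019",
    "Head": "UBERON_0000033",
    "Barbel": "UBERON_2000622",
    "Dorsal fin": "UBERON_0003097",
    "Adipose fin": "UBERON_2000251",
    "Pectoral fin": "UBERON_0000151",
    "Pelvic fin": "UBERON_0000152",
    "Anal fin": "UBERON_4000163",
    "Caudal fin": "UBERON_4000164",
}

with open("segmentation_masks/seg_id_trait_map.json") as f:
    id_trait_map = json.load(f)

# Map each numeric segmentation id to its UBERON term (ids whose names are not in the table are skipped).
id_to_uberon = {i: TRAIT_TO_UBERON[name] for i, name in id_trait_map.items() if name in TRAIT_TO_UBERON}
print(id_to_uberon)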
Personal and Sensitive Information
None
Considerations for Using the Data
Discussion of Biases and Other Known Limitations
- This dataset is imbalanced and long-tailed.
- It inherits biases inherent to museum images.
- Train sets may contain a very small number of noisy images.
Recommendations
Licensing Information
The source images in our dataset come with various licenses, mostly within the Creative Commons family. We provide license and citation information, including the source institution for each image, in our metadata CSV files available in the HuggingFace repository. Additionally, we attribute each image to the original FishAIR URL from which it was downloaded.
A small subset of our images (approximately 1,000) from iDigBio are licensed under CC-BY-ND, which prohibits us from distributing processed versions of these images. Therefore, we do not publish these images in the repository. Instead, we provide the URLs for downloading the original images and a processing script that can be applied to obtain the processed versions we use.
Our dataset is licensed under CC-BY-NC 4.0. However, individual images within our dataset may have different licenses, which are specified in our CSV files.
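Because per-image licenses vary, users who need a particular subset (for example, excluding NoDerivatives-licensed images) can filter on the license column of any split CSV. The exact license strings are assumptions; inspect the values first, as in this sketch:
import pandas as pd

df = pd.read_csv("classification_train.csv")

# Inspect the license values actually present, then filter as needed.
print(df["license"].value_counts())

# Example: keep only rows whose license string does not mention "ND" (NoDerivatives).
# The exact license strings are assumptions; check the value_counts() output above.
permissive = df[~df["license"].astype(str).str.contains("ND", case=False, na=False)]
print(len(permissive), "of", len(df), "rows retained")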
Citation
If you use Fish-Vista in your research, please cite both our paper and the dataset. Please be sure to also cite the original data sources using the citations provided in metadata/data-bib.bib.
BibTeX:
Paper
@misc{mehrab2024fishvista,
title={Fish-Vista: A Multi-Purpose Dataset for Understanding & Identification of Traits from Images},
author={Kazi Sajeed Mehrab and M. Maruf and Arka Daw and Harish Babu Manogaran and Abhilash Neog and Mridul Khurana and Bahadir Altintas and Yasin Bakis and Elizabeth G Campolongo and Matthew J Thompson and Xiaojun Wang and Hilmar Lapp and Wei-Lun Chao and Paula M. Mabee and Henry L. Bart Jr. and Wasila Dahdul and Anuj Karpatne},
year={2024},
eprint={2407.08027},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2407.08027},
}
Data
@dataset{fishvistaData,
title = {Fish-Vista: A Multi-Purpose Dataset for Understanding & Identification of Traits from Images},
author = {Kazi Sajeed Mehrab and M. Maruf and Arka Daw and Harish Babu Manogaran and Abhilash Neog and Mridul Khurana and Bahadir Altintas and Yasin Bakış and Elizabeth G Campolongo and Matthew J Thompson and Xiaojun Wang and Hilmar Lapp and Wei-Lun Chao and Paula M. Mabee and Henry L. Bart Jr. and Wasila Dahdul and Anuj Karpatne},
year = {2024},
url = {https://huggingface.co/datasets/imageomics/fish-vista},
doi = {10.57967/hf/3471},
publisher = {Hugging Face}
}
Acknowledgements
This work was supported by the Imageomics Institute, which is funded by the US National Science Foundation's Harnessing the Data Revolution (HDR) program under Award #2118240 (Imageomics: A New Frontier of Biological Information Powered by Knowledge-Guided Machine Learning). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. We would like to thank Shelley Riders, Jerry Tatum, and Cesar Ortiz for segmentation data annotation.
Glossary
More Information
Dataset Card Authors
Kazi Sajeed Mehrab and Elizabeth G. Campolongo