---
license: cc-by-nc-sa-4.0
language:
- en
tags:
- spatial-transcriptomics
- histology
- pathology
task_categories:
- image-classification
- feature-extraction
- image-segmentation
size_categories:
- 100B<n<1T
---

#### What is HEST-1k?

- A collection of 1,108 spatial transcriptomic profiles, each linked and aligned to a Whole Slide Image (pixel size < 1.15 µm/px) and metadata.
- HEST-1k was assembled from 131 public and internal cohorts encompassing:
    - 25 organs
    - 2 species (*Homo sapiens* and *Mus musculus*)
    - 320 cancer samples from 25 cancer types.

HEST-1k processing enabled the identification of 1.5 million expression/morphology pairs and 60 million nuclei.

## Instructions for Setting Up a HuggingFace Account and Token

### 1. Create an Account on HuggingFace

Follow the instructions provided on the [HuggingFace sign-up page](https://huggingface.co/join).

### 2. Accept the terms of use of HEST

1. On this page, click `Request access` (access will be granted automatically).
2. At this stage, you can already inspect the data manually by navigating to the `Files and versions` tab.

### 3. Create a Hugging Face Token

1. **Go to Settings:** Navigate to your profile settings by clicking on your profile picture in the top right corner and selecting `Settings` from the dropdown menu.
2. **Access Tokens:** In the settings menu, find and click on `Access tokens`.
3. **Create New Token:**
   - Click on `New token`.
   - Set the token name (e.g., `hest`).
   - Set the access level to `Write`.
   - Click on `Create`.
4. **Copy Token:** After the token is created, copy it to your clipboard. You will need this token for authentication.

### 4. Log in

Run the following:

```
pip install datasets
```

```python
from huggingface_hub import login

login(token="YOUR HUGGINGFACE TOKEN")
```

#### Download the entire HEST-1k dataset:

```python
import datasets

local_dir = 'hest_data'  # HEST will be downloaded to this folder

# Note that the full dataset is around 1TB of data
dataset = datasets.load_dataset(
    'MahmoodLab/hest',
    cache_dir=local_dir,
    patterns='*'
)
```

#### Download a subset of HEST-1k:

```python
import datasets

local_dir = 'hest_data'  # HEST will be downloaded to this folder

ids_to_query = ['TENX96', 'TENX99']  # list of ids to query
list_patterns = [f"*{id}[_.]**" for id in ids_to_query]

dataset = datasets.load_dataset(
    'MahmoodLab/hest',
    cache_dir=local_dir,
    patterns=list_patterns
)
```

#### Query HEST by organ, technology, oncotree code...

```python
import datasets
import pandas as pd

local_dir = 'hest_data'  # HEST will be downloaded to this folder

meta_df = pd.read_csv("hf://datasets/MahmoodLab/hest/HEST_v1_0_2.csv")

# Filter the dataframe by organ, oncotree code...
meta_df = meta_df[meta_df['oncotree_code'] == 'IDC']
meta_df = meta_df[meta_df['organ'] == 'Breast']

ids_to_query = meta_df['id'].values
list_patterns = [f"*{id}[_.]**" for id in ids_to_query]

dataset = datasets.load_dataset(
    'MahmoodLab/hest',
    cache_dir=local_dir,
    patterns=list_patterns
)
```

## Loading the data with the python library `hest`

Once downloaded, you can easily load the dataset as a `List[HESTData]`:

```python
from hest import load_hest

print('load hest...')
hest_data = load_hest('hest_data')  # location of the data
print('loaded hest')

for d in hest_data:
    print(d)
```

Please visit the [github repo](https://github.com/mahmoodlab/hest) and the [documentation](https://hest.readthedocs.io/en/latest/) for more information about the `hest` library API.

#### Data organization

For each sample:

- `wsis/`: H&E-stained Whole Slide Images in pyramidal Generic TIFF (or pyramidal Generic BigTIFF if >4.1GB)
- `st/`: spatial transcriptomics expressions in a scanpy `.h5ad` object
- `metadata/`: metadata
- `spatial_plots/`: overlay of the WSI with the ST spots
- `thumbnails/`: downscaled version of the WSI
- `tissue_seg/`: tissue segmentation masks:
    - `{id}_mask.jpg`: downscaled or full-resolution greyscale tissue mask
    - `{id}_mask.pkl`: tissue/holes contours in a pickle file
    - `{id}_vis.jpg`: visualization of the tissue mask on the downscaled WSI
- `cellvit_seg/`: CellViT nuclei segmentation
- `pixel_size_vis/`: visualization of the pixel size
- `patches/`: 256x256 H&E patches (0.5 µm/px) extracted around ST spots in a `.h5` object optimized for deep learning. Each patch is matched to the corresponding ST profile (see `st/`) with a barcode (see the sketch after this list).
- `patches_vis/`: visualization of the mask and patches on a downscaled WSI.
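The barcode linkage between `patches/` and `st/` can be used directly with `h5py` and `scanpy`, without the `hest` library. The sketch below is illustrative only: the sample id `TENX96`, the `hest_data/...` file paths, and the HDF5 keys `img` and `barcode` are assumptions that should be checked against the files you actually downloaded.

```python
# Minimal sketch: pair an H&E patch with its spot-level expression profile.
# Assumptions (verify against your download): files are named {id}.h5 / {id}.h5ad,
# and the patch file stores datasets under the keys 'img' and 'barcode'.
import h5py
import scanpy as sc

sample_id = 'TENX96'  # example id, as used in the download queries above

# Spot-level expression: one observation per spot, indexed by barcode
adata = sc.read_h5ad(f'hest_data/st/{sample_id}.h5ad')

with h5py.File(f'hest_data/patches/{sample_id}.h5', 'r') as f:
    patches = f['img'][:]       # assumed key: (N, 256, 256, 3) H&E patches
    barcodes = f['barcode'][:]  # assumed key: (N,) or (N, 1) spot barcodes

# Take the first patch and look up the matching expression vector by barcode
barcode = barcodes[0].item()
if isinstance(barcode, bytes):
    barcode = barcode.decode()

expression = adata[adata.obs_names == barcode]
print(patches[0].shape, expression.shape)
```

From here, the patches can be fed to an image encoder while the matching `expression.X` rows serve as gene expression targets.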
### How to cite:

```
@article{jaume2024hest,
	author = {Jaume, Guillaume and Doucet, Paul and Song, Andrew H. and Lu, Ming Y. and Almagro-Perez, Cristina and Wagner, Sophia J. and Vaidya, Anurag J. and Chen, Richard J. and Williamson, Drew F. K. and Kim, Ahrong and Mahmood, Faisal},
	title = {{HEST-1k: A Dataset for Spatial Transcriptomics and Histology Image Analysis}},
	journal = {arXiv},
	year = {2024},
	month = jun,
	eprint = {2406.16192},
	url = {https://arxiv.org/abs/2406.16192v1}
}
```

### Contact:

- Guillaume Jaume, Harvard Medical School, Boston, Mahmood Lab (`gjaume@bwh.harvard.edu`)
- Paul Doucet, Harvard Medical School, Boston, Mahmood Lab (`pdoucet@bwh.harvard.edu`)

The dataset is distributed under the Attribution-NonCommercial-ShareAlike 4.0 International license (CC BY-NC-SA 4.0 Deed).