---
license: cc-by-4.0
task_categories:
- image-to-text
- visual-question-answering
language:
- en
size_categories:
- 100M<n<1B
---

The OmniCorpus contains three sections:

- **OmniCorpus-CC**: processed from Common Crawl dumps spanning 2013 to Nov./Dec. 2023.
- **OmniCorpus-CW**: sourced from Chinese internet resources; will be available on the [OpenDataLab](https://opendatalab.com/) platform.
- **OmniCorpus-YT**: samples YouTube video frames as images and collects subtitles as texts.

Code for pre-training, evaluation, main body extraction, and filtering has been released in the official [repository](https://github.com/OpenGVLab/OmniCorpus). A pre-trained model is available [here](). We are processing and uploading the remaining data sections as soon as possible.

### Update (2024-10-16)

We are uploading the natural arrangement version of the OmniCorpus-CC documents.

Coming soon:

- Documents with Similarities: documents whose texts are split at the sentence level, resulting in minor differences in text content.

# Data Pipeline

Our data pipeline consists of five key stages: main body extraction, preliminary text filtering, document deduplication, image downloading & filtering, and detailed text filtering. Each stage efficiently reduces the dataset to retain only high-quality data. Please refer to our paper for more details about the data pipeline.

# Usages

The image-text interleaved documents are recommended for the following usages:

- Pre-training multimodal large language models (MLLMs): Recent MLLMs (such as the Flamingo series, EMU series, IDEFICS series, MM1, Cambrian-1, and xGen-MM) have shown that image-text interleaved data aids multimodal in-context learning and maintains the capabilities of large language models during multimodal fine-tuning.
- Long text-image retrieval: We provide image-text similarities calculated with CLIP, which can turn the documents into an image-text retrieval dataset with longer texts. A retrieval model pre-trained on such data can retrieve images based on longer texts, which is useful for multimodal RAG, converting pure text into multimodal samples, etc.
- Source for further dataset research: Our data is large-scale and can serve as a source for research on data curation strategies. We provide many useful attributes as metadata for each document, which can enrich filtering strategies and reduce their cost (see the filtering sketch under Data Format below).
- ......

# Data Format

Following common practice, the data is organized in the Parquet file format. You might encounter errors when using `pandas.read_parquet` (because the data structure contains nested elements). We recommend using fastparquet to load the parquet files.

```Python
import fastparquet
import pyarrow.parquet as pq

parquet_file_path = "path/to/a/data/file.parquet"

# Load an entire file at once with fastparquet
df = fastparquet.ParquetFile(parquet_file_path).to_pandas()

# Alternatively, iterate over record batches with pyarrow to limit memory usage
parquet_file = pq.ParquetFile(parquet_file_path)
for batch in parquet_file.iter_batches():
    df = batch.to_pandas()
```
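At this dataset's scale, files are often scanned for statistics or filtering rather than loaded whole. As a minimal sketch of that pattern (the `general_metadata` column name follows the document format described below), pyarrow can stream only the columns you need:

```Python
import pyarrow.parquet as pq

parquet_file_path = "path/to/a/data/file.parquet"  # hypothetical path, as above
parquet_file = pq.ParquetFile(parquet_file_path)

# Stream only the document-level metadata column; this avoids materializing
# the much larger interleaved text and image fields.
for batch in parquet_file.iter_batches(batch_size=1024, columns=["general_metadata"]):
    meta_df = batch.to_pandas()
    # ... inspect or aggregate the metadata here ...
```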
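The quality probabilities stored under `general_metadata` (see the format below) can support such filtering without re-scoring the corpus. A hypothetical sketch, assuming the nested column deserializes to Python dicts (depending on the loader it may instead arrive flattened or JSON-encoded), with purely illustrative thresholds:

```Python
def keep_document(general_metadata, min_fluency=0.9, min_non_ad=0.9):
    """Keep documents judged fluent and unlikely to be advertisements."""
    # Thresholds are illustrative only; tune them for your application.
    return (
        general_metadata["fluency_prob"] >= min_fluency
        and general_metadata["non_advertisement_prob"] >= min_non_ad
    )

# Applied to a DataFrame loaded with fastparquet as shown above:
kept_df = df[df["general_metadata"].apply(keep_document)]
```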
You can convert the i-th document into a dictionary:

```Python
doc_dict = df.iloc[i].to_dict()
```

The document format is as follows. The `images`, `texts`, and `metadata` lists are aligned position by position, with `None` filling the modality that is absent at a given position:

```json
{
    'images': [
        <image 1>,
        None,
        <image 2>,
        None,
    ],
    'texts': [
        None,
        <str: text paragraph 1>,
        None,
        <str: text paragraph 2>,
    ],
    'metadata': [
        <metadata of image 1>,
        None,
        <metadata of image 2>,
        None,
    ],
    'general_metadata': {
        "url": <str: document url>,
        "id": <str: document id>,
        "domain": <str: source domain>,
        "fluency_prob": <float>,
        "non_advertisement_prob": <float>,
        "porn_prob": <float>,
        "politics_prob": <float>,
        "toxic_prob": <float>,
    },
}
```

Each image's metadata entry is as follows:

```json
{
    "img_url_sha": <str: sha of image url>,
    "width": <int>,
    "height": <int>,
    "bytes": <int>,
    "d_hash": <str: difference hash>,
    "p_hash": <str: perceptual hash>,
    "d_hash_dup_count": <int>,
    "p_hash_dup_count": <int>,
    "aesthetic prob": <float>,
    "unsafe prob": <float>,
}
```

# License

OmniCorpus is released under a [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/deed.en) license, with the primary intent of supporting research activities.

# Citation

```
@article{li2024omnicorpus,
  title={OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text},
  author={Li, Qingyun and Chen, Zhe and Wang, Weiyun and Wang, Wenhai and Ye, Shenglong and Jin, Zhenjiang and others},
  journal={arXiv preprint arXiv:2406.08418},
  year={2024}
}
```