---
license: other
license_name: sample-code-license
license_link: LICENSE
library_name: ml-4m
---

# 4M: Massively Multimodal Masked Modeling

David Mizrahi*, Roman Bachmann*, Oğuzhan Fatih Kar, Teresa Yeo, Mingfei Gao, Afshin Dehghan, Amir Zamir

Official implementation and pre-trained models for "4M: Massively Multimodal Masked Modeling" (NeurIPS 2023).

[Website](https://4m.epfl.ch) | [Paper](https://arxiv.org/abs/2312.06647) | [GitHub](https://github.com/apple/ml-4m)

4M is a framework for training "any-to-any" foundation models, using tokenization and masking to scale to many diverse modalities. Models trained using 4M can perform a wide range of vision tasks, transfer well to unseen tasks and modalities, and are flexible and steerable multimodal generative models.
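
The core idea is easy to sketch: every modality is mapped to a sequence of discrete tokens, a random subset of those tokens serves as the model input, and the model is trained to predict a disjoint masked subset. The following is a toy conceptual sketch of this input/target masking, not the actual 4M code; all names, vocabulary sizes, and shapes are illustrative:

```python
import torch

# Toy sketch of multimodal masked modeling: tokens from several modalities
# are concatenated, a random subset is kept visible as input, and the model
# is trained to predict a disjoint masked subset as the target.
rgb_tokens = torch.randint(0, 1024, (196,))      # tokenized RGB patches (made-up vocab/shape)
depth_tokens = torch.randint(0, 1024, (196,))    # tokenized depth map
caption_tokens = torch.randint(0, 30000, (32,))  # text tokens

all_tokens = torch.cat([rgb_tokens, depth_tokens, caption_tokens])
perm = torch.randperm(len(all_tokens))
visible_idx, target_idx = perm[:128], perm[128:256]  # fixed input/target budgets

inputs = all_tokens[visible_idx]    # what the encoder sees
targets = all_tokens[target_idx]    # what the decoder must predict (cross-entropy)
```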

## Installation

For install instructions, please see https://github.com/apple/ml-4m.

## Usage

This model can be loaded from Hugging Face Hub as follows:

```python
from fourm.models.fm import FM
fm = FM.from_pretrained('EPFL-VILAB/4M-7_XL_COYO700M')
```
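
Assuming the loaded object behaves like a standard PyTorch module, the usual eval-mode and device handling applies; a minimal sketch:

```python
import torch

# Standard PyTorch handling; actual generation uses the sampling
# utilities described in README_GENERATION.md.
fm = fm.eval()
device = 'cuda' if torch.cuda.is_available() else 'cpu'
fm = fm.to(device)
```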

For detailed generation instructions, please see https://github.com/apple/ml-4m/blob/main/README_GENERATION.md; other 4M model and tokenizer checkpoints are listed at https://github.com/apple/ml-4m.

Safetensors checkpoints are hosted under https://huggingface.co/EPFL-VILAB/4M.
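
To work with those checkpoints directly, the standard `huggingface_hub` and `safetensors` APIs apply; a minimal sketch for downloading and inspecting one (the filename below is a hypothetical placeholder, check the repository's file listing for the actual paths):

```python
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

# Hypothetical filename for illustration only; see the repo for real paths.
path = hf_hub_download(repo_id="EPFL-VILAB/4M", filename="model.safetensors")
state_dict = load_file(path)  # dict of tensor name -> torch.Tensor
print(f"Loaded {len(state_dict)} tensors")
```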

## Citation

If you find this repository helpful, please consider citing our work:

```bibtex
@inproceedings{mizrahi20234m,
  title={{4M}: Massively Multimodal Masked Modeling},
  author={David Mizrahi and Roman Bachmann and O{\u{g}}uzhan Fatih Kar and Teresa Yeo and Mingfei Gao and Afshin Dehghan and Amir Zamir},
  booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
  year={2023},
}
```

## License

The model weights in this repository are released under the Sample Code license as found in the LICENSE file.