arxiv:2412.14123

AnySat: An Earth Observation Model for Any Resolutions, Scales, and Modalities

Published on Dec 18 · Submitted by g-astruc on Dec 19

Abstract

Geospatial models must adapt to the diversity of Earth observation data in terms of resolutions, scales, and modalities. However, existing approaches expect fixed input configurations, which limits their practical applicability. We propose AnySat, a multimodal model based on joint embedding predictive architecture (JEPA) and resolution-adaptive spatial encoders, allowing us to train a single model on highly heterogeneous data in a self-supervised manner. To demonstrate the advantages of this unified approach, we compile GeoPlex, a collection of 5 multimodal datasets with varying characteristics and 11 distinct sensors. We then train a single powerful model on these diverse datasets simultaneously. Once fine-tuned, we achieve results that are better than or close to the state of the art on the datasets of GeoPlex and 4 additional ones across 5 environmental monitoring tasks: land cover mapping, tree species identification, crop type classification, change detection, and flood segmentation. The code and models are available at https://github.com/gastruc/AnySat.

Community


Key Features:
🌍 Versatile Model: Handles diverse datasets with channel counts ranging from 3 to 11, tiles spanning 0.3 to 2600 hectares, and any combination of 11 sensors.
🚀 Simple to Use: Install and download AnySat with a single line of code, select your desired modalities and patch size, and immediately generate rich features (see the first sketch after this list).
🦋 Flexible Task Adaptation: Supports fine-tuning and linear probing for tasks like tile-wise classification and semantic segmentation (see the second sketch after this list).
🧑‍🎓 Multi-dataset Training: Trains a single model across multiple datasets with varying characteristics.
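
A minimal sketch of the single-line load-and-extract flow described above. The torch.hub repo path and entry-point name ("gastruc/anysat", "anysat"), the input dictionary keys, and the patch_size/output arguments are assumptions for illustration rather than the confirmed API; the repository README has the exact calls.

```python
import torch

# Hypothetical torch.hub entry point -- the repo path and model name are
# assumptions; check https://github.com/gastruc/AnySat for the exact call.
model = torch.hub.load("gastruc/anysat", "anysat", pretrained=True)
model.eval()

# Toy input: one Sentinel-2 time series tile. The dict keys, tensor shapes,
# and date encoding are illustrative placeholders.
batch = {
    "s2": torch.randn(1, 12, 10, 60, 60),        # (batch, time, channels, H, W)
    "s2_dates": torch.randint(0, 365, (1, 12)),  # acquisition day-of-year
}

with torch.no_grad():
    # Assumed signature: modalities are inferred from the dict keys; pick a
    # patch size and request a tile-level embedding.
    features = model(batch, patch_size=10, output="tile")

print(features.shape)  # e.g. (1, feature_dim)
```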
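
And a hedged sketch of linear probing on frozen features, reusing the `model` and call signature assumed above; the feature dimension, class count, and `dataloader` are placeholders you would supply.

```python
import torch
import torch.nn as nn

# Linear probing: freeze the backbone and train only a linear head on top.
for p in model.parameters():
    p.requires_grad = False

feature_dim = 768  # assumed embedding size -- read it off the actual output
num_classes = 10   # e.g. crop types; placeholder

probe = nn.Linear(feature_dim, num_classes)
optimizer = torch.optim.AdamW(probe.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

for batch, labels in dataloader:  # your own labeled tile loader
    with torch.no_grad():         # backbone stays frozen
        feats = model(batch, patch_size=10, output="tile")
    loss = criterion(probe(feats), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```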
