---

### Model and Inputs

Prithvi is a first-of-its-kind temporal Vision transformer pre-trained by the IBM and NASA team on contiguous US Harmonised Landsat Sentinel 2 (HLS) data. Specifically, the model adopts a self-supervised encoder with a ViT architecture and a Masked Autoencoder (MAE) learning strategy, using an L1 loss function. The model includes spatial attention across multiple patches as well as temporal attention for each patch.

![](GFM.png)

The model expects remote sensing data in a video format (B, C, T, H, W). Note th… other works around remote sensing modeling. Being able to handle a time series of remote sensing images can benefit a variety of downstream tasks. The model can also handle static images, which can simply be fed into the model with T=1.
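The (B, C, T, H, W) layout and the T=1 case for static images can be illustrated with a small shape-only sketch (the batch size, band count, and image size below are illustrative placeholders, not values taken from this model card):

```python
import numpy as np

# A batch of 2 samples, 6 spectral bands, 3 time steps, 224x224 pixels,
# laid out as (B, C, T, H, W) as the model expects.
video = np.zeros((2, 6, 3, 224, 224), dtype=np.float32)
print(video.shape)  # (2, 6, 3, 224, 224)

# A single static image (C, H, W) can be fed in by adding a batch axis
# and a time axis so that T=1.
image = np.zeros((6, 224, 224), dtype=np.float32)
static_input = image[np.newaxis, :, np.newaxis, :, :]
print(static_input.shape)  # (1, 6, 1, 224, 224)
```

Note the contrast with the (B, T, C, H, W) convention common in video models: here the channel axis comes before the time axis.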
### Pre-training

The model was pre-trained with NASA's HLS2 L30 product (30m granularity) from the contiguous United States. The bands that were used are the following:

1. Blue
2. Green