Update README.md
README.md
CHANGED
@@ -10,17 +10,20 @@ tags:
 - time-series
 ---
 
-#
+# Granite-TimeSeries-TTM-R1 Model Card
 
 <p align="center" width="100%">
 <img src="ttm_image.webp" width="600">
 </p>
 
 TinyTimeMixers (TTMs) are compact pre-trained models for Multivariate Time-Series Forecasting, open-sourced by IBM Research.
-**With less than 1 Million parameters, TTM introduces the notion of the first-ever “tiny” pre-trained models for Time-Series Forecasting.**
+**With less than 1 Million parameters, TTM (accepted at NeurIPS 2024) introduces the notion of the first-ever “tiny” pre-trained models for Time-Series Forecasting.**
 
 
-TTM
+TTM-R1 comprises TTM variants pre-trained on 250M public training samples. We have released another set of TTM models, TTM-R2, trained on a much larger pretraining
+dataset (~700M samples), available [here](https://huggingface.co/ibm-granite/granite-timeseries-ttm-r2). In general, TTM-R2 models perform better than
+TTM-R1 models because they are trained on a larger pretraining dataset. However, the choice between R1 and R2 depends on your target data distribution, so we
+encourage you to try both variants and pick the one that works best for your data.
 
 
 TTM outperforms several popular benchmarks demanding billions of parameters in zero-shot and few-shot forecasting. TTMs are lightweight
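The R1-vs-R2 comparison recommended in the new paragraph takes only a few lines to run. Below is a minimal zero-shot forecasting sketch, assuming the `granite-tsfm` toolkit (`pip install granite-tsfm`); the `TinyTimeMixerForPrediction` class and the output attribute follow the public TTM examples but may vary across toolkit versions, so treat them as assumptions rather than the card's official snippet.

```python
# Sketch: compare zero-shot forecasts from TTM-R1 and TTM-R2.
# Assumes the granite-tsfm toolkit (`pip install granite-tsfm`); class and
# attribute names follow the public TTM examples and may differ by version.
import torch
from tsfm_public import TinyTimeMixerForPrediction

for repo_id in (
    "ibm-granite/granite-timeseries-ttm-r1",
    "ibm-granite/granite-timeseries-ttm-r2",
):
    model = TinyTimeMixerForPrediction.from_pretrained(repo_id)
    model.eval()

    # The base TTM variants expect a 512-point context window; a random
    # 3-channel series stands in here for real validation data.
    past_values = torch.randn(1, 512, 3)  # (batch, context_length, channels)
    with torch.no_grad():
        forecast = model(past_values=past_values).prediction_outputs

    # Shape is (batch, prediction_length, channels), e.g. (1, 96, 3)
    # for the default 512-96 variants.
    print(repo_id, forecast.shape)
```

On real data, you would score each variant's forecasts against a held-out window and keep whichever is more accurate, as the card advises.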