Update ReadMe (#6)
- Update ReadMe (549da1efcb4066c6b9a705a2a78b73ac275a8015)
- Update README.md (96e1918907f5695f13a2c10b3ef04407e64d2c76)
Co-authored-by: Vijay Ekambaram <vijaye12@users.noreply.huggingface.co>
README.md CHANGED

@@ -16,8 +16,8 @@ forecasters, pre-trained on publicly available time series data with various aug
 fine-tuned for multi-variate forecasts with just 5% of the training data to be competitive. Refer to our [paper](https://arxiv.org/pdf/2401.03955v5.pdf) for more details.
 
 
-**The current open-source version supports point forecasting use-cases ranging from minutely to hourly resolutions
-(Ex. 10 min, 15 min, 1 hour
+**The current open-source version supports point forecasting use-cases specifically ranging from minutely to hourly resolutions
+(Ex. 10 min, 15 min, 1 hour.).**
 
 **Note that zeroshot, fine-tuning and inference tasks using TTM can easily be executed in 1 GPU machine or in laptops too!!**
 
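The note about running zero-shot inference on a single GPU or a laptop is easiest to picture with a small sketch. Everything model-specific below is an assumption rather than something stated in this diff: the `TinyTimeMixerForPrediction` class and its import path are taken from the tsfm repository the README links to, and the `ibm/TTM` checkpoint id, the 512-point context / 96-point horizon, and the `past_values` / `prediction_outputs` names follow that repository's PatchTSMixer-style conventions.

```python
# Minimal zero-shot inference sketch. Class name, import path, checkpoint id,
# context/forecast lengths, and argument/output names are all assumptions here,
# based on the tsfm repository linked from the README, not on this diff.
import torch
from tsfm_public.models.tinytimemixer import TinyTimeMixerForPrediction  # assumed import path

model = TinyTimeMixerForPrediction.from_pretrained("ibm/TTM")  # assumed checkpoint id
model.eval()

# Dummy input: 1 series, 512 historical time points, 3 channels,
# already standard-scaled per channel (see the Recommended Use items below).
past_values = torch.randn(1, 512, 3)

with torch.no_grad():
    output = model(past_values=past_values)

# Expected forecast shape: (batch, prediction_length, channels), e.g. (1, 96, 3).
print(output.prediction_outputs.shape)
```

Since the model has fewer than 1 million parameters, this kind of forward pass fits comfortably on CPU-only laptop hardware, which is the point the README note is making.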
@@ -35,6 +35,12 @@ Stay tuned for the release of the model weights for these newer variants.
 - Script for Finetuning with cross-channel correlation support - to be added soon
 
 
+## Recommended Use
+1. Users have to externally standard scale their data independently for every channel before feeding it to the model (Refer to [TSP](https://github.com/IBM/tsfm/blob/main/tsfm_public/toolkit/time_series_preprocessor.py), our data processing utility for data scaling.)
+2. The current open-source version supports only minutely and hourly resolutions(Ex. 10 min, 15 min, 1 hour.). Other lower resolutions (say weekly, or monthly) are currently not supported in this version, as the model needs a minimum context length of 512 or 1024.
+3. Enabling any upsampling or prepending zeros to virtually increase the context length for shorter-length datasets is not recommended and will
+impact the model performance.
+
 ## Benchmark Highlights:
 
 - TTM (with less than 1 Million parameters) outperforms the following popular Pre-trained SOTAs demanding several hundred Million to Billions of parameters [paper](https://arxiv.org/pdf/2401.03955v5.pdf):
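The first Recommended Use item added in this hunk calls for externally standard-scaling every channel before the data reaches the model, and points to the TSP utility in the tsfm repository for that. As a stand-in, here is a minimal sketch of the same per-channel standardization using scikit-learn; the file name, column names, and split point are hypothetical.

```python
# Per-channel standard scaling sketch, as the Recommended Use item above suggests.
# Column names and the split point are hypothetical; the README's own utility for
# this is the TimeSeriesPreprocessor (TSP) in the linked tsfm repository.
import pandas as pd
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("my_multivariate_series.csv", parse_dates=["timestamp"])  # hypothetical file
channels = ["temperature", "load", "price"]                                # hypothetical channels

train_df = df.iloc[:-96]   # fit scaling statistics on history only, to avoid leakage
test_df = df.iloc[-96:]

# StandardScaler standardizes each column (channel) independently:
# x_scaled = (x - channel_mean) / channel_std
scaler = StandardScaler().fit(train_df[channels])
train_scaled = scaler.transform(train_df[channels])
test_scaled = scaler.transform(test_df[channels])

# train_scaled / test_scaled are what get windowed into the model's context;
# forecasts can be mapped back to the original units with scaler.inverse_transform(...).
```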
@@ -102,10 +108,7 @@ time-series variates, a critical capability lacking in existing counterparts.
 In addition, TTM also supports exogenous infusion and categorical data which is not released as part of this version.
 Stay tuned for these extended features.
 
-
-1. Users have to externally standard scale their data independently for every channel before feeding it to the model (Refer to [TSP](https://github.com/IBM/tsfm/blob/main/tsfm_public/toolkit/time_series_preprocessor.py), our data processing utility for data scaling.)
-2. Enabling any upsampling or prepending zeros to virtually increase the context length for shorter-length datasets is not recommended and will
-impact the model performance.
+
 
 
 ### Model Sources
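The second Recommended Use item added earlier ties the supported resolutions to the model's minimum context length of 512 or 1024 points (the same guidance that this hunk removes from its old location). The quick arithmetic below, with an illustrative set of pandas frequency strings, shows why coarser resolutions such as weekly data rarely reach that context length.

```python
# How much history is needed to fill a 512-point context at different resolutions?
# Illustrative arithmetic only; the 512 / 1024 context lengths come from the README text above.
import pandas as pd

context_length = 512
for freq in ["10min", "15min", "1h", "1D", "1W"]:
    span = context_length * pd.Timedelta(freq)
    print(f"{freq:>6}: {span} of history for {context_length} points")

# 10min -> ~3.6 days and 1h -> ~21 days are easy to collect,
# but 1W -> ~9.8 years, which is why weekly or monthly data
# usually cannot supply the required context length.
```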
@@ -114,6 +117,9 @@ Stay tuned for these extended features.
 - **Paper:** https://arxiv.org/pdf/2401.03955v5.pdf
 - **Paper (Newer variants, extended benchmarks):** https://arxiv.org/pdf/2401.03955.pdf
 
+### External Blogs on TTM
+- https://aihorizonforecast.substack.com/p/tiny-time-mixersttms-powerful-zerofew
+- https://medium.com/@david.proietti_17/predicting-venetian-lagoon-tide-levels-with-multivariate-time-series-modeling-8bafdf229588
 
 ## Uses
 