ChengsenWang
committed
Update README.md
README.md CHANGED
@@ -21,7 +21,7 @@ In this paper, we innovatively model time series as a foreign language and const
 
 As depicted in Figure 1(b), during the continuous pre-training stage, we pre-train [LLaMA-2-7B-Base](https://huggingface.co/meta-llama/Llama-2-7b-hf) on [ChengsenWang/ChatTime-1-Pretrain-1M](https://huggingface.co/datasets/ChengsenWang/ChatTime-1-Pretrain-1M), yielding [ChengsenWang/ChatTime-1-7B-Base](https://huggingface.co/ChengsenWang/ChatTime-1-7B-Base).
 
-For details on ChatTime models, training data and procedures, and experimental results, please refer to the [arXiv](https://arxiv.org/abs/
+For details on ChatTime models, training data and procedures, and experimental results, please refer to the [arXiv](https://arxiv.org/abs/2412.11376).
 
 ![](architecture.png)
 
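For reference, a minimal sketch of how one might load the artifacts named in the diff above with the Hugging Face `transformers` and `datasets` libraries. The repository IDs come straight from the README; the `split="train"` argument and the choice of the causal-LM auto class are assumptions, not something the commit specifies.

```python
# Hypothetical sketch: load the pre-training corpus and the resulting checkpoint.
# Repo IDs are taken from the README; split name and model class are assumptions.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

# Continuous-pre-training corpus referenced in the README (split is assumed).
corpus = load_dataset("ChengsenWang/ChatTime-1-Pretrain-1M", split="train")

# Checkpoint produced by the continuous pre-training stage.
tokenizer = AutoTokenizer.from_pretrained("ChengsenWang/ChatTime-1-7B-Base")
model = AutoModelForCausalLM.from_pretrained("ChengsenWang/ChatTime-1-7B-Base")
```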