liswei committed
Commit 198e58d
1 Parent(s): e105461

Update README.md

Files changed (1): README.md +47 -12
README.md CHANGED
@@ -3,7 +3,9 @@ library_name: transformers
 license: apache-2.0
 datasets:
 - liswei/zhtw-news-and-articles-2B
-base_model: apple/OpenELM-270M
+base_model:
+- apple/OpenELM-270M
+- liswei/Taiwan-ELM
 language:
 - zh
 metrics:
@@ -11,16 +13,49 @@ metrics:
 pipeline_tag: text-generation
 ---

-# Model Card for Chinese-OpenELM-270M
+<center>
+<img src="https://huggingface.co/liswei/Taiwan-ELM/resolve/main/Taiwan%20ELM%20Logo.jpeg" alt="Efficient LLM for Taiwan">
+</center>

-Continual pre-trained from [apple/OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) with [liswei/zhtw-news-and-articles-2B](https://huggingface.co/datasets/liswei/zhtw-news-and-articles-2B):
+> Efficient LLM for Taiwan

-* Extended vocabulary from 32000 to 61758 tokens with additional Traditional Chinese characters.
-* Tokenizer is trained on [liswei/zhtw-news-and-articles-2B](https://huggingface.co/datasets/liswei/zhtw-news-and-articles-2B) and pruned from 96000 to 61758 tokens while maintaining 95% coverage on the pre-training dataset.
-* Additional token embeddings are initialized with the mean vector of existing embeddings.
-* Traditional Chinese perplexity = 1.6871 on held-out evaluation dataset.
-* Applied [GaLore](https://arxiv.org/abs/2403.03507) for efficient training with following hyperparameters:
-  * Rank: 1024
-  * Scale: 4.0
-  * Update interval: 200
-  * Layer-wise training: False
+# Taiwan ELM
+
+Taiwan ELM is a family of Efficient LLMs for Taiwan based on [apple/OpenELM](https://huggingface.co/apple/OpenELM).
+The project aims to provide an efficient model for researchers without access to large-scale computing resources.
+
+The model is trained using a custom fork of [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory) on 2B Traditional Chinese tokens and 500K instruction samples.
+We will extend the model to larger datasets and different base models if there is sufficient demand.
+
+## What is being released?
+
+We release both pre-trained base models and instruction-tuned variants with 270M and 1.1B parameters.
+Along with the models, the datasets used to train the base and instruction-tuned models are also released; a loading sketch follows the lists below.
+
+List of released models:
+* Taiwan-ELM-270M
+* Taiwan-ELM-1_1B
+* Taiwan-ELM-270M-Instruct
+* Taiwan-ELM-1_1B-Instruct
+
+List of released datasets:
+* [liswei/Taiwan-Text-Excellence-2B](https://huggingface.co/datasets/liswei/Taiwan-Text-Excellence-2B)
+* [liswei/PromptPair-TW](https://huggingface.co/datasets/liswei/PromptPair-TW)
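+
+The released datasets can be loaded directly from the Hub with the `datasets` library, as in the sketch below (the `split="train"` argument is an assumption about the default configuration):
+```python
+from datasets import load_dataset
+
+# Pre-training corpus and instruction pairs released alongside the models.
+pretrain_corpus = load_dataset("liswei/Taiwan-Text-Excellence-2B", split="train")
+instruction_pairs = load_dataset("liswei/PromptPair-TW", split="train")
+print(pretrain_corpus, instruction_pairs)
+```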
+
+## Usage Examples
+
+We adapt the LLaMA2 template:
+```jinja2
+<s>[INST] <<SYS>>
+{{ system_prompt }}
+<</SYS>>
+
+{{ user_message }} [/INST]
+```
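+
+For a single-turn prompt, the template can be rendered with plain string formatting, as in the minimal sketch below (the helper name and example system prompt are illustrative, not part of the released model):
+```python
+def build_prompt(user_message: str, system_prompt: str = "You are a helpful Traditional Chinese assistant.") -> str:
+    """Render the LLaMA2-style single-turn template shown above."""
+    return f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n{user_message} [/INST]"
+
+prompt = build_prompt("請用一句話介紹台灣。")
+```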
+
+The model can be loaded via `AutoModelForCausalLM` with `trust_remote_code=True`:
+```python
+taiwanelm_270m = AutoModelForCausalLM.from_pretrained("liswei/Taiwan-ELM-270M", trust_remote_code=True)
+```
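+
+A fuller end-to-end sketch is given below, assuming the repository also ships a matching tokenizer loadable via `AutoTokenizer`; the model choice and generation settings are illustrative:
+```python
+from transformers import AutoModelForCausalLM, AutoTokenizer
+
+model_id = "liswei/Taiwan-ELM-270M-Instruct"  # any released Taiwan-ELM variant
+tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
+model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
+
+# Single-turn prompt in the LLaMA2-style template above; the tokenizer is
+# assumed to prepend the <s> (BOS) token itself.
+prompt = "[INST] <<SYS>>\nYou are a helpful Traditional Chinese assistant.\n<</SYS>>\n\n請用一句話介紹台灣。 [/INST]"
+inputs = tokenizer(prompt, return_tensors="pt")
+outputs = model.generate(**inputs, max_new_tokens=128)
+print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+```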
+
+We also support additional generation methods and speculative generation; please refer to [OpenELM#usage](https://huggingface.co/apple/OpenELM#usage) for details.
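+
+One way to try speculative decoding is the generic `transformers` assisted-generation API, sketched below; the target/draft pairing is an assumption (both models must share a tokenizer), and the OpenELM usage guide linked above remains the reference:
+```python
+from transformers import AutoModelForCausalLM, AutoTokenizer
+
+# Assumed pairing: 1.1B instruct model as target, 270M instruct model as draft.
+tokenizer = AutoTokenizer.from_pretrained("liswei/Taiwan-ELM-1_1B-Instruct", trust_remote_code=True)
+target = AutoModelForCausalLM.from_pretrained("liswei/Taiwan-ELM-1_1B-Instruct", trust_remote_code=True)
+draft = AutoModelForCausalLM.from_pretrained("liswei/Taiwan-ELM-270M-Instruct", trust_remote_code=True)
+
+inputs = tokenizer("臺灣的夜市文化", return_tensors="pt")
+outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=64)
+print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+```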