jasonkrone committed

Commit 74936cb
1 Parent(s): 007173e

fix nan issue
LICENSE ADDED
@@ -0,0 +1,47 @@
+ Copyright (C) 2024 Apple Inc. All Rights Reserved.
+ 
+ Disclaimer: IMPORTANT: This Apple software is supplied to you by Apple
+ Inc. ("Apple") in consideration of your agreement to the following
+ terms, and your use, installation, modification or redistribution of
+ this Apple software constitutes acceptance of these terms. If you do
+ not agree with these terms, please do not use, install, modify or
+ redistribute this Apple software.
+ 
+ In consideration of your agreement to abide by the following terms, and
+ subject to these terms, Apple grants you a personal, non-exclusive
+ license, under Apple's copyrights in this original Apple software (the
+ "Apple Software"), to use, reproduce, modify and redistribute the Apple
+ Software, with or without modifications, in source and/or binary forms;
+ provided that if you redistribute the Apple Software in its entirety and
+ without modifications, you must retain this notice and the following
+ text and disclaimers in all such redistributions of the Apple Software.
+ Neither the name, trademarks, service marks or logos of Apple Inc. may
+ be used to endorse or promote products derived from the Apple Software
+ without specific prior written permission from Apple. Except as
+ expressly stated in this notice, no other rights or licenses, express or
+ implied, are granted by Apple herein, including but not limited to any
+ patent rights that may be infringed by your derivative works or by other
+ works in which the Apple Software may be incorporated.
+ 
+ The Apple Software is provided by Apple on an "AS IS" basis. APPLE
+ MAKES NO WARRANTIES, EXPRESS OR IMPLIED, INCLUDING WITHOUT LIMITATION
+ THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY AND FITNESS
+ FOR A PARTICULAR PURPOSE, REGARDING THE APPLE SOFTWARE OR ITS USE AND
+ OPERATION ALONE OR IN COMBINATION WITH YOUR PRODUCTS.
+ 
+ IN NO EVENT SHALL APPLE BE LIABLE FOR ANY SPECIAL, INDIRECT, INCIDENTAL
+ OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ INTERRUPTION) ARISING IN ANY WAY OUT OF THE USE, REPRODUCTION,
+ MODIFICATION AND/OR DISTRIBUTION OF THE APPLE SOFTWARE, HOWEVER CAUSED
+ AND WHETHER UNDER THEORY OF CONTRACT, TORT (INCLUDING NEGLIGENCE),
+ STRICT LIABILITY OR OTHERWISE, EVEN IF APPLE HAS BEEN ADVISED OF THE
+ POSSIBILITY OF SUCH DAMAGE.
+ 
+ 
+ -------------------------------------------------------------------------------
+ SOFTWARE DISTRIBUTED IN THIS REPOSITORY:
+ 
+ This software includes a number of subcomponents with separate
+ copyright notices and license terms - please see the file ACKNOWLEDGEMENTS.
+ -------------------------------------------------------------------------------
README.md CHANGED
@@ -1,199 +1,189 @@
  ---
- library_name: transformers
- tags: []
  ---
 
- # Model Card for Model ID
- 
- <!-- Provide a quick summary of what the model is/does. -->
- 
- 
- 
- ## Model Details
- 
- ### Model Description
- 
- <!-- Provide a longer summary of what this model is. -->
- 
- This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- 
- - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]
- 
- ### Model Sources [optional]
- 
- <!-- Provide the basic links for the model. -->
- 
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]
- 
- ## Uses
- 
- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
- 
- ### Direct Use
- 
- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
- 
- [More Information Needed]
- 
- ### Downstream Use [optional]
- 
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
- 
- [More Information Needed]
- 
- ### Out-of-Scope Use
- 
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
- 
- [More Information Needed]
- 
- ## Bias, Risks, and Limitations
- 
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
- 
- [More Information Needed]
- 
- ### Recommendations
- 
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
- 
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
- 
- ## How to Get Started with the Model
- 
- Use the code below to get started with the model.
- 
- [More Information Needed]
- 
- ## Training Details
- 
- ### Training Data
- 
- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
- 
- [More Information Needed]
- 
- ### Training Procedure
- 
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
- 
- #### Preprocessing [optional]
- 
- [More Information Needed]
- 
- 
- #### Training Hyperparameters
- 
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
- 
- #### Speeds, Sizes, Times [optional]
- 
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
- 
- [More Information Needed]
 
  ## Evaluation
 
- <!-- This section describes the evaluation protocols and provides the results. -->
- 
- ### Testing Data, Factors & Metrics
- 
- #### Testing Data
- 
- <!-- This should link to a Dataset Card if possible. -->
- 
- [More Information Needed]
- 
- #### Factors
- 
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
- 
- [More Information Needed]
- 
- #### Metrics
- 
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
- 
- [More Information Needed]
- 
- ### Results
- 
- [More Information Needed]
- 
- #### Summary
- 
- 
- 
- ## Model Examination [optional]
- 
- <!-- Relevant interpretability work for the model goes here -->
- 
- [More Information Needed]
- 
- ## Environmental Impact
- 
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
- 
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- 
- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]
- 
- ## Technical Specifications [optional]
- 
- ### Model Architecture and Objective
- 
- [More Information Needed]
- 
- ### Compute Infrastructure
- 
- [More Information Needed]
- 
- #### Hardware
- 
- [More Information Needed]
- 
- #### Software
- 
- [More Information Needed]
- 
- ## Citation [optional]
- 
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
- 
- **BibTeX:**
- 
- [More Information Needed]
- 
- **APA:**
- 
- [More Information Needed]
- 
- ## Glossary [optional]
- 
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
- 
- [More Information Needed]
- 
- ## More Information [optional]
- 
- [More Information Needed]
- 
- ## Model Card Authors [optional]
- 
- [More Information Needed]
- 
- ## Model Card Contact
 
- [More Information Needed]
  ---
+ license: other
+ license_name: apple-sample-code-license
+ license_link: LICENSE
  ---
 
+ # OpenELM
+ 
+ *Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari*
+ 
+ We introduce **OpenELM**, a family of **Open** **E**fficient **L**anguage **M**odels. OpenELM uses a layer-wise scaling strategy to efficiently allocate parameters within each layer of the transformer model, leading to enhanced accuracy. We pretrained OpenELM models using the [CoreNet](https://github.com/apple/corenet) library. We release both pretrained and instruction-tuned models with 270M, 450M, 1.1B, and 3B parameters. We release the complete framework, encompassing data preparation, training, fine-tuning, and evaluation procedures, alongside multiple pre-trained checkpoints and training logs, to facilitate open research.
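+ 
+ As a toy illustration of the layer-wise scaling rule (it mirrors the logic shipped in `configuration_openelm.py` in this repository; the sketch below is for intuition only), the per-layer width multipliers are linearly interpolated across the depth of the network:
+ 
+ ```python
+ # Layer-wise (block-wise) scaling: each transformer layer gets its own width
+ # multiplier, linearly spaced between a minimum and maximum value.
+ import numpy as np
+ 
+ num_layers = 28                    # OpenELM-1_1B depth (see OpenELM_CONFIGS)
+ ffn_lo, ffn_hi = 0.5, 4.0          # ffn_multipliers preset used by all OpenELM sizes
+ ffn_multipliers = [round(v, 2) for v in np.linspace(ffn_lo, ffn_hi, num_layers)]
+ print(ffn_multipliers[0], ffn_multipliers[-1])  # 0.5 4.0 -- narrow early layers, wide late layers
+ ```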
+ 
+ Our pre-training dataset contains RefinedWeb, deduplicated PILE, a subset of RedPajama, and a subset of Dolma v1.6, totaling approximately 1.8 trillion tokens. Please check the license agreements and terms of these datasets before using them.
+ 
+ ## Usage
+ 
+ We have provided an example function to generate output from OpenELM models loaded via the [HuggingFace Hub](https://huggingface.co/docs/hub/) in `generate_openelm.py`.
+ 
+ You can try the model by running the following command:
+ ```
+ python generate_openelm.py --model apple/OpenELM-1_1B-Instruct --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2
+ ```
+ Please refer to [this link](https://huggingface.co/docs/hub/security-tokens) to obtain your Hugging Face access token.
+ 
+ Additional arguments to the Hugging Face generate function can be passed via `generate_kwargs`. As an example, to speed up inference, you can try [lookup token speculative generation](https://huggingface.co/docs/transformers/generation_strategies) by passing the `prompt_lookup_num_tokens` argument as follows:
+ ```
+ python generate_openelm.py --model apple/OpenELM-1_1B-Instruct --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2 prompt_lookup_num_tokens=10
+ ```
+ Alternatively, try model-wise speculative generation with an [assistive model](https://huggingface.co/blog/assisted-generation) by passing a smaller model through the `assistant_model` argument, for example:
+ ```
+ python generate_openelm.py --model apple/OpenELM-1_1B-Instruct --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2 --assistant_model [SMALLER_MODEL]
+ ```
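+ 
+ If you prefer to call the model directly rather than through `generate_openelm.py`, a minimal sketch (assuming `transformers` and `torch` are installed, and that you have access to the gated Llama 2 tokenizer) looks like this:
+ 
+ ```python
+ # Minimal generation sketch; mirrors what generate_openelm.py does internally.
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ 
+ model = AutoModelForCausalLM.from_pretrained(
+     "apple/OpenELM-1_1B-Instruct", trust_remote_code=True  # custom modeling code ships with the repo
+ )
+ # OpenELM reuses the LLaMA tokenizer; pass token="hf_..." if the repo is gated for you.
+ tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
+ 
+ inputs = tokenizer("Once upon a time there was", return_tensors="pt")
+ output_ids = model.generate(inputs["input_ids"], max_length=64, pad_token_id=0, repetition_penalty=1.2)
+ print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
+ ```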
+ 
+ ## Main Results
+ 
+ ### Zero-Shot
+ 
+ | **Model Size** | **ARC-c** | **ARC-e** | **BoolQ** | **HellaSwag** | **PIQA** | **SciQ** | **WinoGrande** | **Average** |
+ |-----------------------------------------------------------------------------|-----------|-----------|-----------|---------------|-----------|-----------|----------------|-------------|
+ | [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 26.45 | 45.08 | **53.98** | 46.71 | 69.75 | **84.70** | **53.91** | 54.37 |
+ | [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **30.55** | **46.68** | 48.56 | **52.07** | **70.78** | 84.40 | 52.72 | **55.11** |
+ | [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 27.56 | 48.06 | 55.78 | 53.97 | 72.31 | 87.20 | 58.01 | 57.56 |
+ | [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **30.38** | **50.00** | **60.37** | **59.34** | **72.63** | **88.00** | **58.96** | **59.95** |
+ | [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 32.34 | **55.43** | 63.58 | 64.81 | **75.57** | **90.60** | 61.72 | 63.44 |
+ | [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **37.97** | 52.23 | **70.00** | **71.20** | 75.03 | 89.30 | **62.75** | **65.50** |
+ | [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 35.58 | 59.89 | 67.40 | 72.44 | 78.24 | **92.70** | 65.51 | 67.39 |
+ | [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **39.42** | **61.74** | **68.17** | **76.36** | **79.00** | 92.50 | **66.85** | **69.15** |
+ 
+ ### LLM360
+ 
+ | **Model Size** | **ARC-c** | **HellaSwag** | **MMLU** | **TruthfulQA** | **WinoGrande** | **Average** |
+ |-----------------------------------------------------------------------------|-----------|---------------|-----------|----------------|----------------|-------------|
+ | [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 27.65 | 47.15 | 25.72 | **39.24** | **53.83** | 38.72 |
+ | [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **32.51** | **51.58** | **26.70** | 38.72 | 53.20 | **40.54** |
+ | [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 30.20 | 53.86 | **26.01** | 40.18 | 57.22 | 41.50 |
+ | [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **33.53** | **59.31** | 25.41 | **40.48** | **58.33** | **43.41** |
+ | [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 36.69 | 65.71 | **27.05** | 36.98 | 63.22 | 45.93 |
+ | [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **41.55** | **71.83** | 25.65 | **45.95** | **64.72** | **49.94** |
+ | [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 42.24 | 73.28 | **26.76** | 34.98 | 67.25 | 48.90 |
+ | [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **47.70** | **76.87** | 24.80 | **38.76** | **67.96** | **51.22** |
+ 
+ ### OpenLLM Leaderboard
+ 
+ | **Model Size** | **ARC-c** | **CrowS-Pairs** | **HellaSwag** | **MMLU** | **PIQA** | **RACE** | **TruthfulQA** | **WinoGrande** | **Average** |
+ |-----------------------------------------------------------------------------|-----------|-----------------|---------------|-----------|-----------|-----------|----------------|----------------|-------------|
+ | [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 27.65 | **66.79** | 47.15 | 25.72 | 69.75 | 30.91 | **39.24** | **53.83** | 45.13 |
+ | [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **32.51** | 66.01 | **51.58** | **26.70** | **70.78** | 33.78 | 38.72 | 53.20 | **46.66** |
+ | [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 30.20 | **68.63** | 53.86 | **26.01** | 72.31 | 33.11 | 40.18 | 57.22 | 47.69 |
+ | [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **33.53** | 67.44 | **59.31** | 25.41 | **72.63** | **36.84** | **40.48** | **58.33** | **49.25** |
+ | [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 36.69 | **71.74** | 65.71 | **27.05** | **75.57** | 36.46 | 36.98 | 63.22 | 51.68 |
+ | [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **41.55** | 71.02 | **71.83** | 25.65 | 75.03 | **39.43** | **45.95** | **64.72** | **54.40** |
+ | [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 42.24 | **73.29** | 73.28 | **26.76** | 78.24 | **38.76** | 34.98 | 67.25 | 54.35 |
+ | [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **47.70** | 72.33 | **76.87** | 24.80 | **79.00** | 38.47 | **38.76** | **67.96** | **55.73** |
+ 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
79
 
80
  ## Evaluation
81
 
82
+ ### Setup
83
+
84
+ Install the following dependencies:
85
+
86
+ ```bash
87
+
88
+ # install public lm-eval-harness
89
+
90
+ harness_repo="public-lm-eval-harness"
91
+ git clone https://github.com/EleutherAI/lm-evaluation-harness ${harness_repo}
92
+ cd ${harness_repo}
93
+ # use main branch on 03-15-2024, SHA is dc90fec
94
+ git checkout dc90fec
95
+ pip install -e .
96
+ cd ..
97
+
98
+ # 66d6242 is the main branch on 2024-04-01
99
+ pip install datasets@git+https://github.com/huggingface/datasets.git@66d6242
100
+ pip install tokenizers>=0.15.2 transformers>=4.38.2 sentencepiece>=0.2.0
101
+
102
+ ```
103
+
104
+ ### Evaluate OpenELM
105
+
106
+ ```bash
107
+
108
+ # OpenELM-1_1B-Instruct
109
+ hf_model=apple/OpenELM-1_1B-Instruct
110
+
111
+ # this flag is needed because lm-eval-harness set add_bos_token to False by default, but OpenELM uses LLaMA tokenizer which requires add_bos_token to be True
112
+ tokenizer=meta-llama/Llama-2-7b-hf
113
+ add_bos_token=True
114
+ batch_size=1
115
+
116
+ mkdir lm_eval_output
117
+
118
+ shot=0
119
+ task=arc_challenge,arc_easy,boolq,hellaswag,piqa,race,winogrande,sciq,truthfulqa_mc2
120
+ lm_eval --model hf \
121
+ --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
122
+ --tasks ${task} \
123
+ --device cuda:0 \
124
+ --num_fewshot ${shot} \
125
+ --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
126
+ --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log
127
+
128
+ shot=5
129
+ task=mmlu,winogrande
130
+ lm_eval --model hf \
131
+ --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
132
+ --tasks ${task} \
133
+ --device cuda:0 \
134
+ --num_fewshot ${shot} \
135
+ --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
136
+ --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log
137
+
138
+ shot=25
139
+ task=arc_challenge,crows_pairs_english
140
+ lm_eval --model hf \
141
+ --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
142
+ --tasks ${task} \
143
+ --device cuda:0 \
144
+ --num_fewshot ${shot} \
145
+ --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
146
+ --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log
147
+
148
+ shot=10
149
+ task=hellaswag
150
+ lm_eval --model hf \
151
+ --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
152
+ --tasks ${task} \
153
+ --device cuda:0 \
154
+ --num_fewshot ${shot} \
155
+ --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
156
+ --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log
157
+
158
+ ```
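+ 
+ Each run writes its scores under `lm_eval_output/` (plus the full console log via `tee`). A small sketch for pulling numbers back out of the result files (assuming the standard lm-eval-harness JSON layout, where per-task scores live under a top-level `"results"` key):
+ 
+ ```python
+ import glob
+ import json
+ 
+ for path in glob.glob("./lm_eval_output/*shot*"):
+     try:
+         with open(path) as f:
+             results = json.load(f)["results"]  # {task: {metric: value, ...}, ...}
+     except (IsADirectoryError, json.JSONDecodeError, KeyError):
+         continue  # skip logs and anything that is not a results file
+     print(path, results)
+ ```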
+ 
+ ## Bias, Risks, and Limitations
+ 
+ The release of OpenELM models aims to empower and enrich the open research community by providing access to state-of-the-art language models. Trained on publicly available datasets, these models are made available without any safety guarantees. Consequently, there exists the possibility of these models producing outputs that are inaccurate, harmful, biased, or objectionable in response to user prompts. Thus, it is imperative for users and developers to undertake thorough safety testing and implement appropriate filtering mechanisms tailored to their specific requirements.
+ 
+ ## Citation
+ 
+ If you find our work useful, please cite:
+ 
+ ```bibtex
+ @article{mehtaOpenELMEfficientLanguage2024,
+     title = {{OpenELM}: {An} {Efficient} {Language} {Model} {Family} with {Open} {Training} and {Inference} {Framework}},
+     shorttitle = {{OpenELM}},
+     url = {https://arxiv.org/abs/2404.14619v1},
+     language = {en},
+     urldate = {2024-04-24},
+     journal = {arXiv.org},
+     author = {Mehta, Sachin and Sekhavat, Mohammad Hossein and Cao, Qingqing and Horton, Maxwell and Jin, Yanzi and Sun, Chenfan and Mirzadeh, Iman and Najibi, Mahyar and Belenko, Dmitry and Zatloukal, Peter and Rastegari, Mohammad},
+     month = apr,
+     year = {2024},
+ }
+ 
+ @inproceedings{mehta2022cvnets,
+     author = {Mehta, Sachin and Abdolhosseini, Farzad and Rastegari, Mohammad},
+     title = {CVNets: High Performance Library for Computer Vision},
+     year = {2022},
+     booktitle = {Proceedings of the 30th ACM International Conference on Multimedia},
+     series = {MM '22}
+ }
+ ```
config.json CHANGED
@@ -1,12 +1,11 @@
  {
-   "_name_or_path": "apple/OpenELM-1_1B-instruct",
    "activation_fn_name": "swish",
    "architectures": [
      "OpenELMForCausalLM"
    ],
    "auto_map": {
-     "AutoConfig": "apple/OpenELM-1_1B-instruct--configuration_openelm.OpenELMConfig",
-     "AutoModelForCausalLM": "apple/OpenELM-1_1B-instruct--modeling_openelm.OpenELMForCausalLM"
+     "AutoConfig": "configuration_openelm.OpenELMConfig",
+     "AutoModelForCausalLM": "modeling_openelm.OpenELMForCausalLM"
    },
    "bos_token_id": 1,
    "eos_token_id": 2,
@@ -118,8 +117,8 @@
    "rope_freq_constant": 10000,
    "rope_max_length": 4096,
    "share_input_output_layers": true,
-   "torch_dtype": "float32",
-   "transformers_version": "4.44.2",
+   "torch_dtype": "bfloat16",
+   "transformers_version": "4.39.3",
    "use_cache": true,
    "vocab_size": 32000
  }
configuration_openelm.py ADDED
@@ -0,0 +1,318 @@
+ #
+ # For licensing see accompanying LICENSE file.
+ # Copyright (C) 2024 Apple Inc. All Rights Reserved.
+ #
+ 
+ """Implements HF OpenELMConfig based on PretrainedConfig"""
+ from numbers import Number
+ from typing import List, Optional, Union
+ 
+ import numpy as np
+ from transformers import PretrainedConfig
+ 
+ 
+ def make_divisible(
+     v: Union[float, int],
+     divisor: Optional[int] = 8,
+     min_value: Optional[Union[float, int]] = None,
+ ) -> Union[float, int]:
+     """
+     This function is taken from the original tf repo.
+     It ensures that all layers have a channel number that is divisible by the divisor.
+     It can be seen at:
+     https://github.com/tensorflow/models/blob/2cfc99eff5e5eb729c6793d2f3d03aa1c9be2b15/research/slim/nets/mobilenet/mobilenet.py#L62
+ 
+     Args:
+         v: input value
+         divisor: defaults to 8
+         min_value: minimum value the result may take (defaults to the divisor)
+     Returns:
+         new_v: new divisible value
+     """
+     if min_value is None:
+         min_value = divisor
+     new_v = max(min_value, int(v + divisor / 2) // divisor * divisor)
+     # Make sure that round down does not go down by more than 10%.
+     if new_v < 0.9 * v:
+         new_v += divisor
+     return new_v
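+ 
+ # For example (illustrative): make_divisible(100, divisor=8) == 104 and
+ # make_divisible(23, divisor=8) == 24 -- values are rounded to the nearest
+ # multiple of the divisor (ties round up), then bumped by one divisor if
+ # rounding lost more than 10% of the input.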
+ 
+ 
+ def compute_heads(model_dim: int, head_dim: int) -> int:
+     """Compute the number of heads.
+ 
+     Args:
+         model_dim: Model dimension.
+         head_dim: Head dimension.
+ 
+     Returns:
+         An integer denoting the number of heads in multi-head attention.
+ 
+     Raises:
+         ValueError: if model dimension is not divisible by head dimension.
+     """
+     if model_dim % head_dim == 0:
+         return model_dim // head_dim
+     else:
+         raise ValueError(
+             f"Model dimension should be divisible by head dimension. Got: {model_dim} and {head_dim}."
+         )
+ 
+ 
+ OpenELM_CONFIGS = {
+     "OpenELM-270M": dict(
+         num_transformer_layers=16,
+         model_dim=1280,
+         head_dim=64,
+         num_gqa_groups=4,
+         normalize_qk_projections=True,
+         share_input_output_layers=True,
+         # Vary the FFN and QKV multipliers to create variable FFN and attention layers respectively.
+         ffn_multipliers=(0.5, 4.0),
+         qkv_multipliers=(0.5, 1.0),
+     ),
+     "OpenELM-450M": dict(
+         num_transformer_layers=20,
+         model_dim=1536,
+         head_dim=64,
+         num_gqa_groups=4,
+         normalize_qk_projections=True,
+         share_input_output_layers=True,
+         # Vary the FFN and QKV multipliers to create variable FFN and attention layers respectively.
+         ffn_multipliers=(0.5, 4.0),
+         qkv_multipliers=(0.5, 1.0),
+     ),
+     "OpenELM-1_1B": dict(
+         num_transformer_layers=28,
+         model_dim=2048,
+         head_dim=64,
+         num_gqa_groups=4,
+         normalize_qk_projections=True,
+         share_input_output_layers=True,
+         # Vary the FFN and QKV multipliers to create variable FFN and attention layers respectively.
+         ffn_multipliers=(0.5, 4.0),
+         qkv_multipliers=(0.5, 1.0),
+     ),
+     "OpenELM-3B": dict(
+         num_transformer_layers=36,
+         model_dim=3072,
+         head_dim=128,
+         num_gqa_groups=4,
+         normalize_qk_projections=True,
+         share_input_output_layers=True,
+         # Vary the FFN and QKV multipliers to create variable FFN and attention layers respectively.
+         ffn_multipliers=(0.5, 4.0),
+         qkv_multipliers=(0.5, 1.0),
+     ),
+ }
+ 
+ 
+ class OpenELMConfig(PretrainedConfig):
+     r"""
+     This is the configuration class to store the configuration of a [`OpenELMModel`]. It is used to instantiate an OpenELM model according to the specified arguments, defining the model architecture.
+ 
+     Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
+     documentation from [`PretrainedConfig`] for more information.
+ 
+     Args:
+         vocab_size (`int`, *optional*, defaults to 32000):
+             Vocabulary size of the OpenELM model.
+         max_context_length (`int`, *optional*, defaults to 2048):
+             Maximum number of input tokens.
+         num_transformer_layers (`int`, *optional*, defaults to 12):
+             Number of hidden layers in the Transformer decoder.
+         model_dim (`int`, *optional*, defaults to 2048):
+             Dimension of the hidden representations.
+         head_dim (`int`, *optional*, defaults to 128):
+             The attention head dimension.
+         qkv_multipliers (`Union[Number, List[Number]]`, *optional*, defaults to 1.0):
+             If qkv_multipliers is a Number, then all attention layers have the same latent dimensions,
+             resulting in uniform allocation of parameters.
+             If qkv_multipliers is a List of Numbers, then each attention layer has different latent dimensions
+             (assuming qkv_multipliers[0] != qkv_multipliers[1]), resulting in variable allocation of parameters in the attention layers.
+             This scaling is known as layer-wise or block-wise scaling: https://arxiv.org/abs/2008.00623
+         num_query_heads (`Union[int, None]`, *optional*, defaults to None):
+             The number of query heads; if None, it is computed as `compute_heads(model_dim=model_dim, head_dim=head_dim)`.
+         num_gqa_groups (`int`, *optional*, defaults to 1):
+             This variable allows switching between multi-head attention, group query attention, and multi-query attention.
+             When num_gqa_groups == 1, it is multi-head attention.
+             When 1 < num_gqa_groups < num_heads and num_heads is divisible by num_gqa_groups, it is group query attention.
+             When num_gqa_groups == num_heads, it is multi-query attention.
+         ffn_multipliers (`Union[Number, List[Number]]`, *optional*, defaults to 4.0):
+             Feed-forward network (FFN) multipliers.
+             If ffn_multipliers is a Number, then all FFN layers have the same latent dimensions,
+             resulting in uniform allocation of parameters.
+             If ffn_multipliers is a List of Numbers, then each FFN layer has different latent dimensions
+             (assuming ffn_multipliers[0] != ffn_multipliers[1]), resulting in variable allocation of parameters in the FFN layers.
+             This scaling is known as layer-wise or block-wise scaling: https://arxiv.org/abs/2008.00623
+         ffn_with_glu (`bool`, *optional*, defaults to True):
+             Whether to use an FFN with a Gated Linear Unit (GLU).
+         ffn_dim_divisor (`int`, *optional*, defaults to 256):
+             The FFN layer dimension divisor.
+         activation_fn_name (`str` or `function`, *optional*, defaults to `"swish"`):
+             The non-linear activation function (function or string) in the decoder.
+         normalization_layer_name (`str` or `function`, *optional*, defaults to `"rms_norm"`):
+             Type of normalization layer.
+         normalize_qk_projections (`bool`, *optional*, defaults to False):
+             Whether to normalize queries and keys after projections.
+         share_input_output_layers (`bool`, *optional*, defaults to False):
+             Whether to share the embedding between the input and output linear layers.
+         rope_freq_constant (`int`, *optional*, defaults to 10000):
+             The base period of the RoPE embeddings.
+         rope_max_length (`int`, *optional*, defaults to 4096):
+             Note that rope_max_length is set to twice max_context_length.
+             This allows flexibility in token lengths during training or fine-tuning.
+         initializer_range (`float`, *optional*, defaults to 0.02):
+             The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
+         use_cache (`bool`, *optional*, defaults to `True`):
+             Whether or not the model should return the last key/values attentions (not used by all models). Only
+             relevant if `config.is_decoder=True`.
+         bos_token_id (`int`, *optional*, defaults to 1):
+             Beginning of stream token id.
+         eos_token_id (`int`, *optional*, defaults to 2):
+             End of stream token id.
+     """
+ 
+     model_type = "openelm"
+ 
+     def __init__(
+         self,
+         vocab_size: int = 32000,
+         max_context_length: int = 2048,
+         num_transformer_layers: int = 12,
+         model_dim: int = 2048,
+         head_dim: int = 128,
+         qkv_multipliers: Union[Number, List[Number]] = 1.0,
+         num_query_heads: Union[int, None] = None,
+         num_gqa_groups: int = 1,
+         ffn_multipliers: Union[Number, List[Number]] = 4.0,
+         ffn_with_glu: bool = True,
+         ffn_dim_divisor: int = 256,
+         activation_fn_name: str = "swish",
+         normalization_layer_name: str = "rms_norm",
+         normalize_qk_projections: bool = False,
+         share_input_output_layers: bool = False,
+         rope_freq_constant: int = 10000,
+         rope_max_length: int = 4096,
+         initializer_range: float = 0.02,
+         use_cache: bool = True,
+         bos_token_id: int = 1,
+         eos_token_id: int = 2,
+         **kwargs,
+     ) -> None:
+         self.vocab_size = vocab_size
+         self.max_context_length = max_context_length
+         self.num_transformer_layers = num_transformer_layers
+         self.model_dim = model_dim
+         self.head_dim = head_dim
+         self.qkv_multipliers = qkv_multipliers
+         self.num_query_heads = num_query_heads
+         self.num_gqa_groups = num_gqa_groups
+         self.ffn_multipliers = ffn_multipliers
+         self.ffn_with_glu = ffn_with_glu
+         self.ffn_dim_divisor = ffn_dim_divisor
+         self.activation_fn_name = activation_fn_name
+         self.normalization_layer_name = normalization_layer_name
+         self.normalize_qk_projections = normalize_qk_projections
+         self.share_input_output_layers = share_input_output_layers
+         self.rope_freq_constant = rope_freq_constant
+         self.rope_max_length = rope_max_length
+         self.num_query_heads = (
+             compute_heads(model_dim=model_dim, head_dim=head_dim)
+             if num_query_heads is None
+             else num_query_heads
+         )
+         self.initializer_range = initializer_range
+ 
+         self.__post_init__()
+         super().__init__(
+             use_cache=use_cache,
+             bos_token_id=bos_token_id,
+             eos_token_id=eos_token_id,
+             **kwargs,
+         )
+ 
+     def __post_init__(self) -> None:
+         if self.num_gqa_groups is not None:
+             head_multiple_of = self.num_gqa_groups
+         else:
+             head_multiple_of = 2
+ 
+         if isinstance(self.qkv_multipliers, Number):
+             # All attention layers have the same latent dimensions, resulting in uniform allocation of parameters.
+             qkv_dim = make_divisible(
+                 self.model_dim * self.qkv_multipliers,
+                 divisor=self.head_dim * head_multiple_of,
+             )
+             query_dims = [int(qkv_dim)] * self.num_transformer_layers
+ 
+         elif (
+             isinstance(self.qkv_multipliers, (tuple, list))
+             and len(self.qkv_multipliers) == 2
+         ):
+             # Each attention layer has different latent dimensions, assuming qkv_multipliers[0] != qkv_multipliers[1].
+             # This results in variable allocation of parameters in the attention layers.
+             # This scaling is known as layer-wise or block-wise scaling: https://arxiv.org/abs/2008.00623
+             qkv_multipliers = [
+                 round(v, 2)
+                 for v in np.linspace(
+                     self.qkv_multipliers[0],
+                     self.qkv_multipliers[1],
+                     num=self.num_transformer_layers,
+                     dtype=float,
+                 )
+             ]
+             # Make sure that the scaled model dimension is divisible by the scaled head dimension.
+             query_dims = [
+                 int(
+                     make_divisible(
+                         self.model_dim * m, divisor=self.head_dim * head_multiple_of
+                     )
+                 )
+                 for m in qkv_multipliers
+             ]
+         else:
+             raise NotImplementedError(
+                 f"QKV multipliers should be a single number or a list containing exactly two numbers. Got: {self.qkv_multipliers}."
+             )
+ 
+         # Compute the number of query, key, and value heads.
+         # For multi-head and multi-query attention, the number of heads for query, key, and value are the same.
+         # For group query attention, the number of key and value heads are the same.
+         self.num_query_heads = [
+             int(compute_heads(q_dim, self.head_dim)) for q_dim in query_dims
+         ]
+         self.num_kv_heads = [
+             q_heads // self.num_gqa_groups for q_heads in self.num_query_heads
+         ]
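+ 
+         # Worked example (illustrative): for OpenELM-1_1B (model_dim=2048, head_dim=64,
+         # num_gqa_groups=4, qkv_multipliers=(0.5, 1.0), 28 layers), the divisor is
+         # head_dim * num_gqa_groups = 256, so query_dims runs from
+         # make_divisible(1024, 256) = 1024 up to make_divisible(2048, 256) = 2048,
+         # i.e. 16 query / 4 KV heads in the first layer and 32 query / 8 KV heads in the last.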
+ 
+         # Feed-forward network (FFN) multipliers.
+         if isinstance(self.ffn_multipliers, Number):
+             # All FFN layers have the same latent dimensions, resulting in uniform allocation of parameters.
+             self.ffn_multipliers = [self.ffn_multipliers] * self.num_transformer_layers
+         elif isinstance(self.ffn_multipliers, (tuple, list)):
+             # Each FFN layer has different latent dimensions, assuming ffn_multipliers[0] != ffn_multipliers[1].
+             # This results in variable allocation of parameters in the FFN layers.
+             # This scaling is known as layer-wise or block-wise scaling: https://arxiv.org/abs/2008.00623
+             if len(self.ffn_multipliers) == 2:
+                 self.ffn_multipliers = [
+                     round(v, 2)
+                     for v in np.linspace(
+                         self.ffn_multipliers[0],
+                         self.ffn_multipliers[1],
+                         num=self.num_transformer_layers,
+                         dtype=float,
+                     )
+                 ]
+             else:
+                 assert (
+                     len(self.ffn_multipliers) == self.num_transformer_layers
+                 ), f"{len(self.ffn_multipliers)=}!={self.num_transformer_layers=}"
+         else:
+             raise NotImplementedError(
+                 f"FFN multipliers should be a single number or a list containing exactly two numbers. Got: {self.ffn_multipliers}."
+             )
+ 
+         # Check that num_query_heads is divisible by num_kv_heads for every layer.
+         for layer_idx in range(len(query_dims)):
+             assert self.num_query_heads[layer_idx] % self.num_kv_heads[layer_idx] == 0
generate_openelm.py ADDED
@@ -0,0 +1,240 @@
+ #
+ # For licensing see accompanying LICENSE file.
+ # Copyright (C) 2024 Apple Inc. All Rights Reserved.
+ #
+ 
+ """Module to generate OpenELM output given a model and an input prompt."""
+ import os
+ import logging
+ import time
+ import argparse
+ from typing import Optional, Tuple, Union
+ import torch
+ 
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+ 
+ 
+ def generate(
+     prompt: str,
+     model: Union[str, AutoModelForCausalLM],
+     hf_access_token: str = None,
+     tokenizer: Union[str, AutoTokenizer] = 'meta-llama/Llama-2-7b-hf',
+     device: Optional[str] = None,
+     max_length: int = 1024,
+     assistant_model: Optional[Union[str, AutoModelForCausalLM]] = None,
+     generate_kwargs: Optional[dict] = None,
+ ) -> Tuple[str, float]:
+     """Generates output given a prompt.
+ 
+     Args:
+         prompt: The string prompt.
+         model: The LLM model. If a string is passed, it should be the path to
+             the hf converted checkpoint.
+         hf_access_token: Hugging Face access token.
+         tokenizer: Tokenizer instance. If model is set as a string path,
+             the tokenizer will be loaded from the checkpoint.
+         device: String representation of the device to run the model on. If None
+             and a CUDA device is available, it is set to cuda:0, else cpu.
+         max_length: Maximum length of tokens, input prompt + generated tokens.
+         assistant_model: If set, this model will be used for
+             speculative generation. If a string is passed, it should be the
+             path to the hf converted checkpoint.
+         generate_kwargs: Extra kwargs passed to the hf generate function.
+ 
+     Returns:
+         output_text: output generated as a string.
+         generation_time: generation time in seconds.
+ 
+     Raises:
+         ValueError: If device is set to CUDA but no CUDA device is detected.
+         ValueError: If tokenizer is not set.
+         ValueError: If hf_access_token is not specified.
+     """
+     if not device:
+         if torch.cuda.is_available() and torch.cuda.device_count():
+             device = "cuda:0"
+             logging.warning(
+                 'inference device is not set, using cuda:0, %s',
+                 torch.cuda.get_device_name(0)
+             )
+         else:
+             device = 'cpu'
+             logging.warning(
+                 (
+                     'No CUDA device detected, using cpu, '
+                     'expect slower speeds.'
+                 )
+             )
+ 
+     if 'cuda' in device and not torch.cuda.is_available():
+         raise ValueError('CUDA device requested but no CUDA device detected.')
+ 
+     if not tokenizer:
+         raise ValueError('Tokenizer is not set in the generate function.')
+ 
+     if not hf_access_token:
+         raise ValueError((
+             'Hugging Face access token needs to be specified. '
+             'Please refer to https://huggingface.co/docs/hub/security-tokens'
+             ' to obtain one.'
+         ))
+ 
+     if isinstance(model, str):
+         checkpoint_path = model
+         model = AutoModelForCausalLM.from_pretrained(
+             checkpoint_path,
+             trust_remote_code=True
+         )
+     model.to(device).eval()
+     if isinstance(tokenizer, str):
+         tokenizer = AutoTokenizer.from_pretrained(
+             tokenizer,
+             token=hf_access_token,
+         )
+ 
+     # Speculative mode
+     draft_model = None
+     if assistant_model:
+         draft_model = assistant_model
+         if isinstance(assistant_model, str):
+             draft_model = AutoModelForCausalLM.from_pretrained(
+                 assistant_model,
+                 trust_remote_code=True
+             )
+         draft_model.to(device).eval()
+ 
+     # Prepare the prompt
+     tokenized_prompt = tokenizer(prompt)
+     tokenized_prompt = torch.tensor(
+         tokenized_prompt['input_ids'],
+         device=device
+     )
+ 
+     tokenized_prompt = tokenized_prompt.unsqueeze(0)
+ 
+     # Generate
+     stime = time.time()
+     output_ids = model.generate(
+         tokenized_prompt,
+         max_length=max_length,
+         pad_token_id=0,
+         assistant_model=draft_model,
+         **(generate_kwargs if generate_kwargs else {}),
+     )
+     generation_time = time.time() - stime
+ 
+     output_text = tokenizer.decode(
+         output_ids[0].tolist(),
+         skip_special_tokens=True
+     )
+ 
+     return output_text, generation_time
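+ 
+ 
+ # Illustrative call from Python (a sketch mirroring the README's CLI example):
+ #   output_text, generation_time = generate(
+ #       prompt='Once upon a time there was',
+ #       model='apple/OpenELM-1_1B-Instruct',
+ #       hf_access_token='hf_...',
+ #       generate_kwargs={'repetition_penalty': 1.2},
+ #   )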
+ 
+ 
+ def openelm_generate_parser():
+     """Argument Parser"""
+ 
+     class KwargsParser(argparse.Action):
+         """Parser action class to parse kwargs of form key=value"""
+         def __call__(self, parser, namespace, values, option_string=None):
+             setattr(namespace, self.dest, dict())
+             for val in values:
+                 if '=' not in val:
+                     raise ValueError(
+                         (
+                             'Argument parsing error, kwargs are expected in'
+                             ' the form of key=value.'
+                         )
+                     )
+                 kwarg_k, kwarg_v = val.split('=')
+                 try:
+                     converted_v = int(kwarg_v)
+                 except ValueError:
+                     try:
+                         converted_v = float(kwarg_v)
+                     except ValueError:
+                         converted_v = kwarg_v
+                 getattr(namespace, self.dest)[kwarg_k] = converted_v
+ 
+     parser = argparse.ArgumentParser('OpenELM Generate Module')
+     parser.add_argument(
+         '--model',
+         dest='model',
+         help='Path to the hf converted model.',
+         required=True,
+         type=str,
+     )
+     parser.add_argument(
+         '--hf_access_token',
+         dest='hf_access_token',
+         help='Hugging Face access token, starting with "hf_".',
+         type=str,
+     )
+     parser.add_argument(
+         '--prompt',
+         dest='prompt',
+         help='Prompt for LLM call.',
+         default='',
+         type=str,
+     )
+     parser.add_argument(
+         '--device',
+         dest='device',
+         help='Device used for inference.',
+         type=str,
+     )
+     parser.add_argument(
+         '--max_length',
+         dest='max_length',
+         help='Maximum length of tokens.',
+         default=256,
+         type=int,
+     )
+     parser.add_argument(
+         '--assistant_model',
+         dest='assistant_model',
+         help=(
+             'If set, this is used as a draft model '
+             'for assisted speculative generation.'
+         ),
+         type=str,
+     )
+     parser.add_argument(
+         '--generate_kwargs',
+         dest='generate_kwargs',
+         help='Additional kwargs passed to the HF generate function.',
+         type=str,
+         nargs='*',
+         action=KwargsParser,
+     )
+     return parser.parse_args()
+ 
+ 
+ if __name__ == '__main__':
+     args = openelm_generate_parser()
+     prompt = args.prompt
+ 
+     output_text, generation_time = generate(
+         prompt=prompt,
+         model=args.model,
+         device=args.device,
+         max_length=args.max_length,
+         assistant_model=args.assistant_model,
+         generate_kwargs=args.generate_kwargs,
+         hf_access_token=args.hf_access_token,
+     )
+ 
+     print_txt = (
+         f'\r\n{"=" * os.get_terminal_size().columns}\r\n'
+         '\033[1m Prompt + Generated Output\033[0m\r\n'
+         f'{"-" * os.get_terminal_size().columns}\r\n'
+         f'{output_text}\r\n'
+         f'{"-" * os.get_terminal_size().columns}\r\n'
+         '\r\nGeneration took'
+         f'\033[1m\033[92m {round(generation_time, 2)} \033[0m'
+         'seconds.\r\n'
+     )
+     print(print_txt)
generation_config.json CHANGED
@@ -2,5 +2,5 @@
    "_from_model_config": true,
    "bos_token_id": 1,
    "eos_token_id": 2,
-   "transformers_version": "4.44.2"
+   "transformers_version": "4.39.3"
  }
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:d3bf5efdf37171c999eaac6940f6e7979f83ad68e164342ec587fba365e602ec
- size 4319591488
+ oid sha256:9b2de324cd4a3c6a4d08d8b105d3145ef1cffdee7a4d669cae31d9bcb5c1197d
+ size 2159808696
modeling_openelm.py ADDED
@@ -0,0 +1,1008 @@
+ #
+ # For licensing see accompanying LICENSE file.
+ # Copyright (C) 2024 Apple Inc. All Rights Reserved.
+ #
+ 
+ from typing import List, Optional, Tuple, Union
+ 
+ import torch
+ import torch.utils.checkpoint
+ from torch import Tensor, nn
+ from torch.nn import CrossEntropyLoss
+ from torch.nn import functional as F
+ from transformers import PreTrainedModel
+ from transformers.activations import ACT2FN
+ from transformers.cache_utils import Cache, DynamicCache, StaticCache
+ from transformers.modeling_outputs import (
+     BaseModelOutputWithPast,
+     CausalLMOutputWithPast,
+ )
+ from transformers.utils import logging
+ 
+ logger = logging.get_logger(__name__)
+ 
+ # This import has to be relative; otherwise, when setting trust_remote_code=True,
+ # huggingface transformers won't be able to load the module correctly.
+ from .configuration_openelm import OpenELMConfig, make_divisible
+ 
+ 
+ class OpenELMRMSNorm(nn.Module):
+     def __init__(self, num_features: int, eps: float = 1e-6):
+         """
+         Initialize the OpenELMRMSNorm normalization layer.
+ 
+         Args:
+             num_features (int): The feature dimension of the input tensor.
+             eps (float, optional): A small value added to the denominator for numerical stability. Default is 1e-6.
+ 
+         Attributes:
+             eps (float): A small value added to the denominator for numerical stability.
+             weight (nn.Parameter): Learnable scaling parameter.
+ 
+         """
+         super().__init__()
+         self.eps = eps
+         self.weight = nn.Parameter(torch.ones(num_features))
+         self.num_features = num_features
+ 
+     def _norm(self, x: Tensor) -> Tensor:
+         """
+         Apply the OpenELMRMSNorm normalization to the input tensor.
+ 
+         Args:
+             x (torch.Tensor): The input tensor.
+ 
+         Returns:
+             torch.Tensor: The normalized tensor.
+ 
+         """
+         return x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + self.eps)
+ 
+     def forward(self, x: Tensor) -> Tensor:
+         """
+         Forward pass through the OpenELMRMSNorm layer.
+ 
+         Args:
+             x (torch.Tensor): The input tensor.
+ 
+         Returns:
+             torch.Tensor: The output tensor after applying OpenELMRMSNorm.
+ 
+         """
+         output = self._norm(x.float()).type_as(x)
+         return output * self.weight
+ 
+     def extra_repr(self) -> str:
+         return (
+             super().extra_repr() + f"num_features={self.num_features}, eps={self.eps}"
+         )
+ 
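+ 
+ # In formula form (illustrative): RMSNorm(x) = weight * x / sqrt(mean(x**2, dim=-1) + eps);
+ # the normalization is computed in float32 and the result is cast back to the input dtype.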
+ 
+ class OpenELMPreTrainedModel(PreTrainedModel):
+     config_class = OpenELMConfig
+     base_model_prefix = "transformer"
+     supports_gradient_checkpointing = True
+     _no_split_modules = ["OpenELMDecoderLayer"]
+     _skip_keys_device_placement = "past_key_values"
+ 
+     def __init__(self, *inputs, **kwargs) -> None:
+         super().__init__(*inputs, **kwargs)
+ 
+     def _init_weights(self, module: nn.Module) -> None:
+         """Initialize the weights."""
+         if isinstance(module, nn.Linear):
+             # Slightly different from the TF version which uses truncated_normal for initialization
+             # cf https://github.com/pytorch/pytorch/pull/5617
+             module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
+             if module.bias is not None:
+                 module.bias.data.zero_()
+         elif isinstance(module, nn.Embedding):
+             module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
+             if module.padding_idx is not None:
+                 module.weight.data[module.padding_idx].zero_()
+         elif isinstance(module, OpenELMRMSNorm):
+             module.weight.data.fill_(1.0)
+ 
+ 
+ def _rotate_half(x: Tensor) -> Tensor:
+     x1, x2 = x.chunk(2, dim=-1)
+     return torch.cat((-x2, x1), dim=-1)
+ 
+ 
+ def _apply_rotary_pos_emb(x: Tensor, pos_sin: Tensor, pos_cos: Tensor) -> Tensor:
+     return (x * pos_cos) + (_rotate_half(x) * pos_sin)
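+ 
+ 
+ # Illustrative note: _rotate_half pairs feature i with feature i + d/2, so
+ # _apply_rotary_pos_emb computes the standard RoPE rotation
+ # [x1, x2] -> [x1*cos - x2*sin, x2*cos + x1*sin] with position-dependent angles.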
+ 
+ 
+ class OpenELMRotaryEmbedding(torch.nn.Module):
+     """
+     The rotary position embeddings (aka RoPE) from `RoFormer <https://arxiv.org/abs/2104.09864>`_.
+ 
+     RoPE encodes the position information of tokens using a rotation matrix, and is able to capture
+     explicit relative positional dependencies.
+ 
+     Args:
+         model_dim: The dimensionality of the model's hidden state.
+         max_seq_length: Maximum sequence length.
+         freq_constant: A constant used for computing frequencies.
+     """
+ 
+     def __init__(
+         self, model_dim: int, max_seq_length: int, freq_constant: int = 10000
+     ) -> None:
+         inv_freq = 1.0 / (
+             freq_constant
+             ** (torch.arange(0, model_dim, 2, dtype=torch.float32) / model_dim)
+         )
+         super().__init__()
+ 
+         self.model_dim = model_dim
+         self.freq_constant = freq_constant
+         self.max_seq_length = max_seq_length
+ 
+         self.register_buffer("inv_freq", inv_freq, persistent=False)
+         self._cached_cos = None
+         self._cached_sin = None
+         self._cached_seq_length = max_seq_length
+         self._compute_sin_cos_embeddings(max_seq_length)
+ 
+     def extra_repr(self) -> str:
+         return f"\tmodel_dim={self.model_dim}, max_seq_length={self.max_seq_length}, freq_constant={self.freq_constant}"
+ 
+     def _compute_sin_cos_embeddings(
+         self,
+         key_len: int,
+         key_device: torch.device = torch.device("cpu"),
+         key_dtype: torch.dtype = torch.float32,
+     ) -> None:
+         """
+         Compute sine and cosine embeddings.
+ 
+         Args:
+             key_len: Number of tokens in the key embeddings in the transformer model.
+             key_device: Device where the key embeddings are stored.
+             key_dtype: Data type of the key embeddings.
+ 
+         Returns:
+             None
+ 
+         ...note:
+             We recalculate the sine and cosine embeddings if any of the following conditions are met:
+                 1. The number of tokens in the key embeddings is greater than the cached sequence length.
+                 2. The sine and cosine caches are empty.
+                 3. The device and data type of the sine and cosine embeddings do not match the key embeddings.
+         """
+         if (
+             key_len > self._cached_seq_length
+             or self._cached_cos is None
+             or (self._cached_cos is not None and self._cached_cos.device != key_device)
+             or (self._cached_cos is not None and self._cached_cos.dtype != key_dtype)
+             or self._cached_sin is None
+             or (self._cached_sin is not None and self._cached_sin.device != key_device)
+             or (self._cached_sin is not None and self._cached_sin.dtype != key_dtype)
+         ):
+             self._cached_seq_length = max(key_len, self._cached_seq_length)
+ 
+             # The shape of 'pos_index' is [number of key tokens]
+             pos_index = torch.arange(
+                 self._cached_seq_length,
+                 dtype=torch.float32,
+                 device=self.inv_freq.device,
+             )
+             # The shape of 'pos_index_theta' is [number of key tokens, model dimension]
+             pos_index_theta = torch.einsum("i,j->ij", pos_index, self.inv_freq)
+             # The shape of 'emb' is [number of key tokens, model dimension]
+             emb = torch.cat((pos_index_theta, pos_index_theta), dim=-1)
+ 
+             # the shape of cos and sin embeddings is [number of key tokens, model_dim]
+             cos_emb = emb.cos().to(dtype=key_dtype, device=key_device)
+             sin_emb = emb.sin().to(dtype=key_dtype, device=key_device)
+ 
+             # the shape of cached cos and sin embeddings is [1, 1, number of key tokens, model_dim]
+             self._cached_cos = cos_emb[None, None, :, :]
+             self._cached_sin = sin_emb[None, None, :, :]
+ 
+     def forward(
+         self,
+         query: torch.Tensor,
+         key: torch.Tensor,
+     ) -> Tuple[torch.Tensor, torch.Tensor]:
+         """
+         The forward function of RoPE embeddings.
+ 
+         Args:
+             query: Query embeddings in the transformer model. The shape of query embeddings is
+                 [Batch, number of query heads, number of query tokens, model dimension].
+             key: Key embeddings in the transformer model. The shape of key embeddings is
+                 [Batch, number of key heads, number of key tokens, model dimension].
+ 
+         Returns:
+             A tuple containing the query and key embeddings with positional information. The shape of the returned
+             query and key embeddings is the same as that of the input query and key embeddings, respectively.
+ 
+         ...note:
+             The RoPE embedding computation is done in full precision. After the computation, the input query and key
+             tensors are cast to the original input datatype.
+         """
+         dim = key.shape[-1]
+         key_len = key.shape[2]
+         query_len = query.shape[2]
+ 
+         assert dim == self.model_dim
+         assert key.device == query.device
+         assert key.dtype == query.dtype
+ 
+         # In the context of self-attention, the lengths of keys and queries are equal.
+         # However, in generation tasks, such as predicting the next token in a sequence, the lengths of keys and queries
+         # can differ. For instance, when employing key-value (KV) caching for sequence prediction, the keys
+         # represent embeddings of previous tokens and the current token, while the query corresponds
+         # to the embedding of the current token only.
+         assert (
+             key_len >= query_len
+         ), "Number of keys has to be greater than or equal to number of queries."
+ 
+         query_float = query.float()
+         key_float = key.float()
+ 
+         self._compute_sin_cos_embeddings(
+             key_len, key_device=key_float.device, key_dtype=key_float.dtype
+         )
+         query_float = _apply_rotary_pos_emb(
+             x=query_float,
+             pos_sin=self._cached_sin[..., key_len - query_len : key_len, :],
+             pos_cos=self._cached_cos[..., key_len - query_len : key_len, :],
+         )
+         key_float = _apply_rotary_pos_emb(
+             x=key_float,
+             pos_sin=self._cached_sin[..., :key_len, :],
+             pos_cos=self._cached_cos[..., :key_len, :],
+         )
+ 
+         return query_float.type_as(query), key_float.type_as(key)
+ 
+ 
+ class OpenELMMultiHeadCausalAttention(nn.Module):
+     def __init__(self, config: OpenELMConfig, layer_idx: int) -> None:
+         super().__init__()
+         self.layer_idx = layer_idx
+         head_dim = config.head_dim
+         q_heads = config.num_query_heads[layer_idx]
+         k_heads = config.num_kv_heads[layer_idx]
+         v_heads = config.num_kv_heads[layer_idx]
+ 
+         self.qkv_proj = nn.Linear(
+             in_features=config.model_dim,
+             out_features=(q_heads + k_heads + v_heads) * head_dim,
+             bias=False,
+         )
+ 
+         self.pos_embedding = OpenELMRotaryEmbedding(
+             model_dim=config.head_dim,
+             max_seq_length=config.rope_max_length,
+             freq_constant=config.rope_freq_constant,
+         )
+ 
+         if config.normalize_qk_projections:
+             self.q_norm = OpenELMRMSNorm(
+                 num_features=config.head_dim,
+             )
+             self.k_norm = OpenELMRMSNorm(
+                 num_features=config.head_dim,
+             )
+         else:
+             self.q_norm = None
+             self.k_norm = None
+ 
+         self.out_proj = nn.Linear(
+             in_features=q_heads * head_dim,
+             out_features=config.model_dim,
+             bias=False,
+         )
+ 
+         self.head_dim = config.head_dim
+         self.num_q_heads = q_heads
+         self.num_k_heads = k_heads
+         self.num_v_heads = v_heads
+         self.transformer_dim = config.model_dim
+         self.num_groups = self.num_q_heads // self.num_k_heads
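+         # Grouped-query attention: each key/value head serves num_groups query heads. For example
+         # (a hypothetical config): 12 query heads with 3 KV heads give num_groups = 4.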
+ 
+     def extra_repr(self) -> str:
+         return (
+             super().extra_repr()
+             + f"query_heads={self.num_q_heads}, key_heads={self.num_k_heads}, value_heads={self.num_v_heads}"
+         )
+ 
+     def forward(
+         self,
+         hidden_states: torch.Tensor,
+         attention_mask: Optional[torch.Tensor] = None,
+         past_key_value: Optional[Cache] = None,
+         output_attentions: bool = False,
+         use_cache: bool = False,
+         cache_position: Optional[torch.LongTensor] = None,
+     ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
+         """
+         Forward pass of multi-head self-attention.
+ 
+         Args:
+             hidden_states: Input tensor of the shape [batch size, sequence length, model dimension].
+             attention_mask: Optional mask that is merged with the causal mask before attention.
+             past_key_value: Cache storing the cached keys and values.
+             output_attentions: Whether to return attention weights.
+             use_cache: Specifies whether to use the kv-cache for generation.
+             cache_position: Positions used for updating the kv-cache.
+ 
+         Returns:
+             The output of the same shape as the input, optionally with a tensor containing cached keys and values.
+         """
+ 
+         # scaled_dot_product_attention does not return attention weights, so set output_attentions to False.
+         output_attentions = False
+         batch_size, seq_length, d_model = hidden_states.size()
+ 
+         # [B, S, d] --> [B, S, (q_h + k_h + v_h) * h]
+         qkv = self.qkv_proj(hidden_states)
+         # [B, S, (q_h + k_h + v_h) * h] --> [B, S, (q_h + k_h + v_h), h]
+         qkv = qkv.reshape(
+             batch_size,
+             seq_length,
+             self.num_q_heads + self.num_k_heads + self.num_v_heads,
+             self.head_dim,
+         )
+         # [B, S, (q_h + k_h + v_h), h] --> [B, (q_h + k_h + v_h), S, h]
+         qkv = qkv.transpose(1, 2)
+         # [B, (q_h + k_h + v_h), S, h] --> [B, q_h, S, h], [B, k_h, S, h], [B, v_h, S, h]
+         queries, keys, values = qkv.split(
+             [self.num_q_heads, self.num_k_heads, self.num_v_heads], dim=1
+         )
+ 
+         if self.q_norm is not None:
+             queries = self.q_norm(queries)
+ 
+         if self.k_norm is not None:
+             keys = self.k_norm(keys)
+ 
+         past_key_value = getattr(self, "past_key_value", past_key_value)
+ 
+         if past_key_value is not None:
+             # `cache_position` is needed to write the new entries to the right slots of a static cache.
+             cache_kwargs = {"cache_position": cache_position}
+             keys, values = past_key_value.update(
+                 keys, values, self.layer_idx, cache_kwargs
+             )
+ 
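+         # Note that RoPE is applied after the cache update: cached keys are stored unrotated, and
+         # the full key sequence is re-rotated on every forward pass.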
+         # Add positional embedding
+         queries, keys = self.pos_embedding(queries, keys)
+ 
+         if self.num_groups != 1:
+             # GQA
+             # [B, k_h, S, h] --> [B, q_h, S, h]
+             keys = keys.repeat_interleave(self.num_groups, dim=1)
+             # [B, v_h, S, h] --> [B, q_h, S, h]
+             values = values.repeat_interleave(self.num_groups, dim=1)
+ 
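+         # Slice the precomputed causal mask to the rows of the current query positions
+         # (cache_position) and the columns of all keys currently in the cache.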
+         causal_mask = attention_mask
+         if attention_mask is not None and cache_position is not None:
+             causal_mask = causal_mask[:, :, cache_position, : keys.shape[-2]]
+ 
+         attn_output = F.scaled_dot_product_attention(
+             queries,
+             keys,
+             values,
+             attn_mask=causal_mask,
+             dropout_p=0.0,
+         )
+ 
+         attn_output = attn_output.transpose(1, 2).contiguous()
+         attn_output = attn_output.reshape(
+             batch_size, seq_length, self.num_q_heads * self.head_dim
+         )
+         attn_output = self.out_proj(attn_output)
+         # Attention weights are never materialized by scaled_dot_product_attention.
+         attn_weights = None
+         return attn_output, attn_weights, past_key_value
+ 
+ 
+ class OpenELMFeedForwardNetwork(nn.Module):
+     def __init__(self, config: OpenELMConfig, layer_idx: int) -> None:
+         super().__init__()
+         ffn_multiplier = config.ffn_multipliers[layer_idx]
+         intermediate_dim = int(
+             make_divisible(
+                 ffn_multiplier * config.model_dim,
+                 divisor=config.ffn_dim_divisor,
+             )
+         )
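+         # Illustration (hypothetical values, assuming make_divisible rounds to a multiple of the
+         # divisor): ffn_multiplier = 4.0, model_dim = 768, and ffn_dim_divisor = 256 give
+         # intermediate_dim = 3072, already a multiple of 256.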
+         if config.ffn_with_glu:
+             # FFN with a gated linear unit, as described in https://arxiv.org/abs/2002.05202v1.
+             self.proj_1 = nn.Linear(
+                 in_features=config.model_dim,
+                 out_features=2 * intermediate_dim,
+                 bias=False,
+             )
+             self.proj_2 = nn.Linear(
+                 in_features=intermediate_dim,
+                 out_features=config.model_dim,
+                 bias=False,
+             )
+             self.ffn_with_glu = True
+         else:
+             # Standard FFN, as described in https://arxiv.org/abs/1706.03762.
+             self.proj_1 = nn.Linear(
+                 in_features=config.model_dim,
+                 out_features=intermediate_dim,
+                 bias=False,
+             )
+             self.proj_2 = nn.Linear(
+                 in_features=intermediate_dim,
+                 out_features=config.model_dim,
+                 bias=False,
+             )
+             self.ffn_with_glu = False
+ 
+         self.act = ACT2FN[config.activation_fn_name]
+ 
+     def extra_repr(self) -> str:
+         return super().extra_repr() + f"(ffn_with_glu) : {self.ffn_with_glu}"
+ 
+     def forward(self, x: Tensor) -> Tensor:
+         """Forward function of FFN layer.
+ 
+         Args:
+             x: Input tensor of the shape [batch size, sequence length, model dimension].
+ 
+         Returns:
+             A tensor of the same shape as the input.
+         """
+         if self.ffn_with_glu:
+             y_12 = self.proj_1(x)
+             y_1, y_2 = y_12.chunk(2, dim=-1)
+             y = self.act(y_1) * y_2
+             return self.proj_2(y)
+         else:
+             return self.proj_2(self.act(self.proj_1(x)))
+ 
+ 
+ class OpenELMDecoderLayer(nn.Module):
+     def __init__(self, config: OpenELMConfig, layer_idx: int) -> None:
+         super().__init__()
+         self.attn = OpenELMMultiHeadCausalAttention(config=config, layer_idx=layer_idx)
+         self.ffn = OpenELMFeedForwardNetwork(config=config, layer_idx=layer_idx)
+         self.ffn_norm = OpenELMRMSNorm(
+             num_features=config.model_dim,
+         )
+         self.attn_norm = OpenELMRMSNorm(
+             num_features=config.model_dim,
+         )
+ 
+     def forward(
+         self,
+         hidden_states: torch.Tensor,
+         attention_mask: Optional[torch.Tensor] = None,
+         position_ids: Optional[torch.LongTensor] = None,
+         past_key_value: Optional[Tuple[torch.Tensor]] = None,
+         output_attentions: Optional[bool] = False,
+         use_cache: Optional[bool] = False,
+         cache_position: Optional[torch.LongTensor] = None,
+         **kwargs,
+     ) -> Tuple[
+         torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]
+     ]:
+         """
+         Args:
+             hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
+             attention_mask (`torch.FloatTensor`, *optional*):
+                 attention mask of size `(batch_size, sequence_length)` if flash attention is used or `(batch_size, 1,
+                 query_sequence_length, key_sequence_length)` if default attention is used.
+             output_attentions (`bool`, *optional*):
+                 Whether or not to return the attentions tensors of all attention layers. See `attentions` under
+                 returned tensors for more detail.
+             use_cache (`bool`, *optional*):
+                 If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding
+                 (see `past_key_values`).
+             past_key_value (`Tuple(torch.FloatTensor)`, *optional*): cached past key and value projection states
+         """
+         residual = hidden_states
+         hidden_states = self.attn_norm(hidden_states)
+ 
+         # Self Attention
+         hidden_states, self_attn_weights, present_key_value = self.attn(
+             hidden_states=hidden_states,
+             attention_mask=attention_mask,
+             past_key_value=past_key_value,
+             output_attentions=output_attentions,
+             use_cache=use_cache,
+             cache_position=cache_position,
+             **kwargs,
+         )
+         hidden_states = residual + hidden_states
+ 
+         # Fully Connected
+         residual = hidden_states
+         hidden_states = self.ffn_norm(hidden_states)
+         hidden_states = self.ffn(hidden_states)
+         hidden_states = residual + hidden_states
+ 
+         outputs = (hidden_states,)
+ 
+         if output_attentions:
+             outputs += (self_attn_weights,)
+ 
+         if use_cache:
+             outputs += (present_key_value,)
+ 
+         return outputs
+ 
+ 
+ class OpenELMModel(OpenELMPreTrainedModel):
+     config_class = OpenELMConfig
+ 
+     def __init__(self, config: OpenELMConfig):
+         super().__init__(config)
+         self.config = config
+ 
+         self.token_embeddings = nn.Embedding(
+             embedding_dim=config.model_dim,
+             num_embeddings=config.vocab_size,
+         )
+ 
+         self.layers = nn.ModuleList(
+             OpenELMDecoderLayer(config=config, layer_idx=layer_idx)
+             for layer_idx in range(config.num_transformer_layers)
+         )
+         self.norm = OpenELMRMSNorm(num_features=config.model_dim)
+         if config.share_input_output_layers:
+             self.classifier = None
+         else:
+             self.classifier = nn.Linear(
+                 in_features=config.model_dim,
+                 out_features=config.vocab_size,
+                 bias=False,
+             )
+         self.num_transformer_layers = config.num_transformer_layers
+         self.gradient_checkpointing = False
+ 
+         # Register a causal mask to separate causal and padding mask creation. Merging happens in the attention class.
+         # NOTE: This is not friendly with TorchScript, ONNX, ExportedProgram serialization for very large `max_context_length`.
+         causal_mask = torch.full(
+             (config.max_context_length, config.max_context_length),
+             fill_value=True,
+             dtype=torch.bool,
+         )
+         self.register_buffer(
+             "causal_mask", torch.triu(causal_mask, diagonal=1), persistent=False
+         )
+ 
+         # Initialize weights and apply final processing
+         self.post_init()
+         self.reset_parameters(config=config)
+ 
+     def get_input_embeddings(self):
+         return self.token_embeddings
+ 
+     def set_input_embeddings(self, new_embeddings: torch.Tensor):
+         self.token_embeddings = new_embeddings
+ 
+     def reset_parameters(self, config: OpenELMConfig) -> None:
+         """Initialize the layers of the language model.
+ 
+         The initialization scheme follows `OPT <https://arxiv.org/pdf/2205.01068.pdf>`_.
+ 
+         Args:
+             config: Model configuration, used to compute the depth-scaled standard deviation.
+ 
+         Returns:
+             None
+         """
+         for module in self.modules():
+             if isinstance(module, nn.Linear):
+                 std = module.in_features**-0.5
+                 torch.nn.init.normal_(module.weight, mean=0.0, std=std)
+                 if module.bias is not None:
+                     torch.nn.init.zeros_(module.bias)
+             elif isinstance(module, nn.Embedding):
+                 std = module.embedding_dim**-0.5
+                 torch.nn.init.normal_(module.weight, mean=0.0, std=std)
+             elif isinstance(module, OpenELMRMSNorm):
+                 if module.weight is not None:
+                     torch.nn.init.ones_(module.weight)
+                 if hasattr(module, "bias") and module.bias is not None:
+                     torch.nn.init.zeros_(module.bias)
+ 
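+         # Depth-scaled initialization (a Megatron-style scheme): shrinking the output projections
+         # by (2 * n_layers)^-0.5 keeps the residual-stream variance roughly constant with depth.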
+         model_dim = config.model_dim
+         n_layers = config.num_transformer_layers
+         std = (model_dim**-0.5) * ((2 * n_layers) ** -0.5)
+         for param_name, param in self.named_parameters():
+             if param_name.endswith("out_proj.weight") or param_name.endswith(
+                 "ffn.proj_2.weight"
+             ):
+                 torch.nn.init.normal_(param, mean=0.0, std=std)
+ 
+     def forward(
+         self,
+         input_ids: torch.LongTensor = None,
+         attention_mask: Optional[torch.Tensor] = None,
+         position_ids: Optional[torch.LongTensor] = None,
+         past_key_values: Optional[List[torch.FloatTensor]] = None,
+         inputs_embeds: Optional[torch.FloatTensor] = None,
+         use_cache: Optional[bool] = None,
+         output_attentions: Optional[bool] = None,
+         output_hidden_states: Optional[bool] = None,
+         return_dict: Optional[bool] = None,
+         cache_position: Optional[torch.LongTensor] = None,
+     ) -> Union[Tuple, BaseModelOutputWithPast]:
+         output_attentions = (
+             output_attentions
+             if output_attentions is not None
+             else self.config.output_attentions
+         )
+         output_hidden_states = (
+             output_hidden_states
+             if output_hidden_states is not None
+             else self.config.output_hidden_states
+         )
+         use_cache = use_cache if use_cache is not None else self.config.use_cache
+         return_dict = (
+             return_dict if return_dict is not None else self.config.use_return_dict
+         )
+ 
+         if (input_ids is None) ^ (inputs_embeds is not None):
+             raise ValueError(
+                 "You must specify exactly one of input_ids or inputs_embeds."
+             )
+ 
+         if self.gradient_checkpointing and self.training and use_cache:
+             logger.warning_once(
+                 "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`."
+             )
+             use_cache = False
+ 
+         if inputs_embeds is None:
+             inputs_embeds = self.token_embeddings(input_ids)
+ 
+         past_seen_tokens = 0
+         if use_cache:  # kept for BC (cache positions)
+             if not isinstance(past_key_values, StaticCache):
+                 past_key_values = DynamicCache.from_legacy_cache(past_key_values)
+                 past_seen_tokens = past_key_values.get_seq_length()
+ 
+         if cache_position is None:
+             cache_position = torch.arange(
+                 past_seen_tokens,
+                 past_seen_tokens + inputs_embeds.shape[1],
+                 device=inputs_embeds.device,
+             )
+ 
+         if position_ids is None:
+             position_ids = cache_position.unsqueeze(0)
+ 
+         causal_mask = self._update_causal_mask(attention_mask, inputs_embeds)
+ 
+         # No absolute position embeddings are added here; positional information is injected via
+         # RoPE inside the attention layers.
+         hidden_states = inputs_embeds
+ 
+         # decoder layers
+         all_hidden_states = () if output_hidden_states else None
+         all_self_attns = () if output_attentions else None
+         next_decoder_cache = None
+ 
+         for decoder_layer in self.layers:
+             if output_hidden_states:
+                 all_hidden_states += (hidden_states,)
+ 
+             if self.gradient_checkpointing and self.training:
+                 layer_outputs = self._gradient_checkpointing_func(
+                     decoder_layer.__call__,
+                     hidden_states,
+                     causal_mask,
+                     position_ids,
+                     past_key_values,
+                     output_attentions,
+                     use_cache,
+                     cache_position,
+                 )
+             else:
+                 layer_outputs = decoder_layer(
+                     hidden_states,
+                     attention_mask=causal_mask,
+                     position_ids=position_ids,
+                     past_key_value=past_key_values,
+                     output_attentions=output_attentions,
+                     use_cache=use_cache,
+                     cache_position=cache_position,
+                 )
+ 
+             hidden_states = layer_outputs[0]
+ 
+             if use_cache:
+                 next_decoder_cache = layer_outputs[2 if output_attentions else 1]
+ 
+             if output_attentions:
+                 all_self_attns += (layer_outputs[1],)
+ 
+         hidden_states = self.norm(hidden_states)
+ 
+         # add hidden states from the last decoder layer
+         if output_hidden_states:
+             all_hidden_states += (hidden_states,)
+ 
+         next_cache = None
+         if use_cache:
+             next_cache = (
+                 next_decoder_cache.to_legacy_cache()
+                 if isinstance(next_decoder_cache, Cache)
+                 else next_decoder_cache
+             )
+         if not return_dict:
+             return tuple(
+                 v
+                 for v in [hidden_states, next_cache, all_hidden_states, all_self_attns]
+                 if v is not None
+             )
+         return BaseModelOutputWithPast(
+             last_hidden_state=hidden_states,
+             past_key_values=next_cache,
+             hidden_states=all_hidden_states,
+             attentions=all_self_attns,
+         )
+ 
+     def _update_causal_mask(self, attention_mask, input_tensor):
+         if self.config._attn_implementation == "flash_attention_2":
+             if attention_mask is not None and 0.0 in attention_mask:
+                 return attention_mask
+             return None
+ 
+         batch_size, seq_length = input_tensor.shape[:2]
+         dtype = input_tensor.dtype
+         device = input_tensor.device
+ 
+         # support going beyond cached `max_position_embedding`
+         if seq_length > self.causal_mask.shape[-1]:
+             causal_mask = torch.full(
+                 (2 * self.causal_mask.shape[-1], 2 * self.causal_mask.shape[-1]),
+                 fill_value=1,
+             )
+             self.register_buffer(
+                 "causal_mask", torch.triu(causal_mask, diagonal=1), persistent=False
+             )
+ 
+         # We use the current dtype to avoid any overflows. Halving the dtype's minimum keeps masked
+         # logits finite when mask values are combined, which avoids NaNs in the softmax.
+         min_dtype = torch.finfo(dtype).min / 2
+         causal_mask = (
+             self.causal_mask[None, None, :, :].repeat(batch_size, 1, 1, 1).to(dtype)
+             * min_dtype
+         )
+ 
+         causal_mask = causal_mask.to(dtype=dtype, device=device)
+         if attention_mask is not None and attention_mask.dim() == 2:
+             mask_length = attention_mask.shape[-1]
+             padding_mask = causal_mask[..., :mask_length].eq(0.0) * attention_mask[
+                 :, None, None, :
+             ].eq(0.0)
+             causal_mask[..., :mask_length] = causal_mask[..., :mask_length].masked_fill(
+                 padding_mask, min_dtype
+             )
+ 
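+         # Mask semantics: the float mask is added to the attention logits inside
+         # scaled_dot_product_attention; allowed positions contribute 0.0 and masked positions
+         # contribute min_dtype.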
+         if self.config._attn_implementation == "sdpa" and attention_mask is not None:
+             # For dynamo, rather use a check on fullgraph=True once this is possible (https://github.com/pytorch/pytorch/pull/120400).
+             is_tracing = (
+                 torch.jit.is_tracing()
+                 or isinstance(input_tensor, torch.fx.Proxy)
+                 or (hasattr(torch, "_dynamo") and torch._dynamo.is_compiling())
+             )
+             if not is_tracing and torch.any(attention_mask != 1):
+                 # Attend to all tokens in masked rows from the causal_mask, for example the relevant first rows when
+                 # using left padding. This is required by F.scaled_dot_product_attention's memory-efficient attention path.
+                 # Details: https://github.com/pytorch/pytorch/issues/110213
+                 causal_mask = causal_mask.mul(
+                     ~torch.all(causal_mask == min_dtype, dim=-1, keepdim=True)
+                 ).to(dtype)
+ 
+         return causal_mask
+ 
+ 
+ class OpenELMForCausalLM(OpenELMPreTrainedModel):
+     _tied_weights_keys = ["lm_head.weight"]
+ 
+     def __init__(self, config: OpenELMConfig):
+         super().__init__(config)
+         self.transformer = OpenELMModel(config)
+         self.vocab_size = config.vocab_size
+         if config.share_input_output_layers:
+             self.lm_head = None
+         else:
+             self.lm_head = nn.Linear(config.model_dim, config.vocab_size, bias=False)
+ 
+         # Initialize weights and apply final processing
+         self.post_init()
+ 
+     def get_input_embeddings(self):
+         return self.transformer.token_embeddings
+ 
+     def set_input_embeddings(self, value):
+         self.transformer.token_embeddings = value
+ 
+     def get_output_embeddings(self):
+         return self.lm_head
+ 
+     def set_output_embeddings(self, new_embeddings):
+         self.lm_head = new_embeddings
+ 
+     def set_decoder(self, decoder):
+         self.transformer = decoder
+ 
+     def get_decoder(self):
+         return self.transformer
+ 
+     def forward(
+         self,
+         input_ids: torch.LongTensor = None,
+         attention_mask: Optional[torch.Tensor] = None,
+         position_ids: Optional[torch.LongTensor] = None,
+         past_key_values: Optional[List[torch.FloatTensor]] = None,
+         inputs_embeds: Optional[torch.FloatTensor] = None,
+         labels: Optional[torch.LongTensor] = None,
+         use_cache: Optional[bool] = None,
+         output_attentions: Optional[bool] = None,
+         output_hidden_states: Optional[bool] = None,
+         return_dict: Optional[bool] = None,
+         cache_position: Optional[torch.LongTensor] = None,
+     ) -> Union[Tuple, CausalLMOutputWithPast]:
+         output_attentions = (
+             output_attentions
+             if output_attentions is not None
+             else self.config.output_attentions
+         )
+         output_hidden_states = (
+             output_hidden_states
+             if output_hidden_states is not None
+             else self.config.output_hidden_states
+         )
+         return_dict = (
+             return_dict if return_dict is not None else self.config.use_return_dict
+         )
+         # decoder outputs consist of (dec_features, layer_state, dec_hidden, dec_attn)
+         outputs = self.transformer(
+             input_ids=input_ids,
+             attention_mask=attention_mask,
+             position_ids=position_ids,
+             past_key_values=past_key_values,
+             inputs_embeds=inputs_embeds,
+             use_cache=use_cache,
+             output_attentions=output_attentions,
+             output_hidden_states=output_hidden_states,
+             return_dict=return_dict,
+             cache_position=cache_position,
+         )
+ 
+         hidden_states = outputs[0]
+         if self.lm_head is None:
+             # shared input/output embeddings
+             logits = F.linear(
+                 hidden_states, weight=self.transformer.token_embeddings.weight
+             )
+         else:
+             logits = self.lm_head(hidden_states)
+         logits = logits[..., : self.config.vocab_size]
+         loss = None
+         if labels is not None:
+             # Shift so that tokens < n predict n
+             shift_logits = logits[..., :-1, :].contiguous()
+             shift_labels = labels[..., 1:].contiguous()
+             # Flatten the tokens
+             loss_fct = CrossEntropyLoss()
+             shift_logits = shift_logits.view(-1, self.config.vocab_size)
+             shift_labels = shift_labels.view(-1)
+             # Enable model parallelism
+             shift_labels = shift_labels.to(shift_logits.device)
+             loss = loss_fct(shift_logits, shift_labels)
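+             # For example (illustrative): given labels [t0, t1, t2, t3], the logits at positions
+             # 0..2 are trained to predict t1..t3; the last position has no target.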
+ 
+         if not return_dict:
+             output = (logits,) + outputs[1:]
+             return (loss,) + output if loss is not None else output
+ 
+         return CausalLMOutputWithPast(
+             loss=loss,
+             logits=logits,
+             past_key_values=outputs.past_key_values,
+             hidden_states=outputs.hidden_states,
+             attentions=outputs.attentions,
+         )
+ 
+     def prepare_inputs_for_generation(
+         self,
+         input_ids,
+         past_key_values=None,
+         attention_mask=None,
+         inputs_embeds=None,
+         **kwargs,
+     ):
+         past_length = 0
+         if past_key_values is not None:
+             if isinstance(past_key_values, Cache):
+                 cache_length = past_key_values.get_seq_length()
+                 past_length = past_key_values.seen_tokens
+                 max_cache_length = past_key_values.get_max_length()
+             else:
+                 cache_length = past_length = past_key_values[0][0].shape[2]
+                 max_cache_length = None
+ 
+             # Keep only the unprocessed tokens:
+             # 1 - If the length of the attention_mask exceeds the length of input_ids, then we are in a setting where
+             # some of the inputs are exclusively passed as part of the cache (e.g. when passing inputs_embeds as
+             # input)
+             if (
+                 attention_mask is not None
+                 and attention_mask.shape[1] > input_ids.shape[1]
+             ):
+                 input_ids = input_ids[:, -(attention_mask.shape[1] - past_length) :]
+             # 2 - If the past_length is smaller than input_ids', then input_ids holds all input tokens. We can discard
+             # input_ids based on the past_length.
+             elif past_length < input_ids.shape[1]:
+                 input_ids = input_ids[:, past_length:]
+             # 3 - Otherwise (past_length >= input_ids.shape[1]), let's assume input_ids only has unprocessed tokens.
+ 
+             # If we are about to go beyond the maximum cache length, we need to crop the input attention mask.
+             if (
+                 max_cache_length is not None
+                 and attention_mask is not None
+                 and cache_length + input_ids.shape[1] > max_cache_length
+             ):
+                 attention_mask = attention_mask[:, -max_cache_length:]
+ 
+         position_ids = kwargs.get("position_ids", None)
+         if attention_mask is not None and position_ids is None:
+             # create position_ids on the fly for batch generation
+             position_ids = attention_mask.long().cumsum(-1) - 1
+             position_ids.masked_fill_(attention_mask == 0, 1)
+             if past_key_values:
+                 position_ids = position_ids[:, -input_ids.shape[1] :]
+ 
+         if self.generation_config.cache_implementation == "static":
+             # generation with static cache
+             cache_position = kwargs.get("cache_position", None)
+             if cache_position is None:
+                 past_length = 0
+             else:
+                 past_length = cache_position[-1] + 1
+             input_ids = input_ids[:, past_length:]
+             position_ids = position_ids[:, past_length:]
+ 
+         # we should only keep a `cache_position` in generate, and do +=1.
+         # same goes for position ids. Could also help with continued generation.
+         cache_position = torch.arange(
+             past_length,
+             past_length + position_ids.shape[-1],
+             device=position_ids.device,
+         )
+ 
+         # if `inputs_embeds` are passed, we only want to use them in the 1st generation step
+         if inputs_embeds is not None and past_key_values is None:
+             model_inputs = {"inputs_embeds": inputs_embeds}
+         else:
+             # The `contiguous()` here is necessary to have a static stride during decoding. torchdynamo otherwise
+             # recompiles graphs as the stride of the inputs is a guard. Ref: https://github.com/huggingface/transformers/pull/29114
+             # We could use `next_tokens` directly instead.
+             model_inputs = {"input_ids": input_ids.contiguous()}
+ 
+         model_inputs.update(
+             {
+                 "position_ids": position_ids.contiguous(),
+                 "cache_position": cache_position,
+                 "past_key_values": past_key_values,
+                 "use_cache": kwargs.get("use_cache"),
+                 "attention_mask": attention_mask,
+             }
+         )
+         return model_inputs
+ 
+     @staticmethod
+     def _reorder_cache(past_key_values, beam_idx):
+         reordered_past = ()
+         for layer_past in past_key_values:
+             reordered_past += (
+                 tuple(
+                     past_state.index_select(0, beam_idx.to(past_state.device))
+                     for past_state in layer_past
+                 ),
+             )
+         return reordered_past
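+ 
+ # Example usage (a sketch; the model id and inputs are illustrative, not taken from this commit):
+ #   from transformers import AutoModelForCausalLM
+ #   model = AutoModelForCausalLM.from_pretrained("apple/OpenELM-270M", trust_remote_code=True)
+ #   out = model.generate(input_ids, max_new_tokens=32, use_cache=True)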