Text Generation
Transformers
Safetensors
English
llama
nvidia
llama3.1
conversational
text-generation-inference
okuchaiev committed on
Commit
dfe497a
1 Parent(s): 250db5c

Update README.md

Files changed (1)
  1. README.md +5 -8
README.md CHANGED
@@ -98,11 +98,6 @@ print(generated_text)
 
 
 
- ## Contact
-
- E-Mail: [Zhilin Wang](mailto:zhilinw@nvidia.com)
-
-
  ## Citation
 
  If you find this model useful, please cite the following works
@@ -129,14 +124,13 @@ If you find this model useful, please cite the following works
 
  ## References(s):
 
+ * [NeMo Aligner](https://arxiv.org/abs/2405.01481)
  * [HelpSteer2-Preference](https://arxiv.org/abs/2410.01257)
- * [SteerLM method](https://arxiv.org/abs/2310.05344)
- * [HelpSteer](https://arxiv.org/abs/2311.09528)
  * [HelpSteer2](https://arxiv.org/abs/2406.08673)
  * [Introducing Llama 3.1: Our most capable models to date](https://ai.meta.com/blog/meta-llama-3-1/)
  * [Meta's Llama 3.1 Webpage](https://www.llama.com/docs/model-cards-and-prompt-formats/llama3_1)
  * [Meta's Llama 3.1 Model Card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md)
-
+
 
  ## Model Architecture:
  **Architecture Type:** Transformer <br>
@@ -167,6 +161,9 @@ v1.0
 
  # Training & Evaluation:
 
+ ## Alignment methodology
+ * REINFORCE implemented in NeMo Aligner
+
  ## Datasets:
 
  **Data Collection Method by dataset** <br>
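
For context on the "Alignment methodology" section added in this commit: REINFORCE is a policy-gradient method, and the commit states it was run through NeMo Aligner. The snippet below is only a minimal PyTorch sketch of the REINFORCE objective with an assumed mean-reward baseline; it is not the NeMo Aligner implementation, and all names and numbers in it are illustrative.

```python
import torch

def reinforce_loss(logprobs, rewards, baseline=None):
    """Toy REINFORCE objective: maximize E[(R - b) * log pi(response)].

    logprobs: (batch,) summed log-probabilities of each sampled response
    rewards:  (batch,) scalar reward scores for those responses
    baseline: optional (batch,) variance-reduction baseline; if omitted,
              the batch mean reward is used (an assumption of this sketch).
    """
    b = baseline if baseline is not None else rewards.mean()
    advantages = (rewards - b).detach()  # no gradient through the advantage
    # Negative sign because optimizers minimize.
    return -(advantages * logprobs).mean()

# Usage example with made-up values standing in for model log-probs and rewards.
logprobs = torch.tensor([-12.3, -9.8, -15.1], requires_grad=True)
rewards = torch.tensor([0.7, 1.2, -0.4])
loss = reinforce_loss(logprobs, rewards)
loss.backward()
print(loss.item(), logprobs.grad)
```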