---
language:
- en
library_name: nemo
datasets:
- the_pile
tags:
- text generation
- pytorch
- causal-lm
license: cc-by-4.0
---

# Palmyra-20B

## Model Description

Palmyra-20B was primarily pretrained on English text; a trace amount of non-English data from CommonCrawl is still present in the training corpus. Like GPT-3, Palmyra is a decoder-only model, and it was pretrained with a self-supervised causal language modeling (CLM) objective.

Palmyra is evaluated using the prompts and general experimental setup of GPT-3; see the GPT-3 paper for more details.

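As a rough illustration of the causal language modeling objective (a minimal sketch, not Palmyra's actual training code), each position is trained to predict the next token:

```python
# Minimal sketch of a causal LM loss: position t predicts token t+1.
import torch
import torch.nn.functional as F

vocab_size, batch, seq_len = 50_000, 2, 8
tokens = torch.randint(0, vocab_size, (batch, seq_len))   # input token ids
logits = torch.randn(batch, seq_len, vocab_size)          # stand-in for model outputs

loss = F.cross_entropy(
    logits[:, :-1].reshape(-1, vocab_size),  # predictions for positions 0..T-2
    tokens[:, 1:].reshape(-1),               # targets are the next tokens
)
print(loss.item())
```
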
## Getting started

### Step 1: Install NeMo and dependencies

You will need to install NVIDIA Apex and NeMo.

```
git clone https://github.com/ericharper/apex.git
cd apex
git checkout nm_v1.11.0
pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" --global-option="--fast_layer_norm" --global-option="--distributed_adam" --global-option="--deprecated_fused_adam" ./
```

```
pip install nemo_toolkit['nlp']==1.11.0
```

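As an optional sanity check (assuming the installation above completed successfully), you can verify that NeMo and its NLP collection import cleanly:

```python
# Verify the NeMo installation and NLP collection (expects version 1.11.0).
import nemo
import nemo.collections.nlp as nemo_nlp

print(nemo.__version__)
```
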
### Step 2: Launch eval server

**Note.** The example below launches the model with a tensor model parallel size (TP) of 4 and a pipeline parallel size (PP) of 1 on four GPUs.

```
git clone https://github.com/NVIDIA/NeMo.git
cd NeMo/examples/nlp/language_modeling
git checkout v1.11.0
python megatron_gpt_eval.py gpt_model_file=palmyara_gpt_20b.nemo server=True tensor_model_parallel_size=4 trainer.devices=4
```

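Loading the checkpoint can take a while. As a small convenience (a sketch that assumes the server listens on `localhost:5555`, the same port used by the client in Step 3), you can wait for the server to start accepting connections before sending prompts:

```python
# Poll until the text-generation server accepts TCP connections on port 5555.
import socket
import time

while True:
    try:
        socket.create_connection(("localhost", 5555), timeout=5).close()
        print("Server is up.")
        break
    except OSError:
        print("Waiting for server...")
        time.sleep(10)
```
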
### Step 3: Send prompts to your model!

```python
import json
import requests

# Port used by the text-generation server started in Step 2.
port_num = 5555
headers = {"Content-Type": "application/json"}


def request_data(data):
    # Send the generation request and return the generated sentences.
    resp = requests.put('http://localhost:{}/generate'.format(port_num),
                        data=json.dumps(data),
                        headers=headers)
    sentences = resp.json()['sentences']
    return sentences


data = {
    "sentences": ["Tell me an interesting fact about space travel."] * 1,
    "tokens_to_generate": 50,
    "temperature": 1.0,
    "add_BOS": True,
    "top_k": 0,
    "top_p": 0.9,
    "greedy": False,
    "all_probs": False,
    "repetition_penalty": 1.2,
    "min_tokens_to_generate": 2,
}

sentences = request_data(data)
print(sentences)
```

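In this request, `tokens_to_generate` and `min_tokens_to_generate` bound the completion length, `temperature`, `top_k`, and `top_p` control sampling when `greedy` is `False`, and `repetition_penalty` discourages repeated text; these are standard decoding knobs, so adjust them for your use case.
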
## Training Data

| Part          | MassiveText (sampling) | Tokens (B) | URL                         | Sampling ratio |
|:--------------|-----------------------:|:----------:|:----------------------------|---------------:|
| mc4 filtered  | MassiveWeb (48%)        | 1331       | gs://mc4/final/web          | 58% |
| TrustedWeb    | -                       | -          | gs://mc4/final/trusted_web  | -   |
| realnews      | News (10%)              | 21         | gs://mc4/final/news         | 10% |
| c4            | c4 (10%)                | -          | gs://mc4/final/c4           | -   |
| wikipedia-40B | wikipedia (2%)          | 2          | gs://mc4/final/wikipedia    | 5%  |
| github        | github (3%)             | -          | gs://mc4/final/github       | -   |
| books         | books (27%)             | 24         | gs://mc4/final/books        | 27% |
| youtube       | -                       | -          | gs://mc4/final/youtube      | -   |

## Evaluation results

*Zero-shot performance*, evaluated using the [LM Evaluation Test Suite from AI21](https://github.com/AI21Labs/lm-evaluation):

| ARC-Challenge | ARC-Easy | RACE-middle | RACE-high | Winogrande | RTE | BoolQ | HellaSwag | PiQA |
| ------------- | -------- | ----------- | --------- | ---------- | --- | ----- | --------- | ---- |
| 0.3976 | 0.5566 | 0.5007 | 0.4171 | 0.6133 | 0.5812 | 0.6356 | 0.6298 | 0.7492 |

## Limitations

The model was trained on data originally crawled from the Internet. This data contains toxic language and societal biases, so the model may amplify those biases and return toxic responses, especially when given toxic prompts.

## References

[1] [Improving Language Understanding by Generative Pre-Training](https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf)

[2] [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/pdf/1909.08053.pdf)

[3] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)

[4] [The Pile: An 800GB Dataset of Diverse Text for Language Modeling](https://arxiv.org/abs/2101.00027)

## License

Use of this model is covered by the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/) license. By downloading the publicly released version of the model, you accept the terms and conditions of the CC-BY-4.0 license.