Update README.md
README.md CHANGED
@@ -16,6 +16,21 @@ This model is a GPT-Neo transformer decoder model designed using EleutherAI's re
It was trained on a thoroughly cleaned corpus of Romanian text of about 40GB, composed of Oscar, Opus, Wikipedia, literature and various other bits and pieces of text, joined together and deduplicated. Training took about a month, totaling 5.8M steps on a v3 TPU machine.

```python
from transformers import GPTNeoForCausalLM, GPT2Tokenizer

# Load the model and its tokenizer from the Hugging Face Hub
model = GPTNeoForCausalLM.from_pretrained("iliemihai/gpt-neo-romanian-125m")
tokenizer = GPT2Tokenizer.from_pretrained("iliemihai/gpt-neo-romanian-125m")

# Encode a Romanian prompt ("Who was Mihai Eminescu")
prompt = "Cine a fost mihai eminescu"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Generate with contrastive search (penalty_alpha + top_k) and decode the result
output = model.generate(input_ids, penalty_alpha=0.6, top_k=4, max_length=64)
result = tokenizer.decode(output[0], skip_special_tokens=True)

print(result)
```

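The `penalty_alpha`/`top_k` combination above enables contrastive-search decoding in recent `transformers` releases. A minimal alternative sketch, assuming the same repository id and a recent `transformers` version, uses the high-level `text-generation` pipeline:

```python
from transformers import pipeline

# High-level pipeline wrapping the same model and tokenizer
generator = pipeline("text-generation", model="iliemihai/gpt-neo-romanian-125m")

# Extra keyword arguments are forwarded to generate(), so the same
# contrastive-search settings (penalty_alpha + top_k) apply here as well
outputs = generator("Cine a fost mihai eminescu", penalty_alpha=0.6, top_k=4, max_length=64)
print(outputs[0]["generated_text"])
```
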
### Authors:
* Dumitrescu Stefan
* Mihai Ilie