New model from https://wandb.ai/wandb/huggingtweets/runs/2wwe80fw
Changed files:
- README.md +24 -14
- config.json +2 -1
- pytorch_model.bin +1 -1
- tokenizer.json +0 -0
- training_args.bin +2 -2
README.md
CHANGED
@@ -1,17 +1,27 @@
 ---
 language: en
-thumbnail: https://
+thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
 tags:
 - huggingtweets
 widget:
 - text: "My dream is"
 ---
 
-<div>
-<div
-
-
-
+<div class="inline-flex flex-col" style="line-height: 1.5;">
+<div class="flex">
+<div
+style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1377788200199065611/vkwcelvm_400x400.jpg')">
+</div>
+<div
+style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
+</div>
+<div
+style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
+</div>
+</div>
+<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
+<div style="text-align: center; font-size: 16px; font-weight: 800">Dr. Roberta Bobby</div>
+<div style="text-align: center; font-size: 14px;">@drsweety303</div>
 </div>
 
 I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
@@ -28,24 +38,24 @@ To understand how the model was developed, check the [W&B report](https://wandb.
 
 ## Training data
 
-The model was trained on
+The model was trained on tweets from Dr. Roberta Bobby.
 
-| Data |
+| Data | Dr. Roberta Bobby |
 | --- | --- |
 | Tweets downloaded | 3241 |
-| Retweets |
-| Short tweets |
-| Tweets kept |
+| Retweets | 291 |
+| Short tweets | 285 |
+| Tweets kept | 2665 |
 
-[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/
+[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/39xsuhxd/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
 
 ## Training procedure
 
 The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @drsweety303's tweets.
 
-Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/
+Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2wwe80fw) for full transparency and reproducibility.
 
-At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/
+At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2wwe80fw/artifacts) is logged and versioned.
 
 ## How to use
 
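The hunk above ends at the README's "## How to use" heading, so the body of that section is not shown in this diff. For orientation, a minimal usage sketch, assuming the model is published on the Hugging Face Hub under the id `huggingtweets/drsweety303` (the handle shown on the card):

```python
# Minimal sketch of loading the fine-tuned model with the transformers pipeline.
# The Hub id "huggingtweets/drsweety303" is an assumption based on the card above.
from transformers import pipeline

generator = pipeline("text-generation", model="huggingtweets/drsweety303")

# "My dream is" is the widget prompt declared in the README front matter.
for output in generator("My dream is", do_sample=True, max_length=50, num_return_sequences=3):
    print(output["generated_text"])
```

The `top_p: 0.95` block visible in the config.json diff below appears to belong to the model's task-specific generation parameters, which the pipeline can apply automatically.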
config.json
CHANGED
@@ -19,6 +19,7 @@
 "n_layer": 12,
 "n_positions": 1024,
 "resid_pdrop": 0.1,
+"scale_attn_weights": true,
 "summary_activation": null,
 "summary_first_dropout": 0.1,
 "summary_proj_to_labels": true,
@@ -34,7 +35,7 @@
 "top_p": 0.95
 }
 },
-"transformers_version": "4.
+"transformers_version": "4.6.0",
 "use_cache": true,
 "vocab_size": 50257
 }
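The config change adds a `scale_attn_weights` flag and records the library version that saved the config, apparently alongside the bump to transformers 4.6.0. A small sketch of inspecting these fields, reusing the assumed `huggingtweets/drsweety303` Hub id:

```python
# Sketch: load the updated config and check the fields touched by this commit.
# The Hub id "huggingtweets/drsweety303" is an assumption, as above.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("huggingtweets/drsweety303")
print(config.scale_attn_weights)                              # True: attention scores scaled by 1/sqrt(head_dim)
print(config.n_layer, config.n_positions, config.vocab_size)  # 12, 1024, 50257 per this config
```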
pytorch_model.bin
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:21a89f688f7f3ee90dfd63da5421ca192dc838de2ea4ff079d1f0cd9a468163d
 size 510408315
tokenizer.json
ADDED
The diff for this file is too large to render; see the raw diff.
training_args.bin
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:c2c8102f3c17722fe56b7f852997507e910184791659fde5726521abb42d77d7
+size 2415
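Both pytorch_model.bin and training_args.bin are tracked with Git LFS, so the diffs above only touch the pointer files: a `version` line, the SHA-256 of the object (`oid`), and its byte `size`. A small sketch of checking a locally downloaded copy against those pointer values (file name and values copied from this commit; adjust the path to wherever the file was downloaded):

```python
# Sketch: verify a downloaded file against the oid/size recorded in its LFS pointer.
import hashlib
from pathlib import Path

def matches_lfs_pointer(path: str, oid: str, size: int) -> bool:
    data = Path(path).read_bytes()
    return len(data) == size and hashlib.sha256(data).hexdigest() == oid

# Values taken from the training_args.bin pointer above.
print(matches_lfs_pointer(
    "training_args.bin",
    "c2c8102f3c17722fe56b7f852997507e910184791659fde5726521abb42d77d7",
    2415,
))
```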