Update README.md (#2)
- Update README.md (c66c58b6d926dd0d37987e69cb6083e4e40afb89)
Co-authored-by: Bartek Szmelczynski <Bearnardd@users.noreply.huggingface.co>
README.md CHANGED
@@ -11,7 +11,7 @@ datasets:
 This model is a distilled version of the [BERT base model](https://huggingface.co/bert-base-cased).
 It was introduced in [this paper](https://arxiv.org/abs/1910.01108).
 The code for the distillation process can be found
-[here](https://github.com/huggingface/transformers/tree/
+[here](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation).
 This model is cased: it does make a difference between english and English.

 All the training details on the pre-training, the uses, limitations and potential biases (included below) are the same as for [DistilBERT-base-uncased](https://huggingface.co/distilbert-base-uncased).
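As a quick illustration of the cased behavior the README describes, here is a minimal sketch using the `transformers` library (the exact subword pieces printed depend on the tokenizer's vocabulary, so the outputs shown in comments are not guaranteed):

```python
from transformers import AutoTokenizer

# Load the cased DistilBERT tokenizer referenced in the README.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased")

# Because the model is cased, capitalization is preserved, so
# "english" and "English" generally map to different token sequences.
print(tokenizer.tokenize("english"))  # lowercase form, possibly split into subwords
print(tokenizer.tokenize("English"))  # capitalized form, typically a single token
```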