|
--- |
|
language: |
|
- multilingual |
|
- af |
|
- sq |
|
- ar |
|
- an |
|
- hy |
|
- ast |
|
- az |
|
- ba |
|
- eu |
|
- bar |
|
- be |
|
- bn |
|
- inc |
|
- bs |
|
- br |
|
- bg |
|
- my |
|
- ca |
|
- ceb |
|
- ce |
|
- zh |
|
- cv |
|
- hr |
|
- cs |
|
- da |
|
- nl |
|
- en |
|
- et |
|
- fi |
|
- fr |
|
- gl |
|
- ka |
|
- de |
|
- el |
|
- gu |
|
- ht |
|
- he |
|
- hi |
|
- hu |
|
- is |
|
- io |
|
- id |
|
- ga |
|
- it |
|
- ja |
|
- jv |
|
- kn |
|
- kk |
|
- ky |
|
- ko |
|
- la |
|
- lv |
|
- lt |
|
- roa |
|
- nds |
|
- lm |
|
- mk |
|
- mg |
|
- ms |
|
- ml |
|
- mr |
|
- mn |
|
- min |
|
- ne |
|
- new |
|
- nb |
|
- nn |
|
- oc |
|
- fa |
|
- pms |
|
- pl |
|
- pt |
|
- pa |
|
- ro |
|
- ru |
|
- sco |
|
- sr |
|
- hr |
|
- scn |
|
- sk |
|
- sl |
|
- aze |
|
- es |
|
- su |
|
- sw |
|
- sv |
|
- tl |
|
- tg |
|
- th |
|
- ta |
|
- tt |
|
- te |
|
- tr |
|
- uk |
|
- ud |
|
- uz |
|
- vi |
|
- vo |
|
- war |
|
- cy |
|
- fry |
|
- pnb |
|
- yo |
|
license: apache-2.0 |
|
datasets: |
|
- wikiann |
|
examples: null |
|
widget: |
|
- text: মারভিন দি মারসিয়ান |
|
example_title: Sentence_1 |
|
- text: লিওনার্দো দা ভিঞ্চি |
|
example_title: Sentence_2 |
|
- text: বসনিয়া ও হার্জেগোভিনা |
|
example_title: Sentence_3 |
|
- text: সাউথ ইস্ট ইউনিভার্সিটি |
|
example_title: Sentence_4 |
|
- text: মানিক বন্দ্যোপাধ্যায় লেখক |
|
example_title: Sentence_5 |
|
--- |
|
|
|
# BERT multilingual base model (cased) |
|
|
|
Pretrained model on the top 104 languages with the largest Wikipedias using a masked language modeling (MLM) objective.
|
It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in |
|
[this repository](https://github.com/google-research/bert). This model is case sensitive: it makes a difference |
|
between english and English. |
|
|
|
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by |
|
the Hugging Face team. |
|
|
|
## Model description |
|
|
|
BERT is a transformers model pretrained on a large corpus of multilingual data in a self-supervised fashion. This means |
|
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of |
|
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it |
|
was pretrained with two objectives: |
|
|
|
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
|
the entire masked sentence through the model and has to predict the masked words. This is different from traditional |
|
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like |
|
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the |
|
sentence (a short fill-mask sketch follows this list).
|
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
|
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to |
|
predict if the two sentences were following each other or not. |
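
The masked language modeling objective can be tried out directly. The sketch below is illustrative only: it assumes the upstream `bert-base-multilingual-cased` checkpoint (not this fine-tuned NER model) and the standard `fill-mask` pipeline.

```python
from transformers import pipeline

# Illustrative sketch of the MLM objective with the base multilingual checkpoint:
# the model ranks candidate tokens for the [MASK] position.
unmasker = pipeline("fill-mask", model="bert-base-multilingual-cased")

for prediction in unmasker("Paris is the [MASK] of France."):
    print(prediction["token_str"], round(prediction["score"], 3))
```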
|
|
|
This way, the model learns an inner representation of the languages in the training set that can then be used to |
|
extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a |
|
standard classifier using the features produced by the BERT model as inputs. |
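
As a hedged illustration of that feature-extraction use (again assuming the upstream `bert-base-multilingual-cased` checkpoint), the hidden state at the `[CLS]` position can serve as a sentence feature for a downstream classifier:

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Extract sentence features from the pretrained encoder
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased")

inputs = tokenizer("মারভিন দি মারসিয়ান", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# The [CLS] position of the last hidden layer, shape (1, 768),
# can be used as input features for a standard classifier.
cls_features = outputs.last_hidden_state[:, 0, :]
print(cls_features.shape)
```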
|
|
|
## Intended uses & limitations |
|
|
|
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to |
|
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for |
|
fine-tuned versions on a task that interests you. |
|
|
|
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) |
|
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text |
|
generation, you should look at models like GPT2.
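
As a minimal sketch of the fine-tuning starting point for a sentence-level task (assuming a hypothetical two-label dataset), a fresh classification head can be placed on top of the pretrained encoder:

```python
from transformers import AutoModelForSequenceClassification

# A new classification head is initialized on top of the pretrained encoder;
# its weights are then trained on the labeled downstream dataset.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=2
)
```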
|
|
|
### How to use |
|
|
|
You can use this model directly with a pipeline for named entity recognition: |
|
|
|
```python |
|
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

# Load the fine-tuned NER checkpoint and its tokenizer
tokenizer = AutoTokenizer.from_pretrained("orgcatorg/bert-base-multilingual-cased-ner")
model = AutoModelForTokenClassification.from_pretrained("orgcatorg/bert-base-multilingual-cased-ner")

# Build a token-classification pipeline and run it on a Bengali example
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "মারভিন দি মারসিয়ান"  # "Marvin the Martian"

ner_results = nlp(example)
print(ner_results)
|
``` |
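
Continuing from the example above, recent versions of `transformers` can also merge word pieces into whole entity spans via the `aggregation_strategy` argument (a hedged sketch; check that your installed version supports it):

```python
# Group sub-word pieces into complete entity spans
nlp_grouped = pipeline(
    "ner",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",
)
print(nlp_grouped("লিওনার্দো দা ভিঞ্চি"))  # "Leonardo da Vinci"
```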
|
|
|
## Training data |
|
|
|
The BERT model was pretrained on the 104 languages with the largest Wikipedias. You can find the complete list |
|
[here](https://github.com/google-research/bert/blob/master/multilingual.md#list-of-languages). |
|
|
|
## Training procedure |
|
|
|
### Preprocessing |
|
|
|
The texts are tokenized using WordPiece with a shared vocabulary of 110,000 tokens; as a cased model, it preserves capitalization and accent markers. Languages with a larger Wikipedia are under-sampled and lower-resource languages are oversampled. For languages written without spaces, such as Chinese, Japanese Kanji and Korean Hanja, spaces are added around every character in the CJK Unicode range.
|
|
|
The inputs of the model are then of the form: |
|
|
|
``` |
|
[CLS] Sentence A [SEP] Sentence B [SEP] |
|
``` |
|
|
|
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in |
|
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a |
|
consecutive span of text usually longer than a single sentence. The only constraint is that the combined length of the two
"sentences" is less than 512 tokens.
|
|
|
The details of the masking procedure for each sentence are the following (a minimal code sketch follows the list):
|
- 15% of the tokens are masked. |
|
- In 80% of the cases, the masked tokens are replaced by `[MASK]`. |
|
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
|
- In the 10% remaining cases, the masked tokens are left as is. |
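
The sketch below is a simplified, illustrative implementation of this 80/10/10 scheme; it operates on plain token lists and, for brevity, does not enforce that the random replacement differs from the original token.

```python
import random

def mask_tokens(tokens, vocab, mask_prob=0.15):
    """Apply a simplified BERT-style masking scheme to a list of tokens."""
    masked = list(tokens)
    labels = [None] * len(tokens)  # None = position not used in the MLM loss
    for i, token in enumerate(tokens):
        if random.random() < mask_prob:
            labels[i] = token          # the model must predict the original token
            roll = random.random()
            if roll < 0.8:
                masked[i] = "[MASK]"   # 80%: replace with [MASK]
            elif roll < 0.9:
                masked[i] = random.choice(vocab)  # 10%: random token
            # remaining 10%: leave the token unchanged
    return masked, labels

print(mask_tokens(["the", "cat", "sat", "on", "the", "mat"],
                  vocab=["dog", "ran", "blue", "tree"]))
```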
|
|
|
|
|
### BibTeX entry and citation info |
|
|
|
```bibtex |
|
@article{DBLP:journals/corr/abs-1810-04805, |
|
author = {Jacob Devlin and |
|
Ming{-}Wei Chang and |
|
Kenton Lee and |
|
Kristina Toutanova}, |
|
title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language |
|
Understanding}, |
|
journal = {CoRR}, |
|
volume = {abs/1810.04805}, |
|
year = {2018}, |
|
url = {http://arxiv.org/abs/1810.04805}, |
|
archivePrefix = {arXiv}, |
|
eprint = {1810.04805}, |
|
timestamp = {Tue, 30 Oct 2018 20:39:56 +0100}, |
|
biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib}, |
|
bibsource = {dblp computer science bibliography, https://dblp.org} |
|
} |
|
```
|