---
license: mit
language:
  - en
  - fr
  - de
  - it
  - es
  - pt
  - pl
  - nl
  - ru
pipeline_tag: token-classification
inference: false
tags:
  - token-classification
  - entity-recognition
  - foundation-model
  - feature-extraction
  - mBERT
  - Multilingual Bert
  - BERT
  - generic
---

# SOTA Entity Recognition Multilingual Foundation Model by NuMind 🔥

This model provides the best embeddings for the Entity Recognition task and supports 9+ languages.

Check out other models by NuMind:

- SOTA Entity Recognition Foundation Model in English: link
- SOTA Sentiment Analysis Foundation Model: English, Multilingual

## About

Multilingual BERT fine-tuned on an artificially annotated multilingual subset of the Oscar dataset. This model provides domain- and language-independent embeddings for the Entity Recognition task. We fine-tuned it on only 9 languages, but the model can generalize to other languages supported by Multilingual BERT.

## Metrics

Read more about the evaluation protocol & datasets in our blog post.

| Model | F1 macro |
|---|---|
| bert-base-multilingual-cased | 0.5206 |
| ours | 0.5892 |
| ours + two emb | 0.6231 |

## Usage

Embeddings can be used out of the box or fine-tuned on specific datasets.

Get embeddings:

```python
import torch
import transformers

# Load the backbone with hidden states exposed so intermediate layers can be used.
model = transformers.AutoModel.from_pretrained(
    'numind/NuNER-multilingual-v0.1',
    output_hidden_states=True,
)
tokenizer = transformers.AutoTokenizer.from_pretrained(
    'numind/NuNER-multilingual-v0.1',
)

text = [
    "NuMind is an AI company based in Paris and USA.",
    "NuMind est une entreprise d'IA basée à Paris et aux États-Unis.",
    "See other models from us on https://huggingface.co/numind",
]
encoded_input = tokenizer(
    text,
    return_tensors='pt',
    padding=True,
    truncation=True,
)
output = model(**encoded_input)

# Two-emb trick: concatenate the last and the 7th-from-last hidden states
# for better quality ("ours + two emb" in the table above).
emb = torch.cat(
    (output.hidden_states[-1], output.hidden_states[-7]),
    dim=2
)

# Single emb: for better speed ("ours" in the table above).
# emb = output.hidden_states[-1]
```
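The embeddings can also be fine-tuned end to end on a specific tagging dataset. Below is a minimal sketch of that idea, assuming a linear token-classification head on top of the concatenated embeddings; it is not NuMind's training recipe, and the label set, sequence length, and single toy training step are placeholders to replace with your own data and training loop.

```python
import torch
import torch.nn as nn
import transformers

model_name = 'numind/NuNER-multilingual-v0.1'
backbone = transformers.AutoModel.from_pretrained(model_name, output_hidden_states=True)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)

num_labels = 5                             # placeholder: size of your tag set (e.g. O, B-PER, I-PER, ...)
hidden = backbone.config.hidden_size
head = nn.Linear(2 * hidden, num_labels)   # 2x because of the two-emb concatenation

optimizer = torch.optim.AdamW(
    list(backbone.parameters()) + list(head.parameters()), lr=3e-5
)

# One toy training step; replace with your own dataloader and proper label alignment.
texts = ["NuMind is an AI company based in Paris and USA."]
labels = torch.zeros(1, 16, dtype=torch.long)   # placeholder token labels, padded to max_length

batch = tokenizer(texts, return_tensors='pt', padding='max_length', truncation=True, max_length=16)
output = backbone(**batch)
emb = torch.cat((output.hidden_states[-1], output.hidden_states[-7]), dim=2)
logits = head(emb)                         # (batch, seq_len, num_labels)

loss = nn.functional.cross_entropy(logits.view(-1, num_labels), labels.view(-1))
loss.backward()
optimizer.step()
```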

## Citation

```
@misc{bogdanov2024nuner,
  title={NuNER: Entity Recognition Encoder Pre-training via LLM-Annotated Data},
  author={Sergei Bogdanov and Alexandre Constantin and Timothée Bernard and Benoit Crabbé and Etienne Bernard},
  year={2024},
  eprint={2402.15343},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```