Fine-Tuned RoBERTa for Multilingual NER
Introduction
This model is a fine-tuned version of the RoBERTa base model, specialized for Named Entity Recognition (NER) tasks in English, Spanish, French, and Romanian.
Capabilities
The model currently recognizes:
- Common names in English, Spanish, French, and Romanian
- Most commonly used acronyms in these languages
Training Data
The model was fine-tuned on the ner_acro_combined dataset.
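Below is a minimal sketch of inspecting the fine-tuning data with the `datasets` library. The exact Hub path of ner_acro_combined is not given in this card, so the identifier used here is an assumption and may need a namespace prefix.

```python
# Sketch: loading the NER/acronym dataset for inspection.
# The dataset identifier is a placeholder; adjust it to the actual Hub path.
from datasets import load_dataset

dataset = load_dataset("ner_acro_combined")  # assumed path; may need "user/" prefix
print(dataset)                               # show the available splits
print(dataset["train"][0])                   # look at one annotated example
```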
Usage
This fine-tuned model is designed for:
- Performing NER tasks in multilingual contexts
- Identifying commonly used names and acronyms in the specified languages
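The sketch below shows one way to run NER inference with this model through the `transformers` pipeline API. The model identifier is a placeholder (the actual repository id of this checkpoint is not stated in this card), so replace it with the real one.

```python
# Sketch: multilingual NER inference with the transformers pipeline.
from transformers import pipeline

ner = pipeline(
    "token-classification",                            # NER task
    model="your-username/roberta-multilingual-ner",    # placeholder model id
    aggregation_strategy="simple",                     # merge sub-word tokens into entity spans
)

examples = [
    "Maria traveled from Bucharest to Paris to meet the UNESCO delegation.",
    "El informe de la ONU fue presentado por Juan en Madrid.",
]

for text in examples:
    for entity in ner(text):
        print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```

With `aggregation_strategy="simple"`, the pipeline groups sub-word pieces into whole entity spans, which is usually what you want for names and acronyms.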
Contributing
If you have suggestions for improvements or bug reports related to this model, please feel free to open an issue or submit a pull request.