---
multilinguality:
- monolingual
task_categories:
- token-classification
task_ids:
- named-entity-recognition
train-eval-index:
- task: token-classification
  task_id: entity_extraction
  splits:
    train_split: train
    eval_split: test
    val_split: validation
  col_mapping:
    tokens: tokens
    ner_tags: tags
  metrics:
  - type: seqeval
    name: seqeval
---
# Dataset description
This dataset was created for fine-tuning the model [mbert-base-cased-NER-NL-legislation-refs](https://huggingface.co/romjansen/mbert-base-cased-NER-NL-legislation-refs). It consists of 512-token-long examples, each containing one or more legislation references. The examples were built from a weakly labelled corpus of Dutch case law scraped from [Linked Data Overheid](https://linkeddata.overheid.nl/). The corpus was pre-tokenized and labelled with [spaCy](https://spacy.io/)'s [biluo_tags_from_offsets](https://spacy.io/api/top-level#biluo_tags_from_offsets), and then further tokenized into wordpieces with Hugging Face's [AutoTokenizer.from_pretrained()](https://huggingface.co/docs/transformers/model_doc/auto#transformers.AutoTokenizer.from_pretrained), using the tokenizer of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased).
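
The snippet below is a minimal sketch of that pipeline, assuming spaCy v3 (where `biluo_tags_from_offsets` is exposed as `offsets_to_biluo_tags`) and the standard `word_ids()` alignment recipe for fast tokenizers. The example sentence, character offsets and the `LEG` entity label are purely illustrative and not taken from the corpus; the actual label scheme and the windowing into 512-token examples may differ.

```python
# Illustrative sketch only: the sentence, offsets and "LEG" label are made up,
# and the real corpus, label set and 512-token windowing may differ.
import spacy
from spacy.training import offsets_to_biluo_tags  # spaCy v3 name for biluo_tags_from_offsets
from transformers import AutoTokenizer

nlp = spacy.blank("nl")
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")

text = "Op grond van artikel 6:162 BW is de gedaagde aansprakelijk."
weak_spans = [(13, 29, "LEG")]  # character offsets of the legislation reference

# 1. Pre-tokenize with spaCy and turn the character offsets into per-token BILUO tags.
doc = nlp(text)
tokens = [t.text for t in doc]
biluo_tags = offsets_to_biluo_tags(doc, weak_spans)

# 2. Re-tokenize the pre-tokenized words into wordpieces with mBERT's tokenizer
#    and copy each word's tag onto all of its wordpieces.
encoding = tokenizer(tokens, is_split_into_words=True, truncation=True, max_length=512)
aligned_tags = ["O" if word_id is None else biluo_tags[word_id]
                for word_id in encoding.word_ids()]

for wordpiece, tag in zip(tokenizer.convert_ids_to_tokens(encoding["input_ids"]), aligned_tags):
    print(wordpiece, tag)
```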
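
Per the `train-eval-index` metadata above, the splits are named train/validation/test, the relevant columns are `tokens` and `ner_tags`, and evaluation uses seqeval. The following is a hypothetical loading and scoring sketch: the dataset repository ID is a placeholder (the card does not state it), the predictions are dummy values, and `ner_tags` is assumed to be stored as ClassLabel ids.

```python
# Hypothetical usage sketch: "user/nl-legislation-refs-ner" is a placeholder
# repository ID and the all-"O" predictions are dummies; substitute the real
# dataset ID and the fine-tuned model's outputs in practice.
from datasets import load_dataset
from seqeval.metrics import classification_report

dataset = load_dataset("user/nl-legislation-refs-ner")  # placeholder ID
test_split = dataset["test"]

# Assumes ner_tags is a Sequence of ClassLabel, so ids can be mapped to tag strings.
label_names = test_split.features["ner_tags"].feature.names
references = [[label_names[i] for i in tags] for tags in test_split["ner_tags"]]

# Dummy predictions, one tag per token, as seqeval expects.
predictions = [["O"] * len(tags) for tags in references]

print(classification_report(references, predictions))
```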