---
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
task_categories:
- token-classification
task_ids:
- named-entity-recognition
train-eval-index:
- task: token-classification
  task_id: entity_extraction
  splits:
    train_split: train
    eval_split: test
    val_split: validation
  col_mapping:
    tokens: tokens
    ner_tags: tags
  metrics:
  - type: seqeval
    name: seqeval
---
## Dataset description
This dataset was created for fine-tuning the model `mbert-base-cased-NER-NL-legislation-refs` and consists of 512-token examples, each containing one or more legislation references. The examples were drawn from a weakly labelled corpus of Dutch case law scraped from Linked Data Overheid. The corpus was pre-tokenized and labelled with spaCy (via `biluo_tags_from_offsets`) and then further tokenized with Hugging Face's `AutoTokenizer.from_pretrained()` using the `bert-base-multilingual-cased` tokenizer.
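
The preprocessing can be sketched roughly as follows. This is a minimal illustration, not the exact preprocessing script: the example sentence, character offsets, and the `REF` label are hypothetical, and note that spaCy v3 renamed `biluo_tags_from_offsets` to `offsets_to_biluo_tags`.

```python
import spacy
from spacy.training import offsets_to_biluo_tags  # formerly biluo_tags_from_offsets
from transformers import AutoTokenizer

nlp = spacy.blank("nl")  # blank Dutch pipeline, used only for pre-tokenization
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")

# Hypothetical sentence with one legislation reference and its character span
text = "Dit volgt uit artikel 6:162 BW."
entities = [(14, 30, "REF")]  # (start_char, end_char, label) — illustrative only

# 1. Pre-tokenize with spaCy and convert character offsets to BILUO tags
doc = nlp(text)
biluo_tags = offsets_to_biluo_tags(doc, entities)

# 2. Further tokenize into WordPieces with the mBERT tokenizer,
#    propagating each word's tag to its sub-word pieces via word_ids()
encoding = tokenizer(
    [token.text for token in doc],
    is_split_into_words=True,
    truncation=True,
    max_length=512,
)
aligned_tags = [
    biluo_tags[word_id] if word_id is not None else "O"
    for word_id in encoding.word_ids()
]

print(list(zip(tokenizer.convert_ids_to_tokens(encoding["input_ids"]), aligned_tags)))
```

In an actual fine-tuning setup, special tokens and continuation pieces are typically assigned `-100` so the loss ignores them, and long documents are chunked into 512-token windows; the sketch above only illustrates the labelling and alignment steps.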