---
configs:
  - config_name: default
    data_files:
      - split: test
        path: data/test-*
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
dataset_info:
  features:
    - name: tokens
      sequence: string
    - name: ner_tags
      sequence: int64
    - name: dataset
      dtype: string
  splits:
    - name: test
      num_bytes: 16147720
      num_examples: 42144
    - name: train
      num_bytes: 161576681
      num_examples: 349195
    - name: validation
      num_bytes: 12398792
      num_examples: 33464
  download_size: 43074463
  dataset_size: 190123193
task_categories:
  - token-classification
language:
  - fr
size_categories:
  - 100K<n<1M
license: cc-by-4.0
---

## Dataset information

Dataset concatenating open-source French NER datasets, covering 3 entities (LOC, PER, ORG).
It contains a total of 420,264 rows: 346,071 for training, 32,951 for validation and 41,242 for testing.
Our methodology is described in a blog post, available in English or French.

## Usage

```python
from datasets import load_dataset

dataset = load_dataset("CATIE-AQ/frenchNER_3entities")
```

## Dataset

### Details of rows

| Dataset | Original splits | Note |
|:---|:---|:---|
| Multiconer | 16,548 train / 857 validation / 0 test | In practice, the original validation set is used as the test set, and a new validation set is created from 5% of the train set, i.e. 15,721 train / 827 validation / 857 test |
| Multinerd | 140,880 train / 17,610 validation / 17,695 test | |
| Pii-masking-200k | 61,958 train / 0 validation / 0 test | Only dataset without duplicate data or leaks |
| Wikiann | 20,000 train / 10,000 validation / 10,000 test | |
| Wikiner | 120,682 train / 0 validation / 13,410 test | In practice, a validation set is created from 5% of the train set, i.e. 113,296 train / 5,994 validation / 13,393 test |
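
The 5% carve-outs mentioned above (for Multiconer and Wikiner) can be sketched as follows. This is an illustration only; the function name, seed, and sampling strategy are assumptions, not the exact procedure used for this dataset:

```python
import random

def carve_validation(rows, frac=0.05, seed=0):
    """Split off `frac` of the rows as a new validation set,
    keeping the rest as the train set."""
    rng = random.Random(seed)
    n_val = round(len(rows) * frac)
    val_idx = set(rng.sample(range(len(rows)), n_val))
    train = [r for i, r in enumerate(rows) if i not in val_idx]
    val = [r for i, r in enumerate(rows) if i in val_idx]
    return train, val

# Toy example: 100 rows -> 95 train / 5 validation
train, val = carve_validation(list(range(100)))
print(len(train), len(val))  # 95 5
```

In practice the same effect can be obtained with `datasets`' built-in `train_test_split` method with `test_size=0.05`.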

### Removing duplicate data and leaks

Summing the splits of the datasets listed above gives the following result:

```
DatasetDict({
    train: Dataset({
        features: ['tokens', 'ner_tags', 'dataset'],
        num_rows: 351855
    })
    validation: Dataset({
        features: ['tokens', 'ner_tags', 'dataset'],
        num_rows: 34431
    })
    test: Dataset({
        features: ['tokens', 'ner_tags', 'dataset'],
        num_rows: 41945
    })
})
```

However, a row present in the train split of dataset A may be absent from A's own test split yet appear in the test split of dataset B, creating a leak once the datasets are concatenated.
The same logic applies to exact duplicates, so both must be removed.
After this clean-up, we obtain the following numbers:

```
DatasetDict({
    train: Dataset({
        features: ['tokens', 'ner_tags', 'dataset'],
        num_rows: 346071
    })
    validation: Dataset({
        features: ['tokens', 'ner_tags', 'dataset'],
        num_rows: 32951
    })
    test: Dataset({
        features: ['tokens', 'ner_tags', 'dataset'],
        num_rows: 41242
    })
})
```

Note: in practice, the test split contains 8 rows that we failed to deduplicate, i.e. 0.019% of the split.
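
The cross-split deduplication described above can be sketched as follows. The function and the toy rows are illustrative assumptions, not the exact code used for this dataset:

```python
def deduplicate(splits):
    """Drop exact-duplicate rows within each split, and drop rows from
    train/validation whose tokens also appear in the test split (leak
    removal). `splits` maps split name -> list of rows, each row a dict
    with a 'tokens' list."""
    test_keys = {tuple(r["tokens"]) for r in splits["test"]}
    cleaned = {}
    for name, rows in splits.items():
        seen = set()
        kept = []
        for r in rows:
            key = tuple(r["tokens"])
            if key in seen:
                continue  # exact duplicate within the split
            if name != "test" and key in test_keys:
                continue  # would leak into the test set
            seen.add(key)
            kept.append(r)
        cleaned[name] = kept
    return cleaned

# Toy example: one in-split duplicate and one train/test leak
splits = {
    "train": [{"tokens": ["a", "b"]}, {"tokens": ["a", "b"]}, {"tokens": ["c"]}],
    "validation": [{"tokens": ["d"]}],
    "test": [{"tokens": ["c"]}, {"tokens": ["e"]}],
}
clean = deduplicate(splits)
print([r["tokens"] for r in clean["train"]])  # [['a', 'b']]
```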

### Details of entities (after cleaning)

| Dataset | Split | O | PER | LOC | ORG |
|:---|:---|---:|---:|---:|---:|
| Multiconer | train | 200,093 | 18,060 | 7,165 | 6,967 |
| | validation | 10,900 | 1,069 | 389 | 328 |
| | test | 11,287 | 979 | 387 | 381 |
| Multinerd | train | 3,041,998 | 149,128 | 105,531 | 68,796 |
| | validation | 410,934 | 17,479 | 13,988 | 3,475 |
| | test | 417,886 | 18,567 | 14,083 | 3,636 |
| Pii-masking-200k | train | 2,405,215 | 29,838 | 42,154 | 12,310 |
| Wikiann | train | 60,165 | 20,288 | 17,033 | 24,429 |
| | validation | 30,046 | 10,098 | 8,698 | 12,819 |
| | test | 31,488 | 10,764 | 9,512 | 13,480 |
| Wikiner | train | 2,691,294 | 110,079 | 131,839 | 38,988 |
| | validation | 140,935 | 5,481 | 7,204 | 2,121 |
| | test | 313,210 | 13,324 | 15,213 | 3,894 |
| **Total** | train | 8,398,765 | 327,393 | 303,722 | 151,490 |
| | validation | 592,815 | 34,127 | 30,279 | 18,743 |
| | test | 773,871 | 43,634 | 39,195 | 21,391 |

### Columns

```python
dataset_train = dataset["train"].to_pandas()
dataset_train.head()
```

```
   tokens                                             ner_tags                                           dataset
0  [On, a, souvent, voulu, faire, de, La, Bruyère...  [0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, ...  wikiner
1  [Les, améliorations, apportées, par, rapport, ...  [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 2, ...  wikiner
2  [Cette, assemblée, de, notables, ,, réunie, en...  [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, ...  wikiner
3  [Wittgenstein, projetait, en, effet, d', élabo...  [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...  wikiner
4  [Le, premier, écrivain, à, écrire, des, fictio...  [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, ...  wikiner
```
  • the tokens column contains the tokens
  • the ner_tags column contains the NER tags (IOB format with 0="O", 1="PER", 2="ORG" and 3="LOC")
  • the dataset column identifies the row's original dataset (if you wish to apply filters to it)
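
As an illustration, the integer tags can be mapped back to their string labels. The label list below follows the mapping documented above; the helper function is hypothetical, not part of the dataset:

```python
# Label list matching the documented mapping:
# 0 = "O", 1 = "PER", 2 = "ORG", 3 = "LOC"
LABELS = ["O", "PER", "ORG", "LOC"]

def to_labels(ner_tags):
    """Convert a sequence of integer tags to their string labels."""
    return [LABELS[t] for t in ner_tags]

row = {"tokens": ["Jean", "habite", "Paris"],
       "ner_tags": [1, 0, 3],
       "dataset": "wikiner"}
print(to_labels(row["ner_tags"]))  # ['PER', 'O', 'LOC']
```

With the Hugging Face `datasets` library, rows from a single source can be selected via the `dataset` column, e.g. `dataset["train"].filter(lambda r: r["dataset"] == "wikiner")`.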

### Split

  • train corresponds to the concatenation of multiconer + multinerd + pii-masking-200k + wikiann + wikiner
  • validation corresponds to the concatenation of multiconer + multinerd + wikiann + wikiner
  • test corresponds to the concatenation of multiconer + multinerd + wikiann + wikiner

## Citations

### multiconer

```bibtex
@inproceedings{multiconer2-report,
    title={{SemEval-2023 Task 2: Fine-grained Multilingual Named Entity Recognition (MultiCoNER 2)}},
    author={Fetahu, Besnik and Kar, Sudipta and Chen, Zhiyu and Rokhlenko, Oleg and Malmasi, Shervin},
    booktitle={Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)},
    year={2023},
    publisher={Association for Computational Linguistics}}
```

```bibtex
@article{multiconer2-data,
    title={{MultiCoNER v2: a Large Multilingual dataset for Fine-grained and Noisy Named Entity Recognition}},
    author={Fetahu, Besnik and Chen, Zhiyu and Kar, Sudipta and Rokhlenko, Oleg and Malmasi, Shervin},
    year={2023}}
```

### multinerd

```bibtex
@inproceedings{tedeschi-navigli-2022-multinerd,
    title = "{M}ulti{NERD}: A Multilingual, Multi-Genre and Fine-Grained Dataset for Named Entity Recognition (and Disambiguation)",
    author = "Tedeschi, Simone and Navigli, Roberto",
    booktitle = "Findings of the Association for Computational Linguistics: NAACL 2022",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.findings-naacl.60",
    doi = "10.18653/v1/2022.findings-naacl.60",
    pages = "801--812"}
```

### pii-masking-200k

```bibtex
@misc{ai4privacy_2023,
    author = {{ai4Privacy}},
    title = {pii-masking-200k (Revision 1d4c0a1)},
    year = 2023,
    url = {https://huggingface.co/datasets/ai4privacy/pii-masking-200k},
    doi = {10.57967/hf/1532},
    publisher = {Hugging Face}}
```

### wikiann

```bibtex
@inproceedings{rahimi-etal-2019-massively,
    title = "Massively Multilingual Transfer for {NER}",
    author = "Rahimi, Afshin and Li, Yuan and Cohn, Trevor",
    booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
    month = jul,
    year = "2019",
    address = "Florence, Italy",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/P19-1015",
    pages = "151--164"}
```

### wikiner

```bibtex
@article{NOTHMAN2013151,
    title = {Learning multilingual named entity recognition from Wikipedia},
    journal = {Artificial Intelligence},
    volume = {194},
    pages = {151--175},
    year = {2013},
    note = {Artificial Intelligence, Wikipedia and Semi-Structured Resources},
    issn = {0004-3702},
    doi = {https://doi.org/10.1016/j.artint.2012.03.006},
    url = {https://www.sciencedirect.com/science/article/pii/S0004370212000276},
    author = {Joel Nothman and Nicky Ringland and Will Radford and Tara Murphy and James R. Curran}}
```

### frenchNER_3entities

```bibtex
@misc{frenchNER2024,
    author       = {{BOURDOIS, Loïck}},
    organization = {{Centre Aquitain des Technologies de l'Information et Electroniques}},
    title        = {frenchNER_3entities},
    year         = 2024,
    url          = {https://huggingface.co/CATIE-AQ/frenchNER_3entities},
    doi          = {10.57967/hf/1751},
    publisher    = {Hugging Face}}
```

## License

cc-by-4.0