# coding=utf-8
# Copyright 2020 HuggingFace Datasets Authors.
# Modified by Vésteinn Snæbjarnarson 2021
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


# Lint as: python3
"""Introduction to the CoNLL-2003 Shared Task: Language-Independent Named Entity Recognition"""

import datasets


logger = datasets.logging.get_logger(__name__)


_CITATION = """\
@misc{20.500.12537/42,
 title = {{MIM}-{GOLD}-{NER} – named entity recognition corpus},
 author = {Ing{\'o}lfsd{\'o}ttir, Svanhv{\'{\i}}t and Gu{\dh}j{\'o}nsson, {\'A}smundur Alma and Loftsson, Hrafn},
 url = {http://hdl.handle.net/20.500.12537/42},
 note = {{CLARIN}-{IS}},
 copyright = {Icelandic Gigaword Corpus Part1},
 year = {2020} }
"""

_DESCRIPTION = """\
This Icelandic named entity (NE) corpus, MIM-GOLD-NER, is a version of the MIM-GOLD corpus tagged for NEs. Over 48 thousand NEs are tagged in this corpus of one million tokens, which can be used for training named entity recognizers for Icelandic.
The MIM-GOLD-NER corpus was developed at Reykjavik University in 2018–2020, funded by the Strategic Research and Development Programme for Language Technology (LT). Two LT students were in charge of the corpus annotation and of training named entity recognizers using machine learning methods.
A semi-automatic approach was used for annotating the corpus. Lists of Icelandic person names, location names, and company names were compiled and used for extracting and classifying as many named entities as possible. Regular expressions were then used to find certain numerical entities in the corpus. After this automatic pre-processing step, the whole corpus was reviewed manually to correct any errors. The corpus is tagged for eight named entity types:
PERSON – names of humans, animals and other beings, real or fictional.
LOCATION – names of locations, real or fictional, i.e. buildings, street and place names, both real and fictional. All geographical and geopolitical entities such as cities, countries, counties and regions, as well as planet names and other outer space entities.
ORGANIZATION – companies and other organizations, public or private, real or fictional. Schools, churches, swimming pools, community centers, musical groups, other affiliations.
MISCELLANEOUS – proper nouns that don’t belong to the previous three categories, such as products, books and movie titles, events, such as wars, sports tournaments, festivals, concerts, etc.
DATE – absolute temporal units of a full day or longer, such as days, months, years, centuries, both written numerically and alphabetically.
TIME – absolute temporal units shorter than a full day, such as seconds, minutes, or hours, both written numerically and alphabetically.
MONEY – exact monetary amounts in any currency, both written numerically and alphabetically.
PERCENT – percentages, both written numerically and alphabetically.
MIM-GOLD-NER is intended for training of named entity recognizers for Icelandic. It is in the CoNLL format, and the position of each token within the NE is marked using the BIO tagging format. The corpus can be used in its entirety or by training on subsets of the text types that best fit the intended domain.
The MIM-GOLD-NER corpus is distributed with the same special user license as MIM-GOLD, which is based on the MIM license, since the texts in MIM-GOLD were sampled from the MIM corpus."""

_URL = "https://huggingface.co/datasets/rominaoji/MIM-GOLD-NER/resolve/main/"
_TRAINING_FILE = "train.txt"
_DEV_FILE = "dev.txt"
_TEST_FILE = "test.txt"


class MIMGoldNERConfig(datasets.BuilderConfig):
    """BuilderConfig for MIM-GOLD-NER"""

    def __init__(self, **kwargs):
        """BuilderConfig for MIM-GOLD-NER.
        Args:
          **kwargs: keyword arguments forwarded to super.
        """
        super().__init__(**kwargs)


class MIMGoldNER(datasets.GeneratorBasedBuilder):
    """MIM-GOLD-NER dataset."""

    BUILDER_CONFIGS = [
        MIMGoldNERConfig(name="mim-gold-ner", version=datasets.Version("2.0.0"), description="MIM-GOLD-NER dataset"),
    ]

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=datasets.Features(
                {
                    "id": datasets.Value("string"),
                    "tokens": datasets.Sequence(datasets.Value("string")),
                    "ner_tags": datasets.Sequence(
                        datasets.features.ClassLabel(
                            names=[
                                "O",
                                "B-Date",
                                "B-Location",
                                "B-Miscellaneous",
                                "B-Money",
                                "B-Organization",
                                "B-Percent",
                                "B-Person",
                                "B-Time",
                                "I-Date",
                                "I-Location",
                                "I-Miscellaneous",
                                "I-Money",
                                "I-Organization",
                                "I-Percent",
                                "I-Person",
                                "I-Time"
                            ]
                        )
                    ),
                    "conll_ner_tags": datasets.Sequence(
                        datasets.features.ClassLabel(
                            names=[
                                "O",
                                "B-PER",
                                "I-PER",
                                "B-ORG",
                                "I-ORG",
                                "B-LOC",
                                "I-LOC",
                                "B-MISC",
                                "I-MISC"
                            ]
                        )
                    ),
                }
            ),
            supervised_keys=None,
            homepage="http://hdl.handle.net/20.500.12537/42",
            citation=_CITATION,
        )
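The `ner_tags` feature stores integer class ids; `ClassLabel` assigns each name the id equal to its position in the `names` list. A minimal standalone sketch of that mapping (label inventory copied from `_info` above; the helper names are illustrative, not part of the `datasets` API):

```python
# Label inventory in the same order as the ClassLabel in _info();
# position in the list is the integer id stored on disk.
NER_LABELS = [
    "O",
    "B-Date", "B-Location", "B-Miscellaneous", "B-Money",
    "B-Organization", "B-Percent", "B-Person", "B-Time",
    "I-Date", "I-Location", "I-Miscellaneous", "I-Money",
    "I-Organization", "I-Percent", "I-Person", "I-Time",
]

def str2int(tag: str) -> int:
    """Map a tag string to its integer id (mirrors ClassLabel.str2int)."""
    return NER_LABELS.index(tag)

def int2str(idx: int) -> str:
    """Map an integer id back to its tag string (mirrors ClassLabel.int2str)."""
    return NER_LABELS[idx]

print(str2int("B-Person"))  # 7
print(int2str(0))           # O
```

In practice one would call `ClassLabel.str2int`/`ClassLabel.int2str` on the loaded dataset's features rather than reimplementing the lookup.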

    def _split_generators(self, dl_manager):
        """Returns SplitGenerators."""
        urls_to_download = {
            "train": f"{_URL}{_TRAINING_FILE}",
            "dev": f"{_URL}{_DEV_FILE}",
            "test": f"{_URL}{_TEST_FILE}",
        }
        downloaded_files = dl_manager.download_and_extract(urls_to_download)

        return [
            datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_files["train"]}),
            datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": downloaded_files["dev"]}),
            datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": downloaded_files["test"]}),
        ]

    def _mim2conll(self, ner_tags):

        MIM2CONLL = {
            "O": "O", 
            "B-Date": "O",
            "B-Location": "B-LOC",
            "B-Miscellaneous": "B-MISC",
            "B-Money": "O",
            "B-Organization": "B-ORG",
            "B-Percent": "O",
            "B-Person": "B-PER",
            "B-Time": "O",
            "I-Date": "O",
            "I-Location": "I-LOC",
            "I-Miscellaneous": "I-MISC",
            "I-Money": "O",
            "I-Organization": "I-ORG",
            "I-Percent": "O",
            "I-Person": "I-PER",
            "I-Time": "O"
        }

        return " ".join([MIM2CONLL[tag] for tag in ner_tags.split()])

    def _generate_examples(self, filepath):
        logger.info("⏳ Generating examples from = %s", filepath)
        with open(filepath, encoding="utf-8") as f:
            guid = 0
            tokens = []
            ner_tags = []
            conll_ner_tags = []
            for line in f:
                if line.startswith("-DOCSTART-") or line == "" or line == "\n":
                    if tokens:
                        yield guid, {
                            "id": str(guid),
                            "tokens": tokens,
                            "ner_tags": ner_tags,
                            "conll_ner_tags": conll_ner_tags,
                        }
                        guid += 1
                        tokens = []
                        ner_tags = []
                        conll_ner_tags = []
                else:
                    # tokens are tab separated
                    splits = line.split("\t")
                    tokens.append(splits[0])
                    try:
                        sentence_ner_tags = splits[1].rstrip()
                        ner_tags.append(sentence_ner_tags)
                        conll_ner_tags.append(self._mim2conll(sentence_ner_tags))
                    except (IndexError, KeyError):
                        logger.error("Malformed line: %s", splits)
                        raise
            # last example
            if tokens:
                yield guid, {
                    "id": str(guid),
                    "tokens": tokens,
                    "ner_tags": ner_tags,
                    "conll_ner_tags": conll_ner_tags,
                }