zhang19991111 committed on
Commit
46bd564
1 Parent(s): a6d6b8a

Upload 9 files

README.md ADDED
@@ -0,0 +1,215 @@
+ ---
+ language: en
+ license: cc-by-sa-4.0
+ library_name: span-marker
+ tags:
+ - span-marker
+ - token-classification
+ - ner
+ - named-entity-recognition
+ - generated_from_span_marker_trainer
+ metrics:
+ - precision
+ - recall
+ - f1
+ widget:
+ - text: Inductively Coupled Plasma - Mass Spectrometry ( ICP - MS ) analysis of Longcliffe
+     SP52 limestone was undertaken to identify other impurities present , and the effect
+     of sorbent mass and SO2 concentration on elemental partitioning in the carbonator
+     between solid sorbent and gaseous phase was investigated , using a bubbler sampling
+     system .
+ - text: We extensively evaluate our work against benchmark and competitive protocols
+     across a range of metrics over three real connectivity and GPS traces such as
+     Sassy [ 44 ] , San Francisco Cabs [ 45 ] and Infocom 2006 [ 33 ] .
+ - text: In this research , we developed a robust two - layer classifier that can accurately
+     classify normal hearing ( NH ) from hearing impaired ( HI ) infants with congenital
+     sensori - neural hearing loss ( SNHL ) based on their Magnetic Resonance ( MR
+     ) images .
+ - text: In situ Peak Force Tapping AFM was employed for determining morphology and
+     nano - mechanical properties of the surface layer .
+ - text: By means of a criterion of Gilmer for polynomially dense subsets of the ring
+     of integers of a number field , we show that , if h∈K[X ] maps every element of
+     OK of degree n to an algebraic integer , then h(X ) is integral - valued over
+     OK , that is , h(OK)⊂OK .
+ pipeline_tag: token-classification
+ base_model: allenai/scibert_scivocab_uncased
+ model-index:
+ - name: SpanMarker with allenai/scibert_scivocab_uncased on my-data
+   results:
+   - task:
+       type: token-classification
+       name: Named Entity Recognition
+     dataset:
+       name: my-data
+       type: unknown
+       split: test
+     metrics:
+     - type: f1
+       value: 0.685430463576159
+       name: F1
+     - type: precision
+       value: 0.6981450252951096
+       name: Precision
+     - type: recall
+       value: 0.6731707317073171
+       name: Recall
+ ---
+
+ # SpanMarker with allenai/scibert_scivocab_uncased on my-data
+
+ This is a [SpanMarker](https://github.com/tomaarsen/SpanMarkerNER) model for Named Entity Recognition that uses [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) as its underlying encoder.
+
+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** SpanMarker
+ - **Encoder:** [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased)
+ - **Maximum Sequence Length:** 256 tokens
+ - **Maximum Entity Length:** 8 words
+ <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
+ - **Language:** en
+ - **License:** cc-by-sa-4.0
+
+ ### Model Sources
+
+ - **Repository:** [SpanMarker on GitHub](https://github.com/tomaarsen/SpanMarkerNER)
+ - **Thesis:** [SpanMarker For Named Entity Recognition](https://raw.githubusercontent.com/tomaarsen/SpanMarkerNER/main/thesis.pdf)
+
+ ### Model Labels
+ | Label | Examples |
+ |:---------|:--------------------------------------------------------------------------------------------------------|
+ | Data | "an overall mitochondrial", "defect", "Depth time - series" |
+ | Material | "cross - shore measurement locations", "the subject 's fibroblasts", "COXI , COXII and COXIII subunits" |
+ | Method | "EFSA", "an approximation", "in vitro" |
+ | Process | "translation", "intake", "a significant reduction of synthesis" |
+
+ ## Evaluation
+
+ ### Metrics
+ | Label | Precision | Recall | F1 |
+ |:---------|:----------|:-------|:-------|
+ | **all** | 0.6981 | 0.6732 | 0.6854 |
+ | Data | 0.6269 | 0.6402 | 0.6335 |
+ | Material | 0.8085 | 0.7562 | 0.7815 |
+ | Method | 0.4211 | 0.4000 | 0.4103 |
+ | Process | 0.6891 | 0.6488 | 0.6683 |
+
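+ By definition, the overall F1 is the harmonic mean of the overall precision and recall: 2 × 0.6981 × 0.6732 / (0.6981 + 0.6732) ≈ 0.6854.
+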
+ ## Uses
+
+ ### Direct Use for Inference
+
+ ```python
+ from span_marker import SpanMarkerModel
+
+ # Download from the 🤗 Hub
+ model = SpanMarkerModel.from_pretrained("span-marker-allenai/scibert_scivocab_uncased-me")
+ # Run inference
+ entities = model.predict("In situ Peak Force Tapping AFM was employed for determining morphology and nano - mechanical properties of the surface layer .")
+ ```
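+
+ Each predicted entity is a dictionary; a minimal sketch of inspecting the output, assuming the standard SpanMarker keys (`"span"`, `"label"`, `"score"`):
+
+ ```python
+ # Print each matched span, its label, and the model's confidence score.
+ for entity in entities:
+     print(entity["span"], entity["label"], round(entity["score"], 3))
+ ```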
+
+ ### Downstream Use
+ You can fine-tune this model on your own dataset.
+
+ <details><summary>Click to expand</summary>
+
+ ```python
+ from datasets import load_dataset
+ from span_marker import SpanMarkerModel, Trainer
+
+ # Download from the 🤗 Hub
+ model = SpanMarkerModel.from_pretrained("span-marker-allenai/scibert_scivocab_uncased-me")
+
+ # Specify a Dataset with "tokens" and "ner_tags" columns
+ dataset = load_dataset("conll2003")  # For example CoNLL2003
+
+ # Initialize a Trainer using the pretrained model & dataset
+ trainer = Trainer(
+     model=model,
+     train_dataset=dataset["train"],
+     eval_dataset=dataset["validation"],
+ )
+ trainer.train()
+ trainer.save_model("span-marker-allenai/scibert_scivocab_uncased-me-finetuned")
+ ```
+ </details>
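+
+ For a custom dataset, the `Trainer` expects `"tokens"` and `"ner_tags"` columns; a minimal sketch, assuming integer labels that follow this model's scheme (0 = O, 1 = Data, 2 = Material, 3 = Method, 4 = Process):
+
+ ```python
+ from datasets import Dataset
+
+ # Hypothetical toy dataset in the expected column format.
+ train_dataset = Dataset.from_dict({
+     "tokens": [["In", "vitro", "translation", "was", "measured", "."]],
+     "ner_tags": [[3, 3, 4, 0, 0, 0]],
+ })
+ ```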
+
+ <!--
+ ### Out-of-Scope Use
+
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+
+ <!--
+ ## Bias, Risks and Limitations
+
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+
+ <!--
+ ### Recommendations
+
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+
+ ## Training Details
+
+ ### Training Set Metrics
+ | Training set | Min | Median | Max |
+ |:----------------------|:----|:--------|:----|
+ | Sentence length | 3 | 25.6049 | 106 |
+ | Entities per sentence | 0 | 5.2439 | 22 |
+
+ ### Training Hyperparameters
+ - learning_rate: 5e-05
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 10
+
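+ These settings correspond roughly to the following `transformers.TrainingArguments` (a sketch; the output directory is hypothetical, and the optimizer and scheduler entries above are the library defaults):
+
+ ```python
+ from transformers import TrainingArguments
+
+ # Sketch of the run configuration listed above; output_dir is hypothetical.
+ args = TrainingArguments(
+     output_dir="models/span-marker-scibert-me",
+     learning_rate=5e-5,
+     per_device_train_batch_size=8,
+     per_device_eval_batch_size=8,
+     num_train_epochs=10,
+     lr_scheduler_type="linear",
+     warmup_ratio=0.1,
+     seed=42,
+ )
+ ```
+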
+ ### Training Results
+ | Epoch | Step | Validation Loss | Validation Precision | Validation Recall | Validation F1 | Validation Accuracy |
+ |:------:|:----:|:---------------:|:--------------------:|:-----------------:|:-------------:|:-------------------:|
+ | 2.0134 | 300 | 0.0476 | 0.7297 | 0.5821 | 0.6476 | 0.7880 |
+ | 4.0268 | 600 | 0.0532 | 0.7537 | 0.6775 | 0.7136 | 0.8281 |
+ | 6.0403 | 900 | 0.0655 | 0.7162 | 0.7080 | 0.7121 | 0.8357 |
+ | 8.0537 | 1200 | 0.0761 | 0.7143 | 0.7061 | 0.7102 | 0.8251 |
+
+ ### Framework Versions
+ - Python: 3.10.12
+ - SpanMarker: 1.5.0
+ - Transformers: 4.36.2
+ - PyTorch: 2.0.1+cu118
+ - Datasets: 2.16.1
+ - Tokenizers: 0.15.0
+
+ ## Citation
+
+ ### BibTeX
+ ```bibtex
+ @software{Aarsen_SpanMarker,
+     author = {Aarsen, Tom},
+     license = {Apache-2.0},
+     title = {{SpanMarker for Named Entity Recognition}},
+     url = {https://github.com/tomaarsen/SpanMarkerNER}
+ }
+ ```
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
added_tokens.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "<end>": 31091,
+   "<start>": 31090
+ }
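These are the two marker tokens SpanMarker inserts around candidate spans; a quick check that they resolve to the ids above (repo id taken from the README's inference example):

```python
from transformers import AutoTokenizer

# Verify the marker token ids match added_tokens.json.
tokenizer = AutoTokenizer.from_pretrained("span-marker-allenai/scibert_scivocab_uncased-me")
assert tokenizer.convert_tokens_to_ids("<start>") == 31090
assert tokenizer.convert_tokens_to_ids("<end>") == 31091
```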
config.json ADDED
@@ -0,0 +1,102 @@
+ {
+   "architectures": [
+     "SpanMarkerModel"
+   ],
+   "encoder": {
+     "_name_or_path": "allenai/scibert_scivocab_uncased",
+     "add_cross_attention": false,
+     "architectures": null,
+     "attention_probs_dropout_prob": 0.1,
+     "bad_words_ids": null,
+     "begin_suppress_tokens": null,
+     "bos_token_id": null,
+     "chunk_size_feed_forward": 0,
+     "classifier_dropout": null,
+     "cross_attention_hidden_size": null,
+     "decoder_start_token_id": null,
+     "diversity_penalty": 0.0,
+     "do_sample": false,
+     "early_stopping": false,
+     "encoder_no_repeat_ngram_size": 0,
+     "eos_token_id": null,
+     "exponential_decay_length_penalty": null,
+     "finetuning_task": null,
+     "forced_bos_token_id": null,
+     "forced_eos_token_id": null,
+     "hidden_act": "gelu",
+     "hidden_dropout_prob": 0.1,
+     "hidden_size": 768,
+     "id2label": {
+       "0": "O",
+       "1": "Data",
+       "2": "Material",
+       "3": "Method",
+       "4": "Process"
+     },
+     "initializer_range": 0.02,
+     "intermediate_size": 3072,
+     "is_decoder": false,
+     "is_encoder_decoder": false,
+     "label2id": {
+       "Data": 1,
+       "Material": 2,
+       "Method": 3,
+       "O": 0,
+       "Process": 4
+     },
+     "layer_norm_eps": 1e-12,
+     "length_penalty": 1.0,
+     "max_length": 20,
+     "max_position_embeddings": 512,
+     "min_length": 0,
+     "model_type": "bert",
+     "no_repeat_ngram_size": 0,
+     "num_attention_heads": 12,
+     "num_beam_groups": 1,
+     "num_beams": 1,
+     "num_hidden_layers": 12,
+     "num_return_sequences": 1,
+     "output_attentions": false,
+     "output_hidden_states": false,
+     "output_scores": false,
+     "pad_token_id": 0,
+     "position_embedding_type": "absolute",
+     "prefix": null,
+     "problem_type": null,
+     "pruned_heads": {},
+     "remove_invalid_values": false,
+     "repetition_penalty": 1.0,
+     "return_dict": true,
+     "return_dict_in_generate": false,
+     "sep_token_id": null,
+     "suppress_tokens": null,
+     "task_specific_params": null,
+     "temperature": 1.0,
+     "tf_legacy_loss": false,
+     "tie_encoder_decoder": false,
+     "tie_word_embeddings": true,
+     "tokenizer_class": null,
+     "top_k": 50,
+     "top_p": 1.0,
+     "torch_dtype": null,
+     "torchscript": false,
+     "transformers_version": "4.36.2",
+     "type_vocab_size": 2,
+     "typical_p": 1.0,
+     "use_bfloat16": false,
+     "use_cache": true,
+     "vocab_size": 31096
+   },
+   "entity_max_length": 8,
+   "marker_max_length": 128,
+   "max_next_context": null,
+   "max_prev_context": null,
+   "model_max_length": 256,
+   "model_max_length_default": 512,
+   "model_type": "span-marker",
+   "span_marker_version": "1.5.0",
+   "torch_dtype": "float32",
+   "trained_with_document_context": false,
+   "transformers_version": "4.36.2",
+   "vocab_size": 31096
+ }
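The label inventory lives on the nested `encoder` config; a minimal sketch of reading it straight from the downloaded file (relative path assumed):

```python
import json

# Read the id -> label mapping from the nested encoder config.
with open("config.json") as f:
    config = json.load(f)
print(config["encoder"]["id2label"])
# {'0': 'O', '1': 'Data', '2': 'Material', '3': 'Method', '4': 'Process'}
```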
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cfb4610c7fa6ed2042e86d894a4fabebea7ccee2a529cd824260c210a26526b9
+ size 439747140
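At float32 (4 bytes per parameter), this 439,747,140-byte checkpoint works out to roughly 110M parameters, consistent with a BERT-base-sized encoder plus the classifier head.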
special_tokens_map.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "cls_token": "[CLS]",
+   "mask_token": "[MASK]",
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "unk_token": "[UNK]"
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,76 @@
+ {
+   "add_prefix_space": true,
+   "added_tokens_decoder": {
+     "0": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "101": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "102": {
+       "content": "[CLS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "103": {
+       "content": "[SEP]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "104": {
+       "content": "[MASK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "31090": {
+       "content": "<start>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "31091": {
+       "content": "<end>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "[CLS]",
+   "do_basic_tokenize": true,
+   "do_lower_case": true,
+   "entity_max_length": 8,
+   "marker_max_length": 128,
+   "mask_token": "[MASK]",
+   "model_max_length": 256,
+   "never_split": null,
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "unk_token": "[UNK]"
+ }
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a0ed9a530e1e4f89d5671a57e508ae7ade16401ea64699bffa3f631220bcfc87
+ size 4283
vocab.txt ADDED
The diff for this file is too large to render. See raw diff