Huertas97 committed on
Commit
5329fcf
1 Parent(s): a7c53bb

First model version

1_Pooling/config.json ADDED
@@ -0,0 +1,7 @@
+ {
+ "word_embedding_dimension": 768,
+ "pooling_mode_cls_token": false,
+ "pooling_mode_mean_tokens": true,
+ "pooling_mode_max_tokens": false,
+ "pooling_mode_mean_sqrt_len_tokens": false
+ }
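These settings select mean pooling over the 768-dimensional token embeddings. A minimal sketch of building the equivalent `sentence_transformers.models.Pooling` module from this config (illustrative only; loading the repo with `SentenceTransformer` does this automatically):

```python
from sentence_transformers import models

# Mirror 1_Pooling/config.json: mean-pool the 768-dimensional token embeddings
pooling = models.Pooling(
    word_embedding_dimension=768,
    pooling_mode_cls_token=False,
    pooling_mode_mean_tokens=True,
    pooling_mode_max_tokens=False,
    pooling_mode_mean_sqrt_len_tokens=False,
)
```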
README.md ADDED
@@ -0,0 +1,168 @@
+ ---
+ pipeline_tag: sentence-similarity
+ language: "multilingual"
+ tags:
+ - feature-extraction
+ - sentence-similarity
+ - transformers
+ - multilingual
+ ---
+
+ # mstsb-paraphrase-multilingual-mpnet-base-v2
+
+ This is a fine-tuned version of the `paraphrase-multilingual-mpnet-base-v2` model from [sentence-transformers](https://www.SBERT.net), trained on the [Semantic Textual Similarity Benchmark](http://ixa2.si.ehu.eus/stswiki/index.php/Main_Page) extended to 15 languages. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering, semantic search, and measuring the similarity between two sentences.
+
+ <!--- Describe your model here -->
+ This model is a fine-tuned version of `paraphrase-multilingual-mpnet-base-v2` for semantic textual similarity with multilingual data. The dataset used for this fine-tuning is STSb extended to 15 languages with Google Translator. To maintain data quality, sentence pairs with a confidence value below 0.7 were dropped. The extended dataset is available on [GitHub](https://github.com/Huertas97/Multilingual-STSB). The languages included in the extended version are: ar, cs, de, en, es, fr, hi, it, ja, nl, pl, pt, ru, tr, zh-CN, zh-TW. The pooling operation used to condense the word embeddings into a sentence embedding is mean pooling (more info below).
+
+ <!-- ## Usage (Sentence-Transformers)
+
+ Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
+
+ ```
+ pip install -U sentence-transformers
+ ```
+
+ Then you can use the model like this:
+
+ ```python
+ from sentence_transformers import SentenceTransformer
+ # It supports several languages
+ sentences = ["This is an example sentence", "Esta es otra frase de ejemplo", "最後の例文"]
+
+ # The pooling technique is automatically detected (mean pooling)
+ model = SentenceTransformer('AIDA-UPM/mstsb-paraphrase-multilingual-mpnet-base-v2')
+ embeddings = model.encode(sentences)
+ print(embeddings)
+ ``` -->
+
+
+
+ ## Usage (HuggingFace Transformers)
+ Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
+
+ ```python
+ from transformers import AutoTokenizer, AutoModel
+ import torch
+
+ # We should define the proper pooling function: mean pooling
+ # Mean Pooling - take the attention mask into account for correct averaging
+ def mean_pooling(model_output, attention_mask):
+     token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
+     input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
+     return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
+
+
+ # Sentences we want sentence embeddings for
+ sentences = ["This is an example sentence", "Esta es otra frase de ejemplo", "最後の例文"]
+
+ # Load model from HuggingFace Hub
+ tokenizer = AutoTokenizer.from_pretrained('AIDA-UPM/mstsb-paraphrase-multilingual-mpnet-base-v2')
+ model = AutoModel.from_pretrained('AIDA-UPM/mstsb-paraphrase-multilingual-mpnet-base-v2')
+
+ # Tokenize sentences
+ encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
+
+ # Compute token embeddings
+ with torch.no_grad():
+     model_output = model(**encoded_input)
+
+ # Perform pooling. In this case, mean pooling.
+ sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
+
+ print("Sentence embeddings:")
+ print(sentence_embeddings)
+ ```
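+
+ A minimal sketch of scoring the embeddings above with cosine similarity (the pairing is illustrative; `sentence_embeddings` comes from the snippet above):
+
+ ```python
+ import torch.nn.functional as F
+
+ # Cosine similarity between the first sentence and each of the others
+ similarities = F.cosine_similarity(sentence_embeddings[0:1], sentence_embeddings[1:], dim=1)
+ print(similarities)  # one score per remaining sentence, in [-1, 1]
+ ```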
+
+
+ ## Evaluation Results
+
+ <!--- Describe how your model was evaluated -->
+ Check the test results on the Semantic Textual Similarity tasks below. The 15 languages available in the [Multilingual STSB](https://github.com/Huertas97/Multilingual-STSB) have been combined into monolingual and cross-lingual tasks, giving a total of 31 tasks. Monolingual tasks have both sentences from the same language source (e.g., ar-ar, es-es), while cross-lingual tasks have two sentences in different languages, one of them being English (e.g., en-ar, en-es). For the sake of readability, the tasks have been split into monolingual and cross-lingual tables; a sketch of how such scores can be computed follows the tables.
+
+ | Monolingual Task | Pearson Cosine test | Spearman Cosine test |
+ |------------------|---------------------|----------------------|
+ | en;en | 0.868048310692506 | 0.8740170943535747 |
+ | ar;ar | 0.8267139454193487 | 0.8284459741532022 |
+ | cs;cs | 0.8466821720942157 | 0.8485417688803879 |
+ | de;de | 0.8517285961812183 | 0.8557680051557893 |
+ | es;es | 0.8519185309064691 | 0.8552243211580456 |
+ | fr;fr | 0.8430951067985064 | 0.8466614534379704 |
+ | hi;hi | 0.8178258630578092 | 0.8176462079184331 |
+ | it;it | 0.8475909574305637 | 0.8494216064459076 |
+ | ja;ja | 0.8435588859386477 | 0.8456031494178619 |
+ | nl;nl | 0.8486765104527032 | 0.8520856765262531 |
+ | pl;pl | 0.8407840177883407 | 0.8443070467300299 |
+ | pt;pt | 0.8534880178249296 | 0.8578544068829622 |
+ | ru;ru | 0.8390897585455678 | 0.8423041443534423 |
+ | tr;tr | 0.8382125451820572 | 0.8421587450058385 |
+ | zh-CN;zh-CN | 0.826233678946644 | 0.8248515460782744 |
+ | zh-TW;zh-TW | 0.8242683809675422 | 0.8235506799952028 |
+
+
+ | Cross-lingual Task | Pearson Cosine test | Spearman Cosine test |
+ |--------------------|---------------------|----------------------|
+ | en;ar | 0.7990830340462535 | 0.7956792016468148 |
+ | en;cs | 0.8381274879061265 | 0.8388713450024455 |
+ | en;de | 0.8414439600928739 | 0.8441971698649943 |
+ | en;es | 0.8442337511356952 | 0.8445035292903559 |
+ | en;fr | 0.8378437644605063 | 0.8387903367907733 |
+ | en;hi | 0.7951955086055527 | 0.7905052217683244 |
+ | en;it | 0.8415686372978766 | 0.8419480899107785 |
+ | en;ja | 0.8094306665283388 | 0.8032512280936449 |
+ | en;nl | 0.8389526140129767 | 0.8409310421803277 |
+ | en;pl | 0.8261309163979578 | 0.825976253023656 |
+ | en;pt | 0.8475546209070765 | 0.8506606391790897 |
+ | en;ru | 0.8248514914263723 | 0.8224871183202255 |
+ | en;tr | 0.8191803661207868 | 0.8194200775744044 |
+ | en;zh-CN | 0.8147678083378249 | 0.8102089470690433 |
+ | en;zh-TW | 0.8107272160374955 | 0.8056129680510944 |
+
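+ Scores like these can be computed with sentence-transformers' `EmbeddingSimilarityEvaluator` (the evaluator listed under Training below). A minimal sketch with toy sentence pairs and gold scores in place of the Multilingual STSB test split:
+
+ ```python
+ from sentence_transformers import SentenceTransformer
+ from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator
+
+ # Toy en-es pairs with gold similarity scores normalized to [0, 1]
+ sentences1 = ["A man is playing a guitar", "A woman is cooking", "A plane is taking off"]
+ sentences2 = ["Un hombre toca la guitarra", "Un gato duerme en el sofá", "Un avión está despegando"]
+ gold_scores = [0.9, 0.1, 0.95]
+
+ model = SentenceTransformer('AIDA-UPM/mstsb-paraphrase-multilingual-mpnet-base-v2')
+ evaluator = EmbeddingSimilarityEvaluator(sentences1, sentences2, gold_scores, name='en-es-demo')
+ print(evaluator(model))  # main similarity score (Spearman cosine by default)
+ ```
+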
+ ## Training
+ The model was trained with the following parameters:
+
+ **DataLoader**:
+
+ `torch.utils.data.dataloader.DataLoader` of length 687 with parameters:
+ ```
+ {'batch_size': 132, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
+ ```
+
+ **Loss**:
+
+ `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
+
+ Parameters of the fit()-Method:
+ ```
+ {
+     "callback": null,
+     "epochs": 2,
+     "evaluation_steps": 1000,
+     "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
+     "max_grad_norm": 1,
+     "optimizer_class": "<class 'transformers.optimization.AdamW'>",
+     "optimizer_params": {
+         "lr": 2e-05
+     },
+     "scheduler": "WarmupLinear",
+     "steps_per_epoch": null,
+     "warmup_steps": 140,
+     "weight_decay": 0.01
+ }
+ ```
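+
+ A minimal sketch of reproducing this setup with sentence-transformers, using toy pairs in place of the Multilingual STSB train split (the examples below are illustrative, not the actual training data):
+
+ ```python
+ from torch.utils.data import DataLoader
+ from sentence_transformers import SentenceTransformer, InputExample, losses
+
+ # Toy multilingual pairs scored in [0, 1]; the real run used the extended STSb train split
+ train_examples = [
+     InputExample(texts=["A plane is taking off", "Un avión está despegando"], label=0.95),
+     InputExample(texts=["A man is playing a flute", "Un oso camina sobre el hielo"], label=0.1),
+ ]
+
+ model = SentenceTransformer('sentence-transformers/paraphrase-multilingual-mpnet-base-v2')
+ train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=132)
+ train_loss = losses.CosineSimilarityLoss(model)
+
+ model.fit(
+     train_objectives=[(train_dataloader, train_loss)],
+     epochs=2,
+     scheduler='WarmupLinear',
+     warmup_steps=140,
+     optimizer_params={'lr': 2e-05},
+     weight_decay=0.01,
+     max_grad_norm=1,
+ )
+ ```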
+
+
+ ## Full Model Architecture
+ ```
+ SentenceTransformer(
+   (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
+   (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
+ )
+ ```
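+
+ A minimal sketch of assembling an equivalent two-module pipeline by hand (the Hub id is the one used in the Usage section; loading that repo id directly with `SentenceTransformer(...)` should give the same pipeline):
+
+ ```python
+ from sentence_transformers import SentenceTransformer, models
+
+ # Module 0: XLM-RoBERTa transformer with a 128-token sequence limit
+ word_embedding_model = models.Transformer('AIDA-UPM/mstsb-paraphrase-multilingual-mpnet-base-v2', max_seq_length=128)
+ # Module 1: mean pooling over the 768-dimensional token embeddings
+ pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension(), pooling_mode='mean')
+
+ model = SentenceTransformer(modules=[word_embedding_model, pooling_model])
+ embeddings = model.encode(["This is an example sentence", "Esta es otra frase de ejemplo"])
+ ```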
+
+ ## Citing & Authors
+
+ <!--- Describe where people can find more information -->
config.json ADDED
@@ -0,0 +1,27 @@
+ {
+ "_name_or_path": "/home/alvaro/.cache/torch/sentence_transformers/sentence-transformers_paraphrase-multilingual-mpnet-base-v2/",
+ "architectures": [
+ "XLMRobertaModel"
+ ],
+ "attention_probs_dropout_prob": 0.1,
+ "bos_token_id": 0,
+ "eos_token_id": 2,
+ "gradient_checkpointing": false,
+ "hidden_act": "gelu",
+ "hidden_dropout_prob": 0.1,
+ "hidden_size": 768,
+ "initializer_range": 0.02,
+ "intermediate_size": 3072,
+ "layer_norm_eps": 1e-05,
+ "max_position_embeddings": 514,
+ "model_type": "xlm-roberta",
+ "num_attention_heads": 12,
+ "num_hidden_layers": 12,
+ "output_past": true,
+ "pad_token_id": 1,
+ "position_embedding_type": "absolute",
+ "transformers_version": "4.8.2",
+ "type_vocab_size": 1,
+ "use_cache": true,
+ "vocab_size": 250002
+ }
config_sentence_transformers.json ADDED
@@ -0,0 +1,7 @@
+ {
+ "__version__": {
+ "sentence_transformers": "2.0.0",
+ "transformers": "4.7.0",
+ "pytorch": "1.9.0+cu102"
+ }
+ }
modules.json ADDED
@@ -0,0 +1,14 @@
+ [
+ {
+ "idx": 0,
+ "name": "0",
+ "path": "",
+ "type": "sentence_transformers.models.Transformer"
+ },
+ {
+ "idx": 1,
+ "name": "1",
+ "path": "1_Pooling",
+ "type": "sentence_transformers.models.Pooling"
+ }
+ ]
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:990cef9b421d6acfca4f685edfb089bb527b269c44dd9c3f03ce549f3bcf9215
+ size 1112259155
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+ "max_seq_length": 128,
+ "do_lower_case": false
+ }
sentencepiece.bpe.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cfc8146abe2a0488e9e2a0c56de7952f7c11ab059eca145a0a727afce0db2865
+ size 5069051
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"bos_token": "<s>", "eos_token": "</s>", "unk_token": "<unk>", "sep_token": "</s>", "pad_token": "<pad>", "cls_token": "<s>", "mask_token": {"content": "<mask>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": false}}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"bos_token": "<s>", "eos_token": "</s>", "sep_token": "</s>", "cls_token": "<s>", "unk_token": "<unk>", "pad_token": "<pad>", "mask_token": {"content": "<mask>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "model_max_length": 512, "special_tokens_map_file": null, "name_or_path": "/home/alvaro/.cache/torch/sentence_transformers/sentence-transformers_paraphrase-multilingual-mpnet-base-v2/", "tokenizer_class": "XLMRobertaTokenizer"}