jzju committed
Commit 320dfe6
1 Parent(s): f378e98

Update README.md

Files changed (1)
  1. README.md +76 -55
README.md CHANGED
@@ -4,14 +4,18 @@ tags:
 - sentence-transformers
 - feature-extraction
 - sentence-similarity
-
+datasets:
+- sbx/superlim-2
+language:
+- sv
 ---

 # jzju/sbert-sv-lim2

+This model is trained from [KBLab/bert-base-swedish-cased-new](https://huggingface.co/KBLab/bert-base-swedish-cased-new) with data from [sbx/superlim-2](https://huggingface.co/datasets/sbx/superlim-2).
+
 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 256 dimensional dense vector space and can be used for tasks like clustering or semantic search.

-<!--- Describe your model here -->

 ## Usage (Sentence-Transformers)

@@ -32,57 +36,74 @@ embeddings = model.encode(sentences)
 print(embeddings)
 ```

-
-
-## Evaluation Results
-
-<!--- Describe how your model was evaluated -->
-
-For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=jzju/sbert-sv-lim2)
-
-
-## Training
-The model was trained with the parameters:
-
-**DataLoader**:
-
-`torch.utils.data.dataloader.DataLoader` of length 177 with parameters:
-```
-{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
-```
-
-**Loss**:
-
-`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
-
-Parameters of the fit()-Method:
-```
-{
-    "epochs": 10,
-    "evaluation_steps": 0,
-    "evaluator": "NoneType",
-    "max_grad_norm": 1,
-    "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
-    "optimizer_params": {
-        "lr": 2e-05
-    },
-    "scheduler": "WarmupLinear",
-    "steps_per_epoch": null,
-    "warmup_steps": 100,
-    "weight_decay": 0.01
-}
-```
-
-
-## Full Model Architecture
-```
-SentenceTransformer(
-  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
-  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
-  (2): Dense({'in_features': 768, 'out_features': 256, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
-)
-```
-
-## Citing & Authors
-
-<!--- Describe where people can find more information -->
+## Training Code
+```python
+from datasets import load_dataset, concatenate_datasets
+from sentence_transformers import SentenceTransformer, InputExample, losses, models, util, datasets
+from torch.utils.data import DataLoader
+from torch import nn
+import random
+
+word_embedding_model = models.Transformer("KBLab/bert-base-swedish-cased-new", max_seq_length=256)
+pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension())
+dense_model = models.Dense(
+    in_features=pooling_model.get_sentence_embedding_dimension(), out_features=256, activation_function=nn.Tanh()
+)
+model = SentenceTransformer(modules=[word_embedding_model, pooling_model, dense_model])
+
+def pair():
+    def norm(x):
+        x["label"] = x["label"] / m
+        return x
+
+    dd = []
+    for sub in ["swepar", "swesim_relatedness", "swesim_similarity"]:
+        ds = concatenate_datasets([d for d in load_dataset("sbx/superlim-2", sub).values()])
+        if "sentence_1" in ds.features:
+            ds = ds.rename_column("sentence_1", "d1")
+            ds = ds.rename_column("sentence_2", "d2")
+        else:
+            ds = ds.rename_column("word_1", "d1")
+            ds = ds.rename_column("word_2", "d2")
+        m = max([d["label"] for d in ds])
+        dd.append(ds.map(norm))
+    ds = concatenate_datasets(dd)
+
+    train_examples = []
+    for d in ds:
+        train_examples.append(InputExample(texts=[d["d1"], d["d2"]], label=d["label"]))
+    train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=64)
+    train_loss = losses.CosineSimilarityLoss(model)
+    model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=10, warmup_steps=100)
+
+def nli():
+    ds = concatenate_datasets([d for d in load_dataset("sbx/superlim-2", "swenli").values()])
+
+    def add_to_samples(sent1, sent2, label):
+        if sent1 not in train_data:
+            train_data[sent1] = {0: set(), 1: set(), 2: set()}
+        train_data[sent1][label].add(sent2)
+
+    train_data = {}
+    for d in ds:
+        add_to_samples(d["premise"], d["hypothesis"], d["label"])
+        add_to_samples(d["hypothesis"], d["premise"], d["label"])
+
+    train_samples = []
+    for sent1, others in train_data.items():
+        if len(others[0]) > 0 and len(others[1]) > 0:
+            train_samples.append(
+                InputExample(texts=[sent1, random.choice(list(others[0])), random.choice(list(others[1]))])
+            )
+            train_samples.append(
+                InputExample(texts=[random.choice(list(others[0])), sent1, random.choice(list(others[1]))])
+            )
+    train_dataloader = datasets.NoDuplicatesDataLoader(train_samples, batch_size=64)
+    train_loss = losses.MultipleNegativesRankingLoss(model)
+    model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=100)
+
+pair()
+nli()
+model.save("sbert-sv-lim2")  # save() requires a target directory; the name here is illustrative
+
+```
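
For context, a minimal sketch of the clustering / semantic-search use the card describes, loading the model by the repo id above; the Swedish example sentences are illustrative only:

```python
from sentence_transformers import SentenceTransformer, util

# Load the published model (id assumed from this repository).
model = SentenceTransformer("jzju/sbert-sv-lim2")

sentences = [
    "Hunden springer i parken.",      # "The dog runs in the park."
    "En hund leker utomhus.",         # "A dog plays outdoors."
    "Jag lagar pasta till middag.",   # "I am cooking pasta for dinner."
]

# Encode to 256-dimensional vectors and compare by cosine similarity.
embeddings = model.encode(sentences, convert_to_tensor=True)
scores = util.cos_sim(embeddings[0], embeddings[1:])
print(scores)  # the related pair should score higher than the unrelated sentence
```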