pere committed on
Commit 2b630a2
1 Parent(s): 928124d

Update README.md

Files changed (1):
  1. README.md +4 -3
README.md CHANGED
````diff
@@ -28,10 +28,9 @@ widget:
 ---
 
 # nb-sbert
+This is a [sentence-transformers](https://www.SBERT.net) model trained on the [machine-translated mnli-dataset](https://huggingface.co/datasets/NbAiLab/mnli-norwegian), starting from [nb-bert-base](https://huggingface.co/NbAiLab/nb-bert-base).
 
-This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
-
-<!--- Describe your model here -->
+The model maps sentences & paragraphs to a 768-dimensional dense vector space. This vector can be used for tasks like clustering and semantic search. Below we give some examples of how to use the model in different frameworks. The easiest way is to simply measure the cosine distance between two sentences: sentences that are close in meaning will have a small cosine distance and a similarity close to 1. The model is trained to preserve this distance across languages as well, so ideally an English-Norwegian sentence pair with the same meaning should have high similarity.
 
 ### Usage (Sentence-Transformers)
 
@@ -50,6 +49,8 @@ sentences = ["This is an example sentence", "Each sentence is converted"]
 model = SentenceTransformer('NbAiLab/nb-sbert')
 embeddings = model.encode(sentences)
 print(embeddings)
+
+
 ```
 
 
````
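The cross-lingual cosine-similarity check described in the added paragraph can be sketched as follows. This is a minimal illustration, not part of the commit: the English-Norwegian sentence pair is invented, and it assumes a sentence-transformers release that ships `util.cos_sim` (2.x does).

```python
# Illustrative sketch (not part of this commit): measuring cross-lingual
# similarity with nb-sbert, as described in the added README paragraph.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('NbAiLab/nb-sbert')

# A hypothetical English-Norwegian pair with the same meaning;
# a cosine similarity close to 1 indicates the meanings are close.
embeddings = model.encode([
    "This is a sentence.",
    "Dette er en setning.",
])
print(util.cos_sim(embeddings[0], embeddings[1]))
```

On older sentence-transformers versions the same function is available as `util.pytorch_cos_sim`.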