pere committed on
Commit 45afc65
1 Parent(s): 9aeb321

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -27,8 +27,8 @@ widget:
 
 ---
 
-# nb-sbert
-This is a [sentence-transformers](https://www.SBERT.net) model trained on the [machine translated mnli-dataset](https://huggingface.co/datasets/NbAiLab/mnli-norwegian) starting from [nb-bert-base](https://huggingface.co/NbAiLab/nb-bert-base).
+# NB-SBERT
+[Sentence-transformers](https://www.SBERT.net) model trained on the [machine translated mnli-dataset](https://huggingface.co/datasets/NbAiLab/mnli-norwegian) starting from [nb-bert-base](https://huggingface.co/NbAiLab/nb-bert-base).
 
 The model maps sentences & paragraphs to a 768 dimensional dense vector space. This vector can be used for tasks like clustering and semantic search. Below we give some examples on how to use the model in different framework. The easiest way is to simply measure the cosine distance between two sentences. Sentences that are close to each other in meaning, will have a small cosine distance and a similarity close to 1. The model is trained in a way where we try to keep this distance also between languages. Ideally a English-Norwegian sentence pair should have high similarity.
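
The intro paragraph touched by this commit describes measuring cosine distance between sentence embeddings, including across an English-Norwegian pair. As a minimal sketch of that usage with the sentence-transformers library, assuming the model is published on the Hub under an id such as `NbAiLab/nb-sbert` (the exact id is not stated in this diff):

```python
# Minimal sketch: encode two sentences and compare them with cosine similarity.
# The model id "NbAiLab/nb-sbert" is an assumption; substitute the actual Hub id.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("NbAiLab/nb-sbert")

sentences = [
    "This is a sentence.",
    "Dette er en setning.",  # Norwegian translation of the line above
]
embeddings = model.encode(sentences)  # two 768-dimensional vectors

# A similarity close to 1 means the sentences are close in meaning; per the
# README, this should also hold for this English-Norwegian sentence pair.
similarity = util.cos_sim(embeddings[0], embeddings[1])
print(float(similarity))
```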