hpprc committed
Commit 7b3bd29
1 Parent(s): 19fae3f

Update README.md

Files changed (1)
  1. README.md +68 -88

README.md CHANGED
@@ -1,5 +1,6 @@
  ---
- language: []
  library_name: sentence-transformers
  tags:
  - sentence-transformers
@@ -8,38 +9,13 @@ tags:
  base_model: tohoku-nlp/bert-base-japanese-v3
  widget: []
  pipeline_tag: sentence-similarity
  ---

- # SentenceTransformer based on tohoku-nlp/bert-base-japanese-v3
-
- This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [tohoku-nlp/bert-base-japanese-v3](https://huggingface.co/tohoku-nlp/bert-base-japanese-v3). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
-
- ## Model Details
-
- ### Model Description
- - **Model Type:** Sentence Transformer
- - **Base model:** [tohoku-nlp/bert-base-japanese-v3](https://huggingface.co/tohoku-nlp/bert-base-japanese-v3) <!-- at revision 65243d6e5629b969c77309f217bd7b1a79d43c7e -->
- - **Maximum Sequence Length:** 512 tokens
- - **Output Dimensionality:** 768 tokens
- - **Similarity Function:** Cosine Similarity
- <!-- - **Training Dataset:** Unknown -->
- <!-- - **Language:** Unknown -->
- <!-- - **License:** Unknown -->
-
- ### Model Sources
-
- - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
-
- ### Full Model Architecture
-
- ```
- MySentenceTransformer(
-   (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
-   (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
- )
- ```

  ## Usage

@@ -53,64 +29,82 @@ pip install -U sentence-transformers

  Then you can load this model and run inference.
  ```python
  from sentence_transformers import SentenceTransformer

  # Download from the 🤗 Hub
- model = SentenceTransformer("sentence_transformers_model_id")
- # Run inference
  sentences = [
-     'The weather is lovely today.',
-     "It's so sunny outside!",
-     'He drove to the stadium.',
  ]
- embeddings = model.encode(sentences)
- print(embeddings.shape)
- # [3, 768]
-
- # Get the similarity scores for the embeddings
- similarities = model.similarity(embeddings, embeddings)
- print(similarities.shape)
- # [3, 3]
- ```
-
- <!--
- ### Direct Usage (Transformers)
-
- <details><summary>Click to see the direct usage in Transformers</summary>
-
- </details>
- -->
-
- <!--
- ### Downstream Usage (Sentence Transformers)
-
- You can finetune this model on your own dataset.
-
- <details><summary>Click to expand</summary>
-
- </details>
- -->

- <!--
- ### Out-of-Scope Use
-
- *List how the model may foreseeably be misused and address what users ought not to do with the model.*
- -->

- <!--
- ## Bias, Risks and Limitations
-
- *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
- -->

- <!--
- ### Recommendations
-
- *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
- -->

  ## Training Details

  ### Framework Versions
  - Python: 3.10.13
  - Sentence Transformers: 3.0.0
@@ -120,24 +114,10 @@ You can finetune this model on your own dataset.
  - Datasets: 2.19.1
  - Tokenizers: 0.19.1

- ## Citation

  ### BibTeX

- <!--
- ## Glossary
-
- *Clearly define terms in order to be accessible across audiences.*
- -->
-
- <!--
- ## Model Card Authors
-
- *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
- -->
-
- <!--
- ## Model Card Contact
-
- *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
- -->

  ---
+ language:
+ - ja
  library_name: sentence-transformers
  tags:
  - sentence-transformers
  base_model: tohoku-nlp/bert-base-japanese-v3
  widget: []
  pipeline_tag: sentence-similarity
+ license: apache-2.0
+ datasets:
+ - cl-nagoya/ruri-dataset-ft
  ---

+ # Ruri: Japanese General Text Embeddings

  ## Usage

  Then you can load this model and run inference.
  ```python
+ import torch.nn.functional as F
  from sentence_transformers import SentenceTransformer

  # Download from the 🤗 Hub
+ model = SentenceTransformer("cl-nagoya/ruri-pt-base")
+
+ # Don't forget to add the prefix "クエリ: " for query-side texts or "文章: " for passage-side texts.
  sentences = [
+     "クエリ: 瑠璃色はどんな色?",
+     "文章: 瑠璃色(るりいろ)は、紫みを帯びた濃い青。名は、半貴石の瑠璃(ラピスラズリ、英: lapis lazuli)による。JIS慣用色名では「こい紫みの青」(略号 dp-pB)と定義している[1][2]。",
+     "クエリ: ワシやタカのように、鋭いくちばしと爪を持った大型の鳥類を総称して「何類」というでしょう?",
+     "文章: ワシ、タカ、ハゲワシ、ハヤブサ、コンドル、フクロウが代表的である。これらの猛禽類はリンネ前後の時代(17~18世紀)には鷲類・鷹類・隼類及び梟類に分類された。ちなみにリンネは狩りをする鳥を単一の目(もく)にまとめ、vultur(コンドル、ハゲワシ)、falco(ワシ、タカ、ハヤブサなど)、strix(フクロウ)、lanius(モズ)の4属を含めている。",
  ]

+ embeddings = model.encode(sentences, convert_to_tensor=True)
+ print(embeddings.size())
+ # [4, 1024]

+ similarities = F.cosine_similarity(embeddings.unsqueeze(0), embeddings.unsqueeze(1), dim=2)
+ print(similarities)
+ ```
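+
+ For retrieval-style use, the same prefixes distinguish the two sides of a query/passage pair. The helper below is an illustrative sketch, not part of this repository: it reuses the model id from the snippet above and ranks passages with the built-in cosine `similarity` utility of Sentence Transformers 3.x (the same utility the card's framework version provides).
+
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ model = SentenceTransformer("cl-nagoya/ruri-pt-base")
+
+ def search(query: str, passages: list[str], top_k: int = 2):
+     """Rank passages by cosine similarity to the query (illustrative helper)."""
+     # Prefix each side as the model expects.
+     query_emb = model.encode([f"クエリ: {query}"], convert_to_tensor=True)
+     passage_embs = model.encode([f"文章: {p}" for p in passages], convert_to_tensor=True)
+     # SentenceTransformer.similarity defaults to cosine similarity.
+     scores = model.similarity(query_emb, passage_embs).squeeze(0)  # shape: [len(passages)]
+     ranked = scores.argsort(descending=True)[:top_k]
+     return [(passages[int(i)], float(scores[int(i)])) for i in ranked]
+ ```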

+ ## Benchmarks
+
+ ### JMTEB
+ Evaluated with [JMTEB](https://github.com/sbintuitions/JMTEB).
+
+ |Model|#Param.|Avg.|Retrieval|STS|Classification|Reranking|Clustering|PairClassification|
+ |:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
+ |[cl-nagoya/sup-simcse-ja-base](https://huggingface.co/cl-nagoya/sup-simcse-ja-base)|111M|68.56|49.64|82.05|73.47|91.83|51.79|62.57|
+ |[cl-nagoya/sup-simcse-ja-large](https://huggingface.co/cl-nagoya/sup-simcse-ja-large)|337M|66.51|37.62|83.18|73.73|91.48|50.56|62.51|
+ |[cl-nagoya/unsup-simcse-ja-base](https://huggingface.co/cl-nagoya/unsup-simcse-ja-base)|111M|65.07|40.23|78.72|73.07|91.16|44.77|62.44|
+ |[cl-nagoya/unsup-simcse-ja-large](https://huggingface.co/cl-nagoya/unsup-simcse-ja-large)|337M|66.27|40.53|80.56|74.66|90.95|48.41|62.49|
+ |[pkshatech/GLuCoSE-base-ja](https://huggingface.co/pkshatech/GLuCoSE-base-ja)|133M|70.44|59.02|78.71|76.82|91.90|49.78|66.39|
+ ||||||||||
+ |[sentence-transformers/LaBSE](https://huggingface.co/sentence-transformers/LaBSE)|472M|64.70|40.12|76.56|72.66|91.63|44.88|62.33|
+ |[intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small)|118M|69.52|67.27|80.07|67.62|93.03|46.91|62.19|
+ |[intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base)|278M|70.12|68.21|79.84|69.30|92.85|48.26|62.26|
+ |[intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large)|560M|71.65|70.98|79.70|72.89|92.96|51.24|62.15|
+ ||||||||||
+ |OpenAI/text-embedding-ada-002|-|69.48|64.38|79.02|69.75|93.04|48.30|62.40|
+ |OpenAI/text-embedding-3-small|-|70.86|66.39|79.46|73.06|92.92|51.06|62.27|
+ |OpenAI/text-embedding-3-large|-|73.97|74.48|82.52|77.58|93.58|53.32|62.35|
+ ||||||||||
+ |[Ruri-Small](https://huggingface.co/cl-nagoya/ruri-small)|68M|71.53|69.41|82.79|76.22|93.00|51.19|62.11|
+ |[Ruri-Base](https://huggingface.co/cl-nagoya/ruri-base)|111M|71.91|69.82|82.87|75.58|92.91|54.16|62.38|
+ |[**Ruri-Large**](https://huggingface.co/cl-nagoya/ruri-large) (this model)|337M|73.31|73.02|83.13|77.43|92.99|51.82|62.29|

+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** Sentence Transformer
+ - **Base model:** [tohoku-nlp/bert-base-japanese-v3](https://huggingface.co/tohoku-nlp/bert-base-japanese-v3)
+ - **Maximum Sequence Length:** 512 tokens
+ - **Output Dimensionality:** 768
+ - **Similarity Function:** Cosine Similarity
+ - **Language:** Japanese
+ - **License:** Apache 2.0
+ - **Paper:** https://arxiv.org/abs/2409.07737
+ <!-- - **Training Dataset:** Unknown -->

+ ### Full Model Architecture
 
96
 
97
+ ```
98
+ SentenceTransformer(
99
+ (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
100
+ (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
101
+ )
102
+ ```
103
 
 
 
104
 
  ## Training Details

  ### Framework Versions
  - Python: 3.10.13
  - Sentence Transformers: 3.0.0
  - Datasets: 2.19.1
  - Tokenizers: 0.19.1

+ <!-- ## Citation

  ### BibTeX
+ -->

+ ## License
+ This model is published under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).