FareedKhan committed on
Commit
8ef8bbe
1 Parent(s): a2e234c

Add new SentenceTransformer model.

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+ "word_embedding_dimension": 1024,
+ "pooling_mode_cls_token": false,
+ "pooling_mode_mean_tokens": true,
+ "pooling_mode_max_tokens": false,
+ "pooling_mode_mean_sqrt_len_tokens": false,
+ "pooling_mode_weightedmean_tokens": false,
+ "pooling_mode_lasttoken": false,
+ "include_prompt": true
+ }
README.md ADDED
@@ -0,0 +1,715 @@
+ ---
+ base_model: mixedbread-ai/deepset-mxbai-embed-de-large-v1
+ library_name: sentence-transformers
+ metrics:
+ - cosine_accuracy@1
+ - cosine_accuracy@3
+ - cosine_accuracy@5
+ - cosine_accuracy@10
+ - cosine_precision@1
+ - cosine_precision@3
+ - cosine_precision@5
+ - cosine_precision@10
+ - cosine_recall@1
+ - cosine_recall@3
+ - cosine_recall@5
+ - cosine_recall@10
+ - cosine_ndcg@10
+ - cosine_mrr@10
+ - cosine_map@100
+ pipeline_tag: sentence-similarity
+ tags:
+ - sentence-transformers
+ - sentence-similarity
+ - feature-extraction
+ - generated_from_trainer
+ - dataset_size:1814
+ - loss:MatryoshkaLoss
+ - loss:MultipleNegativesRankingLoss
+ widget:
+ - source_sentence: '
+
+ The document you provided seems to be a list of compounds, some of which are well-known
+ drugs or drugs used in experimental contexts, and others that don''t appear to
+ have recognized applications in medicine or science. The list includes substances
+ like cortisol, a hormone, and filopram, which is related to anesthetics or possibly
+ a misprint or misclassification. The side effects listed are also a mix, with
+ some being plausible reactions to certain medication (like Edema, dysphagia) and
+ others being quite unusual for commonly recognized drug interactions (like retinal
+ vein occlusion, which is not a typical side effect of most medications).
+
+
+ It would be useful to have labels or references indicating which of these compounds
+ are actually drugs and which are other chemical substances. For instance, cortisol,
+ if given its correct context, would typically have side effects associated with
+ cortisol therapy such as fluid retention or electrolyte imbalances.
+
+
+ If you need detailed information on how these substances work or what their possible
+ side effects might be, you''ll likely need to refer to a medical compendium or
+ a pharmacology resource for accurate data. It''s also important to clarify the
+ intended use for this list, whether for educational purposes, research, or another
+ context; the provided list appears to be a jumbled amalgamation, which might not
+ have clear clinical relevance without additional detail.'
+ sentences:
+ - Can you suggest medications targeting the GC gene/protein with a proven synergy
+ with AVE9633?
+ - Could you help identify the gene or protein that facilitates sodium-dependent
+ transportation and elimination of organic anions, with a particular emphasis on
+ those implicated in the cellular efflux of potentially hazardous organic anions?
+ Additionally, I'm interested in understanding if this gene or protein also mediates
+ the transport of drugs known to exhibit synergistic pharmacological interactions
+ with Ractopamine.
+ - Can you list the medications suitable for benign prostatic hyperplasia and tell
+ me if any are linked to dysphagia as a side effect?
+ - source_sentence: '
+
+
+ The provided information describes a gene that plays a role in multiple biological
+ processes and is linked with certain diseases. Here'
+ sentences:
+ - Which genes or proteins interact with the "Regulation of HSF1-mediated heat shock
+ response" pathway and also engage in protein-protein interactions with PRNP?
+ - Which anatomical parts lack the expression of genes or proteins involved in the
+ L-alanine degradation pathway?
+ - What is the name of a disease classified as a variant or subtype of sinoatrial
+ node disease in the latest medical disease taxonomy?
+ - source_sentence: '
+
+ The list you''ve provided contains a variety of medications, including antidepressants,
+ antihistamines, anxiolytics, and more. Here''s a breakdown by category:
+
+
+ ### Antidepressants
+
+ - **Amphetamine**
+
+ - **Cevimeline**
+
+ - **Esmolol**
+
+ - **Bortezomib**
+
+ - **'
+ sentences:
+ - What are some related conditions to triple-negative breast cancer that could be
+ causing persistent fatigue?
+ - Which medication is effective against simple Plasmodium falciparum infections
+ and functions by engaging with genes or proteins that interact with the minor
+ groove of DNA rich in adenine and thymine?
+ - Which diseases associated with SRSF2 gene mutations are primarily found in adults
+ and the elderly?
+ - source_sentence: '
+
+
+ The drug mentioned in the query is "Dabigatran". It belongs to the class of drugs
+ known as direct thrombin inhibitors. This class of drugs is used primarily for
+ the prevention and treatment of thromboembolic disorders.
+
+
+ Regarding potential side effects, they include:
+
+ 1. Inflammatory abnormality of the skin (Erythema)
+
+ 2. Hemolytic anemia (a type of anemia where red blood cells are destroyed prematurely)
+
+ 3. Thrombocytopenia (low platelet count)
+
+ 4. Pancytopenia (a decrease in the number of all types of blood cells - red, white,
+ and platelet cells)
+
+ 5. Fever
+
+ 6. Pain
+
+ 7. Seizure
+
+ 8. Headache
+
+ 9. Vomiting
+
+ 10. Abdominal pain
+
+ 11. Hyperactivity
+
+ 12. Erythroderma (a type of skin flare characterized by a redness over the trunk
+ and limbs)
+
+ 13. Vertigo (a sensation of spinning or motion)
+
+ 14. Granulocytopenia (low neutrophil count)
+
+ 15. Pruritus (severe itching)
+
+ 16. Confusion
+
+ 17. Eosinophilia (a condition characterized by an increased number of eosinophils,
+ a type of white blood cell)
+
+ 18. Anaphylactic shock (a serious allergic reaction)
+
+ 19. Hyperkinetic movements
+
+ 20. Nausea
+
+ 21. Acute sinusitis (inflammation of the sinus cavities)
+
+ 22. Agitation
+
+ 23. Excessive daytime somnolence (excessively sleepy during the day)
+
+ 24. Aplastic anemia (a condition where the bone marrow fails to produce enough
+ new blood cells)
+
+ 25. Increased blood urea nitrogen (BUN) (a marker of kidney function, indicating
+ the kidneys are not working properly)
+
+ 26. Prolonged prothrombin time (an indication of an increased risk of bleeding,
+ due to a reduction in clotting protein)
+
+ 27. Recurrent tonsillitis (repeated inflammation of the tonsils)
+
+
+ Dabigatran works by inhibiting thrombin (Factor IIa), an enzyme involved in the
+ clotting process. If any of these side effects are experienced, it is important
+ to seek medical attention or consult with a healthcare provider.'
+ sentences:
+ - What are the clinical manifestations or phenotypic characteristics associated
+ with the subtype of myocardial infarction known as posteroinferior?
+ - Could you supply a list of drugs prescribed for respiratory infections that may
+ also lead to side effects like hemolytic anemia and nausea?
+ - Which diseases are associated with the FAM111A gene that present with both skeletal
+ and endocrine system abnormalities?
+ - source_sentence: '
+
+ The list you provided seems to be a mix of various chemical substances, some of
+ which appear to be medications, others are chemical compounds, and a few could
+ be substances from other fields (e.g., water treatment, food additives). To be
+ more precise, it would be helpful to categorize them properly based on their common
+ usage:
+
+
+ ### Medications and Drugs:
+
+ - **Antibiotics**: Cefoxitin, Tobramycin, Amikacin
+
+ - ** pain and inflammation relievers**: Benoxaprofen, Daptomycin, Ceftolozane,
+ Salicylates (Benzydamine, Dexamethasone sodium phosphate)
+
+ - **Intravenous fluids**: Magnesium trisilicate
+
+ - **Antivirals**: Ribavirin, Inotersen
+
+ - **Antibacterial agents**: Epirizole, Floctafenine, Flunixin
+
+ - **Vaccines**: Vaborbactam, Brincidofovir, Adefovir
+
+ - **Neuromodulators**: Cefatrizine, Bumadizone, Alminoprofen
+
+ - **Cancer treatments**: Colistin, Nitrofurantoin, Sisomicin
+
+
+ ### Chemical Compounds:
+
+ - **Salts and salts of acidity**: Fosfomycin, Azosemide, Mofebutazone
+
+ - **Amino acids**: Phenylalanine, Nitrosalicylic'
+ sentences:
+ - Is there a regulatory function associated with the epidermal growth factor receptor
+ or its interacting proteins in the control of genes or proteins that participate
+ in the inactivation of fast sodium channels during Phase 1 of cardiac action potential
+ propagation?
+ - Which diseases, either as subtypes or complications, should be considered when
+ a patient shows symptoms suggesting neoplastic syndromes?
+ - Which drugs interact with the SERPINA1 gene/protein as carriers?
+ model-index:
+ - name: SentenceTransformer based on mixedbread-ai/deepset-mxbai-embed-de-large-v1
+ results:
+ - task:
+ type: information-retrieval
+ name: Information Retrieval
+ dataset:
+ name: dim 768
+ type: dim_768
+ metrics:
+ - type: cosine_accuracy@1
+ value: 0.3910891089108911
+ name: Cosine Accuracy@1
+ - type: cosine_accuracy@3
+ value: 0.4752475247524752
+ name: Cosine Accuracy@3
+ - type: cosine_accuracy@5
+ value: 0.49504950495049505
+ name: Cosine Accuracy@5
+ - type: cosine_accuracy@10
+ value: 0.5544554455445545
+ name: Cosine Accuracy@10
+ - type: cosine_precision@1
+ value: 0.3910891089108911
+ name: Cosine Precision@1
+ - type: cosine_precision@3
+ value: 0.15841584158415842
+ name: Cosine Precision@3
+ - type: cosine_precision@5
+ value: 0.09900990099009901
+ name: Cosine Precision@5
+ - type: cosine_precision@10
+ value: 0.05544554455445544
+ name: Cosine Precision@10
+ - type: cosine_recall@1
+ value: 0.3910891089108911
+ name: Cosine Recall@1
+ - type: cosine_recall@3
+ value: 0.4752475247524752
+ name: Cosine Recall@3
+ - type: cosine_recall@5
+ value: 0.49504950495049505
+ name: Cosine Recall@5
+ - type: cosine_recall@10
+ value: 0.5544554455445545
+ name: Cosine Recall@10
+ - type: cosine_ndcg@10
+ value: 0.4669635292605997
+ name: Cosine Ndcg@10
+ - type: cosine_mrr@10
+ value: 0.439788621719315
+ name: Cosine Mrr@10
+ - type: cosine_map@100
+ value: 0.44615433269461197
+ name: Cosine Map@100
+ ---
+
+ # SentenceTransformer based on mixedbread-ai/deepset-mxbai-embed-de-large-v1
+
+ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [mixedbread-ai/deepset-mxbai-embed-de-large-v1](https://huggingface.co/mixedbread-ai/deepset-mxbai-embed-de-large-v1) on the json dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
+
+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** Sentence Transformer
+ - **Base model:** [mixedbread-ai/deepset-mxbai-embed-de-large-v1](https://huggingface.co/mixedbread-ai/deepset-mxbai-embed-de-large-v1) <!-- at revision fe450620a047ac704e100d84aebe7cd3fc137021 -->
+ - **Maximum Sequence Length:** 512 tokens
+ - **Output Dimensionality:** 1024 dimensions
+ - **Similarity Function:** Cosine Similarity
+ - **Training Dataset:**
+ - json
+ <!-- - **Language:** Unknown -->
+ <!-- - **License:** Unknown -->
+
+ ### Model Sources
+
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
+ - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
+
+ ### Full Model Architecture
+
+ ```
+ SentenceTransformer(
+ (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
+ (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
+ (2): Normalize()
+ )
+ ```
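The Pooling module above averages token embeddings (`pooling_mode_mean_tokens: True`) and the Normalize module L2-normalizes the result, so dot products between sentence embeddings equal cosine similarities. A minimal numpy sketch of these two steps, using made-up token embeddings and an attention mask in place of the model's actual tensors:

```python
import numpy as np

# Hypothetical token embeddings for one sentence: 4 tokens x 1024 dims,
# where the last token is padding (attention_mask = 0).
token_embeddings = np.random.randn(4, 1024)
attention_mask = np.array([1, 1, 1, 0])

# Mean pooling: average only over non-padding tokens.
masked = token_embeddings * attention_mask[:, None]
sentence_embedding = masked.sum(axis=0) / attention_mask.sum()

# Normalize: L2-normalize so a plain dot product is a cosine similarity.
sentence_embedding = sentence_embedding / np.linalg.norm(sentence_embedding)

print(sentence_embedding.shape)  # (1024,)
```

This is only an illustration of what the architecture's last two modules compute; in practice `model.encode(...)` performs these steps internally.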
+
+ ## Usage
+
+ ### Direct Usage (Sentence Transformers)
+
+ First install the Sentence Transformers library:
+
+ ```bash
+ pip install -U sentence-transformers
+ ```
+
+ Then you can load this model and run inference.
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ # Download from the 🤗 Hub
+ model = SentenceTransformer("FareedKhan/mixedbread-ai_deepset-mxbai-embed-de-large-v1_FareedKhan_prime_synthetic_data_2k_3_8")
+ # Run inference
+ sentences = [
+     '\nThe list you provided seems to be a mix of various chemical substances, some of which appear to be medications, others are chemical compounds, and a few could be substances from other fields (e.g., water treatment, food additives). To be more precise, it would be helpful to categorize them properly based on their common usage:\n\n### Medications and Drugs:\n- **Antibiotics**: Cefoxitin, Tobramycin, Amikacin\n- ** pain and inflammation relievers**: Benoxaprofen, Daptomycin, Ceftolozane, Salicylates (Benzydamine, Dexamethasone sodium phosphate)\n- **Intravenous fluids**: Magnesium trisilicate\n- **Antivirals**: Ribavirin, Inotersen\n- **Antibacterial agents**: Epirizole, Floctafenine, Flunixin\n- **Vaccines**: Vaborbactam, Brincidofovir, Adefovir\n- **Neuromodulators**: Cefatrizine, Bumadizone, Alminoprofen\n- **Cancer treatments**: Colistin, Nitrofurantoin, Sisomicin\n\n### Chemical Compounds:\n- **Salts and salts of acidity**: Fosfomycin, Azosemide, Mofebutazone\n- **Amino acids**: Phenylalanine, Nitrosalicylic',
+     'Which drugs interact with the SERPINA1 gene/protein as carriers?',
+     'Is there a regulatory function associated with the epidermal growth factor receptor or its interacting proteins in the control of genes or proteins that participate in the inactivation of fast sodium channels during Phase 1 of cardiac action potential propagation?',
+ ]
+ embeddings = model.encode(sentences)
+ print(embeddings.shape)
+ # [3, 1024]
+
+ # Get the similarity scores for the embeddings
+ similarities = model.similarity(embeddings, embeddings)
+ print(similarities.shape)
+ # [3, 3]
+ ```
+
+ <!--
+ ### Direct Usage (Transformers)
+
+ <details><summary>Click to see the direct usage in Transformers</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Downstream Usage (Sentence Transformers)
+
+ You can finetune this model on your own dataset.
+
+ <details><summary>Click to expand</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Out-of-Scope Use
+
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+
+ ## Evaluation
+
+ ### Metrics
+
+ #### Information Retrieval
+ * Dataset: `dim_768`
+ * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
+
+ | Metric              | Value      |
+ |:--------------------|:-----------|
+ | cosine_accuracy@1   | 0.3911     |
+ | cosine_accuracy@3   | 0.4752     |
+ | cosine_accuracy@5   | 0.495      |
+ | cosine_accuracy@10  | 0.5545     |
+ | cosine_precision@1  | 0.3911     |
+ | cosine_precision@3  | 0.1584     |
+ | cosine_precision@5  | 0.099      |
+ | cosine_precision@10 | 0.0554     |
+ | cosine_recall@1     | 0.3911     |
+ | cosine_recall@3     | 0.4752     |
+ | cosine_recall@5     | 0.495      |
+ | cosine_recall@10    | 0.5545     |
+ | cosine_ndcg@10      | 0.467      |
+ | cosine_mrr@10       | 0.4398     |
+ | **cosine_map@100**  | **0.4462** |
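Because each query in this evaluation has a single relevant document, accuracy@k equals recall@k, and precision@k is accuracy@k divided by k. The metrics themselves reduce to a few lines of arithmetic; a small sketch with toy rankings (not the actual evaluation data) showing how accuracy@k and MRR@10 are computed in that single-relevant setting:

```python
# Toy data: for each query, the 1-based rank of its single relevant
# document in the retrieved list, or None if it missed the top 10.
ranks = [1, 3, None, 2, 1]

def accuracy_at_k(ranks, k):
    # Fraction of queries whose relevant document is in the top k.
    return sum(1 for r in ranks if r is not None and r <= k) / len(ranks)

def mrr_at_10(ranks):
    # Mean reciprocal rank, counting 0 for misses beyond the top 10.
    return sum(1 / r for r in ranks if r is not None and r <= 10) / len(ranks)

print(accuracy_at_k(ranks, 1))        # 0.4
print(accuracy_at_k(ranks, 3))        # 0.8
print(round(mrr_at_10(ranks), 4))     # 0.5667
```

The `InformationRetrievalEvaluator` linked above computes these same quantities (plus NDCG and MAP) from the model's cosine-similarity rankings.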
+
+ <!--
+ ## Bias, Risks and Limitations
+
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+
+ <!--
+ ### Recommendations
+
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+
+ ## Training Details
+
+ ### Training Dataset
+
+ #### json
+
+ * Dataset: json
+ * Size: 1,814 training samples
+ * Columns: <code>positive</code> and <code>anchor</code>
+ * Approximate statistics based on the first 1000 samples:
+   |         | positive                                                                            | anchor                                                                              |
+   |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
+   | type    | string                                                                              | string                                                                              |
+   | details | <ul><li>min: 3 tokens</li><li>mean: 267.06 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 39.66 tokens</li><li>max: 120 tokens</li></ul> |
+ * Samples:
+   | positive | anchor |
+   |:---------|:-------|
+   | <code><br><br>Based on the provided information, it appears you are describing a complex biological system involving various molecules, drugs, diseases, and anatomical structures. Here's a breakdown:<br><br>### Key Entities<br>1. **Molecules and Targets**<br>   - Mentioned molecules include metabolites, phenols, and drugs, with specific functional groups related to their chemical properties.<br>   - Targets include enzymes (like acetyl-CoA carboxylase) and diseases causing various health conditions (e.g., Finnish type amyloidosis, lung cancer).<br><br>2. **Functionality and Interactions**<br>   - The molecules and drugs interact with various biological processes, pathways, and bodily systems.<br></code> | <code>Identify common genetic targets that interact with both N-(3,5-dibromo-4-hydroxyphenyl)benzamide and 1-Naphthylamine-5-sulfonic acid.</code> |
+   | <code><br>The provided list appears to be a collection of gene symbols related to cancer. Gene symbols are used in genetics and molecular biology to identify genes. Each symbol is associated with a specific gene that plays a role in cellular functions, including cancer processes. When studying cancer, researchers often analyze these genes to understand their roles in tumor development, potential as targets for therapy, or as indicators for patient prognosis. For example, some genes listed are known oncogenes or tumor suppressor genes:<br><br>- TP53: A tumor suppressor gene that when mutated can lead to uncontrolled cell growth.<br>- P53, POLD1, PTEN: These are well-known tumor suppressors that help regulate cell division and DNA repair.<br>- BRCA</code> | <code>Which anatomical structures lack expression of genes or proteins involved in the homogentisate degradation pathway?</code> |
+   | <code><br><br>The gene in question appears to have a wide range of functions across various biological processes and body systems. It's involved in several key areas that regulate cellular responses, metabolic processes, and organ development. Here is a summary of its potential roles:<br><br>1. **Cell Growth and Regulation**: The gene contributes to growth control in cells, particularly in smooth muscle cells, and seems to influence cell proliferation, which is essential for tissue repair and development.<br><br>2. **Nerve Function**: It plays a role in functions like signal transduction, neurotrophin signaling, and regulation of neural activity, suggesting it’s involved in neural health and development.<br><br>3. **Metabolic Processes**: There is evidence linking</code> | <code>Identify genes or proteins that interact with angiotensin-converting enzyme 2 (ACE2) and are linked to a common phenotype or effect.</code> |
+ * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
+   ```json
+   {
+       "loss": "MultipleNegativesRankingLoss",
+       "matryoshka_dims": [
+           768
+       ],
+       "matryoshka_weights": [
+           1
+       ],
+       "n_dims_per_step": -1
+   }
+   ```
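Matryoshka training makes a prefix of the embedding (here the first 768 of 1024 dimensions, per `matryoshka_dims: [768]`) usable as a cheaper embedding on its own. A sketch of how such truncated embeddings would be compared at inference time, using random vectors as stand-ins for real model output:

```python
import numpy as np

rng = np.random.default_rng(0)
full = rng.standard_normal((3, 1024))  # stand-in for 1024-dim model embeddings

# Keep only the first 768 dimensions, matching matryoshka_dims=[768],
# then re-normalize so cosine similarity is a plain dot product again.
truncated = full[:, :768]
truncated = truncated / np.linalg.norm(truncated, axis=1, keepdims=True)

similarities = truncated @ truncated.T
print(similarities.shape)  # (3, 3)
```

Truncating without Matryoshka training degrades quality sharply; the loss above is what keeps the 768-dim prefix competitive with the full vector.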
+
+ ### Training Hyperparameters
+ #### Non-Default Hyperparameters
+
+ - `eval_strategy`: epoch
+ - `learning_rate`: 1e-05
+ - `warmup_ratio`: 0.1
+ - `bf16`: True
+ - `tf32`: False
+ - `load_best_model_at_end`: True
+
+ #### All Hyperparameters
+ <details><summary>Click to expand</summary>
+
+ - `overwrite_output_dir`: False
+ - `do_predict`: False
+ - `eval_strategy`: epoch
+ - `prediction_loss_only`: True
+ - `per_device_train_batch_size`: 8
+ - `per_device_eval_batch_size`: 8
+ - `per_gpu_train_batch_size`: None
+ - `per_gpu_eval_batch_size`: None
+ - `gradient_accumulation_steps`: 1
+ - `eval_accumulation_steps`: None
+ - `torch_empty_cache_steps`: None
+ - `learning_rate`: 1e-05
+ - `weight_decay`: 0.0
+ - `adam_beta1`: 0.9
+ - `adam_beta2`: 0.999
+ - `adam_epsilon`: 1e-08
+ - `max_grad_norm`: 1.0
+ - `num_train_epochs`: 3
+ - `max_steps`: -1
+ - `lr_scheduler_type`: linear
+ - `lr_scheduler_kwargs`: {}
+ - `warmup_ratio`: 0.1
+ - `warmup_steps`: 0
+ - `log_level`: passive
+ - `log_level_replica`: warning
+ - `log_on_each_node`: True
+ - `logging_nan_inf_filter`: True
+ - `save_safetensors`: True
+ - `save_on_each_node`: False
+ - `save_only_model`: False
+ - `restore_callback_states_from_checkpoint`: False
+ - `no_cuda`: False
+ - `use_cpu`: False
+ - `use_mps_device`: False
+ - `seed`: 42
+ - `data_seed`: None
+ - `jit_mode_eval`: False
+ - `use_ipex`: False
+ - `bf16`: True
+ - `fp16`: False
+ - `fp16_opt_level`: O1
+ - `half_precision_backend`: auto
+ - `bf16_full_eval`: False
+ - `fp16_full_eval`: False
+ - `tf32`: False
+ - `local_rank`: 0
+ - `ddp_backend`: None
+ - `tpu_num_cores`: None
+ - `tpu_metrics_debug`: False
+ - `debug`: []
+ - `dataloader_drop_last`: False
+ - `dataloader_num_workers`: 0
+ - `dataloader_prefetch_factor`: None
+ - `past_index`: -1
+ - `disable_tqdm`: False
+ - `remove_unused_columns`: True
+ - `label_names`: None
+ - `load_best_model_at_end`: True
+ - `ignore_data_skip`: False
+ - `fsdp`: []
+ - `fsdp_min_num_params`: 0
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
+ - `fsdp_transformer_layer_cls_to_wrap`: None
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
+ - `deepspeed`: None
+ - `label_smoothing_factor`: 0.0
+ - `optim`: adamw_torch
+ - `optim_args`: None
+ - `adafactor`: False
+ - `group_by_length`: False
+ - `length_column_name`: length
+ - `ddp_find_unused_parameters`: None
+ - `ddp_bucket_cap_mb`: None
+ - `ddp_broadcast_buffers`: False
+ - `dataloader_pin_memory`: True
+ - `dataloader_persistent_workers`: False
+ - `skip_memory_metrics`: True
+ - `use_legacy_prediction_loop`: False
+ - `push_to_hub`: False
+ - `resume_from_checkpoint`: None
+ - `hub_model_id`: None
+ - `hub_strategy`: every_save
+ - `hub_private_repo`: False
+ - `hub_always_push`: False
+ - `gradient_checkpointing`: False
+ - `gradient_checkpointing_kwargs`: None
+ - `include_inputs_for_metrics`: False
+ - `eval_do_concat_batches`: True
+ - `fp16_backend`: auto
+ - `push_to_hub_model_id`: None
+ - `push_to_hub_organization`: None
+ - `mp_parameters`:
+ - `auto_find_batch_size`: False
+ - `full_determinism`: False
+ - `torchdynamo`: None
+ - `ray_scope`: last
+ - `ddp_timeout`: 1800
+ - `torch_compile`: False
+ - `torch_compile_backend`: None
+ - `torch_compile_mode`: None
+ - `dispatch_batches`: None
+ - `split_batches`: None
+ - `include_tokens_per_second`: False
+ - `include_num_input_tokens_seen`: False
+ - `neftune_noise_alpha`: None
+ - `optim_target_modules`: None
+ - `batch_eval_metrics`: False
+ - `eval_on_start`: False
+ - `use_liger_kernel`: False
+ - `eval_use_gather_object`: False
+ - `batch_sampler`: batch_sampler
+ - `multi_dataset_batch_sampler`: proportional
+
+ </details>
+
+ ### Training Logs
+ | Epoch   | Step    | Training Loss | dim_768_cosine_map@100 |
+ |:-------:|:-------:|:-------------:|:----------------------:|
+ | 0       | 0       | -             | 0.3930                 |
+ | 0.0441  | 10      | 1.18          | -                      |
+ | 0.0881  | 20      | 1.0507        | -                      |
+ | 0.1322  | 30      | 0.9049        | -                      |
+ | 0.1762  | 40      | 0.8999        | -                      |
+ | 0.2203  | 50      | 0.6519        | -                      |
+ | 0.2643  | 60      | 0.5479        | -                      |
+ | 0.3084  | 70      | 0.6493        | -                      |
+ | 0.3524  | 80      | 0.4706        | -                      |
+ | 0.3965  | 90      | 0.5459        | -                      |
+ | 0.4405  | 100     | 0.5692        | -                      |
+ | 0.4846  | 110     | 0.7834        | -                      |
+ | 0.5286  | 120     | 0.5341        | -                      |
+ | 0.5727  | 130     | 0.5343        | -                      |
+ | 0.6167  | 140     | 0.4865        | -                      |
+ | 0.6608  | 150     | 0.3942        | -                      |
+ | 0.7048  | 160     | 0.3578        | -                      |
+ | 0.7489  | 170     | 0.5158        | -                      |
+ | 0.7930  | 180     | 0.3426        | -                      |
+ | 0.8370  | 190     | 0.5789        | -                      |
+ | 0.8811  | 200     | 0.5271        | -                      |
+ | 0.9251  | 210     | 0.577         | -                      |
+ | 0.9692  | 220     | 0.5193        | -                      |
+ | 1.0     | 227     | -             | 0.4354                 |
+ | 1.0132  | 230     | 0.4598        | -                      |
+ | 1.0573  | 240     | 0.2735        | -                      |
+ | 1.1013  | 250     | 0.2919        | -                      |
+ | 1.1454  | 260     | 0.3206        | -                      |
+ | 1.1894  | 270     | 0.2851        | -                      |
+ | 1.2335  | 280     | 0.3899        | -                      |
+ | 1.2775  | 290     | 0.3279        | -                      |
+ | 1.3216  | 300     | 0.2155        | -                      |
+ | 1.3656  | 310     | 0.3471        | -                      |
+ | 1.4097  | 320     | 0.327         | -                      |
+ | 1.4537  | 330     | 0.229         | -                      |
+ | 1.4978  | 340     | 0.2902        | -                      |
+ | 1.5419  | 350     | 0.3216        | -                      |
+ | 1.5859  | 360     | 0.2902        | -                      |
+ | 1.6300  | 370     | 0.4527        | -                      |
+ | 1.6740  | 380     | 0.1583        | -                      |
+ | 1.7181  | 390     | 0.3144        | -                      |
+ | 1.7621  | 400     | 0.2573        | -                      |
+ | 1.8062  | 410     | 0.2309        | -                      |
+ | 1.8502  | 420     | 0.3475        | -                      |
+ | 1.8943  | 430     | 0.3082        | -                      |
+ | 1.9383  | 440     | 0.3176        | -                      |
+ | 1.9824  | 450     | 0.2104        | -                      |
+ | **2.0** | **454** | **-**         | **0.4453**             |
+ | 2.0264  | 460     | 0.2615        | -                      |
+ | 2.0705  | 470     | 0.1599        | -                      |
+ | 2.1145  | 480     | 0.1015        | -                      |
+ | 2.1586  | 490     | 0.2154        | -                      |
+ | 2.2026  | 500     | 0.1161        | -                      |
+ | 2.2467  | 510     | 0.2208        | -                      |
+ | 2.2907  | 520     | 0.2035        | -                      |
+ | 2.3348  | 530     | 0.1622        | -                      |
+ | 2.3789  | 540     | 0.1758        | -                      |
+ | 2.4229  | 550     | 0.2782        | -                      |
+ | 2.4670  | 560     | 0.303         | -                      |
+ | 2.5110  | 570     | 0.1787        | -                      |
+ | 2.5551  | 580     | 0.2221        | -                      |
+ | 2.5991  | 590     | 0.1686        | -                      |
+ | 2.6432  | 600     | 0.2522        | -                      |
+ | 2.6872  | 610     | 0.1334        | -                      |
+ | 2.7313  | 620     | 0.1102        | -                      |
+ | 2.7753  | 630     | 0.2499        | -                      |
+ | 2.8194  | 640     | 0.2648        | -                      |
+ | 2.8634  | 650     | 0.1859        | -                      |
+ | 2.9075  | 660     | 0.2385        | -                      |
+ | 2.9515  | 670     | 0.2283        | -                      |
+ | 2.9956  | 680     | 0.1126        | -                      |
+ | 3.0     | 681     | -             | 0.4462                 |
+
+ * The bold row denotes the saved checkpoint.
+
+ ### Framework Versions
+ - Python: 3.10.10
+ - Sentence Transformers: 3.1.1
+ - Transformers: 4.45.1
+ - PyTorch: 2.2.1+cu121
+ - Accelerate: 0.34.2
+ - Datasets: 3.0.1
+ - Tokenizers: 0.20.0
+
+ ## Citation
+
+ ### BibTeX
+
+ #### Sentence Transformers
+ ```bibtex
+ @inproceedings{reimers-2019-sentence-bert,
+     title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
+     author = "Reimers, Nils and Gurevych, Iryna",
+     booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
+     month = "11",
+     year = "2019",
+     publisher = "Association for Computational Linguistics",
+     url = "https://arxiv.org/abs/1908.10084",
+ }
+ ```
+
+ #### MatryoshkaLoss
+ ```bibtex
+ @misc{kusupati2024matryoshka,
+     title={Matryoshka Representation Learning},
+     author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
+     year={2024},
+     eprint={2205.13147},
+     archivePrefix={arXiv},
+     primaryClass={cs.LG}
+ }
+ ```
+
+ #### MultipleNegativesRankingLoss
+ ```bibtex
+ @misc{henderson2017efficient,
+     title={Efficient Natural Language Response Suggestion for Smart Reply},
+     author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
+     year={2017},
+     eprint={1705.00652},
+     archivePrefix={arXiv},
+     primaryClass={cs.CL}
+ }
+ ```
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
config.json ADDED
@@ -0,0 +1,28 @@
+ {
+ "_name_or_path": "mixedbread-ai_deepset-mxbai-embed-de-large-v1_FareedKhan_prime_synthetic_data_2k_3_8/finetuned_model",
+ "architectures": [
+ "XLMRobertaModel"
+ ],
+ "attention_probs_dropout_prob": 0.1,
+ "bos_token_id": 0,
+ "classifier_dropout": null,
+ "eos_token_id": 2,
+ "hidden_act": "gelu",
+ "hidden_dropout_prob": 0.1,
+ "hidden_size": 1024,
+ "initializer_range": 0.02,
+ "intermediate_size": 4096,
+ "layer_norm_eps": 1e-05,
+ "max_position_embeddings": 514,
+ "model_type": "xlm-roberta",
+ "num_attention_heads": 16,
+ "num_hidden_layers": 24,
+ "output_past": true,
+ "pad_token_id": 1,
+ "position_embedding_type": "absolute",
+ "torch_dtype": "float32",
+ "transformers_version": "4.45.1",
+ "type_vocab_size": 1,
+ "use_cache": false,
+ "vocab_size": 178885
+ }
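
The sizes in this config are internally consistent: an XLM-RoBERTa-large backbone with this vocabulary comes to roughly 486 M float32 parameters, which matches the ~1.95 GB `model.safetensors` file below. A rough back-of-the-envelope check (approximate — biases, LayerNorm weights, and the pooler are ignored):

```python
# Approximate parameter count for the config above (XLM-RoBERTa large).
# Biases and LayerNorm parameters are deliberately ignored, so the total
# is a slight underestimate of the true checkpoint size.
vocab_size = 178885
hidden = 1024
intermediate = 4096
layers = 24
max_pos = 514

# Token + position + token-type embeddings.
embeddings = vocab_size * hidden + max_pos * hidden + 1 * hidden
# Per layer: Q/K/V/output projections + the two feed-forward matrices.
per_layer = 4 * hidden * hidden + 2 * hidden * intermediate
total = embeddings + layers * per_layer

print(f"~{total / 1e6:.0f}M params, ~{total * 4 / 1e9:.2f} GB in float32")
```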
config_sentence_transformers.json ADDED
@@ -0,0 +1,10 @@
+ {
+ "__version__": {
+ "sentence_transformers": "3.1.1",
+ "transformers": "4.45.1",
+ "pytorch": "2.2.1+cu121"
+ },
+ "prompts": {},
+ "default_prompt_name": null,
+ "similarity_fn_name": null
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:857d73dcb4ddbb0fca11bbe9c0518d3feba1eb6a577392eb977ab31e40e938a9
+ size 1948311760
modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+ {
+ "idx": 0,
+ "name": "0",
+ "path": "",
+ "type": "sentence_transformers.models.Transformer"
+ },
+ {
+ "idx": 1,
+ "name": "1",
+ "path": "1_Pooling",
+ "type": "sentence_transformers.models.Pooling"
+ },
+ {
+ "idx": 2,
+ "name": "2",
+ "path": "2_Normalize",
+ "type": "sentence_transformers.models.Normalize"
+ }
+ ]
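
This module list is the standard Sentence Transformers pipeline: the transformer produces per-token hidden states, the Pooling module mean-pools them (per `pooling_mode_mean_tokens` in `1_Pooling/config.json`), and the Normalize module scales the result to unit length. A minimal NumPy sketch of the two post-transformer steps — the token embeddings here are random stand-ins, not real model output:

```python
import numpy as np

# Stand-in for transformer output: 7 tokens, 1024-dim hidden states.
rng = np.random.default_rng(0)
token_embeddings = rng.normal(size=(7, 1024))
attention_mask = np.array([1, 1, 1, 1, 1, 0, 0])  # last two tokens are padding

# Step 1 (Pooling, pooling_mode_mean_tokens): mask-aware mean over real tokens.
mask = attention_mask[:, None]
sentence_embedding = (token_embeddings * mask).sum(axis=0) / mask.sum()

# Step 2 (Normalize): unit L2 norm, so dot product equals cosine similarity.
sentence_embedding /= np.linalg.norm(sentence_embedding)

print(sentence_embedding.shape, np.linalg.norm(sentence_embedding))  # (1024,) ~1.0
```

Because of the Normalize step, downstream similarity search can use a plain dot product instead of a full cosine computation.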
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+ "max_seq_length": 512,
+ "do_lower_case": false
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,55 @@
+ {
+ "additional_special_tokens": [
+ "[MXBAI_Q]",
+ "[MXBAI_P]"
+ ],
+ "bos_token": {
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "cls_token": {
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "eos_token": {
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "mask_token": {
+ "content": "<mask>",
+ "lstrip": true,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": {
+ "content": "<pad>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "sep_token": {
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "unk_token": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b319c37e4f1e5f8f1c8dba7e1bc7b1a424184365d7f982ada0f18ab60c514c07
+ size 12283980
tokenizer_config.json ADDED
@@ -0,0 +1,81 @@
+ {
+ "added_tokens_decoder": {
+ "0": {
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "1": {
+ "content": "<pad>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "2": {
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "3": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "178882": {
+ "content": "<mask>",
+ "lstrip": true,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "178883": {
+ "content": "[MXBAI_Q]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "178884": {
+ "content": "[MXBAI_P]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ }
+ },
+ "additional_special_tokens": [
+ "[MXBAI_Q]",
+ "[MXBAI_P]"
+ ],
+ "bos_token": "<s>",
+ "clean_up_tokenization_spaces": true,
+ "cls_token": "<s>",
+ "eos_token": "</s>",
+ "mask_token": "<mask>",
+ "max_length": 512,
+ "model_max_length": 512,
+ "pad_to_multiple_of": null,
+ "pad_token": "<pad>",
+ "pad_token_type_id": 0,
+ "padding_side": "right",
+ "sep_token": "</s>",
+ "stride": 0,
+ "tokenizer_class": "XLMRobertaTokenizer",
+ "truncation_side": "right",
+ "truncation_strategy": "longest_first",
+ "unk_token": "<unk>"
+ }
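
The `[MXBAI_Q]` and `[MXBAI_P]` additional special tokens (ids 178883/178884) follow the common retrieval pattern of marking queries and passages with distinct prefixes so the encoder can treat the two sides asymmetrically. This card does not document how they are applied ("prompts" is empty in `config_sentence_transformers.json`), so the following is a purely hypothetical sketch of such a prefixing scheme, not the model's confirmed input format:

```python
# Hypothetical prefixing scheme for the [MXBAI_Q]/[MXBAI_P] marker tokens.
# Assumption: queries and passages are each prefixed with their marker token;
# the model card itself does not specify this.
def format_query(text: str) -> str:
    return f"[MXBAI_Q] {text}"

def format_passage(text: str) -> str:
    return f"[MXBAI_P] {text}"

print(format_query("Wie hoch ist die Zugspitze?"))
print(format_passage("Die Zugspitze ist mit 2962 m der hoechste Berg Deutschlands."))
```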