---
license: apache-2.0
datasets:
- fine-tuned/ArguAna-256-24-gpt-4o-2024-05-13-952023
- allenai/c4
language:
- en
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
- Argumentation
- Corpus
- Dataset
- Research
- Annotation
---
|
This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case:

academic research data search

## How to Use

This model produces dense sentence embeddings and can be integrated into your NLP pipeline for tasks such as semantic search, retrieval, clustering, and sentence similarity. Here's a simple example to get you started:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

# Load the fine-tuned model from the Hugging Face Hub.
model = SentenceTransformer(
    'fine-tuned/ArguAna-256-24-gpt-4o-2024-05-13-952023',
    trust_remote_code=True
)

# Embed two texts and compare them with cosine similarity.
embeddings = model.encode([
    'first text to embed',
    'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
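
Since the model targets retrieval-style search over research data, a typical workflow is to embed a query and rank candidate passages against it. The snippet below is a minimal sketch of that pattern, not card guidance: the query instruction prefix shown is the one recommended for the base BAAI/bge-large-en-v1.5 model and is assumed (not verified) to also suit this fine-tune, and `normalize_embeddings=True` is likewise an assumption that mirrors how BGE-style models are commonly scored.

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

model = SentenceTransformer(
    'fine-tuned/ArguAna-256-24-gpt-4o-2024-05-13-952023',
    trust_remote_code=True
)

# Query instruction prefix recommended for the BAAI/bge-large-en-v1.5 base model;
# assumed (not verified) to carry over to this fine-tuned checkpoint.
instruction = 'Represent this sentence for searching relevant passages: '

query = 'annotated corpora for argumentation research'
passages = [
    'We release an annotated corpus of argumentative essays for NLP research.',
    'The weather forecast predicts rain over the weekend.',
    'Annotation guidelines for building datasets in computational argumentation.',
]

# Normalizing makes cosine similarity equivalent to a dot product; this mirrors
# common BGE-style evaluation setups (an assumption, not card guidance).
query_emb = model.encode(instruction + query, normalize_embeddings=True)
passage_embs = model.encode(passages, normalize_embeddings=True)

# Rank passages by similarity to the query, highest first.
scores = cos_sim(query_emb, passage_embs)[0]
for score, passage in sorted(zip(scores.tolist(), passages), reverse=True):
    print(f'{score:.4f}  {passage}')
```

With normalized embeddings the cosine scores can also be computed as plain dot products, which keeps scoring cheap if you later index a larger passage collection.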