Update README.md
README.md CHANGED
@@ -7,11 +7,13 @@ license: mit
 The approach is simple:
 
 1. Combine all available NLI data without any domain-dependent re-balancing or re-weighting.
-
 2. Finetune several SOTA transformers of different sizes (20m parameters to 300m parameters) on the combined data.
 3. Evaluate on challenging NLI datasets.
 
-This model
+This model was trained using the [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. It is based on [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large).
+
+### Data
+20+ NLI datasets were combined to train a binary classification model. The contradiction and neutral labels were mixed to form a non-entailment class.
 
 ### Usage
 
@@ -45,7 +47,7 @@ with torch.no_grad():
 ```
 
 
-In
+In Sentence-Transformers
 
 ```python
 from sentence_transformers import CrossEncoder