Upload README.md with huggingface_hub
README.md (CHANGED)
````diff
@@ -70,7 +70,7 @@ We have validated the performance of our model on the [mteb-chinese-reranking le
 
 | Model | T2Reranking | MMarcoReranking | CMedQAv1 | CMedQAv2 | Avg |
 |:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|
-| **360Zhinao-1_8B-reranking** | 68.55 | 37.29 | 86.75 | 87.92 | 70.13 |
+| **360Zhinao-1_8B-reranking** | **68.55** | **37.29** | **86.75** | **87.92** | **70.13** |
 | piccolo-large-zh-v2 | 67.15 | 33.39 | 90.14 | 89.31 | 70 |
 | Baichuan-text-embedding | 67.85 | 34.3 | 88.46 | 88.06 | 69.67 |
 | stella-mrl-large-zh-v3.5-1792d | 66.43 | 28.85 | 89.18 | 89.33 | 68.45 |
@@ -115,6 +115,8 @@ Unlike generative tasks that produce multiple characters, using generative model
 
 # Inference Script
 
+You can copy the following script to [mteb-eval-scripts](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB), replace FlagReranker with FlagRerankerCustom in [eval_cross_encoder](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/eval_cross_encoder.py), and then run [eval_cross_encoder](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/eval_cross_encoder.py) to reproduce our complete results on the [mteb-chinese-reranking leaderboard](https://huggingface.co/spaces/mteb/leaderboard).
+
 ```python
 from typing import cast, List, Union, Tuple, Dict, Optional
@@ -131,7 +133,6 @@ def preprocess(
     tokenizer: transformers.PreTrainedTokenizer,
     max_len: int = 1024,
     system_message: str = "",
-    #system_message: str = "You are a helpful assistant.",
     device = None,
 ) -> Dict:
     roles = {"user": "<|im_start|>user", "assistant": "<|im_start|>assistant"}
````
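For context, the `roles` mapping visible in the last hunk follows the ChatML turn convention (`<|im_start|>` / `<|im_end|>` markers). Below is a minimal, illustrative sketch of how a query–passage pair might be rendered into such a prompt for pointwise reranking; the `build_prompt` helper and the exact packing of query and passage into a single user turn are assumptions for illustration, not taken from the README's script.

```python
# ChatML-style prompt sketch. The `roles` dict mirrors the one in the diff
# above; the surrounding template is an illustrative assumption.
roles = {"user": "<|im_start|>user", "assistant": "<|im_start|>assistant"}

def build_prompt(query: str, passage: str) -> str:
    # Hypothetical formatting: pack the query/passage pair into one user
    # turn, then open an assistant turn for the model's relevance judgment.
    user_turn = f"{roles['user']}\n{query}\n{passage}<|im_end|>"
    assistant_turn = f"{roles['assistant']}\n"
    return user_turn + "\n" + assistant_turn

prompt = build_prompt("what is reranking?",
                      "Reranking reorders retrieved passages by relevance.")
```

A real implementation would tokenize this prompt (truncating to `max_len`) and read the score from the model's output rather than generating text.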