Upload README.md with huggingface_hub
README.md CHANGED
@@ -5,7 +5,7 @@ tags:
 - 奇虎360
 - RAG-reranking
 model-index:
-- name: 360Zhinao-
+- name: 360Zhinao-1.8B-reranking
   results:
   - task:
       type: Reranking
@@ -66,11 +66,11 @@ library_name: transformers
 <br>
 
 # MTEB Leaderboard Chinese Reranking Results
-We have validated the performance of our model on the [mteb-chinese-reranking leaderboard](https://huggingface.co/spaces/mteb/leaderboard). Currently, the open-source models on this leaderboard are primarily bidirectional discriminative models (BERT-like models). The only unidirectional generative model (GPT-like model) is gte-Qwen1.5-7B-instruct, which has an average score of 66.38, ranking 25th, with less than ideal results. Our self-developed unidirectional generative model,
+We have validated the performance of our model on the [mteb-chinese-reranking leaderboard](https://huggingface.co/spaces/mteb/leaderboard). Currently, the open-source models on this leaderboard are primarily bidirectional discriminative models (BERT-like models). The only unidirectional generative model (GPT-like model) is gte-Qwen1.5-7B-instruct, which has an average score of 66.38, ranking 25th, with less than ideal results. Our self-developed unidirectional generative model, 360Zhinao-1.8B-reranking, achieved an average score of 70.13, currently ranking first overall and first among open-source models, opening up new possibilities for generative models to undertake discriminative tasks.
 
 | Model | T2Reranking | MMarcoReranking | CMedQAv1 | CMedQAv2 | Avg |
 |:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|
-| **360Zhinao-
+| **360Zhinao-1.8B-Reranking** | **68.55** | **37.29** | **86.75** | **87.92** | **70.13** |
 | piccolo-large-zh-v2 | 67.15 | 33.39 | 90.14 | 89.31 | 70 |
 | Baichuan-text-embedding | 67.85 | 34.3 | 88.46 | 88.06 | 69.67 |
 | stella-mrl-large-zh-v3.5-1792d | 66.43 | 28.85 | 89.18 | 89.33 | 68.45 |
@@ -102,7 +102,7 @@ FLASH_ATTENTION_FORCE_BUILD=TRUE ./miniconda3/bin/python -m pip install flash-at
 
 # Model Introduction
 
-The
+The 360Zhinao-1.8B-reranking model utilizes the self-developed zhinao_1-8b_base model as its foundation. Through iterative discovery and resolution of the following technical issues, it continuously stimulates the world knowledge inherent in the large model during the pre-training phase, better bridging the gap between generative models and discriminative tasks.
 
 ## Data Processing
 
@@ -278,7 +278,7 @@ class FlagRerankerCustom:
 
 
 if __name__ == "__main__":
-    model_name_or_path = "360Zhinao-
+    model_name_or_path = "360Zhinao-1.8B-Reranking"
     model = FlagRerankerCustom(model_name_or_path, use_fp16=False)
     inputs=[["What Color Is the Sky","Blue"], ["What Color Is the Sky","Pink"],]
     ret = model.compute_score(inputs)
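As context for the snippet touched in the last hunk: the README's `FlagRerankerCustom` wrapper (defined earlier in that file) exposes `compute_score`, which takes a list of `[query, passage]` pairs. Below is a minimal sketch of how the returned scores might be used to rerank retrieved candidates; the `rerank` helper and the assumption that scores come back one per pair, in input order, are illustrative additions rather than part of the model card.

```python
# Illustrative sketch only: assumes FlagRerankerCustom (defined in the full
# README) is available and that compute_score returns one float per input pair,
# in the same order the pairs were given.
from typing import List, Tuple


def rerank(model, query: str, passages: List[str]) -> List[Tuple[str, float]]:
    """Score each (query, passage) pair and return passages sorted by relevance."""
    pairs = [[query, passage] for passage in passages]
    scores = model.compute_score(pairs)  # one score per pair (assumed)
    return sorted(zip(passages, scores), key=lambda item: item[1], reverse=True)


# Example usage mirroring the README's __main__ block:
# model = FlagRerankerCustom("360Zhinao-1.8B-Reranking", use_fp16=False)
# for passage, score in rerank(model, "What Color Is the Sky", ["Blue", "Pink"]):
#     print(f"{score:.4f}\t{passage}")
```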