Update README.md
README.md (CHANGED)
@@ -16,8 +16,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 # Model Card for RankZephyr 7B V1 - Full
 
-RankZephyr is a series of language models
-RankZephyr Base is the model that follows single
+RankZephyr is a series of language models trained to act as helpful reranking assistants, built on the Zephyr-7B-β model.
+RankZephyr Base is the model obtained from single-stage fine-tuning on RankGPT-3.5 orderings, while RankZephyr Full is further fine-tuned on RankGPT-4 reorderings of OpenAI's Ada2 orderings for 5K queries.
 
 
 ## Model description
@@ -50,13 +50,13 @@ With the MS MARCO v1 collection:
 | RankGPT-3.5 | - | SPLADE++ ED | 0.7504 | 0.7120 |
 
 
-
-The
-
-In our case, RankZephyr is fine-tuned to act as a listwise reranking agent. You provide it with a query and documents and get back a reordered list of document identifiers.
+More details can be found in the paper.
+
+## Intended uses & limitations
+
+The model is to be used in conjunction with the [RankLLM repository](https://github.com/castorini/rank_llm). While `rank-llm` exists as a PyPI package, it is still in the early stages of development, and we encourage users to install directly from source.
+
+The original Zephyr model is trained for chat. In our case, RankZephyr is fine-tuned to act as a listwise reranking agent: you provide it with a query and a list of documents, and it returns a reordered list of document identifiers.
 
 
 ## Bias, Risks, and Limitations
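
For reference, the listwise interaction described in the new "Intended uses & limitations" text can be sketched directly with Hugging Face `transformers`. This is a minimal illustration, not the RankLLM pipeline: the model id, prompt wording, and the `[2] > [1] > [3]`-style output parsing below are assumptions modeled on RankGPT-style listwise prompts, and the RankLLM repository should be preferred for actual reranking.

```python
# Minimal sketch only. Assumptions: the model id, the prompt wording, and the
# "[2] > [1] > [3]" output format are illustrative; prefer the RankLLM repository
# for the real prompt template and reranking pipeline.
import re
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "castorini/rank_zephyr_7b_v1_full"  # assumed Hugging Face id for this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

query = "how do neural rerankers differ from BM25?"
docs = [
    "BM25 is a bag-of-words ranking function based on term statistics.",
    "Neural rerankers score query-document pairs with a learned model.",
    "MS MARCO is a large-scale passage ranking collection.",
]

# Number the candidates [1], [2], ... and ask for a permutation of identifiers.
passages = "\n".join(f"[{i + 1}] {d}" for i, d in enumerate(docs))
messages = [
    {
        "role": "system",
        "content": "You are RankLLM, an assistant that ranks passages by relevance to a query.",
    },
    {
        "role": "user",
        "content": f"Query: {query}\n\n{passages}\n\n"
        "Rank the passages from most to least relevant, e.g. [2] > [1] > [3].",
    },
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=64, do_sample=False)
completion = tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)

# Parse the returned ordering of document identifiers, e.g. "[2] > [1] > [3]".
order = [int(m) for m in re.findall(r"\[(\d+)\]", completion)]
print(order)  # reordered list of document identifiers, most relevant first
```

If the model emits any extra text around the permutation, only the bracketed identifiers matter for recovering the final ordering.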