howard-hou committed on
Commit
14bf8c8
1 Parent(s): 45fd030

Update README.md

Files changed (1):
  1. README.md +9 -10
README.md CHANGED
@@ -5,24 +5,23 @@ language:
  library_name: transformers
  ---

- ### RankingPrompter
- RankingPrompter is an open-source re-ranking model developed by the Guangdong Laboratory of Artificial Intelligence and Digital Economy (Shenzhen Guangming Laboratory).
- - Trained on a dataset of 15 million Chinese sentence pairs.
- - Achieves the best results on a number of Chinese test sets.

- For the more extensive features of RankingPrompter (such as the complete document encoding-retrieval-re-ranking pipeline), we recommend the accompanying codebase (to be released).
-
- ### How to use

  You can use this model simply as a re-ranker; note that the model is currently only available for Chinese.
- This model can be used simply as a strong re-ranker; currently it only supports Chinese.

  ```python
  from transformers import AutoTokenizer, AutoModel

- tokenizer = AutoTokenizer.from_pretrained("howard-hou/RankingPrompterForPreTraining-small")
  # trust_remote_code=True is important; otherwise the correct model code will not be loaded
- model = AutoModel.from_pretrained("howard-hou/RankingPrompterForPreTraining-small",
  trust_remote_code=True)

  #
 
  library_name: transformers
  ---

+ ### IACC-ranker
+ Instruction-Aware Contextual Compressor (IACC) is an open-source re-ranking/context-compression model developed by the Guangdong Laboratory of Artificial Intelligence and Digital Economy (Shenzhen Guangming Laboratory).
+ This repository, IACC-ranker, hosts the ranker; the compressor will be released on a separate page.
+ It is trained on a dataset of 15 million Chinese sentence pairs.
+ It consistently delivers strong results across various Chinese test sets.
+ For the more extensive features of IACC, such as the complete document encoding-retrieval-re-ranking pipeline, we recommend the [accompanying codebase](https://github.com/howard-hou/instruction-aware-contextual-compressor/tree/main).

+ ### How to use

  You can use this model simply as a re-ranker; note that the model is currently only available for Chinese.

  ```python
  from transformers import AutoTokenizer, AutoModel

+ tokenizer = AutoTokenizer.from_pretrained("howard-hou/IACC-ranker-small")
  # trust_remote_code=True is important; otherwise the correct model code will not be loaded
+ model = AutoModel.from_pretrained("howard-hou/IACC-ranker-small",
  trust_remote_code=True)

  #
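  # The lines below are an illustrative sketch, not part of the model's
  # official API: once the model has produced one relevance score per
  # (query, document) pair, re-ranking is just sorting the documents by
  # that score. The scores here are hypothetical placeholders standing in
  # for actual model outputs.
  documents = ["文档一", "文档二", "文档三"]
  scores = [0.2, 0.9, 0.5]  # hypothetical relevance scores, one per document
  ranked = [doc for _, doc in sorted(zip(scores, documents), reverse=True)]
  # ranked[0] is now the document the model considers most relevant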