
Table of contents

  1. Installation
  2. Pre-processing
  3. Usage with sentence-transformers
  4. Usage with transformers
  5. Performance
  6. Support me
  7. Citation

Installation

  • Install VnCoreNLP for word segmentation (it wraps a Java toolkit, so a Java runtime is required; see the check sketch after this list):

    • pip install py_vncorenlp
  • Install sentence-transformers (recommended; see Usage with sentence-transformers):

    • pip install sentence-transformers
  • Install transformers (optional; see Usage with transformers):

    • pip install transformers
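
Before downloading the segmenter, it is worth confirming that a Java runtime is on your PATH, since py_vncorenlp only wraps the Java-based VnCoreNLP toolkit. A minimal check sketch (this probe is an illustration, not part of the official setup):

import shutil

# py_vncorenlp wraps the Java-based VnCoreNLP toolkit,
# so a Java runtime (JRE/JDK 1.8 or newer) must be available on PATH
if shutil.which("java") is None:
    raise EnvironmentError("Java not found: install a JRE/JDK (>= 1.8) before using py_vncorenlp")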

Pre-processing

import py_vncorenlp
py_vncorenlp.download_model(save_dir='/absolute/path/to/vncorenlp')
rdrsegmenter = py_vncorenlp.VnCoreNLP(annotators=["wseg"], save_dir='/absolute/path/to/vncorenlp')

query = "Trường UIT là gì?"
sentences = [
    "Trường Đại học Công nghệ Thông tin có tên tiếng Anh là University of Information Technology (viết tắt là UIT) là thành viên của Đại học Quốc Gia TP.HCM.",
    "Trường Đại học Kinh tế – Luật (tiếng Anh: University of Economics and Law – UEL) là trường đại học đào tạo và nghiên cứu khối ngành kinh tế, kinh doanh và luật hàng đầu Việt Nam.",
    "Quĩ uỷ thác đầu tư (tiếng Anh: Unit Investment Trusts; viết tắt: UIT) là một công ty đầu tư mua hoặc nắm giữ một danh mục đầu tư cố định"
]

# word_segment returns a list of segmented sentences; join them back into one string
tokenized_query = " ".join(rdrsegmenter.word_segment(query))
tokenized_sentences = [" ".join(rdrsegmenter.word_segment(sent)) for sent in sentences]

tokenized_pairs = [[tokenized_query, sent] for sent in tokenized_sentences]

MODEL_ID = 'itdainb/PhoRanker'
MAX_LENGTH = 256
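
To sanity-check the pre-processing, print one pair: VnCoreNLP joins the syllables of multi-syllable words with underscores (e.g. Đại_học), which is the word-segmented input format PhoBERT-style Vietnamese models expect. A quick inspection sketch:

# Multi-syllable words appear joined with underscores after segmentation
print(tokenized_query)
print(tokenized_pairs[0])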

Usage with sentence-transformers

from sentence_transformers import CrossEncoder
model = CrossEncoder(MODEL_ID, max_length=MAX_LENGTH)

# Optional: fp16 inference (intended for GPU; skip this line on CPU)
model.model.half()

scores = model.predict(tokenized_pairs)

# 0.982, 0.2444, 0.9253
print(scores)
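
The returned scores can be used directly to rerank the candidates for the query. A minimal sketch using a plain argsort over the scores above (not a PhoRanker-specific API):

import numpy as np

# Sort candidate indices by descending relevance score
order = np.argsort(scores)[::-1]
for idx in order:
    print(round(float(scores[idx]), 4), sentences[idx])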

Usage with transformers

from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Optional: fp16 inference (intended for GPU; skip this line on CPU)
model.half()

features = tokenizer(tokenized_pairs, padding=True, truncation="longest_first", return_tensors="pt", max_length=MAX_LENGTH)

model.eval()
with torch.no_grad():
    model_predictions = model(**features, return_dict=True)

    # Squash logits into [0, 1] relevance scores
    probs = torch.sigmoid(model_predictions.logits)
    scores = [prob[0].item() for prob in probs]

# 0.9819, 0.2444, 0.9253
print(scores)
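
For larger candidate lists, scoring in mini-batches keeps memory bounded. A minimal batching sketch on top of the setup above (the score_pairs helper and batch_size=32 are illustrative assumptions, not part of the model card):

def score_pairs(pairs, batch_size=32):
    # batch_size is an arbitrary choice; tune it to your hardware
    all_scores = []
    for i in range(0, len(pairs), batch_size):
        batch = tokenizer(pairs[i:i + batch_size], padding=True,
                          truncation="longest_first", return_tensors="pt",
                          max_length=MAX_LENGTH)
        with torch.no_grad():
            logits = model(**batch).logits
        all_scores.extend(torch.sigmoid(logits)[:, 0].tolist())
    return all_scores

# 0.9819, 0.2444, 0.9253 (same scores as above)
print(score_pairs(tokenized_pairs))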

Performance

The following table compares several pre-trained cross-encoders on the MS mMARCO Passage Reranking (Vietnamese) dev set.

Model-Name                                               NDCG@3   MRR@3    NDCG@5   MRR@5    NDCG@10  MRR@10   Docs/Sec
itdainb/PhoRanker                                        0.6625   0.6458   0.7147   0.6731   0.7422   0.6830   15
amberoad/bert-multilingual-passage-reranking-msmarco     0.4634   0.5233   0.5041   0.5383   0.5416   0.5523   22
kien-vu-uet/finetuned-phobert-passage-rerank-best-eval   0.0963   0.0883   0.1396   0.1131   0.1681   0.1246   15
BAAI/bge-reranker-v2-m3                                  0.6087   0.5841   0.6513   0.6062   0.6872   0.6209   3.51
BAAI/bge-reranker-v2-gemma                               0.6088   0.5908   0.6446   0.6108   0.6785   0.6249   1.29

Note: Runtime was measured on an A100 GPU with fp16.
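
To estimate Docs/Sec on your own hardware, a rough timing sketch with the sentence-transformers CrossEncoder from above (the 100x repetition is an illustrative assumption to steady the measurement):

import time
from sentence_transformers import CrossEncoder

ce = CrossEncoder(MODEL_ID, max_length=MAX_LENGTH)  # same model as in the usage section
repeated_pairs = tokenized_pairs * 100  # enlarge the workload for a steadier estimate
start = time.perf_counter()
ce.predict(repeated_pairs)
elapsed = time.perf_counter() - start
print(f"{len(repeated_pairs) / elapsed:.1f} docs/sec")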

Support me

If you find this work useful and would like to support its continued development, here are a few ways you can help:

  1. Star the Repository: If you appreciate this work, please give it a star. Your support encourages continued development and improvement.
  2. Contribute: Contributions are always welcome! You can help by reporting issues, submitting pull requests, or suggesting new features.
  3. Share: Share this project with your colleagues, friends, or community. The more people know about it, the more feedback and contributions it can attract.
  4. Buy me a coffee: If you’d like to provide financial support, consider making a donation. You can donate via:
    • Momo: 0948798843
    • BIDV Bank: DAINB
    • Paypal: 0948798843

Citation

Please cite as

@misc{PhoRanker,
  title={PhoRanker: A Cross-encoder Model for Vietnamese Text Ranking},
  author={Dai Nguyen Ba},
  note={ORCID: 0009-0008-8559-3154},
  year={2024},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/itdainb/PhoRanker}},
}