---
license: apache-2.0
language:
  - ru
tags:
  - PyTorch
  - Transformers
---

BERT base model for pair ranking (a reward model for RLHF) in Russian.

For training, a pairwise ranking loss was used.
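A minimal sketch of such a loss, assuming the standard formulation used for RLHF reward models (the exact loss for this checkpoint may differ, and the function and argument names below are hypothetical):

```python
import torch
import torch.nn.functional as F

# Sketch of a common pairwise ranking loss for reward models (an assumption,
# not necessarily the exact loss used for this checkpoint): the reward of the
# preferred ("chosen") response is pushed above the reward of the rejected one.
def pair_ranking_loss(chosen_rewards: torch.Tensor, rejected_rewards: torch.Tensor) -> torch.Tensor:
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```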

The datasets for reward training were translated with the Google Translate API.

For better quality, use mean token embeddings.
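A minimal sketch of mean token embeddings, assuming the base encoder's last hidden state is averaged over non-padding tokens (the card does not spell out the exact pooling setup):

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Assumption: load the base encoder (without the ranking head) and mean-pool
# its last hidden state over non-padding tokens.
model_name = "Andrilko/ruBert-base-reward"
tokenizer = AutoTokenizer.from_pretrained(model_name)
encoder = AutoModel.from_pretrained(model_name)

encoded = tokenizer(["Человек: Что такое QR-код?"], padding=True, return_tensors="pt")
with torch.no_grad():
    hidden = encoder(**encoded).last_hidden_state          # (batch, seq_len, hidden)
mask = encoded["attention_mask"].unsqueeze(-1).float()     # (batch, seq_len, 1)
mean_embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
```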

## Usage (HuggingFace Models Repository)

You can use the model directly from the model repository to compute a score for a prompt-response pair:


```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Create the model and tokenizer and load the pretrained weights:
reward_name = "Andrilko/ruBert-base-reward"
rank_model = AutoModelForSequenceClassification.from_pretrained(reward_name)
tokenizer = AutoTokenizer.from_pretrained(reward_name)

# Prompt and response that we want to score
# ("Human: What is a QR code?" / "Assistant: A QR code is a type of matrix barcode."):
sentences = ['Человек: Что такое QR-код?', 'Ассистент: QR-код - это тип матричного штрих-кода.']

# Tokenize the pair and compute the reward score:
inputs = tokenizer(sentences[0], sentences[1], return_tensors='pt')
score = rank_model(**inputs).logits[0].cpu().detach()
print(score)
```
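Continuing from the snippet above, a hypothetical example of using the score to rank two candidate answers to the same prompt (the second answer is invented for illustration, and a single-logit reward head is assumed):

```python
# Hypothetical follow-up: rank two candidate answers by their reward scores.
prompt = "Человек: Что такое QR-код?"
answers = [
    "Ассистент: QR-код - это тип матричного штрих-кода.",
    "Ассистент: Не знаю.",  # invented weaker answer, for illustration only
]

scores = []
for answer in answers:
    batch = tokenizer(prompt, answer, return_tensors='pt')
    # Assumes the reward is the first (and typically only) logit of the head.
    scores.append(rank_model(**batch).logits[0, 0].item())

print(scores)  # the higher score is the answer the reward model prefers
```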

## Authors