---
license: mit
datasets:
- facebook/empathetic_dialogues
language:
- en
base_model: alignment-handbook/zephyr-7b-sft-full
widget:
- example_title: Pirate!
messages:
- role: system
content: You are a friendly assistant, who provides empathetic responses to the user. The input contains previous turn of the dialog, where each utterance is prefaced with tags <|user|>, or <|assistant|>. Be empathetic and precise. Make sure to give responses that make the dialogue flow. Avoid repeating the prompt. Please respond creatively and expressively to make the responses longer. You can offer advice.
- role: user
content: Yeah about 10 years ago I had a horrifying experience. It was 100% their fault but they hit the water barrels and survived. They had no injuries but they almost ran me off the road.
- role: assistant
content: Did you suffer any injuries?
- role: user
content: No I wasn't hit. It turned out they were drunk. I felt guilty but realized it was his fault.
output:
text: >-
That's good that you didn't get hurt. I hope they got in trouble for driving drunk.
pipeline_tag: text-generation
model-index:
- name: justtherightsize/zephyr-7b-sft-full124
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: Open LLM Leaderboard
type: various
config: various
split: various
args:
num_few_shot: 5
metrics:
- type: acc
name: accuracy
value: 0.2701
source:
name: Open LLM Leaderboard
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
name: accuracy
value: 58.50
source:
name: MMLU
url: >-
https://github.com/huggingface/lm-evaluation-harness.git
---
# Model Card for zephyr-7b-sft-full124
This model participates in multi-turn dialogues and responds empathetically.
## Model Description
We propose a data-driven solution for Empathetic Response Generation with LLMs: aligning LLMs via preference optimization algorithms. First, we build a preference dataset using the benchmark dataset EmpatheticDialogues (Rashkin et al., 2019). It contains short multi-turn human-to-human dialogues grounded by emotion labels. We leverage this emotion grounding to sample dialog completions labeled with polar opposite emotions using Plutchik’s wheel (Plutchik, 2001) such that each prompt is paired with preferred and non-preferred completions. We then fine-tune a foundational LLM using Direct Preference Optimization (DPO) (Rafailov et al., 2024) to generate responses aligned with the preferred candidate response.
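As a rough sketch of the pair-construction step (illustrative only: the `OPPOSITE` mapping lists Plutchik's eight primary emotions, standing in for the dataset's full label set, and the field names follow common DPO conventions; the actual construction lives in the repository):

```python
# Hedged sketch: pair a completion grounded in the dialog's emotion label
# (preferred) with one labeled by its polar opposite on Plutchik's wheel
# (non-preferred). Field names match what DPO training typically expects.
OPPOSITE = {
    "joy": "sadness", "sadness": "joy",
    "trust": "disgust", "disgust": "trust",
    "fear": "anger", "anger": "fear",
    "surprise": "anticipation", "anticipation": "surprise",
}

def build_preference_pair(prompt: str, completions: dict, emotion: str) -> dict:
    """completions maps an emotion label to a sampled dialog completion."""
    return {
        "prompt": prompt,
        "chosen": completions[emotion],
        "rejected": completions[OPPOSITE[emotion]],
    }
```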
- **Developed by:** TBA
- **Model type:** Autoregressive decoder-only transformer (Mistral-7B architecture)
- **Language(s):** en
- **Finetuned from:** alignment-handbook/zephyr-7b-sft-full
## Sources
- **Repository:** <https://github.com/justtherightsize/empo>
- **(*non-anonymized*) Paper preprint:** <https://arxiv.org/abs/2406.19071>
## Usage
Generate a response in a dialogue. You must be logged in to Hugging Face and have agreed to the license of the base model.
```python
from peft import PeftModel
from transformers import BitsAndBytesConfig, AutoModelForCausalLM, AutoTokenizer, pipeline
import torch
from huggingface_hub import login
# HF login: you have to be logged in and agree to the license of the base
# model: https://huggingface.co/alignment-handbook/zephyr-7b-sft-full
hf_key = "Your key here"
login(hf_key)
# Load the tokenizer from the adapter repo
adapter_id = "justtherightsize/zephyr-7b-sft-full124"
base_model_id = "alignment-handbook/zephyr-7b-sft-full"
tokenizer = AutoTokenizer.from_pretrained(adapter_id)
# Prepare dialog and convert to chat template
sys_msg = "You are a friendly assistant, who provides empathetic responses to the user. " \
"The input contains previous turn of the dialog, where each utterance is prefaced " \
"with tags <|user|>, or <|assistant|>. Be empathetic and precise. " \
"Make sure to give responses that make dialogue flow. Avoid repeating the prompt. " \
"Please respond creatively and expressively to make the responses longer. You can offer advice."
dialog = ["Yeah about 10 years ago I had a horrifying experience. It was 100% their fault but they hit the water barrels and survived. They had no injuries but they almost ran me off the road.",
"Did you suffer any injuries?",
"No I wasn't hit. It turned out they were drunk. I felt guilty but realized it was his fault."]
dwroles = [{"role": "system", "content": sys_msg}]
for j in range(len(dialog)):
dwroles.append(
{"role": "user", "content": dialog[j]} if j % 2 == 0 else
{"role": "assistant", "content": dialog[j]})
template = tokenizer.apply_chat_template(dwroles, tokenize=False, add_generation_prompt=True)
# Load the big model first & resize embeds, load PEFT model
quantization_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16
)
model = AutoModelForCausalLM.from_pretrained(
base_model_id,
quantization_config=quantization_config,
trust_remote_code=True
)
model.resize_token_embeddings(len(tokenizer))
model.config.use_cache = False
model = PeftModel.from_pretrained(model, adapter_id)
# Instantiate generation pipeline
pipe_gen = pipeline("text-generation", model=model, tokenizer=tokenizer)
# Generate the response
out = pipe_gen(template, return_full_text=False, max_new_tokens=500)[0]['generated_text']
print(out)
```
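If you would rather serve the model without the PEFT wrapper, the adapter can be merged into the base weights. Merging does not work on the 4-bit quantized weights loaded above, so this sketch (reusing `base_model_id`, `adapter_id`, and `tokenizer` from the snippet above) reloads the base model in bf16:

```python
# Optional: merge the adapter into full-precision weights for
# adapter-free inference (assumes enough memory for a bf16 7B model).
model = AutoModelForCausalLM.from_pretrained(base_model_id, torch_dtype=torch.bfloat16)
model.resize_token_embeddings(len(tokenizer))
model = PeftModel.from_pretrained(model, adapter_id)
merged = model.merge_and_unload()  # returns a plain transformers model
```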
## Out-of-Scope Usage
Fine-tuning on EmpatheticDialogues specialized the model toward short, empathetic dialogue; performance on general-purpose instruction following may differ from the base model.
## Training
Please refer to the training instructions in the repository: <https://github.com/justtherightsize/empo?tab=readme-ov-file#training>
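For orientation only, a minimal sketch of what DPO fine-tuning looks like with `trl` (the dataset row, `beta`, and other hyperparameters here are illustrative assumptions, not the repository's settings):

```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "alignment-handbook/zephyr-7b-sft-full"
tok = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Toy preference pair; the real dataset is built from EmpatheticDialogues.
pairs = Dataset.from_list([{
    "prompt": "<|user|>\nI lost my job today.</s>\n<|assistant|>\n",
    "chosen": "I'm so sorry to hear that. Losing a job is really hard.",
    "rejected": "Well, these things happen.",
}])

args = DPOConfig(output_dir="empo-dpo-sketch", beta=0.1,  # beta is illustrative
                 per_device_train_batch_size=1, max_steps=10)
trainer = DPOTrainer(model=model, args=args, train_dataset=pairs,
                     processing_class=tok)  # use tokenizer= on older trl versions
trainer.train()
```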
## Cite
Full citation TBA; for now, please cite the **non-anonymized** [preprint](https://arxiv.org/abs/2406.19071).