# MindWell
MindWell is a chat assistant trained by fine-tuning lmsys/vicuna-7b-v1.5 on two symptom-based depression datasets: BDI-Sen and PsySym. To speed up fine-tuning and reduce its memory footprint, we used Low-Rank Adaptation (LoRA).
## How to Get Started with MindWell
Use the code below to get started with the model.
```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

# Load the base model, then apply the MindWell LoRA adapter on top of it.
config = PeftConfig.from_pretrained("irlab-udc/MindWell")
model = AutoModelForCausalLM.from_pretrained("lmsys/vicuna-7b-v1.5", device_map="auto")
model = PeftModel.from_pretrained(model, "irlab-udc/MindWell")
tokenizer = AutoTokenizer.from_pretrained("lmsys/vicuna-7b-v1.5")


def evaluate(prompt):
    inputs = tokenizer(prompt, return_tensors="pt")
    input_ids = inputs["input_ids"].cuda()
    generation_output = model.generate(
        input_ids=input_ids,
        generation_config=GenerationConfig(do_sample=True),
        return_dict_in_generate=True,
        output_scores=True,
        max_new_tokens=512,
    )
    for s in generation_output.sequences:
        output = tokenizer.decode(s)
        print("Answer:", output)


evaluate("What can you do?")
```
Answer: I can analyze the user's comments to determine if they exhibit any signs of depressive symptoms based on the provided list of symptoms. I will justify my decisions by means of excerpts from the user's comments. If I don't know the answer, I will truthfully say that I don't know.
## Training
### Configurations and Hyperparameters
The following `LoraConfig` configuration was used during training:
- r: 8
- lora_alpha: 16
- target_modules: ["q_proj", "v_proj"]
- lora_dropout: 0.05
- bias: "none"
- task_type: "CAUSAL_LM"
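
For illustration, here is a minimal sketch of how such a configuration could be instantiated with `peft`; the variable names are illustrative, not taken from the training code.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Base model to adapt (loaded as in the quickstart above).
base_model = AutoModelForCausalLM.from_pretrained("lmsys/vicuna-7b-v1.5", device_map="auto")

# LoRA configuration matching the hyperparameters listed above.
lora_config = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling factor for the LoRA updates
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

# Wrap the base model so that only the LoRA parameters are trainable.
peft_model = get_peft_model(base_model, lora_config)
peft_model.print_trainable_parameters()
```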
The following `TrainingArguments` configuration was used during training:
- per_device_train_batch_size: 64
- gradient_accumulation_steps: 32
- warmup_steps: 100
- num_train_epochs: 20
- learning_rate: 3e-4
- fp16: True
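
A minimal sketch of these arguments with 🤗 Transformers; `output_dir` is an illustrative placeholder, not taken from the model card.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="mindwell-lora",       # illustrative placeholder path
    per_device_train_batch_size=64,
    gradient_accumulation_steps=32,
    warmup_steps=100,
    num_train_epochs=20,
    learning_rate=3e-4,
    fp16=True,                        # mixed-precision training
)
```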
The following `bitsandbytes` quantization configuration was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
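
Only the 8-bit path is active here; the `bnb_4bit_*` fields above are the library defaults and are unused when `load_in_8bit` is set. A minimal sketch of this setup with `BitsAndBytesConfig`:

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 8-bit quantization config matching the values listed above.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)

quantized_model = AutoModelForCausalLM.from_pretrained(
    "lmsys/vicuna-7b-v1.5",
    quantization_config=bnb_config,
    device_map="auto",
)
```

When training adapters on a quantized base model, `peft.prepare_model_for_kbit_training` is typically applied before attaching the LoRA configuration.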
### Framework versions
- PyTorch 2.1.0
- PEFT 0.5.0
- 🤗 Transformers 4.34.0
- 🤗 Datasets 2.14.5
- 🤗 Tokenizers 0.14.0
## Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- Hardware Type: NVIDIA RTX 6000 Ada Generation
- Hours Used: 20
- Cloud Provider: Private infrastructure
- Carbon Emitted: 2.59 kg CO₂ eq.