---
library_name: transformers
tags: []
---

# Model Card for LLaMAdelic

LLaMAdelic is a model fine-tuned to generate conversations with personality-specific adaptations along the Big Five personality traits: Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. It was trained on conversational data specifically designed to strengthen its capacity for personality-driven dialogue. Detailed information about the dataset will be added soon.
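Until the official usage instructions are published, the following is a minimal sketch of how a checkpoint like this is typically loaded and prompted with 🤗 Transformers. The repository id (`choco58/LLaMAdelic`), the personality-conditioning system prompt, and the use of the standard LLaMA 3 Instruct chat template are assumptions for illustration, not confirmed details of this model:

```python
# Minimal sketch, not an official example: load the checkpoint with 🤗 Transformers
# and request a reply conditioned on a target Big Five profile.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "choco58/LLaMAdelic"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    # Hypothetical personality conditioning; the exact prompt format used in
    # training is not yet documented.
    {"role": "system", "content": "You are a highly extraverted, agreeable conversational partner."},
    {"role": "user", "content": "How was your weekend?"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```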

## Model Details

Will be updated soon

### Model Description

This is the model card for LLaMAdelic, a fine-tuned model hosted on the 🤗 Hub.

- **Developed by:** Will be updated soon
- **Funded by [optional]:** Will be updated soon
- **Shared by [optional]:** Will be updated soon
- **Model type:** Fine-tuned LLaMA 3 8B Instruct
- **Language(s) (NLP):** English
- **License:** Will be updated soon
- **Finetuned from model:** LLaMA 3 8B Instruct

### Model Sources

- **Repository:** Will be updated soon
- **Paper [optional]:** Will be updated soon
- **Demo [optional]:** Will be updated soon

## Uses

### Direct Use

Will be updated soon

### Downstream Use

Will be updated soon

### Out-of-Scope Use

Will be updated soon

## Bias, Risks, and Limitations

Will be updated soon

### Recommendations

Users (both direct and downstream) should be made aware of the model's potential biases and limitations. Further recommendations will be added soon.

## How to Get Started with the Model

Will be updated soon

## Training Details

### Training Data

Will be updated soon

### Training Procedure

#### Preprocessing

Will be updated soon

#### Training Hyperparameters

Will be updated soon

#### Speeds, Sizes, Times

Will be updated soon

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

Will be updated soon

#### Factors

Will be updated soon

#### Metrics

Will be updated soon

### Results

Results from the EleutherAI/lm-evaluation-harness OCEAN personality benchmarks show an overall improvement in personality-specific scores for LLaMAdelic over the zero-shot pre-trained LLaMA 3 8B Instruct model (average 0.8030 vs. 0.7654). Detailed evaluation results are below; a reproduction sketch follows the table.

| Model | Description | Openness | Conscientiousness | Extraversion | Agreeableness | Neuroticism | Avg |
|---|---|---|---|---|---|---|---|
| LLaMA 3 8B Instruct | Zero-shot | 0.8760 | 0.7620 | 0.7170 | 0.9500 | 0.5220 | 0.7654 |
| LLaMAdelic | Fine-tuned on conversational data | 0.9150 | 0.7840 | 0.6680 | 0.9440 | 0.7040 | 0.8030 |
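For reference, here is a rough sketch of how scores like these can be produced with the harness' Python API (lm-evaluation-harness v0.4+). The task name is a placeholder, since the exact OCEAN task configuration behind the numbers above is not published here, and the repository id is assumed:

```python
# Illustrative sketch only: evaluate a Hugging Face model with EleutherAI/lm-evaluation-harness.
# Replace the placeholder task with the actual OCEAN personality task(s) used for this card.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=choco58/LLaMAdelic,dtype=bfloat16",  # assumed repository id
    tasks=["<ocean_personality_task>"],  # placeholder, not a real task name
    batch_size=8,
)
print(results["results"])
```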

## Model Examination

Will be updated soon

## Environmental Impact

Will be updated soon

## Technical Specifications

### Model Architecture and Objective

Will be updated soon

### Compute Infrastructure

Will be updated soon

#### Hardware

Will be updated soon

#### Software

Will be updated soon

## Citation

Will be updated soon

## Glossary

Will be updated soon

## More Information

Will be updated soon

## Model Card Authors

Will be updated soon

## Model Card Contact

Will be updated soon