|
---
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
library_name: transformers
pipeline_tag: text-generation
license: llama3.2
base_model: meta-llama/Llama-3.2-1B-Instruct
datasets:
- nerdyface/project1-v1
---
|
|
|
## Model Information |
|
|
|
This model uses meta-llama/Llama-3.2-1B-Instruct as its starting point and was fine-tuned on the nerdyface/project1-v1 dataset.
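Since the card declares `library_name: transformers` and `pipeline_tag: text-generation`, the model can be loaded with the standard `transformers` text-generation pipeline. The sketch below is a minimal example; the repo ID `your-org/your-model` is a placeholder (this card does not state the final Hub ID), and the system/user prompts are illustrative only.

```python
from transformers import pipeline

# Placeholder -- replace with this model's actual Hugging Face Hub repo ID.
MODEL_ID = "your-org/your-model"


def build_messages(user_prompt: str) -> list[dict]:
    """Build a chat in the message format Llama 3.2 instruct models expect."""
    return [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": user_prompt},
    ]


if __name__ == "__main__":
    # Downloading the checkpoint requires network access (and an accepted
    # Llama 3.2 license for the gated base model).
    generator = pipeline("text-generation", model=MODEL_ID)
    out = generator(build_messages("What does DPO training do?"),
                    max_new_tokens=128)
    # The pipeline returns the full chat; the last message is the reply.
    print(out[0]["generated_text"][-1]["content"])
```

The chat-message list (rather than a raw string) lets the pipeline apply the model's chat template automatically, which matters for instruct-tuned checkpoints like this one.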
|
|
|
|
|
#### Our latest model uses a combination of SFT and DPO to achieve better results than our initial experiments!
|
|
|
#### Please let us know what you think by opening a discussion in the Community tab! |
|
|