---
license: apache-2.0
datasets:
- Locutusque/hercules-v5.0
base_model: M4-ai/Hercules-5.0-Qwen2-1.5B
language:
- en
inference:
parameters:
do_sample: true
temperature: 0.8
top_p: 0.95
top_k: 40
min_p: 0.1
max_new_tokens: 250
repetition_penalty: 1.1
pipeline_tag: text-generation
---

# Hercules-5.0-Qwen2-1.5B-GGUF
This is a quantized (GGUF) version of M4-ai/Hercules-5.0-Qwen2-1.5B, created using llama.cpp.
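The GGUF files can be run with llama.cpp or any compatible runtime. Below is a minimal usage sketch assuming a recent version of the llama-cpp-python bindings; the GGUF filename is a hypothetical placeholder, and the sampling settings mirror the recommended inference parameters in the metadata above.

```python
# Minimal sketch using llama-cpp-python. The GGUF filename is a
# placeholder -- substitute whichever quantization you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="hercules-5.0-qwen2-1.5b.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=1536,  # matches the training sequence length
)

# Sampling settings mirror the recommended inference parameters above.
output = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain multi-head attention briefly."},
    ],
    max_tokens=250,
    temperature=0.8,
    top_p=0.95,
    top_k=40,
    min_p=0.1,
    repeat_penalty=1.1,
)
print(output["choices"][0]["message"]["content"])
```

Recent GGUF exports embed the chat template, so `create_chat_completion` should apply ChatML automatically; otherwise, format prompts manually as shown under Model Description.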
## Model Description
We fine-tuned Qwen2-1.5B on a high-quality data mix for general-purpose assistants. A DPO version of this model will be released soon. We use the ChatML prompt format, shown below.
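For reference, a single-turn prompt in ChatML looks like this; the system message is illustrative, and generation continues after the final `<|im_start|>assistant` line:

```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
{your prompt here}<|im_end|>
<|im_start|>assistant
```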
## Model Details
This model has capabilities in math, coding, writing, and more. It was fine-tuned on a high-quality data mix for general-purpose assistants.
- Developed by: M4-ai
- Language(s) (NLP): English; Chinese may also work, inherited from the Qwen2 base model
- License: apache-2.0
- Finetuned from model: Qwen2-1.5B
## Uses
General-purpose assistance, question answering, chain-of-thought reasoning, etc.

In one impressive result, this language model correctly implemented multi-head attention for use in a transformer neural network; an illustrative sketch of that kind of implementation follows.
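For context, a standard multi-head attention module of the kind referred to might look like the following PyTorch sketch. This is an illustrative reference written for this card, not the model's verbatim output; the class name and dimensions are arbitrary.

```python
# Illustrative reference implementation of multi-head attention,
# not the model's verbatim output.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadAttention(nn.Module):
    def __init__(self, d_model: int, num_heads: int):
        super().__init__()
        assert d_model % num_heads == 0, "d_model must be divisible by num_heads"
        self.num_heads = num_heads
        self.head_dim = d_model // num_heads
        self.qkv_proj = nn.Linear(d_model, 3 * d_model)
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        batch, seq_len, d_model = x.shape
        # Project to queries, keys, values, then split into heads.
        q, k, v = self.qkv_proj(x).chunk(3, dim=-1)
        q = q.view(batch, seq_len, self.num_heads, self.head_dim).transpose(1, 2)
        k = k.view(batch, seq_len, self.num_heads, self.head_dim).transpose(1, 2)
        v = v.view(batch, seq_len, self.num_heads, self.head_dim).transpose(1, 2)
        # Scaled dot-product attention per head.
        scores = q @ k.transpose(-2, -1) / math.sqrt(self.head_dim)
        attn = F.softmax(scores, dim=-1)
        out = attn @ v
        # Re-merge heads and project back to d_model.
        out = out.transpose(1, 2).reshape(batch, seq_len, d_model)
        return self.out_proj(out)

# Quick shape check: input and output shapes match.
mha = MultiHeadAttention(d_model=64, num_heads=8)
print(mha(torch.randn(2, 4, 64)).shape)  # torch.Size([2, 4, 64])
```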
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## Training Details

### Training Data
- Locutusque/hercules-v5.0
## Evaluations

Coming soon.
## Training Hyperparameters
- Training regime: bf16 non-mixed precision
## Technical Specifications

### Hardware
We used 8 Kaggle TPUs and trained with a global batch size of 256 and a sequence length of 1536.